Home SAN/NAS for virtualization

Status
Not open for further replies.

phpcoder

Cadet
Joined
Jul 23, 2018
Messages
5
Hi guys,

I need a NAS solution for home.
First off, a little background. I have two Windows Hyper-V servers, each running around 8 VMs on local RAID 10/RAID 5 storage.
I would like to put a 10G network card in each server and connect them to a FreeNAS system that is able to run the 16 VMs, and at the same time deliver Samba/CIFS storage for videos.

Originally I looked into a Synology DS1817+ with 16GB RAM - that should deliver around 600 MB/s write and 1500 MB/s read.
The cost of the Synology box with 10G is around $1800 (I live in Denmark, so the hardware is probably a bit more expensive here).

Because of that I am considering FreeNAS instead, but I am having a hard time figuring out what hardware to use. I have read the guide, but most of the recommended motherboards are either EOL or simply too old to get hold of.

Do you guys have any recommendation for a motherboard and CPU, or a motherboard with a built-in CPU?
Performance is important; I need something that will perform better than the Synology (or else I might just as well purchase that).
Power is also an "issue". In Denmark electricity is pretty expensive, so I'd like to keep the power draw as low as possible.

I think I will buy around 8 WD Red 8TB disks (and add some SSDs).

And lastly... thank you very much for reading my post. Much appreciated :)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I live in Denmark, so the hardware is probably a bit more expensive here
You are still going to need to spend quite a bit to do this with FreeNAS, because only the operating system is free; you still need good hardware to get this work done.
but most of the recommended motherboards are either EOL or simply too old to get hold of.
You don't need new hardware to run FreeNAS. As long as it isn't too old, older is actually good, because it means the driver support is more likely to be there.
I think I will buy around 8 WD Red 8TB disks (and add some SSDs).
I am guessing that you will use iSCSI for the sharing to the Hyper-V servers? You will need to decide how much storage you need for that and configure it for speed, possibly all SSD. Then configure a separate pool of storage for the CIFS/SMB share, since that does not need to sustain the high random IO that virtualization requires for the VMs to be responsive.
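Purely as an illustration of that two-pool idea, the commands below show roughly what it looks like at the shell (the pool names vmpool/bulkpool and the da0..da7 device names are made up; in FreeNAS you would normally build pools through the GUI, which runs the equivalent commands for you):

Code:
# Fast pool for the VM datastore: striped mirrors (RAID 10 style), best random IO.
zpool create vmpool mirror da0 da1 mirror da2 da3

# Bulk pool for SMB/CIFS: RAIDZ2, best capacity per drive with any-2-drive redundancy.
zpool create bulkpool raidz2 da4 da5 da6 da7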

Due to the difficulty getting components in your region, I may not be able to make suggestions that you would find useful.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I would think some of the Supermicro Xeon-D boards would be ideal. A bit more expensive, but low power and fast. You can't have cheap, fast, and low power all at once!
 

phpcoder

Cadet
Joined
Jul 23, 2018
Messages
5
Thank you both for your help.

The only Supermicro motherboard that I seem to be able to get in Denmark (for a reasonable price) that fits my needs is the X10SDV-4C-7TP4F.
Can anyone here confirm that both the SAS and SATA ports work (I will be using 8 SATA ports)? And has anyone confirmed that the 10G ports work?
RAM (32GB): Kingston KVR24R17S4K4/32
Chassis: Fractal Design Node 804
PSU: Corsair RM750i
Disks (8): Western Digital Red 8TB
I have a spare Kingston DCP1000 at 800GB lying around - any suggestion on what I should use it for? Beware that the device presents itself as 4x 200GB drives.

What do you think of this setup? Do you think the performance will be better than with the Synology? (Specifications in the first post.)
Are there any parts you would swap for something better? (Any recommendation on another PSU, for example?)

Again...thank you guys.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I would like to put a 10G network card in each server and connect them to a FreeNAS system that is able to run the 16 VMs, and at the same time deliver Samba/CIFS storage for videos.
I suppose that you will have enough storage for your SMB / CIFS but what are you planning to use for the 16 VMs?
I have a spare Kingston DCP1000 at 800GB lying around
How are you planning to use the 8 x 8TB drives? If that is all the drives you plan to have, you may be disappointed with the performance when using them for a datastore for VM hosting.
The best configuration for IOPS (which are crucial for VMs) is to configure the 8 drives as mirror pairs in a pool, but that only gives you 4 vdevs, and each vdev gives about the performance of a single drive. The spec sheet says the internal transfer rate is 210 MB/s (which is probably the best case, not sustained or average), so your 4 vdevs would cumulatively give you around 840 MB/s sequential, but the random IO would still be limited by the relatively small number of vdevs. If you set up that Kingston card (4 drives) with 2 drives dedicated to SLOG (a separate log device for the ZFS intent log) and the other two as L2ARC, it might help your pool performance, but I would say it is a bit of guesswork as to whether or not it will work well.
https://www.wdc.com/content/dam/wdc/website/downloadable_assets/eng/spec_data_sheet/2879-800002.pdf
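To sketch what that layout might look like (the pool name tank and the da*/nvd* device names are assumptions; the DCP1000 typically shows up as four separate NVMe devices):

Code:
# 8 x 8TB as four mirror vdevs - roughly 4x the random IO of a single drive:
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7

# Two of the 200GB devices as a mirrored SLOG (holds in-flight sync writes):
zpool add tank log mirror nvd0 nvd1

# The other two as L2ARC (read cache; losing it only drops cached data):
zpool add tank cache nvd2 nvd3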
What do you think of this setup? Do you think the performance will be better than with the Synology?
Synology DS1817+ with 16GB RAM - that should deliver around 600 MB/s write and 1500 MB/s read.
For sequential transfer of large files, I would say you will be right in the ballpark of the Synology. The real advantage of FreeNAS is the flexibility to configure it the way you need it to be, and the fact that it uses ZFS to maintain the long-term integrity of your data. Personally, I don't like the way Synology handles data. I don't trust their implementation of software RAID. I wouldn't take one if you tried to give it to me, unless it was with the expectation that I could load some other operating system on it and use it the way I wanted. I have a couple of QNAP rack-mounted systems (48 drives each) at work, where I replaced the proprietary version of Linux that QNAP supplied with FreeNAS.

Anyhow, if this is the hardware you want, it should work about as well as (maybe better than) the Synology would, but you can do better if you invest in better hardware. If this is a home lab for your entertainment, it might not be worth the cost.

I still don't see an answer to the question of what method of storage sharing you will use to connect the VMs to the FreeNAS. That does make some difference.

PS. If you look at my NAS, in my signature, you will see that I have a storage pool that is RAIDz2 for storing files like movies, music, etc. That gives me the most 'bang for the buck' for bulk storage while still giving me enough redundancy (2 drives) to be comfortable.
Then I have a separate pool for iSCSI that is made of 16 drives in mirrored pairs, for 8 vdevs, with SLOG and L2ARC attached to that pool. The two pools are configured differently because they serve different purposes. You have limited your installation by limiting the total number of drives.
You would get much better performance from your VMs by using SSDs, but you never answered the question about how much storage you would need.
You came and asked for suggestions, but never answered the questions that need to be asked and answered for us to know what to suggest.
 
Last edited:

phpcoder

Cadet
Joined
Jul 23, 2018
Messages
5
I suppose that you will have enough storage for your SMB / CIFS but what are you planning to use for the 16 VMs?
You have limited your installation by limiting the total number of drives.
You would get much better performance from your VMs by using SSDs, but you never answered the question about how much storage you would need.
You came and asked for suggestions, but never answered the questions that need to be asked and answered for us to know what to suggest.

Sorry, you are absolutely right.

For my VMs I was planning the same storage. I was hoping to create a RAIDz2 across all the disks and then, as you suggest, use the DCP1000 for SLOG and L2ARC.
Currently my VMs are running off of local disks. Each server has 4x 2TB hard drives running hardware RAID 5 on LSI controllers. As they are actually running well right now, I was hoping that even with RAIDz2 the performance would only get better.

Right now I'm thinking iSCSI for sharing data, but I might switch to ESXi, and in that case it seems I might get better performance using NFS. I read somewhere that iSCSI performance on ZFS begins to degrade once more than 50% of the space is used (can't find the article right now; maybe I read it wrong).
Another option would be looking into SMB3 - from a very light Google search it seems that SMB3 is supported on FreeNAS, though I haven't seen any performance numbers.

Ideally I would stay on Hyper-V and use SMB3/iSCSI for the connection... but if performance is bad, then I'll probably move to ESXi and NFS.

Obviously you are an expert at this, so could you give me some advice? The new X11 series of the Xeon D motherboards costs me exactly the same as the X10, so I'm considering going for the Supermicro X11SDV-4C-TP8F board instead. Do you think it will be compatible with FreeNAS? I'm thinking the SAS controller or the 10G network might be the parts with driver trouble, but I'm pretty rusty with FreeBSD - the version I used the most was FreeBSD 4.4 :)

Your setup looks amazing. I could consider something like it, but I'm afraid of the power bill. Do you know how much power it draws? (I'm curious.)

Thank you for your help.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Another forum member, @Stux , did a build a little while back that I thought you might find interesting, especially considering your desire for small size and low power impact.

Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]
https://forums.freenas.org/index.ph...node-304-x10sdv-tln4f-esxi-freenas-aio.57116/

Currently my VMs are running off of local disks. Each server has 4x 2TB hard drives running hardware RAID 5 on LSI controllers. As they are actually running well right now, I was hoping that even with RAIDz2 the performance would only get better.
The LSI hardware RAID cards have cache memory that may give them a bit more performance than you will see with RAIDz2, because the random IO of a RAIDz2 pool with your drive configuration is limited: it is just one vdev. With the SLOG and L2ARC you may still see acceptable performance. I would suggest doing a good bit of testing before you start moving data onto the system, so you still have the option of reconfiguring if you see the need.
Here is a link to testing done by one of our other contributors, showing the difference with and without a SLOG:

Testing the benefits of SLOG
https://forums.freenas.org/index.php?threads/testing-the-benefits-of-slog-using-a-ram-disk.56561
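If you want a crude first look at what sync writes cost on your own pool before you commit data to it, something along these lines works (the dataset name tank/test is illustrative, and this quick dd test is no substitute for the RAM-disk method in that thread):

Code:
zfs create tank/test
zfs set compression=off tank/test  # so the /dev/zero data isn't compressed away
zfs set sync=always tank/test      # force every write through the ZIL/SLOG (worst case)
dd if=/dev/zero of=/mnt/tank/test/sync.dat bs=128k count=16384    # ~2 GiB
zfs set sync=disabled tank/test    # upper bound: skip the ZIL entirely (unsafe for real use)
dd if=/dev/zero of=/mnt/tank/test/nosync.dat bs=128k count=16384
zfs inherit sync tank/test         # restore the default when done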

I read somewhere that iSCSI performance on ZFS begins to degrade once more than 50% of the space is used.
This is because ZFS is a copy-on-write file system: nothing is overwritten in place, so there always needs to be enough free space to write a new copy of whatever the VM has changed before the old blocks can be freed. As the pool fills up, those writes get increasingly scattered and performance drops.
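You can keep an eye on both occupancy and fragmentation directly. For example (the pool name tank and the zvol settings below are placeholders, not recommendations):

Code:
zpool list -o name,size,allocated,capacity,fragmentation tank

# A sparse (-s) zvol for an iSCSI extent only consumes space as blocks are
# written, which makes it easier to keep the pool comfortably under ~50% full:
zfs create -s -o volblocksize=16k -V 2T tank/vm-extent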
Do you think it will be compatible with FreeNAS? I'm thinking the SAS controller or the 10G network might be the parts with driver trouble
The last person that I recall posting on the forum about that board, unless I am remembering wrong, was having problems getting the drivers working for the 10G LAN interfaces, but that may be resolved by now. I think we have moved to a newer version of BSD (10 to 11) since then. The Supermicro site says it is compatible with FreeBSD 11, with the exception that you can't use the SATA ports in fake-RAID mode.
https://www.supermicro.com/support/resources/OS/skylake-d.cfm
If you don't mind taking a risk, and you have the funds, I would say to give it a shot. It would give you a bit better power efficiency and maybe even a little more performance.
Your setup looks amazing. I could consider something like it, but I'm afraid of the power bill. Do you know how much power it draws?
I have three UPS units; two of them are telling me the load is around 500 watts and the third around 120 watts, but that is all the servers, switches, KVM, desktop, and laptop - the whole home office.

You might benefit from reading through this guide; it has a section on initial burn-in and testing, for example:

Uncle Fester's Basic FreeNAS Configuration Guide
https://www.familybrown.org/dokuwiki/doku.php?id=fester:intro
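For a taste of what the burn-in section covers, the usual drill per disk looks roughly like this (the device name is hypothetical, badblocks may need to be installed separately on FreeBSD, and in write mode it destroys all data and can take days on an 8TB drive):

Code:
smartctl -t long /dev/da0        # SMART extended self-test
badblocks -b 4096 -ws /dev/da0   # destructive 4-pattern write/read surface scan
smartctl -a /dev/da0             # afterwards, check reallocated/pending sector counts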

I think I addressed the questions here, but just post if you have more; someone is bound to chime in.
 