New FreeNAS Build

Status
Not open for further replies.

aluris

Cadet
Joined
Dec 13, 2014
Messages
5
Hi Folks,

I've read through the stickies and some other threads, and would like to ask for some advice.

Some background first:
- There is no SuperMicro sold in my country. I could get ASRock though
- FreeNAS jails used will be OwnCloud and XBMC. Maybe 1-2 more as well
- I'd like to use FreeNAS to host some virtual machines in the future. Performance isn't overly important, and neither is the data, as they are just test machines. I just want around 100 IOPS.
- A large part of my data is unimportant (ripped movies/music/etc). The rest (<500 GB) will be backed up to external hard disk (finance, important documents, personal photos, etc)
- Space is very limited. I can't afford a rack solution in my house
- FreeNAS will serve 2 media boxes, 2 laptops and 1 PC

My intended build
SilverStone DS380 Mini-ITX Case (8 x 3.5" hot-swap bays and 4 x 2.5" drives)
ASRock E3C226D2I Intel C226 Socket 1150 Mini-ITX Server Motherboard
Intel i3-4150T 3GHz Dual-Core
Kingston KVR16LE11/8 8GB DDR3 1600 Unbuffered ECC CL11 1.35V w/TS x 2
Silverstone ST45SF-G 450W Modular 80+ Gold
HGST Deskstar NAS H3IKNAS40003272SN 4TB SATA3 64MB 7200rpm x 6

For the NICs, should I:
- use LACP with 2 VLANs (1 for communication with ESXi, 1 for NAS sharing), or
- tie each NIC to 1 VLAN with 1 IP?

I will build a mirrored zpool (better performance), giving me 12 TB. Some questions:
- Can I split up this zpool (1 CIFS share on 1 VLAN, 1 NFS or iSCSI share on another VLAN)? The reason being that my media storage doesn't need a lot of IOPS, while the VM storage is smaller but needs higher IOPS.
- iSCSI or NFS? I understand a ZIL (Intel Solid-State Drive DC S3500 Series SSDSC2BB120G401 120GB x 2) is required. As the motherboard only supports 6 SATA ports, can I use an MSI Star-SATA6 2-Port SATA 6Gbps PCI-e x1 controller card for the SSDs? Do I need an actual RAID controller, or does ZFS handle the mirroring of the ZIL? My usage will be maybe 1-2 VMs on most of the time, with the remaining 10 or so booted up when I need to do some testing. (There's a rough sketch of the layout I have in mind below.)
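
Something along these lines is what I'm picturing for the pool layout (disk and pool names are just placeholders, and I know the FreeNAS GUI would normally do this rather than the command line):

# 6 disks as 3 x 2-way mirrors, then a dataset for CIFS and a zvol for iSCSI
zpool create tank mirror ada0 ada1 mirror ada2 ada3 mirror ada4 ada5
zfs create tank/media                 # CIFS share for media, low IOPS needs
zfs create -s -V 500G tank/vmstore    # sparse zvol to export over iSCSI for the VMs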

Thanks.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm gonna give you a few tips...

100 IOPS is nothing for a VM. So saying that you need 100 IOPS, then later saying "10 or so booted up when I need to do some testing", is kind of contradictory. You're going to need much more than 100 IOPS. Ever seen Windows take 10 minutes to boot up? That's what you get when you say things like "performance isn't overly important". You will also have writes that time out, resulting in corrupted files and file systems in your VMs. You can argue with me as much as you want that performance isn't important, but there is a minimum standard that must be met.

If you read through my noobie guide you'd know that as soon as you start talking VMs you are talking a MUCH MUCH more expensive server than what you are probably wanting. We're talking 64GB of RAM, L2ARC, and a ZIL if you want to go NFS.

You are welcome to argue that your hardware will work, and there's a very slim chance it might. Just don't be shocked if you get this built and it can't handle your VMs. I run a system with 32GB of RAM and 10 disks in a single RAIDZ2, and I couldn't even run 1 VM without problems. I stopped trying because there were so many problems. I haven't bothered adding an L2ARC because I knew it would hurt performance, as you sacrifice ARC by using an L2ARC.

You can't split zpools. So you need to go with lots of vdevs from the get-go if you want higher IOPS.
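
Ballpark math, assuming roughly 75-100 random IOPS per 7200rpm disk:

1 x 6-disk RAIDZ2 vdev -> roughly the random IOPS of a single disk (~75-100)
3 x 2-way mirror vdevs -> roughly 3x that on writes, and more on reads since both sides of each mirror can serve them

Either way, a handful of VMs doing random I/O will chew through that quickly.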
 

aluris

Cadet
Joined
Dec 13, 2014
Messages
5
Thanks.

So what you are saying is that even with 32GB RAM and a ZIL, I can't run even a 2 GB CentOS VM without severe issues?

If that is the case, then do you have any recommendations for other NAS distros that this would work with?

I don't see the point of upgrading the hardware that much just to be able to host some VMs. Yes, that would be the case for production or if I were running a large number of VMs.

If ZFS requires that much, it may be a robust file system, but I guess the hardware requirements are not really aimed at the home user.
 

DKarnov

Dabbler
Joined
Nov 25, 2014
Messages
44
Most 'home users' aren't running VMs.

At the casual level of VM usage you're looking at, you're really better off building a separate box for VMs and leaving the FreeNAS machine for FreeNAS. The other box doesn't need anything but a mini-ITX board (you can use that same ASRock C226, or even an Avoton board), memory, and an SSD for VM storage, so it can be tiny and fairly inexpensive. Mine fits in one of these: http://www.supermicro.com/products/chassis/Mini-ITX/101/SC101i.cfm . I know you said you can't get SuperMicro, but I'm sure you can find something similar. It ends up being cheaper and smaller to do two separate boxes than to try to get a FreeNAS box beefy enough for virtualization (or an ESXi box that can safely virtualize FreeNAS).

Also a note: the i3 "T" (low-power) processors are not recommended. They're for thermally limited applications and will give you worse performance (and no real power savings) than a regular i3-4150 CPU.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Also a note: the i3 "T" (low-power) processors are not recommended. They're for thermally limited applications and will give you worse performance (and no real power savings) than a regular i3-4150 CPU.

This. They do not save any power at idle! All their "savings" come from artificially throttling themselves once the 43W (or whatever it is) power envelope has been reached.
 

aluris

Cadet
Joined
Dec 13, 2014
Messages
5
Hmm. That is a good idea actually. Maybe not an SSD (limited storage), but I can make do with some local storage there.

So if I don't need the virtualisation, I guess RAIDZ2 will be fine for bulk storage (or is mirror still recommended?).

Are the specs above good for CIFS sharing with a couple of jails (changing the i3-4150T to an i3-4150)?

What if I use a G3240 instead of the i3? I actually want to cut down on power consumption as much as possible as well.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Hmm. That is a good idea actually. Maybe not an SSD (limited storage), but I can make do with some local storage there.

So if I don't need the virtualisation, I guess RAIDZ2 will be fine for bulk storage (or is mirror still recommended?).

Are the specs above good for CIFS sharing with a couple of jails (changing the i3-4150T to an i3-4150)?

What if I use a G3240 instead of the i3? I actually want to cut down on power consumption as much as possible as well.

The G3240 will maybe use a tiny bit less power at idle, but don't expect it to be very different. It's all the same silicon.
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
OP,

I can confirm what Cyberjock said about having issues with VMs if your performance is too low.

Example: I have two ESX servers doing VM clustering. I had *ONE* VM on one of them (Server 2012 running Exchange 2013/SharePoint 2013). Just that one. At the time, it was connected via a single 1Gb interface to FreeNAS. Windows was starting to show disk errors because read/write operations were timing out. The latency between the ESX server and FreeNAS was too high; the minimum performance was not being met. In that case, iSCSI/network was the bottleneck. Since then, I added the other three interfaces (total of 4) and set up MPIO (separate VLANs/subnets for each interface). Latency is much lower and the problem is gone. I was testing for issues like that specifically and found them, just as these guys say/expect. Sometimes it's fun to intentionally do terrible things if you know it's going to be terrible, then fix it, and watch the system do what it's supposed to, but I'm weird like that.
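
If you go the multiple-interface route, the ESXi side looks something like this (adapter, vmkernel, and device names are placeholders from memory, so double-check them on your box):

# bind one vmkernel port per iSCSI VLAN/subnet to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
# then set the LUN to round-robin so I/O actually spreads across the paths
esxcli storage nmp device set --device=naa.xxxxxxxx --psp=VMW_PSP_RR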

They are not kidding about needing a load of RAM for virtualization-type workloads. My system works well enough for my needs with 32GB, but it really should have more. I only run 5-8 VMs (all Windows servers), and my most taxing applications are Exchange 2013 and SharePoint 2013. They service two people/users, so the workload is basically non-existent or extremely low. My biggest performance enemy is the random nature of VM I/O and my limited ARC (because of only 32GB of memory). This causes my read latency to spike up to 10-15ms sometimes, sometimes more, though my average is low because my workload is so low. This makes your VMs load/boot slowly, takes a long time to find data, and makes backups and file copies slower. Since you said test machines, maybe some latency will be OK for you too, but your disks concern me.
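
If you want to see how starved your ARC is on a running box, something like this from the FreeNAS shell shows it (sysctl names from memory, double-check them; zfs-stats gives a nicer summary if it's on your build):

sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses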

I am using 15 disks, and mine are 10K, not 7200rpm. My performance with all those disks is 'good enough' for VMs, not awesome. I should have done mirrors and it would be a lot better, but I would not have had enough capacity, so I compromised. Also, that disk pool is ONLY used for VMs; all my other stuff goes on other disks. Your intended setup only lists 6 disks, and I assume you are thinking 1 vdev. That will limit your read rate more than having multiple vdevs, I think (someone back me up on that?). For VMs, I would want to see mirrors, or at least a couple of vdevs, to help performance. Again, you are planning on test VMs only, so maybe it's good enough that they don't crash and burn.

You said you wanted to split pools; in that case you need to separate the disks. If you're planning on running other data on those disks, that obviously takes away from your VMs' performance. Again, for testing, this might be OK, or it might not. If you don't plan on running those VMs full time, even better.

Like Cyberjock said, this might not work well enough to run stuff at all, or it might be just good enough to get by with some of your intended testing, but I would bet it will be pretty terrible. "Terrible", and all this talk about performance, is extremely relative and depends on what you want to do. So if your expectations are very low, you might not be disappointed.

Don't do the crappy processors Eric is warning you about. L2ARC is out unless you get serious memory like Cyber said, so be ready for high read latency because your ARC will be small. If it were me, I wouldn't even think about NFS without a proper SLOG (you can read awesome posts about that; there's one from cyber and jgreco). At least iSCSI can do sync=standard, so you get a lot fewer SYNC writes (you need to read all about that if you aren't tracking it already, so you know if you're willing to risk corruption). If all your data is just test data and you aren't concerned about a slight chance of corruption, it's no big deal. I'd lean towards iSCSI over NFS personally (maybe someone wants to argue that in this case?).

You will be VERY limited using only one interface for iSCSI. Total sequential throughput will be around 100MB/s, if your pool can even do that. Latency is more of a concern with VMs than raw sequential throughput. The more interfaces you have available, the faster the systems can get things between them. My latency went down considerably when I added additional iSCSI interfaces.
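
For reference, the sync setting and a mirrored SLOG are just ZFS-level settings; no RAID card is involved (pool/dataset/device names below are placeholders):

zfs set sync=standard tank/vmstore    # default; only writes the client asks to sync are synchronous
zfs set sync=always tank/vmstore      # safest for VM data, but crawls without a good SLOG
zpool add tank log mirror ada6 ada7   # mirrored SLOG; ZFS mirrors the log devices itself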

My system and suggestions aren't perfect, but hope this helps.
 

aluris

Cadet
Joined
Dec 13, 2014
Messages
5
Thanks.

Yes, it seems to me that ZFS has horribly high hardware requirements. I have run VMs on network storage that uses much less hardware. I am worried about the IO on the disks (the reason for mirrors: the write penalty is halved!).

My take is that either I buy some local storage for my ESXi (and make sure to get a supported RAID card; VMware is pesky about this), or I will have to change to a different NAS distribution that requires much less. It doesn't make sense to increase my hardware spend by that much just for my use case.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Don't expect alternative NAS solutions to be cheaper for the performance you want.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Thanks.

Yes, it seems to me that ZFS has horribly high hardware requirements. I have run VMs on network storage that uses much less hardware. I am worried about the IO on the disks (the reason for mirrors: the write penalty is halved!).

That is definitely the big-boy answer. ZFS protects your data at all costs, literally. Those costs are heavy: you need lots of RAM, sometimes an L2ARC, and sometimes a ZIL. If you aren't worried about losing some bits here and there, you don't need to use ZFS. ZFS is for the people who cannot handle losing bits here and there. ;)
 

aluris

Cadet
Joined
Dec 13, 2014
Messages
5
Yup. Should I use UFS with FreeNAS then? I certainly don't need the security of ZFS :)
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
The point of FreeNAS is to put a nice GUI and a low ($0) price on a ZFS-based NAS device, and in the current version, ZFS is the only supported filesystem. If you don't want to use ZFS, you'd probably be better off with a different OS. Some options would include NAS4Free, unRAID, or XPEnology.
 