Green light?

Status
Not open for further replies.

John Murry

Cadet
Joined
Oct 12, 2015
Messages
4
Heya all,

I have been looking into building a pure NAS. It will need to serve my family of four for movies, photos, and files. I also want it to serve my Hyper-V and future ESXi environments. Is this achievable?

What the VMs are for:
SharePoint development. I run SharePoint 2010, 2013, and now 2016. So far this takes 10 VMs, which are lightly used for development, testing, and demos. I like to leave them on 24/7 as if they were "production" servers. Still, they are not 100% critical, as I have other machines to fall back on if mine crap out; I just like to keep them up. I also run a pair of SQL Servers for other web applications, plus a web server.

What I have now, waiting to be used:
Supermicro A1SRi-2558F
32GB ECC RAM

I know this is an Atom CPU, so if it won't cut it, I can go for something else. If this setup will work, I read that the LSI SAS 9201-16i HBA is a good card with FreeNAS; since the motherboard has only one expansion slot, I'd like a card that can hold as many drives as possible. My only remaining question is about the hard drives.

For the VMs, I see I have to mirror them. I'd like to use SSDs for this since they are faster than regular spinning drives. Or am I off here?

For the file sharing, I guess WD Reds are okay? Is getting larger drives, say 4 x 4TB, better than 8 x 2TB?

What do you all think?
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
8 x 2TB drives in mirrors will be faster than 4 x 4TB. The more vdevs you have in your pool, the better your IOPS will be.
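A rough back-of-the-envelope sketch of why more vdevs help. The ~100 IOPS per mirror vdev figure is an assumption for a single 7200 RPM spindle, not a benchmark; the point is only how usable space and random IOPS scale with layout:

```python
PER_VDEV_IOPS = 100  # assumed ballpark for one spinning-disk mirror vdev, not measured

def mirror_pool(drive_count, drive_tb):
    vdevs = drive_count // 2
    usable_tb = vdevs * drive_tb   # each mirror pair stores one drive's worth
    iops = vdevs * PER_VDEV_IOPS   # random IOPS scale with the number of vdevs
    return vdevs, usable_tb, iops

for drives, size in [(8, 2), (4, 4)]:
    vdevs, usable, iops = mirror_pool(drives, size)
    print(f"{drives} x {size}TB in mirrors: {vdevs} vdevs, {usable}TB usable, ~{iops} random IOPS")

# 8 x 2TB in mirrors: 4 vdevs, 8TB usable, ~400 random IOPS
# 4 x 4TB in mirrors: 2 vdevs, 8TB usable, ~200 random IOPS
```

Same usable space either way; the smaller drives just buy you twice the vdevs.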

The 16i card is a bit overkill and will cost a pretty penny. Get an 8i, and if you want more drives later, add a SAS expander.
 

John Murry

Cadet
Joined
Oct 12, 2015
Messages
4
I was going with the 4 x 4TB drives for the file shares, not the VMs. I am really a noob with the pooling stuff, so I will start watching the videos and reading the manual tonight. I think spinning up a temporary FreeNAS VM to practice on might help too while I am waiting to get all the parts together.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
A couple points:

I really don't think that Atom CPU is going to cut it here. Perhaps you could get by on the C2750 (with its 8 cores), but the C2550 on that board is awfully weak. When people start throwing VM-level requirements around, I generally default to Xeons if the budget allows. If you use compression or deduplication, you'll definitely want a high-end Xeon.

The biggest issue for VMs is generally not throughput but random I/O, so the advantage of SSDs is not raw speed but latency. However, you might be able to get away with an SSD SLOG on top of a large HDD array instead of separate HDD and SSD arrays. If it were me, I'd spend the money on a high-end SSD for the SLOG and put the rest into HDDs, doing 8 x 4TB or 10 x 3TB in mirrored pairs (more drives beat larger drives in this case). I'd also look at 7200 RPM drives instead of 5400 RPM (or 5900 RPM) drives.
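For reference, a quick arithmetic sketch of how those two mirrored layouts compare on paper (capacity and vdev count only, not a performance claim):

```python
# Usable capacity and vdev count for the two mirrored-pair layouts above.
# Mirrored pairs are 50% space-efficient but add one vdev per pair.
layouts = [("8 x 4TB", 8, 4), ("10 x 3TB", 10, 3)]

for name, drives, tb in layouts:
    vdevs = drives // 2
    usable = vdevs * tb
    print(f"{name} in mirrors: {vdevs} vdevs, {usable}TB usable before ZFS overhead")

# 8 x 4TB in mirrors: 4 vdevs, 16TB usable before ZFS overhead
# 10 x 3TB in mirrors: 5 vdevs, 15TB usable before ZFS overhead
```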

If you decide you want to separate the arrays, then for the HDD array I'd use parity (RAIDZ) over mirroring. 6 x 4TB in RAIDZ2 is fairly common around here, and it gives you better storage utilization than mirrors while still being able to saturate a gigabit link.
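A small sketch of the storage-efficiency point (illustrative arithmetic only; real usable space comes in a bit lower after ZFS metadata and the usual fill guidelines):

```python
# 6 x 4TB in RAIDZ2 vs the same six drives in mirrors.
drives, size_tb, parity = 6, 4, 2

raidz2_usable = (drives - parity) * size_tb   # four drives' worth of data
mirror_usable = (drives // 2) * size_tb       # three mirror vdevs

raw = drives * size_tb
print(f"RAIDZ2:  {raidz2_usable}TB usable ({raidz2_usable / raw:.0%} of raw)")
print(f"Mirrors: {mirror_usable}TB usable ({mirror_usable / raw:.0%} of raw)")

# A gigabit link tops out around 125 MB/s, which a RAIDZ2 vdev can sustain
# for sequential file sharing.
print(f"Gigabit ceiling: ~{1000 / 8:.0f} MB/s")

# RAIDZ2:  16TB usable (67% of raw)
# Mirrors: 12TB usable (50% of raw)
# Gigabit ceiling: ~125 MB/s
```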
 

John Murry

Cadet
Joined
Oct 12, 2015
Messages
4
A high-end Xeon, something like the Xeon E3-1231? I am not sure yet whether I'll use compression or deduplication. Just in case, I wouldn't mind upgrading the storage system before I move on to the actual ESXi build. I would think the VMs running in ESXi will only run as well as the storage system allows.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
That E3 is more in line with what I'd recommend as a default. If you are doing high levels of compression, or any deduplication, you'll benefit from stepping up to the E5 Xeons (which require fancier hardware, like registered/buffered memory).
 

John Murry

Cadet
Joined
Oct 12, 2015
Messages
4
I was researching more online last night and think this will be a good setup to start with:

Supermicro X10SRH-CLN4F
Intel E5-2620v3
and start with 64GB of registered ECC RAM
 
Joined
Apr 9, 2015
Messages
1,258
I would probably go for this board rather than the one you are looking at: http://www.supermicro.com/products/motherboard/xeon/c600/x10srh-cf.cfm
You will end up being able to connect a lot more drives this way, since an onboard SAS controller is included (18 ports vs 10). There's no need to run quad LAN: by the time you have all the supporting hardware, you'll have spent enough money that you could throw in some 10Gb equipment instead and still be faster. If you need more drives later, all you need is a SAS expander (or a case with one built in).

The CPU should be good to start with, but with as many VMs as you want to run on top of everything, it would be better to spring for more if you can. Single fast cores serve files better with Samba, but the more VMs you have going, the more the core count starts to become the bottleneck, from my understanding.

The more RAM the better. FreeNAS will find a way to use all the RAM you can throw at it, and then some, and I'm sure the VMs will need a chunk of it as well. Just make sure you get memory that is either on the compatibility list for the board, something someone else has tested to work, or something you know you can RMA for a different kit.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
A high-end Xeon, something like the Xeon E3-1231? I am not sure yet whether I'll use compression or deduplication. Just in case, I wouldn't mind upgrading the storage system before I move on to the actual ESXi build. I would think the VMs running in ESXi will only run as well as the storage system allows.

VM's (or other random block data) running on NAS storage are among the stressiest of things you can do to ZFS. You'll find that it is much more pleasant if you throw lots of resources at it, but you may also find that "lots" translates to horrifyingly large.

ZFS is a copy-on-write filesystem, which means that as individual sectors of a VM virtual disk are updated, they're written out to some new, noncontiguous location in the datastore ("fragmentation"). This means access to the data gets progressively slower as fragmentation increases. ZFS copes with this by caching heavily in ARC (and L2ARC); frequently accessed data, even if fragmented on the pool, actually reads faster because it is served from cache. Infrequently accessed data tends to suffer a little.
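A toy model of the copy-on-write behavior described above (purely conceptual, nothing like the real ZFS allocator): every overwrite of a "sector" lands at a fresh offset, so a VM disk that is rewritten in place on the guest side ends up scattered across the datastore.

```python
# Toy copy-on-write "datastore": logical block -> physical slot.
# Each overwrite allocates the next free physical slot instead of
# rewriting in place, which is roughly why VM zvols fragment over time.
class ToyCowStore:
    def __init__(self):
        self.block_map = {}   # logical block number -> physical slot
        self.next_slot = 0    # next free physical location

    def write(self, logical_block):
        self.block_map[logical_block] = self.next_slot
        self.next_slot += 1

store = ToyCowStore()

# Guest writes blocks 0..7 sequentially: physical layout is contiguous.
for blk in range(8):
    store.write(blk)

# Guest then rewrites blocks 2 and 3 repeatedly (hot database pages, say):
for _ in range(3):
    store.write(2)
    store.write(3)

print(store.block_map)
# {0: 0, 1: 1, 2: 12, 3: 13, 4: 4, 5: 5, 6: 6, 7: 7}
# Blocks 2 and 3 now live far from their neighbours -> fragmented reads.
```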

So we do two horrifying things to cope with this. One, we throw lots of RAM, ARC, and L2ARC at the problem. Two, we throw GOBS of free space at the pool, which makes it easier for ZFS to find contiguous runs of space to allocate when writing. When I say "gobs", I really mean it: a ZFS pool used for block storage that is showing 50-60% full will become highly fragmented over time, and writes will become ponderously slow.
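As a sizing sketch of the "gobs of free space" advice (the 50% fill target is taken from the post above; the VM footprint is a made-up example):

```python
# Keeping a block-storage pool at or below ~50% full means provisioning raw
# capacity well past the VM footprint.
vm_data_tb = 2.0   # hypothetical VM working set
max_fill = 0.50    # stay at most half full, per the advice above

needed_usable_tb = vm_data_tb / max_fill
needed_raw_tb = needed_usable_tb * 2   # mirrored vdevs are 50% space-efficient

print(f"Usable pool size needed: {needed_usable_tb:.0f}TB")
print(f"Raw disk to buy (mirrors): {needed_raw_tb:.0f}TB")
# Usable pool size needed: 4TB
# Raw disk to buy (mirrors): 8TB
```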

https://forums.freenas.org/index.php?threads/zfs-fragmentation-issues.11818/
 