Considering moving to FreeNAS - Looking to double check information


Kontu

Cadet
Joined
Dec 18, 2014
Messages
4
Hey all, I've been considering the move to FreeNAS for a while now (from UnRAID), but have some questions I haven't been able to get a firm answer on while searching both these forums and the web as a whole. I've been working with it on a secondary system, which has helped me work through a lot of configuration/setup questions, but that box only had 5x1TB drives in varying RaidZ/2/3 configurations (to experiment and gain familiarity) with 16GB of RAM, so performance was always fine / as expected.

Basic specs:
Case: Norco RPC-4224
PSU: Seasonic 750W
Motherboard: Supermicro MBD-X9SCM-F-O
Processor: Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz
Memory: 16GB Kingston DDR3 ECC (2 x 8GB, 32GB maximum supported by system)
SATA Connectivity: Flashed IBM M1015 and a RES2SV240 expander
Storage: 8 x 3TB WD Reds, approximately 6TB of data

Overview:
This system is meant to be primarily storage for home use right now. Most data is static and doesn't change often (media).

I will have the following running on it, most of which have built-in plugins already; I'll see about getting the installs updated (which I've researched to varying extents):
Crashplan
Plex Media Server
CouchPotato
Sickbeard
Sabnzbd
Some torrent program (undecided on which, might just start with transmission like I'm using now)
Teamspeak server
Dropbox alternative (maybe)

Configuration and concerns:
I was planning on doing a 6-drive RaidZ2 vdev, with expansion of the pool being in additional 6-drive RaidZ2 vdevs as needed. This would end up being 3 vdevs total for main storage, with at least 2 vdevs being all 3TB drives (partly since I'll have two spare drives left over after the first vdev), and the third might be bigger drives. We'll see. My original plan was ~63TB usable using all 3TB drives on UnRAID, but that only left 2 drives for parity and 1 for cache (not that UnRAID even does dual parity yet), and in retrospect that's just silly to trust across 24 drives. So now I'm probably looking at ~40-56TB usable (depending on that third vdev; whether I go with 4, 6, or 8TB drives will depend on the market at that time).
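For reference, my understanding is that the staged expansion would look roughly like this at the ZFS level (the FreeNAS GUI handles this for you; the pool name "tank" and the da0-da11 device names below are just placeholders for illustration):

    # Initial pool: one 6-drive RaidZ2 vdev (4 drives' worth of data space, so ~12TB raw with 3TB drives)
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5

    # Later expansion: add a second 6-drive RaidZ2 vdev to the same pool
    zpool add tank raidz2 da6 da7 da8 da9 da10 da11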

My main concern is memory - the usual rule of thumb is 1GB of RAM per TB of storage. But I don't care about dedup because I'm the only one adding data to it, and as long as I can come close to saturating a gigabit link (and maybe a bonded pair of gigabit eventually, with all 3 vdevs set up) with reads or writes when I need to copy a file (so on demand, not constantly), I don't care about anything being cached in RAM. I won't be running any VMs directly on the NAS (though I may one day add a 1-2TB SSD zpool/vdev just for that, but maybe that will be its own system entirely, we will see), so I don't need any overhead there. I would *love* to get away with just the 16GB I currently have installed, but am absolutely prepared to max it out to the 32GB the board supports. I'm just not sure whether 32GB is enough for ~56TB of data. My thinking is that, without dedup and without really needing a big cache, it would be OK on 16 or 32GB.
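If it helps, my plan is to just watch the ARC once the pool is loaded rather than guess. As far as I can tell FreeBSD exposes the relevant counters via sysctl, so something like this should show whether 16GB is actually holding up (my assumption being that a consistently poor hit ratio under normal use would be the sign to go to 32GB):

    # Current ARC size in bytes
    sysctl -n kstat.zfs.misc.arcstats.size
    # Hit/miss counters; the ratio over time is what matters
    sysctl -n kstat.zfs.misc.arcstats.hits
    sysctl -n kstat.zfs.misc.arcstats.misses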

In terms of saturating a gigabit link -- I absolutely know I am limited to the slowest drive in each vdev. So a single RaidZ2 vdev might not quite saturate gigabit, but once I get the second RaidZ2 vdev up and running I'd expect the network to be the full bottleneck (though I'll be nearly saturating it with just a single one anyway). So when it's all said and done I'm never really expecting transfers from the spinning-drive zpool to break ~300MB/s (maybe ~315-320MB/s if I'm somehow super lucky with overhead), regardless of what the network speed is (such as if I upgrade my home network from gigabit to 10G in the future). I'm also aware I may hit a throughput bottleneck with the M1015 and SAS expander as I fill up the system more and more, but I'm not going to be annoyed if that's the reason I'm having trouble getting the higher throughput I may want.
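Once it's all set up I'll probably test the wire and the pool separately so I know which one is the bottleneck, along these lines (assuming iperf is available on both ends; the address and file path are just examples):

    # Raw network throughput, disks not involved (run 'iperf -s' on the other machine first)
    iperf -c 192.168.1.10 -t 30

    # Rough sequential write speed of the pool itself
    # (note: with lz4 compression on, /dev/zero gives inflated numbers, so test on a dataset with compression off or use real data)
    dd if=/dev/zero of=/mnt/tank/ddtest bs=1M count=8192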

Do any of you see any issue with what I'm planning, or have any input on my memory concern given what I expect it to do? I currently run it all on UnRAID with the 8x3TB drives (18TB usable) limited to just 4GB of memory (cursed 32-bit kernel on UnRAID 5), and performance is generally fine (~65-90MB/s writes, 80-103MB/s reads), but I want something that handles drive parity and redundancy a bit better, as well as being based on an OS that has more of a community / documentation / etc. (FreeBSD instead of Slackware).


Summary:
1) Are the amounts of memory I'm considering (16GB now, maxing out at 32GB) enough for the possible ~40-56TB of usable storage?
2) Any obvious reasons I shouldn't be able to saturate gigabit / dual bonded gigabit once everything is fleshed out?
3) Any issues with planning 3x6 RaidZ2 vdevs for the zpool (added over time)?
4) Anything I might not have covered / be overlooking?
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Your post is long. I would go with 32GB of memory, or at least the ability to grow to that size. At the same time, that would max out your motherboard. You should plan for the next 5 years and your needs in that time frame.
 

Kontu

Cadet
Joined
Dec 18, 2014
Messages
4
Your post is long. I would go with 32GB of memory, or at least the ability to grow to that size. At the same time, that would max out your motherboard. You should plan for the next 5 years and your needs in that time frame.

Yeah, my post is long, sorry. I was trying to cover the information I had available. Yes, I realize 32GB would max out my motherboard, which is why I'm trying to determine whether 32GB would handle the amount of usable storage I'd like to eventually expand to.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Throughput is cumulative across your drives (minus some overhead). It is not limited to the slowest drive. IOPS are limited by the number of vdevs. 8 Reds will do 700MB/s all day in Z2. The bottleneck is the wire, and when you move up to 10Gb it's CIFS.

For a media load, with only one user, you may be fine at ~60TB with 32GB. Cyberjock has 10x6TB drives running, and it works. Though he has mentioned he gets hit with random slowdowns, and the 32GB limit is frustrating. So that is likely the edge case, and you are right at it when fully configured. The good news is that if you are adding slowly, you'll have a good feel for it. No real choice but to hop up to the E5 and more RAM when it becomes intolerable. But you should be able to go a long way. If it's 10TB drives soon, we are going to need some upgrades. ;)

3x6 z2 vdevs is perfect, imho. Gives a nice balance between performance and space.

Good Luck. Enjoy.
 

Kontu

Cadet
Joined
Dec 18, 2014
Messages
4
Thanks mjws00. In plotting things out I'm hoping to end up with less than 60TB on this hardware (closer to 36TB usable) and then hopefully rebuild into a server that can handle quite a bit more memory, with fewer but larger drives. But that all depends on how my data grows over the next few years (I mean, movies have scaled from just 700MB to anywhere from 7GB to 70GB each for 1080p and up... it's nuts) and how quickly I continue to collect.

That's good to know that throughput isn't limited by the slowest drive, just IOPS being limited by the number of vdevs - I had thought my reading mentioned both, but now that you point it out I'm recalling it more correctly. So IOPS would be capped at roughly a single drive's performance per vdev (so with 3 vdevs, about 3 drives' worth of IOPS), but throughput scales with the raw number of drives. If I ever need a high amount of IOPS for VM storage I'll throw some PCIe SSDs in and make a separate zpool, I suppose.
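Putting rough numbers on that for my own sanity (my ballpark figures, assuming ~120MB/s sequential and ~75-100 random IOPS per Red): 3 vdevs x 4 data drives x ~120MB/s works out to somewhere around 1,400MB/s of theoretical sequential throughput, far more than gigabit or even bonded gigabit can use, while random IOPS would be roughly 3 x 75-100, so about 225-300, i.e. about three drives' worth.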

Now the downside: it's going to take ~28 hours to rsync my data over to the temporary machine. Yay.
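(For the curious, it's just a plain rsync along these lines, so it can pick up where it left off if something dies partway; the paths and host are examples:)

    rsync -avP /mnt/user/media/ root@192.168.1.20:/mnt/temp/media/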
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
It should work. If it comes to that, you can always add more RAM. It's not like Haswell lifted the stupid 8GB UDIMM restriction...
 

Kontu

Cadet
Joined
Dec 18, 2014
Messages
4
It should work. If it comes to that, you can always add more RAM. It's not like Haswell lifted the stupid 8GB UDIMM restriction...
The board caps out at 32GB anyway (4x8GB modules), so nothing I can do there right now. I'm stuck with this hardware because I've had it for about 9 months already running UnRAID; there are just parts of development/usability/expansion/etc. with that project that make me want to switch.
 