Kontu | Cadet | Joined: Dec 18, 2014 | Messages: 4
Hey all, I've been considering the move to FreeNAS for a while now (from UnRAID), but I have some questions I haven't been able to get a firm answer on, searching both these forums and the web as a whole. I've been working with it on a secondary system, which has helped me work through a lot of configuration/setup questions, but that box only had 5 x 1TB drives in varying RaidZ/Z2/Z3 configurations (to experiment and gain familiarity) with 16GB of RAM, so performance was always fine / as expected.
Basic specs:
Case: Norco RPC-4224
PSU: Seasonic 750W
Motherboard: Supermicro MBD-X9SCM-F-O
Processor: Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz
Memory: 16GB Kingston DDR3 ECC (2 x 8GB, 32GB maximum supported by system)
SATA Connectivity: Flashed IBM M1015 and a RES2SV240 expander
Storage: 8 x 3TB WD Reds, approximately 6TB of data
Overview:
This system is meant to be primarily storage for home use right now. Most data is static and doesn't change often (media).
I will have the following running on it, most of which have built-in plugins already; I'll see about getting the installs updated (I've researched each to varying extents):
Crashplan
Plex Media Server
CouchPotato
Sickbeard
Sabnzbd
Some torrent program (undecided on which, might just start with transmission like I'm using now)
Teamspeak server
Dropbox alternative (maybe)
Configuration and concerns:
I was planning on doing a 6 drive RaidZ2 vdev, with expansion of the pool being in additional 6 drive RaidZ2 vdevs as needed. This would end up being 3 vdevs total for main storage, with at least 2 vdevs being all 3TB drives (partly since I'll have two spares left over after the first vdev) and the third might be bigger drives. We'll see. My original plan was ~63TB usable using all 3TB drives on UnRAID, but that left only 2 drives for parity and 1 for cache (not that UnRAID even does dual parity yet), and in retrospect, across 24 drives that's just silly to trust. So now I'm probably looking at ~40-56TB usable (depending on that third vdev; I may get 4, 6, or 8TB drives depending on the market at the time).
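The capacity range above can be sketched with some quick arithmetic. This is a rough sketch, assuming each RAIDZ2 vdev gives up two drives' worth of space to parity and ignoring ZFS metadata/slop overhead (real usable space lands a bit lower):

```python
# Rough usable-capacity math for the planned 3 x 6-drive RAIDZ2 layout.
# RAIDZ2 keeps 2 drives' worth of parity per vdev, so a 6-drive vdev
# yields roughly 4 drives of usable space (before ZFS overhead/slop).
def raidz2_usable_tb(drives: int, drive_tb: float) -> float:
    return (drives - 2) * drive_tb

vdev1 = raidz2_usable_tb(6, 3)   # 12 TB
vdev2 = raidz2_usable_tb(6, 3)   # 12 TB
vdev3_options = {tb: raidz2_usable_tb(6, tb) for tb in (4, 6, 8)}

low = vdev1 + vdev2 + vdev3_options[4]    # third vdev on 4TB drives -> ~40 TB
high = vdev1 + vdev2 + vdev3_options[8]   # third vdev on 8TB drives -> ~56 TB
print(low, high)  # -> 40 56
```

That matches the ~40-56TB range quoted, with the spread driven entirely by the drive size chosen for the third vdev.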
My main concern is memory - everyone says 1GB of RAM per TB of usable data. But I don't care about dedup because I'm the only one adding data to it, and as long as I can come close to saturating a gigabit link (and maybe a bonded pair of gigabit eventually, with all 3 vdevs set up) with reads or writes when I need to copy a file (so on demand, really, not constantly), I don't care about anything being cached in RAM. I won't be running any VMs directly on the NAS (though I may one day add a 1-2TB SSD zpool/vdev just for that, but maybe that will be its own system entirely; we'll see), so I don't need any overhead there. I would *love* to get away with just the 16GB I currently have installed, but I'm absolutely prepared to max it out at the 32GB the board supports. I'm just not sure whether 32GB is enough for ~56TB of data. My thinking is that, with no dedup and no real need for caching, it would be OK on 16 or 32GB.
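For reference, here is where the planned sizes land against that rule of thumb. This is only a sketch of the guideline, which targets general workloads; with no dedup and a single-user media workload, the actual requirement is usually much lower than the rule suggests:

```python
# The common FreeNAS guideline is roughly 1 GB of RAM per TB of storage.
# It is a rule of thumb, not a hard floor; these numbers just show where
# 16 GB / 32 GB land against it for the sizes discussed in this post.
RULE_GB_PER_TB = 1.0  # rule-of-thumb ratio (assumed, per forum folklore)

for usable_tb in (18, 40, 56):  # current UnRAID, low plan, high plan
    suggested = usable_tb * RULE_GB_PER_TB
    print(f"{usable_tb} TB usable -> rule of thumb suggests {suggested:.0f} GB RAM")
```

By the strict rule, 32GB falls short of the ~56TB high end; whether that matters for a no-dedup, mostly-static media workload is exactly the judgment call being asked about here.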
In terms of saturating a gigabit link -- I absolutely know I'm limited by the slowest drive in each vdev. So a single RaidZ2 vdev might not quite saturate gigabit, but once I get the second RaidZ2 vdev up and running I'd expect the network to be the full bottleneck (though I'll be nearly saturating it with just a single vdev anyway). So I'm never really expecting transfers from the spinning-drive zpool to break ~300MB/s (maybe ~315-320 if I'm somehow super lucky with overhead) when it's all said and done, regardless of the network speed (such as if I upgrade my home from gigabit to 10G in the future). I'm also aware I may hit a throughput bottleneck with the M1015 and SAS expander as I fill the system up more and more, but I won't be annoyed if that's the reason I have trouble getting the higher throughput I may want.
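A quick sanity check on those network ceilings. This is a back-of-the-envelope sketch with an assumed ~6% framing overhead (Ethernet/IP/TCP headers; real SMB/NFS overhead varies):

```python
# Payload ceiling of a gigabit link: 1 Gb/s wire rate is 125 MB/s raw;
# subtracting protocol framing leaves roughly 115-118 MB/s of payload.
GBIT_MB_S = 1_000_000_000 / 8 / 1_000_000  # 125.0 MB/s raw
OVERHEAD = 0.06                            # ~6% framing overhead (assumed)

single = GBIT_MB_S * (1 - OVERHEAD)        # ~117 MB/s per link
bonded = 2 * single                        # ~235 MB/s aggregate for a 2-link LAGG
print(round(single), round(bonded))        # -> 118 235
```

One caveat worth noting: LACP/LAGG balances per-flow, so a single file copy between two hosts typically still tops out at one link's speed; the bonded figure only applies to multiple simultaneous clients or streams.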
Do any of you see any issue with what I'm planning, or have any input on my concern on memory needed given what I expect it to do? I currently run it all on UnRAID with the 8 x 3TB drives (18TB usable) limited down to just 4GB of memory (cursed 32bit kernel on UnRAID5) and performance is generally fine (~65-90MB/s writes, 80-103MB/s reads), but I want something that can handle the drive parity and redundancy a bit better, as well as being based on an OS that has more of a community / documentation / etc (FreeBSD instead of Slackware).
Summary:
1) Are the amounts of memory I'm considering (16GB now, up to the board's 32GB max) enough for the possible pool sizes (~40-56TB usable)?
2) Any obvious reasons I shouldn't be able to saturate gigabit / dual bonded gigabit once everything is fleshed out?
3) Any issues with planning 3x6 RaidZ2 vdevs for the zpool (added over time)?
4) Anything I might not have covered / be overlooking?