BUILD New Build - Picking Drive Size Based on RAM Usage


S1RC

Dabbler
Joined
Jul 28, 2016
Messages
28
New to FreeNAS here. Upgrading from my old hardware: i7 920 / 6GB RAM / SAS controller RAID 5.

So far I've purchased:
CPU: Xeon E3-1230V5
Motherboard: ASRock Rack E3C236D4U
RAM: 64GB (4x16GB) Kingston ECC RAM
Chassis: SuperMicro SC213LT-563LPB with redundant power supplies

I know Kingston isn't the best choice for compatibility, but the RAM passed multiple 4-pass sessions in memtest.

My dilemma now is choosing drive size. I'm buying WD Red Pros to slowly replace my ageing 2TB drives. With the 6TB option being affordable compared to the 4TB, I was thinking of building one of the following:

- 7x6TB WD Red Pro RAIDZ3 + hot spare
- 2x 4x6TB WD Red Pro RAIDZ2 vdevs in one zpool

I've maxed out my motherboard at 64GB; however, with either option using 6TB drives I'm at 42/48GB of RAM just for ZFS performance, which leaves 22/16GB for services, etc.
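For reference, here's the back-of-the-envelope math behind those numbers (a rough sketch, assuming the common 1GB of RAM per TB of raw storage guideline):

```python
# Rough sketch of the "1GB RAM per TB of raw storage" guideline for the two layouts.
# (Guideline only - not a hard requirement; the hot spare isn't counted as raw pool storage.)
TOTAL_RAM_GB = 64
DRIVE_TB = 6

for layout, pool_drives in [("7x6TB RAIDZ3 + hot spare", 7), ("2x 4x6TB RAIDZ2", 8)]:
    raw_tb = pool_drives * DRIVE_TB
    arc_guideline_gb = raw_tb            # ~1GB RAM per TB of raw pool storage
    leftover_gb = TOTAL_RAM_GB - arc_guideline_gb
    print(f"{layout}: {raw_tb}TB raw -> ~{arc_guideline_gb}GB for ZFS, ~{leftover_gb}GB left over")
# -> 42GB for ZFS / 22GB left, or 48GB for ZFS / 16GB left
```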

Being new to FreeNAS, is that sufficient to run the following services:
- Plex
- SABnzbd
- Sonarr
- CouchPotato
- OwnCloud
- CrashPlan
- OpenVPN
- VirtualBox (with the possibility of several VMs on a need to use basis)
- MySQL
- Apache
- nginx

Should I just stick with the 4TB drives and leave myself with 36/32GB free RAM?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
The FreeNAS RAM recommendation of 1GB per TB is strictly about performance - you can get away with less, and it's not a hard rule.

Based on the additional pieces you're adding (Plex/CouchPotato/etc.), I have a feeling the primary filesharing use case is media streaming inside your house, so you can definitely get away with running those services on 64GB and still maintain performance, as long as you don't let MySQL run wild on RAM.
 

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
I'm assuming you're basing your thinking on the 8GB RAM + 1GB per TB of storage rule, which was a general rule of thumb at some point?

My FreeNAS box has 32GB RAM and 9x4TB drives, so using that rule I'd be a few GB short of RAM, but the machine is rock solid and transfers generally max out my gigabit network.

On top of that I'm running a number of jails 24x7 (ownCloud, Plex, VirtualBox, OpenVPN) and at times a number of VMs (in the VirtualBox jails) which are often allocated 4-6GB RAM.

So based on my experience, I think you'll be pretty good with either of those configurations and 64GB RAM :D
 

S1RC

Dabbler
Joined
Jul 28, 2016
Messages
28
I'd say 60% media hosting, 40% support for SaaS development (e.g. running test environments and DBs for internal use; I don't expect huge data sets).

Thanks for the quick replies. I feel safer now about my choice of 6TB drives.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Screw it... go with the 6TB disks. You aren't likely to be disappointed with that setup.

If you are doing database stuff and it feels too slow for you, look at adding an L2ARC SSD to the zpool. ;)
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
- 7x6TB WD Red Pro RAIDZ3 + hot spare
- 2x 4x6TB WD Red Pro RAIDZ2 vdevs in one zpool

A hot spare is an anti-pattern. When a drive fails, you should refresh your backup before replacing the failed drive and forcing a full resilver; a hot spare will just force a full resilver before you've refreshed your backup. Also, if you're using up a drive bay and a drive as a hot spare, you might as well use it for extra redundancy anyway.

It seems like you're basing your pool shape on an obsolete rule, like the 2^n+p drives rule. That rule is no longer applicable, since ZFS supports compressed blocks.

Pick your level of redundancy (RAIDZ2 or RAIDZ3) and how wide/narrow you want your vdevs, and be done with it.

Personally, I think raidz3 for a home media server and test environment with only circa 6/7 disks is overkill. Especially since you'll have a backup.

I use 8 disk raidz2 and intend to scale to 3 vdevs for a total of 24 drives ;)

I did consider 6 disk raidz2 and 4 vdevs.
 

S1RC

Dabbler
Joined
Jul 28, 2016
Messages
28
The hot spare is probably old-RAID-days mentality on my part, and with the power-of-two rule I was just going to fill the leftover bay.

I didn't realise the power-of-two rule for data drives was no longer applicable. I'll have to look more into that; it seems people are still talking about it.

I didn't want to take this thread off topic into ZFS RAIDZ layouts, but even with 6x6TB in Z2 vs 7x6TB in Z3 (setting aside filling all 8 bays) I can't see myself realistically needing more than 17TB of usable storage for the life of this server. I was leaning towards Z3 because why not have the extra security, but I was looking into how the extra parity would affect performance and whether I'd notice it given my application.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
With potentially 8 bays I see a few options (rough usable-capacity numbers are sketched below):

1) Mirrors: grow as you need... 50% storage efficiency.
2) 2 vdevs of 4 disks in RAIDZ2: the same storage efficiency as mirrors... means replacing/growing in batches of 4... slightly better redundancy, as you can survive ANY two disk failures.
3) 1 vdev of 6 disks in RAIDZ2: a nice compromise on storage efficiency/performance compared to mirrors/4-disk RAIDZ2.
4) 1 vdev of 7 or 8 disks in RAIDZ2 or RAIDZ3: the choice depends on whether storage efficiency AND performance are more important than that third level of redundancy, and on what capacity you require. Remember that if you lose one disk and then hit a bad block, that bad block would have to coincide with another bad block on another disk for RAIDZ2 to be a problem. And if that were to occur, you can restore the affected file (or the entire pool) from your backup.

Also, remember that you shouldn't fill your ZFS pool more than about 80% (I think that's the figure). If you do, ZFS enters a slow hunt-and-write mode when writing, to minimise further fragmentation.
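To put rough numbers on the options above (and the ~80% fill guideline) with 6TB drives, here's a quick sketch. It ignores ZFS metadata/padding overhead and the TB-vs-TiB difference, so treat it as ballpark only:

```python
# Ballpark usable capacity for the layout options above, using 6TB drives.
# Ignores ZFS metadata/padding overhead and TB vs TiB, so real figures will be lower.
DRIVE_TB = 6
FILL_LIMIT = 0.80  # keep the pool under roughly 80% full

# (name, total disks, disks "spent" on redundancy)
layouts = [
    ("4x 2-way mirrors", 8, 4),
    ("2x 4-disk RAIDZ2", 8, 4),
    ("1x 6-disk RAIDZ2", 6, 2),
    ("1x 8-disk RAIDZ2", 8, 2),
    ("1x 8-disk RAIDZ3", 8, 3),
]

for name, disks, redundancy in layouts:
    usable_tb = (disks - redundancy) * DRIVE_TB
    efficiency = (disks - redundancy) / disks
    print(f"{name}: ~{usable_tb}TB usable ({efficiency:.0%}), "
          f"~{usable_tb * FILL_LIMIT:.0f}TB before hitting ~80% full")
```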

Re: the power-of-2 rules, etc.: I'm having trouble relocating the articles, but basically lots of people are still quoting the old rules. They used to apply when all ZFS blocks were a power of 2 in size, but with compression (which you probably should be using in most cases) that's no longer true.
 

S1RC

Dabbler
Joined
Jul 28, 2016
Messages
28
I think I'm still leaning towards option 3, maybe 2. I might do some testing to see what kind of performance loss RAIDZ3 has over RAIDZ2, and whether I'd actually notice it.

There was an article I read from Solaris saying you shouldn't start a RAIDZ2 until 6 total disks and RAIDZ3 until 9 total disks, but I'm not sure of its age or relevance anymore.

Even 80% of 17TB usable is more than I can see myself needing, so I'm not worried about going over 80% usage.
 
