1st storage server - FreeNAS!


Sir SSV

Explorer
Joined
Jan 24, 2015
Messages
96
Note that the following use the same number of parity disks, 6. But the RAID-Z2 will get slightly better read and write performance, as it's 3 vDevs, each with one less parity disk.
  • 3 vDevs of 8 drives - RAID-Z2
  • 2 vDevs of 12 drives - RAID-Z3
However, since writes would be less important for a mostly-read media server, you might take that into account. I also like free slots and/or warm spares. So something like this:
  • 1 vDev of 11 drives - RAID-Z3
  • 1 vDev of 12 drives - RAID-Z3
  • Warm spare, or free slot and cold spare
You may even be able to get away with RAID-Z2. It's pushing the limit for RAID-Z2 at 11/12 disks, but a warm spare ready to go helps reduce the risk.

Please note that (slightly) mismatched vDevs are both allowed and won't impact performance much, especially for the home or non-business use case.

Note: What I define as a warm spare is a disk installed in the server, ready to replace a failed disk, but requiring operator intervention. Hot spares don't require operator intervention. And while hot spares are supported by ZFS, that feature is generally not used, partly because you can't tell ZFS which vDev should get the hot spare when you have failed disks in multiple vDevs.
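
(For illustration only: at the command line, the two RAID-Z3 vDevs above would be built roughly like this. The pool name and device names are placeholders, and FreeNAS normally builds pools through the Volume Manager GUI rather than raw zpool commands. The warm spare is simply left installed but unassigned.)

Code:
# Sketch only - hypothetical names; zpool will likely warn about the
# mismatched vdev widths and want -f to accept this layout
zpool create tank \
    raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 \
    raidz3 da11 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22
# da23 stays installed but unassigned as the warm spare; when a disk fails:
#   zpool replace tank <failed-disk> da23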

Fantastic and very informative post. Because my server's primary use is to store media (movies, music, photos), would it be better to run with 2 vdevs of 12 drives in Raid-Z3? I do value my data and also have backups just in case.

I was doing a little reading at the following site

https://calomel.org/zfs_raid_speed_capacity.html

and found their RAID comparison interesting. There doesn't seem to be much difference performance-wise between a Raid-Z2 and a Raid-Z3 for 24 drives. I do like the insurance of Raid-Z3 with the provision to lose up to 3 hard drives...

A small update on my server: I have now totally completed the burn-in test on all 24 hard drives. For anyone thinking of using 8TB drives like I did, be prepared for the burn-in test to take a VERY long time. In total, for 24 drives, this took close to 2 weeks of straight testing (with the exception of 2 days waiting for a new drive to be shipped to me). I have since shut the server down as I won't be able to finish configuring it until the weekend (plus a reboot/shutdown is required to enable the geom debug flags).

I'm looking forward to finishing the final configuration of the server and to start transferring files. Been a long time coming :D
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I'd still recommend 3x 8-way RaidZ2

But I guess you have a few options

12x mirrors
4x 6way Raidz2
3x 8way RaidZ2
3x 8way RaidZ3
2x 12way RaidZ3

For 12, 8, 6, 9, and 6 disks of parity.
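
(A minimal sketch of what the recommended 3x 8-way RaidZ2 option looks like at the pool level; pool and device names are placeholders, and FreeNAS would normally build this via the GUI.)

Code:
# Three 8-disk RAIDZ2 vdevs striped into one pool (hypothetical device names)
zpool create tank \
    raidz2 da0  da1  da2  da3  da4  da5  da6  da7 \
    raidz2 da8  da9  da10 da11 da12 da13 da14 da15 \
    raidz2 da16 da17 da18 da19 da20 da21 da22 da23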

I've seen quite a few stories where people who went with massive wide RaidZ3 have ended up converting to narrower RaidZ2 because of performance problems. I specifically refer to @cyberjock and @Chris Moore (I think)
 

Sir SSV

Explorer
Joined
Jan 24, 2015
Messages
96
I'd still recommend 3x 8-way RaidZ2

But I guess you have a few options

12x mirrors
4x 6way Raidz2
3x 8way RaidZ2
3x 8way RaidZ3
2x 12way RaidZ3

For 12, 8, 6, 9, and 6 disks of parity.

I've seen quite a few stories where people who went with massive wide RaidZ3 have ended up converting to narrower RaidZ2 because of performance problems. I specifically refer to @cyberjock and @Chris Moore (I think)

I vaguely remember reading somewhere that cyberjock had problems with either a 12 or 14 disk Raid-Z3.

How do you find the performance of the 3 x 8 disk Raid-Z2?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I vaguely remember reading somewhere that cyberjock had problems with either a 12 or 14 disk Raid-Z3.

How do you find the performance of the 3 x 8 disk Raid-Z2?

I have no complaints. I actually just received an X550-T2 10gbe card to add to my system, so I'll be playing with 10gbe benchmarking in the near future ;)

Oh, I'm only on 2 vdevs at the moment. Built it with one, extended to two in June. Plan to extend to three in a year or two based on data growth.

So, I have 8 bays free.

I did test 6 8TB drives in the spare bays recently as a separate pool for thermal testing. End result: I need to improve cooling, so I have a little bit of maintenance planned. Will add a P3700 for SLOG, the 10gbe card, the Noctua industrial fans, and a couple of SSDs for boot etc., and turn it into an ESXi/FreeNAS box. The pool itself will not change.

Been testing the upgrade in my mini Node 304/XeonD rig which is like a baby version of it :)
 

Sir SSV

Explorer
Joined
Jan 24, 2015
Messages
96
I have no complaints. I actually just received an X550-T2 10gbe card to add to my system, so I'll be playing with 10gbe benchmarking in the near future ;)

Oh, I'm only on 2 vdevs at the moment. Built it with one, extended to two in June. Plan to extend to three in a year or two based on data growth.

So, I have 8 bays free.

I did test 6 8TB drives in the spare bays recently as a separate pool for thermal testing. End result: I need to improve cooling, so I have a little bit of maintenance planned. Will add a P3700 for SLOG, the 10gbe card, the Noctua industrial fans, and a couple of SSDs for boot etc., and turn it into an ESXi/FreeNAS box. The pool itself will not change.

Been testing the upgrade in my mini Node 304/XeonD rig which is like a baby version of it :)

That sounds awesome :)

10gbe is really something! Will you be purchasing a 10gbe switch? I cannot recommend enough the Netgear ProSafe XS series
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
That sounds awesome :)

10gbe is really something! Will you be purchasing a 10gbe switch? I cannot recommend enough the Netgear ProSafe XS series

I purchased the XS716T. It's nice :)

Been able to verify that I can use iperf to slam half a dozen gigabit clients at full speed.. simultaneously... from a single 10gbe uplink to the switch.

Also been able to verify that my little pilot XeonD system can push 20gbps or so using vmxnet internal networking, and that I can get 9.9gbps back and forth between two FreeNAS instances over 10gbe, bidirectionally, i.e. full duplex out one 10gbe port to the switch and back in the other port.

The vmxnet only does 10gbps bidirectionally.

That's just the iperf testing at this stage.
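
(For anyone wanting to repeat this kind of test, a rough sketch with iperf3; the server address is a placeholder.)

Code:
# On the FreeNAS box (server side):
iperf3 -s
# On each gigabit client, run simultaneously against the server's 10GbE address:
iperf3 -c 192.168.1.10 -t 30
# Reverse direction (server sends to the client) to load both directions:
iperf3 -c 192.168.1.10 -t 30 -R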

I've also done some testing already in the vmware environment, and using iSCSI and internal VMWare networking I'm getting about a GB/s read/write from the VMs to the 6way RaidZ2 pool... utilising the P3700 as SLOG, forcing sync. It peaks at 1.2GB/s write, and up to about 1750MB/s read.
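
("Forcing sync" here would typically mean setting the sync property on the dataset or zvol backing the iSCSI extent, roughly as below; the dataset name is a placeholder.)

Code:
# Force synchronous writes so the P3700 SLOG is actually exercised
zfs set sync=always tank/vmstore
zfs get sync tank/vmstore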

I now need to renovate a bunch of cable runs so I can run 10gbe at full speed over them.

[Attached screenshot: Screen Shot 2017-08-16 at 12.10.52 AM.png]


We throw video content around... which is why I focus on sequential performance mostly.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I've seen quite a few stories where people who went with massive wide RaidZ3 have ended up converting to narrower RaidZ2 because of performance problems. I specifically refer to @cyberjock and @Chris Moore (I think)
Yes, that was me. I had set up a 12-drive RaidZ3 pool (single vdev) and found the performance to be too slow to even keep up with wire speed on a 1-gig network. I ended up backing all my data up, destroying the pool, and making two vdevs of 6 drives each in RaidZ2. Performance is much better. More vdevs generally equates to more IOPS, which generally equates to better perceived performance, especially with random IO or small files, but it still helps when dealing with larger files like the media (photos and videos) that is my primary data at home.
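
(If anyone wants to see this effect on their own pool, per-vdev activity can be watched while copying files; the pool name is a placeholder.)

Code:
# Show per-vdev bandwidth and IOPS, refreshed every 5 seconds
zpool iostat -v tank 5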
 

Sir SSV

Explorer
Joined
Jan 24, 2015
Messages
96
Yes, that was me. I had set up a 12-drive RaidZ3 pool (single vdev) and found the performance to be too slow to even keep up with wire speed on a 1-gig network. I ended up backing all my data up, destroying the pool, and making two vdevs of 6 drives each in RaidZ2. Performance is much better. More vdevs generally equates to more IOPS, which generally equates to better perceived performance, especially with random IO or small files, but it still helps when dealing with larger files like the media (photos and videos) that is my primary data at home.

Thanks for the insight. So it looks like I might be better off going with Raid-Z2 for my purpose.

Any recommendations for/against the following
  • 4 way 6 disk Raid-Z2
  • 3 way 8 disk Raid-Z2
I'd rather decide on the most suitable Raid from the beginning so hopefully I only have to copy files once to the server
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Thanks for the insight. So it looks like I might be better off going with Raid-Z2 for my purpose.

Any recommendations for/against the following
  • 4 way 6 disk Raid-Z2
  • 3 way 8 disk Raid-Z2
I'd rather decide on the most suitable Raid from the beginning so hopefully I only have to copy files once to the server
The 4 way 6 disk Raid-Z2 seems like a bit too much parity. I'd go with 3 separate vDevs, but still one pool:
  • 7 disk Raid-Z2
  • 8 disk Raid-Z2
  • 8 disk Raid-Z2
  • Free slot for warm spare or backup disk.
You could go with 2 free slots as well, if you want both a warm / hot spare, and a free slot for putting in / removing a backup disk.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
8 disks in raid z2 is what I did. I currently have 2 vdevs and I plan on putting in one more vdev

 

Sir SSV

Explorer
Joined
Jan 24, 2015
Messages
96
The 4 way 6 disk Raid-Z2 seems like a bit too much parity. I'd go with 3 separate vDevs, but still one pool:
  • 7 disk Raid-Z2
  • 8 disk Raid-Z2
  • 8 disk Raid-Z2
  • Free slot for warm spare or backup disk.
You could go with 2 free slots as well, if you want both a warm / hot spare, and a free slot for putting in / removing a backup disk.

8 disks in raid z2 is what I did. I currently have 2 vdevs and I plan on putting in one more vdev


Awesome, thanks for your experiences/input guys :)

I'll set up a 3 way 8 disk Raid-Z2 this weekend after I finish some more configurations on the server.

Might have some more questions on these configurations too so be prepared lol :p
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Any recommendations for/against the following
  • 4 way 6 disk Raid-Z2
  • 3 way 8 disk Raid-Z2
I'd rather decide on the most suitable Raid from the beginning so hopefully I only have to copy files once to the server
I would go with the 4 way 6 disk RAIDz2 which is what I had planned to work up to by adding additional vdevs as my storage need increased.
I think having more parity is a good thing to reduce the possibility of a fault taking the pool out.
The statement from @Arwen about too much parity is due to the fact that each of the 4 vdevs gives you 2 drives worth of parity data, so 8 of the drives would effectively be dedicated to redundancy vs 6 following the other suggestion.
At the same time, Arwen is suggesting having an online spare. I have cold spares sitting on the shelf instead of having them in the server getting old and accumulating power-on hours. With your drives being 8TB each, it would be about a 14TB difference in available storage between the two options.

I would keep a couple of spares on hand (sealed bag on the shelf) instead of having a spare burning power in the server. When a drive fails, you can remove it and put the new drive in where the old drive came out. I have tested it both ways, and a vdev does not resilver faster with the failed drive still in the pool. If you have a RAIDz2 pool, it is best to just take the failed drive out, put a new drive in, and resilver. The recovery is faster than trying to do it with the failed drive in place.
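
(Roughly what that swap looks like from the command line; FreeNAS normally drives this from the GUI and uses gptid labels, so the raw device names below are placeholders.)

Code:
zpool status tank              # identify the failed disk
zpool offline tank da5         # take the failed disk offline if it isn't already
# ...physically pull da5, insert the cold spare (shows up here as da24)...
zpool replace tank da5 da24    # resilver onto the new disk
zpool status tank              # watch resilver progress
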
Still, I don't fault the 8 drive option, and with all three vdevs populated, I think it would be acceptably fast even with your 10GbE network.
I kind of feel like I rambled a bit there. I hope it makes sense.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
I would go with the 4 way 6 disk RAIDz2 which is what I had planned to work up to by adding additional vdevs as my storage need increased.
I think having more parity is a good thing to reduce the possibility of a fault taking the pool out.
The statement from @Arwen about too much parity is due to the fact that each of the 4 vdevs gives you 2 drives worth of parity data, so 8 of the drives would effectively be dedicated to redundancy vs 6 following the other suggestion.
At the same time, Arwen is suggesting having an online spare. I have cold spares sitting on the shelf instead of having them in the server getting old and accumulating power-on hours. With your drives being 8TB each, it would be about a 14TB difference in available storage between the two options.

I would keep a couple of spares on hand (sealed bag on the shelf) instead of having a spare burning power in the server. When a drive fails, you can remove it and put the new drive in where the old drive came out. I have tested it both ways, and a vdev does not resilver faster with the failed drive still in the pool. If you have a RAIDz2 pool, it is best to just take the failed drive out, put a new drive in, and resilver. The recovery is faster than trying to do it with the failed drive in place.
Still, I don't fault the 8 drive option, and with all three vdevs populated, I think it would be acceptably fast even with your 10GbE network.
I kind of feel like I rambled a bit there. I hope it makes sense.
One comment, and that is to test and burn in those drives you have sitting on the shelf. It would be even worse to replace a failed drive with a bad drive.

 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I'd still leave a slot free. It makes local backups easier, and can be used for a cold spare drive replacement. (And with that many disks, I too agree that having a disk or 2 available, ready to go, is a Good Idea.)

Note that ZFS is one of the few RAID systems (perhaps the only commonly used one) that will allow using the failing, but not yet failed, disk as a source for re-syncs / re-silvers. The reason I suggest it (even though it likely won't speed up re-silvers) is that if you get another drive failure (perhaps a complete drive failure), you still have some redundancy.
For RAID-Z2 it's not as much of a concern, as it would be for 2 way Mirrors or RAID-Z1, each only having 1 disk of redundancy.
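
(The in-place approach described here is just a replace issued while the failing disk is still attached; placeholder device names again.)

Code:
# New disk da24 is added alongside the failing da5; ZFS can still read from da5 during the resilver
zpool replace tank da5 da24
zpool status tank   # da5 shows under a temporary "replacing" vdev until the resilver finishes, then it is detached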
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Any recommendations for/against the following
  • 4 way 6 disk Raid-Z2
  • 3 way 8 disk Raid-Z2

If performance is more important than capacity, go with 4 vdevs; if capacity is more important, 3.
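
(Rough numbers, assuming 8TB drives and ignoring ZFS overhead: 4x 6-disk Raid-Z2 leaves 16 data disks, roughly 128TB raw, while 3x 8-disk Raid-Z2 leaves 18 data disks, roughly 144TB raw. The 4-vdev layout gives up those two disks of capacity in exchange for one extra vdev's worth of IOPS.)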
 

Sir SSV

Explorer
Joined
Jan 24, 2015
Messages
96
If performance is more important than capacity, go with 4 vdevs; if capacity is more important, 3.

Considering the server will only have media such as movies, music, and photos, I am leaning towards a 3 way 8 disk Raid-Z2.

My 2 HTPCs running OpenELEC will be connecting to the server via SMB shares, so I'm hoping the 3x8 will be plenty to play uninterrupted media to the HTPCs.

Another question I have from my research: should I change the record size to 1M?
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Considering the server will only have media such as movies, music, and photos, I am leaning towards a 3 way 8 disk Raid-Z2.

My 2 HTPCs running OpenELEC will be connecting to the server via SMB shares, so I'm hoping the 3x8 will be plenty to play uninterrupted media to the HTPCs.

Another question I have from my research: should I change the record size to 1M?
Record size can make a difference. I have mine set to 1M.
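
(A minimal sketch of setting it, with a placeholder pool/dataset name; note the new recordsize only applies to files written after the change.)

Code:
zfs set recordsize=1M tank/media
zfs get recordsize tank/media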

 

Sir SSV

Explorer
Joined
Jan 24, 2015
Messages
96
I've been reading the FreeNAS documentation and want to set up my Raid-Z2 configuration. To create the pool of drives, I do this manually under Volume Manager (selecting all disks and Raid-Z2), then create the dataset (setting the record size to 1M). However, how do I create the vdevs inside the pool (3 vdevs x 8 disks)?

Also, do I have the order of configuration correct? Pool creation -> dataset creation -> vdev creation

Just want to get this right before I start transferring files. I'd rather get it right the first time :D
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
However, how do I create the vdevs inside the pool (3 vdevs x 8 disks)?
Three rows of eight disks.

Also, do I have the order of configuration correct? Pool creation -> dataset creation -> vdev creation
No. The pool is the stripe of all vdevs. You can add vdevs at any time (but you cannot remove them, ever).

You can create and destroy datasets whenever you want, with no real restrictions other than what you're willing to deal with.
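
(In CLI terms, with hypothetical device and dataset names: extending the pool later is a zpool add of another whole RAID-Z2 group, and datasets are created underneath it at any time.)

Code:
# Add a fourth 8-disk RAIDZ2 vdev to the existing pool (cannot be undone)
zpool add tank raidz2 da24 da25 da26 da27 da28 da29 da30 da31
# Datasets can be created (and destroyed) whenever needed
zfs create -o recordsize=1M tank/media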
 