Vdev/drive layout advice for 20x drives

Status
Not open for further replies.

vegaman

Explorer
Joined
Sep 25, 2013
Messages
58
I'm just about ready to rebuild my storage and am trying to make a call on how to configure it in terms of RAIDZ level and vdev width.
The main use is as a media server, then local backups (which also get sent offsite). I'll likely move things from my Linux server (home automation, backup management scripts... nothing too resource hungry) onto it too, if I'm happy with the VM/Docker support in FreeNAS these days.

Server is:
Supermicro X10SL7-F
32GB ECC RAM
LSI SAS 9211-8i card of some description - I think mine was an IBM rebrand, not that it matters much now that it's running IT mode firmware
Norco RPC-4220 case
20x 6TB WD Red drives
1x SSD as a boot drive - I've got another and a spare SATA port if it's worth mirroring it
Plan to add 10GbE once I decide on a switch - I definitely don't need more than 24 ports for now, and could make do with a lot fewer if that's cheaper; a bunch of the things I have plugged in are low traffic and could quite happily hang off a second 1Gbit switch.

I had previously been adding drives as I could, then the box sat untouched/unused while I travelled around for work. Now that I'm home again for a while, I've bought more RAM (it was only 8GB previously), some Noctua iPPC-3000 fans and a whole bunch of drives. I'm in the process of juggling data between drives so that all the new drives are free to use and I can hopefully avoid pulling down backups. Once that's done it will be time to rebuild my NAS.
Getting advice from workmates who work more in the storage space is difficult because we only deal with enterprise gear, where they rarely have to make decisions this far down the stack (or with hardware this cheap) - one of them said to just chuck all the drives in one massive RAIDZ2. Would I be better off doing 2x 10-drive RAIDZ2s instead?
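For a rough sense of the capacity trade-off (counting data disks only and ignoring ZFS parity/padding overhead, so real usable space will be lower):

1x 20-wide RAIDZ2: (20 - 2) x 6 TB = 108 TB of data disks, 2 parity disks total
2x 10-wide RAIDZ2: 2 x (10 - 2) x 6 TB = 96 TB of data disks, 4 parity disks total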
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
On a 6TB drive a rebuild/resilver can take a substantial amount of time, which puts you at risk of further drive failures during a period of high load on the remaining drives. Personally I would run 2 vdevs.
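As a rough sketch of what that layout looks like from the command line - with hypothetical FreeBSD device names da0-da19 and an example pool name "tank"; on FreeNAS you would normally build the pool through the GUI, which uses gptid labels instead:

  # one pool made of two 10-wide RAIDZ2 vdevs
  zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
    raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19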
 

vegaman

Explorer
Joined
Sep 25, 2013
Messages
58
Thanks, sounds like 2x 10 is the way to go then. Hopefully I'll get that up and running this weekend :D
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Btw, if it makes it easier, you can start with just the first vdev of 10 drives, evacuate contents to that and then add the next vdev
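A minimal sketch of that staged approach, again with hypothetical da* device names and an example pool name (the FreeNAS GUI's volume extend does the equivalent of the add step):

  # step 1: create the pool with the first 10-wide RAIDZ2 vdev
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9
  # ...copy the existing data over, then wipe/free the remaining drives...
  # step 2: add a second 10-wide RAIDZ2 vdev to the same pool
  zpool add tank raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19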
 

vegaman

Explorer
Joined
Sep 25, 2013
Messages
58
Btw, if it makes it easier, you can start with just the first vdev of 10 drives, evacuate contents to that and then add the next vdev
Now that I think about it, I could have attached 2 more drives above the drive trays - where the SSDs and optical drive (if you want one of those) go in the 4220... too late now anyway.
But that also means you end up unbalanced (all of your data on one vdev), right?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@vegaman, yes, any existing data remains where it is, so adding a new (2nd) vDev would not affect any existing data. ZFS will attempt to balance new writes so that both vDevs end up with a similar amount of data used.
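If you want to see how writes are being spread, the per-vDev numbers are visible from the shell (pool name "tank" is just an example):

  # capacity allocated/free per vDev
  zpool list -v tank
  # ongoing I/O per vDev, refreshed every 5 seconds
  zpool iostat -v tank 5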

With that many drive slots, I would have left one or two free for backups or in-place drive replacements - so perhaps 2 vDevs of 9 disks each in RAID-Z2. (Assuming you did not need the extra storage...)
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Drive testing, burn-ins.

I still have 8 bays free in my 24 bay NAS, and I find it very useful.

(Which, if you haven't looked into it, you should.)

If I get a new batch of drives in for whatever reason, I just chuck 'em in the spare 8 slots ;)
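As a sketch of a basic burn-in pass from the shell - hypothetical device /dev/da20 for a freshly added drive, and note that badblocks is destructive, so only run it on an empty disk:

  # long SMART self-test; review the results later with smartctl -a
  smartctl -t long /dev/da20
  # destructive write/read surface test (-b 4096 is needed on large drives)
  badblocks -b 4096 -ws /dev/da20
  # check SMART attributes for reallocated or pending sectors afterwards
  smartctl -a /dev/da20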
 