Sanity check for drive config

Status
Not open for further replies.

jasonhalljax

Cadet
Joined
Dec 17, 2017
Messages
5
Hello! This setup will be a NAS-only application (Supermicro board, E3 Xeon, 32GB ECC RAM) 95% for media needs. Plex and other media apps will be on a separate Ubuntu Server elsewhere on the network.

I have (8) 8TB NAS drives and (9) 4TB NAS drives. Can you recommend a storage configuration? My thought is 6x8TB in one RAIDZ2 vdev and 6x4TB in another. I've read in a couple of places that mixing drive sizes like this isn't ideal but will work.

My enclosure only holds 15 drives, and one bay will be taken by the boot SSD. Speed isn't a big concern, though it's possible I could have 5 Plex streams going at once. I picked 6-wide instead of 7-wide because I saw somewhere that space efficiency drops significantly (more overhead) going from 6 disks to 7, and I could use the extra drives as cold spares (or even in the Ubuntu server).

Thoughts? Thanks.
 

Mr_N

Patron
Joined
Aug 31, 2013
Messages
289
6x8 and 6x4 vdevs in a pool sounds fine. You don't need the same-sized vdevs, just the same-sized drives within a vdev; otherwise you're only using the smallest drive's capacity across all the others in the vdev...
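To put rough numbers on that, here's a quick back-of-the-envelope sketch (raw TB, ignoring ZFS metadata/padding overhead and TB-vs-TiB differences):

```python
def raidz_usable_tb(disk_sizes_tb, parity):
    # ZFS treats every disk in a vdev as if it were the size of the
    # smallest member, so mixing sizes within one vdev wastes the extra.
    n = len(disk_sizes_tb)
    return (n - parity) * min(disk_sizes_tb)

# 6 matched 8TB disks in RAIDZ2 (parity=2):
print(raidz_usable_tb([8] * 6, 2))        # 32 TB raw
# the same vdev with one 4TB disk mixed in:
print(raidz_usable_tb([8] * 5 + [4], 2))  # 16 TB raw - every disk counts as 4TB
```

So matched drives within each vdev is the rule that matters; the two vdevs themselves can differ.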
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Of course, the space-efficiency thing is not that big of a deal anymore since LZ4 compression was added to ZFS. If you want to use 6-, 7- or 8-way RAIDZ2, go for it.

A boot SSD doesn't really need to use a bay... you could just stick it anywhere in the case it will fit.

With the boot SSD out of a bay, your 15 bays give you room for 14 data drives plus a spare bay for a replacement drive. This is handy.

With your drives, I'd probably use 2x 7-way RaidZ2.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
You also don't need to have the same number of disks between 2 vDevs. So this works just fine, though you may not get the top performance. ZFS favors the vDev with the most free space.

vDev1 - 6 x 4TB
vDev2 - 8 x 8TB

You can even mix RAID-Zx levels in a pool. It is best to keep vDevs similar though.
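Back-of-the-envelope capacity for that layout (raw TB, before ZFS overhead): a pool's capacity is simply the sum of its vdevs.

```python
def raidz2_usable_tb(n_disks, disk_tb):
    # RAIDZ2: two disks per vdev go to parity
    return (n_disks - 2) * disk_tb

# vDev1 (6x4TB) + vDev2 (8x8TB):
print(raidz2_usable_tb(6, 4) + raidz2_usable_tb(8, 8))  # 16 + 48 = 64 TB raw
```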
 