Thoughts on upgrade strategy going from 8TB to 14TB drives

Evan Richardson

Explorer
Joined
Dec 11, 2015
Messages
76
I've got a box with 18x 8TB drives in it currently, with room for 6 additional drives, and I'm planning on upgrading storage soon now that 14TB drives are starting to go on sale. I'm trying to figure out how best to increase the storage in this pool with the least amount of risk (although I'm open to some).

Options:

1.) Replace drives one by one with larger drives, waiting for the resilver after each swap

2.) Buy another chassis, hook it up as a JBOD, and add new drives to that.

I'd prefer to do option 1, but option 2 is safer. Regarding option 1: my pool is currently 3x RAIDZ2 vdevs. I've never tried this before, but I guess I could technically replace 3 drives at once (1 from each vdev)? Since each vdev can tolerate 2 drive failures, as long as there are no technical issues this would mean I'd only have to do six rounds of drive swaps...not too bad, although I anticipate it taking 10-12 hours per set (the pool is currently around 65% full). I've worked with storage for quite a while, so I'm no stranger to ZFS, but given the size of the drives I'm just looking for thoughts on which option I should go with. I'm leaning towards option 1 to save space in my rack and avoid the additional power usage (plus the extra hassle of an external SAS cable).
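For what it's worth, the in-place swap in option 1 might look something like the sketch below. The pool name (`tank`) and device names are hypothetical placeholders; the real names come from `zpool status` on your system.

```shell
# Hypothetical pool/device names -- substitute your own from `zpool status`.
# Make sure the pool can grow once every drive in a vdev is larger:
zpool set autoexpand=on tank

# Replace one drive per RAIDZ2 vdev (old device -> new 14TB device),
# so every vdev keeps its full double-parity redundancy while resilvering:
zpool replace tank da0  da18
zpool replace tank da6  da19
zpool replace tank da12 da20

# Watch resilver progress and wait for it to finish before the next set:
zpool status tank
```

With `autoexpand=on`, each vdev expands automatically once all six of its members are 14TB.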

Also, since I have 6 slots free, I guess I could technically add 6x 14TB drives, use them as hot spares, and then pull 3 old drives at a time?

Thanks!
 

cjc1103

Dabbler
Joined
May 19, 2017
Messages
10
Best practice is to use mirrored drives, then stripe the mirrored pairs (the equivalent of RAID10). This configuration is a lot more efficient to rebuild: after replacing a failed drive, the server only has to resilver one mirrored pair, which is quicker and less likely to fail during the rebuild. The array can also be accessed normally during the rebuild, using the good drive in the faulty mirrored pair, with no parity calculations necessary, so there's a lot less stress on the drives. You do have an external backup just in case, right?

This configuration also lets you grow the pool by just adding another mirrored pair to the stripe. RAIDZ1 is not advisable with >4TB drives, and even RAIDZ2 gets impractical with >8TB drives. Since you already have a RAIDZ configuration, you would have to migrate the data to a separate server with mirrored drive pairs.
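Growing a striped-mirror pool a pair at a time might look like this sketch (again, the pool name `tank` and device names are hypothetical, not taken from the OP's system):

```shell
# Add one more 2-way mirror vdev to an existing pool of striped mirrors.
# ZFS immediately starts striping new writes across it:
zpool add tank mirror da18 da19
```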
 

Evan Richardson

Explorer
Joined
Dec 11, 2015
Messages
76
man, RAID10 is such a huge penalty though; you basically need 2x the drives for the amount of storage you want. I had thought about that, but drives are so expensive that it hurts to give up 50% lol. I guess, however, that with almost double the space per drive, I could start with a smaller number of drives and work my way up in pairs... Thanks for the comment @cjc1103
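A quick back-of-the-envelope comparison of the two layouts (raw TB, ignoring ZFS metadata/padding overhead, and assuming the 18 drives really are three 6-wide RAIDZ2 vdevs as described):

```python
def raidz2_usable(vdevs, width, size_tb):
    """Each RAIDZ2 vdev yields roughly (width - 2) drives of usable space."""
    return vdevs * (width - 2) * size_tb

def mirror_usable(pairs, size_tb):
    """Each 2-way mirror pair yields one drive of usable space."""
    return pairs * size_tb

# Current layout: 3 x 6-wide RAIDZ2 of 8TB drives
print(raidz2_usable(3, 6, 8))    # -> 96
# Same layout after swapping in 14TB drives
print(raidz2_usable(3, 6, 14))   # -> 168
# Mirrors filling all 24 bays with 14TB drives
print(mirror_usable(12, 14))     # -> 168
```

Interestingly, under these assumptions 18x 14TB in RAIDZ2 and 24x 14TB in mirrors land on about the same usable capacity; the mirror layout just spends six more drives (and bays) to get there.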
 

cjc1103

Dabbler
Joined
May 19, 2017
Messages
10
RAID5 (RAIDZ1) and even RAID6 (RAIDZ2) are obsolete for large data arrays. You are very likely to hit an unrecoverable read error (URE) when rebuilding, because the server has to read the entire array and recalculate the lost data from parity. You are wasting your time, give it up already. Sure, RAID10 only gives you half the raw storage space, but better that than losing your data.

Also, cloud storage works great these days; try backing your server up to Amazon S3 or Backblaze. Sure it costs money, so it's up to you whether your data is worth it. Check out Backblaze's blog for the latest disk drive reliability data; some drive models have annualized failure rates as high as 2.7%. I've had good luck with HGST drives.

This article about RAID5 was written in 2009, 11 years ago:
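The spec-sheet version of that URE argument can be sketched as below. Assumptions to flag: a 6-wide RAIDZ2 vdev of 14TB drives at ~65% full (so a single-drive resilver reads roughly 5 drives' worth of allocated data), and the quoted datasheet rates of one URE per 1e14 bits (consumer) or 1e15 bits (enterprise). It is deliberately pessimistic: real drives often beat their spec, ZFS only resilvers allocated blocks, and RAIDZ2 can still recover from a URE while rebuilding a single drive.

```python
import math

def p_at_least_one_ure(bytes_read, ure_per_bit):
    """Probability of at least one URE while reading bytes_read bytes:
    1 - (1 - p)^n, computed stably with log1p/expm1."""
    bits = bytes_read * 8
    return -math.expm1(bits * math.log1p(-ure_per_bit))

# Single-drive rebuild in a 6-wide RAIDZ2 of 14TB drives at ~65% full:
bytes_read = 5 * 14e12 * 0.65
print(round(p_at_least_one_ure(bytes_read, 1e-14), 2))  # -> 0.97 (consumer spec)
print(round(p_at_least_one_ure(bytes_read, 1e-15), 2))  # -> 0.31 (enterprise spec)
```

The order-of-magnitude gap between consumer and enterprise URE specs is doing most of the work in this argument, which is why the drive model matters as much as the RAID level.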
 

cjc1103

Dabbler
Joined
May 19, 2017
Messages
10
As a footnote, ZFS RAIDZ is somewhat more reliable than conventional RAID5/6, because ZFS checksums all data and only resilvers allocated blocks rather than the whole raw array, but the argument still largely holds: the server has to read all of the surviving data in the vdev when rebuilding a drive.
 