Upgrading and Replacing Hard Drives With Different Speeds

Digitaldreams

Explorer
Joined
Mar 7, 2017
Messages
80
Currently I have (4) WD Red 3TB hard drives which run at 5400 RPM that I will be replacing with 8TB WD Red Pro's which run at 7200 RPM. Can I replace these drives gradually (one at a time) if the new ones spin faster? They are set up in a RAIDZ1 configuration.

Updated with info:
My hardware:

  • FreeNAS version 11.2 U1
  • Supermicro - MBD-X11SSM-F-O Micro ATX LGA1151 Motherboard
  • Intel - Xeon E3-1230 V5 3.4 GHz Quad-Core Processor
  • Crucial - 16 GB (1 x 16 GB) DDR4-2133 ECC Memory
  • Fractal Design - Node 804 MicroATX Mid Tower Case
  • EVGA - 500 W 80+ Bronze Certified ATX Power Supply
  • (4) 3TB WD Red hard drives in RAIDZ1
  • No hard disk controllers
  • Onboard NIC
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080

pschatz100

Guru
Joined
Mar 30, 2014
Messages
1,184
As was pointed out, RAIDz1 is not a good idea when the disks get as large as 8 TB. If you ever encounter a problem and have to replace a disk, there is a significant chance of encountering another error during resilvering - and with no redundancy during resilvering, your data could be at risk.

But to answer your original question, it would work OK to mix the drives. Keep an eye on your disk temps with the faster disks as they may run hotter than what you have now.
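The resilver risk described above can be put in rough numbers. Here is a back-of-envelope sketch in Python, assuming the commonly quoted consumer-drive spec of one URE per 10^14 bits and statistically independent errors (both idealizations; later posts in this thread push back on reading the spec this way):

```python
# Back-of-envelope odds of hitting at least one URE while resilvering.
# Assumes the often-quoted spec of 1 unrecoverable read error per 1e14 bits
# and independent errors -- both idealized assumptions.
import math

def p_ure_during_resilver(bytes_read, ure_rate_per_bit=1e-14):
    """Probability of at least one URE when reading `bytes_read` bytes."""
    bits = bytes_read * 8
    return 1.0 - math.exp(-ure_rate_per_bit * bits)  # Poisson approximation

# RAIDz1 of 4 x 8 TB: rebuilding one disk reads the 3 survivors (~24 TB).
p = p_ure_during_resilver(3 * 8e12)
print(f"{p:.0%}")  # roughly 85% under these (pessimistic) assumptions
```

The same arithmetic with a 10^15-bit spec (common on enterprise-class drives) drops the figure to roughly 17%, which is why the URE rating you assume dominates the conclusion.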
 

Chris Moore

But to answer your original question, it would work OK to mix the drives.
I was hoping to get some response and open a dialog. I guess I scared @Digitaldreams out of the forum. Sorry.
Keep an eye on your disk temps with the faster disks as they may run hotter than what you have now.
Absolutely, they will run hotter. I have systems at work where the only difference is that one chassis has WD Red drives and the other has WD Red Pro drives, and the RPM difference amounts to as much as a 10 °C difference in temperature. If your cooling isn't up to it, that difference can be even more.
 

Digitaldreams

Sorry guys! It's been a very, very busy week. Thank you for the replies.


So after reading the article, the idea seems to be that RAID5 will not be a good choice with my configuration consisting of (4) 8TB drives. With the statement, "So RAID 6 will give you no more protection than RAID 5 does now, but you'll pay more anyway for extra disk capacity and slower write performance.", I get the feeling that RAID6 isn't a great option either. I'm a bit at a loss on this now.

What version of FreeNAS are you using?
  • Supermicro - MBD-X11SSM-F-O Micro ATX LGA1151 Motherboard
  • Intel - Xeon E3-1230 V5 3.4 GHz Quad-Core Processor
  • Crucial - 16 GB (1 x 16 GB) DDR4-2133 Memory
  • Fractal Design - Node 804 MicroATX Mid Tower Case
  • EVGA - 500 W 80+ Bronze Certified ATX Power Supply

FreeNAS version 11.2 U1


I searched the forum and found info on resilvering to bigger drives and that I can do that with no issue. Once all of the drives are replaced, the volume should expand to take advantage of the new space. I did have trouble finding info on resilvering with different speed drives. Maybe I just lack skills in searching. :(

So what RAID config should I consider and can I change it without losing the data I currently have?
 

Digitaldreams

As was pointed out, RAIDz1 is not a good idea when the disks get as large as 8 TB. If you ever encounter a problem and have to replace a disk, there is a significant chance of encountering another error during resilvering - and with no redundancy during resilvering, your data could be at risk.

But to answer your original question, it would work OK to mix the drives. Keep an eye on your disk temps with the faster disks as they may run hotter than what you have now.

Thank you. I will keep an eye on temps.
 

Chris Moore

So after reading the article, the idea seems to be that RAID5 will not be a good choice with my configuration consisting (4) 8TB drives. With the statement, "So RAID 6 will give you no more protection than RAID 5 does now, but you'll pay more anyway for extra disk capacity and slower write performance.", I get the feeling that RAID6 isn't a great option either. I'm a bit at a loss on this now.
The article is talking about hardware RAID and it isn't exactly the same in ZFS, but the point is that RAIDz2 is the thing to do with drives larger than 1TB.
Please review this documentation:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/
I did have trouble finding info on resilvering with different speed drives. Maybe I just lack skills in searching.
The operating system and ZFS act as a buffer between the drives so that drives running at different mechanical speed should not cause an issue for you in FreeNAS. At the same time, the performance of the pool is going to be limited by the slowest drive in the pool.
 

Chris Moore

PS. You appear to have not read the forum rules, because you didn't provide the information suggested in the rules.
 

Digitaldreams

PS. You appear to have not read the forum rules, because you didn't provide the information suggested in the rules.
In my reply to your first post, I supplied my hardware configuration. I also just updated my original post with the info. I didn't mean to cause any confusion. Thank you.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
This seems to get quoted quite often; however, most of the statistical assumptions listed in the article are inaccurate. Also, it doesn't take into account that the URE rate refers to unrecoverable bits on disk, not unrecoverable SECTORS. A sector on disk will contain both the data and the CRC for that data. If there's just a single failed bit in the sector, the data can be reconstructed from the CRC. Alternatively, if the bad bit is in the CRC, the data is untouched and a new CRC is created. Modern drives will perform multiple steps to try to verify whether that bit has gone bad or not. But the vast majority of the time, it will recover from the single flipped bit, re-write the sector, find that a subsequent read of that sector matches the CRC, and move on with life. This is common; as long as all the data is read with reasonable frequency -- like with scheduled scrubs -- you'll rarely if ever see the UREs creep into your data from these single-bit-flip events, and usually the sector can be re-used because the bit flipped for reasons outside of the drive's control (i.e. solar flares or other causes of high-energy particles).

The idea that raidz1 is dead is fundamentally flawed because it assumes that the parity is the backup of the data, and if the array were to fail the data would be lost. Anyone who relies on any level of raidz, 1 2 or 3, as their backup (only copy of their data) has a primary design error.

First and foremost, do the right thing and have a backup of your data, then run whatever level of raidz that best fits your build/budget/comfort. ;)
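The single-bit recovery described above can be sketched with a toy checksum repair. Real drives use much stronger per-sector ECC (Reed-Solomon or LDPC), not CRC-32, so this is only an illustration of the principle that a stored check value lets one flipped bit be found and fixed by trial:

```python
# Toy illustration: a checksum stored alongside the data lets a single
# flipped bit be located and repaired by trying each candidate bit.
# Real drive firmware uses dedicated sector ECC, not CRC-32.
import zlib

def repair_single_bit_flip(corrupted: bytearray, stored_crc: int) -> bytes:
    """Flip each bit in turn until the data matches the stored CRC."""
    if zlib.crc32(corrupted) == stored_crc:
        return bytes(corrupted)            # nothing was wrong
    for i in range(len(corrupted) * 8):
        corrupted[i // 8] ^= 1 << (i % 8)  # flip candidate bit
        if zlib.crc32(corrupted) == stored_crc:
            return bytes(corrupted)        # found and fixed the bad bit
        corrupted[i // 8] ^= 1 << (i % 8)  # flip it back, keep searching
    raise ValueError("more than one bit in error")

sector = b"important user data"
crc = zlib.crc32(sector)                   # stored on disk with the data
damaged = bytearray(sector)
damaged[4] ^= 0b00100000                   # a single flipped bit
print(repair_single_bit_flip(damaged, crc))  # b'important user data'
```

CRC-32 detects all one- and two-bit errors at this message length, so the search cannot stop on a false match here; a real ECC code corrects the bit directly instead of searching.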
 

Digitaldreams

This seems to get quoted quite often; however, most of the statistical assumptions listed in the article are inaccurate. Also, it doesn't take into account that the URE rate refers to unrecoverable bits on disk, not unrecoverable SECTORS. A sector on disk will contain both the data and the CRC for that data. If there's just a single failed bit in the sector, the data can be reconstructed from the CRC. Alternatively, if the bad bit is in the CRC, the data is untouched and a new CRC is created. Modern drives will perform multiple steps to try to verify whether that bit has gone bad or not. But the vast majority of the time, it will recover from the single flipped bit, re-write the sector, find that a subsequent read of that sector matches the CRC, and move on with life. This is common; as long as all the data is read with reasonable frequency -- like with scheduled scrubs -- you'll rarely if ever see the UREs creep into your data from these single-bit-flip events, and usually the sector can be re-used because the bit flipped for reasons outside of the drive's control (i.e. solar flares or other causes of high-energy particles).

The idea that raidz1 is dead is fundamentally flawed because it assumes that the parity is the backup of the data, and if the array were to fail the data would be lost. Anyone who relies on any level of raidz, 1 2 or 3, as their backup (only copy of their data) has a primary design error.

First and foremost, do the right thing and have a backup of your data, then run whatever level of raidz that best fits your build/budget/comfort. ;)

Thank you for the post. It gives me a good sense on how to view and approach this. Explaining the difference in having backups vs just relying on RAID makes a lot of sense. Chris Moore posted some great links that I'm going through as well.

So is the biggest risk with RAIDz1 in how it allows for only one disk failure versus two with RAIDz2, and that if one disk does fail, your data and system are in a crippled (and non-functioning) state until the data rebuilds itself onto the replacement hard disk?
 

Chris Moore

and that if one disk does fail, your data and system are in a crippled (and non-functioning) state until the data rebuilds itself onto the replacement hard disk?
No, data remains available during a rebuild. The problem with having only 1 disk of redundancy in RAIDz1 is, if you have a disk failure, and another disk fails while you are recovering from the first, you can lose the storage pool.
When you have RAIDz2, you have 2 disks of redundancy, so if one disk fails and you have a second disk fail while the first is being rebuilt, you can still survive it and complete the rebuild. I have done a resilver of a RAIDz2 pool where I was replacing two disks at the same time, but that is not recommended.
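The redundancy rule described above can be stated in one line. A minimal sketch (this ignores URE-induced read errors during the rebuild, which are a separate failure mode discussed earlier in the thread):

```python
# Minimal sketch of the redundancy rule: a raidz vdev survives as long as
# the number of simultaneously failed disks does not exceed its parity
# level (raidz1 = 1, raidz2 = 2, raidz3 = 3).
def pool_survives(parity_level: int, failed_disks: int) -> bool:
    return failed_disks <= parity_level

# RAIDz1: one failure is fine; a second during the resilver loses the pool.
print(pool_survives(1, 1))  # True
print(pool_survives(1, 2))  # False -- pool lost
# RAIDz2: can absorb a second failure mid-rebuild.
print(pool_survives(2, 2))  # True
```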
 

Digitaldreams

No, data remains available during a rebuild. The problem with having only 1 disk of redundancy in RAIDz1 is, if you have a disk failure, and another disk fails while you are recovering from the first, you can lose the storage pool.
When you have RAIDz2, you have 2 disks of redundancy, so if one disk fails and you have a second disk fail while the first is being rebuilt, you can still survive it and complete the rebuild. I have done a resilver of a RAIDz2 pool where I was replacing two disks at the same time, but that is not recommended.

Okay. Is there a higher chance of another disk failing during the rebuild process, or is it just more a question of having the extra redundancy?
 

Chris Moore

Okay. Is there a higher chance of another disk failing during the rebuild process, or is it just more a question of having the extra redundancy?
The point of the article is, if I recall correctly, that with the size of modern hard drives it is more likely you will have a second failure due to URE rates. That is why the article suggests double parity (RAIDz2) instead of single parity (RAIDz1), and some people are even going with triple parity (RAIDz3) now, but it all comes down to your tolerance for risk.
 

Digitaldreams

The point of the article is, if I recall correctly, that with the size of modern hard drives it is more likely you will have a second failure due to URE rates. That is why the article suggests double parity (RAIDz2) instead of single parity (RAIDz1), and some people are even going with triple parity (RAIDz3) now, but it all comes down to your tolerance for risk.
Okay, I'm understanding it more. I've never gone this in-depth with RAID and storage so these terms are new to me. Thanks for all the assistance.
 

Chris Moore

Okay, I'm understanding it more. I've never gone this in-depth with RAID and storage so these terms are new to me. Thanks for all the assistance.
You might want to take a look at these resources. It might help your understanding:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/
 

Digitaldreams

You might want to take a look at these resources. It might help your understanding:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/

I have those links open in tabs right next to this one right now. lol I skimmed the first one already and plan on reading more in depth when I get a few minutes of free time. Absolutely great resource material.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
Just to add... I run a mixed pool. I tend to buy whatever is cheap when I need it, with a bias in favor of HGST (now WD). I run my primary pool as a striped mirror, so the I/O round-robins on the devices, and the disks within a vdev can actually perform different read requests at the same time. At the moment, I have a 4 TB vdev and a 3 TB vdev that used to be a 2 TB vdev. Much of the data was written before the pool expansion and hasn't been touched, so I suspect it tends to slightly favor the faster 7200 RPM 4 TB disks on reads, but I do have a minor write impact from one slower disk. Overall it's generally faster than any requirement I have. I can confirm @Chris Moore's assertion that the 7200 RPM drives run nearly 10 °C hotter. Mine are parked in front of a pair of 120mm fans, with lots of airflow...

The thing to remember when mixing drives in a RAIDz pool is that a write must complete on all devices within a vdev before returning as complete. A read only has to occur on two of the three before the CPU can reconstruct the pending/missing stripe. The net effect is that pairing two 7200 RPM drives with a 5400 RPM drive will tend to read at nearly the same speed as a three-disk 7200 RPM pool, but will write at the speed of the single 5400 RPM drive.
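The mixed-RPM behavior rvassar describes reduces to a simple model: writes are gated by the slowest member of the vdev. A sketch (the MB/s throughput figures are made up for illustration, not measured values):

```python
# Rough model of write throughput in a vdev with mixed-speed members.
# A write must land on every member before it completes, so the slowest
# disk sets the pace. Throughput numbers below are assumed, not measured.
def vdev_write_throughput(member_mb_s):
    """Sequential-write ceiling of a vdev: gated by its slowest member."""
    return min(member_mb_s)

# Two 7200 RPM disks (~180 MB/s each, assumed) + one 5400 RPM (~120 MB/s):
mixed = [180, 180, 120]
print(vdev_write_throughput(mixed))  # 120 -- gated by the 5400 RPM drive
```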
 

Digitaldreams

So I was able to get a good deal on the WD drives and ended up purchasing (4) WD 8TB Red Pros. So this brings me to some questions on the best way to approach this. My goal is to set up the 4 drives in a RAIDZ2 array. I have the spare SATA ports on my motherboard. I also want to eventually add another 2 drives. I understand that I cannot add drives to a VDEV once it's created so would I later just create a new zpool and add it to the VDEV later? I also read on another thread that I could use any extra disks I have in order to create the larger VDEV from the get-go. But wouldn't this limit my 8TB drives to the smallest size in the VDEV? This option also doesn't really work for me as I only have 8 total SATA ports to work with. 4 of them are currently being taken up by the existing 3TB drives. So how should I approach this?

Are these the basic steps to follow?
  1. Install drives
  2. Burn-in
  3. Create new RAIDZ2 VDEV
  4. Copy data over

I read the official documentation but I was still a bit confused..."The recommended method for expanding the size of a ZFS pool is to pre-plan the number of disks in a vdev and to stripe additional vdevs using Pools as additional capacity is needed."
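For planning purposes, the usable space of the options above can be estimated with the usual rule of thumb: raidz2 capacity ≈ (number of disks − 2) × size of the smallest disk, before ZFS metadata overhead and the recommended free-space headroom. A quick sketch (figures are raw TB, not TiB):

```python
# Rule-of-thumb usable capacity for a raidz2 vdev: two disks' worth of
# space goes to parity, and mixed sizes are truncated to the smallest
# member. Ignores ZFS overhead and recommended fill limits.
def raidz2_usable_tb(n_disks, smallest_tb):
    return (n_disks - 2) * smallest_tb

print(raidz2_usable_tb(4, 8))  # 16 -- four 8 TB drives
print(raidz2_usable_tb(6, 8))  # 32 -- six 8 TB drives
print(raidz2_usable_tb(6, 3))  # 12 -- mixing in 3 TB disks wastes the 8 TB space
```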
 

Chris Moore

My goal is to set up the 4 drives in a RAIDZ2 array. I have the spare SATA ports on my motherboard. I also want to eventually add another 2 drives.
The ideal thing to do is wait until you have all six drives, then create the RAIDz2 vdev with all six drives in it. The only thing you can do with two drives is create a mirror vdev. Although you can add a mirror vdev to a pool that already has a RAIDz2 vdev, mixing a pool like that is strongly discouraged; I know that the GUI will warn you about it and it might not even let you. I have not tried to force that.
The reason for NOT doing it is this: it mixes redundancy levels. The RAIDz2 vdev can survive two disk failures, while a two-way mirror can only survive one. The thing you could do is create your new pool as two mirror vdevs, using the drives you have, then add another mirror vdev when you get the additional drives. The system will allow that with no errors, as all vdevs would have the same redundancy level, and you would end up having three vdevs, which would give you higher random IO potential.
I understand that I cannot add drives to a VDEV once it's created so would I later just create a new zpool and add it to the VDEV later?
This sentence indicates some misunderstandings. I would suggest you go to these resources and review them again:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/

My understanding of your situation is this. You currently have a storage pool (zpool) that is made up of a single vdev of 4 drives in RAIDz1.
You want to expand your storage capacity and move to RAIDz2 for greater protection against single disk failure.
The steps involved would be:
  1. Connect all the new drives to the system and do burn-in testing on them.
  2. Create a new storage pool using the new drives.
    https://www.ixsystems.com/documentation/freenas/11.2/storage.html#creating-pools
  3. Copy all data from the old pool to the new pool.
  4. After all the data is copied, in the GUI, there is a way to "Detach" the old pool
    https://www.ixsystems.com/documentation/freenas/11.2/storage.html#export-disconnect-a-pool
  5. Physically remove the old drives.
 