Consensus on benefit of spare vs parity++

Status
Not open for further replies.

STREBLO

Patron
Joined
Oct 23, 2015
Messages
245
I was thinking about whether it makes sense to buy another drive just to have as a hot spare in my pool, so that replacing drives is faster and I have a drive completely burned in and ready when a failure happens. But if I'm going to buy another drive anyway, I wondered whether it might make more sense to add it as another parity drive instead. I'm currently running (or about to be running) 6 disks in RAIDZ2, so I could add another drive and make it Z3 — but then the question is what happens when one of those dies: should I pick up another spare drive anyway?

I was wondering what the consensus is on having spare drives versus adding parity drives, and what everyone else has found to work best...
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
A spare drive makes sense if you don't have physical access, so you can replace a failed drive easily and remotely.

Why not run a 7-disk RAIDZ2?
 

STREBLO

Patron
Joined
Oct 23, 2015
Messages
245
A spare drive makes sense if you don't have physical access, so you can replace a failed drive easily and remotely.

Why not run a 7-disk RAIDZ2?
Because that won't add any more fault tolerance. I'm thinking about what to do when a drive fails; I don't actually need any more space.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
Because that won't add any more fault tolerance. I'm thinking about what to do when a drive fails; I don't actually need any more space.
To be fair, the types of events that are likely to take down a RAIDZ2 pool are likely to take down any system. Think about it: three hard drives going kaput simultaneously. You've either incurred the wrath of the furies or have some serious hardware issues going on. The point of having hot spares is if the system is sitting in a data center, or if you can't have your server sit in a degraded state while waiting for a new drive to ship. If you're that concerned with redundancy, then you should get 5 more drives and set up a second FreeNAS server as a replication target. :D
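The "three drives going kaput simultaneously" point can be made concrete with a toy binomial model. This is only a sketch: it assumes independent failures and an invented 1% per-drive failure probability over some window — neither number comes from the thread.

```python
from math import comb

def prob_at_least(k, n, p):
    """Probability that at least k of n independent drives fail
    within the same window, each with failure probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: 6 drives, 1% chance each of failing
# during a resilver-length window (assumption, not measured data).
p_z2_loss = prob_at_least(3, 6, 0.01)   # 3 concurrent failures kill RAIDZ2
p_z3_loss = prob_at_least(4, 6, 0.01)   # 4 concurrent failures kill RAIDZ3
print(f"{p_z2_loss:.2e}  {p_z3_loss:.2e}")  # → 1.96e-05  1.48e-07
```

Under these toy assumptions the extra parity drive buys roughly two orders of magnitude, which is why "serious hardware issues" (correlated failures) dominate in practice rather than independent bad luck.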
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
To me, the only reason to use a spare is if you have multiple vdevs in a single pool. Like this:
Code:
[root@freenas1] ~# zpool status backup-tank
  pool: backup-tank
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        backup-tank                                     ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/47f4175e-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/48d8d60b-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/49bd46ca-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/4ab3e647-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/4b9e6364-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/4c862ac8-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/4d72ebb5-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/4e5c9527-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/4f4a31ea-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/5031936b-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/512c0226-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/521d5ff4-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/5303de29-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/53f91f2c-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/54daed66-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/55ded62d-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/56e46d23-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/57d64d4c-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/58cb455a-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/59aa77d7-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
          raidz2-2                                      ONLINE       0     0     0
            gptid/5aa26b53-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/5b7a49d6-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/5cc27257-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/5dfda90e-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/5f4e8c42-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
            gptid/60301043-816a-11e5-a89a-002590fbfb40  ONLINE       0     0     0
        spares
          gptid/854c08bc-8a11-11e5-969d-002590fbfb40    AVAIL  

errors: No known data errors
[root@freenas1] ~# 
 

STREBLO

Patron
Joined
Oct 23, 2015
Messages
245
To be fair, the types of events that are likely to take down a RAIDZ2 pool are likely to take down any system. Think about it: three hard drives going kaput simultaneously. You've either incurred the wrath of the furies or have some serious hardware issues going on. The point of having hot spares is if the system is sitting in a data center, or if you can't have your server sit in a degraded state while waiting for a new drive to ship. If you're that concerned with redundancy, then you should get 5 more drives and set up a second FreeNAS server as a replication target. :D
I guess having a hot spare makes more sense then, since I can swap in a replacement right away and only need to buy one more drive — as opposed to running Z3 and then having to buy another spare for when one fails.


To me, the only reason to use a spare is if you have multiple vdevs in a single pool. Like this:
Code:
[zpool status output quoted above]
Why would that setup benefit from a spare any differently than a single-vdev pool?
 
Last edited:

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Because if any drive in any of the 3 RAIDZ2 vdevs fails, the spare will be used as a replacement. In your parity++ scenario, by contrast, I would need 2 additional drives beyond that one, so that each of the 3 vdevs in the pool could be RAIDZ3 (vs. RAIDZ2).
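The drive-count arithmetic here can be checked against the vdev widths in the zpool status quoted above (10, 10, and 6 disks). A minimal sketch — the vdev list is read off that output, nothing else is assumed:

```python
# Vdev widths from the backup-tank zpool status above: three RAIDZ2 vdevs.
vdevs = [10, 10, 6]

shared_spare = sum(vdevs) + 1             # one hot spare covers all three vdevs
extra_parity = sum(v + 1 for v in vdevs)  # widening each RAIDZ2 to RAIDZ3

print(shared_spare, extra_parity)  # → 27 29
```

So the shared spare costs 1 extra drive total, while parity++ costs 3 (one per vdev) — i.e. 2 more drives than the spare approach, and the gap grows with every additional vdev.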
 

random003

Dabbler
Joined
Sep 5, 2015
Messages
15
I believe Z3 is better for a single vdev. When a drive fails in Z3, the pool is effectively a Z2 instantly. When a drive fails in Z2 + spare, the pool is effectively a Z1 until the spare finishes resilvering.
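That comparison can be written as a simple counting model of remaining fault tolerance. The `tolerance` helper below is illustrative only — it is not a ZFS API, just parity minus failures:

```python
def tolerance(parity, failed, spare_resilvered=False):
    """Further drive failures the vdev can absorb: parity disks minus
    current failures, plus one once a hot spare has fully resilvered."""
    effective = parity - failed + (1 if spare_resilvered else 0)
    return max(effective, 0)

print(tolerance(parity=3, failed=1))                         # RAIDZ3, one dead: 2
print(tolerance(parity=2, failed=1))                         # RAIDZ2+spare, mid-resilver: 1
print(tolerance(parity=2, failed=1, spare_resilvered=True))  # after resilver: back to 2
```

Both layouts end up able to absorb two more failures; the difference is the resilver window, during which Z2 + spare is one failure weaker than Z3.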
 

STREBLO

Patron
Joined
Oct 23, 2015
Messages
245
Yeah, I had considered that, but in that case won't I need to get another drive to have as a spare? I wonder if I would be better off running the six disks I currently have in Z3... What is the optimal number of drives for RAIDZ3? I had heard 6 was a good number for Z2; would that mean 7 is optimal for Z3?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I wonder if I would be better off running the six disks I currently have in Z3... What is the optimal number of drives for RAIDZ3? I had heard 6 was a good number for Z2; would that mean 7 is optimal for Z3?
That guideline no longer applies. You can use whichever width makes the most sense for your situation.

If you have a chassis that holds 6 disks, you should use all 6 disks (no spare — just include the disks in the pool). Whether you use Z2 or Z3 is up to your needs, tolerance for failure, and backups.
 

STREBLO

Patron
Joined
Oct 23, 2015
Messages
245
That guideline no longer applies. You can use whichever width makes the most sense for your situation.

If you have a chassis that holds 6 disks, you should use all 6 disks (no spare — just include the disks in the pool). Whether you use Z2 or Z3 is up to your needs, tolerance for failure, and backups.
Well, I have room in my chassis for up to 8, but I don't actually need that much space. I was just thinking that at some point a disk is going to fail, and if I'm going to have to buy a replacement anyway, maybe it makes more sense to get it now, burn it in, and have it ready.

So is there really no optimal configuration now? What were the old guidelines based on?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
So is there really no optimal configuration now? What were the old guidelines based on?
The rule came from dividing a fixed-size block evenly among the data drives, which works out cleanly with 2^n data disks plus p parity disks (2^n + p drives total).

Compression removes the fixed-size part, which means the rule no longer makes sense.
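The old guideline can be spelled out as a small sketch: for p parity disks, the "optimal" vdev widths were 2^n + p. It reproduces the numbers mentioned earlier in the thread (6 for Z2, 7 for Z3):

```python
def optimal_widths(parity, max_drives=16):
    """Vdev widths satisfying the old 2^n + p guideline, up to max_drives."""
    widths = []
    n = 1
    while 2**n + parity <= max_drives:
        widths.append(2**n + parity)
        n += 1
    return widths

print(optimal_widths(2))  # RAIDZ2: [4, 6, 10]  (2^1+2, 2^2+2, 2^3+2)
print(optimal_widths(3))  # RAIDZ3: [5, 7, 11]
```

With compression enabled, record sizes on disk are variable, so there is no fixed block to divide evenly — which is exactly why the rule was retired.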
 