I am adding 2 new 3TB disks to my system.
Given this current layout, what would people recommend?
Data = 2x 500 GB Disks ("RAID1")
vol1 = 4x 1TB disks ("RAID10")
root@nas:~ # zpool status -v
  pool: data
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        data                                            ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/feadb553-61d8-11e7-b2f1-000c29ac4f63  ONLINE       0     0     0
            gptid/fff86fde-61d8-11e7-b2f1-000c29ac4f63  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da0p2       ONLINE       0     0     0

errors: No known data errors

  pool: vol1
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        vol1                                            ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/b8952a27-5e0e-11e7-8f85-000c29ac4f63  ONLINE       0     0     0
            gptid/b9a99d08-5e0e-11e7-8f85-000c29ac4f63  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/bab21417-5e0e-11e7-8f85-000c29ac4f63  ONLINE       0     0     0
            gptid/bbca52a6-5e0e-11e7-8f85-000c29ac4f63  ONLINE       0     0     0
        logs
          gptid/bc90cde1-5e0e-11e7-8f85-000c29ac4f63    ONLINE       0     0     0
        cache
          gptid/bc3bf5b1-5e0e-11e7-8f85-000c29ac4f63    ONLINE       0     0     0

errors: No known data errors
Add the new 3TB disks as a new mirror vdev of vol1, making it a 3 vdev stripe?
Add the new 3TB disks to "data" and turn the "RAID1" into a "RAID10"? (Note the existing disks in this mirror are only 500GB.)
Add the new 3TB disks as a new standalone mirrored zpool, "RAID1" style? I believe this is the worst option...?
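For reference, the three options would look roughly like this at the command line (da6/da7 are placeholder device names, not your actual disks; on FreeNAS you would normally do this through the GUI so it partitions the disks and uses gptids for you):

```shell
# Option 1: add the 3TB pair as a third mirror vdev to vol1
zpool add vol1 mirror /dev/da6 /dev/da7

# Option 2: add the 3TB pair as a second mirror vdev to data ("RAID1" -> "RAID10")
zpool add data mirror /dev/da6 /dev/da7

# Option 3: create a brand-new standalone mirrored pool
zpool create vol2 mirror /dev/da6 /dev/da7
```

Worth keeping in mind: on FreeNAS of this era, adding a mirror vdev to a pool is effectively permanent, so whichever pool gets the new vdev is a one-way decision short of destroying and rebuilding the pool.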
Does it even matter? I was leaning toward adding them as a third mirror vdev to "vol1"; the upgrade path there involves four new 3TB disks later to make the vdevs match. The 500GB disks are my oldest, so if I instead convert that single "RAID1" into a "RAID10", it gives me an upgrade path: swap out the two 500GB disks for new 3TB disks...
I'm sure I'm overthinking this; there are just so many options that I can't decide which might be best, if any...
I guess one benefit of adding the new disks to vol1 is that it's the volume with my ZIL and L2ARC cache drives... maybe that is the answer: grow vol1 into a three-vdev "RAID10" to take advantage of the log/cache disks in it?
Should I break down the 500GB "data" volume and add those disks to "vol1" as well? I'm a bit confused about how differently sized disks are used in this kind of setup. It's not like a real RAID10, where the data is striped across all disks; in a traditional RAID10 I would be limited to 500GB on each of the six disks, but I don't think that's the case with ZFS... What happens then if I lose one 500GB and one 1TB disk from this pool? Would the data survive or not?
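As a sanity check on the capacity question: ZFS stripes across vdevs proportionally to their free space rather than clamping every disk to the smallest member, so each mirror vdev contributes its own usable size. A quick back-of-the-envelope for the hypothetical six-disk pool (nominal sizes, ignoring ZFS overhead):

```shell
# Usable capacity: ZFS pool of mirror vdevs vs. a traditional RAID10
# that clamps every disk to the smallest member. Sizes in GB, nominal.
mirrors="500 1000 1000"   # one 500GB mirror + two 1TB mirrors

zfs_usable=0
smallest=1000000
for m in $mirrors; do
  zfs_usable=$((zfs_usable + m))          # each mirror adds its own size
  if [ "$m" -lt "$smallest" ]; then smallest=$m; fi
done

raid10_usable=$((smallest * 3))           # classic RAID10: smallest disk x pairs

echo "ZFS mirrors:       ${zfs_usable} GB usable"   # 2500 -> ~2.5TB
echo "Traditional RAID10: ${raid10_usable} GB usable" # 1500 -> ~1.5TB

# Failure tolerance: the pool survives any combination of failures that
# leaves every mirror vdev with at least one healthy disk. Losing one
# 500GB disk and one 1TB disk from *different* mirrors is survivable;
# losing both disks of the *same* mirror loses the entire pool.
```

So the mixed-size pool would give roughly 2.5TB usable rather than the 1.5TB a traditional RAID10 would, and whether a two-disk failure is survivable depends entirely on whether the two disks share a mirror.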
edit - It's probably worth noting that the new 3TB disks are 7200 RPM while the four existing 1TB disks are all only 5400 RPM, though I'm not sure I care if the 3TB disks effectively perform like 5400 RPM drives once they're in the same pool. Also, the reason I went with mirrors over RAIDZ2 is resilver time, plus this blog post: http://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/
Thanks!
Mark