Migrating To New Drives, Ironwolf HDD Pro? Other?

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
I want to migrate my drives sometime soon. Currently my system is running mirror pairs of x15 SAS drives. x2 of which are spares.
They are all used HGST HUS726040AL4210 & HITACHI HUS72604CLAR4000 drives, stamped with 2016 dates. So they are quite old and are making me nervous.

While they are cheap to replace, that has its downsides, the major one being density. I always want more space, and while I have a 30+ drive NAS (though I need to get a second HBA to make use of half of the bays), more drives just means more power usage, more heat, and less density overall. It feels stupid to have a ton of 4tb drives when I can just take a money hit and buy a handful of higher-density drives. It will suck to replace them when they die, but I'm hoping overall they will be more power efficient and reliable, as well as fast(er).

My question is figuring out what to replace them with. I assume anything SAS will be an insane price new.
So I will have to go with SATA (which I believe is fine on SAS ports, just not the other way around).

x15 (TOTAL) 4tb drives. (Currently it shows I am using 18.77TiB, and have 6.43TiB free).
2/15 are spare drives (4tb each).
And they are all mirrored.
So it's about 50tb raw. I'm already 75% full, so to factor that in I will need a bit more headroom.

The next part being which drives. I was considering IronWolf Pro?
The Ironwolf Pro 12tb drives are running $229 on Amazon. So to reach 50tb I'd need 5 of them. So I'm looking at $1145 before tax.
There are also these 14TB Seagate ironwolf drives. They are $250. The picture says Pro but not the title so I'm not sure. But it would come out to be $1,000 because I'd only need 4 of them for 50tb. The pictures are in French or something, so I'm not sure what is up with their listings on Amazon.

Edit: I see on Newegg currently I can get 16 TB Ironwolf Pro drives for $289.99. Expires in 5 days. $999.96 for 4 16tb drives
I'd basically have 2 drives mirrored, and then 2 spares to replace all my 15 drives. That's insane to think about.
Just will have to pray that when one fails, they will be down in price haha.

Edit 2: Or I can spend like $400 more and get x4 Seagate Exos 20TB drives for $363 a piece?
Weighing my options here.


Are these a solid option for drives?
Confirmation I'm doing my math (50tb = x15 4tb drives, mirrored) correctly would be nice.
And any input on my setup, I'm always open ears.
Thanks.
 
Last edited:

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
You can buy just 3 of the 16TB ones and create a 3-way mirror.
You won't have much headroom, but it gives the option to buy the other drive a few months later (then you can detach one drive from the 3-way mirror and create 2x 2-way mirrors).
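
In rough command-line terms, the stopgap and the later split would look something like this (a sketch only; da20–da23 are placeholder device names, it assumes you are adding the new drives to your existing pool as discussed later in the thread, and on TrueNAS you would normally do all of this from the GUI):

Code:
# Add the three 16 TB drives as a single 3-way mirror vdev
zpool add PrimaryPool mirror da20 da21 da22

# Months later, when the fourth 16 TB drive arrives: pull one disk out of the 3-way mirror...
zpool detach PrimaryPool da22

# ...and pair it with the new drive as a second 2-way mirror vdev
# (you may need to wipe the detached disk's old labels first)
zpool add PrimaryPool mirror da22 da23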

However, you might not find a better deal in a few months. Really, most of your options suck. You will have to spend hard money.

:frown:

Also, I didn't really mention the safety issues in doing so... Buying from a single seller and having single parity with drives this big.
 
Last edited:

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Any drive from NAS/enterprise lines which is not SMR should be good, so I'd take TB/$ as the primary criterion. (But from a reliable seller… The Amazon listing for the 14 TB Ironwolf indeed uses advertisement pictures in French. If the seller is from Québec or New Brunswick and ships "south of the border" all may be fine; otherwise, I'd say the listing is suspicious.)

However, with such large drives the good old 2-way mirror raises the same issue as RAID5/raidz1 did years ago: Should one drive fail, there's a lot of data at risk until resilver is complete. So, for data safety, you should consider 3-way mirrors at this point—which, of course, raises cost.

Fewer vdevs also means fewer IOPS, if performance matters.
If IOPS performance does not matter, you may consider raidz2 instead for capacity at a lower cost—which then requires creating a new pool and migrating everything in one go, rather than progressively replacing drives and removing vdevs as with your striped mirror pool.
 
Joined
Jul 3, 2015
Messages
926

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
Not sure how easy these are to get in the US but the Toshiba 18TB SAS and SATA Enterprise drives are still very good value in the UK atm
Looking like they're in the $350-$400 range in US. Not great unfortunately.
You can buy just 3 of the 16TB ones and create a 3-way mirror.
You won't have much headroom, but it gives the option to buy the other drive a few months later (then you can detach one drive from the 3-way mirror and create 2x 2-way mirrors).
I've been thinking about it and I'm glad you said it too.
When you say 3 way mirror, you mean raidz2 correct? I'd feel quite a bit safer than with my current mirror setup; it'd allow me to lose 2 drives in a vdev.

When I use the ZFS calculator, a single raidz2 pool doesn't hit my 50tb though. I am inputting it correctly, right?
That being said.. that 50tb accounted for my 2 spare drives. So I think it might be enough?
I'm a little confused, going from my mirror setup + the two spares to a raidz2, about how many drives I'm actually going to need.
[Screenshot of the ZFS capacity calculator]



Code:
# zpool list
NAME          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
PrimaryPool  25.4T  19.3T  6.07T        -         -    34%    76%  1.00x    ONLINE  /mnt


Code:
# zpool list -o name,size
NAME          SIZE
PrimaryPool  25.4T
boot-pool    460G


If I am under the correct assumption and did it correctly, x3 drives in a 16tb pool would give me 43.5TiB of usable storage.
So I should have double my current storage. This makes sense correct? I should be good to pull the trigger and just buy x3 16tb drives?

Then in the future, I can add another raidz2 vdev with 3 more x16 tb drives to bump it up to 84TiB usable.


Also on a side note, is it better to buy say 2 drives from 1 merchant, and 1 drive from another? Like 2 on Newegg, 1 on Amazon.
So if 1 drive dies, it lowers the probability of the other 2 dying at the same time, since they wouldn't all be from the same merchant/batch?
 
Last edited:

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
I'll have to read up on it, but how is the migration process done? Is it a COPY? or a MOVE?
As in, will the original data be retained on these current drives and simply copied over to the new pool? Or will it be deleted as it moves (or auto-deleted when complete)?

Ideally I'd like the data to remain on the original drives for a while.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Looking like they're in the $350-$400 range in US. Not great unfortunately.
The 16 TB drives are reportedly much cheaper.
When you say 3 way mirror, you mean raidz2 correct?
"3-way mirror" means just what it says: 3 identical drives, holding 3 copies of the same data. Available space = 1 drive (33% of total).

Raidz2 requires at least 4 drives.
Don't bother with calculators. A mirror vdev provides, as raw space, the space of its smallest drive. An M-wide raidzN vdev provides, again as raw space, (M-N)*(size of smallest drive). Add up the raw space of all vdevs to get the raw space of the pool. The usable space is about 80% of raw space for bulk storage (file storage), or 50% of raw space for block storage (zvol, VM, iSCSI).
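
For example, for a 5-wide raidz2 of 16 TB drives (a quick sketch in plain shell arithmetic, sizes in TB):

Code:
# raw = (M - N) * (size of smallest drive) = (5 - 2) * 16
echo $(( (5 - 2) * 16 ))     # 48 TB raw
echo "48 * 0.8" | bc         # 38.4 -> about 38 TB usable for bulk/file storage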

If I am under the correct assumption and did it correctly, x3 drives in a 16tb pool would give me 43.5TiB of usable storage.
So I should have double my current storage. This makes sense correct? I should be good to pull the trigger and just buy x3 16tb drives?
3*16 TB in a 3-way mirror is 16 TB of raw space (minus some overhead for ZFS metadata, which we neglect). About 13 TB of usable space for file storage.
Three vdevs would give 38 TB of usable space, with 9 *16 TB drives, and the IOPS of 3 drives.
The same usable space with raidz2 requires only 5 drives, but provides the IOPS of a single drive.

Mind that mirrors are flexible: You can add vdevs and replace drives in existing vdevs with larger drives, but also change the width of vdevs and—space permitting—remove vdevs and reduce the number of vdevs in the pool, which would come in handy to evolve from 6*(2-way) to 3*(3-way).
Raidz# is inflexible: You can add vdevs, replace drives, but not change the width or remove vdevs. If you go for 5-wide raidz2, the way forward is to add further 5-wide raidz2 vdevs.

I'll have to read up on it, but how is the migration process done? Is it a COPY? or a MOVE?
It is as you like. With ZFS, the easiest way is to use replication, which is a "copy".
Going to a raidz2 pool would indeed require replication from old_pool to new_pool.
With mirrors, evolution can be done within the existing pool.
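
A minimal sketch of that replication route (pool and snapshot names are placeholders, matching old_pool/new_pool above; TrueNAS also exposes this as a replication task in the GUI):

Code:
# Snapshot the old pool recursively...
zfs snapshot -r old_pool@migrate

# ...then copy every dataset, snapshot and property over to the new pool.
# The data stays untouched on the old drives until you destroy the old pool.
zfs send -R old_pool@migrate | zfs recv -F new_pool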
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
The 16 TB drives are reportedly much cheaper.
Yeah I will likely go with 16tb drives.
"3-way mirror" means just what it says: 3 identical drives, holding 3 copies of the same data. Available space = 1 drive (33% of total).
I did not know this was possible. I thought mirrors were only possible in pairs of 2. So basically it is the same setup I am running now, but with 1 extra drive in the vdev. So I would basically have a single vdev with 3 drives in it, all identical copies.

Raidz2 requires at least 4 drives.
Don't bother with calculators. A mirror vdev provides, as raw space, the space of its smallest drive. An M-wide raidzN vdev provides, again as raw space, (M-N)*(size of smallest drive). Add up the raw space of all vdevs to get the raw space of the pool. The usable space is about 80% of raw space for bulk storage (file storage), or 50% of raw space for block storage (zvol, VM, iSCSI).
Ah, I had it mixed up. Raidz1 is what I was thinking of I believe.

3*16 TB in a 3-way mirror is 16 TB of raw space (minus some overhead for ZFS metadata, which we neglect). About 13 TB of usable space for file storage.
Three vdevs would give 38 TB of usable space, with 9 *16 TB drives, and the IOPS of 3 drives.
The same usable space with raidz2 requires only 5 drives, but provides the IOPS of a single drive.
Did not know about that IOPS restriction. Interesting.

However, the way this is going to be set up matters a lot to me because, like I said, I'm currently close to 50tb raw and, as shown above, about 26TB usable (which I'm already running low on).
If I am understanding correctly, a 3 way mirror would give me 13TB of usable space. So, as you state, I would need 9 x16 TB drives. With the drives at $250 a piece, I'm already going from around $1200 to around $2250, which is insane. I get that it's added safety and assurance, that running a home server can be costly, and that this is more of an upfront cost, but I just was not looking to spend that much. I mean, I was hoping to get away with under $1400 for now. The less the better right now though.

Based on what I am hearing though, mirrors are flexible, meaning I can add an extra drive to each mirror in the future.
Mind you, my goal here is not only improving the density, but also ideally lowering the number of drives in my system and conserving a bit of electricity. I assume newer drives would naturally be a bit more power efficient, and the higher density means fewer drives, but it feels like I'm still getting close to my current setup of 15 drives. I mean, 1 more 3-way mirror and I'm already almost there.
I also have to get my 2nd HBA working to utilize my other bays. Hoping that goes okay.


Would I be better off doing mirrors of 2, and then in a few months adding on 1 drive to each mirror?
If my math is right, 16*2=32, so I assume I will get around 28TB of usable space. So I am just hitting my current usable space (26TB) and only gaining about 2tb of usable space over my current setup. But I lose my spare drives too.

The 16tb Ironwolf Pro drives are about $250 a piece. So 4 of them would bring me to $1000, not including tax and shipping. That would be x2 2-way mirrors of 16tb. Then 1 extra drive as a hot spare? $1250.
Trying to find the balance of performance, cost, and especially scalability as I know I will want to increase my storage in the future, and I don't want to have to drop thousands on more drives in a few years just because I can't simply upgrade this pool.
 
Last edited:

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
@isopropyl I suggested a single 3-way mirror VDEV as a stopgap solution in order to spread out the expenses. Please read my post again.

Also, please read the following resources.
 
Joined
Jun 15, 2022
Messages
674
Being from the days when a 40MB HDD was $225 and took hours to duplicate to a second drive in the same system, and Ethernet was a hot new item compared to Token Ring, we've really got it good.

Personally, I love SAS vs SATA, similar to SCSI over MFM. 30 drives...how long are your cables? SAS signals differently than SATA.
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
@isopropyl I suggested a single 3-way mirror VDEV as a stopgap solution in order to spread out the expenses. Please read my post again.
I was quoting Etorix as below
Three vdevs would give 38 TB of usable space, with 9 *16 TB drives, and the IOPS of 3 drives.


However @Davvo, I am a bit confused by your actual suggestion.
Are you suggesting replacing a handful of my current drives with this mirror? Like take out x4 of my current mirrors (so x8 of my 4tb drives), and replace them with x1 3-way mirror?

So my pool will become x1 3-way mirror (x3 16tb drives) + x5 2 way mirrors (x10 4tb drives)
And I assume I can leave the x2 4tb spares in, and they'd work on any of the 4tb drives, just not the 16tb drives obviously.
Then in the future, I can work on replacing those 4tb drives with 16tb drives. Makes sense.

If I am understanding correctly, that is actually a good idea. I guess I didn't understand your suggestion originally.
I think I originally misunderstood you, as I was confused how a 3-way mirror of 16tb drives, which would only provide 16tb of space, comes anywhere near my current storage capacity.
My entire mindset was based around migrating the entire pool, not partially migrating. So this sounds like the better solution.
Also, please read the following resources.
Thank you!



Personally, I love SAS vs SATA, similar to SCSI over MFM. 30 drives
What do you like about SAS personally? I can't say I am familiar enough with either from a benchmark perspective, but if what I had read was correct, it basically has more lanes, which I am a big fan of, and I would prefer to use SAS. But I don't know if I'd actually notice a difference really.

I plan to throw 10g ethernet in this thing in the future (and maybe even link aggregate it for 20g for shits and giggles) to my firewall, then have my firewall branch to my main computer and my future hypervisor the same way. So maybe I will notice a difference then, but even then these are spinning drives; I don't think I'd come anywhere close to saturating a 10g link without solid state.

That being said, if I did go the SAS route, I'd probably be looking at Exos? I mean for $20 more I can just get Exos drives. But is it worth it?

And at that point, I honestly could just spend a bit more, get x3, and do a 3-way mirror of the 20TB Exos drives instead. Because if I'm keeping the 4tb drives in the pool for now, that would work fine and probably be preferable at this point tbh. I'm fine with spending $1100 on x3 20tb drives, then a few months down the line, spending that again for x3 more of them.

how long are your cables? SAS signals differently than SATA.
What is the reasoning for needing to know the length?

Anyways I assume you mean the cables going from my HBA to my backplane?
Currently, not long. I need to get longer ones; when the case spreads apart it pulls, so I have to be careful.

I forget which cable my HBA and Backplane needed. The connectors confuse the living fuck out of me, and I think I purchased the wrong one at one point.
If I recall based on my purchase history, I bought x2 of SFF8643 to SFF8087 and they were 0.5m. That brand too.
But I can't recall if that one I ordered was the correct one. Those look correct though based off memory. It matches the connector on my HBA.
[Amazon listing photo of the SFF-8643 to SFF-8087 cable]


I believe my HBA has 2 ports, and both cables run to the backplane. So half the backplane is working currently.
But yeah, 0.5m is too short so I have to upgrade that, as well as plug in my other HBA (and hopefully it even works). It's been sitting on top of my NAS forever because I had to send one back that arrived dead, and I never got around to putting the new one in because I needed to order more cables.
When I get that new HBA in, the other bays will be ready to go.

But I ask in case you had some input about cable length, before I went and ordered more cables?
 
Last edited:

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Are you suggesting replacing a handful of my current drives with this mirror? Like take out x4 of my current mirrors (so x8 of my 4tb drives), and replace them with x1 3-way mirror?

So my pool will become x1 3-way mirror (x3 16tb drives) + x5 2 way mirrors (x10 4tb drives)
And I assume I can leave the x2 4tb spares in, and they'd work on any of the 4tb drives, just not the 16tb drives obviously.
Then in the future, I can work on replacing those 4tb drives with 16tb drives. Makes sense.

If I am understanding correctly, that is actually a good idea. I guess I didn't understand your suggestion originally.
I think I originally misunderstood you, as I was confused how a 3-way mirror of 16tb drives, which would only provide 16tb of space, comes anywhere near my current storage capacity.
My entire mindset was based around migrating the entire pool, not partially migrating. So this sounds like the better solution.

Thank you!
Yup, you can use your "retired" 4TB drives as spares for the ones that stay active. Also, you don't even need to move data manually: just extend the pool by creating a new mirror vdev with the 16TB drives and then remove as many of the old vdevs as you need; ZFS will move the data for you.
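
For reference, the rough shell equivalent of that extend-then-remove flow (a sketch only; da20/da21/da22 are placeholder device names, and the same steps are available from the GUI):

Code:
# Add the new 16 TB drives as an extra mirror vdev
zpool add PrimaryPool mirror da20 da21 da22

# Remove one of the old 4 TB mirror vdevs; ZFS evacuates its data onto the remaining vdevs
zpool remove PrimaryPool mirror-0

# Check on the removal/evacuation progress
zpool status PrimaryPool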

I plan to throw 10g ethernet in this thing in the future (and maybe even link aggregate it for 20g for shits and giggles) to my firewall, then have my firewall branch to my main computer and my future hypervisor the same way. So maybe I will notice a difference then, but even then these are spinning drives; I don't think I'd come anywhere close to saturating a 10g link without solid state.
Yup, you won't come close to saturating a 10 Gig network.

Resources:

That being said, if I did go the SAS route, I'd probably be looking at Exos? I mean for $20 more I can just get Exos drives. But is it worth it?
Just remember that you can plug SATA drives into SAS ports, but not SAS drives into SATA ports.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Are you suggesting replacing a handful of my current drives with this mirror? Like take out x4 of my current mirrors (so x8 of my 4tb drives), and replace them with x1 3-way mirror?
Yes, you can do that with mirrors.
Using a free slot (or taking one spare out), extend one of the 2*4TB mirror vdevs to become 3-way with a 16 TB drive. Then, replace one of the 4 TB drives with a 16 TB drive. Replace the last 4 TB drive in this vdev with a 16 TB…
So my pool will become x1 3-way mirror (x3 16tb drives) + x5 2 way mirrors (x10 4tb drives)
And then you may, from the GUI, remove one or two of the 2*4 TB mirrors, causing data to be migrated to other vdevs (mostly the 3*16 TB).
This gives you free slots to add further mirror vdevs of larger drives, now or later.
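
In rough command terms, that evolution looks something like this (a sketch only; the gptid and daNN names are placeholders, and each step can also be done from the GUI):

Code:
# Widen an existing 2 x 4 TB mirror to 3-way by attaching a 16 TB drive to one of its members
zpool attach PrimaryPool gptid/<existing-4TB-member> da20

# After that resilver completes, swap the old 4 TB members for 16 TB drives, one at a time
zpool replace PrimaryPool gptid/<old-4TB-member> da21
zpool replace PrimaryPool gptid/<other-old-4TB-member> da22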

Very flexible, but not space efficient (because I would not recommend, or use myself, 2-way mirrors with 16+ TB drives…), so it's more expensive, and uses more drives than raidz2 for similar usable space and similar resiliency (basically: can lose one drive and still have redundancy while resilvering). Tough choice.

Anyways I assume you mean the cables going from my HBA to my backplane?
What matters is the total length (cables + traces) from the controller to the drives, or from the controller to the expander.
If your traces are short enough (with a backplane, they should be), SATA drives have the benefit of providing more useful SMART reports and of usually being cheaper than their SAS counterparts.
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
Yup, you can use your "retired" 4TB drives as spares for the ones that stay active. Also, you don't even need to move data manually: just extend the pool by creating a new mirror vdev with the 16TB drives and then remove as many of the old vdevs as you need; ZFS will move the data for you.
Sounds easy enough. Good suggestion; this is definitely the route I'm probably going to go, so I don't have to eat the massive upfront cost of replacing my entire pool.

Resources:
Awesome info, thanks!

Just remember that you can plug SATA drives into SAS ports, but not SAS drives into SATA ports.
Yeah, I'm not seeing a major issue with that unless I will need to diag the drives from another chassis that doesn't have SAS capability.

Yes, you can do that with mirrors.
Using a free slot (or taking one spare out), extend one of the 2*4TB mirror vdevs to become 3-way with a 16 TB drive. Then, replace one of the 4 TB drives with a 16 TB drive. Replace the last 4 TB drive in this vdev with a 16 TB…

And then you may, from the GUI, remove one or two of the 2*4 TB mirrors, causing data to be migrated to other vdevs (mostly the 3*16 TB).
This gives you free slots to add further mirror vdevs of larger drives, now or later.

Very flexible, but not space efficient (because I would not recommend, or use myself, 2-way mirrors with 16+ TB drives…), so it's more expensive, and uses more drives than raidz2 for similar usable space and similar resiliency (basically: can lose one drive and still have redundancy while resilvering). Tough choice.
Little confused by that last statement. Are you saying you would prefer not to do a 3-way mirror with 16tb drives? (Because you said 2-way mirror with 16tb drives, but I said 3-way mirror with 16tb drives.) Or are you saying that if a drive dies and I lose one more while it's resilvering, then I'm screwed? I mean, it's a significant improvement over my current 2-way mirrors as far as reliability goes. And I can eventually expand them to 4-way mirrors by just adding an extra drive to the vdevs in the future if I feel the need, I guess.

But I don't understand how you can say that it uses more drives than raidz2, because raidz2, as Etorix stated, requires at least 4 drives, whereas a 3-way mirror only requires 3 drives in each vdev. So for raidz2 I'd need 1 more drive. Plus, I would take more of a hit on performance if I used raidz2.

Little confused.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Being from the days when a 40MB HDD was $225 and took hours to duplicate to a second drive in the same system, and Ethernet was a hot new item compared to Token Ring, we've really got it good.
I thought of you as less... ancient.
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
Is there a command to see raw storage space?

I think it's actually a bit more than my original 50tb. I'm looking at it again, and they're all 4tb drives.
I actually have 16 drives (I guess I didn't count da0). 2 are spares. So it'd be 56TB, or 64TB with the two spares included.

I'm just trying to determine how much extra usable space I will gain from adding the x1 3 way mirror, and how many 4tb drives I can actually remove.

If I'm doing the math correctly, 4x12=48, +16 = 64.
So I'd need 12 of the 4tb drives to remain in. And I can remove x4 4tb drives (2 vdevs). You can also do 16/4=4 for sanity check and it checks out.

...But that also includes the spares. And that's where I'm getting a bit confused on the math.
Theoretically I can keep all the 4tb drives in and just "turn" them into spares if I wanted instead of removing them, but it'd just be added power consumption, AND I really don't want the ability to actually put data on them in my pool because they're getting old. Keeping one or two in as spares is fine, but as mirror vdevs for more capacity, nah.

And to determine usable space, currently I have 6tb free usable
Code:
# zpool list
NAME          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
PrimaryPool  25.4T  19.3T  6.07T        -         -    34%    76%  1.00x    ONLINE  /mnt


I absolutely suck at math in case it wasn't clear. And I overthink everything.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
For bulk storage, a pool's usable space is 80% of its total space (allocated + free = size).
In your case: 25.4*0.8 = 20.3 TB (I'm guessing zpool list shows values in TeraBytes and not in TiB).

For block storage, such is 50%.

So you can remove 4 VDEVs (4 mirrors of two 4TB drives each) and have something around 9TB of usable space left. That's considering a mirror of 20TB drives.

But I'm a bit confused by your current layout, could you please explain it?
 
Last edited:

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
But I'm a bit confused by your current layout, could you please explain it?
Current layout is all mirrors of 2 drives.
All 4tb each. 7 vdevs.

Code:
 state: ONLINE
  scan: scrub repaired 0B in 14:29:29 with 0 errors on Mon Jul 24 17:29:30 2023
config:
        NAME                                              STATE     READ WRITE CKSUM
        PrimaryPool                                       ONLINE       0     0   0
          mirror-0                                        ONLINE       0     0   0
            gptid/d7476d46-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
            gptid/d8d6aa36-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
          mirror-1                                        ONLINE       0     0   0
            gptid/d9a6f5dc-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
            gptid/db71bcb5-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
          mirror-2                                        ONLINE       0     0   0
            gptid/d8b2f42f-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
            gptid/d96847a9-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
          mirror-3                                        ONLINE       0     0   0
            gptid/d9fb7757-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
            gptid/da1e1121-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
          mirror-4                                        ONLINE       0     0   0
            gptid/9fd0872d-8f64-11ec-8462-002590f52cc2    ONLINE       0     0   0
            gptid/9ff0f041-8f64-11ec-8462-002590f52cc2    ONLINE       0     0   0
          mirror-5                                        ONLINE       0     0   0
            gptid/14811777-1b6d-11ed-8423-ac1f6be66d76    ONLINE       0     0   0
            spare-1                                       ONLINE       0     0   0
              gptid/03daa071-505c-11ed-a9fe-ac1f6be66d76  ONLINE       0     0   0
              gptid/0d56b97d-1e91-11ed-a6aa-ac1f6be66d76  ONLINE       0     0   0
          mirror-6                                        ONLINE       0     0   0
            gptid/749a1891-1b5c-11ee-941f-ac1f6be66d76    ONLINE       0     0   0
            spare-1                                       ONLINE       0     0   0
              gptid/4710dd39-1b6d-11ed-8423-ac1f6be66d76  ONLINE       0     0   0
              gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76  ONLINE       0     0   0
        spares
          gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76      INUSE     currently in use
          gptid/0d56b97d-1e91-11ed-a6aa-ac1f6be66d76      INUSE     currently in use


For bulk storage, a pool's usable space is 80% of its total space (allocated + free = size).
In your case: 25.4*0.8 = 20.3 TB (I'm guessing zpool list shows values in TeraBytes and not in TiB).


So you can remove 4 VDEVs (4 mirrors of two 4TB drives each) and have something around 9TB of usable space left. That's considering a mirror of 20TB drives.
I get confused going between raw storage and usable storage, plus factoring in the two spare drives I currently have.

Ok so a 3 way mirror of 20tb drives would give me 9tb of usable space, including my current 6tb of usable space? Or would it actually be 15tb because I currently have 6tb free still.

And if I went with 16 tb drives instead of 20tb, it'd be 5tb of usable space instead?

And lastly, did your calculation of being able to remove 4 vdevs come from raw storage numbers or usable? My point being, is it 3 vdevs AND the two spares? Or is it actually 4 vdevs?
Does that make sense? I'm trying to understand whether you are counting the two spares, because they don't make up that usable space, but they are part of my raw storage/drive count numbers.
 
Joined
Jun 15, 2022
Messages
674
I thought of you as less... ancient.
Go ahead and keep thinking I'm much less used...I do. It's only when I land one of these young office gals that hit the cardio heavy at the gym my body says otherwise. But then I think, "I don't have to keep this up forever, just until I can switch gears on her at an appropriate time and it'll seem like I'm a romantic, and girls love that shtick."
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Current layout is all mirrors of 2 drives.
All 4tb each. 7 vdevs.
Perfect, I had understood correctly.
Ok so a 3 way mirror of 20tb drives would give me 9tb of usable space, including my current 6tb of usable space? Or would it actually be 15tb because I currently have 6tb free still.

And if I went with 16 tb drives instead of 20tb, it'd be 5tb of usable space instead?

And lastly, did your calculation of being able to remove 4 vdevs come from raw storage numbers or usable? My point being, is it 3 vdevs AND the two spares? Or is it actually 4 vdevs?
Does that make sense? I'm trying to understand whether you are counting the two spares, because they don't make up that usable space, but they are part of my raw storage/drive count numbers.
So, your pool's total space is now 25.4 TB; usable space is 20.3 TB. If we add a single n-way mirror vdev (it's not important how wide the mirror is, the space increase is the same) made of 20 TB drives, we add 20 TB * 0.8 = 16 TB of usable space.

So that means 20.3 + 16 = 36.3 TB of usable space with ALL your 7 vdevs of 4TB drives + the new mirror of 20TB drives; the spares never count because they don't add any space.

Now, of that 36.3 TB you would be using 19.3 TB for your current data: 36.3 - 19.3 = 17 TB of free, usable space.

Each 4 TB mirror vdev adds 4 * 0.8 = 3.2 TB of usable space to the pool; for the 4 vdevs you'd remove, that's 4 (vdevs) * 3.2 = 12.8 TB.

If we remove those 12.8 TB from the 17 TB of free, usable space we get 4.2 TB of free, usable space with the single n-wide mirror vdev of 20 TB drives and 3 2-way mirror vdevs of 4TB drives, plus the couple of spares.

If instead you want to use 16 TB drives, the usable space of the new mirror vdev would be 12.8 TB.

20.3 + 12.8 = 33.1 TB of usable space (with all 7 of your 4 TB vdevs and the new 16 TB vdev), which means 13.8 TB of free, usable space.

If we remove 12.8 TB from that number, we get 1 TB of free, usable space with the single n-wide mirror vdev of 16 TB and 3 2-way mirror vdevs of 4TB drives, plus the couple of spares.
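
Purely as a sanity check, the same arithmetic as bc one-liners (sizes in TB, with the 80% bulk-storage rule as above):

Code:
echo "scale=1; 25.4 * 0.8" | bc                                       # current usable, ~20.3
echo "scale=1; (25.4 * 0.8 + 20 * 0.8) - 19.3 - 4 * (4 * 0.8)" | bc   # add a 20 TB vdev, drop 4 old vdevs -> ~4.2 free
echo "scale=1; (25.4 * 0.8 + 16 * 0.8) - 19.3 - 4 * (4 * 0.8)" | bc   # add a 16 TB vdev, drop 4 old vdevs -> ~1.0 free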

I hope it's clearer this way.

P.S.: both of your spares are currently in use inside different vdevs.

EDIT#1: corrected math since it might be way past 4 am in my timezone; will review everything a third time when the sun is high in my sky.
EDIT#2: sun is shining, math is correct; I don't remember how I got to that 9 TB in my previous post, but disregard it: this post is accurate.
 
Last edited: