Disk Expand Question

Joined Jun 12, 2016 · Messages 5
Hello there,

So my drives were limited to 2.2 TB because of the onboard controller I was using. I have since installed a 9207-8i, and my 4 TB drives (which were previously only recognized as 2.2 TB) now show up correctly as 4 TB.

However, I'm not sure how to get the volume expanded to take advantage of the new space. I checked and confirmed that autoexpand is enabled on the pool.
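For reference, this is roughly how I checked (the pool is named mypool; the second command is just to see whether ZFS reports any expandable space):

Shell
zpool get autoexpand mypool   # should show autoexpand = on
zpool list -v mypool          # EXPANDSZ column shows space ZFS thinks it can grow into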

Ideas?

Thank you in advance!
 

depasseg · FreeNAS Replicant
Joined Sep 16, 2014 · Messages 2,874
Are all the drives in the vdev now showing as 4 TB (and were they showing up as 2.2 TB before)?

You can force autoexpand to work when you add the device to the pool. Of course, this means you must remove the device from the pool first, which is risky and can cause complete data loss.
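For reference, the per-device expand command looks roughly like this (the gptid below is only a placeholder for one of your mirror members); whether it actually helps depends on the underlying partition covering the new space:

Shell
# Ask ZFS to grow this one device to its full size; every member of a mirror
# must be expanded before the extra space becomes available to the pool
zpool online -e mypool gptid/<member-gptid>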
 
Joined Jun 12, 2016 · Messages 5
Correct: they were showing 2.2 TB before and are now showing 4 TB.

I have backed up the data; I was just trying to find a way to do this without starting fresh and having to transfer the data back.
 

Joined Jun 12, 2016 · Messages 5
Shell
  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        freenas-boot    ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            ada0p2      ONLINE       0     0     0
            ada1p2      ONLINE       0     0     0

errors: No known data errors

  pool: mypool
 state: ONLINE
  scan: scrub repaired 0 in 2h32m with 0 errors on Sun Jun 12 15:19:42 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        mypool                                          ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/b234201b-2610-11e6-a379-a0369f8039b4  ONLINE       0     0     0
            gptid/b30f96aa-2610-11e6-a379-a0369f8039b4  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/cda95a57-2610-11e6-a379-a0369f8039b4  ONLINE       0     0     0
            gptid/ce87be0a-2610-11e6-a379-a0369f8039b4  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/f7660541-2610-11e6-a379-a0369f8039b4  ONLINE       0     0     0
            gptid/f8442b00-2610-11e6-a379-a0369f8039b4  ONLINE       0     0     0
          mirror-3                                      ONLINE       0     0     0
            gptid/1c725a22-2611-11e6-a379-a0369f8039b4  ONLINE       0     0     0
            gptid/1d4f42d7-2611-11e6-a379-a0369f8039b4  ONLINE       0     0     0
          mirror-4                                      ONLINE       0     0     0
            gptid/4b6bd711-2611-11e6-a379-a0369f8039b4  ONLINE       0     0     0
            gptid/4c497a7d-2611-11e6-a379-a0369f8039b4  ONLINE       0     0     0
          mirror-5                                      ONLINE       0     0     0
            gptid/774f694e-2611-11e6-a379-a0369f8039b4  ONLINE       0     0     0
            gptid/7826cfec-2611-11e6-a379-a0369f8039b4  ONLINE       0     0     0
        cache
          gptid/c7b42b2a-2611-11e6-a379-a0369f8039b4    ONLINE       0     0     0
          gptid/c859c4c1-2611-11e6-a379-a0369f8039b4    ONLINE       0     0     0

errors: No known data errors
[root@mypool ~]#

=> 34 4294967227 da0 GPT (3.6T) [CORRUPT]
34 94 - free - (47K)
128 4194304 da0p1 freebsd-swap (2.0G)
4194432 4290772824 da0p2 freebsd-zfs (2.0T)
4294967256 5 - free - (2.5K)

=> 34 4294967227 da1 GPT (3.6T) [CORRUPT]
34 94 - free - (47K)
128 4194304 da1p1 freebsd-swap (2.0G)
4194432 4290772824 da1p2 freebsd-zfs (2.0T)
4294967256 5 - free - (2.5K)

=> 34 312581741 da2 GPT (149G)
34 94 - free - (47K)
128 312581640 da2p1 freebsd-zfs (149G)
312581768 7 - free - (3.5K)

=> 34 312581741 da3 GPT (149G)
34 94 - free - (47K)
128 312581640 da3p1 freebsd-zfs (149G)
312581768 7 - free - (3.5K)

=> 34 62499933 da4 GPT (30G)
34 222 - free - (111K)
256 62483327 da4p1 !6a898cc3-1dd2-11b2-99a6-080020736631 (30G)
62483583 16384 da4p9 !6a945a3b-1dd2-11b2-99a6-080020736631 (8.0M)

=> 34 62533229 da5 GPT (30G)
34 222 - free - (111K)
256 62516623 da5p1 !6a898cc3-1dd2-11b2-99a6-080020736631 (30G)
62516879 16384 da5p9 !6a945a3b-1dd2-11b2-99a6-080020736631 (8.0M)

=> 34 4294967227 da6 GPT (3.6T) [CORRUPT]
34 94 - free - (47K)
128 4194304 da6p1 freebsd-swap (2.0G)
4194432 4290772824 da6p2 freebsd-zfs (2.0T)
4294967256 5 - free - (2.5K)

=> 34 4294967227 da7 GPT (3.6T) [CORRUPT]
34 94 - free - (47K)
128 4194304 da7p1 freebsd-swap (2.0G)
4194432 4290772824 da7p2 freebsd-zfs (2.0T)
4294967256 5 - free - (2.5K)

=> 34 4294967227 da8 GPT (3.6T) [CORRUPT]
34 94 - free - (47K)
128 4194304 da8p1 freebsd-swap (2.0G)
4194432 4290772824 da8p2 freebsd-zfs (2.0T)
4294967256 5 - free - (2.5K)

=> 34 4294967227 da9 GPT (3.6T) [CORRUPT]
34 94 - free - (47K)
128 4194304 da9p1 freebsd-swap (2.0G)
4194432 4290772824 da9p2 freebsd-zfs (2.0T)
4294967256 5 - free - (2.5K)

=> 34 4294967227 da10 GPT (3.6T) [CORRUPT]
34 94 - free - (47K)
128 4194304 da10p1 freebsd-swap (2.0G)
4194432 4290772824 da10p2 freebsd-zfs (2.0T)
4294967256 5 - free - (2.5K)

=> 34 4294967227 da11 GPT (3.6T) [CORRUPT]
34 94 - free - (47K)
128 4194304 da11p1 freebsd-swap (2.0G)
4194432 4290772824 da11p2 freebsd-zfs (2.0T)
4294967256 5 - free - (2.5K)

=> 34 4294967227 da12 GPT (3.6T) [CORRUPT]
34 94 - free - (47K)
128 4194304 da12p1 freebsd-swap (2.0G)
4194432 4290772824 da12p2 freebsd-zfs (2.0T)
4294967256 5 - free - (2.5K)

=> 34 4294967227 da13 GPT (3.6T) [CORRUPT]
34 94 - free - (47K)
128 4194304 da13p1 freebsd-swap (2.0G)
4194432 4290772824 da13p2 freebsd-zfs (2.0T)
4294967256 5 - free - (2.5K)

=> 34 4294967227 da15 GPT (3.6T) [CORRUPT]
34 94 - free - (47K)
128 4194304 da15p1 freebsd-swap (2.0G)
4194432 4290772824 da15p2 freebsd-zfs (2.0T)
4294967256 5 - free - (2.5K)

=> 34 4294967227 da16 GPT (3.6T) [CORRUPT]
34 94 - free - (47K)
128 4194304 da16p1 freebsd-swap (2.0G)
4194432 4290772824 da16p2 freebsd-zfs (2.0T)
4294967256 5 - free - (2.5K)

=> 34 78165293 ada0 GPT (37G)
34 6 - free - (3.0K)
40 1024 ada0p1 bios-boot (512K)
1064 78164256 ada0p2 freebsd-zfs (37G)
78165320 7 - free - (3.5K)

=> 34 78165293 ada1 GPT (37G)
34 6 - free - (3.0K)
40 1024 ada1p1 bios-boot (512K)
1064 78164256 ada1p2 freebsd-zfs (37G)
78165320 7 - free - (3.5K)

 

depasseg · FreeNAS Replicant
Joined Sep 16, 2014 · Messages 2,874
The fact that each drive says "CORRUPT" and the free space doesn't appear leads me to believe that your drives need to be reformatted. The simplest way, if you have a full data backup, is to destroy this pool and recreate it. But you might be able to pull a drive from each mirror, quickly wipe it, re-insert it, let the data resilver, and then repeat on the second drives.
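Roughly, the per-drive cycle would look something like this (the device name and gptid below are just placeholders, and the GUI does most of it for you):

Shell
zpool offline mypool gptid/<old-member>   # drop one side of a mirror
gpart destroy -F da0                      # wipe the old, undersized partition table on that disk
# re-add the disk through the FreeNAS GUI (Storage -> Volume Status -> Replace),
# which repartitions the full 4 TB and kicks off the resilver
zpool status mypool                       # watch resilver progress before touching the next drive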
 
Joined Jun 12, 2016 · Messages 5
Hmm, yeah, I think I'll just go ahead and destroy/recreate next weekend.

Thank you so much for your time and assistance; it's much appreciated!
 

Ericloewe · Server Wrangler · Moderator
Joined Feb 15, 2014 · Messages 20,194
As I suspected, the OS doesn't appreciate disks magically growing. Interesting to see that ZFS itself keeps working normally, except for the autoexpansion.

OP, if it's any comfort, you've contributed to the knowledge of this community. Some people in your situation had asked about this, but all we could say then was "It probably won't work".

One thing you should be able to do is an in-place resilver, one drive at a time. You do need a spare in working order and of the same size (or more than one spare, since you can parallelize this across the various vdevs for a speedup). It should be safe, though potentially slower than nuking everything and starting over.
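With a spare connected, the swap can happen without degrading the mirror first; a rough sketch (FreeNAS would normally do this through the GUI's Replace dialog, and the gptids below are placeholders):

Shell
# Resilver onto the spare while the old member stays online; ZFS only detaches
# the old device once the copy has completed
zpool replace mypool gptid/<old-member> gptid/<new-spare-partition>
zpool status mypool   # resilver progress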
 

depasseg · FreeNAS Replicant
Joined Sep 16, 2014 · Messages 2,874
It sounds like it should be possible, according to @mav@ on the FreeBSD forums: https://forums.freebsd.org/threads/48828/#post-273142

I think the issue has to do with the way the drives were originally partitioned (when they were limited to 2.2 TB).

The safest thing to do is destroy/recreate. Since you are using mirrors, though, pulling a drive and resilvering wouldn't be that bad either.
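For what it's worth, the route @mav@ describes presumably comes down to repairing the GPT and growing the data partition in place, one disk at a time. A rough sketch, assuming da0 and partition index 2, and only with a verified backup:

Shell
gpart recover da0                               # rewrite the backup GPT at the real end of the disk (clears CORRUPT)
gpart resize -i 2 da0                           # grow partition 2 (freebsd-zfs) into the newly visible space
zpool online -e mypool gptid/<gptid-of-da0p2>   # then ask ZFS to use the larger partition

I haven't tried this on a live pool, so destroy/recreate or resilvering is still the safer call.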
 

Ericloewe · Server Wrangler · Moderator
Joined Feb 15, 2014 · Messages 20,194
Since you are using mirrors though, pulling a drive and resilvering wouldn't be that bad either.
No need to pull the drive before the new one is ready, assuming there's a port available for it. That way, it should be quite safe.
 

depasseg · FreeNAS Replicant
Joined Sep 16, 2014 · Messages 2,874
Good point. Although to speed things up, if it were me I would pull one drive from each mirror, wipe them all quickly, and resilver them at the same time, and then do the other side of the mirrors. :smile:
 

Joined Jun 12, 2016 · Messages 5
No need to pull the drive before the new one is ready, assuming there's a port available for it. That way, it should be quite safe.

I have the spare ports, just not spare drives.

Good point. Although to speed things up, if it were me I would pull 1/2 of all the mirrors, wipe all of them quickly and resilver them at the same time. And then do the other side of the mirrors. :)

I wonder which would be faster: doing this, or wiping and restoring. I believe the last restore from backup took ~21 hours.
 