Add mirror disk to existing pool

schoeller (Cadet)
Dear all,

Thanks for this open-source software.

I have the following structure in my ZFS pool:
Code:
root@freenas:~ # zpool status
  pool: STORAGE
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: scrub repaired 0B in 15:04:09 with 0 errors on Sun May 8 15:04:13 2022
config:

        NAME                                            STATE     READ WRITE CKSUM
        STORAGE                                         ONLINE       0     0     0
          gptid/777f4001-b125-11ea-9083-9cb6540be2a7    ONLINE       0     0     0  block size: 512B configured, 4096B native
          mirror-1                                      ONLINE       0     0     0
            gptid/f71774ae-e73a-11eb-8971-b47af13d7670  ONLINE       0     0     0
            gptid/f732fa8b-e73a-11eb-8971-b47af13d7670  ONLINE       0     0     0

errors: No known data errors

I would like to convert the existing single disk (ada2, gptid/777f4001-b125-11ea-9083-9cb6540be2a7) into a mirror. The disk has the following layout:

root@freenas:~ # gpart list ada2
Geom name: ada2
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 15628053134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada2p1
Mediasize: 2147483648 (2.0G)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e1
efimedia: HD(1,GPT,776d0fa2-b125-11ea-9083-9cb6540be2a7,0x80,0x400000)
rawuuid: 776d0fa2-b125-11ea-9083-9cb6540be2a7
rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
label: (null)
length: 2147483648
offset: 65536
type: freebsd-swap
index: 1
end: 4194431
start: 128
2. Name: ada2p2
Mediasize: 7999415652352 (7.3T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e2
efimedia: HD(2,GPT,777f4001-b125-11ea-9083-9cb6540be2a7,0x400080,0x3a3412a08)
rawuuid: 777f4001-b125-11ea-9083-9cb6540be2a7
rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
label: (null)
length: 7999415652352
offset: 2147549184
type: freebsd-zfs
index: 2
end: 15628053127
start: 4194432
Consumers:
1. Name: ada2
Mediasize: 8001563222016 (7.3T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r2w2e5

I have thus prepared the new disk (ada3) as follows:

gpart add -t freebsd-swap -s 2G ada3
gpart add -t freebsd-zfs ada3
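For completeness, starting from a blank disk the full sequence would be roughly the following; the gpart create step and the -a 4k alignment flag are assumptions on my part (not what I actually typed), intended to match the 2G-swap-plus-ZFS layout shown above:

Code:
gpart create -s gpt ada3                      # fresh GPT partition table
gpart add -t freebsd-swap -a 4k -s 2G ada3    # 2 GiB swap, 4K-aligned
gpart add -t freebsd-zfs -a 4k ada3           # rest of the disk for ZFS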

The new disk now looks like this:

root@freenas:~ # gpart list ada3
Geom name: ada3
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 31251759063
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: ada3p1
Mediasize: 2147483648 (2.0G)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r0w0e0
efimedia: HD(1,GPT,1b61fc5b-d74b-11ec-8b39-b47af13d7670,0x28,0x400000)
rawuuid: 1b61fc5b-d74b-11ec-8b39-b47af13d7670
rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
label: (null)
length: 2147483648
offset: 20480
type: freebsd-swap
index: 1
end: 4194343
start: 40
2. Name: ada3p2
Mediasize: 15998753136640 (15T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r0w0e0
efimedia: HD(2,GPT,6fc5e237-d74c-11ec-8b39-b47af13d7670,0x400028,0x7467fffb0)
rawuuid: 6fc5e237-d74c-11ec-8b39-b47af13d7670
rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
label: (null)
length: 15998753136640
offset: 2147504128
type: freebsd-zfs
index: 2
end: 31251759063
start: 4194344
Consumers:
1. Name: ada3
Mediasize: 16000900661248 (15T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r0w0e0

I then try to attach the new disk to the existing one, which returns an error, as seen below.

root@freenas:~ # zpool attach STORAGE gptid/777f4001-b125-11ea-9083-9cb6540be2a7 gptid/6fc5e237-d74c-11ec-8b39-b47af13d7670
cannot attach gptid/6fc5e237-d74c-11ec-8b39-b47af13d7670 to gptid/777f4001-b125-11ea-9083-9cb6540be2a7: can only attach to mirrors and top-level disks

As I understand it, gptid/777f4001-b125-11ea-9083-9cb6540be2a7 is a top-level disk. Any hints on how to proceed are highly appreciated.

Best regards

Sebastian

P.S.: Alternatively, I would like to replace the disk gptid/f71774ae-e73a-11eb-8971-b47af13d7670 (ada0) in the existing mirror with the larger disk.
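Presumably that would be something like the following, once the new disk is partitioned (the new gptid here is just a placeholder):

Code:
zpool replace STORAGE gptid/f71774ae-e73a-11eb-8971-b47af13d7670 gptid/<new-disk-gptid>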
 

sretalla (Moderator)
You're getting blocked because of this:
gptid/777f4001-b125-11ea-9083-9cb6540be2a7 ONLINE 0 0 0 block size: 512B configured, 4096B native

You'll need to pay attention to the ashift (or do something about that disk to reformat it as 4K-native... you didn't say, but if it's a Seagate Exos, maybe this will help: https://www.truenas.com/community/t...ing-fast-format-seagate-exos-x16-drive.84094/)
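To check what ashift each top-level VDEV currently has, something like this should work (on FreeNAS the pool cachefile usually lives at /data/zfs/zpool.cache, hence the -U flag):

Code:
zdb -U /data/zfs/zpool.cache -C STORAGE | grep ashift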

Have a look at this post:
 

schoeller (Cadet)

Thanks for your response.

I got desperate and replaced the drives in mirror-1 with drives of double the size. Maybe a mistake. I would now like to remove the non-mirrored drive from the pool without losing data. The situation is now as follows:

Code:
root@freenas:~ # zpool iostat -v
                                                 capacity     operations    bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
STORAGE                                         6.16T  15.6T    394    263  23.2M  40.0M
  gptid/777f4001-b125-11ea-9083-9cb6540be2a7    5.86T  1.41T    182     17  2.99M   270K
  mirror                                         309G  14.2T    211    245  20.2M  39.8M
    gptid/9dc192b7-d77a-11ec-8b39-b47af13d7670      -      -    211     19  20.2M   531K
    gptid/0f204e3f-d78e-11ec-bd3a-b47af13d7670      -      -      0    227  4.72K  39.6M
----------------------------------------------  -----  -----  -----  -----  -----  -----

Is there a simple way to take gptid/777f4001-b125-11ea-9083-9cb6540be2a7 offline, shift the data and then remove it from the pool?
 

sretalla (Moderator)
Is there a simple way to take gptid/777f4001-b125-11ea-9083-9cb6540be2a7 offline, shift the data and then remove it from the pool?
zpool remove should be applicable to your current pool setup (it would only not work if you had RAIDZ VDEVs in the pool).

That command will do the data evacuation to the other pool members for you.

zpool remove STORAGE gptid/777f4001-b125-11ea-9083-9cb6540be2a7

Code:
     zpool remove [-np] pool device...
             Removes the specified device from the pool.  This command currently
             only supports removing hot spares, cache, log devices and mirrored
             top-level vdevs (mirror of leaf devices); but not raidz.

             Removing a top-level vdev reduces the total amount of space in the
             storage pool.  The specified device will be evacuated by copying
             all allocated space from it to the other devices in the pool.  In
             this case, the zpool remove command initiates the removal and
             returns, while the evacuation continues in the background.  The
             removal progress can be monitored with zpool status. This feature
             must be enabled to be used, see zpool-features(7)
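If you go that way, progress can be watched like this; zpool wait only exists on newer OpenZFS versions, so treat that last line as optional:

Code:
zpool remove STORAGE gptid/777f4001-b125-11ea-9083-9cb6540be2a7
zpool status STORAGE            # shows the evacuation progress while the removal runs
zpool wait -t remove STORAGE    # blocks until the evacuation has finished (newer OpenZFS)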
 

Alecmascot (Guru)
But the drive in question is not a member of a mirrored vdev?
 

sretalla (Moderator)
But the drive in question is not a member of a mirrored vdev ?
I know... but it still works for stripe top-level VDEVs, as if they were a mirrored VDEV that is missing a member.
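Per the [-np] options in the man page excerpt above, a dry run is also possible first; it reports the estimated memory needed for the remapping without actually starting the removal:

Code:
zpool remove -n STORAGE gptid/777f4001-b125-11ea-9083-9cb6540be2a7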
 

schoeller (Cadet)
Thanks for your reply. Coming back to the ashift comment, I experimented as follows. Replacing the disk works via

zpool replace -o ashift=9 STORAGE gptid/777f4001-b125-11ea-9083-9cb6540be2a7 gptid/7d4a1cae-d80a-11ec-bd3a-b47af13d7670

Removing the disk does not work, since zpool remove does not accept an ashift option:

Code:
root@freenas:~ # zpool iostat -v STORAGE
                                                 capacity     operations    bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
STORAGE                                         6.16T  15.6T     28     68  1.37M  1.50M
  gptid/7d4a1cae-d80a-11ec-bd3a-b47af13d7670    5.86T  1.41T     11     20   474K   267K
  mirror                                         310G  14.2T     16     48   925K  1.24M
    gptid/9dc192b7-d77a-11ec-8b39-b47af13d7670      -      -      9     23   506K   634K
    gptid/0f204e3f-d78e-11ec-bd3a-b47af13d7670      -      -      7     24   419K   634K
----------------------------------------------  -----  -----  -----  -----  -----  -----
root@freenas:~ # zpool remove STORAGE gptid/7d4a1cae-d80a-11ec-bd3a-b47af13d7670
cannot remove gptid/7d4a1cae-d80a-11ec-bd3a-b47af13d7670: invalid config; all top-level vdevs must have the same sector size and not be raidz.
root@freenas:~ # zpool remove -o ashift=9 STORAGE gptid/7d4a1cae-d80a-11ec-bd3a-b47af13d7670
invalid option 'o'
usage:
        remove [-npsw] <pool> <device> ...

For a moment I thought it might be because I have not yet upgraded my pool.

New ZFS version or feature flags are available for pool STORAGE. Upgrading pools is a one-time process that can prevent rolling the system back to an earlier TrueNAS version. It is recommended to read the TrueNAS release notes and confirm you need the new ZFS feature flags before upgrading a pool.
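For what it's worth, whether the underlying device_removal feature is enabled can be checked without upgrading the pool:

Code:
zpool get feature@device_removal STORAGE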

Any hints on how to proceed from here are highly welcome.
 

sretalla (Moderator)
cannot remove gptid/7d4a1cae-d80a-11ec-bd3a-b47af13d7670: invalid config; all top-level vdevs must have the same sector size and not be raidz.

Yes, that error message is pretty clear: you won't be able to do it at all in a pool with mixed ashift.

When you think about it, the OpenZFS developers haven't managed to (or haven't wanted to) deal with rewriting blocks as they are transferred off the departing VDEV to the other members, so mixed ashift isn't supported for that operation.

As a general rule, mixed ashift probably isn't good for a pool's performance for similar reasons.
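If the data has to end up on uniform-ashift VDEVs, the remaining path is the one your original zpool status output already hinted at ("migrate data to a properly configured pool"): replicate everything to a freshly built pool. A rough sketch, assuming a new pool (hypothetically named NEWPOOL here) created with the desired ashift:

Code:
zfs snapshot -r STORAGE@migrate
zfs send -R STORAGE@migrate | zfs receive -F NEWPOOL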
 