Cannot replace disks in pool

Dariusz1989

Contributor
Joined
Aug 22, 2017
Messages
185
Code:
zpool replace -o ashift=9 ggmtank01 gptid/<faulty-rawuuid> gptid/<new-rawuuid>
You most likely don't want ashift=9 here... ashift=9 means 512-byte blocks, and you probably want 4096-byte blocks, which is ashift=12...
There is a command to read which ashift your pool is configured with that you should check first, I think?
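Something along these lines should show it (a rough sketch - the cache-file path is the usual TrueNAS CORE location, and ggmtank01 is just the pool name from this thread):
Code:
# Pool-wide ashift property (0 means "auto-detect per vdev"):
zpool get ashift ggmtank01

# Actual per-vdev ashift as recorded in the pool config:
zdb -U /data/zfs/zpool.cache -C ggmtank01 | grep ashift
Note that zpool get ashift only reports the pool-wide default, so the zdb output is what actually tells you the value each vdev was created with.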
 

wolfman

Dabbler
Joined
Apr 11, 2018
Messages
13
@Dariusz1989 You are right - and zpool status tells me so! But for the moment I'll take the non-degraded, fully resilvered RAIDZ2 vdev over performance!

Code:
# zpool status -v ggmtank01
  pool: ggmtank01
state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: resilvered 277G in 11:32:59 with 0 errors on Tue Aug 31 19:51:41 2021
config:

        NAME                                            STATE     READ WRITE CKSUM
        ggmtank01                                       ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/071d138c-9644-11e8-8380-000743400660  ONLINE       0     0     0
            gptid/07d35682-9644-11e8-8380-000743400660  ONLINE       0     0     0
            gptid/ef627048-743e-11eb-8d93-e4434bb19fe0  ONLINE       0     0     0
            gptid/167a10f2-7aa8-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
            gptid/e82432e5-8585-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/73340a80-5449-11e9-b326-000743400660  ONLINE       0     0     0
            gptid/49297ee3-00c5-11ec-bdc3-e4434bb19fe0  ONLINE       0     0     0
            gptid/625036be-8586-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
            gptid/44286cae-7aa9-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
            gptid/3ffd3cdc-7440-11eb-8d93-e4434bb19fe0  ONLINE       0     0     0
          raidz2-2                                      ONLINE       0     0     0
            gptid/c071d681-743c-11eb-8d93-e4434bb19fe0  ONLINE       0     0     0
            gptid/0f702ce5-9644-11e8-8380-000743400660  ONLINE       0     0     0
            gptid/d1bdee26-78df-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
            gptid/f3bcbd88-7aa9-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
            gptid/0826e283-8587-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
          raidz2-4                                      ONLINE       0     0     0
            gptid/8aaceda5-9ea7-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
            gptid/8b2fc90b-9ea7-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
            gptid/8c2ee1c1-9ea7-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
            gptid/8bf70e18-9ea7-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
            gptid/ed595065-0a19-11ec-bdc3-e4434bb19fe0  ONLINE       0     0     0  block size: 512B configured, 4096B native
        logs
          mirror-3                                      ONLINE       0     0     0
            gptid/123e1981-9644-11e8-8380-000743400660  ONLINE       0     0     0
            gptid/12b0bdb1-9644-11e8-8380-000743400660  ONLINE       0     0     0
        cache
          gptid/f4918c31-ff0f-11e9-b449-000743400660    ONLINE       0     0     0

errors: No known data errors
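For reference, that "512B configured, 4096B native" note is ZFS comparing the vdev's configured ashift with what the disk itself reports. If you want to see what a given disk reports before replacing it, something like this works on the FreeBSD base of TrueNAS CORE (da5 is just a placeholder device name, adjust to your disk):
Code:
# Logical sector size and stripe (physical) size reported by the disk:
diskinfo -v /dev/da5

# Or via SMART - look for the "Sector Sizes" line:
smartctl -i /dev/da5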
 

melbournemac

Dabbler
Joined
Jan 21, 2017
Messages
20
Version: TrueNAS-12.0-U6

Thank you for posting the detail and fix for this issue.

Had a failing disk in a mirror vdev. The disks in that vdev had a different ashift value from the rest of the pool (how???).

Resilver is now in progress
 