SSD gets different serial (and hence no gptid) after being taken offline

lexik

Cadet
Joined
Nov 17, 2021
Messages
5
Hi,
I have a very strange situation. This is not production and I have a backup of the pool, so no data is lost; I just want to understand what's going on.
I run FreeNAS as a virtual machine with multiple SSDs attached via a Dell PERC H310 in IT mode. I made a copy of the virtual machine and wanted to upgrade the storage space of my pool. To do that, I offlined the first disk (all disks are SSDs), physically replaced it with a bigger one, and replaced the disk in the pool, all via the GUI. By repeating those steps for each disk, I successfully upgraded the storage space of my pool and it is healthy.
Now I have the original FreeNAS VM with the original, smaller disks, and the new FreeNAS VM with the bigger pool, which is working fine.
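
For reference, the GUI replacement steps correspond roughly to the following ZFS commands (a sketch only; the pool name ssdtank is taken from the output further down, the gptid and device names are placeholders, and on FreeNAS the GUI also handles partitioning and GELI encryption of the new disk, so this is just the ZFS side of it):

Code:
# offline the member that is about to be pulled
zpool offline ssdtank gptid/<old-partition-uuid>

# after physically swapping in the larger disk, resilver onto it
zpool replace ssdtank gptid/<old-partition-uuid> /dev/<new-disk>

# once every member has been replaced, expand the vdev if autoexpand is off
zpool online -e ssdtank /dev/<new-disk>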

Now I tried to boot the original FreeNAS VM with the smaller disks, but FreeNAS is unable to bring the pool up. This is because two disks of the raidz1 (consisting of 3 disks) could not be opened. Two of the three disks are in state "can't open", and there are not enough replicas to get the pool up.

But why is that? They are the same disks on the same controller, and one of the disks even gets recognized successfully. The disks are not faulty; at least they are detected on boot and I can inspect them with the usual tools (geom disk list, gpart show, etc.). The serial(?) shown on a zpool import is not the one corresponding to the original disks. How can that be? Nor is it a serial from one of the new disks.

The original FreeNAS VM is not aware of the pool upgrade, because I made it on the new VM, so in theory the pool should come up without problems; from its point of view nothing changed, and it should just look like a reboot to FreeNAS.


Is there a way to somehow force FreeNAS to use a specific disk in a pool? I know exactly which two disks FreeNAS needs / which ones were in the old pool.

I really want to understand why this is happening. How can I get more information about why I get the error=2 / why FreeNAS can't open the disk/vdev?

Code:
console.log

Nov 26 09:06:20 freenas.test.lan Importing ssdtank
Nov 26 09:06:20 freenas.test.lan disk vdev '/dev/gptid/0386eb3f-193e-11e8-afdc-000c29dc81e2.eli': vdev_geom_open: failed to open [error=2]
Nov 26 09:06:20 freenas.test.lan disk vdev '/dev/gptid/0475818e-193e-11e8-afdc-000c29dc81e2.eli': vdev_geom_open: failed to open [error=2]
Nov 26 09:06:20 freenas.test.lan spa_load($import, config untrusted): vdev tree has 1 missing top-level vdevs.
Nov 26 09:06:20 freenas.test.lan spa_load($import, config untrusted): current settings allow for maximum 2 missing top-level vdevs at this stage.
Nov 26 09:06:20 freenas.test.lan spa_load($import, config untrusted): FAILED: unable to open vdev tree [error=2]
Nov 26 09:06:20 freenas.test.lan vdev 0: root, guid: 2259496383944431123, path: N/A, can't open
Nov 26 09:06:20 freenas.test.lan vdev 0: mirror, guid: 10615896676284670754, path: N/A, can't open
Nov 26 09:06:20 freenas.test.lan vdev 0: disk, guid: 14254854939328160923, path: /dev/gptid/0386eb3f-193e-11e8-afdc-000c29dc81e2.eli, can't open
Nov 26 09:06:20 freenas.test.lan vdev 1: disk, guid: 434134942599820089, path: /dev/gptid/0475818e-193e-11e8-afdc-000c29dc81e2.eli, can't open
Nov 26 09:06:20 freenas.test.lan spa_load($import, config untrusted): UNLOADING


Code:
root@freenas:/var/log # zpool import
   pool: ssdtank
     id: 11832098037785245314
  state: UNAVAIL
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
   see: http://illumos.org/msg/ZFS-8000-EY
 config:

   ssdtank                                         UNAVAIL  insufficient replicas
     raidz1-0                                      UNAVAIL  insufficient replicas
       10473122201232411810                        UNAVAIL  cannot open
       gptid/434b3cc0-ac19-11ea-a6d6-000c298f23c7  ONLINE
       16256749962109416599                        UNAVAIL  cannot open
 
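For reference, some stock FreeBSD/ZFS commands that can show how serials and gptid labels line up when chasing an error=2 like the one above (a sketch; da0 is a placeholder device name and the gptid is copied from the log above):

Code:
# disk ident (serial) as FreeBSD sees it
diskinfo -v da0 | grep ident

# mapping between gptid labels and the underlying partitions
glabel status

# rawuuid of each partition (this is what the gptid label is derived from)
gpart list da0 | grep rawuuid

# list pools that ZFS can find using only the gptid device nodes
zpool import -d /dev/gptid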

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
Because you replaced the disks one by one, each of the disks (except the first one) carries pool metadata that knows about later versions of the pool that already include the replacement disks.
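
That history is visible in the ZFS label on each data partition; something along these lines would show the pool GUID, the txg and the vdev tree each disk believes in (the gptid is copied from the zpool import output above; for the GELI-encrypted members the .eli provider has to be attached before its label is readable):

Code:
# dump the four ZFS labels of one pool member; compare pool_guid,
# txg and the children[] GUIDs across the three disks
zdb -l /dev/gptid/434b3cc0-ac19-11ea-a6d6-000c298f23c7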

Short version: you can't do what you're expecting to do; it doesn't work that way with RAIDZ. If you were thinking of doing the same thing with mirrors, that would sort-of work, but it's not the same situation, because each side of a mirror contains a full and independent copy of all blocks in the VDEV.
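
That difference is also why ZFS has a dedicated command for splitting a mirror off into its own importable pool, with no RAIDZ equivalent (a sketch, assuming a hypothetical mirrored pool named tank):

Code:
# detach one side of each mirror and turn those disks into a new,
# independent pool that can be imported elsewhere
zpool split tank tank_copy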

If you wanted that result, you would have needed to attach all 3 new disks, create a new pool on them, replicate the old pool's content over to the 3 new disks, export (disconnect) the old pool, and rename the new pool to take its place. Then you would have the old pool's disks, with the pool intact, to use as you please on another system.
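
As a rough command-line sketch of that sequence (the pool name ssdtank comes from the thread; newtank and the da4/da5/da6 device names are placeholders, and on FreeNAS you would normally do the replication via a snapshot/replication task in the GUI):

Code:
# build the new pool on the three new disks
zpool create newtank raidz1 da4 da5 da6

# replicate everything from the old pool
zfs snapshot -r ssdtank@migrate
zfs send -R ssdtank@migrate | zfs recv -F newtank

# retire the old pool and let the new one take over its name
zpool export ssdtank
zpool export newtank
zpool import newtank ssdtank

# the old disks now hold an intact, exported copy of the pool
# that can be imported on another system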
 