Zpool import I/O error / import -fFX returns "one or more devices is currently unavailable"

Imbean

Cadet
Joined
Jan 2, 2021
Messages
2
Hi,

All of a sudden, and right when I was setting up my new server and needed every available disk I have (including my backup disks) to copy the data over, since Synology SHR is not compatible with ZFS, my zpool 'proxpool' didn't come back online after a reboot of that server. I had just moved the disks from my Synology to my new HP ProLiant and created the array, so there is no way back... :-(

I went through this forum looking for solutions and tried a few things.
Below is the console output.

--------------------------------------------

Linux truenas 6.1.55-production+truenas #2 SMP PREEMPT_DYNAMIC Tue Oct 31 16:07:08 UTC 2023 x86_64

TrueNAS (c) 2009-2023, iXsystems, Inc.
All rights reserved.
TrueNAS code is released under the modified BSD license with some
files copyrighted by (c) iXsystems, Inc.

For more information, documentation, help or support, go here:
http://truenas.com

Welcome to TrueNAS
Last login: Fri Dec 22 01:00:03 PST 2023 on pts/0

Warning: the supported mechanisms for making configuration changes
are the TrueNAS WebUI, CLI, and API exclusively. ALL OTHERS ARE
NOT SUPPORTED AND WILL RESULT IN UNDEFINED BEHAVIOR AND MAY
RESULT IN SYSTEM FAILURE.

root@truenas[~]# zpool import -f proxpool
cannot import 'proxpool': I/O error
        Destroy and re-create the pool from
        a backup source.


root@truenas[~]# zpool import -fFX proxpool
cannot import 'proxpool': one or more devices is currently unavailable
root@truenas[~]# zpool import
   pool: proxpool
     id: 14998297268174128833
  state: FAULTED
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        proxpool                                  FAULTED  corrupted data
          raidz1-0                                ONLINE
            e666f2de-a06c-494d-8aec-0f353fe019a2  ONLINE
            748cc698-beb9-564d-bd58-83f4b23cc0e2  ONLINE
            8ebf3eb1-2423-e44d-ae48-cb4371987123  ONLINE
            436e3c7f-3714-374a-8602-bbdd92e29608  ONLINE
root@truenas[~]# zpool import -f proxpool
cannot import 'proxpool': I/O error
        Destroy and re-create the pool from
        a backup source.

root@truenas[~]# zpool import -fFX proxpool
cannot import 'proxpool': one or more devices is currently unavailable
root@truenas[~]#

-------------------------------------------------------

I tried importing with the -f flag and with the -fFX flags, as you can see above. The latter returns that one or more devices are currently unavailable. I also did a fresh install of TrueNAS SCALE, on bare metal this time (previously I ran TrueNAS SCALE as a VM under Proxmox), on a new boot disk, to rule out the boot drive as the cause. I still have the old boot drive and could use it if needed.
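
One thing I have not tried yet is a read-only import, which as far as I understand avoids writing anything to the pool during the import attempt (readonly=on is a standard zpool import property, though I don't know whether it helps with this particular failure):

zpool import -o readonly=on -f proxpool

If that were to succeed, I could at least copy the data off before destroying and recreating the pool.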

The data on the pool is not critical, but it holds my 4K movies and all the CDs I ripped, and redoing all of that would take me quite some time.
Does anyone have tips on what I could still try? Or is this the end of the line and should I throw in the towel?
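
One idea I came across in other recovery threads, but have not dared to try on my own, is rolling the pool back to an earlier transaction group with the -T option (which is apparently undocumented and risky), combined with a read-only import, along the lines of:

zpool import -o readonly=on -fFX -T <older-txg> proxpool

where <older-txg> is just a placeholder for a transaction group number somewhat below the current one from the label. I would rather hear from someone experienced before attempting that.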

Thank you in advance.

Gerco van der Boon
The Netherlands
 

Imbean

Cadet
Joined
Jan 2, 2021
Messages
2
Here is some additional info:
root@truenas[~]# zdb -l /dev/sda1
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'proxpool'
    state: 0
    txg: 22658
    pool_guid: 14998297268174128833
    errata: 0
    hostid: 1215758018
    hostname: 'truenas'
    top_guid: 11126089731963044583
    guid: 15118724200382042372
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 11126089731963044583
        nparity: 1
        metaslab_array: 256
        metaslab_shift: 34
        ashift: 12
        asize: 24004641423360
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 15118724200382042372
            path: '/dev/disk/by-partuuid/e666f2de-a06c-494d-8aec-0f353fe019a2'
            devid: 'ata-WDC_WD60EFRX-68L0BN1_WD-WX51D175F1CS-part1'
            phys_path: 'pci-0000:00:1f.2-ata-2.0'
            whole_disk: 1
            DTL: 913
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 7660157904446496606
            path: '/dev/disk/by-partuuid/748cc698-beb9-564d-bd58-83f4b23cc0e2'
            devid: 'ata-WDC_WD60EFRX-68L0BN1_WD-WX51D175FUDN-part1'
            phys_path: 'pci-0000:00:1f.2-ata-3.0'
            whole_disk: 1
            DTL: 912
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 2554733477272079798
            path: '/dev/disk/by-partuuid/8ebf3eb1-2423-e44d-ae48-cb4371987123'
            devid: 'ata-WDC_WD60EFRX-68L0BN1_WD-WX11DB65Z8DS-part1'
            phys_path: 'pci-0000:00:1f.2-ata-4.0'
            whole_disk: 1
            DTL: 911
            create_txg: 4
        children[3]:
            type: 'disk'
            id: 3
            guid: 7071953530495592961
            path: '/dev/disk/by-partuuid/436e3c7f-3714-374a-8602-bbdd92e29608'
            devid: 'ata-WDC_WD60EFRX-68L0BN1_WD-WX41DC6A1VKS-part1'
            phys_path: 'pci-0000:00:1f.2-ata-6.0'
            whole_disk: 1
            DTL: 910
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
        com.klarasystems:vdev_zaps_v2
    labels = 0 1 2 3


This is one of the four drives; if needed, I can provide this info for the other three.
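
If it helps, I can also dump the labels for all four drives in one go with a quick loop (assuming the data partitions show up as sda1 through sdd1 on this box; otherwise the /dev/disk/by-partuuid/ paths from the label above can be substituted):

for d in /dev/sd[a-d]1; do echo "== $d =="; zdb -l "$d"; done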
 