Hyper-V TrueNAS 12 VM panic: VERIFY(err==0)

sekroots

Explorer
Joined
Feb 28, 2022
Messages
61
Hello.

I am running a TrueNAS VM under a Hyper-V cluster. It was working fine for around six months, but it failed today.

How do I force a fsck there?

Can anyone help me?

[attachment: 1646089167176.png]
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
How do I force a fsck there?

You do not. ZFS does not have a fsck of any sort. It would be virtually impossible to fsck petabyte- or exabyte-sized filesystems. ZFS relies on its checksumming and self-healing capabilities to maintain pool integrity.
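
The closest thing to a fsck in ZFS is a scrub, which walks every allocated block, checks it against its checksum, and repairs it from redundancy where it can. A minimal sketch (the pool name "tank" here is just a placeholder for whatever yours is called):

zpool scrub tank
zpool status -v tank    # shows scrub progress and any files with errors

A scrub only works on a pool that can actually be imported, though, which is exactly the problem here.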

You have potentially compromised this by virtualizing it in a non-recommended manner. This can cause fatal pool corruption if the hypervisor is writing stuff out to the disks in an inconsistent pattern, perhaps cached, who knows. You should be following the guidance I posted many years ago at [link in original post].

One poster recently commented "but that's so old". Bad ideas do not get better with age. Good ideas age well. The information is still relevant.

You could potentially be in luck, however. It looks to me like it is your boot pool that is ruined. Your main data pool may be unaffected. Try reinstalling TrueNAS. If you can, then import the pool, and then realize that the bad thing that happened to your boot pool could also happen to your main pool. BACK UP YOUR MAIN POOL AS SOON AS POSSIBLE.
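
If the reinstall works and the data pool imports, the fastest way to get a copy off the box is ZFS replication. A rough sketch, assuming your data pool is named tank and you have another FreeBSD/TrueNAS machine reachable over SSH (both names are placeholders):

zfs snapshot -r tank@rescue
zfs send -R tank@rescue | ssh root@backuphost zfs receive -F backup/tank

Anything else (rsync, copying files off over SMB) is also fine; the point is to get the data somewhere the hypervisor cannot mangle it.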

Then start over, either with ESXi as your hypervisor, or put the NAS on bare metal, or, if you have a tiny amount of storage, you may also try the approach described in [link in original post].

But if you insist on running TrueNAS on Hyper-V, plan to have good backups, and plan to need them.

Good luck.
 

sekroots

Explorer
Joined
Feb 28, 2022
Messages
61
Thanks for the info, @jgreco. We will move our TrueNAS to bare-metal servers.

But I would still like to recover the data from this pool. Any suggestions?

I am able to see it using zpool import -f, but the server restarts when I try to import it.

Mar 1 01:15:02 Cofre syslogd: last message repeated 4 times
Mar 1 01:15:21 Cofre login[82183]: ROOT LOGIN (root) ON ttyv0
Mar 1 01:24:10 Cofre syslogd: kernel boot file is /boot/kernel/kernel
Mar 1 01:24:10 Cofre kernel: [558] panic: VERIFY3(0 == dmu_buf_hold_array(os, object, offset, size, FALSE, FTAG, &numbufs, &dbp)) failed (0 == 5)
Mar 1 01:24:10 Cofre kernel: [558]
Mar 1 01:24:10 Cofre kernel: [558] cpuid = 3
Mar 1 01:24:10 Cofre kernel: [558] time = 1646108641
Mar 1 01:24:10 Cofre kernel: [558] KDB: stack backtrace:
Mar 1 01:24:10 Cofre kernel: [558] #0 0xffffffff8099fd15 at kdb_backtrace+0x65
Mar 1 01:24:10 Cofre kernel: [558] #1 0xffffffff80951d81 at vpanic+0x181
Mar 1 01:24:10 Cofre kernel: [558] #2 0xffffffff81aa904a at spl_panic+0x3a
Mar 1 01:24:10 Cofre kernel: [558] #3 0xffffffff81b0cac2 at dmu_write+0x62
Mar 1 01:24:10 Cofre kernel: [558] #4 0xffffffff81b98364 at space_map_write+0x194
Mar 1 01:24:10 Cofre kernel: [558] #5 0xffffffff81b65667 at metaslab_flush+0x3b7
Mar 1 01:24:10 Cofre kernel: [558] #6 0xffffffff81b8f759 at spa_flush_metaslabs+0x1a9
Mar 1 01:24:10 Cofre kernel: [558] #7 0xffffffff81b85fed at spa_sync+0xd6d
Mar 1 01:24:10 Cofre kernel: [558] #8 0xffffffff81b9a483 at txg_sync_thread+0x3b3
Mar 1 01:24:10 Cofre kernel: [558] #9 0xffffffff8090f5ce at fork_exit+0x7e
Mar 1 01:24:10 Cofre kernel: [558] #10 0xffffffff80cdd40e at fork_trampoline+0xe
Mar 1 01:24:10 Cofre kernel: [558] Uptime: 9m18s
Mar 1 01:24:10 Cofre kernel: ---<<BOOT>>---
 

sekroots

Explorer
Joined
Feb 28, 2022
Messages
61
root@Cofre:/var/log # zdb -l /dev/da1p2
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'boot-pool'
    state: 0
    txg: 7895779
    pool_guid: 1525392888318996755
    errata: 0
    hostname: ''
    top_guid: 1561236375997801541
    guid: 1561236375997801541
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 1561236375997801541
        path: '/dev/da0p2'
        whole_disk: 1
        metaslab_array: 64
        metaslab_shift: 30
        ashift: 12
        asize: 136075280384
        is_log: 0
        DTL: 150
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3
root@Cofre:/var/log #
 

sekroots

Explorer
Joined
Feb 28, 2022
Messages
61
root@Cofre:/mnt # zpool import -R /mnt/truenas/ boot-pool
cannot import 'boot-pool': pool was previously in use from another system.
Last accessed by <unknown> (hostid=0) at Sat Feb 26 16:34:28 2022
The pool can be imported, use 'zpool import -f' to import the pool.
root@Cofre:/mnt #
 

sekroots

Explorer
Joined
Feb 28, 2022
Messages
61
root@Cofre:/mnt # smartctl -a -q noserial /dev/da1 -T permissive
smartctl 7.2 2021-09-14 r5236 [FreeBSD 13.0-STABLE amd64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor: Msft
Product: Virtual Disk
Revision: 1.0
Compliance: SPC-3
User Capacity: 136,365,211,648 bytes [136 GB]
Logical block size: 512 bytes
Physical block size: 4096 bytes
LU is thin provisioned, LBPRZ=0
>> Terminate command early due to bad response to IEC mode page

=== START OF READ SMART DATA SECTION ===
Current Drive Temperature: 0 C
Drive Trip Temperature: 0 C

Error Counter logging not supported

Device does not support Self Test logging
root@Cofre:/mnt #
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
But I would still like to recover the data from this pool. Any suggestions?

The boot pool is clearly in a bad way. Nothing you've posted gives information on your data storage pool.

Since the boot pool primarily holds the OS, it would probably be best to try reinstalling TrueNAS, and then see if it sees your data storage pool.
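
One more thing worth trying: the backtrace you posted panics in the pool's write path (dmu_write via spa_sync while flushing space maps), so importing the damaged pool read-only may keep it from ever reaching that code. No guarantees on a corrupted pool, but roughly:

zpool import -o readonly=on -f -R /mnt/truenas boot-pool
zpool status -v boot-pool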
 

sekroots

Explorer
Joined
Feb 28, 2022
Messages
61
I did this:

root@Cofre:~ # zpool import -o readonly=on -f -R /mnt/truenas/ boot-pool
root@Cofre:~ #
root@Cofre:~ # zpool status -xv
  pool: boot-pool
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 00:00:10 with 1 errors on Thu Feb 24 03:45:10 2022
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          da1p2     ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        boot-pool/.system/rrd-17a88b624aa64e01b68939563b5fcb8c:/localhost/df-mnt-tank-data-volumes-mycompany-stable-rollbacks-stable-d587c/df_complex-free.rrd
root@Cofre:~ #
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well that's not a catastrophic file loss. What's the data pool look like?
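
Something like this (assuming the data pool is named tank) would show whether it came through intact:

zpool status -v tank
zpool list tank

If it isn't imported yet, a plain zpool import with no arguments will list any pools the system can see.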
 

sekroots

Explorer
Joined
Feb 28, 2022
Messages
61
The boot pool is clearly in a bad way. Nothing you've posted gives information on your data storage pool.

Since the boot pool primarily holds the OS, it would probably be best to try reinstalling TrueNAS, and then see if it sees your data storage pool.
I got the same error when trying to reinstall TrueNAS on that disk.

[attachment: 1646226346775.png]
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yes, TrueNAS is not known to work well (or at all) on Hyper-V. Even less so on Microsoft virtual disks. Try deleting the virtual disk and using a real disk via PCIe passthru. That might work.
 

sekroots

Explorer
Joined
Feb 28, 2022
Messages
61
root@Cofre:/mnt # zfs set mountpoint=/mnt/truenas boot-pool
cannot set property for 'boot-pool': pool is read-only
root@Cofre:/mnt #
 

sekroots

Explorer
Joined
Feb 28, 2022
Messages
61
root@Cofre:/mnt # mount -t zfs -o zfsutil boot-pool /mnt/truenas/
root@Cofre:/mnt # cd /mnt/truenas/
root@Cofre:/mnt/truenas # s
s: Command not found.
root@Cofre:/mnt/truenas # ls
root@Cofre:/mnt/truenas # mount
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs)
/dev/da0p1 on /boot/efi (msdosfs, local)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, noatime, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
boot-pool on /mnt/truenas (zfs, local, noatime, read-only, nfsv4acls)
root@Cofre:/mnt/truenas #
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Excuse me? I want to access data from this pool (inside this virtual disk).

The boot pool is reserved for the system. It is not supposed to be able to be shared for NAS purposes.
 

sekroots

Explorer
Joined
Feb 28, 2022
Messages
61
The boot pool is reserved for the system. It is not supposed to be able to be shared for NAS purposes.
Agreed. Who said I would share it?

As I've been saying, I installed TrueNAS 12.0-RELEASE on disk1 and attached disk2 to store my data. For some reason, TrueNAS stored my configuration files on disk1. After the disk1 failure I installed a new TrueNAS, attached disk2, and my data pool was there, *BUT* I lost my TrueNAS configs.


I just want to restore my TrueNAS configs stored on disk1.
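
From what I understand, the TrueNAS CORE config database normally lives at /data/freenas-v1.db inside the active boot environment dataset, not in the pool's top-level dataset, which would explain the empty directory above. So the plan, roughly (the boot environment name "default" is a guess; zfs list shows the real one):

zfs list -r boot-pool
mkdir -p /mnt/bootenv
mount -t zfs -o zfsutil boot-pool/ROOT/default /mnt/bootenv
ls -l /mnt/bootenv/data/freenas-v1.db
cp /mnt/bootenv/data/freenas-v1.db /root/

If that file is readable, it can be uploaded to the fresh install from the web UI (System -> General -> Upload Config).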
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Agreed. Who said I would share it?

Well I think

I had only one hdd in this pool and no backup.
I want to access data from this pool (inside this virtual disk).
zfs set mountpoint=/mnt/truenas boot-pool

was sufficiently clear to suggest that. Anyways, I've given you the help I have to give, so, either follow my advice, hope someone else shows up with ideas, or good luck with that.
 