Can't import zpool on TrueNAS SCALE under Proxmox

anet_id

Cadet
Joined
Feb 21, 2024
Messages
4
Hello TrueNAS community,

The problem is that I can't import my ZFS pool with the command zpool import -f zpoolname.

I get this error in the logs:
"zio pool=zpoolname vdev=/dev/disk/by-partuuid/xxxxxx-xxxxxxx-xxxxxx-xxxxxx-xxxxx error=5 type=1 offset=846329233408 size=4096 flags=572992"
"sd 4:0:0:2 reservation conflict"
Video can't import zpool

The message appears continuously, in a loop.
I run TrueNAS SCALE as a VM under Proxmox.

Is there a solution so that I can import the zpool and access my data?
Is there another way to access the data inside vm-100-disk-0.raw?
I really need that data.


Please help me

Thank You
 

Attachments

  • Cuplikan layar 2024-02-18 195521.png

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
you have not provided either your hardware or your VM setup. without this, all I can do is guess.

anet_id said: "vm-100-disk-0.raw?"
this sounds like you did something unsupported, and there are many of those that make data inaccessible.
the fact that you are pointing to a file at all makes me suspect you used some kind of virtual disk, which is a very bad idea with zfs. you need to pass through the hardware due to issues exactly like this.
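for reference, this is roughly what hardware passthrough looks like on the proxmox side (just a sketch; the PCI address and disk ID below are placeholders, find your own with lspci and ls -l /dev/disk/by-id):

Code:
# pass a whole SATA/HBA controller through to VM 100 (the recommended way)
qm set 100 -hostpci0 0000:01:00.0
# or at least pass whole physical disks through by ID instead of using a .raw file
qm set 100 -scsi1 /dev/disk/by-id/ata-ST2000DM008-XXXXXX_XXXXXXXX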

what changed that you are importing the pool? can you go back to the original config that was working? if so, you need to do so and make a backup and then build your VM properly.

do not deviate from the recommendations for running truenas as a VM, as it's VERY easy to make a time bomb.

if you post the above info more advice might be possible.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
The reservation conflict sounds like the Proxmox host has the pool on your vm-100-disk-0.raw imported. Thus, the VM can't import the pool at the same time.
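A quick way to check is on the Proxmox host itself (a rough sketch; substitute the pool name that actually shows up there):

Code:
# Run on the Proxmox host, not inside the TrueNAS VM
zpool list                  # does the data pool appear here?
zpool status                # if yes, the host has it imported
zpool export data-pribadi   # release it so the VM can import it (pool name is just an example)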

But @artlessknave is correct, both about needing additional information and about not deviating from the reliable method of passing the storage hardware through to the VM.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Arwen said: "the Proxmox host has your vm-100-disk-0.raw pool imported."
ahh. that could make sense. if so, that would confirm that you need to redesign it, as you obviously NEVER want that to happen.
 

anet_id

Cadet
Joined
Feb 21, 2024
Messages
4
Thank you very much for the response.

Here is my hardware information:
Code:
Processor : AMD Ryzen 3 2200G
RAM       : 24GB (8GB + 16GB) 2400MHz
iGPU      : APU Vega 8
Hard disks:
- Seagate 1TB Barracuda 7200RPM
- Seagate 2TB Barracuda 7200RPM
- WD 3TB Purple 7200RPM


I will explain what happened.

I run TrueNAS SCALE as a VM under Proxmox.
The configuration I used was to add a ZFS directory storage on the hard disk in Proxmox, then I added a TrueNAS SCALE VM and created a .raw virtual disk for it.
After everything was set up and TrueNAS was running normally, TrueNAS suddenly crashed. I tried restarting the TrueNAS VM, but it would not come back up, so I forced the VM to shut down with the stop button in Proxmox.
After that I restarted the Proxmox server, and the result was that the pool on the hard disk was degraded, so the virtual disk vm-100-disk-0.raw was not readable by the TrueNAS SCALE VM.
I ran a zpool clear on the degraded pool on the Proxmox side; after that the TrueNAS SCALE VM could read the disk, but the pool status in TrueNAS was exported.
I tried the command zpool import poolname and got this message:

Code:
# zpool import data-pribadi
cannot import 'share': I/O error
         Recovery is possible, but will result in some data loss.
         Returning the pool to its state as of Sun Feb 18 14:31:07 2024
         should correct the problem. Approximately 53 seconds of data
         will have to be discarded, irreversibly. Recovery can be
         attempted by executing 'zpool import -F data-pribadi'. A scrub of the pool
         is strongly recommended following a successful recovery.
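
(For reference, from what I have read, the -F recovery can also be tested first without committing to it, and the pool can be imported read-only so data can be copied off before any repair. A rough sketch, assuming the pool name data-pribadi:)

Code:
zpool import -F -n data-pribadi               # dry run: only checks whether the rewind would work
zpool import -F -o readonly=on data-pribadi   # import read-only so data can be copied off first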


I ran the command zpool import -F data-pribadi. The result is that my pool now appears in TrueNAS, but with a bad ZFS health status.
After that I ran a scrub and got this error log:

Code:
"zio pool=zpoolname vdev=/dev/disk/by-partuuid/xxxxxx-xxxxxxx-xxxxxx-xxxxxx-xxxxx error=5 type=1 offset=846329233408 size=4096 flags=572992"

"sd 4:0:0:2 reservation conflict"


The message appears continuously, in a loop.
I have uploaded a video of the error to YouTube:

Given the above, is there a solution so that the zpool can be imported?
Or is there another solution so that my data can be accessed, even using a method outside the Proxmox server?

Please help
Thank You.
 

anet_id

Cadet
Joined
Feb 21, 2024
Messages
4
artlessknave said:
you have not provided either your hardware or your VM setup. without this, all I can do is guess.
[...]
if you post the above info more advice might be possible.
Before the crash, I had backed up the VM, but without the virtual hard disk.
I have also tried recreating the TrueNAS SCALE VM with the default configuration (fresh install), but it still doesn't work and the same error appears:

Code:
"zio pool=zpoolname vdev=/dev/disk/by-partuuid/xxxxxx-xxxxxxx-xxxxxx-xxxxxx-xxxxx error=5 type=1 offset=846329233408 size=4096 flags=572992"
"sd 4:0:0:2 reservation conflict"


As another alternative, I also installed a desktop Ubuntu system and tried to import the pool from the virtual disk vm-100-disk-0.raw using zfsutils-linux,
but this is the error message I get:
Code:
zpool import -f private-data
This pool uses the following feature(s) not supported by this system:
         com.klarasystems:vdev_zaps_v2
cannot import 'storage': unsupported version or feature
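
(This last failure looks like a ZFS version mismatch rather than pool damage: as far as I can tell, com.klarasystems:vdev_zaps_v2 is a feature from OpenZFS 2.2, so an older zfsutils-linux cannot import the pool. A quick check, as a sketch:)

Code:
zfs version    # shows the installed OpenZFS version, e.g. zfs-2.1.5
# a pool using com.klarasystems:vdev_zaps_v2 needs OpenZFS 2.2 or newer to import,
# e.g. a newer live environment or a recent TrueNAS SCALE install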
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Sorry, I don't have any further suggestions. While I can recognize some errors, I don't know the fix. Perhaps someone else will know how to proceed.

Your configuration is known to be unreliable. The only way to reliably run TrueNAS SCALE as a VM is to pass the hardware disk controller(s) through to the VM, not a virtual disk (of any type).
 

anet_id

Cadet
Joined
Feb 21, 2024
Messages
4
Arwen said:
Sorry, I don't have any further suggestions. [...] The only way to reliably run TrueNAS SCALE as a VM is to pass the hardware disk controller(s) through to the VM, not a virtual disk (of any type).
Thank you very much for responding, and thank you very much for the advice.

OK, I understand that I misconfigured it.
If I use virtualization, I should pass the hard disks (or the controller) through directly, not create a virtual disk.

Over the past few days I have tried various approaches, and there is a chance to recover my data.
I tried copying the data from the virtual disk on Proxmox to an NTFS hard disk with this command:
Code:
dd if=/Seagate-1TB/DATA/images/100/vm-100-disk-0.raw of=/media/aldiansyah/DATA/100/vm-100-disk-0.raw

After it finished, it turned out that the size of the copied data did not match the virtual disk: the original virtual disk is around 740GB, but what I got was only 342GB (see the attached picture).
Code:
dd: error reading '/Seagate-1TB/DATA/images/100/vm-100-disk-0.raw': Input/output error

(attached screenshot: photo_2024-02-29_18-40-50.jpg)
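
(From what I have read, plain dd stops at the first read error unless told otherwise, which could explain why only part of the image was copied. For media with bad sectors, ddrescue, or dd with conv=noerror,sync, is usually suggested. A rough sketch using the same paths as above:)

Code:
# continue past unreadable sectors, padding them with zeros so offsets stay aligned
dd if=/Seagate-1TB/DATA/images/100/vm-100-disk-0.raw of=/media/aldiansyah/DATA/100/vm-100-disk-0.raw conv=noerror,sync bs=1M
# or, better, GNU ddrescue with a map file so the copy can be resumed and retried
ddrescue /Seagate-1TB/DATA/images/100/vm-100-disk-0.raw /media/aldiansyah/DATA/100/vm-100-disk-0.raw /media/aldiansyah/DATA/100/rescue.map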


Then I removed the NTFS hard disk and connected it to a Windows machine.
I tried third-party software, namely Hetman RAID Recovery.
After scanning, the results showed that some of my data could be accessed, and I immediately copied that data somewhere safe.
But a lot of it cannot be accessed, probably because the copy process ran into errors.

Can anyone here explain why the copy process with the dd command ran into an error?

Thank you very much for taking the time to read this discussion
 