TrueNAS SCALE on Hyper-V: pool is exported, cannot import in GUI

tsgill

Dabbler
Joined
Aug 13, 2023
Messages
19
Hi,

I had to set up TrueNAS as a VM in Hyper-V because I wanted to have only one computer instead of running multiple machines.

Everything was working fine. Today I shut down my computer, disconnected the SATA cables, and attached serial number labels to the hard drives, in case one of them ever needs to be replaced. I may have swapped some SATA cables in the process. I didn't think TrueNAS would need the drives in a certain sequence. However, the pool now shows as exported and no longer exists. All 4 drives show as exported, and the pool is called TrueNasHome.

Not sure what to do. I don't know many shell commands, so I tried via the GUI. If I need to run shell commands, please give me step-by-step instructions. None of the commands I tried worked; even ls didn't work when I went into the shell window in TrueNAS.

I have the latest TrueNAS SCALE installed on Hyper-V as a VM. I passed through all 4 hard drives via SCSI; two of them are parity drives, similar to RAID 6 (I don't remember what ZFS calls it now).

How can I restore my exported pool, please? I do not want to lose the data. Do the drives need to be passed through to the VM in a certain sequence?

Any help would be appreciated.
 

tsgill

Dabbler
Joined
Aug 13, 2023
Messages
19
Some Screenshots

(screenshot attached)


I tried creating a new VM to see if the pool might show up in the import dropdown, which it did, but the import failed. Please see the log below.


Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 438, in import_pool
zfs.import_pool(found, pool_name, properties, missing_log=missing_log, any_host=any_host)
File "libzfs.pyx", line 1265, in libzfs.ZFS.import_pool
File "libzfs.pyx", line 1293, in libzfs.ZFS.__import_pool
libzfs.ZFSException: cannot import 'TrueNASHome' as 'TrueNASHome': I/O error

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 115, in main_worker
res = MIDDLEWARE._run(*call_args)
File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run
return self._call(name, serviceobj, methodobj, args, job=job)
File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
return methodobj(*params)
File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
return methodobj(*params)
File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1382, in nf
return func(*args, **kwargs)
File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 444, in import_pool
self.logger.error(
File "libzfs.pyx", line 465, in libzfs.ZFS.__exit__
File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 442, in import_pool
raise CallError(f'Failed to import {pool_name!r} pool: {e}', e.code)
middlewared.service_exception.CallError: [EZFS_IO] Failed to import 'TrueNASHome' pool: cannot import 'TrueNASHome' as 'TrueNASHome': I/O error
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/middlewared/job.py", line 428, in run
await self.future
File "/usr/lib/python3/dist-packages/middlewared/job.py", line 463, in __run_body
rv = await self.method(*([self] + args))
File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1378, in nf
return await func(*args, **kwargs)
File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1246, in nf
res = await f(*args, **kwargs)
File "/usr/lib/python3/dist-packages/middlewared/plugins/pool.py", line 1459, in import_pool
await self.middleware.call('zfs.pool.import_pool', guid, opts, any_host, use_cachefile, new_name)
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1395, in call
return await self._call(
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1352, in _call
return await self._call_worker(name, *prepared_call.args)
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1358, in _call_worker
return await self.run_in_proc(main_worker, name, args, job)
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1273, in run_in_proc
return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1258, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [EZFS_IO] Failed to import 'TrueNASHome' pool: cannot import 'TrueNASHome' as 'TrueNASHome': I/O error
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Aside from the specific recommendation to avoid using Hyper-V as the hypervisor if TrueNAS must be virtualized, it's worth mentioning that what you're doing with the disks (even if you do get it to work) will be expected to result in eventual pool loss.

I recommend strongly that you don't keep any data that you care about on such a pool.

 

tsgill

Dabbler
Joined
Aug 13, 2023
Messages
19
Nothing. It says "command not found". Please don't bite my head off, I have barely any experience with the shell.

(screenshot attached)
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Just got the command to work

(screenshot attached)
Metadata corruption is never pleasant.

I can only surmise that something in the host OS/Hyper-V passthrough didn't correctly feed things back to the actual bare-metal disks, and has resulted in an inconsistent state on the disks. Hopefully a prior transaction group is intact.

Please try zpool import -Fn TrueNASOnyx - the "F" is for "force import by rewinding" but the "n" is "don't actually do it" - this will hopefully give you an indication if the pool can be rolled back to an earlier transaction, and how much data might be lost if that's the case.

If it's successful and provides an acceptable rollback time, you can try importing with zpool import -F TrueNASOnyx.

If it's not successful, try zpool import -FXn TrueNASOnyx to attempt "X" extreme rollbacks, but still not actually complete the import.

If that's not successful, we may have to go even more aggressively searching.
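Putting those steps together, here is a minimal sketch of the recovery sequence. The pool name `TrueNASOnyx` is taken from the posts above; substitute your own. The `zpool status` and `zpool scrub` lines at the end are my own suggested follow-up once the import succeeds, not part of the instructions above.

```shell
# Dry-run rewind: -F rewinds the pool to an earlier transaction group,
# -n only reports whether the rewind would succeed and how much data
# would be lost, without changing anything on disk.
zpool import -Fn TrueNASOnyx

# If the dry run reports an acceptable rollback window, do it for real.
zpool import -F TrueNASOnyx

# If the plain dry run fails, try an extreme rewind (-X), again as a
# dry run first so nothing is committed yet.
zpool import -FXn TrueNASOnyx

# Suggested follow-up after a successful import: check pool health and
# start a scrub so ZFS verifies all data against its checksums.
zpool status -v TrueNASOnyx
zpool scrub TrueNASOnyx
```

These commands must run as root on the TrueNAS VM itself (e.g. via the System Settings → Shell page), and they require the pool to be visible to `zpool import` in the first place.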
 

tsgill

Dabbler
Joined
Aug 13, 2023
Messages
19
I ran it, and it said the pool is recoverable with some data loss covering 536 minutes.
I'm not too worried about losing those 536 minutes, because the server was sitting idle all night.


I tried also this
Code:
zpool import -F TrueNASOnyx


and it imported successfully. Now the pool shows 2 errors. How do I fix that?
(screenshot attached)
 

tsgill

Dabbler
Joined
Aug 13, 2023
Messages
19
Hopefully this is not the reason it happened. Before all this, I rearranged the drives and labelled them. Since they were passed through individually, I think I may have passed through the wrong drive, one that was not part of the pool but was connected where one of the pool drives used to be. I might also have changed the sequence of the drives, because I didn't keep track of the cables while labelling them. After the pool was gone, I noticed yesterday under Disk Management that a drive was offline. At first I could not bring it online, but somehow I managed to. Now I can toggle it offline/online under Disk Management.
 

tsgill

Dabbler
Joined
Aug 13, 2023
Messages
19
Thank you very much. Everything works great. I'm thinking maybe I should go with Unraid instead; maybe it's better suited to running as a VM.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
As general advice: if you're not able to get a disk controller/HBA that can be put into PCIe passthrough to the VM, ZFS is not for you under any VM OS.

If you run UNRAID, you're clear of ZFS, so the disk-level passthrough you're currently doing should be fine.

A decent HBA is available on eBay for something like US$50, if you care to seek one out and use it with ZFS.
 

tsgill

Dabbler
Joined
Aug 13, 2023
Messages
19
Can you please suggest a good HBA? I will get one right away. I'm just worried I might get the wrong one.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
The Quick hardware recommendations guide (in resources) says this:

Any card using the SAS2008, SAS2308 or SAS3008 controller will work, but some may need to be crossflashed to IT mode to operate as HBAs and not entry-level RAID cards. This includes rebadged cards from other vendors – although Dell cards are somewhat trickier to crossflash.

You're only talking about hard drives (not SSDs), so you can get away with a SAS2 card like this one:


You're looking for the LSI 9211. I've linked to TheArtOfServer as I see others in the forum rate them well, but I have no affiliation to them and you can feel free to search for other vendors of the same products (of which there are many).
 
Last edited:

tsgill

Dabbler
Joined
Aug 13, 2023
Messages
19
Thanks. This one has only two ports, and it does not look like it has SATA connections.

I have 6x 18 TB Barracuda NAS drives, which are SATA. I have no experience with HBA cards, since this is my first build. Since my TrueNAS is a VM, I would like to choose the right one.
 