Pool went missing and can't re-import it

Junior007

Cadet
Joined
Dec 10, 2023
Messages
5
Hi TrueNAS Community,

I'm currently having an issue with my TrueNAS. I decided to power it down for a couple of days as I wasn't using it and was getting ready to move it to a new room.

However, when I powered the machine back on a week or so later, after relocating it to my other room, the TrueNAS server had managed to lose the pool that I created.

Here is my TrueNAS system:

CPU: Intel Core i3-12100
Motherboard: Asus Prime H610M-E
Drives: 4 x Seagate 4TB BarraCuda 3.5-inch hard drive, SATA III, 5400 RPM, 256MB cache
Crucial P3 1TB NVMe (currently set up as a Cache VDEV)
Boot Drive: WD Blue 250GB SSD
RAM: 16GB DDR4 G.SKILL (2 x 8GB)
NIC: TP-Link TX401 10GbE card

I looked it up and saw that someone else had a similar issue and fixed it by exporting the pool and re-importing it; however, I am unable to re-import mine.

[Attached screenshot: 1704357471014.png]

[Attached screenshot: 1704357542221.png]

And I get the below error:

[EZFS_BADDEV] Failed to import 'STORAGE' pool: cannot import 'STORAGE' as 'STORAGE': one or more devices is currently unavailable

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 438, in import_pool
    zfs.import_pool(found, pool_name, properties, missing_log=missing_log, any_host=any_host)
  File "libzfs.pyx", line 1265, in libzfs.ZFS.import_pool
  File "libzfs.pyx", line 1293, in libzfs.ZFS.__import_pool
libzfs.ZFSException: cannot import 'STORAGE' as 'STORAGE': one or more devices is currently unavailable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 115, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
    return methodobj(*params)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
    return methodobj(*params)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1383, in nf
    return func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 444, in import_pool
    self.logger.error(
  File "libzfs.pyx", line 465, in libzfs.ZFS.__exit__
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 442, in import_pool
    raise CallError(f'Failed to import {pool_name!r} pool: {e}', e.code)
middlewared.service_exception.CallError: [EZFS_BADDEV] Failed to import 'STORAGE' pool: cannot import 'STORAGE' as 'STORAGE': one or more devices is currently unavailable
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 427, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 465, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1379, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1247, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool.py", line 1459, in import_pool
    await self.middleware.call('zfs.pool.import_pool', guid, opts, any_host, use_cachefile, new_name)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1368, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1325, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1331, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1246, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1231, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [EZFS_BADDEV] Failed to import 'STORAGE' pool: cannot import 'STORAGE' as 'STORAGE': one or more devices is currently unavailable

I have also tried the command line:
zpool import STORAGE
Error: cannot import 'STORAGE': one or more devices is currently unavailable

I stopped here, as I don't want to lose the data I have on the drives by trying to recreate the pool.

[Attached screenshot: 1704357970271.png]

I was accessing this data through SMB; some help in getting this data back would be greatly appreciated.

Thanks team, as I am not sure what to do.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
First, correct. You don't want to "re-create" your pool unless you want to destroy the existing data.

Next, Seagate BarraCuda disks are not suitable for use with ZFS as they are SMR. But this should not make you lose data.

This item is odd:
Crucial P3 1TB NVMe (currently set up as a Cache VDEV)
In general, ZFS L2ARC / Cache devices should be at most 5 times the size of memory. In your case, 80GBytes. Some say it is possible / reasonable to go to 10 times, which for you would be 160GBytes. Having a 1TByte L2ARC / Cache device with only 16GBytes of RAM means you can starve your L1ARC / ARC...
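If you want to see the effect for yourself, here is a minimal sketch, assuming TrueNAS SCALE (Linux / OpenZFS) where the standard arcstats counters are available:

# Overall ARC / L2ARC statistics from the OpenZFS userland tool
arc_summary | less
# Or just the key counters: 'size' is the ARC in RAM, 'l2_size' is data held
# on the L2ARC device, and 'l2_hdr_size' is the RAM consumed by its headers
awk '$1 == "size" || $1 == "l2_size" || $1 == "l2_hdr_size"' /proc/spl/kstat/zfs/arcstats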


Please supply the output of these commands. They will tell us more.
zpool import
lsblk
 

Junior007

Cadet
Joined
Dec 10, 2023
Messages
5
First, correct. You don't want to "re-create" your pool unless you want to destroy the existing data.

Next, Seagate BarraCuda disks are not suitable for use with ZFS as they are SMR. But this should not make you lose data.

This item is odd:

In general, ZFS L2ARC / Cache devices should be at most 5 times the size of memory. In your case, 80GBytes. Some say it is possible / reasonable to go to 10 times, which for you would be 160GBytes. Having a 1TByte L2ARC / Cache device with only 16GBytes of RAM means you can starve your L1ARC / ARC...


Please supply the output of these commands. They will tell us more.
zpool import
lsblk

Hello Arwen,
Oh right, I was told by a friend of mine that you need 1GB of RAM per 1TB of data (16GB RAM - 16TB of hard disks), and as the motherboard had a second NVMe slot I did play with it for a bit before just leaving it as is.

Yeah, as soon as I was testing my system when I first built it I realised the mistake I made in getting SMR drives, which will one day be swapped over to NAS drives.


Regarding the zpool import, see below:

[Attached screenshot of the zpool import output: 1704422520904.png]

root@truenas[~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 3.6T 0 disk
├─sda1 8:1 0 2G 0 part
└─sda2 8:2 0 3.6T 0 part
sdb 8:16 0 3.6T 0 disk
├─sdb1 8:17 0 2G 0 part
└─sdb2 8:18 0 3.6T 0 part
sdc 8:32 0 3.6T 0 disk
├─sdc1 8:33 0 2G 0 part
└─sdc2 8:34 0 3.6T 0 part
sdd 8:48 0 3.6T 0 disk
├─sdd1 8:49 0 2G 0 part
└─sdd2 8:50 0 3.6T 0 part
nvme1n1 259:0 0 232.9G 0 disk
├─nvme1n1p1 259:1 0 1M 0 part
├─nvme1n1p2 259:2 0 512M 0 part
├─nvme1n1p3 259:3 0 216.4G 0 part
└─nvme1n1p4 259:4 0 16G 0 part
└─nvme1n1p4 253:0 0 16G 0 crypt [SWAP]
nvme0n1 259:5 0 931.5G 0 disk
└─nvme0n1p1 259:6 0 931.5G 0 part
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Looks like you set up a SLOG / Log device, but no longer have it. Are you sure the 1TB NVMe drive was a Cache VDEV? Perhaps you mistakenly added it as a SLOG / Log vDev.

Do as the command says. If the shutdown was unclean, then you may lose the data that was in flight. But unless you can re-install the missing SLOG device, you don't have many choices.
zpool import -m STORAGE

Once the pool is imported, remove the log device from the command line. Then export your pool, again from the command line. This allows you to then import the pool from the GUI.
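As a rough sketch of that sequence, run from the TrueNAS shell; the long number below is a made-up placeholder for the GUID of the missing log device, which zpool status will show after the import:

zpool import -m STORAGE                    # import despite the missing log vdev
zpool status STORAGE                       # note the GUID listed for the missing log device
zpool remove STORAGE 1234567890123456789   # remove the missing log vdev by that GUID
zpool export STORAGE                       # export so the pool can be re-imported from the GUI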

Please note: do not expect any GUI access to your pool. Mixing command line and GUI is not only unsupported, it actually does not work at all in some cases. But troubleshooting as I've listed does work. Plus, some other command-line work is fully supported. There is a learning curve to understand which is which.


The old 1GB of RAM per 1TB of disk was a suggestion, not a hard-and-fast rule. Other suggestions about the width of various RAID-Zx layouts are not relevant today due to compression.
 
Last edited:

Junior007

Cadet
Joined
Dec 10, 2023
Messages
5
Looks like you set up a SLOG / Log device, but no longer have it. Are you sure the 1TB NVMe drive was a Cache VDEV? Perhaps you mistakenly added it as a SLOG / Log vDev.

Do as the command says. If the shutdown was unclean, then you may lose the data that was in flight. But unless you can re-install the missing SLOG device, you don't have many choices.
zpool import -m STORAGE

Once the pool is imported, remove the log device from the command line. Then export your pool, again from the command line. This allows you to then import the pool from the GUI.

Please note: do not expect any GUI access to your pool. Mixing command line and GUI is not only unsupported, it actually does not work at all in some cases. But troubleshooting as I've listed does work. Plus, some other command-line work is fully supported. There is a learning curve to understand which is which.


The old 1GB of RAM per 1TB of disk was a suggestion, not a hard-and-fast rule. Other suggestions about the width of various RAID-Zx layouts are not relevant today due to compression.

Whoever you are, you are the REAL MVP as your role says. I did what you told me and it got it up and running. Amazing work!
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Glad that worked for you.
 