Need Help with Importing Offline Pool After Power Outage

tamen

Cadet
Joined
Aug 9, 2023
Messages
5
Hello TrueNAS community,

I'm facing an issue with my TrueNAS server after a recent power outage. The server shut down unexpectedly due to the power loss, and since then, my data pool has been showing as offline. I've attempted to restore the connection by disconnecting and reconnecting, but unfortunately, I'm unable to access any data or re-add the pool in TrueNAS.

When I go to the WebUI and navigate to "Storage" -> "Import Pool," I'm encountering the following error message:
Error importing pool - 2090 is not a valid Error
Long Version - Pastebin

This error has been puzzling, and I've been trying to find a solution. I did some research and came across a similar thread on the TrueNAS forums (link: https://www.truenas.com/community/threads/zfs-pool-corrupted.98445/) where a user faced a similar issue. In their case, the solution was to reimport the ZFS pool within the shell. Unfortunately, attempting this method has not resolved my problem.

When I run the command zpool import -f, I get the following output:
pool: inotank
id: 17677109458747968008
state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported using
the '-f' flag.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
config:
inotank FAULTED corrupted data
mirror-0 ONLINE
61207036-955f-447d-b85a-c387515d3892 ONLINE
2c5f554c-47f7-41ae-a963-90deeea598a8 ONLINE
indirect-1 FAULTED corrupted data
Additionally, I've tried the command zpool import -f -FX inotank and received the error message

"cannot import 'inotank': one or more devices is currently unavailable."

For context, here's the configuration file of my VM-Ware:

agent: 1
boot: order=scsi0;net0
cores: 4
memory: 8192
meta: creation-qemu=7.2.0,ctime=1682874672
name: truenas
net0: virtio=96:7F:43:95:5C:FC,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-101-disk-0,iothread=1,size=32G
scsi1: /dev/disk/by-id/ata-ST14000NM000J-2TX103_ZR600TVG,size=13039G
scsi2: /dev/disk/by-id/ata-ST14000NM000J-2TX103_ZR600WMS,size=13039G
scsi3: /dev/disk/by-id/ata-ST16000NM001G-2KK103_ZL2PVF38,size=14902G
scsihw: virtio-scsi-single
smbios1: uuid=73e69195-753a-45f3-af87-4ae7ec9ecac3
sockets: 1
vmgenid: 9dcfe925-168e-443e-ae05-e68d12a62aa1
I would greatly appreciate any insights, suggestions, or guidance to resolve this issue. If you've encountered a similar situation or have expertise with ZFS pools, please share your knowledge. Thank you in advance for your assistance.

Best regards
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
If you could give complete hardware configuration, including brands & models, as well as TrueNAS version, those would help.

Next, it appears, though I have no direct experience with it, that you removed a Mirror vDev from your pool, leaving an "indirect-1" vDev. Is this true?

Further, you imply that your TrueNAS is running as a VM under VMware. If this is the case, there are some things that are needed to make a reliable VM of TrueNAS (i.e., to reduce the chances of data loss):
Specifically, pass through the disk controller, NOT the disks.

Other than asking questions, I can't offer any advice. Perhaps someone else can, hopefully with the additional information I requested.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
obligatory: "RAID is not a backup". hopefully you have a backup, as this looks...bad.
obligatory: you need to post your hardware, and importantly, how the disks are connected, and you need to map out which drives are online.

if you virtualized or passed through the disks, there is a really good chance that in transit data was lost to the pool, and it's in an inconsistent state. the only reliable way is to pass through the entire disk controller via PCI passthrough.
depending on what, exactly, went wrong, you *might* have some luck importing the pool readonly. if so, you MUST back up the data and read the docs on virtualizing truenas. it really looks like this was designed to fail.
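if you want to try that, the read-only attempt would be something roughly like this (the altroot path is just an example, adjust as needed):
Code:
# read-only import attempt; -o readonly=on should avoid writing anything to the pool
zpool import -o readonly=on -f -R /mnt inotank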

your import command looks...strange. from what I can find, the indirect-1 comes from removing devices, but afaik a failed or removed disk from a mirror vdev should never cause redirection tables, as there is nothing to redirect. mirrors are simple copies; there are no parity calculations.
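for reference, an indirect vdev normally shows up after a *top-level* vdev removal, roughly like this (pool and disk names here are made up purely for illustration):
Code:
# hypothetical pool: a 2-disk mirror, then a plain disk added as its own top-level vdev
zpool create demo mirror sdb sdc
zpool add -f demo sdd       # -f overrides the "mismatched replication level" warning
# removing the single-disk vdev migrates its data onto the mirror and leaves
# an "indirect-1" placeholder behind in the pool layout
zpool remove demo sdd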

what was this set up as? if that was a simple mirror of 3 drives, there should be no problems.

as it looks like you used <quote> instead of <code>, the indentation is all messed up. this is VERY important, as the indentation tells us the topology of your pool.
the way the import is showing your pool potentially indicates to me that the device that would have been at indirect-1 was a stripe. if so, this pool is dead, as that vdev is lost. my guess is you had a mirror of the 2 smaller HDDs and at some point managed to add the 16TB as a stripe, defeating the redundancy.
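to illustrate the difference (not something to run now, and the new-disk name is just a placeholder):
Code:
# attach: the new disk becomes another copy inside the existing mirror vdev (redundancy kept)
zpool attach inotank 61207036-955f-447d-b85a-c387515d3892 sdd1
# add: the new disk becomes its own top-level vdev, striped alongside mirror-0
# (-f is needed to override the mismatched-replication warning, which is the red flag;
#  losing that disk then takes the whole pool with it)
zpool add -f inotank sdd1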

here are 2 samples I threw together on an unused server. they look identical in <quote> but are VERY, though subtly, different in <code>.
the first is a healthy 3-way mirror, the 2nd was a healthy 2x2 mirror that I detached a disk from to make a mirror+stripe example.
pool: good-mirror
state: ONLINE
config:

NAME STATE READ WRITE CKSUM
good-mirror ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
gptid/ac73bf5e-373b-11ee-ad31-002590eff002 ONLINE 0 0 0
gptid/ac75f140-373b-11ee-ad31-002590eff002 ONLINE 0 0 0
gptid/ac792386-373b-11ee-ad31-002590eff002 ONLINE 0 0 0

errors: No known data errors

zpool status -v
pool: bad-mirror-and-stripe
state: ONLINE
config:

NAME STATE READ WRITE CKSUM
bad-mirror-and-stripe ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
gptid/d06528d7-373b-11ee-ad31-002590eff002 ONLINE 0 0 0
gptid/d071f3ca-373b-11ee-ad31-002590eff002 ONLINE 0 0 0
gptid/57cac76f-373c-11ee-ad31-002590eff002 ONLINE 0 0 0

errors: No known data errors
Code:
 pool: good-mirror
 state: ONLINE
config:

        NAME                                            STATE     READ WRITE CKSUM
        good-mirror                                     ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/ac73bf5e-373b-11ee-ad31-002590eff002  ONLINE       0     0     0
            gptid/ac75f140-373b-11ee-ad31-002590eff002  ONLINE       0     0     0
            gptid/ac792386-373b-11ee-ad31-002590eff002  ONLINE       0     0     0

errors: No known data errors

zpool status -v
  pool: bad-mirror-and-stripe
 state: ONLINE
config:

        NAME                                            STATE     READ WRITE CKSUM
        bad-mirror-and-stripe                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/d06528d7-373b-11ee-ad31-002590eff002  ONLINE       0     0     0
            gptid/d071f3ca-373b-11ee-ad31-002590eff002  ONLINE       0     0     0
          gptid/57cac76f-373c-11ee-ad31-002590eff002    ONLINE       0     0     0

errors: No known data errors
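for what it's worth, something along these lines would reproduce those two layouts (disk names are just placeholders):
Code:
# healthy 3-way mirror
zpool create good-mirror mirror da0 da1 da2

# start as a 2x2 mirror, then detach one disk; the second vdev collapses
# into a single (striped) top-level disk, which is the subtle difference above
zpool create bad-mirror-and-stripe mirror da3 da4 mirror da5 da6
zpool detach bad-mirror-and-stripe da6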
 

tamen

Cadet
Joined
Aug 9, 2023
Messages
5
Thank you for your reply. First of all, I apologize for leaving out crucial information.
If you could give complete hardware configuration, including brands & models, as well as TrueNAS version, those would help.
Hardware Configuration:
  • CPU: Intel i5-12400 @ 2.4 GHz
  • RAM: 32 GB
  • Motherboard: ASRock Z690M-ITX/ax
  • Storage Drives: 2x 14 TB HDD, 1x 16 TB HDD [unused]
TrueNAS Version:
  • TrueNAS: TrueNAS-SCALE-22.12.3.3
And regarding my hypervisor:
Further, you imply that your TrueNAS is running as a VM under VMware. If this is the case, there are some things that are needed to make a reliable VM of TrueNAS (i.e., to reduce the chances of data loss):
I am running TrueNAS in a Proxmox 8.0.4 environment.
obligatory: you need to post your hardware, and importantly, how the disks are connected, and you need to map out which drives are online.
Maybe this picture helps there? https://i.imgur.com/jn9aUXK.png
if you virtualized or passed through the disks, there is a really good chance that in transit data was lost to the pool, and it's in an inconsistent state. the only reliable way is to pass through the entire disk controller via PCI passthrough.
depending on what, exactly, went wrong, you *might* have some luck importing the pool readonly. if so, you MUST back up the data and read the docs on virtualizing truenas. it really looks like this was designed to fail.
Right now my passthrough looks like this: https://i.imgur.com/kqfJjGR.png
what was this set up as? if that was a simple mirror of 3 drives, there should be no problems.
I would be glad if I could give you a confident and correct answer to that, but it would have to be a mirror. Additionally, I only have two hard drives in my pool; the third one was not used.
here are 2 samples I threw together on an unused server. they look identical in <quote> but are VERY, though subtly, different in <code>.
the first is a healthy 3-way mirror, the 2nd was a healthy 2x2 mirror that I detached a disk from to make a mirror+stripe example.
Thank you for mentioning it.
Code:
agent: 1
boot: order=scsi0;net0
cores: 4
memory: 8192
meta: creation-qemu=7.2.0,ctime=1682874672
name: truenas
net0: virtio=96:7F:43:95:5C:FC,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-101-disk-0,iothread=1,size=32G
scsi1: /dev/disk/by-id/ata-ST14000NM000J-2TX103_ZR600TVG,size=13039G
scsi2: /dev/disk/by-id/ata-ST14000NM000J-2TX103_ZR600WMS,size=13039G
scsi3: /dev/disk/by-id/ata-ST16000NM001G-2KK103_ZL2PVF38,size=14902G
scsihw: virtio-scsi-single
smbios1: uuid=73e69195-753a-45f3-af87-4ae7ec9ecac3
sockets: 1
vmgenid: 9dcfe925-168e-443e-ae05-e68d12a62aa1
Code:
root@truenas[~]# zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH                                       ALTROOT
boot-pool    31G  5.33G  25.7G        -         -     1%    17%  1.00x    ONLINE                                       -
Code:
Error importing pool - 2090 is not a valid Error
 Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 115, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
    return methodobj(*params)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
    return methodobj(*params)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1382, in nf
    return func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 444, in import_pool
    self.logger.error(
  File "libzfs.pyx", line 465, in libzfs.ZFS.__exit__
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 438, in import_pool
    zfs.import_pool(found, pool_name, properties, missing_log=missing_log, any_host=any_host)
  File "libzfs.pyx", line 1265, in libzfs.ZFS.import_pool
  File "libzfs.pyx", line 1293, in libzfs.ZFS.__import_pool
  File "libzfs.pyx", line 562, in libzfs.ZFS.get_error
  File "/usr/lib/python3.9/enum.py", line 360, in __call__
    return cls.__new__(cls, value)
  File "/usr/lib/python3.9/enum.py", line 677, in __new__
    raise ve_exc
ValueError: 2090 is not a valid Error
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 428, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 463, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1378, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1246, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool.py", line 1459, in import_pool
    await self.middleware.call('zfs.pool.import_pool', guid, opts, any_host, use_cachefile, new_name)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1395, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1352, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1358, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1273, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1258, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
ValueError: 2090 is not a valid Error
Code:
root@truenas[~]# zpool import -f inotank
cannot import 'inotank': insufficient replicas
        Destroy and re-create the pool from
        a backup source.
root@truenas[~]# zpool import -f -FX inotank
cannot import 'inotank': one or more devices is currently unavailable
root@truenas[~]#

Thank you guys for taking the time to look into my problem. Really appreciated.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Sorry, I can't help. Perhaps someone else can.

However, I can offer 3 comments. First, Proxmox is a much less tested hypervisor for TrueNAS compared to VMware.

Second, passing through just the disks, as you have done, is prone to problems with ZFS, such as pools accumulating too many errors and becoming impossible to import. TrueNAS is just not designed to run as a VM, though others have made it reliable under certain conditions, like passing the entire PCIe disk controller through to TrueNAS.
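On Proxmox that would mean a single hostpci line in the VM config in place of the individual scsiN disk lines, something like the sketch below (the PCI address is just a placeholder, and IOMMU has to be enabled on the host):
Code:
# pass the whole SATA/HBA controller through to the TrueNAS VM
hostpci0: 0000:01:00.0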

Last, 8 GB is pushing the minimum for TrueNAS SCALE; a better amount is 16 GB.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
I believe your pool is deceased. unfortunately, the way you put this together was a ticking time bomb waiting to fail, and it looks like fail it did. without sufficient replicas, the data is functionally just gone (technically there are bits of it remaining, but they are basically random gibberish without enough replicas to reconstruct from).
the ZFS message is the only solution, so if you do not have a backup source, there is nothing left.
Code:
        Destroy and re-create the pool from
        a backup source.

there are some paid services for ZFS restore, but they are typically...prohibitively expensive.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
Yeaaahh. If your pool were bigger and a top-level vdev had failed, there would certainly be options to try to recover at least some data. You seem to have had 1 pool with 2 drives in a 2-way mirror. One shit the bed, then the other had a stroke. Feel free to PM me a debug if you're interested in the causal chain here - but she's definitely dead, Jim.
 

tamen

Cadet
Joined
Aug 9, 2023
Messages
5
Yeaaahh. If your pool were bigger and a top-level vdev had failed, there would certainly be options to try to recover at least some data. You seem to have had 1 pool with 2 drives in a 2-way mirror. One shit the bed, then the other had a stroke. Feel free to PM me a debug if you're interested in the causal chain here - but she's definitely dead, Jim.
You really think both are dead?
Code:
root@pve:~# zpool import
   pool: inotank
     id: 17677109458747968008
  state: FAULTED
status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        inotank                                   FAULTED  corrupted data
          mirror-0                                ONLINE
            61207036-955f-447d-b85a-c387515d3892  ONLINE
            2c5f554c-47f7-41ae-a963-90deeea598a8  ONLINE
          indirect-1                              FAULTED  corrupted data
there are some paid services for ZFS restore, but they are typically...prohibitively expensive.
How expensive? :<
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
Did you do a fresh install on 8/9?

Looks like you set up the boot pool on 2023-04-30, so this is a relatively new deployment. There are no logs at all before 2023-08-09, and the boot pool history reports:
Code:
2023-04-30.19:15:06 zfs create -o mountpoint=legacy -o truenas:kernel_version=5.15.79+truenas -o zectl:keep=False boot-pool/ROOT/22.12.2
....
2023-07-24.03:45:02 py-libzfs: zpool scrub boot-pool
2023-07-24.18:28:57 zpool import -N -f boot-pool
2023-07-25.03:45:01 py-libzfs: zpool scrub boot-pool
2023-07-26.03:45:01 py-libzfs: zpool scrub boot-pool
2023-07-27.03:45:02 py-libzfs: zpool scrub boot-pool
2023-07-27.15:26:33 zfs create -o mountpoint=legacy -o truenas:kernel_version=5.15.107+truenas -o zectl:keep=False boot-pool/ROOT/22.12.3.3
2023-07-27.15:26:54 zpool set bootfs=boot-pool/ROOT/22.12.3.3 boot-pool
2023-07-27.15:28:11 zpool import -N -f boot-pool
2023-08-03.03:45:02 py-libzfs: zpool scrub boot-pool
2023-08-06.16:05:49 zpool import -N -f boot-pool
2023-08-09.08:49:49 zpool import -N -f boot-pool


You updated on 7/27 and rebooted on 8/3, 8/6, and 8/9.

In any case, however we got here, if there were any clues in the debug to help you, they are gone. No logs before 8/9 means no real help, unfortunately :(

EDIT: I looked through the debug a little more, in some of the iXsystems-specific logs (the fndebug folder), which provide some more insight.


Code:
+--------------------------------------------------------------------------------+
+                          smartctl output @1691795796                           +
+--------------------------------------------------------------------------------+
/dev/sda (NOT USING TRANSLATION)
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.107+truenas] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:               QEMU
Product:              QEMU HARDDISK
Revision:             2.5+
Compliance:           SPC-3
User Capacity:        34,359,738,368 bytes [34.3 GB]
Logical block size:   512 bytes
LU is thin provisioned, LBPRZ=0
Device type:          disk
Local Time is:        Sat Aug 12 01:16:36 2023 CEST
SMART support is:     Unavailable - device lacks SMART capability.

=== START OF READ SMART DATA SECTION ===
Current Drive Temperature:     0 C
Drive Trip Temperature:        0 C

Error Counter logging not supported

Device does not support Self Test logging

/dev/sdb (NOT USING TRANSLATION)
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.107+truenas] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:               QEMU
Product:              QEMU HARDDISK
Revision:             2.5+
Compliance:           SPC-3
User Capacity:        14,000,519,643,136 bytes [14.0 TB]
Logical block size:   512 bytes
LU is thin provisioned, LBPRZ=0
Serial number:        ST14000NM000J-2TX103_ZR600TVG
Device type:          disk
Local Time is:        Sat Aug 12 01:16:36 2023 CEST
SMART support is:     Unavailable - device lacks SMART capability.

=== START OF READ SMART DATA SECTION ===
Current Drive Temperature:     0 C
Drive Trip Temperature:        0 C

Error Counter logging not supported

Device does not support Self Test logging

/dev/sdc (NOT USING TRANSLATION)
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.107+truenas] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:               QEMU
Product:              QEMU HARDDISK
Revision:             2.5+
Compliance:           SPC-3
User Capacity:        14,000,519,643,136 bytes [14.0 TB]
Logical block size:   512 bytes
LU is thin provisioned, LBPRZ=0
Serial number:        ST14000NM000J-2TX103_ZR600WMS
Device type:          disk
Local Time is:        Sat Aug 12 01:16:36 2023 CEST
SMART support is:     Unavailable - device lacks SMART capability.

=== START OF READ SMART DATA SECTION ===
Current Drive Temperature:     0 C
Drive Trip Temperature:        0 C

Error Counter logging not supported

Device does not support Self Test logging

/dev/sdd (NOT USING TRANSLATION)
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.107+truenas] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:               QEMU
Product:              QEMU HARDDISK
Revision:             2.5+
Compliance:           SPC-3
User Capacity:        16,000,900,661,248 bytes [16.0 TB]
Logical block size:   512 bytes
LU is thin provisioned, LBPRZ=0
Serial number:        ST16000NM001G-2KK103_ZL2PVF38
Device type:          disk
Local Time is:        Sat Aug 12 01:16:36 2023 CEST
SMART support is:     Unavailable - device lacks SMART capability.

=== START OF READ SMART DATA SECTION ===
Current Drive Temperature:     0 C
Drive Trip Temperature:        0 C

Error Counter logging not supported

Device does not support Self Test logging

debug finished in 0 seconds for smartctl output


You are not running this system on bare metal, are you? :P
 
Last edited:

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
You really think both are dead?
the problem, as I strongly suspect based on what I saw in your information, is that you ended up with a stripe pool. in a stripe pool ANY disk lost means the pool is lost. that you had 1 single mirror vdev is irrelevant, because the other vdev was likely a stripe.

the disk might not even be dead, itself, but your virtualized setup likely chewed on it a few times and what it spat out is insufficient to put humpty dumpty back together again.
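if you want to check whether the physical disks themselves are healthy, run smartctl from the Proxmox host rather than inside the VM (inside the VM they show up as QEMU disks with no SMART), for example:
Code:
# run on the Proxmox host, not inside the TrueNAS VM
smartctl -a /dev/disk/by-id/ata-ST14000NM000J-2TX103_ZR600TVG
smartctl -a /dev/disk/by-id/ata-ST14000NM000J-2TX103_ZR600WMS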
 

tamen

Cadet
Joined
Aug 9, 2023
Messages
5
Did you do a fresh install on 8/9?

Looks like you set up the boot pool on 2023-04-30, so this is a relatively new deployment. There are no logs at all before 2023-08-09, and the boot pool history reports:
[...]
You updated on 7/27 and rebooted on 8/3, 8/6, and 8/9.
No, I created this VM in April: https://i.imgur.com/zLJKXDG.png
On 8/6, I upgraded Proxmox from version 7 to 8. However, after that update, everything worked fine.

You are not running this system on bare metal, are you? :P
To be honest, I am not sure. I am running it as a VM in Proxmox.
the problem, as I strongly suspect based on what I saw in your information, is that you ended up with a stripe pool. in a stripe pool ANY disk lost means the pool is lost. that you had 1 single mirror vdev is irrelevant, because the other vdev was likely a stripe.

the disk might not even be dead, itself, but your virtualized setup likely chewed on it a few times and what it spat out is insufficient to put humpty dumpty back together again.
That's strange because I'm certain that I configured the pool as a mirror.

Would it be possible to take one drive offline and create a new pool just to save some of the data? I definitely don't need all of it.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
Because you aren't using real hard drives in this instance, I'm a little out of my depth here as far as suggesting a path forward for you.

This kind of setup is never recommended, precisely because of these exact types of unpredictable scenarios.

I doubt it, but @HoneyBadger may have some thoughts and ideas here.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
@tamen did you at any point try adding the 16T drive to this pool and then later removing it, or perhaps creating the pool as a stripe of 2x14 and then later removing one and re-attaching it to make a mirrored 2x14?

You've got an indirect vdev there which, as @artlessknave points out, results from a vdev removal. However, that indirect data should reside on the remaining vdevs (your 2x14).
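If you want to double-check what the on-disk labels say about the layout without importing, something like this should be read-only (it may need a -p search path if zdb can't find the devices on its own):
Code:
# dump the configuration of the exported/un-imported pool from its labels
zdb -e -C inotank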
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Would it be possible to take one drive offline and create a new pool just to save some of the data?
no. the pool won't import. that means something is outright missing or so badly damaged as to be unrecognizable. this should not have happened if you had a genuine mirror pool properly attached to TrueNAS. something else went on, but we can't really tell what since it's already broken.
How expensive? :<
only a first child or so. maybe an arm.
on the order of thousands of dollars.
 