TrueNAS Scale incorrectly reporting Mixed Capacity VDEVS

Brandito

Explorer
Joined
May 6, 2023
Messages
72
Yeah, I have 91TB on the disks. I have 3 spares and some room on another NAS, but not a contiguous 91TB free anywhere. I would most likely need a second disk shelf.

The checksum errors have increased again:

Code:
root@truenas[~]# zpool status
  pool: Home
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: resilvered 1.32T in 02:06:45 with 0 errors on Sat Nov 11 13:45:36 2023
config:

        NAME                                      STATE     READ WRITE CKSUM
        Home                                      ONLINE       0     0     0
          raidz2-0                                ONLINE       0     0     0
            a7d78b0d-f891-11ed-a2f8-90e2baf17bf0  ONLINE       0     0    33
            a7b00eef-f891-11ed-a2f8-90e2baf17bf0  ONLINE       0     0    33
            a7d01f81-f891-11ed-a2f8-90e2baf17bf0  ONLINE       0     0    33
            a7c951e3-f891-11ed-a2f8-90e2baf17bf0  ONLINE       0     0     0
            a7bfef1b-f891-11ed-a2f8-90e2baf17bf0  ONLINE       0     0     0
            e4f37ae1-f494-4baf-94e5-07db0c38cb0c  ONLINE       0     0     0
          raidz2-1                                ONLINE       0     0     0
            8cca2c8f-39ee-40a6-88e0-24ddf3485aa0  ONLINE       0     0     2
            74f3cc23-1b32-4faf-89cc-ba0cd72ba308  ONLINE       0     0    46
            4e5f5b16-6c2b-4e6b-a907-3e1b9b1c4886  ONLINE       0     0    46
            cde58bb6-9d8e-4cdc-a1bf-847f459b459b  ONLINE       0     0    46
            58c22778-521b-4e8f-aadd-6d5ad17a8f68  ONLINE       0     0     2
            33633f68-920b-4a40-bd4d-45e30b6872bc  ONLINE       0     0     2
          raidz2-2                                ONLINE       0     0     0
            2a2e5211-d4ea-4da9-8ea5-bdabdc542bdb  ONLINE       0     0     0
            56c07fd7-6cb6-4985-9a20-2b5ff9d42631  ONLINE       0     0     0
            1147286d-8cd8-4025-8e5d-bbf06e2bd795  ONLINE       0     0    44
            7e1fa408-7565-4913-b045-49447ef9253b  ONLINE       0     0    44
            3d56d2fa-d505-4bea-b9a2-80c121e4e559  ONLINE       0     0    44
            a9906b32-2690-4f7b-8d8f-00ca915d8f3d  ONLINE       0     0     0
          raidz2-5                                ONLINE       0     0     0
            b8c63108-353b-4ed7-a927-ca3df817bd21  ONLINE       0     0     0
            58782264-02f1-41c6-9b91-d07144cb0ccb  ONLINE       0     0    32
            03df98a5-a86d-4bc8-879a-5cf611d4306c  ONLINE       0     0    32
            022c7ffb-0a07-45cb-b3af-ad1730a08054  ONLINE       0     0    32
            a5786a1f-a7ad-4a30-877a-88a03c94a774  ONLINE       0     0     0
            4c59238e-5cbd-428e-8a72-a018d9dae9c2  ONLINE       0     0     0
        logs
          mirror-6                                ONLINE       0     0     0
            5ba1f70b-be51-470f-94ed-777683425477  ONLINE       0     0     0
            f2605776-46a9-4455-a4bc-322d4cf8a688  ONLINE       0     0     0

errors: 2 data errors, use '-v' for a list


It's extra odd that it tells me to run zpool status -v for a list, but when I do, it doesn't actually list anything.
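I'm wondering whether something like zpool events would show more detail than status -v does here (just a guess on my part):

Code:
zpool events -v Home | tail -n 40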

With the swap change, I don't really recall adding or removing the swap. I believe it was enabled when I started on CORE, but I quickly migrated to SCALE after a few weeks. What I do recall doing was moving the system dataset from the boot-pool to another pool. I thought it was my Home pool, but that can't be, because I know I tried moving it there when this whole issue started. Does that affect the swap location? The two options are in the same menu, so I assume it does?

When I got the error that started this thread, I tried moving the system dataset to my Home pool again, thinking it would add the swap partition. Then I offlined a single drive in the new vdev and replaced it with itself, again assuming it would add in the swap partition. Is it possible that caused the corruption? Like I mentioned before, the resilver was strange: it initially estimated a day to complete and then actually finished within hours.
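As I understand it, the offline/replace-with-itself boils down to roughly this on the CLI (disk ID is a placeholder):

Code:
zpool offline Home <partition-uuid>
zpool replace Home <partition-uuid>   # no new device given, so it resilvers onto the same disk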

I am still using the "new to me" HBAs and brand new cables, but they're likely not as good as the original Supermicro-branded SFF cables I had.

Should I shut down and at least swap the cables? I'm just worried about bringing the machine offline now.
 

Brandito

Explorer
Joined
May 6, 2023
Messages
72
I have a Backblaze account; would I be able to zfs send to Backblaze? What would the next step be, though? Start the pool from scratch and egress from Backblaze? I have symmetrical gigabit, but it would take 10 days at full bandwidth to back up. If I could finish the whole process in a month, that's $600. If it had a high likelihood of working, I might try it.

If the corruption of my data is limited to the directory I was last rebalancing, I think I'd be fine with that.

Edit: Sorry for the spam, but I decided to do a df -h and it looks like most of my datasets are mounted. The only one missing is my media dataset, which is the largest of them and the one I was actively writing to at the time of the failure. Maybe that's why the checksums are increasing? Data is being accessed after all? I have my single VM disabled and I disabled apps. Shares, sync tasks, scrubs, and anything else I could think of that would access data were also disabled.

Should I export?

Edit2: I ran the numbers, and while it's a more expensive option, I'm considering getting enough drives to create a 10-disk, single-vdev RAIDZ2 pool. I have 3 16TB Exos drives on hand and would need 7 more. I can get refurbs from ServerPartDeals, which I wouldn't normally do, but it's the most palatable option and I've had good luck with their recertified drives. I have room in my disk shelf for 21 more disks, so if this works I can actually use these drives for backups in the future. It wouldn't be the best scenario for backups, but it's what I can afford in parts and electricity. The bonus is that I have more storage when all is said and done. If nothing else, I have hot/cold spares for the foreseeable future.

Does this sound realistic? Using this calculator https://wintelguy.com/zfs-calc.pl I should have 85TB of "practical" space accounting for slop and 20% free space, so I'd go over on the free-space target to transfer the whole pool, but not by a lot. The pool wouldn't be long-term in this config either; it's just to get the data out and back into the rebuilt pool.
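Rough back-of-the-envelope version of what the calculator is telling me (decimal TB vs TiB accounts for most of the difference):

Code:
# 10-wide RAIDZ2 of 16 TB disks (rough, ignores metadata/padding overhead)
#   data disks : 10 - 2 parity          = 8
#   raw data   : 8 * 16 TB              = 128 TB (~116 TiB)
#   minus slop and a 20% free target    = roughly 85-90 TiB "practical"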

Edit3: If I don't have a snapshot of the dataset, can I still use zfs send? Part of rebalancing is deleting the snapshots that would otherwise consume all your available space, so I may not have snapshots of my Home/Media dataset, which is the one with all the data. From what I can tell, snapshots are a prerequisite for zfs send to work.
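If the dataset will take a new snapshot at all, I'm assuming the minimal form would be something like this (NewPool being the 10-disk pool I'd build):

Code:
zfs snapshot Home/Media@rescue
zfs send Home/Media@rescue | zfs receive -u NewPool/Media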

I also see now that my pool is mounted in the "/" directory instead of under "/mnt". I assume this is from one of the commands suggested earlier in this thread.
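A quick read-only way to confirm where everything thinks it should be mounted, I assume, would be:

Code:
zpool get altroot Home
zfs get -r -o name,value mountpoint Home | head -n 20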

One last note: I may have mentioned this before, but my Home/Media dataset is the only one not mounted. Here is the error I get when trying to view it under Datasets in the web UI:

Code:
 Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/dataset_quota.py", line 75, in get_quota
    with libzfs.ZFS() as zfs:
  File "libzfs.pyx", line 529, in libzfs.ZFS.__exit__
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/dataset_quota.py", line 77, in get_quota
    quotas = resource.userspace(quota_props)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "libzfs.pyx", line 3642, in libzfs.ZFSResource.userspace
libzfs.ZFSException: cannot get used/quota for Home/Media: I/O error

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.11/concurrent/futures/process.py", line 256, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 112, in main_worker
    res = MIDDLEWARE._run(*call_args)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 34, in _call
    with Client(f'ws+unix://{MIDDLEWARE_RUN_DIR}/middlewared-internal.sock', py_exceptions=True) as c:
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
    return methodobj(*params)
           ^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/dataset_quota.py", line 79, in get_quota
    raise CallError(f'Failed retreiving {quota_type} quotas for {ds}')
middlewared.service_exception.CallError: [EFAULT] Failed retreiving USER quotas for Home/Media
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 201, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1341, in _call
    return await methodobj(*prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 177, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/dataset_quota_and_perms.py", line 223, in get_quota
    quota_list = await self.middleware.call(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1398, in call
    return await self._call(
           ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1349, in _call
    return await self._call_worker(name, *prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1355, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1267, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1251, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
middlewared.service_exception.CallError: [EFAULT] Failed retreiving USER quotas for Home/Media
 

Brandito

Explorer
Joined
May 6, 2023
Messages
72
I ordered the 7 drives, hoping to have them by Wednesday. Actually went with recerts.
 

Brandito

Explorer
Joined
May 6, 2023
Messages
72
@HoneyBadger any thoughts on my zfs send dilemma? I've backed up what I can from the datasets that mounted, and I have the drives coming tomorrow so I can make a zpool big enough to ingest the rest of my data, I just don't know how I'll get access to it.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
So regarding zfs send - it won't work to Backblaze, as it requires a compatible ZFS filesystem on the other end, which isn't something that Backblaze does - they work at the file level.

Snapshots are a requirement for a zfs send as well, either local or remote - so you'd have to make a new snap of the dataset and hope that it's able to actually do it.
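If you did want to push the data to Backblaze, it would have to be with a file-level tool - rclone, for example, can talk to B2. A rough sketch (remote and bucket names are placeholders, not a specific recommendation):

Code:
rclone copy /mnt/Home/Media b2remote:my-bucket/Media --progress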

I'll be honest, I've not seen a scenario of "pool mounts, but will explicitly not mount one dataset" - although it's possible that the mount root point being / rather than /mnt has something to do with it.

Re: the rebalancing, was this happening under SCALE or CORE?
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
@HoneyBadger any thoughts on my zfs send dilemma? I've backed up what I can from the datasets that mounted, and I have the drives coming tomorrow so I can make a zpool big enough to ingest the rest of my data, I just don't know how I'll get access to it.
It sounds like you've done your homework and gotten much of the prep work done.

Given that you are continuing to experience various weird problems we have to assume:
  1. There's something wrong with some of your hardware if the counters are continuing to go up.
  2. It's going to take time to migrate the data off and over to somewhere else.
Given that, you should probably mount the pool as read-only. This is a risk-mitigation strategy to help prevent any further corruption, given that we're not sure why it is happening. In a read-only state you may be able to navigate and view the files in the dataset in question. If you can't, that dataset is likely hosed and will require much deeper troubleshooting.

Code:
zpool import -o readonly=on -fR /mnt POOLNAME


The play at this point is likely "partial data recovery" not a full block copy. I would focus my efforts on file-based copying instead of block-based copying. This will help you find the pockets of corruption and allow you to annotate what data you need to implement a different DR strategy on.

When you try to copy a file that is messed up, it will let you know. ZFS really tries HARD to only give you the data back that you gave it, so it should error on files that are messed up. ZFS will TELL YOU when there’s something wrong. If this were ANY OTHER FILESYSTEM, you’d be receiving files with holes in them and you wouldn’t even know. THANK YOU ZFS.

Just use cp -r and take it slow. Mount the destination as an NFS mount; TrueNAS SCALE should let you. On a read-only pool in an unknown state, this gives you the highest chance of making sure the data is safe, or at least the data that's not already bad. The bad data can be a secondary objective, but your play now is to get what you can while the getting is good. We can circle back afterwards to try and save some more.
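A minimal sketch of that, assuming the pool is imported read-only under /mnt and the NFS destination is mounted at /mnt/recovery (both paths are placeholders):

Code:
# verbose copy; stderr goes to a log so the files that error out are easy to list afterwards
cp -rv /mnt/Home/Media /mnt/recovery/Media 2> /root/media-copy-errors.log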
 

Brandito

Explorer
Joined
May 6, 2023
Messages
72
So regarding zfs send - it won't work to Backblaze, as it requires a compatible ZFS filesystem on the other end, which isn't something that Backblaze does - they work at the file level.

Snapshots are a requirement for a zfs send as well, either local or remote - so you'd have to make a new snap of the dataset and hope that it's able to actually do it.

I'll be honest, I've not seen a scenario of "pool mounts, but will explicitly not mount one dataset" - although it's possible that the mount root point being / rather than /mnt has something to do with it.

Re: the rebalancing, was this happening under SCALE or CORE?
Rebalancing was under SCALE; I was only using CORE very briefly, well before any of this happened.

Since the dataset didn't get mounted, I'm suspecting that's why I don't see any snapshots. At the point the system crashed I should have had at least a few snapshots, as I take them hourly on that dataset and hadn't removed any for some time.
 

Brandito

Explorer
Joined
May 6, 2023
Messages
72
It sounds like you've done your homework and gotten much of the prep work done.

Given that you are continuing to experience various weird problems we have to assume:
  1. There's something wrong with some of your hardware if the counters are continuing to go up.
  2. It's going to take time to migrate the data off and over to somewhere else.
Given that, you should probably mount the pool as read-only. This is a risk-mitigation strategy to help prevent any further corruption, given that we're not sure why it is happening. In a read-only state you may be able to navigate and view the files in the dataset in question. If you can't, that dataset is likely hosed and will require much deeper troubleshooting.

Code:
zpool import -o readonly=on -fR /mnt POOLNAME


The play at this point is likely "partial data recovery" not a full block copy. I would focus my efforts on file-based copying instead of block-based copying. This will help you find the pockets of corruption and allow you to annotate what data you need to implement a different DR strategy on.

When you try to copy a file that is messed up, it will let you know. ZFS really tries HARD to only give you the data back that you gave it, so it should error on files that are messed up. ZFS will TELL YOU when there’s something wrong. If this were ANY OTHER FILESYSTEM, you’d be receiving files with holes in them and you wouldn’t even know. THANK YOU ZFS.

Just use cp -r and take it slow. Mount the destination as an NFS mount; TrueNAS SCALE should let you. On a read-only pool in an unknown state, this gives you the highest chance of making sure the data is safe, or at least the data that's not already bad. The bad data can be a secondary objective, but your play now is to get what you can while the getting is good. We can circle back afterwards to try and save some more.
I'm having trouble exporting the pool completely. It's not mounted under /Home anymore, but it still shows in the GUI. When I try to import it again it asks me to rename the pool; maybe that's the best option, but I'm worried that TrueNAS still thinks something is imported.

Here was the output when exporting:

Code:
root@truenas[~]# zpool export Home          
cannot export 'Home': pool is busy


I'm not actively using the pool; my last zfs send completed successfully some time ago.
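Is there a good way to see what's actually holding it? I was thinking something along these lines (mountpoint is a placeholder, since mine ended up in an odd spot):

Code:
zfs list -r -o name,mounted,mountpoint Home
fuser -vm <mountpoint>    # list processes holding files open on that mountpoint
# note: the system dataset living on this pool would also keep it busy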

I will also add that I ran some checksums on the Linux ISOs (not a euphemism) in a dataset I was able to back up, and they passed. I was also able to zfs send a Proxmox Backup Server datastore, and I'm running verification on the VMs/CTs in that datastore.

One more thing: I rebooted the machine yesterday, and the checksum errors dropped and have slowly crept back up as the machine has been running.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
One more thing: I rebooted the machine yesterday, and the checksum errors dropped and have slowly crept back up as the machine has been running.
This is indicative of a hardware issue, which could be anything from bad cables/connectors/backplane, overheating controller, bad RAM or… whatever. So you have multiple issues.
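One cheap data point for the cable/connector/backplane theory is each disk's interface CRC counter (sdX is a placeholder; check each pool member, and note SAS drives report this in the error-counter log instead):

Code:
smartctl -a /dev/sdX | grep -i crc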
 

Brandito

Explorer
Joined
May 6, 2023
Messages
72
Did you mount the pool manually or did middleware/TrueNAS mount it?
I mounted it through the CLI as suggested earlier in the thread; I had to choose an earlier txg.

Edit: tried a reboot; still cannot export, it says the pool is busy.
 

Brandito

Explorer
Joined
May 6, 2023
Messages
72
This is indicative of a hardware issue, which could be anything from bad cables/connectors/backplane, overheating controller, bad RAM or… whatever. So you have multiple issues.
I can replace the RAM, I swapped the cables during the reboot, and I can either put my original HBA back in or try a third HBA I have.

I'm not getting any checksum errors on the boot pool or the pool I've been backing up to. With the number of drives showing checksum errors, I find it hard to stomach the thought that more than a dozen are failing all at once.

All the drives in this zpool were run through badblocks and long SMART tests before being put into the pool.
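(The burn-in was roughly along these lines per disk, in case the details matter - sdX being each drive before it ever held data:)

Code:
badblocks -ws -b 4096 /dev/sdX   # destructive write/read pattern test
smartctl -t long /dev/sdX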
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
I mounted it through the CLI as suggested earlier in the thread; I had to choose an earlier txg.

Edit: tried a reboot; still cannot export, it says the pool is busy.
So TrueNAS automatically mounted it after your reboot?
 

Brandito

Explorer
Joined
May 6, 2023
Messages
72
What does zpool list or zpool status show?
Code:
root@truenas[~]# zpool status
  pool: Home
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: resilvered 1.32T in 02:06:45 with 0 errors on Sat Nov 11 13:45:36 2023
config:

        NAME                                      STATE     READ WRITE CKSUM
        Home                                      ONLINE       0     0     0
          raidz2-0                                ONLINE       0     0     0
            a7d78b0d-f891-11ed-a2f8-90e2baf17bf0  ONLINE       0     0    32
            a7b00eef-f891-11ed-a2f8-90e2baf17bf0  ONLINE       0     0    32
            a7d01f81-f891-11ed-a2f8-90e2baf17bf0  ONLINE       0     0    32
            a7c951e3-f891-11ed-a2f8-90e2baf17bf0  ONLINE       0     0     0
            a7bfef1b-f891-11ed-a2f8-90e2baf17bf0  ONLINE       0     0     0
            e4f37ae1-f494-4baf-94e5-07db0c38cb0c  ONLINE       0     0     0
          raidz2-1                                ONLINE       0     0     0
            8cca2c8f-39ee-40a6-88e0-24ddf3485aa0  ONLINE       0     0     0
            74f3cc23-1b32-4faf-89cc-ba0cd72ba308  ONLINE       0     0    34
            4e5f5b16-6c2b-4e6b-a907-3e1b9b1c4886  ONLINE       0     0    34
            cde58bb6-9d8e-4cdc-a1bf-847f459b459b  ONLINE       0     0    34
            58c22778-521b-4e8f-aadd-6d5ad17a8f68  ONLINE       0     0     0
            33633f68-920b-4a40-bd4d-45e30b6872bc  ONLINE       0     0     0
          raidz2-2                                ONLINE       0     0     0
            2a2e5211-d4ea-4da9-8ea5-bdabdc542bdb  ONLINE       0     0     0
            56c07fd7-6cb6-4985-9a20-2b5ff9d42631  ONLINE       0     0     0
            1147286d-8cd8-4025-8e5d-bbf06e2bd795  ONLINE       0     0    34
            7e1fa408-7565-4913-b045-49447ef9253b  ONLINE       0     0    34
            3d56d2fa-d505-4bea-b9a2-80c121e4e559  ONLINE       0     0    34
            a9906b32-2690-4f7b-8d8f-00ca915d8f3d  ONLINE       0     0     0
          raidz2-5                                ONLINE       0     0     0
            b8c63108-353b-4ed7-a927-ca3df817bd21  ONLINE       0     0     0
            58782264-02f1-41c6-9b91-d07144cb0ccb  ONLINE       0     0    32
            03df98a5-a86d-4bc8-879a-5cf611d4306c  ONLINE       0     0    32
            022c7ffb-0a07-45cb-b3af-ad1730a08054  ONLINE       0     0    32
            a5786a1f-a7ad-4a30-877a-88a03c94a774  ONLINE       0     0     0
            4c59238e-5cbd-428e-8a72-a018d9dae9c2  ONLINE       0     0     0
        logs
          mirror-6                                ONLINE       0     0     0
            5ba1f70b-be51-470f-94ed-777683425477  ONLINE       0     0     0
            f2605776-46a9-4455-a4bc-322d4cf8a688  ONLINE       0     0     0

errors: 2 data errors, use '-v' for a list

  pool: WD-Backup
 state: ONLINE
config:

        NAME                                      STATE     READ WRITE CKSUM
        WD-Backup                                 ONLINE       0     0     0
          raidz1-0                                ONLINE       0     0     0
            ab4d12e8-d8a2-4dc8-8d44-3dfce076afe4  ONLINE       0     0     0
            b70dd50f-1429-4b7d-bc50-a6104ded8624  ONLINE       0     0     0
            6f70a575-b722-4cf7-bada-520d8a0ba68a  ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:01:15 with 0 errors on Mon Nov 20 03:46:16 2023
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdad2   ONLINE       0     0     0
            sdac2   ONLINE       0     0     0

errors: No known data errors

root@truenas[~]# zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
Home        349T   134T   216T        -         -     0%    38%  1.00x    ONLINE  -
WD-Backup  10.9T  3.33T  7.57T        -         -     0%    30%  1.00x    ONLINE  /mnt
boot-pool   206G  20.6G   185G        -       14G     3%     9%  1.00x    ONLINE  -
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
Do
cp /data/freenas-v1.db /data/db.bak

and then
sqlite3 /data/freenas-v1.db 'delete from storage_volume where vol_name="Home"'
This will tell TrueNAS to NOT mount it on boot.

And then reboot and import the pool as read-only:

zpool import -o readonly=on -fR /mnt Home
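To double-check that the row is actually gone before you reboot, something like:

Code:
sqlite3 /data/freenas-v1.db 'select vol_name from storage_volume'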
 

Brandito

Explorer
Joined
May 6, 2023
Messages
72
Do
cp /data/freenas-v1.db /data/db.bak

and then
sqlite3 /data/freenas-v1.db 'delete from storage_volume where vol_name="Home"'
This will tell TrueNAS to NOT mount it on boot.

And then reboot and import the pool as read-only:

zpool import -o readonly=on -fR /mnt Home
I did as you said and rebooted, and the pool is mounted anyway. It shows in zpool list and zpool status.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
Can you PM me a debug?
 