
SOLVED Importing and unlocking encrypted pool on fresh FreeNAS 11.1 installation fails because it's too full??

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
I now have the following error in the GUI: Storage - Volumes - ZFS_8x_3TB_RAIDz2_pool:
Error getting available space (LOCKED)

I do have a lot more Icons for this pool:
Detach Volume + Scrub Volume + Volume Status + Lock Volume + Change Passphrase + Download Key + Encryption Re-key + Add Recovery Key + Remove Recovery Key

I'm gonna re-read your advice and see if I can figure out the next step :) (Post-edit: giving it a shot!)
 
Joined
Oct 18, 2018
Messages
969
Out of curiosity… is what we did manually just now the first step the GUI would take if I pressed unlock on a pool? That decrypts the pool, and then it needs to get "mounted" or something?
Prior to the steps you just took, your drives were still encrypted. Thus, all of your files and ZFS metadata on those disks were unreadable. This is why the earlier zpool import did not list anything: the system could not read the disks to discover the zpool on them! We just manually decrypted those drives since the UI failed to do so (due to your disks being too full).

We now need to attempt to import the pool. Give zpool import ZFS_8x_3TB_RAIDz2_pool a shot.
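For later readers: the manual decryption mentioned above was done with geli, roughly like this (a sketch only; the key path and gptid labels below are placeholders, and on FreeNAS 11.x the pool's geli key normally lives under /data/geli/):

```shell
# Sketch only: attach each encrypted provider with the pool's geli key
# (geli prompts for the passphrase). <uuid> and <gptid> are
# placeholders for your own values.
geli attach -k /data/geli/<uuid>.key /dev/gptid/<gptid-of-data-partition>
# Repeat for every data partition in the pool, then try the import:
zpool import ZFS_8x_3TB_RAIDz2_pool
```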
 

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
(thanks for the info)

We now need to attempt to import the pool. Give zpool import ZFS_8x_3TB_RAIDz2_pool a shot.

Code:
root@Freenas:/dev/gptid # zpool import ZFS_8x_3TB_RAIDz2_pool
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/configs-839d4bf50898424ab2b76c72b7c93def': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/configs-ab47b56698bd4a6c9dbb53d875ef5ec8': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/configs-e2eccb3703ad46d2b19f2e4809443384': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/cores': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/rrd-839d4bf50898424ab2b76c72b7c93def': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/rrd-ab47b56698bd4a6c9dbb53d875ef5ec8': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/rrd-e2eccb3703ad46d2b19f2e4809443384': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/samba4': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/syslog-839d4bf50898424ab2b76c72b7c93def': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/syslog-ab47b56698bd4a6c9dbb53d875ef5ec8': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/syslog-e2eccb3703ad46d2b19f2e4809443384': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/B': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/M': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/Phoenix': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/Seagate_4TB': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/WD_2TB_EXT-HDD': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/Wallets Backup': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/jails': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/jails/.warden-template-pluginjail': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/jails/.warden-template-pluginjail-9.3-x64': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/jails_2': failed to create mountpoint
cannot mount '/ZFS_8x_3TB_RAIDz2_pool/Pipi/jails_3': failed to create mountpoint


Code:
root@Freenas:/dev/gptid # zpool import
root@Freenas:/dev/gptid #


PS: this is making a great manual for the next Googler, I feel! Great co-op :)
 

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
Oh! I thought we weren't there just yet, but I might have spoken too soon! I refreshed the FreeNAS Volumes page, and now I can see my whole pool in its full glory.

Which I will test now. It also leaves me wondering what all the "failed to create mountpoint" errors are about…?

EDIT: oh, nothing shown in /mnt/ … Getting close, no doubt! :)
 
Joined
Oct 18, 2018
Messages
969
Do you see your pool if you do zpool list? If not, let's manually set the mountpoint. FreeNAS likely requires the mountpoint to be set to a specific location. Give zpool import -R /mnt ZFS_8x_3TB_RAIDz2_pool a shot. This tells the system to import ZFS_8x_3TB_RAIDz2_pool under /mnt, per FreeNAS's expectation.
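For readers wondering what -R actually does: it sets the pool's altroot, a prefix prepended to every dataset's mountpoint for the lifetime of that import, which is how FreeNAS keeps all pools under /mnt. The effect on paths, in miniature (plain shell string logic, no ZFS involved):

```shell
# Illustration only: with altroot=/mnt, a dataset whose mountpoint
# property is /ZFS_8x_3TB_RAIDz2_pool/Pipi ends up mounted at
# /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi.
altroot=/mnt
for mp in /ZFS_8x_3TB_RAIDz2_pool /ZFS_8x_3TB_RAIDz2_pool/Pipi; do
    printf '%s\n' "${altroot}${mp}"
done
```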
 

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
Looks like you are on the right track once again! Testing as I write...

Code:
root@Freenas:/mnt # zpool list
NAME                     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
VOLU10TB                9.06T  3.34T  5.73T         -     0%    36%  1.00x  ONLINE  /mnt
ZFS_8x_3TB_RAIDz2_pool  21.8T  21.1T   696G         -    42%    96%  1.00x  ONLINE  -
freenas-boot            14.4G   755M  13.6G         -      -     5%  1.00x  ONLINE  -


(yes, that fragmentation looks AWFUL! Lots of backups will be removed soon, I think. Hope.)
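For anyone following along: that 96% CAP figure is the core problem in this thread; ZFS degrades badly as a pool approaches full. Saved zpool list output like the above can be screened for over-full pools with a one-liner (an illustrative sketch over the pasted text, not a live system; the 90% threshold is arbitrary):

```shell
# Sketch: flag pools above 90% capacity from saved `zpool list` text.
# Column 7 is CAP in this output format.
cat > /tmp/zpool_list.txt <<'EOF'
NAME                     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
VOLU10TB                9.06T  3.34T  5.73T         -     0%    36%  1.00x  ONLINE  /mnt
ZFS_8x_3TB_RAIDz2_pool  21.8T  21.1T   696G         -    42%    96%  1.00x  ONLINE  -
freenas-boot            14.4G   755M  13.6G         -      -     5%  1.00x  ONLINE  -
EOF
awk 'NR > 1 { cap = $7; gsub("%", "", cap); if (cap + 0 > 90) print $1 " is " cap "% full" }' /tmp/zpool_list.txt
# prints: ZFS_8x_3TB_RAIDz2_pool is 96% full
```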
 

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
Give zpool import -R /mnt ZFS_8x_3TB_RAIDz2_pool. This tells the system to try to import ZFS_8x_3TB_RAIDz2_pool to /mnt per FreeNAS' expectation.

Code:
root@Freenas:/mnt # zpool import -R /mnt ZFS_8x_3TB_RAIDz2_pool
cannot import 'ZFS_8x_3TB_RAIDz2_pool': a pool with that name is already created/imported,
and no additional pools with that name were found
root@Freenas:/mnt # ls
md_size         VOLU10TB
root@Freenas:/mnt # mkdir test
root@Freenas:/mnt # zpool import -R /mnt/test ZFS_8x_3TB_RAIDz2_pool
cannot import 'ZFS_8x_3TB_RAIDz2_pool': a pool with that name is already created/imported,
and no additional pools with that name were found


So close! Yet so far! Catching up on this article: https://www.ixsystems.com/community...-volume-not-showing-in-list.42414/post-275198. And this https://www.openattic.org/posts/unlock-geli-ecrypted-zfs-volume-freenas/ :
In our scenario it wasn't possible to mount the zfs volumes after the zpool import, because the default mountpath was wrong and the main path is read-only within FreeNAS. To change the default mount path from zfs:

zfs set mountpoint=/mnt poolname

Afterwards we could mount all existing zfs volumes to /mnt. To mount all existing volumes at a time:

zfs mount -a

That's it. Now we could access his data again.
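One caution before applying the openattic recipe verbatim: on FreeNAS, /mnt itself is a middleware-managed tmpfs, so setting the pool root's mountpoint to /mnt mounts the pool directly over it. A sketch of the more conventional target, assuming the usual FreeNAS layout (pool name taken from this thread; adjust for yours):

```shell
# Sketch: give the pool root its own directory under /mnt rather than
# mounting it over /mnt itself, then mount the child datasets.
zfs set mountpoint=/mnt/ZFS_8x_3TB_RAIDz2_pool ZFS_8x_3TB_RAIDz2_pool
zfs mount -a
```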
 
Joined
Oct 18, 2018
Messages
969
I suspect it is because the first import was missing the correct flags. Try exporting and then reimporting: zpool export ZFS_8x_3TB_RAIDz2_pool, then zpool import -R /mnt ZFS_8x_3TB_RAIDz2_pool.
 

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
Code:
root@Freenas:/mnt # zfs set mountpoint=/mnt ZFS_8x_3TB_RAIDz2_pool
root@Freenas:/mnt # zfs mount -a
cannot mount '/mnt/Pipi': failed to create mountpoint
cannot mount '/mnt/Pipi/.system': failed to create mountpoint
cannot mount '/mnt/Pipi/.system/configs-839d4bf50898424ab2b76c72b7c93def': failed to create mountpoint
cannot mount '/mnt/Pipi/.system/configs-ab47b56698bd4a6c9dbb53d875ef5ec8': failed to create mountpoint
cannot mount '/mnt/Pipi/.system/configs-e2eccb3703ad46d2b19f2e4809443384': failed to create mountpoint
cannot mount '/mnt/Pipi/.system/cores': failed to create mountpoint
cannot mount '/mnt/Pipi/.system/rrd-839d4bf50898424ab2b76c72b7c93def': failed to create mountpoint
cannot mount '/mnt/Pipi/.system/rrd-ab47b56698bd4a6c9dbb53d875ef5ec8': failed to create mountpoint
cannot mount '/mnt/Pipi/.system/rrd-e2eccb3703ad46d2b19f2e4809443384': failed to create mountpoint
cannot mount '/mnt/Pipi/.system/samba4': failed to create mountpoint
cannot mount '/mnt/Pipi/.system/syslog-839d4bf50898424ab2b76c72b7c93def': failed to create mountpoint
cannot mount '/mnt/Pipi/.system/syslog-ab47b56698bd4a6c9dbb53d875ef5ec8': failed to create mountpoint
cannot mount '/mnt/Pipi/.system/syslog-e2eccb3703ad46d2b19f2e4809443384': failed to create mountpoint
cannot mount '/mnt/Pipi/B': failed to create mountpoint
cannot mount '/mnt/Pipi/M': failed to create mountpoint
cannot mount '/mnt/Pipi/Phoenix': failed to create mountpoint
cannot mount '/mnt/Pipi/Seagate_4TB': failed to create mountpoint
cannot mount '/mnt/Pipi/WD_2TB_EXT-HDD': failed to create mountpoint
cannot mount '/mnt/Pipi/Wallets Backup': failed to create mountpoint
cannot mount '/mnt/Pipi/jails': failed to create mountpoint
cannot mount '/mnt/Pipi/jails/.warden-template-pluginjail': failed to create mountpoint
cannot mount '/mnt/Pipi/jails/.warden-template-pluginjail-9.3-x64': failed to create mountpoint
cannot mount '/mnt/Pipi/jails_2': failed to create mountpoint
cannot mount '/mnt/Pipi/jails_3': failed to create mountpoint
 

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
I suspect it is because the first import was missing the correct flags. Try exporting and then reimporting: zpool export ZFS_8x_3TB_RAIDz2_pool, then zpool import -R /mnt ZFS_8x_3TB_RAIDz2_pool.

Time for a reboot, and then start the decryption again? Or is it better to remove the encryption altogether before rebooting? Once I have enough free space I could always encrypt it again.
Code:
root@Freenas:/mnt # zpool export ZFS_8x_3TB_RAIDz2_pool
cannot unmount '/mnt': Device busy


Code:
root@Freenas:/mnt # mount
freenas-boot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
tmpfs on /etc (tmpfs, local)
tmpfs on /mnt (tmpfs, local)
tmpfs on /var (tmpfs, local)
freenas-boot/grub on /boot/grub (zfs, local, noatime, nfsv4acls)
fdescfs on /dev/fd (fdescfs)
VOLU10TB on /mnt/VOLU10TB (zfs, local, nfsv4acls)
VOLU10TB/Pipi on /mnt/VOLU10TB/Pipi (zfs, local, nfsv4acls)
VOLU10TB/Pipi/.system on /mnt/VOLU10TB/Pipi/.system (zfs, local, nfsv4acls)
VOLU10TB/Pipi/.system/configs-839d4bf50898424ab2b76c72b7c93def on /mnt/VOLU10TB/Pipi/.system/configs-839d4bf50898424ab2b76c72b7c93def (zfs, local, nfsv4acls)
VOLU10TB/Pipi/.system/configs-ab47b56698bd4a6c9dbb53d875ef5ec8 on /mnt/VOLU10TB/Pipi/.system/configs-ab47b56698bd4a6c9dbb53d875ef5ec8 (zfs, local, nfsv4acls)
VOLU10TB/Pipi/.system/configs-e2eccb3703ad46d2b19f2e4809443384 on /mnt/VOLU10TB/Pipi/.system/configs-e2eccb3703ad46d2b19f2e4809443384 (zfs, local, nfsv4acls)
VOLU10TB/Pipi/.system/cores on /mnt/VOLU10TB/Pipi/.system/cores (zfs, local, nfsv4acls)
VOLU10TB/Pipi/.system/rrd-839d4bf50898424ab2b76c72b7c93def on /mnt/VOLU10TB/Pipi/.system/rrd-839d4bf50898424ab2b76c72b7c93def (zfs, local, nfsv4acls)
VOLU10TB/Pipi/.system/rrd-ab47b56698bd4a6c9dbb53d875ef5ec8 on /mnt/VOLU10TB/Pipi/.system/rrd-ab47b56698bd4a6c9dbb53d875ef5ec8 (zfs, local, nfsv4acls)
VOLU10TB/Pipi/.system/rrd-e2eccb3703ad46d2b19f2e4809443384 on /mnt/VOLU10TB/Pipi/.system/rrd-e2eccb3703ad46d2b19f2e4809443384 (zfs, local, nfsv4acls)
VOLU10TB/Pipi/.system/samba4 on /mnt/VOLU10TB/Pipi/.system/samba4 (zfs, local, nfsv4acls)
VOLU10TB/Pipi/.system/syslog-839d4bf50898424ab2b76c72b7c93def on /mnt/VOLU10TB/Pipi/.system/syslog-839d4bf50898424ab2b76c72b7c93def (zfs, local, nfsv4acls)
VOLU10TB/Pipi/.system/syslog-ab47b56698bd4a6c9dbb53d875ef5ec8 on /mnt/VOLU10TB/Pipi/.system/syslog-ab47b56698bd4a6c9dbb53d875ef5ec8 (zfs, local, nfsv4acls)
VOLU10TB/Pipi/.system/syslog-e2eccb3703ad46d2b19f2e4809443384 on /mnt/VOLU10TB/Pipi/.system/syslog-e2eccb3703ad46d2b19f2e4809443384 (zfs, local, nfsv4acls)
VOLU10TB/Pipi/B on /mnt/VOLU10TB/Pipi/B (zfs, local, nfsv4acls)
VOLU10TB/Pipi/M on /mnt/VOLU10TB/Pipi/M (zfs, local, nfsv4acls)
VOLU10TB/Pipi/jails on /mnt/VOLU10TB/Pipi/jails (zfs, local, nfsv4acls)
VOLU10TB/Pipi/jails/.warden-template-pluginjail on /mnt/VOLU10TB/Pipi/jails/.warden-template-pluginjail (zfs, local, nfsv4acls)
VOLU10TB/Pipi/jails/.warden-template-pluginjail-9.3-x64 on /mnt/VOLU10TB/Pipi/jails/.warden-template-pluginjail-9.3-x64 (zfs, local, nfsv4acls)
VOLU10TB/Pipi/jails_2 on /mnt/VOLU10TB/Pipi/jails_2 (zfs, local, nfsv4acls)
VOLU10TB/Pipi/jails_3 on /mnt/VOLU10TB/Pipi/jails_3 (zfs, local, nfsv4acls)
VOLU10TB/jails on /mnt/VOLU10TB/jails (zfs, local, nfsv4acls)
VOLU10TB/jails_2 on /mnt/VOLU10TB/jails_2 (zfs, local, nfsv4acls)
VOLU10TB/jails_2/.warden-template-pluginjail-11.0-x64 on /mnt/VOLU10TB/jails_2/.warden-template-pluginjail-11.0-x64 (zfs, local, nfsv4acls)
VOLU10TB/jails_2/.warden-template-pluginjail-11.0-x64-20180914015336 on /mnt/VOLU10TB/jails_2/.warden-template-pluginjail-11.0-x64-20180914015336 (zfs, local, nfsv4acls)
VOLU10TB/jails_2/xmrig_1 on /mnt/VOLU10TB/jails_2/xmrig_1 (zfs, local, nfsv4acls)
tmpfs on /var/db/collectd/rrd (tmpfs, local)
devfs on /mnt/VOLU10TB/jails_2/xmrig_1/dev (devfs, local, multilabel)
procfs on /mnt/VOLU10TB/jails_2/xmrig_1/proc (procfs, local)
ZFS_8x_3TB_RAIDz2_pool on /mnt (zfs, local, nfsv4acls)
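The last line of that mount output is the telling one: ZFS_8x_3TB_RAIDz2_pool is mounted directly on /mnt, on top of the tmpfs FreeNAS keeps there, and other filesystems sit beneath it. That, plus the shell's working directory being /mnt itself (note the prompt), is the likely cause of the "Device busy" export failure. A sketch of how to clear it on FreeBSD (fstat is the FreeBSD counterpart to fuser):

```shell
# Sketch: move out of /mnt first (the shell's cwd alone can keep it
# busy), check what else has files open on the mount, then retry.
cd /
fstat -f /mnt
zpool export ZFS_8x_3TB_RAIDz2_pool
```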
 
Joined
Oct 18, 2018
Messages
969
Time for a reboot and start the decryption again?
Sure, yeah, try that, and then do the import with the -R flag.

Sorry, I'm not as familiar with this side of things as I am with the encryption. You're welcome to wait for someone who knows this side better than I do if you prefer.
 

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
Some ppl are not willing to help me because I don't play by their rules, or so they say :) So I appreciate each and every step along the way!

Time for a reboot and decryption once more. I'll not remove the encryption for now. (edit: nor could I find an option to remove the encryption)
 

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
PS: would it be an option to (try to) remove a snapshot through the GUI? It shows me the option, and there are a couple of those… Is it safe to remove? (I'll google --> edit: https://www.ixsystems.com/community/threads/deleting-snapshots.41133/post-261873...

--> It seems safe to do, without risk of losing data... with the exception of data that got removed but shouldn't have been... The snapshots seem to give the ability to restore those accidental deletes… which I actually might need later on for another recovery project... So I'll skip removing snapshots for now... I really need to find a big file to /dev/null :) Time for that reboot...
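On the snapshot question: before deleting anything, it helps to see how much space each snapshot uniquely pins, since destroying a snapshot only frees its USED column. A sketch with standard ZFS commands (pool name from this thread; @snapname is a placeholder, and the destroy line is commented out because it is irreversible):

```shell
# Sketch: list snapshots sorted by the space each one uniquely holds.
zfs list -t snapshot -o name,used,referenced -s used -r ZFS_8x_3TB_RAIDz2_pool
# To reclaim space from one snapshot (irreversible, double-check!):
# zfs destroy ZFS_8x_3TB_RAIDz2_pool/Pipi@snapname
```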
 

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
After the reboot and logging in to the GUI, the GUI is only "loading" the Volumes page and not actually showing the volumes found... and more worrisome, I'm getting A LOT of "vdev state changed" boot messages… Like, scrolling page after page :(

wtf could be going on? I posted a snippet below...

Code:
Mar 26 22:04:54 Freenas ZFS: vdev state changed, pool_guid=4057754529204707565 vdev_guid=14799165109861233241
Mar 26 22:04:55 Freenas ZFS: vdev state changed, pool_guid=4057754529204707565 vdev_guid=1092407030297009407
Mar 26 22:04:56 Freenas ZFS: vdev state changed, pool_guid=4057754529204707565 vdev_guid=11554200803400165353
Mar 26 22:04:56 Freenas ZFS: vdev state changed, pool_guid=4057754529204707565 vdev_guid=16833066816084926306
Mar 26 22:04:57 Freenas ZFS: vdev state changed, pool_guid=4057754529204707565 vdev_guid=11437423961006186802


I'm also getting these errors from the GUI:
Code:
 CRITICAL: March 26, 2019, 10 p.m. - The volume ZFS_8x_3TB_RAIDz2_pool state is UNAVAIL: One or more devices could not be opened. There are insufficient replicas for the pool to continue functioning.
 WARNING: March 26, 2019, 10:03 p.m. - New feature flags are available for volume ZFS_8x_3TB_RAIDz2_pool. Refer to the "Upgrading a ZFS Pool" section of the User Guide for instructions.


I haven't done anything that wasn't advised, and didn't remove or change anything (except for decrypting and importing), so wtf is going on? How could this be?
 
Joined
Oct 18, 2018
Messages
969
I can't say exactly why you're seeing those specific errors. I'd have to do some research. I'm not surprised you're seeing errors given that we tried to import the pool through the terminal rather than the GUI due to the GUI being unable to decrypt and import the pool. Have you tried to decrypt and import the pool since rebooting?
 

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
I can't say exactly why you're seeing those specific errors. I'd have to do some research. I'm not surprised you're seeing errors given that we tried to import the pool through the terminal rather than the GUI due to the GUI being unable to decrypt and import the pool. Have you tried to decrypt and import the pool since rebooting?

No, I did nothing but reboot and login to the GUI. Not even SSH was started. All disks do report for duty:
Code:
root@Freenas:~ # geom disk list
Geom name: da0
Providers:
1. Name: da0
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w0e1
   descr: ATA WDC WD30EFRX-68E
   lunid: 50014ee003c0a9c1
   ident: WD-WMC4N2081759
   rotationrate: 5400
   fwsectors: 63
   fwheads: 255

Geom name: da1
Providers:
1. Name: da1
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   descr: ATA WDC WD30EFRX-68E
   lunid: 50014ee20c340145
   ident: WD-WCC4N4SCDS21
   rotationrate: 5400
   fwsectors: 63
   fwheads: 255

Geom name: da2
Providers:
1. Name: da2
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   descr: ATA WDC WD30EFRX-68E
   lunid: 50014ee05915f269
   ident: WD-WMC4N2146579
   rotationrate: 5400
   fwsectors: 63
   fwheads: 255

Geom name: da3
Providers:
1. Name: da3
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Mode: r0w0e0
   descr: ATA MB3000EBUCH
   lunid: 5000cca225f2b1fc
   ident: YHKLJ61A
   rotationrate: 7200
   fwsectors: 63
   fwheads: 255

Geom name: da4
Providers:
1. Name: da4
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Mode: r0w0e0
   descr: ATA MB3000EBUCH
   lunid: 5000cca225f176df
   ident: YHKHU7UA
   rotationrate: 7200
   fwsectors: 63
   fwheads: 255

Geom name: da5
Providers:
1. Name: da5
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Mode: r1w1e2
   descr: ATA MB3000EBUCH
   lunid: 5000cca225d6e6f6
   ident: YHHMBURA
   rotationrate: 7200
   fwsectors: 63
   fwheads: 255

Geom name: da6
Providers:
1. Name: da6
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Mode: r1w1e2
   descr: ATA MB3000EBUCH
   lunid: 5000cca225f24da0
   ident: YHKKNG8A
   rotationrate: 7200
   fwsectors: 63
   fwheads: 255

Geom name: da7
Providers:
1. Name: da7
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Mode: r1w1e2
   descr: ATA MB3000EBUCH
   lunid: 5000cca225f24e55
   ident: YHKKNN3A
   rotationrate: 7200
   fwsectors: 63
   fwheads: 255


dmesg output: https://pastebin.com/PXMr8Sbw

Could this have messed it up? zfs set mountpoint=/mnt ZFS_8x_3TB_RAIDz2_pool?

What do I do now? Keep the messages scrolling and expect them to stop at some point? Should I do another reboot? We were so fracking close, and we did everything by the book (except, yeah, the pool is at 96% capacity).
 
Joined
Oct 18, 2018
Messages
969
What do I do now?
To be honest, I'm reluctant to offer more advice. If it were my machine with my data I'd experiment more, but I've got backups and so losing a pool isn't a huge deal for me. I don't want to keep poking around on your data and lose your pool.

Given that your pool lists at 96% full I suspect that you'll be able to get the data off once you can manage to import the pool.

Also, you won't be able to remove the encryption easily. Given that you are able to manually decrypt the drives, I wouldn't worry about that as much right now. Encryption is not your issue anymore as far as I can tell; the issue now is importing the pool.

You may consider posting another thread or waiting for more folks to chime in here. I expect that more experienced FreeNAS users may offer rather direct suggestions for your pool or setup, or simply direct you to documentation. It can seem off-putting at times, but a little patience goes a long way around here. The experienced folks on these forums are quite knowledgeable and can solve a wide variety of problems. It helps to be as clear as you can in your posts and to provide as much specific detail as you can.
 

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
All good things come to an end, huh? I so appreciate your help up to now. Not sure what kind of post to make next, though :/ And I am usually rather patient, despite the urgency most of the time :) I just might have questions or ideas of my own, and that's not always appreciated in the BSD world. It has nothing to do with me "refusing" advice. Why would I?

Code:
root@Freenas:~ # zpool list
NAME                     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
VOLU10TB                9.06T  3.34T  5.73T         -     0%    36%  1.00x  ONLINE  /mnt
ZFS_8x_3TB_RAIDz2_pool      -      -      -         -      -      -      -  UNAVAIL  -
freenas-boot            14.4G   755M  13.6G         -      -     5%  1.00x  ONLINE  -


Thank you PhiloEpisteme - if you have something to share, don't hesitate! :)

xx

PS Could this have messed it up? zfs set mountpoint=/mnt ZFS_8x_3TB_RAIDz2_pool?
 
