Exception Type: MiddlewareError at /storage/volume/2/unlock/

Joined
Oct 18, 2018
Messages
969
Now my question... Would it be really dumb to import my old settings again?
So glad it all worked out! If I were you, before anything else I'd remove data from that pool to free up some space. If you can, try to get it down below 80%. Then you can work out how to go from there. You really don't want your pool getting that full in the first place.
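For reference, a quick way to see from the shell how full a pool really is (just a sketch; "yourpool" is a placeholder for the pool name):

Code:
# Raw pool occupancy, parity included
zpool list -o name,size,alloc,free,cap,health
# Usable space per dataset, with snapshot usage broken out
zfs list -o space -r yourpool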
 

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
:) you are absolutely right and it's on my to do list. First see what's on it that I can use for another data rescue ;-)

Appreciated!
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
Glad to see it's working now :)

you are absolutely right and it's on my to do list.
Please post whether it works as expected. I mean I'm curious if you have many snapshots and if they interfere with what you're going to do next...

Sent from my phone
 
Joined
Dec 2, 2015
Messages
730
Wow - for real?? I should still have the key lying around, but wow - that's DANGEROUS when the general advice is not to rebuild a boot pool but to start fresh instead. Especially when the export mentions including the passwords for you… So without the key I'd have lost 10+ TB of data just like that?
There are three possibilities:
  1. If it was data you cared about, you should have a backup. You'd have to wipe the disks, make a new encrypted pool, and then copy the data back to the pool from the backup. It would be a PITA, but you wouldn't have lost any data.
  2. If you didn't have a backup, that implies it was data you didn't care about. So, while you would have lost the data, you had already decided the data wasn't worth backing up, so the data loss is OK.
  3. The data was important, but you hadn't gone to the trouble and expense of backing it up. Important data that is not backed up is a very bad combination. If you care about the data, back it up (a rough sketch follows below), and keep one copy of the backup off-site, so it will survive a fire or theft.
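To make item 3 concrete, here is a minimal sketch of an off-machine backup using ZFS replication, assuming a second ZFS box named backupnas reachable over SSH (the host, pool, dataset, and snapshot names are all placeholders):

Code:
# Snapshot the dataset you care about
zfs snapshot yourpool/data@backup-2019-04-03
# Initial full copy to the other machine (received unmounted with -u)
zfs send yourpool/data@backup-2019-04-03 | ssh backupnas zfs recv -u backuppool/data
# Later runs only send the changes since the previous snapshot
zfs snapshot yourpool/data@backup-2019-04-10
zfs send -i @backup-2019-04-03 yourpool/data@backup-2019-04-10 | ssh backupnas zfs recv -u backuppool/data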
 

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
It might surprise you, but for some FreeNAS is the actual end-point of SAFE storage of data I care about. How the frack am I gonna buy all the hardware needed to make a proper backup of my NAS?? It's like putting the horse before the carriage ;p
 
Last edited:

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
Please post whether it works as expected. I mean I'm curious if you have many snapshots and if they interfere with what you're going to do next...

Thank you : )

I might have yelled victory too soon...
If I unlock a pool, it should be browsable through /mnt/*, right? Because all I see when I go to my pool is '.freenas'. I'm afraid I might end up having to study this article https://www.ixsystems.com/community/threads/zpool-imported-but-files-missing.55631/ (but I hope not). Sigh. I'll be back :/

Code:
root@freenas:/mnt # zpool status
  pool: VOLU10TB
state: ONLINE
  scan: scrub repaired 0 in 0 days 04:52:22 with 0 errors on Sun Mar 10 04:52:22 2019
config:

        NAME                                          STATE     READ WRITE CKSUM
        VOLU10TB                                      ONLINE       0     0     0
          gptid/6baca59b-0553-11e8-a1db-0025901159d4  ONLINE       0     0     0

errors: No known data errors

  pool: ZFS_8x_3TB_RAIDz2_pool
state: ONLINE
  scan: scrub repaired 0 in 0 days 09:02:03 with 0 errors on Sun Nov  4 09:02:05 2018
config:

        NAME                                                STATE     READ WRITE CKSUM
        ZFS_8x_3TB_RAIDz2_pool                              ONLINE       0     0     0
          raidz1-0                                          ONLINE       0     0     0
            gptid/7a2fc1c9-08ef-11e8-ab60-0025901159d4.eli  ONLINE       0     0     0
            gptid/7b297d44-08ef-11e8-ab60-0025901159d4.eli  ONLINE       0     0     0
            gptid/7c382ed2-08ef-11e8-ab60-0025901159d4.eli  ONLINE       0     0     0
            gptid/7eade0a6-08ef-11e8-ab60-0025901159d4.eli  ONLINE       0     0     0
          raidz1-1                                          ONLINE       0     0     0
            gptid/85693c1a-08ef-11e8-ab60-0025901159d4.eli  ONLINE       0     0     0
            gptid/888ff6c8-08ef-11e8-ab60-0025901159d4.eli  ONLINE       0     0     0
            gptid/8bb7c249-08ef-11e8-ab60-0025901159d4.eli  ONLINE       0     0     0
            gptid/8ec07bd1-08ef-11e8-ab60-0025901159d4.eli  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da8p2     ONLINE       0     0     0

errors: No known data errors
root@freenas:/mnt # ls /mnt/ZFS_8x_3TB_RAIDz2_pool/
.freenas
root@freenas:/mnt # ls /mnt/ZFS_8x_3TB_RAIDz2_pool/*
ls: No match.
root@freenas:/mnt # ls -lias /mnt/ZFS_8x_3TB_RAIDz2_pool/
total 1
4 1 drwxr-xr-x  3 root  wheel    3 Mar 25 22:49 .
2 0 drwxr-xr-x  4 root  wheel  192 Apr  3 13:02 ..
10 1 drwxr-xr-x  2 www   www      2 Feb  7  2018 .freenas


[Screenshot attachment: MwWke1.jpg]


Code:
root@freenas:/mnt # du -hs /mnt/ZFS_8x_3TB_RAIDz2_pool/
1.0K    /mnt/ZFS_8x_3TB_RAIDz2_pool/


Code:
root@freenas:/mnt # zfs list -r ZFS_8x_3TB_RAIDz2_pool
NAME                                                                           USED  AVAIL  REFER  MOUNTPOINT
ZFS_8x_3TB_RAIDz2_pool                                                        15.3T      0   128K  /mnt/ZFS_8x_3TB_RAIDz2_pool
ZFS_8x_3TB_RAIDz2_pool/.system                                                 112M      0  16.1M  /mnt/ZFS_8x_3TB_RAIDz2_pool/.system
ZFS_8x_3TB_RAIDz2_pool/.system/configs-66311c036e824820af44b2dbf4c55f10       17.3M      0  17.3M  /mnt/ZFS_8x_3TB_RAIDz2_pool/.system/configs-66311c036e824820af44b2dbf4c55f10
ZFS_8x_3TB_RAIDz2_pool/.system/cores                                           128K      0   128K  /mnt/ZFS_8x_3TB_RAIDz2_pool/.system/cores
ZFS_8x_3TB_RAIDz2_pool/.system/rrd-66311c036e824820af44b2dbf4c55f10           71.1M      0  71.1M  /mnt/ZFS_8x_3TB_RAIDz2_pool/.system/rrd-66311c036e824820af44b2dbf4c55f10
ZFS_8x_3TB_RAIDz2_pool/.system/samba4                                          512K      0   512K  /mnt/ZFS_8x_3TB_RAIDz2_pool/.system/samba4
ZFS_8x_3TB_RAIDz2_pool/.system/syslog-66311c036e824820af44b2dbf4c55f10        6.49M      0  6.49M  /mnt/ZFS_8x_3TB_RAIDz2_pool/.system/syslog-66311c036e824820af44b2dbf4c55f10
ZFS_8x_3TB_RAIDz2_pool/Pipi                                                   15.3T      0   140K  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi
ZFS_8x_3TB_RAIDz2_pool/Pipi/.system                                           19.4M      0  1.79M  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system
ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/configs-839d4bf50898424ab2b76c72b7c93def   128K      0   128K  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/configs-839d4bf50898424ab2b76c72b7c93def
ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/configs-ab47b56698bd4a6c9dbb53d875ef5ec8  12.6M      0  12.6M  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/configs-ab47b56698bd4a6c9dbb53d875ef5ec8
ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/configs-e2eccb3703ad46d2b19f2e4809443384   215K      0   215K  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/configs-e2eccb3703ad46d2b19f2e4809443384
ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/cores                                      849K      0   849K  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/cores
ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/rrd-839d4bf50898424ab2b76c72b7c93def       128K      0   128K  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/rrd-839d4bf50898424ab2b76c72b7c93def
ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/rrd-ab47b56698bd4a6c9dbb53d875ef5ec8       128K      0   128K  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/rrd-ab47b56698bd4a6c9dbb53d875ef5ec8
ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/rrd-e2eccb3703ad46d2b19f2e4809443384       128K      0   128K  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/rrd-e2eccb3703ad46d2b19f2e4809443384
ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/samba4                                     651K      0   651K  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/samba4
ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/syslog-839d4bf50898424ab2b76c72b7c93def    128K      0   128K  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/syslog-839d4bf50898424ab2b76c72b7c93def
ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/syslog-ab47b56698bd4a6c9dbb53d875ef5ec8   1.02M      0  1.02M  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/syslog-ab47b56698bd4a6c9dbb53d875ef5ec8
ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/syslog-e2eccb3703ad46d2b19f2e4809443384    517K      0   517K  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/.system/syslog-e2eccb3703ad46d2b19f2e4809443384
ZFS_8x_3TB_RAIDz2_pool/Pipi/B                                                 11.3T      0  10.4T  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/B
ZFS_8x_3TB_RAIDz2_pool/Pipi/M                                                  128K      0   128K  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/M
ZFS_8x_3TB_RAIDz2_pool/Pipi/Phoenix                                            221G      0   221G  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/Phoenix
ZFS_8x_3TB_RAIDz2_pool/Pipi/Seagate_4TB                                       2.49T      0  2.49T  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/Seagate_4TB
ZFS_8x_3TB_RAIDz2_pool/Pipi/WD_2TB_EXT-HDD                                    1.06T      0  1.06T  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/WD_2TB_EXT-HDD
ZFS_8x_3TB_RAIDz2_pool/Pipi/Wallets Backup                                     246G      0   246G  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/Wallets Backup
ZFS_8x_3TB_RAIDz2_pool/Pipi/jails                                             1014M      0   291K  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/jails
ZFS_8x_3TB_RAIDz2_pool/Pipi/jails/.warden-template-pluginjail                  517M      0   517M  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/jails/.warden-template-pluginjail
ZFS_8x_3TB_RAIDz2_pool/Pipi/jails/.warden-template-pluginjail-9.3-x64          496M      0   496M  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/jails/.warden-template-pluginjail-9.3-x64
ZFS_8x_3TB_RAIDz2_pool/Pipi/jails_2                                            128K      0   128K  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/jails_2
ZFS_8x_3TB_RAIDz2_pool/Pipi/jails_3                                            128K      0   128K  /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/jails_3


I'll pause any further investigations until you guys hopefully point out something dumb on my end, before continuing here: https://www.ixsystems.com/community/threads/zpool-imported-but-files-missing.55631/post-389480
 
Last edited:
Joined
Oct 18, 2018
Messages
969
It might surprise you, but for some FreeNAS is the actual end-point of SAFE storage of data I care about. How the frack am I gonna buy all the hardware needed to make a proper backup of my NAS?? It's like putting the horse before the carriage ;p
The general rule of thumb is that nothing is a substitute for a backup. For my storage needs I have my main FreeNAS machine and then two copies of the data it stores, one copy on-site (in my apt), and one copy I take to work. If my house burns down I don't want to lose my data :)
 
Joined
Oct 18, 2018
Messages
969

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
Yes, I have not yet imported the old settings and am on a fresh install of 10.1-U6. I think it could be smart to import the settings now, if only to try I suppose? If all else fails, try the obvious, huh?

The outputs... If anyone sees something let us know :) Otherwise, I'll be importing the settings next... Right?

Code:
root@freenas:/mnt # zfs mount ZFS_8x_3TB_RAIDz2_pool
cannot mount 'ZFS_8x_3TB_RAIDz2_pool': filesystem already mounted
root@freenas:/mnt # zfs get all ZFS_8x_3TB_RAIDz2_pool
NAME                    PROPERTY              VALUE                        SOURCE
ZFS_8x_3TB_RAIDz2_pool  type                  filesystem                   -
ZFS_8x_3TB_RAIDz2_pool  creation              Sat Feb  3 15:36 2018        -
ZFS_8x_3TB_RAIDz2_pool  used                  15.3T                        -
ZFS_8x_3TB_RAIDz2_pool  available             0                            -
ZFS_8x_3TB_RAIDz2_pool  referenced            128K                         -
ZFS_8x_3TB_RAIDz2_pool  compressratio         1.74x                        -
ZFS_8x_3TB_RAIDz2_pool  mounted               yes                          -
ZFS_8x_3TB_RAIDz2_pool  quota                 none                         default
ZFS_8x_3TB_RAIDz2_pool  reservation           none                         default
ZFS_8x_3TB_RAIDz2_pool  recordsize            128K                         default
ZFS_8x_3TB_RAIDz2_pool  mountpoint            /mnt/ZFS_8x_3TB_RAIDz2_pool  default
ZFS_8x_3TB_RAIDz2_pool  sharenfs              off                          default
ZFS_8x_3TB_RAIDz2_pool  checksum              on                           default
ZFS_8x_3TB_RAIDz2_pool  compression           lz4                          local
ZFS_8x_3TB_RAIDz2_pool  atime                 on                           default
ZFS_8x_3TB_RAIDz2_pool  devices               on                           default
ZFS_8x_3TB_RAIDz2_pool  exec                  on                           default
ZFS_8x_3TB_RAIDz2_pool  setuid                on                           default
ZFS_8x_3TB_RAIDz2_pool  readonly              off                          default
ZFS_8x_3TB_RAIDz2_pool  jailed                off                          default
ZFS_8x_3TB_RAIDz2_pool  snapdir               hidden                       default
ZFS_8x_3TB_RAIDz2_pool  aclmode               passthrough                  local
ZFS_8x_3TB_RAIDz2_pool  aclinherit            passthrough                  local
ZFS_8x_3TB_RAIDz2_pool  canmount              on                           default
ZFS_8x_3TB_RAIDz2_pool  xattr                 off                          temporary
ZFS_8x_3TB_RAIDz2_pool  copies                1                            default
ZFS_8x_3TB_RAIDz2_pool  version               5                            -
ZFS_8x_3TB_RAIDz2_pool  utf8only              off                          -
ZFS_8x_3TB_RAIDz2_pool  normalization         none                         -
ZFS_8x_3TB_RAIDz2_pool  casesensitivity       sensitive                    -
ZFS_8x_3TB_RAIDz2_pool  vscan                 off                          default
ZFS_8x_3TB_RAIDz2_pool  nbmand                off                          default
ZFS_8x_3TB_RAIDz2_pool  sharesmb              off                          default
ZFS_8x_3TB_RAIDz2_pool  refquota              none                         default
ZFS_8x_3TB_RAIDz2_pool  refreservation        none                         default
ZFS_8x_3TB_RAIDz2_pool  primarycache          all                          default
ZFS_8x_3TB_RAIDz2_pool  secondarycache        all                          default
ZFS_8x_3TB_RAIDz2_pool  usedbysnapshots       163K                         -
ZFS_8x_3TB_RAIDz2_pool  usedbydataset         128K                         -
ZFS_8x_3TB_RAIDz2_pool  usedbychildren        15.3T                        -
ZFS_8x_3TB_RAIDz2_pool  usedbyrefreservation  0                            -
ZFS_8x_3TB_RAIDz2_pool  logbias               latency                      default
ZFS_8x_3TB_RAIDz2_pool  dedup                 off                          default
ZFS_8x_3TB_RAIDz2_pool  mlslabel                                           -
ZFS_8x_3TB_RAIDz2_pool  sync                  standard                     default
ZFS_8x_3TB_RAIDz2_pool  refcompressratio      1.00x                        -
ZFS_8x_3TB_RAIDz2_pool  written               81.4K                        -
ZFS_8x_3TB_RAIDz2_pool  logicalused           26.6T                        -
ZFS_8x_3TB_RAIDz2_pool  logicalreferenced     36.5K                        -
ZFS_8x_3TB_RAIDz2_pool  volmode               default                      default
ZFS_8x_3TB_RAIDz2_pool  filesystem_limit      none                         default
ZFS_8x_3TB_RAIDz2_pool  snapshot_limit        none                         default
ZFS_8x_3TB_RAIDz2_pool  filesystem_count      none                         default
ZFS_8x_3TB_RAIDz2_pool  snapshot_count        none                         default
ZFS_8x_3TB_RAIDz2_pool  redundant_metadata    all                          default


I don't feel confident continuing from here: https://www.ixsystems.com/community/threads/zpool-imported-but-files-missing.55631/post-389558
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
referenced 128K
This suggests the data is only in snapshots. Will you list the gigabytes occupied by the snapshots? I can't remember the command off the top of my head, but it's pretty simple...
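(If it helps, the command is probably something along these lines; just a sketch:)

Code:
# Space held by every snapshot in the pool
zfs list -t snapshot -r -o name,used,referenced ZFS_8x_3TB_RAIDz2_pool
# Or per dataset: how much is pinned by snapshots vs. the live data
zfs list -r -o name,usedbysnapshots,usedbydataset ZFS_8x_3TB_RAIDz2_pool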

available 0
It suggests your pool is 100% full. Is that possible? If so, then even cloning a snapshot might not be possible (just a guess)...

The safest way seems to be replicating the pool to a bigger one (using the send/recv subcommands), thus having all the snapshots replicated too... I can't remember off the top of my head whether you have enough spare disks, enough spare SATA ports, and a big enough PSU... Can you borrow a server? ;)
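Roughly, that replication would look like this (only a sketch; BIGGERPOOL is a placeholder for a new, larger pool built on the spare disks):

Code:
# Recursive snapshot of everything in the old pool
# (if the pool is completely full this snapshot creation may fail)
zfs snapshot -r ZFS_8x_3TB_RAIDz2_pool@migrate
# Replicate all datasets and snapshots into a child dataset of the new pool
zfs send -R ZFS_8x_3TB_RAIDz2_pool@migrate | zfs recv -F BIGGERPOOL/old_pool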

Another way (though I'm not sure how risky it is) would be to replace all the disks with bigger ones, one by one, to grow the pool, but IIRC you weren't planning on that...

Sent from my phone
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
@devnullius
I almost forgot: you can safely do this:
You can try browsing the snapshots directly.

This way you should be able to access your data.
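Something like the following, I believe (a sketch; the dataset is taken from your zfs list output and the snapshot name is made up):

Code:
# Optionally make the hidden .zfs directory show up in listings
zfs set snapdir=visible ZFS_8x_3TB_RAIDz2_pool/Pipi/B
# The directory is reachable even while hidden
ls /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/B/.zfs/snapshot/
# Browse one snapshot read-only (example snapshot name)
ls /mnt/ZFS_8x_3TB_RAIDz2_pool/Pipi/B/.zfs/snapshot/auto-20190301/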

Sent from my phone
 

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
Last edited:

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
This suggests the data is only in snapshots. Will you list the gigabytes occupied by the snapshots? I can't remember the command off the top of my head, but it's pretty simple...


It suggests your pool is 100% full. Is that possible? If so, then even cloning a snapshot might not be possible (just a guess)...
The screenshot shows that almost 700GB should still be available in the pool itself...

[Screenshot attachment: zI4mFf.jpg - GUI view of the pool's capacity]
 
Last edited:

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
screenshot shows that almost 700GB should still be available in the pool itself.
I'm not familiar with this value. As you probably already know, the top-level line indicates raw values. That is: 21.1TiB plus 696GiB ≈ 21.7-21.8TiB, which should be the total size of all your disks, parity disks included.

Having said that... I don't know why the top line indicates "only" 96% occupancy while the top-level dataset is 100% full. Nor whether there is space left for cloning snapshots...
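If anyone wants to compare the two views from the shell, roughly (a sketch):

Code:
# Raw occupancy, parity included - this should match the GUI's top line
zpool list -o name,size,alloc,free,cap ZFS_8x_3TB_RAIDz2_pool
# Usable space as the datasets see it, snapshot usage broken out
zfs list -o space -r ZFS_8x_3TB_RAIDz2_pool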

Might be worth a separate thread here or somewhere ...

Sent from my phone
 

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
I finally had some more time to invest (I started over once more with a fresh installation), but this time even the setup came with a
Code:
SCSI sense: MEDIUM ERROR, Unrecovered read error
.

The RAID controller itself sees no problems and doesn't complain.

So I find it highly suspect that the disk should suddenly have failed. Yet here we are :(

What is now my best bet? Reminder: the volume is near its absolute maximum in occupied size, and there are some problems lingering, probably because of the lack of available free space...

1. If money was no object, I'd buy 8*4TB disks and start fresh through some kind of resilvering process. While I'm still looking at this option, I think it's too expensive.
2. So at the bare minimum I should buy 1*3TB disk and resilver the broken platter (which should still be possible, I hope; unless you know better, we'll cross that bridge when the new disk is here - see the rough sketch after this list).
3. And ideally I'd buy 2x 3TB disks and expand the volume to 9*3TB instead of 8*3TB. Is that technically possible? Or not?
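(For option 2, I assume the replace step itself would boil down to something like the sketch below, but on FreeNAS with GELI-encrypted pools the GUI's Volume Status > Replace is the safer route, since it also handles the encryption setup on the new disk; the device names here are only placeholders.)

Code:
# Find the failed member
zpool status -v ZFS_8x_3TB_RAIDz2_pool
# Swap the failed provider for the new disk (placeholders, do not copy blindly)
zpool replace ZFS_8x_3TB_RAIDz2_pool gptid/<failed-gptid>.eli <new-device>
# Watch the resilver progress
zpool status ZFS_8x_3TB_RAIDz2_pool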

Thank you :)
 

devnullius

Patron
Joined
Dec 9, 2015
Messages
289
Having said that... I don't know why the top line indicates "only" 96% occupation while the top level dataset is 100% full. Nor whether there is space for cloning snapshots...
I now have 1 unreadable medium (a disk that is part of the volume). So something might have been on the verge of failing, only to finally fail now, a few weeks later... I dunno :)
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
1. If money was no object I'd buy 8*4TB disks and start fresh through some kind of resilvering process. While I still
Were you able to access your data?

2. So at the bare minimum I should buy 1*3TB disk and resilver the broken platter (which should still be possible, I hope; unless you know better, we'll cross that bridge when the new disk is here).
Dunno, sorry, maybe someone else will chime in...


3. And ideally I'd buy 2x 3TB disks and expand the volume to 9*3TB instead of 8*3TB. Is that technically possible? Or not?
Not yet; it's still in development, someone estimated maybe two years... And only within the same redundancy level...

Sent from my phone
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
And only within the same redundancy level...
And it may not help if you still can't access your data :( Edit: or it may do the trick! I've just realized I don't know, apologies.

Sent from my phone
 
Last edited:

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
SCSI sense: MEDIUM ERROR, Unrecovered
I'd recommend reading other threads mentioning this error.

Having said that: what is your zpool status now, after the error?
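(Something like this is what I mean; da5 is only a placeholder for whichever disk threw the error:)

Code:
# Pool status with per-device error counters and any affected files
zpool status -v
# SMART health of the suspect disk (smartctl ships with FreeNAS)
smartctl -a /dev/da5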

Sent from my phone
 