Issue when importing zpool

javifo

Cadet
Joined
Feb 16, 2019
Messages
3
Hi people.

I want to recover the data from my zpool after my motherboard died, so I installed FreeNAS onto a USB drive and tried to import the pool.

I had 4 drives and an SSD as zpool cache, but I only connected the 4 hard drives.

"zpool import" gave errors that forced me to use "zpool import raid_volume -f -m".
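
Roughly, the flags I ended up needing (as I understand them from zpool(8); I've written them in the usual flags-before-pool order):

Code:
# -f  force the import of a pool that looks like it is still in use / was not cleanly exported
# -m  allow the import even though a log device is missing (the SSD was not connected)
zpool import -f -m raid_volume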

After a reboot "zpool status" had an assert "reason == ZPOOL_STATUS_OK" on that pool and I thought the cache drive was missing, so I added it.

After another reboot the pool was there, but there was no data, just the jails. I even tried removing the cache and rebooted. Nothing, still no data, just the jail data.

What can I do to recover my data? Or have I lost it?

Here is some info:

Code:
root@freenas[~]# zpool status
  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da1p2     ONLINE       0     0     0

errors: No known data errors

  pool: raid_volume
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
    still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(7) for details.
  scan: scrub repaired 960K in 0 days 03:54:20 with 0 errors on Sun Jan 27 06:10:42 2019
config:

    NAME                                            STATE     READ WRITE CKSUM
    raid_volume                                     ONLINE       0     0     0
      raidz2-0                                      ONLINE       0     0     0
        gptid/44b12b57-d48b-11e4-8585-d0509964a900  ONLINE       0     0     0
        gptid/452b171f-d48b-11e4-8585-d0509964a900  ONLINE       0     0     0
        gptid/6fabfb50-e504-11e7-9e30-d0509964a900  ONLINE       0     0     0
        gptid/9efa1757-17d8-11e9-b47b-d0509964a900  ONLINE       0     0     0
    logs
      gptid/dbc0d154-d84f-11e4-95f2-d0509964a900    ONLINE       0     0     0

errors: No known data errors
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I suppose you could try exporting it, then do an import with the -F option. The capital F attempts to recover the pool by discarding the last few transactions. This is a kind of last resort action though, because the discarded transaction data is permanently lost.
https://www.freebsd.org/cgi/man.cgi?zpool(8)
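
A rough sketch of that sequence, assuming the pool name is raid_volume (the -n dry run is optional, but it lets zpool report whether the rollback would work before committing to it):

Code:
zpool export raid_volume
zpool import -F -n raid_volume   # dry run: report whether discarding the last transactions would allow import
zpool import -F raid_volume      # actually discard the last few transactions (that data is permanently lost)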
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Output of zfs list and zpool list? Everything looks fine to me so far.
 

javifo

Cadet
Joined
Feb 16, 2019
Messages
3
I think the pool is not mounted correctly. Here is more info:

Code:
zpool list
NAME           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
freenas-boot  3.75G   891M  2.88G        -         -      -    23%  1.00x  ONLINE  -
raid_volume   10.9T  3.23T  7.65T        -         -    33%    29%  1.08x  ONLINE  -


Code:
zpool status
  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors

  pool: raid_volume
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
    still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(7) for details.
  scan: scrub repaired 960K in 0 days 03:54:20 with 0 errors on Sun Jan 27 06:10:42 2019
config:

    NAME                                            STATE     READ WRITE CKSUM
    raid_volume                                     ONLINE       0     0     0
      raidz2-0                                      ONLINE       0     0     0
        gptid/44b12b57-d48b-11e4-8585-d0509964a900  ONLINE       0     0     0
        gptid/452b171f-d48b-11e4-8585-d0509964a900  ONLINE       0     0     0
        gptid/6fabfb50-e504-11e7-9e30-d0509964a900  ONLINE       0     0     0
        gptid/9efa1757-17d8-11e9-b47b-d0509964a900  ONLINE       0     0     0
    logs
      gptid/dbc0d154-d84f-11e4-95f2-d0509964a900    ONLINE       0     0     0

errors: No known data errors


Code:
mount
freenas-boot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
tmpfs on /etc (tmpfs, local)
tmpfs on /mnt (tmpfs, local)
tmpfs on /var (tmpfs, local)
fdescfs on /dev/fd (fdescfs)
tmpfs on /var/db/collectd/rrd (tmpfs, local)
raid_volume/iocage/jails/plex on /raid_volume/iocage/jails/plex (zfs, local, nfsv4acls)
raid_volume/iocage/jails/plex/root on /raid_volume/iocage/jails/plex/root (zfs, local, nfsv4acls)


In the output of mount I can see that mount points from the pool raid_volume are missing.

Code:
zfs list
NAME                                                                    USED  AVAIL  REFER  MOUNTPOINT
freenas-boot                                                            891M  2.75G    64K  none
freenas-boot/ROOT                                                       890M  2.75G    29K  none
freenas-boot/ROOT/Initial-Install                                         1K  2.75G   888M  legacy
freenas-boot/ROOT/default                                               890M  2.75G   889M  legacy
raid_volume                                                            1.70T  3.54T  1.67T  /raid_volume
raid_volume/.system                                                     117M  3.54T   971K  legacy
raid_volume/.system/configs-0d7698ec2eec4c758c2f40f0eff24be9           94.1M  3.54T  94.1M  legacy
raid_volume/.system/cores                                               750K  3.54T   750K  legacy
raid_volume/.system/rrd-0d7698ec2eec4c758c2f40f0eff24be9                140K  3.54T   140K  legacy
raid_volume/.system/samba4                                              820K  3.54T   820K  legacy
raid_volume/.system/syslog-0d7698ec2eec4c758c2f40f0eff24be9            19.8M  3.54T  19.8M  legacy
raid_volume/.system/webui                                               128K  3.54T   128K  legacy
raid_volume/iocage                                                     2.12G  3.54T  3.81M  /raid_volume/iocage
raid_volume/iocage/download                                             272M  3.54T   128K  /raid_volume/iocage/download
raid_volume/iocage/download/11.2-RELEASE                                272M  3.54T   272M  /raid_volume/iocage/download/11.2-RELEASE
raid_volume/iocage/images                                               128K  3.54T   128K  /raid_volume/iocage/images
raid_volume/iocage/jails                                                633M  3.54T   128K  /raid_volume/iocage/jails
raid_volume/iocage/jails/plex                                           633M  3.54T   314K  /raid_volume/iocage/jails/plex
raid_volume/iocage/jails/plex/root                                      632M  3.54T  1.50G  /raid_volume/iocage/jails/plex/root
raid_volume/iocage/log                                                  134K  3.54T   134K  /raid_volume/iocage/log
raid_volume/iocage/releases                                            1.23G  3.54T   128K  /raid_volume/iocage/releases
raid_volume/iocage/releases/11.2-RELEASE                               1.23G  3.54T   128K  /raid_volume/iocage/releases/11.2-RELEASE
raid_volume/iocage/releases/11.2-RELEASE/root                          1.23G  3.54T  1.23G  /raid_volume/iocage/releases/11.2-RELEASE/root
raid_volume/iocage/templates                                            128K  3.54T   128K  /raid_volume/iocage/templates
raid_volume/jails                                                      7.32G  3.54T   273K  /raid_volume/jails
raid_volume/jails/.warden-template-pluginjail--x64                      799M  3.54T   795M  /raid_volume/jails/.warden-template-pluginjail--x64
raid_volume/jails/.warden-template-pluginjail-10.3-x64                  533M  3.54T   533M  /raid_volume/jails/.warden-template-pluginjail-10.3-x64
raid_volume/jails/.warden-template-pluginjail-11.0-x64                  608M  3.54T   607M  /raid_volume/jails/.warden-template-pluginjail-11.0-x64
raid_volume/jails/.warden-template-pluginjail-11.0-x64-20180628185343   608M  3.54T   607M  /raid_volume/jails/.warden-template-pluginjail-11.0-x64-20180628185343
raid_volume/jails/.warden-template-pluginjail-11.0-x64-20180911094043   608M  3.54T   607M  /raid_volume/jails/.warden-template-pluginjail-11.0-x64-20180911094043
raid_volume/jails/.warden-template-standard--x64                       2.18G  3.54T  2.11G  /raid_volume/jails/.warden-template-standard--x64
raid_volume/jails/gitweb                                               2.06G  3.54T  3.19G  /raid_volume/jails/gitweb
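
A quick way to double-check which of those datasets actually got mounted (properties per zfs(8)):

Code:
# "mounted" shows yes/no per dataset; "mountpoint" shows where it is supposed to go
zfs get -r mounted,mountpoint raid_volume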
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Did you import the pool using the CLI? You can't do that; you need to use the GUI. Export the pool and do it the correct way.
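
Roughly, the export part is a one-liner; the import should then be done from the web UI so FreeNAS records the pool in its configuration database:

Code:
zpool export raid_volume   # releases the pool so the GUI import (under Storage in the web UI) can pick it up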
 

javifo

Cadet
Joined
Feb 16, 2019
Messages
3
Did you import the pool using the CLI? You can't do that; you need to use the GUI. Export the pool and do it the correct way.
Yes, I used the CLI. The thing is that I exported the pool and reimported it via the CLI again, as @Chris Moore suggested. I got access to the data via the CLI and now I'm doing the backup. Later I'll export it again and reimport it via the GUI.

The reason I used the CLI was that I could not import the pool via the GUI because it showed no pool at all (remember I started without the cache SSD).
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Yes, I used the CLI.
You can use the CLI for zpool and zfs actions; it is just more complicated because it doesn't handle all the details for you, but it gives you access to functionality, like -F, that is not available through the GUI.
Right now, your pool root is mounted in the wrong place; see this:
Code:
NAME                                                                    USED  AVAIL  REFER  MOUNTPOINT
raid_volume                                                            1.70T  3.54T  1.67T  /raid_volume

That isn't your fault, just a 'feature' of doing a manual import of the pool. You can, if you want to, use zfs set mountpoint
to get the pool back where it belongs, which would potentially, after a restart, have everything working the way it should.
Here is the command: zfs set mountpoint=/mnt/raid_volume raid_volume
/mnt is where all pools under FreeNAS are mounted, so the path to the pool would be /mnt/raid_volume.
The second appearance of raid_volume on that line is telling the command what pool to act on.
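
Put together, with a verification step (the zfs get line just confirms the property changed):

Code:
zfs set mountpoint=/mnt/raid_volume raid_volume   # move the pool root under /mnt, where FreeNAS mounts its pools
zfs get mountpoint raid_volume                    # confirm the new value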

Keep in mind, these are file system actions; things that are NAS configuration actions must be done in the GUI so they can be recorded in the configuration.db, or they will not survive a reboot. The line between the two can be a little blurry, so it is always best to do things from the GUI. This is a bit of an extreme case because you had a fault that damaged your pool.
 