ZFS pool not available after upgrade from 8.3.1 to 9.1 beta

Status
Not open for further replies.

vdb

Cadet
Joined
Jul 12, 2013
Messages
5
After upgrading from 8.3.1-RELEASE-p2-x64 to 9.1 beta on a USB flash device, the ZFS pool is not available. I'm working with a 4-disk setup on ZFS version 28. I copied the configuration file from 8.3.1 into 9.1. Upon reboot the database is upgraded and everything seems to work flawlessly... except that auto-import and manual zpool import from the GUI do not work, and neither does zpool import from the command line.

Any thoughts?
 
D

dlavigne

Guest
How did you upgrade? What is the output of "zpool status"?
 

vdb

Cadet
Joined
Jul 12, 2013
Messages
5
I installed 9.1 beta on a new USB device and booted from it. After booting, I loaded the config file from my 8.3.1 USB device.

zpool status and zpool import give the output below. I also tried a "restore to factory defaults", which gives the same result.

[root@freenas ~]# zpool status
no pools available
[root@freenas ~]# zpool import
pool: zfs
id: 15019904328173081452
state: UNAVAIL
status: The pool is formatted using a legacy on-disk version.
action: The pool cannot be imported due to damaged devices or data.
config:

zfs UNAVAIL insufficient replicas
raidz1-0 UNAVAIL corrupted data
ada1 ONLINE
ada3 ONLINE
ada0 ONLINE
ada2 ONLINE
 
D

dlavigne

Guest
Were the disks having any problems before the upgrade? Do you get the same error if you boot the system from the old 8.3.1 stick?

You could try a "zpool import -FX poolname" which will attempt to return the pool to an importable state.
 

vdb

Cadet
Joined
Jul 12, 2013
Messages
5
On 8.3.1 there are no problems. "zpool status" reports a healthy pool (see below). I've never had any issues with any of the disks or the pool. Could the "zpool import -FX poolname" have any negative effects on the pool, i.e. is there any risk that I could "destroy" the pool?

[root@freenas] ~# zpool status
pool: zfs
state: ONLINE
scan: scrub repaired 0 in 8h23m with 0 errors on Thu Jul 11 11:23:15 2013
config:

NAME STATE READ WRITE CKSUM
zfs ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
ada1 ONLINE 0 0 0
ada3 ONLINE 0 0 0
ada0 ONLINE 0 0 0
ada2 ONLINE 0 0 0

errors: No known data errors
 
D

dlavigne

Guest
If it works on 8.3.1, hold off and don't run that command (it is potentially destructive).

Which version is the pool running? v15 or v28?
 

vdb

Cadet
Joined
Jul 12, 2013
Messages
5
The pool version is v28. What is the baseline for testing 9.1 beta?
 
D

dlavigne

Guest
Not sure what you mean by baseline. You could try the http://sourceforge.net/projects/freenas/files/FreeNAS-9.1.0/RC1/x64/ image to see if the error persists. It's not the real RC yet, but it is the most recent build.

You may have found a bug. Please create a ticket at support.freenas.org. Include the zpool status output from the 8.3.1 system, the zpool status from the 9.1 version, and the full name of the 9.1 image you used. It wouldn't hurt to include the type of disk controller as well, if you know it, in case the issue is hardware-specific.
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi vdb,

Do you have the original 8.3.1 key? Can you boot that and see if the system comes up correctly (like nothing ever happened)?

-Will
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah, doing -f or -F (with or without -X) can be destructive, as they bypass some sanity checks in ZFS. Those should be reserved for very last-ditch efforts.

I'd go back to 8.3.1 and see what happens there. Sounds more like a bug than a coincidence that you upgraded from 8.3.1 to 9.1 beta and "happened" to start having problems now.

Post the output of:

gpart list
zpool import (this will not mount any pools, just provide some info)
camcontrol devlist
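(If it's easier, a quick sketch for collecting all three into a single file you can paste or attach; the /tmp path is just an example:)

gpart list > /tmp/freenas-debug.txt
zpool import >> /tmp/freenas-debug.txt 2>&1
camcontrol devlist >> /tmp/freenas-debug.txt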
 
I

ixdwhite

Guest
From the earlier 'zpool status' it looks like this pool dates from a very early FreeNAS where the member disks weren't using gptid labels yet. That might be causing problems if another change means the base adaX devices are no longer available.

Can you post the output of "camcontrol devlist" on FreeNAS 9.1, just so we can confirm the base devices are being detected, and if so, where? I suspect we will need "glabel list" after that, or perhaps an entire freenas-debug to save some time.
 

delphij

FreeNAS Core Team
Joined
Jan 10, 2012
Messages
37
What's the hard drive model? Looking at your 8.3 output, the drives are used directly, which is not what FreeNAS would do; FreeNAS always creates partitions and detects whether the drive is 4K.

Would you please run the following from the command line:

sysctl vfs.zfs.vdev.larger_ashift_disable=1

Then try importing the pool again?

You can make this setting permanent by adding it to the loader tunables and sysctl variables in the GUI. However, I would suggest backing up your data and recreating your zpool, because it is not properly aligned; that increases the chance of data loss (due to read-modify-write in the drive firmware) and greatly (roughly 3x to 6x) decreases performance.
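(A minimal sketch of the sequence, assuming the pool name "zfs" from the earlier output; the zdb line is only there to confirm which ashift the pool was created with:)

sysctl vfs.zfs.vdev.larger_ashift_disable=1   # relax the new ashift check for this boot
zpool import zfs                              # try the import again
zdb -C zfs | grep ashift                      # after import, confirm the pool's ashift (9 = 512-byte alignment)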
 

vdb

Cadet
Joined
Jul 12, 2013
Messages
5
The disks are Samsung EcoGreen F4 2TB disks. I do not want to perform any actions that might endanger the current zpool. I also do not have a second storage device that could function as a temporary backup for my data. All the critical data is backed up on local clients using Windows 7 offline folder sync. Throwing the existing pool overboard is simply not an option.

Can I still try the "sysctl vfs.zfs.vdev.larger_ashift_disable=1"? Any other options?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm assuming you don't have a good, thorough backup of the zpool. If that's the case, you should stop right now. People have lost their zpools for reasons that didn't make sense, after running commands that shouldn't have had negative consequences. You should probably go back to 8.3.1 and stick with that until you can do a thorough backup.

Also, both ixdwhite and I asked for some outputs, but you never posted them...
 

Waxwurx

Cadet
Joined
Mar 14, 2013
Messages
5
I had a similar issue when upgrading from 8.3.1-P2 to 9.1-RC. The upgrade was successful and I was able to restart into 9.1; my previous configuration was upgraded and appeared to work. However, none of my shares on the server were accessible. I was also unable to migrate the jail as directed in the instructions: "/root/migrate_pluginjail.sh -D" appeared to hang, and after an hour of no apparent activity I exited the terminal window and rebooted the server. The server took a very long time to restart, and when it came back I could not access the web GUI. A look at the command line showed a warning that NGINX had failed to start, and also told me that the zpool was unavailable or invalid. A second reboot failed to fix the issue, so I reinstalled 8.3.1 and imported my old configuration. After that all was well again.

Like vdb, I am running Samsung EcoGreen F4 2TB disks. The zpool was created using ZFS v15 and migrated to v28 when I upgraded to FreeNAS 8.3.0. I'm wondering whether ixdwhite's comment that "the member disks weren't using gptid labels" applies here. Is it possible to fix this in 8.3.1 before trying the upgrade again, or would people be best off not upgrading if their zpool was created with a version earlier than v28?
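(A rough way to check from the 8.3.1 side before attempting the upgrade again: if zpool status lists raw adaX devices rather than gptid/... entries, the pool predates the gptid layout.)

zpool status        # member names: adaX (old layout) vs. gptid/... (current layout)
glabel status       # lists any gptid labels and the partitions they map to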
 

papageorgi

Explorer
Joined
Jul 16, 2013
Messages
51
thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you, thank you,

I thought I had lost my data, but your trick helped me fix it. I had been looking for a fix for a week and was almost ready to recreate the pool and copy the data back (which would have taken me a month).

I went from FreeNAS 8.3.1-p2 x64 to FreeNAS 9.1.0-RC1 x64.
 

Attachments

  • freenas 9-1-0rc1 issue.txt
    8.6 KB · Views: 359
  • check in freenas 8-3-1.txt
    2.5 KB · Views: 328
  • freenas9-1-0rc1 fixed.txt
    12.9 KB · Views: 383

delphij

FreeNAS Core Team
Joined
Jan 10, 2012
Messages
37
Alright, so what happens is that you have a pool created with too small an ashift, and the new FreeBSD version has fixed the detection. For performance reasons, it's recommended to back up all data and recreate your pool, but you can work around the issue by adding the tunable via the GUI for now.

We will see what we can do to help mitigate this issue, thanks for reporting and stay tuned.
 

papageorgi

Explorer
Joined
Jul 16, 2013
Messages
51
Alright, so what happens is that you have a pool created with too small an ashift, and the new FreeBSD version has fixed the detection. For performance reasons, it's recommended to back up all data and recreate your pool, but you can work around the issue by adding the tunable via the GUI for now.

We will see what we can do to help mitigate this issue, thanks for reporting and stay tuned.

Sorry, I'm a noob; how should I tune this? I know where and how to add it in the GUI, I just don't know what to add.

Note: now that the pool is working, yes, I am having performance problems: only ~5 of 16 GB of RAM is in use, and I'm only seeing ~25 MB/s read and write over gigabit.
 

delphij

FreeNAS Core Team
Joined
Jan 10, 2012
Messages
37
Sorry, I'm a noob; how should I tune this? I know where and how to add it in the GUI, I just don't know what to add.

Note: now that the pool is working, yes, I am having performance problems: only ~5 of 16 GB of RAM is in use, and I'm only seeing ~25 MB/s read and write over gigabit.


You can set tunables from "System" -> "Tunables", shown here: http://wiki.freenas.org/index.php/Settings. Note that this also disables ZFS's ability to detect 4K drives; we are working on a possible solution to make this workaround unnecessary.

For a one-shot change, you can do it from the command line.
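(For reference, the entry would look roughly like this; the exact field labels depend on the 9.1 build, so treat this as an approximation:)

Variable: vfs.zfs.vdev.larger_ashift_disable
Value:    1
Type:     sysctl (use the Sysctls page rather than the loader Tunables page if your build separates them)

The one-shot command-line form is the sysctl line posted earlier in the thread.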
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Is the tunable vfs.zfs.vdev.larger_ashift_disable custom to FreeNAS? I searched Google and got a single result: this page. I was curious how that tunable works.
 