Can't import zfs pool (9.2 to 8.3), did not run zpool upgrade.

Status
Not open for further replies.

kogspg

Cadet
Joined
May 20, 2014
Messages
3
I have 4x 2TB drives in a RAID-Z1 plus a single 1.5TB ZFS disk, running on an old dual-core Atom maxed out at 4GB of RAM. I'll admit I haven't been RTFM-ing. I upgraded to 9.2.1.5 and it was a world of hurt; I didn't see that 8GB requirement in the release notes, but it's all over the forums. So I nuked 9.2 and installed a fresh 8.3.2, but now I can't import my ZFS pool, and the error it reports is that the pool is a newer version. I remembered the warnings about upgrading ZFS from v15 to v28 with 8.3.0, so I was super careful NOT to run "zpool upgrade" while I had 9.2 installed. The 1.5TB disk imported fine, so I'm puzzled as to why it thinks the RAID-Z1 pool is a newer version and won't import it. Is there no hope other than to install 9.2, copy the data somewhere, reinstall 8.3, and copy the data back?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Can you post the output of "zpool import" from the CLI in 8.3.2?
 

kogspg

Cadet
Joined
May 20, 2014
Messages
3
Code:
  pool: mydata
    id: 10029294955684581498
  state: UNAVAIL
status: The pool is formatted using an incompatible version.
action: The pool cannot be imported.  Access the pool on a system running newer
        software, or recreate the pool from backup.
  see: http://www.sun.com/msg/ZFS-8000-A5
config:
 
       mydata                                         UNAVAIL  newer version
          raidz1-0                                      ONLINE
            gptid/b8e21439-a394-11e0-abe2-7071bc08b0f2  ONLINE
            gptid/ba31afa3-a394-11e0-abe2-7071bc08b0f2  ONLINE
            gptid/bb46e293-a394-11e0-abe2-7071bc08b0f2  ONLINE
            gptid/bc5edfd5-a394-11e0-abe2-7071bc08b0f2  ONLINE
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Please post the output of one of these commands here:
  • zdb -l /dev/gptid/b8e21439-a394-11e0-abe2-7071bc08b0f2
  • zdb -l /dev/gptid/ba31afa3-a394-11e0-abe2-7071bc08b0f2
  • zdb -l /dev/gptid/bb46e293-a394-11e0-abe2-7071bc08b0f2
  • zdb -l /dev/gptid/bc5edfd5-a394-11e0-abe2-7071bc08b0f2
They should give identical output, so posting only the content of the first label would be enough - I think you are going to see 8 labels.

The information will not give you access to your data using FreeNAS 8.x, but it will provide some insight into what could have possibly happened.
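If the full labels are long, the only field that really matters here is `version:`. A minimal sketch of checking it (shown against an inlined sample label rather than a live disk; on your box you would capture the real output with `zdb -l /dev/gptid/<your-gptid>` instead):

```shell
# Decide from a "zdb -l" label which FreeNAS line can import the pool.
# Pool versions <= 28 import on FreeNAS 8.x; version 5000 means the pool
# uses feature flags and needs 9.x or newer.
# The label below is a trimmed sample, not real zdb output.
label='version: 5000
name: mydata
state: 0'

# Pull the number after "version:" from the first label.
version=$(printf '%s\n' "$label" | awk '/version:/ {print $2; exit}')

if [ "$version" -le 28 ] 2>/dev/null; then
    echo "pool version $version: importable on FreeNAS 8.x"
else
    echo "pool version $version: feature flags in use, needs FreeNAS 9.x"
fi
```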
 

kogspg

Cadet
Joined
May 20, 2014
Messages
3
Code:
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 5000
    name: 'mydata'
    state: 0
    txg: 10138002
    pool_guid: 10029294955684581498
    hostid: 2643501243
    hostname: ''
    top_guid: 13297155307080904711
    guid: 9252424555940032977
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 13297155307080904711
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 31
        ashift: 12
        asize: 7992986566656
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 12951820897006806168
            path: '/dev/gptid/b8e21439-a394-11e0-abe2-7071bc08b0f2'
            phys_path: '/dev/gptid/b8e21439-a394-11e0-abe2-7071bc08b0f2'
            whole_disk: 0
            DTL: 4201
        children[1]:
            type: 'disk'
            id: 1
            guid: 9252424555940032977
            path: '/dev/gptid/ba31afa3-a394-11e0-abe2-7071bc08b0f2'
            phys_path: '/dev/gptid/ba31afa3-a394-11e0-abe2-7071bc08b0f2'
            whole_disk: 0
            DTL: 4200
        children[2]:
            type: 'disk'
            id: 2
            guid: 18244984622068638617
            path: '/dev/gptid/bb46e293-a394-11e0-abe2-7071bc08b0f2'
            phys_path: '/dev/gptid/bb46e293-a394-11e0-abe2-7071bc08b0f2'
            whole_disk: 0
            DTL: 4199
        children[3]:
            type: 'disk'
            id: 3
            guid: 299901580361771943
            path: '/dev/gptid/bc5edfd5-a394-11e0-abe2-7071bc08b0f2'
            phys_path: '/dev/gptid/bc5edfd5-a394-11e0-abe2-7071bc08b0f2'
            whole_disk: 0
            DTL: 4195
    features_for_read:


That "version: 5000" doesn't bode well for me.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah... you are stuck with 9.x. Now the question is "what version of 9.x?", because feature flags have changed with each release. If you've only ever used 9.2.1.5 then clearly you are forced to use 9.2.1.x, unless you did the zpool upgrade without enabling the flags (pretty unlikely).

There are two things that concern me about this:

1. You claim to be aware of the reasons not to upgrade to zpool version 5000, yet you are clearly on v5000.
2. You claim not to have performed an upgrade to v5000, yet you are clearly on v5000.

So here's where I can just throw my thoughts on the forums and you can do what you want:

1. You clearly have no option to go back to 8.3.2. You'll need to use a version of FreeNAS that supports the feature flags your pool is using.
2. If you go back to whatever version will mount the pool, the command "zpool history" will show a date/time stamp for exactly when it was upgraded. If you boot from an install CD and exit the installer, you should be able to do a "zpool import"; if the pool is compatible, it will say so. Now here's the catch: if you use a 9.2.1.5 CD, you've only ever used 9.2.1.5, and it tells you that you should do an upgrade to enable all flags, then things are *really* fishy, because that means something other than 9.2.1.x upgraded your zpool to v5000. More than likely that would be a scenario such as #4 below, or some other version of FreeNAS being in use when the zpool upgrade was performed.
3. If you didn't do the zpool upgrade, is there another admin who may have done it?
4. Any chance you played for a while with some other ZFS-capable OS that may be responsible for upgrading the pool without your permission?
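On #2, "zpool history" logs every administrative command against the pool with a timestamp, so filtering it for "upgrade" pinpoints the moment the pool left v28. A sketch, run here against made-up sample history (the dates and truncated gptids are invented); on the real system you would pipe `zpool history mydata` instead:

```shell
# "zpool history <pool>" records every zpool/zfs admin command with a
# timestamp. Filtering for "upgrade" shows exactly when the pool was
# bumped past v28.
# Sample history inlined below; entries are fabricated for illustration.
hist='2011-07-01.12:03:11 zpool create mydata raidz1 gptid/b8e2... gptid/ba31...
2014-05-18.09:41:02 zpool upgrade mydata
2014-05-19.20:15:44 zpool scrub mydata'

printf '%s\n' "$hist" | grep upgrade
```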

I'm *really* not buying that 9.2.1.x did an upgrade for you. I'm not aware of anyone claiming it was automatic, and it would be extremely irresponsible for FreeNAS to do that. I'm pretty sure I've used v28 pools on 9.2.1.5 and it didn't upgrade them.

Anyway, you have your work cut out for you. I think it's important to mention that even on 8.3.2 the RAM minimum was 6GB, so you are clearly below the limit no matter which OS you use.
 