ZFS pool not available after upgrade from 8.3.1 to 9.1 beta

Status
Not open for further replies.

delphij

FreeNAS Core Team
Joined
Jan 10, 2012
Messages
37
Is the tunable vfs.zfs.vdev.larger_ashift_disable custom to FreeNAS? I searched Google and got a single result: this page. I'm curious how that tunable works.


Yes, it's a FreeNAS-specific change and MAY be removed in the future.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
What does it do?
 

delphij

FreeNAS Core Team
Joined
Jan 10, 2012
Messages
37
Note: the problem is fixed in commits 3549074fc582d2bb09ded6304174a2104da16fe8 and 50a82fba31979ddf14f234dbea2fa8e5573a2e58, so the next RC will not need this workaround to import existing pools.
 

papageorgi

Explorer
Joined
Jul 16, 2013
Messages
51
Oh boy, I really hope you guys can help me work some magic; 3 of my 7 TB is not backed up off the server. I received all-good status emails from FreeNAS 9.1.0-RC1 last night, then decided to restart the computer. On startup it does not see my pool at all: the pool is listed as an option, but shows an error for any information about it. I detached it and tried an auto-import and a shell import, and it is not seen. I was not having any issues at all before powering down. I have upgraded the pool from ZFS v28 to feature flags (v5000), so I can no longer use the 8.3.1 USB stick to import the pool.


FreeNAS 9.1.0 RC1 x64 (e3440c6)
Code:
[smurfy@nas] /# zpool status
no pools available
[smurfy@nas] /# zpool import -FX stor
cannot import 'stor': invalid vdev configuration


Some requested outputs:
[smurfy@nas] /# gpart list
Code:
Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7814037134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da0p1
  Mediasize: 2147483648 (2.0G)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r1w1e1
  rawuuid: 7fe5333d-edb7-11e2-b265-00248c450018
  rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 2147483648
  offset: 65536
  type: freebsd-swap
  index: 1
  end: 4194431
  start: 128
2. Name: da0p2
  Mediasize: 3998639463936 (3.7T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r0w0e0
  rawuuid: 7ff753f5-edb7-11e2-b265-00248c450018
  rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 3998639463936
  offset: 2147549184
  type: freebsd-zfs
  index: 2
  end: 7814037134
  start: 4194432
Consumers:
1. Name: da0
  Mediasize: 4000787030016 (3.7T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r1w1e2
 
Geom name: da1
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7814037134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da1p1
  Mediasize: 2147483648 (2.0G)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r1w1e1
  rawuuid: 8048e1ef-edb7-11e2-b265-00248c450018
  rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 2147483648
  offset: 65536
  type: freebsd-swap
  index: 1
  end: 4194431
  start: 128
2. Name: da1p2
  Mediasize: 3998639463936 (3.7T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r0w0e0
  rawuuid: 805c1416-edb7-11e2-b265-00248c450018
  rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 3998639463936
  offset: 2147549184
  type: freebsd-zfs
  index: 2
  end: 7814037134
  start: 4194432
Consumers:
1. Name: da1
  Mediasize: 4000787030016 (3.7T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r1w1e2
 
Geom name: da2
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7814037134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da2p1
  Mediasize: 2147483648 (2.0G)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r1w1e1
  rawuuid: 80af35f3-edb7-11e2-b265-00248c450018
  rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 2147483648
  offset: 65536
  type: freebsd-swap
  index: 1
  end: 4194431
  start: 128
2. Name: da2p2
  Mediasize: 3998639463936 (3.7T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r0w0e0
  rawuuid: 80c4b729-edb7-11e2-b265-00248c450018
  rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 3998639463936
  offset: 2147549184
  type: freebsd-zfs
  index: 2
  end: 7814037134
  start: 4194432
Consumers:
1. Name: da2
  Mediasize: 4000787030016 (3.7T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r1w1e2
 
Geom name: da3
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7814037134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da3p1
  Mediasize: 2147483648 (2.0G)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r1w1e1
  rawuuid: 811932c4-edb7-11e2-b265-00248c450018
  rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 2147483648
  offset: 65536
  type: freebsd-swap
  index: 1
  end: 4194431
  start: 128
2. Name: da3p2
  Mediasize: 3998639463936 (3.7T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r0w0e0
  rawuuid: 812e5515-edb7-11e2-b265-00248c450018
  rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 3998639463936
  offset: 2147549184
  type: freebsd-zfs
  index: 2
  end: 7814037134
  start: 4194432
Consumers:
1. Name: da3
  Mediasize: 4000787030016 (3.7T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r1w1e2
 
Geom name: da4
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7814037134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da4p1
  Mediasize: 2147483648 (2.0G)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r1w1e1
  rawuuid: 81845e58-edb7-11e2-b265-00248c450018
  rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 2147483648
  offset: 65536
  type: freebsd-swap
  index: 1
  end: 4194431
  start: 128
2. Name: da4p2
  Mediasize: 3998639463936 (3.7T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r0w0e0
  rawuuid: 8197afa9-edb7-11e2-b265-00248c450018
  rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 3998639463936
  offset: 2147549184
  type: freebsd-zfs
  index: 2
  end: 7814037134
  start: 4194432
Consumers:
1. Name: da4
  Mediasize: 4000787030016 (3.7T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r1w1e2
 
Geom name: da5
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 31326207
first: 63
entries: 4
scheme: MBR
Providers:
1. Name: da5s1
  Mediasize: 1838301696 (1.7G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 32256
  Mode: r0w0e0
  rawtype: 165
  length: 1838301696
  offset: 32256
  type: freebsd
  index: 1
  end: 3590495
  start: 63
2. Name: da5s2
  Mediasize: 1838301696 (1.7G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 1838366208
  Mode: r1w0e1
  attrib: active
  rawtype: 165
  length: 1838301696
  offset: 1838366208
  type: freebsd
  index: 2
  end: 7180991
  start: 3590559
3. Name: da5s3
  Mediasize: 1548288 (1.5M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 3676667904
  Mode: r0w0e0
  rawtype: 165
  length: 1548288
  offset: 3676667904
  type: freebsd
  index: 3
  end: 7184015
  start: 7180992
4. Name: da5s4
  Mediasize: 21159936 (20M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 3678216192
  Mode: r1w1e2
  rawtype: 165
  length: 21159936
  offset: 3678216192
  type: freebsd
  index: 4
  end: 7225343
  start: 7184016
Consumers:
1. Name: da5
  Mediasize: 16039018496 (15G)
  Sectorsize: 512
  Mode: r2w1e4
 
Geom name: ada0
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 1953525134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada0p1
  Mediasize: 2147483648 (2.0G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 65536
  Mode: r1w1e1
  rawuuid: 824946e6-6aab-11e2-9f2f-00248c450018
  rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 2147483648
  offset: 65536
  type: freebsd-swap
  index: 1
  end: 4194431
  start: 128
2. Name: ada0p2
  Mediasize: 998057319936 (929G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 2147549184
  Mode: r0w0e0
  rawuuid: 8255a13f-6aab-11e2-9f2f-00248c450018
  rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 998057319936
  offset: 2147549184
  type: freebsd-zfs
  index: 2
  end: 1953525134
  start: 4194432
Consumers:
1. Name: ada0
  Mediasize: 1000204886016 (931G)
  Sectorsize: 512
  Mode: r1w1e2
 
Geom name: ada1
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 1953525134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada1p1
  Mediasize: 2147483648 (2.0G)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r1w1e1
  rawuuid: 83eb7239-6aab-11e2-9f2f-00248c450018
  rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 2147483648
  offset: 65536
  type: freebsd-swap
  index: 1
  end: 4194431
  start: 128
2. Name: ada1p2
  Mediasize: 998057316352 (929G)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r0w0e0
  rawuuid: 8402337d-6aab-11e2-9f2f-00248c450018
  rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 998057316352
  offset: 2147549184
  type: freebsd-zfs
  index: 2
  end: 1953525127
  start: 4194432
Consumers:
1. Name: ada1
  Mediasize: 1000204886016 (931G)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r1w1e2
 
Geom name: ada2
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 1953525134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada2p1
  Mediasize: 2147483648 (2.0G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 65536
  Mode: r1w1e1
  rawuuid: 81eabbfe-6aab-11e2-9f2f-00248c450018
  rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 2147483648
  offset: 65536
  type: freebsd-swap
  index: 1
  end: 4194431
  start: 128
2. Name: ada2p2
  Mediasize: 998057319936 (929G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 2147549184
  Mode: r0w0e0
  rawuuid: 81f6b150-6aab-11e2-9f2f-00248c450018
  rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 998057319936
  offset: 2147549184
  type: freebsd-zfs
  index: 2
  end: 1953525134
  start: 4194432
Consumers:
1. Name: ada2
  Mediasize: 1000204886016 (931G)
  Sectorsize: 512
  Mode: r1w1e2
 
Geom name: ada3
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 1953525134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada3p1
  Mediasize: 2147483648 (2.0G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 65536
  Mode: r1w1e1
  rawuuid: 81873f0e-6aab-11e2-9f2f-00248c450018
  rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 2147483648
  offset: 65536
  type: freebsd-swap
  index: 1
  end: 4194431
  start: 128
2. Name: ada3p2
  Mediasize: 998057319936 (929G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 2147549184
  Mode: r0w0e0
  rawuuid: 8193e730-6aab-11e2-9f2f-00248c450018
  rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 998057319936
  offset: 2147549184
  type: freebsd-zfs
  index: 2
  end: 1953525134
  start: 4194432
Consumers:
1. Name: ada3
  Mediasize: 1000204886016 (931G)
  Sectorsize: 512
  Mode: r1w1e2
 
Geom name: ada4
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 1953525134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada4p1
  Mediasize: 2147483648 (2.0G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 65536
  Mode: r1w1e1
  rawuuid: 84642d3e-6aab-11e2-9f2f-00248c450018
  rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 2147483648
  offset: 65536
  type: freebsd-swap
  index: 1
  end: 4194431
  start: 128
2. Name: ada4p2
  Mediasize: 998057319936 (929G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 2147549184
  Mode: r0w0e0
  rawuuid: 8470dafe-6aab-11e2-9f2f-00248c450018
  rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 998057319936
  offset: 2147549184
  type: freebsd-zfs
  index: 2
  end: 1953525134
  start: 4194432
Consumers:
1. Name: ada4
  Mediasize: 1000204886016 (931G)
  Sectorsize: 512
  Mode: r1w1e2
 
Geom name: da5s1
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 3590432
first: 0
entries: 8
scheme: BSD
Providers:
1. Name: da5s1a
  Mediasize: 1838293504 (1.7G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 40448
  Mode: r0w0e0
  rawtype: 0
  length: 1838293504
  offset: 8192
  type: !0
  index: 1
  end: 3590432
  start: 16
Consumers:
1. Name: da5s1
  Mediasize: 1838301696 (1.7G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 32256
  Mode: r0w0e0
 
Geom name: da5s2
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 3590432
first: 0
entries: 8
scheme: BSD
Providers:
1. Name: da5s2a
  Mediasize: 1838293504 (1.7G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 1838374400
  Mode: r1w0e1
  rawtype: 0
  length: 1838293504
  offset: 8192
  type: !0
  index: 1
  end: 3590432
  start: 16
Consumers:
1. Name: da5s2
  Mediasize: 1838301696 (1.7G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 1838366208
  Mode: r1w0e1


[smurfy@nas] /# zpool import
Code:
Assertion failed: nvlist_lookup_nvlist(nvl, name, &rv) == 0 (0x2 == 0x0), file /tank/home/alfred/fn/9.1/FreeBSD/src/cddl/lib/libnvpair/../../../sys/cddl/contrib/opensolaris/common/nvpair/fnvpair.c, line 414.
Abort


[smurfy@nas] /# camcontrol devlist
Code:
<ATA ST4000VN000-1H41 SC42>        at scbus0 target 2 lun 0 (pass0,da0)
<ATA ST4000VN000-1H41 SC42>        at scbus0 target 3 lun 0 (pass1,da1)
<ATA ST4000VN000-1H41 SC42>        at scbus0 target 4 lun 0 (pass2,da2)
<ATA ST4000VN000-1H41 SC42>        at scbus0 target 5 lun 0 (pass3,da3)
<ATA ST4000VN000-1H41 SC42>        at scbus0 target 6 lun 0 (pass4,da4)
<ST31000528AS CC3E>                at scbus1 target 0 lun 0 (ada0,pass5)
<ST1000DM003-9YN162 CC82>          at scbus2 target 0 lun 0 (ada1,pass6)
<ST31000528AS CC3E>                at scbus3 target 0 lun 0 (ada2,pass7)
<ST31000333AS SD15>                at scbus4 target 0 lun 0 (ada3,pass8)
<ST31000528AS CC3E>                at scbus5 target 0 lun 0 (ada4,pass9)
<Lexar JumpDrive 1100>            at scbus8 target 0 lun 0 (pass10,da5)


[smurfy@nas] /# glabel list
Code:
Geom name: da0p2
Providers:
1. Name: gptid/7ff753f5-edb7-11e2-b265-00248c450018
  Mediasize: 3998639463936 (3.7T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r0w0e0
  secoffset: 0
  offset: 0
  seclength: 7809842703
  length: 3998639463936
  index: 0
Consumers:
1. Name: da0p2
  Mediasize: 3998639463936 (3.7T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r0w0e0
 
Geom name: da1p2
Providers:
1. Name: gptid/805c1416-edb7-11e2-b265-00248c450018
  Mediasize: 3998639463936 (3.7T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r0w0e0
  secoffset: 0
  offset: 0
  seclength: 7809842703
  length: 3998639463936
  index: 0
Consumers:
1. Name: da1p2
  Mediasize: 3998639463936 (3.7T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r0w0e0
 
Geom name: da2p2
Providers:
1. Name: gptid/80c4b729-edb7-11e2-b265-00248c450018
  Mediasize: 3998639463936 (3.7T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r0w0e0
  secoffset: 0
  offset: 0
  seclength: 7809842703
  length: 3998639463936
  index: 0
Consumers:
1. Name: da2p2
  Mediasize: 3998639463936 (3.7T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r0w0e0
 
Geom name: da3p2
Providers:
1. Name: gptid/812e5515-edb7-11e2-b265-00248c450018
  Mediasize: 3998639463936 (3.7T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r0w0e0
  secoffset: 0
  offset: 0
  seclength: 7809842703
  length: 3998639463936
  index: 0
Consumers:
1. Name: da3p2
  Mediasize: 3998639463936 (3.7T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r0w0e0
 
Geom name: da4p2
Providers:
1. Name: gptid/8197afa9-edb7-11e2-b265-00248c450018
  Mediasize: 3998639463936 (3.7T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r0w0e0
  secoffset: 0
  offset: 0
  seclength: 7809842703
  length: 3998639463936
  index: 0
Consumers:
1. Name: da4p2
  Mediasize: 3998639463936 (3.7T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r0w0e0
 
Geom name: da5s3
Providers:
1. Name: ufs/FreeNASs3
  Mediasize: 1548288 (1.5M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 3676667904
  Mode: r0w0e0
  secoffset: 0
  offset: 0
  seclength: 3024
  length: 1548288
  index: 0
Consumers:
1. Name: da5s3
  Mediasize: 1548288 (1.5M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 3676667904
  Mode: r0w0e0
 
Geom name: da5s4
Providers:
1. Name: ufs/FreeNASs4
  Mediasize: 21159936 (20M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 3678216192
  Mode: r1w1e1
  secoffset: 0
  offset: 0
  seclength: 41328
  length: 21159936
  index: 0
Consumers:
1. Name: da5s4
  Mediasize: 21159936 (20M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 3678216192
  Mode: r1w1e2
 
Geom name: ada0p2
Providers:
1. Name: gptid/8255a13f-6aab-11e2-9f2f-00248c450018
  Mediasize: 998057319936 (929G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 2147549184
  Mode: r0w0e0
  secoffset: 0
  offset: 0
  seclength: 1949330703
  length: 998057319936
  index: 0
Consumers:
1. Name: ada0p2
  Mediasize: 998057319936 (929G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 2147549184
  Mode: r0w0e0
 
Geom name: ada1p2
Providers:
1. Name: gptid/8402337d-6aab-11e2-9f2f-00248c450018
  Mediasize: 998057316352 (929G)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r0w0e0
  secoffset: 0
  offset: 0
  seclength: 1949330696
  length: 998057316352
  index: 0
Consumers:
1. Name: ada1p2
  Mediasize: 998057316352 (929G)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r0w0e0
 
Geom name: ada2p2
Providers:
1. Name: gptid/81f6b150-6aab-11e2-9f2f-00248c450018
  Mediasize: 998057319936 (929G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 2147549184
  Mode: r0w0e0
  secoffset: 0
  offset: 0
  seclength: 1949330703
  length: 998057319936
  index: 0
Consumers:
1. Name: ada2p2
  Mediasize: 998057319936 (929G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 2147549184
  Mode: r0w0e0
 
Geom name: ada3p2
Providers:
1. Name: gptid/8193e730-6aab-11e2-9f2f-00248c450018
  Mediasize: 998057319936 (929G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 2147549184
  Mode: r0w0e0
  secoffset: 0
  offset: 0
  seclength: 1949330703
  length: 998057319936
  index: 0
Consumers:
1. Name: ada3p2
  Mediasize: 998057319936 (929G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 2147549184
  Mode: r0w0e0
 
Geom name: ada4p2
Providers:
1. Name: gptid/8470dafe-6aab-11e2-9f2f-00248c450018
  Mediasize: 998057319936 (929G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 2147549184
  Mode: r0w0e0
  secoffset: 0
  offset: 0
  seclength: 1949330703
  length: 998057319936
  index: 0
Consumers:
1. Name: ada4p2
  Mediasize: 998057319936 (929G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 2147549184
  Mode: r0w0e0
 
Geom name: da5s1a
Providers:
1. Name: ufsid/51dbbe14ea6d2a96
  Mediasize: 1838293504 (1.7G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 40448
  Mode: r0w0e0
  secoffset: 0
  offset: 0
  seclength: 3590417
  length: 1838293504
  index: 0
Consumers:
1. Name: da5s1a
  Mediasize: 1838293504 (1.7G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 40448
  Mode: r0w0e0
 
Geom name: da5s1a
Providers:
1. Name: ufs/FreeNASs1a
  Mediasize: 1838293504 (1.7G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 40448
  Mode: r0w0e0
  secoffset: 0
  offset: 0
  seclength: 3590417
  length: 1838293504
  index: 0
Consumers:
1. Name: da5s1a
  Mediasize: 1838293504 (1.7G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 40448
  Mode: r0w0e0
 
Geom name: da5s2a
Providers:
1. Name: ufs/FreeNASs2a
  Mediasize: 1838293504 (1.7G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 1838374400
  Mode: r1w0e0
  secoffset: 0
  offset: 0
  seclength: 3590417
  length: 1838293504
  index: 0
Consumers:
1. Name: da5s2a
  Mediasize: 1838293504 (1.7G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 1838374400
  Mode: r1w0e1


I saw this message before when I was having trouble importing the pool (which worked fine in 8.3.1). Setting the sysctl vfs.zfs.vdev.larger_ashift_disable=1 (disabling the larger-ashift check) enabled me to import. After I was able to import, I set it back to 0, with no issue until the reboot. I've now set it to 1 again and see:

Code:
[smurfy@nas] /# sysctl vfs.zfs.vdev.larger_ashift_disable=1
vfs.zfs.vdev.larger_ashift_disable: 0 -> 1
[smurfy@nas] /# zpool import
  pool: stor
    id: 14817132263352275435
  state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
 
stor                                            ONLINE
raidz1-0                                      ONLINE
  gptid/8193e730-6aab-11e2-9f2f-00248c450018  ONLINE
  gptid/81f6b150-6aab-11e2-9f2f-00248c450018  ONLINE
  gptid/8255a13f-6aab-11e2-9f2f-00248c450018  ONLINE
  gptid/8402337d-6aab-11e2-9f2f-00248c450018  ONLINE
  gptid/8470dafe-6aab-11e2-9f2f-00248c450018  ONLINE
raidz1-1                                      ONLINE
  gptid/7ff753f5-edb7-11e2-b265-00248c450018  ONLINE
  gptid/805c1416-edb7-11e2-b265-00248c450018  ONLINE
  gptid/80c4b729-edb7-11e2-b265-00248c450018  ONLINE
  gptid/812e5515-edb7-11e2-b265-00248c450018  ONLINE
  gptid/8197afa9-edb7-11e2-b265-00248c450018  ONLINE
[smurfy@nas] /# zpool import stor
cannot mount '/stor': failed to create mountpoint


I did see a 1.98 GB file, /mnt/stor/something.000, that I deleted; d'oh, I realize now that it may not have been user-generated. Note: auto-import fails and shows no pool available. Sorry about the long post; I just wanted to make sure I gave everyone what they asked for.

Strangely, I tried again:
Code:
[root@nas ~]# zpool import
[root@nas ~]# zpool import stor
cannot import 'stor': a pool with that name is already created/imported, and no additional pools with that name were found
 

Attachments

  • Screenshot - 07182013 - 10:28:34 AM.png
  • Screenshot - 07182013 - 10:52:28 AM.png

delphij

FreeNAS Core Team
Joined
Jan 10, 2012
Messages
37
Oh boy, I really hope you guys can help me work some magic; 3 of my 7 TB is not backed up off the server. I received all-good status emails from FreeNAS 9.1.0-RC1 last night, then decided to restart the computer. On startup it does not see my pool at all: the pool is listed as an option, but shows an error for any information about it. I detached it and tried an auto-import and a shell import, and it is not seen. I was not having any issues at all before powering down. I have upgraded the pool from ZFS v28 to feature flags (v5000), so I can no longer use the 8.3.1 USB stick to import the pool.

Don't worry! Based on what you have described, your data is safe.

What you need to do for now, if you see this and there is no new RC published yet (which we are likely to do soon), is:

Go to "System" -> "Tunables", shown here: http://wiki.freenas.org/index.php/Settings and add the tunable, then restart the system.

The system should then be able to import the pool from the GUI. Note that this is NOT required once we have a new RC, and the tunable should be reverted then.
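For reference, the same setting can be applied temporarily from the shell, as shown later in this thread; a minimal sketch (setting the sysctl to 1 disables the larger-ashift check, and the change lasts only until reboot, which is why the GUI tunable is the persistent route):

```shell
# Temporarily disable the larger-ashift check so the pool can be imported
sysctl vfs.zfs.vdev.larger_ashift_disable=1

# Confirm the current value
sysctl vfs.zfs.vdev.larger_ashift_disable
```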

====

The reason why you see:

cannot import 'stor': a pool with that name is already created/imported, and no additional pools with that name were found

was that the pool had been imported but was not properly mounted. Instead of 'zpool import stor', you could have used 'service ix-zfs start', which performs the proper mount steps; but based on what you have described, no data was harmed and you should be good to go.
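In other words, the recovery from that state is roughly the following (a sketch, assuming the pool is named stor as in this thread):

```shell
# The pool is imported but unmounted; export it first,
# then let FreeNAS's own rc script import and mount it under /mnt
zpool export stor
service ix-zfs start

# Check the result
zpool status stor
df -h /mnt/stor
```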

PS. It's still recommended that you back up all of your data and restore it after re-creating the pool, to get better performance.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm with delphij. I'm not familiar with the bug that the tunable works around, but if your zpool worked fine with 8.3.1 then your data is safe. You're just in a panic because you don't have access to it at this moment. That's fully understandable; just don't go doing crazy things and all will be fine. :)
 

papageorgi

Explorer
Joined
Jul 16, 2013
Messages
51
Thank you very much; I really appreciate the help. I really was panicking, and I made it worse. Since I couldn't import the pool, I tried finding other ways to do so: via the shell I ended up cloning two snapshots and holding two of them, then exporting the pool, then destroying it to try to re-import it. Checking in FreeNAS 8.3.1-p2 x64, it saw the pool as destroyed but said it couldn't import it because of the ZFS v5000 feature flags. Back in FreeNAS 9.1.0-RC1 x64 I see this; sorry to add another layer. I will truly not do anything else, nor will I lose hope. If and when I get this back up, the first thing I will do is make a full backup. From what I have read via the illumos link, it's not great.

I should note that the link brought up a few things, which I was reading earlier, but I don't see much on the tunables side; did I do something wrong? http://wiki.freenas.org/index.php/Settings

Other than all of this, the pool has not changed, so I'm not sure why it would say "cannot open" on the drives and therefore report insufficient replicas.
 

Attachments

  • CaptureOfTerror.JPG

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The biggest mistake people make is freaking out and touching things they shouldn't... sadly, I've lost a lot of hope. I don't have any good ideas at the moment. :/

"Insufficient replicas" means that your 5-disk RAIDZ1 doesn't have at least 4 disks in the system being properly found by ZFS. In your case, none of the 10 disks in either vdev has a valid ZFS partition. This is probably because you destroyed the zpool, but I'm not sure. Generally you only destroy a zpool because you plan to repurpose the drives or something. I've never seen someone try to restore a destroyed zpool, but it's supposed to be possible in some circumstances.
 

papageorgi

Explorer
Joined
Jul 16, 2013
Messages
51
The biggest mistake people make is freaking out and touching things they shouldn't... sadly, I've lost a lot of hope. I don't have any good ideas at the moment. :/

"Insufficient replicas" means that your 5-disk RAIDZ1 doesn't have at least 4 disks in the system being properly found by ZFS. In your case, none of the 10 disks in either vdev has a valid ZFS partition. This is probably because you destroyed the zpool, but I'm not sure. Generally you only destroy a zpool because you plan to repurpose the drives or something. I've never seen someone try to restore a destroyed zpool, but it's supposed to be possible in some circumstances.


I will fight through this; it's too important not to. As always, you and delphij have been great. I still have hope. I read that when you "destroy" a pool it does not actually do much (the command is worded too strongly), whereas "create" is much more destructive to the underlying data. Strangest of all, each disk shows as unavailable ("cannot open") here, but in 8.3.1 they're online (just with the newer v5000 ZFS). It just doesn't make sense... yet.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Did you try the tunable delphij mentioned earlier in this thread?
 

papageorgi

Explorer
Joined
Jul 16, 2013
Messages
51
I couldn't find the tunable he meant in the link, but I did try 'service ix-zfs start', and that had no effect at this point. If I hadn't read about the destroy command or thought to try it... wow.

On the bright side, I installed PC-BSD 9.1 on an external USB hard drive and it found the pool, so it seems it's just an update away from working. I wonder if 'service ix-zfs start' would work there? I'd much prefer FreeNAS... but below is exactly what I saw in FreeNAS 8.3.1 as well.

Note the commands that aren't resolving this in 9.1.0-RC1 x64: zpool clear stor, zpool import -Df stor, ... This has to be a bug, since one of FreeNAS 9.1.0's new features is the volume manager (submitted as http://support.freenas.org/ticket/2413).

Code:
[smurfy@pcbsd-6032] ~% zpool status
no pools available
[smurfy@pcbsd-6032] ~% zpool import -D
cannot discover pools: permission denied
[smurfy@pcbsd-6032] ~% su root
Password:
[smurfy@pcbsd-6032] /usr/home/smurfy# zpool import -D
  pool: stor
    id: 14817132263352275435
  state: UNAVAIL (DESTROYED)
status: The pool is formatted using an incompatible version.
action: The pool cannot be imported.  Access the pool on a system running newer
software, or recreate the pool from backup.
  see: http://illumos.org/msg/ZFS-8000-A5
config:
 
stor                                            UNAVAIL  newer version
raidz1-0                                      ONLINE
  gptid/8193e730-6aab-11e2-9f2f-00248c450018  ONLINE
  gptid/81f6b150-6aab-11e2-9f2f-00248c450018  ONLINE
  gptid/8255a13f-6aab-11e2-9f2f-00248c450018  ONLINE
  gptid/8402337d-6aab-11e2-9f2f-00248c450018  ONLINE
  gptid/8470dafe-6aab-11e2-9f2f-00248c450018  ONLINE
raidz1-1                                      ONLINE
  gptid/7ff753f5-edb7-11e2-b265-00248c450018  ONLINE
  gptid/805c1416-edb7-11e2-b265-00248c450018  ONLINE
  gptid/80c4b729-edb7-11e2-b265-00248c450018  ONLINE
  gptid/812e5515-edb7-11e2-b265-00248c450018  ONLINE
  gptid/8197afa9-edb7-11e2-b265-00248c450018  ONLINE
 
[smurfy@pcbsd-6032] /usr/home/smurfy#
 

delphij

FreeNAS Core Team
Joined
Jan 10, 2012
Messages
37
Please do NOT use PC-BSD to import; apparently your PC-BSD version is using an older version of ZFS.

Looking at the output, you seem to have destroyed the pool? That's not a good sign, but it's still possible to recover; you need to be very careful. Please do exactly the following, on FreeNAS:

1. Go to System -> Tunables -> Add Tunable (http://wiki.freenas.org/index.php/Tunables)

2. "Variable" -> vfs.zfs.vdev.larger_ashift_disable

3. "Value" -> 1

4. Click "Ok"

Then, reboot the FreeNAS system.

5. Go to the command line and run (substituting your pool's name, stor, for tank):

zpool import -D -R /mnt tank
zpool export tank

6. Then, go to storage and try an auto import.
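Put together as a shell session, steps 5-6 amount to the following sketch (using the pool name stor from this thread in place of tank):

```shell
# With the tunable set and after a reboot:
# import the destroyed pool under the FreeNAS altroot /mnt,
# then export it so the GUI auto-import can adopt it cleanly
zpool import -D -R /mnt stor
zpool export stor

# Finish with Storage -> Auto Import Volume in the GUI
```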
 

papageorgi

Explorer
Joined
Jul 16, 2013
Messages
51
Please do NOT use PC-BSD to import; apparently your PC-BSD version is using an older version of ZFS.

Looking at the output, you seem to have destroyed the pool? That's not a good sign, but it's still possible to recover; you need to be very careful. Please do exactly the following, on FreeNAS:

1. Go to System -> Tunables -> Add Tunable (http://wiki.freenas.org/index.php/Tunables)

2. "Variable" -> vfs.zfs.vdev.larger_ashift_disable

3. "Value" -> 1

4. Click "Ok"

Then, reboot the FreeNAS system.

5. Go to the command line and run (substituting your pool's name, stor, for tank):

zpool import -D -R /mnt tank
zpool export tank

6. Then, go to storage and try an auto import.

After adding the tunable, the system booted to this: "This is a FreeNAS data disk and cannot boot system. System halted." Right now I'm re-downloading a fresh copy of FreeNAS-9.1.0-RC1-x64.img.xz; I will write it to the USB stick and start with it clean, with just that tunable, to see if the problem persists or was a conflict of some sort.

Update: strangely, I was having trouble booting from USB drives; I checked the BIOS settings and now it's good. Then I hit "GRUB loading, please wait... Error 17". Looks like I found the issue: the image was not written to the whole USB device. I had run the first command below instead of the second. Sorry, I'm still a noob.
Code:
# wrong: writes the image to partition sdb1 instead of the whole device
sudo xzcat /home/smurfy/Downloads/FreeNAS-9.1.0-RC1-x64.img.xz | dd of=/dev/sdb1 bs=64k
# correct: write to the raw device so the partition table and boot loader land where expected
sudo xzcat /home/smurfy/Downloads/FreeNAS-9.1.0-RC1-x64.img.xz | dd of=/dev/sdb bs=64k


Update 2: I've not had luck with those commands and the tunable. I'm able to boot and see all the disks, but not access them.
Code:
[smurfy@nas] /# zpool import -D -R /mnt stor
cannot import 'stor': no such pool or dataset
Destroy and re-create the pool from
a backup source.
[smurfy@nas] /# zpool import -D
   pool: stor
     id: 14817132263352275435
  state: UNAVAIL (DESTROYED)
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://illumos.org/msg/ZFS-8000-3C
 config:
 
stor                      UNAVAIL  insufficient replicas
 raidz1-0                UNAVAIL  insufficient replicas
   16752418983724484862  UNAVAIL  cannot open
   16923247324607746236  UNAVAIL  cannot open
   10063454983377925543  UNAVAIL  cannot open
   800970979980896464    UNAVAIL  cannot open
   11402190904943199729  UNAVAIL  cannot open
 raidz1-1                UNAVAIL  insufficient replicas
   13898617183391027350  UNAVAIL  cannot open
   10638658567095667509  UNAVAIL  cannot open
   10179731959774134998  UNAVAIL  cannot open
   18380244200663678529  UNAVAIL  cannot open
   14569402982510951241  UNAVAIL  cannot open
[smurfy@nas] /# smartctl -a /dev/ada0 | grep "test result: "
SMART overall-health self-assessment test result: PASSED
[smurfy@nas] /# smartctl -a /dev/ada1 | grep "test result: "
SMART overall-health self-assessment test result: PASSED
[smurfy@nas] /# smartctl -a /dev/ada2 | grep "test result: "
SMART overall-health self-assessment test result: PASSED
[smurfy@nas] /# smartctl -a /dev/ada3 | grep "test result: "
SMART overall-health self-assessment test result: PASSED
[smurfy@nas] /# smartctl -a /dev/ada4 | grep "test result: "
SMART overall-health self-assessment test result: PASSED
[smurfy@nas] /# smartctl -a /dev/da0 | grep "test result: "
SMART overall-health self-assessment test result: PASSED
[smurfy@nas] /# smartctl -a /dev/da1 | grep "test result: "
SMART overall-health self-assessment test result: PASSED
[smurfy@nas] /# smartctl -a /dev/da2 | grep "test result: "
SMART overall-health self-assessment test result: PASSED
[smurfy@nas] /# smartctl -a /dev/da3 | grep "test result: "
SMART overall-health self-assessment test result: PASSED
[smurfy@nas] /# smartctl -a /dev/da4 | grep "test result: "
SMART overall-health self-assessment test result: PASSED
 

hidden72

Dabbler
Joined
Aug 8, 2011
Messages
22
I ran into this after an upgrade to 9.1.0-RC1 x64. When the system came up, the pool didn't import, and the output of "zpool import -d /dev" was this:

Code:
[root@freenas3] /dev# zpool import -d /dev/
  pool: tank
    id: 6159916456356973050
  state: UNAVAIL
status: The pool is formatted using a legacy on-disk version.
action: The pool cannot be imported due to damaged devices or data.
config:
 
        tank                                            UNAVAIL  insufficient replicas
          raidz1-0                                      UNAVAIL  corrupted data
            gptid/85b23e4c-5c49-11e2-8a17-00505693346f  ONLINE
            gptid/864c73f2-5c49-11e2-8a17-00505693346f  ONLINE
            gptid/869b9726-5c49-11e2-8a17-00505693346f  ONLINE
            gptid/87113a77-5c49-11e2-8a17-00505693346f  ONLINE
            gptid/87cbd63d-5c49-11e2-8a17-00505693346f  ONLINE
            gptid/88322bfd-5c49-11e2-8a17-00505693346f  ONLINE
            gptid/8895659c-5c49-11e2-8a17-00505693346f  ONLINE


I added the tunable and rebooted the box. I didn't even have to manually import the pool; it automatically showed up and worked properly. Thanks.
 