zpool import not working

Status
Not open for further replies.

Dirceu

Cadet
Joined
Oct 28, 2013
Messages
2
[root@nas-rj-1] /# zpool import
pool: Dados
id: 5376331656238398075
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

Dados ONLINE
gptid/0425ea53-a837-11e2-886a-002590798bde ONLINE
gptid/3ded5e35-06d5-11e3-970c-002590798bde ONLINE

cache
dsk/gptid/044513a3-a837-11e2-886a-002590798bde
logs
gptid/04591341-a837-11e2-886a-002590798bde ONLINE

[root@nas-rj-1] /# zpool import -f -R /mnt Dados
-> No response...

Why doesn't the cache partition have ONLINE status?
What else can I do to get this ZFS pool imported?
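When an import hangs like this, one way to tell whether it is still making progress is to watch disk I/O from a second shell. This is only a diagnostic sketch using standard FreeBSD tools, not a fix:

```shell
# Run from a second SSH session while the import is in progress.
# Sustained per-disk I/O usually means the import is still working.
gstat -I 1s

# Confirm the zpool import process is still alive rather than wedged
# (the [z] bracket trick keeps grep from matching itself):
ps auxww | grep '[z]pool import'
```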
 

dlavigne

Guest
Where did the pool come from? A previous FreeNAS installation? If so, what version was the pool created on and what version of FreeNAS are you trying to import into?
 

Dirceu

Cadet
Joined
Oct 28, 2013
Messages
2
The pool was created on FreeNAS 8.2.3 and I am trying to import it into 9.1.1.
The server has 24 GB of RAM, and the zpool import has already been running for 4 hours.
It still has 8 GB of RAM free.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
If it's taking 4 hours to import, something is wrong.

Are the hard drives being accessed right now? What's the output of zpool status? Did you enable deduplication or compression?
 
Joined
Oct 29, 2013
Messages
2
In the GUI we destroyed a dataset called "poprj4" and the system appeared to hang. There was no terminal, web, or SSH response; we could only ping the network interface.

So we rebooted the machine normally, and after running the zpool import command we found the following in /var/log/messages:

[root@nas-rj-1] /# zpool import -f -R /mnt Dados
...
Oct 29 19:17:13 nas-rj-1 kernel: Solaris: WARNING: Disk, '/dev/gptid/3ded5e35-06d5-11e3-970c-002590798bde', has a block alignment that is larger than the pool's alignment
Oct 29 19:17:13 nas-rj-1 kernel:
Oct 29 19:17:44 nas-rj-1 kernel: Solaris: WARNING: can't open objset for Dados/poprj4

While zpool import is running, zpool status does not return any result. Before running zpool import, zpool status showed no pools.
We are not sure whether deduplication or compression is enabled. Is there a way to check that without the datasets being mounted?
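One avenue that may help, assuming the zdb(8) flags behave as documented on this ZFS version: zdb can read the on-disk labels of a pool that is not imported. This shows pool-level configuration only; dataset properties such as dedup and compression are normally checked after import with zfs get, so treat this as a sketch:

```shell
# Print the pool configuration from the on-disk labels without importing
# (-e: operate on an exported/not-imported pool, -C: show configuration).
zdb -e -C Dados

# Alternatively, dump the ZFS label of a single member device:
zdb -l /dev/gptid/0425ea53-a837-11e2-886a-002590798bde

# After a successful import, dedup/compression can be checked with:
#   zfs get -r dedup,compression Dados
```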
Code:
#gpart list
 
 
Geom name: mfid0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 19521474526
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: mfid0p1
  Mediasize: 2147483648 (2.0G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 65536
  Mode: r1w1e1
  rawuuid: 0420da71-a837-11e2-886a-002590798bde
  rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 2147483648
  offset: 65536
  type: freebsd-swap
  index: 1
  end: 4194431
  start: 128
 
 
2. Name: mfid0p2
  Mediasize: 9992847408640 (9.1T)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 2147549184
  Mode: r1w1e2
  rawuuid: 0425ea53-a837-11e2-886a-002590798bde
  rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 9992847408640
  offset: 2147549184
  type: freebsd-zfs
  index: 2
  end: 19521474526
  start: 4194432
Consumers:
1. Name: mfid0
  Mediasize: 9994994974720 (9.1T)
  Sectorsize: 512
  Mode: r2w2e5
 
 
Geom name: mfid1
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 46851538910
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: mfid1p1
  Mediasize: 2147483648 (2.0G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 65536
  Mode: r1w1e1
  rawuuid: 3de94fdd-06d5-11e3-970c-002590798bde
  rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 2147483648
  offset: 65536
  type: freebsd-swap
  index: 1
  end: 4194431
  start: 128
2. Name: mfid1p2
  Mediasize: 23985840373248 (21T)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 2147549184
  Mode: r1w1e2
  rawuuid: 3ded5e35-06d5-11e3-970c-002590798bde
  rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 23985840373248
  offset: 2147549184
  type: freebsd-zfs
  index: 2
  end: 46851538910
  start: 4194432
Consumers:
1. Name: mfid1
  Mediasize: 23987987939328 (21T)
  Sectorsize: 512
  Mode: r2w2e5
 
Geom name: mfid2
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 419430366
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: mfid2p1
  Mediasize: 214748282368 (200G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 65536
  Mode: r1w1e2
  rawuuid: 044513a3-a837-11e2-886a-002590798bde
  rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 214748282368
  offset: 65536
  type: freebsd-zfs
  index: 1
  end: 419430366
  start: 128
Consumers:
1. Name: mfid2
  Mediasize: 214748364800 (200G)
  Sectorsize: 512
  Mode: r1w1e3
 
 
Geom name: mfid3
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 203616222
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: mfid3p1
  Mediasize: 104251440640 (97G)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 65536
  Mode: r1w1e2
  rawuuid: 04591341-a837-11e2-886a-002590798bde
  rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 104251440640
  offset: 65536
  type: freebsd-zfs
  index: 1
  end: 203616222
  start: 128
Consumers:
1. Name: mfid3
  Mediasize: 104251523072 (97G)
  Sectorsize: 512
  Mode: r1w1e3
 
 
  Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 15633407
first: 63
entries: 4
scheme: MBR
Providers:
1. Name: da0s1
  Mediasize: 988291584 (942M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 32256
  Mode: r1w0e1
  attrib: active
  rawtype: 165
  length: 988291584
  offset: 32256
  type: freebsd
  index: 1
  end: 1930319
  start: 63
 
2. Name: da0s2
  Mediasize: 988291584 (942M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 988356096
  Mode: r0w0e0
  rawtype: 165
  length: 988291584
  offset: 988356096
  type: freebsd
  index: 2
  end: 3860639
  start: 1930383
3. Name: da0s3
  Mediasize: 1548288 (1.5M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 1976647680
  Mode: r0w0e0
  rawtype: 165
  length: 1548288
  offset: 1976647680
  type: freebsd
  index: 3
  end: 3863663
  start: 3860640
 
4. Name: da0s4
  Mediasize: 21159936 (20M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 1978195968
  Mode: r1w1e2
  rawtype: 165
  length: 21159936
  offset: 1978195968
  type: freebsd
  index: 4
  end: 3904991
  start: 3863664
Consumers:
1. Name: da0
  Mediasize: 8004304896 (7.5G)
  Sectorsize: 512
  Mode: r2w1e4
 
Geom name: da0s1
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 1930256
first: 0
entries: 8
scheme: BSD
Providers:
 
1. Name: da0s1a
  Mediasize: 988283392 (942M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 40448
  Mode: r1w0e1
  rawtype: 0
  length: 988283392
  offset: 8192
  type: !0
  index: 1
  end: 1930256
  start: 16
Consumers:
1. Name: da0s1
  Mediasize: 988291584 (942M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 32256
  Mode: r1w0e1
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Few things:

First, I asked that the output be in CODE so that the formatting is saved. That is hard to read and I'm not going to try to organize that in my head. Please repost it within CODE.

Second, if you enabled dedup you'll need lots of RAM. The warnings are in the manual and there's no way to guess how much RAM you need. There is no upper limit to how much RAM you need. And if you don't have enough RAM, you can't access the pool until you do have enough. I'm not sure if you get an error message, warning, or kernel panic if you don't have enough RAM. I'm not aware of any way to check if dedup is enabled without the pool already being mounted. I'll look around a little though.

Third, I see a device that is 21 TB and formatted as ZFS. I'd wager that you didn't buy a 21 TB hard drive, so you put ZFS on top of hardware RAID. That's a big no-no. It's covered in the manual that doing this can cost you your pool.

Fourth, please tell me you have backups. There's a decent chance you're going to need them.

Fifth, I think you need to slow down and post all of your hardware and how you have it configured, especially since you appear to have mixed ZFS with hardware RAID.
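To put the dedup RAM concern in rough numbers: a commonly cited (and very approximate) rule of thumb, not stated in this thread, is about 5 GB of RAM per TB of deduplicated data for the dedup table:

```shell
# Back-of-envelope only: ~5 GB RAM per TB of deduped data (rough rule of thumb).
# The two freebsd-zfs data partitions in the gpart output total roughly 30 TB.
echo $((30 * 5))   # ~150 GB estimated DDT RAM, far beyond this server's 24 GB
```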
 
Joined
Oct 29, 2013
Messages
2
Is it possible to import the pool without mounting any filesystems, and then decide which datasets to mount later? I saw that the -N option does that, but I'm not sure about the correct parameters.
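For what it's worth, later ZFS implementations do document a -N flag on zpool import that imports a pool without mounting any of its filesystems; whether the ZFS version in FreeNAS 9.1.1 supports it would need checking, so this is only a sketch:

```shell
# -N: import the pool but do not mount any datasets (flag availability
# depends on the ZFS version); -f forces, -R sets an alternate root.
zpool import -f -N -R /mnt Dados

# Then mount datasets selectively; "somedataset" is a hypothetical name:
zfs mount Dados
zfs mount Dados/somedataset
```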
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Is it possible to import the pool without mounting any filesystems, and then decide which datasets to mount later? I saw that the -N option does that, but I'm not sure about the correct parameters.

Not that I know of. Normally, if a dataset is corrupted it just won't mount. You could make some expert-level modifications to your disks and to ZFS to prevent a dataset from mounting if you really had to, but that's far beyond the scope of a forum post.
 