Odd Mirror status after reboot

Joined: Jun 11, 2013 | Messages: 6
Hi,

I've been reading a fair number of threads about FreeNAS not playing ball after a reboot, and now I'm in a similar situation, which is disappointing. I'd been using 7.3 forever with UFS and zero problems. New kit finally arrived, so I moved to 8.3 with ZFS. I built a new zpool with 2x 3TB disks (ada0 and ada1) to give me a mirrored 3TB volume, created some datasets, and off we went. All fine.

I then extended the pool by adding 2x 2TB disks (ada2 and ada3), so I now had (I think) a 5TB volume made up of two mirrors. Again, all working fine.

After a reboot today the volume didn't import. After some reading on this forum and elsewhere I couldn't find a fix, but some CLI commands gave me something to go on. I did a zpool import -V MAINSET1, which worked, and now I get this status:

zpool status
  pool: MAINSET1
 state: UNAVAIL
status: One or more devices could not be opened. There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        MAINSET1                                        UNAVAIL      0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/12937171-d14f-11e2-bb79-28924a30613f  ONLINE       0     0     0
            gptid/1320d726-d14f-11e2-bb79-28924a30613f  ONLINE       0     0     0
          mirror-1                                      UNAVAIL      0     0     0
            10530617518446502987                        UNAVAIL      0     0     0  was /dev/ada2p2.nop
            13005353957074361637                        UNAVAIL      0     0     0  was /dev/ada3p2.nop

Maybe I am reading this wrong, but this looks like one side of my mirror is fine while the other shows errors. The disks in mirror-0 are my original 3TB disks, so this must be a concatenated mirror? Which means I am missing the 2x 2TB mirror.

The disks were visible at boot time, so it seems that maybe the addresses got jumbled? It says it attached ada2/3 (originally the 2TB drives), but I can see that it is the 3TB drives that are present.

ada0 at ahcich0 bus 0 scbus0 target 0 lun 0
ada0: <ST3000DM001-1CH166 CC26> ATA-8 SATA 3.x device
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C)
ada1 at ahcich1 bus 0 scbus1 target 0 lun 0
ada1: <ST3000DM001-1CH166 CC26> ATA-8 SATA 3.x device
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C)
ada2 at ahcich2 bus 0 scbus2 target 0 lun 0
ada2: <ST2000DL003-9VT166 CC32> ATA-8 SATA 3.x device
ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada3 at ahcich3 bus 0 scbus3 target 0 lun 0
ada3: <ST2000DL003-9VT166 CC32> ATA-8 SATA 3.x device
ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada3: Command Queueing enabled
ada3: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
da0 at umass-sim0 bus 0 scbus6 target 0 lun 0
SMP: AP CPU #1 Launched!
da0: <USB Flash Disk 1100> Removable Direct Access SCSI-4 device
da0: 40.000MB/s transfers
da0: 1912MB (3915776 512 byte sectors: 255H 63S/T 243C)
GEOM_RAID5: MAINSET1: device created (stripesize=131072).
GEOM_RAID5: MAINSET1: ada2(2): disk attached.
GEOM_RAID5: MAINSET1: ada3(0): disk attached.
GEOM: da0s1: geometry does not match label (16h,63s != 255h,63s).
Root mount waiting for: GRAID5
Root mount waiting for: GRAID5
Root mount waiting for: GRAID5
Root mount waiting for: GRAID5
GEOM_RAID5: MAINSET1: activated (forced) (need about 57MiB kmem (max)).
Trying to mount root from ufs:/dev/ufs/FreeNASs1a

GEOM_ELI: Device ada0p1.eli created.
GEOM_ELI: Encryption: AES-XTS 256
GEOM_ELI: Crypto: software
GEOM_ELI: Device ada1p1.eli created.
GEOM_ELI: Encryption: AES-XTS 256
GEOM_ELI: Crypto: software
ZFS NOTICE: Prefetch is disabled by default if less than 4GB of RAM is present;
to enable, add "vfs.zfs.prefetch_disable=0" to /boot/loader.conf.
ZFS filesystem version 5
ZFS storage pool version 28

So where do I go from here? I'm not sure why the addresses got jumbled, but any help in bringing the last bits of my volume online would be appreciated.
 
Joined: Jun 11, 2013 | Messages: 6
I popped out the 2x 2TB disks and rebooted. I'm still seeing the message "GEOM: da0s1: geometry does not match label (16h,63s != 255h,63s)." Is this the issue, maybe?
What can I do about this?
I don't believe my data has started writing to this concatenated mirror, but I also don't believe ZFS will let me remove it either.
Any pointers gratefully received.
 
Joined: Jun 11, 2013 | Messages: 6
Just realised that da0 is the USB boot device. Doh! So I'm still unsure how to bring these last two disks online.
The disks are visible:

camcontrol devlist
<ST3000DM001-1CH166 CC26> at scbus0 target 0 lun 0 (pass0,ada0)
<ST3000DM001-1CH166 CC26> at scbus1 target 0 lun 0 (pass1,ada1)
<ST2000DL003-9VT166 CC32> at scbus2 target 0 lun 0 (pass2,ada2)
<ST2000DL003-9VT166 CC32> at scbus3 target 0 lun 0 (pass3,ada3)
<ATAPI DVD D DH16D2S EH33> at scbus4 target 1 lun 0 (pass4,cd0)
<USB Flash Disk 1100> at scbus6 target 0 lun 0 (pass5,da0)

gpart doesn't seem to know about the last 2 disks:

gpart show -l
=> 34 5860533101 ada0 GPT (2.7T)
34 94 - free - (47k)
128 4194304 1 (null) (2.0G)
4194432 5856338696 2 (null) (2.7T)
5860533128 7 - free - (3.5k)

=> 34 5860533101 ada1 GPT (2.7T)
34 94 - free - (47k)
128 4194304 1 (null) (2.0G)
4194432 5856338696 2 (null) (2.7T)
5860533128 7 - free - (3.5k)

=> 63 3915713 da0 MBR (1.9G)
63 1930257 1 (null) [active] (942M)
1930320 63 - free - (31k)
1930383 1930257 2 (null) (942M)
3860640 3024 3 (null) (1.5M)
3863664 41328 4 (null) (20M)
3904992 10784 - free - (5.3M)

=> 0 1930257 da0s1 BSD (942M)
0 16 - free - (8.0k)
16 1930241 1 (null) (942M)

I don't seem to have the appropriate partition devices (*p2) anymore:

ls -l /dev/ad*
crw-r----- 1 root operator 0, 94 Jun 23 20:23 /dev/ada0
crw-r----- 1 root operator 0, 96 Jun 23 20:23 /dev/ada0p1
crw-r----- 1 root operator 0, 115 Jun 23 20:24 /dev/ada0p1.eli
crw-r----- 1 root operator 0, 97 Jun 23 20:23 /dev/ada0p2
crw-r----- 1 root operator 0, 95 Jun 23 20:23 /dev/ada1
crw-r----- 1 root operator 0, 98 Jun 23 20:23 /dev/ada1p1
crw-r----- 1 root operator 0, 104 Jun 23 20:24 /dev/ada1p1.eli
crw-r----- 1 root operator 0, 99 Jun 23 20:23 /dev/ada1p2
crw-r----- 1 root operator 0, 100 Jun 23 20:23 /dev/ada2
crw-r----- 1 root operator 0, 101 Jun 23 20:23 /dev/ada3
 
Joined: Jun 11, 2013 | Messages: 6
A zdb shows the zpool, with the concatenated mirror missing its gpart info. I don't profess to know much about this level of things, so I'd really appreciate some help.

zdb
MAINSET1:
    version: 28
    name: 'MAINSET1'
    state: 0
    txg: 106597
    pool_guid: 4988336559095372149
    hostid: 212295474
    hostname: 'EVONAS.evotech.biz'
    vdev_children: 2
    vdev_tree:
        type: 'root'
        id: 0
        guid: 4988336559095372149
        children[0]:
            type: 'mirror'
            id: 0
            guid: 3988496848765527672
            metaslab_array: 31
            metaslab_shift: 34
            ashift: 12
            asize: 2998440558592
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 13881376484801877405
                path: '/dev/gptid/12937171-d14f-11e2-bb79-28924a30613f'
                phys_path: '/dev/gptid/12937171-d14f-11e2-bb79-28924a30613f'
                whole_disk: 1
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 1600307017524357794
                path: '/dev/gptid/1320d726-d14f-11e2-bb79-28924a30613f'
                phys_path: '/dev/gptid/1320d726-d14f-11e2-bb79-28924a30613f'
                whole_disk: 1
                create_txg: 4
        children[1]:
            type: 'mirror'
            id: 1
            guid: 1326302410889248102
            metaslab_array: 105
            metaslab_shift: 34
            ashift: 12
            asize: 1998246641664
            is_log: 0
            create_txg: 106595
            children[0]:
                type: 'disk'
                id: 0
                guid: 10530617518446502987
                path: '/dev/ada2p2.nop'
                phys_path: '/dev/ada2p2.nop'
                whole_disk: 1
                create_txg: 106595
            children[1]:
                type: 'disk'
                id: 1
                guid: 13005353957074361637
                path: '/dev/ada3p2.nop'
                phys_path: '/dev/ada3p2.nop'
                whole_disk: 1
                create_txg: 106595
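
For anyone digging further: once whatever is holding the disks lets go and the ada2p2/ada3p2 partitions reappear, the ZFS vdev labels on them can be checked directly against the GUIDs above. A minimal sketch, assuming the device names from this thread:

Code:
# Dump the ZFS vdev labels and compare the guid fields with the zdb output above.
zdb -l /dev/ada2p2
zdb -l /dev/ada3p2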
 

paleoN (Wizard) | Joined: Apr 22, 2012 | Messages: 1,403
Code:
GEOM_RAID5: MAINSET1: device created (stripesize=131072).
GEOM_RAID5: MAINSET1: ada2(2): disk attached.
GEOM_RAID5: MAINSET1: ada3(0): disk attached.
GEOM: da0s1: geometry does not match label (16h,63s != 255h,63s).
Root mount waiting for: GRAID5
Root mount waiting for: GRAID5
Root mount waiting for: GRAID5
Root mount waiting for: GRAID5
GEOM_RAID5: MAINSET1: activated (forced) (need about 57MiB kmem (max)).
Trying to mount root from ufs:/dev/ufs/FreeNASs1a

First, use damn [code][/code] tags so the post is actually readable. I almost skipped it.

Second, this is your problem. There is old GRAID5 metadata on the disks. To get the pool imported: at the boot menu choose option 6, "Escape to loader prompt", and unload the GRAID5 module, /boot/modules/geom_raid5.ko (I think). Once you boot without GRAID5 loaded you should be able to see the pool. At that point I would back it up, as you will want to wipe the GRAID metadata from the drives, one at a time, so you don't have to deal with this on every reboot.
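
A quick sanity check after booting that way, as a minimal sketch (on FreeNAS the import itself is normally done from the GUI's auto-import rather than at the CLI):

Code:
# Confirm geom_raid5 is no longer loaded, then check that both mirrors show up.
kldstat | grep geom_raid5   # should print nothing once the module stays unloaded
zpool import                # list importable pools; mirror-1 should now show ONLINE
zpool import MAINSET1
zpool status MAINSET1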
 
Joined: Jun 11, 2013 | Messages: 6
Apologies for the poorly formatted post. I don't post often, so sorry, and note to self made. And thank you so much for not skipping it.
Secondly, yes, 99% of these things are user error. FreeNAS 7 has been so stable, and I've been using it for so long, that the glaring GEOM_RAID5 message washed over me, forgetting that I am now on ZFS. Thanks for the pointers to unload the module. I'll have a go this evening.
As an aside, if you were starting a new FreeNAS build with 2x 3TB and 2x 2TB, would you be using ZFS, or sticking with the trusted mirrors or stripes on UFS?
 
Joined: Jun 11, 2013 | Messages: 6
PaleoN, you are a star. I ended up setting the tunable "geom_raid5_load" to "NO" via the GUI. After a reboot everything is back to normal, apart from the rogue GEOM metadata. I assume cleaning that up would involve backup/destroy/reload? I'd also be interested in your views on ZFS vs UFS as mentioned above. But thanks a lot for your response.
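
For reference, that tunable is just a loader variable; roughly what it amounts to is shown below (FreeNAS manages the actual loader configuration itself, so keep setting it through the GUI rather than editing files by hand):

Code:
# What the GUI tunable corresponds to in loader configuration; do not hand-edit
# on a FreeNAS install, set it via the GUI as above.
geom_raid5_load="NO"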
 

paleoN (Wizard) | Joined: Apr 22, 2012 | Messages: 1,403
Readability is important. Help others help you.

"Apart from the rogue GEOM metadata. I assume to clean this up would involve backup/destroy/reload?"
I would certainly back up first. You should be able to carefully clean the GRAID metadata off: export the pool, swapoff the disk you are cleaning, clean that one disk, import the pool (or reboot), scrub the pool (resilvering if needed), and repeat for the other drive.

I don't recall offhand where GRAID stores its metadata. Make a backup of the GPT partition table and try graid5 clear prov.
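
A rough per-disk sketch of that procedure; the device and partition names are assumptions taken from this thread, and the exact graid5 syntax should be checked against its man page before running anything:

Code:
# Hedged sketch of the per-disk cleanup above (ada2 shown; repeat for ada3).
# Take a backup before touching anything.
zpool export MAINSET1
swapinfo                                 # see whether swap is active on this disk
swapoff /dev/ada2p1.eli                  # only if it appears above; path assumed
gpart backup ada2 > /root/ada2.gpt.txt   # save the partition table first
graid5 clear ada2                        # wipe the old GRAID5 metadata from this one provider
zpool import MAINSET1                    # or reboot
zpool scrub MAINSET1                     # let it complete before doing the other drive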
"I'd also be interested in your views on ZFS vs UFS as mentioned above. But thanks a lot for your response."
I'm a big fan of ZFS, but it's important to read up on it. Mirrors are great and very flexible. You could also do a 4-disk raidz2 pool if you recreate. Or you could mix it up and have one ZFS mirror and one gmirror volume.
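
If you did go the raidz2 route, the shape of the command is sketched below. This is an illustration only: on FreeNAS the GUI normally partitions the disks and creates the pool for you, the pool name and the use of the p2 partitions are assumptions, and mixing 3TB and 2TB drives means only about 2TB of each disk gets used.

Code:
# Hedged illustration of a 4-disk raidz2 layout; -f is needed because the member
# sizes differ, and usable space is limited by the smallest disk.
zpool create -f NEWPOOL raidz2 ada0p2 ada1p2 ada2p2 ada3p2
zpool status NEWPOOL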
 

paleoN (Wizard) | Joined: Apr 22, 2012 | Messages: 1,403
Looks like I can't delete my own posts. Great.
 