ZFS Pool not found after upgrade to 11.1

raider2000

Cadet
Joined
Dec 18, 2017
Messages
8
I am using FreeNAS as a VM on Proxmox and am passing the disks through to the VM. The disks are connected to an LSI 9212-4i PCIe SAS controller; the HDDs are 8 TB Western Digital drives. My issue is that after the upgrade to 11.1 the pool can no longer be accessed or even found. I have a similar setup with 2x 3 TB (WD) and 1x 4 TB (Seagate), also running on the same model of SAS controller, and there the pool is still available.

I assumed it might be a firmware issue with the controller, since the working setup was on firmware 14 and the broken one on 12, but I have just upgraded to firmware 20 and it still does not work. If anyone has an idea what the issue might be, I'd be happy to get some advice. If any log files are needed, please let me know which ones.

Thanks
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
Post the output of camcontrol devlist, and run gpart show da1 (or against another one of your disks). Finally, post zpool status and zpool import.
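For reference, these can all be run from the FreeNAS console or an SSH session; the device names below are just placeholders and should be swapped for whatever your system actually shows:

camcontrol devlist
gpart show da1        # or ada0/ada1, whichever names gpart lists
zpool status
zpool import          # with no arguments this just lists importable pools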
 

raider2000

Cadet
Joined
Dec 18, 2017
Messages
8
Thanks for the reply.

camcontrol devlist:
<QEMU QEMU DVD-ROM 2.5+> at scbus1 target 0 lun 0 (cd0,pass0)
<QEMU QEMU HARDDISK 2.5+> at scbus2 target 0 lun 0 (pass1,da0)
<QEMU HARDDISK 2.5+> at scbus3 target 0 lun 0 (ada0,pass2)
<QEMU HARDDISK 2.5+> at scbus4 target 0 lun 0 (ada1,pass3)

On the host system, the related drives show up as follows:
/dev/sdb1: TYPE="zfs_member" PARTUUID="a38bdf36-c305-11e6-bd9d-ef7f8e4689ee"
/dev/sdb2: TYPE="zfs_member" PARTUUID="a39a717e-c305-11e6-bd9d-ef7f8e4689ee"
/dev/sdc1: PARTUUID="3d06c5e9-c929-11e6-9bf3-01af3f2ae076"
/dev/sdc2: PARTUUID="3d23b89d-c929-11e6-9bf3-01af3f2ae076"
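(This is the blkid-style listing from the host. A per-disk overview that also shows which partitions carry a zfs_member signature could be pulled with something like:

lsblk -o NAME,SIZE,FSTYPE,PARTUUID /dev/sdb /dev/sdc

Neither sdc partition reports a filesystem type in the listing above, while both sdb partitions are tagged zfs_member.)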

--------------------

gpart show ...
=> 34 15628053101 ada0 GPT (7.3T)
34 94 - free - (47K)
128 4194304 1 freebsd-swap (2.0G)
4194432 15623858696 2 freebsd-zfs (7.3T)
15628053128 7 - free - (3.5K)

=> 40 15628053088 ada1 GPT (7.3T)
40 4056 - free - (2.0M)
4096 4194304 1 freebsd-swap (2.0G)
4198400 15623852032 2 freebsd-zfs (7.3T)
15628050432 2696 - free - (1.3M)
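For completeness, the gptid labels that GEOM has created for these partitions (FreeNAS usually references pool members through them) can be listed with:

glabel status         # shows gptid/... labels and the partitions they map to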

------------------

zpool status (the affected pool is not listed because it cannot be found):
pool: freenas-boot
state: ONLINE
scan: scrub repaired 0 in 0 days 00:00:07 with 0 errors on Fri Dec 15 03:45:07 2017
config:

NAME STATE READ WRITE CKSUM
freenas-boot ONLINE 0 0 0
da0p2 ONLINE 0 0 0

errors: No known data errors

--------------------

zpool import

pool: main
id: 1528714702491965645
state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
see: http://illumos.org/msg/ZFS-8000-3C
config:

main UNAVAIL insufficient replicas
mirror-0 UNAVAIL insufficient replicas
17629389395766204468 UNAVAIL cannot open
11293553033617356016 UNAVAIL cannot open
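One way to cross-check the vdev GUIDs listed above against what is actually on the disks (a sketch, assuming the pool data lives on the freebsd-zfs partitions shown by gpart):

zdb -l /dev/ada0p2    # dumps the ZFS vdev label, including pool and vdev GUIDs
zdb -l /dev/ada1p2

If the labels are readable and the GUIDs match, the partitions still carry the pool metadata and the problem is more likely in how the devices are being presented to ZFS.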


I hope this helps. As mentioned, FreeNAS is running as a VM, so I am also including the QEMU config file below:

bios: ovmf
bootdisk: scsi0
cores: 4
ide2: none,media=cdrom
memory: 18432
name: NAS-FreeNAS-Artemis
net0: e1000=XX:XX:XX,bridge=vmbr0
numa: 0
onboot: 1
ostype: other
sata0: /dev/disk/by-path/pci-0000:01:00.0-sas-phy4-lun-0
sata1: /dev/disk/by-path/pci-0000:01:00.0-sas-phy5-lun-0
scsi0: local-zfs:vm-100-disk-1,size=20G
scsihw: virtio-scsi-pci
smbios1: uuid=88332963-5f6e-XXXX-XXXX-XXXXX
sockets: 1


Regards
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
Have you tried the earlier version of FreeNAS to see if that works?

I notice your /dev/sdc and sdb lines do not look like mirrors. Are they correct?
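A quick way to compare the partition tables of the two disks from the Proxmox host (a sketch; if they are mirror members, both should show the usual FreeNAS swap + zfs layout):

fdisk -l /dev/sdb
fdisk -l /dev/sdc     # or sgdisk -p, if gdisk is installed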
 

raider2000

Cadet
Joined
Dec 18, 2017
Messages
8
This is what I get on the Debian/Proxmox host. Booting the VM with FreeNAS 11.0, the pool can still be assembled, so I assume the disks themselves are fine, even though I was also surprised to see the devices listed the way they are. The disks are only used from within the FreeNAS VM.

From the host, zpool import gives the output below:
pool: freenas-boot
id: 12444805030050477259
state: ONLINE
status: Some supported features are not enabled on the pool.
action: The pool can be imported using its name or numeric identifier, though
some features will not be available without an explicit 'zpool upgrade'.
config:

freenas-boot ONLINE
zd0 ONLINE
 

raider2000

Cadet
Joined
Dec 18, 2017
Messages
8
Just for comparison, here is zpool status when running 11.0:

zpool status
pool: freenas-boot
state: ONLINE
scan: scrub repaired 0 in 0h0m with 0 errors on Thu Dec 14 18:45:07 2017
config:

NAME STATE READ WRITE CKSUM
freenas-boot ONLINE 0 0 0
da0p2 ONLINE 0 0 0

errors: No known data errors

pool: main
state: ONLINE
scan: scrub repaired 0 in 11h59m with 0 errors on Sun Nov 12 02:59:27 2017
config:

NAME STATE READ WRITE CKSUM
main ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada1 ONLINE 0 0 0
ada0 ONLINE 0 0 0

errors: No known data errors
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
What is zd0? Are you certain there is no VM configuration difference between the two? There is no reason I can think of for the software to reject the pool.
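For context: on a ZFS-on-Linux host, zd* nodes are zvol block devices. Assuming the Proxmox storage is ZFS-backed, they can be matched to their datasets with:

zfs list -t volume    # lists zvols such as the VM boot disk
ls -lR /dev/zvol      # the symlinks here point at the corresponding zd* devices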
 

raider2000

Cadet
Joined
Dec 18, 2017
Messages
8
zd0 is the same as da0p2, just seen from the host rather than from the FreeNAS guest. The VM configuration for my second system (whose pools work in both 11.0 and 11.1) is very much the same, except that 3 physical HDDs rather than 2 are passed through; see below:

bios: ovmf
boot: cdn
bootdisk: scsi0
cores: 2
ide2: none,media=cdrom
memory: 12288
name: NAS-FreeNAS-Atlas
net0: e1000=XX:XX:XX:XX:XX:XX,bridge=vmbr0
numa: 0
onboot: 1
ostype: other
sata0: /dev/disk/by-path/pci-0000:01:00.0-sas-phy5-lun-0
sata1: /dev/disk/by-path/pci-0000:01:00.0-sas-phy6-lun-0
sata2: /dev/disk/by-path/pci-0000:01:00.0-sas-phy7-lun-0
scsi0: local-zfs:vm-100-disk-1,size=20G
scsihw: virtio-scsi-pci
smbios1: uuid=035d1481-2277-XXXX-XXXX-XXXXXXXXX
sockets: 1


Additional comment: I also copied the boot: cdn statement from this configuration to the original one, since it was the only major deviation. It had no effect; the pool is still not accessible in 11.1.
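One more thing worth checking on the Proxmox host (a sketch, assuming the controller is still at PCI address 01:00.0 as in the config): whether the by-path links used for the pass-through still resolve to real block devices.

ls -l /dev/disk/by-path/ | grep sas-phy    # should list the phy...-lun-0 entries used above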
 

raider2000

Cadet
Joined
Dec 18, 2017
Messages
8
I just updated the firmware of the working system's SAS controller card to 20.0 as well, to rule out any negative influence from the newer firmware. Both controllers are now on the same firmware, and my problem still persists.

Could there be any issue caused by the larger drives (8 TB vs. 3 TB and 4 TB)?
Any other ideas on what to check?
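To rule out a size or translation problem with the 8 TB drives inside the VM, one quick check (just a sketch) is whether the guest sees the full media size and the expected sector sizes:

diskinfo -v ada0      # mediasize should be roughly 8 TB; also shows logical/physical sector size
diskinfo -v ada1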
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
One of your machines has phy4/phy5, and the other is phy5/phy6/phy7. Is that right?
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
If it works in 11.0, then stay with that for now. There is another user who reported that their partition tables showed as corrupt after going to 11.1, and the pool wouldn't import. It could be a similar issue, but I don't think anybody knows what is causing it yet.
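If you want to check for that symptom on these disks, GEOM flags a damaged table directly in the gpart output (a sketch):

gpart show ada0 ada1  # a damaged table shows up as e.g. 'GPT (7.3T) [CORRUPT]'
gpart status          # per-partition status summary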
 
Status
Not open for further replies.