cstanley
Cadet
- Joined: Apr 24, 2016
- Messages: 8
Hey guys,
When trying to unlock the pool with the geli recovery key and the passphrase, I always have to unlock it twice; it seems to fail on the first attempt. Here is the output of /var/log/messages when this happens:
Code:
==> /var/log/messages <==
Feb 20 02:02:43 nas daemon[3020]: 2018/02/20 02:02:43 [WARN] agent: Check 'freenas_health' is now warning
Feb 20 02:03:08 nas /autosnap.py: [tools.autosnap:259] Volume zfsp1 not imported, skipping snapshot task #1
Feb 20 02:03:26 nas GEOM_ELI: Device gptid/4c03c442-0b70-11e6-9bd7-d05099c0a242.eli created.
Feb 20 02:03:26 nas GEOM_ELI: Encryption: AES-XTS 128
Feb 20 02:03:26 nas GEOM_ELI: Crypto: hardware
Feb 20 02:03:28 nas GEOM_ELI: Device gptid/4e66a9d4-0b70-11e6-9bd7-d05099c0a242.eli created.
Feb 20 02:03:28 nas GEOM_ELI: Encryption: AES-XTS 128
Feb 20 02:03:28 nas GEOM_ELI: Crypto: hardware
Feb 20 02:03:29 nas GEOM_ELI: Device gptid/4da6ad9f-0b70-11e6-9bd7-d05099c0a242.eli created.
Feb 20 02:03:29 nas GEOM_ELI: Encryption: AES-XTS 128
Feb 20 02:03:29 nas GEOM_ELI: Crypto: hardware
Feb 20 02:03:31 nas GEOM_ELI: Device gptid/4b414e36-0b70-11e6-9bd7-d05099c0a242.eli created.
Feb 20 02:03:31 nas GEOM_ELI: Encryption: AES-XTS 128
Feb 20 02:03:31 nas GEOM_ELI: Crypto: hardware
Feb 20 02:03:33 nas GEOM_ELI: Device gptid/0a2d8e5a-f5ad-11e7-8e5b-d05099c0a242.eli created.
Feb 20 02:03:33 nas GEOM_ELI: Encryption: AES-XTS 256
Feb 20 02:03:33 nas GEOM_ELI: Crypto: hardware
Feb 20 02:03:34 nas ZFS: vdev state changed, pool_guid=6784740781070051272 vdev_guid=7706223624675849754
Feb 20 02:03:34 nas ZFS: vdev state changed, pool_guid=6784740781070051272 vdev_guid=17117203167006062083
Feb 20 02:03:34 nas ZFS: vdev state changed, pool_guid=6784740781070051272 vdev_guid=13929147149740981013
Feb 20 02:03:34 nas ZFS: vdev state changed, pool_guid=6784740781070051272 vdev_guid=8089053346901069256
Feb 20 02:03:34 nas ZFS: vdev state changed, pool_guid=6784740781070051272 vdev_guid=8689990836141529117
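To narrow down whether the first failure happens in GELI itself or in the ZFS import, I was thinking of attaching the providers by hand from a shell next time. This is only a sketch, not a verified procedure: the recovery key path is hypothetical, the `-p` flag (attach with keyfile only, no passphrase, which is how I understand the recovery-key slot works) is an assumption on my part, and the gptid is taken from the log above.

```shell
# Sketch only -- key path is hypothetical, -p usage for the recovery-key
# slot is my assumption, and the gptid comes from the log above.
# Attach one member with the recovery key to see whether GELI fails:
geli attach -p -k /path/to/geli_recovery.key /dev/gptid/4c03c442-0b70-11e6-9bd7-d05099c0a242
# Repeat for the remaining members, then try the import by hand:
zpool import -R /mnt zfsp1
```

If the manual attach works every time, that would point at the middleware's unlock sequence rather than GELI or the key material.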
Anything else you need, please ask away!
After the first failed attempt, I can usually import it again using the same geli recovery key and passphrase, and it works.
Is anyone else experiencing this? Perhaps there is already a bug report.
P.S.: I'm using bhyve to run Ubuntu Ceph nodes. Love FreeNAS, and I'm trying to convince my friend ZFS is awesome. He is an ext4 fanboy ;)
Another issue I have noticed: when you move a zvol into a new dataset by renaming it, e.g.
zfs rename zfsp1/osd1 zfsp2/ceph/osd1
and then edit the device for that VM and change the DISK to the new zvol location, the VMs boot straight into the UEFI Interactive Shell. I'm not sure why; I would have thought that after a move/rename the zvol would still have all the relevant information to boot. Perhaps this should be brought up as a separate issue in another forum post. Thoughts?

Build: FreeNAS-11.1-U1
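For what it's worth, here's the sanity check I had in mind to confirm the rename itself didn't touch the data before blaming the VM config. Untested sketch; the dataset names are just the ones from my rename above.

```shell
# Sketch only -- dataset names are from my own rename above.
# Confirm the renamed zvol still exists and kept its size/mode:
zfs get volsize,volmode zfsp2/ceph/osd1
# Confirm the device node bhyve would attach actually exists at the new path:
ls -l /dev/zvol/zfsp2/ceph/osd1
```

If the volsize matches and the device node is there, the data should be fine and the problem is more likely in how the VM's disk device got re-pointed.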