Good morning all - apologies that my first post is asking for help, but I need some... Please excuse my lack of knowledge; whilst I'm technically savvy, my TrueNAS/FreeNAS knowledge is not great.
My hardware is (apologies, I don't know the specific motherboard manufacturer etc.):
- Old Datto appliance.
- i5-4440 / 16GB RAM / 120GB SSD Boot Volume / 4x Hitachi HDS5C3020ALA632 2TB HDDs.
I have a single pool running over two mirrored pairs:
pool1                                            ONLINE
  mirror-0                                       ONLINE
    gptid/24d7222f-6408-11e9-82f8-fcaa14913c0e   ONLINE
    gptid/2dd7e7ad-6408-11e9-82f8-fcaa14913c0e   ONLINE
  mirror-1                                       ONLINE
    gptid/377f6874-6408-11e9-82f8-fcaa14913c0e   ONLINE
    gptid/3fad091f-6408-11e9-82f8-fcaa14913c0e   ONLINE
On this pool I have (had!) a single iSCSI device extent backed by a single zvol. A single VMware host connects to it over two 'portals', and that one extent holds a single VMFS volume used by VMware.
Everything had been working perfectly fine for months on end until one day, it didn't. Upon connecting the machine to a screen I found it was boot-looping, panicking while mounting the pool with: 'panic: Solaris(panic): zfs: allocating allocated segment(offset=1172842725376 size=4096) of (offset=1172488953856 size=4292273992)'. I only have a photo of this so can't type it all out.
This was on FreeNAS; I have since run an in-place upgrade to the latest stable TrueNAS build.
I found that the only way I could get the system to boot was to pull the disks before powering on. With hot-swap enabled in the BIOS I can then re-insert the disks, see them, and attempt to import the pool.
I am now at the point where I can import the pool read-only using: zpool import -F -f -o readonly=on pool1.
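For reference, the exact sequence I'm using is roughly the below (pool name taken from the status output above; the zfs list check at the end is just to confirm the zvol backing the extent still shows up):

  # forcibly import read-only; -F asks ZFS to rewind to an earlier txg if the current one is damaged
  zpool import -F -f -o readonly=on pool1

  # checks after import
  zpool status -v pool1           # pool/vdev health
  zfs list -r -t volume pool1     # confirm the zvol backing the extent is still listed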
This appears to import the pool fine, and it shows up as ONLINE when I run zpool list:
NAME           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
freenas-boot   111G  5.08G   106G        -         -     -   4%  1.00x  ONLINE  -
pool1         3.62T  1.31T  2.32T        -         -    0%  36%  1.00x  ONLINE  -
But VMware just hangs for ages when rescanning (possibly because it's trying to write to the volume) and eventually shows the volume as 0B in size.
I have also tried mounting the volume using the Windows iSCSI initiator, but that also seems to hang indefinitely.
I'm really not sure what is going on here; the disks seem to be in good health, and I obviously run two mirrored pairs for redundancy.
Your thoughts and expertise would be greatly appreciated. I just want to mount the VMFS volume somewhere and copy all the data off so I can re-build the NAS.
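One fallback I've been wondering about (untried, so I may be off base): skip iSCSI entirely and image the zvol straight off the read-only pool to another machine, then point a VMFS reader at the image. Something roughly like the below, where the zvol name and destination are placeholders since I don't have them to hand:

  # stream the raw zvol over SSH to another box for offline recovery
  # <zvol-name>, user and otherhost are placeholders
  dd if=/dev/zvol/pool1/<zvol-name> bs=1M | ssh user@otherhost "dd of=vmfs-extent.img bs=1M"

Does that sound like a sane way to get at the VMFS data without writing to the pool, or is there a better route?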
Any further info you need let me know.
Thank you in advance.