I have an iSCSI export on FreeNAS 11.2-BETA3 consisting of a 5 TB zvol exported to multiple ESXi hosts, which all share a single datastore on it to store and migrate their VMs. Everything worked fine.
I recently changed the underlying hardware of the machine while keeping the same disk controller and disks that host the zpool the zvol lives on (i.e., new mobo, NICs, and boot drive--same storage array). I reinstalled FreeNAS, uploaded my config file, and made the necessary tweaks to support the new hardware (changing network interfaces, etc.). All works well, including the other shares off my zpool (SMB, NFS). I re-exported the zvol over iSCSI, and here is what happens--
The iSCSI target shows up as a device under "Storage" on ESXi, but no datastore shows up or mounts.
Looking at vmkernel.log showed me the error "The physical block size '16384' reported by the device naa.XXXXX is not supported. The only supported physical blocksizes are 512 and 4096"
So, I turned on the flag on the iSCSI extent in FreeNAS that suppresses physical block size reporting.
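For anyone else hitting this: 16384 bytes is 16K, which I believe is a common volblocksize for zvols created in the FreeNAS GUI, so the "physical block size" ESXi is rejecting presumably comes from the zvol's volblocksize rather than the 512-byte logical size set on the extent. A quick sanity check on the FreeNAS side (the pool/zvol name here is a made-up placeholder):

```shell
# On the FreeNAS shell; "tank/vmstore" is a hypothetical pool/zvol
# name -- substitute your own. This prints the zvol's block size;
# 16K (16384 bytes) would match the value ESXi is rejecting:
#   zfs get -H -o value volblocksize tank/vmstore
# ESXi only accepts physical block sizes of 512 or 4096, so a
# 16384-byte block (32 512-byte sectors) gets refused:
echo $((16384 / 512))    # sectors per 16K block -> 32
```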
That cleared that error, but then I started getting an error message in vmkernel.log that said the datastore was being recognized only as a snapshot--
"LVM: 11136: Device naa.XXXXX:1 detected to be a snapshot:"
So, I ran esxcli storage vmfs snapshot resignature -l "[DATASTORE NAME]" to resignature it (I've also tried mounting it persistently instead, with the same results).
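For anyone following along, the snapshot-handling commands on the ESXi side look like this (the datastore label is a placeholder; resignature writes a new UUID, while mount keeps the existing one):

```shell
# Run in the ESXi shell. "[DATASTORE NAME]" is a placeholder.
# List VMFS volumes detected as unresolved copies/snapshots:
#   esxcli storage vmfs snapshot list
# Write a new signature (new UUID) so the volume can mount:
#   esxcli storage vmfs snapshot resignature -l "[DATASTORE NAME]"
# Alternative: mount it persistently under its existing signature:
#   esxcli storage vmfs snapshot mount -l "[DATASTORE NAME]"
# The symptom to look for in /var/log/vmkernel.log (counting the
# matching line in a sample log excerpt here):
grep -c "detected to be a snapshot" <<'EOF'
LVM: 11136: Device naa.XXXXX:1 detected to be a snapshot:
EOF
```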
That cleared that error, and so far this is just application of some well-documented, if a bit esoteric, steps (e.g., https://forums.freenas.org/index.php?threads/vmfs-partitions-gone.62031/).
But now I get the following errors, which I can't find any info on:
"WARNING: Vol3: 3102: [DATASTORE]/[UUID]: Invalid physDiskBlockSize 16384"
"FSS: 6092: No FS driver claimed device '[DATASTORE]/[UUID]': No filesystem on the device"
And this is where I'm stuck. I've tried this from multiple ESXi hosts with the same result. (The logical block size set in the iSCSI extent is 512--which is what it was on the old machine, where it worked.) Does this mean the VMFS or the underlying zvol is actually corrupted? That seems weird, because everything else on the zpool imported fine and works fine.
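One check I can think of before assuming corruption: look from the FreeNAS side at whether the partition table on the zvol is still intact. This is a read-only sketch with a made-up zvol path; ESXi normally puts the VMFS partition on a GPT-labelled device, and a GPT header starts with the ASCII signature "EFI PART" at LBA 1:

```shell
# Non-destructive check on FreeNAS. "tank/vmstore" is a
# hypothetical zvol name -- substitute your own:
#   dd if=/dev/zvol/tank/vmstore bs=512 skip=1 count=1 | strings | head
# If the GPT survived, the output should include "EFI PART".
# Demonstrating the match itself on synthetic data:
printf 'EFI PART rest-of-header' | grep -ac 'EFI PART'    # -> 1
```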
Things are backed up, so this isn't mission critical, but I'm wondering if anyone has ideas or things I could check. I'd just as soon not have to restore all my VMs. (I appreciate this is arguably more of a VMware issue than a FreeNAS one, but this forum seemed better at dealing with these kinds of questions than elsewhere.)
I guess an alternative--short of starting over with a new zvol--is to put everything back on the old hardware, but that's a pain.
Any ideas?
Much appreciated.