From that recent conversation on the linked thread you might imagine that "we" wrote ESXi and forced everybody to use it based on the tone of certain replies! Nonetheless, here's my take on your situation. I hope I can help.
So something initially happened that caused the web GUI to stop showing? Would the VM still start and boot to the console? Even if the storage disc RDMs had been messed up (in a content sense), BSD should still have started and booted to the console and GUI. If the RDMs/discs had disappeared from the bus, the VM would have refused to start until the missing discs were removed or relinked.
In any case, you booted the FreeNAS ISO and asked it to reset/reinstall? Did you run this just once, and did you definitely select that 8GB da0 (virtual?) root disc as the target?
It seems odd that all 3 drives would be damaged simultaneously. Do they sound otherwise OK? Now, perhaps it's the ghost of ESXi, but could it have been a power spike / lightning strike? Have you checked the SMART status? (e.g. smartctl -a /dev/da1)
Running through what you've posted; under ESXi:
- ZFS didn't find any valid vdev/pool (blank zpool status & zpool import). This is definitely a problem.
- The emulated discs seem to be detected and enumerated on the SCSI buses OK (dmesg & camcontrol devlist), but have no disklabels (blank gpart status & gpart list). This isn't necessarily a problem if your ZFS vdev was running on "whole disks". Did you originally add the raw devices to the vdev by hand, rather than using the FreeNAS GUI?
On bare metal:
- The physical discs seem to be detected and enumerated on the SCSI buses (looks like you've pasted geom disk list & camcontrol devlist output), but again have no disklabels. As before, the missing gpart status isn't necessarily a big deal.
- Did you try auto-import and the zpool commands at this stage too?
"geom disk list" from under ESXi would have been useful, but no worries.
I'd suggest looking at the raw block device(s) and seeing if there's a GPT/MBR or ZFS header. On bare metal, it looks like your ZFS drives have been picked up as ada0, ada2 & ada3. Under ESXi, they were da1, da2 & da3. Try one of these, depending on where you are running now. If you were using whole-disks then hopefully you'll simply see a ZFS header (notice the version and name strings), in which case retry the import routine:
Code:
~# cat /dev/ada0 | od -A x -c | head
<or>
~# cat /dev/da1 | od -A x -c | head
0000000 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
*
0003fd0 \0 \0 \0 \0 \0 \0 \0 \0 021 z \f 261 z 332 020 002
0003fe0 ? * n 177 200 217 364 227 374 316 252 X 026 237 220 257
0003ff0 213 264 m 377 W 352 321 313 253 _ F \r 333 222 306 n
0004000 001 001 \0 \0 \0 \0 \0 \0 \0 \0 \0 001 \0 \0 \0 $
0004010 \0 \0 \0 \0 \0 \0 \a v e r s i o n \0
0004020 \0 \0 \0 \b \0 \0 \0 001 \0 \0 \0 \0 \0 \0 \0 034
0004030 \0 \0 \0 \0 \0 \0 \0 \0 \0 004 n a m e
0004040 \0 \0 \0 \t \0 \0 \0 001 \0 \0 \0 004 z f s 1
If not, then try this:
Code:
~# cat /dev/ada0 | file -
/dev/stdin: x86 boot sector; partition...
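As an aside, you can see what a positive hit from file looks like by faking a boot-sector signature on a scratch file first (the /tmp path is a throwaway, purely illustrative):

```shell
# Build a 512-byte scratch "disc" whose last two bytes are the classic
# 0x55 0xAA boot-sector signature, then run the same file pipeline on it.
dd if=/dev/zero of=/tmp/fake-mbr.img bs=512 count=1 2>/dev/null
printf '\125\252' | dd of=/tmp/fake-mbr.img bs=1 seek=510 conv=notrunc 2>/dev/null
file - < /tmp/fake-mbr.img
# file should report a boot sector (exact wording varies by file version)
```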
If it's detected as an MBR or GPT boot sector, then it may be a case of a mangled partition table: BSD rejects the table, so ZFS can't find any block devices with a ZFS header. Unless the "cat /dev/xxx | od -A x -c" command is showing all \0's, all \377's or random rubbish (an outright disc failure [power spike?], or ESXi causing the referred-to worst-case corruption), my next suggestion would be to search the block device for a ZFS header:
Code:
~# cat /dev/da1 | od -A x -x | grep 7a11 | head
0003fd0 0000 0000 0000 0000 7a11 b10c da7a 0210
001ffd0 0000 0000 0000 0000 7a11 b10c da7a 0210
0020fd0 0000 0000 0000 0000 7a11 b10c da7a 0210
0021fd0 0000 0000 0000 0000 7a11 b10c da7a 0210
0022fd0 0000 0000 0000 0000 7a11 b10c da7a 0210
0023fd0 0000 0000 0000 0000 7a11 b10c da7a 0210
0024fd0 0000 0000 0000 0000 7a11 b10c da7a 0210
0025fd0 0000 0000 0000 0000 7a11 b10c da7a 0210
0026fd0 0000 0000 0000 0000 7a11 b10c da7a 0210
0027fd0 0000 0000 0000 0000 7a11 b10c da7a 0210
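If you'd like to sanity-check that od/grep pipeline before pointing it at a disc, you can exercise it on a scratch file (throwaway /tmp path; the octal escapes are the same 8 bytes visible at 3fd0 in the od -c dump above):

```shell
# Write the 8 marker bytes from the end of a ZFS vdev label's blank space
# into a throwaway file, then confirm the od | grep pipeline spots them.
printf '\021\172\014\261\172\332\020\002' > /tmp/zfs-magic.img
od -A x -x /tmp/zfs-magic.img | grep 7a11
# On a little-endian machine the matching line contains: 7a11 b10c da7a 0210
```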
If the pattern is found, but not at (effectively) 0K/256K from the start or 512K/256K from the end of the block device, then something may simply have mangled the partition table(s), though that seems unlikely for all 3 discs. (Address 3fd0 in the dump above is ~16K into the device, and belongs to a ZFS header beginning at offset 0.) You could then conceivably construct a partition table that re-exposes block devices at the base of the ZFS structure.
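For reference, the geometry those offsets come from: ZFS keeps four 256 KiB copies of the vdev label, two at the front of the device and two at the back. A quick sketch of where they should sit (the 4 GiB size is purely illustrative; substitute the real device size, e.g. from diskinfo):

```shell
# Expected byte offsets of the four ZFS vdev labels (L0..L3) on a device.
# SIZE is illustrative only; use the real size reported by `diskinfo ada0`.
SIZE=$((4 * 1024 * 1024 * 1024))   # pretend 4 GiB disc
LABEL=$((256 * 1024))              # each vdev label is 256 KiB
echo "L0 at offset 0"
echo "L1 at offset $LABEL"
echo "L2 at offset $((SIZE - 2 * LABEL))"
echo "L3 at offset $((SIZE - LABEL))"
```

If the headers on your discs turn out to start at some non-zero offset X instead, a partition whose first sector sits at X (gpart add -b takes the start in 512-byte blocks) would be one way to re-expose them, but that really is a last resort.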
No need to go to extremes just yet though. The first "cat | od | head" will be the most insightful here.