Dealing with a small to major catastrophe

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
TrueNAS-12.0-U8
ESXi 6.7u3 (new and old servers)

Situation:
We are replacing our ESXi hosts with newer hardware in preparation for moving to ESXi 7.0.x. The new ESXi hosts do not see the datastore, but they do see the TrueNAS LUN under Devices in ESXi. However, when the device is highlighted, ESXi wants us to create a new datastore (focusing on the 15.5 TB datastore):
(screenshot attached)
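For reference, before anything destructive is done, a few read-only checks from the ESXi shell show what the host actually thinks of the LUN, and in particular whether it is treating the volume as a snapshot/unresolved copy, which is a common reason a visible LUN is not auto-mounted. A minimal sketch; the naa device name below is a placeholder, not our actual ID:

# Show the GPT partition table on the LUN; a surviving VMFS partition
# appears with type GUID AA31E02A400F11DB9590000C2911D1B8.
partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# Ask ESXi whether it sees the volume as an unresolved/snapshot copy.
esxcli storage vmfs snapshot list

# Older-style listing of unresolved VMFS volumes.
esxcfg-volume -l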


No changes have been made to the TrueNAS device, pool, or zvol. The tech who was working on the operation cannot remember whether he hit "unmount" or "delete" on the datastore. If he hit "delete", has the data been destroyed on the TrueNAS? Is there a way to recover the data on the TrueNAS?
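One quick, non-destructive sanity check on the TrueNAS side is whether the zvol backing the iSCSI extent still reports the expected amount of referenced data; as I understand it, an ESXi-side unmount or datastore delete touches partition/VMFS metadata rather than zeroing the blocks, so the space accounting should still look roughly full if the data is intact. A sketch with a hypothetical pool/zvol name:

# On the TrueNAS shell: confirm the zvol still references the expected data
# (tank/vmware-zvol is a made-up name).
zfs list -t volume -o name,used,referenced,volsize tank/vmware-zvol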

The tech did not take a snapshot of the pool or zvol before making any changes. We do have Veeam backups on a separate NAS, but the VM resided on the datastore in question.
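For context, the safety net that got skipped is cheap on the TrueNAS side: a ZFS snapshot of the zvol backing the iSCSI extent, taken before touching anything in ESXi, would have made this trivially reversible. A minimal sketch, again with a hypothetical pool/zvol name:

# Snapshot the zvol that backs the iSCSI extent before any ESXi-side work
# (tank/vmware-zvol is a made-up name).
zfs snapshot tank/vmware-zvol@pre-esxi-migration

# Verify it exists.
zfs list -t snapshot -r tank/vmware-zvol

# If the ESXi side goes sideways, roll the zvol back to that point in time.
zfs rollback tank/vmware-zvol@pre-esxi-migration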

If you're wondering about the fate of the tech, his employment status does ride on recovering the datastore and/or the data without having to rely on the Veeam backups.
 

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
So I walked through the steps mentally and wrote down everything that was going to be copied and pasted.

[root@ESXi-01:~] offset="128 2048"; for dev in `esxcfg-scsidevs -l | grep "Console Device:" | awk {'print $3'}`; do disk=$dev; echo $disk; partedUtil getptbl $disk; { for i in `echo $offset`; do echo "Checking offset found at $i:"; hexdump -n4 -s $((0x100000+(512*$i))) $disk; hexdump -n4 -s $((0x1300000+(512*$i))) $disk; hexdump -C -n 128 -s $((0x130001d + (512*$i))) $disk; done; } | grep -B 1 -A 5 d00d; echo "---------------------"; done
/vmfs/devices/cdrom/mpx.vmhba0:C0:T2:L0
Unable to get device /vmfs/devices/cdrom/mpx.vmhba0:C0:T2:L0
---------------------
/vmfs/devices/disks/naa.6589cfc0000009b10152ac0d634424d8
gpt
126625 255 63 2034237504
1 2048 2034235392 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
Checking offset found at 2048:
0200000 d00d c001
0200004
1400000 f15e 2fab
1400004
0140001d  69 73 63 73 69 2d 53 41 4e 31 2d 62 6f 6f 74 2d  |iscsi-SAN1-boot-|
0140002d  30 31 00 00 00 00 00 00 00 00 00 00 00 00 00 00  |01..............|
---------------------
/vmfs/devices/disks/naa.6589cfc000000bd0b1349bd1481ef3f9
gpt
2071957 255 63 33285996672
Checking offset found at 2048:
0200000 d00d c001
0200004
1400000 f15e 2fab
1400004
0140001d  69 73 63 73 69 2d 42 61 63 6b 75 70 2d 30 32 00  |iscsi-Backup-02.|
0140002d  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  |................|
---------------------
/vmfs/devices/disks/naa.6589cfc000000c22ce42f8f17907ae95
gpt
200029 255 63 3213466112
2 2048 3213463552 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
Checking offset found at 2048:
0200000 d00d c001
0200004
*
1400000
0140001d  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  |................|
*
---------------------
/vmfs/devices/disks/t10.NVMe____KBG40ZNS256G_NVMe_KIOXIA_256GB__________8D7B8C00048EE38C
gpt
31130 255 63 500118192
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
2 7086080 15472639 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
3 15472640 500118158 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
---------------------

[root@ESXi-01:~] partedUtil getptbl /vmfs/devices/disks/naa.6589cfc000000bd0b1349bd1481ef3f9
gpt
2071957 255 63 33285996672

[root@ESXi-01:~] partedUtil getUsableSectors /vmfs/devices/disks/naa.6589cfc000000bd0b1349bd1481ef3f9
34 33285996638

[root@ESXi-01:~] partedUtil mklabel /vmfs/devices/disks/naa.6589cfc000000bd0b1349bd1481ef3f9 gpt

[root@ESXi-01:~] partedUtil showGuids
Partition Type       GUID
vmfs                 AA31E02A400F11DB9590000C2911D1B8
vmkDiagnostic        9D27538040AD11DBBF97000C2911D1B8
vsan                 381CFCCC728811E092EE000C2911D0B2
virsto               77719A0CA4A011E3A47E000C29745A24
VMware Reserved      9198EFFC31C011DB8F78000C2911D1B8
Basic Data           EBD0A0A2B9E5443387C068B6B72699C7
Linux Swap           0657FD6DA4AB43C484E50933C84B4F4F
Linux Lvm            E6D6D379F50744C2A23C238F2A3DF928
Linux Raid           A19D880F05FC4D3BA006743F0F84911E
Efi System           C12A7328F81F11D2BA4B00A0C93EC93B
Microsoft Reserved   E3C9E3160B5C4DB8817DF92DF00215AE
Unused Entry         00000000000000000000000000000000

[root@ESXi-01:~] partedUtil setptbl /vmfs/devices/disks/naa.6589cfc000000bd0b1349bd1481ef3f9 gpt "1 2048 33285996672 AA31E02A400F11DB9590000C2911D1B8 0"
gpt
0 0 0 0
1 2048 33285996672 AA31E02A400F11DB9590000C2911D1B8 0
Error: Can't have a partition outside the disk!
AddNewPartitions: ped_partition_new failed



As you can see, it failed and I did not move forward with anything past rescanning.
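For what it's worth, the failure itself is explainable from the numbers already on screen: getUsableSectors reported usable sectors 34 through 33285996638, but the setptbl line asked for an end sector of 33285996672 (the raw sector count), which lands in the area GPT reserves for its backup header at the end of the disk. If someone were to repeat the attempt, the usual form ends the partition at the last usable sector instead; a sketch of that corrected call, not something that was actually rerun here:

# End the VMFS partition at the last usable sector reported by
# "partedUtil getUsableSectors" (34 33285996638 in the output above).
partedUtil setptbl /vmfs/devices/disks/naa.6589cfc000000bd0b1349bd1481ef3f9 gpt \
  "1 2048 33285996638 AA31E02A400F11DB9590000C2911D1B8 0"

Even then, rewriting the partition table only restores the pointer to the data; whether the VMFS metadata underneath is still intact is a separate question.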

Now I have the ability to create a datastore via Datastores > New datastore (before, I could only do this under Devices), and it seems ESXi can see a fully used partition. The partition just isn't mountable.
(screenshots attached)
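Two further non-destructive checks that seem worth listing here (the UUID below is a placeholder; the device name is the one from the output above): whether ESXi now treats the volume as an unresolved copy that can be force-mounted, and what VOMA reports about the VMFS metadata on the partition.

# List unresolved VMFS volumes; if the datastore shows up here it can
# sometimes be mounted again while keeping its existing signature.
esxcfg-volume -l
# Force-mount by UUID or label (placeholder UUID).
esxcfg-volume -M 5e1a2b3c-xxxxxxxx-xxxx-xxxxxxxxxxxx

# Check the VMFS metadata on partition 1 of the LUN (read-only check mode).
voma -m vmfs -f check -d /vmfs/devices/disks/naa.6589cfc000000bd0b1349bd1481ef3f9:1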



Any thoughts or suggestions?
 

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
No dice on anything. Pulling the six-day-old backups from the NAS. A difficult lesson for someone to learn about following documentation and measuring twice before cutting.

Thanks for the article, @jgreco. It was just enough bait to give false hope.
 

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
I apologize. If ever we meet IRL, I'll buy you a beer.
That was supposed to sound sarcastic and was originally followed by some laughing emojis.

However, I'll never turn down a beer. I really do appreciate the help.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, of course it was sarcastic, but that doesn't make the event any less catastrophic or regrettable.

I always cringe before committing changes to vSphere with live data on it. Too often there's a gotcha in there.
 