FreeNAS on VMware - How to correlate vDisk to /dev/daxx

JRE

Dabbler
Joined
Jan 22, 2019
Messages
11
A former colleague implemented FreeNAS on VMware. He provisioned several vDisks of identical size to the FreeNAS VM.
Then he created pools - one pool per vDisk.
At the OS level, I can use the "zpool status" and "gpart list" commands to match zpools to the /dev/daxx devices they use.
However, I still need to figure out which VMware virtual disk corresponds to each OS device /dev/daxx.

If this were a CentOS machine, I would use the "dmesg" command and filter for useful messages. For example, on the following VM, I can see that vDisk 0 is sda, vDisk 1 is sdb, and vDisk 2 is sdc. (But one cannot assume that the order of vDisks matches the order of OS disks in the VM.)
Code:
[jeinhorn@useitconfluence ~]$ dmesg | grep " sd " | grep logical
[    1.156251] sd 2:0:0:0: [sda] 104857600 512-byte logical blocks: (53.6 GB/50.0 GiB)
[    1.156407] sd 2:0:1:0: [sdb] 104857600 512-byte logical blocks: (53.6 GB/50.0 GiB)
[    1.156905] sd 2:0:2:0: [sdc] 629145600 512-byte logical blocks: (322 GB/300 GiB)


I need to do something equivalent on our FreeNAS server to tell which vDisk goes with which /dev/daxx device, so that I can safely remove the vDisk that corresponds to a deleted zpool. Can someone help?

Many Thanks,
Janet
 

JRE

Dabbler
Joined
Jan 22, 2019
Messages
11
Figured it out!

Code:
mynas# camcontrol devlist
<NECVMWar VMware IDE CDR10 1.00>   at scbus1 target 0 lun 0 (pass0,cd0)
<VMware Virtual disk 1.0>          at scbus2 target 0 lun 0 (pass1,da0)
<VMware Virtual disk 1.0>          at scbus2 target 1 lun 0 (pass2,da1)
<VMware Virtual disk 1.0>          at scbus2 target 2 lun 0 (pass3,da2)
<VMware Virtual disk 1.0>          at scbus2 target 3 lun 0 (pass4,da3)
<VMware Virtual disk 1.0>          at scbus2 target 4 lun 0 (pass5,da4)
<VMware Virtual disk 1.0>          at scbus2 target 5 lun 0 (pass6,da5)
<VMware Virtual disk 1.0>          at scbus2 target 6 lun 0 (da6,pass7)
<VMware Virtual disk 1.0>          at scbus2 target 8 lun 0 (da7,pass8)
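For anyone who wants to script this: the scbus/target numbers in the camcontrol output should line up with the SCSI controller and "virtual device node" (SCSI ID) shown in the VM's settings in vSphere, though that's worth double-checking on your own setup. Here's a small sketch that turns the devlist output above into a target -> device table (the awk program is my own, not a FreeNAS tool; on the NAS you'd pipe the live `camcontrol devlist` into the same awk instead of the sample text):

```shell
# Sample lines copied from the camcontrol devlist output above; on the NAS,
# replace the here-string with:  camcontrol devlist | awk '...'
sample='<NECVMWar VMware IDE CDR10 1.00>   at scbus1 target 0 lun 0 (pass0,cd0)
<VMware Virtual disk 1.0>          at scbus2 target 0 lun 0 (pass1,da0)
<VMware Virtual disk 1.0>          at scbus2 target 1 lun 0 (pass2,da1)
<VMware Virtual disk 1.0>          at scbus2 target 8 lun 0 (da7,pass8)'

map=$(printf '%s\n' "$sample" | awk '
  /Virtual disk/ {
    # find the field after the literal word "target"
    for (i = 1; i <= NF; i++) if ($i == "target") t = $(i + 1)
    d = $NF; gsub(/[()]/, "", d)          # "(pass1,da0)" -> "pass1,da0"
    n = split(d, p, ",")
    for (j = 1; j <= n; j++) if (p[j] ~ /^da/) print "target " t " -> " p[j]
  }')
printf '%s\n' "$map"
```

This prints one "target N -> daX" line per virtual disk, skipping the IDE CD-ROM, and handles the fact that camcontrol sometimes lists the pass device first (e.g. "(da7,pass8)").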
 

JaimieV

Guru
Joined
Oct 12, 2012
Messages
742
You probably know this already, but having one pool per disk (v or otherwise!) loses much of ZFS's purpose. You get data integrity checking, but you don't get data healing or any forms of redundancy.

If you want redundancy on this setup, you can make those single-disk vdevs into mirror pairs. Instructions are in the manual; although it doesn't cover the case where you start with a nonredundant pool, you can extend a single-disk vdev to a mirror pair the same way you'd extend an already-mirrored vdev to a three-way mirror.
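At the ZFS level, the step above amounts to a `zpool attach`. A sketch only: the pool name "tank" and the devices da0/da8 are hypothetical, and on FreeNAS the pool members are usually gptids or partitions rather than bare disks (check "zpool status" for the exact vdev name), so the GUI volume manager is the safer route. Shown as a dry run here rather than executed:

```shell
# Hypothetical names: pool "tank" has a single-disk vdev on da0, and da8 is a
# newly added blank vDisk of at least the same size. Printed, not executed:
attach_cmd="zpool attach tank da0 da8"   # turns the single disk into a 2-way mirror
status_cmd="zpool status tank"           # then watch the resilver until it completes
echo "$attach_cmd"
echo "$status_cmd"
```

Once the resilver finishes, "zpool status" should show the vdev as "mirror-0" with both disks ONLINE.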
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Make sure that you also check the order from the ESXi side. It is easy for disks to be wired to specific SCSI IDs ("virtual device node" in the latest ESXi terminology), and this affects ordering as well.

Be aware that what's been done is probably very fragile and highly not-recommended.
 