New integrated ESXi/TrueNAS build - general questions

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Hi all,

curiosity kills the cat - I finally bit the bullet and got myself an additional toy system. See the "Home VMware system" in my signature.

The system boots off a cheap Transcend 256 GB SATA SSD. I found that this brand in particular has an awesome TBW rating for the SSD size and price, which is why I've come to rely on them for boot media, specifically when I don't have redundancy, as in this case or on my OPNsense appliance. I don't know what the downside is; probably just speed. Who cares, for a boot drive?

I put two Samsung M.2 NVMe SSDs in there that can be passed through. Seems to work great.

Then I installed a small (8 GB memory, 8 GB virtual disk) VM with TrueNAS Core RC1. I was able to create the storage pool on the NVMe drives. Woot!

General questions:

1. I gather from the documentation that I should create a zvol to share via iSCSI. How large should I make it relative to the pool size? This will of course be the only task of that pool. Should I plan for snapshots on the ZFS side? I think so - I want to replicate that zvol to my main storage system for backup, so a week of daily snapshots looks reasonable. I get the impression that one large blob of VMFS is not that manageable. Should I go for NFS instead, so it's just files?

2. If I go for a zvol, how large should the volblocksize be? (A rough sketch of what I have in mind follows below.)
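
For illustration, here is roughly what I have in mind on the ZFS side, assuming the pool is called "ssd" as in the config further down; the sizes, snapshot names and the backup host are placeholders, not a recommendation:

Code:
# create the zvol; 16K volblocksize is a common middle ground for VMFS
zfs create -V 400G -o volblocksize=16K ssd/vmware

# daily snapshot, kept for a week, replicated incrementally off-box
zfs snapshot ssd/vmware@daily-2020-09-19
zfs send -i ssd/vmware@daily-2020-09-18 ssd/vmware@daily-2020-09-19 | \
    ssh backup-nas zfs receive -F tank/backup/vmware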

Problems so far:

I went ahead and just created a zvol and shared it via iSCSI to try things out. I can redo that any time should I need to change the size or the blocksize.
But while ESXi seems to see the target, it does not show the drive itself, so I cannot create a new datastore.

Any hints? I set up iSCSI sharing without any authentication - the same runs at the office in a separate storage LAN with a hand-written ctl.conf without problems. The config FreeNAS creates is slightly different, though.
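
In case it helps with debugging, this is roughly how I would check things from the ESXi shell; the vmhba number is just an example and will differ per host:

Code:
# list iSCSI adapters and the portals/targets they discovered
esxcli iscsi adapter list
esxcli iscsi adapter target portal list

# rescan the software iSCSI HBA and list the devices ESXi actually sees
esxcli storage core adapter rescan --adapter vmhba64
esxcli storage core device list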

I'll post a screenshot from ESXi 7.0 and the ctl.conf for starters. If you are inclined to help, please ask for whatever additional info you need.

Thanks!
Patrick

Code:
portal-group "default" {
}

portal-group "pg1" {
        tag "0x0001"
        discovery-filter "portal-name"
        discovery-auth-group "no-authentication"
        listen "0.0.0.0:3260"
        option "ha_shared" "on"
}

lun "vmware" {
        ctl-lun "0"
        path "/dev/zvol/ssd/vmware"
        blocksize "512"
        serial "000c29226238000"
        device-id "iSCSI Disk      000c29226238000                "
        option "vendor" "TrueNAS"
        option "product" "iSCSI Disk"
        option "revision" "0123"
        option "naa" "0x6589cfc0000003e726054d88a68318d9"
        option "insecure_tpc" "on"
        option "avail-threshold" "109951175884"
        option "pool-avail-threshold" "192414534860"
        option "rpm" "1"
}

target "iqn.2005-10.org.freenas.ctl:vmware" {
        alias "vmware"
        portal-group "pg1" "no-authentication"

}
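
For completeness, the LUN can also be checked on the FreeNAS side with ctladm, assuming shell access to the TrueNAS VM; this just confirms CTL is exporting what the config says:

Code:
ctladm portlist     # portal groups / ports known to CTL
ctladm devlist -v   # LUNs with their backing zvol paths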
 

Attachments

  • Bildschirmfoto 2020-09-19 um 18.42.46.png (1.2 MB)
Last edited:

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey @Patrick M. Hausen,

You configured authentication on your iSCSI. Here, I never tried that: my network links are physically separate and point-to-point, and to configure or access these links one already needs to be root on the ESXi host, so authentication does not really help. You can try to disable that authentication and see if it helps.

Also, I have not been able to use ESXi 7.0, because that release disabled all non-native drivers. That includes the driver I need for the PERC controller in my own ESXi server... Having these drivers disabled, with no way to re-enable them, is a big problem. I did not try too hard to re-enable them, because ESXi 7.0 also disabled VNC access to its VMs. Since I also use that function, 7.0 is another no-go for me.

As for iSCSI vs. NFS, I am in favor of iSCSI. I prefer to have only a few small blocks changing here and there instead of gigantic VMDK files. The MPIO offered by iSCSI is another plus for me. As for the size, a frequent recommendation is not to fill such a zvol beyond its 50% mark; that can give you an idea of how big a zvol you need.
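
To put numbers on that 50% rule (illustrative only): a sparse (-s) zvol avoids reserving the whole size up front, at the cost of having to watch pool free space yourself.

Code:
# a 1 TB pool should carry at most ~500 GB of actual zvol data
zfs create -s -V 500G -o volblocksize=16K ssd/vmware
zpool list ssd      # keep CAP well below 50% for steady performance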

As for backups, I do not rely on system-level backups for VMs. I back up only their data with a different mechanism, and should I need it, I will deploy a new VM and restore the data only. Whenever I want a system-level snapshot, I do it from ESXi, most of the time after stopping the VM to ensure the snapshot is consistent.

Example: for my private cloud, the data storage is mounted over NFS from FreeNAS into the Docker host running Nextcloud on my ESXi. That is how the data are backed up. For the configs and the database, I created another container. That second container mounts the Docker config volumes used by Nextcloud, but read-only; it does the backup from there and puts it on FreeNAS over NFS after encrypting it. For the database, that backup-agent container connects to the database, does the dump, encrypts it, and again sends it to FreeNAS over NFS. Should I need a restore, I deploy a new, empty SQL database and restore it, then re-create the Docker volumes for Nextcloud before deploying a brand-new container from the default image. That way I never deal with system state in my backups, which greatly simplifies backup and restore and also reduces the backup size.
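
A hypothetical sketch of what that backup-agent container runs; the host names, mount paths and GPG recipient are placeholders for illustration:

Code:
#!/bin/sh
STAMP=$(date +%Y-%m-%d)

# config volumes are mounted read-only at /nextcloud-config
tar czf - /nextcloud-config | \
    gpg --encrypt --recipient backup@example.org \
    > /mnt/freenas-backup/config-$STAMP.tar.gz.gpg

# dump the database over the network, encrypt before it hits storage
mysqldump -h db -u backup -p"$DB_PASS" nextcloud | \
    gpg --encrypt --recipient backup@example.org \
    > /mnt/freenas-backup/db-$STAMP.sql.gpg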

Good luck debugging your iSCSI.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
I deleted the configuration and created a new one with the TrueNAS wizard - now I can detect and format the LUN. Great.

I think I will provision 70-80% of the pool to the single zvol and not do snapshots, then use ghettoVCB to back up the VMs to an NFS share on my main storage NAS.
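
For reference, a typical ghettoVCB setup looks roughly like this; the datastore path, rotation count and VM name are placeholders, so check the script's documentation for the exact options:

Code:
# ghettoVCB.conf (excerpt) -- target an NFS datastore mounted on the host
VM_BACKUP_VOLUME=/vmfs/volumes/nfs-backup
DISK_BACKUP_FORMAT=thin
VM_BACKUP_ROTATION_COUNT=3

# back up a single VM by name
./ghettoVCB.sh -m truenas -g ghettoVCB.conf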

One open question - any idea what that means? See screenshot, please.

Thanks!
Patrick
 

Attachments

  • Bildschirmfoto 2020-09-19 um 19.46.52.png (256 KB)

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hi again,

Never had this on my side... Maybe a network problem, like one of your network links being down? Also, because you are on 7.0, maybe it is something new in the UI that I do not have in 6.7 U3...
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
According to Google, if there is only a single path to an iSCSI unit, it is shown as "degraded".
Currently setting up a second vSwitch and network ...
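
Once the second path is up, it should be visible from the ESXi shell; a quick way to check (output format may vary by ESXi version):

Code:
esxcli storage core path list    # a healthy setup shows two active paths per LUN
esxcli storage nmp device list   # path selection policy (e.g. round robin) per device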
 
Last edited:

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
(image: headshot_pitr.gif)

Da! datastore2 now workink.
 