Move data from datastore to zvol

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
OS: TrueNAS-12.0-U7
MB: GIGABYTE H370M DS3H
CPU: Intel i5 8500
RAM: 4x8GB (32GB) DDR4 2400 (NON-ECC)
HBA: SAS9211-8I 8-port internal 6Gb/s (in IT mode) <- in a PCIe x4 slot
NIC (Management): Onboard Intel
NIC (iSCSI): dual Intel 82599 SFP+
Case: SilverStone RM21-308
PSU: 600W

I looked for previous posts but came up dry; feel free to point me to a thread if one already exists.
I'm moving my ESXi datastores from NFS to iSCSI. In the process, I removed my two SFP+ ports from the switch and connected each port directly to an ESXi host for iSCSI.
I was wondering if I'm able to move the data from the datastore to the zvol. The data is located on the same pool.
If this isn't possible, what would you suggest as the safest and most efficient way of moving it?
The only other option I can think of is using the 1Gb/s management NIC to mount the datastores as NFS and then use ESXi to copy the data from each NFS datastore to the VMFS datastores. This is just over 15TB of data, and I feel like that would take forever.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
There's no easy way to do this, at least assuming we're talking about VMFS on iSCSI.

One of the advantages to NFS is that ZFS and UNIX both understand the VM files as being files, and you can do basic manipulations on them as files.

With iSCSI, you are probably running VMFS, which is a VMware-proprietary filesystem, and all the NAS sees is a bunch of disk blocks.

In theory, things such as VAAI plugins are designed to allow storage to be managed in various ways without proxying through a hypervisor, but this is a tough trick even if you have access to such features, which you won't have on the free version of TrueNAS.

So what you need to do is proxy through a hypervisor. You already have 10G, so you can temporarily mount the NFS datastore over the 10G, alongside the iSCSI. If you have vSphere VCSA, it then becomes an exercise in using Storage vMotion to move the data from your NFS datastore to the VMFS datastore. This lets the ESXi host do the transformation from NFS to VMFS, and you can do it at 10G speeds, which may not be anywhere near 10G, but probably faster than 1G!
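
If you'd rather script the mount than click through the UI, it's a one-liner on each host; the NAS address and export path below are placeholders for your setup:

```
# Mount the existing NFS export on the ESXi host over the 10G link.
# 10.10.10.2 and /mnt/tank/vmstore are placeholders for your NAS's 10G
# address and the dataset's NFS export path.
esxcli storage nfs add --host=10.10.10.2 --share=/mnt/tank/vmstore --volume-name=nfs-migration

# Confirm it shows up as a mounted datastore
esxcli storage nfs list
```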
 

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
Yup, this was my initial thought and also my worst fear. Thanks for the help. This can be marked as solved.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, I hope you heard the correction to your initial incorrect assumption:

The only other option I can think of is using the 1Gb/s management NIC to mount the datastores as NFS

is false. You can do this over the 10G. Doing it over 1G is not HORRIBLE, but you might see speeds of 2-3Gbps if you do it over the 10G.
 

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
Well, I take that back; not fully resolved. Maybe this belongs in a separate thread, so feel free to correct me.

I've started transferring a VM from the datastore over NFS to the iSCSI VMFS volume.
The VM on the NFS is only 2GB in total: a VMDK of 200MB and another VMDK of 1.2GB. When I copy it over to the VMFS, the 200MB VMDK is over 4GB and the 1.2GB VMDK is growing past 6GB and counting. Is this a file compression issue?
 

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
I cannot use the 10G. There are only two 10G ports on the NAS, and both are already occupied as direct connections to the ESXi hosts. The motherboard does not have another PCIe slot available to install another SFP+ card. I could always move one of the NAS's SFP+ ports from a host to the switch, but I'm not going to keep making configuration changes. The 1G will have to do.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
You can do the NFS mount over the same links you use for iSCSI - why not?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I cannot use the 10G. There are only two 10G ports on the NAS, and both are already occupied as direct connections to the ESXi hosts.

So you already have the optimal situation for 10G. Why do you think you can't use it?

If you can get iSCSI working on the 10G, then NFS is trivial by comparison.

If you are laboring under some sort of mistaken assumption that you can only run one thing over an IP connection, you might want to wonder how it is you can SSH to two different things over a single ethernet port.
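
If you want to see it for yourself, list the host's open connections; iSCSI (port 3260) and NFS (port 2049 for NFSv3) ride the same vmkernel interface just fine:

```
# From the ESXi shell: both protocols are just TCP connections and can
# share one 10G interface. Ports shown are the iSCSI and NFSv3 defaults.
esxcli network ip connection list | grep -E ':3260|:2049'
```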
 

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
Well, that was easy o_O Feeling like an idiot.

Did y'all see the comment about the storage usage difference between the datastore and the VMFS?

This is the usage of a VM on the NFS datastore:
[screenshot: datastore usage]

Here is the same VM as it is being transferred from the datastore to the VMFS (it's only at 9% complete):
[screenshot: VMFS usage during transfer]
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, that was easy o_O Feeling like an idiot.

No worries. We do a pretty good job of making things easier around here.

What size is the virtual disk? VMFS typically allocates space up front, and I believe it would be reporting the total disk size while writing the contents. VMFS6 space reclamation and ZFS compression will also interact in slightly nonobvious ways to affect overall free-space reporting on the pool, which is a different issue, obviously.
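
If you want to see how much of the discrepancy is just ZFS compression, check the zvol from the TrueNAS shell (tank/iscsi-zvol below is a placeholder for your actual pool/zvol path):

```
# Compare provisioned size, physical usage, logical usage, and the
# achieved compression ratio on the zvol backing the iSCSI extent.
# tank/iscsi-zvol is a placeholder path.
zfs get volsize,used,logicalused,compressratio tank/iscsi-zvol
```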
 

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
What size is the virtual disk?
The VM I posted about has two virtual disks.
The source VMDKs are coming from the datastore: vmdk1 is 231,444KB and vmdk1_1 is 1,207,524KB.
The destination is the VMFS: vmdk1 is now 4,199,424KB and vmdk1_1 is 30,619,648KB and growing (the "Copy to" is still in progress at 5%).
I created the datastore as VMFS6. Below is the zvol; the block size is 4KiB and compression is showing 1.43.
[screenshot: zvol properties]
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I don't really care what the datastore browser says. I was interested in what the VM configuration has for disk size. This might give good clues as to reasonableness.
 

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
I misunderstood the question.

The VM is set for a 4GB VMDK and a 512GB VMDK, but both are thin-provisioned.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, maybe interesting. I haven't actually looked at VMFS6 vMotion operations in enough detail to know what they do in flight, but I guess I'm not seeing anything shockingly wrong here. It could be that space reclamation comes around after the fact, since in many ways VMFS thin-provisioned files resemble UNIX sparse files.
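
For reference, manual reclamation is a VMFS5-era exercise; VMFS6 is supposed to issue UNMAPs automatically in the background, so the command below (the volume label is a placeholder) shouldn't normally be needed on your datastore:

```
# Ask VMFS to hand dead space back to the array with UNMAP.
# "iscsi-ds" is a placeholder volume label; on VMFS6 this normally
# happens automatically in the background.
esxcli storage vmfs unmap --volume-label=iscsi-ds
```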
 

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
To be clear, I did not vMotion. I used the "Copy to" function in the datastore browser. Once the "Copy to" completes, I'll do a vMotion to another datastore.
The VM in discussion is from an OVF template by Fortinet.

*about an hour passed by*

All VMs are backed up with Veeam and are on a datastore.

As a test, I registered a Windows 10 VM (I do not care about this VM) that lived on the datastore, and did a storage vMotion from the datastore to the VMFS.
The original VM was a 60GB thin disk with 24GB in use. After the vMotion completed, the 24GB VMDK had grown to 36GB. I'm going to run an sdelete, then vmkfstools -K, and see if that reduces the usage back to normal. If this works, that will be great for my Windows VMs, but I'm not sure I can accomplish this with my FortiAnalyzer and FortiManager VMs.
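
For anyone following along, the zero-then-punch sequence I'm planning looks roughly like this (the datastore path and VM name are placeholders):

```
# Step 1, inside the Windows guest: zero the free space
# (sdelete -z overwrites free space with zeroes):
#   sdelete.exe -z C:
#
# Step 2, from the ESXi shell with the VM powered off: punch out the
# zeroed blocks so the thin VMDK shrinks again.
vmkfstools -K /vmfs/volumes/iscsi-ds/win10-test/win10-test.vmdk
```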
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, let's see how it all ends up. I don't have much in the way of VMFS6 to experiment on here.
 