Virtual TrueNAS Core with Virtual placement of disks

Syptec

Dabbler
Joined
Aug 3, 2018
Messages
42
*** May be in the wrong group. [mod note: Indeed. -JG]

I am wondering if anyone has built a virtual TrueNAS Core 12.x in a VMware cluster laid out as follows, and what it led to:

1. TrueNAS Core on cluster 1 of VMware

2. TrueNAS Core storage disks on 6 different NAS units

3. TrueNAS Core pool built as ZFS RAIDZ2

4. Share NFS from the virtual TrueNAS Core to a different cluster

5. Run a Windows 10 VM

Looking for performance results, experience, or direction.

Would this allow a virtual TrueNAS Core to work as a "Gluster performer", provide reliable performance, and serve as a dependable datastore with full fault tolerance in all respects?

Will post my results, since I have not seen anything quite like this in the forum...
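For concreteness, here is a minimal sketch (untested; pool and device names are assumptions, not from the original post) of the geometry described in the list above, as seen from inside the vTNC. In practice the pool would be created through the TrueNAS UI or middleware; the raw zpool command is shown only to make the intended layout explicit.

```python
# Hypothetical sketch: one 6-wide RAIDZ2 vdev inside the vTNC, where each
# member disk (da1-da6, names assumed) is a VMDK living on a datastore
# backed by a different physical NAS.
import subprocess

# One virtual disk per backing NAS -- the device names are placeholders.
members = ["da1", "da2", "da3", "da4", "da5", "da6"]

# RAIDZ2 tolerates the loss of any two members, so in theory two backing
# NAS outages could be absorbed *if* ZFS sees them as plain disk failures.
subprocess.run(["zpool", "create", "tank", "raidz2", *members], check=True)
```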
 
Last edited by a moderator:

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
1. TrueNAS Core on cluster 1 of VMware

3. TrueNAS Core pool built as ZFS RAIDZ2
 

Syptec

Dabbler
Joined
Aug 3, 2018
Messages
42


Maybe I was not clear on what I am doing. A virtual TNC install is easy and well documented; I get that. What I am doing now is setting up the storage of the TNC on 6 physically different NAS units so that each vdev member lives on a different NAS in the underlying infrastructure. I am looking to make ZFS behave like Ceph/Gluster, so that if an underlying NAS dies, the virtual TNC treats it as a disk failure rather than an entire NAS failure.

I am looking to build this so that the vTNC can present an NFS share to VMware and host a VM on the vTNC, giving a datastore that performs near or better than its physical counterpart and allows for 100% uptime. Presently, with a pTNC, that really is not possible.

*TNC - TrueNAS Core
**vTNC - Virtual TrueNAS Core
***pTNC - Physical TrueNAS Core
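As a small illustration of the placement rule being described, with hypothetical device and NAS names (none of these identifiers come from the original posts):

```python
# Hypothetical mapping of vdev members to the backing NAS units.
member_to_nas = {
    "da1": "nas1", "da2": "nas2", "da3": "nas3",
    "da4": "nas4", "da5": "nas5", "da6": "nas6",
}

# The whole point of the layout: no NAS may back more than one member,
# otherwise a single NAS outage looks like a multi-disk failure to ZFS.
assert len(set(member_to_nas.values())) == len(member_to_nas)
```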
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
so that if an underlying NAS dies, the virtual TNC treats it as a disk failure rather than an entire NAS failure.

That's not going to work the way you think. When a VMware datastore goes away, I/O to the vmdk's typically stalls, causing the VM to pause. You are assuming that it will behave as though a disk were merely yanked out of a bare metal TrueNAS system, but instead what will happen to you is that ANY failure of ANY of your virtual disks will cause a failure. This means that you have reduced the reliability because your idea requires all six NAS systems to be operational.
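To make the reliability argument concrete, here is a rough back-of-the-envelope comparison, assuming (purely for illustration) that each backing NAS is independently available 99% of the time. The 99% figure is made up; only the shape of the comparison matters.

```python
# Compare the two failure models for a 6-wide RAIDZ2 on six backing NAS units.
from math import comb

p = 0.99          # assumed per-NAS availability (illustrative value)
n, parity = 6, 2  # 6-wide RAIDZ2: pool survives while >= n - parity members are up

# Model A: what the layout hopes for -- a dead NAS is just a dead disk,
# so the pool is up whenever at least 4 of the 6 members are up.
raidz2_view = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                  for k in range(n - parity, n + 1))

# Model B: what is described above -- any stalled datastore stalls the VM,
# so effectively all six NAS units must be up at once.
all_or_nothing = p**n

print(f"treated as disk failures:           {raidz2_view:.6f}")    # ~0.999980
print(f"any datastore stall pauses the VM:  {all_or_nothing:.6f}")  # ~0.941480
```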
 

Syptec

Dabbler
Joined
Aug 3, 2018
Messages
42
That's not going to work the way you think. When a VMware datastore goes away, I/O to the vmdk's typically stalls, causing the VM to pause. You are assuming that it will behave as though a disk were merely yanked out of a bare metal TrueNAS system, but instead what will happen to you is that ANY failure of ANY of your virtual disks will cause a failure. This means that you have reduced the reliability because your idea requires all six NAS systems to be operational.



We are on the same page. I am assuming it will treat it like a disk. Has this been built and explained, and can you refer me to an article? I am building this today to see if the assumption is correct. I believe what you are saying, but I am very curious about what happens in detail.
 
Last edited:

Syptec

Dabbler
Joined
Aug 3, 2018
Messages
42
Present build.
The vTNC is in VMware cluster 1; its disks are on NAS 1, NAS 2, and NAS 3. The connection from the hypervisors in the cluster to each NAS is 100Gb, over a QCT 100Gb fabric switch.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You're asking the person who writes the articles if he can refer you to a presumably more authoritative article?

No, I can't. The closest would be


which I wrote, and explains the need for the hypervisor's datastores to remain redundant. Nobody is going to have written an article describing what you're trying to do because it is fundamentally weird. People normally implement redundant systems using abstractions such as HAST (FreeBSD) or DRBD (Linux). If I were trying to do what you are describing, I would create a HAST cluster and then share that device using iSCSI to VMware. You could even virtualize the HAST nodes to make them vMotion'able, but if you do that, I strongly suggest that the underlying vmdk's reside on RAID1 datastores, along the lines of what I describe in the article above. That would be highly reliable.
 

Syptec

Dabbler
Joined
Aug 3, 2018
Messages
42
I read your article; in part it is what inspired what we are doing (presently TNC has no Gluster support). Each underlying storage unit has 48 disks, set up as RAIDZ2 on each vdev, giving us 12 vdevs in total (aware of mirrors vs. RAIDZ2). Each storage unit has a LAGG of 4x 100Gb ports and uses 2 different switches, so the underlying layer remains as redundant as possible. If drives fail, no worries; but when a SAN dies, it is trouble. All storage is flash.
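For reference, a quick calculation of what that description implies for each backing unit, assuming the 48 disks are split evenly across the 12 RAIDZ2 vdevs (4-wide vdevs; the even split is an assumption, not stated in the post):

```python
# Rough numbers for one backing storage unit as described above.
disks, vdevs, parity = 48, 12, 2
width = disks // vdevs                      # 4 disks per vdev (assumed even split)

usable_fraction = (width - parity) / width  # 2 data disks out of 4
print(f"vdev width: {width}, usable capacity: {usable_fraction:.0%}")  # 50%

# Each vdev absorbs up to 2 failed disks; a third failure in the *same*
# vdev takes that whole backing pool (and thus the whole NAS) down.
```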
 
Last edited:
Joined
Jun 15, 2022
Messages
674
What I am doing now is setting up the storage of the TNC on 6 physically different NAS units so that each vdev member lives on a different NAS in the underlying infrastructure. I am looking to make ZFS behave like Ceph/Gluster, so that if an underlying NAS dies, the virtual TNC treats it as a disk failure rather than an entire NAS failure.

I am looking to build this so that the vTNC can present an NFS share to VMware and host a VM on the vTNC, giving a datastore that performs near or better than its physical counterpart and allows for 100% uptime. Presently, with a pTNC, that really is not possible.

In my humble opinion, one does not virtualize TrueNAS for resiliency; one scales out. If I'm understanding your goal correctly, virtualizing would add layers of unnecessary complexity.
 

Syptec

Dabbler
Joined
Aug 3, 2018
Messages
42
In my humble opinion, one does not virtualize TrueNAS for resiliency; one scales out. If I'm understanding your goal correctly, virtualizing would add layers of unnecessary complexity.
Kinda. Look at it more like making TNC a storage cluster by sitting on top of a virtualized stack. Ceph, Gluster, MooseFS, and others add complexity; a simple ZFS cluster would be "neat".
 