Sneak Peek - Cluster Creation with SCALE + TrueCommand 2.0

Kris Moore

SVP of Engineering
Administrator
Moderator
iXsystems
Joined
Nov 12, 2015
Messages
1,448
Hey Folks! Wanted to give all our TrueNAS fans a quick sneak peek at the current status of cluster creation for SCALE. This is all being done on the latest TrueNAS SCALE and TrueCommand nightly images as of 4/8/2021. Since a picture (or in this case, a series of moving pictures) is worth a thousand words, without further ado, here's what the setup procedure looks like at this stage:

[Animated GIF: cluster creation walkthrough in TrueCommand]


In this example I was able to create a quick dispersed (erasure-coded) cluster with minimal effort, which is now ready to accept network client connections. If you dig further into each member node of the cluster, you will find that a '/cluster/testvol' mountpoint was also created, providing local access to the cluster from any host machine.
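For readers curious what TrueCommand is doing under the hood, here is a rough sketch of the equivalent Gluster CLI steps for a three-node dispersed volume. The hostnames and brick paths are hypothetical, and TrueCommand drives all of this through the TrueNAS middleware rather than the raw CLI, so treat this as an illustration only:

```shell
# Hypothetical node names and brick paths; adjust for your environment.
gluster peer probe scale2.local
gluster peer probe scale3.local

# Dispersed (erasure-coded) volume across 3 bricks: 2 data + 1 redundancy.
gluster volume create testvol disperse 3 redundancy 1 \
    scale1.local:/mnt/tank/brick \
    scale2.local:/mnt/tank/brick \
    scale3.local:/mnt/tank/brick
gluster volume start testvol

# FUSE-mount on a node, matching the /cluster/testvol mountpoint seen above.
mount -t glusterfs localhost:/testvol /cluster/testvol
```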

[Screenshot: the '/cluster/testvol' mountpoint on a cluster member node]


This is all nightly-image code, but it has made good progress in recent months, so we felt it was worth sharing more publicly now. We're looking to have more of this officially documented by the time TrueNAS SCALE 21.06 and TrueCommand 2.0-BETA arrive in the coming months. For the adventurous among you, feel free to kick the tires, and as always, bug reports and other feedback are welcome.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,737
Now I finally know why I installed ESXi on my toy system.
So I can run two SCALE systems with one NVMe drive and one Ethernet passed-through, each. :wink:
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
Kris's setup was more complex because he had two networks on each node. In the simplest configuration, there is only a single network and hence there is no need to choose the network on each node.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,737
3 peers must be present and connected before the ctdb shared volume can be created.

No way around that?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,737
I ordered a third SSD and will try to build a three-node cluster on the weekend ...

Not quite sure about the bifurcation issue, but if it works out, I'll have one box with 3 NVMe drives, one SATA drive, 4 network interfaces, booting ESXi and three SCALE VMs, all with disk and network passed through. That would so rock. I'll keep you updated.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,737
Not quite sure about the bifurcation issue, but if it works out, I'll have one box with 3 NVMe drives,
OK, I moved one of my NVMe drives from the mainboard to the add-on PCIe card. No worky. That platform cannot drive more than one drive in its single PCIe x4 slot.

Has anyone successfully installed SCALE in a bhyve VM on CORE? Or SCALE in a KVM VM on SCALE?
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
SCALE in a KVM VM on SCALE should work..... but no claims that it will perform well. It would be a good test of all the components.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,737
Let's see ... there will be iSCSI :wink: The ESXi host can easily drive three VMs. My problem is storage. But I can put a VMware datastore on my CORE ...
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
3 peers must be present and connected before the ctdb shared volume can be created.

No way around that?
It should be noted that the 3rd peer does NOT have to be full size.... it can act as an arbiter and only store metadata.
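For anyone wondering how a metadata-only third peer looks in Gluster terms, this is the "replica 3 arbiter 1" volume type. The hostnames and paths below are hypothetical, and on TrueNAS SCALE this is set up via the TrueCommand UI rather than the CLI, so this is just a sketch of the underlying concept:

```shell
# Hypothetical hosts. The first two bricks hold full data replicas; the third
# brick (the arbiter) stores only file metadata, so it satisfies the
# three-peer quorum requirement without needing full-size storage.
gluster volume create ctdb_shared replica 3 arbiter 1 \
    node1:/mnt/tank/brick \
    node2:/mnt/tank/brick \
    node3:/mnt/small/arbiter-brick
gluster volume start ctdb_shared
```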
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
It should be noted that the 3rd peer does have to be full size.... it can act as an arbiter and only store metadata.
I think you are missing "not", as in:
"does not have to be full size"

In most cases it might be more efficient to have 3 nodes with erasure coding, than 2 with replication and 1 arbiter, though...
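To make the efficiency comparison concrete, here is a back-of-the-envelope calculation assuming three equal-size bricks and ignoring metadata overhead (the brick size is an example value, not from the thread):

```python
# Usable-capacity comparison for 3 bricks of equal size S (assumption:
# metadata overhead ignored). A dispersed 2+1 volume stores 2 data fragments
# plus 1 redundancy fragment per stripe; replica 2 + arbiter stores 2 full
# copies, so only 1 brick's worth of capacity is usable.
S = 10  # TiB per brick, example value

dispersed_usable = (3 - 1) * S   # disperse 3, redundancy 1 -> 2 data bricks
replica_arbiter_usable = 1 * S   # 2 full copies + metadata-only arbiter

print(dispersed_usable, replica_arbiter_usable)  # prints: 20 10
```

So with equal hardware, the dispersed layout yields twice the usable space, at the cost of erasure-coding overhead on reads and writes.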
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,737
It should be noted that the 3rd peer does have to be full size.... it can act as an arbiter and only store metadata.
Any hint how to do that? Thanks!
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,737
Check the arbiter checkbox when setting it up. It's visible in the GIF :)
Now that I managed to get there - how does the system decide which "Brick" will carry data and which one will only be an arbiter? All I see is a global checkbox ...
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
Now that I managed to get there - how does the system decide which "Brick" will carry data and which one will only be an arbiter? All I see is a global checkbox ...

I assume the last brick is the arbiter.... but I know that part of the UI is in active development. @Kris Moore
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
Is there a good Gluster best-practices resource out there? I actually inherited a gluster setup in production (even worse, I might soon inherit one at a client site) and the team is basically flying blind and I've found the official documentation to be sub-optimal.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
Is there a good Gluster best-practices resource out there? I actually inherited a gluster setup in production (even worse, I might soon inherit one at a client site) and the team is basically flying blind and I've found the official documentation to be sub-optimal.

Gluster on LVM will behave differently than Gluster on ZFS. I think we'll end up with a TrueNAS SCALE best-practices guide. There will be many use cases and a diversity of configurations.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,737
Is it intentional (for now) that I can create a cluster from TrueCommand, but afterwards I still get "No cluster volumes found." in the Cluster Volumes manager?
 