
Cluster SMB shares


Urbaman

Dabbler
Joined
Jan 8, 2022
Messages
25
Hi,

I already created a GlusterFS cluster with TrueCommand 2.1, and have now upgraded to TrueCommand 2.2.

While trying to complete the setup of the cluster, the SMB share wizard comes up, but I'm not able to select a network/address from the SCALE nodes.
How can I enable SMB shares on an already created and functioning cluster volume?

Thank you.
 

Kris Moore

SVP of Engineering
Administrator
Moderator
iXsystems
Joined
Nov 12, 2015
Messages
1,197
Sounds like you manually created the cluster originally? How was it set up? I'm guessing the CTDB / VIPs were not set up properly. Do you have a "Finish" button at the top of the cluster widget that can be clicked?
 

Urbaman

Dabbler
Joined
Jan 8, 2022
Messages
25
Hi,

I created it with TrueCommand 2.1, before SMB was actually enabled.
Now the "Finish" button appears, and if I click it, it goes on asking for the interface/address of the three nodes, but it doesn't let me select anything, so I'm stuck there.

I remember I had some trouble with CTDB / VIPs during my first tries at cluster creation, so their setup might be the problem. Is there a way to set them up properly without dropping the whole cluster?

Thank you.
 

Kris Moore

SVP of Engineering
Administrator
Moderator
iXsystems
Joined
Nov 12, 2015
Messages
1,197

Ok, so before you can make SMB shares you have to "Finish" that setup. In that wizard, you need to select interfaces and then manually enter the IPs you want to use for your cluster VIPs (virtual IPs). So if you have 4 nodes, you will ideally want 4 VIPs defined, and they will float between the nodes as nodes go offline / online.

So in this example, here's how I did it.

TrueNAS Management IPs (Regular TrueNAS UI)
192.168.10.30
192.168.10.31
192.168.10.32
192.168.10.34

VIPs for SMB
192.168.10.50
192.168.10.51
192.168.10.52
192.168.10.53

In this case, my SMB clients would use 192.168.10.5X to connect, not the TrueNAS management IPs.
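As a rough sanity check of an addressing plan like the one above, here is a short sketch using Python's standard ipaddress module (the addresses are just the example values from this post; substitute your own plan):

```python
import ipaddress

# Example values from the post above; substitute your own addressing plan.
mgmt_ips = ["192.168.10.30", "192.168.10.31", "192.168.10.32", "192.168.10.34"]
vips     = ["192.168.10.50", "192.168.10.51", "192.168.10.52", "192.168.10.53"]
subnet   = ipaddress.ip_network("192.168.10.0/24")

# One floating VIP per node, all on the client-facing subnet,
# and none colliding with a node's management address.
assert len(vips) == len(mgmt_ips)
assert all(ipaddress.ip_address(v) in subnet for v in vips)
assert not set(vips) & set(mgmt_ips)
print("VIP plan looks consistent")
```

This only checks the plan on paper; the wizard itself is still what assigns the VIPs to the nodes.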
 

Urbaman

Dabbler
Joined
Jan 8, 2022
Messages
25
Well, but the form does not let me select any interface from the underlying nodes...
Is there something I should enable or set somewhere else to see them?
 

Kris Moore

SVP of Engineering
Administrator
Moderator
iXsystems
Joined
Nov 12, 2015
Messages
1,197
Can you provide a screenshot? I'm not sure if there's a bug or what exactly you are seeing.
 

Urbaman

Dabbler
Joined
Jan 8, 2022
Messages
25
Sorry that it's from mobile, but the dropdown is empty for all three nodes (they are up-to-date SCALE instances).

Screenshot_20220805-184502_Chrome.jpg


Screenshot_20220805-183925_Chrome.jpg



Screenshot_20220805-183949_Chrome.jpg
 

Kris Moore

SVP of Engineering
Administrator
Moderator
iXsystems
Joined
Nov 12, 2015
Messages
1,197
Ok, so if I'm following, the Interface dropdown has no interfaces listed in it at all? That's annoying. Can you please open a bug ticket here with the screenshots and a debug file from one of the TrueNAS systems?

I'll get an engineer to take a look into this, and we'll figure out why those aren't populating in this case.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,036
Can you tear down the previous clustering and start from scratch?
You will need a front-end SMB network and a back-end Gluster network.
So each node needs two interfaces on different subnets.
 

Urbaman

Dabbler
Joined
Jan 8, 2022
Messages
25
Could I just add the network interfaces to the nodes, given that they are VMs?
Or should I actually redo the whole cluster?

Thank you.
 

malco_2001

iXsystems
iXsystems
Joined
Sep 10, 2013
Messages
20
Let's say you have 4 virtual machines; each should have 2 interfaces. Each VM should have at least 1 interface that is private and reachable only by the other virtual machines. This could be a private bridge, or, if your hypervisor has a setting for an isolated network, use that. Here is an example of mine using my home network.

eno1 - Management interface, has additional static IP aliases for cluster traffic
eno2 - Private interface for cluster traffic only

nas01
eno1 - 192.168.1.1/24
eno2 - 192.168.100.1/24

nas02
eno1 - 192.168.1.2/24
eno2 - 192.168.100.2/24

nas03
eno1 - 192.168.1.3/24
eno2 - 192.168.100.3/24

nas04
eno1 - 192.168.1.4/24
eno2 - 192.168.100.4/24

When you complete the first part of the clustering wizard, if you select 192.168.100.1, 192.168.100.2, 192.168.100.3, and 192.168.100.4, then eno2 will not show as an interface during the second part of the wizard, where you will need to provide additional IPs that will be used to access the SMB shares.

You will need 4 additional addresses during the second part of the wizard to use for the SMB shares. Let's say eno1 is on your management subnet: you can provide 192.168.1.5/24, 192.168.1.6/24, 192.168.1.7/24, and 192.168.1.8/24, assuming these are outside of your DHCP range.
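The two-subnet layout above can be sanity-checked with a short Python sketch (the hostnames and addresses are the example values from this post, not anything TrueNAS-specific):

```python
import ipaddress

# Example layout from the post: eno1 on the management network,
# eno2 on a private cluster-only network.
nodes = {
    "nas01": {"eno1": "192.168.1.1/24", "eno2": "192.168.100.1/24"},
    "nas02": {"eno1": "192.168.1.2/24", "eno2": "192.168.100.2/24"},
    "nas03": {"eno1": "192.168.1.3/24", "eno2": "192.168.100.3/24"},
    "nas04": {"eno1": "192.168.1.4/24", "eno2": "192.168.100.4/24"},
}

for name, ifaces in nodes.items():
    subnets = {ipaddress.ip_interface(addr).network for addr in ifaces.values()}
    # Each node needs interfaces on two different subnets:
    # one front-end (SMB) network and one back-end (Gluster) network.
    assert len(subnets) == 2, f"{name} has both interfaces on one subnet"

# All back-end (eno2) addresses must share the same private subnet.
backend = {ipaddress.ip_interface(i["eno2"]).network for i in nodes.values()}
assert len(backend) == 1
print("layout ok")
```

If either assertion fires, the wizard's two-part interface selection described above will not line up with your interfaces.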

There is also an open bug ticket where a hostname like tn-kvm01-nodea can be seen as exceeding the 15-character limit and will prevent the wizard from completing. A shorter hostname like nas01 will work. A fix is coming in TrueCommand 2.2.1 that uses the cluster name to generate the NetBIOS computer object for the fused AD join that all hosts in the cluster share, and ensures proper validation.

Another issue to be aware of: creating a dispersed volume in the third part of the wizard may fail with 22.02.2.1, but creating a replicated volume should work fine. That issue will be fixed in 22.02.3. Hope this helps.
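The 15-character limit mentioned here is the NetBIOS computer-name limit. A minimal sketch of the check (the example hostnames are made up):

```python
# NetBIOS computer names allow at most 15 characters; a longer TrueNAS
# hostname can stop the cluster wizard from completing the AD join.
NETBIOS_MAX = 15

def hostname_ok(name: str) -> bool:
    """Return True if the hostname fits in a NetBIOS computer name."""
    return len(name) <= NETBIOS_MAX

print(hostname_ok("nas01"))             # True: short name is fine
print(hostname_ok("tn-kvm01-nodea-b"))  # False: 16 characters, too long
```

Until the TrueCommand 2.2.1 fix lands, keeping node hostnames at or under 15 characters sidesteps the issue.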
 

malco_2001

iXsystems
iXsystems
Joined
Sep 10, 2013
Messages
20
Could I just add the network interfaces to the nodes, given that they are VMs?
Or should I actually redo the whole cluster?

Thank you.
If your VMs are non-production anyway and you don't mind scrapping them if it fails, it might be worth a try, since it's just for experimental testing. As a best practice, though, one interface should always be completely private, for clustering traffic only. If it fails hard, it would be better to start over.
 

Urbaman

Dabbler
Joined
Jan 8, 2022
Messages
25
Hi,

Follow-up: I deleted the previous Gluster volume, added a proper second interface, and restarted the sequence.

Now, in the first step of cluster creation, it does not let me select any interface, the same as before but now in the very first step.

Should I reinstall all of my TrueNAS instances? I hope not.
 

aervin

iXsystems
iXsystems
Joined
Jun 13, 2018
Messages
114
Hey @Urbaman -- which version of Angelfish are you running? There was a bug which prevented interfaces from being "available." The workaround was to assign an arbitrary IP to the second interface (the public one).
 

Urbaman

Dabbler
Joined
Jan 8, 2022
Messages
25
Version TrueNAS-SCALE-22.02.3, which should be the latest; TrueCommand is also on the latest version.

I actually gave both the primary and secondary interfaces IPs, as fixed reservations from DHCP.

All Nodes:
ens18 10.0.50.2x (GUI, public LAN)
ens19 10.0.70.2x (Internal for Gluster)

I also tried setting a fixed IP on the Gluster interface on one node (on the interface itself), with no luck.
Should I set the fixed IP on the interface (not from DHCP) on the public one? Or by "arbitrary" do you mean a free DHCP IP?

Thank you very much.
 

duongle90

Cadet
Joined
Oct 25, 2022
Messages
7
I have the same issue. I tried reinstalling the SCALE systems and TrueCommand 3 times; it still exists.
 

duongle90

Cadet
Joined
Oct 25, 2022
Messages
7
Well, but the form does not let me select any interface from the underlying nodes...
Is there something I should enable or set somewhere else to see them?
Hi,
I just figured out the exact same issue that you have. If you were able to add the clusters, then at this step (the finish-setup step), if you do not see the interface, you just need to go to the menu -- System -- and delete all the systems you have. Refresh the web interface of each system, and then add them back to TrueCommand. After that, go back to "finish setup" and you'll see the interface.
 