crispyduck
Cadet
- Joined: Jan 30, 2022
- Messages: 9
Hi! First of all, thanks for SCALE; it's really awesome, and I have already been using it for some months on my home NAS.
Now I am about to switch my lab environment from a FreeBSD HAST setup, which serves as an NFS 4.1 datastore for my ESXi hosts, to TrueNAS SCALE. At the moment I am just testing on one server.
As I am limited to gigabit networking, I have used NFS session trunking over 4x1GbE in the past. Now I had to find out that nfs-ganesha, which SCALE uses, does not support session trunking as specified in the NFS 4.1 RFC (RFC 5661). :-(
But it is great that SCALE supports Docker out of the box, so for testing it was quite easy to run an NFS server in a Docker container, which, with the needed kernel modules loaded, does support NFS session trunking. With that I get reads and writes of up to 450 MB/s on the NFS datastore mounted on ESXi.
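For reference, this is roughly how the datastore is mounted on the ESXi side (the IP addresses, export path and datastore name are just examples from my test setup, nothing SCALE-specific). The comma-separated host list, one address per 1GbE link, is what makes ESXi establish a trunked NFS 4.1 session:

# mount an NFS 4.1 datastore against four server addresses, one per uplink
esxcli storage nfs41 add \
    -H 10.10.1.10,10.10.2.10,10.10.3.10,10.10.4.10 \
    -s /mnt/tank/vmstore \
    -v nfs-ds01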
I have been testing this for a week now, and it works fine with several ESXi hosts connected.
As a next step I would like to try the SCALE cluster functionality, but now I am having trouble planning the network config for it.
All my servers have 4x1GbE uplinks to a Cisco switch. For Gluster I need bonding; the suggested modes are balance-alb or LACP.
SCALE does not support balance-alb; why?
Okay, I can use LACP, but what should I then do with my NFS session trunking? Like iSCSI MPIO, it makes no sense to run it on top of a bonded interface.
Is there any way to have both on the same physical uplinks, or to pin each VLAN to a specific uplink? For example, six VLANs: vlan_mgmt, vlan_nfs1-4 and vlan_gluster, where vlan_mgmt and vlan_gluster run over the bond and vlan_nfs1-4 get one uplink each (see the sketch below).
Is there any way to do this? On ESXi it should be possible with multiple port groups and the right active/standby uplink settings. Is there any way to configure something similar in Linux and SCALE?
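To make the question more concrete, this is roughly the layout I have in mind, written as plain Linux ip(8) commands (interface names and VLAN IDs are just examples, and I am not sure the last part is even valid while the NICs are enslaved in the bond - that is exactly my question):

# bond of all four uplinks, carrying the mgmt and gluster VLANs
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down; ip link set eth0 master bond0
ip link set eth1 down; ip link set eth1 master bond0
ip link set eth2 down; ip link set eth2 master bond0
ip link set eth3 down; ip link set eth3 master bond0
ip link add link bond0 name bond0.10 type vlan id 10    # vlan_mgmt
ip link add link bond0 name bond0.20 type vlan id 20    # vlan_gluster

# one NFS VLAN pinned to each physical uplink for session trunking
ip link add link eth0 name eth0.101 type vlan id 101    # vlan_nfs1
ip link add link eth1 name eth1.102 type vlan id 102    # vlan_nfs2
ip link add link eth2 name eth2.103 type vlan id 103    # vlan_nfs3
ip link add link eth3 name eth3.104 type vlan id 104    # vlan_nfs4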
The same goes for containers and VMs running on SCALE: is there a way to have my mgmt and storage networks on a bond while still being able to assign individual uplinks to containers and VMs?
Besides these bonding questions, does it make any sense at all to serve NFS via session trunking (or iSCSI via MPIO) when the underlying storage is a redundant Gluster volume on bonded 4x1GbE? Reads should be fine, but what about writes? Since LACP keeps a single TCP flow on one physical link, will writes to one NFS datastore or iSCSI volume ever utilize more than one Gluster network uplink?
regards,
crispyduck