
TrueNAS SCALE Project Start


RLe

Newbie
Joined
Apr 3, 2020
Messages
2
Really, really excited about TrueNAS SCALE! So much so that I installed two nearly identical nodes. I want to join in to test, debug, document, and develop.

Hardware specs (2 nodes)
- Supermicro X10SDV-6C+-TLN4F
- Supermicro SuperChassis 721TQ-250B2
- 32 GB RAM (registered ECC); planning to double that.
- 4x 4TB HDD per node (6x Toshiba N300 and 2x HGST Deskstar NAS across the two nodes)
- Samsung PM981 256GB (L2ARC)
- Intel Optane 900P PCI-e 280GB (ZIL/SLOG)
- Samsung Evo 250 GB (mirrored boot drives)
- Two 10 GbE ports bonded (LACP link aggregation)

Bug report
Both nodes are configured identically with TrueNAS SCALE (latest available version). Both available 10 Gb Ethernet ports are bonded together as LACP.

On each node the MAC address of the bonded interface is identical to the other node's, so my DHCP server (pfSense) serves the same fixed IP address to both nodes. That doesn't work as expected.

I suspect that the algorithm for generating the MAC address is not correct.

I expect a unique MAC address for each node, so that the DHCP service can serve each node its own fixed IP address. Normally - at least in my experience so far - one of the member interfaces supplies the MAC address for the bond.
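For illustration, this is the kind of stable per-node derivation I would expect (a sketch only; the seed string is a stand-in for real per-node data such as /etc/machine-id, and the 02: prefix just marks a locally-administered address):

```shell
# Sketch: derive a stable, locally-administered MAC from a per-node
# seed so each node's bond gets its own address. The seed below is a
# placeholder for real per-node data (e.g. /etc/machine-id).
seed="node1-machine-id"
mac=$(printf '%s' "$seed" | md5sum |
      sed -E 's/^(..)(..)(..)(..)(..).*/02:\1:\2:\3:\4:\5/')
echo "$mac"
```

The same seed always yields the same address, and different seeds yield different addresses, which is exactly the property the bond's MAC generation seems to be missing here.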

Is there a workaround available for now?
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
504
Bug report
Both nodes are configured identically with TrueNAS SCALE (latest available version). Both available 10 Gb Ethernet ports are bonded together as LACP.

On each node the MAC address of the bonded interface is identical to the other node's, so my DHCP server (pfSense) serves the same fixed IP address to both nodes. That doesn't work as expected.

I suspect that the algorithm for generating the MAC address is not correct.

I expect a unique MAC address for each node, so that the DHCP service can serve each node its own fixed IP address. Normally - at least in my experience so far - one of the member interfaces supplies the MAC address for the bond.

Is there a workaround available for now?
Welcome aboard, and thanks for finding that. If you can report it through the Jira bug tracker, that would be great - see "Report a Bug" at the top of this page. The Linux networking code is different from FreeBSD's.

I'd suggest starting with a single node at this stage. Clustering is being added, but it requires a stable base (e.g., no networking bugs), so it isn't in the first phase of testing.

Let us know which areas you are interested in developing, testing, or documenting.
 

ornias

Neophyte Sage
Joined
Mar 6, 2020
Messages
1,037
I'd suggest starting with a single node at this stage. Clustering is being added, but it requires a stable base (e.g., no networking bugs), so it isn't in the first phase of testing.
I think this is important to highlight in the developer update though... Even though things are "working via CLI", that doesn't mean they are all working equally well ;)
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
7,209
I think this is important to highlight in the developer update though... Even though things are "working via CLI", that doesn't mean they are all working equally well ;)
I think caution needs to be exercised in the case of clustered services (there's more involved here than simply running a service on a clustered filesystem). For instance, SMB should not be used on a clustered volume until clustering support is properly added; jumping the gun on this can result in data corruption due to the lack of coordination of locking, file opens, and so on between cluster nodes.
 

ornias

Neophyte Sage
Joined
Mar 6, 2020
Messages
1,037
I think caution needs to be exercised in the case of clustered services (there's more involved here than simply running a service on a clustered filesystem). For instance, SMB should not be used on a clustered volume until clustering support is properly added; jumping the gun on this can result in data corruption due to the lack of coordination of locking, file opens, and so on between cluster nodes.
Yeah, in general this deserves a warning, I guess.
Locking on Gluster is also something that needs careful design; it can totally FUBAR things if you're using (SQLite) databases and the like.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
7,209
Yeah, in general this deserves a warning, I guess.
Locking on Gluster is also something that needs careful design; it can totally FUBAR things if you're using (SQLite) databases and the like.
In the case of SMB in particular, this goes beyond simply coordinating byte-range locks between cluster nodes. Samba on GlusterFS without a correct CTDB configuration is simply not safe, because there will be no coordination of share-mode locks, oplocks, and so on.
 

Chris Moore

Wizened Sage
Joined
May 2, 2015
Messages
10,050
SCALE is an exciting new addition to the TrueNAS software family.
Not that I'm uninterested in this - I am - but I would like to ask if there is a plan to make tiered storage a possibility. For example, I am looking into a procurement at work that would give us the ability to migrate data between SSD, disk, and tape automatically, based on policy/demand.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
504
Not that I'm uninterested in this - I am - but I would like to ask if there is a plan to make tiered storage a possibility. For example, I am looking into a procurement at work that would give us the ability to migrate data between SSD, disk, and tape automatically, based on policy/demand.
The general problem is complex to solve... it depends on the data and the use cases. Should the data be moved at the block, file, or dataset level? Is it a massive system or a small one? TrueNAS SCALE is accumulating tools to simplify these data-movement processes, but it is not a generalized solution to this very complex problem.
 

ornias

Neophyte Sage
Joined
Mar 6, 2020
Messages
1,037
The general problem is complex to solve... it depends on the data and the use cases. Should the data be moved at the block, file, or dataset level? Is it a massive system or a small one? TrueNAS SCALE is accumulating tools to simplify these data-movement processes, but it is not a generalized solution to this very complex problem.
That being said: with special allocation classes, tiering per dataset is already possible, as is tiering at the block level (by block size).
 

Chris Moore

Wizened Sage
Joined
May 2, 2015
Messages
10,050
That being said: with special allocation classes, tiering per dataset is already possible, as is tiering at the block level (by block size).
I wasn't aware. Is that a new development? Something that is not quite ready for production? I can't risk active data at work on something that is still experimental, but I might be able to set up a test system.
 

ornias

Neophyte Sage
Joined
Mar 6, 2020
Messages
1,037
I wasn't aware. Is that a new development? Something that is not quite ready for production? I can't risk active data at work on something that is still experimental, but I might be able to set up a test system.
It's in the OpenZFS 2.0 RC (which TrueNAS CORE ships), so a few months away from production, sadly.
On a dataset you can set a block-size threshold below which blocks are stored on a special vdev, which can be an SSD-only vdev.

In theory you can cheat this system to even store all blocks on the SSD tier :)
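A sketch of what that looks like on the command line (pool, dataset, and device names are placeholders; this assumes a pool running OpenZFS 2.0 with special allocation classes):

```shell
# Add an SSD mirror as a special vdev to pool "tank" (device names
# are placeholders), then route small blocks of one dataset to it.
zpool add tank special mirror /dev/sdX /dev/sdY
zfs set special_small_blocks=32K tank/mydata

# The "cheat": make the threshold equal to the recordsize, so that
# effectively every block of this dataset lands on the SSD tier.
zfs set recordsize=32K tank/allssd
zfs set special_small_blocks=32K tank/allssd
```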
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
504
It's in the OpenZFS 2.0 RC (which TrueNAS CORE ships), so a few months away from production, sadly.
TrueNAS 12.0 CORE will get to RELEASE next week. Support and production can start with this, but some users will want to test further. U1 is expected in December.
 

ornias

Neophyte Sage
Joined
Mar 6, 2020
Messages
1,037
TrueNAS 12.0 CORE will get to RELEASE next week. Support and production can start with this, but some users will want to test further. U1 is expected in December.
Shoot, I was mixing up the two release timelines... Officially, OpenZFS 2.0 is not production-ready yet.
But that's mostly a formality; the OpenZFS 2.0 RC is really solid :)
 

Don Dayton

Neophyte
Joined
Jan 7, 2021
Messages
9
I attempted to do the same thing I have done with the FreeNAS, TrueNAS CORE, and TrueCommand releases: I pre-test them by installing them on my laptop using VirtualBox. My laptop has 16 GB of RAM and a quad-core i5, which has been sufficient for testing most NAS features and even a Plex server. I installed TrueNAS-SCALE-20.12-ALPHA.iso, defining the network interface as always using my only network connection (Wi-Fi) and setting it as a bridged adapter to the VM. SCALE will not show a working network interface in this mode, even though all the other products have.
 

ornias

Neophyte Sage
Joined
Mar 6, 2020
Messages
1,037
If it doesn't, that's a VM issue and not really a SCALE issue.
 

Don Dayton

Neophyte
Joined
Jan 7, 2021
Messages
9
If it doesn't, that's a VM issue and not really a SCALE issue.
This works fine for FreeNAS and TrueNAS CORE, which are FreeBSD-based; it works for TrueCommand, which is Debian-based like SCALE; and it works fine for Ubuntu and CentOS installs. So I believe the problem is in the Debian OS being installed, not in the VM.
 

Patrick M. Hausen

Dedicated Sage
Joined
Nov 25, 2013
Messages
2,767
I have had some network quirks on physical hardware since the last update, which are already acknowledged and supposedly fixed upstream (in Debian). You might just want to wait for the next alpha release, 21.02, like I do.
 

Misterb

Newbie
Joined
Dec 23, 2015
Messages
2
I had a similar problem when upgrading an Arch Linux LXC container under Proxmox. Apparently upgrading systemd 246.6-1 => 247.1-1 breaks networking within the LXC container, and as Patrick says above, there is an upstream fix waiting to be integrated. The workaround for me was to enable nesting in the container (CT) options.
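For anyone else hitting this, the workaround looks roughly like this from the Proxmox host shell (the container ID 101 is a placeholder; the same toggle is available in the web UI under the container's Options):

```shell
# Enable the nesting feature for the LXC container (ID 101 is a
# placeholder), then restart it so the change takes effect.
pct set 101 --features nesting=1
pct reboot 101
```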
 