TrueNAS SCALE Announcement and Nightly Image Downloads

Kris Moore

SVP of Engineering
Administrator
Moderator
iXsystems
Joined
Nov 12, 2015
Messages
1,448
Hello TrueNAS and FreeNAS Community,

With TrueNAS 12.0 BETA now released, the iX engineering team is making some great progress with TrueNAS SCALE. In anticipation of the long July 4th holiday weekend here in the US, we are pleased to make the first official nightly images available for enthusiast and developer participation.


As we described in an earlier post, SCALE is an exciting new addition to the TrueNAS software family. It uses much of the same TrueNAS 12.0 source code, but adds a few different twists. The TrueNAS SCALE project is defined by this acronym:

Scale-out
Converged
Active-active
Linux containers
Easy-to-manage

The “core” platform is TrueNAS with its middleware, REST and WebSocket APIs, and Web UI. All the alerting, monitoring and external protocols are maintained. To make the SCALE project feasible, we invested in making TrueNAS into a multi-OS platform and used Debian 11 (bullseye) as the base OS. What are the other key technologies?

“Scale-out” applies to the compute and the storage where nodes can be incrementally added to increase compute or storage capacity and performance. To scale-out the storage, we scale-out ZFS by using gluster as a multi-node storage manager while retaining ZFS as the data protection, snapshot and replication manager.
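For the curious, the layering can be sketched roughly as below. This is a hand-rolled illustration only, not SCALE's actual tooling; the hostnames, pool, and volume names are made up.

    import subprocess

    NODES = ["nas1", "nas2", "nas3"]  # hypothetical node hostnames

    def run(cmd):
        """Run a command, raising if it fails."""
        subprocess.run(cmd, check=True)

    # ZFS remains the local data-protection layer: one dataset per node
    # serves as a Gluster "brick".
    for node in NODES:
        run(["ssh", node, "zfs", "create", "tank/gvol-brick"])

    # Gluster then spans those bricks into a single replicated volume.
    bricks = [f"{node}:/tank/gvol-brick" for node in NODES]
    run(["gluster", "volume", "create", "gvol", "replica", "3"] + bricks)
    run(["gluster", "volume", "start", "gvol"])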

“Converged” implies that the nodes can operate as compute-only, storage-only or hyper-converged compute and storage. SCALE includes both the capability to support VMs through KVM and libvirt as well as containers via Docker or Kubernetes.
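On the VM side, the standard libvirt Python bindings give a feel for what programmatic KVM management looks like. This is a generic libvirt sketch, assuming the libvirt-python package and a local libvirtd; it is not SCALE's own middleware API.

    import libvirt

    # Connect to the local KVM/QEMU hypervisor.
    conn = libvirt.open("qemu:///system")
    try:
        # List every defined guest and whether it is currently running.
        for dom in conn.listAllDomains():
            state, _reason = dom.state()
            running = state == libvirt.VIR_DOMAIN_RUNNING
            print(f"{dom.name()}: {'running' if running else 'stopped'}")
    finally:
        conn.close()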

“Active-Active” is another way of saying Extreme Availability. It is important to primary business applications and Kubernetes clusters delivering non-stop applications. If a single node fails, the same data needs to be available on other nodes.

“Linux Containers” is a deliberately vague description of the technology. Docker, Kubernetes and other container management technologies like Nomad can be used.

“Easy-to-manage” is a catch-all that makes it clear that cool technologies are no use if they are not easy to deploy, configure, operate and repair. With TrueNAS we have an API-first model that allows users to automate anything that can be done in the web interface. In addition, TrueCommand provides the tools for cluster-wide reporting, alerting, Role-Based Access Control, and auditing.
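For example, querying system information over the versioned REST API takes only a few lines. This is a minimal sketch; the hostname and API key are placeholders, and endpoint paths should be confirmed against the API docs linked from your own system's UI.

    import requests

    HOST = "truenas.local"   # placeholder hostname
    API_KEY = "1-xxxx"       # placeholder key, created in the web UI

    session = requests.Session()
    session.headers["Authorization"] = f"Bearer {API_KEY}"

    # The same middleware call the web UI dashboard makes.
    resp = session.get(f"http://{HOST}/api/v2.0/system/info", timeout=10)
    resp.raise_for_status()
    info = resp.json()
    print(info.get("version"), info.get("uptime"))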


The source code for TrueNAS SCALE is already available on GitHub and under very active development. A very early developer-preview image is available as a nightly build; it has all the base TrueNAS functionality and provides CLIs and APIs for some of the newer technologies. Updates to the nightly images can be automated, so it will be easy to see progress as it is made. You can download TrueNAS SCALE today!

SCALE will be a development project for the remainder of 2020 with a planned release in 2021.
There is a discussion group for project SCALE as well. If you or your organization are keen to contribute to a project with these goals, then please introduce yourself and we’ll invite you to join the TrueNAS SCALE developers chat group which is hosted on Slack.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
760
This announcement has gotten many wheels turning.

I am thinking about future design choices for my home lab given this recent announcement. I have several questions swirling in my mind and I hope there are official answers for at least some of them.

First, there are currently two tiers of TrueNAS, Core and Enterprise: the former is free and open source and can be installed on "any" hardware, while the latter is locked down to iX hardware as part of a paid product. What will the model be for SCALE? Given that it is currently on GitHub, is the intention to keep it free and open source? Will it instead be "freemium", with additional features having to be unlocked?

Second, will there be an update path to move from the FreeBSD-based system to the TrueNAS SCALE system? A config backup-and-restore wizard or something like that?

Third, does Gluster work on a system-wide, per-pool, or per-dataset basis?

Fourth: How does one map the drive? Does the Gluster cluster get its own IP?

Fifth: Can you have a two-node cluster that is effectively a mirror, or do you need at least 3?

Sixth: What does Gluster performance look like in a 3-node design vs a single node?

Finally, the big picture question:
My current home lab is set up such that I have a primary FreeNAS server and a backup one. I have one of the datasets on that server do a ZFS send to my backup FreeNAS server. My backup server then does another ZFS send to an offsite location at a friend's. This gives me 1 local and 1 geo-redundant backup.

I have been planning what I will do next, as my pool is sitting at about 80% capacity. My initial plan was to buy three of the new 18TB drives (~$1500?) coming this year and put them in my backup server, and then add the existing hard drives from my backup server to my primary one as a second vdev. This would give me 32TB on my backup server and 36TB on my primary.
Given we now have Gluster coming with SCALE, I think it may make more sense to buy a 3rd node (something like this) and pool the three nodes together.

All in, a "new" node would be $300 plus the cost of three 10TB easy shucks. Let's say I get them on sale for $150 each, so we'll call it $750, or about half what my original plan was. Doing so I would double my current storage to the same 36TB as the first plan, but my backup box wouldn't be short 4TB. I would still have the same data redundancy in case of a failure (one node).
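A quick sanity check on that math (prices are my rough guesses, not quotes):

    # Plan A: three new 18TB drives (my rough ~$1500 total estimate).
    plan_a = 1500
    # Plan B: one extra node plus three shucked 10TB drives on sale.
    plan_b = 300 + 3 * 150
    print(plan_b, plan_b / plan_a)  # 750 0.5 -> about half of plan A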

Given the HA nature of this design, I think it may be wise to forgo a traditional local "backup" node for my use. While RAID is not a backup, and Gluster, being just a virtual RAID of hosts, is also not a backup, it's an interesting idea. Given we have snapshots, etc., a lot of the traditional reasons for having a local backup are starting to evaporate.

To be clear, this dataset is for Plex lol.

Also, I apologize for my lack of knowledge surrounding Gluster.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
NickF,

All good questions. Some answers below.
1) SCALE is Open Source and Free. iX is a business and will have add-ons that assist larger customers and provide whole systems with support.
2) CORE and SCALE are different and similar. Pools can be imported from one to the other, but there are features that are different. After SCALE is reliable and released, some tools will automate some migration tasks.
3) Gluster is a mechanism to scale-out ZFS... it provides "cluster datasets" which span multiple pools
4) You can map the dataset (not drive) using standard SMB or NFS. Some datasets can remain as standard ZFS datasets (single node)
5) Three nodes is preferred, but the 3rd node could be a wimpier one (not in 1st release)
6) Performance will depend on configuration and there will be many choices. The goal is to have the system scale linearly from a bandwidth perspective.
Final: Async backup is more efficient on WAN and 1GbE. Clustering via SCALE needs good bandwidth between nodes.
Extra: If TrueNAS CORE nodes are set up for ZFS replication, then that can also be supported by SCALE. After migrating to SCALE you can then add cluster datasets. The goal of SCALE is to add options, but remove as few of the standard capabilities as possible.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
760
Morgan, I just want to say that I have no idea what your other duties and responsibilities are in the company, but if you give them half as much effort as you put into these forums, you deserve a raise. :smile:

1 and 2 -- That is good information to hear! Thanks!

3, So the functionality isn't such that the pool itself spans the cluster; the clustering takes place at the dataset layer. Neat, that makes it easy to manage. Can an existing standard dataset then be changed to be a cluster dataset? Can a ZVOL also be clustered?

4, I guess my confusion is that if the dataset lives on all 3 servers, I wouldn't want to map it to an individual node. So let's say I have boxes at 10.10.10.10, 10.10.10.11 and 10.10.10.12, and I have an SMB share that lives in one of the datasets; we'll call it video. Currently on my Windows box I would go to \\10.10.10.10\video (or the hostname equivalent) to access those files. Is there a virtual IP that is shared among the nodes?

5, Understood

6, Linear. Interesting. So a 3-node cluster could have one node's worth of redundancy and two nodes' worth of throughput?

Final: So, in other words, that would be a possible upgrade path for me, given that I have the switching capacity to handle it, and I would just leave the off-site backup as-is. Cool!

Thanks for all you do!
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
Thanks Nick.... I'm in Product Management and enjoy the feedback we get from the forums. It helps us work out what we should do.
3) ZVOL is not clustered... but a dataset with iSCSI LUNs can be. Datasets are not changed, but data can be migrated from one dataset to another with standard tools.
4) A cluster dataset can be accessed from multiple nodes, each with their own IP.
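As a rough illustration of what that means for a client (a hand-rolled sketch, not a built-in SCALE feature; the IPs and share name come from Nick's example), a client could simply try each node's address and use the first one that answers on the SMB port:

    import socket

    NODE_IPS = ["10.10.10.10", "10.10.10.11", "10.10.10.12"]
    SMB_PORT = 445

    def first_reachable(ips, port, timeout=2.0):
        """Return the first node accepting a TCP connection on `port`."""
        for ip in ips:
            try:
                with socket.create_connection((ip, port), timeout=timeout):
                    return ip
            except OSError:
                continue
        raise RuntimeError("no node reachable")

    ip = first_reachable(NODE_IPS, SMB_PORT)
    print(rf"map \\{ip}\video")  # e.g. \\10.10.10.11\video if .10 is down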
 

pitbullb

Cadet
Joined
Mar 25, 2020
Messages
7
I do not want to bother you, but just to be clear:
does your answer refer only to the versions under development, or also to the final versions of Core and Scale?

Just asking because if the latter is the case, I have to change my setup.
 


echelon5

Explorer
Joined
Apr 20, 2016
Messages
79
Not recommended. None of the work to avoid writes to boot devices has been done yet. Frankly, even CORE shouldn't be on USB anymore; it's just asking for issues.

IMO you should make this clearer during the next few updates and in the docs. I just found out about it recently when my stick died.

When I started out with FN, this was the go-to way; somewhere along the way this changed and I missed it. Now the documentation says:

Home users experimenting with FreeNAS® can install FreeNAS® on an inexpensive USB thumb drive and use the computer disks for storage.

Since you're no longer recommending it, I think you should add a disclaimer.
 
Joined
Jan 27, 2020
Messages
577
Not recommended. None of the work to avoid writes to boot devices has been done yet. Frankly, even CORE shouldn't be on USB anymore; it's just asking for issues.
Shouldn't we then adjust the instructions in the installer and change the documentation?
 

Kris Moore

SVP of Engineering
Administrator
Moderator
iXsystems
Joined
Nov 12, 2015
Messages
1,448
The installer message is correct; it says that we prefer flash for boot devices vs. spinning media. Flash != USB thumb drive.

As for the guide, yes, we should reword that to indicate more strongly that USB thumb drives shouldn't be used. We'll have to clarify carefully, since USB itself isn't the issue; it's the general quality of the typical thumb drive you get for dirt cheap.
 
Joined
Jan 27, 2020
Messages
577
Flash != USB thumb-drive
I'm not sure it is that simple and leaves room for interpretation.

What is flash storage?

Flash storage is a solid-state technology that uses flash memory chips for writing and storing data. Solutions range from USB drives to enterprise-level arrays. Flash storage can achieve very fast response times (microsecond latency), compared to hard drives with moving components.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
How strict is the minimum install size? Need to know if I can get away with some of the pile of 16G SATADOMs I've got.

I'm not sure it is that simple and leaves room for interpretation.

Hence Kris said that the wording would have to be carefully chosen. "All USB thumb drives are flash - not all flash is a USB thumb drive."

Although to be honest, I and several others here have had a fine time using spinning media for boot (mirrored laptop drives) back when SSD wasn't as easily accessible - it beats the pants off of cheap USB.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
Don't worry, the SATADOMs will fail a bit before they're full anyway.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
It's more of a comment on how unreliable some of those were. iX apparently was replacing them left and right after a while.
 

js_level2

Dabbler
Joined
May 4, 2018
Messages
10
I'd like to clarify a few things as this issue does keep coming up. If anything I say conflicts with what Kris Moore says, take his word over mine.

1. Flash media's biggest enemy is writes, not reads.

Flash media can handle a nearly unlimited number of reads, but writes are what really wear out the media, so minimizing writes in all ways possible is the best way to achieve the longest life from flash media.

There are a lot of other ways, but I won't go into them here as I'm trying to give some background and a relatively small summary of the situation.

2. iXsystems has been recommending against using USB in the manual since around 9.3 (March 2015ish or so).

Prior to 9.3 we were using UFS on the boot devices in a read-only mode (aside from upgrading), so there were negligible writes to the boot devices. The only mountpoint on the boot device that was regularly in a writeable condition was /data (where the config file was).

The typical failures prior to 9.3 involved corruption because the media itself was bad, or the device outright no longer being detected by the system at all.

If you were around in the forums or IRC prior to 9.3, the common advice when people wanted to change files on the boot device was to mount the storage writable, make the change, then make it read-only again. If you didn't do this, your changes would appear to be applied, but on reboot they'd be lost. There were a lot of forum threads and IRC chats about that very confusing behavior.

Since FreeNAS and TrueNAS 9.3 came out, the boot devices have been ZFS-based. The entire boot device's zpool is writable, and if you put the system dataset on the boot device, you will generate a lot of writes on your boot device because of all of the logging and rrd data generated over time.
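To put rough numbers on that (every figure below is an assumption for illustration, not a measurement):

    GB, TB = 1e9, 1e12

    daily_writes = 5 * GB      # assumed system-dataset churn per day

    # Assumed endurance figures: a small SSD's rated TBW vs. a
    # pessimistic effective figure for a thumb drive whose lack of
    # wear leveling concentrates writes on a few blocks.
    devices = {"small SSD": 40 * TB, "cheap thumb drive": 0.2 * TB}

    for name, tbw in devices.items():
        years = tbw / daily_writes / 365
        print(f"{name}: ~{years:.1f} years at 5 GB/day")
    # small SSD: ~21.9 years; cheap thumb drive: ~0.1 years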

3. Not all flash devices are created equal.

Some flash devices (read: USB sticks and microSD cards, most commonly) have only a rudimentary flash controller. It doesn't necessarily evenly distribute the "wear and tear" on the flash memory, so if you write to block zero 10,000 times, it will literally write to that block 10,000 times without redistributing the wear via a flash translation layer. Adding a flash translation layer and a more complex flash controller to handle these features adds cost, and USB devices are really a race to the bottom with regards to price. Of course, there are a few exceptions.
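To make the difference concrete, here's a toy simulation (illustrative numbers only; no real controller works exactly like this):

    import itertools

    BLOCKS = 1000      # flash blocks on the device
    WRITES = 100_000   # all logically aimed at the same block

    # Rudimentary controller: no flash translation layer, so one
    # physical block absorbs every write.
    naive_wear = [0] * BLOCKS
    for _ in range(WRITES):
        naive_wear[0] += 1

    # Wear-leveled controller: an FTL remaps writes to spread erase
    # cycles across the device (round-robin for simplicity).
    leveled_wear = [0] * BLOCKS
    targets = itertools.cycle(range(BLOCKS))
    for _ in range(WRITES):
        leveled_wear[next(targets)] += 1

    print(max(naive_wear), max(leveled_wear))  # 100000 vs. 100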

Flash is great when properly applied for the given application. For years I've been recommending people use the smallest, least expensive name-brand SSD they feel comfortable using. Personally, I find a 120GB Kingston SSD for $25 on Amazon to be far, far more reliable than any spinning disk I can find, hence the recommendation in the installer that you should use flash media for the boot device. I've probably been responsible for people buying 50 or so of those in the last few years. Even an old SSD you have lying around that's only 30-40GB is plenty of space for FreeNAS/TrueNAS. I'm using multiple old Intel G2 40GB SSDs that are 10+ years old in some of my servers. They run great despite their age! I just always recommend people use an SSD that has full TRIM support and is running the latest firmware.

---

One other noteworthy thing I'd like to add: SSDs don't significantly improve your boot times over USB (even against USB 2.0). The bottleneck really isn't the boot device. Even if they did, you (hopefully) don't have to boot your FreeNAS often enough that a few seconds of savings is worth the extra cost of high-speed NVMe disks for booting on FreeNAS. Save that money for better things like more RAM, or use that NVMe disk for an L2ARC.

I do agree that the installer probably shouldn't use the word "flash", as that really seems confusing when we say "don't use USB" and then say "use flash". While it may be obvious to many of us that one is a subset of the other, it's nonetheless confusing, and I'm betting Kris will have that fixed in the next release.

Nobody wins if we confuse new users with what appears to be conflicting messages and our amazing software appears to be unreliable because USB devices often suck.
 