
TrueNAS SCALE 21.04 makes its Debut


Kris Moore

SVP of Engineering
Administrator
Moderator
iXsystems
Joined
Nov 12, 2015
Messages
1,178
After a very successful development cycle with thousands of downloads and deployments, TrueNAS SCALE 21.04 is now available for download. TrueNAS SCALE “Angelfish” is maturing for single-node use and is almost feature complete for multi-node or scale-out use. Unlike other HyperConverged Infrastructure (HCI) solutions, TrueNAS SCALE can be deployed as a single node, a dual-node “high-availability” system, or a cluster of multiple nodes.





TrueNAS SCALE 21.04 is planned to be the last ALPHA version on the path toward BETA. TrueNAS SCALE 21.04 is based on Debian “Bullseye” Linux and includes:

KVM Virtualization:

A mature hypervisor with good reliability, Guest OS support, and enterprise features. This hypervisor is performing well in the field with our early adopters.

Kubernetes:

3rd Party Applications can now be deployed as a single (docker) image or “pods” of containers. Using Helm Charts, complex applications can now be easily deployed with dynamic charts, giving users fine-grained control and flexibility. TrueNAS SCALE 21.04 now includes the ability to utilize community-provided catalogs, including TrueCharts. The screen capture below shows the power of the UI and Kubernetes integration.
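For a sense of what deploying “a single (docker) image” translates to under the hood, a minimal Kubernetes Deployment is sketched below. The app name and image are placeholders, not the actual manifest a TrueNAS chart renders:

```yaml
# Hypothetical sketch: roughly the kind of object a single-image
# app becomes on the Kubernetes side. Names/image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # placeholder app name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:latest # any single docker image
          ports:
            - containerPort: 80
```

Helm charts generate and parameterize manifests like this one, which is what gives users the fine-grained control mentioned above.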

[Screenshot: the Applications UI with Kubernetes integration]


GPU Passthrough:

TrueNAS SCALE 21.02 introduced Intel QuickSync GPU passthrough to containers. Version 21.04 improves this support by bringing NVIDIA GPU/CUDA passthrough to the UI and containers as well. Containers with GPU offload capabilities, such as Plex, can now take advantage of a wider range of GPU hardware. Sharing GPU resources across multiple containers simultaneously is also supported.
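In upstream Kubernetes terms, NVIDIA passthrough boils down to a device-plugin resource request like the fragment below; the SCALE UI handles this for you, and the pod name and image reference here are only illustrative:

```yaml
# Upstream Kubernetes convention for NVIDIA GPU passthrough via the
# NVIDIA device plugin; names and image are illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: plex                        # placeholder
spec:
  containers:
    - name: plex
      image: plexinc/pms-docker     # illustrative image reference
      resources:
        limits:
          nvidia.com/gpu: 1         # request one GPU for transcode offload
```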

Scale-out ZFS:

Cluster volumes which span multiple nodes and ZFS pools can be created to provide scalable and robust data stores. The web UI for these is included in TrueCommand 2.0, which is available as a nightly image. The screen capture below shows a Cluster Volume being created across 3 nodes.

[Screenshot: a Cluster Volume being created across 3 nodes in TrueCommand 2.0]



The UI, while similar to TrueNAS CORE, has also been improved with some new UX enhancements across the ‘Data Protection’ and ‘Sharing’ sub-sections. Further UX improvements are expected to arrive in version 21.06.

In the 21.02 version, we also introduced the new TrueNAS CLI that uses the API and persists all changes. This CLI will make it easier to script the setup and configuration of TrueNAS. Feedback on the CLI has been very positive and has helped it mature rapidly for field use.

In March, the TrueNAS CORE documentation received a major facelift which greatly improved navigation and ease of use. TrueNAS SCALE documentation is taking shape as a clone of the TrueNAS CORE docs, since the two systems share many of the same configuration options. It is starting its journey to completeness and includes both Developer Notes and Release Notes.

We appreciate the community feedback and bug reports, which help us get SCALE to production quality faster. A special thanks also goes to the many community members who joined the development and test team; your contributions and teamwork have greatly accelerated the development process.

Is TrueNAS SCALE for Users or Developers?
At this ALPHA stage of its Software Development Lifecycle, TrueNAS SCALE is still primarily for developers and enthusiasts and can be downloaded here. For Linux developers, there are many opportunities to contribute to the Open Source TrueNAS SCALE project and we have a vibrant Slack community for contributors. It is a well coordinated and managed environment to develop the best Open Hyperconverged Infrastructure. For more information, see this community post.

This TrueNAS SCALE 21.04 version is also intended for tech-savvy enthusiasts who have a single node, a backup plan, and a willingness to resolve any issues they find. The feedback from enthusiasts has been good and the Kubernetes capabilities simplify the addition of applications.

Users with standard NAS (NFS, SMB, iSCSI, S3) requirements are still advised to use TrueNAS CORE and Enterprise, which have five hundred times more data under management and over 10 years of operation. If you have any additional questions or need advice on a new project, please email us at info@iXsystems.com. We are standing by to help.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
5,178
That's absolutely great news, Kris. Kudos to the team.

I will start running this version tomorrow, when my super-duper triple-hyperconverged three-node SCALE cluster on a single ESXi instance will get an active CPU cooler. Because putting a Xeon-D mainboard in a case designed for an Atom seems to call for some more cooling.

Apart from that, one single question - RTFM with a pointer to the FM of course welcome - can I already combine the GlusterFS and the container features? I.e., how would I deploy an Application on the clustered file system?

Thanks and kind regards,
Patrick
 

Kris Moore

SVP of Engineering
Administrator
Moderator
iXsystems
Joined
Nov 12, 2015
Messages
1,178

Awesome!

On the container feature, each member of the cluster has the distributed volume mounted to /cluster/<volname> automatically. It should be possible to map those into the container's host-volumes. I don't think the WebUI allows you to pick that location at this moment (It's still restricted to /mnt locations), but it may be possible through the API/CLI already. Just haven't tested it personally yet. ;)
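In plain Kubernetes terms, mapping that /cluster mount into a container would look roughly like the fragment below. The pod and volume names are placeholders, and as noted this may only be reachable via the API/CLI rather than the WebUI for now:

```yaml
# Hypothetical sketch: expose a cluster volume (mounted by SCALE at
# /cluster/<volname>) inside a pod via a hostPath volume.
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: demo
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: clustervol
          mountPath: /data
  volumes:
    - name: clustervol
      hostPath:
        path: /cluster/myvol   # "myvol" is a placeholder volume name
        type: Directory
```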

EDIT: Made a Jira Ticket for the team to explore this.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,461
I don't think the WebUI allows you to pick that location at this moment (It's still restricted to /mnt locations)
Depends on the app designer, whether they selected "hostpath" or "path"; we use "path" for selection of /dev/usb mounts, for example, for TrueCharts.

I think it's more fruitful to wait for full k8s support rather than hack something together though...
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,461
EDIT: Made a Jira Ticket for the team to explore this.
I find it a bit of an overzealous ticket, to be honest; support for GlusterFS PVCs is quite an obvious part of the clustered K8s support, which isn't designed yet and is already on the agenda to be designed. Making all sorts of separate hacky tickets for individual sub-parts seems a bit... weird imho...
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
5,178
@ornias I agree if this is a still-to-be-designed-from-the-ground-up area of the system. Please don't make $somefeature work by ad-hocery.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,461
Ahh, I seem to have missed the part where they put it in the /cluster dir... Yeah, that one could be added to at least be available for hostPath mounting. Just not advisable for the main config for Apps (due to lack of snapshots), but that's a whole other issue.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
5,178
Can one switch back to the regular alpha train from the nightlies?
 

StanAccy

Dabbler
Joined
Apr 23, 2021
Messages
20
Just installed the .04 build, but I'm running into some issues with SSL certs:
For instance, trying to view applications results in an error. Refreshing the charts manually results in:

truenas# git clone -v https://github.com/truenas/charts.git /mnt/TestPool/ix-applications/catalogs/github_com_truenas_charts_git_master
Cloning into '/mnt/TestPool/ix-applications/catalogs/github_com_truenas_charts_git_master'...
fatal: unable to access 'https://github.com/truenas/charts.git/': server certificate verification failed. CAfile: none CRLfile: none

If I try and run `apt-get update`, pretty much every repo reports being invalid for many days:

is not valid yet (invalid for another 42d 22h 52min 53s). Updates for this repository will not be applied.

Have I missed a step in setup or is something wrong here (and how do I fix it)?
 

waqarahmed

iXsystems
iXsystems
Joined
Aug 28, 2019
Messages
133
@StanAccy can you please check if your system time is accurate? If it's askew, SSL can fail. Otherwise please create a ticket at https://jira.ixsystems.com and we can have a peek at it (do include your system debug please, which can be retrieved via System Settings -> Advanced -> Save Debug). Thank you
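For what it's worth, the "invalid for another ..." figure in the apt message quoted above tells you roughly how far the clock is behind the repository's validity window. A quick sketch to turn it into a timedelta (the helper name is mine, not a TrueNAS tool):

```python
import re
from datetime import timedelta

def parse_apt_skew(msg):
    """Parse apt's 'invalid for another Xd Yh Zmin Ws' message into a
    timedelta: roughly how far the system clock is behind.
    (Hypothetical helper, not part of TrueNAS.)"""
    m = re.search(
        r"invalid for another (?:(\d+)d )?(?:(\d+)h )?(?:(\d+)min )?(?:(\d+)s)?",
        msg,
    )
    if not m:
        return None
    days, hours, mins, secs = (int(g or 0) for g in m.groups())
    return timedelta(days=days, hours=hours, minutes=mins, seconds=secs)

print(parse_apt_skew("is not valid yet (invalid for another 42d 22h 52min 53s)."))
# → 42 days, 22:52:53
```

Fixing the clock (e.g. by enabling NTP) clears both the apt and the git certificate errors.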
 

StanAccy

Dabbler
Joined
Apr 23, 2021
Messages
20
Sys clock is/was fine. The first boot was configured via DHCP. I set a static IP and everything seemed ok (except the applications). I rebooted again, and this functionality now seems to work.
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
422
Can SCALE connect to an iSCSI target on TNC yet?
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,461
The underlying kubernetes deployment should be able to, using Democratic CSI...
But you should ask Travis, the guy building Democratic CSI.

It's not supported in the core of SCALE afaik.
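For the adventurous, a democratic-csi setup against a TrueNAS box is usually wired up through a StorageClass along these lines. Every name and parameter below is illustrative; the real driver configuration lives in the driver's own values file and is documented in the democratic-csi repo:

```yaml
# Illustrative only: a StorageClass backed by a democratic-csi iSCSI
# driver pointed at a TrueNAS box. The provisioner name and parameters
# depend on how the driver was deployed.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: truenas-iscsi                   # placeholder name
provisioner: org.democratic-csi.iscsi   # set by the driver deployment
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  fsType: ext4
```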
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
422
Which doesn't make sense IMO, as the base Debian has an iSCSI client built in. I am looking to be able to run VMs (not containers) from SCALE as the host, from an iSCSI target on the TNC. I am surprised this use case apparently wasn't considered.
 

Kris Moore

SVP of Engineering
Administrator
Moderator
iXsystems
Joined
Nov 12, 2015
Messages
1,178
It's been considered, but right now we're primarily focused on the storage and local application / container management. Adding some sort of iSCSI client support is on our radar, just not quite there yet. As mentioned, you can use the Democratic CSI driver in the meantime, if that is your preference.
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
422
But that's the kub system, right? Can I use it to connect to the storage on the TNC and then run a VM on it instead of a container?
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
422
You think middleware and GUIs write themselves because Debian supports something?! >.<



That's just for kubernetes, yes.

First, to answer your question:
If you honestly believe I think this way, go read my posts on this subject. I have never implied that; I have questioned why this basic functionality wasn't considered from the beginning.

I am not interested in Kub. My use case is very clearly laid out in my posts AND in the feature request I have linked in my sig. Have you read it?
 