
TrueNAS SCALE 22.02.1 Overview

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
1,683
There is a full blog here: https://www.truenas.com/blog/truenas-scale-22-02-1/

Release Notes: https://www.truenas.com/docs/releasenotes/scale/22.02.1/
Download: https://www.truenas.com/download-truenas-scale/


TrueNAS SCALE gets its First Major Update!​

TrueNAS SCALE 22.02.0 (“Angelfish”) was released on “Twosday”, 2/22/22 and now gets its first major update after being deployed on over 16,000 active systems. TrueNAS SCALE 22.02.1 includes over 270 bug fixes and improvements and is a major step on the path to quality and reliability.

The growth of TrueNAS SCALE has been extraordinary, with the system count more than doubling each quarter since the start of the BETA process. We are excited to see widespread adoption by experienced Linux admins and look forward to welcoming even more admins and users.

The amount of storage under management by TrueNAS SCALE is also growing rapidly and is on track to pass an exabyte this year. The sheer volume of data stored demands extremely high software quality and excellent data management. Each software update takes this reliability another step forward, as described in the quality lifecycle.



TrueNAS SCALE is still TrueNAS…with differences​

TrueNAS SCALE is the culmination of an almost three-year collaborative effort between the iXsystems engineering team and the TrueNAS Community. The journey started with iXsystems' contributions to OpenZFS 2.0, which established both Linux and FreeBSD as primary operating systems for the project. This allowed the TrueNAS middleware to be ported between both OSes, with the goal of eventually supporting existing TrueNAS features atop a Linux base to unlock several Linux-specific capabilities.


The major additions to TrueNAS SCALE are:

  • Kubernetes Apps enable Linux/Docker Containers
    • Vast library of dockerized applications and Apps Catalogs
    • Supports Helm charts and now Docker Compose apps
  • TrueNAS CLI provides a robust interface to the middleware via the REST API
  • KVM provides robust and feature-rich hypervisor with good Windows guest support
  • Updated WebUI provides a greatly improved NAS management experience
  • Scale-out ZFS enabled via Glusterfs
    • Allows scale-out capacity and bandwidth via native client or SMB
    • Supports mirroring and dispersed (erasure code) volumes
  • Scale-out SMB clustering
    • Leverages Glusterfs and provides increased capacity/bandwidth
  • High Availability also applies to Apps and VMs
  • Scale-out S3 is supported via the MinIO App
    • Migration via CloudSync or MinIO replication.
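The CLI and middleware items above rest on the same REST API. As a hedged sketch (the endpoint path and Bearer-style API-key header follow the published v2.0 REST API, but the host and key below are placeholders, not values from this post), a system-info query could be built like this:

```python
# Sketch: query the TrueNAS middleware REST API (v2.0) for system info.
# HOST and API_KEY are placeholders -- substitute your own values.
import json
import urllib.request

HOST = "truenas.local"   # hypothetical hostname
API_KEY = "1-xxxx"       # create one under Settings -> API Keys

def build_request(host: str, api_key: str, endpoint: str) -> urllib.request.Request:
    """Build an authenticated GET request against the v2.0 REST API."""
    url = f"http://{host}/api/v2.0/{endpoint}"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {api_key}"})

req = build_request(HOST, API_KEY, "system/info")
# Uncomment on a real system:
# with urllib.request.urlopen(req) as resp:
#     print(json.dumps(json.load(resp), indent=2))
```

The same endpoints back the WebUI and the TrueNAS CLI, so anything scriptable here is scriptable there.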

Migrating from TrueNAS CORE is possible​

TrueNAS CORE 12.0-U8 is a very mature software release with all the benefits of millions of machine-years of testing and bug fixes since it started life as FreeNAS. Migrating from CORE to SCALE is possible, but only recommended for users who will see significant benefits from the unique TrueNAS SCALE features.
The migration path from TrueNAS CORE to SCALE is now better tested and is improved with this first update. Below is a summary of the potholes to avoid along the way:

  • Jails & Plugins cannot be migrated to Kubernetes Apps.
    • Each application must be recreated or reinstalled on SCALE.
    • Plugins and datasets can be migrated to an App running the same application software
  • Netcli functionality is replaced by the TrueNAS CLI. (see docs – more to come)
  • bhyve is removed – VMs auto-migrate to KVM using the same zvol
  • AFP Shares are retired
    • Migrate to an SMB share with AFP compatibility enabled.
  • The wheel group exists in CORE but not in SCALE
    • This impacts permissions settings and can prevent shares from functioning. Change any permissions set to the wheel group before migrating.
  • Multipath is not supported
    • Turn off multipathing within CORE/Enterprise before migrating.
  • GELI encryption is not supported and there is no migration
    • File level backup/restore is required.
    • Unlock the pool then use ZFS/rsync replication to replicate the data to a new pool.
  • iSCSI ALUA & Fibre Channel are not supported until TrueNAS SCALE Bluefin
  • Asigra plugin is currently not supported (support coming in a future release)
  • TrueNAS (Enterprise) High Availability is demonstrable, but not yet mature. Users are advised to wait until Update 3 or 4.
Linux has also been missing a driver for SATA backplanes, which has delayed the delivery of enclosure management for Minis and R-series systems. This should be resolved soon.
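The wheel-group pothole above is easy to trip over, since affected permissions can silently break shares after migration. A small pre-migration scan can flag affected files; this sketch is generic and unofficial (the pool path and the use of "wheel" are illustrative, not part of any TrueNAS tooling):

```python
# Sketch: find files whose group assignment would break after a
# CORE -> SCALE migration. The scan logic is generic; "wheel" is the
# group called out in the migration checklist above.
import grp
import os

def files_with_group(root: str, group_name: str):
    """Yield paths under root whose group matches group_name."""
    try:
        gid = grp.getgrnam(group_name).gr_gid
    except KeyError:
        return  # group does not exist on this host
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.lstat(path).st_gid == gid:
                    yield path
            except OSError:
                pass  # vanished or unreadable; skip it

# Example: list wheel-group files under a (hypothetical) pool mountpoint:
# for p in files_with_group("/mnt/tank", "wheel"):
#     print(p)
```

Anything the scan reports should be re-grouped before migrating, per the checklist item above.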

The changes in TrueNAS SCALE 22.02.1​

The feature set for TrueNAS SCALE 22.02.1 is described in the TrueNAS SCALE datasheet, and the TrueNAS SCALE documentation provides most of what you need to know to build and run your first systems. If you are missing some information or need advice, the TrueNAS Community forums provide a great source of information and community.

The details of TrueNAS SCALE 22.02.1 are in the release notes. There are over 270 new bug fixes and improvements that will provide a significant quality jump from the RELEASE version. Notable inclusions are:

  • The much loved Netdata App
  • Increased kernel NFS robustness and performance
  • Self-encrypting Drive support
  • Improved pool management UI
  • Better UPS support
  • Improved Gluster and Clustered SMB APIs
 
Joined
Jun 2, 2019
Messages
516
  • Better UPS support
I'll be the judge of that.

UPS Slave Reporting has not worked in any of the nightlies.


UPDATE: No joy!


Screen Shot 2022-05-04 at 8.20.05 PM.png
 

mikegleasonjr

Dabbler
Joined
Sep 14, 2020
Messages
10
My CPU usage on TrueNAS-SCALE-22.02.1 versus TrueNAS-SCALE-22.02.0.1 is all over the place; `middlewared` seems to be the culprit. Also, the task manager consistently blinks a red dot, way more than I was seeing on the previous version...

EDIT: activated the previous version and things went back to normal
 

Attachments

  • Screen Shot 2022-05-05 at 10.31.24 AM.png
  • Screen Shot 2022-05-05 at 10.32.00 AM.png
  • 2022-05-05_14-38.png

larod241

Dabbler
Joined
May 2, 2022
Messages
19
Same here. High CPU usage and a lot of "catalog.items" entries in the job list...
Also for me, the task manager consistently blinks a red dot...
 

Attachments

  • Capture d’écran 2022-05-05 à 14.51.38.png

soleous

Dabbler
Joined
Apr 14, 2021
Messages
21
Just some slight confusion that might need cleaning up regarding Docker-Compose:

From the blog/post:
  • Supports Helm charts and now Docker Compose apps

From the release notes:
  • [NAS-115010] - Disable the docker-compose binary

I know there are alternatives from third parties and I can create a VM to deploy Docker and Docker-Compose, but as far as I'm aware there is no official native support for Docker-Compose or Helm Charts?
 

dotsonic

Cadet
Joined
Apr 23, 2022
Messages
4
When trying to view an app/docker log via the GUI (ellipses -> Logs), nothing is displayed in the Pod Logs window. Clicking Download Logs produces the correct output (.log file) with the correct information.
 

pyrodex

Dabbler
Joined
Jul 2, 2014
Messages
10
After the upgrade, my SNMP-monitored UPS isn't working. Nothing changed other than the upgrade; the output below gives some insight.

root@tardis[~]# /lib/nut/snmp-ups -a APCUPS
Network UPS Tools - Generic SNMP UPS driver 0.97 (2.7.4)
No matching MIB found for sysOID '.1.3.6.1.4.1.318.1.3.27'!
Please report it to NUT developers, with an 'upsc' output for your device.
Going back to the classic MIB detection method.
Detected Smart-UPS 1500 on host 192.168.14.98 (mib: apcc 1.2)
[APCUPS] Warning: excessive poll failures, limiting error reporting (OID = .1.3.6.1.4.1.318.1.1.1.9.2.3.1.5.1.1.3)
[APCUPS] Warning: excessive poll failures, limiting error reporting (OID = .1.3.6.1.4.1.318.1.1.1.9.3.3.1.6.1.1.1)
root@tardis[~]# cat /etc/nut/ups.conf
[APCUPS]
driver = snmp-ups
privProtocol=AES
port = 192.168.14.98
desc = ""
pollinterval = 15
root@tardis[~]#


I was able to fix this temporarily...

Looks like the issue may be permissions on the files in /etc/nut when you've had it previously configured... After changing everything under /etc/nut/* to the nut group, it works. As I said, I didn't do anything except upgrade, and this was working prior to that, since I talk to the NUT daemon and pull data via Prometheus.

root@tardis[~]# systemctl status nut-server
● nut-server.service - Network UPS Tools - power devices information server
Loaded: loaded (/lib/systemd/system/nut-server.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2022-05-05 17:54:32 EDT; 15s ago
Process: 3108985 ExecStart=/sbin/upsd (code=exited, status=1/FAILURE)

May 05 17:54:32 tardis upsd[3108985]: listening on ::0 port 3493
May 05 17:54:32 tardis upsd[3108985]: not listening on 0.0.0.0 port 3493
May 05 17:54:32 tardis upsd[3108985]: listening on ::0 port 3493
May 05 17:54:32 tardis upsd[3108985]: not listening on 0.0.0.0 port 3493
May 05 17:54:32 tardis upsd[3108985]: Can't open /etc/nut/ups.conf: Can't open /etc/nut/ups.conf: Permission denied
May 05 17:54:32 tardis upsd[3108985]: Network UPS Tools upsd 2.7.4
May 05 17:54:32 tardis upsd[3108985]: Can't open /etc/nut/ups.conf: Can't open /etc/nut/ups.conf: Permission denied
May 05 17:54:32 tardis systemd[1]: nut-server.service: Control process exited, code=exited, status=1/FAILURE
May 05 17:54:32 tardis systemd[1]: nut-server.service: Failed with result 'exit-code'.
May 05 17:54:32 tardis systemd[1]: Failed to start Network UPS Tools - power devices information server.
root@tardis[~]#

root@tardis[/etc]# ls -lad nut
drwxr-xr-x 2 root root 8 May 5 17:54 nut
root@tardis[/etc]# cd nut
root@tardis[/etc/nut]# ls -la
total 36
drwxr-xr-x 2 root root 8 May 5 17:54 .
drwxr-xr-x 138 root root 250 May 5 17:54 ..
-r--r----- 1 root ladvd 15 May 5 17:54 nut.conf
-r--r----- 1 root ladvd 98 May 5 17:54 ups.conf
-r--r----- 1 root ladvd 37 May 5 17:54 upsd.conf
-r--r----- 1 root ladvd 127 May 5 17:54 upsd.users
-r--r----- 1 root ladvd 428 May 5 17:54 upsmon.conf
-r--r----- 1 root ladvd 520 May 5 17:54 upssched.conf
root@tardis[/etc/nut]# chgrp nut *
root@tardis[/etc/nut]# ls -la
total 36
drwxr-xr-x 2 root root 8 May 5 17:54 .
drwxr-xr-x 138 root root 250 May 5 17:54 ..
-r--r----- 1 root nut 15 May 5 17:54 nut.conf
-r--r----- 1 root nut 98 May 5 17:54 ups.conf
-r--r----- 1 root nut 37 May 5 17:54 upsd.conf
-r--r----- 1 root nut 127 May 5 17:54 upsd.users
-r--r----- 1 root nut 428 May 5 17:54 upsmon.conf
-r--r----- 1 root nut 520 May 5 17:54 upssched.conf
root@tardis[/etc/nut]#

root@tardis[/etc/nut]# systemctl restart nut-server
root@tardis[/etc/nut]# systemctl status nut-server
● nut-server.service - Network UPS Tools - power devices information server
Loaded: loaded (/lib/systemd/system/nut-server.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2022-05-05 17:55:44 EDT; 7s ago
Process: 3126474 ExecStart=/sbin/upsd (code=exited, status=0/SUCCESS)
Main PID: 3126475 (upsd)
Tasks: 1 (limit: 153801)
Memory: 748.0K
CGroup: /system.slice/nut-server.service
└─3126475 /lib/nut/upsd

May 05 17:55:44 tardis upsd[3126474]: fopen /run/nut/upsd.pid: No such file or directory
May 05 17:55:44 tardis upsd[3126474]: listening on ::0 port 3493
May 05 17:55:44 tardis upsd[3126474]: not listening on 0.0.0.0 port 3493
May 05 17:55:44 tardis upsd[3126474]: listening on ::0 port 3493
May 05 17:55:44 tardis upsd[3126474]: not listening on 0.0.0.0 port 3493
May 05 17:55:44 tardis upsd[3126474]: Connected to UPS [APCUPS]: snmp-ups-APCUPS
May 05 17:55:44 tardis upsd[3126474]: Connected to UPS [APCUPS]: snmp-ups-APCUPS
May 05 17:55:44 tardis upsd[3126475]: Startup successful
May 05 17:55:44 tardis systemd[1]: Started Network UPS Tools - power devices information server.
May 05 17:55:47 tardis upsd[3126475]: User apc@::1 logged into UPS [APCUPS]
root@tardis[/etc/nut]# cd nut
root@tardis[/etc/nut]# upsc APCUPS@localhost
Init SSL without certificate database
battery.charge: 100.00
battery.date: 08/30/2020
battery.runtime: 3257.00
battery.runtime.low: 120
battery.voltage: 26.00
device.mfr: APC
device.model: Smart-UPS 1500
device.serial: 3S2021X12001
device.type: ups
driver.name: snmp-ups
driver.parameter.pollinterval: 15
driver.parameter.port: 192.168.14.98
driver.parameter.privProtocol: AES
driver.parameter.synchronous: no
driver.version: 2.7.4
driver.version.data: apcc MIB 1.2
driver.version.internal: 0.97
input.frequency: 60.00
input.sensitivity: high
input.transfer.high: 127
input.transfer.low: 106
input.transfer.reason: selfTest
input.voltage: 123.70
input.voltage.maximum: 123.70
input.voltage.minimum: 123.50
output.current: 2.00
output.frequency: 60.00
output.voltage: 123.70
output.voltage.nominal: 120
ups.firmware: UPS 04.1 (ID1015)
ups.id: TARDIS-UPS
ups.load: 23.60
ups.mfr: APC
ups.mfr.date: 05/20/2020
ups.model: Smart-UPS 1500
ups.serial: 3S2021X12001
ups.status: OL
ups.temperature: 31.00
ups.test.date: 04/29/2022
ups.test.result: Ok
root@tardis[/etc/nut]#
 

Sveken

Cadet
Joined
May 3, 2022
Messages
7
There are several reports on the forum, including from me, that this update seems to break containers.
Reverting to the previous build fixes the issue.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
1,683
Just some slight confusion that might need cleaning up regarding Docker-Compose:

From the blog/post:
  • Supports Helm charts and now Docker Compose apps

From the release notes:
  • [NAS-115010] - Disable the docker-compose binary
I know there are alternatives from third parties and I can create a VM to deploy Docker and Docker-Compose, but as far as I'm aware there is no official native support for Docker-Compose or Helm Charts?
Agreed, we are referring to the TrueCharts Docker-Compose app. This is a more reliable way of supporting Docker Compose apps and doesn't require the overhead of a Linux VM.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
1,683
I'll be the judge of that.

UPS Slave Reporting has not worked in any of the nightlies.


UPDATE: No joy!


View attachment 55215

It does look like we have resolved one issue and created another.
If you have reported a bug, please post the bug-id in response.
We have been missing a UPS in our automated tests and will fix this.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
1,683
There are several reports on the forum, including from me, that this update seems to break containers.
Reverting to the previous build fixes the issue.

Thanks. Can you describe the symptoms? Is it the previously running containers or the setup of new Apps?
It's always useful to report a bug or find the bug-id that is related - then we can all track resolution.
 

truecharts

Guru
Joined
Aug 19, 2021
Messages
611
"Supports Helm charts and now Docker Compose apps"

@morganL NOT AGAIN

Please try to keep those marketing people in line. You FINALLY got people to understand docker-compose wasn't officially supported, and now you've thrown this out while it's still not officially supported (besides the fact that we've built it as a single(!) app).
 

truecharts

Guru
Joined
Aug 19, 2021
Messages
611
Having several reports on the forum, including myself that this update seems to break containers,
Reverting to the last build fixes the issue

We've done quite a lot of testing by now, and can confirm that this update, in and of itself, does not break Apps.
There are some cases, however, where it might help to change the IP that apps are bound to under "advanced settings" in the apps overview, wait for it to process, and then change it back.
 

soleous

Dabbler
Joined
Apr 14, 2021
Messages
21
Agreed, we are referring to the TrueCharts Docker-Compose app. THis is a more reliable way of supporting Docker Compose apps and doesn't require the overhead of a Linux VM.

Talking of Docker: I know RKE has moved to support Docker using the external dockershim from Mirantis; I'm not sure what K3s is doing, but I believe they are following upstream Kubernetes.

What's SCALE's plan for the runtime? Move to containerd and possibly free up Docker to enable native docker/docker-compose support?
 
Joined
Jun 2, 2019
Messages
516
It does look like we have resolved one issue and created another.
If you have reported a bug, please post the bug-id in response.
We have been missing a UPS in our automated tests and will fix this.
I did, but the assigned developer and I could never manage to find a time for him to remote into my system. It only affects UPS Slave reporting, not actual UPS Slave monitoring, thus not mission critical in my situation.

 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
1,683
"Supports Helm charts and now Docker Compose apps"

@morganL NOT AGAIN

Please try to keep those marketing people in line. You FINALLY got people to understand docker-compose wasn't officially supported, and now you've thrown this out while it's still not officially supported (besides the fact that we've built it as a single(!) app).
Agreed... we should have included the hyperlink to the Docker Compose forum post. I will ask the marketing team to add it.

 

alexeym

Cadet
Joined
May 6, 2022
Messages
1
After the upgrade I found an issue: it looks like k3s is now trying to use the system-configured proxy for local calls, which was not the case before the upgrade. I had to disable the proxy for now to make applications work until a solution is available. Here is some info from my alerts and squid log:
1. TNAS ALERT: CRITICAL
Failed to start kubernetes cluster for Applications: [EFAULT] Unable to configure node: 403, message='Forbidden', url=URL('http://<<PROXY>>:3128')
2022-05-06 11:27:31 (America/New_York)
2. SQUID LOG: TCP_DENIED/403 127.0.0.1:6443
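A hedged illustration of the workaround direction: exempting loopback addresses from the proxy (the common NO_PROXY convention) keeps local API-server calls like the 127.0.0.1:6443 one in the squid log off the proxy entirely. The matcher below is a simplified sketch of that convention, not the exact logic k3s or TrueNAS uses:

```python
# Sketch: minimal no_proxy-style matching, illustrating why exempting
# 127.0.0.1 keeps local Kubernetes API calls off the system proxy.
# Simplified illustration only; real implementations also handle ports,
# CIDR ranges, and the "*" wildcard.

def bypasses_proxy(host: str, no_proxy: str) -> bool:
    """Return True if host matches any entry in a comma-separated no_proxy list."""
    for entry in no_proxy.split(","):
        entry = entry.strip()
        if not entry:
            continue
        # Exact match, or suffix match for domain entries like ".cluster.local"
        if host == entry or host.endswith("." + entry.lstrip(".")):
            return True
    return False
```

With something like NO_PROXY="127.0.0.1,localhost,.cluster.local" in the environment, loopback and in-cluster names bypass the proxy while external traffic still goes through it.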
 

truecharts

Guru
Joined
Aug 19, 2021
Messages
611
Agreed... we should have included the hyperlink to the Docker Compose forum post. I will ask the marketing team to add it.


Awesome (and maybe apps -> app), but a great solution to prevent any confusion :)
 

amayer

Cadet
Joined
May 10, 2022
Messages
1
I have 768 GB RAM, a 2x84-disk JBOD, and 8 NVMe SSDs. I successfully set up a RAID group in 22.02.0.1; now, upgraded to 22.02.1, I get the error below. I tried RAIDZ1 through RAIDZ3 with 1-17 vdevs, and none of them helps:
Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 114, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/disks.py", line 30, in get_disks
    for link in (dev.properties['DEVLINKS'] or '').split():
  File "/usr/lib/python3/dist-packages/pyudev/device/_device.py", line 1067, in __getitem__
    raise KeyError(prop)
KeyError: 'DEVLINKS'
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 175, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1257, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1261, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/disk_/availability.py", line 21, in get_unused
    reserved = await self.middleware.call('disk.get_reserved')
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1308, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1257, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/disk_/availability.py", line 39, in get_reserved
    return await self.middleware.call('boot.get_disks') + await self.middleware.call('pool.get_disks')
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1308, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1257, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1261, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1129, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/boot.py", line 122, in get_disks
    return await self.middleware.call('zfs.pool.get_disks', BOOT_POOL_NAME)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1308, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1265, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1271, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1186, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1169, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
KeyError: 'DEVLINKS'
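For readers puzzling over the traceback: it bottoms out in pyudev raising KeyError because a udev device lacked the DEVLINKS property. A plain dict standing in for pyudev's device.properties mapping shows the failure mode and the defensive alternative (this only illustrates the error; the actual fix belongs in the middleware):

```python
# Sketch: the traceback above ends in KeyError('DEVLINKS') because a
# udev device had no DEVLINKS property. A plain dict stands in for
# pyudev's device.properties mapping here.

device_properties = {"DEVNAME": "/dev/sda"}  # hypothetical device, no DEVLINKS

# Indexing a missing property raises, which is what aborts the middleware call:
try:
    links = device_properties["DEVLINKS"].split()
except KeyError:
    links = []

# A defensive lookup returns a default instead of raising:
links = device_properties.get("DEVLINKS", "").split()
assert links == []
```

In other words, any disk that udev reports without symlinks would trip this code path, which is why it surfaces on some hardware configurations and not others.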
 