TrueNAS SCALE 23.10.2 has been released!

eturgeon

Super Moderator
Moderator
iXsystems
Joined
Nov 29, 2021
Messages
60
We are pleased to release TrueNAS SCALE 23.10.2!

This maintenance release addresses community-reported bugs in SCALE 23.10.1 and improves stability.

Notable changes:
  • Linux kernel is updated to version 6.1.74 (NAS-126897).
  • OpenZFS is updated to an early pre-release of version 2.2.3. OpenZFS feature flags are unchanged.
  • Network statistics on the dashboard and reporting screens now use consistent units (NAS-125453).
  • Failed cleanup after attempting to open a ZFS snapshot directory is prevented (NAS-126808).
  • Accidental discard of NFSv4 ACLs is prevented (NAS-127021).
  • A bug with expanding VDEVs when replacing drives with larger drives is fixed (NAS-126809).
  • Disk temperature reporting is fixed (NAS-127100).
  • An NFS group permissions bug is fixed (NAS-126067).
  • RESTful API pagination parameters are fixed (NAS-126080).
  • Debug files have improved privacy (NAS-126925).
  • Third-party apps catalog validation no longer exhausts space in /var/run (NAS-127213).
See the Release Notes for more details.

Changelog: https://www.truenas.com/docs/scale/23.10/gettingstarted/scalereleasenotes/#23102-changelog
Download: https://www.truenas.com/download-truenas-scale
Documentation: https://www.truenas.com/docs/scale/23.10/

Thanks for using TrueNAS SCALE! As always, we appreciate your feedback!
 

cmykpro

Cadet
Joined
Feb 22, 2024
Messages
6
Does this happen to add support for Intel Arc A770? Or do I need to still try Dragonfish?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Does this happen to add support for Intel Arc A770? Or do I need to still try Dragonfish?
Hey @cmykpro

Some users have been able to enable experimental support for the ARC cards in the existing 6.1 kernel, but native support requires 6.2 or newer, which would be in Dragonfish.
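For anyone experimenting on the stock 6.1 kernel, a quick sketch of how to see whether the card has been claimed by a driver. This is an assumption-laden example: 56a0 is the PCI device ID commonly reported for the A770, so confirm it against your own lspci output.

```shell
# Hypothetical diagnostic on a SCALE host -- adjust the device ID to your card.
ARC_ID="56a0"  # PCI device ID commonly reported for the Arc A770 (verify via lspci)
echo "Checking for Intel GPU 8086:${ARC_ID}"
lspci -nnk -d "8086:${ARC_ID}" 2>/dev/null || true
# Experimental use on the 6.1 kernel generally relies on the i915 force_probe
# flag on the kernel command line; native support arrives with 6.2+ (Dragonfish).
grep -o 'i915\.force_probe=[^ ]*' /proc/cmdline 2>/dev/null \
  || echo "i915.force_probe is not set on the kernel command line"
```

If lspci shows the device but no "Kernel driver in use" line, the i915 driver has not bound it.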
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Does this happen to add support for Intel Arc A770? Or do I need to still try Dragonfish?
My understanding is that it may require the Dragonfish kernel. Feel free to try both.

We appreciate feedback from those who have updated, smoothly or not.
We want the feedback on success rates and any bugs that we have overlooked.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Yet another uneventful upgrade process, everything came back up without problems. Thanks guys.
Same. All TrueCharts apps up and running.
 

cmykpro

Cadet
Joined
Feb 22, 2024
Messages
6
Hey @cmykpro

Some users have been able to enable experimental support for the ARC cards in the existing 6.1 kernel, but native support requires 6.2 or newer, which would be in Dragonfish.
Dragonfish installed fine with my stock setup, RTX4060ti. Everything came back up and everything worked, including Plex.

Once I removed the RTX4060ti and replaced it with an Intel ARC A770 I can no longer get plex to run.

Apparently it's waiting for pods to be scaled, but it's always stuck at 40%. Not sure how or why a video card change would cause this.

Help plz and ty...
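For reference, since SCALE runs apps on k3s, the pod events usually say why scaling stalls (for example, a pod still requesting a GPU resource that no longer exists). A sketch of the usual checks; "plex" is a guess at the release name, and the ix-<name> namespace pattern is the one SCALE normally uses:

```shell
# Hypothetical debugging commands for a SCALE host; run as root.
APP="plex"        # the release name shown on the Apps screen (assumption)
NS="ix-${APP}"    # SCALE puts each app in its own ix-<name> namespace
echo "Inspecting app namespace ${NS}"
k3s kubectl get pods -n "${NS}" 2>/dev/null || true
# Recent events often name the blocker directly
k3s kubectl get events -n "${NS}" --sort-by=.lastTimestamp 2>/dev/null | tail -n 20
```

A pod stuck in Pending with an event about unsatisfiable resource requests would point at the GPU swap.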
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Updated from 23.10.1; it booted up fine and SMB is running fine, but all apps are stuck in "deploying," including iX's Storj app and a bunch of TrueCharts apps. One custom Docker app (Urbackup) is running.

What may or may not be related is that I'm getting a periodic email alert saying:
Code:
Failed to check for alert Smartd: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/alert.py", line 808, in __run_source
    alerts = (await alert_source.check()) or []
  File "/usr/lib/python3/dist-packages/middlewared/alert/base.py", line 335, in check
    return await self.middleware.run_in_thread(self.check_sync)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1254, in run_in_thread
    return await self.run_in_executor(self.thread_pool_executor, method, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1251, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/alert/source/smartd.py", line 22, in check_sync
    if not self.middleware.call_sync("service.started", "smartd"):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1421, in call_sync
    return self.run_coroutine(methodobj(*prepared_call.args))
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1461, in run_coroutine
    return fut.result()
  File "/usr/lib/python3.11/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
  File "/usr/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 177, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 44, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/service.py", line 201, in started
    state = await service_object.get_state()
  File "/usr/lib/python3/dist-packages/middlewared/plugins/service_/services/base.py", line 38, in get_state
    return await self.middleware.run_in_thread(self._get_state_sync)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1254, in run_in_thread
    return await self.run_in_executor(self.thread_pool_executor, method, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1251, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/service_/services/base.py", line 41, in _get_state_sync
    unit = self._get_systemd_unit()
  File "/usr/lib/python3/dist-packages/middlewared/plugins/service_/services/base.py", line 77, in _get_systemd_unit
    unit.load()
  File "/usr/lib/python3/dist-packages/pystemd/base.py", line 90, in load
    unit_xml = self.get_introspect_xml()
  File "/usr/lib/python3/dist-packages/pystemd/base.py", line 75, in get_introspect_xml
    bus.call_method(
  File "pystemd/dbuslib.pyx", line 446, in pystemd.dbuslib.DBus.call_method
pystemd.dbusexc.DBusTimeoutError: [err -110]: b'Connection timed out'
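What the traceback shows is the middleware timing out on D-Bus while asking systemd whether smartd is running, not smartd itself failing. A couple of manual, read-only checks can separate the two (sketch; midclt is the middleware CLI, and service.started is the same method the alert check was calling):

```shell
# Hypothetical checks on the affected system; harmless read-only queries.
SERVICE="smartd"
# Ask systemd directly -- if this also hangs, D-Bus/systemd is the bottleneck
systemctl is-active "${SERVICE}" 2>/dev/null \
  || echo "${SERVICE} is not active (or systemd did not answer)"
# Ask the middleware the same question the failed alert check was asking
midclt call service.started "${SERVICE}" 2>/dev/null \
  || echo "middleware query failed"
```

If systemctl answers instantly but midclt hangs, the problem is on the middleware side rather than with smartd or systemd.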
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
At 2:30 uptime, one more app is up. Seems to be coming along, but good heavens it's slow. The apps pool is a mirrored pair of SATA SSDs, so it should be reasonably performant.

And something is really slowing down the middleware. I'm not getting the DBus.Timeout.Error any more, but I'm getting lots of alerts of errors connecting to the middleware. Here's one example, though I'm seeing what seems to be the same error affecting multiple tasks:

Failed to check for alert ZpoolCapacity: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/lib/python3.11/concurrent/futures/process.py", line 256, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 112, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 34, in _call
    with Client(f'ws+unix://{MIDDLEWARE_RUN_DIR}/middlewared-internal.sock', py_exceptions=True) as c:
  File "/usr/lib/python3/dist-packages/middlewared/client/client.py", line 292, in __init__
    raise ClientException('Failed connection handshake')
middlewared.client.client.ClientException: Failed connection handshake
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/alert.py", line 808, in __run_source
    alerts = (await alert_source.check()) or []
  File "/usr/lib/python3/dist-packages/middlewared/alert/source/zpool_capacity.py", line 48, in check
    for pool in await self.middleware.call("zfs.pool.query"):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1399, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1350, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1356, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1267, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1251, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.client.client.ClientException: Failed connection handshake
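The "Failed connection handshake" means a worker process could not even connect back to the middleware's internal UNIX socket, which fits an overloaded or wedged middlewared. A quick sanity check, with the caveat that the socket path below is an assumption inferred from the traceback's MIDDLEWARE_RUN_DIR (typically /run/middleware):

```shell
# Hypothetical read-only checks; the socket path is an assumption.
SOCK="/run/middleware/middlewared-internal.sock"
systemctl is-active middlewared 2>/dev/null \
  || echo "middlewared is not active (or systemd did not answer)"
ls -l "${SOCK}" 2>/dev/null || echo "internal socket not found at ${SOCK}"
# core.ping should answer almost instantly on a healthy middleware
midclt call core.ping 2>/dev/null || echo "middleware did not respond to core.ping"
```

A slow or failing core.ping while the service shows active would match the symptoms described here.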
 

dan3408

Dabbler
Joined
Jan 30, 2014
Messages
15
Install went OK except for an issue I asked about in a previous version (forum link): my reporting data was reset, and there is no data from before the reboot with the new version. Is there a setting I should check, or is this a known issue?

Thanks
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
my reporting data was reset
I saw the same thing. Didn't bother me much as I rarely use the reporting pages. But the apps thing is a problem. After over four hours of waiting for them to come up (one would be up for a bit, then go down, then another come up, etc.), I gave up and rebooted into 23.10.1, where they came up fine.

Maybe 23.10.3 will be usable.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
At 2:30 uptime, one more app is up. Seems to be coming along, but good heavens it's slow. The apps pool is a mirrored pair of SATA SSDs, so it should be reasonably performant.

And something is really slowing down the middleware. I'm not getting the DBus.Timeout.Error any more, but I'm getting lots of alerts of errors connecting to the middleware. [traceback quoted above, ending in: middlewared.client.client.ClientException: Failed connection handshake]

I suggest reporting a bug and documenting the NAS ticket number here. If it's OK with you, please let the ticket stay public.

After that, any other users who have something similar can "like" here or add notes.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I suggest reporting a bug and documenting the NAS ticket number here. If it's OK with you, please let the ticket stay public.

After that, any other users who have something similar can "like" here or add notes.
It may be a few days before "time at home" and "can tolerate apps being down for several hours" will coincide, and I have no confidence at all that the system will be able to generate a debug file--but if it can, I'll try to get a ticket filed.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
There are now over 10,000 systems running 23.10.2 with a few reports of Apps not coming up.
We'll try to chase this issue in the coming week. If you have the issue, please report it and document the number and type of Apps. Other assistance with troubleshooting is greatly appreciated.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
For the "Apps not coming up" issues, we'd recommend each person lodge their own ticket since we see diverse reasons for these types of issues.

High level data that is appreciated:
  • How many Apps come up vs. stay down?
  • Are the Apps official Apps or third-party Apps?
  • What version of software did you migrate from?
 

IndieCoopz

Explorer
Joined
Nov 4, 2022
Messages
50
I have no temperature data for my disks anymore^^

Bildschirmfoto-2024-02-29-um-10-21-55.png
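Since 23.10.2 changed disk temperature reporting (NAS-127100), a useful data point for a ticket is whether the drives still answer SMART temperature queries directly and whether the middleware sees them. A sketch; sda is a placeholder device name, and disk.temperatures is assumed to be the middleware method backing the reporting graph:

```shell
# Hypothetical read-only checks; replace sda with one of your disks.
DISK="sda"
smartctl -A "/dev/${DISK}" 2>/dev/null | grep -i -m1 'temperature' \
  || echo "no SMART temperature attribute read for ${DISK}"
# The middleware's view of the same data
midclt call disk.temperatures "[\"${DISK}\"]" 2>/dev/null \
  || echo "middleware temperature query failed"
```

If smartctl shows a temperature but the midclt call returns nothing, that points at the middleware rather than the drives.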
 