TrueNAS SCALE 23.10.1 has been released!

eturgeon

Super Moderator
Moderator
iXsystems
Joined
Nov 29, 2021
Messages
60
We are pleased to release TrueNAS SCALE 23.10.1!

This maintenance release addresses community-reported bugs in SCALE 23.10 (Cobia) and improves stability.

Notable changes:
  • Reported issues involving cached Web UI artifacts are addressed in 23.10.1 (NAS-124602).
  • OpenZFS is updated to version 2.2.2 to fix a data integrity issue discovered in that project (NAS-125541). While this bug has been present in OpenZFS for many years, this issue has not been found to impact any TrueNAS systems to date. See this TrueNAS Community announcement for more details.
  • The ZFS block cloning feature is temporarily disabled in 23.10.1. This is being done out of an abundance of caution while the OpenZFS project conducts additional testing. While re-enabling this feature is anticipated in a future 23.10 release, SCALE nightly builds continue to have ZFS block cloning enabled for experimentation and testing.
  • Exporting Netdata reporting metrics to a third-party database (Graphite) is now supported (NAS-123668).
  • The Linux kernel is updated to version 6.1.63 (NAS-125309). A quick way to verify the updated ZFS and kernel versions after upgrading is sketched below this list.
  • Network interface hardware addresses now persist through upgrades, addressing a name change that some TrueNAS Enterprise system NICs experienced when upgrading from TrueNAS SCALE Bluefin to TrueNAS SCALE Cobia (NAS-124679).
  • The deprecated Use System Dataset option in System Settings > Advanced > Syslog is removed (WebUI PR #9026).
  • Sorting and filtering of Replace Disk search results is improved (NAS-124732).
  • An issue where immutable fields prevented configuring additional storage for applications is fixed (NAS-125196).
  • The 23.10.1 (Cobia) ISO installer now supports only a clean installation: the separate Upgrade Install and Fresh Install menu options are removed, and the 23.10.1 (and later) ISO performs only the Fresh Install behavior. Continue to use the TrueNAS SCALE update process to seamlessly upgrade from one SCALE major version to another.
See the Release Notes for more details.
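
As mentioned in the kernel note above, you can confirm the updated OpenZFS and kernel versions from the system Shell after upgrading. A quick sketch (exact version strings may vary by build):

Code:
zfs version   # expect zfs-2.2.2 / zfs-kmod-2.2.2
uname -r      # expect a 6.1.63-based kernel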

Changelog: https://www.truenas.com/docs/scale/23.10/gettingstarted/scalereleasenotes/#23101-changelog
Download: https://www.truenas.com/download-truenas-scale
Documentation: https://www.truenas.com/docs/scale/23.10/

Thanks for using TrueNAS SCALE! As always, we appreciate your feedback!
 

bcat

Explorer
Joined
Oct 20, 2022
Messages
84
The deprecated Use System Dataset option in System Settings > Advanced > Syslog is removed (WebUI PR #9026).
Just curious, what's the rationale for this change? (It doesn't cause me any problems... I was just wondering about the reasoning.)
 

Jorsher

Explorer
Joined
Jul 8, 2018
Messages
88
Still seems to display incorrectly. I cleared cache. Not a big deal at all, just curious why I'm still seeing GiB/s speeds :)

ABain

Bug Conductor
iXsystems
Joined
Aug 18, 2023
Messages
172
Still seems to display incorrectly. I cleared cache. Not a big deal at all, just curious why I'm still seeing GiB/s speeds :)
When you say cache, do you mean the UI cache? I'd be interested to know whether the dashboard is mixing in some historic data that might show this inflated value, and whether the reporting page in the UI is correct. Could you check the reporting page for the network?
 

Jorsher

Explorer
Joined
Jul 8, 2018
Messages
88
When you say cache, do you mean the UI cache? I'd be interested to know whether the dashboard is mixing in some historic data that might show this inflated value, and whether the reporting page in the UI is correct. Could you check the reporting page for the network?
Sorry that was vague. I cleared the browser cache. Reporting shows correctly, but I think it showed correctly before the update. Only the dashboard Network panel shows it incorrectly -- and only the values. The graph seems accurate.

As I said, not a huge deal but seems like it'd be an easy fix.
 

probain

Patron
Joined
Feb 25, 2023
Messages
211
Sorry that was vague. I cleared the browser cache. Reporting shows correctly, but I think it showed correctly before the update. Only the dashboard Network panel shows it incorrectly -- and only the values. The graph seems accurate.

As I said, not a huge deal but seems like it'd be an easy fix.
Are you seeing this on the physical network interfaces, or are the abnormally high numbers maybe on a bridge?

I'm seeing high numbers too, 20-30GiB/s. I'm guessing this might be due to VMs requesting network data via the bridge, with that data being served from ARC... but this is just a hunch.
 

bcat

Explorer
Joined
Oct 20, 2022
Messages
84
After updating my primary machine ("Ivy" in my signature) to 23.10.1, it seems that none of my Debian 12 ("bookworm") VMs boot anymore. Instead, they simply drop into the EFI shell on startup.

This may be a Debian issue rather than a TrueNAS issue. I recall having to hack up the EFI boot partition a bit to get Debian 11 ("bullseye") VMs to boot on TrueNAS SCALE in the past. But it's a little weird that I didn't need to do this with Debian 12 and 23.10.0.1. Not sure what might have changed with 23.10.1.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
Sorry that was vague. I cleared the browser cache. Reporting shows correctly, but I think it showed correctly before the update. Only the dashboard Network panel shows it incorrectly -- and only the values. The graph seems accurate.

As I said, not a huge deal but seems like it'd be an easy fix.
I'm not sure that it does. This may be a weird quirk with your particular network adapter misreporting something. The graph shows 400MB/s, but the theoretical maximum of a 2.5Gb/s link is only about 312MB/s (2.5Gb/s ÷ 8), and less than 300MB/s after protocol overhead.
 

bcat

Explorer
Joined
Oct 20, 2022
Messages
84
To work around the VM issue:

1. Run FS0:\EFI\debian\grubx64.efi in the EFI shell (via a SPICE display, setting up a temporary one if needed) to boot.
2. Log in at the Debian TTY.
3. Follow Debian's instructions to install GRUB to the "EFI removable media path" (rough commands sketched below).

Again, I am not sure this is a TrueNAS regression... perhaps something changed in a recent Debian update. It's just weird I didn't need that workaround with Debian 12 + TrueNAS SCALE 23.10.0.1, but it is needed again with Debian 12 + TrueNAS SCALE 23.10.1.
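
For step 3, the Debian-side commands look roughly like the sketch below. This assumes a standard amd64 Debian 12 EFI install with the grub-efi-amd64 package; the EFI shell path may differ on your VM:

Code:
# In the EFI shell, boot the installed system manually (path may vary):
FS0:\EFI\debian\grubx64.efi

# Then, as root inside Debian, install GRUB to the EFI removable media
# path (\EFI\BOOT\BOOTX64.EFI) so the firmware can boot it without a
# Debian-specific NVRAM entry:
dpkg-reconfigure grub-efi-amd64
# (select "Force extra installation to the EFI removable media path")
# Or equivalently, in one shot:
grub-install --force-extra-removable
update-grub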
 

probain

Patron
Joined
Feb 25, 2023
Messages
211
Huh... I was expecting to see a notice that the pools could be upgraded for 2.2.2, but this is not the case. Does disabling flags not prompt an upgrade of the pool?
 

probain

Patron
Joined
Feb 25, 2023
Messages
211
Huh... I was expecting to see a notice that the pools could be upgraded for 2.2.2, but this is not the case. Does disabling flags not prompt an upgrade of the pool?
Digging further into this. Doing a:
Code:
zpool get all hdd-pool | grep feature

results in:
Code:
hdd-pool  feature@async_destroy          enabled                        local
hdd-pool  feature@empty_bpobj            active                         local
hdd-pool  feature@lz4_compress           active                         local
hdd-pool  feature@multi_vdev_crash_dump  enabled                        local
hdd-pool  feature@spacemap_histogram     active                         local
hdd-pool  feature@enabled_txg            active                         local
hdd-pool  feature@hole_birth             active                         local
hdd-pool  feature@extensible_dataset     active                         local
hdd-pool  feature@embedded_data          active                         local
hdd-pool  feature@bookmarks              enabled                        local
hdd-pool  feature@filesystem_limits      enabled                        local
hdd-pool  feature@large_blocks           active                         local
hdd-pool  feature@large_dnode            enabled                        local
hdd-pool  feature@sha512                 enabled                        local
hdd-pool  feature@skein                  enabled                        local
hdd-pool  feature@edonr                  enabled                        local
hdd-pool  feature@userobj_accounting     active                         local
hdd-pool  feature@encryption             active                         local
hdd-pool  feature@project_quota          active                         local
hdd-pool  feature@device_removal         enabled                        local
hdd-pool  feature@obsolete_counts        enabled                        local
hdd-pool  feature@zpool_checkpoint       enabled                        local
hdd-pool  feature@spacemap_v2            active                         local
hdd-pool  feature@allocation_classes     enabled                        local
hdd-pool  feature@resilver_defer         enabled                        local
hdd-pool  feature@bookmark_v2            enabled                        local
hdd-pool  feature@redaction_bookmarks    enabled                        local
hdd-pool  feature@redacted_datasets      enabled                        local
hdd-pool  feature@bookmark_written       enabled                        local
hdd-pool  feature@log_spacemap           active                         local
hdd-pool  feature@livelist               enabled                        local
hdd-pool  feature@device_rebuild         enabled                        local
hdd-pool  feature@zstd_compress          enabled                        local
hdd-pool  feature@draid                  enabled                        local
hdd-pool  feature@zilsaxattr             active                         local
hdd-pool  feature@head_errlog            active                         local
hdd-pool  feature@blake3                 enabled                        local
hdd-pool  feature@block_cloning          enabled                        local
hdd-pool  feature@vdev_zaps_v2           active                         local


I'm noticing this line especially:
Code:
hdd-pool  feature@block_cloning          enabled                        local
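
You can also query just that one feature directly, e.g. (sketch, same pool as above):

Code:
zpool get feature@block_cloning hdd-pool

It shows "enabled" rather than "active", which in ZFS feature-flag terms means the pool supports block cloning but nothing on disk depends on it yet.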
 

bcat

Explorer
Joined
Oct 20, 2022
Messages
84
Huh... I was expecting to see a notice that the pools could be upgraded for 2.2.2, but this is not the case. Does disabling flags not prompt an upgrade of the pool?
OpenZFS introduced a kernel module parameter to disable block cloning regardless of the value of the pool flag. For 2.2.2, that parameter is set to 0 (false) by default, so block cloning will effectively always be disabled.

You can verify this yourself as follows:

Code:
$ cat /sys/module/zfs/parameters/zfs_bclone_enabled
0
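
For background, zfs_bclone_enabled is an ordinary ZFS kernel module parameter, so on a generic Linux system it could be set at module load time or at runtime. Purely as an illustration (TrueNAS manages its own ZFS module options, so this is not a recommendation for SCALE):

Code:
# /etc/modprobe.d/zfs.conf -- generic Linux illustration only
options zfs zfs_bclone_enabled=1

# Or at runtime (takes effect immediately, not persistent across reboots):
echo 1 > /sys/module/zfs/parameters/zfs_bclone_enabled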
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222

Joe-freenas

Dabbler
Joined
Jan 15, 2015
Messages
16
Jumped quickly onto the new release, on a test machine. Successful and as uneventful as usual (99+% of the time).
It's only a maintenance release, with no big changes or new code, so it should be safer than a new major release.

Quickly did another 3 live systems so far; no issues, and I don't anticipate any, given the changelog and the current "early" issues.

Thanks again, iXsystems, for a well-tested, quality release.
 

Joe-freenas

Dabbler
Joined
Jan 15, 2015
Messages
16
Jumped quickly onto the new release, on a test machine. Successful and as uneventful as usual (99+% of the time).
It's only a maintenance release, with no big changes or new code, so it should be safer than a new major release.

Quickly did another 3 live systems so far; no issues, and I don't anticipate any, given the changelog and the current "early" issues.

Thanks again, iXsystems, for a well-tested, quality release.

I forgot to mention: these systems were NOT running apps or VMs.
I will wait a couple of days before I upgrade the systems with more complex configurations...
 

Brandito

Explorer
Joined
May 6, 2023
Messages
72
Huh... I was expecting to see a notice that the pools could be upgraded for 2.2.2, but this is not the case. Does disabling flags not prompt an upgrade of the pool?
I was also expecting this; I'd like to get off of 2.2 altogether.
 

Brandito

Explorer
Joined
May 6, 2023
Messages
72
OpenZFS introduced a kernel module parameter to disable block cloning regardless of the value of the pool flag. For 2.2.2, that parameter is set to 0 (false) by default, so block cloning will effectively always be disabled.

You can verify this yourself as follows:

Code:
$ cat /sys/module/zfs/parameters/zfs_bclone_enabled
0
Does this mean I can remove this
Code:
echo 0 >> /sys/module/zfs/parameters/zfs_dmu_offset_next_sync
from my init script? This was the suggested band-aid until 23.10.1.
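
In case it's useful, you can check the current value of that parameter the same way (sketch):

Code:
cat /sys/module/zfs/parameters/zfs_dmu_offset_next_sync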
 