TrueNAS 13.0-U6 is Now Available

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I've been trying the reproducer script on a machine at work running TrueNAS 13. Nothing so far, but it's slow as molasses due to an ongoing replication (stupidly IOPS bound, I guess, since it's a single RAIDZ vdev).

Note that on FreeBSD/Core, GNU-style options for cp are unavailable, so the "--reflink=never" bit has to go. On systems prior to OpenZFS 2.2, the opening if block also has to be commented out, since the zfs_bclone_enabled module parameter does not exist there. Both changes are already applied in the version below.

TrueNAS Core-friendly version of the script below (adapted from Tony Hutter's):

Code:
#!/usr/bin/env bash
#
# Run this script multiple times in parallel inside your pool's mount
# to reproduce https://github.com/openzfs/zfs/issues/15526.  Like:
#
# ./reproducer.sh & ./reproducer.sh & ./reproducer.sh & ./reproducer.sh & wait
#

# Linux-only check from the original script; the zfs_bclone_enabled
# module parameter only exists from OpenZFS 2.2, so it is commented out here.
#if [ $(cat /sys/module/zfs/parameters/zfs_bclone_enabled) != "1" ] ; then
#       echo "please set /sys/module/zfs/parameters/zfs_bclone_enabled = 1"
#       exit
#fi

prefix="reproducer_${BASHPID}_"

# Seed file: 1 MiB of random data that every later copy should match
dd if=/dev/urandom of=${prefix}0 bs=1M count=1 status=none

echo "writing files"
end=1000
h=0
# Each iteration copies an existing file to a new name, then copies
# that fresh copy again, racing the other running instances
for i in $(seq 1 2 $end) ; do
        j=$((i + 1))
        cp ${prefix}$h ${prefix}$i
        cp ${prefix}$i ${prefix}$j
        h=$((h + 1))
done

echo "checking files"
# Any diff output means a copy diverged from the seed file,
# i.e. silent corruption
for i in $(seq 1 $end) ; do
        diff ${prefix}0 ${prefix}$i
done


For shells that support brace expansion, run an arbitrary number of instances in parallel with for i in {1..n} ; do ./reproducer.sh & done ; wait - replace n with your favorite natural number (it has to be a literal, since brace expansion happens before variable expansion) or iterate over a bunch of them - go crazy, more data is more betterer. A variable-count variant is sketched below.
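Since {1..n} won't expand a variable, here's a minimal sketch with a runtime-configurable count; the default of 4 is an arbitrary choice, and the script is assumed to sit in the current directory:

Code:
#!/bin/sh
# Launch N copies of the reproducer in parallel and wait for all of them.
# The count defaults to 4; pass a different one as the first argument.
N=${1:-4}
i=1
while [ "$i" -le "$N" ] ; do
        ./reproducer.sh &
        i=$((i + 1))
done
wait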
 
Joined
Oct 22, 2019
Messages
3,641
To avoid clutter in this release announcement, I started another thread:

FYI: I was able to reproduce silent corruption on an Arch Linux system with OpenZFS 2.2.0. However (thankfully?), I could not reproduce it on TrueNAS Core 13.0-U6.

I ran 16 instances of the script in parallel.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Yes, let's please move any relevant discussion to said thread. I'll keep the existing posts here so as not to pollute the other thread, which has a nice summary of the situation as it is currently understood.
 

Simon Mackenzie

Dabbler
Joined
Aug 9, 2013
Messages
43
Another smooth update.
Thank you.
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
Easy update - worked great.
 

ThreeDee

Guru
Joined
Jun 13, 2013
Messages
700
I updated .. and everything is still working ...?!!
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
I first updated System->Tunables to include the sysctl vfs.zfs.dmu_offset_next_sync = 0 to mitigate the potential block cloning corruption bug, and then ran the U6 system update, restarting the system for good measure. As expected, the system updated and restarted w/o issues.
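For anyone who wants to verify the tunable took effect, a quick sketch from a shell (runtime only; the sysctl-type entry under System->Tunables is what makes the setting persist across reboots):

Code:
# Check the current value
sysctl vfs.zfs.dmu_offset_next_sync

# Apply it to the running system immediately
sysctl vfs.zfs.dmu_offset_next_sync=0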
 

silmor_senedlen

Dabbler
Joined
Sep 12, 2017
Messages
10
Good day.
What's the situation with the SAS multipath deprecation in a future major version?

I don't really understand the reasons for removing this functionality.
Is it just a marketing decision, so as not to "interfere" with the Enterprise version's "HA" features?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Good day.
What's the situation with the SAS multipath deprecation in a future major version?

I don't really understand the reasons for removing this functionality.
Is it just a marketing decision, so as not to "interfere" with the Enterprise version's "HA" features?
From what I read, they are not removing it; they have just completely stopped any work on it. Hence, it might not work in the future.

Reason? If you ask me, it's that multipath is on CORE and their focus is on SCALE, and in their eyes the effort is not worth putting into CORE.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I don't really understand the reasons for removing this functionality.
Well, why do you feel the need for multipath? What problem of yours does it solve?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Failure of HBA (with 2 controllers) or expander (with dual-expander backplane)?
How often do those actually happen? In practical terms, you'd be much better off with a spare system than multipath. Of course, HA would be a completely different story.
 

silmor_senedlen

Dabbler
Joined
Sep 12, 2017
Messages
10
How often do those actually happen?
This is not only about the probability of occurrence, but also the severity of the consequences.
And in these cases, the consequences would be a disaster.
In practical terms, you'd be much better off with a spare system than multipath.
This is not a full-fledged replacement, at least when serving VMFS volumes over iSCSI to a VMware cluster.
Of course, this does not negate the need for a second (spare) storage system (and of course there is one), but a spare does not solve the problem of long downtime if one of the mentioned components fails.
Alas, our budget cannot afford a branded HA system. Therefore, I tried to reduce the chance of serious (albeit quite rare) problems in a quite affordable way.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
This is not only about the probability of occurrence, but also the severity of the consequences.
And in these cases, the consequences would be a disaster.
Far less of a disaster than if your CPU dies, or a DIMM dies catastrophically. If the HBA dies, the pool stops. Big deal, replace it and life goes on.
Apart from HA, multipath is a scam to sell expanders. It addresses a mostly imagined problem, and thus does little to improve reliability.
 
Last edited:

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
One reason iX may be removing SAS multi-pathing is that it appears some customers (aka NOT home or small office free users) have had a problem. It appears, again reading between the lines, that a few customers have seen the extra path as free disks and tried to use them. This can and does result in serious data loss. And even if there are perfect backups, it likely means extended downtime for the affected data.

Now does that happen to all users of SAS multi-pathing?
Of course not.

Is it a rare occurrence?
Almost certainly.

As @Ericloewe pointed out, this does not help for HA purposes.
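For what it's worth, if you want to confirm that both paths on a CORE box are bound into a single multipath device rather than showing up as separate free disks, here is a minimal sketch using FreeBSD's GEOM multipath tool (output and device names will vary):

Code:
# Summarize each multipath device and the state of its paths
gmultipath status

# Detailed view, listing which providers (e.g. da0 and da8)
# back each multipath device
gmultipath list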
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
Due to security vulnerabilities and maintainability issues, the S3 service is deprecated in TrueNAS CORE 13.0 and scheduled for removal in CORE 13.1. Beginning in CORE 13.0-U6, the CORE web interface generates an alert when the deprecated service is either actively running or is enabled to start on boot. Users should plan to migrate to a separately maintained MinIO plugin or otherwise move any production data away from the S3 service storage location.
A word of caution: if you don't already have any jails running, enabling jails to add the MinIO plugin may have a nontrivial impact on the network performance of all services on your TrueNAS box, due to the networking changes required to host jails.

It's probably not a concern for casual use, but in our case, the changes were substantial enough to force us to move MinIO to a dedicated non-TrueNAS box. I would test your implementation before committing to the plugin, to be sure there are no performance impacts in your environment.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
A word of caution: if you don't already have any jails running, enabling jails to add the MinIO plugin may have a nontrivial impact on the network performance of all services on your TrueNAS box, due to the networking changes required to host jails.

It's probably not a concern for casual use, but in our case, the changes were substantial enough to force us to move MinIO to a dedicated non-TrueNAS box. I would test your implementation before committing to the plugin, to be sure there are no performance impacts in your environment.
Could you please elaborate on that? Did you create a bridge interface manually, and did you move the IP address of your NAS to the bridge interface and remove it from the physical one? The completely reworked bridge code in FreeBSD 13 and above should definitely not have any noticeable impact on the performance of other services.

What is your setup, please?
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
Could you please elaborate on that? Did you create a bridge interface manually, and did you move the IP address of your NAS to the bridge interface and remove it from the physical one? The completely reworked bridge code in FreeBSD 13 and above should definitely not have any noticeable impact on the performance of other services.

What is your setup, please?

The jail creation process creates a bridge interface, enables promiscuous mode, and disables a number of features on the physical interface, including TSO, that have an outsized impact on performance on 40G Mellanox NICs. Basic testing in our environment takes a 40G NIC that can do ~30 Gb/s down to 4 Gb/s. More importantly, some of the changes to the networking stack "stick around" even if the jail is subsequently disabled and destroyed. I haven't really dug into that; it's just simpler for us to move S3 to another box vs. troubleshooting this any further.
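For reference, here's a minimal sketch of how one might inspect those interface changes from a shell; mlxen0 is just an illustrative Mellanox interface name, and re-enabling offloads on a live system is at your own risk:

Code:
# Look for PROMISC among the flags and TSO4/TSO6/LRO in the options line
ifconfig mlxen0

# Re-enable TSO and LRO at runtime if they were switched off
ifconfig mlxen0 tso lro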
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
Did you create a bridge interface manually and did you move the IP address of your NAS to the bridge interface and remove it from the physical?
Specifically, I used the tool in the UI. I did not do anything manually. Based on my observations, it did do this.
 