TrueNAS 13.0 BETA Experiences

emk2203

Guru
Joined
Nov 11, 2012
Messages
573
Whatever floats your boat. My suggestion is the "official" way to load kernel modules at boot.
Yup, already switched over. Better to have the three tunables for this in one place, and it's the official way™ as well.
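
For anyone following along, "official" here means LOADER-type tunables in the UI rather than hand-edits. A minimal sketch of the equivalent plain-FreeBSD config, assuming the module in question is if_wg (any kld name works the same way with the generic <module>_load mechanism):

Code:
# /boot/loader.conf -- load the module from the boot loader
if_wg_load="YES"

# or let rc(8) load it after boot, via /etc/rc.conf
kld_list="if_wg"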
 

emk2203

Guru
Joined
Nov 11, 2012
Messages
573
Some more feedback on the beta:

  • While this is purely cosmetic, it annoys me that the motd shows

Code:
Last login: Tue Feb 15 11:47:34 2022 from 192.168.88.21
FreeBSD ?.?.?  (UNKNOWN)


It works in the jail, so why not in the main system?

The jails show
Code:
Last login: Tue Feb 15 12:40:09 on pts/0
FreeBSD 13.0-STABLE (TrueNAS.amd64) #0 n245330-c073d5cd0b8: Tue Feb  8 14:42:10 EST 2022


  • As others have noted, the pool upgrade via the middleware fails.

  • The upgrade is still possible via the command line (sketch below) and didn't cause any issues. For the curious: it enables the draid feature.
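
For anyone who wants to do the same, a minimal sketch of the command-line route, assuming a hypothetical pool named tank (note that a pool upgrade is one-way):

Code:
# list feature flags that are not yet enabled
zpool upgrade
# enable all supported feature flags on the pool (irreversible)
zpool upgrade tank
# confirm the draid feature is now enabled
zpool get feature@draid tank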
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It works in the jail, so why not in the main system?

Because jails use stock FreeBSD code. I don't know what iX is doing, but FreeBSD rejiggered the motd subsystem in FreeBSD 13 and moved /etc/motd to /var/run/motd, among other random changes. A lot of what TrueNAS does is bound up in creating an appliance version of the OS, so when upstream makes changes, that can affect the appliance behaviour.
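
For the curious, a sketch of how the reworked mechanism behaves on stock FreeBSD 13 (see motd(5)); whether the appliance side regenerates the file the same way is exactly the open question:

Code:
# the static text now lives in a template
cat /etc/motd.template
# the rc script regenerates /var/run/motd with the kernel ident prepended
service motd restart
head -1 /var/run/motd    # should print the FreeBSD version line, not UNKNOWN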
 

blubyu

Cadet
Joined
Feb 19, 2016
Messages
8
I don't know if this is a bug or not...

I upgraded from 12.x to 13 BETA 1. The upgrade went fine and everything has been working for a few days. I decided to replace one of my 2TB drives with a 4TB drive: I took the 2TB drive offline, shut down the server, swapped in the new 4TB drive, and brought the system back up. The system sees the new drive fine. I go into the pool, click the three dots next to the offline disk, and choose Replace. In the pop-up window the Replace button is greyed out. When I choose a member disk the Replace button turns active, but clicking it does nothing. Checking the Force box made no difference. I can click the Cancel button just fine, but not the Replace button.

Is this a bug, or am I doing it wrong?
 

blubyu

Cadet
Joined
Feb 19, 2016
Messages
8
I don't know if this is a bug or not...

I upgraded from 12.x to 13 BETA 1. The upgrade went fine and everything has been working for a few days. I decided to replace one of my 2TB drives with a 4TB drive: I took the 2TB drive offline, shut down the server, swapped in the new 4TB drive, and brought the system back up. The system sees the new drive fine. I go into the pool, click the three dots next to the offline disk, and choose Replace. In the pop-up window the Replace button is greyed out. When I choose a member disk the Replace button turns active, but clicking it does nothing. Checking the Force box made no difference. I can click the Cancel button just fine, but not the Replace button.

Is this a bug, or am I doing it wrong?
I was able to start the resilvering process by running the zpool replace command from the command line. The GUI is now showing that it is in the process of replacing the drive.
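
For anyone else hitting the greyed-out button, a rough sketch of that command-line route, with hypothetical pool/device names (check zpool status for your actual labels; note the middleware normally handles partitioning for you):

Code:
# find the offlined member's label and the new disk
zpool status tank
# replace the offlined member with the new disk (names are placeholders)
zpool replace tank <old-disk-gptid> /dev/da5
# watch the resilver run
zpool status -v tank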
 

Louis2

Contributor
Joined
Sep 7, 2019
Messages
177
For me, intending to use jails and VMs next to TrueNAS as a NAS, IPv6 is >essential<.

However, much to my surprise half a year ago, IPv6 is still not properly supported in TrueNAS :frown::frown: . Proper IPv6 support was one of my big hopes for TrueNAS 13. I raised a ticket for it. And it looks like .... it is not feasible .....

So to my regret (I really like TrueNAS) I have to wait for a SCALE version with good IPv6 support ...... or change to another "platform"
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Proper IPv6 support exists in almost nothing of consequence, alas.

Speaking as someone who does both network engineering and also releases an appliance-like version of FreeBSD, I think the fundamental issue is that most networks run IPv4 or, at best, IPv4+IPv6 dual stack. Running v6-only is unusual, and trying to maintain a system that correctly supports either-or-both is a real PITA. As a result, IPv4 remains the de facto standard. This is especially true in corporate/enterprise environments, which are the target of TrueNAS.

Part of this involves the fact that IPv6 still lacks feature parity with v4; for example, OSPFv3 chose not to implement authentication but to rely on IPsec instead, yet major implementations such as FRR have only sketchy support for that, and it doesn't work in practice. This is a disincentive to IPv6 deployment for those of us with production networks.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
@jgreco the problems of @Louis2 are simpler. Basic addressing in jails does not work if you are not using SLAAC ... I have not yet been able to find the cause for that. Even with correctly set up bridged networking - no cookie. We run around a thousand jails, all with static addresses, on plain FreeBSD (11, 12, now 13 ...) without problems, so the root cause must somehow be specific to TrueNAS or the iocage version they use.
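
For reference, the kind of static-address config that misbehaves here, sketched in iocage terms with a hypothetical jail name and the IPv6 documentation prefix (adjust prefix and router to your network):

Code:
# static IPv6 on a VNET jail via iocage
iocage set vnet=on testjail
iocage set ip6_addr="vnet0|2001:db8::10/64" testjail
iocage set defaultrouter6="2001:db8::1" testjail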

In my home lab it's all SLAAC - works like a charm.

We'll see. After I created the ticket complaining about the bridge config again, they kind of promised to have a serious go at that subsystem for 13.0-U1 ...

Kind regards,
Patrick
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, I understand that the problem w.r.t. a NAS is "simpler" f.s.v.o. that word, but the fundamental reasons behind IPv6's laggy deployment, more than two decades after the first "real world" deployments, remain. This surely impacts the emphasis, or rather the lack of emphasis, on IPv6 in many products, including TrueNAS.

I know from direct experience that FreeBSD interactions between the bridge, epair, multicast, and ipfw are a real PITA to investigate and resolve. IPv6's heavier reliance on multicast (no broadcast address) tends to tease out certain firewall rule issues and other fun stuff, and with FreeBSD's twitchiness about multicast address assignments on bridges, I can see that being a world of pain to get all the finicky details just right.

As for iocage, I thought that was strictly a userland jail manager.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
For me, intending to use jails and VMs next to TrueNAS as a NAS, IPv6 is >essential<.

However, much to my surprise half a year ago, IPv6 is still not properly supported in TrueNAS :frown::frown: . Proper IPv6 support was one of my big hopes for TrueNAS 13. I raised a ticket for it. And it looks like .... it is not feasible .....

So to my regret (I really like TrueNAS) I have to wait for a SCALE version with good IPv6 support ...... or change to another "platform"

There are some annoying issues with support of IPv6 on FreeBSD where IPv6 is the primary management protocol. We have worked around them in SCALE and recommend SCALE for anyone who wants a pure IPv6 platform. SCALE gets to RELEASE on Tuesday.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
I was able to start the resilvering process by running the zpool replace command from the command line. The GUI is now showing that it is in the process of replacing the drive.

Please report the bug if you can't find it in the bug database already.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
I agree. A significant reduction in CPU utilization and a significant increase in transfer speed, on older hardware.
Thanks... please quantify your estimates. Is it 10% or 20% less CPU intensive?
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
When you go to the pools, it asks you to upgrade the pool. It just failed.

We'd appreciate it if you reported this bug and let people know the bug ID. Thanks.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
I upgraded from 12.x to 13 BETA 1. The upgrade went fine and everything has been working for a few days. I decided to replace one of my 2TB drives with a 4TB drive: I took the 2TB drive offline, shut down the server, swapped in the new 4TB drive, and brought the system back up. The system sees the new drive fine. I go into the pool, click the three dots next to the offline disk, and choose Replace. In the pop-up window the Replace button is greyed out. When I choose a member disk the Replace button turns active, but clicking it does nothing. Checking the Force box made no difference. I can click the Cancel button just fine, but not the Replace button.

Is this a bug, or am I doing it wrong?

I'd recommend reporting it and then letting us know the bug ID.... If it's not a bug, it is at least a lack of UI information.
 

Volts

Patron
Joined
May 3, 2021
Messages
210
Thanks... please quantify your estimates. Is it 10% or 20% less CPU intensive?

On a nasty old Core2 Duo E7500 it's even more significant than that.

I've been using the kmod for a while, so I reinstalled and briefly tested wireguard-go again in a jail on 13.0 BETA. If you want science and better data, I can be more diligent.

Downloading a 1000+ seed Ubuntu (lol) .torrent:
  • wireguard-go: peak download speeds of 3-5 MB/s, and wireguard-go CPU utilization is 50%+ - more than qbittorrent itself
  • if_wg.ko: peak download speeds of 20+ MB/s, but all CPU/user/system stats are lower. It's as though the wireguard-go CPU utilization is just erased from the system.
I bet most of the load is just bouncing in and out of user space. I suspect it would have much less impact on a faster CPU.

The if_wg.ko I built and the one in 13.0 BETA (which has debugging symbols) perform the same, so I'm happy using the provided one.
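
If anyone wants to repeat the comparison, the kmod side is roughly this (a sketch; key and peer configuration omitted, and wg(8) from wireguard-tools is assumed to be installed):

Code:
# load the in-kernel WireGuard driver and confirm it registered
kldload if_wg
kldstat | grep if_wg
# kernel wg interfaces are created with ifconfig; wg(8) still manages peers
ifconfig wg0 create
wg show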
 

cpt_pi

Cadet
Joined
Feb 22, 2022
Messages
1
Used the manual upgrade tar to upgrade 12.0-U8 to 13-BETA1.

Upgrade went through without any issues.
I've noticed reduced Service memory usage since doing the upgrade.

Like others have mentioned, trying to upgrade a 12 pool to 13 via the WebUI just errors out. I'm not aware of anything I'd personally gain from upgrading the zpool, so I'll just leave it until the 13.0 full release.

I was previously able to use the drm_load and i915kms_load loader tunables to load the Intel iGPU kernel modules; however, now I'm getting the following errors:
Code:
KLD drm.ko: depends on debugfs - not available or version mismatch
linker_load_file: /boot/modules/drm.ko - unsupported file type
KLD i915kms.ko: depends on drmn - not available or version mismatch
linker_load_file: /boot/modules/i915kms.ko - unsupported file type

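For reference, a sketch of the loader tunables that worked on 12.x; presumably the prebuilt modules just don't match the 13 kernel yet:

Code:
# /boot/loader.conf (or LOADER-type tunables) as used on 12.0-U8
drm_load="YES"
i915kms_load="YES"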

Also, I use an i5-9600K and was hoping that the upgrade to 13 would add support for the Intel iGPU. It's not available right now; maybe in a future update.

With regard to the iGPU (someone's probably gonna say "Make sure your hardware/motherboard/BIOS supports it!"): I've tested TrueNAS SCALE and Debian 11, and both OSes had access to the iGPU (with the integrated GPU set to 'enabled' instead of 'auto' in the BIOS), so it's definitely a software issue.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
I have a change in preparation that should significantly improve directory listing performance over SMB, and potentially many other areas of SMB as well. It transitions the SMB server to using the O_RESOLVE_BENEATH open flag (a new feature in FreeBSD 13) so that the kernel prevents any symlink escapes from the share's connectpath, rather than implementing the same in user space via stat, realpath, chdir, and getwd. The practical impact (if / once it's merged) is that "widelinks" (symlinks pointing outside a share path) will not be possible even if you add auxiliary parameters to the SMB share to enable them (they are not enabled by default and not supported).
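
For anyone wondering which auxiliary parameters are meant, these are the stock Samba smb.conf knobs that historically enabled widelinks (share name and path hypothetical); per the above, they will stop having any effect once the change is merged:

Code:
[global]
    allow insecure wide links = yes

[myshare]
    path = /mnt/tank/myshare
    follow symlinks = yes
    wide links = yes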

If anyone wants to test these changes and give feedback, please send me a PM and I'll provide an update file.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
There are some annoying issues with support of IPv6 on FreeBSD where IPv6 is the primary management protocol.
Could you elaborate on that? We run an entire data centre's worth of partly dual-stack and partly IPv6-only hosts and jails without issues. For some years now.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Could you elaborate on that? We run an entire data centre's worth of partly dual-stack and partly IPv6-only hosts and jails without issues. For some years now.

Mostly related to running HA systems with IPv6.
However, your IPv6-only systems will not get automated software updates... I assume.

Glad to know that in general IPv6 is working for you.
 