TrueNAS 13.0-RC1 has been released

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399

cent_one

Cadet
Joined
Apr 15, 2021
Messages
4
I have upgraded my 13.0-BETA1 to 13.0-RC1. I'm using it for remote replication from a 12.0-U8.1 system.

All replication tasks are configured as pull tasks on the 13.0 NAS which have worked well when running 13.0-BETA1.

These have stopped working with 13.0-RC1.

The replication notes in the Known Issues don't appear to cover this scenario, and I tried adding PubkeyAcceptedAlgorithms +ssh-rsa in case that would help.

Are pulls from 13 expected to work?

Thanks
 

hervon

Patron
Joined
Apr 23, 2012
Messages
353
Successful update.
 

ThreeDee

Guru
Joined
Jun 13, 2013
Messages
700
I was on the 13 nightlies and wasn't able to update from there, so I rolled back to the 13 beta, and I'm now fully updated to RC1 without issue :smile:
 

Redcoat

MVP
Joined
Feb 18, 2014
Messages
2,925
I successfully updated my backup/test system from 12.0-U8.1 to 13.0-RC1. However, I found that a 4-hourly replication task from my primary box (still on 12.0-U8.1) errored out because of a key mismatch, so I've rolled back the backup system until I can spend time poking at it.

EDIT #1: Now that the Release Notes are available, this issue is described there and a workaround is provided.

EDIT #2: That did not work for me. Reading the release notes again, I can't tell whether the message is that 12->13 replication will never work, while 13->12 can be made to work by adding the auxiliary parameter (and the linked reference is presently above my pay grade to decode...).
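For reference, the auxiliary parameter in question is the PubkeyAcceptedAlgorithms one mentioned earlier in the thread, which re-enables the legacy ssh-rsa signature scheme that newer OpenSSH releases disable by default. As a sketch only (which side needs it depends on which system runs the newer OpenSSH; the release notes are the canonical instructions), it would go into the SSH service's auxiliary parameters on the box accepting the connection:

```
# sshd_config fragment (SSH service "Auxiliary Parameters");
# the same keyword is also valid in client-side ssh_config:
PubkeyAcceptedAlgorithms +ssh-rsa
```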
 

dlanor

Cadet
Joined
Dec 17, 2020
Messages
3
Updated 12.0-U8 to 13.0-RC1. Boots fine. But my data pool is not imported because /etc/rc.d/geli does not seem to run well at boot. When I run it manually, the encrypted disks are attached. Known issue?
 

dlanor

Cadet
Joined
Dec 17, 2020
Messages
3
Found something. /etc/rc.conf.freenas is a generated file and contains necessary geli_* vars, but it is not generated soon enough for /etc/rc.d/geli to use it. The next command works around the issue for me.
# grep geli_ /etc/rc.conf.freenas > /conf/base/etc/rc.conf.d/geli
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399

Go ahead and create a bug report on this.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Yes, pulls are expected to work. If you can file a ticket with any debugs, and then roll back to BETA to confirm it works again, that would be very useful.
 

cent_one

Cadet
Joined
Apr 15, 2021
Messages
4

Nothing obvious in the logs of either machine.

I'll hold off on rolling back in case there is a specific debug process I should run first.
 

jlpellet

Patron
Joined
Mar 21, 2012
Messages
287
Installed RC1 on a test system, and all seems normal at first look, but I did get the errors outlined below in the log during the first boot.

Intel(R) Core(TM) i3-2100 CPU @ 3.10GHz, 8GB RAM

LOG

Apr 20 06:59:42 fn3 1 2022-04-20T06:59:42.014970-05:00 fn3.local daemon 1515 - - 2022-04-20 06:59:42,014:wsdd WARNING(pid 1516): no interface given, using all interfaces
Apr 20 06:59:42 fn3 1 2022-04-20T06:59:42.177042-05:00 fn3.local collectd 1521 - - cannot get CTL max ports
Apr 20 06:59:42 fn3 1 2022-04-20T06:59:42.177066-05:00 fn3.local collectd 1521 - - Initialization of plugin `ctl' failed with status -1. Plugin will be unloaded.
Apr 20 06:59:42 fn3 1 2022-04-20T06:59:42.189266-05:00 fn3.local collectd 1521 - - Error: one or more plugin init callbacks failed.

TOP

last pid: 1751; load averages: 0.32, 0.23, 0.12 up 0+00:09:02 07:07:57
52 processes: 1 running, 51 sleeping
CPU: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
Mem: 1102M Active, 168M Inact, 704M Wired, 5575M Free
ARC: 188M Total, 61M MFU, 109M MRU, 260K Anon, 2377K Header, 14M Other
134M Compressed, 334M Uncompressed, 2.49:1 Ratio
Swap: 4096M Total, 4096M Free

PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
366 root 27 20 0 372M 264M kqread 3 0:19 0.17% python3.
1745 root 1 20 0 14M 4088K CPU3 3 0:00 0.11% top
1532 www 1 20 0 38M 10M kqread 2 0:00 0.01% nginx
1342 uucp 1 20 0 19M 2696K select 2 0:00 0.01% upsd
1340 uucp 1 20 0 13M 2884K select 3 0:00 0.01% usbhid-u
1418 ntpd 1 20 0 21M 7128K select 0 0:00 0.01% ntpd
1432 root 8 20 0 52M 11M select 2 0:00 0.00% rrdcache
509 root 3 20 0 245M 172M usem 3 0:06 0.00% python3.

NAS-115875 filed as requested.
 

cent_one

Cadet
Joined
Apr 15, 2021
Messages
4
I'm seeing the collectd error on every boot.

collectd 1311 - - cannot get CTL max ports
collectd 1311 - - Initialization of plugin `ctl' failed with status -1. Plugin will be unloaded.
collectd 1311 - - Error: one or more plugin init callbacks failed.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Please "report a bug" using the standard process... thanks.
Posting the bug ID here is useful for future readers.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
After updating both of my private systems to RC1 all my replication tasks fail - local ones as well as remote.
All of my tasks have custom retention periods for the destination.

People seem to have noticed on SCALE first, but I now have that problem with CORE --> CORE:

[Screenshot attached: failed replication tasks, 2022-04-20]
 

cent_one

Cadet
Joined
Apr 15, 2021
Messages
4
I rolled both systems back to earlier versions, which led me to the root of the problem:
there was a name-resolution issue with the primary LAN DNS server on the remote network.
My apologies for the noise.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776

Here's a quick hack - would be great if someone with greater knowledge of python would check it for correctness.

Change the file /usr/local/lib/python3.9/site-packages/zettarepl/replication/task/retention_policy.py - line 57 - from:
Code:
if parsed_dst_snapshot.datetime < now - self.period]
to:
Code:
if parsed_dst_snapshot.datetime.timestamp() < (now - self.period).timestamp()]

Then clear the pycache and restart middlewared:
Code:
rm /usr/local/lib/python3.9/site-packages/zettarepl/replication/task/__pycache__/retention_policy.*
service middlewared restart


Honestly, I "stackoverflowed" it - no idea what I'm doing. OK, a bit of an idea. The comparison mixes a timezone-aware datetime with a naive one, and Python throws an error about that. I do not know if my way of forcing identical types is the correct/recommended/canonical one.

Kind regards,
Patrick
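For anyone wondering what the underlying error actually is: Python refuses to order-compare an offset-aware datetime against a naive one. This standalone sketch (hypothetical values, not zettarepl's actual code) reproduces the failure and shows why comparing POSIX timestamps sidesteps it:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical stand-ins for zettarepl's values: a snapshot date parsed
# with timezone info, and a "now" without one.
aware_snapshot = datetime(2022, 4, 1, 12, 0, tzinfo=timezone.utc)
naive_now = datetime(2022, 4, 27, 12, 0)
period = timedelta(days=14)

# Mixing aware and naive datetimes in an ordering comparison raises.
try:
    expired = aware_snapshot < naive_now - period
except TypeError as exc:
    print("direct comparison fails:", exc)

# The workaround compares POSIX timestamps (plain floats) instead.
# Caveat: .timestamp() interprets a naive datetime as local time, so this
# is only sound when both values really refer to the same zone.
expired = aware_snapshot.timestamp() < (naive_now - period).timestamp()
print("snapshot expired:", expired)
```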
 


Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
You would want to use datetime's function that returns the equivalent in UTC (or whatever TZ you want) and manually strip out the TZ data part of the datetime to convert it into a naïve datetime object. Your solution works if everything's consistent anyway, but there's TZ info missing on one end.
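In code, that approach might look like the following (a generic sketch with a made-up helper name, not the actual zettarepl patch):

```python
from datetime import datetime, timedelta, timezone

def to_naive_utc(dt: datetime) -> datetime:
    """Normalize an aware datetime to UTC, then strip the tzinfo so it
    can be compared against naive datetimes that are assumed to be UTC."""
    if dt.tzinfo is not None:
        dt = dt.astimezone(timezone.utc).replace(tzinfo=None)
    return dt

# 14:00 at UTC+2 is 12:00 UTC; once normalized, both sides are naive
# and the ordering comparison is legal again.
aware = datetime(2022, 4, 20, 14, 0, tzinfo=timezone(timedelta(hours=2)))
naive = datetime(2022, 4, 20, 12, 30)
print(to_naive_utc(aware) < naive)
```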
 
