TrueNAS SCALE 22.02.0 (Angelfish) Release

proligde

Dabbler
Joined
Jan 29, 2014
Messages
21
Can you PM me a debug please?

Thanks for the offer. I tried to switch back to RELEASE - unfortunately now I can't get anything to work anymore.

In this screen neither my USB keyboard nor my IPMI-Remote-Console can interact with the grub menu (keyboard strokes are ignored)

2022-02-23 13_50_24-Remote KVM [192.168.178.58] - [720 x 400 ].png


and after that I end up in:

2022-02-23 13_50_44-Remote KVM [192.168.178.58] - [720 x 400 ].png


A forced pool import succeeds, but it doesn't continue on exit.

I guess I should just reinstall and restore from the truenas backup I created before the update to release? Or is there any hint what I could do in such a situation?

UPDATE: Never mind - I could boot it up after trying a second or third time to import the boot-pool with "import -f boot-pool" and exit afterwards. I'd say I didn't do anything differently (which I know is said to be a first sign of madness, but it worked).

I'll PM you the NFS debug output of both the old and the new versions.

Best - Max
 
Last edited:

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,546
This is an FYI for NFS users who have migrated from CORE to SCALE: we have added a sanity check during exports generation. If a user / group does not exist in the Linux OS (e.g. "wheel"), we refuse to generate the exports file. Hence, if you don't have /etc/exports, verify that your "mapall" and "maproot" entries actually exist on the server.
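To check on the SCALE side whether an account referenced by a share actually exists, `getent` queries the same name-service database the exports generation consults. A minimal sketch (the "wheel" example mirrors the post above; substitute any account name):

```shell
# Query the Linux account database the exports sanity check relies on.
# "wheel" is a common FreeBSD group with no counterpart on Debian-based
# SCALE, which is why CORE-era maproot/mapall entries can fail validation.
check_account() {
  if getent passwd "$1" >/dev/null 2>&1; then
    echo "user $1 exists"
  else
    echo "user $1 missing"
  fi
}

check_account root    # present on any Linux system
```

If the lookup reports the account missing, point the share's Mapall/Maproot fields at a user and group that do exist, and the exports file will be generated again.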
 

SImon Smith

Cadet
Joined
Sep 29, 2016
Messages
9
This is an FYI for NFS users who have migrated from CORE to SCALE: we have added a sanity check during exports generation. If a user / group does not exist in the Linux OS (e.g. "wheel"), we refuse to generate the exports file. Hence, if you don't have /etc/exports, verify that your "mapall" and "maproot" entries actually exist on the server.
Thank you for this info! I spent all morning trying to work out why my NFS stopped working.
I just edited each of my NFS shares and removed the mapall user/group, and it suddenly started working!
It might be an idea to put this information in the install guide/release notes!
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,546
Thank you for this info! I spent all morning trying to work out why my NFS stopped working.
I just edited each of my NFS shares and removed the mapall user/group, and it suddenly started working!
It might be an idea to put this information in the install guide/release notes!
Yes, I'll notify our Docs team to add a known-impacts statement to the release notes.
edit: https://github.com/truenas/documentation/pull/1280
 
Last edited:

Janus0006

Dabbler
Joined
Mar 27, 2021
Messages
46
I updated yesterday. The update went well.
As far as I can see, we can no longer add apps via the CLI/apt-get. Probably because the system is now official and no longer in beta. But I'm now unable to install/configure the zabbix-agent.
Am I the only one in this situation? Does someone have a solution?
 

SImon Smith

Cadet
Joined
Sep 29, 2016
Messages
9
I updated yesterday. The update went well.
As far as I can see, we can no longer add apps via the CLI/apt-get. Probably because the system is now official and no longer in beta. But I'm now unable to install/configure the zabbix-agent.
Am I the only one in this situation? Does someone have a solution?
same here after migrating from CORE
Code:
root@freenas[~]# apt update
zsh: permission denied: apt
root@freenas[~]# apt-get update
zsh: permission denied: apt-get
root@freenas[~]# apt install nano
zsh: permission denied: apt
root@freenas[~]# which apt
/usr/bin/apt
root@freenas[~]# ls -lah `which apt`
-rw-r--r-- 1 root root 19K Jun 10  2021 /usr/bin/apt
root@freenas[~]#

Not sure if this is by design?
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,546
same here after migrating from CORE
Code:
root@freenas[~]# apt update
zsh: permission denied: apt
root@freenas[~]# apt-get update
zsh: permission denied: apt-get
root@freenas[~]# apt install nano
zsh: permission denied: apt
root@freenas[~]# which apt
/usr/bin/apt
root@freenas[~]# ls -lah `which apt`
-rw-r--r-- 1 root root 19K Jun 10  2021 /usr/bin/apt
root@freenas[~]#

Not sure if this is by design?
That is by design. https://github.com/truenas/middleware/pull/8322/files

See updated warning text here.
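For anyone wondering why the shell reports "permission denied" rather than "command not found": the binaries are still present, but SCALE strips the execute bits (note the `-rw-r--r--` mode in the listing above). The effect is easy to reproduce with a scratch file:

```shell
# A readable file without execute bits gives "permission denied" even though
# it exists -- the same state /usr/bin/apt is shipped in on SCALE.
tmp=$(mktemp)
printf '#!/bin/sh\necho hello\n' > "$tmp"

chmod 644 "$tmp"                       # same mode as /usr/bin/apt on SCALE
noexec_result=$("$tmp" 2>&1 || true)   # shell refuses: Permission denied
echo "$noexec_result"

chmod 755 "$tmp"                       # with execute bits it runs again
exec_result=$("$tmp")
echo "$exec_result"                    # prints: hello
rm -f "$tmp"
```

Re-adding the execute bits on the real apt would defeat the point of the change, of course; the supported route for extra software on SCALE is containers/apps.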
 

Magnus33

Patron
Joined
May 5, 2013
Messages
429
SCALE updated without issue from the beta version.

Odd bug, though: the release update is still showing as available for install after it's installed.
 

SImon Smith

Cadet
Joined
Sep 29, 2016
Messages
9
Weird bug I've found:
I keep getting 'ix-etc.service failed to start' on startup.
The output from 'systemctl status ix-etc.service' isn't helpful either:
Code:
root@freenas[~]# systemctl status ix-etc.service
● ix-etc.service - Generate TrueNAS /etc files
     Loaded: loaded (/lib/systemd/system/ix-etc.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Wed 2022-02-23 14:46:18 GMT; 1h 21min ago
   Main PID: 1424 (code=exited, status=1/FAILURE)

Feb 23 14:45:16 freenas.local systemd[1]: Starting Generate TrueNAS /etc files...
Feb 23 14:46:18 freenas.local systemd[1]: ix-etc.service: Main process exited, code=exited, status=1/FAILURE
Feb 23 14:46:18 freenas.local systemd[1]: ix-etc.service: Failed with result 'exit-code'.
Feb 23 14:46:18 freenas.local systemd[1]: Failed to start Generate TrueNAS /etc files.
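The systemd status above only records `status=1/FAILURE`; the actual Python traceback from the etc-file generator usually lands in the middleware log. A small sketch for pulling it (the log paths are SCALE defaults, an assumption here):

```shell
# Look for the ix-etc traceback in the middleware log; systemctl status
# only shows the exit code. Paths below are SCALE defaults (assumption).
for log in /var/log/middlewared.log /var/log/syslog; do
  if [ -r "$log" ]; then
    echo "=== $log ==="
    tail -n 50 "$log"
  else
    echo "not readable: $log"
  fi
done
```

`journalctl -u ix-etc.service` may also carry more lines than the truncated status output. Whatever traceback shows up there is what's worth pasting into a bug report.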
 

PK1048

Cadet
Joined
Feb 23, 2022
Messages
7
After a fresh install of TrueNAS SCALE 22.02.0 (Angelfish) Release I have this message to update. Why? Isn't ISO 8ab1cb587ac03e6b8f3688c07863b8746a009bed26c40b104a005bd1f06e47cd the release, or what?


View attachment 53395
I can confirm this behavior running as a VM under VBox on top of FreeBSD 12.3
 

beardmann

Cadet
Joined
Oct 11, 2021
Messages
8
I'm currently testing with 22.02.00 in a VMWare environment before installing it on bare metal.
I am especially testing the replacement of boot devices, and I must say that it is not going well :smile:
My setup is very simple.. just a VM with two 20GB disks, I then install TrueNAS from ISO and choose both disks as installation targets.
This of course creates a mirrored boot pool.
I then try to simulate a disk failure, then add a fresh virtual disk, and do the replacement on the boot pool... which almost works...
The zpool seems to be fixed, but the grub installation fails with this error:
[EFAULT] Command grub-install --target=i386-pc /dev/sdb failed (code 1): Installing for i386-pc platform. grub-install: error: failed to get canonical path of `/dev/replacing-1'.
Followed by these details:
Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 175, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self,
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1275, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1261, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1129, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/boot.py", line 234, in replace
    await self.middleware.call('boot.install_loader', dev)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1318, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1275, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/boot_/boot_loader_linux.py", line 19, in install_loader
    await run('grub-install', '--target=i386-pc', f'/dev/{dev}')
  File "/usr/lib/python3/dist-packages/middlewared/utils/__init__.py", line 64, in run
    cp.check_returncode()
  File "/usr/lib/python3.9/subprocess.py", line 460, in check_returncode
    raise CalledProcessError(self.returncode, self.args, self.stdout,
subprocess.CalledProcessError: Command '('grub-install', '--target=i386-pc', '/dev/sdb')' returned non-zero exit status 1.

As pointed out, the zpool is now resilvered and OK

I can run the command "grub-install --target=i386-pc /dev/sdb" from the command line without any errors...
But... if I then remove the other drive and try to boot off this drive (sdb), it fails to find any bootloader... no GRUB, nothing...
I guess grub is not installed correctly?

Not sure if I am doing something wrong here? But I am trying to do it all from the web GUI...

This is exactly why I do these tests before installing this on a semi production setup :smile:

Any help is welcome... and it is very easy to replicate this...

/Beardmann
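One way to see whether grub-install actually wrote boot code to the disk, rather than trusting its exit status, is to look for GRUB's marker string in the first sectors. A sketch, with the device name as an assumption:

```shell
# Scan the first 1 MiB of a disk for GRUB's embedded marker string.
# DEV is an assumption -- substitute the replacement disk.
DEV=/dev/sdb
if dd if="$DEV" bs=512 count=2048 2>/dev/null | grep -aq GRUB; then
  echo "GRUB boot code found on $DEV"
else
  echo "GRUB boot code NOT found on $DEV"
fi
```

The `failed to get canonical path of '/dev/replacing-1'` error suggests the middleware ran grub-install while ZFS still exposed the temporary replacing vdev during the resilver; re-running the install by hand afterwards (as you did) is a plausible workaround, but the marker check above tells you whether it actually landed on disk.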
 

malco_2001

Dabbler
Joined
Sep 10, 2013
Messages
20
weird issue after upgrading from RC2
The release notes for 22.02.0 are being updated and we should hopefully have the known issue posted soon. This is an empty update notification that is coming from a bug in the backend update train. This notification can be ignored and is being resolved in SCALE 22.02.1.
 

Magnus33

Patron
Joined
May 5, 2013
Messages
429
The release notes for 22.02.0 are being updated and we should hopefully have the known issue posted soon. This is an empty update notification that is coming from a bug in the backend update train. This notification can be ignored and is being resolved in SCALE 22.02.1.
In terms of bugs, this is about as minor as it gets, and everything else seems to be functioning correctly.
 

HITMAN

Dabbler
Joined
Nov 20, 2021
Messages
33
Nice news!
The upgrade went perfectly; all services and pools are working.
SMB, NFS, LDAP + apps

Thanks guys!
 

Ixian

Patron
Joined
May 11, 2015
Messages
218
I updated yesterday. The update went well.
As far as I can see, we can no longer add apps via the CLI/apt-get. Probably because the system is now official and no longer in beta. But I'm now unable to install/configure the zabbix-agent.
Am I the only one in this situation? Does someone have a solution?

A lot of these kinds of services, like zabbix-agent (and Prometheus, Telegraf, etc.), can easily be run from a docker container, which SCALE supports. I've been doing this for months.
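For example, zabbix-agent2 can run as a container instead of an apt package. A minimal docker-compose sketch; the server address, hostname, and image tag are assumptions you'd adapt to your setup:

```yaml
# Sketch only: adjust the server IP and hostname, and pin an image tag you trust.
version: "3"
services:
  zabbix-agent:
    image: zabbix/zabbix-agent2:latest   # official Zabbix image on Docker Hub
    network_mode: host                   # agent reports the host's metrics
    restart: unless-stopped
    environment:
      ZBX_SERVER_HOST: "192.168.1.10"    # assumption: your Zabbix server
      ZBX_HOSTNAME: "truenas-scale"      # assumption: name registered in Zabbix
```

Host networking keeps the agent's view of interfaces and ports identical to the host's, which is usually what you want for monitoring the NAS itself.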
 

Magnus33

Patron
Joined
May 5, 2013
Messages
429
A lot of these kinds of services, like zabbix-agent (and Prometheus, Telegraf, etc.), can easily be run from a docker container, which SCALE supports. I've been doing this for months.
As a bonus, plugins get updated within a day or hours of an update, compared to TrueNAS CORE, where they get updated maybe when there's a CORE update, if they remember.

That's always dumbfounded me: they sell products with promised supported plugins, and support for them is something of a joke.
How they don't realize this is a bad idea is confusing, since if there's a security hole in, say, Plex, like the one a while back that exposed user information, iXsystems becomes legally liable.
Plex is of course required to patch the bug, but if iXsystems doesn't update Plex for months after the fact, it is no longer only Plex's legal problem.

Not a smart move any way you look at it.
 

Ixian

Patron
Joined
May 11, 2015
Messages
218
As a bonus, plugins get updated within a day or hours of an update, compared to TrueNAS CORE, where they get updated maybe when there's a CORE update, if they remember.

That's always dumbfounded me: they sell products with promised supported plugins, and support for them is something of a joke.
How they don't realize this is a bad idea is confusing, since if there's a security hole in, say, Plex, like the one a while back that exposed user information, iXsystems becomes legally liable.
Plex is of course required to patch the bug, but if iXsystems doesn't update Plex for months after the fact, it is no longer only Plex's legal problem.

Not a smart move any way you look at it.

Two different things really (FreeBSD packages/ports vs. docker containers), but in both cases I don't believe iX has ever guaranteed support for third-party apps like Plex, and reading the EULA they are indemnified against situations like, for example, an old Plex port having a security hole, so I'm not clear on what your point is.

That said sounds like we can agree that it's a better solution now :)
 

mjtbrady

Cadet
Joined
Feb 8, 2022
Messages
4
@beardmann
I haven't tried this on TrueNAS yet, but making a guess based on generic Debian/Red Hat experience, it sounds like you do not have the BIOS boot and/or the EFI system partition on the new disk?

My TrueNAS OS disk has three partitions on it.

They have partition types:
  1. ef02 BIOS boot partition
  2. ef00 EFI system partition
  3. bf01 Solaris /usr & Mac ZFS
The first is needed for a BIOS (meaning non EFI) boot from a GPT partitioned disk.

The second is needed to boot using EFI and is a vfat file system. This cannot be RAIDed. It is usually (in Red Hat/Debian) mounted at /boot/efi, but isn't on TrueNAS Scale for some reason. You can just mount it and copy the contents over to the new disk once it is partitioned and has a filesystem.

The 3rd is for the boot-pool pool/vdev and is the rest of the disk.

I have notes somewhere on how to do all of this for a vanilla Debian/Red Hat system if needed.
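A dry-run sketch of the manual steps described above. The device names are assumptions (/dev/sda is the surviving mirror member, /dev/sdb the replacement), and the helper only prints the commands so nothing runs by accident:

```shell
# Dry-run sketch of manually preparing a replacement boot disk.
# /dev/sda = healthy mirror member, /dev/sdb = new disk (assumptions).
run() { echo "+ $*"; }   # print instead of execute; drop this to run for real

run sgdisk --replicate=/dev/sdb /dev/sda    # copy partition table sda -> sdb
run sgdisk --randomize-guids /dev/sdb       # fresh GUIDs so the disks stay distinct
run grub-install --target=i386-pc /dev/sdb  # BIOS boot code into the ef02 partition
run mkfs.vfat /dev/sdb2                     # new vfat EFI system partition
run mkdir -p /mnt/esp-old /mnt/esp-new
run mount /dev/sda2 /mnt/esp-old            # SCALE doesn't keep the ESP mounted
run mount /dev/sdb2 /mnt/esp-new
run cp -a /mnt/esp-old/. /mnt/esp-new/      # copy the EFI loader across
```

The partition numbering assumes the three-partition layout listed above; check with `sgdisk -p` on the healthy disk before running anything for real, since `--replicate` overwrites the target's partition table.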
 