TrueNAS SCALE 23.10.0 has been released!

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
There is one major issue for the situation where the Apps dataset is unencrypted but is located on a fully encrypted pool.
In the move from Angelfish to Bluefin, something happened that completely (and irretrievably) broke apps for me (and some other users)--and as part of the "upgrade", changes were made to the ix-applications dataset that meant I couldn't just roll back to the Angelfish boot environment and get my apps back. It seems the same has happened with Bluefin -> Cobia. Don't you think it'd be good to snapshot that dataset before making any changes to it, so that users can roll back if a bad thing happens again?
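Until the updater does this itself, taking that snapshot by hand before a major upgrade is a one-liner. A minimal sketch, assuming the apps pool is named tank (adjust the names to your system; the actual zfs calls are commented out so this is a dry run):

```shell
# Hypothetical pool name 'tank'; uncomment the zfs lines to execute for real.
SNAP="tank/ix-applications@pre-cobia-$(date +%Y%m%d)"

# zfs snapshot -r "$SNAP"   # recursive snapshot of the apps dataset before upgrading
# zfs rollback -r "$SNAP"   # roll back if the upgrade breaks apps (destroys newer data!)
echo "$SNAP"
```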
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
In the move from Angelfish to Bluefin, something happened that completely (and irretrievably) broke apps for me (and some other users)--and as part of the "upgrade", changes were made to the ix-applications dataset that meant I couldn't just roll back to the Angelfish boot environment and get my apps back. It seems the same has happened with Bluefin -> Cobia. Don't you think it'd be good to snapshot that dataset before making any changes to it, so that users can roll back if a bad thing happens again?


That would be a good best practice... especially for major ones.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
I'm not too fond of this message. It indicates that the users affected by the apps not starting issue in the latest release did something wrong.

My whole pool is encrypted. And if I recall correctly, I didn't create the ix-applications dataset myself - the Apps automatic setup did that.
I'm pretty sure there was no warning triggered at any step that would tell me that I did something wrong.


And here's the interesting part - I ran RC1! And my apps were running stable there (I rebooted the server multiple times, and every time things started correctly).
It's the upgrade to the final release that broke things, which means the bug was introduced there.

That should never happen. There should be hardly any changes between RC and final - no new features introduced, no non-critical changes.

I really don't want anyone at iXsystems to feel bad. I paid nothing to anyone and I have no right to demand anything. All I ask is that you not blame it on users.

It's not that anyone did something wrong... it's just not a recommended config because it is more complex. We'd recommend encrypting a major "parent" dataset that is a peer of the Apps dataset.

There are a lot (billions) of permutations of how TrueNAS is configured and we don't even pretend that our QA tests a large percentage of these configurations. Mistakes can be made and we catch them when an early adopter reports the issue. Our goal is to make fewer mistakes and correct issues ASAP.

We use the software status page to let users know whether a software version is recommended for general or conservative use. Until then, it's testers and early adopters. SCALE 23.10.0 is recommended for testers and early adopters only.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
It'd be a better practice yet if the updater did it automatically.
It's an interesting suggestion... please make it, along with your war stories.
I've put a comment in the Jira feature request.

Basically, any ABE (Alternate Boot Environment) creation should be accompanied by a matching ix-applications dataset snapshot. Then, any rollback to an older ABE should automatically clone that ix-applications dataset snapshot and use the clone.

That should allow a full back-out.


Of course, if a user/admin enabled a new ZFS pool feature that is not supported by the older version of SCALE, that is on them for being hasty.
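The snapshot-on-create, clone-on-rollback flow proposed above can be sketched as a dry run. A minimal sketch with a hypothetical pool name 'tank'; the functions only print the zfs commands an implementation would run, rather than executing them:

```shell
# Dry-run sketch: print, don't execute. 'tank' is a hypothetical pool name.
BE="23.10.0"   # name of the new boot environment

snapshot_apps() { echo "zfs snapshot -r tank/ix-applications@be-$1"; }
clone_apps()    { echo "zfs clone tank/ix-applications@be-$1 tank/ix-applications-$1"; }

# On ABE creation: take a matching apps snapshot.
snapshot_apps "$BE"
# On rollback to an older ABE: clone its snapshot and use the clone,
# so the newer apps state is preserved as well.
clone_apps "$BE"
```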
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
I'm sorry, my fault. It looks like I should have done more research before switching to Scale.
No-one's fault. You were unlucky to find one of our mistakes. Software quality is a never-ending battle. We can always get better, but can never be perfect.
 

asap2go

Patron
Joined
Jun 11, 2023
Messages
228
Am I the only one affected by apps not starting after the upgrade? I upgraded from 23.10-RC.1 and I see an "Initializing Apps Service" message with a spinning wheel. Moreover, kubectl shows that there are no ix-* namespaces:

Code:
$ k3s kubectl get ns
NAME              STATUS   AGE
default           Active   8h
kube-system       Active   8h
kube-public       Active   8h
kube-node-lease   Active   8h
openebs           Active   8h
No. Most likely you had your unencrypted ix-applications dataset within an encrypted root dataset.
Roll back to Bluefin and restore the ix-applications dataset from snapshots.
Note that if you set up your apps again under Cobia and then reboot, you'll be stuck all over again.
A bug ticket has already been filed.
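For anyone doing that restore by hand, it's a list-then-rollback operation. A minimal sketch with hypothetical names ('tank' is your pool; pick whichever snapshot predates the Cobia upgrade); the destructive command is commented out on purpose:

```shell
# Hypothetical pool name 'tank'; adjust to your system.
DATASET="tank/ix-applications"

# zfs list -t snapshot -r "$DATASET"             # list what you can roll back to
# zfs rollback -r "$DATASET@<pre-upgrade-snap>"  # WARNING: destroys newer data
echo "$DATASET"
```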
 

Finallf

Dabbler
Joined
Jul 27, 2023
Messages
10
I was using RC1. After the update, which had to be done manually with the .update file, my pool started to be exported.
When I went to import the pool, I received this error:
(sqlite3.IntegrityError) UNIQUE constraint failed: storage_volume.vol_name [SQL: INSERT INTO storage_volume (vol_name, vol_guid) VALUES (?, ?)] [parameters: ('Storage', '10900901337770132667')] (Background on this error at: https://sqlalche.me/e/14/gkpj)
When I click on more info:
Error: Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
self.dialect.do_execute(
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 736, in do_execute
cursor.execute(statement, parameters)
sqlite3.IntegrityError: UNIQUE constraint failed: storage_volume.vol_name

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/middlewared/job.py", line 426, in run
await self.future
File "/usr/lib/python3/dist-packages/middlewared/job.py", line 464, in __run_body
rv = await self.method(*([self] + args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 177, in nf
return await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 44, in nf
res = await f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/import_pool.py", line 181, in import_pool
pool_id = await self.middleware.call('datastore.insert', 'storage.volume', {
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1398, in call
return await self._call(
^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1341, in _call
return await methodobj(*prepared_call.args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 177, in nf
return await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/plugins/datastore/write.py", line 62, in insert
result = await self.middleware.call(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1398, in call
return await self._call(
^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1352, in _call
return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1251, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/plugins/datastore/connection.py", line 106, in execute_write
result = self.connection.execute(sql, binds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1365, in execute
return self._exec_driver_sql(
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1669, in _exec_driver_sql
ret = self._execute_context(
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context
self._handle_dbapi_exception(
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2124, in _handle_dbapi_exception
util.raise_(
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
self.dialect.do_execute(
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 736, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: storage_volume.vol_name
[SQL: INSERT INTO storage_volume (vol_name, vol_guid) VALUES (?, ?)]
[parameters: ('Storage', '10900901337770132667')]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
The Pool works again and I have access to the data, but SMB complains:
SMB shares have path-related configuration issues that may impact service stability: Share: Path does not exist.
2023-10-30 10:14:59 (America/Sao_Paulo)
And when I restart TrueNAS, everything starts over: it comes back without the pool, as if I had just updated.
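For what it's worth, the UNIQUE constraint failure in that traceback can be reproduced in isolation: on import, the middleware tries to INSERT a row for 'Storage' into storage_volume, but a row with that vol_name is already there and the column is UNIQUE. A minimal sketch, assuming the sqlite3 CLI is available; the table shape here is inferred from the error message, not the real middleware schema:

```shell
# Minimal repro of the failure mode; hypothetical table shape, not the real schema.
DB="$(mktemp)"
sqlite3 "$DB" "CREATE TABLE storage_volume (vol_name TEXT UNIQUE, vol_guid TEXT);"
sqlite3 "$DB" "INSERT INTO storage_volume VALUES ('Storage', '10900901337770132667');"
# Importing the pool effectively tries to insert the same vol_name again:
sqlite3 "$DB" "INSERT INTO storage_volume VALUES ('Storage', '10900901337770132667');" \
  2>&1 | grep "UNIQUE constraint failed"
```

That's consistent with a stale database row for the pool surviving the upgrade, which would be a middleware bug to report rather than something to fix by hand-editing the config database.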
 

Finallf

Dabbler
Joined
Jul 27, 2023
Messages
10
Another thing: even though the UPS service is turned off, it comes back on after restarting.
 

gwaitsi

Patron
Joined
May 18, 2020
Messages
243
I upgraded from Bluefin 3.x to 4.x and then upgraded to 23.10.
I don't have encrypted drives, but I am getting this error too.
(Not to mention the screwed-up datasets in the UI, i.e. duplicates at root level for a sublevel; clicking one highlights both.)

That will learn me. Next time I'll wait until .2 minimum.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
I upgraded from Bluefin 3.x to 4.x and then upgraded to 23.10.
I don't have encrypted drives, but I am getting this error too.
(Not to mention the screwed-up datasets in the UI, i.e. duplicates at root level for a sublevel; clicking one highlights both.)

That will learn me. Next time I'll wait until .2 minimum.
Start a separate thread with details.... sounds like an unrelated issue.
 

Finallf

Dabbler
Joined
Jul 27, 2023
Messages
10
Is there anything I can do about this? I have 2 pools, one with 4 disks and the other with just one NVMe for less important things.
The pool on the NVMe returns normally after restarting, but the one with 4 disks does not; it always comes back with the disks exported and the pool disconnected.
If I import it, it works again and I have access to the data, but when I restart, the pool is exported again and I have to import it again.

I was using RC1, after the update that had to be done manually with the .update file, my pool started to be exported.
[full UNIQUE constraint traceback quoted from my earlier post above]
 