Bluefin Upgrade Discussion

rollee

Cadet
Joined
Sep 18, 2021
Messages
4
I ran into the same issue.
We updated our second storage from 22.02.4 to 22.12.0. After the update I get this error:

New alerts:

  • Disk(s): sdaa, sdz, sdah, sdag, sdao, sdak, sdai, sdaf, sdad, sdac, sdaj, sdu, sdam, sdan, sdaq, sdar, sdas, sdav, sdax, sdap, sdau, sdat, sdal, sdaw, sdj, sdd, sdb, sda, sdf, sdc, sdm, sdg, sdk, sdi, sde, sdh, sdr, sdx, sdw, sdl, sdab, sdv, sdae, sdq, sdy, sdp, sds, sdt, sday, sdaz, sdbb, sdbf, sdbc, sdbe, sdbh, sdba, sdbj, sdbd, sdbi, sdbg, sdbk, sdbl, sdbu, sdbv, sdbm, sdbp, sdbq, sdbo, sdbn, sdbr, sdbs, sdbt, sdn, sdo, sdcf, sdcg, sdch, sdci, sdcr, sdcs, sdcj, sdck, sdcl, sdcm, sdcn, sdco, sdcp, sdcq, sdbx, sdby, sdbz, sdca, sdcb, sdcc, sdcd, sdce are formatted with Data Integrity Feature (DIF) which is unsupported.

It seems everything is working.
I'm seeing the same issue, and everything still seems to be functioning normally. Strangely, a search turned up only two posts for this error (including this one), and neither had any replies. From what I remember, my drives were formatted when I first installed Scale 22.02.1.
 

rollee

Cadet
Joined
Sep 18, 2021
Messages
4
UPDATE: A Google search returned one new result, https://ixsystems.atlassian.net/browse/NAS-119271, which indicates these are T10 formatted drives, which "does not play well with ZFS and is unneeded since ZFS has it's own checksumming." What kind of complications should we expect if we were to leave the drives as they are, since everything seems to be functioning now?
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
Depending on the specific disks and what firmware is publicly available, you may be able to flash them from 520-byte to 512-byte sectors, but without knowing the specifics I'm not sure you will be able to use the drives safely.
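In case it helps anyone assess their own drives first: if the sg3_utils tools are available on the system, they can report the current logical block size and whether protection information (DIF) is enabled, and can also do the low-level reformat. This is only a rough sketch; /dev/sdX is a placeholder, support depends on the drive and firmware, and a format wipes the disk and can take many hours.
Code:
# Report the logical block size and whether protection information (DIF) is enabled
sg_readcap -l /dev/sdX

# Low-level format to 512-byte sectors with protection information disabled
# WARNING: destroys all data on the drive and can run for several hours
sg_format --format --size=512 --fmtpinfo=0 /dev/sdX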
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
UPDATE: A Google search returned one new result, https://ixsystems.atlassian.net/browse/NAS-119271, which indicates these are T10 formatted drives, which "does not play well with ZFS and is unneeded since ZFS has it's own checksumming." What kind of complications should we expect if we were to leave the drives as they are, since everything seems to be functioning now?

No problems are known.
If you want to replace 1 drive at a time and reformat, you can.
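For reference, outside of the TrueNAS UI the one-drive-at-a-time approach looks roughly like the outline below; the pool name and device names are placeholders, and the Storage screen in the UI remains the supported way to offline and replace pool members.
Code:
# Rough outline only; "tank" and sdX are placeholders
zpool offline tank sdX                               # take one member disk out of the pool
sg_format --format --size=512 --fmtpinfo=0 /dev/sdX  # reformat it (destroys its contents)
zpool replace tank sdX sdX                           # resilver the reformatted disk back in
zpool status tank                                    # wait for the resilver to finish before the next disk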
 

MR.B

Cadet
Joined
Dec 15, 2022
Messages
6
Can't replace the disk.
 

bonfire62

Dabbler
Joined
Nov 27, 2022
Messages
21
I ran into the same issue.

I'm seeing the same issue, and everything still seems to be functioning normally. Strangely, a search turned up only two posts for this error (including this one), and neither had any replies. From what I remember, my drives were formatted when I first installed Scale 22.02.1.
Having the same issue here: all my drives are gone from my main pool, and the datasets have been removed.
 

bonfire62

Dabbler
Joined
Nov 27, 2022
Messages
21
more info:
The pool still shows up in zpool, so the pants-pooping has slowed:
Code:
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
VM          460G  62.0G   398G        -         -     6%    13%  1.00x    ONLINE  /mnt
boot-pool   111G  5.35G   106G        -         -     1%     4%  1.00x    ONLINE  -
main_pool  72.7T  21.3T  51.4T        -         -     0%    29%  1.00x    ONLINE  /mnt
 
Last edited:

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,041
@bonfire62 please post the output of zpool status -v main_pool. From the screenshot, you have mixed capacity disks, which should not matter. Also, can you please surround the above terminal output with [CODE]text[/CODE] tags, for readability?

I created this troubleshooting thread for the DIF warning; let's continue the discussion there. cc @rollee @MR.B
the pants-pooping has slowed
I love your humor! :grin:
 
Last edited:

bonfire62

Dabbler
Joined
Nov 27, 2022
Messages
21
@bonfire62 When you run zpool status main_pool, do you see the list of attached disks? From the screenshot, you have mixed capacity disks, which should not matter. What happens when you press Manage Devices in the UI? Please show us what you see.
I got scared and I'm downgrading to my old boot image; it's still coming back up :(.
I hadn't mixed different capacity drives in the main_pool; for some reason it had added a fourth vdev from drives that I had added to the JBOD but not yet allocated to a vdev. Looking at the issue above, I think what I'm seeing comes from the ?metadata? volume under /dev/ that originally came from the 520b vs 512b format issue shown above.
.....downgrading by switching boot images should work.....right?
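For what it's worth, each update lives in its own boot environment, which is just a ZFS dataset under boot-pool/ROOT; listing them is a quick, read-only way to confirm the old image is still there (activating one is done from the Boot screen in the UI; this is only a sketch).
Code:
# Boot environments are datasets under boot-pool/ROOT;
# each update creates a new one that can be activated for rollback
zfs list -r boot-pool/ROOT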
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Also, can you please surround the text output with [CODE]text[/CODE], for readability?

Please note that I've closed your edit request. CODE tags have to be present at the time of posting. If not, the forumware loses the "superfluous" whitespace and condenses it all down to single spaces. While I could sit there and manually re-add all the spaces, .... I'm not going to.
 

kofeyh

Dabbler
Joined
Jun 20, 2022
Messages
10
It's probably a general check to make sure the user is aware of the risk. It's not done per app.
This appears to be the case. I have three apps that use host paths, but only one path is 'shared' with another service (SMB), and all three refuse to deploy. I am not sure what the intended outcome is; the actual outcome is the temptation to disable an otherwise good feature purely because it appears to be a global setting, which is exactly what I have done in the interim.

I now have to decide which is more important: access to a host path from the app only (which I am sure will be the only supported option), or access from both the app and externally (e.g. for Plex), with the risk that comes with turning the option off.
 

ajpaterson

Cadet
Joined
Dec 19, 2022
Messages
1
First post!

Now that Bluefin supports non-root administrative users (and appears to require you to create at least one non-root local admin user to avoid the "root password disabled" warning message), is there any way to pull admin credentials from my Active Directory's Domain Administrators group? It would save significant time setting up and keeping secure passwords up to date on each server that runs or gets upgraded to Bluefin. The workaround right now, to let other admins keep their own passwords secret, is to share the root user password (which in itself seems incredibly unsafe) and then have each admin log in and create their own root/builtin_administrators account and password.
 

truefriend-cz

Explorer
Joined
Mar 4, 2022
Messages
54
Where can I allow a user other than root to log in to the web UI?

After upgrading to Bluefin I get this notification:
Root user has their password disabled, but as there are no other users granted with a privilege of Local Administrator, they can still log in to the Web UI. Please create a separate user for the administrative purposes in order to forbid root from logging in to the Web UI.

so I guess I have to create a user other than root to manage TrueNAS, including logging in to the web console.
 

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,041
and appears to require you to create at least one non-root local admin user
It does not; you get a warning message if root has its password disabled in the UI or does not have the builtin_administrators aux group assigned. However, in Bluefin you can now log in with a non-root local user, which is new. Obviously there is a reasoning behind all this; iX should share more details. So far, there is only one small hint.
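For anyone who prefers a shell over the Credentials > Local Users screen, a rough outline with the middleware client is below. The field names are based on my reading of the middleware API and may differ between releases, and the group ID is a placeholder, so treat this as a sketch rather than a recipe; the UI is the supported route.
Code:
# Look up the builtin_administrators group; note the "id" field in the result
midclt call group.query '[["group", "=", "builtin_administrators"]]'

# Create a local admin user in that group; 545 is a placeholder for the "id"
# value returned above, and the field names may vary between releases
midclt call user.create '{"username": "admin", "full_name": "Local Admin", "password": "changeme", "group_create": true, "groups": [545]}'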
 

dublea

Dabbler
Joined
May 27, 2015
Messages
33
Bug in Scale 22.12.0?
My TrueNAS Scale is running under Proxmox VE (7.3-3) with virtio-scsi drives. I upgraded from Core to Scale 22.02.4 some months ago without any problems. Now I am on 22.12.0 and get the following message:

Every reboot gives me this message again.
Running cat /proc/mdstat after startup does not show this problem:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>

zpool gives no errors and all pools are up to date.
Switching back to 22.02.4 does not generate this message during startup.
Any hints on what to do (I'm running 22.02.4 again), or anyone with similar problems?
Seeing the same thing reported, so far, by two users on r/TrueNAS on Reddit:


Anyone know if a bug has been reported?
 

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,041
Seeing the same thing reported, so far, by two users on r/TrueNAS on Reddit
A newer kernel version is used in Bluefin, which produces useful system-related warnings. See this thread for warnings related to badly formatted disks that surfaced in Bluefin, for example. Basically, it is telling you that whatever HBA you have installed is not fully compatible with Linux distributions using more recent kernels. That's one of the reasons why I did not purchase a Dell R730xd and went with a Dell R720xd instead. To my knowledge, the H730 PERC cannot be flashed to IT mode.
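If anyone wants to confirm what the newer kernel is actually driving the storage with, a generic check is below; the grep pattern is only a rough guess at matching the controller's description, and in a Proxmox VM it will show the virtio controller rather than a physical HBA.
Code:
# Kernel version currently running
uname -r

# Storage controllers and the kernel driver bound to each
lspci -k | grep -EiA3 'sas|scsi|raid'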
 
Last edited:

Astraea

Dabbler
Joined
Sep 7, 2019
Messages
28
I am trying to update my Core install, which is running the latest stable version, to the latest stable version of Scale, but when the install gets to 95% I get the following error: Error: [EFAULT] Non-matching first partition types across disks. I have checked all of my boot and data disks and there are no issues or errors, and I have rebooted the server to confirm everything is working correctly with Core. Other than backing up the configuration and installing from scratch, is there a way to get past this error and get the upgrade to complete?
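The message reads as if the installer found disks whose first partitions are of different types (for example, boot disks partitioned by different installer versions, though that is only a guess). Since the system is still on Core (FreeBSD), gpart is the tool for comparing them before trying the upgrade again:
Code:
# Show the partition scheme and partition types on every disk
# so the first partitions can be compared across disks
gpart show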
 