When will OpenZFS 2.1.10 be available as an update?

thex · Dabbler · Joined Apr 3, 2023 · Messages: 14
I'm on TrueNAS Core 13, and my current investigation of some kernel panics indicates that I might be affected by this bug:
(also discussed here: https://www.truenas.com/community/threads/sos-kernel-panic-help.104524/)

It looks like the fix was merged into 2.1.10, which was released three days ago.
Now I'm wondering how long it will take until I can update to it.

Days? Weeks? Months? I'm not familiar with Truenas release practices.
It would be great to know; I've already had several of these panics.

Thanks
 

Ericloewe · Server Wrangler · Moderator · Joined Feb 15, 2014 · Messages: 20,194
The best answer to your specific question is "file a ticket and make your case for urgency".

That said, from what little I've seen of that (no time to dig into it at the moment), @mav@ says it's likely due to a corrupted pool, which is not a good position to be in and may be unrecoverable, in the sense that you may need to recreate the pool.
 

thex · Dabbler · Joined Apr 3, 2023 · Messages: 14
The behavior I'm seeing is slightly different, so I'm keeping my fingers crossed that it's not a corrupted pool.

A scrub of the pool is running, so let's see if there is a problem (I'm still new to ZFS; I hope it can detect any issues).
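For reference, a scrub can be started and checked with the standard zpool commands; a minimal sketch, using the pool name from my system:

Code:
# start a scrub, which reads every block in the pool and verifies checksums
zpool scrub thex_nas_6tb

# check scrub progress and any errors found so far
zpool status -v thex_nas_6tb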

However, new hard drives are on their way.
 

jgreco · Resident Grinch · Joined May 29, 2011 · Messages: 18,680
Days? Weeks? Months? I'm not familiar with Truenas release practices.

Generally months or longer. There is no desire for TrueNAS CORE to be on the bleeding edge of ZFS, as this is an enterprise product and not some homebrew Linux project storing pictures from the birdfeeder camera in your backyard. It is possible that specific, well-understood patches might be backported into CORE, and this is more likely when someone like amotin is doing the work. However, if the panic is the result of a corrupted pool, you will almost certainly need to back up and restore the pool. Once faults are introduced into a pool, they are hard to expunge.
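To illustrate the backup-and-restore path, a minimal sketch using a recursive snapshot and zfs send/receive, assuming a destination pool named backup_pool (a placeholder) already exists on the new drives:

Code:
# take a recursive snapshot of every dataset in the pool
zfs snapshot -r thex_nas_6tb@migrate

# send the whole pool, including child datasets and their properties,
# to the destination pool (backup_pool is a placeholder name)
zfs send -R thex_nas_6tb@migrate | zfs receive -F backup_pool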
 

thex · Dabbler · Joined Apr 3, 2023 · Messages: 14
OK, sure, that makes sense. Is there some kind of pre-release build I could switch to, or could I manually install/compile the new ZFS version?

But ZFS should detect that there is a problem, right?

Code:
truenas% zpool status -v thex_nas_6tb
  pool: thex_nas_6tb
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 07:47:36 with 0 errors on Tue Apr 18 07:15:28 2023
config:

        NAME                                            STATE     READ WRITE CKSUM
        thex_nas_6tb                                    ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/c1ccf86d-91a6-11eb-9f95-471895f2e5c0  ONLINE       0     0     0
            gptid/c2116de8-91a6-11eb-9f95-471895f2e5c0  ONLINE       0     0     0

errors: No known data errors
 

sretalla · Powered by Neutrality · Moderator · Joined Jan 1, 2016 · Messages: 9,703
Does that pool contain an SMR disk (or two)?

I note that you have talked about having some of those in one of your other threads.

New versions of ZFS won't make it work better with SMR.
 

thex · Dabbler · Joined Apr 3, 2023 · Messages: 14
Yes, correct, the disks are currently SMR. That's why new disks are on their way now.

However, tonight there really seems to have been a problem. I'm now wondering if ZFS can recover from that, or at least tell me which file is the problem.
[Attached screenshots: IMG_1849.jpeg, IMG_1850.jpeg]


The drives should be here tomorrow.
 

sretalla · Powered by Neutrality · Moderator · Joined Jan 1, 2016 · Messages: 9,703
zpool status -v should now list the impacted file(s)
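In other words, run the command below; if any files are permanently damaged, they are listed at the end of the output. This is an illustrative sketch: the exact wording varies by ZFS version, and the file path shown is a placeholder.

Code:
zpool status -v thex_nas_6tb

# damaged files, if any, appear at the end under a section like:
#   errors: Permanent errors have been detected in the following files:
#           /mnt/thex_nas_6tb/path/to/affected-file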
 

thex · Dabbler · Joined Apr 3, 2023 · Messages: 14
Thanks, that was almost too obvious. It seems to be only one file for now; let's see.
 