Process 2561462 (smartctl) of user 0 dumped core.

HITMAN

Dabbler
Joined
Nov 20, 2021
Messages
33
Hello guys,
since yesterday I've been receiving the error shown below on my SSH console. I checked the disks, but SMART reports no errors on any of them. I'm using the latest SCALE version.

Process 2561462 (smartctl) of user 0 dumped core.
Stack trace of thread 2561462:
#0 0x00007fb9cb4b1ce1 __GI_raise (libc.so.6 + 0x3bce1)
#1 0x00007fb9cb49b537 __GI_abort (libc.so.6 + 0x25537)
#2 0x00007fb9cb4f4768 __libc_message (libc.so.6 + 0x7e768)
#3 0x00007fb9cb4fba5a malloc_printerr (libc.so.6 + 0x85a5a)
#4 0x00007fb9cb4fcc14 _int_free (libc.so.6 + 0x86c14)
#5 0x000055631149ed11 n/a (smartctl + 0x58d11)
#6 0x00007fb9cb4b44d7 __run_exit_handlers (libc.so.6 + 0x3e4d7)
#7 0x00007fb9cb4b467a __GI_exit (libc.so.6 + 0x3e67a)
#8 0x00007fb9cb49cd11 __libc_start_main (libc.so.6 + 0x26d11)
#9 0x0000556311474f2a n/a (smartctl + 0x2ef2a)
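Worth noting: the trace itself does not point at the disks. Frames #3 and #4 (malloc_printerr, _int_free) reached from frame #6 (__run_exit_handlers) mean glibc detected an invalid free() while smartctl was running its exit handlers, i.e. a bug in smartctl's cleanup path rather than a storage failure. A minimal sketch that checks a saved trace for this signature (the trace.txt filename and the embedded fallback frames are illustrative only, not part of any TrueNAS tooling):

```shell
#!/bin/sh
# Check a saved smartctl stack trace for the glibc abort signature seen in
# this thread: malloc_printerr/_int_free reached from __run_exit_handlers
# means glibc caught a bad free() while smartctl was exiting -- a smartctl
# bug, not a failing disk. "trace.txt" is a hypothetical filename.
trace_file="${1:-trace.txt}"
if [ ! -f "$trace_file" ]; then
    # Fall back to the frames quoted in this thread, for demonstration.
    trace_file=$(mktemp)
    cat > "$trace_file" <<'EOF'
#3 0x00007fb9cb4fba5a malloc_printerr (libc.so.6 + 0x85a5a)
#4 0x00007fb9cb4fcc14 _int_free (libc.so.6 + 0x86c14)
#6 0x00007fb9cb4b44d7 __run_exit_handlers (libc.so.6 + 0x3e4d7)
EOF
fi
if grep -q malloc_printerr "$trace_file" \
        && grep -q __run_exit_handlers "$trace_file"; then
    echo "glibc heap-corruption abort during exit handlers"
else
    echo "different crash signature"
fi
```

Run against the trace above it prints the heap-corruption message, which is consistent with SMART itself reporting the disks as healthy.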
 

李仁博

Cadet
Joined
Jul 3, 2022
Messages
3
I have the same problem.
Yesterday I updated to TrueNAS-SCALE-22.02.2.
I have a long S.M.A.R.T. test scheduled, triggered by cron.

Today, when I woke up, I was getting this console message every 5 minutes:

2022 Jul 4 12:06:07 truenas Process 4032147 (smartctl) of user 0 dumped core.

Stack trace of thread 4032147:
#0 0x00007f0d759a1ce1 __GI_raise (libc.so.6 + 0x3bce1)
#1 0x00007f0d7598b537 __GI_abort (libc.so.6 + 0x25537)
#2 0x00007f0d759e4768 __libc_message (libc.so.6 + 0x7e768)
#3 0x00007f0d759eba5a malloc_printerr (libc.so.6 + 0x85a5a)
#4 0x00007f0d759ecc14 _int_free (libc.so.6 + 0x86c14)
#5 0x000055697d71ed11 n/a (smartctl + 0x58d11)
#6 0x00007f0d759a44d7 __run_exit_handlers (libc.so.6 + 0x3e4d7)
#7 0x00007f0d759a467a __GI_exit (libc.so.6 + 0x3e67a)
#8 0x00007f0d7598cd11 __libc_start_main (libc.so.6 + 0x26d11)
#9 0x000055697d6f4f2a n/a (smartctl + 0x2ef2a)

Debug attached, generated via System -> Advanced -> Save Debug.
 

Attachments

  • debug-truenas-20220704115926.tgz
    8.2 MB · Views: 193

Fiberton

Cadet
Joined
Jul 9, 2022
Messages
3
If I recall correctly, this started in 22.02.1. I have been curious when this would eventually get resolved. This is on a Dell PowerEdge 720XD with 384 GB of RAM and dual 2687W v2s; not sure that makes a difference.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
If I recall correctly, this started in 22.02.1. I have been curious when this would eventually get resolved. This is on a Dell PowerEdge 720XD with 384 GB of RAM and dual 2687W v2s; not sure that makes a difference.
If you are experiencing this issue, feel free to file a Jira ticket and attach both the corefile and a system debug.
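For anyone gathering the material for such a ticket: the "dumped core" messages in this thread come from systemd-coredump, so the corefile can be retrieved with coredumpctl. A hedged sketch (run as root on the SCALE box; the output filename smartctl.core is just an example, and the script is guarded so it degrades gracefully on systems without coredumpctl):

```shell
#!/bin/sh
# Sketch: collect the most recent smartctl corefile for a bug report.
# Assumes systemd-coredump recorded the crash (it wrote the "dumped core"
# messages above); guarded so it degrades gracefully elsewhere.
if command -v coredumpctl >/dev/null 2>&1; then
    # List recorded smartctl crashes (may report that none were found).
    coredumpctl list smartctl || echo "no smartctl coredumps recorded"
    # Write the newest matching corefile to ./smartctl.core for the ticket.
    coredumpctl -o smartctl.core dump smartctl 2>/dev/null \
        || echo "nothing to dump"
else
    echo "coredumpctl not available on this system"
fi
```

The system debug itself is generated the same way as above, via System -> Advanced -> Save Debug.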
 

Fiberton

Cadet
Joined
Jul 9, 2022
Messages
3
I get the exact same error showing up in the shell for about 24 hours after an upgrade. After 12 to 24 hours it stops appearing there, but it may keep showing up in the task manager. If I reboot, the error no longer appears in the shell, though you can sometimes still see it happen in the task manager. On Saturday I updated to 22.02.2.1, and it kept happening into yesterday afternoon. /me shrugs. This also happens on another machine with a 10100 and 16 GB of RAM, nothing special; I noticed the same thing when I put SCALE on it. Hope the debug helps. The ZFS config has 90 drives, which I assume is why the debug is 19 MB.
 

Attachments

  • debug-Olympus-20220711112301.tgz
    19.1 MB · Views: 160