md(?!) constantly resyncing

jsclayton

Dabbler
Joined
Aug 27, 2020
Messages
15
Searched and didn't find anything related to this. SCALE 22.02.4. I noticed that the NVMe disks in a mirror were running super hot when nothing should be active on them, and the system log is scrolling endlessly with messages about md resyncing arrays.

I collected a debug dump while it's happening. I think a reboot will make it go away (for a while). What information can I provide to help track this down? Thanks!

Code:
[99639.913290] md127: detected capacity change from 0 to 2147418624
[99640.040992] md/raid1:md126: not clean -- starting background reconstruction
[99640.048991] md/raid1:md126: active with 3 out of 3 mirrors
[99640.056493] md126: detected capacity change from 0 to 2147418624
[99640.376021] md/raid1:md125: not clean -- starting background reconstruction
[99640.383928] md/raid1:md125: active with 3 out of 3 mirrors
[99640.391505] md125: detected capacity change from 0 to 2147418624
[99640.519682] md/raid1:md124: not clean -- starting background reconstruction
[99640.527698] md/raid1:md124: active with 3 out of 3 mirrors
[99640.535629] md124: detected capacity change from 0 to 2147418624
[99640.630267] md/raid1:md123: not clean -- starting background reconstruction
[99640.638134] md/raid1:md123: active with 3 out of 3 mirrors
[99640.645004] md123: detected capacity change from 0 to 2147418624
[99640.833735] md: resync of RAID array md127
[99640.889268] Adding 2097084k swap on /dev/mapper/md127.  Priority:-2 extents:1 across:2097084k FS
[99641.292645] md: resync of RAID array md126
[99641.441280] Adding 2097084k swap on /dev/mapper/md126.  Priority:-3 extents:1 across:2097084k FS
[99641.577285] md: resync of RAID array md125
[99641.681259] Adding 2097084k swap on /dev/mapper/md125.  Priority:-4 extents:1 across:2097084k FS
[99641.804027] md: resync of RAID array md124
[99641.901237] Adding 2097084k swap on /dev/mapper/md124.  Priority:-5 extents:1 across:2097084k FS
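One way to see which arrays are actively reconstructing is to filter /proc/mdstat for "resync" lines. A minimal sketch: the excerpt below is a hypothetical sample (device names and percentages are illustrative, not output from this system); on a live box you would pipe `/proc/mdstat` itself through the same awk filter.

```shell
# Hypothetical /proc/mdstat excerpt written to a temp file for illustration
cat <<'EOF' > /tmp/mdstat.sample
md127 : active raid1 nvme0n1p1[0] nvme1n1p1[1] nvme2n1p1[2]
      2097088 blocks super 1.2 [3/3] [UUU]
      [==>..................]  resync = 12.5% (262144/2097088) finish=0.3min
md126 : active raid1 nvme0n1p2[0] nvme1n1p2[1] nvme2n1p2[2]
      2097088 blocks super 1.2 [3/3] [UUU]
EOF

# Remember the most recent "mdNNN :" device line, and report it whenever a
# following status line mentions an in-progress resync
awk '/^md/ {name=$1} /resync/ {print name " is resyncing"}' /tmp/mdstat.sample
```

On a real system, `awk '/^md/ {name=$1} /resync/ {print name " is resyncing"}' /proc/mdstat` does the same thing, and `cat /sys/block/md127/md/sync_action` shows `idle` vs `resync` per array.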
 

Attachments

  • md.txt
    29.3 KB

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
These are the swap MD-RAID 1 devices that TrueNAS SCALE sets up.

In theory, Linux (which TrueNAS SCALE uses as its OS) should only check for valid devices and not need to re-sync these on every reboot. My current desktop uses Linux and MD-RAID 1 for both swap and "/boot" (with ZFS for the OS and other storage). Those arrays don't need to re-sync at boot.

If you have not already rebooted, DON'T; you might just trigger the "problem" again.


There does seem to be a difference in how TrueNAS SCALE sets up these swap mirror devices, compared to straight Linux.

Perhaps someone else can give further information on why SCALE re-syncs the swap mirrors on boot.
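In the meantime, if the heat is the immediate concern, the standard md speed-limit tunables can throttle how fast the kernel resyncs. A sketch only; the values below are illustrative (KB/s per device, requiring root), and these are generic Linux md knobs, not anything SCALE-specific:

```shell
# Cap resync bandwidth so the NVMe mirrors run cooler while investigating.
# Defaults are typically speed_limit_min=1000 and speed_limit_max=200000.
sysctl -w dev.raid.speed_limit_max=10000
sysctl -w dev.raid.speed_limit_min=1000

# Confirm the resync has slowed and watch its progress
cat /proc/mdstat
```

This only slows the resync down; it doesn't address why the arrays keep being marked not clean in the first place.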
 

jsclayton

Dabbler
Joined
Aug 27, 2020
Messages
15
To clarify, this behavior began after the system had been online for a little over a day, not at boot. I did end up rebooting to bring the temps down, which worked. Very strange behavior indeed.

I'm hoping there are clues in the debug dump; I'm not sure whether it's worth posting here or opening a ticket in Jira.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Weird that it started after one day. I don't have a clue why, sorry.
 