Swap gets rebuilt every boot?

MountainMan

Dabbler
Joined
Dec 10, 2020
Messages
42
I'm curious if I'm an outlier here, but I'm seeing all the RAID1 swap partitions get rebuilt on every boot -- even clean restarts. Is that normal, and if so, is there some reason?

Thanks!
 

MountainMan

Dabbler
Joined
Dec 10, 2020
Messages
42
For reference...

TrueNAS-scale-swap.png
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
Is that normal, and if so is there some reason?
Yes.

SWAP is reserved space used to temporarily swap memory out to disk if memory should ever become critically full, giving the system a chance to properly log things and, in some cases, to get itself out of trouble.

A server running normally will never use it.

Its content is therefore not permanent, and the assessment of where it should be placed (not all SWAP candidate partitions are necessarily used) is made at boot and can also be re-evaluated on disk failure.
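If you want to see which devices actually ended up holding swap on a given boot, something like this should show it (on SCALE the swap typically sits on md mirrors, possibly behind an encryption layer, so the exact device names will vary per system):
Code:
swapon --show        # lists the devices currently in use as swap
cat /proc/mdstat     # shows the md mirrors backing them and any resync in progress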
 

MountainMan

Dabbler
Joined
Dec 10, 2020
Messages
42
Thanks, I get that. But I'm still curious why the swap MD RAID1 devices are marked unclean and reconstructed on each boot -- even with no disk configuration changes and a clean reboot. It's as though they aren't cleanly unmounted, or maybe can't be for some reason?
 
Joined
Oct 22, 2019
Messages
3,641
Maybe it's a Linux/SCALE thing?

At bootup on TrueNAS Core (FreeBSD), this is the only mention of swap I get:
Code:
GEOM_MIRROR: Device mirror/swap0 launched (2/2).
GEOM_ELI: Device mirror/swap0.eli created.
GEOM_ELI: Encryption: AES-XTS 128
GEOM_ELI:     Crypto: hardware

GEOM_MIRROR: Device mirror/swap1 launched (2/2).
GEOM_ELI: Device mirror/swap1.eli created.
GEOM_ELI: Encryption: AES-XTS 128
GEOM_ELI:     Crypto: hardware

GEOM_MIRROR: Device mirror/swap2 launched (2/2).
GEOM_ELI: Device mirror/swap2.eli created.
GEOM_ELI: Encryption: AES-XTS 128
GEOM_ELI:     Crypto: hardware

Simple and to the point.

(swap0 is from my boot drives, while swap1 and swap2 are from my data drives.)
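For anyone on CORE who wants to confirm the state of those mirrors after boot, the stock FreeBSD tools should show them (assuming the default swapN mirror naming) -- a minimal sketch:
Code:
# mirrors and their sync state
gmirror status
# swap devices actually in use (the .eli devices)
swapinfo -h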
 

neofusion

Contributor
Joined
Apr 2, 2022
Messages
159
I also see this in dmesg. It looks bad, even if it's by design.
I'm not sure I'm interpreting the log correctly, but it looks like some time is wasted cleaning them at boot.
 

MountainMan

Dabbler
Joined
Dec 10, 2020
Messages
42
Thanks... Since I'm not alone, I filed a bug. It seems unintended and definitely adds to boot time (even on a decent machine) if there are a bunch of spinning drives.
 

MountainMan

Dabbler
Joined
Dec 10, 2020
Messages
42
Updated to 22.02.3, but this is still an issue for me. It takes over 15 minutes to reboot while it rebuilds the encrypted RAID1 swap partitions (presumably because they aren't cleanly unmounted?). The fun part is that I have to reboot twice after updates because of another issue with RRD data for some reason.

Anyway, if anyone else has veeeeerrrry long boot times for Scale, I have an issue here: https://ixsystems.atlassian.net/browse/NAS-116662

You can check whether your swap is being rebuilt by running
Code:
dmesg | grep 'not clean'
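If that turns anything up, you can also watch the rebuild itself. The md device numbers vary per system, so treat these as examples:
Code:
cat /proc/mdstat              # shows resync progress for each md device
mdadm --detail /dev/md127     # detailed state of one swap mirror (example device name)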
 

bzb-rs

Cadet
Joined
Aug 4, 2022
Messages
4
Same for our system, running TrueNAS-22.02.2.1 on EPYC. We were curious whether the same thing was happening here, and as it turns out, the software RAID is for the swap.

Code:
root@truenas[~]# dmesg | grep 'not clean'
[ 26.988091] md/raid1:md127: not clean -- starting background reconstruction
[ 27.021430] md/raid1:md126: not clean -- starting background reconstruction
[ 27.067276] md/raid1:md125: not clean -- starting background reconstruction

Running RAIDZ on 6 disks.
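If you want to confirm those md devices really are the swap mirrors and see which partitions back them, something along these lines should do it (device names are just examples taken from the log above; an encryption layer may sit on top of the mirror):
Code:
lsblk -o NAME,TYPE,FSTYPE /dev/md125    # should show swap (or a crypt layer) on the mirror
mdadm --detail /dev/md125               # lists the member partitions of the mirror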
 