Hello,
I'm having a strange issue with CPU usage during a scrub; maybe I missed something... I'm far from being a FreeBSD guru, but I'm not exactly a complete beginner either.
Shortly after unlocking the drives during the boot process, the NAS seemed to hang (it refused connections over both SSH and the web interface), although I noticed some activity on the drives. After a few minutes I was eventually able to log in and saw very high CPU usage (which explains the login difficulties) while a scrub was in progress:
Code:
load averages: 57.84, 61.93, 64.13
Code:
$ zpool status -v zpool-raidz2
  pool: zpool-raidz2
 state: ONLINE
  scan: scrub in progress since Sun Jun 23 00:01:48 2019
        4.65T scanned at 740M/s, 3.43T issued at 546M/s, 5.23T total
        0 repaired, 65.69% done, 0 days 00:57:22 to go
Digging around a bit, RAM doesn't seem to be the issue; instead, the g_eli* processes appear to be eating all the CPU resources:
Code:
1 [|||||||||||||||||||||||||||||||||||||||||||93.9%] Tasks: 64, 0 thr, 82 kthr; 4 running
2 [|||||||||||||||||||||||||||||||||||||||||||95.4%] Load average: 69.08 63.72 64.56
3 [|||||||||||||||||||||||||||||||||||||||||||92.9%] Uptime: 01:56:56
4 [|||||||||||||||||||||||||||||||||||||||||||93.9%]
Mem[|||||||||||||||||                3.26G/15.9G]
Swp[                                    0K/10.00G]

 PID  USER PRI NI VIRT  RES  S CPU% MEM% TIME+    Command
   0  root -16  0    0 13120 S 116.  0.1 3h13:47  kernel
  11  root 155 52    0    64 R 12.2  0.0 16:34.01 idle
1133  root  27  0    0    16 R  8.2  0.0  6:58.63 g_eli[0] gptid/2292
1148  root  33  0    0    16 R  7.4  0.0  8:08.74 g_eli[3] gptid/26e9
1142  root  23  0    0    16 R  7.2  0.0  7:26.32 g_eli[3] gptid/24b5
1134  root  27  0    0    16 R  7.9  0.0  6:58.00 g_eli[1] gptid/2292
1172  root  23  0    0    16 R  6.9  0.0  7:07.13 g_eli[3] gptid/2fae
1154  root  34  0    0    16 R  7.5  0.0  8:07.13 g_eli[3] gptid/291e
1160  root  23  0    0    16 S  6.8  0.0  7:11.39 g_eli[3] gptid/2b3f
1145  root  32  0    0    16 R  6.0  0.0  7:03.86 g_eli[0] gptid/26e9
1178  root  23  0    0    16 R  6.0  0.0  7:08.32 g_eli[3] gptid/31dd
 264  root  72  0 260M  227M R  7.7  1.4  4:16.46 python3.6: middlewared
1171  root  27  0    0    16 R  5.7  0.0  6:43.79 g_eli[2] gptid/2fae
1177  root  27  0    0    16 R  5.7  0.0  6:46.41 g_eli[2] gptid/31dd
1140  root  25  0    0    16 R  5.5  0.0  7:03.45 g_eli[1] gptid/24b5
1159  root  27  0    0    16 R  6.0  0.0  6:55.63 g_eli[2] gptid/2b3f
1166  root  23  0    0    16 R  6.0  0.0  6:59.99 g_eli[3] gptid/2d60
1165  root  26  0    0    16 R  6.0  0.0  6:50.11 g_eli[2] gptid/2d60
1135  root  27  0    0    16 R  5.2  0.0  6:57.82 g_eli[2] gptid/2292
1141  root  26  0    0    16 R  6.1  0.0  6:59.26 g_eli[2] gptid/24b5
1151  root  32  0    0    16 R  5.3  0.0  6:57.22 g_eli[0] gptid/291e
1164  root  26  0    0    16 S  6.3  0.0  6:54.36 g_eli[1] gptid/2d60
1146  root  32  0    0    16 R  5.0  0.0  6:39.37 g_eli[1] gptid/26e9
1158  root  26  0    0    16 S  5.1  0.0  7:01.54 g_eli[1] gptid/2b3f
1153  root  32  0    0    16 R  5.4  0.0  6:20.88 g_eli[2] gptid/291e
1163  root  25  0    0    16 R  4.8  0.0  6:55.12 g_eli[0] gptid/2d60
1170  root  26  0    0    16 R  5.4  0.0  6:58.40 g_eli[1] gptid/2fae
1176  root  26  0    0    16 R  5.2  0.0  6:49.33 g_eli[1] gptid/31dd
1175  root  26  0    0    16 R  5.2  0.0  6:37.55 g_eli[0] gptid/31dd
1152  root  32  0    0    16 R  5.5  0.0  6:34.86 g_eli[1] gptid/291e
1157  root  27  0    0    16 S  4.9  0.0  6:43.60 g_eli[0] gptid/2b3f
1147  root  32  0    0    16 R  4.9  0.0  6:21.35 g_eli[2] gptid/26e9
1139  root  25  0    0    16 R  4.9  0.0  6:57.28 g_eli[0] gptid/24b5
1169  root  24  0    0    16 R  4.8  0.0  6:39.89 g_eli[0] gptid/2fae
I'm not entirely sure about this, but I believe these g_eli processes are related to the full-disk encryption.
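If it helps anyone reading this: from what I understand, GELI spawns a set of kernel worker threads per provider, and on FreeBSD the count is governed by the kern.geom.eli.threads sysctl (0 supposedly means one thread per CPU core, which would match the four g_eli[0..3] threads per gptid above). A quick way to inspect that, plus count how many providers report hardware crypto (just a sketch, the pattern matches the geli list output shown below):

```shell
# GELI worker threads per provider (0 = one per CPU core, if I read the docs right)
sysctl kern.geom.eli.threads

# Count providers whose crypto is reported as hardware-accelerated
geli list | grep -c 'Crypto: hardware'
```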
I checked and confirmed that my hardware is AES-NI capable and that the encryption is hardware-backed:
Code:
$ cat /var/run/dmesg.boot | grep aesni
aesni0: <AES-CBC,AES-XTS,AES-GCM,AES-ICM> on motherboard
$ geli list | grep -E '(Crypto|Encryption)'
EncryptionAlgorithm: AES-XTS
Crypto: hardware
EncryptionAlgorithm: AES-XTS
Crypto: hardware
EncryptionAlgorithm: AES-XTS
Crypto: hardware
EncryptionAlgorithm: AES-XTS
Crypto: hardware
EncryptionAlgorithm: AES-XTS
Crypto: hardware
EncryptionAlgorithm: AES-XTS
Crypto: hardware
EncryptionAlgorithm: AES-XTS
Crypto: hardware
EncryptionAlgorithm: AES-XTS
Crypto: hardware
EncryptionAlgorithm: AES-XTS
Crypto: hardware
EncryptionAlgorithm: AES-XTS
Crypto: hardware
EncryptionAlgorithm: AES-XTS
Crypto: hardware
EncryptionAlgorithm: AES-XTS
Crypto: hardware
EncryptionAlgorithm: AES-XTS
Crypto: hardware
So, that's it; I'm not sure what the issue could be at this point.
Is this CPU usage normal/expected with my hardware? I somehow doubt it, but who knows.
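In case someone wants to compare numbers: a rough userland AES throughput check with OpenSSL's built-in benchmark. This only measures raw cipher speed through the EVP layer (which uses AES-NI when available), not GELI's GEOM overhead, and the cipher here is CBC as a stand-in since GELI actually uses AES-XTS:

```shell
# Raw AES-NI throughput via OpenSSL's EVP benchmark
# (only a proxy: GELI uses AES-XTS and adds GEOM/kernel overhead on top)
openssl speed -evp aes-128-cbc
```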
Thanks to anyone taking a look at this...
** EDIT: My bad, I probably should have created this in the "General Questions and Help" subforum instead; feel free to move it. **