Excessive disk IO while idle - TrueNAS SCALE (22.02.2.1 -> 22.02.4)

lofgren

Cadet
Joined
Dec 10, 2014
Messages
8
Hi All, hoping for a little help or guidance on this.

Issue:
Constant disk IO that has become quite noisy while the NAS should be idle.
It typically comes in repetitive patterns that sound like the same operation being retried over and over.
Occasionally the heads seem to be attempting multiple operations at once and make a buzz that sounds almost like a beep, culminating in what I can only guess is a head crash?

This is the repetitive pattern at present (not a sample of the head crash sound) - [Sample disk noise]

Previously:
Disks were fairly quiet unless there was some major activity, and even the initial resilvering as I upgraded the array was pretty quiet.

Disks:
4x Seagate Ironwolf 8TB - ST8000VN004-2M2101 configured in RAIDZ2

Disk temps range between 34-38°C.

SMART:
Drives are around 5291 hrs old with the last Extended Offline test completing error-free ~22hrs ago at the time of writing.
Daily Short tests were error-free as well.

smartctl shows the following (I believe this is nothing to worry about?):
High values in:
#1 Raw_Read_Error_Rate
#195 Hardware_ECC_Recovered

Zero values for:
#197 Current_Pending_Sector
#198 Offline_Uncorrectable
#199 UDMA_CRC_Error_Count

Last scrub completed 25 Sept 2022 (35-day threshold) and found no issues.

Tried so far:
- checked served data - confirmed files were still usable and not being encrypted by malware
- stopped all docker containers - no change
- moved the system dataset to the boot pool (mirrored SSD) - no change
- scoured the logs - nothing stands out to me as a novice
- reviewed htop - the k3s server has a lot of processes and typically consumes the most CPU (<=15%)

- upgraded from 22.02.2.1 to 22.02.4 - the zfs import was noticeably slower than on previous reboots, and docker images failed to load until I reset some config items

- reviewed general linux disk IO checks - most recommend iotop, which is not installed (how do we install packages without causing dependency issues during later upgrades? as a workaround, zpool iostat is sketched below)

- stopped k3s.service (sudo systemctl stop k3s.service) - IO did not stop, but it definitely reduced and had longer pauses between cycles
- started k3s.service - the extra IO started again
- searched forums - a few references to similar behaviour on old FreeNAS versions, but not much for TrueNAS SCALE.
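
Since iotop isn't available out of the box, one option that needs no extra packages is zpool iostat, which ships with ZFS itself. A minimal sketch - pool name "tank" assumed here, adjust to suit:
Code:
# per-vdev/per-disk read and write ops and bandwidth, refreshed every 5 seconds
sudo zpool iostat -v 5

# or limit it to a single pool
sudo zpool iostat -v tank 5

That should at least show whether the writes are landing on the data pool or the boot pool.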

The samples below are from a single disk, but all four show very similar patterns.

Sample disk IO with k3s running:
disk_io_graph.png


Sample disk IO with k3s stopped:
disk_io_graph-k3s_stopped.png


Thanks in advance!
 

lofgren

Cadet
Joined
Dec 10, 2014
Messages
8
Minor update-
The 48hrs of constant grinding eased off. It now only happens periodically.

Interestingly, the disk IO seems fairly constant even without the loud disk activity.

So there are two behaviours -
- a baseline of writes, even while idling
- loud repetitive disk activity

I suspect there is some background task running that triggers this behaviour, most likely in the zfs/container workload.

Any suggestions for commands to run to confirm the cause?
 

daku

Cadet
Joined
Nov 4, 2022
Messages
4
I'm seeing very similar behaviour on a new SCALE install, also with 8TB ST8000VN004-2M2101 drives (2, mirrored).

Similar story searching for solutions: I tried shutting down all apps, moved the system dataset to the boot pool SSD and shut down all shares, with no change.
IO is mostly writes: a few hundred KB written, then idle for a second or so, then another write, idle, write...

Still a newbie to TrueNAS but happy to try any ideas.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Have you guys tried to identify what is being written? Simply searching for recently written files could be interesting.
 

lofgren

Cadet
Joined
Dec 10, 2014
Messages
8
I'm seeing very similar behaviour on a new SCALE install, also with 8TB ST8000VN004-2M2101 drives (2, mirrored).

Similar story searching for solutions: I tried shutting down all apps, moved the system dataset to the boot pool SSD and shut down all shares, with no change.
IO is mostly writes: a few hundred KB written, then idle for a second or so, then another write, idle, write...

Still a newbie to TrueNAS but happy to try any ideas.
Hey daku,
Are you getting a similar repetitive disk noise? Or just seeing consistent writes in the dashboard?

My worry is that the noise sounds like some disk scan or operation is being attempted over and over again, unsuccessfully.
I don't know if that might be purely a disk issue or if some process in TrueNAS has created it over time...

Have you guys tried to identify what is being written? Simply searching for recently written files could be interesting.
Need to head out but will try this later.

A quick test on /var/log seems to list all files, so I must be doing something wrong:
Code:
sudo find . -type f -mmin +6
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680

Plus? Files *older* than six minutes ago? I would suggest -6, and also probably not worrying about -type f either, since any metadata writes would cause pool activity, even if it was just updating directories.
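
Something along these lines should do it (untested; swap in your own pool's mountpoint):
Code:
# anything modified in the last 6 minutes, files and directories alike
sudo find /mnt/tank -mmin -6

Leaving off -type f means directories whose contents changed show up too, which is exactly the kind of metadata churn you want to see.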
 

daku

Cadet
Joined
Nov 4, 2022
Messages
4
Hey daku,
Are you getting a similar repetitive disk noise? Or just seeing consistent writes in the dashboard?

My worry is that the noise sounds like some disk scan or operation is being attempted over and over again, unsuccessfully.
I don't know if that might be purely a disk issue or if some process in TrueNAS has created it over time...
To me it sounds like your standard disk operation. Nothing suggesting the disk is unhappy, just a normal disk operation repeated every second or so.
I'll see if a list of recently written files sheds some light on the matter once I get back home later.
 

daku

Cadet
Joined
Nov 4, 2022
Messages
4
Looks like there is a container writing logs that look very much like debug mode has been left on.
cat ./ix-applications/docker/containers/b30320414bd87a576c9950e6a3a5f21c53fd2cf5ac508494e901bcc282fe5c11/b30320414bd87a576c9950e6a3a5f21c53fd2cf5ac508494e901bcc282fe5c11-json.log
........ {"log":"I1105 23:05:11.855305 1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader\n","stream":"stderr","time":"2022-11-05T23:05:11.855360631Z"}
{"log":"I1105 23:05:16.906802 1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader\n","stream":"stderr","time":"2022-11-05T23:05:16.906877661Z"}
{"log":"I1105 23:05:21.916918 1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader\n","stream":"stderr","time":"2022-11-05T23:05:21.916985748Z"} ....
 

lofgren

Cadet
Joined
Dec 10, 2014
Messages
8
With all my user containers stopped, I assume I am only left with the system containers for zfs etc.

For the ix-applications/docker/containers logs, there are 5 -json.log files with very chatty, repetitive data:

- Two separate logs capturing a leaderelection.go:273 event every 5 seconds, with occasional reflector.go:xxx entries
- Another log capturing a leaderelection.go:278 event every 5 seconds
- A fourth log with a cycle of zfsnode.go:100 / zfsnode.go:110 / zfsnode.go:114 / zfsnode.go:139 events every minute
- A fifth log noting "No files matching import glob pattern" every 30 seconds


Do these need to be so chatty? Is this a debug/info level accidentally left on, or is it necessary?
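
For anyone else digging, something like this should surface the recently-touched container logs (path assumes the apps pool is called tank):
Code:
# container logs written to in the last couple of minutes, largest first
sudo find /mnt/tank/ix-applications/docker/containers -name '*-json.log' -mmin -2 -exec ls -lS {} +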

./aed8444bc696df810709d3b09bafa31a971dac8509fed3a69bca701d122486ea/aed8444bc696df810709d3b09bafa31a971dac8509fed3a69bca701d122486ea-json.log

{"log":"I1106 06:47:02.115659 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:47:02.115922547Z"}
{"log":"I1106 06:47:07.154579 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:47:07.154813888Z"}
{"log":"I1106 06:47:10.471304 1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolume total 0 items received\n","stream":"stderr","time":"2022-11-06T06:47:10.471574044Z"}
{"log":"I1106 06:47:12.185342 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:47:12.185568624Z"}
{"log":"I1106 06:47:17.207370 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:47:17.207450282Z"}
{"log":"I1106 06:47:22.230836 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:47:22.231034491Z"}
{"log":"I1106 06:47:27.266737 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:47:27.266940471Z"}
{"log":"I1106 06:47:32.293647 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:47:32.293879146Z"}
{"log":"I1106 06:47:37.329618 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:47:37.329806623Z"}
{"log":"I1106 06:47:42.363788 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:47:42.364041817Z"}
{"log":"I1106 06:47:47.374221 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:47:47.374288917Z"}
{"log":"I1106 06:47:52.409964 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:47:52.410219995Z"}
{"log":"I1106 06:47:57.451262 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:47:57.451490588Z"}
{"log":"I1106 06:48:02.485895 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:48:02.486116639Z"}
{"log":"I1106 06:48:07.524596 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:48:07.524838803Z"}
{"log":"I1106 06:48:12.610288 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:48:12.610491889Z"}
{"log":"I1106 06:48:17.649212 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:48:17.649422581Z"}
{"log":"I1106 06:48:22.684922 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:48:22.685187859Z"}
{"log":"I1106 06:48:27.720059 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:48:27.720279088Z"}
{"log":"I1106 06:48:32.754338 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:48:32.754591461Z"}
{"log":"I1106 06:48:37.788276 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:48:37.788517775Z"}
{"log":"I1106 06:48:41.854607 1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync\n","stream":"stderr","time":"2022-11-06T06:48:41.854863947Z"}
{"log":"I1106 06:48:42.060394 1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync\n","stream":"stderr","time":"2022-11-06T06:48:42.060639857Z"}
{"log":"I1106 06:48:42.821776 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:48:42.821988333Z"}
{"log":"I1106 06:48:47.860025 1 leaderelection.go:273] successfully renewed lease kube-system/external-resizer-zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:48:47.860310747Z"}


./ca2beccf6c79a1ebcfba95337d64cb2af49ce69da8bda89539aaf43730362445/ca2beccf6c79a1ebcfba95337d64cb2af49ce69da8bda89539aaf43730362445-json.log

{"log":"I1106 06:49:04.640572 1 zfsnode.go:100] zfs node controller: node pools updated current=[{Name:boot-pool UUID:7496457477561935431 Free:{i:{value:453080174592 scale:0} d:{Dec:\u003cnil\u003e} s:442461108Ki Format:BinarySI}} {Name:tank UUID:13692609233649395333 Free:{i:{value:9668270065984 scale:0} d:{Dec:\u003cnil\u003e} s:9668270065984 Format:DecimalSI}}], required=[{Name:boot-pool UUID:7496457477561935431 Free:{i:{value:453080141824 scale:0} d:{Dec:\u003cnil\u003e} s: Format:BinarySI}} {Name:tank UUID:13692609233649395333 Free:{i:{value:9668270089792 scale:0} d:{Dec:\u003cnil\u003e} s: Format:BinarySI}}]\n","stream":"stderr","time":"2022-11-06T06:49:04.640695123Z"}
{"log":"I1106 06:49:04.640600 1 zfsnode.go:110] zfs node controller: updating node object with \u0026{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:ix-truenas GenerateName: Namespace:openebs SelfLink: UID:6de4ee01-4bcc-48b6-a501-4d704d2a11a6 ResourceVersion:18971915 Generation:336277 CreationTimestamp:2022-04-03 07:27:53 +0000 UTC DeletionTimestamp:\u003cnil\u003e DeletionGracePeriodSeconds:\u003cnil\u003e Labels:map[] Annotations:map[] OwnerReferences:[{APIVersion:v1 Kind:Node Name:ix-truenas UID:6a740cf7-a2eb-4e70-acac-3cf6af4c40e2 Controller:0xc0007280f8 BlockOwnerDeletion:\u003cnil\u003e}] Finalizers:[] ClusterName: ManagedFields:[{Manager:zfs-driver Operation:Update APIVersion:zfs.openebs.io/v1 Time:2022-05-21 05:20:44 +0000 UTC FieldsType:FieldsV1 FieldsV1:{\"f:metadata\":{\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"6a740cf7-a2eb-4e70-acac-3cf6af4c40e2\\\"}\":{}}},\"f:pools\":{}}}]} Pools:[{Name:boot-pool UUID:7496457477561935431 Free:{i:{value:453080141824 scale:0} d:{Dec:\u003cnil\u003e} s: Format:BinarySI}} {Name:tank UUID:13692609233649395333 Free:{i:{value:9668270089792 scale:0} d:{Dec:\u003cnil\u003e} s: Format:BinarySI}}]}\n","stream":"stderr","time":"2022-11-06T06:49:04.640736948Z"}
{"log":"I1106 06:49:04.648547 1 zfsnode.go:114] zfs node controller: updated node object openebs/ix-truenas\n","stream":"stderr","time":"2022-11-06T06:49:04.648597272Z"}
{"log":"I1106 06:49:04.648840 1 zfsnode.go:139] Got update event for zfs node openebs/ix-truenas\n","stream":"stderr","time":"2022-11-06T06:49:04.648882388Z"}
{"log":"I1106 06:50:04.647587 1 zfsnode.go:100] zfs node controller: node pools updated current=[{Name:boot-pool UUID:7496457477561935431 Free:{i:{value:453080141824 scale:0} d:{Dec:\u003cnil\u003e} s:442461076Ki Format:BinarySI}} {Name:tank UUID:13692609233649395333 Free:{i:{value:9668270089792 scale:0} d:{Dec:\u003cnil\u003e} s:9668270089792 Format:DecimalSI}}], required=[{Name:boot-pool UUID:7496457477561935431 Free:{i:{value:453080244224 scale:0} d:{Dec:\u003cnil\u003e} s: Format:BinarySI}} {Name:tank UUID:13692609233649395333 Free:{i:{value:9668270179072 scale:0} d:{Dec:\u003cnil\u003e} s: Format:BinarySI}}]\n","stream":"stderr","time":"2022-11-06T06:50:04.647725807Z"}
{"log":"I1106 06:50:04.647615 1 zfsnode.go:110] zfs node controller: updating node object with \u0026{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:ix-truenas GenerateName: Namespace:openebs SelfLink: UID:6de4ee01-4bcc-48b6-a501-4d704d2a11a6 ResourceVersion:18971977 Generation:336278 CreationTimestamp:2022-04-03 07:27:53 +0000 UTC DeletionTimestamp:\u003cnil\u003e DeletionGracePeriodSeconds:\u003cnil\u003e Labels:map[] Annotations:map[] OwnerReferences:[{APIVersion:v1 Kind:Node Name:ix-truenas UID:6a740cf7-a2eb-4e70-acac-3cf6af4c40e2 Controller:0xc000696000 BlockOwnerDeletion:\u003cnil\u003e}] Finalizers:[] ClusterName: ManagedFields:[{Manager:zfs-driver Operation:Update APIVersion:zfs.openebs.io/v1 Time:2022-05-21 05:20:44 +0000 UTC FieldsType:FieldsV1 FieldsV1:{\"f:metadata\":{\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"6a740cf7-a2eb-4e70-acac-3cf6af4c40e2\\\"}\":{}}},\"f:pools\":{}}}]} Pools:[{Name:boot-pool UUID:7496457477561935431 Free:{i:{value:453080244224 scale:0} d:{Dec:\u003cnil\u003e} s: Format:BinarySI}} {Name:tank UUID:13692609233649395333 Free:{i:{value:9668270179072 scale:0} d:{Dec:\u003cnil\u003e} s: Format:BinarySI}}]}\n","stream":"stderr","time":"2022-11-06T06:50:04.647773145Z"}
{"log":"I1106 06:50:04.653456 1 zfsnode.go:114] zfs node controller: updated node object openebs/ix-truenas\n","stream":"stderr","time":"2022-11-06T06:50:04.65359948Z"}
{"log":"I1106 06:50:04.653697 1 zfsnode.go:139] Got update event for zfs node openebs/ix-truenas\n","stream":"stderr","time":"2022-11-06T06:50:04.653742767Z"}
{"log":"I1106 06:50:05.189127 1 zfsnode.go:100] zfs node controller: node pools updated current=[{Name:boot-pool UUID:7496457477561935431 Free:{i:{value:453080244224 scale:0} d:{Dec:\u003cnil\u003e} s: Format:BinarySI}} {Name:tank UUID:13692609233649395333 Free:{i:{value:9668270179072 scale:0} d:{Dec:\u003cnil\u003e} s:9668270179072 Format:DecimalSI}}], required=[{Name:boot-pool UUID:7496457477561935431 Free:{i:{value:453080244224 scale:0} d:{Dec:\u003cnil\u003e} s: Format:BinarySI}} {Name:tank UUID:13692609233649395333 Free:{i:{value:9668270161216 scale:0} d:{Dec:\u003cnil\u003e} s: Format:BinarySI}}]\n","stream":"stderr","time":"2022-11-06T06:50:05.189252172Z"}
{"log":"I1106 06:50:05.189155 1 zfsnode.go:110] zfs node controller: updating node object with \u0026{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:ix-truenas GenerateName: Namespace:openebs SelfLink: UID:6de4ee01-4bcc-48b6-a501-4d704d2a11a6 ResourceVersion:18972037 Generation:336279 CreationTimestamp:2022-04-03 07:27:53 +0000 UTC DeletionTimestamp:\u003cnil\u003e DeletionGracePeriodSeconds:\u003cnil\u003e Labels:map[] Annotations:map[] OwnerReferences:[{APIVersion:v1 Kind:Node Name:ix-truenas UID:6a740cf7-a2eb-4e70-acac-3cf6af4c40e2 Controller:0xc000696508 BlockOwnerDeletion:\u003cnil\u003e}] Finalizers:[] ClusterName: ManagedFields:[{Manager:zfs-driver Operation:Update APIVersion:zfs.openebs.io/v1 Time:2022-05-21 05:20:44 +0000 UTC FieldsType:FieldsV1 FieldsV1:{\"f:metadata\":{\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"6a740cf7-a2eb-4e70-acac-3cf6af4c40e2\\\"}\":{}}},\"f:pools\":{}}}]} Pools:[{Name:boot-pool UUID:7496457477561935431 Free:{i:{value:453080244224 scale:0} d:{Dec:\u003cnil\u003e} s: Format:BinarySI}} {Name:tank UUID:13692609233649395333 Free:{i:{value:9668270161216 scale:0} d:{Dec:\u003cnil\u003e} s: Format:BinarySI}}]}\n","stream":"stderr","time":"2022-11-06T06:50:05.189309071Z"}
{"log":"I1106 06:50:05.193068 1 zfsnode.go:114] zfs node controller: updated node object openebs/ix-truenas\n","stream":"stderr","time":"2022-11-06T06:50:05.193125065Z"}
{"log":"I1106 06:50:05.193373 1 zfsnode.go:139] Got update event for zfs node openebs/ix-truenas\n","stream":"stderr","time":"2022-11-06T06:50:05.193404221Z"}


./5d5e75edf25656c368812746b3c2b45d8b199ea0825c1a147162fba4639dd1af/5d5e75edf25656c368812746b3c2b45d8b199ea0825c1a147162fba4639dd1af-json.log

{"log":"I1106 06:51:01.883531 1 leaderelection.go:278] successfully renewed lease kube-system/zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:51:01.883583429Z"}
{"log":"I1106 06:51:06.892400 1 leaderelection.go:278] successfully renewed lease kube-system/zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:51:06.892496327Z"}
{"log":"I1106 06:51:11.902968 1 leaderelection.go:278] successfully renewed lease kube-system/zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:51:11.903014915Z"}
{"log":"I1106 06:51:16.912002 1 leaderelection.go:278] successfully renewed lease kube-system/zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:51:16.912083659Z"}
{"log":"I1106 06:51:21.922627 1 leaderelection.go:278] successfully renewed lease kube-system/zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:51:21.922690935Z"}
{"log":"I1106 06:51:26.931305 1 leaderelection.go:278] successfully renewed lease kube-system/zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:51:26.931378269Z"}
{"log":"I1106 06:51:28.096747 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 0 items received\n","stream":"stderr","time":"2022-11-06T06:51:28.097022646Z"}
{"log":"I1106 06:51:29.937804 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSINode total 0 items received\n","stream":"stderr","time":"2022-11-06T06:51:29.938098377Z"}
{"log":"I1106 06:51:31.941045 1 leaderelection.go:278] successfully renewed lease kube-system/zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:51:31.941124429Z"}
{"log":"I1106 06:51:36.951275 1 leaderelection.go:278] successfully renewed lease kube-system/zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:51:36.951349968Z"}
{"log":"I1106 06:51:41.963857 1 leaderelection.go:278] successfully renewed lease kube-system/zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:51:41.96391222Z"}
{"log":"I1106 06:51:46.973505 1 leaderelection.go:278] successfully renewed lease kube-system/zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:51:46.9735594Z"}
{"log":"I1106 06:51:51.983050 1 leaderelection.go:278] successfully renewed lease kube-system/zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:51:51.983097327Z"}
{"log":"I1106 06:51:56.992907 1 leaderelection.go:278] successfully renewed lease kube-system/zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:51:56.993001199Z"}
{"log":"I1106 06:52:02.004147 1 leaderelection.go:278] successfully renewed lease kube-system/zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:52:02.004244783Z"}
{"log":"I1106 06:52:07.016056 1 leaderelection.go:278] successfully renewed lease kube-system/zfs-csi-openebs-io\n","stream":"stderr","time":"2022-11-06T06:52:07.01611285Z"}

./b92d012e652f819fd3b44cc4baf296e7151e1a7551c020b6fa6183fb894026cb/b92d012e652f819fd3b44cc4baf296e7151e1a7551c020b6fa6183fb894026cb-json.log

{"log":"[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server\n","stream":"stdout","time":"2022-11-06T06:51:09.721976108Z"}
{"log":"[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server\n","stream":"stdout","time":"2022-11-06T06:51:42.321899558Z"}
{"log":"[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server\n","stream":"stdout","time":"2022-11-06T06:52:14.921226017Z"}
{"log":"[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server\n","stream":"stdout","time":"2022-11-06T06:52:47.521758983Z"}


./f65c27dd3badb29ab2f462fd144596830a891b42c0badb0b9bb145c0e310b6d8/f65c27dd3badb29ab2f462fd144596830a891b42c0badb0b9bb145c0e310b6d8-json.log

{"log":"I1106 06:52:02.005207 1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader\n","stream":"stderr","time":"2022-11-06T06:52:02.005264674Z"}
{"log":"I1106 06:52:07.015085 1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader\n","stream":"stderr","time":"2022-11-06T06:52:07.015152055Z"}
{"log":"I1106 06:52:12.023763 1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader\n","stream":"stderr","time":"2022-11-06T06:52:12.023833145Z"}
{"log":"I1106 06:52:17.033469 1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader\n","stream":"stderr","time":"2022-11-06T06:52:17.033538405Z"}
{"log":"I1106 06:52:22.042924 1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader\n","stream":"stderr","time":"2022-11-06T06:52:22.04296939Z"}
{"log":"I1106 06:52:27.052008 1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader\n","stream":"stderr","time":"2022-11-06T06:52:27.05205961Z"}
{"log":"I1106 06:52:30.570142 1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 0 items received\n","stream":"stderr","time":"2022-11-06T06:52:30.570237725Z"}
{"log":"I1106 06:52:32.060995 1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader\n","stream":"stderr","time":"2022-11-06T06:52:32.061038794Z"}
{"log":"I1106 06:52:37.070291 1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader\n","stream":"stderr","time":"2022-11-06T06:52:37.07036819Z"}
{"log":"I1106 06:52:42.078831 1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader\n","stream":"stderr","time":"2022-11-06T06:52:42.078909068Z"}
{"log":"I1106 06:52:47.087590 1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader\n","stream":"stderr","time":"2022-11-06T06:52:47.087667387Z"}
{"log":"I1106 06:52:52.095916 1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader\n","stream":"stderr","time":"2022-11-06T06:52:52.096013202Z"}
{"log":"I1106 06:52:57.105554 1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader\n","stream":"stderr","time":"2022-11-06T06:52:57.105637691Z"}
{"log":"I1106 06:53:02.115492 1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader\n","stream":"stderr","time":"2022-11-06T06:53:02.115609458Z"}
{"log":"I1106 06:53:07.125112 1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader\n","stream":"stderr","time":"2022-11-06T06:53:07.125205884Z"}
{"log":"I1106 06:53:12.145598 1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader\n","stream":"stderr","time":"2022-11-06T06:53:12.145682364Z"}
{"log":"I1106 06:53:17.155409 1 leaderelection.go:273] successfully renewed lease kube-system/snapshot-controller-leader\n","stream":"stderr","time":"2022-11-06T06:53:17.155479322Z"}
 

lofgren

Cadet
Joined
Dec 10, 2014
Messages
8
Plus? Files *older* than six minutes ago? I would suggest -6, and also probably not worrying about -type f either, since any metadata writes would cause pool activity, even if it was just updating directories.
PS - Cheers! I had spent about 30 seconds on it as I was running out the door, but wanted to get a quick response in since you'd made the suggestion.
 

lofgren

Cadet
Joined
Dec 10, 2014
Messages
8
Researching some of the ix-container log entries led me to look again at k3s_daemon.log.

I see I am getting a lot of Trace entries, which some quick reading suggests might be caused by... high IO.

Nov 6 15:28:16 server_name k3s[3050120]: I1106 15:28:16.347394 3050120 trace.go:205] Trace[1650547161]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/zfs-csi-openebs-io,user-agent:csi-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:ed3f55c2-9799-4c61-9419-2432a4de3206,client:172.16.0.10,accept:application/json, */*,protocol:HTTP/2.0 (06-Nov-2022 15:28:15.784) (total time: 563ms):
Nov 6 15:28:16 server_name k3s[3050120]: Trace[1650547161]: ---"Object stored in database" 562ms (15:28:16.347)
Nov 6 15:28:16 server_name k3s[3050120]: Trace[1650547161]: [563.155203ms] [563.155203ms] END
Nov 6 15:28:16 server_name k3s[3050120]: I1106 15:28:16.417616 3050120 trace.go:205] Trace[1308440325]: "GuaranteedUpdate etcd3" type:*coordination.Lease (06-Nov-2022 15:28:15.784) (total time: 633ms):
Nov 6 15:28:16 server_name k3s[3050120]: Trace[1308440325]: ---"Transaction committed" 632ms (15:28:16.417)
Nov 6 15:28:16 server_name k3s[3050120]: Trace[1308440325]: [633.364397ms] [633.364397ms] END
Nov 6 15:28:16 server_name k3s[3050120]: I1106 15:28:16.417931 3050120 trace.go:205] Trace[1857111481]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/external-resizer-zfs-csi-openebs-io,user-agent:csi-resizer/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:7824dc95-9bcf-4087-84c7-213812e1b780,client:172.16.0.10,accept:application/json, */*,protocol:HTTP/2.0 (06-Nov-2022 15:28:15.783) (total time: 633ms):
Nov 6 15:28:16 server_name k3s[3050120]: Trace[1857111481]: ---"Object stored in database" 633ms (15:28:16.417)
Nov 6 15:28:16 server_name k3s[3050120]: Trace[1857111481]: [633.872428ms] [633.872428ms] END
Nov 6 15:28:16 server_name k3s[3050120]: I1106 15:28:16.418509 3050120 trace.go:205] Trace[1376954379]: "GuaranteedUpdate etcd3" type:*coordination.Lease (06-Nov-2022 15:28:15.786) (total time: 632ms):
Nov 6 15:28:16 server_name k3s[3050120]: Trace[1376954379]: ---"Transaction committed" 632ms (15:28:16.418)
Nov 6 15:28:16 server_name k3s[3050120]: Trace[1376954379]: [632.419107ms] [632.419107ms] END
Nov 6 15:28:16 server_name k3s[3050120]: I1106 15:28:16.418850 3050120 trace.go:205] Trace[1359568093]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/external-snapshotter-leader-zfs-csi-openebs-io,user-agent:csi-snapshotter/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:6d052cc0-13d7-4691-99f0-5a2a2deb7945,client:172.16.0.10,accept:application/json, */*,protocol:HTTP/2.0 (06-Nov-2022 15:28:15.785) (total time: 632ms):
Nov 6 15:28:16 server_name k3s[3050120]: Trace[1359568093]: ---"Object stored in database" 632ms (15:28:16.418)
Nov 6 15:28:16 server_name k3s[3050120]: Trace[1359568093]: [632.870281ms] [632.870281ms] END
Nov 6 15:28:16 server_name k3s[3050120]: I1106 15:28:16.517829 3050120 trace.go:205] Trace[401475017]: "GuaranteedUpdate etcd3" type:*coordination.Lease (06-Nov-2022 15:28:15.784) (total time: 733ms):
Nov 6 15:28:16 server_name k3s[3050120]: Trace[401475017]: ---"Transaction committed" 733ms (15:28:16.517)
Nov 6 15:28:16 server_name k3s[3050120]: Trace[401475017]: [733.511035ms] [733.511035ms] END
Nov 6 15:28:16 server_name k3s[3050120]: I1106 15:28:16.518238 3050120 trace.go:205] Trace[141335456]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/snapshot-controller-leader,user-agent:snapshot-controller/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:e0866ce2-bfb8-44dd-8dd3-445a5374bd97,client:172.16.0.10,accept:application/json, */*,protocol:HTTP/2.0 (06-Nov-2022 15:28:15.784) (total time: 734ms):
Nov 6 15:28:16 server_name k3s[3050120]: Trace[141335456]: ---"Object stored in database" 733ms (15:28:16.517)
Nov 6 15:28:16 server_name k3s[3050120]: Trace[141335456]: [734.028185ms] [734.028185ms] END

The same file also contains a lot of spam with these:
Code:
k3s[3050120]: {"level":"warn","ts":"2022-11-06T15:25:29.183+0800","logger":"etcd-client","caller":"v3@v3.5.1-k3s1/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001a5c700/kine.sock","attempt":0,"error":"rpc error: code = Unknown desc = no such table: dbstat"}


Which apparently points at SQLite needing to be built with "SQLITE_ENABLE_DBSTAT_VTAB=1" - so it looks like a compile-time option rather than something that can just be added to a config.
Reference: https://github.com/canonical/microk8s/issues/2965
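
To get a feel for how much of the k3s log this accounts for, a rough count (log path assumed, adjust if yours differs):
Code:
sudo grep -c 'no such table: dbstat' /var/log/k3s_daemon.log
sudo grep -c 'trace.go' /var/log/k3s_daemon.log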
 

johnzee

Cadet
Joined
Jan 20, 2023
Messages
5
Hi,

I'm facing the same issue with TrueNAS SCALE. Every 5 seconds I hear disk activity with no obvious cause.
Is there a fix for this?

Thanks
 

daku

Cadet
Joined
Nov 4, 2022
Messages
4
I migrated my installation from bare metal to a Proxmox VM and have not had any issues since. Not a fix, but perhaps a clue to finding the problem.
 

johnzee

Cadet
Joined
Jan 20, 2023
Messages
5
Hi,

Meanwhile I found this and it has helped me. The constant disk activity has gone down significantly.

The curious thing is that I had read most of the posts I could find before posting here, and then after posting I found this out of the blue. Oh well :smile:
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Puttering with txg_timeout is not advisable. This does suggest that stuff is being written to your pool though. It would be best to find out what, and to resolve it that way.
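
If you have already bumped it, you can at least check what it is currently set to (the OpenZFS default is 5 seconds):
Code:
cat /sys/module/zfs/parameters/zfs_txg_timeout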
 

johnzee

Cadet
Joined
Jan 20, 2023
Messages
5
Hi @jgreco ,

Thanks for your feedback.
I'm running my server through a UPS so the risk is minimal. Like the original poster, I have my k3s log being spammed with info. Example:

Code:
Jan 20 17:01:34 truenas k3s[5224]: time="2023-01-20T17:01:34Z" level=info msg="Using CNI configuration file /etc/cni/net.d/00-multus.conf"


I tried to find out how to disable this or lower it to a more acceptable level; that's why I went down the txg_timeout route.
Do you have any advice?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I'm running my server through a UPS so the risk is minimal.

I'm curious to know what you think A has to do with B. Your UPS will not stop the flooding of a TXG that could in turn cause your system to stall under heavy write loads. A UPS can only help with very specific types of issues.

I tried to find out how to disable this or lower it to a more acceptable level; that's why I went down the txg_timeout route.
Do you have any advice?

As I said earlier in this thread, find out what's being written, and cause it to stop.
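
One crude way to see which processes are doing the writing - the counters are cumulative since each process started, so take two readings a minute or two apart and compare:
Code:
# run as root: top 20 processes by total bytes written so far
for p in /proc/[0-9]*; do
    [ -r "$p/io" ] || continue
    wb=$(awk '/^write_bytes/ {print $2}' "$p/io" 2>/dev/null)
    echo "$wb ${p#/proc/} $(cat "$p/comm" 2>/dev/null)"
done | sort -rn | head -n 20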
 

johnzee

Cadet
Joined
Jan 20, 2023
Messages
5
I'm curious to know what you think A has to do with B. Your UPS will not stop the flooding of a TXG that could in turn cause your system to stall under heavy write loads. A UPS can only help with very specific types of issues.
From what I've read about txg and the cons of increasing it, data corruption in case of power loss could be an issue, hence my reply, but I know nothing about it or any other issues.

As I said earlier in this thread, find out what's being written, and cause it to stop.
I see this issue being commented on over and over; I have the same issue but I don't know how to stop it, especially because it seems to be connected to sqlite needing to be compiled with dbstat support, as mentioned earlier in this thread?


Code:
Jan 20 17:45:41 truenas k3s[5224]: {"level":"warn","ts":"2023-01-20T17:45:41.666Z","logger":"etcd-client","caller":"v3@v3.5.3-k3s1/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000760380/kine.sock","attempt":0,"error":"rpc error: code = Unknown desc = no such table: dbstat"}
Jan 20 17:45:45 truenas k3s[5224]: time="2023-01-20T17:45:45Z" level=info msg="Using CNI configuration file /etc/cni/net.d/00-multus.conf"


I don't know where to go from here, but I'm open to help and feedback, and would even submit bug reports if that's warranted; otherwise this dirty fix will have to do.

Thanks
 