I have a FreeNAS box running at 100% CPU utilization. A quick "top" shows that the top process is "zpool":
last pid: 56013;  load averages: 1.11, 1.14, 1.10    up 6+23:30:58  08:55:22
41 processes:  2 running, 39 sleeping
CPU:  % user,  % nice,  % system,  % interrupt,  % idle
Mem: 880M Active, 148M Inact, 12G Wired, 1168K Cache, 671M Buf, 2725M Free
ARC: 11G Total, 1939M MFU, 9534M MRU, 1861K Anon, 66M Header, 29M Other
Swap: 22G Total, 22G Free

  PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
55677 root        1 103    0   781M   744M CPU1    1  11:34 100.00% zpool
 2005 root        6  20    0  9900K  1580K rpcsvc  1  96:17   0.00% nfsd
 2713 root        7  20    0   114M 14604K uwait   1   7:07   0.00% collectd
However, "zpool status" does not show a running scrub:
[root@freenas3] /var/log# zpool status
  pool: tank2
 state: ONLINE
  scan: scrub repaired 0 in 0h24m with 0 errors on Sun Oct 27 00:24:42 2013
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank2                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/bf73d8ab-fb40-11e2-8281-00505693005d  ONLINE       0     0     0
            gptid/c19ce3f3-fb40-11e2-8281-00505693005d  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/c3d10d8f-fb40-11e2-8281-00505693005d  ONLINE       0     0     0
            gptid/c6077b53-fb40-11e2-8281-00505693005d  ONLINE       0     0     0

errors: No known data errors

  pool: tank3
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank3                                           ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/e7c6599b-39f0-11e3-98ad-00505693005d  ONLINE       0     0     0
            gptid/e8c9af47-39f0-11e3-98ad-00505693005d  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/e99596de-39f0-11e3-98ad-00505693005d  ONLINE       0     0     0
            gptid/ea625d4a-39f0-11e3-98ad-00505693005d  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/eb8f3c06-39f0-11e3-98ad-00505693005d  ONLINE       0     0     0
            gptid/ecc008e6-39f0-11e3-98ad-00505693005d  ONLINE       0     0     0
        spares
          gptid/edb17fe6-39f0-11e3-98ad-00505693005d    AVAIL

errors: No known data errors
[root@freenas3] /var/log#
If I kill the "zpool" process, it just starts up again.
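In case it helps, this is roughly the check I've been running before killing it, to see which zpool subcommand is actually spinning and what its parent is (nothing FreeNAS-specific, just plain `ps` flags; `$$` below is only a stand-in PID so the snippet runs as-is, substitute the zpool PID from top, 55677 above):

```shell
# Show the full command line and parent of the suspect process.
# PID=$$ is a placeholder; use the zpool PID reported by top.
PID=$$
ps -ww -o pid,ppid,args -p "$PID"
```

Knowing the parent PID should at least tell me whether it's collectd, a cron job, or something else relaunching it. On FreeBSD, `procstat -kk <pid>` should also dump the thread's kernel stack, which might show where it's stuck, but I haven't captured that yet.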
The "gstat" command shows each drive as mostly idle (green across the board).
I think this has happened once before, and I ended up rebooting the box. What other things can I check? Happy to file a bug if no one has any ideas.