Drives constantly reading/writing

SangieWolf

Dabbler
Joined
Jun 13, 2018
Messages
21
I tried searching for help but couldn't find anything relevant.

One of my drives was giving bad sector errors, so I swapped it out and let it resilver, which took quite a few hours. Several days later, I noticed the drives were still constantly busy. I realized that I wasn't on the most recent update channel, so I upgraded to FreeNAS-11.2-U4, and there was no change with the drives constantly being busy.

I even shut off my workstation to see if it was something client-level causing the drives to be busy, but no such luck.

Reading and writing to the AFP share is also slow.

Please see my drive stats below and my hardware stats in my signature.
[Attached: screenshots of drive activity stats]
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
no change with the drives constantly being busy.
If you have your system dataset on the storage pool, it is constantly being written to. Pretty normal. This is the disk activity on the disks that hold my system dataset:

[Attached: screenshot of disk activity on the system dataset disks]
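If you want to confirm that is what you are hearing, zpool iostat will show you the per-disk activity from the Shell. A rough example (the pool name here is just a placeholder, substitute your own):
Code:
# Show read/write activity broken out per vdev and disk, refreshed every 5 seconds.
# "tank" is a placeholder pool name.
zpool iostat -v tank 5

A steady trickle of small writes on the pool holding the system dataset is what you would expect to see.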


Reading and writing to the AFP share is also slow.
That is outside my experience because I don't use any Apple hardware, but I think AFP may do some synchronous transfers, which could be causing the slowdown.
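If you want to check whether synchronous writes are part of it, you can look at the sync property on the dataset behind your AFP share. A sketch only, with a placeholder dataset name:
Code:
# Check the sync setting on the shared dataset ("tank/share" is a placeholder name).
zfs get sync tank/share
# "standard" is the default. Setting sync=disabled can speed up sync-heavy clients,
# but it sacrifices data safety on power loss, so treat it as a diagnostic, not a fix.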
 

SangieWolf

Dabbler
Joined
Jun 13, 2018
Messages
21
If you have your system dataset on the storage pool, it is constantly being written to. Pretty normal.
OOPS! I selected the wrong disks. I selected two that are in a pool and didn't include the SSD. The hard drives are constantly clicking away like they're resilvering, but they aren't. It's weird that the metrics don't show it, because it's happening 24/7. I can't figure out how to find out what's causing the writing. It even continues when I turn off my system.
Code:
OS Version:
FreeNAS-11.2-U5

(Build Date: Jun 24, 2019 18:41)

Processor:
Intel(R) Xeon(R) CPU X5650 @ 2.67GHz (6 cores)

Memory:
24 GiB

HostName:
awoobox.local

Uptime:
12:10AM up 11 days, 6:20, 0 users
 

Attachments

  • Screen Shot 2019-11-29 at 12.01.54 AM.png

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,972
Have you looked at top (open a shell and enter the command "top") to see if there is something working overtime? Also, if you feel this is a problem, start disabling your features for file sharing and jails, etc. You can also disconnect your server from the network and then listen; if the drive access stops, that would tell you it's a remote device causing the data access.
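If plain top doesn't make the culprit obvious, FreeBSD's top also has an I/O mode, and gstat shows live per-disk activity; both may help narrow down what is actually touching the disks:
Code:
# top in I/O mode, processes ordered by total I/O
top -m io -o total
# live per-disk statistics (reads/writes per second and %busy)
gstat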

It even continues when I turn off my system.
WHAT!!! Sounds like Black Magic to me. How can it make noise without power?
 

guermantes

Patron
Joined
Sep 27, 2017
Messages
213
I was under the impression that zfs will almost constantly access the pools to do maintenance and parity checking and thus a silent system with no access sounds will never be achieved.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,972
I was under the impression that zfs will almost constantly access the pools to do maintenance and parity checking and thus a silent system with no access sounds will never be achieved.
ZFS does periodically (by default or as user-specified) perform a scrub on your pool(s), but it is not a constant thing. Silent is defined by perspective. Many hard drives are very quiet: difficult to hear under normal use, but not impossible. If you have lots of hard drives, the noise level will go up. I currently have 7 hard drives in my FreeNAS case (all low RPM), 4 drives dedicated to FreeNAS storage and the others for ESXi. Fans were the main source of noise for me, and I was able to minimize that. During a scrub I can hear the hard drives accessing, but only when I place my ear close to the case.

The other reason FreeNAS may be accessing the hard drives often is if the System Dataset is located on the hard drive pool, but this isn't a large amount of data being accessed. This topic mainly comes up when someone wants to spin down the drives and forgets about the System Dataset.
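If you are not sure where the System Dataset lives, check System -> System Dataset in the GUI, or peek from the Shell; something along these lines (the pool name is just an example):
Code:
# The system dataset appears as a ".system" dataset under whichever pool holds it.
# "tank" is a placeholder pool name.
zfs list -r -o name,used,mountpoint tank/.system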

Of course if someone does want a completely silent system, that would require a fanless motherboard, fanless power supply, and SSDs. If I had the money to burn, I'd play around with this type of setup. I'd likely still run a fan at a slow RPM just to ensure good airflow to keep things cool. One can dream.
 

SangieWolf

Dabbler
Joined
Jun 13, 2018
Messages
21
WHAT!!! Sounds like Black Magic to me. How can it make noise without power?
Oh, sorry! I meant my client system, an iMac using AFP. Turning it off didn't stop the activity, which tells me it's something on FreeNAS causing the drives to be active, not a client computer.

I would like to stress that my read speeds are ridiculously slow, and it takes ages just to list the contents of a directory. I'm using RAIDZ1 with four drives. One of the drives did degrade and give errors, and I replaced it MONTHS ago (so it's obviously done rebuilding), yet the speeds are still slow. At least there are no more alerts in FreeNAS.

I have debated expanding the RAM and buying a second Xeon X5650 to put in the empty socket, but I'm not sure if that would help me or not. I'm also considering adding another drive to make it a RAIDZ2.

What I'm worried about is that the pool was created the wrong way and I'll have to redo the whole thing.

FreeNAS-11.2-U7
Code:
last pid: 63198;  load averages:  0.44,  0.25,  0.19    up 5+03:36:43  19:00:44
66 processes:  1 running, 65 sleeping
CPU:  0.3% user,  0.0% nice,  3.3% system,  0.0% interrupt, 96.3% idle
Mem: 323M Active, 15G Inact, 1058M Laundry, 6206M Wired, 518M Free
ARC: 3326M Total, 816M MFU, 1451M MRU, 6104K Anon, 31M Header, 1022M Other
     852M Compressed, 1452M Uncompressed, 1.70:1 Ratio
Swap: 4096M Total, 321M Used, 3775M Free, 7% Inuse

  PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
61469 root          1  24    0 40684K 13092K zio->i  5   2:19   5.09% afpd
  353 root          2  21    0   126M 83732K usem    0   2:20   4.23% python3.6
  239 root         26  20    0   286M   190M kqread  0  25:04   0.51% python3.6
 7255    972       13  52   15   325M   142M piperd  5  87:16   0.08% Plex Scri
 6078    972       16  20    0   346M 84896K uwait   5 138:51   0.05% Plex Medi
63167 root          1  20    0  7940K  3788K CPU0    0   0:00   0.04% top
 7300    972       13  23    0   207M 69980K usem    2   1:38   0.01% Plex DLNA
 4907 root          6  20    0   116M  1612K select  5   0:21   0.01% python2.7
 6151 root          6  20    0   116M  9088K select  1   0:21   0.01% python2.7
 2887 root          1  20    0 12488K 12596K select  5   0:12   0.00% ntpd
 3309 root          1  20    0   115M 56952K kqread  3   0:12   0.00% uwsgi-3.6
 3136 root          8  20    0 40124K  6972K select  5   4:30   0.00% rrdcached
40772 www           1  20    0 30728K  7280K kqread  0   0:00   0.00% nginx
19194 root          2  20    0 14176K  5748K kqread  5   0:04   0.00% netatalk
  352 root          2  20    0   123M 80612K piperd  3   2:14   0.00% python3.6


Do you think it's Plex holding it up? I'm using CAT-6A cabling too, so I know it isn't bottlenecked there. Are there any additional tests I can do?

[Attached: freenas-cpu.png, freenas-dashboard.png, freenas-services.png, freenas-disks.png]
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,972
Go into the Shell and provide the output of these two commands:
zpool status
zfs list

Now, it appears that "top" is telling us your CPU is not being overworked; your Apple file sharing (afpd) is using the most CPU power, but you do not have anything running in overdrive killing your system. Plex has been running a long time, but it is barely using any CPU power. I don't see any issues right now based on that information.

Assuming that the results of the two commands I asked for are okay, you can then do some tests to verify the throughput of your system, and I would do this regardless, just so you know how things are running. Let me see if I can find some old posts that give those instructions...

Okay, here is a good thread to start off with:

Note that a few postings down I have listed a link to another thread where I have a lot of benchmark testing. While testing, you should try to have your FreeNAS server directly connected to your main computer. You should also make sure that you create a data share that is not compressed and use it for the testing; compression will skew the results. Hopefully you will have some good results. If those results are good, then I'd say that anywhere you have a slow transfer speed is likely a network issue.
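As a rough sketch of the local (on-server) part of the testing, something like this from the Shell would do it; the dataset name is just an example, and the important part is compression=off so the zeros don't get compressed away:
Code:
# Create an uncompressed test dataset (replace "tank/speedtest" with your own name).
zfs create -o compression=off tank/speedtest
# Write test: 32 GiB of zeros, larger than your 24 GiB of RAM so caching doesn't skew it.
dd if=/dev/zero of=/mnt/tank/speedtest/testfile bs=1M count=32768
# Read test:
dd if=/mnt/tank/speedtest/testfile of=/dev/null bs=1M
# dd prints the throughput when it finishes; destroy the test dataset when you are done.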

Good Luck!
 

SangieWolf

Dabbler
Joined
Jun 13, 2018
Messages
21
Sorry for the delay! I've been busy moving. It seems it's still scrubbing v.v I'll run this again tomorrow, but here's what I got.
Code:
FreeBSD 11.2-STABLE (FreeNAS.amd64) #0 r325575+c9231c7d6bd(HEAD): Mon Nov 18 22:46:47 UTC 2019

        FreeNAS (c) 2009-2019, The FreeNAS Development Team
        All rights reserved.
        FreeNAS is released under the modified BSD license.

        For more information, documentation, help or support, go here:
        http://freenas.org
Welcome to FreeNAS

Warning: settings changed through the CLI are not written to
the configuration database and will be reset on reboot.

root@awoobox:~ # zpool status
  pool: Awoo
 state: ONLINE
  scan: scrub in progress since Sun Feb 16 04:00:03 2020
        8.72T scanned at 74.8M/s, 8.71T issued at 74.8M/s, 13.5T total
        0 repaired, 64.72% done, 0 days 18:29:18 to go
config:

        NAME                                            STATE     READ WRITE CKSUM
        Awoo                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/ee86e510-823a-11e9-8487-d8d385945e79  ONLINE       0     0     0
            gptid/5cebb230-76b6-11e8-baad-d8d385945e79  ONLINE       0     0     0
            gptid/5d8b9c83-76b6-11e8-baad-d8d385945e79  ONLINE       0     0     0
            gptid/5e308c7d-76b6-11e8-baad-d8d385945e79  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:44 with 0 errors on Fri Feb 14 03:45:44 2020
config:

        NAME        STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da4p2     ONLINE       0     0     0

errors: No known data errors

Code:
root@awoobox:~ # zfs list
NAME                                                             USED  AVAIL  REFER  MOUNTPOINT
Awoo                                                            10.1T  5.24T   140K  /mnt/Awoo
Awoo/.system                                                    73.0M  5.24T   140K  legacy
Awoo/.system/configs-c0708eef149b49a396f66530a6f1b80a           70.9M  5.24T  70.9M  legacy
Awoo/.system/cores                                              1.06M  5.24T  1.06M  legacy
Awoo/.system/rrd-c0708eef149b49a396f66530a6f1b80a                128K  5.24T   128K  legacy
Awoo/.system/samba4                                              523K  5.24T   523K  legacy
Awoo/.system/syslog-c0708eef149b49a396f66530a6f1b80a             128K  5.24T   128K  legacy
Awoo/.system/webui                                               128K  5.24T   128K  legacy
Awoo/Backup                                                     1.67T  5.24T  1.67T  /mnt/Awoo/Backup
Awoo/Misc                                                       3.45T  5.24T  3.45T  /mnt/Awoo/Misc
Awoo/Music                                                      3.32T  5.24T  3.32T  /mnt/Awoo/Music
Awoo/Video                                                      1.40T  5.24T  1.40T  /mnt/Awoo/Video
Awoo/iocage                                                     4.88M  5.24T  4.13M  /mnt/Awoo/iocage
Awoo/iocage/download                                             128K  5.24T   128K  /mnt/Awoo/iocage/download
Awoo/iocage/images                                               128K  5.24T   128K  /mnt/Awoo/iocage/images
Awoo/iocage/jails                                                128K  5.24T   128K  /mnt/Awoo/iocage/jails
Awoo/iocage/log                                                  128K  5.24T   128K  /mnt/Awoo/iocage/log
Awoo/iocage/releases                                             128K  5.24T   128K  /mnt/Awoo/iocage/releases
Awoo/iocage/templates                                            128K  5.24T   128K  /mnt/Awoo/iocage/templates
Awoo/jails                                                      27.4G  5.24T   209K  /mnt/Awoo/jails
Awoo/jails/.warden-template-pluginjail-11.0-x64                  600M  5.24T   590M  /mnt/Awoo/jails/.warden-template-pluginjail-11.0-x64
Awoo/jails/.warden-template-pluginjail-11.0-x64-20190411200602   590M  5.24T   590M  /mnt/Awoo/jails/.warden-template-pluginjail-11.0-x64-20190411200602
Awoo/jails/plexmediaserver_1                                    25.8G  5.24T  26.4G  /mnt/Awoo/jails/plexmediaserver_1
Awoo/jails/subsonic_1                                            486M  5.24T  1.04G  /mnt/Awoo/jails/subsonic_1
Awoo/murrvol                                                     203G  5.43T  10.6G  -
freenas-boot                                                    4.73G  49.0G    64K  none
freenas-boot/ROOT                                               4.70G  49.0G    29K  none
freenas-boot/ROOT/11.1-U6                                       10.9M  49.0G   849M  /
freenas-boot/ROOT/11.1-U7                                       10.2M  49.0G   751M  /
freenas-boot/ROOT/11.2-U4.1                                     13.5M  49.0G   771M  /
freenas-boot/ROOT/11.2-U5                                       12.9M  49.0G   770M  /
freenas-boot/ROOT/11.2-U7                                       4.65G  49.0G   773M  /
freenas-boot/ROOT/Initial-Install                                  1K  49.0G   837M  legacy
freenas-boot/ROOT/Wizard-2018-06-23_02-24-08                       1K  49.0G   837M  legacy
freenas-boot/ROOT/default                                       8.59M  49.0G   846M  legacy
freenas-boot/grub                                               6.96M  49.0G  6.96M  legacy
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,972
Kind of funny, I just landed a new job and am in the process of moving from Virginia to Georgia; I'm moving right away and the family will follow behind. The output of those commands looks fine to me. If you are still having issues, then try my other suggestion of testing the throughput with direct hard-wired connections; don't use wireless, as it will give you inaccurate results.

Good Luck.
 

SangieWolf

Dabbler
Joined
Jun 13, 2018
Messages
21
Kind of funny, I just landed a new job and am in the process of moving from Virginia to Georgia; I'm moving right away and the family will follow behind. The output of those commands looks fine to me. If you are still having issues, then try my other suggestion of testing the throughput with direct hard-wired connections; don't use wireless, as it will give you inaccurate results.

Good Luck.
The whole house is wired with CAT-6A, so wireless isn't being used. How would I do a throughput test? I did find out that the Finder replacement I use on my Mac was causing some slowdown, and using Finder directly helped some. I really notice the slowdown when I open folders with many files. Would adding a drive and making it RAIDZ2 help?

Also, for some reason the SMB share can't be read, so none of my Windows computers can see it :< I should probably make a new topic for that.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,972
If you are going to do throughput testing, then you need to do it right. You need to find out the internal throughput of your system's hard drives; then, if those numbers are good (and they should be), you can test transfer rates over Ethernet with different file sizes. I did a lot of testing when I had LAN card issues, so I'd recommend you read that thread before starting any testing, make sure you are using an uncompressed share for the testing, and then let it rip. Here is the link to the posting; it will help if you take your time to do it correctly. Conduct tests 1 through 5. https://www.ixsystems.com/community/threads/intel-nic-vs-realtek-nic-performance-testing.10325/
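For the Ethernet side specifically, an iperf run between the server and your desktop gives you the raw network number independent of the disks. A rough sketch, assuming iperf3 is available on both ends (older builds may only have iperf 2.x, which works the same way):
Code:
# On the FreeNAS box (Shell):
iperf3 -s
# On the client, pointing at the server's address (192.168.1.100 is just an example IP):
iperf3 -c 192.168.1.100 -t 30
# A healthy gigabit link should report somewhere around 900+ Mbit/s.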

As for no SMB shares, you need to get that fixed first. You probably have a permissions issue; many people do. There are lots of threads dealing with permissions issues, so find one and fix your problem. At a minimum, make one dataset that has 777 permissions and get SMB working well enough to share that dataset. But do the internal speed testing first.
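As a sketch of what I mean (the dataset name here is made up, put it wherever you like on your pool):
Code:
# Create a throwaway dataset and open its permissions wide so permission
# problems can be ruled out while you sort out SMB.
zfs create Awoo/smbtest
chmod 777 /mnt/Awoo/smbtest
# Then share /mnt/Awoo/smbtest through Sharing -> Windows Shares (SMB) in the GUI
# and see whether the Windows machines can open it.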

Would adding a drive and making it RAIDZ2 help?
It might, but do the internal speed tests first. If they are slow, then another drive should help, but it should be fast enough with just a RAIDZ1 setup. If you find it's a problem with lots of small files, you may be better off setting up mirrors to gain the speed you desire.

Lastly, I don't use Apple stuff so the OS is not what I'm familiar with. I don't want to tell you to do something if it turns out that the way the Apple OS works is causing the issue.
 

rayeason

Cadet
Joined
Oct 29, 2021
Messages
5
Hello and good day.

I am having this issue as well.

However, after looking at top, I am seeing collectd running at 3.65% WCPU, and then I hear the hard drives reading/writing in my RAIDZ2.

What is collectd, and can it be optimized or stopped altogether?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Hello and good day.

I am having this issue as well.

However, after looking at top, I am seeing collectd running at 3.65% WCPU, and then I hear the hard drives reading/writing in my RAIDZ2.

What is collectd, and can it be optimized or stopped altogether?

Please don't revive necro threads, but start a new one.

Collectd is what creates the graphs under Reporting. The data files for collectd are part of the system dataset, which is automatically placed on your data pool when the pool is created. See System->System Dataset if you would like to move it to a different pool. Some folks move it to their boot pool, if the boot pool is an SSD that can tolerate constant writes. If you only have a thumb drive for boot, it's better to keep the system dataset on your data pool.
 