Automatic pool scrub

adam23450

Contributor
Joined
Feb 19, 2020
Messages
142
I have an automatic pool scrub scheduled, but I can't see it happening anywhere. I get no notifications. Judging by the disk charts, the disks are not busy at the time the scrub task is scheduled to run.
 

Attachments

  • scrub.JPG (23.6 KB)

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
What do you see with zpool status -v ?
 

adam23450

Contributor
Joined
Feb 19, 2020
Messages
142
What do you see with zpool status -v ?
Code:
                                                                                                      
root@dom[~]# zpool status -v
  pool: HDD
state: ONLINE
  scan: scrub repaired 0B in 00:08:12 with 0 errors on Mon Jan 11 15:54:15 2021
config:

        NAME                                          STATE     READ WRITE CKSUM
        HDD                                           ONLINE       0     0     0
          gptid/a90c8662-0e0d-11eb-9058-b42e9967ad61  ONLINE       0     0     0

errors: No known data errors

  pool: Maszyny
state: ONLINE
  scan: scrub repaired 0B in 00:13:34 with 0 errors on Mon Jan 11 15:59:29 2021
config:

        NAME                                          STATE     READ WRITE CKSUM
        Maszyny                                       ONLINE       0     0     0
          gptid/2bc01a86-6b69-11ea-b865-b42e9967ad61  ONLINE       0     0     0

errors: No known data errors

  pool: Raid-5-1-RAID-Z
state: ONLINE
  scan: scrub repaired 0B in 03:04:30 with 0 errors on Thu Jan 14 14:32:16 2021
config:

        NAME                                            STATE     READ WRITE CKSUM
        Raid-5-1-RAID-Z                                 ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/60ca5a0e-5410-11eb-9607-b42e9967ad61  ONLINE       0     0     0
            gptid/60b1be9c-5410-11eb-9607-b42e9967ad61  ONLINE       0     0     0
            gptid/60e0061e-5410-11eb-9607-b42e9967ad61  ONLINE       0     0     0
        cache
          gptid/5eeae037-5410-11eb-9607-b42e9967ad61    ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 00:00:18 with 13 errors on Wed Jan 20 03:45:18 2021
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  DEGRADED     0     0     0
          ada0p2      DEGRADED     0     0    15  too many errors

errors: Permanent errors have been detected in the following files:

        freenas-boot/ROOT/default@2020-10-11-16:05:18:/usr/local/lib/python3.6/site-packages/aiohttp/__pycache__/client_proto.cpython-36.opt-1.pyc
        freenas-boot/ROOT/default@2020-10-11-16:05:18:/conf/base/etc/rc.d/var
        freenas-boot/ROOT/12.0-U1@2020-11-01-12:09:25:/usr/local/lib/python3.7/site-packages/cryptography/hazmat/backends/openssl/__pycache__/ed448.cpython-37.opt-1.pyc
        freenas-boot/ROOT/12.0-U1@2020-11-01-12:09:25:/usr/local/lib/python3.7/site-packages/josepy/__pycache__/interfaces_test.cpython-37.pyc
        freenas-boot/ROOT/12.0-U1@2020-11-01-12:09:25:/usr/local/lib/python3.7/site-packages/onedrivesdk/request/__pycache__/subscription_request.cpython-37.opt-1.pyc
        freenas-boot/ROOT/12.0-U1@2020-11-01-12:09:25:/usr/local/lib/python3.7/site-packages/raven/__pycache__/context.cpython-37.opt-1.pyc
        freenas-boot/ROOT/default-20201011-161523:<0x0>
        freenas-boot/ROOT/default-20201011-161523:/usr/local/lib/python3.7/site-packages/acme/__pycache__/fields.cpython-37.opt-1.pyc
        freenas-boot/ROOT/default-20201011-161523:/usr/local/lib/python3.7/site-packages/dns/__pycache__/node.cpython-37.pyc
        freenas-boot/ROOT/default-20201011-161523:/usr/local/lib/migrate93/django/utils/__pycache__/image.cpython-37.pyc
        freenas-boot/ROOT/default-20201011-161523:/usr/local/lib/python3.7/site-packages/raven/utils/__pycache__/testutils.cpython-37.pyc
        freenas-boot/ROOT/default-20201011-161523:/usr/local/lib/python3.7/site-packages/raven/utils/__pycache__/transaction.cpython-37.opt-1.pyc
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
Well, your boot pool has corruption... are you using a WD Green SSD perhaps? (Or something else with a SandForce controller?)

I see the last scrubs on the rest of your pools were the 11th and 14th of Jan.
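If you're not sure of the model, something like this should show it from the shell (assuming ada0 is your boot disk, as in the zpool status output above):
Code:
# print the boot device's model and firmware details
smartctl -i /dev/ada0
# or, using the FreeBSD CAM tools
camcontrol identify ada0 | head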
 

adam23450

Contributor
Joined
Feb 19, 2020
Messages
142
Well, your boot pool has corruption... are you using a WD Green SSD perhaps? (Or something else with a SandForce controller?)

I see the last scrubs on the rest of your pools were the 11th and 14th of Jan.
That's right, I'm using a WD Green. It has shown those errors from the very beginning, ever since I bought the disk.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
That's right, I'm using a WD Green. It has shown those errors from the very beginning, ever since I bought the disk.
The WD Green SSDs are known to use a controller that has issues in FreeBSD (at least in older versions) and would cause corruption due to the way that TRIM was handled.

If you get a chance, at the very least you should reinstall (backup/restore the config). You are currently working with some corrupt OS files, which may be contributing to your problem. You could also consider buying a cheap replacement for that SSD (Kingston doesn't use that controller, so it's a safe choice among the many options out there).
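
For the config backup, the GUI export (System > General > Save Config) is the easy way, but you can also copy the database off the box from a shell; a rough sketch, assuming the usual TrueNAS CORE location for the config DB and a backup host of your own:
Code:
# copy the TrueNAS config database to another machine before reinstalling
# (/data/freenas-v1.db is the usual location on FreeNAS/TrueNAS CORE;
#  user@backup-host is a placeholder for your own machine)
scp /data/freenas-v1.db user@backup-host:/some/safe/place/
# after reinstalling, restore it via System > General > Upload Config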
 

adam23450

Contributor
Joined
Feb 19, 2020
Messages
142
The WD Green SSDs are known to use a controller that has issues in FreeBSD (at least in older versions) and would cause corruption due to the way that TRIM was handled.

If you get a chance, at the very least you should reinstall (backup/restore the config). You are currently working with some corrupt OS files, which may be contributing to your problem. You could also consider buying a cheap replacement for that SSD (Kingston doesn't use that controller, so it's a safe choice among the many options out there).
But the pool scrub should run tomorrow, right? Could the fact that the system disk may be damaged cause logical data corruption?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
Could the fact that the system disk may be damaged cause logical data corruption?
The corruption is on the boot pool itself. It may impact the ability of the system to report things to you or to perform system tasks in the middleware (or other parts of TrueNAS). Data on your other pools is not impacted by that corruption (that's what the scrubs will confirm).

If the system is working properly, the scrub should run on Sundays if the last scrub was more than 7 days ago.
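
If you don't want to wait for Sunday, you can also kick a scrub off by hand and watch the scan line update, for example:
Code:
# start a scrub manually on one of the data pools
zpool scrub Raid-5-1-RAID-Z
# check progress / completion time in the "scan:" line
zpool status -v Raid-5-1-RAID-Z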
 
Top