Self-check is not working again.

Jerren

Explorer
Joined
Apr 15, 2020
Messages
84
So I've got a problem and I'm not sure how it happened. In the past I've had issues like this as well, but back then I could see things that were wrong in the shell; this time I never saw anything wrong when checking. However, today my server needed a shutdown because I wanted to add a UPS, and it took 3 reboots before it came back up. Naturally I went to check why that was.

First things first: I've been getting this error for either ada1 or ada0 (they seem to alternate), and I never get an error when running smartctl -a /dev/ada1 (or ada0 if the situation requires it). This is the notification when logged into the shell:

Device: /dev/ada1, not capable of SMART self-check

This is the smartctl output:
Code:
root@Brisingr[~]# smartctl -i /dev/ada1
smartctl 7.0 2018-12-30 r4883 [FreeBSD 11.3-RELEASE-p11 amd64] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     WD Blue and Green SSDs
Device Model:     WDC WDS120G2G0A-00JH30
Serial Number:    2003BQ461911
LU WWN Device Id: 5 001b44 4a830af06
Firmware Version: UE510000
User Capacity:    120,040,980,480 bytes [120 GB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Form Factor:      2.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 T13/2015-D revision 3
SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Fri Dec  4 15:04:34 2020 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

Seems ok to me?
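One way to dig a bit deeper here (a generic smartctl sketch, assuming the device node is still /dev/ada1) is to ask the drive for its self-test capabilities and kick off a short test by hand, since the alert complains specifically about self-checks rather than SMART as a whole:

Code:
# Show the drive's capability flags, including whether self-tests are supported
smartctl -c /dev/ada1

# Start a short self-test, then read back the self-test log once it finishes
smartctl -t short /dev/ada1
smartctl -l selftest /dev/ada1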

But since it needed 3 reboots, I decided to check the pool status:

Code:
root@Brisingr[~]# zpool status -v freenas-boot
  pool: freenas-boot
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 0 days 00:15:56 with 139 errors on Mon Nov 30 04:00:56 2020
config:

        NAME                        STATE     READ WRITE CKSUM
        freenas-boot                DEGRADED     0     0   103
          mirror-0                  DEGRADED     0     0   413
            replacing-0             UNAVAIL      0     0     0
              15747358114187901216  REMOVED      0     0     0  was /dev/ada0p2/old
              25115368915956188     FAULTED      0     0     0  was /dev/ada0p2
            ada1p2                  DEGRADED     0     0   413  too many errors

errors: Permanent errors have been detected in the following files:

        <metadata>:<0x1d>
        <metadata>:<0x29>
        <metadata>:<0x2a>
        <metadata>:<0x2b>
        <metadata>:<0x38>
        <metadata>:<0x42>
        <metadata>:<0x47>
        <metadata>:<0x54>
        <metadata>:<0x6f>
        <metadata>:<0x83>
        freenas-boot/ROOT:<0x0>
        freenas-boot/ROOT/11.3-U4.1@2020-05-16-00:55:51:<0x0>
        freenas-boot/ROOT/11.3-U4.1@2020-05-16-00:55:51:/usr/local/lib/python3.7site-packages/docutils/writers
        freenas-boot/ROOT/11.3-U4.1@2020-05-16-00:55:51:/usr/local/lib/python3.7/site-packages/docutils/writers/__pycache__
        freenas-boot/ROOT/11.3-U4.1@2020-05-16-00:55:51:/usr/local/lib/python3.7/site-packages/dojango/contrib/auth/__pycache__
        freenas-boot/ROOT/11.3-U4.1@2020-05-16-00:55:51:/usr/local/lib/python3.7/site-packages/dns_lexicon-3.3.3-py3.7.egg-info
        freenas-boot/ROOT/11.3-U4.1@2020-05-16-00:55:51:/usr/local/lib/python3.7/site-packages/dojango/data/modelstore
        freenas-boot/ROOT/11.3-U4.1@2020-05-16-00:55:51:/usr/local/lib/python3.7/site-packages/docutils/writers/pep_html
        freenas-boot/ROOT/11.3-U4.1@2020-05-16-00:55:51:/usr/local/lib/python3.7/site-packages/dns/rdtypes/CH/__pycache__
        freenas-boot/ROOT/11.3-U4.1@2020-05-16-00:55:51:/usr/local/lib/python3.7/site-packages/dns/rdtypes
        freenas-boot/ROOT/11.3-U4.1@2020-05-16-00:55:51:/usr/local/share/locale/uk/LC_MESSAGES/gas.mo
        freenas-boot/ROOT/11.3-U4.1@2020-06-16-01:02:20:<0x0>
        freenas-boot/ROOT/11.3-U4.1@2020-06-16-01:02:20:/usr/local/lib/python3.7/site-packages/django/views/decorators/__pycache__/debug.cpython-37.pyc
        freenas-boot/ROOT/11.3-U3.2:/data
        freenas-boot/ROOT/11.3-U3.2:<0xffffffffffffffff>
        freenas-boot/ROOT/11.3-U4.1@2020-09-18-18:07:25:<0x0>
        freenas-boot/ROOT/11.3-U4.1@2020-09-18-18:07:25:/usr/local/lib/python3.7/site-packages/django/core/__pycache__
        freenas-boot/ROOT/11.3-U4.1@2020-09-18-18:07:25:/
        freenas-boot/ROOT/11.3-U4.1@2020-09-18-18:07:25:/usr/local/x86_64-portbl-freebsd11.0/bin/ld.gold
        freenas-boot/ROOT/11.3-U4.1@2020-09-18-18:07:25:/root
        freenas-boot/ROOT/11.3-U4.1@2020-09-18-18:07:25:/usr/local/lib/python3.7/site-packages/django/contrib/auth/__pycache__
        freenas-boot/ROOT/11.3-U4.1@2020-09-18-18:07:25:/usr/local/lib/python3.7/site-packages/mako/__pycache__/cmd.cpython-37.pyc
        freenas-boot/ROOT/11.3-U4.1@2020-09-18-18:07:25:/usr/local/lib/python3.7/site-packages/onedrivesdk/model/__pycache__
        freenas-boot/ROOT/11.3-U4.1@2020-09-18-18:07:25:/usr/local/lib/python3.7/site-packages/middlewared/plugins/__pycache__
        freenas-boot/ROOT/11.3-U4.1@2020-09-18-18:07:25:/usr/local/lib/python3.7/site-packages/nacl/__pycache__
        freenas-boot/ROOT/11.3-U4.1@2020-09-18-18:07:25:/usr/local/lib/python3.7/site-packages/django/contrib/sessions/backends/__pycache__
        freenas-boot/ROOT/11.3-U4.1:<0x0>
        //usr/local/share/smartmontools
        //usr/local/lib/python3.7/site-packages/django/contrib/auth/migrations
        //usr/local/lib/python3.7/site-packages/cryptography/hazmat/backends
        //usr/local/lib/python3.7/xml
        //usr/local/lib/python3.7/site-packages/django/conf/locale/sv
        //usr/local/lib/python3.7/site-packages/iocage_lib/__pycache__/ioc_clean.cpython-37.opt-1.pyc
        //usr/local/lib/python3.7/site-packages/django/conf/locale/ta
        //usr/local/lib/python3.7/site-packages/gitdb/utils
        //usr/local/lib/python3.7/site-packages/asyncssh/__pycache__/sftp.cpython-37.pyc
        //usr/local/www/dojo/dojox/xml
        //usr/share/misc/magic
        //usr/local/lib/python3.7/site-packages/paramiko/__pycache__/sftp.cpython-37.opt-1.pyc
        //usr/local/www/freenasUI/tasks
        //usr/local/www/freenasUI/system
        //usr/local/www/dojo/dojox/widget/nls/ro
        //usr/local/lib/python3.7/site-packages/django/contrib/auth/tests/__pycache__
        //usr/share/misc/termcap
        //usr/local/share/git-core/templates/hooks/fsmonitor-watchman.sample
        //usr/local/lib/python3.7/site-packages/django/contrib/sitemaps/__pycache__
        //usr/local/lib/python3.7/site-packages/git/test/performance
        //usr/local/lib/python3.7/site-packages/django/conf/locale/is/__pycache__
        //usr/local/lib/python3.7/site-packages/middlewared/etc_files/__pycache__
        //usr/local/lib/python3.7/site-packages/botocore/data/neptune/2014-10-31/service-2.json
        //usr/local/lib/python3.7/site-packages/git/test/performance/__pycache__
        freenas-boot/ROOT/11.3-U4.1:<0x30556>
        freenas-boot/ROOT/11.3-U4.1:<0x30557>
        //usr/local/pydevd_attach_to_process/winappdbg/plugins
        //usr/local/lib/python3.7/site-packages/django/utils/__pycache__/version.cpython-37.opt-1.pyc
        //usr/local/lib/python3.7/site-packages/django/conf/locale/tr/__pycache__
        //usr/local/lib/python3.7/site-packages/django/contrib/staticfiles/management/commands
        //usr/local/lib/python3.7/site-packages/django/contrib/staticfiles/management/commands/__pycache__
        //usr/local/www/dojo/dojox/widget/gauge
        //usr/local/lib/python3.7/site-packages/botocore/data/organizations/2016-11-28/service-2.json
        //usr/local/lib/python3.7/site-packages/aiohttp/__pycache__/http_exceptions.cpython-37.opt-1.pyc
        //usr/local/lib/migrate93/django/test
        //usr/local/lib/migrate93/django/test/__pycache__
        //usr/local/www/dojo/dojox/widget/nls/sk
        //usr/local/www/freenasUI/tools/__pycache__
        //usr/local/lib/python3.7/site-packages/django/contrib/staticfiles/templatetags/__pycache__
        //usr/local/share/glib-2.0/codegen/__pycache__/utils.cpython-37.opt-1.pyc
        //usr/local/lib/python3.7/site-packages/aiohttp/__pycache__/log.cpython-37.opt-1.pyc
        //usr/local/lib/python3.7/site-packages/ws4py/server
        //usr/share/locale/is_IS.ISO8859-1
        //usr/local/lib/python3.7/site-packages/django/conf/locale/km
        //usr/local/lib/python3.7/site-packages/aiohttp/__pycache__/payload_streamer.cpython-37.opt-1.pyc
        //usr/local/lib/python3.7/site-packages/requests_toolbelt/utils
        //usr/local/lib/python3.7/site-packages/botocore/data/pinpoint/2016-12-01/service-2.json
        //usr/local/lib/perl5/5.30/Tie
        //usr/local/lib/python3.7/site-packages/zettarepl/snapshot/__pycache__
        //usr/local/lib/python3.7/site-packages/xattr-0.9.6-py3.7.egg-info
        //usr/local/lib/python3.7/site-packages/django/conf/locale/kn/__pycache__
        //usr/local/lib/python3.7/site-packages/django/db/models/sql
        //usr/local/lib/python3.7/site-packages/git/test/fixtures
        //usr/local/lib/python3.7/site-packages/middlewared/etc_files/local
        //usr/local/lib/python3.7/site-packages/jsonschema/__pycache__
        //usr/local/lib/python3.7/site-packages/botocore/data/lex-models/2017-04-19/service-2.json
        //usr/local/lib/python3.7/site-packages/nacl/bindings/__pycache__
        //usr/local/lib/perl5/5.30/Unicode/Collate/CJK
        //usr/local/lib/python3.7/site-packages/samba/dcerpc/drsuapi.so
        //usr/local/lib/python3.7/site-packages/middlewared/etc_files/local/nut
        //usr/local/libexec/freenas-debug
        //usr/local/www/dojo/dojox/widget/nls/th
        //usr/local/lib/python3.7/site-packages/south/tests/brokenapp/migrations/__pycache__
        //usr/local/www/freenasUI/common/__pycache__
        //usr/local/lib/python3.7/site-packages/django/forms
        //usr/local/lib/python3.7/site-packages/zettarepl/transport/__pycache__
        //usr/share/locale/ja_JP.UTF-8
        //usr/local/lib/migrate93/freenasUI/account
root@Brisingr[~]#


Now, why does it say REMOVED or replacing in there? One thing I thought of myself is that it could be because of power outages; we had 3 of them in a short span of time. Is it from that? I have no idea, but it's a possibility. Nothing got changed, and the case that holds it all didn't move to my knowledge. The last time I ran the zpool status command, it showed both drives online and everything was working fine. Those outages are why I bought the UPS, and the server is plugged into it at the moment.

If anyone has any idea how this happened, I'm open to suggestions. The FreeNAS box was updated from 11.3-U3.2 to FreeNAS-11.3-U4.1 in September; I still have the DB file and a compressed tar from then, as it was recommended to back those up. I checked after the update and all commands came back positive, showing everything online, so I don't think the update is the problem.

If more information is needed, just ask and I'll do my best. I'm still new at all this and have not encountered this before.
 

Jerren

Explorer
Joined
Apr 15, 2020
Messages
84
One short addendum to this: I just did zpool status -v to check on all pools, and every other pool comes back good, showing all drives online. The boot pool is on my SSDs, which are split to also give me a small SSD pool for applications that can use fast storage. The checksums on that pool are 0 and its devices show online, so it's only the mirrored boot partition on the SSDs that is having this problem.
 
Joined
Jan 18, 2017
Messages
525
Oh, this looks familiar. I believe the Silicon Motion SM2256S controller is your issue, based on this now-closed bug report.

Disabling TRIM on them should stop the errors, and reinstalling should fix the damaged files.
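On the FreeBSD 11.x base that FreeNAS 11.3 runs on, ZFS TRIM is governed by a loader tunable; a rough sketch (verify the exact knob on your release) is to check the current value from the shell and then persist vfs.zfs.trim.enabled=0 as a "loader"-type tunable under System > Tunables, followed by a reboot:

Code:
# 1 means ZFS TRIM is currently on, 0 means off
sysctl vfs.zfs.trim.enabled

# To turn it off persistently, add this as a loader tunable
# (GUI: System > Tunables) or to /boot/loader.conf, then reboot:
# vfs.zfs.trim.enabled=0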
 

Jerren

Explorer
Joined
Apr 15, 2020
Messages
84
Reading through those threads, I've got to agree; it seems to be that controller causing the check issues. They advise just replacing the SSD, since other things could be wrong in the background. So that solves that mystery. However, if I'm going to replace that SSD, I want my boot pool in decent condition first.

Is there any way I can point FreeNAS to the base installer and let it repair itself from that?

Thanks for finding that one btw cobra!
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
Is there any way I can point FreeNAS to the base installer and let it repair itself from that?
No need. Save your config, reinstall FreeNAS, upload the saved config, reboot, and you're good.
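For context, the usual way to do that first step is System > General > Save Config in the GUI, which hands you the SQLite database the middleware runs from. As a sketch, the same file can also be copied off over SSH; the destination user and host below are placeholders:

Code:
# On the FreeNAS box: copy the configuration database somewhere safe
scp /data/freenas-v1.db youruser@your-desktop:~/freenas-config-backup.db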
 
Joined
Jan 18, 2017
Messages
525
I'm using a variant of that controller and have TRIM off. When they die, I will be more careful about selecting replacements.
 

Jerren

Explorer
Joined
Apr 15, 2020
Messages
84
Yeah, at the moment I'm still contemplating what I'm going to do with that SSD. I wish that controller was mentioned in more guides as one to avoid, to be honest. I bought this one to replace an old SSD that didn't have a SMART logging option, and I didn't find any info warning against it.
 
Joined
Jan 18, 2017
Messages
525
Rumor was the drives work fine as long as they are not part of the boot pool. I have not tested that, as I only have them in the boot pool.
 

Jerren

Explorer
Joined
Apr 15, 2020
Messages
84
OK, so my new SSD arrived and I was thinking of at least letting the boot pool try to resilver to that one. Why? Because I have split my SSD mirror into a boot pool and a fast storage area (for Emby and potentially VMs, although those don't seem to benefit from the fast storage). If I just do a reinstall of the boot pool, then I'll lose my Emby data too, which I'm not a fan of.

So my question then becomes: can I just do a replace on ada1 and plug the new drive into its slot, with a boot pool that looks like the picture I'm adding?
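For reference, the supported route for a boot device swap is the boot pool status page in the GUI (under System > Boot on 11.3), which also partitions the new disk for you. Purely as a sketch of the ZFS side underneath, and keeping in mind this box uses a custom split layout, both arguments below are placeholders to be taken from zpool status:

Code:
# Find the label or numeric GUID of the member being replaced
zpool status -v freenas-boot

# Replace it with the matching ZFS partition on the new, already-partitioned disk
zpool replace freenas-boot <old-label-or-guid> <new-partition>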
 

Attachments

  • Boot pool.JPG

Jerren

Explorer
Joined
Apr 15, 2020
Messages
84
After a reboot the pool status now says something else; I don't really understand why. (The reboot was for a change of UPS.)

Edit: I did a scrub on the boot pool to see what it said, and it went back to UNAVAIL for my ada0 drive. Does UNAVAIL mean that it can't see the drive? I can't find it explained in the documentation.
 

Attachments

  • after reboot.JPG

Jerren

Explorer
Joined
Apr 15, 2020
Messages
84
I think I'll try to replace the ada1 drive tomorrow and see what happens. As long as I back up first, I should be good, I think.
 

Jerren

Explorer
Joined
Apr 15, 2020
Messages
84
Because I need to replace the ada1 drive, I figured I should detach the old ada2 drive so the replace group isn't on ada2 anymore. But I get an error that pops up: [EZFS_NOTSUP] Can detach disks from mirrors and spares only
Code:
Error: concurrent.futures.process._RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 247, in __zfs_vdev_operation
    op(target, *args)
  File "libzfs.pyx", line 369, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 247, in __zfs_vdev_operation
    op(target, *args)
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 256, in <lambda>
    self.__zfs_vdev_operation(name, label, lambda target: target.detach())
  File "libzfs.pyx", line 1764, in libzfs.ZFSVdev.detach
libzfs.ZFSException: Can detach disks from mirrors and spares only

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/concurrent/futures/process.py", line 239, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 97, in main_worker
    res = loop.run_until_complete(coro)
  File "/usr/local/lib/python3.7/asyncio/base_events.py", line 579, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 53, in _run
    return await self._call(name, serviceobj, methodobj, params=args, job=job)
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 45, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 45, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.7/site-packages/middlewared/schema.py", line 965, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 256, in detach
    self.__zfs_vdev_operation(name, label, lambda target: target.detach())
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 249, in __zfs_vdev_operation
    raise CallError(str(e), e.code)
middlewared.service_exception.CallError: [EZFS_NOTSUP] Can detach disks from mirrors and spares only
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 130, in call_method
    io_thread=False)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1084, in _call
    return await methodobj(*args)
  File "/usr/local/lib/python3.7/site-packages/middlewared/schema.py", line 961, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/boot.py", line 214, in detach
    await self.middleware.call('zfs.pool.detach', 'freenas-boot', dev)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1141, in call
    app=app, pipes=pipes, job_on_progress_cb=job_on_progress_cb, io_thread=True,
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1081, in _call
    return await self._call_worker(name, *args)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1101, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1036, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1010, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [EZFS_NOTSUP] Can detach disks from mirrors and spares only


Did this pop up because I didn't detach the old drive before these problems appeared, or is there another reason?
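For what it's worth, the shell sometimes gives a clearer picture of what the GUI is refusing to do here. A rough sketch only: the GUID below is the leftover "was /dev/ada0p2/old" member from the earlier zpool status output, and the pool's state has changed since, so double-check against the current output before detaching anything:

Code:
# Show the current members and their labels/GUIDs
zpool status -v freenas-boot

# A stale member of a "replacing" group can usually be detached by its GUID
zpool detach freenas-boot 15747358114187901216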
 

Jerren

Explorer
Joined
Apr 15, 2020
Messages
84
So I'm feeling a bit like an idiot stumbling around in the dark here. I tried unplugging the ada0 drive because it kept saying UNAVAIL and couldn't be removed. So I shut down the machine, unplugged the drive, booted back up, and detached the old drive. I shut down again and plugged the ada0 drive back in, only to see it no longer appears in the boot pool drive list (which I think is odd, since I removed the old one, not the one that seemed to still be in use). The machine still runs and is able to boot, but to say I'm uncomfortable doing more is an understatement.

I considered reinstalling for a brief moment, but I read in some posts that you should export/disconnect your storage pools before doing that. However, the "export/disconnect" dialog gives me an option to delete the configuration of shares that used this pool. It seems I would not want that, right? So when I reinstall, everything keeps its permissions? Or should I redo everything?

So, current status: I'm running on 1 SSD that has an increasing number of checksum errors and, as a result, a boot pool that's degraded.
Does anyone have clear tips or a guide somewhere for such a situation? I'm at a loss right now because I don't know anything about this procedure yet, and as such I can't compare other people's topics to my own situation.

At this point I think it's smartest to wait for someone to respond and not touch the machine until some people more qualified than me jump in.
 
Joined
Jan 18, 2017
Messages
525
I assume you have two new SSDs, right? With the four SSDs you should make two new pools. Shut down your jails (also set them not to auto-start) and copy the data you need off the old SSDs to your storage pool temporarily, as you will be formatting them.
Jailer already said what to do: save your config onto your desktop, do a new install on the new SSDs, and import your config file. Format the old SSDs, reinstall the jails you want onto them, and copy any files you wanted back to them from your storage pool.
You should end up with a mirrored boot pool on the brand new SSDs and mirrored fast jail storage, like you had before, on the old SSDs.
 
Joined
Jan 18, 2017
Messages
525
I read in some posts that you should export/disconnect your storage pools before doing that. However, the "export/disconnect" dialog gives me an option to delete the configuration of shares that used this pool. It seems I would not want that, right? So when I reinstall, everything keeps its permissions? Or should I redo everything?

Exporting/disconnecting is a safety precaution so you do not accidentally format the wrong drives during the process. You do not want to delete the share configuration unless you are unhappy with your current configuration.
 

Jerren

Explorer
Joined
Apr 15, 2020
Messages
84
Hey @cobrakiller58, thanks for replying. I was losing grip on what I was doing; that's why I decided to leave it for a couple of days. I should have known not to do this sort of thing when I'm tired late at night, especially when my knowledge is still this limited.

Now for my current situation: I have bought 1 new SSD, because 1 of the 2 currently used SSDs is perfectly fine and 1 has the bad controller, so I will just be replacing the one with the bad controller.

When it comes to data on the SSD storage pool, luckily there is nothing on it yet because I'm still trying things out (good thing I went slow, it seems). The only things on the SSDs are the mirrored boot pool and a storage pool (with no data) that has 1 jail on it, namely for my Emby plugin. I have a subscription on Emby, so it backs up my view history and such; I don't need to worry about that part either.

So looking at it now, here's what I think I will/should do:
1) Stop all jails
2) Take screenshots of settings, just in case
3) Save the config to my desktop
4) Format the SSDs (both the used one and the new one)
5) Reinstall FreeNAS (it's a split config of half boot pool, half SSD storage, as a guy named Patrick explained here)
6) Upload the config
7) Import the data pools
Not sure if there is a step 8 here?
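As a small aside on step 7: if a data pool does not show up on its own after the config upload, it can usually be brought back through Storage > Pools > Import in the GUI, or, as a rough sketch from the shell (the pool name here is a placeholder):

Code:
# List pools the system can see but has not imported
zpool import

# Import one by name
zpool import tank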

So, if I'm reading you correctly, I don't need to use the "export/disconnect" option on my HDD pool before reinstalling? It's safe to say I won't format the wrong drives, since they are vastly different sizes and I'll attach the SSDs to a different machine to format them. When installing I'll also know the port numbers, so I doubt that will be an issue.

I'm going to wait a day or two more before doing this, just in case something pops into my head that I should do first, and maybe try to find other people who have had to do this and see what they recommend. Better safe than sorry, after all.
 
Joined
Jan 18, 2017
Messages
525
Patrick clearly says that it is an unsupported configuration; you are welcome to do with it as you like.
 

Jerren

Explorer
Joined
Apr 15, 2020
Messages
84
Oh yeah, I know he said that, and I believe my problems don't stem from that configuration. The self-check problem comes from the SSD controller on one of the drives, and the issue of a drive not being in the pool is because I did something wrong when replacing a drive the last time an SSD had a problem (I believe it was SMART logging that wasn't supported; it was a very old SSD). The only sad part is that I replaced it with a new SSD that has a controller which is known to be buggy (sadly I didn't know this beforehand). Add to that a series of power losses one night in my house, and you've got a recipe for disaster. I'm honestly impressed with how well FreeNAS is still running.

So yeah, current list of problems:
1) Self-check issue: solved when I replace the SSD with a new one. Shouldn't happen after the reinstall.
2) Mirror missing a drive: I'm going to call that a user error and my bad; clearly I didn't do a good job when trying to replace the old SSD. That's something I'll have to do better the next time a drive needs replacing.
3) Power losses: I now have a UPS installed, so that won't happen again.

Also, the steps I listed apply to every reinstall, no? That's why I asked that last part; it has nothing to do with Patrick's unsupported configuration, I think, because the HDD pools aren't involved in that setup, they are a completely separate pool.
 

Jerren

Explorer
Joined
Apr 15, 2020
Messages
84
OK, a couple of hours ago I started, and I seem to have hit a snag with exporting my (HDD) data pool; keep in mind this isn't the SSD boot pool. I did it through the GUI, if that matters. But it kept sitting at 5%, same thing in the task manager. Since it's been over 3 hours now, I decided to go look at the actual machine (I had attached a screen and keyboard before starting this). What I see on the screen is a lot of timeouts and lastly "Retrying command"; the slots in the timeouts seem to change a couple of times. Here's the output on screen at the moment.

Code:
shcichi1: Timeout on slot 25 port 0
shcichi1: is 00000000 cs 02000000 ss 00000000 rs 02000000 tfd 50 serr 00000000 cmd 0020d917
shcich1: Erorr while READ LOG EXT
(ada1:ahcich1:0:0:0): WRITE_FPDMA_Queued. ACB: 61 50 78 2c 8c 40 83 00 00 00 00 00
(ada1:ahcich1:0:0:0): CAM status: ATA Status Error
(ada1:ahcich1:0:0:0): ATA status: 00 ()
(ada1:ahcich1:0:0:0): RES: 00 00 00 00 00 00 00 00 00 00 00
(ada1:ahcich1:0:0:0): Retrying command
Dec 16 16.20.56 servername zfsd: Degrade vdev(2232220053474433758/9001984713377441883) : cannot degrade 9001984713377441883: pool I/O is currently suspended.

This output repeats every 5 minutes; the only thing that changes is the timestamp. The only thing I've found online is people on Reddit saying the only thing you can do is a hard reboot. Anyone got some tips here?
 
Joined
Jan 18, 2017
Messages
525
shcichi1:
Was that supposed to be "ahcich1"?

(ada1:ahcich1:0:0:0): CAM status: ATA Status Error
ada1 was your SSD in your first post; is it still an SSD?

What is the current zpool status of the pool you are trying to export? I believe a clean shutdown exports the pool anyway, and it just gets imported again during startup.
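While waiting for the export to either finish or fail, a rough sketch of what can be checked from another shell session (the pool name below is a placeholder): zpool status shows whether I/O really is suspended and which member is faulting, and if the underlying device has come back, zpool clear can sometimes let a suspended pool resume; otherwise a hard reboot tends to be the only way out, as those Reddit posts suggested.

Code:
# Check whether the pool's I/O is suspended and which member is faulting
zpool status -v yourpool

# If the device is reachable again, clearing errors may let the pool resume
zpool clear yourpool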
 