Are 2 of My Drives Failed? (See Edit: Moving Data Onto New Vdev, To Remove Old)

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
Unfortunately that is a real concern in most situations. The good thing is you are only stressing the drives in the specific mirror as I understand it. And you have two good drives already (the original and a spare) and you are reading from both of those drives to create the new mirror for that set. So you should not be as concerned since you have a good pair even before the resilvering is complete. If I have this wrong, I know someone will chime in. I am not the resilvering Guru.
I was under the impression it only resilvered from the spare, and left the other drive alone. This is good to know though.
All that aside, it is now at 82%. It seems like it maybe got stuck showing a low percent, or needed to handle metadata before it sped up, or it was just being stupid.
The long time might be because of a CPU cap. Looking at your specs... nah, unless resilvering is single-threaded.
Lmao. I do question things in general about the CPU usage, because my CPU only remains around 1-4% overall. I get it's a bit overkill, especially seeing I'm not running any Jails currently. But idk, it just seems lower than I'd expect.
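(One thing I might sanity-check is whether a single core is pegged while the rest sit idle, which is what would matter if resilvering really were single-threaded; I think a per-CPU, per-thread view in top would show that:)
Code:
# FreeBSD top: -S include system/kernel processes, -H show threads, -P per-CPU stats
top -SHP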
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I was under the impression it only resilvered from the spare, and left the other drive alone.
I've never heard that before but as I said, I'm not the resilvering Guru.
All that aside, it is now at 82%. It seems like it maybe got stuck showing a low percent, or needed to handle metadata before it sped up, or it was just being stupid.
The way the percent complete is calculated/displayed was explained before. It makes an initial assessment of how much data there is to write and the current speed at which the data is flowing, and from that it tells you the time remaining. As the resilvering continues, the completion time is updated based on more samples of the resilvering speed. It always seems to start with a very high number, then it tends to become more accurate and the time generally comes down. When I do a SCRUB on my system, my initial estimate is over 11 hours, but when all is complete I'm at just over 5 hours. I'm not saying everyone's time will be halved; that is just my case.
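For what it's worth, my reading of the scan line (an assumption on my part about how the numbers relate, not something pulled from the ZFS docs) is roughly: percent done ≈ issued / total, and the time-to-go is extrapolated from the current rate at which data is being issued, which is why both numbers bounce around early on. You can watch just that part of the output with something like:
Code:
# Show only the scan/progress lines for the pool
zpool status -v PrimaryPool | grep -A 2 'scan:'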

because my CPU only remains around 1-4% overall.
It just means that your EPYC CPU is not the bottleneck. This is a good thing since that is one heck of a CPU. I have no idea why you would put a CPU like that into a NAS. I guess if you plan to run a lot of VM's from that computer then that would be justification.

Glad your resilvering is almost complete, then you can detach your 'spare' drives.
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
The way the percent complete is calculated/displayed was explained before. It makes an initial assessment of how much data there is to write and the current speed at which the data is flowing, and from that it tells you the time remaining. As the resilvering continues, the completion time is updated based on more samples of the resilvering speed. It always seems to start with a very high number, then it tends to become more accurate and the time generally comes down. When I do a SCRUB on my system, my initial estimate is over 11 hours, but when all is complete I'm at just over 5 hours. I'm not saying everyone's time will be halved; that is just my case.
Unless I missed it, the earlier explanation covered the estimated completion time, not the percent complete. The fact that it was stuck around 5-7% for almost a day is what worried me most. But okay, good to know.
It just means that your EPYC CPU is not the bottleneck. This is a good thing since that is one heck of a CPU. I have no idea why you would put a CPU like that into a NAS. I guess if you plan to run a lot of VM's from that computer then that would be justification.
My original plan was to be able to run a handful of things from the Jails, but once I started realizing how much I dislike dealing with the jails, seeing as 95% of the plugins alone are broken, it is what it is. But I got the CPU for a good deal with the motherboard and the RAM. Worst case I move it over to my future planned Proxmox server, which is ideally where I always wanted to host everything, and just keep TrueNAS how it is as a storage unit. But Jails has been a pretty terrible experience. I've been debating just migrating to SCALE, but idk. I care more about the UI on SCALE than the container support tbh lol. CORE feels like it gets little work on its web GUI compared to SCALE.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
The good thing is you are only stressing the drives in the specific mirror as I understand it.
That theory always made sense to me, but I've seen that it doesn't necessarily go like that in reality.

I think it's the shared code that is used for both resilver and scrub that means that more-or-less the whole pool needs to be involved to do any resilver.
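One quick way to see it for yourself (just an observation tool, not proof of how the code works) is to watch per-disk activity while a resilver is running; if disks outside the affected mirror show steady reads, the whole pool is being touched.
Code:
# Per-vdev / per-disk I/O for the pool, sampled every 5 seconds
zpool iostat -v PrimaryPool 5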
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
That theory always made sense to me, but I've seen that it doesn't necessarily go like that in reality.

I think it's the shared code that is used for both resilver and scrub that means that more-or-less the whole pool needs to be involved to do any resilver.
That is why I'm not the Guru for this. I would think that since it's a mirror, the data would come from the other drive, but I could see that some data could come from other drives as well; I just never thought of it that way. Thanks for that information, it's good to learn.

I've been debating just migrating to SCALE, but idk. I care more about the UI on SCALE than the container support tbh lol. CORE feels like it gets little work on its web GUI compared to SCALE.
CORE gets few GUI adjustments compared to SCALE because CORE is vastly more mature; SCALE is just now considered stable. Also, CORE is what the TrueNAS paid product is based off of (for right now), and changing the GUI for the hell of it would not please customers who know the GUI in its present format. I personally dislike the GUI differences between the two; there are things I like about each, but I'm more comfortable with the CORE GUI.

Let me tell you something that I do, because I manage a script that runs in both CORE and SCALE. I've stopped installing "ZFS Feature Updates" for my pool; almost 100% of the time the new features would not make my system work better or be more reliable. This also allows me to roll back to a previous version of TrueNAS, at least as far back as when the last feature update present in my pool was introduced. So that is step 1.

Step 2: I use ESXi (free version), which has no real bells and whistles but works for what I need. I have created two VM's. The first one is CORE, and I configure everything and make sure TrueNAS is running properly. I give it 16GB RAM, 2 CPUs, and a single 16GB virtual boot drive; I would not run TrueNAS with less than 10GB RAM. Then I shut down the VM, clone it, and name the clone SCALE. I power up the SCALE VM, which of course at this point is really CORE, and perform the upgrade to SCALE. I make sure it's all working; I do not update the ZFS features, I just dismiss those prompts.

Step 3: I set AutoStart to run the CORE VM when I power on the computer. SCALE is set for manual power-on only, but you can choose whichever you prefer. You do not want both VM's running at the same time; that is trouble because they share the same hard drives. Honestly, I have not tried to run both at the same time; I hope ESXi would throw an error message and stop it from happening, but I don't know if it would. Also, set up automatic shutdowns and make sure you give it enough time to actually shut down; I give mine 2 minutes. Why do this? What if you are on a UPS and Proxmox needs to shut down automatically? I'm assuming you would have your system configured that way, and Proxmox would need to shut down any VM's before powering itself down.

Now you can switch between CORE and SCALE. It's not a perfect thing to do, but it's what I do and have been doing for a few years now, though I primarily run CORE because, until recently, SCALE was not very stable. You can do this for as long as you like and then eventually settle on the version you prefer. Also, in the future there may be a ZFS feature update you desire, but always read the description of the update and what it gives you. Sometimes it's an update to make deduplication work better, but if you do not use deduplication, why update?
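If you want to see exactly what you would be skipping before deciding, you can list the pending feature updates and the state of each feature flag; neither command enables anything (pool name as used in this thread):
Code:
# Read-only report of pools that do not have all supported features enabled
zpool upgrade
# Show each feature flag's state (disabled / enabled / active) on the pool
zpool get all PrimaryPool | grep feature@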

And since you desire to run Proxmox (some folks have had problems with it, some have been fine), you would run TrueNAS alone in its own VM: no plugins, jails, TrueCharts, or anything. Well, you could do all of those if you wanted to, but I only run Plex, in a jail (not a plugin). I don't know if it works in SCALE because I'm in SCALE for script proofing only. But that is just an option you could take.

The Disclaimer: If you are not really going to pay attention when running two different VM's against the same pool, then DON'T. I'm certain you could damage your pool; I don't know exactly how, but it's a risk. Why take the risk if you are not going to at least try your best to do the right thing from a VM perspective? I am careful in what I do, and to be honest, so long as I do not run both VM's at the same time, all is good. It's a risk you need to think about before leaping. I have no idea how computer/Proxmox savvy you are; if you are a novice, this might not be the option for you.

I hope your resilvering is completed by now so you can think about removing the spares.
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
Haven't had a chance to read that, but I want to come back when I can look it through in detail.
But I just noticed when I woke up, that the re-silvering was at like 96% last night, and now it's at about 9%.
Was very confused, but I checked the pool and it looks like it only resilvered one of the drives. So I guess it does 1, and then re-silvers the other after. Not both at once.

Also.. it automatically detached the spare.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Haven't had a chance to read that, but I want to come back when I can look it through in detail.
But I just noticed when I woke up, that the re-silvering was at like 96% last night, and now it's at about 9%.
Was very confused, but I checked the pool and it looks like it only resilvered one of the drives. So I guess it does 1, and then re-silvers the other after. Not both at once.

Also.. it automatically detached the spare.
Sorry to hear it is taking a very long time to resilver but glad to know the spare detached itself.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Also.. it automatically detached the spare
Interesting.

I have had dozens of threads here where that hasn't happened and people are seeking advice on what to do... I guess there must be some conditions where it does happen automatically, but it's unclear to me what they would be.

Can you share zpool status again after the spare is back to available? (and confirm that really happened automatically?)
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
Ok so just checked the system this morning.
It is not currently re-silvering anymore, and it looks like both spares are detached.

HOWEVER, there is a concerning notification from it:
Code:
CRITICAL
Pool PrimaryPool state is ONLINE: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected.
2023-08-20 08:12:15


From the zpool status, I assume the affected device is gptid/c774316e-3c2c-11ee-96af-ac1f6be66d76 (da9p2)?
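(A quick way to double-check that gptid-to-device mapping, using the same glabel output that's pasted in full further down:)
Code:
# Confirm which da device backs that gptid label
glabel status | grep c774316e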

I believe this is the smartctl for that disk?
Code:
[~]# smartctl -a /dev/da9
smartctl 7.2 2021-09-14 r5236 [FreeBSD 13.1-RELEASE-p7 amd64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:               HGST
Product:              HUS726040AL4210
Revision:             A980
Compliance:           SPC-4
User Capacity:        4,000,787,030,016 bytes [4.00 TB]
Logical block size:   4096 bytes
LU is fully provisioned
Rotation Rate:        7200 rpm
Form Factor:          3.5 inches
Logical Unit id:      0x5000cca2440f4444
Serial number:        N8G8D8PT
Device type:          disk
Transport protocol:   SAS (SPL-3)
Local Time is:        Sun Aug 20 09:22:42 2023 EDT
SMART support is:     Available - device has SMART capability.
SMART support is:     Enabled
Temperature Warning:  Enabled

=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK

Current Drive Temperature:     40 C
Drive Trip Temperature:        85 C

Accumulated power on time, hours:minutes 43186:49
Manufactured in week 19 of year 2016
Specified cycle count over device lifetime:  50000
Accumulated start-stop cycles:  59
Specified load-unload count over device lifetime:  600000
Accumulated load-unload cycles:  1824
Elements in grown defect list: 26

Vendor (Seagate Cache) information
  Blocks sent to initiator = 41498802640650240

Error counter log:
           Errors Corrected by           Total   Correction     Gigabytes    Total
               ECC          rereads/    errors   algorithm      processed    uncorrected
           fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
read:          0       18         0        18    7775767     106993.386  3
write:         0     1023         0      1023    5463158     125349.325  2
verify:        0        0         0         0     174824          0.000  0

Non-medium error count:        0

SMART Self-test log
Num  Test              Status                 segment  LifeTime  LBA_first_err [SK ASC ASQ]
     Description                              number   (hours)
# 1  Background short  Completed                   -   43177                 - [-   -    -]
# 2  Background short  Completed                   -   43153                 - [-   -    -]


Code:
# zpool status -v
  pool: PrimaryPool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: resilvered 1.41T in 1 days 08:35:23 with 0 errors on Sun Aug 20 08:11:50 2023
config:

        NAME                                            STATE     READ WRITE CKSUM
        PrimaryPool                                     ONLINE       0     0 0
          mirror-0                                      ONLINE       0     0 0
            gptid/d7476d46-32ca-11ec-b815-002590f52cc2  ONLINE       0     0 0
            gptid/d8d6aa36-32ca-11ec-b815-002590f52cc2  ONLINE       0     0 0
          mirror-1                                      ONLINE       0     0 0
            gptid/d9a6f5dc-32ca-11ec-b815-002590f52cc2  ONLINE       0     0 0
            gptid/db71bcb5-32ca-11ec-b815-002590f52cc2  ONLINE       0     0 0
          mirror-2                                      ONLINE       0     0 0
            gptid/d8b2f42f-32ca-11ec-b815-002590f52cc2  ONLINE       0     0 0
            gptid/d96847a9-32ca-11ec-b815-002590f52cc2  ONLINE       0     0 0
          mirror-3                                      ONLINE       0     0 0
            gptid/d9fb7757-32ca-11ec-b815-002590f52cc2  ONLINE       0     0 0
            gptid/da1e1121-32ca-11ec-b815-002590f52cc2  ONLINE       0     0 0
          mirror-4                                      ONLINE       0     0 0
            gptid/9fd0872d-8f64-11ec-8462-002590f52cc2  ONLINE       0     0 0
            gptid/9ff0f041-8f64-11ec-8462-002590f52cc2  ONLINE       0     0 0
          mirror-5                                      ONLINE       0     0 0
            gptid/14811777-1b6d-11ed-8423-ac1f6be66d76  ONLINE       0     0 0
            gptid/0cd1e905-3c2e-11ee-96af-ac1f6be66d76  ONLINE       0     0 0
          mirror-6                                      ONLINE       0     0 0
            gptid/749a1891-1b5c-11ee-941f-ac1f6be66d76  ONLINE       0     0 0
            gptid/c774316e-3c2c-11ee-96af-ac1f6be66d76  ONLINE      34     0    26
        spares
          gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76    AVAIL
          gptid/0d56b97d-1e91-11ed-a6aa-ac1f6be66d76    AVAIL

errors: No known data errors

  pool: boot-pool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:01:06 with 0 errors on Sat Aug 19 03:46:06 2023


Code:
# glabel status
                                      Name  Status  Components
gptid/db71bcb5-32ca-11ec-b815-002590f52cc2     N/A  da1p2
gptid/d9a6f5dc-32ca-11ec-b815-002590f52cc2     N/A  da0p2
gptid/9fd0872d-8f64-11ec-8462-002590f52cc2     N/A  da4p2
gptid/d8b2f42f-32ca-11ec-b815-002590f52cc2     N/A  da6p2
gptid/d9fb7757-32ca-11ec-b815-002590f52cc2     N/A  da8p2
gptid/9ff0f041-8f64-11ec-8462-002590f52cc2     N/A  da5p2
gptid/d96847a9-32ca-11ec-b815-002590f52cc2     N/A  da7p2
gptid/14811777-1b6d-11ed-8423-ac1f6be66d76     N/A  da3p2
gptid/da1e1121-32ca-11ec-b815-002590f52cc2     N/A  da2p2
gptid/d7476d46-32ca-11ec-b815-002590f52cc2     N/A  da10p2
gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76     N/A  da12p2
gptid/0d56b97d-1e91-11ed-a6aa-ac1f6be66d76     N/A  da13p2
gptid/749a1891-1b5c-11ee-941f-ac1f6be66d76     N/A  da15p2
gptid/d8d6aa36-32ca-11ec-b815-002590f52cc2     N/A  da11p2
gptid/0db68b72-32a7-11ec-8c36-002590f52cc2     N/A  da16p1
gptid/c774316e-3c2c-11ee-96af-ac1f6be66d76     N/A  da9p2
gptid/0cd1e905-3c2e-11ee-96af-ac1f6be66d76     N/A  da14p2
gptid/0cbc0474-3c2e-11ee-96af-ac1f6be66d76     N/A  da14p1
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
The drive in question looks like one you just installed. However, run a scrub; if the pool has no new errors, run zpool clear PrimaryPool to clear the errors. If they come back, then you have a problem. You can also post the output of smartctl -x /dev/da9 to see if any additional relevant information is presented. You may have received a bad drive, but do not assume there is a problem until you have run all the proper tests.
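For reference, the commands being suggested, with the pool and device names as they appear earlier in this thread:
Code:
# Start a scrub of the pool
zpool scrub PrimaryPool
# Watch progress and look for new READ/WRITE/CKSUM errors
zpool status -v PrimaryPool
# If the scrub comes back clean, reset the error counters
zpool clear PrimaryPool
# Extended SMART/SAS report for the suspect drive
smartctl -x /dev/da9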
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
The drive in question looks like one you just installed. However, run a scrub; if the pool has no new errors, run zpool clear PrimaryPool to clear the errors. If they come back, then you have a problem. You can also post the output of smartctl -x /dev/da9 to see if any additional relevant information is presented. You may have received a bad drive, but do not assume there is a problem until you have run all the proper tests.
Alright, running scrub now then will post reports and will clear errors.
Worst case I just pop in a new one. Sucks to have to resilver again though. But at least these drives are like under $20 each on ebay :cool:
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
Hm, so it looks like it killed the scrub, marked the drive as faulted, and is currently resilvering in a spare... nice.
Code:
 zpool status -v
  pool: PrimaryPool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sun Aug 20 19:59:21 2023
        832G scanned at 417M/s, 766G issued at 384M/s, 19.2T total
        34.2G resilvered, 3.90% done, 13:58:53 to go
config:

        NAME                                              STATE     READ WRITE CKSUM
        PrimaryPool                                       DEGRADED     0     0   0
          mirror-0                                        ONLINE       0     0   0
            gptid/d7476d46-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
            gptid/d8d6aa36-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
          mirror-1                                        ONLINE       0     0   0
            gptid/d9a6f5dc-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
            gptid/db71bcb5-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
          mirror-2                                        ONLINE       0     0   0
            gptid/d8b2f42f-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
            gptid/d96847a9-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
          mirror-3                                        ONLINE       0     0   0
            gptid/d9fb7757-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
            gptid/da1e1121-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
          mirror-4                                        ONLINE       0     0   0
            gptid/9fd0872d-8f64-11ec-8462-002590f52cc2    ONLINE       0     0   0
            gptid/9ff0f041-8f64-11ec-8462-002590f52cc2    ONLINE       0     0   0
          mirror-5                                        ONLINE       0     0   0
            gptid/14811777-1b6d-11ed-8423-ac1f6be66d76    ONLINE       0     0   0
            gptid/0cd1e905-3c2e-11ee-96af-ac1f6be66d76    ONLINE       0     0   0
          mirror-6                                        DEGRADED     0     0   0
            gptid/749a1891-1b5c-11ee-941f-ac1f6be66d76    ONLINE       0     0   0
            spare-1                                       DEGRADED     0     0   0
              gptid/c774316e-3c2c-11ee-96af-ac1f6be66d76  FAULTED    105     0 117  too many errors
              gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76  ONLINE       0     0   0  (resilvering)
        spares
          gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76      INUSE     currently in use
          gptid/0d56b97d-1e91-11ed-a6aa-ac1f6be66d76      AVAIL

errors: No known data errors

  pool: boot-pool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:01:06 with 0 errors on Sat Aug 19 03:46:06 2023
config:
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Well da9 was a well used drive with over 40,000 hours on it. Do you have a lot of old drives?
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
Well da9 was a well used drive with over 40,000 hours on it. Do you have a lot of old drives?
They are all used drives so they will vary, but for the most part yeah they are all high hours lol
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
They are all used drives so they will vary, but for the most part yeah they are all high hours lol
There is a reason for that. I think you just found out why.
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
There is a reason for that. I think you just found out why.
Haha, overall I've had good luck in general considering the number of drives I have.
I realistically have done maybe 3 replacements in about 2 years, until these recent events. And it's like, whatever, $20, just order a new one.

I can't say I ever expected them to be reliable for long term though. But now I really want that, plus more space, hence my whole reason for wanting to simply upgrade the pool to newer, non-used drives. Figured I'd just bite the bullet soon.
But in the meantime, I am a bit nervous. The 2-way mirror vdevs are sketching me out, and the constant resilvering is making me worry a lot. I think I will need 3-way mirrors when I upgrade for better peace of mind.
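(For what it's worth, and as I understand it, a 2-way mirror can be grown into a 3-way mirror just by attaching a third disk to the existing vdev; I believe the GUI exposes this too, but the underlying operation is roughly this, with placeholder device names:)
Code:
# Attach the new disk to the vdev that already contains the existing member, making it a 3-way mirror
zpool attach PrimaryPool gptid/<existing-member> gptid/<new-disk>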
I have a backup of my most important data, but I still would like to minimize the risk of losing my entire pool here.
Once I get this settled, I'd ideally like to get something low power and figure out a way to attach a few 20TB SATA drives together and use it as a backup system for this entire pool.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Don't stress out if you have your important data backed up. You could power off your NAS until your new system is ready. Resilvering all the time puts more stress on your other drives as well. It might be better to get that new system ready to power on.
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
Don't stress out if you have your important data backed up. You could power off your NAS until your new system is ready. Resilvering all the time puts more stress on your other drives as well. It might be better to get that new system ready to power on.
The majority of my daily data is still on the NAS; yes, the most important pieces are backed up, but I still use it daily.

As for a new system, it's going to be a little while until I can gather some more savings to do it properly. It's not going to be a new system, just new drives; this already is a new system, I built it all.
I will likely purchase 3x 20TB drives and add them in as a 3-way mirror, and replace a handful of the drives. But I think I will need another 3 to replace the full pool, if not 9 total. At about $370 a drive it gets kind of insane; I don't have $2,000-3,300 to drop on drives upfront right now, unfortunately, or I'd just bite the bullet and do it.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Most of us don't have that kind of money to drop on a project like this unless it were a business. Well I have the money, I just choose to use it for fuel, food, fun. Oh, and a 65" OLED TV either tomorrow or Friday. Wife said okay since our current TV has a lot of dark spots from the backlighting going out.
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
Something weird just happened. So I took the server out, and in the process, because it's so heavy, I took all the drives out (kept them in order so they all go back into the same slots). I was able to plug in my other HBA, so that's nice.

Upon booting the machine back up though, my pool shows healthy, not degraded anymore. My first thought was that the drive that was faulted had maybe been mis-seated. Anyway, I checked zpool status -v, and to my surprise all drives showed online.
However, the spare is still showing as currently in use, so both drives in that vdev show healthy, plus the spare is still attached to that vdev.

Code:
 zpool status -v
  pool: PrimaryPool
 state: ONLINE
  scan: scrub in progress since Thu Aug 24 11:53:12 2023
        136G scanned at 364M/s, 43.5G issued at 116M/s, 19.2T total
        0B repaired, 0.22% done, 1 days 23:48:08 to go
config:

        NAME                                              STATE     READ WRITE CKSUM
        PrimaryPool                                       ONLINE       0     0   0
          mirror-0                                        ONLINE       0     0   0
            gptid/d7476d46-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
            gptid/d8d6aa36-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
          mirror-1                                        ONLINE       0     0   0
            gptid/d9a6f5dc-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
            gptid/db71bcb5-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
          mirror-2                                        ONLINE       0     0   0
            gptid/d8b2f42f-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
            gptid/d96847a9-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
          mirror-3                                        ONLINE       0     0   0
            gptid/d9fb7757-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
            gptid/da1e1121-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
          mirror-4                                        ONLINE       0     0   0
            gptid/9fd0872d-8f64-11ec-8462-002590f52cc2    ONLINE       0     0   0
            gptid/9ff0f041-8f64-11ec-8462-002590f52cc2    ONLINE       0     0   0
          mirror-5                                        ONLINE       0     0   0
            gptid/14811777-1b6d-11ed-8423-ac1f6be66d76    ONLINE       0     0   0
            gptid/0cd1e905-3c2e-11ee-96af-ac1f6be66d76    ONLINE       0     0   0
          mirror-6                                        ONLINE       0     0   0
            gptid/749a1891-1b5c-11ee-941f-ac1f6be66d76    ONLINE       0     0   0
            spare-1                                       ONLINE       0     0   0
              gptid/c774316e-3c2c-11ee-96af-ac1f6be66d76  ONLINE       0     0   0
              gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76  ONLINE       0     0   0
        spares
          gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76      INUSE     currently in use
          gptid/0d56b97d-1e91-11ed-a6aa-ac1f6be66d76      AVAIL

errors: No known data errors

  pool: boot-pool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:01:06 with 0 errors on Sat Aug 19 03:46:06 2023
config:


That being said, I am currently running a scrub again. It seems a little too weird to me. But maybe the drive was just seated weird or something.
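If the scrub comes back clean and da9 stays healthy, my understanding is that the in-use spare can be put back to AVAIL by detaching it from that mirror (gptid taken from the zpool status above):
Code:
# Detach the in-use hot spare so it returns to the spares list
zpool detach PrimaryPool gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76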
 