All disks changed to DEGRADED after new disk setup

zhe

Dabbler
Joined
Nov 28, 2022
Messages
24
I replaced one disk and it resilvered all night.

This morning, though..... what happened?
My data is not lost yet; I can still read and write.
How can I bring the pool back ONLINE?

Should I run 'zpool clear' after the scan finishes?

root@truenas[~]# zpool status
pool: HomePool
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Mon Nov 28 23:54:38 2022
1.70T scanned at 85.7M/s, 1.58T issued at 79.6M/s, 4.19T total
71.0G resilvered, 37.70% done, 09:32:36 to go
config:

NAME STATE READ WRITE CKSUM
HomePool DEGRADED 0 0 0
raidz3-0 DEGRADED 0 0 0
a2a4471f-c7ef-4e31-a39c-71345f922ce8 DEGRADED 0 0 5.89K too many errors
e2c94101-3f15-435f-95b4-53f26b1f5e72 DEGRADED 0 0 5.89K too many errors
e96c2b49-98cc-4b72-b059-515f268ff346 DEGRADED 0 0 5.95K too many errors
e8382bbd-d369-49d7-b1e9-709fea3e6de6 DEGRADED 73.9K 2 5.97K too many errors (resilvering)
e9ddfd9a-997c-4806-859d-868aa8a8c2cb DEGRADED 0 0 5.95K too many errors
1ec87af4-7b0b-4e4a-a461-b48c537f8e44 DEGRADED 9.44K 29 5.95K too many errors (resilvering)
9e0c8a31-01eb-4766-95f3-f26e682380e3 DEGRADED 0 0 5.95K too many errors
b8035223-6b9c-4954-801a-07d461ee037b DEGRADED 332 0 5.95K too many errors (resilvering)
dba5b5d2-1b59-44bf-8144-d2b5cf9e4ac3 DEGRADED 0 0 5.95K too many errors
53d01a6b-cced-48db-8a1c-918c84652846 DEGRADED 39.9K 255 5.95K too many errors (resilvering)
0e2d892d-bfb7-47a6-8210-47054905a378 DEGRADED 37.0K 4 5.95K too many errors (resilvering)
2d44d0ab-02c5-4c72-8eac-bf607ca7c672 DEGRADED 0 0 5.95K too many errors
a2e69f62-a7a0-4a92-8c92-c509fca4b9fc DEGRADED 0 0 5.95K too many errors
615105ba-558f-4828-923d-eb8ba790fb22 DEGRADED 0 0 5.95K too many errors
replacing-14 DEGRADED 0 0 5.95K
16752858698199900939 UNAVAIL 0 0 0 was /dev/disk/by-partuuid/a64222d9-5a9f-4e5a-b7a4-3f295a866654
da148efe-31e2-4e0a-927d-1925378a89ed DEGRADED 0 0 0 too many errors (resilvering)
2d251bba-1faf-43c5-98c9-04c13df6f4d1 DEGRADED 0 0 5.94K too many errors
61d6cf52-c171-49ff-ae69-c45bf13c7059 DEGRADED 0 0 5.88K too many errors
eee02552-500a-4143-8776-36158ab2385f DEGRADED 0 0 5.88K too many errors
a726ac5f-4c98-4cea-9a4c-625e337d3c08 DEGRADED 0 0 5.88K too many errors
cf9010f9-05a7-4d5f-9491-f1363d22e590 DEGRADED 0 0 5.88K too many errors
12172b1d-8078-491f-8377-ca4c057c5811 DEGRADED 0 0 5.88K too many errors
ff8f4b49-4e61-4a73-b814-f96e94b925d2 DEGRADED 0 0 5.88K too many errors
cache
b0f87678-043d-4b8d-b33c-3c71c3f58989 ONLINE 0 0 0

errors: 255956 data errors, use '-v' for a list

pool: boot-pool
state: ONLINE
status: Some supported and requested features are not enabled on the pool.
The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
scan: scrub repaired 0B in 00:00:06 with 0 errors on Mon Nov 28 03:45:08 2022
config:

NAME STATE READ WRITE CKSUM
boot-pool ONLINE 0 0 0
sda3 ONLINE 0 0 0

errors: No known data errors
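As a sanity check, the ETA on the scan line is just (total - issued) / issue rate. A quick sketch with the numbers copied from the HomePool output above (assuming ZFS's T and M suffixes are binary units):

```shell
# ETA cross-check for the resilver scan line above:
# 1.58T issued of 4.19T total at 79.6M/s.
total_t=4.19; issued_t=1.58; rate_mbs=79.6
# Assuming 1 TiB = 1048576 MiB.
eta_s=$(awk -v t="$total_t" -v i="$issued_t" -v r="$rate_mbs" \
    'BEGIN { printf "%d", (t - i) * 1048576 / r }')
printf '%02d:%02d:%02d\n' "$((eta_s / 3600))" "$((eta_s % 3600 / 60))" "$((eta_s % 60))"
```

This lands within a minute of the 09:32:36 shown; the printed figures are rounded.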

-------------------------------------------------------------------------------------------------
GUI error is

Core files for the following executables were found:

/usr/bin/rrdtool (Tue Nov 29 12:18:39 2022), /usr/bin/rrdtool (Tue Nov 29 12:25:04 2022), /usr/bin/rrdtool (Tue Nov 29 12:30:53 2022), /usr/bin/rrdtool (Tue Nov 29 12:30:30 2022), /usr/bin/rrdtool (Tue Nov 29 12:30:08 2022), /usr/bin/rrdtool (Tue Nov 29 12:30:32 2022), /usr/bin/rrdtool (Tue Nov 29 12:31:17 2022), /usr/bin/rrdtool (Tue Nov 29 12:31:17 2022), /usr/bin/rrdtool (Tue Nov 29 12:34:44 2022), /usr/bin/rrdtool (Tue Nov 29 12:34:21 2022), /usr/bin/rrdtool (Tue Nov 29 12:34:15 2022), /usr/bin/rrdtool (Tue Nov 29 12:34:15 2022), /usr/bin/rrdtool (Tue Nov 29 12:39:57 2022), /usr/bin/rrdtool (Tue Nov 29 12:38:23 2022), /usr/bin/rrdtool (Tue Nov 29 12:38:24 2022), /usr/bin/rrdtool (Tue Nov 29 12:38:25 2022), /usr/bin/rrdtool (Tue Nov 29 12:38:23 2022), /usr/bin/rrdtool (Tue Nov 29 12:39:57 2022), /usr/bin/rrdtool (Tue Nov 29 12:40:53 2022), /usr/bin/rrdtool (Tue Nov 29 12:40:00 2022), /usr/bin/rrdtool (Tue Nov 29 12:42:53 2022), /usr/bin/rrdtool (Tue Nov 29 12:40:59 2022), /usr/bin/rrdtool (Tue Nov 29 12:43:02 2022), /usr/bin/rrdtool (Tue Nov 29 12:43:01 2022), /usr/bin/rrdtool (Tue Nov 29 12:43:01 2022), /usr/bin/rrdtool (Tue Nov 29 12:43:01 2022), /usr/bin/rrdtool (Tue Nov 29 12:43:04 2022), /usr/bin/rrdtool (Tue Nov 29 12:44:06 2022), /usr/bin/rrdtool (Tue Nov 29 12:44:02 2022), /usr/bin/rrdtool (Tue Nov 29 12:44:58 2022), /usr/bin/rrdtool (Tue Nov 29 12:44:05 2022), /usr/bin/rrdtool (Tue Nov 29 12:45:42 2022), /usr/bin/rrdtool (Tue Nov 29 12:45:39 2022), /usr/bin/rrdtool (Tue Nov 29 12:45:39 2022), /usr/bin/rrdtool (Tue Nov 29 12:45:53 2022), /usr/bin/rrdtool (Tue Nov 29 12:46:26 2022), /usr/bin/rrdtool (Tue Nov 29 12:46:49 2022), /usr/bin/rrdtool (Tue Nov 29 12:47:05 2022), /usr/bin/rrdtool (Tue Nov 29 12:47:42 2022), /usr/bin/rrdtool (Tue Nov 29 12:47:45 2022), /usr/bin/rrdtool (Tue Nov 29 12:48:42 2022), /usr/bin/rrdtool (Tue Nov 29 12:48:18 2022), /usr/bin/rrdtool (Tue Nov 29 12:49:15 2022), /usr/bin/rrdtool (Tue Nov 29 12:49:42 2022), /usr/bin/rrdtool 
(Tue Nov 29 12:50:07 2022), /usr/bin/rrdtool (Tue Nov 29 12:50:59 2022), /usr/bin/rrdtool (Tue Nov 29 12:51:43 2022), /usr/bin/rrdtool (Tue Nov 29 12:51:43 2022), /usr/bin/rrdtool (Tue Nov 29 12:52:04 2022), /usr/bin/rrdtool (Tue Nov 29 12:51:45 2022), /usr/bin/rrdtool (Tue Nov 29 12:51:45 2022), /usr/bin/rrdtool (Tue Nov 29 12:58:01 2022), /usr/bin/rrdtool (Tue Nov 29 12:57:40 2022), /usr/bin/rrdtool (Tue Nov 29 13:17:32 2022), /usr/bin/rrdtool (Tue Nov 29 13:19:24 2022), /usr/bin/rrdtool (Tue Nov 29 13:19:23 2022), /usr/bin/rrdtool (Tue Nov 29 13:22:37 2022), /usr/bin/rrdtool (Tue Nov 29 13:38:01 2022), /usr/bin/rrdtool (Tue Nov 29 13:37:37 2022), /usr/bin/rrdtool (Tue Nov 29 13:36:53 2022), /usr/bin/rrdtool (Tue Nov 29 13:41:33 2022), /usr/bin/rrdtool (Tue Nov 29 13:59:58 2022), /usr/bin/rrdtool (Tue Nov 29 13:59:34 2022), /usr/bin/rrdtool (Tue Nov 29 13:59:14 2022), /usr/bin/rrdtool (Tue Nov 29 14:14:18 2022), /usr/bin/rrdtool (Tue Nov 29 14:14:42 2022), /usr/bin/rrdtool (Tue Nov 29 14:14:16 2022). Please create a ticket at https://jira.ixsystems.com/ and attach the relevant core files along with a system debug. Once the core files have been archived and attached to the ticket, they may be removed by running the following command in shell: 'rm /var/db/system/cores/*'.​


--------------------------------------
Also, the network graph no longer displays:

(screenshot attached: 1669703023865.png)

2022-11-29 14:19:39 (Asia/Shanghai)
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You have most likely lost at least some of the data in 255956 files:
errors: 255956 data errors, use '-v' for a list

You can get a list of those files as suggested by using zpool status -v HomePool

I think you should do your best to copy what you can off that pool and plan on a rebuild after you work out what you have going wrong with what appears to be the controller or something else quite major in the system.
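A sketch of one way to do that copy so a single unreadable file does not abort the whole run (the paths here are demo placeholders; on the real pool, reads from files in the error list fail with I/O errors and get logged):

```shell
# Sketch: copy a tree file-by-file so corrupt files are skipped and
# logged instead of aborting the run. SRC/DST/LOG are placeholders;
# point SRC at the pool dataset and DST at the rescue disk.
SRC=/tmp/rescue_demo_src
DST=/tmp/rescue_demo_dst
LOG=/tmp/rescue_failures.log
mkdir -p "$SRC" "$DST"; : > "$LOG"
echo "survivor" > "$SRC/good.txt"   # demo file standing in for real data
( cd "$SRC" && find . -type f ) | while read -r f; do
    mkdir -p "$DST/$(dirname "$f")"
    cp -- "$SRC/$f" "$DST/$f" 2>/dev/null || echo "$f" >> "$LOG"
done
```

rsync -a --partial from the dataset works too and can resume; either way, the point is to collect a list of the failures rather than stopping at the first I/O error.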

As already mentioned by @jgreco, you should share the hardware details or we can't do much more to help you
 


WN1X

Explorer
Joined
Dec 2, 2019
Messages
77
Complete hardware specs for your system?
 

zhe

Dabbler
Joined
Nov 28, 2022
Messages
24
Some info is here: link

I am continuing here with an update on what I have done:

1. The resilver finished, but a new resilver automatically started again from 0%.
2. zpool status still shows all disks DEGRADED.
3. Of my 16GB of RAM, only 3.3GB is free (80% used). Is this normal? If usage goes above 90%, I will have to upgrade the RAM to 64GB or more.

By the way, the hardware:
i5-7400T, 16GB RAM,
one 216GB SSD for the boot pool,
one 512GB SSD as cache for my data pool (named HomePool),
22 disks of 16TB each.

Software:
TrueNAS SCALE 22.02.4,
configured as RAIDZ3,
pool name HomePool,
datasets shared via SMB.

4. I have decided to copy out everything I can (about 3TB) and then reinstall TrueNAS.

Any better ideas? Thanks.
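On the RAM question in point 3: on ZFS, most "used" memory is normally the ARC read cache, which is released under memory pressure, so 80% used is expected rather than alarming. MemAvailable in /proc/meminfo is a better gauge than MemFree, with the caveat that the ZFS ARC on Linux is not always counted as reclaimable there:

```shell
# MemAvailable estimates memory usable without swapping; on ZFS-on-Linux
# systems the ARC is additionally reclaimable even when not counted here.
grep -E '^(MemTotal|MemAvailable):' /proc/meminfo
```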
 
Last edited:

zhe

Dabbler
Joined
Nov 28, 2022
Messages
24
5. Yes, some data and files are broken and cannot be copied out :(
 

zhe

Dabbler
Joined
Nov 28, 2022
Messages
24
What is PERC? Power? I have a 600W power supply.
 
Last edited:

zhe

Dabbler
Joined
Nov 28, 2022
Messages
24
I am copying the data out, but the speed is very low: sometimes up to 80M/s, sometimes down to 0K/s.

6. I will upgrade my RAM to 64GB in a few days.
--------
To be continued..... wait for me
 
Last edited:

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You were previously asked for details on both your specific model of disks and what controller was used to attach them in your system. This is required information to continue discussing this problem. If you are using a RAID controller, as @Daisuke notes, this is NOT an allowable configuration for TrueNAS. If you are using SMR disks, this is also NOT likely to end up successfully. Answering the questions that forum members ask is a critical step to getting useful answers. We cannot see your system and if we don't have your help to understand your NAS, you are unlikely to get an accurate answer.
 

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,041
@jgreco, he probably uses an appliance connected to a server running Scale, since he has 22 disks on raidz3. Unless he deals with some sort of storinator, which I doubt.

@zhe, like @jgreco mentioned, post the server specs and how you connect your disks, plus your exact disk model. You should know the difference between CMR and SMR; you're running a 22-disk array of 16TB drives, hopefully separated into 2x11-disk vdevs. I hope you picked CMR disks, for the money you spent on that array.
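On checking CMR vs SMR: 'smartctl -i /dev/sdX' prints the model string (the "Device Model" line), which you then match against the vendor's datasheet. A toy lookup for two example Seagate models; the classifications here are assumptions taken from Seagate's public datasheets, so verify your exact model:

```shell
# Toy model-to-recording-tech lookup. The model string comes from the
# "Device Model" line of smartctl -i output; the CMR/SMR classification
# below is assumed from Seagate's public datasheets - check your model.
model="ST16000NM001G"   # Exos X16 16TB (SATA)
case "$model" in
    ST16000NM*)  echo "Exos X16: CMR" ;;
    ST8000DM004) echo "Barracuda Compute 8TB: SMR" ;;
    *)           echo "unknown: check the vendor datasheet" ;;
esac
```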

I personally run 3x12 CMR 8TB disks (raidz2 array), which is borderline for array performance (one vdev on a R720xd server and two vdevs on a NetApp DS4246).

In my case, the H710 Mini PERC (flashed to IT mode) was seated properly, but the contacts were not clean. I fixed the issue after cleaning the contacts with alcohol, as suggested in link. Thanks @Samuel Tai. I was experiencing the same errors you are.
 
Last edited:

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
@jgreco, he probably uses an appliance connected to a server running Scale, since he has 22 disks on raidz3. Unless he deals with some sort of storinator, which I doubt.

I don't know what you mean by "an appliance." There are a limited number of ways to do this. We've seen evidence of 11 disks, a claim of 22 disks, and a really weird statement about a 600W PSU, which doesn't add up by my math.

This could be 22 disks in a standard 4U chassis (SC846) but that doesn't come with a PSU as low as 600W.

It could be attached using an HBA and an external SAS disk shelf, but that generally involves a host system with a larger PSU, often hosting the first dozen disks, and we're back to the "doesn't come with a PSU as low as 600W" issue.

The problem set reported suggests either SMR disks and/or use of a RAID controller, both of which are serious no-no's with TrueNAS. But the poster has not been forthcoming.
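That signature is easy to check mechanically: a single failing disk shows nonzero READ/WRITE counters on one member, while a shared-path fault (controller, cabling, power) smears errors across many members at once. A sketch that filters a saved status listing for members with nonzero READ or WRITE counters (the sample lines stand in for the real output):

```shell
# Filter a saved zpool status device listing for members with nonzero
# READ or WRITE counters. Columns: NAME STATE READ WRITE CKSUM.
# The sample below is a trimmed stand-in for the real output.
cat > /tmp/status_sample.txt <<'EOF'
a2a4471f DEGRADED 0 0 5.89K
e8382bbd DEGRADED 73.9K 2 5.97K
1ec87af4 DEGRADED 9.44K 29 5.95K
9e0c8a31 DEGRADED 0 0 5.95K
53d01a6b DEGRADED 39.9K 255 5.95K
EOF
# Members with a nonzero READ or WRITE column:
awk '$3 != "0" || $4 != "0" { print $1 }' /tmp/status_sample.txt
```

In this thread's output, five members show nonzero READ/WRITE counters at once, which is what points away from "one bad disk".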
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That's not an "appliance". TrueNAS is an appliance. The DS4246 is a SAS expander disk shelf. Even NetApp refers to it as a "Disk Shelf" --


Check JBOD Storage section

I'd also suggest that "JBOD Storage" is a bad term. Even if your usage of it is technically correct, industry usage tends to treat it as a bunch of individual filesystems (rather than disks), which matches up with the RAID controller terminology taking it to be a bunch of individually presented disks, often behind a bunch of RAID virtual drive abstractions. This is suuuuper-confusing to the ZFS crowd, where "JBOD" seems like the logical way to use a RAID controller to connect drives to the host. It is best not to use this as an interchangeable term for "Disk Shelf".
 

zhe

Dabbler
Joined
Nov 28, 2022
Messages
24
Yes, the disks I use are 16TB Seagate Exos X16 helium drives, which should be CMR.
My motherboard is an ASUS Prime Z270-P; I will check that RAID is off.
The disks connect through a PCIe x16 card that provides 24 SATA ports, with no RAID function.
It is a normal x86 tower PC, not a standard server.

Thanks a lot.

I am still copying data out..... too slow.......

To be continued...
 
Last edited:

zhe

Dabbler
Joined
Nov 28, 2022
Messages
24
I want to stop the resilvering.

Code:
root@truenas[~]# zpool detach HomePool /dev/disk/by-id/e8382bbd-d369-49d7-b1e9-709fea3e6de6

cannot detach /dev/disk/by-id/e8382bbd-d369-49d7-b1e9-709fea3e6de6: no such device in pool


Does anyone know how to stop this resilver that keeps restarting automatically?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
You need to use the device id as shown by zpool status.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
You need to use it without the /dev/... part, exactly as it is shown in the zpool status output. The system performs a literal string comparison; it is not resolving devices.
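A toy illustration of that literal match, using a member name copied from the status output earlier in the thread: the pool records the bare GUID, so the /dev/disk/by-id/... form never compares equal.

```shell
# The pool records the member name exactly as zpool status prints it,
# so the lookup is string equality, not device resolution.
status_name="e8382bbd-d369-49d7-b1e9-709fea3e6de6"
arg="/dev/disk/by-id/e8382bbd-d369-49d7-b1e9-709fea3e6de6"
[ "$arg" = "$status_name" ] || echo "no such device in pool"
# Passing the name exactly as printed compares equal:
[ "$(basename "$arg")" = "$status_name" ] && echo "match"
```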
 