drive offline but no member disk listed for replacement

theaddies

Contributor
Joined
Mar 28, 2015
Messages
105
I have a 10 x 2TB RAIDZ2 pool and one of its drives, ada8, failed. I swapped out the failed drive for a new one. Now when I go to replace the disk, the "member disk" list is blank. The disk I need to replace is ada8, but it is not listed. I don't know what to do from here.
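For reference, the attached zpoolstatus.txt and camcontrol.txt came from roughly the following commands ("tank" stands in for my actual pool name; glabel isn't attached, but it is handy for matching the gptid labels in zpool status to adaX names):

  zpool status -v tank    # vdev tree; a failed/offline member typically shows as UNAVAIL or REMOVED with a gptid/... label
  camcontrol devlist      # SATA/SAS devices FreeBSD currently sees, with their adaX names
  glabel status           # maps the gptid/... labels to adaX device names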
 

Attachments

  • Capture.JPG (51.9 KB)
  • zpoolstatus.txt (1.7 KB)
  • camcontrol.txt (960 bytes)


Bikerchris

Patron
Joined
Mar 22, 2020
Messages
210
I'd normally add the replacement drive to the machine, then select the faulty drive and replace it. Once the replace is complete, remove the faulty drive.
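If you end up doing it from the shell rather than the GUI, the same idea looks roughly like this (pool and device names are examples only; on TrueNAS the GUI replace is the supported route, since the middleware handles the partitioning and gptid labels for you):

  zpool status tank                            # note the gptid/... label of the faulted member
  zpool replace tank gptid/<old-label> ada10   # rebuild onto the newly added disk (ada10 is an example name)
  zpool status tank                            # wait for the resilver to finish, then pull the old disk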
 

theaddies

Contributor
Joined
Mar 28, 2015
Messages
105
I'd normally add the replacement drive to the machine, then select the faulty drive and replace it. Once the replace is complete, remove the faulty drive.
I wish I had done that. Perhaps I can go back and do it that way; I will try. I'm out of SATA ports, but I could put the new drive on USB 3 and then move it to SATA after removing the bad drive.
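Before starting, a quick sanity check that the USB-attached disk is actually visible (device names are examples; USB disks usually show up as daX rather than adaX on FreeBSD):

  camcontrol devlist     # the USB drive should appear as a new daX entry
  geom disk list da0     # check the size/serial to confirm it is the new 2 TB disk (da0 is a placeholder)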
 

theaddies

Contributor
Joined
Mar 28, 2015
Messages
105
I have a 10 x 2TB RAIDZ2 pool and one of its drives, ada8, failed. I swapped out the failed drive for a new one. Now when I go to replace the disk, the "member disk" list is blank. The disk I need to replace is ada8, but it is not listed. I don't know what to do from here.
I get the error below. I thought I might still be able to replace the drive even though the pool is degraded, but I cannot.

libzfs.ZFSException: pool I/O is currently suspended

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 277, in replace
    target.replace(newvdev)
  File "libzfs.pyx", line 391, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 277, in replace
    target.replace(newvdev)
  File "libzfs.pyx", line 2060, in libzfs.ZFSVdev.replace
libzfs.ZFSException: pool I/O is currently suspended

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 111, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 979, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 279, in replace
    raise CallError(str(e), e.code)
middlewared.service_exception.CallError: [EZFS_POOLUNAVAIL] pool I/O is currently suspended
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 975, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool_/replace_disk.py", line 122, in replace
    raise e
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool_/replace_disk.py", line 102, in replace
    await self.middleware.call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1256, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1221, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1227, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1154, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1128, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [EZFS_POOLUNAVAIL] pool I/O is currently suspended
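For anyone who finds this with the same message: "pool I/O is currently suspended" means ZFS has halted all I/O to the pool, usually after losing access to too many devices at once. A minimal way to check it and try to resume from the shell, assuming a pool named "tank":

  zpool status -x        # reports which pool is unhealthy and why it is suspended
  zpool clear tank       # try to clear the errors and resume I/O once the devices are back
  zpool status -v tank   # re-check; if it stays suspended, a reboot or export/import may be needed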
 

theaddies

Contributor
Joined
Mar 28, 2015
Messages
105
I don't really know what I did, but I rebooted and it has now resilvered and appears to work.
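For completeness, the resilver can be watched from the shell too, assuming the pool is named "tank":

  zpool status -v tank   # the "scan:" line shows resilver progress, then "resilvered ... with 0 errors" when done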
 

Bikerchris

Patron
Joined
Mar 22, 2020
Messages
210
I wish I had done that. Perhaps I can go back and do it that way; I will try. I'm out of SATA ports, but I could put the new drive on USB 3 and then move it to SATA after removing the bad drive.
Don't beat yourself up about it; there are plenty of "wish I had dones" in my config too. If possible, and for reliability, get an LSI HBA instead. Or, if you've got more than one pool, perhaps disconnect one of the others (not delete, of course) and use one of the ports it frees up.

Anyhow, I see it's resilvered now; very glad to hear it.
 