strikermed
Hey guys, a week or two ago one of my drives started throwing errors, so I went ahead and replaced it with a spare drive I had installed.
The spare started resilvering and nearly completed, but I wanted to wipe the old hard drive and run some SMART tests on it. When I reinserted the old drive, it took over again and the pool stayed degraded.
I have since taken the old drive offline and removed it, though I got errors when I tried. With the spare installed the pool is still degraded, and I'm not sure whether the spare has actually replaced the old disk. Under Pool Status there is a Spare section listing my new drive (da9p2) as unavailable. There is another section labeled SPARE, and under it are /dev/gptid/6085722-46cc-11e9-b4e3-a0369f5050d4, listed as offline, and da9p2, listed as online.
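In case it's useful, I believe the same layout can be checked from the shell with something like this (just a sketch; I'm assuming the pool is named tank, so substitute the real pool name from the GUI):
Code:
# Show the full vdev tree, including the SPARE grouping and any resilver/scrub activity
zpool status -v tank

# Map da9p2 to its gptid label so the two entries can be matched up
glabel status | grep da9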
When I try to remove da9p2 I get a long series of errors.
Under "Spare", if I try to remove da9p2 (the entry listed as unavailable), here is what I get:
Code:
Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 247, in __zfs_vdev_operation
    op(target, *args)
  File "libzfs.pyx", line 369, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 247, in __zfs_vdev_operation
    op(target, *args)
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 279, in <lambda>
    self.__zfs_vdev_operation(name, label, lambda target: target.remove())
  File "libzfs.pyx", line 1788, in libzfs.ZFSVdev.remove
libzfs.ZFSException: Pool busy; removal may already be in progress

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/concurrent/futures/process.py", line 239, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 97, in main_worker
    res = loop.run_until_complete(coro)
  File "/usr/local/lib/python3.7/asyncio/base_events.py", line 579, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 53, in _run
    return await self._call(name, serviceobj, methodobj, params=args, job=job)
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 45, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 45, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.7/site-packages/middlewared/schema.py", line 965, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 279, in remove
    self.__zfs_vdev_operation(name, label, lambda target: target.remove())
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 249, in __zfs_vdev_operation
    raise CallError(str(e), e.code)
middlewared.service_exception.CallError: [EZFS_BUSY] Pool busy; removal may already be in progress
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 130, in call_method
    io_thread=False)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1084, in _call
    return await methodobj(*args)
  File "/usr/local/lib/python3.7/site-packages/middlewared/schema.py", line 961, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/pool.py", line 1347, in remove
    await self.middleware.call('zfs.pool.remove', pool['name'], found[1]['guid'])
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1141, in call
    app=app, pipes=pipes, job_on_progress_cb=job_on_progress_cb, io_thread=True,
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1081, in _call
    return await self._call_worker(name, *args)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1101, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1036, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1010, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [EZFS_BUSY] Pool busy; removal may already be in progress
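If I'm reading the [EZFS_BUSY] error right, ZFS thinks a removal or resilver is still in progress on that vdev. I assume something like this would show whether anything is still running before I retry the removal (sketch only; pool name assumed to be tank):
Code:
# The "scan:" line reports whether a resilver or scrub is still running and how far along it is
zpool status tank | grep -A2 'scan:'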
How do I fix this? I'd like to get the pool back to a healthy state. I should mention this is a RAIDZ3 pool.
Ideally, I'd like to remove the spare drive, wipe it, and then resilver it as the permanent replacement for the bad drive.
I'd also like to be able to reinsert the drive that had errors and wipe it clean of any configuration and data.
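In case it helps frame the question, this is roughly the command sequence I think I'm after. It's only a sketch (pool name assumed to be tank, device names taken from the status output above, daXp2 is a placeholder for whichever drive gets pulled), and I haven't run any of it because I'd like confirmation first:
Code:
# Option A: make the spare's replacement permanent by detaching the old, offlined disk
# from the SPARE grouping; as I understand it, da9p2 then takes its place in the vdev
zpool detach tank /dev/gptid/6085722-46cc-11e9-b4e3-a0369f5050d4

# Option B: cancel the spare replacement instead by detaching the spare itself,
# which should return da9p2 to the available spares list
zpool detach tank da9p2

# Before reusing a pulled drive, clear any leftover ZFS labels
# (destructive; double-check the device name first)
zpool labelclear -f /dev/daXp2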
Help?