HDD is not going offline: "no valid replicas" error

BGGR

Cadet
Joined
Nov 28, 2020
Messages
3
When I try to take a disk offline, I get the error response below:

***************************************************
CallError
[EZFS_NOREPLICAS] no valid replicas

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 247, in __zfs_vdev_operation
op(target, *args)
File "libzfs.pyx", line 369, in libzfs.ZFS.__exit__
File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 247, in __zfs_vdev_operation
op(target, *args)
File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 263, in <lambda>
self.__zfs_vdev_operation(name, label, lambda target: target.offline())
File "libzfs.pyx", line 1806, in libzfs.ZFSVdev.offline
libzfs.ZFSException: no valid replicas

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.7/concurrent/futures/process.py", line 239, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 97, in main_worker
res = loop.run_until_complete(coro)
File "/usr/local/lib/python3.7/asyncio/base_events.py", line 587, in run_until_complete
return future.result()
File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 53, in _run
return await self._call(name, serviceobj, methodobj, params=args, job=job)
File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 45, in _call
return methodobj(*params)
File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 45, in _call
return methodobj(*params)
File "/usr/local/lib/python3.7/site-packages/middlewared/schema.py", line 965, in nf
return f(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 263, in offline
self.__zfs_vdev_operation(name, label, lambda target: target.offline())
File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 249, in __zfs_vdev_operation
raise CallError(str(e), e.code)
middlewared.service_exception.CallError: [EZFS_NOREPLICAS] no valid replicas
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 130, in call_method
io_thread=False)
File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1084, in _call
return await methodobj(*args)
File "/usr/local/lib/python3.7/site-packages/middlewared/schema.py", line 961, in nf
return await f(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/pool.py", line 1249, in offline
await self.middleware.call('zfs.pool.offline', pool['name'], found[1]['guid'])
File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1141, in call
app=app, pipes=pipes, job_on_progress_cb=job_on_progress_cb, io_thread=True,
File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1081, in _call
return await self._call_worker(name, *args)
File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1101, in _call_worker
return await self.run_in_proc(main_worker, name, args, job)
File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1036, in run_in_proc
return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1010, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [EZFS_NOREPLICAS] no valid replicas

**********************************************************************

Can anyone please help with this? This disk has put my pool into a DEGRADED state and I want to remove it from the pool.
The replace has already been performed, but this disk remains in the pool together with the new (replacing) one.
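
For reference, this is roughly what I would expect to run from the shell to check the layout and detach the old disk once the replace is really finished. The pool name "tank" and the GUID below are just placeholders, not values from my system:

# Show the pool layout, resilver progress and any "replacing" vdev
zpool status -v tank

# Once resilvering has finished, detach the old member of the
# replacing vdev by its device name or GUID (placeholder value here)
zpool detach tank 1234567890123456789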
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Please follow the forum rules and post some of the basic information we need to help you.
What version of FreeNAS/TrueNAS are you using?

Also, please submit a bug report for this problem if this is TrueNAS 12 or FreeNAS 11.3-U5.

Do you have a backup of your data already? It's always smart to make one before pulling hard drives out of your system; you could grab the wrong drive. It's happened before and will happen again.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You will see that message if offlining that drive would cause the pool to become unavailable.

Are you removing a drive from an online pool? What's the redundancy level of the pool? Has resilvering finished?
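
As a rough illustration (a throwaway file-backed pool, not your system), this is the same refusal you get whenever offlining a disk would leave no remaining copy of the data:

# Build a small two-way mirror from sparse files
truncate -s 1G /tmp/d1 /tmp/d2
zpool create testpool mirror /tmp/d1 /tmp/d2

# Offlining one side is fine; the mirror still has a healthy member
zpool offline testpool /tmp/d1

# Offlining the other side is refused with "no valid replicas",
# because the pool would have no readable copy of its data left
zpool offline testpool /tmp/d2

zpool destroy testpool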
 