mirror degraded

gabri (Cadet, joined Jan 31, 2023, 4 messages)
Hello there,

I have a TrueNAS system that originally had 2 mirrors consisting of 2 disks each:

mirror 1 with 2 x 3 TB disks
mirror 2 with 2 x 500 GB disks

They are all part of the same pool.

Mirror 2 then degraded, as 1 of the 2 disks failed. That 500 GB disk was replaced with a 2 TB disk.
After that, the remaining 500 GB disk also failed, so mirror 2 had 1 x 2 TB disk online and 1 x 500 GB disk faulted.

The failed 500 GB disk was replaced with a 3 TB disk, but mirror 2 was lost.
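For reference, this is how I can dump the current layout from the shell (just a small sketch; "tank" is a placeholder for my real pool name):

# show every vdev, which disks are ONLINE/FAULTED, and any resilver or removal activity
zpool status -v tank

# map the gptid/... labels that ZFS reports to the adaX device names (FreeBSD / CORE)
glabel status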

If I choose Extend on the 2 TB ada4 disk (to attach another disk to it), it gives me this error:

Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 355, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 391, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 981, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool_/attach_disk.py", line 84, in attach
    await job.wrap(extend_job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 487, in wrap
    raise CallError(subjob.exception)
middlewared.service_exception.CallError: [EFAULT] concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 220, in extend
    i['target'].attach(newvdev)
  File "libzfs.pyx", line 402, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 220, in extend
    i['target'].attach(newvdev)
  File "libzfs.pyx", line 2117, in libzfs.ZFSVdev.attach
libzfs.ZFSException: /dev/gptid/e35f0d64-c113-11ee-9a6d-78e7d1ca5e73 is busy, or device removal is in progress

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 246, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 111, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 985, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 223, in extend
    raise CallError(str(e), e.code)
middlewared.service_exception.CallError: [EZFS_BADDEV] /dev/gptid/e35f0d64-c113-11ee-9a6d-78e7d1ca5e73 is busy, or device removal is in progress
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 355, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 386, in __run_body
    rv = await self.middleware._call_worker(self.method_name, *self.args, job={'id': self.id})
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1250, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1169, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1152, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [EZFS_BADDEV] /dev/gptid/e35f0d64-c113-11ee-9a6d-78e7d1ca5e73 is busy, or device removal is in progress


Basically I'm left with a new 3 TB disk that sits outside mirror 1 (which is made up of 2 x 3 TB disks), and even if I try to extend mirror 1 with the new 3 TB disk, it errors out and won't let me.
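The error says the device is busy or a device removal is in progress, so maybe a vdev removal got started when mirror 2 fell apart? This is how I understand it could be checked from the shell (a sketch, again assuming the pool is called "tank"):

# an in-progress top-level vdev removal shows up as a "remove:" line in the status output
zpool status tank

# "freeing" stays non-zero while space from a removed vdev is still being evacuated
zpool get freeing tank

# if a removal really is running and shouldn't be, the man page says it can be cancelled
# (I have not tried this myself)
zpool remove -s tank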

How can I fix this?
Right now there is mirror 1 with 2 x 3 TB disks, the ada4 2 TB disk sitting on its own, and the new 3 TB disk that I can't add anywhere.

How do I solve it?
Can't I merge the 2 vdevs?

Or do I have to rebuild mirror 2, and if so, how? Do I have to add another 2 TB disk, or is there a way to attach the 3 TB disk to the lone 2 TB disk (as sketched below)?
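If it matters, my understanding is that from the shell this would normally be a plain attach of the new disk to the lone 2 TB disk, something like the following (the pool name and gptid names are placeholders, not my real ones, and I know the GUI normally takes care of the partition layout for me):

# attach the new 3 TB disk's ZFS partition to the lone 2 TB disk,
# turning the single-disk stripe back into a two-way mirror
zpool attach tank gptid/<existing-2tb-partition> gptid/<new-3tb-partition>

# then watch the resilver; both disks should appear together under a mirror vdev
zpool status -v tank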

thank you
 