Can't remove special Metadata raid-0 from pool

oumpa31

Patron
Joined
Apr 7, 2015
Messages
253
I finally figured out my issue and what was going on with my server. The special metadata vdev has read and write problems, but I can't seem to remove it from my pool so I can replace it. I have even tried exporting the pool and re-importing it, and still no change. Is there a way to do it in shell?
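This is roughly what I mean by doing it in shell, just as a sketch; "tank" stands in for my real pool name and the special vdev label would come from zpool status:

```python
# Sketch of the shell-level removal attempt (run as root on the TrueNAS box).
# "tank" and "mirror-2" are placeholders for the real pool name and for the
# special vdev's name/GUID as reported by `zpool status`.
import subprocess

POOL = "tank"               # placeholder pool name
SPECIAL_VDEV = "mirror-2"   # placeholder special vdev label from `zpool status`

# Show the current layout so the special vdev's label/GUID can be read off.
subprocess.run(["zpool", "status", "-v", POOL], check=True)

# Attempt the removal; on a pool that also has raidz top-level vdevs this
# presumably hits the same "invalid config; all top-level vdevs must have
# the same sector size and not be raidz" check that the GUI reports.
subprocess.run(["zpool", "remove", POOL, SPECIAL_VDEV], check=True)
```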
[Screenshot of the error: 1650073435615.png]

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 238, in __zfs_vdev_operation
    op(target, *args)
  File "nvpair.pxi", line 404, in items
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 238, in __zfs_vdev_operation
    op(target, *args)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 251, in impl
    getattr(target, op)()
  File "libzfs.pyx", line 2161, in libzfs.ZFSVdev.remove
libzfs.ZFSException: invalid config; all top-level vdevs must have the same sector size and not be raidz.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 114, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1265, in nf
    return func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 287, in remove
    self.detach_remove_impl('remove', name, label, options)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 254, in detach_remove_impl
    self.__zfs_vdev_operation(name, label, impl)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 240, in __zfs_vdev_operation
    raise CallError(str(e), e.code)
middlewared.service_exception.CallError: [EZFS_INVALCONFIG] invalid config; all top-level vdevs must have the same sector size and not be raidz.
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 423, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 459, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1261, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1129, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool.py", line 1232, in remove
    await self.middleware.call('zfs.pool.remove', pool['name'], found[1]['guid'])
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1318, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1283, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1289, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1212, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1186, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [EZFS_INVALCONFIG] invalid config; all top-level vdevs must have the same sector size and not be raidz.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Well - one reason is that you haven't given us any hardware details or pool design to look at.
See the forum rules - they're at the top of the page.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Notwithstanding that we don't know much about your pool (data raidz# + special mirror?), the error says it all: you can only remove vdevs if the pool is exclusively made of mirrors—no raidz. But, most importantly: you cannot remove the metadata 'special' vdev from this pool! Metadata is part and parcel of the data; without the special vdev, the pool is dead.

If the drives are failing, plug in new drive(s) and use the GUI to replace them.
To change the pool layout: back up, destroy, rebuild, and restore.
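If you prefer the shell, the replacement is roughly this - a sketch only, with placeholder pool and disk names; the GUI does the same thing and is the supported way on TrueNAS:

```python
# Sketch of replacing a failing member of the special mirror from the shell.
# All names are placeholders: "tank" = pool, and the two /dev/disk/by-id paths
# stand in for the failing SSD and its replacement.
import subprocess

POOL = "tank"
FAILING_DISK = "/dev/disk/by-id/old-special-ssd"   # placeholder device path
NEW_DISK = "/dev/disk/by-id/new-special-ssd"       # placeholder device path

# Confirm which special-vdev member is reporting the read/write errors.
subprocess.run(["zpool", "status", "-v", POOL], check=True)

# Replace it in place; ZFS resilvers the special mirror onto the new disk.
subprocess.run(["zpool", "replace", POOL, FAILING_DISK, NEW_DISK], check=True)

# Watch the resilver progress.
subprocess.run(["zpool", "status", POOL], check=True)
```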
 

oumpa31

Patron
Joined
Apr 7, 2015
Messages
253
@NugentS Sorry, it had been a long day. The pool consists of a 12-wide RAIDZ3 vdev of 14 TB drives, a 12-wide RAIDZ3 vdev of 4 TB drives, and a special metadata vdev set up as a mirror. When I was adding it, I tried with 3 drives and it only gave me the option of a mirror.
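For the record, that layout written out as a create command looks roughly like this (a sketch only; the disk names are made up and the real pool was built through the GUI):

```python
# Sketch of the pool layout described above, spelled out as a zpool create
# command. Pool name and disk paths are placeholders; do not run this as-is.
big = [f"/dev/disk/by-id/14tb-{i}" for i in range(1, 13)]      # 12 x 14 TB drives
small = [f"/dev/disk/by-id/4tb-{i}" for i in range(1, 13)]     # 12 x 4 TB drives
special = ["/dev/disk/by-id/ssd-1", "/dev/disk/by-id/ssd-2"]   # 2 TB SSD mirror

cmd = (["zpool", "create", "tank"]
       + ["raidz3"] + big                   # first top-level vdev: 12-wide RAIDZ3
       + ["raidz3"] + small                 # second top-level vdev: 12-wide RAIDZ3
       + ["special", "mirror"] + special)   # metadata special vdev as a mirror

print(" ".join(cmd))   # print only; building the real pool is done in the GUI
```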
@Etorix Thank you for that information. I guess I'll be disconnecting that pool and bringing the backup pool over from my Core backup to rebuild the pool a different way.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
TrueNAS may not allow raidz# for special vdevs, but you can have 3-way (or 4-way) mirrors to have more redundancy in the special vdev. SSDs reportedly tend to fail abruptly, and it would "only" take the two drives in your mirror going "poof" in short succession to lose all your data…

Is the special vdev made of consumer drives which are possibly not up to the workload?
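From the shell, widening the special mirror to three ways is roughly a zpool attach against one of its current members - a sketch only, with placeholder pool and disk names:

```python
# Sketch of growing the 2-way special mirror into a 3-way mirror.
# Placeholders: "tank" = pool, EXISTING = a current special-mirror SSD,
# NEW = the SSD being added as the third mirror member.
import subprocess

POOL = "tank"
EXISTING = "/dev/disk/by-id/special-ssd-1"   # placeholder: current member
NEW = "/dev/disk/by-id/special-ssd-3"        # placeholder: third SSD

# Attaching to an existing member adds another side to that mirror, so the
# special vdev becomes a 3-way mirror once the resilver finishes.
subprocess.run(["zpool", "attach", POOL, EXISTING, NEW], check=True)
subprocess.run(["zpool", "status", POOL], check=True)
```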
 

oumpa31

Patron
Joined
Apr 7, 2015
Messages
253
Etorix said:
TrueNAS may not allow raidz# for special vdevs, but you can have 3-way (or 4-way) mirrors to have more redundancy in the special vdev. SSDs reportedly tend to fail abruptly, and it would "only" take the two drives in your mirror going "poof" in short succession to lose all your data…

Is the special vdev made of consumer drives which are possibly not up to the workload?
They were a pair of Micron 1100 2 TB SSDs I had left over from an editing pool I had built, so I was hoping they would help speed up the giant pool just a little bit. Tomorrow, on my day off, my plan is to disconnect the pool, move my backup pool over to my main server, do some reconfiguring of those drives, and not put any special vdevs on the pool.
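The move itself should just be an export on the backup box and an import on the main one, something like this sketch (placeholder pool name; on TrueNAS the GUI's Export/Disconnect and Import Pool actions do the same job and keep the middleware in the loop):

```python
# Sketch of moving the backup pool between systems from the shell.
# "backup" is a placeholder pool name.
import subprocess

BACKUP_POOL = "backup"

# On the backup system (or via the GUI's Export/Disconnect):
subprocess.run(["zpool", "export", BACKUP_POOL], check=True)

# After the disks are in the main server: list importable pools, then import.
subprocess.run(["zpool", "import"])                            # shows what is importable
subprocess.run(["zpool", "import", BACKUP_POOL], check=True)   # brings the pool online
```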
 