Not entirely, because I'm still not replacing the entire pool.

The sooner the new drives come home, the safer your data will be.
Not entirely, because I'm still not replacing the entire pool.
However, maybe it'd be best for me to add the old drives to the current vdevs and turn all of them into 3-way mirrors.
That being said, I'm still trying to find information on how to properly add in the 20 TB drives while removing the other vdevs.
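For what it's worth, here is a rough sketch of what the 3-way-mirror idea looks like at the ZFS level. It assumes the pool is the PrimaryPool shown later in this thread, and the device names are placeholders, not real disks; on TrueNAS the Pool Status → disk → Extend action is the usual route, since it handles the partitioning for you.

```
# Sketch only - pool name from this thread, placeholder names for the disks.
# Attaching a third disk to an existing member turns that 2-way mirror vdev
# into a 3-way mirror; ZFS resilvers the new member in the background.
zpool attach PrimaryPool gptid/EXISTING-MEMBER-OF-mirror-0 gptid/SPARE-4TB-DISK

# Watch the resilver finish before touching anything else:
zpool status -v PrimaryPool
```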
How would this work though, if I am going to be removing the old vdevs? That's what is confusing me, I think, because the data needs to be moved onto the new drives so it is safe to remove the old drives.

First you expand the pool by adding a vdev made of the new drives.
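As a concrete illustration of that first step (a sketch only, assuming the new 20 TB disks show up as da20/da21, which are placeholder names, and skipping the partitioning the WebUI would normally do):

```
# Expand the pool with one new top-level vdev: a mirror of the two 20 TB drives.
# Existing data stays where it is; only new writes land on this vdev until
# something (such as a later vdev removal) forces data to move onto it.
zpool add PrimaryPool mirror da20 da21
```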
I'm really confused about what you think your configuration will be. If you make a 20TB Mirror and you are able to place 19.4 TB of data on it, you have no space left. That is assuming you can fit 19.4TB of data on it, I doubt it. So yes, please explain your desired end configuration, I think you posted it in these 5 pages of text but I didn't see it.

So when I put in these 20 TB drives, I can remove 8 of my 4 TB drives (2 vdevs) and I should still have my current ~5-6 TB of free space to use. Correct?
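Rough numbers, in case they help frame the capacity question (assuming 2-way mirrors, and note that 8 × 4 TB drives as 2-way mirrors is 4 vdevs, not 2):

```
# 1 advertised TB is roughly 0.91 TiB as reported by zpool/zfs list.
# 2 x 20 TB in a mirror        -> ~18.2 TiB usable in that one vdev
# 8 x 4 TB as 2-way mirrors    -> 4 vdevs x ~3.6 TiB = ~14.5 TiB usable
# So one 20 TB mirror adds a bit more usable space than four 4 TB mirror
# vdevs give up; whether ~5-6 TB stays free depends on how full the
# removed vdevs actually were.
```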
Nope, another thread.

Sorry, I think I lost the big picture. What is the final configuration you want to have for your pools?
I understand one pool of two mirrored 20TB drives. What is the configuration of the second pool? Are you wanting mirrors of old 4TB drives or a RAIDZ2 of old 4TB drives? And how many drives?
I'm really confused about what you think your configuration will be. If you make a 20TB Mirror and you are able to place 19.4 TB of data on it, you have no space left. That is assuming you can fit 19.4TB of data on it, I doubt it. So yes, please explain your desired end configuration, I think you posted it in these 5 pages of text but I didn't see it.
Look at the thread he linked, but basically replace a few of the vdevs with the 20 TB drives as a 3-way mirror, and keep a handful of the 4 TB vdevs, so it'd be a mixture of both. Then, when I can allocate more funds in the near future to replace the others, eventually it'd be all 20 TB vdevs.

Sorry, I think I lost the big picture. What is the final configuration you want to have for your pools?
I understand one pool of two mirrored 20TB drives. What is the configuration of the second pool? Are you wanting mirrors of old 4TB drives or a RAIDZ2 of old 4TB drives? And how many drives?
I'm really confused about what you think your configuration will be. If you make a 20TB Mirror and you are able to place 19.4 TB of data on it, you have no space left. That is assuming you can fit 19.4TB of data on it, I doubt it. So yes, please explain your desired end configuration, I think you posted it in these 5 pages of text but I didn't see it.
But when I add more drives by expanding it, it's in a separate vdev. So the data will be spread across that vdev too.

You add more space with the big drives vdev, then you detach the drive from the WebUI and the …
On a side note, the scrub didn't find any issues. Interesting.

That being said, I am currently running a scrub again. It seems a little too weird to me. But maybe the drive was just seated weird or something.
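For reference, re-running and checking the scrub from the shell (it also shows up in the WebUI) is just:

```
# Start another scrub on the pool and check its progress/result:
zpool scrub PrimaryPool
zpool status -v PrimaryPool
```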
Seems a bit complicated to me. As long as it's clear in your mind, that is what counts and I hope it works out the way you want.

But when I add more drives by expanding it, it's in a separate vdev. So the data will be spread across that vdev too.
The data isn't being moved from the vdevs I'm wanting to remove, onto those drives. So I can't just go and put in the vdev, and then completely remove the other vdevs. The data needs to come off of them.
As far as I am aware, that's exactly what should happen if you have enough space. I am talking about the WebUI, not the shell.

The data isn't being moved from the vdevs I'm wanting to remove, onto those drives. So I can't just go and put in the vdev, and then completely remove the other vdevs. The data needs to come off of them.
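That matches how ZFS top-level vdev removal behaves. A hedged sketch of what happens under the hood when one of the old mirror vdevs is removed (vdev names taken from the zpool status output further down):

```
# Removing a top-level mirror vdev first evacuates (copies) everything on it
# onto the remaining vdevs, then drops the vdev from the pool. It only starts
# if the rest of the pool has room for that data.
zpool remove PrimaryPool mirror-0

# Progress shows up under "remove:" in the status output while it runs:
zpool status -v PrimaryPool

# Note: device removal requires all top-level data vdevs to be mirrors or
# single disks (no raidz) and, in practice, a matching ashift across vdevs.
```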
Complicated how so? I just don't have the money to replace all of my vdevs with 20 TB drives upfront. So I will replace part of it for now basically.

Seems a bit complicated to me. As long as it's clear in your mind, that is what counts and I hope it works out the way you want.
Would like confirmation as well. But yeah, something doesn't ring right in my mind with that process, because the data is spread across all vdevs. If I put in the 20 TB mirror and then just go remove the other vdevs, they still have my data on them.

As far as I am aware, that's exactly what should happen if you have enough space. I am talking about the WebUI, not the shell.
Can anyone confirm this?
You don't just pull them out, you have to [software] detach them.

If I put in the 20 TB mirror and then just go remove the other vdevs, they still have my data on them.
How would TrueNAS know "he is planning to remove these vdevs, so we need to make sure to move all the data over to this new vdev"?
No, I get that, but how will TrueNAS know they are staying out and the data needs to be moved off of them onto the new drives?

You don't just pull them out, you have to [software] detach them.
IIRC TN doesn't allow you to bring a vdev offline if doing so offlines your pool.

No, I get that, but how will TrueNAS know they are staying out and the data needs to be moved off of them onto the new drives?
Could detach not be utilized for other circumstances where you plan to put that drive back in?
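To illustrate the difference (a sketch only; the gptid below is a placeholder for an actual mirror member, not one of your disks):

```
# Detach permanently drops a disk out of its mirror vdev; putting it back
# later means a full re-attach and resilver from scratch:
zpool detach PrimaryPool gptid/SOME-MIRROR-MEMBER

# Offline/online is the "I plan to put this drive back" path - the disk stays
# a pool member and only resilvers the changes it missed when it returns:
zpool offline PrimaryPool gptid/SOME-MIRROR-MEMBER
zpool online  PrimaryPool gptid/SOME-MIRROR-MEMBER
```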
Also in regards to this, I stated that the pool shows healthy now and it ran another scrub last night and still shows healthy.

That being said, I am currently running a scrub again. It seems a little too weird to me. But maybe the drive was just seated weird or something.
```
zpool status -v
  pool: PrimaryPool
 state: ONLINE
  scan: scrub repaired 0B in 15:04:36 with 0 errors on Fri Aug 25 02:57:48 2023
config:

        NAME                                              STATE     READ WRITE CKSUM
        PrimaryPool                                       ONLINE       0     0     0
          mirror-0                                        ONLINE       0     0     0
            gptid/d7476d46-32ca-11ec-b815-002590f52cc2    ONLINE       0     0     0
            gptid/d8d6aa36-32ca-11ec-b815-002590f52cc2    ONLINE       0     0     0
          mirror-1                                        ONLINE       0     0     0
            gptid/d9a6f5dc-32ca-11ec-b815-002590f52cc2    ONLINE       0     0     0
            gptid/db71bcb5-32ca-11ec-b815-002590f52cc2    ONLINE       0     0     0
          mirror-2                                        ONLINE       0     0     0
            gptid/d8b2f42f-32ca-11ec-b815-002590f52cc2    ONLINE       0     0     0
            gptid/d96847a9-32ca-11ec-b815-002590f52cc2    ONLINE       0     0     0
          mirror-3                                        ONLINE       0     0     0
            gptid/d9fb7757-32ca-11ec-b815-002590f52cc2    ONLINE       0     0     0
            gptid/da1e1121-32ca-11ec-b815-002590f52cc2    ONLINE       0     0     0
          mirror-4                                        ONLINE       0     0     0
            gptid/9fd0872d-8f64-11ec-8462-002590f52cc2    ONLINE       0     0     0
            gptid/9ff0f041-8f64-11ec-8462-002590f52cc2    ONLINE       0     0     0
          mirror-5                                        ONLINE       0     0     0
            gptid/14811777-1b6d-11ed-8423-ac1f6be66d76    ONLINE       0     0     0
            gptid/0cd1e905-3c2e-11ee-96af-ac1f6be66d76    ONLINE       0     0     0
          mirror-6                                        ONLINE       0     0     0
            gptid/749a1891-1b5c-11ee-941f-ac1f6be66d76    ONLINE       0     0     0
            spare-1                                       ONLINE       0     0     0
              gptid/c774316e-3c2c-11ee-96af-ac1f6be66d76  ONLINE       0     0     0
              gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76  ONLINE       0     0     0
        spares
          gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76      INUSE     currently in use
          gptid/0d56b97d-1e91-11ed-a6aa-ac1f6be66d76      AVAIL

errors: No known data errors

  pool: boot-pool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:01:06 with 0 errors on Sat Aug 26 03:46:06 2023
config:
```
Just tried detaching that one that is in use and am getting an error.

Detach those spares.
```
[EZFS_NOTSUP] Cannot detach root-level vdevs

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 232, in __zfs_vdev_operation
    op(target, *args)
  File "libzfs.pyx", line 402, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 232, in __zfs_vdev_operation
    op(target, *args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 241, in <lambda>
    self.__zfs_vdev_operation(name, label, lambda target: target.detach())
  File "libzfs.pyx", line 2158, in libzfs.ZFSVdev.detach
libzfs.ZFSException: Cannot detach root-level vdevs

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 246, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 111, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 985, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 241, in detach
    self.__zfs_vdev_operation(name, label, lambda target: target.detach())
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 234, in __zfs_vdev_operation
    raise CallError(str(e), e.code)
middlewared.service_exception.CallError: [EZFS_NOTSUP] Cannot detach root-level vdevs
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 139, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1236, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 981, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 1103, in detach
    await self.middleware.call('zfs.pool.detach', pool['name'], found[1]['guid'])
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1279, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1244, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1250, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1169, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1152, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [EZFS_NOTSUP] Cannot detach root-level vdevs
```
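That error usually means the detach was aimed at the entry under the root-level "spares" list rather than at the spare copy sitting inside spare-1. A hedged sketch of how this is normally untangled from the shell, using the gptids from the zpool status above (double-check them against your own output before running anything):

```
# Release the in-use hot spare: detach the spare copy that sits under
# spare-1 in mirror-6. It should return to AVAIL in the spares list.
zpool detach PrimaryPool gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76

# (Alternative: detach the original member gptid/c774316e-3c2c-11ee-96af-ac1f6be66d76
# instead, which promotes the spare to a permanent member of mirror-6.)

# Once a spare is AVAIL, taking it out of the pool entirely is a remove:
zpool remove PrimaryPool gptid/0d56b97d-1e91-11ed-a6aa-ac1f6be66d76
```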