I have a 29-drive RAIDZ3 vdev. I am only using 20% of the space and would like to remove a drive or two (to keep as replacements should I have a failure). Is this possible? How might I do this?
Thanks in advance.
Gary
Given that the current pool layout is certainly not a recommended one, as @jgreco mentioned, it might be a good idea to let us know about your use case. Also, how you ended up with the current pool would be interesting. Both pieces of information would help us give additional advice.
Finally, as per the forum rules, more information on the system would not hurt either.
I mostly use the pool to back up my personal documents at home. Mostly a single user, etc. I have been accessing it via SMB. The system is an old re-purposed Supermicro server with repurposed 6 Gb/s SAS drives. I do not have another server to back up the data to at this point. I could create another set of drives and fit them in the machine to back up to a new vdev. Two CPUs and 96 GB of memory. Thanks for your help and advice.
Even 14-drive vdevs are usually considered too wide. Given you only use 20%, maybe recreate it as 3x 9-wide RAID-Z2 vdevs plus 2 hot spares?
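If you go that route, the layout would look roughly like this from the command line (disk names da0-da28 and the pool name "tank" are just placeholders; on TrueNAS you would normally build the pool in the GUI, which also handles partitioning and gptid labels):

    # three 9-wide RAID-Z2 vdevs plus two hot spares (29 disks total)
    zpool create tank \
        raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 \
        raidz2 da9 da10 da11 da12 da13 da14 da15 da16 da17 \
        raidz2 da18 da19 da20 da21 da22 da23 da24 da25 da26 \
        spare da27 da28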
And yes, you don't need another server if your chassis can accommodate enough drives to back up to another vdev in the same server.
Make sure your temporary backup is reliable enough for you: it needs enough redundancy (RAID-Z2 at minimum) and must not be made of faulty disks (test them first), because there will be a period of time where you have destroyed your old pool and the backup is the only copy of your data.
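As a rough sketch of what that could look like (device names and the pool name "backup" are placeholders, and the vdev width here is only an example, so size it to actually hold your data; on TrueNAS you would normally create the pool from the GUI):

    # long self-test on every disk you plan to reuse, then check the result
    smartctl -t long /dev/da29
    smartctl -a /dev/da29

    # temporary backup pool, RAID-Z2 at minimum
    zpool create backup raidz2 da29 da30 da31 da32 da33 da34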
You can move data as dataset snapshots with replication in the TrueNAS GUI; it is probably the fastest and most straightforward way and handles replicating SMB ACLs. But use rsync if you're not sure the pool will hold up through the entire process, because with a half-done rsync you have exactly half of your data, while with a half-done snapshot replica you effectively have next to none.
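For reference, this is roughly what the two approaches look like from a shell (the pool and dataset names are made up for the example; the GUI replication task does the snapshot/send/receive part for you):

    # snapshot replication: recursive snapshot, then send the whole tree
    zfs snapshot -r oldpool/documents@migrate
    zfs send -R oldpool/documents@migrate | zfs receive -u backup/documents

    # rsync alternative: resumable file-by-file copy, but ACLs and ZFS
    # properties are not carried over the way a replication stream carries them
    rsync -avh --progress /mnt/oldpool/documents/ /mnt/backup/documents/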