SOLVED Is it possible to change RAID configuration?

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
OK, saw the video. Looks like the option to detach isn't there for you, so we'll need to use the CLI for it.

Offline the disk like in the video.

Go to the CLI and use zpool detach myNASstorage gptid/...

Replace the three dots with the actual gptid from zpool status
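
For reference, the whole sequence from the shell looks something like this (the gptid/... is still just a placeholder; substitute the real label you see in zpool status):

Code:
zpool status myNASstorage              # note the gptid of the disk you want to remove
zpool offline myNASstorage gptid/...   # same as the Offline step in the GUI
zpool detach myNASstorage gptid/...    # removes that disk from the mirror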
 

metalhusky

Dabbler
Joined
Oct 7, 2019
Messages
14
OK, saw the video. Looks like the option to detach isn't there for you, so we'll need to use the CLI for it.

Offline the disk like in the video.

Go to the CLI and use zpool detach myNASstorage gptid/...

Replace the three dots with the actual gptid from zpool status

OK, cool that worked, thanks!

Now how do I add it, just as storage, as a second drive?
Do you know why the option to detach was not in the GUI?
Why did I need to use the console?


I wanted to "Extend" the pool and got
Extending the pool adds new vdevs in a stripe with the existing vdevs. It is important to only use new vdevs of the same size and type as those already in the pool. This operation cannot be reversed. Continue?

So because it's irreversible, I didn't proceed.

Is there a way to add this volume not as a "Stripe"? As I understand it, in a stripe, if one HDD fails, the data on the other one can't be recovered either.
That would be sad, but then again, it's basically just a Plex server and some files, nothing super important, so even if one drive fails and all the data in the pool goes with it, it wouldn't kill me.
How often do WD Reds fail anyway? What's the expected lifetime, 3-5 years?

PS: I realize I'm bombarding you with questions here, but you're one of the few people in the whole BSD/Linux community who actually stayed on it and ACTUALLY helped me!
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
So it's good you didn't take the option to extend if you don't want to put all your data at risk of a single disk failure. If, as you say, it's no big deal, then go ahead and extend (understanding that it's all eggs in one basket).

If what you want is just another area to store files that's not linked to the current one, just create a new pool and use the spare disk we just freed up. It is possible you'll need to go into the Disks view under Storage and Wipe it first.
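
If you want to double-check from the shell before creating the new pool, something like the following should do it (device names like da1 are only examples; yours may differ):

Code:
zpool status myNASstorage   # the detached disk should no longer be listed here
glabel status               # maps gptid labels to device names (da0p2, da1p2, ...)
camcontrol devlist          # lists the physical disks the system sees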

Do you know why the option to detach was not in the GUI?
Why did I need to use the console?
The second question is answered by the answer to the first...
I was leading you down that path as I had tested the process on my development system, which runs TrueNAS 12 Core (= FreeNAS 12 Nightly). The option to detach is there in that (future) version.
I'm sure that in the legacy UI (just removed in 11.3) it was possible to do that in prior versions too, so it's just this period in time where we have a feature gap.

How often do WD Reds fail anyway? What's the expected lifetime, 3-5 years?
WD publishes the MTBF (Mean Time Between Failures) for the Reds as 1,000,000 hours. (That's 41,666 days, which is about 114 years... clearly a number which makes no sense in human experience.)

What that really means is that if you ran 1,000,000 WD Reds for an hour, you should expect one of them to fail... 24 to fail in a day, 8,766 in a year.

The practicalities of that are a bit special, since it means that after 10 years only 87,660 in a million would be dead (still 90%+ of the drives running after 10 years... and still over 10% running after 100 years), which is clearly a rubbish statement.
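
For anyone who wants to reproduce those numbers, the back-of-the-envelope arithmetic is just this (it treats the failure rate as constant, which is the assumption baked into the MTBF figure):

Code:
# 1,000,000 hour MTBF across a fleet of 1,000,000 drives = ~1 expected failure per hour
echo "scale=1; 1000000 / 24" | bc            # ~41666 days
echo "scale=1; 1000000 / 24 / 365.25" | bc   # ~114 years
echo "scale=1; 24 * 365.25" | bc             # ~8766 expected failures per year in the fleet
echo "scale=1; 24 * 365.25 * 10" | bc        # ~87660 expected failures after 10 years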

The experience most of us see is that after about 4-5 years, drives begin to show signs of age, report reallocated sectors or other issues, and eventually crap out somewhere between 5 and 10 years. What you do about that depends on your way of thinking: replace preemptively because you like to sleep at night, or replace only on failure because you love to live on the edge. Things like various levels of RAID and good operational procedures are what help to balance those choices.

There's a lot of statistical analysis out there about the failure curve (most drives fail either in the early days/weeks of operation or beyond 4 years, with only a few failures in between), and other factors such as heat and humidity (the helium-filled drives would presumably be immune to the humidity aspect) can also have an impact on lifespan. Good airflow and maintaining a cooler temperature are seen as good for the long-term health of the disk, as is keeping drives spinning as opposed to sleeping or powering them off regularly.
 

metalhusky

Dabbler
Joined
Oct 7, 2019
Messages
14
OK, I just created a second pool with the HDD. I think it should be possible to set up Syncthing so that it writes data to both drives/pools, so that the stuff that's kind of important is there twice and the rest is just media, which doesn't matter that much... yeah.

Thanks a lot, man!
 