Expanding capacity...

Status
Not open for further replies.

BlueMagician

Explorer
Joined
Apr 24, 2015
Messages
56
Dear all,

I'm currently running FreeNAS with 6 x 6TB WD Red HDDs in a zRAID2 configuration.

The pool is used to store multiple files of 2GB+ each and is mostly a write once / read many scenario.

With only 9.5 TiB free in a pool of 21TiB, space is dwindling faster than expected - especially if I aim to keep it under 90% full.

So I am forced to consider my expansion options.

I have the chassis and controller space for 12 drives.

Conventional advice seems to suggest that just adding a second 6 x 6TB zRAID2 VDEV is the way to go.

BUT then I'm essentially losing 4 out of 12 drives to redundancy. That's £800 and 24TB in unusable capacity - seems crazy.

So I wonder if I should move to an 11 drive zRAID3 configuration.

Or perhaps buck convention, and use a 12 drive zRAID2 or zRAID3 setup.

My biggest concern by far is that I have nowhere to store ~11TiB of data temporarily if I needed to destroy my current pool.

I'd have to buy at least two extra WD Reds just to move stuff onto for a day. These could then arguably become cold spares, but again, that's essentially £400 of drives sitting on a shelf doing nothing.

I've heard of people creating pools with fake/sparse devices as members, essentially buying time to add more real devices. I wonder if doing something like this in the correct order would save me from having to buy quite so many drives only to use them for a day's data transfer.
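The shape of that trick, as I understand it, would be something like this (a purely hypothetical sketch — pool and device names are invented, and the pool has zero parity protection while it runs degraded):

```shell
# Hypothetical sketch only - pool name and device names are invented.
# Create sparse files the same nominal size as the real 6TB drives;
# being sparse, they consume almost no actual space:
truncate -s 6T /tmp/fake0 /tmp/fake1 /tmp/fake2

# Build a 12-wide RAIDZ3 vdev from 9 real disks plus the 3 sparse files:
zpool create newpool raidz3 \
    da0 da1 da2 da3 da4 da5 da6 da7 da8 \
    /tmp/fake0 /tmp/fake1 /tmp/fake2

# Immediately offline the sparse members so no data ever lands on them;
# the pool then runs degraded (all three parities consumed) but usable:
zpool offline newpool /tmp/fake0
zpool offline newpool /tmp/fake1
zpool offline newpool /tmp/fake2
```

Once real drives became available, each sparse placeholder would be swapped out with `zpool replace`, letting the vdev resilver back to full health.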

I can't really afford £1000+ on more drives right now, but the irony is that the longer I leave it, the more my current pool grows - so potentially the more drives I'd need to buy for the transfer process... pfft.

I can't quite get my head around it all, and I'm also uncertain as to which final configuration is the one to aim for in my situation.

Any thoughts and advice would be much appreciated.

Kindest regards,
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Conventional advice seems to suggest that just adding a second 6 x 6TB zRAID2 VDEV is the way to go.
BUT then I'm essentially losing 4 out of 12 drives to redundancy. That's £800 and 24TB in unusable capacity - seems crazy.
Not so crazy when you consider hardware RAID equivalents like RAID 6, which would do the same and require 2 drives for parity. In the end it comes down to deciding which two of speed, space, and redundancy you settle on.

So I wonder if I should move to an 11 drive zRAID3 configuration.
In your case I would think that a Raidz3 is a good route using 11 drives and 1 as a hot/cold spare.

However, when I ran the numbers using @Bidule0hm's "ZFS RAID size and reliability calculator" it shows:

RaidZ3 (11x6TB):
Usable data space: 34.37 (TiB) / 37.79 (TB)

RaidZ2 (6x6TB):
Usable data space: 17.18 (TiB) / 18.89 (TB) --> x 2 (For 2vdevs) = 34.36 (TiB) / 37.78 (TB)
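The near-equality falls straight out of the data-disk counts; ignoring metadata overhead and the reserved free space the calculator also subtracts, both layouts give the same raw data capacity:

```shell
# Raw data capacity = (total disks - parity disks) * 6TB per disk.
echo $(( (11 - 3) * 6 ))      # single 11-wide RaidZ3: 48 TB of data disks
echo $(( 2 * (6 - 2) * 6 ))   # two 6-wide RaidZ2 vdevs: also 48 TB
```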

So, based on that as well as your comment:
My biggest concern by far is that I have nowhere to store ~11TiB of data temporarily if I needed to destroy my current pool.

I would then suggest:
Conventional advice seems to suggest that just adding a second 6 x 6TB zRAID2 VDEV is the way to go.

This also has the advantage of letting you upgrade a single vdev later (as space requirements increase) by simply replacing its drives (one at a time) and resilvering. Once all the drives in a vdev are replaced, the capacity will autoexpand and provide you with the additional space. Since you are using 6-drive vdevs, it is easier to purchase/replace 6 drives instead of 11 (as you would with an 11x6TB RaidZ3 vdev).
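In command terms, that per-vdev upgrade is just replace-and-resilver six times over; a rough sketch with made-up pool and device names (autoexpand needs to be enabled for the extra space to appear after the last swap):

```shell
# Hypothetical pool/device names. Enable autoexpand once:
zpool set autoexpand=on tank

# Then, for each of the 6 drives in the vdev, one at a time:
zpool replace tank da0 da12   # swap old drive for the new, larger one
zpool status tank             # wait for the resilver to complete
# Repeat for the other five drives; after the last resilver the
# vdev (and hence the pool) grows automatically.
```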
 

BlueMagician

Explorer
Joined
Apr 24, 2015
Messages
56
Thank you Mirfster, for your detailed and thoughtful reply.

So the total available space for two 6 disk zRAID2 vDEV's, is equal to that of one 11 disk zRAID3 vDEV.

Put another way - an extra £200 spent on one more disc doesn't get me any more space, but allows a twin vDEV configuration, giving more flexible future upgradability.

So, back to my previous questions, for clarification, if I may:

1. Would it be completely stupid to move to a 12 disc zRAID3 pool? Would it hurt speed or reliability in any other way?

2. Thinking about a sneaky way to change pool configuration if I went with a 12 disc zRAID3...

...Say I bought another 6 drives but used 3 of them to temporarily hold my stuff.

This would leave me with 9 physical drives to create a new pool.

Could I create a new 12 disc zRAID3 pool using those 9 drives and 3 sparse files - then 'detach' the sparse files to effectively end up with a fully degraded 12 disc zRAID3 pool?

I could then move all my stuff through CIFS from the 3 discs I held back.

Once all my stuff is copied to the pool, thus freeing up the 3 last discs, could I then attach them back into the VDEV - effectively replacing the 'missing' sparse files with real drives?

Sorry if that sounds nuts. I'm just thinking of a way to back up 11TB of data for a few days, without having to buy 2 or 3 extra discs that would otherwise just end up being shelf-fodder.
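If it helps, in ZFS terms that last step would be a `zpool replace` of each missing/sparse member with a real disk, rather than an attach (pool and device names hypothetical, carrying on from the degraded-pool idea above):

```shell
# Hand each freed-up physical drive to the pool in place of a
# sparse placeholder; ZFS resilvers onto it and the vdev heals:
zpool replace newpool /tmp/fake0 da9
zpool replace newpool /tmp/fake1 da10
zpool replace newpool /tmp/fake2 da11
zpool status newpool   # confirm the resilvers finish cleanly
```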

Thanks in advance,
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
  1. I wouldn't call it "completely stupid", but it would go against generally accepted guidelines.
  2. Sounds nuts to me.
I propose that before you expand your storage, you stop and think carefully about why you're storing more data than you feel you can afford to back up. Is the data important? If so, then it must be backed up. Perhaps you need to slow the growth rate.

Did you follow @Mirfster's explanation of why two 6-disk vdevs offer an easier upgrade path than one larger vdev?
 

BlueMagician

Explorer
Joined
Apr 24, 2015
Messages
56
2. Sounds nuts to me.

I realise it was a slightly unorthodox suggestion, but I figured it might be possible, at least in theory - and save me from having to spend £600-800 on discs for just one weekend's backup/transfer session.

Is the data important? If so, then it must be backed up.

My FreeNAS server is just a media server for the family, and a backup target for a couple of other desktop machines. Nothing in the pool is irreplaceable, but the media (although I own it all) would still take weeks to recover from source.

Did you follow @Mirfster's explanation of why two 6-disk vdevs offer an easier upgrade path than one larger vdev?

Indeed I did - and it is appreciated. I understand that I could expand the total capacity of a single vDEV by systematically replacing each drive with a bigger one in the future.

I will not ignore the good advice posted above, but it's still hard to swallow the fact that I'd still essentially be losing £800 / 24TB of disc to redundancy by going down the twin zRAID2 VDEV route, instead of just recreating an 11/12 disc zRAID3 pool and only losing 18TB.

Hell, even a 12 disc zRAID2 pool is a thought - only losing 12TB. BUT I appreciate how only having 2 disc redundancy for a 12 disc array is a little sketchy, so I'll forget that.


My curious side would still like to know if my crazy idea for sparse disk switching would work in theory. And also, how bad the performance would be if I ran a 12 disc zRAID3 setup.


Thanks to all,
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
My curious side would still like to know if my crazy idea for sparse disk switching would work in theory. And also, how bad the performance would be if I ran a 12 disc zRAID3 setup.

In pure theory: yes, it would work. But you need to know exactly what you're doing, and if anything goes wrong during the process then you'll probably lose your data - you've been warned.

If you use a gigabit link then it'll probably be your bottleneck, not the pool. However, I haven't tested a 12-drive RAID-Z3, so I could be wrong.
 

BlueMagician

Explorer
Joined
Apr 24, 2015
Messages
56
My thanks again to Mirfster,
Robert Trevellyan, and Bidule0hm.

I'm not trying to be difficult, I'm just trying to think outside the box a bit - and not waste too much space/money in the process.

In all likelihood, I'll end up going for a second 6 x 6 zRAID2 vDEV.


One final question:

Other than a slight increase in random IOPS, is there any other upside to merging multiple vDEV's into a single pool?

Surely it's safer for the integrity of the pool(s) to keep them separate.

Conversely, if I create a new second pool containing the new second vDEV now, can I merge the two pools later if I change my mind?

Of course I realise one can't technically merge vDEV's - hence this entire thread - but I'm not sure what rules apply at Pool level.

Thanks in advance for the great discussion,
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I'd still essentially be losing £800 / 24TB of disc to redundancy by going down the twin zRAID2 VDEV route
If you can stop thinking of it as losing, as in raw capacity or monetary value, and start thinking of it as using, as in "to make your storage more reliable", it gets easier. In one of his boxes, @jgreco is using 48TB of raw storage to deliver 7TB of usable, high performance, reliable storage (or something along those lines).
Surely it's safer for the integrity of the pool(s) to keep them separate.
Technically correct, but then you have to manage two pools instead of one. If you're the type of person who likes to partition the hard drives in their desktop computers, it might be the way to go.
can I merge the two pools later if I change my mind?
Not unless you destroy one pool, then add its drives to another pool as a new vdev.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
having to spend £600-800 on discs for just one weekend's backup/transfer session.
If you choose carefully, you could find yourself with cold spares ready to go, or even an externally mountable pool for backing up your most important data (e.g. via eSATA).

Paging @Arwen ...
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
TBH, I am not sure how things would go with a 12 disk vdev. I never considered pushing beyond recommendations. :)

As far as performance goes, for a single 11-drive RaidZ3 vdev I would think writes would be slower and reads decent. But I would still think 2 RaidZ2 vdevs would be faster due to more IOPS. This is just me theorizing though.

I have heard of the sparse allocation, but would think that it is risky and not worth it. Again, I have not done this myself so YMMV.
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
TBH, I am not sure how things would go with a 12 disk vdev. I never considered pushing beyond recommendations. :)

As far as performance goes, for a single 11-drive RaidZ3 vdev I would think writes would be slower and reads decent. But I would still think 2 RaidZ2 vdevs would be faster due to more IOPS. This is just me theorizing though.

I have heard of the sparse allocation, but would think that it is risky and not worth it. Again, I have not done this myself so YMMV.

Out here on the edge of sanity, 15 drive Z3 vdevs work just fine and are fast enough to make a 1Gb/s link the bottleneck.

YMMV, Contents under pressure, Offer void where prohibited, no user serviceable parts inside.. :)
 

BlueMagician

Explorer
Joined
Apr 24, 2015
Messages
56
Some sound advice here, and food for thought, thank you.


If I were to keep it simple and just go with a second 6-disc RAIDz2 vDEV, adding that to my current pool would mean it becoming a stripe, correct? Akin to RAID60?

I've read that only newly written data will be striped/distributed across the two vDEV's, whilst existing data will stay put.

95% or my data is static. Write once, read many.

So this effectively means that for the data that's already on the pool, I'll not see any of the potential I/O throughput gain, because it's not really striped - and never will be?

By extension, surely this also means that once the original vDEV is full, any newly written data has no choice but to reside on the new vDEV - again meaning it's never truly striped because the space doesn't exist to allow the writes to be fairly distributed.


If I've not misunderstood anything so far, the only way to rebalance the pool, would be to move all the data off the pool - freeing space - and copy it all back again, thus giving ZFS the chance to do its thing.

Is there any way to achieve this without having to move the data off the pool in one go? Would moving the data out and back in file by file do the trick - or would the pool have to be emptied completely for max benefit?

Hmm. Mind. Hurts.


Thank you all in advance,
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
I've read that only newly written data will be striped/distributed across the two vDEV's, whilst existing data will stay put.
From my understanding this is partially true (not 100% certain).

Data that is "at rest" will not be re-written across both vdevs. However, any data that is new or modified will then be written across both vdevs. Also, new data may at first be written mostly to the new vdev, because FreeNAS will see it as having more free space (internally).

Increased speeds should be seen when accessing data that has been written across both vdevs. If you are not having speed issues with the current RaidZ2 vdev, I would not think that performance would drop either.

I think that there is a way to have it re-write/re-allocate the data; I would have to look around in the forums to see. Since you should have ample space once you add the vdev, it may be as easy as:
  1. Creating another new dataset
  2. Copying the data there
  3. Deleting the contents of the old dataset
  4. Copying the data from the new dataset back to the old one
  5. Deleting the new dataset
Again, this is just me theorizing, but others may chime in with info regarding this...
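Those steps might look roughly like this from the shell (dataset names are purely illustrative, and the copy should be verified before anything is deleted):

```shell
# Illustrative names only; 'tank/media' holds the existing, unbalanced data.
zfs create tank/media_new
# Copying re-writes every block, so new writes stripe across both vdevs:
rsync -a /mnt/tank/media/ /mnt/tank/media_new/
# Only after verifying the copy: clear the old dataset and copy back,
# so the data in its original location is now balanced too.
rm -rf /mnt/tank/media/*
rsync -a /mnt/tank/media_new/ /mnt/tank/media/
zfs destroy tank/media_new
```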
Keep in mind that with two RaidZ2 vdevs, while the pool can withstand the loss of up to 4 drives in total, you cannot lose more than 2 drives in any single vdev. If that should occur, your pool is lost altogether.

I understand your points, but am trying to provide the "best" answer for you while considering your desire to not spend too much. With all that said, the only thing I can recommend is to add the second RaidZ2 vdev.

Now, if you could get your hands on a "loaner" system or some way to safely copy the data elsewhere; then other possibilities would exist.
 

BlueMagician

Explorer
Joined
Apr 24, 2015
Messages
56
Thank you Mirfster - your comments confirm much of what I was thinking.

I appreciate that going with twin RAIDz2 vDEV's means the safety of the entire pool is still bound by the redundancy level of any one vDEV.

That was making me lean towards 11 or 12 x 6TB drives in a RAIDz3 configuration - but part of me thinks that triple parity redundancy for a home media server is going a little too far.


I've been looking at drive costs, and I can currently pick up 5TB external drives for backup purposes for just £100 apiece.

Three of those would cover me nicely to move my data onto temporarily, so I guess trashing the pool and starting again wouldn't be completely ridiculous.

Spending money doesn't worry me, but wasting it does - so I appreciate that you're trying to keep your suggestions financially sensible!


With 2 x 6 disc RAIDz2, the random I/O gain from having two vDEV's in the pool would be nice. In theory, that means my media server could better cope with multiple client sessions.

Also the benefit of being able to expand each vDEV separately in the future is a bonus.

Although, changing the size of one vDEV (and not the other) in future would eventually lead me back to a situation where the stripe data could not be evenly distributed because of vDEV capacity imbalance - thus negating the I/O perks for some of the data as the pool nears capacity.

On the other hand, a 10 disc RAIDz2 or an 11 disc RAIDz3 would be a clean, balanced fresh start - fewer discs, less power, less cost - and a more efficient drive:usable-space ratio.

A 10 or 11 disc solution would also leave me with one or two slots left in my chassis - and more SATA headers free for other future requirements such as SSDs for a ZIL and L2ARC.

I don't ever see a need to use them, but still, not filling every bay in my chassis does have a certain allure.


Wow. I've rambled for ages. Sorry. And there I was thinking I'd made a decision!


Mirfster, you mentioned that I'd have more options if I were able to back up, trash the pool and start again.

I assume I've covered most of what you were thinking in this post - but I'd love to know other people's thoughts still.


Thanks again,
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
the media (although I own it all) would still take weeks to recover from source
part of me thinks that triple parity redundancy for a home media server is going a little too far
You could make a case for a RAIDZ1 pool for the most easily replaceable data. It's not as though you'd need to be able to watch all those movies the day after a pool failure. In this scenario, if there's less easily replaceable data to be stored too, you would be better off with 2 pools, with more redundancy in the 2nd.
 

ttabbal

Dabbler
Joined
Oct 9, 2015
Messages
35
Speaking as someone who has been there.. My setup is 2x 6-disk raidz2. I started with 1 vdev and added the second after doing the initial data load from the old machine.

The somewhat unbalanced data levels haven't been an issue. I was concerned about it, but some testing with the system before going live convinced me that it wasn't a problem for a media server. And it hasn't been. The machine has been running for 7 years. The array can sustain about 500MB/sec locally (sequential) and handles a half dozen player clients and another 4+ machines doing random file transfers without problems. Some of the setups here make it look like a 5.25" floppy, but it does what I need it to.

In that time, I've ended up replacing all 6 original drives. Most of the time they were still accessible. I run scrubs and SMART long tests twice a month, and short tests between. If I see a drive reporting SMART errors, it gets replaced. I then test it in another system with the manufacturer's testing tools. I haven't had any I felt comfortable using after that test, they all failed in various ways. Failures tended to come a few months apart. As a bonus, the first vdev went from 1.5TB drives to 2TB transparently.

It's about time to start planning the second vdev replacement. Replace before you have disks drop off the array - it's well worth it. And do thorough burn-in testing. I like to use the destructive manufacturer test, then create a temporary mirror pool, fill it, scrub, destroy, and repeat as desired. If a drive makes it past that, I find I can expect years of use from it. That takes time though, which is another reason to get on it at the first sign of trouble.
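For anyone curious, one pass of that burn-in cycle might be sketched like so (device names invented; the dd step deliberately runs until the pool is full):

```shell
# Hypothetical burn-in for a pair of new disks, da13 and da14.
smartctl -t long /dev/da13        # start a SMART long self-test on each
smartctl -t long /dev/da14

# After the self-tests pass, exercise the pair as a scratch mirror:
zpool create burnin mirror da13 da14
dd if=/dev/urandom of=/mnt/burnin/fill bs=1M   # write until out of space
zpool scrub burnin                # read everything back, verify checksums
zpool status -v burnin            # check for read/write/checksum errors
zpool destroy burnin              # then repeat as desired
```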
 

Asimov1973

Dabbler
Joined
Nov 23, 2013
Messages
49
mmm...when I needed to convert my 4x3TB vdev to a 6x3TB vdev I simply:

1) order an 8TB on Amazon.de
2) copy all my stuff inside the 8TB
3) destroy my 4x3TB vdev
4) add 2 disks of 3TB each to the pool
5) create the new vdev 6x3TB
6) disconnect 1 drive of 3TB because I had only 6 SATA PORTS
7) connect the 8TB to the NAS and copy all my data to the DEGRADED 6x3TB (which was now made of just 5 disks...)
8) disconnect the 8TB drive
9) connect back again the 3TB drive to the 6x3TB vdev which automagically repaired itself in a second
10) send back the day after the 8TB disk to amazon.de saying that "I was very sorry but the disk wasn't compatible with my system" (lol)
11) get full refund from amazon

THE END (MUAHAHAHAHAHH !!! evil laugh in background which progressively fade away....)
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
mmm...when I needed to convert my 4x3TB vdev to a 6x3TB vdev I simply:

1) order an 8TB on Amazon.de
2) copy all my stuff inside the 8TB
3) destroy my 4x3TB vdev
4) add 2 disks of 3TB each to the pool
5) create the new vdev 6x3TB
6) disconnect 1 drive of 3TB because I had only 6 SATA PORTS
7) connect the 8TB to the NAS and copy all my data to the DEGRADED 6x3TB (which was now made of just 5 disks...)
8) disconnect the 8TB drive
9) connect back again the 3TB drive to the 6x3TB vdev which automagically repaired itself in a second
10) send back the day after the 8TB disk to amazon.de saying that "I was very sorry but the disk wasn't compatible with my system" (lol)
11) get full refund from amazon

THE END (MUAHAHAHAHAHH !!! evil laugh in background which progressively fade away....)

Publicly admitting retail fraud..... o_O
 

Asimov1973

Dabbler
Joined
Nov 23, 2013
Messages
49
 

SirMaster

Patron
Joined
Mar 19, 2014
Messages
241
Publicly admitting retail fraud..... o_O

Because of the way he worded it? Many stores allow you to return something simply because "I don't want it anymore", which would be true. He wanted it for the backup, but then after that he didn't want it anymore and was still within the return period.

I'm surprised more people don't keep backups. You can reorganize your zpool and then restore from backup. Backups are always good :)

For me it seems that if you are storing data and holding onto it then you obviously don't want to lose it (or else why store it in the first place?), and IMO backups are pretty necessary for data that you don't want to lose.
 