Optimal zpool storage configuration?

Status
Not open for further replies.

arameen

Contributor
Joined
Sep 4, 2014
Messages
145
Current situation
I have 2 raidz3 pools in my FreeNAS machine:
A) 5 x 4TB NAS disks raidz3 (total storage 6,99 TB)
B) 11 x 4TB NAS disks raidz3 (total storage 28,1 TB)
In addition to that I have 2 more 8TB NAS drives that are not being used in FreeNAS.
My case can fit a maximum of 17 disks; 16 are already in place, the ones in the zpools above.

Problem
Resilvering of pool B is taking too long, and I have started to see it as a risk that something happens during the resilver (more drives failing during the resilver, or before I have a replacement drive ready). I want to minimize that risk.
I searched and didn't find any way to reduce the resilvering time except making smaller pools. At the same time I want to maximize the storage of my pool, and preferably keep raidz3 pools, as I am paranoid about failing disks and want maximum safety. I played with a FreeNAS machine and virtual drives of 2TB to see what an optimal setup could be. If I create 2 raidz3 pools of 5 disks each, I get much less storage than creating 1 raidz3 pool containing 10 disks. So it looks like if I go with 2 pools with fewer disks instead of one pool with all disks, I will end up with much less storage?

Question
What is the best setup to make the resilvering as fast as possible but still have maximum storage?
Destroy pools A and B and create 2 raidz2 pools, plus one spare drive that both can access if needed? (means 16 usable disks)
Destroy pool B and create just 2 raidz3 pools, plus one spare drive that both can access if needed? (means 12 usable disks)
Keep pool A and B and somehow speed up resilvering on pool B?
Another setup that is better or more optimal?
I don't want to lose too much space with a new pool setup; I would rather keep all of the space, if not gain more. But at the same time I want as much safety as possible against failing drives. I would prefer to have at least 2 separate pools instead of one big one, but it is not a must.


I already read https://forums.freenas.org/index.php?threads/hardware-recommendations-read-this-first.23069/ and a few other posts, but couldn't find answers regarding optimal storage in general.
 

Mr_N

Patron
Joined
Aug 31, 2013
Messages
289
Why is your pool resilvering to start with?

If you have a failing drive (investigate the SMART data, and make sure you're running regular SMART tests), then replace it and move on.
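For example, something along these lines from the shell will show the attributes that usually matter and kick off a long test (the device name is just an example, adjust it to your drives; FreeNAS can also schedule these under Tasks -> S.M.A.R.T. Tests):

# show the SMART attributes that usually indicate a dying drive (device name is an example)
smartctl -a /dev/ada0 | egrep "Reallocated|Pending|Uncorrect|CRC"
# start a long (extended) self-test; check the result later with: smartctl -l selftest /dev/ada0
smartctl -t long /dev/ada0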

There is such a small chance you'll get 3 drives failing before you can replace one of them and resilver (as long as you're monitoring your drives) that you shouldn't be worrying.

It's not a RAIDZ1 pool, where you could justifiably have that concern, but the opposite end of the scale with RAIDZ3, so the only way to reduce the risk further would be to add a hot spare.
 

arameen

Contributor
Joined
Sep 4, 2014
Messages
145
Background
It started with one drive going into a faulted state in pool B. FreeNAS reported read and write errors on that disk. SMART tests, both short and extended, didn't show anything unusual. At least nothing I could read; I am not an expert at interpreting SMART results. So the first time I just cleared the faults. Soon it got faulted again, and I just cleared it again. When it reoccurred several times I started investigating something else: I switched the power and SATA cables to see if that was the reason. It wasn't; the read and write errors were still there. I removed the drive from my FreeNAS machine and ran a long extended test of the drive with Seagate SeaTools in my PC, and even that test did not show anything unusual. At the same time FreeNAS started giving me read and write errors on another drive in pool B. Suddenly my raidz3 pool was missing the drive I was checking outside the machine, and was giving me read and write errors on another one. Luckily it stopped there and I did not get more faulted drives or drive failures, but it could have happened.
At the same time, putting those drives back after checking them resulted in resilvering, and I had to do that more than once, and it took too long. All this time I avoided using that pool, so next time I would like to get access to everything sooner by shortening the resilvering time. Resilvering of a disk or disks is going to happen sooner or later when any drive dies naturally, and now that I have backed up everything and can recreate the pools in a better setup with shorter resilvering and even faster scrubs, I would of course like to do that.

Back to my question: is it your opinion that the current pool setup is optimal compared to anything else I can do with the current hardware and options?
Or can you suggest a better setup with respect to my first post :) ?
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
Back to my question: is it your opinion that the current pool setup is optimal compared to anything else I can do with the current hardware and options?
Or can you suggest a better setup with respect to my first post :) ?

Hi there,
first of all:
a universally applicable optimal pool does not exist.
That is like asking what the best dish in the world is; everybody will have a different answer ;)

You seem very worried about the safety of your data, maybe a bit too much. :eek:
Did you have issues in the past?

Concerning the pool configuration:
The first question is:
Why did you create 2 pools in the first place?
What do you use the pools for? Are you replicating stuff from pool A to B?
From a general point of view, resilvering operations on a RAIDZ3 VDEV will always take longer (and it gets worse with more drives -- 12 being already the limit that should not be crossed).
Additionally, you might be able to use tunables to increase the resilvering speed, at the cost of a bigger performance impact on the users of the pool.
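The usual suspects are the resilver/scrub sysctls; a rough sketch (names from FreeBSD's ZFS -- double-check they exist on your FreeNAS version before adding them as tunables, and revert them once the resilver is done):

# give resilver/scrub I/O priority over normal pool traffic (bigger user-visible impact)
sysctl vfs.zfs.resilver_delay=0
sysctl vfs.zfs.scrub_delay=0
sysctl vfs.zfs.resilver_min_time_ms=5000
sysctl vfs.zfs.top_maxinflight=128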

In my opinion, with 16 disks and your wish to use RAIDZ3, the best configuration would be to create 1 single pool with 2 RAIDZ3 VDEVs (each containing 8 disks). Total capacity: 36,4TiB (-20% performance threshold = ~28TiB).
In this configuration you would be able to lose a total of 6 disks (3 in each VDEV) before you run out of redundancy. The resilvering time will be shorter and the stress on the drives will be reduced, because only 1 VDEV will be working to resilver.
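On the command line such a layout would look roughly like the sketch below (pool and device names are placeholders; on FreeNAS you would normally build this through the Volume Manager in the GUI, which uses gptid labels instead of raw device names):

# one pool, two 8-disk RAIDZ3 vdevs -- layout sketch only, adjust names to your system
zpool create tank \
  raidz3 da0 da1 da2 da3 da4 da5 da6 da7 \
  raidz3 da8 da9 da10 da11 da12 da13 da14 da15
zpool status tank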
 

arameen

Contributor
Joined
Sep 4, 2014
Messages
145
Well, I was asking for a better configuration out of my current hardware setup of 17 disk slots, while maintaining maximum redundancy and achieving faster resilvering and scrubbing. I assumed it could not be done in many different ways given that :)

The reason I created 2 pools in the first place was to have a copy of the most important files on 2 different pools, so if something ever happens to one of the pools I still have another copy.
I am not so experienced with FreeNAS. Almost every operation I do is from the GUI. If something happens that can not be solved through the GUI and must be done on the command line, then it takes me longer to get the pool online and operational again. In those cases, and they have already happened, it was better to have another copy of my most important files.
Anyway, that situation has changed now; I am keeping that copy outside of FreeNAS and can survive for longer without both pools in case I need to search online for a "how to" to fix a problem while the pool is unusable. Therefore I am looking for a new pool setup.
If I wasn't very worried about my data, I wouldn't be here in the first place with no FreeBSD experience but still using FreeNAS just for ZFS. I mean, regardless of how the experienced guys in this forum say that the GUI covers everything and you don't need any commands to fix normal problems (maybe no one said that and I just assumed it), I have noticed over the years that the GUI is not enough!
Several times I had a problem where I needed to learn some FreeNAS basics, look for help online and experiment for days before I could fix it. Small problems could take at least a week to fix, or longer, because I don't have FreeBSD experience. But I didn't leave FreeNAS, because of ZFS and no better ZFS NAS options :D

Thanks for your suggestion
So such a pool would really have 36,4 TB of usable space? More than both of my current pools combined, 35 TB :eek:
I guess this zpool could not be expanded unless all 8 disks in one vdev are replaced with disks bigger than 4TB?
Is there any risk or disadvantage to having one such big pool?
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
[...Snip...]
If I wasn't very worried about my data, I wouldn't be here in the first place with no FreeBSD experience but still using FreeNAS just for ZFS. I mean, regardless of how the experienced guys in this forum say that the GUI covers everything and you don't need any commands to fix normal problems (maybe no one said that and I just assumed it), I have noticed over the years that the GUI is not enough!

[...Snip...]
Thanks for your suggestion.
So such a pool would really have 36,4 TB of usable space? More than both of my current pools combined, 35 TB :eek:
I guess this zpool could not be expanded unless all 8 disks in one vdev are replaced with disks bigger than 4TB?
Is there any risk or disadvantage to having one such big pool?
It surprises me that you have experienced so many issues, but well, Murphy's law always hits at the worst moment.

Yes, the pool layout I suggested is more space efficient than the one you currently use; that's why you end up with a bit more space. That's always a good thing ;)
And yes, you will need to replace all the drives in the same VDEV with bigger ones before you see the newly available space.
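The procedure itself is the usual one-disk-at-a-time swap; roughly something like this (pool and device names are placeholders, and on FreeNAS the replace step is normally done from the GUI):

# let the pool grow automatically once a whole vdev has larger disks
zpool set autoexpand=on tank
# replace one disk, wait for the resilver to finish, then repeat for the other 7 in the vdev
zpool replace tank da0 da16
zpool status tank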

Concerning the risks:
In my opinion, using 1 bigger pool is safer than having 2 to manage, because you have better resilience: you would actually need 4 disks failing in the same VDEV, or a full VDEV offline, to be in real trouble.
And the probability of having 4 disks out of 8 fail at the same time is significantly lower than 4 disks out of 11, for instance.

That being said, you still need to make sure that you run regular SMART tests (long and short) and have backups (onsite and offsite, for the important stuff) of your data, to be on the safe side.
I've found that I'm much more relaxed about potential drive failures since I started running more regular backups and hid a drive with the really irreplaceable stuff at my workplace (I call that my private cloud backup). :D
 

arameen

Contributor
Joined
Sep 4, 2014
Messages
145
Your suggested setup is maybe the answer to my questions :)
I read somewhere that if I lose one vdev then I lose the whole pool. Does this somehow increase the risk when having 2 vdevs of 8 disks, compared to not using multiple vdevs? Are there additional risks that are vdev specific?
Another detail I just thought about is RAM; I remember that at the start I tried to avoid too-big pools because of RAM.
And is my 32GB of RAM enough for a pool of such size, 36TB?

At the beginning I did not have enough SATA ports to create one big pool anyway; it was only later, when I acquired an IBM ServeRAID M1015 that I flashed, that I could connect more drives.
But now the situation is different and I can create such a pool from the start.
I just have to back up my first pool too. Right now my backups are just regular drives in Windows to which I copy the data with CRC checks on read and write.
Once that is done I just have to decide on the upgrade from the current 9.3. That will require flashing the controller to P21, which I assume now works with all the latest versions without warnings.
Is it too early to go with Corral? Should I just upgrade to 9.10?
I am just looking for the basic functions; I haven't even touched the subject of jails and just use my FreeNAS as a simple NAS with ZFS.

Regarding regular backups, a pool of that size would require another FreeNAS machine stationed somewhere else. I don't see where I could put that, or even IF I can afford it for now :eek: Sure, it doesn't have to be a raidz3 setup, maybe raidz1, but it would still require expensive hardware with several disks AND a location. For now my backups are just external drives, and the backup is done from time to time :(
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
I read somewhere that if I lose one vdev then I lose the whole pool. Does this somehow increase the risk when having 2 vdevs of 8 disks, compared to not using multiple vdevs? Are there additional risks that are vdev specific?
Another detail I just thought about is RAM; I remember that at the start I tried to avoid too-big pools because of RAM.
And is my 32GB of RAM enough for a pool of such size, 36TB?

Once that is done I just have to decide on the upgrade from the current 9.3. That will require flashing the controller to P21, which I assume now works with all the latest versions without warnings.
Is it too early to go with Corral? Should I just upgrade to 9.10?

Regarding regular backups, a pool of that size would require another FreeNAS machine stationed somewhere else. I don't see where I could put that, or even IF I can afford it for now :eek: Sure, it doesn't have to be a raidz3 setup, maybe raidz1, but it would still require expensive hardware with several disks AND a location. For now my backups are just external drives, and the backup is done from time to time :(

Yes, if you "loose" a VDEV inside of that hypothetic pool, your pool will be unmountable ...
Like in your current configuration ;)

Your amount of RAM is fine. You're very close to the "~1GB of RAM per TB of data" rule of thumb.

Seeing the current situation and maturity, you should stay away from Corral ... it's not ready for production yet.
Version 9.10 will be perfect. I actually need to upgrade to that version too :p

Well, yeah, the last point about the backup is quite important...
I'm having a similar issue actually, and I'm playing with the idea of buying a smaller server like the HP MicroServer Gen8 (a Celeron CPU with 4GB of ECC RAM is around 200€, plus a RAM upgrade) to make a backup server.
With 4x 8TB disks in RAIDZ1 this would give around 20TiB of space. Not too bad for such a small form factor. :)
 

arameen

Contributor
Joined
Sep 4, 2014
Messages
145
I am considering that option too, for the future: buying a smaller server with big drives that can fit most of my current FreeNAS data. But that is for the future :)
I thought Corral had been announced as ready. So there is no need at all to go with it. That explains why there is no documentation, if that ever will come.

I have to ask you again about the capacity if I create one big pool as you suggested. Are you sure the size of the pool would be 36TB, and not less than both pools combined?
I ask because earlier today I played with a FreeNAS virtual machine where I attached 16 virtual drives of 2TB each (the maximum size I could give them).
I tried configuring 2 pools (A & B): A with 5 disks as raidz3 and B with 11 disks as raidz3, just as my current setup is (except for the drive size).
Then I destroyed them and created one big pool with 2 vdevs containing 8 drives each.
Then I compared the size of the bigger pool with the 2 smaller pools. The 2 smaller pools had more storage.
That should apply to 4TB drives as well, and would mean that if I create one bigger pool as you suggested, it would give less storage than both pools combined as they are now. Or am I missing something here?

I assume I did it right when creating the big pool with 2 vdevs. I just went through the GUI and chose 8 disks for each vdev, as marked in the attached picture.
[Attachment: Vdevs.JPG]

and the zpool list output for the two configurations was:

[root@freenas ~]# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
freenas-boot 49.8G 648M 49.1G - - 1% 1.00x ONLINE -
xz 31.8T 50.8M 31.7T - 0% 0% 1.00x ONLINE /mnt

[root@freenas ~]# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
freenas-boot 49.8G 647M 49.1G - - 1% 1.00x ONLINE -
x 9.94T 49.6M 9.94T - 0% 0% 1.00x ONLINE /mnt
z 21.9T 1.61M 21.9T - 0% 0% 1.00x ONLINE /mnt

So why am I getting a smaller pool size with your suggested setup? :confused:



And a comment regarding RAM.
If my current RAM is close to the rule of thumb of 1GB of RAM for each TB of data, that means that if your suggested setup later grows with 8TB drives in one vdev, then I will no longer have the recommended amount of RAM :eek:?
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
I am considering that option too, for the future: buying a smaller server with big drives that can fit most of my current FreeNAS data. But that is for the future :)
I thought Corral had been announced as ready. So there is no need at all to go with it. That explains why there is no documentation, if that ever will come.

I have to ask you again about the capacity if I create one big pool as you suggested. Are you sure the size of the pool would be 36TB, and not less than both pools combined?

And a comment regarding RAM.
If my current RAM is close to the rule of thumb of 1GB of RAM for each TB of data, that means that if your suggested setup later grows with 8TB drives in one vdev, then I will no longer have the recommended amount of RAM :eek:?

Corral was released as STABLE 10 days ago, and in the meantime we have arrived at 10.0.2, including major bugfixes.
But as always: the STABLE source tree does not actually imply that the version is production ready.
A lot of additional work will need to go into its maturation to arrive at something as rock stable as FreeNAS 9.10. ;)

As usual I'm using @Bidule0hm 's calculator for the approximation of the volume size.
And if I'm not mistaken, we would have 2 VDEVs with the following characteristics:

[Attachment: upload_2017-3-26_23-16-17.png -- capacity calculator output]


And 2 x 18.19TiB = ~36.4TiB (-20% threshold = ~28TiB -- because you should not fill it more than that) :p
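(For reference, the per-vdev figure is just the raw math: an 8-disk RAIDZ3 vdev has 5 data disks, 5 x 4 TB = 20 TB, and 20 x 10^12 bytes / 2^40 ≈ 18.2 TiB, before ZFS metadata overhead and the free-space reserve.)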

Really don't worry about the RAM. ;)
With 32GB of RAM you will be fine running a volume size of around ~45TiB.
If you really want to go past that, and the performance of the pool matters to you, you will need to get something beefier and upgrade to 64GB.
And yes, I know that the X10SL7 maxes out at 32GB of RAM, I'm using the same board. :rolleyes:
It will be time for me to prepare a move to a Xeon E5 rig. I want at least 128GB of RAM :cool:

Cheers
 

arameen

Contributor
Joined
Sep 4, 2014
Messages
145
The reason I initially thought about upgrading to Corral right away was that it is something I will have to do sooner or later. And now that my complete backup of both pools has taken more than half a month, I would like not to do that again for a long time; not being able to update the pool with new data for 2 weeks while waiting for everything to copy :(
I used TeraCopy to copy all the data to several drives in Windows; it does CRC checksums on reading (copying from FreeNAS) and writing (pasting to the external Windows drive). But this really has taken half a month.
Maybe there is a faster or better way to do this next time I need to? I don't have another FreeNAS machine that I can transfer snapshots to continuously, or at all.
Is there any better way to back up the whole pool than this slow manual transfer of files? :confused:
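(For example, something like sending a snapshot stream to a file on an attached backup disk, if I understand the docs correctly; pool, dataset and path names below are just examples, and restoring requires receiving the stream back into a ZFS pool:)

# create a snapshot and dump it (recursively) as a compressed stream onto a backup disk
zfs snapshot -r tank@backup-2017-03
zfs send -R tank@backup-2017-03 | gzip > /mnt/backupdisk/tank-2017-03.zfs.gz
# restoring later would be roughly: zcat tank-2017-03.zfs.gz | zfs receive -F newpool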

Regarding RAM: actually I wanted from the beginning to go with much more RAM, but it was too much of a cost and not worth it at the time just to have more RAM. But the next machine will probably get 128GB :D

Regarding the size of the pools:
I tried different pool setups in the FreeNAS VM again, this time with 4TB virtual drives, just like my real drives.
What I get confirms what I wrote earlier about the pool sizes. All 3 setups use 16 x 4TB disks in total.
2 pools: pool A raidz3 5 x 4TB disks + pool B raidz3 11 x 4TB disks = 7,3TB + 29,1TB = 36,7TB total usable storage across both pools
2 pools: pool A raidz3 8 x 4TB disks + pool B raidz3 8 x 4TB disks = 16,9TB + 16,9TB = 33,8TB total usable storage across both pools
1 pool: 2 vdevs of 8 x 4TB disks each = 33,7TB total usable storage
These numbers are shown by the FreeNAS GUI.
This means that the setup I am using now, the first option above, is storage-optimal.
I don't want to say the calculator is missing something in its calculations, but I trust FreeNAS to report those numbers correctly.
I don't want to destroy my pool just to confirm this; the numbers in the FreeNAS virtual machine should be correct?! o_O
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
Hey,

the numbers FreeNAS is reporting are pretty close to the numbers you found during your virtual tryouts, so everything is fine. You don't want to start nitpicking over 300GB, right? :p
You should not use more than 28TiB anyway, to avoid performance issues...
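One side note on reading the numbers: zpool list reports the raw vdev size including parity space, so for usable-capacity comparisons the GUI figures or zfs list are the ones to look at; something like (the pool name is just an example):

zfs list -o name,used,avail tank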

That being said, you're far from being in an optimal situation with your current configuration, seeing that in pool A you have 5 disks (losing 3 to redundancy) and in pool B 11 disks (too wide for my standards ...).
If you really want a more resilient and optimal (always quite subjective) configuration, then you should go with a single pool with 2x 8-disk RAIDZ3 VDEVs.
As an alternative you could also go with 3x 6-disk RAIDZ2 VDEVs, but you would need an additional disk.

All a matter of risk considerations :)
In the end, you're the one making a choice here ;)
 

arameen

Contributor
Joined
Sep 4, 2014
Messages
145
Then we could easily say that the calculator is far from perfect for estimating pool sizes. Maybe it's just easier to create a virtual machine and play with virtual drives to estimate pool sizes in the future.

My friend, I am not sure I am following you in the pool discussion :rolleyes:
You suggest creating 2 pools of 8 x 4TB drives each as raidz3, or just one pool with 2 vdevs of 8 x 4TB drives each.
That would mean 33,8TB, as I wrote above, 3TB less storage than the original pools. I am not greedy, but that is a lot of storage when using raidz3. Furthermore, this would mean only partially faster resilvering compared to the original setup with 2 pools: one pool is smaller and hence faster to resilver, while the bigger one is slower. This original setup would be 1 pool of 5 x 4TB disks as raidz3 and 1 pool of 11 disks as raidz3, as I wrote above.
33,8TB with partially faster resilvering vs 36,7TB o_O
As I see it now, the original setup is better. Unless there is more to say about that?
For example, any disadvantages with going with the latter, original, setup that grants me more storage?
Any risk with the 11-disk pool?

Your new suggestion is interesting.
That would mean I buy one more drive, which is OK for me. Then I should create 3 pools, or 3 vdevs, of 6 x 4TB disks in raidz2. I prefer to go with pools rather than vdevs in one pool, minimizing the risk of losing everything if something happens to one vdev.
Issue one: I can not physically fit more than 17 drives, and I don't know of any way to extend the case or connect drives from outside? Like external drives or something else?
Anyway, this would mean 14,8TB x 3 = 44,4TB in total (tested in the FreeNAS virtual machine), which is much more than now and very welcome, as I need much more storage. The only thing is that it feels a bit risky living with raidz2. What if 2 pools/vdevs lose 1 drive each at the same time?

Yes, before making a decision I gather as much info and intel as possible from those who know more :D
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
Ah yes, you're indeed right. It's 3000GB (3TB) and not 300GB, my bad. :p

From a general point of view, your current setup is less balanced from a risk point of view:
Like I said, your 11-disk pool is statistically more exposed to potential issues, because all drives are put under stress to get the resilvering done.

From a pure storage/redundancy ratio point of view, a RAIDZ2 VDEV of 6 disks of course gives you more space.
It's a very good compromise of redundancy, resilience and better rebuild/resilver speed.
In this kind of VDEV you have even less stress put on the drives during a resilver and an even lower probability of hitting a fatal URE (unrecoverable read error) while doing so. That is always good ;)

But like I said earlier only you can assess the situation, measure the risks and decide if you prefer to go with RAIDZ2 or RAIDZ3. :)

Getting back to the actual pool configurations, there is one thing that you maybe forgot in your considerations: the "minimum recommended free space per pool" of 20%.
At no point should you consider getting too close to this threshold if you care about performance, and in general you don't want to reach 90% of space used, because things get painfully (as in almost unusably) slow.
I have been there; it's really no fun, and your files get so fragmented that things really get ugly.
Since this particular matter applies to each pool, you're effectively losing a lot of potential space by going with a multi-pool model vs. multi-VDEV...
Additionally, you also have 2 or 3 times the pool management overhead.

Let me do some quick math:
Current configuration:
For your 5-disk pool: 7.2TiB of space - 20% = 5.7TiB real usable space
For your 11-disk pool: 29TiB of space - 20% = ~23TiB real usable space

- 2 pools of RAIDZ3:
2x 8-disk RAIDZ3 pools: 2x ~18TiB of space per pool minus a total of 2x 20% (2x 3.6TiB = ~7.2TiB) = 2x ~14TiB of usable space (split into 2 pools).

- 3 pools of RAIDZ2:
3x 6-disk RAIDZ2 pools: 3x ~14.5TiB of space per pool minus a total of 3x 20% (3x 2.9TiB = ~8.7TiB) = 3x ~12TiB of usable space (split into 3 pools).

If you go with single pools:
- 1 pool of 2x 8-disk RAIDZ3 VDEVs:
A total of ~36TiB of space - 20% = ~28.8TiB

- 1 pool of 3x 6-disk RAIDZ2 VDEVs:
A total of ~43.5TiB of space - 20% = ~35TiB

Of course it does not change the total amount of space you should not use by much, but it means that each of your pools is smaller (instead of having one bigger pool), and that could bring other issues ...
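If you want to play with the raw numbers yourself, a quick back-of-the-envelope sketch (data disks x drive size, converted to TiB, minus the 20% reserve; it ignores ZFS metadata and padding overhead, which is why FreeNAS reports a bit less):

# rough usable-space estimate, assuming 4TB drives (adjust the disk counts to taste)
awk 'BEGIN {
  tb2tib = 1e12 / 2^40                                                  # TB -> TiB conversion
  printf "1 pool, 2x 8-disk RAIDZ3: %.1f TiB\n", 2*5*4 * tb2tib * 0.8   # 5 data disks per vdev
  printf "1 pool, 3x 6-disk RAIDZ2: %.1f TiB\n", 3*4*4 * tb2tib * 0.8   # 4 data disks per vdev
}'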

Decisions, decisions. I'm sure I have made it even worse for you now :p
 