SOLVED Best HDD config for general purpose FreeNAS build (I'm desperate)

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
So you're saying that with such a mixed workload and many sequential reads/writes, the "scheduling" of the pool leads to mostly random access? Which means I have to prioritize random throughput and IOPS? Which is what a mirrored pool does?

Exactly.

What are your thoughts on using an L2ARC? Would there even be any difference between RAIDZ2 and mirror with such a device?

RAM will serve you better. You plan for up to 128G of RAM. That should be enough and I doubt an L2ARC will help.

It is mainly me using the NAS. There won't ever be more than 2-4 sequential reads or writes at any given time.

And this is why I doubt an L2ARC will help... Being a single user, all of the 128G of cache in RAM will be yours. If you play again and again with the same stuff, you will not even fill 128G of RAM. If you do load that much data at once, you are reaching over such a large volume that your system will always have to drop down to the pool and disks to answer you.

Should there be 32 users, each would have only 4G of RAM on average. In that case, filling the 128G RAM cache is much easier, and so extending that cache with an L2ARC may be worthwhile.
 

steveroch-rs

Dabbler
Joined
Feb 26, 2020
Messages
36
In my opinion, a consumer SSD works fine for L2ARC. Make sure you get one that has decent performance and endurance. It will get a lot more use than as a desktop OS drive.

I don't think there would be an issue using Optane with that CPU, other than it probably wouldn't work as a boot drive. I personally like the price point of the Xeon v2 servers. They have great performance for the price, and power usage isn't bad.

The S3700 will have twice the latency, half the IOPS, and a third the sequential write speed of the P3700. For a SLOG drive, latency is king. Performance isn't cheap.
Here in Germany there's this one reseller that offers ProLiant Gen8 servers complete with dual CPU and redundant PSU, and the 64GB RAM upgrade is just ~$130. It's nearly impossible to find a similar offer anywhere else. Building something like it myself would cost 3-4 times as much.

Thanks for pointing that out. I have thought about getting a 900P as SLOG even though it is damn expensive.

Could you maybe clarify your previous point about VM start times a little more? You said your VMs take 15-30 min to be up and running again after a FreeNAS reboot, and that you mitigate that by having your VMs run off a second SSD pool, is that right? Is that a better option than having a SLOG and L2ARC accelerate the HDD pool?
 

steveroch-rs

Dabbler
Joined
Feb 26, 2020
Messages
36
And this is why I doubt an L2ARC will help... Being a single user, all of the 128G of cache in RAM will be yours. If you play again and again with the same stuff, you will not even fill 128G of RAM. If you do load that much data at once, you are reaching over such a large volume that your system will always have to drop down to the pool and disks to answer you.

Should there be 32 users, each would have only 4G of RAM on average. In that case, filling the 128G RAM cache is much easier, and so extending that cache with an L2ARC may be worthwhile.
Streaming movies and music would not load an L2ARC very well either, would it?
Do VMs count as "users"?

You may have read that I plan to host all of my databases from the FreeNAS box as well. I think their size will exceed my amount of RAM at some point. Is this a case where an L2ARC would be beneficial?
I have also thought about putting my game libraries (Steam, Origin, etc.) on an iSCSI target on the NAS, but I'm not sure about that either.

Addition: I am a little wary of the mirrored approach. I know it offers very good performance using just HDDs, but I am scared that the 2 drives making up 1 vdev fail close to each other and I lose my pool completely... I would mitigate this by not using 2 disks bought simultaneously in a vdev.
What are your thoughts about 5x 5-drive RAIDZ2s? That should offer decent performance and redundancy.
VM and database data is accessed pretty frequently, I guess, which is why I could trick it with SLOG/L2ARC? Or am I missing something?

I only have the option to do a 12-bay 3.5" server or a 25-bay 2.5" one. IronWolfs don't exist as 2.5", and neither do Reds. And Exos drives are out because of their price. I kind of really want to do the 2x 6-drive RAIDZ2...




Another question, completely off topic: what would you think is sufficient CPU-wise for my use case? I can get my server with dual Xeon E5-2450L 8C/16T 1.8GHz CPUs. I've read that SMB prefers faster single-core performance?
 

mouseskowitz

Dabbler
Joined
Jul 25, 2017
Messages
36
Here in Germany there's this one reseller that offers ProLiant Gen8 servers complete with dual CPU and redundant PSU, and the 64GB RAM upgrade is just ~$130. It's nearly impossible to find a similar offer anywhere else. Building something like it myself would cost 3-4 times as much.

Could you maybe clarify your previous point about VM start times a little more? You said your VMs take 15-30 min to be up and running again after a FreeNAS reboot, and that you mitigate that by having your VMs run off a second SSD pool, is that right? Is that a better option than having a SLOG and L2ARC accelerate the HDD pool?

Another question, completely off topic: what would you think is sufficient CPU-wise for my use case? I can get my server with dual Xeon E5-2450L 8C/16T 1.8GHz CPUs. I've read that SMB prefers faster single-core performance?
That sounds like a good deal. I've heard it's hard to find good used servers in the EU market. I would personally start with the 64GB and see how well that's working for you. I'm running 32GB and it seems to be enough.

Yes, I have found the SSD mirror to perform much better for VMs than an HDD pool with SLOG and L2ARC when hitting it with high IO bursts. I don't have a SLOG or L2ARC on either pool currently. For my use case, and yours sounds fairly similar, the tiered-pools approach seems to perform better.

Those dual CPUs should be more than enough. If the PassMark score is anything to go by, the E5-2450L slightly outperforms the E5-2609 v4, and I'm using just one of those.
 

steveroch-rs

Dabbler
Joined
Feb 26, 2020
Messages
36
That sounds like a good deal. I've heard it's hard to find good used servers in the EU market. I would personally start with the 64GB and see how well that's working for you. I'm running 32GB and it seems to be enough.

Yes, I have found the SSD mirror to perform much better for VMs than an HDD pool with SLOG and L2ARC when hitting it with high IO bursts. I don't have a SLOG or L2ARC on either pool currently. For my use case, and yours sounds fairly similar, the tiered-pools approach seems to perform better.

Those dual CPUs should be more than enough. If the PassMark score is anything to go by, the E5-2450L slightly outperforms the E5-2609 v4, and I'm using just one of those.
So running my databases off SSDs would be better as well, I guess? I really don't have high IO bursts from my VMs. They are just running tiny servers that occasionally store some data on disk. Most of their operations happen in the host's RAM anyway...
Why do you add a third tier? I could just use NVMe as my second tier right away, I guess?

Man I'm so unsure about this... I want to do daily snapshots of my VMs as well. And I do not have very high disk IO on my VMs either. Most of them are just gameservers.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Addition: I am a little wary of the mirrored approach. I know it offers very good performance using just HDDs, but I am scared that the 2 drives making up 1 vdev fail close to each other and I lose my pool completely...

So that means you do not have backups... In that case, know that there is no way for a single FreeNAS server to be bulletproof. In every case, everything holding data that you care about needs backups. A single fire will destroy all your RAID-Z2 and RAID-Z3 vdevs at once. A single error handling the pool can corrupt it completely. A single intrusion into the system can destroy everything. There are many single incidents that would destroy everything.

So, because you seem to care about the data that will go on this NAS, start planning your backup strategy right now, before going any further. See the 3-copies rule in my signature about backups...
 

mouseskowitz

Dabbler
Joined
Jul 25, 2017
Messages
36
So running my databases off SSDs would be better as well, I guess? I really don't have high IO bursts from my VMs. They are just running tiny servers that occasionally store some data on disk. Most of their operations happen in the host's RAM anyway...
Why do you add a third tier? I could just use NVMe as my second tier right away, I guess?
I'm looking at the third tier because I already have two. If I was starting today, I'd skip the SSD and go with NVMe.
 

steveroch-rs

Dabbler
Joined
Feb 26, 2020
Messages
36
So that means you do not have backups... In that case, know that there is no way for a single FreeNAS server to be bulletproof. In every case, everything holding data that you care about needs backups. A single fire will destroy all your RAID-Z2 and RAID-Z3 vdevs at once. A single error handling the pool can corrupt it completely. A single intrusion into the system can destroy everything. There are many single incidents that would destroy everything.

So, because you seem to care about the data that will go on this NAS, start planning your backup strategy right now, before going any further. See the 3-copies rule in my signature about backups...
I read your backup strategy a while back, after your first answer, and I am very impressed.

I would use my MyCloud at my grandmother's house as a backup, but only for the most important data. In case of a fire I really don't mind losing those movies...

I think the 25-bay 2.5" server is out of consideration because I could only fit it with 1TB drives, and tbh spending $1800+ for a usable capacity of 11TB is way too expensive for me.

I really want to get this ProLiant DL380e Gen8 because it is such a good deal and offers everything a good FreeNAS box should have for $430.
Know that I have a constrained budget of ~$2300 for the entire build.

Can you help me get the most out of those 12 3.5" bays?
I am open to add a second pool consisting of NVMe SSDs for database and VM storage just as mouseskowitz did.
Would you prefer 6x 2-way mirror or 2x 6-drive RAIDZ2 in such a scenario?

I hope you can understand that I can't afford those fancy 4U Supermicro chassis and mobos, even used. I would if I could, but for now I'm limited by budget.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hi again Steve.

Can you help me get the most out of those 12 3.5" bays?

I would say 4 options here. I would recommend, in order:
6x 2-drive mirrors. Most IOPS for DB, VM and everything else. Very good redundancy. For extra safety, keep a cold spare.
2x 6-drive RAID-Z2. At least 2 vdevs, so twice as fast as a single one. Also very robust.
1x 12-drive RAID-Z3. A little more space and still very good redundancy. Poor IOPS performance.
1x 12-drive RAID-Z2. Another small plus in usable size; redundancy is still acceptable, though a little low, and IOPS performance is still poor.

L2ARC and SLOG will not compensate for the low IOPS you will have. Also, with only 12 drives, I would not do 2 pools. Should you really go for 2 pools, then it would mean 3x 2-drive mirrors for DB, VM, etc., and 1x RAID-Z2 for mass storage.

These, in my opinion, are the best options.
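For a rough feel of the space side of that trade-off, the usable capacity of the four layouts can be sketched with a little shell arithmetic. The 4TB drive size is just an assumed example, and this ignores ZFS metadata overhead and the usual ~20% free-space headroom:

```shell
#!/bin/sh
# Usable-capacity sketch for 12 bays; 4TB per drive is an assumed example.
DRIVES=12; TB=4

MIRRORS=$(( DRIVES / 2 * TB ))     # 6x 2-way mirror: half of raw
Z2X2=$(( 2 * (6 - 2) * TB ))       # 2x 6-drive RAID-Z2: 4 data disks per vdev
Z3X1=$(( (DRIVES - 3) * TB ))      # 1x 12-drive RAID-Z3: 3 parity disks
Z2X1=$(( (DRIVES - 2) * TB ))      # 1x 12-drive RAID-Z2: 2 parity disks

echo "mirrors=${MIRRORS}TB  2xZ2=${Z2X2}TB  Z3=${Z3X1}TB  Z2=${Z2X1}TB"
```

With these example drives, mirrors give 24TB against 32TB for the 2x RAID-Z2 layout, i.e. you give up a quarter of the space in exchange for the extra IOPS.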
 

steveroch-rs

Dabbler
Joined
Feb 26, 2020
Messages
36
Hi again Steve.



I would say 4 options here. I would recommend, in order:
6x 2-drive mirrors. Most IOPS for DB, VM and everything else. Very good redundancy. For extra safety, keep a cold spare.
2x 6-drive RAID-Z2. At least 2 vdevs, so twice as fast as a single one. Also very robust.
1x 12-drive RAID-Z3. A little more space and still very good redundancy. Poor IOPS performance.
1x 12-drive RAID-Z2. Another small plus in usable size; redundancy is still acceptable, though a little low, and IOPS performance is still poor.

L2ARC and SLOG will not compensate for the low IOPS you will have. Also, with only 12 drives, I would not do 2 pools. Should you really go for 2 pools, then it would mean 3x 2-drive mirrors for DB, VM, etc., and 1x RAID-Z2 for mass storage.

These, in my opinion, are the best options.
You convinced me of the mirrors. They have the best performance and are the easiest to upgrade. Sadly it makes my heart (and wallet) bleed to see that in the end only 38% of the raw capacity will be usable (after 20% free space etc.).
I was a little scared at first because if 1 drive in a vdev fails, the pool is at risk from the partner drive failing too, but I read that resilvering is much faster in mirrors than in RAIDZ, not to mention less taxing on the drives.
But I will read a little more about the exact performance differences between RAIDZ2 and 2-way mirrors.
Now all that's left in my head is how to start populating the bays... because I don't want to buy a batch of drives, put them in, and have them all fail at the same time.

Should I buy drives from different manufacturers in pairs per vdev?
Do you have any thoughts about HPE Enterprise drives? The shop I'm buying the ProLiant from offers very cheap upgrades with these, including the SmartCarriers. I'm tempted, since each SmartCarrier costs $35 on its own, but I don't know if that's a wise choice. I think those drives have 0h of operation on them, but I'll have to check. What do you think?
 

steveroch-rs

Dabbler
Joined
Feb 26, 2020
Messages
36
I'm looking at the third tier because I already have two. If I was starting today, I'd skip the SSD and go with NVMe.
Okay. Thank you very much for taking the time to look into my case.
Heracles convinced me that mirrors are the superior option considering performance and ease of upgrading.
I'll start with 2 or 3 mirrors and then upgrade as I go. If performance is too bad I'll consider tiered storage with SSDs.
However, I am still a little unsure why a SLOG and L2ARC are considered so bad here, since a SLOG should boost random sync writes and an L2ARC should boost random reads once caching has taken place. Well, I think that's a topic for another day.
Again thank you and stay healthy!
 

mouseskowitz

Dabbler
Joined
Jul 25, 2017
Messages
36
Should I buy drives from different manufacturers in pairs per vdev?
The recommended way to do it is to buy the same drives from two different retailers. That increases the odds that they're from different batches. If you're doing mirrors, pair a drive from retailer A with one from retailer B.

Starting with a HDD pool and then adding something faster if needed is a good idea. If it's fast enough for your use case, that's more of the budget to spend on other things. I think the HDDs will be good enough until you upgrade to 10G networking as the 1G network will be the bottleneck.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
The recommended way to do it is to buy the same drives from two different retailers. That increases the odds that they're from different batches. If you're doing mirrors, pair a drive from retailer A with one from retailer B.

Starting with a HDD pool and then adding something faster if needed is a good idea. If it's fast enough for your use case, that's more of the budget to spend on other things. I think the HDDs will be good enough until you upgrade to 10G networking as the 1G network will be the bottleneck.
Nonsense, just buy all your drives wherever it's easy and cheap. The most important part is the burn in and testing.
 

steveroch-rs

Dabbler
Joined
Feb 26, 2020
Messages
36
Nonsense, just buy all your drives wherever it's easy and cheap. The most important part is the burn in and testing.
Yes, I will do that extensively!
Can you tell me when I should return a drive?
I will look up the correct commands to test them, but I do not know what to look for afterwards.

Also, what are your opinions on shucking?
Is it true that it is mainly a lottery and you do not have a warranty on them? I'm mainly looking at WD Elements drives.
I am very afraid of having bad luck and ending up with a bunch of SMR drives I cannot return...
Honestly, manufacturers should be sued for not clarifying this in their product descriptions...
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Also, what are your opinions on shucking?
Is it true that it is mainly a lottery and you do not have a warranty on them? I'm mainly looking at WD Elements drives.
I am very afraid of having bad luck and ending up with a bunch of SMR drives I cannot return...
Honestly, manufacturers should be sued for not clarifying this in their product descriptions...

Not really a lottery.
- Shuck only 8TB and above, as WD has no SMR drives there. 3.5" only, as those are the ones with an actual SATA port.
- Before you shuck, run CrystalDiskInfo on the drive, make sure what's in there is something you want.
- Shuck exceedingly carefully, using a credit card or similar flexible prying tool. See YouTube videos.
- Burn that drive in well. https://www.ixsystems.com/community...for-freenas-scripts-including-disk-burnin.28/
- If there are any errors at all, return the drive to its enclosure and RMA it
- Yes, no warranty. WD Elements is only 1 year to begin with, and drive failure rates go up after year 3, so anything but a Pro drive is likely out of warranty by the time it fails anyway. The exceptions are drives that fail early in life, and that's what burn-in is there for.
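On a FreeNAS box the CrystalDiskInfo step can often be approximated from the command line: smartctl can frequently talk through the USB bridge with `-d sat`, though not every enclosure supports it, and `/dev/da1` is only a placeholder device name. The tiny helper below just encodes the "shuck only 8TB and above" rule from this post as an illustrative gate:

```shell
#!/bin/sh
# Identify the drive inside the enclosure before shucking (bridge permitting):
#   smartctl -d sat -i /dev/da1 | grep -E 'Model|Rotation Rate|Capacity'

# Illustrative gate based on the rule above: WD used no SMR at 8TB and up.
worth_shucking() {   # usage: worth_shucking <capacity_in_TB>
    if [ "$1" -ge 8 ]; then
        echo "ok to shuck"
    else
        echo "verify CMR first"
    fi
}

worth_shucking 8    # -> ok to shuck
worth_shucking 4    # -> verify CMR first
```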
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Yes I will do that extensively!
Can you tell me when I should be returning a drive?
I will look up the correct commands to test them but I do not know what to look for then.

Also, what are your opinions on shucking?
Is it true that it is mainly lottery and you do not have warranty on them? Mainly looking at WD Elements drives.
I am very afraid to have bad luck and end up with a bunch of SMR drives I can not return...
Honestly manufacturers should be sued for not clarifying this in their product descriptions...

Here is how you do burn-in: basically, run badblocks plus SMART long/short tests. Depending on the drive size this can take a couple of days for a single disk, so you should do all disks at once to make it faster.

As stated above, you return them if there are any errors.

Shucking is fine; I have 8x 8TB drives out of Easystores I shucked. Not sure about Elements, that's news to me. I still have all the enclosures for warranty claims if needed, but I won't need them. If they make it through testing, the drives will last years. Most drives fail fast if they are going to fail; that's why intense early testing is important.
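A sketch of that procedure, with the device name `/dev/da0` as a placeholder and a tiny helper encoding the "return on any errors" rule via the SMART attributes people usually gate on (note: badblocks -w is destructive, so only run it before the drive holds any data):

```shell
#!/bin/sh
# Burn-in sketch for one disk. /dev/da0 is a placeholder; adjust per disk.
# WARNING: badblocks -w overwrites the ENTIRE disk.
#
#   smartctl -t short /dev/da0       # quick self-test (~2 min)
#   smartctl -t long /dev/da0        # long self-test (many hours)
#   badblocks -b 4096 -ws /dev/da0   # 4-pass write/read test (days on big drives)
#   smartctl -A /dev/da0             # inspect attributes afterwards

# Rule of thumb: RMA on any nonzero raw value in these three attributes.
verdict() {   # usage: verdict <Reallocated_Sector_Ct> <Current_Pending_Sector> <Offline_Uncorrectable>
    if [ "$1" -ne 0 ] || [ "$2" -ne 0 ] || [ "$3" -ne 0 ]; then
        echo "RETURN"
    else
        echo "KEEP"
    fi
}

verdict 0 0 0    # -> KEEP
verdict 5 0 0    # -> RETURN
```

On a live system you would feed the helper from `smartctl -A` output; the three attribute names above are the usual suspects, but also return a drive if any SMART self-test itself logs an error.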
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Not sure about Elements, that's news to me

I got HGST Ultrastar HE10 out of 8TB Elements, with firmware that limits them to 8TB and 5400 rpm. Helium drives. No complaints. That was late 2018, not sure what's in those now. Hence CrystalDiskInfo first.
 