Best raid configuration for 12 drives for speed. Other considerations?

Super7800

Dabbler
Joined
Feb 2, 2021
Messages
16
I'm building a storage server with 12x 2TB SAS hard drives. Storage space is not the issue (my application will likely never exceed 1TB); I went with 2TB drives since I found some cheap.

I need speed; the faster the better. I have a dedicated backup server already, and I'm looking to move the programs used on the individual servers to all launch from one storage server. I don't want the system to fail from a single drive failure, but I have no issue with a two-drive failure causing problems.

Looking online, it seems RAID 10 might be my best bet, since it can withstand a one-drive failure but not two, but perhaps there is another, faster option?

Another question I have is what hardware would be good for speed? I plan to use 64GB of RAM (probably DDR2). I assume that will be enough since this won't be doing anything fancy: no applications or plugins installed, just a dedicated server.

Looking online, the HP P410 RAID card seems to be cheap (obviously using it as a "dumb" controller, letting TrueNAS do the RAID), but not the fastest. Would using two of these be a bad idea, or would my system bottleneck elsewhere? I'm thinking I might team two SFP+ 10Gb connections together to the switch.

Any input is appreciated, I'll admit I'm a storage noob. Thanks.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
If speed is your need, then mirrors are going to be your best bet. With 12 drives you can create a pool made up of 6 mirrored pairs. I use this setup in one of my servers (see 'my systems' below). Or if you want to be really safe, use 4 sets of 3-disk mirrors instead; this would allow for 2 disks failing in any mirror set without losing your pool.

The 6 mirrored pairs will give you 12TB of capacity; the 4 three-way mirrors will give you 8TB. Either way, 64GB of RAM should be at least adequate. ZFS loves memory; the more the better.
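A quick sketch of the capacity arithmetic above (Python; the 2TB drive size and 12-drive count are taken from the thread, everything else is just the mirror-vdev math):

```python
# Usable ZFS pool capacity for a pool built entirely of N-way mirror vdevs:
# each mirror vdev stores one drive's worth of unique data, so capacity is
# (number of vdevs) * (capacity of one drive).

DRIVE_TB = 2
TOTAL_DRIVES = 12

def mirror_pool_capacity_tb(mirror_width: int) -> int:
    """Usable TB when all 12 drives are split into N-way mirror vdevs."""
    vdevs = TOTAL_DRIVES // mirror_width
    return vdevs * DRIVE_TB

print(mirror_pool_capacity_tb(2))  # 6 mirrored pairs    -> 12 TB
print(mirror_pool_capacity_tb(3))  # 4 three-way mirrors -> 8 TB
```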

I don't know anything about the HP P410 RAID card you mentioned... but if it's a RAID card, you don't want to use it -- you want a true HBA, see:


Are you planning on using this storage for virtual machines? If so, multi-path iSCSI would probably give you the best performance. I don't know much about it, but there are others here on the forum who do. I use NFS for virtual machines and Samba, of course, for plain ol' file sharing.

Good luck!
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
probably ddr2
If you're using an Intel chip that calls for DDR2, it also has a front side bus, which means two things: (1) it belongs in a museum, not in production; and (2) it's going to be painfully slow. But that wasn't your question. If raw speed is your only objective, stripe all twelve disks. The obvious danger there is that you'll have no redundancy, so any data errors on disk will result in corrupted data. Striped mirrors wouldn't be quite as fast, but give you redundancy.
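A rough back-of-envelope comparison of the two layouts discussed here. The ~150 MB/s per-drive sequential figure is an assumption (typical for 7200 rpm SAS drives); real numbers vary with workload and record size, and ZFS mirror read scaling is approximate:

```python
# Sequential-throughput and redundancy sketch: 12-wide stripe vs 6 mirrored pairs.
PER_DRIVE_MBPS = 150   # assumed per-drive sequential rate, MB/s
DRIVES = 12

# 12-wide stripe: every drive contributes to reads and writes,
# but a single drive failure loses the entire pool.
stripe_read = DRIVES * PER_DRIVE_MBPS          # ~1800 MB/s
stripe_write = DRIVES * PER_DRIVE_MBPS         # ~1800 MB/s
stripe_tolerated_failures = 0

# 6 mirrored pairs: reads can be served from either side of each mirror;
# writes go to both sides, so write throughput is roughly halved.
mirror_read = DRIVES * PER_DRIVE_MBPS          # ~1800 MB/s (best case)
mirror_write = (DRIVES // 2) * PER_DRIVE_MBPS  # ~900 MB/s
mirror_tolerated_failures = 1  # per vdev; losing BOTH drives of one pair loses the pool

print(stripe_read, stripe_write, mirror_read, mirror_write)
```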
 

Super7800

Dabbler
Joined
Feb 2, 2021
Messages
16
Okay. I will start looking for an LSI card that can run in HBA mode. Also, it will be used to store the programs and their files for a bunch of servers, not VMs. So you're saying create 6 mirrored pairs and stripe them?

Perhaps DDR3 or DDR4 (and the accompanying better hardware) would be a better choice? I might lean towards DDR4 if I'm going through the trouble. What does TrueNAS benefit from: single-core performance or multiple cores?

Would using two "RAID" (HBA) cards be faster?
 
Last edited:

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Okay. I will start looking for an LSI card that can run in HBA mode. Also, it will be used to store the programs and their files for a bunch of servers, not VMs. So you're saying create 6 mirrored pairs and stripe them?
Yes, create a single pool with 6 vdevs; each vdev a mirrored pair of drives. Your data will be striped across the 6 mirrored pairs.

Perhaps DDR3 or DDR4 (and the accompanying better hardware) would be a better choice? I might lean towards DDR4 if I'm going through the trouble. What does TrueNAS benefit from: single-core performance or multiple cores?
DDR3 vs DDR4 probably doesn't make that much difference. Multiple cores are good, but for Samba-based shares you'll want CPUs with good single-thread performance, because Samba uses a single thread per process. So in general, CPUs with fewer cores but a faster clock speed will be better than ones with more cores but a slower clock.

Would using two "RAID" (HBA) cards be faster?
If you don't use a backplane with an expander, it will take two cards to connect 12 drives. A typical HBA can connect directly to 8 drives, but when plugged into an expander backplane it can handle dozens. There are HBAs that connect directly to 16 drives, but they're hard to find and usually expensive.

Are your SAS drives 6Gbps or 12Gbps?

If 6Gbps, then you can use an HBA based on the LSI 2008-series chipset. For example, the LSI/Avago/Broadcom 9211-8i is a classic 2008-based HBA; I use three of them in one of my servers.

If 12Gbps, then you'll need an HBA that supports the higher speed, such as an LSI 3008-based card.

Both types are readily available for decent prices on eBay.

Here's an exhaustive list of HBA cards on the servethehome.com forum:


My honest advice is: don't get in a hurry; study HBAs and TrueNAS and ZFS and L2ARC devices -- you'll probably want an L2ARC device, too. Make your design decisions only when you're fully confident you understand why you're making them. You'll save yourself grief and money, and end up with a great server that will give years of reliable service.

Good luck!
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
A simple 8-port HBA (two connectors, four lanes each) will directly connect to 8 disks. You can use a SAS expander (a bit like an Ethernet switch, but for SAS), or a second card, to connect more; but if your motherboard already has 4 SATA ports (preferably SATA 3, i.e. 6Gbps), then you can probably just use those 4 ports plus the 8 from the HBA.

If you haven't selected a motherboard yet, you could consider one with a built in LSI SAS HBA.

If speed is important, then you would also want 10gbps ethernet.

Meanwhile, if you are using a lot of sync writes (possible with SMB, but more common with NFS/iSCSI shares, where VM disks are served as block storage), then you may want to consider a fast SLOG to absorb those sync writes. The SLOG should be either a high-end NVMe flash disk with power-loss protection or, preferably, an Optane-based device. It does not have to be big: just 5 seconds of your maximum transfer rate (i.e. 20GB is plenty).
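The "5 seconds of maximum transfer rate" rule of thumb, worked through for the dual-10Gb setup discussed in this thread (the two-link upper bound is an assumption; a single 10Gb link would need even less):

```python
# SLOG sizing: the SLOG only ever holds a few transaction groups' worth of
# in-flight sync writes, so size it to ~5 seconds of maximum ingest rate.

LINK_GBPS = 10   # one SFP+ link, gigabits per second
LINKS = 2        # assumed dual-link upper bound on ingest
SECONDS = 5      # roughly two ZFS transaction groups

max_ingest_gb_per_s = LINKS * LINK_GBPS / 8   # Gb/s -> GB/s = 2.5 GB/s
slog_needed_gb = max_ingest_gb_per_s * SECONDS

print(slog_needed_gb)  # 12.5 GB, so a 20 GB SLOG partition is ample
```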

Async writes and reads will be cached in the ARC, which uses the available RAM. You can also consider adding a fast L2ARC SSD to act as a second-level read cache. The L2ARC's performance should exceed the disk performance of your array; otherwise it will actually be a bottleneck. It does not need PLP, and it can be the same device you use for SLOG if you don't mind compromising a little on maximum SLOG/L2ARC performance.

So, if it were me, I'd get an LSI 8i HBA and an Optane P4801X 100GB M.2/U.2 disk for SLOG/L2ARC (split into partitions), and use 4 motherboard SATA ports. But I wouldn't be using SAS disks :)

Boot disk could be anything really.

I'd consider investigating if you really do in fact want to use iSCSI to host block storage for your servers, rather than hosting the programs.
 

Super7800

Dabbler
Joined
Feb 2, 2021
Messages
16
Thanks. It would appear that I misunderstood the "RAID" card specs. Yes, one will work fine here I think; my backplane appears to be the expander type, with two mini-SAS connectors. The drives are 6Gbps HUS723020ALS640 2TB SAS drives.

I have not purchased any hardware except the 12 drives, chassis, and PSU (500 watt). I do plan to have dual 10GbE links to a switch. For the boot disk I'm planning a mirrored pair of two drives, probably cheap SSDs (so I can just throw them in the case without proper mounting).

I will look into what you said in your post in more detail, Stux. Thanks.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
A 500 W PSU is likely not good enough for 12 drives when they start up. There are various cases in the forum where people spent enormous amounts of time troubleshooting all sorts of seemingly unrelated issues that were finally solved by just using a bigger PSU. Please read https://www.truenas.com/community/resources/proper-power-supply-sizing-guidance.39/
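To see why start-up is the problem, here is a rough spin-up estimate. The per-drive surge figures are assumptions (typical for 3.5" 7200 rpm drives; check the actual HUS723020ALS640 datasheet), and they cover the drives only, before CPU, motherboard, HBA, and fans:

```python
# Spin-up surge estimate: unless staggered spin-up is used, the 12 V rail
# sees every drive's spin-up current at the same moment.

SPINUP_AMPS_12V = 2.0   # assumed per-drive surge current on the 12 V rail
IDLE_WATTS_5V = 5       # assumed per-drive 5 V electronics load
DRIVES = 12

surge_watts = DRIVES * (SPINUP_AMPS_12V * 12 + IDLE_WATTS_5V)
print(surge_watts)  # 348.0 W for the drives alone
```

With roughly 350 W going to the drives at power-on, a 500 W unit leaves very little headroom for the rest of the system.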

As to network speed, I am a bit skeptical that you will need more than a single 10 Gbps link. And even that single link might be hard to saturate (SMB, NFS, iSCSI?). It would certainly be impossible with a system as old as DDR2.

As for DDR3 vs DDR4: my system was built in October 2020 and the CPU is almost idle (with only Syncthing running in a jail).

When you talk about speed, it is not entirely clear whether you mean sequential transfer rates, pure IOPS, or a mixture. More information on your planned workload would help here.
 

Super7800

Dabbler
Joined
Feb 2, 2021
Messages
16
I was planning on using a Zippy EMACS R2W-6500P 500 watt PSU since I already had it, but you're (probably most definitely) right about it being undersized. I will start searching for a higher-capacity redundant power supply.

When the servers run the program, there is a surge of data being loaded into RAM. After that, it simply needs fast response times, not necessarily large file writes and reads. At this time there are 11 servers (physical machines), soon to be more.

I will have to look more into whether dual-link 10GbE would be beneficial. The build will be either DDR3- or DDR4-based. Each server would be linked to the switch using 2x 1GbE links.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Historically, it is my understanding that LAGG (i.e. link aggregation) connections have not really been beneficial for FreeNAS/TrueNAS file-sharing scenarios.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Any chance you could be more specific about the workload? You are using a number of relative terms (surge of data, fast response times, not necessarily large file writes and reads). Each of those means something completely different to different people. And by different I do not mean a difference of 200% or 300%; it might well be a factor of 10, 100, or even more.

Lastly, are you aware that LAGG does not double your speed on a single connection? So depending on the details, there is quite a chance that using dual connections will not gain you anything (for NAS and connected servers alike).
 

Super7800

Dabbler
Joined
Feb 2, 2021
Messages
16
Ok, I will admit that I have very little knowledge on the subject of NAS, so let's start from the beginning with my application, and y'all (who have been amazingly helpful, thanks) can advise me on what I need.

I want to run Minecraft servers (a Java-based game) from it. I have 11 physical machines, each with 1 to 2 instances of the Minecraft server. Each instance is between 5GB and 100GB in file size and allocated between 64 and 128GB of RAM. I'm looking to do this as storing the files locally is becoming an issue with the increasing number of servers. I want all the files to reside on, and be launched from, the storage server. Most player data is stored on a dedicated MySQL server. I make money from these servers, so reliability is a concern (as stated before, I have a dedicated backup server, so this only has to withstand one drive failure).

Given what I have said above, what would be my best hardware solution? I have currently purchased 12x 2TB drives at $10 apiece and a 2U case, so I am very open to suggestions. Thanks.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
You would need to translate this into bandwidth and IOPS (or find someone who can do it for you). My gut feeling says your best bet might be to hire someone to do this for you.
 
Top