24 SSD Maximum Write Performance

Janko

Dabbler
Joined
Nov 27, 2018
Messages
31
Hello All,

I've been using FreeNAS for a long time on various projects, but now I have a new "quest".

Our customer wants to move back from AWS to local servers.

We want to build a 4-node Proxmox cluster with Supermicro Twin servers.
Each node has 1TB of RAM, and we need maximum write performance for MariaDB.

At the moment the customer uses Amazon RDS with over 8k IOPS.

85% write and only 15% read.

So I need a fast FreeNAS storage server.

  • Supermicro server 2029U-E1CR4T with dual Intel Xeon Bronze 3104
  • 2x 64GB SATA DOM for the system
  • 128GB ECC RAM
  • 24x Crucial MX500 2TB SSD

Which pool configuration is preferred for this scenario?
Do I need a SLOG here?

Thanks for your Advice!
 

Attachments

  • rds.png

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You haven't given any guidance on requirements for fault tolerance... if you just want raw speed, you can just stripe the SSDs and probably have over a quarter million IOPS... if your network can handle that. Seems like (25x) overkill looking at the chart.

If you value the data and don't want to risk losing an SSD in each node at the same time (I'm assuming you'll be using Ceph to mirror the storage for Proxmox and iSCSI to connect), you'll want to give up half of your storage and performance to mirrored pairs (you can still be unlucky, lose 2 SSDs at the same time in a single mirror, and take down one node). Depending on how you want to mitigate your risk and how much appetite for penalty you have if you're unlucky (more than one SSD failing before you can intervene), you could consider all kinds of vdev splits of the 24 drives... really up to you, but the smaller the total number of vdevs, the slower you will be.
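As a minimal sketch of the mirrored-pair layout (the pool name "tank" and the da0–da23 device names are placeholders for whatever your system actually shows; the echo guard prints the command instead of running it):

```shell
# Build the vdev spec for 12 two-way mirrors out of 24 SSDs.
# Device names da0..da23 are hypothetical - substitute your real disks.
vdevs=""
for i in $(seq 0 2 23); do
  vdevs="$vdevs mirror da$i da$((i+1))"
done
echo zpool create tank $vdevs   # drop 'echo' to actually create the pool
```

Fewer, wider vdevs (e.g. 4x 6-disk RAIDZ2) would raise fault tolerance but cut IOPS, since random-write IOPS scale roughly with vdev count.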

You may see a small benefit from the right (super-fast) SLOG, but that depends on your use of sync writes (don't bother with a SLOG if you're not using them).
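A rough way to check whether sync writes are actually in play before buying a SLOG device (the pool/dataset names and the NVMe device name below are placeholders):

```shell
# See how the dataset handles sync writes:
#   'standard' honors the application's fsync() calls,
#   'always' forces every write to be sync, 'disabled' ignores sync entirely.
zfs get sync tank/db

# Watch per-vdev write load while the database is running:
zpool iostat -v tank 5

# Only if real sync write load shows up, add a fast power-loss-protected
# device as a separate log (device name is hypothetical):
zpool add tank log nvme0n1
```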
 

Janko

Dabbler
Joined
Nov 27, 2018
Messages
31
Hello,

Sorry for reopening the old thread.
In the meantime we changed the 24x Crucial MX500 to 24x Intel D3-S4610 SSDs.
At the same time we upgraded the RAM to 256GB ECC.
Does LACP have any impact on IO performance?
The server has 4x 10Gbit and we use a Netgear XS748T switch.
Which pool setup should I use for moderate fault tolerance but high IOPS?
The database is ~4.5TB at the moment, but it is growing constantly.

Thanks for any Advice!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You should use mirrors for a database setup.

Depending on your tolerance for data loss, this can be two-way mirrors (basic redundancy: the pool can tolerate an SSD failing, but is then non-redundant until the disk is replaced). With 1.9TB SSDs, the raw space is 45.6TB, the pool size is 22.8TB, and if you follow the less-than-50% allocation guideline to keep write performance high, you can use up to around 11TB of the pool.

Around here, the rule for critical storage is that loss of a device should not compromise redundancy on the pool, so you can also use three-way mirrors. The raw space for such a pool would be the same 45.6TB, pool size would be 15.2TB, and less-than-50% would be up to around 7TB.
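The capacity figures above can be re-derived quickly; a small shell sketch of the arithmetic:

```shell
# Capacity math for 24x 1.9 TB SSDs (awk does the floating-point work).
raw=$(awk 'BEGIN { printf "%.1f", 24 * 1.9 }')          # total raw space
twoway=$(awk 'BEGIN { printf "%.1f", 24 * 1.9 / 2 }')   # 12 two-way mirrors
threeway=$(awk 'BEGIN { printf "%.1f", 24 * 1.9 / 3 }') # 8 three-way mirrors
echo "raw=${raw}TB two-way=${twoway}TB three-way=${threeway}TB"
# -> raw=45.6TB two-way=22.8TB three-way=15.2TB
```

Halving those pool sizes again gives the less-than-50% allocation targets (~11TB and ~7TB).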

How much performance you'd lose going over 50% is an interesting question, which depends greatly on your data, database, and how it's accessed. If it is a write-mostly database and you are not removing records, I suspect you could easily go beyond the 50% mark without it becoming an issue.
 

Janko

Dabbler
Joined
Nov 27, 2018
Messages
31
I tried it with stripe+mirror (RAID10) and got about the same performance.

So I installed Ubuntu with an mdadm RAID10 as a test and got the following result:

Code:
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
...
fio-3.1
Starting 10 processes
Jobs: 10 (f=10): [w(10)][36.4%][r=0KiB/s,w=1617MiB/s][r=0,w=414k IOPS][eta 00m:42s]
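For reference, the job line above suggests an fio invocation roughly like the following (the runtime and target path are assumptions; the other options are read straight from the printed job parameters):

```shell
# Reconstructed from the job summary: 4k random writes, psync engine,
# iodepth 1, 10 jobs. Runtime and target are guesses - adjust to taste.
fio --name=test --rw=randwrite --bs=4k --ioengine=psync --iodepth=1 \
    --numjobs=10 --size=1G --runtime=60 --time_based --group_reporting \
    --directory=/mnt/raid10   # hypothetical mount point of the md array
```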
 