What do I buy (or what do I do) to get NVMe levels of performance in TrueNAS?
What kind of performance can I expect from each drive? 200MB/s per SSD doesn't feel like good value.
I want to see over 1GB/s per drive. Maybe 700 or 800 MB/s is fine in aggregate, if IOPS stays high, but lower than that is just not what I was expecting.
Granted, if it were actually pegging the CPU at 100%, I'd understand. But this is not that.
I purchased a Dell R7415 (EPYC), sold with 24 slots wired for NVMe, thinking it would support at least 16 NVMe drives no problem, and should be fast, right?
The previous owner said he was getting about 3GB/s with much slower SSDs than I was going to use. But alas, whether I use 4 or 8 NVMe drives (each rated for at least 2GB/s, up to 3.2GB/s), the machine just doesn't get out of the group the 2-3GB/s that each drive individually should deliver.
Hell, in TrueNAS CORE or SCALE I'm getting 165MB/s per drive in a 4-drive RAIDZ1 pool, and even a single drive is no better. That is literally the same performance I get with 8 spinning drives.
When I tested with 6 SATA SSDs (Samsung 870 EVO), I got about the same: 500MB/s write and 600MB/s read.
This just does NOT warrant the cost of NVMe gear!
Now, I do see potential issues, such as the R7415 only having 32 PCIe lanes connected to the 24 NVMe slots.
But I get this exact same performance whether I put 2 drives in bank 0 (which has 16 lanes) and 2 in bank 1 (another 16), or all 4 in either bank.
Yes, 16 lanes is enough for 4 NVMe drives. But I'm looking for explanations as to why they're delivering x1-level performance.
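For what it's worth, one way to check whether the slots actually trained at x1 (rather than just performing like it) is to look at the negotiated link status in `lspci -vv`. A minimal sketch, assuming a Linux shell on SCALE; the sample line is hypothetical output, not from my box:

```shell
# Extract the negotiated PCIe width from lspci -vv output.
# LnkSta shows what the slot actually trained at; LnkCap shows the maximum.
link_width() {
  grep -o 'LnkSta:.*Width x[0-9]*' | grep -o 'Width x[0-9]*'
}

# Usage on a real box: lspci -vv -s <nvme-pci-address> | link_width
# Example against a captured (hypothetical) LnkSta line:
echo 'LnkSta: Speed 8GT/s (ok), Width x4 (ok)' | link_width   # prints: Width x4
```

If any drive reports `Width x1` there, the lane allocation (not ZFS) is the first suspect.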
And even the SATA SSDs, which are connected to an HBA330, get a much smaller percentage of their available performance than spinning drives do.
And THAT is the real question:
Why do I get such a tiny fraction of each drive's available performance compared to spinning arrays? And is this just the way it is?
As in, is the actual benefit of NVMe drives or SATA SSDs simply that they have an excellent performance floor, one they hold no matter how small the files are and thus how high the IOPS gets?
Or is this an issue related to EPYC (AMD)?
Or is it an issue with the topology (how the lanes are allocated)?
I ask because I'm open to buying an R7525 (which has two EPYC CPUs; apparently only once you have 256 lanes will Dell waste a mere 130 or so and actually wire up the 96 lanes the NVMe slots need).
And if the issue is that AMD just doesn't give great throughput per drive, would a Dell R750 be better?
Because as TERRIBLE as 165MB/s is, when I connect 8 drives instead of 4, and faster drives at that (3.2GB/s vs 2.2GB/s), the per-drive performance DROPS to about 87MB/s.
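Putting my own numbers side by side is what makes me suspect a fixed ceiling somewhere upstream rather than a per-drive limit, since the aggregate barely moves when the drive count doubles:

```shell
# Aggregate pool throughput from my measured per-drive rates:
echo "4 drives: $(( 4 * 165 )) MB/s total"   # 660 MB/s
echo "8 drives: $(( 8 * 87 )) MB/s total"    # 696 MB/s
```

Whatever is capping the pool at roughly 700MB/s just gets divided across however many drives I add.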
And I cannot use a different NVMe controller (nor should I have to, really), because there's no way I could wire it to the backplane. This is what's available.
What's really pathetic? Using cheap consumer gear I already have (an i7-8700K with a HighPoint SSD7120), I've gotten 9GB/s. Obviously that's local performance, but I tested everything locally in this case as well, and it's literally identical. I installed Ubuntu and benchmarked individual drives, and also built a software RAID-5 array with 3 drives, 4 drives, etc. As a RAID array it was only 100MB/s faster than inside ZFS. BUT in Ubuntu a single NVMe drive got the full 3GB/s, whereas in TrueNAS SCALE:
- A single NVMe drive: 520MB/s write
- A mirror of 2 NVMe drives: 555MB/s write
- 8 drives, at best: a WHOLE 700MB/s write / 800MB/s read
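In case the method matters, this is roughly the kind of raw sequential-write sanity check I mean, sketched with GNU dd on SCALE (the dataset path is hypothetical; incompressible data matters, because compression on the dataset would inflate the numbers):

```shell
# Sequential-write sanity check with dd. POOL_PATH is a hypothetical
# dataset mountpoint (e.g. POOL_PATH=/mnt/tank); falls back to /tmp.
testfile="${POOL_PATH:-/tmp}/seqwrite-test.bin"

# /dev/urandom gives incompressible data; conv=fdatasync makes dd flush
# to disk before reporting throughput, so the page cache can't flatter it.
dd if=/dev/urandom of="$testfile" bs=1M count=256 conv=fdatasync

rm -f "$testfile"
```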
And of course, I bet it would hold that even with tiny files (high IOPS) and would be very consistent (unlike spinning arrays).
But is that what you guys expect for the cost of NVMe? These drives are THOUSANDS if you buy them from Dell.
Yet we're supposed to be satisfied with spinning-drive performance?
If I'm doing something wrong (aside from my "unreasonable expectations"), please let me know what I should do.
Or whether maybe one of the other machines I mentioned would help.
Thanks