Dell PowerEdge R7415
AMD Epyc 7351P (1st-Gen)
16c 2.4GHz | 2.9GHz 64M Cache
256GB DDR4-2400 ECC
24x SFF NVMe slots
4x 7.68TB Micron 7300 Pro NVMe x4
8x 7.68TB Micron 9300 Pro NVMe x4 (they perform well, but I've limited them to testing only because the fans ramp up — these drives weren't originally sold with the R7415)
Both synthetic benchmarks and plain read/write tests show the same performance.
All drives were tested individually under both Windows and Ubuntu, getting:
2GB/s - Write - Micron 7300 Pro NVMe x4
3GB/s - Read - Micron 7300 Pro NVMe x4
3GB/s - Write - Micron 9300 Pro NVMe x4
3GB/s - Read - Micron 9300 Pro NVMe x4
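For context, those single-drive numbers came from large sequential transfers. A sketch of the kind of fio run (under Ubuntu) that reproduces that style of test — the device name here is an example, not the actual layout on my box:

```shell
# Sequential 1 MiB reads against a raw NVMe device; read-only, non-destructive.
# /dev/nvme0n1 is an example name -- check `lsblk` before pointing fio at anything.
fio --name=seqread --filename=/dev/nvme0n1 --rw=read --bs=1M \
    --iodepth=32 --direct=1 --ioengine=libaio \
    --runtime=60 --time_based --group_reporting

# Sequential writes DESTROY data -- only run against a scratch drive.
fio --name=seqwrite --filename=/dev/nvme0n1 --rw=write --bs=1M \
    --iodepth=32 --direct=1 --ioengine=libaio \
    --runtime=60 --time_based --group_reporting
```

`--direct=1` bypasses the page cache so the numbers reflect the drive, not RAM.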
A RAIDz1 of either 3 or 4 of these SSDs, under both TrueNAS CORE (TNC) and TrueNAS SCALE (TNS), performs worse than a single drive.
I get approximately the same performance from 1x 7300 Pro under TNC or TNS as I get from a pool of 3 or 4 of them.
I get approximately the same performance from 8x 9300 Pro as from 4x 7300 Pro, in both TNC and TNS.
I had originally thought this was a TrueNAS / ZFS issue ... but then I tested software RAID under Ubuntu with 3x 7300 Pro ...
and got approximately the same performance as in TNC / TNS, plus an extra ~100 MB/s read and write (likely what ZFS spends on checksumming).
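I didn't spell out the software-RAID tooling above; assuming mdadm (the usual Linux choice), a minimal sketch of that comparison setup looks like this (device names are hypothetical, and `--create` wipes the listed drives):

```shell
# Hypothetical device names; mdadm --create destroys data on the listed drives.
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/bench

# Same style of large sequential test as on the single drives:
fio --name=seqwrite --directory=/mnt/bench --rw=write --bs=1M \
    --size=10G --direct=1 --ioengine=libaio --group_reporting
```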
When I tested a RAIDz1 of 4 NVMe drives, ZFS's I/O performance reporting showed ~125 MB/s per drive ... awesome, eh?
When I tested a RAIDz2 of 8 NVMe drives, it showed ~87 MB/s per drive ... awesome, eh?
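For anyone wanting to reproduce this, here's roughly how such a test looks — a sketch with example pool and device names; `zpool iostat -v` is where the per-drive throughput figures above come from:

```shell
# RAIDz1 of 4 NVMe drives; example device names, destroys their contents.
zpool create -o ashift=12 tank raidz1 nvme0n1 nvme1n1 nvme2n1 nvme3n1

# Disable compression so easily-compressed test data doesn't inflate results.
zfs set compression=off recordsize=1M tank

# Large sequential writes into the pool (size well above ARC is ideal):
fio --name=seqwrite --directory=/tank --rw=write --bs=1M --size=20G \
    --ioengine=libaio

# Watch per-vdev and per-drive bandwidth at 1 s intervals while it runs:
zpool iostat -v tank 1
```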
Of course, these drives are connected directly to the R7415 motherboard, which the manual says has 128 PCIe 3.0 lanes.
In fact ... while still rather unimpressive, I added an HBA330 and 4x Samsung 870 EVO SATA SSDs (~500 MB/s read/write each) and got only
500 MB/s - Write
600 MB/s - Read
As in, almost as good as a bunch of drives that are at least 4x as fast.
Hoping Ericloewe might see this and make some suggestions ...
YEAH, of course, I could test this a hundred more ways to further confirm there's really a problem.
What no one has yet offered are candidate solutions (ideally, free ones first).
In another thread on STH, despite my pointing out that CPU utilization hit all of 5% for only about half a second of my tests (as if anyone really thought an EPYC CPU was the problem), the default, thoughtless suggestion was of course: "Oh, it's the CPU." Even after I'd benchmarked the array locally in Ubuntu with fio — meaning SMB wasn't even involved — they STILL suggested I buy a new CPU, if not a whole new computer, because hey, maybe I just need a 3rd-gen EPYC to get more than the 500 MB/s my SPINNING array gets.
I know this isn't just a TrueNAS issue, if it's one at all. It seems like it could be a backplane issue ... but what, exactly? Dell sold this as an NVMe-ONLY server (until I added the HBA330, it couldn't even take SAS or SATA), yet addressing more than one drive at a time limits them all to less than 1 GB/s in aggregate??? Where are the thousands of complaints, then? That would be intolerable to any customer who bought this configuration and installed more than one NVMe drive; presumably you couldn't even copy from one NVMe to another without cutting their performance to at best a quarter (~700 MB/s).
Granted, the crippled NVMe speeds are still somewhat faster than my spinning-rust array ... though that array is spectacularly consistent, and it always outperforms the individual drives it's made of. I have literally seen it hit 1200 MB/s, and HGST drives aren't particularly fast compared to most, say ~200 MB/s max. That means the RAIDz2 array occasionally exceeds (N - P) x single-drive speed.
In contrast ... if the NVMe pool got even 50% of aggregate, that would be 4 GB/s for the 4-drive config (obviously not with IOPS-limited data; my tests use large media files of 1 GB+), or 12 GB/s with the 8x 9300 Pro (3 GB/s each). I know something will cap that eventually (although my older, consumer-grade i7-8700K with 4x PM983 managed over 10 GB/s ... so is it really that crazy!?).
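For concreteness, the back-of-the-envelope expectation I keep referring to is just (drives − parity) × per-drive streaming speed, ignoring all overhead. A trivial sketch of that arithmetic:

```shell
# Idealized RAIDz streaming throughput: (drives - parity) * per-drive MB/s.
raidz_mbps() {
  # $1 = number of drives, $2 = parity drives, $3 = per-drive MB/s
  echo $(( ($1 - $2) * $3 ))
}

raidz_mbps 4 1 2000   # 4x 7300 Pro in RAIDz1, ~2000 MB/s writes each -> prints 6000
raidz_mbps 8 2 3000   # 8x 9300 Pro in RAIDz2, ~3000 MB/s each -> prints 18000
```

Against an ideal ~6000 MB/s, the observed ~125 MB/s per drive (~500 MB/s total) is off by an order of magnitude even with generous allowances for overhead.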
I just hope I can get this machine's array performance up to at least that of a single drive it's built from.