ISJ
Dabbler
Joined: Aug 4, 2019
Messages: 45
I just wanted to give everyone a heads up, since dropping ~$4500 on this setup and getting poor results sucks.
TL;DR: While the ASUS Hyper M.2 cards are advertised as PCIe 4.0 compliant, in my setup forcing them down to PCIe 3.0 eliminated the hardware errors other users have reported and, strangely, also roughly doubled performance.
Test Results
| Situation | Read (MB/s) | Read IOPS | Write (MB/s) | Write IOPS |
| --- | --- | --- | --- | --- |
| Eight drives, no LVM, encryption, RW, 64K chunk, PCIe 4 | 4923 | 75181 | 4921 | 75153 |
| Eight drives, no LVM, encryption, RW, 64K chunk, PCIe 3 | 7859 | 120101 | 7856 | 120044 |
The only thing that changed between the two runs was setting the two slots to PCIe 3.0 in the BIOS. Cooling was overkill for this test (two 60 W 120 mm server fans blowing across the cards to rule out any throttling), so it's not thermals causing this. Both tests were run right after bootup.
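If you want to confirm what your slots actually negotiated, the live link speed and width are exposed through sysfs and lspci, and any PCIe bus errors should show up in the kernel log. The nvme0 path below is just an example; substitute your own devices:

# Negotiated link speed/width for a given NVMe device (nvme0 is an example)
cat /sys/class/nvme/nvme0/device/current_link_speed
cat /sys/class/nvme/nvme0/device/current_link_width

# Same info via lspci: LnkSta is the live link, LnkCap the maximum the device supports
sudo lspci -vv | grep -E 'Non-Volatile|LnkCap:|LnkSta:'

# Look for corrected/uncorrected PCIe (AER) errors after a benchmark run
sudo dmesg | grep -iE 'aer|pcie bus error'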
My Setup
8x Samsung 980 Pros (2TB)
2x ASUS Hyper M.2 Cards
Motherboard: ASUS WRX80E-SAGE SE WiFi
OS: Debian 11.7 (hardware testing to follow with TrueNAS)
CPU: AMD Ryzen Threadripper PRO 5955WX
RAM: 128 GB DDR4-3200
All drives were placed in an mdadm RAID 0 array with the following command:
sudo mdadm --create /dev/md0 --level=0 --raid-devices=8 --chunk=64K /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1 /dev/nvme8n1
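Before benchmarking, it's worth sanity-checking that all eight members actually joined the array. These are standard mdadm commands, nothing specific to this board:

cat /proc/mdstat
sudo mdadm --detail /dev/md0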
Tested with:
sudo fio --ioengine=io_uring --direct=1 --name=fiotest --filename=./testfio --bs=64k --iodepth=256 --size=128GB --rw=randrw --runtime=60 --time_based --group_reporting --eta-newline=1
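To rule the array layer out, the same fio invocation can be pointed at a single raw drive; if the PCIe 4 vs 3 gap shows up per device, the slot/link is the problem rather than mdadm. The nvme0n1 target here is just an example, and note that randrw against the raw device will destroy its contents, so only run this on a drive with nothing on it:

sudo fio --ioengine=io_uring --direct=1 --name=singledrive --filename=/dev/nvme0n1 --bs=64k --iodepth=256 --size=128GB --rw=randrw --runtime=60 --time_based --group_reporting --eta-newline=1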