pious_greek
Dabbler
- Joined
- Nov 28, 2017
- Messages
- 18
I have my pool configured as raidz2 with compression off for testing. When performing a write test over SSH, I'm getting a write speed of about 192 megabytes per second. I was expecting roughly twice that. When I read that data back, I get 485 megabytes per second, which is consistent with my expectations. (Expectations were based on https://calomel.org/zfs_raid_speed_capacity.html, which lists w=429MB/s, rw=71MB/s, r=488MB/s.)
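As a rough sanity check on those expectations: a raidz2 vdev's streaming write rate scales with its data disks (total disks minus the two parity columns). A back-of-envelope sketch, with illustrative numbers rather than figures from my build:

```python
# Back-of-envelope raidz2 streaming-write estimate: data is striped
# across (disks - 2) data columns; the other two hold parity.
# Ignores metadata overhead, recordsize padding, and caching effects.
def raidz2_write_estimate(disks: int, per_disk_mbps: float) -> float:
    return (disks - 2) * per_disk_mbps

# Illustrative numbers: six ~110 MB/s spinners in one raidz2 vdev.
print(raidz2_write_estimate(6, 110))  # 440
```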
While running the test, the CPU does not appear to be taxed to any great extent, so how would I diagnose where the bottleneck is? I don't think my hardware selection is inadequate. I've run burn-in tests on the CPU, RAM, and hard drives with no alarming results, although one bank of drives did run the tests slower than the other bank, which I was advised is not unusual.
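One way to look for a per-disk bottleneck is to watch disk activity in a second SSH session while the write test runs. Both tools ship with FreeNAS/FreeBSD; the pool name "pool" is assumed from the test path below:

```shell
# Run these while the dd test is writing.
zpool iostat -v pool 1   # per-vdev / per-disk throughput, 1 s intervals
gstat -p                 # per-disk busy %; one disk pinned near 100% is the bottleneck
```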
https://forums.freenas.org/index.ph...urn-in-tests-bend-in-cable.59770/#post-424116
My hardware selection is in my signature below.
Code:
sudo dd if=/dev/zero of=/mnt/pool/dataset/testfile bs=1024 count=1000000
1024000000 bytes transferred in 5.322653 secs (192385256 bytes/sec)
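One caveat about this test: bs=1024 makes dd issue roughly a million 1 KiB write() calls, so the 192 MB/s figure may be measuring syscall overhead rather than pool throughput. A sketch of the same test with 1 MiB blocks (the default path and small size here are placeholders so it runs anywhere; on the NAS, point TESTFILE at the dataset and use a count of several thousand so caching doesn't dominate):

```shell
# Same write test with 1 MiB blocks (bs=1048576 is accepted by both
# BSD and GNU dd). Scaled down to 64 MiB here for illustration.
TESTFILE=${TESTFILE:-./dd-bs-test}   # e.g. /mnt/pool/dataset/testfile on the NAS
dd if=/dev/zero of="$TESTFILE" bs=1048576 count=64
```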