slower than expected write speeds


pious_greek

Dabbler
Joined
Nov 28, 2017
Messages
18
I have my pool configured as raidz2 with compression off for testing. When performing a write test over SSH, I'm getting a write speed of about 192 megabytes per second; I was expecting about twice that. When I read that data back, I get about 485 megabytes per second, which is consistent with my expectations. (Expectations were based on https://calomel.org/zfs_raid_speed_capacity.html, which indicated w=429MB/s, rw=71MB/s, r=488MB/s.)

Code:
 sudo dd if=/dev/zero of=/mnt/pool/dataset/testfile bs=1024 count=1000000
1024000000 bytes transferred in 5.322653 secs (192385256 bytes/sec)


While running the test, the CPU does not appear to be taxed to any great extent, so how would I diagnose where the bottleneck is? I don't think my hardware selection is inadequate. I've run burn-in tests on the CPU, RAM, and hard drives with no alarming results, although one bank of drives did run the tests at a slower rate than the other bank, which I was advised was not unusual.

https://forums.freenas.org/index.ph...urn-in-tests-bend-in-cable.59770/#post-424116
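
For reference, one thing I could try while the dd runs is to watch per-disk activity from a second SSH session; a rough sketch, using the same placeholder pool name as the redacted path above:

Code:
 # run in a second session while the dd write is in progress;
 # "pool" is the same placeholder name used in the redacted path above
 zpool iostat -v pool 1
 gstat -p

If one disk sits at a much higher busy percentage than the rest, that drive (or its cable/port) is the likely bottleneck.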


My hardware selection is in my signature below.
 


tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Your output file is "/testfile", which would be in the root of the boot drive. All of your storage volumes should be under /mnt/.
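To confirm which filesystem a file actually landed on, something like this works (using whatever path was given to of=):

Code:
 # df reports which filesystem holds the given path
 df -h /mnt/pool/dataset/testfile
 # zfs list shows each dataset with its used space and mountpoint
 zfs list -o name,used,mountpoint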
 

pious_greek

Dabbler
Joined
Nov 28, 2017
Messages
18
Thanks tvsjr. The output file path I initially posted was incorrect; I had redacted the volume and dataset names, which aren't worth sharing. The test file was actually being created in a dataset on the storage volume, with a path comparable to: of=/mnt/pool/dataset/testfile

ls showed the file was created in the dataset, and the FreeNAS GUI's Storage tab reflected that the data had been written.
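
Since the test assumes compression is disabled, the dataset's settings can also be confirmed directly; for example (same placeholder pool/dataset names as above):

Code:
 zfs get compression,recordsize pool/dataset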
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
When using dd, never use a bs of less than 64K; smaller block sizes just add overhead. Also note that dd understands units like "k" and "m", so bs=64k or bs=1m work.
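
For example, the earlier test could be rerun along these lines (same placeholder path as above):

Code:
 # ~10 GiB of zeros in 1 MiB blocks; a larger total helps push past RAM buffering
 sudo dd if=/dev/zero of=/mnt/pool/dataset/testfile bs=1m count=10000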

That said, writing only zeros, even with ZFS compression turned off, is probably going to get compressed somewhere in that I/O stream and show unrealistically high numbers.
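
One rough way around that is to write data that can't be compressed, with the caveat that /dev/random may itself become the bottleneck on some systems:

Code:
 # ~10 GiB of random data; if throughput drops, check whether the RNG, not the pool, is the limit
 sudo dd if=/dev/random of=/mnt/pool/dataset/testfile bs=1m count=10000

Generating a random file once on a fast local device and then copying it to the pool avoids measuring the random-number generator instead of the disks.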
 