Slow disk performance on a fresh install with decent hardware


Seifer

Cadet
Joined
Apr 17, 2017
Messages
3
I am getting really slow write speeds on my setup and can't figure out why.

FreeNAS-11.1-U3
Hardware: 2x Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz
128GB ECC RAM
3x LSI 9211-8i flashed to IT mode
24x 2TB Seagate Constellation ES SAS drives
System drives are 128GB SSDs in a mirrored vdev (SATA)

One pool is a stripe of 5 mirrored vdevs (RAID10 with 10 disks): /mnt/data
One pool is a stripe of 12 disks: /mnt/data2
2x 10Gb network

I have also tried zfs set sync=standard | always | disabled on these volumes. Standard and disabled perform about the same, while always is really slow (6MB/s); the numbers below were taken with sync=disabled.
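
For reference, the sync changes were made with something like the following (assuming the pools are named data and data2 to match the mountpoints above):
Code:
# assuming pool names data and data2 match /mnt/data and /mnt/data2
zfs set sync=disabled data
zfs set sync=disabled data2
# confirm the current setting on both pools
zfs get sync data data2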

I'm getting at most 35MB/s writes across the network using NFS or rsync over SSH. Copying locally from one volume to another I only get 85MB/s, and it doesn't matter whether I use the pool of mirrored vdevs, the striped pool, or the mirrored system drive. My drives barely go above 15% busy in gstat, and top shows a load of less than 2.
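
For what it's worth, these are roughly the commands I'm watching while the copies run (FreeBSD gstat and top):
Code:
# per-disk busy% for physical providers, refreshed every second
gstat -p -I 1s
# CPU load and per-thread usage
top -SH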

Using dd I get decent numbers. I understand LZ4 compression makes them look a little faster, but these numbers still seem way out there.
Code:
root@nas-05:/mnt/data # dd if=/dev/zero of=/mnt/data/test100G.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 33.548577 secs (3200558449 bytes/sec)

root@nas-05:/mnt/data # dd of=/dev/null if=/mnt/data/test100G.dat bs=2048k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 20.021183 secs (5363028950 bytes/sec)

root@nas-05:/mnt/data # dd if=/dev/zero of=/mnt/data2/test100G.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 33.858946 secs (3171220471 bytes/sec)

root@nas-05:/mnt/data # dd of=/dev/null if=/mnt/data2/test100G.dat bs=2048k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 20.017338 secs (5364059042 bytes/sec)

Any help would be appreciated.
Thanks,
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,970
First of all, I'm going to burst your bubble: you cannot perform benchmark tests against a compressed dataset with compressible data like /dev/zero. You can't just copy files over and say you have a slow pool without doing controlled testing. So far you have not provided a single credible test result, as far as I can tell.

I understand LZ4 compression makes them look a little faster
A little faster, really? Try the same test on a dataset with compression disabled; I think you will see a significant difference. It took my system 522 seconds to generate the 100GB file. Your setup should generate it much quicker, but not in 33 seconds.
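
If it helps, something like this will give you an uncompressed dataset to write to (the dataset name here is just an example):
Code:
# create a scratch dataset with compression disabled (name is only an example)
zfs create -o compression=off data/bench
# rerun the write test against the uncompressed dataset
dd if=/dev/zero of=/mnt/data/bench/test100G.dat bs=2048k count=50k
# remove the scratch dataset when finished
zfs destroy data/bench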

Here is a link to some benchmark testing I performed years ago. Don't let the title fool ya; it contains a set of benchmark tests you could run if you really want an accurate picture of your setup's performance.

If you still feel your performance is bad after running those tests, post the results (in code brackets) along with the output of zpool status (again in code brackets).
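
For the pool layout, that would be something along the lines of:
Code:
# pool layout and health
zpool status -v
# capacity and usage per vdev
zpool list -v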

Lastly, NFS and rsync have been known to be slow; this is not a hardware issue. Upgrading to 11.1-U4 might help, but don't hold your breath.

Hope this information helps out.
 