Terrible read performance - how to troubleshoot

paulinventome

Explorer
Joined
May 18, 2015
Messages
62
So, new build with two zpools: one is a RAIDZ2 with 8x 16TB drives and the other is a mirror with 2x 4TB SSDs. The server is connected over 10GbE.

Tested from both a Mac and a Windows machine, against both pools, and I see the same thing: writes are around 400-500MB/s but reads are around 1MB/s, yes 1. So something is up and I need some pointers to track it down.

I am just running the SMB service, and the CPU and the system barely register the load. Looking at the Reports section, the SSD pool for example shows writes maxing at around 260MB/s (I still think this is poor, and I assume the 400-500MB/s I see externally is thanks to RAM caching). But the report shows the disk reads being tiny. Same for the RAIDZ2 pool.

So I'm fairly new to this, where can I start troubleshooting? Are there any tools within the TrueNAS UI that would let me check disk performance locally as a first step? I am fairly happy with the command line too.

Does 1MB/s read performance, off either the RAIDZ2 or the SSD mirror, ring any bells with anyone?

This is replacing an ageing Synology 3612xs which has been running for 8 years; I still get 600MB/s to and from that. Same network, same clients, same testing software (Blackmagic Speed Test).

Any pointers would be really appreciated!

thanks
Paul
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
So I'm fairly new to this, where can I start troubleshooting? Are there any tools within the TrueNAS UI that would let me check disk performance locally as a first step? I am fairly happy with the command line too.
fio

zpool iostat -v

iostat
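
For example, run these in a second shell while a copy is in progress (the pool name below is just a placeholder, and the 5 is a refresh interval in seconds):

zpool iostat -v poolname 5    # per-vdev and per-disk read/write throughput, refreshed every 5 seconds
iostat -x 5                   # per-device busy %, queue depth and throughput, refreshed every 5 seconds

Note that without an interval argument, zpool iostat prints averages since the pool was imported, so a one-off run won't reflect a test you just finished.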
 

paulinventome

Explorer
Joined
May 18, 2015
Messages
62
So I found the dd command

SSD is (READ/WRITE)
107374182400 bytes transferred in 21.386735 secs (5020597342 bytes/sec)
107374182400 bytes transferred in 12.710596 secs (8447612230 bytes/sec)

RAIDZ2 is
107374182400 bytes transferred in 21.654265 secs (4958569686 bytes/sec)
107374182400 bytes transferred in 12.709673 secs (8448225497 bytes/sec)

from the command line in the UI. Now, the Reports section of TrueNAS doesn't show these transfers, does it? I found it odd that the SSD and RAIDZ2 results were remarkably similar, so I wanted to check the reports to make sure the right disks were being used, but the reports really don't reflect the numbers above.

I believe my next port of call is iperf? Some other threads I've read point to the network card, but what is odd in my case is that early on in the build I had a single SSD running which I benchmarked remotely, happily hitting 500MB/s in both directions. So I have seen this working...
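
Something like this, I think (the NAS address is just a placeholder; iperf3 needs to be running on both ends), checking each direction separately since writes look fine but reads don't:

iperf3 -s                  # on the TrueNAS box
iperf3 -c 192.168.1.10     # on the client: client -> NAS, i.e. the write direction
iperf3 -c 192.168.1.10 -R  # reverse mode: NAS -> client, i.e. the read direction

If the -R run collapses the same way the SMB reads do, that would point at the network/NIC rather than ZFS or SMB.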

cheers
Paul
 

paulinventome

Explorer
Joined
May 18, 2015
Messages
62
fio

zpool iostat -v

iostat
So zpool iostat gives

              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
OmeMedia    21.7M  3.62T      0     18    367  1.83M
OmeVault     209G   116T      0      7     42  1.36M
boot-pool   1.29G   215G      0      0  9.12K    161
----------  -----  -----  -----  -----  -----  -----

Which doesn't match the dd results shown above, so either dd is not working or those transfers aren't reflected in this output (most likely).

Stupid question but what are the units for bandwidth?

And fio; I need to look into this too.

I think my dd results are rubbish; I just read something about compression (and /dev/zero is all zeros, right?).
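
If I understand it right, a fairer dd test would be against a dataset with compression turned off and a file bigger than RAM (pool/dataset names here are just an example):

zfs create -o compression=off OmeMedia/speedtest
dd if=/dev/zero of=/mnt/OmeMedia/speedtest/ddfile bs=1m count=102400   # write ~100GiB
dd if=/mnt/OmeMedia/speedtest/ddfile of=/dev/null bs=1m                # read it back

The file needs to be larger than RAM, otherwise the read comes straight back out of ARC rather than the disks.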

thanks
Paul
 

paulinventome

Explorer
Joined
May 18, 2015
Messages
62
Just another addition: with compression off I am getting more realistic results.

107374182400 bytes transferred in 373.466036 secs (287507222 bytes/sec)
107374182400 bytes transferred in 134.937192 secs (795734525 bytes/sec)

107374182400 bytes transferred in 304.602843 secs (352505517 bytes/sec)
107374182400 bytes transferred in 112.694355 secs (952791136 bytes/sec)

This is closer to what I expect, although the SSD reading at 795MB/s is beyond the SSD's specs. With a mirror setup, does TrueNAS pull reads from both drives for speed?

Guess this is a NIC issue then, or maybe the network setup.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Using dd to test write or read performance is a minefield; the results often reflect things you're not actually looking at.

Where are you reading from and writing to (or just show the exact command you're using), and maybe it will be clear...

Main pitfall: using /dev/zero or /dev/random. Those will both give misleading results, due to compression in the first case and CPU limitations in the second.

Bytes (and bytes per second for the bandwidth columns) are the units for those commands.

Look around for examples of fio; it's probably the best tool for whole-pool testing.
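
As a starting point, something along these lines (the target path is just an example; point --filename at a dataset on the pool you want to test, and use a size larger than RAM if you want to keep ARC out of the read numbers):

fio --name=seqwrite --filename=/mnt/OmeMedia/fio-testfile --rw=write --bs=1M --size=20G --numjobs=1 --ioengine=posixaio --iodepth=16 --group_reporting
fio --name=seqread --filename=/mnt/OmeMedia/fio-testfile --rw=read --bs=1M --size=20G --numjobs=1 --ioengine=posixaio --iodepth=16 --group_reporting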
 

paulinventome

Explorer
Joined
May 18, 2015
Messages
62
Using dd to test write or read performance is a minefield; the results often reflect things you're not actually looking at.

Where are you reading from and writing to (or just show the exact command you're using), and maybe it will be clear...

Main pitfall: using /dev/zero or /dev/random. Those will both give misleading results, due to compression in the first case and CPU limitations in the second.

Bytes (and bytes per second for the bandwidth columns) are the units for those commands.

Look around for examples of fio; it's probably the best tool for whole-pool testing.
Thanks. I think I have verified that the pools are performing locally much closer to what I expect. My task now is to work out why, over the network and an SMB share, I am seeing sub-1MB/s read speeds on both Mac and Windows.
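
One more thing I might try to separate the pieces (share name and file are placeholders): read a big existing file straight off the mounted share with dd on the Mac, so the Blackmagic app is out of the equation:

dd if=/Volumes/OmeShare/somebigfile of=/dev/null bs=1m

If that is also around 1MB/s, it's the SMB/network path; if it's fast, it's something about how the benchmark app does its reads.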

Thanks
Paul
 