[SOLVED] Read performance very poor --> was bad disk

Status
Not open for further replies.

m4rv1n

Explorer
Joined
Oct 10, 2014
Messages
51
Hi,
I've built my new FreeNAS box: an HP MicroServer Gen8 with 16 GB of RAM and a G1610T CPU.

Write performance is normal, about 600 Mbit/s, but read performance is about 20 Mbit/s, a huge difference. I've tried both CIFS and FTP.

I did a clean install of version 9.3 and went through the wizard to the end, creating the dataset and the Windows share in the wizard. Now I have: storage pool --> main dataset (Unix, lz4) --> various datasets shared via CIFS under the main dataset (Windows, lz4).

I tried dd and it reports normal speeds:

WRITE TEST
[root@storage] /mnt/StoragePool/BACKUP# dd if=/dev/zero of=/mnt/StoragePool/BACKUP/testfile bs=8192k count=2000
2000+0 records in
2000+0 records out
16777216000 bytes transferred in 36.644495 secs (457837282 bytes/sec)

READ TEST
[root@storage] /mnt/StoragePool/BACKUP# dd if=/mnt/StoragePool/BACKUP/testfile of=/dev/zero bs=8192k
2000+0 records in
2000+0 records out
16777216000 bytes transferred in 3.798195 secs (4417154847 bytes/sec)

READ TEST WITHOUT bs=8192k (I stopped it before the end because it was taking so long)
[root@storage] /mnt/StoragePool/BACKUP# dd if=/mnt/StoragePool/BACKUP/testfile of=/dev/zero
3067606+0 records in
3067605+0 records out
1570613760 bytes transferred in 68.051806 secs (23079678 bytes/sec)

If I read over CIFS or FTP, the speed drops to 2-3 MB/s.
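A side note on the dd numbers above: without bs=, dd falls back to 512-byte blocks, so that last test measures per-syscall overhead as much as disk speed; also, of= should point at /dev/null, not /dev/zero, and a zero-filled file compresses to almost nothing under lz4, which is why the 4.4 GB/s figure is unrealistically high. A minimal sketch of a cleaner local read test (the /tmp path is a placeholder, not the pool path):

```shell
# Write a 64 MB scratch file, then read it back with a large block size.
# /tmp/dd_testfile is a placeholder; on the pool you would use a path
# under /mnt/StoragePool instead (and a file much larger than RAM to
# defeat the ARC cache).
dd if=/dev/zero of=/tmp/dd_testfile bs=8192k count=8 2>/dev/null
dd if=/tmp/dd_testfile of=/dev/null bs=8192k
rm /tmp/dd_testfile
```
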

[root@storage] /mnt/StoragePool# zpool list -v
NAME                                          SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
StoragePool                                  7.25T  3.33T  3.92T         -     3%    45%  1.00x  ONLINE  /mnt
  raidz1                                     7.25T  3.33T  3.92T         -     3%    45%
    gptid/60bd0d83-83e7-11e4-847d-000c29d8ca9b   -      -      -         -      -      -
    gptid/62000b68-83e7-11e4-847d-000c29d8ca9b   -      -      -         -      -      -
    gptid/63396dff-83e7-11e4-847d-000c29d8ca9b   -      -      -         -      -      -
    gptid/6475119b-83e7-11e4-847d-000c29d8ca9b   -      -      -         -      -      -
freenas-boot                                 7.94G   941M  7.02G         -      -    11%  1.00x  ONLINE  -
  da0p2                                      7.94G   941M  7.02G         -      -    11%

[root@storage] /mnt/StoragePool# zpool status
  pool: StoragePool
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        StoragePool                                     ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/60bd0d83-83e7-11e4-847d-000c29d8ca9b  ONLINE       0     0     0
            gptid/62000b68-83e7-11e4-847d-000c29d8ca9b  ONLINE       0     0     0
            gptid/63396dff-83e7-11e4-847d-000c29d8ca9b  ONLINE       0     0     0
            gptid/6475119b-83e7-11e4-847d-000c29d8ca9b  ONLINE       0     0     0

errors: No known data errors

Why is the read performance so poor? 2-20 MB/s means something is wrong; I don't think the disks are the bottleneck.

Thanks to all.
 
Last edited:

doomed

Dabbler
Joined
Dec 16, 2014
Messages
17
Whatever progress you make, please update the thread.

Very interested in hearing more about this.
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
Maybe you should output to /dev/null instead of /dev/zero?

Also post the output of zfs get all StoragePool/BACKUP,
or zfs get all StoragePool if the above doesn't work.
 
Last edited:

m4rv1n

Explorer
Joined
Oct 10, 2014
Messages
51
Hi,
yes, I wrote it wrong, but the result is the same; the problem appears when I don't use bs=8192k.

[root@storage] /mnt/StoragePool/BACKUP# dd if=/mnt/StoragePool/BACKUP/testfile of=/dev/null
766775+0 records in
766775+0 records out
392588800 bytes transferred in 17.412118 secs (22546872 bytes/sec)

I watched gstat while a read was slow:

 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0      0      0      0    0.0      0      0    0.0    0.0| da0
    0      0      0      0    0.0      0      0    0.0    0.0| da1
    4      8      8    702  397.6      0      0    0.0  102.8| da2
    0      0      0      0    0.0      0      0    0.0    0.0| da3
    0      0      0      0    0.0      0      0    0.0    0.0| da4


This is the result of the command zfs get all :

[root@storage] /mnt/StoragePool/BACKUP# zfs get all StoragePool/BACKUP
NAME PROPERTY VALUE SOURCE
StoragePool/BACKUP type filesystem -
StoragePool/BACKUP creation Mon Dec 15 0:17 2014 -
StoragePool/BACKUP used 7.04G -
StoragePool/BACKUP available 2.68T -
StoragePool/BACKUP referenced 7.04G -
StoragePool/BACKUP compressratio 1.08x -
StoragePool/BACKUP mounted yes -
StoragePool/BACKUP quota none default
StoragePool/BACKUP reservation none default
StoragePool/BACKUP recordsize 128K default
StoragePool/BACKUP mountpoint /mnt/StoragePool/BACKUP default
StoragePool/BACKUP sharenfs off default
StoragePool/BACKUP checksum on default
StoragePool/BACKUP compression lz4 inherited from StoragePool
StoragePool/BACKUP atime on default
StoragePool/BACKUP devices on default
StoragePool/BACKUP exec on default
StoragePool/BACKUP setuid on default
StoragePool/BACKUP readonly off default
StoragePool/BACKUP jailed off default
StoragePool/BACKUP snapdir hidden default
StoragePool/BACKUP aclmode restricted local
StoragePool/BACKUP aclinherit passthrough inherited from StoragePool
StoragePool/BACKUP canmount on default
StoragePool/BACKUP xattr off temporary
StoragePool/BACKUP copies 1 default
StoragePool/BACKUP version 5 -
StoragePool/BACKUP utf8only off -
StoragePool/BACKUP normalization none -
StoragePool/BACKUP casesensitivity sensitive -
StoragePool/BACKUP vscan off default
StoragePool/BACKUP nbmand off default
StoragePool/BACKUP sharesmb off default
StoragePool/BACKUP refquota none default
StoragePool/BACKUP refreservation none default
StoragePool/BACKUP primarycache all default
StoragePool/BACKUP secondarycache all default
StoragePool/BACKUP usedbysnapshots 0 -
StoragePool/BACKUP usedbydataset 7.04G -
StoragePool/BACKUP usedbychildren 0 -
StoragePool/BACKUP usedbyrefreservation 0 -
StoragePool/BACKUP logbias latency default
StoragePool/BACKUP dedup off default
StoragePool/BACKUP mlslabel -
StoragePool/BACKUP sync standard default
StoragePool/BACKUP refcompressratio 1.08x -
StoragePool/BACKUP written 7.04G -
StoragePool/BACKUP logicalused 7.56G -
StoragePool/BACKUP logicalreferenced 7.56G -
StoragePool/BACKUP volmode default default
StoragePool/BACKUP filesystem_limit none default
StoragePool/BACKUP snapshot_limit none default
StoragePool/BACKUP filesystem_count none default
StoragePool/BACKUP snapshot_count none default
StoragePool/BACKUP redundant_metadata all default


[root@storage] /mnt/StoragePool/BACKUP# zfs get all StoragePool
NAME PROPERTY VALUE SOURCE
StoragePool type filesystem -
StoragePool creation Mon Dec 15 0:17 2014 -
StoragePool used 2.42T -
StoragePool available 2.68T -
StoragePool referenced 151K -
StoragePool compressratio 1.02x -
StoragePool mounted yes -
StoragePool quota none local
StoragePool reservation none local
StoragePool recordsize 128K default
StoragePool mountpoint /mnt/StoragePool default
StoragePool sharenfs off default
StoragePool checksum on default
StoragePool compression lz4 local
StoragePool atime on default
StoragePool devices on default
StoragePool exec on default
StoragePool setuid on default
StoragePool readonly off default
StoragePool jailed off default
StoragePool snapdir hidden default
StoragePool aclmode passthrough local
StoragePool aclinherit passthrough local
StoragePool canmount on default
StoragePool xattr off temporary
StoragePool copies 1 default
StoragePool version 5 -
StoragePool utf8only off -
StoragePool normalization none -
StoragePool casesensitivity sensitive -
StoragePool vscan off default
StoragePool nbmand off default
StoragePool sharesmb off default
StoragePool refquota none local
StoragePool refreservation none local
StoragePool primarycache all default
StoragePool secondarycache all default
StoragePool usedbysnapshots 0 -
StoragePool usedbydataset 151K -
StoragePool usedbychildren 2.42T -
StoragePool usedbyrefreservation 0 -
StoragePool logbias latency default
StoragePool dedup off default
StoragePool mlslabel -
StoragePool sync standard default
StoragePool refcompressratio 1.00x -
StoragePool written 151K -
StoragePool logicalused 2.47T -
StoragePool logicalreferenced 10.5K -
StoragePool volmode default default
StoragePool filesystem_limit none default
StoragePool snapshot_limit none default
StoragePool filesystem_count none default
StoragePool snapshot_count none default
StoragePool redundant_metadata all default

I also tried copying a small file to the CIFS share; the speed was not great but acceptable (70 MB/s write, 100 MB/s read). I then tried a big file (21 GB): write speed was 50 MB/s, but read speed was 3 MB/s.
 
Last edited:

rs225

Guru
Joined
Jun 28, 2014
Messages
878
In your gstat output, do the numbers only show up on da2?

If da2 is 100% busy while the other drives sit at 0%, then something is wrong with the cabling for da2, or with the da2 drive itself.
You could try offlining da2 and see if the speed goes up. If so, replace the drive.
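For reference, the offline test can also be done from the CLI; a sketch, where the gptids are placeholders for whichever label actually maps to da2 (glabel status shows the mapping):

```shell
# Find which gptid label corresponds to da2.
glabel status | grep da2

# Take that member offline; the raidz1 vdev keeps serving reads
# from the remaining three disks.
zpool offline StoragePool gptid/XXXXXXXX-placeholder

# Re-run the read test; if throughput recovers, swap in a new drive:
zpool replace StoragePool gptid/XXXXXXXX-placeholder gptid/YYYYYYYY-new
```

These commands act on a live pool, so only run them once you've confirmed the gptid-to-device mapping.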
 

m4rv1n

Explorer
Joined
Oct 10, 2014
Messages
51
You are right. I took disk da2 offline from the interface, and the read speed now saturates the gigabit connection.
I'm off to buy another disk drive, thank you!
 