My FreeNAS seems to be limited to 1GB/s read speed

y7pro

Cadet
Joined
Feb 5, 2020
Messages
4
Hi there,

I have successfully built my new FreeNAS server with the following specs:

1- Dell R740, dual Xeon 4114 CPUs
2- 128GB RAM
3- Intel X540-T2 dual-port 10GbE card
4- LSI 9300-8e HBA card
5- Supermicro 847E1C-R1K28JBOD storage enclosure
6- 16 x 12TB Seagate Exos HDDs (in enclosure)
7- 2 x Samsung 512GB Pro SATA SSDs (in server)

8- 2 x Dell 600GB 15K SAS drives (boot drives)

Pool configuration:
1- 2 x 8-disk RAIDZ2 vdevs using the 12TB drives
2- 1 x 2-disk SSD mirror vdev (just for testing)

I have set up the system and it's working, but I have some problems with the read/write speeds:

1- Read speed from the RAIDZ2 pool seems to be limited to 1GB/s whatever I try. I am testing using dd with the following command:
Code:
dd if=tmp.dat of=/dev/null bs=2048k count=50k

Compression on the pool is off, and no matter what I try the read speed maxes out at 1GB/s. It seems there is a bottleneck somewhere in the system that I can't identify.

Strangely enough, the write speed maxes out at about 1800MB/s, which is roughly within the expected performance of the pool.
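For reference, this is roughly what the write side of the test and the compression check look like (the dataset name "tank" is just a placeholder for my actual pool):
Code:
# confirm compression really is off on the dataset being tested ("tank" is a placeholder)
zfs get compression tank

# write test: ~100GiB of zeroes into a file on the pool
dd if=/dev/zero of=tmp.dat bs=2048k count=50k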

2- The second problem is actually a question. The Seagate Exos drives are rated for 230MB/s sequential and 170MB/s random write speed, but when I test the pool with dd (which I assume is sequential) and check the Reporting section for the disks, every disk seems to max out at about 150MB/s. This is why, I think, the max write speed is below the theoretical maximum of the pool. Does ZFS incur such overhead on drives that it reduces their maximum speed by such a big number?
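(The per-disk numbers above come from the Reporting section in the GUI; something like gstat from a shell should show the same thing live while dd runs, roughly:)
Code:
# live per-disk throughput while the dd test is running;
# -p restricts the output to physical providers (da0, da1, ...)
gstat -p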


I have tried switching the pool to 8 x 2-disk mirror vdevs, but the read speed stays the same, locked at a maximum of 1GB/s.

Any idea where the problem might be?

Thanks
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
1GB/s is about the max for a single 10Gb link.
 

y7pro

Cadet
Joined
Feb 5, 2020
Messages
4
Yeah, but I am testing locally with iozone and dd. I suspect that my HBA card is working at the speed of a single port (12Gbps); that's the only bottleneck I can think of in the system that's anywhere near that number. Actually, I occasionally get 1.2GB/s. Is that possible?
 

dashtesla

Explorer
Joined
Mar 8, 2019
Messages
75
Are you on FreeNAS-11.3-RELEASE? Not saying there's nothing to be concerned about, but 1.2GB/s is your network limit anyway. Now, if you're planning on using both NICs, have you tried benchmarking over SMB 3 multipath, or using them individually from the other end?

As for the rust spinners at 1GB/s, I would still be pretty pleased with that, considering ZFS is not going to give you sequential speed x16 and will tax other system resources as well. The Samsung SSDs, if you're planning on using L2ARC and have all the right settings for it in the tunables, should read/write at about 1GB/s maximum as well.

Now, are you using SAS or SATA drives? SAS 3 will be limited to 12Gbps depending on how you're setting things up: backplane, expander, etc.

SATA 3 will be at 6Gbps, but I think the expander will communicate with the controller at SAS speeds (not sure though, I would look that one up). You have four possible bottlenecks at the hardware level there: drive and link speed, link speed between the expander and the SAS controller, the SAS version of the controller and expander, and the PCIe generation/width (x1, x4, x8) you're running at.

Also you can have 2 cables from the controller to the expander for added link speed, if supported.
 

y7pro

Cadet
Joined
Feb 5, 2020
Messages
4
Yes, I am on FreeNAS 11.3, all components are SAS 3 from the controller to the disks, and I am already connected using two cables. I intend to use 2 x 10GbE links in link aggregation, so I am testing the system to find its maximum sequential throughput, but I am getting very weird results. For one, the write speed is always higher than the read speed. For the sake of testing, I set up 4 of the 16 disks as 2 x 2-disk mirrors (layout sketched below). For writes it achieves 400+ MB/s, which is in line with the single-disk speed of 200MB/s for sequential writes, but the read speed is also in the range of 400MB/s, which translates to about 100MB/s per disk, which I find very weird and can't find any explanation for.
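For clarity, the small test pool is laid out like this (device names da0..da3 are just examples, not my exact disks):
Code:
# 2 x 2-disk mirror test pool
zpool create testpool mirror da0 da1 mirror da2 da3
zpool status testpool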
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
It is difficult to speak about performance without a close look, but I can guess a few reasons:
- A single dd read thread may become CPU-bound at high bandwidth, since it has to handle many things by itself. Writes are handled very differently, which is why their performance may differ too. Check `top -SHIz` output.
- A single-threaded read from a wide pool requires significant prefetch depth. You may experiment with increasing the vfs.zfs.zfetch.max_distance sysctl (example below).
- A sequential read from a mirror of HDDs does not necessarily give double throughput, since in that case the data are not placed sequentially and reads incur head-seek and rotational latency. ZFS tries to balance between good single-threaded and multi-threaded workloads. It may be that two sequential reads together will be faster than one, or at least not as slow as they could be.
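For example, something along these lines from a shell (the 64MiB value is just a starting point for experimenting, not a recommendation; on FreeNAS it can also be added under System -> Tunables so it persists across reboots):
Code:
# current prefetch distance, in bytes
sysctl vfs.zfs.zfetch.max_distance

# try a larger prefetch window, e.g. 64MiB, then re-run the dd read test
sysctl vfs.zfs.zfetch.max_distance=67108864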
 

y7pro

Cadet
Joined
Feb 5, 2020
Messages
4
- From the FreeNAS GUI, CPU is barely above 4%, and this is the output of 'top -SHIz':
read: [screenshot: read.PNG]

write: [screenshot: write.PNG]

- I will try changing the parameters you mentioned and see what happens.
- Strangely enough, I installed Windows and tried a 16-disk two-way mirror in Storage Spaces and got well over 3GB/s reads using CrystalDiskMark and ATTO with no problem, using a single SAS link from the HBA; individual disk speed was around 200MB/s, which is about what I expected from the Seagate Exos drives.
- Also, during testing on FreeNAS the speeds for the individual drives are exactly 64MB/s per disk (8 x 2-disk mirror pool). If I create the pool with fewer drives, for example 4 drives (2 x 2-disk mirror pool), I get 200MB/s per disk for reads. So it seems like a hard bottleneck of 1GB/s or 1.2GB/s somewhere in the system; I just can't find where. The only thing I can think of is that all the HDDs are working over one port out of the 8 on the HBA, so the total bandwidth is 12Gbps, but I don't know if that's even possible or how I can check it.
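In case it's useful, this is roughly what I've been poking at from the shell to try to answer that (not sure these are the right tools for it, so treat it as a guess):
Code:
# list the HBA and every attached disk as the OS sees them
camcontrol devlist

# per-drive negotiated link rate (SAS drives report it via smartctl -x;
# SATA drives show it on the "SATA Version" line of smartctl -i)
smartctl -x /dev/da0 | grep -i 'link rate'
smartctl -i /dev/da0 | grep -i 'sata version'

# LSI's sas3ircu utility for the 9300 can also dump controller/device info
sas3ircu 0 display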
 