What am I doing wrong? 1Gb performance on 10Gb link

jschmok

Dabbler
Joined
Dec 2, 2018
Messages
27
I'm currently running FreeNAS 11.1-U6 and am only seeing 1Gbps performance over my 10Gbps link!

I'll start with my hardware setup:

MB: Asus M4A87TDU/USB3
CPU: AMD Athlon II X2 255
MEM: 4GB (this could be the problem)
NIC: HP NC550SFP 10Gbps, connected via SFP+ DAC directly to a Windows 10 box using the same card

Drives:
1x 32GB Kingston USB 3.0 (boot drive) - da0
4x WDC WD30EFRX-68EUZN0 3TB Red drives - ada0-3
1x OCZ Agility 4 64GB SSD (configured as L2ARC) - ada4

[screenshot]


Network:
FreeNAS oce0 interface - 10.10.10.2
Windows 10 10Gbps interface - 10.10.10.1 (no gateway)
This is configured in a peer-to-peer (direct-attach) configuration.


I've read various threads and posts about tweaking the FreeNAS network configuration and tunables to squeeze out more throughput, but none of them have helped. I average between 120-170MB/s. I've seen it spike to about 190MB/s once, and that was it.... glory moment :(
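To rule out the network layer itself, a raw iperf3 test between the two boxes would separate network throughput from disk throughput (a minimal sketch, assuming iperf3 is available on both ends):

# on the FreeNAS box (10.10.10.2): start a listener
iperf3 -s

# on the Windows box (10.10.10.1): 30-second test with 4 parallel streams
iperf3.exe -c 10.10.10.2 -t 30 -P 4

If that reports close to 9-10Gbps, the NICs and cabling are fine and the bottleneck is the pool; if it reports around 1Gbps, the problem is in the link or drivers.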

I'll now show screenshots of my configuration:

Windows 10 NIC properties:
[screenshot]


FreeNAS NIG Properties:
[screenshot]



I read a post about increasing transmit buffers in the NIC settings:
http://45drives.blogspot.com/2016/05/how-to-tune-nas-for-direct-from-server.html

Unfortunately, my card only supports a maximum transmit buffer of 256.
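For reference, the same driver properties can be listed and set from PowerShell instead of clicking through Device Manager (a sketch only — the adapter name "Ethernet 2" and the exact property display names are placeholders and vary by driver):

# list every advanced property the driver exposes
Get-NetAdapterAdvancedProperty -Name "Ethernet 2"

# pin transmit buffers at the driver's maximum
Set-NetAdapterAdvancedProperty -Name "Ethernet 2" -DisplayName "Transmit Buffers" -DisplayValue "256"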

I will outline my NIC settings here:
[screenshot]


Class of Service (802.1p): Auto Priority Pause
Network Address: <not present>
Packet Size: 9014
VLAN Identifier (802.1q): <not present>
Wake on LAN: Enabled

Performance
CPU Affinity

Preferred NUMA node: <not present>
Receive CPU: <not present>
Transmit CPU: <not present>
Flow Control: Rx & Tx Enabled
Interrupt Moderation: None
Receive Buffers: 4096
TCP Offload Optimization: Optimize Throughput (other option is Optimize Latency)
Transmit Buffers: 256 (max)

Protocol Offloads
IPv4
Checksum
IP Checksum Offload: Rx & Tx Enabled
TCP Checksum Offload: Rx & Tx Enabled
UDP Checksum Offload: Rx & Tx Enabled
Large Send Offload v1: Enabled
Large Send Offload v2 (IPv4): Enabled
Recv Segment Coalescing (IPv4): Disabled
TCP Connection Offload: Disabled
IPv6 settings are the same as IPv4.

[screenshot]



The post I found also recommended the following tunables:

[screenshot]



I disabled the noprefetch tunable myself...
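For anyone who can't see the screenshot, the tunables usually suggested for 10GbE on FreeBSD look roughly like this (illustrative values from typical forum advice, not necessarily exactly what the screenshot showed):

# added under System -> Tunables, type "sysctl" (prefetch_disable is type "loader")
kern.ipc.maxsockbuf=16777216       # allow larger socket buffers
net.inet.tcp.recvbuf_max=16777216  # cap for TCP receive-buffer autotuning
net.inet.tcp.sendbuf_max=16777216  # cap for TCP send-buffer autotuning
net.inet.tcp.recvbuf_inc=524288    # grow the receive buffer in larger steps
vfs.zfs.prefetch_disable=0         # the "noprefetch" knob above; 1 disables ZFS prefetch, 0 enables it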

I just performed another transfer, and these were the results:

[screenshot]

This one actually spiked to 200MB/s, but only for a short while...
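To check whether the pool itself can go faster than this, a local dd test takes the network out of the picture entirely (a rough sketch — "tank" stands in for the actual pool name, and compression must be off on the test dataset or the /dev/zero write number is meaningless):

# write test: 8GB of zeroes into the pool
dd if=/dev/zero of=/mnt/tank/testfile bs=1M count=8192

# read test: pull the file back (ideally after a reboot, so ARC caching doesn't skew it)
dd if=/mnt/tank/testfile of=/dev/null bs=1M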

I'm not sure why I'm getting such low speeds. Here is my volume status:

[screenshot]



If I've forgotten to provide any other information, please let me know.... I really want to get to the bottom of this and maximize my throughput. On a side note, I'm sourcing another 8GB of RAM soon. I know my RAM is a little low, but I was hoping the L2ARC would pick up the slack there.... anyways, any suggestions are appreciated! :)
 


Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
FreeNAS NIG Properties:
NIG?
I'm currently running FreeNAS 11.1-U6 and am only seeing 1Gbps performance over my 10Gbps link!
I would say that the biggest problem is that you have not followed any of the guidance with regard to the hardware to use, but you also don't have enough disks (vdevs in particular) to get more than around 250 MB/s.

So, tell us what kind of performance you need and we can give you some suggestions for hardware that might reach that level of performance.
I don't have enough vdevs either, but better hardware in the server gives you better performance.

[screenshot: FreeNAS to RAM disk transfer]
 

jschmok

Dabbler
Joined
Dec 2, 2018
Messages
27
haha, that's a typo.... NIC*

How many disks would I need to get more than 250MB/s? I plan to do photo editing with the files on this box directly.

I'll have folders with many, many 24MB files in them, like so:

[screenshot]


I'll be using Adobe Lightroom, so I'd like to be able to load my catalogs quickly rather than waiting around. I actually have 7 of these 3TB drives; however, 1 is being used as a single-drive backup, which leaves 6 available. One problem is that my motherboard only has 6 SATA ports, so I could go with 6 x 3TB drives and no L2ARC, or 4 x 3TB drives with 1 L2ARC. I would prefer performance over more space, so I chose the 4x3TB + 1 L2ARC. If I got a SATA expansion card I could get more ports, but... will that affect performance, since it would then rely on the PCIe bus?
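For what it's worth, the two layouts I'm weighing would look like this at the zpool level (a sketch only — FreeNAS builds pools through the GUI, the pool and device names are placeholders, and a zpool create wipes whatever is on those disks):

# option A: 6 x 3TB as three striped mirrors, no L2ARC
zpool create tank mirror ada0 ada1 mirror ada2 ada3 mirror ada4 ada5

# option B: 4 x 3TB as two striped mirrors, plus the SSD as L2ARC
zpool create tank mirror ada0 ada1 mirror ada2 ada3 cache ada4

Three mirror vdevs should stream roughly 1.5x faster than two, so option A could well beat option B even without the cache device.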

I also get the feeling the SSD I'm using for L2ARC is old, shitty, and slow (OCZ Agility 4 64GB SSD); however, I was hoping to saturate even its rated read of 300MB/s and write of 200MB/s.

Alternatively, I have a Silicon Power S55 120GB SSD, which boasts up to 550MB/s read and 420MB/s write... not sure how sustained that is. Another option is the Kingston A400, boasting read and write speeds of up to 500MB/s and 450MB/s; those are only $30 for 120GB.
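Rather than trusting the spec sheets, the actual drives can be benchmarked from the FreeNAS shell (assuming the SSD is still ada4, as listed above):

# quick-and-dirty sequential transfer-rate test of the L2ARC SSD
diskinfo -t /dev/ada4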
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I also get the feeling the SSD I'm using for L2ARC is old, crapty, and slow (OCZ Agility 4 64GB SSD); however, I was hoping to saturate even its rated read of 300MB/s and write of 200MB/s.
Yes. That SSD isn't helping and it could limit maximum performance to the speed of the SSD. It would be better to have more disks.
How many disks would I need to get more than 250MB/s? I plan to do photo editing with the files on this box directly.
It depends entirely on the performance of the individual disks. Some of the newest, high-capacity drives have a manufacturer-reported data rate of around 230 MB/s, while some smaller-capacity drives are closer to 120 MB/s, so the exact model of the drives involved matters very much. Newer, larger, faster drives would cut the number of drives needed almost in half; with older, lower-capacity models you would need twice as many.

We have a person here who recently built a system using eight drives in four mirrored pairs and was able to sustain near line speed on their 10Gb network, but they were using new 10TB drives.

A thing to understand is that even if a 2TB or 3TB drive is new production, it is likely a design developed five or more years ago, and the mechanical device (regardless of the rated speed of the interface) is correspondingly slower, probably around 130 MB/s. You would need to test the actual drive performance, because most manufacturers rate performance under ideal conditions rather than real-world ones. With the older-type drives you are describing, you will probably need about 16 drives in eight mirrored pairs to reach the full performance of the 10Gb network, and possibly closer to 24 drives.
You will probably want to work your way up to that over time, but the thing to keep in mind is that the number of vdevs is directly tied to the data rate of the NAS, because each vdev (in your case the vdevs are mirrors) has roughly the performance of a single disk.
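As a rough back-of-the-envelope check (approximate numbers, not exact figures):

10Gb line rate ≈ 1,100 MB/s of usable throughput
one mirror vdev of older 3TB drives ≈ 130 MB/s
1,100 / 130 ≈ 8.5 → eight or nine mirror vdevs, i.e. 16 to 18 drives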
Does that make sense?
Also, you can't do that on older hardware, because above a certain performance level the processor and memory will limit the maximum performance of the system.
MB: Asus M4A87TDU/USB3
CPU: AMD Athlon II X2 255
MEM: 4GB (this could be the problem)
That means you will need something newer than this, and you will absolutely need more memory. Do you have a budget to build something newer, or do you need to make do with what you have?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
SATA expansion card I could get more ports, but
Please don't use a SATA port multiplier, if that is what you are thinking. The best way to expand a FreeNAS box is with SAS controllers. Please look at the links under the Useful Links button in my signature and ask questions.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472

jschmok

Dabbler
Joined
Dec 2, 2018
Messages
27
Thanks so much for the tips, guys. I DO have some budget for this build and was hoping to use my decently old hardware to accomplish this task, but it seems that's not in the cards. It's likely better suited as a basic "Windows 10 with an SSD, browsing only" kind of machine...

What are your thoughts on the older HP Z workstations? HP Z400... Z420... Z220, etc., utilizing Xeon processors? Like I said before, I do have SOME budget for this build but want to keep costs as low as possible, as I've already spent some on this. I'm also seeing a Lenovo ThinkStation E31 workstation that I could go for. This one's got a Xeon E3-1240 v2 @ 3.2 GHz and 8 GB DDR3 out of the box.

Setting aside the fact that there would be more RAM in the machine, do you think I would see a performance increase with one of these Xeon-based workstations?
 

nick8719

Cadet
Joined
Jan 25, 2020
Messages
1
NIC: HP NC550SFP 10Gbps, connected via SFP+ DAC directly to a Windows 10 box using the same card

Hello everybody. I know the thread is a bit old, but I couldn't find anything else about this card (HP Emulex NC550SFP) and the topic of Windows drivers. The owner of this thread wrote that he uses this card not only in his NAS under FreeBSD, but also under Windows 10. I can't find a suitable working driver for my Win10 Pro 64-bit... can you please help me? It would be very nice to get this up and running. Thx
 