Increase Bandwidth to FreeNAS (greater than 1Gb network)

Status: Not open for further replies.

TXAG26

Patron
Joined
Sep 20, 2013
Messages
310
How can I increase the bandwidth to my FreeNAS machine without buying $500 10Gb NICs? Moving to 10Gb NICs seemed to be the answer in a couple of other threads, but that can't be the only solution, right?

I'm running FreeNAS 9.1.1 in ESXi 5.5 (the proper way, with server-grade Xeon/Supermicro hardware and an LSI 2308 passed through and dedicated to FreeNAS).

I have a two-port gigabit trunk set up on a layer 2 switch, and both of the physical Intel gigabit adapters on the ESXi server are combined into a load-balancing trunk using "Route based on IP hash". My issue is getting something equivalent to a "trunk" to carry over into FreeNAS to increase the available FreeNAS bandwidth. The incoming side of the vSwitch in ESXi is trunked and working great. However, FreeNAS only appears to be able to use one NIC (which maxes out at about 100MB/s). I've tried adding another NIC in FreeNAS to the vSwitch, but can't seem to figure out a way to do "Route based on IP hash" in FreeNAS.

On my Windows 7 machine, I can set up a "Team" with two Intel NICs and push out about 210MB/s to another "teamed" or trunked host.

Can FreeNAS be configured in a similar way, so that it can at least use two NICs simultaneously to increase its available bandwidth?
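
For what it's worth, the closest FreeBSD-side equivalent to a trunk that I've found is a lagg interface, which is apparently what the FreeNAS Link Aggregation page creates under the hood. A rough sketch of the underlying ifconfig commands, assuming em0/em1 as the virtual NIC names and a made-up address:

# Minimal lagg sketch; interface names and the address are placeholders.
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport em0 laggport em1
ifconfig lagg0 inet 192.168.1.50 netmask 255.255.255.0
# Caveat: LACP (and loadbalance) hash each flow onto a single member port,
# so one client connection still tops out at 1Gb/sec.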
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
First, it's not that FreeNAS only appears to be able to use one NIC; it's that trunking doesn't work the way you think it does. Trunking isn't meant to give you 2Gb of throughput over a single network link. It's meant to allow higher aggregate throughput for large numbers of workstations (i.e., businesses). As you are witnessing first hand, it's utterly useless for home use. You aren't the first one to wish it worked differently. Quite a few people in the forums (including myself) have learned this lesson the hard way. And if you ask around, the vast majority of us (including myself) are still on 1Gb since we can't afford the 10Gb hardware.

Unfortunately, the answer of "go with 10Gb hardware" is still the only way you are going to get more than 1Gb/sec. Windows does a lot of things with TCP/IP that are wrong. If you ask some of the more senior FreeBSD guys, they curse the living hell out of Microsoft because Microsoft does things that aren't really supposed to work, but makes them work while simultaneously ignoring the TCP/IP standard. Not surprising, considering Microsoft ignores standards when it's convenient for them.

If you use a protocol that supports multipath and fully understand how multipath works, it is possible to get more than 1Gb/sec. But you are limited to the protocols that support multipath. None of AFP, CIFS, or NFS supports it; iSCSI is your only option as far as I know. I will warn you, though, that if you can't figure out multipath on your own you probably won't get much help here. It's fairly complex to set up, and trying to troubleshoot a broken multipath setup via forum is like trying to mow your lawn from across the street with really long arms.
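
Just so you know what you'd be signing up for, the general shape of the ESXi initiator side is something like the sketch below. The adapter name vmhba33, the vmk1/vmk2 vmkernel ports, and the device ID are placeholders; each vmkernel port needs to sit on its own subnet alongside a matching FreeNAS interface.

# Bind two vmkernel ports to the software iSCSI adapter so each path rides
# its own physical NIC.
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
# Set the FreeNAS LUN to round-robin so I/O is spread across both paths.
esxcli storage nmp device set --device naa.XXXXXXXXXXXXXXXX --psp VMW_PSP_RR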
 

TXAG26

Patron
Joined
Sep 20, 2013
Messages
310
Cyberjock, thank you for the details and back story. I've read that VMware's VMXNET3 virtual adapter is actually a 10Gb device. Are there any plans for adding VMXNET3 drivers to FreeNAS to make them easy to install (like the e1000 adapters)? This seems like the easiest way to bridge the gap between $500 adapters and $2,000 switches, and having the VMXNET3 driver baked in would probably make things smoother.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Not that I know of. I believe it's a licensing issue with the vmxnet3 driver, but I could be mistaken.

There are ways to hack the driver in yourself, but you are totally on your own if you try them. One problem with the VMXNET3 driver is that it's very CPU-intensive with large quantities of traffic.
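
If you do go down that road, the mechanics are roughly: get hold of (or build) a vmxnet3.ko that matches FreeNAS's FreeBSD release, which is the hard and unsupported part, then load it by hand. Something like this, with the module name and paths assumed for the sketch:

# Copy the module somewhere the loader can find it and test-load it once.
cp vmxnet3.ko /boot/modules/
kldload /boot/modules/vmxnet3.ko
# To make it persist across reboots, add a loader tunable via the FreeNAS GUI
# (System -> Tunables), e.g. vmxnet3_load="YES"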
 

TXAG26

Patron
Joined
Sep 20, 2013
Messages
310
I might set up another FreeNAS instance and try hacking in the VMXNET3 driver, as there's plenty of headroom on the CPU. With RAID-Z, we're already relying on the CPU for the parity heavy lifting, so I'm OK with VMXNET3 using some as well. There's no reason not to if it increases performance, as multi-core server CPUs seem to be staying ahead of the horsepower curve.

FreeNAS would pretty much be the ultimate end-all solution if the development team could figure out a way to make the VMXNET3 driver seamlessly integrate and perform like a true 10Gb adapter.

The whitebox lab consists of: Supermicro X10SL7-F, Xeon E3-1240v3 3.4GHz 4C/8T, 32GB ECC (24GB dedicated to FreeNAS), LSI SAS2308 (flashed to IT mode and passed through; 6-drive RAID-Z2), and a dual-port Intel i350-T2. The setup has dual-port trunks all the way into the vSwitch, so bridging this last bottleneck in FreeNAS should open everything up.
 

pbucher

Contributor
Joined
Oct 15, 2012
Messages
180
Depending on what you are trying to do, the best use of your dual ports is to make a separate virtual network and put FreeNAS and one of your 1Gb NIC ports on it. That way sharing doesn't have to compete with other traffic. Also, if you're sharing back to ESXi, add an ESXi VMkernel port to the storage network.

In my setup I have FreeNAS connected to both my office network and my SAN network (two different IP addressing schemes), so I can manage it from the office network and also replicate over the WAN. But all traffic back to my ESXi boxes, and all in-building replication, goes over the storage network.

Also, the vmxnet3 driver works fine, though CPU usage gets ugly if you use jumbo frames with it, and you don't get much extra throughput from it (at least on 10Gb).
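
If it helps, the ESXi side of a dedicated storage network looks roughly like this from the command line (the vSwitch/port group names, vmnic1, vmk1, and the 10.10.10.x addressing are just examples; it's the same thing you'd click together in the vSphere client):

# Create a vSwitch with its own uplink, a Storage port group, and a vmkernel
# port for the host's NFS/iSCSI traffic.
esxcli network vswitch standard add --vswitch-name vSwitchStorage
esxcli network vswitch standard uplink add --vswitch-name vSwitchStorage --uplink-name vmnic1
esxcli network vswitch standard portgroup add --vswitch-name vSwitchStorage --portgroup-name Storage
esxcli network ip interface add --interface-name vmk1 --portgroup-name Storage
esxcli network ip interface ipv4 set --interface-name vmk1 --ipv4 10.10.10.2 --netmask 255.255.255.0 --type static
# FreeNAS then gets a second virtual NIC on the Storage port group with its own
# 10.10.10.x address, so storage traffic never competes with the office LAN.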
 

Frodo

Cadet
Joined
Dec 5, 2013
Messages
2
Sorry I am new here, so excuse my ignorance. Also excuse me if I am hijacking the thread, but it seems relevant.

I have done a bit of digging (I have just built a home/lab FreeNAS server on very cheap hardware). I have two basic ESXi hosts, and I am considering installing dual NICs in everything, using the onboard NIC in the NAS for management and home shares only, and directly cabling each host to one of the dual ports on the NAS. Could I then set up an iSCSI target for each host connected to the same ZFS pool, therefore getting 1Gb for each host rather than 1Gb shared through the switch? Or do I not know what I am talking about?

Cheers.
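
To make the idea concrete, the NAS side I'm picturing is something like this (interface names and subnets invented for the sketch; in practice I'd set it through the FreeNAS network GUI rather than ifconfig):

ifconfig em1 inet 10.0.1.1 netmask 255.255.255.0   # direct cable to ESXi host A
ifconfig em2 inet 10.0.2.1 netmask 255.255.255.0   # direct cable to ESXi host B
# Each host points its iSCSI initiator at "its" address (10.0.1.1 or 10.0.2.1),
# so each gigabit link carries only that host's traffic.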
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
"So is there any chance at all that future versions of FreeBSD (and FreeNAS) might support the type of NIC trunking that a lot of us have been chasing after?"

Not likely. What Microsoft does breaks TCP/IP stuff. FreeBSD follows the TCP/IP spec. You are literally suggesting we break things on purpose to improve things. Do you think you're going to convince some open-source programmers to ignore a spec? :P

(Quoting Frodo's question above.)

That sounds doable. I will tell you that most people can't even hit 1Gb/sec with iSCSI without some serious hardware (we're talking lots of RAM, an L2ARC, a fast pool, etc.: a really expensive system), so I doubt you're actually going to see a performance increase with what you are proposing. On my system I'm lucky to hit 30MB/sec with iSCSI despite the pool regularly doing almost 900MB/sec. I wouldn't even try to do what you are doing, as I know flat out I can't get speeds that would make me think 1Gb LAN is going to be a limiting factor.
 

Frodo

Cadet
Joined
Dec 5, 2013
Messages
2
(Quoting cyberjock's reply above.)

Well, I have started off OK then. My first attempt was reading at 80MB/s; however, the write speed was atrocious. I think one of my drives might be on the blink, but I haven't had a chance to diagnose it. I will have a look at the raw disk speeds this afternoon to see what I might be able to achieve. I haven't bought the cards yet, but I'm glad to know that in theory it might work.

Thanks for the reply.
 