FreeNAS ESXi Slow Network Speed with PCI Passthrough Disks


airecken

Cadet
Joined
Sep 23, 2014
Messages
2
Hello,

I'm reaching out to the community in hopes that someone might be able to help me solve a current roadblock.

I am building an ESXi home server and I would like to run FreeNAS with passthrough as one of the VMs on it. My current setup is a Supermicro X10SL7-F running ESXi 5.5 off a USB thumb drive, with FreeNAS 9.2.1.7 loaded onto a dedicated SSD datastore. I have 6 WD Red 3TB drives in RAIDZ2, with the PCI controller passed through to FreeNAS.

Most of my home computers are Apple Macs, so I wanted to set up an ESXi server where I can perform Time Machine backups as well as manual backups of files.

I've installed the VMXNET3 drivers in my FreeNAS setup and they work fine. However, my transfer speed over the network is slow regardless of which protocol/share I use. I'm only getting speeds in the realm of 5-6 MB/s, whereas if I access FreeNAS from a Windows VM on the same ESXi host, I can hit speeds of 100 MB/s.

I've tried the various ESXi network drivers and different physical network ports (I have a total of 4 gigabit ports on the ESXi server) to no avail. I don't know where else to look to track this down.

Any help is appreciated!
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
Such all-in-one solutions mostly get stuck, which is why you will hardly get replies here ;) The simple and only solution: get a dedicated box for FreeNAS and make sure you understand the system. BTW: RTFM.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Virtualization is tricky.... the warnings are serious. LOSING YOUR DATA is a real risk. The policy of "if you have to ask you shouldn't do it" isn't really that unreasonable :). But it's not necessarily slow.

I'll throw you a bone. Since you're set up with passthrough, it is trivial for you to reboot to bare metal. Build a new USB stick, boot on bare metal, and compare and contrast your performance. Running a Windows VM within ESXi doesn't show you much beyond how fast your pool can move data; with six 3TB Reds, we already know that it will do over 500 MB/s sequentially. VMXNET3 doesn't gain you much, if anything, when used internally; you can use e1000 without performance loss for the most part, and it is built in and will survive an update.
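
Something like this, run from the FreeNAS shell in both boots, would give you a comparable number for sequential pool writes. This is only a sketch: I'm assuming Python is on the box, and /mnt/tank is a placeholder for your actual pool mountpoint.

```python
# Sketch: time a sequential write on the pool, to compare bare metal vs. ESXi.
# Assumes Python is available and /mnt/tank is your pool mountpoint (a
# placeholder; substitute your real path). Random data is used so lz4
# compression on the dataset doesn't inflate the numbers.
import os
import time

PATH = "/mnt/tank/throughput_test.bin"   # hypothetical test file on the pool
BLOCK = os.urandom(1 << 20)              # 1 MiB of incompressible data
TOTAL_MIB = 4096                         # write 4 GiB total; adjust to taste

start = time.time()
with open(PATH, "wb") as f:
    for _ in range(TOTAL_MIB):
        f.write(BLOCK)
    f.flush()
    os.fsync(f.fileno())                 # make sure the data actually hit the disks
elapsed = time.time() - start

print("wrote %d MiB in %.1f s -> %.1f MiB/s" % (TOTAL_MIB, elapsed, TOTAL_MIB / elapsed))
os.remove(PATH)                          # clean up the test file
```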

There really isn't much overhead when you are running FreeNAS on PCI passthrough: less than 3% on my box. My 32GB Haswell 1220v3 maxes out GbE all day, every day. On 8, 16, or 24 GB of RAM, with 1 or 2 cores... it doesn't matter. Specifically, 173 MB/s vs 180 MB/s writing, and 397 vs 401 MB/s reading. That is ESXi vs bare metal. (This pool has a couple of old, slow disks; yours should be significantly faster.)

So, to troubleshoot this: boot to bare metal and test your setup. You need to validate your local pool speeds, your network speeds between clients (iperf), and then your speeds via your chosen protocols. Test with large files and small. Then move on to specific workloads, i.e. feeding a VM, streaming, backups, etc. Time Machine on the first write is slow as hell, virtual or not.
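
For the client-to-server network step, the iperf run can be as simple as the sketch below. I'm assuming iperf is installed on the client, "iperf -s" is already running on the FreeNAS box, and the IP address is a placeholder.

```python
# Sketch: drive iperf from a client (e.g. one of the Macs) against the FreeNAS
# box and print the result. Assumes iperf is installed on the client and
# `iperf -s` is running on the FreeNAS side; the address below is a placeholder.
import subprocess

FREENAS_IP = "192.168.1.50"   # substitute the real address of the FreeNAS box

result = subprocess.run(
    ["iperf", "-c", FREENAS_IP, "-t", "30", "-i", "5"],  # 30 s run, report every 5 s
    capture_output=True, text=True, check=True,
)
print(result.stdout)   # the final line shows the averaged bandwidth
```

On a healthy gigabit path that should report somewhere around 940 Mbit/s; a number far below that points at the network (cabling, switch, NIC) rather than at ESXi or FreeNAS.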

When you have valid numbers and you are comfortable, do it all again under ESXi and see where the differences are. Then move on to tweaking. I suspect you are seeing pretty normal behavior; there is likely Wi-Fi, network overhead for small files, and many other things besides virtualization at play. ESXi will eat your data, but it is no slug when configured properly.

So that's the good deed of the day... and in no way does it condone virtualizing on a production system. All ESXi advice is worth the price paid, and all data loss is at your own risk, with an assumed policy of "no crying and no help" if and when your data disappears. Did I mention you might lose data? ;)

Good luck. Read your face off, every post by @jgreco. Here be dragons.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
ESXi will eat your data, but it is no slug when configured properly.

keywords: "when configured properly"

I can vouch that if you aren't intimately familiar with ESXi, you can do quite a few things that kill FreeNAS performance without even realizing they will do that. :P

No, I won't try to discuss them, because "if you have to ask you shouldn't do it." I'm not an authority on the topic, and I would rather keep my mouth shut than disseminate wrong info on something that isn't directly about supporting FreeNAS.
 

airecken

Cadet
Joined
Sep 23, 2014
Messages
2
OK, just to post back: thanks for the helpful comments.

I did more tests and realized that all virtualized machines were having slow transfer rates, and isolated the problem to the physical switch I was using. Since then, I have replaced the switch and am maxing out the gigabit connection. There was nothing wrong with the ESXi configuration or the FreeNAS configuration.

I have hardware passthrough with the FreeNAS VM and can boot a FreeNAS USB stick to access all drives/pools directly without any issues. In fact, it seems like I can go back and forth between the ESXi USB boot and the FreeNAS USB boot without any problems (I don't even have to import the volumes).

I'm not sure the VMXNET3 drivers were actually needed, but since everything seems to be working fine I may just leave it alone at this point.

Thanks for the help. If anyone needs help with an ESXi/FreeNAS setup, I'd be more than happy to help back.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Glad you got it. You are absolutely correct that you can swap between bare metal and ESXi without importing or any trouble. That takes ESXi error right out of the picture when you are troubleshooting, and it also allows you to drop your pool onto any bare-metal hardware. Excellent recovery positions, IMHO.

We treat ESXi/FreeNAS a little like Fight Club, mostly because the compassionate mods hate seeing people lose their data. It is really easy to make really bad choices. It is also possible to do everything right... and still get burned. It's also nearly impossible to duplicate what went wrong when things go sideways, so there is little progress made on understanding how disasters actually occur. Is it user error? A bug? A glitch? An interaction? A driver? Configuration? It also seems that there are many poor implementations of VT-d. It's just one big black hole of potential despair.

Good luck. Be paranoid. Make sure you have tested backups. You might need them ;).
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The ability to boot to bare metal to do things like iperf testing is invaluable; that is one of the best reasons not to use the vmxnet drivers, by the way. I will also note that there is a lot of testing that should be done, especially if virtualized, because the more complicated your setup becomes, the harder it is to identify specific problems. Things like testing with iperf shouldn't be considered optional, or something to do only after you discover problems.
 