ESXi, pass-thru, all in one, revisited

Status
Not open for further replies.

jmichealson

Cadet
Joined
Apr 23, 2014
Messages
3
I've recently attempted to build an all-in-one ESXi + FreeNAS solution. Previously I had been using CentOS as the NFS server with an 8888ELP card passed through via ESXi, with 6x 2TB disks in RAID 10. That worked well for years.

After discussing with friends who are happy with FreeNAS, I decided to upgrade the single-proc Supermicro SAX to a better SM dual six-core with lots of RAM. Did that, and ended up getting an IBM card and flashing it to LSI firmware so I could use it as a JBOD/HBA, since the 8888ELP simply wouldn't pass through to FreeNAS properly. (Again, the old setup worked with CentOS for a long time.) Everything works properly; I even used the ESXi vmxnet3 driver for BSD that I found on the forum here. The problem is that NFS over the 10Gb vNIC does about 20MB/s, compared to 128MB/s on the RAID 5, and it was far higher on a single-SSD datastore for obvious reasons.
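To separate the network side from the storage side, one rough first step is a raw TCP test with iperf, assuming it's available on both ends (FreeNAS bundles it; run the client from another VM or a workstation on the same 10Gb segment). The address below is just a placeholder:

  # on the FreeNAS VM
  iperf -s
  # from another VM/workstation on the 10Gb path (placeholder IP)
  iperf -c 192.168.10.5 -t 30 -P 4

If iperf also tops out around 20MB/s, the problem is in the virtual networking; if it runs near line rate, the bottleneck is on the NFS/ZFS side.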

I have 4x 2TB disks in the ZFS pool, and 2x 32GB SSDs as a mirrored ZIL (SLOG).
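Worth noting: ESXi mounts NFS datastores with sync writes, so that SLOG mirror sits in the write path for everything. A quick way to see whether sync writes are the limiter, with placeholder pool/dataset names, and with sync=disabled used only as a short-lived test since it risks data loss on power failure:

  # watch whether the log devices are actually taking writes
  zpool iostat -v tank 5
  # temporarily compare throughput with sync writes off (test only!)
  zfs get sync tank/vmstore
  zfs set sync=disabled tank/vmstore
  # rerun the NFS test, then put it back
  zfs set sync=standard tank/vmstore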

I'm willing to post any data about the system; does anyone have any thoughts? Ping at 9k frames works fine, and Wireshark doesn't show the fragmentation I expected.
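A plain 9k ping can still look fine if something along the path silently fragments, so a stricter check is to set the don't-fragment bit at the full jumbo payload (9000 minus 28 bytes of IP/ICMP headers = 8972; the addresses below are placeholders):

  # from the ESXi shell
  vmkping -d -s 8972 192.168.10.5
  # from FreeNAS (FreeBSD ping; -D sets don't-fragment)
  ping -D -s 8972 192.168.10.1

If either one fails while a normal ping works, some hop in the middle isn't really passing 9000-byte frames.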

Thanks in advance.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I do exactly what you are doing/trying to do. You'll find very, very few people here even willing to discuss virtualizing FreeNAS, since it's a treacherous path.

I do NFS on one of my pools over 10Gb, and I can almost saturate the 10Gb card. Here are my "lessons learned", which may or may not be useful for you:

1. Try without the jumbo frames. ESXi totally smashes my 9014-byte frames; it appeared to be hard-coded not to go above 8988 or so. Since my workstation's NIC only offers a dropdown menu rather than a "fill in the blank" MTU option, I was forced to go down to 4096 or so. The performance difference was noticeable. If I hadn't been doing a direct connection between the two machines, I'd have opted for a 1500 MTU. (See the MTU checklist after this list.)
2. Use the Intel NIC for the VM. I tried using the vmxnet stuff and the virtual Intel NIC just worked better for me.
3. It works well, but the performance will fluctuate wildly based on how busy the pool is with other tasks, block size of the data on the pool, compression, etc.
4. Be wary about how much CPU you give to your VMs. It's super easy to choke out your FreeNAS VM, or choke out your other VMs, just by changing a few settings.
5. This is crazy risky, and if your data is important you'd better have a backup server that isn't virtualized.
6. If you are trying to do NFS over Windows, just don't. It always sucks for performance and there's not a damn thing you can do about it from what I've heard in IRC. I can do about 350MB/sec over CIFS on my box, and I know I'm being limited to those speeds because of the virtualization.
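On the jumbo-frame point in #1, the MTU has to match at every hop: vSwitch, vmkernel port, guest NIC, and physical switch. A rough checklist, with vSwitch1/vmk1 and the guest interface name as placeholders for whatever your setup actually uses:

  # ESXi side
  esxcli network vswitch standard list
  esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
  esxcli network ip interface set --interface-name=vmk1 --mtu=9000
  # FreeNAS guest side (vmx0 or vmx3f0 depending on the vmxnet3 driver)
  ifconfig vmx0 mtu 9000

Any single mismatch is enough to cause the fragmentation or drops described above.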
 

hidden72

Dabbler
Joined
Aug 8, 2011
Messages
22
I'm also using a couple of virtualized instances of FreeNAS, leveraging ESXi's PCI passthrough for the HBAs (a combination of LSI and Areca cards). I'm using FreeNAS to serve shared NFS datastores back to the virtualized environment. I've been doing this for a couple of years now and it's worked really well for me. This is mainly for a personal virtualization and networking lab.
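For anyone setting up the same thing, a quick sanity check from inside the FreeNAS VM that passthrough is really handing the HBAs and their disks to the guest (mps is the FreeBSD driver for LSI SAS2 cards, arcmsr for Areca):

  # list the disks/controllers the guest can see
  camcontrol devlist
  # confirm the HBA drivers attached
  dmesg | grep -Ei 'mps|arcmsr'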

I've just recently started comparing the Intel NIC vs. vmxnet3, and also playing with jumbo frames. My first go-around was less than spectacular: under heavy NFS traffic, ESXi would temporarily disconnect from FreeNAS and I would get an APD (All Paths Down) event. I've since rolled back the jumbo frames, and things are stable now with vmxnet3. I'm hoping to give it another go in the short term (see the log check below).
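If anyone wants to confirm it really was an APD rather than just a slow mount, the events are logged on the ESXi host (log path as on ESXi 5.x):

  # on the ESXi host
  grep -i apd /var/log/vmkernel.log
  esxcli storage nfs list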

I can easily peg my gig interface using CIFS between Windows and the FreeNAS server.

On the 10GbE side of things, I've seen almost line-rate during multiple concurrent storage vmotions. At one point I grabbed this from my 10GbE switch:
sh int e 1/2/1 | in util
  154468 packets/sec, 97.67% utilization
 