ng_ether_load tunable for VirtualBox - what type?

Status: Not open for further replies.

cyberjock
Inactive Account
Joined: Mar 25, 2012
Messages: 19,525
In that case you could argue, as an extension of that idea, that if you want to run VMs you should be doing a FreeNAS build that is in line with what you would use for VMs. That means mirrors, lots of vdevs, an L2ARC, and 64GB+ of RAM.

Not disagreeing with your assessment, but I'm saying that we could very easily and logically argue for harsher "needs" for VirtualBox to run stably.

Personally, I've always felt that wanting to run VirtualBox on your FreeNAS really implies mirrors, lots of vdevs, an L2ARC, 64GB+ of RAM, etc.
 

Robert Trevellyan
Pony Wrangler
Joined: May 16, 2014
Messages: 3,778
In that case you could argue, as an extension of that idea, that if you want to run VMs you should be doing a FreeNAS build that is in line with what you would use for VMs. That means mirrors, lots of vdevs, an L2ARC, and 64GB+ of RAM.
Point taken. If my box were busier, perhaps I'd still be seeing aborted VMs. As it is, the box is not busy and the VMs are pretty lightly loaded too. I'm seeing an ARC Hit Ratio of 99%, which I attribute to ZFS being able to keep everything related to VirtualBox in RAM.

EDIT: that last phrase doesn't really make sense. Still, ZFS is clearly making very good use of the available RAM.
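
For anyone who wants to check the same figure from a shell, the raw counters behind the FreeNAS reporting graph can be read with sysctl on FreeBSD; the ratio is hits / (hits + misses). A minimal sketch, assuming shell access to the box:

# Read the ZFS ARC hit/miss counters (FreeBSD/FreeNAS)
sysctl kstat.zfs.misc.arcstats.hits
sysctl kstat.zfs.misc.arcstats.misses
# ARC hit ratio = hits / (hits + misses)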
 

Robert Trevellyan
Pony Wrangler
Joined: May 16, 2014
Messages: 3,778
Resurrecting this thread one more time ...

After another extended period with no aborted VMs, I tried turning off "Use host I/O cache" in the SATA controller settings. I immediately began to see aborted VMs again. Turning "Use host I/O cache" back on eliminated the problem. I conclude that for a system like mine, this setting is vital for VM stability.
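
For reference, the same setting can be toggled from the command line with VBoxManage while the VM is powered off. A minimal sketch, assuming a VM named "myvm" with a storage controller named "SATA" (both names are placeholders; use whatever yours are called):

# Enable the host I/O cache on the VM's SATA controller (VM must be powered off)
VBoxManage storagectl "myvm" --name "SATA" --hostiocache on
# Dump the VM's settings to confirm the change
VBoxManage showvminfo "myvm"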
 

MindBender
Explorer
Joined: Oct 12, 2015
Messages: 67
I conclude that for a system like mine, this setting is vital for VM stability.
I'm afraid not: I have an 8-core Xeon D-1540 with 64GB of RAM, and I have the same issue. Enabling "Use host I/O cache" in the SATA controller settings cures the problem, but it just doesn't feel like a solution.
 