HP DL360 G7 ESXi high power draw with VM

no_connection

Patron
Joined
Dec 15, 2013
Messages
480
I got a few G7s to play with or use, but I'm scratching my head over a small problem.
At idle, a dual-CPU server draws about 93W with ESXi 6.0 running. Not great, but fair since it's old. But as soon as I turn on a single VM (Windows 10 in this case) it jumps to 135W. The CPU is pretty much idle and nothing else is happening. I don't get why, since it's not doing anything.
I tried TinyCore and it did not seem to have any impact on power.


Any idea what is going on?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Shut down the VM to make the CPU idle again.

Running a VM means that the hypervisor scheduler is running, finding that the VM is due a timeslice, and handing off to it, which means that now /bin/vmx is running your world, and inside the world the Win10 kernel is running, doing all the things it thinks it needs to be doing. This may include background tasks and things other than running userland processes: what Windows sees is an idle machine, which it will interpret as a good time to be doing housekeeping. Stuff like insipid "pretty" screensavers, Cortana, Windows telemetry, background updates, filesystem indexing, and antivirus scanning is problematic as well. Some of us have spent a fair amount of time bludgeoning all the crap out of operating systems for use in virtualized environments; Windows is hardly the only guilty party.

You can in fact get a VM down to a point where it is using minimal CPU. Most modern operating systems actually play okay with hypervisors at a low level, so if you get something that isn't running all sorts of unnecessary crap, you can get down to a handful of MHz while idle. However, there is always going to be some overhead even then. Reducing the number of cores given to a VM reduces the number of physical cores that need to be awakened by the hypervisor: even if your VM isn't "doing anything", the hypervisor doesn't know that, so if you give a VM 8 cores, you will be pulling that many cores into an active state whenever the VM is scheduled.

A typical timeslice is 50 milliseconds, so your VM can be "busy" running many times per second. So you probably want to be careful about allocating vCPU.
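To put rough numbers on that, here is a back-of-the-envelope sketch. The figures are illustrative, not measured: it assumes each vCPU fields a guest timer interrupt at the classic Windows 64 Hz tick, and that every interrupt forces the hypervisor to schedule that vCPU onto a physical core, pulling it out of a deep C-state.

```python
def idle_wakeups_per_second(vcpus, timer_hz=64):
    """Rough estimate of how often physical cores get pulled out of
    deep C-states by a nominally idle VM.

    Assumptions (illustrative): each vCPU takes timer interrupts at
    timer_hz, and each interrupt wakes one physical core.
    """
    return vcpus * timer_hz

# A 64 Hz-tick guest with 4 vCPUs can wake cores ~256 times per
# second while still reporting near-0% CPU usage.
print(idle_wakeups_per_second(1))   # 64
print(idle_wakeups_per_second(4))   # 256
```

The point of the sketch: "idle" from the guest's perspective still means hundreds of core wakeups per second from the hardware's perspective, which is exactly the kind of thing that keeps a package out of its lowest power states.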

The details here are somewhat more complex and I am whitewashing some of this just to keep it understandable.
 

no_connection

Patron
Joined
Dec 15, 2013
Messages
480
The VM is not using any CPU. That is the thing. It's like 60-200MHz at most for a single core and maybe 350MHz for quad core. And the hypervisor is not reporting any extra CPU usage for the entire host, or at least not any more than when the VM is not running.
Single vCPU or 4 cores doesn't seem to matter; just having one bumps the power draw up. Running a 2nd one makes little to no difference. So one VM with a single core and 4GB RAM, and one quad core with 16GB RAM. I have also tried an excessive amount of RAM; no difference.
I have tried regular Win10 and the AME version (so no "housekeeping") and it's the same.
Running Windows bare metal draws less power than ESXi, about 85-89W.

When the VM is actually doing work you can see it right away in the power draw, so actual usage is reflected as expected. And performance when running Cinebench R15 is as expected.

I'm currently installing an Ubuntu VM to see how that does.
*edit2* Ubuntu doesn't exhibit this behavior, single core or quad core.

Host is a dual E5630, but it does the same thing with a single CPU. Also tested with different amounts of RAM, and also PC3L.
Pinning a single core at 100% is about 150W and 100% CPU on all cores is about 230W for reference.

Also, setting the power option to high performance seems to give the lowest power draw compared to OS control, while still boosting clock during single-core use. At least it did for Windows.
I don't think I tested the low power mode vs power draw, only when benchmarking, since single-core performance is not gonna cut it anyway. I'll pop off and do that.
*edit* putting it in low power mode (pretty much locking the CPU to 1.8GHz) makes no difference; it still draws that extra power.
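For anyone comparing: when the BIOS power regulator is in OS Control mode, ESXi's own host power policy takes over, and it can be checked from the host shell. A hedged sketch, assuming the advanced option name is /Power/CpuPolicy (verify the option and accepted values on your ESXi build):

```shell
# Show the current host power policy
# (typically Balanced / High Performance / Low / Custom)
esxcli system settings advanced list --option=/Power/CpuPolicy

# Switch the hypervisor-side policy (assumed value string;
# verify on your build before relying on it)
esxcli system settings advanced set --option=/Power/CpuPolicy --string-value="High Performance"
```

Worth ruling out, since the BIOS setting and the hypervisor setting can fight each other on these boxes.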
 
Last edited:

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
60MHz is a *lot* of CPU -- definitely nowhere near idle, at least from a hardware perspective. FreeBSD VMs that aren't even designed for optimal idle get down to 17MHz here, and it actually is possible to hit ~0MHz.

One of the things that you have to remember is that many of the tricks that an OS tries to use to drive CPU and power usage down work differently on a hypervisor than on a bare metal platform. And these tricks vary from platform to platform.

The Windows guest is simply trying to do more in the background, and Microsoft probably hasn't bothered to optimize for VMware either, since VMware is a competitor. It would be interesting, since you have "several" of these units, to see if Hyper-V exhibited the same issue.

One final note: I did notice, years ago, that the power profile of the E5-2697v2 was very different from the E5-2609 it replaced. I had about six months' experience with the 2609 while I was waiting for Ivy Bridge, and one thing I did notice was that the 2697 tended to burn many more watts when given relatively light workloads that hadn't been burning watts on the 2609.
 

no_connection

Patron
Joined
Dec 15, 2013
Messages
480
I won't really settle for the "Windows can't be run in a VM, just deal with it" argument. To me it's obvious *something* is causing it and I want to figure out why.

I did disable HPET, and it kinda seems to have solved the single-core AME VM taking up lots of power; it is now more in line with the "housekeeping" and general Windows overhead you described.

ray_hpet.PNG

You can see the difference: VM running on the left, VM off after the first gap, HPET disabled for the rest with the VM on. Not perfect, but better.
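For anyone trying to reproduce this, HPET is commonly disabled either inside the Windows guest or on the VM itself. A hedged sketch of the two usual knobs (the bcdedit value is a documented Windows setting; hpet0.present is the commonly cited .vmx option, so verify it against your ESXi version):

```shell
# Inside the Windows guest, from an elevated prompt, then reboot:
# stop forcing HPET as the platform clock source
bcdedit /deletevalue useplatformclock

# Or on the host, with the VM powered off, add to the VM's .vmx
# (assumed option name):
#   hpet0.present = "FALSE"
```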

The quad core VM is still very much stubborn.

*edit* I will also take a look at SCALE and see if that can solve the problem. Probably not gonna touch Hyper-V, but who knows.
 
Last edited:

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I won't really settle for the "Windows can't be run in a VM, just deal with it" argument. To me it's obvious *something* is causing it and I want to figure out why.

I didn't say Windows can't be run in a VM. It certainly can.

But you have to look at the fact that virtualization isn't magic, and different operating systems will have different quirks.

For many years during the early days, FreeBSD did very little to optimize for power consumption, because PC CPUs didn't support it, and the various minicomputers and mainframes that had been running UNIX didn't have any power management at all, so no one had ever considered it. There is a lot of fiddly work involved in making power management work well, and Intel and AMD have somewhat different strategies. Each OS has to navigate these challenges.

Consider all the times you've heard whining by smartphone owners that their latest OS has significantly impacted their battery life, but then a patch "fixed" it. What happened? What was fixed? Certainly the hardware didn't change. It's patterns of usage within the software.

I've spent a lot of time and effort optimizing various operating systems for virtualization environments. I'm *guessing* you haven't. Collecting the knowledge to make these things work well under virtualization is a long process. I wouldn't personally get too OCD about this, because a hypervisor is probably going to have enough VMs running on it that it never gets down to your 93W level once several things are being scheduled. But if you decide to tough it out and actually identify what's going on, hey, I like learning new things. Please share.
 