High iowait with mysql on iSCSI


ddimick
Contributor
Joined: Feb 23, 2013
Messages: 144
I'm running FreeNAS and an Ubuntu server as guests on the same ESXi host. The Ubuntu server mounts an iSCSI device extent from FreeNAS over a 10Gb virtual network with no physical adapters and with MTU 9000; both guests use vmxnet3 adapters. The extent is an LVM volume formatted with ext4, holding about 27GB of a MySQL database.
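
For reference, a quick end-to-end sanity check that jumbo frames actually survive the whole virtual path (the address is a stand-in for the FreeNAS target; 8972 bytes of payload plus 28 bytes of IP/ICMP headers is exactly 9000):

Code:
# Linux side; -M do sets the don't-fragment bit, so this fails loudly
# if any hop can't pass a full 9000-byte frame.
ping -M do -s 8972 -c 4 192.168.10.5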

What I'm seeing is normal load on FreeNAS, but iowait on the Ubuntu system spikes to 99+% under MySQL operations. On the VMware side I'm not seeing any CPU or I/O utilization that looks unreasonable. The network shows about 200Mb/s of throughput.
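
This is how the iowait is being observed on the Ubuntu guest, in case the numbers help (sysstat's iostat; the device name will vary):

Code:
# Extended per-device stats every 5 seconds; high await and %util near
# 100 on the iSCSI-backed device mean requests are queueing at the disk.
iostat -x 5
# The 'b' column counts processes blocked in uninterruptible I/O sleep.
vmstat 5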

I'm looking for ideas on how to further troubleshoot or tune the system to reduce iowait.
 

jgreco
Resident Grinch
Joined: May 29, 2011
Messages: 18,680
So what you've done is set up a high-performance networking environment, and you have a possibly large-ish database (depending on what's stored and how it's used).

What's totally missing here is all the useful information that one would need to have any idea of why your database is performing poorly.

What's your physical disk layout? One 5400RPM hard drive can sustain maybe several dozen random-seek IOPS. This always seems to come as a shock to people who say "but but but my drive benchmarks at 120MBytes/second!" without realizing that's sequential access speed. One good SSD can do tens of thousands of IOPS. RAIDZ2 is slower than mirrors, etc.
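
If you want to put a number on what your layout actually delivers, a small random-read test with fio will tell you (the file path here is a placeholder on the iSCSI-backed filesystem; never point this at live database files):

Code:
fio --name=randread --filename=/mnt/iscsi/fio.test --size=1G \
    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=30 --group_reporting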

What sort of queries are you running on the database? How many per second? Are they mostly reads? Mostly writes? A mix?
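
One way to answer those from the counters MySQL already keeps (the -r flag makes mysqladmin print deltas, so the second and later samples are per-10-second rates):

Code:
mysqladmin -ri 10 extended-status | grep -E 'Com_(select|insert|update|delete)'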

Does your MySQL database server have plenty of RAM, with its indexes cached in memory, or is it having to hit disk for every little suboperation it has to do?
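
InnoDB's own counters answer this directly: Innodb_buffer_pool_reads counts requests that missed the buffer pool and went to disk, so watch it against Innodb_buffer_pool_read_requests:

Code:
# If the first counter climbs quickly relative to the second, the
# working set doesn't fit in memory.
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';"
mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"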

And then, probably the only question that has any direct relevance to FreeNAS: how busy are your disks? (Use gstat or iostat; gstat is easier for a beginner to interpret.) If your disks aren't staying green in gstat, your database is definitely going to be feeling some grade of slow, anywhere from just a little laggy to horribly brokenly miserable.
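
On the FreeNAS console that looks something like:

Code:
# Refreshes every second; watch the %busy column -- pegged near 100
# means the spindles are seek-saturated (rows turn red, not green).
gstat -I 1s
# FreeBSD's iostat view, extended stats every 5 seconds:
iostat -x -w 5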

MySQL likes memory for some of the same reasons ZFS likes memory: caching, caching, caching. The less you hit disk, the faster you go.

Once - and only once - you've looked at and understood where your performance problem lies:

For a 27GB database, if it turns out you're suffering from seek saturation on some spinny disks and the database isn't expected to grow much, the answer is to go buy yourself a pair of 60GB SSDs, mirror them, and then go do something productive with your time, because that's the lowest cost-to-solve.
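
For what it's worth, putting such a pair to work under ZFS is about a one-liner (pool and device names here are placeholders; list yours with camcontrol devlist):

Code:
zpool create fastdb mirror da1 da2
zpool status fastdb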

If you expect the database to get bigger, you can help FreeNAS boost read speeds from spinny rust by adding ARC (memory) or ARC+L2ARC (memory+SSD). Start by making sure your FreeNAS VM has at least 16GB of RAM (if you can't do that, or that number is objectionable to you, you're probably unable or unwilling to throw sufficient resources at the problem to get the benefits of ZFS). If it's still slow, you can look at the ARC statistics to see whether L2ARC would help. There may also be things to consider with the block size ZFS uses.
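
A sketch of what checking and acting on that might look like on the FreeBSD/FreeNAS side (pool and dataset names are placeholders):

Code:
# ARC hit/miss counters; a poor hit rate suggests more RAM or L2ARC.
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
# Add an SSD (da3 here) as an L2ARC cache device:
zpool add tank cache da3
# InnoDB uses 16k pages; matching the recordsize avoids read/modify/write
# amplification. Only affects files written after the change.
zfs set recordsize=16k tank/mysql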

But beware: the best place to start is to understand how your database would perform on hardware without the virtualization and the added overhead of iSCSI and FreeNAS, using some of the traditional wisdom readily available all over the net on that topic (optimize indexes, etc.). ZFS has some capabilities that can enhance your performance, but a poorly implemented database served from even the fastest hardware is still likely to suck.
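
That traditional wisdom starts with things like checking whether your hot queries even use an index (table and query below are made up for illustration):

Code:
mysql mydb -e "EXPLAIN SELECT * FROM orders WHERE customer_id = 42\G"
# Capture anything slower than 2 seconds for later analysis:
mysql -e "SET GLOBAL slow_query_log = ON; SET GLOBAL long_query_time = 2;"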
 

ddimick
Contributor
Joined: Feb 23, 2013
Messages: 144
First, thank you for the guidance on the different areas to look at to improve performance. What I was seeing was the result of insufficient memory allocated to the VM, which was forcing far too frequent disk access, as you had suggested might be the case. Adding another 1GB of RAM and some basic tuning of InnoDB caching resulted in an immediate reduction of about 40% in iowait under the same load. As you probably already surmised, this really wasn't a NAS/network/virtualization issue at all. :)
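
For anyone landing here later with the same symptom, the relevant knob is the InnoDB buffer pool, set in my.cnf (the path and size below are illustrative; on a dedicated database VM, 50-70% of guest RAM is a common starting point):

Code:
# /etc/mysql/my.cnf -- restart mysqld after editing
[mysqld]
innodb_buffer_pool_size = 1G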

Thanks for the pointers, very much appreciated.
 