CIOs and IT department heads are used to the idea of doing more with less, but if your business relies heavily on relational database management systems (RDBMSs), there’s a pretty good chance you might be able to do a whole lot more with no extra investment.
Improving database performance is something of a ‘dark art’, so we’ve put five zero-cost performance upgrades to the test in iTnews Labs. The results may surprise you.
How we tested
We fired up our HP ProLiant DL380p Gen8 server, powered by dual Intel Xeon E5-2650 eight-core CPUs and 32GB of RAM, and created two separate virtual machines (VMs) using VMware’s vSphere hypervisor, each running the Ubuntu Server 14.04.1 LTS operating system and MySQL 5.5.38.
The first VM was a spec-limited build providing only three virtual CPUs (vCPUs) and 1GB of RAM; the second, eight vCPUs and 8GB of RAM. We created identical databases for both VMs, each with eight tables and one million records per table, giving a total size of 2GB each.
We used the virtual machines to carry out a series of read-only (RO) and read-write (RW) transaction tests with Sysbench 0.5, from one to 128 concurrent threads in binary stepping. We chose the 2GB database size deliberately to ensure it was too large to fully cache in our 1GB RAM VM but small enough to do exactly that in the 8GB VM.
To maximise the accuracy of our results, the vCPUs of each VM were split using the Linux taskset command to separately handle benchmark and database tasks. The 1GB VM had one vCPU keeping tabs on Sysbench, the remaining two left to MySQL, whereas the 8GB VM had two vCPUs on Sysbench and the final six running the DBMS. (This is a technique recommended by the MariaDB Foundation/SkySQL.)
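As a concrete sketch of that split on the 1GB VM, the commands below pin a running mysqld process to two vCPUs and the Sysbench client to the remaining one. The script path, database name and credentials are illustrative only; the location of sysbench 0.5’s oltp.lua test script varies by distribution.

```shell
# Pin mysqld to vCPUs 1-2, leaving vCPU 0 for the benchmark client.
sudo taskset -cp 1,2 "$(pgrep -x mysqld)"

# Run a read-only OLTP test on vCPU 0 against our eight 1M-row tables.
# (Illustrative values: sbtest database, eight-thread run, five minutes.)
taskset -c 0 sysbench \
    --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
    --mysql-db=sbtest --mysql-user=sbtest --mysql-password=secret \
    --oltp-tables-count=8 --oltp-table-size=1000000 \
    --oltp-read-only=on --num-threads=8 \
    --max-time=300 --max-requests=0 \
    run
```

Dropping `--oltp-read-only=on` gives the equivalent read-write run; stepping `--num-threads` from 1 to 128 in powers of two reproduces the binary stepping described above.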
Database performance is a notoriously slippery substance to nail down – it constantly changes, depending on your hardware setup and the database itself. Even so, our test results show these five simple techniques are well worth further investigation for your setup and may well provide the ultimate zero-cost performance upgrades.
While no two databases are alike, the one feature they all share that affects performance is the my.cnf configuration file. MySQL exposes what seems like a million different global variables for fine-tuning transaction throughput in this one file, and every MySQL distribution ships with an initial version of it, including MySQL 5.5.38 as installed from Ubuntu Server 14.04.1 LTS or Debian 7.6.0.
For the most part, MySQL simply runs with its default options, treating anything specified in the initial my.cnf file as an override. The key settings provided in Ubuntu Server 14.04.1 LTS’ MySQL config file include:
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 128
query_cache_limit = 1M
query_cache_size = 16M
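Because my.cnf entries are overrides on top of built-in defaults, it’s worth confirming what the server is actually running with. One way is to query the live server; the statement below is standard MySQL, though the credentials are placeholders. (Note that `key_buffer` in my.cnf is an alias for the `key_buffer_size` variable.)

```shell
# Compare the live settings against what my.cnf claims (credentials are placeholders).
mysql -u root -p -NBe "SHOW VARIABLES WHERE Variable_name IN
    ('key_buffer_size', 'query_cache_size', 'thread_cache_size',
     'innodb_buffer_pool_size');"
```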
Starting from this base level, we tested five different configuration options on both VMs to see how far transaction speed can be improved and how the modifications affect both read-only and read-write performance in different hardware setups.
Performance Tweak #1 – Increasing InnoDB buffer pool size
Since MySQL 5.5, InnoDB has been the default database storage engine, and it’s a smart choice, if for no other reason than its row-level locking, which allows greater potential concurrency than its predecessor MyISAM’s table-level locking. It’s also said to help InnoDB deliver faster write actions, although many still consider MyISAM the first choice for performance in mostly read-only applications.
One of the benefits of InnoDB is that it implements its own single memory storage area, or ‘buffer pool’, for caching data and indexes. If you don’t specify it in my.cnf, MySQL 5.5 implements a default buffer size of 128MB, yet Oracle recommends the pool be ‘as large a value as practical’. This is a clear indication that there is no perfect value and that it should vary with your hardware setup. (Oracle goes as far as saying a large buffer pool can make InnoDB act like an in-memory database.)
But on a server build with only 1GB of RAM available, there’s a limit to how far you can or should go, so to ensure we didn’t end up face-palming ourselves over overcommitted memory, we increased the default 128MB buffer size to a conservative 512MB for testing on our 1GB VM.
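The 1GB VM change amounts to a one-line override in my.cnf (in the [mysqld] section), shown below as used in our tests:

```ini
# my.cnf ([mysqld] section) - the 1GB VM test configuration
innodb_buffer_pool_size = 512M
```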
This is important because MySQL’s total RAM usage includes not only the buffer pool, but also approximately 2.7MB of RAM for every concurrent thread it’s asked to create with thread caching enabled. So in setting the InnoDB buffer pool size, you have to account not just for your total RAM, but also for the buffer and cache requirements likely needed elsewhere.
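As a rough worked example for our 1GB VM, the arithmetic below combines the 512MB pool with the per-thread overhead at our maximum thread count. The ~2.7MB per-thread figure is the estimate quoted above, not a fixed constant; your build may differ.

```shell
# Rough MySQL memory budget for the 1GB VM at peak concurrency.
BUFFER_POOL_MB=512      # innodb_buffer_pool_size
MAX_THREADS=128         # highest Sysbench concurrency we test
PER_THREAD_MB=2.7       # approximate per-connection overhead (estimate)

awk -v p="$BUFFER_POOL_MB" -v t="$MAX_THREADS" -v m="$PER_THREAD_MB" \
    'BEGIN { printf "~%.0f MB peak MySQL usage\n", p + t * m }'
# Prints: ~858 MB peak MySQL usage
```

That leaves precious little of the 1GB for the operating system and filesystem cache, which is why we stopped at a conservative 512MB rather than pushing the pool higher.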
But with 8GB of RAM on our second VM, we could throw 3GB at the buffer pool with impunity.
Looking at the 1GB VM results, the read-only performance gains didn’t really kick in significantly until the thread count rose above four, with rates increasing 4-6 percent. However, the read-write performance was almost the inverse – notable gains of up to 11.5 percent at low thread counts, decreasing but still significant up to the top 128-count level. If you look at the raw results, read-only transactions peaked at an eight-thread count whereas read-write performance didn’t max out until 32 threads.
Certainly on this server build, upping the InnoDB buffer pool size has made a worthwhile improvement, regardless of what else is going on inside the my.cnf file.
By comparison, and InnoDB buffer pooling aside, our 8GB VM delivered completely different results, most obviously the flat-lining of read-only transaction rates at thread counts of eight and up, in complete contrast to the read-write performance, which clearly improved with thread count.
As for the changes brought about by the InnoDB buffer pool size, there were gains at lower thread counts for both read-only and read-write speeds, but it seemed to marginally pull down transaction rates in read-write mode at higher thread counts. However, with other constraints evident in the my.cnf file, it’s too early to write the buffer pool off in this situation.
Fine-tuning the InnoDB buffer pool
Overall, it seemed that changing the InnoDB buffer pool made more of a difference to the 1GB VM than the more-resourced 8GB VM. But that may come down to how MySQL implements the buffer pool – and the way you use it.
Remember that if you don’t choose a size, MySQL chooses 128MB for you, leaving a single, lightweight buffer that’s easy to handle. On the 8GB VM, that pool becomes a 3GB man-mountain.
But here’s the really cool feature of the InnoDB buffer: although you can make the buffer pool almost as large as you want, MySQL 5.5 also allows you to split the pool into multiple sections, or ‘instances’, once it exceeds 1GB. The idea is that multiple buffers reduce contention during concurrent read-write operations and make more efficient use of all that lovely space. Oracle’s recommendation for best efficiency is to set the instances switch equal to the number of gigabytes of RAM you set aside for the buffer pool. For example, if you set aside 3GB of RAM, the instances switch is set to 3, so that each instance gets a 1GB chunk to play with.
You do this by implementing the following my.cnf commands:
innodb_buffer_pool_size = 3G
innodb_buffer_pool_instances = 3
But setting the instances switch is entirely optional, and we didn’t use it in these specific tests. Instead, we ran extra tests at the end of this series with the instances switch enabled, which better show what it does for overall performance, particularly read-write rates on well-stocked servers.
One more thing: if you’re running a dedicated database server, Oracle points out that in certain setups you can allocate as much as 80 percent of RAM to the InnoDB buffer pool, showing just how effective it can be.
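On a dedicated 8GB database server, combining that 80 percent guideline with the one-instance-per-gigabyte rule might look something like the fragment below. The figures are a sketch only; we didn’t benchmark this particular configuration, and a 6GB pool deliberately stays a little under the 80 percent ceiling to leave headroom for per-thread overheads.

```ini
# my.cnf ([mysqld] section) - sketch for a dedicated 8GB database server,
# following Oracle's ~80% guideline and one instance per GB of pool
innodb_buffer_pool_size      = 6G
innodb_buffer_pool_instances = 6
```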
Read on for four more free, easy ways to fine-tune your MySQL database...