Database Scalability, Part 2: Hardware

We began this series on database scalability in Part 1 with a basic introduction to what scalability is and the three ways it is achieved. Before splitting your database into 4,000 shards across 1,800 servers, backed by 9,000 memcached instances on another 805 servers, as Facebook has done, there are a number of hardware improvements that can carry you to the next level of scale.

Achieve Scalability With Existing Hardware

Perhaps you already have good hardware, but the system software isn’t running as fast as it could. There are a number of ways to achieve scalability without adding new hardware, and which don’t involve touching the database at all. First, optimize the operating system. If the database is on a Windows or Mac machine, consider moving to a Unix-based system, which is significantly more lightweight. Also, check that the operating system recognizes and can use all available resources, especially memory (see Windows memory limits, and PAE for 32-bit Linux or Windows). The Linux Performance Team has come up with additional ways to improve OS performance, which in turn allows for greater scale.
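
As a quick sanity check, here is a minimal sketch (assuming a Linux host, since it reads /proc/meminfo) that compares the memory the kernel actually reports against what you believe is physically installed; the installed figure is a placeholder you would adjust for your own machine:

    # check_memory.py - report how much RAM the kernel sees (Linux only).
    # A minimal sketch: compares /proc/meminfo against the RAM you believe is installed.

    def kernel_visible_ram_kib(meminfo_path="/proc/meminfo"):
        """Return MemTotal in KiB as reported by the kernel."""
        with open(meminfo_path) as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1])  # value is reported in kB
        raise RuntimeError("MemTotal not found in " + meminfo_path)

    if __name__ == "__main__":
        installed_gib = 8  # assumption: what you believe is physically installed
        visible_gib = kernel_visible_ram_kib() / (1024 * 1024)
        print("Kernel sees %.1f GiB of %d GiB installed" % (visible_gib, installed_gib))
        if visible_gib < installed_gib * 0.9:
            print("Warning: a 32-bit kernel without PAE may be ignoring memory")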

Second, optimize the hard disk through the hdparm command and by mounting disks with the async and noatime options. Lastly, make sure the software itself is optimized. Check that the software, such as the database engine, can take advantage of multithreading on multicore architectures. Make sure the system isn’t running unnecessary processes, and if the server doubles as the application server, consider using lighttpd. Also, maximize the use of resources: make sure the swap area is at least four times the size of RAM, and use caching to get the most out of memory. Moodle, a course management system, has an outstanding document on performance tuning for scale, with great sections on both PHP and Apache; be sure to check it for additional configuration tips like these. Also, look at Oracle’s article on database performance tuning.
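
To get a feel for whether disk tuning is paying off, a rough sketch like the one below can measure sequential read throughput before and after changing hdparm or mount settings. It is not a substitute for hdparm -tT, the file path is only a placeholder for a large file on the disk being tuned, and repeat runs will be inflated by the page cache:

    # disk_read_check.py - crude sequential-read throughput test.
    # A rough sketch: run before and after changing mount options (e.g. noatime)
    # or hdparm settings and compare the numbers.
    import time

    def sequential_read_mb_per_s(path, chunk_mib=1, limit_mib=256):
        """Read up to limit_mib from path in chunk_mib chunks and report MB/s."""
        chunk = chunk_mib * 1024 * 1024
        read_bytes = 0
        start = time.time()
        with open(path, "rb", buffering=0) as f:
            while read_bytes < limit_mib * 1024 * 1024:
                data = f.read(chunk)
                if not data:
                    break
                read_bytes += len(data)
        elapsed = time.time() - start
        return (read_bytes / (1024.0 * 1024.0)) / elapsed

    if __name__ == "__main__":
        # Placeholder path: point this at a large file on the disk you are tuning.
        print("%.1f MB/s" % sequential_read_mb_per_s("/var/lib/mysql/ibdata1"))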

Achieve Scalability With New Hardware

After optimizing the system software, it is time to look at purchasing new hardware that can provide the capacity to handle more data. First, look at memory. Memory is perhaps the easiest, most effective, and cheapest upgrade. If the system is set up correctly and memory use is already being maximized, adding just a little more can go a long way; some even vouch for storing the entire database in memory. Second, hard drive I/O speed is a major bottleneck, which is exactly why you want as much of the database’s working set in memory as possible. Look for faster hard drives with higher RPMs and better connectivity. One option is Solid State Drives (SSDs): SSDs can be up to four times faster than regular Hard Disk Drives (HDDs) at read access, and 35 times faster at random access. Which kind of drive works best depends on the workload the database is tailored to. As another option, there is a long-running debate about SCSI vs. SATA that is worth looking into, since Serial Attached SCSI (SAS) can offer up to 24 Gbit/s. Lastly, look into RAID, which can significantly increase fault tolerance and possibly even speed, though there is no one-size-fits-all configuration.
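
To see why keeping the working set in memory and choosing the right drive matter so much, here is a back-of-the-envelope sketch; the sizes and IOPS figures are illustrative assumptions, not measurements of any particular drive:

    # io_budget.py - back-of-the-envelope I/O budget for a working set that
    # does not fit in RAM. All figures below are illustrative assumptions.

    working_set_gib = 50      # assumed hot data
    ram_gib = 32              # assumed memory available for caching
    page_kib = 16             # typical database page size

    spill_gib = max(working_set_gib - ram_gib, 0)
    pages_on_disk = spill_gib * 1024 * 1024 / page_kib

    hdd_iops = 150            # rough figure for a 7200 RPM drive
    ssd_iops = 50000          # rough figure for a SATA SSD

    for name, iops in (("HDD", hdd_iops), ("SSD", ssd_iops)):
        seconds = pages_on_disk / iops
        print("%s: ~%.0f s to touch every uncached page once" % (name, seconds))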

Third, increase bandwidth, both to the Internet and across the local network. More local network bandwidth is very beneficial when the application server connects to the database server over the network, and the new IEEE specification (802.3ba) allows for 40GbE and 100GbE, which would certainly shift the bottleneck back to disk I/O. Fourth, increase CPU capacity. This is possibly the most expensive upgrade because it often involves purchasing a new motherboard, so choosing wisely from the beginning is best. The basic idea is that the amount a computer can process roughly doubles with each additional processor, though in practice the gain depends on how well the workload parallelizes. Before looking toward this option, note that if CPU utilization doesn’t climb above 50%, upgrading the CPU will be of little or no benefit. For guidelines on how much memory and CPU capacity to provision, be sure to check out the Microsoft article which shows a table for both application and database servers.
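
Before spending on a CPU upgrade, it is worth confirming that the CPU is actually the bottleneck. The following minimal sketch (Linux only, since it samples /proc/stat) reports overall utilization over a short window:

    # cpu_check.py - sample overall CPU utilization from /proc/stat (Linux only).
    # A minimal sketch for the "is the CPU actually the bottleneck?" question:
    # if utilization stays well below 50%, a faster CPU is unlikely to help.
    import time

    def cpu_times():
        with open("/proc/stat") as f:
            fields = f.readline().split()[1:]   # aggregate "cpu" line
        values = list(map(int, fields))
        idle = values[3] + values[4]            # idle + iowait
        return idle, sum(values)

    if __name__ == "__main__":
        idle1, total1 = cpu_times()
        time.sleep(5)
        idle2, total2 = cpu_times()
        busy_pct = 100.0 * (1 - (idle2 - idle1) / float(total2 - total1))
        print("CPU busy over the last 5s: %.1f%%" % busy_pct)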

Lastly, adding load-balancing hardware and read slaves can give the greatest scalability of any hardware option. Though this may involve some alteration of database configuration and application logic, it is primarily a hardware improvement. Oracle’s article on scaling gives a great example of how this can be achieved. Doing this requires deeper knowledge of the systems and database architecture involved, but it can be supplemented by applications like dbShards or database proxies, which can do this seamlessly.
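
The core idea behind read slaves is simple: writes go to the primary, while plain reads are spread across the replicas. Here is a simplified sketch of that routing logic with placeholder connection strings; a real deployment would lean on a proxy or a product like dbShards rather than hand-rolled code, and must account for replication lag when a read follows its own write:

    # read_write_split.py - the idea behind routing reads to replicas.
    # A simplified sketch with placeholder connection strings, not a
    # production-ready router.
    import itertools

    class ReadWriteRouter(object):
        def __init__(self, primary, replicas):
            self.primary = primary
            self.replicas = itertools.cycle(replicas)  # naive round-robin

        def connection_for(self, sql):
            # Writes (and anything transactional) must go to the primary;
            # plain SELECTs can be spread across the read slaves.
            if sql.lstrip().upper().startswith("SELECT"):
                return next(self.replicas)
            return self.primary

    # Usage with placeholder DSN strings standing in for real connections:
    router = ReadWriteRouter("primary-db:3306", ["replica1:3306", "replica2:3306"])
    print(router.connection_for("SELECT * FROM users"))   # a replica
    print(router.connection_for("UPDATE users SET ..."))  # the primary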

There is quite a debate about how far relying on Moore’s Law will take database scalability. A 37signals post makes the point that resource distribution and modifying database architecture can and should be avoided “as long as Moore’s law can give us capacity jumps.” It’s much cheaper to throw hardware at the problem than to rewrite application logic and rebuild database architecture, which can be a nightmare and has been known to bring down major websites for days. A rebuttal argues that this is the wrong answer: we need to look at other scalability options, which is exactly where we will turn in Part 3.

By Joe Purcell

Joe Purcell is a technology virtuoso, cyberspace frontiersman, and connoisseur of Linux, Mac, and Windows alike.
