Background
First let me just say that I have been struggling with this problem for quite some time now. The problem is to provide redundancy for a website, so that the site continues to run even if there is a problem with one of the servers it's running on.
In a typical simple server setup, there is a single machine running the web and MySQL servers. The machine can be either a dedicated server or, as I have been using, a VPS. I have been running my websites off of VPSs for several years now, with minimal trouble. This works most of the time, but having a site on just one machine means a Single Point Of Failure. If something is wrong with that server, the websites are non-functional until it is fixed. The trouble I have run into falls under a few categories:
- A problem with the physical host
- A problem at the VPS level (operating system, Apache or MySQL errors, etc.)
- A problem at the network level
Problems with the physical host
Problems with the physical host do occur. With a VPS, these are completely out of your control, and are the responsibility of the hosting provider. Some examples of things I've encountered include a failed RAID array, a corrupted filesystem on the host (requiring a several-hour-long fsck), and an unplugged power cord. The worst issue I've had was when the provider said they had lost 2 drives in a RAID 5 array, and all the VPSs on that host were completely gone. Luckily I had a backup of the system and was up and running on a new VPS within a couple of hours.
Problems at the application level
I haven't actually run into many problems at the VPS level compared to the other types of problems. However, the latest issue I'm having does fall into this category. Apache periodically crashes partway through serving a page with the error "[notice] child pid 21106 exit signal Segmentation fault (11)". Visitors see a completely blank page some of the time.
Problems at the network level
By far the most frequently occurring problem I encounter is network-related. These problems are usually out of both my and the hosting provider's control. If there is a problem with the network, the downtime can vary greatly, anywhere from 5 minutes to 12 hours. It can be caused by a Denial of Service attack on a completely different server in the same datacenter, or it could just be a routing issue somewhere along the path from me to the server.
A typical redundant setup will cover the first two categories: problems with the host and with the application. Typical setups may include one or more load balancers in front of multiple application servers. If a machine goes down, the load balancers can stop sending requests to it. This works great if you're trying to protect against servers failing. However, if the load balancers are all on the same network, unless the network has multiple redundant paths, the whole system is still vulnerable to network issues.
My Solution
Since I most frequently encounter network issues, I can't get away with just a typical load-balancing solution. What I really need is a copy of the entire website in a geographically different location. Here is my solution:
One VPS in Dallas, TX (called triton), and another VPS in Newark, NJ (called proteus). (Yes, I name my servers after Greek mythology.) Triton holds the master copies of the websites' PHP files, and proteus gets a copy of them via rsync. If I ever need to update the site, I edit the files on triton and then rsync them to proteus. Here is where the redundancy comes in: my DNS entries point the domain to both IP addresses. This means that during normal operation, visitors will be distributed more or less evenly between the two hosts. If one server goes down, I can stop resolving DNS queries to it, and the worst that will happen is some dead pages for as long as the TTL on the domain.
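To make that a bit more concrete, here is roughly what the sync and the DNS side could look like. The paths, domain, and IP addresses below are placeholders for illustration only; the real values depend on your own setup.

# run on triton whenever the site's files change (example path)
rsync -avz --delete /var/www/example.com/ proteus:/var/www/example.com/

; two A records for the same name, one per server, with a short TTL
example.com.    300    IN    A    192.0.2.10       ; triton (Dallas)
example.com.    300    IN    A    198.51.100.20    ; proteus (Newark)

With round-robin A records like these, resolvers hand out both addresses and visitors end up split between the servers; removing one record (and waiting out the 300-second TTL) effectively takes that server out of rotation.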
This works as long as you're just serving static content. Serving dynamic content, such as from a database, gets a little more complicated. MySQL's NDB clusters are apparently only effective when run within a high-speed network, with at least a 10 Mbps connection between the nodes. Replication turns out to be more along the lines of what I'm looking for.
Replication to the rescue!
Replication is designed for a one-way sync between a master and a slave. However, it is possible to configure two servers to each be both a master and a slave. They will both notify each other of changes made to their databases. There is one trick you need in order to prevent primary key conflicts if rows are written to both databases while the link is down: set the auto_increment offset and increment so that one server creates only even keys and the other creates only odd keys.
/etc/my.cnf
auto_increment_increment = 2
auto_increment_offset = 1
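That snippet covers one of the two servers; the other gets the same increment but an offset of 2, so its keys land on the opposite parity. Beyond that, each server needs a unique server-id and binary logging enabled, and each is pointed at the other as its master. A rough sketch of what that could look like, with illustrative server IDs and placeholder credentials:

/etc/my.cnf (on the other server)
server-id = 2
log-bin = mysql-bin
auto_increment_increment = 2
auto_increment_offset = 2

Then, on each server, something along these lines in the MySQL shell, pointing at the peer (the replication user and password are placeholders):

CHANGE MASTER TO MASTER_HOST='triton', MASTER_USER='repl', MASTER_PASSWORD='...';
START SLAVE;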
Here's some dry reading on replication from the MySQL manual, and here is a slightly clearer guide to replication which sums everything up pretty nicely. Overall, replication was pretty easy to set up, and it seems to be pretty robust as well. I simulated network problems by adding firewall rules to block the servers from each other. I was able to continue to interact with each database, and the changes were all carried over when the link came back up.
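If you want to reproduce that test, the "break the link" step can be as simple as a firewall rule on one box dropping the other's traffic. Something along these lines with iptables (the IP is a stand-in for the other server's address):

# on triton: drop everything coming from proteus to simulate a network partition
iptables -A INPUT -s 198.51.100.20 -j DROP
# when you're done, delete the rule to restore the link
iptables -D INPUT -s 198.51.100.20 -j DROP

Once the rule is removed, the replication threads reconnect on their own and catch up, which matches the behavior described above.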
Feel free to comment if you have any experience or insights into configuring web and database servers! I'm curious to hear what other people have done.