A plan for scaling a CMS to multiple servers

Mon, Apr 15, 2013 at 12:40AM

This article discusses how you could convert an application that is designed to run on one server so that it runs on multiple servers. Jetendo CMS is my application and it has its own unique issues to solve, but most of this article is general information that could be applied to scaling other software.

Some enterprise CMS products can run one application instance that handles writing and other instances that mirror the content for multiple readers. This helps you scale out to high traffic loads very nicely without making the site manager code overly complex. You could put servers in data centers that are closer to the user so that the most common requests are served faster. You could even use different hosting companies to have more forms of redundancy in place.

The future is in the cloud - For some, that is now

It's becoming more popular to favor a bunch of cloud servers running an app rather than a few carefully managed servers. I currently don't try to support cloud scaling in Jetendo CMS. I make one powerful server serve everything and manage it carefully to ensure it doesn't go down for long when maintenance is being performed. It is in some ways less stressful if you can get things running on multiple servers, but it's more technical to set up and build software to run in a high availability / clustered way. I discussed this in my previous blog article: Considerations when implementing high availability for a web application.

Maybe you want to host large files like videos and serve a high number of concurrent visitors. You might just need to use a bunch of cloud servers to solve this kind of problem in a cost-effective way.

I realize that some people who would adopt a CMS like Jetendo CMS may see high availability and support for massive cloud-scaling as a valuable feature, so I'm beginning to plan how to implement this. Cloud doesn't mean you have to use cheap shared cloud hosting. You may want to scale out to multiple dedicated servers and use global load balancing to distribute traffic based on the customer's location. Having lower latency can make a big difference when it comes to the responsiveness of your application. I hope to automate and simplify the task of achieving massive hardware scaling and high performance with my free open source project in the future.

There is poor support for clustering in most free CMS software.

Let's keep in mind that almost no CMS software is built to support clustering and synchronizing multiple servers automatically and reliably. Where it is supported, it is rarely a free feature. Searching on Google, I found only a few that officially market this feature, such as Magnolia CMS and Plone, and neither is a free solution. None of the widely used systems like WordPress, Joomla, or Mura can do this out of the box. Depending on how you use a CMS, you may find an acceptable work-around that isn't perfect but works. I'm not looking for a "good enough" solution. I want it to be reliable.

If one of your servers doesn't replicate correctly, you may find yourself having to manually update the slave servers, which is quite tedious if you have many different users changing the data. By adding official support for clustering to my app, I can make it more reliable and easier to manage than competing software. With a system like this in place, we may also be able to provide more dynamic hosting options for customers: charging not just for usage of a single server, but allowing their web site to scale to multiple servers with minimal added cost by relying on automation and our expertise with tuning the CMS we built, Jetendo CMS.

Why not use official replication methods like MySQL's replication?

I find MySQL replication difficult to fix when there is an issue with it. Certain queries, upgrades, crashes, or mistakes will often cause significant downtime or lost productivity while you figure out how to get things running again on the slave server. A system where I control the method of synchronization will lead to more consistent behavior that I understand over time. I hope that it will be easy to resume and fix issues this way - only time will tell. If I change to another database technology, I don't want to have to re-learn how to synchronize everything.

I also want to be able to queue changes that may not be database related.  For example, I may want to flush a specific part of the cache in Railo or on other servers like Sphinx or Memcache.  Having a consistent API to manage distributing changes will be easier to debug and integrate new features with.
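As a rough sketch of what that consistent API might look like, each queued change could carry a type and a data payload, and every server could run the same dispatch function for it. The change types and the applyDatabaseChange() helper below are hypothetical, not existing Jetendo CMS code; only cacheRemove() is a standard CFML function.

<cfscript>
// Hypothetical dispatcher: every queued change has a "type" and a "data" struct,
// so database writes, cache flushes and similar operations all flow through one code path.
public void function applyQueuedChange(required struct change) {
    switch (arguments.change.type) {
        case "database":
            // hypothetical helper that re-runs the stored SQL with its parameters
            applyDatabaseChange(arguments.change.data);
            break;
        case "cacheFlush":
            // remove the named cache entries on this server
            cacheRemove(arguments.change.data.keys);
            break;
        default:
            throw(message="Unknown change type: " & arguments.change.type);
    }
}
</cfscript>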

Another reason you can't just use automatic MySQL replication is that the database changes might replicate before the associated files have been synchronized to the other servers. Users would see a 404 for images embedded in an article until the other server catches up. If database replication stops working but the file replication continues, you'd have problems that last even longer, since you'd need to manually fix MySQL replication. All replication of content must halt if something goes wrong. It's not good to serve out-of-date information, but that's better than serving broken pages and errors. On a high traffic site, it could be a serious problem and look very unprofessional.

You could try to replicate images by storing them in the database, but the larger the file, the less practical that would be. It wouldn't make sense to store 100MB videos in the database, and I want to handle all files in a consistent way for the sake of efficient synchronization. The API will reference files by unique ids along with their original file names, and the application will deploy those files to disk consistently on each copy of the application.
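For illustration, here is a minimal sketch of how a file's unique id and original name could be mapped to the same path on every server. The base directory and bucketing scheme are assumptions for the example, not the actual Jetendo CMS layout.

<cfscript>
// Sketch: derive a deterministic path from a file's unique id and original name
// so every server deploys the same file to the same location.
public string function getReplicatedFilePath(required string fileId, required string originalName) {
    // strip anything unsafe from the original name, keeping the extension
    var safeName = reReplace(arguments.originalName, "[^a-zA-Z0-9\.\-_]", "", "all");
    // hypothetical base directory for replicated uploads
    var baseDir  = "/var/jetendo/replicated-files/";
    // bucket by the first two characters of the id to avoid huge directories
    return baseDir & left(arguments.fileId, 2) & "/" & arguments.fileId & "-" & safeName;
}
</cfscript>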

Jetendo CMS will implement its own replication system

I'm thinking that the CMS could have a new queue feature that will handle the distribution of changes.

The first step when a user makes a change would be to store that change as a new record in the queue database, which records all of the request data unmodified and associates any uploaded files with that queue record. Then the change would be executed by directly calling the associated API function on the writer instance through an internal call instead of using xml remoting. Any associated files would be copied to their final path rather than moved. This ensures that other servers can download those files from the original copy instead of working with a volatile copy. If the changes are validated and accepted successfully, the writer instance will mark the queue record in the database for distribution to the other servers. If the change was not accepted, the queue record would be marked for deletion or archiving.
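A minimal sketch of that flow is below, assuming hypothetical helper functions insertQueueRecord(), executeApiCall() and markQueueRecord(); it only shows the order of operations described above.

<cfscript>
// Sketch of the queue step: record first, execute locally, then mark for distribution.
// insertQueueRecord(), executeApiCall() and markQueueRecord() are hypothetical helpers.
public void function handleWriteRequest(required struct requestData, required array uploadedFiles) {
    // 1. store the raw request and its files as a queue record first
    var queueId = insertQueueRecord(serializeJSON(arguments.requestData), arguments.uploadedFiles);

    // 2. run the same API call locally on the writer (internal call, no remoting)
    var result = executeApiCall(arguments.requestData);

    // 3. mark the record for distribution, or archive it if validation failed
    if (result.success) {
        markQueueRecord(queueId, "readyToDistribute");
    } else {
        markQueueRecord(queueId, "archived");
    }
}
</cfscript>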

This queue system could also function as a log of all user input to the system for later analysis. That could be useful for certain apps, and it could be evolved further into a special feature if you want an app where no user data is ever permanently lost. Perhaps analyzing shopping cart behavior or abuse patterns is easier when complete data is stored.

Since the writer would execute the same API that the reader instances use, all servers should execute the code in the same way, which keeps a bug in the API from causing data to get out of sync on only some servers. This may also require that the rest of the app is built in a transactional way to be completely reliable, since half-completed updates may result in problems requiring tedious manual database updates. It could take considerable work to ensure that updates are transactional and roll back on failure. Many requests update multiple database records, and it takes more work to ensure they all commit together.
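For example, a pair of related writes could be wrapped in a single transaction so they either commit together or roll back together. The application.db.execute() wrapper and the table names below are assumptions for illustration; the transaction block itself is standard CFML script.

<cfscript>
// Sketch: commit several related updates together or not at all.
// application.db.execute() stands in for whatever parameterized query helper the app uses.
articleId = 42;        // example values
newTitle  = "Updated title";
transaction {
    try {
        application.db.execute("UPDATE article SET title = ? WHERE article_id = ?", [newTitle, articleId]);
        application.db.execute("INSERT INTO article_history (article_id, changed_on) VALUES (?, ?)", [articleId, now()]);
        transactionCommit();
    } catch (any e) {
        // a half-completed update never becomes visible or replicated
        transactionRollback();
        rethrow;
    }
}
</cfscript>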

File replication needs to commit at the same time as database changes

Below is an example of why you need a more elaborate replication approach:
1. A user uploads an image on server 1.
2. Server 2 is still applying previous changes and hasn't been sent the new image yet.
3. The user deletes the same image on server 1.
4. Server 2 tries to download the image from server 1 and can't, because it was deleted. The other changes to the database could be incorrect or lost.

We can't automatically handle this problem if server 1 is allowed to delete or change the data before it is replicated.

To combat this problem, the queue would maintain a separate copy of all files that it references. If something fails to update, the queue would be paused until a developer resolves the issue, and the original files will remain available for when it is ready to sync again. Because the queue will always be able to complete when it's run in order, there should be no way for the slave servers to get out of sync or break unless the application has a serious bug in it.
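A sketch of that staging step is below, assuming a hypothetical queue-files directory: when a queue record is created, each referenced file gets copied into a folder owned by the queue, and readers pull from that copy even if the live file is later deleted or changed.

<cfscript>
// Sketch: keep a private copy of every file a queue record references.
// The queue directory and naming scheme are assumptions for illustration only.
public string function stageQueueFile(required numeric queueId, required string sourcePath) {
    var queueDir = "/var/jetendo/queue-files/" & arguments.queueId & "/";
    if (!directoryExists(queueDir)) {
        directoryCreate(queueDir);
    }
    var stagedPath = queueDir & getFileFromPath(arguments.sourcePath);
    fileCopy(arguments.sourcePath, stagedPath);
    return stagedPath;
}
</cfscript>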

By layering the queue system just prior to normal request processing, it may require fewer changes to existing code; i.e. file uploads just need to be converted to file copy operations, and the other data will often be acceptable unchanged. It's essentially a system designed to send the same request to each server we want to mirror changes to.

Use of session memory becomes a problem

If the request uses data from session memory, that information might also need to be distributed in the queue. If there is extensive use of session memory, this could make the queue much larger and less efficient compared to only passing the resulting changes to the system. In a system that relies on distributing data to many servers, it may be wise to limit the use of session memory and ensure that the session data is encapsulated in an object so that the method of accessing it can be replaced more easily. We also don't want the server itself to handle replayed requests as if it were just another user by mistake; it often needs a different level of security. This is a significant problem in the current design of Jetendo CMS since the use of session memory is fairly extensive, and it could require some rewriting. It may be necessary to either redesign session memory usage or come up with a different replication technique for requests that depend on session data. Perhaps instead of integrating with the queue system at the beginning of every request, some requests would integrate with it by calling a different API function to distribute the resulting changes to the data instead of repeating the original request. Either way, this adds complexity to the project.
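As a sketch of that encapsulation, a small wrapper component could sit between application code and the session scope, so the storage behind it can be swapped out (or captured for the queue) later. The component below is hypothetical and not part of the current Jetendo CMS code; it assumes session management is enabled.

// Sketch of a hypothetical SessionStore.cfc: code reads and writes user state
// through this wrapper instead of touching the session scope directly, so the
// storage behind it can be replaced without rewriting every feature.
component {
    public any function get(required string key, any defaultValue = "") {
        return structKeyExists(session, arguments.key) ? session[arguments.key] : arguments.defaultValue;
    }
    public void function set(required string key, required any value) {
        session[arguments.key] = arguments.value;
    }
}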

Distributing every change is dangerous

Just as session memory causes trouble, some other requests couldn't integrate with this system automatically.

Let's say a user submits a form that is supposed to assign the lead to a specific user and send them an email. We certainly wouldn't want to send the same email from every server, but we do want the assignment data to be distributed. In cases like this, we'll need a simple function or boolean value to determine whether the current server is the one where the request was originally made, so that we can disable things that shouldn't happen twice. This will require a careful review of the entire application to ensure that things that should happen once do happen once.

We also wouldn't want to do things like charge a customer twice, refund them twice, or place an order twice.
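A sketch of that check, assuming hypothetical helpers: the data change runs on every server, while side effects like email only run on the server that originally received the request.

<cfscript>
// Sketch: the assignment is replayed on every server, but only the origin server
// sends the notification email. isOriginServer(), assignLeadToUser() and
// sendLeadNotificationEmail() are hypothetical helpers, not existing Jetendo functions.
public void function processLeadAssignment(required struct queueRecord) {
    assignLeadToUser(arguments.queueRecord.leadId, arguments.queueRecord.assignedUserId);
    if (isOriginServer(arguments.queueRecord)) {
        sendLeadNotificationEmail(arguments.queueRecord.assignedUserId);
    }
}
</cfscript>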

Handling IDs that don't match on different servers

If someone modifies the database manually and causes an automatically incrementing id to be ahead on one server, database records could end up with different IDs on different servers. For example, you couldn't give a customer a different order ID on each server, and many urls contain the ID in them and must match for SEO and sharing purposes. This is a situation where you either need to know the ID before it has been inserted or build the replication to occur after the IDs are known. However, there is no way to guarantee an ID is unique on all servers if someone has tampered with the table. This is why a single failure on a reader instance must halt further updates: a duplicate key error on the database requires manual correction. Developers would have to be more careful when dealing with a distributed system like this and ensure that tables are not modified in production from outside the application unless absolutely necessary. When there is a bug, they might be required to manually correct some of the data to get things working again, but they shouldn't make a habit of manually changing the database. I don't think this problem is fixed any more easily by other forms of replication, because there is no way to prevent developers from doing things wrong. You could use a GUID for every primary key, but even that wouldn't be 100% safe if a developer inserts the same id on the other servers beforehand.
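Here is a sketch of how a reader instance might halt on such a failure rather than skipping the record and drifting out of sync. applyQueuedChange() from the earlier sketch and pauseQueue() are hypothetical helpers, and the log file name is just an example.

<cfscript>
// Sketch: a reader stops processing its queue on any failure, such as a duplicate
// key error, and waits for a developer to intervene.
public void function applyOrHalt(required struct queueRecord) {
    try {
        applyQueuedChange(arguments.queueRecord);
    } catch (any e) {
        // leave the record unprocessed and pause the whole queue
        pauseQueue(e.message);
        writeLog(file = "replication", text = "Queue halted on record #arguments.queueRecord.id#: #e.message#");
    }
}
</cfscript>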

Why not allow multiple writer instances?

After I get it working with one writer and multiple slave readers, I might figure out how to allow multiple writers for some features using the same system. However, there are several operations that might be inefficient when writing is possible on multiple servers. If you can't work with copies, it could be very difficult to have multiple writers changing the same files and database. As a compromise, there might be portions of the app that allow writing on multiple servers, while other features are limited to running on one server. Sometimes data is being pulled from a third party system, and that operation should not be repeated multiple times; it would be best to have one server manage it instead. For example, if we're downloading an image from another server and cropping it, we shouldn't need to do that over and over for each server - it should happen once, and then the cropped files can be distributed. To allow multiple writers, you definitely need some kind of universally unique identifier for all records in the database, or you risk having duplicates. UUIDs are 128-bit values, so they take more memory and are slower to index than integer keys, but they are still reasonably fast and workable. There are a lot of additional complications to having multiple writers, so it takes more significant rewriting of an application to support that.

Distribute data with a Custom ORM?

A custom Object Relational Mapping (ORM) system could internally track changes to data and send efficient delta packets to the other servers to minimize network and disk usage. However, this requires rewriting all of the insert/update/replace/delete code to use the new ORM system. That often seems easy for single-table updates, but it is more elaborate when there are foreign keys and complex relationships. The ORM would again only help with database changes unless it was adapted to cover shared memory and other systems through the same API in a transactional way. It would be interesting to further investigate this concept as an alternative to replaying requests on other servers; it would likely be more efficient and easier to understand once finished. The queue system would still work the same with this approach - you'd just be sending the resulting changes instead of the request data. It might also be easier to enforce the use of the ORM for plugin code without adding too much confusion. I already have a simple custom ORM built, but I'd need to add a feature to queue those queries for asynchronous distribution.

Each request could have an id, and all the queries run by the ORM and any memory changes would be associated with that request id. If they all complete successfully, the whole group could be distributed as one unit; if any of them fail, the group could be rolled back and discarded.
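Here is a rough sketch of how an ORM-style save method might record only the changed columns as a delta for later distribution. executeUpdate() and queueDelta() are hypothetical helpers, and the comparison logic is simplified for illustration.

<cfscript>
// Sketch: compare the old and new row, write the change locally, and queue only
// the changed columns (the delta) for asynchronous distribution to other servers.
public void function saveRecord(required string tableName, required string idColumn, required struct oldRow, required struct newRow) {
    var delta = {};
    var columnName = "";
    // collect only the columns whose values actually changed
    for (columnName in arguments.newRow) {
        if (!structKeyExists(arguments.oldRow, columnName) || arguments.oldRow[columnName] != arguments.newRow[columnName]) {
            delta[columnName] = arguments.newRow[columnName];
        }
    }
    if (!structIsEmpty(delta)) {
        // hypothetical helpers: write locally, then queue the delta for the other servers
        executeUpdate(arguments.tableName, arguments.idColumn, arguments.newRow[arguments.idColumn], delta);
        queueDelta(arguments.tableName, arguments.newRow[arguments.idColumn], delta);
    }
}
</cfscript>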

Summary

I hope others have learned something from this discussion of building a system designed for distributing user content across several instances of the application.  If you have any better or alternative views, comment below.

