Running Jetendo CMS at 5,000+ requests per second with nginx proxy_cache and SSI

Mon, Apr 15, 2013 at 12:15AM

Publishing a dynamic web site to static files to improve performance is nothing new.  Most major sites probably have some form of static caching in place.  The real work is figuring out how to get your cache to stay fresh and avoid clearing too much of it at once every time you make a minor change to the site.

Our sites pull in data from many different places, and a lot of this data changes every few days, or whenever the user changes the content in the site manager.  However, when this data changes, it should really only update a portion of the page, not the entire thing.

Phase 1: Implementing Server Side Includes (SSI) and unique template caching

The first phase of the caching system has now been implemented and fully integrated with the Jetendo CMS source code.  In the process, a more elegant solution was found for caching the existing template system.  In Jetendo CMS, there are tags you insert in a template to place data such as the title, meta tags, content, and page title.  These tags are now cached as SSI set variables, along with an SSI include for the template.

To minimize the amount of redundant code that is stored and processed, the template shell that includes the header and footer information is checked for uniqueness: an MD5 hash of the template string is generated and compared against the value in memory to determine whether a new version needs to be published to disk.
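The uniqueness check described above can be sketched roughly like this (a minimal illustration in Python; the actual Jetendo implementation is in CFML, and the function and variable names here are hypothetical):

```python
import hashlib

# In-memory map of template hash -> published cache file path (hypothetical)
published_templates = {}

def publish_template(template_html, cache_dir="/var/www/ztemplatecache"):
    """Publish the template shell to disk only if this exact content
    hasn't been published already."""
    digest = hashlib.md5(template_html.encode("utf-8")).hexdigest().upper()
    if digest not in published_templates:
        path = f"{cache_dir}/{digest}.html"
        # Write the shell to disk once; later requests reuse it via the
        # SSI include shown below.  (Disk write omitted in this sketch.)
        published_templates[digest] = path
    return published_templates[digest]
```

Identical template strings map to the same cached file, so only genuinely unique shells are written to disk.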

These published template files are also reloaded from disk when the server restarts, so the cache doesn't have to be rebuilt just because the app restarts.

The SSI code looks like this (the extra backslashes are for escaping reserved characters):

<!--# set var="title" value="whatever" -->
<!--# set var="stylesheets" value="<link rel=\"stylesheet\" type=\"text/css\" href=\"/zcache/_z.system.mincat.css\" />" -->
<!--# include virtual="/ztemplatecache/7DE2F91E7BDF8A8B311BE77744499B17.html" wait="yes" -->

If necessary, the template cache file is published before the request finishes.  In that template cache file, there are SSI statements for outputting the variables that were set like this (encoding is disabled so that the HTML code passes through unchanged):

<!--# echo var="title" encoding="none" default="" -->

I don't publish that output to disk myself; instead, I let Nginx cache it automatically in its proxy_cache folder, so future requests won't need to hit the CFML application.
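A minimal nginx configuration for this kind of setup might look like the following (a sketch only — the cache path, zone name, backend port, and cache lifetime are illustrative assumptions, not taken from the actual Jetendo configuration):

```nginx
# Illustrative sketch; paths, zone names, and timings are assumptions
proxy_cache_path /var/cache/nginx/proxy levels=1:2 keys_zone=pagecache:50m
                 max_size=1g inactive=7d;

server {
    listen 80;
    server_name example.com;

    location / {
        ssi on;                            # process <!--# ... --> directives
        proxy_pass http://127.0.0.1:8888;  # Railo/CFML backend (port assumed)
        proxy_cache pagecache;             # cache the backend's SSI output
        proxy_cache_valid 200 10m;
    }
}
```

Note that with `ssi on`, the cached response still contains the SSI directives, and nginx evaluates them on every request — which is why this approach is somewhat slower than serving a flat HTML file, but still happens almost entirely in memory.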

Using server side includes like this is around 5 to 10 times slower than serving a single HTML file, but the difference in performance is irrelevant in practice, since it's highly unlikely we'd ever experience that much traffic (over 5,000 requests per second) on a single server anytime soon.  Nginx's proxy_cache and SSI features do nearly all their work in memory.  The Nginx process consumes virtually no disk resources in my simple tests, which is the main reason for using it.  It also outperforms the current version of Varnish on the same system, and Nginx is easier to configure since I'm already using it for all the sites.

When you put Railo or PHP under load, some of the real estate search features can slow the app down to as little as 25 requests per second, depending on the complexity of the queries.  25 requests per second is actually a lot, because you rarely get that many simultaneous hits in reality, but I think it is dangerous to have pages that can be that slow under load.  Rather than cripple the application by removing features, I work hard to retain those features and make them faster.  We want these complex pages to load fast and scale to a higher number of concurrent users so that our app is useful and affordable to host.  Eliminating the disk activity is the only way to do that.

Phase 2: Tracking the database for changes

The progress so far just allows the system to cache static files when caching is enabled.  The cache system will not always be enabled.  For example, the proxy_cache will not be used when there is a logged-in user, so that session-specific information isn't shared with other users.  It also won't be used unless it is explicitly enabled, so that I can ensure each feature that uses it has been tested thoroughly.  I did this by returning a custom HTTP header that tells nginx not to cache the request, which is handled with the proxy_no_cache statement.
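That opt-in behavior can be wired up with a couple of directives like these (a sketch; the header name `X-No-Cache` and the session cookie name are assumptions — the article doesn't name the actual ones used):

```nginx
# Skip storing the response when the app returns the custom header,
# and bypass the cache entirely when a session cookie is present
# (header and cookie names are hypothetical)
proxy_no_cache     $upstream_http_x_no_cache $cookie_cfid;
proxy_cache_bypass $cookie_cfid;
```

`proxy_no_cache` prevents a response from being saved to the cache, while `proxy_cache_bypass` makes nginx skip an existing cache entry and go to the backend — both are needed to keep logged-in users fully dynamic.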

I plan on setting up the app so that certain database tables can be assigned to be tracked automatically.  All the data visible on a page will automatically be associated with the current URL, and that association will be stored in memory and then asynchronously on disk.  This will allow the system to know exactly what data needs to be cleared from the cache when the data changes.
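The planned tracking could work roughly like this (a Python sketch of the idea only; Jetendo is written in CFML, and all class and method names here are hypothetical):

```python
from collections import defaultdict

class CacheTracker:
    """Associates database rows read while rendering a page with that
    page's URL, so the right cache entries can be purged on change."""

    def __init__(self):
        # (table, row_id) -> set of URLs whose cached output depends on it
        self.deps = defaultdict(set)

    def record_read(self, url, table, row_id):
        """Called whenever a tracked table is queried while rendering url."""
        self.deps[(table, row_id)].add(url)

    def urls_to_purge(self, table, row_id):
        """Called when a row changes; returns the URLs to clear from the
        cache, and forgets the stale dependencies."""
        return self.deps.pop((table, row_id), set())
```

With a structure like this, changing one listing row would invalidate only the handful of URLs that actually displayed it, instead of flushing the whole cache.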

Many parts of the application don't query the database directly for public requests.  These will need to be integrated with the cache system as well, so that this tracking code still adds minimal overhead.

From the developer's point of view, I want full integration with the cache system to be very easy, almost automatic.  This will help developers build features that scale to high performance with less effort.

I've built some of the code for managing this tracking; it still needs to be integrated with the database functions and tested.

Phase 3: Deploying to production

Once I'm confident the new caching system is working, I'll deploy it to a few of my web sites and monitor them for any bugs I missed.  It will be very exciting when the majority of our web sites are running from the cache, since it really does allow pages to be served at 5,000 or more requests per second.  This will also make our server less vulnerable to denial of service attacks.  There will always be certain features that must remain dynamic, so we'll still need to optimize and protect those, but the large majority of public requests will be much faster once updated.

Other Performance Tweaks

In addition to the caching system, other parts of Jetendo CMS were further optimized so there is even less disk access throughout the system.  I also found some code that didn't need to run.  The end result was that I was able to get a dynamic page to run at up to 1,000 requests per second with Railo on my quad-core test server.  This page doesn't run any queries; it just shows the performance of the Jetendo CMS inner workings.

I'll be updating this article or posting another article as I make more progress. Stay tuned!

Related Resources

Jetendo CMS - Free Open Source CMS Built with CFML
