cfcache is very slow compared to server scope in Railo 3.3.4.003

Thu, Oct 18, 2012 at 12:25AM

I thought <cfcache> was supposed to be fast. I was surprised to find that it isn't.

Using Railo 3.3.4.003 with either the Memcached beta extension or the built-in RAM cache, <cfcache> is much slower than reading from the server scope once you put it under load.
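For context, this is roughly the kind of full-page caching I'm talking about. It's only a sketch of the pattern using the <cfcache> get/put actions against the configured object cache; the key name, timespan, and the rendering include are placeholders, not my exact test code:

    <!--- Sketch: serve a whole rendered page from the configured object cache
          (RAM cache or Memcached extension). "page_home" and renderHomePage.cfm
          are placeholders for illustration. --->
    <cfcache action="get" id="page_home" name="cachedHtml">
    <cfif not isDefined("cachedHtml")>
        <cfsavecontent variable="cachedHtml">
            <cfinclude template="renderHomePage.cfm">
        </cfsavecontent>
        <cfcache action="put" id="page_home" value="#cachedHtml#"
                 timespan="#createTimespan(0,1,0,0)#">
    </cfif>
    <cfoutput>#cachedHtml#</cfoutput>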

I found the server scope to be 15 times faster for serving a full static HTML page.  You can put your own cache-loading code at the top of Application.cfc (sketched below) and achieve 5000+ requests per second on most current quad-core Core i7 processors when you max out the CPU with lots of requests from a benchmarking tool like wrk or ab.
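Here is a minimal sketch of what I mean by a cache-loading system at the top of Application.cfc. The key naming and locking are illustrative rather than my exact benchmark code:

    <cffunction name="onRequestStart" output="true">
        <cfargument name="targetPage" type="string" required="true">
        <!--- Build a cache key from the requested URL (illustrative) --->
        <cfset var cacheKey = "page_" & hash(cgi.script_name & "?" & cgi.query_string)>
        <cfif structKeyExists(server, cacheKey)>
            <!--- Cache hit: write the stored HTML and stop the request immediately --->
            <cfcontent reset="true"><cfoutput>#server[cacheKey]#</cfoutput><cfabort>
        </cfif>
        <!--- Cache miss: render the page once, store it, then serve it --->
        <cfsavecontent variable="local.html">
            <cfinclude template="#arguments.targetPage#">
        </cfsavecontent>
        <cflock scope="server" type="exclusive" timeout="5">
            <cfset server[cacheKey] = local.html>
        </cflock>
        <cfoutput>#local.html#</cfoutput>
        <!--- Returning false stops the requested template from running a second time --->
        <cfreturn false>
    </cffunction>

On a cache hit the request never gets past onRequestStart, which is what lets this approach output the HTML, abort, and max out the CPU.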

I thought memcached might be useful when combined with nginx for serving from an in-memory cache.  I was wrong again.  memcached + nginx is 3 times slower than nginx serving static files.  nginx's memcached module was also timing out when trying to manage more than 1000 concurrent connections, despite my tweaking numerous settings.

I ran my tests with 8 threads/workers in each application, for hundreds of thousands of iterations at a concurrency of 1000 or greater.

Why does <cfcache> do its work so inefficiently when using the RAM cache?  It was unable to use all of the available CPU, reaching only about 350% in the Linux top utility, out of a maximum of 800% on my system.  Reading from the server scope is able to max out the CPU and finish 15 times faster.  Perhaps <cfcache> should be rewritten to rely on the CFML scopes instead.

I don't have a problem; I'm just sharing my findings with the community.  I spent a whole day evaluating the available caching solutions, and it seems clear to me that I should publish static files with Railo using my own code and then serve them statically with Nginx.  Nginx was 3 to 6 times faster than Railo at serving static content.  Keep in mind that I'm using Nginx as a reverse proxy in front of Tomcat/Railo, which may slow Railo down a few percent.

I'm also going to have Railo publish the static files with SSI includes for the header, footer, etc. (sketched below), so that I don't have to invalidate the entire cache every time there is a design or menu change.  I found that Nginx is up to 3 times slower when it has to process two SSI include directives.
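A rough sketch of the publishing side, assuming a hypothetical renderArticleBody() function and placeholder paths: Railo writes out a static file that still references the shared header and footer through SSI directives, so a design or menu change only means republishing the two small include files.

    <!--- Sketch: publish a rendered page as static HTML that pulls the shared
          header and footer in through SSI. renderArticleBody() and the paths
          are hypothetical placeholders. --->
    <cfsavecontent variable="pageHtml">
    <!--#include virtual="/ssi/header.html" -->
    <cfoutput>#renderArticleBody(articleId)#</cfoutput>
    <!--#include virtual="/ssi/footer.html" -->
    </cfsavecontent>
    <cffile action="write"
            file="#expandPath('/static/article.html')#"
            output="#trim(pageHtml)#"
            charset="utf-8">

On the Nginx side, turning on the ssi directive for the location that serves the static files is what makes those two include directives get processed.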

I got 17,000 requests per second with Nginx and SSI, vs. 5000 requests per second with Railo outputting the same HTML from the server scope and immediately aborting.  However, Nginx used only 10% CPU, while Railo used 750% CPU, so once you account for CPU usage, Nginx + SSI is up to 30 times faster for this purpose.  I think Railo is super fast and I love how easy it makes programming, but there is still a huge gap in performance between Railo and an event-based server written in C/C++.  If you are CPU bottlenecked, it seems to me you'd do very well rewriting that part in C.  If you are waiting on the disk or database, it wouldn't help much, and Railo is still a faster way to build the app.

My tests were also run in a virtual machine (CentOS 6 inside VirtualBox on Windows 7).  In my previous tests, a current production system was approximately twice as fast as this test system, so there is some amazing performance in both solutions when you statically cache content.  My full-featured and optimized Railo CMS app manages between 10 and 250 requests per second on a single quad-core Intel Sandy Bridge system, so being able to scale to thousands is still exciting.

17,000 requests per second works out to roughly 1.5 billion page views per day (17,000 × 86,400 seconds ≈ 1.47 billion).  The system is going to have a lot of other problems before hitting that limit, but it's fun to run these artificial tests and see your app go faster.

