Making Drupal as fast as it needs to be

This is a rundown of how I sped up a single Drupal site (the one you're reading, as it happens) to handle more traffic than it should ever need and to load pages quicker than it ever did before. I won't say I made it 800% faster or that it can handle 29m hits per day, because that doesn't mean a lot in real life; it'd be a pretty artificial interpretation. Even if that is sort of what happened, it's disingenuous, because the starting point could have been anything I wanted it to be. As it happens, the site was, pre-improvements, a standard Drupal 7 install with lots of modules enabled, a third-party theme, and many modules disabled but never removed from the database (i.e. not 'uninstalled', in Drupal-speak).

I've run some tests with blitz.io that showed good results for concurrent users, and I have old Pingdom tests from the past year to compare speeds against. The 800% figure is taken from those previous Pingdom tests. Blitz.io was quite adamant that the site could handle at least 29m hits per day, with 0 errors or timeouts, extrapolating from the 20k hits it sent over one minute. The bandwidth would run out first, of course, so the site feels relatively well covered for its own needs. Premature optimisation? Check. Respect for users' bandwidth and time? I prefer to call it that.

I do happen to have some robust opinions on speed and resource efficiency, but I’ve moved those to the end if you want to get into the whys and wherefores.

So, without any further ado, here’s what I’d done prior to this round of performance work:

  • Moved onto nginx, used the Perusio config, enabled microcaching
  • Cleaned out all unused or little-used modules (a good dozen were 'in testing' on the live site - I've changed that practice too, with the help of local Drupal-head NikLP)
  • Removed all unused variables from the DB - some 100 of them (probably because this is the 3rd iteration of a migrated Drupal 6 site...)
  • Removed the often render-blocking Piwik JS and started relying on its log analytics instead, run via cronjob
  • Stopped Drupal cronjobs from triggering on user actions, switching to set times via crontab
  • Set up SPDY via SSL/TLS (a one-word nginx change; see the sketch after this list) - note to other TLS users: remember to update Google Webmaster Tools, as each protocol needs its own web property, as do the subdomains
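For reference, enabling SPDY in nginx of that era (before native http/2 landed) really is a one-word addition to the listen directive. A minimal sketch - the server name and certificate paths are placeholders:

    server {
        listen 443 ssl spdy;    # 'spdy' is the only addition over a plain TLS block
        server_name www.example.com;

        ssl_certificate     /etc/ssl/certs/example.pem;
        ssl_certificate_key /etc/ssl/private/example.key;

        # ... the usual Drupal location blocks ...
    }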

This all made the site feel much snappier. Load times were under 2s for a first page view, and in the 500-800ms range for subsequent loads from cache.

A note on how the caching is supposed to work

Using Perusio's config and Drupal's built-in caching, I've set it up as follows:

  • Microcaching, with hit pages cached for 8 seconds and stale copies served beyond that
  • Cache warming via a cronjob hitting the latest X content pages plus the main site pages every 8 minutes
  • Drupal cache lifetime: 5 minutes, expiration: 5 minutes

Microcaching was initially set to 15s, as per the default Perusio config, but I found that ajax calls which auto-update certain Views stopped working; 8s fixes that, for some reason I haven't worked out in detail yet. The microcaching protection is in place regardless, with nginx handling repeated pageviews over short periods before ever asking the Drupal cache to perform, and pages never getting very old thanks to the cache warming. You can read more about microcaching here. This way I don't seem to need redis/memcached/varnish/boost to protect the site, as nginx covers most of their functionality without raising CPU cycles and memory use. The blitz.io tests didn't take the server CPU load over 5%, for example, with nearly 700 requests per second coming in.
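For the curious, the nginx side of that microcaching amounts to something like the following. This is a minimal sketch in the spirit of the Perusio config rather than a copy of it - the cache path, zone name and sizes are illustrative:

    # in the http {} context: a small shared-memory zone backing the microcache
    fastcgi_cache_path /var/cache/nginx/microcache levels=1:2
                       keys_zone=microcache:5m max_size=256m inactive=10m;

    # in the location block that passes requests to PHP:
    fastcgi_cache microcache;
    fastcgi_cache_key $scheme$host$request_uri;
    fastcgi_cache_valid 200 301 8s;                  # the 8-second micro-TTL
    fastcgi_cache_use_stale updating error timeout;  # serve stale copies while refreshing

The fastcgi_cache_use_stale line is what makes the short TTL safe: during a refresh, or if PHP falls over, visitors get the last cached copy instead of queueing up behind Drupal.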

The main issue I have with this setup is that it requires setting the cache expire to 'epoch', which is essentially perma-expired. Cache-Control headers report 'no-cache', forcing browsers to revalidate on every view. It's still quick, but some CDNs (I don't use them) and older browsers take no-cache as a cue to ignore the cache entirely. Most modern browsers still use their cache, however. You can read more on this issue here.
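In nginx terms, it's roughly this one directive (and the headers it emits) causing the trouble - a sketch, not the full location block:

    location / {
        expires epoch;
        # the response then carries:
        #   Expires: Thu, 01 Jan 1970 00:00:01 GMT
        #   Cache-Control: no-cache
    }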

So, with that groundwork done, I more recently moved on to the finer details of cleaning up the theme and the site design itself. I started by reviewing the state of affairs with Google PageSpeed Insights:

  • Desktop score: 77/100
  • Mobile score: 62/100

Key suggestions:

  • Stop render-blocking JS - 6 files
  • Optimize CSS delivery - 14 files, at least 7 of which are fonts
  • Compress CSS (a 66% reduction, saving 110kb)

Using the in-browser developer tools I already knew I had a CSS/JS issue, but this highlighted it further. Drupal already aggregates all of its module and core CSS files for me, but the theme I use then calls in a bunch more for fonts and icons. I considered loading the fonts locally, but performance-wise that's not much of an improvement, if any, because popular webfonts may already be cached for most users. Having had a play with the CSS in the browser developer tools - setting letter and line spacing, font size etc. to make it look as good as it could - I was increasingly drawn towards the Georgia typeface. I also liked the idea of a serif font, partly because the business itself deals in words, and words for 'serious consideration' are nearly always displayed in a serif. More on that below.

To tackle the CSS/JS issues with a cheap fix, I had a go with the advagg module. It has 21k sites using it, out of 257k downloads, and claims to improve CSS/JS grouping, placement, gzip support and so on. But the module itself seemed bloated, and it slowed page loads down to over 2s again. Without going over the source to see just how busy the module gets, I checked out the alternative solution, magic, with 5k sites using it out of 18k downloads. I don't know if those ratios say anything about people's first impressions of the two modules (i.e. do more stick with magic than advagg? or are more advagg-using sites now offline, advagg being the older project - 2011 vs 2013?), but I was happy just moving the JS to the bottom of the page and relying on myself to reduce CSS calls as much as possible.

After refreshing the caches, Google PageSpeed reported scores improved by just a few points.

That's a start, then. There were still 10 CSS files loading, however, even if the JS was no longer 'blocking' now that it sat in the footer. Given that everything loads in parallel over SPDY, I'm not sure whether the CSS or JS was ever really blocking, or whether Google PageSpeed Insights takes this into account; perhaps the score would be better if it did. It still reports that big CSS compression savings are available, up to 66%, which advagg would handle - and which nginx should already be handling, as it's set to gzip those files. There may be a line overriding that - I will look into it.
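If it turns out nginx isn't compressing the CSS, one likely culprit is that out of the box nginx only gzips text/html; CSS and JS have to be opted in explicitly via gzip_types. A sketch of the relevant http {} directives, with typical values:

    gzip on;
    gzip_comp_level 5;            # decent ratio without burning CPU
    gzip_min_length 256;          # skip tiny responses where gzip gains nothing
    gzip_vary on;                 # send Vary: Accept-Encoding for caches
    gzip_types text/css application/javascript application/json image/svg+xml;
    # text/html is always compressed once gzip is on; listing it here is redundant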

So the mission to reduce JS/CSS/font files continued. I also wanted to load as little non-domain content as possible, be it fonts, CSS or otherwise. Google may be removing the mixed HTTPS/HTTP content warnings, and external calls can be set to go over TLS, but it still feels wrong to offer TLS while making external calls, even if those third parties claim not to track usage.

I can definitely do this with the font, keeping it local and reducing the CSS/font file calls. Based on the typetester.org comparison you can see below, Times New Roman is the closest to Adobe Garamond Pro, which is very nice to look at but is of course an external font, so offers little improvement on the current situation.

I took a brief detour to try out the responsive_favicons module, but it added over a second to the full page load. It loaded 3 icons at once and was yet another module to keep enabled, so I'm heading back to the single 15kb favicon. It covered the edge cases of mobile device icons and retina screens nicely, but that's overhead I can live without. For now.

I then went on to finish up the clean out:

  • Removed the text-box 'grippie' JS and image from loading, via a template.php mod in the theme.

  • Stopped the markdown toolbar and its images loading on anonymous comment forms (that's every news page for users not logged in), while still allowing the toolbar on the registration pages - all in the bueditor settings.

  • Changed the favicon to something in keeping with the font change and new image direction.

  • Removed the unused blockquote and code CSS font loads, as well as the fontawesome icons and CSS calls - all pointless overhead.

  • Removed the formtips module, and with it its JS and CSS, switching to a CSS hover rule instead. The initial need for formtips was that the form field descriptions looked unwieldy on the page; the hover rule greys them out until they're hovered over. Much more efficient.

  • Replaced securepages with a proper nginx config that was a long time coming (see the sketch after this list). This ensures 100% https use, as before, but only on the www subdomain. Previously, for a period, users could open a second session by manually entering the non-www domain (few to nobody did this, according to the logs), as the nginx https server would just redirect them to the https non-www version. I don't really want to perpetuate a pointless habit, but I expect users are more used to seeing and entering www. URLs. The solution in the end was to add a server block redirecting non-www to www on http, and the equivalent two server blocks for https separately. Before that, I'd tried redirecting the *.linguaquote.com and linguaquote.com server names to https://www., which either caused redirect loops or still allowed the https server to be reached without the www. Fixed now. No module required.

  • Removed the on-scroll following header, after reading this piece about how it can distract users from converting. I noticed The Intercept_ uses one on mobile, but theirs includes social sharing links, which makes more sense. Mine was just the main menu options, and people instinctively know where those are anyway.

  • Removed JS for ‘back to top’ button (this was a simple theme setting)

  • The big one: the font change - I asked on Reddit, to which one response was "dude, are you crazy". Unfortunately for me, that is a red rag to a bull, just like the time my good friend Andreas kindly pointed out that I’d never be able to learn his native Swedish. Careful what you tell me I can’t do... I have a lot to thank him for, really. I can speak it now, and even work with the language! Back to the proceedings...
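Before the font proceedings: here's roughly what the server-block layout from the securepages change above looks like. A minimal sketch, with the TLS and Drupal directives trimmed:

    server {                                  # http: both hostnames -> https www
        listen 80;
        server_name linguaquote.com www.linguaquote.com;
        return 301 https://www.linguaquote.com$request_uri;
    }

    server {                                  # https non-www -> https www
        listen 443 ssl spdy;
        server_name linguaquote.com;
        # ssl_certificate / ssl_certificate_key as in the main block
        return 301 https://www.linguaquote.com$request_uri;
    }

    server {                                  # https www: the site itself
        listen 443 ssl spdy;
        server_name www.linguaquote.com;
        # ... the usual Drupal location blocks ...
    }

The key point is that the https non-www case gets its own explicit block, rather than a catch-all rewrite that can loop back on itself. Now, the font.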

The font decision considerations were:

  • Load times - local vs native vs external load issues
  • Look - Source Sans Pro is smart, but generic. Georgia is more fitting, and massively widespread in usage¹

As if by happy chance, my connection cut out partway through testing and the webfont failed to load. It left me with Arial, and that was just wrong. So I went for it, after testing Georgia on the staging/dev sites. It was clearly faster to load. I'm still tweaking it, but I think it looks OK.

Most news sites use a serif font. Kindles use serif fonts. 'Designer blog' sites use serifs. They might mix in a sans for headers sometimes, but the body is usually serif for any articles meant for 'serious consideration', as mentioned above. And subjectively, knowing there are no studies backing this up, it feels much easier to read. I don't know if you get this, but there's less resistance reading serif fonts; they are parsed quicker by my brain's OCR, if you like. It could be that I'm more used to them after a lifetime of reading in print, or that the serifs really do help form more clearly identifiable word shapes. Whatever it is, I have a preference for reading in serif, and other people seem to as well, based on serifs' enduring popularity for 'serious reading'. They also happen to fit the (slowly developing) brand image of the site. It's lucky, too, that Matthew Carter's stunning work from the mid-90s is available on nearly every operating system in the world. Decision made.

Some don't like the old-style numbers (e.g. 12345); I prefer the caps in Times New Roman. The rendering might not be as high-res as some webfonts, and it needs a little tweaking in terms of size, line spacing, etc., but overall it's an excellent piece of work that seems very often overlooked. Custom serifs, such as the Guardian's Egyptian or the lovely Adobe Garamond Pro, are truly a pleasure to read. But I'm running a fledgling attempt at a bootstrapped startup, and to my eye, wallet and efficiency mindset, Georgia is perfect for the job. Result: 4 files no longer loaded on first view (2 woff files, 2 CSS), meaning 4 fewer calls.

Not to mention the internet users out there, like myself, who block most external requests with browser extensions to remove all that grimy tracking and advertising from their daily surf; many of them would miss out on any fancy font if it weren't actively whitelisted. A brief tangent on the ad-blocking ethics debate: blocking ads still gives the site host another hit to take to the advertisers; it just stops the user from seeing an ad they wouldn't have clicked anyway, and load times decrease. Maybe that hit doesn't show up in the host's JS trackers, but it'll be in the server logs somewhere. Inconvenient, perhaps, but it's there.

So what were the results after all of that?

Points crept up ever so slightly in the Google PageSpeed Insights tester:

  • Desktop: 79/100 (2 point increase from start)
  • Mobile: 65/100 (3 point increase)

But subjectively the site feels like it loads faster, and I'm back in control of the cruft it was loading. The site also got a better-suited look and has started to move away from the generic startup look (a bit). Load times after the initial page load now seem to hover around the 500ms mark, using browser, nginx and Drupal caching.

Speed-wise, Pingdom gave much more dramatic results:

  • 91/100, 19 requests, 681ms, 443kb.

The speed history of the site was:

  • Nov 7th 2014 - 4.8s, 585kb, 80 req, 77 page speed
  • Jan 11th 2015 - 3.95s, 258kb, 43 req
  • Oct 14th 2015 - on HTTPS, 4 minutes after the first test: 238ms, with Pingdom suggesting it was faster than 99% of all tested websites.

Then, to finish it all up, I migrated to the newest Digital Ocean server hardware. This was scheduled for next week anyway, but they offered tools to let you do it early. After backing up, of course. No issues, and around ten minutes to migrate roughly 6GB. Same kernel, same IP. It again seemed subjectively quicker - no idea why; it would be good if DO could let me know how! The drive also seemed to accumulate another 400MB of stuff in the move, comparing df -h before and after the migration. What was that blob? Stuff that was swapped or in tmp before?

Concurrency results from blitz.io

These were interesting: 20k hits in 60 seconds, 585mb of data transferred, an average of 345 hits/s - or 29.8m hits/day - and an average response time of 437ms. And all this from California, the other side of the world. 0 errors, 0 timeouts. It scaled up to nearly 1,000 concurrent users, giving 688 hits/second at peak. Perhaps I could have pushed the test harder, up to 2k users, but I can't see even 1,000 concurrent users happening in real life any time soon!

I then removed all the blitz.io references from the access log, before the logs were processed by Piwik, with the handy Vim command:

:g/blitz/d

(21k fewer lines in under a second; that's satisfying!).

Still need to do

  • Reduce homepage load size - the front page weighs in at 400-650kb, while other pages come in under 250kb (some under 100kb). The image on the front page is half of that size...
  • Load bootstrap locally. It won't be quicker; it's just the slight niggle about privacy (i.e. maxcdn knowing you visit the site - not a major issue for users, and Chrome has just stopped showing the mixed content message)
  • Might still experiment with Helvetica or Times New Roman + Georgia (titles/body), but overall I'm happy with Georgia for now
  • I'm already aggregating CSS into fewer files; it just needs 'total aggregation' plus minifying/compressing. I thought the nginx setup was handling the compression, but clearly something is up. True aggregation of the CSS and JS would reduce them to 1 CSS and 1 JS file, removing 10 requests and leaving just 7. Even just finding a way to compress them, as Google PageSpeed suggests, would be better. Will investigate...
  • EDIT: I tried advagg again after all these improvements - it reduced requests to 8 on a blog page and 11 on the homepage. Google scores: 86 desktop, 69 mobile. Much improved, but it's not the full story: it still wants me to leverage browser caching, with the no-cache headers confusing it. And the site was still faster pre-advagg, unfortunately - Pingdom says it scores 87 with advagg vs 90/91 without. What gives? More processing time? SPDY? Microcaching? Either way, I got a better Google score with it and a worse Pingdom score. My choice is speed, so I'll remove it for now.
  • One blank CSS file of unknown origin still persists
  • Get started with Let’s Encrypt from November
  • Implement real http/2 from nginx when released
  • Check out the mysql and opcache optimisations mentioned in this thorough presentation on performance by mikeytown2

But I also need to actually sell accounts on the site, and as this team consists of one member, I'd better get on with that. A lot of time has obviously gone into keeping notes on these performance improvements, which I hope can help someone else out there trying to do the same. It's certainly handy to have a log myself for future reference.

At the very least, page load and site speed are no longer a major issue. Any more tips, tricks and questions welcome as ever!

PS - Why so fixed on efficiency?

To me it's part of a good user experience: speed, a simple information structure (or 'architecture', if we're being fancy), a clear layout, hopefully appealing in its simplicity. Work in progress. That's why I visit the sites I do. That's why I use uMatrix on Iceweasel and Chromium. That's, incidentally, why I run a lightweight Debian netinst with i3 (openbox before, but I wanted to tile and tab) and keep my 'leisure' software in a terminal running on a Pi elsewhere (Finch, irssi, newsbeuter, krill, lynx). I'm not a 'speed junky', at least not for the sake of it. It's a mini, one-man rebellion against bloat-creep.

Yes, we have more computing power, but that's no excuse to stuff my system and browser with junk. Most people are of course indifferent to this and just decide a new processor/RAM/PC upgrade is in order. I obviously can't operate that way, knowing the little I do know about web development. But there is plenty of proof that people abandon sites that take too long to load. There's also anecdotal proof from a YouTube engineer that entirely new visitors arrive once they find they can navigate your site quickly, having effectively been barred from it before - much like how translation opens businesses up to whole new markets... Plug-time over!

It's an opportunity cost most don't realise they're paying. And users are glad to receive the benefit, even if they don't know how or why it's happening. They engage more. They buy more. They might, if in the savvier crowd, even feel respected and valued as customers. Imagine that. Wow, someone not tracking me with third-party, god-knows-who-sees-it JS snippets. Someone who has optimised their page for me. Those people will be few and far between, but still, it wouldn't feel right doing it any other way.


  1. The Georgia typeface is used by 172,743 of the top 1 million sites online: http://www.fontreach.com/#font:Georgia (Facebook, Yahoo, Twitter, Amazon, CNN, BBC, Spiegel, Independent...)


Sharing license

This post is licensed under the Creative Commons Attribution license. We have done this to encourage translations into any language, with a credit link back to the original. Feel free to print and share copies in your business, school or university, or to publish your own translation, and be sure to let us know if you do!

We would, however, like to discourage reposting it verbatim, at least without a canonical link to show search engines that this is the original post. An alternative is to use the post as a key source for a completely re-written piece, still giving credit as per the license. Thanks for your understanding.

