Dealing with Drupal 8 and a giant cache_render table

There are a number of scenarios in Drupal 8 where you might notice your MySQL database size starts growing incredibly fast, even if you're not adding any content. Most often, in my experience, the problem stems from an exponentially-growing cache_render table. I've had enough private conversations about this issue that I figured I'd write this blog post to cover common scenarios, as well as short- and long-term fixes if you run into this issue.

Consider the following scenarios I've seen where a cache_render table increased to 10, 50, 100 GB or more:

  • A Search API search page with five facets; each facet has 5-10 links to narrow the search down.
  • A views page with a few taxonomy filters with dozens of options.
  • A views block with a couple filters that allow sorting and narrowing options.

In all three cases, the problem is that the one page (e.g. /search) can have hundreds, thousands, millions, or even billions of possible variations (e.g. /search?value=test, /search?value=another%20test, etc.). Every single variation produces a row in the cache_render table—whether that cached entry is accessed once, never again, or a million times a day. And there's no process that cleans up the cache_render table, so it just grows and grows. Especially when crawlers start crawling the page and following every combination of every link!

This isn't a problem that only affects large sites with millions of nodes, either—all it takes is a few hundred taxonomy terms, nodes, etc., and you will likely encounter the problem.
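If you want to confirm the cache tables are what's eating your disk, you can check their sizes directly in MySQL. This is a quick sketch; it assumes your Drupal database is named `drupal`—adjust the schema name to match your site (and note that InnoDB sizes in information_schema are estimates, not exact byte counts):

```sql
-- List the largest cache tables in the 'drupal' schema (placeholder name).
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
WHERE table_schema = 'drupal'
  AND table_name LIKE 'cache%'
ORDER BY (data_length + index_length) DESC;
```

If cache_render or cache_dynamic_page_cache dominates the list, you're likely hitting the problem described here.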

So, what's the fix?

First of all, you should follow the Drupal 8 core issue Database cache bins allow unlimited growth: cache DB tables of gigabytes!—that's the best place to test patches that should resolve the issue more permanently.

After that, here are some ways to fix the issue, in order from the most immediate/quick to the most correct and long-lasting (but possibly more difficult to implement):

  • For an immediate fix, run drush cr or click 'Clear all caches' in the admin UI at Configuration > Development > Performance. This will nuke the cache_render table immediately. Note: This could take a while if you have a giant table! It's recommended to use drush if possible since a timeout is less likely.
  • For a short-term band-aid to prevent it from happening again, add a cron job that runs drush cr at least once a day or week, during a low-traffic period. This is kind of like cleaning up your room by detonating it with TNT, but hey, it works!
  • For a better band-aid, consider using the Slushi cache module, which basically limits the growth of the cache_render and cache_dynamic_page_cache tables.
  • For the best fix long-term (and worthwhile to do for better performance regardless of whether you have this problem!), you should use Redis or Memcache as your site's cache backend. Unlike the MySQL database, these in-memory caches are usually a bit faster, and are designed to be able to discard old cache entries more intelligently.
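As a sketch of the cron band-aid above, a weekly crontab entry might look like the following. The Drush path, Drupal root, and schedule are all placeholders—adjust them for your environment:

```
# Rebuild all caches every Sunday at 03:15, during a low-traffic window.
# /usr/local/bin/drush and /var/www/drupal are placeholder paths.
15 3 * * 0 /usr/local/bin/drush --root=/var/www/drupal cr > /dev/null 2>&1
```

Running this as the web server user (or a user with access to the Drupal codebase) avoids file permission surprises when caches are rebuilt.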

And for a bit more detail about debugging this problem (to verify it's the root cause of site issues), check out Acquia's helpful support document, Managing Large Cache Render tables in Drupal 8.

The issue linked earlier in this post will hopefully have a better long-term fix for the many users who are limited to using MySQL, but that's little consolation if your site is offline right now due to the database filling up its disk space, or backups failing because the database is just too large! In those cases, use one of the band-aids above and determine whether using Redis or Memcache is a possibility.


I've had a fresh D8 install with just 10 nodes and about 20 terms in a single vocabulary reach 20 GB.

Drupal 8's cache was praised to heaven by various orators, but in the real world it's nothing short of a disaster, or a half-baked mishmash.
A patched Varnish in front and a patched Memcache module at the back kinda save the day.

Drupal 8 can barely walk by itself.

What patches are you using on Varnish and Memcache?


  • Compatibility with Drupal 8 cache_config is broken #2838301
  • Port memcache_admin module to d8 #2364557
  • 1M data writes incompatible with memcached #435694
  • ...and until recently, Socket file configuration not working #2882431

For the varnish_purge and purge modules, the issues are too many to list here, unfortunately.

Compared to certain idiot-isms of Drupal 7, it's kinda bearable but for 2017, Drupal 8 is amateur hour thus far.

One more note: most site owners first try to adjust the page cache maximum age in the performance settings, until they realize this doesn't help at all.


Thing is, there is hardly a fix on the page you refer to.
I only see a confession of a serious bug, and the date is funny too: about a year after Drupal 8 was released.

Even today, I fail to see how such a technological disaster as Drupal 8 was deemed releasable.
It seems Drupal 8 was released in the hope that the community would fix it in no time, and that the awesome Symfony developers would come. They didn't, and most likely won't.

Some folks are saying Drupal 8 was meant for the enterprise, but it looks like the target was bug testers.
Lots of the issues relate to the multiple cache layers.
There's no media support in core, while the contrib media space is barely operational without various patches and an end-to-end understanding of what needs to be configured to reach a production-ready state.

Instead of adopting the Paragraphs module approach in core, various nonsense modules have been added to D8 core since 2015.

Even the blogtards at WordPress are currently working toward a paragraphs-like solution for building content, but no: Drupal 8 out of the box offers a body-field kitchen sink and a bare-bones concept of an inline image. No library selection options, nothing, and no responsive option for inline images, either.

Too much hassle for little or no gain.

We are learning how to deal with a large purge_queue table that has grown to 30 GB. This table appears to back Drupal's internal purge pipeline, since we are using tag-based invalidation through Varnish and CloudFlare (bundles of expired tags are passed to those servers so they can figure out which pages to invalidate).

It's early days (about six so far), but aside from the large purge_queue table, things are working well.