mysql

Analyzing a MySQL slow query log with pt-query-digest

There are times when you may notice your MySQL or MariaDB database server getting very slow. It's usually a stressful time, because a slow database means your site or application is slow too. And when you dig in, you notice the logs filling up; in MySQL's case, the slow query log is often the canary in the coal mine, pointing to potential (or already active) performance issues.

But, assuming you have the slow query log enabled, have you ever grabbed a copy of the log and dug into it? It can be extremely daunting. It's literally a list of query metrics (a timestamp, how long the query took, how long it locked the table), followed by the raw slow query itself. How do you know which query takes the longest time overall? And is there one sort-of-slow query that is actually the worst offender, just because it's being run hundreds of times per minute?
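
That's exactly the question pt-query-digest (part of Percona Toolkit) answers, since it ranks queries by their total impact rather than a single execution. A minimal sketch, assuming the toolkit is installed and the slow log lives at a typical path (adjust the path and threshold for your server):

# Turn on the slow query log and log anything slower than one second.
$ mysql -u root -p -e "SET GLOBAL slow_query_log = 1; SET GLOBAL long_query_time = 1;"
# Summarize the log into a report that ranks queries by total time consumed.
$ pt-query-digest /var/lib/mysql/mysql-slow.log > slow-query-report.txt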

How to fix "Host '172.18.0.1' is not allowed to connect" with MySQL Docker

Using the official MySQL Docker image from Docker Hub, I recently ran into the error:

Host '172.18.0.1' is not allowed to connect to this MySQL server

The only change I had made to my docker-compose.yml file was:

mysql:
  image: mysql:5.6
  ports:
    - '3306'
  volumes:
    # Use this option to persist the MySQL DBs in a shared volume.
    - ./mysqldata:/var/lib/mysql:rw,delegated
    # Use this option to persist the MySQL DBs in a data volume.
    # - db_data:/var/lib/mysql

I switched from using a data volume (db_data) to mounting a volume from my host (mysqldata in the current directory), and after the next time I did a docker-compose down and docker-compose up, I started seeing the error about my host not being allowed to connect to the MySQL server.
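
If you hit the same error, a quick way to see what MySQL thinks is allowed (a diagnostic sketch, not necessarily the fix from the rest of the post; the root password here is an assumption) is to list the user/host pairs inside the container:

# Show which hosts each MySQL user may connect from (credentials are assumptions).
$ docker-compose exec mysql mysql -uroot -proot -e "SELECT host, user FROM mysql.user;"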

Dealing with Drupal 8 and a giant cache_render table

There are a number of scenarios in Drupal 8 where you might notice your MySQL database size starts growing incredibly fast, even if you're not adding any content. Most often, in my experience, the problem stems from an exponentially-growing cache_render table. I've had enough private conversations about this issue that I figured I'd write this blog post to cover common scenarios, as well as short- and long-term fixes if you run into it.
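
If you suspect this is happening on one of your sites, a quick way to confirm is to look at table sizes directly; here's a minimal sketch (the database name drupal is an assumption, so adjust it for your site):

# List the ten largest tables, so a runaway cache_render table stands out immediately.
$ mysql -u root -p -e "SELECT table_name, ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb FROM information_schema.tables WHERE table_schema = 'drupal' ORDER BY (data_length + index_length) DESC LIMIT 10;"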

Consider the following scenarios I've seen where a cache_render table increased to 10, 50, 100 GB or more:

Fixing MySQL 'The table is full' error using the official MySQL Docker image

Recently I had to test importing some very large databases with lots of giant log tables (e.g. 5+ GB tables), and when I tried doing an import into a local Docker MySQL container, I got ERROR 1114: The table is full. Here are the commands I used:

# Run a MySQL container locally to test a large file import.
$ docker run --name mysql-import-test -p 3306 -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=database_name -d mysql:latest

# Import a database .sql file and monitor progress with pv.
$ pv ~/database.sql | mysql -u root -proot -h 127.0.0.1 --port 32774 database_name
ERROR 1114 (HY000) at line 93898: The table 'xyz' is full

I found that, likely due to some Docker filesystem defaults, the MySQL import would fail every time a database table contained more than 1 GB of data. Now, this could be related to the way the database was exported, and I also found some reports from people whose exported MEMORY tables wouldn't import cleanly.
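
One workaround I'd try in this situation (an assumption on my part, not necessarily the fix described in the rest of the post) is to keep /var/lib/mysql on a volume mounted from the host, so the table data isn't constrained by the container's writable layer:

# Same import test, but with the MySQL data directory on the host filesystem.
$ docker run --name mysql-import-test -p 3306 -v "$PWD/mysqldata":/var/lib/mysql -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=database_name -d mysql:latest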

Getting Emoji and multibyte characters on your Drupal 7 site with 7.50

Almost exactly a year ago, I wrote a blog post titled Solving the Emoji/character encoding problem in Drupal 7.

Since writing that post, Drupal 7 bugfixes and improvements have started to pick up steam, as (a) many members of the community who were focused on launching Drupal 8 have had time to take a breather and fix up long-standing Drupal 7 bugs and improvements that hadn't yet been backported, and (b) there are two new D7 core maintainers. One patch I'd been applying to many sites, and hoping would get pulled into core, adds support for full UTF-8, which allows the entry of emojis, Asian symbols, and mathematical symbols that would previously break Drupal 7 sites running on MySQL.
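
Before relying on that support, it's worth a quick check that your MySQL server can handle utf8mb4 at all; this is just a hedged pre-flight check, not the conversion procedure itself:

# Confirm utf8mb4 is available and check the InnoDB prefix setting relevant to long indexes.
$ mysql -u root -p -e "SHOW CHARACTER SET LIKE 'utf8mb4'; SHOW VARIABLES LIKE 'innodb_large_prefix';"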

My old blog post had a few steps that you could follow to make your Drupal 7 site 'mostly' support UTF-8, but there were some rough edges. Now that support is in core, the process for converting your existing site's database is more straightforward:

Solving the Emoji/character encoding problem in Drupal 7

Update: As of Drupal 7.50, Emoji/UTF-8 mb4 is now supported for MySQL (and other databases) in core! See the documentation page here for more information on how to configure it: Multi-byte UTF-8 support in Drupal 7. This blog post exists for historical purposes only—please see the Drupal.org documentation for the most up-to-date instructions!

On many Drupal 7 sites, I have encountered issues with Emoji (mostly) and other special characters (rarely) when importing content from social media feeds, during content migrations, and in other situations, so I finally decided to add a quick blog post about it.

Have you ever noticed an error in your logs complaining about incorrect string values, with an emoji or other special character, like the following:

PDOException: SQLSTATE[HY000]: General error: 1366 Incorrect string value: '\xF0\x9F\x98\x89" ...' for column 'body_value' at row 1: INSERT INTO {field_data_body} (entity_type, entity_id, revision_id, bundle, delta, language, body_value, body_summary, body_format) VALUES (:db_insert_placeholder_0, :db_insert_placeholder_1, :db_insert_placeholder_2, :db_insert_placeholder_3, :db_insert_placeholder_4, :db_insert_placeholder_5, :db_insert_placeholder_6, :db_insert_placeholder_7, :db_insert_placeholder_8); Array ( [:db_insert_placeholder_0] => node [:db_insert_placeholder_1] => 538551 [:db_insert_placeholder_2] => 538550 [:db_insert_placeholder_3] => story [:db_insert_placeholder_4] => 0 [:db_insert_placeholder_5] => und [:db_insert_placeholder_6] => <p>[EMOJI_HERE]</p> [:db_insert_placeholder_7] => [:db_insert_placeholder_8] => filtered_html ) in field_sql_storage_field_storage_write() (line 514 of /drupal/modules/field/modules/field_sql_storage/field_sql_storage.module).

(Note: Actual Emoji was removed from this summary post to prevent Drupal Planet's aggregator from barfing on the feed... due to this very issue!).

To fix this, you need to switch the affected MySQL table's encoding to utf8mb4, and also switch any table columns ('fields', in Drupal parlance) that will store emojis or other exotic UTF-8 characters. This allows those special characters to be stored in the database, and stops the PDOExceptions.
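
As a minimal sketch of that conversion (using the field_data_body table from the error above; the database name drupal is an assumption, and a real site will have many more field tables to convert):

# Convert the table, and all of its text columns, to utf8mb4.
$ mysql -u root -p drupal -e "ALTER TABLE field_data_body CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;"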

Highly-Available PHP infrastructure with Ansible

I just posted a large excerpt from Ansible for DevOps over on the Server Check.in blog: Highly-Available Infrastructure Provisioning and Configuration with Ansible. In it, I describe a simple set of playbooks that configures a highly-available infrastructure primarily for PHP-based websites and web applications, using Varnish, Apache, Memcached, and MySQL, each configured in a way optimal for high-traffic and highly-available sites.

Here's a diagram of the ultimate infrastructure being built:

[Diagram: Highly Available Infrastructure]

Diagnosing Disk I/O issues: swapping, high IO wait, congestion

On one small LEMP VPS I manage, I noticed Munin graphs showing anywhere between 5-50 MB/second of disk I/O. Since the VM has an SSD instead of a traditional spinning hard drive, performance wasn't too bad, but all that disk I/O definitely slowed things down.

I wanted to figure out what was the source of all the disk I/O, so I used the following techniques to narrow down the culprit (spoilers: it was MySQL, which was using some swap space because it was tuned to use a little too much memory).

iotop

First up was iotop, a handy top-like utility for monitoring disk IO in real-time. Install it via yum or apt, then run it with the command sudo iotop -ao to see an aggregated summary of disk IO over the course of the utility's run. I let it sit for a few minutes, then checked back in to find:
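
If you want to follow along, the whole setup is just two commands (pick the package manager matching your distro):

# Install iotop, then watch accumulated (-a) I/O for active (-o) processes only.
$ sudo apt-get install -y iotop    # or: sudo yum install -y iotop
$ sudo iotop -ao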

Dump an entire database with structure only for some tables with mysqldump

I typically use a MySQL GUI like Sequel Pro for database dumps and imports when I'm working from my Mac. GUI apps often give you checkboxes to choose whether to include the structure, content, and drop-table command for each table in an export.

When using mysqldump on the command line, though, it's not as simple. You can either do a full dump and exclude a few tables entirely (using --ignore-table), or dump just the structure of a set of tables (using the -d option). But you can't do both in one go with mysqldump.

However, you can use the power of redirection to do both commands at once to result in one dump file with all your tables, with structure only for the tables you specify:
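
Something like the following (the database and table names here are hypothetical): the first command dumps everything except the huge tables, and the second appends structure-only dumps of those same tables to the file.

# Dump all data, skipping the giant log tables entirely.
$ mysqldump -u root -p --ignore-table=mydb.cache --ignore-table=mydb.watchdog mydb > mydb.sql
# Append structure-only (-d, i.e. --no-data) dumps of the skipped tables.
$ mysqldump -u root -p -d mydb cache watchdog >> mydb.sql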

Quickly resetting a local MySQL database from the command line [Updated]

[Update: As soon as I finished writing this post, I thought to myself, "Surely this would be a good thing to have drush do out of the box." And... it already does, making my work on this shell script null and void. I present to you: drush sql-drop! Oh, well.]

When I'm creating or updating an installation profile/distribution for Drupal, I need to reinstall Drupal over and over again. Doing this requires a few simple steps: drop/recreate the database (or drop all db tables), then drush site-install (shortcut: si) with the proper arguments to install the site again.
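
With drush, that whole reset cycle looks something like this (the profile name and flags are assumptions about your setup):

# Drop every table in the site's database, then reinstall from an install profile.
$ drush sql-drop -y
$ drush site-install standard -y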

In the past, I've often had Sequel Pro running in the background on my Mac, and I'd select all the database tables, right-click, choose 'Delete Tables', then have to click again on a button to confirm the deletion. This took maybe 10-20 seconds, depending on whether I already had Sequel Pro running, and how good my mouse muscles were working.