Occasionally we find sites that are running extremely slowly, and when we dig in we discover MySQL queries that have been running for days, or even weeks. These stuck queries end up consuming resources on the database server and slowing everything else down.
How to find the stuck queries
The easiest way to find stuck queries is to run the following statement against your database:
SHOW FULL PROCESSLIST;
This will list every connected thread, the query it is running, the user that issued it, and how long it has been running (the Time column, in seconds).
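If the full processlist is noisy, you can also query information_schema.processlist directly and filter by runtime. A minimal sketch, where the one-hour cutoff is just an assumption to tune for your workload:
-- Show only queries that have been running for more than an hour
SELECT id, user, time, state, LEFT(info, 100) AS query
FROM information_schema.processlist
WHERE command = 'Query'
  AND time > 3600
ORDER BY time DESC;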
How to manually kill the stuck queries in AWS
RDS MySQL
If you ever find threads stuck in the 'Sending data' state that will never complete, you will need to kill those queries. Be aware that killing non-SELECT queries (e.g. INSERT, UPDATE or DELETE) mid-flight can leave your data in an inconsistent state, so execute with caution. In this example the MySQL username is drupal.
SELECT CONCAT('CALL mysql.rds_kill(',id,');') FROM information_schema.processlist
WHERE user='drupal';
This will produce a list of SQL statements that you can then execute to kill every query owned by that user:
CALL mysql.rds_kill(18047573);
CALL mysql.rds_kill(18003186);
CALL mysql.rds_kill(18122490);
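If you want to be more surgical and only kill the genuinely stuck threads, the generator query can be narrowed by command and runtime. A sketch, again using an assumed one-hour cutoff:
-- Only generate kill statements for this user's queries running over an hour
SELECT CONCAT('CALL mysql.rds_kill(', id, ');')
FROM information_schema.processlist
WHERE user = 'drupal'
  AND command = 'Query'
  AND time > 3600;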
How to automatically kill long running
queries in MySQL
It is all well and good to reactively kill stuck queries, but it would be far better if they were killed automatically.
Of the parameters you can tune in MySQL, the one I will focus on is max_execution_time.
The MAX_EXECUTION_TIME hint is permitted only for SELECT statements. It places a limit N (a timeout value in milliseconds) on how long a statement is permitted to execute before the server terminates it. The max_execution_time system variable applies the same timeout, also in milliseconds, to every SELECT statement the server runs (a value of 0 disables it).
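For example, the hint can be applied to an individual statement, or the equivalent variable can be set for the current session. The 30-second value and the node table below are just illustrative placeholders:
-- Abort this SELECT if it runs for longer than 30 seconds
SELECT /*+ MAX_EXECUTION_TIME(30000) */ COUNT(*) FROM node;
-- Or apply the same timeout to all SELECTs in this session
SET SESSION max_execution_time = 30000;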
Checking that AWS RDS supports
this parameter (using awscli):
aws rds describe-db-parameters \
  --db-parameter-group-name $PARAMETER_GROUP_NAME \
  | jq '.Parameters[] | select(.ParameterName == "max_execution_time")'
So AWS RDS does support this parameter, and its ApplyType is dynamic, which means any change is applied immediately (whereas static parameters require a database reboot to take effect).
Changing this parameter can be done via awscli or the AWS console.
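With awscli, setting the timeout could look like the following sketch. The 30000 millisecond (30 second) value is an assumption; choose a limit that suits your application:
aws rds modify-db-parameter-group \
  --db-parameter-group-name $PARAMETER_GROUP_NAME \
  --parameters "ParameterName=max_execution_time,ParameterValue=30000,ApplyMethod=immediate"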