Simple steps to set up performance monitoring for PostgreSQL


Performance concerns are usually put off until they become issues. This is probably the right thing to do since you don’t want to over-optimize. But you can set up your PostgreSQL database to help with future performance, and in less than 5 minutes!

This post is inspired by Craig Kerstiens’s Twitter thread. I’ve added a bit more detail for certain points to explain why they’re important.

1. Track statistics of all executed SQL statements

First, create the extension and tell PostgreSQL to preload the pg_stat_statements library.

create extension pg_stat_statements;

alter system set shared_preload_libraries='pg_stat_statements';
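
Note that the shared_preload_libraries change only takes effect after a server restart. Once you’ve restarted, a quick sanity check is to confirm the library is loaded and the view is queryable, for example:

show shared_preload_libraries;

select count(*) from pg_stat_statements;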

The pg_stat_statements module tracks the planning and execution stats of all executed SQL statements.

Why do you want to do this? So when you’re facing database performance issues later on, you can easily see which queries take up the most time.

How does it work? If you want to see the maximum time spent (in milliseconds) executing each statement, you could run this query:

select query, max_exec_time from pg_stat_statements;

              query               | max_exec_time 
----------------------------------+---------------
 select * from pg_stat_statements |       0.13891

There are many other useful columns for quickly querying stats data. Some of the frequently used ones include calls, total_exec_time, mean_exec_time, and rows, and there are more in the docs.
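
For example, one common way to see which statements consume the most time overall (the column names here assume PostgreSQL 13 or newer, where total_time was split into planning and execution time) would be something like:

select query, calls, total_exec_time, mean_exec_time, rows
from pg_stat_statements
order by total_exec_time desc
limit 5;

If you want to start over after making changes, select pg_stat_statements_reset(); clears the collected statistics.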

2. Log the execution plans for slow queries

The auto_explain module automatically logs the execution plans of slow queries. Why would you want to do this? So that you don’t have to run EXPLAIN manually when you’re debugging these slow queries; the data you need is already in your logs. Enable it by adding the following to postgresql.conf:

session_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = 200
auto_explain.log_analyze = true

These settings will log the execution plan for any query that takes more than 200ms (auto_explain.log_min_duration is interpreted as milliseconds when no unit is given). An example of what it logs looks like this:

LOG:  duration: 3.651 ms  plan:
  Query Text: SELECT count(*)
              FROM pg_class, pg_index
              WHERE oid = indrelid AND indisunique;
  Aggregate  (cost=16.79..16.80 rows=1 width=0) (actual time=3.626..3.627 rows=1 loops=1)
    ->  Hash Join  (cost=4.17..16.55 rows=92 width=0) (actual time=3.349..3.594 rows=92 loops=1)
          Hash Cond: (pg_class.oid = pg_index.indrelid)
          ->  Seq Scan on pg_class  (cost=0.00..9.55 rows=255 width=4) (actual time=0.016..0.140 rows=255 loops=1)
          ->  Hash  (cost=3.02..3.02 rows=92 width=4) (actual time=3.238..3.238 rows=92 loops=1)
                Buckets: 1024  Batches: 1  Memory Usage: 4kB
                ->  Seq Scan on pg_index  (cost=0.00..3.02 rows=92 width=4) (actual time=0.008..3.187 rows=92 loops=1)
                      Filter: indisunique

To make it more readable, you can plug the plan into explain.depesz.com.
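
If you just want to try auto_explain out before changing postgresql.conf, you can also load it for a single session (this requires superuser privileges); a minimal sketch:

LOAD 'auto_explain';
SET auto_explain.log_min_duration = 200;
SET auto_explain.log_analyze = true;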

3. Auto-cancel queries running over 30s

Why would you want to do this? A long-running query may hold locks and block other work, depending on what’s running, so you don’t want it to go unchecked. Setting statement_timeout makes PostgreSQL cancel any statement that runs longer than the limit:

ALTER DATABASE dbname SET statement_timeout = '30s'; 

If you need to increase that limit for an individual transaction, you can run this:

BEGIN;
SET LOCAL statement_timeout = '50s';
-- your sql here
COMMIT;
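
Keep in mind that a timeout set with ALTER DATABASE only applies to new sessions, so reconnect before checking it. You can see what’s in effect for your current session, or scope the limit to a specific role instead of the whole database (reporting_user below is just a placeholder name):

SHOW statement_timeout;

ALTER ROLE reporting_user SET statement_timeout = '5min';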
