Simple steps to set up performance monitoring for PostgreSQL
Performance concerns are usually put off until they become issues. That's probably the right thing to do, since you don't want to over-optimize. But you can set up your PostgreSQL database to help with future performance, and in less than 5 minutes!
This post is inspired by Craig Kerstiens' Twitter thread. I've added a bit more detail for certain points to explain why they're important.
1. Track statistics of all executed SQL statements
First, create the extension.
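A minimal sketch of doing that, assuming superuser access (note that shared_preload_libraries only takes effect after a server restart, and this overwrites any existing value):

```sql
-- Make the module available at server start; this requires a restart.
ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';

-- After restarting PostgreSQL, create the extension in the database you want to monitor.
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```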
The pg_stat_statements module tracks the planning and execution statistics of all executed SQL statements.
Why do you want to do this? So when you’re facing database performance issues later on, you can easily see which queries take up the most time.
How does it work? If you want to see the maximum time spent (in milliseconds) executing certain queries, you can query the pg_stat_statements view directly.
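A sketch of such a query (the column names assume PostgreSQL 13 or newer; older versions use max_time instead of max_exec_time):

```sql
-- Top 10 statements by the longest single execution, in milliseconds.
SELECT query, max_exec_time
FROM pg_stat_statements
ORDER BY max_exec_time DESC
LIMIT 10;
```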
There are many other useful ways to quickly query the stats data. Some of the frequently used ones, pulled together in the example query after this list, include:
- mean execution time
- minimum execution time
- standard deviation of execution time
- number of times the statement was executed
And more in the docs.
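As a rough example (again assuming PostgreSQL 13+ column names), you can pull several of these columns in one query:

```sql
-- Statements ordered by average execution time, with call counts and spread.
SELECT query,
       calls,
       mean_exec_time,
       min_exec_time,
       stddev_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```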
2. Log the execution plans for slow queries
The auto_explain module automatically logs the execution plans for slow queries.
Why would you want to do this? So that you don't have to run EXPLAIN manually when you're debugging these slow queries; all the data you need is already available in your logs.
This setting logs the execution plan for any query that takes longer than 200 ms, in the same format you'd get from running EXPLAIN yourself.
To make it more readable, you can plug the plan into explain.depesz.com.
3. Auto-kill queries over 30s
Why would you want to do this? Long-running queries and transactions can hold locks that block the rest of your workload, depending on what's running, so you don't want them running unchecked.
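A sketch of enforcing that at the database level ("mydb" is a placeholder for your database name; statement_timeout cancels the offending statement rather than killing the connection):

```sql
-- Cancel any statement in this database that runs for more than 30 seconds.
-- New sessions pick this up; existing sessions keep their current setting.
ALTER DATABASE mydb SET statement_timeout = '30s';
```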
If you need a higher limit for an individual transaction, you can override it just for that transaction.
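A sketch using SET LOCAL, which reverts as soon as the transaction ends:

```sql
BEGIN;
-- Allow up to 5 minutes for statements in this transaction only.
SET LOCAL statement_timeout = '5min';
-- ... run the long statement here ...
COMMIT;
```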
Have some thoughts on this post? Reply via email.