The Problem With Slow Queries
Most performance problems in web applications have one thing in common: the database. A query that runs fine with 1,000 rows can become a bottleneck at 100,000 rows. Knowing how to diagnose what PostgreSQL is actually doing is a foundational skill for any backend developer.
This guide covers the most effective tools and techniques to track down and fix slow queries.
EXPLAIN and EXPLAIN ANALYZE
EXPLAIN shows PostgreSQL’s query execution plan. EXPLAIN ANALYZE actually runs the query and shows both the plan and the real execution times.
EXPLAIN ANALYZE
SELECT * FROM orders
WHERE customer_id = 42
ORDER BY created_at DESC;
A typical output looks like:
Sort  (cost=1200.45..1201.10 rows=260 width=120) (actual time=18.342..18.390 rows=260 loops=1)
  Sort Key: created_at DESC
  Sort Method: quicksort  Memory: 64kB
  ->  Seq Scan on orders  (cost=0.00..1189.00 rows=260 width=120) (actual time=0.041..17.801 rows=260 loops=1)
        Filter: (customer_id = 42)
        Rows Removed by Filter: 48740
Planning Time: 0.210 ms
Execution Time: 18.501 ms
The key thing here is Seq Scan — PostgreSQL is reading every row in the table and discarding the ones that don’t match. That is exactly what you want to avoid on large tables.
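One caution: because EXPLAIN ANALYZE actually executes the statement, running it on an INSERT, UPDATE, or DELETE will modify data. A common safeguard is to wrap it in a transaction and roll back (the query here is illustrative):

```sql
BEGIN;

EXPLAIN ANALYZE
DELETE FROM orders WHERE created_at < now() - interval '1 year';

-- The plan and timings are printed, but the deletion is undone
ROLLBACK;
```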
Seq Scan vs Index Scan
| Scan Type | When it happens | Performance |
|---|---|---|
| Seq Scan | No usable index, or the planner thinks it’s cheaper | Slow on large tables |
| Index Scan | An index covers the filter/sort | Fast for selective queries |
| Bitmap Index Scan | Multiple conditions, moderate selectivity | Good for range queries |
| Index Only Scan | All needed columns are in the index | Very fast |
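As a sketch of that last row: an Index Only Scan becomes possible when the index itself contains every column the query reads. On PostgreSQL 11+, the INCLUDE clause lets you add payload columns to an index without making them part of the search key (the index name and the `total` column here are illustrative, not from the schema above):

```sql
-- Key column: customer_id; non-key payload column: total
CREATE INDEX idx_orders_customer_total
  ON orders (customer_id) INCLUDE (total);

-- Can now be answered entirely from the index, with no heap access,
-- provided the table's visibility map is reasonably up to date
SELECT customer_id, total FROM orders WHERE customer_id = 42;
```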
When you see Seq Scan on a large table, your first question should be: is there an index on the column used in the WHERE clause?
-- Add an index on customer_id
CREATE INDEX idx_orders_customer_id ON orders(customer_id);
-- Now re-run EXPLAIN ANALYZE
EXPLAIN ANALYZE
SELECT * FROM orders
WHERE customer_id = 42
ORDER BY created_at DESC;
After adding the index, you should see an Index Scan (or a Bitmap Heap Scan) replace the Seq Scan, along with a dramatic drop in execution time. Note that PostgreSQL may still need a separate Sort step for the ORDER BY; the composite indexes covered below can eliminate that as well.
pg_stat_statements
EXPLAIN ANALYZE is useful when you already know which query is slow. pg_stat_statements helps you find the slow queries in the first place.
Enable the extension:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
You also need to add it to postgresql.conf and restart the server, since the library must be loaded at startup:
shared_preload_libraries = 'pg_stat_statements'
Then query the view:
SELECT
  query,
  calls,
  round(total_exec_time::numeric, 2) AS total_ms,
  round(mean_exec_time::numeric, 2) AS mean_ms,
  rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
This gives you the top 10 queries by cumulative execution time. It is the fastest way to identify which queries are causing the most load in production. (On PostgreSQL 12 and earlier, the columns are named total_time and mean_time rather than total_exec_time and mean_exec_time.)
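The counters accumulate from the moment the extension starts collecting, so a query that was slow last month can dominate the view even after you have fixed it. To measure a fresh window (by default this requires superuser privileges), reset the statistics:

```sql
SELECT pg_stat_statements_reset();
```

After a reset, let the system run under normal load for a while before re-querying the view, so the rankings reflect current behavior.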
Practical Indexing Tips
Composite indexes follow column order. An index on (customer_id, created_at) supports queries filtering on customer_id alone, or on both columns together. It does not help if you filter only on created_at.
-- Useful for: WHERE customer_id = ? AND created_at > ?
-- Also useful for: WHERE customer_id = ?
-- Not useful for: WHERE created_at > ?
CREATE INDEX idx_orders_customer_date ON orders(customer_id, created_at);
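Column order matters for sorting, too. Because this index stores rows ordered by customer_id and then created_at, the example query from earlier can read rows in the desired order straight out of the index. This is a sketch; the exact plan the planner chooses depends on table statistics:

```sql
EXPLAIN ANALYZE
SELECT * FROM orders
WHERE customer_id = 42
ORDER BY created_at DESC;
-- With idx_orders_customer_date in place, the planner can walk the
-- index backward for customer 42 and skip the Sort node entirely.
```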
Partial indexes reduce index size and maintenance cost when you only query a subset of rows:
-- Index only active users
CREATE INDEX idx_users_active_email ON users(email) WHERE active = true;
Expression indexes let you index computed values:
-- If you frequently query WHERE lower(email) = ?
CREATE INDEX idx_users_lower_email ON users(lower(email));
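Indexes are not free: every INSERT and UPDATE must maintain them. Before adding more, it is worth checking which existing indexes are never used. The pg_stat_user_indexes view tracks idx_scan, the number of index scans since the last statistics reset:

```sql
-- Indexes that have never been scanned, largest first
SELECT
  relname       AS table_name,
  indexrelname  AS index_name,
  idx_scan,
  pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
```

An unused index on a write-heavy table is a good candidate for removal, but check replicas and periodic jobs before dropping it.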
Ruby on Rails Examples
Diagnosing N+1 queries
The classic Rails problem: loading a list of posts and then hitting the database once per post to get the author.
# N+1 — runs 1 + N queries
posts = Post.all
posts.each { |post| puts post.author.name }
# Fixed with includes — runs 2 queries
posts = Post.includes(:author).all
posts.each { |post| puts post.author.name }
Use the bullet gem in development to automatically detect N+1 queries.
Using pluck to avoid loading full objects
# Loads full ActiveRecord objects — slow and memory-heavy
user_ids = User.where(active: true).map(&:id)
# Uses pluck — returns a plain array, much faster
user_ids = User.where(active: true).pluck(:id)
Running EXPLAIN directly from Rails
Since Rails 7.1, the .explain method accepts options directly — no gems needed:
# Basic EXPLAIN
User.where(active: true).explain
# => EXPLAIN SELECT "users".* FROM "users" WHERE "users"."active" = true
# EXPLAIN ANALYZE with options (Rails 7.1+)
User.where(active: true).explain(:analyze, :verbose, :buffers)
# => EXPLAIN (ANALYZE, VERBOSE, BUFFERS) SELECT ...
This prints the real execution plan with timings, buffers, and details — right in the Rails console, no raw SQL needed.
Useful Tools
pgBadger analyzes PostgreSQL log files and generates detailed HTML reports showing slow queries, query frequency, and lock wait times. It is invaluable for investigating production incidents after the fact.
pgbadger /var/log/postgresql/postgresql.log -o report.html
auto_explain is a PostgreSQL module that automatically logs execution plans for slow queries. Add this to postgresql.conf:
session_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = 1000  # log queries slower than 1 second
auto_explain.log_analyze = true
This gives you execution plans in the logs without modifying your application code — useful in staging or production when you cannot easily run EXPLAIN ANALYZE interactively. Be aware that auto_explain.log_analyze = true adds timing overhead to every statement, not just the slow ones, so enable it with care on busy systems.
Debugging queries in PostgreSQL isn’t magic — it’s process. With EXPLAIN ANALYZE, pg_stat_statements, and smart indexing, you can identify and fix most performance problems before they become production incidents.