
PostgreSQL database performance check

Learn how to check and improve the performance of a PostgreSQL database.

Configure PostgreSQL for pretty good performance

Every PostgreSQL database ships with a default configuration, but a few basic configuration changes can make it much faster while using fewer system resources. If a server performs poorly, adjust the important parameters in the "postgresql.conf" file according to the server's RAM to get better database performance. The most important parameters are "shared_buffers", "effective_cache_size", "work_mem", "maintenance_work_mem", "wal_buffers", and "checkpoint_completion_target".

Changing some of these parameters requires a database restart. For suggested values, see the PgTune tool at https://pgtune.leopard.in.ua/. Provide all the requested information ("DB Version", "OS Type", "DB Type", "Total RAM", "Number of CPUs", "Number of Connections", and "Data Storage") and generate the report. Then set the values in the "postgresql.conf" file and restart the database as the "postgres" user:


CODE
-bash-4.2$ pg_ctl restart


If you use the PgTune tool to set the PostgreSQL parameters, pay special attention to the most important parameter, "shared_buffers". PgTune follows the PostgreSQL rule of thumb, which suggests setting "shared_buffers" to 25% of the system's memory (RAM). However, according to Robert Haas (a core member of the PostgreSQL community), "shared_buffers" should not be more than about 8 GB on Linux or 512 MB on Windows, and sometimes less. See the following blog post for details:

CODE
http://rhaas.blogspot.in/2012/03/tuning-sharedbuffers-and-walbuffers.html


For example, if the server has 64 GB of RAM, set "shared_buffers" to at most 8 GB as recommended by Robert Haas, not to 16 GB (25% of total RAM) as suggested by PgTune.
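
This cap can be expressed as a small shell calculation (a sketch; the RAM size is hard-coded here for illustration, whereas on a real server you would read it from /proc/meminfo or `free -g`):

```shell
# Sketch: choose shared_buffers as the smaller of 25% of RAM and the 8 GB cap.
total_ram_gb=64
quarter=$(( total_ram_gb / 4 ))   # PgTune's 25% rule of thumb
cap=8                             # suggested Linux ceiling, per the blog post above
if [ "$quarter" -lt "$cap" ]; then
  shared_buffers_gb=$quarter
else
  shared_buffers_gb=$cap
fi
echo "shared_buffers = ${shared_buffers_gb}GB"
```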

For the other parameters, the values suggested by the PgTune tool are fine.
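
Putting this together, a "postgresql.conf" for such a 64 GB Linux server might end up near the following values. These are illustrative only; generate a PgTune report for your own hardware and workload:

```
# postgresql.conf - illustrative values for a 64 GB Linux server (example only)
shared_buffers = 8GB                 # capped as discussed above, not 16GB
effective_cache_size = 48GB          # roughly 75% of RAM
work_mem = 64MB
maintenance_work_mem = 2GB
wal_buffers = 16MB
checkpoint_completion_target = 0.9
```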

Check "idle" sessions older than ca. 30 minutes

If there are sessions that have been in state "idle" for more than approximately 30 minutes, performance can degrade and the following error may appear:

CODE
Error: am.cantLockAsset - Asset is used by someone else. Please retry later.


The following code lists the username, client IP address, process ID, query text, and query start time for such sessions:

CODE
psql -p 5432 -U corpus -d corpus << EOF
\pset format wrapped
\pset linestyle unicode
\pset columns 180
SELECT 'select pg_terminate_backend('||pid||') from pg_stat_activity;' AS kill_query,
       usename, state, client_addr, query, query_start
FROM pg_stat_activity
WHERE datname='corpus' AND pid <> pg_backend_pid() AND state = 'idle'
AND state_change < current_timestamp - INTERVAL '30' MINUTE;
EOF
EOF


The code above returns, in the "kill_query" column, a ready-made termination statement for each process ID, along with the other session details. To terminate an idle session, run the following command as the "corpus" user, connected to the "corpus" database:

CODE
select pg_terminate_backend(<proc_id>) from pg_stat_activity;
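
Instead of terminating the sessions one by one, the generated kill statements can be folded into a single pass (a sketch; double-check the filter before running it, since it terminates every matching session at once):

```sql
-- Terminate all sessions in the "corpus" database that have been idle
-- for more than 30 minutes, excluding the current session.
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'corpus'
  AND pid <> pg_backend_pid()
  AND state = 'idle'
  AND state_change < current_timestamp - INTERVAL '30' MINUTE;
```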

List all databases

To list all databases, run the below code as postgres_user@database_server:

CODE
{ 
echo "" 
echo "###############" 
echo "List of databases" 
echo "###############" 
psql -p 5432 -U corpus -d corpus -c "\l+"; 
}
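
If only the size of a single database is needed, pg_size_pretty() and pg_database_size() give it directly (a sketch, assuming the same "corpus" database):

```sql
-- Total on-disk size of the "corpus" database, human-readable.
SELECT pg_size_pretty(pg_database_size('corpus'));
```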

List the 10 largest tables of the "corpus" database

To list the 10 largest tables of the "corpus" database, run the below code as postgres_user@database_server:

The total size includes the TOAST table and the indexes:

CODE
{ 
echo "" 
echo "####################################" 
echo "List of Table size of 10 big tables" 
echo "####################################" 
psql -p 5432 -U corpus -d corpus -c "select schemaname as table_schema, 
relname as table_name, pg_size_pretty(pg_total_relation_size(relid)) as total_size, 
pg_size_pretty(pg_relation_size(relid)) as data_size, 
pg_size_pretty(pg_total_relation_size(relid) - pg_relation_size(relid)) 
as external_size 
from pg_catalog.pg_statio_user_tables 
order by pg_total_relation_size(relid) desc, 
pg_relation_size(relid) desc 
LIMIT 10"; 
}


The columns of the above query are:

CODE
table_schema - the table's schema name 
table_name - the table name 
total_size - total table size, including indexes and TOAST data 
data_size - size of the table's rows 
external_size - size of external elements, such as indexes

Check for long running queries

To check for long-running queries and their state (idle or active), run the below code as postgres_user@database_server:

CODE
{ 
echo "" 
psql -p 5432 -U corpus -d corpus -c "select substr(query,1,60) query, 
backend_type,xact_start, 
application_name,query_start, 
wait_event_type,wait_event, 
state from pg_stat_activity"; 
}
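
To narrow the output to genuinely long-running statements, the same view can be filtered on query runtime (a sketch; the 5-minute threshold is an arbitrary example):

```sql
-- Active statements running for more than 5 minutes, longest first.
SELECT pid,
       now() - query_start AS runtime,
       state,
       substr(query, 1, 60) AS query
FROM pg_stat_activity
WHERE state = 'active'
  AND query_start < now() - INTERVAL '5 minutes'
ORDER BY runtime DESC;
```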

Queries to detect bottlenecks

20 statements which have taken a lot of time

The following query shows the 20 statements that have consumed the most time. It relies on the "pg_stat_statements" extension. Run the below code as postgres_user@database_server:

CODE
{
echo ""
psql -p 5432 -c "SELECT substring(query, 1, 50) AS short_query,
round(total_time::numeric, 2) AS total_time,
calls,
round(mean_time::numeric, 2) AS mean,
round((100 * total_time / sum(total_time::numeric) OVER ())::numeric, 2) AS percentage_cpu
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 20";
}
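
Note that "pg_stat_statements" is an extension and is not active by default: it must be preloaded at server start and created once per database. Also, on PostgreSQL 13 and later the columns are named "total_exec_time" and "mean_exec_time" instead of "total_time" and "mean_time". A minimal setup sketch:

```sql
-- In postgresql.conf (a server restart is required after changing this):
--   shared_preload_libraries = 'pg_stat_statements'
-- Then, once in each database where you want to query the view:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```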

Find indexes that have never been used

To find the indexes that have never been used since the last statistics reset with pg_stat_reset(), run the below code as postgres_user@database_server:

CODE
{
echo ""
psql -p 5432 -U corpus -d corpus -c "SELECT s.schemaname,
s.relname AS tablename,
s.indexrelname AS indexname,
pg_relation_size(s.indexrelid) AS index_size,
idx_scan as index_scans
FROM pg_catalog.pg_stat_user_indexes s
JOIN pg_catalog.pg_index i ON s.indexrelid = i.indexrelid
WHERE s.idx_scan = 0
AND 0 <> ALL (i.indkey)
AND NOT i.indisunique
AND NOT EXISTS
(SELECT 1 FROM pg_catalog.pg_constraint c
WHERE c.conindid = s.indexrelid)
ORDER BY pg_relation_size(s.indexrelid) DESC";
}
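
Once an index has been confirmed as unused (ideally over several statistics periods and on all replicas, since index scans on standbys are not reflected in the primary's statistics), it can be dropped without blocking writes. The index name below is a hypothetical placeholder:

```sql
-- CONCURRENTLY avoids taking a lock that blocks writes on the table.
-- "idx_asset_feature_unused" is a placeholder, not a real index name.
DROP INDEX CONCURRENTLY IF EXISTS idx_asset_feature_unused;
```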

Monitor PostgreSQL VACUUM processes

Check how many dead rows ("n_dead_tup") a particular table contains. PostgreSQL offers the "pg_stat_user_tables" view, which provides a per-table breakdown ("relname") including the number of dead rows ("n_dead_tup"). Replace 'asset_feature' in the "relname" filter below with the table you want to inspect. Run the below code as postgres_user@database_server:

CODE
{
echo ""
psql -p 5432 -U corpus -d corpus -c "SELECT schemaname,relname,
n_live_tup,n_dead_tup FROM pg_stat_user_tables where
relname='asset_feature' ORDER BY  n_dead_tup";
}


To see the dead rows for all the tables, run the below code as postgres_user@database_server:

CODE
{
echo ""
psql -p 5432 -U corpus -d corpus -c "SELECT schemaname, relname, n_live_tup, n_dead_tup FROM
pg_stat_user_tables order by n_dead_tup";
}
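
If a table shows a persistently high dead-row count, the autovacuum settings may need tuning, or the table can be vacuumed manually (a sketch; "asset_feature" is the example table from above):

```sql
-- Reclaim dead rows and refresh planner statistics for one table.
VACUUM (VERBOSE, ANALYZE) asset_feature;
```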