PostgreSQL database performance check
Learn about the performance of a PostgreSQL database.
Configure PostgreSQL for pretty good performance
Every PostgreSQL database ships with a default configuration, but a few basic configuration changes can make it much faster while using fewer system resources. If a server performs poorly, adjust the important parameters in the "postgresql.conf" file according to the server's RAM to get better DB performance. Some important parameters of the "postgresql.conf" file are listed below:
max_connections
shared_buffers
temp_buffers
work_mem
maintenance_work_mem
effective_cache_size
random_page_cost
archive_mode
archive_command
archive_timeout
synchronous_commit = off
Note: Change the default value of the "synchronous_commit" parameter to off only if the users/applications can tolerate losing the most recently committed transactions after a crash; doing so can noticeably enhance DB performance.
For more information about why these parameters are tuned, see:
https://www.postgresql.org/docs/10/runtime-config-resource.html
http://rhaas.blogspot.in/2012/03/tuning-sharedbuffers-and-walbuffers.html
After changing these parameters, restart PostgreSQL as postgres_user@database_server:
-bash-4.2$ pg_ctl restart
For example, if the server RAM is 64 GB, set "shared_buffers" to at most 8 GB as recommended by Robert Haas, rather than 16 GB (25% of total RAM) as suggested by PgTune.
For the remaining parameters, the values suggested by the PgTune tool are generally fine.
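As an illustration only, the settings above might look like the snippet below in "postgresql.conf" for a 64 GB server; all values and the archive path are assumptions and must be tuned to the actual workload:
max_connections = 300                  # adjust to the application's connection pool
shared_buffers = 8GB                   # 64 GB server, per Robert Haas' recommendation
temp_buffers = 16MB
work_mem = 64MB
maintenance_work_mem = 1GB
effective_cache_size = 48GB            # roughly 75% of RAM
random_page_cost = 1.1                 # assumes SSD storage; keep the default for spinning disks
archive_mode = on
archive_command = 'cp %p /path/to/archive/%f'   # hypothetical archive location
archive_timeout = 300
synchronous_commit = off               # faster commits; risks losing the latest transactions on a crash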
Check "idle" sessions older than ca. 30 minutes
If there are open sessions in state "idle" that are older than approximately 30 minutes, performance is sometimes degraded and the following error appears:
Error: am.cantLockAsset - Asset is used by someone else. Please retry later.
The following code then lists the username, client IP address, process ID, exact query text, and query start time of those sessions, together with a ready-made kill_query statement for each:
psql -p 5432 -U corpus -d corpus << EOF
\pset format wrapped
\pset linestyle unicode
\pset columns 180
SELECT 'select pg_terminate_backend('||pid||') from pg_stat_activity;' AS kill_query,
       usename, state, client_addr, query, query_start
FROM pg_stat_activity
WHERE datname='corpus' AND pid <> pg_backend_pid() AND state = 'idle'
AND state_change < current_timestamp - INTERVAL '30' MINUTE;
EOF
To terminate a specific idle session, run the generated kill_query for that session, or substitute the process ID manually:
select pg_terminate_backend(<proc_id>) from pg_stat_activity;
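As an added sketch that is not part of the original check, all matching idle sessions can also be terminated in a single statement; review the WHERE clause carefully before running it against a production database:
{
echo ""
psql -p 5432 -U corpus -d corpus -c "SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname='corpus' AND pid <> pg_backend_pid() AND state = 'idle'
AND state_change < current_timestamp - INTERVAL '30' MINUTE";
}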
List of all databases
To see the list of all databases, run the below code as postgres_user@database_server:
{
echo ""
echo "###############"
echo "List of databases"
echo "###############"
psql -p 5432 -U corpus -d corpus -c "\l+";
}
List of the 10 biggest tables of the "corpus" database
To list the 10 biggest tables of the "corpus" database, run the below code as postgres_user@database_server:
The total size includes the TOAST table and the index sizes:
{
echo ""
echo "####################################"
echo "List of Table size of 10 big tables"
echo "####################################"
psql -p 5432 -U corpus -d corpus -c "select schemaname as table_schema,
relname as table_name, pg_size_pretty(pg_total_relation_size(relid)) as total_size,
pg_size_pretty(pg_relation_size(relid)) as data_size,
pg_size_pretty(pg_total_relation_size(relid) - pg_relation_size(relid))
as external_size
from pg_catalog.pg_statio_user_tables
order by pg_total_relation_size(relid) desc,
pg_relation_size(relid) desc
LIMIT 10";
}
table_schema - table's schema name
table_name - table name
total_size - total table size
data_size - size of the table's rows
external_size - size of external elements, such as indexes etc.
Check for long-running queries
To check for long-running queries and their state (idle or active), run the below code as postgres_user@database_server:
{
echo ""
psql -p 5432 -U corpus -d corpus -c "select substr(query,1,60) query,
backend_type,xact_start,
application_name,query_start,
wait_event_type,wait_event,
state from pg_stat_activity";
}
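As a variant not included in the original check, the same view can be restricted to active queries that have been running longer than an assumed threshold (here 5 minutes):
{
echo ""
psql -p 5432 -U corpus -d corpus -c "SELECT pid, now() - query_start AS runtime,
substr(query,1,60) AS query, state
FROM pg_stat_activity
WHERE state <> 'idle' AND query_start < now() - interval '5 minutes'
ORDER BY runtime DESC";
}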
Queries to detect bottlenecks
20 statements that have taken the most time
The following query shows the 20 statements that have consumed the most total time; it relies on the pg_stat_statements extension. Run the below code as postgres_user@database_server:
{
echo ""
psql -p 5432 -c "SELECT substring(query, 1, 50) AS short_query,
round(total_time::numeric, 2) AS total_time,
calls,
round(mean_time::numeric, 2) AS mean,
round((100 * total_time / sum(total_time::numeric) OVER ())::numeric, 2) AS percentage_cpu
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 20";
}
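If this query fails because the pg_stat_statements view does not exist, the extension has to be enabled first. A minimal sketch, assuming superuser access and that a server restart is acceptable:
# add to postgresql.conf, then restart PostgreSQL
shared_preload_libraries = 'pg_stat_statements'
-bash-4.2$ pg_ctl restart
-bash-4.2$ psql -p 5432 -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"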
Find indexes that have never been used
To find the indexes that have never been used since the last statistics reset with pg_stat_reset(), run the below code as postgres_user@database_server:
{
echo ""
psql -p 5432 -U corpus -d corpus -c "SELECT s.schemaname,
s.relname AS tablename,
s.indexrelname AS indexname,
pg_relation_size(s.indexrelid) AS index_size,
idx_scan as index_scans
FROM pg_catalog.pg_stat_user_indexes s
JOIN pg_catalog.pg_index i ON s.indexrelid = i.indexrelid
WHERE s.idx_scan = 0
AND 0 <> ALL (i.indkey)
AND NOT i.indisunique
AND NOT EXISTS
(SELECT 1 FROM pg_catalog.pg_constraint c
WHERE c.conindid = s.indexrelid)
ORDER BY pg_relation_size(s.indexrelid) DESC";
}
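As a hypothetical follow-up that is not part of the original check, an index confirmed to be unneeded could be dropped without blocking writes; <index_name> is a placeholder and should be verified with the application owner first:
psql -p 5432 -U corpus -d corpus -c "DROP INDEX CONCURRENTLY <index_name>;"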
Monitor PostgreSQL VACUUM processes
PostgreSQL offers the "pg_stat_user_tables" view, which provides, for each table (relname), the number of live rows (n_live_tup) and dead rows (n_dead_tup). To check the dead rows of one particular table, change the table name in the "relname" filter (here "asset_feature") as required and run the below code as postgres_user@database_server:
{
echo ""
psql -p 5432 -U corpus -d corpus -c "SELECT schemaname,relname,
n_live_tup,n_dead_tup FROM pg_stat_user_tables where
relname='asset_feature' ORDER BY n_dead_tup";
}
To see the dead rows for all tables, run the below code as postgres_user@database_server:
{
echo ""
psql -p 5432 -U corpus -d corpus -c "SELECT schemaname, relname, n_live_tup, n_dead_tup FROM
pg_stat_user_tables order by n_dead_tup";
}
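If a table has accumulated a large number of dead rows, autovacuum is normally expected to clean them up; as an assumption-based sketch, a manual cleanup of the example table could look like this:
{
echo ""
psql -p 5432 -U corpus -d corpus -c "VACUUM (VERBOSE, ANALYZE) asset_feature";
}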