I recently migrated my Postgres database from Windows to CentOS 6.7. On Windows the database never used much CPU, but on Linux it sits at a constant ~30% CPU (according to top) on a 4-core machine.
Does anyone know if this is normal, or why it would be doing this? The application seems to run fine, as fast as or faster than it did on Windows.
Note that it is a big installation: 100 GB+ of data across 1000+ databases.
I tried using pgAdmin to monitor the server status, but the server status pane hangs and fails to run with the error "the log_filename parameter must be equal".
Idle connections in the database might consume compute resources, such as memory and CPU. When your instance has high CPU utilization, be sure to check for idle connections on the database. For more information, see Performance impact of idle PostgreSQL connections.
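A quick way to check is to query pg_stat_activity (a sketch; the state, pid, and state_change columns exist from PostgreSQL 9.2 on, while older releases expose current_query and procpid instead):

    -- count sessions per state; idle backends still hold a connection
    -- slot and per-backend memory even when they run no queries
    SELECT state, count(*)
    FROM pg_stat_activity
    GROUP BY state
    ORDER BY count(*) DESC;

    -- list idle sessions and how long each has been idle
    SELECT pid, usename, datname, now() - state_change AS idle_for
    FROM pg_stat_activity
    WHERE state = 'idle'
    ORDER BY idle_for DESC;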
shared_buffers (integer): Sets the amount of memory the database server uses for shared memory buffers. The default is typically 32 megabytes (32MB), but might be less if your kernel settings will not support it (as determined during initdb). This setting must be at least 128 kilobytes.
maintenance_work_mem (integer): Specifies the maximum amount of memory to be used by maintenance operations, such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY. If this value is specified without units, it is taken as kilobytes. It defaults to 64 megabytes (64MB).
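As a sketch, you can read the current values from any session and set new ones in postgresql.conf (the sizes below are illustrative assumptions, not tuned recommendations; note that changing shared_buffers requires a server restart):

    -- check the values the running server is actually using
    SHOW shared_buffers;
    SHOW maintenance_work_mem;

    # postgresql.conf -- example sizes only; tune to your RAM and workload
    shared_buffers = 1GB             # a common starting point is ~25% of RAM
    maintenance_work_mem = 256MB     # applies per VACUUM / CREATE INDEX run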
With 1000 databases I expect the autovacuum workers and the stats collector to spend a lot of time checking what needs maintenance.
I suggest you do two things (both shown in the sketch below):
- raise the autovacuum_naptime parameter to reduce the frequency of checks
- put the stats_temp_directory on a ramdisk

You probably also set a high max_connections limit to allow your clients to use that high number of databases, and this is another probable source of CPU load, due to the high number of 'slots' to be checked every time a backend has to synchronize with the others.
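A minimal postgresql.conf sketch of those changes, assuming a tmpfs mount at /var/run/pgstats (a hypothetical path; you would create it, mount tmpfs there, and make it owned by the postgres user before reloading). The values themselves are illustrative, not recommendations:

    # postgresql.conf -- illustrative values only
    autovacuum_naptime = '5min'      # default 1min; autovacuum tries to visit
                                     # every database once per naptime, so with
                                     # 1000+ databases workers wake very often
    stats_temp_directory = '/var/run/pgstats'  # tmpfs keeps stats I/O off disk
    max_connections = 100            # each slot adds per-backend bookkeeping;
                                     # keep this as low as your clients allow

autovacuum_naptime and stats_temp_directory take effect on reload; max_connections requires a restart.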