
How to fix PostgreSQL server automatic restarts [closed]

On some days the PostgreSQL server starts restarting itself periodically. The log file is below; it contains "found orphan temp table" records, and after those a restart is performed.

The database is accessed by a single application, used over the internet by approximately 50 users, and there are about 10 databases in the cluster.

How can I fix these restarts and errors?

Using PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit

The log file contains:

....

    2013-06-10 11:11:57 EEST   LOG:  server process (PID 25148) was terminated by signal 9: Killed
    2013-06-10 11:11:57 EEST   LOG:  terminating any other active server processes
...

Update

The dmesg output is below. This is a VPS running under Parallels Desktop at a hosting provider; swap is probably disabled. The server also runs Mono 2.6 and Apache, which serve ASP.NET applications accessing the same database. Output from top is below. I have little Linux experience; I read the links from the answer but did not understand them. What is the best way to solve this, preferably without adding memory?

free returns:

             total       used       free     shared    buffers     cached
Mem:       1048576    1046504       2072          0          0     230512
-/+ buffers/cache:     815992     232584
Swap:            0          0          0

dmesg:

[2083241.896072] OOM killed process 21849 (mono) vm:420164kB, rss:165296kB, swap:0kB
[2116348.398705] OOM killed process 4970 (postgres) vm:157948kB, rss:76236kB, swap:0kB
[2121711.560995] OOM killed process 5366 (postgres) vm:160348kB, rss:82340kB, swap:0kB
[2123522.901114] OOM killed process 5505 (postgres) vm:145272kB, rss:66840kB, swap:0kB
[2151490.026306] OOM killed process 362 (mono) vm:370636kB, rss:162272kB, swap:0kB
[2160560.103350] OOM killed process 13285 (postgres) vm:195468kB, rss:103792kB, swap:0kB
[2202499.040721] OOM killed process 19391 (postgres) vm:118792kB, rss:45116kB, swap:0kB
[2207881.033010] OOM killed process 19876 (postgres) vm:141356kB, rss:57004kB, swap:0kB
[2209677.336040] OOM killed process 20017 (postgres) vm:127360kB, rss:50764kB, swap:0kB
[2211481.827980] OOM killed process 20193 (postgres) vm:139560kB, rss:56112kB, swap:0kB
[2227779.349062] OOM killed process 12151 (mono) vm:346484kB, rss:142900kB, swap:0kB
[2233087.801652] OOM killed process 21250 (postgres) vm:111996kB, rss:38548kB, swap:0kB
[2236034.881167] OOM killed process 22622 (postgres) vm:111972kB, rss:37672kB, swap:0kB
[2237418.351794] OOM killed process 23868 (postgres) vm:114480kB, rss:40864kB, swap:0kB
[2237723.417347] OOM killed process 24460 (postgres) vm:112764kB, rss:37500kB, swap:0kB
[2238023.668780] OOM killed process 24583 (postgres) vm:112884kB, rss:36024kB, swap:0kB
[2238210.220733] OOM killed process 24773 (postgres) vm:105600kB, rss:22608kB, swap:0kB
[2238397.290829] OOM killed process 24812 (postgres) vm:106360kB, rss:28996kB, swap:0kB
[2238808.757086] OOM killed process 24973 (postgres) vm:109156kB, rss:28676kB, swap:0kB
[2239112.617356] OOM killed process 25148 (postgres) vm:105520kB, rss:26392kB, swap:0kB
[2239217.367104] OOM killed process 25298 (postgres) vm:105700kB, rss:31020kB, swap:0kB
[2239277.036465] OOM killed process 25417 (postgres) vm:106424kB, rss:26024kB, swap:0kB
[2239400.317380] OOM killed process 25479 (postgres) vm:106392kB, rss:18544kB, swap:0kB
[2239536.589647] OOM killed process 25561 (postgres) vm:108412kB, rss:18364kB, swap:0kB
[2239715.268972] OOM killed process 25602 (postgres) vm:111832kB, rss:35944kB, swap:0kB
[2239798.713414] OOM killed process 25701 (postgres) vm:124232kB, rss:37844kB, swap:0kB
[2239812.799232] OOM killed process 25746 (postgres) vm:135948kB, rss:34552kB, swap:0kB
[2239885.587583] OOM killed process 25752 (postgres) vm:113524kB, rss:36880kB, swap:0kB
[2240040.811768] OOM killed process 25789 (postgres) vm:109204kB, rss:33684kB, swap:0kB
[2240416.506723] OOM killed process 25870 (postgres) vm:109268kB, rss:34060kB, swap:0kB

top:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
26679 postgres  20   0  101m  14m  10m S  5.0  1.4   0:00.20 postgres
26680 postgres  20   0  103m  29m  24m S  1.7  2.8   0:00.21 postgres
26135 www-data  20   0  265m  60m 3180 S  0.3  5.9   0:07.37 mono
26401 www-data  20   0  244m  49m 2912 S  0.3  4.8   0:03.17 mono
    1 root      20   0  8360  236  108 S  0.0  0.0   0:14.01 init
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd/893
    3 root      20   0     0    0    0 S  0.0  0.0   0:00.00 khelper/893
  460 root      20   0  5988  372  204 S  0.0  0.0   0:07.68 syslogd
  488 root      20   0 54568  540   44 S  0.0  0.1   0:00.00 saslauthd
  490 root      20   0 54568  496    0 S  0.0  0.0   0:00.00 saslauthd
  552 root      20   0 22432  440  192 S  0.0  0.0   0:02.05 cron
  563 messageb  20   0 23272  408  136 S  0.0  0.0   0:00.01 dbus-daemon
  594 postgres  20   0 99628 6740 5568 S  0.0  0.6   2:51.67 postgres
  613 root      20   0 19340  216   12 S  0.0  0.0   0:00.00 xinetd
  641 root      20   0 49180  776  220 S  0.0  0.1   0:11.05 sshd
  669 root      20   0 56036 1956  372 S  0.0  0.2   0:42.91 sendmail-mta
  728 postgres  20   0 65848 1388  172 S  0.0  0.1   0:36.99 postgres
 6547 www-data  20   0  252m  57m  280 S  0.0  5.6   1:03.18 mono
 6930 www-data  20   0  117m  43m  124 S  0.0  4.3   1:01.71 mono
 7489 www-data  20   0  122m  40m  124 S  0.0  4.0   1:00.75 mono
 8158 www-data  20   0  118m  38m  124 S  0.0  3.8   0:58.19 mono
 8311 www-data  20   0  120m  38m  124 S  0.0  3.8   1:16.12 mono
 9776 www-data  20   0  302m  85m  660 S  0.0  8.4   1:17.09 mono
12555 root      20   0  183m 2100  612 S  0.0  0.2   0:00.13 console-kit-dae
14887 root      20   0 74392 2544  908 S  0.0  0.2   0:23.99 apache2
14890 www-data  20   0 50000 9792  292 S  0.0  0.9   0:05.86 mono
14892 www-data  20   0  189m  51m  732 S  0.0  5.1   1:57.05 mono
14900 www-data  20   0  168m  34m  608 S  0.0  3.4  11:47.60 mono

Update 2

The postgresql.conf settings are below. work_mem is commented out and shared_buffers is 24MB, so changing those settings does not affect the behaviour. Memory is 1GB. After killing the mono processes, which took 10% of memory, the restarts stopped. How can I fix the issue so that it does not happen again?

#work_mem = 1MB                         # min 64kB
#maintenance_work_mem = 16MB            # min 1MB
shared_buffers = 24MB
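
A quick, generic way to see which processes are holding the most resident memory (nothing specific to this setup) is:

    # Show the ten largest processes by resident set size (RSS, in kB).
    ps -eo pid,user,rss,vsz,comm --sort=-rss | head -n 11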
Asked by Andrus on Jun 10 '13


1 Answer

2013-06-10 11:11:57 EEST   LOG:  server process (PID 25148) was terminated by signal 9: Killed

Either you're running out of memory and the Linux kernel's OOM killer is being run, or a cron job or other tool is killing PostgreSQL directly.

If it's the OOM killer you'll see that in dmesg or your kernel log file, so check there first. If that's the problem, please read the PostgreSQL documentation on Linux memory overcommit for how to stop it happening.
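
For reference, the setting that documentation discusses is the kernel's overcommit policy. Below is a minimal sketch of how it is usually changed; whether the sysctl can actually be modified inside a Parallels/OpenVZ-style container depends on the hosting setup, and the ratio value is only an example.

    # Disable memory overcommit so allocations fail cleanly instead of
    # triggering the OOM killer later (see "Linux Memory Overcommit" in
    # the PostgreSQL docs).
    sysctl -w vm.overcommit_memory=2
    sysctl -w vm.overcommit_ratio=80    # example value; tune to your RAM/swap

    # To persist across reboots, put the same keys in /etc/sysctl.conf:
    #   vm.overcommit_memory = 2
    #   vm.overcommit_ratio = 80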

If you're running out of memory make sure you don't set excessive work_mem or maintenance_work_mem, keep shared_buffers reasonable, etc.
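
Purely as an illustration (not values tuned for this workload), a conservative configuration for a 1GB machine that also runs Mono and Apache might look roughly like this; lowering max_connections only works if the application keeps few connections open or uses a pooler.

    # postgresql.conf -- illustrative, conservative values for a small 1GB VPS
    shared_buffers = 24MB             # already the current value; deliberately small
    work_mem = 1MB                    # per sort/hash operation, per backend
    maintenance_work_mem = 16MB
    max_connections = 20              # fewer backends bounds worst-case memory use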

If the OOM killer isn't at fault then you need to find out what's going around sending SIGKILL to your PostgreSQL backends, because that will never happen on a normal system unless it's the OOM killer's doing.
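
If you do need to trace the sender, one generic approach is to audit kill() syscalls. This is just a sketch: it assumes the audit subsystem (auditctl/ausearch) is installed and usable, which may not be the case inside a container, and it will not catch signals sent via tkill()/tgkill().

    # Log every kill() syscall whose signal argument (a1) is 9 (SIGKILL),
    # tagged with the key "sigkill" so it is easy to search for later.
    auditctl -a always,exit -F arch=b64 -S kill -F a1=9 -k sigkill

    # After the next backend is killed, see which process sent the signal:
    ausearch -k sigkill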

Answered by Craig Ringer