I have a problem when I try to restore a large database (almost 32 GB in custom format) on my development database node (this node has less RAM, CPU, etc. than my production server).
My database dumps are generated with a command similar to:
pg_dump -F custom -b myDB -Z 9 > /backup/myDB-`date +%y%m%d`.pg91
And to restore it, I use the following command:
pg_restore -F custom -j 5 -d myDB /backup/myDB-20130331.pg91
But each time, the restore command fails with an error like:
pg_restore: [archiver (db)] error returned by PQputCopyData: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
pg_restore: [archiver] worker process failed: exit code 1
pg_restore: [archiver (db)] error returned by PQputCopyData: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
pg_restore: [archiver (db)] error returned by PQputCopyData: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
pg_restore: [archiver (db)] error returned by PQputCopyData: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
And when I check my PostgreSQL logs, I can read this:
HINT: In a moment you should be able to reconnect to the database and repeat your command.
LOG: all server processes terminated; reinitializing
LOG: database system was interrupted; last known up at 2013-04-02 11:41:48 UTC
LOG: database system was not properly shut down; automatic recovery in progress
LOG: redo starts at 86/26F302B0
LOG: unexpected pageaddr 85/E3F52000 in log file 134, segment 38, offset 16064512
LOG: redo done at 86/26F51FC0
LOG: last completed transaction was at log time 2013-04-02 11:50:47.663599+00
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
It's quite strange that my PostgreSQL server "restarts" on its own just because of my restore.
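Since the logs say all server processes were terminated, one thing I could check (assuming a Linux devel node; the exact log paths vary by distribution) is whether the kernel OOM killer ended a backend during the restore:

dmesg | grep -i 'out of memory'
grep -i 'killed process' /var/log/syslog

If the kernel is killing the backend for lack of memory, that would also explain why the better-specced node restores the same dump without problems.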
I tried reducing the number of jobs (the -j 5 option) but still got the same problem.
However, on a node with better specs, I have no problem restoring this database.
I'm not sure, but maybe the rebuilding of my indexes (one of them is really large) could be a clue to understanding this issue?
So I have some questions: is there a better way to restore a really large database? Am I missing something in my pg_restore command? Maybe the settings of my devel server are too low?
Any clue would be greatly appreciated. Thanks in advance.
env: PostgreSQL 9.1 (installed via Debian packages)
Memory limits may prevent very large columns, rows, or result sets from being created, transferred across a network (which in itself will be slow), or received by the client. PostgreSQL does not impose a limit on the total size of a database. Databases of 4 terabytes (TB) are reported to exist.
There are active PostgreSQL clusters in production environments that manage many terabytes of data, and specialized systems that manage petabytes.
If you want to free up space on the file system, either VACUUM FULL or CLUSTER can help you. You may also want to run ANALYZE afterwards, to make sure the planner has up-to-date statistics, but this is not strictly required.
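For example, a minimal sketch (bigtable is a placeholder name for one of the large tables in myDB):

psql -d myDB -c "VACUUM FULL bigtable;"
psql -d myDB -c "ANALYZE bigtable;"

Keep in mind that VACUUM FULL rewrites the whole table and holds an exclusive lock on it while it runs, so it needs enough free disk space for the rewrite and should not be run while the table is in active use.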
pg_dump is a utility for backing up a PostgreSQL database. It makes consistent backups even if the database is being used concurrently. pg_dump does not block other users accessing the database (readers or writers). pg_dump only dumps a single database.
For this kind of big job, it is recommended to disable autovacuum (by setting it to off in your postgresql.conf) during the restoration process.
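For example, on a Debian-packaged 9.1 install (assuming the default cluster name "main"; adjust the path and cluster name for your setup):

# in /etc/postgresql/9.1/main/postgresql.conf
autovacuum = off

# then reload the configuration (no full restart needed)
sudo pg_ctlcluster 9.1 main reload

Once the restore has finished, set autovacuum back to on, reload again, and run ANALYZE so the planner has fresh statistics on the newly loaded data.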
That finally worked for me.