pg_dump gets SSL error, seems to time out

I'm trying to download a database to my local machine using pg_dump. The command I'm using is:

pg_dump --host xx.xx.xx.xx --port xxxx --username "xxx" --password  --format custom --blobs --verbose --file "testing.db" "xxx"

When it gets to dumping the last table in the database it always crashes with this error:

pg_dump: Dumping the contents of table "versions" failed: PQgetCopyData() failed.
pg_dump: Error message from server: SSL error: sslv3 alert handshake failure
pg_dump: The command was: COPY public.xxx (columns) TO stdout;

I SSH'd into a server that's a bit closer to the server I'm downloading from (I'm in Brisbane, it's in San Francisco) and was able to do the pg_dump without issue. So I know the database server is fine. I suspect it's a timeout because it's getting to the last table before failing; if it was actually an SSL error I'd have expected it to come up sooner. That said, the timeout occurs after a different amount of time each time it fails (the two most recent tests failed after 1300s and 1812s respectively).
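
In case it's useful, the workaround looked roughly like this (host names and paths are placeholders, not my real setup):

# Run the dump on a host with a fast link to the database server
# (-t keeps a terminal attached so the --password prompt works),
# then copy the compressed custom-format file back locally.
ssh -t user@nearby-host \
  'pg_dump --host xx.xx.xx.xx --port xxxx --username "xxx" --password \
           --format custom --blobs --verbose --file "testing.db" "xxx"'
scp user@nearby-host:testing.db .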

Any tips on how to debug are welcome.

I'm on OS X 10.8.5. Local pg_dump is 9.2.4; the server is Ubuntu Server running PostgreSQL 9.1.9.

Asked Jan 31 '14 by Alex Ghiculescu

People also ask

Does pg_dump affect performance?

The only impact of pg_dump is the increased I/O load and the long-running transaction it creates. The long transaction will keep autovacuum from reclaiming dead tuples for the duration of the dump. Normally that is no big problem unless you have very high write activity in the database.
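
If you want to see that long-running transaction while a dump is in progress, something like this works as a rough check (a sketch, assuming PostgreSQL 9.2 or later; on 9.1 the columns are named procpid and current_query instead):

# List sessions with an open transaction, oldest first; a running pg_dump
# shows up with a COPY ... TO stdout query and a growing transaction age.
psql -h xx.xx.xx.xx -U xxx -d xxx -c "
  SELECT pid, now() - xact_start AS xact_age, query
  FROM pg_stat_activity
  WHERE xact_start IS NOT NULL
  ORDER BY xact_start;"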

Is pg_dump consistent?

pg_dump is a utility for backing up a PostgreSQL database. It makes consistent backups even if the database is being used concurrently.

What does pg_dump command do?

The pg_dump command extracts a PostgreSQL database into a script file or another archive file. This utility is for backing up databases. The utility makes consistent backups even if the database is being used concurrently. Readers, writers, and other users won't be blocked from using the database while using pg_dump.
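
For reference, a minimal sketch of the two common invocations (database and file names are placeholders):

# Plain SQL script: restore by feeding it back to psql.
pg_dump mydb > mydb.sql
psql -d newdb -f mydb.sql

# Custom archive (the format used in the question): compressed,
# restored with pg_restore.
pg_dump --format custom --file mydb.dump mydb
pg_restore -d newdb mydb.dump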


1 Answer

It might be an SSL renegotiation problem.

See the ssl_renegotiation_limit parameter on the server (postgresql.conf) and the associated warning about old SSL client libraries, although OS X 10.8 should be shipping something newer than that.

From the 9.1 documentation:

ssl_renegotiation_limit (integer)

Specifies how much data can flow over an SSL-encrypted connection before
renegotiation of the session keys will take place.

Renegotiation decreases an attacker's chances of doing cryptanalysis when large amounts of traffic can be examined, but it also carries a large performance penalty. The sum of sent and received traffic is used to check the limit. If this parameter is set to 0, renegotiation is disabled. The default is 512MB.

Note: SSL libraries from before November 2009 are insecure when using SSL renegotiation, due to a vulnerability in the SSL protocol. As a stop-gap fix for this vulnerability, some vendors shipped SSL libraries incapable of doing renegotiation. If any such libraries are in use on the client or server, SSL renegotiation should be disabled.

EDIT:

Updating this parameter in postgresql.conf does not require a server restart, but a server reload with /etc/init.d/postgresql reload or service postgresql reload.

The value can also be checked in SQL with show ssl_renegotiation_limit;
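
Putting that together, a sketch of the server-side change (the config path is the usual Debian/Ubuntu location, and setting the limit to 0 disables renegotiation; adjust for your install):

# In postgresql.conf, e.g. /etc/postgresql/9.1/main/postgresql.conf:
#   ssl_renegotiation_limit = 0

# Reload the server configuration (no restart needed):
sudo service postgresql reload

# Verify from any client session:
psql -h xx.xx.xx.xx -U xxx -d xxx -c "show ssl_renegotiation_limit;"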

Even if the dump file is smaller than 512MB, the amount of data transmitted over the connection may be far larger, since pg_dump compresses the data locally when using the custom format (--format custom): the server sends the uncompressed COPY stream, and that is what counts toward the limit.
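
A quick way to compare the two sizes (a sketch, reusing the placeholders from the question; pg_database_size reports on-disk size, so it is only a rough proxy for the COPY traffic):

# On-disk size of the database on the server:
psql -h xx.xx.xx.xx -U xxx -d xxx -c "SELECT pg_size_pretty(pg_database_size(current_database()));"

# Size of the compressed custom-format dump locally:
ls -lh testing.db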

Answered by Daniel Vérité