I have a large PostgreSQL table: 2.8 million rows, 2345 MB in size, 49 columns, mostly short VARCHAR fields but with one large JSON field.
It's running on an Ubuntu 12.04 VM with 4GB RAM.
When I try doing a SELECT * against this table, my psql connection is terminated. Looking in the error logs, I just get:
2014-03-19 18:50:53 UTC LOG: could not send data to client: Connection reset by peer
2014-03-19 18:50:53 UTC STATEMENT: select * from all;
2014-03-19 18:50:53 UTC FATAL: connection to client lost
2014-03-19 18:50:53 UTC STATEMENT: select * from all;
Why is this happening? Is there some maximum amount of data that can be transferred, and is that configurable in Postgres?
Having one large, wide table is dictated by the system we're using (I know it's not an ideal DB structure). Can postgres handle tables of this size, or will we keep having problems?
Thanks for any help, Ben
Those messages in the server log just mean that the client went away unexpectedly. In this case, it probably died with an out-of-memory error.
By default, psql loads the entire result set into memory before displaying anything, so that it can decide how best to format the data. You can change that behavior by setting FETCH_COUNT.
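For example (a minimal sketch; your_table stands in for the real table name, and 1000 is an arbitrary batch size to tune to your memory budget):
-- inside a psql session: fetch and display rows in batches of 1000
-- instead of buffering the whole result set in client memory
\set FETCH_COUNT 1000
select * from your_table;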
I have seen a similar issue; however, in my case the problem was not on the client side but most likely in the PostgreSQL driver. The query had to fetch a large number of rows, which could have caused a temporary spike in the driver's memory use. As a result, the cursor I was using to fetch the records was closed, and I got exactly the same log messages.
I would appreciate it if someone could confirm whether this is possible; the one thing I am sure of is that there was no issue on the client side.
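For what it's worth, one way to keep the result set from being buffered in one go, regardless of the driver, is to do the batching explicitly with a server-side cursor (a sketch under the same assumptions; your_table and the batch size of 1000 are placeholders):
-- open a server-side cursor inside a transaction and pull rows in chunks,
-- so neither the client nor the driver has to hold the full result set
BEGIN;
DECLARE big_scan CURSOR FOR SELECT * FROM your_table;
FETCH 1000 FROM big_scan;  -- repeat until no rows are returned
CLOSE big_scan;
COMMIT;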