Why is my PostgreSQL table larger (in GB) than the csv it came from?

A < 4 GB csv became a 7.7 GB table in my AWS Postgres instance. And a 14 GB csv wouldn't load into 22 GB of space, I'm guessing because it is also going to double in size! Is this factor of two normal? And if so, why, and is it reliable?

Erin asked Mar 23 '15 at 22:03



1 Answer

There are many possible reasons:

  • Indexes take up space. If you have lots of indexes, especially multi-column indexes or GiST / GIN indexes, they can be a big space hog; see the size-breakdown query after this list.

  • Some data types are represented more compactly in text form than in a table. For example, the value 1 consumes 1 byte in CSV (or 2 if you count the comma delimiter), but if you store it in a bigint column it requires 8 bytes.

  • If there's a FILLFACTOR set, PostgreSQL will intentionally waste space to make later UPDATEs and INSERTs faster. If you don't know what FILLFACTOR is, then there isn't one set.

  • PostgreSQL has a much larger per-row overhead than CSV. In CSV, the per-row overhead is 2 bytes for a carriage return and newline. Rows in a PostgreSQL table require 24 to 28 bytes, plus the data values, mainly because of the metadata required for multiversion concurrency control. So a CSV with very many narrow rows will produce a significantly bigger table than a CSV of the same size in bytes that has fewer, wider rows.

  • PostgreSQL can do out-of-line storage and compression of values using TOAST. This can make big text strings significantly smaller in the database than in CSV.
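
To see where those bytes actually go, you can break a table's on-disk footprint into heap, index, and TOAST components. The query below is a minimal sketch; 'mytable' is a placeholder name, and the TOAST figure is approximated as pg_table_size minus the main heap (that difference also includes the free space and visibility maps).

    -- "mytable" is a placeholder; substitute your own table name.
    SELECT pg_size_pretty(pg_relation_size('mytable'))       AS heap,
           pg_size_pretty(pg_indexes_size('mytable'))        AS indexes,
           pg_size_pretty(pg_table_size('mytable')
                          - pg_relation_size('mytable'))     AS toast_fsm_vm,
           pg_size_pretty(pg_total_relation_size('mytable')) AS total;

If the indexes column rivals the heap column, the indexes are where your factor of two is coming from.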

You can use octet_length and pg_column_size to get PostgreSQL to tell you how big rows are. Because of TOAST's out-of-line compressed storage, pg_column_size may report a different size for a value produced by a VALUES expression than for the same value SELECTed from a table.
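
For example, this query shows the per-value cost of the bigint case from the list above; the literal values are just illustrations:

    -- pg_column_size reports a value's on-disk size;
    -- octet_length reports the byte length of its text form.
    SELECT pg_column_size(1::smallint) AS smallint_bytes, -- 2
           pg_column_size(1::integer)  AS integer_bytes,  -- 4
           pg_column_size(1::bigint)   AS bigint_bytes,   -- 8
           octet_length('1')           AS csv_text_bytes; -- 1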

You can also use pg_total_relation_size to find out how big the resulting table is for a given sample input.
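
A minimal experiment along those lines, assuming a scratch table and a sample file path (both hypothetical):

    -- Load a sample and measure the resulting table, indexes and all.
    CREATE TABLE sample (id bigint, val text);  -- hypothetical schema
    COPY sample FROM '/path/to/sample.csv' WITH (FORMAT csv);
    SELECT pg_size_pretty(pg_total_relation_size('sample'));

On a managed instance such as RDS, server-side COPY can't read local files, so use psql's \copy instead. Comparing the reported size against the CSV's size on disk makes the overhead factor for your particular schema concrete.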

Craig Ringer answered Sep 17 '22 at 08:09