
Thousands of tables in Postgres - an anti-pattern?

I'm working on software that gathers snapshots from cameras and saves a record of each snapshot into a snapshots table. We have thousands of cameras and hundreds of millions of snapshot records. Each snapshot record contains only a small amount of data: id, camera_id, timestamp, and notes (a string field for additional information).

We're starting to notice performance issues in some queries, mainly when fetching snapshots within a given time range. Aside from upgrading the hardware that Postgres runs on, we're looking for alternative ways to improve the performance of the whole system.
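
For context, the current schema and the slow query look roughly like this (a sketch based on the description above; the column name taken_at stands in for the timestamp field, and the per-camera filter is an assumption about our typical query shape):

    CREATE TABLE snapshots (
        id         bigserial PRIMARY KEY,
        camera_id  integer     NOT NULL,
        taken_at   timestamptz NOT NULL,  -- the "timestamp" field
        notes      text
    );

    -- Typical slow query: snapshots for one camera within a time range.
    SELECT *
    FROM snapshots
    WHERE camera_id = 123
      AND taken_at BETWEEN '2015-07-01' AND '2015-07-31';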

The team ran into a similar situation before with MongoDB, with one collection for cameras and one for snapshots. They switched to a separate snapshots collection for each camera, resulting in a couple of thousand snapshot collections. This improved performance by 10x without causing new problems.

Now we're facing the same problem with Postgres. Even though the above solution worked, it doesn't seem like something a database would be optimized for. Would it be a crazy idea to do the same thing in Postgres? Is there a known limit on the number of tables you'd normally use? I wasn't able to find good information for this use case.

asked Jul 31 '15 by Milos


1 Answer

Looks like partitioning could be useful for you, but I see a note saying that large numbers (>100) of partitions can be detrimental. You would probably need several thousand partitions, though this may not have a negative impact if your queries never involve more than one partition at a time. From my knowledge of Evercam, they might not.
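
Not spelled out in the answer, but as a rough illustration, per-camera partitioning could look something like the sketch below using declarative partitioning (PostgreSQL 11 or later; older versions would need inheritance-based partitioning instead). Table and column names are assumptions carried over from the question:

    -- Minimal sketch, assuming PostgreSQL 11+ declarative partitioning.
    CREATE TABLE snapshots (
        id         bigserial,
        camera_id  integer     NOT NULL,
        taken_at   timestamptz NOT NULL,
        notes      text,
        PRIMARY KEY (camera_id, id)       -- the partition key must be part of the PK
    ) PARTITION BY LIST (camera_id);

    -- One partition per camera, e.g. for camera 42:
    CREATE TABLE snapshots_cam_42 PARTITION OF snapshots
        FOR VALUES IN (42);

    -- A per-partition index keeps time-range scans for one camera cheap.
    CREATE INDEX ON snapshots_cam_42 (taken_at);

    -- Queries that filter on camera_id are pruned to a single partition:
    SELECT *
    FROM snapshots
    WHERE camera_id = 42
      AND taken_at BETWEEN '2015-07-01' AND '2015-08-01';

With list partitioning on camera_id, a time-range query for a single camera only touches that camera's partition, which is roughly what the per-camera collections achieved in MongoDB.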

answered Oct 01 '22 by Marco Herbst