I'm new to databases. I recently started using TimescaleDB, which is an extension of PostgreSQL, so I guess this is also PostgreSQL related.
I observed some strange behavior. I worked out my table structure: 1 timestamp and 2 doubles, so 24 bytes per row in total, and I imported 2,750,182 rows from a CSV file (via psycopg2's copy_from). By hand I calculated the size should be about 63 MB, but when I query TimescaleDB it reports a table size of 137 MB, an index size of 100 MB, and a total of 237 MB. I was expecting the table size to match my calculation, but it doesn't. Any idea?
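For reference, this is roughly how I arrived at that estimate (assuming 8 bytes for the timestamp and 8 bytes for each double):

```python
# Naive size estimate: payload bytes only, no Postgres overhead.
rows = 2_750_182
bytes_per_row = 8 + 8 + 8          # timestamp + 2 doubles
naive_size_mib = rows * bytes_per_row / (1024 ** 2)
print(f"naive estimate: {naive_size_mib:.1f} MiB")   # ~62.9 MiB
```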
Users can store hundreds of billions of rows and tens of terabytes of data on a single machine, or scale to petabytes across many servers. TimescaleDB includes a number of time-oriented features that aren't found in traditional relational databases, including functions for time-oriented analytics.
For complex queries that go beyond rollups or thresholds, there really is no comparison: TimescaleDB [Fully Managed Service for TimescaleDB, as of September 2021] vastly outperforms InfluxDB here (in some cases thousands of times faster).
TimescaleDB is an open-source database designed to make SQL scalable for time-series data. It is engineered up from PostgreSQL and packaged as a PostgreSQL extension, providing automatic partitioning across time and space (partitioning key), as well as full SQL support.
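To make that concrete, here is a minimal sketch of what setting up a hypertable looks like from psycopg2. The connection string, table, and column names are made up for illustration; `create_hypertable` is TimescaleDB's documented API, but check the docs of your version for the exact options:

```python
import psycopg2

# Hypothetical connection string and schema, purely for illustration.
conn = psycopg2.connect("dbname=metrics user=postgres")
cur = conn.cursor()

# A plain PostgreSQL table: one timestamp and two doubles, as in the question.
cur.execute("""
    CREATE TABLE conditions (
        time        TIMESTAMPTZ      NOT NULL,
        temperature DOUBLE PRECISION,
        humidity    DOUBLE PRECISION
    );
""")

# Turn it into a hypertable, automatically partitioned (chunked) by time.
cur.execute("SELECT create_hypertable('conditions', 'time');")

conn.commit()
cur.close()
conn.close()
```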
At 200 million rows the insert rate in PostgreSQL is an average of 30K rows per second and only gets worse; at 1 billion rows, it's averaging 5K rows per second. On the other hand, TimescaleDB sustains an average insert rate of 111K rows per second through 1 billion rows of data, a 20x improvement.
There are two basic reasons your table is bigger than you expect:

1. Per-tuple overhead in Postgres
2. Index size
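On the first point: every row in Postgres carries a tuple header of roughly 23 bytes, plus a few more bytes for the line pointer and alignment padding, so with a 24-byte payload the real on-disk cost per row is roughly double your estimate, which lines up with the 137 MB you're seeing. To check where the space is going yourself, you can ask PostgreSQL directly. A hedged sketch using psycopg2, with a made-up connection string and table name:

```python
import psycopg2

# Hypothetical connection and table name, just to illustrate the size queries.
conn = psycopg2.connect("dbname=metrics user=postgres")
cur = conn.cursor()

# Standard PostgreSQL size functions for an ordinary table.
cur.execute("""
    SELECT pg_size_pretty(pg_relation_size('conditions')),        -- heap (table) only
           pg_size_pretty(pg_indexes_size('conditions')),         -- all indexes
           pg_size_pretty(pg_total_relation_size('conditions'))   -- heap + indexes + TOAST
""")
print(cur.fetchone())

cur.close()
conn.close()
```

Note that for a hypertable the rows actually live in chunk tables, so these plain-Postgres functions run against the hypertable itself may understate the size; TimescaleDB ships its own hypertable size functions for that (e.g. hypertable_detailed_size in 2.x), so check the docs for your release.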