As the title says: I have been searching for a while and have not been able to find an answer. The 8.4 documentation states that a key and a value can't be longer than 65535 bytes, but the 9.0 documentation doesn't mention any limit at all.
Comment: hstore is deprecated. Use jsonb.
Reply: @danger89 Actually, it's not formally deprecated, though I don't think there's any reason to use it in favour of jsonb anymore.
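Since jsonb came up: an existing hstore column can be converted with the built-in hstore_to_jsonb function (available since PostgreSQL 9.5). A minimal sketch; the table and column names here are hypothetical:

```sql
-- Cast an hstore literal to jsonb.
SELECT hstore_to_jsonb('a => 1, b => 2'::hstore);

-- Convert an existing hstore column in place
-- (hypothetical table "items", column "attrs").
ALTER TABLE items
    ALTER COLUMN attrs TYPE jsonb
    USING hstore_to_jsonb(attrs);
```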
hstore is a PostgreSQL extension that implements the hstore data type: a key-value type that has been around since before the JSON and JSONB data types were added.
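For reference, a minimal sketch of basic hstore usage; the table and column names are hypothetical:

```sql
-- Enable the extension (shipped with PostgreSQL's contrib modules).
CREATE EXTENSION IF NOT EXISTS hstore;

-- A hypothetical table with an hstore column.
CREATE TABLE items (
    id    serial PRIMARY KEY,
    attrs hstore
);

INSERT INTO items (attrs)
VALUES ('color => red, size => large'::hstore);

-- Read a single value by key; -> returns text, or NULL if the key is absent.
SELECT attrs -> 'color' AS color FROM items;
```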
hstore is a varlena, and is limited by the maximum size of TOASTed fields: about 1 GB.

I do not recommend that you go anywhere near that size; performance will be awful. Every time you update a row - including rows with hstore fields - PostgreSQL must write a new copy of the whole row. Needless to say, with gigabyte rows that's not going to be fun.

Read performance will be OK if you're reading all the keys/values, but poor if you're selectively reading just a few, because the whole hstore must be de-TOASTed before access.
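To see how close your stored values actually are to that limit, pg_column_size reports the on-disk size of a field in bytes (after any TOAST compression). A sketch, assuming a hypothetical table "items" with an hstore column "attrs":

```sql
-- On-disk size in bytes of each stored hstore value,
-- largest first (post-compression, so this can be
-- smaller than the in-memory representation).
SELECT id, pg_column_size(attrs) AS attrs_bytes
FROM items
ORDER BY attrs_bytes DESC
LIMIT 10;
```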
It's hard to give more specific advice without knowing your design and use case; the why of this question.