Storing large amounts of data in a database

I'm working on a home-automation project which lets users view their energy usage over a period of time. Currently we request data every 15 minutes, and we are expecting around 2000 users for our first big pilot.

My boss is requesting that we store at least half a year of data. A quick sum leads to an estimate of around 35 million records. Though these records are small (around 500 bytes each), I'm still wondering whether storing these in our database (Postgres) is the correct decision.
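For reference, that quick sum works out roughly as follows (you can even let Postgres do the arithmetic; the figures are the estimates above):

```sql
-- Rough sizing check: 2000 users, one reading every 15 minutes,
-- ~500 bytes per record, half a year (~183 days).
SELECT
    2000 * (24 * 60 / 15) * 183                               AS estimated_rows,      -- ~35 million
    pg_size_pretty(2000::bigint * (24 * 60 / 15) * 183 * 500) AS estimated_raw_size;  -- ~16 GiB (17.5 GB) raw, before indexes and per-row overhead
```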

Does anyone have some good reference material and/or advice about how to deal with this amount of information?

asked Jul 20 '11 by Exelian



2 Answers

For now, 35M records of 0.5K each means roughly 17 GB of data. This fits in a database for your pilot, but you should also think about the next step after the pilot. Your boss will not be happy when the pilot is a big success and you have to tell him that you cannot add 100,000 users to the system in the coming months without redesigning everything. Moreover, what about a new feature for VIP users to request data every minute...
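To make that scaling concern concrete, here is a back-of-the-envelope query using the hypothetical figures above (100,000 users sending one reading per minute, still ~500 bytes per record):

```sql
-- Hypothetical scale-up: 100,000 users, one reading per minute,
-- ~500 bytes per record, over the same half year (~183 days).
SELECT
    100000::bigint * (24 * 60) * 183                        AS estimated_rows,      -- ~26 billion
    pg_size_pretty(100000::bigint * (24 * 60) * 183 * 500)  AS estimated_raw_size;  -- ~12 TiB raw, before indexes
```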

This is a complex issue and the choice you make will restrict the evolution of your software.

For the pilot, keep it as simple as possible to get the product out as cheaply as possible --> OK for a database. But tell your boss that you cannot open up the service like that, and that you will have to change things before taking on 10,000 new users per week.

One thing for the next release: have multiple data repositories: one for your user data that is updated frequently, one for your query/statistics system, ...
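A minimal sketch of that split, with made-up table and column names: one write-heavy raw table, and one pre-aggregated table that the statistics/reporting side reads from.

```sql
-- Hypothetical layout: a write-heavy raw table and a read-oriented daily rollup.
CREATE TABLE measurement_raw (
    user_id     integer     NOT NULL,
    measured_at timestamptz NOT NULL,
    energy_wh   integer     NOT NULL
);

CREATE TABLE measurement_daily (
    user_id   integer NOT NULL,
    day       date    NOT NULL,
    energy_wh bigint  NOT NULL,
    PRIMARY KEY (user_id, day)
);

-- Periodically roll yesterday's raw rows up into the statistics table.
INSERT INTO measurement_daily (user_id, day, energy_wh)
SELECT user_id, measured_at::date, sum(energy_wh)
FROM measurement_raw
WHERE measured_at >= date_trunc('day', now() - interval '1 day')
  AND measured_at <  date_trunc('day', now())
GROUP BY user_id, measured_at::date;
```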

You could look at RRD (a round-robin database such as RRDtool, which keeps time series in fixed-size files by aggregating older samples) for your next release.

Also keep in mind the update frequency: 2000 users updating data every 15 minutes means 2.2 updates per second --> OK; 100,000 users updating data every 5 minutes means 333.3 updates per second. I am not sure a simple database can keep up with that, and a single web service server definitely cannot.
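One standard way to cope with that write rate is to batch readings into multi-row INSERTs (or load them with COPY) rather than issuing one INSERT per reading; a sketch, reusing the hypothetical measurement_raw table from above:

```sql
-- Batch readings that arrive in the same short window into one statement
-- instead of issuing hundreds of single-row INSERTs per second.
INSERT INTO measurement_raw (user_id, measured_at, energy_wh) VALUES
    (101, '2011-07-20 10:00:00+00', 42),
    (102, '2011-07-20 10:00:00+00', 37),
    (103, '2011-07-20 10:00:00+00', 55);
```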

answered Oct 06 '22 by jfg956


We frequently work with tables that look like this. Obviously structure your indexes based on usage (do you read or write a lot, etc.), and from the start think about table partitioning based on some high-level grouping of the data.
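As a sketch of both ideas, using PostgreSQL's declarative partitioning (available from version 10; table and column names are illustrative only):

```sql
-- Range-partition the readings by month so old data can be managed
-- one partition at a time.
CREATE TABLE energy_reading (
    user_id     integer     NOT NULL,
    measured_at timestamptz NOT NULL,
    energy_wh   integer     NOT NULL
) PARTITION BY RANGE (measured_at);

CREATE TABLE energy_reading_2011_07 PARTITION OF energy_reading
    FOR VALUES FROM ('2011-07-01') TO ('2011-08-01');

-- Index to match the dominant read pattern: one user's usage over a period.
CREATE INDEX ON energy_reading_2011_07 (user_id, measured_at);
```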

Also, you can implement an archiving scheme to keep the live table thin. Historical records are either never touched or only read for reporting, and in my opinion neither belongs in the live table.
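With the partitioned layout sketched above, archiving can be as simple as detaching a partition; without partitioning it is a periodic move into an archive table (again just a sketch: the old partition name and the energy_reading_archive table, assumed to have the same columns, are hypothetical):

```sql
-- Detach an old month: it stays queryable as its own table (or can be dumped
-- and dropped) while the live table stays thin.
ALTER TABLE energy_reading DETACH PARTITION energy_reading_2011_01;

-- Without partitioning, the same idea is a periodic move into an archive table.
INSERT INTO energy_reading_archive
    SELECT * FROM energy_reading
    WHERE measured_at < now() - interval '6 months';
DELETE FROM energy_reading
    WHERE measured_at < now() - interval '6 months';
```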

It's worth noting that we have tables of around 100M records and we don't perceive there to be a performance problem. A lot of these performance improvements can be made with little pain later, so you could always start with a common-sense solution and tune only when performance is proven to be poor.

answered Oct 06 '22 by Adam Houldsworth