Limit an sqlite Table's Maximum Number of Rows

I am looking to implement a sort of 'activity log' table where actions a user performs are stored in an SQLite table and then presented to the user so they can see their latest activity. Naturally, I don't feel it is necessary to keep every single bit of history, so I am wondering if there is a way to configure the table to start pruning older rows once a set maximum is reached.

For example, if the limit is 100 and the table already holds 100 rows, inserting another action would automatically remove the oldest row, so that there are never more than 100 rows. Is there a way to configure the SQLite table to do this, or would I have to run a cron job?

Clarification Edit: At any given moment, I would like to display the last 100 (for example) actions/events (rows) of the table.

asked Jan 10 '10 by Jorge Israel Peña

People also ask

How many rows can a SQLite table handle?

The max_page_count PRAGMA can be used to raise or lower this limit at run-time. The theoretical maximum number of rows in a table is 2^64 (18446744073709551616, or about 1.8e+19). This limit is unreachable since the maximum database size of 281 terabytes will be reached first.
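For illustration, note that the PRAGMA takes a page count rather than a row count, so it caps overall database size, not rows per table:

PRAGMA max_page_count;           -- report the current limit
PRAGMA max_page_count = 10000;   -- lower the limit for this connection

The setting applies to the current connection only and is not stored in the database file.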

Does SQLite have a limit?

An SQLite database is limited in size to 140 terabytes (2^47 bytes, 128 tebibytes). And even if it could handle larger databases, SQLite stores the entire database in a single disk file, and many filesystems limit the maximum size of files to something less than this.

What is the limitation of SQLite in Android?

In addition to what @Amokrane's answer covers, you'll also need to be aware of limitations imposed not by SQLite but by Android: 50 MB is the current maximum app size, and there are issues with accessing large datasets on the device.

How big is too big for SQLite?

SQLite, which claims to be "used more than all other database engines combined", has been updated to version 3.33.0, with the maximum database size increased to 281 TB, around twice the previous capacity of 140 TB. That is an unlikely requirement for an engine popular on Android and iOS.


2 Answers

Another solution is to pre-create 100 rows and, instead of INSERT, use UPDATE to overwrite the oldest row.
Assuming that the table has a datetime field, the query

UPDATE ... WHERE datetime = (SELECT min(datetime) FROM logtable) 

can do the job.
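The snippet above uses a generic datetime column; spelled out against the time/msg schema created at the end of this answer (the message text is just a placeholder), the full statement might look like:

UPDATE logtable
SET time = DATETIME('now'),
    msg  = 'user renamed a document'
WHERE time = (SELECT MIN(time) FROM logtable);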

Edit: display the last 100 entries

SELECT * FROM logtable ORDER BY datetime DESC LIMIT 100 

Update: here is a way to create 130 "dummy" rows by using a join operation:

CREATE TABLE logtable (time TIMESTAMP, msg TEXT);

-- start with 2 rows
INSERT INTO logtable DEFAULT VALUES;
INSERT INTO logtable DEFAULT VALUES;

-- cross-joining 7 copies of the 2-row table inserts 2^7 = 128 more rows
INSERT INTO logtable
SELECT NULL, NULL
FROM logtable, logtable, logtable,
     logtable, logtable, logtable, logtable;

UPDATE logtable SET time = DATETIME('now');
answered Sep 20 '22 by Nick Dandoulakis


You could create a trigger that fires on INSERT (see the sketch below), but a better way to approach this might be simply to have a scheduled job that runs periodically (say, once a week) and deletes older records from the table.
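For illustration, here is a minimal sketch of the trigger approach, assuming the logtable schema from the other answer and a cap of 100 rows (the trigger name is invented):

CREATE TRIGGER prune_logtable AFTER INSERT ON logtable
BEGIN
    -- keep only the 100 newest rows, judged by the time column
    DELETE FROM logtable
    WHERE rowid NOT IN (
        SELECT rowid FROM logtable
        ORDER BY time DESC
        LIMIT 100
    );
END;

This keeps the table capped on every insert at the cost of running the DELETE each time, whereas the scheduled-job approach trades that per-insert cost for letting the table temporarily exceed the limit between runs.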

answered Sep 19 '22 by Mitch Wheat