How can I find out the disk usage of a single table inside a SQLite database without copying it into a new empty database?
The maximum size of a database file is 4294967294 pages. At the maximum page size of 65536 bytes, this translates into a maximum database size of approximately 2.8e+14 bytes (281 terabytes, or 256 tebibytes, or 281,474 gigabytes, or 262,144 gibibytes).
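As a quick sanity check of that arithmetic, you can let the sqlite3 shell do the multiplication itself (":memory:" is used here only so that no file gets created):

# maximum page count times maximum page size
sqlite3 :memory: "SELECT 4294967294 * 65536;"
# prints 281474976579584, i.e. roughly 2.8e+14 bytes, or about 281 terabytes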
sqlite3_analyzer db.sqlite > dbinfo.sql will give you detailed info on each table's size on disk.
SQLite (or in-memory SQLite) is faster for the following task: selecting two columns from the data (under 0.1 millisecond for any data size with SQLite; pandas scales with the data, up to just under 0.5 seconds for 10 million records).
SQLite supports databases up to 281 terabytes in size, assuming you can find a disk drive and filesystem that will support 281-terabyte files. Even so, when the size of the content looks like it might creep into the terabyte range, it would be good to consider a centralized client/server database.
You can use sqlite3_analyzer from https://www.sqlite.org/download.html.
It's a really cool tool. It shows you the number of pages used by each table with and without indexes (a page is 1024 bytes by default in older SQLite versions; since SQLite 3.12.0 the default page size is 4096 bytes).
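A minimal invocation looks like this (northwind.db is just a placeholder filename; the tool only reads the database file and writes its report to standard output):

# save the report for later reading
sqlite3_analyzer northwind.db > northwind-report.txt

# or page through it directly
sqlite3_analyzer northwind.db | less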
This is a sample sqlite3_analyzer output for the Northwind database:
*** Page counts for all tables with their indices ********************

EMPLOYEES.............................  200     34.4%
ORDERS................................  152     26.2%
CATEGORIES............................   90     15.5%
ORDER DETAILS.........................   81     13.9%
CUSTOMERS.............................   17      2.9%
SQLITE_MASTER.........................   11      1.9%
PRODUCTS..............................    7      1.2%
SUPPLIERS.............................    7      1.2%
TERRITORIES...........................    6      1.0%
CUSTOMERCUSTOMERDEMO..................    2      0.34%
CUSTOMERDEMOGRAPHICS..................    2      0.34%
EMPLOYEETERRITORIES...................    2      0.34%
REGION................................    2      0.34%
SHIPPERS..............................    2      0.34%
It also generates SQL statements which can be used to create a database with usage statistics, which you can then analyze.
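As a sketch of that workflow, you can pipe the report straight into a fresh database and then query it. This assumes, as with current versions of the tool, that the readable part of the report is wrapped in an SQL comment and that the generated statements create a table named space_used with tblname and payload columns; check your own report if the schema differs:

# load the generated statistics into a new database
sqlite3_analyzer db.sqlite3 | sqlite3 stats.db

# total payload bytes per table (including its indexes), largest first
sqlite3 stats.db "SELECT tblname, sum(payload) FROM space_used GROUP BY tblname ORDER BY 2 DESC;"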
If you are on Linux or macOS, or otherwise have the Unix utilities awk (and, optionally, sort) available, you can do the following to get counts and estimated sizes via dump analysis:
# substitute '.dump mytable' for '.dump' if you want to limit the analysis to a specific table
sqlite3 db.sqlite3 '.dump' | awk -f sqlite3_size.awk
which returns:
table              count    est. size
my_biggest_table   1090     60733958
my_table2          26919    7796902
my_table3          10390    2732068
and uses the following awk script (saved as sqlite3_size.awk):
/INSERT INTO/ {                                  # parse INSERT commands
    split($0, values, "VALUES");                 # extract everything after VALUES
    split(values[1], name, "INSERT INTO");       # get the table name
    tablename = name[2];
    gsub(/[\047\042]/, "", tablename);           # remove single and double quotes from the name
    gsub(/[\047,]/, "", values[2]);              # remove single quotes and commas
    sizes[tablename] += length(values[2]) - 3;   # subtract 3 for parens and semicolon
    counts[tablename] += 1;
}

END {
    print "table\tcount\test. size"
    for (k in sizes) {
        # print and sort in descending order:
        print k "\t" counts[k] "\t" sizes[k] | "sort -k3 -n -r";
        # or, if you don't have the sort command:
        # print k "\t" counts[k] "\t" sizes[k];
    }
}
The estimated size is based on the string length of each INSERT statement, so it will not equal the actual size on disk, but for me the count plus the estimated size is more useful than alternatives such as page counts.
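To see how the estimate is computed, you can feed the script a single hand-written INSERT statement (mytable and its values are made up for illustration; the exact whitespace around the table name in the output may differ):

# "(1,'hello');" becomes "(1hello);" after stripping quotes and commas:
# 9 characters minus 3 for the parentheses and semicolon = 6
echo "INSERT INTO mytable VALUES(1,'hello');" | awk -f sqlite3_size.awk
# table     count   est. size
# mytable   1       6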