How to optimize MySQL for handling a small database, i.e. < 100 MB?

I'm designing a database that will probably stay smaller than 100 MB, has its own server, and will be read and modified through an intranet Java EE web application. I've found a lot of references on optimizing for large databases, and I know that's the far more critical case, but I've got lots of time, read/insert speed is a project priority, and I'm fairly sure I can somehow take advantage of such a small total DB size. Unless MySQL is already naturally optimized for databases this small, of course.

It does fit in memory, of course, but I need the data to actually persist on disk, or at least be saved to disk before long. I've thought of some crazy-sounding alternatives, like sequentially loading the whole database into memory the first time it's needed (at user connection time, perhaps?) and committing it to disk later.
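For instance (just a sketch of what I have in mind, with made-up values), I imagine something like this in my.cnf would let InnoDB keep the whole base in RAM after the first read, while still persisting every commit to disk as usual:

    # my.cnf -- hypothetical values for a < 100 MB database on a dedicated server
    [mysqld]
    # Size the buffer pool comfortably above the whole database, so every
    # page stays cached in RAM once it has been read.
    innodb_buffer_pool_size = 256M
    # Keep full durability: flush the redo log to disk on every commit.
    innodb_flush_log_at_trx_commit = 1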

But I thought it better to ask here and see whether someone has faced this kind of situation before and had a decent idea for profiting from a small database size.

I'm thinking more in terms of database access than structure, though if someone has structure-design tips for small databases and believes they make access optimization completely irrelevant, saying so is probably an appropriate response as well.

Thanks in advance.

Edit: the app is kinda critical, and after I'm done it will be maintained by people who are mostly used to MySQL, so a different DBMS isn't much of an option unless it's very similar to MySQL.

asked Mar 22 '12 by userBigNum

1 Answer

Databases were made to handle just this kind of thing, so most of the infrastructure is already there for you.

The onus is just on you to:

  • Create appropriate indexes based on the data, the volume and the DBMS (see the table sketch after this list).

  • Normalize data.

  • Apply good validations - NOT NULL, UNIQUE, etc. (the sketch below shows them in place).

  • Use EXPLAIN to see how queries can be sped up - different for every situation (example below).

  • Use caching to improve performance (see the query cache note below).

  • Ensure every table has a unique primary key defined (obvious perhaps, but still required).
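As a minimal sketch of the index, validation and primary-key points, here is an invented users table (all names hypothetical):

    -- Hypothetical table: unique primary key, NOT NULL / UNIQUE validations,
    -- and a secondary index chosen for the queries the app actually runs.
    CREATE TABLE users (
        id         INT UNSIGNED NOT NULL AUTO_INCREMENT,
        email      VARCHAR(255) NOT NULL,
        created_at DATETIME     NOT NULL,
        PRIMARY KEY (id),
        UNIQUE KEY uk_users_email (email)
    ) ENGINE=InnoDB;

    -- Secondary index supporting a frequent "recent users" style of query.
    CREATE INDEX ix_users_created_at ON users (created_at);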
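EXPLAIN then shows whether a query actually uses an index or scans the whole table, e.g. against the hypothetical table above:

    -- key = ix_users_created_at means the index is used; type = ALL with
    -- key = NULL would mean a full table scan.
    EXPLAIN SELECT id, email
    FROM users
    WHERE created_at >= '2012-01-01';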

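For the caching point, one option on the MySQL versions current at the time is the built-in query cache (removed in MySQL 8.0, where caching in the application layer takes its place); the values below are only a guess for a database this small:

    -- MySQL 5.x only; on some versions these must be set in my.cnf at
    -- startup rather than at runtime.
    SET GLOBAL query_cache_type = 1;         -- cache eligible SELECT results
    SET GLOBAL query_cache_size = 16777216;  -- 16 MB is generous for < 100 MB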
answered Oct 05 '22 by Michael Durrant