Maintaining a large table of unique values in MySQL

This is probably a common situation, but I couldn't find a specific answer on SO or Google.

I have a large table (>10 million rows) of friend relationships in a MySQL database that is very important and needs to be maintained such that there are no duplicate rows. The table stores users' uids. The SQL for the table is:

CREATE TABLE possiblefriends (
  id INT NOT NULL AUTO_INCREMENT,
  user INT,
  possiblefriend INT,
  PRIMARY KEY (id)
);

The way the table works is that each user has around 1,000 "possible friends" that are discovered and need to be stored, but duplicate "possible friends" need to be avoided.

The problem is, due to the design of the program, over the course of a day I need to add 1 million rows or more to the table that may or may not be duplicate row entries. The simple answer would seem to be to check each row to see if it is a duplicate, and if not, insert it into the table. But this technique will probably get very slow as the table grows to 100 million rows, 1 billion rows, or more (which I expect it to soon).

What is the best (i.e. fastest) way to maintain this unique table?

I don't need to have a table with only unique values on hand at all times. I just need it once a day for batch jobs. In that case, should I create a separate table that just inserts all the possible rows (duplicates and all), and then at the end of the day create a second table that contains only the unique rows from the first?
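In case it helps, here is a rough sketch of the batch approach I'm imagining (the names possiblefriends_staging and possiblefriends_unique are just placeholders):

-- All-day inserts go here: no uniqueness constraint, so inserts stay fast
CREATE TABLE possiblefriends_staging (
  user INT NOT NULL,
  possiblefriend INT NOT NULL
);

-- Nightly batch job: build the deduplicated table from the staging table
CREATE TABLE possiblefriends_unique AS
  SELECT DISTINCT user, possiblefriend
  FROM possiblefriends_staging;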

If not, what is the best way for this table long-term?

(If indexes are the best long-term solution, please tell me which indexes to use)

asked Nov 11 '10 by eric



2 Answers

Add a unique index on (user, possiblefriend), then use one of:

  • INSERT ... ON DUPLICATE KEY UPDATE ...
  • INSERT IGNORE
  • REPLACE

to ensure that you don't get errors when you try to insert a duplicate row.
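For concreteness, here is a sketch of each (the index name uniq_user_friend and the sample uids are made up):

-- One-time setup: enforce uniqueness at the database level
ALTER TABLE possiblefriends
  ADD UNIQUE INDEX uniq_user_friend (user, possiblefriend);

-- Option 1: silently skip duplicate rows
INSERT IGNORE INTO possiblefriends (user, possiblefriend)
VALUES (42, 99);

-- Option 2: turn a duplicate insert into a no-op update
INSERT INTO possiblefriends (user, possiblefriend)
VALUES (42, 99)
ON DUPLICATE KEY UPDATE user = user;

-- Option 3: delete the conflicting row and insert the new one
REPLACE INTO possiblefriends (user, possiblefriend)
VALUES (42, 99);

Note that REPLACE actually deletes and re-inserts on a conflict, which makes it the most expensive of the three and churns through auto-increment ids.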

You might also want to consider whether you can drop your auto-incrementing primary key and use (user, possiblefriend) as the primary key instead. This will decrease the size of your table, and the primary key will double as the unique index, saving you from having to create an extra one.
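That slimmed-down definition would look something like this:

CREATE TABLE possiblefriends (
  user INT NOT NULL,
  possiblefriend INT NOT NULL,
  -- the composite primary key doubles as the uniqueness constraint
  PRIMARY KEY (user, possiblefriend)
);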

See also:

  • “INSERT IGNORE” vs “INSERT … ON DUPLICATE KEY UPDATE”
answered Sep 22 '22 by Mark Byers


A unique index will let you be sure that the (user, possiblefriend) pair is indeed unique. You can declare it when creating the table, like so:

CREATE TABLE possiblefriends (
  id INT NOT NULL AUTO_INCREMENT,
  user INT,
  possiblefriend INT,
  PRIMARY KEY (id),
  UNIQUE INDEX DefUserID_UNIQUE (user ASC, possiblefriend ASC)
);

This will also speed up lookups on those columns significantly.

Your other issue, the mass insert, is a little trickier. You can use the built-in ON DUPLICATE KEY UPDATE clause, as below:

INSERT INTO table (a,b,c) VALUES (1,2,3)
  ON DUPLICATE KEY UPDATE c=c+1;

Assuming column a is declared UNIQUE and already contains the value 1, this has the same effect as:

UPDATE table SET c=c+1 WHERE a=1;
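Applied to this table, a bulk load can then lean on the unique index to squash duplicates on the way in. A sketch, where the short VALUES list stands in for your million daily inserts:

INSERT INTO possiblefriends (user, possiblefriend)
VALUES (1, 2), (1, 3), (1, 2)  -- the repeated (1, 2) is absorbed as a no-op
ON DUPLICATE KEY UPDATE possiblefriend = possiblefriend;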
answered Sep 19 '22 by JonVD