
Lucene index backup

Tags:

java

lucene

What is the best practice for backing up a Lucene index without taking it offline (a hot backup)?

asked May 05 '11 by yannisf



2 Answers

You don't have to stop your IndexWriter in order to take a backup of the index.

Just use the SnapshotDeletionPolicy, which lets you "protect" a given commit point (and all the files it references) from deletion. Then copy the files in that commit point to your backup, and finally release the commit.

It's fine if the backup takes a while to run: as long as you haven't released the commit point from SnapshotDeletionPolicy, the IndexWriter will not delete its files (even if, e.g., they have since been merged away).

This gives you a consistent backup which is a point-in-time image of the index without blocking ongoing indexing.
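The snapshot, copy, release flow might be sketched in Java roughly as follows. This is written against a recent Lucene API (constructor and method signatures differ between Lucene versions), and the index and backup paths are hypothetical:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexCommit;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy;
import org.apache.lucene.index.SnapshotDeletionPolicy;
import org.apache.lucene.store.FSDirectory;

public class HotBackup {
    public static void main(String[] args) throws Exception {
        Path indexPath = Paths.get("/var/data/index");   // hypothetical locations
        Path backupPath = Paths.get("/var/data/backup");

        // Wrap the default deletion policy so commit points can be pinned.
        SnapshotDeletionPolicy snapshotter =
                new SnapshotDeletionPolicy(new KeepOnlyLastCommitDeletionPolicy());
        IndexWriterConfig config = new IndexWriterConfig(new StandardAnalyzer())
                .setIndexDeletionPolicy(snapshotter);

        try (IndexWriter writer = new IndexWriter(FSDirectory.open(indexPath), config)) {
            // ... indexing proceeds normally in other threads ...

            IndexCommit commit = snapshotter.snapshot();  // pin the latest commit
            try {
                Files.createDirectories(backupPath);
                for (String fileName : commit.getFileNames()) {
                    // The copy may take as long as it needs: the pinned files
                    // cannot be deleted, even if they are merged away meanwhile.
                    Files.copy(indexPath.resolve(fileName),
                               backupPath.resolve(fileName),
                               StandardCopyOption.REPLACE_EXISTING);
                }
            } finally {
                snapshotter.release(commit);   // un-pin; files may be deleted again
                writer.deleteUnusedFiles();    // reclaim the space promptly
            }
        }
    }
}
```

Releasing the commit inside `finally` matters: if the copy fails partway, the snapshot is still released and the writer can clean up.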

I wrote about this in Lucene in Action (2nd edition), and a free excerpt from the book, "Hot Backups with Lucene", is available from http://www.manning.com/hatcher3; it describes this in more detail.

answered Sep 18 '22 by Michael McCandless


This answer depends upon (a) how big your index is and (b) what OS you are using. It is suitable for large indexes hosted on Unix operating systems, and is based upon the Solr 1.3 replication strategy.

Once Lucene has written a file, it never modifies it; it only ever deletes it. You can therefore use a hard-link strategy to make a backup. The approach would be:

  • stop indexing (and do a commit?), so that you can be sure you won't snapshot mid-write
  • create a hard-link copy of your index files (using cp -lr)
  • restart indexing

The cp -lr copies the directory structure and hard-links the files rather than copying their data, so even a 100GB index should copy in under a second.
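The same hard-link snapshot can be done from Java with NIO, in case you'd rather not shell out. A minimal sketch (class and method names are made up; note that hard links require the index and the backup to live on the same filesystem, and the demo in main runs against temporary directories rather than a real index):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class HardLinkSnapshot {
    // Hard-link every regular file in indexDir into backupDir. Only new
    // directory entries are created -- no file data is copied -- so this
    // is near-instant even for a very large index.
    public static void snapshot(Path indexDir, Path backupDir) throws IOException {
        Files.createDirectories(backupDir);
        try (DirectoryStream<Path> files = Files.newDirectoryStream(indexDir)) {
            for (Path file : files) {
                if (Files.isRegularFile(file)) {
                    Files.createLink(backupDir.resolve(file.getFileName()), file);
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path index = Files.createTempDirectory("index");
        Files.write(index.resolve("_0.cfs"), "segment data".getBytes());
        Path backup = index.resolveSibling(index.getFileName() + "-backup");
        snapshot(index, backup);
        System.out.println(Files.exists(backup.resolve("_0.cfs"))); // prints true
    }
}
```

Because the links share inodes with the originals, the snapshot's disk cost is only the space of any segments Lucene later deletes from the live index while the backup still references them.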

answered Sep 19 '22 by Upayavira