What is the best practice to backup a lucene index without taking the index offline (hot backup)?
You don't have to stop your IndexWriter in order to take a backup of the index.
Just use the SnapshotDeletionPolicy, which lets you "protect" a given commit point (and all files it includes) from being deleted. Then, copy the files in that commit point to your backup, and finally release the commit.
It's fine if the backup takes a while to run: as long as you don't release the commit point held by the SnapshotDeletionPolicy, the IndexWriter will not delete its files (even if, e.g., they have since been merged together).
This gives you a consistent backup which is a point-in-time image of the index without blocking ongoing indexing.
I wrote about this in Lucene in Action (2nd edition), and there's a paper excerpted from the book, available free from http://www.manning.com/hatcher3, "Hot Backups with Lucene", that describes this in more detail.
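A minimal sketch of the approach above. The SnapshotDeletionPolicy/IndexCommit calls are Lucene API and are shown only in comments; the copy step itself (a hypothetical copyCommitFiles helper with made-up segment file names) is plain java.nio, so it runs without the Lucene jar:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.List;

public class HotBackup {
    // Copy the files belonging to one protected commit point into backupDir.
    // In real code, fileNames would come from Lucene:
    //   IndexCommit commit = snapshotDeletionPolicy.snapshot();
    //   Collection<String> fileNames = commit.getFileNames();
    // and you would call snapshotDeletionPolicy.release(commit) once the
    // backup finishes, so the IndexWriter may delete those files again.
    static void copyCommitFiles(Path indexDir, List<String> fileNames, Path backupDir)
            throws IOException {
        Files.createDirectories(backupDir);
        for (String name : fileNames) {
            Files.copy(indexDir.resolve(name), backupDir.resolve(name),
                       StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) throws IOException {
        // Simulate an index directory holding a few segment files
        // (file names here are illustrative, not real Lucene output).
        Path indexDir = Files.createTempDirectory("index");
        Path backupDir = Files.createTempDirectory("backup").resolve("snapshot-1");
        List<String> fileNames = List.of("segments_2", "_0.cfs", "_0.cfe");
        for (String name : fileNames) {
            Files.writeString(indexDir.resolve(name), "data:" + name);
        }
        copyCommitFiles(indexDir, fileNames, backupDir);
        System.out.println(Files.exists(backupDir.resolve("_0.cfs"))); // prints true
    }
}
```

Because the snapshot pins a single commit point, the copied files form a consistent point-in-time image even while indexing continues.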
This answer depends upon (a) how big your index is and (b) what OS you are using. It is suitable for large indexes hosted on Unix operating systems, and is based upon the Solr 1.3 replication strategy.
Once a file has been created, Lucene will never modify it; it will only delete it. Therefore, you can use a hard-link strategy to make a backup: create a hard-linked snapshot of the index directory with cp -lr, then archive or copy that snapshot at your leisure.
The cp -lr will only copy the directory structure and not the file contents, so even a 100 GB index should copy in less than a second.
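The same hard-link idea can be shown with java.nio's Files.createLink (which, like cp -l, requires a filesystem that supports hard links); the segment file name below is illustrative:

```java
import java.io.IOException;
import java.nio.file.*;

public class HardLinkBackup {
    public static void main(String[] args) throws IOException {
        Path index = Files.createTempDirectory("index");
        Path backup = Files.createTempDirectory("backup");

        // A Lucene segment file: written once, never modified afterwards.
        Path segment = index.resolve("_0.cfs");
        Files.writeString(segment, "segment data");

        // Hard-link it into the backup directory -- the per-file equivalent
        // of `cp -lr index backup`: no data is copied, only a new
        // directory entry pointing at the same inode.
        Files.createLink(backup.resolve("_0.cfs"), segment);

        // Even if Lucene later deletes the original name (e.g. after a
        // merge), the backup's link still reaches the intact data.
        Files.delete(segment);
        System.out.println(Files.readString(backup.resolve("_0.cfs"))); // prints "segment data"
    }
}
```

This is why the snapshot stays consistent: Lucene's deletes remove directory entries, not the underlying data that the backup's links still reference.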