I have a Solr slave running in Tomcat. I added a core, which required changing solr.xml. To pick up the change, I decided to simply restart Tomcat from the Windows Services management console.
After restarting Tomcat I keep getting the following exception:
org.apache.lucene.store.LockObtainFailedException: Index locked for write for core
I decided to temporarily change the solrconfig.xml for each core to add:
<unlockOnStartup>true</unlockOnStartup>
But no luck. Locking is set to native, so I can't go and remove lock files.
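For reference, a minimal sketch of the relevant solrconfig.xml section, assuming a Solr 4.x-style <indexConfig> block (older releases place these settings under <mainIndex>); the lock type shown is the stock default:

<indexConfig>
  <!-- "native" uses OS-level locking on the index's write.lock file -->
  <lockType>${solr.lock.type:native}</lockType>
  <!-- ask Solr to clear a leftover lock when the core starts;
       behaviour depends on the Solr version and lock type, and the
       option has been removed in recent releases -->
  <unlockOnStartup>true</unlockOnStartup>
</indexConfig>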
There is no process in Solr for programmatically reindexing data. When we say "reindex", we mean, literally, "index it again". However you got the data into the index the first time, you will run that process again.
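For example (a minimal sketch; the core name and field names are assumptions, not taken from this thread), reindexing just means sending your source documents through the update handler again, e.g. posting an XML update message to /solr/<core>/update followed by a <commit/>:

<add>
  <doc>
    <!-- the uniqueKey field defined in the schema (assumed here to be "id") -->
    <field name="id">doc-1</field>
    <field name="title">Example title</field>
  </doc>
</add>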
Apache Solr stores the data it indexes on the local filesystem by default. HDFS (Hadoop Distributed File System) offers large-scale, distributed storage with redundancy and failover, and Apache Solr supports storing its index data in HDFS.
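As a hedged sketch of how that is typically enabled (the HDFS URI, Hadoop config path, and cache setting below are placeholder assumptions, not values from this thread), solrconfig.xml can be pointed at an HdfsDirectoryFactory:

<directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
  <!-- where on HDFS the index and transaction log data should live (placeholder URI) -->
  <str name="solr.hdfs.home">hdfs://namenode:8020/solr</str>
  <!-- Hadoop client configuration directory (placeholder path) -->
  <str name="solr.hdfs.confdir">/etc/hadoop/conf</str>
  <!-- the block cache keeps hot index blocks in memory to offset HDFS read latency -->
  <bool name="solr.hdfs.blockcache.enabled">true</bool>
</directoryFactory>
<!-- HDFS has no native file locks, so the lock type changes as well -->
<lockType>hdfs</lockType>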
Solr works by gathering, storing, and indexing documents from different sources and making them searchable in near real time. The workflow is essentially indexing, querying, and ranking the results, and it remains near real time even with very large volumes of data.
Delete only the write.lock file in data/index. The location of the data directory is specified in conf/solrconfig.xml.
Clear the index directory and restart Solr. This releases the lock, but it also removes the existing index, so you will need to reindex afterwards.