I am trying to set up SolrCloud with an external ZooKeeper ensemble of 3 servers and Solr replicated across 2 servers.
Assuming that an external ZooKeeper ensemble should be independent of the other nodes' storage, I can't work out how the -solrhome parameter should be set. Is ZooKeeper supposed to read data from the worker nodes?
How do you upload the config and link it with the target collection?
We had a lot of problems using solr.home, so save yourself some stress and keep your directories the way Solr lays them out by default.
To get your configuration into ZooKeeper, get familiar with Solr's zkcli.sh script. You want to use this to manage your Solr configs. It creates/updates the files in ZK under the /configs node. Example:
./zkcli.sh -cmd upconfig -confdir /example/solr/collection1/conf -confname collection1 -z 127.0.0.1
After running the upconfig command above, the files in /example/solr/collection1/conf will be uploaded to ZK under /configs/collection1.
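If you want to sanity-check the upload, the same zkcli.sh script has a list command that dumps the znode tree; this is just a sketch, and the port below is ZooKeeper's default rather than anything from the setup above:
# dump the znode tree and check that /configs/collection1 is there (2181 assumed)
./zkcli.sh -cmd list -z 127.0.0.1:2181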
You also need to link your config to your collection (this creates a node under the /collections node in ZK):
# only need to link the config once
./zkcli.sh -cmd linkconfig -collection collection1 -confname collection1 -z 127.0.0.1
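As an alternative, if you create the collection through the Collections API, you can point it at the uploaded config directly via collection.configName instead of running linkconfig. The host, port, and shard/replica counts below are placeholder assumptions, not values from the setup above:
# hypothetical example: create collection1 using the config uploaded to /configs/collection1
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=collection1&numShards=1&replicationFactor=2&collection.configName=collection1"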
Then you can just start Solr like this:
java -DzkHost=127.0.0.1 -jar start.jar
The other servers in your cloud will now get their configuration from ZooKeeper. There is more info in a pretty good blog post here: SolrCloud Cluster (Single Collection) Deployment
Note: 127.0.0.1 stands in for a comma-delimited list of your ZK servers, and collection1 is your collection name.
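For example, with a three-node ensemble the start command might look like this (zk1/zk2/zk3 are placeholder hostnames for your own ZK servers):
# 2181 is ZooKeeper's default client port
java -DzkHost=zk1:2181,zk2:2181,zk3:2181 -jar start.jar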
You can specify the root (chroot) of the Solr configuration as part of your ZooKeeper connection string: -zkhost host1,host2,hostN/solr
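A minimal sketch of this, assuming placeholder hostnames and that the /solr chroot node does not exist yet (zkcli.sh's makepath command can create it):
# create the chroot node once, then point both zkcli.sh and Solr at it
./zkcli.sh -cmd makepath /solr -z zk1:2181,zk2:2181,zk3:2181
java -DzkHost=zk1:2181,zk2:2181,zk3:2181/solr -jar start.jar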