Suppose I want to use Elasticsearch to implement a generic search on a website. The top search bar would be expected to find resources of all different kinds across the site: documents for sure (uploaded/indexed via Tika), but also things like clients, accounts, other people, etc.
For architectural reasons, most of the non-document stuff (clients, accounts) will exist in a relational database.
When implementing this search, option #1 would be to create document versions of everything and then use Elasticsearch to run all aspects of the search, without relying on the relational database at all for finding the different types of objects.
Option #2 would be to use Elasticsearch only for indexing the documents, which would mean that for a general "site search" feature you'd have to farm out multiple searches to multiple systems and then aggregate the results before returning them.
Option #1 seems far superior, but the downside is that it requires Elasticsearch to hold, in essence, a copy of a great many things in the production relational database, and that those copies be kept fresh as things change.
What's the best option for keeping these stores in sync, and am I correct in thinking that for general search, option #1 is superior? Is there an option #3?
Each index in Elasticsearch is divided into shards, and each shard can have multiple copies. These copies are known as a replication group and must be kept in sync when documents are added or removed; failing to do so means that reading from one copy could return very different results than reading from another.
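For illustration, here is a minimal sketch (assuming Elasticsearch 8.x and the official Python client) of creating an index whose primary shards each get one replica; the index name and the shard/replica counts are placeholders:

```python
# Minimal sketch, assuming Elasticsearch 8.x and the official Python client.
# The index name and the shard/replica counts are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Each of the 3 primary shards gets 1 replica; a primary and its replicas form
# a replication group that Elasticsearch keeps in sync on every write.
es.indices.create(
    index="site-search",
    settings={"number_of_shards": 3, "number_of_replicas": 1},
)
```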
Cross-cluster replication works by replaying the history of individual write operations that were performed on the shards of the leader index. Elasticsearch needs to retain the history of these operations on the leader shards so that they can be pulled by the follower shard tasks.
Elasticsearch cross-cluster replication (CCR) was released as a beta feature in Elasticsearch 6.5, and as a Generally Available (GA) feature in Elasticsearch 6.7. CCR allows multiple indices to be replicated to one or more Elasticsearch clusters.
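As a rough sketch of what that looks like from a client, the example below assumes a CCR-capable license, a remote cluster already registered under the alias "leader-cluster", and elasticsearch-py 8.x keyword arguments (older clients take a body dict instead); all names are illustrative:

```python
# Hedged sketch: creating a follower index with CCR. Assumes a CCR-capable
# license, a remote cluster registered under the alias "leader-cluster", and
# elasticsearch-py 8.x keyword arguments.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://follower-cluster:9200")

es.ccr.follow(
    index="site-search-copy",         # follower index created on this cluster
    remote_cluster="leader-cluster",  # alias of the leader cluster
    leader_index="site-search",       # index whose write history is replayed
    wait_for_active_shards="1",
)
```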
Because Elasticsearch is not a relational database, joins are not a native feature the way they are in a SQL database. Elasticsearch focuses more on search efficiency than on storage efficiency: the stored data is essentially flattened out, or denormalized, to drive fast search use cases.
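To make that concrete, here is a hedged example of what option #1's "document versions of everything" could look like: a relational "client" row and its related "account" rows flattened into a single Elasticsearch document. The index, field names, and values are assumptions for illustration, not a prescribed schema:

```python
# Illustrative sketch: a relational "client" row and its related "account" rows
# flattened into one denormalized Elasticsearch document (option #1).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

client_row = {"id": 42, "name": "Acme Corp", "manager": "Jane Doe"}
account_rows = [{"number": "ACC-1001", "status": "active"}]

es.index(
    index="site-search",
    id=f"client-{client_row['id']}",
    document={
        "type": "client",  # a type field lets one query span documents, clients, accounts, ...
        "name": client_row["name"],
        "manager": client_row["manager"],
        # related accounts are embedded up front instead of joined at query time
        "accounts": [a["number"] for a in account_rows],
    },
)
```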
You've pretty much listed the two main options there are when it comes to search across multiple data stores, i.e. search in one central data store (option #1) or search in all data stores and aggregate the results (option #2).
Both options would work, although option #2 has two main drawbacks:

- you need to build and maintain the fan-out logic that queries each data store and then merges heterogeneous result sets into a single, sensibly ranked list;
- the overall response time is bound by the slowest data store, since you can't return results until every system has answered.
If you want to provide a better and more reliable search experience, option #1 would clearly get my vote (it's the route I take most of the time, actually). As you've correctly stated, the main "drawback" of this option is that you need to keep Elasticsearch in sync with the changes in your other master data stores.
Since your other data stores will be relational databases, you have a few different options to keep them in sync with Elasticsearch, namely:

- the Logstash JDBC input plugin, which polls your tables with a SQL query on a schedule and pushes the results to Elasticsearch;
- the JDBC importer, a standalone tool that works along the same lines.
These first two options work great but have one main disadvantage: they don't capture DELETEs on your tables, only INSERTs and UPDATEs. This means that if you ever delete a user, account, etc., you will have no way of knowing that you need to delete the corresponding document in Elasticsearch. Unless, of course, you decide to delete the Elasticsearch index before each import session.
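To make that limitation concrete, here is a hedged sketch of the polling pattern those JDBC-based options rely on: select the rows changed since the last run (assuming an `updated_at` column) and bulk-index them. A deleted row simply never matches the query, so its Elasticsearch document is never removed. The table, columns, and credentials are placeholders:

```python
# Hedged sketch of the polling pattern behind the JDBC-based options. Assumes a
# MySQL source, a "clients" table with an "updated_at" column, and pymysql.
from datetime import datetime, timezone

import pymysql
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")
db = pymysql.connect(host="localhost", user="app", password="secret", database="app")


def sync_since(last_run: datetime) -> None:
    with db.cursor() as cur:
        cur.execute(
            "SELECT id, name, email FROM clients WHERE updated_at > %s",
            (last_run,),
        )
        actions = (
            {"_index": "clients", "_id": row[0], "_source": {"name": row[1], "email": row[2]}}
            for row in cur
        )
        # inserts and updates are simply re-indexed under the same _id;
        # deleted rows never appear here, so their documents linger in the index
        helpers.bulk(es, actions)


sync_since(datetime(2024, 1, 1, tzinfo=timezone.utc))
```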
To alleviate this, you can use another kind of tool, one that reads the MySQL binlog and can therefore capture every event, including deletes. There's one written in Go, one in Java, and one in Python.
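As a rough illustration of the binlog approach (a sketch of the idea, not any of those specific tools), here is what it could look like with the python-mysql-replication library; the connection settings, server_id, table, and field names are assumptions:

```python
# Hedged sketch of binlog-based change capture with python-mysql-replication.
# Unlike the polling approach, deletes show up here as explicit events.
from elasticsearch import Elasticsearch
from pymysqlreplication import BinLogStreamReader
from pymysqlreplication.row_event import (
    DeleteRowsEvent,
    UpdateRowsEvent,
    WriteRowsEvent,
)

es = Elasticsearch("http://localhost:9200")

stream = BinLogStreamReader(
    connection_settings={"host": "localhost", "port": 3306, "user": "repl", "passwd": "secret"},
    server_id=100,
    only_events=[WriteRowsEvent, UpdateRowsEvent, DeleteRowsEvent],
    only_tables=["clients"],
    blocking=True,
    resume_stream=True,
)

for event in stream:
    for row in event.rows:
        if isinstance(event, DeleteRowsEvent):
            # deletes are visible, so the stale document can be removed
            es.delete(index="clients", id=row["values"]["id"])
        else:
            values = row["after_values"] if isinstance(event, UpdateRowsEvent) else row["values"]
            es.index(index="clients", id=values["id"], document=values)
```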
UPDATE:
Here is another interesting blog article on the subject: How to keep Elasticsearch synchronized with a relational database using Logstash
Please take a look at Debezium. It's a change data capture (CDC) platform that allows you to stream your data changes.
I created a simple GitHub repository that shows how it works with PostgreSQL and Elasticsearch.
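For a rough idea of the Elasticsearch side of such a setup, here is a hedged sketch that consumes Debezium's default change-event envelope from a Kafka topic and applies creates, updates, and deletes to an index. The topic name and field names are assumptions about a typical PostgreSQL connector configuration:

```python
# Hedged sketch: consuming Debezium change events from Kafka and applying them
# to Elasticsearch. Assumes the default Debezium envelope ("op" plus "before"/"after").
import json

from elasticsearch import Elasticsearch
from kafka import KafkaConsumer

es = Elasticsearch("http://localhost:9200")
consumer = KafkaConsumer(
    "app.public.clients",                 # assumed topic: <server>.<schema>.<table>
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m) if m else None,
)

for message in consumer:
    event = message.value
    if event is None:                     # tombstone record emitted after a delete
        continue
    payload = event.get("payload", event)  # handles messages with or without a schema wrapper
    if payload["op"] == "d":              # delete
        es.delete(index="clients", id=payload["before"]["id"])
    else:                                 # create ("c"), update ("u"), snapshot read ("r")
        doc = payload["after"]
        es.index(index="clients", id=doc["id"], document=doc)
```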