Sync PostgreSQL data with Elasticsearch

Ultimately I want a scalable search solution for the data in PostgreSQL. My research points me towards using Logstash to ship write events from Postgres to Elasticsearch; however, I have not found a usable solution. The solutions I have found involve using the jdbc input to query all data from Postgres on an interval, which means delete events are not captured.

I think this is a common use case, so I hope you can share your experience or give me some pointers on how to proceed.

Khanetor asked Mar 05 '16


2 Answers

If you also need to be notified of DELETEs and remove the respective records in Elasticsearch, it is true that the Logstash jdbc input will not help. You'd have to use a solution that works off the database's transaction log (for PostgreSQL that is the write-ahead log, or WAL, rather than a MySQL-style binlog), as suggested here.

However, if you still want to use the Logstash jdbc input, you could simply soft-delete records in PostgreSQL, i.e. create a new BOOLEAN column to mark your records as deleted. The same flag would then exist in Elasticsearch, and you can exclude soft-deleted records from your searches with a simple term query on the deleted field, as sketched below.
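Here is a minimal sketch of the idea, assuming a hypothetical book table (the table, column, and field names are illustrative, not anything Logstash or Elasticsearch prescribes):

    -- add a soft-delete flag instead of removing rows
    ALTER TABLE book ADD COLUMN deleted BOOLEAN NOT NULL DEFAULT FALSE;

    -- "delete" a record by flagging it
    UPDATE book SET deleted = TRUE WHERE isbn = '9781234567897';

On the Elasticsearch side, the same flag lets you exclude soft-deleted documents from every search:

    {
        "query": {
            "bool": {
                "must_not": {
                    "term": { "deleted": true }
                }
            }
        }
    }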

Whenever you need to perform some cleanup, you can delete all records flagged deleted in both PostgreSQL and Elasticsearch.

Val answered Sep 29 '22


You can also take a look at PGSync.

It's similar to Debezium but a lot easier to get up and running.

PGSync is a change data capture (CDC) tool for moving data from Postgres to Elasticsearch. It allows you to keep Postgres as your source of truth and expose structured, denormalized documents in Elasticsearch.

You simply define a JSON schema describing the structure of the data in Elasticsearch.

Here is an example schema (you can also have nested objects):

{     "nodes": {         "table": "book",         "columns": [             "isbn",             "title",             "description"         ]     } } 

PGSync generates the queries for your documents on the fly, so there is no need to write queries as you would with Logstash. It also supports and tracks deletion operations.

It operates on both a polling and an event-driven model: the initial sync polls the database for changes made since the last time the daemon ran, and thereafter event notifications (based on triggers and handled by pg_notify) capture changes to the database as they occur.
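To get a sense of the event-driven half, here is a minimal sketch of the trigger/pg_notify mechanism this kind of tool builds on (illustrative only: PGSync installs and manages its own triggers, and the function and channel names here are made up):

    -- notify listeners whenever a book row is inserted or updated
    CREATE OR REPLACE FUNCTION book_change_notify() RETURNS trigger AS $$
    BEGIN
        PERFORM pg_notify('book_changes', row_to_json(NEW)::text);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    -- use EXECUTE PROCEDURE instead on PostgreSQL < 11
    CREATE TRIGGER book_notify
    AFTER INSERT OR UPDATE ON book
    FOR EACH ROW EXECUTE FUNCTION book_change_notify();

A daemon listening on the book_changes channel then receives a JSON payload for each change and can update Elasticsearch accordingly.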

It has very little development overhead.

  • Create a schema as described above
  • Point pgsync at your Postgres database and Elasticsearch cluster
  • Start the daemon (sketched below)
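Concretely, starting it up might look something like this (the environment variable names and CLI flags are assumptions based on the PGSync docs; check the repo for the exact invocation):

    # connection settings are read from environment variables
    export PG_HOST=localhost PG_USER=postgres PG_PASSWORD=secret
    export ELASTICSEARCH_HOST=localhost ELASTICSEARCH_PORT=9200

    bootstrap --config schema.json          # one-time setup of triggers and metadata
    pgsync --config schema.json --daemon    # run continuously, syncing changes as they arrive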

You can easily create a document that includes multiple relations as nested objects. PGSync tracks any changes for you.

Have a look at the GitHub repo for more details.

You can install the package from PyPI:
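    pip install pgsync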

taina answered Sep 29 '22