 

Liquibase or Flyway database migration alternative for Elasticsearch

I am pretty new to ES. I have been searching for a DB migration tool for a long time and could not find one. I am wondering if anyone could point me in the right direction.

I would be using Elasticsearch as the primary datastore in my project. I would like to version all mapping and configuration changes, data imports, and data upgrade scripts which I run as I develop new modules in my project.

In the past I used database versioning tools like Flyway or Liquibase.

Are there any frameworks, scripts, or methods I could use with ES to achieve something similar?

Does anyone have experience doing this by hand with scripts, at least running upgrade scripts?

Thanks in advance!

asked Jun 01 '14 by Istvano


1 Answer

From this point of view/need, ES has some significant limitations:

  • despite having dynamic mapping, ES is not schemaless but schema-intensive. Mappings cannot be changed when the change conflicts with existing documents (practically, if any document has a non-null value in a field affected by the new mapping, this will result in an exception)
  • documents in ES are immutable: once you've indexed one, you can only retrieve or delete it. The syntactic sugar around this is the partial update, which performs a thread-safe delete + index (with the same id) on the ES side

What does that mean in the context of your question? Basically, you can't have classic migration tools for ES. Here's what can make your work with ES easier:

  • use strict mapping ("dynamic": "strict" and/or index.mapper.dynamic: false; take a look at the mapping docs, and see the sketch after this list). This will protect your indexes/types from

  • being accidentally dynamically mapped with the wrong type

  • silently swallowing data-mapping mismatches: you get an explicit error instead

  • you can fetch the actual ES mapping and compare it with your data models. If your programming language has a sufficiently high-level ES client library, this should be pretty easy

  • you can leverage index aliases for migrations
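
For illustration, here is a minimal sketch of the strict-mapping and alias setup, in Python with the official elasticsearch client (the answer does not prescribe a language; the index name, field names, and exact client calls are assumptions and vary by ES and client version, so treat this as an outline rather than a drop-in script):

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")   # assumed local cluster

    # "dynamic": "strict" makes ES reject documents with unmapped fields
    # instead of guessing a type for them.
    mapping = {
        "dynamic": "strict",
        "properties": {
            "title":     {"type": "text"},
            "published": {"type": "date"},
        },
    }

    # Hypothetical versioned physical index plus the stable alias "news"
    # that application code talks to.
    es.indices.create(
        index="news_index_1_20140601",
        body={"mappings": mapping, "aliases": {"news": {}}},
    )

    # Fetch the live mapping to diff it against your in-code models.
    live_mapping = es.indices.get_mapping(index="news")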


So, a little bit of experience. For me, the currently reasonable flow is this:

  • All data structures are described as models in code. These models also provide an ORM-like abstraction.
  • Index/mapping creation is a simple method call on the model.
  • Every index has an alias (e.g. news) which points to the actual index (e.g. news_index_{revision}_{date_created}); see the sketch below.
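
A small sketch of that alias convention (again Python with the elasticsearch client; the helper names and the news_index_{revision}_{date} pattern are just the example above, not an established API):

    from datetime import date
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")   # assumed local cluster

    def physical_index(alias):
        """Return the physical index the alias currently points to."""
        # get_alias returns a dict keyed by physical index name
        return next(iter(es.indices.get_alias(name=alias)))

    def next_index_name(alias, revision):
        """Build the next versioned name, e.g. news_index_2_20140607."""
        return "{}_index_{}_{:%Y%m%d}".format(alias, revision, date.today())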

Every time the code is deployed, you:

  1. Try to put the model's (type) mapping. If this succeeds without error, it means one of the following:
  • you put the same mapping
  • you put a mapping that is a pure superset of the old one (only new fields were added, the old ones stay untouched)
  • no documents have values in the fields affected by the new mapping

All of this means that you're good to go with the mapping/data you have; just keep working with your data as always.

  2. If ES raises an exception about the new mapping, you
  • create a new index/type with the new mapping (named like name_{revision}_{date})
  • redirect your alias to the new index
  • fire up migration code that makes bulk requests for fast reindexing (a sketch of this deploy flow follows below). During this reindexing you can safely index new documents normally through the alias. The drawback is that historical data is only partially available until reindexing finishes.
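
Here is a rough sketch of that deploy step (Python, elasticsearch client; deploy_mapping and the error handling are hypothetical and simplified: real code would also copy index settings, handle partial bulk failures, and use whichever exception class its client version raises for mapping conflicts):

    from elasticsearch import Elasticsearch, helpers
    from elasticsearch.exceptions import RequestError

    es = Elasticsearch("http://localhost:9200")   # assumed local cluster

    def deploy_mapping(alias, new_index, mapping):
        old_index = next(iter(es.indices.get_alias(name=alias)))
        try:
            # Succeeds if the mapping is identical, a pure superset,
            # or does not conflict with any existing documents.
            es.indices.put_mapping(index=old_index, body=mapping)
            return
        except RequestError:
            # Conflicting change: create a fresh versioned index instead.
            es.indices.create(index=new_index, body={"mappings": mapping})

        # Atomically flip the alias so new writes go to the new index.
        es.indices.update_aliases(body={"actions": [
            {"remove": {"index": old_index, "alias": alias}},
            {"add":    {"index": new_index, "alias": alias}},
        ]})

        # Bulk-copy the old documents; historical data becomes fully
        # visible again once this finishes.
        actions = (
            {"_index": new_index, "_id": doc["_id"], "_source": doc["_source"]}
            for doc in helpers.scan(es, index=old_index)
        )
        helpers.bulk(es, actions)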

This is a production-tested solution. Caveats of this approach:

  • you cannot do this if your read requests require consistent historical data
  • you have to reindex the whole index. If you have one type per index (a viable setup), that's fine, but sometimes you need multi-type indexes
  • the data makes a network round trip, which can sometimes be painful

To sum this up:

  • try to have good abstractions in your models; this always helps
  • try to keep historical data/fields stale. Just build your code with this idea in mind; it's easier than it sounds at first
  • I strongly recommend avoiding migration tools that rely on experimental ES features. Those can change at any time, as the river-* tools did.
answered Oct 07 '22 by Slam