
SQL Database synchronization

We are developing an application using PHP and the Symfony framework. Ideally the system would run on a remote server, so that it is available online from anywhere. However, it must also be available offline.

For this, the system will run on both a remote server and a local server. By default, clients would access the local server, and only work against the remote server when outside the facility.

So we need to implement database synchronization for changes made to both the local and the remote database, without any kind of client interaction. If possible, it should be a daemon running in the background that keeps the databases synchronized. The local database would be the master, so it takes priority in any conflict.

We haven't decided on the DB system to use (we are using Doctrine, so any supported DB will do), so we are really looking for the one with the most ready-made or available solution for our scenario.

The solution can be a dedicated application, commercial or not, as long as it requires no client interaction.

Hioushi Avatar asked Oct 10 '22 19:10


2 Answers

I use MySQL replication; it was a snap to set up and has performed flawlessly. In conjunction with phpMyAdmin, which gives you a UI to use, you can literally set this up out of the box in about 6 clicks of the mouse.

http://dev.mysql.com/doc/refman/5.0/en/replication.html

http://www.phpmyadmin.net/home_page/index.php
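As a rough sketch of what the replication setup in the linked manual involves (all hostnames, credentials, and log coordinates below are illustrative placeholders, and this covers only one-directional master-to-replica replication, not the bidirectional sync the question asks about):

```sql
-- On the master, my.cnf needs something like:
--   server-id = 1
--   log-bin   = mysql-bin
-- and a replication user:
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';

-- On the replica (server-id = 2 in its my.cnf), point it at the
-- master using coordinates taken from SHOW MASTER STATUS:
CHANGE MASTER TO
    MASTER_HOST = 'master.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 4;
START SLAVE;
```

Syntax shown is the MySQL 5.x form from the manual linked above; check `SHOW SLAVE STATUS` afterwards to confirm the replica is running.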

Chris Baker Avatar answered Oct 14 '22 03:10


It really depends on your application.

The most robust setup would probably be a master-master environment with a document-oriented database like CouchDB.

Some of these databases feature great replication support out of the box, but you can also do it on the application side, which is probably more flexible.

The approach described in this answer, however, only works well if you can boil your modifications down to document entities.

The procedure goes roughly as follows:

  • Every document receives a last_modification timestamp.

  • The client stores a timestamp when it was last synchronized with the server.

  • Deleted documents don't get deleted right away but receive a deleted_at timestamp.

  • When the connection is interrupted, the client can continue modifying local documents.

  • When the connection is established again, every modification on the server and on the client that occurred after the last synchronization timestamp is fetched.

  • Documents get updated in both directions. Conflicts on documents that were modified on both sides are resolved based on the last_modification timestamp.

  • Keep in mind that the clocks on server and client may be slightly offset, so you cannot rely on them 100%; clock synchronization is needed. Also, you would lose the modifications made by the side that modified first. This, however, should be a problem you can deal with in your application (like conflicts in a version control system).

  • Document merging and/or per-field modification timestamps could also be an option.
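The timestamp-based steps above can be sketched in a few lines. This is a minimal illustration (in Python rather than the asker's PHP), and the field names `last_modification` and `deleted_at` are just the ones used in this answer, not a fixed schema:

```python
from datetime import datetime

def ts(s):
    """Parse an ISO-8601 timestamp string into a datetime."""
    return datetime.fromisoformat(s)

def merge_documents(local, remote):
    """Last-modification-wins merge of two versions of one document.

    Deletions are soft: a document carries a deleted_at timestamp
    instead of being removed, so deletions replicate like any edit.
    Ties go to the local (master) copy, per the question's priority rule.
    """
    if ts(local["last_modification"]) >= ts(remote["last_modification"]):
        return local
    return remote

def changes_since(documents, last_sync):
    """Every document modified (or soft-deleted) after the last sync."""
    cutoff = ts(last_sync)
    return [d for d in documents if ts(d["last_modification"]) > cutoff]
```

A sync daemon would call `changes_since` on both sides with the stored last-sync timestamp, ship the results across, and apply `merge_documents` to any document that appears in both change sets.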

As already said, it really depends on your application, and even more on your data.

Your problem is one that also occurs in version control systems in software development (SVN revisions and the like!).

Imagine the client always does an svn commit and then an svn update. Maybe this is possible for your application if you store your data in XML files or something similar?

SVN is a bit old-school, has some disadvantages, and is not so reliable, so it would not be suitable for this case, but Git could be very capable of doing the job.

SQL (MySQL etc.) master-master replication systems are very powerful too, but they are also quite hard to maintain and not so easy to set up.

I hope this answer is useful; I am not really sure about your use case, but if you describe it in a bit more detail I may be able to help you further.

The Surrican Avatar answered Oct 14 '22 04:10
