 

MySQL container cannot mount data to an NFS folder

In swarm mode, containers may be deployed on any joined node. I created a shared NFS folder on host1 to use as the MySQL data folder:

mkdir -p /nfs/data-volume

On another host, host2, I mounted this shared folder and added the necessary permissions. I tested the NFS share by reading and writing some text files in it, and it worked very well (there was no permission error).
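For reference, the export and mount looked roughly like this (the export options and hostnames below are only illustrative, not an exact copy of my configuration):

# On host1 (NFS server): export the shared folder
echo '/nfs/data-volume  host2(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# On host2 (NFS client): mount the share at the same path
mkdir -p /nfs/data-volume
mount -t nfs host1:/nfs/data-volume /nfs/data-volume

After this NFS configuration, I defined my container volume like this: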

mysqldb-read:
    image: demo/db-slave
    ports:
     - "3308:3306"
    volumes:
     - /nfs/data-volume:/var/lib/mysql

The result: if the MySQL container runs on host1, it works very well. If it runs on host2, it doesn't start up. The container doesn't exit; the process just stays there and appears to be waiting for something. I checked the logs with:

docker logs -f mymysql

It shows logs like this:

2017-06-07T02:40:13.627195Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2017-06-07T02:40:13.632313Z 0 [Note] mysqld (mysqld 5.7.18-log) starting as process 52 ...
2017-06-07T02:40:13.648010Z 0 [Note] InnoDB: PUNCH HOLE support available
2017-06-07T02:40:13.648054Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2017-06-07T02:40:13.648059Z 0 [Note] InnoDB: Uses event mutexes
2017-06-07T02:40:13.648062Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2017-06-07T02:40:13.648066Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.3
2017-06-07T02:40:13.648069Z 0 [Note] InnoDB: Using Linux native AIO
2017-06-07T02:40:13.648326Z 0 [Note] InnoDB: Number of pools: 1
2017-06-07T02:40:13.648770Z 0 [Note] InnoDB: Using CPU crc32 instructions
2017-06-07T02:40:13.651011Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2017-06-07T02:40:13.760444Z 0 [Note] InnoDB: Completed initialization of buffer pool
2017-06-07T02:40:13.829981Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().

Nothing more appears in the log; it stops at this line. I tried logging into the container and running:

mysqld -uroot -proot

The resulting log output is exactly the same.

I suspect this is caused by NFS, but almost everything I found while googling suggests using NFS to share data. Has anyone successfully made this work? Or do you have any suggestions?

Thanks

Asked Jun 07 '17 by Seiya

1 Answer

Q1: Has anyone successfully made this work?

In my experience... no. I tried NFS, MySQL, and Docker Swarm (v1.12) a few months ago, and I also failed with that setup.

The MySQL documentation is indeed pretty clear about this:

Using NFS with MySQL

Caution is advised when considering using NFS with MySQL. Potential issues, which vary by operating system and NFS version, include:

  • MySQL data and log files placed on NFS volumes becoming locked and unavailable for use...
  • Data inconsistencies...
  • Maximum file size limitations

I also experienced file locks, slow queries and slow writes...

Q2: Or do you have any suggestions?

One of the tricky parts of Docker Swarm is indeed the data, especially with databases: you don't know on which host the MySQL container will run. I've used two alternatives to overcome this:

1. Swarm mode service creation --constraint option

This option instructs Docker to always deploy your MySQL container on the same host, for instance:

mysqldb-read:
  image: demo/db-slave
  ports:
    - "3308:3306"
  volumes:
    - /nfs/data-volume:/var/lib/mysql
  deploy:
    placement:
      constraints: [node.hostname == host1]

If the Docker swarm service mysqldb-read restarts, it will always run on the host1 node.
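If you create the service directly with docker service create instead of a compose file, roughly the same placement can be expressed with the --constraint flag (the service name, port, and image below are just taken from the example above):

# Pin the service to host1 so its data directory is always the local one
docker service create \
  --name mysqldb-read \
  --publish 3308:3306 \
  --mount type=bind,source=/nfs/data-volume,target=/var/lib/mysql \
  --constraint 'node.hostname == host1' \
  demo/db-slave

The obvious trade-off is that the service can no longer fail over to another node, but it keeps the MySQL data files off NFS.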

2. Docker volumes

Another option is to attach a shared Docker volume to the MySQL service at startup. The Docker documentation states:

If you want your data to persist, use a named volume and a volume driver that is multi-host aware, so that the data is accessible from any node...

There are several Docker volume plugins that allow you to do that. I personally tried Rancher's Convoy in an AWS environment, but I ran into other issues with volume deletion, syncing, etc.
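Whatever plugin you pick, the general pattern in a version 3 compose file looks roughly like this (the driver name below is only a placeholder for whichever multi-host-aware plugin you actually install):

version: "3"
services:
  mysqldb-read:
    image: demo/db-slave
    ports:
      - "3308:3306"
    volumes:
      - mysql-data:/var/lib/mysql

volumes:
  mysql-data:
    driver: your-multihost-driver   # placeholder: the volume plugin you installed

The service then refers to the named volume mysql-data instead of a host path, and the volume driver is responsible for making the data available on whichever node the container lands on.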

You can also have a look at this popular SO thread about swarm and docker volumes.

PS: About NFS

I'm not saying you should give up NFS for other Docker services; I still use it for read-only configuration files (Apache Tomcat and Nginx configuration, etc.), but for MySQL it's a no-go.
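For those read-only cases, a plain bind mount from the NFS path with the ro flag has worked fine for me; a minimal sketch (the image and paths are only examples):

nginx:
  image: nginx
  ports:
    - "80:80"
  volumes:
    - /nfs/config/nginx.conf:/etc/nginx/nginx.conf:ro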

Hope my experience will help!

Answered by François Maturel