I'm trying to use the official MongoDB Docker image on Mac OS X 10.10.2 with this command in the Mac terminal:
docker run -v /Users/john/data/db:/data/db -p 27017:27017 mongo --smallfiles
But it exits with this error log:
2015-04-11T10:53:19.709+0000 I JOURNAL [initandlisten] journal dir=/data/db/journal
2015-04-11T10:53:19.711+0000 I JOURNAL [initandlisten] recover begin
2015-04-11T10:53:19.711+0000 I STORAGE [initandlisten] In File::open(), ::open for '/data/db/journal/lsn' failed with errno:1 Operation not permitted
2015-04-11T10:53:19.711+0000 I - [initandlisten] Assertion failure f.is_open() src/mongo/db/storage/mmap_v1/dur_journal.cpp 597
2015-04-11T10:53:19.713+0000 I CONTROL [initandlisten]
0xf69069 0xf09861 0xeeed9e 0xd2b8f7 0xd36852 0xd37561 0xd37a90 0xd254b6 0xa9b9f9 0x824220 0x7f13c4 0x7f6dbae1bead 0x822459
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"400000","o":"B69069"},{"b":"400000","o":"B09861"},{"b":"400000","o":"AEED9E"},{"b":"400000","o":"92B8F7"},{"b":"400000","o":"936852"},{"b":"400000","o":"937561"},{"b":"400000","o":"937A90"},{"b":"400000","o":"9254B6"},{"b":"400000","o":"69B9F9"},{"b":"400000","o":"424220"},{"b":"400000","o":"3F13C4"},{"b":"7F6DBADFD000","o":"1EEAD"},{"b":"400000","o":"422459"}],"processInfo":{ "mongodbVersion" : "3.0.1", "gitVersion" : "534b5a3f9d10f00cd27737fbcd951032248b5952", "uname" : { "sysname" : "Linux", "release" : "3.18.5-tinycore64", "version" : "#1 SMP Sun Feb 1 06:02:30 UTC 2015", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000", "buildId" : "4AB5B4C24C9EE5C1743971702746CDB87DC92DCE" }, { "b" : "7FFFFE772000", "elfType" : 3, "buildId" : "C58213BB786BBA102C73C58D3FF0123C2006C7F4" }, { "b" : "7F6DBC38B000", "path" : "/lib/x86_64-linux-gnu/libpthread.so.0", "elfType" : 3, "buildId" : "FEF281218797AD6AE726DD5FCEDECADD9E9F51DC" }, { "b" : "7F6DBC12B000", "path" : "/usr/lib/x86_64-linux-gnu/libssl.so.1.0.0", "elfType" : 3, "buildId" : "AEE5F3A05E87AFA440FCF6352C568A0F08584119" }, { "b" : "7F6DBBD33000", "path" : "/usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0", "elfType" : 3, "buildId" : "37084B8E55653C947BA6295814D850D6AA0C561D" }, { "b" : "7F6DBBB2B000", "path" : "/lib/x86_64-linux-gnu/librt.so.1", "elfType" : 3, "buildId" : "F58D5DE3E7A2989E915422BA4203FE53DBA449A0" }, { "b" : "7F6DBB927000", "path" : "/lib/x86_64-linux-gnu/libdl.so.2", "elfType" : 3, "buildId" : "5D1CA3A3D93ED5B6C6462FFA03E787FDBE4013A3" }, { "b" : "7F6DBB620000", "path" : "/usr/lib/x86_64-linux-gnu/libstdc++.so.6", "elfType" : 3, "buildId" : "8711429397A5AF8B6269B867D830EDF6E0225B8D" }, { "b" : "7F6DBB39E000", "path" : "/lib/x86_64-linux-gnu/libm.so.6", "elfType" : 3, "buildId" : "7F58D6664571941C86B2D969701A572AD4D7BF1D" }, { "b" : "7F6DBB188000", "path" : "/lib/x86_64-linux-gnu/libgcc_s.so.1", "elfType" : 3, "buildId" : "F980B1188708F8D8B5C35D185444AF4CB939AA1E" }, { "b" : "7F6DBADFD000", "path" : "/lib/x86_64-linux-gnu/libc.so.6", "elfType" : 3, "buildId" : "A745EBA2C16BA80AE1EF1A7A7B70740C2CF1B363" }, { "b" : "7F6DBC5A7000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "9B23F2A44CC8CA6175CBD8D64584B1C7EA5FD18C" }, { "b" : "7F6DBABE6000", "path" : "/lib/x86_64-linux-gnu/libz.so.1", "elfType" : 3, "buildId" : "1EFEB71FD4999C2307570D673A724EA4E1D85267" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x29) [0xf69069]
mongod(_ZN5mongo10logContextEPKc+0xE1) [0xf09861]
mongod(_ZN5mongo12verifyFailedEPKcS1_j+0xCE) [0xeeed9e]
mongod(_ZN5mongo3dur14journalReadLSNEv+0x1E7) [0xd2b8f7]
mongod(_ZN5mongo3dur11RecoveryJob2goERSt6vectorIN5boost11filesystem34pathESaIS5_EE+0xB2) [0xd36852]
mongod(_ZN5mongo3dur8_recoverEv+0x851) [0xd37561]
mongod(_ZN5mongo3dur27replayJournalFilesAtStartupEv+0x60) [0xd37a90]
mongod(_ZN5mongo3dur7startupEv+0x26) [0xd254b6]
mongod(_ZN5mongo23GlobalEnvironmentMongoD22setGlobalStorageEngineERKSs+0x319) [0xa9b9f9]
mongod(_ZN5mongo13initAndListenEi+0x2F0) [0x824220]
mongod(main+0x134) [0x7f13c4]
libc.so.6(__libc_start_main+0xFD) [0x7f6dbae1bead]
mongod(+0x422459) [0x822459]
----- END BACKTRACE -----
2015-04-11T10:53:19.716+0000 F JOURNAL [initandlisten] dbexception during recovery: 13611 can't read lsn file in journal directory : assertion src/mongo/db/storage/mmap_v1/dur_journal.cpp:597
2015-04-11T10:53:19.716+0000 I STORAGE [initandlisten] exception in initAndListen: 13611 can't read lsn file in journal directory : assertion src/mongo/db/storage/mmap_v1/dur_journal.cpp:597, terminating
2015-04-11T10:53:19.716+0000 I CONTROL [initandlisten] now exiting
2015-04-11T10:53:19.716+0000 I NETWORK [initandlisten] shutdown: going to close listening sockets...
2015-04-11T10:53:19.716+0000 I NETWORK [initandlisten] shutdown: going to flush diaglog...
2015-04-11T10:53:19.716+0000 I NETWORK [initandlisten] shutdown: going to close sockets...
2015-04-11T10:53:19.716+0000 I STORAGE [initandlisten] shutdown: waiting for fs preallocator...
2015-04-11T10:53:19.716+0000 I STORAGE [initandlisten] shutdown: final commit...
2015-04-11T10:53:19.716+0000 I STORAGE [initandlisten] shutdown: closing all files...
2015-04-11T10:53:19.716+0000 I STORAGE [initandlisten] closeAllFiles() finished
2015-04-11T10:53:19.716+0000 I CONTROL [initandlisten] dbexit: rc: 100
What I don't understand is that if I execute the same command inside docker-machine, with the same data placed in a docker-machine folder, everything works fine.
I already tried chmod -R 777 on the folder, but it doesn't fix the problem.
Can someone explain what I am doing wrong here?
Can we place data inside the /Users folder mounted in docker-machine to share it with containers on Mac OS X?
Does MongoDB require something specific from the filesystem?
You can attach to the Docker VM's tty (docker.docker/Data/vms/0/tty) to get into the VM and then navigate to the folder to see the volumes.
For connecting to your local MongoDB instance from a container, you must first allow it to accept connections from the Docker bridge gateway. To do so, add the gateway IP to bindIp in the network interfaces section of the MongoDB config file /etc/mongod.conf.
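As a rough sketch of that change (172.17.0.1 is only the common default for the docker0 bridge gateway, an assumption here; check your own host):

ip route show | grep docker0                  # on the Docker host/VM: shows the docker0 bridge and its gateway address
# /etc/mongod.conf (YAML format used by MongoDB 3.x):
#   net:
#     port: 27017
#     bindIp: 127.0.0.1,172.17.0.1            # append your actual bridge gateway IP here
sudo service mongod restart                   # or: sudo systemctl restart mongod, so the new bindIp takes effect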
MongoDB can be run in a Docker container. There is an official image available on Docker Hub containing the MongoDB Community edition, typically used in development environments. For production, you may custom-build a container with MongoDB's Enterprise edition.
Running MongoDB as a Docker container: if you need to access the MongoDB server from another application running locally, you will need to expose a port using the -p argument. With the port published, you will be able to connect to your MongoDB instance at mongodb://localhost:27017.
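A quick sketch of that (the container name my-mongo is arbitrary):

docker run -d --name my-mongo -p 27017:27017 mongo     # publish the container's 27017 port on the host
mongo --host localhost --port 27017                    # connect from the host with the mongo shell, if it is installed locally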
It seems to be specific to MongoDB and the use of VirtualBox; see the mongo Docker image README:
WARNING (Windows & OS X): The default Docker setup on Windows and OS X uses a VirtualBox VM to host the Docker daemon. Unfortunately, the mechanism VirtualBox uses to share folders between the host system and the Docker container is not compatible with the memory mapped files used by MongoDB (see vbox bug, docs.mongodb.org and related jira.mongodb.org bug). This means that it is not possible to run a MongoDB container with the data directory mapped to the host.
It doesn't behave like that if you keep the datadir out of the mounted volume.
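One way to keep the data directory out of a /Users mount is a Docker-managed (named) volume, which lives inside the VM's own filesystem. A sketch, assuming Docker 1.9+ and an arbitrary volume name mongodata:

docker run -d --name mongo-nv -v mongodata:/data/db -p 27017:27017 mongo --smallfiles
# 'mongodata' is a named volume stored inside the boot2docker/VirtualBox VM, not a /Users shared folder,
# so mongod's memory-mapped files work; the trade-off is that the data is not directly visible under /Users on the Mac.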
Running on the Docker for Mac native beta (which uses xhyve rather than VirtualBox), it is now possible to mount the Mongo data directory to the host system properly and without issue.
My setup: MacBook Pro with El Capitan, Docker native beta Version 1.11.1-beta10 (build: 6662), Docker version 1.11.1, build 5604cbe
docker run -d -p 127.0.0.1:27017:27017 -v ~/foo/data/db/:/data/db --name foo-mongo mongo
7f8a72ec42b0ac235f49e0edd8d4f6613b45d10beb54012ca643629218a6653d
docker logs foo-mongo
2016-05-05T23:42:54.014+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=7f8a72ec42b0
2016-05-05T23:42:54.014+0000 I CONTROL [initandlisten] db version v3.2.4
2016-05-05T23:42:54.014+0000 I CONTROL [initandlisten] git version: e2ee9ffcf9f5a94fad76802e28cc978718bb7a30
2016-05-05T23:42:54.014+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e 11 Feb 2013
2016-05-05T23:42:54.014+0000 I CONTROL [initandlisten] allocator: tcmalloc
2016-05-05T23:42:54.014+0000 I CONTROL [initandlisten] modules: none
2016-05-05T23:42:54.014+0000 I CONTROL [initandlisten] build environment:
2016-05-05T23:42:54.014+0000 I CONTROL [initandlisten] distmod: debian71
2016-05-05T23:42:54.014+0000 I CONTROL [initandlisten] distarch: x86_64
2016-05-05T23:42:54.014+0000 I CONTROL [initandlisten] target_arch: x86_64
2016-05-05T23:42:54.014+0000 I CONTROL [initandlisten] options: {}
2016-05-05T23:42:54.028+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=1G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2016-05-05T23:42:54.560+0000 I CONTROL [initandlisten]
2016-05-05T23:42:54.560+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-05-05T23:42:54.560+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-05-05T23:42:54.560+0000 I CONTROL [initandlisten]
2016-05-05T23:42:54.560+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-05-05T23:42:54.560+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2016-05-05T23:42:54.560+0000 I CONTROL [initandlisten]
2016-05-05T23:42:54.562+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2016-05-05T23:42:54.562+0000 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2016-05-05T23:42:54.573+0000 I NETWORK [initandlisten] waiting for connections on port 27017
ls ~/foo/data/db/
WiredTiger WiredTiger.turtle WiredTigerLAS.wt collection-0-516089495343762760.wt index-1-516089495343762760.wt mongod.lock storage.bson
WiredTiger.lock WiredTiger.wt _mdb_catalog.wt diagnostic.data/ journal/ sizeStorer.wt
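As a quick sanity check that the instance is actually usable (assuming the container name foo-mongo from above; ping just round-trips to the server):

docker exec foo-mongo mongo --eval 'db.runCommand({ ping: 1 })'    # run the mongo shell inside the container
mongo --host 127.0.0.1 --port 27017 --eval 'db.stats()'            # or from the Mac host, if the mongo shell is installed there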