Our mongod.conf for MongoDB 3.0.6; the data files are only 240 MB in size. The network was reliable at the time of these timestamps.
# mongod.conf
# Where to store the data.
# Note: if you run mongodb as a non-root user (recommended) you may
# need to create and set permissions for this directory manually,
# e.g., if the parent directory isn't mutable by the mongodb user.
dbpath=/db/db32/mongodb/data/
# path to logfile
logpath=/db/db32/mongodb/logs/mongod.log
# add new entries to the end of the logfile
logappend=true
# Listen to local interface only. Comment out to listen on all interfaces.
#bind_ip = 127.0.0.1
# enable operation journaling
#journal = true
#smallfiles = true
nojournal = true
# Enables periodic logging of CPU utilization and I/O wait
cpu = true
# enable database authentication for users connecting from remote hosts
auth = true
# Verbose logging output.
#verbose = true
# Enable db quota management
#quota = true
# Set oplogging level where n is
# 0=off (default)
# 1=W
# 2=R
# 3=both
# 7=W+some reads
#diaglog = 0
# Ignore query hints
#nohints = true
# Turns off server-side scripting. This will result in greatly limited
# functionality
noscripting = true
# Turns off table scans. Any query that would do a table scan fails.
#notablescan = true
# Disable data file preallocation.
#noprealloc = true
# Specify .ns file size for new databases.
# nssize = <size>
# Replication Options
# in replicated mongo databases, specify the replica set name here
#replSet=setname
# maximum size in megabytes for replication operation log
#oplogSize=1024
# path to a key file storing authentication info for connections
# between replica set members
#keyFile=/path/to/keyfile
# Forces the mongod to validate all requests from clients
objcheck = true
# Disable HTTP status interface
nohttpinterface = true
# disable REST interface
rest = false
# database profiling 1 = only includes slow operations
profile = 1
# logs slow queries to the log
slowms = 100
# maximum number of simultaneous connections
maxConns = 25
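For reference, a minimal sketch of how a config like this is typically loaded, and how to verify what the server actually picked up (the config file path and the admin username are assumptions; since auth = true, the shell needs credentials):

# start mongod with the config file shown above (path is an assumption)
mongod --config /db/db32/mongodb/mongod.conf
# verify the effective options from the mongo shell (credentials assumed)
mongo admin -u admin -p --eval 'printjson(db.adminCommand({getCmdLineOpts: 1}))'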
mongodump run with the verbose flag. The server log contains no entries for this timestamp.
(...)
2016-03-09T16:51:17.378+0100 enqueued collection 'Tetzi005.xxx'
2016-03-09T16:51:17.384+0100 enqueued collection 'Tetzi005.xxxxxx'
2016-03-09T16:51:17.391+0100 enqueued collection 'Tetzi005.system.indexes'
2016-03-09T16:51:17.391+0100 finalizing intent manager with longest task first prioritizer
2016-03-09T16:51:17.391+0100 dumping with 8 job threads
2016-03-09T16:51:17.391+0100 starting dump routine with id=0
2016-03-09T16:51:17.391+0100 starting dump routine with id=4
2016-03-09T16:51:17.391+0100 starting dump routine with id=1
2016-03-09T16:51:17.391+0100 writing Tetzi005.DailyEmailUser to dbbackup/dump/Tetzi005/xxxxxxx.bson
2016-03-09T16:51:17.391+0100 starting dump routine with id=3
2016-03-09T16:51:17.391+0100 starting dump routine with id=6
2016-03-09T16:51:17.391+0100 writing Tetzi005.Prototype to dbbackup/dump/Tetzi005/xxxxxxx.bson
2016-03-09T16:51:17.392+0100 starting dump routine with id=7
2016-03-09T16:51:17.392+0100 writing Tetzi005.ProfileUser to dbbackup/dump/Tetzi005/xxxxxxx.bson
2016-03-09T16:51:17.392+0100 starting dump routine with id=2
2016-03-09T16:51:17.392+0100 writing Tetzi005.OrganizationDataSet to dbbackup/dump/Tetzi005/xxxxxxxx.bson
2016-03-09T16:51:17.392+0100 writing Tetzi005.DailyUserCount to dbbackup/dump/Tetzi005/xxxxxxxxxx.bson
2016-03-09T16:51:17.392+0100 writing Tetzi005.DailyEmailOrganization to dbbackup/dump/Tetzi005/xxxxxxxxxxxxx.bson
2016-03-09T16:51:17.392+0100 starting dump routine with id=5
2016-03-09T16:51:17.392+0100 writing Tetzi005.OrganizationStatistics to dbbackup/dump/Tetzi005/xxxxxxxxxxx.bson
2016-03-09T16:51:17.392+0100 writing Tetzi005.Organization to dbbackup/dump/Tetzi005/xxxx.bson
2016-03-09T16:51:17.398+0100 counted 112 documents in Tetzi005.xxxxxxxxxxxxx
2016-03-09T16:51:17.398+0100 counted 475 documents in Tetzi005.xxxxxxxx
2016-03-09T16:51:17.405+0100 Failed: error reading from db: EOF
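For completeness, a hedged reconstruction of the invocation that produces verbose output like the above (host, credentials, and output directory are assumptions; auth = true in the config implies credentials are required):

# verbose dump of the Tetzi005 database; user/password placeholders are assumptions
mongodump -v --host 127.0.0.1 --port 27017 \
  --username admin --password '...' --authenticationDatabase admin \
  --db Tetzi005 --out dbbackup/dump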
Googling for "Failed: error reading from db: EOF" turns up no solutions.
We have this problem only with the large plan. Technically the configurations don't differ (except for memory, disk, and maxConns). All MongoDB servers run in Docker containers; Docker runs on OpenStack VMs with RHEL 7. The plans are:
small    max. 10 concurrent connections,  1 GB storage, 256 MB memory (paid)
medium   max. 15 concurrent connections,  8 GB storage,   1 GB memory (paid)
large    max. 25 concurrent connections, 16 GB storage,   4 GB memory (paid)
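Because the servers run inside Docker, it is worth confirming the memory limit the container actually received; a minimal sketch (the container name is hypothetical):

# memory limit configured for the container, in bytes (0 = unlimited)
docker inspect --format '{{.HostConfig.Memory}}' mongodb-large
# one-shot snapshot of live CPU/memory usage per container
docker stats --no-stream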
mongodump is a utility that creates a binary export of a database's contents. It can export data from standalone deployments and replica sets.
mongodump does not lock the database, so other read and write operations continue normally while it runs. In fact, both mongodump and mongorestore are non-blocking, so if you dump and then restore a database, it is your responsibility to make sure the result is really the snapshot you wanted.
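On a replica set, one way to get a consistent point-in-time snapshot despite this non-blocking behavior is mongodump's --oplog flag (replica sets only, and it cannot be combined with --db/--collection); a sketch:

# dump all databases plus an oplog slice covering the duration of the dump
mongodump --oplog --out dbbackup/dump
# replay the captured oplog entries on restore for a consistent snapshot
mongorestore --oplogReplay dbbackup/dump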
The default location for backups is the dump/ folder. When a MongoDB instance uses the WiredTiger storage engine, the output is uncompressed data. Backup operations using mongodump depend on the available system memory.
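Since memory is the constraint here, one way to shrink mongodump's working set is to dump one collection at a time instead of the whole deployment; a sketch (the collection name is taken from the log above):

# dump a single collection rather than every collection at once
mongodump --db Tetzi005 --collection Organization --out dbbackup/dump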
The error "Failed: error reading from db: EOF" is caused by running out of memory during the oplog write-out. You can reduce mongodump's memory use by adding the --quiet option:
mongodump --quiet
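Another knob that may help is the number of collections dumped in parallel (the verbose log above shows 8 job threads). Newer mongodump releases expose this as -j / --numParallelCollections; whether a 3.0-era tools build supports it is an assumption, so check mongodump --help first.

# dump collections one at a time to cap peak memory (flag availability assumed)
mongodump -j 1 --db Tetzi005 --out dbbackup/dump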
The "Failed: error reading from db: EOF"
is caused from running out of memory during the oplog write out.