Your AWS Elastic Beanstalk deployment fails:
- Intermittently
- For no apparent reason
Step 1: Check the obvious log
/var/log/eb-activity.log
Running npm install: /opt/elasticbeanstalk/node-install/node-v6.10.0-linux-x64/bin/npm
Setting npm config jobs to 1
npm config jobs set to 1
Running npm with --production flag
Failed to run npm install. Snapshot logs for more details.
Traceback (most recent call last):
File "/opt/elasticbeanstalk/containerfiles/ebnode.py", line 695, in <module>
main()
File "/opt/elasticbeanstalk/containerfiles/ebnode.py", line 677, in main
node_version_manager.run_npm_install(options.app_path)
File "/opt/elasticbeanstalk/containerfiles/ebnode.py", line 136, in run_npm_install
self.npm_install(bin_path, self.config_manager.get_container_config('app_staging_dir'))
File "/opt/elasticbeanstalk/containerfiles/ebnode.py", line 180, in npm_install
raise e
subprocess.CalledProcessError: Command '['/opt/elasticbeanstalk/node-install/node-v6.10.0-linux-x64/bin/npm', '--production', 'install']' returned non-zero exit status 1 (ElasticBeanstalk::ExternalInvocationError)
caused by: + /opt/elasticbeanstalk/containerfiles/ebnode.py --action npm-install
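To pull these logs without clicking through the console, the EB CLI can fetch them, or you can SSH straight onto the instance (a quick sketch, assuming the EB CLI is installed and initialised against this environment):
eb logs --all                              # download the full log bundle to .elasticbeanstalk/logs/
eb ssh                                     # or open a shell on the instance...
sudo tail -n 100 /var/log/eb-activity.log  # ...and read the activity log directly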
Step 2: Google for the appropriate snapshot log file...
/var/log/nodejs/npm-debug.log
58089 verbose stack Error: spawn ENOMEM
58089 verbose stack at exports._errnoException (util.js:1022:11)
58089 verbose stack at ChildProcess.spawn (internal/child_process.js:313:11)
58089 verbose stack at exports.spawn (child_process.js:380:9)
58089 verbose stack at spawn (/opt/elasticbeanstalk/node-install/node-v6.10.0-linux-x64/lib/node_modules/npm/lib/utils/spawn.js:21:13)
58089 verbose stack at runCmd_ (/opt/elasticbeanstalk/node-install/node-v6.10.0-linux-x64/lib/node_modules/npm/lib/utils/lifecycle.js:247:14)
58089 verbose stack at /opt/elasticbeanstalk/node-install/node-v6.10.0-linux-x64/lib/node_modules/npm/lib/utils/lifecycle.js:211:7
58089 verbose stack at _combinedTickCallback (internal/process/next_tick.js:67:7)
58089 verbose stack at process._tickCallback (internal/process/next_tick.js:98:9)
58090 verbose cwd /tmp/deployment/application
58091 error Linux 4.4.44-39.55.amzn1.x86_64
58092 error argv "/opt/elasticbeanstalk/node-install/node-v6.10.0-linux-x64/bin/node" "/opt/elasticbeanstalk/node-install/node-v6.10.0-linux-x64/bin/npm" "--production" "install"
58093 error node v6.10.0
58094 error npm v3.10.10
58095 error code ENOMEM
58096 error errno ENOMEM
58097 error syscall spawn
58098 error spawn ENOMEM
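The interesting lines are easy to miss in a debug log this long; a quick grep on the instance narrows things down (a sketch):
sudo grep -n "ENOMEM" /var/log/nodejs/npm-debug.log                  # jump straight to the failure
sudo grep -n " error " /var/log/nodejs/npm-debug.log | tail -n 20    # or the last few error entries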
Step 3: Obvious options...
- Use a bigger instance and it works...
- Don't fix anything, just try again:
  - Deploy again and it works...
  - Clone the environment and it works...
  - Rebuild the environment and it works...
- Either way, you're left feeling dirty and wrong.
Note: rebuilding an environment terminates all of its resources and replaces them with new resources with the same configuration. You can also rebuild terminated environments within six weeks (42 days) of their termination.
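For reference, the workarounds above map onto EB CLI / AWS CLI calls (a sketch; my-env is a placeholder for your environment name):
eb deploy                                                            # redeploy the same version and hope
eb clone my-env                                                      # clone the environment
aws elasticbeanstalk rebuild-environment --environment-name my-env   # rebuild it in place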
Deploy your application and view your metrics:
1. Deploy your updated Elastic Beanstalk application.
2. To see your memory utilization metrics, open the CloudWatch console, and then choose Metrics in the navigation pane. You can see your metrics in the custom namespace labeled CWAgent.
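The CWAgent namespace only appears if the CloudWatch agent is installed on the instances and configured to collect memory metrics; a minimal agent configuration sketch (mem_used_percent is the standard memory measurement) looks like this:
{
  "metrics": {
    "namespace": "CWAgent",
    "metrics_collected": {
      "mem": {
        "measurement": ["mem_used_percent"],
        "metrics_collection_interval": 60
      }
    }
  }
}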
Your instances (t2.micro in my case) are running out of memory because the build work that runs during instance spin-up is parallelised.
For a one-off fix, while logged into the instance, add a 1 GB swap file:
sudo /bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=1024
sudo /sbin/mkswap /var/swap.1
sudo chmod 600 /var/swap.1
sudo /sbin/swapon /var/swap.1
For more detail, see "How do you add swap to an EC2 instance?"
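Before re-deploying, confirm the swap is actually active (a quick sanity check):
swapon -s    # should list /var/swap.1
free -m      # the Swap row should now show roughly 1024 MB total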
During deployment we use a bit of swap, but there is no crash:
Mem: 1019116k total, 840880k used, 178236k free, 15064k buffers
Swap: 1048572k total, 12540k used, 1036032k free, 62440k cached
More permanent options:
- Bigger instances
- Automate provisioning of swap in Elastic Beanstalk via .ebextensions/ (see the sketch below)
- Hop on the 'server-less' bandwagon
- Use less bloated packages
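A minimal sketch of the .ebextensions route (the file name, command name, and 1 GB size are arbitrary choices): a *.config file under .ebextensions/ with a commands block runs as root early in each deployment, before the application's dependencies are installed, so the swap is already in place when npm install starts.
# .ebextensions/01-swap.config
commands:
  01_setup_swap:
    test: test ! -e /var/swap.1        # skip if the swap file already exists
    command: |
      /bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=1024
      /bin/chmod 600 /var/swap.1
      /sbin/mkswap /var/swap.1
      /sbin/swapon /var/swap.1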
A quick Google reveals that ENOMEM is an out-of-memory error, and t2.micro instances only have 1 GB of RAM.
Rarely would we use that much in development; however, Elastic Beanstalk parallelizes parts of the build process through spawned workers. This means that during setup, with larger packages, the instance can run out of memory and the operation fails.
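Two quick checks from an SSH session make the ceiling obvious (a sketch; both are standard on Amazon Linux):
grep MemTotal /proc/meminfo     # roughly 1 GB on a t2.micro
dmesg | grep -iE 'oom|memory'   # any kernel-level memory pressure messages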
Using free (output below is in KB) we can see...
Start (plenty of free memory)
total used free shared buffers cached
Mem: 1019116 609672 409444 144 45448 240064
-/+ buffers/cache: 324160 694956
Swap: 0 0 0
Ran out of memory (at the next tick)
Mem: 1019116 947232 71884 144 11544 81280
-/+ buffers/cache: 854408 164708
Swap: 0 0 0
Deploy process aborted
total used free shared buffers cached
Mem: 1019116 411892 607224 144 13000 95460
-/+ buffers/cache: 303432 715684
Swap: 0 0 0
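To capture snapshots like these yourself, watch memory from a second SSH session while the deployment runs (a sketch):
watch -n 1 free                      # refresh the memory picture every second
vmstat 1 | tee /tmp/deploy-mem.log   # or log it for inspection afterwards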