I'm using
- image: peopleperhour/dynamodb
as the Docker image in my CircleCI config file. In CircleCI it outputs the following:
Initializing DynamoDB Local with the following configuration:
Port: 8000
InMemory: false
DbPath: /var/dynamodb_local
SharedDb: false
shouldDelayTransientStatuses: false
CorsParams: *
Exited with code 137
The first tests pass fine, and Exited with code 137 doesn't appear until later in the run. But once that error happens, all of the remaining tests start failing.
I saw this link and changed my config to the following, with no luck.
- image: peopleperhour/dynamodb
  environment:
    MAX_HEAP_SIZE: 2048m
    HEAP_NEWSIZE: 512m
Any ideas on how to fix this?
Exit code 137 means the process was killed with SIGKILL (128 + 9), which almost always means the out-of-memory (OOM) killer terminated a container that used more memory than it was allowed. The same signature shows up in other environments: Kubernetes reports a pod that exceeds its memory limit as OOMKilled with exit code 137, and when a Spark executor runs out of memory, YARN kills it with "Container killed on request. Exit code is 137". If a container is consistently exiting with code 137, the fix is the same everywhere: raise the memory limit on the container under the most pressure, or reduce how much memory it uses.
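In CircleCI specifically, service containers share the memory of the job's executor, so one way to attack this while keeping the peopleperhour/dynamodb image is to give the whole job more RAM via resource_class. A minimal sketch, assuming the Docker executor; the primary image and test command are placeholders, and the larger resource classes depend on your CircleCI plan:

jobs:
  test:
    docker:
      - image: cimg/node:18.17          # placeholder primary container
      - image: peopleperhour/dynamodb   # service container that was being OOM-killed
    resource_class: large               # more RAM than the default medium class
    steps:
      - checkout
      - run: npm test                   # placeholder test command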
I ran into the same issue and ended up switching to LocalStack, since its memory footprint seems to be lower.
My config for that container looks like:
- image: localstack/localstack
  environment:
    SERVICES: dynamodb:4570
    DEFAULT_REGION: us-west-2
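Your tests then need to point at the LocalStack endpoint instead of real AWS. A sketch of the test step, where DYNAMODB_ENDPOINT is just an assumed variable name (use whatever your test setup actually reads) and the port matches the dynamodb:4570 mapping configured above:

- run:
    name: Run tests against LocalStack
    command: npm test                            # placeholder test command
    environment:
      DYNAMODB_ENDPOINT: http://localhost:4570   # assumed env var your code reads for the endpoint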