
Exit Code 125 from Docker when Trying to Run Container Programmatically

I am trying to get an integration test working. In the test initialization phase I attempt to spin up a Redis server from a docker image.

var p = new Process { StartInfo = new ProcessStartInfo("docker", "run --name redistest -p 6379:6379 redis") };
p.Start();

When I do that, the process exits with exit code 125. If I comment out those lines, hit a breakpoint in the test before the test code executes, and instead run from the command line

docker run --name redistest -p 6379:6379 redis

the test runs as expected when continuing from the breakpoint. The 125 exit code just means docker run failed, so there's not much more information to go on.

Prior to either the command line invocation or the C# invocation, I made sure there was no container named redistest with

docker stop redistest
docker rm redistest

Yet the difference in behavior remains. All of these attempts to run docker programmatically fail:

  • adding -d
  • running as a normal user
  • running with elevated privileges
  • running from within a test
  • running from a .NET Framework console app

Why does programmatic process creation of the docker run command cause docker to exit with a 125?

It works programmatically just fine for some images but not others.

Kit asked Mar 09 '26 10:03


1 Answer

I know this is an old question and I'm obviously way too late to help the original poster, but since this question appears near the top of the search results for this error for both docker and podman, I thought I would add one workaround I found that I didn't see mentioned (it should work for either tool). Posting in case it helps someone else.

I have to give credit to the answers above, especially TheECanyon's, for pointing me in the right direction by noting that the error is related to passing --name. That said, this is nothing revolutionary; it's just a variation on Kit's answer that delegates the container removal to docker/podman instead of requiring additional handling in your code or script. Then again, his answer, or a hybrid approach, might be more resilient against oddball conditions. Like the others, I was getting this error while using that parameter, but I wasn't seeing any of the usual errors about a container with that name already existing, possibly because I was running it from a script.

What I didn't see mentioned so far, and what allowed me to keep the --name parameter, was to remove the existing container manually (e.g. [docker|podman] rm <container-name>) and then add the --rm flag to the run command (e.g. [docker|podman] run --rm), which causes the container to be removed when it exits. In effect, you re-create the container on every run and docker/podman removes it when it stops. This probably won't work if for some reason you need the exact same container instance on every run, but in my case all of the config/data/persistent bits were stored on mounted volumes and re-acquired on the next run anyway, so it worked great, including with rootless containers.
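To make that concrete for the original poster's programmatic setup, here is a minimal Python sketch of the two invocations (the container name redistest and the port mapping are taken from the question; nothing here actually talks to a daemon, it only builds and prints the command lines):

```python
import shlex

CONTAINER = "redistest"  # container name from the question

# One-time cleanup: free the name if a leftover container still holds it.
# ("docker rm" exits nonzero when no such container exists; that's fine.)
cleanup = ["docker", "rm", CONTAINER]

# --rm makes the daemon delete the container when it exits, so the name
# is free again on the next run and no manual cleanup is needed.
run = ["docker", "run", "--rm", "--name", CONTAINER,
       "-p", "6379:6379", "redis"]

print(shlex.join(cleanup))  # docker rm redistest
print(shlex.join(run))      # docker run --rm --name redistest -p 6379:6379 redis
# To actually launch it, pass the list to subprocess.run(run)
# with a running Docker/Podman daemon available.
```

The same argument string would go into the C# ProcessStartInfo from the question; the key change is just the added --rm flag.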

This solution works better for me than simply dropping the name, because running without a name generates a new container instance on every run without cleaning up the old ones. That's not a big deal if you only need a few dozen runs on a local dev box, but in theory it could lead to storage concerns over time, especially if you are doing hundreds or thousands of runs that each create a new container instance.
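If you have already accumulated leftover instances from unnamed runs, the standard docker/podman CLI can clean them up in bulk. A sketch (here `redis` stands in for whichever image your runs used):

```shell
# Inspect: list stopped containers created from the redis image
docker ps -a --filter ancestor=redis --filter status=exited

# Remove every stopped container in one shot
# (podman accepts the same subcommand and flag)
docker container prune -f
```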

zpangwin answered Mar 12 '26 03:03