The task definition my Service uses is pulling the "latest" tagged version of my image.
When I update my service, though, and "force new deployment", I look at the events and see this:
service MYSERVICE was unable to place a task because no container instance met all of its requirements. The closest matching container-instance .... is already using a port required by your task
I then went to my cluster and stopped all tasks.
Then I went back to my Service and updated with "force new deployment" again. This seems to have worked.
Will I have to stop all tasks and update the service each time I want to deploy a new image? Or is there a "right" way to do this?
Just to follow up, as stated in the answers I just needed to use dynamic port mapping.
Initially, when I first started I didn't have a load balancer so I was hitting the EC2 instances directly to access the running containers. Of course in order to do this I had to expose a static port on the EC2 host.
I added a load balancer but kept that static port mapping, not understanding how dynamic port mapping worked. All I had to do was change my task definition to set the host port to "0". Now I have no static port mappings on the hosts; the NLB does the routing for me, and deploys work as expected.
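For anyone making the same change, the relevant part of the task definition looks something like this (the container port here is just a placeholder for whatever your app listens on):

```json
"portMappings": [
  {
    "containerPort": 8080,
    "hostPort": 0,
    "protocol": "tcp"
  }
]
```

Setting `hostPort` to 0 tells ECS to pick an ephemeral host port for each task, and the load balancer's target group tracks whichever port was assigned, so two tasks from the same definition never collide on an instance.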
The default reserved ports are 22 for SSH, the Docker ports 2375 and 2376, and the Amazon ECS container agent ports 51678-51680.
You can receive this error due to one or more of the following reasons:
- No container instances were found in your cluster.
- The port needed by the task is already in use.
- There is not enough memory for your tasks.
To map a host port to a container port in Docker, open a terminal and run the image (e.g. `nginx`) with the `-P` flag, which publishes each exposed container port to a random ephemeral host port: `docker run -d -P nginx`. You can then inspect the assigned host ports with `docker port <container>`.
While the other answers are correct, I don't think they apply to the problem you have. I say this because it's a problem my team has faced as well, and doesn't really have anything to do with trying to launch multiple containers on the same instance - if I understand correctly, you're just trying to replace the existing container from an updated task definition. If you want to put multiple copies of the same container on a single box, definitely look at the suggestions from the other answers (in addition to the details below), but for rolling deploys, dynamic ports are by no means required.
[[ Side note for completeness: it's possible that your forced deploy threw the error you posted because it just takes a while for EC2 to clean up resources stopped by ECS. You'll see the same sort of issue if you're trying to force stop / start a task -- we've seen similar errors when trying to restart a container that was configured to allocate >50% of the available instance memory. You'll get those types of resource errors until the EC2 instance is completely cleaned up, and reported back to ECS. I've seen this take upwards of 5 minutes. ]]
To your question then, unfortunately for now there aren't any great built-in mechanics from AWS for performing a rolling restart of tasks. However, you can do rolling deploys.
As you're probably aware already, your Service points at a specific task definition. Note that it's pinned to the task definition revision number, and doesn't care about the image tags in the way that the EC2 instance will.
The below settings are where the magic happens for enabling rolling deploys; you can find these configuration options in your service settings.
For you to be able to do rolling deploys, you have to have at least 2 tasks running.
Minimum healthy percent: the minimum percentage of your desired task count that must remain running when deploying new tasks.
Maximum percent: the maximum percentage of your desired task count that can be running when deploying new tasks.
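In a service definition (for example, one passed to `aws ecs create-service --cli-input-json`), these two settings live under `deploymentConfiguration`; the numbers here are just the example values used below:

```json
"deploymentConfiguration": {
  "minimumHealthyPercent": 50,
  "maximumPercent": 100
}
```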
So for a real example, let's assume you have the following configuration:
Number of tasks: 3
Minimum healthy percent: 50
Maximum percent: 100
If you change the task definition that your service is pointing at, it will initiate a rolling deploy. We have 3 running tasks, but allow for >=50% healthy. ECS will kill one of your tasks, making the healthy % drop to 66%, still above 50%. Once the new task comes up, the service is again at 100%, and ECS can continue rolling the deploy to the next instance.

Likewise, if you had a configuration where minimum % == 100 and maximum % == 150 (assuming you have capacity), ECS will launch an additional task; once it's up, you have a healthy percent of 133%, and it can safely kill one of the old tasks. This process continues until your new task is fully deployed.
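The arithmetic in that walk-through can be sketched as a small simulation. This is a toy model of the scheduler's ordering, not the real ECS scheduler; in particular it assumes the minimum healthy task count rounds up, which matches the one-at-a-time behavior described above:

```python
import math


def rolling_deploy(desired, min_healthy_pct, max_pct):
    """Toy simulation of an ECS-style rolling deploy (not the real scheduler).

    Returns the list of (old_tasks, new_tasks) states after each step.
    Assumes the minimum healthy count rounds up and the maximum rounds down.
    """
    min_tasks = math.ceil(desired * min_healthy_pct / 100)
    max_tasks = desired * max_pct // 100
    old, new = desired, 0
    steps = []
    while old > 0:
        if old + new < max_tasks:
            new += 1          # headroom below the cap: start a replacement first
        elif old + new - 1 >= min_tasks:
            old -= 1          # at the cap: stop an old task, staying >= minimum
        else:
            # e.g. 1 task with minimum 100% / maximum 100%: nothing can move
            raise RuntimeError("deploy cannot proceed with these limits")
        steps.append((old, new))
    while new < desired:      # finish starting any remaining replacements
        new += 1
        steps.append((old, new))
    return steps
```

With `rolling_deploy(3, 50, 100)` the simulation stops one old task (down to 66% healthy), starts a replacement, and repeats; with `rolling_deploy(3, 100, 150)` it starts an extra task first (133%) and then retires an old one, matching the two scenarios above. It also illustrates the earlier point that a single-task service with minimum 100% cannot roll at all.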