
How to see Python print statements from running Fargate ECS task?

I have an ECS task running on Fargate that launches a Docker container. When the task starts, a shell script, runner.sh, is called:

#!/bin/sh
echo "this line will get logged to ECS..."
python3 src/my_python_script.py # however print statements from this Python script are not logged to ECS

This in turn starts a long-running Python script, my_python_script.py. I know the Python script is running fine because it does what it needs to do, but I can't see output from the Python script.

Inside of my_python_script.py there are several print() statements. In the CloudWatch logs for my ECS Fargate task, I see output from the sh script ("this line will get logged to ECS..."), but not output from print() statements that are made within the Python script.

This is the logs configuration from inside my task definition:

{
    "ipcMode": null,
    "executionRoleArn": "myecsTaskExecutionRolearn",
    "containerDefinitions": [
        {
            "dnsSearchDomains": null,
            "environmentFiles": null,
            "logConfiguration": {
                "logDriver": "awslogs",
                "secretOptions": null,
                "options": {
                    "awslogs-group": "/ecs/mylogsgroup",
                    "awslogs-region": "eu-west-1",
                    "awslogs-stream-prefix": "ecs"
                }
            },
            "entryPoint": null,
            "portMappings": [],
            "command": null,
            "linuxParameters": null,
            "cpu": 0,
            "environment": [],
            "resourceRequirements": null,
            "ulimits": null,
            "dnsServers": null,
            "mountPoints": [],
            "workingDirectory": null,
            "secrets": null,
            "dockerSecurityOptions": null,
            "memory": null,
            "memoryReservation": null,
            "volumesFrom": [],
            "stopTimeout": null,
            "image": "1234567.dck.aws.com/mydockerimage",
            "startTimeout": null,
            "firelensConfiguration": null,
            "dependsOn": null,
            "disableNetworking": null,
            "interactive": null,
            "healthCheck": null,
            "essential": true,
            "links": null,
            "hostname": null,
            "extraHosts": null,
            "pseudoTerminal": null,
            "user": null,
            "readonlyRootFilesystem": null,
            "dockerLabels": null,
            "systemControls": null,
            "privileged": null,
            "name": "my-task-definition-name"
        }
    ],
    "memory": "4096",
    "taskRoleArn": "myecsTaskRolearn",
    "family": "my-task-definition-name",
    "pidMode": null,
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "networkMode": "awsvpc",
    "cpu": "2048",
    "inferenceAccelerators": [],
    "proxyConfiguration": null,
    "volumes": [],
    "tags": []
}

Dockerfile:


FROM rocker/verse:3.6.0
ENV DEBIAN_FRONTEND noninteractive

RUN install2.r --error \
    jsonlite

RUN echo "deb http://ftp.de.debian.org/debian testing main" >> /etc/apt/sources.list
RUN echo 'APT::Default-Release "stable";' | tee -a /etc/apt/apt.conf.d/00local
RUN apt-get update && apt-get -t testing install -y --force-yes python3.6
RUN apt-get update && apt-get -t testing install -y libmagick++-dev python3-pip python-setuptools 

RUN mkdir /app
WORKDIR /app
COPY ./src /app/src

RUN pip3 install --trusted-host pypi.python.org -r /app/requirements.txt

CMD /app/runner.sh

I think I am following the awslogs instructions from https://docs.aws.amazon.com/AmazonECS/latest/userguide/using_awslogs.html but maybe not? Is there something obvious I need to do to make sure that print() statements from within a Python script are captured in my ECS task's CloudWatch logs?

Benjamin asked Oct 01 '20



1 Answer

Seems to me that there are a couple of things you could be dealing with here.

The first is Python's default output buffering, which can keep print() output from reaching the container's stdout until the buffer fills (or never, if the process is stopped first). You need to disable it.

You can disable buffering by setting the PYTHONUNBUFFERED environment variable in your Dockerfile, before the CMD line:

ENV PYTHONUNBUFFERED=1
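If you'd rather not change the environment for the whole image, the same effect can be had per invocation: run the script as `python3 -u src/my_python_script.py` in runner.sh, or flush explicitly from Python. A minimal sketch of the in-script approach (the helper name `log` is just illustrative, not from the question):

```python
import sys

def log(msg):
    # Write a line and flush immediately, so the log driver can forward
    # it as soon as it is produced instead of waiting for stdout's
    # buffer to fill.
    sys.stdout.write(msg + "\n")
    sys.stdout.flush()

log("long-running job started")
```

On Python 3, `print(msg, flush=True)` does the same thing without a helper.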

Secondly, quoting from the Using the awslogs driver doc that you linked:

The type of information that is logged by the containers in your task depends mostly on their ENTRYPOINT command. By default, the logs that are captured show the command output that you would normally see in an interactive terminal if you ran the container locally, which are the STDOUT and STDERR I/O streams. The awslogs log driver simply passes these logs from Docker to CloudWatch Logs. For more information on how Docker logs are processed, including alternative ways to capture different file data or streams, see View logs for a container or service in the Docker documentation.

Going by that, I would replace the CMD line with the exec form of ENTRYPOINT:

ENTRYPOINT ["/app/runner.sh"]

This hooks the STDOUT and STDERR streams of your shell script (and the Python script it launches) up to the container's logging driver.

evangineer answered Oct 23 '22