The application consists of:

- Django
- Redis
- Celery
- Docker
- Postgres
Before moving the project into Docker everything was working smoothly, but once it was moved into containers something started to go wrong. At first it starts perfectly fine, but after a while I receive the following error:
celery-beat_1 | ERROR: Pidfile (celerybeat.pid) already exists.
I've been struggling with this for a while, and at this point I've all but given up. I have no idea what is wrong.
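The usual cause in a setup like this is that beat writes `celerybeat.pid` into its working directory, which is bind-mounted from the host, so the file survives container restarts; the build-time cleanup in the Dockerfile (`RUN find … -exec rm`) cannot remove a file that reappears through the volume mount at runtime. A minimal runtime guard, assuming the pidfile sits in the app working directory:

```shell
# Sketch of a runtime guard (the pidfile path is an assumption -- beat writes
# it into its working directory by default). Deleting a leftover pidfile
# before starting beat avoids the "Pidfile already exists" error after an
# unclean container shutdown.
clean_stale_pidfile() {
    # $1: path to the pidfile; -f makes this a no-op when the file is absent
    rm -f "$1"
}
```

Calling `clean_stale_pidfile app/celerybeat.pid` near the top of `entrypoint.sh`, before `exec`-ing the container's command, runs on every container start rather than once at image build.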
Dockerfile:
FROM python:3.7
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /opt/services/djangoapp/src
COPY scripts/startup/entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
COPY Pipfile Pipfile.lock /opt/services/djangoapp/src/
WORKDIR /opt/services/djangoapp/src
RUN pip install pipenv && pipenv install --system
COPY . /opt/services/djangoapp/src
RUN find . -type f -name "celerybeat.pid" -exec rm -f {} \;
RUN sed -i "s|django.core.urlresolvers|django.urls |g" /usr/local/lib/python3.7/site-packages/vanilla/views.py
RUN cp /usr/local/lib/python3.7/site-packages/celery/backends/async.py /usr/local/lib/python3.7/site-packages/celery/backends/asynchronous.py
RUN rm /usr/local/lib/python3.7/site-packages/celery/backends/async.py
RUN sed -i "s|async|asynchronous|g" /usr/local/lib/python3.7/site-packages/celery/backends/redis.py
RUN sed -i "s|async|asynchronous|g" /usr/local/lib/python3.7/site-packages/celery/backends/rpc.py
RUN cd app && python manage.py collectstatic --no-input
EXPOSE 8000
CMD ["gunicorn", "-c", "config/gunicorn/conf.py", "--bind", ":8000", "--chdir", "app", "example.wsgi:application", "--reload"]
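The four `RUN cp`/`RUN sed` lines patch the installed Celery sources by hand because older Celery releases used `async` (a reserved word in Python 3.7) as a module name. Celery 4.3+ supports Python 3.7 and ships the `async` → `asynchronous` rename itself, so pinning a newer release (a suggestion, assuming nothing else in the project requires the old version) would make those patches unnecessary:

```toml
# Pipfile fragment (suggestion, not from the original project): with
# celery >= 4.3 the sed/cp patches in the Dockerfile can be dropped.
[packages]
celery = ">=4.3"
```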
docker-compose.yml:
version: '3'

services:

  djangoapp:
    build: .
    volumes:
      - .:/opt/services/djangoapp/src
      - static_volume:/opt/services/djangoapp/static  # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media  # <-- bind the media volume
      - static_local_volume:/opt/services/djangoapp/src/app/static
      - media_local_volume:/opt/services/djangoapp/src/app/media
      - .:/code
    restart: always
    networks:
      - nginx_network
      - database1_network  # comment when testing
      # - test_database1_network  # uncomment when testing
      - redis_network
    depends_on:
      - database1  # comment when testing
      # - test_database1  # uncomment when testing
      - migration
      - redis

  # base redis server
  redis:
    image: "redis:alpine"
    restart: always
    ports:
      - "6379:6379"
    networks:
      - redis_network
    volumes:
      - redis_data:/data

  # celery worker
  celery:
    build: .
    command: >
      bash -c "cd app && celery -A example worker --without-gossip --without-mingle --without-heartbeat -Ofair"
    volumes:
      - .:/opt/services/djangoapp/src
      - static_volume:/opt/services/djangoapp/static  # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media  # <-- bind the media volume
      - static_local_volume:/opt/services/djangoapp/src/app/static
      - media_local_volume:/opt/services/djangoapp/src/app/media
    networks:
      - redis_network
      - database1_network  # comment when testing
      # - test_database1_network  # uncomment when testing
    restart: always
    depends_on:
      - database1  # comment when testing
      # - test_database1  # uncomment when testing
      - redis
    links:
      - redis

  celery-beat:
    build: .
    command: >
      bash -c "cd app && celery -A example beat"
    volumes:
      - .:/opt/services/djangoapp/src
      - static_volume:/opt/services/djangoapp/static  # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media  # <-- bind the media volume
      - static_local_volume:/opt/services/djangoapp/src/app/static
      - media_local_volume:/opt/services/djangoapp/src/app/media
    networks:
      - redis_network
      - database1_network  # comment when testing
      # - test_database1_network  # uncomment when testing
    restart: always
    depends_on:
      - database1  # comment when testing
      # - test_database1  # uncomment when testing
      - redis
    links:
      - redis

  # migrations needed for proper db functioning
  migration:
    build: .
    command: >
      bash -c "cd app && python3 manage.py makemigrations && python3 manage.py migrate"
    depends_on:
      - database1  # comment when testing
      # - test_database1  # uncomment when testing
    networks:
      - database1_network  # comment when testing
      # - test_database1_network  # uncomment when testing

  # reverse proxy container (nginx)
  nginx:
    image: nginx:1.13
    ports:
      - 80:80
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static_volume:/opt/services/djangoapp/static  # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media  # <-- bind the media volume
      - static_local_volume:/opt/services/djangoapp/src/app/static
      - media_local_volume:/opt/services/djangoapp/src/app/media
    restart: always
    depends_on:
      - djangoapp
    networks:
      - nginx_network

  database1:  # comment when testing
    image: postgres:10  # comment when testing
    env_file:  # comment when testing
      - config/db/database1_env  # comment when testing
    networks:  # comment when testing
      - database1_network  # comment when testing
    volumes:  # comment when testing
      - database1_volume:/var/lib/postgresql/data  # comment when testing

  # test_database1:  # uncomment when testing
  #   image: postgres:10  # uncomment when testing
  #   env_file:  # uncomment when testing
  #     - config/db/test_database1_env  # uncomment when testing
  #   networks:  # uncomment when testing
  #     - test_database1_network  # uncomment when testing
  #   volumes:  # uncomment when testing
  #     - test_database1_volume:/var/lib/postgresql/data  # uncomment when testing

networks:
  nginx_network:
    driver: bridge
  database1_network:  # comment when testing
    driver: bridge  # comment when testing
  # test_database1_network:  # uncomment when testing
  #   driver: bridge  # uncomment when testing
  redis_network:
    driver: bridge

volumes:
  database1_volume:  # comment when testing
  # test_database1_volume:  # uncomment when testing
  static_volume:  # <-- declare the static volume
  media_volume:  # <-- declare the media volume
  static_local_volume:
  media_local_volume:
  redis_data:
Please ignore `test_database1_volume`, as it exists only for test purposes.
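One way to avoid the stale-pidfile collision (a sketch, not part of the original setup) is to point beat's pidfile at a container-local path such as `/tmp`, so it is never written into the bind-mounted source directory and vanishes with the container:

```yaml
# docker-compose.yml fragment (sketch): keep the pidfile out of the
# bind-mounted source tree by writing it to container-local /tmp.
celery-beat:
  build: .
  command: >
    bash -c "cd app && celery -A example beat --pidfile=/tmp/celerybeat.pid"
```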
Another solution (taken from https://stackoverflow.com/a/17674248/39296) is to use `--pidfile=` (with no path) so that no pidfile is created at all. Same effect as Siyu's answer above.
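Applied to the compose file above, that would look like (a sketch):

```yaml
# docker-compose.yml fragment (sketch): an empty --pidfile= value tells beat
# not to write a pidfile at all, so nothing stale can block the next start.
celery-beat:
  command: >
    bash -c "cd app && celery -A example beat --pidfile="
```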