My goal is to have several containers that communicate with each other via RabbitMQ messages (the RabbitMQ server runs in a separate container).
rabbit.py
import pika


class Rabbit:
    host = 'rabbitmq-host'
    userid = 'test'
    password = 'test'


class Consumer(Rabbit):
    def __init__(self, exchange_name):
        self.exchange_name = exchange_name
        self.connection = None
        try:
            credentials = pika.PlainCredentials(self.userid, self.password)
            params = pika.ConnectionParameters(self.host, 5672, '/', credentials)
            self.connection = pika.BlockingConnection(params)
        except Exception as ex:
            print(ex)
            if self.connection is not None and self.connection.is_open:
                self.connection.close()
            raise ex
        self.channel = self.connection.channel()
The credentials test:test exist; I double-checked.
Then, from another file (main.py), the Consumer is created:
c = Consumer('media')
docker-compose.yml
version: '3'
services:
  rabbitmq-server:
    image: "rabbitmq:3-management"
    hostname: "rabbitmq-host"
    environment:
      RABBITMQ_ERLANG_COOKIE: "SWQOKODSQALRPCLNMEQG"
      RABBITMQ_DEFAULT_USER: "test"
      RABBITMQ_DEFAULT_PASS: "test"
      RABBITMQ_DEFAULT_VHOST: "/"
    ports:
      - "15672:15672"
      - "5672:5672"
    labels:
      NAME: "rabbitmq1"

  info-getter:
    build: ./info-getter
    depends_on:
      - rabbitmq-server
    tty: true
    ports:
      - "3000:3000"
When testing, I run the RabbitMQ server in a container but the app from my local machine, and everything works. However, when I run 'docker-compose up' I get this exception:
info-getter_1 |
info-getter_1 | Traceback (most recent call last):
info-getter_1 | File "main.py", line 10, in <module>
info-getter_1 | c = Consumer('media')
info-getter_1 | File "libs/rabbit.py", line 27, in __init__
info-getter_1 | raise ex
info-getter_1 | File "libs/rabbit.py", line 22, in __init__
info-getter_1 | self.connection = pika.BlockingConnection(params)
info-getter_1 | File "/usr/local/lib/python3.6/site-packages/pika/adapters/blocking_connection.py", line 360, in __init__
info-getter_1 | self._impl = self._create_connection(parameters, _impl_class)
info-getter_1 | File "/usr/local/lib/python3.6/site-packages/pika/adapters/blocking_connection.py", line 451, in _create_connection
info-getter_1 | raise self._reap_last_connection_workflow_error(error)
info-getter_1 | pika.exceptions.AMQPConnectionError
services_info-getter_1 exited with code 1
Where am I going wrong?
I would also like to add that even though I have 'depends_on' specified, after I run 'docker-compose up' the info-getter log appears before the rabbitmq-server log.
Open a terminal, navigate to your rabbitmq-go folder and run docker-compose up. This command will pull the rabbitmq:3-management-alpine image, create the rabbitmq container and start the service and web UI. Once it is running, open your browser and head over to http://localhost:15672.
If you open your Docker engine, you will see the RabbitMQ container created and running. If you open http://localhost:15672/ in a browser, you will be able to access the management UI and log in with the username and password set in docker-compose. You can now see that the RabbitMQ instance is up and running.
The host needs to be the name you defined in docker-compose.yml:
my-worker:
  image: my-worker-image
  restart: always
  depends_on:
    - my-rabbitmq

my-rabbitmq:
  image: rabbitmq:management
  ports:
    - 5672:5672
    - 15672:15672
then in your consumer (python):
connection = pika.BlockingConnection(pika.ConnectionParameters('my-rabbitmq'))
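Adapting this to the question's setup, a minimal sketch might look like the following (assuming the compose file from the question, where the service is named rabbitmq-server and the default credentials are test/test; per this answer, the declared hostname rabbitmq-host should resolve as well):
import pika

# The host must be the Compose service name (or its declared hostname),
# which Docker's embedded DNS resolves inside the Compose network.
credentials = pika.PlainCredentials('test', 'test')
params = pika.ConnectionParameters(host='rabbitmq-server',
                                   port=5672,
                                   virtual_host='/',
                                   credentials=credentials)
connection = pika.BlockingConnection(params)
channel = connection.channel()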
Usually the error message contains additional status - for example:
pika.exceptions.AMQPConnectionError: Connection to :5672 failed: [Errno 111] Connection refused
This error usually occurs because the RabbitMQ workers/clients failed to connect to the RabbitMQ server.
This might happen for a number of reasons - I'll name a few:
1) The IP of the RabbitMQ-server wasn't specified correctly.
It might not have been passed as an environment variable (this is why it's important to add a check for that in the code - see the sketch after this list of reasons).
When working with docker-compose, the IP of the RabbitMQ-server should be replaced with the service DNS name (which is the name of the service in the docker-compose.yml file, or the value of the hostname property if specified).
2) The RabbitMQ workers/clients tried to reach the RabbitMQ-server before it was ready.
Notice that depends_on only expresses a dependency between services and waits for them to start, but not to be ready.
So you can't rely on it by adding:
depends_on:
  - rabbitmq-server
because the RabbitMQ-server service's bootstrap phase takes time.
See the reference below regarding depends_on.
See the solutions I provided for this runtime dependency problem below.
3) There is already a RabbitMQ-server running on the host which uses port 5672.
In this case you'll receive an explicit error when you try to start the RabbitMQ-server service, but from the RabbitMQ worker's perspective it's the same problem.
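For reason 1, a minimal sketch of such a check might look like this (RABBITMQ_HOST is just an illustrative variable name, not something the question's code or compose file defines):
import os
import sys

# Hypothetical variable name; the point is to fail fast with a clear
# message instead of letting pika fail later with a vaguer error.
rabbitmq_host = os.environ.get('RABBITMQ_HOST')
if not rabbitmq_host:
    sys.exit('RABBITMQ_HOST is not set - define it in the environment '
             'section of docker-compose.yml')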
You can solve the runtime dependency problem mentioned in #2 by:
A) Adding retry logic in the client (see the sketch after this list) - consider also using plugins like Shovel and Federation.
B) If the cause of the problem is #2, you can use the restart_policy option and the connection will succeed after a few retries.
C) Use a tool such as wait-for-it, dockerize, or sh-compatible wait-for.
These are small wrapper scripts which you can include in your application's image to poll a given host and port until it's accepting TCP connections. Read more here.
D) Execute docker-compose up rabbitmq-server and only after the service is ready execute the other services.
E) Use a time interval (e.g. sleep 10) in the workers' execution command (I wouldn't recommend this approach).
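For option A, a minimal retry sketch (the attempt count and delay are arbitrary illustrative values; the host and credentials mirror the question's compose file):
import time
import pika

def connect_with_retry(params, attempts=10, delay=3):
    """Try to open a blocking connection, retrying while the broker boots."""
    for attempt in range(1, attempts + 1):
        try:
            return pika.BlockingConnection(params)
        except pika.exceptions.AMQPConnectionError:
            if attempt == attempts:
                raise
            time.sleep(delay)

credentials = pika.PlainCredentials('test', 'test')
params = pika.ConnectionParameters('rabbitmq-server', 5672, '/', credentials)
connection = connect_with_retry(params)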
Regarding depends_on, from the Docker-Compose docs:
There are several things to be aware of when using depends_on:

depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started. If you need to wait for a service to be ready, see Controlling startup order for more on this problem and strategies for solving it.

Version 3 no longer supports the condition form of depends_on.

The depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
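If you prefer option C but don't want to ship an extra shell script, the same "wait until the port accepts TCP connections" idea can be sketched directly in Python (the host and port mirror the question's compose file; the timeout is an arbitrary choice):
import socket
import time

def wait_for_port(host, port, timeout=60.0):
    """Block until a TCP connection to host:port succeeds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            with socket.create_connection((host, port), timeout=2):
                return
        except OSError:
            if time.monotonic() > deadline:
                raise TimeoutError(f'{host}:{port} still not reachable')
            time.sleep(1)

wait_for_port('rabbitmq-server', 5672)
# ...then create the Consumer as usual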