What is the purpose of workers? Are these workers for multi-threading or something else? When the Odoo instance starts, I see at least 6 workers on the command line, like these:
2016-03-10 13:55:09,602 15504 INFO ? openerp.service.server: Worker WorkerHTTP (15504) alive
2016-03-10 13:55:09,606 15503 INFO ? openerp.service.server: Worker WorkerHTTP (15503) alive
2016-03-10 13:55:09,625 15507 INFO ? openerp.service.server: Worker WorkerCron (15507) alive
2016-03-10 13:55:09,628 15506 INFO ? openerp.service.server: Worker WorkerCron (15506) alive
2016-03-10 13:55:09,629 15508 INFO ? openerp.service.server: Worker WorkerCron (15508) alive
2016-03-10 13:55:09,629 15509 INFO ? openerp.service.server: Worker WorkerCron (15509) alive
And what is the difference between WorkerHTTP and WorkerCron? Honestly, I don't know what they do.
It is explained in the Odoo documentation:
Odoo includes built-in HTTP servers, using either multithreading or multiprocessing.
For production use, it is recommended to use the multiprocessing server as it increases stability, makes somewhat better use of computing resources and can be better monitored and resource-restricted.
Multiprocessing is enabled by configuring a non-zero number of worker processes (`--workers`). The number of workers should be based on the number of cores in the machine (possibly with some room for cron workers, depending on how much cron work is predicted). Worker limits can be configured based on the hardware configuration to avoid resource exhaustion.

Warning: multiprocessing mode currently isn't available on Windows.
You should use 2 worker threads + 1 cron thread per available CPU, and 1 CPU per 10 concurrent users. Make sure you tune the memory limits and CPU limits in your configuration file.
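The rule of thumb above can be turned into a tiny calculation. This is a rough sketch, not an official Odoo tool; the function name and the exact heuristics (2 HTTP workers and 1 cron worker per core, ~10 concurrent users per core) are my own reading of the guideline, and real workloads may need different numbers.

```python
def suggested_workers(cpu_cores: int) -> dict:
    """Rough Odoo worker sizing: 2 HTTP workers + 1 cron worker per core,
    and roughly 10 concurrent users supported per core."""
    return {
        "workers": cpu_cores * 2,           # HTTP worker processes (--workers)
        "max_cron_threads": cpu_cores,      # cron workers (--max-cron-threads)
        "max_concurrent_users": cpu_cores * 10,
    }

# Example: a 4-core machine
print(suggested_workers(4))
```

On a 4-core machine this suggests 8 HTTP workers and 4 cron workers, serving roughly 40 concurrent users.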
--workers <count>
If count is not 0 (the default), enables multiprocessing and sets up the specified number of HTTP workers (sub-processes processing HTTP and RPC requests).
A number of options allow limiting and recycling workers:
--limit-request <limit>
Number of requests a worker will process before being recycled and restarted. Defaults to 8196.
--limit-memory-soft <limit>
Maximum allowed virtual memory per worker. If the limit is exceeded, the worker is killed and recycled at the end of the current request. Defaults to 640MB.
--limit-memory-hard <limit>
Hard limit on virtual memory, any worker exceeding the limit will be immediately killed without waiting for the end of the current request processing. Defaults to 768MB.
--limit-time-cpu <limit>
Prevents the worker from using more than <limit> CPU seconds for each request. If the limit is exceeded, the worker is killed. Defaults to 60.
--limit-time-real <limit>
Prevents the worker from taking longer than <limit> seconds to process a request. If the limit is exceeded, the worker is killed. Defaults to 120.
Differs from --limit-time-cpu
in that this is a "wall time" limit including e.g. SQL queries.
--max-cron-threads <count>
Number of workers dedicated to cron jobs. Defaults to 2. The workers are threads in multithreading mode and processes in multiprocessing mode.
For multiprocessing mode, this is in addition to the HTTP worker processes.
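Putting the options above together, a minimal configuration-file equivalent might look like this. The section name and key names match the command-line options (each `--limit-*` flag maps to a `limit_*` key); the byte values shown are simply the documented defaults converted (640MB = 671088640 bytes, 768MB = 805306368 bytes), and you should size them for your own hardware.

```
[options]
workers = 5
max_cron_threads = 2
limit_request = 8196
# 640MB soft limit, in bytes
limit_memory_soft = 671088640
# 768MB hard limit, in bytes
limit_memory_hard = 805306368
limit_time_cpu = 60
limit_time_real = 120
```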
More info about the Deployment Architecture, with some diagrams, and more information about the configuration file, can be found in the Odoo documentation.
I am adding here the information from the link @prakah posted in the comment above:
Heading | Description
------------------ | ---------------------------------------------------------
CPUs | Number of CPU Cores not threads
Physical | Physical memory, not virtual or swap
workers | Number of workers specified in config file (workers = x)
cron | Number of workers for cron jobs (max_cron_threads = xx)
Mem Per | Maximum memory in MB allowed per worker
Max Mem | Maximum amount of memory that can be used by all workers combined
limit_memory_soft | Value in bytes to use for the limit_memory_soft setting
Note: if you notice that Max Mem is less than total memory, this is on purpose. As workers process requests, they can grow beyond the Mem Per limit, so a server under heavy load could go past this amount. This is why there is "head room" built in.
CPUs | Physical | workers | cron | Mem Per | Max Mem | limit_memory_soft
---- | -------- | ------- | ---- | ------- | ------- | -----------------------
ANY | <= 256MB | NR | NR | NR | NR | NR
1 | 512MB | 0 | N/A | N/A | N/A | N/A
1 | 512MB | 1 | 1 | 177MB | 354MB | 185127901
1 | 1GB | 2 | 1 | 244MB | 732MB | 255652815
1 | 2GB | 2 | 1 | 506MB | 1518MB | 530242876
2 | 1GB | 3 | 1 | 183MB | 732MB | 191739611
2 | 2GB | 5 | 2 | 217MB | 1519MB | 227246947
2 | 4GB | 5 | 2 | 450MB | 3150MB | 471974428
4 | 2GB | 5 | 2 | 217MB | 1519MB | 227246947
4 | 4GB | 9 | 2 | 286MB | 3146MB | 300347363
4 | 8GB | 9 | 3 | 546MB | 6552MB | 572662306
4 | 16GB | 9 | 3 | 1187MB | 14244MB | 1244918057
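The table's columns are related in a simple way: "Max Mem" is "Mem Per" multiplied by the total number of worker processes (HTTP workers plus cron workers). The sketch below checks that relationship and also converts MB to bytes for the limit_memory_soft setting; note the byte figures in the table were apparently rounded differently, so a plain conversion gives similar but not identical numbers. The helper names are mine, not Odoo's.

```python
def max_mem_mb(mem_per_mb: int, workers: int, cron: int) -> int:
    """Total memory budget across all worker processes, in MB."""
    return mem_per_mb * (workers + cron)

def mb_to_bytes(mb: int) -> int:
    """Convert MB to bytes, the unit limit_memory_soft expects."""
    return mb * 1024 * 1024

# Example: the "2 CPUs / 4GB" row: 450MB per worker, 5 workers + 2 cron
print(max_mem_mb(450, 5, 2))   # matches the Max Mem column: 3150
```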
As you might know, the GIL prevents Python from doing any real (CPU-parallel) threading. To better understand workers, let's see what would happen without them: with no workers enabled, your Odoo instance only uses one core of the hosting machine, so once the number of clients goes beyond one, performance goes downhill, since each new client needs to wait its turn to use Odoo's resources.

A production server normally has multiple cores, hence the need to scale Odoo to the machine's resources; simply put, running workers is roughly equivalent to launching multiple instances of Odoo on the same machine.