In the documentation I see the following:
There is only one limiting factor regarding scaling in Flask which are the context local proxies. They depend on context which in Flask is defined as being either a thread, process or greenlet. If your server uses some kind of concurrency that is not based on threads or greenlets, Flask will no longer be able to support these global proxies. However the majority of servers are using either threads, greenlets or separate processes to achieve concurrency which are all methods well supported by the underlying Werkzeug library.
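To make the quoted passage more concrete: Flask's request, g, and similar proxies are built on Werkzeug's context locals, which store a separate value per thread, greenlet, or process. Below is a minimal sketch (assuming Werkzeug is installed; the names state and handle are only illustrative) showing that each thread sees its own value on one shared "global" object:

    # Minimal illustration of a context local (the mechanism behind Flask's
    # request/g proxies). Each thread sees its own value on the shared object.
    from threading import Thread
    from werkzeug.local import Local

    state = Local()  # one module-level object, per-context storage underneath

    def handle(request_id):
        state.request_id = request_id  # visible only within this thread
        print(f"this thread sees request_id={state.request_id}")

    threads = [Thread(target=handle, args=(i,)) for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()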
My question: what other concurrency mechanisms are there besides these three?
Flask is also fairly scalable in that it can process a large number of requests each day. The micro-framework modularizes the code and lets developers work on independent chunks, reusing them as the code base grows. Compared to Django, however, Flask's built-in support for scaling a large application is more limited.
Multitasking, the ability to execute multiple tasks or processes (almost) at the same time, improves performance with both blocking and non-blocking web servers. Frameworks such as Flask, Django, and Tornado can all handle multiple requests concurrently when run on a suitable server.
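As a small illustrative example (not taken from the question), even Flask's built-in development server can serve requests concurrently using threads, one of the models the docs describe as well supported:

    # Hypothetical minimal app; threaded=True tells the built-in Werkzeug
    # server to handle each request in its own thread. In production the
    # same effect usually comes from a WSGI server (e.g. gunicorn) running
    # multiple worker processes and/or threads.
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "hello"

    if __name__ == "__main__":
        app.run(threaded=True)  # or processes=N for process-based concurrency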
For reference, the Flask benchmarks on TechEmpower report around 25,000 requests per second.
One pretty interesting concurrency mechanism is the asynchronous model. You have a single process with a single thread running the whole show, with all the I/O or otherwise lengthy tasks being asynchronous and callback-based. This method scales really well for I/O-bound services; servers in this category easily handle the C10K problem.
See Tornado or Node.js for examples.
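For a rough feel of why this model scales so well for I/O-bound work, here is a small single-threaded sketch using Python's asyncio (the names and numbers are illustrative, not from the answer above): ten thousand simulated slow requests finish in about one second because their waits overlap on one event loop.

    import asyncio

    async def handle(request_id):
        # Simulate a slow I/O operation (database call, upstream HTTP request, ...)
        await asyncio.sleep(1)
        return f"response {request_id}"

    async def main():
        # 10,000 concurrent "requests" handled by a single thread; total wall
        # time is roughly 1 second because the waits overlap rather than stack.
        results = await asyncio.gather(*(handle(i) for i in range(10_000)))
        print(len(results), "requests handled")

    asyncio.run(main())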