Where is the bottleneck in these 10 requests per second with Python Bottle + Javascript fetch?

I am sending 10 HTTP requests per second from a browser client (JavaScript `fetch`) to a Python Bottle server:

import bottle

app = bottle.Bottle()

@app.route('/')
def index():
    return """<script>
var i = 0;
setInterval(() => {
    i += 1;
    let i2 = i;
    console.log("sending request", i2);
    fetch("/data")
        .then((r) => r.text())
        .then((arr) => {
            console.log("finished processing", i2);
        });
}, 100);
</script>"""

@app.route('/data')
def data():
    return "abcd"

app.run(port=80)

The result is rather poor:

sending request 1
sending request 2
sending request 3
sending request 4
finished processing 1
sending request 5
sending request 6
sending request 7
finished processing 2
sending request 8
sending request 9
sending request 10
finished processing 3
sending request 11
sending request 12

Why does it fail to keep up with 10 requests per second (on an average i5 machine)? Is there a known bottleneck in my code?

Where are the 100 ms lost per request that prevent the program from keeping a steady pace like the following?

sending request 1
finished processing 1
sending request 2
finished processing 2
sending request 3
finished processing 3

Notes:

  • Tested with Flask instead of Bottle; the problem is similar.

  • Is there a simple way to get this working:

    • without monkey-patching the Python stdlib (with from gevent import monkey; monkey.patch_all()),

    • and without a much more complex setup with Gunicorn or similar (not easy at all on Windows)?
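One stdlib-only option (my own sketch, not something from this thread) is to serve the app with a threaded wsgiref server, so that slow requests no longer queue behind each other. The `wsgi_app` function below is a hypothetical stand-in for the `bottle.Bottle()` app, which is itself a plain WSGI callable and can be passed in its place:

```python
import threading
import urllib.request
from socketserver import ThreadingMixIn
from wsgiref.simple_server import WSGIServer, WSGIRequestHandler, make_server

def wsgi_app(environ, start_response):
    # Hypothetical stand-in for the Bottle app: a bottle.Bottle()
    # instance is itself a WSGI callable and can be passed instead.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"abcd"]

class ThreadingWSGIServer(ThreadingMixIn, WSGIServer):
    # wsgiref's stock WSGIServer handles one request at a time;
    # ThreadingMixIn spawns a thread per request instead.
    daemon_threads = True

class QuietHandler(WSGIRequestHandler):
    def log_request(self, *args, **kwargs):
        pass  # suppress the default per-request console log line

def serve(app, port=8080):
    # Bind the socket, then run the accept loop in a background thread.
    httpd = make_server("localhost", port, app,
                        server_class=ThreadingWSGIServer,
                        handler_class=QuietHandler)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    return httpd

if __name__ == "__main__":
    server = serve(wsgi_app, port=8080)
    body = urllib.request.urlopen("http://localhost:8080/data").read()
    print(body)  # b'abcd'
    server.shutdown()
```

Whether this removes the full 100 ms per request depends on where the time actually goes, but it does lift the one-request-at-a-time restriction without gevent or Gunicorn.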

Asked Oct 11 '25 by Basj

1 Answer

Are you absolutely tied to Flask/Bottle? You can pretty easily get this working out of the box with a FastAPI server.

The nice thing with FastAPI is that everything stays single-threaded, with asyncio providing the concurrency. No monkey patching or other odd gevent behavior required. IMO that makes life a lot easier.
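To illustrate the single-threaded asyncio point, here is a small stdlib-only sketch (my addition, not part of the original answer): ten handlers that each await 100 ms of simulated I/O complete in roughly 100 ms total, because they overlap on one thread instead of queuing.

```python
import asyncio
import time

async def handle(i):
    # Stand-in for an async request handler; the sleep simulates
    # 100 ms of I/O (network, disk, ...).
    await asyncio.sleep(0.1)
    return i

async def main():
    t0 = time.perf_counter()
    # All ten handlers run concurrently on a single thread.
    results = await asyncio.gather(*(handle(i) for i in range(10)))
    elapsed = time.perf_counter() - t0
    return results, elapsed

if __name__ == "__main__":
    results, elapsed = asyncio.run(main())
    print(results, f"{elapsed:.2f}s")  # total is ~0.1 s, not 10 * 0.1 s
```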

I added some timestamps to show that it's sending roughly 10 requests per second.

from fastapi import FastAPI
from fastapi.responses import HTMLResponse
import uvicorn

app = FastAPI()

@app.get("/", response_class=HTMLResponse)
async def index():
    return """<script>
var i = 0;
const start = Date.now();
setInterval(() => {
    const startOffset = Date.now() - start;
    i += 1;
    let i2 = i;
    console.log(`${startOffset}: sending request`, i2);
    fetch("/data")
        .then((r) => r.text())
        .then((arr) => {
            const duration = Date.now() - start - startOffset;
            console.log(`finished processing ${i2} in ${duration}ms`);
        });
}, 100);
</script>"""

@app.get("/data")
async def data():
    return "abcd"

if __name__ == "__main__":
    uvicorn.run(app)

Console output:

106: sending request 1
finished processing 1 in 22ms
208: sending request 2
finished processing 2 in 18ms
315: sending request 3
finished processing 3 in 20ms
420: sending request 4
finished processing 4 in 8ms
524: sending request 5
finished processing 5 in 27ms
624: sending request 6
finished processing 6 in 10ms
729: sending request 7
finished processing 7 in 39ms
831: sending request 8
finished processing 8 in 37ms
932: sending request 9
finished processing 9 in 12ms
1037: sending request 10
finished processing 10 in 7ms
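For comparison, the serialized completions in the original Bottle log can be reproduced with a toy model. The ~300 ms per-request service time is an assumption read off the question's log, not a measured value: with a blocking, single-threaded server, request n completes at roughly n × 300 ms, so by the time request 10 is sent at t ≈ 1 s only 3 have finished, which is exactly the pattern in the question.

```python
# Toy model of a blocking, single-threaded server: requests arrive every
# 100 ms but are handled strictly one at a time, each taking ~300 ms
# (an assumed figure read off the log in the question).

def finished_by(t_seconds, service_time=0.3):
    # With serial handling, request n completes at n * service_time,
    # so the count finished by time t is floor(t / service_time).
    return int(t_seconds // service_time)

# By the time request 10 is sent (t ~= 1.0 s), only 3 responses are in,
# matching "sending request 10" arriving just after "finished processing 3".
print(finished_by(1.0))  # 3
```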
Answered Oct 14 '25 by flakes

