Send intermittent status of request before sending actual response

I have a server which takes a few minutes to process a specific request and then responds to it.

The client has to keep waiting for the response without knowing when it will complete.

Is there a way to let the client know the processing status (say, 50% completed, 80% completed) without the client having to poll for it?

asked Jul 26 '17 by Thirupathi Thangavel

3 Answers

Without using any of the newer techniques (WebSockets, Web Push, HTTP/2, ...), I've previously used a simplified Pushlet or long-polling solution over HTTP/1.1, with various JavaScript or custom client implementations. If my solution doesn't fit your use case, you can always search for those two names for further possible approaches.

The client sends a request, reads 17 bytes (the initial HTTP response) and then reads 2 bytes at a time to get the processing status.

The server sends a valid HTTP response and, while the request is being processed, sends the percentage completed 2 bytes at a time, until the last 2 bytes are "ok", then closes the connection.

UPDATED: Example uwsgi server.py

from time import sleep

def application(env, start_response):
    start_response('200 OK', [])

    def working():
        yield b'00'  # 0% done
        sleep(1)     # the actual work happens here
        yield b'36'  # 36% done
        sleep(1)     # more work
        yield b'ok'  # done; the client treats "ok" as completion
    return working()

UPDATED: Example requests client.py

import requests

# stream=True keeps the response open so chunks can be read as they arrive.
response = requests.get('http://localhost:8080/', stream=True)
# Read the 2-byte progress updates as the server sends them.
for r in response.iter_content(chunk_size=2):
    print(r)

Example server (only use for testing :)

import socket
from time import sleep

HOST, PORT = '', 8888

listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listen_socket.bind((HOST, PORT))
listen_socket.listen(1)

while True:
    client_connection, client_address = listen_socket.accept()
    request = client_connection.recv(1024)
    # Minimal (non-compliant) initial response: 17 bytes in total.
    client_connection.send(b'HTTP/1.1 200 OK\n\n')
    client_connection.send(b'00')  # 0%
    sleep(2)  # Your work is done here
    client_connection.send(b'36')  # 36%
    sleep(2)  # Your work is done here
    client_connection.sendall(b'ok')  # done
    client_connection.close()

If the last 2 bytes aren't "ok", handle the error some other way. This isn't beautiful HTTP status-code compliance, but more of a workaround that did work for me many years ago.

Telnet client example

$ telnet localhost 8888
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GET / HTTP/1.1
HTTP/1.1 200 OK

0036okConnection closed by foreign host.
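
For completeness, here is a rough Python socket client matching the test server above. It is only a sketch, assuming the 17-byte initial response and the 2-byte progress chunks described earlier; a real client would loop until each read is complete.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('localhost', 8888))
sock.sendall(b'GET / HTTP/1.1\n\n')

# Skip the 17-byte initial response ('HTTP/1.1 200 OK\n\n').
sock.recv(17)

# Read 2-byte progress updates until the server signals completion with "ok".
while True:
    chunk = sock.recv(2)
    if not chunk or chunk == b'ok':
        print('done')
        break
    print('progress: {}%'.format(chunk.decode()))

sock.close()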
answered Oct 11 '22 by D.Nibon


This answer probably won’t help in your particular case, but it might help in other cases.

The HTTP protocol supports informational (1xx) responses:

indicates an interim response for communicating connection status or request progress prior to completing the requested action and sending a final response

There is even a status code precisely for your use case, 102 (Processing):

interim response used to inform the client that the server has accepted the complete request, but has not yet completed it

Status code 102 was removed from further editions of that standard due to lack of implementations, but it is still registered and could be used.

So, it might look like this (HTTP/2 has an equivalent binary form):

HTTP/1.1 102 Processing
Progress: 50%

HTTP/1.1 102 Processing
Progress: 80%

HTTP/1.1 200 OK
Date: Sat, 05 Aug 2017 11:53:14 GMT
Content-Type: text/plain

All done!

Unfortunately, this is not widely supported. In particular, WSGI does not provide a way to send arbitrary 1xx responses. Clients support 1xx responses in the sense that they are required to parse and tolerate them, but they usually don’t give programmatic access to them: in this example, the Progress header would not be available to the client application.

However, 1xx responses may still be useful (if the server can send them) because they have the effect of resetting the client’s socket read timeout, which is one of the main problems with slow responses.
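
Since WSGI can't emit interim 1xx responses, actually sending them generally means working at the socket level. The following is only a rough sketch of what the exchange above could look like from the server side; the timing and progress values are made up, and "Progress" is a non-standard header that most clients will simply skip.

import socket
from time import sleep

HOST, PORT = '', 8888

listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listen_socket.bind((HOST, PORT))
listen_socket.listen(1)

while True:
    conn, addr = listen_socket.accept()
    conn.recv(1024)  # read (and ignore) the request for this sketch

    # Interim 102 responses while the work is running.
    conn.sendall(b'HTTP/1.1 102 Processing\r\nProgress: 50%\r\n\r\n')
    sleep(2)  # work happens here
    conn.sendall(b'HTTP/1.1 102 Processing\r\nProgress: 80%\r\n\r\n')
    sleep(2)  # more work

    # Final response.
    body = b'All done!'
    conn.sendall(b'HTTP/1.1 200 OK\r\n'
                 b'Content-Type: text/plain\r\n'
                 b'Content-Length: ' + str(len(body)).encode() + b'\r\n'
                 b'\r\n' + body)
    conn.close()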

answered Oct 11 '22 by Vasiliy Faronov


Use Chunked Transfer Encoding, which is a standard technique to transmit streams of unknown length.

See: Wikipedia - Chunked Transfer Encoding

Here is a Python server implementation, available as a gist on GitHub:

  • https://gist.github.com/josiahcarlson/3250376

It sends content with chunked transfer encoding using only standard-library modules.

On the client side, if the server has announced chunked transfer encoding, you'd only need to do:

import requests

response = requests.get('http://server.fqdn:port/', stream=True)
# Each iteration yields one chunk as the server sends it.
for r in response.iter_content(chunk_size=None):
    print(r)

chunk_size=None is used because the chunk sizes are dynamic and are determined by the length headers that the chunked transfer encoding itself provides.

See: http://docs.python-requests.org/en/master/user/advanced/#chunk-encoded-requests

When you see, for example, 100 in the content of a chunk r, you know processing has finished and the next chunk will be the actual content.
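
As a rough sketch of the server side (this is not the gist above, just an illustration using only the standard library), a handler could stream progress values as individual chunks and send the real content as the last chunk. The class name, port and progress values here are made up:

from http.server import BaseHTTPRequestHandler, HTTPServer
from time import sleep

class ProgressHandler(BaseHTTPRequestHandler):
    protocol_version = 'HTTP/1.1'  # chunked encoding requires HTTP/1.1

    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.send_header('Transfer-Encoding', 'chunked')
        self.end_headers()

        def chunk(data):
            # Each chunk is framed as: <hex length>\r\n<data>\r\n
            self.wfile.write(b'%x\r\n%s\r\n' % (len(data), data))

        for progress in (b'25', b'50', b'100'):
            chunk(progress)              # progress updates as individual chunks
            sleep(1)                     # the actual work would happen here
        chunk(b'final result here')      # the real content once processing is done
        self.wfile.write(b'0\r\n\r\n')   # zero-length chunk terminates the body

HTTPServer(('', 8080), ProgressHandler).serve_forever()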

answered Oct 11 '22 by mementum