 

Asynchronous HTTP requests in PHP

Is there any sane way to make an HTTP request asynchronously in PHP without throwing out the response? I.e., something similar to AJAX - the PHP script initiates the request, does its own thing, and later, when the response is received, a callback function/method or another script handles the response.

One approach has crossed my mind: spawning a new PHP process with another script for each request. The second script performs the request, waits for the response, then parses the data and does whatever it should, while the original script goes on spawning new processes. I have doubts, though, about performance in this case - there must be some penalty for creating a new process every time.

pilsetnieks asked Aug 06 '09


People also ask

What are asynchronous http requests?

Asynchronous HTTP Request Processing is a relatively new technique that allows you to process a single HTTP request using non-blocking I/O and, if desired, in separate threads. Some refer to it as COMET capabilities.

Is PHP synchronous or asynchronous?

PHP was originally created to support synchronous development, so most PHP developers are used to writing only synchronous code with the language.

Does PHP support asynchronous?

PHP has no built-in support for asynchronous calls. You can make pseudo-asynchronous calls using curl.

Which method is used to make an asynchronous HTTP request?

Ajax. Ajax is the traditional way to make an asynchronous HTTP request. Data can be sent using the HTTP POST method and retrieved using the HTTP GET method.


1 Answer

Yes, depending on the traffic of your site, spawning a separate PHP process for running a script could be devastating. It would be more efficient to use shell_exec() to start a background process that saves the output to a filename you already know, but even this could be resource intensive.
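A minimal sketch of the background-process idea, assuming a hypothetical worker script: "worker.php" and the output path are placeholder names, and the worker is presumed to perform the HTTP request and write the response to the file whose name you already know.

```php
<?php
// Build the shell command that launches the worker detached. The
// trailing "&" makes the shell return immediately, so shell_exec()
// does not block on the HTTP request itself.
function buildBackgroundCommand(string $url, string $outputFile): string
{
    return sprintf(
        'php worker.php %s > %s 2>&1 &',
        escapeshellarg($url),      // quote the URL for the shell
        escapeshellarg($outputFile) // quote the output path
    );
}

// Usage (fire-and-forget; check the output file later):
// shell_exec(buildBackgroundCommand('http://example.com/api', '/tmp/resp.json'));
```

Separating the command construction from the `shell_exec()` call keeps the quoting logic easy to inspect; `escapeshellarg()` is what prevents a crafted URL from injecting shell commands.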

You could also have a request queue stored in a database. A single, separate background process would pull the job, execute it, and save the output, possibly setting a flag in the DB that your web process could check.
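A sketch of that queue, under assumptions: the answer implies a real database such as MySQL, but the table name `request_queue` and its columns (`id`, `url`, `status`, `response`) are illustrative, and the HTTP fetch is passed in as a callable so the worker logic stays self-contained.

```php
<?php
// Pull one pending job, execute it, save the output, and flip the
// status flag that the web process can poll. Returns false when the
// queue is empty.
function processNextJob(PDO $pdo, callable $fetch): bool
{
    $row = $pdo->query(
        "SELECT id, url FROM request_queue WHERE status = 'pending' LIMIT 1"
    )->fetch(PDO::FETCH_ASSOC);

    if ($row === false) {
        return false; // nothing queued
    }

    // The blocking wait happens here, in the background process,
    // not in the web request.
    $body = $fetch($row['url']); // e.g. file_get_contents() or a curl call

    $stmt = $pdo->prepare(
        "UPDATE request_queue SET status = 'done', response = ? WHERE id = ?"
    );
    $stmt->execute([$body, $row['id']]);
    return true;
}
```

The web process enqueues work by inserting a `'pending'` row and later checks whether its row has flipped to `'done'` before reading the stored response.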

If you're going to use the DB queue approach, use the curl_multi_* family of functions to send all queued requests at once. This will limit the execution time of each iteration in your background process to the longest request time.
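A sketch of draining the queue with the curl_multi_* functions: all handles run concurrently, so the batch takes roughly as long as the slowest single request rather than the sum of all of them. The URLs would come from whatever your queue contains.

```php
<?php
// Fetch a batch of URLs concurrently; returns responses keyed the same
// way as the input array.
function fetchAll(array $urls): array
{
    $mh = curl_multi_init();
    $handles = [];

    foreach ($urls as $key => $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture body instead of printing it
        curl_multi_add_handle($mh, $ch);
        $handles[$key] = $ch;
    }

    // Drive all transfers until every handle has finished.
    do {
        $status = curl_multi_exec($mh, $running);
        if ($running) {
            curl_multi_select($mh); // sleep until any handle has activity
        }
    } while ($running && $status === CURLM_OK);

    $responses = [];
    foreach ($handles as $key => $ch) {
        $responses[$key] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);

    return $responses;
}
```

The `curl_multi_select()` call is what keeps the loop from busy-waiting: the process blocks until at least one transfer has data, instead of spinning on `curl_multi_exec()`.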

Lucas Oman answered Oct 15 '22