 

curl_exec maximum execution time - what is causing it?

Tags: php, curl, xml

I get the dreaded message:

Fatal error: Maximum execution time of 90 seconds exceeded in /home/pricing.php on line 239

the code is:

$url = "http://*******.com/feed?f=PR&categories=$cat_id&limit=100&startproducts=$ii&price_min=0.01&sortproducts=score&show=properties";

$c = curl_init($url); 
curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($c, CURLOPT_HEADER, 0);
curl_setopt($c, CURLOPT_USERPWD, "****:****");
$xml = simplexml_load_string(curl_exec($c)); // line 239

The simplexml_load_string call is on line 239. Surely this means the feed has already been fetched and the string is just being loaded? Surely that can't take more than 90 seconds?

My questions are:

1 - what could/would cause this?

2 - is it safe to increase the php_value max_execution_time above 90 secs and what's considered a safe maximum?

3 - is there a better/faster/more stable way to bring down the feed than using curl?

Thanks for all help!

asked Dec 04 '11 by StudioTime

2 Answers

A1: Yes, you already have the answer in your 2nd question: it's caused by PHP's max_execution_time.

A2: It's not safe unless you are hosting it locally and you know what your script is doing. I think the common practice is either 30 sec or at most 300 sec (5 min).
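For reference, if you do decide to raise it, it can be set in a few places; the 300 below is just an example value, not a recommendation:

// per-script, in PHP code (resets the timer from the point it is called):
set_time_limit(300);
// or:
ini_set('max_execution_time', '300');

// per-directory, in .htaccess (mod_php):
// php_value max_execution_time 300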

A3: If you are dealing with curl, I'd suggest calling set_time_limit(0) at the very top of your PHP script (the code-level equivalent of max_execution_time in php.ini), and using curl's own timeout option to handle the timeout:

curl_setopt($curl, CURLOPT_TIMEOUT_MS, 2000); // in milliseconds
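Putting that together with the question's code, a minimal sketch of the pattern (the 5s/30s values are just example choices, and the error handling is illustrative):

set_time_limit(0); // disable PHP's script-level limit; curl enforces the timeout instead

$c = curl_init($url);
curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($c, CURLOPT_HEADER, 0);
curl_setopt($c, CURLOPT_USERPWD, "****:****");
curl_setopt($c, CURLOPT_CONNECTTIMEOUT_MS, 5000); // fail if no connection within 5s
curl_setopt($c, CURLOPT_TIMEOUT_MS, 30000);       // fail if the whole transfer exceeds 30s

$response = curl_exec($c);
if ($response === false) {
    // a slow feed now surfaces here as a curl error instead of a fatal PHP error
    error_log('Feed fetch failed: ' . curl_error($c));
} else {
    $xml = simplexml_load_string($response);
}
curl_close($c);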
answered Nov 03 '22 by You Qi


  1. The curl call to the external feed is extremely slow.

  2. It's OK to increase max_execution_time to a higher value, but it's not recommended. If the script serves one of your normal web pages, think twice: no user wants to wait more than 90 seconds for a page to load.

  3. Cache it!

Details of "cache it!":

It's not complicated. Prepare a list of the feed URLs, then run a background job (cronjob) to grab each feed URL and store the result in local storage. Once the local XML is available, load it from there.

So it's the reverse of on-demand access: get the XML ready before any user accesses the page. The difficulty is that there may be lots of different feed URLs to grab; curl_multi_exec should be ideal for grabbing multiple URLs at the same time.
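A rough sketch of that cronjob, assuming a hypothetical list of feed URLs and cache directory (/path/to/cache is a placeholder):

// fetch_feeds.php - run from cron, not from a web request
$urls = array(/* ... your feed URLs ... */);

$mh = curl_multi_init();
$handles = array();
foreach ($urls as $i => $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_USERPWD, "****:****");
    curl_setopt($ch, CURLOPT_TIMEOUT, 60); // per-handle timeout in seconds
    curl_multi_add_handle($mh, $ch);
    $handles[$i] = $ch;
}

// drive all transfers in parallel until they finish
$running = null;
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh); // wait for activity instead of busy-looping
} while ($running > 0);

// write each feed to local storage; the page reads these files instead of curling
foreach ($handles as $i => $ch) {
    $body = curl_multi_getcontent($ch);
    if ($body !== null && $body !== '') {
        file_put_contents("/path/to/cache/feed_$i.xml", $body);
    }
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);

The page itself then just calls simplexml_load_file() on the cached file, which is effectively instant compared to a live curl call.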

answered Nov 03 '22 by ajreal