 

NodeJS async queue too fast (Slowing down async queue method)

I have an HTTP GET request and I want to parse the response and save it to my database.

If I call crawl(i) independently I get good results. But I have to call crawl() from 1 to 2000. I get good results, but some responses seem to get lost and some responses are duplicated. I don't think I understand how to call thousands of asynchronous functions. I am using the async module's queue function, but so far I am still missing some data and still have some duplicates. What am I doing wrong here? Thanks for your help.

What I am crawling

My Node functions:

function getOptions(i) {
    return {
        host: 'magicseaweed.com',
        path: '/syndicate/rss/index.php?id=' + i + '&unit=uk',
        method: 'GET'
    };
}

function crawl(i) {
    var req = http.request(getOptions(i), function (res) {
        res.on('data', function (body) {
            parseLocation(body);
        });
    });
    req.end();
}

function parseLocation(body) {
    parser.parseString(body, function (err, result) {
        if (result && typeof result.rss != 'undefined') {
            var locationTitle = result.rss.channel[0].title;
            var locationString = result.rss.channel[0].item[0].link[0];
            var location = new Location({
                id: locationString.split('/')[2],
                name: locationTitle
            });
            location.save();
        }
    });
}

var N = 2; // number of simultaneous tasks
var q = async.queue(function (task, callback) {
    crawl(task.url);
    callback();
}, N);


q.drain = function() {
    console.log('Crawling done.');
}

for(var i = 0; i < 100; i++){
   q.push({url: 'http://magicseaweed.com/syndicate/rss/index.php?id='+i+'&unit=uk'});
}

[EDIT] Well, after a lot of testing it seems that the service I am crawling cannot handle so many requests that fast. When I make each request sequentially, I get all the good responses.

Is there a way to SLOW DOWN the async queue method?

asked Jan 16 '13 by William Fortin


2 Answers

You should have a look at the async module, which simplifies async tasks like this. You can use its queue; simple example:

var N = 10; // number of simultaneous tasks, e.g. 5-20
var q = async.queue(function (task, callback) {
    somehttprequestfunction(task.url, function () {
        callback();
    });
}, N);


q.drain = function() {
    console.log('all items have been processed');
}

for(var i = 0; i < 2000; i++){
   q.push({url:"http://somewebsite.com/"+i+"/feed/"});
}

It keeps a window of ongoing tasks, and a slot only becomes available for the next task when you invoke the callback. The difference is that your current code opens 2000 connections immediately, so the failure rate is obviously high. Limiting concurrency to a reasonable value (5, 10, or 20, depending on the site and your connection) will give a better success rate. If a request fails, you can always try it again, or push the task onto another async queue for another trial. The key point is to invoke callback() in the queue's worker function, so that a slot becomes available when the task is done.
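For example, here is a sketch of a worker that re-pushes a failed task for another trial and waits briefly before freeing its slot; the retry limit, the 200 ms delay, and the error-first callback of somehttprequestfunction are illustrative assumptions, not part of the original code:

var async = require('async');

var N = 5; // concurrency; tune for the target site and your connection

var q = async.queue(function (task, callback) {
    // assumes somehttprequestfunction calls back with an error on failure
    somehttprequestfunction(task.url, function (err) {
        if (err && task.retries < 3) {
            // the request failed: put the task back in the queue for another trial
            task.retries++;
            q.push(task);
        }
        // wait a little before freeing the slot, to go easy on the server
        setTimeout(callback, 200);
    });
}, N);

q.drain = function () {
    console.log('all items have been processed');
};

for (var i = 0; i < 2000; i++) {
    q.push({ url: 'http://somewebsite.com/' + i + '/feed/', retries: 0 });
}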

answered Oct 04 '22 by Mustafa


var q = async.queue(function (task, callback) {
    crawl(task.url);
    callback();
}, N);

You're executing the next task immediately after starting the previous one, so the queue is meaningless this way. You should modify your code like this:

// first, modify your 'crawl' function to take a callback argument,
// and call this callback after the job is done.

// then
var q = async.queue(function (task, next /* naming this argument 'next' is more meaningful */) {
    crawl(task.url, function () {
        // after this one is done, start the next one.
        next();
    });
    // or, more simply: crawl(task.url, next);
}, N);
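For illustration, the callback-taking crawl could look something like the sketch below; this is not the answerer's exact code, and buffering the chunks until 'end' is an added assumption ('data' may fire more than once per response, which would also explain partial or duplicate parses):

function crawl(i, done) {
    var req = http.request(getOptions(i), function (res) {
        var body = '';
        // 'data' can fire several times per response, so collect the chunks
        res.on('data', function (chunk) {
            body += chunk;
        });
        res.on('end', function () {
            parseLocation(body);
            done(); // tell the queue this task is finished
        });
    });
    req.on('error', function (err) {
        console.error('request ' + i + ' failed: ' + err.message);
        done(); // free the slot even on failure
    });
    req.end();
}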
answered Oct 04 '22 by Aaron Wang