Efficient Alternative to "in"

I'm writing a web crawler with the ultimate goal of creating a map of the path the crawler has taken. I have no idea at what rate other, undoubtedly better, crawlers pull down pages, but mine clocks about 2,000 pages per minute.

The crawler works on a recursive backtracking algorithm, which I have limited to a depth of 15. To prevent the crawler from endlessly revisiting pages, it stores the URL of each page it has visited in a list, and checks each candidate URL against that list:

for href in tempUrl:
    ...
    if href not in urls:
        collect(href, parent, depth + 1)

This method seems to become a problem by the time the crawler has pulled down around 300,000 pages; at that point it averages only 500 pages per minute.

So my question is: what is another way of achieving the same functionality while improving efficiency?

I've thought that decreasing the size of each entry might help, so instead of appending the entire URL, I append the first two and the last two characters of each URL as a string. This, however, hasn't helped.

Is there a way I could do this with sets or something?

Thanks for the help

edit: As a side note, my program is not yet multithreaded. I figured I should resolve this bottleneck before I get into learning about threading.

asked Nov 27 '22 by danem

1 Answer

Perhaps you could use a set instead of a list for the URLs that you have seen so far. A membership test on a list (href not in urls) scans the elements one by one, so it costs O(n) and gets slower as the list grows; a set is backed by a hash table, so membership tests take O(1) on average, no matter how many URLs you have stored.
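
As a minimal sketch, reusing the collect, parent, and depth names from your question (the fetch-and-extract step isn't shown there, so the get_links helper below is a hypothetical stand-in for it):

visited = set()                    # replaces the urls list

def get_links(url):
    # hypothetical stub for fetching a page and extracting its links
    return []

def collect(url, parent, depth):
    if depth > 15:                 # the question's depth limit
        return
    visited.add(url)               # set.add() in place of list.append()
    for href in get_links(url):
        if href not in visited:    # O(1) average instead of an O(n) scan
            collect(href, url, depth + 1)

collect("http://example.com/", None, 0)

The only real change is building visited as a set and calling add() instead of append(); the rest of the crawl loop stays exactly as you wrote it.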

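If you want to convince yourself before changing the crawler, here is a rough, self-contained benchmark at the ~300,000-entry scale where you saw the slowdown (the URLs and counts are made up, and timings will vary by machine):

import time

fake_urls = ["http://example.com/page/%d" % i for i in range(300000)]
as_list = list(fake_urls)
as_set = set(fake_urls)

probe = "http://example.com/page/299999"   # near the end: close to worst case for the list

start = time.time()
for _ in range(1000):
    found = probe in as_list       # O(n): walks the list on every check
print("list: %.3f s" % (time.time() - start))

start = time.time()
for _ in range(1000):
    found = probe in as_set        # O(1) average: a single hash lookup
print("set:  %.3f s" % (time.time() - start))

On a typical machine the list loop takes seconds, while the set loop finishes almost instantly.
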
answered Dec 11 '22 by Bill Lynch