I was recently reading a book as prep for an interview and came across the following question:
What will you do when your crawler runs into a honey pot that generates an infinite subgraph for you to wander about?
I wanted to get some solutions to this question. Personally, I would use some form of depth-limited search to prevent traversing endlessly, or perhaps some form of machine learning to detect patterns. Thoughts?
Most commonly, infinite subgraphs are prevented by limiting link depth: you start with an initial set of URLs and traverse from each one only to a finite depth. While limiting the traversal depth you can also use heuristics to adjust it dynamically according to webpage characteristics. More information can be found e.g. here.
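A minimal sketch of that idea, assuming a hypothetical `get_links` callback that stands in for fetching and parsing a page (a real crawler would do HTTP requests here):

```python
from collections import deque

def crawl_depth_limited(seed_urls, get_links, max_depth):
    """Breadth-first crawl that refuses to expand pages past max_depth.

    get_links is a placeholder callback returning a page's outgoing
    links; in a real crawler it would fetch and parse the URL.
    """
    visited = set(seed_urls)
    queue = deque((url, 0) for url in seed_urls)
    while queue:
        url, depth = queue.popleft()
        if depth >= max_depth:
            continue  # depth limit reached: do not follow further links
        for link in get_links(url):
            if link not in visited:
                visited.add(link)
                queue.append((link, depth + 1))
    return visited

# Toy "honeypot": every page links to a brand-new page, forever.
infinite_site = lambda url: [url + "x"]
pages = crawl_depth_limited(["a"], infinite_site, max_depth=3)
print(sorted(pages))  # ['a', 'ax', 'axx', 'axxx']
```

Even though the toy graph is infinite, the crawl terminates after visiting `max_depth + 1` pages per seed, which is exactly the property you want against a trap.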
Another option would be to try some sort of pattern matching. But depending on the algorithm that produces the subgraph, this can be a very hard task, and it will also be a fairly expensive operation.
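One cheap instance of such pattern matching is a URL-shape heuristic: generated traps often produce paths that repeat the same segments over and over (e.g. endless calendar pages). This is just an illustrative sketch; the threshold and the rule itself are assumptions, and a determined honeypot can evade them:

```python
from collections import Counter
from urllib.parse import urlparse

def looks_like_trap(url, max_repeats=3):
    """Heuristic: flag URLs whose path repeats the same segment too
    often, a common symptom of generated loops like /a/b/a/b/a/b/...
    The max_repeats threshold is an illustrative guess, not a standard.
    """
    segments = [s for s in urlparse(url).path.split("/") if s]
    if not segments:
        return False
    return max(Counter(segments).values()) > max_repeats

print(looks_like_trap("http://example.com/cal/2020/cal/2020/cal/2020/cal/2020"))  # True
print(looks_like_trap("http://example.com/blog/2020/post"))  # False
```

In practice you would combine several such signals (path length, query-parameter churn, near-duplicate page content) rather than rely on any single rule.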
For the interview question (about detecting infinite loops): whoever asks this probably wants to hear a reference to the halting problem.
Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist.