
Extracting URLs from large text/HTML files

I have a lot of text that I need to process for valid URLs.

The input is vaguely HTML-ish, in that it's mostly HTML. However, it's not really valid HTML.

I've been trying to do it with regexes, and having issues.

Before you say (or possibly scream - I've read the other HTML + regex questions) "use a parser", there is one thing you need to consider:
The files I am working with are about 5 GB in size.

I don't know of any parsers that can handle that without failing, or taking days. Furthermore, the fact that the text content is largely HTML but not necessarily valid HTML means it would require a very tolerant parser. Lastly, not all links are necessarily in <a> tags (some may be just plain text).

Given that I don't really care about document structure, are there any better alternatives for extracting links?

Right now I'm using the regex:
\b(([\w-]+://?|www[.])[^\s()<>]+(?:\([\w\d]+\)|([^[:punct:]\s]|/))) (in grep -E)
but even with that, I gave up after letting it run for about 3 hours.

Are there significant differences in regex engine performance? I'm using macOS's command-line grep. If there are other compatible implementations with better performance, that might be an option.
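
For instance, I could try GNU grep (Homebrew installs it as ggrep) in the C locale, which is reportedly much faster than the BSD grep that ships with macOS. A rough, untested sketch, using a simplified POSIX-class variant of the pattern above (since \w and \s aren't reliable inside bracket expressions in ERE), with bigfile.html standing in for the real input:

# C locale skips multibyte locale handling, which often speeds up large scans
LC_ALL=C ggrep -Eo "\b(([[:alnum:]_-]+://?|www[.])[^[:space:]()<>]+)" bigfile.html > links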


I don't care too much about language/platform, though macOS/command line would be nice.

asked Nov 04 '22 by Fake Name

1 Answer

I wound up stringing a couple of grep commands together:

pv -cN source allContent | grep -oP "(?:\"([^\"' ]*?)\")|(?:'([^\"' ]*?)')|(?:([^\"' ]*?) )" | grep -E "(http)|(www)|(\.com)|(\.net)|(\.to)|(\.cc)|(\.info)|(\.org)" | pv -cN out > extrLinks1

I used pv to give me a progress indicator.

grep -oP "(?:\"([^\"' ]*?)\")|(?:'([^\"' ]*?)')|(?:([^\"' ]*?) )"
Pulls out anything that looks like a word or quoted text, and has no spaces.
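
For example, run against a single hypothetical sample line (note that -P needs PCRE support, which the stock grep on recent macOS lacks; GNU grep, e.g. Homebrew's ggrep, has it):

echo '<a href="http://example.com/x">link</a> see www.example.net now' | grep -oP "(?:\"([^\"' ]*?)\")|(?:'([^\"' ]*?)')|(?:([^\"' ]*?) )"

prints one token per line (the unquoted ones keep a trailing space):

<a
"http://example.com/x"
>link</a>
see
www.example.net

Piping that through the second grep below keeps just the two URL-ish tokens.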

grep -E "(http)|(www)|(\.com)|(\.net)|(\.to)|(\.cc)|(\.info)|(\.org)"
Filters the output for anything that looks like it could be a URL.

Finally,
pv -cN out > extrLinks1
Outputs it to a file, and gives a nice activity meter.

I'll probably push the generated file through sort -u to remove duplicate entries, but I didn't want to tack that onto the end of the pipeline, because it would add another layer of complexity, and I was worried that sort would try to buffer the whole file, which could cause a crash.
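
That said, from what I can tell GNU sort spills to temporary files rather than buffering the whole input in memory, so a capped, standalone pass should be safe. A sketch, assuming GNU coreutils (Homebrew installs them with a g prefix):

# -u dedupes, -S caps the in-memory buffer, -T picks the directory for spill files
LC_ALL=C gsort -u -S 512M -T /tmp extrLinks1 > extrLinks1.uniq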


Anyways, as it's running right now, it looks like it's going to take about 40 minutes. I didn't know about pv before. It's a really cool utility!

answered Nov 07 '22 by Fake Name