
Fetching via wget to memory & bypassing disk writes

Is it possible to download the contents of a website (a set of HTML pages) straight to memory, without writing anything out to disk?

I have a cluster of machines with 24 GB of memory installed on each, but I'm limited by a disk quota of several hundred MB. I was thinking of redirecting wget's output to some kind of in-memory structure without storing the contents on disk. The other option is to write my own version of wget, but maybe there is a simpler way to do it with pipes.
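Roughly, I was imagining something like the following untested sketch (the URL is just a placeholder), where wget's output is read through a pipe so the page never touches disk:

use strict;
use warnings;

# Read wget's output through a pipe; -q is quiet, -O - writes to stdout.
my $url = 'http://example.com/page.html';
open(my $pipe, '-|', 'wget', '-q', '-O', '-', $url)
    or die "Cannot start wget: $!";

# Slurp the whole response into a scalar in memory.
my $content = do { local $/; <$pipe> };
close($pipe) or warn "wget exited with status $?";

print "fetched ", length($content), " bytes\n";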

Also, what would be the best way to run this download in parallel? The cluster has more than 20 nodes, and I can't use the file system in this case.

asked Nov 29 '22 by alex

2 Answers

See wget download options:

‘-O file’

‘--output-document=file’

The documents will not be written to the appropriate files, but all will be concatenated together and written to file. If ‘-’ is used as file, documents will be printed to standard output, disabling link conversion. (Use ‘./-’ to print to a file literally named ‘-’.)

If you want to read the files into a Perl program, you can invoke wget using backticks.
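For example, here is a minimal sketch (the URL is only a placeholder) that slurps a page into a Perl scalar via backticks:

use strict;
use warnings;

# -q: quiet, -O -: write the document to standard output instead of a file,
# so the whole page ends up in $html without touching the disk.
my $url  = 'http://www.example.com/';
my $html = `wget -q -O - $url`;
die "wget failed: $?" if $? != 0;

print length($html), " bytes fetched\n";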

Depending on what you really need to do, you might be able to get by just using LWP::Simple's get.

use LWP::Simple;
my $content = get("http://www.example.com/");
die "Couldn't get it!" unless defined $content;

Update: I had no idea you could implement your own file system in Perl using Fuse and Fuse.pm. See also Fuse::InMemory.

answered Dec 05 '22 by Sinan Ünür


If you a) are already using Perl, b) want to download HTML, and c) want to parse it, I always recommend LWP and HTML::TreeBuilder.
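As an illustration, a small sketch (the URL is a placeholder) that fetches a page with LWP, keeps it in memory, and extracts the links with HTML::TreeBuilder:

use strict;
use warnings;
use LWP::UserAgent;
use HTML::TreeBuilder;

my $ua  = LWP::UserAgent->new;
my $res = $ua->get('http://www.example.com/');
die "Download failed: ", $res->status_line unless $res->is_success;

# Build the parse tree directly from the in-memory response body.
my $tree = HTML::TreeBuilder->new_from_content($res->decoded_content);

# Print the href of every <a> element on the page.
for my $a ($tree->look_down(_tag => 'a')) {
    my $href = $a->attr('href');
    print "$href\n" if defined $href;
}

$tree->delete;   # free the parse tree explicitly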

answered Dec 05 '22 by Leonardo Herrera