I am trying to fetch a Facebook user's profile page using "wget", but I keep getting a non-profile page called "browser.php" which has nothing to do with that particular user. The profile page's URL, as I see it in the browser, has the following format:
http://www.facebook.com/user-name
and that's what I have been using as the argument to the wget command:
wget http://www.facebook.com/user-name
I am also interested in using wget to fetch a user's friends' list but even that is giving me the same unhelpful result ("browser.php"):
wget "http://www.facebook.com/user-name?sk=friends&v=friends"
Could someone kindly advise me on what I'm doing wrong here? In other words, am I missing some key options for the wget command, or does wget simply not fit this scenario at all?
Any help will be greatly appreciated.
To add context to this query, I need to figure out how to fetch these pages from Facebook using wget as it would then help me write a script/program to look up friends' profile URLs from the HTML source code and then look up some other keywords on them, etc. I am basically hoping that this would help me in doing some kind of selective-crawling (with Facebook's permission of course) of people I am not connected to.
Log into Facebook, click the down-triangle icon at top right, and choose Settings. On the General Settings page, click the last item, the link to download a copy of your data.
First, Facebook has probably set things up so that certain user agents (e.g. wget's default) cannot crawl its pages: those user agents get redirected to a different page, which probably says something like "your browser is not supported". They do that to prevent people from doing exactly what you are doing. However, you can tell wget to identify itself as a different agent using the -U argument (read the wget man page), e.g. wget -U Mozilla http://....
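To make that concrete, here is a sketch of such an invocation (the URL is the hypothetical one from the question, and the exact User-Agent string is arbitrary; any browser-like value should do):

```shell
# Spoof a browser User-Agent so the server doesn't redirect wget
# to the "unsupported browser" page. Quote the URL so the shell
# doesn't interpret any special characters in it.
wget -U "Mozilla/5.0 (X11; Linux x86_64; rv:10.0) Gecko/20100101 Firefox/10.0" \
     -O profile.html \
     "http://www.facebook.com/user-name"
```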
Second, Facebook's privacy settings rarely allow you to read much information unless you are logged in as a user, and probably only as a user who is a friend of the profile you are trying to scrape.
Third, there is a Facebook API which you are meant to use to crawl and extract information from Facebook; you are likely in violation of the acceptable-use policy if you try to obtain information any other way.
I don't know why you want to use wget; Facebook offers an excellent API.
wget --user-agent=Firefox http://www.facebook.com/markzuckerberg
will save the publicly available content to a file.
you should consider using their API.
Facebook Developers
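As a rough sketch of what using the API instead of page-scraping looks like (ACCESS_TOKEN is a placeholder; you would obtain a real token by registering an app on the Developers site, and the friends list is only returned with the appropriate permissions):

```shell
# Fetch a user's public profile as JSON from the Graph API.
# ACCESS_TOKEN is a placeholder for a real token from the Developers site.
wget -O profile.json \
     "https://graph.facebook.com/user-name?access_token=ACCESS_TOKEN"

# The friends list, visible only with the appropriate permissions:
wget -O friends.json \
     "https://graph.facebook.com/user-name/friends?access_token=ACCESS_TOKEN"
```

The JSON responses are also much easier to parse in a script than the HTML source of a profile page.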
If you want to save the logged in page, you can log in with Firefox with "Keep me logged in" selected, then copy those cookies to a file and use them with the cookiejar option. You will still have quite a bit of dynamic script loaded content that WGET isn't going to save.
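A sketch of that cookie-based approach (assuming you have exported your browser cookies to a file named cookies.txt; the filenames are placeholders):

```shell
# cookies.txt must be in Netscape cookie-file format, exported from a
# browser session where you logged in with "Keep me logged in" checked.
wget --load-cookies cookies.txt \
     --keep-session-cookies \
     -U "Mozilla/5.0" \
     -O profile.html \
     "http://www.facebook.com/user-name"
```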
There are many ways to skin this cat. If you need to extract a specific item, check out the API. If you simply want to archive a snapshot of the page as it would appear in a web browser, try CutyCapt. It's much like wget, except it parses the entire document as a web browser would and stores an image of the page.
Check the following open-source projects:

- facebook-cli, a command-line utility to interact with the Facebook API.
- facebook-friends, which can generate an HTML page of all of your Facebook friends.