 

How to Download PDFs from Scraped Links [Python]?

I'm working on a PDF web scraper in Python. Essentially, I'm trying to scrape all of the lecture notes from one of my courses, which are in the form of PDFs. I want to enter a URL, then get the PDFs and save them in a directory on my laptop. I've looked at several tutorials, but I'm not entirely sure how to go about doing this. None of the questions on Stack Overflow seem to be helping me either.

Here is what I have so far:

import requests
from bs4 import BeautifulSoup
import shutil

bs = BeautifulSoup

url = input("Enter the URL you want to scrape from: ")
print("")

suffix = ".pdf"

link_list = []

def getPDFs():
    # Request the page at the user-supplied URL
    response = requests.get(url, stream=True)
    soup = bs(response.text, 'html.parser')

    # for link in soup.find_all('a'):  # Finds all links
    #     if suffix in str(link):  # If the link ends in .pdf
    #         link_list.append(link.get('href'))
    # print(link_list)

    with open('CS112.Lecture.09.pdf', 'wb') as out_file:
        shutil.copyfileobj(response.raw, out_file)
    del response
    print("PDF Saved")

getPDFs()

Originally, I had gotten all of the links to the PDFs, but did not know how to download them; the code for that is now commented out.

Now I've gotten to the point where I'm trying to download just one PDF. A PDF does get downloaded, but it's a 0 KB file.

If it's of any use, I'm using Python 3.4.2.

asked Apr 15 '15 by freddiev4

1 Answer

If this is something that does not require being logged in, you can use urlretrieve():

from urllib.request import urlretrieve

for link in link_list:
    # Pass a filename; otherwise urlretrieve saves to a temporary file
    urlretrieve(link, link.split('/')[-1])
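
As a side note, the 0 KB file in the question most likely happens because accessing response.text consumes the response body, so response.raw is already empty by the time shutil.copyfileobj runs. Below is a minimal sketch of how the scraping and downloading steps might fit together; the download_pdfs helper and the pdfs output directory are placeholders, and urljoin is used because the hrefs on the page may be relative:

import os
from urllib.parse import urljoin
from urllib.request import urlretrieve

import requests
from bs4 import BeautifulSoup

def download_pdfs(page_url, out_dir='pdfs'):
    # Hypothetical helper combining the question's scraping step
    # with urlretrieve-based downloading
    os.makedirs(out_dir, exist_ok=True)
    response = requests.get(page_url)
    soup = BeautifulSoup(response.text, 'html.parser')

    for link in soup.find_all('a', href=True):
        href = link['href']
        if href.lower().endswith('.pdf'):
            # Resolve relative hrefs against the page URL
            pdf_url = urljoin(page_url, href)
            filename = os.path.join(out_dir, pdf_url.split('/')[-1])
            urlretrieve(pdf_url, filename)
            print("Saved", filename)

# e.g. download_pdfs(url), using the URL collected in the question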
answered Oct 30 '22 by alecxe