I am trying to download a PDF file from a website using urllib. This is what I have so far:
    import urllib

    def download_file(download_url):
        web_file = urllib.urlopen(download_url)
        local_file = open('some_file.pdf', 'w')
        local_file.write(web_file.read())
        web_file.close()
        local_file.close()

    if __name__ == 'main':
        download_file('http://www.example.com/some_file.pdf')
When I run this code, all I get is an empty PDF file. What am I doing wrong?
The urllib package is Python's URL-handling module. It is used to fetch URLs (Uniform Resource Locators); its urlopen function can fetch URLs over a variety of protocols.
There are two problems in your code: the guard compares __name__ against 'main' instead of '__main__', so download_file is never called when the script is run directly, and the output file is opened in text mode ('w') instead of binary mode ('wb'), which can corrupt the PDF bytes. Here is an example that works:
    import urllib2

    def main():
        download_file("http://mensenhandel.nl/files/pdftest2.pdf")

    def download_file(download_url):
        response = urllib2.urlopen(download_url)
        # open the local file in binary mode so the PDF bytes are written untouched
        local_file = open("document.pdf", 'wb')
        local_file.write(response.read())
        local_file.close()
        print("Completed")

    if __name__ == "__main__":
        main()
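If you are on Python 3, note that the urllib2 module no longer exists; its urlopen function lives in urllib.request. Below is a minimal sketch of the same download assuming Python 3; the output filename document.pdf is just a placeholder.

    import urllib.request

    def download_file(download_url, local_path="document.pdf"):
        # urlopen returns a file-like response object; the with-blocks close
        # both the response and the local file automatically
        with urllib.request.urlopen(download_url) as response:
            with open(local_path, "wb") as local_file:
                # write in binary mode so the PDF bytes are not mangled
                local_file.write(response.read())

    if __name__ == "__main__":
        download_file("http://mensenhandel.nl/files/pdftest2.pdf")
        print("Completed")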