I'm currently writing a script that downloads a file from a URL:

import urllib.request
urllib.request.urlretrieve(my_url, 'my_filename')
The docs for urllib.request.urlretrieve state:
The following functions and classes are ported from the Python 2 module urllib (as opposed to urllib2). They might become deprecated at some point in the future.
Therefore I would like to avoid it so I don't have to rewrite this code in the near future.
I'm unable to find another interface like download(url, filename) in the standard library. If urlretrieve is considered a legacy interface in Python 3, what is its replacement?
True, urllib is available if you want to avoid adding any dependencies. But note that even the official Python documentation recommends the requests library: "The Requests package is recommended for a higher-level HTTP client interface."
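With requests installed, the download might look like this sketch (my_url and 'my_filename' are the same placeholders as in the question; stream=True avoids buffering the whole file in memory):

import requests

response = requests.get(my_url, stream=True)
response.raise_for_status()  # raise an HTTPError for 4xx/5xx responses
with open('my_filename', 'wb') as out_file:
    # Write the body to disk in chunks rather than all at once.
    for chunk in response.iter_content(chunk_size=8192):
        out_file.write(chunk)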
The urllib.request.urlretrieve() function retrieves the resource at the given URL and stores it at the given file path.
Per the same page of the docs, the urllib.request module defines functions and classes which help in opening URLs (mostly HTTP) in a complex world: basic and digest authentication, redirections, cookies and more.
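As a quick illustration of opening a URL with the module directly (my_url is a placeholder, and the User-Agent header is just one example of what Request accepts):

from urllib.request import Request, urlopen

# Build a request with a custom header, then read the full response body.
req = Request(my_url, headers={'User-Agent': 'my-script/0.1'})
with urlopen(req) as resp:
    body = resp.read()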
I found that the time taken to send data from the client to the server was about the same for both modules (urllib, requests), but the time taken to return data from the server to the client was more than twice as fast with urllib compared to requests.
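If you want to check that on your own connection, a rough timing sketch might look like the following (my_url is a placeholder, requests is a third-party dependency, and results will vary with the network and server):

import time
import urllib.request

import requests

def time_urllib(url):
    # Download the full response body via urllib and measure wall-clock time.
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

def time_requests(url):
    # Same download via requests, for a like-for-like comparison.
    start = time.perf_counter()
    requests.get(url).content
    return time.perf_counter() - start

print(time_urllib(my_url), time_requests(my_url))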
"Deprecated" is one thing; "might become deprecated at some point in the future" is another.
If it suits your needs, I'd continue using urlretrieve.
That said, you can do without it:
from urllib.request import urlopen
from shutil import copyfileobj

# Stream the response body straight to disk without loading it all into memory.
with urlopen(my_url) as in_stream, open('my_filename', 'wb') as out_file:
    copyfileobj(in_stream, out_file)
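And if you want the download(url, filename) shape you were originally looking for, you can wrap the same two calls in a small helper (the function name is just for illustration, not a standard-library API):

from urllib.request import urlopen
from shutil import copyfileobj

def download(url, filename):
    # Stream the response body to disk, like urlretrieve but built
    # only from non-legacy pieces of the standard library.
    with urlopen(url) as in_stream, open(filename, 'wb') as out_file:
        copyfileobj(in_stream, out_file)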