Is there a good way to download a lot of files en masse using Python? This code is speedy enough for downloading about 100 or so files, but I need to download 300,000 files. Obviously they are all very small files (or I wouldn't be downloading 300,000 of them :) ), so the real bottleneck seems to be this loop. Does anyone have any thoughts? Maybe using MPI or threading?
Do I just have to live with the bottleneck? Or is there a faster way, maybe not even using Python?
(I included the full beginning of the code just for completeness' sake.)
from __future__ import division
import pandas as pd
import numpy as np
import urllib2
import os
import linecache

# we start with a huge file of urls
data = pd.read_csv("edgar.csv")
datatemp2 = data[data['form'].str.contains("14A")]
datatemp3 = data[data['form'].str.contains("14C")]

# data2 is the cut-down file
data2 = datatemp2.append(datatemp3)
flist = np.array(data2['filename'])
print len(flist)
print flist

### below we have a script to download all of the files in the data2 database
### here you will need to create a new directory named edgar14A14C in your CWD
original = os.getcwd()  # str has no .copy(); a plain assignment is enough
os.chdir(os.path.join(os.getcwd(), 'edgar14A14C'))

for i in xrange(len(flist)):
    url = "ftp://ftp.sec.gov/" + str(flist[i])
    file_name = str(url.split('/')[-1])
    u = urllib2.urlopen(url)
    f = open(file_name, 'wb')
    f.write(u.read())
    f.close()
    print i
The usual pattern with multiprocessing is to create a job() function that takes arguments and performs some potentially CPU-bound work.
Example (based on your code):
from multiprocessing import Pool
import urllib2

def job(url):
    # download one file, named after the last path component of the url
    file_name = str(url.split('/')[-1])
    u = urllib2.urlopen(url)
    f = open(file_name, 'wb')
    f.write(u.read())
    f.close()

pool = Pool()
urls = ["ftp://ftp.sec.gov/{0:s}".format(f) for f in flist]
pool.map(job, urls)
This will do a number of things:
- create a pool of worker processes (by default, one per CPU core)
- iterate over urls, passing each one to the job() function
- wait for all jobs to complete

Python's multiprocessing.Pool.map will take care of splitting up your input across the number of workers in the pool.
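A minimal sketch of how you might tune that for 300,000 downloads, assuming flist from the question's code; the worker count of 10 and chunksize of 100 are assumptions you would tune to what the FTP server tolerates. Wrapping the download in try/except means one bad URL reports a failure instead of killing the whole run:

from multiprocessing import Pool
import urllib2

def job(url):
    # Download a single file; return (url, error_message_or_None)
    # so one bad link does not abort the whole run.
    file_name = str(url.split('/')[-1])
    try:
        u = urllib2.urlopen(url)
        f = open(file_name, 'wb')
        f.write(u.read())
        f.close()
        return url, None
    except Exception as e:
        return url, str(e)

if __name__ == '__main__':
    urls = ["ftp://ftp.sec.gov/{0:s}".format(f) for f in flist]
    # 10 workers and chunksize=100 are assumptions; chunksize hands each
    # worker a batch of urls instead of one at a time, which cuts overhead
    # when you have 300k small jobs.
    pool = Pool(processes=10)
    for url, error in pool.imap_unordered(job, urls, chunksize=100):
        if error is not None:
            print "failed:", url, error
    pool.close()
    pool.join()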
Another neat little thing I've done for this kind of work is to use the progress library, like this:
from multiprocessing import Pool
from progress.bar import Bar

def job(input):
    # do some work
    pass

pool = Pool()
inputs = range(100)
bar = Bar('Processing', max=len(inputs))
for i in pool.imap(job, inputs):
    bar.next()
bar.finish()
This gives you a nice progress bar on your console as your jobs run, so you have some idea of progress, ETA, etc.
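Here is a sketch of the two snippets combined, again assuming flist from the question's code. imap yields results as each job finishes, so the bar advances in real time while the pool downloads:

from multiprocessing import Pool
from progress.bar import Bar
import urllib2

def job(url):
    # same download job as above; errors are swallowed here just to keep
    # the progress loop simple
    file_name = str(url.split('/')[-1])
    try:
        u = urllib2.urlopen(url)
        with open(file_name, 'wb') as f:
            f.write(u.read())
    except Exception:
        pass

if __name__ == '__main__':
    urls = ["ftp://ftp.sec.gov/{0:s}".format(f) for f in flist]
    pool = Pool()
    bar = Bar('Downloading', max=len(urls))
    for _ in pool.imap(job, urls):
        bar.next()
    bar.finish()
    pool.close()
    pool.join()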
I also find the requests library very useful here; it has a much nicer API for dealing with web resources and downloading content.
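Note that requests speaks HTTP(S), not FTP, so this sketch assumes the files are also reachable over an HTTP URL (an assumption; check what the server actually exposes). The job() would then look something like:

import requests

def job(url):
    # assumes an HTTP(S) url; requests does not handle ftp:// links
    file_name = url.split('/')[-1]
    r = requests.get(url, timeout=30)
    r.raise_for_status()
    with open(file_name, 'wb') as f:
        f.write(r.content)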