From https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.Executor.map
If a func call raises an exception, then that exception will be raised when its value is retrieved from the iterator.
The following snippet only outputs the first exception (Exception: 1) and then stops. Does this contradict the statement above? I expected the following to print every exception raised in the loop.
import concurrent.futures

def test_func(val):
    raise Exception(val)

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    for r in executor.map(test_func, [1, 2, 3, 4, 5]):
        try:
            print(r)
        except Exception as exc:
            print('generated an exception: %s' % exc)
The ThreadPoolExecutor map() function supports target functions that take more than one argument: you simply pass one iterable per argument to the call to map(). For example, we can define a target function that takes two arguments, then provide two iterables to the call to map().
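A minimal sketch of that pattern (the add function and the input lists here are illustrative, not from the original post):

import concurrent.futures

def add(x, y):
    return x + y

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # map() pairs the iterables element-wise: add(1, 10), add(2, 20), add(3, 30)
    for result in executor.map(add, [1, 2, 3], [10, 20, 30]):
        print(result)  # 11, 22, 33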
ThreadPoolExecutor is an Executor subclass that uses a pool of at most max_workers threads to execute calls asynchronously.
ThreadPoolExecutor Thread-Safety. Although the ThreadPoolExecutor uses threads internally, you do not need to work with threads directly in order to execute tasks and get results. Nevertheless, when tasks access shared resources or critical sections, thread-safety may be a concern.
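For instance, if tasks update shared state, a standard threading.Lock can guard the critical section. A minimal sketch, assuming a shared counter (the counter and increment function are illustrative):

import concurrent.futures
import threading

counter = 0
lock = threading.Lock()

def increment(_):
    global counter
    # Guard the read-modify-write so concurrent tasks don't lose updates
    with lock:
        counter += 1

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    list(executor.map(increment, range(100)))
print(counter)  # always 100 with the lock in place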
Ehsan's solution is good, but it may be slightly more efficient to take the results as they are completed instead of waiting for sequential items in the list to finish. Here is an example from the library docs:
import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the URL and contents
def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))
As mentioned above, executor.map's API is unfortunately limited: it only lets you observe the first exception, and when iterating through the results you will only get values up to that first exception.
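To make that concrete, here is a sketch reusing test_func from the question. Wrapping the whole loop in try/except catches the first exception, but the iterator is then exhausted and the remaining results are lost:

import concurrent.futures

def test_func(val):
    raise Exception(val)

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    results = executor.map(test_func, [1, 2, 3, 4, 5])
    try:
        for r in results:
            print(r)
    except Exception as exc:
        # Only the first exception surfaces; iteration cannot resume
        print('generated an exception: %s' % exc)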
To answer your question, if you don't want to use a different library, you can unroll your map and manually submit each call:
import concurrent.futures

# test_func here is the same function defined in the question above
future_list = []
with concurrent.futures.ThreadPoolExecutor() as executor:
    for arg in range(10):
        future = executor.submit(test_func, arg)
        future_list.append(future)

for future in future_list:
    try:
        # result() re-raises any exception raised inside the worker
        print(future.result())
    except Exception as e:
        print(e)
This allows you to handle each future individually.