I am implementing a Python script that needs to keep sending 1500+ packets in parallel, with each round completed in less than 5 seconds.
In a nutshell what I need is:
```python
def send_pkts(ip):
    # craft packet
    while True:
        # send packet
        time.sleep(randint(0, 3))

for x in list[:1500]:
    send_pkts(x)
    time.sleep(randint(1, 5))
```
I have tried the simple single-threaded, multithreading, multiprocessing, and multiprocessing+multithreading approaches, and ran into issues with each of them.
Is there a better approach I could use to accomplish this task?
[1] EDIT 1:
```python
def send_pkt(x):
    # craft pkt
    while True:
        # send pkt
        gevent.sleep(0)

gevent.joinall([gevent.spawn(send_pkt, x) for x in list[:1500]])
```
[2] EDIT 2 (gevent monkey-patching):
```python
from gevent import monkey; monkey.patch_all()

jobs = [gevent.spawn(send_pkt, x) for x in list[:1500]]
gevent.wait(jobs)
# for send_pkt(x) see [1]
```
However, I got the following error: "ValueError: filedescriptor out of range in select()". So I checked my system ulimit (soft and hard limits are both at the maximum: 65536). I then found it has to do with a select() limitation on Linux (a maximum of 1024 file descriptors); see http://man7.org/linux/man-pages/man2/select.2.html (BUGS section). To overcome that I should use poll() (http://man7.org/linux/man-pages/man2/poll.2.html) instead, but with poll() I run into the same limitation, since polling is a "blocking approach".
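One possible workaround (a minimal sketch, not from the original post) is to cap how many greenlets run at once with gevent.pool.Pool, so fewer than 1024 sockets are open at any moment. This assumes send_pkt is reshaped to send one round and return, so pool slots get recycled; ip_list is a hypothetical stand-in for the target list:

```python
import gevent
from gevent import monkey; monkey.patch_all()
from gevent.pool import Pool
from random import randint

# Keep concurrency below the 1024-fd select() limit.
pool = Pool(1000)

def send_pkt(ip):
    # craft and send one packet to `ip` here (placeholder)
    gevent.sleep(0)  # yield to the event loop

while True:
    for ip in ip_list[:1500]:  # ip_list: hypothetical target list
        pool.spawn(send_pkt, ip)  # blocks once 1000 greenlets are active
    pool.join()                   # wait for this round to complete
    gevent.sleep(randint(1, 5))
```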
Regards,
For parallelism, Python offers multiprocessing, which launches multiple instances of the Python interpreter, each running independently on its own hardware thread. All three of these mechanisms (threading, coroutines, and multiprocessing) have distinctly different use cases.
Because each process runs on its own core, they actually can run at the same time, which is fabulous. There are some complications that arise from doing this, but Python does a pretty good job of smoothing them over most of the time. As with threads, the operating system decides when to switch tasks, outside of Python's control.
Multiprocessing is easier to just drop in than threading, but it has a higher memory overhead. If your code is CPU-bound, multiprocessing is most likely going to be the better choice, especially if the target machine has multiple cores or CPUs.
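As a rough illustration (a minimal sketch, not the poster's code; send_pkt is a hypothetical placeholder), the question's loop could be fanned out across worker processes with multiprocessing.Pool:

```python
import multiprocessing
import time
from random import randint

def send_pkt(ip):
    # craft and send one packet to `ip` here (hypothetical placeholder)
    time.sleep(randint(0, 3))
    return ip

if __name__ == '__main__':
    ip_list = ['168.212.226.204'] * 1500  # stand-in for the real target list
    # By default Pool() starts one worker per CPU core;
    # map() spreads the IPs across those workers each round.
    with multiprocessing.Pool() as pool:
        while True:
            pool.map(send_pkt, ip_list)
            time.sleep(randint(1, 5))
```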
No, it is not a good idea. Multithreading in CPython cannot give you true CPU parallelism because of the Global Interpreter Lock (GIL): only one thread executes Python bytecode at a time. Threads can still help with I/O-bound work, since the GIL is released while a thread waits on I/O, but they will not run your code on multiple cores at once.
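For completeness, a minimal sketch of the I/O-bound case (ip_list and the packet-sending body are placeholders, not from the original post; note that 1500 OS threads carry real memory and scheduling overhead):

```python
import threading
import time
from random import randint

def send_pkt(ip):
    # craft packet here (placeholder)
    while True:
        # send packet here; the GIL is released while this thread
        # blocks in sleep() or socket I/O, letting other threads run
        time.sleep(randint(0, 3))

ip_list = ['168.212.226.204'] * 1500  # stand-in for the real target list
for ip in ip_list:
    threading.Thread(target=send_pkt, args=(ip,)).start()
```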
When using parallelism in Python, a good approach is to use either ThreadPoolExecutor or ProcessPoolExecutor from concurrent.futures (https://docs.python.org/3/library/concurrent.futures.html#module-concurrent.futures); these work well in my experience.
Below is an example using ThreadPoolExecutor that can be adapted for your use.
```python
import concurrent.futures
import time

IPs = ['168.212.226.204', '168.212.226.204', '168.212.226.204',
       '168.212.226.204', '168.212.226.204']

def send_pkt(x):
    status = 'Failed'
    while True:
        # send pkt
        time.sleep(10)
        status = 'Successful'
        break
    return status

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    future_to_ip = {executor.submit(send_pkt, ip): ip for ip in IPs}
    for future in concurrent.futures.as_completed(future_to_ip):
        ip = future_to_ip[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (ip, exc))
        else:
            print('%r sent %s' % (ip, data))
```
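The send_pkt placeholder here just sleeps; for the original 1500 targets you would raise max_workers accordingly and put the real send logic in its place. If the work turned out to be CPU-bound rather than I/O-bound, ProcessPoolExecutor offers the same interface with processes instead of threads.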