What's the best way to ping many network devices in parallel?

I poll a lot of devices on the network (more than 300) with an iterative ping.

The program polls the devices sequentially, so it's slow. I'd like to speed up the polling.

There are several ways to do this in Delphi 7:

  1. Give each device its own thread doing the ping, and manage the threads manually.
  2. Learn and use Indy 10. Need examples.
  3. Use overlapped I/O based on window messages.
  4. Use completion ports based on events.

Which is faster and easier? Please provide some examples or links.

asked Feb 03 '11 by Dr.eel



2 Answers

Flooding the network with ICMP is not a good idea.

You might want to consider some kind of thread pool: queue up the ping requests and have a fixed number of threads work through them.
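
A minimal sketch of that idea, assuming Indy 10's TIdIcmpClient is installed (option 2 from the question). The TPingWorker class, the shared TStringList queue and the pool size are illustrative names, not a drop-in solution; error handling is reduced to treating any exception as a failed ping, and note that raw ICMP sockets may require administrator rights on newer Windows versions.

    uses
      Classes, SyncObjs, IdIcmpClient;

    type
      // hypothetical worker: pulls host addresses off a shared list and pings them
      TPingWorker = class(TThread)
      private
        FQueue: TStringList;      // shared list of addresses still to ping
        FLock: TCriticalSection;  // guards the shared list
      protected
        procedure Execute; override;
      public
        constructor Create(AQueue: TStringList; ALock: TCriticalSection);
      end;

    constructor TPingWorker.Create(AQueue: TStringList; ALock: TCriticalSection);
    begin
      // set the fields before the thread starts running
      FQueue := AQueue;
      FLock := ALock;
      inherited Create(False);
    end;

    procedure TPingWorker.Execute;
    var
      Icmp: TIdIcmpClient;
      Host: string;
    begin
      Icmp := TIdIcmpClient.Create(nil);
      try
        Icmp.ReceiveTimeout := 1000;  // one second per reply
        while not Terminated do
        begin
          // pop the next address off the shared list
          FLock.Acquire;
          try
            if FQueue.Count = 0 then
              Exit;                   // nothing left to do
            Host := FQueue[0];
            FQueue.Delete(0);
          finally
            FLock.Release;
          end;

          Icmp.Host := Host;
          try
            Icmp.Ping;
            // Icmp.ReplyStatus.ReplyStatusType now tells you whether Host answered
          except
            // a timeout or socket error simply means "device did not answer"
          end;
        end;
      finally
        Icmp.Free;
      end;
    end;

Something like ten of these workers over a shared list of 300 addresses keeps the load on the network reasonable; WaitFor and Free the threads when the list is empty and collect the results.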

answered Oct 01 '22 by Lloyd


Personally I would go with IOCP. I'm using that very successfully for the transport implementation in NexusDB.

If you want to perform 300 send/receive cycles using blocking sockets and threads in parallel, you end up needing 300 threads.

With IOCP, after you've associated the sockets with the IOCP, you can issue all 300 send operations, and they return instantly, before the operations have actually completed. As the operations complete, so-called completion packets are queued to the IOCP. You then have a pool of threads waiting on the IOCP, and the OS wakes them up as the completion packets come in. In reaction to each completed send operation you can then issue the receive operation. The receive operations also return instantly and, once actually completed, get queued to the IOCP.
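
To make the shape of that concrete, here is a rough sketch of the worker-thread loop using the plain Windows-unit declarations available in Delphi 7. TPingContext is a made-up per-operation record hung off the OVERLAPPED pointer, and the actual overlapped WSASendTo/WSARecvFrom calls on the raw ICMP socket are deliberately left out.

    uses
      Windows, WinSock;

    type
      // hypothetical per-operation context; TOverlapped must be the first field so
      // the pointer handed back by GetQueuedCompletionStatus can be cast to it
      PPingContext = ^TPingContext;
      TPingContext = record
        Overlapped: TOverlapped;
        Socket: TSocket;   // the raw ICMP socket this operation belongs to
        IsSend: Boolean;   // distinguishes completed sends from completed receives
      end;

      // each thread of the pool runs this loop against the same port handle
      TIocpWorker = class(TThread)
      private
        FPort: THandle;
      protected
        procedure Execute; override;
      public
        constructor Create(APort: THandle);
      end;

    constructor TIocpWorker.Create(APort: THandle);
    begin
      FPort := APort;
      inherited Create(False);
    end;

    procedure TIocpWorker.Execute;
    var
      Bytes, Key: DWORD;
      Ovl: POverlapped;
      Ctx: PPingContext;
    begin
      while True do
      begin
        // blocks until a completion packet is queued to the port
        if not GetQueuedCompletionStatus(FPort, Bytes, Key, Ovl, INFINITE) then
        begin
          if Ovl = nil then
            Break;  // the port itself failed or was closed
          // otherwise just this one operation failed; fall through and inspect Ctx
        end;
        if Ovl = nil then
          Break;    // a nil packet posted via PostQueuedCompletionStatus means shutdown
        Ctx := PPingContext(Ovl);
        if Ctx^.IsSend then
        begin
          // the echo request has gone out; this is where the matching overlapped
          // receive would be posted on Ctx^.Socket
        end
        else
        begin
          // an echo reply (Bytes bytes of it) arrived: mark that device as up
        end;
      end;
    end;

The sockets themselves are associated with the port by passing each one as the FileHandle parameter of CreateIoCompletionPort before the first overlapped operation on it; the per-operation contexts then carry everything a worker needs to continue the cycle.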

The really special thing about an IOCP is that it knows which threads belong to it and which of them are currently processing completion packets. The IOCP only wakes up additional threads if the total number of active threads (those not in a kernel-mode wait state) is lower than the concurrency number of the IOCP (by default that equals the number of logical cores on the machine). Also, if there are threads waiting on the IOCP that haven't been started despite queued completion packets (because the number of active threads already equalled the concurrency number), then the moment one of the threads currently processing a completion packet enters a kernel-mode wait state for any reason, one of the waiting threads is started.
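
For illustration only, creating such a port with an explicit concurrency value could look like the following sketch; CreatePingPort is just a made-up name, and passing 0 as the last parameter would give the same one-thread-per-core default.

    uses
      Windows;

    function CreatePingPort: THandle;
    var
      SysInfo: TSystemInfo;
    begin
      GetSystemInfo(SysInfo);
      // INVALID_HANDLE_VALUE plus no existing port creates a fresh completion port;
      // the last parameter is the concurrency value described above
      Result := CreateIoCompletionPort(INVALID_HANDLE_VALUE, 0, 0,
        SysInfo.dwNumberOfProcessors);
    end;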

Threads returning to the IOCP pick up completion packets in LIFO order. That is, if a thread returns to the IOCP and there are completion packets still waiting, that thread directly picks up the next completion packet instead of being put into a wait state while the thread that has been waiting longest is woken up.

Under optimal conditions, you will have a number of threads equal to the number of available cores running concurrently (one on each core), each picking up the next completion packet, processing it, returning to the IOCP and directly picking up the next completion packet, all without ever entering a kernel-mode wait state or requiring a thread context switch.

If you had 300 threads and blocking operations instead, not only would you waste at least 300 MB of address space (for the reserved thread stacks), you would also incur constant thread context switches as one thread enters a wait state (waiting for a send or receive to complete) and the next thread with a completed send or receive wakes up.

answered Sep 30 '22 by Thorsten Engler