When calling WSASend(), I have to pass it a WSAOVERLAPPED instance, and I cannot re-use that WSAOVERLAPPED instance until the previous WSASend() operation has completed (i.e. when a completion packet has been placed in the completion port and I have dequeued it, I guess).
Based on this understanding, I have a WSAOVERLAPPED instance associated with each socket in my application, and I also have a boolean variable (called is_sending_in_progress).
Now let's say that I have a button that, when clicked, sends the string "hello" to the other side.
When the user clicks this button, I check whether is_sending_in_progress is false. If it is false, I call WSASend() and set is_sending_in_progress to true; later, when I call GetQueuedCompletionStatus() and dequeue the completion packet, I set is_sending_in_progress back to false. If the user clicks the button while is_sending_in_progress is true, I display a message box telling the user that nothing can be sent until the previous send operation has completed.
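A minimal sketch of this flag-guarded scheme, with illustrative names (Connection, OnSendButtonClicked) and error handling reduced to the bare minimum:

#include <winsock2.h>

struct Connection                          // illustrative per-socket state
{
    SOCKET        sock = INVALID_SOCKET;
    WSAOVERLAPPED send_overlapped{};       // the single, shared OVERLAPPED per socket
    bool          is_sending_in_progress = false;
    WSABUF        buf{};
};

// Button handler: refuses to send while the previous send is still pending.
void OnSendButtonClicked(Connection& c)
{
    if (c.is_sending_in_progress)
    {
        MessageBoxA(nullptr, "Can't send until the previous send completes.", "Busy", MB_OK);
        return;
    }
    static char hello[] = "hello";
    c.buf.buf = hello;
    c.buf.len = sizeof(hello) - 1;
    ZeroMemory(&c.send_overlapped, sizeof(c.send_overlapped));
    int rc = WSASend(c.sock, &c.buf, 1, nullptr, 0, &c.send_overlapped, nullptr);
    if (rc == 0 || WSAGetLastError() == WSA_IO_PENDING)
        c.is_sending_in_progress = true;   // cleared by the thread that dequeues the completion
}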
I don't think this is a good approach to handling the sending of data with IOCP, because the user would see this message box a lot (especially if the IOCP threads are busy and take some time to set is_sending_in_progress back to false).
So is there a better approach to handling the sending of data with IOCP, such as having multiple WSAOVERLAPPED instances for each socket and using whichever instance is available when calling WSASend()?
You have completely misunderstood IOCP and asynchronous I/O.
"I have a WSAOVERLAPPED instance associated with each socket in my application"
NO!!!
You can have any class/structure associated with (or encapsulating) the socket handle, but for every I/O operation you must allocate another data structure inherited from OVERLAPPED. Understand this clearly: the structure is per operation, not per socket. It must be allocated just before the I/O operation begins and destroyed just after the I/O operation ends.
This structure is somewhat analogous to an IRP and has a similar meaning and lifetime. In it, besides the OVERLAPPED itself, you must keep a pointer to the class instance that encapsulates the socket, a tag describing what kind of I/O operation this is (send, receive, connect, disconnect, etc.), and possibly some additional data related to the operation.
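A minimal sketch of such a per-operation structure (the names IoContext, IoKind and SocketContext are illustrative, not part of any real API):

#include <winsock2.h>

struct SocketContext;                  // your own class that encapsulates the SOCKET handle

enum class IoKind { Send, Recv, Connect, Disconnect };

// One instance per outstanding I/O operation: allocated right before the
// operation is started, destroyed right after its completion is processed.
struct IoContext : OVERLAPPED
{
    SocketContext* socket;             // back-pointer to the object that owns the socket
    IoKind         kind;               // which kind of operation this OVERLAPPED tracks
    WSABUF         buffer;             // operation-specific data (e.g. the bytes to send)

    IoContext(SocketContext* s, IoKind k)
        : OVERLAPPED{}, socket(s), kind(k), buffer{} {}
};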
"and I also have a boolean variable (called is_sending_in_progress)."
Again, NO!!!
We can have multiple I/O operations on the same socket at the same time. We can have multiple send operations in flight on the same socket at once; of course, every operation must have its own unique OVERLAPPED (your custom user-mode IRP). But understand this clearly: the OVERLAPPED is per operation only; the class instance where the socket handle lives must not contain (or inherit from) an OVERLAPPED. We can have a send and a receive operation at the same time, or a receive and a disconnect.
There is only a single restriction: you cannot have several receive operations outstanding at the same time. But this is not an OVERLAPPED restriction; it is simply that if you get two packets of data at once, you cannot know which was sent first and which second, so you lose the data order.
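For example, nothing stops two sends from being in flight on the same socket at once, as long as each one gets its own per-operation structure. A rough sketch, reusing the illustrative IoContext from above (PostSend is a hypothetical helper, not a real API):

#include <winsock2.h>

// Posts one overlapped send; the data must stay valid until the completion arrives.
bool PostSend(SocketContext* ctx, SOCKET s, const char* data, ULONG len)
{
    IoContext* io = new IoContext(ctx, IoKind::Send);    // unique OVERLAPPED per operation
    io->buffer.buf = const_cast<char*>(data);
    io->buffer.len = len;

    int rc = WSASend(s, &io->buffer, 1, nullptr, 0, io, nullptr);
    if (rc == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING)
    {
        delete io;                                        // immediate failure: free the per-operation context
        return false;
    }
    return true;                                          // the completion packet will arrive on the IOCP
}

// Both of these can legitimately be pending at the same time:
//   PostSend(ctx, s, "hello ", 6);
//   PostSend(ctx, s, "world",  5);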
Asynchronous I/O really gives you very great freedom and power, but only if you understand it deeply.
"So is there a better approach to handling the sending of data in IOCP"
When you use asynchronous I/O with IOCP, there is a callback that is invoked when an operation finishes (FileIOCompletionRoutine or IoCompletionCallback). These callbacks are called automatically by the system when the operation completes; if instead you call GetQueuedCompletionStatus yourself, you also need to call such a callback yourself. All operations on the socket should be done inside this callback. If we need to send a big portion of data, we can break it into chunks: send the first chunk directly, and when that send completes, the callback is called, and there we send the next chunk of data. Send each next chunk exactly when the previous send is complete.
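A rough sketch of that pattern when you drive the port with GetQueuedCompletionStatus yourself, again reusing the illustrative IoContext; SendNextChunk stands in for your own code that calls WSASend() with a fresh per-operation context for the next chunk:

#include <winsock2.h>

void SendNextChunk(SocketContext* socket);   // your code: WSASend() of the next chunk with a new IoContext

void WorkerThread(HANDLE iocp)
{
    for (;;)
    {
        DWORD       bytes = 0;
        ULONG_PTR   key   = 0;
        OVERLAPPED* ov    = nullptr;

        // Blocks until some overlapped operation bound to the port completes.
        BOOL ok = GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE);
        if (!ov)
            break;                           // no packet was dequeued (port closed or fatal error)

        IoContext* io = static_cast<IoContext*>(ov);

        if (ok && io->kind == IoKind::Send)
        {
            // The previous chunk has fully completed; only now post the next one.
            SendNextChunk(io->socket);
        }

        delete io;                           // the per-operation context dies with the operation
    }
}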