In my current server implementation, the main loop looks something like this:
void loop() {
    fd_set readfds;
    while (true) {
        // step 1: clear the set
        FD_ZERO(&readfds);
        // step 2: add every active socket to the set
        loop_through_sockets_and_add_active_sockets_to(&readfds);
        // step 3: wait for activity
        switch (select(FD_SETSIZE, &readfds, 0, 0, &tv)) {
        case SOCKET_ERROR:
            patia->receiveEvent(Error, net::getError());
            return;
        case 0:
            return;
        }
        // step 4: loop through the sockets and check, using FD_ISSET,
        // which read fd's have incoming data
    }
}
Now, a better approach would be not to clear the fd_set on every iteration, but instead to add and remove descriptors with FD_SET and FD_CLR only when connections are opened or closed.
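That pattern can be sketched as follows. Since select() overwrites the set you pass it, the usual trick is to keep a persistent "master" set, updated only on connect/disconnect, and copy it into a working set before each call. The names on_connect, on_disconnect, and wait_for_readable are hypothetical placeholders, not part of any API:

```c
#include <stddef.h>
#include <sys/select.h>

/* Persistent set of active sockets; updated only when connections
 * come and go, never rebuilt per iteration. Assumed to start empty
 * (call FD_ZERO on it once at startup). */
static fd_set master_set;
static int    max_fd = 0;

void on_connect(int fd) {
    FD_SET(fd, &master_set);   /* add once, when the socket is accepted */
    if (fd > max_fd) max_fd = fd;
}

void on_disconnect(int fd) {
    FD_CLR(fd, &master_set);   /* remove once, when the socket closes */
}

int wait_for_readable(fd_set *readfds, struct timeval *tv) {
    *readfds = master_set;     /* cheap struct copy replaces the rebuild loop */
    return select(max_fd + 1, readfds, NULL, NULL, tv);
}
```

The struct copy is O(FD_SETSIZE/8) bytes of memcpy per iteration, which is far cheaper than walking a connection list to re-add each descriptor.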
My question is: how can you loop through the fd_set after select() returns without testing every member of the set with FD_ISSET?
I mean, when you have 4,000 active connections, every time there is incoming data the loop above may have to scan up to 4,000 sockets before finding the right one. The complexity would approach O(n^2) if most of the sockets are active a lot!
My question is: how can you loop through the fd_set after select() returns without testing every member of the set with FD_ISSET?
You can't.
There is a slight optimisation, though: select() returns the number of ready descriptors, so if you keep a count of how many you have processed, you can stop as soon as you have handled them all instead of scanning to the end of the set.
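A minimal sketch of that early-exit scan, assuming the server keeps its sockets in a plain array (the names collect_ready, sockets, and nsockets are illustrative, not from the original code):

```c
#include <sys/select.h>

/* Scan the socket array for ready descriptors, but stop as soon as
 * select()'s return count is exhausted rather than always walking to
 * the end of the array. Writes the ready fds into `out` and returns
 * how many were found. */
int collect_ready(int nready, const fd_set *readfds,
                  const int *sockets, int nsockets, int *out) {
    int found = 0;
    for (int i = 0; i < nsockets && found < nready; ++i) {
        if (FD_ISSET(sockets[i], readfds))
            out[found++] = sockets[i];
    }
    return found;
}
```

When only a few of the 4,000 sockets are ready and they happen to sit early in the array, this cuts the scan short; in the worst case it is still one full O(n) pass.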
I mean, when you have 4,000 active connections, every time there is incoming data the loop above may have to scan up to 4,000 sockets before finding the right one. The complexity would approach O(n^2) if most of the sockets are active a lot!
I don't see where you get O(n^2) from. Surely, after returning from select(), you go through the set once, processing each ready descriptor along the way. Even with 4,000 ready IO descriptors, the overhead of looping through an array of 4,000 in-memory C objects is fairly insignificant.