I don't quite understand the purpose of the first argument to the select function. Wikipedia describes it as the maximum file descriptor across all the sets, plus 1. Why the +1, and why does select need this information?
On *nix systems, file descriptors are just indexes into a system table, and the fd_set structure contains a bitmask that corresponds to those indexes. When a descriptor is added to an fd_set, the corresponding bit is set. select() needs to know the highest descriptor value so it knows how many bits of each mask to scan and where to stop.
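As a minimal sketch (the pipe setup and variable names here are just for illustration, not part of any particular program), this POSIX C snippet shows the usual pattern: track the highest descriptor you put into any set and pass that value plus one as the first argument.

    /* Sketch: watch two pipe read ends with select(), passing maxfd + 1. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/select.h>
    #include <unistd.h>

    int main(void)
    {
        int a[2], b[2];
        if (pipe(a) < 0 || pipe(b) < 0) {
            perror("pipe");
            return EXIT_FAILURE;
        }

        /* Write to one pipe so select() has something to report. */
        (void)write(b[1], "x", 1);

        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(a[0], &readfds);   /* sets the bit at index a[0] */
        FD_SET(b[0], &readfds);   /* sets the bit at index b[0] */

        /* The kernel scans bits 0 .. nfds-1 of each set, so nfds must be
           one greater than the highest descriptor placed in any set. */
        int maxfd = (a[0] > b[0]) ? a[0] : b[0];

        int ready = select(maxfd + 1, &readfds, NULL, NULL, NULL);
        if (ready < 0) {
            perror("select");
            return EXIT_FAILURE;
        }
        printf("%d descriptor(s) ready; b[0] readable: %s\n",
               ready, FD_ISSET(b[0], &readfds) ? "yes" : "no");
        return EXIT_SUCCESS;
    }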
On Windows, sockets are represented by handles to kernel objects, not by indexes. The fd_set structure there contains an array of socket handles plus a count of how many sockets are in the array. select() can simply loop through that array, which is why the first parameter of select() is ignored on Windows.
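For comparison, the Winsock declaration looks roughly like the sketch below (paraphrased from winsock2.h; it is shown only to illustrate the layout, not to be compiled on its own).

    /* Approximate layout of fd_set in Winsock (winsock2.h): */
    typedef struct fd_set {
        u_int  fd_count;              /* number of sockets currently in the set */
        SOCKET fd_array[FD_SETSIZE];  /* the socket handles themselves          */
    } fd_set;

    /* Because select() can walk fd_array and stop after fd_count entries,
       the first parameter serves no purpose on Windows; code there
       conventionally passes 0:
           select(0, &readfds, NULL, NULL, &timeout);                           */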
This is a happenstance detail of the (original) Berkeley sockets implementation. Basically, the implementation used the number of file descriptors as a sizing variable for some temporary internal bit arrays. Since Unix descriptors start with zero, the largest descriptor would be one less than the size of any array with a one-slot-per-descriptor semantic. Hence the "largest-plus-one" requirement. This plus-1 adjustment could have been absorbed into the system call itself, but wasn't.
Ancient history, that's all. The result is that the correct interpretation of the first argument has less to do with descriptor values than with the count of them (i.e., the maximum number of descriptors to be tested). See Section 6.3 of Stevens et al. (a revised and updated edition of Rich Stevens' classic text; if you don't have it, get it!).