Consider:
ssize_t write(int fd, const void *buf, size_t count);
The result has to be signed to account for the -1 error return, and is therefore ssize_t. But why then allow the request to be an unsigned amount (twice as large a range) when the result of asking for more than SSIZE_MAX bytes is not well-defined?
Is there a significant optimization in the kernel by virtue of not checking for signedness of the count parameter? Or something else?
According to the documentation for ssize_t write(int fildes, const void *buf, size_t nbyte):
If the value of nbyte is greater than {SSIZE_MAX}, the result is implementation-defined.
So each particular implementation may handle this situation differently. I would not be surprised if some implementations simply failed with errno set to EFBIG.
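If you want to stay within the behavior that POSIX actually defines, one option is to cap each request at SSIZE_MAX yourself. A minimal sketch of that idea is below; the helper name safe_write_chunk is hypothetical and not part of any standard API.

#include <limits.h>   /* SSIZE_MAX */
#include <unistd.h>   /* write, ssize_t */

/* Hypothetical helper: never ask write() for more than SSIZE_MAX bytes,
 * so the result is always well-defined and fits the signed return type. */
ssize_t safe_write_chunk(int fd, const void *buf, size_t count)
{
    if (count > SSIZE_MAX)
        count = SSIZE_MAX;   /* clamp; the caller loops for the remainder */
    return write(fd, buf, count);
}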
As for the rationale, perhaps size_t is simply the best type to represent the size of the buffer, semantically: it states that this argument is a non-negative size, and nothing else.
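In practice the asymmetry rarely matters, because callers already have to loop over partial writes. A rough sketch of that usual loop, under the assumption of a hypothetical helper named write_all, might look like this; it clamps each chunk to SSIZE_MAX and uses the signed return value to distinguish progress from error:

#include <errno.h>
#include <limits.h>
#include <unistd.h>

/* Assumed helper: write the entire buffer, looping over partial writes.
 * Returns 0 on success, -1 on error (errno set by write). */
int write_all(int fd, const void *buf, size_t count)
{
    const char *p = buf;
    while (count > 0) {
        size_t chunk = count > SSIZE_MAX ? (size_t)SSIZE_MAX : count;
        ssize_t n = write(fd, p, chunk);
        if (n < 0) {
            if (errno == EINTR)
                continue;   /* interrupted before writing anything: retry */
            return -1;      /* real error */
        }
        p += n;
        count -= (size_t)n;
    }
    return 0;
}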