The function signature for write(2) is ssize_t write(int fd, const void *buf, size_t count). Generally, the maximum value of size_t is greater than that of ssize_t. Does this mean the amount of data that write can actually write is SSIZE_MAX rather than SIZE_MAX? If that is not the case, what happens, with respect to overflow, when the number of bytes written is greater than SSIZE_MAX?
I am essentially wondering whether the amount of data written by write is bounded by SSIZE_MAX or SIZE_MAX.
The type ssize_t is defined by POSIX as a signed type required to be capable of storing at least 32767 (_POSIX_SSIZE_MAX), with no other guarantees. So its maximum value can be less than the maximum value of size_t.
ssize_t's POSIX definition:
ssize_t
Used for a count of bytes or an error indication.
So it's possible that the number of bytes you request to write is greater than what ssize_t can hold. In that case, POSIX leaves the result to the implementation.
From write()'s POSIX spec:
ssize_t write(int fildes, const void *buf, size_t nbyte);
If the value of nbyte is greater than {SSIZE_MAX}, the result is implementation-defined.
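One way to stay inside the range POSIX actually defines is to cap each request at SSIZE_MAX before calling write(). The sketch below is only an illustration of that idea; write_capped is a hypothetical helper name, not a standard function:

```c
#include <limits.h>
#include <unistd.h>

/* Hypothetical helper: issue a single write(2) call, but never ask for
 * more than SSIZE_MAX bytes, so the request stays within the range whose
 * behaviour POSIX defines (anything larger is implementation-defined). */
static ssize_t write_capped(int fd, const void *buf, size_t count)
{
    if (count > SSIZE_MAX)
        count = SSIZE_MAX;          /* clamp to the portable maximum */
    return write(fd, buf, count);   /* result now always fits in ssize_t */
}
```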
The POSIX specification for write() says:
If the value of nbyte is greater than {SSIZE_MAX}, the result is implementation-defined.
So any attempt to write more than SSIZE_MAX bytes leads to behaviour that is not mandated by POSIX, but that must be documented by the system (it is implementation-defined, not undefined, behaviour). Different systems may handle it differently: there's nothing to stop one system from reporting an error (perhaps with errno set to EINVAL) while another writes SSIZE_MAX bytes and reports that, leaving the application to try again with the remainder, and other systems could be inventive and do things differently still.
If you've got a 64-bit system, SSIZE_MAX is likely larger than the amount of disk space in the biggest single data centre in the world (possibly by an order of magnitude or more, even allowing for the NSA and Google), so you're unlikely to run into real problems with this. On 32-bit systems, though, you could easily have more than 2 GiB of space, and if ssize_t is 32-bit, you have to deal with all of this. (On Mac OS X 10.10.3, a 32-bit build has a 4-byte size_t and ssize_t, at least by default.)
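If you want to see what these limits are on your own platform, a small program along the following lines (a sketch added here for illustration, not part of the original answer) prints the sizes and maximum values of both types:

```c
#include <limits.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>

/* Print the limits on this platform; on a typical 32-bit build both
 * size_t and ssize_t are 4 bytes, so SSIZE_MAX is roughly half of SIZE_MAX. */
int main(void)
{
    printf("sizeof(size_t)  = %zu, SIZE_MAX  = %ju\n",
           sizeof(size_t), (uintmax_t)SIZE_MAX);
    printf("sizeof(ssize_t) = %zu, SSIZE_MAX = %jd\n",
           sizeof(ssize_t), (intmax_t)SSIZE_MAX);
    return 0;
}
```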
Yes, the amount of data that can be written in a single call to write is limited to what can be held in a ssize_t. For clarification, see the relevant glibc documentation page. To quote that page, "Your program should always call write in a loop, iterating until all the data is written." (emphasis added) That page also clarifies that ssize_t is used to represent the size of blocks that can be read or written in a single operation.
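As an illustration of that advice, the sketch below loops until everything has been written, capping each request at SSIZE_MAX and retrying when the call is interrupted by a signal; write_all is a hypothetical helper name used only for this example:

```c
#include <errno.h>
#include <limits.h>
#include <unistd.h>

/* Hypothetical write_all: call write(2) in a loop, as the glibc manual
 * advises, until every byte is written.  Each individual request is kept
 * at or below SSIZE_MAX so the return value always fits in ssize_t. */
static int write_all(int fd, const void *buf, size_t count)
{
    const char *p = buf;

    while (count > 0) {
        size_t chunk = count > SSIZE_MAX ? (size_t)SSIZE_MAX : count;
        ssize_t n = write(fd, p, chunk);

        if (n < 0) {
            if (errno == EINTR)
                continue;          /* interrupted by a signal: retry */
            return -1;             /* real error: errno is set by write() */
        }
        p += n;                    /* advance past the bytes written */
        count -= (size_t)n;
    }
    return 0;
}
```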