I looked at the implementations of inet_ntoa like this and this, and I am wondering why they both allocate a buffer of 18 characters.
If I take the maximum-length IPv4 address string, 255.255.255.255, the size I need is 3 characters for each octet, 3 for the dots, and 1 for the null terminator:
3*4 + 3 + 1 = 16.
So why do we need those 2 extra characters?
The inet_ntoa implementation from the first link:
static __thread char buffer[18];

char *
inet_ntoa (struct in_addr in)
{
  unsigned char *bytes = (unsigned char *) &in;
  __snprintf (buffer, sizeof (buffer), "%d.%d.%d.%d",
              bytes[0], bytes[1], bytes[2], bytes[3]);
  return buffer;
}
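(Not part of the linked sources, just a small sketch of how this interface behaves: every call in a given thread reuses the same static buffer, so a pointer returned earlier ends up pointing at the newest result. The addresses here are arbitrary examples.)

#include <stdio.h>
#include <arpa/inet.h>   /* inet_ntoa, inet_addr, struct in_addr */

int main (void)
{
  struct in_addr a = { .s_addr = inet_addr ("203.0.113.7") };
  struct in_addr b = { .s_addr = inet_addr ("10.0.0.1") };

  char *p = inet_ntoa (a);   /* p points into the per-thread static buffer */
  printf ("%s\n", p);        /* prints 203.0.113.7 */

  inet_ntoa (b);             /* overwrites that same buffer */
  printf ("%s\n", p);        /* now prints 10.0.0.1 */
  return 0;
}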
Your computation is correct: only sixteen bytes are needed to store the dotted-decimal address string produced by inet_ntoa(), including its terminator.
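You can check the arithmetic directly; the following is just a sketch of mine, not taken from the linked implementations, that formats the worst-case address with snprintf into a 16-byte buffer:

#include <stdio.h>

int main (void)
{
  char buf[16];   /* 3 digits * 4 octets + 3 dots + 1 terminator */
  int n = snprintf (buf, sizeof (buf), "%d.%d.%d.%d", 255, 255, 255, 255);

  /* snprintf returns the length it wanted to write, not counting the NUL,
     so n is 15 here and the string fits, terminator included, in 16 bytes.  */
  printf ("needed %d characters plus the terminator; buffer holds %zu\n",
          n, sizeof (buf));
  return 0;
}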
The relevant documentation and specifications have specified the current format at least as far back as POSIX.1-2004, and as far as I am aware, no implementation has ever been released that produced any other format, so we can only speculate about why some implementations provide extra space. Possibilities include, but are not necessarily limited to, simple error or excess caution when the buffer was originally sized, and the buffer having served, or having been intended to serve, some additional purpose that needed more space (a multi-use buffer).
That the same extra bytes are observed today in many implementations may support the multi-use buffer alternative, but that observation is also consistent with any explanation for those bytes appearing in some early implementation, maybe BSD, and being propagated from there to many subsequent ones. I'm inclined to favor the latter explanation.