This is also related to char32_t and any intXX_t. The specification points out that:
2.14.3.2:
The value of a char16_t literal containing a single c-char is equal to its ISO 10646 code point value, provided that the code point is representable with a single 16-bit code unit.
5.3.3.1:
[..] in particular [..] sizeof(char16_t), sizeof(char32_t), and sizeof(wchar_t) are implementation-defined
I cannot see anything about the intXX_t types, apart from the comment that they are "optional" (18.4.1).
If a char16_t isn't guaranteed to be 2 bytes, is it guaranteed to be 16 bits wide (even on architectures where 1 byte != 8 bits)?
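For concreteness, here is a minimal sketch (assuming a C++11 or later compiler) of the properties I would like to rely on; the static_asserts express what I hope is guaranteed, not what I have found in the standard:

```cpp
#include <climits>  // CHAR_BIT

// What I hope is guaranteed: char16_t provides at least 16 bits,
// regardless of how many bits a byte has on this platform.
static_assert(sizeof(char16_t) * CHAR_BIT >= 16,
              "char16_t is at least 16 bits wide");

// What I do NOT expect to be guaranteed: exactly 2 bytes.
// On a platform with CHAR_BIT == 16, sizeof(char16_t) could be 1.
// static_assert(sizeof(char16_t) == 2, "not required by the standard");
```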
Types char16_t and char32_t denote distinct types with the same size, signedness, and alignment as uint_least16_t and uint_least32_t, respectively, in <cstdint>, called the underlying types.
This means char16_t is at least 16 bits wide, but it may be larger.
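As a sketch (assuming a C++11 or later compiler), the quoted wording lets you assert the following; the checks simply mirror the requirements above rather than adding new ones:

```cpp
#include <climits>      // CHAR_BIT
#include <cstdint>      // std::uint_least16_t
#include <type_traits>  // std::is_unsigned

// Same size and alignment as uint_least16_t, per the quoted wording.
static_assert(sizeof(char16_t) == sizeof(std::uint_least16_t),
              "same size as uint_least16_t");
static_assert(alignof(char16_t) == alignof(std::uint_least16_t),
              "same alignment as uint_least16_t");

// Same signedness as uint_least16_t, i.e. unsigned.
static_assert(std::is_unsigned<char16_t>::value,
              "char16_t is unsigned");

// uint_least16_t has at least 16 bits, so char16_t does too,
// but nothing stops it from being wider.
static_assert(sizeof(char16_t) * CHAR_BIT >= 16,
              "at least 16 bits, possibly more");
```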
But I also believe:
The value of a char16_t literal containing a single c-char is equal to its ISO 10646 code point value, provided that the code point is representable with a single 16-bit code unit.
provides the same guarantee, though less explicitly, since you have to know that ISO 10646 is the UCS (note that the UCS is compatible with, but not identical to, Unicode).
The value of a char16_t literal containing a single c-char is equal to its ISO 10646 code point value, provided that the code point is representable with a single 16-bit code unit.
This is impossible to satisfy if char16_t isn't at least 16 bits wide, so by contradiction it is guaranteed to be at least that wide.
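As an illustration (my example, not part of the quoted text), a literal whose code point fits in one 16-bit code unit can already exercise the full 16-bit range:

```cpp
// U+FFFD (the replacement character) fits in a single 16-bit code unit,
// and its value 0xFFFD occupies all 16 bits of an unsigned 16-bit type,
// so char16_t cannot be narrower than that.
static_assert(u'\uFFFD' == 0xFFFD,
              "single-code-unit literal equals its code point value");
```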