
Why is sizeof(bool) not defined to be one by the Standard itself?

The size of char, signed char and unsigned char is defined to be 1 byte by the C++ Standard itself. I'm wondering why it didn't also define sizeof(bool)?

C++03 Standard §5.3.3/1 says,

sizeof(char), sizeof(signed char) and sizeof(unsigned char) are 1; the result of sizeof applied to any other fundamental type (3.9.1) is implementation-defined. [Note: in particular, sizeof(bool) and sizeof(wchar_t) are implementation-defined.69) ]

I understand the rationale that sizeof(bool) cannot be less than one byte. But is there any rationale for allowing it to be greater than 1 byte? I'm not saying that implementations define it to be greater than 1, but the Standard left it implementation-defined, as if it may be greater than 1.

If there is no reason for sizeof(bool) to be greater than 1, then I don't understand why the Standard didn't define it as just 1 byte, as it did for sizeof(char) and all its variants.

asked Feb 21 '11 by Nawaz



2 Answers

The other likely size for it is that of int, being the "efficient" integer type for the platform.

On architectures where it makes any difference whether the implementation chooses 1 or sizeof(int), there could be a trade-off between size (but if you're happy to waste 7 bits per bool, why shouldn't you be happy to waste 31? Use bitfields when size matters) and performance (but when is storing and loading bool values going to be a genuine performance issue? Use int explicitly when speed matters). So implementation flexibility wins: if for some reason 1 would be atrocious in terms of performance or code size, the implementation can avoid it.
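
A minimal sketch of the bitfield option mentioned above (the struct and field names are just illustrative): several flags can share a single byte no matter what size the implementation picks for bool.

    #include <cstdint>
    #include <iostream>

    // Eight one-bit fields pack into a single byte, regardless of
    // what sizeof(bool) happens to be on this implementation.
    struct Flags {
        std::uint8_t ready   : 1;
        std::uint8_t visible : 1;
        std::uint8_t dirty   : 1;
    };

    int main() {
        Flags f{};      // value-initialize: all flags start at 0
        f.dirty = 1;

        std::cout << sizeof(bool)  << '\n'   // implementation-defined (usually 1)
                  << sizeof(Flags) << '\n';  // typically 1: the three flags share a byte
    }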

answered Oct 17 '22 by Steve Jessop


As @MSalters pointed out, some platforms work more efficiently with larger data items.

Many "RISC" CPUs (e.g., MIPS, PowerPC, early versions of the Alpha) have/had a considerably harder time working with data smaller than one word, so implementations on those platforms often make bool a full word as well. IIRC, with at least some compilers on the Alpha a bool actually occupied 64 bits.

gcc for PowerPC Macs defaulted to using 4 bytes for a bool, but had a switch to change that to one byte if you wanted to.

Even for the x86, there's some advantage to using a 32-bit data item. gcc for the x86 has (or at least used to have -- I haven't looked recently at all) a define in one of its configuration files for BOOL_TYPE_SIZE (going from memory, so I could have that name a little wrong) that you could set to 1 or 4, and then re-compile the compiler to get a bool of that size.
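
Since different compilers have picked different sizes, here is a small sketch (not from the original answer) showing how code can check the size it actually gets instead of assuming it:

    #include <cstdio>

    int main() {
        // sizeof(bool) is implementation-defined, so print what this
        // compiler/platform actually uses rather than assuming 1.
        std::printf("sizeof(bool) = %zu\n", sizeof(bool));

        // If code genuinely depends on a 1-byte bool (say, for a fixed
        // binary layout), state the assumption so a port to a platform
        // with a larger bool fails at compile time instead of silently.
        static_assert(sizeof(bool) == 1, "this code assumes a 1-byte bool");
        return 0;
    }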

Edit: As for the reason behind this, I'd say it's a simple reflection of a basic philosophy of C and C++: leave as much room as reasonable for the implementation to optimize/customize its behavior. Require specific behavior only when there's an obvious, tangible benefit and it's unlikely to be a major liability, especially if the requirement would make it substantially harder to support C++ on some particular platform (though, of course, if the platform is sufficiently obscure, it might get ignored).

answered Oct 17 '22 by Jerry Coffin