In some code I am working on, I have come across strange re-definitions of truth and falsehood. I have seen such things before, done to make checks more strict, but this one seems a little bizarre to me, and I wonder if anyone can tell me what a good reason for such definitions might be. See below, with my comments next to them:
#define FALSE (1 != 1) // why not just define it as "false" or "0"?
#define TRUE (!FALSE) // why not just define it as "true" or "1"?
There are many other oddities in this code base, such as re-definitions of all the standard types:
#define myUInt32 unsigned integer // why not just use uint32_t from stdint?
All these little "quirks" make me feel like I am missing something obvious, but I really can't see the point :(
Note: Strictly speaking this is C++ code, but it may have been ported from a C project.
What do true and false mean? Loosely, something is said to be true when it must be considered correct, and false when it must be considered incorrect.

Historically, C has no boolean data type and uses integers in boolean contexts: 0 represents false and 1 represents true. When a value is tested, zero is interpreted as false and anything non-zero is interpreted as true.
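As a quick illustration of that convention (a minimal C++ sketch, not taken from the code base in question):

#include <cstdio>

int main() {
    int flag = 42;            // any non-zero value
    if (flag)                 // non-zero is interpreted as true
        std::puts("flag is true");
    if (!0)                   // zero is interpreted as false, so !0 is true
        std::puts("zero is false");
    return 0;
}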
The intent appears to be portability.
#define FALSE (1 != 1) // why not just define it as "false" or "0"?
#define TRUE (!FALSE) // why not just define it as "true" or "1"?
These have boolean type in a language that supports it (C++), while still providing useful numeric values in one that doesn't (C: even in C99 and C11, which acquired an explicit boolean data type, the expression (1 != 1) has type int).
Having booleans where possible is good for function overloading.
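For example, in C++ the expression-based macros select a bool overload where a plain 0 or 1 would select an int overload (a minimal sketch; the function report is hypothetical, not from the code base in question):

#include <iostream>

void report(bool) { std::cout << "bool overload\n"; }
void report(int)  { std::cout << "int overload\n"; }

#define FALSE (1 != 1)
#define TRUE (!FALSE)

int main() {
    report(FALSE);   // (1 != 1) has type bool in C++: picks report(bool)
    report(0);       // 0 has type int: picks report(int)
    return 0;
}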
#define myUInt32 unsigned integer // why not just use uint32_t from stdint?
That's fine if <stdint.h> is available. You may take such things for granted, but it's a big wide world out there! This code recognises that.
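To make that concrete, here is a sketch of how such a project-local alias might be set up portably. The HAVE_STDINT_H configuration macro is a common convention (e.g. from autoconf) and is an assumption here, as is the choice of fallback types:

// Hypothetical portability shim, not the actual code base's header.
#if defined(HAVE_STDINT_H)
  #include <stdint.h>
  typedef uint32_t myUInt32;          // the easy case: a modern tool chain
#elif defined(_MSC_VER)
  typedef unsigned __int32 myUInt32;  // older Microsoft compilers
#else
  typedef unsigned int myUInt32;      // assumes int is 32 bits on this platform
#endif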
Disclaimer: Personally, I would stick to the standards and simply state that a compiler released after 1990 is a prerequisite. But we don't know what the underlying requirements are for the project in question.
TRWTF is that the author of the code in question did not explain this in comments alongside.
#define FALSE (1 != 1) // why not just define it as "false" or "0"?
I think it is because the type of the expression (1 != 1) depends on the language's support for boolean values: if it is C++, the type is bool, else it is int. On the other hand, 0 is always int in both languages, and false is not recognized in C (at least not without <stdbool.h>).
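You can observe the difference with sizeof. This minimal sketch compiles as either C or C++ (which is why it uses <stdio.h> rather than <cstdio>):

#include <stdio.h>

int main(void) {
    /* Compiled as C++ this prints sizeof(bool), typically 1;
       compiled as C it prints sizeof(int), typically 4. */
    printf("sizeof(1 != 1) = %zu\n", sizeof(1 != 1));
    return 0;
}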