I vaguely remember reading about this a couple of years ago, but I can't find any reference on the net.
Can you give me an example where the NULL macro didn't expand to 0?
Edit for clarity: Today it expands to either ((void *)0), (0), or (0L). However, there were architectures long forgotten where this wasn't true, and NULL expanded to a different address. Something like
#ifdef UNIVAC
#define NULL (0xffff)
#endif
I'm looking for an example of such a machine.
Update to address the issues:
I didn't mean this question in the context of current standards, or to upset people with my incorrect terminology. However, my assumptions were confirmed by the accepted answer:
Later models used [blah], evidently as a sop to all the extant poorly-written C code which made incorrect assumptions.
For a discussion about null pointers in the current standard, see this question.
Traditionally, the NULL macro is an implementation-defined constant representing a null pointer, usually the integer constant 0. In C, the NULL macro can also have type void *.
In C++, the macro NULL is an implementation-defined null pointer constant, which may be an integral constant expression rvalue of integer type that evaluates to zero (until C++11), or an integer literal with value zero or a prvalue of type std::nullptr_t (since C++11).
In C, NULL is typically the integer constant zero, possibly with a C-style cast to void * (i.e. (void *)0). In C++ it cannot be (void *)0; it must be an integer constant expression that evaluates to zero (or, since C++11, nullptr). nullptr itself is a prvalue of the distinct type std::nullptr_t, not an integer. So, for those who believe NULL is the same thing as (void *)0 in both C and C++: it is not.
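To illustrate that difference, here is a minimal C++ sketch; the describe() overloads are invented purely for this example. Because NULL is an integer constant (or GCC's __null) in C++, it tends to select an integer overload, while nullptr, having its own type, always selects the pointer overload:
#include <cstddef>
#include <iostream>

// Hypothetical overloads used only for this illustration.
void describe(int)    { std::cout << "int overload\n"; }
void describe(char *) { std::cout << "char* overload\n"; }

int main() {
    describe(0);        // integer literal: selects the int overload
    describe(NULL);     // NULL is an integer constant (or __null) in C++:
                        // normally selects the int overload, often with a compiler warning
    describe(nullptr);  // std::nullptr_t: always selects the char* overload
}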
The C FAQ has some examples of historical machines with non-0 NULL representations.
From The C FAQ List, question 5.17:
Q: Seriously, have any actual machines really used nonzero null pointers, or different representations for pointers to different types?
A: The Prime 50 series used segment 07777, offset 0 for the null pointer, at least for PL/I. Later models used segment 0, offset 0 for null pointers in C, necessitating new instructions such as TCNP (Test C Null Pointer), evidently as a sop to [footnote] all the extant poorly-written C code which made incorrect assumptions. Older, word-addressed Prime machines were also notorious for requiring larger byte pointers (char *'s) than word pointers (int *'s).
The Eclipse MV series from Data General has three architecturally supported pointer formats (word, byte, and bit pointers), two of which are used by C compilers: byte pointers for char * and void *, and word pointers for everything else. For historical reasons during the evolution of the 32-bit MV line from the 16-bit Nova line, word pointers and byte pointers had the offset, indirection, and ring protection bits in different places in the word. Passing a mismatched pointer format to a function resulted in protection faults. Eventually, the MV C compiler added many compatibility options to try to deal with code that had pointer type mismatch errors.
Some Honeywell-Bull mainframes use the bit pattern 06000 for (internal) null pointers.
The CDC Cyber 180 Series has 48-bit pointers consisting of a ring, segment, and offset. Most users (in ring 11) have null pointers of 0xB00000000000. It was common on old CDC ones-complement machines to use an all-one-bits word as a special flag for all kinds of data, including invalid addresses.
The old HP 3000 series uses a different addressing scheme for byte addresses than for word addresses; like several of the machines above it therefore uses different representations for char * and void * pointers than for other pointers.
The Symbolics Lisp Machine, a tagged architecture, does not even have conventional numeric pointers; it uses the pair <NIL, 0> (basically a nonexistent <object, offset> handle) as a C null pointer.
Depending on the "memory model" in use, 8086-family processors (PC compatibles) may use 16-bit data pointers and 32-bit function pointers, or vice versa.
Some 64-bit Cray machines represent int * in the lower 48 bits of a word; char * additionally uses some of the upper 16 bits to indicate a byte address within a word.
There was a time long ago when NULL was defined as ((void*)0) or in some other machine-specific way, on machines that didn't use the all-zero bit pattern for the null pointer.
Some platforms (certain CDC or Honeywell machines) had a different bit pattern for NULL (i.e., not all zeros), although ISO/ANSI fixed that before C90 was ratified by specifying that 0 was the correct null pointer in source code, regardless of the underlying bit pattern. From C11 6.3.2.3 Pointers /4 (though, as mentioned, this wording goes all the way back to C90):
An integer constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant.
In C compilers, it can expand to '((void *)0)' (but does not have to do so). This does not work for C++ compilers.
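As a sketch of what that guarantee means in practice (nothing here is compiler-specific), all three initializations below produce the same null pointer; the compiler translates the source-level constant 0 into whatever bit pattern the target machine actually uses:
#include <stddef.h>
#include <stdio.h>

int main(void) {
    int *a = 0;          /* the integer constant 0 is a null pointer constant */
    int *b = NULL;       /* NULL expands to such a constant, possibly cast to void * */
    int *c = (void *)0;  /* the explicitly cast form; valid in C */

    /* All three compare equal, even on a machine whose null pointer
       is not the all-zero bit pattern internally. */
    printf("%d %d %d\n", a == b, b == c, c == 0);
    return 0;
}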
See also the C FAQ which has a whole chapter on null pointers.
In the GNU libio.h file:
#ifndef NULL
# if defined __GNUG__ && \
(__GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 8))
# define NULL (__null)
# else
# if !defined(__cplusplus)
# define NULL ((void*)0)
# else
# define NULL (0)
# endif
# endif
#endif
Note the conditional compilation on __cplusplus. C++ can't use ((void*)0) because of its stricter rules about pointer conversion: void * does not convert implicitly to other object pointer types, so the C++ standard requires NULL to be an integer constant expression that evaluates to zero (such as 0). C also allows the (void *)0 definition.
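A minimal sketch of why that matters, using only standard behavior: the following line is valid C, because void * converts implicitly to any object pointer type, but a C++ compiler rejects it, which is exactly why libio.h falls back to (0) when __cplusplus is defined.
#include <stddef.h>

int main(void) {
    int *p = (void *)0;  /* OK as C; C++ rejects the implicit void* -> int* conversion */
    (void)p;             /* silence unused-variable warnings */
    return 0;
}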