
Is it okay to compare a pointer and an integer in C?

Tags:

c

pointers

I'm writing some code that maps virtual addresses to physical addresses.

I have code along these lines:

if (address > 0xFFFF) {
   Status = XST_FAILURE; // Out of range
} else if (address <= 0xCFFF || address >= 0xD400) {
   // Write to OCM
   Xil_Out8(OCM_HIGH64_BASEADDR + OCM_OFFSET + address, data);
} else { // (address >= 0xD000)
   // Write to external CCA
   Status = ext_mem_write(address, data);
}

I get a compiler warning: comparison between pointer and integer [enabled by default]

I realize that I'm comparing two different types (pointer and integer), but is this an issue? After all, comparing a pointer to an integer is exactly what I want to do.

Would it be cleaner to define pointer constants to compare to instead of integers?

const int *UPPER_LIMIT = 0xFFFF;
...
if (address > UPPER_LIMIT) {
    ....
asked Mar 13 '15 by Spark

2 Answers

The clean way is to use constants of type uintptr_t, which is defined as an unsigned integer type that any valid void * can be converted to and back again without loss.

It should be available via #include <stdint.h>. If it is not defined there, that indicates either that your compiler doesn't follow the C standard or that the system does not have a flat memory model.

The conversion is intended to work in the "obvious" way, i.e. one integer per byte in ascending address order. The standard doesn't absolutely guarantee that, but as a quality-of-implementation matter it's hard to see anything else happening.

Example:

#include <stdint.h>

uintptr_t foo = 0xFFFF;

void test(char *ptr)
{
    if ( (uintptr_t)ptr < foo ) {
         // do something...
    }
}

This is well-defined by the C standard. The version where you use void * instead of uintptr_t is undefined behaviour, although it may appear to work if your compiler isn't too aggressive.
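For contrast, here is a rough sketch of the pointer-constant variant the question floats (UPPER_LIMIT and test_bad are made-up names); comparing pointers that don't point into the same object is what makes it undefined:

// Hypothetical pointer constant, along the lines of the question's idea.
const char *UPPER_LIMIT = (const char *)0xFFFF;

void test_bad(char *ptr)
{
    // Undefined behaviour: ptr and UPPER_LIMIT don't point into (or one
    // past the end of) the same object, so the relational comparison is
    // not defined by the standard.
    if (ptr < UPPER_LIMIT) {
        // do something...
    }
}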

answered Oct 18 '22 by M.M


That's probably why the Linux kernel uses unsigned long for addresses (note the difference: a pointer points to an object, while an address is an abstract value representing a location in memory).
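For illustration, a minimal sketch of the question's range check with the address carried around as an unsigned integer (uintptr_t); the Xilinx names (Xil_Out8, XST_FAILURE, OCM_HIGH64_BASEADDR, OCM_OFFSET) and ext_mem_write come from the question, write_mapped is a made-up name, and the include paths are assumptions based on the Xilinx standalone BSP:

#include <stdint.h>
#include "xil_io.h"    // Xil_Out8 (assumed BSP header)
#include "xstatus.h"   // XST_SUCCESS, XST_FAILURE (assumed BSP header)

// The question's own helper, assumed to take an integer address.
extern int ext_mem_write(uintptr_t address, uint8_t data);

int write_mapped(uintptr_t address, uint8_t data)
{
    int Status = XST_SUCCESS;

    if (address > 0xFFFF) {
        Status = XST_FAILURE;                    // out of range
    } else if (address <= 0xCFFF || address >= 0xD400) {
        // Write to OCM
        Xil_Out8(OCM_HIGH64_BASEADDR + OCM_OFFSET + address, data);
    } else {
        // Write to external CCA
        Status = ext_mem_write(address, data);
    }
    return Status;
}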

Here is how it looks from the compiler's perspective:

  1. The C standard doesn't define how to compare an int (arithmetic type) literal like 0xFFFF with a pointer such as address -- see paragraph 6.5.8.
  2. So the compiler has to convert the operands somehow. Both conversions are implementation-defined, as paragraph 6.3.2.3 states. Here are a couple of surprising decisions the compiler is entitled to make:
    • Because 0xFFFF is probably an int -- see 6.4.4 -- it may coerce the pointer to int, and if sizeof(int) < sizeof(void*), you will lose the higher bytes (see the sketch after this list).
    • One can imagine even stranger situations in which 0xFFFF is sign-extended to 0xFFFFFFFF (it shouldn't happen, but why not).
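As a rough sketch of the truncation in the first bullet (the cast here is explicit, unlike the implicit coercion described above, but the loss of high bytes is the same):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    char c;
    char *p = &c;

    // On a typical LP64 system int is 32 bits and pointers are 64 bits,
    // so squeezing the pointer value into an int discards the upper bits
    // (the narrowing conversion itself is implementation-defined).
    int truncated = (int)(uintptr_t)p;

    printf("full pointer value: %#llx\n", (unsigned long long)(uintptr_t)p);
    printf("as int:             %#x\n", (unsigned int)truncated);
    return 0;
}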

Of course, none of the things in (2) should happen; modern compilers are smart enough. But they can happen (I assume you're writing something embedded, where it is more likely), and that's why the compiler raises a warning.

Here is one practical example of such "crazy compiler things": in GCC 4.8 the optimizer started to treat signed integer overflow as UB (undefined behaviour) and to omit instructions on the assumption that the programmer does not rely on overflow: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61569
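A tiny sketch of the kind of assumption involved (a classic signed-overflow example, not taken from the bug report above):

// With optimizations on, GCC is allowed to assume that signed overflow
// never happens, so it may fold this whole function to "return 0".
int increments_past_max(int x)
{
    return x + 1 < x;   // only "true" if x + 1 overflowed, which is UB
}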

I'm referring to N1570, the C11 standard draft.

answered Oct 18 '22 by myaut