Are there well-known "profiles" of the C standard?

I write C code that makes certain assumptions about the implementation, such as:

  • char is 8 bits.
  • signed integral types are two's complement.
  • >> on signed integers sign-extends.
  • integer division rounds negative quotients towards zero.
  • double is an IEEE-754 double and can be type-punned to and from uint64_t with the expected result (see the sketch after this list).
  • comparisons involving NaN always evaluate to false.
  • a null pointer is all zero bits.
  • all data pointers have the same representation, and can be converted to size_t and back again without information loss.
  • pointer arithmetic on char* is the same as ordinary arithmetic on size_t.
  • function pointers can be cast to void* and back again without information loss.
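
A minimal sketch (mine, not part of the question) of what two of these assumptions look like in code; it is only well-defined on implementations where the assumptions hold:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Assumes double is IEEE-754 and shares its 64-bit representation with
   uint64_t. memcpy-based punning sidesteps strict-aliasing rules; the
   assumption is about the bit layout, not the copying mechanism. */
static uint64_t double_bits(double d)
{
    uint64_t u;
    memcpy(&u, &d, sizeof u);
    return u;
}

/* Assumes a data pointer survives a round trip through size_t. */
static void *size_t_roundtrip(void *p)
{
    size_t n = (size_t)p;
    return (void *)n;
}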

Now, all of these are things that the C standard doesn't guarantee, so strictly speaking my code is non-portable. However, they happen to be true on the architectures and ABIs I'm currently targeting, and after careful consideration I've decided that the risk they will fail to hold on some architecture that I'll need to target in the future is acceptably low compared to the pragmatic benefits I derive from making the assumptions now.

The question is: how do I best document this decision? Many of my assumptions are made by practically everyone (non-octet chars? or sign-magnitude integers? on a future, commercially successful, architecture?). Others are more arguable -- the most risky probably being the one about function pointers. But if I just list everything I assume beyond what the standard gives me, the reader's eyes are just going to glaze over, and he may not notice the ones that actually matter.

So, is there some well-known set of assumptions about being a "somewhat orthodox" architecture that I can incorporate by reference, and then only document explicitly where I go beyond even that? (Effectively such a "profile" would define a new language that is a superset of C, but it might not acknowledge that in so many words -- and it may not be a pragmatically useful way to think of it either).

Clarification: I'm looking for a shorthand way to document my choices, not for a way to test automatically whether a given compiler matches my expectations. The latter is obviously useful too, but does not solve everything. For example, if a business partner contacts us saying, "we're making a device based on Google's new G2015 chip; will your software run on it?" -- then it would be nice to be able to answer "we haven't worked with that arch yet, but it shouldn't be a problem if it has a C compiler that satisfies such-and-such".

To clarify even more, since somebody has voted to close as "not constructive": I'm not looking for discussion here, just for pointers to actual, existing, formal documents that can simplify my documentation by being incorporated by reference.

asked Aug 24 '11 by hmakholm left over Monica


2 Answers

I would introduce a STATIC_ASSERT macro and put all your assumptions in such asserts.
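
For example (a sketch of mine, not part of the original answer), on pre-C11 compilers a STATIC_ASSERT can be built from a typedef that is ill-formed when the condition is false; C11 and later provide _Static_assert directly. The compile-time-checkable subset of the question's assumptions might then read:

#include <limits.h>
#include <stddef.h>
#include <stdint.h>

/* Ill-formed (negative array size) when cond is false; the macro name
   and check names are illustrative. */
#define STATIC_ASSERT(cond, name) \
    typedef char static_assert_##name[(cond) ? 1 : -1]

STATIC_ASSERT(CHAR_BIT == 8, char_is_8_bits);
STATIC_ASSERT((-1 & 3) == 3, twos_complement);       /* -1 has all bits set */
STATIC_ASSERT((-1 >> 1) == -1, shift_sign_extends);  /* implementation-defined */
STATIC_ASSERT(-7 / 2 == -3, division_rounds_to_zero);
STATIC_ASSERT(sizeof(double) == sizeof(uint64_t), double_is_64_bits);
STATIC_ASSERT(sizeof(void *) == sizeof(size_t), pointer_fits_size_t);

/* Representation properties such as "a null pointer is all zero bits"
   or "function pointers round-trip through void *" cannot be expressed
   as constant expressions and still need prose or start-up checks. */

This tests a particular compiler rather than documenting the contract, but it does make each assumption fail loudly at build time.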

answered by Andreas Brinck


Unfortunately, not only is there a lack of standards for a dialect of C that combines the extensions which emerged as de facto standards during the 1990s (two's complement, universally-ranked pointers, etc.), but compiler trends are moving in the opposite direction. Given the following requirements for a function:

* Accept int parameters x, y, z.
* Return 0 if x-y is computable as int and is less than z.
* Return 1 if x-y is computable as int and is not less than z.
* Return either 0 or 1 if x-y is not computable as int.

The vast majority of compilers in the 1990s would have allowed:

int diffCompare(int x, int y, int z)
{ return (x-y) >= z; }

On some platforms, in cases where the difference x-y was not computable as int, it would be faster to compute a "wrapped" two's-complement value of x-y and compare that, while on others it would be faster to perform the calculation in a type larger than int and compare that. By the late 1990s, however, nearly every C compiler would compile the above code using whichever of those approaches was more efficient on its hardware platform (source-level equivalents of both are sketched below).
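
As an illustrative sketch (mine, not from the answer), source-level equivalents of those two strategies might look like this; they agree whenever x-y is computable as int and differ only in the cases where either 0 or 1 is acceptable:

/* Strategy 1: wrapped two's-complement subtraction, then signed compare.
   Converting the unsigned difference back to int is implementation-
   defined in ISO C, but is a plain bit reinterpretation on
   two's-complement targets, which is the assumption here. */
static int diffCompare_wrapped(int x, int y, int z)
{
    return (int)((unsigned)x - (unsigned)y) >= z;
}

/* Strategy 2: compute the difference in a wider type; long long cannot
   overflow for any pair of int inputs. */
static int diffCompare_wide(int x, int y, int z)
{
    return ((long long)x - y) >= z;
}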

Since 2010, however, compiler writers seem to have taken the attitude that if a computation overflows, the compiler should neither perform it in whatever fashion is normal for its platform and let what happens happen, nor recognizably trap (which would break some code, but could prevent certain kinds of errant program behavior), but should instead treat the overflow as an excuse to negate the laws of time and causality. Consequently, even if a programmer would have been perfectly happy with any behavior a 1990s compiler would have produced, the programmer must replace the code with something like:

{ return ((long)x-y) >= z; }

which would greatly reduce efficiency on many platforms, or

{ return x+(INT_MAX+1U)-y >= z+(INT_MAX+1U); }

which requires specifying a bunch of calculations the programmer doesn't actually want, in the hope that the optimizer will omit them (adding INT_MAX+1U maps signed values monotonically onto the unsigned range, so an optimizer can replace the biased unsigned comparison with a plain signed comparison and drop the additions), and which would reduce efficiency on a number of platforms (especially DSPs) where the form using (long) would have been more efficient.

It would be helpful if there were standard profiles that let programmers avoid nasty kludges like the INT_MAX+1U trick above, but if current trends continue, such kludges will become more and more necessary.

answered by supercat