
C: finding the maximum and minimum of the type of an arithmetic expression

I need to find the maximum and minimum values of the type of an arbitrary C expression that has no side effects. The following macros work on my machine. Will they work on all platforms? If not, can they be modified to work? My intent is to subsequently use these to implement macros like SAFE_MUL(a,b) in place of a*b, where SAFE_MUL would include a check for multiplication overflow.

EDIT: the results are now cast to fixed types, as suggested by Steve.

#include <stdio.h>
#include <limits.h>

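/* Nonzero if the promoted type of exp is signed: 0-1 is -1 in a
   signed type, but wraps to a large positive value in an unsigned one. */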
#define IS_SIGNED(exp) (((exp)*0-1) < 0)

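/* For an unsigned promoted type, (exp)*0-1 wraps around to the
   all-ones value, i.e. the type's maximum. */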
#define TYPE_MAX_UNSIGNED(exp) ((exp)*0-1)

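/* Select the limit by size; assumes that types of equal size have
   equal ranges (see the note on padding bits in the answer below). */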
#define TYPE_MAX_SIGNED(exp) ( \
    sizeof (exp) == sizeof (int) \
    ? \
    INT_MAX \
    : \
    ( \
        sizeof (exp) == sizeof (long) \
        ? \
        LONG_MAX \
        : \
        LLONG_MAX \
    ) \
)

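/* Cast so the result is always unsigned long long (the value always
   fits), letting one printf format work for every expression. */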
#define TYPE_MAX(exp) ((unsigned long long)( \
    IS_SIGNED (exp) \
    ? \
    TYPE_MAX_SIGNED (exp) \
    : \
    TYPE_MAX_UNSIGNED (exp) \
))

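/* Same size-based selection as TYPE_MAX_SIGNED, for the minimum. */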
#define TYPE_MIN_SIGNED(exp) ( \
    sizeof (exp) == sizeof (int) \
    ? \
    INT_MIN \
    : \
    ( \
        sizeof (exp) == sizeof (long) \
        ? \
        LONG_MIN \
        : \
        LLONG_MIN \
    ) \
)

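/* The type's minimum if the expression is signed, otherwise 0;
   always a long long. */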
#define TYPE_MIN(exp) ((long long)( \
    IS_SIGNED (exp) \
    ? \
    TYPE_MIN_SIGNED (exp) \
    : \
    (exp)*0 \
))

int
main (void) {

    printf ("TYPE_MAX (1 + 1) = %lld\n", TYPE_MAX (1 + 1));
    printf ("TYPE_MAX (1 + 1L) = %lld\n", TYPE_MAX (1 + 1L));
    printf ("TYPE_MAX (1 + 1LL) = %lld\n", TYPE_MAX (1 + 1LL));
    printf ("TYPE_MAX (1 + 1U) = %llu\n", TYPE_MAX (1 + 1U));
    printf ("TYPE_MAX (1 + 1UL) = %llu\n", TYPE_MAX (1 + 1UL));
    printf ("TYPE_MAX (1 + 1ULL) = %llu\n", TYPE_MAX (1 + 1ULL));
    printf ("TYPE_MIN (1 + 1) = %lld\n", TYPE_MIN (1 + 1));
    printf ("TYPE_MIN (1 + 1L) = %lld\n", TYPE_MIN (1 + 1L));
    printf ("TYPE_MIN (1 + 1LL) = %lld\n", TYPE_MIN (1 + 1LL));
    printf ("TYPE_MIN (1 + 1U) = %llu\n", TYPE_MIN (1 + 1U));
    printf ("TYPE_MIN (1 + 1UL) = %llu\n", TYPE_MIN (1 + 1UL));
    printf ("TYPE_MIN (1 + 1ULL) = %llu\n", TYPE_MIN (1 + 1ULL));
    return 0;
}
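For illustration, a minimal sketch of what SAFE_MUL might look like on top of TYPE_MAX; the division-based overflow test and the abort() on failure are just one possible design, it assumes non-negative operands, and it evaluates its arguments more than once (harmless here, since the expressions have no side effects):

#include <stdlib.h>

/* Sketch only: abort if a*b would exceed the maximum of the
   product's promoted type. (a)*0*(b) has the same type as (a)*(b)
   but cannot overflow while TYPE_MAX inspects it. Correct only for
   non-negative operands. */
#define SAFE_MUL(a, b) \
    (((b) != 0 && (a) > TYPE_MAX ((a) * 0 * (b)) / (b)) \
     ? (abort (), (a) * (b)) \
     : (a) * (b))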
— tyty, asked Oct 31 '11


2 Answers

  • The IS_SIGNED macro doesn't tell the truth for unsigned types smaller than int. IS_SIGNED((unsigned char)1) is true on any normal implementation, because the type of (unsigned char)1*0 is int, not unsigned char (the sketch after this list demonstrates this).

Your eventual SAFE macros should still tell the truth about whether overflow occurs, since the same integer promotions apply to all arithmetic. But they'll tell you whether overflow occurs in the multiplication, not necessarily whether it occurs when the user converts the result back to the original type of one of the operands.

Come to think of it, though, you probably knew that already since your macros don't attempt to suggest CHAR_MIN and so on. But other people finding this question in future might not realise that restriction.

  • There is no single type guaranteed to be able to hold all the values that TYPE_MIN and TYPE_MAX can evaluate to. But you could make TYPE_MAX always evaluate to unsigned long long (the value always fits in that type), and likewise make TYPE_MIN always evaluate to signed long long. This lets you use a correct printf format without knowing whether the expression is signed. Without those casts (now applied in the question's edit), TYPE_MAX(1) is a long long, whereas TYPE_MAX(1ULL) is an unsigned long long.

  • Technically it's permitted for int and long to have the same size but different ranges, due to long having fewer padding bits than int. I doubt that any important implementation does that, though.
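A small demonstration of the first point, assuming the IS_SIGNED and TYPE_MAX macros from the question are in scope:

#include <stdio.h>

int main (void) {
    unsigned char uc = 1;
    /* uc is promoted to int before the *0-1 arithmetic, so the
       macro sees a signed type and prints 1, not 0 */
    printf ("IS_SIGNED (uc) = %d\n", IS_SIGNED (uc));
    /* with the casts from the question's edit, both results are
       unsigned long long, so a single format specifier fits both */
    printf ("%llu %llu\n", TYPE_MAX (1), TYPE_MAX (1ULL));
    return 0;
}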

— Steve Jessop


Just an idea: if you use GCC, you can use its typeof extension:

#define IS_SIGNED(exp) ((typeof(exp))-1 < 0)
#define TYPE_MAX_UNSIGNED(exp) ((typeof(exp))-1)
#define TYPE_MAX_SIGNED(exp) ... // I cannot improve your code here

Edit: you might also want to check for floating-point types:

#define CHECK_INT(exp) ((typeof(exp))1 / 2 == 0)
#define CHECK_INT(exp) (((exp) * 0 + 1) / 2 == 0) // alternative if typeof is unavailable
#define MY_CONST_1(exp) (1/CHECK_INT(exp))
// Now replace any 1 in the code by MY_CONST_1(exp) to force a compile-time error for floating-point types
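A quick sketch of how these behave under GCC (just an illustration of the typeof-based macros above):

#include <stdio.h>

#define IS_SIGNED(exp)  ((typeof (exp))-1 < 0)
#define CHECK_INT(exp)  ((typeof (exp))1 / 2 == 0)

int main (void) {
    unsigned char uc = 1;
    /* typeof sees unsigned char before any integer promotion,
       so this prints 0, unlike the (exp)*0-1 version */
    printf ("IS_SIGNED (uc) = %d\n", IS_SIGNED (uc));
    printf ("CHECK_INT (1)   = %d\n", CHECK_INT (1));   /* 1: integer type  */
    printf ("CHECK_INT (1.0) = %d\n", CHECK_INT (1.0)); /* 0: floating type */
    return 0;
}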
— anatolyg