Integer vs double arithmetic performance?

I'm writing a C# class that performs 2D separable convolution using integers, hoping to obtain better performance than the double counterpart. The problem is that I don't see a real performance gain.

This is the X filter code (it is valid both for int and double cases):

foreach (pixel)
{
    int value = 0;
    for (int k = 0; k < filterOffsetsX.Length; k++)
    {
        value += InputImage[index + filterOffsetsX[k]] * filterValuesX[k];  // index is relative to the current pixel position
    }
    tempImage[index] = value;
}

In the integer case, "value", "InputImage" and "tempImage" are of type "int", "Image<byte>" and "Image<int>" respectively.
In the double case, "value", "InputImage" and "tempImage" are of type "double", "Image<double>" and "Image<double>".
(filterValuesX is int[] in both cases.)
(The class Image<T> is part of an external dll; it should be similar to the .NET Drawing Image class.)

My goal is to achieve fast performance thanks to int += (byte * int) vs double += (double * int).

The following times are the mean of 200 repetitions:
Filter size 9 = 0.031 (double) 0.027 (int)
Filter size 13 = 0.042 (double) 0.038 (int)
Filter size 25 = 0.078 (double) 0.070 (int)

The performance gain is minimal. Could this be caused by a pipeline stall or suboptimal code?

EDIT: simplified the code by deleting unimportant variables.

EDIT2: I don't think I have a cache-miss-related problem, because "index" iterates through adjacent memory cells (row after row). Moreover, "filterOffsetsX" contains only small offsets relative to pixels on the same row, at a maximum distance of filter size / 2. The problem could be present in the second separable filter (the Y filter), but the times are not so different.
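
For context, a minimal sketch of the two access patterns, written in plain C++ for brevity (the function names and the width/height/radius parameters are illustrative, not taken from the actual Image<T> class): the X pass reads contiguous elements inside a row, while the Y pass strides by the row width, which is where cache effects would normally show up first.

// Hypothetical illustration of the two passes of a separable filter over a
// row-major image of 'width' x 'height' with a filter radius 'radius'.
// X pass: neighbours are adjacent in memory (offsets -radius .. +radius).
// Y pass: neighbours are 'width' elements apart, so each tap can touch a new cache line.
void convolve_x(const unsigned char* in, int* out,
                int width, int height, const int* filter, int radius)
{
    for (int y = 0; y < height; ++y)
        for (int x = radius; x < width - radius; ++x)
        {
            int value = 0;
            for (int k = -radius; k <= radius; ++k)
                value += in[y * width + x + k] * filter[k + radius];   // contiguous reads
            out[y * width + x] = value;
        }
}

void convolve_y(const int* in, int* out,
                int width, int height, const int* filter, int radius)
{
    for (int y = radius; y < height - radius; ++y)
        for (int x = 0; x < width; ++x)
        {
            int value = 0;
            for (int k = -radius; k <= radius; ++k)
                value += in[(y + k) * width + x] * filter[k + radius]; // strided reads
            out[y * width + x] = value;
        }
}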

Marco asked Sep 09 '10


1 Answer

Benchmarking with Visual C++, because that way I can be sure that I'm timing arithmetic operations and not much else.

Results (each operation is performed 600 million times):

i16 add: 834575
i32 add: 840381
i64 add: 1691091
f32 add: 987181
f64 add: 979725
i16 mult: 850516
i32 mult: 858988
i64 mult: 6526342
f32 mult: 1085199
f64 mult: 1072950
i16 divide: 3505916
i32 divide: 3123804
i64 divide: 10714697
f32 divide: 8309924
f64 divide: 8266111

freq = 1562587

CPU is an Intel Core i7, Turbo Boosted to 2.53 GHz.

Benchmark code:

#include <stdio.h>
#include <windows.h>

template<void (*unit)(void)>
void profile( const char* label )
{
    static __int64 cumtime;
    LARGE_INTEGER before, after;
    ::QueryPerformanceCounter(&before);
    (*unit)();
    ::QueryPerformanceCounter(&after);
    after.QuadPart -= before.QuadPart;
    printf("%s: %I64i\n", label, cumtime += after.QuadPart);
}

const unsigned repcount = 10000000;

template<typename T>
void add(volatile T& var, T val) { var += val; }

template<typename T>
void mult(volatile T& var, T val) { var *= val; }

template<typename T>
void divide(volatile T& var, T val) { var /= val; }

template<typename T, void (*fn)(volatile T& var, T val)>
void integer_op( void )
{
    unsigned reps = repcount;
    do {
        volatile T var = 2000;
        fn(var,5);
        fn(var,6);
        fn(var,7);
        fn(var,8);
        fn(var,9);
        fn(var,10);
    } while (--reps);
}

template<typename T, void (*fn)(volatile T& var, T val)>
void fp_op( void )
{
    unsigned reps = repcount;
    do {
        volatile T var = (T)2.0;
        fn(var,(T)1.01);
        fn(var,(T)1.02);
        fn(var,(T)1.03);
        fn(var,(T)2.01);
        fn(var,(T)2.02);
        fn(var,(T)2.03);
    } while (--reps);
}

int main( void )
{
    LARGE_INTEGER freq;
    unsigned reps = 10;
    do {
        profile<&integer_op<__int16,add<__int16>>>("i16 add");
        profile<&integer_op<__int32,add<__int32>>>("i32 add");
        profile<&integer_op<__int64,add<__int64>>>("i64 add");
        profile<&fp_op<float,add<float>>>("f32 add");
        profile<&fp_op<double,add<double>>>("f64 add");

        profile<&integer_op<__int16,mult<__int16>>>("i16 mult");
        profile<&integer_op<__int32,mult<__int32>>>("i32 mult");
        profile<&integer_op<__int64,mult<__int64>>>("i64 mult");
        profile<&fp_op<float,mult<float>>>("f32 mult");
        profile<&fp_op<double,mult<double>>>("f64 mult");

        profile<&integer_op<__int16,divide<__int16>>>("i16 divide");
        profile<&integer_op<__int32,divide<__int32>>>("i32 divide");
        profile<&integer_op<__int64,divide<__int64>>>("i64 divide");
        profile<&fp_op<float,divide<float>>>("f32 divide");
        profile<&fp_op<double,divide<double>>>("f64 divide");

        ::QueryPerformanceFrequency(&freq);

        putchar('\n');
    } while (--reps);

    printf("freq = %I64i\n", freq);
}

I did a default optimized build using Visual C++ 2010 32-bit.

Every call to profile, add, mult, and divide (inside the loops) got inlined. Function calls were still generated to profile, but since 60 million operations get done for each call, I think the function call overhead is unimportant.

Even with volatile thrown in, the Visual C++ optimizing compiler is SMART. I originally used small integers as the right-hand operand, and the compiler happily used lea and add instructions to do integer multiply. You may get more of a boost from calling out to highly optimized C++ code than the common wisdom suggests, simply because the C++ optimizer does a much better job than any JIT.
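
As an illustration of that strength reduction (my own snippet, not from this build's output), a multiply by a small constant does not need an imul at all:

// Hypothetical example: small constant multipliers can be rewritten with lea/add.
int times9(int x)
{
    return x * 9;       // typically compiled as: lea eax, [eax + eax*8]
}

int times10(int x)
{
    return x * 10;      // e.g. lea eax, [eax + eax*4] followed by add eax, eax
}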

Originally I had the initialization of var outside the loop, and that made the floating-point multiply code run miserably slow because of the constant overflows. The FPU's handling of NaNs is slow, which is something else to keep in mind when writing high-performance number-crunching routines.
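
A reconstruction of that problematic shape, for illustration only (the name fp_op_hoisted is hypothetical; it reuses the fn/repcount scaffolding above):

// Hypothetical "bad" variant of fp_op: var is initialized once, so the repeated
// multiplies quickly overflow to infinity, and (as described above) the FPU's
// slow handling of special values ends up dominating the measurement.
template<typename T, void (*fn)(volatile T& var, T val)>
void fp_op_hoisted( void )
{
    unsigned reps = repcount;
    volatile T var = (T)2.0;         // hoisted: the value is never reset
    do {
        fn(var,(T)1.01);
        fn(var,(T)1.02);
        fn(var,(T)1.03);
        fn(var,(T)2.01);
        fn(var,(T)2.02);
        fn(var,(T)2.03);
    } while (--reps);
}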

The dependencies are also set up in such a way as to prevent pipelining. If you want to see the effects of pipelining, say so in a comment, and I'll revise the testbench to operate on multiple variables instead of just one.
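
A minimal sketch of what that revision might look like (hypothetical fp_op_pipelined, reusing the fn/repcount scaffolding above): several independent variables break the serial dependency chain, so consecutive multiplies can overlap in the pipeline.

// Hypothetical variant operating on four independent accumulators instead of one,
// so consecutive operations no longer depend on each other's results.
template<typename T, void (*fn)(volatile T& var, T val)>
void fp_op_pipelined( void )
{
    unsigned reps = repcount;
    do {
        volatile T a = (T)2.0, b = (T)2.0, c = (T)2.0, d = (T)2.0;
        fn(a,(T)1.01);      // these four calls have no mutual dependencies,
        fn(b,(T)1.02);      // so the hardware can overlap their execution
        fn(c,(T)1.03);
        fn(d,(T)2.01);
        fn(a,(T)2.02);      // the second round only depends on the first
        fn(b,(T)2.03);      // round's result for the same variable
    } while (--reps);
}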

Disassembly of i32 multiply:

;   COMDAT ??$integer_op@H$1??$mult@H@@YAXACHH@Z@@YAXXZ
_TEXT   SEGMENT
_var$66971 = -4                     ; size = 4
??$integer_op@H$1??$mult@H@@YAXACHH@Z@@YAXXZ PROC   ; integer_op<int,&mult<int> >, COMDAT

; 29   : {

  00000 55       push    ebp
  00001 8b ec        mov     ebp, esp
  00003 51       push    ecx

; 30   :    unsigned reps = repcount;

  00004 b8 80 96 98 00   mov     eax, 10000000      ; 00989680H
  00009 b9 d0 07 00 00   mov     ecx, 2000      ; 000007d0H
  0000e 8b ff        npad    2
$LL3@integer_op@5:

; 31   :    do {
; 32   :        volatile T var = 2000;

  00010 89 4d fc     mov     DWORD PTR _var$66971[ebp], ecx

; 33   :        fn(var,751);

  00013 8b 55 fc     mov     edx, DWORD PTR _var$66971[ebp]
  00016 69 d2 ef 02 00
    00       imul    edx, 751       ; 000002efH
  0001c 89 55 fc     mov     DWORD PTR _var$66971[ebp], edx

; 34   :        fn(var,6923);

  0001f 8b 55 fc     mov     edx, DWORD PTR _var$66971[ebp]
  00022 69 d2 0b 1b 00
    00       imul    edx, 6923      ; 00001b0bH
  00028 89 55 fc     mov     DWORD PTR _var$66971[ebp], edx

; 35   :        fn(var,7124);

  0002b 8b 55 fc     mov     edx, DWORD PTR _var$66971[ebp]
  0002e 69 d2 d4 1b 00
    00       imul    edx, 7124      ; 00001bd4H
  00034 89 55 fc     mov     DWORD PTR _var$66971[ebp], edx

; 36   :        fn(var,81);

  00037 8b 55 fc     mov     edx, DWORD PTR _var$66971[ebp]
  0003a 6b d2 51     imul    edx, 81            ; 00000051H
  0003d 89 55 fc     mov     DWORD PTR _var$66971[ebp], edx

; 37   :        fn(var,9143);

  00040 8b 55 fc     mov     edx, DWORD PTR _var$66971[ebp]
  00043 69 d2 b7 23 00
    00       imul    edx, 9143      ; 000023b7H
  00049 89 55 fc     mov     DWORD PTR _var$66971[ebp], edx

; 38   :        fn(var,101244215);

  0004c 8b 55 fc     mov     edx, DWORD PTR _var$66971[ebp]
  0004f 69 d2 37 dd 08
    06       imul    edx, 101244215     ; 0608dd37H

; 39   :    } while (--reps);

  00055 48       dec     eax
  00056 89 55 fc     mov     DWORD PTR _var$66971[ebp], edx
  00059 75 b5        jne     SHORT $LL3@integer_op@5

; 40   : }

  0005b 8b e5        mov     esp, ebp
  0005d 5d       pop     ebp
  0005e c3       ret     0
??$integer_op@H$1??$mult@H@@YAXACHH@Z@@YAXXZ ENDP   ; integer_op<int,&mult<int> >
; Function compile flags: /Ogtp
_TEXT   ENDS

And of f64 multiply:

;   COMDAT ??$fp_op@N$1??$mult@N@@YAXACNN@Z@@YAXXZ
_TEXT   SEGMENT
_var$67014 = -8                     ; size = 8
??$fp_op@N$1??$mult@N@@YAXACNN@Z@@YAXXZ PROC        ; fp_op<double,&mult<double> >, COMDAT

; 44   : {

  00000 55       push    ebp
  00001 8b ec        mov     ebp, esp
  00003 83 e4 f8     and     esp, -8            ; fffffff8H

; 45   :    unsigned reps = repcount;

  00006 dd 05 00 00 00
    00       fld     QWORD PTR __real@4000000000000000
  0000c 83 ec 08     sub     esp, 8
  0000f dd 05 00 00 00
    00       fld     QWORD PTR __real@3ff028f5c28f5c29
  00015 b8 80 96 98 00   mov     eax, 10000000      ; 00989680H
  0001a dd 05 00 00 00
    00       fld     QWORD PTR __real@3ff051eb851eb852
  00020 dd 05 00 00 00
    00       fld     QWORD PTR __real@3ff07ae147ae147b
  00026 dd 05 00 00 00
    00       fld     QWORD PTR __real@4000147ae147ae14
  0002c dd 05 00 00 00
    00       fld     QWORD PTR __real@400028f5c28f5c29
  00032 dd 05 00 00 00
    00       fld     QWORD PTR __real@40003d70a3d70a3d
  00038 eb 02        jmp     SHORT $LN3@fp_op@3
$LN22@fp_op@3:

; 46   :    do {
; 47   :        volatile T var = (T)2.0;
; 48   :        fn(var,(T)1.01);
; 49   :        fn(var,(T)1.02);
; 50   :        fn(var,(T)1.03);
; 51   :        fn(var,(T)2.01);
; 52   :        fn(var,(T)2.02);
; 53   :        fn(var,(T)2.03);
; 54   :    } while (--reps);

  0003a d9 ce        fxch    ST(6)
$LN3@fp_op@3:
  0003c 48       dec     eax
  0003d d9 ce        fxch    ST(6)
  0003f dd 14 24     fst     QWORD PTR _var$67014[esp+8]
  00042 dd 04 24     fld     QWORD PTR _var$67014[esp+8]
  00045 d8 ce        fmul    ST(0), ST(6)
  00047 dd 1c 24     fstp    QWORD PTR _var$67014[esp+8]
  0004a dd 04 24     fld     QWORD PTR _var$67014[esp+8]
  0004d d8 cd        fmul    ST(0), ST(5)
  0004f dd 1c 24     fstp    QWORD PTR _var$67014[esp+8]
  00052 dd 04 24     fld     QWORD PTR _var$67014[esp+8]
  00055 d8 cc        fmul    ST(0), ST(4)
  00057 dd 1c 24     fstp    QWORD PTR _var$67014[esp+8]
  0005a dd 04 24     fld     QWORD PTR _var$67014[esp+8]
  0005d d8 cb        fmul    ST(0), ST(3)
  0005f dd 1c 24     fstp    QWORD PTR _var$67014[esp+8]
  00062 dd 04 24     fld     QWORD PTR _var$67014[esp+8]
  00065 d8 ca        fmul    ST(0), ST(2)
  00067 dd 1c 24     fstp    QWORD PTR _var$67014[esp+8]
  0006a dd 04 24     fld     QWORD PTR _var$67014[esp+8]
  0006d d8 cf        fmul    ST(0), ST(7)
  0006f dd 1c 24     fstp    QWORD PTR _var$67014[esp+8]
  00072 75 c6        jne     SHORT $LN22@fp_op@3
  00074 dd d8        fstp    ST(0)
  00076 dd dc        fstp    ST(4)
  00078 dd da        fstp    ST(2)
  0007a dd d8        fstp    ST(0)
  0007c dd d8        fstp    ST(0)
  0007e dd d8        fstp    ST(0)
  00080 dd d8        fstp    ST(0)

; 55   : }

  00082 8b e5        mov     esp, ebp
  00084 5d       pop     ebp
  00085 c3       ret     0
??$fp_op@N$1??$mult@N@@YAXACNN@Z@@YAXXZ ENDP        ; fp_op<double,&mult<double> >
; Function compile flags: /Ogtp
_TEXT   ENDS

Ben Voigt answered Oct 02 '22