Is it possible to efficiently implement floored or Euclidean integer division in C/C++?
Floored division is division whose result is always rounded down (toward −∞), rather than truncated toward 0.
(The obvious solution is to check the dividend's sign.)
Euclid's division lemma states that for any two positive integers a and b, there exist unique integers q and r such that a = bq + r, where 0 ≤ r < b. In other words: Dividend = (Divisor × Quotient) + Remainder.
As written, you're performing integer arithmetic, which simply truncates any fractional part toward zero.
Intel's desktop/server-class processors do FP division much faster than integer division thanks to AVX, which can perform 8 32-bit FP divisions in parallel. Their Atom processors aren't as good: they can do 4 32-bit FP divisions in parallel, with a long latency, but that is still faster than integer division.
I've written a test program to benchmark the ideas presented here:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <windows.h>

#define N 10000000
#define M 100

int dividends[N], divisors[N], results[N];

__forceinline int floordiv_signcheck(int a, int b)
{
    return (a<0 ? a-(b-1) : a) / b;
}

__forceinline int floordiv_signcheck2(int a, int b)
{
    return (a - (a<0 ? b-1 : 0)) / b;
}

__forceinline int floordiv_signmultiply(int a, int b)
{
    return (a + (a>>(sizeof(a)*8-1))*(b-1)) / b;
}

__forceinline int floordiv_floatingpoint(int a, int b)
{
    // I imagine that the call to floor can be replaced to a cast
    // if you can get FPU rounding control to work (I couldn't).
    return floor((double)a / b);
}

void main()
{
    for (int i=0; i<N; i++)
    {
        dividends[i] = rand();
        do
            divisors[i] = rand();
        while (divisors[i]==0);
    }

    LARGE_INTEGER t0, t1;

    QueryPerformanceCounter(&t0);
    for (int j=0; j<M; j++)
        for (int i=0; i<N; i++)
            results[i] = floordiv_signcheck(dividends[i], divisors[i]);
    QueryPerformanceCounter(&t1);
    printf("signcheck    : %9llu\n", t1.QuadPart-t0.QuadPart);

    QueryPerformanceCounter(&t0);
    for (int j=0; j<M; j++)
        for (int i=0; i<N; i++)
            results[i] = floordiv_signcheck2(dividends[i], divisors[i]);
    QueryPerformanceCounter(&t1);
    printf("signcheck2   : %9llu\n", t1.QuadPart-t0.QuadPart);

    QueryPerformanceCounter(&t0);
    for (int j=0; j<M; j++)
        for (int i=0; i<N; i++)
            results[i] = floordiv_signmultiply(dividends[i], divisors[i]);
    QueryPerformanceCounter(&t1);
    printf("signmultiply : %9llu\n", t1.QuadPart-t0.QuadPart);

    QueryPerformanceCounter(&t0);
    for (int j=0; j<M; j++)
        for (int i=0; i<N; i++)
            results[i] = floordiv_floatingpoint(dividends[i], divisors[i]);
    QueryPerformanceCounter(&t1);
    printf("floatingpoint: %9llu\n", t1.QuadPart-t0.QuadPart);
}
Results:
signcheck    :  61458768
signcheck2   :  61284370
signmultiply :  61625076
floatingpoint: 287315364
So, according to my results, checking the sign is the fastest:
(a - (a<0 ? b-1 : 0)) / b
I'm revisiting this question five years later, as this is relevant for me too. I did some performance measurements on two pure-C versions and two inline-assembly versions for x86-64, and the results may be interesting.
The tested variants of floored division are:

Variant 0: pure C, correcting the quotient after the division: ((a + 1) / b) - 1 for negative dividends.
Variant 1: pure C, adjusting the dividend before the division: (a - (b - 1)) / b for negative dividends.
Variant 2: inline assembly, adjusting the dividend with a conditional branch.
Variant 3: inline assembly, adjusting the dividend branchlessly with CMOV.

The following is my benchmark program:
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

#ifndef VARIANT
#define VARIANT 3
#endif

#if VARIANT == 0
#define floordiv(a, b) (((a) < 0)?((((a) + 1) / (b)) - 1):((a) / (b)))
#elif VARIANT == 1
#define floordiv(a, b) ((((a) < 0)?((a) - ((b) - 1)):(a)) / (b))
#elif VARIANT == 2
#define floordiv(a, b) ({ \
        int result; \
        asm("test %%eax, %%eax; jns 1f; sub %1, %%eax;" \
            "add $1, %%eax; 1: cltd; idivl %1;" \
            : "=a" (result) \
            : "r" (b), \
              "0" (a) \
            : "rdx"); \
        result;})
#elif VARIANT == 3
#define floordiv(a, b) ({ \
        int result; \
        asm("mov %%eax, %%edx; sub %1, %%edx; add $1, %%edx;" \
            "test %%eax, %%eax; cmovs %%edx, %%eax; cltd;" \
            "idivl %1;" \
            : "=a" (result) \
            : "r" (b), \
              "0" (a) \
            : "rdx"); \
        result;})
#endif

double ntime(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);
    return(tv.tv_sec + (((double)tv.tv_usec) / 1000000.0));
}

void timediv(int n, int *p, int *q, int *r)
{
    int i;

    for(i = 0; i < n; i++)
        r[i] = floordiv(p[i], q[i]);
}

int main(int argc, char **argv)
{
    int n, i, *q, *p, *r;
    double st;

    n = 10000000;
    p = malloc(sizeof(*p) * n);
    q = malloc(sizeof(*q) * n);
    r = malloc(sizeof(*r) * n);
    for(i = 0; i < n; i++) {
        p[i] = (rand() % 1000000) - 500000;
        q[i] = (rand() % 1000000) + 1;
    }

    st = ntime();
    for(i = 0; i < 100; i++)
        timediv(n, p, q, r);
    printf("%g\n", ntime() - st);
    return(0);
}
I compiled this with gcc -march=native -Ofast (selecting each variant with -DVARIANT) using GCC 4.9.2, and the results, on my Core i5-2400, were as follows. The results are fairly reproducible from run to run -- the variants always finish in the same order, at least.
So the CMOV implementation blows the others out of the water, at least on this machine. What surprises me is that variant 2, the branching assembly, out-does its pure-C counterpart (variant 1) by a fairly wide margin. I'd have thought the compiler would be able to emit code at least as efficient as mine.
Here are some other platforms, for comparison:
AMD Athlon 64 X2 4200+, GCC 4.7.2:
Xeon E3-1271 v3, GCC 4.9.2:
As a final note, I should perhaps warn against taking the apparent performance advantage of the CMOV version too seriously: in the real world, the branch in the other versions will probably not be as completely random as in this benchmark, and if the branch predictor can do a reasonable job, the branching versions may turn out to be better. How that plays out depends quite a bit on the data being used in practice, so any generic benchmark of it is probably pointless.