Intel AVX: 256-bit version of dot product for double precision floating point variables

The Intel Advanced Vector Extensions (AVX) offer no dot-product instruction in the 256-bit version (YMM registers) for double precision floating point variables. The "Why?" question has been treated very briefly in another forum and on Stack Overflow. But the question I am facing is how to replace this missing instruction with other AVX instructions in an efficient way?

The dot product does exist in a 256-bit version for single precision floating point variables:

 __m256 _mm256_dp_ps(__m256 m1, __m256 m2, const int mask);

The idea is to find an efficient equivalent for this missing instruction:

 __m256d _mm256_dp_pd(__m256d m1, __m256d m2, const int mask);

To be more specific, the code I would like to transform from __m128 (four floats) to __m256d (four doubles) uses the following instructions:

   __m128 val0 = ...; // Four float values
   __m128 val1 = ...; //
   __m128 val2 = ...; //
   __m128 val3 = ...; //
   __m128 val4 = ...; //

   __m128 res = _mm_or_ps( _mm_dp_ps(val1,  val0,   0xF1),
                _mm_or_ps( _mm_dp_ps(val2,  val0,   0xF2),
                _mm_or_ps( _mm_dp_ps(val3,  val0,   0xF4),
                           _mm_dp_ps(val4,  val0,   0xF8) )));

The result of this code is a __m128 vector of four floats containing the results of the dot products of val1 and val0, val2 and val0, val3 and val0, and val4 and val0.

Maybe this can give some hints for suggestions?

asked May 04 '12 by gleeen.gould

3 Answers

I would use a 4*double multiplication, then a hadd (which unfortunately adds only 2*2 doubles in the upper and lower half), then extract the upper half (a shuffle would work equally well, maybe faster) and add it to the lower half.

The result is in the low 64 bits of dotproduct.

__m256d xy = _mm256_mul_pd( x, y );
__m256d temp = _mm256_hadd_pd( xy, xy );
__m128d hi128 = _mm256_extractf128_pd( temp, 1 );
__m128d dotproduct = _mm_add_pd( (__m128d)temp, hi128 );
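
For reference, here is the same sequence wrapped as a standalone function returning the scalar. This wrapper is just a sketch I'm adding (the function name is mine), with _mm256_castpd256_pd128 in place of the plain (__m128d) cast so it is portable across compilers:

#include <immintrin.h>

// Minimal sketch (wrapper name is mine): same sequence as above, returned
// as a scalar.
static inline double dot4_scalar(__m256d x, __m256d y)
{
    __m256d xy   = _mm256_mul_pd(x, y);
    __m256d temp = _mm256_hadd_pd(xy, xy);            // pairwise sums within each 128-bit lane
    __m128d hi   = _mm256_extractf128_pd(temp, 1);    // upper half
    __m128d lo   = _mm256_castpd256_pd128(temp);      // lower half
    return _mm_cvtsd_f64(_mm_add_pd(lo, hi));         // low element holds the full sum
}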

Edit:
Following an idea of Norbert P., I extended this version to do 4 dot products at a time.

__m256d xy0 = _mm256_mul_pd( x[0], y[0] );
__m256d xy1 = _mm256_mul_pd( x[1], y[1] );
__m256d xy2 = _mm256_mul_pd( x[2], y[2] );
__m256d xy3 = _mm256_mul_pd( x[3], y[3] );

// low to high: xy00+xy01 xy10+xy11 xy02+xy03 xy12+xy13
__m256d temp01 = _mm256_hadd_pd( xy0, xy1 );   

// low to high: xy20+xy21 xy30+xy31 xy22+xy23 xy32+xy33
__m256d temp23 = _mm256_hadd_pd( xy2, xy3 );

// low to high: xy02+xy03 xy12+xy13 xy20+xy21 xy30+xy31
__m256d swapped = _mm256_permute2f128_pd( temp01, temp23, 0x21 );

// low to high: xy00+xy01 xy10+xy11 xy22+xy23 xy32+xy33
__m256d blended = _mm256_blend_pd(temp01, temp23, 0b1100);

__m256d dotproduct = _mm256_add_pd( swapped, blended );
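
As a usage sketch (the harness, array layout, and test values below are my own addition, not part of the original answer), packing the sequence above into a function gives dot(x[i], y[i]) in element i of the result, low to high:

#include <immintrin.h>
#include <stdio.h>

// Usage sketch (harness is mine): wrap the sequence above and check the
// element order of the packed result.
static __m256d dot4x4(const __m256d x[4], const __m256d y[4])
{
    __m256d xy0 = _mm256_mul_pd(x[0], y[0]);
    __m256d xy1 = _mm256_mul_pd(x[1], y[1]);
    __m256d xy2 = _mm256_mul_pd(x[2], y[2]);
    __m256d xy3 = _mm256_mul_pd(x[3], y[3]);

    __m256d temp01  = _mm256_hadd_pd(xy0, xy1);
    __m256d temp23  = _mm256_hadd_pd(xy2, xy3);
    __m256d swapped = _mm256_permute2f128_pd(temp01, temp23, 0x21);
    __m256d blended = _mm256_blend_pd(temp01, temp23, 0b1100);
    return _mm256_add_pd(swapped, blended);            // low to high: dot0 dot1 dot2 dot3
}

int main(void)
{
    __m256d x[4], y[4];
    for (int i = 0; i < 4; ++i) {
        x[i] = _mm256_set_pd(4.0, 3.0, 2.0, 1.0);      // {1,2,3,4}, low to high
        y[i] = _mm256_set1_pd((double)(i + 1));
    }
    double out[4];
    _mm256_storeu_pd(out, dot4x4(x, y));
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);  // expect: 10 20 30 40
    return 0;
}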
answered by Gunther Piez

I would extend drhirsch's answer to perform two dot products at the same time, saving some work:

__m256d xy = _mm256_mul_pd( x, y );
__m256d zw = _mm256_mul_pd( z, w );
__m256d temp = _mm256_hadd_pd( xy, zw );
__m128d hi128 = _mm256_extractf128_pd( temp, 1 );
__m128d dotproduct = _mm_add_pd( (__m128d)temp, hi128 );

Then dot(x,y) is in the low double and dot(z,w) is in the high double of dotproduct.
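
If the two results are needed as scalars, a small extraction sketch (the helper below is my addition, not part of the original answer):

#include <immintrin.h>

// Sketch (helper is mine): pull dot(x,y) and dot(z,w) out of the packed
// __m128d result computed above.
static inline void unpack_dots(__m128d dotproduct, double *dxy, double *dzw)
{
    *dxy = _mm_cvtsd_f64(dotproduct);                               // low double  = dot(x,y)
    *dzw = _mm_cvtsd_f64(_mm_unpackhi_pd(dotproduct, dotproduct));  // high double = dot(z,w)
}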

answered by Norbert P.

For a single dot-product, it's simply a vertical multiply and horizontal sum (see Fastest way to do horizontal float vector sum on x86). hadd costs 2 shuffles + an add. It's almost always sub-optimal for throughput when used with both inputs = the same vector.

// both elements = dot(x,y)
__m128d dot1(__m256d x, __m256d y) {
    __m256d xy = _mm256_mul_pd(x, y);

    __m128d xylow  = _mm256_castpd256_pd128(xy);   // (__m128d)cast isn't portable
    __m128d xyhigh = _mm256_extractf128_pd(xy, 1);
    __m128d sum1 =   _mm_add_pd(xylow, xyhigh);

    __m128d swapped = _mm_shuffle_pd(sum1, sum1, 0b01);   // or unpackhi
    __m128d dotproduct = _mm_add_pd(sum1, swapped);
    return dotproduct;
}
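
To read the result as a scalar (this usage line is my addition; both elements hold the same value, so take the low one):

double d = _mm_cvtsd_f64(dot1(x, y));   // low element of the broadcast result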

If you only need one dot product, this is better than @hirschhornsalz's single-vector answer by 1 shuffle uop on Intel, and a bigger win on AMD Jaguar / Bulldozer-family / Ryzen because it narrows down to 128b right away instead of doing a bunch of 256b stuff. AMD splits 256b ops into two 128b uops.


It can be worth using hadd in cases like doing 2 or 4 dot products in parallel where you're using it with 2 different input vectors. Norbert's dot of two pairs of vectors looks optimal if you want the results packed. I don't see any way to do better even with AVX2 vpermpd as a lane-crossing shuffle.

Of course if you really want one larger dot product (of 8 or more doubles), use vertical adds (with multiple accumulators to hide vaddpd latency) and do the horizontal summing only once at the end. You can also use FMA if available.
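
A sketch of that pattern (the array names, the accumulator count, and the assumptions that n is a multiple of 8 and FMA is available are mine):

#include <immintrin.h>
#include <stddef.h>

// Sketch (assumes n % 8 == 0 and FMA support): one long dot product with
// two accumulators to hide FMA/add latency; horizontal sum only once at
// the end.
double dot_array(const double *a, const double *b, size_t n)
{
    __m256d acc0 = _mm256_setzero_pd();
    __m256d acc1 = _mm256_setzero_pd();
    for (size_t i = 0; i < n; i += 8) {
        acc0 = _mm256_fmadd_pd(_mm256_loadu_pd(a + i),     _mm256_loadu_pd(b + i),     acc0);
        acc1 = _mm256_fmadd_pd(_mm256_loadu_pd(a + i + 4), _mm256_loadu_pd(b + i + 4), acc1);
    }
    __m256d acc = _mm256_add_pd(acc0, acc1);              // combine the accumulators
    __m128d sum = _mm_add_pd(_mm256_castpd256_pd128(acc),
                             _mm256_extractf128_pd(acc, 1));
    sum = _mm_add_pd(sum, _mm_unpackhi_pd(sum, sum));     // final horizontal add
    return _mm_cvtsd_f64(sum);
}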


haddpd internally shuffles xy and zw together two different ways and feeds that to a vertical addpd, which is what we'd do by hand anyway. If we kept xy and zw separate, we'd need 2 shuffles + 2 adds for each one to get a dot product (in separate registers). So by shuffling them together with hadd as a first step, we save on the total number of shuffles, not only on adds and total uop count.

/*  Norbert's version, for an Intel CPU:
    __m256d temp = _mm256_hadd_pd( xy, zw );   // 2 shuffle + 1 add
    __m128d hi128 = _mm256_extractf128_pd( temp, 1 ); // 1 shuffle (lane crossing, higher latency)
    __m128d dotproduct = _mm_add_pd( (__m128d)temp, hi128 ); // 1 add
     // 3 shuffle + 2 add
*/

But for AMD, where vextractf128 is very cheap, and 256b hadd costs 2x as much as 128b hadd, it could make sense to narrow each 256b product down to 128b separately and then combine with a 128b hadd.

Actually, according to Agner Fog's tables, haddpd xmm,xmm is 4 uops on Ryzen (and the 256b ymm version is 8 uops). So it's actually better to use 2x vshufpd + vaddpd manually on Ryzen, if that data is right. It might not be: his data for Piledriver shows haddpd xmm,xmm as only 3 uops, and 4 uops with a memory operand. It doesn't make sense to me that Ryzen couldn't implement hadd as only 3 (or 6 for ymm) uops.
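
Here is a sketch of that narrow-first idea for the two-dot-product case (the arrangement and function name are mine, not tested on AMD hardware):

#include <immintrin.h>

// Sketch (arrangement is mine): narrow each 256b product to 128b with the
// cheap vextractf128 first, then combine in 128b.  Same result layout as
// Norbert's version: dot(x,y) in the low double, dot(z,w) in the high one.
__m128d dot2_narrow_first(__m256d x, __m256d y, __m256d z, __m256d w)
{
    __m256d xy = _mm256_mul_pd(x, y);
    __m256d zw = _mm256_mul_pd(z, w);

    __m128d xysum = _mm_add_pd(_mm256_castpd256_pd128(xy),
                               _mm256_extractf128_pd(xy, 1));   // {xy0+xy2, xy1+xy3}
    __m128d zwsum = _mm_add_pd(_mm256_castpd256_pd128(zw),
                               _mm256_extractf128_pd(zw, 1));   // {zw0+zw2, zw1+zw3}

    // 128b hadd does the final pairwise sums; on Ryzen, if haddpd really is
    // 4 uops, replace it with two _mm_shuffle_pd + one _mm_add_pd instead.
    return _mm_hadd_pd(xysum, zwsum);
}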


For doing 4 dot products with the results packed into one __m256d (the exact problem the question asks about), I think @hirschhornsalz's answer looks very good for Intel CPUs. I haven't studied it super-carefully, but combining in pairs with hadd is good. vperm2f128 is efficient on Intel (but quite bad on AMD: 8 uops on Ryzen, with one per 3c throughput).

answered by Peter Cordes