 

The best way to shift a __m128i?

I need to shift a __m128i variable (say v) left by m bits, in such a way that bits move through the whole variable (so the resulting variable represents v*2^m). What is the best way to do this?

Note that _mm_slli_epi64 shifts v0 and v1 separately:

r0 := v0 << count
r1 := v1 << count

so the high bits of v0 are lost, but I want to carry those bits into r1.

Edit: I'm looking for code faster than this (for m < 64):

r0 = v0 << m;         // low half shifted
r1 = v0 >> (64-m);    // bits carried out of the low half
r1 ^= v1 << m;        // merged with the shifted high half (no overlap, so ^ acts as |)
r2 = v1 >> (64-m);    // bits shifted out of the top of the 128-bit value
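
For reference, here is a self-contained scalar version of that code (a sketch; the names shifted128 and shift128_left are made up for illustration, and it is valid only for 0 < m < 64 since shifting a uint64_t by 64 is undefined):

#include <stdint.h>

typedef struct { uint64_t lo, hi, overflow; } shifted128;

static shifted128 shift128_left(uint64_t v0, uint64_t v1, unsigned m)
{
    shifted128 r;
    r.lo       = v0 << m;                        // r0
    r.hi       = (v0 >> (64 - m)) | (v1 << m);   // r1
    r.overflow = v1 >> (64 - m);                 // r2: bits shifted past bit 127
    return r;
}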
asked Sep 26 '22 by user0

1 Answer

For compile-time constant shift counts, you can get fairly good results. Otherwise not really.

This is just an SSE implementation of the r0 / r1 code from your question, since there's no other obvious way to do it. Variable-count shifts are only available for bit-shifts within vector elements, not for byte-shifts of the whole register. So we just carry the low 64 bits up to the high 64 and use a variable-count shift to put them in the right place.

// untested
#include <immintrin.h>

/* Some compilers might choke on slli / srli with non-compile-time-constant args.
 * gcc generates the  xmm, imm8  form with constants,
 * and the  xmm, xmm  form otherwise (with a movd to get the count into an xmm).
 */

// doesn't optimize for the special case where count % 8 == 0
// could maybe do that in gcc with if (__builtin_constant_p(count)) { if (count % 8 == 0) return ...; }
__m128i mm_bitshift_left(__m128i x, unsigned count)
{
    __m128i carry = _mm_bslli_si128(x, 8);   // old compilers only have the confusingly named _mm_slli_si128 synonym
    if (count >= 64)
        return _mm_slli_epi64(carry, count-64);  // the non-carry part is all zero, so return early
    // else
    carry = _mm_srli_epi64(carry, 64-count);  // the bslli moved the old low half up by 64 bits; shift its top bits into place

    x = _mm_slli_epi64(x, count);
    return _mm_or_si128(x, carry);
}
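
A quick sanity check for this (untested, and assuming a little-endian x86-64 target where __m128i and __uint128_t share the same 16-byte layout; the test value is arbitrary):

#include <stdio.h>
#include <string.h>

int main(void)
{
    // same 128-bit value in both representations
    __m128i     v   = _mm_set_epi64x(0x0123456789abcdefLL, (long long)0xfedcba9876543210ULL);
    __uint128_t ref = ((__uint128_t)0x0123456789abcdefULL << 64) | 0xfedcba9876543210ULL;

    for (unsigned m = 0; m < 128; m++) {
        __uint128_t want = ref << m;
        __m128i     got  = mm_bitshift_left(v, m);
        if (memcmp(&got, &want, sizeof got) != 0)
            printf("mismatch at m = %u\n", m);
    }
    return 0;
}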

__m128i mm_bitshift_left_3(__m128i x) {   // by a specific constant, to see the inlined constant version
    return mm_bitshift_left(x, 3);
}
__m128i mm_bitshift_left_100(__m128i x) { // same, for the count >= 64 path
    return mm_bitshift_left(x, 100);
}

This turned out to be more convenient than I expected: _mm_slli_epi64 works on gcc/clang/icc even when the count is not a compile-time constant (generating a movd from an integer register to an xmm register). There is a _mm_sll_epi64 (__m128i a, __m128i count) intrinsic (note the lack of i), but at least these days, the i intrinsic can generate either form of psllq.
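
If a compiler does choke on the non-constant count, the same function can be written with the explicit xmm-count intrinsics instead; a sketch (untested, same logic as mm_bitshift_left above):

__m128i mm_bitshift_left_var(__m128i x, unsigned count)
{
    __m128i carry = _mm_bslli_si128(x, 8);
    if (count >= 64)   // the non-carry part is all zero
        return _mm_sll_epi64(carry, _mm_cvtsi32_si128((int)(count - 64)));

    carry = _mm_srl_epi64(carry, _mm_cvtsi32_si128((int)(64 - count)));
    x     = _mm_sll_epi64(x, _mm_cvtsi32_si128((int)count));
    return _mm_or_si128(x, carry);
}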


The compile-time-constant count versions are fairly efficient, compiling to 4 instructions (or 5 without AVX):

mm_bitshift_left_3(long long __vector(2)):
        vpslldq xmm1, xmm0, 8
        vpsrlq  xmm1, xmm1, 61
        vpsllq  xmm0, xmm0, 3
        vpor    xmm0, xmm0, xmm1
        ret

Performance:

This has 3-cycle latency (vpslldq(1) -> vpsrlq(1) -> vpor(1)) on Intel SnB/IvB/Haswell, with throughput limited to one per 2 cycles (saturating the vector shift unit on port 0). The byte-shift runs on the shuffle unit, on a different port. Immediate-count vector shifts are all single-uop instructions, so this is only 4 fused-domain uops taking up pipeline space when mixed in with other code. (Variable-count vector shifts are 2 uops with 2-cycle latency, so the variable-count version of this function is worse than it looks from counting instructions.)

Or for counts >= 64:

mm_bitshift_left_100(long long __vector(2)):
        vpslldq xmm0, xmm0, 8
        vpsllq  xmm0, xmm0, 36
        ret

If your shift-count is not a compile-time constant, you have to branch on count >= 64 to figure out whether to left- or right-shift the carry. I believe the shift count is interpreted as an unsigned integer, so a negative count is impossible.

It also takes extra instructions to get the int count and 64-count into vector registers. Doing this in a branchless fashion with vector compares and a blend instruction might be possible, but a branch is probably a good idea.
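
Here is a sketch of one branchless variant (untested). Instead of a compare and blend, it relies on the documented behaviour that psllq/psrlq produce all-zeros when the count is 64 or greater: movd zero-extends, so a "negative" count becomes a huge 64-bit shift count, the out-of-range path contributes zero, and the two paths can simply be ORed together:

__m128i mm_bitshift_left_branchless(__m128i x, unsigned count)
{
    __m128i carry = _mm_bslli_si128(x, 8);
    // count < 64 path: (x << count) | (carry >> (64-count)); all-zero when count >= 64
    __m128i lo_path = _mm_or_si128(
        _mm_sll_epi64(x, _mm_cvtsi32_si128((int)count)),
        _mm_srl_epi64(carry, _mm_cvtsi32_si128((int)(64 - count))));
    // count >= 64 path: carry << (count-64); all-zero when count < 64
    __m128i hi_path = _mm_sll_epi64(carry, _mm_cvtsi32_si128((int)(count - 64)));
    return _mm_or_si128(lo_path, hi_path);
}

It trades the branch for three movd+shift pairs, so whether it beats the branchy version depends on how well the branch predicts.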


The variable-count version for __uint128_t in GP registers looks fairly good; better than the SSE version. Clang does a slightly better job than gcc, emitting fewer mov instructions, but it still uses two cmov instructions for the count >= 64 case. (Because x86 integer shift instructions mask the count, instead of saturating.)

__uint128_t leftshift_int128(__uint128_t x, unsigned count) {
    return x << count;  // undefined if count >= 128
}
answered Oct 04 '22 by Peter Cordes