I am migrating vectorized code written using SSE2 intrinsics to AVX2 intrinsics.
Much to my disappointment, I discovered that the shift intrinsics _mm256_slli_si256 and _mm256_srli_si256 operate only on the two 128-bit lanes of the AVX register separately, with zeroes introduced at the lane boundary. (This is by contrast with _mm_slli_si128 and _mm_srli_si128, which shift the whole SSE register.)
Can you recommend a short substitute?
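For illustration, here is a minimal sketch of the surprising behavior (my own example, assuming a C++ compiler with AVX2 enabled, e.g. g++ -mavx2):

    // Demonstrates that _mm256_srli_si256 shifts each 128-bit lane
    // independently: shifting bytes 0..31 right by 4 zeroes bytes
    // 12..15 and 28..31, not just the last four bytes of the register.
    #include <immintrin.h>
    #include <cstdio>

    int main() {
        unsigned char in[32], out[32];
        for (int i = 0; i < 32; ++i) in[i] = (unsigned char)i;

        __m256i v = _mm256_loadu_si256((const __m256i *)in);
        __m256i r = _mm256_srli_si256(v, 4);  // count must be a compile-time constant
        _mm256_storeu_si256((__m256i *)out, r);

        for (int i = 0; i < 32; ++i) printf("%2u ", out[i]);
        printf("\n");  // prints 4..15 0 0 0 0 20..31 0 0 0 0
    }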
UPDATE:
_mm256_slli_si256(A, N) is efficiently achieved with
_mm256_alignr_epi8(A, _mm256_permute2x128_si256(A, A, _MM_SHUFFLE(0, 0, 3, 0)), 16 - N)
or, for shifts larger than 16 bytes, with
_mm256_slli_si256(_mm256_permute2x128_si256(A, A, _MM_SHUFFLE(0, 0, 3, 0)), N - 16)
But the question remains open for _mm256_srli_si256.
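To see why the permute alone amounts to a 16-byte whole-register shift: in the _mm256_permute2x128_si256 control byte, bits [1:0] and [5:4] select the source lane for the low and high result lanes, while bits 3 and 7 zero them outright. _MM_SHUFFLE(0, 0, 3, 0) (0x0C) sets bit 3, so the low lane is zeroed and A's low lane lands in the high lane, giving A shifted left by 16 bytes; _MM_SHUFFLE(0, 0, 2, 0) (0x08) produces the same result, which is why both constants show up in solutions. A minimal sketch (the helper name is my own):

    // Sketch, assuming C++ with AVX2 (-mavx2); the name is hypothetical.
    // Control 0x0C: bit 3 zeroes the low lane, bits [5:4] = 0 route
    // A's low lane into the high lane -> A shifted left by 16 bytes.
    #include <immintrin.h>

    static inline __m256i shift_left_16_bytes(__m256i A) {
        return _mm256_permute2x128_si256(A, A, _MM_SHUFFLE(0, 0, 3, 0));
    }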
From different inputs, I gathered these solutions. The key to crossing the inter-lane barrier is the align instruction, _mm256_alignr_epi8.
Shift left by N bytes:

0 < N < 16:
_mm256_alignr_epi8(A, _mm256_permute2x128_si256(A, A, _MM_SHUFFLE(0, 0, 2, 0)), 16 - N)

N = 16:
_mm256_permute2x128_si256(A, A, _MM_SHUFFLE(0, 0, 2, 0))

16 < N < 32:
_mm256_slli_si256(_mm256_permute2x128_si256(A, A, _MM_SHUFFLE(0, 0, 2, 0)), N - 16)
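These cases fold naturally into one compile-time dispatcher. A minimal sketch, assuming C++17 with AVX2 enabled; the wrapper name mm256_slli_si256_full is my own, not a real intrinsic:

    // Sketch, assuming C++17 with AVX2 (-mavx2); hypothetical wrapper
    // emulating a full 32-byte left shift of A by N bytes.
    #include <immintrin.h>

    template <int N>
    __m256i mm256_slli_si256_full(__m256i A) {
        static_assert(N >= 0 && N < 32, "byte shift out of range");
        // t = A shifted left by 16 bytes (A.low moved up, low lane zeroed).
        __m256i t = _mm256_permute2x128_si256(A, A, _MM_SHUFFLE(0, 0, 2, 0));
        if constexpr (N == 0)       return A;
        else if constexpr (N < 16)  return _mm256_alignr_epi8(A, t, 16 - N);
        else if constexpr (N == 16) return t;
        else                        return _mm256_slli_si256(t, N - 16);
    }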
Shift right by N bytes:

0 < N < 16:
_mm256_alignr_epi8(_mm256_permute2x128_si256(A, A, _MM_SHUFFLE(2, 0, 0, 1)), A, N)

N = 16:
_mm256_permute2x128_si256(A, A, _MM_SHUFFLE(2, 0, 0, 1))

16 < N < 32:
_mm256_srli_si256(_mm256_permute2x128_si256(A, A, _MM_SHUFFLE(2, 0, 0, 1)), N - 16)
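And the mirror image for the right shift, under the same assumptions (mm256_srli_si256_full is likewise a made-up name):

    // Sketch, assuming C++17 with AVX2 (-mavx2); hypothetical wrapper
    // emulating a full 32-byte right shift of A by N bytes.
    #include <immintrin.h>

    template <int N>
    __m256i mm256_srli_si256_full(__m256i A) {
        static_assert(N >= 0 && N < 32, "byte shift out of range");
        // t = A shifted right by 16 bytes (A.high moved down, high lane zeroed).
        __m256i t = _mm256_permute2x128_si256(A, A, _MM_SHUFFLE(2, 0, 0, 1));
        if constexpr (N == 0)       return A;
        else if constexpr (N < 16)  return _mm256_alignr_epi8(t, A, N);
        else if constexpr (N == 16) return t;
        else                        return _mm256_srli_si256(t, N - 16);
    }

Usage is e.g. __m256i r = mm256_srli_si256_full<20>(v); each instantiation costs one vperm2i128 plus at most one vpalignr or per-lane shift, i.e. two instructions for every count except N = 16, which needs only the permute.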