Consider a randomly generated __m256i vector. Is there a faster precise way to convert it into a __m256 vector of floats between 0 (inclusive) and 1 (exclusive) than division by float(1ull<<32)?
Here's what I have tried so far, where iRand is the input and ans is the output:
const __m256 fRand = _mm256_cvtepi32_ps(iRand); // signed int32 -> float; after rounding, values span [-2^31 .. 2^31]
const __m256 normalized = _mm256_div_ps(fRand, _mm256_set1_ps(float(1ull<<32))); // scale to [-0.5 .. 0.5]
const __m256 ans = _mm256_add_ps(normalized, _mm256_set1_ps(0.5f)); // shift to [0 .. 1]
The version below should be faster than your initial version that uses _mm256_div_ps: vdivps is quite slow, e.g. on my Haswell Xeon it's 18-21 cycles latency, 14 cycles throughput. Newer CPUs perform better, BTW; it's 11/5 on Skylake, 10/6 on Ryzen.
As said in the comments, the performance is fixable by replacing the divide with a multiply, and improved further with FMA. The problem with that approach is the quality of the distribution: if you try to force these numbers into your output interval with a rounding mode or clipping, you'll introduce peaks in the probability distribution of the output numbers.
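For reference, here is a minimal sketch of that multiply/FMA variant (the name randomFloats_fma is mine, and it assumes FMA3 support; it keeps your _mm256_cvtepi32_ps approach). It also illustrates the caveat above: _mm256_cvtepi32_ps rounds inputs near INT32_MAX up to 2^31, so this version can produce exactly 1.0, and clamping those values back into the interval is what creates a peak:

__m256 __vectorcall randomFloats_fma( __m256i iRand )
{
// Signed int32 -> float; after rounding, values span [-2^31 .. 2^31]
const __m256 fRand = _mm256_cvtepi32_ps( iRand );
// One fused multiply-add: fRand * 2^-32 + 0.5, mapping to [0 .. 1], i.e. exactly 1.0 is reachable
return _mm256_fmadd_ps( fRand, _mm256_set1_ps( 0x1p-32f ), _mm256_set1_ps( 0.5f ) );
}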
My implementation is not ideal either: it doesn't output all possible values in the output interval, skipping many representable floats, especially near 0. But at least the distribution is very even.
__m256 __vectorcall randomFloats( __m256i randomBits )
{
// Convert to random float bits
__m256 result = _mm256_castsi256_ps( randomBits );
// Zero out exponent bits, leave random bits in mantissa.
// BTW, since the mask is a compile-time constant, we don't actually need AVX2 instructions for this; set1_epi32 is just the easiest way to write it.
const __m256 mantissaMask = _mm256_castsi256_ps( _mm256_set1_epi32( 0x007FFFFF ) );
result = _mm256_and_ps( result, mantissaMask );
// Set sign + exponent bits to that of 1.0, which is sign=0, exponent=2^0.
const __m256 one = _mm256_set1_ps( 1.0f );
result = _mm256_or_ps( result, one );
// Subtract 1.0. The above algorithm generates floats in range [1..2).
// Can't use bit tricks to generate floats in [0..1) because it would cause them to be distributed very unevenly.
return _mm256_sub_ps( result, one );
}
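A quick usage sketch, in case it helps (the input constants are arbitrary values I picked, standing in for the output of a real vectorized RNG):

#include <immintrin.h>
#include <cstdio>

int main()
{
// Any source of 256 random bits works here; fixed constants keep the example deterministic.
const __m256i bits = _mm256_set_epi32( 0x12345678, 0x0BADF00D, 0x7FFFFFFF, 0, 0x13579BDF, 0x2468ACE0, 0x00C0FFEE, 0x5EEDBEEF );
const __m256 floats = randomFloats( bits );
alignas( 32 ) float res[ 8 ];
_mm256_store_ps( res, floats );
for( float f : res )
printf( "%.9g\n", f ); // every printed value is in [0, 1)
return 0;
}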
Update: if you want better precision, use the following version. But it’s no longer “the fastest”.
__m256 __vectorcall randomFloats_32( __m256i randomBits )
{
// Convert to random float bits
__m256 result = _mm256_castsi256_ps( randomBits );
// Zero out exponent bits, leave random bits in mantissa.
const __m256 mantissaMask = _mm256_castsi256_ps( _mm256_set1_epi32( 0x007FFFFF ) );
result = _mm256_and_ps( result, mantissaMask );
// Set sign + exponent bits to that of 1.0, which is sign=0, exponent = 2^0.
const __m256 one = _mm256_set1_ps( 1.0f );
result = _mm256_or_ps( result, one );
// Subtract 1.0. The above algorithm generates floats in range [1..2).
result = _mm256_sub_ps( result, one );
// Use the 9 random bits that the mantissa mask above discarded to add extra randomness to the lower bits of the values.
// This increases precision to 2^-32; however, most floats in the range can't store that many mantissa bits, so the fmadd below only adds them exactly for small enough values.
// If you want uniformly distributed floats with 2^-24 precision, replace the second argument in the following line with _mm256_set1_epi32( 0x80000000 ).
// In that case you don't need to set rounding mode bits in MXCSR.
__m256i extraBits = _mm256_and_si256( randomBits, _mm256_set1_epi32( int( 0xFF800000 ) ) );
extraBits = _mm256_srli_epi32( extraBits, 9 );
__m256 extra = _mm256_castsi256_ps( extraBits );
extra = _mm256_or_ps( extra, one );
extra = _mm256_sub_ps( extra, one );
// Switch to round-down so the fmadd can't round a sum up to exactly 1.0; note this changes MXCSR for all subsequent float math on the thread.
_MM_SET_ROUNDING_MODE( _MM_ROUND_DOWN );
constexpr float mul = 0x1p-23f; // The initial part of the algorithm generated a uniform distribution with step 2^-23.
return _mm256_fmadd_ps( extra, _mm256_set1_ps( mul ), result );
}
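One thing to watch out for: _MM_SET_ROUNDING_MODE changes MXCSR for the whole thread, and the function never restores it, so later float math will round down too. If that matters in your code, save and restore the mode around the call, e.g. (a sketch, with randomBits standing in for your input):

const unsigned int savedRounding = _MM_GET_ROUNDING_MODE();
const __m256 r = randomFloats_32( randomBits );
_MM_SET_ROUNDING_MODE( savedRounding );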