Looping over arrays with inline assembly

When looping over an array with inline assembly should I use the register modifier "r" or the memory modifier "m"?

Let's consider an example which adds two float arrays x and y and writes the results to z. Normally I would use intrinsics to do this, like so:

for(int i=0; i<n/4; i++) {
    __m128 x4 = _mm_load_ps(&x[4*i]);
    __m128 y4 = _mm_load_ps(&y[4*i]);
    __m128 s = _mm_add_ps(x4,y4);
    _mm_store_ps(&z[4*i], s);
}

Here is the inline assembly solution I have come up with using the register modifier "r"

void add_asm1(float *x, float *y, float *z, unsigned n) {
    for(int i=0; i<n; i+=4) {
        __asm__ __volatile__ (
            "movaps   (%1,%%rax,4), %%xmm0\n"
            "addps    (%2,%%rax,4), %%xmm0\n"
            "movaps   %%xmm0, (%0,%%rax,4)\n"
            :
            : "r" (z), "r" (y), "r" (x), "a" (i)
            :
        );
    }
}

This generates similar assembly to GCC. The main difference is that GCC adds 16 to the index register and uses a scale of 1 whereas the inline-assembly solution adds 4 to the index register and uses a scale of 4.

I was not able to use a general register for the iterator; I had to specify one, which in this case was rax. Is there a reason for this?

Here is the solution I came up with using the memory modifier "m"

void add_asm2(float *x, float *y, float *z, unsigned n) {
    for(int i=0; i<n; i+=4) {
        __asm__ __volatile__ (
            "movaps   %1, %%xmm0\n"
            "addps    %2, %%xmm0\n"
            "movaps   %%xmm0, %0\n"
            : "=m" (z[i])
            : "m" (y[i]), "m" (x[i])
            :
            );
    }
}

This is less efficient as it does not use an index register and instead has to add 16 to the base register of each array. The generated assembly is (gcc (Ubuntu 5.2.1-22ubuntu2) with gcc -O3 -S asmtest.c):

.L22:
    movaps   (%rsi), %xmm0
    addps    (%rdi), %xmm0
    movaps   %xmm0, (%rdx)
    addl    $4, %eax
    addq    $16, %rdx
    addq    $16, %rsi
    addq    $16, %rdi
    cmpl    %eax, %ecx
    ja      .L22

Is there a better solution using the memory modifier "m"? Is there some way to get it to use an index register? The reason I ask is that it seemed more logical to me to use the memory modifier "m" since I am reading and writing memory. Additionally, with the register modifier "r" I never use an output operand list, which seemed odd to me at first.

Maybe there is a better solution than using "r" or "m"?

Here is the full code I used to test this

#include <stdio.h>
#include <x86intrin.h>

#define N 64

void add_intrin(float *x, float *y, float *z, unsigned n) {
    for(int i=0; i<n; i+=4) {
        __m128 x4 = _mm_load_ps(&x[i]);
        __m128 y4 = _mm_load_ps(&y[i]);
        __m128 s = _mm_add_ps(x4,y4);
        _mm_store_ps(&z[i], s);
    }
}

void add_intrin2(float *x, float *y, float *z, unsigned n) {
    for(int i=0; i<n/4; i++) {
        __m128 x4 = _mm_load_ps(&x[4*i]);
        __m128 y4 = _mm_load_ps(&y[4*i]);
        __m128 s = _mm_add_ps(x4,y4);
        _mm_store_ps(&z[4*i], s);
    }
}

void add_asm1(float *x, float *y, float *z, unsigned n) {
    for(int i=0; i<n; i+=4) {
        __asm__ __volatile__ (
            "movaps   (%1,%%rax,4), %%xmm0\n"
            "addps    (%2,%%rax,4), %%xmm0\n"
            "movaps   %%xmm0, (%0,%%rax,4)\n"
            :
            : "r" (z), "r" (y), "r" (x), "a" (i)
            :
        );
    }
}

void add_asm2(float *x, float *y, float *z, unsigned n) {
    for(int i=0; i<n; i+=4) {
        __asm__ __volatile__ (
            "movaps   %1, %%xmm0\n"
            "addps    %2, %%xmm0\n"
            "movaps   %%xmm0, %0\n"
            : "=m" (z[i])
            : "m" (y[i]), "m" (x[i])
            :
            );
    }
}

int main(void) {
    float x[N], y[N], z1[N], z2[N], z3[N];
    for(int i=0; i<N; i++) x[i] = 1.0f, y[i] = 2.0f;
    add_intrin2(x,y,z1,N);
    add_asm1(x,y,z2,N);
    add_asm2(x,y,z3,N);
    for(int i=0; i<N; i++) printf("%.0f ", z1[i]); puts("");
    for(int i=0; i<N; i++) printf("%.0f ", z2[i]); puts("");
    for(int i=0; i<N; i++) printf("%.0f ", z3[i]); puts("");
}
asked Dec 12 '15 by Z boson
2 Answers

Avoid inline asm whenever possible: https://gcc.gnu.org/wiki/DontUseInlineAsm. It blocks many optimizations. But if you really can't hand-hold the compiler into making the asm you want, you should probably write your whole loop in asm so you can unroll and tweak it manually, instead of doing stuff like this.


You can use an r constraint for the index. Use the q modifier to get the name of the 64bit register, so you can use it in an addressing mode. When compiled for 32bit targets, the q modifier selects the name of the 32bit register, so the same code still works.

If you want to choose what kind of addressing mode is used, you'll need to do it yourself, using pointer operands with r constraints.

GNU C inline asm syntax doesn't assume that you read or write memory pointed to by pointer operands (e.g. maybe you're using an inline-asm `and` on the pointer value). So you need to do something with either a "memory" clobber or memory input/output operands to let it know what memory you modify. A "memory" clobber is easy, but forces everything except locals to be spilled/reloaded. See the Clobbers section in the docs for an example of using a dummy input operand.

Specifically, a "m" (*(const float (*)[]) fptr) will tell the compiler that the entire array object is an input, arbitrary-length. i.e. the asm can't reorder with any stores that use fptr as part of the address (or that use the array it's known to point into). Also works with an "=m" or "+m" constraint (without the const, obviously).

Using a specific size like "m" (*(const float (*)[4]) fptr) lets you tell the compiler what you do/don't read. (Or write). Then it can (if otherwise permitted) sink a store to a later element past the asm statement, and combine it with another store (or do dead-store elimination) of any stores that your inline asm doesn't read.

(See How can I indicate that the memory *pointed* to by an inline ASM argument may be used? for a whole Q&A about this.)


Another huge benefit to an m constraint is that -funroll-loops can work by generating addresses with constant offsets. Doing the addressing ourselves prevents the compiler from doing a single increment every 4 iterations or something, because every source-level value of i needs to appear in a register.


Here's my version, with some tweaks as noted in comments. This is not optimal, e.g. can't be unrolled efficiently by the compiler.

#include <immintrin.h>
void add_asm1_memclobber(float *x, float *y, float *z, unsigned n) {
    __m128 vectmp;  // let the compiler choose a scratch register
    for(int i=0; i<n; i+=4) {
        __asm__ __volatile__ (
            "movaps   (%[y],%q[idx],4), %[vectmp]\n\t"  // q modifier: 64bit version of a GP reg
            "addps    (%[x],%q[idx],4), %[vectmp]\n\t"
            "movaps   %[vectmp], (%[z],%q[idx],4)\n\t"
            : [vectmp] "=x" (vectmp)  // "=m" (z[i])  // gives worse code if the compiler prepares a reg we don't use
            : [z] "r" (z), [y] "r" (y), [x] "r" (x),
              [idx] "r" (i) // unrolling is impossible this way (without an insn for every increment by 4)
            : "memory"
          // you can avoid a "memory" clobber with dummy input/output operands
        );
    }
}

Godbolt compiler explorer asm output for this and a couple versions below.

Your version needs to declare %xmm0 as clobbered, or you will have a bad time when this is inlined. My version uses a temporary variable as an output-only operand that's never used. This gives the compiler full freedom for register allocation.

If you want to avoid the "memory" clobber, you can use dummy memory input/output operands like "m" (*(const __m128*)&x[i]) to tell the compiler which memory is read and written by your function. This is necessary to ensure correct code-generation if you did something like x[4] = 1.0; right before running that loop. (And even if you didn't write something that simple, inlining and constant propagation can boil it down to that.) And also to make sure the compiler doesn't read from z[] before the loop runs.

In this case, we get horrible results: gcc5.x actually increments 3 extra pointers because it decides to use [reg] addressing modes instead of indexed. It doesn't know that the inline asm never actually references those memory operands using the addressing mode created by the constraint!

# gcc5.4 with dummy constraints like "=m" (*(__m128*)&z[i]) instead of "memory" clobber
.L11:
    movaps   (%rsi,%rax,4), %xmm0   # y, i, vectmp
    addps    (%rdi,%rax,4), %xmm0   # x, i, vectmp
    movaps   %xmm0, (%rdx,%rax,4)   # vectmp, z, i

    addl    $4, %eax        #, i
    addq    $16, %r10       #, ivtmp.19
    addq    $16, %r9        #, ivtmp.21
    addq    $16, %r8        #, ivtmp.22
    cmpl    %eax, %ecx      # i, n
    ja      .L11        #,

r8, r9, and r10 are the extra pointers that the inline asm block doesn't use.

You can use a constraint that tells gcc an entire array of arbitrary length is an input or an output: "m" (*(const char (*)[]) pStr). This casts the pointer to a pointer-to-array (of unspecified size). See How can I indicate that the memory *pointed* to by an inline ASM argument may be used?

If we want to use indexed addressing modes, we will have the base address of all three arrays in registers, and this form of constraint asks for the base address (of the whole array) as an operand, rather than a pointer to the current memory being operated on.

This actually works without any extra pointer or counter increments inside the loop: (avoiding a "memory" clobber, but still not easily unrollable by the compiler).

void add_asm1_dummy_whole_array(const float *restrict x, const float *restrict y,
                             float *restrict z, unsigned n) {
    __m128 vectmp;  // let the compiler choose a scratch register
    for(int i=0; i<n; i+=4) {
        __asm__ __volatile__ (
            "movaps   (%[y],%q[idx],4), %[vectmp]\n\t"  // q modifier: 64bit version of a GP reg
            "addps    (%[x],%q[idx],4), %[vectmp]\n\t"
            "movaps   %[vectmp], (%[z],%q[idx],4)\n\t"
            : [vectmp] "=x" (vectmp)
             , "=m" (*(float (*)[]) z)  // "=m" (z[i])  // gives worse code if the compiler prepares a reg we don't use
            : [z] "r" (z), [y] "r" (y), [x] "r" (x),
              [idx] "r" (i) // unrolling is impossible this way (without an insn for every increment by 4)
              , "m" (*(const float (*)[]) x),
                "m" (*(const float (*)[]) y)  // pointer to unsized array = all memory from this pointer
        );
    }
}

This gives us the same inner loop we got with a "memory" clobber:

.L19:   # with clobbers like "m" (*(const struct {float a; float x[];} *) y)
    movaps   (%rsi,%rax,4), %xmm0   # y, i, vectmp
    addps    (%rdi,%rax,4), %xmm0   # x, i, vectmp
    movaps   %xmm0, (%rdx,%rax,4)   # vectmp, z, i

    addl    $4, %eax        #, i
    cmpl    %eax, %ecx      # i, n
    ja      .L19        #,

It tells the compiler that each asm block reads or writes the entire arrays, so it may unnecessarily stop it from interleaving with other code (e.g. after fully unrolling with low iteration count). It doesn't stop unrolling, but the requirement to have each index value in a register does make it less effective. There's no way for this to end up with a 16(%rsi,%rax,4) addressing mode in a 2nd copy of this block in the same loop, because we're hiding the addressing from the compiler.


A version with m constraints, that gcc can unroll:

#include <immintrin.h>
void add_asm1(float *x, float *y, float *z, unsigned n) {
    // x, y, z are assumed to be aligned
    __m128 vectmp;  // let the compiler choose a scratch register
    for(int i=0; i<n; i+=4) {
        __asm__ __volatile__ (
           // "movaps   %[yi], %[vectmp]\n\t"   // get the compiler to do this load instead
            "addps    %[xi], %[vectmp]\n\t"
            "movaps   %[vectmp], %[zi]\n\t"
          // __m128 is a may_alias type so these casts are safe.
            : [vectmp] "=x" (vectmp)         // let compiler pick a scratch reg
              ,[zi] "=m" (*(__m128*)&z[i])   // actual memory output for the movaps store
            : [yi] "0"  (*(__m128*)&y[i])  // or [yi] "xm" (*(__m128*)&y[i]), and uncomment the movaps load
             ,[xi] "xm" (*(__m128*)&x[i])
              //, [idx] "r" (i) // unrolling with this would need an insn for every increment by 4
        );
    }
}

Using [yi] as a +x input/output operand would be simpler, but writing it this way makes a smaller change for uncommenting the load in the inline asm, instead of letting the compiler get one value into registers for us.

answered Sep 21 '22 by Peter Cordes

When I compile your add_asm2 code with gcc (4.9.2) I get:

add_asm2:
.LFB0:
        .cfi_startproc
        xorl        %eax, %eax
        xorl        %r8d, %r8d
        testl       %ecx, %ecx
        je  .L1
        .p2align 4,,10
        .p2align 3
.L5:
#APP
# 3 "add_asm2.c" 1
        movaps   (%rsi,%rax), %xmm0
        addps    (%rdi,%rax), %xmm0
        movaps   %xmm0, (%rdx,%rax)

# 0 "" 2
#NO_APP
        addl        $4, %r8d
        addq        $16, %rax
        cmpl        %r8d, %ecx
        ja  .L5
.L1:
        rep; ret
        .cfi_endproc

so it is not perfect (it uses a redundant register), but does use indexed loads...

answered Sep 21 '22 by Chris Dodd