Linux defines an assembler macro to use BX on CPUs that support it, which makes me suspect there is some performance reason. This answer and the Cortex-A7 MPCore Technical Reference Manual also state that it helps with branch prediction. However, my benchmarking efforts have not been able to find a performance difference on ARM1176, Cortex-A17, Cortex-A72 and Neoverse-N1 CPUs.
Is there therefore any reason to prefer BX over MOV pc on CPUs with an MMU that implement the 32-bit ARM instruction set, other than interworking with Thumb code?
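(For reference, the kind of macro I mean is roughly like the following sketch. This is only an illustration of the idea, not the actual Linux source; ARCH_HAS_BX is a made-up symbol standing in for whatever configuration symbol the build provides, e.g. set with --defsym ARCH_HAS_BX=1 on targets that have BX.)
@ Illustrative only: pick the return instruction at assembly time.
.macro retinst reg=lr
.if ARCH_HAS_BX
bx \reg
.else
mov pc, \reg
.endif
.endm
A function then ends with retinst (or retinst r2, etc.) instead of writing bx or mov pc directly.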
Edited to add benchmark code, all aligned to 64 bytes:
Perform useless calculations on lr and return using BX:
div_bx
mov r9, #2
mul lr, r9, lr
udiv lr, lr, r9
mul lr, r9, lr
udiv lr, lr, r9
bx lr
Perform useless calculations on another register and return using BX:
div_bx2
mov r9, #2
mul r3, r9, lr
udiv r3, r3, r9
mul r3, r9, r3
udiv r3, r3, r9
bx lr
Perform useless calculations on lr and return using MOV:
div_mov
mov r9, #2
mul lr, r9, lr
udiv lr, lr, r9
mul lr, r9, lr
udiv lr, lr, r9
mov pc, lr
Call using classic function pointer sequence:
movmov
push {lr}
loop mov lr, pc @ pc reads as '.'+8 in ARM state, so lr points just past the following mov pc, r1
mov pc, r1
mov lr, pc
mov pc, r1
mov lr, pc
mov pc, r1
mov lr, pc
mov pc, r1
subs r0, r0, #1
bne loop
pop {pc}
Call using BLX:
blx
push {lr}
loop nop
blx r1
nop
blx r1
nop
blx r1
nop
blx r1
subs r0, r0, #1
bne loop
pop {pc}
Removing the nops makes it slower.
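The caller loops above expect the iteration count in r0 and the address of the routine under test in r1, so a minimal driver looks something like this (a sketch in GNU as syntax; the timing around the call, e.g. with clock(), is omitted):
.global main
main: push {r4, lr}
ldr r0, =100000000 @ iteration count, consumed by the subs in the caller loop
ldr r1, =div_bx @ address of the callee under test
bl movmov @ or the BLX-based caller
mov r0, #0
pop {r4, pc}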
Results in seconds per 100000000 loops:
Neoverse-N1 r3p1 (AWS c6g.medium)
          mov+mov   blx
div_bx       5.73  1.70
div_mov      5.89  1.71
div_bx2      2.81  1.69

Cortex-A72 r0p3 (AWS a1.medium)
          mov+mov   blx
div_bx       5.32  1.63
div_mov      5.39  1.58
div_bx2      2.79  1.63

Cortex-A17 r0p1 (ASUS C100P)
          mov+mov   blx
div_bx      12.52  5.69
div_mov     12.52  5.75
div_bx2      5.51  5.56
It appears all three of the processors I tested recognise both mov pc, lr and bx lr as return instructions. The Raspberry Pi 1's ARM1176, however, is documented as having return prediction that recognises only BX lr and certain loads as return instructions, yet I find no evidence of return prediction on it.
header: .string " Calle BL B Difference"
format: .string "%12s %7i %7i %11i\n"
.align
.global main
main: push {r3-r5, lr}
adr r0, header
bl puts
@ Warm up
bl clock
mov r0, #0x40000000
1: subs r0, r0, #1
bne 1b
bl clock
@ Time the body that follows the macro invocation when called via bl (BL column)
@ and via mov lr,pc + plain b (B column), then print both times and the difference.
.macro run_test test
2: bl 1f @ call the test body once before starting the timers
nop
bl clock
mov r4, r0
ldr r0, =10000000
.balign 64
3: mov lr, pc @ overwritten by the bl below; keeps this loop the same shape as the B loop
bl 1f
nop
mov lr, pc
bl 1f
nop
mov lr, pc
bl 1f
nop
subs r0, r0, #1
bne 3b
bl clock
mov r5, r0
ldr r0, =10000000
.balign 64
5: mov lr, pc @ return address set by hand; the plain b below gives the CPU no bl/return pair
b 1f
nop
mov lr, pc
b 1f
nop
mov lr, pc
b 1f
nop
subs r0, r0, #1
bne 5b
bl clock
sub r2, r5, r4
sub r3, r0, r5
sub r0, r3, r2
str r0, [sp]
adr r1, 4f
ldr r0, =format
bl printf
b 2f
.ltorg
4: .string "\test"
.balign 64
1:
.endm
run_test mov
mov lr, lr
mov pc, lr
run_test bx
mov lr, lr
bx lr
run_test mov_mov
mov r2, lr
mov pc, r2
run_test mov_bx
mov r2, lr
bx r2
run_test pp_mov_mov
push {r1-r11, lr}
pop {r1-r11, lr}
mov r12, lr
mov pc, r12
run_test pp_mov_bx
push {r1-r11, lr}
pop {r1-r11, lr}
mov r12, lr
bx r12
run_test pp_mov_mov_f
push {r0-r11}
pop {r0-r11}
mov r12, lr
mov pc, r12
run_test pp_mov_bx_f
push {r0-r11}
pop {r0-r11}
mov r12, lr
bx r12
run_test pp_mov
push {r1-r11, lr}
pop {r1-r11, lr}
mov r12, lr
mov pc, lr
run_test pp_bx
push {r1-r11, lr}
pop {r1-r11, lr}
mov r12, lr
bx lr
run_test pp_mov_f
push {r0-r11}
pop {r0-r11}
mov r12, lr
mov pc, lr
run_test pp_bx_f
push {r0-r11}
pop {r0-r11}
mov r12, lr
bx lr
run_test add_mov
nop
add r2, lr, #4
mov pc, r2
run_test add_bx
nop
add r2, lr, #4
bx r2
2: pop {r3-r5, pc}
Results on Cortex-A17 are as expected:
       Calle      BL       B  Difference
         mov   94492  255882      161390
          bx   94673  255752      161079
     mov_mov  255872  255806         -66
      mov_bx  255902  255796        -106
  pp_mov_mov  506079  506132          53
   pp_mov_bx  506108  506262         154
pp_mov_mov_f  439339  439436          97
 pp_mov_bx_f  439437  439776         339
      pp_mov  247941  495527      247586
       pp_bx  247891  494873      246982
    pp_mov_f  230846  422626      191780
     pp_bx_f  230850  422772      191922
     add_mov  255997  255896        -101
      add_bx  255900  256288         388
However, my Raspberry Pi 1 with its ARM1176, running Linux 5.4.51+ from Raspberry Pi OS, shows no advantage for the predictable instructions:
       Calle      BL       B  Difference
         mov  464367  464372           5
          bx  464343  465104         761
     mov_mov  464346  464417          71
      mov_bx  464280  464577         297
  pp_mov_mov 1073684 1074169         485
   pp_mov_bx 1074009 1073832        -177
pp_mov_mov_f  769160  768757        -403
 pp_mov_bx_f  769354  769368          14
      pp_mov  885585 1030520      144935
       pp_bx  885222 1032396      147174
    pp_mov_f  682139  726129       43990
     pp_bx_f  682431  725210       42779
     add_mov  494061  493306        -755
      add_bx  494080  493093        -987
If you're testing simple cases where mov pc, ... always jumps to the same return address, regular indirect-branch prediction might do fine. I'd guess that bx lr might use a return-address predictor that assumes matched call/return pairs (blx / bx lr) to correctly predict returns to varying call sites, without also wasting space in the normal indirect branch predictor.
To test this hypothesis, try something like
testfunc:
bx lr @ or mov pc,lr
caller:
ldr r0, =100000000 @ constant is too big for a mov immediate
.p2align 4
.loop:
blx testfunc
blx testfunc @ different return address than the previous blx
blx testfunc
blx testfunc
subs r0, #1
bne .loop
If my hypothesis is right, I predict that mov pc, lr will be slower for this than bx lr.
(A more complicated pattern of target addresses (callsites in this case) might be needed to confound indirect branch prediction on some CPUs. Some CPUs have a return address predictor that can only remember 1 target address, but somewhat more sophisticated predictors can handle a simple repeating pattern of 4 addresses.)
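For instance, something along these lines (just a sketch; caller8 and .loop8 are made-up names) produces eight distinct return addresses per iteration, which a return-address stack still predicts trivially but which a plain indirect-branch predictor tracking a short repeating pattern of targets probably can't:
caller8:
ldr r0, =100000000 @ iteration count (too big for a mov immediate)
.p2align 4
.loop8:
.rept 8 @ eight separate blx call sites = eight distinct return addresses
blx testfunc
.endr
subs r0, #1
bne .loop8
bx lr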
(This is a guess; I don't have experience with any of these chips, but the general CPU-architecture technique of a return-address predictor is well known, and I've read that it's used in practice on multiple ISAs. I know for sure x86 uses it: http://blog.stuffedcow.net/2018/04/ras-microbenchmarks/ Mismatched call/ret is definitely a problem there.)