 

Integer multiplication mod 2³² in Actionscript 3

Has anyone come across an authoritative specification of how arithmetic on int and uint works in Actionscript 3? (By "authoritative" I mean either "comes from Adobe" or "has been declared authoritative by Adobe"). In particular I'm looking for a supported way to do integer multiplication modulo 2³². This is not covered in any Adobe documentation I have been able to find.

Actionscript claims to be based on ECMAScript, but ECMAScript does not do integer arithmetic at all. It does everything on IEEE-754 doubles and reduces the result modulo 2³² before bitwise operations, which in most cases simulates integer arithmetic. However, this does not work for multiplication: the true result of multiplying, say, 0x10000001 * 0x0FFFFFFF is too long for the mantissa of a double, so the low-order bits will be lost if the specification is followed to the letter.
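For concreteness, here is what strict double semantics would predict for that example (a sketch; I use Number, AS3's IEEE-754 double type, explicitly, to keep the compiler from doing anything cleverer with int operands):

var a:Number = 0x10000001;  // 2^28 + 1
var b:Number = 0x0FFFFFFF;  // 2^28 - 1
// The exact product is 2^56 - 1, which needs 56 significand bits;
// a double rounds it up to 2^56, so the low-order bits are lost.
trace(uint(a * b));  // 0 if the multiply really happens in double
                     // precision; the exact result mod 2^32 is 0xFFFFFFFF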

Now enter Actionscript. I have found experimentally that multiplying two int or uint variables and immediately casting the product to int or uint always seems to give me the exact result. However, the generated AVM2 bytecode contains just a plain multiply instruction with no direct indication that it is supposed to produce an integer result rather than a floating-point one; the virtual machine would have to look ahead to find this out. I worry that I have simply been lucky in my experiments, getting extra precision as a bonus rather than something I can rely on.

(For one thing, my experiments were all performed using an x86 Flash player. Perhaps it represents intermediate results as Intel 80-bit doubles, or stores a 64-bit int on the evaluation stack until it's known what it will be used for. Neither would be easily possible on a non-x86 tablet with no native 32×32→64 multiplication instruction, so might the VM just decide to reduce the precision to what the ECMAScript standard specifies?)

24-hour status: Mike Welsh has done some able investigation and provided very useful links, but unfortunately not enough to close the question. Anyone else?

(tl;dr debate in comments: whitequark refutes, to some degree, one of my hypothetical reasons why the answer might be "no". His points have merit, but of course don't constitute a showing that the answer is "yes").

asked Aug 07 '11 15:08 by hmakholm left over Monica

1 Answer

ActionScript 3 was based on ECMAScript 4, which includes true 32-bit int and uint operations. For example, the multiply_i instruction performs integer multiplication (source: AVM2 Overview).
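For illustration, this is roughly what the two variants look like in a disassembly (the mnemonics come from the AVM2 Overview; the layout and stack comments are mine):

getlocal1     ; push a
getlocal2     ; push b
multiply      ; what the Adobe compiler emits: operands promoted to Number
convert_i     ; the double result is then truncated back to int32

getlocal1
getlocal2
multiply_i    ; a true 32-bit integer multiply, exact mod 2^32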

Unfortunately, the Adobe AS compiler only seems to emit the float versions of these opcodes, e.g. multiply, which converts the operands to 64-bit floats. This is presumably in accordance with the ECMAScript spec, which states that ints are promoted to doubles during math operations in order to handle overflow. If the player really does a 64-bit float multiplication and then converts back to an int, precision can be lost whenever the exact product needs more than the 53 significand bits a double provides.

Despite this, the Flash Player seems not to lose precision when the product is cast back to int immediately. For example:

var n:int = 0x7FFFFFFF;  // 2^31 - 1
var n2:int = n * n;      // exact square is 2^62 - 2^32 + 1; low 32 bits are 1
trace(n2);

Even though this code emits a multiply instruction, it traces a 1 in the Flash Player, which is the exact result: the true square is 2⁶² - 2³² + 1, whose low 32 bits are 1. I tested it in the Flash Player on several platforms, including a few mobile phones, and the result was consistently 1. However, running this code through a Tamarin shell in interpreted mode output a 0, which is exactly what double rounding predicts (the nearest double to the true square is 2⁶² - 2³², whose low 32 bits are 0). JIT mode still output a 1, so the extra precision appears to be a side effect of the JIT. It may therefore be risky to rely on this.

Using the multiply_i opcode instead should give correct integer results. Haxe emits this opcode when working with ints, and Apparat can also be used to patch it in. Failing that, the multiplication can be done portably in plain AS3; see the sketch below.
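The portable route is to split the operands into 16-bit halves so that every intermediate product fits exactly in a double's 53-bit significand; the function then relies only on operations that ECMAScript's double rules make exact. A sketch (mulMod32 is my own hypothetical helper, not a standard API):

// a*b = (aHi*bHi)<<32 + (aHi*bLo + aLo*bHi)<<16 + aLo*bLo;
// the first term vanishes mod 2^32, every partial product stays
// below 2^32, and their sums stay below 2^33, so each step is
// exact even when computed as a double.
function mulMod32(a:uint, b:uint):uint {
    var aHi:uint = a >>> 16;
    var aLo:uint = a & 0xFFFF;
    var bHi:uint = b >>> 16;
    var bLo:uint = b & 0xFFFF;
    var cross:uint = uint(aHi * bLo + aLo * bHi) << 16;
    return uint(aLo * bLo + cross);
}

trace(mulMod32(0x7FFFFFFF, 0x7FFFFFFF));  // 1, matching the example above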

answered Oct 30 '22 02:10 by Mike Welsh