 

Difference between the packed-switch and sparse-switch Dalvik opcodes

I want to know the difference between the packed-switch and sparse-switch opcodes in Dalvik. Please provide examples if you can; the explanation provided by Google is unclear to me.

packed-switch, sparse-switch

Thanks.

asked Nov 08 '13 by P basak

1 Answer

It sounds as if packed-switch is equivalent to Java's tableswitch, and sparse-switch to lookupswitch.

A packed-switch uses a simple jump table: given the switch input n, the table is indexed by n - low, where low is the lowest test value among the case labels. The value at each index is the bytecode offset of the corresponding case, so finding the correct jump address is a constant-time operation.
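To make that concrete, here is a rough Java sketch of the dispatch logic. It is an illustration only, with made-up parameter names; the real VM works directly on the payload embedded in the bytecode.

static int packedDispatch(int n, int low, int[] targets, int defaultTarget) {
    // targets holds one jump offset per key in the range [low, low + targets.length - 1]
    int index = n - low;                        // position in the jump table
    if (index < 0 || index >= targets.length) {
        return defaultTarget;                   // input is outside the table's range
    }
    return targets[index];                      // constant-time lookup
}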

A sparse-switch uses a sorted list of key-value pairs, where each key is a test value from a case label and each value is a jump offset. Finding the correct jump target for a sparse-switch (Java's lookupswitch) requires a binary search on the keys, so it is a logarithmic-time operation.
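A comparable sketch of sparse-switch dispatch, again purely illustrative, using a binary search over the sorted keys:

static int sparseDispatch(int n, int[] keys, int[] targets, int defaultTarget) {
    // keys is sorted ascending; targets holds the jump offset for the key at the same index
    int pos = java.util.Arrays.binarySearch(keys, n);  // O(log n) search for the test value
    return pos >= 0 ? targets[pos] : defaultTarget;    // negative result means no matching case
}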

The compiler will choose which to use. If the keys tend to be clustered or packed closely together, then a packed-switch (or, in Java terms, a tableswitch) can be emitted efficiently. But if the keys are sparse, and the range of values (high - low + 1) is large, then using a jump table would require a large block of bytecode, as all values in that range must exist in the jump table regardless of whether there is a corresponding case label. In these scenarios, the compiler will emit a sparse-switch (lookupswitch).
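The exact cost model is compiler-specific (javac and dx/d8 each have their own), but the decision amounts to a density check along these lines; the threshold below is purely illustrative, not any real compiler's formula:

static boolean preferPackedSwitch(int[] caseValues) {
    // A jump table needs (high - low + 1) slots; a key/offset list needs 2 entries per case.
    if (caseValues.length == 0) {
        return false;
    }
    int[] sorted = caseValues.clone();
    java.util.Arrays.sort(sorted);
    long tableSlots = (long) sorted[sorted.length - 1] - sorted[0] + 1;  // slots a packed table would need
    long pairEntries = 2L * caseValues.length;                           // key + offset per case label
    return tableSlots <= 2 * pairEntries;       // arbitrary threshold, just to show the trade-off
}

For the two examples that follow, this returns true for the test values {1, 3, 5} (a 5-slot table versus 6 list entries) and false for {100, 300, 500} (a 401-slot table versus 6 list entries).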

Interestingly, the Dalvik engineers chose to name these opcodes in a way that describes the key distributions for which they should be used, whereas the Java engineers chose names which describe the conceptual data structures that the bytecode operands resemble.

Let's look at some examples. Consider the following Java code, which will produce a tableswitch (and, when converted to Dalvik, a packed-switch):

static String packedSwitch(final int n) {
    switch (n) {
        case 5:
            return "Five";
        case 3:
            return "Three";
        case 1:
            return "One";
        default:
            return "Other";
    }
}

Conceptually, the payload for the packed-switch opcode would look something like this:

[image: the actual packed-switch payload, a five-slot jump table starting at key 1]
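Expressed as Java data, that payload amounts to something like this; the offsets are hypothetical placeholders, and only the shape of the table matters:

// Conceptual contents of the packed-switch payload for packedSwitch() above.
static final int OFF_ONE = 0x10, OFF_THREE = 0x16, OFF_FIVE = 0x1c, OFF_DEFAULT = 0x22;

static final int FIRST_KEY = 1;       // lowest test value; the key for index 0
static final int[] JUMP_TABLE = {
    OFF_ONE,      // key 1 -> return "One"
    OFF_DEFAULT,  // key 2 -> no case label, jumps to default
    OFF_THREE,    // key 3 -> return "Three"
    OFF_DEFAULT,  // key 4 -> no case label, jumps to default
    OFF_FIVE      // key 5 -> return "Five"
};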

As you can see, it's fairly compact. Three out of the five slots point to actual case targets, with the remaining two jumping to the default target. But what if our test values were more spread out?

static String sparseSwitch(final int n) {
    switch (n) {
        case 500:
            return "Five Hundred";
        case 300:
            return "Three Hundred";
        case 100:
            return "One Hundred";
        default:
            return "Other";
    }
}

If the compiler tried to emit this as a packed-switch, the payload would look something like this:

[image: a theoretical packed-switch payload for this code, a 401-slot jump table covering keys 100 through 500]
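In the same Java-data form (offsets again hypothetical), the table the compiler would have to emit is dominated by padding:

// A packed-switch table for keys 100..500 needs 401 slots, only three of which
// point at real case labels.
static final int OFF_100 = 0x10, OFF_300 = 0x16, OFF_500 = 0x1c, OFF_DEF = 0x22;
static final int FIRST = 100;                    // lowest test value

static int[] buildWideTable() {
    int[] table = new int[401];                  // 500 - 100 + 1 slots
    java.util.Arrays.fill(table, OFF_DEF);       // 398 slots exist only to pad the range
    table[100 - FIRST] = OFF_100;                // index 0
    table[300 - FIRST] = OFF_300;                // index 200
    table[500 - FIRST] = OFF_500;                // index 400
    return table;
}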

Notice how only three out of a few hundred slots actually point to case labels from the original code. The rest are there simply to fill up the jump table. Not very space efficient, is it? That's why the compiler would emit a sparse-switch, which has a far more compact bytecode footprint for this particular example:

[image: the actual sparse-switch payload, three sorted key/offset pairs]
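As Java data, with hypothetical offsets, the whole payload boils down to two parallel arrays:

// Conceptual contents of the sparse-switch payload for sparseSwitch() above.
static final int OFF_HUNDRED = 0x10, OFF_THREE_HUNDRED = 0x16, OFF_FIVE_HUNDRED = 0x1c;

static final int[] KEYS    = { 100,         300,               500              };  // sorted test values
static final int[] OFFSETS = { OFF_HUNDRED, OFF_THREE_HUNDRED, OFF_FIVE_HUNDRED };  // jump offset per key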

Now, that's much more reasonable, don't you think? The downside, however, is that instead of knowing exactly which index to jump to based on the input, we have to perform a binary search on the table until we find a matching test value. The larger the switch, the more significant the impact on performance, though the cost grows only logarithmically.

answered Oct 19 '22 by Mike Strobel