Is the performance/memory benefit of short nullified by downcasting?

I'm writing a large scale application where I'm trying to conserve as much memory as possible as well as boost performance. As such, when I have a field that I know is only going to have values from 0 - 10 or from -100 - 100, I try to use the short data type instead of int.

What this means for the rest of the code, though, is that all over the place when I call these functions, I have to downcast simple ints into shorts. For example:

Method Signature

public void coordinates(short x, short y) ...

Method Call

obj.coordinates((short) 1, (short) 2);

It's like that all throughout my code because the literals are treated as ints and aren't being automatically downcast or typed based on the function parameters.
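For reference, Java narrows constant expressions implicitly only in assignment contexts (JLS 5.2), never in method invocation contexts (JLS 5.3), which is why the casts are required at every call site. A minimal sketch (class and method names are illustrative):

```java
public class NarrowingDemo {
    static void coordinates(short x, short y) { }

    public static void main(String[] args) {
        short a = 1;            // compiles: constant fits in short, assignment context narrows
        // coordinates(1, 2);   // does NOT compile: no implicit narrowing in method calls
        coordinates((short) 1, (short) 2); // explicit cast required
        coordinates(a, a);      // fine: arguments are already short
    }
}
```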

As such, is any performance or memory gain actually significant once this downcasting occurs? Or is the conversion process so efficient that I can still pick up some gains?

asked Nov 27 '12 by asteri


2 Answers

There is no performance benefit to using short versus int on 32-bit platforms, except in the case of short[] versus int[], and even then the cons usually outweigh the pros.

Assuming you're running on either x64, x86 or ARM-32:

  • When in use, 16-bit SHORTs are stored in integer registers which are either 32-bit or 64-bits long, just the same as ints. I.e. when the short is in use, you gain no memory or performance benefit versus an int.
  • When on the stack, 16-bit SHORTs are stored in 32-bit or 64-bit "slots" in order to keep the stack aligned (just like ints). You gain no performance or memory benefit from using SHORTs versus INTs for local variables.
  • When being passed as parameters, SHORTs are auto-widened to 32-bit or 64-bit when they are pushed on the stack (unlike ints, which are pushed as-is). Your code here is actually slightly less performant and has a slightly bigger (code) memory footprint than if you used ints.
  • When storing global (static) variables, these variables are automatically expanded to take up 32-bit or 64-bit slots to guarantee alignment of pointers (references). This means you get no performance or memory benefit for using SHORTs versus INTs for global (static) variables.
  • When storing fields, these live in a structure in heap memory that maps to the layout of the class. In this class, fields are automatically padded to 32-bit or 64-bit to maintain the alignment of fields on the heap. You get no performance or memory benefit by using SHORTs for fields versus INTs.

The only benefit you'll ever see for using SHORTs versus INTs is in the case where you allocate an array of them. In this case, an array of N shorts is roughly half as long as an array of N ints.
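As a rough illustration of that halving, here is the element payload only; the exact heap size is JVM-dependent and also includes an object header, a length field, and padding to an 8-byte boundary (class name is illustrative):

```java
public class ArrayFootprint {
    public static void main(String[] args) {
        final int n = 1_000_000;
        // Payload only: 2 bytes per short element vs 4 bytes per int element.
        long shortBytes = (long) n * Short.BYTES;
        long intBytes   = (long) n * Integer.BYTES;
        System.out.println("short[" + n + "] payload: " + shortBytes + " bytes");
        System.out.println("int["   + n + "] payload: " + intBytes   + " bytes");
    }
}
```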

Aside from the cache benefit of packing more elements per cache line when doing complex but localized math over a large short[] in a hot loop, you'll never see a benefit to using SHORTs versus INTs.

In ALL other cases, such as shorts used for fields, globals, parameters and locals, there is no difference between a SHORT and an INT other than the number of bits it can store.

My advice, as always: before making your code more difficult to read and more artificially restricted, BENCHMARK your code to see where the memory and CPU bottlenecks are, and then tackle those.
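If you do want to check for yourself, a naive sketch follows; note that a hand-rolled timing loop like this is only a starting point, and for trustworthy numbers you'd use a harness such as JMH, which handles warmup, JIT effects and dead-code elimination (class name is illustrative):

```java
import java.util.Random;

public class SumBench {
    static long sum(int[] a)   { long s = 0; for (int v : a) s += v; return s; }
    static long sum(short[] a) { long s = 0; for (short v : a) s += v; return s; }

    public static void main(String[] args) {
        final int n = 10_000_000;
        Random rnd = new Random(42);
        short[] shorts = new short[n];
        int[] ints = new int[n];
        for (int i = 0; i < n; i++) {
            shorts[i] = (short) rnd.nextInt(Short.MAX_VALUE);
            ints[i] = shorts[i]; // identical values in both arrays
        }
        long t0 = System.nanoTime();
        long s1 = sum(ints);
        long t1 = System.nanoTime();
        long s2 = sum(shorts);
        long t2 = System.nanoTime();
        System.out.printf("int[]   sum=%d in %d ms%n", s1, (t1 - t0) / 1_000_000);
        System.out.printf("short[] sum=%d in %d ms%n", s2, (t2 - t1) / 1_000_000);
    }
}
```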

I strongly suspect that if you ever come across the case where your app is suffering from use of ints rather than shorts, then you'll have long since ditched Java for a less memory/CPU hungry runtime anyway, so doing all of this work upfront is wasted effort.

answered Oct 15 '22 by SecurityMatt


As far as I can see, the casts per se should have no runtime costs (whether using short instead of int actually improves performance is debatable, and depends on the specifics of your application).

Consider the following:

public class Main {
    public static void f(short x, short y) {
    }

    public static void main(String args[]) {
        final short x = 1;
        final short y = 2;
        f(x, y);
        f((short)1, (short)2);
    }
}

The two calls in main() compile to:

  // f(x, y)
   4: iconst_1      
   5: iconst_2      
   6: invokestatic  #21                 // Method f:(SS)V

  // f((short)1, (short)2);
   9: iconst_1      
  10: iconst_2      
  11: invokestatic  #21                 // Method f:(SS)V

As you can see, they are identical. The casts happen at compile time.
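For completeness: when the argument is not a compile-time constant, the compiler does emit an `i2s` instruction, but that is a cheap single register operation that truncates to the low 16 bits and sign-extends. A minimal sketch (class name is illustrative):

```java
public class CastDemo {
    public static void main(String[] args) {
        int big = 70000;        // does not fit in 16 bits
        short s = (short) big;  // i2s: keeps the low 16 bits, sign-extends
        System.out.println(s);  // prints 4464 (70000 - 65536)
    }
}
```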

answered Oct 15 '22 by NPE