I'm writing a large-scale application where I'm trying to conserve as much memory as possible as well as boost performance. As such, when I have a field that I know is only going to have values from 0 to 10 or from -100 to 100, I try to use the short data type instead of int.
What this means for the rest of the code, though, is that all over the place when I call these functions, I have to downcast simple ints into shorts. For example:
Method Signature
public void coordinates(short x, short y) ...
Method Call
obj.coordinates((short) 1, (short) 2);
It's like that all throughout my code, because integer literals are typed as int and aren't automatically narrowed to match the parameter types.
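To be precise about what does and doesn't compile, here is a minimal illustration using the coordinates method from above:

short s = 1;                            // compiles: constant assignments are narrowed implicitly
obj.coordinates(1, 2);                  // does not compile: method arguments are never narrowed implicitly
obj.coordinates((short) 1, (short) 2);  // the explicit cast is required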
As such, is any performance or memory gain actually significant once this downcasting occurs? Or is the conversion process so efficient that I can still pick up some gains?
There is no performance benefit to using short instead of int on 32-bit (or wider) platforms, in all but the case of short[] versus int[], and even then the cons usually outweigh the pros.
Assuming you're running on x64, x86, or ARM-32:
The only benefit you'll ever see from using shorts rather than ints is when you allocate an array of them. In that case, an array of N shorts is roughly half the size of an array of N ints.
Other than the cache-locality boost of fitting twice as many values into each cache line, which can matter for complex but localized math over a large array of shorts, you'll never see a benefit from using short instead of int.
In ALL other cases, such as shorts used for fields, globals, parameters, and locals, there is no difference between a short and an int apart from the range of values each can store. (On the JVM, locals and operand-stack slots occupy at least 32 bits regardless of the declared type.)
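As a rough illustration of the array case, here's a minimal sketch (the class name and array sizes are made up for this example, and heap-delta measurement is only approximate; a memory profiler gives more reliable numbers). On a typical 64-bit JVM it should print roughly 20 MB for the short[] and roughly 40 MB for the int[]:

public class ArrayFootprint {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.gc(); // only a hint, but good enough for a rough baseline
        long base = rt.totalMemory() - rt.freeMemory();

        short[] shorts = new short[10_000_000];   // ~20 MB of payload
        long afterShorts = rt.totalMemory() - rt.freeMemory();

        int[] ints = new int[10_000_000];         // ~40 MB of payload
        long afterInts = rt.totalMemory() - rt.freeMemory();

        System.out.printf("short[10M] ~ %d MB%n", (afterShorts - base) / (1024 * 1024));
        System.out.printf("int[10M]   ~ %d MB%n", (afterInts - afterShorts) / (1024 * 1024));

        // Touch both arrays so they can't be collected before we finish measuring.
        System.out.println(shorts.length + ints.length);
    }
}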
My advice, as always: before making your code harder to read and more artificially restricted, BENCHMARK it to see where the memory and CPU bottlenecks actually are, and then tackle those.
I strongly suspect that if you ever hit a case where your app genuinely suffers from using ints rather than shorts, you'll have long since ditched Java for a less memory- and CPU-hungry runtime anyway, so doing all of this work up front is wasted effort.
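For what it's worth, here is a bare-bones timing sketch of the array case (the class name and sizes are my own invention; naive System.nanoTime() timing like this is easily distorted by JIT warm-up and dead-code elimination, so a real benchmark should use a harness such as JMH):

public class SumBench {
    static long sum(int[] a)   { long s = 0; for (int v : a) s += v; return s; }
    static long sum(short[] a) { long s = 0; for (short v : a) s += v; return s; }

    public static void main(String[] args) {
        final int n = 20_000_000;
        int[] ints = new int[n];
        short[] shorts = new short[n];
        for (int i = 0; i < n; i++) { ints[i] = i; shorts[i] = (short) i; }

        // Crude warm-up so the JIT compiles both loops before timing.
        sum(ints); sum(shorts);

        long t0 = System.nanoTime();
        long s1 = sum(ints);
        long t1 = System.nanoTime();
        long s2 = sum(shorts);
        long t2 = System.nanoTime();

        System.out.println("int[]:   " + (t1 - t0) / 1_000_000 + " ms (sum=" + s1 + ")");
        System.out.println("short[]: " + (t2 - t1) / 1_000_000 + " ms (sum=" + s2 + ")");
    }
}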
As far as I can see, the casts per se should have no runtime cost (whether using short instead of int actually improves performance is debatable, and depends on the specifics of your application).
Consider the following:
public class Main {
    public static void f(short x, short y) {
    }

    public static void main(String[] args) {
        final short x = 1;
        final short y = 2;
        f(x, y);
        f((short) 1, (short) 2);
    }
}
The last two lines of main() compile to:
// f(x, y)
4: iconst_1
5: iconst_2
6: invokestatic #21 // Method f:(SS)V
// f((short)1, (short)2);
9: iconst_1
10: iconst_2
11: invokestatic #21 // Method f:(SS)V
As you can see, the two call sites compile to identical bytecode: the narrowing conversions happen entirely at compile time.
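(You can reproduce these listings yourself by compiling the class with javac Main.java and then disassembling it with javap -c Main.)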