I've noticed I've always used `int` and `double` no matter how small or big the number needs to be. So in Java, is it more efficient to use `byte` or `short` instead of `int`, and `float` instead of `double`?
So assume I have a program with plenty of `int`s and `double`s. Would it be worth going through and changing my `int`s to `byte`s or `short`s if I knew the number would fit?
I know Java doesn't have unsigned types, but is there anything extra I could do if I knew the number would be positive only?
By efficient I mostly mean processing. I'd assume the garbage collector would be a lot faster if all the variables were half the size, and that calculations would probably be somewhat faster too. (I guess since I am working on Android I need to worry somewhat about RAM too.)
(I'd assume the garbage collector only deals with objects and not primitives, but still deletes all the primitives in abandoned objects, right?)
I tried it with a small Android app I have but didn't really notice a difference at all. (Though I didn't "scientifically" measure anything.)
Am I wrong in assuming it should be faster and more efficient? I'd hate to go through and change everything in a massive program to find out I wasted my time.
Would it be worth doing from the beginning when I start a new project? (I mean, I think every little bit would help, but then again, if so, why doesn't it seem like anyone does it?)
The `int` datatype is the preferred type for numeric values. The `long` datatype is used less frequently; it should only be used when the range of the value is too large for `int`, and it requires the most memory (8 bytes) of the four integral types.
Using `short` can conserve memory, since it is half the width of `int`; this matters most when you have large arrays: an array of 32-bit `int`s occupies twice the memory of the same data stored as 16-bit `short`s.
However, a CPU works most efficiently when the data width matches the native register width, and this carries over indirectly to JVM code: in most cases, using `int` in a loop is more efficient than using `short`.
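One place the int-centric design shows up in the Java language itself is integer promotion: all `byte` and `short` arithmetic is carried out in `int`, which is why the narrowing assignment in this minimal sketch (class and variable names are just for illustration) needs an explicit cast:

```java
public class PromotionDemo {
    public static void main(String[] args) {
        short a = 1, b = 2;
        // short sum = a + b;        // does not compile: a + b has type int
        short sum = (short) (a + b); // arithmetic is done in int, then narrowed
        int total = a + b;           // no cast needed when the target is int
        System.out.println(sum + " " + total);
    }
}
```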
> Am I wrong in assuming it should be faster and more efficient? I'd hate to go through and change everything in a massive program to find out I wasted my time.
Yes, you are wrong. In most cases, it makes little difference in terms of space used.
It is not worth trying to optimize this ... unless you have clear evidence that optimization is needed. And if you do need to optimize memory usage of object fields in particular, you will probably need to take other (more effective) measures.
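As a hypothetical illustration of such a measure (the `Point` class and all names below are made up for this sketch): flattening many small objects into parallel primitive arrays removes per-object headers and references entirely, which saves far more memory than shrinking field types ever could:

```java
public class FlattenDemo {
    // One object per element: each Point carries an object header,
    // and the array holds a reference to it.
    static class Point { int x; int y; }

    public static void main(String[] args) {
        int n = 1_000_000;
        Point[] points = new Point[n]; // ~1M headers + 1M references

        // The same data as parallel primitive arrays: two contiguous
        // blocks of ints, no headers, no indirection.
        int[] xs = new int[n];
        int[] ys = new int[n];
        System.out.println(points.length + " " + xs.length + " " + ys.length);
    }
}
```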
The Java Virtual Machine models stacks and object fields using offsets that are (in effect) multiples of a 32-bit primitive cell size. So when you declare a local variable or object field as (say) a `byte`, the variable / field will be stored in a 32-bit cell, just like an `int`.
There are two exceptions to this: `long` and `double` values require two primitive 32-bit cells, and arrays of primitive types are stored in packed form (so, for example, a `byte[]` uses one byte per element). So it might be worth optimizing use of `long` and `double` ... and large arrays of primitives. But in general, no.
In theory, a JIT might be able to optimize this, but in practice I've never heard of a JIT that does. One impediment is that the JIT typically cannot run until after instances of the class being compiled have been created. If the JIT optimized the memory layout, you could have two (or more) "flavors" of object of the same class ... and that would present huge difficulties.
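If you want to see what your particular JVM actually does with small fields, OpenJDK's JOL (Java Object Layout) tool prints the real field offsets and padding. A minimal sketch, assuming the `org.openjdk.jol:jol-core` dependency is on the classpath (the demo classes are made up for illustration):

```java
import org.openjdk.jol.info.ClassLayout;

public class LayoutDemo {
    static class ByteFields { byte a; byte b; }
    static class IntFields  { int a;  int b;  }

    public static void main(String[] args) {
        // Prints each field's offset and size, plus any alignment padding,
        // as laid out by the JVM actually running this code.
        System.out.println(ClassLayout.parseClass(ByteFields.class).toPrintable());
        System.out.println(ClassLayout.parseClass(IntFields.class).toPrintable());
    }
}
```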
Looking at the benchmark results in @meriton's answer, it appears that using `short` and `byte` instead of `int` incurs a performance penalty for multiplication. Indeed, if you consider the operations in isolation, the penalty is significant. (You shouldn't consider them in isolation ... but that's another topic.)
I think the explanation is that the JIT is probably doing the multiplications using 32-bit multiply instructions in each case. But in the `byte` and `short` case, it executes extra instructions to convert the intermediate 32-bit value to a `byte` or `short` in each loop iteration. (In theory, that conversion could be done once at the end of the loop ... but I doubt that the optimizer would be able to figure that out.)
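The extra conversion is visible at the bytecode level: a compound assignment on a `short` hides a narrowing cast (`s *= 3` means `s = (short) (s * 3)`), so compiling the sketch below and disassembling it with `javap -c` shows an `i2s` instruction after the `imul` in every iteration (class and variable names are just for illustration):

```java
public class NarrowingDemo {
    public static void main(String[] args) {
        short s = 1;
        for (int i = 0; i < 8; i++) {
            s *= 3; // multiply in 32-bit int, then truncate back to short
        }
        System.out.println(s); // 6561, since 3^8 still fits in a short
    }
}
```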
Anyway, this does point to another problem with switching to `short` and `byte` as an optimization. It could make performance worse ... in an algorithm that is arithmetic- and compute-intensive.
> I know Java doesn't have unsigned types, but is there anything extra I could do if I knew the number would be positive only?
No. Not in terms of performance, anyway. (There are some methods in `Integer`, `Long`, etc. for dealing with `int`, `long`, etc. as unsigned. But these don't give any performance advantage. That is not their purpose.)
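For reference, these are the kinds of helpers meant: since Java 8, `Integer` and `Long` can reinterpret the existing two's-complement bits as unsigned. They change how a value is printed or compared, not how it is stored or how fast arithmetic runs:

```java
public class UnsignedDemo {
    public static void main(String[] args) {
        int x = -1; // bit pattern 0xFFFFFFFF

        System.out.println(Integer.toUnsignedString(x));   // "4294967295"
        System.out.println(Integer.toUnsignedLong(x));     // 4294967295
        System.out.println(Integer.divideUnsigned(x, 2));  // 2147483647
        System.out.println(Integer.compareUnsigned(x, 1)); // > 0: 0xFFFFFFFF is large as unsigned
    }
}
```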
> (I'd assume the garbage collector only deals with objects and not primitives, but still deletes all the primitives in abandoned objects, right?)
Correct. A field of an object is part of the object. It goes away when the object is garbage collected. Likewise the cells of an array go away when the array is collected. When the field or cell type is a primitive type, then the value is stored in the field / cell ... which is part of the object / array ... and that has been deleted.
That depends on the implementation of the JVM, as well as the underlying hardware. Most modern hardware will not fetch single bytes from memory (or even from the first-level cache); i.e., using the smaller primitive types generally does not reduce memory bandwidth consumption. Likewise, modern CPUs have a word size of 64 bits. They can perform operations on fewer bits, but that works by discarding the extra bits, which isn't faster either.
The only benefit is that smaller primitive types can result in a more compact memory layout, most notably when using arrays. This saves memory, which can improve locality of reference (thus reducing the number of cache misses) and reduce garbage collection overhead.
Generally speaking however, using the smaller primitive types is not faster.
To demonstrate that, behold the following benchmark:
```java
import java.math.BigDecimal;

public class Benchmark {

    public static void benchmark(String label, Code code) {
        print(25, label);
        try {
            // detect reasonable iteration count and warm up the code under test
            for (int iterations = 1; ; iterations *= 2) {
                System.gc(); // clean up previous runs, so we don't benchmark their cleanup
                long previouslyUsedMemory = usedMemory();
                long start = System.nanoTime();
                code.execute(iterations);
                long duration = System.nanoTime() - start;
                long memoryUsed = usedMemory() - previouslyUsedMemory;

                if (iterations > 1E8 || duration > 1E9) {
                    print(25, new BigDecimal(duration * 1000 / iterations).movePointLeft(3) + " ns / iteration");
                    print(30, new BigDecimal(memoryUsed * 1000 / iterations).movePointLeft(3) + " bytes / iteration\n");
                    return;
                }
            }
        } catch (Throwable e) {
            throw new RuntimeException(e);
        }
    }

    private static void print(int desiredLength, String message) {
        System.out.print(" ".repeat(Math.max(1, desiredLength - message.length())) + message);
    }

    private static long usedMemory() {
        return Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
    }

    @FunctionalInterface
    interface Code {
        /**
         * Executes the code under test.
         *
         * @param iterations number of iterations to perform
         * @return any value that requires the entire code to be executed (to
         *         prevent dead code elimination by the just in time compiler)
         * @throws Throwable if the test could not complete successfully
         */
        Object execute(int iterations) throws Throwable;
    }

    public static void main(String[] args) {
        benchmark("long[] traversal", (iterations) -> {
            long[] array = new long[iterations];
            for (int i = 0; i < iterations; i++) {
                array[i] = i;
            }
            return array;
        });
        benchmark("int[] traversal", (iterations) -> {
            int[] array = new int[iterations];
            for (int i = 0; i < iterations; i++) {
                array[i] = i;
            }
            return array;
        });
        benchmark("short[] traversal", (iterations) -> {
            short[] array = new short[iterations];
            for (int i = 0; i < iterations; i++) {
                array[i] = (short) i;
            }
            return array;
        });
        benchmark("byte[] traversal", (iterations) -> {
            byte[] array = new byte[iterations];
            for (int i = 0; i < iterations; i++) {
                array[i] = (byte) i;
            }
            return array;
        });
        benchmark("long fields", (iterations) -> {
            class C {
                long a = 1;
                long b = 2;
            }
            C[] array = new C[iterations];
            for (int i = 0; i < iterations; i++) {
                array[i] = new C();
            }
            return array;
        });
        benchmark("int fields", (iterations) -> {
            class C {
                int a = 1;
                int b = 2;
            }
            C[] array = new C[iterations];
            for (int i = 0; i < iterations; i++) {
                array[i] = new C();
            }
            return array;
        });
        benchmark("short fields", (iterations) -> {
            class C {
                short a = 1;
                short b = 2;
            }
            C[] array = new C[iterations];
            for (int i = 0; i < iterations; i++) {
                array[i] = new C();
            }
            return array;
        });
        benchmark("byte fields", (iterations) -> {
            class C {
                byte a = 1;
                byte b = 2;
            }
            C[] array = new C[iterations];
            for (int i = 0; i < iterations; i++) {
                array[i] = new C();
            }
            return array;
        });
        benchmark("long multiplication", (iterations) -> {
            long result = 1;
            for (int i = 0; i < iterations; i++) {
                result *= 3;
            }
            return result;
        });
        benchmark("int multiplication", (iterations) -> {
            int result = 1;
            for (int i = 0; i < iterations; i++) {
                result *= 3;
            }
            return result;
        });
        benchmark("short multiplication", (iterations) -> {
            short result = 1;
            for (int i = 0; i < iterations; i++) {
                result *= 3;
            }
            return result;
        });
        benchmark("byte multiplication", (iterations) -> {
            byte result = 1;
            for (int i = 0; i < iterations; i++) {
                result *= 3;
            }
            return result;
        });
    }
}
```
Run with OpenJDK 14 on my Intel Core i7 CPU @ 3.5 GHz, this prints:
```
    long[] traversal     3.206 ns / iteration     8.007 bytes / iteration
     int[] traversal     1.557 ns / iteration     4.007 bytes / iteration
   short[] traversal     0.881 ns / iteration     2.007 bytes / iteration
    byte[] traversal     0.584 ns / iteration     1.007 bytes / iteration
         long fields    25.485 ns / iteration    36.359 bytes / iteration
          int fields    23.126 ns / iteration    28.304 bytes / iteration
        short fields    21.717 ns / iteration    20.296 bytes / iteration
         byte fields    21.767 ns / iteration    20.273 bytes / iteration
 long multiplication     0.538 ns / iteration     0.000 bytes / iteration
  int multiplication     0.526 ns / iteration     0.000 bytes / iteration
short multiplication     0.786 ns / iteration     0.000 bytes / iteration
 byte multiplication     0.784 ns / iteration     0.000 bytes / iteration
```
As you can see, the only significant speed savings occur when traversing large arrays; using smaller object fields yields negligible benefit, and computations are actually slightly slower on the small datatypes.
Overall, the performance differences are quite minor. Optimizing algorithms is far more important than the choice of primitive type.