I want to create a new array of objects by joining two smaller arrays.
Neither array can be null, but either may have length 0.
I can't choose between these two approaches: are they equivalent, or is one more efficient (for example, does System.arraycopy() copy whole memory chunks)?
MyObject[] things = new MyObject[publicThings.length + privateThings.length];
System.arraycopy(publicThings, 0, things, 0, publicThings.length);
System.arraycopy(privateThings, 0, things, publicThings.length, privateThings.length);
or
MyObject[] things = new MyObject[publicThings.length + privateThings.length];
for (int i = 0; i < things.length; i++) {
    if (i < publicThings.length) {
        things[i] = publicThings[i];
    } else {
        things[i] = privateThings[i - publicThings.length];
    }
}
Is the only difference the look of the code?
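For completeness, here is a third equivalent sketch. The generic `concat` helper is my own illustration, not something from the question; it uses `Arrays.copyOf` to allocate the result and copy the first segment in one step, then fills in the second segment with `System.arraycopy`:

```java
import java.util.Arrays;

public class ConcatDemo {
    // Illustrative helper: concatenates two non-null arrays of the same type.
    static <T> T[] concat(T[] first, T[] second) {
        // Arrays.copyOf allocates a new array of the combined length and
        // copies the first segment; arraycopy fills in the second segment.
        T[] result = Arrays.copyOf(first, first.length + second.length);
        System.arraycopy(second, 0, result, first.length, second.length);
        return result;
    }

    public static void main(String[] args) {
        String[] publicThings = {"a", "b"};
        String[] privateThings = {"c"};
        System.out.println(Arrays.toString(concat(publicThings, privateThings)));
        // prints [a, b, c]
    }
}
```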
EDIT: Thanks for the linked question, but the discussion there seems unresolved:
Is it truly faster only for native array types (byte[], Object[], char[])? In all other cases a type check is executed for each element, which would be my case, so the two approaches would be equivalent... no?
Another linked question says that size matters a lot: for sizes above 24, System.arraycopy() wins, while for fewer than 10 elements a manual for loop is better.
Now I'm really confused.
System.arraycopy() is a native call, which is most certainly faster.
Note that System.arraycopy() does a shallow copy: applied to a non-primitive array, it copies object references, so afterwards both arrays refer to the same objects. Also, System.arraycopy() only copies values from the source array into an existing destination array, while Arrays.copyOf() creates a new array and, if necessary, truncates or pads the content.
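To make the shallow-copy point concrete, here is a minimal sketch (the `Box` holder class is made up for illustration):

```java
import java.util.Arrays;

public class ShallowCopyDemo {
    // Simple mutable holder used to show that arraycopy copies references,
    // not the objects themselves.
    static class Box {
        int value;
        Box(int v) { value = v; }
    }

    public static void main(String[] args) {
        Box[] src = { new Box(1), new Box(2) };
        Box[] dst = new Box[2];
        System.arraycopy(src, 0, dst, 0, src.length);

        // Both arrays point at the same Box instances, so a change made
        // through dst is visible through src.
        dst[0].value = 99;
        System.out.println(src[0].value); // prints 99

        // Arrays.copyOf pads with null (or 0 for primitives) when the
        // requested length exceeds the source length.
        Integer[] padded = Arrays.copyOf(new Integer[]{1, 2}, 4);
        System.out.println(Arrays.toString(padded)); // prints [1, 2, null, null]
    }
}
```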
public void testHardCopyBytes() {
    byte[] bytes = new byte[0x5000000]; // ~83 MB buffer
    byte[] out = new byte[bytes.length];
    for (int i = 0; i < out.length; i++) {
        out[i] = bytes[i];
    }
}

public void testArrayCopyBytes() {
    byte[] bytes = new byte[0x5000000]; // ~83 MB buffer
    byte[] out = new byte[bytes.length];
    System.arraycopy(bytes, 0, out, 0, out.length);
}
I know JUnit tests aren't really the best for benchmarking, but
testHardCopyBytes took 0.157s to complete
and
testArrayCopyBytes took 0.086s to complete.
I think it depends on the virtual machine, but it looks as if it copies blocks of memory instead of copying single array elements. This would absolutely increase performance.
EDIT:
It looks like System.arraycopy's performance is all over the place. When Strings are used instead of bytes, and the arrays are small (size 10), I get these results:
String HC: 60306 ns
String AC:  4812 ns
byte HC:    4490 ns
byte AC:    9945 ns
Here is what it looks like when arrays are at size 0x1000000. It looks like System.arraycopy definitely wins with larger arrays.
Strs HC:  51730575 ns
Strs AC:  24033154 ns
Bytes HC: 28521827 ns
Bytes AC:  5264961 ns
How peculiar!
Thanks, Daren, for pointing out that references copy differently. It made this a much more interesting problem!
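If you want to try this yourself without JUnit, here is a rough single-shot timing sketch (my own, not from the answer above); absolute figures will vary with JVM, warm-up, and hardware, so treat the numbers as indicative only:

```java
import java.util.Arrays;

public class CopyTiming {
    // Element-by-element copy into a fresh array.
    static byte[] manualCopy(byte[] src) {
        byte[] out = new byte[src.length];
        for (int i = 0; i < out.length; i++) {
            out[i] = src[i];
        }
        return out;
    }

    // Bulk copy into a fresh array via the native arraycopy call.
    static byte[] fastCopy(byte[] src) {
        byte[] out = new byte[src.length];
        System.arraycopy(src, 0, out, 0, out.length);
        return out;
    }

    public static void main(String[] args) {
        byte[] bytes = new byte[0x1000000]; // ~16 MB buffer

        long t0 = System.nanoTime();
        manualCopy(bytes);
        long loopNs = System.nanoTime() - t0;

        long t1 = System.nanoTime();
        fastCopy(bytes);
        long arraycopyNs = System.nanoTime() - t1;

        System.out.println("manual loop : " + loopNs + " ns");
        System.out.println("arraycopy   : " + arraycopyNs + " ns");
    }
}
```

For serious measurements a harness such as JMH, which handles warm-up and dead-code elimination, is the better tool.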