I'm wondering how much memory an array occupies. My guess was size * sizeof(item) + sizeof(pointer), but how many bytes does the runtime actually allocate for the array object itself so it can be referenced?
The overhead of a .NET array, in bytes, is:
Architecture | Value Type Array | Reference Type Array
------------ | ---------------- | --------------------
x86          | 12               | 16
x64          | 24               | 32
You can calculate these values with the following program:
using System;

class Test
{
    const int Size = 100000;

    static void Main()
    {
        Console.WriteLine("Running at {0} bits", IntPtr.Size * 8);
        Tester<string>(); // reference type array
        Tester<double>(); // value type array
        Console.ReadKey();
    }

    static void Tester<T>()
    {
        // Hold every allocated array in a reachable slot so the GC can't
        // collect them between the two measurements.
        var array = new object[Size];
        long initialMemory = GC.GetTotalMemory(true);
        for (int i = 0; i < Size; i++)
        {
            array[i] = new T[0]; // an empty array is pure per-array overhead
        }
        long finalMemory = GC.GetTotalMemory(true);
        GC.KeepAlive(array);

        long total = finalMemory - initialMemory;
        Console.WriteLine("Size of each {0}[]: {1:0.000} bytes",
            typeof(T).Name, (double)total / Size);
    }
}
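On a 64-bit run the output should look roughly like this (the per-array values correspond to the x64 row of the table above):

Running at 64 bits
Size of each String[]: 32.000 bytes
Size of each Double[]: 24.000 bytes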
This code is a modified version of the one from Overhead of a .NET array?
Note that you have to run it once as a 32-bit process and once as a 64-bit process to get both columns of the table.
To this overhead you have to add the elements themselves (size * sizeof(element)), plus at least one reference to the array, which you'll need in order to use it (IntPtr.Size).
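As a worked example (a minimal sketch, not part of the original answer), here is how the formula adds up for a hypothetical double[100000] on x64:

using System;

class SizeEstimate
{
    static void Main()
    {
        // Assumed example: a double[100000] in a 64-bit process.
        const int size = 100000;
        int overhead = IntPtr.Size == 8 ? 24 : 12;   // value-type array overhead from the table
        long elements = (long)size * sizeof(double); // size * sizeof(element) = 800000
        int reference = IntPtr.Size;                 // one reference to the array
        Console.WriteLine(overhead + elements + reference); // prints 800032 on x64
    }
}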
Note that I've observed some inconsistencies. If I create double[1] arrays (a single double each), every one of them is aligned on an 8-byte boundary, yet the space used appears to be only 20 bytes/array at 32 bits (12 + sizeof(double)). That is clearly impossible, because 20 isn't divisible by 8; I think GC.GetTotalMemory is "ignoring" the hole between objects, so there may be an additional overhead of a few bytes per array, depending on the element type. For byte[1] the measured size is 16 bytes/array at 32 bits (12 + sizeof(byte) + 3 bytes of padding), which seems more plausible.
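A minimal sketch to reproduce this observation, assuming the same measurement technique as above but with single-element arrays instead of empty ones:

using System;

class AlignmentTest
{
    const int Size = 100000;

    static void Main()
    {
        Measure("double[1]", () => new double[1]); // reported ~20 bytes/array at 32 bits
        Measure("byte[1]", () => new byte[1]);     // reported ~16 bytes/array at 32 bits
    }

    static void Measure(string label, Func<object> factory)
    {
        // Same pattern as Tester<T> above: keep the arrays reachable
        // between the two GC.GetTotalMemory calls.
        var array = new object[Size];
        long initialMemory = GC.GetTotalMemory(true);
        for (int i = 0; i < Size; i++)
        {
            array[i] = factory();
        }
        long finalMemory = GC.GetTotalMemory(true);
        GC.KeepAlive(array);
        Console.WriteLine("Size of each {0}: {1:0.000} bytes",
            label, (finalMemory - initialMemory) / (double)Size);
    }
}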