I'm playing around with BenchmarkDotNet and its MemoryDiagnoser feature.
Consider the following benchmark:
```csharp
[Benchmark]
public void Dummy()
{
    var buffer = new byte[1];
}
```
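For reference, a complete program along these lines looks roughly like the sketch below; the DummyBenchmarks/Program class names and the runner boilerplate are illustrative, not the exact code behind the results:

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// [MemoryDiagnoser] is what adds the Gen 0/1/2 and Allocated columns to the results.
[MemoryDiagnoser]
public class DummyBenchmarks
{
    [Benchmark]
    public void Dummy()
    {
        var buffer = new byte[1];
    }
}

public class Program
{
    public static void Main(string[] args) => BenchmarkRunner.Run<DummyBenchmarks>();
}
```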
I expect it to allocate exactly 1 byte.
But the benchmark result shows that a total of 32 bytes were allocated. How come? I find this quite misleading.
| Method | Mean | Error | StdDev | Median | Ratio | Rank | Gen 0 | Gen 1 | Gen 2 | Allocated |
|------- |---------:|----------:|----------:|---------:|------:|-----:|-------:|------:|------:|----------:|
| Dummy | 4.486 ns | 0.1762 ns | 0.5196 ns | 4.650 ns | 1.00 | 1 | 0.0038 | - | - | 32 B |
Why is the Allocated column showing 32 B instead of 1 B?
I am the author of MemoryDiagnoser, and I've described how to read the results on my blog: https://adamsitnik.com/the-new-Memory-Diagnoser/#how-to-read-the-results
The CLR does some alignment: if you try to allocate a new byte[1] array, it actually allocates a byte[8] array. On top of that, extra space is needed for the object header, the method table pointer, and the length of the array. That overhead is 3 × pointer size, so the total is 8 + 3 × 4 = 20 bytes on 32-bit and 8 + 3 × 8 = 32 bytes on 64-bit.
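If you want to see the same number outside BenchmarkDotNet, here is a rough sketch using GC.GetAllocatedBytesForCurrentThread (assuming a 64-bit .NET Core 3.0+ runtime; the class and variable names are just placeholders). It should print 32:

```csharp
using System;

public class AllocationSizeDemo
{
    public static void Main()
    {
        long before = GC.GetAllocatedBytesForCurrentThread();
        var buffer = new byte[1];
        long after = GC.GetAllocatedBytesForCurrentThread();

        GC.KeepAlive(buffer); // keep the array reachable so the allocation is not removed

        // On a 64-bit runtime this typically prints 32:
        // 3 pointer-sized fields of overhead plus the data padded up to 8 bytes.
        Console.WriteLine($"new byte[1] allocated {after - before} bytes");
    }
}
```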