The Chrome docs say that retained size is "the size of memory that is freed once the object itself is deleted along with its dependent objects that were made unreachable from GC roots", which is fair enough. However, even for simple objects, the retained size is often 3x the shallow size. I understand that V8 needs to store a reference to the hidden class (shape), probably some data for the GC, and so on, but sometimes objects have hundreds of extra "retained" bytes, which becomes a problem when you need millions of such objects. Let's take a look at a simple example:
class TestObject {
    constructor( x, y, z ) {
        this.x = x;
        this.y = y;
        this.z = z;
    }
}

window.arr = [];
for ( let i = 0; i < 100000; i++ ) {
    window.arr.push( new TestObject( Math.random(), Math.random(), Math.random() ) );
}
Here's the memory snapshot:
Shallow size is 24 bytes, which perfectly matches the fact that we're storing 3 x 8-byte doubles. The "extra" size is 36 bytes, which is enough to store 9 x 4-byte pointers (assuming pointer compression is on). If we add three extra properties, the extra size grows to 72 (!) bytes, so it depends on the number of properties. What is being stored there? Is it possible to avoid such massive memory overhead?
V8 developer here.
Shallow size is the object itself, consisting of the standard object header (3 pointers) and 3 in-object properties, which are again pointers. That's 6 (compressed) pointers of 4 bytes each = 24 bytes.
Additional retained size is the storage for the three properties. Each of them is a "HeapNumber", consisting of a 4-byte map pointer plus an 8-byte payload. So that's 3 properties times 12 bytes = 36 bytes. (Armed with this knowledge, it shouldn't be surprising that with another three properties, which presumably are also numbers, this doubles to 72.)
Added up, each object occupies a total of 24+36 = 60 bytes.
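For reference, here's a back-of-the-envelope sketch of that arithmetic in code; the constant names are purely illustrative (not V8 internals), and exact numbers can vary between V8 versions:

// Rough per-object accounting, assuming pointer compression (4-byte tagged fields).
const taggedFieldSize = 4;   // one compressed pointer / tagged slot
const headerFields = 3;      // map, properties, elements pointers
const inObjectProps = 3;     // the x, y, z slots

const shallowSize = ( headerFields + inObjectProps ) * taggedFieldSize; // 24 bytes

const heapNumberSize = 4 + 8; // map pointer + float64 payload
const retainedNumbers = inObjectProps * heapNumberSize;                 // 36 bytes

console.log( shallowSize + retainedNumbers ); // 60 bytes per TestObject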
Map and prototype don't count toward each object's retained size because they are shared by all objects, so freeing one object wouldn't allow them to be freed as well.
One idea to save memory (if you feel that it is important) is to "transpose" your data organization: instead of 1 array containing 100,000 objects with 3 numbers each, you could have 1 object containing 3 arrays with 100,000 numbers each. Depending on your use case, this may or may not be a feasible approach: if the triples of numbers come and go a lot, then storing them in a single huge array would be unpleasant; whereas if it's a static data set, then both models might be fairly equivalent in usability. If you did this, you'd avoid the repeated per-object overhead; additionally, arrays can store double numbers inline (as long as the entire array contains only numbers), so you'd be able to store the same 300K numbers with only about 2.4 MB of total memory consumption.
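A minimal sketch of that transposed ("structure of arrays") layout, assuming a fairly static data set; the name positions is just illustrative:

// One object holding three plain arrays of doubles. As long as each array
// contains only numbers, V8 keeps the float64 values inline in the array's
// backing store instead of allocating a HeapNumber per value.
const positions = { x: [], y: [], z: [] };
for ( let i = 0; i < 100000; i++ ) {
    positions.x.push( Math.random() );
    positions.y.push( Math.random() );
    positions.z.push( Math.random() );
}

// Roughly 3 arrays x 100,000 x 8 bytes ≈ 2.4 MB, versus about
// 60 bytes x 100,000 ≈ 6 MB for the object-per-triple version.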
If you try replacing the 3-property objects with many small TypedArrays, you'll see a significant increase in memory usage, because TypedArrays have much bigger per-object overhead than simple objects. They are geared towards having a few large arrays, not many small ones.
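Purely to illustrate what that warning is about (this is the layout to avoid, not a recommendation), the anti-pattern would look like this:

// Anti-pattern: 100,000 tiny Float64Arrays. Each one is a full typed-array
// object with its own backing store, so the per-object overhead dwarfs the
// 24 bytes of actual payload.
window.arrTyped = [];
for ( let i = 0; i < 100000; i++ ) {
    window.arrTyped.push( Float64Array.of( Math.random(), Math.random(), Math.random() ) );
}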