
Are Java HashMap.clear() and remove() memory-effective?

Consider the following HashMap.clear() code:

    /**
     * Removes all of the mappings from this map.
     * The map will be empty after this call returns.
     */
    public void clear() {
        modCount++;
        Entry[] tab = table;
        for (int i = 0; i < tab.length; i++)
            tab[i] = null;
        size = 0;
    }

It seems that the internal array (table) of Entry objects is never shrunk. So when I add 10000 elements to a map and then call map.clear(), it keeps 10000 nulls in its internal array. My question is: how does the JVM handle this array of nothing, and is HashMap therefore memory-effective?
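To make this concrete, here is a small reflection-based sketch I would expect to show the effect (the field name table matches the OpenJDK source above; on Java 9+ the reflective access may need --add-opens java.base/java.util=ALL-UNNAMED):

    import java.lang.reflect.Field;
    import java.util.HashMap;

    public class ClearDemo {
        public static void main(String[] args) throws Exception {
            HashMap<Integer, String> map = new HashMap<>();
            for (int i = 0; i < 10_000; i++) {
                map.put(i, "value" + i);
            }

            // Peek at the internal bucket array (named "table" in OpenJDK).
            Field tableField = HashMap.class.getDeclaredField("table");
            tableField.setAccessible(true);

            int before = ((Object[]) tableField.get(map)).length;
            map.clear();
            int after = ((Object[]) tableField.get(map)).length;

            // Both numbers come out the same: clear() nulls the slots
            // but never shrinks the array.
            System.out.println("capacity before clear: " + before);
            System.out.println("capacity after clear:  " + after);
        }
    }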

asked May 11 '10 by Illarion Kovalchuk


People also ask

Do we need to clear HashMap in Java?

If all you want to do is discard the data in the Map, then you need not (and in fact should not) call clear() on it, but simply clear all references to the Map itself, in which case it will be garbage collected eventually.

Is HashMap memory efficient?

The HashMap will most likely need more memory, even if you only store a few elements. That said, the memory footprint should not be a concern: you only need the data structure for as long as you are counting, and it will be garbage collected afterwards anyway.

Do Hashmaps take up a lot of memory?

A HashMap.Entry is 24 bytes, not 16, for example. For many cases, this adds up to an enormous amount of memory wasted. For example, a HashMap<Integer, Double> needs about 100 bytes per stored value due to boxing, with 12 bytes of actual data and 88 bytes of overhead.
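If you want to check figures like these on your own JVM, one option is a rough sketch using the OpenJDK JOL tool (this assumes the org.openjdk.jol:jol-core dependency; the exact numbers vary with JVM version and compressed-oops settings):

    import java.util.HashMap;
    import java.util.Map;

    import org.openjdk.jol.info.GraphLayout;

    public class MapFootprint {
        public static void main(String[] args) {
            Map<Integer, Double> map = new HashMap<>();
            for (int i = 0; i < 10_000; i++) {
                map.put(i, (double) i);
            }

            // Retained size of the map's object graph: the bucket array, the
            // entry objects, and the boxed Integer/Double keys and values.
            long totalBytes = GraphLayout.parseInstance(map).totalSize();
            System.out.printf("~%d bytes per entry%n", totalBytes / map.size());
        }
    }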

Which map is more efficient in Java?

There is an alternative called AirConcurrentMap that is more memory-efficient above 1K entries than any other Map I have found. It is faster than ConcurrentSkipListMap for key-based operations, faster than any Map for iterations, and has an internal thread pool for parallel scans.


2 Answers

The idea is that clear() is only called when you want to re-use the HashMap. Reusing an object should only be done for the same purpose it served before, so chances are you'll end up with roughly the same number of entries. To avoid pointless shrinking and resizing of the Map, the capacity is kept the same when clear() is called.

If all you want to do is discard the data in the Map, then you need not (and in fact should not) call clear() on it, but simply clear all references to the Map itself, in which case it will be garbage collected eventually.
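A minimal sketch of the two patterns described above (the batch data is made up purely for illustration; List.of needs Java 9+):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class ReuseVsDiscard {
        public static void main(String[] args) {
            List<List<String>> batches = List.of(
                    List.of("a", "b", "a"),
                    List.of("c", "c", "b"));

            // Reuse pattern: clear() keeps the capacity, so a map that is
            // refilled to roughly the same size avoids repeated resizing.
            Map<String, Integer> counts = new HashMap<>();
            for (List<String> batch : batches) {
                counts.clear();                       // capacity retained
                for (String s : batch) {
                    counts.merge(s, 1, Integer::sum);
                }
                System.out.println(counts);
            }

            // Discard pattern: when the data is no longer needed at all,
            // drop the reference and let the whole map be garbage collected.
            counts = null;
        }
    }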

answered Sep 19 '22 by Joachim Sauer


Looking at the source code, it does look like HashMap never shrinks. The resize method is called to double the capacity whenever required, but there is nothing along the lines of ArrayList.trimToSize().

If you're using a HashMap in such a way that it repeatedly grows and shrinks dramatically, you may want to just create a new HashMap instead of calling clear(), as sketched below.
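For example (a sketch; the cache name is just for illustration): since there is no trimToSize(), replacing the map is the practical way to give an oversized bucket array back to the garbage collector.

    import java.util.HashMap;
    import java.util.Map;

    public class FreshMapInsteadOfClear {
        public static void main(String[] args) {
            Map<Integer, String> cache = new HashMap<>();
            for (int i = 0; i < 10_000; i++) {
                cache.put(i, "v" + i);
            }

            // cache.clear() would keep a bucket array sized for 10,000 entries.
            // Assigning a fresh map releases that array for garbage collection.
            cache = new HashMap<>();

            // If you need to keep the current entries but drop excess capacity,
            // copying into a new map sizes the table for what is actually there.
            cache.put(1, "v1");
            cache = new HashMap<>(cache);
        }
    }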

answered Sep 20 '22 by polygenelubricants