 

Scala compiler optimization for immutability

Does the Scala compiler optimize memory usage by removing references to vals that are used only once within a block?

Imagine an object holding, in aggregate, some huge amount of data - large enough that cloning the data or derivatives of it may well exhaust the maximum memory available to the JVM/machine.

A minimal code example, but imagine a longer chain of data transforms:

val huge: HugeObjectType                    // pseudocode: some very large in-memory structure
val derivative1 = huge.map(_.x)             // a transform over the whole thing
val derivative2 = derivative1.groupBy(....)

Will the compiler, for example, leave huge eligible for garbage collection as soon as derivative1 has been computed, or will it keep it alive until the enclosing block is exited?

Immutability is nice in theory, and I personally find it addictive. But for big data objects that cannot be stream-processed item by item on current-day operating systems, I would claim it is inherently impedance-mismatched with reasonable memory utilization in a big data application on the JVM - unless compilers optimize for cases such as this one.

asked Nov 22 '15 by matanster

1 Answer

First of all: the actual freeing of unused memory happens whenever the JVM GC deems it necessary. So there is nothing scalac can do about this.

The only thing that scalac could do would be to set references to null not just when they go out of scope, but as soon as they are no longer used.

Basically

val huge: HugeObjectType
val derivative1 = huge.map(_.x)
huge = null // inserted by scalac (conceptually, in the bytecode - a val cannot be reassigned in source)
val derivative2 = derivative1.groupBy(....)
derivative1 = null // inserted by scalac

According to this thread on scala-internals, scalac currently does not do this, and the latest HotSpot JVM does not come to the rescue either. See the post by scalac hacker Grzegorz Kossakowski and the rest of that thread.

For a method that is fully optimised by the JVM JIT compiler, the JIT will null out references as soon as they are no longer used. However, a main method that is executed only once is never fully optimised by the JVM, so references held in it typically stay alive until the method exits.
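As an illustration of that scoping point (a hedged sketch, not from the original answer; Record and the sizes are invented), moving the transform chain out of a long-lived main method into a helper means that huge and derivative1 disappear with the helper's stack frame as soon as it returns:

case class Record(x: Int)

object ScopedPipeline {
  def process(huge: Vector[Record]): Map[Int, Vector[Int]] = {
    val derivative1 = huge.map(_.x)   // huge is still referenced by this frame
    derivative1.groupBy(_ % 10)       // returned; huge and derivative1 die with the frame on return
  }

  def main(args: Array[String]): Unit = {
    // main itself never references huge or derivative1, only the returned result
    val result = process(Vector.tabulate(1000000)(Record(_)))
    println(result.size)
  }
}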

The thread linked above contains a pretty detailed discussion of the topic and all the tradeoffs.

Note that in typical big data computing frameworks such as Apache Spark, the values you work with are not direct references to the data, so in those frameworks the lifetime of references is usually not a problem.
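A minimal sketch of that point (assuming a local Spark setup; the data and transforms are invented for illustration): a val bound to an RDD is only a lineage description, not a materialized collection, so holding on to it does not pin the underlying data in memory.

import org.apache.spark.sql.SparkSession

object SparkLaziness {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("laziness-sketch").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    val huge = sc.parallelize(1 to 1000000)       // an RDD: a description of the data, not the data itself
    val derivative1 = huge.map(_ * 2)             // merely extends the lineage, nothing is computed yet
    val derivative2 = derivative1.groupBy(_ % 10) // still only lineage

    println(derivative2.count())                  // work (and memory use) happens only at this action
    spark.stop()
  }
}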

For the example given above, all intermediate values are used exactly once, so an easy solution is to simply define all intermediate results as defs.

def huge: HugeObjectType                          // re-evaluated on each use, never stored in a field
def derivative1 = huge.map(_.x)                   // computed only when referenced
def derivative2 = derivative1.groupBy(....)
val result = derivative2.<some other transform>   // only the final result is held in a val
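A concrete, runnable variant of that sketch (the element type and transforms are invented for illustration):

object DefPipeline extends App {
  def huge: Vector[Int] = Vector.tabulate(1000000)(identity)           // rebuilt on each access, never held in a field
  def derivative1: Vector[Int] = huge.map(_ * 2)                       // evaluated only when referenced
  def derivative2: Map[Int, Vector[Int]] = derivative1.groupBy(_ % 10)

  val result = derivative2.size // only the final, small result is bound to a val
  println(result)
}

Note that a def re-evaluates its body on every access, so this trick only pays off when each intermediate result is referenced exactly once, as in the example above.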

A different yet very potent approach is to use iterators. Chaining functions like map and filter over an iterator processes the elements item by item, so no intermediate collections are ever materialized - which fits the scenario very well. This will not help with functions like groupBy, but it can significantly reduce memory allocation for the former kind of functions and similar ones. Credits to Simon Schafer from the scala-internals thread mentioned above.
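A minimal sketch of the iterator approach (the source and transforms are invented for illustration):

object IteratorPipeline extends App {
  def hugeSource: Iterator[Long] = Iterator.range(0, 10000000).map(_.toLong) // elements produced lazily, one at a time

  val result = hugeSource
    .map(_ * 2)          // Iterator.map is lazy: no intermediate collection is built
    .filter(_ % 3 == 0)  // still lazy, element by element
    .sum                 // elements are consumed only here, and only the running total is kept

  println(result)
}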

answered Sep 20 '22 by Rüdiger Klaehn