Both Java and Scala introduce their own global ForkJoinPool: Java as java.util.concurrent.ForkJoinPool#commonPool and Scala as scala.concurrent.ExecutionContext#global.
Both of these appear to be intended for the same use case: running non-blocking concurrent tasks (often implicitly).
As far as I can tell, if you pick your interop dependencies the wrong way, you end up with two thread pools doing exactly the same thing, one for the Java world and one for the Scala world.
So unless I am missing something obvious, is there any good reason for Scala not to simply use Java's commonPool as its global ExecutionContext?
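To illustrate the duplication the question describes, here is a minimal sketch showing that both shared pools exist side by side in a mixed Java/Scala process by default:

```scala
import java.util.concurrent.ForkJoinPool
import scala.concurrent.ExecutionContext

// Java's shared pool (default parallelism: availableProcessors - 1)
val javaPool: ForkJoinPool = ForkJoinPool.commonPool()

// Scala's shared ExecutionContext; by default it is backed by its own
// fork/join executor rather than by commonPool
val scalaEc: ExecutionContext = ExecutionContext.global

println(javaPool)
println(scalaEc)
```

Submitting work through `scalaEc` will not run on `javaPool`'s worker threads, which is the two-pools situation described above.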
The ForkJoinPool class is the center of the fork/join framework, which is an implementation of the ExecutorService interface.
However, if you intend to run blocking tasks on commonPool, there are consequences to consider: a fork/join pool only compensates for blocked workers when the task signals that it is blocking (via ForkJoinPool.ManagedBlocker on the Java side, or scala.concurrent.blocking on Scala's global pool); otherwise blocking calls can starve the pool.
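On the Scala side, the idiomatic way to signal blocking is the `scala.concurrent.blocking` wrapper, which lets the global pool's BlockContext spin up compensating threads. A minimal sketch (the `Thread.sleep` is a stand-in for real blocking I/O):

```scala
import scala.concurrent.{Await, ExecutionContext, Future, blocking}
import scala.concurrent.duration._
import ExecutionContext.Implicits.global

val f = Future {
  // Without `blocking`, a burst of sleeping tasks could occupy every
  // worker thread; with it, the pool can compensate for the blocked worker.
  blocking {
    Thread.sleep(100) // stand-in for a blocking call
  }
  42
}

val result = Await.result(f, 5.seconds)
```

commonPool offers no equivalent of this implicit mechanism; blocking code submitted to it must implement ForkJoinPool.ManagedBlocker explicitly.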
To add to the other answers: besides the JVM-version issue, relying on a JVM-specific implementation would bind the Scala API to Java internals. Even if that wasn't a goal initially, the Scala community now targets more than one backend: Scala on the JVM, Scala.js, and Scala Native. Coupling the standard library to a JVM API would make code less portable for no good reason. After all, ExecutionContext on the JVM still uses Java's thread pool implementations internally, so it's not as if the wheel is being reinvented.
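Note that nothing stops you from opting in to a single shared pool yourself. A minimal sketch, assuming you are happy to tie your code to the JVM, wraps Java's commonPool in an ExecutionContext via the standard `ExecutionContext.fromExecutor`:

```scala
import java.util.concurrent.ForkJoinPool
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

// Explicitly back Scala Futures with Java's commonPool, so both
// worlds share one set of worker threads.
implicit val ec: ExecutionContext =
  ExecutionContext.fromExecutor(ForkJoinPool.commonPool())

val f = Future(1 + 1)
val result = Await.result(f, 5.seconds)
```

This keeps the coupling decision in application code, where it belongs, instead of baking it into the standard library.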