 

What is the smallest unit of work that is sensible to parallelize with actors?

Tags: scala, akka, actor

While Scala actors are described as lightweight, and Akka actors even more so, there is obviously some overhead to using them.

So my question is: what is the smallest unit of work that is worth parallelising with actors (assuming it can be parallelised)? Is it only worth it if there is some potential latency or a lot of heavy calculation involved?

I'm looking for a general rule of thumb that I can easily apply in my everyday work.

EDIT: The answers so far have made me realise that what I'm interested in is perhaps actually the inverse of the question that I originally asked. So:

Assuming that structuring my program with actors is a very good fit, and therefore incurs no extra development overhead (or even less development overhead than a non-actor implementation would), but the units of work it performs are quite small - is there a point at which using actors would hurt performance enough that they should be avoided?

asked Apr 12 '12 by Russell
2 Answers

Whether to use actors is not primarily a question of the size of the unit of work; their main benefit is to make concurrent programs easier to get right. In exchange for this, you need to model your solution according to a different paradigm.

So, you need to decide first whether to use concurrency at all (which may be motivated by performance or by correctness) and then whether to use actors. The latter is very much a matter of taste, although with Akka 2.0 I would need good reasons not to use them, since you get distributability (up & out) essentially for free with very little overhead.

If you still want to decide the other way around, a rule of thumb from our performance tests might be that the target message processing rate should not be higher than a few million per second.

answered by Roland Kuhn
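To give that message-rate rule of thumb a concrete shape, here is a rough, hypothetical sketch (not part of the answer above) that floods a trivial classic Akka actor with messages and reports the observed rate. It assumes akka-actor on the classpath; with the Akka 2.0 of that era you would call system.shutdown() instead of terminate(). The actor names and numbers are illustrative only.

```scala
import akka.actor.{Actor, ActorSystem, Props}

// Hypothetical micro-benchmark: the work per message is a single increment,
// so the measured rate is dominated by message-passing overhead alone.
class Counter(expected: Int) extends Actor {
  private var seen  = 0
  private val start = System.nanoTime()

  def receive = {
    case _ =>
      seen += 1
      if (seen == expected) {
        val seconds = (System.nanoTime() - start) / 1e9
        println(f"$expected%d messages in $seconds%.2f s (~${expected / seconds}%.0f msg/s)")
        context.system.terminate()
      }
  }
}

object MessageRateSketch extends App {
  val system  = ActorSystem("rate-sketch")
  val total   = 1000000
  val counter = system.actorOf(Props(new Counter(total)), "counter")
  (1 to total).foreach(counter ! _) // flood the mailbox with trivial messages
}
```

If the rate such a loop sustains on your hardware is close to what your application would demand, the messaging overhead itself becomes the bottleneck, which is the situation the answer warns against.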


My rule of thumb, for everyday work, is that if a task takes milliseconds then it's potentially worth parallelizing. Although actors can sustain transaction rates well above that (the overhead is usually no more than a few tens of microseconds per message), I like to stay well away from overhead-dominated cases. Of course, a task may need to take much longer than a few milliseconds to actually be worth parallelizing: you always have to balance the time spent writing more code against the time saved running it.

answered by Rex Kerr
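As a hypothetical illustration of that trade-off (again, not from the answer itself), the sketch below batches a million sub-millisecond tasks into a handful of messages so that each message carries milliseconds of work. The class and object names, chunk size, and workload are invented for the example.

```scala
import akka.actor.{Actor, ActorSystem, Props}

// Hypothetical sketch: each individual task costs far less than the ~tens of
// microseconds of messaging overhead, so tasks are batched into chunks that
// each take milliseconds to process.
class ChunkWorker extends Actor {
  def receive = {
    case chunk: Array[Double] =>
      // milliseconds of real work per message, not microseconds
      val partial = chunk.map(x => math.sqrt(x) * math.sin(x)).sum
      println(s"processed ${chunk.length} values, partial sum = $partial")
  }
}

object BatchingSketch extends App {
  val system = ActorSystem("batching-sketch")
  val worker = system.actorOf(Props(new ChunkWorker), "worker")
  val data   = Array.tabulate(1000000)(_.toDouble)

  // ~10 messages carrying 100k values each, rather than a million messages
  // carrying one value each
  data.grouped(100000).foreach(worker ! _)
  // the actor system keeps running; call system.terminate() once done
}
```

Grouping the work is what shifts the cost balance: ten chunked messages amortise the per-message overhead, whereas a million single-value messages would be dominated by it.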