Performance problem: CPU-intensive work performs better with more concurrency in Erlang

tl;dr
My Erlang program gets better throughput when I run my CPU-intensive tasks at higher concurrency (e.g. 10K at once vs. 4). Why?


I'm writing a map-reduce framework in Erlang, and I'm doing performance tests.

My map function is highly CPU-intensive (mostly pure calculation). It also needs access to some static data, so I have a few persistent worker processes on my machine (i.e. they live through the application's life cycle), each holding part of this data in memory and awaiting map requests. The output of map is sent to the manager process (which sent out the map requests to the workers), where the (very lightweight) reduce is performed.
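Roughly, the setup looks like this (a simplified sketch with illustrative names and message shapes, not my exact code):

%% The manager fans out one map request per assigned chunk, then
%% folds the replies with the lightweight reduce step as they arrive.
run(Workers, Chunks) ->
    JobId = make_ref(),
    Pairs = lists:zip(Chunks, Workers), % one chunk per worker here
    [Worker ! {map, JobId, self(), Chunk} || {Chunk, Worker} <- Pairs],
    collect(JobId, length(Pairs), []).

collect(_JobId, 0, Acc) ->
    Acc;
collect(JobId, Remaining, Acc) ->
    receive
        {finished, JobId, _Worker, AssignedSet, normal} ->
            %% Accumulation stands in for the very lightweight reduce.
            collect(JobId, Remaining - 1, [AssignedSet | Acc])
    end.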

Anyway, I noticed that I get better throughput when each worker immediately spawns a new process for every map request it receives, rather than having the worker synchronously perform the map requests itself, one by one (which leaves a bunch of map requests sitting in its process queue, because I fire all the map requests at once).

Code snippet:

%% When I remove the comments, I get a significant performance boost (90% -> 96%)
%% spawn_link(fun() ->
        %% One invocation uses around 250 ms of CPU time
        do_map(Map, AssignedSet, Emit, Data),
        Manager ! {finished, JobId, self(), AssignedSet, normal}
%% end),
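For context, the surrounding worker loop looks roughly like this in the two variants (simplified; the message shape is illustrative, do_map/4 is my real map function):

%% Variant 1: the worker performs each map synchronously, one by one.
worker_loop_sync(Data) ->
    receive
        {map, JobId, Manager, AssignedSet, Map, Emit} ->
            do_map(Map, AssignedSet, Emit, Data),
            Manager ! {finished, JobId, self(), AssignedSet, normal},
            worker_loop_sync(Data)
    end.

%% Variant 2: the worker immediately spawns a process per request and
%% goes straight back to its mailbox.
worker_loop_spawn(Data) ->
    receive
        {map, JobId, Manager, AssignedSet, Map, Emit} ->
            spawn_link(fun() ->
                do_map(Map, AssignedSet, Emit, Data),
                Manager ! {finished, JobId, self(), AssignedSet, normal}
            end),
            worker_loop_spawn(Data)
    end.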

Compared to performing the same calculation in a tight loop, I get 96% throughput (efficiency) with the "immediately spawn" method (e.g. 10,000 map-reduce jobs running completely in parallel). With the "worker performs them one by one" method, I get only around 90%.

I understand Erlang is supposed to be good at concurrency, and I'm impressed that efficiency doesn't change even when I fire 10K map-reduce requests at once as opposed to 100! However, since I have only 4 CPU cores, I'd expect better throughput at a lower concurrency level like 4 or maybe 5.

Weirdly, my CPU usage looks very similar in the two implementations (almost completely pegged at 100% on all cores). The performance difference is quite stable: even when I run just 100 map-reduce jobs, I still get around 96% efficiency with the "immediately spawn" method and around 90% with the "one-by-one" method, and likewise when I test with 200, 500, 1000, and 10K jobs.
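For measuring this more precisely than OS-level CPU graphs, the VM's scheduler wall time can be sampled before and after a run. A minimal sketch (generic measurement code, not part of my framework):

%% Enable and sample scheduler wall time; utilization is the summed
%% active/total delta over all schedulers between two samples.
sample() ->
    erlang:system_flag(scheduler_wall_time, true),
    lists:sort(erlang:statistics(scheduler_wall_time)).

utilization(Before, After) ->
    {Active, Total} =
        lists:foldl(
            fun({{_, A0, T0}, {_, A1, T1}}, {A, T}) ->
                {A + (A1 - A0), T + (T1 - T0)}
            end,
            {0, 0},
            lists:zip(Before, After)),
    Active / Total.

%% Usage: S0 = sample(), <run the jobs>, S1 = sample(), utilization(S0, S1).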

I first suspected that queuing at the worker process was the culprit, but even when there should only be something like 25 messages in the worker's queue, I still see the lower performance. 25 messages seems far too few to cause a clog (I am doing selective message matching, but not with patterns that skip messages, so the process never has to re-scan messages left in the queue).
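To illustrate the distinction (names are illustrative): a receive bound to a specific JobId skips replies for other jobs, and every skipped message gets re-scanned on each subsequent receive, whereas a catch-all pattern never leaves anything behind:

%% Selective on JobId: messages for other jobs are skipped and re-scanned.
wait_for_job(JobId) ->
    receive
        {finished, JobId, Worker, AssignedSet, Status} ->
            {Worker, AssignedSet, Status}
    end.

%% Matches any job's reply: nothing is ever skipped over.
wait_for_any() ->
    receive
        {finished, _JobId, Worker, AssignedSet, Status} ->
            {Worker, AssignedSet, Status}
    end.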

I'm not sure how to proceed from here. Am I doing something wrong, or am I completely missing something?

UPDATE

I did some more tests and found that the performance difference can disappear depending on the conditions (particularly on how many worker processes I divide the static data across). Looks like I have much more to learn!

asked by Enno Shioji

1 Answer

Assuming 1 worker process with 3 map actions, the first variant looks like this:

  _______   _______   _______
 |   m   | |   m   | |   m   |
 |       | |       | |       |
_|       |_|       |_|       |_
a         a         a         r

where a is administrative work (reading from the message queue, dispatching the map, etc.), m is the actual map, and r is sending back the result. The second variant, where a process is spawned for every map:

  _________________._
 |   m              r
 |  ___________________._
 | |   m                r
 | |  _____________________._
_|_|_|   m                  r
a a a

As you can see, administrative tasks (a) go on at the same time as maps (m) and at the same time as results are sent back (r).

This keeps the CPU busy with map work (i.e. the calculation-intensive part) all the time, instead of taking short dips every now and then. This is most likely the source of the small gain you see in throughput.

Since you already have quite high concurrency to begin with, you only see a relatively small gain in throughput. Compare this to (theoretically) running only one worker process, as in the first variant, and you'd see much bigger gains.
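You can see that extreme with a toy benchmark along these lines (a sketch, not your framework: do_work/0 stands in for the CPU-heavy map, and the workload is made up):

-module(variants).
-export([sequential/1, spawning/1]).

%% Busy work standing in for a calculation-heavy map invocation.
do_work() ->
    lists:foldl(fun(X, Acc) -> Acc + X * X end, 0, lists:seq(1, 2000000)).

%% One process handles all jobs back to back (first variant).
sequential(N) ->
    [do_work() || _ <- lists:seq(1, N)],
    ok.

%% One process per job; the parent only collects results (second variant).
spawning(N) ->
    Parent = self(),
    [spawn_link(fun() -> do_work(), Parent ! done end) || _ <- lists:seq(1, N)],
    [receive done -> ok end || _ <- lists:seq(1, N)],
    ok.

%% Compare: timer:tc(variants, sequential, [100]) vs. timer:tc(variants, spawning, [100]).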

answered by Adam Lindberg