Parallel depth-first search in Erlang is slower than its sequential counterpart

I am trying to implement a modified parallel depth-first search algorithm in Erlang (let's call it *dfs_mod*).

All I want to get is all the 'dead-end paths', which are basically the paths that are returned when *dfs_mod* visits a vertex with no neighbours or a vertex whose neighbours have all been visited already. I save each path to ets_table1 if my custom function fun1(Path) returns true, and to ets_table2 if fun1(Path) returns false (I need to filter the resulting 'dead-end' paths with a custom filter).
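That bookkeeping step amounts to something like this (a minimal sketch, not my actual code: it assumes both tables are named, each path is stored as a single-element tuple, and fun1/1 is stubbed out):

```erlang
%% Minimal sketch, not the actual code: assumes ets_table1/ets_table2 are
%% named tables and that each path is stored as a single-element tuple.
save_dead_end(Path) ->
    case fun1(Path) of
        true  -> ets:insert(ets_table1, {Path});
        false -> ets:insert(ets_table2, {Path})
    end.

%% Stub for illustration only; the real fun1/1 is described further below.
fun1(_Path) -> true.
```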

I have implemented a sequential version of this algorithm and for some strange reason it performs better than the parallel one.

The idea behind the parallel implementation is simple:

  • visit a Vertex from [Vertex|Other_vertices] = Unvisited_neighbours;
  • add this Vertex to the current path;
  • send {self(), wait} to the 'collector' process;
  • run *dfs_mod* for the Unvisited_neighbours of the current Vertex in a new process;
  • continue running *dfs_mod* with the rest of the provided vertices (Other_vertices);
  • when there are no more vertices to visit, send {self(), done} to the collector process and terminate.

So, basically each time I visit a vertex with unvisited neighbours I spawn a new depth-first search process and then continue with the other vertices.
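In code, the spawning scheme looks roughly like this (a simplified sketch rather than my actual code; unvisited_neighbours/3 is an illustrative helper, and "visited" is taken to mean "already on the current path"):

```erlang
-module(dfs_sketch).
-export([dfs_mod/4]).

%% Sketch of the spawning scheme described above, not the original code.
dfs_mod(_Graph, [], _Path, Collector) ->
    %% No more vertices to visit: report termination to the collector.
    Collector ! {self(), done};
dfs_mod(Graph, [Vertex | Other_vertices], Path, Collector) ->
    New_path = [Vertex | Path],
    %% Tell the collector another branch is in flight...
    Collector ! {self(), wait},
    %% ...explore this vertex's unvisited neighbours in a new process...
    Unvisited = unvisited_neighbours(Graph, Vertex, New_path),
    spawn(fun() -> dfs_mod(Graph, Unvisited, New_path, Collector) end),
    %% ...and continue with the remaining vertices in this process.
    dfs_mod(Graph, Other_vertices, Path, Collector).

%% Illustrative helper: neighbours not already on the current path.
unvisited_neighbours(Graph, Vertex, Path) ->
    [V || V <- digraph:out_neighbours(Graph, Vertex),
          not lists:member(V, Path)].
```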

Right after spawning the first *dfs_mod* process, I start collecting all {Pid, wait} and {Pid, done} messages (the wait messages keep the collector waiting for all the done messages). When no message arrives within N milliseconds, the collector function returns ok.
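The collector is along these lines (again a sketch, not my exact code: it starts with a pending count of 1 for the first spawned process, and the timeout is the fallback mentioned above):

```erlang
%% Sketch of the collector: start as collect(1, N) right after the first
%% dfs_mod process is spawned. Every wait raises the pending count, every
%% done lowers it; the timeout of N milliseconds is only a fallback.
collect(0, _Timeout) ->
    ok;
collect(Pending, Timeout) ->
    receive
        {_Pid, wait} -> collect(Pending + 1, Timeout);
        {_Pid, done} -> collect(Pending - 1, Timeout)
    after Timeout ->
        ok
    end.
```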


For some reason, this parallel implementation takes anywhere from 8 to 160 seconds, while the sequential version takes just 4 seconds (the testing was done on a fully connected digraph with 5 vertices, on a machine with an Intel i5 processor).

Here are my thoughts on such a poor performance:

  • I pass the digraph Graph to each new process which runs *dfs_mod*. Maybe calling digraph:out_neighbours/2 against one digraph from many processes causes this slowness?
  • I accumulate the current path in a list and pass it to each new spawned *dfs_mod* process, maybe passing so many lists is the problem?
  • I use an ETS table to save a path each time I visit a new vertex and add it to the path. The ETS options are [bag, public, {write_concurrency, true}], but maybe I am doing something wrong? (Table creation is sketched after this list.)
  • each time I visit a new vertex and add it to the path, I check a path with a custom function fun1() (it basically checks if the path has vertices labeled with letter "n" occurring before vertices with "m" and returns true/false depending on the result). Maybe this fun1() slows things down?
  • I have tried to run *dfs_mod* without collecting done and wait messages, but htop shows a lot of Erlang activity for quite a long time after *dfs_mod* returns ok in the shell, so I do not think that the active message passing slows things down.
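For reference, the tables are created with something like this (a sketch; named_table is an assumption here so that every spawned process can refer to the tables by atom):

```erlang
%% Sketch of the table setup with the options quoted above; named_table is
%% an assumption so every spawned process can reference the tables by atom.
init_tables() ->
    ets:new(ets_table1, [bag, public, named_table, {write_concurrency, true}]),
    ets:new(ets_table2, [bag, public, named_table, {write_concurrency, true}]),
    ok.
```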

How can I make my parallel dfs_mod run faster than its sequential counterpart?

Edit: when I run the parallel *dfs_mod*, pman shows no processes at all, although htop shows that all 4 CPU threads are busy.

asked Nov 22 '11 by skanatek

1 Answer

There is no quick way to know without the code, but here's a quick list of reasons why this might fail:

  • You might be confusing parallelism and concurrency. Erlang's model is shared-nothing and aims for concurrency first (running distinct units of code independently). Parallelism is only an optimization of this (running some of those units at the same time). Usually, parallelism takes shape at a higher level: say you want to run your sorting function on 50 different structures -- you then decide to run 50 instances of the sequential sort function (see the sketch after this list).

  • You might have synchronization problems or sequential bottlenecks, effectively changing your parallel solution into a sequential one.

  • The overhead of copying data, context switching, and whatnot dwarfs the gains you get from parallelism. The former is especially true of large data sets that you break into sub data sets and then join back into a large one. The latter is especially true of highly sequential code, as seen in process ring benchmarks.
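To illustrate that higher-level parallelism, here is the usual pmap idiom (a sketch; pmap/2 is a common pattern, not a standard library function):

```erlang
%% Run a sequential function F on each structure in its own process and
%% collect the results in order. A common idiom shown for illustration,
%% not part of the standard library.
pmap(F, Structures) ->
    Parent = self(),
    Pids = [spawn(fun() -> Parent ! {self(), F(S)} end) || S <- Structures],
    [receive {Pid, Result} -> Result end || Pid <- Pids].
```

Calling pmap(fun lists:sort/1, Fifty_lists) then runs fifty sequential sorts in parallel, and the VM spreads them over the available schedulers.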

If I wanted to optimize this, I would try to reduce message passing and data copying to a minimum.
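For instance (an assumption about how the code could be restructured, not code from the question): rather than calling ets:insert/2 once per visited vertex, each branch could accumulate its dead-end paths locally and write them out in a single call when it terminates, since ets:insert/2 accepts a list of objects:

```erlang
%% Sketch: buffer dead-end paths in the process and flush them with one
%% ETS call when the branch terminates (ets:insert/2 accepts a list).
flush_paths(Table, Paths) ->
    ets:insert(Table, [{Path} || Path <- Paths]).
```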

If I were the one working on this, I would keep the sequential version. It does what it says it should do, and when it is part of a larger system, as soon as you have more processes than cores, parallelism will come from the many calls to the sort function rather than from branches within the sort function. In the long run, if it is part of a server or service, using the sequential version N times should have no more negative impact than a parallel one that ends up creating many, many more processes to do the same task, and risks overloading the system.

answered Nov 15 '22 by I GIVE TERRIBLE ADVICE