I am trying to implement a modified parallel depth-first search algorithm in Erlang (let's call it *dfs_mod*).
All I want to get is all the 'dead-end paths', which are basically the paths that are returned when *dfs_mod* visits a vertex without neighbours or a vertex whose neighbours have already been visited. I save each path to `ets_table1` if my custom function `fun1(Path)` returns `true`, and to `ets_table2` if `fun1(Path)` returns `false` (I need to filter the resulting 'dead-end' paths with some custom filter).
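In code, the saving step looks roughly like this (a simplified sketch; `save_dead_end/1` is just an illustrative name, and I assume both tables were created with the `named_table` option):

```erlang
%% Simplified sketch of the saving step; save_dead_end/1 is an
%% illustrative name and both ETS tables are assumed to be named tables.
save_dead_end(Path) ->
    case fun1(Path) of
        true  -> ets:insert(ets_table1, {Path});
        false -> ets:insert(ets_table2, {Path})
    end.
```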
I have implemented a sequential version of this algorithm and for some strange reason it performs better than the parallel one.
The idea behind the parallel implementation is simple:

1. take a `Vertex` from `[Vertex|Other_vertices] = Unvisited_neighbours` and add `Vertex` to the current path;
2. send `{self(), wait}` to the 'collector' process;
3. run *dfs_mod* for the `Unvisited_neighbours` of the current `Vertex` in a new process;
4. continue *dfs_mod* for the rest of the vertices (`Other_vertices`);
5. when there are no more vertices to visit, send `{self(), done}` to the collector process and terminate.

So, basically each time I visit a vertex with unvisited neighbours I spawn a new depth-first search process and then continue with the other vertices, roughly as sketched below.
Right after spawning the first *dfs_mod* process I start to collect all `{Pid, wait}` and `{Pid, done}` messages (a `wait` message is there to keep the collector waiting for all the `done` messages). When no message has arrived for N milliseconds, the collector function returns `ok`.
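The collector itself is essentially a receive loop with a timeout (a simplified sketch; the timeout value N is a placeholder):

```erlang
%% Simplified sketch of the collector: every wait/done message resets
%% the timeout; after N ms of silence the function returns ok.
collect_results() ->
    receive
        {_Pid, wait} -> collect_results();      % a dfs_mod process started
        {_Pid, done} -> collect_results()       % a dfs_mod process finished
    after 1000 ->                               % N milliseconds, placeholder
        ok
    end.
```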
For some reason, this parallel implementation takes from 8 to 160 seconds while the sequential version takes just 4 seconds (the testing was done on a fully connected digraph with 5 vertices on a machine with an Intel i5 processor).
Here are my thoughts on such poor performance:

- I pass the digraph `Graph` to each new process which runs *dfs_mod*. Maybe doing `digraph:out_neighbours(Graph)` against one digraph from many processes causes this slowness?
- I save the paths in ETS tables created with the options `[bag, public, {write_concurrency, true}]`, but maybe I am doing something wrong?
- every dead-end path is checked with the custom function `fun1()` (it basically checks if the path has vertices labeled with the letter "n" occurring before vertices labeled with "m" and returns `true`/`false` depending on the result; a sketch follows this list). Maybe this `fun1()` slows things down?
- I have tried to run the parallel *dfs_mod* without collecting the `done` and `wait` messages, but `htop` shows a lot of Erlang activity for quite a long time after *dfs_mod* returns `ok` in the shell, so I do not think that the active message passing slows things down.

How can I make my parallel *dfs_mod* run faster than its sequential counterpart?
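For reference, `fun1()` could be sketched like this, assuming each path element is an atom whose name starts with the relevant letter (the real label format is not shown here, so this is only an illustration):

```erlang
%% Illustrative sketch of fun1/1: true if a vertex labeled "n..." occurs
%% before a vertex labeled "m...". Assumes atom labels such as n1 or m3.
fun1(Path) ->
    fun1(Path, false).

fun1([], _SeenN) ->
    false;
fun1([V | Rest], SeenN) ->
    case atom_to_list(V) of
        [$m | _] when SeenN -> true;            % an "m" after an earlier "n"
        [$n | _]            -> fun1(Rest, true);
        _                   -> fun1(Rest, SeenN)
    end.
```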
Edit: when I run the parallel *dfs_mod*, `pman` shows no processes at all, although `htop` shows that all 4 CPU threads are busy.
There is no quick way to know without the code, but here is a quick list of reasons why this might fail:
You might be confusing parallelism and concurrency. Erlang's model is shared-nothing and aims for concurrency first (running distinct units of code independently). Parallelism is only an optimization of this (running some of those units at the same time). Usually, parallelism takes shape at a higher level: say you want to run your sorting function on 50 different structures, so you decide to run 50 instances of the sequential sort function, as sketched below.
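A common way to express that kind of call-level parallelism is a generic `pmap` (a sketch, not code from the question):

```erlang
%% Generic pmap sketch: run the sequential function F on every element
%% of List in its own process and collect the results in order.
pmap(F, List) ->
    Parent = self(),
    Refs = [begin
                Ref = make_ref(),
                spawn(fun() -> Parent ! {Ref, F(X)} end),
                Ref
            end || X <- List],
    [receive {Ref, Result} -> Result end || Ref <- Refs].
```

For example, `pmap(fun lists:sort/1, FiftyLists)` runs fifty sequential sorts in parallel, which is usually where the speedup comes from.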
You might have synchronization problems or sequential bottlenecks, effectively changing your parallel solution into a sequential one.
The overhead of copying data, context switching, and whatnot dwarfs the gains you get in terms of parallelism. The former is especially true of large data sets that you break into sub-sets and then join back into a large one. The latter is especially true of highly sequential code, as seen in the process ring benchmarks.
If I wanted to optimize this, I would try to reduce message passing and data copying to a minimum.
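For instance, one possible direction (an assumption-laden sketch, not a fix prescribed here) is to snapshot the adjacency lists into a public ETS table once, so that worker processes do cheap concurrent reads instead of carrying graph data around in messages:

```erlang
%% Sketch: precompute out-neighbours into a public ETS table with
%% read_concurrency, so many processes can read it without copying the
%% digraph structure between them. All names here are illustrative.
init_adjacency(Graph) ->
    Tab = ets:new(adjacency, [set, public, {read_concurrency, true}]),
    lists:foreach(
        fun(V) -> ets:insert(Tab, {V, digraph:out_neighbours(Graph, V)}) end,
        digraph:vertices(Graph)),
    Tab.

neighbours(Tab, V) ->
    case ets:lookup(Tab, V) of
        [{V, Neighbours}] -> Neighbours;
        []                -> []
    end.
```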
If I were the one working on this, I would keep the sequential version. It does what it says it should do, and when it is part of a larger system, as soon as you have more processes than cores, parallelism will come from the many calls to the search function rather than from branches within the search function. In the long run, if it is part of a server or service, using the sequential version N times should have no more negative impact than a parallel one that ends up creating many, many more processes to do the same task, risking overloading the system more.