Disjoint sets on Apache Spark

I'm trying to find an algorithm for computing disjoint sets (connected components / union-find) over a large amount of data with Apache Spark. The problem is the volume of data: even the raw representation of the graph's vertices doesn't fit into RAM on a single machine, and the edges don't fit into RAM either.

The source data is a text file of graph edges on HDFS, one edge per line: "id1 \t id2".

Each id is a string value, not an int.
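For reference, a minimal sketch of parsing that file into a pair RDD (assuming sc is the usual SparkContext, as in spark-shell; the HDFS path is a placeholder):

// Parse "id1 \t id2" lines into (id1, id2) string pairs.
val edges = sc.textFile("hdfs:///path/to/edges")
  .map(_.split("\t"))
  .collect { case Array(a, b) => (a.trim, b.trim) }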

The naive solution that I found is:

  1. take the RDD of edges -> [id1:id2] [id3:id4] [id1:id3]
  2. group the edges by key -> [id1:[id2;id3]] [id3:[id4]]
  3. for each group, assign the minimum id to every member -> (flatMap) [id1:id1] [id2:id1] [id3:id1] [id3:id3] [id4:id3]
  4. reverse the RDD from step 3: [id2:id1] -> [id1:id2]
  5. leftOuterJoin the RDDs from steps 3 and 4
  6. repeat from step 2 until the size of the RDD from step 3 stops changing (a rough sketch of this loop follows the list)
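A rough sketch of that loop as min-label propagation (assuming edges is the pair RDD parsed above; the loop structure and names here are illustrative, not a fixed recipe):

// Make the edge list symmetric so minimum labels can flow in both directions.
val undirected = edges.flatMap { case (a, b) => Seq((a, b), (b, a)) }.cache()

// Start with every vertex labeled by itself.
var labels = undirected.keys.distinct().map(v => (v, v))

var changed = 1L
while (changed > 0) {
  // Push each vertex's label to its neighbours and keep the minimum per vertex.
  val propagated = undirected
    .join(labels)                                    // (src, (dst, srcLabel))
    .map { case (_, (dst, srcLabel)) => (dst, srcLabel) }
    .union(labels)
    .reduceByKey((a, b) => if (a < b) a else b)

  // Stop once no vertex picks up a smaller label. In practice you'd also
  // checkpoint labels periodically to truncate the growing RDD lineage.
  changed = propagated.join(labels)
    .filter { case (_, (newL, oldL)) => newL != oldL }
    .count()

  labels = propagated
}
// labels now maps every vertex to the smallest id in its component.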

But this results in the transfer of large amounts of data between nodes (shuffling).

Any advice?

asked May 18 '16 by Puh

1 Answer

If you are working with graphs, I would suggest taking a look at one of these libraries:

  • GraphX
  • GraphFrames

They both provide the connected components algorithm out of the box.

GraphX:

// Graph takes vertex- and edge-attribute type parameters, e.g. Graph[String, Int].
val graph: Graph[String, Int] = ...
val cc = graph.connectedComponents().vertices
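One caveat for this question: GraphX vertex IDs are Longs, so the string ids have to be mapped to numeric ids first. A sketch of one way to do that (the edges RDD and the zipWithUniqueId mapping are my own scaffolding, not part of the GraphX API):

import org.apache.spark.graphx._
import org.apache.spark.rdd.RDD

// Assign a unique Long to every distinct string id.
val ids: RDD[(String, VertexId)] =
  edges.flatMap { case (a, b) => Seq(a, b) }.distinct().zipWithUniqueId()

// Rewrite the string edges as numeric GraphX edges.
val numericEdges: RDD[Edge[Int]] = edges
  .join(ids)                                  // (src, (dst, srcId))
  .map { case (_, (dst, srcId)) => (dst, srcId) }
  .join(ids)                                  // (dst, (srcId, dstId))
  .map { case (_, (srcId, dstId)) => Edge(srcId, dstId, 1) }

val graph = Graph.fromEdges(numericEdges, defaultValue = 1)
val components = graph.connectedComponents().vertices // (vertexId, componentId)

Keep the ids RDD around so you can join the numeric component ids back to the original string ids at the end.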

GraphFrames:

val graph: GraphFrame = ...
val cc = graph.connectedComponents.run() // vertices DataFrame plus a "component" column
cc.select("id", "component").orderBy("component").show()
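GraphFrames, by contrast, accepts string ids directly, which fits this question's data. A minimal end-to-end sketch (paths and the app name are placeholders; note that the default connectedComponents implementation requires a checkpoint directory to be set):

import org.apache.spark.sql.SparkSession
import org.graphframes.GraphFrame

val spark = SparkSession.builder().appName("cc").getOrCreate()
import spark.implicits._

// Read the tab-separated edge file and name the columns as GraphFrames expects.
val edgesDF = spark.read
  .option("sep", "\t")
  .csv("hdfs:///path/to/edges")
  .toDF("src", "dst")

// Vertices are just the distinct ids appearing on either side of an edge.
val verticesDF = edgesDF.select($"src".as("id"))
  .union(edgesDF.select($"dst".as("id")))
  .distinct()

spark.sparkContext.setCheckpointDir("/tmp/cc-checkpoint") // placeholder path

val g = GraphFrame(verticesDF, edgesDF)
val components = g.connectedComponents.run()

The GraphFrames implementation is based on the alternating large-star/small-star approach from the MapReduce connected-components literature, which is designed to keep shuffle volume bounded compared to the naive loop in the question.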
answered Sep 29 '22 by Marsellus Wallace