 

How to define global read/write variables in Spark

Tags:

apache-spark

Spark has broadcast variables, which are read-only, and accumulator variables, which can be updated by the nodes but not read. Is there a way - or a workaround - to define a variable which is both updatable and readable?
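For reference, here is a minimal sketch of the two built-in shared-variable types being contrasted above (assuming the Spark 2.x+ Scala API; the object and variable names are illustrative only):

```scala
import org.apache.spark.sql.SparkSession

object SharedVariablesSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("shared-vars")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // Broadcast variable: a read-only copy shipped once to each executor.
    val lookup = sc.broadcast(Map("a" -> 1, "b" -> 2))

    // Accumulator: executors can only add to it; only the driver can read its value.
    val counter = sc.longAccumulator("processed")

    val result = sc.parallelize(Seq("a", "b", "a"))
      .map { k =>
        counter.add(1)               // write-only from the executors' point of view
        lookup.value.getOrElse(k, 0) // read-only lookup on the executors
      }
      .collect()

    println(s"result = ${result.mkString(",")}, processed = ${counter.value}")
    spark.stop()
  }
}
```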

One use case for such a read/write global variable would be to implement a cache. As files are loaded and processed as RDDs, calculations are performed. The results of these calculations - happening on several nodes running in parallel - need to be placed into a map whose key is built from some of the attributes of the entity being processed. As subsequent entities within the RDDs are processed, the cache is queried.

Scala does have ScalaCache, which is a facade for cache implementations such as Google Guava. But how would such a cache be included and accessed within a Spark application?

The cache could be defined as a variable in the driver application which creates the SparkContext. But then there would be two issues:

  • Performance would presumably be bad because of the network overhead between the nodes and the driver application.
  • To my understanding, each RDD will be passed a copy of the variable (the cache in this case) when the variable is first accessed by the function passed to the RDD. Each RDD would have its own copy, not access to a shared global variable (see the sketch after this list).
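To illustrate the second point, here is a hedged sketch (object and variable names are made up, not from the question): a mutable map created on the driver is captured by the task closure, so each task works on its own deserialized copy rather than on the driver's instance.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import scala.collection.mutable

object ClosureCopySketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("closure-copy").setMaster("local[*]"))

    // A mutable map defined on the driver...
    val cache = mutable.Map.empty[String, Int]

    sc.parallelize(Seq("a", "b", "c"), 3).foreach { k =>
      // ...is serialized into each task's closure, so each task mutates its own copy.
      cache(k) = k.length
    }

    // On a cluster the driver's map stays unchanged, because the updates happened
    // on per-task copies (even in local mode the behaviour is not guaranteed).
    println(s"driver cache after the job: $cache")
    sc.stop()
  }
}
```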

What is the best way to implement and store such a cache?

Thanks

asked Apr 04 '16 by user1052610



1 Answer

Well, the best way of doing this is not doing it at all. In general, the Spark processing model doesn't provide any guarantees* regarding

  • where,
  • when,
  • in what order (excluding of course the order of transformations defined by the lineage / DAG)
  • and how many times

a given piece of code is executed. Moreover, any updates that depend directly on the Spark architecture are not granular.

These are the properties that make Spark scalable and resilient, but at the same time they are what makes keeping shared mutable state very hard to implement and, most of the time, completely useless.

If all you want is a simple cache then you have multiple options:

  • use one of the methods described by Tzach Zohar in Caching in Spark
  • use local caching (per JVM or executor thread) combined with application-specific partitioning to keep things local (see the sketch after this list)
  • for communication with external systems, use a node-local cache independent of Spark (for example an Nginx proxy for HTTP requests)
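As a hedged sketch of the second option (the object name `LocalCache`, the `expensiveCalculation` helper, and the key/value types are hypothetical): a singleton object is initialized once per executor JVM, so all tasks in that JVM share its map, while `partitionBy` keeps entities with the same key in the same partition so that lookups mostly hit the same local cache.

```scala
import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}
import scala.collection.concurrent.TrieMap

// One instance of this object exists per executor JVM, so the map is shared by
// all tasks running in that JVM -- but it is NOT shared across executors.
object LocalCache {
  val cache: TrieMap[String, Double] = TrieMap.empty
}

object PerJvmCacheSketch {
  // Hypothetical stand-in for the expensive calculation mentioned in the question.
  def expensiveCalculation(key: String): Double = key.length.toDouble

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("per-jvm-cache").setMaster("local[*]"))

    val entities = sc.parallelize(Seq("a" -> 1, "b" -> 2, "a" -> 3))

    val results = entities
      // Application-specific partitioning: entities with the same key land in the
      // same partition, so they are usually served by the same JVM-local cache.
      .partitionBy(new HashPartitioner(4))
      .mapPartitions { iter =>
        iter.map { case (key, payload) =>
          // Compute on a cache miss; reuse the cached value otherwise. A miss on
          // another executor simply recomputes - the cache is not coherent cluster-wide.
          val value = LocalCache.cache.getOrElseUpdate(key, expensiveCalculation(key))
          (key, payload, value)
        }
      }

    results.collect().foreach(println)
    sc.stop()
  }
}
```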

If the application requires much more complex communication, you may try different message-passing tools to keep state synchronized, but in general this requires complex and potentially fragile code.


* This partially changed in Spark 2.4 with the introduction of barrier execution mode (SPARK-24795, SPARK-24822).
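As a hedged illustration of that footnote (assuming Spark 2.4+; the object name and the toy computation are made up), barrier execution mode schedules all tasks of a stage together and lets them synchronize:

```scala
import org.apache.spark.{BarrierTaskContext, SparkConf, SparkContext}

object BarrierModeSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("barrier-sketch").setMaster("local[4]"))

    sc.parallelize(1 to 8, 4)
      .barrier()               // request barrier scheduling: all 4 tasks launch together
      .mapPartitions { iter =>
        val ctx = BarrierTaskContext.get()
        ctx.barrier()          // global synchronization point across all tasks in the stage
        iter.map(_ * 2)
      }
      .collect()
      .foreach(println)

    sc.stop()
  }
}
```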

answered Oct 25 '22 by 4 revs, 4 users 89%