I have 1 Spark master and 2 slave nodes set up on AWS, with 8 GB of memory each. I have scheduled a Spark job to run every hour; it reads records from a Cassandra database and processes them in Spark. There are around 5000 records every hour. The Spark master crashed in one of the runs, saying:
"15/12/20 11:04:45 ERROR ActorSystemImpl: Uncaught fatal error from thread [sparkMaster-akka.actor.default-dispatcher-4436] shutting down ActorSystem [sparkMaster]
java.lang.OutOfMemoryError: GC overhead limit exceeded
at scala.math.BigInt$.apply(BigInt.scala:82)
at org.json4s.jackson.JValueDeserializer.deserialize(JValueDeserializer.scala:16)
at org.json4s.jackson.JValueDeserializer.deserialize(JValueDeserializer.scala:42)
at org.json4s.jackson.JValueDeserializer.deserialize(JValueDeserializer.scala:35)
at org.json4s.jackson.JValueDeserializer.deserialize(JValueDeserializer.scala:42)
at org.json4s.jackson.JValueDeserializer.deserialize(JValueDeserializer.scala:35)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3066)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2161)
at org.json4s.jackson.JsonMethods$class.parse(JsonMethods.scala:19)
at org.json4s.jackson.JsonMethods$.parse(JsonMethods.scala:44)
at org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:58)
at org.apache.spark.deploy.master.Master.rebuildSparkUI(Master.scala:793)
at org.apache.spark.deploy.master.Master.removeApplication(Master.scala:734)
at org.apache.spark.deploy.master.Master.org$apache$spark$deploy$master$Master$$finishApplication(Master.scala:712)
at org.apache.spark.deploy.master.Master$$anonfun$receiveWithLogging$1$$anonfun$applyOrElse$28.apply(Master.scala:445)
at org.apache.spark.deploy.master.Master$$anonfun$receiveWithLogging$1$$anonfun$applyOrElse$28.apply(Master.scala:445)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.deploy.master.Master$$anonfun$receiveWithLogging$1.applyOrElse(Master.scala:445)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:59)
at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
at org.apache.spark.deploy.master.Master.aroundReceive(Master.scala:52)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
"
Can you please let me know why the Spark master crashed with an out-of-memory error? My setup is _executorMemory = 6G and _driverMemory = 6G, and I create 8 partitions in my code. Why does the master go down with out of memory?
Here is the code
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

// _conf, _from, _to, con and insertQuery are initialized elsewhere in the application

// create the spark context and SQL context
_sparkContext = new SparkContext(_conf)
_sqlContext = new SQLContext(_sparkContext)

// load the cassandra table as a DataFrame
val tabledf = _sqlContext.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("table" -> "events", "keyspace" -> "sams"))
  .load()

// restrict the read to the current hourly window
val whereQuery = "addedtime >= '" + _from + "' AND addedtime < '" + _to + "'"
helpers.printnextLine("Where query to run on Cassandra : " + whereQuery)
val rdd = tabledf.filter(whereQuery)
rdd.registerTempTable("rdd")

// lower-case the columns we need and pull out the price
val selectQuery = "lower(brandname) as brandname, lower(appname) as appname, lower(packname) as packname, lower(assetname) as assetname, eventtime, lower(eventname) as eventname, lower(client.OSName) as platform, lower(eventorigin) as eventorigin, meta.price as price"
val modefiedDF = _sqlContext.sql("select " + selectQuery + " from rdd")

// cache the DataFrame
modefiedDF.cache()

// perform groupby operation
val grprdd = modefiedDF.groupBy("brandname", "appname", "packname", "eventname", "platform", "eventorigin", "price").count()

// write each aggregated row to the sql server table
grprdd.foreachPartition { iter =>
  try {
    iter.foreach { element =>
      val statement = con.createStatement()
      statement.executeUpdate(insertQuery)
    }
  } finally {
    if (con != null)
      con.close()
  }
}

// clear the cache
_sqlContext.clearCache()
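For reference, _conf is built along these lines before the context is created (this is a paraphrase of my setup; the app name is a placeholder, and the 6G values are the ones mentioned above):

import org.apache.spark.SparkConf

val _conf = new SparkConf()
  .setAppName("hourly-events-job")      // placeholder name
  .set("spark.executor.memory", "6g")   // _executorMemory
  .set("spark.driver.memory", "6g")     // _driverMemory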
The problem may be that you are asking the Spark master to use 6 GB and the Spark executor to use another 6 GB (12 GB in total), while each machine only has 8 GB of RAM available.
Of those 8 GB you should also leave some memory for OS processes (say 1 GB), so the total RAM available to Spark (master and worker combined) is only about 7 GB.
Set executorMemory and driverMemory accordingly.
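For example, with 8 GB per node you could keep roughly 1 GB for the OS and split the rest between driver and executor when submitting the job. The values below are only illustrative, and the master URL and jar name are placeholders; tune them so driver + executor + OS fit within 8 GB:

spark-submit \
  --master spark://<master-host>:7077 \
  --driver-memory 2g \
  --executor-memory 4g \
  your-job.jar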