Spark: Exception in thread "main" org.apache.spark.sql.catalyst.errors.package

I get the following error when I run my spark-submit job, which executes a Scala file that performs joins.

What is this TreeNodeException error, and why does it occur?

Please share your ideas on this TreeNodeException error:

Exception in thread "main" org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
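For context, here is a minimal PySpark sketch of the kind of join that can surface this exception; the DataFrame names and data are hypothetical, and the error typically appears only when an action (e.g. show() or count()) forces the join to execute:

from pyspark.sql import SparkSession

# Hypothetical minimal setup; the exception surfaces when the plan executes
spark = SparkSession.builder.appName("JoinExample").getOrCreate()

df1 = spark.createDataFrame([(1, "a"), (2, "b")], ["A", "val1"])
df2 = spark.createDataFrame([(1, "x"), (2, "y")], ["A", "val2"])

joined = df1.join(df2, on="A")
joined.show()  # action that triggers execution of the join plan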
asked Oct 25 '17 by Ankita

2 Answers

OK, the stack trace given above is not sufficient to understand the root cause, but since you mentioned you are using a join, that is most probably where it is happening. I faced the same issue with a join; if you dig down into your stack trace, you will see something like:

+- *HashAggregate(keys=[], functions=[partial_count(1)], output=[count#73300L])
+- *Project
+- *BroadcastHashJoin 
...
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [300 seconds]

This hints at why it is failing: Spark tries to join using a broadcast hash join, which has both a timeout and a broadcast size threshold, and either one can cause the above error. To fix this, depending on the underlying error:

Increase "spark.sql.broadcastTimeout" (the default is 300 seconds):

from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName("AppName") \
    .config("spark.sql.broadcastTimeout", "1800") \
    .getOrCreate()

Or increase the broadcast threshold (the default is 10 MB):

spark = SparkSession \
    .builder \
    .appName("AppName") \
    .config("spark.sql.autoBroadcastJoinThreshold", "20485760") \
    .getOrCreate()

Or disable broadcast joins entirely by setting the threshold to -1:

spark = SparkSession \
    .builder \
    .appName("AppName") \
    .config("spark.sql.autoBroadcastJoinThreshold", "-1") \
    .getOrCreate()
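The same settings can also be applied to an already-running session via the runtime configuration, which avoids rebuilding the session (standard spark.conf API):

# Equivalent runtime configuration on an existing SparkSession
spark.conf.set("spark.sql.broadcastTimeout", "1800")
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")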

More details can be found here - https://spark.apache.org/docs/latest/sql-performance-tuning.html
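To confirm that Spark actually chose a broadcast join for your query, you can print the physical plan before running the action; assuming joined is the DataFrame produced by your join, look for BroadcastHashJoin / BroadcastExchange nodes in the output:

# Inspect the plan without executing the job
joined.explain()      # physical plan only
joined.explain(True)  # parsed, analyzed, optimized, and physical plans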

answered by Pratik Goenka

I encountered this exception when joining DataFrames too:

Exception in thread "main" org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:

To fix it, I simply reversed the order of the join. That is, instead of df1.join(df2, on="A"), I did df2.join(df1, on="A"). I'm not sure why this works, but my intuition is that the logical plan Spark has to follow is messier with the former order than with the latter; you can think of it as the number of comparisons Spark would have to make on column "A" in my toy example to join the two DataFrames. I know it's not a definitive answer, but I hope it helps; see the sketch below.
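A minimal sketch of the workaround, reusing the toy column "A" from above (the DataFrame names are illustrative):

# Original order that raised the TreeNodeException
# joined = df1.join(df2, on="A")

# Reversed order that completed successfully
joined = df2.join(df1, on="A")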

answered by Mauricio