To give some background, I am trying to run the TPC-DS benchmark on Spark with and without Spark's Catalyst optimizer. For complicated queries on smaller datasets, we might be spending more time optimizing the plans than actually executing them, so I wanted to measure the performance impact of the optimizer on the overall execution of the query.
Is there a way to disable some or all of the Spark Catalyst optimization rules?
The Spark SQL Catalyst optimizer improves both developer productivity and the performance of the queries developers write. Catalyst automatically transforms relational queries to execute them more efficiently, using techniques such as filtering, indexing, and ensuring that data source joins are performed in the most efficient order.
Catalyst was built as a new extensible query optimizer to implement Spark SQL. It takes advantage of advanced programming language features and is based on functional programming constructs in Scala, which makes it straightforward to add new optimization rules.
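As a small illustration of that extensibility, here is a minimal sketch (assuming a spark-shell session where spark is predefined) of plugging a custom rule into the optimizer via spark.experimental.extraOptimizations. The rule name LogPlanRule is hypothetical; it rewrites nothing and only logs the plans Catalyst hands to it:

import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.catalyst.rules.Rule

// Hypothetical illustration-only rule: returns the plan unchanged,
// logging it so you can see Catalyst invoking user-supplied rules.
object LogPlanRule extends Rule[LogicalPlan] {
  override def apply(plan: LogicalPlan): LogicalPlan = {
    logInfo(s"Catalyst visited plan:\n$plan")
    plan
  }
}

// Appends the rule to the optimizer's user-defined batch for this session.
spark.experimental.extraOptimizations ++= Seq(LogPlanRule)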
Tungsten is the codename for the umbrella project to make changes to Apache Spark's execution engine that focuses on substantially improving the efficiency of memory and CPU for Spark applications, to push performance closer to the limits of modern hardware.
I know it's not an exact answer, but it may help you.
This assumes your driver is not multithreaded (which is itself a hint for optimization if Catalyst turns out to be slow? :) ).
If you want to measure the time spent in Catalyst, just go to the Spark UI and check how much time your executors are idle, or look at the list of stages/jobs.
If a job starts at 15:30 and runs for 30 seconds, and the next one starts at 15:32, the gap of about 1 minute 30 seconds is probably Catalyst optimizing the plan (assuming no driver-heavy work is done in between).
Or even better, put a log line before every action you call in Spark, and then check how much time passes until the tasks are actually sent to the executors.
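A minimal sketch combining both ideas (the query below is a stand-in; substitute the TPC-DS query you are benchmarking): touching queryExecution.optimizedPlan forces analysis and optimization without launching a job, and a timestamped log right before the action can be compared against the first task launch in the Spark UI.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("catalyst-timing-sketch")
  .master("local[*]")
  .getOrCreate()

// Stand-in query; replace with the query you are measuring.
val df = spark.range(1000000L)
  .selectExpr("id", "id % 10 AS bucket")
  .groupBy("bucket")
  .count()

// optimizedPlan is a lazy val: touching it runs analysis + optimization
// on the driver without submitting a job, so the elapsed time is a rough
// measure of Catalyst's share of the query.
val t0 = System.nanoTime()
df.queryExecution.optimizedPlan
val t1 = System.nanoTime()
println(f"analysis + optimization took ${(t1 - t0) / 1e6}%.1f ms")

// Log just before the action: the gap between this timestamp and the
// first task launch shown in the Spark UI is driver-side overhead.
println(s"[${java.time.Instant.now()}] calling action")
df.collect()
println(s"[${java.time.Instant.now()}] action finished")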
This ability was added in Spark 2.4.0 as part of SPARK-24802:
val OPTIMIZER_EXCLUDED_RULES = buildConf("spark.sql.optimizer.excludedRules")
  .doc("Configures a list of rules to be disabled in the optimizer, in which the rules are " +
    "specified by their rule names and separated by comma. It is not guaranteed that all the " +
    "rules in this configuration will eventually be excluded, as some rules are necessary " +
    "for correctness. The optimizer will log the rules that have indeed been excluded.")
  .stringConf
  .createOptional
You can find the list of optimizer rules in the Spark source, in org.apache.spark.sql.catalyst.optimizer.Optimizer.
But ideally we shouldn't be disabling these rules, since most of them provide performance benefits. Instead, identify the rule that is consuming the time, check whether it actually helps your query, and only then exclude it.
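For example, a minimal sketch of excluding rules for the current session (ConvertToLocalRelation and PropagateEmptyRelation are just example rule names, not a recommendation; substitute the rules you want to measure):

// Rules are identified by fully qualified class name, comma-separated.
spark.conf.set(
  "spark.sql.optimizer.excludedRules",
  "org.apache.spark.sql.catalyst.optimizer.ConvertToLocalRelation," +
    "org.apache.spark.sql.catalyst.optimizer.PropagateEmptyRelation")

The same setting can also be passed at launch time, e.g. with --conf spark.sql.optimizer.excludedRules=... on spark-submit. As the doc string above notes, Spark may keep a rule anyway if it is required for correctness, and it logs the rules that were actually excluded.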