In SQL Server we can declare variables like `DECLARE @sparksql = '<any query/value/string>'`, but what alternative can be used in Spark SQL, so that we don't need to hard-code any values/queries/strings?
There is support for variable substitution in Spark SQL, at least from version 2.1.x. It's controlled by the configuration option `spark.sql.variable.substitute` - in 3.0.x it's set to `true` by default (you can check it by executing `SET spark.sql.variable.substitute;`).
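For example, checking and explicitly enabling the flag looks like this (the property name comes from the text above; the rest is a minimal sketch):

```sql
-- Check the current value of the substitution flag
SET spark.sql.variable.substitute;

-- Enable it explicitly (already true by default in 3.0.x)
SET spark.sql.variable.substitute=true;
```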
With that option set to `true`, you can set a variable to a specific value with `SET myVar=123`, and then reference it using the `${varName}` syntax, like `SELECT ${myVar} ...`
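A minimal sketch putting those pieces together (the variable names and the table `my_db.sales` are hypothetical):

```sql
-- Define variables (names and the table my_db.sales are hypothetical)
SET myVar=123;
SET tableName=my_db.sales;

-- Substitute them into queries with the ${...} syntax
SELECT ${myVar};
SELECT * FROM ${tableName} WHERE amount > ${myVar};
```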
On Databricks, the notebook parser also recognizes that syntax and creates an input field to populate the value, although it may be easier to use widgets from SQL as described in the documentation.
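As a hedged sketch of the widget approach (the widget name `state` and the table `my_db.customers` are examples; see the Databricks documentation for the full API):

```sql
-- Create a text widget with a default value (Databricks notebooks)
CREATE WIDGET TEXT state DEFAULT "CA";

-- Reference the widget value with the same ${...} substitution syntax
SELECT * FROM my_db.customers WHERE state = '${state}';

-- Remove the widget when done
REMOVE WIDGET state;
```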
P.S. According to the code, besides variables themselves, substitution also supports reading environment variables and Java system properties, like this:
```sql
select '${env:PATH}';
select '${system:java.home}';
```
P.S. This answer is about using variables defined in Spark SQL itself. If you're looking to use variables defined in Python/Scala in Spark SQL, then please refer to this answer.