By default, spark_read_jdbc()
reads an entire database table into Spark. I've used the following syntax to create these connections.
library(sparklyr)
library(dplyr)

config <- spark_config()
config$`sparklyr.shell.driver-class-path` <- "mysql-connector-java-5.1.43/mysql-connector-java-5.1.43-bin.jar"

sc <- spark_connect(master = "local",
                    version = "1.6.0",
                    hadoop_version = 2.4,
                    config = config)
db_tbl <- sc %>%
  spark_read_jdbc(sc = .,
                  name = "table_name",
                  options = list(url = "jdbc:mysql://localhost:3306/schema_name",
                                 user = "root",
                                 password = "password",
                                 dbtable = "table_name"))
However, I've now encountered a scenario where I have a table in a MySQL database and would prefer to read only a subset of it into Spark. How do I get spark_read_jdbc() to accept a predicate? I've tried adding the predicate to the options list, without success:
db_tbl <- sc %>%
  spark_read_jdbc(sc = .,
                  name = "table_name",
                  options = list(url = "jdbc:mysql://localhost:3306/schema_name",
                                 user = "root",
                                 password = "password",
                                 dbtable = "table_name",
                                 predicates = "field > 1"))
You can replace the table name in dbtable with a subquery:
db_tbl <- sc %>%
  spark_read_jdbc(sc = .,
                  name = "table_name",
                  options = list(url = "jdbc:mysql://localhost:3306/schema_name",
                                 user = "root",
                                 password = "password",
                                 dbtable = "(SELECT * FROM table_name WHERE field > 1) as my_query"))
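Spark places whatever you pass as dbtable into the FROM clause of the queries it generates, so a subquery like this has to be wrapped in parentheses and given an alias (my_query above); MySQL rejects a derived table without one.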
With a simple condition like this, though, Spark should push the predicate down automatically when you filter:
db_tbl %>% filter(field > 1)
Just make sure to set memory = FALSE in spark_read_jdbc().
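For reference, a minimal sketch combining the two steps. It reuses the connection, credentials, and table name from the question, and assumes field is a column of table_name:

# Register the table as a lazy reference (memory = FALSE) so Spark does not
# cache the full table up front; the filter can then be pushed down to MySQL
db_tbl <- spark_read_jdbc(sc,
                          name = "table_name",
                          memory = FALSE,
                          options = list(url = "jdbc:mysql://localhost:3306/schema_name",
                                         user = "root",
                                         password = "password",
                                         dbtable = "table_name"))

# The WHERE condition should be pushed down to the JDBC source
db_tbl %>% filter(field > 1)

If you want to confirm what actually reaches the database, check on the MySQL side (for example with the general query log); the dplyr translation alone does not show the JDBC pushdown.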