
How to store data in a Spark cluster using sparklyr?

Tags: r, sparklyr

If I connect to a Spark cluster, copy some data to it, and disconnect, ...

library(dplyr)
library(sparklyr)
sc <- spark_connect("local")
copy_to(sc, iris)
src_tbls(sc)
## [1] "iris"
spark_disconnect(sc)

then the next time I connect to Spark, the data is not there.

sc <- spark_connect("local")
src_tbls(sc)
## character(0)
spark_disconnect(sc)

This is different from working with a database, where the data is simply there no matter how many times you connect.

How do I persist data in the Spark cluster between connections?

I thought sdf_persist() might be what I want, but it appears not to be.

asked Nov 08 '22 by Richie Cotton

1 Answer

Spark is an engine that runs on a computer or cluster to execute tasks; it is not a database or a file system, so it does not keep your tables after the session that created them ends. To persist data between connections, save it to a file system when you are done and load it back in during your next session.

https://en.wikipedia.org/wiki/Apache_Spark
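For example, one approach is to write the table out as Parquet before disconnecting and read it back in the next session. This is only a sketch: the path "/tmp/iris_parquet" is an arbitrary example location, and you would point it at any storage your Spark installation can read and write.

library(sparklyr)

# First session: copy the data in and write it out before disconnecting.
# "/tmp/iris_parquet" is only an example path.
sc <- spark_connect("local")
iris_tbl <- copy_to(sc, iris)
spark_write_parquet(iris_tbl, path = "/tmp/iris_parquet", mode = "overwrite")
spark_disconnect(sc)

# Later session: reconnect and load the saved data back into Spark.
sc <- spark_connect("local")
iris_tbl <- spark_read_parquet(sc, name = "iris", path = "/tmp/iris_parquet")
src_tbls(sc)
## [1] "iris"
spark_disconnect(sc)

Parquet is just one choice; the other spark_write_*() / spark_read_*() functions (CSV, JSON, and so on) follow the same pattern.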

answered Nov 15 '22 by Andrew Troiano