Reading data from SQL Server using Spark SQL

Is it possible to read data from Microsoft SQL Server (and Oracle, MySQL, etc.) into an RDD in a Spark application? Or do we need to create an in-memory set and parallelize that into an RDD?
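For concreteness, the fallback alluded to above would look roughly like this; a minimal sketch, where the connection string, query, and Person row type are invented purely for illustration:

    // Sketch of the "in-memory set + parallelize" fallback: pull all rows
    // through the driver with plain JDBC, then parallelize. Only viable when
    // the result fits in driver memory. Connection details are hypothetical.
    import java.sql.DriverManager
    import scala.collection.mutable.ArrayBuffer

    case class Person(name: String)

    val conn = DriverManager.getConnection(
      "jdbc:sqlserver://somehost;databaseName=somedb;integratedSecurity=true;")
    val rs = conn.createStatement().executeQuery("SELECT name FROM people")
    val rows = ArrayBuffer[Person]()
    while (rs.next()) rows += Person(rs.getString("name"))
    conn.close()
    val rdd = sc.parallelize(rows)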

asked Oct 07 '14 by ashic

2 Answers

In Spark 1.4.0+ you can now use sqlContext.read.jdbc.

That will give you a DataFrame instead of an RDD of Row objects.

The equivalent of the solution you posted would be:

    val df = sqlContext.read.jdbc(
      "jdbc:sqlserver://omnimirror;databaseName=moneycorp;integratedSecurity=true;",
      "TABLE_NAME",              // table to read
      "id",                      // column to partition on
      1,                         // lower bound
      100000,                    // upper bound
      1000,                      // number of partitions
      new java.util.Properties)

It should pick up the schema of the table, but if you'd like to force it, you can use the schema method after read: sqlContext.read.schema(...insert schema here...).jdbc(...rest of the things...)

Note that you won't get an RDD of SomeClass here (which is nicer in my view). Instead you'll get a DataFrame of the relevant fields.
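If you do still want an RDD of SomeClass, a minimal sketch (assuming the same case class and column names as in the JdbcRDD answer below) is to map over the DataFrame's underlying RDD of Rows:

    // Assumes: case class SomeClass(col1: String, col2: String, col3: String)
    // and that df is the DataFrame returned by the jdbc read above.
    val rdd = df.rdd.map { r =>
      SomeClass(r.getAs[String]("Col1"),
                r.getAs[String]("Col2"),
                r.getAs[String]("Col3"))
    }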

More information can be found here: http://spark.apache.org/docs/latest/sql-programming-guide.html#jdbc-to-other-databases

answered Oct 08 '22 by kanielc

Found a solution to this on the mailing list: JdbcRDD can be used to accomplish it. I needed to get the MS SQL Server JDBC driver jar and add it to the lib for my project. I wanted to use integrated security, so I needed to put sqljdbc_auth.dll (available in the same download) in a location that java.library.path can see. Then the code looks like this:

    import java.sql.{DriverManager, ResultSet}
    import org.apache.spark.rdd.JdbcRDD

    val rdd = new JdbcRDD[SomeClass](sc,
      () => DriverManager.getConnection(
        "jdbc:sqlserver://omnimirror;databaseName=moneycorp;integratedSecurity=true;"),
      // The two '?' placeholders receive each partition's inclusive bounds,
      // so use <= on both sides or boundary rows are silently skipped.
      "SELECT * FROM TABLE_NAME WHERE ? <= X AND X <= ?",
      1, 100000, 1000,           // lower bound, upper bound, number of partitions
      (r: ResultSet) => SomeClass(r.getString("Col1"),
        r.getString("Col2"), r.getString("Col3")))

This gives an RDD of SomeClass. The three numeric arguments are required: they are the lower bound, the upper bound, and the number of partitions. In other words, the source data needs to be partitionable by a long-valued column for this to work.
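For reference, here is a sketch of the partitioning math JdbcRDD applies, mirroring the getPartitions logic in Spark's JdbcRDD source, with the numbers from the example above:

    // With lowerBound = 1, upperBound = 100000 and numPartitions = 1000,
    // partition 0 binds (1, 100), partition 1 binds (101, 200), and so on.
    // Each pair is substituted for the two '?' placeholders, inclusive at
    // both ends, which is why the query uses <= rather than <.
    val lowerBound = 1L
    val upperBound = 100000L
    val numPartitions = 1000
    val length = BigInt(1) + upperBound - lowerBound
    val bounds = (0 until numPartitions).map { i =>
      val start = lowerBound + ((i * length) / numPartitions)
      val end   = lowerBound + (((i + 1) * length) / numPartitions) - 1
      (start.toLong, end.toLong)
    }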

answered Oct 08 '22 by ashic