
How To Push a Spark Dataframe to Elastic Search (Pyspark)

Beginner ES Question here

What is the workflow or steps for pushing a Spark Dataframe to Elastic Search?

From my research, I believe I need to use the spark.newAPIHadoopFile() method.

However, after digging through the Elasticsearch documentation and other Stack Overflow Q&As, I am still a little confused about what format the arguments need to be in, and why.

Note that I am using PySpark, this is a new table to ES (no index exists yet), and the DataFrame has 5 columns (2 string types, 2 long types, and 1 list of ints) with ~3.5M rows.

asked Oct 16 '17 by whs2k


2 Answers

This worked for me; I had my data in df:

# ES manages document ids itself, so drop any pre-existing _id column
df = df.drop('_id')
df.write.format(
    "org.elasticsearch.spark.sql"  # the elasticsearch-hadoop connector
).option(
    "es.resource", '%s/%s' % (conf['index'], conf['doc_type'])  # index/type target
).option(
    "es.nodes", conf['host']
).option(
    "es.port", conf['port']
).save()
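
For context, the conf dictionary referenced above is not defined in the answer; a minimal sketch of what it might contain (all names and values are hypothetical):

# Hypothetical connection settings for the snippet above
conf = {
    'index': 'my_index',        # target Elasticsearch index
    'doc_type': 'my_doc_type',  # document type (used in index/type resources)
    'host': 'localhost',        # Elasticsearch node to connect to
    'port': '9200',             # default Elasticsearch HTTP port
}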

I used this command to submit the job:

/path/to/spark-submit --master spark://master:7077 --jars ./jar_files/elasticsearch-hadoop-5.6.4.jar --driver-class-path ./jar_files/elasticsearch-hadoop-5.6.4.jar main_df.py
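
If you would rather put the connector jar on the classpath inside the script than on the command line, a minimal sketch (the jar path is the one from the command above; the app name is hypothetical):

from pyspark.sql import SparkSession

# Build a session with the elasticsearch-hadoop connector jar on the classpath
spark = (SparkSession.builder
         .appName("df-to-es")
         .config("spark.jars", "./jar_files/elasticsearch-hadoop-5.6.4.jar")
         .getOrCreate())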

answered Sep 28 '22 by akki


Managed to find an answer, so I'll share. Spark DataFrames (from pyspark.sql) don't currently support the newAPIHadoopFile() methods; however, df.rdd.saveAsNewAPIHadoopFile() was giving me errors as well. The trick was to convert the DataFrame rows to JSON strings via the following function:

def transform(doc):
    """Turn a row dict into an (id, json_string) pair for EsOutputFormat."""
    import json
    import hashlib

    _json = json.dumps(doc)
    # Drop keys whose values are the strings 'null' or 'None'
    for key in list(doc.keys()):
        if doc[key] == 'null' or doc[key] == 'None':
            del doc[key]
    # If the document has no 'id', derive a deterministic one by hashing
    # the original JSON; otherwise reuse the existing id
    if 'id' not in doc:
        id = hashlib.sha224(_json.encode('utf-8')).hexdigest()
        doc['id'] = id
    else:
        id = doc['id']
    _json = json.dumps(doc)
    return (id, _json)
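
To see the (id, json) pairs this produces before wiring it into Spark, here is a quick local check (the sample row is made up):

# Hypothetical sample row, already converted to a dict
sample = {'name': 'foo', 'score': 'None'}
doc_id, doc_json = transform(sample)
print(doc_id)    # sha224 hex digest derived from the original dict
print(doc_json)  # '{"name": "foo", "id": "..."}' -- the 'score' key was dropped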

So my JSON workflow is:

1: df = spark.read.json('XXX.json')

2: rdd_mapped = df.rdd.map(lambda y: y.asDict())

3: final_rdd = rdd_mapped.map(transform)

4:

final_rdd.saveAsNewAPIHadoopFile(
    path='-',
    outputFormatClass="org.elasticsearch.hadoop.mr.EsOutputFormat",
    keyClass="org.apache.hadoop.io.NullWritable",
    valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
    conf={
        "es.resource": "<INDEX>/<TYPE>",   # target as index/type
        "es.mapping.id": "id",             # use the 'id' field as the document id
        "es.input.json": "true",           # values are already JSON strings
        "es.write.operation": "index",
        "es.nodes.wan.only": "false",
        "es.net.http.auth.user": "elastic",
        "es.net.http.auth.pass": "changeme",
        "es.nodes": "<NODE1>, <NODE2>, <NODE3>...",
        "es.port": "9200",
    })

More information on these settings can be found in the official elasticsearch-hadoop documentation (scroll to 'Configuration').

answered Sep 28 '22 by whs2k