PySpark: Deserializing an Avro serialized message contained in an eventhub capture avro file

Initial situation

Avro-serialized events are sent to an Azure Event Hub. These events are stored persistently using the Azure Event Hubs Capture feature. The captured data, along with Event Hub metadata, is written in Apache Avro format. The original events contained in the capture Avro file shall be analyzed using (py)Spark.


Question

How can an Avro-serialized event that is contained within a field/column of an Avro file be deserialized using (py)Spark? (Note: the Avro schema of the event is not known to the reader application, but it is contained within the message as an Avro header.)


Background

The background is an analytical platform for an IoT scenario. Messages are provided by an IoT platform running on Kafka. To stay flexible with respect to schema changes, the strategic decision is to stick with the Avro format. To enable the use of Azure Stream Analytics (ASA), the Avro schema is included with each message (otherwise ASA is not able to deserialize the message).

Capture file Avro schema

The schema of the Avro files generated by the Event Hubs Capture feature is listed below:

{
    "type":"record",
    "name":"EventData",
    "namespace":"Microsoft.ServiceBus.Messaging",
    "fields":[
                 {"name":"SequenceNumber","type":"long"},
                 {"name":"Offset","type":"string"},
                 {"name":"EnqueuedTimeUtc","type":"string"},
                 {"name":"SystemProperties","type":{"type":"map","values":["long","double","string","bytes"]}},
                 {"name":"Properties","type":{"type":"map","values":["long","double","string","bytes"]}},
                 {"name":"Body","type":["null","bytes"]}
             ]
}

(Note that the actual message is stored in the Body field as bytes.)

Example event Avro schema

For illustration, I sent events with the following Avro schema to the Event Hub:

{
    "type" : "record",
    "name" : "twitter_schema",
    "namespace" : "com.test.avro",
    "fields" : [ 
                {"name" : "username","type" : "string"}, 
                {"name" : "tweet","type" : "string"},
                {"name" : "timestamp","type" : "long"}
    ]
}

Example event

{
    "username": "stackoverflow",
    "tweet": "please help deserialize me",
    "timestamp": 1366150681
}

Example Avro message payload

(rendered as a string; note that the Avro schema is included)

Objavro.schema�{"type":"record","name":"twitter_schema","namespace":"com.test.avro","fields":[{"name":"username","type":"string"},{"name":"tweet","type":"string"},{"name":"timestamp","type":"long"}]}

In the end, this payload is stored as bytes in the 'Body' field of the capture Avro file.
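
Because the payload starts with the Avro object container magic ("Obj") and carries its own schema, it can in principle be read locally like a regular .avro file from an in-memory buffer. A minimal sketch, assuming the fastavro package is available and body_bytes holds the raw payload shown above:

import io
import fastavro

def parse_body(body_bytes):
    # The Body payload is itself a complete Avro object container file
    # (schema header included), so fastavro can read it straight from a buffer.
    return list(fastavro.reader(io.BytesIO(body_bytes)))

# parse_body(raw_payload) -> [{'username': 'stackoverflow',
#                              'tweet': 'please help deserialize me',
#                              'timestamp': 1366150681}]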



My current approach

For ease of use, testing, and debugging, I currently use a PySpark Jupyter notebook.

Config of Spark Session:

%%configure
{
    "conf": {
        "spark.jars.packages": "com.databricks:spark-avro_2.11:4.0.0"
    }
}
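
(Equivalently, when running outside the notebook, the same package can be passed to a regular spark-submit; this is just the standard command-line form, shown for reference with a placeholder script name:)

spark-submit --packages com.databricks:spark-avro_2.11:4.0.0 my_analysis_job.py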

Reading the Avro file into a DataFrame and showing the result:

capture_df = spark.read.format("com.databricks.spark.avro").load("[pathToCaptureAvroFile]")
capture_df.show()

Result:

+--------------+------+--------------------+----------------+----------+--------------------+
|SequenceNumber|Offset|     EnqueuedTimeUtc|SystemProperties|Properties|                Body|
+--------------+------+--------------------+----------------+----------+--------------------+
|            71|  9936|11/4/2018 4:59:54 PM|           Map()|     Map()|[4F 62 6A 01 02 1...|
|            72| 10448|11/4/2018 5:00:01 PM|           Map()|     Map()|[4F 62 6A 01 02 1...|

Getting the content of the Body field and casting it to a string:

msgRdd = capture_df.select(capture_df.Body.cast("string")).rdd.map(lambda x: x[0])

That's how far I got the code working. I spent a lot of time trying to deserialize the actual message, but without success. I would appreciate any help!
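
A minimal sketch of the direction I have been trying (untested end to end; it assumes the fastavro package is installed on the driver and all workers, and that each Body payload is a complete Avro container file as shown above):

import io
import json
import fastavro

def body_to_json(body_bytes):
    # Each Body payload carries its own schema, so it can be read
    # directly from a byte buffer and turned into JSON strings.
    records = fastavro.reader(io.BytesIO(bytes(body_bytes)))
    return [json.dumps(record) for record in records]

# Parse every Body payload on the workers and let Spark infer the schema
# from the resulting JSON strings.
json_rdd = capture_df.select("Body").rdd.flatMap(lambda row: body_to_json(row[0]))
events_df = spark.read.json(json_rdd)
events_df.show()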

Some additional info: Spark is running on a Microsoft Azure HDInsight 3.6 cluster. Spark Version is 2.2. Python Version is 2.7.12.

Asked Nov 07 '18 by devel0p3r


1 Answer

What you want to do is apply .decode('utf-8') to each element in the Body column. You have to create a UDF from decode so you can apply it. The UDF can be created with:

from pyspark.sql import functions as f

decodeElements = f.udf(lambda a: a.decode('utf-8'))

Here is a complete example for parsing Avro files stored by the IoT Hub to a custom Blob Storage endpoint:

storage_account_name = "<YOUR STORAGE ACCOUNT NAME>"
storage_account_access_key = "<YOUR STORAGE ACCOUNT KEY>"

# Read all files from one day. All PartitionIds are included. 
file_location = "wasbs://<CONTAINER>@"+storage_account_name+".blob.core.windows.net/<IoT Hub Name>/*/2018/11/30/*/*"
file_type = "avro"

# Read raw data
spark.conf.set(
  "fs.azure.account.key."+storage_account_name+".blob.core.windows.net",
  storage_account_access_key)

reader = spark.read.format(file_type).option("inferSchema", "true")
raw = reader.load(file_location)

# Decode Body into strings
from pyspark.sql import functions as f

decodeElements = f.udf(lambda a: a.decode('utf-8'))

jsons = raw.select(
    raw['EnqueuedTimeUtc'],
    raw['SystemProperties.connectionDeviceId'].alias('DeviceId'), 
    decodeElements(raw['Body']).alias("Json")
)

# Parse Json data
from pyspark.sql.functions import from_json

json_schema = spark.read.json(jsons.rdd.map(lambda row: row.Json)).schema
data = jsons.withColumn('Parsed', from_json('Json', json_schema)).drop('Json')
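
If it helps, the parsed struct can then be flattened into ordinary top-level columns (a small follow-up sketch, not part of the original answer; the fields of Parsed are whatever json_schema inferred from your payloads):

# Flatten the inferred struct into regular columns alongside the metadata
flat = data.select('EnqueuedTimeUtc', 'DeviceId', 'Parsed.*')
flat.show(truncate=False)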

Disclaimer: I am new to both Python and Databricks, and my solution is probably less than perfect. But I spent more than a day getting this to work, and I hope it can be a good starting point for someone.

Answered Oct 01 '22 by PeterB