AWS Athena export array of structs to JSON

I've got an Athena table where some fields have a fairly complex nested format. The backing records in S3 are JSON. Along these lines (but we have several more levels of nesting):

CREATE EXTERNAL TABLE IF NOT EXISTS test (
  timestamp double,
  stats array<struct<time:double, mean:double, var:double>>,
  dets array<struct<coords: array<double>, header:struct<frame:int, 
    seq:int, name:string>>>,
  pos struct<x:double, y:double, theta:double>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES ('ignore.malformed.json'='true')
LOCATION 's3://test-bucket/test-folder/'

Now we need to be able to query the data and import the results into Python for analysis. Because of security restrictions I can't connect directly to Athena; I need to be able to give someone the query and then they will give me the CSV results.

If we just do a straight select * we get back the struct/array columns in a format that isn't quite JSON. Here's a sample input file entry:

{"timestamp":1520640777.666096,"stats":[{"time":15,"mean":45.23,"var":0.31},{"time":19,"mean":17.315,"var":2.612}],"dets":[{"coords":[2.4,1.7,0.3], "header":{"frame":1,"seq":1,"name":"hello"}}],"pos": {"x":5,"y":1.4,"theta":0.04}}

And here is an example of the output:

select * from test

"timestamp","stats","dets","pos"
"1.520640777666096E9","[{time=15.0, mean=45.23, var=0.31}, {time=19.0, mean=17.315, var=2.612}]","[{coords=[2.4, 1.7, 0.3], header={frame=1, seq=1, name=hello}}]","{x=5.0, y=1.4, theta=0.04}"

I was hoping to get those nested fields exported in a more convenient format - getting them in JSON would be great.

Unfortunately it seems that casting to JSON only keeps the keys for maps, not structs; the structs just get flattened into arrays of values:

SELECT timestamp, cast(stats as JSON) as stats, cast(dets as JSON) as dets, cast(pos as JSON) as pos FROM "sampledb"."test"

"timestamp","stats","dets","pos"
"1.520640777666096E9","[[15.0,45.23,0.31],[19.0,17.315,2.612]]","[[[2.4,1.7,0.3],[1,1,""hello""]]]","[5.0,1.4,0.04]"

Is there a good way to convert to JSON (or another easy-to-import format) or should I just go ahead and do a custom parsing function?
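
For reference, here is a rough sketch of what that custom parsing would look like on the cast output above. Since the field names are no longer in the data, they have to be re-attached by hand from the table definition (the lists below assume the field order matches the DDL):

import json

# Field names taken from the table definition, since CAST(... AS JSON)
# drops them; the order is assumed to match the DDL above.
stats_fields = ["time", "mean", "var"]
pos_fields = ["x", "y", "theta"]

# Values copied from the CAST(... AS JSON) output above.
stats_cell = '[[15.0,45.23,0.31],[19.0,17.315,2.612]]'
pos_cell = '[5.0,1.4,0.04]'

stats = [dict(zip(stats_fields, row)) for row in json.loads(stats_cell)]
pos = dict(zip(pos_fields, json.loads(pos_cell)))

print(stats)  # [{'time': 15.0, 'mean': 45.23, 'var': 0.31}, {'time': 19.0, ...}]
print(pos)    # {'x': 5.0, 'y': 1.4, 'theta': 0.04}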

asked Mar 15 '18 by Chelsar

2 Answers

I have skimmed through all the documentation and unfortunately there seems to be no way to do this as of now. The only possible workarounds are

flattening the structs by selecting their individual fields when querying Athena:

SELECT
  my_field,
  my_field.a,
  my_field.b,
  my_field.c.d,
  my_field.c.e
FROM 
  my_table

Or converting the data to JSON with a post-processing script. The script below shows how:

#!/usr/bin/env python
import io
import re

# Quote struct keys: "{key=" or ", key=" becomes "key":
pattern1 = re.compile(r'(?<=[{ ])([a-z]+)=', re.I)
# Quote bare string values, e.g. "name":hello becomes "name":"hello"
pattern2 = re.compile(r':([a-z][^,{}. [\]]+)', re.I)
# Escaped quotes inside values (not needed for this sample data)
pattern3 = re.compile(r'\\"', re.I)

with io.open("test.csv") as f:
    # The header names keep their surrounding double quotes from the CSV,
    # so they can be used as JSON keys directly.
    headers = [h.strip() for h in f.readline().split(",")]
    for line in f:
        data = []
        # Split the row on the quote-comma-quote between columns and strip
        # the remaining outer quotes from each field.
        for i, field in enumerate(line.rstrip().split('","')):
            data.append(headers[i] + ":" + re.sub('^"|"$', "", field))

        line = "{" + ','.join(data) + "}"
        line = pattern1.sub(r'"\1":', line)
        line = pattern2.sub(r':"\1"', line)
        print(line)

The output on your input data is

{"timestamp":1.520640777666096E9,"stats":[{"time":15.0, "mean":45.23, "var":0.31}, {"time":19.0, "mean":17.315, "var":2.612}],"dets":[{"coords":[2.4, 1.7, 0.3], "header":{"frame":1, "seq":1, "name":"hello"}}],"pos":{"x":5.0, "y":1.4, "theta":0.04}
}

Which is valid JSON.
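
Since the goal was to pull the results into Python, the converted lines can then be loaded straight back in. A minimal sketch, assuming the script's output was redirected to a file named test_converted.json (the name is just an example):

import json

# Assumes the converter's output was redirected to "test_converted.json",
# one JSON object per line; the file name is only an example.
records = []
with open("test_converted.json") as f:
    for line in f:
        line = line.strip()
        if line:
            records.append(json.loads(line))

# The nested Athena columns are now ordinary dicts and lists.
print(records[0]["pos"]["theta"])
print([s["mean"] for s in records[0]["stats"]])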


answered by Tarun Lalwani


The Python code from @tarun almost got me there, but I had to modify it in several ways due to my data. In particular, I have:

  • JSON structures saved in Athena as strings
  • Strings that contain multiple words and therefore need to be wrapped in double quotes; some of them contain "[]" and "{}" symbols.

Here is the code that worked for me; hopefully it will be useful for others:

#!/usr/bin/env python
import io
import re
import sys

# Quote struct keys: "{key=" or ", key=" becomes "key":
pattern1 = re.compile(r'(?<=[{ ])([a-z]+)=', re.I)
# Quote bare string values, e.g. "name":hello becomes "name":"hello"
pattern2 = re.compile(r':([a-z][^,{}. [\]]+)', re.I)
# Escaped quotes inside values (not used below)
pattern3 = re.compile(r'\\"', re.I)

with io.open(sys.argv[1]) as f:
    # The header names keep their surrounding double quotes from the CSV,
    # so they can be used as JSON keys directly.
    headers = [h.strip() for h in f.readline().split(",")]
    print(headers)
    for line in f:
        orig_line = line
        # Protect the doubled quotes ("") that mark a quote inside a string
        # field; "#" is assumed not to appear anywhere in the data itself.
        line = re.sub('""', "#", orig_line)

        data = []
        for i, field in enumerate(line.split('","')):
            item = re.sub('^"|"$', "", field.rstrip())
            if (item.startswith("{") and item.endswith("}")) or \
               (item.startswith("[") and item.endswith("]")):
                # Struct or array column: leave it to the regex passes below.
                data.append(headers[i] + ":" + item)
            else:
                # Plain column: wrap the whole value in double quotes
                # (numbers end up as JSON strings too).
                data.append(headers[i] + ": \"" + item + "\"")

        line = "{" + ','.join(data) + "}"
        line = pattern1.sub(r'"\1":', line)
        line = pattern2.sub(r':"\1"', line)

        # Put the protected quotes back now that the fields have been
        # converted.
        line = re.sub("#", '"', line)

        print(line)
answered by XAnguera