I am trying to load a JSON file into a pandas DataFrame and found that it contains some nested JSON. Below is a sample record:
{'events': [{'id': 142896214,
'playerId': 37831,
'teamId': 3157,
'matchId': 2214569,
'matchPeriod': '1H',
'eventSec': 0.8935539999999946,
'eventId': 8,
'eventName': 'Pass',
'subEventId': 85,
'subEventName': 'Simple pass',
'positions': [{'x': 51, 'y': 49}, {'x': 40, 'y': 53}],
'tags': [{'id': 1801, 'tag': {'label': 'accurate'}}]}]}
I used the following code to load the JSON into a DataFrame:
import json
import pandas as pd

with open('EVENTS.json') as f:
    jsonstr = json.load(f)

df = pd.io.json.json_normalize(jsonstr['events'])
The output of df.head() showed two nested columns, positions and tags.
I tried using the following code to flatten it:
Position_data = json_normalize(data=jsonstr['events'], record_path='positions', meta=['x', 'y', 'x', 'y'])
It showed me the following error:
KeyError: "Try running with errors='ignore' as key 'x' is not always present"
Can you advise me how to flatten positions and tags (the columns that contain nested data)?
Thanks, Zep
If you are looking for a more general way to unfold multiple hierarchies from a JSON object, you can use recursion and a list comprehension to reshape your data. One alternative is presented below:
def flatten_json(nested_json, exclude=['']):
    """Flatten json object with nested keys into a single level.
        Args:
            nested_json: A nested json object.
            exclude: Keys to exclude from output.
        Returns:
            The flattened json object if successful, None otherwise.
    """
    out = {}

    def flatten(x, name='', exclude=exclude):
        if type(x) is dict:
            for a in x:
                if a not in exclude:
                    flatten(x[a], name + a + '_')
        elif type(x) is list:
            i = 0
            for a in x:
                flatten(a, name + str(i) + '_')
                i += 1
        else:
            out[name[:-1]] = x

    flatten(nested_json)
    return out
Then you can apply it to your data, regardless of the nesting depth:
New sample data
this_dict = {'events': [
{'id': 142896214,
'playerId': 37831,
'teamId': 3157,
'matchId': 2214569,
'matchPeriod': '1H',
'eventSec': 0.8935539999999946,
'eventId': 8,
'eventName': 'Pass',
'subEventId': 85,
'subEventName': 'Simple pass',
'positions': [{'x': 51, 'y': 49}, {'x': 40, 'y': 53}],
'tags': [{'id': 1801, 'tag': {'label': 'accurate'}}]},
{'id': 142896214,
'playerId': 37831,
'teamId': 3157,
'matchId': 2214569,
'matchPeriod': '1H',
'eventSec': 0.8935539999999946,
'eventId': 8,
'eventName': 'Pass',
'subEventId': 85,
'subEventName': 'Simple pass',
'positions': [{'x': 51, 'y': 49}, {'x': 40, 'y': 53},{'x': 51, 'y': 49}],
'tags': [{'id': 1801, 'tag': {'label': 'accurate'}}]}
]}
Usage
pd.DataFrame([flatten_json(x) for x in this_dict['events']])
Out[1]:
id playerId teamId matchId matchPeriod eventSec eventId \
0 142896214 37831 3157 2214569 1H 0.893554 8
1 142896214 37831 3157 2214569 1H 0.893554 8
eventName subEventId subEventName positions_0_x positions_0_y \
0 Pass 85 Simple pass 51 49
1 Pass 85 Simple pass 51 49
positions_1_x positions_1_y tags_0_id tags_0_tag_label positions_2_x \
0 40 53 1801 accurate NaN
1 40 53 1801 accurate 51.0
positions_2_y
0 NaN
1 49.0
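One thing to keep in mind with this index-based naming is that every list element becomes its own set of columns, so events with long positions lists widen the frame quickly. A minimal sketch (my own hypothetical event, reusing flatten_json from above):
wide_event = {'id': 1,
              'positions': [{'x': i, 'y': i + 1} for i in range(10)]}  # hypothetical event with 10 positions
flat = flatten_json(wide_event)
print(len([k for k in flat if k.startswith('positions_')]))
# 20 columns: positions_0_x, positions_0_y, ..., positions_9_x, positions_9_y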
Note that this flatten_json code is not mine; I have seen it here and here, without much certainty of the original source.
flatten_json can be a great option, depending on the structure of the JSON and how the structure should be flattened. For this JSON, flatten_json works, but if you want each entry in positions to have a separate row, then pandas.json_normalize is the better option. The issue with flatten_json is that, if there are many positions, the number of columns for each event in events can become very large. The following shows how to flatten the data without flatten_json, given the dict in events below:
data = {'events': [{'id': 142896214,
'playerId': 37831,
'teamId': 3157,
'matchId': 2214569,
'matchPeriod': '1H',
'eventSec': 0.8935539999999946,
'eventId': 8,
'eventName': 'Pass',
'subEventId': 85,
'subEventName': 'Simple pass',
'positions': [{'x': 51, 'y': 49}, {'x': 40, 'y': 53}],
'tags': [{'id': 1801, 'tag': {'label': 'accurate'}}]}]}
Create the DataFrame
df = pd.DataFrame.from_dict(data)
df = df['events'].apply(pd.Series)
Flatten positions with pd.Series:
df_p = df['positions'].apply(pd.Series)
df_p_0 = df_p[0].apply(pd.Series)
df_p_1 = df_p[1].apply(pd.Series)
Rename the positions[0] & positions[1] columns:
df_p_0.columns = ['pos_0_x', 'pos_0_y']
df_p_1.columns = ['pos_1_x', 'pos_1_y']
Flatten tags with pd.Series:
df_t = df.tags.apply(pd.Series)
df_t = df_t[0].apply(pd.Series)
df_t_t = df_t.tag.apply(pd.Series)
Rename id & label:
df_t = df_t.rename(columns={'id': 'tags_id'})
df_t_t.columns = ['tags_tag_label']
Combine them all with pd.concat:
df_new = pd.concat([df, df_p_0, df_p_1, df_t.tags_id, df_t_t], axis=1)
Drop the old columns:
df_new = df_new.drop(['positions', 'tags'], axis=1)
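As a quick sanity check, the combined frame should now hold the flattened position and tag columns alongside the original event fields (a sketch, assuming the single-event data dict above):
print(df_new.columns.tolist())
# Expected, roughly:
# ['id', 'playerId', 'teamId', 'matchId', 'matchPeriod', 'eventSec', 'eventId',
#  'eventName', 'subEventId', 'subEventName', 'pos_0_x', 'pos_0_y',
#  'pos_1_x', 'pos_1_y', 'tags_id', 'tags_tag_label']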
Alternatively, use pd.json_normalize and explode so that each entry in positions gets its own row:
# normalize events
df = pd.json_normalize(data, 'events')
# explode all columns with lists of dicts
df = df.apply(lambda x: x.explode()).reset_index(drop=True)
# list of columns with dicts
cols_to_normalize = ['positions', 'tags']
# if keys that will become column names overlap with existing column names,
# add the current column name as a prefix
normalized = list()
for col in cols_to_normalize:
d = pd.json_normalize(df[col], sep='_')
d.columns = [f'{col}_{v}' for v in d.columns]
normalized.append(d.copy())
# combine df with the normalized columns
df = pd.concat([df] + normalized, axis=1).drop(columns=cols_to_normalize)
# display(df)
id playerId teamId matchId matchPeriod eventSec eventId eventName subEventId subEventName positions_x positions_y tags_id tags_tag_label
0 142896214 37831 3157 2214569 1H 0.893554 8 Pass 85 Simple pass 51 49 1801 accurate
1 142896214 37831 3157 2214569 1H 0.893554 8 Pass 85 Simple pass 40 53 1801 accurate
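As a side note on the KeyError in the question: with record_path='positions', the meta argument should list keys of each event (for example 'id' or 'eventName'), not 'x' and 'y', which live inside the position records themselves. A hedged sketch of such a call, using the data dict above (the chosen meta keys are just an example):
pos_df = pd.json_normalize(data['events'],
                           record_path='positions',
                           meta=['id', 'eventName'])  # event-level keys repeated on each position row
print(pos_df)
# Roughly:
#     x   y         id eventName
# 0  51  49  142896214      Pass
# 1  40  53  142896214      Pass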