You can read JSON strings with pandas read_json(). This works for URLs, files, compressed files and anything else that is in JSON format.
pandas also lets you convert a list of lists into a DataFrame and specify the column names separately. A JSON parser, which transforms JSON text into another representation, must accept all texts that conform to the JSON grammar; it may also accept non-JSON forms or extensions.
pandas is a Python library that can be used to convert JSON (a string or a file) to a CSV file: all you need to do is read the JSON into a pandas DataFrame and then write that DataFrame to CSV.
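For example, a minimal sketch of that JSON-to-CSV round trip (the file names here are just placeholders):

import pandas as pd

# Read JSON (a file path, URL or file-like object) into a DataFrame,
# then write the DataFrame out as CSV.
df = pd.read_json('input.json')
df.to_csv('output.csv', index=False)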
I found a quick and easy solution to what I wanted using json_normalize(), included in pandas 1.0.1.
# Note: urllib2 is Python 2 only; the requests-based answers below cover Python 3.
from urllib2 import Request, urlopen
import json
import pandas as pd

# Two lat,lng pairs for the Google Maps Elevation API.
path1 = '42.974049,-81.205203|42.974298,-81.195755'
request = Request('http://maps.googleapis.com/maps/api/elevation/json?locations=' + path1 + '&sensor=false')
response = urlopen(request)
elevations = response.read()

# Parse the JSON response and flatten the nested 'results' records.
data = json.loads(elevations)
df = pd.json_normalize(data['results'])
This gives a nice flattened dataframe with the json data that I got from the Google Maps API.
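If it helps to see what the flattening produces: with the usual Elevation API response shape, json_normalize's default separator turns the nested location dict into dotted column names (the exact columns are an assumption about the API response, so adjust as needed).

# Columns below assume the standard Elevation API result fields;
# json_normalize's default sep='.' flattens the nested 'location' dict.
print(df[['elevation', 'location.lat', 'location.lng']])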
Check this snippet out.
# reading the JSON data using json.load()
import json
import pandas as pd

file = 'data.json'
with open(file) as train_file:
    dict_train = json.load(train_file)

# converting the JSON dataset from a dictionary to a DataFrame
train = pd.DataFrame.from_dict(dict_train, orient='index')
train.reset_index(level=0, inplace=True)
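As a side note, the orient argument controls whether the outer keys become row labels or column labels; a tiny sketch with a made-up dict:

import pandas as pd

# Toy nested dict, purely for illustration.
toy = {'a': {'x': 1, 'y': 2}, 'b': {'x': 3, 'y': 4}}

# orient='index' uses the outer keys ('a', 'b') as row labels...
print(pd.DataFrame.from_dict(toy, orient='index'))

# ...while the default orient='columns' uses them as column labels.
print(pd.DataFrame.from_dict(toy, orient='columns'))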
Hope it helps :)
Optimization of the accepted answer:
The accepted answer has a few problems in practice (it relies on the Python 2-only urllib2), so I want to share my code, which does not rely on urllib2:
import requests
from pandas import json_normalize
url = 'https://www.energidataservice.dk/proxy/api/datastore_search?resource_id=nordpoolmarket&limit=5'
response = requests.get(url)
dictr = response.json()
recs = dictr['result']['records']
df = json_normalize(recs)
print(df)
Output:
_id HourUTC HourDK ... ElbasAveragePriceEUR ElbasMaxPriceEUR ElbasMinPriceEUR
0 264028 2019-01-01T00:00:00+00:00 2019-01-01T01:00:00 ... NaN NaN NaN
1 138428 2017-09-03T15:00:00+00:00 2017-09-03T17:00:00 ... 33.28 33.4 32.0
2 138429 2017-09-03T16:00:00+00:00 2017-09-03T18:00:00 ... 35.20 35.7 34.9
3 138430 2017-09-03T17:00:00+00:00 2017-09-03T19:00:00 ... 37.50 37.8 37.3
4 138431 2017-09-03T18:00:00+00:00 2017-09-03T20:00:00 ... 39.65 42.9 35.3
.. ... ... ... ... ... ... ...
995 139290 2017-10-09T13:00:00+00:00 2017-10-09T15:00:00 ... 38.40 38.4 38.4
996 139291 2017-10-09T14:00:00+00:00 2017-10-09T16:00:00 ... 41.90 44.3 33.9
997 139292 2017-10-09T15:00:00+00:00 2017-10-09T17:00:00 ... 46.26 49.5 41.4
998 139293 2017-10-09T16:00:00+00:00 2017-10-09T18:00:00 ... 56.22 58.5 49.1
999 139294 2017-10-09T17:00:00+00:00 2017-10-09T19:00:00 ... 56.71 65.4 42.2
PS: The API serves Danish electricity prices.
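If you want real timestamps to work with afterwards, plain pandas handles the conversion; a small sketch, assuming the HourUTC/HourDK column names shown in the output above:

import pandas as pd

# Convert the timestamp strings to proper datetimes.
# Column names are taken from the printed output above.
df['HourUTC'] = pd.to_datetime(df['HourUTC'])
df['HourDK'] = pd.to_datetime(df['HourDK'])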
You could first load your JSON data into a Python dictionary:
data = json.loads(elevations)
Then modify the data on the fly:
for result in data['results']:
    result[u'lat'] = result[u'location'][u'lat']
    result[u'lng'] = result[u'location'][u'lng']
    del result[u'location']
Rebuild the JSON string:
elevations = json.dumps(data)
Finally:
pd.read_json(elevations)
You can also probably avoid dumping the data back to a string; I assume pandas can create a DataFrame directly from a dictionary (I haven't used it in a long time :p).
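That shortcut does work; here is a minimal sketch, assuming each entry of data['results'] is a flat dict after the loop above:

import pandas as pd

# A list of flat dicts can go straight into the DataFrame constructor,
# skipping the json.dumps / read_json round trip.
df = pd.DataFrame(data['results'])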
Just a new version of the accepted answer, since Python 3.x does not support urllib2:
import requests
from pandas.io.json import json_normalize

path1 = '42.974049,-81.205203|42.974298,-81.195755'
response = requests.get('http://maps.googleapis.com/maps/api/elevation/json?locations=' + path1 + '&sensor=false')

# response.json() already returns a parsed dict, so json.loads() is not needed
data = response.json()
json_normalize(data['results'])
Here is a small utility class that converts JSON to a DataFrame and back. Hope you find it helpful.
# -*- coding: utf-8 -*-
from pandas.io.json import json_normalize


class DFConverter:

    # Converts the input JSON to a DataFrame
    def convertToDF(self, dfJSON):
        return json_normalize(dfJSON)

    # Converts the input DataFrame to JSON
    def convertToJSON(self, df):
        resultJSON = df.to_json(orient='records')
        return resultJSON
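A quick round trip with the class, using a couple of made-up records for illustration:

converter = DFConverter()

# Made-up records, purely for demonstration.
records = [{'name': 'a', 'value': 1}, {'name': 'b', 'value': 2}]

df = converter.convertToDF(records)
print(converter.convertToJSON(df))  # back to a JSON string of records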
The problem is that you have several columns in the data frame that contain dicts with smaller dicts inside them. Useful JSON is often heavily nested. I have been writing small functions that pull the info I want out into a new column, so that I have it in the format I want to use.
for row in range(len(data)):
    # First I load the dict (one at a time)
    n = data.loc[row, 'dict_column']
    # Now I make a new column that pulls out the data that I want.
    data.loc[row, 'new_column'] = n.get('key')
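If you prefer to avoid the explicit loop, here is a vectorized sketch, assuming 'dict_column' really holds dicts and reusing the hypothetical column names from the snippet above:

import pandas as pd

# Pull one key out of every dict without looping over rows...
data['new_column'] = data['dict_column'].apply(lambda d: d.get('key'))

# ...or expand all keys of the nested dicts into their own columns.
expanded = pd.json_normalize(data['dict_column'].tolist())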