I am very new to bash scripting. I attempted to write a script that merges several json files. For example:
File 1:
{ "file1": { "foo": "bar" } }
File 2:
{ "file1": { "lorem": "ipsum" } }
Merged File:
{ "file1": { "foo": "bar" }, "file2": { "lorem": "ipsum" } }
This is what I came up with:
awk 'BEGIN{print "{"} FNR > 1 && last_file == FILENAME {print line} FNR == 1 {line = ""} FNR==1 && FNR != NR {printf ","} FNR > 1 {line = $0} {last_file = FILENAME} END{print "}"}' json_files/* > json_files/all_merged.json
It works but I feel there is a better way of doing this. Any ideas?
Handling JSON with awk is not a terribly good idea. Arbitrary changes in meaningless whitespace will break your code. Instead, use jq; it is made for this sort of thing. To combine two objects, use the * operator, i.e., for two files:
jq -s '.[0] * .[1]' file1.json file2.json
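With the two sample files from the question, this prints the merged object you were after (shown here as jq pretty-prints it by default):

{
  "file1": {
    "foo": "bar"
  },
  "file2": {
    "lorem": "ipsum"
  }
}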
And for arbitrarily many files, use reduce to apply it sequentially to all:
jq -s 'reduce .[] as $item ({}; . * $item)' json_files/*
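If you would rather not load every file into memory at once, jq 1.5 and later can express the same fold with inputs, which streams the documents one at a time; a sketch of that variant:

jq -n 'reduce inputs as $item ({}; . * $item)' json_files/*

Here -n stops jq from consuming the first document implicitly, so inputs sees all of them.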
The -s switch makes jq read the contents of the JSON files into a large array before handling them.
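You can see that intermediate array by running -s with the identity filter alone; jq -s '.' file1.json file2.json prints:

[
  {
    "file1": {
      "foo": "bar"
    }
  },
  {
    "file2": {
      "lorem": "ipsum"
    }
  }
]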