I have an h5 file containing multiple groups and datasets. Each dataset has associated attributes. I want to find/filter the datasets in this h5 file based upon the respective attribute associated with it.
Example:
dataset1 = cloudy (attribute)
dataset2 = rainy (attribute)
dataset3 = cloudy (attribute)
I want to find the datasets whose weather attribute/metadata is cloudy.
What would be the simplest, most Pythonic approach to do this?
There are two ways to access HDF5 data from Python: h5py and PyTables. Both are good, with different capabilities:
When working with HDF5 data, it is important to understand the HDF5 data model. That goes beyond the scope of this post. For simplicity's sake, think of the data model as a file system, where "groups" and "datasets" are like "folders" and "files". Both can have attributes. "Node" is the term used to refer to either a "group" or a "dataset".
@Kiran Ramachandra outlined a method with h5py. Since you tagged your post with pytables, the same process with PyTables is outlined below.
Note: Kiran's example assumes datasets 1, 2 and 3 are all at the root level. You said you also have groups, and those groups likely contain some datasets of their own. You can use the HDFView utility to inspect the data model and your data.
import tables as tb
h5f = tb.open_file('a.h5')
This gives you a file object you use to access additional objects (groups or datasets).
h5f.walk_nodes()
This returns an iterator over all nodes and subnodes, giving you the complete HDF5 data structure (remember, a "node" can be either a group or a dataset). You can list all nodes and their types with:
for anode in h5f.walk_nodes():
    print(anode)
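As an aside, walk_nodes() also accepts an optional classname argument, so you can restrict the walk to datasets (leaves) and skip the groups entirely. A minimal sketch, with a throwaway in-memory file whose names are made up for illustration:

```python
import tables as tb

# Build a small in-memory file purely for illustration (names are made up)
h5f = tb.open_file('demo.h5', 'w', driver='H5FD_CORE',
                   driver_core_backing_store=0)
h5f.create_group(h5f.root, 'agroup')
h5f.create_array(h5f.root.agroup, 'dataset1', [1, 2, 3])

# classname='Leaf' restricts the walk to datasets, skipping groups
leaf_paths = [leaf._v_pathname for leaf in h5f.walk_nodes('/', classname='Leaf')]
print(leaf_paths)  # -> ['/agroup/dataset1']
h5f.close()
```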
Use the following to get a (non-recursive) Python list of nodes directly under the root (note that list_nodes() requires a where argument):
h5f.list_nodes('/')
This will fetch the value of attribute cloudy from dataset1 (if it exists):
h5f.root.dataset1._f_getattr('cloudy')
If you want all attributes for a node, use this (shown for dataset1):
ds1_attrs = h5f.root.dataset1._v_attrs._v_attrnames
for attr_name in ds1_attrs:
    print('Attribute', attr_name, '=', h5f.root.dataset1._f_getattr(attr_name))
All of the above references dataset1 at the root level (h5f.root).
If a data set is in a group, you simply add the group name to the path.
For dataset2 in a group named agroup, use:
h5f.root.agroup.dataset2._f_getattr('rainy')
This will fetch the value of attribute rainy from dataset2 in agroup (if it exists).
If you want all attributes for dataset2:
ds2_attrs = h5f.root.agroup.dataset2._v_attrs._v_attrnames
for attr_name in ds2_attrs:
    print('Attribute', attr_name, '=', h5f.root.agroup.dataset2._f_getattr(attr_name))
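Putting the pieces together, the recursive walk plus the attribute lookup gives you the filter you asked for. This is a sketch under the assumption that each dataset carries a weather attribute; the file name, attribute name and values below are made up to mirror your example:

```python
import tables as tb

# Build an in-memory example file mimicking the question (names are made up)
h5f = tb.open_file('weather.h5', 'w', driver='H5FD_CORE',
                   driver_core_backing_store=0)
for name, weather in [('dataset1', 'cloudy'),
                      ('dataset2', 'rainy'),
                      ('dataset3', 'cloudy')]:
    ds = h5f.create_array(h5f.root, name, [0])
    ds._f_setattr('weather', weather)

# Walk all leaves (datasets) and keep those whose 'weather' attribute is 'cloudy';
# getattr with a default skips datasets that have no such attribute
cloudy = [node._v_pathname
          for node in h5f.walk_nodes('/', classname='Leaf')
          if getattr(node._v_attrs, 'weather', None) == 'cloudy']
print(cloudy)
h5f.close()
```

The same loop works unchanged for datasets nested inside groups, since walk_nodes() descends the whole tree.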
For completeness, enclosed below is the code used to create a.h5 for this example. numpy is only required to define the dtype when creating the tables. In general, HDF5 files are interchangeable (so you can open this example with h5py).
import tables as tb
import numpy as np

h5f = tb.open_file('a.h5', 'w')

# Create dataset1 at the root level and assign an attribute
ds_dtype = np.dtype([('a', int), ('b', float)])
dataset1 = h5f.create_table(h5f.root, 'dataset1', description=ds_dtype)
dataset1._f_setattr('cloudy', 'True')

# Create a group at the root level
h5f.create_group(h5f.root, 'agroup')

# Create dataset2 and dataset3 under root.agroup and assign attributes
dataset2 = h5f.create_table(h5f.root.agroup, 'dataset2', description=ds_dtype)
dataset2._f_setattr('rainy', 'True')
dataset3 = h5f.create_table(h5f.root.agroup, 'dataset3', description=ds_dtype)
dataset3._f_setattr('cloudy', 'True')

h5f.close()
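To back up the interchangeability claim, here is a quick round trip: a file written with PyTables and read back with h5py. This assumes both packages are installed; the file name is made up for the sketch, and note that PyTables also writes a few system attributes (CLASS, TITLE, etc.) alongside your own:

```python
import tables as tb
import h5py

# Write a tiny file with PyTables...
h5f = tb.open_file('interop.h5', 'w')
ds = h5f.create_array(h5f.root, 'dataset1', [1, 2, 3])
ds._f_setattr('cloudy', 'True')
h5f.close()

# ...then read it back with h5py; the attribute survives the round trip
with h5py.File('interop.h5', 'r') as f:
    has_cloudy = 'cloudy' in f['dataset1'].attrs
print(has_cloudy)  # prints True
```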