
Problem renaming all HDF5 datasets in group for large hdf5 files

I am having a problem renaming datasets in hdf5. The process is EXTREMELY slow. I read some documentation stating that dataset names are merely links to the data, so an acceptable way to rename is:

group['new_name'] = group['old_name']
del group['old_name']

But this is so slow (only 5% complete after running overnight) that it makes me think my process is entirely wrong.

I'm using python h5py, and here's my slow code:

import h5py
from tqdm import tqdm

# Open file in read/write mode so links can be added and deleted
with h5py.File('test.hdf5', 'r+') as f:

    # Get all top level groups
    top_keys = list(f.keys())

    # Iterate over each group
    for top_key in top_keys:
        group = f[top_key]
        tot_digits = len(group)

        # Rename all datasets in the group (pad names with zeros)
        for key in tqdm(group.keys()):
            new_key = str(key)
            while len(new_key)<tot_digits:
                new_key = '0'+str(new_key)
            group[new_key] = group[key]
            del group[key]

Per @jpp's suggestion, I also tried replacing the last two lines with group.move:

group.move(key, new_key)

But this method was equally slow. I have several groups with the same number of datasets, but each group holds datasets of different sizes. The group with the largest datasets (most bytes) seems to rename the slowest.

Certainly there is a way to do this quickly. Is the dataset name just a symbolic link? Or does renaming inherently cause the entire dataset to be rewritten? How should I go about renaming many datasets in an HDF5 file?

asked Oct 31 '18 by Richard


People also ask

Why are HDF5 files so large?

This is probably due to your chunk layout: the smaller the chunks, the more per-chunk overhead the HDF5 file accumulates. Try to find a balance between chunk sizes that suit your access pattern and the size overhead they introduce in the file.
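As an illustration (the file name, array shape, and chunk sizes below are made up for the example), h5py lets you pick the chunk layout when creating a dataset, so you can compare the resulting file sizes directly:

import h5py
import numpy as np

data = np.random.rand(10000, 1000)

with h5py.File('chunked.hdf5', 'w') as f:
    # Many tiny chunks add per-chunk metadata overhead to the file
    f.create_dataset('small_chunks', data=data, chunks=(10, 10))
    # Fewer, larger chunks keep that overhead modest
    f.create_dataset('large_chunks', data=data, chunks=(1000, 1000))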

Is HDF5 compressed?

The HDF5 file format and library provide flexibility to use a variety of data compression filters on individual datasets in an HDF5 file. Compressed data is stored in chunks and automatically uncompressed by the library and filter plugin when a chunk is accessed.
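For example, a minimal sketch (file and dataset names are illustrative) that writes a gzip-compressed dataset with h5py and reads it back, with decompression handled transparently by the library:

import h5py
import numpy as np

data = np.zeros((1000, 1000))  # highly compressible toy data

with h5py.File('compressed.hdf5', 'w') as f:
    # gzip is available in every HDF5 build; levels run 0-9
    f.create_dataset('data', data=data, compression='gzip', compression_opts=4)

with h5py.File('compressed.hdf5', 'r') as f:
    arr = f['data'][:]  # chunks are uncompressed on access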

Can HDF5 store strings?

You can use string_dtype() to explicitly specify any HDF5 string datatype.
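A short sketch (names are illustrative) storing variable-length UTF-8 strings with h5py:

import h5py

with h5py.File('strings.hdf5', 'w') as f:
    dt = h5py.string_dtype(encoding='utf-8')  # variable-length UTF-8
    ds = f.create_dataset('names', shape=(3,), dtype=dt)
    ds[:] = ['alpha', 'beta', 'gamma']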

How does HDF5 store data?

HDF5 Datasets. A dataset is stored in a file in two parts: a header and a data array. The header contains information that is needed to interpret the array portion of the dataset, as well as metadata (or pointers to metadata) that describes or annotates the dataset.


1 Answer

One possible culprit, at least if you have a large number of datasets under your top-level keys, is that you are creating the new name in a very inefficient way. Instead of

while len(new_key)<tot_digits:
    new_key = '0'+str(new_key)

You should generate the new key like this:

if len(new_key)<tot_digits:
    new_key = (tot_digits-len(new_key))*'0' + new_key

This way you don't create a new string object for every extra digit you need to add.
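As an aside, Python's built-in str.zfill does the same zero-padding in a single call, so you could equally write:

new_key = str(key).zfill(tot_digits)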

It is also possible, although I can't confirm this, that group.keys() returns a live view that gets repopulated with the new key names you add, since you modify the group while iterating over its keys. A standard Python dict would throw a RuntimeError in that situation, but it's unclear whether h5py would do the same. To be sure you don't hit that problem, you can simply build a list of the keys up-front:

for key in tqdm(list(group.keys())):
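Putting both changes together, a minimal sketch of the corrected loop (same structure as the question's code, using group.move as @jpp suggested; the file name is the question's placeholder):

import h5py
from tqdm import tqdm

with h5py.File('test.hdf5', 'r+') as f:
    for top_key in list(f.keys()):
        group = f[top_key]
        tot_digits = len(group)

        # Snapshot the keys so the renames can't affect the iteration
        for key in tqdm(list(group.keys())):
            new_key = str(key).zfill(tot_digits)  # pad once, no loop
            if new_key != key:
                group.move(key, new_key)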
answered Oct 14 '22 by ilmarinen