I have a boatload of images in an HDF5 file that I would like to load and analyse. Each image is 1920x1920 uint16, and loading all of them into memory crashes the computer. I have been told that others work around this by slicing the data: e.g. if the data is 1920x1920x100 (100 images), they read in the first 80 rows of each image, analyse that slice, then move on to the next slice. This I can do without problems, but when I try to create a dataset in the HDF5 file, I get a TypeError: Can't convert element 0 ... to hsize_t
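For reference, the slice-wise reading itself works fine for me; it looks roughly like this (the dataset name 'images' and the 80-row slab are just placeholders for my real setup):
import h5py

with h5py.File('h5file.hdf5', 'r') as f:
    dset = f['images']  # e.g. shape (1920, 1920, 100), dtype uint16
    rows = 80
    for start in range(0, dset.shape[0], rows):
        block = dset[start:start + rows, :, :]  # only this slab is read into memory
        # ... analyse block ...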
I can recreate the problem with this very simplified code:
import h5py
import numpy as np

with h5py.File('h5file.hdf5', 'w') as f:
    data = np.random.randint(100, size=(15, 15, 20))
    data_set = f.create_dataset('data', data, dtype='uint16')
which gives the output:
TypeError: Can't convert element 0 ([[29 50 75...4 50 28 36 13 72]]) to hsize_t
I have also tried omitting the "data_set =" and the "dtype='uint16'", but I still get the same error. The code is then:
with h5py.File('h5file.hdf5', 'w') as f:
    data = np.random.randint(100, size=(15, 15, 20))
    f.create_dataset('data', data)
Can anyone give me any hints as to what the problem is? Cheers!
The second parameter of create_dataset is the shape parameter (see the docs), but you are passing the entire array. If you want to initialize the dataset with an existing array, you must specify this with the data keyword, like this:
data_set = f.create_dataset('data', data=data, dtype="uint16")
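Since the whole point is to avoid holding everything in memory, you can also create an empty dataset of the full shape and fill it one image at a time. A minimal sketch, with the shape, chunk layout and the random stand-in image chosen purely as examples:
import h5py
import numpy as np

with h5py.File('h5file.hdf5', 'w') as f:
    # Allocate the full dataset on disk without loading all images into memory.
    dset = f.create_dataset('data', shape=(1920, 1920, 100), dtype='uint16',
                            chunks=(80, 1920, 1))
    for i in range(dset.shape[2]):
        # Stand-in for loading/producing one real image.
        image = np.random.randint(100, size=(1920, 1920), dtype='uint16')
        dset[:, :, i] = image  # writes only this image to disk
The chunks argument is optional, but choosing it to match how you later slice (e.g. 80-row slabs) tends to make the slice-wise reads cheaper.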