I have a numpy array of shape (4, X, Y), where the first dimension holds an (R, G, B, A) quadruplet.
My aim is to map each of the X*Y RGBA quadruplets to one of X*Y floating-point values, using a dictionary that matches quadruplets to values.
My current code is as follows:
codeTable = {
    (255, 255, 255, 127): 5.5,
    (128, 128, 128, 255): 6.5,
    (0,   0,   0,   0  ): 7.5,
}

for i in range(0, rows):
    for j in range(0, cols):
        new_data[i, j] = codeTable.get(tuple(data[:, i, j]), -9999)
where data is a numpy array of shape (4, rows, cols), and new_data is of shape (rows, cols).
The code is working fine, but takes quite a long time. How should I optimize that piece of code?
Here is a full example:
import numpy
codeTable = {
    (253, 254, 255, 127): 5.5,
    (128, 129, 130, 255): 6.5,
    (0,   0,   0,   0  ): 7.5,
}
# test data
rows = 3
cols = 2
data = numpy.array([
    [[253,   0], [128,   0], [128,   0]],
    [[254,   0], [129, 144], [129,   0]],
    [[255,   0], [130, 243], [130,   5]],
    [[127,   0], [255, 120], [255,   5]],
])
new_data = numpy.zeros((rows,cols), numpy.float32)
for i in range(0, rows):
    for j in range(0, cols):
        new_data[i, j] = codeTable.get(tuple(data[:, i, j]), -9999)
# expected result for `new_data`:
# array([[ 5.50000000e+00,  7.50000000e+00],
#        [ 6.50000000e+00, -9.99900000e+03],
#        [ 6.50000000e+00, -9.99900000e+03]], dtype=float32)
Here's an approach that returns your expected result, but with such a small amount of data it's hard to know if this will be faster for you. Since I've avoided the double for loop, however, I imagine you'll see a pretty decent speedup.
import numpy
import pandas as pd
codeTable = {
    (253, 254, 255, 127): 5.5,
    (128, 129, 130, 255): 6.5,
    (0,   0,   0,   0  ): 7.5,
}

# test data
rows = 3
cols = 2
data = numpy.array([
    [[253,   0], [128,   0], [128,   0]],
    [[254,   0], [129, 144], [129,   0]],
    [[255,   0], [130, 243], [130,   5]],
    [[127,   0], [255, 120], [255,   5]],
])

new_data = numpy.zeros((rows, cols), numpy.float32)
for i in range(0, rows):
    for j in range(0, cols):
        new_data[i, j] = codeTable.get(tuple(data[:, i, j]), -9999)
def create_output(data):
    # Reshape both data sources into a saner, row-per-pixel form
    reshaped_data = data.reshape((4, -1))
    df = pd.DataFrame(reshaped_data).T
    reshaped_codeTable = []
    for key in codeTable.keys():
        reshaped = list(key) + [codeTable[key]]
        reshaped_codeTable.append(reshaped)
    ct = pd.DataFrame(reshaped_codeTable)
    # Merge on the four pixel columns; pixels with no match in the
    # code table come back as NaN, which we replace with -9999
    result = df.merge(ct, how='left')
    newest_data = result[4].fillna(-9999)
    # Reshape back to (rows, cols); go through .values since pandas
    # Series no longer support .reshape directly
    output = newest_data.values.reshape(rows, cols)
    return output
output = create_output(data)
print(output)
# array([[ 5.50000000e+00,  7.50000000e+00],
#        [ 6.50000000e+00, -9.99900000e+03],
#        [ 6.50000000e+00, -9.99900000e+03]])
print(numpy.array_equal(new_data, output))
# True
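For comparison, the same lookup can also be vectorized in plain NumPy by broadcasting every pixel against every key at once. This is a minimal sketch (assuming the code table stays small, since it does one comparison per pixel per key), reusing codeTable, data, rows, and cols from the setup above:
import numpy as np

keys = np.array(list(codeTable.keys()))      # shape (n_keys, 4)
values = np.array(list(codeTable.values()))  # shape (n_keys,)

pixels = data.reshape(4, -1).T               # shape (rows*cols, 4): one RGBA row per pixel
# matches[i, k] is True when pixel i equals key k in all four channels
matches = (pixels[:, None, :] == keys[None, :, :]).all(axis=2)
hit = matches.any(axis=1)                    # does pixel i match any key at all?
idx = matches.argmax(axis=1)                 # index of the first matching key
vectorized = np.where(hit, values[idx], -9999).reshape(rows, cols).astype(np.float32)

print(np.array_equal(new_data, vectorized))
# True
This avoids both the Python loop and the pandas dependency, at the cost of materializing a (rows*cols, n_keys, 4) boolean comparison.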
The numpy_indexed package (disclaimer: I am its author) contains a vectorized nd-array capable variant of list.index, which can be used to solve your problem efficiently and concisely:
import numpy as np
import numpy_indexed as npi

map_keys = np.array(list(codeTable.keys()))
map_values = np.array(list(codeTable.values()))
# Look up each pixel's RGBA row among the keys; unmatched rows come back masked
indices = npi.indices(map_keys, data.reshape(4, -1).T, missing='mask')
remapped = np.where(indices.mask, -9999, map_values[indices.data]).reshape(data.shape[1:])