
Efficient method of interpolating a heat map

I'm writing a program that creates a visual representation of temperatures. There are 200 points of data laid out in a grid and I'm using interpolation to fill in the pixels between these points.

I've written a program that produces the data I want using inverse distance weighting (the modified Shepard's method in this case), giving an image like the one shown below:

[heatmap image]

With the irrelevant parts cut out (such as the image library code), the code that creates this is shown below:

Firstly, all of the distances and summed distances from each pixel to each tube are calculated (because these are invariant). I'm not particularly worried about the time this part takes because it only runs once, but I'm including the code so you can see how the values are stored.

#set_tubes creates an array of tubes (which is the data I'm working on)
#each tube has an x position in pixels, a y position in pixels and a temperature
#np is numpy

self.set_tubes()
self.dists = []
self.summed_dists = []
for x in range(1, BASE_WIDTH-1):
    self.summed_dists.append([])
    self.dists.append([])
    for y in range(1, BASE_HEIGHT-1):
        self.summed_dists[x-1].append(0)
        self.dists[x-1].append([])
        for row in range(10):
            self.dists[x-1][y-1].append([])
            for tube in range(20):
                dist = np.sqrt((x - self.tubes[row][tube].xPos)**2 + (y - self.tubes[row][tube].yPos)**2) + 0.1
                #The -3 in the next two lines is simply a weighting factor (the inverse-distance exponent)
                self.dists[x-1][y-1][row].append(dist**(-3))
                self.summed_dists[x-1][y-1] = self.summed_dists[x-1][y-1] + dist**(-3)
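
Written out, each pixel value is just the inverse-distance-weighted average of the 200 tube temperatures, which is what the precomputed dists and summed_dists implement:

T(x, y) = \frac{\sum_i d_i^{-3} T_i}{\sum_i d_i^{-3}},
\qquad d_i = \sqrt{(x - x_i)^2 + (y - y_i)^2} + 0.1

where (x_i, y_i) and T_i are the position (in pixels) and temperature of tube i.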

Then the interpolation itself is done. This runs repeatedly as the temperatures change, so it is the part where the time taken matters.

def other_proc_calc_temp(ret_queue, dists, tubes, summed_dists):
    heat_values = list()
    for x in range(BASE_WIDTH):
        heat_values.append([])
        for y in range(BASE_HEIGHT):
            summed = 0
            for row in range(10):
                for tube in range(20):
                    dist = dists[x][y][row][tube]
                    temp = tubes[row][tube].temp
                    summed = summed + temp * dist / summed_dists[x-1][y-1]
            heat_values[x].append(summed)

My problem is speed: for a 200*200 pixel image, the second part of the code takes approximately 30 seconds to run on my computer (that's 200*200 pixels, each summing over all 200 tubes, so around 8 million inner iterations in pure Python). Is there a faster way to get the same or a similar effect, or is there some glaring inefficiency in my code?

I have tried bi-linear and bi-cubic interpolation but wasn't particularly satisfied with the image I got out.

I have also limited the neighbourhood of data points that affect an individual pixel in an attempt to speed it up, which did help, but I think I've pushed that as far as I can without causing obvious lines in the image.
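
What I mean by limiting the neighbourhood is roughly the following, per pixel (a simplified sketch rather than my exact code; max_dist is just an illustrative cutoff in pixels, and the weights are re-normalised over the tubes that remain):

import math

def calc_pixel_temp_with_cutoff(x, y, tubes, max_dist=40.0):
    # Sketch only: inverse distance weighting for one pixel using just the
    # tubes within max_dist pixels, re-normalising over those tubes.
    weighted = 0.0
    weight_sum = 0.0
    for row in range(10):
        for tube in range(20):
            t = tubes[row][tube]
            d = math.sqrt((x - t.xPos)**2 + (y - t.yPos)**2) + 0.1
            if d > max_dist:
                continue  # tube is outside the neighbourhood, so it contributes nothing
            w = d**(-3)
            weighted += w * t.temp
            weight_sum += w
    # If no tube falls inside the cutoff, the pixel is simply left at zero.
    return weighted / weight_sum if weight_sum else 0.0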

Thanks for any help you can give.

asked Nov 14 '22 by MDT
1 Answer

There is one change that might be an improvement:

Try moving the dists[x][y] and tubes[row] lookups outside of the innermost loop. This might save several index lookups per inner iteration (it depends upon how clever the Python interpreter is):

def other_proc_calc_temp(ret_queue, dists, tubes, summed_dists):
    heat_values = list()
    for x in range(BASE_WIDTH):
        heat_values.append([])
        for y in range(BASE_HEIGHT):
            outer_dist = dists[x][y]
            summed = 0
            for row in range(10):
                inner_dist = outer_dist[row]
                inner_tubes = tubes[row]
                for tube in range(20):
                    dist = inner_dist[tube]
                    temp = inner_tubes[tube].temp
                    summed = summed + temp * dist / summed_dists[x-1][y-1]
            heat_values[x].append(summed)

If the Python interpreter is clever enough to know that these values haven't changed, this just makes the code harder to read. But if the interpreter recalculates all of those index lookups over and over again, the savings can add up.
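
The same trick could be taken one step further: summed_dists[x-1][y-1] doesn't depend on row or tube either, so it could be looked up once per pixel instead of once per tube. A sketch along the same lines (not benchmarked):

for y in range(BASE_HEIGHT):
    outer_dist = dists[x][y]
    pixel_summed_dist = summed_dists[x-1][y-1]  # hoisted: constant for this pixel
    summed = 0
    for row in range(10):
        inner_dist = outer_dist[row]
        inner_tubes = tubes[row]
        for tube in range(20):
            summed = summed + inner_tubes[tube].temp * inner_dist[tube] / pixel_summed_dist
    heat_values[x].append(summed)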

I used to have a paragraph here about setting the size of the arrays up front rather than growing them with .append(). gnibbler says .append() is an amortized O(1) operation, which means there is probably little optimization available there. See the edit history if you're curious what I wrote.
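
For reference, pre-sizing heat_values would look something like the sketch below (assuming BASE_WIDTH and BASE_HEIGHT as in the question), though as noted it is unlikely to buy much:

# Sketch: pre-size heat_values instead of growing it with .append();
# inside the loops you would then assign heat_values[x][y] = summed.
heat_values = [[0.0] * BASE_HEIGHT for _ in range(BASE_WIDTH)]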

answered Dec 21 '22 by sarnold