Data binning: irregular polygons to regular mesh

I have thousands of polygons stored in table format (given by their four corner coordinates), each representing a small region of the earth. In addition, each polygon has a data value. The file looks, for example, like this:

lat1,  lat2,  lat3,  lat4,  lon1,   lon2,   lon3,   lon4,   data
57.27, 57.72, 57.68, 58.1,  151.58, 152.06, 150.27, 150.72, 13.45
56.96, 57.41, 57.36, 57.79, 151.24, 151.72, 149.95, 150.39, 56.24
57.33, 57.75, 57.69, 58.1,  150.06, 150.51, 148.82, 149.23, 24.52
56.65, 57.09, 57.05, 57.47, 150.91, 151.38, 149.63, 150.06, 38.24
57.01, 57.44, 57.38, 57.78, 149.74, 150.18, 148.5,  148.91, 84.25
...

Many of the polygons intersect or overlap. Now I would like to create an n×m matrix, ranging from -90° to 90° latitude and -180° to 180° longitude in steps of, for instance, 0.25°×0.25°, to store the (area-weighted) mean data value of all polygons that fall within each pixel.

So, one pixel in the regular mesh shall get the mean value of one or more polygons (or none, if no polygon overlaps with the pixel). Each polygon should contribute to this mean value in proportion to its area fraction within this pixel.
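
In other words (my notation, just restating the requirement): if polygon i has data value v_i and the area of its intersection with a pixel is a_i, the pixel should end up with sum(a_i * v_i) / sum(a_i).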

Basically the regular mesh and the polygons look like this:

[figure: a regular mesh of numbered pixels overlaid by several irregular, partly overlapping polygons]

If you look at pixel 2, you see that two polygons lie inside this pixel. Thus, I have to take the mean data value of both polygons, weighted by their area fractions, and store the result in that pixel of the regular mesh.
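
For instance, with made-up numbers: if the first polygon covers 40% of pixel 2 with data value 10 and the second covers 20% with data value 40, pixel 2 gets (0.4*10 + 0.2*40) / (0.4 + 0.2) = 20.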

I looked around the web and have found no satisfactory approach so far. Since I use Python/NumPy for daily work, I would like to stick with it. Is this possible? The package Shapely looks promising, but I don't know where to begin. Porting everything to a PostGIS database would be an awful amount of effort, and I guess there would be quite a few obstacles in my way.

asked Dec 18 '12 by HyperCube

2 Answers

There are plenty of ways to do this, but yes, Shapely can help. Your polygons appear to be quadrilaterals, but the approach I'll sketch doesn't depend on that. You won't need anything other than box() and Polygon() from shapely.geometry.

For each pixel, find the polygons that approximately overlap with it by comparing the pixel's bounds to the minimum bounding box of each polygon (a sketch of one way to build this prefilter follows the code below).

from shapely.geometry import box, Polygon

for pixel in pixels:
    # say the pixel has llx, lly, urx, ury values.
    pixel_shape = box(llx, lly, urx, ury)

    pixel_value = 0.0
    total_area = 0.0
    for polygon in approximately_overlapping:
        # say the polygon has a ``value`` and a 2-D array of coordinates
        # [[x0,y0],...] named ``xy``.
        polygon_shape = Polygon(xy)
        overlap = polygon_shape.intersection(pixel_shape).area
        pixel_value += overlap * value
        total_area += overlap

    # dividing by the summed overlap area turns the sum into the
    # area-weighted mean the question asks for
    if total_area > 0:
        pixel_value /= total_area

If the pixel and polygon don't intersect, the area of their intersection will be 0 and the contribution of that polygon to that pixel vanishes.
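
As for building approximately_overlapping: here is a minimal sketch of the bounding-box prefilter, assuming the polygons sit in a list of (value, xy) pairs (that container and the helper name are mine, purely illustrative):

from shapely.geometry import Polygon

# hypothetical container: a list of (value, xy) pairs, xy = [[x0,y0],...]
polygon_shapes = [(value, Polygon(xy)) for value, xy in polygons]

def approximately_overlapping(pixel_shape, polygon_shapes):
    # yield (value, shape) pairs whose bounding box touches the pixel's
    pminx, pminy, pmaxx, pmaxy = pixel_shape.bounds
    for value, shape in polygon_shapes:
        minx, miny, maxx, maxy = shape.bounds
        if minx <= pmaxx and maxx >= pminx and miny <= pmaxy and maxy >= pminy:
            yield value, shape

With very many polygons, a spatial index such as shapely.strtree.STRtree can replace this linear scan.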

answered Nov 02 '22 by sgillies


I added a couple of things to my initial question, but here is a working solution so far. Do you have any ideas to speed things up? It is still quite slow. As input I have over 100,000 polygons, and the meshgrid has 720*1440 grid cells. That is also why I inverted the loop order (iterating over polygons instead of grid cells): there are a lot of grid cells with no intersecting polygons at all. Furthermore, when only one polygon intersects a grid cell, the grid cell receives the full data value of that polygon. In addition, since I have to store the area fractions and data values for the "post-processing" part, I set the maximum number of intersections per cell to 10.

from shapely.geometry import box, Polygon
import h5py
import numpy as np

f = h5py.File('data.he5','r')
geo = f['geo'][:] #10 columns: 4x lat corner, lat center, 4x lon corner, lon center
product = f['product'][:] #one data value per polygon
f.close()

#prepare the regular meshgrid
delta = 0.25
darea = delta**-2 #inverse pixel area, turns intersection areas into area fractions
llx, lly = np.meshgrid( np.arange(-180, 180, delta), np.arange(-90, 90, delta) )
urx, ury = np.meshgrid( np.arange(-179.75, 180.25, delta), np.arange(-89.75, 90.25, delta) )
lly = np.flipud(lly)
ury = np.flipud(ury)
llx = llx.flatten()
lly = lly.flatten()
urx = urx.flatten()
ury = ury.flatten()

#initialize the data structures
data = np.zeros(len(llx),'f2')+np.nan
counter = np.zeros(len(llx),'i4') #number of polygons intersecting each cell
fraction = np.zeros( (len(llx),10),'f2')
value = np.zeros( (len(llx),10),'f2')

#go through all polygons
for ii in np.arange(1000): #use len(product) to run over the full dataset

    percent = (float(ii)/float(len(product)))*100
    print("Polygon: %i (%0.3f %%)" % (ii, percent))

    #corners as (lon,lat) pairs: lon corners are columns 5-8, lat corners 0-3
    xy = [ [geo[ii,5],geo[ii,0]], [geo[ii,7],geo[ii,2]], [geo[ii,8],geo[ii,3]], [geo[ii,6],geo[ii,1]] ]
    polygon_shape = Polygon(xy)

    # only go through grid cells whose bounds might intersect with the polygon
    minx = np.min( geo[ii,5:9] )
    miny = np.min( geo[ii,0:4] ) #all four lat corners
    maxx = np.max( geo[ii,5:9] )
    maxy = np.max( geo[ii,0:4] )
    # a cell overlaps the bounding box if its upper-right corner lies past
    # the box minimum and its lower-left corner before the box maximum
    mask = np.argwhere( (ury>=miny) & (lly<=maxy) & (urx>=minx) & (llx<=maxx) )
    for mm in mask:
        cc = int(counter[mm])
        if cc >= 10: #skip once the cell already holds 10 intersections
            continue
        pixel_shape = box(llx[mm], lly[mm], urx[mm], ury[mm])
        fraction[mm,cc] = polygon_shape.intersection(pixel_shape).area * darea
        value[mm,cc] = product[ii]
        counter[mm] += 1

print("post-processing")
mask = np.argwhere(counter>0)
for mm in mask:
    for cc in np.arange(counter[mm]):
        maxfraction = np.sum(fraction[mm,:])
        value[mm,cc] = (fraction[mm,cc]/maxfraction) * value[mm,cc]
    data[mm] = np.mean(value[mm,:int(counter[mm])])

data = data.reshape( 720, 1440 )
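
Note that because of the flipud calls, row 0 of the reshaped array is the northernmost 0.25° band, i.e. the grid runs from north to south, top to bottom.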

answered Nov 02 '22 by HyperCube