For a game, I'm drawing dense clusters of several thousand randomly-distributed circles with varying radii, defined by a sequence of (x,y,r) triples. Here's an example image consisting of 14,000 circles:
I have some dynamic effects in mind, such as merging clusters, but for this to be possible I'll need to redraw all the circles every frame.
Many (maybe 80-90%) of the circles that are drawn are covered over by subsequent draws. Therefore I suspect that with preprocessing I can significantly speed up my draw loop by eliminating covered circles. Is there an algorithm that can identify them with reasonable efficiency?
I can tolerate a fairly large number of false negatives (i.e. drawing some circles that are actually covered), as long as there aren't so many that drawing efficiency suffers. I can also tolerate false positives as long as they're nearly covered (e.g. removing some circles that are only 99% covered). I'm also open to changes in the way the circles are distributed, as long as the result still looks okay.
This kind of culling is essentially what hidden surface algorithms (HSAs) do - especially the variety called "object space". In your case the sorted order of the circles gives them an effective constant depth coordinate. The fact that it's constant simplifies the problem.
A classical reference on HSAs is here. I'd give it a read for ideas.
An idea inspired by this thinking is to process the circles with a "sweep line" algorithm: say, a horizontal line moving from top to bottom. The sweep line maintains the set of circles it is currently touching. Initialize by sorting the input list of circles by top coordinate.
The sweep advances in "events", which are the top and bottom coordinates of the circles. When a circle's top is reached, add it to the sweep; when its bottom is reached, remove it (unless it has already been removed, as described below). As a new circle enters the sweep, compare it against the circles already there. You can keep events in a max-heap on y coordinate, adding them lazily as needed: the next input circle's top coordinate plus the bottom coordinates of all circles currently in the sweep.
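For concreteness, here is a minimal sketch of that event bookkeeping (my own illustration, not code from the reference). It assumes circles are (x, y, r) triples in draw order, with y growing upward so top = y + r and bottom = y - r; the comparison against the circles already in the sweep is left as a placeholder.

    import heapq

    def sweep_cull(circles):
        """Skeleton of the sweep; returns indices of circles to keep (culling is a TODO)."""
        # Process tops from highest y to lowest (the sweep moves top to bottom).
        order = sorted(range(len(circles)),
                       key=lambda i: circles[i][1] + circles[i][2], reverse=True)
        survivors = set(range(len(circles)))   # start by keeping everything
        active = set()                         # circles currently cut by the sweep line
        bottoms = []                           # heapq is a min-heap, so store -y for a max (y) heap
        k = 0
        while k < len(order) or bottoms:
            next_top = (circles[order[k]][1] + circles[order[k]][2]
                        if k < len(order) else None)
            next_bottom = -bottoms[0][0] if bottoms else None
            if next_bottom is None or (next_top is not None and next_top >= next_bottom):
                i = order[k]                   # top event: a new circle enters the sweep
                k += 1
                x, y, r = circles[i]
                # TODO: compare circle i against 'active' here, recording obscured
                # areas both ways and dropping fully covered circles from 'survivors'.
                active.add(i)
                heapq.heappush(bottoms, (-(y - r), i))
            else:
                _, i = heapq.heappop(bottoms)  # bottom event: the circle leaves the sweep
                active.discard(i)              # no-op if it was already deleted
        return survivors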
A new circle entering the sweep can do any or all of three things:
1. Obscure circles in the sweep with greater depth. (Since we are identifying circles not to draw, the conservative side of this decision is to use the biggest included axis-aligned box (BIALB) of the new circle to record the obscured area for each existing deeper circle; see the helper sketch after this list.)
2. Be obscured by other circles with lesser depth. (Here the conservative choice is to use the BIALB of each relevant existing circle to record the obscured area of the new circle.)
3. Have areas that are not obscured.
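For reference, the BIALB of a circle is just its inscribed axis-aligned square. A tiny helper, using the question's (x, y, r) triples and boxes written as (x0, y0, x1, y1):

    import math

    def bialb(circle):
        """Biggest included axis-aligned box of a circle (x, y, r): its inscribed square."""
        x, y, r = circle
        h = r / math.sqrt(2.0)                # half the side length of the inscribed square
        return (x - h, y - h, x + h, y + h)   # box as (x0, y0, x1, y1)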
The obscured area of each circle must be maintained (it will generally grow as more circles are processed) until the scan line reaches its bottom. If at any time the obscured area covers the entire circle, it can be deleted and never drawn.
The more detailed the recording of the obscured area is, the better the algorithm will work. A union of rectangular regions is one possibility (see Android's Region code for example). A single rectangle is another, though this is likely to cause many false positives.
Similarly, a fast data structure for finding the possibly obscuring and obscured circles in the sweep is needed; an interval tree containing the BIALBs is likely to be a good choice.
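To make the union-of-rectangles option concrete, one simple (if not asymptotically optimal) way to test whether a recorded union of boxes covers a circle is a recursive splitting check. The sketch below is my own illustration: it only deletes a circle when the union covers the circle's full bounding box, which is a conservative stand-in for covering the circle itself, and it uses the same (x0, y0, x1, y1) box convention as above.

    def covered(box, boxes):
        """True if 'box' is entirely covered by the union of the boxes in 'boxes'."""
        x0, y0, x1, y1 = box
        if x0 >= x1 or y0 >= y1:
            return True                          # degenerate box: trivially covered
        if not boxes:
            return False
        ax0, ay0, ax1, ay1 = boxes[0]
        if ax0 >= x1 or ax1 <= x0 or ay0 >= y1 or ay1 <= y0:
            return covered(box, boxes[1:])       # boxes[0] doesn't touch us: skip it
        # The part of 'box' inside boxes[0] is covered; the up-to-four remaining
        # strips must each be covered by the rest of the list.
        strips = []
        if x0 < ax0:
            strips.append((x0, y0, ax0, y1))                 # strip left of boxes[0]
        if ax1 < x1:
            strips.append((ax1, y0, x1, y1))                 # strip right of boxes[0]
        mx0, mx1 = max(x0, ax0), min(x1, ax1)
        if y0 < ay0:
            strips.append((mx0, y0, mx1, ay0))               # strip below boxes[0]
        if ay1 < y1:
            strips.append((mx0, ay1, mx1, y1))               # strip above boxes[0]
        return all(covered(s, boxes[1:]) for s in strips)

    def circle_fully_obscured(circle, obscuring_boxes):
        """Conservative test: require the union to cover the circle's bounding box."""
        x, y, r = circle
        return covered((x - r, y - r, x + r, y + r), obscuring_boxes)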
Note that in practice algorithms like this only produce a win if the number of primitives is huge because fast graphics hardware is so ... fast.
Based on the example image you provided, it seems your circles have a near-constant radius. If their radius cannot be lower than a significant number of pixels, you could take advantage of the simple geometry of circles to try an image-space approach.
Imagine you divide your rendering surface into a grid of squares, sized so that the smallest rendered circle fits the grid like this:
The circle's radius is sqrt(10) grid units, so its area is 10*pi square units and it entirely covers at least 21 squares. If you mark the squares entirely overlapped by each circle as already painted, you will therefore have painted approximately 21/(10*pi) of the circle's surface, that is, about 2/3.
You can get some ideas about optimal coverage of circles by squares here.
The culling process would look a bit like a reverse painter's algorithm:
    for each circle, from closest to farthest:
        if all squares overlapped (even partially) by the circle are painted:
            eliminate the circle
        else:
            paint the squares entirely overlapped by the circle
You could also 'cheat' by painting grid squares that are not entirely covered by a given circle (or by eliminating circles that overflow slightly from the already painted surface), increasing the number of eliminated circles at the cost of some false positives.
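Here is a minimal sketch of the basic (non-cheating) grid culling on the CPU. It assumes circles arrive as (x, y, r) triples already sorted from closest to farthest and a scene of width x height in the same units; function and variable names are my own, and the grid step is a tuning parameter.

    import math

    def cull_with_grid(circles, width, height, step):
        """Return the circles that still need drawing; input is sorted front to back."""
        cols, rows = int(math.ceil(width / step)), int(math.ceil(height / step))
        painted = [[False] * cols for _ in range(rows)]
        kept = []
        for x, y, r in circles:                              # closest to farthest
            i0, i1 = max(int((x - r) // step), 0), min(int((x + r) // step), cols - 1)
            j0, j1 = max(int((y - r) // step), 0), min(int((y + r) // step), rows - 1)
            hidden = True
            to_paint = []
            for j in range(j0, j1 + 1):
                for i in range(i0, i1 + 1):
                    cx0, cx1 = i * step, (i + 1) * step      # cell bounds
                    cy0, cy1 = j * step, (j + 1) * step
                    # Nearest and farthest points of the cell relative to the circle center.
                    dxn, dyn = max(cx0 - x, 0.0, x - cx1), max(cy0 - y, 0.0, y - cy1)
                    dxf, dyf = max(abs(cx0 - x), abs(cx1 - x)), max(abs(cy0 - y), abs(cy1 - y))
                    if dxn * dxn + dyn * dyn < r * r and not painted[j][i]:
                        hidden = False                       # overlaps an unpainted cell
                    if dxf * dxf + dyf * dyf <= r * r:
                        to_paint.append((i, j))              # cell lies entirely inside the circle
            if hidden:
                continue                                     # every overlapped cell painted: eliminate
            kept.append((x, y, r))
            for i, j in to_paint:
                painted[j][i] = True
        return kept

The 'cheat' variants amount to relaxing one of the two distance tests above.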
You can then render the remaining circles with a Z-buffer algorithm (i.e. let the GPU do the rest of the work).
CPU-based approach
This assumes you implement the grid as a memory bitmap, with no help from the GPU.
To determine the squares to be painted, you can use precomputed patterns based on the distance of the circle center relative to the grid (the red crosses in the example images) and the actual circle radius.
If the relative variation in diameter is small enough, you can define a two-dimensional table of patterns indexed by circle radius and by the offset of the center from the nearest grid point.
Once you've retrieved the proper pattern, you can apply it to the appropriate location by using simple symmetries.
The same principle can be used for checking if a circle fits into an already painted surface.
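A sketch of how such a pattern table might be precomputed (the names and the quantization step are my own choices): for each radius and quantized sub-cell offset of the center, store the cell offsets that are entirely covered by the circle.

    import math

    def fully_covered_cells(radius, dx, dy, cell=1.0):
        """Cell offsets (i, j) whose square lies entirely inside a circle of the given
        radius whose center sits at (dx, dy) within its home cell."""
        n = int(math.ceil(radius / cell))
        cells = []
        for i in range(-n - 1, n + 1):
            for j in range(-n - 1, n + 1):
                # A square is fully covered iff its farthest corner is inside the circle.
                fx = max(abs(i * cell - dx), abs((i + 1) * cell - dx))
                fy = max(abs(j * cell - dy), abs((j + 1) * cell - dy))
                if fx * fx + fy * fy <= radius * radius:
                    cells.append((i, j))
        return cells

    def build_pattern_table(radii, subdivisions=4, cell=1.0):
        """Patterns indexed by (radius, quantized center offset within a cell)."""
        table = {}
        for r in radii:
            for a in range(subdivisions):
                for b in range(subdivisions):
                    dx = (a + 0.5) * cell / subdivisions
                    dy = (b + 0.5) * cell / subdivisions
                    table[(r, a, b)] = fully_covered_cells(r, dx, dy, cell)
        return table

At cull time you would quantize each circle's center offset, look up the pattern, and translate it by the circle's home cell index; the symmetries mentioned above can shrink the table further.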
GPU-based approach
It's been a long time since I worked with computer graphics, but if the current state of the art allows, you could let the GPU do the drawing for you.
Painting the grid would be achieved by rendering each circle scaled to fit the grid.
Checking elimination would require reading the values of all the pixels covered by the circle (scaled to grid dimensions).
Efficiency
There should be a sweet spot for the grid dimension: a denser grid covers a higher percentage of each circle's surface and thus eliminates more circles (fewer false negatives), but the computation cost grows as O(1/grid_step²).
Of course, if the rendered circles can shrink to about 1 pixel in diameter, you might as well drop the whole algorithm and let the GPU do the work. But the efficiency gain compared with the GPU's pixel-based approach grows as the square of the grid step.
Using the grid in my example, you could probably expect about 1/3 false negatives for a completely random set of circles.
For your picture, which seems to define volumes, 2/3 of the foreground circles and (nearly) all of the background ones should be eliminated. Culling more than 80% of the circles might be worth the effort.
All this being said, it is not easy to beat a GPU in a brute-force computation contest, so I have only the vaguest idea of the actual performance gain you could expect. Could be fun to try, though.