
Sub-pixel rendering in OpenGL - accuracy issue

I need to generate a few series of test images for a photogrammetric application. These should contain simple objects (disks, rectangles, etc.) with very precisely known positions.

Considering an 8-bit grayscale image of a black rectangle on a white background, the smallest observable displacement (after interpolation) should be 1/256 of a pixel, since there are 256 possible intensity levels for each pixel.
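
For illustration, and assuming a linear (gamma-free) mapping from pixel coverage to intensity, the expected gray value of a boundary pixel is just its uncovered area fraction scaled to 8 bits:

# Illustration only: gray value of a pixel whose area is partly covered by a
# black object, assuming a linear mapping from coverage to 8-bit intensity.
def edge_pixel_value(coverage):
    return int(round(255 * (1.0 - coverage)))

# Two edge positions 1/256 of a pixel apart differ by about one gray level:
print(edge_pixel_value(64 / 256.0))   # 191
print(edge_pixel_value(65 / 256.0))   # 190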

I decided to use OpenGL (with Python + pyglet) to render such images, as later on I will have to render more complicated content (3D scenes, stereo image pairs).

Unfortunately, the best accuracy I have achieved is around 1/10 of a pixel; the full intensity depth is not being used.

Is it possible to do better, ideally achieving the full 1/256-pixel accuracy? Any hints on how to do that, please?

Sample code, generating images of a partial disk that is moved by a further 0.01 pixel in each saved frame:

# -*- coding: utf-8 -*-
import pyglet
from pyglet.gl import *

# request a multisampled framebuffer (16 samples per pixel)
config = pyglet.gl.Config(sample_buffers=1, samples=16)
window = pyglet.window.Window(config=config, resizable=True)
window.set_size(800, 600)

printScreenNr = 0

@window.event
def on_draw():
    global printScreenNr
    window.clear()
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

    glLoadIdentity()
    glEnable(GL_LINE_SMOOTH)
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST)

    glTranslated(200, 200, 0)

    # half disk, shifted 0.01 px further to the right for every screenshot taken
    circleQuad = gluNewQuadric()
    glTranslated(200 + (printScreenNr * 0.01), 0, 0)
    gluPartialDisk(circleQuad, 0, 50, 500, 500, 0, 180)

@window.event
def on_resize(width, height):
    glViewport(0, 0, width, height)
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    glOrtho(0, width, height, 0, 0, 1)
    glMatrixMode(GL_MODELVIEW)
    return pyglet.event.EVENT_HANDLED

@window.event
def on_text(text):
    global printScreenNr
    if text == "p":
        # save the current frame, then advance the sub-pixel offset
        pyglet.image.get_buffer_manager().get_color_buffer().save(
            'photo' + str(printScreenNr) + '.tiff')
        printScreenNr += 1

pyglet.app.run()

(The code above uses gluPartialDisk, but I have also tested the issue with quadrangles; the accuracy of the results did not differ.)

Asked Aug 06 '12 by Pietro


2 Answers

A simple way to do this is to render the image at a larger scale and then scale it down by that factor. If you scale down by 4.0, then 16 original pixels are merged into one target pixel, which gives you 16 steps of gray between pure white and pure black when downscaling a black-and-white image.
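
For illustration, here is a minimal sketch of that downscaling step (not part of the original answer). The scene would first be rendered at 4x the target resolution, and every 4x4 block of the saved frame is then averaged into one output pixel. The file name 'photo0.tiff' refers to the screenshots saved by the question's code, numpy is used for the block average, and the factor of 4 is just an example:

# Downscale a 4x-oversampled render by averaging each 4x4 block of pixels.
import numpy as np
from PIL import Image

FACTOR = 4
big = np.asarray(Image.open('photo0.tiff').convert('L'), dtype=np.float64)
h = (big.shape[0] // FACTOR) * FACTOR   # crop to a multiple of FACTOR
w = (big.shape[1] // FACTOR) * FACTOR
blocks = big[:h, :w].reshape(h // FACTOR, FACTOR, w // FACTOR, FACTOR)
small = blocks.mean(axis=(1, 3))        # 16-sample average per output pixel
Image.fromarray(small.round().astype(np.uint8)).save('photo0_small.tiff')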

But there is a catch which probably explains your problem. If you have this:

  ...#####
  ...#####
  ...#####
  ...#####

(left: white, right: a black filled rectangle), then 12 white and 4 black input pixels contribute to a single output pixel. To get an output pixel that contains only one black input pixel, the input would need to look like this:

  ....####
  ....####
  ....####
  ...#####

See? Even though the black box leaks only one pixel into the white space, it does so four times. So to make sure your subpixel rendering code works, you need to look at single pixels or corners, not at the edges.
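
To make that concrete, here is a small self-contained check (not from the answer; it assumes plain N x N box supersampling, i.e. every target pixel averages an N x N grid of binary coverage samples). A straight, axis-aligned edge can only produce N + 1 distinct gray levels, because a whole column of samples flips at once, while a corner pixel exposes finer gradations:

N = 4  # samples per pixel along each axis

def edge_pixel(shift):
    # Gray value of a boundary pixel when a vertical black edge sits
    # `shift` pixels (0..1) inside it: whole columns of samples flip at once.
    black_columns = sum(1 for i in range(N) if (i + 0.5) / N < shift)
    return int(round(255 * (1 - black_columns * N / float(N * N))))

def corner_pixel(sx, sy):
    # Gray value of a pixel containing the corner of a black rectangle that
    # covers the region x < sx and y < sy of the pixel (sx, sy in 0..1).
    black = sum(1 for i in range(N) for j in range(N)
                if (i + 0.5) / N < sx and (j + 0.5) / N < sy)
    return int(round(255 * (1 - black / float(N * N))))

# The edge only ever yields N + 1 = 5 distinct levels ...
print(sorted({edge_pixel(s / 100.0) for s in range(101)}))
# ... while the corner pixel yields finer steps (10 distinct levels for N = 4):
print(sorted({corner_pixel(sx / 20.0, sy / 20.0)
              for sx in range(21) for sy in range(21)}))

With the 16 multisample samples requested in the question, a similar quantization of coverage values may be what limits an axis-aligned edge to roughly 1/10-pixel steps, though the exact sample pattern is driver-dependent.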

Answered Oct 12 '22 by Aaron Digulla


Even though you are using an orthographic projection, GL_PERSPECTIVE_CORRECTION_HINT might have an impact on rendering accuracy. At least I vaguely remember glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); fixing some gaps in my orthographically projected scene many years ago.
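
In the question's code this would be a one-line addition next to the existing GL_LINE_SMOOTH_HINT call in on_draw(); whether it changes anything for this particular scene is untested, and like all GL hints the driver is free to ignore it:

glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST)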

Answered Oct 12 '22 by Jorn van de Beek