Creating parallel offscreen OpenGL contexts on Windows

Tags: c++, opengl

I am trying to set up parallel multi-GPU offscreen rendering contexts. I am using the "OpenGL Insights" book, chapter 27, "Multi-GPU Rendering on NVIDIA Quadro". I also looked into the wglCreateAffinityDCNV docs but still can't pin it down.

My machine has 2 NVIDIA Quadro 4000 cards (no SLI), running on Windows 7 64-bit. My workflow goes like this:

  1. Create a default window context using GLFW.
  2. Map the GPU devices.
  3. Destroy the default GLFW context.
  4. Create a new GL context for each one of the devices (currently trying only one).
  5. Set up a boost thread for each context and make it current in that thread.
  6. Run rendering procedures on each thread separately (no resource sharing).

Everything is created without errors and runs, but once I try to read pixels from an offscreen FBO I get a null pointer here:

GLubyte* ptr  = (GLubyte*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);

Also, the GL error check returns "UNKNOWN ERROR".
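
The readback itself is the usual PBO path, roughly this (a sketch with placeholder names: _fbo, _pbo, width and height stand in for my actual objects and sizes):

// Sketch of the readback path (placeholder names: _fbo, _pbo, width, height).
glBindFramebuffer(GL_READ_FRAMEBUFFER, _fbo);                    // offscreen FBO
glBindBuffer(GL_PIXEL_PACK_BUFFER, _pbo);                        // PBO sized width * height * 4
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0); // read into the bound PBO
GLubyte* ptr = (GLubyte*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if(ptr)
{
    // ... use the pixel data here ...
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);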

I thought maybe the multi-threading was the problem, but the same setup gives an identical result when running on a single thread. So I believe it is related to the context creation.

Here is how I do it:

  ////Creating default window with GLFW here .
      .....
         .....

Creating offscreen contexts:

  PIXELFORMATDESCRIPTOR pfd =
{
    sizeof(PIXELFORMATDESCRIPTOR),
    1,
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,    //Flags
    PFD_TYPE_RGBA,            //The kind of framebuffer. RGBA or palette.
    24,                        //Colordepth of the framebuffer.
    0, 0, 0, 0, 0, 0,
    0,
    0,
    0,
    0, 0, 0, 0,
    24,                        //Number of bits for the depthbuffer
    8,                        //Number of bits for the stencilbuffer
    0,                        //Number of Aux buffers in the framebuffer.
    PFD_MAIN_PLANE,
    0,
    0, 0, 0
};

void  glMultiContext::renderingContext::createGPUContext(GPUEnum gpuIndex){

    int    pf;
    HGPUNV hGPU[MAX_GPU];
    HGPUNV GpuMask[MAX_GPU];

    UINT displayDeviceIdx;
    GPU_DEVICE gpuDevice;
    bool bDisplay = false, bPrimary = false;   // initialise before the |= below
    // Get a list of the first MAX_GPU GPUs in the system
    if ((gpuIndex < MAX_GPU) && wglEnumGpusNV(gpuIndex, &hGPU[gpuIndex])) {

        printf("Device# %d:\n", gpuIndex);

        // Now get the detailed information about this device:
        // how many displays it's attached to
        displayDeviceIdx = 0;
        if(wglEnumGpuDevicesNV(hGPU[gpuIndex], displayDeviceIdx, &gpuDevice))
        {   

            bPrimary |= (gpuDevice.Flags & DISPLAY_DEVICE_PRIMARY_DEVICE) != 0;
            printf(" Display# %d:\n", displayDeviceIdx);
            printf("  Name: %s\n",   gpuDevice.DeviceName);
            printf("  String: %s\n", gpuDevice.DeviceString);
            if(gpuDevice.Flags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP)
            {
                printf("  Attached to the desktop: LEFT=%d, RIGHT=%d, TOP=%d, BOTTOM=%d\n",
                    gpuDevice.rcVirtualScreen.left, gpuDevice.rcVirtualScreen.right, gpuDevice.rcVirtualScreen.top, gpuDevice.rcVirtualScreen.bottom);
            }
            else
            {
                printf("  Not attached to the desktop\n");
            }

            // See if it's the primary GPU
            if(gpuDevice.Flags & DISPLAY_DEVICE_PRIMARY_DEVICE)
            {
                printf("  This is the PRIMARY Display Device\n");
            }


        }

        ///=======================   CREATE a CONTEXT HERE 
        GpuMask[0] = hGPU[gpuIndex];
        GpuMask[1] = NULL;
        _affDC = wglCreateAffinityDCNV(GpuMask);

        if(!_affDC)
        {
            printf( "wglCreateAffinityDCNV failed");                  
        }

    }

    printf("GPU context created");
}

glMultiContext::renderingContext *
    glMultiContext::createRenderingContext(GPUEnum gpuIndex)
{
    glMultiContext::renderingContext *rc;

    rc = new renderingContext(gpuIndex);

    _pixelFormat = ChoosePixelFormat(rc->_affDC, &pfd);

    if(_pixelFormat == 0)
    {
        printf("failed to choose pixel format");
        delete rc;       // avoid leaking the renderingContext on failure
        return NULL;     // this function returns a pointer, not a bool
    }

     DescribePixelFormat(rc->_affDC, _pixelFormat, sizeof(pfd), &pfd);

    if(SetPixelFormat(rc->_affDC, _pixelFormat, &pfd) == FALSE)
    {
        printf("failed to set pixel format");
        delete rc;
        return NULL;
    }

    rc->_affRC = wglCreateContext(rc->_affDC);

    if(rc->_affRC == 0)
    {
        printf("failed to create gl render context");
        delete rc;
        return NULL;
    }


    return rc;
}

// Call at the end to make it current:


 bool glMultiContext::makeCurrent(renderingContext *rc)
{
    if(!wglMakeCurrent(rc->_affDC, rc->_affRC))
    {

        printf("failed to make context current");
        return false;
    }

    return true;
}

    ////  init OpenGL objects and rendering here :

     ..........
     ............

As I said, I am getting no errors at any stage of device and context creation. What am I doing wrong?

UPDATE:

Well, it seems I figured out the bug. I call glfwTerminate() after calling wglMakeCurrent(), so it seems the GLFW teardown also makes the new context un-current. Though it is weird, as the OpenGL commands kept getting executed anyway. So now it works in a single thread.
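
So for the single-threaded case the fix is just the call order, roughly (a sketch against my own classes):

// Corrected single-threaded order: destroy the helper GLFW window/context first,
// then make the affinity context current so nothing un-currents it afterwards.
_rc = glMultiContext::getInstance().createRenderingContext(GPU1);
glfwTerminate();                                   // tear down the initial GLFW context
glMultiContext::getInstance().makeCurrent(_rc);    // the affinity context now stays current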

But now, if I spawn another thread using boost threads, I get the initial error. Here is my thread class:

GPUThread::GPUThread(void)
{
    _thread =NULL;
    _mustStop=false;
    _frame=0;


    _rc =glMultiContext::getInstance().createRenderingContext(GPU1);
    assert(_rc);

    glfwTerminate(); //terminate the initial window and context
    if(!glMultiContext::getInstance().makeCurrent(_rc)){

        printf("failed to make current!!!");
    }
             // init engine here (GLEW was already initiated)
    engine = new Engine(800,600,1);

}
void GPUThread::Start(){

    printf("threaded view setup ok");

    /// init thread here:
    _thread = new boost::thread(boost::ref(*this));

    _thread->join();
}
void GPUThread::Stop(){
    // Signal the thread to stop (thread-safe)
    _mustStopMutex.lock();
    _mustStop=true;
    _mustStopMutex.unlock();

    // Wait for the thread to finish.
    if (_thread!=NULL) _thread->join();

}
// Thread function
void GPUThread::operator () ()
{
    bool mustStop;

    do
    {
        // Display the next animation frame
        DisplayNextFrame();
        _mustStopMutex.lock();
        mustStop=_mustStop;
        _mustStopMutex.unlock();
    }   while (mustStop==false);

}


void GPUThread::DisplayNextFrame()
{
    engine->Render(); // renders a frame
    ++_frame;         // count rendered frames so the stop condition below can trigger
    if(_frame == 101){
        _mustStop=true;
    }
}

GPUThread::~GPUThread(void)
{
    delete _view;
    delete engine;   // engine is created in the constructor
    if(_rc != 0)
    {
        glMultiContext::getInstance().deleteRenderingContext(_rc);
        _rc = 0;
    }
    if(_thread!=NULL) delete _thread;
}
Asked Apr 02 '13 by Michael IV


1 Answer

Finally, I solved the issues myself. The first problem was that I called glfwTerminate after I had made the other device context current; that probably made the new context not current too. The second problem was my "noobiness" with boost threads: I failed to init all the rendering-related objects in the custom thread, because I called the rc init procedures before starting the thread, as can be seen in the example above.
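
For anyone hitting the same thing, the threading fix boils down to creating the context and making it current inside the thread function itself, roughly like this (a sketch against my class above; re-running glewInit() on the new context is my own addition):

// Thread function: all context-related init now happens on the rendering thread.
void GPUThread::operator () ()
{
    // 1. Create the affinity context on this thread.
    _rc = glMultiContext::getInstance().createRenderingContext(GPU1);
    assert(_rc);

    // 2. Make it current here; a GL context is current per thread.
    if(!glMultiContext::getInstance().makeCurrent(_rc))
    {
        printf("failed to make current!!!");
        return;
    }

    // 3. Init GLEW and the engine on the same thread, after the context is current.
    glewInit();
    engine = new Engine(800, 600, 1);

    bool mustStop;
    do
    {
        DisplayNextFrame();
        _mustStopMutex.lock();
        mustStop = _mustStop;
        _mustStopMutex.unlock();
    } while(!mustStop);
}

With that, the constructor only sets the members to their defaults and Start() just spawns the thread.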

Answered Oct 28 '22 by Michael IV