I'm currently writing a game in C++/Qt5 using the Qt3D module.
I can render the scene (QGLSceneNodes) on a QGLView but am now stuck on overpainting the scene with some GUI elements. I haven't decided yet whether to use QML or C++ to define the look and feel of the interface, so I'm open to solutions for both. (Note that the QML module is called QtQuick3D, the C++ module is called Qt3D and they are both part of Qt5.)
I would very much prefer a QML-based solution.
How can I do some overpainting?
The following things have to be possible:
I think this is all possible using just another QGLSceneNode for the 2D GUI part. However, positioning and orienting a scene node only to have it re-positioned and re-oriented again during rendering (in the vertex shader) doesn't make much sense to me: it introduces numerical errors and seems inefficient.
Is using another "GUI" vertex shader the correct way?
(How) can I make post-rendering effects and overpainting work together?
It would be really nice if I could apply the following post-rendering effects:
I see a problem when I implement the overpainting using a special shader program: I can't access the pixels behind the GUI to apply effects such as a Gaussian blur that would make the GUI look like glass. I would have to render the scene to another buffer instead of directly to the QGLView. This is my main problem...
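To illustrate, the kind of two-pass setup I mean would look roughly like this (just a sketch: drawScene() and drawGuiWithBlurShader() are placeholder names, and QGLView's actual paint hook is paintGL(QGLPainter*), so this isn't working Qt3D code):

    #include <QGLFramebufferObject>

    void MyView::renderFrame()             // hypothetical paint hook
    {
        // Lazily (re)create an offscreen buffer at the viewport size.
        if (!m_fbo || m_fbo->size() != size()) {
            delete m_fbo;
            m_fbo = new QGLFramebufferObject(size(), QGLFramebufferObject::Depth);
        }

        m_fbo->bind();
        drawScene();                       // the normal 3D pass goes into the FBO
        m_fbo->release();

        // Second pass: the scene is now available as a texture, so a GUI /
        // post-processing shader could sample the pixels behind the GUI.
        glBindTexture(GL_TEXTURE_2D, m_fbo->texture());
        drawGuiWithBlurShader();           // fullscreen/panel quads sampling it
    }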
I've had similar issues making an OpenGL game via QGLWidget, and the solution I came up with is to use an orthographic projection to render scratch-built controls. At the end of the main draw function, I call a separate function that sets up the view and then draws the 2D bits.

As for the mouse events, I capture the events sent to the main widget, then use a combination of hit tests, recursive coordinate offsets, and pseudo-callbacks, which allows my controls to be nested. In my implementation I forward commands to the top-level controls (mainly scroll boxes) one at a time, but you could easily add a linked-list structure if you need to add controls dynamically. I'm currently just using quads to render the parts as colored panels, but it should be simple to add textures or even make them 3D components that respond to dynamic lighting.

There's a lot of code, so I'm just going to throw the relevant bits in pieces:
void GLWidget::paintGL() {
    if (isPaused) return;
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glPushMatrix();
    glLightfv(GL_LIGHT0, GL_POSITION, FV0001);
    gluLookAt(...);
    // Typical 3D stuff
    glCallList(Body::glGrid);
    glPopMatrix();
    // Non-3D bits
    drawOverlay();   // <--- call the 2D rendering
    fpsCount++;
}
Here is the code that sets up the modelview/projection matrix for 2d rendering:
void GLWidget::drawOverlay() {
    glClear(GL_DEPTH_BUFFER_BIT);
    glPushMatrix();              // reset
    glLoadIdentity();            // modelview
    glMatrixMode(GL_PROJECTION); // set ortho camera
    glPushMatrix();
    glLoadIdentity();
    gluOrtho2D(0, width()-2*padX, height()-2*padY, 0); // <-- critical; padX/padY is just the letterboxing I use
    glMatrixMode(GL_MODELVIEW);
    glDisable(GL_DEPTH_TEST);    // <-- necessary if you want to draw things in the order you call them
    glDisable(GL_LIGHTING);      // <-- lighting can cause weird artifacts
    drawHUD();                   // <-- actually does the drawing
    glEnable(GL_LIGHTING);
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION); glPopMatrix();
    glMatrixMode(GL_MODELVIEW);  glPopMatrix();
}
This is the main drawing function that handles the HUD stuff; most of the draw functions resemble the big 'background panel' chunk. You basically push the modelview matrix, use translate/rotate/scale like you would in 3D, paint the controls, then pop the matrix:
void GLWidget::drawHUD() {
    float gridcol[]  = { 0.5, 0.1, 0.5, 0.9 };
    float gridcolB[] = { 0.3, 0.0, 0.3, 0.9 };
    // String caption
    QString tmpStr = QString::number(FPS);
    drawText(tmpStr.toAscii(), 120+padX, 20+padY);
    // Markers
    tGroup* tmpGroup = curGroup->firstChild;
    while (tmpGroup != 0) {
        drawMarker(tmpGroup->scrXYZ);
        tmpGroup = tmpGroup->nextItem;
    }
    // Background panel
    glEnable(GL_BLEND);
    glTranslatef(0, height()*0.8, 0);
    glBegin(GL_QUADS);
        glColor4fv(gridcol);
        glVertex2f(0.0f, 0.0);
        glVertex2f(width(), 0);
        glColor4fv(gridcolB);
        glVertex2f(width(), height()*0.2);
        glVertex2f(0.0f, height()*0.2);
    glEnd();
    glDisable(GL_BLEND);
    glLoadIdentity();            // necessary because of the translate above
    glTranslatef(-padX, -padY, 0);
    // Controls
    cargoBox->Draw();
    sideBox->Draw();
    econBox->Draw();
}
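drawText() and drawChar() are just thin text helpers; a minimal sketch of what they can look like, assuming QGLWidget::renderText() is good enough for your text:

    void GLWidget::drawText(const char* text, int x, int y)
    {
        glColor3f(1.0f, 1.0f, 1.0f);                  // renderText() uses the current GL color
        renderText(x, y, QString::fromLatin1(text));  // 2D overload: window pixels, origin top-left
    }

    void GLWidget::drawChar(char c, float x, float y)
    {
        // Called from control Draw() code after glTranslatef(), so use the
        // overload that goes through the current (ortho) modelview/projection.
        renderText((double)x, (double)y, 0.0, QString(QChar(c)));
    }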
Now for the controls themselves. All of my controls, like buttons, sliders, labels, etc., inherit from a common base class with empty virtual functions that allow them to interact with each other type-agnostically. The base has typical stuff like height, coordinates, etc., as well as a few pointers to handle callbacks and text rendering. Also included is the generic HitTest(x, y) function that tells whether the control has been clicked. You could make this virtual if you want elliptical controls. Here is the base class, the hit test, and the code for a simple button:
class glBase {
public:
    glBase(float tx, float ty, float tw, float th);
    virtual void Draw() {return;}
    virtual glBase* mouseDown(float xIn, float yIn) {return this;}
    virtual void mouseUp(float xIn, float yIn) {return;}
    virtual void mouseMove(float xIn, float yIn) {return;}
    virtual void childCallBack(float vIn, void* caller) {return;}
    bool HitTest(float xIn, float yIn);
    float xOff, yOff;
    float width, height;
    glBase* parent;
    static GLWidget* renderer;
protected:
    glBase* callFwd;
};

bool glBase::HitTest(float xIn, float yIn) { // input should be relative to the parent's space
    xIn -= xOff; yIn -= yOff;
    if (yIn >= 0 && xIn >= 0) {
        if (xIn <= width && yIn <= height) return true;
    }
    return false;
}
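The static renderer pointer also needs an out-of-class definition, and the base constructor just stores the geometry and clears the pointers; a minimal sketch (the widget would set glBase::renderer = this somewhere during setup):

    GLWidget* glBase::renderer = 0;

    glBase::glBase(float tx, float ty, float tw, float th)
        : xOff(tx), yOff(ty), width(tw), height(th), parent(0), callFwd(0) {}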
class glButton : public glBase {
public:
    glButton(float tx, float ty, float tw, float th, char rChar);
    void Draw();
    glBase* mouseDown(float xIn, float yIn);
    void mouseUp(float xIn, float yIn);
private:
    bool isPressed;
    char renderChar;
};

glButton::glButton(float tx, float ty, float tw, float th, char rChar) :
    glBase(tx, ty, tw, th), isPressed(false), renderChar(rChar) {}

void glButton::Draw() {
    float gridcolA[] = { 0.5, 0.6, 0.5, 1.0 }; // up
    float gridcolB[] = { 0.2, 0.3, 0.2, 1.0 };
    float gridcolC[] = { 1.0, 0.2, 0.1, 1.0 }; // down
    float gridcolD[] = { 1.0, 0.5, 0.3, 1.0 };
    float* upState;
    float* upStateB;
    if (isPressed) {
        upState  = gridcolC;
        upStateB = gridcolD;
    } else {
        upState  = gridcolA;
        upStateB = gridcolB;
    }
    glPushMatrix();
    glTranslatef(xOff, yOff, 0);
    glBegin(GL_QUADS);
        glColor4fv(upState);
        glVertex2f(0, 0);
        glVertex2f(width, 0);
        glColor4fv(upStateB);
        glVertex2f(width, height);
        glVertex2f(0, height);
    glEnd();
    if (renderChar != 0) // center the glyph
        renderer->drawChar(renderChar, (width - 12)/2, (height - 16)/2);
    glPopMatrix();
}

glBase* glButton::mouseDown(float xIn, float yIn) {
    isPressed = true;
    return this;
}

void glButton::mouseUp(float xIn, float yIn) {
    isPressed = false;
    if (parent != 0) {
        if (HitTest(xIn, yIn)) parent->childCallBack(0, this);
    }
}
Since compound controls like sliders have multiple buttons, and scroll boxes have tiles and sliders, it all needs to be recursive for the drawing/mouse events. Each control also has a generic callback function that passes a value and its own address; this allows the parent to test each of its children to see which one is calling, and respond appropriately (e.g. up/down buttons). Note that child controls are created and destroyed via new/delete in the constructors/destructors. Also, the Draw() functions of child controls are called during the parent's own Draw() call, which allows everything to be self-contained. Here is the relevant code from a mid-level slide bar that contains three sub-buttons:
glBase* glSlider::mouseDown(float xIn, float yIn) {
    xIn -= xOff; yIn -= yOff;                 // move from parent to local coords
    if (slider->HitTest(xIn, yIn - width)) {  // clicked the slider
        yIn -= width;                         // localize to the field area
        callFwd = slider->mouseDown(xIn, yIn);
        dragMode = true;
        dragY = yIn - slider->yOff;
    } else if (upButton->HitTest(xIn, yIn)) {
        callFwd = upButton->mouseDown(xIn, yIn);
    } else if (downButton->HitTest(xIn, yIn)) {
        callFwd = downButton->mouseDown(xIn, yIn);
    } else {
        // clicked in the field, but not on the slider
    }
    return this;
} // TO BE CHECKED (esp. slider hit)

void glSlider::mouseUp(float xIn, float yIn) {
    xIn -= xOff; yIn -= yOff;
    if (callFwd != 0) {
        callFwd->mouseUp(xIn, yIn); // OK not to translate because the slider doesn't call back
        callFwd = 0;
        dragMode = false;
    } else {
        // clicked in the field, not on any of the 3 buttons
    }
}

void glSlider::childCallBack(float vIn, void* caller) { // can use value blending for smoothing
    float tDelta = (maxVal - minVal)/(10);
    if (caller == upButton) { // only gets callbacks from the up/down buttons
        setVal(Value - tDelta);
    } else {
        setVal(Value + tDelta);
    }
}
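The slider's constructor/destructor and Draw() aren't shown above; a minimal sketch of how a compound control can own and draw its children (the child sizes/positions here are just placeholders):

    glSlider::glSlider(float tx, float ty, float tw, float th)
        : glBase(tx, ty, tw, th)
    {
        minVal = 0; maxVal = 1; Value = 0;
        dragMode = false; dragY = 0;
        // Children are created with new here and freed in the destructor.
        upButton   = new glButton(0, 0,       tw, tw, '+'); // square end buttons
        downButton = new glButton(0, th - tw, tw, tw, '-');
        slider     = new glButton(0, tw,      tw, tw, 0);   // draggable knob
        upButton->parent = downButton->parent = slider->parent = this;
    }

    glSlider::~glSlider()
    {
        delete upButton;
        delete downButton;
        delete slider;
    }

    void glSlider::Draw()
    {
        glPushMatrix();
        glTranslatef(xOff, yOff, 0);   // move into the slider's local space
        upButton->Draw();              // children draw relative to the slider
        downButton->Draw();
        slider->Draw();
        glPopMatrix();
    }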
Now that we have self-contained controls that render into an OpenGL context, all that's needed is the interface to the top-level controls, e.g. mouse events from the GLWidget.
void GLWidget::mousePressEvent(QMouseEvent* event) {
    if (cargoBox->HitTest(event->x()-padX, event->y()-padY)) {           // clicked the first scroll box
        callFwd = cargoBox->mouseDown(event->x()-padX, event->y()-padY); // forward the click, noting which control got it
        return;
    } else if (sideBox->HitTest(event->x()-padX, event->y()-padY)) {
        callFwd = sideBox->mouseDown(event->x()-padX, event->y()-padY);
        return;
    } else if (econBox->HitTest(event->x()-padX, event->y()-padY)) {
        callFwd = econBox->mouseDown(event->x()-padX, event->y()-padY);
        return;
    }
    lastPos = event->pos(); // for dragging
}
void GLWidget::mouseReleaseEvent(QMouseEvent *event) {
    if (callFwd != 0) { // did the user mouse-down on something?
        callFwd->mouseUp(event->x()-padX, event->y()-padY);
        callFwd = 0;    // ^^^ lets you drag off a button before releasing without triggering its callback
        return;
    } else { // local
        tBase* tmpPick = pickObject(event->x(), event->y());
        // normal clicking, game stuff...
    }
}
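To make dragging work (the slider's dragMode, and the camera dragging via lastPos), mouse moves need the same forwarding; a minimal sketch:

    void GLWidget::mouseMoveEvent(QMouseEvent* event)
    {
        if (callFwd != 0) {   // a control owns the current drag
            callFwd->mouseMove(event->x() - padX, event->y() - padY);
        } else {
            // normal camera dragging using lastPos, game stuff...
            lastPos = event->pos();
        }
    }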
Overall this system is a little hacky, and I haven't added some features like keyboard response or resizing, but these should be easy to add, maybe requiring an 'event type' in the callback (i.e. mouse or keyboard). You could even make the top-most widget a subclass of glBase, which would let it receive parent->childCallBack() calls. Although it's a lot of work, this system only requires vanilla C++/OpenGL, and it uses far fewer CPU/memory resources than the native Qt widgets, probably by an order of magnitude or two. If you need any more info, just ask.
Resizing and events are easy to implement, especially if you make a class that inherits from QGLWidget. Here is a short video that should help you get started:
https://www.youtube.com/watch?v=1nzHSkY4K18
Regarding drawing images and text, Qt has QPainter, which can help you do that. Use QGLContext (http://qt-project.org/doc/qt-4.8/qglcontext.html) to pass it the current context and render your label/image/widget at the specific coordinates, but make sure that this call is the last thing in your paint event, otherwise it will not be on top.
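A minimal sketch of that ordering, assuming a QGLWidget subclass (drawScene(), fps and hudImage are placeholder names):

    #include <QPainter>

    void GLWidget::paintGL()
    {
        drawScene();                        // all raw OpenGL drawing first

        // QPainter last, so the text/images end up on top of the GL frame.
        QPainter painter(this);
        painter.setRenderHint(QPainter::Antialiasing);
        painter.setPen(Qt::white);
        painter.drawText(10, 20, QString("FPS: %1").arg(fps));
        painter.drawImage(QPoint(10, 30), hudImage);
        painter.end();                      // finish before the buffer swap
    }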
To react to mouse events, include the QMouseEvent header in your class. You can then get the coordinates of the mouse within the widget and pass them to OpenGL by overriding the protected event handler: mouseMoveEvent(QMouseEvent *event) { event->pos(); }