I'm seeing major performance issues in a QML app I wrote to show a point cloud in a Scene3D. With 1,000 points/sec it's alright, but at 10,000 it basically halts my entire computer. The goal is to get up into the millions of points, which is what our old app (a Qt/VTK mixture) could do before slowing down.
I'm worried that I'm not offloading processing into another thread, or not rendering properly. This is my first Qt project and I'm new to all of this.
Basically, I build a circular_buffer of points (each point is 32 bytes), which I copy into a QByteArray on a custom QGeometry, on an Entity. This entity has a material that runs a vertex and fragment shader.
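For reference, the copy step amounts to packing each 32-byte point into one contiguous byte array (the QByteArray that backs the Qt3D buffer). A minimal sketch in plain C++, with a hypothetical Point layout (position + colour + padding) standing in for my real point type:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical 32-byte point layout; the real app's layout may differ,
// but the stride handed to the geometry attribute must match it.
struct Point {
    float position[3]; // 12 bytes
    float colour[3];   // 12 bytes
    float pad[2];      //  8 bytes of padding/extra attributes -> 32 total
};
static_assert(sizeof(Point) == 32, "vertex stride must be 32 bytes");

// Pack a batch of points into one contiguous byte buffer -- the
// equivalent of filling the QByteArray assigned to the Qt3D Buffer.
std::vector<std::uint8_t> packPoints(const std::vector<Point>& points) {
    std::vector<std::uint8_t> bytes(points.size() * sizeof(Point));
    std::memcpy(bytes.data(), points.data(), bytes.size());
    return bytes;
}
```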
Is there something I can do to increase the performance?
material:
import Qt3D.Core 2.0
import Qt3D.Render 2.0

Material {
    effect: Effect {
        techniques: Technique {
            renderPasses: RenderPass {
                shaderProgram: ShaderProgram {
                    vertexShaderCode: loadSource("qrc:/shaders/pointcloud.vert")
                    fragmentShaderCode: loadSource("qrc:/shaders/pointcloud.frag")
                }
                renderStates: [
                    PointSize { sizeMode: PointSize.Programmable } // supported since OpenGL 3.2
                ]
            }
            graphicsApiFilter {
                api: GraphicsApiFilter.OpenGL
                profile: GraphicsApiFilter.CoreProfile
                majorVersion: 4
                minorVersion: 3
            }
        }
    }
    // some parameters...
}
My shaders are pretty simple:
vertex:
#version 430

layout(location = 1) in vec3 vertexPosition;

out VertexBlock
{
    flat vec3 col;
    vec3 pos;
    vec3 normal;
} v_out;

uniform mat4 modelView;
uniform mat3 modelViewNormal;
uniform mat4 mvp;
uniform mat4 projectionMatrix;
uniform mat4 viewportMatrix;
uniform float pointSize;
uniform float maxDistance;

void main()
{
    vec3 vertexNormal = vec3(1.0, 1.0, 1.0);
    v_out.normal = normalize(modelViewNormal * vertexNormal);
    v_out.pos = vec3(modelView * vec4(vertexPosition, 1.0));
    float c = (vertexPosition[0] * vertexPosition[0] + vertexPosition[1] * vertexPosition[1]) * maxDistance;
    v_out.col = vec3(c, c, 0.5);
    gl_Position = mvp * vec4(vertexPosition, 1.0);
    gl_PointSize = viewportMatrix[1][1] * projectionMatrix[1][1] * pointSize / gl_Position.w;
}
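For what it's worth, that last line scales a world-space point size to pixels: for a standard symmetric perspective projection, projectionMatrix[1][1] is 1/tan(fovy/2) and viewportMatrix[1][1] is half the viewport height, so dividing by the clip-space w gives a perspective-correct size that shrinks with distance. A small sketch of the same arithmetic on the CPU (the fov and viewport values below are made up for illustration):

```cpp
#include <cmath>

// Mirror of the shader's computation:
//   gl_PointSize = viewportMatrix[1][1] * projectionMatrix[1][1] * pointSize / w
float pointSizePixels(float viewportHeight, float fovyRadians,
                      float pointSize, float w) {
    float proj11 = 1.0f / std::tan(fovyRadians / 2.0f); // projectionMatrix[1][1]
    float vp11   = viewportHeight / 2.0f;               // viewportMatrix[1][1]
    return vp11 * proj11 * pointSize / w;
}
```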
fragment:
#version 430

in VertexBlock
{
    flat vec3 col;
    vec3 pos;
    vec3 normal;
} frag_in;

out vec4 colour;

void main()
{
    colour = vec4(frag_in.col, 1.0);
}
Renderer:
import Qt3D.Core 2.0
import Qt3D.Render 2.0
import "Cameras"

RenderSettings {
    id: root

    property CameraSet cameraSet: CameraSet {
        id: cameraSet
    }
    property real userViewWidth: 0.79
    property real topOrthoViewHeight: 0.79

    activeFrameGraph: Viewport {
        id: viewport
        normalizedRect: Qt.rect(0.0, 0.0, 1.0, 1.0)

        RenderSurfaceSelector {
            ClearBuffers {
                buffers: ClearBuffers.ColorDepthBuffer
                clearColor: theme.cSceneClear
                NoDraw {}
            }
            Viewport {
                id: userViewport
                normalizedRect: Qt.rect(0, 0, userViewWidth, 1.0)
                CameraSelector {
                    id: userCameraSelectorViewport
                    camera: cameraSet.user.camera
                }
            }
            // Two other viewports...
        }
    }
}
Entity:
Entity {
    property PointBuffer buffer: PointBuffer {
        id: pointBuffer
    }

    PointsMaterial {
        id: pointsMaterial
        dataBuffer: pointBuffer
    }

    Entity {
        id: particleRenderEntity
        property GeometryRenderer particlesRenderer: GeometryRenderer {
            instanceCount: buffer.count
            primitiveType: GeometryRenderer.Points
            geometry: PointGeometry { buffer: pointBuffer }
        }
        components: [
            particlesRenderer,
            pointsMaterial
        ]
    }
}
Found the problem, and it wasn't in the info I originally posted.
In the entity, I had instanceCount: buffer.count, but in my geometry I write the entire buffer in one step. Therefore, I was effectively squaring the size of my buffer. The solution was to set instanceCount: 1.
I had puzzled over this line before, even removing it, but I suspect it then defaulted to that same value, and I didn't understand from the QML docs what exactly it would do.
In any case, the real use of instanceCount is for geometries like SphereGeometry, which build a buffer for each point: given a point, they build the vertices and indices to render a sphere around it. (I'm not sure why they don't do this in a geometry shader.)
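To put numbers on the squaring effect: with N points already in the buffer and instanceCount also set to N, the vertex stage runs N × N times per frame. A quick sanity-check sketch (plain arithmetic, not the Qt3D API):

```cpp
#include <cstdint>

// Vertex-shader invocations per frame: vertices drawn per instance
// multiplied by the number of instances requested.
std::uint64_t vertexInvocations(std::uint64_t vertexCount,
                                std::uint64_t instanceCount) {
    return vertexCount * instanceCount;
}
```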