 

Graph rendering using 3D acceleration

We generate graphs for huge datasets. We are talking 4096 samples per second, and 10 minutes per graph. A simple calculation gives 4096 * 60 * 10 = 2,457,600 samples per line graph. Each sample is a double-precision (8 byte) floating-point value. Furthermore, we render multiple line graphs on one screen, up to about a hundred. This means we render about 25M samples in a single screen. Using common sense and simple tricks, we can get this code performant using the CPU, drawing this on a 2D canvas. Performant meaning that the render times fall below one minute. As this is scientific data, we cannot omit any samples. Seriously, this is not an option. Do not even start thinking about it.

Naturally, we want to improve render times using all techniques available. Multicore, pre-rendering, and caching are all quite interesting but do not cut it. We want 30 FPS rendering with these datasets at minimum, 60 FPS preferred. We know this is an ambitious goal.

A natural way to offload graphics rendering is to use the GPU of the system. GPUs are made to work with huge datasets and process them in parallel. Some simple HelloWorld tests showed us a night-and-day difference in rendering speed when using the GPU.

Now the problem is: GPU APIs such as OpenGL, DirectX and XNA are designed with 3D scenes in mind. Thus, using them to render 2D line graphs is possible, but not ideal. In the proofs of concept we developed, we found that we need to transform the 2D world into a 3D world. Suddenly we have to work with an XYZ coordinate system, with polygons, vertices and more of that goodness. That is far from ideal from a development perspective. Code gets unreadable, maintenance is a nightmare, and more issues boil up.

What would your suggestion or idea be to do this in 3D? Is the only way to do this to actually convert between the two systems (2D coordinates versus 3D coordinates & entities)? Or is there a sleeker way to achieve this?

Why is it useful to render multiple samples on one pixel? Because it represents the dataset better. Say that on one pixel you have the values 2, 5 and 8. Due to some sample-omitting algorithm, only the 5 is drawn. The line would only go to 5, and not to 8, hence the data is distorted. You could argue for the opposite too, but the fact of the matter is that the first argument counts for the datasets we work with. This is exactly the reason why we cannot omit samples.

asked Oct 20 '08 by user29688


4 Answers

I'd like to comment on your assertion that you cannot omit samples, on the back of tgamblin's answer.

You should think of the data that you're drawing to the screen as a sampling problem. You're talking about 2.4M points of data, and you're trying to draw that to a screen that is only a few thousand pixels across (at least I assume that it is, since you're worried about 30fps refresh rates).

So that means that for every pixel on the x axis you're rendering on the order of 1000 points that you don't need to. Even if you do go down the path of utilising your GPU (e.g. through the use of OpenGL), that is still a great deal of work that the GPU needs to do for lines that aren't going to be visible.

A technique that I have used for presenting sample data is to generate a set of data that is a subset of the whole set, just for rendering. For a given pixel in the x axis (ie. a given x axis screen coordinate) you need to render an absolute maximum of 4 points - that is the minimum y, maximum y, leftmost y and rightmost y. That will render all of the information that can be usefully rendered. You can still see the minima and maxima, and you retain the relationship to the neighbouring pixels.

With this in mind, you can work out the number of samples that will fall into the same pixel in the x axis (think of them as data "bins"). Within a given bin, you can then determine the particular samples for maxima, minima etc.
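For illustration, here is a minimal C++ sketch of that kind of min/max binning. The function and type names are my own, and it assumes samples are evenly spaced along x and binned uniformly per pixel column:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // One reduced "bin" per x-pixel column: the first, minimum, maximum and
    // last sample that fall into that column.
    struct Bin {
        double first, min, max, last;
    };

    // Reduce `samples` to at most one Bin per horizontal pixel. Uniform
    // binning by sample index is assumed; adapt the mapping to your real x axis.
    std::vector<Bin> decimateForDisplay(const std::vector<double>& samples,
                                        std::size_t pixelWidth)
    {
        std::vector<Bin> bins;
        if (pixelWidth == 0 || samples.empty())
            return bins;

        const std::size_t perBin = (samples.size() + pixelWidth - 1) / pixelWidth; // ceil
        bins.reserve(pixelWidth);

        for (std::size_t begin = 0; begin < samples.size(); begin += perBin) {
            const std::size_t end = std::min(begin + perBin, samples.size());
            Bin b{samples[begin], samples[begin], samples[begin], samples[end - 1]};
            for (std::size_t i = begin + 1; i < end; ++i) {
                b.min = std::min(b.min, samples[i]);
                b.max = std::max(b.max, samples[i]);
            }
            bins.push_back(b);
        }
        return bins;
    }

Each Bin can then be drawn as a vertical segment from min to max, with first/last keeping the line connected to the neighbouring columns, and the whole reduction is recomputed only when the view changes.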

To reiterate, this is only a subset that is used for display, and it is only appropriate until the display parameters change. E.g. if the user scrolls the graph or zooms, you need to recalculate the render subset.

You can do this if you are using OpenGL, but since OpenGL uses a normalised coordinate system (and you're interested in real-world screen coordinates) you will have to work a little harder to accurately determine your data bins. This will be easier without using OpenGL, but then you don't get the full benefit of your graphics hardware.

answered by Andrew Edgecombe


A really popular toolkit for scientific visualization is VTK, and I think it suits your needs:

  1. It's a high-level API, so you won't have to use OpenGL (VTK is built on top of OpenGL). There are interfaces for C++, Python, Java, and Tcl. I think this would keep your codebase pretty clean.

  2. You can import all kinds of datasets into VTK (there are tons of examples from medical imaging to financial data).

  3. VTK is pretty fast, and you can distribute VTK graphics pipelines across multiple machines if you want to do very large visualizations.

  4. Regarding:

    This means we render about 25M samples in a single screen.

    [...]

    As this is scientific data, we cannot omit any samples. Seriously, this is not an option. Do not even start thinking about it.

You can render large datasets in VTK by sampling and by using LOD models. That is, you'd have a model where you see a lower-resolution version from far out, but if you zoom in you would see a higher-resolution version. This is how a lot of large dataset rendering is done.

You don't need to eliminate points from your actual dataset, but you can surely incrementally refine it when the user zooms in. It does you no good to render 25 million points to a single screen when the user can't possibly process all that data. I would recommend that you take a look at both the VTK library and the VTK user guide, as there's some invaluable information in there on ways to visualize large datasets.
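As a rough illustration, here is a minimal VTK (C++) sketch that feeds one line graph through the pipeline as a poly-line and renders it via a vtkLODActor, so a cheaper representation can be used while the user interacts. The function name and the x = sample-index mapping are my own assumptions, and it presumes a reasonably recent VTK (8+) with the standard rendering modules:

    #include <vector>

    #include <vtkCellArray.h>
    #include <vtkLODActor.h>
    #include <vtkNew.h>
    #include <vtkPoints.h>
    #include <vtkPolyData.h>
    #include <vtkPolyDataMapper.h>
    #include <vtkRenderWindow.h>
    #include <vtkRenderWindowInteractor.h>
    #include <vtkRenderer.h>

    // Render one line graph: sample index on x, sample value on y, z fixed at 0.
    void renderLineGraph(const std::vector<double>& samples)
    {
        vtkNew<vtkPoints> points;
        vtkNew<vtkCellArray> lines;
        lines->InsertNextCell(static_cast<vtkIdType>(samples.size()));
        for (vtkIdType i = 0; i < static_cast<vtkIdType>(samples.size()); ++i) {
            points->InsertNextPoint(static_cast<double>(i), samples[i], 0.0);
            lines->InsertCellPoint(i);
        }

        vtkNew<vtkPolyData> polyData;
        polyData->SetPoints(points);
        polyData->SetLines(lines);

        vtkNew<vtkPolyDataMapper> mapper;
        mapper->SetInputData(polyData);

        vtkNew<vtkLODActor> actor;  // drops to a lower level of detail during interaction
        actor->SetMapper(mapper);

        vtkNew<vtkRenderer> renderer;
        renderer->AddActor(actor);

        vtkNew<vtkRenderWindow> window;
        window->AddRenderer(renderer);

        vtkNew<vtkRenderWindowInteractor> interactor;
        interactor->SetRenderWindow(window);
        window->Render();
        interactor->Start();
    }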

answered by Todd Gamblin


You really don't have to worry about the Z-axis if you don't want to. In OpenGL (for example), you can specify XY vertices (with implicit Z=0), turn off the z-buffer, use a non-projective projection matrix, and hey presto, you're in 2D.
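To make that concrete, here is a minimal sketch using legacy fixed-function OpenGL. The function name and the window-pixel coordinate mapping are my own assumptions, and it presumes a GL context already exists:

    #include <GL/gl.h>

    // Draw one line graph in plain 2D: orthographic projection in window
    // pixels, depth test off, vertices specified as XY only (Z defaults to 0).
    void drawLineGraph2D(const float* xs, const float* ys, int count,
                         int windowWidth, int windowHeight)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glOrtho(0.0, windowWidth, 0.0, windowHeight, -1.0, 1.0); // no perspective

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        glDisable(GL_DEPTH_TEST); // the z-buffer plays no role

        glBegin(GL_LINE_STRIP);
        for (int i = 0; i < count; ++i)
            glVertex2f(xs[i], ys[i]); // 2D vertex, implicit Z = 0
        glEnd();
    }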

answered by Menkboy


Mark Bessey mentioned that you might lack the pixels to display the graph, but given your explanations, I assume you know what you are doing.

OpenGL has an orthographic mode in which the z-coordinate stays inside (0;1). There is no perspective projection; the polygons you draw will be parallel to the screen clipping area.

DirectX has something similar. In OpenGL, it's called gluOrtho2D().
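For reference, a minimal sketch of that projection setup; gluOrtho2D(left, right, bottom, top) is just glOrtho with the near and far planes fixed at -1 and 1, and the function name here is illustrative:

    #include <GL/glu.h>

    // Set up a 2D orthographic projection so drawing coordinates map
    // directly to window pixels; no perspective is applied.
    void setupOrtho2D(int windowWidth, int windowHeight)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluOrtho2D(0.0, windowWidth, 0.0, windowHeight);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
    }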

answered by mstrobl