[osg-users] How to ensure stable execution of the GPU?

Alf Inge Iversen alf-inge at autosim.no
Fri Jul 18 14:50:31 PDT 2008


We have a problem with significant stepping in our visualization application, and have traced the problem down to how execution of the GPU is triggered.

First some general information:
We are currently using OSG 2.2 (the application was upgraded to OSG 2.4 a few weeks ago, but this new version will not be released to customers until late fall because it must be synchronized with the release of other software modules, so we are stuck with 2.2 for now). The systems run on the Windows XP platform, using Nvidia boards in the range 7600GS to 8800GTS. The viewer application is based on the osgViewer::Viewer class and displays two channels (a main channel with an embedded mirror channel). The test described below was run on a dual-core 3.4 GHz processor and a 7900GS graphics board.

By visually observing the OSG stats, together with printouts to the console window to detect lost real-time position data, we have observed that glitches occur even when no datasets are missing and the computing time is well within the available time slot. Watching the stats, we realized that GPU execution is sometimes performed within the current frame (see attached image GPU-early.jpg) and sometimes in the next frame (see GPU-late.jpg). The shift from one frame to the next, or vice versa, can happen several times a second, or only now and then. Every shift produces a noticeable glitch in the visualization.

Q1. What causes the GPU to wait before executing in some frames, while in other frames it executes immediately when new data is available?

Q2. Is it possible to force the GPU to wait to process the available data until a given signal (for example vertical sync)? If so, how?
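One platform-specific configuration sketch we are considering (an assumption on my part, not something we have verified on all our boards): on Windows the swap interval can be set through the WGL_EXT_swap_control extension, run from a realize operation so a GL context is current. Note the driver control panel can override this setting, and it locks only the buffer swap, not the start of GPU processing.

```cpp
// Sketch: enable sync-to-vblank on Windows via wglSwapIntervalEXT.
// Must be run on a thread with a current GL context, e.g. installed with
// viewer.setRealizeOperation() (hook assumed available in this osgViewer era).
#include <windows.h>
#include <GL/gl.h>
#include <osg/GraphicsContext>
#include <osg/GraphicsThread>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

struct EnableVSyncOperation : public osg::GraphicsOperation
{
    EnableVSyncOperation() : osg::GraphicsOperation("EnableVSync", false) {}

    virtual void operator()(osg::GraphicsContext* /*gc*/)
    {
        PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
            (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
        if (wglSwapIntervalEXT)
            wglSwapIntervalEXT(1);   // swap at most once per vertical retrace
    }
};

// usage: viewer.setRealizeOperation(new EnableVSyncOperation);
```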

Some latency is acceptable if the image is stable. To ensure a stable image, it should be possible to lock the start of GPU processing to the vertical sync; the GPU would then process the data delivered by the Draw dispatcher in the previous frame. This would give a stable image and also increase the available processing time (a full 16 ms for Update, Cull and Draw, and another full 16 ms for the GPU).

It has been mentioned that multithreading could help. But as the attached pictures show, there is no relationship between load and when the GPU processes, so the improved efficiency from multithreading would still not ensure stable GPU processing. Very brief tests with our newer version show the same tendency to glitch in multithreaded mode.
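For reference, this is the kind of viewer configuration we tried in those brief tests (a sketch of the osgViewer threading-model API present in the OSG 2.x series); as noted, it only moves work onto other threads and does not by itself pin GPU execution to the retrace.

```cpp
// Sketch: selecting a threading model in osgViewer.
#include <osgViewer/Viewer>

void configureThreading(osgViewer::Viewer& viewer)
{
    // e.g. run cull and draw on a separate thread per graphics context
    viewer.setThreadingModel(osgViewer::Viewer::CullDrawThreadPerContext);
}
```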

We appreciate any suggestions that help us solve our problem.

-Alf Inge

Mr. Alf Inge Iversen, VP Engineering
AutoSim AS, Strandveien 106, 9006 Tromsø, Norway
Visit us at http://www.autosim.no
-------------- next part --------------
A non-text attachment was scrubbed...
Name: GPU-early.JPG
Type: image/jpeg
Size: 30323 bytes
Desc: GPU-early.JPG
URL: <http://lists.openscenegraph.org/pipermail/osg-users-openscenegraph.org/attachments/20080718/5c7dead6/attachment-0004.jpeg>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: GPU-late.JPG
Type: image/jpeg
Size: 29191 bytes
Desc: GPU-late.JPG
URL: <http://lists.openscenegraph.org/pipermail/osg-users-openscenegraph.org/attachments/20080718/5c7dead6/attachment-0005.jpeg>