[osg-users] OSG-based excavator simulator

Jean-Sébastien Guay jean-sebastien.guay at cm-labs.com
Thu Apr 28 12:15:15 PDT 2011


Hi Paul,

> Hi J-S -- It looks great. But I do have a couple questions.

No problem, I'll answer as best I can. :-)

> How flexible is your art pipeline / rendering process for loading
> arbitrary models? Could you replace that excavator with some arbitrary
> CAD machinery, such as a tractor, and get the same visual fidelity? Or
> is there quite a bit of per-model hands-on modeling time in the art
> pipeline to create the necessary specular and normal maps, dirt, scuff
> marks, etc?

I'm surprised you're asking me this type of question...

An artist is really needed in order to get good-looking models, 
IMHO. Doing too much procedurally becomes incredibly complex and 
gives so-so results most of the time, and you end up handling lots of 
corner cases and skirting around visual artifacts.

You also mention CAD models, which are another pet peeve of mine. CAD 
data is just not made for real-time use: you get lots of geometry 
where you need very little, and vice versa. A human modeler who knows 
how to build for real-time can make good use of textures where 
possible and put polygons only where they're needed.

In our case, we have an in-house modeler who makes models specifically 
for real-time use. When we need a new vehicle for a simulator, he will 
typically start by gathering as many reference photos as possible and 
then make the model. He'll then figure out which sections will benefit 
most from normal and specular maps (where there's not much fine 
detail, they're not needed, and a simpler shader can be used).

> Also, how are you doing the transparency (e.g., the excavator cab
> windows)? Is that simple RenderBin back-to-front rendering, or are you
> using an order-independent technique? If the latter, how are you
> implementing it?

Simple render bin technique. The windows are two polygons with 
opposing normals, with backface culling enabled so they don't fight 
with each other; only one is visible from any angle. All the windows 
are in separate geometry objects so they sort correctly (which is bad 
for draw time, I know, but used sparingly it's OK). And the windows on 
the inside have a different shader than the ones on the outside, so 
they give a double-sided-mirror kind of effect (the glass looks darker 
from outside the cab than from inside).
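
Here's a minimal sketch of that state setup in OSG (the helper name 
and values are mine, not our actual code):

#include <osg/Geometry>
#include <osg/StateSet>
#include <osg/BlendFunc>
#include <osg/CullFace>

// Configure one window pane: transparent bin so it's depth-sorted
// back-to-front against the other panes, alpha blending, and
// backface culling so the two opposing-normal polygons of a window
// never fight - only the one facing the viewer passes the cull.
void setupWindowPane(osg::Geometry* pane)
{
    osg::StateSet* ss = pane->getOrCreateStateSet();

    ss->setRenderingHint(osg::StateSet::TRANSPARENT_BIN);
    ss->setMode(GL_BLEND, osg::StateAttribute::ON);
    ss->setAttributeAndModes(new osg::BlendFunc(
        osg::BlendFunc::SRC_ALPHA,
        osg::BlendFunc::ONE_MINUS_SRC_ALPHA));
    ss->setAttributeAndModes(new osg::CullFace(osg::CullFace::BACK));
}

Keeping each pane in its own osg::Geometry is what lets the 
transparent bin's depth sort put them in the right order.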

> Are the trees in the background simple billboards, or truly 3D so that
> the viewpoint can be placed inside them?

No, just billboards, and in fact we had to choose our shots carefully 
when making the video so we wouldn't see any sorting artifacts, 
because I didn't want to take the time to separate them all into 
individual drawables. :-)
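
For the record, the fix would look something like this sketch (one 
quad per tree, so each gets its own depth in the transparent bin; 
names and sizes are made up):

#include <osg/Billboard>
#include <osg/Geometry>
#include <vector>

// Build an axial-rotation billboard with one drawable per tree, so
// the transparent bin can depth-sort the trees individually.
osg::ref_ptr<osg::Billboard> makeTrees(
    const std::vector<osg::Vec3>& positions)
{
    osg::ref_ptr<osg::Billboard> trees = new osg::Billboard;
    trees->setMode(osg::Billboard::AXIAL_ROT);   // spin around the trunk
    trees->setAxis(osg::Vec3(0.0f, 0.0f, 1.0f));
    trees->setNormal(osg::Vec3(0.0f, -1.0f, 0.0f));

    for (size_t i = 0; i < positions.size(); ++i)
    {
        // 4m x 8m quad; the tree texture goes on its StateSet.
        osg::Geometry* quad = osg::createTexturedQuadGeometry(
            osg::Vec3(-2.0f, 0.0f, 0.0f),        // corner
            osg::Vec3( 4.0f, 0.0f, 0.0f),        // width vector
            osg::Vec3( 0.0f, 0.0f, 8.0f));       // height vector
        trees->addDrawable(quad, positions[i]);
    }
    return trees;
}

Batching all the trees into a single geometry draws faster, which is 
exactly the trade-off we didn't take the time to undo.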

> Is the atmospheric scattering simple OpenGL fog, or something more complex?

It's nothing as sophisticated as atmospheric scattering - we called it 
atmospheric dust. :-) It's just large particle billboards, with a few 
classic smoke rendering techniques:

- a depth buffer test to avoid artifacts where a billboard gets 
clipped by opaque scene geometry: we lower the alpha where the 
billboard comes close to the ground (the "soft particles" technique - 
search the NVIDIA samples for a paper on that); see the sketch below
- the billboards rotate slowly to make the dust seem like it's billowing
- they also fade in and out for the same reason.

All three things are done in the vertex / fragment shaders.
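
The depth-fade part looks roughly like this (uniform names are mine, 
and it assumes the opaque pass's depth buffer is available as a 
texture on the dust StateSet):

#include <osg/Program>
#include <osg/Shader>

// Fragment shader for the dust billboards: fade the alpha where the
// billboard's depth gets close to the opaque scene's depth.
static const char* dustFragSrc =
    "uniform sampler2D u_dustTex;      // particle texture\n"
    "uniform sampler2D u_sceneDepth;   // depth of the opaque pass\n"
    "uniform vec2  u_invViewport;      // 1.0 / viewport size\n"
    "uniform float u_near, u_far;      // camera planes\n"
    "uniform float u_fadeScale;        // e.g. 0.5 = fade over 2m\n"
    "\n"
    "float linearize(float z)\n"
    "{\n"
    "    // depth-buffer value -> eye-space distance\n"
    "    return u_near * u_far / (u_far - z * (u_far - u_near));\n"
    "}\n"
    "\n"
    "void main()\n"
    "{\n"
    "    vec4 c = texture2D(u_dustTex, gl_TexCoord[0].st);\n"
    "    vec2 uv = gl_FragCoord.xy * u_invViewport;\n"
    "    float sceneZ = linearize(texture2D(u_sceneDepth, uv).r);\n"
    "    float dustZ  = linearize(gl_FragCoord.z);\n"
    "    // 0 where the billboard touches geometry, 1 far in front\n"
    "    float fade = clamp((sceneZ - dustZ) * u_fadeScale, 0.0, 1.0);\n"
    "    gl_FragColor = vec4(c.rgb, c.a * fade);\n"
    "}\n";

osg::Program* makeDustProgram()
{
    osg::Program* program = new osg::Program;
    program->addShader(new osg::Shader(osg::Shader::FRAGMENT, dustFragSrc));
    return program;
}

The slow rotation and the fade in/out would be driven the same way, 
from a time-based uniform in the same shaders.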

> I didn't notice any screen-space post-rendering effects (such as depth
> of field), is that correct? If I'm wrong, what screen-space effects are
> you doing, and did you encounter any difficulties with generalizing them
> into your rendering system?

We didn't do depth of field, but our HDR rendering is a screen-space 
post-rendering effect. It gives a nice light bloom.

We used osgPPU to do it, and osgPPU comes with an HDR example, which 
we modified slightly to make the light adaptation effect less 
pronounced. In retrospect, the way osgPPU does its post-rendering 
effects limits some other things we want to do with our pipeline, so 
we'll probably do the HDR ourselves sometime in the near future (or 
try to find ways to work around the limitations). But it was a great 
way to get the effect going quickly.
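
For anyone curious, the core of the bloom is just a bright-pass 
filter whose result gets blurred and added back over the scene. A 
generic sketch of the bright pass (not osgPPU's actual code - the 
threshold and names are made up):

#include <osg/Program>
#include <osg/Shader>

// Bright pass: keep only the pixels above a luminance threshold;
// the result is then blurred and composited back onto the scene.
static const char* brightPassSrc =
    "uniform sampler2D u_scene;      // HDR scene color\n"
    "uniform float u_threshold;      // e.g. 0.8\n"
    "void main()\n"
    "{\n"
    "    vec4 c = texture2D(u_scene, gl_TexCoord[0].st);\n"
    "    float lum = dot(c.rgb, vec3(0.2126, 0.7152, 0.0722));\n"
    "    gl_FragColor = (lum > u_threshold)\n"
    "        ? c : vec4(0.0, 0.0, 0.0, 1.0);\n"
    "}\n";

osg::Program* makeBrightPassProgram()
{
    osg::Program* program = new osg::Program;
    program->addShader(new osg::Shader(osg::Shader::FRAGMENT, brightPassSrc));
    return program;
}

osgPPU's HDR example chains steps like these (plus the luminance 
adaptation we toned down), which is why it was so quick to get going.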

> I'm always looking for more general ways to make stuff like this work,
> so I'm just trying to figure out how much of this demo was
> general-purpose, and how much was non-general, suitable only for demos.

Because of the time constraints, we did a lot of quick-and-dirty 
coding. In the weeks after the demo was done and shown to the client, 
we did a lot of clean-up work, and we still have some left to do (for 
me, mostly the osgPPU limitations I mentioned above).

But since we did the demo, everyone in the company wants to use 
similar effects in their projects (clients always want the 
best-looking simulator they can get, at least when making the 
purchasing decision - after that it becomes less important), so I'm 
making sure what we did is as general as it can be.

J-S
-- 
______________________________________________________
Jean-Sebastien Guay    jean-sebastien.guay at cm-labs.com
                                http://www.cm-labs.com/
                     http://whitestar02.dyndns-web.com/


