[osg-users] Computing Normals for Drawables

Andrew Burnett-Thompson aburnettthompson at googlemail.com
Mon Nov 9 02:21:50 PST 2009


Hi guys,

Interesting conversation on solving this problem. I'm working with James on
this and he's away this week, so I thought I'd reply to keep the
conversation going, and also to thank you for your input.

Just to throw another requirement into the mix: I don't think we can use
vertex shaders, as we cannot guarantee what hardware the software will run
on. Integrated graphics is the lowest-end hardware we have to support, so
what's that, DirectX 8? 9? Vertex shaders run on DX8, but you won't get many
instructions.

Can this not be done on the CPU side only? Interestingly, we did float the
idea of computing a shared normal array (a sphere coated with normals) for
all objects in the scene, then providing indices from each Geometry's vertices
into the shared normal array. However, we really didn't know whether this was
possible in OSG, or whether it would work.

A quick look at the osg::Geometry source code shows there is a function
setNormalIndices, so perhaps this could work if we were to create a
NodeVisitor that did the following?

1. On creation of the NodeVisitor, create an osg::Vec3Array for the shared
normals, populated with unit normals in all directions.
2. As the NodeVisitor walks the tree, for each Geometry object use the code
from SmoothingVisitor to calculate what each normal should be.
3. Rather than storing that normal in an array and passing it to the Geometry,
find the closest normal to it in the shared normal array.
4. Store the index of that shared normal, and finally set the indices on the
Geometry using setNormalIndices (see the sketch below).
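
Something like the following rough, untested sketch is what I have in mind.
The class name is ours, the index-array typedef is the one used in the
osggeometry example, and it assumes setNormalIndices() takes an
osg::IndexArray, as it appears to in the source. For brevity it remaps
whatever per-vertex normals are already on each Geometry; the real version
would run the SmoothingVisitor calculation first (step 2):

#include <cmath>
#include <osg/Array>
#include <osg/Geode>
#include <osg/Geometry>
#include <osg/Math>
#include <osg/NodeVisitor>

// Index array type as used in the osggeometry example.
typedef osg::TemplateIndexArray<unsigned int, osg::Array::UIntArrayType, 4, 4>
    UIntIndexArray;

class SharedNormalVisitor : public osg::NodeVisitor
{
public:
    SharedNormalVisitor()
        : osg::NodeVisitor(TRAVERSE_ALL_CHILDREN),
          _sharedNormals(new osg::Vec3Array)
    {
        // 1. Coat a sphere with unit normals (latitude/longitude sampling).
        const int steps = 16;
        for (int i = 0; i <= steps; ++i)
        {
            double theta = osg::PI * double(i) / steps;    // inclination
            for (int j = 0; j < 2 * steps; ++j)
            {
                double phi = osg::PI * double(j) / steps;  // azimuth
                _sharedNormals->push_back(osg::Vec3(
                    std::sin(theta) * std::cos(phi),
                    std::sin(theta) * std::sin(phi),
                    std::cos(theta)));
            }
        }
    }

    virtual void apply(osg::Geode& geode)
    {
        for (unsigned int i = 0; i < geode.getNumDrawables(); ++i)
        {
            osg::Geometry* geom = geode.getDrawable(i)->asGeometry();
            if (geom) remapNormals(*geom);
        }
        traverse(geode);
    }

private:
    // 3./4. Replace each per-vertex normal by an index into the shared array.
    void remapNormals(osg::Geometry& geom)
    {
        osg::Vec3Array* normals =
            dynamic_cast<osg::Vec3Array*>(geom.getNormalArray());
        if (!normals) return;  // step 2 (SmoothingVisitor logic) would go here

        osg::ref_ptr<UIntIndexArray> indices = new UIntIndexArray;
        for (unsigned int i = 0; i < normals->size(); ++i)
            indices->push_back(closestIndex((*normals)[i]));

        geom.setNormalArray(_sharedNormals.get());
        geom.setNormalIndices(indices.get());
        geom.setNormalBinding(osg::Geometry::BIND_PER_VERTEX);
    }

    // Brute-force nearest neighbour: the largest dot product wins.
    unsigned int closestIndex(const osg::Vec3& n) const
    {
        unsigned int best = 0;
        float bestDot = -2.0f;
        for (unsigned int i = 0; i < _sharedNormals->size(); ++i)
        {
            float d = n * (*_sharedNormals)[i];  // osg::Vec3 dot product
            if (d > bestDot) { bestDot = d; best = i; }
        }
        return best;
    }

    osg::ref_ptr<osg::Vec3Array> _sharedNormals;
};

We would create the visitor once and call root->accept(visitor) after load.
Memory-wise each vertex then costs a 4-byte index instead of a 12-byte Vec3,
at the price of slightly quantized shading.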

This would be done once, on load or creation of an object. Taking the
performance hit once per load wouldn't matter, nor would a modest slowdown in
rendering: we already employ aggressive culling to achieve a decent rendering
framerate when the object count is high. The hard limit is memory.

Thanks a lot for your input,

Regards,
Andrew

On Sat, Nov 7, 2009 at 7:46 AM, yann le paih <lepaih.yann at gmail.com> wrote:

> Hi,
>
> Interesting links about compact normal storage and more:
>
>
> http://aras-p.info/blog/2009/08/04/compact-normal-storage-for-small-g-buffers/
> http://aras-p.info/texts/CompactNormalStorage.html
>
> and
>
> http://c0de517e.blogspot.com/2008/10/normals-without-normals.html
>
> yann
>
>
> On Fri, Nov 6, 2009 at 10:51 PM, Jean-Sébastien Guay <
> jean-sebastien.guay at cm-labs.com> wrote:
>
>> Hi Simon,
>>
>>
>>> Ah, I didn't know that, ty. Seems like an odd limitation though.
>>
>> Well, the video cards are implemented to take in only certain types of
>> data. Vertices, normals and colors are in general aggregates of 32-bit
>> floats (which is why no vertex type is ever a vector of 3 doubles).
>>
>>
>>> How does the angles thing work? Not sure I follow you.
>>
>> Your intuition is right that the normals around a vertex will always lie on
>> a sphere. The thing is that you can encode them as spherical coordinates
>> (http://en.wikipedia.org/wiki/Spherical_coordinates), where r = 1 (since a
>> normal is always unit-length). So the direction can be encoded as two
>> angles, theta and phi, the inclination and azimuth (see the Wikipedia
>> article).
>>
>> Now, you already save one float that way over storing a 3-component
>> vector. If you want to save more, then since both angles fall in bounded
>> ranges, you can quantize each to 16 bits, store the two 16-bit values in
>> the lower and upper halves of a single 32-bit value, and re-extract them
>> in the shader.
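>>
>> In code, the packing could look something like this (an untested sketch;
>> the helper names are made up, and a shader-side decode would mirror the
>> unpack function):
>>
>> #include <cmath>
>>
>> const float PI = 3.14159265358979f;
>>
>> // Quantize inclination/azimuth to 16 bits each and pack them into one
>> // 32-bit word.
>> unsigned int packNormal(float nx, float ny, float nz)
>> {
>>     float theta = std::acos(nz);       // inclination in [0, pi]
>>     float phi   = std::atan2(ny, nx);  // azimuth in [-pi, pi]
>>     unsigned int t = (unsigned int)(theta / PI * 65535.0f + 0.5f);
>>     unsigned int p =
>>         (unsigned int)((phi + PI) / (2.0f * PI) * 65535.0f + 0.5f);
>>     return (t << 16) | (p & 0xFFFFu);
>> }
>>
>> void unpackNormal(unsigned int packed, float& nx, float& ny, float& nz)
>> {
>>     float theta = float(packed >> 16)     / 65535.0f * PI;
>>     float phi   = float(packed & 0xFFFFu) / 65535.0f * 2.0f * PI - PI;
>>     nx = std::sin(theta) * std::cos(phi);
>>     ny = std::sin(theta) * std::sin(phi);
>>     nz = std::cos(theta);
>> }
>>
>> (In a shader without integer ops you'd do the same extraction with floor()
>> and fract() arithmetic instead.)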
>>
>> Another way is to store only 2 of the 3 components of the normal; since a
>> normal is unit-length, you can recompute the third component easily (up to
>> its sign, which you'd have to store or fix by convention). Those numbers
>> are also small (less than or equal to 1 in magnitude), so you could do the
>> same quantization into two 16-bit values and store them in a single 32-bit
>> value. The spherical coordinates just came to mind first, as I've used them
>> before, but the result would be the same.
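>>
>> For the two-component variant, the reconstruction is one line, but note the
>> sign caveat above (again an untested sketch):
>>
>> #include <cmath>
>>
>> // Recover z from x and y of a unit normal. Only |z| is recoverable here;
>> // the sign needs a spare bit (or a convention) somewhere.
>> float reconstructZ(float nx, float ny)
>> {
>>     float zz = 1.0f - nx * nx - ny * ny;
>>     return std::sqrt(zz > 0.0f ? zz : 0.0f);  // clamp guards rounding error
>> }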
>>
>>
>>> If you did the normal lookup, wouldn't you save a whole load of shader
>>> processing time, or is memory bandwidth a bigger limitation with shaders?
>>
>> You'd save a little bit of processing time per vertex, but compared to a
>> texture lookup to get the right normal from the lookup table, I don't know
>> what would win. Seriously, what I've described above is child's play for a
>> GPU, and very easily parallelizable, which is what GPUs do best.
>>
>> It's not really a matter of bandwidth, since your lookup table would be
>> pretty small too. I expect you'd get similar performance in both cases; for
>> me it's just inconvenient to drag around an extra table when I don't have
>> to. The angles / quantized-vector methods above would fit nicely in the
>> object itself without needing any extra data.
>>
>>
>> J-S
>> --
>> ______________________________________________________
>> Jean-Sebastien Guay    jean-sebastien.guay at cm-labs.com
>>                               http://www.cm-labs.com/
>>                        http://whitestar02.webhop.org/
>
>
>
> --
> Yann Le Paih
> Keraudrono
> 56150 BAUD
> Portable: +33(0)610524356
> lepaih.yann at gmail.com
>
>

