The upcoming I3D conference on Interactive 3D Graphics and Games will feature several interesting papers! For instance, this year features papers such as "Cascaded Light Propagation Volumes for Real-Time Indirect Illumination" by Anton Kaplanyan (Crytek) and "Interactive Volume Caustics in Single-Scattering Media" by Wei Hu.



Another paper I found very interesting is "Fourier Opacity Mapping", by Jon Jansen and Louis Bavoil (yes, a French guy), both at NVIDIA. I could not resist implementing this nice paper.

Fourier Opacity Mapping (FOM) approximates light attenuation through a volume made of particles. Consider a spot light: the authors propose to reconstruct the light transmittance function along each ray using a Fourier series. The coefficients of the Fourier series are stored in the Fourier opacity map in light view space. This map is generated with the usual particle rendering, with the extinction coefficients projected into the Fourier basis in the fragment program. Then, when rendering the particles from the camera's point of view, the correct light attenuation for each particle can be recovered. I will not go into the details here: you can read the paper here and my report on my personal webpage. You can also see a video of my implementation of FOM here.
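To make the reconstruction concrete, here is a minimal Python sketch of the underlying math (my own illustration, not code from the paper; particle depths are assumed normalized to [0,1] along the light ray). Each particle contributes -ln(1 - alpha) at its depth, which is exactly what additive blending accumulates in the opacity map; the optical depth is then recovered by integrating the truncated Fourier series analytically:

```python
import math

def fom_coefficients(particles, num_k):
    """Accumulate Fourier-series coefficients of the extinction signal.
    Each particle (depth d, opacity alpha) contributes -ln(1 - alpha)
    at depth d, mimicking what additive blending sums in the map."""
    a = [0.0] * (num_k + 1)  # a[0] is the DC term
    b = [0.0] * (num_k + 1)  # b[0] is unused
    for d, alpha in particles:
        absorb = -math.log(1.0 - alpha)
        a[0] += 2.0 * absorb
        for k in range(1, num_k + 1):
            a[k] += 2.0 * absorb * math.cos(2.0 * math.pi * k * d)
            b[k] += 2.0 * absorb * math.sin(2.0 * math.pi * k * d)
    return a, b

def transmittance(a, b, z):
    """Reconstruct T(z) = exp(-optical depth), integrating the
    truncated Fourier series of the extinction from 0 to z."""
    tau = 0.5 * a[0] * z
    for k in range(1, len(a)):
        w = 2.0 * math.pi * k
        tau += (a[k] / w) * math.sin(w * z)
        tau += (b[k] / w) * (1.0 - math.cos(w * z))
    return math.exp(-tau)

# Three particles along a light ray: (normalized depth, opacity).
particles = [(0.25, 0.5), (0.5, 0.3), (0.75, 0.4)]
a, b = fom_coefficients(particles, num_k=7)
print(transmittance(a, b, 0.0))  # 1.0: nothing blocks the light yet
print(transmittance(a, b, 1.0))  # ~0.21 = 0.5 * 0.7 * 0.6, the exact total
```

Note that at z = 1 every sine and cosine term vanishes, so the total transmittance is exact regardless of how many coefficients are kept; the truncation only introduces error (ringing) at intermediate depths.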

Here are some screenshots of my implementation (Top: without FOM, Bottom with FOM):

A simple particle chain. Notice the correct order of light attenuation.

Colored light attenuation through a particle block.

A grey smoke volume with some red particles. Notice that the red particles attenuate the light correctly: they only affect the color of particles that receive the light after them.

I know that my screenshots are not really eye candy, but they show that this method is really efficient at simulating colored light extinction when passing through a volume of particles. Furthermore, if you want an example of good use by skilled artists, just have a look at the game Batman: Arkham Asylum, which implements this method.

~~UPDATE~~

My report is finally available on my website together with demo and open source code. :)

You need a recent video card to run the demo since it uses GLSL 1.5 and renders data into six buffers in one pass.


Awesome work!

Imagine this, together with cascaded light propagation volumes and virtual textures... hmmm... ;)

Oh yeah, that would be good!

And we could light the particles with the light propagation volume and also take into account their contribution to global illumination!

Aah, virtual textures... I need to try this one day...

So fun, Louis is the guy working with us at NV, I see him quite frequently... I will ask him for some info about FOM. We already tried to implement this sort of thing, but it's a little bit too expensive ;)

It would be cool to have your personal report on it of course ;)

Nice!

Indeed, the time to compute the FOM buffer will depend a lot on the number of particles and the fill rate (since it uses additive blending) from the light view. Then, you can use simple luminosity attenuation: in this case, you just need two RGBA16F buffers at the resolution you want. If you want per-color-component light attenuation, you need two buffers per color channel, which I think may be too much.
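To make the buffer budget concrete, here is a tiny back-of-the-envelope helper (my own sketch, not from the paper), packing four scalar Fourier coefficients per RGBA render target:

```python
import math

def mrt_count(coeffs_per_channel, num_channels, channels_per_target=4):
    """Number of RGBA render targets needed to store all Fourier
    coefficients, packing 4 scalar coefficients per target."""
    total = coeffs_per_channel * num_channels
    return math.ceil(total / channels_per_target)

print(mrt_count(8, 1))  # 2: luminance only, 8 coefficients -> 2 RGBA16F buffers
print(mrt_count(8, 3))  # 6: per-color-channel attenuation -> 6 buffers in one pass
```

With 8 coefficients per channel this matches the counts above: two buffers for luminance-only attenuation, six buffers in one pass for full per-channel color.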

Actually, we have a better two-pass method which allows us to get a better distribution. I asked our lead 3D programmer, and he already talked about that with Louis... We may try it in pre-production on another product, but not enough time to try it now :(

Good!

I'd like to hear more about this method! :)


Just a random, not really thought out, thought: Could this be used to fake subsurface scattering effects? I mean if you're already rendering it, you might as well try to use it for as many different things at the same time.

Yes, I was also thinking about this good idea. To use this method, you would need to render several parts inside the translucent model. I think you would need to consider the inside of the mesh as a list of particles (a discretization of the medium inside the mesh) with several different colors (think of internal organs, etc.).

However, using a Fourier series with high-opacity particles, e.g. bones inside a body, may result in very large ringing artifacts (see the paper for some examples). But for low-opacity objects, I think it would be good. Another advantage is that you can blur the Fourier opacity map to smooth out the attenuation spatially (this is demonstrated in the paper for the purpose of hair rendering).

It would not look like real subsurface scattering (neither multiple nor single scattering) but I am convinced it would look nice!

You could try rendering luminance in one pass, with 8 coefficients (2 MRTs), and rendering chromaticity in a second pass to a lower-resolution buffer, potentially using fewer coefficients, something like 4 coefficients per channel. This would bring the RT count to 4, with lower precision for the color but acceptable precision for the actual opacity. You would get worse spatial definition for the color tint, but it might be acceptable?

This is a good idea. It is definitely something that should be tried.

But I am afraid that not having the same number of coefficients for color and luminance could lead to larger visual artifacts, e.g. ringing would no longer coincide for the luminance and chromaticity components. But as you said, it might be acceptable...