Apr 21, 2010

My Relief-Mapped Light Pre-Pass Renderer

Hi!

Just to show you some progress I made on my new renderer: a light pre-pass renderer with relief mapping on all surfaces! An important feature of this engine is that each texel in the virtual scene is unique. Indeed, no virtual texture mapping is used, simply large 4096×4096 textures! Because no virtual texturing is used, everything is resident in GPU video memory. Textures for objects dynamically added to the virtual environment (VE) are allocated in dynamic texture atlases (using a quad-tree to manage texture space, as sketched below). As a result, when designing a map (under Maya with my exporter), you have to find a good trade-off between texel density in world space and the size of your map. For me, it is important to have unique texels everywhere to achieve the kind of effects I made before.
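To give an idea of how the atlas allocation works, here is a minimal sketch of a quad-tree atlas allocator. It is not my actual engine code: names are hypothetical, and it assumes power-of-two square allocations.

```cpp
#include <memory>

// One node of the quad-tree covering a square region of the atlas.
// Allocation recursively subdivides until the requested size fits.
struct AtlasNode {
    int x, y, size;           // region covered in the atlas, in texels
    bool used = false;        // true if this exact node holds an allocation
    std::unique_ptr<AtlasNode> child[4]; // four quadrants, lazily created

    AtlasNode(int x_, int y_, int size_) : x(x_), y(y_), size(size_) {}

    // Try to allocate a square region of 'reqSize' texels (power of two).
    // Returns the node owning the region, or nullptr if it does not fit.
    AtlasNode* allocate(int reqSize) {
        if (used || reqSize > size) return nullptr;
        if (!child[0]) {
            if (reqSize == size) { used = true; return this; } // exact fit
            int h = size / 2; // subdivide into four quadrants
            child[0] = std::make_unique<AtlasNode>(x,     y,     h);
            child[1] = std::make_unique<AtlasNode>(x + h, y,     h);
            child[2] = std::make_unique<AtlasNode>(x,     y + h, h);
            child[3] = std::make_unique<AtlasNode>(x + h, y + h, h);
        }
        for (auto& c : child)
            if (AtlasNode* n = c->allocate(reqSize)) return n;
        return nullptr;
    }
};
```

A dynamic object then samples its texture through the (x, y, size) region returned by the allocator; freeing regions when objects are removed is left out of the sketch.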

Yesterday, as visible in the screenshots, I finished implementing the Virtual Shadow Depth Cube map described in ShaderX3. I may later test rendering to a cube map using a geometry shader.
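For readers who do not have ShaderX3 at hand: the technique stores the six faces of the shadow cube map as tiles of a single 2D depth texture (found through a small indirection cube map), so that hardware depth comparison and PCF keep working. Here is a hedged CPU-side sketch of the remapping the indirection encodes, with my own simplified face orientation conventions:

```cpp
#include <glm/glm.hpp>

// Map a light-to-fragment direction to UVs inside the single 2D depth
// texture holding the six cube faces as a 3x2 tile grid (illustrative
// conventions, not necessarily the engine's).
glm::vec2 cubeDirToAtlasUV(const glm::vec3& dir)
{
    glm::vec3 a = glm::abs(dir);
    int face; float u, v, ma;
    if (a.x >= a.y && a.x >= a.z) {        // +X / -X faces
        ma = a.x; face = dir.x > 0.f ? 0 : 1;
        u = dir.x > 0.f ? -dir.z :  dir.z;  v = -dir.y;
    } else if (a.y >= a.z) {               // +Y / -Y faces
        ma = a.y; face = dir.y > 0.f ? 2 : 3;
        u = dir.x;  v = dir.y > 0.f ? dir.z : -dir.z;
    } else {                               // +Z / -Z faces
        ma = a.z; face = dir.z > 0.f ? 4 : 5;
        u = dir.z > 0.f ? dir.x : -dir.x;   v = -dir.y;
    }
    glm::vec2 uv(0.5f * (u / ma + 1.f), 0.5f * (v / ma + 1.f)); // face-local
    glm::vec2 tile(float(face % 3), float(face / 3));           // tile offset
    return (uv + tile) * glm::vec2(1.f / 3.f, 1.f / 2.f);
}
```

The actual shadow test then compares the fragment's light-space depth against the depth fetched at these UVs, as in standard shadow mapping.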

Thanks to this light pre-pass engine, I will be able to easily implement and test several rendering methods such as soft particles, light propagation volumes, SSAO, etc. The only thing I miss in this engine is spherical-harmonic or Source-like lightmaps (I wish I had a Beast or Turtle license for this). Currently, it is a Quake3-like lightmap: no directional information about incoming light on surfaces.

With this engine I plan to develop a simple Quake3-like death-match-with-bots game as a demo. If any artists read this post and are interested in designing a death-match map, please send me an email. The only thing you would need is Maya 2008.



Black lightmap, two shadowed point light sources


Black lightmap, two shadowed point light sources


Grey lightmap: directional indirect lighting using
a spherical harmonic volume.
For this, I use the library I have developed.

In these screenshots, I only used a uniformly colored lightmap because I did not find time to generate a real one for this level, and also because I just finished adding point light source support, which will be used for direct illumination (the lightmap will only contain emissive surfaces and global illumination). The next step is adding spot lights and optimizing shaders, as well as the way meshes are processed by the engine.

Feel free to ask me some questions! :)

Apr 9, 2010

Spherical Harmonics Lighting

Hello everyone!

It's time for another post on another rendering method! :) Currently, I am working on a light pre-pass renderer, and I have just finished including SH lighting in it.

There exist several methods to compute the lighting solution of a virtual environment. Some are fully dynamic (idTech4, CryEngine), others fully or partially static (idTech3, Unreal Engine). For fully dynamic methods, since the lighting solution is often unified, computing the lighting of dynamic objects poses no problem. For static methods, however, several problems can arise.

Let's take the case of an environment having its lighting solution stored in a static lightmap. When you add dynamic objects to the virtual environment, you have to compute their lighting solution too. But how can we compute the light that reaches each dynamic object using only its position/orientation and the surrounding environment?

Common methods use probes or a light volume. For instance, in Quake3, a light volume is defined for the entire environment and each voxel stores an ambient color plus a directional colored source (see the Quake3 map specs). Another solution is to use probes positioned by artists in the virtual environment (Source engine). At each of these locations, irradiance can be computed and stored in several formats (SH, directional light, Source basis, etc.).
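To make this concrete, here is a tiny illustrative sketch (my own naming, not Quake3's actual data layout) of such a light volume voxel and how a dynamic object could be shaded from it:

```cpp
#include <glm/glm.hpp>
#include <algorithm>

// One voxel of a Quake3-style light grid (hypothetical layout).
struct LightGridVoxel {
    glm::vec3 ambient;    // constant ambient term
    glm::vec3 directed;   // color of the directional contribution
    glm::vec3 direction;  // dominant incoming light direction (normalized)
};

// Simple Lambertian shading of a dynamic object's surface from the voxel.
glm::vec3 shadeFromVoxel(const LightGridVoxel& v, const glm::vec3& normal)
{
    float ndotl = std::max(glm::dot(normal, v.direction), 0.0f);
    return v.ambient + v.directed * ndotl;
}
```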

For my light pre-pass renderer, I decided to use a light volume where each voxel contains 2-band spherical harmonics, as visible in the next screenshots.
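Concretely, 2 bands means the four SH basis functions of orders l = 0 and l = 1, i.e. four RGB coefficients per voxel. A minimal sketch of evaluating such a voxel in a given direction (the constants are the standard SH basis constants; the coefficient layout is my assumption):

```cpp
#include <glm/glm.hpp>

// Four RGB coefficients: one per 2-band SH basis function.
struct SH2 { glm::vec3 c[4]; };

// Reconstruct the radiance stored in the SH for direction n
// (e.g. a surface normal).
glm::vec3 evalSH2(const SH2& sh, const glm::vec3& n)
{
    return sh.c[0] * 0.282095f             // Y(0,0), constant band
         + sh.c[1] * (0.488603f * n.y)     // Y(1,-1)
         + sh.c[2] * (0.488603f * n.z)     // Y(1,0)
         + sh.c[3] * (0.488603f * n.x);    // Y(1,1)
}
```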


A Quake3 map with its corresponding SH volume.


My test map with its corresponding SH volume.

For each dynamic object, the SH volume is sampled on the CPU using tri-linear interpolation at the object's position. The final SH contribution is added during the final pass of the light pre-pass pipeline according to the normals stored in the G-buffer (and in my renderer, each surface is rendered using relief mapping).
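The CPU-side sampling is plain tri-linear interpolation of the voxel coefficients, something like the following sketch (grid layout and names are assumptions, reusing the SH2 struct above):

```cpp
#include <glm/glm.hpp>
#include <vector>

struct SHVolume {
    glm::ivec3 dim;           // voxel counts along x/y/z
    glm::vec3 origin;         // world position of the first voxel center
    glm::vec3 cellSize;       // world size of one voxel
    std::vector<SH2> voxels;  // dim.x * dim.y * dim.z entries

    const SH2& at(int x, int y, int z) const {
        x = glm::clamp(x, 0, dim.x - 1);   // clamp to volume borders
        y = glm::clamp(y, 0, dim.y - 1);
        z = glm::clamp(z, 0, dim.z - 1);
        return voxels[(z * dim.y + y) * dim.x + x];
    }
};

// Tri-linearly interpolate the 8 voxels surrounding worldPos.
SH2 sampleTrilinear(const SHVolume& vol, const glm::vec3& worldPos)
{
    glm::vec3 p = (worldPos - vol.origin) / vol.cellSize;
    glm::ivec3 i(glm::floor(p));
    glm::vec3 f = p - glm::vec3(i);  // fractional part = lerp weights
    SH2 out;
    for (int k = 0; k < 4; ++k) {    // interpolate each SH coefficient
        glm::vec3 c00 = glm::mix(vol.at(i.x, i.y,   i.z  ).c[k], vol.at(i.x+1, i.y,   i.z  ).c[k], f.x);
        glm::vec3 c10 = glm::mix(vol.at(i.x, i.y+1, i.z  ).c[k], vol.at(i.x+1, i.y+1, i.z  ).c[k], f.x);
        glm::vec3 c01 = glm::mix(vol.at(i.x, i.y,   i.z+1).c[k], vol.at(i.x+1, i.y,   i.z+1).c[k], f.x);
        glm::vec3 c11 = glm::mix(vol.at(i.x, i.y+1, i.z+1).c[k], vol.at(i.x+1, i.y+1, i.z+1).c[k], f.x);
        out.c[k] = glm::mix(glm::mix(c00, c10, f.y), glm::mix(c01, c11, f.y), f.z);
    }
    return out;
}
```

The resulting SH2 can then be uploaded as shader constants and evaluated per pixel with the G-buffer normal, as in evalSH2 above.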



A dynamic object lit only by dynamic lights (left) and
using the SH volume (right).
Note that the lighting information stored in the lightmap now
affects the object's final look.

The SH volume is computed as a pre-process. For each voxel, I render the virtual environment into a cube map as in my previous demo. Then, the surrounding colored environment stored in this cube map is projected onto the SH basis and stored in the corresponding SH volume voxel.
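In code, this projection amounts to treating every cube map texel as a directional radiance sample weighted by its solid angle, and accumulating it into the four coefficients. A hedged sketch (reusing SH2; the face/texel iteration is summarized in comments, and the solid-angle formula is the usual per-texel approximation):

```cpp
#include <glm/glm.hpp>
#include <cmath>

// Approximate solid angle of a cube map texel whose center projects to
// face coordinates (x, y) in [-1,1]; texelSize = 2.0f / faceResolution.
float texelSolidAngle(float x, float y, float texelSize)
{
    float d2 = x * x + y * y + 1.0f;            // squared distance to texel
    return texelSize * texelSize / (d2 * std::sqrt(d2)); // area / r^3
}

// Accumulate one directional radiance sample into zero-initialized SH.
void accumulateSample(SH2& sh, const glm::vec3& dir,  // normalized direction
                      const glm::vec3& radiance, float solidAngle)
{
    sh.c[0] += radiance * (0.282095f)         * solidAngle;
    sh.c[1] += radiance * (0.488603f * dir.y) * solidAngle;
    sh.c[2] += radiance * (0.488603f * dir.z) * solidAngle;
    sh.c[3] += radiance * (0.488603f * dir.x) * solidAngle;
}

// For each of the 6 faces, for each texel: read the rendered color, build
// the normalized direction through the texel center, and call
// accumulateSample(). The accumulated SH2 is stored in the voxel.
```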

I plan to store only global illumination in the lightmaps of my environments, so the SH volume will only contain the global illumination contribution. Direct lighting will then be computed on static/dynamic objects using standard unified light rendering and shadowing. I also plan to add Crytek's light propagation volumes later...

I will continue to work on this engine in parallel with the writing of my PhD thesis, other little demos I have in the pipeline, and my future job search.

New volumetric lines method

Hi readers,

I have been working on many things recently. One of these things is a new volumetric line algorithm!
My previous method (basically the same as the one proposed in ShaderX7) was really fast and yielded good-looking results. I think that's why it was successfully used in the iPhone game PewPew. The only drawback of this method is that you should avoid looking at lines along their direction because, in that case, the trick I use becomes visible. Also, it was not possible to shade a line based on its thickness from the current viewpoint.
Another method, proposed by Tristan Lorach, changes the volumetric lines' appearance based on the angle between the view and line directions. However, line appearance was represented by only 16 texture tiles, and the interpolation between them was visible.

The new method I propose is able to render capsule-like volumetric lines of any width and from any point of view, i.e. you can look inside/through the line. It also allows the use of thickness-based visual effects.

Here is the overall algorithm of my new method:
  1. Extrude an OOBB around the volumetric line, with the same width as the line. This is done using a geometry shader that computes triangle strips from a single line.
  2. Compute the closest and farthest intersections between the view ray and the capsule using geometric methods and/or the resolution of quadratic equations. This is done in the OOBB frame of reference (see the sketch after this list).
  3. Compute the thickness based on the capsule intersections and the environment depth map.
  4. Shade the volumetric line based on its thickness.
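Since the details are deferred to the future post, here is a minimal sketch of steps 2 and 3 under my own assumptions (names and conventions are mine; degenerate cases such as a zero-length segment are not handled). The capsule is the segment [A, B] with radius r; the ray (O, D), with D normalized, is expressed in the OOBB frame of reference.

```cpp
#include <glm/glm.hpp>
#include <algorithm>
#include <cmath>

// Solve a*t^2 + b*t + c = 0; returns false if no (stable) real roots.
static bool solveQuadratic(float a, float b, float c, float& t0, float& t1)
{
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f || std::fabs(a) < 1e-8f) return false;
    float s = std::sqrt(disc);
    t0 = (-b - s) / (2.0f * a);
    t1 = (-b + s) / (2.0f * a);
    return true;
}

// Closest (tNear) and farthest (tFar) intersections of ray (O, D) with the
// capsule of segment [A, B] and radius r. Returns false on a miss.
bool intersectCapsule(const glm::vec3& O, const glm::vec3& D,
                      const glm::vec3& A, const glm::vec3& B,
                      float r, float& tNear, float& tFar)
{
    glm::vec3 axis = B - A;
    float L2 = glm::dot(axis, axis);
    tNear = 1e30f; tFar = -1e30f;
    auto consider = [&](float t) {
        tNear = std::min(tNear, t); tFar = std::max(tFar, t);
    };

    // Infinite cylinder around AB, built from the ray components
    // perpendicular to the axis; keep hits projecting onto the segment.
    glm::vec3 AO = O - A;
    glm::vec3 d = D - axis * (glm::dot(D, axis) / L2);
    glm::vec3 o = AO - axis * (glm::dot(AO, axis) / L2);
    float t0, t1;
    if (solveQuadratic(glm::dot(d, d), 2.0f * glm::dot(d, o),
                       glm::dot(o, o) - r * r, t0, t1)) {
        for (float t : {t0, t1}) {
            float s = glm::dot(O + D * t - A, axis) / L2; // 0..1 on segment
            if (s >= 0.0f && s <= 1.0f) consider(t);
        }
    }
    // Sphere caps at A and B (sphere hits are never outside the capsule,
    // so taking the min/max over all candidates stays correct).
    for (const glm::vec3& C : {A, B}) {
        glm::vec3 oc = O - C;
        if (solveQuadratic(1.0f, 2.0f * glm::dot(D, oc),
                           glm::dot(oc, oc) - r * r, t0, t1)) {
            consider(t0); consider(t1);
        }
    }
    tNear = std::max(tNear, 0.0f); // camera may be inside the capsule
    return tFar > tNear;
}

// Step 3 then clips against the scene to get the thickness used for shading:
//   float tScene    = linear scene depth along the view ray (depth map);
//   float thickness = std::max(0.0f, std::min(tFar, tScene) - tNear);
```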
It is pretty simple but efficient. I will not go into the details right now: there will be a post on my website soon. I will just show some early screenshots from the current version:

The capsules representing the volumetric lines
filled with a white color


Line radius can be changed


Thickness based shading and intersection with the environment



View from above and under the ground

The thickness is correct,
even if the camera is inside the volumetric lines