My PhD is about virtual reality, so I needed an engine to render my virtual environments (VEs) and conduct my experiments. I like to do things on the GPU, so almost everything is computed there. I wanted the engine to display point and spot lights, either static or dynamic, and I wanted the models inside the environment to be simulated physically. Every experiment was going to be very different, so the engine had to be easily scriptable. I also needed to replay all experiment sessions: being able to replay recorded navigations and interactions lets me apply costly algorithms offline to study users' gaze behavior. Finally, I wanted to be able to create my own VEs very easily.
I developed this engine from scratch over two months in the summer of 2008. If you looked at the code, you would see a lot of classes with nice client interfaces accessible from the engine interface. But you might be sad not to find cache-friendly layouts such as structure-of-arrays. I have to admit, I did not take the time to optimize...
I did not show this engine before because the virtual environments were used in double-blind publications in conferences and journals.
The virtual environment editor: Maya
At the time, I was teaching myself Maya: it is a pretty nice 3D modeling package, simple to use for basic 3D work. I decided to use it to create my VEs, so I developed my own exporter. It can export the whole geometry, point lights and spot lights, as well as Phong materials. Also, because I wanted to be able to use a lightmap, the world lightmap is rendered by Mental Ray in HDR format. This lightmap contains only global illumination! Each light is thus rendered as usual (diffuse + specular) and, finally, global illumination is added on top. The drawback is that static lights cannot be turned off.
Phong Materials
Here are the components of the Phong material I selected from Maya's long list of attributes:
- Diffuse color (can be a texture or a single color)
- Specular color (can be a texture or a single color)
- Specular exponent
- Local normal (a normal map texture, or flat if none is given)
Lights
I wanted each light to be rendered the same way as with Mental Ray, so I decided to use the same parameters as in Maya. Then I simply wrote one shader for point lights and one for spot lights.
Here are the spot light parameters:
- Transformation matrix
- Color & intensity
- Decay rate (in fact, I force this to be quadratic in Maya and in the engine)
- Penumbra angle & cone angle
- Dropoff
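To give an idea of how I understand these parameters fitting together, here is a rough C++ sketch of a Maya-like spot falloff. This is my own approximation with my own names, not the engine's actual shader code, and I assume a positive penumbra that fades outward from the cone edge:

```cpp
#include <algorithm>
#include <cmath>

// angle     : angle (radians) between the spot axis and the light-to-surface direction
// coneAngle : half-angle of the full-intensity cone (radians)
// penumbra  : extra half-angle over which intensity fades to zero (radians)
// dropoff   : dims the beam away from its center (0 = no dimming), like Maya's dropoff
float spotFactor(float angle, float coneAngle, float penumbra, float dropoff)
{
    // Outside cone + penumbra: no light at all.
    if (angle > coneAngle + penumbra)
        return 0.0f;

    // Soft edge: 1 inside the cone, fading linearly to 0 across the penumbra.
    float edge = 1.0f;
    if (angle > coneAngle && penumbra > 0.0f)
        edge = 1.0f - (angle - coneAngle) / penumbra;

    // Dropoff: attenuate with the cosine of the angle from the beam center.
    const float c = std::max(std::cos(angle), 0.0f);
    return edge * std::pow(c, dropoff);
}
```

The exact blending Maya uses may differ, but this reproduces the behavior I rely on: full intensity inside the cone, a soft penumbra edge, and a beam that dims toward its border when dropoff is non-zero.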
Here are the point light parameters:
- Position
- Color & intensity
- Decay rate (in fact, I force this to be quadratic in Maya and in the engine)
Indeed, a quadratic falloff of light intensity (close to what happens physically) never quite reaches zero, which means each light can interact with ALL objects in the VE. To avoid this problem, I compute the distance from the light center at which a surface receives only 5% of the light. Then I attenuate the intensity linearly from the center of the light down to zero at this distance. This method allows me to compute an axis-aligned bounding box (AABB) for each light, which accelerates rendering through frustum culling.
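A minimal sketch of this cutoff trick in C++ (function names are mine, and I am assuming the 5% threshold is an absolute received intensity, so brighter lights reach farther):

```cpp
#include <cmath>

// With quadratic decay, the received intensity at distance d is I / d^2.
// Solving I / d^2 = 0.05 gives the radius beyond which the light is ignored.
float lightCutoffDistance(float intensity)
{
    return std::sqrt(intensity / 0.05f);
}

// Inside that radius, fade the light linearly so it reaches exactly zero at the
// cutoff distance; this makes a finite bounding volume valid.
float attenuatedIntensity(float intensity, float distance)
{
    const float cutoff = lightCutoffDistance(intensity);
    if (distance >= cutoff)
        return 0.0f;
    return intensity * (1.0f - distance / cutoff);
}

// The AABB used for frustum culling is just center +/- cutoff on each axis.
struct AABB { float min[3], max[3]; };

AABB lightBounds(const float pos[3], float intensity)
{
    AABB b;
    const float r = lightCutoffDistance(intensity);
    for (int i = 0; i < 3; ++i) {
        b.min[i] = pos[i] - r;
        b.max[i] = pos[i] + r;
    }
    return b;
}
```

Linear attenuation is not physical, of course, but the point is that the light reaches exactly zero at a known distance, which a 1/d² curve never does.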
The rendering engine
As an OpenGL addict, I decided to use OpenGL! :) I also decided to base the engine on a z-pre-pass architecture (like the Doom 3 engine). So, after a depth-only pre-pass, the final image is obtained by summing the contribution of each light, rendering every mesh it interacts with. Static lights come from the exported VE, and dynamic lights can then be added from the script. The engine behaves the same way for static and dynamic meshes.
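The pass ordering can be sketched like this. Everything here is a stub with hypothetical names (the real engine issues OpenGL draw calls instead of collecting strings); it only shows which passes run in which order:

```cpp
#include <string>
#include <vector>

struct Mesh  { std::string name; };
struct Light {
    std::string name;
    std::vector<const Mesh*> affected;  // meshes inside the light's AABB (after culling)
};

// Returns the sequence of passes a frame would issue.
std::vector<std::string> renderFrame(const std::vector<Mesh>& meshes,
                                     const std::vector<Light>& lights)
{
    std::vector<std::string> passes;

    // 1) Depth-only pre-pass: lay down the z-buffer, no shading.
    for (const Mesh& m : meshes)
        passes.push_back("depth:" + m.name);

    // 2) One additive pass per light, drawing only the meshes it interacts with;
    //    the z-buffer from step 1 rejects hidden fragments before shading.
    for (const Light& l : lights)
        for (const Mesh* m : l.affected)
            passes.push_back("light:" + l.name + ":" + m->name);

    return passes;
}
```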
For shadows, I use a simple depth map for spot lights and a virtual depth cube map for point lights, with only the native hardware shadow-map filtering. Not very eye-candy, but it was enough for the experiments I needed to conduct.
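For the point-light cube map, deciding which of the six faces a given light-to-fragment direction falls into follows the standard major-axis rule. A small sketch (this is the classic selection logic, not code lifted from the engine):

```cpp
#include <cmath>

// Returns the cube map face index for a direction vector:
// 0:+X  1:-X  2:+Y  3:-Y  4:+Z  5:-Z
int cubeFace(float x, float y, float z)
{
    const float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    if (ax >= ay && ax >= az)        // X dominates
        return x >= 0.0f ? 0 : 1;
    if (ay >= az)                    // Y dominates
        return y >= 0.0f ? 2 : 3;
    return z >= 0.0f ? 4 : 5;        // Z dominates
}
```

Rendering the six faces with a 90° field of view each and sampling by major axis is what makes a cube map behave like an omnidirectional depth map for a point light.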
Physics
I use the PhysX API from NVIDIA. Only rigid-body interactions are simulated in my case. Each dynamic object added to the simulation can also be added to the PhysX engine if needed, using one of four shape types: bounding box, bounding sphere, convex hull, or general triangle mesh.
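One detail worth noting: as far as I know, PhysX does not simulate arbitrary triangle meshes as moving rigid bodies (they are fine for static geometry), so a wrapper has to pick a usable shape. A hypothetical sketch of that policy, with names that are mine and not PhysX's:

```cpp
// The four shape types mentioned above, as a hypothetical engine-side enum.
enum class CollisionShape { Box, Sphere, ConvexHull, TriangleMesh };

// Dynamic (moving) actors cannot use general triangle meshes, so fall back
// to a convex hull for them; static actors keep whatever was requested.
CollisionShape pickShape(CollisionShape requested, bool isDynamic)
{
    if (isDynamic && requested == CollisionShape::TriangleMesh)
        return CollisionShape::ConvexHull;
    return requested;
}
```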
Scripts
For scripting, I chose Lua together with LuaBind (to simplify the registration of functions and classes). From a script, you can add and manipulate lights and models. The script can also receive mouse and keyboard events, and a specific update function is called every frame. For instance, here is a small piece of Lua code I use to position a spot light so that it looks like a flashlight the user is holding:
if flashLightOn then
    local viewCam = getViewPointOrientPos();
    -- A point 5 units in front of the camera, transformed to world space.
    local target = Vector3(0.0, 0.0, -5.0);
    viewCam:transformBackFromLocalSpace(target);
    -- The flashlight is held slightly below and to the left of the view.
    local origin = Vector3(-0.2, -0.1, 0.0);
    viewCam:transformBackFromLocalSpace(origin);
    flashLight:setIntensity(10.0);
    flashLight:setPosTargetUp(origin, target, Vector3(0.0, 1.0, 0.0));
else
    flashLight:setIntensity(0.0);
end
As an example, I have also developed some Lua classes that control robots following a predefined path made of waypoints. I have to say that Lua is a very powerful and easy-to-use scripting language.
Record/Replay feature
My engine is able to record and replay navigation sessions in the VE. I do not record everything every frame: everything is stored as events, so, for instance, a dynamic object's position is only recorded when it actually changes. I do not interpolate between frames because, for my experiments, I only need exactly the frames that were displayed to the viewer.
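The idea can be sketched as follows (types and names are mine for illustration, not the engine's actual classes):

```cpp
#include <cstddef>
#include <vector>

// One recorded event: "object `objectId` moved to `pos` at frame `frame`".
struct MoveEvent {
    std::size_t frame;
    int         objectId;
    float       pos[3];
};

struct Recorder {
    std::vector<MoveEvent> events;

    // Called only when an object actually moved; unchanged objects cost nothing.
    void onObjectMoved(std::size_t frame, int id, float x, float y, float z)
    {
        events.push_back({frame, id, {x, y, z}});
    }

    // Replay without interpolation: apply, in order, every event stored for
    // exactly this frame -- the same frames the viewer originally saw.
    template <class Apply>
    void replayFrame(std::size_t frame, Apply apply) const
    {
        for (const MoveEvent& e : events)
            if (e.frame == frame)
                apply(e);
    }
};
```

Replaying frame by frame like this is what makes it possible to run costly gaze-analysis algorithms offline on exactly the images the participant saw.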
Gaze-Tracking
My PhD is about gaze tracking, so my engine takes gaze tracking into account. Currently, I have a class communicating with the Tobii X50 software and hardware. This class is handled by a GazeTrackerManager, which makes it easy to add support for other types of gaze-tracking systems.
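The manager idea boils down to programming against an abstract tracker interface. A hedged sketch (the interface and names below are hypothetical; the real class talks to the Tobii X50 through its own SDK):

```cpp
#include <memory>
#include <string>
#include <utility>

// One gaze sample in normalized screen coordinates.
struct GazeSample { float x, y; bool valid; };

// Abstract interface every concrete tracker implements.
class GazeTracker {
public:
    virtual ~GazeTracker() = default;
    virtual std::string name() const = 0;
    virtual GazeSample latestSample() = 0;
};

// Dummy tracker for illustration: always reports the screen center.
class CenterTracker : public GazeTracker {
public:
    std::string name() const override { return "center"; }
    GazeSample latestSample() override { return {0.5f, 0.5f, true}; }
};

// The manager owns one concrete tracker, so the rest of the engine only ever
// sees the abstract interface, and new hardware needs one new subclass.
class GazeTrackerManager {
public:
    void setTracker(std::unique_ptr<GazeTracker> t) { tracker_ = std::move(t); }
    GazeSample sample()
    {
        return tracker_ ? tracker_->latestSample()
                        : GazeSample{0.0f, 0.0f, false};
    }
private:
    std::unique_ptr<GazeTracker> tracker_;
};
```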
Some Results
The q3dm1 map in Maya during editing:
The q3dm1 map rendered in my engine with the HDR lightmap generated by Mental Ray (no lights here; everything is baked into the lightmap). The screenshot is a bit over-exposed because of the automatic luminosity adaptation (a large part of the screen is black).
The house VE used in one of my experiments, during editing in Maya:
The house VE rendered in my engine with the HDR GI lightmap generated by Mental Ray:
A game where you have to destroy ships with your gaze:
A video is available on my YouTube channel.
I am currently replacing the z-pre-pass renderer with a light pre-pass renderer. I have to say that it is very interesting! I already have point lights working, and the performance is impressive! I will talk about this later, but first I want to try to publish something with it. :)
Do not hesitate to ask if you have questions or if you want some pieces of code. I plan to release the source of this engine at the end of my PhD.