Dec 30, 2009

What do you think about OnLive?

Do you know OnLive? It's a company that is going to provide a technology allowing people to play games on a computer or a TV. The tricky part is that you would not need a high-end PC (GPU, CPU, etc.) to play, for instance, Crysis. Basically, their software will send your input-device actions to their servers, which will compute the image frames for you. Then, all these frames will be sent back to your computer as a video stream.

The concept is very interesting: you just need to rent a server and you don't need to upgrade your computer. Also, I am sure that they will provide a game catalog like Steam does.

However, in my opinion, this system will not work, because:
  1. According to OnLive, you will have to be less than 1000 miles away from one of their servers to play. As a result, they will need a lot of servers to provide *good* pings to customers. It seems that they consider a ping of 80 ms as the limit, so quality of service will depend a lot on your distance to a server.
  2. I am sure they have new, efficient technologies to compress frames into a video stream, but is their method lossless? What final resolution can you get? Full HD? (Again, this will surely depend on your distance to a server.)
  3. As an avid FPS player, I think that a target ping of 80 ms is not good enough. Even in an RTS with a lot of micro-actions, a fighting game, or a racing game, you will feel it. And I am not even speaking about lag. Of course, if you don't play those kinds of CPU/GPU-intensive games, maybe you don't need high-end hardware...
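A back-of-the-envelope latency budget shows why I am skeptical. All numbers below are my own assumptions for illustration, not OnLive figures:

```cpp
#include <cassert>

// Round-trip propagation time over fiber, in milliseconds
// (light travels at roughly 200,000 km/s in glass).
double propagationMs(double distanceKm)
{
    return 2.0 * distanceKm / 200000.0 * 1000.0;
}

// Total input-to-photon latency: network round trip + server-side frame
// rendering + video encode + client decode + one display refresh.
double totalLatencyMs(double distanceKm,
                      double renderMs, double encodeMs,
                      double decodeMs, double displayMs)
{
    return propagationMs(distanceKm) + renderMs + encodeMs
         + decodeMs + displayMs;
}
```

At the maximum distance of 1000 miles (~1609 km), propagation alone costs about 16 ms round trip; add, say, 16 ms of rendering, 10 ms of encoding, 5 ms of decoding and 16 ms of display refresh and you are already above 60 ms before any routing or queuing overhead.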
I really hope that I am wrong about this technology... Anyway, I wish the OnLive team "Good Luck"!!! :)

Dec 22, 2009

Efficient OOP + Publications repositories

A very interesting presentation was highlighted by Atom: Pitfalls of Object Oriented Programming by Tony Albrecht from SCEE. I was aware of the problems exposed in this presentation; however, in my opinion, it is the best presentation so far that explains and details the cache-miss optimization problem on a simple example. I will not say more about the presentation: it is clear enough. If you wonder what a Structure-of-Arrays is (as opposed to an Array-of-Structures) and why some people say OOP can be considered evil (the presentation exposes one reason), you should read it now! :)
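To make the vocabulary concrete, here is a minimal sketch of the two layouts (my own example, not code from the presentation):

```cpp
#include <cstddef>
#include <vector>

// Array-of-Structures: hot and cold data are interleaved in memory, so
// iterating over positions also drags the cold bytes through the cache.
struct ParticleAoS {
    float x, y, z;
    float cold[16];  // data unused by the position update
};

// Structure-of-Arrays: each field lives in its own contiguous array, so a
// position update touches only the bytes it actually needs.
struct ParticlesSoA {
    std::vector<float> x, y, z;
    std::vector<float> cold;  // stored separately, never touched below
};

void updateAoS(std::vector<ParticleAoS>& p, float dx)
{
    for (std::size_t i = 0; i < p.size(); ++i)
        p[i].x += dx;  // loads whole 76-byte structs just to move x
}

void updateSoA(ParticlesSoA& p, float dx)
{
    for (std::size_t i = 0; i < p.x.size(); ++i)
        p.x[i] += dx;  // streams through one tightly packed float array
}
```

Both versions produce identical results; the SoA one simply wastes far less cache bandwidth, which is exactly the kind of difference the presentation measures.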

This presentation can be found in the SCEE presentation repository. Here are two more repositories I found very interesting: Valve and Bungie. If you know other interesting publication repositories, please post them.

Bonus: Siggraph 2009 course Advances in Real-Time Rendering in 3D Graphics and Games repository.

Dec 16, 2009

News from France


Since my last post, I am back in France! I was in Japan for one month (November) for a Japanese-French research collaboration. Maybe I will speak about this work once it is published in a conference or journal. Anyway, I have to say that Japan is a really impressive country! So many places to visit! (temples, the Tokyo Tower, etc.) Also, the food is really good and not expensive! I really recommend traveling in Japan: people are really respectful and will help you even if they can't speak English. In my opinion, every gamer or game developer who travels to Japan should visit Akihabara (called "the Electric Town"): a lot of games, arcade games, electronics shops with very low prices, etc. You can even buy an old Megadrive for 0.8 euro! :)

Finally, I decided that it was a good time to change my computer. Indeed, my old graphics card caused me too much trouble: random colored pixels, triangles stretched to infinity (only with DirectX, not OpenGL) and even black screens of death! I decided to buy hardware with a good performance/price ratio: an Intel i5 and a GeForce GTX 275 as the main components.
Now I plan to learn DirectX with this computer. I have been using OpenGL for 7 years and it is time for me to learn something else.

See you in the next news!

Nov 27, 2009

Volumetric lines

Four years ago, a method to render volumetric lines was proposed by Tristan Lorach at nVidia (link). I found this method very interesting but, in my opinion, the texture interpolation simulating the change in point of view was too visible. Three years ago, I decided to propose my own volumetric line rendering method, as visible on my website. My method does not allow as many visual effects as nVidia's, but at least you can change the appearance of lines using a simple square 2D texture. Each line is rendered using 6 triangles and all computations are done in the vertex program. The drawback of my method is that when lines are very wide and their direction is parallel to the view direction, the trick I use becomes visible. However, for lines that are not too wide, this problem is rarely noticeable.

Screenshot of the volumetric lines. Because the line at the bottom is wide, the trick I use is visible.
In this screenshot, another appearance texture is applied.
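To illustrate the general idea (a simplified sketch under my own conventions, not my exact vertex program), the core of such a method extrudes each segment into a camera-facing quad:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3  sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3  cross(Vec3 a, Vec3 b)  { return {a.y*b.z - a.z*b.y,
                                       a.z*b.x - a.x*b.z,
                                       a.x*b.y - a.y*b.x}; }
float dot(Vec3 a, Vec3 b)    { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3  scale(Vec3 a, float s) { return {a.x*s, a.y*s, a.z*s}; }
Vec3  normalize(Vec3 a)      { return scale(a, 1.0f / std::sqrt(dot(a, a))); }

// Extrude segment (p0, p1) into a quad of the given width that faces the
// camera: the offset is perpendicular to both the line direction and the
// view direction. The full method additionally extrudes end caps (hence
// 6 triangles per line) and maps a 2D appearance texture over the result.
void extrudeLineQuad(Vec3 p0, Vec3 p1, Vec3 viewDir, float width, Vec3 out[4])
{
    Vec3 side = scale(normalize(cross(sub(p1, p0), viewDir)), width * 0.5f);
    out[0] = add(p0, side);
    out[1] = sub(p0, side);
    out[2] = add(p1, side);
    out[3] = sub(p1, side);
}
```

Note that when the line direction becomes parallel to viewDir, the cross product degenerates, which corresponds to the failure case mentioned above.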

Recently, an article published in ShaderX7 proposed a method very similar, if not identical, to the one I propose. I should have written an article earlier... At that time, I was beginning my Master's studies and I could not imagine this would be possible.

Many people write to me to discuss this method, and I am really happy to see people using it. I am pleased to announce that my method is used in an iPhone game developed by a French guy, as you can read in one of his posts and on his blog. :)

Nov 26, 2009

Screen-space buzz survey

This post presents a survey of screen-space methods proposed in the field of CG. I will not go into the details of these methods, since that is not the purpose of this post, but I invite you to read these articles because, in my opinion, they are very interesting.

Screen-space ambient occlusion

Many methods have been proposed to render ambient occlusion. The one proposed by Crytek in 2007 became very popular and was named "Screen-Space Ambient Occlusion". The method was designed to be used in real time in the game Crysis. In my opinion, the first paper to propose screen-space ambient occlusion was by Luft et al.: Image Enhancement by Unsharp Masking the Depth Buffer. Then, a high-quality (but slower) version was proposed by Louis Bavoil and Miguel Sainz (nVidia).
It is interesting to note that the screen-space name became more and more popular and started some kind of buzz! Also, it seems that screen-space and image-space are synonyms. Am I correct?

For more links on screen-space ambient occlusion, please refer to this Wikipedia article. Also, many methods have been published in ShaderX7.
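The unsharp-masking idea fits in a few lines. Here is my own simplified CPU sketch (assuming a linear depth buffer, not the authors' implementation): blur the depth buffer, then darken pixels that are deeper than their blurred neighborhood.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Simplified CPU sketch of "unsharp masking the depth buffer":
// occlusion(p) = max(0, depth(p) - blurredDepth(p)) * strength.
// Pixels locally deeper than their neighborhood (creases, corners) darken.
std::vector<float> depthUnsharpAO(const std::vector<float>& depth,
                                  int w, int h, float strength)
{
    std::vector<float> blurred(depth.size());
    for (int y = 0; y < h; ++y)
    for (int x = 0; x < w; ++x) {
        float sum = 0.0f;
        for (int dy = -1; dy <= 1; ++dy)        // 3x3 box blur of the depth
        for (int dx = -1; dx <= 1; ++dx) {
            int sx = std::max(0, std::min(x + dx, w - 1));
            int sy = std::max(0, std::min(y + dy, h - 1));
            sum += depth[sy * w + sx];
        }
        blurred[y * w + x] = sum / 9.0f;
    }
    std::vector<float> ao(depth.size());
    for (std::size_t i = 0; i < depth.size(); ++i)
        ao[i] = std::min(1.0f, std::max(0.0f, depth[i] - blurred[i]) * strength);
    return ao;
}
```

On the GPU this is of course two full-screen passes (blur, then subtract), but the math is the same.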

Screen-space global illumination

Crytek also proposed a screen-space global illumination method. However, I am not aware of the algorithm they use. I suppose it is a variant of their SSAO using each sampled surface's color as well as its normal direction, in order to take the relative orientation of surfaces into account (refer to their presentation about light propagation volumes).
Also, researchers have proposed the screen-space directional occlusion method to compute real-time global illumination.

Screen-space fluid and water

Even fluids got screen-spac-ed!
Concerning SPH fluid simulation, two methods have been proposed as an alternative to marching-cubes-like methods, namely Screen-Space Meshes and Screen-Space Fluids Rendering with Curvature Flow.
To render the ocean, a screen-space grid projection is used in the CryEngine. Another screen-space approach has also been proposed on GameDev, with really nice-looking results.

Screen-space light shaft

There is one article about this in GPU Gems 3. OK, they used the words post-process, but after reading the article, it sounds a lot like screen-space to me!
It seems that Crytek uses a similar method. Obviously, they really love screen-space methods! :) I agree with them, since the methods they use are fast and result in eye-candy effects!
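The GPU Gems 3 approach essentially performs a radial blur toward the light's screen-space position. Here is a 1D sketch of the accumulation loop (my own simplification for illustration, not the exact chapter code):

```cpp
#include <algorithm>
#include <vector>

// 1D sketch of the light-shaft post-process: march from each pixel toward
// the light's screen position through a buffer storing the (occluded)
// light intensity, accumulating samples with an exponential decay.
float lightShaftSample(const std::vector<float>& lightBuffer,
                       int pixel, int lightPixel,
                       int numSamples, float decay, float exposure)
{
    float step   = (lightPixel - pixel) / float(numSamples);
    float pos    = float(pixel);
    float weight = 1.0f;
    float sum    = 0.0f;
    for (int i = 0; i < numSamples; ++i) {
        pos += step;
        int idx = std::min(std::max(int(pos + 0.5f), 0),
                           int(lightBuffer.size()) - 1);
        sum += lightBuffer[idx] * weight;  // samples behind occluders read 0
        weight *= decay;                   // farther samples contribute less
    }
    return sum * exposure;
}
```

Running this per pixel of a full-screen quad, with the light buffer being the scene rendered with occluders in black, produces the familiar crepuscular-ray streaks.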

Image-space sub surface scattering

And last but not least, Image-Space Subsurface Scattering for Interactive Rendering of Deformable Translucent Objects.


Wow! So many screen-space methods! I wonder what's next... Also, to complete this survey, you can read this other interesting discussion I found here.

Nov 24, 2009

My first attempt to global illumination

I was really impressed by the work of Anton Kaplanyan from Crytek on light propagation volumes (LPV) for massive lighting and global illumination! So I decided to do a quick test implementation of this method to experiment with it myself. Here are the results:

Screenshots: direct lighting; direct lighting + global illumination; global illumination only; global illumination x2.

As a quick test, I considered only one LPV aligned on the AABB of a single object (no cascaded LPVs). The LPV is one voxel larger than the AABB in order to avoid artifacts at the borders. Then, I just apply the LPV algorithm described in the white paper using virtual point lights and co. As I said, this is a basic test implementation, so no SH gradients are used to avoid back-face lighting and other artifacts.

The light is a simple spotlight without shadows.

It is interesting to note that without high-frequency detailed textures, the GI lighting has a "square" look, mostly due to the Cartesian cubic sampling. It seems that this is not visible with nice textures and normal mapping, as you can see in the screenshots presented in their white paper.

Because this is a quick test, the code is somewhat ugly :-|, except for the class that manages the unfolded volumetric texture. That's why I'm not going to publish it. If you are still interested, send me a mail.

The unfolded volumetric texture I have developed is visible at the bottom of the screenshots. Basically, it represents all the slices of the 3D texture. For the LPV, I also included a black border color between the slices. Thus, in the shader, I do not have to test whether my border samples come from the right slice during the gathering process. However, one step of the gathering process now involves rendering a quad overlapping each unfolded slice (one quad per slice) without overlapping the black borders.
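For illustration, here is how the unfolded addressing can work (a sketch under my own conventions, which may differ from my actual class: slices laid out horizontally, with one black border column after each slice):

```cpp
struct Texel2D { int u, v; };

// Map voxel (x, y, z) of an N^3 volume into the unfolded 2D texture:
// slice z occupies columns [z*(N+1), z*(N+1)+N-1], and column z*(N+1)+N
// is a black border. The border lets the gathering shader sample past a
// slice edge without a "same slice?" test: such samples simply fetch black.
Texel2D unfold(int x, int y, int z, int N)
{
    return { z * (N + 1) + x, y };
}

int unfoldedWidth(int N) { return N * (N + 1); }  // N slices plus N borders
```

Note that unfold never produces a border column, which is exactly why the per-slice quads used during gathering must avoid them too.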

OK, I could give you the computation time for each step of the method, but as I said, the code is not optimized, there are still a lot of tests (even in shaders!) and I am on a laptop PC...

Next step: implement this method as presented in the paper, with cascaded LPVs, importance sampling of virtual point lights, etc. I plan to implement this in the small engine I am using for my PhD (I will speak about it later). I don't know when I will do this, since the pressure increases as the end of my PhD comes closer.

Thanks for reading!

Nov 23, 2009

Small tips that could help to get into the game industry

As I said in my welcome post, I would like to get into the game industry after my PhD. If that is your case too, you should read this post...

As you may have noticed, I have a personal website where I share my work with everyone. All the source code of my demos and applications is available. When I started this website, it was to do like Humus: I really appreciated the demos he developed and the fact that he shares his work with the community. It turns out to be a good portfolio...

Since 2008, I have received 3 serious job offers from game studios (2 of them very well known). Unfortunately, I was busy with my PhD, but they told me to contact them again later. It seems that game developers in these studios browse the web to find interesting developers and then report to the HR department.

I did a 2-hour phone interview in English with the lead graphics developer of one of these studios! A really great guy who sometimes speaks at Siggraph. I was really impressed, because for me getting in touch with this studio was something unreachable! (NB: for a first interview, it turned out quite well, but it could have been way better. I have to learn and work more!!!)
This studio even contacted me a second time! I'm going on-site at the beginning of 2010 for another interview. Awesome.

I will not reveal which studios contacted me, so as not to break any chance of getting in. Follow this blog to find out where I will land! :)

OK, here are my tips:
  • Have a website where you show your personal work. Rendering two triangles with bump mapping is not enough: you need to implement more advanced rendering methods, and maybe try to improve on them. Also, show some applications you made (games, editors, engines, etc.). Concerning my website, I think it would be better with more applications or projects, and also with a cleaner presentation/appearance.
  • Do not hesitate to post your work on GameDev, or on other websites with an image of the day.
I hope this will help concerned people.

Hello world!

Welcome to my first blog!

Let me introduce myself: my name is Sébastien Hillaire and I am a French PhD student in virtual reality at INRIA and Orange Labs in Rennes. I will defend my PhD by the end of 2010.

Afterwards, I would like to get into the game industry as a graphics developer. At home, when I'm not playing guitar or games, I really like to develop graphics-related algorithms. For some examples of my work, check my personal website: I am always available to discuss graphics. Some people mail me about my demos and I am really happy to help them.

As I said, I really enjoy playing games! If you want to be a game developer, you have to play games to try new gameplay, see new rendering methods, etc. My favorite genre is the FPS. The game I have played the most is Quake3Arena; for me, this is THE perfect simple skill-based gameplay!! I have also really appreciated FarCry1/Crysis for the awesome graphics, gameplay possibilities and feeling of liberty you get. More recently, another game got my attention: Left4Dead. The cooperative gameplay in this game is just... I don't know which word I could use, because it is such a good gameplay idea!
I hope to see more games like these in the future.

On this blog, I will talk about my PhD experience, graphics-related stuff (demos, research papers, etc.) and game development.

Ok, I think this is all for the welcome/introduction post! I hope I will be able to make this blog as interesting as those I follow.

See you soon,