Sunday, December 9, 2012

Path Tracing

In this post I shall extend my ray tracer to a path tracer, where the lighting will be more realistic. All the renderings we have done so far do not solve the entire rendering equation. They miss the following phenomena:

  • Caustics.
  • Indirect illumination (color bleeding, ambient lighting).
  • Soft shadows.
In this post I shall try to render the models with global illumination techniques. I shall also describe a little bit about the light transport notation, because it is a useful tool for thinking about global illumination (GI). The light transport notation uses the following conventions:

  • It follows the light paths.
  • The end points of the straight path segments can be:
    • Light source : L
    • Eye : E
    • A specular reflection : S
    • A diffuse reflection : D
    • Semi-diffuse reflection (Glossy) : G

Scene with a glass ball and two diffuse walls [1].








For example, the above figure shows the following example paths:
  • Path a - LDSSE.
  • Path b - LDSE.
  • Path c - LSSDE.

The rendering equation describes the energy transport in the scene. The global illumination problem is in essence a transport problem: energy is emitted by the light sources and transported through the scene by reflections and refractions at the surfaces. The transport equation that describes this global illumination transport is called the rendering equation. It is the integral equation formulation of the definition of the BRDF, with the self-emittance of the surface points added as light sources to serve as an initialization function. This self-emitted light energy is required to provide the environment with some starting energy. The radiance leaving some point x can be expressed as an integral over all directions of the hemisphere around x as follows:
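In its usual LaTeX form (my notation may differ slightly from the figure that originally accompanied this post) the equation reads:

\[ L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\mathbf{n} \cdot \omega_i)\, d\omega_i \]

where L_e is the self-emitted radiance, f_r is the BRDF, L_i is the incoming radiance and n is the surface normal at x.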

Path tracing is an extension of the basic Whitted ray tracing algorithm. It can simulate all possible light bounces in the scene, which makes it possible to compute the lighting effects that require evaluating integrals, such as area lights and indirect illumination reflected off diffuse surfaces. These integration problems are handled by tracing random rays within the integration domain to estimate the value of the integral.

It is better to evaluate the light sources separately, because explicit light source evaluation converges to the solution faster than shooting random rays over the hemisphere only. The following figure gives a brief overview of the process:

Breaking down of the rendering equation. Image courtesy of the Lund Graphics Group.




I started with direct illumination from a spherical area light source, because in real-life lighting we only have area light sources and no point light sources. So all the renderings you will see in this post are rendered with explicit area light sources or a high dynamic range light probe. According to Peter Shirley's Realistic Ray Tracing, if we want to compute the explicit direct lighting from all the surfaces, we can use the surface-based transport equation as follows:



Area-based transport equation. Image courtesy of Realistic Ray Tracing.



In the above equation we have the emitted radiance, and the visibility function is defined as follows:



Now, if we choose a random point on the surface of the luminaire according to the underlying density, the estimate of the direct lighting becomes:



Although a sphere with center c and radius r can be sampled by picking uniform random points on the luminaire, this yields a very noisy image because many samples land on the back of the sphere. So we use a slightly more complex probability density function to reduce the noise.
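In Shirley's book the denser pdf samples the cone of directions subtended by the sphere as seen from the shaded point. A minimal sketch of such a sampler, assuming <cmath> and a small Vec3 type with the usual operators and dot/cross/normalize helpers from the tracer (the names here are mine, not from the original code):

// Sketch: sample a direction towards a spherical luminaire by sampling the
// cone of directions it subtends, instead of uniform points on its surface.
struct LightSample { Vec3 dir; float pdf; };

LightSample sampleSphericalLuminaire(const Vec3& p,   // shaded point
                                     const Vec3& c,   // luminaire center
                                     float r,         // luminaire radius
                                     float xi1, float xi2) // canonical randoms
{
    Vec3 w = normalize(c - p);                        // axis of the cone
    Vec3 u = normalize(cross(fabsf(w.x) > 0.9f ? Vec3(0, 1, 0) : Vec3(1, 0, 0), w));
    Vec3 v = cross(w, u);                             // (u, v, w): orthonormal basis

    float distSq      = dot(c - p, c - p);
    float cosThetaMax = sqrtf(fmaxf(0.0f, 1.0f - r * r / distSq));

    // Pick a direction uniformly inside the cone subtended by the sphere.
    float cosTheta = 1.0f - xi1 * (1.0f - cosThetaMax);
    float sinTheta = sqrtf(fmaxf(0.0f, 1.0f - cosTheta * cosTheta));
    float phi      = 2.0f * 3.14159265f * xi2;

    LightSample s;
    s.dir = u * (cosf(phi) * sinTheta) + v * (sinf(phi) * sinTheta) + w * cosTheta;
    s.pdf = 1.0f / (2.0f * 3.14159265f * (1.0f - cosThetaMax)); // solid-angle pdf
    return s;
}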

Geometry for Spherical Luminaire


I got the following output by rendering the Cornell scene with the sampled spherical luminaire only; no indirect illumination is considered yet.

Cornell scene with direct illumination by sampling the spherical luminaire only


All the surfaces have diffuse properties, and we do not get any color bleeding yet; the shadows are still pretty hard. So we are a bit far away from the goal. We need color bleeding around the edges of the walls and onto the model, and the shadows ought to be softer.

As I mentioned before, in real life there is no single light source illuminating the environment. In reality, light bounces off surfaces, making each surface a new light source in turn. In order to better approximate the rendering equation, we need to consider the indirect illumination. Indirect illumination can be written as:

Indirect part of the rendering equation. Image courtesy of the Lund Graphics Group.




The above rendering equation can be approximated using a finite sum of discrete samples and can be represented as follows:
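This is the standard Monte Carlo estimator; written out in my notation (which may differ slightly from the original figure):

\[ L_{\text{indirect}}(x, \omega_o) \approx \frac{1}{N} \sum_{k=1}^{N} \frac{f_r(x, \omega_k, \omega_o)\, L_i(x, \omega_k)\, (\mathbf{n} \cdot \omega_k)}{p(\omega_k)} \]

where the directions \omega_k are drawn from the probability density p.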


We want to avoid n to the power k rays after k bounces (each additional ray contributes less and less to the image). In path tracing we trace n rays through each pixel and randomize their paths: at every bounce the indirect illumination is sampled in a single random direction. One of the main problems with path tracing is that the image is still noisy even after a large number of samples.

I generate the random direction using importance sampling to reduce the noise, specifically cosine-weighted importance sampling. As before, I test the ray for intersection against the scene. If the ray hits a surface, the function trace() is called recursively for the indirect contribution. I use Russian roulette instead of a fixed cutoff depth. In Russian roulette, the ray is killed with a certain probability alpha, called the absorption probability. If the ray is terminated, we simply return black; otherwise we pick a random direction and call trace() again. In that case, the result must be divided by (1 - alpha) to give the correct result.

A good starting value for alpha is 0.1, which means the ray has a 10% chance of being absorbed and a 90% chance of continuing. A canonical random number (a random number between 0 and 1) is used to decide whether the ray should be absorbed or traced further. If the ray is not absorbed, we need to select a random direction in which to continue tracing.
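A minimal sketch of the roulette decision inside the recursive trace() (depth, minDepth, rng() and cosineSampleHemisphere() are placeholders for whatever the actual tracer uses, not names from the original code):

// Sketch of Russian roulette termination inside the path tracer's trace().
// rng() is assumed to return a canonical random number in [0, 1).
const float alpha = 0.1f;                  // absorption probability
if (depth >= minDepth && rng() < alpha)    // force a few bounces before roulette
    return Color(0.0f, 0.0f, 0.0f);        // ray absorbed: contribute nothing

Vec3 dir = cosineSampleHemisphere(normal, rng(), rng());  // see the next sketch
Color indirect = trace(Ray(hitPoint, dir), depth + 1);
indirect = indirect / (1.0f - alpha);      // compensate for the killed rays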

Cosine-weighted sampling
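A sketch of the cosine-weighted direction generation, again assuming the same hypothetical Vec3 helpers. Directions near the normal are chosen more often, matching the cosine term in the rendering equation, so the pdf is cos(theta) / pi:

// Sketch: cosine-weighted sampling of the hemisphere around the normal n.
Vec3 cosineSampleHemisphere(const Vec3& n, float xi1, float xi2)
{
    float r     = sqrtf(xi1);
    float theta = 2.0f * 3.14159265f * xi2;
    float x = r * cosf(theta);
    float y = r * sinf(theta);
    float z = sqrtf(fmaxf(0.0f, 1.0f - xi1));   // cos(theta) of the sample

    // Build an orthonormal basis (u, v, n) and rotate the sample into it.
    Vec3 u = normalize(cross(fabsf(n.x) > 0.9f ? Vec3(0, 1, 0) : Vec3(1, 0, 0), n));
    Vec3 v = cross(n, u);
    return normalize(u * x + v * y + n * z);
}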
The following image was generated using a combination of a fixed recursion depth and Russian roulette; it is good to force a few recursions before starting the roulette.

Cornell scene with 1024 paths/pixel



As you can see, we have achieved some of the goals mentioned at the beginning of this post: we are getting soft shadows and color bleeding around the edges of the walls and over the surfaces of the model. Now it is time to add reflection and refraction to the scene.

When it comes to reflectivity and refractivity, the path tracer is similar to the Whitted tracer, except that it chooses only one of them (reflection or refraction) per bounce, and I use Russian roulette to make the choice. A canonical random number decides whether I trace the reflected ray, the refracted ray, or evaluate the direct and indirect illumination.
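A rough sketch of that choice (the branch probabilities, rng() and the helper names are illustrative placeholders, not taken from the original code); each branch is divided by its selection probability so the estimate stays unbiased:

// Sketch: pick exactly one branch per bounce with Russian roulette.
// Assumes reflectivity + refractivity <= 1; the remainder goes to the
// diffuse (direct + indirect) evaluation.
float xi = rng();
if (xi < reflectivity) {
    color = trace(Ray(hitPoint, reflect(ray.dir, normal)), depth + 1) / reflectivity;
} else if (xi < reflectivity + refractivity) {
    color = trace(Ray(hitPoint, refract(ray.dir, normal, ior)), depth + 1) / refractivity;
} else {
    float pDiffuse = 1.0f - reflectivity - refractivity;
    color = (directLight(hitPoint, normal) + indirectLight(hitPoint, normal, depth)) / pDiffuse;
}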


Cornell scene with reflection and refraction







The background of all the images I have rendered so far is black, which makes them look a little dull. Now I shall use an image as an environment map. The image is called a light probe because it represents the incoming light from all directions. I use light probes stored as OpenEXR files in the angular map format.
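Looking up the probe requires mapping a ray direction to image coordinates. A sketch of the commonly used angular map mapping (axis conventions vary between probes, so the forward axis and signs here are assumptions and may need flipping):

// Sketch: map a unit direction (dx, dy, dz) to angular-map texture coordinates.
// The angular map covers all directions; (u, v) are returned in [0, 1].
struct Uv { float u, v; };

Uv angularMapLookup(float dx, float dy, float dz)
{
    float denom = sqrtf(dx * dx + dy * dy);
    float r = (denom > 0.0f)
            ? (1.0f / 3.14159265f) * acosf(dz) / denom
            : 0.0f;                              // looking straight along the z axis
    Uv uv;
    uv.u = 0.5f * (dx * r + 1.0f);               // remap from [-1, 1] to [0, 1]
    uv.v = 0.5f * (dy * r + 1.0f);
    return uv;
}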



I just need to remove the walls of the Cornell scene and the spherical light source, and then use the light probe as the main light source. This enables the objects in the scene to blend into the environment.



Image based lighting

As you can see, the floor is very grainy. This is because of the large variance of the light probe I am using. There are methods to reduce this, which will hopefully be addressed in my next post. The source code that generated the above images can be found at the following link:

Path Tracing Source Code



References

[1] Realistic Ray Tracing, Peter Shirley.
[2] Advanced Global Illumination, Philip Dutré, Kavita Bala and Philippe Bekaert.
[3] Lund University Graphics Group

Monday, December 3, 2012

Parallel Volume Ray Casting

Volume ray casting is an image-based volume rendering technique: it computes a 2D image from a 3D data set. The aim of this post is to describe the procedure followed to read a volumetric data set of moderate size, render it without any parallelization and measure the rendering time taken, and then parallelize the rendering on the available hardware to see whether the parallel rendering actually speeds up the process. All the rendering is done on an Intel Xeon processor with 4 processing cores, each running at 2.66 GHz.


Volume rendering provides very high quality renderings. The basic goal of volume ray casting is to make the best use of the 3D data and not attempt to impose any geometric structure on it. It thereby avoids one of the most important limitations of surface extraction techniques.


Ray casting is an algorithm for performing direct volume rendering. The core of the algorithm is to send one ray, R, per pixel from the camera and take samples along the ray inside the volume. At each sample point there is an amount of illumination I(x, y, z) from the light sources. Not all of that illumination is scattered in the direction of the camera; it depends on the local density D(x, y, z) at the point.

Maximum Intensity Projection (MIP) uses the highest value encountered along the ray to calculate the final color [1]. This technique can be used to visualize blood vessels, since they have high values caused by a substance injected into the vein. The blood vessels will be visible if the surrounding tissues have lower values.

In this post I am using the MIP ray casting technique. When I render the original volume of size 256x256x256 and parallelize it, I do not see much of a speed-up, only fractions of a second. To see any significant speed-up from parallelization, I scale the volume 2.5 times in each dimension and then render it with and without parallelization. The scaling is just an affine transformation that makes the volume larger in each dimension, so the boundary becomes larger and takes more time to traverse. Rendering the image shown below took 7 seconds, while rendering with the parallel pragmas enabled completed in 2 seconds.



The main rendering algorithm contains 3 nested loops as we traverse the volumetric data. The OpenMP pragma is applied only to the outermost loop, since we want to distribute rows of the frame buffer to different threads (see the sketch below). Even though OpenMP provides default data-sharing attribute rules that could be relied on, I prefer to use the default(none) clause instead. In this way I tell OpenMP that I take sole responsibility for the data-sharing attributes, and the reward is two-fold [2]:

  • We must carefully think about how each variable is used, which helps us to avoid mistakes.
  • Explicitly stating the data-sharing attributes makes the behaviour of the parallel code easier to understand.

For good performance, it is a good idea to use private variables as much as possible. I believe I have managed to make the rendering faster by carefully considering which variables should be private and which shared.
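A minimal sketch of what the parallelized MIP loop could look like (the Volume, Image, Ray and camera types, intersectBounds() and sampleAt() are placeholders of my own; the real renderer will differ):

// Sketch: MIP ray casting with the outermost loop parallelized by OpenMP.
// default(none) forces every variable's data-sharing attribute to be stated
// explicitly; everything declared inside the loop body is private to a thread.
float stepSize = 0.5f;   // sampling distance along the ray (illustrative value)

#pragma omp parallel for default(none) shared(volume, image, camera, stepSize)
for (int y = 0; y < image.height; ++y) {             // rows go to different threads
    for (int x = 0; x < image.width; ++x) {
        Ray ray = camera.generateRay(x, y);           // private: declared in the loop
        float maxValue = 0.0f;
        float tNear, tFar;
        if (!volume.intersectBounds(ray, tNear, tFar))
            continue;
        // March along the ray and keep the highest sample (MIP).
        for (float t = tNear; t <= tFar; t += stepSize)
            maxValue = std::max(maxValue, volume.sampleAt(ray.origin + ray.dir * t));
        image.setPixel(x, y, maxValue);               // map the value to a gray level
    }
}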




References

[1] Introduction to Volume Rendering.
[2] Using OpenMP: Portable Shared Memory Parallel Programming, Barbara Chapman, Gabriel Jost and Ruud van der Pas.