The initial incarnation of the ray tracer is only capable of shooting eye rays and detecting whether they hit any spheres in the scene.
I would like to give a small overview of some of the utility classes that came along with the skeleton (a short usage sketch follows the list).
- Color - It represents an RGB color value. Each color component (r, g, b) is stored as a float and is not limited to any specific range, since we will be working with high-dynamic-range images. The most basic arithmetic operations are implemented, so two colors can be added, multiplied, etc. simply by writing c1 + c2, c1 * c2, and so on. All arithmetic operations work element-wise.
- Vector - It represents a direction in 3D space. Intuitive math operations are implemented.
- Point - It represents a position in 3D space.
- Ray - The basic utility class in any ray tracing operation. It contains some of the above-mentioned classes as members: the origin of the ray is represented by a Point and the direction by a Vector.
*[Figure: Ray structure. Image courtesy of Lund Graphics Group]*
- Scene - All the objects in the 3D world are represented by this class. These objects include the following:
- Primitives.
- Light sources.
- Cameras.
*[Figure: Scene representation. Image courtesy of Lund Graphics Group]*
*[Figure: Intersection interface. Image courtesy of Lund Graphics Group]*
- Intersection - It represents the ray/object intersection point. All the necessary information about the intersection is computed by the intersected object and stored in this class.
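To make the roles of these classes concrete, here is a short usage sketch. The constructor signatures are my assumption based on the descriptions above, not the skeleton's exact API:

```cpp
// A short usage sketch of the utility classes described above. Constructor
// signatures are assumed from the description, not taken from the skeleton.
void utilityExample()
{
    Color c1(0.8f, 0.2f, 0.1f);
    Color c2(0.5f, 0.5f, 0.5f);
    Color sum  = c1 + c2;            // element-wise; components may exceed 1 (HDR)
    Color prod = c1 * c2;            // element-wise multiplication

    Point  origin(0.0f, 0.0f, 0.0f); // a position in 3D space
    Vector dir(0.0f, 0.0f, -1.0f);   // a direction in 3D space
    Ray    eyeRay(origin, dir);      // origin (Point) plus direction (Vector)
}
```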
The main function performs the following typical setup for the ray tracer [2] (a code sketch follows the list):
- Build a scene.
- Create materials, objects, lights.
- Create the image buffer, represented by the Image class. It stores the size (width x height) of the image, and each pixel is a Color object with red, green, and blue components. There are functions for loading and saving the image in OpenEXR (.exr) format and for getting and setting pixel values. This class is used as the frame buffer in the ray tracer, storing the output color of each pixel, and is also used in the LightProbe class for representing the environment map.
- Set up the perspective camera. The Camera class needs to know which Image object is used for the output image in order to extract the image dimensions. The camera can be conveniently set up using the setLookAt() function, which takes a position, a target, and an up vector. The field-of-view parameter specifies the horizontal FOV in degrees.
- Prepare the scene as mentioned above.
- Create the ray tracer object.
- Perform the ray tracing.
- Save the output image.
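Put together, the setup might look roughly like the following sketch. Image, Camera, and setLookAt() are described above; Raytracer, computeImage(), setFOV(), prepare(), and save() are illustrative names, not necessarily the skeleton's exact API:

```cpp
// A rough sketch of the setup steps above (names partly assumed).
int main()
{
    Image output(800, 600);              // frame buffer: one Color per pixel

    Camera camera(&output);              // camera reads dimensions from the image
    camera.setLookAt(Point(0.0f, 1.0f, 5.0f),   // position
                     Point(0.0f, 0.0f, 0.0f),   // target
                     Vector(0.0f, 1.0f, 0.0f)); // up vector
    camera.setFOV(60.0f);                // horizontal field of view in degrees

    Scene scene;
    // ... create materials, add sphere primitives and point lights ...
    scene.prepare();                     // prepare the scene as mentioned above

    Raytracer tracer(&scene, &camera, &output);
    tracer.computeImage();               // shoot eye rays and shade each pixel

    output.save("output.exr");           // save the result in OpenEXR format
    return 0;
}
```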
The following diagram provides the reader with a broader overview of this tiny renderer:
*[Figure: Skeleton overview. Image courtesy of Lund Graphics Group]*
The Whitted ray tracer launches up to three new rays at the intersection point between the ray and the intersected object: one for determining shadows, one for perfect reflection, and one for perfect refraction. The shadow ray is always launched, to determine whether any object is located between the intersection point and the light source. Reflection and refraction rays are spawned based on the material properties of the intersected object.
The basic ray tracer embryo contains some spherical objects in the scene. It colors the pixel white if the ray hits anything, black otherwise.
*[Figure: Default output image from the Whitted ray tracer]*
As you can see from the above image, there is no light transport in the scene. It would definitely be more interesting if we could apply diffuse shading to the spheres. For now, I only consider the direct illumination that originates from point light sources. The implementation involves evaluating the basic rendering equation, shown below:
$$L_o(\mathbf{x}, \vec\omega_o) = L_e(\mathbf{x}, \vec\omega_o) + \int_{\Omega} f_r(\mathbf{x}, \vec\omega_i, \vec\omega_o)\, L_i(\mathbf{x}, \vec\omega_i)\, (\vec{n} \cdot \vec\omega_i)\, d\omega_i$$

*The rendering equation*
We can break down the basic rendering equation as follows:
*[Figure: Breakdown of the rendering equation]*
The scene contains a number of point light sources. We do the following calculation to get the contribution from each light source (a code sketch follows the list):
- The light vector from the intersection point to the point light is calculated and then normalized; this is the incident light direction.
- The cosine of the incident angle is calculated as the dot product between the light vector and the normal at the intersection point.
- The BRDF (Bidirectional Reflectance Distribution Function) is evaluated at the intersection point for the light vector that has been calculated.
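A minimal sketch of this per-light loop, assuming a Lambertian BRDF (albedo divided by pi); the light list and accessor names are illustrative, not the skeleton's exact API:

```cpp
#include <cmath>

// Sketch of the per-light computation described above, assuming a Lambertian
// BRDF. Accessor names are assumed, not taken from the skeleton.
Color shadeDirect(const Intersection& is, const Scene& scene)
{
    const float PI = 3.14159265f;
    Color result(0.0f, 0.0f, 0.0f);
    for (const PointLight* light : scene.getLights()) {
        Vector toLight = light->getPosition() - is.position;
        float dist2 = toLight.dot(toLight);        // squared distance for falloff
        Vector wi = toLight / std::sqrt(dist2);    // normalized incident direction
        float cosTheta = is.normal.dot(wi);        // n . wi, the incident angle term
        if (cosTheta <= 0.0f)
            continue;                              // light is behind the surface
        Color brdf = is.material->getColor() / PI; // Lambertian BRDF
        result += brdf * light->getRadiance() * (cosTheta / dist2);
    }
    return result;
}
```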
I got the following output for the diffuse reflection:
*[Figure: Diffuse reflection]*
*[Figure: The point p is not in shadow while the point q is in shadow. Image courtesy of Peter Shirley, Fundamentals of Computer Graphics]*
Note that the intersection test only has to return a boolean answer, true or false, since we are not interested in any other intersection information. To add shadows to the shading algorithm, I add an if statement that determines whether the point is in shadow or not. In the naive implementation, the shadow ray would check for an intersection at any distance between 0 and INFINITY. But because of numerical imprecision, this can result in an intersection with the very surface on which the point in question (p) lies. The usual adjustment is therefore to start the interval at a small positive constant, so that we check for intersections between that constant and INFINITY. The figure above illustrates the problem, I believe, and a minimal sketch of the adjusted test follows.
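Here, intersectAny() stands in for a boolean any-hit query and is an assumed name:

```cpp
// Shadow-ray test (sketch). EPSILON shrinks the valid interval so the ray
// neither re-intersects the surface it starts on nor "hits" the light itself.
bool inShadow(const Point& p, const PointLight* light, const Scene& scene)
{
    Vector toLight = light->getPosition() - p;
    float distToLight = toLight.length();
    Ray shadowRay(p, toLight / distToLight);
    const float EPSILON = 1e-4f;
    // Any hit closer than the light blocks it; a boolean answer is enough.
    return scene.intersectAny(shadowRay, EPSILON, distToLight - EPSILON);
}
```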
I generated the following image with shadow calculation:
*[Figure: Diffuse reflection with shadow rays]*
Similar to the shadow rays in the last addition, reflection rays can be spawned at the point of intersection. In this way, the ray originating at the eye can be traced recursively to account for alterations in the ray path caused by reflections. Since we now have reflection, the basic rendering equation is updated as follows:
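In the simplest form used here, the outgoing color becomes a linear blend of the direct illumination and the recursively traced reflection:

$$L_o = (1 - r)\,L_{\text{direct}} + r\,L_{\text{reflected}}$$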
The above equation contains a new parameter, the specular reflectance, represented by r. For the time being it is enough to assume that the specular reflectance affects all wavelengths equally, and thus is a single coefficient between 0 and 1. If r = 0.0, the surface is not reflective at all, and if r = 1.0, the surface is a perfect mirror. Note that the resulting color is linearly interpolated using the specular reflectance (a code sketch of the recursive trace follows the image below). After adding this specular reflectance I get the following output:
*[Figure: Reflection with recursion depth 5]*
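A sketch of the recursive trace along these lines; shadeDirect() is the diffuse-plus-shadow shading from the earlier steps, and the remaining names are illustrative rather than the skeleton's exact API:

```cpp
// Recursive Whitted-style reflection (sketch). MAX_DEPTH caps the recursion;
// the image above was rendered with a depth of 5.
const int MAX_DEPTH = 5;

Color trace(const Ray& ray, const Scene& scene, int depth)
{
    Intersection is;
    if (!scene.intersect(ray, is))
        return Color(0.0f, 0.0f, 0.0f);              // ray escapes: background

    Color direct = shadeDirect(is, scene);           // diffuse shading with shadows
    float r = is.material->getReflectance();
    if (r > 0.0f && depth < MAX_DEPTH) {
        Vector d = ray.direction;
        Vector n = is.normal;
        Vector reflDir = d - n * (2.0f * d.dot(n));    // perfect mirror direction
        Ray reflRay(is.position + n * 1e-4f, reflDir); // epsilon offset, as before
        Color reflected = trace(reflRay, scene, depth + 1);
        return direct * (1.0f - r) + reflected * r;    // linear blend by r
    }
    return direct;
}
```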
Another important feature of the ray tracer is the ability to handle transparency and refraction. Many real materials are more or less transparent (glass, plastics, liquids, etc.). When light enters a transparent material, it is usually bent, or refracted; how much is determined by the material's index of refraction. By Snell's law we can compute the refraction vector T. Similar to the reflection term, I can add a refraction term to the model as follows:
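Extending the linear blend with a transmittance coefficient t gives:

$$L_o = (1 - r - t)\,L_{\text{direct}} + r\,L_{\text{reflected}} + t\,L_{\text{refracted}}$$

The refraction direction T itself follows from Snell's law. A minimal sketch, assuming unit-length direction d and normal n, with eta = n1/n2:

```cpp
#include <cmath>

// Refraction direction from Snell's law (sketch). Returns false on total
// internal reflection, in which case only a reflection ray should be spawned.
bool refractDir(const Vector& d, const Vector& n, float eta, Vector& T)
{
    float cosI  = -d.dot(n);                        // cosine of the incident angle
    float sin2T = eta * eta * (1.0f - cosI * cosI); // sin^2 of the refracted angle
    if (sin2T > 1.0f)
        return false;                               // total internal reflection
    float cosT = std::sqrt(1.0f - sin2T);
    T = d * eta + n * (eta * cosI - cosT);          // bend the ray into the medium
    return true;
}
```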
Just like reflection, refraction can be handled by recursively spawning a new ray at the hit point of a refractive surface where t > 0. Like before, I interpolate between the direct illumination, reflection, and refraction components, so it should hold that r + t <= 1. Here is the rendered image with reflection, refraction, and shadows:
*[Figure: Reflection and refraction with shadows]*
To actually compute an estimate of the average color of a pixel, an algorithm is needed for picking "uniform" points on the pixel. There are several methods of obtaining samples within a pixel; usually, such algorithms pick samples within the unit square. Two simple possibilities for generating samples over the unit square are shown below:
*[Figure: Sampling strategies. Image courtesy of Realistic Ray Tracing by Peter Shirley]*
*[Figure: Different sampling techniques. Image courtesy of Realistic Ray Tracing by Peter Shirley]*
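As a reference point, here is a minimal sketch of plain jittered (stratified) sampling over the unit square; multi-jittered sampling builds on this grid by additionally enforcing an n-rooks condition among the sub-cells:

```cpp
#include <random>
#include <vector>

struct Sample2D { float x, y; };

// Jittered (stratified) sampling: one uniform random point in each cell of
// an n x n grid over the unit square.
std::vector<Sample2D> jitteredSamples(int n, std::mt19937& rng)
{
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    std::vector<Sample2D> samples;
    samples.reserve(static_cast<size_t>(n) * n);
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i)
            samples.push_back({ (i + uni(rng)) / n, (j + uni(rng)) / n });
    return samples;
}
```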
I have used the multi-jittered sampling technique. Here is the resulting image:
*[Figure: Multi-jittered sampling with 100 primary rays per pixel]*
If you look at the image closely and compare it with the previous ones, you will see the difference. At the same time, I would like to stress that anti-aliasing is the weakest test of a sampling technique's strength. To really stress-test sampling techniques and expose their flaws, you need to use them for depth-of-field simulation, ambient occlusion, and shading with area lights, particularly environment lights. Some of these will show up in my next post.
All of the images generated so far are way too perfect and far from photo-realistic. The goal of photo-realism is addressed mainly by Monte Carlo path tracing techniques and their variations. I shall discuss the simplest one in my next post.
References
[1] Realistic Ray Tracing, Peter Shirley and R. Keith Morley
[2] Lund University Graphics Group
[3] Ray Tracing from the Ground Up, Kevin Suffern
[4] Fundamentals of Computer Graphics, Peter Shirley