

phase 14:
ray tracing


overview

The purpose of this assignment was to implement some form of photorealistic rendering. I decided to attack the global illumination problem, and chose to do this by implementing a recursive ray tracer.


global illumination

The global illumination problem, or light transport problem, asks the following question: given a scene composed of objects with various properties for absorbing and reflecting light, plus some light sources, how much light of each color does a viewer perceive from a given angle? The simplest approach, Lambertian shading, assumes that the angle between the surface normal and the light source alone determines the amount of illumination at each surface point. Slightly more sophisticated models, such as Phong shading, add a specular component, so that surface points appear brighter when the viewer is on or near the direction of geometric reflection about the surface normal. The combination of diffuse (Lambertian) and Phong specular illumination can produce decent images, but none of these techniques takes into account the effect that different surfaces in the scene have on each other. This is the essence of the light transport problem -- actual illumination is the result of photons tracing through a scene along an infinite number of paths, and finding computationally tractable ways to simulate the resulting effects is difficult.
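As a concrete illustration of the local shading model described above, here is a minimal sketch in Python (using numpy); the vector names n, l, v and the coefficients kd, ks, and shininess are labels of my own choosing, not names from the actual code.

  import numpy as np

  def local_shade(n, l, v, light_color, kd, ks, shininess):
      """Lambertian diffuse plus Phong specular illumination at one surface point.

      n, l, v are unit vectors: the surface normal, the direction to the light,
      and the direction to the viewer.  kd and ks are the diffuse and specular
      reflection coefficients (RGB); shininess is the Phong exponent.
      """
      n_dot_l = np.dot(n, l)
      if n_dot_l <= 0.0:
          return np.zeros(3)          # light is behind the surface

      diffuse = kd * light_color * n_dot_l

      # reflect the light direction about the normal: r = 2(n.l)n - l
      r = 2.0 * n_dot_l * n - l
      specular = ks * light_color * max(np.dot(r, v), 0.0) ** shininess

      return diffuse + specular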

One surface-surface interaction we have already considered (phase 9) is shadowing: in determining the illumination at a surface point, the point receives illumination from a light source only if there are no intervening surfaces. Shadow rays let us create both hard and soft shadows, but they still give us no way to account for the illumination that other surfaces contribute to a surface point.
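A shadow-ray test along these lines might look like the following sketch; intersect_any is a hypothetical routine (not from the original code) that reports whether any surface lies between the point and the light, and the small epsilon offset is the usual trick to keep the ray from immediately re-hitting the surface it starts on.

  import numpy as np

  EPSILON = 1e-4  # offset to avoid self-intersection at the surface

  def light_visible(point, normal, light_pos, intersect_any):
      """Return True if the light at light_pos is unoccluded from point."""
      to_light = light_pos - point
      distance = np.linalg.norm(to_light)
      direction = to_light / distance

      # start the shadow ray slightly off the surface so it doesn't hit itself
      origin = point + EPSILON * normal
      return not intersect_any(origin, direction, distance)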


ray tracing

Enter: ray tracing! One of the standard methods for addressing the global illumination problem (other than ignoring it) is ray tracing. The basic idea is that, given a scene, lights, and a set of viewing parameters, we trace a ray into the scene for each pixel of the final image. When a ray hits a surface, we determine its illumination using the standard framework (with shadow rays, so that only surfaces with a direct line to a light source are illuminated in this manner). In addition, if the surface is highly reflective, we send out a "reflection ray" along the geometric direction of reflection about the surface normal at that point, and add the illumination that this ray gathers to the surface illumination. Similarly, if the surface is sufficiently transparent, we send out a "transmission ray"; if the surface is just a thin membrane, this ray continues along the same direction as the source ray, otherwise we apply Snell's law to determine the correct angle of refraction. We trace rays recursively in this manner up to a pre-specified depth limit.
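The refraction direction mentioned above follows directly from Snell's law; here is a sketch of that calculation (the function name and argument conventions are mine), including the total-internal-reflection case where no transmitted ray exists.

  import numpy as np

  def refract(d, n, eta):
      """Refracted direction via Snell's law.

      d   -- unit incident direction (pointing into the surface)
      n   -- unit surface normal (pointing toward the incident side)
      eta -- ratio of refractive indices n1/n2
      Returns a unit direction, or None on total internal reflection.
      """
      cos_i = -np.dot(n, d)
      sin2_t = eta * eta * (1.0 - cos_i * cos_i)
      if sin2_t > 1.0:
          return None  # total internal reflection: no transmitted ray
      cos_t = np.sqrt(1.0 - sin2_t)
      return eta * d + (eta * cos_i - cos_t) * n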

In non-paragraph form, the algorithm is (a code sketch follows the list):

  1. generate eye-rays from the viewer's position, one for each pixel
  2. for each ray, find the closest intersecting polygon
  3. at the intersection point, determine illumination from each light source using shadow rays (I-lights)
  4. if the surface is reflective, and we're not at the maximum recursion depth, send out a reflection ray (its returned illumination is I-reflected)
  5. if the surface is transparent, and we're not at the maximum recursion depth, send out a transmission ray (its returned illumination is I-transmitted)
  6. return total illumination (I-lights + reflectance x I-reflected + transmittance x I-transmitted)
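
Putting steps 2 through 6 together, a minimal sketch of the recursive trace might look like this. It builds on the local_shade, light_visible, and refract sketches above; closest_hit, the scene structure, and the material fields (reflectance, transmittance, eta) are assumptions of mine for illustration, not the actual implementation.

  import numpy as np

  MAX_DEPTH = 5

  def normalize(v):
      return v / np.linalg.norm(v)

  def reflect(d, n):
      """Mirror direction d about normal n."""
      return d - 2.0 * np.dot(d, n) * n

  def trace(origin, direction, scene, depth=0):
      """Return the RGB color seen along a ray (steps 2-6 of the list above)."""
      hit = closest_hit(origin, direction, scene)         # step 2 (assumed helper)
      if hit is None:
          return scene.background

      color = np.zeros(3)
      for light in scene.lights:                          # step 3: shadow rays
          if light_visible(hit.point, hit.normal, light.position, scene.intersect_any):
              to_light = normalize(light.position - hit.point)
              color += local_shade(hit.normal, to_light, -direction,
                                   light.color, hit.material.kd,
                                   hit.material.ks, hit.material.shininess)

      if depth < MAX_DEPTH:
          if hit.material.reflectance > 0:                # step 4: reflection ray
              r = reflect(direction, hit.normal)
              color += hit.material.reflectance * trace(hit.point + 1e-4 * r, r, scene, depth + 1)
          if hit.material.transmittance > 0:              # step 5: transmission ray
              t = refract(direction, hit.normal, hit.material.eta)
              if t is not None:
                  color += hit.material.transmittance * trace(hit.point + 1e-4 * t, t, scene, depth + 1)

      return color                                        # step 6: total illumination

The small offsets along the new ray directions serve the same purpose as the epsilon in the shadow-ray sketch: they keep secondary rays from immediately re-intersecting the surface they just left.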

Ray tracing in this manner, combined with shadow rays, enables us to simulate the following effects:

In the following two images, the left shows a flat mirror with a vase reflected in it; in the right, the mirror is convex and a reflective solid glass ball sits between the viewer and the vase. You can see multiple reflected and refracted images of the vase, which is kind of cool, and also physically accurate (I think...).


In the following images, the left is the flower-in-vase as produced without ray tracing, the middle image uses ray tracing but no shadow rays, and the image on the right uses ray tracing with shadow rays (the vase itself does not cast shadows, but all other surfaces do, including the water). Lacking shadows, the middle image appears unnaturally bright, whereas the scene in the image on the right appears quite realistic; you can see the stem refracting in the water, which is pretty cool (the vase is modelled as a thin, non-refracting surface, so you don't see any refraction effects from the vase itself). Having the vase cast shadows caused problems, because then no light reaches the stem (this is in some sense a caustic lighting effect, which more sophisticated ray tracers would handle properly; see below).


limitations of ray tracing

First and foremost, ray tracing is incredibly slow, especially with my not-very-optimized code. The real problem is the vast number of ray-triangle intersection tests that must be performed; there are a variety of scene organization techniques which can ease the computational load, but I have not yet implemented any of these beyond simple bounding-volume checks.
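For reference, the kind of bounding-volume check mentioned above can be as simple as a slab test against an axis-aligned box: any ray that misses the box can skip every triangle inside it. This is a generic sketch under my own naming, not the project's code.

  import numpy as np

  def hits_aabb(origin, direction, box_min, box_max):
      """Slab test: does the ray intersect the axis-aligned bounding box?

      If not, every triangle inside the box can be skipped without running
      individual ray-triangle intersection tests.
      """
      inv_d = 1.0 / direction            # assumes no zero components; real code
                                         # would special-case axis-parallel rays
      t0 = (box_min - origin) * inv_d
      t1 = (box_max - origin) * inv_d
      t_near = np.max(np.minimum(t0, t1))
      t_far = np.min(np.maximum(t0, t1))
      return t_near <= t_far and t_far >= 0.0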

There are also certain effects which this basic approach to ray tracing cannot recreate. Of particular importance are diffuse interactions between surfaces (if you hold a green book next to a white wall, the wall will appear greenish) and a related effect, specular reflections which are not perfectly sharp (like a bad mirror or a shiny table). Another desirable effect is surface caustics: light patterns cast onto diffuse surfaces by reflecting or refracting media (e.g. light shining through a glass of water onto a table). Caustics are viewer-independent phenomena, so our current model of tracing rays from the viewer, and calculating reflection and refraction angles based ultimately on the viewer's position, is inadequate.

One final problem -- this will shock you -- aliasing, horrible horrible aliasing! We are sampling a continuous space with discrete rays, and this can lead to trouble. In particular, I have had a lot of trouble handling concave reflecting surfaces. As far as I can tell, because my surfaces are approximated by polygons, the interpolated surface normals do not correspond perfectly to the true surface normals; on a concave surface (which stretches the apparent size of a reflected image), glancing rays can alternate between hitting and missing, and the results are terrible. In the images below, the left image shows this form of aliasing, and the right is a semi-successful attempt to remedy the problem by modeling the mirror with a larger number of polygons.



