phase 7:
hidden surface removal

cs40 assignment 7 web page



overview

in the previous phase, I developed parallel and perspective projections to correctly map 3d points into screen space. Nothing was done, though, to handle the frequent case in which one surface sits in front of another -- the problem of hidden surface removal. In this phase, hidden surface removal was addressed using a z-buffer. This was then extended to handle layered transparency using an a-buffer, and a 3d shader library was implemented to allow for color-mapping of surfaces.

Technical documentation is available in the api, with the changes most relevant to z- and a-buffering in the section on draw3d.h and the shader library in the section on shader.h. What follows is a discussion, with examples, of the various features implemented in phase 7.


hidden surface removal using z-buffering

the general z-buffer algorithm is really straightforward. Along with each pixel in an image, you keep depth information (a z-buffer). Then, you draw every pixel, but only when the depth of the new pixel is closer than the depth stored in the z-buffer do you change the pixel in the image and update the z-buffer. This is very easy in parallel projection, because there z varies linearly with x and y, so a simple bilinear interpolation over vertex z-values determines the z-value of each pixel in a triangle (this is where the fact that all vertices of a triangle are co-planar becomes especially lovely). With perspective projection, this is only slightly more difficult, in that 1/z varies linearly with x and y, so you just do the bilinear interpolation with 1/z, and store 1/z values in the z-buffer. Then, of course, closer objects have larger z-buffer values, but that is a trivial modification of the code.
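
To make this concrete, here is a minimal sketch in C of the per-pixel test for the perspective case; the struct and function names are hypothetical stand-ins, not the actual draw3d.h interface:

    /* A minimal sketch of the z-buffer test for perspective projection,
       storing 1/z so that larger values mean closer. */
    typedef struct {
        int width, height;
        float *zbuf;           /* holds 1/z per pixel; cleared to 0.0 (infinitely far) */
        unsigned int *pixels;  /* packed color per pixel */
    } ZImage;

    /* Plot a pixel only if the new fragment is closer than what the
       z-buffer already holds, then record its depth. */
    void plot_if_closer(ZImage *img, int x, int y, float inv_z, unsigned int color)
    {
        int i = y * img->width + x;
        if (inv_z > img->zbuf[i]) {
            img->pixels[i] = color;
            img->zbuf[i] = inv_z;
        }
    }

For parallel projection the same test runs on plain interpolated z values, with the comparison direction chosen to match whichever sign convention the projection uses for depth.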

Below are examples of parallel and perspective projection cubes, both with and without z-buffering. The top and bottom are drawn first, followed by the front and back, and lastly the sides. In the non-z-buffered images, we can see both sides because they are drawn last, whereas for the z-buffered images we see one side, the top, and the front, as we should.



layered transparency using a-buffering

the a-buffer algorithm is an extension of the z-buffer algorithm which allows for layered transparency. Basically, for each pixel, rather than keeping just its current color and depth, you keep a linked list of all colors, depths, and transparency values up to the closest completely opaque pixel. Then, after drawing all object pixels into the a-buffer, for each pixel you traverse its list starting at the most distal fragment and use the alpha-channel equation to update the pixel color in the resulting image (in my implementation, this is accomplished by calling the function cascade_layers()). In order to maintain maximum flexibility, my drawingState3d struct has both z-buffer and a-buffer pointers; if the user is using transparency the a-buffer is allocated, but if the user turns transparency off then the z-buffer is used instead (it's a lot faster).
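
Sketched below is one way the per-pixel fragment list and the back-to-front compositing pass might look in C; the node layout and function name are illustrative stand-ins for what cascade_layers() does, and the list is assumed to be kept sorted nearest-first:

    /* An illustrative a-buffer fragment node: one entry per layer
       at a pixel, linked toward progressively farther fragments. */
    typedef struct ABufNode {
        float inv_z;             /* depth key; larger = closer */
        float alpha;             /* opacity, 1.0 = fully opaque */
        float r, g, b;
        struct ABufNode *next;   /* next-farther fragment */
    } ABufNode;

    /* Composite one pixel's list back-to-front: recurse to the most
       distal fragment first, then blend each layer over the result
       with the alpha-channel equation C = a*Cfrag + (1-a)*Cbehind.
       The caller initializes r, g, b to the background color. */
    static void composite(const ABufNode *n, float *r, float *g, float *b)
    {
        if (n == NULL) return;
        composite(n->next, r, g, b);   /* resolve everything behind first */
        *r = n->alpha * n->r + (1.0f - n->alpha) * *r;
        *g = n->alpha * n->g + (1.0f - n->alpha) * *g;
        *b = n->alpha * n->b + (1.0f - n->alpha) * *b;
    }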

Below are some examples of cubes drawn using transparency, sorted in ascending order of wackiness.


integration with the modeling system

as alluded to in the phase 6 project description, the cube is treated as a 3d graphics primitive (created using the cube() function). Currently, both line() and square() also draw properly under three-dimensional transformation with z- and a-buffering. The attributes which can be applied to these primitives are described in the next section. In general, at render time each primitive gets represented internally as one or more 'mesh' structures (for example, the cube is just six quadrilateral meshes). Meshes are intended to be things easily drawn either as line segments (e.g. four lines for a quad) or as triangles (a quad split on one diagonal). The vertices of each mesh are transformed using the world transformation (gtm * ltm), then surface normals are determined, and then the viewing transformations are applied. When rendering surfaces, we end up with lists of triangles which are then fed to draw3d.h to handle the actual drawing to the image.
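
As a rough illustration, a quad mesh record and its split into triangles might look something like this in C (the field names are guesses, not the actual internal structure):

    typedef struct { double x, y, z; } Point3;

    /* A quadrilateral 'mesh' face, as used six-per-cube. */
    typedef struct {
        Point3 verts[4];   /* vertices, transformed by gtm * ltm at render time */
        Point3 normal;     /* per-face surface normal */
    } QuadMesh;

    /* Drawing as triangles: split the quad on one diagonal, giving
       the two triangles (v0,v1,v2) and (v0,v2,v3) fed to draw3d.h. */
    void quad_to_triangles(const QuadMesh *q, Point3 tri[2][3])
    {
        tri[0][0] = q->verts[0]; tri[0][1] = q->verts[1]; tri[0][2] = q->verts[2];
        tri[1][0] = q->verts[0]; tri[1][1] = q->verts[2]; tri[1][2] = q->verts[3];
    }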

The system is designed with the intent of being easily extensible; for example, circles (really n-gons) should be representable using 'wedge' meshes, where the first point is the center and each subsequent point marks a point along the circumference. Cylinders should then be representable using two wedges and a ring-strip, and so on and so forth (hopefully).

A good example of the current level of system integration is the "snake eyes" animation at the top of the page; each die is made up of six squares (cubes wouldn't work because each face requires a different color map). The die is then instanced twice, with a camera inserted as described in phase 6.


shaders and object attributes

for probably the third or fourth time, I have completely changed the set of attribute nodes which can be applied to objects. The set now includes a render-type, which specifies the highest level of detail at which the object will be rendered (this is superseded by the global render-type set by the user in the rendering parameters); opacity (1.0 is fully opaque, 0.0 is fully invisible); and a suite of shader attributes.

shaders are structures describing the surface appearance of an object. The most basic is just a single color. We can get fancier, though, by adding color-maps, which are images that serve as color look-up tables. Each vertex of a triangle then has associated u,v shader coordinates, and proper interpolation of u and v allows lookup into the color-map across the entire triangle (I accomplished this by realizing that along a given line, u and v vary linearly with z, and z can be determined from the current z-buffering information). Shaders themselves have opacity parameters (multiplied with the object's opacity to determine the opacity at which the object actually renders) and staggering parameters which affect how the color-map looks when repeated. At the attribute level, there are parameters for offsetting and repeating, as well as a shader-mapping parameter which controls which parts of an object actually get color-maps applied to them -- e.g. for a cube, FULL will map to all faces, but SIDES will not map to the top or bottom, and so forth.
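
As a sketch of the lookup itself: an equivalent way to phrase the linear-in-z observation above is to interpolate u/z, v/z, and 1/z across a span and divide per pixel. The names below are hypothetical, not the actual shader.h interface:

    /* A color-map image serving as a look-up table. */
    typedef struct {
        int width, height;
        unsigned int *texels;
    } ColorMap;

    /* Recover perspective-correct (u, v) at one pixel from the
       linearly interpolated quantities, then index the map with
       wrap-around so repeat parameters > 1 tile the image. */
    unsigned int map_lookup(const ColorMap *map,
                            float u_over_z, float v_over_z, float inv_z)
    {
        float u = u_over_z / inv_z;   /* = (u/z) * z */
        float v = v_over_z / inv_z;
        int tx = (int)(u * map->width)  % map->width;
        int ty = (int)(v * map->height) % map->height;
        if (tx < 0) tx += map->width;   /* keep indices in range for u < 0 */
        if (ty < 0) ty += map->height;
        return map->texels[ty * map->width + tx];
    }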

below are some examples of cubes drawn with a Julia set shader, using (a) just the default color, (b) shader mapping TOP, (c) shader mapping SIDES with a 0.5 offset in u, and (d) shader mapping FULL with repeat 3,3 and 0.5 horizontal stagger.


basic illumination

the images presented here were rendered using a fairly primitive illumination model, with an assumed light source pointing in the uniform direction (-1,-3,2). For each facet, a surface normal vector is calculated (always pointing into the viewable hemisphere, for transparency reasons), and the illumination of the facet is given by the dot product of the reversed light vector (1,3,-2) and the surface normal (both normalized, so we get 0.0 to 1.0). This illumination value is then offset by a global illumination term (I' = 0.5 + 0.5*I), and the result is multiplied with the color value using multIntensity() to determine the final color value for each pixel.
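
A minimal sketch of this calculation in C follows; the vector type is illustrative, and the clamp of negative dot products to zero is my assumption about how the 0.0-to-1.0 range is maintained:

    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    static Vec3 normalize(Vec3 v)
    {
        double len = sqrt(dot(v, v));
        Vec3 r = { v.x/len, v.y/len, v.z/len };
        return r;
    }

    /* Directional light pointing along (-1,-3,2): dot the facet normal
       with the reversed vector (1,3,-2), clamp to [0,1] (an assumption),
       and remap by the global term I' = 0.5 + 0.5*I. */
    double facet_illumination(Vec3 normal)
    {
        Vec3 to_light = normalize((Vec3){ 1.0, 3.0, -2.0 });
        double i = dot(normalize(normal), to_light);
        if (i < 0.0) i = 0.0;   /* facets facing away get no direct light */
        return 0.5 + 0.5 * i;   /* offset by global illumination */
    }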


keyable attributes

as one can see from looking at keyframe.h, the following attribute nodes are now keyable -- opacity, map-repeat, and map-offset. This works pretty much the same as keyable transformations or camera zooms.
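
As an illustration of what evaluating a keyed scalar attribute such as opacity amounts to, here is a small linear-interpolation sketch (the structures are hypothetical, not the actual keyframe.h types):

    /* One key for a scalar attribute (e.g. opacity). */
    typedef struct {
        double frame;
        double value;
    } ScalarKey;

    /* Return the attribute value at 'frame', given the two keys
       bracketing it, by clamped linear interpolation. */
    double eval_keyed(const ScalarKey *k0, const ScalarKey *k1, double frame)
    {
        double t = (frame - k0->frame) / (k1->frame - k0->frame);
        if (t < 0.0) t = 0.0;
        if (t > 1.0) t = 1.0;
        return k0->value + t * (k1->value - k0->value);
    }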

The following animation gives an example of keyed opacity and map-offset, using a constant map-repeat of 2,2 and a horizontal stagger of 0.5.



