Rendering
As noted, rendering is the process whereby the scene description (model) is converted into the final 3D image.
There is a very wide range of rendering techniques. The quality of the image and the time needed to render the
image can vary widely depending on a variety of factors, such as the complexity of the scene,
the software algorithms being used, the hardware capabilities and the resolution of the final image.
Wireframe
The least photo-realistic but computationally simplest way to draw a 3D image is to simply draw the straight
lines which make up the edges of the polygons in the scene model. This approach is typically included
in 3D modelling software to give a very quick view of the scene without having to wait for a more realistic
rendering.
Wireframe renderings have the problem that it is sometimes difficult to clearly understand the shape of the objects
in a complex scene. To enhance wireframe renderings, 3D imaging software often applies 'hidden polygon removal' algorithms, which show only the polygons (lines) that are visible from the viewer's point of view. This approach is
slightly slower than a simple wireframe but significantly increases the viewer's perception of the object as a solid.
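As a rough illustration, the sketch below (hypothetical code, not from any particular package) projects each 3D vertex onto the image plane with a simple perspective divide and emits every polygon edge as a 2D line segment - the essence of a wireframe view:

```python
# Minimal wireframe sketch: project each vertex, then list the polygon
# edges as 2D line segments. All names here are illustrative assumptions.

def project(v, focal=1.0):
    """Perspective-project a 3D point (x, y, z), z > 0, onto the image plane."""
    x, y, z = v
    return (focal * x / z, focal * y / z)

def wireframe_edges(polygons):
    """Yield 2D line segments for every edge of every polygon."""
    for poly in polygons:                          # poly: list of 3D vertices
        pts = [project(v) for v in poly]
        for i in range(len(pts)):
            yield pts[i], pts[(i + 1) % len(pts)]  # close the loop

# Example: a single triangle 5 units in front of the camera.
triangle = [(0.0, 1.0, 5.0), (-1.0, -1.0, 5.0), (1.0, -1.0, 5.0)]
for a, b in wireframe_edges([triangle]):
    print(f"line from {a} to {b}")
```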
Shading
The first step in creating photo-realistic 3D images is to create a polygonal model of the objects in the scene -
with sufficient numbers and sizes of polygons to faithfully render the surface curvatures of the objects in the scene.
While complicated, this is not the most difficult part of 3D imaging.
The basic task that must be accomplished in creating a photo-realistic 3D image is to adjust the color of every pixel within the image. Shading (color assignment) of each pixel is done to accurately depict the lighting (reflections and
refractions) and textures of the objects in the scene. This task is performed polygon by polygon, pixel by pixel, and can be a very time consuming process.
Fortunately there are a variety of techniques available, offering tradeoffs between time-to-render and photo-realism
of the resulting image.
Flat Shading (Lambert Shading)
This is the simplest of all shading algorithms. In this approach, a polygon (i.e., all pixels that make up the polygon) is assigned a single color that depends on the angle at which the light from the light source strikes the polygon. When the light strikes the polygon head-on, the color of the polygon gets brighter (approaches the color of the light source). As the angle at which the light strikes the polygon approaches edge-on, the color of the polygon gets darker (approaches black) because no light is reflected. Typically the calculation of the angle between the light and the polygon is made at the center of the polygon.
While simple to draw, flat shading results in an object with a faceted appearance - consisting of adjacent polygons whose
color changes sharply at shared boundaries. This reduces the realism of the image. Increasing the number of
polygons (more polygons of smaller size) which make up the image can compensate for this effect, but at the expense of additional computation.
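A minimal sketch of the idea, with hypothetical names, computes that single polygon color from the cosine of the angle between the surface normal and the light direction:

```python
# Flat (Lambert) shading sketch: one color per polygon, scaled by the cosine
# of the angle between the normal and the light, evaluated once per polygon.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def flat_shade(base_color, normal, light_dir):
    """Return the single color used for every pixel of the polygon."""
    intensity = max(0.0, dot(normalize(normal), normalize(light_dir)))
    return tuple(c * intensity for c in base_color)  # edge-on light -> black

# Head-on light gives the full base color; edge-on light gives black.
print(flat_shade((1.0, 0.2, 0.2), normal=(0, 0, 1), light_dir=(0, 0, 1)))
print(flat_shade((1.0, 0.2, 0.2), normal=(0, 0, 1), light_dir=(1, 0, 0)))
```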
One additional point about polygons is worth mentioning here. When a polygon is added to a scene, one side of the
polygon is considered the 'outside' (the front face) and the other side the 'inside' (the back face) of an object. When applying shading algorithms,
the back face of a polygon is not shaded. The usual technique for determining the front and back faces of a triangle is
to define the vertices of the triangle using the right hand rule: as the fingers of the right hand curl in the direction
of the vertices, the thumb points out of the 'front' side of the polygon.
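A hedged sketch of this convention (all names hypothetical): the cross product of two edges of a counter-clockwise triangle points out of the front face, and a polygon whose front face points away from the viewer can be skipped:

```python
# Winding-order sketch: counter-clockwise vertices (right-hand rule) give a
# front-face normal via the cross product; a simple orthographic back-face
# test is assumed here for brevity.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def face_normal(v0, v1, v2):
    """Un-normalized front-face normal of a counter-clockwise triangle."""
    return cross(sub(v1, v0), sub(v2, v0))

def is_back_face(v0, v1, v2, view_dir=(0, 0, -1)):
    """True when the front face points away from the viewer."""
    n = face_normal(v0, v1, v2)
    return sum(x * y for x, y in zip(n, view_dir)) >= 0

# This triangle faces a viewer looking down -z, so it is not culled.
print(is_back_face((0, 1, 0), (-1, -1, 0), (1, -1, 0)))  # False
```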
Smooth Shading
There are a number of other shading techniques which attempt to smooth the 3D image by assigning a different color
to each pixel within a polygon. The polygons making up the object will still have straight edges but will appear
smoother to the eye.
- Gouraud Shading
With Gouraud shading, a color calculation is made at each vertex of the polygons in the scene. Then, within a polygon,
algorithms are used to blend shading (color) from one vertex to the other. Since all vertices of a polygon are also
the vertices of adjacent polygons, the color assignment at a vertex is the average of the color that would be assigned
to each individual polygon sharing that vertex. This approach gives a significant jump in apparent smoothness over simple Lambert (flat) shading.
While this algorithm is the fastest of all smooth shading algorithms, it is not realistic enough for most 3D image designers. The Phong smooth shading technique, described next, is an industry standard and provides considerably improved, though not outstanding, photo-realism.
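As an illustrative sketch (hypothetical code, with the per-vertex lighting results given directly), the blending step is typically done with barycentric weights:

```python
# Gouraud sketch: lighting is computed once per vertex, then the resulting
# colors are blended across the triangle with barycentric weights. The
# per-vertex colors would come from a Lambert-style calculation using the
# averaged vertex normal; here they are simply given.

def barycentric(p, a, b, c):
    """Barycentric weights of 2D point p in triangle (a, b, c)."""
    d = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    w0 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / d
    w1 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / d
    return w0, w1, 1.0 - w0 - w1

def gouraud_pixel(p, verts, vert_colors):
    """Blend the three vertex colors at pixel p."""
    w = barycentric(p, *verts)
    return tuple(sum(wi * col[k] for wi, col in zip(w, vert_colors))
                 for k in range(3))

verts = [(0, 10), (-10, -10), (10, -10)]
colors = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]    # per-vertex lit colors
print(gouraud_pixel((0, 0), verts, colors))   # smooth blend inside the triangle
```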
- Phong Shading
Phong smooth shading also assigns a color to each pixel of the polygon, but it does so by interpolating the surface normal (and hence the angle of light incidence) across the polygon and recalculating the correct color at each pixel, rather than interpolating the colors themselves as is done in Gouraud shading. This approach provides a much smoother image at the expense of additional computational demand on the PC. Phong shading is perhaps the most widely used shading algorithm, although there are many variations on the algorithm designed to reduce the time to render a scene.
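A rough sketch of per-pixel Phong shading (hypothetical names, using the classic diffuse-plus-specular Phong reflection model) might look like this:

```python
# Phong sketch: interpolate the vertex *normals* with the pixel's barycentric
# weights, renormalize, and re-evaluate the lighting equation per pixel.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_pixel(weights, vert_normals, light_dir, view_dir,
                base_color=(1.0, 1.0, 1.0), shininess=32):
    # Interpolated normal for this pixel.
    n = normalize(tuple(sum(w * nv[k] for w, nv in zip(weights, vert_normals))
                        for k in range(3)))
    l, v = normalize(light_dir), normalize(view_dir)
    diffuse = max(0.0, dot(n, l))
    # Reflect l about n, then compare with the view direction for specular.
    r = tuple(2 * dot(n, l) * n[k] - l[k] for k in range(3))
    specular = max(0.0, dot(r, v)) ** shininess
    return tuple(min(1.0, c * diffuse + specular) for c in base_color)

normals = [(0, 0, 1), (0.3, 0, 0.95), (-0.3, 0, 0.95)]
print(phong_pixel((1/3, 1/3, 1/3), normals, (0, 0, 1), (0, 0, 1)))
```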
- Z-Buffer
Z-buffer uses a memory buffer to store the Z coordinate (depth) of every point to be rendered on each polygon; a pixel is only overwritten when a new point is closer to the viewer than the one already recorded. Several commercial products implement this approach.
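The core test is small enough to sketch directly (hypothetical code):

```python
# Z-buffer sketch: the buffer holds the depth of the nearest surface found so
# far at each pixel; a new fragment is drawn only if it is closer.

WIDTH, HEIGHT = 4, 4
FAR = float("inf")
zbuffer = [[FAR] * WIDTH for _ in range(HEIGHT)]
framebuffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, z, color):
    """Write the fragment only if it is nearer than the stored depth."""
    if z < zbuffer[y][x]:
        zbuffer[y][x] = z
        framebuffer[y][x] = color

plot(1, 1, z=5.0, color=(255, 0, 0))   # red surface at depth 5
plot(1, 1, z=2.0, color=(0, 0, 255))   # nearer blue surface wins
plot(1, 1, z=9.0, color=(0, 255, 0))   # farther green surface is rejected
print(framebuffer[1][1])               # (0, 0, 255)
```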
- Scanline
Unlike Z-buffer, scanline rendering processes one line of pixels at a time. This requires additional bookkeeping, because all the polygons that a given scan line crosses must be identified and processed together. Also used by several commercial programs, it is considered by many to be the best compromise between rendering quality and rendering speed.
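A toy sketch of the idea (hypothetical code, filling a single triangle into rows of text rather than pixels):

```python
# Scanline sketch: for each row, find where every polygon edge crosses that
# row, pair up the crossings, and fill the spans between them.

def x_crossings(poly, y):
    """X coordinates where the polygon's edges cross scan line y."""
    xs = []
    for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]):
        if (y0 <= y < y1) or (y1 <= y < y0):      # edge spans this row
            xs.append(x0 + (y - y0) * (x1 - x0) / (y1 - y0))
    return sorted(xs)

def scanline_fill(poly, height, width):
    for y in range(height):
        xs = x_crossings(poly, y)
        spans = list(zip(xs[0::2], xs[1::2]))     # inside between pairs
        print(''.join('#' if any(l <= x <= r for l, r in spans) else '.'
                      for x in range(width)))

scanline_fill([(1, 1), (8, 1), (4, 6)], height=8, width=10)
```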
Other Smooth Shading Techniques
Both the Gouraud and Phong smooth shading techniques provide improvements over flat shading, but each has
weaknesses - particularly in the handling of light reflections. Two other techniques have been developed which
address those weaknesses.
- Ray Tracing
This rendering algorithm is universally acknowledged as providing the most photo-realistic images. Unfortunately it is also the slowest of all algorithms. This makes it great for producing quality single images, but useless for creating real-time animations.
A light source can be thought of as sending out an infinite number of light rays. These light rays leave the light source and travel in a straight line until they reflect off the surface of an object. Rays of light also change direction when they enter and leave transparent or translucent objects.
With ray tracing, objects in the scene must have light reflection and refraction properties assigned to them. The rendering software then follows the path of the light rays as they move around the scene, with each light ray/object collision contributing to the final rendered view of the scene.
You may be surprised to find that the starting point of the light rays is not the light source but your eye, and the rays are followed until they reach the light source! This approach is used because light sources emit many rays that leave the scene without ever passing through the point of view (where your eye is looking from). Calculating the effects of those rays is of no value because they cannot be seen from the selected point of view. By starting the rays at the point of view (your eye), we're assured of doing computations only on the rays of light which affect the image seen from the point of view.
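To make the idea concrete, here is a deliberately tiny, hypothetical backward ray tracer: one ray per pixel leaves the eye, is tested against a single sphere, and a hit is shaded toward the light (a real tracer would recurse on reflected and refracted rays in the same way):

```python
# Backward ray tracing sketch: eye rays, one sphere, Lambert shading at hits.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def hit_sphere(origin, direction, center, radius):
    """Smallest positive t with origin + t*direction on the sphere, or None."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = dot(direction, direction)
    b = 2 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

eye, light = (0, 0, 0), (5, 5, 0)
center, radius = (0, 0, -5), 1.0
for py in range(2, -3, -1):                   # a tiny 5x5 'image'
    row = ''
    for px in range(-2, 3):
        d = (px * 0.2, py * 0.2, -1.0)        # ray through this pixel
        t = hit_sphere(eye, d, center, radius)
        if t is None:
            row += '.'
        else:
            p = tuple(e + t * di for e, di in zip(eye, d))
            n = tuple((pi - ci) / radius for pi, ci in zip(p, center))
            l = tuple(li - pi for li, pi in zip(light, p))
            ln = math.sqrt(dot(l, l))
            shade = max(0.0, dot(n, tuple(x / ln for x in l)))
            row += '#' if shade > 0.5 else '+'  # crude brightness levels
    print(row)
```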
The most popular ray tracing software, POV-Ray, has an enormous following. The software is free, and you can find galleries of ray traced images all over the web - single images, that is, because the slowness of the algorithm makes it very difficult to use in creating 3D animations.
- Radiosity
A shortfall of ray tracing is that it does not handle indirect illumination very well. When light strikes an object, not all of the light reflects off the object at a single angle - as is assumed by ray tracing algorithms. In real life, light striking an object is scattered in random directions (diffuse reflection). As noted before, the ray tracing assumption that rays of light maintain coherency (they don't break up) results in excellent, photo-realistic images, but ray tracing does not correctly handle diffuse light reflections. Further, this diffuse reflection of light leads to color bleeding - where light striking a surface carries that surface's color into the environment as part of the reflection process.
Radiosity is an algorithm designed to take care of indirect illumination. In a scene, not all parts of the objects are directly hit by a light ray, and with a renderer that cannot provide any form of indirect lighting those parts are rendered black. Ray tracing attempts to handle this by shading those areas with a flat constant value. This work-around can be acceptable for most purposes, but there are some scenes where the approximation is not adequate.
Like ray tracing, radiosity computations are very slow. Radiosity has drawbacks of its own, including poor handling of specular lighting, so it is often used in conjunction with ray tracing to provide the best of both worlds. Even the POV-Ray software now supports radiosity.
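A hedged sketch of the iteration at the heart of radiosity (the patch layout and form factors below are made-up numbers for illustration; real form factors come from the scene geometry):

```python
# Radiosity sketch: the scene is split into patches, and each patch's
# radiosity B_i = E_i + rho_i * sum_j(F_ij * B_j) is solved by iteration.

emission    = [1.0, 0.0, 0.0]          # patch 0 is the light source
reflectance = [0.0, 0.8, 0.5]
# form_factor[i][j]: fraction of energy leaving patch i that reaches patch j
form_factor = [[0.0, 0.4, 0.4],
               [0.4, 0.0, 0.3],
               [0.4, 0.3, 0.0]]

B = emission[:]                        # initial guess: emitted light only
for _ in range(50):                    # iterate the bounces to convergence
    B = [emission[i] + reflectance[i] *
         sum(form_factor[i][j] * B[j] for j in range(3))
         for i in range(3)]

# Patches 1 and 2 emit nothing, yet end up lit by bounced light.
print([round(b, 3) for b in B])
```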