In the final part of this series, John Knight, Paul Gordon and Philip Morris
take our shaded object and subject it to illumination. They consider the
state-of-the-art of computer graphics, hardware platforms, and the future.

The techniques required to produce realistic images demand an understanding of the physics of light and of the workings of the human eye.

Physics of Light
The eye's lens is flexible and is used to focus
incoming light rays onto the retina. The retina
contains rod and cone receptors. The rods are sensitive in low light but cannot resolve fine detail. The cones provide colour/brightness resolution and fine detail. The brain then builds its image from the nerve impulses fired by the rods and cones.
Sensitivity to brightness is thought to be logarithmic
and the eye adapts to the average brightness
level. Hence an image on a black screen is perceived as a darker view of the object than the same image on a white screen. Brightness response also tends to overshoot at the edges of regions of constant intensity: when the intensity changes abruptly, the surface appears brighter or darker depending on the direction of the change, so an area of constant brightness is registered as one of varying intensity by the brain. This effect is seen
particularly when a polygon-based image is shown.
The polygon boundaries produce this effect and so
the size of each polygon must be decreased to allow
the eye to register smooth surface brightness.
Now we will consider the illumination effects. When light energy falls on a surface, some is absorbed, some is reflected and some is transmitted, in relative quantities that depend on the material composition of the surface and on the wavelength of the light. It is the reflected or transmitted light that makes an object visible.
If white light illuminates an object, then it appears:
(a) black if all light is absorbed
(b) white if none is absorbed
(c) grey if equal amounts of all wavelengths are absorbed
(d) coloured if certain wavelengths are absorbed more than others
The characteristics of the object also determine the reflectance properties. The intensity of a pixel on an illuminated object is calculated by considering the angle of the incident light vector relative to the surface normal.
Ambient light from a distributed light source is an additive effect on the overall illumination of an object and is thus summed over all angles of incidence as a constant term in most graphics illumination models. Diffuse illumination is caused by scattering of the light from a given source and gives the impression of a dull or matt finish. Whereas diffusely reflected
light is scattered equally in all directions, specular
reflectance is produced either by a point source of
light or direct reflection from a 'shiny' outer sur-
face. Such highlights move with lighting and ob-
server and also tend to exhibit the properties of the
incident light rather than the reflected rays (white
light on a blue object produces a white highlight).
Specular reflectance is clearly directional. Surfaces
may be transparent, translucent or opaque which
alters the reflectance characteristic due to more or
less light being transmitted.

Determining the surface normal
and reflectance vector
The direction of the surface normal gives a measure of the curvature of the surface and so allows estimation of the direction of specular reflectance. The
outward normal is calculated for each polygon
using vector mathematics. The intensity of each point in the polygon can also be found for a diffuse light source (or parallel light rays from infinity).

Models of Illumination
In last month's article, we showed how to write a
polygon scan converter which can produce a con-
stant shaded view of our object. The resulting
display reveals the underlying polygon mesh to the
viewer. In order to achieve a much greater degree of
realism, we can use a shading technique to produce
a smooth representation of our object even with
relatively few polygons.
Two examples of such techniques are the simple
Gouraud and oft-used Phong shading techniques.
Both utilise the concept of surface normals dis-
cussed in the previous article's panel on vector
operations. The linearly interpolating scan con-
verter is also used to provide values necessary for
calculation of intensity at each point in the poly-
gon. Fig 1 shows a comparison between Phong and
Gouraud shading (the P and C respectively) with a
constant-shaded W for reference.

Gouraud shading
In this technique, all that is required is the intensity
of light at each vertex of the polygon. For each
vertex, the vector product is used on the adjoining
polygon edges to compute the average surface nor-
mal. The dot product is used with the normal vector
and the light source vector to determine the angle
subtended. The angle is used directly to provide a
value of intensity at the vertex. The mathematics of
the calculations is thus


Ipoint = Iambient Kambient + Iincident Kdiffuse (N.L)

which is translated into the following code frag-
ment:
  /* Gouraud shading code fragment */
  intensity = dot_product(&normal[vertex], &light_vector);
  if (intensity < 0)
  {
      intensity = 0;
  }
  else
  {
      intensity = ka + (kd * intensity);
  }
The vertex intensities are passed into the scan
converter which is expanded to linearly interpolate
them across the polygon's surface. When each pixel
is plotted, the current value of interpolated inten-
sity is used to determine its colour. Although this
technique is inherently fast as the intensity is only
calculated once for each vertex, the resultant shad-
ing always produces a matt or dull finish (Fig 2).
Physically speaking, Gouraud shading only mod-
els ambient and diffuse reflection. A more versatile
and thorough technique is Phong shading.
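The interpolation step the scan converter performs can be sketched as follows; this is a minimal illustration using a row buffer in place of the screen, not the article's actual converter code:

```c
/* Linearly interpolate vertex intensity across one scanline span,
   writing a palette index per pixel into `row`, which stands in
   for the frame buffer. i0 and i1 are the intensities (0..1)
   already computed at the two ends of the span. */
static void shade_span(int x0, int x1, float i0, float i1,
                       unsigned char *row)
{
    int x;
    float di = (x1 > x0) ? (i1 - i0) / (float)(x1 - x0) : 0.0f;
    float i = i0;
    for (x = x0; x <= x1; x++) {
        row[x] = (unsigned char)(i * 255.0f);  /* palette index */
        i += di;
    }
}
```

Only the ends of each span need a full intensity calculation; every pixel between them costs a single addition.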

The Phong method
The so-called Phong illumination model, conceived by Bui-Tuong Phong in 1975, attempts to model
specular reflection in addition to the diffuse and
ambient components. As was discussed in the
section 'Physics of Light', the intensity of reflected
light from a surface can be computed by summing
the separate contributions from ambient, diffuse
and specular terms thus:


Iphong = Iambient + Idiffuse + Ispecular

The formation of the terms is now detailed.
Ambient light is light reflected from an object, derived from multiple reflections from other objects and the surrounding environment (Fig 2). It is simple to model and can be included as a constant offset term in the intensity calculation.
A term modelling diffuse light (the proportion of light absorbed at the surface that is re-emitted) is based on the physics of absorption and re-emission, which dictates that the light is emitted in all directions, making the amount of reflection perceived independent of the viewer's position (Fig 3).
The amount of reflected light is given by Lam-
bert's law, thus

Idiffuse = Iincident Kdiffuse cos θ

where the cosine term can be derived using the vector dot product of the light vector, L, and the surface normal, N, thus:

Idiffuse = Iincident Kdiffuse (N.L)

Specular light is light seen when viewing a
surface along its mirror direction. A surface's mir-
ror direction is the direction in which light from the
light source would reflect from a perfect mirror. A
perfect mirror would only reflect specular light
along this direction; in practice a specular envelope is formed where specular reflections can be seen over a range about the mirror direction (Fig 4).
Phong modelled the specular envelope as cos^n θ, where θ is the angular difference between the mirror and viewer directions, given by the dot product of the mirror direction, R, and the viewing direction, V, thus:

Ispecular = Iincident Kspecular (R.V)^n

The power term, n, alters the width of the envelope: for glossy surfaces it takes a larger value, typically 20. Having to calculate the mirror direction for each surface is costly and can be avoided by assuming the viewer and light source are positioned at infinity.
If this is the case, the light vector L and viewing vector V are constant over the scene, allowing a vector, H, to be calculated giving the orientation a surface normal would need to achieve mirror reflection, thus:

H = (L + V) / 2


The angular difference between H and the surface normal N is used instead of the term involving R and V, making the calculation a lot simpler. The angles returned by the new equation are twice as large as those from the old, producing a different envelope width, which can be compensated for by using larger values of the power term n.
The complete model is formed from a summation of these 
terms weighted by relevant factors to determine the proportion
 of each type of reflected light from the object thus:


Ipoint = Iambient Kambient + Iincident Kdiffuse (N.L) + Iincident Kspecular (N.H)^n

Our data structure allows the diffuse and specular terms 
to be included in the object definition, providing a degree 
of flexibility in the object surface characteristic. 
Generally, the weighting factors should sum to unity; if they do not, they will produce a saturated, or over-white, picture.
Fig 5 shows the effects 
of the illumination constants on the rendered image (the P has 
a large diffuse component, the C and W have large specular 
components, the C with small highlight power n and the W 
with large
n). The model was coded in our function Phong_illumination 
(Fig 6) and called from the enhanced polygon scan converter with the 
surface normal directions interpolated from the polygon corners. The 
result was multiplied by 256 to provide an index into
the VGA palette, to give the colour used when the pixel is plotted.

Palette Definition
To produce a smooth shaded object we must have a good range of colours 
to display on screen. Unfortunately, the production of the colour set,
 or palette, is a hardware-dependent process. This section describes the
use of the routines outlined in Article 1 (July 1992 issue) to produce a suitable palette for use with our
 illumination model.
All that is required to produce a palette containing the required colours is to place the necessary 8-bit RGB values in the three colour arrays new_red[], new_green[] and new_blue[]. Our renderer uses a function, make_range, which can set any section of the palette to contain colours from black, through a specified colour, to white using linear interpolation. The position at which the palette increments change from black-to-colour to colour-to-white can also be specified.
This technique allows for global alteration of object colour and proportion
 of specular reflection in addition to the specular weighting factor 
specified in the illumination model.
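The article does not list make_range itself, so the following is only a sketch consistent with the description above; the parameter order, the 8-bit component range and the interpretation of the final argument are all assumptions:

```c
unsigned char new_red[256], new_green[256], new_blue[256];

/* Fill palette entries [base, base+count) with a ramp running
   black -> (r,g,b) -> white. `split` (0..1) sets where the
   crossover from black-to-colour to colour-to-white occurs. */
static void make_range(int r, int g, int b,
                       int base, int count, float split)
{
    int i, mid = (int)(count * split);
    for (i = 0; i < count; i++) {
        float t;
        if (i < mid) {                 /* black to colour */
            t = (float)i / (float)mid;
            new_red[base+i]   = (unsigned char)(r * t);
            new_green[base+i] = (unsigned char)(g * t);
            new_blue[base+i]  = (unsigned char)(b * t);
        } else {                       /* colour to white */
            t = (float)(i - mid) / (float)(count - mid);
            new_red[base+i]   = (unsigned char)(r + (255 - r) * t);
            new_green[base+i] = (unsigned char)(g + (255 - g) * t);
            new_blue[base+i]  = (unsigned char)(b + (255 - b) * t);
        }
    }
}
```

The arrays would then be written to the hardware palette using the register routines from Article 1.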

Use of vertex normals
Both shading techniques utilise the normals to polygon vertices to 
calculate the final intensity level of the pixels plotted. There is a
problem when the world data structure contains coincident points, so that the polygon degenerates to a triangle or line.
The normal calculated from two edges when one
is formed from coincident points will be zero and
thus the polygon will have a dark corner! The
algorithm used in our renderer overcomes this
problem by using the next edge if one of zero length
is detected.
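A minimal version of that edge-skipping test might look like this; the polygon representation and function names are illustrative, not the renderer's own:

```c
typedef struct { float x, y, z; } Vec3;

static float edge_len2(Vec3 a, Vec3 b)
{
    float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
    return dx*dx + dy*dy + dz*dz;
}

/* Walk forward from vertex i until an edge of non-zero length is
   found, so coincident points never feed a zero vector into the
   cross product. Returns the index of a usable neighbour. */
static int next_distinct(Vec3 *poly, int n, int i)
{
    int j = (i + 1) % n;
    while (j != i && edge_len2(poly[i], poly[j]) == 0.0f)
        j = (j + 1) % n;
    return j;
}
```

The vertex-normal routine then uses the returned index in place of the immediate neighbour, avoiding the dark-corner artefact.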

Colours
Our renderer uses the entire VGA palette to provide
a smooth transition from black to white for use in
the illumination model. Our example world struc-
ture lends itself to the use of three colours, one for
each letter. General research suggests that the eye is
sensitive to approximately 64 grey levels, allowing
us to partition the 256-colour palette into four
bands of 64 colours and still provide a reasonable
range. The make_range function can be used to
program these bands, for example setting palette entries 64 to 127 to form a red set thus:

make_range(256,0,0,64,64,0.5);

The intensity calculated from the illu-
mination model can be multiplied by 64
rather than 256 to produce an index into
our smaller colour bands, and the scan
converter can be modified to use the
object number to determine a base pal-
ette index.
With these simple modifications, the
appearance of the rendered image can be
greatly enhanced (Fig 7).
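The index computation can be sketched thus (a minimal illustration; the clamping is an added safety assumption rather than part of the article's converter):

```c
/* Map an intensity in [0,1] and an object number (0..3) to an index
   in the 256-entry palette split into four 64-colour bands. */
static int palette_index(int object, float intensity)
{
    int i = (int)(intensity * 63.0f);   /* 0..63 within the band */
    if (i < 0)  i = 0;
    if (i > 63) i = 63;
    return object * 64 + i;
}
```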

Extended illumination model
For high-quality images, the 256-colour
limit imposed by VGA and many other
hardware display drivers is inadequate.
We can extend our illumination model to provide
24-bit true colour pixel values.
We provide a separate illumination term for
each of the red, green and blue colour planes. This
means expanding our reflection weighting terms to a total of nine: an ambient, diffuse and specular
term for each primary colour. The illumination
function is modified to produce intensity values for
the three primaries by replicating the complete
equation using the colour-specific weightings. This
produces a set of equations thus:

Ired = Iambient Ka,red + Iincident Kd,red (N.L) + Iincident Ks,red (N.H)^n
Igreen = Iambient Ka,green + Iincident Kd,green (N.L) + Iincident Ks,green (N.H)^n
Iblue = Iambient Ka,blue + Iincident Kd,blue (N.L) + Iincident Ks,blue (N.H)^n

The illumination function can be made to return
the combined 24-bit pixel value for direct display,
hardware permitting!
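As a sketch of that final packing step (the 0x00RRGGBB layout is an assumption about the display hardware, not something the article specifies):

```c
/* Pack three per-channel intensities (each 0..1) into a 24-bit
   0x00RRGGBB pixel value. */
static unsigned long pack_rgb(float r, float g, float b)
{
    unsigned long ri = (unsigned long)(r * 255.0f);
    unsigned long gi = (unsigned long)(g * 255.0f);
    unsigned long bi = (unsigned long)(b * 255.0f);
    return (ri << 16) | (gi << 8) | bi;
}
```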

Other Issues
This series of articles has provided an insight into
producing a 3D rendering system which could be
implemented in C on a PC. Most of the concepts
have been described and code fragments provided
where appropriate. Now that we have a complete
3D graphics toolset, we will look briefly at other
issues involved in the development of a 3D graph-
ics system.

Improved Realism
Other embellishments which can be added to the
rendering system include texture mapping, anti-aliasing, transparency and shadows. These
topics could themselves fill an issue of PCW, so
they will be described only briefly here:
Texture Mapping The addition of surface detail
can be achieved by mapping a separately specified
pattern onto the smooth polygon surfaces. This
pattern can be specified in 2D or 3D and can be used
to affect the colour or the actual surface shape. The
pattern can also be described as a 'real' pattern (a
scanned bitmap image, for example) or as a math-
ematical function (perhaps with a random element
to simulate a natural pattern like wood). Certain
combinations of these texture mapping techniques
are common: 2D mapping of a real image simulates
the 'baked-bean can label' effect where a paper
label (or magazine cover!) is wrapped around a 3D
object (the tin can); a 3D functionally specified map
is used for 'material mapping' where the object
appears to be carved from a solid substance; 2D
surface shape mapping is known as 'bump' mapping and is achieved by the pattern specifying a controlled perturbation of the surface normals generated by the Phong shading scan converter (a 2D sinusoidal perturbation gives a 'dimpled' effect). Texture mapping of any form relies on a lookup from the screen x, y and z coordinates to a 'texture domain' (x, y) or (x, y, z), and this lookup can be incorporated into the scan converter. Fig 8 shows 2D texture (the body) and bump (the spout and handle) mapping.

Fig 6: the Phong_illumination function

float Phong_illumination(normal)
Vertex_Type *normal;
{
    float diffuse, specular;

    diffuse = dot_product(normal, &light_vector);
    if (diffuse < 0)
    {
        diffuse = 0;
    }
    specular = dot_product(normal, &H_vector);
    if (specular <= 0)
    {
        specular = 0;
    }
    else
    {
        specular = pow(specular, highlight);
    }
    return (ka + (kd * diffuse) + (ks * specular)); /* return value 0 to 1 */
}
Anti-aliasing Uses various techniques to overcome the discrete nature of the raster format and simulate a continuous line or surface. This can be carried out at render time or by using a post-filtering technique where the image is rendered and then the aliasing effects removed from it.
 Transparency The illumination model is extended
 to allow description of the transmitted light on the
 next incident surface. This also requires extension
 of the hidden surface algorithm as polygons behind
 a transparent object are no longer hidden! For true
 transparency effects, account should also be taken
 of the refraction effects of the material from which
 each object is constructed.
Shadows Another extension of our illumination model projects light vectors to form dense umbra

shadows from point sources of light. Shadow effects contribute considerably to the realism of computer graphics, and can be implemented using a form of the z-buffer: a second z-buffer is created holding z depth with respect to a given light source. When a polygon is drawn, if there is another polygon between it and the light source (indicated by an entry in the shadow z-buffer), then reduced illumination is used (i.e. the polygon being drawn is in the 'shade' of the polygon in the z-buffer).
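The per-pixel shadow test can be sketched as follows; the shade factor and the bias term are illustrative assumptions, not figures from the article:

```c
/* Sketch of the shadow test: shadow_z holds, for each point as seen
   from the light, the depth of the nearest polygon. A point is lit
   only if nothing sits between it and the light source. */
static float shade_factor(float point_depth_from_light,
                          float shadow_z_entry)
{
    const float bias = 0.01f;   /* avoids self-shadowing artefacts */
    if (point_depth_from_light > shadow_z_entry + bias)
        return 0.4f;            /* in shade: reduced illumination */
    return 1.0f;                /* fully lit */
}
```

The returned factor would simply scale the intensity produced by the illumination model.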
Having expanded the ideas of solid object modelling, it must not be
forgotten that 3D scenes can be rendered using alternative techniques. 
Two of the most common are Ray-Tracing and Radiosity.

Ray Tracing
Ray-Tracing uses a similar data structure representation to the rendering system we have described, but instead of generating pixels for each point on an object it relies on the physics of illumination: it is assumed that a single ray of light passes through every pixel on the screen into the eye. Using simple geometry, it is possible to follow the path of this ray until it strikes an object in the scene. Where it strikes an object, it is assumed that the ray leaving the object (i.e. the one we are tracing) has come from two component rays: one reflected off the surface and one transmitted through the surface. Thus at the intersection of the ray with an object, two rays are followed until they strike objects and so on. In this way, a single ray is traced recursively back to all its component rays. The number of intersections traced for any one ray is called the 'depth' of the ray trace.
At each intersection, the light contribution is calculated using an illumination model which depends on the colours of the component rays and the colour of the surface. Eventually, the colour of a single pixel is calculated from all the component ray contributions. This process is repeated for all screen pixels.
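Stripped of the intersection and colour mathematics, the recursive control structure can be sketched as follows; counting rays rather than shading them also shows why the depth limit matters, since the ray count grows exponentially:

```c
/* Depth-limited recursive trace, reduced to its skeleton: each hit
   spawns one reflected and one transmitted ray, and the depth
   counter bounds the recursion. The scene and the illumination
   model are deliberately stubbed out. */
static int rays_traced(int depth)
{
    if (depth == 0)
        return 1;               /* this ray terminates here */
    /* one ray, plus its reflected and transmitted components */
    return 1 + rays_traced(depth - 1) + rays_traced(depth - 1);
}
```

A trace of depth d follows 2^(d+1) - 1 rays per pixel in the worst case.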
The calculation of ray intersections with objects is very computationally intensive, which is why ray-tracing is notoriously slow. However, since
each ray through a pixel is independent, the algo-
rithm is ideal for parallelisation, with each proces-
sor tracing rays for a subset of the screen pixels. It
is also worth noting that calculating an intersection
with a sphere is very simple, and hence ray-traced
images can be easily recognised by their high level
of reflection and their abundance of spheres!

Radiosity
The radiosity technique of 3D rendering adopts the
concept of a 'patch'. At its simplest, a patch could
be a polygon from a data structure similar to that
used in this series. The first part of the radiosity
algorithm calculates a 'matrix of interaction' be-
tween all patches, thus an NxN matrix is con-
structed (where N is the number of patches in the
scene) indicating the interaction between each
patch and all others in the scene. The interaction is
defined as the amount of illumination of one patch
incident on another, and is clearly affected by the
properties of the two patches and their orientations
with respect to each other and the light source.
One such matrix is calculated for each light source, and once calculated the scene can be rendered from any viewpoint. Thus, although the cost of constructing the matrices is L·N² (where L is the number of light sources and N the number of patches), it is only incurred once for a given lighting setup, and so any view of the scene can be rendered rapidly by referring to the matrix.
Radiosity provides a 'truer' scene than ray tracing because it correctly models the interaction of different-coloured surfaces: for example, imagine rendering a room with a white ceiling and red walls. Where the walls meet the ceiling you would expect to see some pink colouring on the ceiling. Radiosity will give this effect; ray tracing will not.

Graphics support
With the ever-increasing use of high-quality 3D
graphics (especially for visualisation of large data
sets), commercial software packages are becoming
widely available. PHIGS is a high-level
language-type system (designed for interfacing to
C) which provides primitives for rotations, scalings and shading, enabling a 3D rendering system to be created with ease. Similar functionality is often provided by good compilers: Borland's Turbo C has an extensive graphics library.
Applications packages which provide a com-
plete rendering system are also available. AVS is a
workstation-based package which provides full
menu-driven support for all types of rendering,
designed for visualising simulation data sets.
Even at the PC level there are many different
graphics hardware systems available. VGA,
SuperVGA, XGA are all PC graphics 'standards',
not to mention more specialised systems such as
IBM's 8514. The choice becomes wider once the
workstation world is considered; the IBM RS/6000
has many graphics options right up to the $10,000
GTO graphics adapter offering full 24-bit 'true'
colour. Sun workstations have a similar range of
options for graphics support. Companies like Sili-
con Graphics specialise in high-performance graph-
ics workstations with facilities such as hardware
z-buffering and Gouraud shaded polygon drawing.
For real-time 3D graphics a dedicated hardware system is usually used, and Inmos, for example, specialises in supplying graphics chips: its G300 colour look-up/timing generator is an industry standard. Other integrated circuit manufacturers
(like Texas Instruments) also supply such devices.

Compromises
In the world of 3D graphics, compromises are still
necessary as the processing power to create realis-
tic real-time animated full-colour pictures is not
currently available at reasonable cost. Therefore in
graphics, as in any computer application, trade-offs
must be made. The classic juggling act in computer graphics can be seen as a five-way competition:
Speed vs Complexity of image vs Cost vs Colours vs
Memory.

Summary
In this series we have introduced a range of tools for
producing realistic 3D graphics using an ordinary
PC. The techniques are widely applied in the graph-
ics we see on TV, in adverts, at the cinema and in
computer games, simulations and CAD packages.

References
Details of most of the techniques discussed in this series
 can be found in a number of graphics text books and journals.

AH Watt
Fundamentals of Three-Dimensional Computer Graphics
Addison-Wesley, 1990

WM Newman & RF Sproull
Principles of Interactive Computer Graphics
McGraw-Hill International (2nd edition), 1979

J McGregor & AH Watt
Advanced Programming Techniques for the BBC Micro
Addison-Wesley, 1983

R Salmon & M Slater
Computer Graphics: Systems and Concepts
Addison-Wesley, 1989

DF Rogers
Procedural Elements for Computer Graphics
McGraw-Hill International, 1988

JM Lane, LC Carpenter, T Whitted & JF Blinn
'Scan Line Methods for Displaying Parametrically Defined Surfaces'
Communications of the ACM, Vol 23, No 1, January 1980

IE Sutherland, RF Sproull & RA Schumacker
'A Characterization of Ten Hidden-Surface Algorithms'
Computing Surveys, Vol 6, No 1, March 1974

DF Rogers & JA Adams
Mathematical Elements for Computer Graphics
McGraw-Hill International (2nd edition), 1990

Image Processing Magazine, 1990-1992
