26 2. The Graphics Rendering Pipeline
each object passed to the geometry stage, these two matrices are usually
multiplied together into a single matrix. In the geometry stage the vertices
and normals of the object are transformed with this concatenated matrix,
putting the object into eye space. Then shading at the vertices is computed,
using material and light source properties. Projection is then performed,
transforming the object into a unit cube’s space that represents what the
eye sees. All primitives outside the cube are discarded. All primitives
intersecting this unit cube are clipped against the cube in order to obtain
a set of primitives that lies entirely inside the unit cube. The vertices then
are mapped into the window on the screen. After all these per-polygon
operations have been performed, the resulting data is passed on to the
rasterizer—the final major stage in the pipeline.
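The per-object and per-vertex arithmetic described above can be sketched as follows. The function names and the row-major 4x4 matrix layout are illustrative assumptions for this sketch, not the book's notation or any particular API:

```python
# Sketch of the geometry stage's per-vertex math: concatenate the model and
# view matrices once per object, transform each vertex, perform the
# perspective divide into the unit cube, and map to window coordinates.

def mat_mul(a, b):
    """Multiply two 4x4 matrices (lists of rows)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(m, v):
    """Apply a 4x4 matrix to a homogeneous point (x, y, z, w)."""
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def model_view(model, view):
    """Concatenate the model and view transforms into one matrix,
    so every vertex of the object pays for only one transform."""
    return mat_mul(view, model)

def perspective_divide(v):
    """Divide by w so the visible volume becomes the unit cube."""
    x, y, z, w = v
    return (x / w, y / w, z / w)

def to_window(ndc, width, height):
    """Screen mapping: unit-cube x, y in [-1, 1] to window pixels."""
    x, y, _ = ndc
    return ((x + 1.0) * 0.5 * width, (y + 1.0) * 0.5 * height)
```

Concatenating the two matrices once per object means the matrix-matrix multiply is paid once, while each vertex needs only a single matrix-vector transform.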
Rasterizer
In this stage, all primitives are rasterized, i.e., converted into pixels in the
window. Each visible line and triangle in each object enters the rasterizer in
screen space, ready to convert. Those triangles that have been associated
with a texture are rendered with that texture (image) applied to them.
Visibility is resolved via the Z-buffer algorithm, along with optional alpha
and stencil tests. Each object is processed in turn, and the final image is
then displayed on the screen.
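The per-pixel depth comparison at the heart of the Z-buffer algorithm can be sketched as follows; the fragment representation and buffer layout here are simplifying assumptions, not any particular graphics API:

```python
# Sketch of Z-buffer visibility resolution: each incoming fragment is
# written only if it is closer (smaller z) than whatever has already been
# drawn at that pixel. The optional alpha and stencil tests mentioned in
# the text would be additional per-fragment tests and are omitted here.

def render(fragments, width, height, far=1.0):
    """Resolve visibility per pixel from a list of (x, y, z, color)
    fragments; the draw order of the fragments does not matter."""
    depth = [[far] * width for _ in range(height)]
    color = [[None] * width for _ in range(height)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:      # depth test: nearer fragment wins
            depth[y][x] = z
            color[y][x] = c
    return color
```

Because each pixel keeps the depth of its nearest fragment so far, primitives can be submitted in any order and the nearest surface still ends up visible.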
Conclusion
This pipeline resulted from decades of API and graphics hardware evolution
targeted to real-time rendering applications. It is important to note that
this is not the only possible rendering pipeline; offline rendering pipelines
have undergone different evolutionary paths. Rendering for film production
is most commonly done with micropolygon pipelines [196, 1236]. Academic
research and predictive rendering applications such as architectural previ-
sualization usually employ ray tracing renderers (see Section 9.8.2).
For many years, the only way for application developers to use the process described here was through a fixed-function pipeline defined by the
graphics API in use. The fixed-function pipeline is so named because the
graphics hardware that implements it consists of elements that cannot be
programmed in a flexible way. Various parts of the pipeline can be set to
different states, e.g., Z-buffer testing can be turned on or off, but there is
no ability to write programs to control the order in which functions are
applied at various stages. The latest (and probably last) example of a
fixed-function machine is Nintendo’s Wii. Programmable GPUs make it
possible to determine exactly what operations are applied in various sub-
stages throughout the pipeline. While studying the fixed-function pipeline
provides a reasonable introduction to some basic principles, most new development is aimed at programmable GPUs. This programmability is the
default assumption for this third edition of the book, as it is the modern
way to take full advantage of the GPU.
Further Reading and Resources
Blinn’s book A Trip Down the Graphics Pipeline [105] is an older book
about writing a software renderer from scratch, but is a good resource for
learning about some of the subtleties of implementing a rendering pipeline.
For the fixed-function pipeline, the venerable (yet frequently updated)
OpenGL Programming Guide (a.k.a., the “Red Book”) [969] provides a
thorough description of the fixed-function pipeline and algorithms related
to its use. Our book’s website, http://www.realtimerendering.com, gives
links to a variety of rendering engine implementations.