Foreword
This article will analyse vertex processing as performed by current and future graphics processing units (abbreviated as GPU). It includes a brief introduction to the traditional processing that vertices undergo, and discusses both the fixed function pipeline and programmable Vertex Shaders. A basic understanding of mathematics and the 3D pipeline will help you follow this article, but it’s not strictly necessary, and don’t be scared - remember you can always do a web search for any word or concept you don’t fully understand, or simply ask about it in our forums. The contents of this article are based on the public DX8 SDK, public/leaked DX9 presentation(s) and the public OpenGL 2.0 standard specifications. After reading this article you should have an understanding of the basic processing that vertices might undergo: Fixed Function Transform and Lighting (abbreviated as TnL) pipelines and the various Vertex Shader (abbreviated as VS) versions (1.0, 1.1, 2.0, …). Higher Order Surfaces (abbreviated as HOS) and Displacement Mapping are not covered in depth. Any (future) vertex processing techniques, implementations or extensions mentioned are offered as discussion topics; in no way should they be seen as indications of future or current hardware capabilities or implementations. Copyright and Patent Rights remain with Beyond3D and the Author(s).
Forget Triangles, Vertices Rule!
We all play games, but it’s easy to forget how much mathematics and processing power is involved to actually bring you the ultimate realistic 3D game play experience. That fabulous well-endowed female you just shot while playing your first person shooter is actually nothing more than a collection of vertices, polygons, textures, blending and a bunch of other boring math operations. Most of you have undoubtedly heard that a triangle is the basic building block of 3D graphics. While this holds true for a large part of the graphics pipeline, it’s not exactly true for the part of the pipeline that is being discussed in this article. The basic building block we need is a single, simple, boring (?), point in 3D space.
Defining the position of a point in 3D space is, traditionally, done using a Cartesian coordinate system. A coordinate system defines an origin and three main axes: left/right or X, up/down or Y, forward/backward or Z. Along each axis a length relative to the origin can be used to uniquely define a point. For example, two to the right, one up and one forward defines one unique point in 3D space. This concept is illustrated by the figure below:
Point in 3D Space
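To make the idea concrete, here is a minimal sketch in Python of the "two to the right, one up and one forward" example, representing a point simply as an (x, y, z) tuple. This is purely illustrative; real pipelines pack vertex data into GPU-friendly buffers, but the idea is the same.

```python
# The origin of the coordinate system, and a point defined relative to it.
origin = (0.0, 0.0, 0.0)

# "Two to the right, one up and one forward": X = 2, Y = 1, Z = 1.
point = (2.0, 1.0, 1.0)

# Each component is simply the distance from the origin along that axis.
x, y, z = point
print(x, y, z)  # prints: 2.0 1.0 1.0
```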
Obviously we can also define multiple points in this 3D space. Two such points can be connected using a line (also known as a vector), and three such points can be connected to form a triangle, which brings us to the basic building element of the rest of the 3D pipeline. A point in 3D space is usually referred to as a “vertex”, and you need three “vertices” to build a triangle in 3D space:
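The two ideas above can be sketched in a few lines: the vector connecting two points is found by subtracting their coordinates component-wise, and three vertices together define a triangle. The function name and vertex values here are hypothetical, chosen only for illustration.

```python
def vector(a, b):
    """Vector from point a to point b: subtract coordinates component-wise."""
    return (b[0] - a[0], b[1] - a[1], b[2] - a[2])

# Three vertices are all it takes to define a triangle in 3D space.
v0 = (0.0, 0.0, 0.0)
v1 = (1.0, 0.0, 0.0)
v2 = (0.0, 1.0, 0.0)
triangle = (v0, v1, v2)

# Two edges of the triangle, as vectors from v0.
edge1 = vector(v0, v1)  # (1.0, 0.0, 0.0)
edge2 = vector(v0, v2)  # (0.0, 1.0, 0.0)
```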
Triangles are social creatures; they like to show up in groups. Using a set of vertices that form multiple triangles it’s possible to start building 3D objects. For example, we need two triangles, or six vertices, to form a quad. A cube has six sides, and each side is a quad, so you need 2*6 = 12 triangles to define a cube in space, and for 12 triangles you need 36 vertices in 3D space (obviously not all of these vertices are different; there are only 8 unique vertices in a cube).
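The arithmetic above can be written out directly. This sketch counts the triangles and vertex-list entries of a cube, and then enumerates its unique corners: for a unit cube, every combination of 0 and 1 across the three axes, which gives exactly the 8 unique vertices mentioned.

```python
from itertools import product

# A cube: 6 quad faces, 2 triangles per quad, 3 vertices listed per triangle.
faces = 6
triangles = faces * 2          # 12 triangles
vertex_count = triangles * 3   # 36 entries in the triangle list

# Many of those 36 entries name the same corner. A unit cube has only
# 8 unique corner positions: every (x, y, z) combination of 0 and 1.
unique_corners = list(product((0.0, 1.0), repeat=3))

print(triangles, vertex_count, len(unique_corners))  # prints: 12 36 8
```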
And anyone who has played with Lego™ knows that once you have blocks you can build anything you want. If you can build a cube you can build anything. For example, the model below consists of thousands of triangles, and these triangles are built from a large set of vertices:
So now that we know that vertices are at the root of any object in 3D space, we can concentrate on just one such basic element, because anything we do to one single vertex in 3D space we can do to all the others. And if we change the vertices we also change the triangles, and if we change the triangles we change the object that they form!
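This "change every vertex, change the object" idea is exactly what the rest of the pipeline builds on. As a minimal sketch (with a hypothetical `translate` helper, not any real API), moving an object is nothing more than applying the same operation to each of its vertices:

```python
def translate(vertex, dx, dy, dz):
    """Move a single vertex by the given offsets along X, Y and Z."""
    x, y, z = vertex
    return (x + dx, y + dy, z + dz)

# A triangle: three vertices in 3D space.
triangle = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]

# Applying the same operation to every vertex moves the whole triangle
# one unit along the X axis.
moved = [translate(v, 1.0, 0.0, 0.0) for v in triangle]
print(moved)  # prints: [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
```

This is, in essence, what the Transform stage of the pipeline does, only with matrices instead of a hand-written helper.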