VERC: Vertex and Pixel Shaders

Per-pixel dynamic lighting. Real-time bumpmapping. Cartoon shading. Thermal goggles.

The games of today and tomorrow have unprecedented power at their fingertips. Features like the four above are becoming accessible (and in some cases already have become accessible) to every next-generation game. Why couldn't this have happened last generation? I'll tell you: the programmable graphics pipeline has become a reality.

Programmable graphics pipeline? That sounds like some sort of industry buzzphrase, doesn't it? Well, no, it isn't - it accurately describes the raw power that has emerged through what are being called vertex shaders and pixel shaders. You may have heard of them, or you may not - but after reading this article, you'll know exactly what they are, and why they're so important.

Let's break down that phrase, "programmable graphics pipeline". It's made up of two parts - "programmable", which I'll get to later, and "graphics pipeline", which I shall explain now.

The graphics pipeline is simply the virtual pipeline that every polygon drawn has to pass through in the graphics API of choice (Direct3D or OpenGL). There are various stages to the pipeline, each one taking the polygon closer to its final result - a collection of textured, lit pixels on the screen. Here's a sample graphics pipeline - different APIs often vary slightly, however:
[Image: diagram of a sample graphics pipeline; its stages are described below]
This may seem confusing at first - and rightly so; there are lots of phrases in there that are alien to all but a computer graphics programmer. So, without further ado, let's rectify this situation and explain what these different stages actually do.

Firstly, the polygon passed to the renderer is transformed into camera space. Simply put, the vertices (corners) of the polygon are rotated, moved and scaled so that they sit in a location relative to the scene's camera, with the camera itself at the origin (0,0,0). Secondly, lighting is applied to the vertices of the polygon: for every vertex, a lighting value is computed, and these values are then interpolated (blended) across the polygon to give the base colours.
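To make this concrete, here's a minimal sketch of that transform-and-light work, written as a Cg vertex program (Cg is a shading language introduced later in this article). All of the names here - the matrices, the light and the colours - are illustrative assumptions rather than part of any particular engine or API:
void transformAndLight(float4 position : POSITION,
                       float3 normal   : NORMAL,
                       out float4 oPosition : POSITION,
                       out float4 oColor    : COLOR,
                       uniform float4x4 modelViewProj, // straight to clip space (see the projection step below)
                       uniform float4x4 modelView,     // object space into camera (eye) space
                       uniform float3 lightDirEye,     // light direction, already in camera space
                       uniform float3 lightColor,
                       uniform float3 ambientColor)
{
    // Rotate the normal into camera space (assuming no non-uniform scaling).
    float3 eyeNormal = normalize(mul(modelView, float4(normal, 0.0)).xyz);

    // One lighting value per vertex; the hardware interpolates it across
    // the polygon to produce the base colours described above.
    float diffuse = max(dot(eyeNormal, -lightDirEye), 0.0);
    oColor = float4(ambientColor + diffuse * lightColor, 1.0);

    // Transform the vertex position for the projection stage described next.
    oPosition = mul(modelViewProj, position);
}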

Then comes the next stage - transforming the polygons to screen space. This is also called projecting the polygons, as they are converted into how they will appear on-screen and flattened onto a single plane. Polygons that are farther away are made smaller, and polygons that are nearer are made bigger, giving the illusion of perspective. This prepares the polygon for the next stage - converting it into pixels, ready to add to the framebuffer. The framebuffer is simply the memory where all the pixels to be drawn that frame are collected before being sent to the screen.
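As a rough sketch of that projection step in isolation (again in Cg, with made-up names), the vertex program below multiplies a camera-space position by a projection matrix; the hardware then divides x, y and z by the resulting w, which is what shrinks distant polygons:
void projectToScreen(float4 eyePosition : POSITION,
                     out float4 clipPosition : POSITION,
                     uniform float4x4 projectionMatrix)
{
    // After this multiply, w holds the depth; the perspective divide by w
    // (done by the hardware) flattens the vertex onto the screen plane.
    clipPosition = mul(projectionMatrix, eyePosition);
}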

Once the polygon has been converted to pixels, texturing and other per-pixel operations such as fogging are applied. Then, finally, each pixel can be written to the framebuffer (after a check to see whether another pixel is closer and would hide it). And that's the graphics pipeline.
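Here's what that per-pixel stage boils down to, sketched as a Cg fragment program; the texture, fog colour and fog factor (assumed to have been computed per vertex and interpolated) are illustrative, not from any real engine:
float4 textureAndFog(float2 texCoords : TEXCOORD0,
                     float  fogFactor : TEXCOORD1,   // 1 = no fog, 0 = fully fogged
                     uniform sampler2D baseTexture : TEXUNIT0,
                     uniform float3 fogColor) : COLOR
{
    // Look up the pixel's base colour from the texture.
    float3 texColor = tex2D(baseTexture, texCoords).rgb;

    // Blend towards the fog colour based on how fogged the pixel is.
    float3 finalColor = lerp(fogColor, texColor, fogFactor);
    return float4(finalColor, 1.0);
}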

Okay, so that's cleared up two thirds of "programmable graphics pipeline". But what of the first word? This is where the meat of this article kicks in. A programmable graphics pipeline is one that can be modified in various places by custom programs and scripts whilst still running on the graphics hardware. And it's this feature that has only become available in the last year or two - until now, the pipeline was fixed so that it could be run extremely fast on the graphics hardware.

There are two types of program that can replace two different parts of the pipeline - vertex shaders, and pixel shaders (or, to be technically correct, fragment shaders). Each takes over a different part of the pipeline: vertex shaders handle the lighting and screen-space transformation of polygons' vertices, and pixel shaders handle the texturing and fogging of pixels.

As you can probably guess, this gives the developer a huge amount of power. You can now modify the vertices and the final pixels whilst still using the graphics hardware to do the work, offloading it from the main CPU. But how are these programs written, and what sort of things can you do with them?

Let's tackle the first question, and discuss how these programs are written. When they first emerged, there was only one way, and that was to use a specialized language very similar to assembler (ASM). Here's an example of a very simple vertex program, written using the specialized assembler:
vs.1.1                  // vertex shader version 1.1
dp4 oPos.x, v0, c4      // transform the input position (v0) by the matrix
dp4 oPos.y, v0, c5      // stored in constant registers c4-c7, one dot
dp4 oPos.z, v0, c6      // product per component, writing the result to
dp4 oPos.w, v0, c7      // the output position register (oPos)
mov oD0, c8             // output the constant colour in c8 as the diffuse colour
As you can see, it isn't particularly readable - it takes a lot of thought to work out exactly what it's doing, due to the nature of the programming language. Computer graphics companies realized this, and decided that a higher-level, easier-to-use language was required to get developers interested in using vertex and pixel shaders in their latest products. And so the first high-level shader programming language emerged - Cg.

Cg, created by nVidia, stands for "C for graphics" - and, as you can guess, it is heavily based upon the existing language C. This makes it very easy for developers to code shaders, as C and C++ are extremely widely used languages, and it also makes it easier for them to write code that others can understand. Here's an example of a simple shader written using Cg:
float4 main(float2 detailCoords : TEXCOORD0,
            float2 bumpCoords   : TEXCOORD1,
            float3 lightVector  : COLOR0,
            uniform float3 ambientColor,
            uniform sampler2D detailTexture : TEXUNIT0,
            uniform sampler2D bumpTexture   : TEXUNIT1) : COLOR
{
    // Sample the base (detail) texture.
    float3 detailColor = tex2D(detailTexture, detailCoords).rgb;

    // Expand the light vector and the bump-map normal from the 0..1 colour
    // range back into the -1..1 vector range.
    float3 lightVectorFinal = 2.0 * (lightVector.rgb - 0.5);
    float3 bumpNormalVectorFinal = 2.0 * (tex2D(bumpTexture, bumpCoords).rgb - 0.5);

    // Standard diffuse lighting: the dot product of the normal and light vector.
    float diffuse = dot(bumpNormalVectorFinal, lightVectorFinal);

    return float4(diffuse * detailColor + ambientColor, 1.0);
}
This may not make sense to most people, but believe me, it makes far more sense to most graphics programmers. It's easier to read, and everything is laid out in a much more logical manner, making it easier to write as well. Cg is still relatively new, so it's taking time to take hold, but it will certainly have a huge impact on things to come.

Other shader languages exist - RenderMan's shading language has been around for a while, but it is used for non-real-time film rendering rather than games. However, there's a competitor to Cg that's just arrived on the scene - HLSL.

HLSL, created by the giant Microsoft and shipping with DirectX 9, stands for High-Level Shader Language. It's very similar to Cg, but is only compatible with DirectX. Some developers prefer it, whereas others prefer to use Cg - it comes down to personal choice. HLSL is optimized for the DirectX pipeline, which gives it an edge if the game being developed uses DirectX, but if another API also needs to be supported, Cg is a better bet.

But enough of the tech talk - what can shaders do? Well, anything you want. Doom 3 uses them for real-time dynamic lighting and bumpmapping; they can be used for cel shading, or to make a scene look like a hand-drawn work of art, coloured with crosshatching. The possibilities are endless - and the revolution is just beginning.
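To close, here's a rough idea of the cel-shading trick mentioned above, sketched as a Cg fragment program - purely illustrative, and not taken from any actual game:
float4 celShade(float3 normal      : TEXCOORD0,
                float3 lightVector : TEXCOORD1,
                uniform float3 baseColor) : COLOR
{
    // Ordinary diffuse lighting term...
    float diffuse = max(dot(normalize(normal), normalize(lightVector)), 0.0);

    // ...snapped to one of three flat bands, which gives the cartoon look.
    float band = floor(diffuse * 3.0) / 3.0;

    return float4(baseColor * (0.3 + 0.7 * band), 1.0);
}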
This article was originally published on the Valve Editing Resource Collective (VERC).
