Chapter 7. Shaders and Effects

Shaders have been a part of Direct3D for a while now. Vertex and pixel shaders give developers the power to control every detail of how their data is manipulated at multiple stages of the pipeline, enabling increased realism. Now with Direct3D 11, the next iteration of shader technology is here: shader model 5.0.

In this chapter:

  • What an effect file is

  • How to use the High Level Shading Language

  • What the various types of shaders are

  • A basic understanding of lighting

  • HLSL reference

Shaders in Direct3D

The capabilities of both vertex and pixel shaders have been increased: more instructions are allowed, more textures can be accessed, and shaders can be more complex. Instead of limiting the improvements to the existing shader types, Direct3D 11 also introduces compute shaders along with hull and domain shaders. These new shaders join the geometry shaders introduced in Direct3D 10 and the vertex and pixel shaders that have been around since DirectX 8.

History of Programmable Shaders

Graphics programming before shaders used a set of fixed algorithms collectively known as the fixed function pipeline. To enable various features and effects, the fixed function pipeline essentially served as a set of built-in states that could be turned on and off. This was very limiting to developers because what could be done in graphics APIs such as Direct3D and OpenGL was fixed by those in control of the API and was not expandable.

DirectX 8 was the first version in which the Direct3D graphics pipeline offered programmable shaders alongside the fixed function algorithms. The pipeline was programmable via assembly instructions or HLSL, the High Level Shading Language. In Direct3D 10, HLSL became the only way to write programmable shaders; assembly instructions are no longer supported. Also in Direct3D 10, the graphics pipeline became fully programmable, and the fixed function pipeline was removed entirely.

Because Direct3D no longer supports the fixed function pipeline, it falls to you to define how vertices and pixels are handled. The fixed function pipeline previously had a set way of processing vertices as they passed through on their way to being drawn. This restricted how much control you had over the pipeline, limiting you to the functionality it exposed. There was a single method for handling lighting and a set maximum number of textures you could work with. This severely limited the effects you could achieve using Direct3D.

Today, as each vertex is processed by the system, you get the opportunity to manipulate it or to allow it to pass through unchanged. The same can be said for pixels. Any pixel being rendered by the system also is provided to you to be changed before going to the screen. The functionality to change vertices and pixels is contained within Direct3D’s shader mechanism.

Shaders are Direct3D’s way of exposing pieces of the pipeline to be dynamically reprogrammed by you. Direct3D supports several types of shaders: vertex, pixel, geometry, compute, hull, and domain.

Vertex shaders operate on just what you’d expect: vertices. Every vertex going through the pipeline is made available to the current vertex shader before being output. Likewise, any pixel being rendered must also pass through the pixel shader. Geometry shaders are a special type of shader introduced with Direct3D 10; they allow multiple vertices to be manipulated simultaneously, giving you control over entire primitives of geometry.

One of the new shader types in Direct3D 11 is the compute shader. A compute shader, which requires hardware that supports at least shader model 4.0, is used to perform general-purpose parallel computing on the graphics hardware. Compute shaders can be used for anything from graphics to physics, video encoding and decoding, and so on. Not every task is suitable for execution on a GPU because of the GPU’s design, but tasks that do fit the architecture can see an increase in performance by taking advantage of the additional processing power.
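
To give a sense of what a compute shader looks like, here is a minimal sketch (not taken from any demo in this book) that simply doubles every value in a read/write buffer; the buffer name, register, and thread-group size are arbitrary choices for illustration.

// Minimal compute shader sketch (illustrative only; not from a demo in this book).
RWStructuredBuffer<float> dataBuffer : register( u0 );

[numthreads( 64, 1, 1 )]
void CS_Main( uint3 dispatchId : SV_DispatchThreadID )
{
    // Each thread doubles one element of the buffer.
    dataBuffer[dispatchId.x] *= 2.0f;
}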

The two other shader types new to Direct3D 11 are the hull and domain shaders, which accompany an additional pipeline stage known as the tessellator. The tessellation stage itself is not programmable, but the shaders surrounding it (hull and domain) are. The hull shader runs on the control points of the source control mesh and is used to transform incoming surface data, whereas the domain shader executes for each generated vertex. The idea of tessellation is to take a control mesh defined by surfaces (not the triangle lists we have used throughout this book) and subdivide it dynamically to generate a mesh of varying levels of detail. Ideally this means we can generate very high-polygon models on the graphics hardware without having to send all of that detail across from the application. Think of it as dynamic level of detail for advanced graphics programming.

Effect Files

In Direct3D there are what are known as effect files. Shaders created via these files are bundled together in what’s called an effect. Most of the time, you’ll be using a combination of vertex and pixel shaders together to create a certain behavior, called a technique. A technique defines a rendering effect, and an effect file can have multiple rendering techniques within.

While using shaders individually is still possible with Direct3D 11, you’ll find them extremely useful when grouped together into an effect. An effect is a simple way of packaging the needed shaders together to render objects in a particular way. The effect is loaded as a single object, and the included shaders are executed when necessary. By changing the effect you’re applying to your scene, you easily change the method Direct3D is using to do the rendering.

Effects are defined with an effect file, a text format that is loaded in from disk, compiled, and executed.

Although you can use effect files in Direct3D, they are not necessary. All demos throughout this book loaded the code for the individual shaders one at a time. The difference between effect files and what we have been doing so far is that effect files have additional information other than just the shaders themselves, such as rendering states, blend states, etc.

Effect File Layout

Effect files are a way of containing a particular set of rendering functionality. Each effect, applied when drawing objects in your scene, dictates what the objects look like and how they’re drawn. For example, you may create an effect whose job it is to texture objects, or you may create an effect to generate lighting bloom or blur. You could also have different versions of these effects within an effect file, which could be useful if you had to target lower-end machines but also offer an alternative to an effect designed for high-end machines. Effects have an amazing versatility in how they can be used.

Previously, vertex and pixel shaders were loaded and applied separately. Effects combine the shaders into a self-contained unit that encompasses functionality of multiple shader types.

Effects are composed of several sections:

  • External variables—Variables that get their data from the calling program.

  • Input structures—Structures that define the information being passed between shaders.

  • Shaders—The shader code itself (an effect file can contain many shaders).

  • Technique block(s)—Defines the shaders and passes available within the effect.

The simplest form of effect contains a technique with a vertex shader that allows the incoming data from the vertex structure to just pass through. This means the vertex position and other properties will not be changed in any way and are passed on to the next stage in the pipeline.

A simple pixel shader will perform no calculations and return only a single color. Geometry shaders, and also domain and hull shaders, are optional and can be null. The contents of a basic effect file are shown next.

struct VS_OUTPUT
{
    float4 Pos : SV_POSITION;
    float4 Color : COLOR0;
};


VS_OUTPUT VS( float4 Pos : POSITION )
{
    VS_OUTPUT psInput;
    psInput.Pos = Pos;
    psInput.Color = float4( 1.0f, 1.0f, 0.0f, 1.0f );

    return psInput;
}


float4 PS( VS_OUTPUT psInput ) : SV_Target
{
    return psInput.Color;
}


technique11 Render
{
    pass P0
    {
        SetVertexShader( CompileShader( vs_4_0, VS( ) ) );
        SetGeometryShader( NULL );
        SetPixelShader( CompileShader( ps_4_0, PS( ) ) );
    }
}

Loading an Effect File

Effects are usually loaded from a memory buffer using the D3DX11CreateEffectFromMemory function. Because this function loads the effect from a buffer, you can read a compiled effect from a file (using std::ifstream, for example) and pass along the file’s contents, or compile the effect source yourself and pass along the resulting blob, as the demos in this chapter do. D3DX11CreateEffectFromMemory has the following function prototype:

HRESULT D3DX11CreateEffectFromMemory(
    void* pData,
    SIZE_T DataLength,
    UINT FXFlags,
    ID3D11Device* pDevice,
    ID3DX11Effect** ppEffect
);

The parameters for D3DX11CreateEffectFromMemory start with the compiled effect data. The parameters that follow are the size of that data in bytes, effect flags, the Direct3D 11 device, and the address of an ID3DX11Effect pointer that will receive the created effect.
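
As a rough sketch of the loading process, the following code reads a hypothetical pre-compiled effect binary (here named "effect.fxo") into memory with std::ifstream and hands it to D3DX11CreateEffectFromMemory; the demos later in this chapter instead compile the .fx source at load time and pass the resulting buffer. The file name and function name are assumptions for illustration.

// Sketch: load a pre-compiled effect binary from disk and create the effect from memory.
// The file name "effect.fxo" is a hypothetical example.
#include <d3dx11effect.h>
#include <fstream>
#include <vector>

bool LoadEffectFromFile( ID3D11Device* device, ID3DX11Effect** effectOut )
{
    std::ifstream file( "effect.fxo", std::ios::binary | std::ios::ate );

    if( !file.is_open( ) )
        return false;

    // Determine the file size and read the whole file into a buffer.
    std::streamsize size = file.tellg( );
    file.seekg( 0, std::ios::beg );

    std::vector<char> buffer( static_cast<size_t>( size ) );
    file.read( buffer.data( ), size );

    HRESULT result = D3DX11CreateEffectFromMemory( buffer.data( ),
        static_cast<SIZE_T>( buffer.size( ) ), 0, device, effectOut );

    return SUCCEEDED( result );
}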

External Variables and Constant Buffers

Most effects need additional input beyond just the list of vertices; this is where external variables are useful. External variables are variables declared within your effect that are visible from your application code. Variables that receive information like the current frame time, the world and projection matrices, or light positions can be declared within the effect so they can be updated from the calling program.

With the introduction of Direct3D 10, all external variables now reside in constant buffers. Constant buffers are used to group variables visible to the calling program so that they can be optimized for access. Constant buffers are similar in definition to structures and are created using the cbuffer keyword. An example can be seen in the following HLSL code snippet:

cbuffer Variables
{
    matrix Projection;
};

Constant buffers are commonly declared at the top of an effect file and reside outside of any other section. It can be useful to group variables based on how frequently they are updated. For instance, variables that receive a value only once would be grouped separately from variables that are updated on a frame-by-frame basis. You can create multiple constant buffers.
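
As an illustrative sketch, grouping by update frequency might look like the following; the names here echo those used later in this chapter’s demos.

// Hypothetical constant buffers grouped by how often their contents change.
cbuffer cbNeverChanges
{
    matrix viewMatrix;      // set once at startup
};

cbuffer cbChangesEveryFrame
{
    matrix worldMatrix;     // updated every frame
    float  frameTime;       // updated every frame
};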

When the effect file is loaded, you can bind the external variables to the effect variables within your application. The following code shows how the external variable "Projection" is bound to the ID3DX11EffectMatrixVariable in the application.

ID3DX11EffectMatrixVariable * projMatrixVar = 0;

projMatrixVar = pEffect->GetVariableByName( "Projection" )->AsMatrix( );
projMatrixVar->SetMatrix( ( float* )&finalMatrix );

Input and Output Structures

Effect files commonly need to pass multiple values between shaders; to keep things simple, the variables are passed within a structure. The structure allows more than one variable to be bundled together into an easy-to-send package and helps minimize the work needed when adding a new variable.

For instance, vertex shaders commonly need to pass values like vertex position, color, or normal value along to the pixel shader. Since the vertex shader has the limitation of a single return value, it simply packages the needed variables into the structure and sends it to the pixel shader. The pixel shader then accesses the variables within the structure. An example structure called VS_OUTPUT is shown next.

struct VS_OUTPUT
{
    float4 Pos : SV_POSITION;
    float4 Color : COLOR0;
};

Using the structures is simple. First, an instance of the structure is created within the vertex shader. Next, the individual structure variables are filled out, and then the structure is returned. The next shader in the pipeline will use the VS_OUTPUT structure as its input and have access to the variables you set. A simple vertex shader is shown here to demonstrate the definition and usage of a structure.

VS_OUTPUT VS( float4 Pos : POSITION, float4 Color : COLOR )
{
    VS_OUTPUT psInput;

    psInput.Pos = mul( Pos, Projection );
    psInput.Color = Color;

    return psInput;
}

Technique Blocks

Effect files combine the functionality of multiple shaders into a single block called a technique. Techniques are a way to define how something should be drawn. For instance, you can define a technique that supports translucency or opaqueness. By switching between techniques, the objects being drawn will go from solid to see-through.

Techniques are defined within a shader using the technique11 keyword followed by the name of the technique being created.

technique11 Render
{
    // technique definition
}

Each technique defines the set of vertex and pixel shaders that are used as vertices and pixels are passed through the pipeline. Effects allow multiple techniques to be defined, but an effect file must contain at least one. Each technique can also contain multiple passes. Most techniques you come across will contain only one pass, but be aware that multiple passes are possible for more complicated effects. Each pass uses the available shader hardware to perform different kinds of special effects.

After loading the effect file, you need to gain access to its technique in order to use it. The technique is then stored in an ID3DX11EffectTechnique object for use later when rendering or defining a vertex layout. A small code sample showing how to create the technique object from an effect is shown here:

ID3DX11EffectTechnique* shadowTech;

shadowTech = effect->GetTechniqueByName( "ShadowMap" );

Because you can create simple or complex rendering techniques, techniques apply their functionality in passes. Each pass updates or changes the render state and shaders being applied to the scene. Because not all the effects you come up with can be applied in a single pass, techniques give you the ability to define more than one. Some post-processing effects such as depth of field require more than one pass. Keep in mind that utilizing multiple passes will cause objects to be drawn multiple times, which can slow down rendering times.

You now have a technique object ready to use when drawing your objects. Techniques are used by looping through the available passes and calling your draw functions. Before drawing with a pass, the pass is applied, which prepares the hardware for drawing. The Apply function sets the pass’s shaders along with all of its render states and data. An example can be seen in the following:

D3DX11_TECHNIQUE_DESC techDesc;

shadowTech->GetDesc( &techDesc );

for( UINT p = 0; p < techDesc.Passes; p++ )
{
    shadowTech->GetPassByIndex( p )->Apply( 0, d3dContext_ );

    // Draw function
}

Each pass is created using the pass keyword in the HLSL effect file, followed by its pass level. The pass level is a combination of the letter P followed by the number of the pass.

In the following example, there are two passes, P0 and P1, being defined. At least one pass must be defined for the technique to be valid.

technique11 Render
{
    pass P0
    {
        // pass shader definitions
    }

    pass P1
    {
        // pass shader definitions
    }
}

The main job of each pass is the setting of the shaders. Because the shaders you use can differ for each pass, they must be specifically defined using the functions SetVertexShader, SetGeometryShader, SetPixelShader, etc.

technique11 Render
{
    pass P0
    {
        SetVertexShader( CompileShader( vs_4_0, VS( ) ) );
        SetGeometryShader( NULL );
        SetPixelShader( CompileShader( ps_4_0, PS( ) ) );
    }
}

As you can see, the shader-setting functions include a call to the CompileShader function. The CompileShader HLSL function takes the shader type and version (e.g., vertex shader 5.0 is vs_5_0) and the name of the function you’ve written in the HLSL file that is the main entry point to that shader.

Rasterizer States

Effect files allow you to set the rasterizer states from within the shader rather than on the application level. You’ve probably seen 3D modeling software display objects in wireframe mode. This mode displays 3D objects using only their outline. This lets you see how objects are made up, sort of like seeing the frame of a house without the walls getting in the way.

By default, Direct3D operates in solid mode, which causes faces to be drawn opaquely. This can be changed though by altering the rasterizer state. The rasterizer state tells Direct3D how things in the rasterizer stage should behave, such as what type of culling should take place, whether features like multi-sampling and scissoring are enabled, and the type of fill mode that should be used.

Rasterizer state objects implement the ID3D11RasterizerState interface and are created using the device’s CreateRasterizerState function, which has the following function prototype:

HRESULT CreateRasterizerState(
    const D3D11_RASTERIZER_DESC* pRasterizerDesc,
    ID3D11RasterizerState** ppRasterizerState
);

The state-setting functions available in HLSL mirror their application-side counterparts. The D3D11_RASTERIZER_DESC structure is used to define the various state options and has the following layout:

typedef struct D3D11_RASTERIZER_DESC {
    D3D11_FILL_MODE FillMode;
    D3D11_CULL_MODE CullMode;
    BOOL            FrontCounterClockwise;
    INT             DepthBias;
    FLOAT           DepthBiasClamp;
    FLOAT           SlopeScaledDepthBias;
    BOOL            DepthClipEnable;
    BOOL            ScissorEnable;
    BOOL            MultisampleEnable;
    BOOL            AntialiasedLineEnable;
} D3D11_RASTERIZER_DESC;

The FillMode member, of type D3D11_FILL_MODE, controls how the geometry is drawn. If you use the value D3D11_FILL_WIREFRAME, the geometry will be drawn in wireframe mode; otherwise, pass the value D3D11_FILL_SOLID to have all geometry drawn solid.

The second member, CullMode, of type D3D11_CULL_MODE, tells the rasterizer which faces to draw and which to ignore. Imagine a sphere made up of triangles. No matter which way you orient the sphere, not all of the triangles that make it up are visible at any one time; only those triangles facing you can be seen. The triangles on the back of the sphere are said to be back facing. Because of how the vertices that make up the triangles are defined, they have a particular winding order. The winding order is the direction in which a triangle’s vertices are defined, clockwise or counterclockwise. Even if you define all your triangles using the same winding order, simply rotating the object causes some of the triangles to be reversed from the camera’s point of view. Going back to the sphere, from the camera’s perspective some of the triangles appear clockwise and some counterclockwise. The culling mode tells Direct3D which triangles it can safely ignore and not draw. D3D11_CULL_MODE has three options: D3D11_CULL_NONE performs no culling, D3D11_CULL_FRONT culls all polygons facing the camera, and D3D11_CULL_BACK culls all polygons facing away from the camera. Specifying a culling mode cuts down on the number of triangles you’re asking Direct3D to draw.

If you want the details on all the other members of the D3D11_RASTERIZER_DESC structure, please consult the DirectX SDK documentation. Once you have the structure filled out, it is safe to call the CreateRasterizerState function to create the new rasterizer state.

After the new rasterizer state is created, it must be set before it takes effect. You use the RSSetState function, provided by the ID3D11DeviceContext interface, to change the currently active rasterizer state.

void RSSetState( ID3D11RasterizerState* pRasterizerState );
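
Putting these pieces together, the following sketch fills out the description for wireframe rendering, creates the state, and makes it current. The d3dDevice_ and d3dContext_ pointers follow the naming used by this book’s demo framework; everything else is illustrative.

// Sketch: create and activate a wireframe rasterizer state.
D3D11_RASTERIZER_DESC rasterDesc;
ZeroMemory( &rasterDesc, sizeof( rasterDesc ) );

rasterDesc.FillMode = D3D11_FILL_WIREFRAME;
rasterDesc.CullMode = D3D11_CULL_BACK;
rasterDesc.DepthClipEnable = TRUE;

ID3D11RasterizerState* wireframeState = 0;

HRESULT d3dResult = d3dDevice_->CreateRasterizerState( &rasterDesc,
    &wireframeState );

if( SUCCEEDED( d3dResult ) )
{
    // Make the new state current before issuing draw calls.
    d3dContext_->RSSetState( wireframeState );
}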

High Level Shading Language

As we know, the High Level Shading Language (HLSL) is the programming language used to write shaders. Very similar in syntax and structure to C, HLSL allows you to create small shader programs that are loaded onto the video hardware and executed. With shader model 5.0 we can also use object-oriented programming concepts. Shader model 5.0 is a superset of shader model 4.0.

In this section we will briefly look at HLSL syntax a little more closely.

Variable Types

HLSL contains many of the variable types that you’ll find in C++ such as int, bool, and float; you’ll also find a few new ones like half, int1x4, and float4, which we discussed in Chapter 6.

Some variable types can contain multiple components allowing you to pack more than a single value into them. For instance, the variable type float4 allows you to store four float values within it. By storing values using these specialized types, the video hardware can optimize access to the data, ensuring quicker access.

float4 tempFloat = float4(1.0f, 2.0f, 3.0f, 4.0f );

Any variable that contains multiple components can have each individual component accessed using swizzling. Swizzling enables you to access, for instance, a float3 variable’s three components by specifying x, y, or z after the variable name. In the following example, the singleFloat variable is filled with the value found in newFloat’s x component.

float3 newFloat = float3( 0.0f, 1.0f, 2.0f );
float singleFloat = newFloat.x;

Any variable containing multiple components can be accessed in this way.
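
Swizzling can also reorder or replicate components. The following lines are a small illustrative sketch:

float4 color = float4( 1.0f, 0.5f, 0.25f, 1.0f );

float3 rgb  = color.xyz;    // the first three components
float3 bgr  = color.zyx;    // the same components in reverse order
float4 gray = color.xxxx;   // one component replicated four times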

Semantics

Semantics are a way of letting the shader know what certain variables will be used for so their access can be optimized. Semantics follow a variable declaration and have types such as COLOR0, TEXCOORD0, and POSITION. As you can see in the following structure, the two variables Pos and Color are followed by semantics specifying their use.

struct VS_OUTPUT
{
    float4 Pos : SV_POSITION;
    float4 Color : COLOR0;
};

Some commonly used semantics include:

  • SV_POSITION—A float4 value specifying a transformed position

  • NORMAL0—Semantic that is used when defining a normal vector

  • COLOR0—Semantic used when defining a color value

There are many more semantics available; take a look at the HLSL documentation in the DirectX SDK for a complete list. A lot of semantics end in a numerical value because it is possible to define multiples of those types.
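
For example, a structure could carry two sets of texture coordinates simply by numbering the TEXCOORD semantic; the structure below is purely illustrative.

struct VS_OUTPUT_TWO_UVS
{
    float4 Pos  : SV_POSITION;
    float2 Tex0 : TEXCOORD0;    // first texture coordinate set
    float2 Tex1 : TEXCOORD1;    // second texture coordinate set
};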

Function Declarations

Functions within HLSL are defined in pretty much the same way they are within other languages.

ReturnValue FunctionName( parameterName : semantic )
{
    // function code goes here
}

The function return value can be any of the defined HLSL types, including packed types and void.

When you’re defining a parameter list for a shader function, it is perfectly valid to specify a semantic following the variable. There are a few things you need to be aware of though when defining function parameters. Since HLSL doesn’t have a specific way for you to return a value by reference within your parameter list, it defines a few keywords that can be used to achieve the same results.

Using the out keyword before your parameter declaration lets the compiler know that the variable will be used as an output. Additionally, the keyword inout allows the variable to be used both as an input and output.

void GetColor( out float3 color )
{
    color = float3( 0.0f, 1.0f, 1.0f );
}
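
A similar sketch using the inout keyword, where the parameter arrives with a value and leaves with the modified one, might look like this:

void ScaleColor( inout float3 color, float scale )
{
    // The incoming color is read, modified, and passed back out.
    color = color * scale;
}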

Vertex Shaders

Vertex shaders are the part of the pipeline where you are given control of every vertex that gets processed by the system. In previous versions of Direct3D, you had the option of using the fixed function pipeline, which has a built-in set of functionality that it uses when processing vertices. Now with the latest Direct3D, you must do all the processing yourself. To that end, you’ll need to write at least a simple vertex shader.

A vertex shader is one of the shaders that can exist within an effect file. As objects are sent to be drawn, their vertices are sent to your vertex shader. If you don’t want to do any additional processing to the vertices, you can pass them along to the pixel shader to be drawn. In most cases, though, you’ll at least want to apply a world, view, or projection transform so the vertices are placed in the proper space to be rendered.

Using vertex shaders, you have a lot of power to manipulate the vertices past just doing a simple transform. The vertex can be translated along any of the axes, its color changed, or any of its other properties manipulated.

Pixel Shaders

Pixel shaders give you access to every pixel being put through the pipeline. Before anything is drawn, you’re given the chance to make changes to the color of each pixel. In some cases you’ll simply return the pixel color passed in from the vertex or geometry shaders, but in most cases you’ll apply lighting or textures that affect the color of the resulting pixel.

Texture Color Inversion

In this chapter we will create a simple pixel shader effect that inverts the color of a rendered surface; it is located on the companion website in the Chapter7/ColorInversion folder. This demo uses the exact same code as Chapter 6’s Cube demo, with the exception of a change in the pixel shader and the fact that we are using an effect file with a technique.

The goal of this effect is to render the colors of a surface negated. This means white becomes black, black becomes white, and all other colors switch places with colors on the opposite side of the intensity chart. The effect to perform this is fairly easy and requires us to do 1 minus the color in the pixel shader to perform the inversion. This makes sense because 1 minus 1 (white) will equal 0 (changes white to black), whereas 1 minus 0 equals 1 (changes black to white).

The HLSL shader that performs the color inversion can be seen in Listing 7.1. This shader is exactly the same as the Cube demo from Chapter 6, with the exception that we are doing 1 - color in the pixel shader. Keep in mind that SV_TARGET is the output semantic that specifies that the pixel shader’s output is being used for the rendering target. A screenshot of the running demo can be seen in Figure 7.1.

Figure 7.1. Screenshot from the Color Inversion demo.

Example 7.1. The Color Inversion demo’s HLSL shader.

Texture2D colorMap : register( t0 );
SamplerState colorSampler : register( s0 );


cbuffer cbChangesEveryFrame : register( b0 )
{
    matrix worldMatrix;
};
cbuffer cbNeverChanges : register( b1 )
{
    matrix viewMatrix;
};

cbuffer cbChangeOnResize : register( b2 )
{
    matrix projMatrix;
};


struct VS_Input
{
    float4 pos   : POSITION;
    float2 tex0 : TEXCOORD0;
};

struct PS_Input
{
    float4 pos   : SV_POSITION;
    float2 tex0 : TEXCOORD0;
};
PS_Input VS_Main( VS_Input vertex )
{
    PS_Input vsOut = ( PS_Input )0;
    vsOut.pos = mul( vertex.pos, worldMatrix );
    vsOut.pos = mul( vsOut.pos, viewMatrix );
    vsOut.pos = mul( vsOut.pos, projMatrix );
    vsOut.tex0 = vertex.tex0;

    return vsOut;
}


float4 PS_Main( PS_Input frag ) : SV_TARGET
{
    return 1.0f - colorMap.Sample( colorSampler, frag.tex0 );
}


technique11 ColorInversion
{
    pass P0
    {
        SetVertexShader( CompileShader( vs_5_0, VS_Main() ) );
        SetGeometryShader( NULL );
        SetPixelShader( CompileShader( ps_5_0, PS_Main() ) );
    }
}

The demo class uses an effect object of the type ID3DX11Effect (seen in Listing 7.2), and the code used to load the effect can be seen in the LoadContent function in Listing 7.3, which is limited only to the code used to load the effect and the code used to create the input layout, since the remainder of the function’s contents is not new.

Example 7.2. The Color Inversion demo’s class definition.

#include"Dx11DemoBase.h"
#include<xnamath.h>
#include<d3dx11effect.h>
class ColorInversionDemo : public Dx11DemoBase
{
    public:
        ColorInversionDemo( );
        virtual ~ColorInversionDemo( );

        bool LoadContent( );
        void UnloadContent( );

        void Update( float dt );
        void Render( );

    private:
        ID3DX11Effect* effect_;
        ID3D11InputLayout* inputLayout_;

        ID3D11Buffer* vertexBuffer_;
        ID3D11Buffer* indexBuffer_;

        ID3D11ShaderResourceView* colorMap_;
        ID3D11SamplerState* colorMapSampler_;

        XMMATRIX viewMatrix_;
        XMMATRIX projMatrix_;
};

Example 7.3. The Color Inversion LoadContent function.

bool ColorInversionDemo::LoadContent( )
{
    ID3DBlob* buffer = 0;

    bool compileResult = CompileD3DShader( "ColorInversion.fx", 0,
        "fx_5_0", &buffer );

    if( compileResult == false )
    {
        DXTRACE_MSG( "Error compiling the effect shader!" );
        return false;
    }
    HRESULT d3dResult;

    d3dResult = D3DX11CreateEffectFromMemory( buffer->GetBufferPointer( ),
        buffer->GetBufferSize( ), 0, d3dDevice_, &effect_ );

    if( FAILED( d3dResult ) )
    {
        DXTRACE_MSG( "Error creating the effect shader!" );

        if( buffer )
            buffer->Release( );

        return false;
    }

    D3D11_INPUT_ELEMENT_DESC solidColorLayout[] =
    {
        { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
           D3D11_INPUT_PER_VERTEX_DATA, 0 },
        { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12,
           D3D11_INPUT_PER_VERTEX_DATA, 0 }
    };

    unsigned int totalLayoutElements = ARRAYSIZE( solidColorLayout );

    ID3DX11EffectTechnique* colorInvTechnique;
    colorInvTechnique = effect_->GetTechniqueByName( "ColorInversion" );
    ID3DX11EffectPass* effectPass = colorInvTechnique->GetPassByIndex( 0 );

    D3DX11_PASS_SHADER_DESC passDesc;
    D3DX11_EFFECT_SHADER_DESC shaderDesc;
    effectPass->GetVertexShaderDesc( &passDesc );
    passDesc.pShaderVariable->GetShaderDesc( passDesc.ShaderIndex, &shaderDesc );

    d3dResult = d3dDevice_->CreateInputLayout( solidColorLayout,
        totalLayoutElements, shaderDesc.pBytecode,
        shaderDesc.BytecodeLength, &inputLayout_ );

    buffer->Release( );
    if( FAILED( d3dResult ) )
    {
        DXTRACE_MSG( "Error creating the input layout!" );
        return false;
    }


        . . .

}

In Listing 7.3 we are able to use the same CompileD3DShader code that we’ve used throughout this book to compile the effect file, with the exception that we are not specifying an entry function name and that we are using a profile of "fx_5_0", where fx represents effect files rather than "vs_5_0" for vertex shaders, "ps_5_0" for pixel shaders, etc.

When we create the input layout, we must use a vertex shader from within the effect file that corresponds to that specific input layout. To do this we first obtain a pointer to the technique that specifies the vertex shader we wish to base the input layout on, which gives us access to the technique’s passes. Since each pass can use a different vertex shader, we must also obtain a pointer to the pass we are basing the input layout on. Using the pass, we can call GetVertexShaderDesc to get a description of the vertex shader used by that pass, and then call that object’s GetShaderDesc function, which provides the vertex shader’s bytecode and its size. We use the bytecode and the size of that code to create the input layout.

The last function with modified code to allow this demo to use an effect file is the rendering code seen in Listing 7.4. In the Render function we can set constant variables in shaders by using the various effect variable objects, such as ID3DX11EffectShaderResourceVariable for shader resource variables, ID3DX11EffectSamplerVariable for samplers, ID3DX11EffectMatrixVariable for matrices, and so forth.

To obtain a pointer to a variable we can use a function such as GetVariableByName (or GetVariableByIndex). We then call one of the "As" functions to convert it into the type we know the variable to be. For example, we would call AsShaderResource to obtain the variable as a shader resource, AsSampler to obtain it as a sampler, AsMatrix to obtain it as a matrix, and so forth.

Once we have a pointer to the variable we can call various functions to bind data to it (for example, call SetMatrix on an ID3DX11EffectMatrixVariable to pass along the data we wish to set). Once we’re done setting the shader variables, we can obtain a pointer to the technique we wish to render with and loop over each pass, drawing the mesh’s geometry. The Render function is shown in Listing 7.4.

Example 7.4. The rendering code for the Color Inversion demo.

void ColorInversionDemo::Render( )
{
    if( d3dContext_ == 0 )
        return;

    float clearColor[4] = { 0.0f, 0.0f, 0.25f, 1.0f };
    d3dContext_->ClearRenderTargetView( backBufferTarget_, clearColor );
    d3dContext_->ClearDepthStencilView( depthStencilView_,
        D3D11_CLEAR_DEPTH, 1.0f, 0 );

    unsigned int stride = sizeof( VertexPos );
    unsigned int offset = 0;

    d3dContext_->IASetInputLayout( inputLayout_ );
    d3dContext_->IASetVertexBuffers( 0, 1, &vertexBuffer_, &stride, &offset );
    d3dContext_->IASetIndexBuffer( indexBuffer_, DXGI_FORMAT_R16_UINT, 0 );
    d3dContext_->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST );

    XMMATRIX rotationMat = XMMatrixRotationRollPitchYaw( 0.0f, 0.7f, 0.7f );
    XMMATRIX translationMat = XMMatrixTranslation( 0.0f, 0.0f, 6.0f );
    XMMATRIX worldMat = rotationMat * translationMat;

    ID3DX11EffectShaderResourceVariable* colorMap;
    colorMap = effect_->GetVariableByName( "colorMap" )->AsShaderResource( );
    colorMap->SetResource( colorMap_ );

    ID3DX11EffectSamplerVariable* colorMapSampler;
    colorMapSampler = effect_->GetVariableByName("colorSampler")->AsSampler( );
    colorMapSampler->SetSampler( 0, colorMapSampler_ );
    ID3DX11EffectMatrixVariable* worldMatrix;
    worldMatrix = effect_->GetVariableByName( "worldMatrix" )->AsMatrix( );
    worldMatrix->SetMatrix( ( float* )&worldMat );

    ID3DX11EffectMatrixVariable* viewMatrix;
    viewMatrix = effect_->GetVariableByName( "viewMatrix" )->AsMatrix( );
    viewMatrix->SetMatrix( ( float* )&viewMatrix_ );

    ID3DX11EffectMatrixVariable* projMatrix;
    projMatrix = effect_->GetVariableByName( "projMatrix" )->AsMatrix( );
    projMatrix->SetMatrix( ( float* )&projMatrix_ );

    ID3DX11EffectTechnique* colorInvTechnique;
    colorInvTechnique = effect_->GetTechniqueByName( "ColorInversion" );

    D3DX11_TECHNIQUE_DESC techDesc;
    colorInvTechnique->GetDesc( &techDesc );

    for( unsigned int p = 0; p < techDesc.Passes; p++ )
    {
        ID3DX11EffectPass* pass = colorInvTechnique->GetPassByIndex( p );

        if( pass != 0 )
        {
            pass->Apply( 0, d3dContext_ );
            d3dContext_->DrawIndexed( 36, 0, 0 );
        }
    }

    swapChain_->Present( 0, 0 );
}

One last thing to note: Direct3D 11 does not ship the effect framework code in the same directories as the other includes and libraries. You can find d3dx11effect.h in the DirectX SDK folder under Samples\C++\Effects11\Inc, and you can find the solution that we must build in Samples\C++\Effects11, called Effects11_2010.sln (if you are using Visual Studio 2010). This solution builds a static library that we link against in order to use effect files in Direct3D. It is a little extra work the first time, but it must be done for Direct3D 11. We only need to build this static library once, and then we can reuse it for all of our projects.

Color Shifting

Next we will create another simple pixel shader effect that shifts the color components of a rendered surface around, which is located on the companion website in the Chapter7/ColorShift folder. This demo will also use the exact same code from Chapter 6’s Cube demo, with the exception of a change we’ll be doing in the pixel shader.

For this effect we will simply shift the color components of the texture’s sampled color so that the output’s red channel takes the sampled green value, green takes blue, and blue takes red. We do this in the pixel shader by first obtaining the texture’s color and then creating another float4 object to store the shifted component values. The HLSL shader for this effect can be seen in Listing 7.5, and a screenshot can be seen in Figure 7.2.

Figure 7.2. Screenshot from the Color Shift demo.

Example 7.5. The Color Shift demo’s HLSL shader.

Texture2D colorMap : register( t0 );
SamplerState colorSampler : register( s0 );


cbuffer cbChangesEveryFrame : register( b0 )
{
    matrix worldMatrix;
};

cbuffer cbNeverChanges : register( b1 )
{
    matrix viewMatrix;
};

cbuffer cbChangeOnResize : register( b2 )
{
    matrix projMatrix;
};


struct VS_Input
{
    float4 pos   : POSITION;
    float2 tex0 : TEXCOORD0;
};

struct PS_Input
{
    float4 pos   : SV_POSITION;
    float2 tex0 : TEXCOORD0;
};


PS_Input VS_Main( VS_Input vertex )
{
    PS_Input vsOut = ( PS_Input )0;
    vsOut.pos = mul( vertex.pos, worldMatrix );
    vsOut.pos = mul( vsOut.pos, viewMatrix );
    vsOut.pos = mul( vsOut.pos, projMatrix );
    vsOut.tex0 = vertex.tex0;

    return vsOut;
}


float4 PS_Main( PS_Input frag ) : SV_TARGET
{
    float4 col = colorMap.Sample( colorSampler, frag.tex0 );
    float4 finalCol;

    finalCol.x = col.y;
    finalCol.y = col.z;
    finalCol.z = col.x;
    finalCol.w = 1.0f;

    return finalCol;
}


technique11 ColorShift
{
    pass P0
    {
        SetVertexShader( CompileShader( vs_5_0, VS_Main() ) );
        SetGeometryShader( NULL );
        SetPixelShader( CompileShader( ps_5_0, PS_Main() ) );
    }
}

Multitexturing

The last demo we will create performs multitexturing; it is located on the companion website with the other Chapter 7 demos. Multitexturing is an effect that displays two texture images on one surface. We can do this by sampling both texture images and using color1 × color2 as the final pixel’s result.

Multitexturing can be a useful technique. Sometimes we need to perform tasks such as light mapping, shadow mapping, detail mapping, etc., and the ability to use multiple images at one time does come in handy.

For the Multitexture demo we load a second texture image and pass that along to our shader like we do the first texture. Listing 7.6 shows the MultiTextureDemo class with the added resource view for the second texture. Throughout the demo’s code we can simply copy the same code we used for the first texture and use it for the second. Within the shader’s code we simply sample both texture images and multiply their results to get the final render. Keep in mind that the second texture uses t1, while the first uses t0 in the HLSL file, which can be seen in Listing 7.7. A screenshot of the effect can be seen in Figure 7.3.

Figure 7.3. Multitexture demo.

Example 7.6. The Multitexture demo’s class with added texture.

class MultiTextureDemo : public Dx11DemoBase
{
    public:
        MultiTextureDemo( );
        virtual ~MultiTextureDemo( );

        bool LoadContent( );
        void UnloadContent( );

        void Update( float dt );
        void Render( );

    private:
        ID3DX11Effect* effect_;
        ID3D11InputLayout* inputLayout_;

        ID3D11Buffer* vertexBuffer_;
        ID3D11Buffer* indexBuffer_;

        ID3D11ShaderResourceView* colorMap_;
        ID3D11ShaderResourceView* secondMap_;
        ID3D11SamplerState* colorMapSampler_;

        XMMATRIX viewMatrix_;
        XMMATRIX projMatrix_;
};

Example 7.7. Multitexture demo’s HLSL source code.

Texture2D colorMap : register( t0 );
Texture2D secondMap : register( t1 );
SamplerState colorSampler : register( s0 );


cbuffer cbChangesEveryFrame : register( b0 )
{
    matrix worldMatrix;
};

cbuffer cbNeverChanges : register( b1 )
{
    matrix viewMatrix;
};

cbuffer cbChangeOnResize : register( b2 )
{
    matrix projMatrix;
};


struct VS_Input
{
    float4 pos   : POSITION;
    float2 tex0 : TEXCOORD0;
};

struct PS_Input
{
    float4 pos   : SV_POSITION;
    float2 tex0 : TEXCOORD0;
};


PS_Input VS_Main( VS_Input vertex )
{
    PS_Input vsOut = ( PS_Input )0;
    vsOut.pos = mul( vertex.pos, worldMatrix );
    vsOut.pos = mul( vsOut.pos, viewMatrix );
    vsOut.pos = mul( vsOut.pos, projMatrix );
    vsOut.tex0 = vertex.tex0;

    return vsOut;
}


float4 PS_Main( PS_Input frag ) : SV_TARGET
{
    float4 col = colorMap.Sample( colorSampler, frag.tex0 );
    float4 col2 = secondMap.Sample( colorSampler, frag.tex0 );

    return col * col2;
}

Geometry Shaders

Geometry shaders are a bit more complicated than the shaders you’ve worked with so far. Unlike vertex and pixel shaders, geometry shaders are able to output more or less than they take in. Vertex shaders must accept a single vertex and output a single vertex; pixel shaders work the same way. Geometry shaders, on the other hand, can be used to remove or add vertices as they pass through this portion of the pipeline. This is useful if you want to clip geometry based on some set criteria, or maybe you want to increase the resolution of the object through tessellation.

Geometry shaders exist within an effect file between the vertex and pixel shader stages. Since geometry shaders are optional, you may commonly see them set to a null value in effect techniques. When a geometry shader is necessary, though, it is set in the same way as vertex and pixel shaders.

To give you an example of what geometry shaders can do, take a look at the following code. It contains the full geometry shader function, along with the structures and constant buffer to support it. The job of this particular shader is to take as input a single point from a point list and generate a full triangle to send along to the pixel shader.

cbuffer TriangleVerts
{
    float3 triPositions[3] =
    {
        float3( -0.25, 0.25, 0 ),
        float3( 0.25, 0.25, 0 ),
        float3( -0.25, -0.25, 0 )
    };
};


struct VS_OUTPUT
{
    float4 Pos : SV_POSITION;
    float4 Color : COLOR0;
};


[maxvertexcount(3)]
void GS( point VS_OUTPUT input[1],
    inout TriangleStream<VS_OUTPUT> triangleStream )
{
    VS_OUTPUT psInput;

    for( int i = 0; i < 3; i++ )
    {
        float3 position = triPositions[i];

        position = position + input[0].Pos.xyz;

        psInput.Pos = mul( float4( position, 1.0f ), Projection );
        psInput.Color = input[0].Color;

        triangleStream.Append( psInput );
    }
}

Geometry Shader Function Declaration

Geometry shaders are declared slightly differently than vertex and pixel shaders. Instead of designating a return type for the function, the vertices the shader outputs are declared in the parameter list; the geometry shader itself has a return type of void.

Every geometry shader needs to designate the maximum number of vertices it will return, which is declared above the function using the maxvertexcount attribute. This particular function is meant to return a single triangle, so three vertices are required.

[maxvertexcount(3)]
void GS( point VS_OUTPUT input[1],
    inout TriangleStream<VS_OUTPUT> triangleStream )

Geometry shader functions take two parameters. The first parameter is an array of vertices for the incoming geometry. The type of geometry being passed into this function is based on the topology you used in your application code. Since this example uses a point list, the type of geometry coming into the function is a point, and there is only one item in the array. If the application used a triangle list, the type would be set as triangle, and three vertices would be in the array.

The second parameter is the stream object. The stream object is the list of vertices output from the geometry shader and passed to the next shader stage. This list of vertices must use the structure format that is used as the input to the pixel shader. Based on the type of geometry you’re creating within the shader, there are three stream object types available: PointStream, TriangleStream, and LineStream.

When adding vertices to a stream object, it will occasionally be necessary to end the strip being created. In that case, you should call the stream object’s RestartStrip method. This is useful when the triangles you are emitting should not all be joined into a single strip.
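
As an illustrative sketch (not from a demo), the following geometry shader emits two separate triangles from a single input point, calling RestartStrip between them so the six vertices are not joined into one strip. It reuses the VS_OUTPUT structure from the earlier example, and the offsets are arbitrary.

[maxvertexcount(6)]
void GS_TwoTriangles( point VS_OUTPUT input[1],
    inout TriangleStream<VS_OUTPUT> triangleStream )
{
    VS_OUTPUT psInput;
    psInput.Color = input[0].Color;

    // First triangle, offset to the left of the input point.
    psInput.Pos = input[0].Pos + float4( -0.5f, 0.0f, 0.0f, 0.0f );
    triangleStream.Append( psInput );
    psInput.Pos = input[0].Pos + float4( -0.3f, 0.4f, 0.0f, 0.0f );
    triangleStream.Append( psInput );
    psInput.Pos = input[0].Pos + float4( -0.1f, 0.0f, 0.0f, 0.0f );
    triangleStream.Append( psInput );

    // End the current strip so the next triangle is independent.
    triangleStream.RestartStrip( );

    // Second triangle, offset to the right of the input point.
    psInput.Pos = input[0].Pos + float4( 0.1f, 0.0f, 0.0f, 0.0f );
    triangleStream.Append( psInput );
    psInput.Pos = input[0].Pos + float4( 0.3f, 0.4f, 0.0f, 0.0f );
    triangleStream.Append( psInput );
    psInput.Pos = input[0].Pos + float4( 0.5f, 0.0f, 0.0f, 0.0f );
    triangleStream.Append( psInput );
}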

The Geometry Shader Explained

The geometry shader in the previous example generates three vertices for every point from a point list passed to it. The vertices are created by taking the initial position vertex and merging it with the vertex positions found in the triPositions variable. This variable holds a list of three vertices that are used to create a triangle at any position.

Because each triangle the shader is trying to create requires three vertices, a for-loop within the shader loops three times, generating a new vertex for each point of the triangle.

The final triangle points are then multiplied by the projection matrix to create the final positions. Each point in the triangle is added to the triangle stream after its creation.

Introduction to Lighting

In this section we’ll take a quick look at performing lighting in Direct3D. We will use code based on the Models demo from Chapter 8 to demonstrate the effect on a complex object, but we’ll defer explaining the model loading part of the demo until you reach the “Models” section in the next chapter. In this chapter, we’ll talk about lighting in general and the shaders used to achieve it. Once you finish Chapter 8 you will see how all of the demo’s code ties together.

Real-time lighting in games is evaluated these days in shaders, where lighting equations performed in vertex shaders are known as per-vertex lighting, and lighting done in the pixel shader is known as per-pixel lighting. Since there are many more pixels than vertices, and those pixels are extremely close together in distance, the lighting quality of performing these equations in a pixel shader is often much higher.

There are also a number of different algorithms for performing lighting. In this chapter we’ll examine a simple algorithm for performing the effect based on the standard lighting model used by Direct3D during the fixed-function pipeline days.

In general, the rendering equation in computer graphics is highly complex and math intensive, and it is definitely something you’ll need to research once you start to move toward advanced lighting topics, global illumination, and shadows. In this chapter we will examine the three most common parts of this equation, known as the ambient, diffuse, and specular terms. Since this is a beginner’s book, we will briefly touch on the easiest concepts to understand, to at least introduce you to the topic for later use.

The Ambient Term

The ambient term is a value used to simulate light that has bounced off of the environment and onto the surface being shaded. In its basic form this is just a color value that is added to the total lighting value such as float4( 0.3f, 0.3f, 0.3f, 1.0f). In reality this term is nothing more than a solid color that slightly brightens the scene, which is highly unrealistic—an example of which can be seen in Figure 7.4.

Figure 7.4. Ambient-only light.

In more complex lighting equations such as various global illumination algorithms and other techniques such as ambient occlusion, the value that represents bounced light is highly realistic and complex but often not possible to do in real time.

The Diffuse Term

The diffuse term is used to simulate light that has bounced off of a surface and into your eyes. This is not the same as light that has bounced off of other objects in the environment first and then off of the surface being shaded, which can be used to simulate many different phenomena, such as light bleeding in global illumination, soft shadows, etc.

The diffuse term is a light intensity that is modulated with the surface color (and/or surface texture, light color if it is anything other than pure white, etc.) to shade the surface in a way that looks appropriate for real-time games. Distance aside, light that shines directly on an object should fully light the object, and lights that are behind a surface should not affect the surface at all. Lights at an angle to a surface should partially light the surface as that angle decreases and the light becomes more parallel to the surface, which is shown in Figure 7.5.

Figure 7.5. Light angles.

To perform diffuse lighting we can use a simple equation based on the dot product calculation of two vectors. If we take the surface normal and the light vector, we can use the dot product to give us the diffuse intensity. The surface normal is simply just that, the normal of the triangle, and the light vector is a vector that is calculated by the subtraction of the light’s position and the vertex’s position (or pixel’s position when looking at per-pixel lighting).

Looking back again at Figure 7.5, if the light is directly above the point, the dot product between the light vector and the surface normal equals 1.0, since the two vectors point in the same direction. (You can verify this by working through the dot product math from Chapter 6 on paper.) If the light is behind the surface, the vectors point in opposite directions, causing the dot product to be less than 0. Every angle between directly above the surface (1.0) and parallel to it (0.0) gives a fraction of the light’s intensity. We can multiply the diffuse term by the surface color to apply the diffuse contribution to the mix. The diffuse lighting equation is as follows:

float diffuse = clamp( dot( normal, lightVec ), 0.0, 1.0 );

If we are using a light color other than pure white, we would use the following equation:

float diffuse = clamp( dot( normal, lightVec ), 0.0, 1.0 );
float4 diffuseLight = lightColor * diffuse;

If we want to apply the diffuse light color to the surface color, assuming the surface color is just a color we fetched from a color map texture, we can use the following:

float diffuse = clamp( dot( normal, lightVec ), 0.0, 1.0 );
float4 diffuseLight = lightColor * diffuse;
float4 finalColor = textureColor * diffuseLight;

We clamp the diffuse term because we want anything less than 0.0 to equal 0.0, so that when we multiply the diffuse term with the light’s color or directly with the surface color it makes the final diffuse color black, which represents no diffuse contribution. We don’t want negative numbers affecting other parts of the lighting equation in ways we don’t desire. An example of diffuse-only light can be seen in Figure 7.6.

Figure 7.6. Diffuse-only light.

Specular Term

Specular lighting is similar to diffuse lighting, with the exception that specular lighting simulates sharp reflections of light as it bounces off of an object and hits the eyes. Diffuse light is used for rough surfaces, where the more microscopic bumpiness of a surface (the rougher it is) will cause light to scatter in a pattern that generally looks even in all directions. This is why if you rotate a highly diffuse lit object or view it from another angle, the intensity should remain the same.

With specular light, the smoothness of the surface is what causes light to reflect back in its mirror direction. The smoother the surface, the sharper the light reflections that can be observed. Take, for example, a shiny piece of metal. The smoothness of this surface causes light to reflect sharply rather than scattering chaotically (as with diffuse). In computer graphics this creates the highlight you see on shiny objects. On non-shiny, rough surfaces, the specular contribution will be low or nonexistent. When we model our surfaces, we must mix the right amount of diffuse and specular light to create a believable effect. For example, a slice of bread is not as shiny as a metal ball. A mirror in real life is so smooth that light reflects in such a way that we can see a perfect mirror image on the surface. Another example can be seen with soft drink bottles: if you rotate the bottle in your hand, the shiny highlight seems to move and dance as the rotational relationship between the surface and the light source changes.

From the description of diffuse and specular light, we can say that diffuse light is not view dependent, but specular light is. That means the equation for specular light uses the camera vector in addition to the light vector. The camera vector is calculated as follows:

float4 cameraVec = cameraPosition - vertexPosition;

From the camera vector we can create what is also known as the half vector. We can then calculate the specular contribution using the following equation:

float3 halfVec = normalize( lightVec + cameraVec );

float specularTerm = pow( saturate( dot( normal, halfVec ) ), 25 );

An example of specular-only lighting can be seen in Figure 7.7.

Figure 7.7. Specular-only light.

Putting It All Together

The lighting demo can be found with the accompanying book code in the Chapter7/Lighting folder. In this demo we perform lighting within the pixel shader that uses the ambient, diffuse, and specular contributions. This demo is essentially the Models demo from Chapter 8 with the lighting effect added to it. Keep in mind that we will only look at the shaders in this chapter. In the next chapter, Chapter 8, we will cover how to load and render 3D models from a file.

The HLSL shader code for performing the lighting effect can be seen in Listing 7.8. In the vertex shader we transform the incoming position to calculate the outgoing vertex position, and we transform the normal by the 3 × 3 world matrix. We transform the normal because the final and true normal’s orientation is dependent on the object’s rotation. This transformation of the normal must be done to get the correct results.

Example 7.8. The lighting HLSL shader.

Texture2D colorMap : register( t0 );
SamplerState colorSampler : register( s0 );

cbuffer cbChangesEveryFrame : register( b0 )
{
    matrix worldMatrix;
};

cbuffer cbNeverChanges : register( b1 )
{
    matrix viewMatrix;
};

cbuffer cbChangeOnResize : register( b2 )
{
    matrix projMatrix;
};

cbuffer cbCameraData : register( b3 )
{
    float3 cameraPos;
};


struct VS_Input
{
    float4 pos   : POSITION;
    float2 tex0 : TEXCOORD0;
    float3 norm : NORMAL;
};

struct PS_Input
{
    float4 pos   : SV_POSITION;
    float2 tex0 : TEXCOORD0;
    float3 norm : NORMAL;
    float3 lightVec : TEXCOORD1;
    float3 viewVec : TEXCOORD2;
};


PS_Input VS_Main( VS_Input vertex )
{
    PS_Input vsOut = ( PS_Input )0;
    float4 worldPos = mul( vertex.pos, worldMatrix );
    vsOut.pos = mul( worldPos, viewMatrix );
    vsOut.pos = mul( vsOut.pos, projMatrix );

    vsOut.tex0 = vertex.tex0;
    vsOut.norm = mul( vertex.norm, (float3x3)worldMatrix );
    vsOut.norm = normalize( vsOut.norm );

    float3 lightPos = float3( 0.0f, 500.0f, 50.0f );
    vsOut.lightVec = normalize( lightPos - worldPos );

    vsOut.viewVec = normalize( cameraPos - worldPos );

    return vsOut;
}


float4 PS_Main( PS_Input frag ) : SV_TARGET
{
    float3 ambientColor = float3( 0.2f, 0.2f, 0.2f );
    float3 lightColor = float3( 0.7f, 0.7f, 0.7f );

    float3 lightVec = normalize( frag.lightVec );
    float3 normal = normalize( frag.norm );

    float diffuseTerm = clamp( dot( normal, lightVec ), 0.0f, 1.0f );
    float specularTerm = 0;

    if( diffuseTerm > 0.0f )
    {
        float3 viewVec = normalize( frag.viewVec );
        float3 halfVec = normalize( lightVec + viewVec );

        specularTerm = pow( saturate( dot( normal, halfVec ) ), 25 );
    }

    float3 finalColor = ambientColor + lightColor *
        diffuseTerm + lightColor * specularTerm;
    return float4( finalColor, 1.0f );
}

In the pixel shader we use a constant for the ambient term, perform N dot L (the dot product of the surface normal and the light vector) to find the diffuse contribution, and we perform N dot H (dot product of the normal and half vector) to find the specular contribution. We add all of these terms together to get the final light color. Take note that we are assuming a white diffuse and specular color, but if you want, you can adjust these colors to see how the results change the final output. As a bonus exercise, you can use constant buffer variables to specify the light color, camera position, and light position to allow you to manipulate these values with the keyboard to see how it affects the lit object in real time.

Summary

You should now be familiar with at least the basics of shader programming and what benefits it provides. The best way to continue learning shader programming is to play around with the shader code you’ve already written and see what effects you can come up with. A small change can have profound effects.

This chapter served as a brief reference to the level of shaders we’ve been writing throughout this book. There is a lot that can be done with HLSL, and much of learning to master it is to practice and experiment.

What You Have Learned

  • How to write vertex, pixel, and geometry shaders

  • How to use the High Level Shading Language

  • How to provide lighting in your scene

Chapter Questions

You can find the answers to the chapter review questions in Appendix A on this book’s companion website.

1. Effect files are loaded using which function?

2. What is HLSL?

3. What is the purpose of a geometry shader?

4. What is the purpose of domain and hull shaders?

5. What are the two modes the rasterizer can operate in?

6. Define semantics.

7. What are compute shaders, and what is the lowest version that can be used?

8. Define HLSL techniques and passes.

9. What is the fixed function pipeline? How does Direct3D 11 make use of it?

10. What is the tessellator unit used for?

On Your Own

1. Implement an effect file with a single technique and pass and render an object using it.

2. Build off of the previous “On Your Own” and take all of the shaders created in demos throughout this chapter and place them in a single effect file. Create a different technique for each effect. On the application side, allow the user to switch between the rendering techniques being applied by using the arrow keys.

3. Modify the lighting demo to allow you to move the light position using the arrow keys of the keyboard. To do this you will need to send the light’s position to the shader via a constant buffer.
