Chapter 8. Cameras and Models in Direct3D

The topic of cameras in 3D scenes is often overlooked. The camera is a very important actor in the game scene, and the cameras seen in today’s games are often fairly complex. Cameras are critical because a camera that is frustrating or awkward in any way can give the gamer a negative opinion of the entire experience.

In this chapter we will look at two topics briefly. The first is the creation of two different types of cameras, and the second will show us how to load 3D models from a file.

In this chapter you will learn:

  • How to create a look-at camera

  • How to create an arc rotation camera

  • How to load models in OBJ format

Cameras in Direct3D

In game programming we create a view matrix that represents the virtual camera. This view matrix in XNA Math can be created with a function called XMMatrixLookAtLH (the right-handed version is XMMatrixLookAtRH), which has the following prototype:

XMMATRIX XMMatrixLookAtLH(
    XMVECTOR EyePosition,
    XMVECTOR FocusPosition,
    XMVECTOR UpDirection
);

The XMMatrixLookAtLH function takes the position of the camera, the position the camera is looking at, and the direction that points up in the game world. In addition to XMMatrixLookAtLH, we could have alternatively used XMMatrixLookToLH, which has the following prototype:

XMMATRIX XMMatrixLookToLH(
    XMVECTOR EyePosition,
    XMVECTOR EyeDirection,
    XMVECTOR UpDirection
);

The difference between XMMatrixLookAtLH and XMMatrixLookToLH is that the second function specifies a direction to look toward, not a fixed point to look at. When building cameras, the idea is that we want to manipulate the properties of our camera during the game’s update, and when it comes time to draw from the camera’s perspective, we generate the three vectors that are passed to one of these matrix functions.
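
To see how closely the two functions are related, the following short snippet (a sketch for illustration only, not code from any of this chapter’s demos) builds the same view matrix both ways by deriving the look-to direction from the eye and focus positions:

XMVECTOR eye = XMVectorSet( 3.0f, 3.0f, -12.0f, 1.0f );
XMVECTOR focus = XMVectorSet( 0.0f, 0.0f, 0.0f, 1.0f );
XMVECTOR up = XMVectorSet( 0.0f, 1.0f, 0.0f, 0.0f );

// Look-at: aim at a fixed point in the world.
XMMATRIX viewA = XMMatrixLookAtLH( eye, focus, up );

// Look-to: aim along a direction. Because this direction is derived from
// the same two points, both calls produce the same view matrix.
XMVECTOR direction = XMVectorSubtract( focus, eye );
XMMATRIX viewB = XMMatrixLookToLH( eye, direction, up );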

In this chapter we will create a stationary look-at camera and a camera that rotates around a point along an arc.

Look-At Camera Demo

Stationary cameras are fairly straightforward; their purpose is to sit at a location and look in a direction. There are two main types of stationary cameras: fixed position and dynamic position. Fixed-position cameras are given a set position that does not change; this was used heavily in the original Resident Evil games. Dynamic-position cameras have their position placed in the game world at runtime. An example occurs in Halo Reach when the player drives off a cliff: the chase camera turns into a stationary camera at the moment the game determines that the player has passed a certain plane and is considered dead.

We will create a simple stationary camera, the most basic camera system possible. On the companion website you can find this demo in the Chapter8/LookAtCamera/ folder.

The look-at camera needs to have a position, a target position, and a direction that specifies which way is up. Since up can usually be defined as (0,1,0), we will create a class, called LookAtCamera, that takes just a position and a target. The LookAtCamera class can be seen in Listing 8.1.

Example 8.1. The LookAtCamera.h header file.

#include<xnamath.h>


class LookAtCamera
{
    public:
        LookAtCamera( );
        LookAtCamera( XMFLOAT3 pos, XMFLOAT3 target );

        void SetPositions( XMFLOAT3 pos, XMFLOAT3 target );
        XMMATRIX GetViewMatrix( );

    private:
        XMFLOAT3 position_;
        XMFLOAT3 target_;
        XMFLOAT3 up_;
};

The LookAtCamera class is fairly small. The first constructor initializes the vectors to have components of all zeros, with the exception of the up direction, which is set to (0,1,0). If we use this class with this first constructor, we will have a view that looks like all of our demos so far. The second constructor will set our member objects to the position and target parameters, and the SetPositions function will do the same.

When it is time to render the scene using this camera, we can call GetViewMatrix to call XMMatrixLookAtLH with our position, target, and up member vectors to create the desired view matrix. Since XMMatrixLookAtLH requires the XMVECTOR type, we use XMLoadFloat3 to efficiently turn our XMFLOAT3 into an XMVECTOR. If we wanted to use XMVECTOR as our member variables, we’d have to align the memory of the class and take special care in order to get the code to correctly compile.
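
If you do want to keep XMVECTOR members directly, one common approach is to force 16-byte alignment on the class and route its heap allocations through an aligned allocator. The following is only a sketch of that idea (the class name is made up, and none of the demos in this chapter use it); it relies on the MSVC-specific __declspec(align()) extension and the _aligned_malloc/_aligned_free functions:

#include<windows.h>
#include<malloc.h>    // _aligned_malloc, _aligned_free
#include<xnamath.h>


// Hypothetical class holding XMVECTOR members. XMVECTOR maps to an SSE
// register type and must live at a 16-byte-aligned address.
__declspec( align( 16 ) ) class AlignedCamera
{
    public:
        // Make sure heap allocations of this class are also 16-byte aligned.
        void* operator new( size_t size ) { return _aligned_malloc( size, 16 ); }
        void operator delete( void* ptr ) { _aligned_free( ptr ); }

    private:
        XMVECTOR position_;
        XMVECTOR target_;
};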

The functions for the LookAtCamera class can be seen in Listing 8.2.

Example 8.2. The functions for the LookAtCamera class.

#include<d3d11.h>
#include"LookAtCamera.h"


LookAtCamera::LookAtCamera( ) : position_( XMFLOAT3( 0.0f, 0.0f, 0.0f ) ),
    target_( XMFLOAT3( 0.0f, 0.0f, 0.0f ) ), up_( XMFLOAT3( 0.0f, 1.0f, 0.0f ) )
{

}


LookAtCamera::LookAtCamera( XMFLOAT3 pos, XMFLOAT3 target ) :
    position_( pos ), target_( target ), up_( XMFLOAT3( 0.0f, 1.0f, 0.0f ) )
{

}


void LookAtCamera::SetPositions( XMFLOAT3 pos, XMFLOAT3 target )
{
    position_ = pos;
    target_ = target;
}


XMMATRIX LookAtCamera::GetViewMatrix( )
{
    XMMATRIX viewMat = XMMatrixLookAtLH( XMLoadFloat3( &position_ ),
        XMLoadFloat3( &target_ ), XMLoadFloat3( &up_ ) );

    return viewMat;
}

The demo’s class, called CameraDemo, builds directly off of the 3D Cube demo from Chapter 6. The difference here is that we are adding our stationary camera to the demo, and before we render we will obtain the view matrix from this camera. The CameraDemo class with our new camera can be seen in Listing 8.3.

Example 8.3. The CameraDemo class.

#include"Dx11DemoBase.h"
#include"LookAtCamera.h"


class CameraDemo : public Dx11DemoBase
{
    public:
        CameraDemo( );
        virtual ~CameraDemo( );

        bool LoadContent( );
        void UnloadContent( );

        void Update( float dt );
        void Render( );

    private:
        ID3D11VertexShader* solidColorVS_;
        ID3D11PixelShader* solidColorPS_;

        ID3D11InputLayout* inputLayout_;
        ID3D11Buffer* vertexBuffer_;
        ID3D11Buffer* indexBuffer_;

        ID3D11ShaderResourceView* colorMap_;
        ID3D11SamplerState* colorMapSampler_;

        ID3D11Buffer* viewCB_;
        ID3D11Buffer* projCB_;
        ID3D11Buffer* worldCB_;
        XMMATRIX projMatrix_;
        LookAtCamera camera_;
};

We set up our camera in the LoadContent function. The camera for this demo is positioned at 3 on the X axis, 3 on the Y axis, and -12 on the Z axis. This allows the object to appear on the screen with a camera that is slightly above and to the side of it, giving us a bit of an angle on the object. The LoadContent function can be seen in Listing 8.4.

Example 8.4. Setting up our camera in the LoadContent function.

bool CameraDemo::LoadContent( )
{
    // ... Previous demo’s code ...


    XMMATRIX projection = XMMatrixPerspectiveFovLH( XM_PIDIV4,
        800.0f / 600.0f, 0.01f, 100.0f );

    projMatrix_ = XMMatrixTranspose( projection );

    camera_.SetPositions( XMFLOAT3( 3.0f, 3.0f, -12.0f ),
        XMFLOAT3( 0.0f, 0.0f, 0.0f ) );

    return true;
}

The last bit of code is the Render function, where we call the GetViewMatrix function of our stationary camera to obtain the view matrix that is passed to the view matrix’s constant buffer. This is the only code that has changed from the 3D Cube demo of Chapter 6. A screenshot of the Look-At Camera demo can be seen in Figure 8.1.

Figure 8.1. A screenshot of the Look-At Camera demo.

Example 8.5. Using our camera in the Render function.

void CameraDemo::Render( )
{
    if( d3dContext_ == 0 )
        return;

    float clearColor[4] = { 0.0f, 0.0f, 0.25f, 1.0f };
    d3dContext_->ClearRenderTargetView( backBufferTarget_, clearColor );
    d3dContext_->ClearDepthStencilView( depthStencilView_,
        D3D11_CLEAR_DEPTH, 1.0f, 0 );

    unsigned int stride = sizeof( VertexPos );
    unsigned int offset = 0;

    d3dContext_->IASetInputLayout( inputLayout_ );
    d3dContext_->IASetVertexBuffers( 0, 1, &vertexBuffer_, &stride, &offset );
    d3dContext_->IASetIndexBuffer( indexBuffer_, DXGI_FORMAT_R16_UINT, 0 );
    d3dContext_->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    d3dContext_->VSSetShader( solidColorVS_, 0, 0 );
    d3dContext_->PSSetShader( solidColorPS_, 0, 0 );
    d3dContext_->PSSetShaderResources( 0, 1, &colorMap_ );
    d3dContext_->PSSetSamplers( 0, 1, &colorMapSampler_ );

    XMMATRIX worldMat = XMMatrixIdentity( );
    worldMat = XMMatrixTranspose( worldMat );
    XMMATRIX viewMat = camera_.GetViewMatrix( );
    viewMat = XMMatrixTranspose( viewMat );

    d3dContext_->UpdateSubresource( worldCB_, 0, 0, &worldMat, 0, 0 );
    d3dContext_->UpdateSubresource( viewCB_, 0, 0, &viewMat, 0, 0 );
    d3dContext_->UpdateSubresource( projCB_, 0, 0, &projMatrix_, 0, 0 );

    d3dContext_->VSSetConstantBuffers( 0, 1, &worldCB_ );
    d3dContext_->VSSetConstantBuffers( 1, 1, &viewCB_ );
    d3dContext_->VSSetConstantBuffers( 2, 1, &projCB_ );

    d3dContext_->DrawIndexed( 36, 0, 0 );

    swapChain_->Present( 0, 0 );
}

Arc-Ball Camera Demo

The next camera we will create will be an arc-ball camera. This type of camera is good for editors or moments in a game where an object is the target and the camera needs to rotate around that target in a spherical manner. The code for this demo can be found on the companion website in the Chapter8/ArcBall Camera/ folder.

For this demo we will need a few things. The target position is the point the camera focuses on, so it is the only position that is supplied directly. The camera’s position itself rotates around the target, which means the position will be calculated in the GetViewMatrix function.

Another set of properties we can use for this type of camera is the distance from the object and restraints on the rotation around the X axis. The distance works as a zoom, allowing us to move closer to or farther from the target position. The restraints limit us to rotating along a 180-degree arc, which keeps the camera from reaching a rotation where it would be upside down.

Listing 8.6 shows the ArcCamera class. Its members include the current distance of the camera from the target position, the min and max distances used to limit how close or how far the camera can move, and the X and Y rotation values. We also have the min and max rotation values so that we can add restraints to the camera.

Example 8.6. The ArcCamera class.

#include<xnamath.h>


class ArcCamera
{
    public:
        ArcCamera( );

        void SetDistance(float distance, float minDistance, float maxDistance);
        void SetRotation( float x, float y, float minY, float maxY );
        void SetTarget( XMFLOAT3& target );

        void ApplyZoom( float zoomDelta );
        void ApplyRotation( float yawDelta, float pitchDelta );

        XMMATRIX GetViewMatrix( );

    private:
        XMFLOAT3 position_;
        XMFLOAT3 target_;

        float distance_, minDistance_, maxDistance_;
        float xRotation_, yRotation_, yMin_, yMax_;
};

The arc camera has a constructor that defaults the target to the origin, the position to the origin, the distance to two units away from the target, and a set of rotation restraints that total 180 degrees (-90 to 90). The restraints allow us to rotate all the way up to the highest peak before the camera would start to turn upside down. Since GetViewMatrix will calculate the position, the constructor is simply giving it a default value that will eventually be replaced with a real position.

The other functions are SetDistance, which sets the camera’s current distance from the target as well as the new min and max distance limits; SetRotation, which sets the current X and Y rotation along with their limits; and SetTarget, which sets the current target position. Each of these functions is straightforward and can be seen in Listing 8.7.

Example 8.7. Initializing functions of the ArcCamera.

ArcCamera::ArcCamera( ) : target_( XMFLOAT3( 0.0f, 0.0f, 0.0f ) ),
    position_( XMFLOAT3( 0.0f, 0.0f, 0.0f ) )
{
    SetDistance( 2.0f, 1.0f, 10.0f );
    SetRotation( 0.0f, 0.0f, -XM_PIDIV2, XM_PIDIV2 );
}


void ArcCamera::SetDistance( float distance, float minDistance,
    float maxDistance )
{
    distance_ = distance;
    minDistance_ = minDistance;
    maxDistance_ = maxDistance;

    if( distance_ < minDistance_ ) distance_ = minDistance_;
    if( distance_ > maxDistance_ ) distance_ = maxDistance_;
}


void ArcCamera::SetRotation( float x, float y, float minY, float maxY )
{
    xRotation_ = x;
    yRotation_ = y;
    yMin_ = minY;
    yMax_ = maxY;

    if( yRotation_ < yMin_ ) yRotation_ = yMin_;
    if( yRotation_ > yMax_ ) yRotation_ = yMax_;
}


void ArcCamera::SetTarget( XMFLOAT3& target )
{
    target_ = target;
}

Next we have our functions to apply movement. First is the ApplyZoom function, which will increase or decrease the distance amount while clamping the result to our desired min and max distances. ApplyRotation does the same thing, but since the rotation around the X axis is what makes the camera appear to move up or down, it is the axis that has the limits applied to it. Both of these functions add deltas to the values, meaning they apply a change in value rather than setting an absolute distance or rotation. This allows us to build pseudo-forces upon our camera until the final view matrix is calculated with a call to GetViewMatrix.

The GetViewMatrix function, which can be seen in Listing 8.8 along with ApplyZoom and ApplyRotation, is also fairly straightforward, thanks to XNA Math. First we create the camera’s position in local space; this is the variable called zoom in the code listing. With no rotation at all, this zoom position is our camera’s final position. To transform this local position into its real position, we apply the rotation of the camera; as we rotate the camera, we are technically rotating the camera’s position around the target. If the target has a local position of (0,0,0) and our camera has a local position of (0,0,distance), then to move the camera to its correct world location we rotate the local position by the camera’s rotation matrix and then translate (i.e., offset) the rotated position by the target position. The translation is a simple vector addition, and because anything added to (0,0,0) is itself, the target vector is simply the target position.

The last step we need to perform is to calculate the up vector. This is as simple as creating a local-space up vector of (0,1,0) and rotating it by our camera’s rotation matrix to get the true up vector. We use our calculated position, target position, and calculated up vector to pass along to XMMatrixLookAtLH to create our arc-ball controlled view matrix. The rotation matrix is created with a call to XMMatrixRotationRollPitchYaw, which takes the pitch, yaw, and roll rotation values and returns to us a rotation matrix. We supply our X and Y axis rotation values to this function, and XNA Math does the work for us.

Example 8.8. The ApplyZoom, ApplyRotation, and GetViewMatrix functions.

void ArcCamera::ApplyZoom( float zoomDelta )
{
    distance_ += zoomDelta;
    if( distance_ < minDistance_ ) distance_ = minDistance_;
    if( distance_ > maxDistance_ ) distance_ = maxDistance_;
}


void ArcCamera::ApplyRotation( float yawDelta, float pitchDelta )
{
    xRotation_ += yawDelta;
    yRotation_ += pitchDelta;

    if( xRotation_ < yMin_ ) xRotation_ = yMin_;
    if( xRotation_ > yMax_ ) xRotation_ = yMax_;
}


XMMATRIX ArcCamera::GetViewMatrix( )
{
    XMVECTOR zoom = XMVectorSet( 0.0f, 0.0f, distance_, 1.0f );
    XMMATRIX rotation = XMMatrixRotationRollPitchYaw( xRotation_,
        -yRotation_, 0.0f );

    zoom = XMVector3Transform( zoom, rotation );

    XMVECTOR pos = XMLoadFloat3( &position_ );
    XMVECTOR lookAt = XMLoadFloat3( &target_ );

    pos = lookAt + zoom;
    XMStoreFloat3( &position_, pos );

    XMVECTOR up = XMVectorSet( 0.0f, 1.0f, 0.0f, 1.0f );
    up = XMVector3Transform( up, rotation );

    XMMATRIX viewMat = XMMatrixLookAtLH( pos, lookAt, up );

    return viewMat;
}

This demo is the same as the Look-At Camera demo, with the minor exceptions that we’ve replaced our camera with an ArcCamera (see Listing 8.9) and we replaced our camera setup code to just specify the camera’s default distance, since the constructor already gives us all we really need (see Listing 8.10).

Example 8.9. The Arc Camera demo’s application class.

#include"Dx11DemoBase.h"
#include"ArcCamera.h"
#include<XInput.h>


class CameraDemo2 : public Dx11DemoBase
{
    public:
        CameraDemo2( );
        virtual ~CameraDemo2( );

        bool LoadContent( );
        void UnloadContent( );

        void Update( float dt );
        void Render( );

    private:
        ID3D11VertexShader* solidColorVS_;
        ID3D11PixelShader* solidColorPS_;

        ID3D11InputLayout* inputLayout_;
        ID3D11Buffer* vertexBuffer_;
        ID3D11Buffer* indexBuffer_;

        ID3D11ShaderResourceView* colorMap_;
        ID3D11SamplerState* colorMapSampler_;

        ID3D11Buffer* viewCB_;
        ID3D11Buffer* projCB_;
        ID3D11Buffer* worldCB_;
        XMMATRIX projMatrix_;

        ArcCamera camera_;

        XINPUT_STATE controller1State_;
        XINPUT_STATE prevController1State_;
};

Example 8.10. Changing our camera setup code in LoadContent to a single line.

camera_.SetDistance( 6.0f, 4.0f, 20.0f );

The Arc Camera demo builds off of not only the Look-At Camera demo (which is a modified version of the 3D Cube demo from Chapter 6) but also the XInput demo from Chapter 5. In this demo we are using XInput and an Xbox 360 controller to rotate our view around the target position. This all occurs in the Update function, which can be seen in Listing 8.11.

The Update function starts by obtaining the state of the device. If no device is plugged in, we cannot obtain any information from the device. Next we add code that allows us to exit the application via the Back button on the controller. This is not necessary but is a nice touch.

Next the Update function checks to see if the B face button was pressed. If so, it moves the camera a little bit away from the target. If the A button is pressed, the camera is moved closer toward the target.

The remainder of the function uses the right thumb-stick to rotate the camera. We take a fairly simple approach: if the X and Y axes of the thumb-stick have been moved a meaningful amount, we move the yaw (Y rotation) and pitch (X rotation) in a positive or negative direction. In the demo we check whether the stick has been moved by a value of at least 1,000, because anything smaller might make the stick too sensitive to the touch.

A screenshot of the Arc Camera demo can be seen in Figure 8.2.

Figure 8.2. A screenshot of the Arc Camera demo.

Example 8.11. The demo’s Update function.

void CameraDemo2::Update( float dt )
{
    unsigned long result = XInputGetState( 0, &controller1State_ );

    if( result != ERROR_SUCCESS )
    {
        return;
    }

    // Button press event.
    if( controller1State_.Gamepad.wButtons & XINPUT_GAMEPAD_BACK )
    {
        PostQuitMessage( 0 );
    }

    // Button up event.
    if( ( prevController1State_.Gamepad.wButtons & XINPUT_GAMEPAD_B ) &&
        !( controller1State_.Gamepad.wButtons & XINPUT_GAMEPAD_B ) )

    {
        camera_.ApplyZoom( -1.0f );
    }

    // Button up event.
    if( ( prevController1State_.Gamepad.wButtons & XINPUT_GAMEPAD_A ) &&
        !( controller1State_.Gamepad.wButtons & XINPUT_GAMEPAD_A ) )

    {
        camera_.ApplyZoom( 1.0f );
    }

    float yawDelta = 0.0f;
    float pitchDelta = 0.0f;

    if( controller1State_.Gamepad.sThumbRY < -1000 ) yawDelta = -0.001f;
    else if( controller1State_.Gamepad.sThumbRY > 1000 ) yawDelta = 0.001f;

    if( controller1State_.Gamepad.sThumbRX < -1000 ) pitchDelta = -0.001f;
    else if( controller1State_.Gamepad.sThumbRX > 1000 ) pitchDelta = 0.001f;

    camera_.ApplyRotation( yawDelta, pitchDelta );

    memcpy( &prevController1State_, &controller1State_, sizeof( XINPUT_STATE ) );
}

Meshes and Models

Throughout this book, the most complex model we’ve created was a cube whose geometry we specified by hand (the 3D Cube demo from Chapter 6). This cube was essentially a mesh, and it is one of the simplest closed-volume 3D objects you can create. A mesh, as you will recall, is a geometry object that has one or more polygons, materials, textures, etc. A model, on the other hand, is a collection of meshes that collectively represent a larger entity. This can be a vehicle where the wheels, body, doors, windows, etc. are all meshes of the larger vehicle model.

The problem with manually specifying geometry is that eventually objects become too complex to create by hand, and we will have to use tools to get the job done. In this section we’ll briefly look at how we can load models from files and tools to create these models.

The final demo of this chapter is the Models demo, which can be found on the companion website in the Chapter8/Models/ folder. This demo builds directly off of the Arc Camera demo from earlier in this chapter.

The OBJ File Format

Wavefront OBJ files are text files that can be opened and edited in any text editor. The format is fairly simple, and its layout is usually to list vertices first, followed by texture coordinates, vertex normal vectors, and triangle indices. On each line is a different piece of information, and the starting character dictates what the rest of the line represents. For example, lines that are comments start with a # symbol, which can be seen as follows:

# 1104 triangles in group

Vertex positions are on lines that start with a v. The three values after the v are the X, Y, and Z positions, each separated by whitespace. An example can be seen in the following:

v 0.000000 2.933333 -0.000000

Texture coordinates start with a vt and have two floating-point values that follow the keyword, and normals start with a vn. An example of each is as follows:

vt 1.000000 0.916667
vn 0.000000 -1.000000 0.000000

Triangle information is on lines that start with an f. After the f there are three groups of values. Each group holds an index into the vertex position list, an index into the texture coordinate list, and an index into the normal list; all of these indices are 1-based. Because each vertex position, texture coordinate, and normal is specified only once in the file, the face information in an OBJ file is not the same as the indices used with indexed geometry. With indexed geometry we have one index that covers all attributes of a vertex (i.e., position, texture coordinate, etc.), whereas the OBJ file has a separate index for each attribute. An example of a triangle in the OBJ file can be seen in the following, where each index is separated by a /, and each group (vertex) is separated by whitespace:

f 2/1/1 3/2/2 4/3/3

There are other keywords in an OBJ file. The keyword mtllib is used to specify the material file used by the mesh:

mtllib Sphere.mtl

The usemtl keyword is used to specify that the following mesh is to use the material specified in the file loaded by mtllib:

usemtl Material01

And the g keyword specifies the start of a new mesh:

g Sphere02
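
Putting these keywords together, a complete (but entirely made-up) OBJ file describing a single textured triangle might look like the following; the file and material names here are invented for illustration:

# A mesh made of a single triangle.
mtllib Triangle.mtl

g Triangle01
usemtl Material01

v 0.000000 0.000000 0.000000
v 1.000000 0.000000 0.000000
v 0.000000 1.000000 0.000000

vt 0.000000 0.000000
vt 1.000000 0.000000
vt 0.000000 1.000000

vn 0.000000 0.000000 1.000000

f 1/1/1 2/2/1 3/3/1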

Reading Tokens from a File

The OBJ file is simply a text file. Each piece of information we want to read is on its own line, which means we need to write code that can parse lines of text from a file and break down the lines into smaller pieces of text. These smaller pieces of text are called tokens.

To do this we’ll create a simple class that can return to us tokens from within a text file. This class is called TokenStream, and it can be found in TokenStream.h and TokenStream.cpp in the Chapter8/Models/ folder on the companion website. Listing 8.12 shows the TokenStream.h header file.

Example 8.12. The TokenStream.h header file.

#include<string>


class TokenStream
{
    public:
        TokenStream( );

        void ResetStream( );

        void SetTokenStream( char* data );

        bool GetNextToken( std::string* buffer, char* delimiters,
            int totalDelimiters );

        bool MoveToNextLine( std::string* buffer );

    private:
        int startIndex_, endIndex_;
        std::string data_;
};

The TokenStream object will just store the entire file’s data and the current read indices (start and end) that mark the current positions it is reading from. We’ll see how this is used soon. First we’ll look at the constructor, ResetStream, and SetTokenStream functions in Listing 8.13. The constructor and ResetStream simply set the read indices to 0, and SetTokenStream will set the data member variable that will store the file’s text.

Example 8.13. The constructor, ResetStream, and SetTokenStream functions.

TokenStream::TokenStream( )
{
    ResetStream( );
}


void TokenStream::ResetStream( )
{
    startIndex_ = endIndex_ = 0;
}


void TokenStream::SetTokenStream( char *data )
{
    ResetStream( );
    data_ = data;
}

Next are two helper functions from the TokenStream.cpp source file. These functions test whether a character is a valid identifier character, that is, whether it is not a delimiter. A delimiter is a character that marks the separation of pieces of text. Taking the following text as an example, we can see that each word is separated by a whitespace. This whitespace is the delimiter.

“Hello world. How are you?”

The first isValidIdentifier function simply looks to see if the character is a number, letter, or symbol. This is usually used as a default check, whereas the overloaded isValidIdentifier function checks the character against an array of desired delimiters. If you open the sphere.obj model file for this demo, you will see that the only delimiters in the file are newlines, spaces, and the / character. The isValidIdentifier functions are listed in Listing 8.14.

Example 8.14. The isValidIdentifier functions.

bool isValidIdentifier( char c )
{
    // Ascii from ! to ~.
    if( ( int )c > 32 && ( int )c < 127 )
        return true;
    return false;
}


bool isValidIdentifier( char c, char* delimiters, int totalDelimiters )
{
    if( delimiters == 0 || totalDelimiters == 0 )
        return isValidIdentifier( c );

    for( int i = 0; i < totalDelimiters; i++ )
    {
        if( c == delimiters[i] )
            return false;
    }

    return true;
}

The next function is GetNextToken. This function loops through the text until it reaches a delimiter. Once it finds one, it uses the start index (the position at which it began reading) and the end index (the position just before the delimiter) to identify a token. The token is returned to the caller through the first parameter, which is the address of the std::string that will receive it. The function also returns true or false, depending on whether it was able to find a new token, which can be used to determine when we have reached the end of the data buffer. The GetNextToken function can be seen in Listing 8.15.

Example 8.15. The GetNextToken function.

bool TokenStream::GetNextToken( std::string* buffer, char* delimiters,
    int totalDelimiters )
{
    startIndex_ = endIndex_;

    bool inString = false;
    int length = ( int )data_.length( );

    if( startIndex_ >= length - 1 )
        return false;
    while( startIndex_ < length && isValidIdentifier( data_[startIndex_],
        delimiters, totalDelimiters ) == false )
    {
        startIndex_++;
    }

    endIndex_ = startIndex_ + 1;

    if( data_[startIndex_] == '"' )
        inString = !inString;

    if( startIndex_ < length )
    {
        while( endIndex_ < length && ( isValidIdentifier(
        data_[endIndex_], delimiters, totalDelimiters ) || inString == true ) )
        {
            if( data_[endIndex_] == '"' )
                inString = !inString;

            endIndex_++;
        }

        if( buffer != NULL )
        {
            int size = ( endIndex_ - startIndex_ );
            int index = startIndex_;

            buffer->reserve( size + 1 );
            buffer->clear( );

            for( int i = 0; i < size; i++ )
            {
                buffer->push_back( data_[index++] );
            }
        }

        return true;
    }

    return false;
}

The next and last function that is part of the TokenStream class is the MoveToNextLine function, which moves the current read indices to the next line of the data. We also return the line itself via the pointer parameter; we do this because our data is one continuous array of characters, and we want the read indices to stay ready to read the next token, or the remainder of a line, from their current position. The MoveToNextLine function can be seen in Listing 8.16.

Example 8.16. The MoveToNextLine function.

bool TokenStream::MoveToNextLine( std::string* buffer )
{
    int length = ( int )data_.length( );

    if( startIndex_ < length && endIndex_ < length )
    {
        endIndex_ = startIndex_;

        while( endIndex_ < length && ( isValidIdentifier( data_[endIndex_] ) ||
            data_[endIndex_] == ' ' ) )
        {
            endIndex_++;
        }

        if( ( endIndex_ - startIndex_ ) == 0 )
            return false;

        if( endIndex_ - startIndex_ >= length )
            return false;

        if( buffer != NULL )
        {
            int size = ( endIndex_ - startIndex_ );
            int index = startIndex_;

            buffer->reserve( size + 1 );
            buffer->clear( );

            for( int i = 0; i < size; i++ )
            {
                buffer->push_back( data_[index++] );
            }
        }
    }
    else
    {
        return false;
    }

    endIndex_++;
    startIndex_ = endIndex_ + 1;

    return true;
}
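
As a quick illustration of how the class is used on its own (this fragment is only a usage sketch and is not part of the demo code), the following function breaks a single OBJ-style line into tokens using the space character as a delimiter:

#include<iostream>
#include<string>
#include"TokenStream.h"


void PrintTokens( )
{
    char text[] = "v 0.000000 2.933333 -0.000000";
    char delimiters[2] = { '\n', ' ' };

    TokenStream stream;
    stream.SetTokenStream( text );

    std::string token;

    // Prints "v", "0.000000", "2.933333", and "-0.000000", one per line.
    while( stream.GetNextToken( &token, delimiters, 2 ) )
    {
        std::cout << token << std::endl;
    }
}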

Loading Meshes from OBJ Files

The class that will actually load the OBJ file is called ObjModel. This class uses the TokenStream class to parse the data and to create the triangle list of information from it. The OBJ file has vertex positions, texture coordinates, and normal vectors, so our ObjModel class will store pointers for each, as you can see in Listing 8.17.

Example 8.17. The ObjModel class.

class ObjModel
{
   public:
      ObjModel( );
      ~ObjModel( );

      void Release( );
      bool LoadOBJ( char *fileName );

      float *GetVertices( )   { return vertices_; }
      float *GetNormals( )    { return normals_; }
      float *GetTexCoords( )  { return texCoords_; }
      int    GetTotalVerts( ) { return totalVerts_; }

   private:
      float *vertices_;
      float *normals_;
      float *texCoords_;
      int totalVerts_;
};

The LoadOBJ function (seen in Listing 8.18 and Listing 8.19) is more straightforward than it appears. The function first opens a file and determines the size of the file in bytes. It then reads this information into a temporary buffer that is then passed to a TokenStream object.

The first TokenStream object is used to read lines out of the data by calling MoveToNextLine. We’ll use a second TokenStream object to further parse each individual line for the specific information we are looking for.

When we parse a line, we look at the first token on the line to determine what information it holds. If it is v the line is a vertex position, if it is vt it is a texture coordinate, and if it is vn it is a vertex normal. We can use the whitespace delimiter to break these lines down into their components.

If we are reading a face (triangle indices) from the file, which appears after the f keyword, then we need to use another TokenStream object to break down the indices, using the whitespace and / characters as delimiters.

Example 8.18. The first half of the LoadOBJ function.

bool ObjModel::LoadOBJ( char *fileName )
{
    std::ifstream fileStream;
    int fileSize = 0;

    fileStream.open( fileName, std::ifstream::in );

    if( fileStream.is_open( ) == false )
        return false;

    fileStream.seekg( 0, std::ios::end );
    fileSize = ( int )fileStream.tellg( );
    fileStream.seekg( 0, std::ios::beg );

    if( fileSize <= 0 )
        return false;
char *buffer = new char[fileSize];

if( buffer == 0 )
    return false;

memset( buffer, '\0', fileSize );

TokenStream tokenStream, lineStream, faceStream;
std::string tempLine, token;

fileStream.read( buffer, fileSize );
tokenStream.SetTokenStream( buffer );

delete[] buffer;

tokenStream.ResetStream( );

std::vector<float> verts, norms, texC;
std::vector<int> faces;

char lineDelimiters[2] = { '\n', ' ' };

while( tokenStream.MoveToNextLine( &tempLine ) )
{
    lineStream.SetTokenStream( ( char* )tempLine.c_str( ) );
    tokenStream.GetNextToken( 0, 0, 0 );

    if( !lineStream.GetNextToken( &token, lineDelimiters, 2 ) )
        continue;

    if( strcmp( token.c_str( ), "v" ) == 0 )
    {
        lineStream.GetNextToken( &token, lineDelimiters, 2 );
        verts.push_back( ( float )atof( token.c_str( ) ) );

        lineStream.GetNextToken( &token, lineDelimiters, 2 );
        verts.push_back( ( float )atof( token.c_str( ) ) );

        lineStream.GetNextToken( &token, lineDelimiters, 2 );
        verts.push_back( ( float )atof( token.c_str( ) ) );
    }
    else if( strcmp( token.c_str( ), "vn" ) == 0 )
    {
        lineStream.GetNextToken( &token, lineDelimiters, 2 );
        norms.push_back( ( float )atof( token.c_str( ) ) );

        lineStream.GetNextToken( &token, lineDelimiters, 2 );
        norms.push_back( ( float )atof( token.c_str( ) ) );

        lineStream.GetNextToken( &token, lineDelimiters, 2 );
        norms.push_back( ( float )atof( token.c_str( ) ) );
    }
    else if( strcmp( token.c_str( ), "vt" ) == 0 )
    {
        lineStream.GetNextToken( &token, lineDelimiters, 2 );
        texC.push_back( ( float )atof( token.c_str( ) ) );

        lineStream.GetNextToken( &token, lineDelimiters, 2 );
        texC.push_back( ( float )atof( token.c_str( ) ) );
    }
    else if( strcmp( token.c_str( ), "f" ) == 0 )
    {
        char faceTokens[3] = { '\n', ' ', '/' };
        std::string faceIndex;

        faceStream.SetTokenStream( ( char* )tempLine.c_str( ) );
        faceStream.GetNextToken( 0, 0, 0 );

        for( int i = 0; i < 3; i++ )
        {
            faceStream.GetNextToken( &faceIndex, faceTokens, 3 );
            faces.push_back( ( int )atoi( faceIndex.c_str( ) ) );

            faceStream.GetNextToken( &faceIndex, faceTokens, 3 );
            faces.push_back( ( int )atoi( faceIndex.c_str( ) ) );

            faceStream.GetNextToken( &faceIndex, faceTokens, 3 );
            faces.push_back( ( int )atoi( faceIndex.c_str( ) ) );
        }
    }
    else if( strcmp( token.c_str( ), "#" ) == 0 )
    {
        int a = 0;
        int b = a;
    }

    token[0] = '\0';
}

Once we have the data, we use the face information to generate a triangle list array of geometry. We cannot use the information in an OBJ file directly because the indices are defined per attribute, not per vertex. Once we generate the information in a manner Direct3D will be happy with, we return true after releasing all of our temporary data. The second half of the LoadOBJ function can be seen in Listing 8.19.

Example 8.19. The second half of the LoadOBJ function.

{
    // "Unroll" the loaded obj information into a list of triangles.

    int vIndex = 0, nIndex = 0, tIndex = 0;
    int numFaces = ( int )faces.size( ) / 9;

    totalVerts_ = numFaces * 3;

    vertices_ = new float[totalVerts_ * 3];

    if( ( int )norms.size( ) != 0 )
    {
        normals_ = new float[totalVerts_ * 3];
    }

    if( ( int )texC.size( ) != 0 )
    {
        texCoords_ = new float[totalVerts_ * 2];
    }

    for( int f = 0; f < ( int )faces.size( ); f+=3 )
    {
        vertices_[vIndex + 0] = verts[( faces[f + 0] - 1 ) * 3 + 0];
        vertices_[vIndex + 1] = verts[( faces[f + 0] - 1 ) * 3 + 1];
        vertices_[vIndex + 2] = verts[( faces[f + 0] - 1 ) * 3 + 2];
        vIndex += 3;

        if(texCoords_)
        {
            texCoords_[tIndex + 0] = texC[( faces[f + 1] - 1 ) * 2 + 0];
            texCoords_[tIndex + 1] = texC[( faces[f + 1] - 1 ) * 2 + 1];
            tIndex += 2;
        }

        if(normals_)
        {
            normals_[nIndex + 0] = norms[( faces[f + 2] - 1 ) * 3 + 0];
            normals_[nIndex + 1] = norms[( faces[f + 2] - 1 ) * 3 + 1];
            normals_[nIndex + 2] = norms[( faces[f + 2] - 1 ) * 3 + 2];
            nIndex += 3;
        }
    }

    verts.clear( );
    norms.clear( );
    texC.clear( );
    faces.clear( );

    return true;
}
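
To make the indexing concrete, consider the face line shown earlier, f 2/1/1 3/2/2 4/3/3. It contributes nine values to the faces vector: 2, 1, 1, 3, 2, 2, 4, 3, 3. For the first vertex of that face (f equal to 0 in the loop), faces[f + 0] - 1 = 1 selects the second position in verts (remember that OBJ indices are 1-based), faces[f + 1] - 1 = 0 selects the first texture coordinate, and faces[f + 2] - 1 = 0 selects the first normal. The loop then advances f by three to process the face’s second group, 3/2/2, and so on.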

The last code to look at lies within LoadContent. When we load our OBJ model, we create a new ObjModel object, call its LoadOBJ function, and use the pointers to the loaded attributes to fill out the vertex structure array that will be passed to the vertex buffer. Once this information is in our vertex buffer, it is rendered as a normal triangle list, and our model should appear on the screen. You can try many different models of varying complexity with this code besides the sphere model that comes with the demo. The code specific to loading the vertex buffer can be seen in Listing 8.20. A screenshot of the demo can be seen in Figure 8.3.

Figure 8.3. A screenshot of the Models demo.

Example 8.20. The code in LoadContent specific to loading the vertex buffer.

// Load the models from the file.
ObjModel objModel;
if( objModel.LoadOBJ( "sphere.obj" ) == false )
{
    DXTRACE_MSG( "Error loading 3D model!" );
    return false;
}

totalVerts_ = objModel.GetTotalVerts( );

VertexPos* vertices = new VertexPos[totalVerts_];
float* vertsPtr = objModel.GetVertices( );
float* texCPtr = objModel.GetTexCoords( );

for( int i = 0; i < totalVerts_; i++ )
{
    vertices[i].pos = XMFLOAT3( *(vertsPtr + 0), *(vertsPtr + 1), *(vertsPtr + 2) );
    vertsPtr += 3;
    vertices[i].tex0 = XMFLOAT2( *(texCPtr + 0), *(texCPtr + 1) );
    texCPtr += 2;
}

D3D11_BUFFER_DESC vertexDesc;
ZeroMemory( &vertexDesc, sizeof( vertexDesc ) );
vertexDesc.Usage = D3D11_USAGE_DEFAULT;
vertexDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
vertexDesc.ByteWidth = sizeof( VertexPos ) * totalVerts_;

D3D11_SUBRESOURCE_DATA resourceData;
ZeroMemory( &resourceData, sizeof( resourceData ) );
resourceData.pSysMem = vertices;

d3dResult = d3dDevice_->CreateBuffer( &vertexDesc, &resourceData,
    &vertexBuffer_ );

if( FAILED( d3dResult ) )
{
    DXTRACE_MSG( "Failed to create vertex buffer!" );
    return false;
}

delete[] vertices;
objModel.Release( );

Advanced Topics

This chapter has just begun to scratch the surface of what is possible in 3D video game scenes. Although some of these topics can quickly become quite advanced, some of them you can begin to experiment with sooner rather than later. Just looking at the camera code we wrote earlier in this chapter should give you an indication that, with a little more work, you can create a camera system that supports a wide range of views for many different types of games.

In this section we will take a moment to discuss some topics that, even as a beginner, you can begin to explore once you are done with this book. With the right art assets, you can even create some impressive-looking demos or games using these general ideas as a foundation.

Complex Cameras

We touched upon two types of cameras in this chapter. The first camera was a simple stationary camera that had properties that directly fed the creation of the look-at view matrix. This type of camera has its purposes, but it alone would not have been enough to adequately discuss cameras in Direct3D.

The second type of camera was a little more useful when examining our 3D objects. The arc camera allowed us to freely rotate along the X axis, as well as impose limited rotations along the Y axis. Many 3D editors use cameras similar to this, where the target position becomes the driving force of the view, and the position is determined dynamically based upon rotation around that target.

There are many more cameras we could create in our 3D games. Following is a limited list of some of the more common 3D camera systems:

  • First-person

  • Free (ghost)

  • Chase (third-person)

  • Flight

  • Scripted

  • AI

  • Framing

First-person cameras are used extensively in first-person shooter games. With a first-person camera, the player is given the perspective of seeing through the avatar’s eyes. In first-person shooters (FPS), the player’s weapon(s) and parts of the avatar’s body can be visible, along with any interface elements such as crosshairs to help with aiming. Epic Games’ UDK (Figure 8.4) is a prime example of the first-person camera.

Figure 8.4. First-person view in the UDK sample demo.

A free camera, also known as a ghost camera, is a camera that is able to move freely around the environment along all axes. This type of camera is often found in the spectator modes of popular FPS games such as the sample demo from the UDK (Figure 8.5), in replay modes such as Bungie’s Halo Reach Theater, and so on. Although a free camera might or might not physically interact with the game world via collisions, it often has free flight throughout the scene with very few restrictions on movement.

Figure 8.5. Free camera in Epic’s UDK sample.

A chase camera is a camera that chases an object inside the scene. This is commonly used for third-person games (see Figure 8.6), flight games, and so on, where the player’s camera is usually stationed behind the avatar. These cameras often have damping effects so that the camera gradually catches up to the avatar’s rotation instead of moving as if it were attached to a rigid pole.

Figure 8.6. Chase camera in UDK’s sample demo.
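
Although none of this chapter’s demos implement a chase camera, the damping effect just described is simple enough to sketch using the same XNA Math functions we have been working with. The helper below is purely illustrative (the function name and its parameters are invented); it nudges the camera a fraction of the way toward its desired position each frame:

#include<xnamath.h>


// Move current toward desired by a fraction determined by dampingSpeed and
// the frame time dt. Larger dampingSpeed values catch up more quickly.
XMFLOAT3 DampedFollow( const XMFLOAT3& current, const XMFLOAT3& desired,
    float dampingSpeed, float dt )
{
    float t = dampingSpeed * dt;
    if( t > 1.0f ) t = 1.0f;

    XMVECTOR newPos = XMVectorLerp( XMLoadFloat3( &current ),
        XMLoadFloat3( &desired ), t );

    XMFLOAT3 result;
    XMStoreFloat3( &result, newPos );

    return result;
}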

In flight games we often have many different types of cameras working together. There is the cockpit view that acts as a first-person camera, there are stationary cameras during replays, there can be free cameras for guided missiles and rockets, and there are chase cameras for a flight view behind the avatar (airplane, jet, etc.). In a modified version of the Chase Camera sample demo for XNA Game Studio Express (available from http://create.msdn.com), shown in Figure 8.7, the chase camera can also turn into an arc camera: the player uses the right thumb-stick to arc around the aircraft while the left stick continues to control the aircraft, causing the chase camera to “chase” after it.

Figure 8.7. Going from chase to arc.

Scripted and artificial intelligence guided cameras are controlled by sources other than the player. For scripted cameras we can script the camera’s movements and play it back in real time. Scripting a camera, along with scripting the movements and animations of game objects and models, makes it possible to create real-time, in-game cinematic scenes, also known as cut-scenes.

Many games also use multiple types of cameras during gameplay. For example, some games might switch between first- and third-person perspectives based on the current context the player is in, games with split-screen views can have multiple cameras for each player rendering to different areas of the screen, and some games give you the option to decide which perspective you wish to use (e.g., XNA’s Ship Game Starter Kit in Figure 8.8).

Figure 8.8. Switching camera perspective based on personal preference in XNA’s Ship Game Starter Kit.

3D Level Files

Loading 3D geometry from a file is the first step to loading entire environments. There are many aspects of an environment, many of which include the following:

  • Skies

  • Water

  • Terrain (land)

  • Buildings

  • Vehicles

  • Characters

  • Weapons

  • Power-ups

  • Trigger volumes (i.e., areas that trigger an event, like a cut-scene)

  • Environment props (e.g., rocks, trees, grass, brush, etc.)

  • Objective game props (e.g., flags for Capture the Flag, hill locations, etc.)

  • And much more

There is a wide range of different objects you can have in a virtual world, some of which are not visible. In today’s games, levels are too large to specify by hand, and an editor of some form is often used. We generally refer to these as map or level editors.

Creating a level editor is no easy task and can often be very game specific. The file format that represents the game level is also highly game specific. As a quick example, let’s look at a simple sample file that stores nothing but positions, rotations, and scaling information for 3D models. Take a look at the following:

Level Level1
{
    PlayerStart 0,0,-100 0,0,0 1,1,1
    WeaponStart Pistol


    Weapons
    {
       Pistol -421,66,932 0,90,0 1,1,1
       Sniper 25,532,235 0,0,0 1,1,1
       RocketLauncher 512,54,336 0,0,0 1,1,1
       ...
    }


    Enemies
    {
        ...
    }


    Scripts
    {
        ...
    }


    Triggers
    {
        ...
    }
}

In the example above, imagine that PlayerStart specifies the player’s starting position, rotation, and scale in the game world when the level starts, and WeaponStart specifies which weapon the player has in his possession at the start. Let’s imagine that all of the weapons and their starting positions in the game world are defined within the Weapons block, the enemies within the Enemies block, and the scripts and triggers in their own blocks. If scripts load upon the level’s start, then triggers could potentially be invisible areas within the game world that trigger an action whenever the player enters one (such as opening a door when the player is near it, playing a cut-scene, etc.).
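
To make the idea a little more concrete, here is a rough sketch of the kind of structures a loader for a file like this might fill in. The structure and field names are invented purely for illustration; a real level format would need far more than this:

#include<string>
#include<vector>
#include<xnamath.h>


// One placed object from the level file: a name followed by position,
// rotation, and scale triples.
struct LevelObject
{
    std::string name;    // e.g., "Pistol" or "Sniper"
    XMFLOAT3 position;
    XMFLOAT3 rotation;
    XMFLOAT3 scale;
};


// The level as a whole, mirroring the blocks in the sample file above.
struct Level
{
    LevelObject playerStart;
    std::string startingWeapon;
    std::vector<LevelObject> weapons;
    std::vector<LevelObject> enemies;
};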

Even with this extremely basic and imaginary level file, you can already see that it captures only a fraction of the detail found in many common games, and even the objects it does list are barely specified. Although this is not a real format of any kind, the number of properties and the amount of information you can have for a single object or group of objects can become quite involved. Being able to create objects and edit them in an application can help make creating levels much easier and much more time efficient. Today many game companies utilize art tools such as 3D Studio Max or Maya and write custom exporters that create not just individual objects but the level/world data files that are loaded by the game.

Summary

The goal of this chapter was to introduce two simple 3D cameras that you can use to help you learn how to create additional types of cameras later on. This chapter also showed how to load models from the OBJ file format, which many 3D modeling applications support. The OBJ file format is a simple format to start off with because it is a simple-to-read text file with straightforward syntax.

What You Have Learned

  • How to create a look-at (stationary) camera

  • How to create an arc-ball camera

  • How to load meshes from an OBJ file

Chapter Questions

You can find the answers to chapter review questions in Appendix A on this book’s companion website.

1. What is a stationary camera?

2. What two types of stationary cameras did we discuss?

3. What is an arc-ball camera?

4. True or False: The OBJ file is a binary file for 3D models.

5. Describe what a token stream is.
