Operating System–Based Optimizations

This section focuses on optimizing the interaction between parts of your program that execute simultaneously or share resources. Multitasking and callback functions are the main tools for this. There are different ways and program states in which to use them, as well as some traps and pitfalls to watch out for. This section provides explanations and practical examples.

Multitasking

When used correctly, multitasking can greatly enhance program performance. However, it should not be used indiscriminately, because it brings overhead and extra concerns during design and implementation. This section explains what multitasking is, how to handle the problems it can introduce, and what kind of hidden overhead to watch out for.

What Is Multitasking?

Basically, single processor systems can perform only one task at a time. A processor takes an instruction from memory and executes it. After completion, the processor can take the next instruction from memory and execute it. A collection of instructions that is executed in this sequential way is called a task. You could compare a processor performing a task to a person reading instructions from a manual. Generally, a person will read instructions from a manual one at a time and follow them carefully. Only when an instruction could be followed successfully is the next instruction dealt with. However, if you had five people working for you, you could give them five manuals and have them execute five different tasks for you simultaneously. This is true also for multiprocessor systems; each processor can perform a task independently from the other processors.

No doubt you know already that it is possible for single processor systems to have multitasking operating systems. In fact, any operating system that allows you to run more than one program at a time is in effect a multitasking OS. Think of starting up a calculator program while you are already using a word processor and perhaps a paint program. So how is this possible? Let's return to the example of the person reading a manual; let's call him Bob. What if you gave Bob five manuals and a stopwatch, and told him to switch manuals every ten minutes? This way he would seem to perform five different tasks. When you look at a single task, however, it advances for only ten minutes, after which it halts for another forty minutes. In fact, it probably halts for more than forty minutes, because Bob needs time to put down one manual and pick up another (maybe he even needs some time to find his place in the manual he just picked up, or to determine the order in which he deals with the manuals). This is also what happens when a multitasking OS with a single processor runs more than one task; the processor still performs only one task at a time, but it switches between tasks at certain moments. This behavior is called task switching. Task switching brings some overhead with it, which means that the total amount of time spent on pure task execution decreases as the number of task switches increases. In the sections that follow you will see what kind of overhead is incurred.

Tasks can take different shapes. The next three sections discuss the shapes called Process, Thread, and Fiber.

What Is a Process?

Although not all literature and all systems define the word process in exactly the same way, there are some general things that can be said about processes. Most often the word process is used to indicate a program, or at least program characteristics. When a processor switches from one process to the next, it needs a lot of information to be able to continue with the next process at exactly the place, and in the context, where it left off last time. This is the only way process execution can continue as if it were never interrupted by a task switch. A process is therefore defined by the information that is needed during a task switch. Think of:

  • The values of the CPU registers

    These contain the context created by executed processor instructions.

  • A stack

    This contains the context created by function calls, variable definitions, and so on (refer to Chapter 8, "Functions," for more details on registers in function calls).

  • Address space

    This memory is sometimes called the apartment or working set. It is that part of system memory (the range of memory addresses) that is available to the process.

  • Meta data

    Think of security privileges, current working directory, process name, process priority, and so on.

This is a lot of information, and consequently process switching is the most expensive kind of task switch there is, as far as overhead is concerned. The payoff for this overhead is the robustness of the OS. By using virtual memory management (refer to Chapter 9, "Efficient Memory Management," for more details on memory management) to give each process its own private piece of memory, it is possible to create the illusion that each process has the system all to itself. This means that when a process goes into some kind of faulty state, it can mess up its own memory and resources but not those of another process. This holds true as long as the virtual memory management system does not get corrupted and no resources are locked by the misbehaving process. The two sections that follow show some lighter ways to switch between different tasks.

Listing 13.10 shows how new processes can be created under Windows. In the section Task Switching you will see what kinds of strategies an OS can use to determine when and how to switch between tasks.

Code Listing 13.10. Creating a New Process Under Windows
#include <windows.h>
#include <string.h>

void main(void)
{
    STARTUPINFO st;
    PROCESS_INFORMATION pr;

    // Zero the STARTUPINFO structure and set its size field.
    memset(&st, 0, sizeof(st));
    st.cb = sizeof(st);

    CreateProcess(NULL, "c:\\windows\\calc.exe",
                  NULL, NULL, TRUE, 0, NULL, NULL, &st, &pr);
}
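The PROCESS_INFORMATION structure filled in by CreateProcess contains handles to the new process and to its main thread. A parent that needs to know when the child has finished can, for instance, block on the process handle; a minimal sketch (error checking omitted):

// After a successful CreateProcess(..., &st, &pr):
WaitForSingleObject(pr.hProcess, INFINITE);  // block until the child exits

CloseHandle(pr.hProcess);                    // release the handles when
CloseHandle(pr.hThread);                     //  they are no longer needed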

Note that starting a process under Windows is actually nothing more than telling the OS to start up a specific executable. Certain parameters can be set for this executable, such as working directory, command line arguments, and so on. Consult your compiler documentation for more details on process execution. Listing 13.11 shows how a new process can be created under UNIX.

Code Listing 13.11. Creating a New Process Under UNIX
#include <stdio.h>
#include <unistd.h>

void a(void)
{
    for(;;)
        printf("a");
}

void b(void)
{
    for(;;)
        printf("b");
}

int main(void)
{
    // fork() returns 0 in the child process and the process
    //  ID of the child in the parent process.
    if (fork() == 0)
        a();    // executed by the child
    b();        // executed by the parent
    return 0;
}

Note that under UNIX it is possible to start a new process with a function from the original program. The fork call creates a new address space (a copy of the parent's) in which this function is then executed.
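To get the UNIX equivalent of Listing 13.10, starting a specific executable rather than a function, the child typically replaces its own image with an exec call. A minimal sketch (the executable path is illustrative):

#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    if (fork() == 0)
    {
        // Child: replace this process image with another program.
        execl("/bin/ls", "ls", "-l", (char*)0);
    }

    // Parent: wait for the child to terminate.
    wait(0);
    return 0;
}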

The next section discusses a less overhead-intense way of creating a new task.

What Is a Thread?

Most often the word thread is used to indicate a path of execution within a process. As you have already seen, different processes can be run simultaneously by switching between them. The same trick can be performed within a process; that is, different parts of a process can be run simultaneously by switching between tasks within the process. These kinds of tasks are called threads. As a thread lives within a certain process, it needs less defining information:

  • The values of the CPU registers

  • A stack

  • Thread priority (Often specified in relation to the priority of the process in which the thread lives.)

All threads of a process inherit the remaining defining information from the process:

  • Address space

  • Meta data

There are three conclusions to be drawn from the preceding two bulleted lists:

  1. Task switching between two threads in the same process does not introduce much overhead.

  2. All threads in the same process share the same address space and can therefore use the same global data (variables, file handles, and so on). They can of course also mess up each other's memory. For more information see the section titled "Problems with Multitasking."

  3. Task switching between threads of different processes introduces the same overhead as any other task switch between processes.

Note that each active process has at least one thread of execution. This thread is called the main thread.

Listing 13.12 shows how threads can be used under Windows.

Code Listing 13.12. Creating Multiple Threads in Windows
#include <windows.h>
#include <process.h>
#include <string.h>
#include <iostream.h>

struct StartInput
{
        char    name[8];
        int     number;
        int     length;
} ;

void StartThread(void* startInput)
{
        StartInput *in = (StartInput*) startInput;

        int k = in->number;
        int j = in->length + k;

        for(; k < j; k++)
        {
            cout << k << in->name << endl;
        }
        _endthread();
}


void main(void)
{
        StartInput startInputA, startInputB;

        strcpy(startInputA.name,"ThreadA");
        strcpy(startInputB.name,"ThreadB");
        startInputA.number = 0;
        startInputB.number = 5;
        startInputA.length = 10;
        startInputB.length = 15;
        _beginthread(StartThread, 0, (void*) &startInputA);
        _beginthread(StartThread, 0, (void*) &startInputB);

        Sleep(6000);
}

Listing 13.12 shows one way of creating two new threads from within a Windows process. By passing a pointer to a function in the call to _beginthread, a new thread is created and scheduled for task switching by the OS. Its execution starts with the first instruction of the function that was passed, in this case the function StartThread. As the format used for passing arguments to a new thread is fixed (a single void*), a casting trick must be performed in order to pass more than 4 bytes of information. In Listing 13.12 a pointer to a structure is passed. Because the receiving function StartThread knows exactly what kind of structure to expect, it can retrieve all the structure fields without a problem.

There are three more interesting points to note about Listing 13.12. First, the two created threads receive pointers to two different structures, startInputA and startInputB. In a sequential program you would probably have passed startInputA first to one function, changed some of its fields, and then passed it to another function. However, because the two threads run simultaneously and share the same address space, changing any field in startInputA would change the values used in both threads! Second, the same function StartThread is used as the starting address for both threads. The reason this is possible is that threads have their own private stack, which means each thread has a private copy of the local variables k and j. Third, the main thread is put to sleep after starting the two new threads. The reason for this is that all threads of a process terminate when the process itself (the main thread) terminates. The Sleep function is used as a trick to keep the main thread alive long enough for the two new threads to do their jobs. Strictly speaking, the _endthread call in the function StartThread is not really necessary, as threads end automatically when they run out of instructions. Sleep is an ideal way to suspend a thread, because a sleeping thread is not scheduled until its nap time is over. Using Sleep is therefore a far better construction than spinning in a long-running loop. Not only does Sleep guarantee more precisely when the thread will continue executing, but a busy loop is also very processor intensive, which means it slows down all other tasks on the system.

Do not forget to set the compiler code generation settings to multithreaded when writing multithreaded programs under Windows. In the file 13Source02.cpp on the Web site, you can find what the same program would look like under UNIX. Compile it with:

g++ 13Source02.cpp -o example -lpthread
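If you do not have the file at hand, the following minimal pthread sketch shows the same idea; it illustrates the technique and is not necessarily the exact contents of 13Source02.cpp:

#include <pthread.h>
#include <stdio.h>

struct StartInput
{
    char    name[8];
    int     number;
    int     length;
};

// pthread thread functions take and return void*.
void* StartThread(void *startInput)
{
    StartInput *in = (StartInput*) startInput;

    for (int k = in->number; k < in->number + in->length; k++)
        printf("%d%s\n", k, in->name);

    return 0;
}

int main(void)
{
    StartInput  a = { "ThreadA", 0, 10 };
    StartInput  b = { "ThreadB", 5, 15 };
    pthread_t   ta, tb;

    pthread_create(&ta, 0, StartThread, &a);
    pthread_create(&tb, 0, StartThread, &b);

    // Wait for both threads instead of sleeping.
    pthread_join(ta, 0);
    pthread_join(tb, 0);
    return 0;
}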

It is also possible to perform your own task scheduling within a thread. The next section tells more about this.

What Is a Fiber?

It is, of course, always possible to implement your own paths of execution through a process and take care of task switching within your own code. The tasks created this way are often called fibers, because they are in effect a subdivision of a thread. Exactly which information is accessible within a fiber, and which information is needed upon a task switch, depends on how you implement a fiber; however, it will be difficult to shield the rest of the process from damage the fiber might inadvertently do. The fiber will almost always inherit the address space of the process (and therefore also that of any other threads within the process). Some OSes offer supporting calls to help you create your own fibers and switch between them. Check your OS and compiler documentation for more information.
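Windows, for instance, offers such calls. The sketch below shows their basic use. Note that fibers are never preempted; every task switch is an explicit call in your own code:

#include <windows.h>

LPVOID mainFiber;

void CALLBACK FiberProc(void *param)
{
    // Do part of the work, then explicitly hand control
    //  back to the main fiber; nothing preempts a fiber.
    SwitchToFiber(mainFiber);
}

void main(void)
{
    // The creating thread must itself become a fiber first.
    mainFiber = ConvertThreadToFiber(NULL);

    LPVOID worker = CreateFiber(0, FiberProc, NULL);

    // Hand the processor to the worker fiber; execution
    //  resumes here when the worker switches back.
    SwitchToFiber(worker);

    DeleteFiber(worker);
}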

Task Switching

Task switching is sometimes called process scheduling or thread scheduling. A multitasking OS must at the very least have a thread that takes care of scheduling the different tasks that need to run simultaneously. A task that is scheduled is said to have received a time slice, which means it can run for a certain period of time. This section presents different strategies that task schedulers can use for determining which task receives a time slice and how long this time slice will be.

Scheduling Processes and Threads

Tasks are usually scheduled according to their priority, which is an attribute that can be set for each task. However, the OS still has to decide what exactly it treats as a task. For instance, the scheduler could see the system as running a collection of threads, and schedule these based solely on their priority. In this case no consideration is given to the fact that switching between threads in the same process costs less overhead than switching between threads of different processes. Another strategy would be to try to schedule a certain percentage or number of threads within the same process before scheduling a thread in another process.

Consult your OS and compiler documentation in order to determine how your system defines scheduling of tasks.

Cooperative Multitasking

An OS that uses cooperative multitasking assigns an equal amount of time to each task. The importance, or priority, of the task is not taken into consideration. Tasks that do not need a full time slice can decide to be cooperative and give up the remainder of their slice so another task can be scheduled earlier. A problem here can be that tasks with a very low priority take up a disproportionate amount of time.

Preemptive Multitasking

An OS that uses preemptive multitasking assigns time slices according to task importance or priority. This means that a task with a high priority will be scheduled more often (and perhaps even with a larger time slice) than a task with a low priority. A danger here is that setting the priority of a task too high can mean that other tasks have little or no opportunity to do any processing. This includes a task that you might try to start in order to lower the priority of the task that is causing the problems.

Tasks can also be preempted when they block themselves for synchronization or enter an idle state, such as is done with the Sleep function.
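Under Windows, for example, a thread can set its own priority relative to the priority class of its process; setting it low is a simple way to create a background task that runs only when nothing more important is ready. A minimal sketch:

#include <windows.h>

void main(void)
{
    // Lower the priority of the current thread so the scheduler
    //  runs it only when more important threads are idle.
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_BELOW_NORMAL);

    // ... do background work here ...

    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_NORMAL);
}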

Real-Time OS

For certain critical tasks (such as processes running in an embedded environment), a minimal response time must be guaranteed by the OS. This means that scheduling strategies must keep the incurred task-switching costs to a minimum and allow worst-case scenarios to be predicted. These kinds of requirements specify a real-time OS.

Problems with Multitasking

As you have seen in the section "What Is a Thread?" the address space of a process can be accessed by all the threads of that process. And, because threads can run simultaneously, they can also access the same memory addresses simultaneously; this is why in Listing 13.12 each starting thread is given its own data structure. This section highlights the kinds of problems that can occur because of this characteristic of multitasking, and how to prevent them from happening inside your code.

Memory Corruption

Because all the threads in a process have the same address space, a bug in one thread can corrupt the data and instructions of another thread. This makes debugging extra difficult.

Protecting Shared Resources

Every once in a while different threads have to access the same resources (memory, files, ports, libraries, and so on). This does not necessarily cause a problem. Think, for instance, of two threads used for Internet access (two file download threads, perhaps) which share a single data structure with initialization data (IP address, port number, modem settings, and so on). When both threads only read from this data there is no need for extra protection. However, if one of the threads changes the data (writes to it), then something special must be done. The reason for this is that the programmer of a thread can never anticipate when a task switch will occur, and therefore cannot guarantee that it will not occur in the middle of the write action. A thread may want to change the IP address in a shared structure from 127.1.1.0 to 212.33.33.00. If the OS switches from this writing task to a task that reads from the structure in the middle of the write action, the reading thread will read a corrupt IP address: 212.33.1.0. This is why programmers need a way to lock a certain resource for a specific thread. There are different kinds of locks available on most OSes, but they all work according to the same basic principles:

  • A lock can be claimed and released.

  • The claim and release functions of locks are atomic. This means a task switch cannot occur during the execution of these functions.

  • When thread A tries to claim a lock that is already claimed by thread B, thread A is put on hold until thread B releases the lock. When thread B releases the lock, thread A becomes the owner of the lock and continues its execution. From that moment on, other threads will be blocked when trying to claim the lock. These other threads in turn stay blocked until thread A releases the lock.

  • Most locks are associated in some way with a list of threads. This means if several threads try to claim the same lock, the first will own it and the remaining threads have to wait in line for their turn.

  • The programmer makes the association between a lock and a resource.

The last bulleted item in the preceding list is a very important one. It means that it is perfectly possible to program a thread to use a resource without claiming the associated lock first. This often causes problems when different programmers work on the same multithreaded code; a programmer making changes to an existing thread may not be aware that certain resources are associated with locks, and may add code that uses those resources without locking them first.

Listing 13.13 shows a Windows program with two worker threads. One thread writes to a structure at given times and the other thread reads from it. The structure is protected by a lock that is imaginatively called Lock. The functions EnterCriticalSection and LeaveCriticalSection are used, respectively, to claim and release this lock. The definition of the lock and of the claim and release calls is of course OS specific. The set of instructions placed between the claim and release of a lock is called a critical section, which is where these OS calls get their names.

Code Listing 13.13. Protecting a Shared Resource in a Windows Program
#include <windows.h>
#include <process.h>
#include <iostream.h>


struct
{
    int     number;
}  SharedData;


CRITICAL_SECTION Lock;

void ReadThread(void* dummy)
{
    int prevnr  = -1;
    int nr      = 0;

    // Get initial number and print it.
    EnterCriticalSection(&Lock);
        nr = SharedData.number;
    LeaveCriticalSection(&Lock);

    prevnr = nr;

    cout << nr << endl;

    for(;;)
    {
        EnterCriticalSection(&Lock);
            nr = SharedData.number;
        LeaveCriticalSection(&Lock);

        // Print only changed numbers.
        if (nr != prevnr)
            cout << nr << endl;

        Sleep(100);
    }
}


void WriteThread(void* dummy)
{
    int nr      = 100;

    for(;;)
    {
        EnterCriticalSection(&Lock);
            SharedData.number = nr++;
        LeaveCriticalSection(&Lock);

        Sleep(5);
    }
}


void main(void)
{
    InitializeCriticalSection(&Lock);

    SharedData.number = 0;

    _beginthread(WriteThread, 0, NULL);
    _beginthread(ReadThread,  0, NULL);

    Sleep(20000);

    // Clean up the lock when it is no longer needed.
    DeleteCriticalSection(&Lock);
}

In Listing 13.13 you can clearly see that the association between the lock and the resource is one introduced by the programmer. By simply removing a pair of EnterCriticalSection and LeaveCriticalSection calls you can program the threads to access the SharedData structure without protection. Note also that if you want the program in Listing 13.13 to produce tidy screen output, you have to make sure that no task switch can occur during the execution of the cout statement. The way to do this is to create a second lock and place EnterCriticalSection and LeaveCriticalSection calls around each cout statement. For proof that ReadThread and WriteThread actually wait for each other, you can add another call to the function Sleep() within one of the critical sections. You will then notice that this causes both threads to halt: the first because it is sleeping and the second because it is waiting for the sleeping thread to release the lock. This is also a useful technique when testing for possible deadlocks; functions that are normally so fast that a deadlock almost never occurs can be artificially slowed down.
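A sketch of that suggestion: a second lock dedicated to screen output, claimed around every cout statement (initialization with InitializeCriticalSection is assumed to happen in main):

CRITICAL_SECTION OutputLock;

    // Inside ReadThread, printing becomes a critical
    //  section of its own:
    EnterCriticalSection(&OutputLock);
        cout << nr << endl;
    LeaveCriticalSection(&OutputLock);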

In Listing 13.13 only one thread could access the shared resource at a time. Sometimes, however, optimizations in locking strategy can be made by looking carefully at when a shared resource really needs to be locked. Consider the following example. A process contains three threads that use the same data structure. One thread only reads from this structure (let's call this the read thread); the two other threads need to read from and write to this structure (let's call these read/write threads). Instead of each thread locking the resource whenever it accesses it, you can make a distinction between read access and write access. There is no reason why the read thread cannot access the data structure while one of the read/write threads is also only reading it. However, as soon as a read/write thread wants to write to the data structure, locking must be done to make sure the read thread is not reading the data at the same time. Listing 13.14 shows, in pseudocode, how this can be implemented.

Code Listing 13.14. Optimized Locking Strategy: Simultaneous Reading from Different Threads
ReadThread()
{
    EnterCriticalSection(&readlock);

        // No other ReadThread can access the data
        //  during this critical section. One other
        //  ReadWriteThread can access the data but
        //  only for reading.

        // Do as much reading here as you want.

    LeaveCriticalSection(&readlock);

    // Now another ReadThread or ReadWriteThread can lock the
    //  data, do not perform any reading or writing here.
}


ReadWriteThread()
{
    EnterCriticalSection(&writelock);

        // No other ReadWriteThread will access the data
        //  for reading or writing during this critical section.

        // Do as much reading here as you want because only
        //  this thread and one ReadThread can access the data.

        EnterCriticalSection(&readlock);

            // No other thread  (Read nor ReadWrite) will access
            //  the data during this critical section.

            // Here you can change the data (write to the data)

        LeaveCriticalSection(&readlock);

        // Back to previous critical section; readlock has been
        //  released so only perform read actions here.

    LeaveCriticalSection(&writelock);

    // Now another ReadThread and ReadWriteThread can lock the data.
    //  do not perform any reading or writing here.
}

Several threads can use the ReadThread and ReadWriteThread functions. A thread that wants to read as well as write the data structure has to claim the writelock just before accessing the data for reading. This way it is sure that no other read/write thread is changing the data; however, it allows a reading thread to continue unbothered. When a read/write thread wants to change the data, it has to claim both the writelock and the readlock (in that order!) to ensure that no other thread has access to the data.

These kinds of optimizations can improve program performance by eliminating as much of the idle waiting time incurred by locks as possible. However, the use of the locks can become quite complex and needs to be documented well, especially in light of future changes that may be made to the program. Functions that contain sufficient locks, so they can be called by different threads without causing problems with shared resources, are called re-entrant functions.

As was stated before, there are many different kinds of locks. Some use counters with which the programmer can specify how many threads can claim a lock simultaneously. The characteristics of the different kinds of locks that are available to you on a certain OS-plus-compiler combination should be taken into account when devising an optimized locking strategy. Consult your OS and compiler documentation for more information.
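Under Windows, for instance, such a counting lock can be built with a semaphore. The following minimal sketch allows at most three threads to use a resource simultaneously (the function name is illustrative):

#include <windows.h>

// A semaphore with an initial and maximum count of 3 lets
//  three threads pass before a fourth one blocks.
HANDLE countingLock = CreateSemaphore(NULL, 3, 3, NULL);

void UseSharedResource(void)
{
    WaitForSingleObject(countingLock, INFINITE);    // claim one slot

    // At most three threads execute here simultaneously.

    ReleaseSemaphore(countingLock, 1, NULL);        // release the slot
}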

Deadlocks

Using locks enables programmers to create threads that do not corrupt shared resources. However, locks by their nature halt (block) threads at certain times, and a blocked thread cannot release the locks it has already claimed. When a locking strategy is not carefully designed from the start, this can cause a situation in which two or more threads wait for each other to release a lock. This is called a deadlock. The pseudocode in Listing 13.15 demonstrates a potential deadlock.

Code Listing 13.15. Potential Deadlock Situation
CRITICAL_SECTION resourceA, resourceB;

void ThreadA()
{
    EnterCriticalSection(&resourceA);

        // Do something with resourceA

        EnterCriticalSection(&resourceB);

            // Do something with resourceB (and A)

        LeaveCriticalSection(&resourceB);

    LeaveCriticalSection(&resourceA);
}


void ThreadB(void* dummy)
{
    EnterCriticalSection(&resourceB);

        // Do something with resourceB

        EnterCriticalSection(&resourceA);

            // Do something with resourceA (and B)

        LeaveCriticalSection(&resourceA);

    LeaveCriticalSection(&resourceB);
}

A deadlock occurs in Listing 13.15 when the following happens:

  1. ThreadA claims resourceA

  2. A task switch occurs which makes ThreadB active

  3. ThreadB claims resourceB

No matter what happens after this, ThreadA and ThreadB will ultimately become deadlocked. This is because ThreadB will try to claim resourceA before it releases resourceB, and ThreadA will not even consider releasing resourceA before it is able to claim resourceB. ThreadA will block on the call EnterCriticalSection(&resourceB) and ThreadB will block on the call EnterCriticalSection(&resourceA).

Listing 13.15 also demonstrates why deadlocks are often so hard to find. Potential deadlocks may never actually occur or they may occur only sporadically. The fewer instructions placed between the claiming of resources (steps 1 and 3 in the preceding list), the less likely it becomes that a task switch will occur exactly at that point. This is also why debugging deadlocks is so difficult; as you have seen in Chapter 4, "Tools and Languages," the timing of a debug executable is very different from that of a release executable.
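A standard remedy for this kind of deadlock is to have all threads claim the locks in the same, agreed-upon order. If ThreadB is rewritten to claim resourceA before resourceB, just as ThreadA does, the circular wait can no longer occur:

void ThreadB(void* dummy)
{
    // Claim the locks in the same order as ThreadA does.
    //  Whichever thread claims resourceA first can always
    //  also claim resourceB, so no circular wait can occur.
    EnterCriticalSection(&resourceA);

        EnterCriticalSection(&resourceB);

            // Do something with resourceB (and A)

        LeaveCriticalSection(&resourceB);

    LeaveCriticalSection(&resourceA);
}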

Another good way to introduce a deadlock is to claim a lock and never release it. This can happen when functions become more complicated and/or claims and releases are harder to match.

void functionQ()
{
    EnterCriticalSection(&lock);
    if (conditionA)
        if (conditionB)
            return;             // Oops: the lock is never released!
    LeaveCriticalSection(&lock);
}

When both condition A and B are true, functionQ is terminated (return). However, the lock is never released. When a lock is left open like this, another thread trying to claim the lock will block. However, the thread that holds the lock will, in all likelihood, not block when it calls functionQ the next time, because on most OSes a thread cannot block on a lock that it has already claimed.

The correct way to write an exit like this would be:

void functionQ2()
{
    EnterCriticalSection(&lock);
    if (conditionA)
        if (conditionB)
        {
            LeaveCriticalSection(&lock);   // Release before the early exit.
            return;
        }
    LeaveCriticalSection(&lock);
}
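In C++ you can also make such early exits safe by construction, by wrapping the claim and release in a small helper object; the destructor then releases the lock on every path out of the function. This is a minimal sketch of the idiom (the class name is illustrative):

class ScopedLock
{
    CRITICAL_SECTION *cs;
public:
    ScopedLock(CRITICAL_SECTION *c) : cs(c)  { EnterCriticalSection(cs); }
    ~ScopedLock()                            { LeaveCriticalSection(cs); }
};

void functionQ3()
{
    ScopedLock guard(&lock);    // lock claimed here

    if (conditionA)
        if (conditionB)
            return;             // lock released automatically here

}                               // and here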

Preventing Multitasking Problems

Here are some programming practices that can help you minimize multitasking problems.

  • Place claims and releases of locks as tightly around the usage of shared data as possible. In practice, critical sections should contain only instructions that directly or indirectly make use of the shared resource (see Listing 13.13).

  • Check all exit points of functions that claim locks.

  • Style your code in such a way that it is easy to see which claims go with which releases. See Listing 13.13, where extra indentation is used for the instructions of the critical sections.

  • Write down clearly in the design, and in comments in the code, which locks are associated with which resources and how the locks should be used (counting locks, separate read and write locks, and so on; see Listing 13.14).

When to Use Multitasking

As you have seen in the sections "What Is a Process?" and "What Is a Thread?" using multitasking brings with it a certain amount of overhead. This means that for most purely sequential programs, using multithreading will decrease performance. This section shows when multitasking can be used to boost program performance.

Different Program States

Using different threads to represent different program states is a good way to use multitasking to boost program performance and keep the software relatively simple. Think, for instance, of a TCP/IP program that communicates over the Internet. The main thread can be used to process user input and give user feedback, a second thread can take care of sending data, and a third thread can take care of receiving data. Because the threads basically run simultaneously, the programmer does not have to incorporate complex and time-consuming polling routines throughout the code to check if the program state should be changed. Refer to Chapter 1 for more details on polling.

Working During Idle and Waiting Times

When a programmer expects that a program has to wait when trying to claim a certain resource (hard disc, external disc, network, printer, and so on), he can decide to place interaction with the resource in a separate thread so the main thread can continue working. The new thread will not take up a lot of scheduling time because it will not be scheduled while it is waiting for the resource to become available. Listing 13.16 shows a function in pseudocode called WriteBackup, which can be used in a separate thread to interact with a device for output.

Code Listing 13.16. Using Threads for Slow Device Interaction
struct BackupInfo
{
    DEVICE              dev;
    CRITICAL_SECTION    *dataLock;
    unsigned char       *data;
    int                 datalen;
} ;

void WriteBackup(void* input)
{
    BackupInfo  *info = (BackupInfo*) input;


    // Wait until device can be claimed.
    Claim(info->dev);

        // Lock data for output.
        EnterCriticalSection(info->dataLock);

            // Write data to device.

            WriteData(info->dev, info->data, info->datalen);

        // Release data.
        LeaveCriticalSection(info->dataLock);

    // Release device.
    Release(info->dev);
}
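Starting such a backup in the background is then a matter of filling a BackupInfo structure and launching the function in its own thread; in the style of the earlier Windows listings this would look something like:

BackupInfo info;    // fill in dev, dataLock, data, and datalen first

_beginthread(WriteBackup, 0, (void*) &info);

// The main thread continues working here while WriteBackup
//  blocks on the (slow) device in its own thread.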

Callback Functions

Callback functions combine well with multithreaded programming as a way of increasing overall program performance. Callback functions were introduced in Chapter 1 as a way for tasks to signal that they have completed a certain job. Listing 13.16, for instance, can easily be adapted to make use of a callback function that signals that the backup was successfully written. But there are more applications for callback functions. Listing 13.17 shows how a timer function can be started in a separate thread. This timer receives information on how often it should call a certain callback function, and how many times this callback function should be called in total.

Code Listing 13.17. Using a Timer Callback
#include <windows.h>
#include <process.h>
#include <iostream.h>

void PrintStatus(int in)
{
    cout << in << endl;
}

struct TimerData
{
    void    (*CallBack)(int in);
    int     delay, nrAlarms, data;
} ;

void Timer(void *input)
{
    TimerData *info = (TimerData*) input;

    for (int i = 0; i < info->nrAlarms; i++)
    {
        Sleep(info->delay);

        info->CallBack(info->data);
    }
}

void main(void)
{
    TimerData   timedat;

    timedat.CallBack    = PrintStatus;
    timedat.data        = 5;
    timedat.delay       = 1000;
    timedat.nrAlarms    = 20;

    _beginthread(Timer, 0, &timedat);

    // Do something useful.

    Sleep(6000);
}

Such a timer can be used to print status information periodically, or to update data (a sprite position on screen, and so on) while the main thread continues with the work at hand. Note that the timer in Listing 13.17 can be activated with different callback functions in order to do different kinds of things. Timers are sometimes used in watchdog threads. A watchdog periodically checks the status of a certain resource (a task, a device, a piece of memory, and so on). When this resource is found to be in a faulty state (printer out of paper, tasks caught in a deadlock) it can perform a predefined action, such as warning the user or rebooting the system.
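The timer from Listing 13.17 can serve as such a watchdog simply by passing it a different callback. In the following sketch, CheckPrinter and PrinterIsResponding are hypothetical; the status check itself is of course resource specific:

// Hypothetical watchdog callback for the Timer of Listing 13.17.
void CheckPrinter(int printerId)
{
    if (!PrinterIsResponding(printerId))    // hypothetical status call
        cout << "Warning: printer " << printerId << " down" << endl;
}

void main(void)
{
    TimerData   watchdog;

    watchdog.CallBack   = CheckPrinter;
    watchdog.data       = 1;        // id of the printer to watch
    watchdog.delay      = 5000;     // check every five seconds
    watchdog.nrAlarms   = 1000;

    _beginthread(Timer, 0, &watchdog);

    // Do something useful.

    Sleep(60000);
}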
