Chapter 13. Asynchronous Programming and Multithreading

Topics in This Chapter

  • Asynchronous Programming: Unlike synchronous programming, in which a task can begin only when the preceding one completes, asynchronous programming permits multiple tasks to be performed simultaneously.

  • Multithreading: Multiple threads can enhance the performance of an application that can separate its tasks into operations that can run on separate threads. This section describes how a program can implement multithreading, the factors that affect thread scheduling, and when it's useful to create multiple threads.

  • Thread Synchronization: The use of multiple threads in an application raises several synchronization issues regarding how to create thread-safe code. Several .NET manual synchronization techniques are presented, including the Monitor class, the Mutex class, and the use of semaphores.

An application or component can be designed to operate in a synchronous or asynchronous manner. In the synchronous model, tasks are performed in sequence—as in a relay race, one runner (task) must complete his segment before the next one can start. In contrast, asynchronous programming permits an application to be broken into subtasks that perform concurrently. This approach (sometimes referred to as send and forget) allows one method to call another method and then continue processing without waiting for the called method to finish.

The key to asynchronous programming is the use of threads. A thread is essentially a code sequence that runs independently. This permits a program to work on multiple tasks in a parallel manner. For example, an application may use one thread to accept user input to a form, while a second thread concurrently processes a print request. When used judiciously, threads can greatly improve a program's performance and responsiveness; when used incorrectly, they can cause programs to hang or terminate without properly completing a task.

A thread is created and run by the operating system—not by .NET. What .NET does is create a wrapper around a thread so that it obeys the rules of the .NET managed environment. An asynchronous application may work indirectly or directly with threads. In the former case, delegates are used to automatically allocate and handle threads; in the latter case, a program explicitly creates instances of the Thread class and takes responsibility for synchronizing thread behavior.

The chapter begins with an overview of threads and then looks at asynchronous programming using both delegates and explicit thread creation. The final section examines the synchronization issues that arise when multiple threads are running, and introduces several synchronization techniques that can be used to enable threads to share resources.

What Is a Thread?

When an assembly (.exe file) begins execution, a primary thread is created that serves as the entry point to the application—in C#, this is an application's Main() method. The thread is the unit or agent responsible for executing code.

.NET does not physically create threads—that is the responsibility of the operating system. Instead, it provides a Thread class that serves as a managed version of the unmanaged physical thread. The Thread class, located in the System.Threading namespace, exposes properties and methods that allow a program to perform thread-related operations. These class members allow an application to create a thread, set its priority, suspend, activate or kill it, and have it run in the background or foreground.

Figure 13-1 is a simplified representation of the relationship between a process, applications, and threads. Physically, a thread consists of CPU registers, a call stack (memory used for maintaining parameter data and method calls), and a container known as Thread Local Storage (TLS) that holds the state information for a thread.

Figure 13-1. Threads contained in a process

Multithreading

In a single CPU system, only one thread can execute at a time. The order in which threads run is based on their priority. When a thread reaches the top of the priority queue, its code stream is executed for a fixed amount of time known as a time slice. If the thread does not complete execution, its state information must be stored so that the thread can later resume execution at the point where it was interrupted. The state information includes registers, stack pointers, and a program counter that tells the thread which instruction is to be executed next. All of this information is stored in the area of memory allocated to Thread Local Storage.

Core Note

.NET provides support for multiple processor systems by permitting a process to be assigned to a processor. This is set using the ProcessorAffinity property of the System.Diagnostics.Process class.
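As a minimal sketch of the note above, the affinity is expressed as a bitmask of allowed processors (this is a Windows-oriented API; other platforms may throw at runtime):

```csharp
using System;
using System.Diagnostics;

class AffinityDemo
{
    static void Main()
    {
        Process p = Process.GetCurrentProcess();
        // Bitmask of allowed CPUs: 0x1 restricts the process to processor 0.
        p.ProcessorAffinity = (IntPtr)0x1;
        Console.WriteLine(p.ProcessorAffinity);   // 1
    }
}
```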

Thread Priority

As mentioned, the order in which a thread runs is based strictly on its priority. If a thread is running and a thread with a higher priority becomes available to run, the running thread is preempted to allow the higher priority thread to run. If more than one thread has the same priority, the operating system executes them in a round-robin fashion.

In .NET, a thread's Priority property is used to get or set its priority level. It may have one of five values based on the ThreadPriority enum: Lowest, BelowNormal, Normal, AboveNormal, and Highest. The default is ThreadPriority.Normal.
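A short sketch of reading and setting the property (the thread and method names are illustrative only):

```csharp
using System;
using System.Threading;

class PriorityDemo
{
    static void DoWork() { /* noncritical background work */ }

    static void Main()
    {
        Thread worker = new Thread(DoWork);
        // Lower the priority from the default ThreadPriority.Normal
        worker.Priority = ThreadPriority.BelowNormal;
        worker.Start();
        worker.Join();
        Console.WriteLine(worker.Priority);   // BelowNormal
    }
}
```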

You should override thread priorities only in situations where a task has a clearly defined need to execute with a low or high priority. Using thread priorities to fine-tune an algorithm can be self-defeating for several reasons:

  • Even threads with the highest priority are subject to blocking by other threads.

  • Raising the priority of a thread can place it into competition with the operating system's threads, which can affect overall system performance.

  • An operating system keeps track of when a thread runs. If a thread has not run for a while, its priority is increased to enable it to be executed.

Foreground and Background Threads

.NET classifies each thread as either a background or foreground thread. The difference between the two types is quite simple: An application ends when all foreground threads stop; any background threads still running are stopped as part of the shutdown process.

By default, a new thread is set to run as a foreground thread. It can be changed to background by setting its IsBackground property to true. Clearly, you only want to set this for noncritical tasks that can logically and safely end when the program does. Note that even though .NET attempts to notify all background threads when the program shuts down, it's good practice to explicitly manage thread termination.
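The effect can be sketched in a few lines: because the worker below is marked as a background thread, the process exits when Main ends instead of waiting for it (thread name and lambda body are illustrative):

```csharp
using System;
using System.Threading;

class BackgroundDemo
{
    static void Main()
    {
        // A thread that would otherwise never finish
        Thread logger = new Thread(() => Thread.Sleep(Timeout.Infinite));
        logger.IsBackground = true;   // noncritical task; dies with the process
        logger.Start();
        Console.WriteLine("Main exiting.");
        // The process terminates here. Had IsBackground been left false,
        // the application would hang waiting on the foreground thread.
    }
}
```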

Thread State

During its lifetime, a thread may exist in several states: It begins life in an Unstarted state; after it is started and the CPU begins executing it, it is in Running mode; when its slice of execution time ends, the operating system may suspend it; or if it has completed running, it moves into Stopped mode. Running, Stopped, and Suspended are somewhat deterministic states that occur naturally as the operating system manages thread execution. Another state, known as WaitSleepJoin, occurs when a thread must wait for resources or for another thread to complete its execution. After this blocking ends, the thread is then eligible to move into Running mode.

Figure 13-2 illustrates the states that a thread may assume and the methods that invoke these states. It is not a complete state diagram, because it does not depict the events that can lead to a thread being placed in an inconsistent state. For example, you cannot start a running thread, nor can you abort a suspended thread. Such attempts cause an exception (a ThreadStateException) to be thrown.

Figure 13-2. Thread states
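A minimal sketch of such an invalid transition, restarting a thread that has already stopped:

```csharp
using System;
using System.Threading;

class InvalidTransition
{
    static void Main()
    {
        Thread t = new Thread(() => { });
        t.Start();
        t.Join();            // wait until the thread has stopped
        try
        {
            t.Start();       // restarting a stopped thread is illegal
        }
        catch (ThreadStateException ex)
        {
            Console.WriteLine("Caught: " + ex.GetType().Name);
        }
    }
}
```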

A thread's current state is available through a read-only property named ThreadState. This property's value is based on the ThreadState enum that defines 10 states:

Running          = 0
StopRequested    = 1
SuspendRequested = 2
Background       = 4
Unstarted        = 8
Stopped          = 16
WaitSleepJoin    = 32
Suspended        = 64
AbortRequested   = 128
Aborted          = 256

If a program is not interested in a specific state, but does need to know if a thread has been terminated, the Boolean Thread.IsAlive property should be used.
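A brief sketch of both properties in action (the 50-millisecond sleep is illustrative only):

```csharp
using System;
using System.Threading;

class StateDemo
{
    static void Main()
    {
        Thread t = new Thread(() => Thread.Sleep(50));
        Console.WriteLine(t.ThreadState);   // Unstarted
        t.Start();
        t.Join();                           // block until the thread finishes
        Console.WriteLine(t.ThreadState);   // Stopped
        Console.WriteLine(t.IsAlive);       // False
    }
}
```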

Asynchronous Programming

In a synchronous (single-threaded) application, program execution follows a single path; in an asynchronous (multithreaded) version, operations occur in parallel on multiple paths of execution. The advantage of the latter approach is that slow operations, such as file I/O, can be performed on a separate thread while the main thread continues execution.

Figure 13-3 provides an abstract representation of the two techniques. In the synchronous version, each method is executed in sequence; in the asynchronous version, method B runs at the same time as A and C. This prospect of two or more tasks running (nearly) simultaneously raises a set of questions not present in a single-threaded program:

  • What type of communication between the main thread and worker thread is required? The code on the worker thread can be invoked and forgotten, or it may be necessary for the main thread to know when the task is completed.

  • How does the main thread know when the worker thread is completed? Two approaches are available: the callback technique, in which the worker thread returns control to the main thread when it is finished; or a polling approach, in which the main thread calls a method that returns the results of the worker thread execution.

  • How to synchronize thread requests for the same resources? The issues here are similar to those faced when synchronizing access to a database. The integrity of the data must be maintained and deadlock situations must be avoided.

  • How to shut down an application while worker threads are still executing? Several choices are available: They can be terminated; the main application can continue to run until all threads finish; or the main application can end and allow the threads to continue running.

Figure 13-3. Synchronous versus asynchronous execution

Before tackling these issues, let's first look at the basics of how to write code that provides asynchronous code execution. As we see in the next section, threads can be explicitly created and used for parallel code execution. An easier approach is to use a delegate to allocate a worker thread and call a method to execute on the thread—a process referred to as asynchronous delegate invocation. Delegates can also be used to specify the callback method that a worker thread calls when it finishes execution.

Although a discussion of creating threads is deferred until later in this chapter, it's worth noting now that the threads allocated for asynchronous methods come from a pre-allocated thread pool. This eliminates the overhead of dynamically creating threads and also means they can be reused. At the same time, indiscriminate use of asynchronous calls can exhaust the thread pool—causing operations to wait until new threads are available. We'll discuss remedies for this in the section on threads.
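The pool's capacity can be inspected through static methods on the ThreadPool class; this minimal sketch simply reports how many pooled threads are currently available versus the maximum:

```csharp
using System;
using System.Threading;

class PoolInfo
{
    static void Main()
    {
        int workers, completionPortThreads;
        ThreadPool.GetAvailableThreads(out workers, out completionPortThreads);
        Console.WriteLine("Available worker threads: " + workers);

        int maxWorkers, maxIo;
        ThreadPool.GetMaxThreads(out maxWorkers, out maxIo);
        Console.WriteLine("Maximum worker threads:   " + maxWorkers);
    }
}
```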

Asynchronous Delegates

Delegates—which were introduced in Chapter 4, “Working with Objects in C#”—provide a way to notify one or more subscribing methods when an event occurs. In the earlier examples, all calls were synchronous (to methods on the same thread). But delegates can also be used to make an asynchronous call that invokes a method on a separate worker thread. Before looking at the details of this, let's review what a delegate is and how it's used.

The following code segment illustrates the basic steps involved in declaring a delegate and using it to invoke a subscribing method. The key points to note are that the callback method(s) must have the same signature as the delegate's declaration, and that multiple methods can be placed on the delegate's invocation chain (list of methods to call). In this example, the delegate is defined to accept a string parameter and return no value. ShowUpper and ShowMessage have the same signature.

//(1) Declare delegate. Declare anywhere a class can be declared.
public delegate void myDelegate(string msg);
private void TestDelegate()
{
   // (2) Create instance of delegate and pass method to it
   myDelegate msgDelegate= new myDelegate(ShowMessage);
   //     Second method is placed on delegate invocation chain
   msgDelegate+= new myDelegate(ShowUpper);
   // (3) Invoke delegate
   msgDelegate("Delegate Called.");
}
// First method called by delegate
private void ShowMessage(string msg)
{
   MessageBox.Show(msg);
}
// Second method called by delegate
private void ShowUpper(string msg)
{
   msg = msg.ToUpper();   // Make uppercase before displaying
   MessageBox.Show(msg);
}

Understanding the Delegate Class

When a delegate is defined, .NET automatically creates a class to represent the delegate. Here is the code generated for the delegate in the preceding example:

// Class created from delegate declaration
public class myDelegate : MulticastDelegate
{
   // Constructor
   public myDelegate(Object target, IntPtr methodPtr);
   public virtual void Invoke(string msg);
   // Used for asynchronous invocation
   public virtual IAsyncResult BeginInvoke(
           string msg, AsyncCallback callback,
           Object state);
   // Used to get results from called method
   public virtual void EndInvoke(IAsyncResult result);
   // Other members are not shown
}

A close look at the code reveals how delegates support both synchronous and asynchronous calls.

Constructor

Takes two parameters. The important thing to note here is that when your program creates an instance of the delegate, it passes a method name to the constructor—not two parameters. The compiler takes care of the details of generating the parameters from the method name.

Invoke

The compiler generates a call to this method by default when a delegate is invoked. This causes all methods in the invocation list to be called synchronously. Execution on the caller's thread is blocked until all of the methods in the list have executed.

BeginInvoke

This is the method that enables a delegate to support asynchronous calls. Invoking it causes the delegate to call its registered method on a separate worker thread. BeginInvoke has two required parameters: the first is an AsyncCallback delegate that specifies the method to be called when the asynchronous method has completed its work; the second is a state object whose value is made available to the callback method when the asynchronous method finishes executing. Both of these values are set to null if no callback is required. Any parameters defined in the delegate's signature precede these required parameters.

Let's look at the simplest form of BeginInvoke first, where no callback delegate is provided. Here is the code to invoke the delegate defined in the preceding example asynchronously:

IAsyncResult IAsync =
      msgDelegate.BeginInvoke("Delegate Called.", null, null);

There is one small problem, however—this delegate has two methods registered with it, and a delegate invoked asynchronously can have only one. The call compiles, but throws an ArgumentException at runtime. The solution is to register only ShowMessage or ShowUpper with the delegate.
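A hedged console sketch of this restriction, using Console-based subscribers in place of the MessageBox versions shown earlier:

```csharp
using System;

public delegate void myDelegate(string msg);

class MulticastAsyncDemo
{
    static void ShowMessage(string msg) { Console.WriteLine(msg); }
    static void ShowUpper(string msg)   { Console.WriteLine(msg.ToUpper()); }

    static void Main()
    {
        myDelegate d = new myDelegate(ShowMessage);
        d += ShowUpper;
        Console.WriteLine(d.GetInvocationList().Length);   // 2
        // d.BeginInvoke("hi", null, null);  // ArgumentException: more than one target

        d -= ShowUpper;    // back to a single target; BeginInvoke is now legal
        Console.WriteLine(d.GetInvocationList().Length);   // 1
    }
}
```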

Note that BeginInvoke returns an object that implements the IAsyncResult interface. As we see later, this object has two important purposes: It is used to retrieve the output generated by the asynchronous method; and its IsCompleted property can be used to monitor the status of the asynchronous operation.

You can also pass an AsyncCallback delegate as a parameter to BeginInvoke that specifies a callback method the asynchronous method invokes when its execution ends. This enables the calling thread to continue its tasks without continually polling the worker thread to determine if it has finished. In this code segment, myCallBack is called when ShowMessage finishes.

private delegate void myDelegate(string msg);
myDelegate d= new myDelegate(ShowMessage);
d.BeginInvoke("OK",new AsyncCallback(myCallBack),null);

It is important to be aware that myCallBack is run on a thread from the thread pool rather than the application's main thread. As we will see, this affects the design of UI (user interface) applications.

EndInvoke

Is called to retrieve the results returned by the asynchronous method. The method is called by passing it an object that implements the IAsyncResult interface—the same object returned when BeginInvoke is called. These two statements illustrate this approach:

// Save the interface returned
IAsyncResult IAsync = GetStatus.BeginInvoke(null,null);
// ... Do some work here; then get returned value
int status = GetStatus.EndInvoke(IAsync);

EndInvoke should be called even if the asynchronous method returns no value. It can be used to detect exceptions that may be thrown by the asynchronous method; and more importantly, it notifies the Common Language Runtime (CLR) to clean up resources that were used in creating the asynchronous call.

Examples of Implementing Asynchronous Calls

The challenge in using BeginInvoke is to determine when the called asynchronous method finishes executing. As touched on earlier, the .NET Framework offers several options:

  • EndInvoke. After BeginInvoke is called, the main thread can continue working and then call this method. The call to EndInvoke blocks processing on the main thread until the asynchronous worker thread completes its execution. This should never be used on a thread that services a user interface because it will lock up the interface.

  • Use a WaitHandle Synchronization object. The IAsyncResult object returned by BeginInvoke has a WaitHandle property that contains a synchronization object. The calling thread can use this object (or objects) to wait until one or more asynchronous tasks complete execution.

  • CallBack Method. As mentioned earlier, one of the parameters to BeginInvoke can be a delegate that specifies a method to be called when the asynchronous method finishes. Because the callback method is run on a new thread from the thread pool, this technique is useful only when the original calling thread does not need to process the results of the asynchronous method.

  • Polling. The IAsyncResult object has an IsCompleted property that is set to true when the method called by BeginInvoke finishes executing. Polling is achieved by periodically checking this value.

Figure 13-4 illustrates the four options.

Figure 13-4. Options for detecting the completion of an asynchronous task

Using Polling and Synchronization Objects

Table 13-1 lists the IAsyncResult properties that are instrumental in implementing the various asynchronous models. The IAsyncResult interface itself is defined in the System namespace; the AsyncResult class that implements it (and adds the AsyncDelegate property) resides in the System.Runtime.Remoting.Messaging namespace.

Table 13-1. Selected IAsyncResult Properties

Property

Description

AsyncState

The object that is passed as the last parameter to the BeginInvoke method.

AsyncWaitHandle

Returns a WaitHandle type object that is used to wait for access to resources. Access is indicated by a “signal” that the asynchronous task has completed. Its methods allow for various synchronization schemes based on one or multiple active threads:

  • WaitOne: Blocks the calling thread until the WaitHandle receives a signal.

  • WaitAny: Waits until any handle in an array receives a signal (static).

  • WaitAll: Waits until all handles in an array receive a signal (static).

AsyncDelegate

Returns the delegate used for the asynchronous call.

IsCompleted

Boolean value that returns the status of the asynchronous call.

The WaitHandle and IsCompleted properties are often used together to implement polling logic that checks whether a method has finished running. Listing 13-1 illustrates this cooperation. A polling loop is set up that runs until IsCompleted is true. Inside the loop, some work is performed and the WaitHandle.WaitOne method is called to detect if the asynchronous method is done. WaitOne blocks processing until it receives a signal or its specified wait time (20 milliseconds in this example) expires.

Example 13-1. Asynchronous Invocation Using Polling to Check Status

// Code to return a Body Mass Index Value
private delegate decimal bmiDelegate(decimal ht, decimal wt);
decimal ht_in = 72;
decimal wt_lbs=168;
// (1) Invoke delegate asynchronously
bmiDelegate bd= new bmiDelegate(CalcBMI);
IAsyncResult asRes= bd.BeginInvoke(ht_in, wt_lbs,null,null);
int numPolls=0;
while(!asRes.IsCompleted)
{
   //     Do some work here
   // (2) Wait 20 milliseconds for method to signal completion
   asRes.AsyncWaitHandle.WaitOne(20,false);
   numPolls+=1;
}
// (3) Get result now that asynchronous method has finished
decimal myBMI = bd.EndInvoke(asRes);
Console.WriteLine("Polls: {0}  BMI: {1:##.00}",
      numPolls, myBMI);       // --> Polls: 3  BMI: 22.78
// Calculate BMI
private decimal CalcBMI(decimal ht, decimal wt)
{
   Thread.Sleep(200);         // Simulate a delay of 200 ms
   Console.WriteLine("Thread:{0}",
         Thread.CurrentThread.GetHashCode());
   return((wt * 703 *10/(ht*ht))/10);
}

For demonstration purposes, this example includes a 200-millisecond delay in the asynchronous method CalcBMI. This causes WaitOne, which blocks for up to 20 milliseconds, to execute several times before the loop ends. Because EndInvoke is not reached until the asynchronous calculation has ended, it causes no blocking.

A more interesting use of the WaitHandle methods is to manage multiple asynchronous tasks running concurrently. In this example, the static WaitAll method is used to ensure that three asynchronous tasks have completed before the results are retrieved. The method is executed by passing it an array that contains the wait handle created by each call to BeginInvoke. As a side note, this point where threads must rendezvous before execution can proceed is referred to as a barrier.

int istart= Environment.TickCount;  // Start Time
bmiDelegate bd1     = new bmiDelegate(CalcBMI);
IAsyncResult asRes1 = bd1.BeginInvoke(72, 168,null,null);
//
bmiDelegate bd2     = new bmiDelegate(CalcBMI);
IAsyncResult asRes2 = bd2.BeginInvoke(62, 124,null,null);
//
bmiDelegate bd3     = new bmiDelegate(CalcBMI);
IAsyncResult asRes3 = bd3.BeginInvoke(67, 132,null,null);
// Set up array of wait handles as required by WaitAll method
WaitHandle[] bmiHandles = {asRes1.AsyncWaitHandle,
                           asRes2.AsyncWaitHandle,
                           asRes3.AsyncWaitHandle};
// Block execution until all threads finish at this barrier point
WaitHandle.WaitAll(bmiHandles);
int iend = Environment.TickCount;
// Print time required to execute all asynchronous tasks
Console.WriteLine("Elapsed Time: {0}", iend - istart);
// Get results
decimal myBMI1 = bd1.EndInvoke(asRes1);
decimal myBMI2 = bd2.EndInvoke(asRes2);
decimal myBMI3 = bd3.EndInvoke(asRes3);

To test performance, the method containing this code was executed multiple times during a single session. The results showed that execution time was more than 700 milliseconds for the first execution and declined to 203 for the fourth and subsequent ones when three different threads were allocated.

Execution:   1     2     3     4     5
Thread:     75    75    80    75    75
Thread:     75    80    12    80    80
Thread:     80    75    80    12    12
Time(ms):  750   578   406   203   203

For comparison, the code was then run to execute the three tasks with each BeginInvoke followed by an EndInvoke. It ran at a consistent 610 ms, which is what would be expected given the 200 ms block by each EndInvoke—and is equivalent to using synchronous code. The lesson to a developer is that asynchronous code should be used when a method will be executed frequently; otherwise the overhead to set up multithreading negates the benefits.

Core Note

Applications that need to host ActiveX controls or interact with the clipboard must apply the STAThread (single-threaded apartment) attribute to their Main() method. Unfortunately, you cannot use WaitAll() in applications that have this attribute due to conflicts between COM and the Win32 method that WaitAll wraps. Visual Studio users should be aware of this because C# under VS.NET adds the attribute by default.

Using Callbacks

Callbacks provide a way for a calling method to launch an asynchronous task and have it call a specified method when it is done. This is not only an intuitively appealing model, but is usually the most efficient asynchronous model—permitting the calling thread to focus on its own processing rather than waiting for an activity to end. As a rule, the callback approach is preferred when the program is event driven; polling and waiting are better suited for applications that operate in a more algorithmic, deterministic manner.

The next-to-last parameter passed to BeginInvoke is an optional delegate of type AsyncCallback. The method name passed to this delegate is the callback method that an asynchronous task calls when it finishes executing a method. The example in Listing 13-2 should clarify these details.

Example 13-2. Using a Callback Method with Asynchronous Calls

using System.Threading;
using System.Runtime.Remoting.Messaging;
// Delegate is defined globally for class
public delegate decimal bmiDelegate(decimal ht, decimal wt);

public class BMIExample
{
   public void BMICaller(decimal ht, decimal wt, string name)
   {
      bmiDelegate bd= new bmiDelegate(CalcBMI);
      // Pass callback method and state value
      bd.BeginInvoke(ht,wt,new AsyncCallback(OnCallBack),name);
   }
   // This method is invoked when CalcBMI ends
   private void OnCallBack(IAsyncResult asResult)
   {
      // Need AsyncResult so we can get original delegate
      AsyncResult asyncObj = (AsyncResult)asResult;
      // Get state value
      string name= (string)asyncObj.AsyncState ;
      // Get original delegate so EndInvoke can be called
      bmiDelegate bd= (bmiDelegate)asyncObj.AsyncDelegate;
      // Always include exception handling
      try {
         decimal bmi = bd.EndInvoke(asResult);
         Console.WriteLine("BMI for {0}: {1:##.00}",name,bmi);
      } catch (Exception ex)
      {
         Console.WriteLine(ex.Message);
      }
   }
   private decimal CalcBMI(decimal ht, decimal wt)
   {
      Console.WriteLine("Thread:{0}",
            Thread.CurrentThread.GetHashCode());
      return((wt * 703 *10/(ht*ht))/10);
   }
}

Things to note:

  • The BeginInvoke signature includes optional data parameters as well as a delegate containing the callback method and a state object:

    bd.BeginInvoke(ht,wt,new AsyncCallback(OnCallBack),name);
    
  • The final parameter can be information of any type that is useful to the code that receives control after the asynchronous method completes. In this example, we pass the name of the person whose BMI is calculated.

  • The callback method must have the signature defined by the AsyncCallback delegate.

    public delegate void AsyncCallback(IAsyncResult
       asyncResult);
    
  • The callback method must cast its parameter to an AsyncResult type in order to access the original delegate and call EndInvoke.

    AsyncResult asyncObj = (AsyncResult)asResult;
    // Get the original delegate
    bmiDelegate bd= (bmiDelegate)asyncObj.AsyncDelegate;
    decimal bmi = bd.EndInvoke(asResult);
    
  • The call to EndInvoke should always be inside an exception handling block. When an exception occurs on an asynchronous method, .NET catches the exception and later rethrows it when EndInvoke is called.

  • The BMICaller method is invoked from an instance of BMIExample using the following code. Note that the main thread is put to sleep so it does not end before the result is calculated.

    BMIExample bmi = new BMIExample();
    bmi.BMICaller(68,122, "Diana");
    Thread.Sleep(500);  // Give it time to complete
    

Multiple Threads and User Interface Controls

When working with Windows Forms and user interfaces in general, it is important to understand that all controls on a form belong to the same thread and should be accessed only by code running on that thread. If multiple threads are running, a control should not be accessed—even though it's technically accessible—by any code not running on the same thread as the control. This is a .NET commandment; and as is the nature of commandments, it can be broken—but with unpredictable results. Suppose our application wants to use the callback method in the preceding example to display the calculated BMI value on a label control. One's instinct might be to assign the value directly to the control:

private void OnCallBack(IAsyncResult asResult)
{
   // ... Initialization code goes here
   decimal bmi = bd.EndInvoke(asResult);
   labelthread.Text = bmi.ToString();  // Set label on UI to BMI value
}

This may work temporarily, but should be avoided. As an alternative, .NET permits a limited number of methods on the Control class to be called from other threads: Invoke, BeginInvoke, EndInvoke, and CreateGraphics. Calling a control's Invoke or BeginInvoke method causes the method specified in the delegate parameter to be executed on the UI thread of that control. The method can then work directly with the control.

To illustrate, let's replace the assignment to Label.Text with a call to a method DisplayBMI that sets the label value:

DisplayBMI(bmi);

We also add a new delegate, which is passed to Invoke, that has a parameter to hold the calculated value.

// Delegate to pass BMI value to method
private delegate void labelDelegate(decimal bmi);

private void DisplayBMI(decimal bmi)
{
   // Determines if the current thread is the same thread
   // the Form was created on.
   if(this.InvokeRequired == false)
   {
      labelthread.Text= bmi.ToString("##.00");
   }
   else
   {
      // The Form's Invoke method is executed, which
      // causes DisplayBMI to run on the UI thread.
      // bmiObj is array of arguments to pass to method.
      object[] bmiObj= {bmi};
      this.Invoke(new labelDelegate(DisplayBMI),bmiObj);
   }
}

This code segment illustrates an important point about threads and code: The same code can be run on multiple threads. The first time this method is called, it runs on the same thread as OnCallBack. The InvokeRequired property is used to determine if the current thread can access the form. If not, the Invoke method is executed with a delegate that calls back DisplayBMI on the UI thread—permitting it to now interact with the UI controls. To make this an asynchronous call, you need only replace Invoke with BeginInvoke.

Using MethodInvoker to Create a Thread

In situations where your code needs to create a new thread but does not require passing arguments or receiving a return value, the system-defined MethodInvoker delegate should be considered. It is the simplest possible delegate—it takes no parameters and returns no value. It is created by passing the name of a method to be called to its constructor. It may then be invoked synchronously (Invoke) or asynchronously (BeginInvoke):

// NewThread is method called by delegate
MethodInvoker mi = new MethodInvoker(NewThread);
// Note that parameters do not have to be null
mi.BeginInvoke(null,null); // Asynchronous call
mi();                      // Synchronous call

The advantage of using the built-in delegate is that you do not have to design your own, and it runs more efficiently than an equivalent custom delegate.

Using Asynchronous Calls to Perform I/O

Asynchronous operations are not new; operating systems originally implemented them, in both hardware and software, as a way to balance slow I/O (Input/Output) operations against much faster CPU operations. To encourage asynchronous I/O, the .NET Framework includes methods on its major I/O classes that can be used to implement the asynchronous model without explicitly creating delegates or threads. These classes include FileStream, HttpWebRequest, Socket, and NetworkStream. Let's look at an example using the FileStream class that was introduced in Chapter 5, “C# Text Manipulation and File I/O.”

FileStream inherits from System.IO.Stream, an abstract class that supports asynchronous operations with its BeginRead, BeginWrite, EndRead, and EndWrite methods. The BeginXxx methods are analogous to BeginInvoke and include callback and state parameters; the EndXxx methods provide blocking until a corresponding BeginXxx method finishes.

The code in Listing 13-3 uses BeginRead to create a thread that reads a file and passes control to a callback method that compresses the file content and writes it as a .gz file. The basic callback method operations are similar to those in Listing 13-2. Note how the file name is retrieved from the AsyncState property. The compression technique—based on the GZipStream class—is available only in .NET 2.0 and above.

Example 13-3. Using Asynchronous I/O to Compress a File

// Special namespaces required:
using System.IO.Compression;
using System.Runtime.Remoting.Messaging;
//
// Variables with class scope
Byte[] buffer;
FileStream infile;
// Compress a specified file using GZip compression
private void Compress_File(string fileName)
{
   bool useAsync = true;  // Specifies asynchronous I/O
   infile = new FileStream(fileName, FileMode.Open,
         FileAccess.Read, FileShare.Read, 2000, useAsync);
   buffer = new byte[infile.Length];
   int ln = buffer.Length;
   // Read file and let callback method handle compression
   IAsyncResult ar = infile.BeginRead(buffer, 0, ln,
         new AsyncCallback(Zip_Completed), fileName);
   //
}
// Callback method that compresses raw data and stores in file
private void Zip_Completed(IAsyncResult asResult)
{
   // Retrieve file name from state object
   string filename = (string)asResult.AsyncState;
   infile.EndRead(asResult);   // Wrap up asynchronous read
   infile.Close();
   //
   MemoryStream ms = new MemoryStream();
   // Memory stream will hold compressed data
   GZipStream zipStream = new GZipStream(ms,
         CompressionMode.Compress, true);
   // Write raw data in compressed form to memory stream
   zipStream.Write(buffer, 0, buffer.Length);
   zipStream.Close();
   // Store compressed data in a file
   FileStream fs = new FileStream(filename+".gz",
         FileMode.OpenOrCreate,FileAccess.Write,FileShare.Read);
   byte[] compressedData = ms.ToArray();
   fs.Write(compressedData, 0, compressedData.Length);
   fs.Close();
}

As a rule, asynchronous techniques are not required for file I/O. In fact, for read and write operations of less than 64KB, .NET uses synchronous I/O even if asynchronous is specified. Also, note that if you specify asynchronous operation in the FileStream constructor (by setting the useAsync parameter to true), and then use synchronous methods, performance may slow dramatically. As we demonstrate in later chapters, asynchronous techniques provide a greater performance boost to networking and Web Services applications than to file I/O.

Working Directly with Threads

The asynchronous techniques discussed in the previous section work best when an application or component's operations can be run on independent threads that contain all the data and methods they need for execution—and when the threads have no interest in the state of other concurrently running threads. The asynchronous techniques do not work as well for applications running concurrent threads that do have to share resources and be aware of the activities of other threads.

The challenge is no longer to determine when a thread finishes executing, but how to synchronize the activities of multiple threads so they do not corrupt each other's work. It's not an easy thing to do, but it can greatly improve a program's performance and a component's usability. In this section, we'll look at how to create and manage threads running concurrently. This serves as a background for the final section that focuses on synchronization techniques used to ensure thread safety.

Creating and Working with Threads

An application can create a thread, identify it, set its priority, set it to run in the background or foreground, coordinate its activities with other threads, and abort it. Let's look at the details.

The Current Thread

All code runs on either the primary thread or a worker thread. The currently executing thread is accessible through the static CurrentThread property of the Thread class. We can use this thread to illustrate some of the selected Thread properties and methods that provide information about a thread:

Thread currThread = Thread.CurrentThread;
Console.WriteLine(currThread.GetHashCode());
Console.WriteLine(currThread.CurrentCulture);      // en-US
Console.WriteLine(currThread.Priority);            // normal
Console.WriteLine(currThread.IsBackground);        // false
Console.WriteLine(AppDomain.GetCurrentThreadId()); // 3008

Thread.GetHashCode overrides the Object.GetHashCode method to return a thread ID, which .NET uses internally to recognize threads. This ID is not the same as the physical thread ID assigned by the operating system; that ID is obtained by calling the AppDomain.GetCurrentThreadId method.

Creating Threads

To create a thread, pass its constructor a delegate that references the method to be called when the thread is started. The delegate parameter may be an instance of the ThreadStart or ParameterizedThreadStart delegate. The difference between the two is their signature: ThreadStart accepts no parameters and returns no value; ParameterizedThreadStart accepts an object as a parameter, which provides a convenient way to pass data to the thread.

After the thread is created, its Start method is invoked to launch the thread. This segment illustrates how the two delegates are used to create a thread:

Thread newThread  = new Thread(new ThreadStart(GetBMI));
newThread.Start();      // Launch thread asynchronously

Thread newThread  = new Thread(new
      ParameterizedThreadStart(GetBMI));
newThread.Start(40);    // Pass data to the thread

To demonstrate thread usage, let's modify the method to calculate a BMI value (see Listing 13-2) to execute on a worker thread (Listing 13-4). The weight and height values are passed in an array object and extracted using casting. The calculated value is exposed as a property of the BMI class.

Example 13-4. Passing Parameters to a Thread's Method

// Create instance of class and set properties
BMI b = new BMI();
decimal[] bmiParms = { 168M, 73M };  // Weight and height
// Thread will execute method in class instance
Thread newThread  = new Thread(
      new ParameterizedThreadStart(b.GetBMI));
newThread.Start(bmiParms);        // Pass parameter to thread
Console.WriteLine(newThread.ThreadState);  // Typically Running
Console.WriteLine(b.Bmi); // Use property to display result
// Rest of main class ...
}
public class BMI
{
   private decimal bmival;
   public void GetBMI(object obj)
   {
      decimal[] parms= (decimal[])obj;
      decimal weight = parms[0];
      decimal height = parms[1] ;


      // Simulate delay to do some work
      Thread.Sleep(1000);  // Build in a delay of one second
      bmival = (weight * 703 * 10/(height*height))/10 ;
   }
   // Property to return BMI value
   public decimal Bmi
   { get {return bmival; }}
}

In reality, the method GetBMI does not do enough work to justify running on a separate thread; to simulate work, the Sleep method is called to block the thread for a second before it performs the calculation. At the same time, the main thread continues executing. It displays the worker thread state and then displays the calculated value. However, this logic creates a race condition in which the calling thread needs the worker thread to complete the calculation before the result is displayed. Because of the delay we've included in GetBMI, that is unlikely—and at best unpredictable.

One solution is to use the Thread.Join method, which allows one thread to wait for another to finish. In the code shown here, the Join method blocks processing on the main thread until the thread running the GetBMI code ends execution:

newThread.Start();
Console.WriteLine(newThread.ThreadState);
newThread.Join();        // Block until thread finishes
Console.WriteLine(b.Bmi);

Note that the most common use of Join is as a safeguard to ensure that worker threads have terminated before an application is shut down.
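As a sketch of that safeguard, a program can keep references to its worker threads and join each one before exiting; the class and method names here are illustrative:

```csharp
using System;
using System.Threading;

class JoinDemo
{
   static void Main()
   {
      // Launch three worker threads, keeping references so we can join them
      Thread[] workers = new Thread[3];
      for (int i = 0; i < workers.Length; i++)
      {
         workers[i] = new Thread(new ThreadStart(DoWork));
         workers[i].Start();
      }
      // Block until every worker has terminated before shutting down
      foreach (Thread t in workers) t.Join();
      Console.WriteLine("All workers finished; safe to exit.");
   }
   static void DoWork()
   {
      Thread.Sleep(500);   // Simulate work
   }
}
```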

Aborting a Thread

Any started thread that is not in a suspended state can be requested to terminate using the Thread.Abort method. Invoking this method causes a ThreadAbortException to be raised on its associated thread; thus, the code running the thread must implement the proper exception handling code. Listing 13-5 shows the code to implement both the call and the exception handling.

The calling method creates a thread, sleeps for a second, and then issues an Abort on the worker thread. The parameter to this command is a string that can be displayed when the subsequent exception occurs. The Join command is then used to wait for the return after the thread has terminated.

The method running on the worker thread loops until it is aborted. It is structured to catch the ThreadAbortException raised by the Abort command and print the message exposed by the exception's ExceptionState property.

Example 13-5. How to Abort a Thread

using System;
using System.Threading;
class TestAbort
{
   public static void Main()
   {
      Thread newThread = new Thread(new ThreadStart(TestMethod));
      newThread.Start();
      Thread.Sleep(1000);
      if(newThread.IsAlive)
      {
         Console.WriteLine("Aborting thread.");
         // (1) Call abort and send message to Exception handler
         newThread.Abort("Need to close all threads.");
         // (2) Wait for the thread to terminate
         newThread.Join();
         Console.WriteLine("Shutting down.");
      }
   }

   static void TestMethod()
   {
      try
      {
         bool iloop=true;
         while(iloop)
         {
            Console.WriteLine("Worker thread running.");
            Thread.Sleep(500);
            // Include next statement to prevent abort
            // iloop=false;
         }
      }
      catch(ThreadAbortException abortException)
      {
        // (3) Display message sent with abort command
        Console.WriteLine((string)abortException.ExceptionState);
      }
   }
}

The Abort command should not be regarded as a standard way to terminate threads, any more than emergency brakes should be regarded as a normal way to stop a car. If the thread does not have adequate exception handling, it will fail to perform any necessary cleanup actions—leading to unpredictable results. Alternate approaches to terminating a thread are presented in the section on thread synchronization.
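One such alternative, previewed here as a sketch, is cooperative termination: the worker polls a volatile flag that another thread sets when shutdown is requested. The names are illustrative:

```csharp
using System;
using System.Threading;

class CooperativeStop
{
   // volatile ensures the worker always sees the latest value
   static volatile bool stopRequested = false;

   static void Main()
   {
      Thread worker = new Thread(new ThreadStart(WorkLoop));
      worker.Start();
      Thread.Sleep(1000);
      stopRequested = true;   // Ask the worker to finish
      worker.Join();          // Wait for it to exit cleanly
      Console.WriteLine("Worker stopped cleanly.");
   }

   static void WorkLoop()
   {
      while (!stopRequested)
      {
         Console.WriteLine("Worker thread running.");
         Thread.Sleep(200);
      }
      // No exception is raised; any cleanup code runs normally here
   }
}
```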

Multithreading in Action

To gain insight into thread scheduling and performance issues, let's set up an application to create multiple threads that request the same resources. Figure 13-5 illustrates our test model. The server is a class that loads images from its disk storage on request and returns them as a stream of bytes to a client. The client spins seven threads with each thread requesting five images. To make things interesting, the threads are given one of two different priorities. Parenthetically, this client can be used for stress testing because the number of threads and images requested can be set to any value.

Figure 13-5. Multithreading used to return images as a byte array

The ImageServer class shown in Listing 13-6 uses the Stream class to input the requested image file, write it into a memory stream, and convert this stream to an array of bytes that is returned to the client. Note that any exceptions thrown in the server are handled by the client code.

Example 13-6. Class to Return Images

public class ImageServer
{
   public static byte[] GetMovieImage(string imageName,
                                      int threadNum )
   {
      // Returns requested image to client as a series of bytes,
      // and displays thread number of calling thread.
      int imgByte;
      imageName = @"c:\images\" + imageName;
      // If file not available exception is thrown and caught by
      // client.
      FileStream s = File.OpenRead(imageName);
      MemoryStream ms = new MemoryStream();
      while((imgByte =s.ReadByte())!=-1)
      {
         ms.WriteByte(((byte)imgByte));
      }
      // Display order in which threads are processed
      Console.WriteLine("Processing on Thread: {0}",threadNum);
      return ms.ToArray();
   }
}

The code shown in Listing 13-7 uses the techniques described earlier to create seven threads that each run the Threader class's FetchImage method. The threads are alternately assigned a priority of Lowest or AboveNormal, so that we can observe how their scheduling is affected by priority. Each thread makes five requests for an image from the server by calling its static GetMovieImage method. These calls are inside an exception handling block that displays any exception message originating at the server.

Example 13-7. Using Multithreading to Retrieve Images

using System;
using System.Collections;
using System.Drawing;     // Bitmap
using System.IO;          // MemoryStream
using System.Threading;
namespace ThreadExample
{
   class SimpleClient
   {
      static void Main(string[] args)
      {
         Threader t=new Threader();
      }
   }
   class Threader
   {
      ImageServer server;
      public Threader(){
         server = new ImageServer();   // Object used to fetch images
         StartThreader();
      }
      public void StartThreader()
      {
         // Create seven threads to retrieve images
         for (int i=0; i<7; i++)
         {
            // (1) Create delegate
            ThreadStart threadStart = new ThreadStart(FetchImage);
            // (2) Create thread
            Thread workerThread = new Thread(threadStart);
            // (3) Set two priorities for comparison testing
            if (i % 2 == 1)
               workerThread.Priority = ThreadPriority.Lowest;
            else
               workerThread.Priority = ThreadPriority.AboveNormal;
            // (4) Launch thread
            workerThread.Start();
         }
      }
      public void FetchImage()
      {
         // Display Thread ID
         Console.WriteLine(
              "Spinning: "+Thread.CurrentThread.GetHashCode());


         string[] posters = {"afi1.gif","afi2.gif",
                            "afi4.gif", "afi7.gif","afi89gif"};
         // Retrieve five images on each thread
         try
         {
            for (int i=0;i<5;i++)
            {
               byte[] imgArray = ImageServer.GetMovieImage(
                     posters[i],
                     Thread.CurrentThread.GetHashCode());
               MemoryStream ms = new MemoryStream(imgArray);
               Bitmap bmp = new Bitmap(ms);
            }
         }
         catch (Exception ex)
         {
            Console.WriteLine(ex.Message);
         }
      }  // FetchImage
   }     // Threader
         // ImageServer class goes here...
}        // ThreadExample

Because GetMovieImage prints the hash code associated with each image it returns, we can determine the order in which thread requests are fulfilled. Figure 13-6 shows the results of running this application. The even-numbered threads have the higher priority and are processed first in round-robin sequence. The lower priority threads are then processed with no interleaved execution among the threads.

Figure 13-6. Effect of thread priority on thread execution

The program was run several times to test the effects of varying the number of images requested. In general, the same scheduling pattern shown here prevails, although as more images are requested the lower priority threads tend to run in an interleaved fashion.

Using the Thread Pool

Creating threads can be a relatively expensive process, and for this reason, .NET maintains a collection of predefined threads known as a thread pool. Threads in this pool can be acquired by an application and then returned for reuse when they have finished running. Recall from Section 13.2 that when a program uses asynchronous delegate invocation to create a thread, the thread actually comes from the thread pool. An application can also access this pool directly by following two simple steps.

The first step is to create a WaitCallback delegate that points to the method to be executed by the thread. This method must, of course, match the signature of the delegate, which takes one object parameter and returns no value. Next, the QueueUserWorkItem static method of the ThreadPool class is called. The first parameter to this method is the delegate; it also takes an optional second parameter that can be used to pass information to the method called by the delegate.

To illustrate, let's alter the previous example to acquire threads from a pool rather than creating them explicitly. An object parameter must be added to FetchImage so that it matches the delegate signature. Then, replace the code to create threads with these two statements:

WaitCallback callBack = new WaitCallback(FetchImage);
ThreadPool.QueueUserWorkItem(callBack, "image returned");

This places a request on the thread pool queue for the next available thread. The first time this runs, the pool must create a thread, which points out an important fact about the thread pool. It contains no threads when it is created, and handles all thread requests by either creating a thread or activating one already in the pool. The pool has a limit (25) on the number of threads it can hold, and if these are all used, a request must wait for a thread to be returned. You can get some information about the status of the thread pool using the GetAvailableThreads method:

int workerThreads;
int asyncThreads;
ThreadPool.GetAvailableThreads(out workerThreads, out asyncThreads);

This method returns two values: for each thread type, the maximum number of threads the pool supports minus the number currently active. Thus, if three worker threads are being used, the workerThreads argument has a value of 22.

The thread pool is most useful for applications that repeatedly require threads for a short duration. For an application that requires only a few threads that run simultaneously, the thread pool offers little advantage. In fact, the time required to create a thread and place it in the thread pool exceeds that of explicitly creating a new thread.
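The two-step process can be sketched in a few lines; the method name and state strings are illustrative. Note the Sleep call, needed only because pool threads are background threads that die when the main thread exits:

```csharp
using System;
using System.Threading;

class PoolDemo
{
   static void Main()
   {
      // Queue two work items, passing each a different state object
      ThreadPool.QueueUserWorkItem(new WaitCallback(ShowMessage), "first item");
      ThreadPool.QueueUserWorkItem(new WaitCallback(ShowMessage), "second item");
      // Pause so the background pool threads have a chance to run
      Thread.Sleep(500);
   }
   // Signature matches WaitCallback: one object parameter, no return value
   static void ShowMessage(object state)
   {
      Console.WriteLine("Processing: {0}", (string)state);
   }
}
```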

Core Note

Threads exist in the thread pool in a suspended state. If a thread is not used in a given time interval, it destroys itself—freeing its resources.

Timers

Many applications have a need to perform polling periodically to collect information or check the status of devices attached to a port. Conceptually, this could be implemented by coupling a timer with a delegate: The delegate handles the call to a specified method, while the timer invokes the delegate to place the calls at a specified interval. In .NET, it is not necessary to write your own code to do this; instead, you can use its prepackaged Timer classes. Let's look at a couple of the most useful ones: System.Timers.Timer and System.Windows.Forms.Timer. The former is for general use, whereas the latter is designed for Windows Forms applications.

System.Timers.Timer Class

To use the Timer class, simply register an event handling method or methods with the class's Elapsed event. The signature of the method(s) must match that of the ElapsedEventHandler delegate associated with the event:

public delegate void ElapsedEventHandler(object sender,
                                         ElapsedEventArgs e);

The Elapsed event occurs at an interval specified by the Timer.Interval property. A thread from the thread pool is used to make the call into the event handler(s). This code segment demonstrates how the Timer causes a method to be called every second:

using System;
using System.Timers;


public class TimerTest
{
   public static void Main()
   {
      SetTimer t = new SetTimer();
      t.StartTimer();
   }
}
class SetTimer
{
   int istart;
   public void StartTimer()
   {
      istart = Environment.TickCount; // Time when execution begins
      Timer myTimer = new Timer();
      myTimer.Elapsed += new ElapsedEventHandler(OnTimedEvent);
      myTimer.Interval = 1000;         // 1000 milliseconds
      myTimer.Enabled = true;
      Console.WriteLine("Press any key to end program.");
      Console.Read();
      myTimer.Stop();
   }
   // Timer event handler
   private void OnTimedEvent(object source, ElapsedEventArgs e)
   {
      Console.WriteLine("Elapsed Time: {0}",
                        Environment.TickCount-istart);
   }
}

System.Windows.Forms.Timer Class

We can dispense with a code example of this class, because its implementation parallels that of the Timers.Timer class, with two differences: It uses a Tick event rather than Elapsed, and it uses the familiar EventHandler as its delegate. However, the feature that distinguishes it from the other Timer class is that it does not use a thread from the thread pool to call a method. Instead, it places calls on a queue to be handled by the main UI thread. Except for situations where the time required by the invoked method may make the form unresponsive, a timer is preferable to using threading. It eliminates the need to deal with concurrent threads and also enables the event handler to directly update the form's controls—something that cannot be done by code on another thread.

Thread Synchronization

Thread synchronization refers to the techniques employed to share resources among concurrent threads in an efficient and orderly manner. The specific objective of these techniques is to ensure thread safety. A class (or its members) is thread-safe when it can be accessed by multiple threads without having its state corrupted. The potential corruption arises from the nature of thread scheduling. Recall from the previous section that a thread executes in time slices. If it does not finish its task, its state is preserved and later restored when the thread resumes execution. However, while suspended, another thread may have executed the same method and altered some global variables or database values that invalidate the results of the original thread. As an example, consider the pseudo-code in Figure 13-7 that describes how concurrent threads execute the same code segment.

Figure 13-7. Execution path that requires synchronization

Because the first thread is suspended before it updates the log file, both threads update the file with the same value. Because server applications may have hundreds of active threads, there is clear need for a mechanism to control access to shared resources.

The implementation of the pseudo-code is presented in Listing 13-8. Executing this code multiple times produces inconsistent results, which is the pitfall of using code that is not thread-safe. About half the time, the counter is incremented correctly by 2; other times, the first thread is preempted and the second thread gets in before the first finishes updating. In this case, the counter is incorrectly incremented by 1.

Example 13-8. Example of Class That Requires Synchronization

using System;
using System.Threading;
using System.IO;
public class MyApp
{
   public static void Main()
   {
      CallerClass cc = new CallerClass();
      Thread worker1 =
            new Thread(new ThreadStart(cc.CallUpdate));
      Thread worker2 =
            new Thread(new ThreadStart(cc.CallUpdate));
      worker1.Start();
      worker2.Start();
   }
}
public class CallerClass
{
   WorkClass wc;
   public CallerClass()
   {
      wc= new WorkClass();  // create object to update log
   }
   public void CallUpdate()
   {
      wc.UpdateLog();
   }
}
public class WorkClass
{
   public void UpdateLog()
   {


      // Open stream for reading and writing
      try
      {
         FileStream fs = new FileStream(@"c:\log.txt",
               FileMode.OpenOrCreate, FileAccess.ReadWrite,
               FileShare.ReadWrite);
         StreamReader sr = new StreamReader(fs);
         // Read current counter
         string ctr = sr.ReadLine();
         if(ctr==null) ctr="0";
         int oldCt = int.Parse(ctr) + 1;
         // If the thread's time slice ends here, the counter
         // is not updated.
         fs.Seek(0,SeekOrigin.Begin);
         StreamWriter sw= new StreamWriter(fs);
         sw.WriteLine(oldCt.ToString());
         Console.WriteLine(oldCt);
         sw.Close();
         sr.Close();
      }
      catch (Exception ex)
      {
         Console.WriteLine(ex.Message);
      }
   }
}     // WorkClass

A solution is to ensure that after a thread invokes UpdateLog, no other thread can access it until the method completes execution. That is essentially how synchronization works: permitting only one thread to have ownership of a resource at a given time. Only when the owner voluntarily relinquishes ownership of the code or resource is it made available to another thread. Let's examine the different synchronization techniques available to implement this strategy.

The Synchronization Attribute

The developers of .NET recognized that the overhead required to make all classes thread-safe by default would result in unacceptable performance. Their solution was to create a .NET architecture that naturally supports the ability to lock code segments, but leaves the choice and technique up to the developer. An example of this is the optional Synchronization attribute. When attached to a class, it instructs .NET to give a thread exclusive access to an object's code until the thread completes execution. Here is the code that implements this type of synchronization in the log update example:

// Requires: using System.Runtime.Remoting.Contexts;
[Synchronization]
public class WorkClass : ContextBoundObject

The class to which the [Synchronization] attribute is applied should derive from the ContextBoundObject class. When .NET sees this is a base class, it places the object in a context and applies the synchronization to the context. This is referred to as context-bound synchronization. For this to make sense, let's look at the .NET architecture to understand what a context is.

When an application starts, the operating system runs it inside a process. The .NET runtime is then loaded and creates one or more application domains (AppDomains) inside the process. As we will see in the next chapter, these are essentially logical processes that provide the managed environment demanded by .NET applications. Just as a process may contain multiple AppDomains, an AppDomain may contain multiple contexts.

A context can be defined as a logical grouping of components (objects) that share the same .NET component services. Think of a context as a layer that .NET wraps around an object so that it can apply a service to it. When a call is made to this object, it is intercepted by .NET and the requested service is applied before the call is routed to the object. Synchronization is one type of component service. In our example, .NET intercepts the call to UpdateLog and blocks the calling thread if another thread has ownership of the context containing this method. Another component service of interest—call authorization—enables .NET to check the calling thread to ensure it has the proper credentials to access the object.

The [Synchronization] attribute is the easiest way to control thread access to a class—only two statements are changed in our preceding example. The drawback to this approach is that it must be applied to the entire class—even if only a small section of the class contains critical code that requires thread synchronization. The manual synchronization approaches we look at next permit a more granular implementation.

The Monitor Class

The Monitor class allows a single thread to place a lock on an object. Its methods are used to control thread access to an entire object or selected sections of code in an object. Enter and Exit are its most commonly used methods. Enter assigns ownership of the lock to the calling thread and prevents any other thread from acquiring it as long as the thread owns it. Exit releases the lock. Let's look at these methods in action.

Using Monitor to Lock an Object

Monitor.Enter takes an object as a parameter and attempts to grant the current thread exclusive access to the object. If another thread owns the object, the requesting thread is blocked until the object is free. The object is freed by executing the complementary Monitor.Exit.

To illustrate the use of a monitor, let's return to the example in Listing 13-8 in which two threads compete to read and update a log file. The read and write operations are performed by calling the UpdateLog method on a WorkClass object. To ensure these operations are not interrupted, we can use a monitor to lock the object until the method completes executing. As shown here, it requires adding only two statements:

public void CallUpdate()
{
   Monitor.Enter(wc);   // wc is WorkClass object
   wc.UpdateLog();
   Monitor.Exit(wc);
}

In addition to Monitor.Enter, there is a Monitor.TryEnter method that attempts to acquire an exclusive lock and returns a true or false value indicating whether it succeeds. Its overloads include one that accepts a parameter specifying the number of milliseconds to wait for the lock:

if (!Monitor.TryEnter(obj)) return;      // Return if lock unavailable
if (!Monitor.TryEnter(obj, 500)) return; // Wait 500 ms for lock

Encapsulating a Monitor

A problem with the preceding approach is that it relies on clients to use the monitor for locking; however, there is nothing to prevent them from executing UpdateLog without first applying the lock. To avoid this, a better design approach is to encapsulate the lock(s) in the code that accesses the shared resource(s). As shown here, by placing Monitor.Enter inside UpdateLog, the thread that gains access to this lock has exclusive control of the code within the scope of the monitor (to the point where Monitor.Exit is executed).

public void UpdateLog()
{
   Monitor.Enter(this);   // Acquire a lock
   try
   {
      // Code to be synchronized
   }
   finally   // Always executed
   {
      Monitor.Exit(this);   // Relinquish lock
   }
}

Note the use of finally to ensure that Monitor.Exit executes. This is critical, because if it does not execute, other threads calling this code are indefinitely blocked. To make it easier to construct the monitor code, C# includes the lock statement as a shortcut to the try/finally block. For example, the previous statements can be replaced with the following:

lock(this)
{
   // Code to be synchronized
}

Monitor and lock can also be used with static methods and properties. To do so, pass the object's type as the parameter rather than the object itself:

Monitor.Enter(typeof(WorkClass));
// Synchronized code ...
Monitor.Exit(typeof(WorkClass));

Core Recommendation

Be wary of using synchronization in static methods. Deadlocks can result when a static method in class A calls static methods in class B, and vice versa. Even if a deadlock does not occur, performance is likely to suffer.
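The pattern this recommendation warns against can be sketched as follows; the classes are illustrative. If one thread calls A.MethodA while another calls B.MethodB, each acquires its own type lock and then blocks waiting for the other's, producing a deadlock:

```csharp
using System.Threading;

class A
{
   public static void MethodA()
   {
      lock (typeof(A))
      {
         Thread.Sleep(100);     // Widen the window for interleaving
         B.MethodB();           // Needs the lock on typeof(B)
      }
   }
}
class B
{
   public static void MethodB()
   {
      lock (typeof(B))
      {
         Thread.Sleep(100);
         A.MethodA();           // Needs the lock on typeof(A): deadlock
      }
   }
}
```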

The Mutex

To understand the Mutex class, it is first necessary to have some familiarity with the WaitHandle class from which it is derived. This abstract class defines “wait” methods that are used by a thread to gain ownership of a WaitHandle object, such as a mutex. We saw earlier in the chapter (refer to Table 13-1) how asynchronous calls use the WaitOne method to block a thread until the asynchronous operation is completed. There is also a WaitAll method that can be used to block a thread until a set of WaitHandle objects—or the resources they protect—are available.

An application can create an instance of the Mutex class using one of several constructors. The most useful are

public Mutex();
public Mutex(bool initiallyOwned);
public Mutex(bool initiallyOwned, string name);

The two optional parameters are important. The initiallyOwned parameter indicates whether the thread creating the object wants to have immediate ownership of it. This is usually set to false when the mutex is created within a class whose resources it is protecting. The name parameter permits a name or identifier to be assigned to the mutex. This permits a specific mutex to be referenced across AppDomains and even processes. Because thread safety usually relies on encapsulating the locking techniques within an object, exposing them by name to outside methods is not recommended.

Using a mutex to provide thread-safe code is a straightforward process. A mutex object is created, and calls to its wait methods are placed strategically in the code where single thread access is necessary. The wait method serves as a request for ownership of the mutex. If another thread owns it, the requesting thread is blocked and placed on a wait queue. The thread remains blocked until the mutex receives a signal from its owner that it has been released. An owner thread releases a mutex in two ways: by calling the object's ReleaseMutex method or when the thread is terminated. Here is an example of how the log update application is altered to use a mutex to provide thread safety:

public class WorkClass
{
   Mutex logMutex;
   public WorkClass()
   {
      logMutex = new Mutex(false);
   }

   public void UpdateLog()
   {
      logMutex.WaitOne();   // Wait for mutex to become available
      // Code to be synchronized
      logMutex.ReleaseMutex();
   }
}

As part of creating an instance of WorkClass, the constructor creates an instance of the Mutex class. The Boolean false parameter passed to its constructor indicates that it is not owned (the parameterless constructor also sets ownership to false). The first thread that executes UpdateLog then gains access to the mutex through the WaitOne call; when the second thread executes this statement, it is blocked until the first thread releases the mutex.
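A small driver makes the contention concrete. This sketch repeats the WorkClass listing so it compiles on its own and starts two threads against the same instance; the mutex serializes their calls to UpdateLog:

```csharp
using System;
using System.Threading;

public class WorkClass
{
   Mutex logMutex;
   public WorkClass()
   {
      logMutex = new Mutex(false);
   }

   public void UpdateLog()
   {
      logMutex.WaitOne();   // Wait for mutex to become available
      // Code to be synchronized
      logMutex.ReleaseMutex();
   }
}

class MutexDriver
{
   static void Main()
   {
      WorkClass wc = new WorkClass();

      Thread t1 = new Thread(new ThreadStart(wc.UpdateLog));
      Thread t2 = new Thread(new ThreadStart(wc.UpdateLog));
      t1.Start();
      t2.Start();

      t1.Join();   // Wait for both worker threads to finish
      t2.Join();
      Console.WriteLine("Both updates complete");
   }
}
```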

The Semaphore

The Semaphore class is another WaitHandle derived class. It functions as a shared counter and—like a mutex—uses a wait call to control thread access to a code section or resource. Unlike a mutex, it permits multiple threads to concurrently access a resource. The number of threads is limited only by the specified maximum value of the semaphore.

When a thread issues a semaphore wait call, the thread is not blocked if the semaphore value is greater than 0. It is given access to the code and the semaphore value is decremented by 1. The semaphore value is incremented when the thread calls the semaphore's Release method. These characteristics make the semaphore a useful tool for managing a limited number of resources such as connections or windows that can be opened in an application.
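The counter arithmetic can be demonstrated on a single thread. In this sketch, a semaphore with two slots is acquired twice, so its count reaches 0; the int returned by Release is the count just before the call:

```csharp
using System;
using System.Threading;

class SemaphoreCount
{
   // Acquires two slots, then releases them; returns the count
   // the semaphore held just before the first Release call
   public static int Demo()
   {
      Semaphore pool = new Semaphore(2, 2);   // 2 slots, both free

      pool.WaitOne();   // count: 2 -> 1 (does not block)
      pool.WaitOne();   // count: 1 -> 0 (does not block)
      // A third WaitOne() here would block until a Release occurred

      int before = pool.Release();   // count: 0 -> 1; returns 0
      pool.Release();                // count: 1 -> 2
      return before;
   }

   static void Main()
   {
      Console.WriteLine(SemaphoreCount.Demo());   // prints 0
   }
}
```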

The Semaphore class has several overloaded constructor formats, but all require the two parameters shown in this version:

public Semaphore(int initialCount, int maximumCount );

The maximumCount parameter specifies the maximum number of concurrent thread requests the semaphore can handle; initialCount is the initial number of requests the semaphore can handle. Here is an example:

Semaphore s = new Semaphore(5,10);

This semaphore permits a maximum of 10 concurrent threads to access a resource; when it is first created, only 5 are permitted. To increase this number, call Semaphore.Release(n), where n is the amount by which to increment the available count. The method's intended purpose is to release the semaphore when a thread finishes with it, but it can be called even by a thread that never requested the semaphore.
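The following sketch shows Release(n) raising the available count, and what happens if a release would push the count past the maximum (the runtime throws SemaphoreFullException):

```csharp
using System;
using System.Threading;

class ReleaseDemo
{
   // Returns true if releasing past the maximum throws
   public static bool Demo()
   {
      Semaphore s = new Semaphore(5, 10);   // 5 of 10 slots available

      int before = s.Release(5);            // available count: 5 -> 10
      Console.WriteLine(before);            // prints 5

      try
      {
         s.Release();                       // would exceed the maximum of 10
         return false;
      }
      catch (SemaphoreFullException)
      {
         return true;
      }
   }

   static void Main()
   {
      Console.WriteLine(ReleaseDemo.Demo());   // prints True
   }
}
```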

Now let's see how the Semaphore class can be used to provide synchronization for the log update example. As a WaitHandle derived class, its implementation is almost identical to the mutex. In this example, the semaphore is created with its initial and maximum values set to 1—thus restricting access to one thread at a time.

public class WorkClass
{
   private Semaphore s;
   public WorkClass()
   {
      // Permit one thread to have access to the semaphore
      s  = new Semaphore(1, 1);
   }
   public void UpdateLog(object obj)
   {
      try
      {
         s.WaitOne();   // Blocks if the semaphore is unavailable
         // Code to update log ...
      }
      finally
      {
         s.Release();
      }
   }
}

Avoiding Deadlock

When concurrent threads compete for resources, there is always the possibility that a thread may be blocked from accessing a resource (starvation) or that a set of threads may be blocked while waiting for a condition that cannot be resolved. This deadlock situation most often arises when thread A, which owns a resource, also needs a resource owned by thread B; meanwhile, thread B needs the resource owned by thread A. When thread A makes its request, it is put in suspended mode until the resource owned by B is available. This, of course, prevents thread B from accessing A's resource. Figure 13-8 depicts this situation.

Figure 13-8. Deadlock situation

Most deadlocks can be traced to code that allows resources to be locked in an inconsistent manner. As an example, consider an application that transfers money from one bank account to another using the method shown here:

public void Transfer(Account acctFrom,
                     Account acctTo, decimal amt)
{
   Monitor.Enter(acctFrom);   // Acquire lock on from account
   Monitor.Enter(acctTo);     // Acquire lock on to account
   // Perform transfer ...
   Monitor.Exit(acctFrom);    // Release lock
   Monitor.Exit(acctTo);      // Release lock
}

As you would expect, the method locks both account objects so that it has exclusive control before performing the transaction. Now, suppose two threads are running and simultaneously call this method to perform a funds transfer:

Thread A:   Transfer(Acct1000, Acct1500, 500.00);
Thread B:   Transfer(Acct1500, Acct1000, 300.00);

The problem is that the two threads are attempting to acquire the same resources (accounts) in a different order and run the risk of creating a deadlock if one is preempted before acquiring both locks. There are a couple of solutions. First, we could lock the code segment being executed to prevent a thread from being preempted until both resources are acquired:

lock(this)
{
   ... Monitor statements
}

Unfortunately, this can produce a performance bottleneck. Suppose another method is working with one of the account objects required for the current transaction. The thread executing the method is blocked as well as all other threads waiting to perform a funds transfer.

A second solution—recommended for multithreading in general—is to impose some order on the condition variables that determine how locking can occur. In this example, we can impose a lock sequence based on the objects' account numbers. Specifically, a lock must be acquired on the account with the lower account number before the second lock can be obtained.

if (acctFrom.AcctNo < acctTo.AcctNo)   // Compare account numbers
{                                      // (assumes an AcctNo member)
   Monitor.Enter(acctFrom);
   Monitor.Enter(acctTo);
}
else
{
   Monitor.Enter(acctTo);
   Monitor.Enter(acctFrom);
}

As this example should demonstrate, a deadlock is not caused by thread synchronization per se, but by poorly designed thread synchronization. To avoid this, code should be designed to guarantee that threads acquire resource locks in a consistent order.
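Putting the pieces together, here is a sketch of the ordered-locking Transfer method. The Account type and its AcctNo field are hypothetical stand-ins for the book's account objects; the driver runs the two opposite-direction transfers from the scenario above without deadlocking:

```csharp
using System;
using System.Threading;

// Hypothetical Account type; AcctNo supplies the lock-ordering key
public class Account
{
   public int AcctNo;
   public decimal Balance;
   public Account(int acctNo, decimal balance)
   {
      AcctNo = acctNo;
      Balance = balance;
   }
}

public class Bank
{
   // Every thread locks the lower-numbered account first,
   // so no two threads can acquire the locks in opposite order
   public void Transfer(Account acctFrom, Account acctTo, decimal amt)
   {
      Account first  = acctFrom.AcctNo < acctTo.AcctNo ? acctFrom : acctTo;
      Account second = (first == acctFrom) ? acctTo : acctFrom;

      lock (first)
      {
         lock (second)
         {
            acctFrom.Balance -= amt;
            acctTo.Balance   += amt;
         }
      }
   }
}

class TransferDemo
{
   static void Main()
   {
      Account a = new Account(1000, 1000m);
      Account b = new Account(1500, 1000m);
      Bank bank = new Bank();

      // Opposite-direction transfers, as issued by threads A and B above
      Thread tA = new Thread(delegate() { bank.Transfer(a, b, 500m); });
      Thread tB = new Thread(delegate() { bank.Transfer(b, a, 300m); });
      tA.Start(); tB.Start();
      tA.Join();  tB.Join();

      Console.WriteLine(a.Balance);   // prints 800
      Console.WriteLine(b.Balance);   // prints 1200
   }
}
```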

Summary of Synchronization Techniques

Table 13-2 provides an overview of the synchronization techniques discussed in this chapter and provides general advice on selecting the one to best suit your application's needs.

Table 13-2. Overview of Selected Thread Synchronization Techniques

Technique: Synchronization attribute
Description: An attribute that can be applied to classes that inherit from the ContextBoundObject class.
When to Use: To limit thread access to an entire object. If you need to protect only a small section of code while permitting access to other class members, choose another technique.

Technique: Monitor/lock
Description: Locks selected code segments that are enclosed between Monitor.Enter and Monitor.Exit statements. The lock statement generates equivalent code with a built-in try/finally block.
When to Use: To provide single-thread access to selected code segments in an object. To synchronize access to value types, use a mutex.

Technique: Mutex
Description: Uses wait methods inherited from the WaitHandle class to manage thread access to resources.
When to Use: To permit a thread to request exclusive access to one or more resources. Requests can be made across AppDomains and processes.

Technique: Semaphore
Description: Uses wait methods inherited from the WaitHandle class to manage concurrent access to resources by multiple threads.
When to Use: To make a limited number of resources available concurrently to more than one thread.

In addition to these, .NET offers specialized synchronization classes designed for narrowly defined tasks. These include Interlocked, which performs atomic operations such as incrementing and exchanging values, and ReaderWriterLock, which permits concurrent read access by multiple threads while granting a writer exclusive access. Refer to online documentation (such as MSDN) for details on using these.
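As a brief illustration of the Interlocked class, this sketch has ten threads hammering a shared counter. Because each increment is atomic, no updates are lost and the final total is always exact, with no explicit lock required:

```csharp
using System;
using System.Threading;

class Counter
{
   static int hits = 0;

   public static int Run()
   {
      hits = 0;
      // Ten threads each increment the counter 1000 times
      Thread[] threads = new Thread[10];
      for (int i = 0; i < threads.Length; i++)
      {
         threads[i] = new Thread(new ThreadStart(Work));
         threads[i].Start();
      }
      foreach (Thread t in threads)
         t.Join();   // Wait for all workers to finish
      return hits;
   }

   static void Work()
   {
      for (int i = 0; i < 1000; i++)
         Interlocked.Increment(ref hits);   // Atomic; no lock needed
   }

   static void Main()
   {
      Console.WriteLine(Counter.Run());   // prints 10000
   }
}
```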

Summary

Designing an application to perform tasks concurrently can result in an application that provides better responsiveness to a user and manages system resources more efficiently. This requires replacing the traditional synchronous approach to code execution with an asynchronous approach that uses threads. A thread is a path of execution. Each program begins running on a main thread that may create worker threads to perform tasks concurrent to its own processing.

One way to create a thread that executes a specified method is to make an asynchronous delegate invocation. This is done by creating a delegate and passing it the name of the method to be called. The delegate is then invoked with its BeginInvoke method. This causes the delegate's method to be executed on a thread that is fetched from a thread pool managed by .NET. An optional parameter to BeginInvoke is a callback method that is called when the worker thread ends. Unlike synchronous processing, in which a call to a method blocks processing in the calling method, asynchronous invocation returns control to the calling method so that it can continue processing.

Applications that require more control over a thread can create their own by passing a ThreadStart or ParameterizedThreadStart delegate to the Thread constructor. A thread is executed by calling its Start method.

After the decision is made to use threads, the problem of thread-safe code must be considered. An operating system executes threads in time slices. When a thread's time expires, it is swapped out and another thread begins executing. The effects of a thread being interrupted in the middle of a task can produce unpredictable results. Thread synchronization is used to ensure that one thread has exclusive access to a code path until it completes processing. .NET provides several approaches to synchronization: an automatic approach that uses the Synchronization attribute to lock an object until a thread has finished using it; and the Mutex, Monitor, and Semaphore classes that provide a manual—but more granular—approach to implementing thread safety.

Test Your Understanding

1:

An asynchronous delegate must have a void return value.

  1. True

  2. False

2:

Given this delegate

private delegate void myDelegate(string msg);
myDelegate d = new myDelegate(PrintMessage);

identify the role of ia, p1, p2, and p3 in this BeginInvoke call:

ia = d.BeginInvoke(p1, p2, p3);

3:

What is thread local storage used for?

4:

What is the default maximum number of threads that a thread pool can hold?

5:

What two delegates are used to create a thread directly, and how do they differ?

6:

Describe a syntactically simpler way to generate the following code:

Monitor.Enter(obj);
try {
   // Code to synchronize
} finally {
   Monitor.Exit(obj);
}

7:

How many times does the following code print the console message?

private static Semaphore s;
public static void Main()
{
   s = new Semaphore(0, 3);
   // Create and start five numbered threads
   for(int i = 1; i <= 5; i++)
   {
      Thread t = new Thread(new ParameterizedThreadStart(Worker));
      t.Start(i);
   }
}
private static void Worker(object num)
{
   s.WaitOne();
   Console.WriteLine("Thread enters semaphore  ");
   Thread.Sleep(100);
   s.Release();
}
  1. 0

  2. 1

  3. 3

  4. 5

8:

What happens when you attempt to run this code?

class UseMutex
{
   public void ThreadStart()
   {
      Mutex mutex = new Mutex(false, "MyMutex");
      mutex.WaitOne();
      Console.WriteLine("Worker Thread");
   }

   static void Main()
   {
      UseMutex obj = new UseMutex();
      Thread thread = new Thread(
            new ThreadStart(obj.ThreadStart));
      Mutex mutex = new Mutex(true, "MyMutex");
      thread.Start();
      Thread.Sleep(1000);
      Console.WriteLine("Primary Thread");
      mutex.ReleaseMutex();
   }
}
  1. It prints:

    Worker Thread
    Primary Thread
    
  2. It prints:

    Primary Thread
    Worker Thread
    
  3. The program deadlocks and there is no output.

9:

To illustrate deadlocking, Edsger Dijkstra introduced a “Dining Philosopher” metaphor that has become a classical way of introducing resource allocation and deadlocking. The metaphor (with variations) goes like this:

Five philosophers, who spend their lives alternately thinking and eating, are sitting around a table. In the center of the round table is an infinite supply of food. Before each philosopher is a plate, and between each pair of plates is a single chopstick. Once a philosopher quits thinking, he or she attempts to eat. In order to eat, a philosopher must have possession of the chopstick to the left and right of the plate.

Your challenge is to design a program that creates five threads—one to represent each philosopher—that continually perform the tasks of thinking and eating. The program should implement a synchronization scheme that allows each thread to periodically acquire two chopsticks so that it can eat for a fixed or random (your choice) amount of time. After eating, a philosopher should release its chopsticks and think for a while before trying to eat again.

Hint: Use Monitor.Wait() to try to acquire chopsticks, and Monitor.PulseAll() to notify all threads that chopsticks are being released.
