A thread represents a single flow of execution logic in a program. Some programs never need more than a single thread to execute efficiently, but many do, and that is what this chapter is about. Threading in .NET allows you to build responsive and efficient applications. Many applications need to perform multiple actions at the same time (such as user interface interaction and data processing), and threading provides the capability to achieve this.

Being able to have your application perform multiple tasks is a liberating and yet complicating factor in your application design. Once you have multiple threads of execution in your application, you need to start thinking about what data needs to be protected from concurrent access, what data could cause threads to develop an interdependency that could lead to deadlocking (Thread A holds a resource that Thread B is waiting for, and Thread B holds a resource that Thread A is waiting for), and how to store data you want to associate with individual threads.

You will also want to consider race conditions. A race condition occurs when two threads access a shared variable at the same time: both threads read the variable, get the same value, and then race to see which thread can write to the shared variable last. The last thread to write "wins," as it overwrites the value that the first thread wrote. You will explore some of these issues to help you take advantage of this capability of the .NET Framework, as well as the areas where you need to be careful and the items to keep in mind while designing and creating your multithreaded application.
Synchronization is about coordinating activities between threads or processes while making sure that data being accessed by multiple threads or processes stays valid. Synchronization allows threads and processes to operate in unison. Understanding the constructs that allow you to have multiple threads executing in your program gives you the power to create more scalable applications that can better utilize available resources.
Concurrency is about various aspects of your program cooperating and working in tandem to achieve goals. When operations are running concurrently in your application, you have multiple actions occurring at the same time. Concurrency is fostered by synchronization of threads.
Use ThreadStaticAttribute to mark any static fields as not shareable between threads:

public class Foo
{
    [ThreadStatic]
    public static string bar = "Initialized string";
}
By default, static fields are shared between all threads that access them in the same application domain. To see this, you'll create a class with a static field called bar and a static method to access and display the value contained in this field:

private class ThreadStaticField
{
    public static string bar = "Initialized string";

    public static void DisplayStaticFieldValue()
    {
        string msg = $"{Thread.CurrentThread.GetHashCode()}" +
            $" contains static field value of: {ThreadStaticField.bar}";
        Console.WriteLine(msg);
    }
}
Next, create a test method that accesses this static field both on the current thread and on a newly spawned thread:
private static void TestStaticField()
{
    ThreadStaticField.DisplayStaticFieldValue();

    Thread newStaticFieldThread =
        new Thread(ThreadStaticField.DisplayStaticFieldValue);
    newStaticFieldThread.Start();

    ThreadStaticField.DisplayStaticFieldValue();
}
This code displays output that resembles the following:
9 contains static field value of: Initialized string
10 contains static field value of: Initialized string
9 contains static field value of: Initialized string
In the preceding example, the current thread's hash value is 9, and the new thread's hash value is 10. These values will vary from system to system. Notice that both threads are accessing the same static bar field. Next, add the ThreadStaticAttribute to the static field:
private class ThreadStaticField
{
    [ThreadStatic]
    public static string bar = "Initialized string";

    public static void DisplayStaticFieldValue()
    {
        string msg = $"{Thread.CurrentThread.GetHashCode()}" +
            $" contains static field value of: {ThreadStaticField.bar}";
        Console.WriteLine(msg);
    }
}
Now, output resembling the following is displayed:
9 contains static field value of: Initialized string
10 contains static field value of:
9 contains static field value of: Initialized string
Notice that the new thread returns null for the value of the static bar field. This is the expected behavior. The bar field is initialized only in the first thread that accesses it; in all other threads, the field is initialized to null. Therefore, it is imperative that you initialize the bar field in all threads before it is used.
Remember to initialize any static field that is marked with ThreadStaticAttribute before it is used in any thread; that is, the field should be initialized in the method passed to the ThreadStart delegate. Make sure not to initialize the static field using a field initializer, as shown in the prior code, since only one thread gets to see that initial value.
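Following that advice, the per-thread initialization can be done at the top of the method each thread runs. Here is a minimal sketch; the class and method names are illustrative, not from the recipe code:

```csharp
using System;
using System.Threading;

public class ThreadStaticFieldSafe
{
    [ThreadStatic]
    public static string bar;   // no field initializer; see the warning above

    // Initialize the field at the start of each thread's entry method,
    // so every thread sees a non-null value.
    public static void ThreadEntryPoint()
    {
        bar = "Initialized string";
        Console.WriteLine(
            $"{Thread.CurrentThread.ManagedThreadId} sees: {bar}");
    }
}

// Usage:
// new Thread(ThreadStaticFieldSafe.ThreadEntryPoint).Start();
```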
The bar field is initialized to the "Initialized string" literal before it is used in the first thread that accesses it. In the previous test code, the bar field was accessed first on the current thread, and, therefore, it was initialized there. Suppose you were to remove the first line of the TestStaticField method, as shown here:
private static void TestStaticField()
{
    //ThreadStaticField.DisplayStaticFieldValue();

    Thread newStaticFieldThread =
        new Thread(ThreadStaticField.DisplayStaticFieldValue);
    newStaticFieldThread.Start();

    ThreadStaticField.DisplayStaticFieldValue();
}
This code now displays similar output to the following:
10 contains static field value of: Initialized string
9 contains static field value of:
The current thread does not access the bar field first and therefore does not initialize it. However, when the new thread accesses the field first, it does initialize it.
Note that adding a static constructor to initialize the static field marked with this attribute will still follow the same behavior. Static constructors are executed only one time per application domain.
See Also: The “ThreadStaticAttribute Attribute” and “Static Modifier (C#)” topics in the MSDN documentation.
You need to provide thread-safe access through accessor functions to an internal member variable.
The following NoSafeMemberAccess class shows three methods: ReadNumericField, IncrementNumericField, and ModifyNumericField. While all of these methods access the internal numericField member, the access is currently not safe for multithreaded use:

public static class NoSafeMemberAccess
{
    private static int numericField = 1;

    public static void IncrementNumericField()
    {
        ++numericField;
    }

    public static void ModifyNumericField(int newValue)
    {
        numericField = newValue;
    }

    public static int ReadNumericField() => numericField;
}
NoSafeMemberAccess could be used in a multithreaded application, and therefore it must be made thread-safe. Consider what would occur if multiple threads were calling the IncrementNumericField method at the same time: two calls could occur to IncrementNumericField while numericField is updated only once. To protect against this, you will modify this class by creating an object that you can lock on in critical sections of the code:

public static class SaferMemberAccess
{
    private static int numericField = 1;
    private static object syncObj = new object();

    public static void IncrementNumericField()
    {
        lock (syncObj)
        {
            ++numericField;
        }
    }

    public static void ModifyNumericField(int newValue)
    {
        lock (syncObj)
        {
            numericField = newValue;
        }
    }

    public static int ReadNumericField()
    {
        lock (syncObj)
        {
            return numericField;
        }
    }
}
Using the lock statement on the syncObj object lets you synchronize access to the numericField member. This makes all three methods safe for multithreaded access.
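For a simple numeric field like this one, the System.Threading.Interlocked class is a lighter-weight alternative to taking a full lock. This sketch (the class name is illustrative) performs the same three operations atomically without a lock object:

```csharp
using System.Threading;

public static class InterlockedMemberAccess
{
    private static int numericField = 1;

    // Interlocked.Increment performs the read-modify-write as a single
    // atomic operation, so no lock is needed for this case.
    public static void IncrementNumericField() =>
        Interlocked.Increment(ref numericField);

    // Interlocked.Exchange atomically replaces the stored value.
    public static void ModifyNumericField(int newValue) =>
        Interlocked.Exchange(ref numericField, newValue);

    // CompareExchange with identical comparand and value performs an
    // atomic read with a full memory barrier.
    public static int ReadNumericField() =>
        Interlocked.CompareExchange(ref numericField, 0, 0);
}
```

Interlocked only covers single-variable read-modify-write operations; once an invariant spans more than one field or more than one call, you are back to lock or Monitor.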
To mark a block of code as a critical section, you use the lock keyword. The lock keyword should not be used on a public type or on an instance outside the control of the program, as this can contribute to deadlocks. Examples of this are locking on the this pointer, on the type object for a class (typeof(MyClass)), or on a string literal ("MyLock"). If you are protecting code only in public static methods, you could instead use the System.Runtime.CompilerServices.MethodImpl attribute with the MethodImplOptions.Synchronized value:
[MethodImpl(MethodImplOptions.Synchronized)]
public static void MySynchronizedMethod()
{
}
There is a problem with synchronizing on an object such as syncObj in the SaferMemberAccess example. If you lock an object or type that can be accessed by other objects within the application, those other objects may also attempt to lock the same object.
A deadlock is a situation in which two programs or threads of execution that are sharing the same resources are effectively preventing each other from accessing the resources, resulting in both being blocked and stopping execution.
A quick example of a deadlock:

1. Thread 1 accesses Resource A and grabs a lock on it.
2. Thread 2 accesses Resource B and grabs a lock on it.
3. Thread 1 attempts to grab Resource B but is waiting for Thread 2 to let go.
4. Thread 2 attempts to grab Resource A but is waiting for Thread 1 to let go.

At this point the threads are deadlocked.
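One common way to prevent this circular wait is to make every thread acquire the locks in the same global order. A minimal sketch of that rule, with illustrative names:

```csharp
using System.Threading;

public static class LockOrdering
{
    private static readonly object resourceA = new object();
    private static readonly object resourceB = new object();

    // Both workers acquire the locks in the SAME order (A, then B),
    // so the circular wait described above cannot occur.
    public static void Worker1()
    {
        lock (resourceA)
        lock (resourceB)
        {
            // Use both resources.
        }
    }

    public static void Worker2()
    {
        lock (resourceA)   // same order as Worker1, never B then A
        lock (resourceB)
        {
            // Use both resources.
        }
    }
}
```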
This will manifest itself in poorly written code that locks itself, such as the following code:
public class DeadLock
{
    public void Method1()
    {
        lock (this)
        {
            // Do something.
        }
    }
}
When Method1 is called, it locks the current DeadLock object. Unfortunately, any code that has a reference to a DeadLock instance may also lock it, as shown here:

public class AnotherCls
{
    public void DoSomething()
    {
        DeadLock deadLock = new DeadLock();
        lock (deadLock)
        {
            Thread thread = new Thread(deadLock.Method1);
            thread.Start();

            // Do some time-consuming task here.
        }
    }
}
The DoSomething method obtains a lock on the deadLock object and then attempts to call the Method1 method of the deadLock object on another thread, after which a very long task is executed. While the long task is executing, the lock on the deadLock object prevents Method1 from being called on the other thread. Only when the long task ends, and execution exits the critical section of the DoSomething method, will the Method1 method be able to acquire a lock on the object. As you can see, this can become a major headache to track down in a much larger application.
Jeffrey Richter came up with a relatively simple remedy for this situation, which he details quite clearly in the article “Safe Thread Synchronization” in the January 2003 issue of MSDN Magazine. His solution is to create a private field within the class on which to synchronize. Only the object itself can acquire a lock on this private field; no outside object or type can. This is also now the recommended practice in the MSDN documentation for the lock keyword. The DeadLock class can be rewritten as follows to fix this problem:

public class DeadLock
{
    private object syncObj = new object();

    public void Method1()
    {
        lock (syncObj)
        {
            // Do something.
        }
    }
}
Now in the DeadLock class, you are locking on the internal syncObj, while the DoSomething method still locks on the DeadLock class instance. This resolves the deadlock condition, but the DoSomething method still should not lock on a public type. Therefore, change the AnotherCls class like so:

public class AnotherCls
{
    private object deadLockSyncObj = new object();

    public void DoSomething()
    {
        DeadLock deadLock = new DeadLock();
        lock (deadLockSyncObj)
        {
            Thread thread = new Thread(deadLock.Method1);
            thread.Start();

            // Do some time-consuming task here.
        }
    }
}
Now the AnotherCls class has an object of its own to lock on while it uses the DeadLock class instance in DoSomething, instead of locking on the public type.
To clean up your code, you should stop locking any objects or types except the synchronization objects that are private to your type or object, such as the syncObj in the fixed DeadLock class. This recipe makes use of this pattern by creating a static syncObj object within the SaferMemberAccess class. The IncrementNumericField, ModifyNumericField, and ReadNumericField methods use this syncObj to synchronize access to the numericField field. Note that if you do not need a lock while numericField is being read in the ReadNumericField method, you can remove that lock block and simply return the value contained in the field.
Minimizing the number of critical sections within your code can significantly improve performance. Use what you need to secure resource access, but no more.
If you require more control over the locking and unlocking of critical sections, you might want to try the overloaded static Monitor.TryEnter methods. These methods allow more flexibility by introducing a timeout value. The lock keyword will attempt to acquire a lock on a critical section indefinitely. With the TryEnter method, however, you can specify a timeout value in milliseconds (as an integer) or as a TimeSpan structure. The TryEnter methods return true if a lock was acquired and false if it was not. Note that the overload of TryEnter that accepts only a single parameter does not block for any amount of time; it returns immediately, regardless of whether the lock was acquired.
The updated class using the Monitor methods is shown in Example 12-1.
public static class MonitorMethodAccess
{
    private static int numericField = 1;
    private static object syncObj = new object();

    public static object SyncRoot => syncObj;

    public static void IncrementNumericField()
    {
        if (Monitor.TryEnter(syncObj, 250))
        {
            try
            {
                ++numericField;
            }
            finally
            {
                Monitor.Exit(syncObj);
            }
        }
    }

    public static void ModifyNumericField(int newValue)
    {
        if (Monitor.TryEnter(syncObj, 250))
        {
            try
            {
                numericField = newValue;
            }
            finally
            {
                Monitor.Exit(syncObj);
            }
        }
    }

    public static int ReadNumericField()
    {
        if (Monitor.TryEnter(syncObj, 250))
        {
            try
            {
                return numericField;
            }
            finally
            {
                Monitor.Exit(syncObj);
            }
        }
        return -1;
    }

    [MethodImpl(MethodImplOptions.Synchronized)]
    public static void MySynchronizedMethod()
    {
    }
}
Note that with the TryEnter methods, you should always check whether the lock was in fact acquired. If it was not, your code should wait and try again or return to the caller.
You might think at this point that all of the methods are thread-safe. Individually, they are, but what if you expect synchronized access across two of the methods? If ModifyNumericField and ReadNumericField are used one after the other by Class 1 on Thread 1 at the same time that Class 2 is using these methods on Thread 2, locking or Monitor calls will not prevent Class 2 from modifying the value before Thread 1 reads it. Here is a series of actions that demonstrates this:
1. Class 1 calls ModifyNumericField with 10.
2. Class 2 calls ModifyNumericField with 15.
3. Class 1 calls ReadNumericField and gets 15, not 10.
4. Class 2 calls ReadNumericField and gets 15, which it expected.

To solve this problem of synchronizing reads and writes, the calling class needs to manage the interaction. It can accomplish this by using the Monitor class to establish a lock on the synchronization object exposed by MonitorMethodAccess through its SyncRoot property, as shown here:
int num = 0;
if (Monitor.TryEnter(MonitorMethodAccess.SyncRoot, 250))
{
    MonitorMethodAccess.ModifyNumericField(10);
    num = MonitorMethodAccess.ReadNumericField();
    Monitor.Exit(MonitorMethodAccess.SyncRoot);
}
Console.WriteLine(num);
When you are learning to write thread-safe code, it is helpful to brush up on deadlock-prevention algorithms, such as Edsger Dijkstra's Banker's Algorithm, and on operating-systems texts, to help you think through the code you are creating and how it will react.
See Also: The “Lock Statement,” “Thread Class,” and “Monitor Class” topics in the MSDN documentation; the “Safe Thread Synchronization” article in the January 2003 issue of MSDN Magazine; the Wikipedia articles “Banker’s algorithm” and “Deadlock prevention algorithms.”
An exception thrown in a spawned worker thread will cause this thread to be silently terminated if the exception is unhandled. You need to make sure all exceptions are handled in all threads. If an exception happens in this new thread, you want to handle it and be notified of its occurrence.
You must add exception handling to the method that you pass to the ThreadStart delegate, using a try-catch, try-finally, or try-catch-finally block. The code to do this is shown in Example 12-2.
public class MainThread
{
    public void CreateNewThread()
    {
        // Spawn new thread to do concurrent work.
        Thread newWorkerThread = new Thread(Worker.DoWork);
        newWorkerThread.Start();
    }
}

public class Worker
{
    // Method called by ThreadStart delegate to do concurrent work.
    public static void DoWork()
    {
        try
        {
            // Do thread work here.
            throw new Exception("Boom!");
        }
        catch (Exception e)
        {
            // Handle thread exception here.
            Console.WriteLine(e.ToString());
            // Do not rethrow the exception.
        }
        finally
        {
            // Do thread cleanup here.
        }
    }
}
If an unhandled exception occurs in the main thread of an application, the main thread terminates, along with your entire application. An unhandled exception in a spawned worker thread, however, will terminate only that thread. This will happen without any visible warnings, and your application will continue to run as if nothing happened, or worse, may start to act strangely due to corrupted data or improper execution and interaction of the worker threads.
Simply wrapping an exception handler around the Start method of the Thread class will not catch an exception thrown on the newly created thread. The Start method is called within the context of the current thread, not the new one, and it returns immediately once the thread is launched, so it does not wait for the thread to finish. An exception thrown in the new thread is therefore not visible to any other thread and will not be caught.
If the exception is rethrown from the catch block, the finally block of the structured exception handler will still execute. However, after the finally block finishes, the rethrown exception is, at that point, rethrown. The rethrown exception cannot be handled and the thread terminates. Any code after the finally block will not execute, since an unhandled exception occurred.

Never rethrow an exception at the highest point in the exception-handling hierarchy within a thread. Since no exception handler can catch the rethrown exception, it will be considered unhandled, and the thread will terminate after all finally blocks have executed.
What if you use the ThreadPool and QueueUserWorkItem? This method still helps, because the handling code you added executes inside the thread. Just make sure you have the finally block set up so that you can notify yourself of exceptions and clean up any outstanding resources in other threads, as shown earlier.
To provide a last-chance exception handler for your WinForms application, you need to hook up two separate events. The first is System.AppDomain.CurrentDomain.UnhandledException, which will catch all unhandled exceptions in the current AppDomain on worker threads; it will not catch exceptions that occur on the main UI thread of a WinForms application. See Recipe 5.8 for more information on the System.AppDomain.UnhandledException event. To catch those, you need to hook up the second event, System.Windows.Forms.Application.ThreadException, which will catch unhandled exceptions in the main UI thread. See Recipe 5.7 for more information about the ThreadException event.
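Hooking up both events might look like the following sketch for a hypothetical WinForms entry point (the handler bodies are placeholders; a real application would log and decide whether to shut down):

```csharp
using System;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        // Last-chance handler for unhandled exceptions on worker threads.
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
            Console.WriteLine($"Worker thread exception: {e.ExceptionObject}");

        // Last-chance handler for unhandled exceptions on the main UI thread.
        Application.ThreadException += (sender, e) =>
            MessageBox.Show($"UI thread exception: {e.Exception.Message}");

        Application.Run(new Form());
    }
}
```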
See Also: The “Thread Class” and “Exception Class” topics in the MSDN documentation.
You need a way of receiving notification from an asynchronously invoked delegate that it has finished. This scheme must allow your code to continue processing without having to constantly poll IsCompleted in a loop or rely on the WaitOne method. Since the asynchronous delegate returns a value, you must be able to pass this return value back to the invoking thread.
Use the BeginInvoke method to start the asynchronous delegate, and use its first parameter to pass a callback delegate to the asynchronous delegate, as shown in Example 12-3.
public class AsyncAction2
{
    public void CallbackAsyncDelegate()
    {
        AsyncCallback callBack = DelegateCallback;

        AsyncInvoke method1 = TestAsyncInvoke.Method1;
        Console.WriteLine(
            $"Calling BeginInvoke on Thread {Thread.CurrentThread.ManagedThreadId}");
        IAsyncResult asyncResult = method1.BeginInvoke(callBack, method1);

        // No need to poll or use the WaitOne method here, so return to the
        // calling method.
        return;
    }

    private static void DelegateCallback(IAsyncResult iresult)
    {
        Console.WriteLine(
            $"Getting callback on Thread {Thread.CurrentThread.ManagedThreadId}");

        AsyncResult asyncResult = (AsyncResult)iresult;
        AsyncInvoke method1 = (AsyncInvoke)asyncResult.AsyncDelegate;

        int retVal = method1.EndInvoke(asyncResult);
        Console.WriteLine($"retVal (Callback): {retVal}");
    }
}
This callback delegate will invoke the DelegateCallback method when the asynchronous delegate finishes processing. The callback does not run on the thread that called BeginInvoke, but rather on a thread-pool thread. The system knows that a callback is pending, so you do not have to keep the calling thread around or account for it being unavailable when the callback is ready to be invoked.
The following code defines the AsyncInvoke delegate and the asynchronously invoked static method TestAsyncInvoke.Method1:

public delegate int AsyncInvoke();

public class TestAsyncInvoke
{
    public static int Method1()
    {
        Console.WriteLine(
            $"Invoked Method1 on Thread {Thread.CurrentThread.ManagedThreadId}");
        return 1;
    }
}
To run the asynchronous invocation, create an instance of the AsyncAction2 class and call its CallbackAsyncDelegate method like so:

AsyncAction2 aa2 = new AsyncAction2();
aa2.CallbackAsyncDelegate();
The output for this code is shown next. Note that the thread ID for Method1 is different from that of the calling thread:

Calling BeginInvoke on Thread 9
Invoked Method1 on Thread 10
Getting callback on Thread 10
retVal (Callback): 1
The asynchronous delegates in this recipe are created and invoked in the same fashion as the asynchronous delegate in Recipe 12.3. Instead of using the IsCompleted property to determine when the asynchronous delegate is finished processing (or using the WaitOne method to block for a specified time while the asynchronous delegate continues processing), this recipe uses a callback to indicate to the calling thread that the asynchronous delegate has finished processing and that its return value and any ref and out parameter values are available.
Invoking a delegate in this manner is much more flexible and efficient than simply polling the IsCompleted property to determine when a delegate finishes processing. When polling this property in a loop, the polling method cannot return and allow the application to continue processing. A callback is also better than using the WaitOne method, since WaitOne blocks the calling thread and prevents processing from occurring.
The CallbackAsyncDelegate method in this recipe uses the first parameter of the asynchronous delegate's BeginInvoke method to pass in another delegate, which contains the callback method to be called when the asynchronous delegate finishes processing. After calling BeginInvoke, this method can return, and the application can continue; it does not have to wait in a polling loop or block while the asynchronous delegate runs.
The AsyncCallback delegate that is passed into the first parameter of the BeginInvoke method is defined as follows:

public delegate void AsyncCallback(IAsyncResult ar);
When this delegate is created, as shown here, the callback method passed in, DelegateCallback, will be called as soon as the asynchronous delegate completes:

AsyncCallback callBack = new AsyncCallback(DelegateCallback);
DelegateCallback will not run on the same thread as BeginInvoke but rather on a Thread from the ThreadPool. This callback method accepts a parameter of type IAsyncResult. You can cast this parameter to an AsyncResult object within the method and use it to obtain information about the completed asynchronous delegate, such as its return value and any ref or out parameter values. If the delegate instance that was used to call BeginInvoke is still in scope, you can just pass the IAsyncResult to the EndInvoke method. In addition, this object can obtain any state information passed into the second parameter of the BeginInvoke method. This state information can be any object type.
The DelegateCallback method casts the IAsyncResult parameter to an AsyncResult object and obtains the asynchronous delegate that was originally called. The EndInvoke method of this asynchronous delegate is called to process the return value and any ref or out parameters. If a state object was passed to the BeginInvoke method's second parameter, it can be obtained here through the following line of code:

object state = asyncResult.AsyncState;
See Also: The “AsyncCallback Delegate” topic in the MSDN documentation.
Use the AllocateDataSlot, AllocateNamedDataSlot, or GetNamedDataSlot method on the Thread class to reserve a thread local storage (TLS) slot. Using TLS, you can store a large object in a data slot on a thread and use it in many different methods, without having to pass the object as a parameter.
For this example, a class called ApplicationData represents a set of data that can grow to be very large:

public class ApplicationData
{
    // Application data is stored here.
}
Before you can use this class, there must be a data slot in TLS to store it. First, GetNamedDataSlot is called to get the appDataSlot slot. Since appDataSlot doesn't exist yet, GetNamedDataSlot creates it by default. The following code creates an instance of the ApplicationData class and stores it in the data slot named appDataSlot:

ApplicationData appData = new ApplicationData();
Thread.SetData(Thread.GetNamedDataSlot("appDataSlot"), appData);
Whenever you need this object, you can retrieve it with a call to Thread.GetData. The following code gets the appData object from the data slot named appDataSlot:

ApplicationData storedAppData = (ApplicationData)Thread.GetData(
    Thread.GetNamedDataSlot("appDataSlot"));
At this point, the storedAppData object can be read or modified. After the action has been performed on storedAppData, it must be placed back into the data slot named appDataSlot:

Thread.SetData(Thread.GetNamedDataSlot("appDataSlot"), storedAppData);
Once the application is finished using this data, you can release the data slot from memory using the following method call:
Thread.FreeNamedDataSlot("appDataSlot");
The HandleClass class in Example 12-4 shows how TLS can be used to store an object.

public class HandleClass
{
    public static void Run()
    {
        // Create the class instance and store it in the named data slot.
        ApplicationData appData = new ApplicationData();
        Thread.SetData(Thread.GetNamedDataSlot("appDataSlot"), appData);

        // Call another method that will use this object.
        HandleClass.MethodB();

        // When done, free this data slot.
        Thread.FreeNamedDataSlot("appDataSlot");
    }

    public static void MethodB()
    {
        // Get the instance from the named data slot.
        ApplicationData storedAppData = (ApplicationData)Thread.GetData(
            Thread.GetNamedDataSlot("appDataSlot"));

        // Modify the ApplicationData.

        // When finished modifying this data, store the changes back into
        // the named data slot.
        Thread.SetData(Thread.GetNamedDataSlot("appDataSlot"), storedAppData);

        // Call another method that will use this object.
        HandleClass.MethodC();
    }

    public static void MethodC()
    {
        // Get the instance from the named data slot.
        ApplicationData storedAppData = (ApplicationData)Thread.GetData(
            Thread.GetNamedDataSlot("appDataSlot"));

        // Modify the data.

        // When finished modifying this data, store the changes back into
        // the named data slot.
        Thread.SetData(Thread.GetNamedDataSlot("appDataSlot"), storedAppData);
    }
}
Thread local storage is a convenient way to store data that is usable across method calls without the user having to pass the structure to the method or even knowing where the structure was actually created.
Data stored in a named TLS data slot is available only to that thread; no other thread can access a named data slot of another thread. The data stored in this data slot is accessible from anywhere within the thread. This setup essentially makes this data global to the thread. You should be aware that TLS slots are a limited resource and can vary based on platform.
To create a named data slot, use the static Thread.GetNamedDataSlot method. This method accepts a single parameter, name, that defines the name of the data slot. The name should be unique; if a data slot with the same name already exists, the contents of that data slot are returned and no new data slot is created. This happens silently; no exception is thrown and no error code informs you that you are using a data slot someone else created. To be sure that you are using a unique data slot, use the Thread.AllocateNamedDataSlot method, which throws a System.ArgumentException if a data slot with the same name already exists. Otherwise, it operates similarly to GetNamedDataSlot.
Note that a named data slot is created on every thread in the process, not just the thread that called the method. This fact should be little more than an inconvenience, though, since the data in each slot can be accessed only by the thread that contains it. In addition, if a data slot with the same name was created on a separate thread and you call GetNamedDataSlot on the current thread with that name, none of the data in any data slot on any thread is destroyed.
GetNamedDataSlot returns a LocalDataStoreSlot object that is used to access the data slot. Note that you cannot create this class using the new keyword; you must obtain it through one of the AllocateDataSlot or AllocateNamedDataSlot methods on the Thread class.
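If you want the stronger uniqueness guarantee described above, the AllocateNamedDataSlot call can be wrapped to detect a name collision. A small sketch (the helper class is illustrative):

```csharp
using System;
using System.Threading;

public static class SlotHelper
{
    public static LocalDataStoreSlot GetOrCreateAppDataSlot()
    {
        try
        {
            // Throws ArgumentException if "appDataSlot" already exists,
            // which proves nobody else created a slot with this name.
            return Thread.AllocateNamedDataSlot("appDataSlot");
        }
        catch (ArgumentException)
        {
            // The slot already exists; decide whether reusing someone
            // else's slot is acceptable before falling back like this.
            return Thread.GetNamedDataSlot("appDataSlot");
        }
    }
}
```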
To store data in the slot, use the static Thread.SetData method. This method takes the object passed in to the data parameter and stores it in the data slot defined by the dataSlot parameter.
The static Thread.GetData method retrieves the object stored in a data slot. This method accepts a LocalDataStoreSlot object, such as one obtained through the Thread.GetNamedDataSlot method, and returns the object that was stored in that particular data slot. Note that the returned object might have to be cast to its original type before it can be used.
The static Thread.FreeNamedDataSlot method frees the memory associated with a named data slot. It accepts the name of the data slot as a string and, in turn, frees the memory associated with that slot. Remember that when a data slot is created with GetNamedDataSlot, a named data slot is also created on all of the other threads running in the process. This is not really a problem when creating data slots with GetNamedDataSlot, because if a slot with that name already exists, a LocalDataStoreSlot object that refers to it is returned, no new slot is created, and the original data in the slot is not destroyed.
The situation becomes more of a problem when you use the FreeNamedDataSlot method. It frees the memory associated with the named data slot for all threads, not just the thread it was called on. Freeing a data slot before all threads have finished using its data can be disastrous to your application.
A way to work around this problem is to not call the FreeNamedDataSlot method at all. When a thread terminates, all of its data slots in TLS are freed automatically. The side effect of not calling FreeNamedDataSlot is that the slot is taken up until the garbage collector determines that the thread on which the slot was created has finished and the slot can be freed.
If you know the number of TLS slots you need for your code at compile time, consider using the ThreadStaticAttribute on a static field of your class to set up TLS-like storage.
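A minimal sketch of both approaches described above (the slot name "Counter" and the stored value are illustrative, not from any recipe code):

```csharp
using System;
using System.Threading;

public class TlsExample
{
    // Compile-time-known TLS: each thread gets its own copy of this field.
    [ThreadStatic]
    private static int _perThreadCounter;

    public static (int mainValue, bool otherThreadEmpty) Demo()
    {
        // Allocate (or look up) a named data slot; the name is arbitrary.
        LocalDataStoreSlot slot = Thread.GetNamedDataSlot("Counter");

        // Store data in the slot for the current thread only.
        Thread.SetData(slot, 42);

        // GetData returns object, so cast back to the original type.
        int mainValue = (int)Thread.GetData(slot);

        // A second thread looking up the same name sees its own, empty slot.
        bool otherThreadEmpty = false;
        var t = new Thread(() =>
        {
            object other = Thread.GetData(Thread.GetNamedDataSlot("Counter"));
            otherThreadEmpty = (other == null);
        });
        t.Start();
        t.Join();

        _perThreadCounter++; // the [ThreadStatic] field is isolated the same way
        return (mainValue, otherThreadEmpty);
    }

    public static void Main()
    {
        var (v, empty) = Demo();
        Console.WriteLine($"main thread slot: {v}, other thread empty: {empty}");
    }
}
```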
See the "Thread Local Storage and Thread Relative Static Fields," "ThreadStaticAttribute Attribute," and "Thread Class" topics in the MSDN documentation.
Use a semaphore to enable resource-counted access to the resource. For example, if you have an Xbox One and a copy of Halo 5 (the resource) and a development staff eager to blow off some steam (the clients), you have to synchronize access to the Xbox One. Since the Xbox One has up to eight controllers, up to eight clients can be playing at any given time. The rules of the house are that when you die, you give up your controller.
To accomplish this, create a class called Halo5Session with a Semaphore called _XboxOne, like this:
public class Halo5Session
{
    // A semaphore that simulates a limited resource pool.
    private static Semaphore _XboxOne;
To get things rolling, you need to call the Play method, as shown in Example 12-5, on the Halo5Session class.
    public static void Play()
    {
        // An XboxOne has 8 controller ports so 8 people can play at a time
        // We use 8 as the max and zero to start with as we want Players
        // to queue up at first until the XboxOne boots and loads the game
        //
        using (_XboxOne = new Semaphore(0, 8, "XboxOne"))
        {
            using (ManualResetEvent GameOver = new ManualResetEvent(false))
            {
                //
                // 13 Players log in to play
                //
                List<XboxOnePlayer.PlayerInfo> players =
                    new List<XboxOnePlayer.PlayerInfo>()
                    {
                        new XboxOnePlayer.PlayerInfo { Name="Igor", Dead=GameOver},
                        new XboxOnePlayer.PlayerInfo { Name="AxeMan", Dead=GameOver},
                        new XboxOnePlayer.PlayerInfo { Name="Dr. Death", Dead=GameOver},
                        new XboxOnePlayer.PlayerInfo { Name="HaPpyCaMpEr", Dead=GameOver},
                        new XboxOnePlayer.PlayerInfo { Name="Executioner", Dead=GameOver},
                        new XboxOnePlayer.PlayerInfo { Name="FragMan", Dead=GameOver},
                        new XboxOnePlayer.PlayerInfo { Name="Beatdown", Dead=GameOver},
                        new XboxOnePlayer.PlayerInfo { Name="Stoney", Dead=GameOver},
                        new XboxOnePlayer.PlayerInfo { Name="Pwned", Dead=GameOver},
                        new XboxOnePlayer.PlayerInfo { Name="Big Dawg", Dead=GameOver},
                        new XboxOnePlayer.PlayerInfo { Name="Playa", Dead=GameOver},
                        new XboxOnePlayer.PlayerInfo { Name="BOOM", Dead=GameOver},
                        new XboxOnePlayer.PlayerInfo { Name="Mr. Mxylplyx", Dead=GameOver}
                    };

                foreach (XboxOnePlayer.PlayerInfo player in players)
                {
                    Thread t = new Thread(XboxOnePlayer.JoinIn);
                    // put a name on the thread
                    t.Name = player.Name;
                    // fire up the player
                    t.Start(player);
                }

                // Wait for the XboxOne to spin up and load Halo5 (3 seconds)
                Console.WriteLine("XboxOne initializing...");
                Thread.Sleep(3000);
                Console.WriteLine(
                    "Halo 5 loaded & ready, allowing 8 players in now...");

                // The XboxOne has the whole semaphore count. We call
                // Release(8) to open up 8 slots and allow the waiting
                // players to enter the XboxOne (semaphore)
                // up to eight at a time.
                //
                _XboxOne.Release(8);

                // wait for the game to end...
                GameOver.WaitOne();
            }
        }
    }
}
The first thing the Play method does is create a new semaphore that has a maximum resource count of 8 and the name "XboxOne" (stored in the _XboxOne field). This is the semaphore that will be used by all of the player threads to gain access to the game. A ManualResetEvent called GameOver is created to track when the game has ended.
To simulate the developers, you create a thread for each, with its own XboxOnePlayer.PlayerInfo class instance to contain the player name and a reference to the original GameOver ManualResetEvent, held in the Dead event on the PlayerInfo, which indicates the player has died. To create the thread you use the ParameterizedThreadStart delegate, which takes the method to execute on the new thread in the constructor, but also allows you to pass the data object directly to an overload of the Thread.Start method.
Once the players are in motion, the Xbox One "initializes" and then calls Release on the semaphore to open eight slots for player threads to grab on to, and then waits until it detects that the game is over from the firing of the Dead event for the player.
The players initialize on separate threads and run the JoinIn method, as shown in Example 12-6. First they open the Xbox One semaphore by name and get the data that was passed to the thread. Once they have the semaphore, they call WaitOne to queue up to play. Once the initial eight slots are opened or another player "dies," the call to WaitOne unblocks and the player "plays" for a random amount of time and then dies. Once a player is dead, they call Release on the semaphore to indicate their slot is now open. If the semaphore reaches its maximum resource count, the GameOver event is set.
public class XboxOnePlayer
{
    public class PlayerInfo
    {
        public ManualResetEvent Dead {get; set;}
        public string Name {get; set;}
    }

    // Death Modes for Players
    private static string[] _deaths = new string[7]{"bought the farm",
        "choked on a rocket",
        "shot their own foot",
        "been captured",
        "fallen to their death",
        "died of lead poisoning",
        "failed to dodge a grenade",
        };

    /// <summary>
    /// Thread function
    /// </summary>
    /// <param name="info">PlayerInfo item</param>
    public static void JoinIn(object info)
    {
        // open up the semaphore by name so we can act on it
        using (Semaphore XboxOne = Semaphore.OpenExisting("XboxOne"))
        {
            // get the data object
            PlayerInfo player = (PlayerInfo)info;

            // Each player notifies the XboxOne they want to play
            Console.WriteLine($"{player.Name} is waiting to play!");

            // they wait on the XboxOne (semaphore) until it lets them
            // have a controller
            XboxOne.WaitOne();

            // The XboxOne has chosen the player! (or the semaphore has
            // allowed access to the resource...)
            Console.WriteLine($"{player.Name} has been chosen to play. " +
                $"Welcome to your doom {player.Name}. >:)");

            // figure out a random value for how long the player lasts
            System.Random rand = new Random(500);
            int timeTillDeath = rand.Next(100, 1000);

            // simulate the player is busy playing till they die
            Thread.Sleep(timeTillDeath);

            // figure out how they died
            rand = new Random();
            int deathIndex = rand.Next(6);

            // notify of the player's passing
            Console.WriteLine($"{player.Name} has {_deaths[deathIndex]} " +
                "and gives way to another player");

            // if all ports are open, everyone has played and the game is over
            int semaphoreCount = XboxOne.Release();
            if (semaphoreCount == 3)
            {
                Console.WriteLine("Thank you for playing, the game has ended.");
                // set the Dead event for the player
                player.Dead.Set();
            }
        }
    }
}
When the Play method is run, output similar to the following is generated:
Igor is waiting to play!
AxeMan is waiting to play!
Dr. Death is waiting to play!
HaPpyCaMpEr is waiting to play!
Executioner is waiting to play!
FragMan is waiting to play!
Beatdown is waiting to play!
Stoney is waiting to play!
Pwned is waiting to play!
Big Dawg is waiting to play!
Playa is waiting to play!
XboxOne initializing...
BOOM is waiting to play!
Mr. Mxylplyx is waiting to play!
Halo 5 loaded & ready, allowing 8 players in now...
Stoney has been chosen to play. Welcome to your doom Stoney. >:)
Executioner has been chosen to play. Welcome to your doom Executioner. >:)
Beatdown has been chosen to play. Welcome to your doom Beatdown. >:)
Pwned has been chosen to play. Welcome to your doom Pwned. >:)
Playa has been chosen to play. Welcome to your doom Playa. >:)
HaPpyCaMpEr has been chosen to play. Welcome to your doom HaPpyCaMpEr. >:)
Big Dawg has been chosen to play. Welcome to your doom Big Dawg. >:)
FragMan has been chosen to play. Welcome to your doom FragMan. >:)
Playa has been captured and gives way to another player
Stoney has been captured and gives way to another player
Pwned has been captured and gives way to another player
Big Dawg has been captured and gives way to another player
Mr. Mxylplyx has been chosen to play. Welcome to your doom Mr. Mxylplyx. >:)
BOOM has been chosen to play. Welcome to your doom BOOM. >:)
FragMan has been captured and gives way to another player
Dr. Death has been chosen to play. Welcome to your doom Dr. Death. >:)
HaPpyCaMpEr has been captured and gives way to another player
Igor has been chosen to play. Welcome to your doom Igor. >:)
Beatdown has been captured and gives way to another player
Executioner has been captured and gives way to another player
AxeMan has been chosen to play. Welcome to your doom AxeMan. >:)
BOOM has died of lead poisoning and gives way to another player
Thank you for playing, the game has ended.
Mr. Mxylplyx has died of lead poisoning and gives way to another player
Semaphores are used primarily for resource counting and are available cross-process when named (as they are based on the underlying kernel semaphore object). Cross-process may not sound too exciting to many .NET developers until they realize that it also means cross-AppDomain. Say you are creating additional AppDomains to hold assemblies you are loading dynamically that you don't want to stick around for the whole life of your main AppDomain; the semaphore can help you keep track of how many are loaded at a time. Being able to control access up to a certain number of users can be useful in many scenarios (socket programming, custom thread pools, etc.).
See the "Semaphore," "ManualResetEvent," and "ParameterizedThreadStart" topics in the MSDN documentation.
Use a named Mutex as a common signaling mechanism to do the coordination. A named Mutex can be accessed from both pieces of code even when running in different processes or AppDomains.
One situation in which this can be useful is when you are using shared memory to communicate between processes. The SharedMemoryManager class presented in this recipe will show the named Mutex in action by setting up a section of shared memory that can be used to pass serializable objects between processes. The "server" process creates a SharedMemoryManager instance, which sets up the shared memory and then creates the Mutex as the initial owner. The "client" process then also creates a SharedMemoryManager instance that finds the shared memory and hooks up to it. Once this connection is established, the "client" process then sets up to receive the serialized objects and waits until one is sent by waiting on the Mutex the "server" process created. The "server" process then takes a serializable object, serializes it into the shared memory, and releases the Mutex. It then waits on it again so that when the "client" has received the object, it can release the Mutex and give control back to the "server." The "client" process that was waiting on the Mutex then deserializes the object from the shared memory and releases the Mutex.
In the example, you will send the Contact structure, which looks like this:
[StructLayout(LayoutKind.Sequential)]
[Serializable()]
public struct Contact
{
    public string _name;
    public int _age;
}
The "server" process code to send the Contact looks like this:
// create the initial shared memory manager to get things set up
using(SharedMemoryManager<Contact> sm =
    new SharedMemoryManager<Contact>("Contacts",8092))
{
    // this is the sender process

    // launch the second process to get going
    string processName = Process.GetCurrentProcess().MainModule.FileName;
    int index = processName.IndexOf("vshost");
    if (index != -1)
    {
        string first = processName.Substring(0, index);
        int numChars = processName.Length - (index + 7);
        string second = processName.Substring(index + 7, numChars);
        processName = first + second;
    }
    Process receiver = Process.Start(
        new ProcessStartInfo(
            processName,
            "Receiver"));

    // give it 5 seconds to spin up
    Thread.Sleep(5000);

    // make up a contact
    Contact man;
    man._age = 23;
    man._name = "Dirk Daring";

    // send it to the other process via shared memory
    sm.SendObject(man);
}
The "client" process code to receive the Contact looks like this:
// create the initial shared memory manager to get things set up
using(SharedMemoryManager<Contact> sm =
    new SharedMemoryManager<Contact>("Contacts",8092))
{
    // get the contact once it has been sent
    Contact c = (Contact)sm.ReceiveObject();

    // Write it out (or to a database...)
    Console.WriteLine("Contact {0} is {1} years old.", c._name, c._age);

    // show for 5 seconds
    Thread.Sleep(5000);
}
The way this usually works is that one process creates a section of shared memory backed by the paging file, using the System.IO.MemoryMappedFiles.MemoryMappedFile class. You can see in Example 12-7 where the MemoryMappedFile is set up in the constructor code for the SharedMemoryManager and the private SetupSharedMemory method. The constructor takes a name to use as part of the shared memory name and the base size of the shared memory block to allocate. It is the base size because the SharedMemoryManager has to allocate a bit extra for keeping track of the data moving through the buffer.
The code to send an object through the shared memory is contained in the SendObject method, as shown in Example 12-8. First, it checks to see if the object being sent is indeed serializable by checking the IsSerializable property on the type of the object. If the object is serializable, an integer with the size of the serialized object and the serialized object content are written out to the shared memory section. Then, the Mutex is released to indicate that there is an object in the shared memory. It then waits on the Mutex again to wait until the "client" has received the object.
public void SendObject(TransferItemType transferObject)
{
    // create a memory stream, initialize size
    using (MemoryStream ms = new MemoryStream())
    {
        // get a formatter to serialize with
        BinaryFormatter formatter = new BinaryFormatter();
        try
        {
            // serialize the object to the stream
            formatter.Serialize(ms, transferObject);

            // get the bytes for the serialized object
            byte[] bytes = ms.ToArray();

            // check that this object will fit
            if(bytes.Length + sizeof(Int32) > MemoryRegionSize)
            {
                string msg =
                    $"{typeof(TransferItemType)} object instance serialized " +
                    $"to {bytes.Length} bytes which is too large for the shared " +
                    $"memory region";
                throw new ArgumentException(msg, nameof(transferObject));
            }

            // write to the shared memory region
            using (MemoryMappedViewStream stream =
                MemMappedFile.CreateViewStream())
            {
                BinaryWriter writer = new BinaryWriter(stream);
                writer.Write(bytes.Length); // write the size
                writer.Write(bytes);        // write the object
            }
        }
        finally
        {
            // signal the other process using the mutex to tell it
            // to do receive processing
            MutexForSharedMem.ReleaseMutex();

            // wait for the other process to signal it has received
            // and we can move on
            MutexForSharedMem.WaitOne();
        }
    }
}
The ReceiveObject method shown in Example 12-9 allows the client to wait until there is an object in the shared memory section and then reads the size of the serialized object and deserializes it to a managed object. It then releases the Mutex to let the sender know to continue.
public TransferItemType ReceiveObject()
{
    // wait on the mutex for an object to be queued by the sender
    MutexForSharedMem.WaitOne();

    // get the object from the shared memory
    byte[] serializedObj = null;
    using (MemoryMappedViewStream stream =
        MemMappedFile.CreateViewStream())
    {
        BinaryReader reader = new BinaryReader(stream);
        int objectLength = reader.ReadInt32();
        serializedObj = reader.ReadBytes(objectLength);
    }

    // set up the memory stream with the object bytes
    using (MemoryStream ms = new MemoryStream(serializedObj))
    {
        // set up a binary formatter
        BinaryFormatter formatter = new BinaryFormatter();

        // get the object to return
        TransferItemType item;
        try
        {
            item = (TransferItemType)formatter.Deserialize(ms);
        }
        finally
        {
            // signal that we received the object using the mutex
            MutexForSharedMem.ReleaseMutex();
        }
        // give them the object
        return item;
    }
}
A Mutex is designed to give mutually exclusive (thus the name) access to a single resource. A Mutex can be thought of as a cross-process named Monitor: a thread "enters" the Mutex by waiting on it and becoming the owner, then "exits" by releasing the Mutex for the next thread that is waiting on it. If a thread that owns a Mutex ends, the Mutex is released automatically.
Using a Mutex is slower than using a Monitor, as a Monitor is a purely managed construct, whereas a Mutex is based on the Mutex kernel object. A Mutex cannot be "pulsed" as a Monitor can, but it can be used across processes while a Monitor cannot. Finally, the Mutex is based on WaitHandle, so it can be waited on along with other objects derived from WaitHandle, like Semaphore and the event classes.
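The enter/exit ownership pattern just described can be sketched with a minimal, self-contained example (illustrative, not from the recipe). A local Mutex serializes increments from four threads; the named variant works identically across processes, though named mutexes require Windows on modern .NET:

```csharp
using System;
using System.Threading;

public class MutexExample
{
    private static readonly Mutex _mutex = new Mutex();
    private static int _counter;

    public static int Run()
    {
        var threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int j = 0; j < 1000; j++)
                {
                    _mutex.WaitOne();   // "enter": block until we own the mutex
                    try
                    {
                        _counter++;     // exclusive access to the shared resource
                    }
                    finally
                    {
                        _mutex.ReleaseMutex(); // "exit" for the next waiter
                    }
                }
            });
            threads[i].Start();
        }
        foreach (var t in threads) t.Join();
        return _counter;
    }

    public static void Main() =>
        Console.WriteLine(Run()); // 4000: no increments are lost
}
```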
The SharedMemoryManager class is listed in its entirety in Example 12-10.
See the "MemoryMappedFiles," "MemoryMappedFile Class," "Mutex," and "Mutex Class" topics in the MSDN documentation.
Use an AutoResetEvent to notify each thread when it is going to be served. For example, a diner has a cook and multiple waitresses. The waitresses can keep bringing in orders, but the cook can serve up only one at a time. You can simulate this with the Cook class shown in Example 12-11.
public class Cook
{
    public string Name { get; set; }

    public static AutoResetEvent OrderReady = new AutoResetEvent(false);

    public void CallWaitress()
    {
        // we call Set on the AutoResetEvent and don't have to
        // call Reset like we would with ManualResetEvent to fire it
        // off again. This sets the event that the waitress is waiting for
        // in PlaceOrder

        // order is ready....
        Console.WriteLine($"{Name} finished order!");
        OrderReady.Set();
    }
}
The Cook class has an AutoResetEvent called OrderReady that the cook will use to tell the waiting waitresses that an order is ready. Since there is only one order ready at a time, and this is an equal-opportunity diner, the waitress who has been waiting longest gets her order first. The AutoResetEvent allows for signaling just a single thread when you call Set on the OrderReady event.
The Waitress class has the PlaceOrder method that is executed for each waitress. PlaceOrder takes the waitress name and the AutoResetEvent from the Cook, which are passed in from the Task.Run call in the next code block. It calls WaitOne on the AutoResetEvent to wait until the order is ready. Once the Cook fires the event enough times that this waitress is at the head of the line, the code finishes:
public class Waitress
{
    public static void PlaceOrder(string waitressName,
        AutoResetEvent orderReady)
    {
        // order is placed....
        Console.WriteLine($"Waitress {waitressName} placed order!");
        // wait for the order...
        orderReady.WaitOne();
        // order is ready....
        Console.WriteLine($"Waitress {waitressName} got order!");
    }
}
The code to run the "diner" creates a Cook, spins off the Waitress tasks, and then calls the waitresses when their orders are ready by calling Set on the AutoResetEvent:
// We have a diner with a cook who can only serve up one meal at a time
Cook Mel = new Cook() { Name = "Mel" };
string[] waitressNames = { "Flo", "Alice", "Vera", "Jolene", "Belle" };

// Have waitresses place orders
foreach (var waitressName in waitressNames)
{
    Task.Run(() =>
    {
        // The Waitress places the order and then waits for the order
        Waitress.PlaceOrder(waitressName, Cook.OrderReady);
    });
}

// Have the cook fill the orders
for (int i = 0; i < waitressNames.Length; i++)
{
    // make the waitresses wait...
    Thread.Sleep(2000);
    // ok, next waitress, pickup!
    Mel.CallWaitress();
}
There are two types of events: AutoResetEvent and ManualResetEvent. There are two main differences between them. The first is that an AutoResetEvent releases only one of the threads that are waiting on the event, while a ManualResetEvent releases all of them when Set is called. The second is that when Set is called on an AutoResetEvent, it is automatically reset to a nonsignaled state, while a ManualResetEvent is left in a signaled state until the Reset method is called.
The output from the sample code looks like this:
Waitress Alice placed order!
Waitress Flo placed order!
Waitress Vera placed order!
Waitress Jolene placed order!
Mel finished order!
Waitress Alice got order!
Waitress Belle placed order!
Mel finished order!
Waitress Jolene got order!
Mel finished order!
Waitress Belle got order!
Mel finished order!
Waitress Flo got order!
Mel finished order!
Waitress Vera got order!
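The two differences between the event types can be demonstrated directly. This is a minimal sketch (not recipe code): three threads wait on each event, Set is called once on each, and the wake counts show that the AutoResetEvent released exactly one waiter while the ManualResetEvent released all three. The sleeps are generous so the demonstration is stable on a loaded machine:

```csharp
using System;
using System.Threading;

public class EventDemo
{
    public static (int autoWoken, int manualWoken) Run()
    {
        var auto = new AutoResetEvent(false);
        var manual = new ManualResetEvent(false);
        int autoWoken = 0, manualWoken = 0;

        void StartWaiters(WaitHandle handle, Action onWake)
        {
            for (int i = 0; i < 3; i++)
                new Thread(() => { handle.WaitOne(); onWake(); })
                    { IsBackground = true }.Start();
        }

        StartWaiters(auto, () => Interlocked.Increment(ref autoWoken));
        StartWaiters(manual, () => Interlocked.Increment(ref manualWoken));
        Thread.Sleep(500);  // let all six waiters block

        auto.Set();         // releases exactly one waiter, then resets itself
        manual.Set();       // releases all waiters and stays signaled
        Thread.Sleep(500);  // let the released threads run

        return (autoWoken, manualWoken);
    }

    public static void Main()
    {
        var (a, m) = Run();
        Console.WriteLine($"auto woke {a} thread(s), manual woke {m} thread(s)");
    }
}
```

The two AutoResetEvent waiters that were not released remain blocked, which is why the waiter threads are marked IsBackground so they do not keep the process alive.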
See the "AutoResetEvent" and "ManualResetEvent" topics in the MSDN documentation, and Programming Applications for Microsoft Windows, Fourth Edition (Microsoft Press).
Use the Interlocked family of functions to ensure atomic access. Interlocked has methods to increment and decrement values, add a specific amount to a given value, exchange an original value for a new value, compare the current value to the original value, and exchange the original value for a new value if it is equal to the current value.
To increment or decrement an integer value, use the Increment or Decrement methods, respectively:
int i = 0;
long l = 0;

Interlocked.Increment(ref i); // i = 1
Interlocked.Decrement(ref i); // i = 0
Interlocked.Increment(ref l); // l = 1
Interlocked.Decrement(ref l); // l = 0
To add a specific amount to a given integer value, use the Add method:
Interlocked.Add(ref i, 10);  // i = 10
Interlocked.Add(ref l, 100); // l = 100
To replace an existing value, use the Exchange method:
string name = "Mr. Ed";
Interlocked.Exchange(ref name, "Barney");
To check if another thread has changed a value out from under the existing code before replacing the existing value, use the CompareExchange method:
int i = 0;
double runningTotal = 0.0;
double startingTotal = 0.0;
double calc = 0.0;
for (i = 0; i < 10; i++)
{
    do
    {
        // store the original total
        startingTotal = runningTotal;

        // do an intense calculation
        calc = runningTotal + i * Math.PI * 2 / Math.PI;
    }
    // check to make sure runningTotal wasn't modified
    // and replace it with calc if not. If it was,
    // run through the loop until we get it current
    while (startingTotal != Interlocked.CompareExchange(
        ref runningTotal, calc, startingTotal));
}
In an operating system like Microsoft Windows, with its ability to perform preemptive multitasking, you must give certain considerations to data integrity when working with multiple threads. There are many synchronization primitives to help secure sections of code, as well as signal when data is available to be modified. To this list is added the capability to perform operations that are guaranteed to be atomic in nature.
If there has not been much threading or assembly language in your past, you might wonder what the big deal is and why you need these atomic functions at all. The basic reason is that the line of code written in C# ultimately has to be translated down to a machine instruction, and along the way, the one line of code written in C# can turn into multiple instructions for the machine to execute. If the machine has to execute multiple instructions to perform a task and the operating system allows for preemption, it is possible that these instructions may not be executed as a unit. They could be interrupted by other code that modifies the value being changed by the original line of C# code in the middle of the C# code being executed. As you can imagine, this could lead to some pretty spectacular errors, or it might just round off the lottery number that keeps a certain C# programmer from winning the big one.
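The lost-update problem described above is easy to demonstrate: `counter++` compiles down to a read, an add, and a write, and another thread can be scheduled between those instructions. A minimal sketch (illustrative, not recipe code) contrasting a plain increment with Interlocked.Increment:

```csharp
using System;
using System.Threading;

public class AtomicDemo
{
    public static (int unsafeTotal, int safeTotal) Run()
    {
        int unsafeCounter = 0, safeCounter = 0;
        var threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int j = 0; j < 100000; j++)
                {
                    unsafeCounter++;                        // read/add/write: not atomic
                    Interlocked.Increment(ref safeCounter); // a single atomic operation
                }
            });
            threads[i].Start();
        }
        foreach (var t in threads) t.Join();
        return (unsafeCounter, safeCounter);
    }

    public static void Main()
    {
        var (u, s) = Run();
        // safeTotal is always 400000; unsafeTotal typically comes up short
        // on multicore hardware because increments are lost to interleaving
        Console.WriteLine($"unsafe: {u}, safe: {s}");
    }
}
```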
Threading is a powerful tool, but like most "power" tools, you have to understand its operation to use it effectively and safely. Threading bugs are notorious for being some of the most difficult to debug, as the runtime behavior is not constant. Trying to reproduce them can be a nightmare, and adding logging can change the behavior, or worse, make the issue disappear! Recognizing that working in a multithreaded environment imposes a certain amount of forethought about protecting data access, and understanding when to use the Interlocked class, will go a long way toward preventing long, frustrating evenings with the debugger.
See the "Interlocked" and "Interlocked Class" topics in the MSDN documentation.
Use ReaderWriterLockSlim to give multiple-read/single-write access with the capacity to upgrade the lock from read to write. As an example, say a developer is starting a new project. Unfortunately, the project is understaffed, so the developer has to respond to tasks from many other individuals on the team. Each of the other team members will also ask the developer for status updates on their tasks, and some can even change the priority of the tasks the developer is assigned.
The developer is assigned a task via the AddTask method. To protect the DeveloperTasks collection, we use a write lock on the ReaderWriterLockSlim, calling EnterWriteLock when adding the task to the DeveloperTasks collection and ExitWriteLock when the addition is complete:
public void AddTask(DeveloperTask newTask)
{
    try
    {
        Lock.EnterWriteLock();
        // if we already have this task (unique by name)
        // then just accept the add as sometimes people
        // give you the same task more than once :)
        var taskQuery = from t in DeveloperTasks
                        where t == newTask
                        select t;
        if (taskQuery.Count<DeveloperTask>() == 0)
        {
            Console.WriteLine($"Task {newTask.Name} was added to developer");
            DeveloperTasks.Add(newTask);
        }
    }
    finally
    {
        Lock.ExitWriteLock();
    }
}
When a project team member needs to know about the status of a task, they call the IsTaskDone method, which uses a read lock on the ReaderWriterLockSlim by calling EnterReadLock and ExitReadLock:
public bool IsTaskDone(string taskName)
{
    try
    {
        Lock.EnterReadLock();
        var taskQuery = from t in DeveloperTasks
                        where t.Name == taskName
                        select t;
        if (taskQuery.Count<DeveloperTask>() > 0)
        {
            DeveloperTask task = taskQuery.First<DeveloperTask>();
            Console.WriteLine($"Task {task.Name} status was reported.");
            return task.Status;
        }
    }
    finally
    {
        Lock.ExitReadLock();
    }
    return false;
}
There are certain managerial members of the team who have the right to increase the priority of the tasks they assigned to the developer. They accomplish this by calling the IncreasePriority method on the Developer. IncreasePriority uses an upgradable lock on the ReaderWriterLockSlim by first calling the EnterUpgradeableReadLock method to acquire a read lock, and then, if the task is in the queue, upgrading to a write lock in order to adjust the priority of the task. Once the priority is adjusted, the write lock is released, which degrades the lock back to a read lock, and that lock is released through a call to ExitUpgradeableReadLock:
public void IncreasePriority(string taskName)
{
    try
    {
        Lock.EnterUpgradeableReadLock();
        var taskQuery = from t in DeveloperTasks
                        where t.Name == taskName
                        select t;
        if(taskQuery.Count<DeveloperTask>()>0)
        {
            DeveloperTask task = taskQuery.First<DeveloperTask>();
            Lock.EnterWriteLock();
            task.Priority++;
            Console.WriteLine($"Task {task.Name}" +
                $" priority was increased to {task.Priority}" +
                " for developer");
            Lock.ExitWriteLock();
        }
    }
    finally
    {
        Lock.ExitUpgradeableReadLock();
    }
}
The ReaderWriterLockSlim was created to replace the existing ReaderWriterLock for a number of reasons:
ReaderWriterLock was more than five times slower than using a Monitor.
Recursion semantics of ReaderWriterLock were not standard and were broken in some thread reentrancy cases.
The upgrade-lock method is nonatomic in ReaderWriterLock.
While the ReaderWriterLockSlim is only about two times slower than the Monitor, it is more flexible and prioritizes writes, so in "few write, many read" scenarios, it is more scalable than the Monitor. There are also methods to determine what type of lock is held, as well as how many threads are waiting to acquire it.
By default, lock acquisition recursion is disallowed. If you call EnterReadLock twice, you get a LockRecursionException. You can enable lock recursion by passing a LockRecursionPolicy.SupportsRecursion enumeration value to the constructor overload of ReaderWriterLockSlim that accepts it. Even though it is possible to enable lock recursion, it is generally discouraged, as it complicates matters and creates issues that are not fun to debug.
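The default and recursive behaviors can be sketched side by side (a minimal illustration, not recipe code):

```csharp
using System;
using System.Threading;

public class RecursionDemo
{
    public static bool DefaultThrows()
    {
        // default constructor uses LockRecursionPolicy.NoRecursion
        var noRecursion = new ReaderWriterLockSlim();
        noRecursion.EnterReadLock();
        try
        {
            noRecursion.EnterReadLock(); // second enter on the same thread
            return false;                // not reached under the default policy
        }
        catch (LockRecursionException)
        {
            return true;                 // default policy rejects recursion
        }
        finally
        {
            noRecursion.ExitReadLock();
        }
    }

    public static bool RecursiveAllowed()
    {
        var recursive =
            new ReaderWriterLockSlim(LockRecursionPolicy.SupportsRecursion);
        recursive.EnterReadLock();
        recursive.EnterReadLock(); // allowed, but must be exited once per enter
        recursive.ExitReadLock();
        recursive.ExitReadLock();
        return true;
    }

    public static void Main()
    {
        Console.WriteLine(DefaultThrows());    // True
        Console.WriteLine(RecursiveAllowed()); // True
    }
}
```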
There are some scenarios where the ReaderWriterLockSlim is not appropriate for use, although most of these are not applicable to everyday development:
Due to incompatible HostProtection attributes, ReaderWriterLockSlim is precluded from use in SQL Server CLR scenarios.
Because ReaderWriterLockSlim doesn't mark critical regions, hosts that use thread aborts won't know that the lock will be harmed by them, so issues will arise in the hosted AppDomains.
ReaderWriterLockSlim cannot handle asynchronous exceptions (thread aborts, out of memory, etc.) and could end up with a corrupt lock state, which could cause deadlocks or other issues.
ReaderWriterLockSlim has thread affinity, so it usually cannot be used with async and await. For those cases, use SemaphoreSlim.WaitAsync instead.
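Because SemaphoreSlim has no thread affinity, its WaitAsync method can safely be awaited even when the continuation resumes on a different thread. A minimal sketch (illustrative, not recipe code) gating async work to two concurrent operations:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public class AsyncGate
{
    // At most 2 concurrent async operations; both slots free initially.
    private static readonly SemaphoreSlim _gate = new SemaphoreSlim(2, 2);

    public static async Task<int> RunAsync(int taskCount)
    {
        int current = 0, maxObserved = 0;

        async Task Work()
        {
            await _gate.WaitAsync(); // asynchronously wait for a slot
            try
            {
                int now = Interlocked.Increment(ref current);

                // record the highest concurrency we ever observe
                int seen;
                do { seen = maxObserved; }
                while (now > seen && Interlocked.CompareExchange(
                           ref maxObserved, now, seen) != seen);

                await Task.Delay(50); // simulated async work
            }
            finally
            {
                Interlocked.Decrement(ref current);
                _gate.Release();      // free the slot for the next waiter
            }
        }

        var pending = new List<Task>();
        for (int i = 0; i < taskCount; i++) pending.Add(Work());
        await Task.WhenAll(pending);
        return maxObserved;
    }

    public static async Task Main() =>
        Console.WriteLine($"Max concurrent: {await RunAsync(5)}"); // at most 2
}
```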
The entire code base for the example is listed here:
static Developer s_dev = null;
static bool s_end = false;

public static void TestReaderWriterLockSlim()
{
    s_dev = new Developer(15);
    LaunchTeam(s_dev);
    Thread.Sleep(10000);
}

private static void LaunchTeam(Developer dev)
{
    LaunchManager("CTO", dev);
    LaunchManager("Director", dev);
    LaunchManager("Project Manager", dev);
    LaunchDependent("Product Manager", dev);
    LaunchDependent("Test Engineer", dev);
    LaunchDependent("Technical Communications Professional", dev);
    LaunchDependent("Operations Staff", dev);
    LaunchDependent("Support Staff", dev);
}

public class DeveloperTaskInfo
{
    public string Name { get; set; }
    public Developer Developer { get; set; }
}

private static void LaunchManager(string name, Developer dev)
{
    var dti = new DeveloperTaskInfo() { Name = name, Developer = dev };
    Task manager = Task.Run(() =>
    {
        Console.WriteLine($"Added {dti.Name} to the project...");
        DeveloperTaskManager mgr =
            new DeveloperTaskManager(dti.Name, dti.Developer);
    });
}

private static void LaunchDependent(string name, Developer dev)
{
    var dti = new DeveloperTaskInfo() { Name = name, Developer = dev };
    Task manager = Task.Run(() =>
    {
        Console.WriteLine($"Added {dti.Name} to the project...");
        DeveloperTaskDependent dep =
            new DeveloperTaskDependent(dti.Name, dti.Developer);
    });
}

public class DeveloperTask
{
    public DeveloperTask(string name)
    {
        Name = name;
    }
    public string Name { get; set; }
    public int Priority { get; set; }
    public bool Status { get; set; }
    public override string ToString() => this.Name;
    public override bool Equals(object obj)
    {
        DeveloperTask task = obj as DeveloperTask;
        return this.Name == task?.Name;
    }
    public override int GetHashCode() => this.Name.GetHashCode();
}

public class Developer : IDisposable
{
    /// <summary>
    /// List for the tasks
    /// </summary>
    private List<DeveloperTask> DeveloperTasks { get; } =
        new List<DeveloperTask>();
    private ReaderWriterLockSlim Lock { get; set; } =
        new ReaderWriterLockSlim();
    private System.Threading.Timer Timer { get; set; }
    private int MaxTasks { get; }

    public Developer(int maxTasks)
    {
        // the maximum number of tasks before the developer quits
        MaxTasks = maxTasks;
        // do some work every 1/4 second
        Timer = new Timer(new TimerCallback(DoWork), null, 1000, 250);
    }

    ~Developer()
    {
        Dispose(true);
    }

    // Execute a task
    protected void DoWork(Object stateInfo)
    {
        ExecuteTask();
        try
        {
            Lock.EnterWriteLock();
            // if we finished all tasks, go on vacation!
            if (DeveloperTasks.Count == 0)
            {
                s_end = true;
                Console.WriteLine(
                    "Developer finished all tasks, go on vacation!");
                return;
            }

            if (!s_end)
            {
                // if we have too many tasks quit
                if (DeveloperTasks.Count > MaxTasks)
                {
                    // get the number of unfinished tasks
                    var query = from t in DeveloperTasks
                                where t.Status == false
                                select t;
                    int unfinishedTaskCount = query.Count<DeveloperTask>();
                    s_end = true;
                    Console.WriteLine(
                        "Developer has too many tasks, quitting! " +
                        $"{unfinishedTaskCount} tasks left unfinished.");
                }
            }
            else
                Timer.Dispose();
        }
        finally
        {
            Lock.ExitWriteLock();
        }
    }

    public void AddTask(DeveloperTask newTask)
    {
        try
        {
            Lock.EnterWriteLock();
            // if we already have this task (unique by name)
            // then just accept the add as sometimes people
            // give you the same task more than once :)
            var taskQuery = from t in DeveloperTasks
                            where t == newTask
                            select t;
            if (taskQuery.Count<DeveloperTask>() == 0)
            {
                Console.WriteLine($"Task {newTask.Name} was added to developer");
                DeveloperTasks.Add(newTask);
            }
        }
        finally
        {
            Lock.ExitWriteLock();
        }
    }

    /// <summary>
    /// Increase the priority of the task
    /// </summary>
    /// <param name="taskName">name of the task</param>
    public void IncreasePriority(string taskName)
    {
        try
        {
            Lock.EnterUpgradeableReadLock();
            var taskQuery = from t in DeveloperTasks
                            where t.Name == taskName
                            select t;
            if(taskQuery.Count<DeveloperTask>()>0)
            {
                DeveloperTask task = taskQuery.First<DeveloperTask>();
                Lock.EnterWriteLock();
                task.Priority++;
                Console.WriteLine($"Task {task.Name}" +
                    $" priority was increased to {task.Priority}" +
                    " for developer");
                Lock.ExitWriteLock();
            }
        }
        finally
        {
            Lock.ExitUpgradeableReadLock();
        }
    }

    // <summary>
    // Allows people to check if the task is done
    // </summary>
    // <param name="taskName">name of the task</param>
    // <returns>False if the task is undone or not in the list,
    // true if done</returns>
    public bool IsTaskDone(string taskName)
    {
        try
        {
            Lock.EnterReadLock();
            var taskQuery = from t in DeveloperTasks
                            where t.Name == taskName
                            select t;
            if (taskQuery.Count<DeveloperTask>() > 0)
            {
                DeveloperTask task = taskQuery.First<DeveloperTask>();
                Console.WriteLine($"Task {task.Name} status was reported.");
                return task.Status;
            }
        }
        finally
        {
            Lock.ExitReadLock();
        }
        return false;
    }

    private void ExecuteTask()
    {
        // look over the tasks and do the highest priority
        var queryResult = from t in DeveloperTasks
                          where t.Status == false
                          orderby t.Priority
                          select t;
        if (queryResult.Count<DeveloperTask>() > 0)
        {
            // do the task
            DeveloperTask task = queryResult.First<DeveloperTask>();
            task.Status = true;
            task.Priority = -1;
            Console.WriteLine($"Task {task.Name} executed by developer.");
        }
    }

    #region IDisposable Support
    private bool disposedValue = false; // To detect redundant calls

    protected virtual void Dispose(bool disposing)
    {
        if (!disposedValue)
        {
            if (disposing)
            {
                Lock?.Dispose();
                Lock = null;
                Timer?.Dispose();
                Timer = null;
            }
            disposedValue = true;
        }
    }

    public void Dispose()
    {
        Dispose(true);
    }
    #endregion
}

public class DeveloperTaskManager : DeveloperTaskDependent, IDisposable
{
    private System.Threading.Timer ManagerTimer { get; set; }

    public DeveloperTaskManager(string name, Developer taskExecutor) :
        base(name, taskExecutor)
    {
        // intervene every 2 seconds
        ManagerTimer =
            new Timer(new TimerCallback(Intervene), null, 0, 2000);
    }

    ~DeveloperTaskManager()
    {
        Dispose(true);
    }

    // Intervene in the plan
    protected void Intervene(Object stateInfo)
    {
        ChangePriority();
        // developer ended, kill timer
        if (s_end)
        {
            ManagerTimer.Dispose();
            TaskExecutor = null;
        }
    }

    public void ChangePriority()
    {
        if (DeveloperTasks.Count > 0)
        {
            int taskIndex = _rnd.Next(0, DeveloperTasks.Count - 1);
            DeveloperTask checkTask = DeveloperTasks[taskIndex];
            // make those developers work faster on some random task!
            if (TaskExecutor != null)
            {
                TaskExecutor.IncreasePriority(checkTask.Name);
                Console.WriteLine(
                    $"{Name} intervened and changed priority for task {checkTask.Name}");
            }
        }
    }

    #region IDisposable Support
    private bool disposedValue = false; // To detect redundant calls

    protected override void Dispose(bool disposing)
    {
        if (!disposedValue)
        {
            if (disposing)
            {
                ManagerTimer?.Dispose();
                ManagerTimer = null;
                base.Dispose(disposing);
            }
            disposedValue = true;
        }
    }

    public new void Dispose()
    {
        Dispose(true);
    }
    #endregion
}

public class DeveloperTaskDependent : IDisposable
{
    protected List<DeveloperTask> DeveloperTasks { get; set; } =
        new List<DeveloperTask>();
    protected Developer TaskExecutor { get; set; }
    protected Random _rnd = new Random();

    private Timer TaskTimer { get; set; }
    private Timer StatusTimer { get; set; }

    public DeveloperTaskDependent(string name, Developer taskExecutor)
    {
        Name = name;
        TaskExecutor = taskExecutor;
        // add work every 1 second
        TaskTimer = new Timer(new TimerCallback(AddWork), null, 0, 1000);
        // check status every 3 seconds
        StatusTimer = new Timer(new TimerCallback(CheckStatus), null, 0, 3000);
    }

    ~DeveloperTaskDependent()
    {
        Dispose();
    }

    // Add more work to the developer
    protected void AddWork(Object stateInfo)
    {
        SubmitTask();
        // developer ended, kill timer
        if (s_end)
        {
            TaskTimer.Dispose();
            TaskExecutor = null;
        }
    }

    // Check Status of work with the developer
    protected void CheckStatus(Object stateInfo)
    {
        CheckTaskStatus();
        // developer ended, kill timer
        if (s_end)
        {
            StatusTimer.Dispose();
            TaskExecutor = null;
        }
    }

    public string Name { get; set; }

    public void SubmitTask()
    {
        int taskId = _rnd.Next(10000);
        string taskName = $"({taskId} for {Name})";
        DeveloperTask newTask = new DeveloperTask(taskName);
        if (TaskExecutor != null)
        {
TaskExecutor.AddTask(newTask); DeveloperTasks.Add(newTask); } } public void CheckTaskStatus() { if (DeveloperTasks.Count > 0) { int taskIndex = _rnd.Next(0, DeveloperTasks.Count - 1); DeveloperTask checkTask = DeveloperTasks[taskIndex]; if (TaskExecutor != null && TaskExecutor.IsTaskDone(checkTask.Name)) { Console.WriteLine($"Task {checkTask.Name} is done for {Name}"); // remove it from the todo list DeveloperTasks.Remove(checkTask); } } } #region IDisposable Support private bool disposedValue = false; // To detect redundant calls protected virtual void Dispose(bool disposing) { if (!disposedValue) { if (disposing) { TaskTimer?.Dispose(); TaskTimer = null; StatusTimer?.Dispose(); StatusTimer = null; } disposedValue = true; } } public void Dispose() { Dispose(true); } #endregion }
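The essential ReaderWriterLockSlim pattern used throughout the Developer class can be reduced to a small sketch. The Inventory class and its members below are illustrative, not part of the recipe:

```cs
using System.Collections.Generic;
using System.Threading;

public class Inventory
{
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private readonly List<string> _items = new List<string>();

    // Many readers may hold the read lock at the same time.
    public int Count()
    {
        _lock.EnterReadLock();
        try { return _items.Count; }
        finally { _lock.ExitReadLock(); }
    }

    // Writers get exclusive access.
    public void Add(string item)
    {
        _lock.EnterWriteLock();
        try { _items.Add(item); }
        finally { _lock.ExitWriteLock(); }
    }

    // An upgradeable read lets you check first, then escalate to a
    // write lock only when a change is actually needed.
    public void AddIfMissing(string item)
    {
        _lock.EnterUpgradeableReadLock();
        try
        {
            if (!_items.Contains(item))
            {
                _lock.EnterWriteLock();
                try { _items.Add(item); }
                finally { _lock.ExitWriteLock(); }
            }
        }
        finally { _lock.ExitUpgradeableReadLock(); }
    }
}
```

The try/finally shape matters: the exit call must run even if the guarded code throws, or the lock is leaked and every later caller deadlocks.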
You can see the series of events in the project in the output. Note the point, partway through, at which the developer has had enough and quits:
Added CTO to the project...
Added Director to the project...
Added Project Manager to the project...
Added Product Manager to the project...
Added Test Engineer to the project...
Added Technical Communications Professional to the project...
Added Operations Staff to the project...
Added Support Staff to the project...
Task (6267 for CTO) was added to developer
Task (6267 for CTO) status was reported.
Task (6267 for CTO) priority was increased to 1 for developer
CTO intervened and changed priority for task (6267 for CTO)
Task (6267 for Director) was added to developer
Task (6267 for Director) status was reported.
Task (6267 for Director) priority was increased to 1 for developer
Director intervened and changed priority for task (6267 for Director)
Task (6267 for Project Manager) was added to developer
Task (6267 for Project Manager) status was reported.
Task (6267 for Project Manager) priority was increased to 1 for developer
Project Manager intervened and changed priority for task (6267 for Project Manager)
Task (6267 for Product Manager) was added to developer
Task (6267 for Product Manager) status was reported.
Task (6267 for Technical Communications Professional) was added to developer
Task (6267 for Technical Communications Professional) status was reported.
Task (6267 for Operations Staff) was added to developer
Task (6267 for Operations Staff) status was reported.
Task (6267 for Support Staff) was added to developer
Task (6267 for Support Staff) status was reported.
Task (6267 for Test Engineer) was added to developer
Task (5368 for CTO) was added to developer
Task (5368 for Director) was added to developer
Task (5368 for Project Manager) was added to developer
Task (6153 for Product Manager) was added to developer
Task (913 for Test Engineer) was added to developer
Task (6153 for Technical Communications Professional) was added to developer
Task (6153 for Operations Staff) was added to developer
Task (6153 for Support Staff) was added to developer
Task (6267 for Product Manager) executed by developer.
Task (6267 for Technical Communications Professional) executed by developer.
Task (6267 for Operations Staff) executed by developer.
Task (6267 for Support Staff) executed by developer.
Task (6267 for CTO) priority was increased to 2 for developer
CTO intervened and changed priority for task (6267 for CTO)
Task (6267 for Director) priority was increased to 2 for developer
Director intervened and changed priority for task (6267 for Director)
Task (6267 for Project Manager) priority was increased to 2 for developer
Project Manager intervened and changed priority for task (6267 for Project Manager)
Task (6267 for Test Engineer) executed by developer.
Task (7167 for CTO) was added to developer
Task (7167 for Director) was added to developer
Task (7167 for Project Manager) was added to developer
Task (5368 for Product Manager) was added to developer
Task (6153 for Test Engineer) was added to developer
Task (5368 for Technical Communications Professional) was added to developer
Task (5368 for Operations Staff) was added to developer
Task (5368 for Support Staff) was added to developer
Task (5368 for CTO) executed by developer.
Task (5368 for Director) executed by developer.
Task (5368 for Project Manager) executed by developer.
Task (6267 for CTO) status was reported.
Task (6267 for Director) status was reported.
Task (6267 for Project Manager) status was reported.
Task (913 for Test Engineer) status was reported.
Task (6267 for Technical Communications Professional) status was reported.
Task (6267 for Technical Communications Professional) is done for Technical Communications Professional
Task (6267 for Product Manager) status was reported.
Task (6267 for Product Manager) is done for Product Manager
Task (6267 for Operations Staff) status was reported.
Task (6267 for Operations Staff) is done for Operations Staff
Task (6267 for Support Staff) status was reported.
Task (6267 for Support Staff) is done for Support Staff
Task (6153 for Product Manager) executed by developer.
Task (2987 for CTO) was added to developer
Task (2987 for Director) was added to developer
Task (2987 for Project Manager) was added to developer
Task (7167 for Product Manager) was added to developer
Task (4126 for Test Engineer) was added to developer
Task (7167 for Technical Communications Professional) was added to developer
Task (7167 for Support Staff) was added to developer
Task (7167 for Operations Staff) was added to developer
Task (913 for Test Engineer) executed by developer.
Task (6153 for Technical Communications Professional) executed by developer.
Developer has too many tasks, quitting! 21 tasks left unfinished.
Task (6153 for Operations Staff) executed by developer.
Task (5368 for CTO) priority was increased to 0 for developer
CTO intervened and changed priority for task (5368 for CTO)
Task (5368 for Director) priority was increased to 0 for developer
Director intervened and changed priority for task (5368 for Director)
Task (5368 for Project Manager) priority was increased to 0 for developer
Project Manager intervened and changed priority for task (5368 for Project Manager)
Task (6153 for Support Staff) executed by developer.
Task (4906 for Product Manager) was added to developer
Task (7167 for Test Engineer) was added to developer
Task (4906 for Technical Communications Professional) was added to developer
Task (4906 for Operations Staff) was added to developer
Task (4906 for Support Staff) was added to developer
Task (7167 for CTO) executed by developer.
Task (7167 for Director) executed by developer.
Task (7167 for Project Manager) executed by developer.
Task (5368 for Product Manager) executed by developer.
Task (6153 for Test Engineer) executed by developer.
Task (5368 for Technical Communications Professional) executed by developer.
Task (5368 for Operations Staff) executed by developer.
Task (5368 for Support Staff) executed by developer.
Task (2987 for CTO) executed by developer.
Task (2987 for Director) executed by developer.
Task (2987 for Project Manager) executed by developer.
Task (7167 for Product Manager) executed by developer.
Task (4126 for Test Engineer) executed by developer.
The “ReaderWriterLockSlim” topic in the MSDN documentation.
Use async, await, and the *Async versions of the database calls so that other work can be accomplished with threads in the program while you wait for the database I/O to complete.
If you aren’t familiar with async and await, which were introduced in C# 5.0, see the MSDN topic “Asynchronous Programming with Async and Await” for more details.
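If you need a quick refresher, the basic shape of an async method is shown in this small sketch (the method name is illustrative):

```cs
using System;
using System.Threading.Tasks;

public static class AsyncRefresher
{
    // An async method returns a Task (or Task<T>) and can await other tasks.
    public static async Task<int> CountCustomersAsync()
    {
        // await frees the calling thread while the operation is pending;
        // Task.Delay stands in here for a real database call
        await Task.Delay(100);
        return 42;
    }
}

// A caller awaits it in turn:
// int count = await AsyncRefresher.CountCustomersAsync();
```

The compiler rewrites the method into a state machine, so the code after the await runs as a continuation once the awaited task completes.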
If you use SqlConnection and SqlCommand, you would use the SqlConnection.OpenAsync and SqlCommand.ExecuteReaderAsync methods to open and query the database asynchronously:
using (SqlConnection conn =
    new SqlConnection(Settings.Default.NorthwindConnectionString))
{
    await conn.OpenAsync();
    SqlCommand cmd = new SqlCommand("SELECT * FROM CUSTOMERS", conn);
    SqlDataReader reader = await cmd.ExecuteReaderAsync();
    // read each row asynchronously as well
    while (await reader.ReadAsync())
    {
        Console.WriteLine($"Customer {reader["ContactName"].ToString()} " +
            $"from {reader["CompanyName"].ToString()}");
    }
}
If you use Entity Framework, you can use the IQueryable<T>.ToListAsync extension method to open the connection and execute the query asynchronously:
using (var efContext = new NorthwindEntities())
{
    var list = await (from cust in efContext.Customers
                      select cust).ToListAsync();
    foreach (var cust in list)
    {
        Console.WriteLine($"Customer {cust.ContactName} " +
            $"from {cust.CompanyName}");
    }
}
If you wanted to write a new record using Entity Framework and then read it back to check it, you would use the System.Data.Entity.DbContext.SaveChangesAsync method after adding a new entity to the context. Then you could use the IQueryable<T>.FirstOrDefaultAsync extension method to retrieve just the first matching item or null:
using (var efContext = new NorthwindEntities())
{
    // Make a new customer and save them
    Customer c = new Customer();
    c.CustomerID = "JENNA";
    c.ContactName = "Jenna Roberts";
    c.CompanyName = "Flamingo Industries";
    efContext.Customers.Add(c);
    await efContext.SaveChangesAsync();

    var jenna = await efContext.Customers.Where(cu =>
        cu.ContactName == "Jenna Roberts").FirstOrDefaultAsync();
    Console.WriteLine($"New Customer {jenna.ContactName} " +
        $"from {jenna.CompanyName}");
}
While some of the database technologies have implemented async support, not all of them are so lucky. LINQ to SQL is still derived from System.Data.Linq.DataContext, which does not have async support. You can still use LINQ to SQL for non-async operations like so:
using (var l2sContext = new NorthwindLinq2SqlDataContext())
{
    var list = (from cust in l2sContext.Customers
                select cust);
    foreach (var cust in list)
    {
        Console.WriteLine($"Customer {cust.ContactName} " +
            $"from {cust.CompanyName}");
    }
}
But if you try to use the async support with LINQ to SQL, as shown here, you will get this error:
var list = await (from cust in l2sContext.Customers
                  select cust).ToListAsync();

// Additional information: The source IQueryable doesn't implement
// IDbAsyncEnumerable<NorthwindLinq2Sql.Customer>. Only sources that implement
// IDbAsyncEnumerable can be used for Entity Framework asynchronous operations.
If you want async support for your database actions, you need to either use the System.Data.SqlClient constructs (such as SqlConnection, SqlDataReader, and so on) or use Entity Framework 6 or above.
See the “System.Data.SqlClient namespace,” “Asynchronous Programming with Async and Await,” and “Entity Framework Async Query & Save” topics in the MSDN documentation.
Use Task.ContinueWith to execute a follow-on task once a primary task has been completed.

ContinueWith allows you to append a task to be executed asynchronously upon the completion of an original task. This is useful when some of the tasks have an ordering constraint and some do not.
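As a minimal sketch before the larger example, a continuation can be chained to a task like this (the variable names are illustrative):

```cs
using System;
using System.Threading.Tasks;

class ContinueWithSketch
{
    static void Main()
    {
        // A primary task computes a value; the continuation receives
        // the completed antecedent task and consumes its Result.
        Task<int> download = Task.Run(() => 42);
        Task report = download.ContinueWith(antecedent =>
            Console.WriteLine($"Primary task produced {antecedent.Result}"));
        report.Wait();
    }
}
```

Accessing antecedent.Result inside the continuation does not block, because the continuation only runs after the primary task has completed.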
As an example, think about the 4 × 400 meter relay in the Olympics. There are a number of primary tasks (runners who run the first leg of the relay for each country), followed by tasks that depend on the result of the first task (the runners who are running the remaining legs of the relay). None of the dependent tasks can start until the previous task (the passing of the baton) is finished.
To represent each of the runners in the relay, we have a RelayRunner class that contains the Country the runner is representing, the Leg of the race he or she will run, whether he or she currently has the baton (HasBaton), and how long it took him or her to run the leg of the relay (LegTime). Finally, we have a Sprint method to make the RelayRunner sprint when it is his or her turn:
public class RelayRunner
{
    public string Country { get; set; }
    public int Leg { get; set; }
    public bool HasBaton { get; set; }
    public TimeSpan LegTime { get; set; }
    public int TotalLegs { get; set; }

    public RelayRunner Sprint()
    {
        Console.WriteLine(
            $"{Country} for Leg {Leg} has the baton and is running!");

        Random rnd = new Random();
        int ms = rnd.Next(100, 1000);
        // block to simulate the time spent running the leg
        // (an unawaited Task.Delay would not pause at all)
        Thread.Sleep(ms);

        // finished....
        LegTime = new TimeSpan(0, 0, 0, 0, ms);
        if (Leg == TotalLegs)
            Console.WriteLine($"{Country} has finished the race!");
        return this;
    }
}
Now that we have runners, we need to set up the countries they will run for (countries) and some tracking: who is running for each team (teams), who is in the race in general (runners), and who the first-leg runners are (firstLegRunners):
// Relay race in the Olympics
string[] countries = { "Russia", "France", "England", "United States",
    "India", "Germany", "China" };
Task<RelayRunner>[,] teams = new Task<RelayRunner>[countries.Length, 4];
List<Task<RelayRunner>> runners = new List<Task<RelayRunner>>();
List<Task<RelayRunner>> firstLegRunners = new List<Task<RelayRunner>>();
We will populate these collections with the runners so that the first-leg runner from each team has the baton, and every other runner starts only when the prior runner on the team finishes (via ContinueWith):
for (int i = 0; i < countries.Length; i++)
{
    for (int r = 0; r < 4; r++)
    {
        var runner = new RelayRunner()
        {
            Country = countries[i],
            Leg = r + 1,
            HasBaton = r == 0 ? true : false,
            TotalLegs = 4
        };

        if (r == 0) // add starting leg for country
        {
            Func<RelayRunner> funcRunner = runner.Sprint;
            teams[i, r] = new Task<RelayRunner>(funcRunner);
            firstLegRunners.Add(teams[i, r]);
        }
        else // add other legs for country
        {
            teams[i, r] = teams[i, r - 1].ContinueWith((lastRunnerRunning) =>
            {
                var lastRunner = lastRunnerRunning.Result;
                // Hand off the baton
                Console.WriteLine($"{lastRunner.Country} hands off from " +
                    $"{lastRunner.Leg} to {runner.Leg}!");

                Random rnd = new Random();
                int fumbleChance = rnd.Next(0, 10);
                if (fumbleChance > 8)
                {
                    Console.WriteLine(
                        $"Oh no! {lastRunner.Country} for Leg " +
                        $"{runner.Leg} fumbled the hand off from Leg " +
                        $"{lastRunner.Leg}!");
                    Thread.Sleep(1000);
                    Console.WriteLine($"{lastRunner.Country} for Leg " +
                        $"{runner.Leg}" +
                        " recovered the baton and is running again!");
                }
                lastRunner.HasBaton = false;
                runner.HasBaton = true;
                return runner.Sprint();
            });
        }

        // add to our list of runners
        runners.Add(teams[i, r]);
    }
}
To simulate the starting gun, we will use Parallel.ForEach to call Start on each of the first-leg-runner tasks. This produces a more random start order than a simple for loop would:
// Fire the gun to start the race!
Parallel.ForEach(firstLegRunners, r =>
{
    r.Start();
});
Finally, we use the list of all of the Task<RelayRunner> tasks and call Task.WaitAll to wait for the finish of the race:
// Wait for everyone to finish
Task.WaitAll(runners.ToArray());
While running tasks in a certain order goes against parallelism in general (for scalability, it would be best to run a set of tasks independently, in any order), the reality is that most tasks do have an order, and being able to represent that order in code is useful. ContinueWith provides a number of overloads with many parameters to control the action taken after the initial task completes. Table 12-1 lists the types of control structures available as parameters.
Value | Description
---|---
Action<Task> | The action to run when the initial task completes. This will be passed the original task as a reference.
Func<Task, TResult> | The function to run when the initial task completes. This will be passed the original task as a reference.
CancellationToken | The token assigned to the continuation task to allow for cancellation of the continuation.
TaskScheduler | The scheduler to use for this task (other than the default, if a different scheduling algorithm is necessary).
Object | A state object to pass into the continuation.
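Several of these parameters can be combined in a single call. Here is a hedged sketch (the task and state names are illustrative) using the overload that takes a state object, a CancellationToken, continuation options, and a scheduler:

```cs
using System;
using System.Threading;
using System.Threading.Tasks;

class ContinuationParameters
{
    static void Main()
    {
        var cts = new CancellationTokenSource();
        Task<int> work = Task.Run(() => 10);

        // Overload: ContinueWith(Action<Task<int>, object>, object state,
        //           CancellationToken, TaskContinuationOptions, TaskScheduler)
        Task continuation = work.ContinueWith(
            (antecedent, state) =>
                Console.WriteLine($"{state}: result was {antecedent.Result}"),
            "Report",                                      // the Object state parameter
            cts.Token,                                     // allows cancelling the continuation
            TaskContinuationOptions.OnlyOnRanToCompletion, // skip if work faulted or was canceled
            TaskScheduler.Default);

        continuation.Wait();
    }
}
```

OnlyOnRanToCompletion is one of several TaskContinuationOptions values; if the antecedent faults or is canceled, a continuation created with this option is canceled rather than run.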
To display the standings at the end of the race, we used the following code, which groups the runners by team and adds together their times (LegTime) with the LINQ Sum extension method:
Console.WriteLine(" Race standings:");

var standings = from r in runners
                group r by r.Result.Country into countryTeams
                select countryTeams;

string winningCountry = string.Empty;
int bestTime = int.MaxValue;

HashSet<Tuple<int, string>> place = new HashSet<Tuple<int, string>>();
foreach (var team in standings)
{
    var time = team.Sum(r => r.Result.LegTime.Milliseconds);
    if (time < bestTime)
    {
        bestTime = time;
        winningCountry = team.Key;
    }
    place.Add(new Tuple<int, string>(time,
        $"{team.Key} with a time of {time}ms"));
}

int p = 1;
foreach (var item in place.OrderBy(t => t.Item1))
{
    Console.WriteLine($"{p}: {item.Item2}");
    p++;
}

Console.WriteLine($" The winning team is from {winningCountry}");
The output from the race will look similar to this:
France for Leg 1 has the baton and is running!
United States for Leg 1 has the baton and is running!
Russia for Leg 1 has the baton and is running!
England for Leg 1 has the baton and is running!
France hands off from 1 to 2!
England hands off from 1 to 2!
Russia hands off from 1 to 2!
United States hands off from 1 to 2!
Russia for Leg 2 has the baton and is running!
Oh no! England for Leg 2 fumbled the hand off from Leg 1!
Oh no! France for Leg 2 fumbled the hand off from Leg 1!
United States for Leg 2 has the baton and is running!
Russia hands off from 2 to 3!
United States hands off from 2 to 3!
Russia for Leg 3 has the baton and is running!
Russia hands off from 3 to 4!
United States for Leg 3 has the baton and is running!
Russia for Leg 4 has the baton and is running!
United States hands off from 3 to 4!
United States for Leg 4 has the baton and is running!
United States has finished the race!
Russia has finished the race!
Germany for Leg 1 has the baton and is running!
Germany hands off from 1 to 2!
Germany for Leg 2 has the baton and is running!
Germany hands off from 2 to 3!
Germany for Leg 3 has the baton and is running!
India for Leg 1 has the baton and is running!
India hands off from 1 to 2!
India for Leg 2 has the baton and is running!
Germany hands off from 3 to 4!
Germany for Leg 4 has the baton and is running!
India hands off from 2 to 3!
India for Leg 3 has the baton and is running!
India hands off from 3 to 4!
India for Leg 4 has the baton and is running!
India has finished the race!
China for Leg 1 has the baton and is running!
Germany has finished the race!
China hands off from 1 to 2!
China for Leg 2 has the baton and is running!
China hands off from 2 to 3!
China for Leg 3 has the baton and is running!
China hands off from 3 to 4!
China for Leg 4 has the baton and is running!
China has finished the race!
France for Leg 2 recovered the baton and is running again!
France for Leg 2 has the baton and is running!
France hands off from 2 to 3!
France for Leg 3 has the baton and is running!
France hands off from 3 to 4!
France for Leg 4 has the baton and is running!
France has finished the race!
England for Leg 2 recovered the baton and is running again!
England for Leg 2 has the baton and is running!
England hands off from 2 to 3!
England for Leg 3 has the baton and is running!
England hands off from 3 to 4!
England for Leg 4 has the baton and is running!
England has finished the race!
 Race standings:
1: India with a time of 696ms
2: Germany with a time of 698ms
3: China with a time of 699ms
4: Russia with a time of 1510ms
5: United States with a time of 1540ms
6: France with a time of 2659ms
7: England with a time of 3625ms
 The winning team is from India
The “Task.ContinueWith,” “Task.WaitAll,” and “Parallel.ForEach” topics in the MSDN documentation and Concurrency in C# Cookbook, by Stephen Cleary (O’Reilly).