Chapter 5. Fundamentals of the Windows Workflow Foundation


Introduction

This text mainly focuses on the creation of services using the Windows Communication Foundation. The various chapters discuss the definition, security, structure, and customization of services. One aspect not covered in many discussions of the Windows Communication Foundation is the actual implementation of the services. To realize the value of a service-oriented architecture (SOA), services must expose valuable functionality. The primary representation of application logic has been code: irrespective of language, one distills the actions of an application—from data retrieval to processing logic—in a programming language. The Windows Workflow Foundation brings the power of a model-based, declarative process execution engine into the .NET Framework in order to move the experience of developing the functionality of services beyond writing lines and lines of code.

What Is Windows Workflow Foundation?

Windows Workflow Foundation is a component of the .NET 3.0 Framework for developing workflow-enabled applications. It is a technology used within Microsoft in products such as Microsoft Office SharePoint Server 2007, Microsoft Speech Server 2007, and in the next wave of Microsoft Dynamics products. This same technology is available to ISVs and software developers who use the .NET Framework. There are three core components to Windows Workflow Foundation:

  • Activity Framework

  • Runtime Environment

  • Workflow Designer

What Windows Workflow Foundation Is Not

The term workflow is heavily overloaded within the software development industry and the larger business community, so it is important to state clearly how Windows Workflow Foundation fits into those popular conceptions of workflow.

  • Windows Workflow Foundation is not a server, although one could centralize workflow functionality and expose it as a server for other applications to utilize.

  • Windows Workflow Foundation is not a Business Process Management (BPM) tool, although one could build a BPM tool using Windows Workflow Foundation as the workflow execution engine.

  • Windows Workflow Foundation is not targeted at business analysts, although one could expose functionality using the rehostable designer to allow business analysts to build their own workflow. The flexibility of Windows Workflow Foundation allows that to be incorporated into the analysts’ familiar environments. If the included designer does not work, one could also create a custom workflow designer.

  • Windows Workflow Foundation is not an enterprise application integration tool, although one could encapsulate third-party system functionality into activities and compose those into workflows.

  • Windows Workflow Foundation, however, is not a toy. It has been designed to operate to enterprise scale, in a redundant server-farm environment, with high performance. It is ready to be used by enterprise-class applications today, as evidenced by its usage with SharePoint Server. Windows Workflow Foundation by itself, however, is not an enterprise-class application; it is a developer toolkit.

  • That said, Windows Workflow Foundation is not only for server-based deployments. It can be used within a Windows Forms application to execute any of the application logic, from service coordination to UI customization. It can be used within a web application to manage process state. In short, it can be used to provide logic anywhere one can write .NET code.

Activities

Activities are the building blocks of Windows Workflow Foundation. Whether providing complex execution logic or executing an update against a SQL database, behavior is encapsulated into discrete units of work called activities. An activity is any class that ultimately derives from System.Workflow.ComponentModel.Activity. There are two aspects to an activity:

  • Runtime behavior

  • Design-time behavior

Runtime behavior is the code executed when the activity is used within a workflow. This might include calling a web service or executing a chunk of code, as well as coordinating the execution of child activities. One question often asked is, “How well does Windows Workflow Foundation perform?” At the activity level, the answer is simple: An activity executes as fast as the same piece of code residing in a .NET assembly. This is because an activity is simply a compiled .NET assembly that contains a class derived from Activity. It will execute just as fast (or as slow) as the .NET code contained within. That is the individual activity; overhead is then incurred for managing the lifecycle of an activity, as outlined in Figure 5.1.

Figure 5.1. The lifecycle of an activity.

An activity is initialized when the workflow that contains it is created, and it remains in the Initialized state. That is, when the workflow instance is created via WorkflowRuntime.CreateWorkflow, all of its activities are initialized. An activity is then moved into the Executing state when scheduled for execution. The normal path is then for the activity to move to the Closed state, where it should gracefully rest in peace, its work done.

There are complications, of course, that can occur along the way. An activity can encounter an error, entering the Faulting state on its way to Closed. The activity might be progressing nicely but be cancelled by another activity, entering the Canceling state prior to moving into the Closed state. Finally, an activity might be awakened from its Closed state and moved into the Compensating state when the activity has defined a way to roll back or compensate for its execution and something, such as an error or a cancellation of the process, needs to invoke that compensation logic.

That is a brief summary of the runtime behavior of an activity. An activity also has a design experience that is important when building workflows. An activity might require a special representation on the design surface in order to convey to the developer what the activity is doing. This might be something quite simple (an icon, logo, color or shape attached to the activity when it is displayed) or a complex layout that mimics the execution behavior of the activity. An example of an activity that benefits from a strong design component is the Parallel activity, as shown in Figure 5.2.

Figure 5.2. A Parallel activity.

Beyond the graphical representation of the activity, there might be validation behavior one would like as part of the design experience. Perhaps a PlaceOrder activity is not configured correctly until it contains both a ship-to address and a deliver-by date. The validation components allow one to specify the exact criteria required for the activity to be used within a workflow. This could be as simple as ensuring that a property has a value assigned to it, or as complex as a calculation and database lookup to determine whether a property value or values fall within a specified tolerance. Design and validation are topics discussed later in this chapter.

Out of the Box Activities

The activities that are shipped with Windows Workflow Foundation are often referred to as the out of the box activities. They are a collection of fundamental activities, many structural, that can be used to create simple workflows and compose new activities. Table 5.1 is a brief summary of each of these activities.

Table 5.1. Out of the Box Activities

  • CallExternalMethod: Invokes a method in the host

  • Code: Executes code defined in the code-beside file

  • Compensate: Invokes the target activity's compensation logic

  • CompensatableSequence: Sequence activity with the capability to define compensation

  • ConditionedActivityGroup: Rule-based activity group

  • Delay: Pauses workflow execution for a period of time

  • EventDriven: Sequence whose execution is triggered by an event

  • EventHandlingScope: Executes child activities while listening for events

  • FaultHandler: Composite activity that executes on an exception in the workflow

  • HandleExternalEvent: Waits for and receives a message from the host

  • IfElse: Conditional branching activity

  • InvokeWebService: Calls a web service

  • InvokeWorkflow: Asynchronously initiates another workflow

  • Listen: Waits for the first of a set of events

  • Parallel: Schedules parallel execution of child branches

  • Policy: Executes a rules policy

  • Replicator: Spawns a dynamic number of execution contexts for the contained sequence

  • Sequence: Enables the sequential execution of child activities

  • SetState: Sets the next state to be entered (used only in a state machine workflow)

  • State: Represents a state in a state machine workflow

  • StateFinalization: Occurs before transitioning to a new state (used only in a state machine workflow)

  • StateInitialization: Occurs when the State activity starts running (used only in a state machine workflow)

  • Suspend: Suspends workflow execution

  • SynchronizationScope: Serializes execution of contained activities to control access to shared variables

  • Terminate: Halts the workflow with an error

  • Throw: Raises an exception within the workflow

  • TransactionScope: Sequence activity executing within a transaction

  • CompensatableTransactionScope: Transaction scope with a defined compensation sequence

  • WebServiceInput: Exposes a workflow as a web service

  • WebServiceOutput: Returns a value from a workflow exposed as a web service

  • WebServiceFault: Returns a fault from a workflow exposed as a web service

  • While: Loops based on a rule condition

Creating Custom Activities

The activities included with Windows Workflow Foundation exist to provide a strong starting point for creating a workflow. However, it will be a very common activity (no pun intended) for a workflow developer to need to create a custom activity. From encapsulating frequently used functions to creating a custom composite activity to model a new pattern of execution, developers will need to start a workflow project by thinking about the activities needed. As time progresses, these activities can be reused, composed into higher-level activities, and handled just as common objects are today.

Basic

The most basic way to create a custom activity is to simply inherit from System.Workflow.ComponentModel.Activity. This will create all the basic machinery for an activity, except for the actual logic implemented by the activity. To do this, one should override the Execute() method, as shown in Listing 5.1.

Example 5.1. A Basic Activity

public class BasicActivity : Activity
{
    public BasicActivity()
    {
        this.Name = "BasicActivity";
    }

    protected override ActivityExecutionStatus Execute(
        ActivityExecutionContext executionContext)
    {
        Console.WriteLine("Basic Activity");
        return ActivityExecutionStatus.Closed;
    }
}

The Execute() method performs the work of the activity and is required to notify the runtime of its status. In this case, a status of Closed is returned, indicating the activity has completed its work. A more complex pattern is to return a status of Executing while waiting for some long-running work to complete. On completion of the long-running work, such as a manager approving an expense report, the activity notifies the runtime that it has completed. While waiting, the workflow instance might be idled and persisted until the long-running work finishes.
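To make the long-running pattern concrete, the following is a minimal sketch (not the book's code) of an activity that waits for a hypothetical approval message. The activity name and the "ApprovalQueue" queue name are assumptions for illustration; the WorkflowQueuingService calls are the standard WF 3.0 queuing API.

```csharp
using System;
using System.Workflow.ComponentModel;
using System.Workflow.Runtime;

public class WaitForApprovalActivity : Activity
{
    private const string QueueName = "ApprovalQueue";

    protected override ActivityExecutionStatus Execute(
        ActivityExecutionContext executionContext)
    {
        // Create a queue the host can deliver the approval message to,
        // then report Executing; the workflow may now idle and be persisted.
        WorkflowQueuingService queuingService =
            executionContext.GetService<WorkflowQueuingService>();
        WorkflowQueue queue = queuingService.CreateWorkflowQueue(QueueName, true);
        queue.QueueItemAvailable += OnApprovalReceived;
        return ActivityExecutionStatus.Executing;
    }

    private void OnApprovalReceived(object sender, QueueEventArgs e)
    {
        // The sender of QueueItemAvailable is the ActivityExecutionContext.
        ActivityExecutionContext context = (ActivityExecutionContext)sender;
        WorkflowQueuingService queuingService =
            context.GetService<WorkflowQueuingService>();
        WorkflowQueue queue = queuingService.GetWorkflowQueue(QueueName);
        object approval = queue.Dequeue();
        queuingService.DeleteWorkflowQueue(QueueName);
        context.CloseActivity(); // the activity now moves to Closed
    }
}
```

The host delivers the message by enqueuing it on the workflow instance's queue; until then the instance can be unloaded from memory entirely.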

As activities form the building blocks of a workflow, this pattern shows how one can very quickly wrap existing functionality inside an activity. An existing code library or component call can be wrapped inside an activity with very little work. In Chapter 6, “Using the Windows Communication Foundation and the Windows Workflow Foundation Together,” this technique will be used to encapsulate calls to Windows Communication Foundation services. The activity shown earlier is not particularly interesting, nor does it expose any useful customization. This might work if one is wrapping an API consisting entirely of hard-coded or parameterless functions, but one usually wants to control some parameters that affect the behavior in order to provide a useful activity. The simplest way to expose that capability is to add a property to the activity class, as shown in Listing 5.2.

Example 5.2. Adding a Property

private string textToPrint;

public string TextToPrint
{
    get { return textToPrint; }
    set { textToPrint = value; }
}

protected override ActivityExecutionStatus Execute(
    ActivityExecutionContext executionContext)
{
    Console.WriteLine("Text To Print: {0}", TextToPrint);
    return ActivityExecutionStatus.Closed;
}

By adding this property, the activity can be configured when the workflow is designed, as shown in Figure 5.3.

Figure 5.3. A standard property on the property grid.

When declaratively creating workflows in XAML (Extensible Application Markup Language), the XML representation of the workflow, these properties are set as attributes on the activity, as in Listing 5.3.

Example 5.3. Properties in XAML

<SequentialWorkflowActivity x:Class="SampleWFApplication.Workflow1"
  x:Name="Workflow1" xmlns:ns0="clr-namespace:SampleWFApplication"
  xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
  xmlns="http://schemas.microsoft.com/winfx/2006/xaml/workflow">
  <ns0:BasicActivity TextToPrint="Hello World"
    x:Name="basicActivity1"/>
</SequentialWorkflowActivity>

This allows the properties to be set in the designer to customize the behavior of the activity. But it remains fairly static, limited to the value input at design time. What if the scenario called for passing in a customer object that the workflow was created to evaluate? This is accomplished through the use of dependency properties. Dependency properties are similar to the standard .NET properties described earlier, but differ in declaration and usage. There is a built-in code snippet to create these in Visual Studio, but the general pattern is given in Listing 5.4.

Example 5.4. Creating a DependencyProperty

public static DependencyProperty OrderAmountProperty =
    System.Workflow.ComponentModel.DependencyProperty.Register(
        "OrderAmount", typeof(int), typeof(BasicActivity));
[Description("This property holds the amount of the order")]
[Category("Order Details")]
[Browsable(true)]
[DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)]
public int OrderAmount
{
    get
    {
        return ((int)(base.GetValue(BasicActivity.OrderAmountProperty)));
    }
    set
    {
        base.SetValue(BasicActivity.OrderAmountProperty, value);
    }
}

This slightly longer declaration appears to have some elements of a property, but also contains a static DependencyProperty declaration. A DependencyProperty is a special type of property that is attached to a DependencyObject, one of the classes that Activity inherits from. A DependencyProperty differs from a traditional property in that it supports three special use cases:

  • Activity binding

  • Metadata: assigning a value only at design time that is immutable during runtime

  • Attached properties: dynamically adding a property to an activity

The most common scenario is using dependency properties to support activity binding. The advantage gained by using dependency properties is additional design-time behaviors. Dropping the activity onto the design surface and inspecting its properties yields a new icon next to the property just declared, as shown in Figure 5.4.

Figure 5.4. DependencyProperty in grid.

Clicking on that new icon raises a new dialog, the bind dialog, as shown in Figure 5.5.

Figure 5.5. The Bind dialog.

The Bind dialog allows the value of the property to be dynamically bound to another value in the workflow. This could be a property on the workflow passed in at the time of workflow creation or this could be a property on another activity. At design time, the activity is told where to look for the value of this property. By selecting another value in the workflow of the same type (and this can be a custom type), a binding expression now appears in the property grid as the value of the property. It looks something like this:

Activity=Workflow1, Path=WorkflowOrderAmount

The first part identifies the source activity (in this case, the parent workflow), and the Path identifies the property on that activity to resolve. Dot notation can be used in the path, so if the desired value is nested a few layers beneath a property, it can still be reached.
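When serialized to XAML, a bound dependency property uses the ActivityBind markup extension instead of a literal value. The nested Customer.Order.Amount path below is purely hypothetical, illustrating the dot notation:

```xml
<ns0:BasicActivity x:Name="basicActivity1"
  OrderAmount="{ActivityBind Workflow1,Path=Customer.Order.Amount}" />
```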

Of course, with a dependency property, the value can still be hard-coded. In the previous example, there would be nothing to prevent one from inputting a fixed number, say 42, into the property grid.

Within the Bind dialog is a Bind to a New Member tab, as shown in Figure 5.6.

Figure 5.6. Property promotion via binding to a new member.

This dialog lets a dependency property of an activity be “promoted” to a member of the containing activity. In this case, binding to a new member will create a dependency property (or public member) on the workflow and insert the proper binding syntax for the activity's value. This lets the property of an activity contained within a composite activity bubble up to a property on the containing activity. When creating activities through composition, this is a useful way to mimic the polymorphism of inheritance: the containing activity can expose the properties of a contained activity as if it had inherited them from that activity.

Composition

The second way to create an activity, and the one Visual Studio defaults to, is through composition. The activity class definition of a newly created activity in Visual Studio looks like this:

public partial class ACompositeActivity: SequenceActivity

This inherits from SequenceActivity, the fundamental activity for building sequential workflows. The Execute() method of SequenceActivity is responsible for scheduling the execution of each contained child activity in sequence. By inheriting and not overriding Execute(), that behavior is preserved; it is exactly what is desired. Additionally, the design behavior of the SequenceActivity is preserved. This allows an activity developer to create a new activity by dragging and dropping other activities into the new activity, thus creating an activity out of existing activities. This is a powerful tool for creating new units of work to be used inside of a workflow. It means that given a powerful enough set of basic activities implemented in code, perhaps wrapping an existing API, one can very rapidly compose those units of functionality into new, higher-level units of functionality. To drive this concept home, consider the following activities:

  • SendEmail

  • LookupManager

  • WaitForResponseToEmail

With these activities, along with some of the structural activities provided by the out of the box activities, one can compose an arbitrarily complex approval activity, and then expose that activity out as “the” approval activity to use in any workflow (see Figure 5.7).

Figure 5.7. An Approval activity composed of more basic activities.

As higher-order activities are created, such as Approval, these too can be composed again and again into additional activities, allowing processes of arbitrary sophistication to be created. A NewProductIdeaGeneration activity might consist of a parallel activity containing a set of cross-departmental feedback requests that each need an approval. This activity could then be used in a New Product Creation workflow, with the workflow developer unaware of the many layers of implementation detail behind the NewProductIdeaGeneration activity. This developer just knows that the NewProductIdeaGeneration activity will execute, and when it completes, it will be populated with a set of vetted, approved ideas to be used elsewhere in the workflow. As mentioned earlier, properties of contained activities can be promoted to properties on the containing activity, allowing the useful customization properties to be exposed to users of the containing activity.
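Under the covers, composition simply populates the composite's Activities collection. A hand-coded equivalent of the Approval activity might look like the sketch below; the three contained activity classes are the hypothetical ones named earlier (LookupManager, SendEmail, WaitForResponseToEmail), and the designer normally generates this wiring in InitializeComponent():

```csharp
using System.Workflow.Activities;

public class ApprovalActivity : SequenceActivity
{
    public ApprovalActivity()
    {
        // Child activities execute in order because SequenceActivity
        // schedules them sequentially; the class names here are the
        // hypothetical custom activities described in the text.
        this.Name = "ApprovalActivity";
        this.Activities.Add(new LookupManagerActivity());
        this.Activities.Add(new SendEmailActivity());
        this.Activities.Add(new WaitForResponseToEmailActivity());
    }
}
```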

It is these higher-level, composition-based activities that many organizations look to have nondevelopers arrange into workflows. Consider the document approval activity created earlier. By providing that activity and a parallel activity, a business analyst could create the approval workflow for a given class of documents. This is the approach exposed through SharePoint Designer and Microsoft Office SharePoint Server, which allows a business analyst or power user to create and customize document workflows in SharePoint. The challenge of providing a repository where activities can be referenced across an enterprise or department, and their characteristics expressed for other developers and analysts to reference, is left as an exercise for the reader.

Custom Composite Activities

There exists an additional type of activity one can create—a custom composite activity. A composite activity is one that contains child activities, and its Execute() method is responsible for scheduling the execution of those child activities. Out of the box, examples of composite activities are Sequence, While, and Parallel.

Those activities, however, express only a few types of execution semantics. There are many other ways in which one might want to execute child activities, such as a priority execution where child activities are executed in some type of prioritized ordering. One might want an activity that executes branches in parallel but allows the developer to set a looping variable on each branch, so that not only does it execute in parallel, but it also executes each branch a specified number of times.

The implementation of such activities deals with some advanced intricacies of activity execution. Shukla and Schmidt’s “Essential Windows Workflow Foundation” is recommended to explore the details of activity execution and ways to create advanced, custom composite activities.

The composite activity executes by scheduling the execution of its child activities and subscribing to those child activities' Closed events. The activity then returns the Executing status, indicating that the runtime should continue by executing the next scheduled activity. On receiving a child's Closed event, the composite can proceed to schedule other activities or evaluate whether it can close. When all of the child activities complete, or the composite decides enough has been done, the composite returns a status of Closed, indicating it has completed. The workflow runtime enforces a restriction that all child activities must be Closed before allowing the parent activity to close.
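The scheduling pattern just described can be sketched as a minimal, sequence-like composite. This is not production code; it assumes the standard WF 3.0 RegisterForStatusChange/IActivityEventListener plumbing that Essential Windows Workflow Foundation covers in depth:

```csharp
using System.Workflow.ComponentModel;

// Schedules one child at a time; closes itself when the last child closes.
public class SimpleSequenceActivity : CompositeActivity,
    IActivityEventListener<ActivityExecutionStatusChangedEventArgs>
{
    private int currentIndex;

    protected override ActivityExecutionStatus Execute(
        ActivityExecutionContext executionContext)
    {
        if (this.EnabledActivities.Count == 0)
            return ActivityExecutionStatus.Closed;
        ScheduleChild(executionContext, 0);
        return ActivityExecutionStatus.Executing;
    }

    private void ScheduleChild(ActivityExecutionContext context, int index)
    {
        currentIndex = index;
        Activity child = this.EnabledActivities[index];
        // Listen for the child reaching the Closed state, then schedule it
        child.RegisterForStatusChange(Activity.ClosedEvent, this);
        context.ExecuteActivity(child);
    }

    // Invoked by the runtime when a scheduled child closes
    public void OnEvent(object sender, ActivityExecutionStatusChangedEventArgs e)
    {
        e.Activity.UnregisterForStatusChange(Activity.ClosedEvent, this);
        ActivityExecutionContext context = (ActivityExecutionContext)sender;
        if (currentIndex + 1 < this.EnabledActivities.Count)
            ScheduleChild(context, currentIndex + 1);
        else
            context.CloseActivity();
    }
}
```

A priority or looping composite would differ only in which child ScheduleChild picks next.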

Communicating with Activities

Workflows do not operate in a purely isolated environment; rather, they will frequently need to interact with the host application to communicate messages out to the host or to wait to receive a message from the host. Two out of the box activities are designed to support this: HandleExternalEvent and CallExternalMethod. These activities communicate with the host via a contract shared between the host and the workflow. The implementation of this contract is provided to the runtime ExternalDataExchangeService as a local service.

To clarify what is going on here, an example of each of the activities follows, based on the scenario of surveying employees. First, create a contract to be shared between the host and the workflow, and decorate it with the ExternalDataExchange attribute, as shown in Listing 5.5.

Example 5.5. Interface for Workflow Communication

using System;
using System.Workflow.ComponentModel;
using System.Workflow.Activities;

namespace ExternalEventSample
{
    [ExternalDataExchange()]
    public interface ISurveyResponseService
    {
        void SurveyEmployee(string employee, string surveyQuestion);
        event EventHandler<SurveyEventArgs> SurveyCompleted;
    }

    [Serializable]
    public class SurveyEventArgs : ExternalDataEventArgs
    {
        private string employee;
        public string Employee
        {
            get { return employee; }
            set { employee = value; }
        }

        private string surveyResponse;
        public string SurveyResponse
        {
            get { return surveyResponse; }
            set { surveyResponse = value; }
        }

        public SurveyEventArgs(Guid instanceId,
                               string employee, string surveyResponse)
            : base(instanceId)
        {
            this.employee = employee;
            this.surveyResponse = surveyResponse;
        }
    }
}

Listing 5.5 defines an interface with a public method and an event, along with a custom event arguments class. Next, provide an implementation of the interface, as in Listing 5.6. This will be used by the host to expose the functionality to the workflow.

Example 5.6. Implementation of Interface

using System;


namespace ExternalEventSample
{
    class SurveyResponseService : ISurveyResponseService
    {
        public void SurveyEmployee(string employee, string surveyQuestion)
        {
            // here we would notify and display the survey
            Console.WriteLine("Hey {0}, what do you think of {1}?",
            employee, surveyQuestion);
        }
        public event EventHandler<SurveyEventArgs> SurveyCompleted;
        public void CompleteSurvey(Guid instanceId, string employee,
                                           string surveyResponse)
        {
            // the host will call this method when it wants
            // to raise the event into the workflow.
            // Note that the workflow instance id needs to be passed in.
            EventHandler<SurveyEventArgs> surveyCompleted =
            this.SurveyCompleted;
            if (surveyCompleted != null)
            {
                surveyCompleted(null,
                new SurveyEventArgs(instanceId, employee, surveyResponse));
            }
        }
    }
}

Now, add the ExternalDataExchange service to the runtime and add an instance of the interface implementation as a local service, as in Listing 5.7. Additionally, use the OnWorkflowIdled event as the opportunity to send the response to the host. An assumption is made in this sample that only one workflow type will be executing, and the only time it will go idle is while waiting for a survey response.

Example 5.7. Configure the Host for Communication

using System;
using System.Collections.Generic;
using System.Text;
using System.Threading;
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;
using System.Workflow.Activities;

namespace ExternalEventSample
{
     class Program
     {
          static SurveyResponseService surveyService;
          static void Main(string[] args)
          {
              using (WorkflowRuntime workflowRuntime = new WorkflowRuntime())
              {
                  // add the local service to the external data exchange service
                  surveyService = new SurveyResponseService();
                  ExternalDataExchangeService dataService =
                       new ExternalDataExchangeService();
                  workflowRuntime.AddService(dataService);
                  dataService.AddService(surveyService);
                  AutoResetEvent waitHandle = new AutoResetEvent(false);
                  workflowRuntime.WorkflowCompleted +=
                   delegate(object sender, WorkflowCompletedEventArgs e)
                   { waitHandle.Set(); };
                  workflowRuntime.WorkflowTerminated +=
                    delegate(object sender, WorkflowTerminatedEventArgs e)
                    {
                      Console.WriteLine(e.Exception.Message);
                      waitHandle.Set();
                    };
                    workflowRuntime.WorkflowIdled += OnWorkflowIdled;
                    WorkflowInstance instance =
                      workflowRuntime.CreateWorkflow(
                        typeof(ExternalEventSample.Workflow1));
                    instance.Start();
                    waitHandle.WaitOne();
              }
          }
          static void OnWorkflowIdled(object sender, WorkflowEventArgs e)
          {
              surveyService.CompleteSurvey(e.WorkflowInstance.InstanceId,
              "Matt", "Very Satisfied");
          }
     }
}

Moving to the workflow, drag the CallExternalMethod activity onto the design surface. Note that the smart tag validation indicates that neither the interface type nor the method name has been defined. Clicking on the InterfaceType property will bring up a standard type browser that will allow the selection of the proper interface (see Figure 5.8). The method can then be selected from the drop-down list of the MethodName property. After the method is selected, additional properties will be added that correspond to the input parameters as well as the output as defined by the interface, as shown in Figure 5.9.

Figure 5.8. Browsing for the interface to use.

Figure 5.9. The post-binding property page; note the new property.

When the CallExternalMethod activity executes, it gets access to the implementation of the contract via the ActivityExecutionContext.GetService and calls the method.

To use the HandleExternalEvent activity, the first thing that has to be done is to provide a way for the host application to raise the event for the workflow runtime to receive and route the message. This is accomplished by calling the method on the service implementation:

public void CompleteSurvey(Guid instanceId, string employee, string surveyResponse)
{
    EventHandler<SurveyEventArgs> surveyCompleted = this.SurveyCompleted;
    if (surveyCompleted != null)
    {
        surveyCompleted(null, new SurveyEventArgs(instanceId, employee, surveyResponse));
    }
}

This raises an event, which the runtime routes to the correct workflow instance based on the instance ID passed in via the ExternalDataEventArgs. Internally, the HandleExternalEvent activity creates a queue and subscribes to messages placed on that queue. When the event is received, the runtime places the message on the queue waiting for that type of message. It is possible to have multiple queues listening for the same type of message—imagine waiting for multiple employees to complete a survey. In this case, which queue (and therefore, which activity) an event is routed to can be specified by using correlation. A more thorough treatment of correlation can be found in the documentation contained in the Windows SDK.
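As a hedged sketch of what correlation looks like, the shared contract can be decorated with the correlation attributes from the WF runtime; assuming here that the employee name serves as the correlation token, each HandleExternalEvent instance would then receive only the response for its own employee:

```csharp
using System;
using System.Workflow.Activities;
using System.Workflow.Runtime;

namespace ExternalEventSample
{
    // "employee" is the assumed correlation token for this sketch
    [ExternalDataExchange]
    [CorrelationParameter("employee")]
    public interface ICorrelatedSurveyService
    {
        // Calling this method establishes the correlation value
        [CorrelationInitializer]
        void SurveyEmployee(string employee, string surveyQuestion);

        // The event is matched to a waiting activity via the Employee property
        [CorrelationAlias("employee", "e.Employee")]
        event EventHandler<SurveyEventArgs> SurveyCompleted;
    }
}
```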

To use HandleExternalEvent, first drop it onto the design surface. Similar to CallExternalMethod, an interface type must be specified, and then the event itself (see Figure 5.10).

Figure 5.10. Selecting the event type.

When the workflow reaches the HandleExternalEvent activity, it will set up a queue to listen for the specified event type, and then will either go idle, or will process other activities that are scheduled for execution (if, for instance, the HandleExternalEvent activity occurs inside a Parallel activity).

Many times, a workflow needs to have a timeout while listening for an event. In the earlier sample, the workflow will listen indefinitely for the event to be received. It might be that the process should wait for an hour, or two days, or three weeks, but at some point, additional logic has to be executed, such as sending a reminder email. The Listen activity can be used to facilitate this. The Listen activity can contain multiple branches, each of which must start with an activity that implements IEventActivity. The HandleExternalEvent activity is one such activity; another is the Delay activity. The Delay activity exposes a TimeoutDuration property of type TimeSpan to set the length of the delay. In this way, the Listen activity can be used to model a timeout while waiting for input: in one branch is the HandleExternalEvent activity, and in the other a Delay activity, as shown in Figure 5.11.

Figure 5.11. Using the Listen activity to model a timeout.
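The same composition can be built in code rather than in the designer. The following sketch assumes an [ExternalDataExchange] interface named ISurveyService with a SurveyCompleted event; both names are hypothetical.

```csharp
using System;
using System.Workflow.Activities;

[ExternalDataExchange]
public interface ISurveyService   // assumed interface for illustration
{
    event EventHandler<ExternalDataEventArgs> SurveyCompleted;
}

public static class ListenTimeoutSketch
{
    // Builds a Listen activity with two branches: one waiting for the
    // assumed SurveyCompleted event, one timing out after two days.
    public static ListenActivity Build()
    {
        EventDrivenActivity eventBranch = new EventDrivenActivity();
        HandleExternalEventActivity completed = new HandleExternalEventActivity();
        completed.InterfaceType = typeof(ISurveyService);
        completed.EventName = "SurveyCompleted";
        eventBranch.Activities.Add(completed);

        EventDrivenActivity timeoutBranch = new EventDrivenActivity();
        DelayActivity delay = new DelayActivity();
        delay.TimeoutDuration = TimeSpan.FromDays(2);
        timeoutBranch.Activities.Add(delay);
        // a "send reminder email" activity would follow the delay here

        ListenActivity listen = new ListenActivity();
        listen.Activities.Add(eventBranch);
        listen.Activities.Add(timeoutBranch);
        return listen;
    }
}
```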

In the Windows SDK, there is a tool called wca.exe that can be pointed at an existing interface to create strongly typed activities for sending and receiving messages. This allows activities to be created that correspond to frequently used communications and increases the performance by generating a strongly typed activity. For each event declared on the interface, a derivative of HandleExternalEvent will be created; for each method, a derivative of CallExternalMethod will be created.
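The following is illustrative of the kind of class wca.exe emits for an event; all names here are hypothetical. The generated derivative fixes the interface type and event name at construction, so the activity is ready to drop onto the design surface without further configuration.

```csharp
using System;
using System.Workflow.Activities;

[ExternalDataExchange]
public interface ISurveyService   // assumed interface for illustration
{
    event EventHandler<ExternalDataEventArgs> SurveyCompleted;
}

// Sketch of a wca.exe-style strongly typed HandleExternalEvent derivative.
public class SurveyCompletedEvent : HandleExternalEventActivity
{
    public SurveyCompletedEvent()
    {
        this.InterfaceType = typeof(ISurveyService);
        this.EventName = "SurveyCompleted";
    }
}
```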

Design Behavior

It is the job of the workflow activity developer to define both the runtime and the design-time characteristics of an activity. Just as a Windows Forms control can have special design time functions, an activity can present the workflow developer with a special user interface to set up a complex set of properties, display itself in such a way that the intent or usage of the activity is clear (a red color for an exception activity), or ensure that the inputs to the activity are valid.

The design characteristics of an activity are defined in special classes outside the activity declaration itself. An activity is declared, and that declaration is decorated with attributes for design-time characteristics. This allows for multiple activities to share a designer class, and keeps the code of the activity focused on the execution behavior.

There are two major types of design behavior that can be specified. The first is the actual design and display of the activity on the workflow design surface. The second is validation, specifying the valid and invalid ways an activity can be used. This validation can be structural (a transaction scope cannot be nested in another transaction scope) or logical (a PlaceOrder activity must have an order amount greater than 0). Design-time behavior is a complicated topic and beyond the scope of this introductory chapter. The reader is encouraged to explore the Windows SDK for samples and documentation on the topic of activity design.

Validation

Validation is an important aspect to the design behavior of an activity. To validate an activity, there are two steps:

  1. Create a validator class that derives from ActivityValidator

  2. Apply the ActivityValidator attribute to the Activity class

An ActivityValidator object overrides one important method:

public ValidationErrorCollection Validate (ValidationManager manager, object obj)

This method evaluates any and all possible validation conditions and returns a collection of the errors that occurred. In addition, there are two types of errors: errors and warnings. An error will stop compilation of the workflow; a warning will allow compilation to complete, but will output a warning message. An example of this might be on a ShipItem activity: One might not be required to put a postal code on the shipping label, however, it is important enough that its absence should be called to someone’s attention at compile time. It might be the desired behavior not to provide a postal code, but most of the time, one should be provided, so a warning will alert the developer of the potential error.

Activity validators are also called when a workflow is loaded from an XmlReader in order to ensure that a valid workflow has been handed to the runtime to execute. In this case, a failed validation will result in an error during the attempted loading of the workflow.
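A minimal sketch of such a load follows; the file name is an assumption for illustration. The validators run during the CreateWorkflow call, so a failed validation surfaces as an exception at load time rather than during execution.

```csharp
using System.Workflow.Runtime;
using System.Xml;

class XamlLoadSketch
{
    static void Main()
    {
        using (WorkflowRuntime runtime = new WorkflowRuntime())
        using (XmlReader reader = XmlReader.Create("DemoWorkflow.xoml"))
        {
            // Validators run here; an invalid definition raises an error
            // before the instance is ever started.
            WorkflowInstance instance = runtime.CreateWorkflow(reader);
            instance.Start();
        }
    }
}
```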

A sample validator follows in Listing 5.8; this code checks to ensure that a value has been entered for the TransactionAmount property:

Example 5.8. Sample Validator

private class CustomValidator : ActivityValidator
{
    public override ValidationErrorCollection Validate
                      (ValidationManager manager, object obj)
    {
        ValidationErrorCollection errorCollection =
            base.Validate(manager, obj);
        DemoActivity demoActivity = obj as DemoActivity;
        if (demoActivity != null && demoActivity.TransactionAmount <= 0)
        {
            errorCollection.Add(new ValidationError(
                "Transaction Amount must be greater than 0", 8675309, false));
        }
        return errorCollection;
    }
}

The constructor for ValidationError has a few overloads, allowing an error number to be specified, a Boolean for determining whether this is a warning, and a property name if a specific property is responsible for the error. The property name, if specified, will allow the designer to set focus to that property in the property grid if the validation error is clicked on in the designer.
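A snippet illustrating that overload follows, continuing the ShipItem scenario described earlier. The property name PostalCode and the error number are assumed values for illustration; the snippet would sit inside a Validate override like the one in Listing 5.8.

```csharp
// A warning (isWarning: true) tied to a specific property, so the designer
// can focus that property in the property grid when the error is clicked.
ValidationError warning = new ValidationError(
    "No postal code was provided for the shipping label", // error text
    1001,           // arbitrary error number
    true,           // isWarning: compilation completes, with a warning
    "PostalCode");  // property to focus in the property grid
errorCollection.Add(warning);
```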

When an error collection is returned, the developer is notified of this through the smart tag that appears adjacent to the activity. Clicking on the smart tag allows the developer to review the different errors that were returned by the executing validator, as shown in Figure 5.12.

Figure 5.12. Validation results displayed in a smart tag.

The validator is applied to an activity by decorating the definition of the class with the ActivityValidator attribute, as shown in the following code.

[ActivityValidator(typeof(CustomValidator))]
public partial class DemoActivity : SequenceActivity

Transactions and Compensation

Vital parts of any process are the ability to make sure that work is accomplished, and to be able to handle situations where the work at any given step needs to be undone.

Windows Workflow Foundation supports two different constructs aimed at solving those problems.

Over the short term of execution, it makes sense to have an ACID (atomic, consistent, isolated, and durable) transaction. If multiple database updates are occurring and transactional objects are being modified, it is desirable to make sure that all or none of the changes occur. The .NET Framework 2.0 provides a very nice model for this in the System.Transactions namespace. By wrapping a series of transaction-aware calls in a transaction scope, the developer gets transaction support, while System.Transactions manages the complexities of involving resource managers and escalating transactions (see Listing 5.9).

Example 5.9. Using System.Transactions

using (TransactionScope ts = new TransactionScope())
{
    // do transactional work here
    ts.Complete();
}

This is the model of transactions that is supported by the TransactionScope and CompensatableTransactionScope activities included in Windows Workflow Foundation. The TransactionScope activity functions in the same way that the using statement in Listing 5.9 wraps a series of calls in a single transaction. Any action taken by activities on transaction-aware objects, such as updates to a SQL database, will occur within an ACID transaction managed by System.Transactions. To ensure that the workflow state and the transaction state are consistent, the call to the persistence service to persist the workflow following the transaction scope will be included in the transaction as well. In this way, the entire transaction indicating scope completion as well as individual changes made by the contained activities will either commit or fail, so there is never an inconsistency of workflow state and transaction state.

Transactions work very well when executed over a short period of time. In a long-running process, the mechanics of an ACID transaction begin to break down. It is not feasible to maintain a lock on a row in SQL for a period of weeks during an expense-reporting process. This is where the concept of compensation comes in.

Compensation is the set of actions that need to be taken if a completed, closed activity has to be rolled back. This model gives the flexibility needed to offer fine-grained control that is specific to an individual activity. In the case of expense reporting, it might be desirable if the entire process is cancelled, and that the initial records of the expense report are kept in the database but are marked with a “cancelled” status. It might also be the case that the need is to delete those rows. Either one of those behaviors, or any other, can be implemented as the compensation for a given activity. Compensation can be defined for a specific activity by implementing ICompensatableActivity as shown in Listing 5.10.

Example 5.10. Compensatable Activity

public class DemoCompensatableActivity : Activity, ICompensatableActivity
{
    public ActivityExecutionStatus Compensate(ActivityExecutionContext executionContext)
    {
        // undo the completed work of this activity
        return ActivityExecutionStatus.Closed;
    }
}

This allows the developer to define the specific behaviors required by the individual activity when it is told to compensate. Compensation will occur in the following cases:

  • An unhandled exception occurs within the workflow

  • By using the CompensateActivity to provide more fine-grained control over which activities compensate

It is important to note that the compensation will occur for every instance of an activity marked as ICompensatableActivity that has completed. This means that if such an activity is placed within a while loop that executes 10 times, the compensation logic will be executed for each instance of the activity; in this case, 10 times. This is accomplished by tracking the context in which the activity executed and keeping this as part of the state of the workflow.

There are also compensation scope activities, which allow compensation behavior to be modeled as a sequential workflow around a group of arbitrary activities. The CompensatableSequence and CompensatableTransactionScope surface this functionality when one right-clicks on the activity and selects View Compensation Handler, which will display a sequential activity to define the specific compensation behavior for that group of activities as seen in Figures 5.13 and 5.14. This is very useful in scenarios where compensation behavior needs to be defined for a group of activities.

Figure 5.13. CompensatableSequence context menu.

Figure 5.14. CompensationHandler sequence.

The CompensatableTransactionScope activity functions in the same way, except that the contained activities will execute within a System.Transactions transaction to ensure ACID behavior in the short term, while the compensation handler defines how to undo that work if an error occurs weeks later.

Workflow Models

A single activity, on its own, is not much to look at. It performs a task, and it executes that task in a nice, controlled fashion, but a single activity is comparable to a single method call on a class. Useful, but it is not until those method calls are composed that the purpose of the application becomes apparent.

It is the arrangement of work that defines a workflow. This loose concept leads to the many overloaded uses of the term. In the case of Windows Workflow Foundation, the units of work being arranged are activities. An activity might perform a very narrow, specific function, such as inserting a row into a database, or it might be purely structural, such as allowing the parallel execution of its contained activities.

In Windows Workflow Foundation, the arrangement of activities into workflows is accomplished in a fashion consistent with the remainder of the model: activities are composed into a containing activity that defines the execution behavior of the contained activities. To say this another way, the runtime of the Windows Workflow Foundation only knows how to execute activities; that is the primary job it is tasked with. By providing different root activities, different execution behaviors can be obtained.

Moving beyond the abstract, there are two out-of-the-box models of workflow in Windows Workflow Foundation, and each of these models corresponds to a special activity type. These are sequential and state machine workflows. A sequential workflow is what many people traditionally associate with workflow: a flowchart defining linear execution. A state machine, a concept familiar to many developers, defines a number of states an application can be in (the application can be in one and only one state at any time), the events to listen for, and any logic to execute on receipt of those events, including changing state. This by no means limits the execution pattern to these two types; on the contrary, any execution behavior that can be defined in the code of the Execute() method can be a workflow model. Jon Flanders, a Windows Workflow Foundation developer, wrote such a model that would randomly execute its contained activities. Although few business processes call for random execution, it illustrates the spectrum of execution patterns one could implement.

Sequential Workflows

A sequential workflow, without relying too heavily on recursive definitions, is simply the sequential execution of its contained activities. The most basic sequential workflow is a linear arrangement of activities, as shown in Figure 5.15.

Figure 5.15. A basic sequential workflow.

Windows Workflow Foundation contains activities that can be used to provide greater flexibility in sequential execution. The two most frequently used are the IfElse and Parallel activities. Another commonly used activity is the ConditionedActivityGroup (CAG), which is described in detail in the “Rules Engine” section of this chapter. As mentioned elsewhere throughout this chapter, it is possible to compose these activities to arbitrary depth; that is, a branch in a parallel can contain an IfElse with a branch using a CAG, which has a path of execution using a Parallel activity.

IfElse Activity

When just sketching a process, not many boxes will be drawn before the inevitable decision diamond appears, indicating some choice that needs to be made. The criteria that define the condition are often written inside the diamond, as shown in Figure 5.16.

Figure 5.16. Napkin workflow with decision diamond.

Inside of a workflow, when a decision needs to be made, the IfElse activity is used. The IfElse activity contains at least a single child branch, an activity of type IfElseBranchActivity. An IfElseBranchActivity is a sequential composite activity with a special property, Condition. It is the condition that determines whether the activities contained inside the IfElseBranchActivity are to be executed or ignored. The IfElse activity can contain an arbitrary number of branches, each having a defined condition. The branch defined last, appearing rightmost in the designer, does not require a condition; it acts as the else branch of the if statement. Within the designer, to add an additional branch, right-click on the IfElse activity and select the Add Branch item that appears on the context menu, as shown in Figure 5.17. The result is seen in Figure 5.18.

Figure 5.17. IfElse context menu.

Figure 5.18. Multiple branch IfElse activity.

The validators on the IfElseBranchActivity require a condition be assigned unless it is the Else branch. However, there is no enforcement of exclusivity of conditions. That is, there might be three branches: The first defines a condition x > 5 and the second defines a condition x > 10. The validators do not look to see that there is a collision (namely, if x > 10, the condition on both branches would be satisfied). The IfElse activity will evaluate the conditions in a left-to-right sequence and will execute the first branch, and only the first branch, whose condition evaluates to true. If all the conditions evaluate to false and there is an Else branch, the Else branch will be executed. The different types of rule conditions and their use are discussed later in this chapter in the “Rules Engine” section.
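A branch condition is often supplied as a code condition, a code-beside handler that sets a Boolean result. The sketch below assumes a workflow field named x and a handler name of the author's choosing; both are hypothetical.

```csharp
using System;
using System.Workflow.Activities;

public partial class DemoWorkflow : SequentialWorkflowActivity
{
    private int x;  // assumed workflow field used by the branch conditions

    // Code-beside handler for a CodeCondition assigned to an
    // IfElseBranchActivity; the branch executes only if e.Result is true.
    private void BranchCondition_GreaterThanTen(object sender, ConditionalEventArgs e)
    {
        e.Result = this.x > 10;
    }
}
```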

Parallel Activity

Within the process, there might be a time when multiple activities need to execute simultaneously. There might also be a time when one wants to ensure that some number (n) of activities have completed before moving on to the next activity. The activities arranged in parallel might all execute over roughly the same amount of time, or the times might differ radically: one branch quickly calling a web service to retrieve the credit limit of a customer, while another branch waits for a call-center representative to complete an over-the-phone survey of the same customer. Presumably, the web service containing credit limit information would return well before the phone survey completes. The model of this process can be seen in Figure 5.19.

Figure 5.19. A simple parallel activity.

The Parallel activity is designed for this scenario. The Parallel activity consists of at least two Sequence activities. Additional branches can be added by selecting the Parallel activity, right-clicking, and selecting Add Branch in the same way that one adds branches to the IfElse activity. This leads to the very important conversation regarding the execution order of items contained inside a Parallel activity, which first must start off with the statement that the Parallel activity is not multithreaded.

This point bears repeating: The Parallel activity is not multithreaded. This stems primarily from the design decision made by the team that a workflow instance will run on one and only one thread at a given time. This decision was made to keep the simple, compositional nature of the Windows Workflow Foundation programming model, in a word, simple. In the words of the architects themselves:

One big advantage of a single-threaded execution model is the simplification of activity development. Activity developers need not worry about concurrent execution of an activity’s methods. Locking, preemption, and other aspects of multi-threaded programming are not a part of Windows Workflow Foundation activity development, and these simplifications are important given Windows Workflow Foundation’s goal of broad adoption by .NET developers (from Essential Windows Workflow Foundation by Shukla and Schmidt).

Multithreaded programming is complicated, and the ability to reuse activities in multiple workflow types makes activity-hardening a very tough task when the concerns of multi-threading are added. Additionally, this is not to say that the runtime engine is single-threaded. On the contrary, by default the engine will schedule workflow instances to execute on as many threads as it has been programmed to use to execute workflows. A workflow instance is tied to a single thread while it is executing, but the usual use of workflow is that there will be multiple instances of that workflow running at any one time, each on its own thread. Workflows can execute in parallel on multiple threads, but a single workflow instance will use one thread.
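The number of simultaneously executing instances is controlled by the scheduler service. A minimal configuration sketch follows; the cap of 4 is an arbitrary assumed value.

```csharp
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;

class SchedulerConfigSketch
{
    static void Main()
    {
        // Cap how many workflow instances the runtime executes at once.
        // Each instance still runs on only one thread at any given time.
        WorkflowRuntime runtime = new WorkflowRuntime();
        runtime.AddService(new DefaultWorkflowSchedulerService(4));
        runtime.StartRuntime();
        // ... create and start workflow instances here ...
        runtime.StopRuntime();
    }
}
```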

The question inevitably is raised, “What about my <tricky bit of computation> that requires me to multithread or else my performance will be horrible?” Again, going back to Shukla and Schmidt:

Computations that benefit from true concurrency are, for Windows Workflow Foundation programs, best abstracted as features of a service; the service can be made available to activities... In this way, the service can be executed in an environment optimized for true concurrency, which might or might not be on the machine on which the Windows Workflow Foundation program instance is running.

In other words, if one has such a computation, the best way to leverage it from a workflow is to expose it as a service, using many of the techniques described elsewhere in this book. Chapter 6 will deal with integrating Windows Communication Foundation services with Windows Workflow Foundation programs.

Consider the following workflow, which contains a Parallel activity with three branches. Each of the code activities will output the branch it is in and the position in which it appears within the branch, as shown in Figure 5.20.

Figure 5.20. A parallel activity with three branches.

The output of this workflow will be the following:

Output of Parallel Execution

Left 0
Middle 0
Right 0
Left 1
Middle 1
Right 1
Left 2

To understand the earlier interleaved sequence, one must understand how the activities are actually being executed. A workflow instance maintains a schedule of activities to execute. As the parallel activity executes, it schedules each of the child activities (these are Sequence activities) for execution. This is scheduled by placing a delegate to the Execute() method onto the execution queue used by the runtime. The Parallel activity has scheduled the Left, Middle, and Right Sequence activities to execute, in that order. The execution queue now looks as shown in Figure 5.21.

Figure 5.21. Execution queue as scheduled by the Parallel activity.

The first activity to execute is Left Sequence. It is responsible for executing its contained activities, so it adds its activity to the execution queue. The queue now looks as shown in Figure 5.22.

Figure 5.22. The execution queue after the Left Sequence activity executes.

Execution continues until the queue looks as shown in Figure 5.23.

Figure 5.23. The execution queue with code activities scheduled.

When the left0 activity completes, the event handler of the Left Sequence activity is called and schedules the next activity, left1, for execution. The queue now looks as shown in Figure 5.24.

Figure 5.24. The execution queue after left0 executes.

This pattern continues until all the Sequence activities have completed. Then the Parallel activity completes and the workflow moves on.
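The scheduling pattern above can be modeled with a toy dispatcher. This is a simplified model, not the actual Windows Workflow Foundation scheduler: each "sequence" prints one item and then re-enqueues its next child behind whatever is already waiting, which is what produces the interleaving shown in the output listing.

```csharp
using System;
using System.Collections.Generic;

// Toy model of the single-threaded execution queue (not the real runtime).
class SchedulerToy
{
    static readonly Queue<Action> queue = new Queue<Action>();

    static void Schedule(string branch, int count, int index)
    {
        if (index >= count) return;
        queue.Enqueue(delegate
        {
            Console.WriteLine("{0} {1}", branch, index);
            Schedule(branch, count, index + 1); // schedule the next child
        });
    }

    static void Main()
    {
        // the Parallel activity schedules each branch once, in order...
        Schedule("Left", 3, 0);
        Schedule("Middle", 2, 0);
        Schedule("Right", 2, 0);

        // ...and a single dispatcher thread drains the queue
        while (queue.Count > 0)
            queue.Dequeue()();
    }
}
```

Run as a console program, this toy reproduces the interleaved output shown earlier (Left 0, Middle 0, Right 0, Left 1, Middle 1, Right 1, Left 2).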

The workflow developer can have finer-grained control over the execution sequence by using the SynchronizationScope activity. SynchronizationScope is a composite activity one can use to serialize the execution of a group of activities. In the workflow just shown, assume that the right branch must execute only after the left branch has completed fully. It might be that the business process describes these two branches as parallel, but due to implementation, it is necessary that one execute prior to another (note to the process consultant, this might be a bottleneck to look at!). A SynchronizationScope activity has a SynchronizationHandles property that is used to determine the way in which the various SynchronizationScope activities interact with one another. The SynchronizationHandles property is a collection of strings used as handles on a shared resource. When a SynchronizationScope executes, it attempts to obtain a lock on each of the handles and will not execute until it obtains the lock. If the handle is locked by another SynchronizationScope, the other SynchronizationScope activities will wait to acquire the lock. In this way, access to these handles, and the activities within, can be serialized.

This is best shown by the workflow in Figure 5.25, which is the example shown earlier modified to contain three SynchronizationScope activities. The leftmost SynchronizationScope activity has a synchronization handle of a, the middle b, and the right both a and b. In plain language, the right branch will not execute until both the left and middle branches have completed.

Figure 5.25. A parallel activity with SynchronizationScope activities on each branch.

Listing 5.11 expresses this in XAML.

Example 5.11. SynchronizationScope Example

<SequentialWorkflowActivity x:Class="WCFHandsOne.SynchronizationWithParallel"
  x:Name="SynchronizationWithParallel"
  xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
  xmlns="http://schemas.microsoft.com/winfx/2006/xaml/workflow">
  <ParallelActivity x:Name="parallelActivity1">
    <SequenceActivity x:Name="sequenceActivity1">
      <SynchronizationScopeActivity
       x:Name="synchronizationScopeActivity1" SynchronizationHandles="a">
        <CodeActivity x:Name="codeActivity1"
          ExecuteCode="codeActivity1_ExecuteCode" />
        <CodeActivity x:Name="codeActivity4"
          ExecuteCode="codeActivity1_ExecuteCode" />
        <CodeActivity x:Name="codeActivity7"
          ExecuteCode="codeActivity1_ExecuteCode" />
      </SynchronizationScopeActivity>
    </SequenceActivity>
    <SequenceActivity x:Name="sequenceActivity2">
      <SynchronizationScopeActivity
        x:Name="synchronizationScopeActivity2" SynchronizationHandles="b">
        <CodeActivity x:Name="codeActivity2"
          ExecuteCode="codeActivity2_ExecuteCode" />
        <CodeActivity x:Name="codeActivity5"
          ExecuteCode="codeActivity2_ExecuteCode" />
      </SynchronizationScopeActivity>
    </SequenceActivity>
    <SequenceActivity x:Name="sequenceActivity3">
      <SynchronizationScopeActivity
        x:Name="synchronizationScopeActivity3" SynchronizationHandles="a, b">
        <CodeActivity x:Name="codeActivity6"
           ExecuteCode="codeActivity3_ExecuteCode" />
        <CodeActivity x:Name="codeActivity3"
           ExecuteCode="codeActivity3_ExecuteCode" />
      </SynchronizationScopeActivity>
    </SequenceActivity>
   </ParallelActivity>
</SequentialWorkflowActivity>

With the code beside in Listing 5.12:

Example 5.12. SynchronizationScope Code Beside

public partial class SynchronizationWithParallel: SequentialWorkflowActivity
{
        private int i,j,k;
        private void codeActivity1_ExecuteCode(object sender, EventArgs e)
        {
            Console.WriteLine("Left {0}", i);
            i++;
        }
        private void codeActivity2_ExecuteCode(object sender, EventArgs e)
        {
            Console.WriteLine("Middle {0}", j);
            j++;
        }
        private void codeActivity3_ExecuteCode(object sender, EventArgs e)
        {
            Console.WriteLine("Right {0}", k);
            k++;
        }
}

It is insightful to trace through the execution of this workflow. The behavior of the Parallel activity was discussed earlier; assume that the execution queue is as shown in Figure 5.26.

Figure 5.26. Execution queue scheduling of SynchronizationScope activities.

The left SynchronizationScope will attempt to lock on the handle a, and will be successful. It will schedule its first child activity for execution, yielding the queue shown in Figure 5.27.

Figure 5.27. The execution queue following the left SynchronizationScope successfully locking its handle, A.

The middle SynchronizationScope will successfully lock its handle, b, and will schedule middle0 for execution. When the right SynchronizationScope executes, it will attempt to obtain locks on a and b and will fail to obtain them. The left and middle SynchronizationScope activities will execute in an interleaved fashion until the right SynchronizationScope can obtain locks on both a and b. Therefore, the execution output would look like this:

Execution Output of Synchronization Scope Workflow

Left 0
Middle 0
Left 1
Middle 1
Left 2
Right 0
Right 1

One can also imagine the use of long-running activities in the branches: activities that do some work and then await a message to conclude execution, such as a RequestApproval activity. In these scenarios, when an activity yields control, such as by returning a status of Executing while it waits for some message, the Parallel activity will continue executing the other branches. If an activity does a long-running amount of CPU-intensive work and never yields control back to the scheduler (by returning Executing or Closed), the Parallel activity will not continue with the other items. This makes sense because the workflow instance is single threaded; if one activity continues to do work, another activity cannot execute.

State Machine Workflows

One of the most important things an application might do is track the state of some process. Whether it is orders moving through an e-commerce website, or determining what actions are valid steps to perform inside of a Windows Forms application, knowing the current state and how to move from state to state are vital to any complex application. A state machine workflow can help to solve such a problem.

A state machine workflow formally consists of a set of states, including the current state of the application, an initial state, and a completed state. A state consists of actions to be performed on entering the state, actions to be performed before leaving the state, and a set of EventDriven activities. The EventDriven activities define some event for which they listen and the set of actions to be performed on receiving the event. An EventDriven activity requires a single IEventActivity to be placed as its first child activity. Common examples of IEventActivity are the HandleExternalEvent and Delay activities. One of the subsequent actions should be a SetState activity that initiates a state transition from the current state to a new state.
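Wiring one such state and transition in code looks roughly like the sketch below (the designer normally generates this). All names here, including the IApprovalService interface, are hypothetical.

```csharp
using System;
using System.Workflow.Activities;

[ExternalDataExchange]
public interface IApprovalService   // assumed interface for illustration
{
    event EventHandler<ExternalDataEventArgs> Approved;
}

public static class StateSketch
{
    public static StateActivity BuildWaitingState()
    {
        HandleExternalEventActivity approved = new HandleExternalEventActivity();
        approved.InterfaceType = typeof(IApprovalService);
        approved.EventName = "Approved";

        SetStateActivity toCompleted = new SetStateActivity();
        toCompleted.TargetStateName = "CompletedState";

        EventDrivenActivity onApproved = new EventDrivenActivity();
        onApproved.Activities.Add(approved);    // must be first: an IEventActivity
        onApproved.Activities.Add(toCompleted); // then the state transition

        StateActivity waiting = new StateActivity();
        waiting.Name = "WaitingForApprovalState";
        waiting.Activities.Add(onApproved);
        return waiting;
    }
}
```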

Within the designer, one sees two views of a state machine. The first is of the states themselves. The second view is inside one of the groups of actions described earlier: either the StateInitialization or StateFinalization activity, or an EventDriven activity. One moves between these two views in the designer by double-clicking on a group of actions to view more detail (see Figure 5.28), or, within a group of actions, by clicking on the state name to return to the state view (see Figure 5.29).

Figure 5.28. A state machine–based workflow.

Figure 5.29. Inside the EventDriven activity on the state.

A state machine workflow is completed when the current state is the state defined as the CompletedStateName on the root state machine workflow. The execution pattern is determined by the combination of states and potential transitions. This allows for a very nondeterministic execution pattern. Because of this, state machines are often used to model human-based processes. Consider a document approval process: The number of drafts a document might go through cannot easily be planned for. Additionally, a document might move all the way to an AlmostPublished state when the facts change and the document moves all the way back to the JustStarted state.

A state machine is not limited, though, to human-based processes. Manufacturing processes lend themselves to a state machine workflow because there are many, many different states the process can be in and minimal work that needs to be done between the transitions. For instance, after a car rolls out of the PaintDryingState on the PaintDried event, a notification might need to be sent to the shipping system that an additional vehicle will be ready for transport. The state machine provides a useful model for this process, especially because there might be instances when a car is in the same state multiple times, based on the customization criteria.

The decision between choosing a state machine or a sequential workflow model (or something custom) ultimately depends on how accurately the model (the workflow) represents the reality of the process. It is possible to represent the earlier document-routing scenario in a sequential workflow, with a series of while loops to indicate all the potential branches of retrying and routing back to the beginning. Similarly, many sequential workflows could be represented as state machines. This is not to be so bold as to claim there is an isomorphism between the two sets, but rather to suggest that many problems could be solved with either approach—the criterion has to be how naturally the model represents reality. In the case where neither a sequential nor a state machine approach models the workflow well, one can always create a custom root activity to define one's own workflow type.

Custom Root Activities

One is not limited to the execution semantics of the two root activities previously described. The role of any composite activity is to manage the execution of its child activities. By creating a custom root activity, one can implement any execution behavior desired (provided its logic can be somehow expressed in code). As with creating custom activities, a thorough treatment of the topic is outside of the scope of this introductory chapter. The reader is encouraged to reference Shukla and Schmidt for additional information in this area.

Workflow Hosting

Much has been written of the activity model and the ways to compose workflows. Those two things alone would not result in much, outside of a nice graphical representation of a business process. A runtime environment is crucial for turning that model into an executing application. The Windows Workflow Foundation runtime is remarkable in its achievement of minimal overhead and maximal extensibility. As the following sections will illustrate, almost any runtime behavior can be customized, including the threading model. At the same time, the runtime can be executed within a Windows Forms application or a web application to provide the logic behind the user interface anywhere access to a .NET application domain is available. That engine can then scale up to be the core of the workflow engine within Microsoft Office SharePoint Server 2007.

Hosting the Runtime

The base case of workflow hosting is the model that is included in the Visual Studio 2005 Extensions for Windows Workflow Foundation. When one selects a new workflow project (sequential or state machine), a very basic hosting environment is created. Listing 5.13 is the default program.cs that is included when one creates a new workflow project.

Example 5.13. Console Based Workflow Host

static void Main(string[] args)
{
    using(WorkflowRuntime workflowRuntime = new WorkflowRuntime())
    {
        AutoResetEvent waitHandle = new AutoResetEvent(false);
        workflowRuntime.WorkflowCompleted +=
            delegate(object sender, WorkflowCompletedEventArgs e)
            {waitHandle.Set();};
        workflowRuntime.WorkflowTerminated +=
            delegate(object sender, WorkflowTerminatedEventArgs e)
            {
                Console.WriteLine(e.Exception.Message);
                waitHandle.Set();
            };
        WorkflowInstance instance = workflowRuntime.CreateWorkflow
            (typeof(WorkflowConsoleApplication1.Workflow1));
        instance.Start();
        waitHandle.WaitOne();
    }
}

The first thing that is done is the instantiation of the workflow runtime:

WorkflowRuntime workflowRuntime = new WorkflowRuntime()

This creates the execution environment for the workflows to execute within. Some housekeeping occurs to ensure that the console application will not exit prematurely when the main method finishes prior to the workflow completing. Remember, the workflow instance will be created on a separate thread (using the default scheduler service). Two events are wired up using anonymous delegates to ensure that errors are reported to the console, and successful completion of the workflow allows the console application to exit gracefully:

workflowRuntime.WorkflowCompleted += delegate
(object sender, WorkflowCompletedEventArgs e) {waitHandle.Set();};
workflowRuntime.WorkflowTerminated += delegate
(object sender, WorkflowTerminatedEventArgs e)
{
    Console.WriteLine(e.Exception.Message);
    waitHandle.Set();
};

Then a workflow instance is created by specifying the type of the workflow to be created. The overloads of this method that provide other workflow creation behavior will be discussed later.

WorkflowInstance instance = workflowRuntime.CreateWorkflow
(typeof(WorkflowConsoleApplication1.Workflow1));

Finally, the workflow instance is scheduled to begin execution, which will occur on a separate thread from the console application. The console application will patiently await the workflow’s completion.

instance.Start();

The following are the tasks required by a simple workflow host:

  1. A workflow runtime is created and made available.

  2. Handlers are wired to events of interest; in this case, completion and termination.

  3. An instance is created.

  4. An instance is started.

This pattern is followed inside of any hosting environment, with the possible addition of a step 1.5: adding runtime services to the WorkflowRuntime. This step is optional because

  1. Services might not be needed at all.

  2. Services can be configured declaratively within app.config.

This last point might seem familiar to readers who have completed some of the chapters focusing on the Windows Communication Foundation. The ability to define behaviors in code or in configuration is a theme present in both of these frameworks. The configuration option is more flexible, allowing services to be configured at deployment, whereas the code option ensures certain behavior is in place irrespective of deployment-time decisions.

Runtime Services

The flexibility of the runtime comes from its delegation of responsibility to a set of well-defined runtime services. (These runtime services have nothing to do with Windows Communication Foundation services.) This pluggable model lets a workflow developer choose file-based persistence while executing in a Windows Forms application and SQL-based persistence while executing in the context of an ASP.NET application, with the workflow being none the wiser. A number of core problems were identified as general areas where a suggested solution would be provided while remaining flexible enough for an implementation tailored and tuned to a specific scenario. These include persistence, tracking, and scheduling.

Persistence Services

A defining feature of a workflow system is its capability to enable long-running work. Long-running work generally refers to a pattern of request-response activities, with a suitably long duration (minutes, hours, days) between request and response. Windows Workflow Foundation provides this ability to workflow developers, and makes the task of persisting workflows while they wait transparent. This is accomplished through a persistence service. When a workflow is idled or is required to persist, the workflow runtime provides access to a persistence service. If a persistence service is not available, the workflow will not be persisted and will remain in memory while awaiting its eventual resumption.

The persistence service is responsible for storing all the information about the workflow state to some location outside the workflow runtime. When an event occurs that requires that specific workflow instance, the workflow runtime will first check in memory to see whether that workflow is already executing. If it is not, it uses the persistence service to load the workflow instance back into memory. What is important to note is that the workflow can be reloaded into memory weeks after the last time it did anything. Additionally, the workflow could be reloaded onto a completely different machine than the one that initially persisted it. It is through a persistence service that one can obtain scalability by hosting workflow within a farm setting. The runtime will attempt to persist the workflow in the following situations:

  1. The host calls WorkflowInstance.Unload()

  2. The workflow instance completes or terminates

  3. The workflow instance goes idle (no activities scheduled to execute)

  4. Completion of an atomic transaction

  5. Completion of an activity marked PersistOnClose

To force persistence at a point within the workflow, an “empty” activity decorated with the PersistOnClose attribute could be used to ensure persistence.
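As a sketch, such a checkpoint activity might look like the following (the class name is hypothetical; the PersistOnClose attribute is what instructs the runtime to persist when the activity closes):

```csharp
using System;
using System.Workflow.ComponentModel;

// Hypothetical "empty" checkpoint activity. The PersistOnClose attribute
// tells the runtime to persist the workflow when this activity closes.
[PersistOnClose]
public class CheckpointActivity : Activity
{
    protected override ActivityExecutionStatus Execute(
        ActivityExecutionContext executionContext)
    {
        // No work is performed; the activity closes immediately,
        // creating a persistence point in the workflow.
        return ActivityExecutionStatus.Closed;
    }
}
```

Dropping this activity between two steps of a workflow guarantees that the state accumulated up to that point survives a host restart.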

Out of the box, Windows Workflow Foundation provides a default implementation of a persistence service: SqlWorkflowPersistenceService. To use SqlWorkflowPersistenceService, the following steps are necessary:

  1. Create the database using the scripts found at \Windows\Microsoft.NET\Framework\v3.0\Windows Workflow Foundation\SQL\EN\SqlPersistenceService_Schema.sql and \Windows\Microsoft.NET\Framework\v3.0\Windows Workflow Foundation\SQL\EN\SqlPersistenceService_Logic.sql

  2. Attach the persistence service to the workflow runtime

The last step is done either in code or within the application configuration file. The approaches are outlined in Listing 5.14 and Listing 5.15, respectively.

Example 5.14. Adding the Persistence Service in Code

using (WorkflowRuntime workflowRuntime = new WorkflowRuntime())
{
    WorkflowPersistenceService persistenceService =
        new SqlWorkflowPersistenceService(
            "Initial Catalog=SqlPersistenceService;Data Source=localhost;Integrated Security=SSPI;",
            false,
            new TimeSpan(1, 0, 0),
            new TimeSpan(0, 0, 5));
    workflowRuntime.AddService(persistenceService);
...
}

Example 5.15. Adding the Persistence Service via Config

<configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">
    <configSections>
        <section name="WorkflowRuntime" type="System.Workflow.Runtime.Configuration.WorkflowRuntimeSection, System.Workflow.Runtime, Version=3.0.00000.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
    </configSections>
    <WorkflowRuntime Name="WorkflowServiceContainer">
        <Services>
            <add type="System.Workflow.Runtime.Hosting.SqlWorkflowPersistenceService, System.Workflow.Runtime, Version=3.0.00000.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionString="Initial Catalog=WorkflowPersistenceStore;Data Source=localhost;Integrated Security=SSPI;" UnloadOnIdle="true"/>
        </Services>
    </WorkflowRuntime>
...
</configuration>

In the second case, the persistence service is being configured with the UnloadOnIdle flag set to true. This forces the runtime to persist the workflows whenever they are reported as idle. SqlWorkflowPersistenceService is also responsible for managing the tracking of timers. If an expense-reporting workflow has a timeout of one week before the approval must escalate, something needs to keep track of that timer in order to load the workflow in the case where nothing happens for a week. SqlWorkflowPersistenceService stores the timer expiration within its SQL tables. When the runtime is started and the service is loaded, one of the first actions it performs is a scan of the timer expirations to determine whether any were “missed” while the host was unavailable. In this way, the persistence service ensures that even if the host was down, on resuming, any workflows whose timers have expired will be processed. There are a number of other features in SqlWorkflowPersistenceService that pertain specifically to scaling out in a farm setting. The reader is encouraged to explore the documentation further to investigate that scenario.

Note

Creating a custom persistence service is a relatively straightforward endeavor. An example of a file system persistence service is available within the Windows SDK.

Tracking Services

While a workflow is executing, there are many potentially interesting pieces of data one might want to collect. How long did an activity take? When did the workflow start and when did it finish? What was the value of the purchase order that went through the system? All of these questions can be answered via the information recorded by the tracking service. The tracking service is used by the workflow runtime to output various pieces of data to some kind of external store.

The mechanics of the tracking service are relevant to understand here. The runtime will call the TryGetProfile() method of the tracking service to obtain a tracking profile, if it exists, for the currently executing instance. If a profile exists, it describes what information should be sent via the tracking channel. The tracking service itself does not determine what data gets tracked; it provides a profile to the runtime, which in turn decides when to track. This design does not rely on a tracking service author to provide a high performance implementation of event filtering.
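To make these mechanics concrete, the following is a rough skeleton of a custom tracking service; the class name is hypothetical and the method bodies are placeholders. The runtime calls TryGetProfile() to learn which events to deliver, and GetTrackingChannel() to obtain the channel that receives the resulting tracking records.

```csharp
using System;
using System.Workflow.Runtime.Tracking;

// Rough skeleton of a custom tracking service (class name hypothetical,
// bodies are placeholders, signatures abbreviated from the abstract base).
public class SampleTrackingService : TrackingService
{
    protected override bool TryGetProfile(
        Type workflowType, out TrackingProfile profile)
    {
        // Describe which events the runtime should send for this type.
        profile = new TrackingProfile();
        profile.Version = new Version("1.0.0");
        return true;
    }

    protected override TrackingProfile GetProfile(Guid workflowInstanceId)
    { throw new NotImplementedException(); }

    protected override TrackingProfile GetProfile(
        Type workflowType, Version profileVersionId)
    { throw new NotImplementedException(); }

    protected override bool TryReloadProfile(
        Type workflowType, Guid workflowInstanceId,
        out TrackingProfile profile)
    { profile = null; return false; }

    protected override TrackingChannel GetTrackingChannel(
        TrackingParameters parameters)
    {
        // Return the channel that will receive the tracking records.
        throw new NotImplementedException();
    }
}
```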

Tracking Profiles

A tracking profile defines on what events the workflow runtime should send a message to the associated tracking channel. It allows a developer to shape the data that is recorded from a workflow execution. A tracking profile can be defined within code, and the object serializes to XML for a more portable representation of the tracking profile. There are three types of events for which the tracking profile can be configured. Workflow events are those that pertain to the lifecycle of a workflow, from its creation to termination. Activity events are those pertaining to the lifecycle of an activity. Finally, user events are those emitted from an activity during its execution when the TrackData() method is called from the base Activity class.

A tracking profile is a collection of tracking points, of the types WorkflowTrackPoint, ActivityTrackPoint, or UserTrackPoint. Each of these tracking points consists of MatchingLocations, ExcludedLocations, Extracts, and Annotations. The locations are defined by a type (on what activity does the location apply?), the ExecutionStatusEvents (on what events does the location apply?), and the conditions (under what criteria should the location apply?). ExcludedLocation explicitly defines when tracking should not occur. Extracts define what data should be tracked, either as a property of the activity or the workflow, expressed in dot notation. Finally, Annotations are a set of strings that should be included in the tracking record if a record is sent.
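As a rough sketch of this object model, a profile with a single ActivityTrackPoint that matches the Closed event of any activity (derived types included) and attaches an annotation might be built along these lines (the annotation text is illustrative):

```csharp
using System;
using System.Workflow.ComponentModel;
using System.Workflow.Runtime.Tracking;

class ProfileBuilder
{
    // Builds a profile with one ActivityTrackPoint: track the Closed
    // event of any activity (derived types included) and annotate it.
    static TrackingProfile BuildProfile()
    {
        TrackingProfile profile = new TrackingProfile();
        profile.Version = new Version("1.0.0");

        ActivityTrackingLocation location =
            new ActivityTrackingLocation(typeof(Activity));
        location.MatchDerivedTypes = true;
        location.ExecutionStatusEvents.Add(ActivityExecutionStatus.Closed);

        ActivityTrackPoint trackPoint = new ActivityTrackPoint();
        trackPoint.MatchingLocations.Add(location);
        trackPoint.Annotations.Add("Activity completed");

        profile.ActivityTrackPoints.Add(trackPoint);
        return profile;
    }
}
```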

The Windows SDK contains a number of samples related to tracking. The Tracking Profile Object Model sample provides an example of declaring a tracking profile in code and then outputting the tracking profile in its serialized XML form, which the reader might find more instructive. The XML output is contained in Listing 5.16.

Example 5.16. XML Tracking Profile

<?xml version="1.0" encoding="utf-16" standalone="yes"?>
<TrackingProfile xmlns="http://schemas.microsoft.com/winfx/2006/workflow/trackingprofile" version="1.0.0">
     <TrackPoints>
         <ActivityTrackPoint>
             <MatchingLocations>
                  <ActivityTrackingLocation>
                      <Activity>
                          <TypeName>activityName</TypeName>
                          <MatchDerivedTypes>false</MatchDerivedTypes>
                      </Activity>
                      <ExecutionStatusEvents>
                          <ExecutionStatus>Initialized</ExecutionStatus>
                          <ExecutionStatus>Executing</ExecutionStatus>
                          <ExecutionStatus>Canceling</ExecutionStatus>
                          <ExecutionStatus>Closed</ExecutionStatus>
                          <ExecutionStatus>Compensating</ExecutionStatus>
                          <ExecutionStatus>Faulting</ExecutionStatus>
                      </ExecutionStatusEvents>
                      <Conditions>
                          <ActivityTrackingCondition>
                              <Operator>Equals</Operator>
                              <Member>memberName</Member>
                              <Value>memberValue</Value>
                          </ActivityTrackingCondition>
                      </Conditions>
                  </ActivityTrackingLocation>
              </MatchingLocations>
              <ExcludedLocations>
                  <ActivityTrackingLocation>
                      <Activity>
                          <TypeName>activityName</TypeName>
                          <MatchDerivedTypes>false</MatchDerivedTypes>
                      </Activity>
                      <ExecutionStatusEvents>
                          <ExecutionStatus>Compensating</ExecutionStatus>
                      </ExecutionStatusEvents>
                  </ActivityTrackingLocation>
              </ExcludedLocations>
              <Annotations>
                  <Annotation>Track Point Annotations</Annotation>
              </Annotations>
              <Extracts>
                  <WorkflowDataTrackingExtract>
                      <Member>Name</Member>
                  </WorkflowDataTrackingExtract>
              </Extracts>
          </ActivityTrackPoint>
      </TrackPoints>
</TrackingProfile>

This follows the pattern discussed earlier by defining a track point by setting a location—namely all executions of activityName on all of its status transitions—and then excluding when it would be in the compensating status. Additionally, the location will be valid only when the condition is Activity.memberName == memberValue. An annotation is added so that if the criterion is set, a tracking record will be created that includes the text Track Point Annotations. Finally, the data to be extracted is defined; in this case, the Name property from the workflow. By specifying WorkflowDataTrackingExtract, one can obtain access to the properties of the root activity of the workflow. If one were to specify ActivityDataTrackingExtract, one could obtain a tracking record that contains the data of the associated activity.

SqlTrackingService

As with the persistence service, a default implementation is provided that leverages SQL Server in order to store the information. The following steps will enable tracking within an application:

  1. Create the database using the scripts found at \Windows\Microsoft.NET\Framework\v3.0\Windows Workflow Foundation\SQL\EN\Tracking_Schema.sql and \Windows\Microsoft.NET\Framework\v3.0\Windows Workflow Foundation\SQL\EN\Tracking_Logic.sql

  2. Attach the tracking service to the workflow runtime

  3. Specify a tracking profile to receive only the desired events

Similar to SqlWorkflowPersistenceService, the service can be added to the runtime either in code or by the configuration file. In the case of the tracking service, it is possible that multiple tracking services could be configured, each returning a different profile of events for a given workflow. An example is when the SqlTrackingService is used for general reporting on the process, but highly critical errors need to be routed through an EventLogTrackingService to surface in the existing management tools. The TryGetProfile() method on the EventLogTrackingService might return a profile that specifies a location of the OrderProcessing activity, but might listen only for the FaultingEvent.

The Windows SDK contains a sample profile designer that can be used to analyze a workflow and specify the tracking points. It will then generate a serialized, XML form of the tracking profile that can then be used within code or within the tracking database to provide a profile for a given type.

Querying the Tracking Store

SqlTrackingService stores the tracking information across a number of different tables in the tracking database. There are stored procedures designed to access this data, but there is also a querying object model designed to help sift through the data stored within the database. The SqlTrackingQuery can be used to return a specific workflow instance by using the TryGetWorkflow() method, which will return a SqlTrackingWorkflowInstance, an object that mimics the WorkflowInstance with methods added to query against the different types of tracking data available.

For finer-grained control over the query, SqlTrackingQuery also has the GetWorkflows() method, which has an optional parameter of type SqlTrackingQueryOptions that allows the specification of types of workflow, status of workflows, and maximum and minimum time for the workflow to have been in that status. Additionally, a collection of TrackingDataItemValue objects can be specified to return only those records that specifically match the criteria that one is looking for. For advanced querying, such as querying on a range of these extracted values, a query will need to be written against the stored procedures and views included with the tracking database. The views vw_TrackingDataItem and vw_ActivityExecutionStatusEvent are good starting points to begin designing such a query.
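A minimal sketch of such a query follows; the connection string is an assumption, and the filter criteria are purely illustrative:

```csharp
using System;
using System.Workflow.Runtime;
using System.Workflow.Runtime.Tracking;

class TrackingQueryExample
{
    static void Main()
    {
        // Connection string is an assumption; point it at the tracking database.
        SqlTrackingQuery query = new SqlTrackingQuery(
            "Initial Catalog=Tracking;Data Source=localhost;Integrated Security=SSPI;");

        // Return only workflows that completed within the last day.
        SqlTrackingQueryOptions options = new SqlTrackingQueryOptions();
        options.WorkflowStatus = WorkflowStatus.Completed;
        options.StatusMinDateTime = DateTime.UtcNow.AddDays(-1);

        foreach (SqlTrackingWorkflowInstance instance in query.GetWorkflows(options))
        {
            Console.WriteLine(instance.WorkflowInstanceId);
        }
    }
}
```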

Scheduler Services

The scheduler service is responsible for providing threads to the runtime in order to actually perform the execution of workflows. As noted in the discussion of the Parallel activity, a single instance of a workflow executes on only one thread. The engine itself can schedule multiple threads to be executing different workflow instances at the same time. It is quite probable that the number of executing workflow instances will be greater than the number of threads available to the application. Therefore, the runtime has to marshal threads to workflow instances in some fashion, and the way that is implemented is via the scheduler service.

DefaultWorkflowSchedulerService is, as its name implies, the default behavior of the runtime. It uses the .NET thread pool to provide threads for the executing workflows. It has one configuration setting of note, MaxSimultaneousWorkflows, which specifies how many threads will be used to execute workflow instances at the same time. By default, MaxSimultaneousWorkflows is set to a multiple of the number of processors on the machine. One is free to change this setting, but there are some words of caution. The scheduler service itself uses a single thread, so setting MaxSimultaneousWorkflows = Max Threads would be very bad, possibly resulting in a deadlock as the scheduler service is starved out of ever getting a thread. Additionally, the thread pool used by the default scheduler is the system thread pool, so there are plenty of other things that could take those threads. Operations such as transactions involving the Distributed Transaction Coordinator (DTC) might involve additional threads, so setting MaxSimultaneousWorkflows close to the maximum number of threads is generally not recommended. As with any performance tuning, it is best to experiment with this setting at a stage of development where the process is well defined, and the experiment can be repeated in order to truly understand the impact of turning this control knob.
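For instance, a host can plug in the default scheduler with an explicit cap; the value 4 here is purely illustrative and should be tuned by experiment, as noted above:

```csharp
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;

class SchedulerHostExample
{
    static void Main()
    {
        WorkflowRuntime workflowRuntime = new WorkflowRuntime();
        // Cap the number of workflow instances executing concurrently.
        // The value 4 is illustrative, not a recommendation.
        workflowRuntime.AddService(new DefaultWorkflowSchedulerService(4));
        workflowRuntime.StartRuntime();
    }
}
```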

There are also times in which multithreading is not desirable. Consider a workflow used by an ASP.NET page. Spawning additional threads within IIS takes away from the number of threads IIS has to serve additional incoming requests. The solution to this seems obvious enough: The ASP.NET page is not going to complete executing until the workflow does some work and then reaches an idle state or completes, so why not use that thread? This is precisely the scenario ManualWorkflowSchedulerService was created for.

ManualWorkflowSchedulerService provides a thread for the workflow to execute on by donating the currently executing thread to the workflow. Put another way, ManualWorkflowSchedulerService says, “Hey, I’m waiting here until the workflow is done, so why don’t you use this thread I’d just be hogging anyway?” Therefore, control on the host is blocked until the workflow yields that thread, namely by completing or by going idle. Because the workflow will use the current thread, just calling WorkflowInstance.Start() is not enough to execute the workflow; doing so simply places the workflow in the running state where it awaits a thread in order to begin executing.

To hand the thread to the workflow, call ManualWorkflowSchedulerService.RunWorkflow(). This assigns the current thread of execution to the workflow to handle the next item on the queue, namely, the execution of the workflow. When the workflow completes, terminates, or goes idle, the call to RunWorkflow() will complete and return control to the host. Similarly, after sending a message to an idled workflow in this environment, the message will not be processed until RunWorkflow() is called again. This is similar to kicking a can down the street. It will roll and do some work, but it will reach a point at which it stops and waits to be kicked again in order to continue down the street. In Listing 5.17, note that the workflow will not process the event raised by the local service until RunWorkflow() has been called. RunWorkflow() will give the current thread to the runtime in order to execute a specific instance of the workflow until that instance completes or goes idle.

Example 5.17. Using the Manual Scheduler Service

public bool OrderProcess(Order order)
{
    ManualWorkflowSchedulerService schedulerService =
    workflowRuntime.GetService<ManualWorkflowSchedulerService>();
    orderLocalService.RaiseProcessing(order.WorkflowID);
    schedulerService.RunWorkflow(order.WorkflowID);
    // RunWorkflow completes when the workflow completes or goes idle
    return true;
}

Other Built-in Services

The persistence, tracking, and scheduler services are the most common services a workflow developer will encounter. Writing a scheduler service is not a common task, but the choice between the default and manual services is one that will be frequently encountered. The persistence and tracking services have been optimized for the general scenario, and it is not uncommon to find developers writing a custom version of one or both of these services. In the case of the tracking service, the services can be “stacked” on top of one another, allowing multiple means of tracking. The persistence service is more fundamental to the operation of the runtime, so only one of those is allowed. The runtime services do not stop there, however; a developer might encounter a number of other services, and in some scenarios, want to customize them.

Loader Service

The loader service is responsible for transforming an incoming XML stream into a workflow definition for execution. DefaultLoaderService operates on the assumption that the incoming XML stream is XAML. This service is invoked when one of the alternative WorkflowRuntime.CreateWorkflow() methods is called with an XmlReader passed in as the parameter (as opposed to the type passed in the default case). Creating a custom loader service is an effective way to map from an existing XML description of a process into the workflow definition. A workflow simply consists of a tree of activities, so by parsing the XML file according to its structure, one can quickly create such a tree of activities to return to the runtime for execution. For developers looking to leverage an existing process design tool with its own XML process representation, a custom loader service can be used to directly execute that unknown representation of activity arrangement.
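A custom loader might be sketched as follows; the class name and the ParseProprietaryFormat helper are hypothetical, standing in for whatever logic walks the proprietary XML and builds the activity tree:

```csharp
using System;
using System.Workflow.ComponentModel;
using System.Workflow.Runtime.Hosting;
using System.Xml;

// Sketch of a custom loader service that maps a proprietary XML process
// format into an activity tree. ParseProprietaryFormat is hypothetical.
public class ProprietaryFormatLoaderService : WorkflowLoaderService
{
    protected override Activity CreateInstance(Type workflowType)
    {
        // Fall back to the default type-based creation behavior.
        return (Activity)Activator.CreateInstance(workflowType);
    }

    protected override Activity CreateInstance(
        XmlReader workflowDefinitionReader, XmlReader rulesReader)
    {
        // Walk the proprietary XML and build the corresponding tree of
        // activities for the runtime to execute.
        return ParseProprietaryFormat(workflowDefinitionReader);
    }

    private Activity ParseProprietaryFormat(XmlReader reader)
    {
        // ...build and return the root of the activity tree...
        throw new NotImplementedException();
    }
}
```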

Queue Services

WorkflowQueuingService is the one runtime service that cannot be overridden. It is responsible for managing the queues used by a workflow instance. The Windows Workflow Foundation runtime uses these internal queues as the basis for all communication with workflows and activities. The activities discussed in the “Communicating with Activities” section are abstractions built on top of this queuing mechanism. Fundamentally, those activities map to the creation of queues and the placement of messages onto those queues in order to deliver messages and continue execution. During execution, an activity can access this service via the ActivityExecutionContext to create queues, as well as to gain access to existing queues in order to retrieve or place messages. An activity can also subscribe to the QueueItemAvailable event in order to process a message when it arrives on that queue.
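The pattern might be sketched as follows; the activity and queue names are hypothetical, and the subscription follows the convention that the runtime passes the ActivityExecutionContext as the sender of the QueueItemAvailable event:

```csharp
using System;
using System.Workflow.ComponentModel;
using System.Workflow.Runtime;

// Sketch: an activity that creates a queue and idles until a message
// arrives on it. "OrderQueue" is an arbitrary illustration.
public class WaitForOrderActivity : Activity
{
    protected override ActivityExecutionStatus Execute(
        ActivityExecutionContext executionContext)
    {
        WorkflowQueuingService queuingService =
            executionContext.GetService<WorkflowQueuingService>();
        WorkflowQueue queue =
            queuingService.CreateWorkflowQueue("OrderQueue", true);
        queue.QueueItemAvailable += OnQueueItemAvailable;
        // Returning Executing leaves the workflow idle until a message arrives.
        return ActivityExecutionStatus.Executing;
    }

    private void OnQueueItemAvailable(object sender, QueueEventArgs e)
    {
        ActivityExecutionContext context = (ActivityExecutionContext)sender;
        WorkflowQueuingService queuingService =
            context.GetService<WorkflowQueuingService>();
        object message =
            queuingService.GetWorkflowQueue(e.QueueName).Dequeue();
        // ...process the message, then close the activity...
        context.CloseActivity();
    }
}
```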

SharedConnectionWorkflowCommitWorkBatchService

In addition to its status as one of the longest class names in the .NET Framework, SharedConnectionWorkflowCommitWorkBatchService handles the special case where the SQL tracking service and SQL persistence service are configured to use the same database. In this case, SharedConnectionWorkflowCommitWorkBatchService will perform both tracking and persistence database queries using the same database connection, allowing a simple SQL transaction to be used to commit the update. This bypasses the transaction escalation that would occur with the System.Transactions transaction used in the default WorkflowCommitWorkBatchService, thus avoiding the overhead of involving the Microsoft Distributed Transaction Coordinator to manage updates to two different databases within a single transaction.

The base class, WorkflowCommitWorkBatchService, is used to combine a series of calls to be performed all at once during workflow execution. The most common example of this is a group of calls to TrackData() in the tracking service. These will be batched together to execute at the first workflow idle point. The runtime is responsible for initiating the commit, but it will pass WorkflowCommitWorkBatchService a delegate to allow the service to add additional tasks, the batch, into the transaction.

Listing 5.18 is an example of using SharedConnectionWorkflowCommitWorkBatchService inside of the configuration file to provide batching of persistence and tracking transactions to the same database. This will optimize the performance of the database writes in a scenario where tracking and persistence share the same database.

Example 5.18. Using the SharedConnectionWorkflowCommitWorkBatchService in Configuration

<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <configSections>
        <section name="WorkflowServiceContainer" type="System.Workflow.Runtime.Configuration.WorkflowRuntimeSection, System.Workflow.Runtime, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
    </configSections>
    <WorkflowServiceContainer Name="Container Name" UnloadOnIdle="true">
        <CommonParameters>
            <add name="ConnectionString" value="Initial Catalog=WorkFlowStore;Data Source=localhost;Integrated Security=SSPI;" />
        </CommonParameters>
        <Services>
            <add type="System.Workflow.Runtime.Hosting.DefaultWorkflowSchedulerService, System.Workflow.Runtime, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
            <add type="System.Workflow.Runtime.Hosting.SharedConnectionWorkflowCommitWorkBatchService, System.Workflow.Runtime, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add type="System.Workflow.Runtime.Tracking.SqlTrackingService, System.Workflow.Runtime, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
            <add type="System.Workflow.Runtime.Hosting.SqlWorkflowPersistenceService, System.Workflow.Runtime, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
        </Services>
    </WorkflowServiceContainer>
    <system.diagnostics>
    </system.diagnostics>
</configuration>

Custom Services

As a workflow developer, one is not limited to the runtime services provided; one can create additional services to add to the runtime. The scenarios for this include abstracting behavior out of the activity itself and providing access to a shared resource across multiple workflow instances. The WorkflowRuntime.AddService() method takes in an object—any object. This object acts as a service within the runtime, and is available to any executing activity by calling ActivityExecutionContext.GetService<T>(), where T is the type of object one looks to get back. The activity can then call any of the methods on that service. An example might provide some additional clarity.

Consider an activity that mimics the Policy activity, except that it needs to acquire the rule set from an external source. The implementation of how the rule set is provided is not relevant to the activity’s execution. It can rely on a service in the runtime to provide the rule set prior to execution. This allows a workflow developer to simply use the activity within the workflow, and then select the appropriate runtime service to match the deployment scenario. In the case of a Windows Forms application, this might come from the file system; in a server environment, there might be a sophisticated cache-from-database pattern that the service follows. The point is that custom runtime services provide a nice layer of abstraction to insulate the execution of an activity from trivial implementation details, such as where to acquire the rule set from. By defining an abstract base service class, details of the implementation can be customized by providing different derived classes, as detailed in Listings 5.19 and 5.20. The base class can simply be an object, or it can inherit from WorkflowRuntimeService. By inheriting from WorkflowRuntimeService, one can take advantage of the Start and Stop methods to perform initialization or tear down work when the runtime itself is started or stopped.

Example 5.19. Base RuleSetProviderService

using System;
using System.Collections.Generic;
using System.Text;
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;
using System.Workflow.Activities.Rules;

namespace CustomService
{
    public abstract class RuleSetProviderService : WorkflowRuntimeService
    {
        protected override void Start()
        {
            // implement startup logic here
        }

        protected override void Stop()
        {
            // implement shutdown logic here
        }

        public abstract RuleSet GetRuleSet(string rulesetName);
    }
}

Example 5.20. Implementation of RuleSetProvider

public class SqlRuleSetProviderService : RuleSetProviderService
{
   public override RuleSet GetRuleSet(string rulesetName)
   {
       // get the RuleSet from Sql and return
       return new RuleSet();
   }
}

By adding this to the runtime, either in code or in configuration, our activity (see Listing 5.21) can now access this service.

Example 5.21. DynamicRuleSetActivity

protected override ActivityExecutionStatus Execute(
    ActivityExecutionContext executionContext)
{
    RuleSetProviderService rspService =
        executionContext.GetService<RuleSetProviderService>();
    RuleSet ruleset = rspService.GetRuleSet(ruleSetName);
    // remainder of execution logic
    return ActivityExecutionStatus.Closed;
}

The activity requests a service that is of the type RuleSetProviderService, the abstract type. This allows the developer to substitute various implementations of the service at deployment time.
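Registering the service in code is a single call on the runtime. The following is a minimal sketch of host startup; it assumes the host constructs its own WorkflowRuntime and uses the SqlRuleSetProviderService from Listing 5.20.

```csharp
// Host startup sketch: register the SQL-backed provider so that activities
// calling GetService<RuleSetProviderService>() receive this implementation.
WorkflowRuntime runtime = new WorkflowRuntime();
runtime.AddService(new SqlRuleSetProviderService());
runtime.StartRuntime();
```

Because AddService() accepts any object, swapping in a file-based or cached implementation at deployment time requires changing only this registration (or the equivalent configuration entry), not the workflow or the activity.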

Rules Engine

Three things are important when one defines application logic. The first is the ability to define the logic in code. The second is a runtime environment in which to execute it. With these two alone, one can implement application logic, but only by hard-coding the aspects of the logic most likely to change: the rules. In addition to providing the object model to construct workflows and the runtime environment in which to execute them, Windows Workflow Foundation contains a rules engine. This allows for the separation of rules from process. Consider the expense-reporting scenario. One defines the approval process, which might look something like the example in Figure 5.30.

Figure 5.30. A Visio order-processing example.

This process might change infrequently in terms of its structure of deciding the approval route, the approval by a manager, and the ultimate notification and disbursement of goods. This process might change only if an optimization is discovered or a compliance standard forces a different approval path. One thing would change with greater frequency: the condition that determines the approval path. Initially, this limit might be set to something purely currency based; for example, if the total is greater than $75. As business changes and expenses garner extra scrutiny, a simple total amount might not be enough. Suppose that the analysis of which approval process should be used now depends on a much more complicated condition such as the following:

If ((Customer.Category == Silver or Gold) AND
Expense.Submitter.LeadingAverageSimilarExpenses > $75) THEN ...

It is important to note here that the process has not changed. What has changed is the condition used by the process. The process remains ignorant of the mechanics of the condition; it simply asks that a condition be evaluated. The separation of rules from process is important in enabling agility in a process, and is a natural extension of the declarative programming model embraced by Windows Workflow Foundation.

In addition to the execution being flexible, the rules designer is re-hostable inside a Windows Forms application, allowing a rule set to be edited outside of Visual Studio. Scenarios in which this would be useful include allowing power users or business analysts to design the rules to be executed inside of a process or allowing rule configuration inside of a management tool, where a full installation of Visual Studio is not desirable. It is also possible to create completely custom rule-authoring environments that construct the rule set.

Tip

There is a sample application that allows the customization of the rule set by assigning risk scoring to an Excel spreadsheet. The rule set is then executed against the balance sheet information entered in the first sheet. This sample is available at http://wf.netfx3.com/files/folders/rules_samples/entry313.aspx.

Rules as Conditions

The first place one will likely encounter the Windows Workflow Foundation rules engine is in the condition property of an activity, such as the IfElse activity. In the scenario described earlier, there is a branching activity, such as IfElse, that requires the output of a condition to determine the proper flow of execution. Other activities that use this type of property are the While activity and ConditionedActivityGroup. Custom activities can also leverage conditions to determine their execution path. An example of this would be an activity similar to the IfElse activity, but functioning as a switch statement and evaluating which of the branches should execute. (As an aside, this behavior can be achieved using the CAG activity.)
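As a sketch of how a custom activity might consume such a condition, an ActivityCondition property can be evaluated directly, mirroring what the IfElse branches do internally. The class and property names here are hypothetical, and a production activity would typically register the property as a DependencyProperty for designer support.

```csharp
// Hypothetical custom activity that gates its work on a rule condition.
public class ConditionalStepActivity : Activity
{
    private ActivityCondition condition;

    // Set at design time to either a CodeCondition or a RuleConditionReference.
    public ActivityCondition Condition
    {
        get { return condition; }
        set { condition = value; }
    }

    protected override ActivityExecutionStatus Execute(
        ActivityExecutionContext executionContext)
    {
        if (condition != null && condition.Evaluate(this, executionContext))
        {
            // perform (or schedule) the conditional work here
        }
        return ActivityExecutionStatus.Closed;
    }
}
```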

There are two types of rule conditions:

  • Code condition

  • Declarative rule condition

The type of condition determines how the rule condition is constructed and executed. In the case of the code condition, the condition is expressed in the code file of the workflow. The condition is implemented as in Listing 5.22, with the return value of the condition set via the Result property of the ConditionalEventArgs parameter passed into the method.

Example 5.22. Code Condition

private void BigOrderCondition(object sender, ConditionalEventArgs e)
{
    if (Order.Amount > 100)
    {
        e.Result = true;
    }
    else
    {
        e.Result = false;
    }
}

A code condition gives one flexibility in the implementation of the condition: Anything that can be expressed in .NET code can be used to evaluate the condition. The downside is the tight coupling of the condition code to the workflow itself, which makes the previously mentioned separation of rules from process nearly impossible. The solution is to externalize the rules into a rules file (an XML serialization of the rules) and load that file along with the workflow. The activity contains a reference to the condition it needs to evaluate, but relies on that condition being provided from this rules store.

A declarative rule condition is used in the following way:

  1. Add an activity requiring a condition, and view its properties.

  2. Select Declarative Rule Condition as the condition type. Doing so will cause the grid to expand with additional options.

  3. Click the ellipsis icon next to the Condition Name property. This will bring up the condition selector, which lists all the previously created declarative rule conditions in this workflow.

  4. Click the New button to create a new condition. Doing so will display the Rules Editor window.

  5. Begin typing the condition, and see the IntelliSense-like drop-downs that appear to guide one through selecting the terms to evaluate.

The code being typed in might look like C#, but it is actually translated into an XML representation of the rule and is stored in the .rules file associated with the current workflow. Looking inside the .rules file in Listing 5.23, one can see the same structure of the condition defined, albeit slightly more verbosely than anticipated.

Example 5.23. .rules XML

<RuleExpressionCondition Name="Condition1">
  <RuleExpressionCondition.Expression>
    <ns0:CodeBinaryOperatorExpression Operator="GreaterThan"
      xmlns:ns0="clr-namespace:System.CodeDom;Assembly=System,
        Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
      <ns0:CodeBinaryOperatorExpression.Left>
        <ns0:CodeFieldReferenceExpression FieldName="Amount">
          <ns0:CodeFieldReferenceExpression.TargetObject>
            <ns0:CodeFieldReferenceExpression FieldName="order">
              <ns0:CodeFieldReferenceExpression.TargetObject>
                <ns0:CodeThisReferenceExpression />
              </ns0:CodeFieldReferenceExpression.TargetObject>
            </ns0:CodeFieldReferenceExpression>
          </ns0:CodeFieldReferenceExpression.TargetObject>
        </ns0:CodeFieldReferenceExpression>
      </ns0:CodeBinaryOperatorExpression.Left>
      <ns0:CodeBinaryOperatorExpression.Right>
        <ns0:CodePrimitiveExpression>
          <ns0:CodePrimitiveExpression.Value>
            <ns1:Int32 xmlns:ns1="clr-namespace:System;Assembly=mscorlib,
              Version=2.0.0.0, Culture=neutral,
              PublicKeyToken=b77a5c561934e089">50</ns1:Int32>
          </ns0:CodePrimitiveExpression.Value>
        </ns0:CodePrimitiveExpression>
      </ns0:CodeBinaryOperatorExpression.Right>
    </ns0:CodeBinaryOperatorExpression>
  </RuleExpressionCondition.Expression>
</RuleExpressionCondition>

When workflows are compiled into assemblies, the rules are compiled alongside the workflow into the assembly. In the case where a declarative approach is being used, the CreateWorkflow() method contains overloads that take a second XmlReader parameter that points to the rules file. This allows the rules to be managed separately from the process itself. If that flexibility is desired in the compiled case, the rules can be deserialized from the rules file and a dynamic update can be used to insert the updated rules into the workflow instance as shown in Listing 5.24.
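For the declarative case, the overload might be used as in the following sketch. The file names are hypothetical, and runtime is assumed to be an already-started WorkflowRuntime.

```csharp
// Load the workflow definition and its externally managed rules; the second
// XmlReader parameter binds the contents of the .rules file to the instance.
using (XmlReader workflowReader = XmlReader.Create("OrderWorkflow.xoml"))
using (XmlReader rulesReader = XmlReader.Create("OrderWorkflow.rules"))
{
    WorkflowInstance instance =
        runtime.CreateWorkflow(workflowReader, rulesReader, null);
    instance.Start();
}
```

Because the rules file is read at instance-creation time, an updated .rules file on disk takes effect for new instances without recompiling the workflow.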

Example 5.24. Dynamic Update of RuleDefinitions

WorkflowChanges workflowchanges = new
  WorkflowChanges(workflowInstance.GetWorkflowDefinition());
CompositeActivity transient = workflowchanges.TransientWorkflow;
RuleDefinitions newRuleDefinitions = // acquire new rules definition here...
transient.SetValue(RuleDefinitions.RuleDefinitionsProperty,
  newRuleDefinitions);
workflowInstance.ApplyWorkflowChanges(workflowchanges);

The SDK also contains a sample of using the dynamic update capability to target and update an individual rule.

The ConditionedActivityGroup Activity

The CAG is a composite activity that can be used to create highly dynamic execution patterns, as shown in Figure 5.31.

Figure 5.31. The ConditionedActivityGroup.

The CAG is designed by creating any number of child activities (which could include a Sequence activity for multiple steps) and defining the rules that govern those activities’ execution. The idea is best understood in the context of an example. Consider a grocery order-processing scenario in which there are multiple activities that can be performed on an order, depending on the type of items it contains. The rules might be as follows:

  • If an item is fragile, use the fragile-packing process.

  • If an item is perishable, package it within dry ice.

  • If an item is from the baby-food category, insert an upcoming event flyer.

In addition, all the items in the order must be looped through to evaluate these rules properly.

One might ask, “Can’t I just do this in a normal sequential workflow?” The answer is usually yes, it is possible; but how would that problem be solved in a sequential workflow?

The problem would be solved with a number of looping constructs to ensure reconsidering additional execution of individual branches. This quickly becomes a spaghetti workflow, with loops becoming the primary means of control flow, requiring a new developer to spend quite some time understanding how the application works, as well as creating a large cost (in terms of development and testing) to modify the process. Contrast that to the logic in Figure 5.32.

Figure 5.32. A ConditionedActivityGroup with three activity groupings.

Here, the individual branches of execution have been defined and the decision to execute a branch is controlled by a set of rules, accessible from the CAG itself. This pattern allows for centralization of the rules (to outside the workflow) and the definition of discrete branches of functionality without focusing on how to nest the loops correctly to enable the execution behavior desired.

An additional feature of the CAG is its model of repeated execution. When the UntilCondition evaluates to true, the CAG stops executing any additional branches. It also cancels the currently executing activities in other branches, allowing the CAG to close and relinquish control to the next available activity.

Rules as Policy

In the previous sections, rules have been looked at as a simple if <condition> statement to serve as a gate around some other action. In the simple case of IfElse, the branch to execute is determined by the evaluation of the condition. A While activity depends on the condition to determine the number of times it needs to loop. There is another way to use rules, and that is in the execution of a rule policy.

In a rule policy, a RuleSet is built out of a collection of rules. A rule is defined as both a condition and an action to be performed. In the previous case, only the first half of the rule is utilized, and the action is determined by the activity depending upon the condition. In a RuleSet, the action is defined as part of the individual rule. The RuleSet is then executed against an arbitrary object. It is against this object that the conditions are evaluated (is the order greater than $100?) and the actions are performed (set the discount to 22%). A RuleSet lets one group a collection of rules related to some processing task and evaluate all the rules together. The rules engine also supports prioritization of rules, as well as fine-grained control over the chaining behavior of the rules.

Consider yet again the example of order processing. At some point in the execution of the process, the discount must be calculated. For most organizations, the discount is not calculated simply by assigning a fixed number; it is usually a complex combination of conditions: How big is the order? How much has the customer ordered over the last year? Are the items being ordered in high demand? This list can grow and become quite complicated, and it quickly becomes difficult to model these conditions as an increasingly complicated series of if/else or switch statements in code.

As the number of rules increases beyond three or four, it becomes increasingly difficult to code all the various permutations and dependencies of those rules. As an example, insurance companies might have thousands of rules involved in their risk-scoring applications. The overhead to create, maintain, and debug such a rule set in code becomes unwieldy and defeats the flexibility that rules-based processing can provide. Within a workflow, PolicyActivity is used to provide these capabilities.

Forward Chaining

As rule sets become increasingly complicated, there will inevitably exist dependencies between rules. Rule 4 might base the discount on the expected profit from the order. Rule 10 might perform an analysis of the customer’s delinquency rate and change the expected profit based on a new probability that the customer might not pay. Now that this expected profit has changed, what becomes of rule 4? It might be that the business requirement is that this rule should be evaluated again. This is referred to as the forward-chaining behavior of a rules engine. The rules engine is responsible for detecting such dependencies and selectively re-executing rules based on changes to the facts that the conditions depend on. Consider the rule set in Table 5.2.

Table 5.2. Sample Rule Set

  1. If Order.Total > 1000 then Discount = 12%

  2. If Order.Shipping > 0 then Discount = Discount + 3%

  3. If Order.ShipToState = 'WA' then Shipping = 25

For inputs, Order.Total = 1200, Shipping = 0, and ShipToState = 'WA'.

The execution of this rule set is as shown in Table 5.3.

Table 5.3. Rule Set Execution

  1. Rule 1 evaluates to true; Discount set to 12%

  2. Rule 2 evaluates to false

  3. Rule 3 evaluates to true; Shipping set to 25

  4. Rule 2 evaluates again (Shipping changed); Discount increased

As rule sets contain more rules, the evaluation pattern becomes increasingly complicated. Consider modeling this execution behavior in code, especially the ability to reevaluate rules when the facts they depend on change. This code would quickly spiral out of any state of maintainability and would not easily allow new rules to be added. Something such as a pricing policy might need to be extremely flexible, and rule-based calculation enables that flexibility.

Forward chaining can be used to evaluate across a collection of objects as well. Consider an Order object with an array of OrderItem objects. Additionally, consider a counter variable, i, an integer initialized to 0. Using the following rule set:

  1. If i < Order.Items.Length then ProcessItem(i), i = i + 1

With this single rule, the ProcessItem() method will be called for each item. The execution is easy to trace: The rule evaluates initially, processes the item at the counter, and then increments the counter. Incrementing the counter changes a fact the rule depends on, forcing its reevaluation. To make this more robust, a higher-priority rule should be added that initializes the counter to 0.
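One way to express that initialization programmatically is sketched below; within a RuleSet, rules with higher Priority values evaluate first. The field name i matches the counter in the example above, and ruleSet is assumed to be an existing RuleSet to which the processing rule already belongs.

```csharp
// Sketch: a higher-priority rule that unconditionally resets the counter
// before the default-priority processing rule fires.
Rule initCounter = new Rule("InitCounter");
initCounter.Priority = 1; // evaluated before priority-0 rules

// Condition: always true.
initCounter.Condition = new RuleExpressionCondition(
    new CodePrimitiveExpression(true));

// Action: this.i = 0
initCounter.ThenActions.Add(new RuleStatementAction(
    new CodeAssignStatement(
        new CodeFieldReferenceExpression(
            new CodeThisReferenceExpression(), "i"),
        new CodePrimitiveExpression(0))));

ruleSet.Rules.Add(initCounter);
```

Conditions and actions are built from CodeDom expressions, which is exactly what the rules editor serializes into the .rules file shown in Listing 5.23.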

External Policy Execution

The previous examples have focused on rules executing within the context of a workflow, with the workflow class as the object being operated on. But this is simply a matter of how the functionality is surfaced. A rule set can execute against any .NET object; any custom class that one creates may be the target of a rule set. The following steps are necessary to execute rules against an arbitrary object:

  1. Deserialize the rule set into a RuleSet object.

  2. Validate the rule set against the type (this ensures that if a rule mentions object.foo, the type contains foo).

  3. Create a RuleExecution object that stores the state of execution.

  4. Call RuleSet.Execute.

The code in Listing 5.25 does this.

Example 5.25. External Rule Execution

RuleExecution ruleExecution;
WorkflowMarkupSerializer serializer = new WorkflowMarkupSerializer();
XmlTextReader reader = new XmlTextReader(new StringReader(ruleSetXmlString));
RuleSet ruleSet = serializer.Deserialize(reader) as RuleSet;

// check that the rules are valid against the target type
RuleValidation ruleValidation = new RuleValidation(executionObject.GetType(), null);
if (!ruleSet.Validate(ruleValidation))
{
    // handle errors and stop; do not execute an invalid rule set
    return;
}
ruleExecution = new RuleExecution(ruleValidation, executionObject);
ruleSet.Execute(ruleExecution);

This can be generalized to scenarios where processing occurs on many clients, but it is desirable to have a central rule store to ensure that all the clients are always using the same rule policies.

Summary

This chapter has provided an introduction to the major concepts of the Windows Workflow Foundation. The activity model, designer, and runtime are tools that can be used to create declarative processes for use inside of any .NET application. There is a substantial amount of extensibility built into Windows Workflow Foundation that enables it to be used in a wide variety of scenarios. Additionally, Windows Workflow Foundation includes a rules engine to allow declarative rules to be executed within a .NET application. Chapter 6, “Using the Windows Communication Foundation and the Windows Workflow Foundation Together,” will look at how to integrate Windows Workflow Foundation with Windows Communication Foundation.
