Chapter 9

Identity in Azure

What’s in This Chapter?

  • Understanding a federated identity and claims-based identity
  • Working with federation and claims with Windows Identity Foundation
  • How to deploy and troubleshoot a WCF service on Windows Azure

Most applications need some way to identify users and to determine what a specific user may or may not do. This is no different for applications running in Windows Azure; in fact, it's more critical, for several reasons. Windows Azure is unlike a typical server sitting in a data center under your control, in that applications are not part of your own network environment or domain. In your own network you can fall back on security at the infrastructure level. That is definitely not the case in Windows Azure, which is accessible from anywhere, so you need to compensate for this. Also, you can't rely on network credentials to authenticate users, because they don't work across a firewall. Another aspect of the security picture is the increasing need for applications to interact. Put all this together, and you need a different strategy for identity.

Identity in the Cloud

Many applications (or services) need to uniquely identify the user: some to give you a personalized experience, others to determine your access rights. Until a few years ago, you could be identified in one of two ways. The first was through credentials, such as a username and password, unique to the application; the other was through your network credentials, used across the applications in the local network. The latter is user-friendly: log on once, and all applications know you. Beyond the realm of your local network, however, your network credentials mean nothing, so on the Internet applications tend to use a username and password. The last time I checked, I had more than 100 accounts on various websites, and I have probably forgotten about quite a few more. I doubt this is any different for other regular computer users. You need something better to manage your identity in the cloud.

With cloud computing, the need for secure and flexible identity management is even more important. Cloud computing blurs the line between the local network and the Internet. Applications running in the cloud may conceptually be internal applications, with data that should be well guarded against falling into the wrong hands. In these cases the identity of the user and the user's access rights must be above all suspicion. Granting access based on just a username and password entered on a publicly accessible web page may not be secure enough, so the security mechanism used by cloud applications should offer more secure options. On the other hand, it should also facilitate less-strict forms of identity; after all, an application hosted in the cloud could also be meant for public use.

Another aspect that impacts the way you need to think about security in the cloud is applications working together. A photo print service for instance could work together with your online photo album to make the prints you want. Instead of uploading the photos you want printed, you can grant the photo print service partial access to the online photo album. This means identity in the cloud also needs some way to provide other people or applications limited access to your data.

Understanding Federated Identity

You can solve many of the problems just discussed with federated identity. But before going deeper into that, you need a definition of what an (electronic) identity is. The problem is that if you ask different people, you'll get different definitions, for instance:

  • Username and password
  • Some unique key identifying a user
  • Authentication and authorization
  • All data associated with a user account

For the purpose of this discussion, an identity is a set of attributes associated with a user or entity wanting access to some secure resource. Some of these attributes identify the user or entity uniquely (for example, security identifier, e-mail address, and so on), so the sum of the attributes is in essence unique.

A federated identity also qualifies as an identity but has some additional characteristics. First, it enables single sign-on across many applications, regardless of whether those applications share the same security domain. Second, the information making up your identity doesn't need to be stored in a single place, but rather in the places where it makes sense. In addition, applications get only the information they need and to which access has been granted, so an application doesn't simply have access to all information about a user. For instance, an application that only needs to know whether you're an adult won't get your date of birth, just an indicator telling the application that you are in fact an adult. Both the distributed nature of a federated identity and the fact that an application gets access only to what it needs mean that privacy is handled better.

NOTE
Because the attributes sent to an application can be filtered, there is no guarantee of uniqueness. However, this is also not always necessary. For instance, an application that helps you find a clothing store may need only your location and gender. Where a unique identifier is needed, that identifier can still be decoupled from your actual identity for privacy purposes.

How Does Identity Federation Work?

With identity federation an application does not authenticate the user. That job is left to an Identity Provider (IdP), which as the name suggests provides you with an electronic identity after it has verified that you are who you say you are. The identity is passed to the application in a token, which contains the information the application needs and has been approved to get. The token is produced by a Security Token Service (STS), which is often part of the IdP. As you’ll see later, this isn’t necessarily the case, which can make things confusing. This is also why you’ll find that in some literature IdP and STS are used interchangeably. Here the term IP-STS indicates an STS that is also an IdP.

When an STS creates a token, it signs the token, so an application can be certain it was issued by that STS. Most often a token is also encrypted, and in most cases you can assume it is, although this isn't necessary in all scenarios. To work together, the STS and the application must have a trust relationship. This is why an application (or service) using the STS is also called a Relying Party (RP), which is the term used when discussing the theory. For Windows Azure, an RP is synonymous with a Hosted Service. In general terms you can think of an RP as one or more secured resources a client might want to access, such as a web page, a service, or a file.

Because the application is no longer authenticating users directly, the authentication process is more complex, as shown in the sequence diagram in Figure 9-1.

Redirection ensures that the client gets to the IP-STS and back to the RP after the client has acquired a token (refer to Figure 9-1). Steps 5 and 8 deal with generating a token and validating the generated token, respectively. Generating the token consists of signing it and possibly encrypting it for additional security. Validating the token means decrypting it (if needed) and checking the signature to see whether the token comes from a trusted STS. After that, the RP determines how to check authorization, which is discussed later.

In generic terms, the process shown in Figure 9-1 is how federated identity works. There are, however, two flavors of the process: active federation and passive federation. With active federation the client plays an active role in the process, hence the name. This means that the client sets up communication with the STS and sends a token request, which is possible only with clients that support the protocols and encryption techniques used to make identity federation work. The problem is that web browsers are actually dumb in this respect; they know nothing more than (secure) HTTP, and as such can't request a token. Passive federation solves this: the user is sent to a login page hosted on the IP-STS, so there is no need for the browser to request a token. The token is sent back as part of the response to the browser, and a small piece of script then posts the token to the RP. To the browser the token is just some data sent back and forth.
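The redirect that starts this browser-based flow can be sketched in a few lines of code. The following is a hedged sketch of the sign-in request an RP could redirect the browser to; the wa and wtrealm query string parameters come from the WS-Federation protocol, while the STS and RP URLs are hypothetical examples.

```csharp
using System;

// Sketch of the sign-in request an RP sends the browser to during passive
// federation. The wa and wtrealm parameters come from the WS-Federation
// protocol; the STS and RP URLs are made-up examples.
class PassiveSignInSketch
{
    // Builds the URL of the STS login page the browser is redirected to.
    public static string BuildSignInUrl(string stsLoginUrl, string realm)
    {
        return stsLoginUrl
            + "?wa=wsignin1.0"                           // action: sign in
            + "&wtrealm=" + Uri.EscapeDataString(realm); // identifies the RP
    }

    static void Main()
    {
        Console.WriteLine(BuildSignInUrl(
            "https://sts.example.com/login",
            "https://rp.example.com/"));
    }
}
```

In a real application a WIF HTTP module produces this redirect for you; the sketch only makes visible what travels over the wire.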

A good example of passive federation is Microsoft LiveID. When you log in to a service requiring a LiveID, you're sent to https://login.live.com, which authenticates you. You can see what goes on under the covers using a network profiler such as Fiddler (www.fiddler2.com) or the F12 Developer Tools built into Internet Explorer 9. You can also do this after you've built your own RP and STS later in this chapter.

Identity Federation and Network Credentials

A key aspect of a federated identity token is that it is signed. This is typically done using an X.509 certificate, but other mechanisms such as symmetric keys are also supported by most protocols. Basically, the RP needs to trust the issuer of the token and be sure the token is authentic. How the user was authenticated and where the STS is located are not relevant to the RP. This makes it easy to have an STS in a local network, accessible only to users on that network and accepting only network credentials, provide tokens for an RP in the cloud. Unless such an RP also accepts tokens from another STS, it is accessible only to users on the local network. This effectively extends the local network to include the RP and achieves single sign-on across applications in the local network and the RP based on network credentials. Figure 9-2 shows how this works.

Identity Federation Beyond the Basics

Identity federation breaks down some of the barriers from the past, but it won't help much if an RP can work with only a single STS. If that were the case, you would just be outsourcing authentication and adding an extra layer of complexity in the process. Fortunately, there is no limit to the number of STSs an RP can trust, and the RP determines how to deal with identities from different STSs. For instance, you can allow users to authenticate using their Facebook or LinkedIn credentials. Whether you treat these as different accounts, or allow the user to access the same account with both Facebook and LinkedIn, is up to you.

That an RP trusts an STS doesn't say anything about the way the user was authenticated. The RP just trusts the STS and the tokens it gets from that STS. This opens the door to another option when dealing with multiple security domains, as shown in Figure 9-3. Instead of having the RP trust multiple STSs, the RP can trust a single STS, and that STS can in turn trust other STSs to authenticate users. Because such an STS only federates the identity, it is also known as a Federation Provider or FP-STS.

Cross-domain federation (refer to Figure 9-3) is key for collaboration and business-to-business scenarios. Consider for instance two companies: OfficeGiant.com and Laws’R’us. OfficeGiant.com sells office supplies, and Laws’R’us is a big law firm with several departments. Within each department the office manager is responsible for buying office supplies as needed. With conventional security technologies there are several ways to solve this:

  • One account could be used for the entire company and the username and password shared among the office managers. This is obviously not a good approach. There’s no way to track who used the account, so fake or wrong orders are a possibility, especially if an office manager is disgruntled.
  • Each office manager could create an account and have it linked to the company. This is better than the previous option, but from an administrative point of view it is inconvenient. Also, if an office manager leaves the company, who’s going to disable the account?
  • The company could get a master account with the ability to create accounts for each office manager. This is a good approach but still requires management of user accounts that are not linked to local domain accounts, so if an office manager leaves the company, the account still has to be disabled by hand.

With cross-domain federation, OfficeGiant.com could accept the IP-STS at Laws’R’us as a trusted IdP for its FP-STS. The OfficeGiant.com FP-STS transforms the incoming token from the Laws’R’us IP-STS and can add information to the token telling the application that the user is actually from Laws’R’us. If the IP-STS at Laws’R’us uses the network credentials as previously shown in Figure 9-2, employees at Laws’R’us are logged in to OfficeGiant.com transparently with their network credentials. You can assume that the network credentials are managed much better than accounts on some external site, so when someone leaves the company, the account is quickly disabled. Also, the Laws’R’us system administrator can control who has access to OfficeGiant.com simply by adding users to or removing them from groups.

Understanding Claims

As stated earlier, identity federation isn’t just about authentication; it is also about the information making up a user’s identity. This information, captured in attributes, is sent to the RP in the token in the form of claims. A claim is basically a name-value pair containing some piece of information about the user. This can be anything, such as an e-mail address, a membership number, or a role. The name of a claim is often some unique identifier such as a URI and, where possible, a standard value, such as http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress for an e-mail address, so any application understands the meaning of that particular claim. Claim values can come directly from some data store, such as a birth date, but can also be derived, such as an age. A typical STS determines what goes into a token through rules applicable to an RP.

Claims are called such because they claim something about the user. The claims sent to an RP are (or should be) relevant to that RP, and for that RP the given values are what makes up the identity of the user. The STS where a claim originates is key in determining how truthful the claim is, provided truthfulness is relevant. For example, on an online forum my name might be Darth Vader and my real name unknown. However, when I do my tax return, Darth Vader can’t do the job. The RP needs my real name and Social Security number. What’s more, the RP accepts only a token from an STS that it trusts to have verified my real name and Social Security number; I can’t just go around and give any Social Security number. This is no different from real life: your membership card for the local gym will suffice to get you into the gym, but a police officer will not accept it, because the officer can’t trust that the gym has verified your data. Your driver’s license, on the other hand, is issued by an organization a police officer trusts.

Although you can still do role-based authorization using the standard role claim type, claims enable you to model security around much more than just role membership. Because claims can contain any type of value, they can tie into the business logic. For instance, when the age of the user is a claim value, the business logic can do a check against that value directly. Another example is a bank account number: there’s no need to do a lookup in a database, because the number can be made part of the token. This flexibility makes claims a powerful basis for authorization and much more natural than role-based authorization alone.
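The age check just mentioned can be sketched without any framework support, because a claim is in essence just a type/value pair. The following minimal sketch assumes the RP received the standard date-of-birth claim type; the data and class names are invented for illustration.

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;

// Minimal sketch (no WIF involved) of claims-based authorization:
// a claim is a type/value pair, and business logic checks the value.
// The claim type URI is the standard date-of-birth claim; the data is made up.
class ClaimValueSketch
{
    public const string DateOfBirthClaim =
        "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/dateofbirth";

    // Derives "is an adult" from the date-of-birth claim value.
    public static bool IsAdult(IEnumerable<KeyValuePair<string, string>> claims,
                               DateTime today)
    {
        string dob = claims.First(c => c.Key == DateOfBirthClaim).Value;
        DateTime dateOfBirth = DateTime.Parse(dob, CultureInfo.InvariantCulture);
        return dateOfBirth <= today.AddYears(-18);
    }

    static void Main()
    {
        var claims = new List<KeyValuePair<string, string>>
        {
            new KeyValuePair<string, string>(DateOfBirthClaim, "1980-05-01")
        };
        Console.WriteLine(IsAdult(claims, DateTime.Today));
    }
}
```

Note that an STS configured for privacy could do this derivation itself and hand out only an "is adult" claim, as discussed earlier in the chapter.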

Putting information such as a Social Security number or bank account number into a token may sound scary, but you can argue that it is much safer than putting it in an application database. An STS is likely to be much better secured, because it deals only with personal data, whereas an application with a lot of functionality can have vulnerabilities in all sorts of places. More important, a token needs to come from an STS the RP trusts. A token from an untrusted STS is simply discarded, so spoofing data is not possible unless you hijack the STS or get hold of the private key of the signing certificate. A commercial STS such as Active Directory Federation Services (ADFS) 2.0 has several lines of defense to guard against such breaches and is rigorously tested.

Windows Identity Foundation Overview

With the theory firmly under your belt, you can now move to how you can work with identity federation and claims-based identity in your application. Several standards implement the theory in practice. For instance, the interactions discussed before may include the WS-MetadataExchange standard to retrieve a policy structured with the WS-Policy standard, an STS implementing the WS-Trust standard and formatting tokens with Security Assertion Markup Language (SAML), and the WS-Federation standard to tie the communication between client and service together. To make matters worse, this is not the only option.

You can imagine that implementing these standards and the cryptography needed yourself is a daunting task. Fortunately, Microsoft has done the heavy lifting for you. Windows Identity Foundation (WIF) is a framework for building applications using federated claims-based identity. It abstracts WS-Trust, WS-Federation, and SAML, and presents developers with an API on top of the .NET Framework. WIF works with .NET Framework 3.5 SP1 and up, and runs on Windows Server 2003 and up. This of course includes Windows Azure, which builds on Windows Server.

NOTE
WIF is not automatically installed when you install all the components needed to develop Windows Azure applications using the Web Platform Installer. See Chapter 8 for more information on setting up your development environment with WIF.

WIF is an important part of the Microsoft software stack moving forward. Applications such as Microsoft Office are being made claims-aware by integrating WIF. This is no surprise because federated claims-based identity is much better than other options, as discussed earlier in this chapter. For that reason it is a good idea to build your own applications using WIF, so you need to get acquainted with it.

NOTE
In this book you learn about the basics of WIF. For more details you can read the excellent whitepaper “Microsoft Windows Identity Foundation (WIF) Whitepaper for Developers,” which you can download from http://bit.ly/wifwhitepaper.

Understanding How WIF Integrates into .NET

With WIF you can build claims-aware applications, but you can also use it to build an STS. In both cases you have to deal with only the API and not with the underlying standards. If you build a relying party, you don’t need to learn a lot of new stuff because WIF integrates with the existing .NET Framework user infrastructure provided by the IPrincipal and IIdentity interfaces. WIF defines several interfaces and classes that build on .NET Framework constructs, making WIF easy to use for .NET developers.

The Claim Class

WIF revolves around claims-based identity, so a key class is the Claim class, which corresponds to a single value of a particular claim type. The Claim class looks like the following code snippet.

public class Claim
{
    public virtual string ClaimType { get; }
    public virtual string Issuer { get; }
    public virtual string OriginalIssuer { get; }
    public virtual IDictionary<string, string> Properties { get; }
    public virtual IClaimsIdentity Subject { get; }
    public virtual string Value { get; }
    public virtual string ValueType { get; }

    public virtual Claim Copy();
    public virtual void SetSubject(IClaimsIdentity subject);
}

code snippet 01_Claim class.txt

The key properties on the Claim class are ClaimType and Value. ClaimType is a unique string value such as a URI, as discussed earlier; this nice, low-level approach means you can easily define your own claim types. Value is also always a string, but the ValueType property may give a clue to the actual data type. Several standard claim types, such as e-mail, date of birth, and of course name, are available through the ClaimTypes class, which exposes them as public string constants. However, you can also define your own claims as you see fit.

A claim always has an issuer, which you can check with the Issuer property. With identity federation there may be multiple issuers in the chain, in which case the original issuer of the claim is in the OriginalIssuer property, and the Issuer property contains the last STS in the chain. In Properties you can find metadata about the claim, for instance metadata harvested from a SAML token; in most cases you won't do anything with it. Finally, the Subject property is the identity the claim belongs to, which is an object of type IClaimsIdentity, discussed next.

The IClaimsIdentity Interface

WIF defines the IClaimsIdentity interface, which extends the IIdentity interface, so it can be used as an IIdentity replacement. WIF provides a default implementation of this interface with the ClaimsIdentity class. The IClaimsIdentity interface has the following signature:

public interface IClaimsIdentity : IIdentity
{
    ClaimCollection Claims { get; }
    IClaimsIdentity Actor { get; set; }
    string Label { get; set; }
    string NameClaimType { get; set; }
    string RoleClaimType { get; set; }
    SecurityToken BootstrapToken { get; set; }

    IClaimsIdentity Copy();
}

code snippet 02_IClaimsIdentity.txt

Most important in the preceding interface is the Claims property, which gives you access to the claims applicable to the user. What's interesting is that this is basically the only information available from a token. There is no username and no information about the roles the user is in: any and all information is captured in claims, including the username and roles, which are just specific claim types. These claims need to be mapped to the IIdentity.Name property and the IPrincipal.IsInRole method, which is what the NameClaimType and RoleClaimType properties are for. They indicate the claim types to search for in the claims collection for the username and roles. It is not mandatory for a claims identity to contain a username or roles; an identity is just a set of claims the application can do checks on, and it is up to the application to determine which claims must be available. The other three properties are mainly there for advanced scenarios. The Label property is a convenience to keep different identities apart, the Actor property is used in delegation scenarios, and the BootstrapToken property contains the original security token if WIF is configured to keep it.
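The mapping from claims to the familiar Name and IsInRole members can be sketched without WIF. The following is a minimal model of that idea, assuming the standard name and role claim type URIs; the user data and class names are invented for illustration, and this is not WIF's actual implementation.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of how NameClaimType and RoleClaimType map claims to
// IIdentity.Name and IPrincipal.IsInRole. The URIs are the standard
// name and role claim types; the user data is made up.
class ClaimMappingSketch
{
    public const string NameClaimType =
        "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name";
    public const string RoleClaimType =
        "http://schemas.microsoft.com/ws/2008/06/identity/claims/role";

    // Name is the value of the first claim matching NameClaimType.
    public static string GetName(IEnumerable<KeyValuePair<string, string>> claims)
    {
        return claims.Where(c => c.Key == NameClaimType)
                     .Select(c => c.Value)
                     .FirstOrDefault();
    }

    // IsInRole checks the values of all claims matching RoleClaimType.
    public static bool IsInRole(IEnumerable<KeyValuePair<string, string>> claims,
                                string role)
    {
        return claims.Any(c => c.Key == RoleClaimType && c.Value == role);
    }

    static void Main()
    {
        var claims = new List<KeyValuePair<string, string>>
        {
            new KeyValuePair<string, string>(NameClaimType, "jdoe"),
            new KeyValuePair<string, string>(RoleClaimType, "Managers"),
            new KeyValuePair<string, string>(RoleClaimType, "Users")
        };
        Console.WriteLine(GetName(claims));
        Console.WriteLine(IsInRole(claims, "Managers"));
    }
}
```

Because WIF lets you configure which claim types play these two roles, an STS that uses a nonstandard name claim can still light up the familiar .NET APIs.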

The IClaimsPrincipal Interface

WIF also defines the IClaimsPrincipal interface to go with IClaimsIdentity. This enables WIF to replace the active principal in context, which is HttpContext.User for ASP.NET applications, ServiceContext.User for WCF services, or Thread.CurrentPrincipal at a more basic level. The IClaimsPrincipal interface looks as follows:

public interface IClaimsPrincipal : IPrincipal
{
    ClaimsIdentityCollection Identities { get; }

    IClaimsPrincipal Copy();
} 

code snippet 03_IClaimsPrincipal.txt

What stands out in the IClaimsPrincipal interface is that a principal can contain multiple identities. In the common case there is a single identity, but there are scenarios in which multiple identities from different STSs could be requested to form a comprehensive identity. Such scenarios are not discussed in this book, but it is good to be aware of why the interface looks the way it does.

Checking for a Claim

The whole idea behind claims is that you can use logic to check for a certain claim, either to use that claim to perform some action, such as sending an e-mail to the address in the e-mail claim, or to authorize the user for some functionality. Basically, you need to loop through all the claims associated with the identity until you find the right type and possibly value. Because this is inconvenient to code every time, you can create extension methods, as shown in Listing 9-1.


Listing 9-1: Extension Methods to Access Claims

public static class IClaimsIdentityExtensions
{
    // Get all values for the given claim type.
    public static IEnumerable<string> GetClaimValues(
        this IClaimsIdentity identity,
        string claimType)
    {
        return from c in identity.Claims
               where c.ClaimType == claimType
               select c.Value;
    }

    // Get the first value for the given claim type.
    public static string GetClaimValue(
        this IClaimsIdentity identity,
        string claimType)
    {
        IEnumerable<string> values = identity.GetClaimValues(claimType);
        return values.FirstOrDefault() ?? string.Empty;
    }

    public static bool ClaimHasValue(
        this IClaimsIdentity identity,
        string claimType,
        string value)
    {
        return identity.GetClaimValues(claimType)
            .Any(s => s.Equals(value, StringComparison.OrdinalIgnoreCase));
    }
}

The main extension method in Listing 9-1 is GetClaimValues, which directly queries the Claims collection of IClaimsIdentity. As you can see, it is just a single LINQ statement, but note that it returns an IEnumerable<string> instead of a single value. That's because any claim type can occur multiple times with different values. This is most likely for claims such as the role claim, but it can be true for many others. If, functionally speaking, you need only a single value and you can expect there to be only one, you can use the GetClaimValue method, which just gets the first value it encounters for the given claim type; you can use this to send e-mail, for instance. Finally, the ClaimHasValue extension method checks whether the required value is among the claims of a particular type, so you can easily authorize a user with an if or switch statement, as in the following snippet.

if (identity.ClaimHasValue(ClaimTypes.Gender, "Male"))
{
    ShowMaleCatalog();
}
else
{
    ShowFemaleCatalog();
}

code snippet 04_CheckForClaim.txt

Before you can use the extension methods as in the preceding code, you need the IClaimsIdentity. Here again you must deal with the fact that there can be multiple identities. Luckily, in the common case there is only one, so you can use the following code snippet to get the first identity of the principal.

IClaimsPrincipal principal = Thread.CurrentPrincipal as IClaimsPrincipal;
if (principal == null)
{
    throw new SecurityException("Couldn't get IClaimsPrincipal.");
}
IClaimsIdentity identity = principal.Identities[0];

code snippet 05_GetIClaimsIdentity.txt

You can combine this code with the extension methods in Listing 9-1 to create a helper class that enables you to perform checks with a single line of code.
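Such a helper might look like the following sketch. The class and member names are invented here, and the code assumes the WIF types (IClaimsPrincipal, IClaimsIdentity, in Microsoft.IdentityModel.Claims) and the extension methods from Listing 9-1 are available; treat it as a starting point rather than a definitive implementation.

```csharp
// Hypothetical helper combining the principal lookup with the extension
// methods from Listing 9-1. Requires System.Threading, System.Security,
// and the WIF claims namespace.
public static class ClaimsHelper
{
    // The first claims identity of the current principal.
    public static IClaimsIdentity Current
    {
        get
        {
            IClaimsPrincipal principal =
                Thread.CurrentPrincipal as IClaimsPrincipal;
            if (principal == null)
            {
                throw new SecurityException("Couldn't get IClaimsPrincipal.");
            }
            return principal.Identities[0];
        }
    }

    public static bool HasValue(string claimType, string value)
    {
        return Current.ClaimHasValue(claimType, value);
    }
}
```

With this in place, a check collapses to a single line such as ClaimsHelper.HasValue(ClaimTypes.Gender, "Male").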

NOTE
An alternative to the previous code is to use ClaimsPrincipalPermission; however, this is a more advanced technique and beyond the scope of this book. For more information, visit http://bit.ly/claimscheck.

Understanding Federation Metadata

Up until now the focus has been on getting the principal, identity, and claims in code. But how do you know which claims you are actually going to get? That is determined by the STS. Fortunately, the STS publishes the claims it can produce at the default location /FederationMetadata/2007-06/FederationMetadata.xml, which looks like the XML in Listing 9-2.


Listing 9-2: Example Federation Metadata

<?xml version="1.0" encoding="utf-8"?>
<EntityDescriptor
   ID="_9070250a-3132-496a-9e3f-cd24d189c6cc"
   entityID="http://exampleSTS.com/"
   xmlns="urn:oasis:names:tc:SAML:2.0:metadata">

  <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
     <!-- omitted for brevity -->
  </ds:Signature>

  <RoleDescriptor
     xsi:type="fed:SecurityTokenServiceType"
     protocolSupportEnumeration="http://docs.oasis-open.org/wsfed/federation/200706"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xmlns:fed="http://docs.oasis-open.org/wsfed/federation/200706">

    <KeyDescriptor use="signing">
      <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
        <X509Data>
          <X509Certificate><!-- omitted for brevity --></X509Certificate>
        </X509Data>
      </KeyInfo>
    </KeyDescriptor>
   
    <ContactPerson contactType="administrative">
      <GivenName>contactName</GivenName>
    </ContactPerson>

    <fed:ClaimTypesOffered>
      <auth:ClaimType
         Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name"
         Optional="true"
         xmlns:auth="http://docs.oasis-open.org/wsfed/authorization/200706">
        <auth:DisplayName>Name</auth:DisplayName>
        <auth:Description>The name of the subject.</auth:Description>
      </auth:ClaimType>
      <!-- more claim types go here -->
    </fed:ClaimTypesOffered>

    <fed:SecurityTokenServiceEndpoint>
      <EndpointReference xmlns="http://www.w3.org/2005/08/addressing">
        <Address>http://exampleSTS.com/</Address>
      </EndpointReference>
    </fed:SecurityTokenServiceEndpoint>

    <fed:PassiveRequestorEndpoint>
      <EndpointReference xmlns="http://www.w3.org/2005/08/addressing">
        <Address>http://exampleSTS.com/</Address>
      </EndpointReference>
    </fed:PassiveRequestorEndpoint>
  </RoleDescriptor>
</EntityDescriptor>

In Listing 9-2 you see a lot of XML namespaces. Don't worry about understanding those; they are basically part of the plumbing. Also part of the plumbing, and in most cases taken care of for you, are the contents of the <ds:Signature> element and the <KeyDescriptor> element. The latter contains the public key that RPs need to verify tokens signed by the STS. The former is the signature of the Federation Metadata itself, so it can't be tampered with. Even a whitespace change invalidates the signature, so you never want to touch a Federation Metadata file with an editor after it has been signed.

The <fed:SecurityTokenServiceEndpoint> and <fed:PassiveRequestorEndpoint> elements tell RPs where to go to get a security token. The latter is the URL of the STS login page the user is sent to when passive federation is used.

Most interesting for developers is the <fed:ClaimTypesOffered> element, which lists the claim types an RP can request from the STS. Important here is the Uri of the claim type, which uniquely distinguishes it from any other; this is the value you need to look for when searching for a particular claim. Also note the Optional attribute: if it is set to true, the claim may or may not be in the security token, which can be significant. This is where the Federation Metadata of the STS and the RP should match up. An RP also publishes Federation Metadata, but with less information. Most of it should correspond with what's in the STS Federation Metadata, but instead of <fed:ClaimTypesOffered> the Federation Metadata of an RP contains <fed:ClaimTypesRequested>, which looks as follows:

<fed:ClaimTypesRequested>
  <auth:ClaimType
      Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name"
      Optional="true"
      xmlns:auth="http://docs.oasis-open.org/wsfed/authorization/200706" />
  <!-- more claim types go here -->
</fed:ClaimTypesRequested>

code snippet 06_ClaimsRequested.txt

<fed:ClaimTypesRequested> contains the same elements as <fed:ClaimTypesOffered>, so the Optional attribute returns here as well. This is key, because if a claim is optional for the STS but required by the RP, a token sent by the STS may not contain all the claims the RP requires. The STS can change the set of claims an RP can request, and some products even enable the STS to differentiate between the different RPs it serves. For this reason it is a good idea for an RP to periodically check the Federation Metadata to determine whether all the claims it requires are still provided by the STS, and if not, notify the application administrator.
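Such a periodic check could start by collecting the offered claim type URIs from the metadata. The following hedged sketch uses LINQ to XML; in a real check you would download /FederationMetadata/2007-06/FederationMetadata.xml from the STS, whereas here an inline fragment with the namespaces from Listing 9-2 stands in for it.

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

// Sketch: gather the claim type URIs an STS offers by parsing its
// Federation Metadata. An inline fragment replaces the downloaded file.
class MetadataCheckSketch
{
    static readonly XNamespace Auth =
        "http://docs.oasis-open.org/wsfed/authorization/200706";

    public static string[] GetOfferedClaimTypes(string metadataXml)
    {
        return XElement.Parse(metadataXml)
            .Descendants(Auth + "ClaimType")
            .Select(c => (string)c.Attribute("Uri"))
            .ToArray();
    }

    static void Main()
    {
        string xml =
            "<fed:ClaimTypesOffered" +
            " xmlns:fed='http://docs.oasis-open.org/wsfed/federation/200706'" +
            " xmlns:auth='http://docs.oasis-open.org/wsfed/authorization/200706'>" +
            "<auth:ClaimType Uri='http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name' Optional='true' />" +
            "</fed:ClaimTypesOffered>";

        // An RP can compare this set with the claim types it requires
        // and alert an administrator when a required claim disappears.
        foreach (string uri in GetOfferedClaimTypes(xml))
        {
            Console.WriteLine(uri);
        }
    }
}
```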

Working with Windows Identity Foundation

Now that you know about identity federation, claims, and WIF, it’s time to put the theory into practice. Passive federation and active federation are discussed separately, starting with passive federation. Currently that is the most common scenario, and it is also a lot simpler to get working than active federation.

Creating a Claims-Aware Website

To start with passive federation in Windows Azure, you need to create an ASP.NET Web Role like you did in the last chapter and hook it up to an STS. The steps that follow walk you through the process.

1. Run Visual Studio as Administrator.
2. Create a new Windows Azure project named WifPassiveFed.
3. Add an ASP.NET Web Role.
4. Rename WebRole1 to WebRoleRP to indicate this is a relying party.
5. Set the endpoint of WebRoleRP to a fixed port, as shown in the previous chapter so that you can browse to http://127.0.0.1:8080/ to access the RP.
6. In the Solution Explorer right-click the solution, and select Add ⇒ New Web Site.
7. Select ASP.NET Security Token Service, and provide a location inside the current solution folder.
8. When the website has been added, expand the FederationMetadata folder until you see the FederationMetadata.xml file. Right-click the file, and select View in Browser.
9. Copy the URL in the browser.
10. Start the Windows Identity Foundation Federation Utility—FedUtil for short—to start a wizard that guides you in turning the application into a claims-aware application. You can find FedUtil in the WIF SDK folder or by typing fed in the Start menu.
NOTE: If you use Visual Studio Professional or higher, you can also right-click the WebRole in the Solution Explorer and select Add STS Reference. If you do this, the web.config in the next step is preset to the web.config of the WebRole.
11. FedUtil starts with a screen to select a web.config and to set the URL under which the application is reachable. Select the web.config of WebRoleRP, and set the Application URI field to reflect the URL from step 5, as shown in Figure 9-4.
12. Click Next. You see a warning indicating that you are not using a secure connection. Although you should use a secure connection in production, you don’t need to use one in development, so you can just ignore the warning and click Yes.
13. You are now presented with a screen to select the STS used to authenticate users. Select Use an Existing STS.
14. Paste the URL copied in step 9 into the textbox, and click Next. Ignore the warning you get.
15. Click Next, Next, and then Finish.

Understanding FedUtil

FedUtil sets up your website to use WIF, but what does it do exactly? First, it adds a website WebRoleRP_STS to the solution. This is a simple STS that just authenticates anyone. It doesn’t require a password. (More about the STS later.)

WARNING: An STS that doesn’t require a password isn’t secure. You should use it only for development purposes. For production applications, use a tried and tested STS such as Active Directory Federation Services 2.0 or the Azure Access Control Service. The latter is discussed in Chapter 14.

The second thing FedUtil has done is change the web.config of WebRoleRP. The authentication mode has been set to none because authentication is handled by two HTTP modules configured in the <system.webServer> section. FedUtil also added an appSettings value indicating where to find the Federation Metadata file of the hosted service and has granted all users access to it by adding a <location> element. Finally, a new section was added with the name <microsoft.identityModel>. A sample of these changes is shown in Listing 9-3.


Listing 9-3: Sample web.config Changes by FedUtil

<system.webServer>
  <modules runAllManagedModulesForAllRequests="true">
    <add name="WSFederationAuthenticationModule"
      type="Microsoft.IdentityModel.Web.WSFederationAuthenticationModule,
         Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral,
         PublicKeyToken=31bf3856ad364e35" preCondition="managedHandler" />
    <add name="SessionAuthenticationModule"
         type="Microsoft.IdentityModel.Web.SessionAuthenticationModule,
         Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral,
         PublicKeyToken=31bf3856ad364e35" preCondition="managedHandler" />
  </modules>
</system.webServer>
<appSettings>
  <add key="FederationMetadataLocation"
       value="D:\Wif\WindowsAzure\WebRoleRP_STS\FederationMetadata\
2007-06\FederationMetadata.xml" />
</appSettings>
<microsoft.identityModel>
  <service>
    <audienceUris>
      <add value="http://127.0.0.1:8080/" />
    </audienceUris>
    <federatedAuthentication>
      <wsFederation passiveRedirectEnabled="true"
                    issuer="http://localhost:8800/WebRoleRP_STS/"
                    realm="http://127.0.0.1:8080/"
                    requireHttps="false" />
      <cookieHandler requireSsl="false" />
    </federatedAuthentication>
    <applicationService>
      <claimTypeRequired>
        <!--Following are the claims offered by STS
            'http://localhost:8800/WebRoleRP_STS/'. Add or uncomment
            claims that you require by your application and then update
            the federation metadata of this application.-->
        <claimType
          type="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name"
          optional="true" />
        <claimType
          type="http://schemas.microsoft.com/ws/2008/06/identity/claims/role"
          optional="true" />
      </claimTypeRequired>
    </applicationService>
    <issuerNameRegistry type="Microsoft.IdentityModel.Tokens.
ConfigurationBasedIssuerNameRegistry, Microsoft.IdentityModel,
Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
      <trustedIssuers>
        <add thumbprint="CC85FEF38933D30CA163F37C0D283B19144F9F98"
             name="http://localhost:8800/WebRoleRP_STS/" />
      </trustedIssuers>
    </issuerNameRegistry>
  </service>
</microsoft.identityModel>

The WSFederationAuthenticationModule configured in Listing 9-3 handles incoming WS-Federation tokens to authenticate the user for the first time. If the user presents a valid token, this module processes the token and authenticates the user. Because a token must be posted to the application, this is not something that can happen for each request the user makes, and this is where the SessionAuthenticationModule comes in. For subsequent requests in the same session, its job is to create a ClaimsPrincipal object from session data and assign it to HttpContext.User. It does this based on a cookie that contains the token.

NOTE: The modules configured by FedUtil are in the Microsoft.IdentityModel assembly. FedUtil does not add a reference to this assembly to the WebRole because it is not needed for any code; it expects the assembly to be in the Global Assembly Cache.

The <microsoft.identityModel> section tells WIF where to find the STS and for which domain (realm) it authenticates, which claim types are provided by the STS and used in the hosted service, and finally the thumbprint of the certificate the STS uses to sign tokens, so WIF can determine whether a token received from the STS is valid. This information is basically what FedUtil read from the Federation Metadata file of the STS.

Testing the Application

Now that everything is in place, you can run the hosted service and see what happens. When you click Run, the Web Role starts and opens in a browser window—no surprise there. However, instead of showing you the default page, the browser is automatically redirected to the STS created for you by FedUtil. The STS shows its login page so that you can log in to the STS itself, as passive federation requires. As mentioned earlier, the generated STS is simple, so you don’t need to enter a password; you can enter any username and press Submit.

When you log in, a token is created and posted back to the hosted service, but instead of seeing the home page, you see the exception in Figure 9-5.

The exception in Figure 9-5 is caused by the fact that the token posted to the hosted service is a piece of XML, which ASP.NET request validation rejects by default to guard against Cross-Site Scripting (XSS) attacks. You could solve this by disabling request validation completely, but because that poses a serious security risk, it is definitely not recommended. The better option, and the only one discussed further in this book, is to replace the default RequestValidator class with one that can distinguish a security token from other, potentially harmful, requests. This sounds complicated, but it isn’t. You can find such a class in the Quick Start samples of the WIF SDK, but you can also roll your own in just a few lines of code, as shown in Listing 9-4.

download.eps

Listing 9-4: RequestValidator Class Accepting a Federation Token

using System;
using System.Web;
using System.Web.Util;
using Microsoft.IdentityModel.Protocols.WSFederation;

public class WifRequestValidator : RequestValidator
{
    protected override bool IsValidRequestString(
        HttpContext context,
        string value,
        RequestValidationSource requestValidationSource,
        string collectionKey,
        out int validationFailureIndex)
    {
        validationFailureIndex = 0;

        // Check for passive federation in form (HTTP POST) values
        if (requestValidationSource == RequestValidationSource.Form)
        {
            // Check whether the value being validated is the
            // WS-Federation sign-in response (the wresult form field)
            if (collectionKey.Equals(
                             WSFederationConstants.Parameters.Result))
            {
                // Try to construct a SignInResponseMessage from the value
                var message =
                    WSFederationMessage.CreateFromFormPost(context.Request)
                        as SignInResponseMessage;
                if (message != null) return true;
            }
        }

        // Validation inconclusive, fall back on built-in method
        return base.IsValidRequestString(
            context,
            value,
            requestValidationSource,
            collectionKey,
            out validationFailureIndex);
    }
}

The main method of the RequestValidator class is IsValidRequestString, which you need to override. It is called for each value in the request collection to validate that value; if any value is not considered safe, the result is the exception in Figure 9-5. At the end of Listing 9-4 the base method is called, so as long as you don’t do anything stupid in the code before it, you can always fall back on the built-in validation. The purpose of the override in Listing 9-4 is to ensure that a security token posted to the hosted service is accepted as a valid request. This is done by checking the value of the form field used for passive federation, whose key is identified by the constant WSFederationConstants.Parameters.Result. From that value WIF should be able to create a SignInResponseMessage, and only if this succeeds does the overridden validator indicate that the value is valid.

To use the RequestValidator in Listing 9-4, you must register it with the HTTP runtime as the request validator of choice. To do this, add (or change) the httpRuntime element in web.config as follows:

<httpRuntime requestValidationType="WebRoleRP.WifRequestValidator,WebRoleRP"/>

At this point the code does not yet compile because the code in Listing 9-4 requires a reference to the Microsoft.IdentityModel assembly. After you add the reference you can run the hosted service again. You should be logged into the application, and the username you entered is shown in the top-right corner, as shown in Figure 9-6.

Extending the Local Development STS

The development STS produces two claims: a name claim and a role claim. The name claim corresponds to the username and is actually used as such in the ClaimsIdentity by default. The role claim indicates a role the user is part of. As explained earlier, role claims determine the result of the IsInRole method, and a token can contain multiple role claims. The development STS isn’t very useful yet, because the name claim is whatever you type as the username and there is only one fixed role. To make the STS useful for development purposes, it needs to be more flexible. The following steps show how to do that.

NOTE: The following steps assume that you’re already familiar with Visual Studio. If you’re not, please refer to http://bit.ly/learnvs.
1. Add a new WebForm page named FormsLogin.aspx to the WebRoleRP_STS.
2. From the Toolbox drag a Login-control onto the page, and then save the page.
3. Open web.config and change Login.aspx to FormsLogin.aspx in the <authentication> element, so it looks like the following code snippet.
<authentication mode="Forms">
  <forms loginUrl="FormsLogin.aspx"
         protection="All"
         timeout="30"
         name=".ASPXAUTH"
         path="/"
         requireSSL="false"
         slidingExpiration="true"
         defaultUrl="default.aspx"
         cookieless="UseDeviceProfile"
         enableCrossAppRedirects="false" />
</authentication>

code snippet 07_AuthenticationConfig.txt

4. Open CustomSecurityTokenService.cs in the App_Code folder.
5. Scroll all the way down to the GetOutputClaimsIdentity method.
6. Replace
outputIdentity.Claims.Add(new Claim(ClaimTypes.Role, "Manager"));
with the following code snippet:
// Add email claim with the member’s email.
var member = Membership.GetUser();
outputIdentity.Claims.Add(new Claim(ClaimTypes.Email, member.Email));

// Add a role claim for each role the member is in.

foreach (var role in Roles.GetRolesForUser())
{
    outputIdentity.Claims.Add(new Claim(ClaimTypes.Role, role));
}

code snippet 08_AddClaimToToken.txt

7. Click Website ⇒ ASP.NET Configuration.
8. Click the Security link or tab.
9. Click Create user.
10. Enter the required information to create a new user, and click the Create User button.
11. Go to the Security tab again, and click Enable roles; then click Create or Manage roles.
12. Add two roles.
13. For each role add the user you created to that role.
    a. Click Manage on the role you want to add the user to.
    b. Find the user.
    c. Check the User Is In Role check box.
    d. Click Back.

When you run the solution now, you’re presented with the login page you created, and you need to enter the username and password of the user you created. You are then authenticated with the application as before. The original unsecured login page that comes with the STS is now disabled and replaced with ASP.NET Membership. The ASP.NET Roles functionality is also enabled, so you can manage users as you would in any other ASP.NET application that uses Membership and Roles. This makes the STS much more suitable for testing your application. If you want it to be even more realistic, you can add more claims in the GetOutputClaimsIdentity method, for instance based on data you store in ASP.NET Profiles.
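For example, claims could be fed from ASP.NET Profiles. The following is a hedged sketch of what that might look like inside GetOutputClaimsIdentity; the profile property name DateOfBirth is an assumption for illustration, not something the STS template defines, and the snippet additionally requires using System.Web.Profile;.

```csharp
// Sketch: inside CustomSecurityTokenService.GetOutputClaimsIdentity,
// after the name, email, and role claims have been added.
// "DateOfBirth" is a hypothetical profile property; substitute one
// that your ASP.NET Profile actually defines.
var profile = ProfileBase.Create(principal.Identity.Name);
var dateOfBirth = profile.GetPropertyValue("DateOfBirth") as DateTime?;
if (dateOfBirth.HasValue)
{
    outputIdentity.Claims.Add(new Claim(
        ClaimTypes.DateOfBirth,
        dateOfBirth.Value.ToString("yyyy-MM-dd")));
}
```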

Alternatively, you can use Thinktecture IdentityServer, which you can download from http://identityserver.codeplex.com, where you can also find installation instructions. Another option is ADFS 2.0, which is a free add-on to Windows Server 2008 and up. If an STS is going to be deployed in the local production network, ADFS 2.0 is a likely candidate, so using it gets you closer to a live environment. The downside is that ADFS 2.0 requires Active Directory and a Windows domain, which won’t work on a Windows Vista or Windows 7 machine, so you need a development server or a virtual machine.

The previous exercise shows you that if you have an existing application using ASP.NET Membership and Roles, you can easily migrate it to WIF. All you need to do is add role claims to the token corresponding to the roles the user has in the application. Then you need to remove Membership and Roles from the application and hook the application up to the STS.

Using Claims Received from the STS

When you have an application you’re developing hooked up to an STS, you can start using the claims that you receive. For debugging purposes it is always good to see which claims you are actually getting, and it is easy to list these. A good way to do this is to add a GridView-control to the Master Page, so the claims are visible at the bottom of each page, like this:

<h2>Claims</h2>
<asp:GridView runat="server" ID="claimsGrid" />

In the Page_Load method of the Master Page, you can then add the following code to fill the GridView-control with the claims received:

// Requires: using System.Threading; using Microsoft.IdentityModel.Claims;
IClaimsPrincipal principal = Thread.CurrentPrincipal as IClaimsPrincipal;
if (principal != null)
{
    IClaimsIdentity identity = principal.Identities[0];
    claimsGrid.DataSource = identity.Claims;
    claimsGrid.DataBind();
}

code snippet 09_LoadClaimsInGridView.txt

When you now log in through the STS, you see the page in Figure 9-7. Because of the way the GridView-control is defined, all claim properties are shown, which gives a nice insight into what is actually in the token.

Armed with the knowledge of the claims you receive, you can use the IClaimsIdentity extension methods shown earlier in Listing 9-1 to use the claims in whichever way is suitable for your application.
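Listing 9-1 is not repeated here, but a typical helper along those lines looks roughly like the following sketch (the method name GetClaimValue is illustrative; the names in Listing 9-1 may differ):

```csharp
using System.Linq;
using Microsoft.IdentityModel.Claims;

public static class ClaimsIdentityExtensions
{
    // Returns the value of the first claim of the given type,
    // or null when the token did not contain such a claim.
    public static string GetClaimValue(
        this IClaimsIdentity identity, string claimType)
    {
        return identity.Claims
            .Where(c => c.ClaimType == claimType)
            .Select(c => c.Value)
            .FirstOrDefault();
    }
}

// Usage, e.g. in Page_Load:
// var email = identity.GetClaimValue(ClaimTypes.Email);
```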

Creating a Claims-Aware Web Service

By now you should be fairly familiar with WIF and the concepts around it. If you develop only web applications, what you’ve learned so far may actually be all you need. Chances are, though, that you will also work with WCF web services at some point, and that’s more complicated. The security bar is raised quite a bit higher for web services, which means the quick-and-dirty solution of having FedUtil create an STS will not work.

Creating Certificates

Because the security requirements are higher, you need a secure connection on both the web service and the STS. To host an application, including an STS, under SSL you need certificates. Certificates are also used for signing and encrypting tokens, and although you could do so with the same certificate used for a secure connection, it is good practice to separate these.

You don’t need to understand the finer points of certificate signing and encryption; suffice it to say that a token is signed with the private key of a certificate and checked by the receiving party with the public key. Conversely, tokens are encrypted with the public key of a certificate and decrypted by the receiving party with the private key. Effectively, this means that the STS requires a certificate with a private key to sign tokens and needs the public key of the RP’s certificate to encrypt the token. The RP needs the public key of the signing certificate to check the signature, and a certificate with a private key for decryption.
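This asymmetry is easy to demonstrate with the plain .NET crypto APIs, independent of WIF and certificates. The following standalone sketch signs some data with a private key and verifies it using only the exported public key (SHA-1 is used here only because the default CSP supports it everywhere; a production STS would sign with SHA-256):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class SigningDemo
{
    static void Main()
    {
        byte[] token = Encoding.UTF8.GetBytes("sample token");

        // The "STS" holds the full key pair and signs with the private key.
        using (var stsKey = new RSACryptoServiceProvider(2048))
        {
            byte[] signature = stsKey.SignData(token, "SHA1");

            // The "RP" receives only the public half of the key...
            string publicOnly = stsKey.ToXmlString(false);
            using (var rpKey = new RSACryptoServiceProvider())
            {
                rpKey.FromXmlString(publicOnly);

                // ...which is enough to check the signature...
                Console.WriteLine(rpKey.VerifyData(token, "SHA1", signature));

                // ...and to detect tampering with the signed data.
                token[0] ^= 1;
                Console.WriteLine(rpKey.VerifyData(token, "SHA1", signature));
            }
        }
    }
}
```

In the real exchange the keys come from the certificates listed next, but the division of labor is exactly this: the private key stays with the signer, and the public key is all the verifier needs.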

If you choose to use separate signing and encryption certificates, you need the following certificates:

  • SSL certificate for the STS
  • SSL certificate for the RP
  • Certificate used by the STS to sign tokens
  • Certificate used by the STS to encrypt tokens and by the RP to decrypt tokens

To start, you can create these certificates locally for development purposes. Doing this right is somewhat of a hassle because you need to ensure the certificates are located in the right certificate store and the private key is accessible to IIS, which by default isn’t the case.

For your development environment the Windows Azure SDK creates an SSL certificate for the address 127.0.0.1 when you run an HTTPS endpoint for the first time. This covers the SSL certificate needed for the RP. The WIF SDK already created a certificate called STSTestCert when you used the Visual Studio STS project template. That’s the certificate used by default to sign tokens.

NOTE: FederationMetadata.xml is signed and contains information about the signing certificate. If you want to change the signing certificate, you need to regenerate FederationMetadata.xml. The same is true if you want to add more claim types. Shawn Cicoria has created an editor you can use for this, which you can download from http://bit.ly/fedmetadatagen.

Creating the SSL Certificate

For development purposes you still need only an SSL certificate for the STS, which can run under localhost, and an encryption certificate for the RP. You can give the latter certificate any name you want, but it makes sense to let the name reflect its purpose, for instance RpEncrypt. Use the following steps to create the certificates.

1. Open the Visual Studio command prompt or the Windows SDK command prompt as Administrator. If you use the Windows SDK command prompt, change directory to C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin.
2. Type makecert -a sha256 -n CN=localhost -pe -r -sky exchange -ss My -sr LocalMachine.
3. Repeat step 2, replacing localhost with RpEncrypt.

Allowing the Compute Emulator and IIS to Access the Private Key

The certificates are now created, but the Windows Azure Compute Emulator and IIS can’t access the private key yet, and this is necessary for everything to work. You can grant access with the Microsoft Management Console (MMC).

1. Click the Start button, and type mmc and press enter.
2. Click File ⇒ Add/Remove Snap-In.
3. Select the Certificates Snap-in and click Add.
4. Select Computer account.
5. Leave Local computer selected, and click Finish and then OK.
6. In the left pane, expand the tree until you see Console Root ⇒ Certificates (Local Computer) ⇒ Personal ⇒ Certificates. In the middle pane the certificate you created should be listed, as shown in Figure 9-8.
7. Right-click the RpEncrypt certificate, and select All Tasks ⇒ Manage Private Keys.
8. Click Add.
9. Enter IIS_IUSRS in the textbox if you use Windows 7 or Windows Server 2008 R2; otherwise type NETWORK SERVICE. Then click Check Names.
10. If all is well, the text you entered will change into a user account. Click OK, followed by OK again.
11. Repeat steps 7 through 10 for the localhost certificate.

Exporting the Certificates

The certificates are now created and have the right authorizations set. However, they are not in the right certificate store yet. The localhost certificate should also be in the Trusted Root section of the local computer, so when you hit localhost with a browser, you won’t get any security warnings. You can do this by copying the certificate to the other store. For both the STS and the RP to work correctly with the encryption certificate, it needs to be in the Personal store of the current user with the private key and in the Trusted People store of the current user without the private key. The best way to do this is to export and import the certificate with the MMC. In the process you create a certificate file you need later.

1. Right-click the RpEncrypt certificate, and select All Tasks ⇒ Export.
2. Click Next.
3. Select Yes, Export the Private Key, and click Next.
4. Click Next.
5. Type a password for the certificate (twice), and click Next.
6. Enter the filename and location, and click Next and then Finish.
7. Repeat the export, this time selecting No, Do Not Export the Private Key. This produces a *.cer file.

Importing the Certificates

You now have a *.pfx file of the certificate with the private key and a *.cer file without the private key. You can now import the exported certificate to the Personal certificate store of the current user using MMC.

1. Add Certificates to the selected snap-ins.
2. Select My User Account.
3. Click Finish, followed by OK.
4. In the left pane, expand the tree until you see Console Root ⇒ Certificates - Current User ⇒ Personal.
5. Right-click the Personal folder, and select All Tasks ⇒ Import.
6. Click Next.
7. Change the file select filter to *.pfx so that you can select the certificate with the private key.
8. Click Next.
9. Enter the password and click Next.
10. Click Next and then Finish.
11. Repeat the import process for the *.cer certificate, and place it in Console Root ⇒ Certificates - Current User ⇒ Trusted People.

Creating a Local Development STS for WCF

WCF requires an STS to run under HTTPS. This means you need to host the STS under IIS or IIS Express, because the web server built into Visual Studio doesn’t support HTTPS. For this reason it is a good idea to build the local development STS first when working with WCF, instead of having FedUtil generate one. To get the STS working under HTTPS, you first need to enable HTTPS for the Default Web Site in IIS.

1. Open the Internet Information Services (IIS) Manager by clicking the Start button and typing iis and pressing enter.
2. In the left pane, expand the tree to select Default Web Site.
3. In the right pane click Bindings.
4. Click Add.
5. In the Add Site Binding dialog, select https as type. This automatically switches the port to the HTTPS port 443.
6. From the SSL Certificate drop-down, select the localhost certificate, as shown in Figure 9-9.
7. Click OK to close the dialog and Close to finish.
8. Verify by typing https://localhost in the browser. It should show the IIS welcome screen.

With localhost running under HTTPS, you can now add the STS to it. Because Visual Studio needs to add the project to IIS, you need to run it as Administrator.

1. Go to Start and find Visual Studio or Visual Web Developer. Right-click it, and select Run as Administrator.
2. Click File ⇒ New Web Site.
3. Select Visual C# as the programming language.
4. Select WCF Security Token Service, and click the Browse button next to the Web location textbox.
5. In the left bar of the dialog, select Local IIS.
6. Select the Default Web Site folder in the tree on the right, and click the Create New Application button, which you can find on the right top of the screen, as you can see in Figure 9-10.
7. Name the new application LocalSts and check Use Secure Sockets Layer.
8. Click Open and then OK to create the STS.

You have now created the STS, but there is still some work to do. At this point the STS uses some default certificates for signing and encryption, but you created specific certificates for that purpose. Because you use these certificates in Windows Azure later, you must replace the default certificates with the ones you created. The certificates are controlled by two application-settings elements in web.config. Find the following configuration:

<appSettings>
  <add key="IssuerName" value="ActiveSTS"/>
  <add key="SigningCertificateName" value="STSTestCert"/>
  <add key="EncryptingCertificateName" value="CN=DefaultApplicationCertificate"/>
</appSettings>

and replace it with this:

<appSettings>
  <add key="IssuerName" value="ActiveSTS"/>
  <add key="SigningCertificateName" value="CN=StsSign"/>
  <add key="EncryptingCertificateName" value="CN=RpEncrypt"/>
</appSettings>

The last thing you need to do is ensure that the service metadata is requested over HTTPS. To do this, use the mexHttpsBinding instead of the mexHttpBinding for the metadata exchange endpoint, and set httpsGetEnabled to true on the serviceMetadata element of the service behavior. These settings are highlighted in the (abbreviated) service model configuration in web.config.

<system.serviceModel>
  <services>
    <service name="Microsoft.IdentityModel.Protocols.WSTrust.
WSTrustServiceContract" behaviorConfiguration="ServiceBehavior">
      <!--omitted for brevity-->
      <endpoint address="mex" binding="mexHttpsBinding"
                contract="IMetadataExchange" />
    </service>
  </services>
  <bindings>
    <!--omitted for brevity-->
  </bindings>
  <behaviors>
    <serviceBehaviors>
      <behavior name="ServiceBehavior">
        <!--omitted for brevity-->
        <serviceMetadata httpsGetEnabled="true" />
        <!--omitted for brevity-->
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>

code snippet 10_ServiceModelConfiguration

You can’t verify yet that the STS actually works, but you can open the web service in the browser, which should show you information about the SecurityTokenService service.

Creating a Web Service and Client

When creating web services, you should first ensure that the service works. Therefore, you should first create the service and a client that talks to it before trying to secure it with WIF.

1. Start Visual Studio or Visual Web Developer as Administrator.
2. Create a new Windows Azure project named WifActiveFed.
3. Add a WCF Service Web Role.
4. Rename WCFServiceWebRole1 to WcfWebRoleRP to indicate you will be building a WCF relying party.
5. In the Solution Explorer double-click the WcfWebRoleRP role configuration to show the configuration properties.
6. Uncheck the HTTP endpoint check box under Launch Browser For, so the browser won’t start when you run the service.
7. On the left select the Endpoints tab.
8. Change the port number of Endpoint1 to port 8080, so it can’t collide with IIS, and click the Save button.

Getting all the parts working with WIF is a challenge, so for now you don’t need to bother with the functionality. Just leave the default Service1 as it is and continue with creating a client project.

1. Start the web service without debugging with Ctrl+F5.
2. Start a new instance of Visual Studio or start Visual C# Express.
3. Create a new Console Application named WcfActiveFedClient.
4. In the Solution Explorer right-click the project, and select Add Service Reference.
5. In the address textbox enter the address of the web service, which is http://127.0.0.1:8080/Service1.svc, and click the Go button.
6. The dialog should now show you Service1. Click OK to add the reference to the project.
7. Put the code in the following snippet in the Main method:
while (true)
{
    Console.Write("Enter a value: ");
    var input = Console.ReadLine();
    if (string.IsNullOrEmpty(input)) break;
    int value;
    if (int.TryParse(input, out value))
    {
        var client = new ServiceReference1.Service1Client();
        Console.WriteLine(client.GetData(value));
    }
}

code snippet 11_LoopWebServiceCall.txt

8. Save the file and run the client.
9. Enter an integer value to test whether the client works correctly with the service.

Connecting the Web Service to the STS

Now that you have the web service and client working, you can hook them up to the STS. This sounds easy because it is done with FedUtil, but there is more to it than that. First, you need to ensure the RP runs under a secure SSL connection, so that all interaction between client, STS, and service is secure.

1. In the Solution Explorer double-click the WcfWebRoleRP role configuration.
2. Go to the Endpoints tab and click Add Endpoint.
3. You can give any name to the endpoint, but it helps to give it a name that indicates what it is, such as HttpsEndpoint.
4. Switch the protocol to HTTPS.
5. Set the Public Port to 8443, so it doesn’t interfere with SSL port 443 on IIS.
6. For now you don’t need to select a certificate, in which case the DevFabric uses the built-in certificate for IP 127.0.0.1, so the screen looks like Figure 9-11.
7. Go to the Configuration tab, and notice that the HTTPS endpoint check box is checked to launch it in the browser on startup. You can uncheck this, but for verification purposes it can be handy.
8. Save the configuration.
9. Verify that the service runs under SSL by pressing Ctrl+F5 to start without debugging.
10. When the browser opens, change the address to https://127.0.0.1:8443/Service1.svc. After ignoring the certificate warning, you should see the service page.

To avoid the certificate warning, you need to make the built-in DevFabric certificate trusted. You can do this using the MMC.

1. In the left pane of the MMC, expand the tree until you see Console Root ⇒ Certificates (Local Computer) ⇒ Personal ⇒ Certificates.
2. Select the localhost certificate and copy it.
3. In the left pane, expand the tree until you see Console Root ⇒ Certificates (Local Computer) ⇒ Trusted Root Certification Authorities ⇒ Certificates.
4. Right-click the Certificates folder, and select Paste.

Now you’re almost ready to fire up FedUtil. To do its job, FedUtil needs a configured endpoint in web.config to know which service it’s going to secure. However, in .NET Framework 4.0 WCF services don’t need an endpoint configuration because this is handled at machine level for default bindings. So before running FedUtil you need to add a temporary endpoint configuration.

1. Open web.config of WcfWebRoleRP.
2. Add the following endpoint configuration to the <system.serviceModel>.
<services>
  <service name="WcfWebRoleRP.Service1">
    <endpoint address=""
              binding="basicHttpBinding"
              contract="WcfWebRoleRP.IService1" />
  </service>
</services>

code snippet 12_EndpointConfig.txt

3. Save web.config.
4. Run FedUtil as administrator.
5. Point FedUtil to the web.config of the web service.
6. In the Application URI enter the URL of the application as it will run in production, so https://[YourAppName].cloudapp.net.
7. Click Next to show the screen with Application Information. This is different from passive federation because it shows the service(s) and contract(s) you can secure. This is the endpoint you added in step 2.
8. Click Next to show the STS selection screen. This time select Use an Existing STS.
9. In the textbox you need to enter the location of the FederationMetadata.xml file that FedUtil needs to configure the service. You can open the LocalSts project and browse the file to get the correct URL. The dialog should look like Figure 9-12.
10. Click Next to show the dialog to set encryption. Select Enable encryption and then choose an existing certificate, which you can select with Select Certificate.
11. Select the RpEncrypt certificate as in Figure 9-13, and click OK.
12. Click Next to show the claims offered by the STS.
13. Click Next again to show the summary, and click Finish.

Compared to passive federation, the changes FedUtil made to the <Microsoft.identityModel> section of the web.config are minor. It has set the audience URIs and some information about the STS in the <issuerNameRegistry> element. The most important difference is that the <service> element now contains a name attribute, with the name of the web service it applies to. This also explains why compared to Listing 9-3 this section contains so few elements. This information is now part of the <system.serviceModel> section, which has been modified extensively. The endpoint has been changed to a ws2007FederationHttpBinding, and a binding configuration has been added. This in itself doesn’t have anything to do with WIF because this binding predates WIF. For this reason WIF works together with WCF through a behavior extension. This extension is added as the federatedServiceHostConfiguration behavior extension and then configured in the behaviors for the service it applies to. This is highlighted in the following configuration snippet:

<behaviors>
  <serviceBehaviors>
    <behavior>
      <federatedServiceHostConfiguration
          name="WcfWebRoleRP.Service1" />
      <serviceMetadata httpGetEnabled="true" />
      <serviceDebug includeExceptionDetailInFaults="false" />
      <serviceCredentials>
        <!-- Certificate added by FedUtil.
             Subject='CN=RpEncrypt', Issuer='CN=RpEncrypt'. -->
        <serviceCertificate
            findValue="149A6F02DC3D60CC312CD009A188229303DC63FA"
            storeLocation="LocalMachine" storeName="My"
            x509FindType="FindByThumbprint" />
      </serviceCredentials>
    </behavior>
  </serviceBehaviors>
</behaviors>
<extensions>
  <behaviorExtensions>
    <add name="federatedServiceHostConfiguration"
         type="[omitted for brevity]" />
  </behaviorExtensions>
</extensions>

code snippet 13_WifBehaviorConfig.txt

Apart from the highlighted elements, FedUtil has added a reference to the encryption certificate in the <serviceCredentials> element. To ensure that the web service can find the certificate, you need to add it to the WebRole configuration.

1. In the Solution Explorer double-click the WcfWebRoleRP role configuration.
2. Go to the Certificates tab, and click Add Certificate.
3. You can give any name to the certificate configuration, but it makes sense to give it the name of the certificate itself, in this case RpEncrypt.
4. Go to the Thumbprint column, and click the Ellipsis button to select the correct certificate.
5. Select the RpEncrypt certificate, and click OK.
6. Save the configuration.

With the binding configuration set by FedUtil, you’d expect that now everything would work fine, and all you’d have to do is update the service client to use the right binding. This unfortunately is not the case, so you still need to make some changes to the configuration. These changes exercise a little more control over the interaction between the client, STS, and service, which you need later. To achieve this you must replace the ws2007FederationHttpBinding with the custom binding in Listing 9-5. When you do this, don’t forget to replace the highlighted Address with that of your application in production.


Listing 9-5: Custom Binding for Identity Federation

<customBinding>
  <binding name="WifActiveFedBinding">
    <security authenticationMode="SecureConversation"
              messageSecurityVersion="WSSecurity11WSTrust13WSSecureConversation13WSSecurityPolicy12BasicSecurityProfile10"
              requireSecurityContextCancellation="false">
      <secureConversationBootstrap
          authenticationMode="IssuedTokenOverTransport"
          messageSecurityVersion="WSSecurity11WSTrust13WSSecureConversation13WSSecurityPolicy12BasicSecurityProfile10">
        <issuedTokenParameters>
          <additionalRequestParameters>
            <AppliesTo xmlns="http://schemas.xmlsoap.org/ws/2004/09/policy">
              <EndpointReference xmlns="http://www.w3.org/2005/08/addressing">
                <Address>https://[YourAppName].cloudapp.net/</Address>
              </EndpointReference>
            </AppliesTo>
          </additionalRequestParameters>
          <claimTypeRequirements>
            <add claimType="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name"
                 isOptional="true" />
            <add claimType="http://schemas.microsoft.com/ws/2008/06/identity/claims/role"
                 isOptional="true" />
          </claimTypeRequirements>
          <issuerMetadata
              address="https://localhost/LocalSTS/Service.svc/mex" />
        </issuedTokenParameters>
      </secureConversationBootstrap>
    </security>
    <httpsTransport />
  </binding>
</customBinding>

After you’ve added the custom binding, you need to make some more minor modifications to the configuration. The following steps take you through these changes:

1. Open web.config.
2. Give the behavior added by FedUtil a name, for instance WifActiveFedBehavior.
3. Add the behavior configuration to the service configuration.
4. Tie the endpoint to the new custom binding called WifActiveFedBinding in Listing 9-5. To ensure the endpoint works regardless of the address, remove the URL FedUtil placed in the address property.
5. Add a meta data exchange endpoint to the service, and ensure it runs under SSL by setting the mexHttpsBinding instead of the mexHttpBinding.
6. To complement the meta data exchange binding over SSL, find the behavior of the serviceMetadata element, and rename httpGetEnabled to httpsGetEnabled.

After you take the preceding steps, the service configuration should look like the following snippet:

<service name="WcfWebRoleRP.Service1"
         behaviorConfiguration="WifActiveFedBehavior">
  <endpoint address=""
            binding="customBinding"
            contract="WcfWebRoleRP.IService1"
            bindingConfiguration="WifActiveFedBinding" />
  <endpoint address="mex"
            binding="mexHttpsBinding"
            contract="IMetadataExchange" />
</service>

code snippet 14_ModifiedWifConfig.txt

The last thing you have to change is some WIF configuration. When WIF receives a token, it checks to see if the token actually applies to it. For this reason the STS adds information to the token, indicating the intended recipient. On the receiving end you must set for which URIs you accept tokens, which you do in the <audienceUris> element of the WIF service definition. Right now the only URI in there is the URI you entered in FedUtil, which is the URI of the production environment. To make it work on your local machine, you need to add the URI of the local service as highlighted in the following configuration.

<microsoft.identityModel>
  <service name="WcfWebRoleRP.Service1">
    <audienceUris>
      <add value="https://[YourAppName].cloudapp.net/" />
      <add value="https://127.0.0.1:8443/Service1.svc" />
    </audienceUris>
    <!-- Omitted for brevity -->
  </service>
</microsoft.identityModel>

code snippet 15_WifAudienceUriConfig.txt

In the preceding sample, the full URI to the service has been added instead of the root URI. This is a temporary workaround for an issue with the STS, which is discussed in more detail in the section "Deploying and Troubleshooting" later in the chapter.

Updating the Client

Now that the service has been meticulously tweaked, updating the client is easy. All you need to do is reconfigure the service reference to point to the right location. When you do this in Visual Studio, the service reference is automatically updated.

1. Make sure the latest service deploys by pressing Ctrl+F5 in the WifActiveFed solution.
2. Open the client project in Visual Studio or Visual C# Express.
3. In the Solution Explorer, open the Service References subfolder to reveal ServiceReference1.
4. Right-click ServiceReference1, and select Configure Service Reference.
5. In the address textbox change the address to the new secure URL https://127.0.0.1:8443/Service1.svc and click OK.

After following the preceding steps, the client should run without any changes to the code or config and get a result. Be aware that the current STS configuration works based on your Windows identity as provided by the client to the STS. You can verify this by changing the code of the GetData service method to return the username of the logged on user.

IPrincipal principal = Thread.CurrentPrincipal;
return string.Format("Hello {0}. You entered: {1}",
                     principal.Identity.Name, value);

The fact that authentication works based on the Windows identity doesn’t actually matter, even if you use an STS in production that utilizes another authentication method. This is because you’ve effectively outsourced the authentication method, so the service works regardless of which authentication type you use. You have to change only the binding to the STS and have the service accept the signing certificate of the STS. On the client, you also need to change the binding to supply another type of user credentials. This is discussed in Chapter 14 where you learn about the AppFabric Access Control Service.

Deploying and Troubleshooting

As you've seen so far, setting up a website with passive federation is a walk in the park compared to working with active federation. When it comes to deployment and troubleshooting, it's much the same story. Because you have a user interface when working with a website, it's much easier to get to exceptions and trace them. Deploying and troubleshooting a WCF service on Windows Azure is a different matter: you can't just deploy and watch for errors, and debugging is much harder because you can't just throw in trace statements. For this reason the focus of this section is on getting the WCF service to work, although some of it applies to the website as well.

Getting WIF to Work on Windows Azure

Although the Azure Compute emulator is quite good at emulating the Windows Azure environment, there are some key differences with the actual staging and production environments in Windows Azure. The staging environment may be physically the same as production, but there are some subtle differences that are especially important when using WIF.

The key aspect of much of the trouble you can run into with WIF is that you don’t have access to the file system of Windows Azure instances. As explained in the previous chapter, this is important to ensure scalability, but it does have some nasty side effects when it comes to WIF.

Working with Certificates

Before you deploy, you need to set up the environment properly. On top of what you learned in Chapter 8, you also need to install the needed certificates.

Up until now you've used certificates you created yourself. Although this works fine in development, there are some serious drawbacks when using these in a live environment. The problem is that your local machine is not a trusted issuer of certificates, so when your certificate is validated by a browser or other client, the least that can happen is that you get a security warning. Earlier you avoided this by making the certificates trusted and disabling certificate chain validation, and even then you may have had to ignore a warning here and there. This doesn't work in a live environment. You can't ask users to ignore the security warnings because that defeats the purpose of having them in the first place. And users definitely should not be asked to put certificates in the Trusted People store. For this reason you should get the needed certificates from a trusted root such as COMODO, GeoTrust, or VeriSign.

Working with certificates from a trusted root is also good practice for signing and encryption, but there the need is smaller. Normally, a certificate is checked including the chain back to the trusted root. This is to ensure that if the root is no longer trusted, certificates issued by that root are automatically invalidated.

With self-signed certificates the check back to the trusted root obviously doesn’t work. However if you manage trust between an STS and an RP under your control, using a self-signed certificate is fine. In that case you don’t need the check back to the trusted root.

In your development environment, you simulated the certificate chain by copying certificates to the Trusted Root and Trusted People folders. In Windows Azure you can’t do this, so you need to disable the check. You can do so by adding the highlighted configuration to the following WIF service configuration.

<microsoft.identityModel>
  <service name="WcfWebRoleRP.Service1">
    <!-- Omitted for brevity -->
    <certificateValidation certificateValidationMode="None" />
  </service>
</microsoft.identityModel>

code snippet 16_DisableCertChainValidation.txt

Installing Certificates

When you have the certificates, trusted or otherwise, you must install these in Windows Azure. Because you don’t have access to the file system, and can’t install them in the instance directly, Windows Azure provides you with an external certificate store. From there certificates are copied to any instance created. This does mean, however, that you have no control over the actual certificate store these certificates are installed in, so you can’t go around and move a certificate to the Trusted Root, for instance. That said, the certificates are of course installed in a store accessible from your application.

To install certificates into Windows Azure, take the following steps:

1. Log in to the Windows Azure Management Portal.
2. Select Hosted Services, Storage Accounts & CDN.
3. Select the Hosted Services folder to see your subscription with all the hosted services underneath in the middle section of the screen.
4. Right-click the Certificates folder of the hosted service you want to add a certificate to, and click Add Certificate.
5. Select the certificate file of the certificate you want to upload to Windows Azure. You can upload *.pfx files only.
6. Enter the certificate password, and click OK.

You should upload the certificate used to create a secure connection and the encryption certificate of the RP.

Installing the STS Certificate

When you installed the HTTPS and RP encryption certificates, you installed them with the private key. However, to check the STS signature of a token only the certificate itself is needed. Having the private key of the STS certificate actually doesn't make sense, and when making use of an STS that is not under your control, you don't even have the private key. This means you should, and in most cases can only, upload the certificate without the private key. The problem is that you can only upload a *.pfx file to Windows Azure, and by default this is a certificate with the private key. If you can obtain a *.pfx certificate without a private key from whoever manages the STS, you're fine. Otherwise you'll most likely get a *.cer file, which means you need to convert the certificate. Because the MMC can export a certificate without its private key only as a *.cer file, you have the same problem with the certificate you've been using so far. Fortunately, you can convert a certificate with just a few lines of code. The method in Listing 9-6 shows you how.


Listing 9-6: Method to Convert a Certificate to *.pfx

public void ConvertCert(string fullPath, string password)
{
    var dir = Path.GetDirectoryName(fullPath);
    var file = Path.GetFileNameWithoutExtension(fullPath);
    var cert = new X509Certificate(fullPath);
    var certBytes = cert.Export(X509ContentType.Pfx, password);
    File.WriteAllBytes(Path.Combine(dir, file + ".pfx"), certBytes);
}
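Calling the method is straightforward. The class name, path, and password below are made-up examples for illustration, not values from this chapter:

```csharp
// Hypothetical usage of Listing 9-6; CertConverter stands in for whatever
// class you put ConvertCert in, and the path/password are examples only.
var converter = new CertConverter();
converter.ConvertCert(@"C:\certs\LocalStsSign.cer", "SomePassword");
// Writes C:\certs\LocalStsSign.pfx next to the original .cer file,
// ready for upload through the Windows Azure Management Portal.
```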

Configuring the SSL Certificate

In the development environment you used the DevFabric certificate for 127.0.0.1, which was automatically tied to the secure endpoint, because you didn’t select a certificate. However, in the production environment you use an actual SSL certificate, which you installed a little earlier. You need to configure this certificate in the WebRole as well.

1. In the Solution Explorer double-click the WcfWebRoleRP configuration.
2. Select the Certificates tab.
3. Click Add Certificate.
4. Enter a name for the certificate. Although any name can suffice, it makes sense to give the certificate the name of the domain it will secure, so use [YourAppName].cloudapp.net.
5. Go to the thumbprint textbox to enter the thumbprint of the certificate. You can do this in one of two ways:
  • If you installed the certificate on your development machine, you can click the Ellipsis button to select the certificate.
  • Otherwise you can double-click the certificate file from Windows Explorer and copy the thumbprint from the Details tab.
6. Save the configuration.

Deployment

You can deploy your application the same way as discussed in Chapter 8 but with one proviso: You need to ensure the Microsoft.IdentityModel assembly is copied along with your application. Because this assembly is not part of the .NET Framework, it is also not installed on Windows Azure. However, on your local machine it’s in the Global Assembly Cache, so by default it will not be deployed. In the current WcfWebRoleRP the assembly isn’t even referenced because it is used only in configuration. To ensure the assembly gets copied along with the project, take the following steps:

1. In WcfWebRoleRP reference Microsoft.IdentityModel.
  a. In the Solution Explorer right-click the References folder, and select Add Reference.
  b. Switch to the .NET tab.
  c. Select the Microsoft.IdentityModel assembly, and click OK.
2. In the References folder find the added assembly, and select it.
3. In the Properties window, set the Copy Local property to True.
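The result in the project file looks roughly like the following fragment; the Copy Local property in Visual Studio maps to the <Private> element in MSBuild:

```xml
<!-- WcfWebRoleRP.csproj (fragment, illustrative) -->
<Reference Include="Microsoft.IdentityModel">
  <Private>True</Private>
</Reference>
```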

As discussed previously, until you are ready to go to production, it makes sense to turn off custom errors in web.config so that it is easier to see configuration errors. And speaking of configuration, it makes sense to change the port numbers you set earlier in the WebRole configuration to use the default http and https ports, instead of 8080 and 8443.

Dealing with URI Issues

If you deploy your application to the staging environment now, it will run; or at least it will not give any errors when you open the service page. The client you built will not yet work, even if you point it at the endpoint in the staging environment. The problem is that, unlike the local development environment and the production environment, the staging environment URI is random. The staging URI looks something like this:

https://4ab5ac2001324585ba5a902f4242a98c.cloudapp.net/

This URL changes every time you deploy, and this causes a chicken-and-egg problem. Normally, you would just go into web.config and add another audience URI, as you did earlier for the local development environment. But when deployed, you can't change the web.config in the running instances, and if you add the URI to the web.config, you have to redeploy, which changes the URI again. The easiest way to solve this is to add mode="Never" to the <audienceUris> element, which disables the audience URI check altogether. For staging this may be acceptable, but it would be wise to alter it before going to production, which defeats the purpose of a staging environment somewhat: you wouldn't be able to do a virtual IP switch deployment to production in that case. If you don't need to deploy that way, you're in the clear. Another option is to modify the STS so the token it sends back actually uses a URI that is already in the <audienceUris> element. This will not work with a product such as ADFS 2.0, but with the local development STS this is not an issue. In the same manner you can also solve the problem discussed earlier that the local development STS uses the complete URI of the service instead of just the root URI.

1. In the LocalSts project open CustomSecurityTokenService.cs in the App_Code folder.
2. Find the GetScope method.
3. In the GetScope method there’s a constructor to create the scope object. The first parameter is the URI to which the token will apply, and as you can see it takes the original URI. Replace that parameter with "[YourAppName].cloudapp.net".
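The modification in step 3 can be sketched as follows. The member names are based on the WIF SDK sample STS that local development STS projects are generated from, so treat the exact signature and surrounding code as assumptions; the key point is replacing the AppliesTo URI with the fixed production root URI.

```csharp
// Sketch of the modified GetScope in CustomSecurityTokenService.cs.
// Names follow the WIF SDK sample STS; your generated code may differ.
protected override Scope GetScope(IClaimsPrincipal principal,
                                  RequestSecurityToken request)
{
    // Originally the first parameter was the requested URI, e.g.
    // request.AppliesTo.Uri.OriginalString. Pinning it to the production
    // root URI means the token's audience always matches an entry that is
    // already in the RP's <audienceUris> list.
    Scope scope = new Scope(
        "https://[YourAppName].cloudapp.net/",
        SecurityTokenServiceConfiguration.SigningCredentials);

    // ... remaining scope setup (encryption credentials, and so on)
    //     stays as generated.
    return scope;
}
```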

You can modify the last step to be a method that determines the correct URI based on a translation table, configuration, or whatever other solution you can think of. Now you can modify the client configuration to point to the correct URI. You can find the root URI in the Windows Azure Management Portal, as discussed in Chapter 8.

Scaling Up

As long as you work with one instance, chances are that everything will work just fine. But Windows Azure wasn't made to scale for nothing. Your application is hosted in an environment in which you can add instances as needed. This is possible because all instances share the same configuration and code. There's a catch though, and it has to do with the security session. In a load-balanced environment you need some way to ensure that it doesn't matter which instance a request hits. In ASP.NET this is done with a cookie, but with the default binding FedUtil uses, it is not. That's an important reason why earlier you had to switch to the custom binding in Listing 9-5. On the <security> element of the binding, the requireSecurityContextCancellation property is set to false. You wouldn't think so from the name, but this actually switches WCF to cookie mode. Using cookies is only half the story though, because the cookies are encrypted. By default this is done using the machine key, a unique key for every instance, and the Windows Data Protection API, also known as DPAPI. Because the machine key is different on every instance, the cookie can be decrypted only on the instance that created it. This means that the other instances in the load-balanced environment cannot decrypt the cookie and use it, which would cause the client to be sent back to the STS for a new token, after which a new cookie is set in place of the old one.

The best way to solve this problem is by using a certificate for the encryption instead of the machine key. You can do this with a custom SessionSecurityTokenHandler, and this works for both a website and a web service. The way you set it up is different though.

Setting Up Cookie Handling in a Website

To set up the SessionSecurityTokenHandler in a website, you need to override the service configuration, which you can do by hooking into the ServiceConfigurationCreated event of the FederatedAuthentication class. Listing 9-7 shows what the event handler should look like. It adds three cookie transforms: one to deflate the cookie so it is compressed, one to encrypt the cookie using the service certificate, and one to sign the cookie. You could, of course, use different certificates to encrypt and sign, but the service certificate is easily available.


Listing 9-7: Replacing the SessionSecurityTokenHandler

void OnWifSvcConfigurationCreated(object sender,
         ServiceConfigurationCreatedEventArgs e)
{
    var certificate = e.ServiceConfiguration.ServiceCertificate;
    var transforms = new List<CookieTransform>(
         new CookieTransform[] {
             new DeflateCookieTransform(),
             new RsaEncryptionCookieTransform(certificate),
             new RsaSignatureCookieTransform(certificate) });
    var handler = new SessionSecurityTokenHandler(transforms.AsReadOnly());
    e.ServiceConfiguration.SecurityTokenHandlers.AddOrReplace(handler);
}

To use the preceding event handler, you still need to wire it up. You do this in the Application_Start event handler in global.asax. The following line of code does the trick:

FederatedAuthentication.ServiceConfigurationCreated +=
    OnWifSvcConfigurationCreated;

Setting Up Cookie Handling in a WCF Service

To set up the SessionSecurityTokenHandler on a WCF service, you need to create a custom handler, which uses the same cookie transforms used for websites. When you create the new handler, you need to inherit from SessionSecurityTokenHandler. In the constructor, you need to hook up the cookie transforms. Listing 9-8 shows you how.


Listing 9-8: RsaSessionSecurityHandler Constructor

public RsaSessionSecurityTokenHandler(X509Certificate2 certificate)
{
    var transforms = new List<CookieTransform>(
         new CookieTransform[] {
             new DeflateCookieTransform(),
             new RsaEncryptionCookieTransform(certificate),
             new RsaSignatureCookieTransform(certificate) });
    this.SetTransforms(transforms);
}

You also need to override the ValidateToken method to ensure that the incoming token is intended for the endpoint the request was sent to. This is needed because a cookie will be sent along with any request if the client thinks the cookie applies to the request being made. Listing 9-9 shows the overridden ValidateToken method. It basically checks the URI embedded in the token with the URI of the endpoint.


Listing 9-9: RsaSessionSecurityHandler.ValidateToken method

public override ClaimsIdentityCollection ValidateToken(
    SessionSecurityToken token, string endpointId)
{
    // argument checks omitted for brevity

    Uri endpointUri;
    Uri tokenEndpointUri;
    bool endpointHasUri = Uri.TryCreate(endpointId,
                                        UriKind.Absolute,
                                        out endpointUri);
    bool tokenHasUri = Uri.TryCreate(token.EndpointId,
                                     UriKind.Absolute,
                                     out tokenEndpointUri);
    if (endpointHasUri && tokenHasUri)
    {
        if (endpointUri.Scheme != tokenEndpointUri.Scheme ||
            endpointUri.DnsSafeHost != tokenEndpointUri.DnsSafeHost ||
            endpointUri.AbsolutePath != tokenEndpointUri.AbsolutePath)
        {
            throw new SecurityTokenValidationException(
                "The incoming token is not scoped to the endpoint.");
        }
    }
    else if (String.Equals(endpointId, token.EndpointId,
                           StringComparison.Ordinal) == false)
    {
        throw new SecurityTokenValidationException(
            "The incoming token is not scoped to the endpoint.");
    }
    return this.ValidateToken(token);
}

To apply the RsaSessionSecurityTokenHandler to the service, you need a service behavior. This is a class implementing the IServiceBehavior interface, which has three methods. You need to do some work only in the Validate method, which is shown in Listing 9-10. The AddBindingParameters and ApplyDispatchBehavior methods don't have to do anything, so you can leave those completely empty. (These should not throw a NotImplementedException.)


Listing 9-10: Validate Method for the RsaSessionServiceBehavior

public void Validate(ServiceDescription svcDescription,
                     ServiceHostBase svcHostBase)
{
    FederatedServiceCredentials.ConfigureServiceHost(
        svcHostBase,
        RoleEnvironment.GetConfigurationSettingValue("Deployment"));

    var behaviors = svcHostBase.Description.Behaviors;
    FederatedServiceCredentials credentials =
        behaviors.Find<FederatedServiceCredentials>();
    credentials.SecurityTokenHandlers.AddOrReplace(
        new RsaSessionSecurityTokenHandler(
            svcHostBase.Credentials.ServiceCertificate.Certificate));
}

The final piece of plumbing you need is a behavior extension so that you can make the behavior available in configuration. Listing 9-11 shows you the code for this extension.

Listing 9-11: RsaSessionServiceBehaviorExtension

public class RsaSessionServiceBehaviorExtension :
    BehaviorExtensionElement
{
    public override Type BehaviorType
    {
        get { return typeof(RsaSessionServiceBehavior); }
    }

    protected override object CreateBehavior()
    {
        return new RsaSessionServiceBehavior();
    }
}

Hooking up the RsaSessionServiceBehavior is easy. In the <system.serviceModel> section you need to add the behavior extension to the <behaviorExtensions> element inside the <extensions> element, as shown in the following code:

<extensions>
  <behaviorExtensions>
    <add name="RsaSessionServiceBehavior"
         type="WcfWebRoleRP.RsaSessionServiceBehaviorExtension,
               WcfWebRoleRP" />
  </behaviorExtensions>
</extensions>

code snippet 17_RsaBehaviorConfig.txt

Next you need to add the behavior to the service behavior you defined for the web service, which is nothing more than adding the following:

<RsaSessionServiceBehavior />
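For context, a sketch of where the element lands; the surrounding behavior is the one FedUtil created and you named earlier, with its existing children left out for brevity:

```xml
<serviceBehaviors>
  <behavior name="WifActiveFedBehavior">
    <!-- federatedServiceHostConfiguration, serviceCredentials,
         and so on, omitted for brevity -->
    <RsaSessionServiceBehavior />
  </behavior>
</serviceBehaviors>
```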

With the behavior in place, you can redeploy your service and scale up to as many instances as you need.

Exposing the Correct WCF Meta Data

Your Windows Azure instances live behind a load balancer. When a client makes a request, it hits the load balancer, and the load balancer routes the request to one of the instances. WCF is not aware that it is working behind a load balancer, so when you use the meta data exchange endpoint to get the service configuration, the meta data actually contains the internal address used behind the load balancer. This address is not reachable from outside the load balancer, so the meta data is incorrect. Up until now you had no problems because you altered the configuration manually. But after you deploy for production use, chances are your service is going to be used by other parties, so the meta data must be correct. To ensure the meta data actually contains the address under which the service is reachable from outside the load balancer, you need to tell WCF to use the host specified in the incoming request headers to construct the meta data address. You can do so by adding the following configuration to the WifActiveFedBehavior you created earlier.

<useRequestHeadersForMetadataAddress>
  <defaultPorts>
    <add scheme="http" port="80" />
    <add scheme="https" port="443" />
  </defaultPorts>
</useRequestHeadersForMetadataAddress>

code snippet 18_MetadataCorrectionConfig.txt

Diagnosing Issues

Diagnosing issues in a website is easy; you can turn off custom errors to show exceptions in the page and insert trace statements that you can show in the page. Also you can use full debugging support in Visual Web Developer Express to debug applications running on Windows Azure. One thing you can do to make this even easier is to run the website outside of the Windows Azure DevFabric, so you can focus on Azure-specific issues when you do run in DevFabric.

With WCF services, you can also use debugging support, but where it concerns the binding and built-in behaviors, you can’t diagnose anything. You just get a runtime exception and hope you can decipher what went wrong. To make it easier to diagnose issues, you can use diagnostics tracing that’s built into the .NET Framework. .NET Framework and WIF components write trace information to the diagnostics system. Trace listeners can pick up the trace information and write it to a trace log. However, as mentioned several times already, you don’t have access to the local file system of Windows Azure instances, so where do you leave the log information? The answer is Azure Table Storage because it runs separately from your Windows Azure instances.

When you create the project, an AzureLocalStorageTraceListener is automatically created. As the name implies, this is a trace listener that writes trace information to local storage, so it doesn't work on a live deployment on Windows Azure. For most issues that's sufficient, because if you've already debugged the functionality of your application, what remains are configuration issues, and understanding what goes wrong in the development environment can help you resolve issues in the live environment. If you need diagnostics tracing in a live environment, you must use a trace listener that writes to your storage account. You can find one at http://bit.ly/SimpleAzureTraceListener, and you can find more information on the whole diagnostics system in Windows Azure at http://msdn.microsoft.com/en-us/library/gg433048.aspx.

Getting the diagnostics working on your local machine just takes a few steps:

1. Open web.config of WcfWebRoleRP.
2. Comment out the existing <system.diagnostics> section.
3. Almost at the top of the configuration, there’s a commented <system.diagnostics> section for Windows Azure; uncomment it. This enables diagnostics tracing for WCF.
4. You also need diagnostics tracing for WIF. To do this, add another source for the Microsoft.IdentityModel namespace, as follows:
<source name="Microsoft.IdentityModel" switchValue="Verbose">
  <listeners>
    <add name="AzureLocalStorage" />
  </listeners>
</source>
5. Open WebRole.cs.
6. After the three lines of code setting up diagnosticConfig, add the following line of code:
DiagnosticMonitor.Start(
    "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString",
    diagnosticConfig);
7. Save and run the application on your local development environment.
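Putting steps 2 through 4 together, the resulting <system.diagnostics> section might look roughly like the following sketch. The System.ServiceModel source and the listener type name follow the common pattern for the generated project; the exact names in your project may differ:

```xml
<system.diagnostics>
  <sharedListeners>
    <!-- trace listener generated with the project; writes to Windows Azure local storage -->
    <add name="AzureLocalStorage"
         type="WcfWebRoleRP.AzureLocalStorageTraceListener, WcfWebRoleRP" />
  </sharedListeners>
  <sources>
    <!-- WCF diagnostics tracing (the section you uncommented in step 3) -->
    <source name="System.ServiceModel" switchValue="Verbose, ActivityTracing">
      <listeners>
        <add name="AzureLocalStorage" />
      </listeners>
    </source>
    <!-- WIF diagnostics tracing (the source you added in step 4) -->
    <source name="Microsoft.IdentityModel" switchValue="Verbose">
      <listeners>
        <add name="AzureLocalStorage" />
      </listeners>
    </source>
  </sources>
  <trace autoflush="true" />
</system.diagnostics>
```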

After you’ve run the client, the service should have collected trace information. Now you need to get that information from the local table storage. For this you can use the Azure Storage Explorer that you can download from http://azurestorageexplorer.codeplex.com. After installation you can do the following to get to the trace files:

1. Start Azure Storage Explorer, which is in the Neudesic folder of the Start menu.
2. Click Add Account.
3. Check the check box Developer Storage, and click Add Storage Account.
4. In the left pane you now see all containers in your development storage environment. Select wad-tracefiles.
5. In the right pane a trace file should appear. Select it, and in the Blob section of the toolbar, click Download.
6. Select a location to save the file to, and click OK.
7. Open the location you saved the file to in Windows Explorer, and double-click the file to open it in the WCF Service Trace Viewer.
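If you prefer code over a GUI tool, you can list the same trace files from development storage with the Windows Azure StorageClient library. The following is a minimal sketch, assuming the Windows Azure SDK assemblies are referenced and the storage emulator is running:

```csharp
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class ListTraceFiles
{
    static void Main()
    {
        // Connect to the local development storage emulator.
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
        CloudBlobClient blobClient = account.CreateCloudBlobClient();

        // The diagnostics system transfers trace files to this container.
        CloudBlobContainer container = blobClient.GetContainerReference("wad-tracefiles");

        // Flat listing returns the individual blobs instead of virtual directories.
        var options = new BlobRequestOptions { UseFlatBlobListing = true };
        foreach (var item in container.ListBlobs(options))
        {
            Console.WriteLine(item.Uri);
        }
    }
}
```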

Summary

In this chapter, you have learned about identity federation and claims-based identity. These two mechanisms, implemented in Windows Identity Foundation (WIF), enable you to outsource authentication to an Identity Provider. A Security Token Service (STS) can then create a token that is sent to the application, also called a Relying Party, which can use the information in the token to authorize the user.

Implementing passive federation for a website is straightforward and works out-of-the-box when you hook up the website to an STS with FedUtil. For a WCF service it is more complex, because the security bar is set a lot higher. You must create several certificates: one for a secure connection, one for encrypting the security token, and one for signing the security token. You also need an STS that runs under a secure connection. You can create a local development STS, or as an alternative you can use a product such as Active Directory Federation Services 2.0. When you have an STS running, you can create a WCF service and a client to use it. Getting those to operate requires you to tweak the binding configuration FedUtil inserts.

Getting the WCF service to run in the DevFabric environment is only half the work. If you want to deploy the application, you still must take care of issues surrounding certificates, sessions, and metadata, which result from the Windows Azure live environment being a highly scalable, load-balanced environment.

Finally, you have learned how to set up diagnostics so that you can easily track issues with WCF and WIF.
