Chapter 2. TDD Using Hello World


We continue our comparison of TDD and DDT techniques in this chapter, with a straightforward "Hello world!" example (actually a system login) performed using the TDD approach. Logging in seems strangely apt for a "Hello world!" example, as (a) it's a simple concept and exists in virtually every serious IT application, and (b) it's the system saying "Hello" to the user (and the user saying "Hello" back).

Note

For non-TDDers: You may want to have a casual flip through this chapter just to see what's going on in the coding room down the hall at the TDD shop, and then get engaged in the DDT way of doing things, in Chapter 3.

For TDDers: We understand your pain, and try to make clear in this chapter just how hard your job is. Then, in the next chapter, we're going to show you what we believe is an easier way.

Before we dive down the TDD rabbit hole and get started on the example, let's look at the top ten characteristics of TDD (from, we admit, an ICONIX/DDT perspective).

Top Ten Characteristics of TDD

This section expands on the left column in Table 1-1 from the previous chapter, which lists the main differences between TDD and DDT.

10. Tests drive the design.

With TDD, tests have essentially three purposes:

  • Drive the design

  • Document the design

  • Act as regression tests

The first item, that of tests driving the design during coding, rather than an up-front process of thinking and modeling driving the design, is the defining characteristic of TDD. It's what made TDD such a revolutionary (not the same as "good") approach. TDD drives designs at a short-range, tactical level, leaving more strategic decisions to be handled elsewhere. Think of a blind man with a cane being able to detect what's immediately in front of him, as opposed to being able to see a truck heading straight at him, a bit further down the road.

9. There is a Total Dearth of Documentation.

Documentation isn't an inherent part of the TDD process. Result: no documentation.

The mantra that "the code is the design" means that very little design documentation may be written, making it difficult for newcomers to learn the details of the project. If they ask for a system overview, or which class is the main one for a particular screen or business function, being told to go and trawl through the unit tests (as "the tests are the design") isn't going to be too helpful. Enterprising individuals might well think to set up a project wiki and put some pages on there covering different aspects of the design. . . or they might not. Creation of design documentation isn't an inherent part of the process, so it's not likely to happen.

8. Everything is a unit test.

If it's not in JUnit, it doesn't exist...

The mindset of a test-first methodology is, as you'd expect, to write a test first. Then add something new to make the test pass, so you can "prove" that it's done (fans of Gödel's incompleteness theorems should look away now). Unfortunately, this can mean that just the "sunny day scenario" is proven to be implemented. All the "unexpected" stuff—different ways the user may traverse the UI, partial system failures, or out-of-bounds inputs and responses—remain unexpected because no one took the time to think about them in any structured, systematic kind of way.

BUT ...

7. TDD tests are not quite unit tests (or are they?).

There's been some discussion in the TDD world about the true nature of TDD tests,[5] mainly centering on the question: "Are TDD tests true unit tests?" The quick and simple answer is: "Yes. No. Not really." TDD tests (also sometimes called programmer tests, usually to pair them up with customer tests, aka acceptance tests) have their own purpose; therefore, on a true test-first project the tests will look subtly different from a "classical" fine-grained unit test. A TDD unit test might test more than a single unit of code at a time. TDDer and Extremo Martin Fowler writes the following:

Unit testing in XP is often unlike classical unit testing, because in XP you're usually not testing each unit in isolation. You're testing each class and its immediate connections to its neighbors.[6]

This actually puts TDD tests roughly halfway between classical unit tests (which DDT uses) and DDT's own controller tests (see Chapter 6). In this book, however, we'll keep on calling TDD tests "unit tests," as it's what the rest of the world generally calls them.

6. Acceptance tests provide feedback against the requirements.

Acceptance tests also form part of the specification... or they would if we had acceptance tests (and if we had a specification). Unless you've layered another process on top of TDD (such as XP or Acceptance TDD), you'll be driving unit tests directly from the requirements/stories.

5. TDD lends confidence to make changes.

Confidence to make the continual stream of changes that is "design by refactoring". But is that confidence misplaced?

To rephrase the heading, a green bar means "all the tests I've written so far are not failing." When running unit tests through a test-runner such as that built into Eclipse, Flash Builder, IntelliJ, etc., if all your tests pass you'll see a green bar across the window: a sort of minor reward (like a cookie but not as tasty) that may produce an endorphin kick and the feeling that all's well with the world, or at least with the project.

Having covered your code with a thick layer of unit tests, the idea is that this notion of "all being well" should give you the confidence needed to be able to continuously refactor your code without accidentally introducing bugs. However, do take a read through Matt's "Green Bar of Shangri-La" article.[7] (Hint: how do you know if you missed some unit tests?)

4. Design emerges incrementally.

First you write a test, and then you write some code to make the test pass. Then you refactor the code to improve the design, without breaking any tests that have been written so far. Thus, the design emerges as you grow the code base incrementally through the test/code/refactor cycle. For non-TDDers, this is roughly analogous to banging your forehead against a wall to make sure that you're able to feel pain, before you start building the wall, whose existence you will then verify by banging your forehead against it. Of course you need a "mock wall" to bang your forehead against, since the real wall doesn't exist yet...we like to call this "Constant Refactoring After Programming" since it has such a descriptive acronym.

3. Some up-front design is OK.

It's absolutely within the TDD "doctrine" to spend time up-front thinking through the design, even sketching out UML diagrams—though ideally this should be in a collaborative environment, e.g., a group of developers standing at a whiteboard. (That said, doing a lot of up-front design and writing all those billions of tests and refactoring the design as you go would represent a lot of duplicated effort.)

In theory, this means up-front design will get done, but in practice TDDers (who often do TDD in conjunction with Scrum) find that up-front design is not on the deliverable list for the current sprint.

2. TDD produces a lot of tests.

The test-first philosophy underlying TDD is that before you write any code, you first write a test that fails. Then you write the code to make the test pass. The net result is that aggressive refactoring—which will be needed by the bucket-load if you take this incremental approach to design—is made safer by the massive number of tests blanketing the code base. So the tests double as a design tool and as heavily leaned-upon regression tests.

TDD also doesn't really distinguish between "design level" (or solution space) tests and "analysis level" (or problem space) tests.[8]

1. TDD is Too Damn Difficult.

From the Department of Redundancy Department, TDD is Too Damn Difficult. The net effect of following TDD is that everything and its hamster gets a unit test of its own. This sounds great in theory, but in practice you end up with an awful lot of redundant tests.[9] TDD has an image of a "lightweight" or agile practice because it eschews the notion of a "big design up-front," encouraging you to get coding sooner. But that quick illusion of success is soon offset by the sheer hard work of refactoring your way to a finished product, rewriting both code and tests as you go.

Login Implemented Using TDD

We thought it was important to show a full-blown example of TDD early in the book, so that's precisely what this chapter is about. The general idea behind TDD is that you tackle a list of requirements (or user stories) one at a time, and for each one implement just enough to fulfill the requirement. Beginning with a good understanding of the requirement, you first give some thought to the design. Then you write a test. Initially the test should fail—in fact, it shouldn't even compile—because you haven't yet written the code to make it pass. You then write just enough code to make the test pass, and revisit the design to make sure it's "tight."[10] Re-run the tests, add another test for the next element of code, and so on. Repeat until the requirement is implemented. Eat a cookie.

Understand the Requirement

Let's consider a simple Login user story:

As an end-user, I want to be able to log into the web site.

It feels like there's some missing detail here, so you and your pair-programming pal Loretta go in search of Tim, the on-site customer.

"Ah, that's slightly more involved," Tim explains. "There's more to it than simply logging the user in. You need to see it from both the end-user's perspective and the web site owner's perspective. So, as a web site owner, I'm interested in making sure freeloading users can't get access to paid-for functionality, that they're channeled through a revenue-maximizing path, and that the site isn't susceptible to brute-force attempts to crack users' passwords."

"And you'd want users to feel secure, too," you add, and Tim nods.

You and Loretta head back to the programmer's palace and expand the user story into a more detailed set:

  1. As a website owner, I want to provide a secure login capability so I can gain revenue by charging for premium content.

  2. As a website user, I want to be able to log in securely so I can access the premium content.

  3. As a website owner, I want the system to lock a user account after 3 failed login attempts.

  4. As a website user, I want my password entry to be masked to prevent casual shoulder-surfers from seeing it.

You argue briefly about whether this is really just one user story with added detail, or two—or three—separate stories. Then you agree to simply move on and start coding.

Think About the Design

Before leaping into the code, it's normal to give a little thought to the design. You want the system to accept a login request—presumably this means that the user will enter a username and password. You may need to send Loretta back to extract more details from the on-site customer. But then what? How will the password be checked?

Some collaborative white-boarding produces the sketch shown in Figure 2-3.

Figure 2-3. Initial design sketch for the Login story

Note

Figure 2-3 shows a mix of notation styles; but this diagram isn't about UML correctness, it's about thinking through the design and conveying an overview of the developers' plan of attack.

LoginAction is a generic, Spring Framework-esque UI action class to handle incoming web requests—in this case, a login request. It accepts two input values, the username and password, and simply hands the parameters on to a class better suited to handle the login request.

LoginManager accepts the username and password, and will need to make an external call to a RESTful service in order to validate the username and password. If validation fails, a ValidationException is thrown, and caught by the LoginAction, which sends a "FAIL" response back to the user. We also suspect that UserAccount will be needed at some point, but we hope to find out for sure when we begin coding.
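LoginAction itself doesn't get implemented in this chapter, but to make the division of labor concrete, here's a minimal sketch of what it might look like. The method name, the "OK"/"FAIL" response strings, and the use of the LoginFailedException that the code below ends up defining (the whiteboard calls it ValidationException) are all our assumptions, not part of the design sketch:

// Hypothetical sketch only: a bare-bones LoginAction that delegates to
// LoginManager and maps the outcome to a response string.
public class LoginAction {

    private final LoginManager loginManager = new LoginManager();

    public String execute(String username, String password) {
        try {
            loginManager.login(username, password);
            return "OK";      // login succeeded
        } catch (LoginFailedException e) {
            return "FAIL";    // login rejected; tell the user
        }
    }
}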

Write the First Test-First Test First

Let's take a quick look at the project structure. Using Eclipse we've created a project called "LoginExample-TDD," which gives us the following folders:

LoginExample-TDD
   |__ src
   |__ test
   |__ bin

All the production code will go in packages beneath "src," all the unit tests will go into "test," and the compiled code will go into "bin."

Given the white-boarded design sketch in Figure 2-3, it makes sense to focus on LoginManager first. So we'll start by creating a LoginManager test class:

import static org.junit.Assert.*;
import org.junit.Test;

public class LoginManagerTest {

    @Test
    public void login() throws Exception {
    }

}

So far, so good. We've created a test skeleton for the login() method on LoginManager, which we identified during the brief design white-boarding session. We'll now add some code into this test, to create a LoginManager instance and attempt a login:

@Test
public void login() throws Exception {
    LoginManager manager = new LoginManager();
    try {
        manager.login("robert", "password1");
    } catch (LoginFailedException e) {
        fail("Login should have succeeded.");
    }
}

At this stage, Eclipse's editor has become smothered with wavy red lines, and compilation certainly fails (see Figure 2-4).

Figure 2-4. All dressed up with nowhere to go ... this test needs some code to run against.

This compilation failure is a valid stage in the TDD process: the test compilation errors tell us that some code must be written in order to make the test pass. So let's do that next, with two new classes:

public class LoginManager {
    public void login(String username, String password)
    throws LoginFailedException {
    }
}

public class LoginFailedException extends Exception {
}

The code now compiles, so—being eager-beaver TDDers—we rush to run our new unit test straightaway, fully expecting (and hoping for) a red bar, indicating that the test failed. But surprise! Look at Figure 2-5. Our test did not successfully fail. Instead, it failingly succeeded.

Oops, the test passed when it was meant to fail! Sometimes a passing test should be just as disconcerting as a failing test. This is an example of the product code providing feedback into the tests, just as the tests provide feedback on the product code. It's a symbiotic relationship, and it answers the question, what tests the tests? (A variant of the question, who watches the watchers?)

Figure 2-5. Green bar—life is good. Except... what's that nagging feeling?

Following the process strictly, we now add a line into the code to make the test fail:

public void login(String username, String password)
throws LoginFailedException {
    throw new LoginFailedException();
}

The login() method simply throws a LoginFailedException, indicating that all login attempts currently will fail. The system is now pretty much entirely locked down: no one can login until we write the code to allow it. We could alternatively have changed the first test to be "@Test loginFails()" and pass in an invalid username/password. But then we wouldn't have been implementing the basic pass through the user story first—to enable a user to log in successfully. At any rate, we've now verified that we can indeed feel pain in our forehead when we bang it against a wall!
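Just for comparison, the alternative first test mentioned above might have looked something like this (a sketch only; the test name and the made-up credentials are our assumptions):

// Hypothetical alternative first test: drive the design from the failure case
// rather than the success case. Not the route taken in this chapter.
@Test(expected = LoginFailedException.class)
public void loginFails() throws Exception {
    LoginManager manager = new LoginManager();
    manager.login("nosuchuser", "wrongpassword");   // any invalid credentials
}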

Next, let's add some code to make the test pass:

public void login(String username, String password)
throws LoginFailedException {
    if ("robert".equals(username)
            && "password1".equals(password)) {
        return;
    }
    throw new LoginFailedException();
}

If we re-run the test now, we get a green bar, indicating that the test passed. Does it seem like we're cheating? Well, it's the simplest code that makes the test pass, so it's valid—at least until we add another test to make the requirements more stringent:

@Test
public void anotherLogin() throws Exception {
    LoginManager manager = new LoginManager();
    try {
        manager.login("mary", "password2");
    } catch (LoginFailedException e) {
        fail("Login should have succeeded.");
    }
}

We now have two test cases: one in which valid user Robert tries to log in, and another in which valid user Mary tries to log in. However, if we re-run the test class, we get a failure:

junit.framework.AssertionFailedError: Login should have succeeded.
        at com.softwarereality.login.LoginManagerTest.anotherLogin(LoginManagerTest.java:24)

Now, clearly it's time to get real and put in some code that actually does a login check.

Write the Login Check Code to Make the Test Pass

The real code will need to make a call to a RESTful service, passing in the username and password, and get a "login passed" or "login failed" message back. A quick instant-message to the relevant middleware/single-sign-on (SSO) team produces a handy "jar" file that encapsulates the details of making a REST call and instead exposes this simple interface:

package com.mymiddlewareservice.sso;

public interface AccountValidator {

    public boolean validate(String username, String password);
    public void startSession(String username);
    public void endSession(String username);
}

The library also contains a "black-box" class, AccountValidatorFactory, which we can use to get at a concrete instance of AccountValidator:

public class AccountValidatorFactory {
    public static AccountValidator getAccountValidator() {...}
}
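To get our bearings before wiring it into LoginManager, calling the library directly would presumably look something like this (a sketch based only on the interface above; the credentials are made up):

// Sketch of how client code might use the SSO library: validate the
// credentials, then establish a single-sign-on session if they check out.
AccountValidator validator = AccountValidatorFactory.getAccountValidator();
if (validator.validate("robert", "password1")) {   // remote call to the SSO service
    validator.startSession("robert");
}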

We can simply drop this jar file into our project, and call the middleware service in order to validate the user and establish an SSO session for him or her. Utilizing this convenient library, LoginManager now looks like this:

public class LoginManager {
    public void login(String username, String password)
    throws LoginFailedException {
        AccountValidator validator =
            AccountValidatorFactory.getAccountValidator();
        if (validator.validate(username, password)) {
            return;
        }
        throw new LoginFailedException();
    }
}

If we were to re-run the tests now, the call to AccountValidator would make a network call to the remote SSO service and validate the username and password... so the test should pass quite happily.

But wait, is that the screeching halt of tires that you can hear?

This raises an important issue with unit testing in general: you really don't want your tested code to be making external calls during testing. Your unit test suite will be executed during the build process, so relying on external calls makes the build more fragile: suddenly it's dependent on network availability, servers being up and working, and so forth. "Service not available" shouldn't count as a build error.

For this reason, we generally go to great lengths to keep the unit-tested code insular. Figure 2-6 shows one way this could be done. In the sequence diagram, LoginManager checks whether it's running in the "live" environment or a unit-test/mock environment. If the former, it calls the real SSO service; if the latter, it calls a mock version.

Figure 2-6. A common anti-pattern: the code branches depending on which environment it's in.

Yikes. This would quickly become a nightmare of "if-else if" statements that check whether this is a live or test environment. Production code would become more complex as it becomes littered with these checks every time it needs to make an external call, not to mention special-handling code to return "dummy" objects for the tests' purposes. You really want to place as few restrictions and stipulations (aka hacks, workarounds) on the production code as possible—and you definitely don't want the design to degrade in order to allow for unit testing.
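To make the problem concrete, here's a rough sketch of the kind of branching code that Figure 2-6 implies. The Environment.isTestEnvironment() check and the hand-rolled MockAccountValidator are hypothetical; this is exactly the code we want to avoid writing:

// The anti-pattern from Figure 2-6: production code that knows whether it's
// being unit tested. Shown only to illustrate the problem -- don't do this.
public void login(String username, String password)
throws LoginFailedException {
    AccountValidator validator;
    if (Environment.isTestEnvironment()) {            // hypothetical check
        validator = new MockAccountValidator();       // hand-rolled dummy object
    } else {
        validator = AccountValidatorFactory.getAccountValidator();
    }
    if (validator.validate(username, password)) {
        return;
    }
    throw new LoginFailedException();
}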

Create a Mock Object

Luckily, there's another way that also happens to promote good design. So-called mock object frameworks such as JMock, EasyMock, and Mockito[11] use special magic (okay, Java reflection and dynamic proxies behind the scenes) to create convenient, blank/non-functioning versions of classes and interfaces. For this example, we'll use JMock (we also use Mockito later in the book).

Going back to our LoginManagerTest class, we make it "JMock-ready" using an annotation and a context:

@RunWith(JMock.class)
public class LoginManagerTest  {
    Mockery context = new JUnit4Mockery();

Re-running the test now—and getting the same error as earlier—you can verify that the test is now going through JMock by scanning through the stack trace for the AssertionFailedError:

junit.framework.AssertionFailedError: Login should have succeeded.
        at com.softwarereality.login.LoginManagerTest.anotherLogin(LoginManagerTest.java:32)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:66)
        at org.jmock.integration.junit4.JMock$1.invoke(JMock.java:37)

Creating a mock instance of AccountValidator is pretty straightforward:

validator = context.mock(AccountValidator.class);

The next problem is how to get LoginManager to use this mock instance instead of the production version of AccountValidator returned by AccountValidatorFactory. In the LoginManager code we just wrote, the login() method directly calls AccountValidatorFactory.getAccountValidator(). We have no "hook" with which to tell it to use our version instead. So let's add a hook:

public class LoginManager {

    private AccountValidator validator;

    public void setValidator(AccountValidator validator) {
        this.validator = validator;
    }

    synchronized AccountValidator getValidator() {
        if (validator == null) {
            validator = AccountValidatorFactory.getAccountValidator();
        }
        return validator;
    }

    public void login(String username, String password)
    throws LoginFailedException {
        if (getValidator().validate(username, password)) {
            return;
        }
        throw new LoginFailedException();
    }
}

This version uses a little bit of dependency injection (DI) to allow us to inject our own flavor of AccountValidator. Before the login() method is called, we can now set an alternative, mocked-out version of AccountValidator. The only change inside login() is that instead of using the validator member directly, it calls getValidator(), which will create the "real" version of AccountValidator if we haven't already injected a different version.

The complete login() test method now looks like this:

@Test
public void login() throws Exception {
    final AccountValidator validator = context.mock(AccountValidator.class);
    LoginManager manager = new LoginManager();
    manager.setValidator(validator);

    try {
        manager.login("robert", "password1");
    } catch (LoginFailedException e) {
        fail("Login should have succeeded.");
    }
}

Running the test still causes a red bar, however. This is because the new, mocked-out AccountValidator.validate() method returns false by default. To make it return true instead, you need to tell JMock what you're expecting to happen. You do this by passing one of its appropriately named Expectations objects into the context:

context.checking(new Expectations() {{
    oneOf(validator).validate("robert", "password1"); will(returnValue(true));
}});

Note

At first glance, this probably doesn't look much like valid Java code. The designers of JMock are actually going for a "natural language" approach to the syntax, giving it a so-called fluent interface[12] so that the code reads more like plain English. Whether they've achieved a clearer, human-readable API or something far more obtuse is open to debate![13]

This code snippet is saying: "During the test, I'm expecting one call to validate() with the arguments "robert" and "password1", and for this one call I want the value true to be returned." In fact, this is one of JMock's great strengths—it allows you to specify exactly how many times you're expecting a method to be called, and to make the test fail automatically if the method isn't called at all, is called the wrong number of times, or is called with the wrong set of values.
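For example, JMock's Expectations vocabulary can pin down cardinality as well as return values. The snippet below is a sketch, not part of the chapter's test class, and the scenario (three failed validations, no session started) is made up:

// Sketch of JMock cardinality constraints: the test fails automatically if
// these expectations aren't met exactly as stated.
context.checking(new Expectations() {{
    exactly(3).of(validator).validate("robert", "wrongpassword");
        will(returnValue(false));                 // must be called exactly three times
    never(validator).startSession("robert");      // must not be called at all
}});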

For our immediate purposes, however, this is just a pleasant bonus: at this stage we're interested only in getting our mock object to return the value we're expecting.

Re-running the code with the new Expectations in place results in a green bar. We'll now need to do the same for the other test case—where we're passing in "mary" and "password2". However, simply repeating this technique will result in quite a bit of duplicated code—and it's "plumbing" code at that, very little to do with the actual test case. It's time to refactor the test class into something leaner.

Refactor the Code to See the Design Emerge

We'll start by refactoring the test code, as that's our immediate concern. However, the real point of this exercise is to see the product code's design emerge as you add more tests and write the code to make the tests pass.

Here's the refactored test code:

@RunWith(JMock.class)
public class LoginManagerTest  {

    Mockery context = new JUnit4Mockery();
    LoginManager manager;
    AccountValidator validator;

    @Before
    public void setUp() throws Exception {
        validator = context.mock(AccountValidator.class);
        manager = new LoginManager();
        manager.setValidator(validator);
    }

    @After
    public void tearDown() throws Exception {
        manager.setValidator(null);
        manager = null;
        validator = null;
    }

    @Test
    public void login() throws Exception {
        context.checking(new Expectations() {{
            oneOf(validator).validate("robert", "password1"); will(returnValue(true));
        }});

        try {
            manager.login("robert", "password1");
        } catch (LoginFailedException e) {
            fail("Login should have succeeded.");
        }
    }

    @Test
    public void anotherLogin() throws Exception {
       context.checking(new Expectations() {{
           oneOf(validator).validate("mary", "password2"); will(returnValue(true));
       }});

       try {
           manager.login("mary", "password2");
       } catch (LoginFailedException e) {
           fail("Login should have succeeded.");
       }
    }
}

We've created @Before and @After fixtures, which are run before and after each test case to set up the mock validator and the LoginManager, which is the class under test. This saves having to repeat this setup code each time. There's still some work that can be done to improve the design of this test class, but we'll come back to that in a moment. Another quick run through the test runner produces a green bar—looking good. But so far, all we've been testing for is login success. We should also pass in some invalid login credentials, and ensure that those are handled correctly. Let's add a new test case:

@Test( expected = LoginFailedException.class )
public void invalidLogin() throws Exception {
    context.checking(new Expectations() {{
        oneOf(validator).validate("wideboy", "blagger1"); will(returnValue(false));
    }});

    manager.login("wideboy", "blagger1");
}

This time, that old trickster Wideboy is trying to gain access to the system. But he won't get very far—not because we have a tightly designed SSO remote service running encrypted over SSL, but because our mock object has been set to return false. That'll show 'im! Stepping through the code, the first line uses a JUnit 4 annotation as with the other test methods; however, this time we've also specified that we're expecting the LoginFailedException to be thrown. We're expecting this exception to be triggered because the mock object will return false, indicating a login failure from the mocked-out SSO service. The code that's actually under test is the login() method in LoginManager. This test demonstrates that it's doing what's expected.

Looking at the last three test methods, you should see that there's something of a pattern of repeated code emerging, despite our recent refactoring. It would make sense to create a "workhorse" method and move the hairy plumbing code—setting the test case's expectations and calling LoginManager—in there. This is really just a case of moving code around, but, as you'll see, it makes a big difference to the test class's readability. Actually, we end up with two new methods: expectLoginSuccess() and expectLoginFailure().

Here are our two new methods:

void expectLoginSuccess(final String username, final String password) {
    context.checking(new Expectations() {{
        oneOf(validator).validate(username, password); will(returnValue(true));
    }});
    try {
        manager.login(username, password);
    } catch (LoginFailedException e) {
        fail("Login should have succeeded.");
    }
  }

  void expectLoginFailure(final String username, final String password) throws
LoginFailedException {
        context.checking(new Expectations() {{
            oneOf(validator).validate(username, password); will(returnValue(false));
        }});
        manager.login(username, password);
  }

Here's what the three refactored test methods now look like:

@Test
public void login() throws Exception {
    expectLoginSuccess("robert", "password1");
}
@Test
public void anotherLogin() throws Exception {
    expectLoginSuccess("mary", "password2");
}

@Test( expected = LoginFailedException.class )
public void invalidLogin() throws Exception {
    expectLoginFailure("wideboy", "blagger1");
}

All of a sudden, each test case is focused purely on the test values, without any of the plumbing code hiding the purpose or expectations of the test case. Another quick re-run of the tests produces a green bar (see Figure 2-7), so the change doesn't appear to have broken anything.

Figure 2-7. We refactored the code (again), so we re-run the tests to make sure nothing's broken.

Although UML modeling is sometimes seen as anathema to TDD (though by a shrinking minority, we hope), we took advantage of Enterprise Architect (EA)'s reverse engineering capability to vacuum up the new source code into a class model—see Figure 2-8. We also put together a quick sequence diagram to show how the code interacts—see Figure 2-9. Note that these diagrams might or might not be created on a real TDD project. We'd guess probably not, especially the sequence diagram. After all, who needs documentation when there's JUnit code?

Figure 2-8. The product code, reverse-engineered into EA

Notice that throughout this coding session, we haven't even attempted to write any tests for the remote SSO service that our code will call at production time. This is because—at this stage, at least—the SSO service isn't what we are developing. If it turns out that there's a problem in the SSO code, we'll have enough confidence in our own code—as it's covered with unit tests—to safely say that the problem must be inside the "black box" service. This isn't so much a case of "it's their problem, not ours" (that would be distinctly territorial); rather, it helps us track down the problem and get the issue resolved as quickly as possible. We can do this because we've produced well-factored code with good unit-test coverage.

Having said that, at some stage we do want to prove that our code works along with the SSO service. In addition to good old-fashioned manual testing, we'll also write an acceptance test case that tests the system end-to-end.
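Such an end-to-end test might look something like the sketch below: the same LoginManager, but with no mock injected, so getValidator() falls back to the real AccountValidatorFactory and the call goes over the network. The class name and the "testuser"/"testpass" account are assumptions; the test also needs a reachable SSO service, which is exactly why it stays out of the unit-test suite:

import org.junit.Test;

// Sketch of an acceptance-style, end-to-end test: no mocks, real SSO service.
// Run it separately from the unit tests, since it depends on the network and
// on a known test account existing in the SSO system.
public class LoginAcceptanceTest {

    @Test
    public void realLoginSucceedsForKnownAccount() throws Exception {
        LoginManager manager = new LoginManager();   // no mock validator injected
        manager.login("testuser", "testpass");       // throws LoginFailedException on failure
    }
}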

Figure 2-9. A sequence diagram showing the product behavior that's under test

Acceptance Testing with TDD

The TDD story pretty much starts and ends with unit testing. Given a user story, you create test fixtures and unit test code, and drive the product code from the tests. However, there's a major aspect of testing that is often overlooked by teams who adopt TDD: customer tests (aka acceptance tests).

With acceptance tests you're writing tests that "cover" the requirements; with unit tests you're writing tests that "cover" the design and the code. Another way to look at it is that acceptance tests make sure you're coding the right thing; unit tests make sure you're coding it right.

While unit testing principles are crisp, clear, well understood, and generally easy to grasp, guidance on acceptance testing is rare, vague, and woolly. Acceptance testing frameworks such as Fitnesse provide little more than holding pages for tables of values, while leaving the difficult part—plumbing the tests into your code base—as an exercise for the reader. In fact, we've noticed that many TDDers just don't think about acceptance tests. It seems that when TDD was unshackled from XP and allowed to float away over the horizon, a vitally important part of it—customer acceptance tests—was left behind. This almost blinkered focus on the code isn't surprising, as it tends to be programmers who introduce TDD into organizations.

Conclusion: TDD = Too Damn Difficult

Now that we've been through the exhausting exercise of driving the design from the unit tests, let's reflect again on the anemic amount of code that was produced by this herculean effort. Figure 2-10 shows the sum total of the code that we've tested (on the right, a grand total of 28 lines in LoginManager), and the unit test code that got us there (65 lines in LoginManagerTest, on the left).

It took this amount of test code (on the left), not counting refactoring, to produce this amount of product code (on the right).

Figure 2-10. It took this amount of test code (on the left), not counting refactoring, to produce this amount of product code (on the right).

At first glance, this really doesn't appear all that bad. Sixty-five lines of unit test code to produce 28 lines of production code—a bit more than 2:1—isn't that unreasonable a price to pay for having a good suite of regression tests. The real issue is how much work it took us to get to those 65 lines of JUnit code.

When you consider that our 65 lines of final test code took somewhere around eight refactorings to develop (roughly 8 × 65, or around 500 lines of test code written and rewritten along the way), the effort works out at something on the order of 500 lines of test code to produce 28 lines of product code. So we're looking at an "effort multiplier" of roughly 20 lines of test code for every line of product code.

For all this churning of wheels, gnashing of teeth, and object mockery, we've actually tested only the "check password" code. All of the other requirements (locking the account after three failed login attempts, username not found in the account master list, etc.) haven't even begun to be addressed. So the TDD version of Login (all requirements) would be a 60- or 70-page chapter.

If this seems Too Damn Difficult to you, we think you'll enjoy the next chapter, where we'll show you an easier way.

Summary

In this chapter, we continued our comparison of TDD and DDT by following along with a "Hello world!" (aka Login) requirement using TDD. It became obvious how the design begins to emerge from the code and the tests. However, you may have noticed that we were spending more time refactoring the test code than the "real" product code (though this was in large part exaggerated by the simplicity of the example—there will be lots more refactoring of product code in a more complex example). The process is also very low-level, we might even say myopic, focusing on individual lines of code from quite an early stage. While one of the primary goals of TDD is improved code quality, it's definitely not a "big picture" methodology. To gain the picture of your overall system design, especially on larger projects, you would need some other design process in addition to TDD.

In the next chapter we restart the "Hello world!" example and, this time, use it to illustrate how Design-Driven Testing works.



[5] See http://stephenwalther.com/blog/archive/2009/04/11/tdd-tests-are-not-unit-tests.aspx.

[6] See www.artima.com/intv/testdriven4.html.

[7] See www.theregister.co.uk/2007/04/25/unit_test_code_coverage/

[8] That is except for XP-style customer acceptance tests, although these aren't officially part of TDD. If you want to add acceptance tests to TDD, you need to look outside the core process, to some other method such as BDD, Acceptance Test-Driven Development (ATDD), or XP itself. Or ICONIX/DDT, of course.

[9] Our idea of a "redundant test" may be different from a TDDer's—more about factoring out redundant unit tests (by writing coarser-grained controller tests instead) in Chapter 6.

[10] In other words, the design should cover just what's been written so far, in as efficient a way as possible—with no code wasted on "possible future requirements" that may never happen.

[11] See www.jmock.org, http://mockito.org, and http://easymock.org. Mock object frameworks are available for other languages, e.g., Mock4AS for use with FlexUnit. NUnit (for .Net) supports dynamic mock objects out of the box.

[12] See www.martinfowler.com/bliki/FluentInterface.html.

[13] This white paper by the creators of JMock describes the evolution of an embedded domain-specific language (EDSL), using JMock as their example. It's well worth checking out if you're interested in DSLs/fluent interfaces: www.mockobjects.com/files/evolving_an_edsl.ooplsa2006.pdf.

[14] An excerpt from Doug's talk, "Alice in Use Case Land," given as the keynote speech at the UML World conference way back in 2001. See the full transcript here: www.iconixsw.com/aliceinusecaseland.html and also in Appendix "A for Alice".
