Chapter 10. Building confidence with testing

This chapter covers

  • Automated testing using ScalaCheck
  • Using JUnit to test Scala code
  • Writing better tests with dependency injection
  • Behavior-driven development using Specs
  • Testing actor-based systems

So far, I’ve been showing you code without writing tests—so why worry about that right now? The answer is, I wrote tests around the code but didn’t mention doing so because I wanted you to focus more on the Scala language. Now that’s going to change. My goal for this chapter is to make you comfortable writing automated tests in Scala so that you can build production-quality software.

The path to writing well-crafted code[1] is the path where you write tests for your code. The common perception about writing tests is that it’s hard, but this chapter will change that mindset. I’m going to show you how you can get started with practices like test-driven development and continuous integration for your Scala project. The idea of test-driven development (TDD) is to write the test before you write code. I know this seems backward, but I promise you that by the end of this chapter it will make sense. You’ll learn that writing tests is more like doing a design exercise than testing, and it makes sense to design your software. Your design tool will be code—more specifically, test code.

1 “Manifesto for Software Craftsmanship,” http://manifesto.softwarecraftsmanship.org.

I’ll start by introducing automated testing and how developers use it in real-world projects. There are two kinds of automated tests: ones you write (the most common) and ones you generate for your code. First I discuss how you can generate tests for your code using the ScalaCheck tool, because it’s easy. Scala, being a statically typed language, enjoys a unique position where tools like ScalaCheck can generate tests for your functions or classes based on types. ScalaCheck is a great way to get started with automated tests. But to truly get the benefit of automated tests, you also have to write them manually.

The majority of this chapter focuses on writing automated tests. Many testing tools are available for writing tests for Scala code, but this chapter walks you through two of them: JUnit (www.junit.org) and Specs (http://etorreborre.github.com/specs2/).

If you’re a Java developer and have used JUnit before, using it to test your Scala code is easy. Specs is a testing tool written in Scala for Scala and provides more expressiveness in your tests. I’ll take you through the process of writing tests, the tools available to you, and design techniques you can use to make your design testable. The testability property of your design determines how easy it is to write tests. I’ll show you how to implement dependency injection in Scala.

Dependency injection is a design pattern used by developers to make their code more testable (read more about this in section 10.4). As a hybrid language, Scala provides a number of abstraction techniques you can use to implement dependency injection. This chapter explores all of them. I’ll also show you how to use a framework like Spring (www.springsource.org), a popular dependency injection framework in Java, in your Scala project.

Writing automated tests is commonly said to be hard, but the reality is if you use the right tools and techniques it’s easy. Without any further delay, let’s get started by asking: what are automated tests? And how do they fit into the software development process?

10.1. Importance of automated testing

I don’t care how good you think your design is, if I can’t walk in and write a test for an arbitrary method of yours in five minutes it’s not as good as you think it is, and whether you know it or not, you’re paying a price for it.

Michael Feathers

Automated tests are tests that are recorded or prewritten and can be run by a machine without any manual intervention. The tool that allows you to run these tests is called an automated testing tool. As mentioned earlier, there are two kinds of automated tests: ones you write and ones generated by a tool. Regardless of how the automated tests are created, it’s important to understand the value of having them around and running them as often as you can. To grasp their benefits, let’s explore how automated tests fit into the agile software development process. Chances are you’re already doing agile software development,[2] but if you aren’t, having these tests available is still valuable.

2 “Agile software development,” http://en.wikipedia.org/wiki/Agile_software_development.

In the agile software development process, teams don’t analyze and design the application up front; they build it using an evolutionary design.[3] In this process developers only design for today, not tomorrow. They design an application based on what they know today, understanding that some of the design decisions made today might be wrong tomorrow. They also implement functionality of the application incrementally. In this model, application design evolves and goes through lots of changes. Two important questions need to be answered:

3 Martin Fowler, “Is Design Dead?,” May 2004, http://martinfowler.com/articles/designDead.html.

  • What does evolving design have to do with automated testing?
  • Why is evolving the design better than designing the application up front?

The first question is more important in the context of this chapter. Automated tests are important because your application goes through lots of changes and you might break existing functionality. In this ever-changing environment, you won’t be able to keep up using manual testing. You need automated tests that can run repeatedly to ensure that your application is behaving as expected and that nothing unexpected has changed.

The next question asks why evolutionary design is better. Why not design the application up front so you don’t have to change it so frequently? In some cases you do have to design up front, such as when you’re integrating with an external commercial product and you don’t have control over its source code.

But most of the time you have to cope with requirements that change over time. Agile software development tries to reduce the cost of change, and getting a correct upfront design is hard in the face of changing requirements. It becomes costly to maintain and change a big upfront design.[4]

4 “Waterfall model,” http://en.wikipedia.org/wiki/Waterfall_model.

You can’t think of all the features your application will implement or how the various components of your application will work with each other. The larger the application becomes, the harder it becomes to design up front. Agile processes embrace more of an incremental approach to software development, where you build a little, test a little, and review the application features with users to get their feedback. In this process it’s vital to have automated tests that give you feedback to assure you that your application is working.

Automated tests not only help you find problems, they also work as documentation for the application. If you want to understand the behavior of a class or component, you can look at the associated tests. Section 10.5 shows you how to develop executable documentation in Scala using Specs. The problem with the traditional way of documenting code is that the documentation goes stale quickly because most of us forget to keep it up-to-date with code changes. But if you have tests that act as documentation, you’ll keep them up to date with code changes because every code change is preceded by or is the result of a test change.

There are varied types of automated tests: specification-based, unit, integration, functional, and regression, to name a few. This chapter focuses on specification-based tests and test-driving software using unit tests. Other types of tests also play an important role in software development but are beyond the scope of this chapter. In specification-based testing you express the behavior of your application in an executable description, and the tool generates tests to break it. On the other hand, unit tests are something you write to design and verify your application.

If you haven’t done any sort of automated testing, it might take a while to get used to it. Don’t worry too much at the beginning, and don’t give up, because the benefits mentioned earlier will pay you back. You’ll be able to respond to change quickly because now you have tests to provide feedback.

I begin by discussing how to generate automated tests using ScalaCheck so that you can start getting the benefits of automated tests while you learn how to write them for your Scala project.

10.2. Automated test generation using ScalaCheck

ScalaCheck is a tool for testing Scala and Java programs by generating test data based on property specifications. The basic idea is that you define a property that specifies the behavior of a piece of code, and ScalaCheck automatically generates random test data to check whether the property holds true. Don’t confuse the ScalaCheck property with a JavaBean property. In ScalaCheck, a property is a testable unit. To create a new property in ScalaCheck, you have to make a statement that describes the behavior you want to test. For example, I’m making the following claim about the reverse method defined in the String class:

val anyString = "some string value"
anyString.reverse.reverse == anyString

My claim is that if the reverse method is invoked twice on an instance of a String, I get the same value back. The job of ScalaCheck would be to falsify this statement by generating random test data. Without going any further, let’s try a little example with ScalaCheck. Create a new directory called scalacheck and add the following build.sbt file to the root of the directory:

name := "ScalaCheckExample"

version := "1.0"

organization := "Scala in Action"

scalaVersion := "2.10.0"

resolvers ++= Seq(
  "Sonatype Snapshots" at "http://oss.sonatype.org/content/repositories/snapshots",
  "Sonatype Releases" at "http://oss.sonatype.org/content/repositories/releases"
)

libraryDependencies ++= Seq (
  "org.scalacheck" %% "scalacheck" % "1.10.0" % "test"
)

// append options passed to the Scala compiler
scalacOptions ++= Seq("-deprecation", "-unchecked", "-feature")

This project file will download and add the ScalaCheck dependency to your project (don’t forget to do a reload and an update). You can also download the latest ScalaCheck (http://code.google.com/p/scalacheck/downloads/list) and play with it using Scala REPL. In this chapter I show all the examples using the SBT project because it’s more convenient to build and compile. In the next section, you’ll create your first ScalaCheck test to verify the claim about the reverse method.

10.2.1. Testing the behavior of a string with ScalaCheck

To create a new property in ScalaCheck, you have to use the org.scalacheck.Prop trait. The property in ScalaCheck is represented by instances of the Prop trait. There are several ways to create an instance of a property in ScalaCheck but the one you’re going to use here is org.scalacheck.Prop.forAll.

forAll is a factory method that creates a property that can be tested by ScalaCheck. This method takes a function as an argument that should return a Boolean and can take any type of parameter as long as there’s a generator. Generators are components used by ScalaCheck to generate test data. (You’ll read more about generators later in this section.) Here’s how the property would look for the statement I made about reverse in the previous section:

Prop.forAll((a: String) => a.reverse.reverse == a)

The way to read the preceding property is this: for all strings, the expression (a: String) => a.reverse.reverse == a should hold true (this matches with the claim in the previous section). ScalaCheck will use the generator for String type to generate random string data to validate the statement. To run this property with SBT, you need to wrap it inside the Properties class (later I show you how to use ScalaCheck with your tests). The org.scalacheck.Properties represents a collection of ScalaCheck properties, and SBT has built-in support for running Properties:

package checks
import org.scalacheck._

object StringSpecification extends Properties("String") {
  property("reverse of reverse gives you same string back") =
    Prop.forAll((a: String) => a.reverse.reverse == a)
}

Save the preceding code in a StringSpecification.scala file under the src/test/scala folder of your project, and run the test action from the SBT prompt. If the setup is correct so far, you’ll notice that ScalaCheck has tried 100 times to falsify your property but failed (see figure 10.1).

Figure 10.1. The output from running ScalaCheck from the SBT prompt

After 100 tests it should be safe to say that the property does hold true. Let’s add another property: for any two strings x and y, the expression x.startsWith(y) should be equivalent to x.reverse.endsWith(y.reverse). The ScalaCheck property should look like the following:

property("startsWith") = Prop.forAll {(x: String, y: String) =>
      x.startsWith(y) == x.reverse.endsWith(y.reverse)
}

Does this hold true? Go ahead and try this and see whether ScalaCheck can prove the property to be wrong. Again this property holds true after 100 tests. Let’s try to create a property that’s not always true and see whether ScalaCheck is able to catch it. The statement is this: for any two strings x and y, the expression x > y is equivalent to x.reverse > y.reverse. The ScalaCheck property looks like the following:

property("string comparison") = Prop.forAll {(x: String, y: String) =>
    x > y == x.reverse > y.reverse
}

In this case ScalaCheck will fail the property and show the arguments for which the expression doesn’t hold true. The failing arguments may not always be readable because ScalaCheck generates character values across the whole range from Char.MIN_VALUE to Char.MAX_VALUE, including nonprintable ones. The following listing shows the complete String specification class.

Listing 10.1. String specification for ScalaCheck
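
The listing itself isn’t reproduced here; based on the properties developed above, a plausible version of the complete specification looks like the following sketch.

package checks
import org.scalacheck._

object StringSpecification extends Properties("String") {

  // Reversing a string twice gives back the original string
  property("reverse of reverse gives you same string back") =
    Prop.forAll((a: String) => a.reverse.reverse == a)

  // startsWith on the originals is equivalent to endsWith on the reversed strings
  property("startsWith") = Prop.forAll { (x: String, y: String) =>
    x.startsWith(y) == x.reverse.endsWith(y.reverse)
  }

  // A deliberately false claim: comparison order isn't preserved by reverse,
  // so ScalaCheck should report a counterexample here
  property("string comparison") = Prop.forAll { (x: String, y: String) =>
    (x > y) == (x.reverse > y.reverse)
  }
}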

In listing 10.1 you create a specification for the String class. Granted, you haven’t specified the complete behavior of the class, but you can see how ScalaCheck specifications work. You extend the Properties trait to make your specification runnable by SBT. Each statement that you want to verify is wrapped in a ScalaCheck property. You’re using the Prop.forAll factory method to create a new property by passing a function that captures the statement that needs to be verified by ScalaCheck. ScalaCheck executes this function by passing random test data generated by built-in generators.

I hope by now you get the idea of how ScalaCheck properties are created and can be used to test the behavior of Scala code.

Note

This style of automated testing didn’t originate with ScalaCheck; it comes from a tool called QuickCheck,[5] a testing tool for the Haskell language. Sometimes these tools are also called specification-based unit testing tools: you provide a specification of a class or method in terms of properties. This kind of specification-based testing tool relies heavily on the correctness of the type system. Because Scala and Java are both statically typed languages, ScalaCheck is a great way to create specifications and add them to your project.

5 “Introduction to QuickCheck,” modified Oct 25, 2012, www.haskell.org/haskellwiki/Introduction_to_QuickCheck.

The next section discusses ScalaCheck generators so that when the time comes, you can create your own custom generator for a new type you create.

10.2.2. ScalaCheck generators

In the previous section you wrote your first ScalaCheck specification without worrying about generators, so why bother now? The reason you didn’t worry about generators was that ScalaCheck knows how to generate test data for the String type (it knows about other types too[6])—but how about a new type that you created? In this case you’re on your own. The good news is ScalaCheck provides all the building blocks you need to roll your own generator.

6 “ScalaCheck user guide,” updated April 12, 2012, http://code.google.com/p/scalacheck/wiki/UserGuide.

The ScalaCheck generators are responsible for generating test data, and the org.scalacheck.Gen class represents them. Think of generators as functions that take some generation parameters and sometimes return a generated value; for some combinations of parameters, the generator may not produce any value at all. A generator for type T can therefore be thought of as a function of type Gen.Params => Option[T]. The ScalaCheck library already ships with various kinds of generators, but one in particular is quite important: the arbitrary generator. This is a special generator that generates arbitrary values for any supported type. It’s the generator ScalaCheck used when testing the String specification you created in the previous section. To run any specification, ScalaCheck needs a generator to generate test data, so generators play an important role in ScalaCheck. The next section shows you how to create a custom generator in ScalaCheck.

10.2.3. Working with ScalaCheck

In this section I show you how to use ScalaCheck with a simple real-world use case. In the real world, you don’t write specifications for the String class but rather for types (classes and traits) that you’ll create. Instead of creating a new type on your own, let’s take a look at the scala.Either class. This will be close, in terms of complexity, to the types you create or deal with in your project. In Scala, the Either type allows you to represent a value for one of two possible types: Left and Right. Usually, by convention, Left represents failure and Right represents success.

Note

Take a look at the scaladoc[7] of the Either type to get a feel for what you can do with this type.

7 Scala Either, http://mng.bz/106L.

In this section you’ll add specification tests for some of its API methods. First I list the specifications you want to test. This is clearly not an exhaustive list, but it’s a good starting point:

1.  Either will have a value in either Left or Right, but not both at any point in time.

2.  fold on the Left should produce the value contained by Left.

3.  fold on the Right should produce the value contained by Right.

4.  swap moves the Left value to the Right and vice versa.

5.  getOrElse on Left returns the value from Left or the given argument if this is Right.

6.  forall on Right returns true if this is a Left, or else the result of applying the given function to the Right value.

The complexity of the specifications grows as you go down the list, but you’ll see how easy it is to implement them.

First, create a custom ScalaCheck generator for the Either type, because there’s no built-in generator for this type. Creating new generators in ScalaCheck is as easy as combining the existing generators. To keep things simple, only create generators that can generate Int values for Left and Right (later I show you how to parameterize the generator). To create a new generator for Left, use the existing generator for the Int value to create instances for Left:

import org.scalacheck._
import Gen._
import Arbitrary.arbitrary

val leftValueGenerator: Gen[Either[Int, Int]] = arbitrary[Int].map(Left(_))

The preceding code snippet creates a new instance of the Int type generator and maps it to create values for Left. Similarly, for creating instances of Right, use the following code:

val rightValueGenerator: Gen[Either[Int, Int]] = arbitrary[Int].map(Right(_))

To successfully generate instances of the Either type, you have to randomly generate instances of Left or Right. To solve these kinds of problems, the ScalaCheck Gen object ships with helper methods like oneOf or frequency, called combinators. They allow you to combine multiple generators. For example, you could use Gen.oneOf to combine leftValueGenerator and rightValueGenerator to create a generator for the Either type. And oneOf ensures that Left and Right values are generated randomly:

implicit val eitherGenerator: Arbitrary[Either[Int, Int]] =
     Arbitrary(oneOf(leftValueGenerator, rightValueGenerator))

By wrapping the combined generator in an Arbitrary and defining it as an implicit val, you don’t have to pass it to the ScalaCheck properties; ScalaCheck will pick it up automatically. The generator you’ve defined here only generates Int values, but if you wanted to play with different types of values, you could define the generator generically like this:

implicit def arbitraryEither[X, Y](implicit xa: Arbitrary[X],
            ya: Arbitrary[Y]): Arbitrary[Either[X, Y]] =
    Arbitrary[Either[X, Y]](
      oneOf(arbitrary[X].map(Left(_)), arbitrary[Y].map(Right(_)))
    )

The generators for both Left and Right are type-parameterized so they’ll take any type of parameter for which the arbitrary generator is defined in ScalaCheck.

You can also use Gen.frequency to get more control over each individual generator and its uses. If you wanted to use leftValueGenerator 75% of the time compared to the rightValueGenerator, you could use Gen.frequency like this:

implicit val eitherGenerator: Arbitrary[Either[Int, Int]] =
     Arbitrary(frequency((3, leftValueGenerator), (1, rightValueGenerator)))

The generator is created, so let’s move on to your first specification. This specification is easy to implement—all you have to do is check that both Left and Right aren’t present at the same time. In this case you’ll use the isLeft and isRight methods available in the Either type, which return true or false based on which side contains the value:

property("isLeft or isRight not both") = Prop.forAll((e: Either[Int, Int])
     => e.isLeft != e.isRight)

If isLeft and isRight are equal, your specification fails because it clearly states that both Left and Right can’t have values at the same time.

For the second specification (“fold on the Left should produce the value contained by Left”) and the third (“fold on the Right should produce the value contained by Right”), use the fold method defined in the Either type:

property("left value") = Prop.forAll{(n: Int) =>
            Left(n).fold(x => x, b => error("fail")) == n }

property("Right value") = Prop.forAll{(n: Int) =>
            Right(n).fold(b => error("fail"), x => x) == n }

Both cases will error out if they try to access the wrong value. The contract of the fold is like the following, where it only applies the appropriate function parameter:

def fold[X](fa: A => X, fb: B => X) = this match {
   case Left(a) => fa(a)
   case Right(b) => fb(b)
}

Go ahead and add these properties to a specification class and run them (see listing 10.2 for the complete specification).

Customizing the number of tests generated by ScalaCheck

ScalaCheck provides configurable options that allow you to control how it verifies your properties. If you want to require more than 100 successful tests before a property is declared successful, you can pass ScalaCheck arguments to your test through SBT. The trick is to use the SBT test-only action. This action allows you to provide test names as arguments and pass additional test arguments. If you don’t specify any test names, it will run all the tests, like the SBT test action. You can change the default setting for the minimum number of successful tests (-s) from 100 to 500 by passing test arguments to SBT like the following:

>test-only -- -s 500

By passing -s (a ScalaCheck-specific option), you’ve configured the minimum number of successful tests that ScalaCheck must generate before a property is pronounced successful. Check the ScalaCheck documentation to learn about all the configuration options.

The fourth specification (“swap moves the Left value to the Right and vice versa”) is a little harder, but nothing that can’t be handled. According to this specification, the swap method of the Either type should move the value from Left to Right and vice versa. Here you can use pattern matching to check whether the value corresponds to Left or Right. For example, if it’s Left, then after swap the value should be available on the Right side, and vice versa for the Right value:

property("swap values") = Prop.forAll{(e: Either[Int, Int]) => e match {
       case Left(a) => e.swap.right.get == a
       case Right(b) => e.swap.left.get == b
    }
}

The following listing shows the complete specification for the Either type, including specification numbers 5 and 6.

Listing 10.2. Complete EitherSpecification
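
The listing isn’t reproduced here; pulling together the generator and the properties from this section gives a sketch like the following. The properties for specifications 5 and 6 are my own guesses at an implementation, using the left and right projections of Either.

package checks
import org.scalacheck._
import Gen._
import Arbitrary.arbitrary

object EitherSpecification extends Properties("Either") {

  val leftValueGenerator: Gen[Either[Int, Int]] = arbitrary[Int].map(Left(_))
  val rightValueGenerator: Gen[Either[Int, Int]] = arbitrary[Int].map(Right(_))

  // Combine the two generators so Left and Right values show up randomly
  implicit val eitherGenerator: Arbitrary[Either[Int, Int]] =
    Arbitrary(oneOf(leftValueGenerator, rightValueGenerator))

  property("isLeft or isRight not both") =
    Prop.forAll((e: Either[Int, Int]) => e.isLeft != e.isRight)

  property("left value") = Prop.forAll { (n: Int) =>
    Left(n).fold(x => x, b => sys.error("fail")) == n }

  property("Right value") = Prop.forAll { (n: Int) =>
    Right(n).fold(b => sys.error("fail"), x => x) == n }

  property("swap values") = Prop.forAll { (e: Either[Int, Int]) => e match {
    case Left(a)  => e.swap.right.get == a
    case Right(b) => e.swap.left.get == b
  }}

  // Specification 5 (sketch): getOrElse on the left projection
  property("getOrElse on Left") = Prop.forAll { (n: Int, other: Int) =>
    Left(n).left.getOrElse(other) == n && Right(n).left.getOrElse(other) == other
  }

  // Specification 6 (sketch): forall on the right projection
  property("forall on Right") = Prop.forAll { (e: Either[Int, Int]) => e match {
    case Left(_)  => e.right.forall(_ > 0)
    case Right(b) => e.right.forall(_ > 0) == (b > 0)
  }}
}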

The previous listing creates a generator for the Either type by using the building blocks provided by ScalaCheck. Arbitrary.arbitrary is one of those building blocks that lets you create new custom generators. Using it, you create generators for both Left and Right values of the Either type. Then, using the combinators available in the Gen object, you create a generator for the Either type. The rest of the code is defining ScalaCheck properties for all the specifications declared at the beginning of the section.

There are plenty of Scala open source libraries, like Scalaz (https://github.com/scalaz/scalaz) and Lift (https://github.com/lift/framework), that use ScalaCheck for testing their classes. You can always download them and go through their ScalaCheck tests to see various ways you can use them.

It’s also easy to use ScalaCheck to test Java codebases. Because Scala and Java interoperate, you don’t have to do anything special to test Java code. ScalaCheck also supports the generation of Java collection classes.

As you’ve already figured out, ScalaCheck is a powerful framework. For example, with 20–25 lines of code, you managed to generate 600 tests. With the ability to create custom generators, I’m sure you can think of places in your project where ScalaCheck will be valuable.

What about the new functionality you’ve yet to implement? You aren’t sure how it should look yet—the classes, traits, and functions you’d need to implement the functionality. The next section introduces you to a design technique called test-driven development, which might solve your problem.

10.3. Test-driven development cycle

Test-driven development[8] (TDD) is a technique of using tests to drive the design of software (see figure 10.2). At first this sounds backward, because you usually associate tests with verification of the software: you test your software to make sure it’s working as expected, usually as the last thing you do before releasing it. TDD completely reverses that and makes testing a central part of the software development lifecycle. In agile software development, TDD is one of the most important practices, if not the most important. But you don’t have to buy in to agile to reap the benefits of TDD—you can use it with any process. Remember: TDD is a design tool. In the end you get a test suite, but that’s more of a secondary effect. Let’s go through and understand how TDD works, and then I’ll explain why it works.

8 “Test-driven development,” http://en.wikipedia.org/wiki/Test-driven_development.

Figure 10.2. Test-driven development cycle

The figure outlines how TDD works as a development practice. You always start with a failing end-to-end test. An end-to-end test (sometimes called an integration test) exercises your application from top to bottom. This could mean the test is making an HTTP request through a browser and checking the response back. Then you write a bunch of unit tests to break the problem into smaller pieces and make them pass. You only write production code when you have a failing test, and you only refactor when your tests are passing. One way to think about it is to take one of the acceptance criteria of the feature you’re supposed to implement and write it as an executable test.

Let’s consider the following feature request. As a pricing analyst, you want to calculate a price for products so you can bill customers correctly:

Acceptance criteria:

  • All products whose ID starts with A should use cost plus the percent amount.
  • Example: 150 (cost) + 20% = $180
  • All products whose ID starts with B should use an external price source to get the price.

In this case, if you pick the first acceptance criterion, your job would be to implement that criterion as a test. When you start to implement the acceptance criterion as a test, the following are some of the questions that might pop into your head:

  • Where should you implement the pricing logic?
  • Should you create a trait or start with a simple function?
  • What parameters will the function take?
  • How should you test the output?
  • Should it hit the database or filesystem to pull up the cost?

If this is the case, you have already started to think about the design. But at this point your focus should be only on the unit of work at hand. That means the acceptance criterion you’re working on. The most common theme of TDD is to pick the simplest solution that could possibly work. In this case the simplest solution would be to create a function that takes a product code, looks it up in a Map, and returns the price using the formula specified in the acceptance criterion. The design decision to use a Map to look up the cost probably won’t hold true for the next test. When that happens, you’ll make the necessary code changes and look up the cost from some persistence store—you get the idea. Do the simplest thing that could possibly work, and then incrementally design and build your application.

Once your test is passing, you have the opportunity to refactor or clean up. Refactoring (www.refactoring.com) is a technique you can use to improve the design of existing code without changing its behavior. This test-code-refactor cycle repeats for each feature or step that you implement. Sometimes this cycle is called the red-green-refactor cycle. When a test is failing, you’re in the red state; then you make the test pass and move to the green state. TDD is a development practice, and it takes some time to get used to it. As you go through some examples, it will become clearer.

The good news is that the Scala community is blessed with lots of testing tools to use. I’m going to focus on two of the most popular: JUnit and Specs. JUnit is more popular among Java developers and can be easily used to test Scala code. But most Scala programmers use Specs to test their Scala code.

As you start writing tests, you’re also building a test suite. If you don’t run them often, then you’re not extracting the benefits from them. The next section discusses setting up a continuous integration[9] environment to get continuous benefits from them.

9 Martin Fowler, “Continuous Integration,” May 1, 2006, http://martinfowler.com/articles/continuousIntegration.html.

10.3.1. Setting up your environment for TDD

Once you and your team get comfortable with TDD, you need a tool that checks out the latest code from your source code repository and runs all the tests after every check-in. This ensures that you always have a working software application. A continuous integration (CI) tool does that automatically for you. Almost all the existing CI tools will work for Scala projects. Table 10.1 shows some tools that you could use in your Scala project.

Table 10.1. Tools to set up your TDD environment

Name

Description

Jenkins CI[a] Open source continuous integration tool that could build and test your project continuously. You can configure it to point to your source control and run builds every time the repository is updated. In essence, almost all the CI tools have these features. You could also use any other popular CI tool for your Scala project.
Jenkins SBT plugin[b] Allows you to run SBT build actions from Jenkins and lets you configure SBT using Jenkins. For CI tools that don’t have native support for SBT but support Maven, you can easily generate a POM file for your SBT project using the make-pom SBT command.
Code coverage[c] Code coverage is a measurement of source code that’s under automated tests. Code coverage tools help you to identify the area of the code that’s not tested. Almost all Java code coverage tools will work for Scala projects, but using tools that work with your build tool, like SBT, is always better.

a Jenkins home page, http://jenkins-ci.org.

b sbt plug-in, edited on Aug. 27, 2011, http://wiki.jenkins-ci.org/display/JENKINS/sbt+plugin.

c jacoco4sbt, https://bitbucket.org/jmhofer/jacoco4sbt/wiki/Home.

Tip

SBT is still fairly new compared to other build tools available in the market. If you have a testing tool or CI environment that doesn’t work well with SBT, you can use Maven (http://maven.apache.org) as your build tool. There’s a Maven Scala plug-in[10] that makes your Maven project Scala-aware and allows you to compile and run your Scala tests. You can also generate a .POM file (Maven build file) from your SBT project using the make-pom action.

10 maven-scala-plugin, version 2.14.2, Aug. 4, 2010, http://scala-tools.org/mvnsites/maven-scala-plugin/.

I’ve mentioned only a handful of tools you can include in your project to have a continuous feedback cycle. The toolset around Scala is always evolving, so try out a few tools and pick the one that best fits your project. The next section explains how you can use JUnit to test your Scala code.

10.3.2. Using JUnit to test Scala code

JUnit (www.junit.org) is a simple testing framework written in Java that allows you to write automated tests. This is a popular framework used in many Java projects. If you’ve used the JUnit testing tool previously to write tests for Java code, I’m happy to inform you that you can use it to test Scala code too. To use JUnit inside your SBT project, add the following dependency to your project file:

libraryDependencies += "junit" % "junit" % "4.10" % "test"

By default, SBT doesn’t recognize JUnit-style test cases, so you have to add another dependency to your project to make SBT aware of JUnit test cases:

libraryDependencies += "com.novocode" % "junit-interface" % "0.8" % "test"

The junit-interface[11] tool implements the test interface of SBT so that SBT can run JUnit test cases. After you reload and update your SBT project, you’re ready to add JUnit test cases and run them using the test action from the SBT console (see the sketch just below). This works out great if you have legacy JUnit tests that you want to retain while porting your application from Java to Scala, or if you have both Java and Scala projects[12] that you’re building with SBT.

11 Stefan Zeiger, szeiger/junit-interface, https://github.com/szeiger/junit-interface.

12 SBT, https://github.com/harrah/xsbt.
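
Here’s a minimal sketch of what a JUnit 4 test written in Scala might look like; the test class and its assertion are made up for illustration and reuse the string-reversal example from earlier in the chapter.

import org.junit.Test
import org.junit.Assert.assertEquals

class StringReverseTest {

  // JUnit discovers test methods through the @Test annotation,
  // exactly as it does for Java classes
  @Test
  def reversingTwiceGivesBackTheOriginalString() {
    val original = "scala in action"
    assertEquals(original, original.reverse.reverse)
  }
}

Drop a class like this under src/test/scala and run it with the SBT test action, just like the ScalaCheck specifications.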

JUnit is a good way to get started writing automated tests, but it’s not the most appropriate testing tool for Scala projects because it still doesn’t understand Scala natively. There are multiple open source Scala testing tools you can use to write more expressive tests. Section 10.5 looks into a Scala testing tool called Specs that most Scala developers use, but for now let’s try to understand an important concept called dependency injection, which helps in designing more testable applications.

10.4. Better tests with dependency injection

Dependency injection (DI) is a design pattern that separates behavior from dependency resolution (the way your components find other dependent components). This pattern also helps to design programs that are highly decoupled in nature. Let’s look at a naïve example to understand how DI works (see figure 10.3).

Figure 10.3. CalculatePriceService and its calculators

This example is about calculating the price of a product based on various pricing rules. Typically any pricing system will have hundreds of rules, but to keep things simple I will only talk about two:

  • The cost-plus rule determines the price by adding a percentage markup to the cost.
  • The external price source rule gets the price from an external pricing source.

With these rules in place, the calculate price service would look something like the next listing.

Listing 10.3. Basic CalculatePriceService
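
The listing isn’t reproduced here; the following sketch shows what it might look like based on the description that follows (the pricing formulas are placeholders).

trait Calculator {
  def calculate(productId: String): Double
}

class CostPlusCalculator extends Calculator {
  // Placeholder: look up the cost and add a percentage on top
  def calculate(productId: String) = 150.0 * 1.2
}

class ExternalPriceSourceCalculator extends Calculator {
  // Placeholder: fetch the price from an external pricing source
  def calculate(productId: String) = 200.0
}

class CalculatePriceService {
  // The service creates, and therefore controls, its own dependencies
  val costPlusCalculator = new CostPlusCalculator
  val externalPriceSourceCalculator = new ExternalPriceSourceCalculator

  def calculate(priceType: String, productId: String): Double = priceType match {
    case "costPlus"            => costPlusCalculator.calculate(productId)
    case "externalPriceSource" => externalPriceSourceCalculator.calculate(productId)
  }
}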

The cost-plus rule is implemented by the costPlusCalculator, and the external price source is handled by the externalPriceSourceCalculator. Both calculators extend the Calculator trait. The CalculatePriceService class uses these calculators based on the parameter priceType. Right now the two possible values for priceType are "costPlus" and "externalPriceSource". Let’s relate this example to the definition of DI. The behavior of the CalculatePriceService is to use the appropriate price calculator to determine the price for a given product. At the same time this class is also resolving its dependencies. Is there anything wrong with managing your own dependencies?

Note

Dependency injection is a specific form of inversion of control where the concern being inverted is the process of obtaining the needed dependencies.

Yes, there are some potential problems with this, in particular when your software is evolving. What if your client decides to use a different external pricing source to calculate the price or redefines the cost-plus calculation logic for some customers? In these cases you have to come up with different implementations of calculators and change the CalculatePriceService accordingly. This might be okay in some situations, but if you’re planning to build this as a component that will be shared by projects, you have a problem.

Using DI, you can easily solve this problem. If the dependent calculators could be passed in (injected) to the CalculatePriceService, then the service could be easily configured with various implementations of calculators. In its simplest form, you could pass these calculators through the constructor:
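
A sketch of that constructor-injected version, reusing the Calculator trait from the previous listing, might look like this.

class CalculatePriceService(
    costPlusCalculator: Calculator,
    externalPriceSourceCalculator: Calculator) {

  // The dependencies are now supplied by the caller instead of being
  // created inside the service
  def calculate(priceType: String, productId: String): Double = priceType match {
    case "costPlus"            => costPlusCalculator.calculate(productId)
    case "externalPriceSource" => externalPriceSourceCalculator.calculate(productId)
  }
}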

The only thing that’s changed compared to the previous code listing is that now the instances of two calculators are passed in as constructor arguments. In this case, the caller of the service takes the responsibility to determine the dependent price calculators and inject them into the service. This makes the service highly decoupled because it doesn’t care how the costPlusCalculator or externalPriceSourceCalculator is created or implemented. This also gives you flexibility in terms of design because now you can easily incorporate the changes your customer is talking about and come up with different implementations of pricing rules.

10.4.1. Techniques to implement DI

What does DI have to do with testing? In unit testing it’s important to understand the unit you’re testing. When you’re testing the calculate method of CalculatePriceService, your system under test is the CalculatePriceService, not the costPlusCalculator or the externalPriceSourceCalculator. But if you don’t isolate the calculators, your test will end up testing them as well. This is okay when you’re testing your system end to end using an integration test, but not when you want to test only the behavior of the CalculatePriceService. In this small example, it might be hard to see the difference, but in a large application without isolation of dependencies, you’ll end up initializing the system over and over again for each component you test. Isolation is important if you want to write simple and manageable unit tests.

The second problem with a closely coupled system is the speed of testing. It’s important to have your tests run fast. Remember that your tests are your feedback mechanism, so if they run slowly you won’t get fast feedback. In this example, each of your calculators might access the database or an external web service, and these will slow down your tests.

Definition

Test double is a common umbrella term for a test-specific equivalent of a component that your system under test depends on.

Ideally you’ll create a test version for each calculator so that you can focus your testing to verify only the system under test, which in this case is CalculatePriceService. In the test version of the calculator, you can return a hardcoded price or use an in-memory database to speed things up. This will also give you more control over the test data. One key aspect of TDD is rerunnable tests. If your tests are heavily dependent on the external data, they will become brittle because the external data could change and break your tests.

Note

A measure of a good unit test is that it should be free of side effects, the same as writing a pure function in functional programming.

If you follow TDD as a driver for your design, you don’t have to worry too much about the coupling problem—your tests will force you to come up with a decoupled design. You’ll notice that your functions, classes, and methods follow a DI pattern.

The following sections discuss ways you can implement dependency injection in Scala. Table 10.2 shows the list.

Table 10.2. Techniques to implement dependency injection

Technique

Description

Cake pattern Handles dependency using trait mixins and abstract members.
Structural typing Uses structural typing to manage dependencies. The Scala structural typing feature provides duck typing[a] in a type-safe manner. Duck typing is a style of dynamic typing in which the object’s current behavior is determined by the methods and properties currently associated with the object.
Implicit parameters Manages dependencies using implicit parameters so that as a caller you don’t have to pass them. In this case, dependencies could be easily controlled using scope.
Functional programming style Uses function currying to control dependencies. Function currying is a technique by which you can transform a function with multiple arguments into multiple functions that take a single argument and chain them together.
Using a DI framework Most of the techniques mentioned here will be home-grown. I show you how to use a DI framework in your Scala project.

a Duck typing, http://en.wikipedia.org/wiki/Duck_typing.

These techniques can help to write more testable code and provide a scalable solution in Scala. Let’s take our favorite CalculatePriceService and apply each of the techniques mentioned in the table.

10.4.2. Cake pattern

The cake pattern[13] is a technique of building multiple layers of indirection in your application to help with managing dependencies. The cake pattern is built on the three abstraction techniques described in Table 10.3.

13 Martin Odersky and Matthias Zenger, “Scalable Component Abstractions,” presented at OOPSLA’05, Oct. 16-20, 2005, http://lamp.epfl.ch/~odersky/papers/ScalableComponent.pdf

Table 10.3. Abstractions used in the cake pattern

Name

Description

Abstract members Provides a way to abstract the concrete types of components. Using abstract types you can create components that don’t depend on concrete types, and the type information could be provided by other components that use them. (See this chapter’s codebase for an example.)
Self type annotation Allows you to redefine this and is a way to declare the dependencies required by a component. Using a trait mixin, you can inject various implementations of dependencies. (See this chapter’s codebase for an example.)
Mixin composition You’ve already seen this in chapter 4. A mixin allows you to use Scala traits to override and add new functionality.

These concepts were covered in detail in chapter 7, so let’s see how the cake pattern can help you decouple the CalculatePriceService from its calculators and make it more testable. The first thing you can do is extract the calculator instances from the service to its own namespace called Calculators:
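
A sketch of that namespace trait, with the calculators left abstract (the reason is explained below), might look like this.

trait Calculators {
  // Concrete implementations are supplied by whatever mixes this trait in:
  // real calculators in production, fakes in tests
  val costPlusCalculator: Calculator
  val externalPriceSourceCalculator: Calculator
}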

The idea behind this Calculators trait is to have a component namespace that holds all the calculators in your application. Similarly, let’s create a component namespace for the CalculatePriceService and declare its dependency on Calculators with a self type:
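
A sketch of that component, assuming the same calculate logic used before, might look like the following.

trait CalculatePriceServiceComponent { this: Calculators =>

  class CalculatePriceService {
    // The self type guarantees both calculators are in scope here
    def calculate(priceType: String, productId: String): Double = priceType match {
      case "costPlus"            => costPlusCalculator.calculate(productId)
      case "externalPriceSource" => externalPriceSourceCalculator.calculate(productId)
    }
  }
}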

You’re using the self type this: Calculators to redefine this. That will also allow you to statically ensure that no one can create CalculatePriceService without mixing in the Calculators trait. The benefit is that now you can reference both costPlusCalculator and externalPriceSourceCalculator freely. The self type will ensure that they’re available during runtime.

You must be wondering why both calculators are declared as abstract inside the Calculators trait. It’s because you want to control how these calculators are created. Remember, in your tests you don’t want to use the real calculators; instead you want to use a fake or test-double version of them. At the same time, you want to use the real version of the calculators in production mode. This is where the trait mixin comes in handy. For production mode you could create a pricing system by composing the real versions of these components, as in the following:

object PricingSystem extends CalculatePriceServiceComponent
            with Calculators {
  val costPlusCalculator = new CostPlusCalculator
  val externalPriceSourceCalculator = new ExternalPriceSourceCalculator
}

The pricing system is initialized with the real implementation of costPlusCalculator and externalPriceSourceCalculator, and for testing the pricing could be created using the fake implementation:
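
A sketch of that test configuration might look like the following; it’s defined as a trait here (with made-up hardcoded prices) so it can be mixed into a test class, as the next listing does.

trait TestPricingSystem extends CalculatePriceServiceComponent
            with Calculators {
  // Test doubles: hardcoded prices keep the tests fast and predictable
  val costPlusCalculator = new Calculator {
    def calculate(productId: String) = 2.0
  }
  val externalPriceSourceCalculator = new Calculator {
    def calculate(productId: String) = 4.0
  }
}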

In the TestPricingSystem, the calculators are implemented as test doubles, which makes it easy to write tests around the calculate price service. In your tests you’ll use the TestPricingSystem shown in the following listing.

Listing 10.4. JUnit test case for calculating price service (cake pattern)
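
The listing isn’t reproduced here; a sketch that matches the description below, using the hardcoded prices from the TestPricingSystem sketch above, might look like this.

import org.junit.Test
import org.junit.Assert.assertEquals

class CalculatePriceServiceTest extends TestPricingSystem {

  val service = new CalculatePriceService

  @Test
  def shouldUseCostPlusCalculatorForCostPlusPriceType() {
    assertEquals(2.0, service.calculate("costPlus", "some product"), 0.0)
  }

  @Test
  def shouldUseExternalPriceSourceCalculatorForExternalPriceType() {
    assertEquals(4.0, service.calculate("externalPriceSource", "some product"), 0.0)
  }
}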

You mix the test version of the pricing system into your test class. This automatically makes the fake implementations of the calculators available inside the test. That simplifies your test and lets you focus on testing the CalculatePriceService. The two tests check whether the CalculatePriceService uses the right calculator when invoked with the corresponding price type.

This is a common technique used by Scala developers to manage dependencies. In smaller projects, it’s reasonable to have the wiring of dependencies implemented like the PricingSystem and the TestPricingSystem, but for large projects it may become difficult to manage them. For large projects it makes more sense to use a DI framework (section 10.4.6 shows how to use an off-the-shelf DI framework) that allows you to completely separate object creation and injection from business logic.

10.4.3. Structural typing

Structural typing in Scala is a way to describe types by their structure. The previous section created the Calculators trait as a namespace for all the calculators, and CalculatePriceService used it to get to the individual calculators. The contract between these two traits consists of the two abstract vals costPlusCalculator and externalPriceSourceCalculator, because CalculatePriceService doesn’t care about anything else. To capture that contract as a structure, you can have Scala treat it as a new type:

type Calculators = {
      val costPlusCalculator: Calculator
      val externalPriceSourceCalculator: Calculator
}

The code creates a new type called Calculators by specifying its structure. type is a keyword in Scala used to create new types or type aliases. Now you can use this type to inject various implementations of calculators into the CalculatePriceService:

class CalculatePriceService(c: Calculators) {
  val calculators = Map(
   "costPlus" -> calculate(c.costPlusCalculator) _ ,
   "externalPriceSource" -> calculate(c.externalPriceSourceCalculator) _)
  def calculate(priceType: String, productId: String): Double = {
    calculators(priceType)(productId)
  }
  private[this] def calculate(c: Calculator)(productId: String):Double =
     c.calculate(productId)
}

When using a structural type, you don’t necessarily have to name your type—you can use it inline, as in the following:
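
A sketch of the inline version, keeping the same body as the named-type version above, might look like this (the scala.language.reflectiveCalls import just silences the structural-type feature warning in Scala 2.10).

import scala.language.reflectiveCalls

class CalculatePriceService(c: {
      val costPlusCalculator: Calculator
      val externalPriceSourceCalculator: Calculator }) {
  val calculators = Map(
   "costPlus" -> calculate(c.costPlusCalculator) _ ,
   "externalPriceSource" -> calculate(c.externalPriceSourceCalculator) _)
  def calculate(priceType: String, productId: String): Double = {
    calculators(priceType)(productId)
  }
  private[this] def calculate(c: Calculator)(productId: String): Double =
     c.calculate(productId)
}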

In this case, the type of the constructor parameter is defined inline. The advantage of structural typing in Scala is that it’s immutable and type-safe. The Scala compiler will ensure that the constructor parameter of CalculatePriceService implements both the vals costPlusCalculator and externalPriceSourceCalculator. Again, you could create two types of configuration—one for testing and another for production:
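
A sketch of those two configurations: any object that provides the two vals satisfies the structural type, so either can be passed to the service’s constructor.

object ProductionCalculators {
  val costPlusCalculator: Calculator = new CostPlusCalculator
  val externalPriceSourceCalculator: Calculator = new ExternalPriceSourceCalculator
}

object TestCalculators {
  // Fakes with hardcoded prices for fast, predictable tests
  val costPlusCalculator: Calculator = new Calculator {
    def calculate(productId: String) = 0.0
  }
  val externalPriceSourceCalculator: Calculator = new Calculator {
    def calculate(productId: String) = 0.0
  }
}

// Usage (from inside some object or test):
//   new CalculatePriceService(ProductionCalculators)
//   new CalculatePriceService(TestCalculators)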

Based on what you’re doing, you have the flexibility to pick the appropriate configuration. This is one of my favorite ways to handle dependencies because it’s easy and simple. Yet it does come with a price. Internally, structural typing is implemented using reflection, so it’s slower compared to other approaches. Sometimes that’s acceptable, but be aware of it when using structural typing.

10.4.4. Implicit parameters

Implicit parameters provide a way for the compiler to fill in parameters that you don’t pass explicitly. Using this technique you can have the Scala compiler inject appropriate dependencies into your code. (You’ve already seen implicit parameters in action in section 10.2.2; ScalaCheck uses them to pick the appropriate generator for a given property.) To declare a parameter implicit, you have to mark the parameter with the implicit keyword.

The following example injects the calculators as a parameter to CalculatePriceService and marks them as implicit:

class CalculatePriceService(
    implicit val costPlusCalculator: CostPlusCalculator,
    implicit val externalPriceSourceCalculator:
     ExternalPriceSourceCalculator
)

The beauty of implicit parameters is that if you don’t supply them when creating the instance of CalculatePriceService, the Scala compiler will search for “implicit” values that match your parameter in the compilation scope. If the compiler fails to find an appropriate implicit value, it fails the compilation.

Create an object called ProductionServices that defines these implicit values for production code:

object ProductionServices {
   implicit val costPlusCalculator = new CostPlusCalculator
   implicit val externalPriceSourceCalculator =
     new ExternalPriceSourceCalculator
}

To provide values for implicit parameters, you also have to mark each value with implicit—otherwise the compiler won’t recognize it. You have to import this object when running in production mode, and the easiest way to do that is to use a configuration object like the following:

object ProductionConfig {
  import ProductionServices._
  val priceService = new CalculatePriceService
}

Similarly, for testing, create a separate configuration object and provide a test implementation of the services:

object TestServices {
  implicit val costPlusCalculator = new CostPlusCalculator {
    override def calculate(productId: String) = 0.0
  }
  implicit val externalPriceSourceCalculator =
    new ExternalPriceSourceCalculator {
      override def calculate(productId: String) = 0.0
    }
}

object TestConfig {
  import TestServices._
  val priceService = new CalculatePriceService
}

You don’t have to always use implicit values for implicit parameters, because you can always pass the parameters explicitly the old-fashioned way. Using implicits to handle dependencies can easily get out of hand as your application grows in size, unless they’re grouped together like the preceding configuration objects. Otherwise, your implicit declarations and imports will be scattered around the code and will make it hard to debug compilation issues. Note that implicit parameter resolution depends on types: instead of defining both costPlusCalculator and externalPriceSourceCalculator as values of type Calculator, you had to give them more specific types so the compiler could tell them apart. Sometimes this constraint can be too restrictive to build a scalable design.

10.4.5. Dependency injection in functional style

The general idea behind DI is inversion of control.[14] Instead of a component controlling its dependencies, they’re passed in from outside (usually by some container or framework). When you work with functions, DI is already happening automatically: if you consider a function as a component, then its dependencies are its parameters. This makes functions inherently testable. If you use function currying, you can also hide the dependencies, as you did with the other patterns. Function currying is a technique of transforming a function that takes multiple arguments into a chain of functions, each taking a single argument. The following is the new interface of Calculators that only uses functions:

14 Martin Fowler, “Inversion of Control Containers and the Dependency Injection Pattern,” Jan. 23, 2004, http://martinfowler.com/articles/injection.html.

trait Calculators {
  type Calculator = String => Double
  protected val findCalculator: String => Calculator
  protected val calculate: (Calculator, String) => Double =
    (calculator, productId) => calculator(productId)
}

The type Calculator is an alias for a function that takes a product ID and returns the price. The findCalculator function determines the calculator for a given price type. And finally, calculate is a function that takes an instance of Calculator and a productId and calculates the price of the product. This is quite similar to the interfaces you designed earlier, but this time with only functions.

You can turn the calculate function into a curried function by invoking the curried method defined for all function types in Scala:

val f: Calculator => String => Double = calculate.curried

The curried method takes a function of n parameters and transforms it into a chain of n functions of one parameter each. In this case it creates a function that takes a Calculator and returns a function that calculates the price for a productId. The benefit is that you now have a function that knows how to calculate the price but hides the Calculator from its users. The following is an example test implementation of the Calculators:
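
A sketch of such a test implementation, assuming a priceCalculator helper like the one described next, might look like this.

trait FakeCalculators extends Calculators {
  // Fake calculators keyed by price type; a Map is itself a String => Calculator
  protected val findCalculator: String => Calculator = Map(
    "costPlus" -> ((productId: String) => 0.0),
    "externalPriceSource" -> ((productId: String) => 0.0))

  // Returns a pricing function for the given price type; the Calculator
  // dependency stays hidden inside the returned closure
  def priceCalculator(priceType: String): String => Double =
    calculate.curried(findCalculator(priceType))
}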

The priceCalculator method returns a function that takes the productId and returns the price of the product that encapsulates the dependencies used to compute the price. This is an example of how you can do dependency injection using functional programming.

10.4.6. Using a dependency injection framework: Spring

Scala’s abstract members, self types, and mixins provide more abstraction techniques than are available in Java, but DI frameworks provide the following additional services that those abstraction techniques don’t:

  • They cleanly separate object creation and initialization from the business logic. These frameworks provide a separate lifecycle for creating dependencies as part of application initialization. This way, the wiring between components stays out of your code.
  • These frameworks help you to work with various other frameworks. For example, if you’re planning to use existing Java web frameworks, then a DI framework will help to inject your Scala objects as dependencies.
  • Most of the DI frameworks, like Spring (www.springsource.org) and Guice (http://code.google.com/p/google-guice/), provide aspect-oriented programming (AOP)[15] support to handle cross-cutting behaviors like transaction and logging out of the box.

    15 “Aspect-oriented programming,” http://en.wikipedia.org/wiki/Aspect-oriented_programming.

The good news is you can use any Java DI framework with your Scala project. This section shows you how to use the Spring framework as a DI framework in your Scala project. (I won’t explain how the Spring dependency injection framework works, but if you’re new to it read the tutorials[16] available on the Spring framework website.)

16 “The IoC container,” Spring Framework, http://static.springsource.org/spring/docs/2.5.x/reference/beans.html.

The Spring framework allows you to configure dependencies in multiple ways. I’ll show you how to configure it using the external XML configuration file. In the Spring world, all the dependencies are called beans, because all the objects follow the JavaBean[17] convention. According to this convention a class should provide a default constructor, and class properties should be accessible using get, set, and is methods.

17 “JavaBeans,” http://en.wikipedia.org/wiki/JavaBean.

To make a property a bean property, Scala provides a handy annotation called @BeanProperty. This annotation tells the Scala compiler to generate getter and setter methods automatically so you don’t have to worry about it. The following listing shows the beanified version of the CalculatePriceService.

Listing 10.5. Bean version of CalculatePriceService
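
The listing isn’t reproduced here; a sketch of the beanified service might look like the following (the calculate logic is the same placeholder used earlier).

import scala.beans.BeanProperty

class CalculatePriceService {
  // @BeanProperty tells the Scala compiler to emit JavaBean-style getters
  // and setters, so a DI framework like Spring can inject the calculators
  @BeanProperty var costPlusCalculator: Calculator = _
  @BeanProperty var externalPriceSourceCalculator: Calculator = _

  def calculate(priceType: String, productId: String): Double = priceType match {
    case "costPlus"            => costPlusCalculator.calculate(productId)
    case "externalPriceSource" => externalPriceSourceCalculator.calculate(productId)
  }
}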

This version of CalculatePriceService looks almost identical to the version from listing 10.3, except that here both costPlusCalculator and externalPriceSourceCalculator are declared as bean properties using the @BeanProperty annotation. The @BeanProperty annotations will generate the following getters and setters for the costPlusCalculator and externalPriceSourceCalculator properties:

def getCostPlusCalculator: Calculator = this.costPlusCalculator
def setCostPlusCalculator(c: Calculator) { this.costPlusCalculator = c }

def getExternalPriceSourceCalculator: Calculator =
     this.externalPriceSourceCalculator
def setExternalPriceSourceCalculator(c: Calculator) {
     this.externalPriceSourceCalculator = c
}

Both price calculators are already beans because they provide a default constructor. The only missing piece is to wire up dependencies to the service, and in Spring you can do this by specifying a configuration file, as shown in the next listing.

Listing 10.6. Spring application context file
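A sketch of what this configuration might contain (the fully qualified class names are placeholders; use the packages where your calculators and service actually live):

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-2.5.xsd">

  <!-- The calculator beans; default constructors, so no extra configuration -->
  <bean id="costPlusCalculator" class="pricing.CostPlusCalculator"/>
  <bean id="externalPriceSourceCalculator"
        class="pricing.ExternalPriceSourceCalculator"/>

  <!-- The service, wired to the calculators through its bean properties -->
  <bean id="calculatePriceService" class="pricing.CalculatePriceService">
    <property name="costPlusCalculator" ref="costPlusCalculator"/>
    <property name="externalPriceSourceCalculator"
              ref="externalPriceSourceCalculator"/>
  </bean>
</beans>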

This is a standard Spring configuration file, where the calculators and the CalculatePriceService are defined and their dependencies wired together. Save this application-context.xml file in the src/main/resources folder of your SBT project. This is the main configuration file for the pricing application, and it will be used to initialize the application beans. You could also keep a test version of the configuration file under src/test/resources that refers to fake implementations of the calculators, or you could create fake instances inside the test and inject them. You’ll use the latter approach to see how to inject fake versions of the calculators. But first add the following dependencies to your SBT project file:

val spring = "org.springframework" % "spring" % "2.5.6"
val springTest = "org.springframework" % "spring-test" % "2.5.6"
val junit = "junit" % "junit" % "4.4" % "test"

val junitInterface = "com.novocode" % "junit-interface" % "0.5" % "test"

Both the Spring framework and the Spring test framework are added as dependencies. Because you haven’t learned about Specs yet, let’s use JUnit as the testing tool. Again, junitInterface is the testing interface for SBT so that it can run the JUnit tests.

To test CalculatePriceService, you can use Spring to configure the beans and override the appropriate calculator inside the test. To use Spring with the JUnit test, add the following annotations along with the test class declaration:

@RunWith(classOf[SpringJUnit4ClassRunner])
@ContextConfiguration(
     locations = Array("classpath:/application-context.xml"))

The RunWith annotation tells JUnit to run the test with Spring’s test runner, which gives the test access to the beans instantiated from the application context file. The ContextConfiguration annotation lets you specify which configuration file to use to initialize the beans. If you have a test version of the configuration file, specify that instead. Inside the test, if you declare a variable of type CalculatePriceService with the @Resource annotation, Spring will create an instance and inject it into the test. Here’s the skeleton JUnit test with Spring configuration:

@RunWith(classOf[SpringJUnit4ClassRunner])
@ContextConfiguration(locations =
     Array("classpath:/application-context.xml"))
class CalculatePriceServiceTest {

  @Resource
  var calculatePriceService: CalculatePriceService = _
}

The instance of CalculatePriceService will be created by the Spring framework and injected inside the test for you. At this point, this test class is set up for testing the calculate price service. The following is the JUnit test to check that the calculate price service uses the cost-plus calculator to calculate price:

@Test
def shouldUseCostPlusCalculatorWhenPriceTypeIsCostPlus() {
  val fakeCostPlusCalculator = new Calculator {
    def calculate(productId: String) = 2.0D
  }
  calculatePriceService.setCostPlusCalculator(fakeCostPlusCalculator)
  val price = calculatePriceService.calculate("costPlus", "some product")
  assertEquals(2.0D, price)
}

The real implementation of the costPlusCalculator is replaced by a fake implementation. The test is passing "costPlus" as a price type, and according to the logic (see listing 10.5) it will use the cost-plus calculator. Similarly, the following is the test for an external price source calculator:

@Test
def testShouldReturnExternalPrice() {
  val fakeExternalPriceSourceCalculator = new Calculator {
     def calculate(productId: String) = 5.0D
  }

  calculatePriceService.setExternalPriceSourceCalculator(
     fakeExternalPriceSourceCalculator)
  val price = calculatePriceService.calculate("externalPriceSource",
     "dummy")
  assertEquals(5.0D, price)
}

In a similar fashion, the real implementation is swapped with a fake version before invoking the service. The following listing shows the complete JUnit test.

Listing 10.7. Complete unit testing using Spring dependency injection
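Assembling the pieces shown above, the complete test might look like the following sketch. The imports are the standard JUnit 4, Spring TestContext, and javax.annotation ones, and the three-argument assertEquals overload is used for comparing doubles:

import javax.annotation.Resource
import org.junit.Test
import org.junit.Assert.assertEquals
import org.junit.runner.RunWith
import org.springframework.test.context.ContextConfiguration
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner

@RunWith(classOf[SpringJUnit4ClassRunner])
@ContextConfiguration(locations =
     Array("classpath:/application-context.xml"))
class CalculatePriceServiceTest {

  // Created and injected by the Spring TestContext framework
  @Resource
  var calculatePriceService: CalculatePriceService = _

  @Test
  def shouldUseCostPlusCalculatorWhenPriceTypeIsCostPlus() {
    val fakeCostPlusCalculator = new Calculator {
      def calculate(productId: String) = 2.0D
    }
    calculatePriceService.setCostPlusCalculator(fakeCostPlusCalculator)
    val price = calculatePriceService.calculate("costPlus", "some product")
    assertEquals(2.0D, price, 0.0)
  }

  @Test
  def testShouldReturnExternalPrice() {
    val fakeExternalPriceSourceCalculator = new Calculator {
      def calculate(productId: String) = 5.0D
    }
    calculatePriceService.setExternalPriceSourceCalculator(
      fakeExternalPriceSourceCalculator)
    val price = calculatePriceService.calculate("externalPriceSource", "dummy")
    assertEquals(5.0D, price, 0.0)
  }
}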

You annotate your JUnit test with RunWith so it can see all the Spring beans, and with ContextConfiguration to specify the configuration file used to create the beans. Note that if you have a test version of the configuration, you should specify it here; that way you don’t have to create a fake implementation per test. In large projects it’s recommended to have a test version of the configuration file where you configure all your beans with fake implementations of their dependencies. As you can see, there’s not much to change to use Scala classes and traits with the Spring framework, and the same is true for the other dependency injection frameworks available in Java. There’s some up-front work to use a DI framework, but for large projects it’s worth it, unless you’re using a Scala framework that provides native support for managing dependencies.

I covered a lot of ground in this section, and I’m sure the techniques you’ve learned here will help you to write more decoupled and testable systems in Scala.

The next section covers another testing tool, called Specs. JUnit was great for getting up and running quickly with testing Scala code, but now it’s time to get used to a Scala-based testing framework that’s more expressive and easier to use.

10.5. Behavior-driven development using Specs2

Behavior-driven development (BDD) is about implementing an application by describing its behavior from the point of view of its stakeholders. So far this chapter has been talking about test-driven development, so why bother discussing BDD? How is it different from TDD?

The answer is that it isn’t different. BDD[18] is doing TDD the right way. The first thing to notice is that the definition of BDD doesn’t talk about testing at all. This is deliberate, because one pitfall of doing TDD is that some people put more emphasis on testing than on solving the business problem. BDD puts the emphasis back on solving business problems, and it recommends looking at the application from the stakeholder’s perspective. Doing BDD on a project has the following two important outcomes:

18 David Chelimsky, et al., The RSpec Book: Behaviour-Driven Development with RSpec, Cucumber, and Friends, Pragmatic Bookshelf, 2010, www.pragprog.com/book/achbd/the-rspec-book.

  • Delivering value quickly: Because you’re focused on viewing the application from the stakeholder’s point of view, you understand and deliver value quickly. It helps you to understand the problem and recommend appropriate solutions.
  • Focus on behavior: This is the most important improvement because at the end of the day, the behaviors you implement are the ones your stakeholders want. A focus on behavior also reduces the effort spent on up-front design, analysis, and documentation, which rarely adds value to the project.

To get developers and stakeholders on the same page, you need a ubiquitous language,[19] a common language everybody speaks when describing the behavior of an application. You also need a tool that lets you express these behaviors and write automated specifications that assert the behavior.

19 “UbiquitousLanguage,” http://martinfowler.com/bliki/UbiquitousLanguage.html.

Note

I’ve been using test and specification synonymously, but specification is a better way to talk about behavior with stakeholders. Think of a specification as a list of examples.

In BDD, you still follow the red-green-refactor cycle during your development. The only thing that changes is the way you look at these tests or specifications. It’s time to see some BDD in action, and the next section introduces you to the BDD tool that most Scala developers use: Specs2.

10.5.1. Getting started with Specs2

Specs2[20] is the BDD library for Scala, and it’s written in Scala. At the time of this writing it’s the de facto BDD library used by Scala developers. The easiest way to get started with Specs is to add it as a dependency to your SBT project. Add the following to your SBT build.sbt file:

20 Specs 2, http://etorreborre.github.com/specs2/.

scalaVersion := "2.10.0"
libraryDependencies += "org.specs2" %% "specs2" % "1.13" % "test"

If you’re planning to use some other version of Specs, make sure it’s compatible with the Scala version set in your SBT project. Once you reload and update your project, you’re ready to use Specs. The best part is that SBT knows how to run Specs specifications natively. Write the first Specs specification using the same calculate price service you saw in the previous section. Create the empty specification for CalculatePriceService:

package scala.book
import org.specs2.mutable._
class CalculatePriceServiceSpecification extends Specification

To declare a Specs specification, you always have to import org.specs2.mutable._ and extend the Specification trait. Next, specify the behaviors of the calculate price service:

package scala.book
import org.specs2.mutable._
class CalculatePriceServiceSpecification extends Specification {
  "Calculate price service" should {
    "calculate price for cost plus price type" in {}
    "calculate price for external price source type" in {}
  }
}

You’ve added a structure to the specification. First you use the should method to define the system, followed by the description of two behaviors of the service. The Specs framework adds methods like should and in to the String class using implicit conversion so that your specification can become more expressive and readable. When you run this specification using the SBT test action, you’ll see the output shown in figure 10.4.

Figure 10.4. Specs output of running a specification

If you have a color-enabled terminal window, Specs shows the test output in different colors. Because the specification isn’t implemented yet, the output in figure 10.4 is yellow. (If it were implemented, green would indicate a passed test, red a failure.)

Implement these pending specifications using the cake pattern implementation of the service. In the cake pattern section earlier in this chapter, you created two versions of CalculatePriceService: one that uses the real calculators and another that uses fake implementations of the calculators for testing. Here’s the test version of CalculatePriceService:

trait TestPricingSystem
     extends CalculatePriceServiceComponent with Calculators {
  class StubCostPlusCalculator extends CostPlusCalculator {
    override def calculate(productId: String) = 5.0D
  }
  class StubExternalPriceSourceCalculator
     extends ExternalPriceSourceCalculator {
    override def calculate(productId: String) = 10.0D
  }
  val costPlusCalculator = new StubCostPlusCalculator
  val externalPriceSourceCalculator = new StubExternalPriceSourceCalculator
}

Both calculators return a hardcoded price. This is perfectly fine because the focus right now is on CalculatePriceService, and the assumption is that both calculators work correctly. To use this version of the pricing system, you need to mix this in with the specification, as shown in the following listing.

Listing 10.8. Specification for CalculatePriceService
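A sketch of what this specification might look like, built from the TestPricingSystem defined above (the expected prices match the stub calculators’ hardcoded values):

import org.specs2.mutable._

class CalculatePriceServiceSpecification extends Specification
     with TestPricingSystem {

  "Calculate price service" should {
    "calculate price for cost plus price type" in {
      val service = new CalculatePriceService
      service.calculate("costPlus", "some product") must beEqualTo(5.0D)
    }
    "calculate price for external price source type" in {
      val service = new CalculatePriceService
      service.calculate("externalPriceSource", "some product") must beEqualTo(10.0D)
    }
  }
}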

The TestPricingSystem trait, which provides the fake implementations of the calculators, is mixed in with the specification so that CalculatePriceService uses the stubs. The expectation uses Specs’ built-in matcher beEqualTo (one of its overloaded versions). The must method is again added by Specs through implicit conversions on almost all types to make the specification more readable. This example demonstrates how easy it is to write a good, expressive specification with Specs. The next section explores the Specs features available to you for writing expressive specifications.

10.5.2. Working with specifications

To work effectively with Specs, you need to get comfortable with specifications and the available matchers. Matchers are how you add expectations to your specification; beEqualTo and be_== are examples of matchers. Specs ships with many built-in matchers, and you can find the complete list in the Specs documentation.[21]

21 Specs, MatchersGuide, “How to add expectations to your examples,” http://code.google.com/p/specs/wiki/MatchersGuide.
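For instance, here are a few of the built-in matchers in action (a small illustrative fragment, not taken from the Specs documentation):

"a few built-in matchers" in {
  1 + 1 must be_==(2)
  "Scala" must beEqualTo("Scala")
  List(1, 2, 3) must contain(2)
  Some(5) must beSome(5)
}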

You saw a sample specification in the previous section. Now I’ll show you a few variations. Depending on the kind of behavior you’re describing, pick the appropriate one.

The basic format of the Specs specification is that you extend the Specification trait and then provide examples:

package variousspecs
import org.specs2.mutable._
object MySpec extends Specification {
  "example1" in {}
  "example2" in {}
}

One way to look at a specification is as a group of examples that describe the behavior of your application. But typically, when writing specifications you’ll have a component for which you’re describing the behavior; this is your system under specification. You can organize the examples as a group for a system under specification:

object SUSSpec extends Specification {
  "my system" should {
    "do this" in {}
    "do that" in {}
  }
}

You can also nest examples when you want to refine them. For instance, you may want to add an example that describes the behavior of the cost-plus calculator when the product ID is empty. According to the stakeholder, the price in this case should be 0.0. Here’s how the example looks:

"calculate price for cost plus price type" in {
   val service = new CalculatePriceService
   val price: Double = service.calculate("costPlus", "some product")
   price must beEqualTo(5.0D)

   "for empty product id return 0.0" in {
      val service = new CalculatePriceService
      service.calculate("costPlus", "") must beEqualTo(0.0D)
   }
 }

You’re nesting the example for the special case when the product ID is empty. By default, examples run in isolation and don’t share any state, which means you have to take extra measures to share variables and state between them. Because shared state between examples is a bad idea, I’m not going to cover that here.

Another interesting way to declare specifications in Specs is to use data tables.[22] Data tables allow you to execute an example with a set of test data. For example, if you have to describe how the cost-plus rule calculates price, one example with a single sample of data isn’t enough. To describe the behavior properly, you need a set of data that exercises the rule. Specs data tables come in handy in these cases. They let you specify your sample data in a table format like the following:

22 Specs, “How to use Data Tables,” updated March 30, 2010, http://code.google.com/p/specs/wiki/AdvancedSpecifications.

"cost plus price is calculated using 'cost + 20% of cost + given service
  charge' rule" in {

   "cost" | "service charge" | "price" |>
   100.0  !   4              ! 124     |
   200.0  !   4              ! 244     |
   0.0  !     2              ! 2       | {
    (cost, serviceCharge, expected) =>
      applyCostPlusBusinessRule(cost, serviceCharge) must be_==(expected)
    }
}

The example describes the rule, and the data table captures the data you need to verify the applyCostPlusBusinessRule method. The first row of the table is the header and is there for readability. The following rows hold sample data, followed by a closure that’s invoked for each row of data. Inside the closure you evaluate the applyCostPlusBusinessRule method and check the expected result. To use data tables in your specification, you have to mix in the DataTables trait, as in the sketch that follows. The > at the end of the header row (written |>) is also required; think of it as a play button that makes the table executable as part of the example.
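For reference, mixing in the trait looks like this (the specification class name is just an example):

import org.specs2.mutable._
import org.specs2.matcher.DataTables

class CostPlusRuleSpecification extends Specification with DataTables {
  // the data-table example above goes here
}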

Specs data tables are a great way to create examples with sets of example data. You can also use ScalaCheck with Specs and have it generate sample data for your example.

The next section explores how automated testing fits into the asynchronous messaging world. In chapter 9 you learned about actors as a specific example of messaging systems. Now let’s see how to write tests around them.

10.6. Testing asynchronous messaging systems

So far this chapter has talked about testing or created examples for systems that are synchronous, where the test invokes the system and control comes back to the test when the system is done performing an action. But in asynchronous fire-and-forget systems, the control will come back to the test while the system is executing. From the test, you don’t get the feedback you’re looking for. To overcome this challenge, developers sometimes extract the business logic outside of the messaging layer (always a good idea) and test it separately. One drawback with this kind of approach is that you’re no longer testing your system end to end. For example, to verify that one actor is sending a message to another actor after some action, you need to write an integration test that sends a message to one actor and waits for the reply. The general rule for writing integration tests around asynchronous systems is to detect invalid system state or wait for some expected notification with a timeout.

Writing automated tests around asynchronous systems is a relatively new practice, and the tools for it are still maturing. One tool worth mentioning here is Awaitility,[23] which provides a nice DSL for testing asynchronous systems. Let’s see Awaitility at work in a simple example. Imagine that you have an order-placing service that saves orders to the database asynchronously, and you place an order by sending a PlaceOrder message. Here’s the dummy ordering service implemented as an actor:

23 Awaitility, http://code.google.com/p/awaitility/.

package example.actors

import scala.actors.Actor

case class PlaceOrder(productId: String, quantity: Int, customerId: String)

class OrderingService extends Actor {
  def act = {
    react {
      case PlaceOrder(productId, quantity, customerId) =>
        // save the order to the database asynchronously (omitted in this dummy service)
    }
  }
}

Inside the specification you’ll use Awaitility’s await method to wait until the order is saved into the database. If the order isn’t saved in the database, then you know that something went wrong while processing the message. Here’s the specification for the ordering service:
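A sketch of what such a specification might look like, assuming a hypothetical orderSavedInDatabase helper that polls the data source (stubbed out here) and Awaitility’s Java API:

import java.util.concurrent.Callable
import com.jayway.awaitility.Awaitility.await
import org.specs2.mutable._
import example.actors._

class OrderingServiceSpecification extends Specification {

  "Ordering service" should {
    "save the order when a PlaceOrder message is received" in {
      val orderingService = new OrderingService
      orderingService.start()
      orderingService ! PlaceOrder("product-1", 2, "customer-42")

      // Poll until the order shows up in the database (default timeout: 10 seconds)
      await().until(orderSavedInDatabase("customer-42"))
      success
    }
  }

  // Hypothetical helper: a real test would query the data source
  // for an order belonging to the given customer.
  def orderSavedInDatabase(customerId: String): Callable[java.lang.Boolean] =
    new Callable[java.lang.Boolean] {
      def call: java.lang.Boolean = true  // replace with a real database check
    }
}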

The preceding example sends an asynchronous message to the ordering service and waits until the order is saved in the database. The default timeout for Awaitility is 10 seconds, and you can set your own by invoking an overloaded version of await. Inside orderSavedInDatabase, you could go to the data source and check whether the order has been saved for a given customer ID.

Awaitility doesn’t provide any special infrastructure for testing asynchronous systems, but it does make your examples readable.

10.7. Summary

This chapter covered an important topic that is critical to developing high-quality software. Picking up a new programming language and trying to use it to build a large application is difficult. And one of the common hurdles is to find a way to write automated tests in the new language or programming environment. This chapter gave you that introduction and introduced tools you can use in Scala projects.

First I introduced you to automated testing and how you can generate automated tests using ScalaCheck. You learned how to define specifications in ScalaCheck and create custom test data generators. ScalaCheck is a great way to get test protection for your Scala project.

You learned about agile software development and the role test-driven development plays in it. You also explored how TDD helps in building reliable software and in evolving the design. To use TDD as a practice in a Scala project, you need tool support. I explained how to set up a continuous integration environment and use SBT as a build tool, and I listed some of the common tools used by Scala developers.

Building applications using automated tests requires that your design be testable. One of the critical properties of a testable design is inversion of control, a technique used in Java, Ruby, and other languages. Scala, being both object-oriented and functional, has more options for creating abstractions. Section 10.4 showed you ways of doing dependency injection in Scala. Concepts like self types and abstract members aren’t restricted to dependency injection; you can use the same abstraction techniques to build reusable components in Scala.

The most common mistake made by developers when doing TDD is putting focus on testing, whereas the most important thing is the behavior of the application. BDD fixes that confusion by putting the focus back on behavior and customer collaboration. I introduced you to a tool called Specs that allows you to write expressive specifications. I mentioned that you can use JUnit to test your Scala code, but noted that it isn’t recommended. Using Scala specification/testing tools will improve the readability of your tests and will provide better integration with other Scala tools.

On the surface, writing automated tests looks difficult, but I’m confident you don’t feel that way anymore. With Scala’s rich ecosystem of tools, it’s easy to get started with automated tests or specifications, and you don’t have any excuse not to use them.

The next chapter discusses functional programming. You’ve seen some functional programming features of Scala in previous chapters and examples, but chapter 11 ties them together with functional programming concepts so you can write more reliable and correct Scala programs.
