Chapter 7. Reporting JUnit results

This chapter covers

  • Logging from a Base Test Case
  • Logging with Log4Unit
  • Reporting results with Ant, including <junitreport>
  • Customizing Ant’s test run reports
  • Using a custom TestListener
  • Counting assertions

This chapter covers various tools and techniques available for reporting JUnit test results, including extending JUnit to write your own custom reporting mechanisms. JUnit by itself provides two simple mechanisms for reporting test results: simple text output to System.out and its famous Swing and AWT “green bar” GUIs (the AWT GUI being a vestige of JUnit’s Java 1.1 support). The results reporting that JUnit provides out of the box is useful for developers at their desktops, but that is about it. You need to extend JUnit or use it with another tool if you want automated test reports in formats such as XML or HTML.

There are a slew of JUnit extensions out there, many of which revolve around subclassing TestCase. These extensions usually can be executed with the built-in JUnit test runners or with Ant’s <junit> task with no extra work. Ideally you want reporting solutions that are reusable across any of these extensions; therefore, you should extend JUnit reporting by implementing or extending standard APIs in Ant or JUnit.

JUnit is most often executed in one of three contexts, each of which provides different reporting features and opportunities for extension:

  • IDE
  • Command line
  • Ant build script (or, increasingly, Maven target)

 

Note

Maven is a build tool that grew out of many people’s collective experience using Ant to build, test, and manage Java-based projects. Maven promotes the concept of a project object model, which it creates through a standardized project deployment descriptor. Maven is firmly based on Ant and Jelly, an XML-based scripting language. For more information about Maven, visit maven.apache.org.

 

Some IDEs launch one of JUnit’s built-in GUI test runners, while others have their own GUI test runner implementations with added features. Command-line test runners output results to a console, which can be redirected to a file. Ant’s <junit> and <junitreport> tasks together produce XML results files and transform them into HTML reports.

Different report styles and formats serve different purposes and users. Managers like to see reports in their browsers or maybe on paper printouts in bug triage meetings. Developers and QA engineers like results displayed graphically in their IDEs so they don’t have to “shell out” to the command line to run JUnit in a separate window.

7.1. Using a Base Test Case with a logger

Problem

You want to perform logging from within your test cases.

Background

JUnit automates evaluating assertions so that developers don’t waste time routinely verifying the output of test methods. Logging from test cases should not be used as a way to verify that tests have passed. Let the JUnit framework handle that. But there are several situations where logging messages or even test results to a file, console, or other device is useful in JUnit testing. For example:

  • Temporary debugging, which can be turned on or off by configuration
  • Auditing and storage of test results
  • Auditing of the test environment or data used during a test run
  • Tracking the progress of tests that take a long time to run
  • Trapping customized test results for import into a test management system

You can always configure and instantiate a logger from within any test case as you would from any other Java class. But if you are going to write many tests (especially if you are in a team environment, sharing common test infrastructure such as base test classes), it is practical to write a base test case class that provides the logging configuration and instantiation, and then subclass the log-providing class as desired. If you have a common base class that everyone uses for a particular subsystem or project, you can include the logging configuration as part of that class.

Recipe

Create a base test case class that extends junit.framework.TestCase and provides a configured logger to its subclasses.

You have several options for finding and using a logger:

  • Write your own logging mechanism
  • Use a logger written by someone on your staff
  • Use a logging library from a third party, such as Log4J, Avalon’s LogKit, or Jakarta Commons Logging
  • Use the java.util.logging package in JDK 1.4

The pattern for setting up a logger in a base test class is the same, regardless of which logger you choose:

  • Extend TestCase with a class named similarly to BaseTestCase, which might include other commonly used testing utilities (perhaps a JNDI lookup() utility, or some other custom logic for locating test data).
  • Set up a default configuration for the logger and initialize it in the BaseTestCase, and make the preconfigured logger accessible to subclasses through a protected variable, an accessor to retrieve the logger instance, or through inherited log methods.
  • Make your test cases extend BaseTestCase so they can use the logger as needed.

Listing 7.1 shows a BaseTestCase class that configures two logger instances: one for static contexts, such as static initializer blocks and suite() methods, and one for non-static contexts, such as setUp(), tearDown(), and test methods. Two separate loggers for static and non-static contexts may be overkill, but it lets the example show two different approaches to setting up a logger. The example uses Apache Avalon’s LogKit (any 1.x version compiles and runs with this example). In terms of features and ease of use, LogKit is a full-featured, flexible logging kit that sits somewhere between JDK 1.4’s java.util.logging package and Jakarta’s premier logger, Log4J. You can read more about LogKit and download the library at avalon.apache.org/logkit/.

 

Note

Avalon is a Java platform for component-oriented programming, including a core framework, utilities, tools, components, and containers, hosted at Apache.

 

Listing 7.1. BaseTestCase configured with Avalon LogKit
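In outline, the class looks something like the following sketch (the logger category names and the reliance on LogKit’s default hierarchy are illustrative; the full listing demonstrates two different approaches to configuring the loggers):

   import junit.framework.TestCase;
   import org.apache.log.Hierarchy;
   import org.apache.log.Logger;

   public abstract class BaseTestCase extends TestCase {

       // Logger for static contexts such as static initializer
       // blocks and suite() methods
       protected static final Logger STATIC_LOG =
           Hierarchy.getDefaultHierarchy().getLoggerFor("test.static");

       // Logger for non-static contexts such as setUp(), tearDown(),
       // and the test methods themselves
       protected final Logger log =
           Hierarchy.getDefaultHierarchy().getLoggerFor(getClass().getName());

       public BaseTestCase(String name) {
           super(name);
       }
   }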

The most important thing about the code example in listing 7.1 is the general technique of embedding a shared logger instance in a base test case class, not the specifics of using any particular logging implementation.

Discussion

Loggers such as LogKit, Log4J, and the Java 1.4 logging API allow you to configure logging on a per-class or per-package basis, by log level or by named categories. Such configurability is useful for enabling logging for a particular subsystem or class hierarchy and helping isolate log messages from a particular set of tests or type of log message.

The advantage to extending a BaseTestCase (for logging and other utilities it might offer) is that subclasses can access the logger with no extra work. The drawback to any subclassing strategy is that it ties the subclasses to the parent class through inheritance. An alternative to subclassing is to write a logging utility class that configures and instantiates a shared logger, and then use that utility class from within your tests. This tack decouples your test case classes from a common base class added just for logging. But it is so common in practice to evolve a useful, in-house Base Test Case of some kind, that it is a good recipe to have in your personal cookbook.
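Such a utility can be as small as the following sketch (again using LogKit; the class and method names are illustrative):

   import org.apache.log.Hierarchy;
   import org.apache.log.Logger;

   public final class TestLogging {

       private TestLogging() {
       }

       // hand out a shared, preconfigured logger for the given category
       public static Logger getLogger(String category) {
           return Hierarchy.getDefaultHierarchy().getLoggerFor(category);
       }
   }

Any test case, regardless of its superclass, can then declare a field such as private final Logger log = TestLogging.getLogger(getClass().getName()); without extending a common base class.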

Related

  • 7.2—Using Log4Unit

7.2. Using Log4Unit

Problem

You want a ready-made solution for logging messages from within your test cases.

Background

Log4Unit is an extension of JUnit’s TestCase class that gives you Log4J-based logging functionality with the least amount of effort. It provides the following features:

  • Log4Unit-derived test cases default to logging to System.out if the Log4J library is not present in the class path at runtime.
  • Log4Unit configures and instantiates a Log4J logger instance and implements utility logging methods such as info(Object message) and debug(Object message, Throwable t) for you.
  • Log4Unit comes with a customized Swing-based test runner that shows log statements and test summary information in a dialog box that pops up with the push of a button.

Recipe

Use Log4Unit (www.openfuture.de/Log4Unit/) to integrate your tests with the Log4J logger. Log4Unit is free, open source, and licensed under the Lesser GPL. The latest version as of this writing is v0.2.0. Download the .zip or .gzip file and unpack it into a new directory, such as log4unit-020.

You also need Log4J (http://logging.apache.org/log4j) to see the features of Log4Unit. The latest release of Log4J as of this writing is v1.2.8.

To use Log4Unit:

  • Extend your TestCases from junit.log4j.LoggedTestCase.
  • Write a Log4J configuration file (it can be in Java properties or XML format—see the Log4J documentation for details), or place the directory containing Log4Unit’s provided log4j.properties in your class path (probably the src/ directory where you unzipped Log4Unit).
  • Add log4j-1.2.8.jar and log4unit-0.2.0.jar to your usual test class path.
  • As another option, you can use junit.logswingui.TestRunner as your GUI test runner if you want to have access to the test summary: java junit.logswingui.TestRunner [-noloading] [TestCase].

Listing 7.2 demonstrates some basic features of Log4Unit by showing you the simplest type of test you can write with Log4Unit. Note that we import and extend junit.log4j.LoggedTestCase. The LoggedTestCase superclass configures and instantiates a Log4J logger instance and implements utility logging methods such as info(Object message) and debug(Object message, Throwable t) for you. All we do in this example apart from extending the base class is call the inherited info(Object message) log method twice and debug(Object message) once to demonstrate the basic functionality and default logging configuration.

Listing 7.2. Log4UnitExample.java
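The test looks something like the following sketch (the constructor and setUp() details may differ slightly in the actual listing; the log messages match the console output shown below):

   package junit.cookbook.reporting.log4unit;

   import junit.log4j.LoggedTestCase;

   public class Log4UnitExample extends LoggedTestCase {

       public Log4UnitExample(String name) {
           super(name);
       }

       protected void setUp() {
           // override the default setUp() message with our own
           info("** SETUP ENTERED **");
       }

       public void testConnection() {
           info("Initiating connection to server now");
           debug("This DEBUG message only appears if the log level allows it");
           assertTrue(true);   // a real test would assert something useful here
       }
   }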

First let’s run Log4UnitExample from the command line using JUnit’s built-in text-based test runner and see the resulting output.

 

Note

The Log4J jar is needed to run this example, but not to compile it. Also, some of these messages are generated at the DEBUG log level, so change the level in your log4j.properties file from INFO to DEBUG to see them all.

 

   java -cp lib\junit.jar;lib\log4j-1.2.8.jar;lib\log4unit-0.2.0.jar;classes
     junit.textui.TestRunner junit.cookbook.reporting.log4unit.Log4UnitExample

   30 Jun 2003 22:01:42,663 - Log4J successfully instantiated.
   .30 Jun 2003 22:01:42,693 - ** SETUP ENTERED **
   30 Jun 2003 22:01:42,713 - > entered
      testConnection(junit.cookbook.reporting.log4unit.Log4UnitExample)
   30 Jun 2003 22:01:42,733 - Initiating connection to server now
   30 Jun 2003 22:01:42,753 - Tear down finished.

   Time: 0.08

   OK (1 test)

You can see that the default logging configuration prints out date and time (to the millisecond) to the console, followed by a successful start-up message and default log messages for setUp() and tearDown(). We overrode setUp() with our own log message and let tearDown() print its default message. The two INFO messages we logged show up in the middle, displaying the test being executed and a message.

If you look in the directory from where you executed this command, you see a file named bugbase-test.log. Log4Unit produces this log and the console output because it uses Log4J to handle the calls to the various logging priorities (DEBUG, INFO, WARN, ERROR, and FATAL). In the default configuration the console and the log contain the same information, but you can configure Log4J to customize the output for each location.
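For reference, a log4j.properties file that reproduces this default behavior might look roughly like the following (the appender names and date pattern are illustrative):

   # send everything at DEBUG and above to both the console and a log file
   log4j.rootLogger=DEBUG, console, logfile

   log4j.appender.console=org.apache.log4j.ConsoleAppender
   log4j.appender.console.layout=org.apache.log4j.PatternLayout
   log4j.appender.console.layout.ConversionPattern=%d - %m%n

   log4j.appender.logfile=org.apache.log4j.FileAppender
   log4j.appender.logfile.File=bugbase-test.log
   log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
   log4j.appender.logfile.layout.ConversionPattern=%d - %m%n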

Log4J supports a plethora of possibilities for increasing log output and customizing the logging configuration. Some of it is useful for tests, such as source line numbers and elapsed time recording. But because these are features of Log4J and not Log4Unit, we won’t delve into them here. Please refer to the Log4J web site for more information (http://logging.apache.org/log4j).

Another feature of Log4Unit is its customized Swing-based test runner with its test protocol feature. Running the same example with the junit.logswingui.TestRunner, we see a dialog box with a new Protocol button as shown in figure 7.1.

Figure 7.1. junit.logswingui.TestRunner showing the Protocol button on the right

When you press the Protocol button a dialog box pops up with log statements and test summary information, as shown in figure 7.2.

Figure 7.2. The Log4Unit Test Protocol dialog box, showing log statements and test summary information

Discussion

A helpful feature of Log4Unit is that test cases default to logging to System.out if the Log4J library is not present in the class path at runtime. If you are wondering how it does this, LoggedTestCase’s constructor discovers whether Log4J is available and sets a boolean flag accordingly. There is an if statement in each log method that passes the log message and level to System.out when the flag is false.

When running the junit.logswingui.TestRunner, you might see a large number of Log4J errors on the console. These errors describe class loading problems caused by trying to reload Log4J classes that prefer not to be reloaded,[1] so if you see these messages the first time you use this test runner, you have two options:

1 There are similar problems when trying to reload some JDBC driver classes within tests, an issue we deal with in more detail in chapter 10, “Testing and JDBC.”

  • Use the -noloading option, or
  • Add the Log4J package (org.apache.log4j.*) to the “excluded classes” list as we describe in recipe 8.6, “The graphical test runner does not load your classes properly”.[2]

    2 If you run your tests from Ant’s <junit> tasks, then this option doesn’t exist, but that is no problem because Ant uses its own test runner.

You should be aware of the limitations to using Log4Unit:

  • Log4Unit only supports Log4J. Log4J is a very flexible and powerful logging API, but sometimes you need to use another logging implementation. Log4Unit is open source and small, so you could pretty easily customize it to support the logging implementation of your choice, or abstract the logging implementation using a bridge such as Jakarta Commons Logging, found at http://jakarta.apache.org/commons/logging.html.
  • Log4Unit requires your TestCases to extend LoggedTestCase. It is one thing to willingly extract a Base Test Case; it is another to be forced into it. That said, there’s nothing about Log4Unit that prevents subclasses of LoggedTestCase from running in any other JUnit test runner. As we said, even if Log4J is not in the class path, the tests run as normal TestCases minus the logging features. Nevertheless, if you plan to write hundreds or thousands of tests, we recommend that you extend a base TestCase of your own from LoggedTestCase and extend all your other tests from your own base TestCase. That way, if you ever decide you need to remove or replace Log4Unit, you only have to change your base TestCase and not hundreds of TestCases.

Related

  • 7.1—Using a Base Test Case with a logger

7.3. Getting plain text results with Ant

Problem

You want to output JUnit test reports in plain text.

Background

Plain text reports are useful in contexts such as Telnet terminals, UNIX and DOS command shells, and inline in email messages. Ant has the built-in capability to produce two types of plain text JUnit test reports: brief and plain, which are useful in these contexts.

Recipe

Use the brief or plain formatter type to output plain text reports to a file or to the console. The <junit>, <test>, and <batchtest> tasks all support the use of the nested <formatter> element. Table 7.1 describes the attributes of the <formatter> element:

Table 7.1 Attributes of the <formatter> element of Ant’s <junit>, <test>, and <batchtest> tasks for executing JUnit tests

Attribute   Description

classname   Lets you specify your own custom formatter implementation class instead of using xml, plain, or brief (see recipe 7.6, “Extending Ant’s JUnit results format,” for use of this extension feature).
extension   The file extension to append to files output by the formatter. Required if using a custom formatter; defaults to .txt for plain and brief and to .xml for xml.
type        Choice of xml, plain, or brief (unless using your own formatter implementation with the classname attribute).
usefile     Whether to output formatted results to a file. Defaults to true.

The two formatting options we are concerned with in this recipe are the brief and plain types.

Listing 7.3 shows an Ant target for running a set of JUnit tests with the brief results formatter. You typically use this target in a complete Ant build script, of course. See recipe 7.4, “Reporting results in HTML with Ant’s <junitreport> task” for a more complete build.xml example.

Listing 7.3. junit-run Ant target using brief results formatter
   <!--property declarations, clean, compile and other build
   targets omitted to save page space -->

   <target name="junit-run"
           description="=> run JUnit tests">
       <junit haltonfailure="no" fork="yes" printsummary="no"> 
           <classpath>
               <pathelement location="${classes.dir}"/>
               <pathelement path="${java.class.path}"/>
           </classpath>
           <batchtest fork="yes">
               <formatter type="brief"  
                          usefile="no"/>    
               <fileset dir="${src.dir}">
                   <include name="${junit.includes}"/>
                   <exclude name="${junit.excludes}"/>
               </fileset>
           </batchtest>
       </junit>
   </target>
Eliminate duplicate summary information—The brief text formatter already includes a summary at the end of a test run, so we set printsummary to no to avoid duplicating that information.
Use brief formatter—This is how to specify the brief formatting type. As we have mentioned previously, the other available values are plain and xml.
Display results to console—We have decided to display test results to the console, rather than to a file, so we set usefile to no.

The brief format type output looks like this on the console (we ran this Ant build target with the -emacs flag to reduce logging adornments):

   junit-run:
   Testsuite: junit.cookbook.tests.extensions.ReloadableTestClassLoaderTest
   Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.02 sec

   Testsuite: junit.cookbook.tests.reporting.CookbookTestListenerTest
   Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.591 sec

Change the formatter to type="plain" and run it again. You can see the plain type output prints the name and elapsed time of each test method:

   junit-run:
   Testsuite: junit.cookbook.tests.extensions.ReloadableTestClassLoaderTest
   Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.03 sec

   Testcase: testGetResourceString took 0.01 sec
   Testcase: testGetResourceAsStreamString took 0 sec
   Testcase: testLoadClassString took 0.02 sec
   Testcase: testIsJar took 0 sec

   Testsuite: junit.cookbook.tests.reporting.CookbookTestListenerTest
   Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.591 sec
   Testcase: testStartTest took 0.561 sec
   Testcase: testEndTest took 0.01 sec
   Testcase: testAddError took 0.01 sec
   Testcase: testAddFailure took 0 sec

We set printsummary="no" when using these formatters because the summary would merely repeat information these formatters already provide.

Both of these formatters output one text file per test case class if you run your tests with the <batchtest> task, because <batchtest> dynamically picks up tests to run by pattern matching on file names. We match against source files in our includes/excludes patterns, but you could match against classes instead (matching source file names saves the trouble of dealing with a mix of outer and inner class file names). If you want to automatically send the results as text in the body of an e-mail message (attaching dozens or hundreds of text files would be unusable for recipients), you can use the Ant <concat> and <mail> tasks with a target like this:

   <target name="mail-report">
       <property name="junit.report.file" value="junit-results.txt"/>
       <concat destfile="${junit.report.file}">
          <fileset dir="${junit.reports.dir}" includes="TEST-*.txt"/>
       </concat>
       <mail mailhost="mail.manning.net"
             mailport="25"
             subject="JUnit test results"
             tolist="[email protected],[email protected]"
          messagefile="${junit.report.file}">
           <from address="[email protected]"/>
       </mail>
   </target>

Each test result file is automatically named by the formatters as TEST-classname.txt, so it’s easy to include them all in a <fileset>. The <concat> task concatenates all these files into one new file named by the destfile attribute, which is set by a property in our example to ensure that the same file is picked up below in the <mail> task. The messagefile attribute of the <mail> task uses the file specified by ${junit.report.file} as the body of the email message. You could easily spruce up the target to make it more dynamic, such as by using the <tstamp> task to create a time stamp property, which you could append to the email subject.
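For example, a <tstamp> declaration along the following lines (the property name and pattern are chosen for illustration) would let you write subject="JUnit test results ${report.time}":

   <tstamp>
       <format property="report.time" pattern="yyyy-MM-dd HH:mm"/>
   </tstamp>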

Discussion

The easiest way to get off the ground with automated JUnit test results is to run your tests with Ant and turn the output into automated emails or HTML reports. The <junit> task can output test results in XML, and the <junitreport> task can transform them into HTML.

Related

  • 7.4—Reporting results in HTML with Ant’s <junitreport> task
  • 7.5—Customizing <junit> XML reports with XSLT

7.4. Reporting results in HTML with Ant’s <junitreport> task

Problem

You need to easily and automatically report JUnit test results in a presentable HTML format.

Background

You often need to make JUnit test results available as a report to a wider audience than the individual developer or QA engineer. The way to do this is with professional looking file-based reports, which can be emailed as attachments or posted online in HTML format. Tabular, cross-linked HTML reports are useful for their hyperlinked navigability, especially if you need to navigate around hundreds or thousands of test results.

Recipe

Use Ant’s <formatter> element to tell Ant to save the JUnit results in XML files. You can specify <formatter> under either <junit> or <batchtest>, depending on how you set up your target to run JUnit tests. In the same target you use to run the JUnit tests, or in a separate target if you prefer, use the <junitreport> task with a nested <report> element to transform the XML result files output by the test run into a professional-looking HTML report.

Listing 7.4 shows a simplified but functional Ant build script with one target for running a set of JUnit tests using <batchtest> and another for transforming the XML test results into HTML using the <junitreport> task.

Listing 7.4. build.xml using the <junitreport> task
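Stripped to its essentials, the script contains two targets along these lines (directory and pattern property names are illustrative; the annotations that follow explain the key attributes):

   <target name="junit-run" description="=> run JUnit tests">
       <junit printsummary="withOutAndErr" haltonfailure="no" fork="yes">
           <classpath>
               <pathelement location="${classes.dir}"/>
               <pathelement path="${java.class.path}"/>
           </classpath>
           <batchtest fork="yes" todir="${junit.reports.dir}/xml">
               <formatter type="xml"/>
               <fileset dir="${src.dir}">
                   <include name="${junit.includes}"/>
                   <exclude name="${junit.excludes}"/>
               </fileset>
           </batchtest>
       </junit>
   </target>

   <target name="junit-report" depends="junit-run"
           description="=> run tests and build the HTML report">
       <junitreport todir="${junit.reports.dir}/xml">
           <fileset dir="${junit.reports.dir}/xml">
               <include name="TEST-*.xml"/>
           </fileset>
           <report format="frames" todir="${junit.reports.dir}/html"/>
       </junitreport>
   </target>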

The <batchtest> attribute todir specifies an output directory for individual test suite result files. For each test suite (test case class), batchtest generates a separate results file.
Select the xml formatter.
The <junitreport> attribute todir specifies an output directory for one merged XML file, containing the results of all the tests executed by <batchtest>. The <fileset> specifies which test result files to include in the final report.
Choose the frames format to produce an HTML report that looks like Javadoc.
Specify the output directory for the final HTML report.

Ant’s <junitreport> task comes with two embedded XSL stylesheets for generating HTML reports: one with HTML frames and one without. The HTML frames report is organized similarly to standard Javadoc output with a three-paned, interlinked frameset that facilitates navigation.

The noframes report comes out as one long HTML page, which is definitely harder to navigate than the frames version if you have more than a few unit tests.

To choose which style of report to generate, you specify either frames or noframes to the format attribute of the <report> element. Internally, Ant maps these values to one of its two XSL stylesheets to produce the appropriate report format. For more discussion of the reporting options, including non-HTML options and ways to customize the default capabilities, see the remaining recipes in this chapter.

Discussion

Other than some dependency checking for whether to run certain batches of tests, and some extra system properties (such as ${test.data.dir}) that you may want to declare, the targets shown in listing 7.4 are nearly identical to targets we have used on commercial projects to run and report JUnit tests in a team environment. We separate running and reporting into two targets so that a developer can run just the tests without creating the HTML report. The printsummary="withOutAndErr" attribute of the <junit> task outputs a summary of the test results to the console, so a developer can see a summary of the test run without invoking the junit-report target to generate the HTML report. The report generation takes only a few seconds to execute, but that adds up if you repeatedly run tests with Ant while working. We make the junit-report target depend on the junit-run target so that release engineers can call junit-report without having to call junit-run separately.

Using <junitreport> and its <report> element is the easiest way to produce professional looking JUnit HTML reports. If the default report formats provided by <report> are lacking or not to your taste, you can customize them or design your own reporting format, as discussed in recipe 7.5, “Customizing <junit> XML reports with XSLT” and recipe 7.6, “Extending Ant’s JUnit results format.”

Related

  • 7.5—Customizing <junit> XML reports with XSLT
  • 7.6—Extending Ant’s JUnit results format

7.5. Customizing <junit> XML reports with XSLT

Problem

You are running JUnit with Ant and you need to customize the reports produced by the <junitreport> task.

Background

If the two types of HTML reports that can be produced by Ant out of the box are not to your liking, you can define your own custom HTML output by customizing the existing XSL templates or by writing new ones from scratch.

Perhaps, instead of HTML output, you need a custom XML report format to facilitate importing test results into a test management system or a publishing engine (such as a PDF generator expecting a particular XML format). In these cases, you need a way to tell Ant to transform its default XML output into another XML format.

Recipe

First, output test results in XML using <formatter type="xml"> (see recipe 7.3, “Getting plain text results with Ant”). Next, merge the results with the <junitreport> task. This creates one large XML results document. You have two main options for transforming this document using XSL stylesheets:

  • Customize one of the <junitreport> XSL stylesheets that comes with Ant.
  • Make <junitreport> use a custom XSL stylesheet of your own.

Customizing one of the existing XSL sheets is useful only if you want to mildly customize the format that Ant provides by default, such as embedding an image or changing the background color or fonts. This is because of some quirks and limitations of the <report> element (see the Discussion section for details).

For transforming XML formatted JUnit test results into HTML reports, Ant (since at least Ant 1.3) provides two XSL stylesheets in $ANT_HOME/etc: junit-frames.xsl and junit-noframes.xsl. Make a copy of either one for customization and keep your modified copy in a directory in your project. Then use your modified stylesheet to override the built-in default of the same name (either junit-frames.xsl or junit-noframes.xsl) by using the styledir attribute of the <report> element.

A simple but effective change to one of these stylesheets is to skip outputting the listings of Java system property names and values recorded during the execution of each JUnit test. These property listings are seldom useful, and they add many kilobytes of report file content, which matters if you want trim, lightweight HTML reports. Just comment out or delete the following elements (everything between and including the opening and closing tags of each named element) in junit-noframes.xsl (line numbers refer to the junit-noframes.xsl file shipped with Ant 1.6.0):

1.  Both <script> elements (lines 123–158)

2.  The <div class="Properties"> element (lines 271–276)

3.  The <xsl:template match="properties"> element (lines 334–340)

Then you can point Ant to your modified stylesheet’s directory location (represented by the property variable modified.xsl.dir below) in a target utilizing the <junitreport> task and <report> element as shown in listing 7.5.

Listing 7.5. Ant script snippet using <report> element with custom stylesheet
   <target name="junit-custom-report"
           description="=> generate XML and custom HTML reports">
       <junitreport todir="${junit.reports.dir}/xml">
           <fileset dir="${junit.reports.dir}/xml">
               <include name="TEST-*.xml"/>
           </fileset>
           <report format="noframes" styledir="${modified.xsl.dir}"/>
     </junitreport>
   </target>

For more extreme transformations, we recommend using the <style>/<xslt> task (this task has two names, which can be used interchangeably in Ant scripts, but we prefer <style>) as you normally would, using the merged XML file output by the <junitreport> task as the input file to the transformation.

To use the <style> task with Ant’s JUnit reporting, assume you have an XSL custom stylesheet written to transform <junitreport> XML results into another form of XML. Call it custom-junitreport.xsl. First, we usually want to merge together the individual XML results files output by the <batchtest> XML formatter during a test run. The <junitreport> task can merge all those files, and do nothing else, if you just leave out the <report> element, as shown in listing 7.6:

Listing 7.6. Ant script snippet using <junitreport> task to merge XML result files into one
   <target name="junit-report"
           description="=> generate JUnit merged XML report">
       <junitreport todir="${junit.reports.dir}/xml">
           <fileset dir="${junit.reports.dir}/xml">
               <include name="TEST-*.xml"/>
           </fileset>
       </junitreport>
   </target>

In the target shown in listing 7.6, the <junitreport> task merges together all the files matching the name pattern TEST-*.xml (which is the default output file naming pattern of the XML formatter) in the ${junit.reports.dir}. By default, the resulting merged XML file is named TESTS-TestSuites.xml. The file TESTS-TestSuites.xml is then passed as input to the <style> task, where your custom stylesheet guides the transformation, as shown in listing 7.7:

Listing 7.7. Ant script snippet using <style> task just to transform XML results
   <target name="transform" description="=> create custom JUnit report">
       <style in="${junit.reports.dir}/xml/TESTS-TestSuites.xml"   
              out="TESTS-TestSuites.html"
              extension=".html"
              style="custom-junitreport.xsl"/>   
   </target>
Test results to transform—The in attribute specifies the XML-based test results file to transform into a report. This file is a merged version of all the individual test suite result files.
XSL stylesheet for transformation—The style attribute specifies your custom XSL stylesheet for transforming the XML-based test results. The stylesheet could transform XML into HTML, a PDF, plain text or any other format. In our example, the target is an HTML report.

Discussion

When using the <junitreport> task with a custom stylesheet, you must place the custom stylesheet in the directory specified by the styledir attribute of the <report> element. You must name the stylesheet either junit-frames.xsl or junit-noframes.xsl. This is a quirk (arguably a defect) in the <report> element, which should be fixed so that it can take any file name for a stylesheet rather than demanding one of the two predefined file names to exist in a directory.

Another quirk is that the format attribute accepts only the values frames or noframes (the default is frames if format is left unspecified), even for custom stylesheets. Outside the context of HTML, specifying frames or noframes doesn’t make much sense. What if your stylesheet outputs XML or PDF? A workaround is to always name your custom XSL sheet junit-frames.xsl and leave format unspecified. Because the format attribute defaults to frames, you can leverage that fact and tolerate the nondescriptive filename. Of course, if you have multiple custom stylesheets, separate them into descriptively named directories.

Also note that the frames-based report includes test output captured from the System.out stream, whereas the noframes report does not.

The advantage of the <junitreport> task for this set of problems lies primarily in its XML-merging capability. Once the XML results files have been merged into a single large XML file, it is at least as easy to use a custom XSL sheet with the regular Ant <style> task as it is to configure the <report> element to use a custom stylesheet.

So the answer is probably this: if you need minor tweaks to the format, use the <report> element with one of the existing stylesheets and customize it. But if you need major changes to the output format, such as transforming the output to another XML structure, use the <junitreport> task to merge the results into one file, and then use the <style> task to transform it with your custom stylesheet. The latter option gives you all the features and options of the <style> task without the quirky limitations of the <report> element (which requires Xalan 2 and doesn’t support nearly as many options as <style>/<xslt>).[3]

3 In fact, why don’t the Ant folks ditch the <report> task and just use the <style> task, at least under the covers?

Related

  • 7.4—Reporting results in HTML with Ant’s <junitreport> task
  • 7.6—Extending Ant’s JUnit results format

7.6. Extending Ant’s JUnit results format

Problem

You are running JUnit with Ant and you need to customize the results format to add more information or adhere to a specialized format.

Background

We have seen a situation in which a legacy test results management system, originally developed without support for Ant or JUnit, needed to be outfitted with support for test results produced from JUnit, which was being run by Ant. One of the requirements was to make Ant’s XML output of JUnit results conform to the input file format of the repository. The XML files could then be analyzed and reported on by the results management system without knowing their origin. The results repository took an XML input file that looked similar to the XML formatted results which Ant’s <junit> task can output when using a nested <formatter type="xml"> element. A good solution to the problem was to extend and customize the XML results format of Ant’s XML formatter.

Another situation in which you might want to customize Ant’s JUnit results format would be if you wanted your results to be in PostScript, PDF, or HTML format. You can output the desired format directly without producing intermediate XML results files that need to be processed by XSL.

Recipe

1.  Implement the interface JUnitResultFormatter found in the package org.apache.tools.ant.taskdefs.optional.junit.

2.  Specify the name of the custom formatter class in the classname attribute of the nested <formatter> element in your build script.

Listing 7.8 shows one way to implement these steps in a custom results formatter that outputs reports in HTML format. Note that this class depends on Ant tools, so you need ant.jar and ant-junit.jar, which are both part of the Ant distribution, to compile it.

Listing 7.8. HtmlJUnitResultFormatter
   package junit.cookbook.reporting.ant;

   import java.io.*;
   import java.text.NumberFormat;
   import java.util.Hashtable;
   import junit.framework.*;
   import org.apache.tools.ant.BuildException;
   import org.apache.tools.ant.taskdefs.optional.junit.*;

   public class HtmlJUnitResultFormatter implements JUnitResultFormatter {

       /** Formatter for timings. */
       private NumberFormat nf = NumberFormat.getInstance();

       /** Timing helper. */
       private Hashtable testStarts = new Hashtable();

       /** Where to write the log to. */
       private OutputStream out;

       /** Helper to store intermediate output. */
       private StringWriter middle;

       /** Convenience layer on top of {@link #middle middle}. */
       private PrintWriter wri;

       /** Suppress endTest if testcase failed. */
       private Hashtable failed = new Hashtable();
       private String systemOutput = null;
       private String systemError = null;

       public void setOutput(OutputStream out) {
           this.out = out;
       }

       public void setSystemOutput(String out) {
           systemOutput = out;
       }

       public void setSystemError(String err) {
           systemError = err;
       }

       public HtmlJUnitResultFormatter() {
           middle = new StringWriter();
           wri = new PrintWriter(middle);
       }

       /**
        * The whole testsuite ended.
        */
       public void endTestSuite(JUnitTest suite) throws BuildException {
           String nl = System.getProperty("line.separator");
           StringBuffer header = new StringBuffer(
                   "<html>"
                       + nl
                       + "<head><title>JUnit Results</title></head>"
                       + nl
                       + "<body>"
                       + nl + "<table border="1">" + nl);
               header.append(
                       "<tr><th>Suite: "
                           + suite.getName()
                           + "</th><th>Time</th></tr>" + nl);

           StringBuffer footer = new StringBuffer();
               footer.append(nl + "<tr><td>");
               footer.append("Tests run:");
               footer.append("</td><td>");
               footer.append(suite.runCount());
               footer.append("</td></tr>" + nl + "<tr><td>");
               footer.append("Failures:");
               footer.append("</td><td>");
               footer.append(suite.failureCount());
               footer.append("</td></tr>" + nl + "<tr><td>");
               footer.append("Errors:");
               footer.append("</td><td>");
               footer.append(suite.errorCount());
               footer.append("</td></tr>" + nl + "<tr><td>");
               footer.append("Time elapsed:");
               footer.append("</td><td>");
               footer.append(nf.format(suite.getRunTime() / 1000.0));
               footer.append(" sec");
               footer.append("</td></tr>");
               footer.append(nl);

           // append both the output and error streams to the log
           if (systemOutput != null && systemOutput.length() > 0) {
               footer
                   .append("<tr><td>Standard Output</td><td>")
                   .append("<pre>")
                   .append(systemOutput)
                   .append("</pre></td></tr>");
           }

           if (systemError != null && systemError.length() > 0) {
               footer
                   .append("<tr><td>Standard Error</td><td>")
                   .append("<pre>")
                   .append(systemError)
                   .append("</pre></td></tr>");
           }

           footer.append("</table>" + nl + "</body>" + nl + "</html>");

           if (out != null) {
               try {
                   out.write(header.toString().getBytes());
                   out.write(middle.toString().getBytes());
                   out.write(footer.toString().getBytes());
                   wri.close();
                   out.flush();
               } catch (IOException ioe) {
                   throw new BuildException("Unable to write output", ioe);
               } finally {
                   if (out != System.out && out != System.err) {
                       try {
                           out.close();
                       } catch (IOException e) {
                       }
                   }
               }
           }
       }

       /**
        * From interface TestListener.
        * <p>A new Test is started.
        */
       public void startTest(Test test) {
           testStarts.put(test, new Long(System.currentTimeMillis()));
           failed.put(test, Boolean.FALSE);
           wri.print("<tr><td>");
           wri.print(JUnitVersionHelper.getTestCaseName(test));
           wri.print("</td>");
       }
       /**
        * From interface TestListener.
        * <p>A Test is finished.
        */
       public void endTest(Test test) {
           synchronized (wri) {
               if (Boolean.TRUE.equals(failed.get(test))) {
                   return;
               }

               Long secondsAsLong = (Long) testStarts.get(test);
               double seconds = 0;
            // can be null if an error occurred in setUp
               if (secondsAsLong != null) {
                   seconds = (System.currentTimeMillis()
                         - secondsAsLong.longValue()) / 1000.0;
               }

               wri.print("<td>");
               wri.print(nf.format(seconds));
               wri.print(" sec</td></tr>");
           }

       }
       /**
        * Interface TestListener for JUnit > 3.4.
        *
        * <p>A Test failed.
        */
       public void addFailure(Test test, AssertionFailedError t) {
           formatThrowable("failure", test, (Throwable) t);
       }
       /**
        * Interface TestListener.
        *
         * <p>An error occurred while running the test.
        */
       public void addError(Test test, Throwable t) {
           formatThrowable("error", test, t);
       }

       private void formatThrowable(String type, Test test, Throwable t) {
           synchronized (wri) {
               if (test != null) {
                   failed.put(test, Boolean.TRUE);
                   endTest(test);
               }

               wri.println("<td><pre>");
               wri.println(t.getMessage());
               // filter the stack trace to squelch Ant and JUnit stack
               // frames in the report
               String strace = JUnitTestRunner.getFilteredTrace(t);
               wri.print(strace);
               wri.println("</pre></td></tr>");
           }

       }
       /**
        * From interface JUnitResultFormatter. We do nothing with this
        * method, but we have to implement all the interface's methods.
          */
       public void startTestSuite(JUnitTest suite) throws BuildException {

       }
   }

Although this looks like an awful lot of code, the general idea is straightforward. At each stage of executing the test suite, Ant generates various events: one when the test suite starts executing, one when it ends, one for each test, and one for each test failure or error. For each of these events, we have provided an event handler that outputs HTML corresponding to each event.

For the “start test suite” event, there is nothing to do. If we wanted to add some kind of test suite header, we would have added that here. For the “start test” event, we start an HTML table row and write out the name of the test. What we write out next depends on how the test ends. If the test fails, we treat the assertion failure as a Throwable object (it is an AssertionFailedError, after all) and print the stack trace as preformatted text in a <pre> tag. This is the same behavior we use when the test ends with an error, due to throwing an unexpected exception. Finally, for the “end test suite” event, we write out a summary of the test run, with failure and error counts as well as any text written to the standard output and error streams. This is a pretty comprehensive report!

Here is an Ant target that can be used in an Ant build file for running tests and reporting results using our custom formatter:

   <target name="ant-custom-formatter"
           description="-> demos custom Ant results formatter">
       <mkdir dir="${custom.reports.dir}"/>
       <junit printsummary="yes" haltonfailure="no">
           <classpath>
               <pathelement location="${classes.dir}"/>
               <pathelement path="${java.class.path}"/>
           </classpath>

           <batchtest fork="yes" todir="${custom.reports.dir}">
               <formatter classname=
                 "junit.cookbook.reporting.ant.HtmlJUnitResultFormatter"
                   extension=".html"
                   usefile="true"/>
               <fileset dir="${src.dir}">
                   <include name="**/tests/runner/AllTests.java"/>
               </fileset>
           </batchtest>
       </junit>
   </target>

Discussion

One limitation of our HtmlJUnitResultFormatter example is that it outputs one HTML file per test case class that executes. So while it is fine for reporting results for a few medium to large test suites, which will produce a few short- to medium-length HTML reports files, it becomes unusable when dealing with dozens or hundreds of test case classes.

This recipe could be enhanced to produce HTML frames documents to organize and link the individual HTML reports together. You should also be able to easily see how to write your own custom XML output formatter from this example—just use XML tags instead of HTML. Also, see Ant’s own XMLJUnitResultFormatter for inspiration.

Generally, implementing custom reports formats comes down to a choice between writing Java extensions of Ant’s JUnit APIs, as we do in this recipe, or writing new or customized XSL stylesheets for the <junitreport> task. The approach you choose often depends on which technology better suits your reporting requirements and the skill set of your team.

Related

  • 7.3—Getting plain text results with Ant
  • 7.5—Customizing <junit> XML reports with XSLT

7.7. Implementing TestListener and extending TestRunner

Problem

You want total control of the format of JUnit’s test results reporting.

Background

A common question on the JUnit Yahoo! group is how to customize JUnit’s test results reporting. The default reporting of the text-based test runner is pretty bare bones (“.” for pass, “E” for error, “F” for failure). The Swing-based and AWT-based runners display similar results in an interactive GUI.

For getting HTML or XML results files out of your JUnit test runs, the most common practice is to use Ant’s <junit> and <junitreport> tasks to execute the tests and report the results. But we have come across cases where Ant is not or cannot be used or where Ant might be more of a hassle to work with than simply extending the JUnit framework. (If your only problem with Ant is that it does not support your target XML or HTML reporting format, first see recipe 7.6, and see if that’s enough to solve your problem). In cases such as these, you can extend JUnit to format and output results any way you want by using APIs in the JUnit framework.

Recipe

Implement junit.framework.TestListener to define the results format and output mechanism, and then extend a JUnit test runner (such as junit.textui.TestRunner) to register the listener with the test runner. We’ll go through the process in steps.

 

Note

Observer/Observable— In this context a Listener, as in TestListener, is one of the participants of an implementation of the Observer pattern as captured in the so-called “Gang of Four” (Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides) book Design Patterns.

In short, a listener, also known as an observer or subscriber, is an object that attaches or registers itself with another object (the “observable”) in order to receive updates or notifications when the observable changes state. The observable is also known as the “subject” or “publisher.” Each publisher can have many subscribers. In our recipe here, the CookbookTestRunner is the publisher for the CookbookTestListener. But TestRunners can also be listeners (and implement the TestListener interface), registering themselves with test results in order to handle the display or routing of test events themselves. To see some source code examples of runners that are listeners, see junit.awtui.TestRunner.runSuite() and junit.swingui.TestRunner.doRunTest(Test testSuite).

 

First, implement the TestListener interface to define the output format for results. Listing 7.9 shows the interface to implement in order to control results output for a test runner. Note that all the methods accept an object implementing junit.framework.Test (either a TestCase or a TestSuite) as a parameter.

Listing 7.9. junit.framework.TestListener
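The interface declares four callback methods (shown here without Javadoc):

   package junit.framework;

   public interface TestListener {
       public void addError(Test test, Throwable t);               // an error occurred
       public void addFailure(Test test, AssertionFailedError t);  // an assertion failed
       public void endTest(Test test);                             // a test finished
       public void startTest(Test test);                           // a test started
   }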

Each test has the potential to produce one or more failures or errors as JUnit executes it. The TestListener is notified through a failure or error event if a test fails or has an error. The TestListener is not notified of successes, so we can assume that any test that starts and ends without an error or failure succeeded. Note that the listener also receives notification of start and end events. These events give us a good place to decorate and format each test and its results as it executes and notifies the test listener. Some people find it strange that there are no startSuite() and endSuite() events and methods; the API seems incomplete without them. But we can add these methods to our TestListeners and could even extend the TestListener interface with our own interface that requires the extra methods. In fact, this is what the Ant JUnit reporting tasks do, as we see elsewhere in this chapter.

For demonstration purposes we just need a simple test listener implementation that does something interesting, so we will write a listener capable of writing out test results in a simple XML format. The test listener is responsible for formatting the results of each test. We build the results document in memory using the org.w3c.dom API, provide a print() method that serializes the XML to an output stream, and provide a getXmlAsString() method that returns the XML as a String. Listing 7.10 shows CookbookTestListener, our TestListener implementation.

Listing 7.10. CookbookTestListener
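A condensed sketch of the listener follows. The startSuite() and endSuite() methods are additions of our own beyond the TestListener interface, and their exact signatures, along with helper names such as appendProblem(), are illustrative:

   package junit.cookbook.reporting;

   import java.io.*;
   import javax.xml.parsers.DocumentBuilderFactory;
   import javax.xml.transform.Transformer;
   import javax.xml.transform.TransformerFactory;
   import javax.xml.transform.dom.DOMSource;
   import javax.xml.transform.stream.StreamResult;
   import junit.framework.*;
   import org.w3c.dom.Document;
   import org.w3c.dom.Element;

   public class CookbookTestListener implements TestListener {

       private Document document;
       private Element suiteElement;
       private Element currentTestElement;
       private int testsRun = 0;
       private int errors = 0;
       private int failures = 0;

       /** Start a new document with a <testsuite class="..."> root. */
       public void startSuite(String suiteClassName) throws Exception {
           document = DocumentBuilderFactory.newInstance()
                   .newDocumentBuilder().newDocument();
           suiteElement = document.createElement("testsuite");
           suiteElement.setAttribute("class", suiteClassName);
           document.appendChild(suiteElement);
       }

       /** Report that an individual test is starting: <test name="..."> */
       public void startTest(Test test) {
           currentTestElement = document.createElement("test");
           currentTestElement.setAttribute("name", testName(test));
           suiteElement.appendChild(currentTestElement);
       }

       /** Count the test as run; countTestCases() returns 1 for a TestCase. */
       public void endTest(Test test) {
           testsRun += test.countTestCases();
       }

       /** Report an unexpected exception. */
       public void addError(Test test, Throwable t) {
           errors++;
           appendProblem("error", t);
       }

       /** Report an assertion failure. */
       public void addFailure(Test test, AssertionFailedError t) {
           failures++;
           appendProblem("failure", t);
       }

       /** Append the <summary> element at the end of the run. */
       public void endSuite(double runTimeSeconds) {
           Element summary = document.createElement("summary");
           appendValue(summary, "tests", String.valueOf(testsRun));
           appendValue(summary, "errors", String.valueOf(errors));
           appendValue(summary, "failures", String.valueOf(failures));
           appendValue(summary, "runtime", String.valueOf(runTimeSeconds));
           suiteElement.appendChild(summary);
       }

       /** Serialize the document to a stream using the identity transform. */
       public void print(PrintStream out) throws Exception {
           Transformer identity =
               TransformerFactory.newInstance().newTransformer();
           identity.transform(new DOMSource(document), new StreamResult(out));
       }

       /** Return the document as a String, mainly for testing. */
       public String getXmlAsString() throws Exception {
           ByteArrayOutputStream bytes = new ByteArrayOutputStream();
           print(new PrintStream(bytes));
           return bytes.toString();
       }

       private void appendProblem(String kind, Throwable t) {
           Element problem = document.createElement(kind);
           problem.setAttribute("message", String.valueOf(t.getMessage()));
           StringWriter trace = new StringWriter();
           PrintWriter writer = new PrintWriter(trace);
           t.printStackTrace(writer);
           writer.flush();
           problem.appendChild(document.createCDATASection(trace.toString()));
           currentTestElement.appendChild(problem);
       }

       private void appendValue(Element parent, String name, String value) {
           Element child = document.createElement(name);
           child.appendChild(document.createTextNode(value));
           parent.appendChild(child);
       }

       private String testName(Test test) {
           String s = test.toString();   // "testName(ClassName)" for a TestCase
           int paren = s.indexOf('(');
           return paren > 0 ? s.substring(0, paren) : s;
       }
   }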

Report that a new test suite is executing. This creates an XML element that looks like <testsuite class="com.mycom.test.MyTestSuite">.
Report an error, complete with its message and a stack trace of the corresponding unexpected exception.
Report a failure, complete with its message and a stack trace of the corresponding AssertionFailedError.
Report that an individual test is starting to execute. This creates an XML element that looks like <test name="testMyTestName">.
Note that an individual test has completed, incrementing the running total of executed tests. For a TestCase object, countTestCases() always returns 1.
Write the XML document we are creating to the TestListener's PrintStream using the identity transform.[4]
This method is useful during testing, or whenever you might want to see the XML document we are creating as a String.
Report the end of a test suite, including a summary of the test results.

4 The identity transform is an XSL transformation that applies an identity template to each XML element. The result is output identical to the input: a copy of the input XML document.

Now we have implemented the listener methods that will receive callbacks from the test runner as tests are executed and test methods start, end, or have an error or failure. In each callback method we used DOM APIs to format the test class, test method names, failures, errors, and results as XML. The second thing we must do is to extend TestRunner so we can tell it to use our TestListener implementation for reporting results. To do that, we have to implement three methods: main() for executing the runner on the command line, processArgs() for handling command-line arguments, and doRun() to register our listener with the test runner. Then we can kick off the test run and call the listener’s print() method. Listing 7.11 shows our custom test runner.

Listing 7.11. CookbookTestRunner, an extension of TestRunner
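In sketch form, building on junit.textui.TestRunner (listing 7.11 factors argument handling into a processArgs() method; this sketch folds a minimal version into main(), and the listener method names match the sketch above):

   package junit.cookbook.reporting;

   import java.io.FileOutputStream;
   import java.io.PrintStream;
   import junit.framework.Test;
   import junit.framework.TestResult;

   public class CookbookTestRunner extends junit.textui.TestRunner {

       private CookbookTestListener listener = new CookbookTestListener();
       private PrintStream reportStream = System.out;   // default: console

       public TestResult doRun(Test suite, boolean wait) {
           TestResult result = createTestResult();
           result.addListener(listener);                // register our listener
           long start = System.currentTimeMillis();
           suite.run(result);
           double seconds = (System.currentTimeMillis() - start) / 1000.0;
           try {
               listener.endSuite(seconds);
               listener.print(reportStream);            // write the XML report
           } catch (Exception e) {
               e.printStackTrace();
           }
           return result;
       }

       public static void main(String[] args) throws Exception {
           CookbookTestRunner runner = new CookbookTestRunner();
           // minimal argument handling: an optional "-o file" pair,
           // followed by the name of the test class to run
           String testClassName = args[args.length - 1];
           for (int i = 0; i < args.length - 1; i++) {
               if ("-o".equals(args[i])) {
                   runner.reportStream =
                       new PrintStream(new FileOutputStream(args[i + 1]));
               }
           }
           runner.listener.startSuite(testClassName);
           runner.doRun(runner.getTest(testClassName), false);
       }
   }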

That’s it! Now we can run our test runner and see test results in XML. If we pass a filename with the -o flag we added, the results are saved to the specified file. If we do not specify a file with -o, the results are streamed to standard output (the console) by default. The CookbookTestListener compiles as is with JDK 1.4 or higher. With JDK 1.3 or earlier you need a JAXP-compliant XML parser implementation such as Xerces (xml.apache.org) and the org.w3c.dom classes. Once we’ve compiled our test listener and runner and added them to the class path along with junit.jar, we run them:

   java -cp %CP% junit.cookbook.reporting.CookbookTestRunner -o junit-results.xml
     junit.tests.framework.AllTests

The following XML is the output of a test run with an intentional error and an intentional failure to demonstrate how they appear in the results. Note that passing test methods are just listed with their name. The test listener could be enhanced to print out more information such as timing information for each test method. Counts of one error and one failure appear in the results summary at the bottom, along with a tally of the number of seconds all the tests took to run.

   <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
   <testsuite class="junit.cookbook.tests.reporting.CookbookTestListenerTest">
   <test name="testStartTest"/>
   <test name="testEndTest"/>
   <test name="testAddError">
   <error message="Thrown on purpose!"><![CDATA[java.lang.Error: Thrown on purpose!
   at junit.cookbook.tests.reporting.CookbookTestListenerTest.testAddError
    (CookbookTestListenerTest.java:38)
       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
       // several stack frames
   ]]></error>
   </test>
   <test name="testAddFailure">
   <failure message="Intentional
      failure"><![CDATA[junit.framework.AssertionFailedError: Intentional failure

   at junit.cookbook.tests.reporting.CookbookTestListenerTest.testAddFailure
    (CookbookTestListenerTest.java:52)
       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
       // several stack frames
   ]]></failure>
   </test>
   <summary>
   <tests>4</tests>
   <errors>1</errors>
   <failures>1</failures>
   <runtime>0.161</runtime>
   </summary>
   </testsuite>

Discussion

Extending the test runner framework and implementing your own test listener might seem daunting, but there is not that much to it. If you need fast, efficient, and highly customized JUnit results output and you have the team and expertise to do it, extending JUnit to do exactly what you want is a great recipe.

 

Ant Tip

Be sure to consider using Ant to achieve your custom results formatting goals before spending time on extending JUnit yourself. If you don’t know about Ant’s JUnit reporting capabilities, we strongly suggest looking into them. The <junit> task has a <formatter> subelement (discussed in recipe 7.5) that can output results as XML (as one option), which you can customize by implementing an interface (see recipe 7.6). The <junitreport> task provides HTML formatting of the XML-formatted results by default, but can be used with any XSL stylesheet to produce customized reports.

 

Related

  • 7.5—Customizing <junit> XML reports with XSLT
  • 7.6—Extending Ant’s JUnit results format

7.8. Reporting a count of assertions

Problem

You need a report of the number of assertions in your test cases.

Background

You might want to measure your testing productivity or progress in quantity of assertions rather than quantity of test case classes or test methods. You might also want to check whether tests have assertions, in case you want to flag them in a test run log or build report.

Recipe

Extend junit.framework.Assert with the capability to count the number of assert methods invoked during a test run. Use your assert methods in your test cases instead of the usual assert methods (which TestCase inherits from Assert). The CountingAssert class, shown in listing 7.12, is mostly a copy and paste of the original Assert class, minus some Javadoc comments. The Javadoc comments that we left in (other than the one for the getAssertCount() method) document only the methods we’ve altered to increment the assertion count. These methods are the important ones: we only need to instrument these few with calls to increase the counter, because all the other assert methods are variations that delegate back to them.

Be warned: this listing is quite long!

Listing 7.12. CountingAssert, an extension of Assert
   package junit.cookbook.reporting;

   import junit.framework.Assert;
   import junit.framework.AssertionFailedError;
   import junit.framework.ComparisonFailure;

   public class CountingAssert extends Assert {
       private static int assertCount = 0;

       /**
        * getAssertCount() should be called by a TestRunner or
        * TestListener at the end of a test suite execution.
        * It returns a count of how many assertions were executed
        * during the run.
        *
        * @return assertionCounter count of assertions executed
        * during a run.
        */
       public static int getAssertCount() {
           return assertCount;
       }

       protected CountingAssert() {
       }

       /**
        * Asserts that a condition is true. If it isn't, it throws
        * an AssertionFailedError with the given message. Most of
        * the other assert*() methods delegate to this one.
        */
       static public void assertTrue(String message, boolean condition) {
           assertCount++;
           if (!condition)
                fail(message);
       }

       static public void assertTrue(boolean condition) {
           assertTrue(null, condition);
       }

       static public void assertFalse(String message, boolean condition) {
           assertTrue(message, !condition);
       }

       static public void assertFalse(boolean condition) {
           assertFalse(null, condition);
       }

       /**
        * Fails a test with the given message.
        */
       static public void fail(String message) {
           throw new AssertionFailedError(message);
       }

       static public void fail() {
           fail(null);
       }
       /**
        * Asserts that two objects are equal. If they are not,
        * an AssertionFailedError is thrown with the given message.
        */
       static public void assertEquals(
           String message,
           Object expected,
           Object actual) {
           assertCount++;
           if (expected == null && actual == null)
               return;
           if (expected != null && expected.equals(actual))
               return;
           failNotEquals(message, expected, actual);

       }

       static public void assertEquals(Object expected, Object actual) {
           assertEquals(null, expected, actual);
       }
       /**
        * Asserts that two Strings are equal.
        */
       static public void assertEquals(
           String message,
           String expected,
           String actual) {
           assertCount++;
           if (expected == null && actual == null)
               return;
           if (expected != null && expected.equals(actual))
               return;
           throw new ComparisonFailure(message, expected, actual);
       }

       static public void assertEquals(String expected, String actual) {
           assertEquals(null, expected, actual);
       }
       /**
        * Asserts that two doubles are equal concerning a delta.
        * If they are not, an AssertionFailedError is thrown with
        * the given message. If the expected value is infinity
        * then the delta value is ignored.
        */
       static public void assertEquals(
           String message,
           double expected,
           double actual,
           double delta) {
           assertCount++;
            // handle infinity specially since subtracting
            // two infinite values gives NaN and
            // the following test fails
           if (Double.isInfinite(expected)) {
               if (!(expected == actual))
                   failNotEquals(
                       message,
                       new Double(expected),
                       new Double(actual));
           } else if (!(Math.abs(expected - actual) <= delta))
               // Because comparison with NaN always returns false
              failNotEquals(message, new Double(expected), new Double(actual));
       }

       static public void assertEquals(
           double expected,
           double actual,
           double delta) {
           assertEquals(null, expected, actual, delta);
       }
       /**
        * Asserts that two floats are equal concerning a delta.
        * If they are not, an AssertionFailedError is thrown with
        * the given message. If the expected value is infinity
        * then the delta value is ignored.
        */
       static public void assertEquals(
           String message,
           float expected,
           float actual,
           float delta) {
           assertCount++;
           if (Float.isInfinite(expected)) {
               if (!(expected == actual))
                   failNotEquals(
                       message,
                       new Float(expected),
                       new Float(actual));
           } else if (!(Math.abs(expected - actual) <= delta))
               failNotEquals(message, new Float(expected), new Float(actual));
        }

        static public void assertEquals(
            float expected,
            float actual,
            float delta) {
            assertEquals(null, expected, actual, delta);
        }

        static public void assertEquals(
            String message,
            long expected,
            long actual) {
            assertEquals(message, new Long(expected), new Long(actual));
        }

        static public void assertEquals(long expected, long actual) {
            assertEquals(null, expected, actual);
        }

        static public void assertEquals(
            String message,
            boolean expected,
            boolean actual) {
            assertEquals(message, new Boolean(expected), new Boolean(actual));
        }

        static public void assertEquals(boolean expected, boolean actual) {
            assertEquals(null, expected, actual);
        }

        static public void assertEquals(
            String message,
            byte expected,
            byte actual) {
            assertEquals(message, new Byte(expected), new Byte(actual));
        }

        static public void assertEquals(byte expected, byte actual) {
            assertEquals(null, expected, actual);
        }

        static public void assertEquals(
            String message,
            char expected,
            char actual) {
            assertEquals(message, new Character(expected), new Character(actual));
        }

        static public void assertEquals(char expected, char actual) {
            assertEquals(null, expected, actual);
        }

        static public void assertEquals(
            String message,
            short expected,
            short actual) {
            assertEquals(message, new Short(expected), new Short(actual));
        }

        static public void assertEquals(short expected, short actual) {
            assertEquals(null, expected, actual);
        }

        static public void assertEquals(
            String message,
            int expected,
            int actual) {
            assertEquals(message, new Integer(expected), new Integer(actual));
        }

        static public void assertEquals(int expected, int actual) {
            assertEquals(null, expected, actual);
        }

        static public void assertNotNull(Object object) {
            assertNotNull(null, object);
        }

        static public void assertNotNull(String message, Object object) {
            assertTrue(message, object != null);
        }

        static public void assertNull(Object object) {
            assertNull(null, object);
        }

        static public void assertNull(String message, Object object) {
            assertTrue(message, object == null);
        }
        /**
         * Asserts that two objects refer to the same object. If they are not,
         * an AssertionFailedError is thrown with the given message.
         */
        static public void assertSame(
            String message,
            Object expected,
            Object actual) {
            assertCount++;
            if (expected == actual)
                return;
            failNotSame(message, expected, actual);
        }

        static public void assertSame(Object expected, Object actual) {
            assertSame(null, expected, actual);
        }
        /**
         * Asserts that two objects refer to the same object. If they are not,
         * an AssertionFailedError is thrown with the given message.
         */
        static public void assertNotSame(
            String message,
            Object expected,
            Object actual) {
            assertCount++;
            if (expected == actual)
                failSame(message);
        }

        static public void assertNotSame(Object expected, Object actual) {
            assertNotSame(null, expected, actual);
        }

        static private void failSame(String message) {
            String formatted = "";
            if (message != null)
                formatted = message + " ";
            fail(formatted + "expected not same");
        }

        static private void failNotSame(
            String message,
            Object expected,
            Object actual) {
            String formatted = "";
            if (message != null)
                formatted = message + " ";
            fail(
                formatted
                    + "expected same:<"
                    + expected
                    + "> was not:<"
                    + actual
                    + ">");
        }

         static private void failNotEquals(
             String message,
             Object expected,
             Object actual) {
             String formatted = "";
             if (message != null)
                 formatted = message + " ";
             fail(
                 formatted + "expected:<" + expected
                     + "> but was:<" + actual + ">");
         }
   }

Now use CountingAssert’s assert methods in your test cases in place of the usual ones inherited from Assert. As a simple example, here is a test method with five assertions for CountingAssert to count.

    public void testFoo() {
        CountingAssert.assertNotNull(this);
        CountingAssert.assertSame("hello", this, this);
        CountingAssert.assertEquals(1, 1);
        CountingAssert.assertEquals(true, true);
        CountingAssert.assertTrue(true);
    }

Finally, you need something similar to the CookbookTestRunner to retrieve the assertion total from the CountingAssert class. Listing 7.13 shows a slightly modified CookbookTestRunner (see recipe 7.7), which obtains the assertion count from CountingAssert after completing a test run and simply prints the total to the console. For brevity, we show only the main() method; for the rest of the class, see listing 7.11.

Listing 7.13. CookbookTestRunner.main() displaying the assertion count
   public static void main(String args[]) {
       TestResult results = null;
       CookbookTestRunner runner = null;

       if (args.length == 3 && args[0].equals("-o")) {
           try {
               runner = new CookbookTestRunner(args[1], args[2]);
           } catch (FileNotFoundException e) {
               e.printStackTrace();
           } catch (ParserConfigurationException e) {
               e.printStackTrace();
           }
       } else if (args.length == 1) {
           try {
               runner = new CookbookTestRunner(args[0]);
           } catch (ParserConfigurationException e) {
               e.printStackTrace();
           }
        } else {
            throw new RuntimeException(
                "Usage: java TestRunner [-o outputFile] Test "
                + System.getProperty("line.separator")
                + "where Test is the fully qualified name of "
                + "a TestCase or TestSuite");
        }

        System.out.println("assertion count = "
            + CountingAssert.getAssertCount());
    }

Discussion

This recipe has you use CountingAssert’s methods in your tests in place of the methods that TestCase inherits from JUnit’s Assert class. One alternative would be to write your own TestCase extension class (a Base Test Case) that makes CountingAssert’s counting assert methods available to its subclasses; a sketch of one way to do this appears after the footnotes below. The alternative is convenient in that you can use the counting methods without referring explicitly to the CountingAssert class name, but it forces all your test case classes to extend your Base Test Case rather than JUnit’s TestCase. Whether you follow the technique in this recipe, try this alternative, or do something else, you will end up reimplementing most of Assert.[5] We also prefer to reuse rather than reimplement, but because you would only need to do it once, it might be worth the effort. It depends on how much you need to be able to count assertions.[6]

5 Worse than that, you might end up copying and pasting a large amount of code, which we frown upon.

6 J. B. has never wanted nor needed to count assertions, because he sees it as a meaningless metric that is easy to fool. If you measure assertion count, then all you get is more assertions, and not necessarily better tests.
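To make the alternative concrete, here is a hypothetical sketch of a Base Test Case that hides the assert methods inherited from Assert and delegates them to CountingAssert, so that subclasses can call the counting versions unqualified. The class name CountingTestCase is our own invention, only a few overloads are shown, and hiding-and-delegating is just one way to get the effect described above; treat this as a sketch of the idea, not a complete implementation.

    import junit.cookbook.reporting.CountingAssert;
    import junit.framework.TestCase;

    // Hypothetical Base Test Case: subclasses call assertTrue(), assertEquals(),
    // and so on unqualified, and the calls are routed through CountingAssert.
    public abstract class CountingTestCase extends TestCase {

        // These static methods hide the versions inherited from junit.framework.Assert.
        public static void assertTrue(String message, boolean condition) {
            CountingAssert.assertTrue(message, condition);
        }

        public static void assertTrue(boolean condition) {
            CountingAssert.assertTrue(condition);
        }

        public static void assertEquals(Object expected, Object actual) {
            CountingAssert.assertEquals(expected, actual);
        }

        // ...and so on for the remaining assert methods your tests rely on.
    }

The cost, as noted above, is that every test class must now extend CountingTestCase instead of TestCase.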

A limitation of this recipe is that it does not work transparently with Ant’s JUnit test runner. Because our solution depends on a custom test runner (CookbookTestRunner) to retrieve and print the assertion count after all the tests have run, you would have to extend or modify Ant’s JUnit test runner class to retrieve and display the total from CountingAssert. An alternative worth considering, for both the standalone and the Ant contexts, is to write a custom TestListener that retrieves the assertion count from CountingAssert and displays it; a minimal sketch of the standalone version follows. Either way, you need separate implementations for Ant and for standalone JUnit, because Ant uses its own TestListener and report-formatting API.
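As a rough illustration of that TestListener approach, the following hypothetical listener prints the running total from CountingAssert as each test finishes. It covers only the standalone case; to use it with Ant you would still have to hook it in through Ant’s own formatter API (see recipe 7.6).

    import junit.cookbook.reporting.CountingAssert;
    import junit.framework.AssertionFailedError;
    import junit.framework.Test;
    import junit.framework.TestListener;

    // Hypothetical listener: reports the cumulative assertion count after each test.
    public class AssertionCountListener implements TestListener {
        public void startTest(Test test) {
            // nothing to do when a test starts
        }
        public void endTest(Test test) {
            System.out.println(test + ": assertions so far = "
                + CountingAssert.getAssertCount());
        }
        public void addError(Test test, Throwable t) {
            // errors do not affect the assertion count
        }
        public void addFailure(Test test, AssertionFailedError t) {
            // the failed assertion has already been counted by CountingAssert
        }
    }

You would register it with TestResult.addListener(), exactly as in the earlier wiring sketch.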

Related

  • 7.7—Implementing TestListener and extending TestRunner