Chapter 8. Analyzing Testing Information

In the last three chapters, we covered writing tests at different levels: unit, functional, and acceptance. So far, we have tested the new interface that we created, and we have learned to apply all the new methods. This was a relatively easy task, but we still don't know how good our testing actually is. There are specific metrics that we can analyze to generate a direct and immediate report on the quality of the tests. These reports will help us make informed decisions regarding the architecture of our code.

Codeception is bundled with most of these report generation tools, and using them is just as easy as everything we have done so far.

In this chapter, we will primarily cover the code coverage metrics, and we will briefly touch on some other metrics that can be obtained through additional tools. In particular, we will cover the following topics:

  • Improving the quality of your tests
  • Improving our code with the aid of additional tools

Improving the quality of your tests

Ever since programmers started testing, many have asked themselves what it means to write good tests. In other words, how do I know that the test I have written is good? What are the metrics for this?

It's definitely not a question of personal preference or skill.

One of the first methods created for analyzing the quality of tests is called code coverage. Broadly speaking, code coverage measures how much of the code is exercised by the tests. There is a correlation between software bugs and test code coverage: software with more code coverage tends to have fewer bugs. Tests won't remove the possibility of bugs being introduced, for instance, as a manifestation of complex interactions between modules, or of unexpected inputs and corner cases. This is why you need to be careful when planning and designing your tests, and you need to take into account that coverage won't remove the need for regression and exploratory testing, at least not entirely.

There are several code coverage criteria that are normally used by code coverage programs; the short sketch after this list illustrates how some of them differ on a small function.

  • Line coverage: This is based on the number of executable lines that were executed.
  • Function and method coverage: This calculates the number of functions or methods that were executed.
  • Class and trait coverage: This measures the covered classes and traits when all of their methods are executed.
  • Opcode coverage: This is similar to line coverage, although a single line might generate more than one opcode. Line coverage considers a line covered as soon as one of its opcodes is executed.
  • Branch coverage: This measures whether each possible outcome of the Boolean expressions in the control structures is evaluated when the tests are run.
  • Path coverage: This is also called Decision-to-Decision (DD) path coverage, and it considers all the possible execution paths, in terms of their unique sequences of branch executions from the beginning to the end of each method or function.
  • Change Risk Anti-Patterns (C.R.A.P.) index: This is based on the cyclomatic complexity and the code coverage of a unit of code. The index can be lowered by refactoring the code or by increasing the number of tests. Either way, it's primarily used for unit tests.
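To make the difference between these criteria concrete, here is a minimal sketch; grade() is a hypothetical function, not part of our application:

<?php
// grade() exists purely to illustrate the coverage criteria.
function grade(int $score): string
{
    if ($score >= 0 && $score <= 10) {
        return 'valid';
    }
    return 'invalid';
}

// Calling only grade(5) executes the first return, so line coverage
// stays below 100 percent. Adding grade(-1) covers the remaining line
// and the false path. Stricter branch coverage would also want
// grade(11), where the first condition is true but the second fails.
var_dump(grade(5));   // both conditions true
var_dump(grade(-1));  // first condition false
var_dump(grade(11));  // second condition false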

Since Codeception uses PHP_CodeCoverage, it does not support opcode, branch, or path coverage.

With this in mind, if we go back to our unit tests, we will understand a bit better the structure of our tests and how they are currently working.

Let's start by enabling the code coverage in our unit tests and then looking at their results.

Later, we will look at the functional and acceptance coverage reports, and then explore some other interesting information, which we can extract from our code.

Enabling code coverage in Codeception

Codeception provides both a global and a per-suite configuration for code coverage. Depending on the structure of your application and the type of tests you are going to implement based on your test plan, you can have a generic configuration in /tests/codeception.yml, a specific configuration in each suite configuration file, such as /tests/codeception/unit.suite.yml, or both. In the last case, the suite configuration will override the settings of the global configuration.

We are going to use the global configuration file. So at the end of the file, append the following lines:

# tests/codeception.yml

coverage:
    enabled: true
    whitelist:
        include:
            - ../models/*
            - ../modules/v1/controllers/*
            - ../controllers/*
            - ../commands/*
            - ../mail/*
    blacklist:
        include:
            - ../assets/*
            - ../build/*
            - ../config/*
            - ../runtime/*
            - ../vendor/*
            - ../views/*
            - ../web/*
            - ../tests/*

This should be enough to get started. The first option enables the code coverage, while the rest tell Codeception and the coverage program which files to analyze (the whitelist) and which to ignore (the blacklist). This ensures that the report aggregates only the information that is relevant to us, in other words, what we've written, rather than the framework itself.
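As mentioned, a per-suite file takes precedence over the global configuration. Purely as a hypothetical example, restricting the unit suite's report to the models alone would look like this:

# tests/codeception/unit.suite.yml (hypothetical override)
coverage:
    whitelist:
        include:
            - ../models/*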

We won't need to run Codeception's build command, as there is no new module that has to be imported into our tester (actor) classes.

If we look at the help option for the run action of Codeception, then we will notice that it has two main options for generating the reports that we are interested in.

  • --coverage: This generates the actual coverage report, and it is accompanied by a series of other options for controlling the format and the verbosity of the report
  • --report: This generates an overall report of the tests that were run

Used together, these two options let us generate the HTML and XML test and coverage reports, depending on our needs. In particular, the XML reports will be quite handy when we get to Chapter 9, Eliminating Stress with the Help of Automation.
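For instance (a hypothetical invocation; adjust the paths to your layout), a single run can emit both kinds of report in both formats: the JUnit-style test reports via --xml and --html, and the coverage reports via their --coverage-* counterparts:

$ ../vendor/bin/codecept run unit --xml --html --coverage-xml --coverage-html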

Note

It's important to keep in mind that currently the coverage reports of the acceptance tests are not merged with the reports generated for the functional and unit tests. This is due to the way in which the code coverage is calculated and intercepted. Later, we will see what will be needed for generating the coverage reports for acceptance tests.

Extracting the code coverage information for unit tests

In the Codeception documentation, this is normally referred to as the local coverage report and it is applied to both the unit and functional tests. We'll touch upon remote coverage when talking about the coverage for acceptance tests.

We can easily generate the coverage by appending the --coverage flag to the command shown here:

$ ../vendor/bin/codecept run unit --coverage

This will end with an output similar to the following:

...
Time: 44.93 seconds, Memory: 39.75Mb

OK (32 tests, 68 assertions)
Code Coverage Report:     
  2015-01-05 21:43:13     
                          
 Summary:                 
  Classes: 25.00% (2/8)   
  Methods: 45.00% (18/40)
  Lines:   26.42% (56/212)

app\models\ContactForm
  Methods:  33.33% ( 1/ 3)   Lines:  80.00% ( 12/ 15)
app\models\Dog
  Methods: 100.00% ( 2/ 2)   Lines: 100.00% (  3/  3)
app\models\LoginForm
  Methods: 100.00% ( 4/ 4)   Lines: 100.00% ( 18/ 18)
app\models\User
  Methods:  84.62% (11/13)   Lines:  79.31% ( 23/ 29)

Note

The execution time you see here was measured on a machine with an i7-M620 processor running Linux. Collecting coverage increases the execution time dramatically: on the same machine, running the unit tests alone takes less than 10 seconds.

There are ways to shorten the execution time, such as running the tests in parallel with Robo, a task runner, and its Codeception-specific plugin, robo-paracept. More information can be found in the official Codeception documentation at http://codeception.com/docs/12-ParallelExecution.

This report gives us a succinct and immediate output of the code coverage of our unit tests.

From the summary, we can read the coverage for classes, methods, and lines (together with the ratios from which each percentage is calculated), as well as a slightly more detailed breakdown per class.

We can see that we succeeded in covering 100 percent of the Dog and LoginForm classes, and we achieved a good 84.62 percent of the methods of the User class, but, disappointingly, we covered only 33.33 percent of the methods of ContactForm.

But, what did we miss?

Well, there's only one way to find out, and that is by generating the HTML coverage report.

Generating a detailed coverage report of the unit tests

With the help of the --coverage-html option, we can generate a detailed code coverage report. Then, we can inspect it in order to understand what was covered and what was missed:

$ ../vendor/bin/codecept run unit --coverage-html

This will now end with the following output line:

HTML report generated in coverage

The report will be saved in the _output/coverage/ directory, where you will find two files: dashboard.html and index.html. The first gives you some nice graphs, which are a little more interesting than the coverage report summary printed on the console, but it is mostly used for showing off and it is not useful for understanding what's wrong with the tests. There's, in fact, an open request for suppressing this output on the console (https://github.com/Codeception/Codeception/issues/1592).

[Figure: Details of the Insufficient Coverage panel on the dashboard]

As you can see from the preceding screenshot, the bit that you might be interested in at this level of detail is the Insufficient Coverage panel, (currently) sitting at the bottom-left of the page.

We will discuss the other panels later.

You will be most interested in the index.html file. From there, you can see detailed statistics, and you can dig into every single file that has been analyzed to see which lines the tests have covered, and improve your tests from there.

[Figure: Summary of the coverage across all files analyzed]

The coverage summary shows what's been covered in some detail. This helped us discover immediately what was wrong with our testing: in our case, one of the tests provided by Yii, the one for ContactForm, did not cover the class sufficiently. In the preceding screenshot, we can see 80 percent coverage of the lines and 33.33 percent coverage of the methods, but nothing regarding the class. This is because, unless you have all the methods covered, the class won't be marked as covered.

This may not prove to be a problem. Some methods are not part of our implementation and can only be tested by using an integration test; others can be covered by paying a bit of attention. If we click on the ContactForm.php link, we will see the following:

[Figure: Summary of the coverage of the code in the selected file]

Of the two methods that have not been covered, we don't really need to cover the first one, attributeLabels(). Technically, this is for two reasons: the first is that, as it is part of the Yii framework's conventions, we assume that it will work; the second is that it's a trivial method, which always returns a fixed value that can't be controlled in any way.

The other method, contact(), has been covered only partially, so we're going to fix this. It may well be that this specific test gets corrected in a future version of the framework, so this is something you might need to look out for.

By clicking on the contact($email) link, or by just scrolling to the bottom of the page, we will find our method, and we will see that not all of its paths have been covered.

[Figure: Discovering what needs to be covered with the aid of color-coded lines]

Our case is quite simple: we will fix these gaps either by adding the @codeCoverageIgnore directive to the documentation block of the method that we want to exclude, or by adjusting or adding a new test in order to get as close as possible to 100 percent. Remember, this is what we will be aiming for, but it is not necessarily our target.

// /models/ContactForm.php

/**
 * @return array customized attribute labels
 * @codeCoverageIgnore
 */
public function attributeLabels()
{
    return [
        'verifyCode' => 'Verification Code',
    ];
}

The solution to cover the remaining branch of the if statement is to add a test similar to the following:

// /tests/codeception/unit/models/ContactFormTest.php

public function testContactReturnsFalseIfModelDoesNotValidate()
{
    $model = $this->getMock(
        'app\models\ContactForm', ['validate']
    );
    $model->expects($this->any())
        ->method('validate')
        ->will($this->returnValue(false));

    $this->specify('contact should not send', function () use (&$model) {
        expect($model->contact(null))->false();
        expect($model->contact('[email protected]'))->false();
    });
}
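If you want to check the fix quickly, you can (hypothetically) re-run just this test file with coverage before regenerating the full report; the path here is relative to the unit suite directory, so adjust it to your layout:

$ ../vendor/bin/codecept run unit models/ContactFormTest.php --coverage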

Now, let's run our tests again, and we will see the screenshot shown here:

[Figure: We've reached 100 percent coverage! Yay!]

I'll leave it to you to fix the remaining errors. Certain situations might be hard to cover, and you may need additional hints and suggestions on how to restructure your tests.

Aggregating functional tests to unit tests

Now that we've seen what is going on in our unit tests and how to visually understand if we have effectively covered as much as we could, we can move to the functional tests that we wrote previously.

As we saw earlier, we can simply add the functional suite to the command line to generate the aggregated reports:

$ ../vendor/bin/codecept run unit,functional --coverage

We will also see that, by omitting the suites, we end up with the same result; however, we don't know when the Codeception developers will merge all three suites into a single coverage report, so keep this in mind and consult the documentation.

Our unit tests have covered the models in their entirety, so our functional tests should focus on the controllers. You should be able to spot that the login page and the REST module controller have not been covered completely. Let's discuss these one by one.

The login page will display missing coverage for the login and logout actions.

In the first case, it seems pretty easy to cover that. We have to make sure that we reach that action after logging in. So, let's add the following assertion right after the successful login at the end of the test file:

// tests/codeception/functional/LoginCept.php

$I->amGoingTo('ensure I cannot load the login page if I am logged in');
$I->amOnPage('/index-test.php/login');
$I->seeCurrentUrlEquals('/index-test.php');

As we can see, we're using a few specific paths for testing the website. This isn't a problem when interacting with the Codeception REST module, but here we have to be verbose.

The other portion that we have to cover is a little more complex. Once we are logged in, notice that the logout button has a JS click event attached to it, and that will send a POST request to /logout.

Since PhpBrowser is not able to interpret JS, nor can it perform that specific POST call for us, we won't be able to cover this piece of code. Don't even think about using sendPOST(), as it's a specific method that comes from the REST module of Codeception.

The only solution for this is to leave the coverage of this bit to the acceptance tests or to WebDriver.

Because the coverage of acceptance and functional tests is not merged, we can exclude this method from the coverage report by using @codeCoverageIgnore. However, make sure that this is still the case, and discuss it with your colleagues before excluding the method's coverage from all the tests.
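As a sketch, assuming the default SiteController shipped with the basic app template, the exclusion would look as follows:

// /controllers/SiteController.php

/**
 * @codeCoverageIgnore Exercised only by the acceptance suite,
 * since the logout link sends its POST through JavaScript.
 */
public function actionLogout()
{
    Yii::$app->user->logout();

    return $this->goHome();
}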

The last part that we need to cover is the controller of the REST interface. Here, we have a mixed situation. The uncovered functions are mostly part of our use of the framework, such as the anonymous function that performs the authentication and checkAccess(); then we have a small bit in actionUpdate(), which forbids anything but a PUT, and another control statement in actionSearch(), which controls who can search for what.

In the first two cases, we'll gladly leave them uncovered, as we've explicitly excluded the framework files that these two are part of.

For actionUpdate(), we'll find that we don't even need a specific check, as Yii already defines which HTTP verbs are allowed against the default REST interfaces.
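For reference, this restriction comes from the verbs() map declared by yii\rest\ActiveController. Abbreviated from the Yii 2 source, it looks roughly like this:

// Abbreviated sketch of yii\rest\ActiveController::verbs()
protected function verbs()
{
    return [
        'index'  => ['GET', 'HEAD'],
        'view'   => ['GET', 'HEAD'],
        'create' => ['POST'],
        'update' => ['PUT', 'PATCH'],
        'delete' => ['DELETE'],
    ];
}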

We can add a test that ensures that we can't perform a POST on the interface; it can be added to any of the already present tests. It could be something along the lines of the following code block:

// tests/codeception/functional/UserAPIEndpointsCept.php

// I must be authenticated at this point.
$I->amGoingTo('I cannot update using POST');
$I->sendPOST('users/' . $userId);
$I->seeResponseCodeIs(405);

Lastly, we want to ensure that the user can only search for his own username to get the ID, as we outlined in Chapter 6, Testing the API – PHPBrowser to the Rescue. In order to do this, we can simply add something similar to the code block shown here:

// tests/codeception/functional/UserAPICept.php

// I must be authenticated at this point.
$I->amGoingTo('ensure I cannot search someone else');
$I->sendGET('users/search/someoneelse');
$I->seeResponseCodeIs(403);

If we run the tests with coverage, we'll get 100 percent on all the files we wanted to see covered.

[Figure: The final overview of the coverage for unit and functional tests]

Generating acceptance tests' coverage report

Now that we've seen what to make of our coverage reports, we'll quickly look at the configuration that will help us in obtaining the coverage reports for the acceptance tests.

These coverage reports might not be the most important ones, but if constructed correctly, they should prove that our scenarios are well written. Normally, the focus of acceptance tests is on ensuring cross-browser compatibility and preserving backward compatibility.

As we saw in Chapter 7, Having Fun Doing Browser Testing, Codeception talks to the Selenium standalone server, which in turn launches the browser and performs the required tests through the browser driver. Because of this architecture, the c3 project was created: it listens to the incoming browser calls and works out which bits of our code are being executed remotely.

So, first of all, let's get c3. We can either install it through Composer or download it from the official repository (https://github.com/Codeception/c3) by running this command from the root of the project:

$ wget https://raw.github.com/Codeception/c3/2.0/c3.php

If you're installing it through Composer, you'll have to add some additional instructions to the composer.json file. You should take the official documentation as the main reference point.
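At the time of writing, the c3 README suggested additions along these lines; treat this as an assumption and verify the exact package version and hook names against the README:

{
    "require-dev": {
        "codeception/c3": "2.*"
    },
    "scripts": {
        "post-install-cmd": "Codeception\\c3\\Installer::copyC3ToRoot",
        "post-update-cmd": "Codeception\\c3\\Installer::copyC3ToRoot"
    }
}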

Once you have it, include it in the index-test.php file:

// web/index-test.php
//...
include_once __DIR__ . '/c3.php';
$config = require(__DIR__ . '/../tests/codeception/config/acceptance.php');
(new yii\web\Application($config))->run();

With this, we have hooked c3 to Yii. Now, we just need to make Codeception aware of it. So open the codeception.yml file, and add the following options to the coverage section of the file:

# tests/codeception.yml
# ...
coverage:
    enabled: true
    remote: true
    remote_config: ../tests/codeception.yml
    # whitelist:
    # blacklist:
    c3_url: 'https://basic-dev.yii2.sandbox/index-test.php/'

We need to enable the remote coverage, point remote_config at the configuration file to be used for the remote run, and then specify the URL on which c3 should listen.

Note

The detailed explanation of the remote code coverage and its configuration can be read from the official documentation of Codeception, which can be found at http://codeception.com/docs/11-Codecoverage, and from the README.md file, which is either located in the tests/ directory of your project or at https://github.com/yiisoft/yii2-app-basic/tree/master/tests#remote-code-coverage.

Now, all our remote calls will go through the index-test.php file, and they will use c3 to generate the coverage data.

Additionally, we may want to get a trimmed down report for specific acceptance tests, and in our case, we can decide to focus our attention only on the controllers that are being hit, and then choose to remove any reporting for the models.

In order to do so, consider what we already have in the main configuration file. We just need to add the following to our acceptance.suite.yml file:

# tests/codeception/acceptance.suite.yml
coverage:
    blacklist:
        include:
            - ../models/*

At this point, you can generate the reports separately by using the command shown here:

$ ../vendor/bin/codecept run acceptance --coverage-html

You can also do this by simply running all the suites together, as follows:

$ ../vendor/bin/codecept run --coverage-html

As we saw earlier, both of these methods will generate a separate report for the acceptance tests. This might no longer be true in the future, so be sure to head over to the official documentation and check.

Once we generate the reports, we will notice two things: first, running the tests with coverage might take ages, so we don't want to do this every time we make a change to the interface; second, we still have to cover the missing logout test that we highlighted before.

So, let's go to our LoginCept.php file and add what's missing.

$I->amGoingTo('ensure I cannot load the login page if I am logged in');
$I->amOnPage('/index-test.php/login');
$I->seeCurrentUrlEquals('/index-test.php');

$I->amGoingTo('try to logout');
$I->click('Logout (admin)');
if (method_exists($I, 'wait')) {
    $I->wait(3); // only for Selenium
}
$I->seeCurrentUrlEquals('/index-test.php');

Please note that we need to be very specific while using the URLs, just as we were with the functional tests.

Once this is done, we should find ourselves with the complete coverage of all the suites.

In the next section, we'll see what else we can generate, and then we'll take it to the next level with the aid of automation in the next chapter.
