Introducing testing for our purposes

Now that we have defined what to do, we need to discuss what kind of testing is needed and how much of it we want to test, based on the approaches we have outlined in Chapter 1, The Testing Mindset.

We're going to cover the following areas of testing:

  • Unit tests: These test components in isolation
  • Integration tests: These ensure that the various components work well together
  • Acceptance tests: These are the most relevant tests from the user's perspective, as they verify that the requirements defined at the beginning are met

Clearly, without knowing how our application is structured, it is hard to understand what kind of work we're going to undertake.

So, before defining the actual tests, we need to break down our application into several modules and look at the structure from an architectural point of view.

There are many ways to perform an architectural breakdown: some are stricter and more detailed, using a textual list, while others end up as a rough sketch in a diagram. The choice heavily depends on the size and complexity of your application; in our case, a diagram seems to fit our purposes.

We need to remember to balance the effort and time spent on these initial phases against the amount of detail required at any given point. For instance, we might not yet know exactly how the modal login window will interact with the rest of the application: whether we need a user model more complex than the one we will start with, whether we should split it into different components to provide different functionality to the frontend, or whether this is out of scope for the work at hand and can be done in a self-contained way.

Moreover, the diagram can miss small bits, which we might forget to test or consider when evaluating our test plan. For example, the JavaScript side of our application might include several small sets of utility functions that should be considered as separate modules for manageability and reuse.


Partial view of the structure of our application

As a solution to these problems, it is always advisable to revisit the structure of the software module and its breakdown when approaching the development of each specific feature. This is something we will see in detail in the next chapters.

In the preceding diagram, we can see that our application essentially comprises three main areas, starting from the bottom: a data storage system (database), a model representing the data, and a functional part (the view/controller part of the application). On top of everything sits the main interaction point, the user's browser. This is not representative of the whole application, just the specific areas we're going to work on.

As we've already seen, unit tests are aimed at testing an atomic bit of the application, such as a class or a small set of related functions: their purpose is to be small and isolated, meaning they should have no external dependencies. Keep in mind that totally isolated tests are difficult to achieve in web development, and we might not always be able to avoid touching parts of our infrastructure, for instance, the database interaction. These tests are called small tests in Google's internal terminology, which immediately indicates their scope and the time they take to run.

Note

Google is currently one of the publicly known companies that have made testing one of their core values. Their approach makes constant use of adjectives to distinguish between types of tests.

To read more about Google's way of testing, you might be interested in How Google Tests Software by James Whittaker, Jason Arbon, and Jeff Carollo, Addison-Wesley.

In our application, the unit test can be represented in the following way:


Graphical representation of unit testing coverage

A practical example is the user model we're going to create. As stated before, we might want to write other unit tests, for instance, in the JavaScript layer of the frontend, if we were dealing with a client-side application where part of the business logic lives in the user's browser.

Just remember that the tests covering the user model shouldn't use any external dependencies (for example, external helpers such as the security module) and, secondly, can avoid touching parts of the framework over which we don't have any control, specifically those that have potentially already been covered by other tests.
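As a rough sketch of what such an isolated test could look like (the `User` class, the `hash_password` helper, and all other names here are hypothetical, not the actual code we will write), the security module is replaced with a mock so the test exercises only the model:

```python
import unittest
from unittest import mock

class User:
    """Hypothetical, minimal user model with an injected security helper."""
    def __init__(self, email, security):
        self.email = email
        self._security = security
        self.password_hash = None

    def set_password(self, plain):
        # Delegates hashing to the security helper instead of doing it inline.
        self.password_hash = self._security.hash_password(plain)

class UserModelTest(unittest.TestCase):
    def test_set_password_stores_hash(self):
        # The security module is mocked: no external dependency is touched.
        security = mock.Mock()
        security.hash_password.return_value = "hashed!"
        user = User("alice@example.com", security)
        user.set_password("secret")
        self.assertEqual(user.password_hash, "hashed!")
        security.hash_password.assert_called_once_with("secret")
```

Because the helper is a mock, this test stays small in Google's sense: it runs in milliseconds and fails only when the model itself is broken.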

When we widen the focus to the global picture, we can see how things stack up and interact with each other. With integration tests, we might still be required to use mocks and fakes, but this is not as highly recommended as it is for unit tests. In Google's terminology, these tests are called medium tests, as they take a bit more time to execute and can also be less trivial to develop in certain situations.


Graphical representation of integration tests coverage.
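A medium-sized integration test, by contrast, lets two real components talk to each other. The following is a minimal sketch under invented names (the `UserRepository` class and its schema are purely illustrative), using an in-memory SQLite database as a real, but cheap, storage layer:

```python
import sqlite3
import unittest

class UserRepository:
    """Hypothetical data-access component backed by a real database."""
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute("CREATE TABLE IF NOT EXISTS users (email TEXT PRIMARY KEY)")

    def add(self, email):
        self.conn.execute("INSERT INTO users (email) VALUES (?)", (email,))

    def exists(self, email):
        row = self.conn.execute(
            "SELECT 1 FROM users WHERE email = ?", (email,)).fetchone()
        return row is not None

class UserRepositoryIntegrationTest(unittest.TestCase):
    def test_added_user_is_found(self):
        # An in-memory SQLite database: a real storage engine is exercised,
        # so this test covers the model *and* the database interaction.
        conn = sqlite3.connect(":memory:")
        repo = UserRepository(conn)
        repo.add("alice@example.com")
        self.assertTrue(repo.exists("alice@example.com"))
```

Note how the database is real but disposable: each test gets a fresh `:memory:` connection, which keeps the medium test repeatable without mocking the storage away.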

The last pieces of the jigsaw are the acceptance tests, as shown in the following:


Graphical overview of acceptance tests.

Acceptance tests are similar to system tests (or end-to-end tests), but they target the user rather than the consistency of the overall system from an engineering point of view. Acceptance tests come close to real-world use: they ensure that all components work well together and meet the acceptance criteria defined at the beginning, expressed as specific actions outlining the user's interaction with the application.

Acceptance criteria are those we have defined previously when outlining our features: the user should be able to log in using a modal window.

I've intentionally avoided using a business domain language, as we want to keep it as broad as possible for this initial part; we're going to dive into that later on.

At Google, acceptance (and end-to-end) tests are also called large or enormous tests, because they take a lot more time to implement and execute. They also require an infrastructure that mimics a real-world scenario, which may not be trivial to set up. Because of this, covering corner cases can be quite difficult, which means we're going to test only the defined scenarios, plus any specific case we consider meaningful to the area we're testing.

In our case, this might be something along the lines of "The user will receive an error when using wrong credentials."
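That scenario can be sketched as a test as well. In this sketch the whole stack is collapsed into a `FakeApp` class purely to keep the example runnable on its own; a real acceptance test would drive the actual application through its UI, for instance with a browser-automation tool:

```python
import unittest

class FakeApp:
    """Hypothetical stand-in for the running application under test."""
    VALID = {"alice@example.com": "secret"}

    def login(self, email, password):
        if self.VALID.get(email) != password:
            return {"ok": False, "error": "Wrong credentials"}
        return {"ok": True, "error": None}

class LoginAcceptanceTest(unittest.TestCase):
    def test_user_sees_error_with_wrong_credentials(self):
        # Scenario: the user tries to log in with a bad password
        # and is shown an error message.
        app = FakeApp()
        response = app.login("alice@example.com", "wrong-password")
        self.assertFalse(response["ok"])
        self.assertEqual(response["error"], "Wrong credentials")
```

The important part is the shape of the test: it reads as a user scenario (attempt an action, observe the visible outcome), not as an assertion on internal state.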

Again, we will specifically dig into these details later on in this book.

Using a top-down approach versus a bottom-up approach

It's important to reiterate that BDD was created as an improvement over TDD, and quite an important one at that. It provides a better, more flexible language for defining acceptance criteria, which also helps define the scope of the testing needed.

We have two ways to define our testing strategy and our test plan: using either a bottom-up (or inside-out) or a top-down (or outside-in) approach, as shown in the following diagram:


Comparison of different sizes of tests and their benefits.

It's common for agencies and startups trying to build up and improve their QA to start from the bottom, implementing unit tests and trying to reach a good amount of coverage.

The use of TDD is encouraged, and it's actually the first step in acquiring the testing mentality: writing tests first and then going through the red, green, and refactor phases. But its sole focus is the code, and the responsibility for implementing tests and ensuring they cover the right amount of code rests with the developer.

Unit tests will help you focus on small, atomic parts of your application, and, being rather quick to execute, will help you discover bugs early and improve the quality of the code you develop. Your architectural and design skills will also improve significantly.
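The red, green, and refactor cycle mentioned above can be sketched in miniature (the `add` function is a deliberately trivial, invented example):

```python
import unittest

# Red: this test is written first, before add() exists, so the
# very first run fails with a NameError.
class AddTest(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

# Green: the simplest implementation that makes the test pass.
def add(a, b):
    return a + b

# Refactor: with the test as a safety net, the implementation can
# now be cleaned up or generalized without changing its behavior.
```

The cycle keeps each step small: a failing test states the intent, the minimal implementation satisfies it, and the refactoring happens only once the bar is green.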

At a certain point, you will realize that there's still something not touched by tests. As the project grows, the amount of manual and exploratory testing grows with it.

Integration tests can help alleviate this problem, but please refrain from spawning an incredible number of integration tests: they can quickly become brittle and unmaintainable, especially when the external dependencies fall out of sync.

Acceptance tests are going to keep everything together and eliminate the repetitive tasks you would otherwise perform when testing manually. Again, acceptance tests are not a replacement for exploratory testing and should instead focus on the acceptance criteria defined.

As you can imagine, the top-down approach gives you the following advantages:

  1. A complete solution with a good enough coverage
  2. A clear panoramic of the testing infrastructure
  3. A good balance between effort, development, and tests
  4. Most of all, the confidence that your system is solid, if not rock-solid

What to test and what not to test

Test coverage could end up being distributed as 100 percent–20 percent–10 percent for unit, integration, and acceptance tests, respectively. The percentages for integration and acceptance tests can grow quite a fair bit in user-facing projects.

In this context, it is particularly important to understand what code coverage means.

If you haven't already, you will probably come across some software engineer who will try to convince you that 100 percent coverage is essential, and that not reaching it is some sort of shame you have to wear for the rest of the project, looking down at the ground, for you're not a respectable developer.

Reaching full coverage is a noble aim, and that's where we will try to get to, but we also need to be realistic and, as highlighted before, understand that there are many situations where this is simply not possible.

The "what to test" question, or in other words the scope of the testing, is defined by our acceptance criteria for each feature we are going to develop.

Using the top-down approach, we will also be able to highlight which bits are important to cover with integration tests, while trying to achieve 100 percent for unit tests.

The master test plan

At the end of this initial planning work, you will have everything needed to define your master test plan.

The master test plan is a unified way to document the scope and details of what needs to be tested and how.

You don't need to be formal, and there's no specific requirement or procedure to follow, unless you're working for a big company where it's considered a deliverable at the beginning of the project to be signed off by the stakeholders.

In our case, it will be roughly defined by the following:

  • User API implementation:
    • Unit test as much as possible (aim for 100 percent, but 60 percent to 70 percent is considered acceptable on a case-by-case basis)
    • Functional tests to cover all the entry points of the application
    • Well-defined corner cases—bad parameters and/or requests (for example, GET instead of POST) handled as client-side errors, plus server-side error handling (50* errors and similar)
  • User login from modal window:
    • Functional tests to ensure we are getting the right markup
    • Well-defined corner cases—for example, no e-mail specified, or an e-mail with no Gravatar set up
    • Acceptance tests—the user clicks on the login button, the modal is displayed, the user logs in and sees themselves as logged in; the user is logged in, clicks on the logout button, and sees themselves as logged out
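The corner cases in the plan can be sketched as functional tests against a hypothetical entry-point dispatcher (the `handle_request` function and its routes are invented for this example; a real application would go through its framework's router):

```python
import unittest

def handle_request(method, path):
    """Hypothetical entry point: /users only accepts POST."""
    if path == "/users":
        if method != "POST":
            return (405, "Method Not Allowed")  # client-side error
        return (201, "Created")
    return (404, "Not Found")

class UserApiCornerCaseTest(unittest.TestCase):
    def test_get_instead_of_post_is_a_client_error(self):
        # Corner case from the plan: wrong method on a valid entry point.
        status, _ = handle_request("GET", "/users")
        self.assertEqual(status, 405)

    def test_unknown_path_is_not_found(self):
        status, _ = handle_request("GET", "/nowhere")
        self.assertEqual(status, 404)
```

Tests like these are cheap to enumerate directly from the plan's bullet points, one test method per corner case.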

As you can imagine, the test plan should be a document that lives with the project, being expanded and amended as necessary when new features are introduced or existing ones change. This imposes some constraints that should be respected if you want a specification document simple enough to be updated quickly (10 minutes tops) and that, at a glance, tells you the implied risk and importance of each component and feature.

If you want to understand more about the topic, I would strongly suggest you start from Attributes-Components-Capabilities (ACC) at https://code.google.com/p/test-analytics/wiki/AccExplained.

ACC goes together with risk analysis and mitigation. By putting your components, their relative capabilities (or features), and the attributes they should provide (such as "secure", "stable", and "elegant") in a grid, you can immediately see where to focus your testing attention. For each row, you can assign a risk value relative to the other features. We keep the value relative to avoid making it too difficult to compute, and also because it is meaningful only in this context.
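As a rough illustration (the components, capabilities, attributes, and risk values below are all invented for the example), such a grid can be as simple as a list of rows sorted by relative risk:

```python
# Each row: (component, capability, attributes it must provide, relative risk 1-5).
acc_grid = [
    ("User API",    "create account",   ["secure", "stable"],  5),
    ("User API",    "reject bad input", ["secure"],            4),
    ("Login modal", "log in",           ["stable", "elegant"], 3),
    ("Login modal", "show Gravatar",    ["elegant"],           1),
]

# Sort by risk, highest first, so the riskiest capabilities
# receive testing attention before the cosmetic ones.
for component, capability, attrs, risk in sorted(acc_grid, key=lambda r: -r[3]):
    print(f"risk {risk}: {component} / {capability} ({', '.join(attrs)})")
```

Even this tiny version makes the prioritization visible: account creation sits at the top, while the Gravatar display can safely wait.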
