8. Testing the Application: Part 1

Caio Ribeiro Pereira

Creating automated tests is highly recommended. There are several types of tests: unit, functional, acceptance, and others. This chapter focuses only on acceptance tests, which in our case aim to test the outputs and behaviors of our API's routes.

To create and execute the tests, it’s necessary to use a test runner. We’ll use Mocha (Figure 8-1), which is very popular in the Node.js community.

Figure 8-1. Mocha test runner

Mocha has the following features:

  • TDD style.

  • BDD style.

  • Code coverage HTML report.

  • Customized test reports.

  • Asynchronous test support.

  • Easy integration with the should, assert, and chai modules.

It is a complete environment for writing tests, and you can learn more by accessing the Mocha web site at https://mochajs.org.
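For a quick taste of the BDD style before we configure the project, here is a minimal, hypothetical spec (the sum function is illustrative only and is not part of our API) written with Mocha's describe/it interface and chai's expect:

// A hypothetical spec, only to illustrate Mocha's BDD interface
const expect = require("chai").expect;

// Function under test (illustrative only)
const sum = (a, b) => a + b;

describe("sum()", () => {
  it("adds two numbers", () => {
    expect(sum(2, 3)).to.eql(5);
  });
});

Running mocha against a file like this one prints a spec-style report with one passing test.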

Setting Up the Test Environment

To set up our test environment, first we are going to set up a new database to work with some fake data. This practice is commonly used to make sure an application can run easily in multiple environments. For now, our API has just a single environment, because all the examples so far were developed in the development environment.

To enable support for multiple environments, let's rename the current libs/config.js file to libs/config.development.js and then create the libs/config.test.js file. The only new parameter in this new file is logging: false, which disables the SQL log output. Disabling these logs is necessary to avoid cluttering the test report in the terminal. This file (libs/config.test.js) should look like the following code.

module.exports = {
  database: "ntask_test",
  username: "",
  password: "",
  params: {
    dialect: "sqlite",
    storage: "ntask.sqlite",
    logging: false,
    define: {
      underscored: true
    }
  },
  jwtSecret: "NTASK_TEST",
  jwtSession: {session: false}
};

Next, we have to set up one file per environment, each holding the data specific to its corresponding environment. To load the settings according to the current environment, we must write code that identifies which environment it is. In this case, we can use the process.env object, which exposes the environment variables of the OS.

A good practice in Node.js projects is to base this decision on the process.env.NODE_ENV variable. Our application will load the settings of either the test or the development environment; by default, development is used when process.env.NODE_ENV is undefined or an empty string.

Based on this brief explanation, let’s re-create the libs/config.js file to load the settings according to the right system environment.

module.exports = app => {
  const env = process.env.NODE_ENV;
  if (env) {
    return require(`./config.${env}.js`);
  }
  return require("./config.development.js");
};
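If you want to double-check which settings are being picked up, here is a quick, hypothetical one-liner (not part of the project) that can be run from the project root; since the exported function ignores its argument, we can call it with none:

NODE_ENV=test node -e "console.log(require('./libs/config.js')())"

It should print the contents of libs/config.test.js; run it again without NODE_ENV and the development settings should appear instead.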

In our project, we are going to explore the acceptance tests only. To create them, we need to use these modules:

  • babel-register: To run ES6 code.

  • mocha: To run the tests.

  • chai: To write BDD tests.

  • supertest: To execute some requests in the API.

All of these modules will be installed as devDependencies in the package.json file to use them only as test dependencies. To do this, you need to use the --save-dev flag with the npm install command, as shown here.

npm install babel-register@6.5.2 mocha@2.4.5 chai@3.5.0 supertest@1.2.0 --save-dev

Now, let’s encapsulate the mocha test runner into the npm test alias command to internally run the command NODE_ENV=test mocha test/**/*.js. To implement this new command, edit package.json and include the scripts.test attribute.

{
  "name": "ntask-api",
  "version": "1.0.0",
  "description": "Task list API",
  "main": "index.js",
  "scripts": {
    "start": "babel-node index.js",
    "test": "NODE_ENV=test mocha test/**/*.js"
  },
  "author": "Caio Ribeiro Pereira",
  "dependencies": {
    "babel-cli": "^6.5.1",
    "babel-preset-es2015": "^6.5.0",
    "bcrypt": "^0.8.5",
    "body-parser": "^1.15.0",
    "consign": "^0.1.2",
    "express": "^4.13.4",
    "jwt-simple": "^0.4.1",
    "passport": "^0.3.2",
    "passport-jwt": "^2.0.0",
    "sequelize": "^3.19.2",
    "sqlite3": "^3.1.1"
  },
  "devDependencies": {
    "babel-register": "^6.5.2",
    "chai": "^3.5.0",
    "mocha": "^2.4.5",
    "supertest": "^1.2.0"
  }
}
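One caveat worth noting: the NODE_ENV=test prefix in the test script is a Unix shell feature, so it fails on the classic Windows cmd.exe. If you are on Windows, one option (an alternative not used in this book) is the cross-env package, installed via npm install cross-env --save-dev, which normalizes this syntax across platforms:

"test": "cross-env NODE_ENV=test mocha test/**/*.js"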

Then, we are going to export our main API module, index.js, to allow the API to be started during the tests. To do so, you must include module.exports = app at the end of the index.js file. We'll also disable the logs created by the consign module, via the consign({verbose: false}) setting, so they don't pollute the test report.

import express from "express";
import consign from "consign";

const app = express();

consign({verbose: false})
  .include("libs/config.js")
  .then("db.js")
  .then("auth.js")
  .then("libs/middlewares.js")
  .then("routes")
  .then("libs/boot.js")
  .into(app);

module.exports = app;

Now, the application can be internally started by the supertest module during the tests. To avoid the server running twice in the test environment, you need to modify libs/boot.js so that the database sync and the server listening happen only when process.env.NODE_ENV is not set to test.

To change this, open and edit libs/boot.js using the following simple code.

module.exports = app => {
  if (process.env.NODE_ENV !== "test") {
    app.db.sequelize.sync().done(() => {
      app.listen(app.get("port"), () => {
        console.log(`NTask API - Port ${app.get("port")}`);
      });
    });
  }
};

To finish our test environment setup, let's prepare some Mocha-specific settings to load the API server and the chai and supertest modules as global variables. This speeds up test execution: instead of each test file loading these modules again and again, the main pieces are loaded once, saving some milliseconds on every run. To implement this simple practice, create the file test/helpers.js.

import supertest from "supertest";
import chai from "chai";
import app from "../index.js";

global.app = app;
global.request = supertest(app);
global.expect = chai.expect;

Then, let’s create a simple file that allows us to include some settings as parameters to the mocha command. This will be responsible for loading test/helpers.js and also use the --reporter spec flag to show a detailed report about the tests. After that, we’ll include the --compilers js:babel- register flag for Mocha be able to run the tests in ECMAScript 6 standard via the babel-register module.

The last flag is --slow 5000, which waits five seconds before starting all tests (time enough to start the API server and database connection safely). Create the test/mocha.opts file using the following parameters.

--require test/helpers
--reporter spec
--compilers js:babel-register
--slow 5000

Writing the First Test

Now that we have finished the setup of the test environment, it is time to test something. We can start with routes/index.js, because it is very simple to test: basically, we make sure the API is returning its status JSON correctly, comparing the result with the static object const expected = {status: "NTask API"} to see if both match.

To create our first test, let's use the request.get("/") function and validate that this request returns status 200. To finish the test, we check that res.body and expected are the same, using the expect(res.body).to.eql(expected) function.

To implement this test, create the test/routes/index.js file using the following code.

describe("Routes: Index", () => {
  describe("GET /", () => {
    it("returns the API status", done => {
      request.get("/")
        .expect(200)
        .end((err, res) => {
          const expected = {status: "NTask API"};
          expect(res.body).to.eql(expected);
          done(err);
        });
    });
  });
});

To execute this test, run the following command.

npm test

After the execution, you should have output similar to what is shown in Figure 8-2.

Figure 8-2. Running the first test

Testing the Authentication Endpoint

In this section, we implement several tests. To start, let's test the endpoint from routes/token.js, which is responsible for generating JSON Web Tokens for authenticated users.

Basically, this endpoint will have four tests to validate:

  • Request authenticated by a valid user.

  • Request with a valid e-mail but with the wrong password.

  • Request with an unregistered e-mail.

  • Request without an e-mail and password.

Create the test file test/routes/token.js with the following structure.

 1   describe("Routes: Token", () => {
 2     const Users = app.db.models.Users;
 3     describe("POST /token", () => {
 4       beforeEach(done => {
 5         // Runs before each test...
 6       });
 7       describe("status 200", () => {
 8         it("returns authenticated user token", done =>{
 9           // Test's logic...
10         });
11       });
12       describe("status 401", () => {
13         it("throws error when password is incorrect", done => {
14           // Test's logic...
15         });
16         it("throws error when email not exist", done => {
17           // Test's logic...
18          });
19          it("throws error when email and password are blank", done => {
20            // Test's logic...
21         });
22       });
23     });
24   });

To start writing these tests, we first need to code some queries to clear the user table and create one valid user inside the beforeEach() callback. This function will be executed before each test. To do this, we’ll use the model app.db.models.Users and its functions Users.destroy({where: {}}) to clean the user table and Users.create() to save a single valid user for each test execution. This will allow us to test the main flows of this route.

beforeEach(done => {
  Users
    .destroy({where: {}})
    .then(() => Users.create({
      name: "John",
      email: "[email protected]",
      password: "12345"
    }))
    .then(() => done());
});

Now, we are going to implement things test by test. The first test is a successful case. To test it, let’s use the function request.post("/token") to request a token by sending the e-mail and password of a valid user via the send() function.

To finish the test, the end(err, res) callback must receive a res.body that includes the token key, which we check via the expect(res.body).to.include.keys("token") function. To conclude a test, it's required to execute the done() callback at its end.

Always pass the err variable as a parameter to the done(err) function, because if something goes wrong during the test, this function will show the details of the error. Here is the complete code of this first test.

 1   it("returns authenticated user token", done => {
 2     request.post("/token")
 3       .send({
 4         email: "[email protected]",
 5         password: "12345"
 6       })
 7       .expect(200)
 8       .end((err, res) => {
 9         expect(res.body).to.include.keys("token");
10         done(err);
11       });
12   });

After this first test, we'll write some further tests to verify that errors are being handled appropriately. Now, let's test a request with an invalid password, expecting a 401 unauthorized access status code. This test is simpler, because basically we test whether the request returns the 401 error status, via the expect(401) function.

 1   it("throws error when password is incorrect", done => {
 2     request.post("/token")
 3       .send({
 4         email: "[email protected]",
 5         password: "WRONG_PASSWORD"
 6       })
 7       .expect(401)
 8       .end((err, res) => {
 9         done(err);
10      });
11   });

The next test is very similar to the last one, but now it tests an invalid user e-mail, expecting the request to return the 401 status code again.

 1   it("throws error when email not exist", done => {
 2     request.post("/token")
 3       .send({
 4         email: "[email protected]",
 5         password: "12345"
 6       })
 7       .expect(401)
 8       .end((err, res) => {
 9         done(err);
10       });
11   });

To finish this test case, let's check that we get the same 401 status code when no e-mail and no password are sent. This one is even simpler, because we don't need to send any parameters in this request.

1   it("throws error when email and password are blank", done => {
2     request.post("/token")
3       .expect(401)
4       .end((err, res) => {
5         done(err);
6       });
7   });

Conclusion

To run all tests now, just run the npm test command again. You should see a result similar to Figure 8-3.

Figure 8-3. Test result
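As a side note, while working on a specific endpoint, you may prefer to run a single spec file instead of the whole suite. Assuming Mocha picks up test/mocha.opts automatically (its default behavior in this version), something like the following should work:

NODE_ENV=test ./node_modules/.bin/mocha test/routes/token.js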

Keep reading, because the subject of testing is extensive: we'll continue with it in the next chapter, writing more tests for our API's routes.
