Let's start our unit testing journey with the data models we wrote for the Notes application. Because this is unit testing, the models should be tested separately from the rest of the Notes application.
In the case of most of the Notes models, isolating their dependencies implies creating a mock database. Are you going to test the data model or the underlying database? Testing a data model and not mocking out the database means that to run the test, one must first launch the database, making that a dependency of running the test. On the other hand, avoiding the overhead of launching a database engine means that you will have to create a fake Sequelize implementation. That does not look like a productive use of our time. One can argue that testing a data model is really about testing the interaction between your code and the database, that mocking out the database means not testing that interaction, and therefore we should test our code against the database engine used in production.
With that line of reasoning in mind, we'll skip mocking out the database, and test the models against a real database server. Instead of running the test against the live production database, it should be run against a database containing test data.
To simplify launching the test database, we'll use Docker to start and stop a version of the Notes application that's set up for testing.
If you haven't already done so, duplicate the source tree to use in this chapter. For example, if you had a directory named `chap10`, create one named `chap11` containing everything from `chap10`.
In the `notes` directory, create a new directory named `test`.
Mocha (http://mochajs.org/) is one of many test frameworks available for Node.js. As you'll see shortly, it helps us write test cases and test suites, and it provides a test results reporting mechanism. It was chosen over the alternatives because it supports Promises.
While in the `notes` directory, type this to install Mocha and Chai:

```
$ npm install [email protected] [email protected] --save-dev
```
We have seen similar commands plenty of times, but this time we used `--save-dev` rather than the `--save` flag we saw earlier. This option saves the package name to the `devDependencies` list in `package.json` rather than the `dependencies` list. The idea is to record which dependencies are used in production and which are used only in development or testing environments.
We only want Mocha and Chai installed on a development machine, not on a production machine. With npm, installing the production version of our code is done this way:

```
$ npm install --production
```

Alternatively, it can be done as follows:

```
$ NODE_ENV=production npm install
```

Either approach causes npm to skip installing the packages listed in `devDependencies`.
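To make the distinction concrete, here is a minimal sketch of how the two lists might look in `package.json` after the installs above. The package names and version ranges shown are illustrative assumptions, not exact values:

```json
{
  "name": "notes",
  "dependencies": {
    "express": "4.x"
  },
  "devDependencies": {
    "mocha": "3.x",
    "chai": "3.x"
  }
}
```

With this layout, `npm install --production` installs Express but skips Mocha and Chai.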
In `compose/compose-docker.yml`, we should then add this to both the `userauth` and `notesapp` sections:

```
environment:
    - NODE_ENV="production"
```
When the production deployment Dockerfile executes `npm install`, this ensures that development dependencies are not installed.
Because we have several Notes models, the test suite should run against any model. We can write tests using the Notes model API we developed, and an environment variable should be used to declare the model to test.
In the `test` directory, create a file named `test-model.js` containing this as the outer shell of the test suite:

```
'use strict';

const assert = require('chai').assert;
const model = require(process.env.MODEL_TO_TEST);

describe("Model Test", function() {
    ..
});
```
The Chai library supports three flavors of assertions. We're using the `assert` style here, but it's easy to use a different style if you prefer. For the other styles supported by Chai, see http://chaijs.com/guide/styles/.
We'll specify the Notes model to test with the `MODEL_TO_TEST` environment variable. For the models that also consult environment variables, we'll need to supply that configuration as well.
With Mocha, a test suite is contained within a `describe` block. The first argument is descriptive text, which you use to tailor the presentation of test results.
Rather than maintaining a separate test database, we can create one on the fly while executing tests. Mocha has what are called hooks: functions executed before or after test case execution. The hook functions let you, the test suite author, set up and tear down the conditions required for the test suite to operate as desired, for example, creating a test database with known test content.
```
describe("Model Test", function() {

    beforeEach(function() {
        return model.keylist()
        .then(keyz => {
            var todel = keyz.map(key => model.destroy(key));
            return Promise.all(todel);
        })
        .then(() => {
            return Promise.all([
                model.create("n1", "Note 1", "Note 1"),
                model.create("n2", "Note 2", "Note 2"),
                model.create("n3", "Note 3", "Note 3")
            ]);
        });
    });

    ..
});
```
This defines a `beforeEach` hook, which is executed before every test case. The full set of hooks is `before`, `after`, `beforeEach`, and `afterEach`. The `Each` hooks are triggered before or after each individual test case execution, while `before` and `after` run once for the whole suite.
This uses our Notes API to first delete all notes from the database (if any) and then create a set of new notes with known characteristics. This technique simplifies tests by ensuring that we have known conditions to test against.
We also have a side effect of testing the `model.keylist` and `model.create` methods.
In Mocha, test cases are written using an `it` block contained within a `describe` block. You can nest `describe` blocks as deeply as you like. Add the following `describe` block inside the outer one:
```
describe("check keylist", function() {

    it("should have three entries", function() {
        return model.keylist()
        .then(keyz => {
            assert.equal(3, keyz.length, "length 3");
        });
    });

    it("should have keys n1 n2 n3", function() {
        return model.keylist()
        .then(keyz => {
            keyz.forEach(key => {
                assert.match(key, /n[123]/, "correct key");
            });
        });
    });

    it("should have titles Node #", function() {
        return model.keylist()
        .then(keyz => {
            var keyPromises = keyz.map(key => model.read(key));
            return Promise.all(keyPromises);
        })
        .then(notez => {
            notez.forEach(note => {
                assert.match(note.title, /Note [123]/);
            });
        });
    });
});
```
The idea, of course, is to call Notes API functions and then test the results to check whether they match the expected values.
This `describe` block is within the outer `describe` block. The descriptions given in the `describe` and `it` blocks are used to make the test report more readable.
It is important with Mocha not to use arrow functions in the `describe` and `it` blocks. By now, you will have grown fond of arrow functions because of how much easier they are to write. However, Mocha calls these functions with a `this` object containing useful functions for controlling Mocha. Because arrow functions do not receive their own `this` binding, Mocha would break.
You'll see that we did use a few arrow functions here. It is only the function supplied to a `describe` or `it` block that must be a traditional `function` declaration; the others can be arrow functions.
How does Mocha know whether the test code passes? How does it know when the test finishes? This segment of code shows the first of three methods: the code within the `it` block can return a Promise. The test finishes when the Promise settles, and it passes or fails depending on whether the Promise resolves successfully or rejects.
Another method for writing tests in Mocha is with non-asynchronous code. If that code executes without throwing an error, then the test is deemed to have passed. Assertion libraries such as Chai assist in writing checks for expected conditions. Any assertion library can be used, so long as it throws an error.

In the tests shown earlier, we did use Chai assertions to check values. However, those assertions were performed within a Promise. Remember that Promises catch errors thrown within `.then` functions; a thrown assertion therefore causes the Promise to reject, which Mocha interprets as a test failure.
The last method for writing tests in Mocha is used for asynchronous code. You write the `it` block callback function so that it takes an argument. Mocha supplies a callback function in that argument, and when your test is finished, you invoke that callback to inform Mocha whether the test succeeded or failed.
```
it("sample asynchronous test", function(done) {
    performAsynchronousOperation(arg1, arg2, function(err, result) {
        if (err) return done(err); // test failure
        // check attributes of result against expected result
        if (resultShowsFail) return done(new Error("describe fail"));
        done(); // test passes
    });
});
```
Using Mocha to test asynchronous functions is simple: call the function, receive the results asynchronously, verify that the result is what's expected, and then call `done(err)` to indicate failure or `done()` to indicate success.
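The shape of this protocol can be sketched without Mocha itself. In the following, `runTest`, `report`, and `sampleTest` are invented names for illustration; they are not part of Mocha's API:

```javascript
// Illustrative sketch of the done-callback convention: the runner hands
// the test a callback; calling it with an Error signals failure, calling
// it with no argument signals success.
function runTest(testFn, report) {
  testFn(function done(err) {
    report(err ? 'failed: ' + err.message : 'passed');
  });
}

// A test written in the style of the it() block shown above
function sampleTest(done) {
  const result = 2 + 1; // stand-in for an asynchronous operation's result
  if (result !== 3) return done(new Error('wrong result'));
  done(); // test passes
}
```

Running `runTest(sampleTest, msg => console.log(msg))` reports `passed`, because `sampleTest` reaches the bare `done()` call.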
We have more tests to write, but let's first get set up to run the tests.
The simplest model to test is the in-memory model. Let's add this to the `scripts` section of `package.json`:

```
"test-notes-memory": "MODEL_TO_TEST=../models/notes-memory mocha",
```
Then, we can run it as follows:

```
$ npm run test-notes-memory

> [email protected] test-notes-memory /Users/david/chap11/notes
> MODEL_TO_TEST=../models/notes-memory mocha

  Model Test
    check keylist
      ✓ should have three entries
      ✓ should have keys n1 n2 n3
      ✓ should have titles Node #

  3 passing (23ms)
```
The `mocha` command is used to run the test suite. With no arguments, it looks in the `test` directory and executes everything there. Command-line arguments can be used to tailor this, so you can run a subset of tests or change the reporting format.
That wasn't enough to test much, so let's go ahead and add some more tests:
```
describe("read note", function() {
    it("should have proper note", function() {
        return model.read("n1").then(note => {
            assert.equal(note.key, "n1");
            assert.equal(note.title, "Note 1");
            assert.equal(note.body, "Note 1");
        });
    });
    it("Unknown note should fail", function() {
        return model.read("badkey12")
        .then(note => {
            throw new Error("should not get here");
        })
        .catch(err => {
            // this is expected, so do not indicate error
        });
    });
});

describe("change note", function() {
    it("after a successful model.update", function() {
        return model.update("n1", "Note 1 title changed", "Note 1 body changed")
        .then(newnote => {
            return model.read("n1");
        })
        .then(newnote => {
            assert.equal(newnote.key, "n1");
            assert.equal(newnote.title, "Note 1 title changed");
            assert.equal(newnote.body, "Note 1 body changed");
        });
    });
});

describe("destroy note", function() {
    it("should remove note", function() {
        return model.destroy("n1").then(() => {
            return model.keylist()
            .then(keyz => {
                assert.equal(2, keyz.length);
            });
        });
    });
    it("should fail to remove unknown note", function() {
        return model.destroy("badkey12")
        .then(() => {
            throw new Error("should not get here");
        })
        .catch(err => {
            // this is expected, so do not indicate error
        });
    });
});
```
Rerunning the test suite now gives this result:

```
  Model Test
    check keylist
      ✓ should have three entries
      ✓ should have keys n1 n2 n3
      ✓ should have titles Node #
    read note
      ✓ should have proper note
      ✓ Unknown note should fail
    change note
      ✓ after a successful model.update
    destroy note
      ✓ should remove note
      ✓ should fail to remove unknown note

  8 passing (31ms)
```
In these additional tests, we have a couple of negative tests. In each test that we expect to fail, we supply a `notekey` that we know is not in the database, and we then ensure that the model gives us an error.
Notice how the test report reads well. Mocha's design is what produces this sort of descriptive results report. While writing a test suite, it's useful to choose the descriptions so the report reads well.
That was good, but we obviously won't run Notes in production with the in-memory Notes model. This means that we need to test all the other models. While each model implements the same API, we can easily make a mistake in one of them.
Testing the LevelUP and filesystem models is easy; just add this to the `scripts` section of `package.json`:

```
"test-notes-levelup": "MODEL_TO_TEST=../models/notes-levelup mocha",
"test-notes-fs": "MODEL_TO_TEST=../models/notes-fs mocha",
```
Then run the following commands:

```
$ npm run test-notes-fs
$ npm run test-notes-levelup
```

Each will produce a successful test result.
The simplest database to test is SQLite3, since it requires zero setup. We have two SQLite3 models to test; let's start with `notes-sqlite3.js`. Add the following to the `scripts` section of `package.json`:

```
"test-notes-sqlite3": "rm -f chap11.sqlite3 && sqlite3 chap11.sqlite3 --init models/schema-sqlite3.sql </dev/null && MODEL_TO_TEST=../models/notes-sqlite3 SQLITE_FILE=chap11.sqlite3 mocha"
```
This command sequence puts the test database in the `chap11.sqlite3` file. It first initializes that database using the `sqlite3` command-line tool. Note that we've connected its input to `/dev/null` because the `sqlite3` command will otherwise prompt for input. Then it runs the test suite, passing in the environment variables required to run against the SQLite3 model.
Running the test suite does find an error:

```
$ npm run test-notes-sqlite3
..
    read note
      ✓ should have proper note
      1) Unknown note should fail
..
  7 passing (385ms)
  1 failing

  1) Model Test read note Unknown note should fail:
     Uncaught TypeError: Cannot read property 'notekey' of undefined
      at Statement.<anonymous> (models/notes-sqlite3.js:72:44)
      --> in Database#get('SELECT * FROM notes WHERE notekey = ?', [ 'badkey12' ], [Function])
      at models/notes-sqlite3.js:68:16
      at models/notes-sqlite3.js:67:16
```
The failure indicators in this report are as follows:
The failing test calls `model.read("badkey12")`, a note that we know does not exist. Writing negative tests paid off.
The failing line of code, at `models/notes-sqlite3.js` line 72, reads as follows:

```
var note = new Note(row.notekey, row.title, row.body);
```
It's easy enough to insert `util.log(util.inspect(row));` just before this line and learn that, for the failing call, SQLite3 gave us an `undefined` value for `row`.
The test suite calls the `read` function multiple times with a `notekey` value that does exist. Obviously, when given an invalid `notekey` value, the query produces an empty result set, and SQLite3 invokes the callback with both the error and the row values set to `undefined`. This is common behavior for database modules: an empty result set isn't an error, and therefore we received no error and an undefined row.
In fact, we saw this behavior earlier with `models/notes-sequelize.js`. The equivalent code there does the right thing, and it has a check we can adapt. Let's rewrite the `read` function in `models/notes-sqlite3.js` as follows:

```
exports.read = function(key) {
    return exports.connectDB().then(() => {
        return new Promise((resolve, reject) => {
            db.get("SELECT * FROM notes WHERE notekey = ?", [ key ],
                (err, row) => {
                    if (err) reject(err);
                    else if (!row) {
                        reject(new Error("No note found for " + key));
                    } else {
                        var note = new Note(row.notekey, row.title, row.body);
                        log('READ ' + util.inspect(note));
                        resolve(note);
                    }
                });
        });
    });
};
```
This is simple: we just check whether `row` is undefined and, if so, reject the Promise with an error. While the database doesn't treat an empty result set as an error, Notes does. Further, Notes already knows how to deal with the resulting error.
Make this change and the test passes.
This is the bug we referred to in Chapter 7, Data Storage and Retrieval. We simply forgot to check for this condition in this particular method. Thankfully, our diligent testing caught the problem. At least that's the story to tell the managers rather than telling them that we forgot to check for something we already knew could happen.
Now that we've fixed `models/notes-sqlite3.js`, let's also test `models/notes-sequelize.js` using the SQLite3 database. To do this, we need a connection object to specify in the `SEQUELIZE_CONNECT` variable. While we could reuse the existing one, let's create a new one. Create a file named `test/sequelize-sqlite.yaml` containing this:

```
dbname: notestest
username:
password:
params:
    dialect: sqlite
    storage: notestest-sequelize.sqlite3
    logging: false
```
This way, we don't overwrite the "production" database instance with our test suite. Since the test suite destroys the database it tests, it must be run against a database we are comfortable destroying. The `logging` parameter turns off the voluminous output Sequelize produces, so that we can read the test results report.
Add the following to the `scripts` section of `package.json`:

```
"test-notes-sequelize-sqlite": "MODEL_TO_TEST=../models/notes-sequelize SEQUELIZE_CONNECT=test/sequelize-sqlite.yaml mocha",
```
Then run the test suite:

```
$ npm run test-notes-sequelize-sqlite
..
  8 passing (2s)
```
And, we pass with flying colors!
We've been able to leverage the same test suite against multiple Notes models. We even found a bug in one model. But, we have two test configurations remaining to test. Our test matrix reads as follows:
- `models-fs`: PASS
- `models-memory`: PASS
- `models-levelup`: PASS
- `models-sqlite3`: 1 failure, now fixed
- `models-sequelize` with SQLite3: PASS
- `models-sequelize` with MySQL: untested
- `models-mongodb`: untested

The two untested models both require the setup of a database server. We avoided testing these combinations because setting up the database server makes it harder to run the tests. But our manager won't accept that excuse, because the CEO needs to know we've tested Notes in a configuration similar to the production environment.
In production, we'll of course be using a regular database server, with MySQL or MongoDB being the primary choices. Therefore, we need a low-overhead way to run tests against those databases, because they are the production configuration. Testing against the production configuration must be so easy that we feel no resistance to doing it, ensuring that tests are run against that configuration often enough to make the desired impact.
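Since we're already using Docker for deployment, one low-overhead approach is a throwaway database container just for tests. As a hedged sketch of where this is heading, the service name, image tag, and credentials below are illustrative assumptions, not the book's actual configuration:

```yaml
version: '2'
services:
  db-notes-test:
    image: "mysql:5.7"
    environment:
      MYSQL_ROOT_PASSWORD: "w0rdw0rd"
      MYSQL_DATABASE: notestest
    ports:
      - "3306:3306"
```

Starting and stopping this container around a test run gives us a real MySQL server containing only disposable test data, without touching the production database.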