1. The API
The APIs of Jasmine and Mocha are very similar: you organize your test suite with describe blocks and write each test, also called a spec, using the it function.
The assertions, or expectations as they are often called, are where things start to differ. Mocha does not have a built-in assertion library. There are several options, though, for both Node and the browser: Chai, should.js, expect.js, and better-assert. Many developers choose Chai as their assertion library. Because none of these assertion libraries come with Mocha, this is another thing you will need to load into your test harness. Chai comes with three different assertion flavors: the assert style, the expect style, and the should style. The expect style is similar to what Jasmine provides. For example, if you want to write an expectation that verifies a value equals 5, this is how you would do it with both Jasmine and Chai:
Pretty similar, right? If you are switching from Jasmine to Mocha, the path with the easiest learning curve is to use Chai with the expect style. In Jasmine, assertion methods like toEqual() use camel case, whereas the complement in Chai uses dot notation: to.equal(). Both Jasmine and Mocha use beforeEach and afterEach functions for setup and teardown.
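The shared suite structure looks like this in both frameworks. In the sketch below, describe, it, and beforeEach are tiny stand-ins defined inline so the example runs on its own; in a real suite they come from Jasmine or Mocha:

```javascript
// Minimal stand-ins for the framework-provided functions:
function describe(name, fn) { console.log(name); fn(); }
const hooks = [];
function beforeEach(fn) { hooks.push(fn); }
function it(name, fn) {
  hooks.forEach((h) => h()); // run setup before every spec
  fn();
  console.log('  \u2713 ' + name);
}

describe('calculator', () => {
  let total;
  beforeEach(() => { total = 0; }); // fresh state before every spec
  it('adds', () => { total += 2; console.assert(total === 2); });
  it('starts clean again', () => { console.assert(total === 0); });
});
```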
2. Test Doubles
Test doubles are often compared to stunt doubles, as they replace one object with another for testing purposes, similar to how actors and actresses are replaced with stunt doubles for dangerous action scenes. In Jasmine, test doubles come in the form of spies. A spy is a function that replaces a particular function where you want to control its behavior in a test and record how that function is used during the execution of that test. Some of the things you can do with spies include:
- See how many times a spy was called
- Specify a return value to force your code to go down a certain path
- Tell a spy to throw an error
- See what arguments a spy was called with
- Tell a spy to call the original function (the function it is spying on). By default, a spy will not call the original function.
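The capabilities above can be sketched with a hand-rolled spy (a simplified stand-in to show the idea, not Jasmine's actual implementation):

```javascript
// A spy records its calls and can force a return value or an error.
function createSpy() {
  function spy(...args) {
    spy.calls.push(args);           // record every call and its arguments
    if (spy.error) throw spy.error; // optionally throw an error
    return spy.returnValue;         // optionally force a return value
  }
  spy.calls = [];
  return spy;
}

// Usage (fetchTotal is a hypothetical name):
const fetchTotal = createSpy();
fetchTotal.returnValue = 42;        // force the code down a given path
console.log(fetchTotal('cart-1'));  // 42
console.log(fetchTotal.calls.length); // called once
console.log(fetchTotal.calls[0]);   // [ 'cart-1' ]
```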
In Jasmine, you spy on an existing method with spyOn(object, 'methodName'), which replaces that method for the duration of the spec.
You can also create a standalone spy with jasmine.createSpy() if there is no existing method you want to spy on.
In contrast, Mocha does not come with a test double library. Instead, you will need to load Sinon into your test harness. Sinon is a very powerful test double library: the equivalent of Jasmine spies, with a little more. One thing to note is that Sinon breaks test doubles into three categories: spies, stubs, and mocks, each with subtle differences.
A spy in Sinon calls through to the method being spied on whereas you have to specify this behavior in Jasmine. For example:
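The call-through behavior can be sketched by hand (a simplified, Sinon-flavored stand-in; the real API is sinon.spy(obj, 'method')):

```javascript
// Wrap a method so calls are recorded but still reach the original.
function createCallThroughSpy(obj, methodName) {
  const original = obj[methodName];
  function spy(...args) {
    spy.calls.push(args);             // record how the spy was used
    return original.apply(obj, args); // call through to the original
  }
  spy.calls = [];
  obj[methodName] = spy;
  return spy;
}

// Usage (mailer is a hypothetical object):
const mailer = { send: (to) => 'sent to ' + to };
const spy = createCallThroughSpy(mailer, 'send');
console.log(mailer.send('gwen@example.com')); // original behavior preserved
console.log(spy.calls.length);                // 1
```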
In your test, the original method would still be called.
The next type of test double is a stub, which acts as a controllable replacement. Stubs are similar to the default behavior of Jasmine spies where the original method is not called. For example:
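The replacement behavior can be sketched like this (a simplified stand-in; Sinon's real API is sinon.stub(obj, 'method').returns(value)):

```javascript
// Replace a method entirely and hand back a canned value.
function stubMethod(obj, methodName) {
  const fake = (...args) => {
    fake.calls.push(args);       // stubs still record usage, like spies
    return fake.returnValue;
  };
  fake.calls = [];
  fake.returns = (value) => { fake.returnValue = value; return fake; };
  obj[methodName] = fake;
  return fake;
}

// Usage (api is a hypothetical object): the original is never called.
const api = { fetchUsers: () => { throw new Error('real network call'); } };
stubMethod(api, 'fetchUsers').returns(['Gwen', 'John']);
console.log(api.fetchUsers()); // [ 'Gwen', 'John' ] -- no network hit
```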
In your code, if the stubbed method is called during the execution of your tests, the original would not be called; a fake version (the test double) returning a canned value would be used instead. In Sinon, a stub is a test double built on top of a spy, so stubs can also record how the function is used.
So, to summarize, a spy is a type of test double that records how a function is used. A stub is a type of test double that acts as a controllable replacement as well as having the capabilities of a spy.
From my experience, Jasmine spies cover almost everything I need for test doubles so in many situations you won’t need to use Sinon if you are using Jasmine, but you can use the two together if you would like. One reason I do use Sinon with Jasmine is for its fake server (more on this later).
3. Asynchronous Testing
Asynchronous testing in Jasmine 2.x and Mocha is the same.
Above, a constructor function with a static method is under test; behind the scenes, that method performs the XHR request. I want to assert that when the method resolves successfully, the resolved value is an instance of the constructor. Because the underlying AJAX call is stubbed out to return a pre-resolved promise, no real AJAX request is made. However, this code is still asynchronous.
By simply specifying a parameter in the callback function (I have called it done, as in the documentation, but you can call it whatever you want), the test runner will pass in a function and wait for it to execute before ending the test. The test will time out and error if done is not called within a certain time limit. This gives you full control over when your tests complete. The above test would work in both Mocha and Jasmine 2.x.
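The mechanics can be sketched with a toy runner (my own stand-in, not Mocha's or Jasmine's actual implementation): if the test function declares a parameter, the runner passes in a callback and waits for it, failing on a timeout.

```javascript
function runTest(name, fn) {
  return new Promise((resolve, reject) => {
    if (fn.length > 0) {
      // async spec: wait for done() or time out
      const timer = setTimeout(() => reject(new Error(name + ' timed out')), 2000);
      fn(() => { clearTimeout(timer); resolve(name + ' passed'); });
    } else {
      // sync spec: finishes as soon as the function returns
      fn();
      resolve(name + ' passed');
    }
  });
}

// Usage: an asynchronous spec that calls done once the promise settles.
runTest('resolves a user', (done) => {
  Promise.resolve({ name: 'Gwen' }).then((user) => {
    // an assertion on `user` would go here
    done();
  });
}).then((msg) => console.log(msg));
```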
If you are working with Jasmine 1.3, asynchronous testing is not so pretty.
Example Jasmine 1.3 Asynchronous Test
In this Jasmine 1.3 asynchronous test example, Jasmine will wait a maximum of 500 milliseconds for the asynchronous operation to complete; otherwise, the test fails. waitsFor() constantly checks whether the flag becomes true. Once it does, the next runs() block, which contains my assertion, executes.
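The polling behavior of waitsFor can be sketched like this (a simplified stand-in, not Jasmine itself): poll a condition until it is true or a timeout elapses.

```javascript
function waitsFor(condition, timeoutMs) {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    const timer = setInterval(() => {
      if (condition()) {
        clearInterval(timer);
        resolve();
      } else if (Date.now() - start > timeoutMs) {
        clearInterval(timer);
        reject(new Error('timed out waiting for condition'));
      }
    }, 10);
  });
}

// Usage: a flag flips asynchronously; the follow-up runs once it is set.
let fetched = false;
setTimeout(() => { fetched = true; }, 50);
waitsFor(() => fetched, 500).then(() => console.log('condition met'));
```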
4. Sinon Fake Server
One feature that Sinon has that Jasmine does not is a fake server. This allows you to set up fake responses to AJAX requests made to certain URLs.
In the above example, if a request is made to the users endpoint, a 200 response containing two users, Gwen and John, is returned. This can be really handy for a few reasons. First, it allows you to test code that makes AJAX calls regardless of which AJAX library you are using. Second, you may want to test a function that makes an AJAX call and does some preprocessing on the response before the promise resolves. Third, maybe there are several responses that can be returned based on whether the request succeeds or fails, such as a successful credit card charge, an invalid credit card number, an expired card, an invalid CVC, and so on. You get the idea. If you have worked with Angular, Sinon's fake server is similar to the $httpBackend service provided in angular-mocks.
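The concept can be sketched with a tiny hand-rolled fake server (a simplification for illustration; Sinon's real API is sinon.fakeServer.create() plus server.respondWith()):

```javascript
// Map method + URL to canned [status, headers, body] responses.
function createFakeServer() {
  const routes = new Map();
  return {
    respondWith(method, url, response) {
      routes.set(method + ' ' + url, response);
    },
    request(method, url) {
      return routes.get(method + ' ' + url) || [404, {}, ''];
    },
  };
}

// Usage: a GET to /users returns two users, Gwen and John.
const server = createFakeServer();
server.respondWith('GET', '/users', [
  200,
  { 'Content-Type': 'application/json' },
  JSON.stringify([{ name: 'Gwen' }, { name: 'John' }]),
]);
console.log(server.request('GET', '/users')[0]); // 200
```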
5. Running Tests
Mocha comes with a command line utility that you can use to run tests. For example:
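A typical invocation might look like this (the tests directory name is an assumption; adapt it to your layout):

```shell
# Run every spec under ./tests, including subdirectories, and rerun on changes.
mocha tests --recursive --watch
```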
This assumes your tests are located in a dedicated test directory. The --recursive flag finds all test files in subdirectories, and the --watch flag watches your source and test files and reruns the tests when they change.
Jasmine, however, does not have a command line utility to run tests. There are test runners out there for Jasmine, and a very popular one is Karma, by the Angular team. Karma also supports Mocha if you'd like to run your Mocha tests that way.
In conclusion, the Jasmine framework has almost everything built in, including assertions/expectations and test double utilities (which come in the form of spies). However, it does not have a test runner, so you will need a tool like Karma for that. Mocha, on the other hand, includes a test runner and an API for setting up your test suite but does not include assertion or test double utilities. There are several choices for assertions when using Mocha, and Chai tends to be the most popular. Test doubles in Mocha also require another library, and Sinon.js is often the de facto choice. Sinon can also be a great addition to your test harness for its fake server implementation.
So, if you were to choose a test framework setup today, what might it look like?
If you go with Jasmine, you will likely use:
- Karma (for the test runner)
- Sinon (possibly for its fake server unless your framework provides an equivalent, like if you are using Angular)
If you go with Mocha, you will likely use:
- Chai (for assertions)
- Sinon (for test doubles and its fake server)
- Karma or the Mocha CLI (for the test runner)
What is a unit, anyway? In the best case, it is a pure function that you can reason about in isolation: a function that always gives you the same result for a given input. This makes unit testing pretty easy, but most of the time you need to deal with side effects, which here means DOM manipulations. It's still useful to figure out which units we can structure our code into and to build unit tests accordingly.
Building Unit Tests
With that in mind, we can obviously say that starting with unit testing is much easier when starting something from scratch. But that’s not what this article is about. This article is to help you with the harder problem: extracting existing code and testing the important parts, potentially uncovering and fixing bugs in the code.
The process of extracting code and putting it into a different form, without modifying its current behavior, is called refactoring. Refactoring is an excellent method of improving the code design of a program; and because any change could actually modify the behavior of the program, it is safest to do when unit tests are in place.
This chicken-and-egg problem means that to add tests to existing code, you have to take the risk of breaking things. So, until you have solid coverage with unit tests, you need to continue manually testing to minimize that risk.
If you ran that example, you'd see a problem: none of the dates get replaced. The code works, though: it loops through all anchors on the page and checks each one for a title property. If there is one, it passes its value to the prettyDate function. If prettyDate returns a result, it updates the innerHTML of the link with that result.
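The loop just described can be reduced to a testable sketch that runs without a DOM (updateLinks and the format parameter are names of my own, not the article's; link-like plain objects stand in for anchors):

```javascript
function updateLinks(links, format) {
  for (const link of links) {
    if (link.title) {             // anchors carry a timestamp in `title`
      const pretty = format(link.title);
      if (pretty) {               // only replace text when a result came back
        link.innerHTML = pretty;
      }
    }
  }
}

// Usage with a canned formatter:
const links = [
  { title: '2008-01-28T20:24:17Z', innerHTML: 'January 28th, 2008' },
  { title: '', innerHTML: 'no timestamp here' },
];
updateLinks(links, () => '2 hours ago');
console.log(links[0].innerHTML); // '2 hours ago'
console.log(links[1].innerHTML); // 'no timestamp here'
```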
Make Things Testable
The problem is that, for any date older than 31 days, prettyDate just returns undefined (implicitly, with a single return statement), leaving the text of the anchor as is. So, to see what's supposed to happen, we can hardcode a "current" date:
Now, the links should say “2 hours ago,” “Yesterday” and so on. That’s something, but still not an actual testable unit. So, without changing the code further, all we can do is try to test the resulting DOM changes. Even if that did work, any small change to the markup would likely break the test, resulting in a really bad cost-benefit ratio for a test like that.
Refactoring, Stage 0
Instead, let’s refactor the code just enough to have something that we can unit test.
We need to make two changes for this to happen: pass the current date to the prettyDate function as an argument, instead of having it read new Date() internally, and extract the function to a separate file so that we can include the code on a separate page for unit tests.
Here's the contents of the extracted file:
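A sketch of what the extracted file might contain, modeled on John Resig's well-known prettyDate (treat the thresholds as illustrative; the key point is that the current time is now an argument instead of being read inside the function):

```javascript
function prettyDate(now, time) {
  var date = new Date(time || ''),
      diff = (new Date(now).getTime() - date.getTime()) / 1000,
      dayDiff = Math.floor(diff / 86400);

  // dates in the future or older than 31 days: return undefined
  if (isNaN(dayDiff) || dayDiff < 0 || dayDiff >= 31) {
    return;
  }

  return dayDiff === 0 && (
           diff < 60 && 'just now' ||
           diff < 120 && '1 minute ago' ||
           diff < 3600 && Math.floor(diff / 60) + ' minutes ago' ||
           diff < 7200 && '1 hour ago' ||
           diff < 86400 && Math.floor(diff / 3600) + ' hours ago'
         ) ||
         dayDiff === 1 && 'Yesterday' ||
         dayDiff < 7 && dayDiff + ' days ago' ||
         dayDiff < 31 && Math.ceil(dayDiff / 7) + ' weeks ago';
}

console.log(prettyDate('2008/01/28 22:25:00', '2008/01/28 20:24:17')); // '2 hours ago'
```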
Now that we have something to test, let’s write some actual unit tests:
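An ad-hoc harness in this spirit might look like the following (my own stand-in; daysAgo is a hypothetical formatter used here only so the sketch is self-contained):

```javascript
// Tally passed/failed comparisons and print a summary at the end.
let passed = 0, failed = 0;
function assertEqual(expected, actual) {
  if (expected === actual) {
    passed++;
  } else {
    failed++;
    console.log('Expected ' + expected + ', but was ' + actual);
  }
}

// Hypothetical formatter under test:
function daysAgo(n) {
  return n === 0 ? 'today' : n === 1 ? 'Yesterday' : n + ' days ago';
}

assertEqual('today', daysAgo(0));
assertEqual('Yesterday', daysAgo(1));
assertEqual('2 days ago', daysAgo(2));
assertEqual('3 days ago', daysAgo(3));
assertEqual('7 days ago', daysAgo(7));
assertEqual('30 days ago', daysAgo(30));

console.log('Of ' + (passed + failed) + ' tests, ' + failed + ' failed, ' + passed + ' passed.');
```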
- Run this example. (Make sure to enable a console such as Firebug or Chrome’s Web Inspector.)
If a test fails, it will output the expected and actual result for that test. In the end, it will output a test summary with the total, failed and passed number of tests.
If all tests have passed, like they should here, you would see the following in the console:
Of 6 tests, 0 failed, 6 passed.
To see what a failed assertion looks like, we can change something to break it:
Expected 2 day ago, but was 2 days ago.
Of 6 tests, 1 failed, 5 passed.
While this ad-hoc approach is interesting as a proof of concept (you really can write a test runner in just a few lines of code), it’s much more practical to use an existing unit testing framework that provides better output and more infrastructure for writing and organizing tests.
The choice of framework is mostly a matter of taste. For the rest of this article, we’ll use QUnit (pronounced “q-unit”), because its style of describing tests is close to that of our ad-hoc test framework.
Three sections are worth a closer look here. Along with the usual HTML boilerplate, we have three included files: two files for QUnit (qunit.css and qunit.js) and the extracted file from before.
Then, there's another script block with the actual tests. The test() method is called once, passing a string as the first argument (naming the test) and a function as the second argument (which runs the actual code for this test). This code then defines the now variable, which gets reused below, then calls the equal() method a few times with varying arguments. equal() is one of several assertions that QUnit provides. Its first argument is the result of a call to the function under test, with the now variable as the first argument and a date string as the second. The second argument to equal() is the expected result. If the two arguments are the same value, the assertion passes; otherwise, it fails.
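The structure just described can be sketched with minimal stand-ins for QUnit's test() and equal() so it runs on its own (in the real page they come from qunit.js; the canned format function is an assumption, standing in for the real date formatter):

```javascript
let failed = 0;
function equal(actual, expected) {
  if (actual === expected) {
    console.log('ok: ' + actual);
  } else {
    failed++;
    console.log('FAIL: expected ' + expected + ', was ' + actual);
  }
}
function test(name, fn) {
  console.log('test: ' + name);
  fn();
}

// canned formatter, for illustration only:
const format = (now, then) => ({
  '20:24:17': '2 hours ago',
  'yesterday': 'Yesterday',
}[then]);

test('format basics', function () {
  const now = '22:25:00'; // defined once, reused by every assertion
  equal(format(now, '20:24:17'), '2 hours ago');
  equal(format(now, 'yesterday'), 'Yesterday');
});
```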
Finally, in the body element is some QUnit-specific markup. These elements are optional. If present, QUnit will use them to output the test results.
The result is this:
With a failed test, the result would look something like this:
Because the test contains a failing assertion, QUnit doesn't collapse the results for that test, and we can see immediately what went wrong. Along with the output of the expected and actual values, we get a diff between the two, which can be useful for comparing larger strings. Here, it's pretty obvious what went wrong.
Refactoring, Stage 1
The assertions are currently somewhat incomplete because we aren't yet testing the weeks-ago variant. Before adding it, we should consider refactoring the test code. Currently, we are calling the function under test for each assertion and passing the now argument every time. We could easily refactor this into a custom assertion method:
Here we've extracted the call to the function under test into a helper, inlining the now variable into that helper. We end up with just the relevant data for each assertion, making it easier to read, while the underlying abstraction remains pretty obvious.
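The custom assertion can be sketched like this (the result() name and the canned prettyDate map are stand-ins for illustration; in the real test, equal() would come from QUnit and prettyDate from the extracted file):

```javascript
const now = '2008/01/28 22:25:00';

// canned stand-in for the real formatting function:
const prettyDate = (nowStr, thenStr) => ({
  '2008/01/28 20:24:17': '2 hours ago',
  '2008/01/27 22:24:17': 'Yesterday',
}[thenStr]);

let failures = 0;
function result(date, expected) {
  const actual = prettyDate(now, date); // `now` is fixed inside the helper
  if (actual !== expected) {
    failures++;
    console.log('Expected ' + expected + ', but was ' + actual);
  }
}

// Each assertion now carries only the varying data:
result('2008/01/28 20:24:17', '2 hours ago');
result('2008/01/27 22:24:17', 'Yesterday');
console.log(failures === 0 ? 'all passed' : failures + ' failed');
```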
Testing The DOM Manipulation
Now that the formatting function is tested well enough, let's shift our focus back to the initial example. Along with that function, the example also selected some DOM elements and updated them, within the load event handler. Applying the same principles as before, we should be able to refactor that code and test it. In addition, we'll introduce a module for these two functions, to avoid cluttering the global namespace and to be able to give the individual functions more meaningful names.
Here's the contents of the updated file:
The new update function is an extract of the initial example, but with a now argument to pass through to the formatting function. The QUnit-based test for it starts by selecting all a elements within the #qunit-fixture element. In the updated markup in the body element, the #qunit-fixture div is new. It contains an extract of the markup from our initial example: enough to write useful tests against. By putting it in the fixture element, we don't have to worry about DOM changes from one test affecting other tests, because QUnit automatically resets the markup after each test.
Let's look at the first test for the update function. After selecting those anchors, two assertions verify that they have their initial text values. Afterwards, update is called, passing along a fixed date (the same as in previous tests). Then two more assertions run, now verifying that the innerHTML property of these elements contains the correctly formatted dates, "2 hours ago" and "Yesterday."
Refactoring, Stage 2
The next test does nearly the same thing, except that it passes a different date and, therefore, expects different results for the two links. Let's see if we can refactor these tests to remove the duplication.
Here we have a new helper function that encapsulates the logic of the two previous calls to test(), introducing arguments for the test name, the date string, and the two expected strings. It then gets called twice.
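The pattern, reduced to its essence, looks like this (all names below are my own, for illustration): a helper takes the test name plus the varying inputs and expectations, so each dataset becomes a one-line call.

```javascript
function parameterizedTest(name, input, expected, fn) {
  const actual = fn(input);
  console.log((actual === expected ? 'ok' : 'FAIL') + ' - ' + name);
}

const shout = (s) => s.toUpperCase(); // hypothetical function under test
parameterizedTest('uppercases hello', 'hello', 'HELLO', shout);
parameterizedTest('uppercases abc', 'abc', 'ABC', shout);
```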
Back To The Start
With that in place, let’s go back to our initial example and see what that looks like now, after the refactoring.
For a non-static example, we'd remove the now argument, falling back to the current date. All in all, the refactoring is a huge improvement over the first example. And thanks to the module we introduced, we can add even more functionality without clobbering the global namespace.
QUnit itself has a lot more to offer, with specific support for testing asynchronous code such as timeouts, AJAX and events. Its visual test runner helps to debug code by making it easy to rerun specific tests and by providing stack traces for failed assertions and caught exceptions. For further reading, check out the QUnit Cookbook.