Adding Code Coverage Reports to a JavaScript Project

January 19, 2017

The idea of code coverage as a metric to ensure a code base is exercised really appeals to me. However, each time I tried to add it to my existing project I would struggle, flounder, and decide to try again later after burning too much time on it. The lure of ready-to-run testing stacks in the form of npm packages lulled me into a false sense of ease. In truth they only work so long as you use their package management libraries, their test libraries, and basically adopt their version of the JavaScript stack. One that used mocha would also use bower; another would use gulp but not work with phantomjs or browser-based unit testing at all. In this latest attempt I succeeded by breaking the problem into its constituent parts, executing one part at a time, and only then collecting them back together into my own solution.

Today’s JavaScript ecosystem comprises many frameworks, package management tools, compilers, minifiers, and other parts that can make even getting started a month-long R&D project. Each combination of these parts into a working project is commonly referred to as a stack. Though the pieces advertise the same types of solutions, one layer of a stack cannot be replaced with another library without considering interoperability and long-term support. In a microcosm of this ecosystem live the testing stacks, which involve a specialized set of knowledge I have had to learn, re-learn, and continue learning now.

Writing unit tests and functional tests involves test runners, assertion libraries, task automation frameworks, test frameworks, code coverage reporters, and code instrumentation frameworks. It feels hard enough just to write tests, and now we have to figure out how to get all those pieces working together before we can write even our first test. Manually putting those pieces together is exactly what it takes to add code coverage to an existing stack. My desire for a shortcut, to cut corners and fit one of the fancy-looking pre-built test stacks into my existing project, was my downfall in each previous attempt. Each solution had some issue: maybe it didn’t support source maps or couldn’t run in a headless browser, and at each dead end I threw up my hands and vowed to return. Only by learning each step required to produce the desired result can you create your own stack and achieve the goal of adding code coverage.

Splitting apart the testing stack and running each step independently gave me a way to build a working solution. The first step in getting a code coverage report is instrumenting your code so that the reporter can track which functions get called and which don’t. You’ll want to run the instrumenter of your choice on your JavaScript after compilers like Babel and bundlers like Webpack have finished with it. This can take a long time for a decent-sized project and your final file size will have bloated considerably, but this isn’t for production, so don’t let the file size worry you. Since we use Babel, I settled on an instrumenter with a plugin that worked well with that component. Istanbul is the code coverage tool I ended up selecting, after finding several other components along the way that completed this stack. The babel-plugin-istanbul package executes Istanbul when set up in a Webpack config file that uses babel-loader to include the plugin and excludes test files from being instrumented. Your Webpack config must also be set up to output source maps, which we’ll need later on. At this point I am able to run webpack and I have an instrumented build. Webpack config gist

webpack webpack-coverage.config.js

Code that has been instrumented, notice the `__coverage__` text.
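For reference, a minimal sketch of what such a Webpack config might look like. The entry/output paths, the `es2015` preset, and the `specs` exclusion pattern are placeholders, not the project’s actual values:

```javascript
// webpack-coverage.config.js -- a sketch, not the project's real config.
module.exports = {
  entry: './src/index.js',
  output: {
    path: __dirname + '/build',
    filename: 'bundle.instrumented.js'
  },
  // Source maps are required later so remap-istanbul can map the
  // instrumented bundle back to the original source files.
  devtool: 'source-map',
  module: {
    loaders: [
      {
        test: /\.js$/,
        // Keep dependencies and test files out of the instrumented output.
        exclude: /(node_modules|specs)/,
        loader: 'babel-loader',
        query: {
          presets: ['es2015'],
          // babel-plugin-istanbul instruments the code as Babel compiles it.
          plugins: ['istanbul']
        }
      }
    ]
  }
};
```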

Once your code is instrumented you can move on to the next step in the process. Remember, this is one action at a time instead of one command to rule them all. I already had gulp set up to run mocha tests through the headless phantomjs browser, and I was fortunate in finding my next Istanbul component: a hook for phantomjs named mocha-phantomjs-istanbul. This hook collects the coverage data produced by your newly instrumented code as it executes inside phantomjs. The raw coverage file it outputs isn’t very useful on its own, and it won’t make much sense when we look at it. Now I can run my mocha tests, which execute in the phantomjs headless browser and export a raw coverage data file.

phantomjs ./node_modules/mocha-phantomjs/lib/ specs/testRunner.html spec '{"hooks": "mocha-phantomjs-istanbul", "coverageFile": "./results/coverage/coverage.json"}'
Code coverage data collected during execution. Not very readable.

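A sketch of how this collection step might look as a gulp task, assuming gulp-mocha-phantomjs. The paths and option shape mirror the command-line JSON above but are illustrative:

```javascript
// gulpfile.js excerpt -- a sketch assuming gulp-mocha-phantomjs;
// file names and paths are illustrative, not the original project's.
var gulp = require('gulp');
var mochaPhantomjs = require('gulp-mocha-phantomjs');

gulp.task('test-coverage', function () {
  return gulp.src('specs/testRunner.html')
    .pipe(mochaPhantomjs({
      reporter: 'spec',
      phantomjs: {
        // The hook collects the __coverage__ object that the
        // instrumented code builds up while the tests run.
        hooks: 'mocha-phantomjs-istanbul',
        coverageFile: './results/coverage/coverage.json'
      }
    }));
});
```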

Now that we have a coverage file, we can process it into a report that gives us original file names and line numbers and makes all this useful. Istanbul’s report processors don’t support Webpack-bundled code out of the box, and another library is required to make sense of a coverage file that points only at your Webpack bundle. We can use remap-istanbul, which understands the source maps decorating the Webpack output, to produce the final HTML report: a list of all your original files with execution counts, finally letting you see how much of your code hasn’t been exercised by your tests.

./node_modules/.bin/remap-istanbul --input results/coverage/coverage.json --output results/coverage --type html
Code coverage report overview


Code coverage in file shows execution counts and unexecuted lines

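This step can also run from gulp using remap-istanbul’s bundled gulp plugin; the paths here are illustrative:

```javascript
// gulpfile.js excerpt -- a sketch using remap-istanbul's gulp plugin;
// paths are illustrative.
var gulp = require('gulp');
var remapIstanbul = require('remap-istanbul/lib/gulpRemapIstanbul');

gulp.task('coverage-report', function () {
  return gulp.src('results/coverage/coverage.json')
    .pipe(remapIstanbul({
      // Map the bundle-relative coverage back to the original files
      // and emit a browsable HTML report.
      reports: { html: 'results/coverage' }
    }));
});
```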

Success! You’ve created a readable code coverage report of your project. If you don’t have an end-to-end stack that matches any test framework’s idea of ‘ready to run’, you need to think of each execution as a separate piece. Once one step is completed, you can focus on the next piece in the stack.

  1. Instrumentation of code: babel-plugin-istanbul
  2. Collection of coverage data while tests are executed: mocha-phantomjs-istanbul
  3. Output of a detailed actionable report: remap-istanbul

Our project’s end-to-end process to execute a coverage report. Gulp config gist

  1. Task runner Gulp controls the flow by starting Webpack
  2. Webpack loads the babel-loader plugin to execute Babel and compile ES6 into ES5
  3. The Babel compile process loads the babel-plugin-istanbul plugin which uses Istanbul to add instrumentation to our code
  4. Webpack outputs our instrumented code and source maps
  5. Gulp then picks up to start the test process by passing the test files to gulp-mocha-phantomjs
  6. A hook into phantomjs includes mocha-phantomjs-istanbul which will collect information output by the instrumentation code during the normal execution of our test suite.
  7. mocha-phantomjs-istanbul outputs its coverage data to a file
  8. Gulp again picks up to pass that coverage file into our report builder remap-istanbul
  9. remap-istanbul reads the coverage file, which points back to our bundled Webpack file, and looks for the bundle’s source map. It then builds a new coverage file in memory that is mapped to the original files, passes that new coverage data to Istanbul reporters, and saves their output.
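The flow above could be chained in a gulpfile along these lines; the task names and the use of run-sequence are assumptions for illustration, not the project’s actual config:

```javascript
// gulpfile.js excerpt -- one possible way to chain the steps;
// the task names and run-sequence dependency are assumptions.
var gulp = require('gulp');
var runSequence = require('run-sequence');

gulp.task('coverage', function (done) {
  // 1. webpack build with instrumentation,
  // 2. run the tests and collect raw coverage data,
  // 3. remap the data and render the HTML report.
  runSequence('webpack-coverage', 'test-coverage', 'coverage-report', done);
});
```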

In the end I was successful in adding code coverage to the project by learning and understanding each step required to create that coverage report. Don’t get hung up on trying to fit one stack into another; instead, learn what steps are required to reach your goal. Once you understand all the steps required, even a little bit, you will be able to add code coverage to your project and come up with a stack all your own.
