Presenters: Markus Hjort & Tuomas Karkkainen

Objectives:

Learn how to write better tests, and understand what the objectives and limitations of automated tests are.

Intended audience:

Developers, developers, developers: intermediate to advanced, experienced with or wanting to learn about assessing the quality and coverage of automated tests.

Bring a laptop!

Contents

Although developer testing has reached the mainstream, on closer inspection these tests are often not helpful. Low-quality tests prevent refactoring, create a false sense of security, and confuse developers. This interactive presentation addresses the assessment of developer test quality.

Participants will work through a series of exercises that help them evaluate the quality of tests. The exercises demonstrate common test problems found in real-world applications. The presenters and the audience discuss what they consider good tests and the methodologies used to assess them.

During the session attendees will consider the following:

  • Do the tests cover enough of the behavior of the production class? Can the behavior be changed to do something unintended without getting a red bar?
  • Can the production code be refactored without invalidating the tests? That is, do the tests break if the behavior changes, but pass if only the design changes?
  • Is the granularity of the tests correct? Are the tests focused on the behavior of one production class and its interaction with other objects, or do they depend on the implementation of its collaborators?
  • Are the tests written to communicate the intent of the production code, eliminating the need for Javadoc or other documentation?
  • What to do when a project's version control contains tests that do not pass? Are the tests broken, or is the production code broken? How can you tell the difference?
  • Do the tests look like they were written earnestly, or only to satisfy an outside observer? For example, does a test accurately specify behavior, or does it merely run through the code to inflate the numbers reported by a coverage measurement tool?
  • How many corner cases must be tested? Not all of them are relevant, but some are critical. Finding the right balance between necessary and burdensome tests is key.
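To make the contrast concrete, here is a minimal sketch in Java (not taken from the session materials; the `PriceCalculator` class and its discount rule are invented for illustration). The first call merely executes the code, which raises coverage numbers without specifying anything; the assertions that follow pin down the intended behavior and would fail if the rule changed.

```java
// Hypothetical example: a coverage-only "test" versus an
// intention-revealing test of the same production code.
public class PriceCalculatorTest {

    // Production class under test (invented for this sketch):
    // orders of 100 or more get a 10% discount.
    static class PriceCalculator {
        int discountedPrice(int price) {
            return price >= 100 ? price * 90 / 100 : price;
        }
    }

    public static void main(String[] args) {
        PriceCalculator calc = new PriceCalculator();

        // Coverage-only "test": runs the code, asserts nothing.
        // A coverage tool reports the line as covered all the same.
        calc.discountedPrice(150);

        // Intention-revealing tests: each assertion documents a rule.
        assertEquals(135, calc.discountedPrice(150)); // 10% off at 100 and above
        assertEquals(99, calc.discountedPrice(99));   // no discount below 100
        assertEquals(90, calc.discountedPrice(100));  // boundary case included

        System.out.println("all tests passed");
    }

    // Tiny stand-in for a test framework assertion.
    static void assertEquals(int expected, int actual) {
        if (expected != actual)
            throw new AssertionError("expected " + expected + " but was " + actual);
    }
}
```

Both versions produce the same coverage figure, which is exactly why a numerical coverage score alone cannot distinguish them.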

The session will end with a playful competition similar to the other exercises.

Format and length: Interactive presentation, 90 minutes

We will distribute bootable Linux live CDs that contain all the required tools and source code and automatically start Eclipse when booted. We have run this same exercise before at XPDay London, and the live CDs worked very well: they booted on almost all computers, and we had three spare laptops that were also distributed to participants. Participants will work on the exercises in groups of 3 to 4 people.