Saturday, January 24, 2009

Defect-Driven Test Design - What if You Start Where You Want to Finish?

When I start a new software testing project, I always begin by drafting a test plan. I've always viewed test plans as useful tools for pointing your test imagination in the right directions for the project at hand. The very act of committing your thoughts, knowledge, and the results of your test and project investigation to a written plan enforces a level of discipline and organization on the testing. The adage that "plan is also a verb" is very true with regard to test plans.

A written plan is also a great vehicle for collecting input from other members of your project team: your development team counterparts, from whom you can gather information about which new features may be most at risk; the support team, from whom you can collect information about the issues that are affecting your customers; and the marketing team, from whom you can collect information about business priorities and customer expectations. Conducting a review of your plan with these other members of your project team can be a grueling exercise, as they may question both the means and motivations supporting your test design or strategy, but their input, along with the project design and functional specifications, is a crucial element in creating an effective plan.

But wait - there's another source of information that you should mine. Your bug database.

Think about it for a minute. Where do you find bugs? New code is always suspect, as you haven't seen how it behaves under test. But where else should you look? In the code where you have found bugs in the past.[1] Odds are, this code is complex, which means it may be hard to change or maintain without introducing new bugs; or it is subject to constant change as the application under test evolves from release to release; or maybe its design is faulty, so it has to be repeatedly corrected; or maybe it's just plain buggy. By examining your bug database, you can get a good picture of which components in your application have a checkered past.
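If your bug tracker can be exported to a database or spreadsheet, this is a quick exercise to script. Here is a minimal sketch, assuming a hypothetical SQLite export with a "bugs" table that has a "component" column; the idea is simply to rank components by their bug history.

import sqlite3
from collections import Counter

def bug_counts_by_component(db_path):
    """Count historical bugs per component from a (hypothetical) tracker export."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute("SELECT component FROM bugs").fetchall()
    finally:
        conn.close()
    return Counter(component for (component,) in rows)

if __name__ == "__main__":
    counts = bug_counts_by_component("bugtracker.db")
    for component, count in counts.most_common(10):
        print(component, "-", count, "historical bugs")

The components at the top of that list are the ones whose new and changed code deserves the closest attention in your plan.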

However, you shouldn't stop there. What you want to do is to create a bug taxonomy[2] for your application.

The term "taxonomy" is typically used to describe the classification of living organisms into ordered groups. You should do the same thing with your bug database. Each bug probably contains information about the component or module in which the bug was found. You can use this data to identify the parts of the application that are at risk, based on their bug-related history. You also have have information as to the the types of bugs that you've found in the past. Maybe you'll find a pattern of user access permission problems, where classes of users are able to exercise functions that should only be available to admin users. Or maybe you'll find a pattern of security problems where new features failed to incorporate existing security standards. These are the areas in which you should invest time developing new tests.

OK, so now you have your past history of application-specific bugs grouped into a classification that describes the history of your application. What can you do to make the taxonomy an even more effective testing tool? Move beyond the bugs themselves to classify their root causes, and then attack those root causes on your project. You may find that many bugs are caused by problems in your development, design, documentation, and even your test processes. For example, unclear requirements definitions may result in features being implemented incorrectly, or in unnecessary or invalid test environments being used. For an extensive bug taxonomy, including multiple types of requirements-related problems, see the taxonomy created by Boris Beizer in his 1990 book "Software Testing Techniques."[3] This is probably the best-known bug taxonomy. It includes bug classifications based on implementation, design, integration, requirements, and many other areas.
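Extending the same idea to root causes is mostly a matter of adding one more classification field. The sketch below assumes a hypothetical "root_cause" field on each bug record; the tally shows which process areas are producing the most bugs, and therefore where to push back first.

from collections import Counter

bugs = [
    {"id": 101, "root_cause": "unclear requirement"},
    {"id": 102, "root_cause": "unclear requirement"},
    {"id": 103, "root_cause": "missing design review"},
]

for cause, count in Counter(bug["root_cause"] for bug in bugs).most_common():
    print(cause, "-", count, "bugs")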

But don't stop there. What you have in your taxonomy is a representation of the bugs that you have found. What about the bugs that you haven't (yet) found? Take your taxonomy and use it as the starting point for brainstorming about the types of bugs that your application MAY contain. Expand your taxonomy to describe not only what has happened, but what might happen. For example, if you have seen database connection related problems in the past, you could test to ensure that database transactions can be rolled back by the application when its database connection fails. Then take it one step further. If you're having problems with database connections failing, what about other server connections? What about the LDAP server that the application relies on for authentication? How does the application respond if that server is unreachable? Does the application crash, or does it generate an informative error message for the user?
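To make the database-connection example concrete, here is one way such a negative test might look using Python's unittest, with unittest.mock simulating the lost connection. The RecordService class is a hypothetical stand-in for your application layer; the point is the shape of the test, not the service itself.

import unittest
from unittest import mock

class RecordService:
    """Hypothetical application facade that depends on a database connection."""

    def __init__(self, connection):
        self.connection = connection

    def save_record(self, record):
        # Roll back and report an informative error if the connection drops,
        # rather than letting the failure crash the application.
        try:
            self.connection.insert(record)
            self.connection.commit()
        except ConnectionError:
            self.connection.rollback()
            return "Error: the database is unavailable. Your changes were not saved."
        return "Saved."

class ConnectionFailureTests(unittest.TestCase):
    def test_database_failure_rolls_back_and_reports(self):
        connection = mock.Mock()
        connection.insert.side_effect = ConnectionError("connection lost")
        service = RecordService(connection)

        message = service.save_record({"id": 1})

        connection.rollback.assert_called_once()           # transaction rolled back
        self.assertIn("database is unavailable", message)  # informative message, no crash

if __name__ == "__main__":
    unittest.main()

A companion test for the LDAP case would follow the same pattern, with the authentication call raising the simulated connection error.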

How can you then use the taxonomy that you've built? Giri Vijayaraghavan and Cem Kaner identified another use for your bug taxonomy in their 2003 paper "Bug Taxonomies: Use Them to Generate Better Tests."[4] Their suggestion is to use the bug taxonomy as a check against your test plans. For any bug type that you can define, you should have a corresponding test in your test plans; if you don't have such a test, then you have a hole in your test planning.
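A simple way to automate that check, sketched below with made-up bug types and test names, is to treat the taxonomy as a set of bug types and flag any type that has no planned test against it.

bug_types = {
    "access permissions",
    "security standards",
    "database connection loss",
    "LDAP server unreachable",
}

planned_tests = {
    "verify non-admin users cannot reach admin functions": "access permissions",
    "verify rollback when the database connection drops": "database connection loss",
}

covered = set(planned_tests.values())
for bug_type in sorted(bug_types - covered):
    print("No planned test covers bug type:", bug_type)

Every line that prints is a hole in the test plan.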

To sum it all up, bugs are the targets of your software tests, and removing bugs from your product is the goal of your software testing. You shouldn't, however, look at the bugs as a final destination. They are also a persistent resource and, as such, part of the institutional memory of your project. Building new tests based on these bugs should be part of your testing. Taking the bugs as a starting point, and expanding them into a taxonomy of both actual and potential bugs, should be part of your effort to complete the always "unfinished agenda" of your future testing.

References:

[1] Glenford Myers, The Art of Software Testing (New York: John Wiley & Sons, 1979), p. 11.

[2] http://en.wikipedia.org/wiki/Taxonomy

[3] Boris Beizer, Software Testing Techniques, 2nd edition (New York: Van Nostrand Reinhold, 1990). The bug statistics and taxonomy can be copied and used. I found it on-line here: http://opera.cs.uiuc.edu/probe/reference/debug/bugtaxst.doc

[4] Giri Vijayaraghavan and Cem Kaner, "Bug Taxonomies: Use Them to Generate Better Tests," 2003. http://www.testingeducation.org/a/bugtax.pdf

1 comment:

Unknown said...

Nice, I am about to start a new version of our software and need to redo my Test Plans and Test Case Matrices - I'm going to try a taxonomy exercise and see what I come up with.