Saturday, January 31, 2009

Parameterized Testing - Very Easy with TestNG's DataProvider

I was starting work on some parameterized JUnit tests the other day, when a co-worker*, someone who has an uncanny knack for reducing my complicated questions to simple answers, suggested, "Why don't you try a TestNG DataProvider instead?"

I was already trying to write the tests in Groovy, a scripting language with which I had very little experience, so the prospect of also trying a new test framework was not exactly appealing. But, I decided to give it a try. After a couple of hours of trial and error and self-inflicted syntax errors, I came to the conclusion that a TestNG DataProvider was the way to go. DataProviders are easy to use, and the coding is very straightforward.

TestNG[1] was developed by Cedric Beust and Alexandru Popescu in response to limitations in JUnit3. Actually, "limitations" may be too strong a word. In their book "Next Generation Java Testing"[2], they take pains to say that they developed TestNG in response to "perceived limitations" in JUnit3. In some cases, these were not limitations, but rather design goals of JUnit that were in conflict with some types of tests - for example, the manner in which JUnit re-instantiates the test class for each test case to provide a "clean" starting point. In this regard, TestNG supports test models beyond unit tests, for example, tests that are inter-dependent.

TestNG provides a couple of ways to pass parameters to test cases: through properties defined in testng.xml, or with the DataProvider annotation.
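The testng.xml route suits simple values. As a minimal sketch of that approach (the suite file entry shown in the comment is illustrative), a test method annotated with @Parameters picks up a named parameter from the suite file:

package misc

import org.testng.annotations.Parameters
import org.testng.annotations.Test

public class ParameterExample {

    /* The value comes from testng.xml, via a suite file entry such as:
       <parameter name="name" value="Groucho"/> */
    @Test
    @Parameters("name")
    public void printName(String name) {
        println("name: " + name)
    }
}

Properties like these are limited to simple values, though. DataProviders support passing complex objects as parameters. Here's how it works: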

You define a method associated with the "@DataProvider" annotation that returns an array of object arrays to your test cases.

Hang on - an array of arrays? Why is that needed? Here's why - each array of objects is passed to one invocation of the test method. In this way, you can pass multiple parameters, each of whatever object type you want, to the test cases. Say, for example, you want to pass a String and an Integer[3] to a test case. The DataProvider method returns an object of this type:

Object[][]

So that with these values:

Groucho, 1890
Harpo, 1888
Chico, 1887

The DataProvider provides:

array of objects for call #1 to test method = [Groucho] [1890]
array of objects for call #2 to test method = [Harpo] [1888]
array of objects for call #3 to test method = [Chico] [1887]

Or: this array of arrays of objects = [array for call #1] [array for call #2] [array for call #3]

Simple, right? Once you get past the idea of an array of arrays. Here's the Groovy code.

package misc

import org.testng.annotations.*
import org.testng.TestNG
import org.testng.TestListenerAdapter
import static org.testng.AssertJUnit.*

public class DataProviderExample {

    /* Test that consumes the data from the DataProvider */
    @Test(dataProvider = "theTestData")
    public void printData(String name, Integer dob) {
        println("name: " + name + " dob: " + dob)
    }

    /* Method that provides data to test methods that reference it */
    @DataProvider(name = "theTestData")
    public Object[][] createData() {
        [
            [ "Groucho", new Integer(1890) ],
            [ "Harpo",   new Integer(1888) ],
            [ "Chico",   new Integer(1887) ]
        ] as Object[][]
    }

}

When you run this with TestNG, the output looks like:

[Parser] Running:
/workspace_groovy/BlogCode/temp-testng-customsuite.xml

name: Groucho dob: 1890
name: Harpo dob: 1888
name: Chico dob: 1887
PASSED: printData("Groucho", 1890)
PASSED: printData("Harpo", 1888)
PASSED: printData("Chico", 1887)

===============================================
misc.DataProviderExample
Tests run: 3, Failures: 0, Skips: 0
===============================================
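By the way, the TestNG and TestListenerAdapter imports in the listing aren't used by the test class itself; they come into play if you want to launch the tests programmatically instead of through an IDE plugin or the command line. Here's a minimal sketch of that style of invocation, following the programmatic API shown in the TestNG documentation (it assumes the compiled DataProviderExample class is on the classpath):

import org.testng.TestNG
import org.testng.TestListenerAdapter
import misc.DataProviderExample

// Run the test class from a plain Groovy script, collecting
// results with a TestListenerAdapter
def listener = new TestListenerAdapter()
def testng = new TestNG()
testng.setTestClasses([ DataProviderExample ] as Class[])
testng.addListener(listener)
testng.run()
println("passed: " + listener.getPassedTests().size())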

I'll return to Groovy and TestNG in some future blog posts as they are very "groovy" test tools. Well, as Groucho would say, "hello, I must be going..."

References:

[1] http://testng.org/doc
[2] http://testng.org/doc/book.html
[3] This is a very slight variation on the sample shown here: http://testng.org/doc/documentation-main.html#parameters-dataproviders

* Thank you, Jirka! ;-)

Saturday, January 24, 2009

Defect-Driven Test Design - What if You Start Where You Want to Finish?

When I start a new software testing project, I always begin by drafting a test plan. I've always viewed test plans as useful tools in pointing your test imagination in the right directions for the project at hand. The very act of committing your thoughts, knowledge, and the results of your test and project investigation to a written plan enforces a level of discipline and organization on the testing. The adage that "plan is also a verb" is very true with regard to test plans.

A written plan is also a great vehicle for collecting input from other members of your project team: your Development team counterparts, from whom you can gather information about which new features may be most at risk; the support team, from whom you can collect information about the issues that are affecting your customers; and the marketing team, from whom you can collect information about business priorities and customer expectations. The task of conducting a review of your plan with these other members of your project team can be a grueling exercise, as they may question both the means and motivations supporting your test design or strategy, but their input, along with the project design and functional specifications, is a crucial element in creating an effective plan.

But wait - there's another source of information that you should mine. Your bug database.

Think about it for a minute. Where do you find bugs? New code is always suspect, as you haven't seen how it behaves under test. But where else should you look? In the code where you have found bugs in the past.[1] Odds are, this code is complex, which means it may be hard to change or maintain without introducing new bugs; or it is subject to constant change as the application under test evolves from release to release; or maybe its design is faulty, so it has to be repeatedly corrected; or maybe it's just plain buggy. By examining your bug database, you can get a good picture of which components in your application have a checkered past.

However, you shouldn't stop there. What you want to do is to create a bug taxonomy[2] for your application.

The term "taxonomy" is typically used to describe the classification of living organisms into ordered groups. You should do the same thing with your bug database. Each bug probably contains information about the component or module in which the bug was found. You can use this data to identify the parts of the application that are at risk, based on their bug-related history. You also have have information as to the the types of bugs that you've found in the past. Maybe you'll find a pattern of user access permission problems, where classes of users are able to exercise functions that should only be available to admin users. Or maybe you'll find a pattern of security problems where new features failed to incorporate existing security standards. These are the areas in which you should invest time developing new tests.

OK, so now you have your past history of application-specific bugs grouped into a classification that describes the history of your application. What can you do to make the taxonomy an even more effective testing tool? Move beyond the bugs themselves, classify the root causes of the bugs, and then attack these root causes on your project. You may find that many bugs are caused by problems in your development, design, documentation, and even your test processes. For example, unclear requirements definitions may result in features being implemented incorrectly, or in unnecessary or invalid test environments being used. For an extensive bug taxonomy, including multiple types of requirements-related problems, see the taxonomy created by Boris Beizer in his 1990 book "Software Testing Techniques."[3] This is probably the best known bug taxonomy. It includes bug classifications based on implementation, design, integration, requirements and many other areas.

But, don't stop there. What you have in your taxonomy is a representation of the bugs that you have found. What about the bugs that you haven't (yet) found? Take your taxonomy and use it as the starting point for brainstorming about the types of bugs that your application MAY contain. Expand your taxonomy to describe not only what has happened, but what might happen. For example, if you have seen database connection related problems in the past, you could test to ensure that database transactions can be rolled back by the application when its database connection fails. Then take it one step further. If you're having problems with database connections failing, what about other server connections? What about the LDAP server that the application relies on for authentication? How does the application respond if that server is unreachable? Does the application crash, or does it generate an informative error message for the user?
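A brainstormed bug type like that translates directly into a new test. Here's a sketch of what the LDAP check might look like as a TestNG test. Everything application-side here is hypothetical: the Application class below is a stand-in stub, included only so the sketch is self-contained; in real use you would point your actual application client at an unreachable LDAP server.

import org.testng.annotations.Test
import static org.testng.AssertJUnit.*

/* Hypothetical stand-in for the real application client; it simulates
   the LDAP server being unreachable so the sketch can run on its own. */
class Application {
    String ldapUrl
    def login(String user, String password) {
        // A real client would attempt an LDAP bind against ldapUrl here
        [succeeded: false, message: "authentication service unavailable"]
    }
}

public class FailureModeTests {

    /* The application should fail with an informative message, not
       crash or hang, when its LDAP server can't be reached */
    @Test
    public void loginReportsUnreachableLdapServer() {
        def app = new Application(ldapUrl: "ldap://unreachable.example.com:389")
        def result = app.login("someUser", "somePassword")
        assertFalse(result.succeeded)
        assertTrue(result.message.contains("unavailable"))
    }
}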

How can you then use the taxonomy that you've built? Giri Vijayaraghavan and Cem Kaner identified another use for your bug taxonomy in their 2003 paper "Bug Taxonomies: Use Them to Generate Better Tests"[4]. Their suggestion is to use the bug taxonomy as a check against your test plans. For every bug type that you can define, you should have a corresponding test in your test plans. In other words, if you don't have such a test, then you have a hole in your test planning.
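That cross-check is mechanical enough to script. A minimal sketch (both lists are illustrative placeholders; in practice they would come from your taxonomy and from the tests named in your test plans):

// Report bug types in the taxonomy that no planned test covers
def taxonomy = [
    "user access permissions",
    "security standards compliance",
    "database connection loss",
    "LDAP server unreachable"
]
def coveredByTests = [
    "user access permissions",
    "database connection loss"
]
def holes = taxonomy - coveredByTests
holes.each { bugType ->
    println("No planned test covers bug type: " + bugType)
}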

To sum it all up, bugs are the targets of software tests, and removing bugs from your product is the goal of your software testing. You shouldn't, however, look at the bugs as a final destination. They are also a persistent resource and, as such, part of the institutional memory of your project. Building new tests based on these bugs should be part of your testing. Taking the bugs as a starting point, and expanding them into a taxonomy of both actual and potential bugs, should be part of your effort to complete the always "unfinished agenda" of your future testing.

References:

[1] Glenford Myers, The Art of Software Testing, p.11.

[2] http://en.wikipedia.org/wiki/Taxonomy

[3] Boris Beizer, Software Testing Techniques, 2nd edition (New York: Van Nostrand Reinhold, 1990). The bug statistics and taxonomy can be copied and used. I found it on-line here: http://opera.cs.uiuc.edu/probe/reference/debug/bugtaxst.doc

[4] http://www.testingeducation.org/a/bugtax.pdf