Sunday, December 27, 2009

Choosing Kaizen Over Cheez Whiz

The end of December is a good time to reflect on the year that is ending. You can look back on your accomplishments, your victories and defeats, and the wreckage (or just the abandonment) of your New Year's resolutions from the previous year. This year, the end of the year has gotten me thinking about development and test processes.

One of my original goals in starting this blog was for it to serve as a resource for "new-to-it" software test engineers and managers. The general type of question that I want the blog to be able to answer is "now what do I do?" What I especially want is to be able to suggest specific actions that the reader can follow, instead of providing only general, more abstract advice. These actions are related to three components of a successful software test team:
  • People - who to hire and when to hire them, how to organize them into a team, how to maximize the effectiveness of a team
  • Tools - which tools to use, how to use them, and how to use them in combination, and
  • Processes - how to create a flexible, efficient, and repeatable set of procedures
The Monday Morning 9:00AM Question

One specific case where I want the blog to be useful involves the first steps that a new software test manager should perform. I want the blog to be able to help a person in this situation: it's the start of the first day after you've just been appointed software test manager. You have no team, no tools, and no processes in place. As you sit at your desk, you're not only asking yourself "what do I do?", you're asking yourself, "what do I do first?"

I addressed the "people" component of this question in an earlier post to this blog, where I discussed the characteristics of the first person to add to a team. I'll write at a later date on the first tools that you'll want to start using and the best way to use them together. For today, I want to talk about the first process that you want to implement.

"We Own the Process"

For many software engineers (the author included), the word "process" can have a negative connotation, as it conjures up images of having to spend lots of your time doing everything other than writing code. I once worked for a manager who announced at a department meeting that "we own the process," and who subsequently discovered that we had hidden several jars of "cheez whiz" in his office. (In case anyone has not heard of "cheez whiz," it's a famous American delicacy. It cannot legally be called cheese, as it is so heavily processed; instead, it is described as a "pasteurized process cheese food.")

That was an unfortunate and extreme incident as "process" really ought not to be a bad word. In order for a software test organization to be successful over the long term, it has to develop a process that can be relied on as a roadmap to lead the team through difficult problems. In fact, the very "process" (pardon the pun) of defining, examining and documenting your software development and test process can itself be a useful task in making your organization more effective. If you force yourself to write down the rules under which your organization operates, you will probably find ways to improve it. (But, this is a subject for a different blog post!) This process also has to be repeatable so that the team's success can be repeatable. And, most importantly of all, the process has to always be a means to an end, where that end is the creation of high quality software, on time and on budget.

To get back to subject of this post, which particular process should the team put into place first? There are several possible answers to this question:
  • Requirements traceability - In defining tests, you should be able to establish a relationship between the product's requirements and your tests. Any gaps represent holes in your test coverage.
  • Formal test planning - There's a place for ad hoc or exploratory testing[1] in any project, but it cannot represent the majority of a testing effort. You'll be doomed to overlook some critical tests unless you impose some discipline on the process by compiling your test ideas into a plan. The goal of test planning is not to produce a document, but to define the tests, ensure that you have proper test coverage, mitigate risks, etc. The document is almost a by-product of the act of collecting and defining the information in it. Like I always tell people, "plan is also a verb."
  • Defect tracking - If you find a bug, but have no means to record it, is it still a bug? Obviously, you have to track and record bugs in a persistent data store.
  • Test results tracking - Likewise, if you run a test, but do not have the means to record the results for future analysis, and to compare the results with future test runs, then the test is a lot less useful than it could be.
Did you notice the common thread tying these processes together? They all involve maintaining information in a form that someone other than the original author or designer can use, persisted in a database or document so that a wide audience can use it. I like to use the phrase "institutional memory" to describe the output from these documentation and data recording tasks, as the information thus recorded becomes the memory or legacy of the testing.

Hey, what about test automation? Well, for the purposes of this discussion, I'm putting test automation in a separate category, sort of like breathing. Every team and every project has to invest in automation. Without automation, you can never achieve a sufficient level of test coverage, or the consistency of test execution and test results needed to ensure the quality of the software under test. In addition, you'll never be able to put into place an effective continuous integration process without a high degree of test automation. The degree to which automation is essential makes me treat it as a different class than other software test processes.

I think, however, if I had to advise someone as to the first process to set up for a new test team, I would choose "kaizen." [2] In other words, continuous improvement.

A systematic approach to continuous improvement can be effective in any complex human endeavor, whether it's software testing, Formula 1 racing, or investment banking. You learn from past mistakes, modify your approach to tasks to incorporate those lessons learned, or, in Scrum terms, you continuously "inspect and adapt" [3].

But, what is it about software testing that makes continuous improvement especially applicable?

The Answer is: Bugs

"We deal in lead." [4]
- Steve McQueen in "The Magnificent Seven"

"We deal in bugs."
- Any software tester

The common denominator that ties all software testing together is that every test has the same ultimate goal: to find an as yet undiscovered bug.[5] When you practice software testing, you spend your time looking for, investigating, recreating, and trying to understand the root cause of software bugs. And, every bug presents an opportunity for some type of design or implementation improvement. What's more, the bugs that you look for and miss also present opportunities for improvement in test design and implementation.[6]

In software testing, you are always inspecting the software product under test and adapting the test programs and test approach to the current state of that product. Your tests and test plans can never be static; they always have to adapt, and be improved, to meet the current conditions. And, it's important that this improvement be continuous, and not only a task that you think about at the end of a project. In terms of a software product's development and testing lifecycle, waiting days or weeks to make a change to a test, or to adapt your test plan to deal with an unexpected set of new bugs, is waiting far too long.

So, how (specifically) can you implement a process of continuous improvement? Some ways are to:
  • Turn every new bug into a new test - This one is easy, whenever you find a new bug, write an automated regression test for it. This will ensure that if the bug is ever re-introduced, you'll be able to catch it.
  • Mine the customer support calls for new test ideas, and for missing tests - In the course of preparing your test plan, you should collect input from other product team members. You should also review the problems that actual customers are having. You may find that they are using the product in unexpected ways. I once saw a problem where a telephone voicemail system offered customers a feature where they could have the system call them back. People started to use this as a wakeup service, but the call back processes always ran at a lower priority level than incoming call processes. What happened? A lot of people received some very late wakeup calls. ;-)
  • Regularly re-review existing plans and tests - Like I said earlier, the plans should be dynamic, not static.
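To make the first of those bullets concrete, here is a minimal sketch of turning a bug report into a permanent regression test, using plain Java assertions rather than any particular test framework. The discountedPrice method and the "negative total" bug are invented for illustration; substitute your own code and your own bug report.

```java
// Sketch: "turn every new bug into a new test."
// Hypothetical bug report: the discount calculator returned a bad total
// for quantities over 100. After the fix, this test stays in the suite
// forever so the bug can never be silently re-introduced.
public class DiscountRegressionTest {

    // Stand-in for the (now fixed) production code under test.
    static double discountedPrice(double unitPrice, int quantity) {
        double discount = Math.min(0.30, quantity * 0.005); // cap at 30%
        return unitPrice * quantity * (1.0 - discount);
    }

    public static void main(String[] args) {
        // Re-run the exact scenario from the bug report.
        double total = discountedPrice(9.99, 150);
        if (total <= 0) {
            throw new AssertionError("regression: non-positive total for large quantity");
        }
        // Also pin the expected value, so a re-introduced rounding or
        // cap bug is caught, not just the original sign error.
        double expected = 9.99 * 150 * 0.70;
        if (Math.abs(total - expected) > 1e-9) {
            throw new AssertionError("regression: discount cap no longer applied");
        }
        System.out.println("regression test passed");
    }
}
```

The key point is that the test encodes the bug report's exact inputs and expected outputs, so it doubles as institutional memory of the failure.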
Closing Thoughts

And, remember, "there is always something left to improve"[7] - No matter how successful your testing is, there will be bugs that you missed. The complexity of software products, coupled with the flexible (or is it "brittle"?) nature of software as a medium, means that changes will be made and bugs will be introduced. So, if you think that your test plan and tests are "complete," think again. The surest way to fall behind is by standing still!








[7] As much as I'd like to take credit for the line "there is always something left to improve," I can't. It's attributed to Ben Hogan (1912-1997), the great American golfer of the 1940s and 1950s. His life story is really fascinating. He was born into poverty, reportedly witnessed his father's suicide at age 7, survived a crippling automobile accident, and after decades of failure and constant practice, made himself into the best professional golfer of his time. He is also, without a doubt, among the greatest self-taught athletes in history. No one, in any sport, has ever worked harder to constantly improve than Hogan. After he retired from competition, he founded a golf club manufacturing company. His products were known for their high level of quality, as he would never sell anything of low quality with his name on it. In the early 1990's, I happened to have a broken golf club manufactured by the Hogan company. It was broken in such a way that it could not be repaired, so I sent a letter to the company asking to buy a replacement. I received a personal letter from Mr. Hogan, asking me to send him the club. He wanted to examine it himself to understand if it was a manufacturing defect that could be improved on. (He later sent me a free replacement club too!)

(Special thanks to Jirka for the Kaizen inspiration!)

Sunday, October 18, 2009

Shocked by the Disk Space Used by iPhoto - When QE Experience Comes in Handy

I was backing up my iPhoto library today and was shocked to see that it had recently grown in size by about 6GB. This increase in disk usage happened while I had added no more than 100 or so new pictures.


At first, I assumed that some form of corruption had happened. On closer inspection, however, I found the true culprit:

du -h | grep G
6.0G ./iPod Photo Cache
1.0G ./Modified/2007
2.6G ./Modified
2.0G ./Originals/2007
1.6G ./Originals/2008
1.2G ./Originals/2009
6.8G ./Originals
16G .
Nothing had happened to the iPhoto library or database. The increase in disk space usage was due to my finally getting an iPod that could handle pictures. ;-)

But - there's a lesson here that all software QE engineers have learned. When you think you've found a new bug, you have to examine the software under test for any changes, AND you also have to examine the environment in which the software runs for changes.

I like to look at software testing as an exercise in algebra. Your task is to take a very complex equation and find its answer. In that equation, you have some variables to resolve. The trick some times is that the variables may not be so obvious at first! Even if they are 6GB in size...
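As an aside, the du check above can also be done programmatically, so a backup script could flag a sudden jump in size automatically. Here is a rough sketch in Java (7 or newer, for java.nio.file) that sums the bytes under a directory tree; the class name and command-line handling are just illustrative.

```java
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

// A rough Java equivalent of `du -s` for one directory tree: walk the
// tree and sum the size of every regular file encountered.
public class DirSize {

    static long sizeOf(Path root) throws IOException {
        final long[] total = {0};
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                total[0] += attrs.size(); // accumulate each file's byte count
                return FileVisitResult.CONTINUE;
            }
        });
        return total[0];
    }

    public static void main(String[] args) throws IOException {
        Path dir = Paths.get(args.length > 0 ? args[0] : ".");
        System.out.println(dir + ": " + sizeOf(dir) + " bytes");
    }
}
```

Comparing yesterday's total against today's would have surfaced the 6GB iPod Photo Cache immediately, without any manual grep'ing through du output.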

Friday, October 9, 2009

Thinking about Scrum and Base Running

I attended an excellent class on Scrum[1] a couple of weeks ago. I've always been a bit of a skeptic of formal project management or development frameworks, but I have to say that I was impressed. Some elements of Scrum, such as the need for transparency, are really just common sense, while others, such as short development cycles that result in an always functioning system, hold real promise.

The one aspect of Scrum that I identified with most, however, was "discover and adapt." In order to be successful in testing software, you always have to operate in a discover and adapt mode. You can define a detailed test plan, and base that plan on the best information available at the time, but in the course of a software project test cycle, you have to constantly re-evaluate the current state of the software under test, and then adapt your future plans to match. You may start off by testing subsystem A, but then find that its testing is blocked, and that you can continue to make progress by testing subsystem B.

One crucial part of adapting to what you discover about the software under test is that your priorities for testing will always change. Your highest priority will always be to run the tests that will find the most serious as yet undiscovered bug. But, the places in the code where that bug may be lurking can change as either the code changes, or your understanding of the code increases.

There's a great analogy for "discover and adapt" in sports. (And, no, it doesn't involve golf this time. ;-) It involves American baseball, and concerns the sometimes underappreciated skill of base running.

When you think about what it takes to be a great base runner, you might think that the only factor is speed. Speed is important, to be sure, but what is even more important is the base runner's judgment. He has to be able to adapt quickly to changing situations, and decide when to be aggressive and "take the extra base," and when to be more conservative. In short, he has to discover and adapt. How does he do this? He watches the ball and the opposing players, not the bases. The bases aren't changing, the position of the ball and those players is.

I found this wonderful passage about Joe DiMaggio in David Halberstam's book[2] "Summer of '49" that describes this:

...Stengel, his new manager, was equally impressed, and when DiMaggio was on base he would point to him as an example of the perfect base runner. "Look at him," Stengel would say as DiMaggio ran out a base hit, "he's always watching the ball. He isn't watching second base. He isn't watching third base. He knows they haven't been moved. He isn't watching the ground, because he knows they haven't built a canal or a swimming pool since he was last there. He's watching the ball and the outfielder, which is the one thing that is different on every play..."

It's like that in software too. You can't keep your eye on static plans, you have to keep your eye on the current (and changing) state of the project and the code, and always discover and adapt.



[2] Halberstam, David, "Summer of '49,", (Morrow: New York), 1989.

Fault Injection Testing - First Steps with JBoss Byteman

Fault injection testing[1] is a very useful element of a comprehensive test strategy, in that it enables you to concentrate on an area that can be difficult to test: the manner in which the application under test is able to handle exceptions.

It's always possible to perform exception testing in a black box mode, where you set up external conditions that will cause the application to fail, and then observe those application failures. Setting up, automating, and reproducing conditions such as these can, however, be time consuming. (And a pain in the neck, too!)

JBoss Byteman

I recently found a bytecode injection tool that makes it possible to automate fault injection tests. JBoss Byteman[2] is an open-source project that lets you write scripts in a Java-like syntax to insert events, exceptions, etc. into application code.

Byteman version 1.1.0 is available for download from: - the download includes a programmer's guide. There's also a user forum for asking questions here:, and a JIRA project for submitting issues and feature requests here:

A Simple Example

The remainder of this post describes a simple example, on the scale of the classic "hello world" example, of using Byteman to insert an exception into a running application.

Let's start by defining the exception that we will inject into our application:
1  package sample.byteman.test;

3  /**
4   * Simple exception class to demonstrate fault injection with byteman
5   */
7  public class ApplicationException extends Exception {

9      private static final long serialVersionUID = 1L;
10     private int intError;
11     private String theMessage = "hello exception - default string";

13     public ApplicationException(int intErrNo, String exString) {
14         intError = intErrNo;
15         theMessage = exString;
16     }

18     public String toString() {
19         return "**********ApplicationException[" + intError + " " + theMessage + "]**********";
20     }

22 } /* class */

There's nothing complicated here, but note the string that is passed to the exception constructor at line 13.

Now, let's define our application class:

1  package sample.byteman.test;

3  /**
4   * Simple class to demonstrate fault injection with byteman
5   */
7  public class ExceptionTest {

9      public void doSomething(int counter) throws ApplicationException {
10         System.out.println("called doSomething(" + counter + ")");
11         if (counter > 10) {
12             throw new ApplicationException(counter, "bye!");
13         }
14         System.out.println("Exiting method normally...");
15     } /* doSomething() */

17     public static void main(String[] args) {
18         ExceptionTest theTest = new ExceptionTest();
19         try {
20             for (int i = 0; i < 12; i++) {
21                 theTest.doSomething(i);
22             }
23         } catch (ApplicationException e) {
24             System.out.println("caught ApplicationException: " + e);
25         }
26     }

28 } /* class */

The application instantiates an instance of ExceptionTest at line 18, then calls the doSomething method in a loop; once the counter is greater than 10, doSomething raises the exception that we defined earlier. When we run the application, we see this output:
java -classpath bytemanTest.jar sample.byteman.test.ExceptionTest
called doSomething(0)
Exiting method normally...
called doSomething(1)
Exiting method normally...
called doSomething(2)
Exiting method normally...
called doSomething(3)
Exiting method normally...
called doSomething(4)
Exiting method normally...
called doSomething(5)
Exiting method normally...
called doSomething(6)
Exiting method normally...
called doSomething(7)
Exiting method normally...
called doSomething(8)
Exiting method normally...
called doSomething(9)
Exiting method normally...
called doSomething(10)
Exiting method normally...
called doSomething(11)
caught ApplicationException: **********ApplicationException[11 bye!]**********
OK. Nothing too exciting so far. Let's make things more interesting by scripting a Byteman rule to inject an exception before the doSomething method has a chance to print any output. Our Byteman script looks like this:
1  #
2  # A simple script to demonstrate fault injection with byteman
3  #
4  RULE Simple byteman example - throw an exception
5  CLASS sample.byteman.test.ExceptionTest
6  METHOD doSomething(int)
7  AT INVOKE PrintStream.println
8  BIND buffer = 0
9  IF TRUE
10 DO throw sample.byteman.test.ApplicationException(1,"ha! byteman was here!")
11 ENDRULE
  • Line 4 - RULE marks the start of the rule; the text that follows it on this line is the rule's name, and is not executed
  • Line 5 - Reference to the class of the application to receive the injection
  • Line 6 - And the method in that class. Note that if we had written this line as "METHOD doSomething", the rule would have matched any signature of the doSomething method
  • Line 7 - Our rule will fire when the PrintStream.println method is invoked
  • Line 8 - BIND determines values for variables which can be referenced in the rule body - in this first example, we simply bind a placeholder variable (buffer) to a constant, since the rule body needs no local state
  • Line 9 - A rule has to include an IF clause - in our example, it's always true
  • Line 10 - When the rule is triggered, we throw an exception - note that we supply a string to the exception constructor
Now, before we try to run this rule, we should check its syntax. To do this, we build our application into a .jar (bytemanTest.jar in our case) and use the script that is included with Byteman:
sh -cp bytemanTest.jar sample_byteman.txt
checking rules in sample_byteman.txt
TestScript: parsed rule Simple byteman example - throw an exception
RULE Simple byteman example - throw an exception
CLASS sample.byteman.test.ExceptionTest
METHOD doSomething(int)
AT INVOKE PrintStream.println
BIND buffer : int = 0
DO throw (1, "ha! byteman was here!")

TestScript: checking rule Simple byteman example - throw an exception
TestScript: type checked rule Simple byteman example - throw an exception

TestScript: no errors
Once we get a clean result, we can run the application with Byteman. To do this, we run the application and specify an extra argument to the java command. Note that Byteman requires JDK 1.6 or newer.
java -javaagent:/opt/Byteman_1_1_0/build/lib/byteman.jar=script:sample_byteman.txt -classpath bytemanTest.jar sample.byteman.test.ExceptionTest
And the result is:
caught ApplicationException: **********ApplicationException[1 ha! byteman was here!]**********

Now that the Script Works, Let's Improve it!

Let's take a closer look at how we BIND to a method parameter. We change the script to read as follows:

1  #
2  # A simple script to demonstrate fault injection with byteman
3  #
4  RULE Simple byteman example - throw an exception
5  CLASS sample.byteman.test.ExceptionTest
6  METHOD doSomething(int)
7  AT INVOKE PrintStream.println
8  BIND counter = $1
9  IF TRUE
10 DO throw sample.byteman.test.ApplicationException(counter,"ha! byteman was here!")
11 ENDRULE

In line 8, the BIND clause now refers to the int method parameter by index, using the syntax $1. This change makes the value available inside the rule body under the name "counter." The value of counter is then supplied as the argument to the constructor for the ApplicationException class. This new version of the rule demonstrates how we can use local state derived from the trigger method to construct our exception object.
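To make the effect of the rule concrete, here is a hand-written Java sketch of what the instrumented doSomething behaves like. This is only a conceptual equivalent (with the exception class condensed and its toString simplified), not the bytecode Byteman actually generates; with Byteman, the original source stays untouched.

```java
// Conceptual equivalent of the rule: just before doSomething(int)'s
// first PrintStream.println call, throw an ApplicationException built
// from the method's int parameter (the $1 binding).
public class InstrumentedEquivalent {

    static class ApplicationException extends Exception {
        private final int intError;
        private final String theMessage;

        ApplicationException(int intErrNo, String exString) {
            intError = intErrNo;
            theMessage = exString;
        }

        @Override
        public String toString() {
            return "ApplicationException[" + intError + " " + theMessage + "]";
        }
    }

    public void doSomething(int counter) throws ApplicationException {
        // Behavior injected by the rule: BIND counter = $1, then
        // DO throw ... fires before the first println can run.
        throw new ApplicationException(counter, "ha! byteman was here!");
        // The original println calls and logic are never reached.
    }

    public static void main(String[] args) throws Exception {
        try {
            new InstrumentedEquivalent().doSomething(7);
        } catch (ApplicationException e) {
            // prints: caught ApplicationException[7 ha! byteman was here!]
            System.out.println("caught " + e);
        }
    }
}
```

The value of bytecode injection is exactly that you get this behavior at run time without ever writing, compiling, or shipping the "instrumented" version of the class.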

But wait, there's more! Let's use the "counter" value as a counter.

It's useful to be able to force an exception the first time a method is called. But, it's even more useful to be able to force an exception at a selected invocation of a method. Let's add a test for that counter value to the script:

1  #
2  # A simple script to demonstrate fault injection with byteman
3  #
4  RULE Simple byteman example 2 - throw an exception at 3rd call
5  CLASS sample.byteman.test.ExceptionTest
6  METHOD doSomething(int)
7  AT INVOKE PrintStream.println
8  BIND counter = $1
9  IF counter == 3
10 DO throw sample.byteman.test.ApplicationException(counter,"ha! byteman was here!")
11 ENDRULE

In line 9, we've changed the IF clause to make use of the counter value. When we run the test with this script, the calls to doSomething with counter values of 0, 1, and 2 succeed, but the call with a counter value of 3 fails.

One Last Thing - Changing the Script for a Running Process

So far, so good. We've been able to inject a fault/exception into our running application, and even specify the loop iteration in which it happens. Suppose, however, that we want to change a value in a byteman script while the application is running? No problem! Here's how.

First, we need to alter our application so that it can run for a long enough time for us to alter the byteman script. Here's a modified version of the doSomething method that waits for user input:
public void doSomething(int counter) throws ApplicationException {
    BufferedReader lineOfText = new BufferedReader(new InputStreamReader(;
    try {
        System.out.println("Press <return>");
        String textLine = lineOfText.readLine();
    } catch (IOException e) {
        e.printStackTrace();
    }
    System.out.println("called doSomething(" + counter + ")");
    if (counter > 10) {
        throw new ApplicationException(counter, "bye!");
    }
    System.out.println("Exiting method normally...");
}

If we run this version of the application, we'll see output like this:

Press <return>

called doSomething(0)
Exiting method normally...
Press <return>

called doSomething(1)
Exiting method normally...
Press <return>

called doSomething(2)
Exiting method normally...
caught ApplicationException: **********ApplicationException[3 ha! byteman was here!]**********

Let's run the application again, but this time, don't press <return>. While the application is waiting for input, create a copy of the byteman script. In this copy, change the IF clause to compare the counter to a different value, say '5.' Then, open up a second command shell window and enter this command:

Byteman_1_1_0/bin/ sample_byteman_changed.txt

Then, return to the first command shell window and start pressing return, and you'll see this output:

Press <return>
redefining rule Simple byteman example - throw an exception

called doSomething(0)
Exiting method normally...
Press <return>

called doSomething(1)
Exiting method normally...
Press <return>

called doSomething(2)
Exiting method normally...
Press <return>

called doSomething(3)
Exiting method normally...
Press <return>

called doSomething(4)
Exiting method normally...
caught ApplicationException: **********ApplicationException[5 ha! byteman was here!]**********

So, we were able to alter the value in the original byteman script, without stopping the application under test!

Pitfalls Along the Way

Some of the newbie mistakes that I made along the way were:
  • Each RULE needs an IF clause - even if you want the rule to always fire
  • A rule on a static method cannot reference $0 (aka this) - only rules on instance methods can use it
  • Yes, I had several errors and some typos the first few times I tried this. A syntax checker is always my best friend. ;-)
Closing Thoughts

With this simple example, we're able to inject faults into a running application in an easily automated/scripted manner. But, we've only scratched the surface with Byteman. In subsequent posts, I'm hoping to explore using Byteman to cause more widespread havoc in software testing.




(Special thanks to Andrew Dinn for his help! ;-)

Thursday, October 1, 2009

Christmas? Already????

The stores are already setting up for Christmas. This is very depressing as it is only October.

But - it's never too early for some excellent Christmas music. Check this out:

Christmas Music by Magnatune Compilation

New Post - in a new Blog!

I just posted to a new blog - the JBoss ESB project blog:

Tuesday, September 15, 2009

New SOA Platform Blog Post

I just finished this new post to the SOA Platform blog:

It's a great thing when a product works "right out of the box!" ;-)

Thursday, July 23, 2009

Running Eclipse Plugin Tests with SWTBot in Headless Mode

This is a followup to my previous post on SWTBot. Headless mode is very useful for running tests outside of Eclipse through Ant or from the command line.


Well, it's not exactly headless. A better title would be "running Eclipse Plugin Tests with SWTBot with Ant or From The Command Line." The tests are not run from an Eclipse workbench, but they require Eclipse, and actually open up a workbench when they are run.


Here's the place to start:

Download the "Headless Testing Framework" from here: and install it into your eclipse /plugins dir. You need to install both the junit4.headless and optional.junit4 files. For example:


Important note: Be sure to install only these files for either JUnit3 or JUnit4. If you install the files for both versions of JUnit, you'll see many class cast exceptions.


Be sure to define a class to serve as a test suite for your tests. For example:

import org.junit.runner.RunWith;
import org.junit.runners.Suite;
import org.junit.runners.Suite.SuiteClasses;

@RunWith(Suite.class)
@SuiteClasses( { TestPositive.class, TestNegative.class })
public class AllTests {
}

Export your plugin tests (as a plugin) and install it into the /plugins directory of your eclipse installation.

Running with Ant

Build a build.xml file that looks like this:

<?xml version="1.0" encoding="UTF-8" ?>

<project name="testsuite" default="run" basedir=".">
  <property name="eclipse-home" value="/opt/local/eclipse_swtbot/eclipse" />
  <property name="plugin-name" value="" />
  <property name="test-classname" value="" />
  <property name="library-file" value="${eclipse-home}/plugins/org.eclipse.swtbot.eclipse.junit4.headless_2.0.0.371-dev-e34/library.xml" />

  <target name="suite">
    <property name="jvmOption" value="" />
    <property name="temp-workspace" value="workspaceJuly23" />
    <delete dir="${temp-workspace}" quiet="true" />

    <ant target="swtbot-test" antfile="${library-file}" dir="${eclipse-home}">
      <property name="data-dir" value="${temp-workspace}" />
      <property name="plugin-name" value="${plugin-name}" />
      <property name="os" value="linux" />
      <property name="ws" value="gtk" />
      <property name="arch" value="x86" />
      <property name="classname" value="${test-classname}" />
      <property name="vmargs" value="-Xms128M -XX:MaxPermSize=512m -Xmx512M" />
    </ant>
  </target>

  <target name="cleanup" />

  <target name="run" depends="suite,cleanup" />
</project>

Important note: The supported values for os, ws (windowing system) and arch are:

os: win32/linux/macosx
ws: win32/wpf/gtk/cocoa/carbon
arch: x86/x86_64

Important note: Be sure to specify a non-existent workspace name for temp-workspace as this will be overwritten when the test is run.

Running from the CLI

export ECLIPSE_HOME=/opt/local/eclipse_swtbot/eclipse

java -Xms128M -Xmx368M -XX:MaxPermSize=256M -DPLUGIN_PATH= -classpath $ECLIPSE_HOME/plugins/org.eclipse.equinox.launcher_1.0.101.R34x_v20081125.jar org.eclipse.core.launcher.Main -application org.eclipse.swtbot.eclipse.junit4.headless.swtbottestapplication -data workspace,$ECLIPSE_HOME/$TEST_CLASS.xml -testPluginName -className $TEST_CLASS -os linux -ws gtk -arch x86 -consoleLog -debug

Test Output

It can be hard to find in all the output, but it's there:

Testcase: checkPerspective took 3.529 sec
Testcase: canConnectToRepo took 5.784 sec
Testcase: goIntoBackHomePropertiesTest took 34.526 sec
Testcase: doubleClickTest took 51.74 sec
Testcase: cannotConnectBadPath took 6.701 sec
Testcase: cannotConnectBadPassword took 7.747 sec

Other Useful Links

Ketan added some movies to the SWTBot site in July 2009. Here's one on running SWTBot from Ant:

Another DZone post!

The recent post on Gateways and Notifiers in the SOA Platform was just reposted to DZone here:

Monday, July 20, 2009

A new post for the SOA Platform blog

Just added a new post here:

Listeners and notifiers are very useful things to integrate apps together through the platform. They are actually easy to use, and are very powerful too!

Thursday, July 9, 2009

Another post to the Red Hat / JBoss SOA Platform Blog

I just finished my 2nd post to the SOA Platform blog here:

The diagram towards the end would have really helped me understand content based routing when I first heard of it. Maybe this will help some other newbies along the way... ;-)

Friday, June 19, 2009

Filling in the Hole in the UI Automation Tool Set

I've never been much of a UI test automation person, as I've generally lived on the server-side of things. But, over the past couple of years, I've found a great set of open-source UI testing tools. In the past few days, I've found a tool that fills in what had been a gap in test tool coverage - automating Eclipse plugins.

The set of tools consists of:

For GNOME Applications - Dogtail

Dogtail was developed by Zack Cerza. It is written in Python (as are the tests that you write with it) and can be used to automate GNOME-based applications. The design of Dogtail is very interesting, as it uses accessibility technologies to communicate with desktop applications. It uses accessibility-related metadata to create an in-memory model of the application's GUI elements.

Dogtail can be found here:

(There's also an entry in Wikipedia here: that has links to the user documentation that I wrote for Dogtail. This was published as a series of articles in Red Hat Magazine.)

For Web Applications - Selenium

With Selenium, you can record or write test scripts that manipulate the web application under test through the browser. The tests can be written in HTML, Java, Python, or other languages. There's also a record/playback mechanism.

Selenium can be found here:

For Eclipse plugins - SWTBot

I recently was able to fill a long-empty hole in this tool set when a co-worker of mine pointed me at a new tool named SWTBot. SWTBot is developed by Ketan Padegaonkar and automates Eclipse plugin testing.

SWTBot can be found here:

SWTBot makes it very easy to build Java tests for Eclipse plugins. Here's a test code fragment that creates a new project:
SWTWorkbenchBot bot = new SWTWorkbenchBot();"File").menu("New").menu("Project...").click();
bot.tree().select("Java Project");
bot.button("Next >").click();
bot.textWithLabel("Project name:").setText("testProject");
SWTBot is very new; it's in the incubation phase of its development as an Eclipse project. The only problem that I've had with SWTBot so far is that it took me a little while to locate a set of example programs. I did find a good set here:

So - to sum things up, with these tools, there's test coverage for desktop, web, and Eclipse-based apps. And that makes for a great tool-set!

Monday, June 15, 2009

Contributing to the Red Hat / JBoss SOA Platform Blog!

Just added a new post to the JBoss SOA Platform blog here:

There's a feed for it here:

'Hoping to contribute to this blog on a regular basis going forward...

(No - you're not going blind - the same post - the subject is integrations - was in this blog until a few minutes ago. ;-)

Tuesday, May 12, 2009

grep - In living Color

In my last blog post, I mentioned that in performing software testing you tend to look in log files for clues as to the root cause of bugs. grep is a great tool for doing this, but it does have one drawback. If the lines of text in a log file that you are grep'ing through are very long, you can end up with output that looks like this:

grep here-is-the-bad.jar server.log

22:05:01,485 DEBUG [ServerInfo] class.path: /opt/local/system000001_test_1234/server-sample-p.1.2.4/server-as/lib/endorsed/server-jaxws.jar:/opt/local/system000001_test_1234/server-sample-p.1.2.4/server-as/lib/endorsed/server-jaxrpc.jar:/opt/local/system000001_test_1234/server-sample-p.1.2.4/server-as/lib/endorsed/server-saaj.jar:/opt/local/system000001_test_1234/server-sample-p.1.2.4/server-as/lib/endorsed/serializer.jar:/opt/local/system000001_test_1234/server-sample-p.1.2.4/server-as/lib/endorsed/xercesImpl.jar:/opt/local/system000001_test_1234/server-sample-p.1.2.4/server-as/lib/endorsed/jaxb-api.jar:/opt/local/system000001_test_1234/server-sample-p.1.2.4/server-as/lib/endorsed/xalan.jar:/usr/lib/jvm/java-1.6.0-sun-

Oh. Of course. There's the problem. What? You can't see it? ;-)

Let's try that grep command again. But this time, let's use grep's color option*. This will highlight what we're looking for:

grep --color=auto here-is-the-bad.jar server.log

22:05:01,485 DEBUG [ServerInfo] class.path: /opt/local/system000001_test_1234/server-sample-p.1.2.4/server-as/lib/endorsed/server-jaxws.jar:/opt/local/system000001_test_1234/server-sample-p.1.2.4/server-as/lib/endorsed/server-jaxrpc.jar:/opt/local/system000001_test_1234/server-sample-p.1.2.4/server-as/lib/endorsed/server-saaj.jar:/opt/local/system000001_test_1234/server-sample-p.1.2.4/server-as/lib/endorsed/serializer.jar:/opt/local/system000001_test_1234/server-sample-p.1.2.4/server-as/lib/endorsed/xercesImpl.jar:/opt/local/system000001_test_1234/server-sample-p.1.2.4/server-as/lib/endorsed/jaxb-api.jar:/opt/local/system000001_test_1234/server-sample-p.1.2.4/server-as/lib/endorsed/xalan.jar:/usr/lib/jvm/java-1.6.0-sun-

Now, that's more like it! ;-)

* Special thanks to Ralph for pointing me at grep's color option!

Friday, May 8, 2009

Watching Things...with Linux's "watch"

When you test software, you tend to spend a lot of your waking hours (and some sleeping hours too) debugging problems. This involves looking around the system under test for clues as to the root causes of failures.

One trick that I constantly use is to execute the "tail -f" command on system, server, or process log files while I'm running tests. The tail command displays the last 10 lines of a file by default. Adding the -f option causes new content to be displayed as it is written to the log file. The result is that I can watch the log file as a real-time window into what's happening on the system under test.

I recently learned* about another useful tool: the "watch" command. What watch does is execute the shell command that you specify, at the interval that you specify. What makes watch useful, though, is that it displays its output full-screen and refreshes that display on each interval, so you don't have to scroll to see the latest results.

For example, if you wanted to keep an eye on system memory use, you could execute this command to have watch display memory usage information and update the display every 5 seconds:

watch -n 5 cat /proc/meminfo

And see something like this:

Every 5.0s: cat /proc/meminfo Fri May 8 22:43:53 2009

MemTotal: 2049932 kB
MemFree: 948244 kB
Buffers: 77540 kB
Cached: 535776 kB
SwapCached: 0 kB
Active: 460124 kB
Inactive: 476236 kB
HighTotal: 1153728 kB
HighFree: 314396 kB
LowTotal: 896204 kB
LowFree: 633848 kB
SwapTotal: 2621432 kB
SwapFree: 2621432 kB
Dirty: 88 kB
Writeback: 0 kB
AnonPages: 323000 kB
Mapped: 67700 kB
Slab: 53548 kB
PageTables: 5064 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 3646396 kB
Committed_AS: 904128 kB
VmallocTotal: 114680 kB
VmallocUsed: 7748 kB
VmallocChunk: 106612 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 4096 kB

Or, if you wanted to only watch the amount of free memory, you could execute this variation on that same command:

watch -n 5 'cat /proc/meminfo | grep MemFree'

And see something like this:

Every 5.0s: cat /proc/meminfo | grep MemFree Fri May 8 22:47:10 2009

MemFree: 932252 kB

As I said, when you're debugging a test, or a potential bug uncovered by a test, you have to watch multiple sources of information on the system you're testing. The watch command can be a very useful tool to help you see specific pieces of information in the flood of information that you may have to search through.



* Special thanks to Ralph for pointing me at watch!

Monday, March 30, 2009

Life as a Scientist

A great tool to analyze your blog:

(Found this tool referenced at Mark Little's blog:

Saturday, March 21, 2009

Stress Testing: Don't spend your life waiting for a moment that just don't come, Well, don't waste your time waiting...

I remember the situation distinctly, even though it happened over a decade ago. The product in question was a 2nd generation server. The 1st generation was successful, but like many 1.0 projects, parts of it were, well, let's just say "immature." As a result, the product was buggy and difficult (and expensive!) to maintain. The goals of the 2nd generation product were ambitious: to correct the product's design flaws, extend its feature set, and improve its overall performance and reliability.

After a long development cycle, testing started and we were dismayed to find that the re-architecting effort had completely destabilized the product. We immediately started finding memory leaks, random crashes, loss of data, and basic functions that were just plain broken. The early test results caused a large amount of angst, and also caused us to adopt a very defensive posture. We reorganized testing into discrete stages, and we determined not to advance from one stage to the next until we had established a consistent baseline of reliable product operation.

As the early stages of functional testing progressed, product instability continued, and large numbers of bugs were found, resolved, and verified. Eventually, the product's stability improved, but serious bugs were still periodically found, so we held to our defensive posture and did not progress from functional to system-wide or stress/performance tests until functional testing was complete. By the time we finally determined that the product was stable enough for stress testing, the project schedule had been extended multiple times. When stress testing started, we began finding still more bugs related to the product's inability to function under a load over time or to handle high levels of traffic. Once again, the project schedule had to be extended as large numbers of bugs were found, resolved, and verified.

What went wrong on this project? Well, quite a bit. The planning was overly optimistic, the design was not properly reviewed or understood, and unit testing was inadequate. At one point, we even discovered that if every port on the server was physically cabled and the system was not properly rack-mounted, it would tip over and fall on the floor. In addition, by being too conservative in our test planning, and too rigid in our first reaction to the system's initial poor performance, we may have delayed the discovery of some serious problems.

What should we have done differently? We should have started some stress testing as early as we could find situations and product configurations that would support it. The first stress tests could have taken the form of low levels of traffic running over an extended period of time, and not necessarily a massive system load at the start. By finding tests that the product could support, we could have established a baseline of performance to measure against as the product's many bugs were resolved (and its code was changed). By waiting for "perfect" conditions that actually arrived very late in the product's development, we delayed tests that could have been run under "good" conditions.

There's a saying that, "if you see blue sky, go for it."[1] It can be like that in software testing too. If you wait for perfect conditions to run some tests, you may end up waiting for a long time, and that waiting may result in bugs being found at a late and probably inconvenient time. When should you try stress testing a product? As early as you can. Performance related problems can be difficult and time consuming to resolve. The sooner you find them, the better. So, if you see some blue sky among the bugs early in a project test cycle, go for it...

(Special thanks to Pavel!...and Mr. Springsteen...)



Monday, February 16, 2009

Parameterized Testing? But wait, there's more! (Inversion of Control/Dependency Injection, that is)

When you spend most of your waking hours (and, truth be told, some of your sleeping hours too) involved in software engineering, you can find yourself speaking a language foreign to friends and relatives. In my own case, most of my friends and family have no experience in or with software design, development, or testing. They tend to see my software engineering experience as useful, but also something out of the ordinary.

This past Monday, for example, a good friend of mine, a lawyer by profession, telephoned me late at night, desperate to rescue his home computer from the malware that he had inadvertently downloaded. I managed to walk him through the steps necessary to get his computer working again. That is, I thought I had, until a couple of nights later, when he called me again asking my advice on buying a new computer.

A few days later, I was reading a printed copy of Martin Fowler's paper "Inversion of Control Containers and the Dependency Injection pattern"[1] when a friend of mine commented, "Inversion of control? Dude, that's the story of my life. My kids rule the house."

The term "dependency injection" can at first be a bit more intimidating than the reality of what it is and how it's used. The simplest explanation that I've ever seen for it is in James Shore's excellent blog "The Art of Agile"[2], where he states that:

'...dependency injection means giving an object its instance variables..'

OK. This is all interesting, but what does it have to do with software testing? Quite a bit, actually. In the blog entry from a few weeks ago titled "Parameterized Testing - Very Easy with TestNG's DataProvider" we talked about designing tests that accept parameters, so that the same test could be run against multiple sets of data. In this post, we'll revisit that test design pattern.

But first, let's take a deeper look at inversion of control and dependency injection.

To begin, let's dissect the phrase "inversion of control." In this context, who wants control, and just how is this control inverted?

In an object oriented language such as Java, you solve programming problems by designing classes. The classes contain variables (to hold data) and methods (functions that manipulate the data). To actually perform the tasks that you want the programs to do, you create discrete instances of the objects. When you create an object instance, you work with instances of the variables.

Let's say that you've defined a Java class of type Car, and within the class you set a Manufacturer variable to, say, "Audi."
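As a concrete sketch, the hardwired version of that class might look something like this (the Manufacturer type and the method names here are my own inventions so the example is self-contained, not part of any real codebase):

```java
// A minimal sketch of the hardwired "Car" class described above.
public class Car {

    // A trivial Manufacturer stand-in so the sketch compiles on its own.
    static class Manufacturer {
        final String name;
        Manufacturer(String name) { = name; }
    }

    // The dependency is created inside the class itself: the Car class,
    // not its caller, decides that the manufacturer is "Audi".
    private final Manufacturer carManufacturer = new Manufacturer("Audi");

    public String manufacturerName() {

    public static void main(String[] args) {
        System.out.println(new Car().manufacturerName()); // prints "Audi"
    }
}
```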

What have we just done in terms of control and dependencies?

* Who has control? The "Car" class has the control. In order to build a new car, it defines a variable of type "Manufacturer." Or more precisely it defines an instance variable of type "Manufacturer." In this case, the class only allows for a Manufacturer of "Audi."

* The "Car" class needs the "Manufacturer" variable to do its work. In other words, it depends on this variable. That's right, this instance variable is a "dependency."

However, there's a problem with the "Car" class. How can we build a test for the class and test multiple car manufacturers? To build a test for Toyota, for example, we would have to build another class that's almost identical to the "Car" class, except that it would create a different "Manufacturer" object.

The answer is easy, of course: we just add another constructor to the Car class:

public Car(Manufacturer targetManufacturer) {
    carManufacturer = targetManufacturer;
}

So, now what have we just done in terms of control and dependencies?

* The value of the dependency, that is, the "carManufacturer" variable is not hardwired into the "Car" class. It is "injected" into the "Car" class through its new constructor.

* Who has control now? Not the "Car" class anymore. The class can now be used to create any type of car. Who controls the decision of the type of car to create? The program (or test program) that creates the instance of the "Car" class. That's the "inversion" of control.

One important use of this model is that it enables us to isolate the class being tested. For example, if we want to test the Car class with a mock or stub[3] version of the Manufacturer class, we can pass that stub or mock object to the Car class during the test. This lets the test concentrate on verifying the operation of the Car class, independent of the function (or lack of function) provided by the Manufacturer class.
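Here's a minimal sketch of that isolation technique. The interface and stub names are assumptions for illustration; no mocking framework is needed for something this simple:

```java
// A sketch of testing Car in isolation by injecting a stub Manufacturer.
public class CarStubExample {

    interface Manufacturer {
        String name();
    }

    static class Car {
        private final Manufacturer manufacturer;

        // The dependency is injected: the caller controls which
        // Manufacturer implementation the Car uses.
        Car(Manufacturer manufacturer) { this.manufacturer = manufacturer; }

        String describe() { return "Car built by " +; }
    }

    // A stub that stands in for the real Manufacturer during the test,
    // so the test exercises only the Car class's own logic.
    static class StubManufacturer implements Manufacturer {
        public String name() { return "StubCo"; }
    }

    public static void main(String[] args) {
        Car car = new Car(new StubManufacturer());
        System.out.println(car.describe()); // prints "Car built by StubCo"
    }
}
```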

When you couple this design pattern with the ability to pass parameters into a test, as TestNG supports, you can inject the dependencies into the classes being tested. This makes it possible for the tests to control not only the test data, but also the defined profile and characteristics of the classes under test, and to do it without hardcoding data into the tests or the classes under test. Now, that's control!
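A framework-free sketch of that combination (the data and names below are invented): the "test" method receives the ingredients of its dependency as parameters, and a plain driver loop plays the role that a DataProvider plays in TestNG:

```java
// Combining parameterized tests with dependency injection: each data
// row supplies the dependency that is injected into the class under test.
public class ParameterizedDiSketch {

    static class Manufacturer {
        final String name;
        Manufacturer(String name) { = name; }
    }

    static class Car {
        private final Manufacturer manufacturer;
        Car(Manufacturer m) { this.manufacturer = m; } // injected
        String label() { return + " car"; }
    }

    // The "test case": its input arrives as a parameter, so the same
    // logic can run against every data row.
    static String checkCar(String manufacturerName) {
        Car car = new Car(new Manufacturer(manufacturerName));
        return car.label();
    }

    public static void main(String[] args) {
        // This loop stands in for a DataProvider feeding the test.
        for (String name : new String[] { "Audi", "Toyota" }) {
            System.out.println(checkCar(name));
        }
    }
}
```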




[3] Next Generation Java Testing by Cedric Beust and Hani Suleiman, pages 95-96.

Saturday, February 7, 2009

And not a drop to drink - Leak and Soak Tests

Owning an old house has some advantages. You can enjoy its period architectural details and perhaps find some hidden treasures. When we did some house renovations, we found newspapers from 1923 in one wall of the house. It was quite a surprise to be able to read contemporary sports coverage of Babe Ruth[1]! Owning an old house also means that you are always performing maintenance. I remember late one cold January night when I woke up to the sound of water dripping. It wasn't the bathroom; it was a vintage cast-iron radiator dripping water through the ceiling. Nothing had changed in the water pressure, but the washers in the radiator had simply been worn down by prolonged use. Another time, a massive rainstorm resulted in a leak in the roof. The roof had performed well through years of less severe rain storms, and through equally severe but shorter downpours, but the extended high level of rain simply soaked through it.

In the first case, the problem wasn't that the level of water traffic had increased beyond what the radiator "system" could manage, it was the accumulated damage of a relatively low level of traffic over time. In the second case, the problem was that the roof "system" was overwhelmed by prolonged exposure to a high level of traffic.

OK - what do all these expensive home repairs have to do with software testing? Just this; when you go about designing stress tests for your product or application, you should include "leak" and "soak" tests* in your plan.

When some people approach stress test planning, they treat the tests as a blunt instrument with which they try to assault the system under test. This type of scorched earth approach to testing can result in system failures being encountered, but these failures can be difficult to diagnose and reproduce. A combination of disciplined leak and soak tests can help you better identify the root causes of stress related system failures.

How do leak and soak tests differ? The differences can be thought of in the terms of the radiator and roof failures that I mentioned above.

In a leak test, you run the system under a manageable and traceable level of traffic for an extended period, to determine whether the accumulated weight of usage causes a system failure. The classic case is when you are looking for a memory leak. The traffic load should not be extreme.

It may be that each system action results in a very small memory leak, or in the permanent allocation of some other system resource. If you run this test once, the leak may occur, but in the context of a system with several GB of memory, and a process that is using several MB of memory, you might not notice that the memory or system resource is not freed up when the test completes. But, if you repeat the test, observe the process under test with a tool such as JBoss Profiler[2], and observe the system with utilities such as top, sar, or vmstat, then you may be able to spot a trend of system resources being used and not released. A great way to begin a leak testing regimen is by observing the memory and system resource use of the software under test in an idle state. Just start up the server or application under test and then leave it alone for an extended period. You may find that its use of system resources increases over time, just through its self-maintenance, even when it is not actively processing user or client requests.

So, in a leak test, the key variable in the test equation is time.
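To make that concrete, here is a rough sketch of a leak-test loop in Java: repeat a modest workload and sample heap usage at intervals, looking for an upward trend. The workload below is only a placeholder for the real system action under test, and the iteration counts are far smaller than a real leak test would use:

```java
// A rough sketch of a leak-test harness: modest load, repeated over
// time, with periodic heap samples so a trend can be spotted.
public class LeakWatch {

    static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) throws InterruptedException {
        for (int iteration = 0; iteration < 5; iteration++) {
            for (int i = 0; i < 10_000; i++) {
                // Placeholder: in a real leak test, invoke the system
                // action under test here (one request, one transaction).
                String request = "request-" + i;
            }
            System.gc(); // a hint only; the trend matters, not one sample
            System.out.printf("iteration %d: used heap %d KB%n",
                    iteration, usedHeapBytes() / 1024);
            Thread.sleep(100); // a real leak test runs for hours or days
        }
    }
}
```

If the "used heap" figures climb steadily across iterations and never fall back after collection, that is the trend a leak test is designed to expose.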

In contrast, a soak test hits the system under test with a significant load and maintains that load over an extended period of time. A leak test may involve running traffic through the system under test on only one thread, but a soak test may involve stressing the system to its maximum number of concurrent threads. This is where you may start to see inter-thread contention issues or problems with database connection pools being exhausted. If you can establish a reliable baseline of system operation with a leak test that runs over an extended period of time, then you can move on to more aggressive tests such as soak tests. If, however, a leak test exposes system failures, then you would probably encounter the same types of failures with soak tests, but the level of traffic used in a soak test might make those failures harder to diagnose.

So, in a soak test, the key variables in the test equation are both time and a sustained high load level.

To sum it up, if a leak test is passive aggressive in nature, a soak test is just plain aggressive.

* Thanks, Jarek!


[2] JBoss Profiler -

Saturday, January 31, 2009

Parameterized Testing - Very Easy with TestNG's DataProvider

I was starting work on some parameterized JUnit tests the other day, when a co-worker*, someone who has an uncanny knack for reducing my complicated questions to simple answers, suggested, "Why don't you try a TestNG DataProvider instead?"

I was already trying to write the tests in Groovy, a scripting language with which I had very little experience, so the prospect of also trying a new test framework was not exactly appealing. But, I decided to give it a try. After a couple of hours of trial and error and self-inflicted syntax errors, I came to the conclusion that a TestNG DataProvider was the way to go. DataProviders are easy to use, and the coding is very straightforward.

TestNG[1] was developed by Cedric Beust and Alexandru Popescu in response to limitations in JUnit3. Actually, "limitations" may be too strong a word. In their book "Next Generation Java Testing"[2], they take pains to say that they developed TestNG in response to "perceived limitations" in JUnit3. In some cases, these were not limitations, but rather design goals in JUnit that were in conflict with some types of tests. One example is the manner in which JUnit re-instantiates the test class for each test case to provide a "clean" starting point. In this regard, TestNG supports test models beyond unit tests, for example, tests that are inter-dependent.

TestNG provides a couple of ways to pass parameters to test cases: the parameters can be passed through properties defined in testng.xml, or with the DataProvider annotation. DataProviders support passing complex objects as parameters. Here's how it works:

You define a method, associated with the "@DataProvider" annotation, that returns an array of object arrays to your test cases.

Hang on - an array of arrays? Why is that needed? Here's why: each inner array of objects is passed to one invocation of the test method. In this way, you can pass multiple parameters, each of whatever object type you want, to the test cases. It's like this: say you want to pass a String and an Integer[3] to a test case. The DataProvider method returns an object of this type:

Object [][]

So that with these values:

Groucho, 1890
Harpo, 1888
Chico, 1887

The DataProvider provides:

array of objects for call #1 to test method = [Groucho] [1890]
array of objects for call #2 to test method = [Harpo] [1888]
array of objects for call #3 to test method = [Chico] [1887]

Or: this array of arrays of objects = [array for call #1] [array for call #2] [array for call #3]

Simple, right? Once you get past the idea of an array of arrays. Here's the Groovy code.

package misc

import org.testng.annotations.*
import org.testng.TestNG
import org.testng.TestListenerAdapter
import static org.testng.AssertJUnit.*

public class DataProviderExample {

    /* Test that consumes the data from the DataProvider */
    @Test(dataProvider = "theTestData")
    public void printData(String name, Integer dob) {
        println("name: " + name + " dob: " + dob)
    }

    /* Method that provides data to the test methods that reference it */
    @DataProvider(name = "theTestData")
    public Object[][] createData() {
        return [
            [ "Groucho", new Integer(1890) ],
            [ "Harpo", new Integer(1888) ],
            [ "Chico", new Integer(1887) ]
        ] as Object[][]
    }
}


When you run this with TestNG, the output looks like:

[Parser] Running:

name: Groucho dob: 1890
name: Harpo dob: 1888
name: Chico dob: 1887
PASSED: printData("Groucho", 1890)
PASSED: printData("Harpo", 1888)
PASSED: printData("Chico", 1887)

Tests run: 3, Failures: 0, Skips: 0

I'll return to Groovy and TestNG in some future blog posts as they are very "groovy" test tools. Well, as Groucho would say, "hello, I must be going..."


[3] This is a very slight variation on the sample shown here:

* Thank you, Jirka! ;-)

Saturday, January 24, 2009

Defect-Driven Test Design - What if You Start Where You Want to Finish?

When I start a new software testing project, I always begin by drafting a test plan. I've always viewed test plans as useful tools in pointing your test imagination in the right directions for the project at hand. The very act of committing your thoughts, knowledge, and the results of your test and project investigation to a written plan enforces a level of discipline and organization on the testing. The adage that "plan is also a verb" is very true with regard to test plans.

A written plan is also a great vehicle for collecting input from other members of your project team: your development team counterparts, from whom you can gather information about which new features may be most at risk; the support team, from whom you can collect information about the issues that are affecting your customers; and the marketing team, from whom you can collect information about business priorities and customer expectations. The task of conducting a review of your plan with these other members of your project team can be a grueling exercise, as they may question both the means and motivations supporting your test design or strategy, but their input, along with the project design and functional specifications, is a crucial element in creating an effective plan.

But wait - there's another source of information that you should mine. Your bug database.

Think about it for a minute. Where do you find bugs? New code is always suspect, as you haven't seen how it behaves under test. But where else should you look? In the code where you have found bugs in the past.[1] Odds are, this code is complex, which means it may be hard to change or maintain without introducing new bugs; or it is subject to constant change as the application under test evolves from release to release; or maybe its design is faulty, so it has to be repeatedly corrected; or maybe it's just plain buggy. By examining your bug database, you can get a good picture of which components in your application have a checkered past.

However, you shouldn't stop there. What you want to do is to create a bug taxonomy[2] for your application.

The term "taxonomy" is typically used to describe the classification of living organisms into ordered groups. You should do the same thing with your bug database. Each bug probably contains information about the component or module in which the bug was found. You can use this data to identify the parts of the application that are at risk, based on their bug-related history. You also have information as to the types of bugs that you've found in the past. Maybe you'll find a pattern of user access permission problems, where classes of users are able to exercise functions that should only be available to admin users. Or maybe you'll find a pattern of security problems where new features failed to incorporate existing security standards. These are the areas in which you should invest time developing new tests.

OK, so now you have your past history of application-specific bugs grouped into a classification that describes the history of your application. What can you do to make the taxonomy an even more effective testing tool? Move beyond the bugs themselves to classify the root causes of the bugs, and then attack these root causes on your project. You may find that many bugs are caused by problems in your development, design, documentation, and even your test processes. For example, unclear requirements definitions may result in features being implemented incorrectly, or in unnecessary or invalid test environments being used. For an extensive bug taxonomy, including multiple types of requirements-related problems, see the taxonomy created by Boris Beizer in his 1990 book "Software Testing Techniques."[3] This is probably the best known bug taxonomy. It includes bug classifications based on implementation, design, integration, requirements, and many other areas.

But, don't stop there. What you have in your taxonomy is a representation of the bugs that you have found. What about the bugs that you haven't (yet) found? Take your taxonomy and use it as the starting point for brainstorming about the types of bugs that your application MAY contain. Expand your taxonomy to describe not only what has happened, but what might happen. For example, if you have seen database connection related problems in the past, you could test to ensure that database transactions can be rolled back by the application when its database connection fails. Then take it one step further. If you're having problems with database connections failing, what about other server connections? What about the LDAP server that the application relies on for authentication? How does the application respond if that server is unreachable? Does the application crash, or does it generate an informative error message for the user?

How can you then use the taxonomy that you've built? Giri Vijayaraghavan and Cem Kaner identified another use for your bug taxonomy in their 2003 paper "Bug Taxonomies: Use Them to Generate Better Tests"[4]. Their suggestion is to use the bug taxonomy as a check against your test plans. For any bug type that you can define, you should have a corresponding test in your test plans. In other words, if you don't have such a test, then you have a hole in your test planning.
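In practice, that check boils down to a set difference between the taxonomy's bug categories and the categories your test plan covers. Here's a small sketch of the idea; the category names are invented examples:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

// A sketch of checking a bug taxonomy against a test plan: any taxonomy
// category with no corresponding test is a hole in the planning.
public class TaxonomyGapCheck {

    public static Set<String> uncovered(Set<String> taxonomy, Set<String> planned) {
        Set<String> gaps = new TreeSet<>(taxonomy); // sorted for readability
        gaps.removeAll(planned);
        return gaps;
    }

    public static void main(String[] args) {
        Set<String> taxonomy = new HashSet<>(Arrays.asList(
                "access-permissions", "db-connection-failure", "ldap-unreachable"));
        Set<String> planned = new HashSet<>(Arrays.asList(
                "access-permissions", "db-connection-failure"));

        // Each uncovered category points at a missing test.
        System.out.println("Uncovered bug types: " + uncovered(taxonomy, planned));
        // prints: Uncovered bug types: [ldap-unreachable]
    }
}
```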

To sum it all up, finding bugs is the goal of software tests, and removing bugs from your product is the goal of your software testing. You shouldn't, however, look at the bugs as a final destination. They are also a persistent resource and, as such, part of the institutional memory of your project. Building new tests based on these bugs should be part of your testing. Taking the bugs as a starting point, and expanding them into a taxonomy of both actual and potential bugs, should be part of your effort to complete the always "unfinished agenda" of your future testing.


[1] Glenford Myers, The Art of Software Testing, p.11.


[3] Boris Beizer, Software Testing Techniques, 2nd edition (New York: Van Nostrand Reinhold, 1990). The bug statistics and taxonomy can be copied and used. I found it on-line here: