Sunday, November 25, 2007

In Praise of Exploratory Testing

Have you ever found a bug because, in response to some unexpected behavior on the part of the software under test, you started "exploring" and tried some unplanned tests? Were your actions guided by your past experience in testing, in that you had an intuition that there was a bug lurking in the code? When you found the bug, did you feel guilty because the test you performed was not part of the test plan? Did someone complain, "Hey - that test isn't in the plan!"?

Well, you shouldn't have felt guilty. You were actually performing exploratory testing[1].

In an earlier post to this blog, I talked about how, in order to be effective, test plans have to be dynamic, not static, and have to adapt to changes in a project's development, especially with regard to risks. But - it's impossible to anticipate in advance every test that will be necessary. Once you get your hands on the software to be tested, you often learn more about the product.

In exploratory testing, you combine your learning about the software under test, your test design, and your test execution all at the same time, as a single activity rather than as separate tasks. But wait, isn't this just ad hoc testing? The question of whether there is a real difference between ad hoc and exploratory testing is frequently raised. The best description[2][3] that I've seen of the substantive differences between these two types of testing is that while ad hoc testing tends to be random in nature, in exploratory testing you rely on your testing experience to select paths that will uncover bugs, based on the behavior of the software under test. It's very "situational": the tests you create are based on the multi-step situations you set up and on the situations the software under test presents to you.

I once worked for a manager who practiced ad hoc testing in an extreme form. He frequently parachuted into projects, attempted a few tasks with the software under test, reported finding numerous bugs, and then walked away. However, the bugs that he reported tended to fall into two categories: legitimate bugs that were cosmetic or trivial, or user errors. In contrast, in exploratory testing, you're working more like a surgeon looking for a tumor. You've seen the signs of software tumors before (for example, degradation in performance caused by a memory leak) and you put that experience to use as you probe for high-value bugs. You start your testing with an idea of what tests you want to perform, and then, based on situations caused by the software's behavior, coupled with your past experience, you start to explore complex situations. This situational testing is one of exploratory testing's strengths.
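To make the memory-leak example concrete, here's a rough sketch of the kind of throwaway probe you might improvise mid-session. Treat it as an illustration only: it assumes a Linux-style /proc filesystem, and suspect_operation() is a hypothetical stand-in for whatever action in the software under test made you suspicious.

    # Rough exploratory probe (not a planned test): repeat a suspect operation
    # and sample resident memory before and after. Steady growth across runs
    # is a hint worth following up, not proof of a leak.
    import os

    def resident_kb(pid):
        """Return the resident set size (VmRSS, in kB) of the given process."""
        with open("/proc/%d/status" % pid) as status:
            for line in status:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])
        return 0

    def suspect_operation():
        # Hypothetical stand-in: open and close a document, reconnect a
        # session, reload a page - whatever raised your suspicion.
        pass

    target_pid = os.getpid()  # or the pid of the application under test
    before = resident_kb(target_pid)
    for _ in range(1000):
        suspect_operation()
    after = resident_kb(target_pid)
    print("RSS: %d kB -> %d kB (growth: %d kB)" % (before, after, after - before))

Nothing about that probe belongs in a formal plan; its value is that it turns a hunch into evidence you can attach to a bug report.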

So, don't feel guilty. Observe the software under test, evaluate its behavior through the filter of your experience and keep on exploring!

Ref:

[1] Exploratory Testing Explained by James Bach

[2] http://blogs.msdn.com/imtesty/archive/2007/10/19/exploratory-testing-versus-ad-hoc-testing.aspx

[3] http://www.sqaforums.com/showflat.php?Cat=0&Board=UBB46&Number=406151&page=2&fpart=all

Wednesday, November 14, 2007

A Test Plan is a Tool - a Dynamic Tool, That is

Why do we write test plans? Are we all just frustrated unpublished authors? (No, that's why we write blogs. ;-)

A test plan is a tool. The act of researching and writing a plan forces us to examine the software under test, to understand how it works, and to understand the risks inherent in its function and its environment. By writing the plan as a document, we contribute to the institutional memory of the software project, as the document is persistent and will be available as a resource to all the project team members and to other projects.

But - the fact that the plan may be stored as a static document file is purely incidental. Documents, files, and database records are simply the mechanisms by which we make collected information available to be read.

The most important thing about a test plan is that the information in the plan has to be dynamic, not static. It has to adapt to changes in the project scope, direction, design, etc., and to changes in the potential risks that the project faces.

To be a successful test planning tool, the test plan has to reflect the "unfinished agenda" [1] of the testing effort.

But, what should that unfinished agenda be based on? The testing and quality risks that the product currently faces. The important thing to remember is that the risks the product faces during its testing will change. Perhaps the product design will change. If this happens, the plan has to adapt. Or, maybe new information will be received from a beta test site about user requirements. Or, maybe tests that were originally planned become obsolete.

Remember how - in an earlier post to this blog - we talked about how it is impossible to find literally every bug in a product? The tests have to focus on what's most important, and that means focusing on what's most at risk. And - this will change constantly during the testing of a product.

So, the plan must be dynamic, not static, changing to meet the challenge of each new set of risks. The plan is part of a process - a way of finding the most serious bugs. [2]

Ref:

[1] From a 1960 speech by US President Kennedy
[2] Yes, from another JFK speech

What format should a test plan take? While the content is always more important than the format, it does help to have a well-organized plan. I'm partial to the IEEE test plan format:

IEEE Test Plan
IEEE Test Plan in Wikipedia
Useful Wikipedia entry on test plans

Thursday, November 8, 2007

The Goal of a SW Test is to Find a Bug

"What's the goal of a software test?"

I always ask this question when I'm interviewing someone for a software development or test position. It sounds like an overly simplistic question, doesn't it? After all, we all know that:

The goal of software testing is to make sure there are no bugs, right?

Wrong! The goal of writing a software test is to find a bug.

This is why we test software. To locate the bugs and get them fixed. But, what about software that has no bugs?

There is no such thing as 100% "bug free" software. Why is this? The answer is inherent in the very nature of software. It's "soft." When you're working with physical media such as steel, or concrete, or playdough, the limitations of what you can do are based on the physical limitations of that media and the physical environment. With software, you face limitations of memory or CPU speed, but you are really only limited by your imagination. This is what makes software engineering so rewarding, and so much fun. You are basically building virtual structures out of ideas. And, unlike physical media, you can easily tear down, redesign, and rebuild structures in software. Sometimes badly. And so, there are bugs and you need new tests to find them.

But, hang on for a minute. In software testing, your goal cannot be to find literally every bug. You have to concentrate on finding the bugs that matter most. Doing this requires an understanding of how the product under test works, the risks in its design, and how its customers will actually use it. And it requires that the tests that you write be intentionally destructive of, and hostile to, the product under test. It's often hard for people to be destructive of their own work. This is why you want an independent test team to create and execute tests.
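As an illustration of what "intentionally hostile" can look like, here is a minimal sketch. The parse_quantity() function is a hypothetical stand-in for product code; the point is the choice of boundary and malformed inputs, not the particular API.

    # A minimal sketch of hostile test cases against a hypothetical
    # parse_quantity() function. The inputs are chosen to try to break the
    # code, not to confirm the happy path.
    import unittest

    def parse_quantity(text):
        # Stand-in for the real product code under test.
        value = int(text)
        if value < 0:
            raise ValueError("quantity must be non-negative")
        return value

    class HostileQuantityTests(unittest.TestCase):
        def test_rejects_malformed_input(self):
            for bad in ["", "  ", "abc", "12.5", "--1", "1e9999", "0x10", None]:
                with self.subTest(bad=bad):
                    with self.assertRaises((ValueError, TypeError)):
                        parse_quantity(bad)

        def test_boundaries(self):
            self.assertEqual(parse_quantity("0"), 0)
            with self.assertRaises(ValueError):
                parse_quantity("-1")

    if __name__ == "__main__":
        unittest.main()

A developer tends to write the test that proves the code works; an independent tester is more likely to write the list of inputs that proves it doesn't.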

Likewise, having a legacy library of thousands of tests that run cleanly does not guarantee that there are no bugs. It just means that the tests are not finding bugs. You have to constantly review the tests and map the test coverage to the risks the product currently faces. Software products are dynamic. The test plans and tests have to adapt to keep pace.

In "The Art of Software Testing," Glenford Myers uses a medical analogy. If you feel ill and undergo medical tests that do not result in a diagnosis, is it accurate to say that the tests were "successful?" No! You're still sick, and your doctor hasn't run the correct test yet!


Ref: Glenford J. Myers, The Art of Software Testing. Wiley, 1979.

Ref: The goal of a software test: When failure equals success - IBM Developerworks article

Wednesday, November 7, 2007

Fundamentals, not "Philosophy"

There's a great line in a book by Jack Nicklaus that I've always thought applies to software testing. When he was asked about his philosophy for approaching a difficult task (in his case, hitting a golf ball), his response was:

"I don't believe in philosophies. I believe in fundamentals."

I'll start a periodic series of posts on software testing fundamentals - "the rules" - soon.

Monday, November 5, 2007

Automated Open Source GUI Test Tool - Dogtail

I came across this open source tool a while ago and was very impressed. The nice things about it are that it's open source, it's easy to get started using it, and, oh yes, it works!

The tool name is Dogtail - it's available: here

The technology it uses is interesting too. It uses Accessibility (A11Y) technologies to communicate with desktop applications. This is a key aspect of Dogtail's design. Unlike some other GUI test automation frameworks, Dogtail doesn't "scrape" information from the visual representation of the GUI of the application under test into a data store. Instead, it makes use of accessibility-related metadata to create an in-memory model of the application's GUI elements.
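To give a feel for what that enables, here's a small sketch of a Dogtail script. Treat it as an assumption-laden illustration: the application (gedit) and its widget roles and menu names are just examples, and the details may differ between Dogtail versions.

    # Illustrative Dogtail sketch (widget names/roles for gedit are assumptions).
    from dogtail.utils import run
    from dogtail.tree import root

    run('gedit')                        # launch the app with accessibility enabled
    app = root.application('gedit')     # locate it in the accessibility tree

    # Address the text area by its accessible role, not by screen coordinates.
    text_area = app.child(roleName='text')
    text_area.text = 'Hello from Dogtail'

    # Menus and buttons are reached through the same tree of GUI elements.
    app.menu('File').menuItem('Quit').click()

Because the script addresses widgets through the accessibility tree, it survives changes to theme, font, or window layout - exactly the cosmetic changes that tend to break screen-scraping tools.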

I did not write Dogtail itself - I'm just a happy user. My contribution was some user documentation in the form of articles in Red Hat Magazine. One of the articles includes a Flash demo.

The articles are linked from the Dogtail entry in Wikipedia.

Introduction


I’m starting this blog to keep track of useful software test tools and techniques that I find, and to relate my experiences in software engineering and testing. It can be a lonely feeling when you’re staring at a problem in testing software. I’m hoping that this blog can help...no, why do you ask? I’m sure that it’s fire-proof. Check the design spec. There must be some tests for that in the plan too...