Sunday, December 23, 2007

That time of year again!

Frohe Weihnachten! Buon Natale! Joyeux Noël! Merry Christmas!

(I'll blog about software testing after the holidays!)

Here are some scenes from snowy Boston...









Saturday, December 15, 2007

When Less May Be More - A Lighter Weight Test Plan

I was talking to a couple of colleagues about test plans this week - both the format of the plans and their content.

The IEEE standard is, well, a "standard." It may, however, be intimidating to people unaccustomed to software testing. And it lacks an easy-to-use construct for differentiating between classes of tests. The discussion of test plans came up this week because one of my colleagues is trying to - gently, but effectively - introduce a structured test process into an organization whose members are largely unfamiliar with it. He wanted very much to take a medical approach to this and "first, do no harm." For him, a lightweight adaptation of the IEEE standard is a better starting point. We came up with this template for a test plan:

Introduction - what are we doing?
Test Strategy - how are we doing it?
Test Priorities - what's most important?
Scope - what's tested?
Scope - what's beyond the scope of testing?
Test Pass/Fail Criteria - how do we know that it's good or bad?
Test Deliverables - docs and programs that we'll build
Test Cases - Functional Tests
* Tests mapped to product features
Test Cases - Non-functional Tests - whichever apply[1]
* Compatibility testing
* Compliance testing
* Documentation testing
* Endurance testing
* Load testing
* Localization testing and Internationalization testing
* Performance testing
* Resilience testing
* Security testing
* Scalability testing
* Stress testing
* Usability testing
* Volume testing
Test Environment/Configurations
Responsibilities - who's doing what?
Schedule/Milestones - when are they doing it?
Risks and Contingencies - what might go wrong and how we'll handle it
Approvals - do we agree?
References - pointers to background docs
Revision History - why did the plan change and how?
Appendices - anything else?

It's worth noting how this template explicitly separates the descriptions of functional and non-functional tests, as the distinction between these test classes may be a new concept to some of the team members. What's that? Is this distinction also a new concept to you? I'll discuss this subject in the next post to this blog!
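(As an aside: if the team's automated tests happen to be written in Python, one lightweight way to keep the functional/non-functional distinction visible in the test code itself is to tag tests by class. The sketch below assumes a pytest-based suite; the marker names, the toy login() function, and the timing threshold are purely illustrative and not part of the template above.)

    # A minimal sketch, assuming a pytest-based suite. Marker names and the
    # toy login() function are illustrative only; register custom markers
    # in pytest.ini to avoid "unknown marker" warnings.
    import time

    import pytest

    def login(user, password):
        # Stand-in for the real product code under test.
        return password == "s3cret"

    @pytest.mark.functional
    def test_login_accepts_valid_credentials():
        # Functional test: maps directly to a product feature ("login").
        assert login("alice", "s3cret")

    @pytest.mark.nonfunctional
    def test_login_responds_quickly():
        # Non-functional (performance) test: same feature, different test class.
        start = time.time()
        login("alice", "s3cret")
        assert time.time() - start < 2.0

With the markers in place, running "pytest -m functional" or "pytest -m nonfunctional" exercises just one class of tests - a handy way to keep the plan's structure and the test suite's structure in step.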

Ref:

[1] http://en.wikipedia.org/wiki/Non-functional_tests

P.S. My Fedora friends have picked up on this test plan outline - it's the basis for their test plan template: https://fedoraproject.org/wiki/QA:Test_Plan_Template

Monday, December 10, 2007

Why Are the Fountains Always Broken? (The forgotten cost of test automation: maintenance)

I'm writing this post from just outside of Boston. By American standards, Boston is an old city. One of the distinguishing features of the city is its collection of public parks. These parks are dotted with fountains which vary in age and design from classical and historic to modern and avant-garde. These fountains, however, all share one common characteristic.

They always seem to be broken.

Why is this the case? It is certainly at least partially due to the ravages of the harsh New England climate on outdoor plumbing (as I'm writing this, the Boston area is having its second snow and ice storm of the young 2007-2008 winter). There is, however, I think, another possible reason.

When we (we human beings, that is) build something, we often forget about the cost of maintaining it after it is built.

In the case of the fountains, the lack of maintenance may be caused by the fact that in any year's budget, other responsibilities such as health care, roads and bridges, and public safety are higher priorities. While it may be possible to attract funding for new construction of exciting or groundbreaking public places, maintenance is, in contrast, considered a mundane exercise.

The same pattern can be seen in software engineering - for example, in the development and maintenance of automated software tests.

In the course of developing a plan for automating tests, we consider the investment in time and resources to design, build, and debug the tests. Once written, however, the tests will lose their value unless they are maintained and kept in sync with the software project that they are intended to test. As the project proceeds from release to release, new features are added and new tests are needed. Meanwhile, the existing library of tests may begin to fail as it falls out of sync with the project code. Worse yet, the existing tests may provide a false sense of security: they may continue to run without error, but fail to actively exercise changed, and therefore at-risk, areas of code.

Automated tests are essential to any software test effort. The tests will pay back the investment you made to build them many times over during the life of a project. But don't think that the tests are "done" just because you've completed building the first version of them. You will have to revisit, review, and update the tests to keep them in working order. So, when you build new tests, don't forget to plan for maintenance. Part of this planning involves good test design, so that the tests can be modified as needed. Another part involves remembering that this maintenance will require time and human effort.
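Here's one small illustration of what "designing for maintenance" can look like in practice: keep the details that change most often (labels, locators, expected strings) in one place, away from the test logic. This is just a sketch - the FakeApp class and the label names are made-up stand-ins for whatever driver and application your real tests would use.

    # A minimal sketch of one maintainability habit: centralize the details
    # that change most often. All names here are hypothetical.

    UI_LABELS = {
        "save_button": "Save",    # if the product renames the button, update one line
        "title_field": "Title",
    }

    class FakeApp:
        """Stand-in for the real application driver (GUI, web, API...)."""
        def find_by_label(self, label):
            return f"<widget labelled '{label}'>"

    def find_widget(app, key):
        # Single lookup point: every test resolves labels through here, so a
        # product rename or a new locator strategy is a one-line change.
        return app.find_by_label(UI_LABELS[key])

    def test_save_and_title_widgets_exist():
        app = FakeApp()
        assert find_widget(app, "save_button")
        assert find_widget(app, "title_field")

When release N+1 renames "Save" to something else, the fix is one line in one table, not a hunt through dozens of tests.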

Why are the fountains broken? Maybe because no one planned to or was able to maintain them. Are your automated tests running correctly today? Even if it has been several months and project updates since they were first written or last reviewed? It might be a good time to look for leaks...

Saturday, December 1, 2007

Why Doesn't This Work? It's Broken. That's Why!

Here's a rathole that software test engineers often fall into. When you test software, you frequently encounter situations where some bugs block certain sets or types of tests. Until these bugs are resolved, you often resort to "workarounds" to avoid the problems and to enable you to continue to make testing progress. The danger is that in the course of "working around" these problems, you forget that each workaround has an actual bug at its core. If those bugs are never properly resolved, you may never catch them - but your customers probably will.

So, in your earnest attempt to make as much testing progress as you can, even with immature and buggy software, don't forget to remove any workarounds that you put into place. Each workaround should always be treated as a "canary in a coal mine" and as a pointer to a potential serious bug. A good way to deal with these workarounds is to track them in your bug tracking system along with the bugs. You should close out one of these workarounds when the underlying bug is resolved and you can begin to take the road that you wanted to take in the first place! (And that will really make all the difference. My apologies to Robert Frost. ;-)
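If your suite happens to be pytest-based, one concrete way to keep a workaround tied to its underlying bug is to mark the real test as an expected failure, with the bug ID in the reason, and give the workaround test a name that points back to the same bug. The bug ID, the export_report() function, and its behavior below are all made up for illustration:

    import pytest

    def export_report(fmt):
        # Stand-in for the product code; pretend PDF export is broken today.
        if fmt == "pdf":
            raise RuntimeError("PDF export not working")
        return f"report.{fmt}"

    @pytest.mark.xfail(reason="BUG-1234: PDF export crashes; remove when fixed")
    def test_export_pdf():
        # The test we really want to pass; it fails today because of BUG-1234.
        assert export_report("pdf") == "report.pdf"

    def test_export_html_workaround_for_bug_1234():
        # The workaround path used in the meantime. Delete this test (and the
        # xfail marker above) once BUG-1234 is resolved.
        assert export_report("html") == "report.html"

When BUG-1234 is finally fixed, the xfail test starts showing up as an unexpected pass - a built-in reminder that the workaround's days are numbered.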

The Opposite of Fundamentals, the "Ratholes"

There's a great line from Robert Frost's poem "The Road Not Taken"[1] that I've always thought applied to software engineering and software testing. The line from the poem is: "Two roads diverged in a wood, and I took the one less traveled by, And that has made all the difference."

Sometimes, however, the road not taken is not taken for a very good reason! That road can lead down a rathole when it's based on bad software design, poor implementation, or tactical mistakes.

As a similar construct to the fundamentals (the "rules") of software testing that I first mentioned in an earlier post, I'm also going to start a series of periodic posts on mistakes that people often make. In other words, the ratholes into which we all fall when we choose the wrong road.

Ref:

[1] http://www.bartleby.com/119/1.html

Sunday, November 25, 2007

In Praise of Exploratory Testing

Have you ever found a bug because, in response to some unexpected behavior on the part of the software under test, you started "exploring" and tried some unplanned tests? Were your actions guided by your past experience in testing, in that you had an intuition that there was a bug lurking in the code? When you found the bug, did you feel guilty because the test that you performed was not part of the test plan? Did someone complain to you, "Hey - that test isn't in the plan!"?

Well, you shouldn't have felt guilty. You were actually performing exploratory testing[1].

In an earlier post to this blog, I talked about how, in order to be effective, test plans have to be dynamic, not static, and have to adapt to changes, especially with regard to risks, in a project's development. But - it's impossible to anticipate in advance every test that will be necessary. Once you get your hands on the software to be tested, you often learn more about the product.

In exploratory testing, you combine your learning about the software under test and your test design with your test execution, all at the same time, as a single action and not as separate tasks. But wait, isn't this just ad hoc testing? The question about whether there is a real difference between ad hoc and exploratory testing is frequently raised. The best description[2][3] that I've seen about the substantive differences between these two types of testing is that while ad hoc testing tends to be random in nature, in exploratory testing, you rely on your testing experience to select paths that will uncover bugs, based on the behavior of the software under test. It's very "situational" in that the tests that you create are based on the multi-step situations that you create and the situations the software under test presents to you.

I once worked for a manager who practiced ad hoc testing in an extreme form. He frequently parachuted into projects, attempted a few tasks with the software under test, reported finding numerous bugs, and then walked away. However, the bugs that he reported tended to fall into two categories: legitimate bugs that were cosmetic or trivial, or user errors. In contrast, in exploratory testing, you're working more like a surgeon looking for a tumor. You've seen the signs of software tumors before (for example, degradation in performance caused by a memory leak) and you put that experience to use as you probe for high-value bugs. You start your testing with an idea of what tests you want to perform, and then, based on situations caused by the software's behavior, coupled with your past experience, you start to explore complex situations. This situational testing is one of exploratory testing's strengths.

So, don't feel guilty. Observe the software under test, evaluate its behavior through the filter of your experience and keep on exploring!

Ref:

[1] Exploratory Testing Explained by James Bach

[2] http://blogs.msdn.com/imtesty/archive/2007/10/19/exploratory-testing-versus-ad-hoc-testing.aspx

[3] http://www.sqaforums.com/showflat.php?Cat=0&Board=UBB46&Number=406151&page=2&fpart=all

Wednesday, November 14, 2007

A Test Plan is a Tool - a Dynamic Tool, That is

Why do we write test plans? Are we all just frustrated unpublished authors? (No, that's why we write blogs. ;-)

A test plan is a tool. The act of researching and writing a plan forces us to examine the software under test, to understand how it works, and to understand the risks inherent in its function and its environment. By writing the plan as a document, we contribute to the institutional memory of the software project, as the document is persistent and will be available as a resource to all the project team members and to other projects.

But - the fact that the plan may be stored as a static document file is purely incidental. Documents, files, and database records are simply the mechanism by which we make collected information available to be read.

The most important thing about a test plan is that the information in the plan has to be dynamic, not static. It has to adapt to changes in the project scope, direction, design, etc., and to changes in the potential risks that the project faces.

To be a successful test planning tool, the test plan has to reflect the "unfinished agenda" [1] of the testing effort.

But what should that unfinished agenda be based on? The testing and quality risks that the product currently faces. The important thing to remember is that the risks the product faces during its testing will change. Perhaps the product design will change. If this happens, the plan has to adapt. Or maybe new information will be received from a beta test site about user requirements. Or maybe tests that were originally planned become obsolete.

Remember how - in an earlier post to this blog - we talked about how it is impossible to find literally every bug in a product? The tests have to focus on what's most important, and that means focusing on what's most at risk. And - this will change constantly during the testing of a product.

So, the plan must be dynamic, not static, changing to meet the challenge of each new set of risks. The plan is part of a process - a way of finding the most serious bugs. [2]

Ref:

[1] From a 1960 speech by US President Kennedy
[2] Yes, from another JFK speech

What format should a test plan take? While the content is always more important than the format, it does help to have a well-organized plan. I'm partial to the IEEE test plan format:

IEEE Test Plan
IEEE Test Plan in Wikipedia
Useful Wikipedia entry on test plans

Thursday, November 8, 2007

The Goal of a SW Test is to Find a Bug

"What's the goal of a software test?"

I always ask this question when I'm interviewing someone for a software development or test position. It sounds like an overly simplistic question, doesn't it? After all, we all know that:

The goal of software testing is to make sure there are no bugs, right?

Wrong! The goal of writing a software test is to find a bug.

This is why we test software. To locate the bugs and get them fixed. But, what about software that has no bugs?

There is no such thing as 100% "bug free" software. Why is this? The answer is inherent in the very nature of software. It's "soft." When you're working with physical media such as steel, or concrete, or playdough, the limitations of what you can do are based on the physical limitations of that media and the physical environment. With software, you face limitations of memory or CPU speed, but you are really only limited by your imagination. This is what makes software engineering so rewarding, and so much fun. You are basically building virtual structures out of ideas. And, unlike physical media, you can easily tear down, redesign, and rebuild structures in software. Sometimes badly. And so, there are bugs, and you need new tests to find them.

But hang on for a minute. In software testing, your goal cannot be to find literally every bug. You have to concentrate on finding the bugs that matter most. To do this requires an understanding of how the product under test works, the risks in its design, and how its customers will actually use it. And it requires that the tests you write be intentionally destructive of, and hostile to, the product under test. It's often hard for people to be destructive of their own work. This is why you want an independent test team to create and execute tests.

Likewise, having a legacy library of thousands of tests that run cleanly does not guarantee that there are no bugs. It just means that the tests are not finding bugs. You have to constantly review the tests and map the test coverage to the risks the product currently faces. Software products are dynamic. The test plans and tests have to adapt to keep pace.
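One low-tech way to keep that mapping honest is to maintain an explicit risk-to-tests table and flag any risk that has no tests behind it. The sketch below (in Python) uses made-up risk and test names; the point is the habit, not the tooling.

    # A toy sketch: keep test coverage mapped to current risks by maintaining
    # an explicit risk-to-tests table and flagging risks with no tests.
    # Risk names and test names are made up for illustration.

    RISK_TO_TESTS = {
        "new import/export code in 2.0": ["test_export_pdf", "test_import_csv"],
        "upgraded database layer":       ["test_upgrade_from_1_x"],
        "localized UI strings":          [],   # no tests yet - a gap to close
    }

    def uncovered_risks(mapping):
        """Return the risks that currently have no tests at all."""
        return [risk for risk, tests in mapping.items() if not tests]

    if __name__ == "__main__":
        for risk in uncovered_risks(RISK_TO_TESTS):
            print(f"WARNING: no tests cover: {risk}")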

In "The Art of Software Testing," Glenford Myers uses a medical analogy. If you feel ill and undergo medical tests that do not result in a diagnosis, is it accurate to say that the tests were "successful?" No! You're still sick, and your doctor hasn't run the correct test yet!


Ref: Glenford J. Myers, The Art of Software Testing. Wiley, 1979.

Ref: The goal of a software test: When failure equals success - IBM Developerworks article

Wednesday, November 7, 2007

Fundamentals, not "Philosophy"

There's a great line in a book by Jack Nicklaus that I've always thought applies to software testing. When he was asked about his philosophy for approaching a difficult task (in his case, hitting a golf ball), his response was:

"I don't believe in philosophies. I believe in fundamentals."

I'll start a periodic series of posts on software testing fundamentals - "the rules" - soon.

Monday, November 5, 2007

Automated Open Source GUI Test Tool - Dogtail

I came across this open source tool a while ago and was very impressed. The nice things about it are that it's open source, it's easy to get started using it, and, oh yes, it works!

The tool's name is Dogtail - it's available here.

The technology it uses is interesting too. It uses Accessibility (A11Y) technologies to communicate with desktop applications. This is a key aspect of Dogtail's design. Unlike some other GUI test automation frameworks, Dogtail doesn't "scrape" information from the visual representation of the application under test's GUI into a data store. Instead, it makes use of the accessibility-related metadata to create an in-memory model of the application's GUI elements.
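To give a feel for it, here's roughly what a Dogtail script looks like. Treat this as a sketch: the exact calls and widget roles depend on the Dogtail version and on the application (gedit, in this made-up example).

    # A rough sketch; API details and widget names are illustrative.
    from dogtail.utils import run
    from dogtail.tree import root

    # Launch the application and wait for it to appear on the accessibility bus.
    run('gedit')
    gedit = root.application('gedit')

    # No screen-scraping: widgets are located through their accessibility
    # metadata (role and name), not their pixels.
    text_area = gedit.child(roleName='text')
    text_area.text = 'Hello from Dogtail'

    # Menus and buttons are addressed the same way.
    gedit.child(name='File', roleName='menu').click()

Because the script talks to the same accessibility layer that assistive technologies use, it can keep working across theme and font changes that would break a pixel-based approach.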

I did not write Dogtail itself - I'm just a happy user. My contribution was some user documentation in the form of articles in Red Hat Magazine. One of the articles includes a Flash demo.

The articles are linked from the Dogtail entry in Wikipedia.

Introduction


I’m starting this blog to keep track of useful software test tools and techniques that I find, and to relate my experiences in software engineering and testing. It can be a lonely feeling when you’re staring at a problem in testing software. I’m hoping that this blog can help...no, why do you ask? I’m sure that it’s fire-proof. Check the design spec. There must be some tests for that in the plan too...