Saturday, December 15, 2007
When Less May Be More - A Lighter Weight Test Plan
I was talking with a couple of colleagues about test plans this week, covering both the format of the plans and their content.
The IEEE standard is, well, a "standard." It may, however, be intimidating to people unaccustomed to software testing, and it lacks an easy-to-use construct for differentiating between classes of tests. The discussion of test plans came up this week because one of my colleagues is trying to - gently, but effectively - introduce a structured test process into an organization whose members are largely unfamiliar with it. He very much wanted to take a medical approach to this and "first, do no harm." For him, a lightweight adaptation of the IEEE standard is a better starting point. We came up with this template for a test plan:
Introduction - what are we doing?
Test Strategy - how are we doing it?
Test Priorities - what's most important?
Scope - what's tested?
Scope - what's beyond the scope of testing?
Test Pass/Fail Criteria - how do we know whether it's good or bad?
Test Deliverables - docs and programs that we'll build
Test Cases - Functional Tests
* Tests mapped to product features
Test Cases - Non-Functional Tests - whichever apply [1]
* Compatibility testing
* Compliance testing
* Documentation testing
* Endurance testing
* Load testing
* Localization testing and Internationalization testing
* Performance testing
* Resilience testing
* Security testing
* Scalability testing
* Stress testing
* Usability testing
* Volume testing
Test Environment/Configurations
Responsibilities - who's doing what?
Schedule/Milestones - when are they doing it?
Risks and Contingencies - what might go wrong and how we'll handle it
Approvals - do we agree?
References - pointers to background docs
Revision history - why did the plan change and how?
Appendices - anything else?
It's worth noting that this template explicitly separates the descriptions of functional and non-functional tests, since the distinction between these test classes may be a new concept to some of the team members. What's that? Is this distinction also new to you? I'll discuss the subject in the next post to this blog!
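To make the distinction a bit more concrete before that post, here is a minimal sketch (my own illustration, not part of the template) of how a team working in Python might keep functional and non-functional test cases side by side but clearly separated; the add() function, the iteration count, and the one-second performance budget are all invented placeholders.

```python
# Hypothetical sketch: functional vs. non-functional test cases in one suite.
# The add() function and the performance budget are invented for illustration.
import time
import unittest


def add(a, b):
    """Stand-in for a real product feature under test."""
    return a + b


class FunctionalTests(unittest.TestCase):
    """Tests mapped to product features: does it do the right thing?"""

    def test_add_returns_sum(self):
        self.assertEqual(add(2, 3), 5)


class NonFunctionalTests(unittest.TestCase):
    """Tests of qualities such as performance: how well does it do it?"""

    def test_add_meets_performance_budget(self):
        start = time.time()
        for _ in range(100000):
            add(2, 3)
        elapsed = time.time() - start
        self.assertLess(elapsed, 1.0)  # arbitrary budget, for illustration only


if __name__ == "__main__":
    unittest.main()
```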
Ref:
[1] http://en.wikipedia.org/wiki/Non-functional_tests
P.S. My Fedora friends have picked up on this test plan outline - it's the basis for their test plan template: https://fedoraproject.org/wiki/QA:Test_Plan_Template
Monday, December 10, 2007
Why Are the Fountains Always Broken? (The forgotten cost of test automation: maintenance)
I'm writing this post from just outside of Boston. By American standards, Boston is an old city. One of the distinguishing features of the city is its collection of public parks. These parks are dotted with fountains that vary in age and design from the classical and historic to the modern and avant-garde. These fountains, however, all share one common characteristic.
They always seem to be broken.
Why is this the case? It is certainly at least partially due to the ravages of the harsh New England climate on outdoor plumbing (as I'm writing this, the Boston area is having its second snow and ice storm of the young 2007-2008 winter). There is, however, I think, another possible reason.
When we (we human beings, that is) build something, we often forget about the cost of maintaining it after it is built.
In the case of the fountains, the lack of maintenance may be caused by the fact that in any year's budget, other responsibilities such as health care, roads and bridges, and public safety are higher priorities. While it may be possible to attract funding for new construction of exciting or groundbreaking public places, maintenance is, in contrast, considered a mundane exercise.
The same pattern can be seen in software engineering, for example in the development and maintenance of automated software tests.
In the course of developing a plan for automating tests, we consider the investment in time and resources to design, build, and debug the tests. Once written, however, the tests will lose their value unless they are maintained and kept in sync with the software project that they are intended to test. As the project proceeds from release to release, new features are added and new tests are needed. But the existing library of tests may begin to fail as it falls out of sync with the project code. Worse yet, the existing tests may provide a false sense of security: they may continue to run without error, yet fail to actively exercise changed, and therefore at-risk, areas of the code.
Automated tests are essential to any software test effort. The tests will pay back the investment you made to build them many times over during the life of a project. But don't think that the tests are "done" just because you've completed building the first version of them. You will have to revisit, review, and update the tests to keep them in working order. So, when you build new tests, don't forget to plan for maintenance. Part of this planning involves good test design, so that the tests can be modified as needed. Another part involves remembering that this maintenance will require time and human effort.
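As one illustration of what "good test design" for maintainability can mean (my sketch, not a prescription from this post), the details most likely to change from release to release, such as UI element locators, can be isolated in one place so that a product change means one edit instead of dozens; the LoginPage locators and the FakeDriver below are invented for the example.

```python
# Hypothetical sketch: isolating change-prone details (here, UI locators)
# behind a small page-object-style class so that test logic survives UI churn.
# LoginPage, its locators, and FakeDriver are invented for illustration.

class FakeDriver:
    """Stand-in for a real browser-automation driver."""

    def fill(self, locator, value):
        print(f"fill {locator} with {value!r}")

    def click(self, locator):
        print(f"click {locator}")


class LoginPage:
    # If the product renames a field, only these locators need updating;
    # the tests themselves stay untouched.
    USERNAME = "input#username"
    PASSWORD = "input#password"
    SUBMIT = "button#log-in"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.fill(self.USERNAME, user)
        self.driver.fill(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)


def test_valid_login():
    # The test reads as intent ("log in"), not as a list of brittle locators.
    LoginPage(FakeDriver()).log_in("tester", "secret")


if __name__ == "__main__":
    test_valid_login()
```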
Why are the fountains broken? Maybe because no one planned to or was able to maintain them. Are your automated tests running correctly today? Even if it has been several months and project updates since they were first written or last reviewed? It might be a good time to look for leaks...
Saturday, December 1, 2007
Why Doesn't This Work? It's Broken. That's Why!
Here's a rathole that software test engineers often fall into. When you test software, you frequently encounter situations where some bugs block certain sets or types of tests. Until these bugs are resolved, you often resort to "workarounds" to avoid these problems and to enable you to continue to make testing progress. The danger is that in the course of "working around" these problems, you forget that each workaround has an actual bug at its core. And if these bugs are not correctly resolved, you may never catch them. But if you don't, your customers probably will.
So, in your earnest attempt to make as much testing progress as you can, even with immature and buggy software, don't forget to remove any workarounds that you put into place. Each workaround should always be treated as a "canary in a coal mine" and as a pointer to a potentially serious bug. A good way to deal with these workarounds is to track them in your bug tracking system along with the bugs themselves. You should close out a workaround when the underlying bug is resolved and you can begin to take the road that you wanted to take in the first place! (And that will really make all the difference. My apologies to Robert Frost. ;-)
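One way to keep a workaround visible in the automated suite itself (my sketch, assuming Python's unittest; the bug ID and broken_feature() are invented) is to keep a test that asserts the intended behavior and mark it as an expected failure annotated with the bug number. While the bug is open, the run reports an expected failure; as soon as the bug is fixed, the same run reports an unexpected success, which is the cue to delete the workaround.

```python
# Hypothetical sketch: tying a workaround to its underlying bug so that it
# cannot be quietly forgotten. BUG-1234 and broken_feature() are invented.
import unittest


def broken_feature():
    """Pretend product code that currently returns the wrong value (the bug)."""
    return "wrong"


class BlockedTests(unittest.TestCase):

    # Workaround for BUG-1234: remove this decorator when the bug is resolved.
    @unittest.expectedFailure
    def test_feature_returns_right_value(self):
        # Assert the *intended* behavior, not the workaround's behavior.
        self.assertEqual(broken_feature(), "right")


if __name__ == "__main__":
    # While BUG-1234 is open: reported as an expected failure.
    # Once it is fixed: reported as an unexpected success, flagging the
    # now-obsolete workaround for removal.
    unittest.main()
```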
The Opposite of Fundamentals, the "Ratholes"
There's a great line from Robert Frost's poem "The Road Not Taken"[1] that I've always thought applied to software engineering and software testing. The line from the poem is: "Two roads diverged in a wood, and I took the one less traveled by, And that has made all the difference."
Sometimes, however, the road not taken is not taken for a very good reason! That road can lead down a rathole, the result of bad software design, poor implementation, or tactical mistakes.
As a counterpart to the fundamentals (the "rules") of software testing that I first mentioned in an earlier post, I'm also going to start a series of periodic posts on the mistakes that people often make; in other words, the ratholes into which we all fall when we choose the wrong road.
Ref:
[1] http://www.bartleby.com/119/1.html