One of my original goals in starting this blog was for it to serve as a resource for "new-to-it" software test engineers and managers. The general type of question that I want the blog to be able to answer is "now what do I do?" What I especially want is to be able to suggest specific actions that the reader can follow, instead of providing only general, more abstract advice. These actions are related to three components of a successful software test team:
- People - who to hire and when to hire them, how to organize them into a team, how to maximize the effectiveness of a team
- Tools - which tools to use, how to use them, and how to use them in combination, and
- Processes - how to create a flexible, efficient, and repeatable set of procedures
One specific case where I want the blog to be useful involves the first steps that a new software test manager should perform. I want the blog to be able to help a person in a situation where it's the start of the first day after you've just been appointed software test manager. You have no team, no tools, and no processes in place. As you sit at your desk, you're not only asking yourself "what do I do?", you're asking yourself, "what do I do first?"
I addressed the "people" component of this question in an earlier post to this blog, where I discussed the characteristics of the first person to add to a team. I'll write at a later date about the first tools that you'll want to start using and the best way to use them together. For today, I want to talk about the first process that you want to implement.
"We Own the Process"
For many software engineers (the author included), the word "process" can have a negative connotation, as it conjures up images of having to spend lots of your time doing everything other than writing code. I once worked for a manager who, after he announced at a department meeting that "we own the process," subsequently discovered that we had hidden several jars of "cheez whiz" in his office. (In case anyone has not heard of "cheez whiz," it's a famous American delicacy. It cannot legally be called cheese, as it is so heavily processed; instead, it is described as a "pasteurized process cheese food.")
That was an unfortunate and extreme incident, as "process" really ought not to be a bad word. In order for a software test organization to be successful over the long term, it has to develop a process that can be relied on as a roadmap to lead the team through difficult problems. In fact, the very "process" (pardon the pun) of defining, examining, and documenting your software development and test process can itself be a useful task in making your organization more effective. If you force yourself to write down the rules under which your organization operates, you will probably find ways to improve them. (But this is a subject for a different blog post!) This process also has to be repeatable so that the team's success can be repeatable. And, most importantly of all, the process has to always be a means to an end, where that end is the creation of high quality software, on time and on budget.
To get back to the subject of this post, which particular process should the team put into place first? There are several possible answers to this question:
- Requirements traceability - In defining tests, you should be able to establish a relationship between the product's requirements and your tests. Any gaps represent holes in your test coverage.
- Formal test planning - There's a place for ad hoc or exploratory testing in any project, but it cannot represent the majority of a testing effort. You're doomed to overlook some critical tests unless you impose some discipline on the process by compiling your test ideas into a plan. The goal of test planning is not to produce a document, but to define the tests, ensure that you have proper test coverage, mitigate risks, etc. The document is almost a by-product of the act of collecting and defining the information in the document. Like I always tell people, "plan is also a verb."
- Defect tracking - If you find a bug, but have no means to record it, is it still a bug? Obviously, you have to track and record bugs in a persistent data store.
- Test results tracking - Likewise, if you run a test, but do not have the means to record the results for future analysis, and to compare the results with future test runs, then the test is a lot less useful than it could be.
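To make the first of these items concrete, here is a minimal sketch of a requirements traceability check. The requirement IDs, test names, and the dictionary-based mapping are all invented for illustration; a real team would likely pull this data from its test management and requirements tools.

```python
# Hypothetical sketch: finding requirements with no test coverage.
# All requirement IDs and test names here are made up for illustration.

requirements = {"REQ-1", "REQ-2", "REQ-3"}

# Map each test to the requirement(s) it exercises.
test_coverage = {
    "test_login_succeeds": {"REQ-1"},
    "test_login_rejects_bad_password": {"REQ-1"},
    "test_report_is_generated": {"REQ-2"},
}

# Union of everything any test covers.
covered = set().union(*test_coverage.values())

# Requirements with no test at all: these are the coverage holes.
gaps = requirements - covered

print(sorted(gaps))  # -> ['REQ-3']
```

Even a lightweight mapping like this, reviewed regularly, turns "do we have coverage gaps?" from a guess into a question you can answer mechanically.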
Hey, what about test automation? Well, for the purposes of this discussion, I'm putting test automation in a separate category, sort of like breathing. Every team and every project has to invest in automation. Without automation, you can never achieve a sufficient level of test coverage, or the consistency of test execution and test results needed to ensure the quality of the software under test. In addition, you'll never be able to put into place an effective continuous integration process without a high degree of test automation. The degree to which automation is essential makes me put it in a different class than other software test processes.
I think, however, if I had to advise someone as to the first process to set up for a new test team, I would choose "kaizen."  In other words, continuous improvement.
A systematic approach to continuous improvement can be effective in any complex human endeavor, whether it's software testing, Formula 1 racing, or investment banking. You learn from past mistakes, modify your approach to tasks to incorporate those lessons learned, or, in Scrum terms, you continuously "inspect and adapt."
But, what is it about software testing that makes continuous improvement especially applicable?
The Answer is: Bugs
"We deal in lead." 
- Steve McQueen in "The Magnificent Seven"
"We deal in bugs."
- Any software tester
The common denominator that ties all software testing together is that every test has the same ultimate goal: to find an as yet undiscovered bug. When you practice software testing, you spend your time looking for, investigating, recreating, and trying to understand the root cause of software bugs. And every bug presents an opportunity for some type of design or implementation improvement. What's more, the bugs that you look for and miss also present opportunities for improvement in test design and implementation.
In software testing, you are always both inspecting the software product under test and adapting the test programs and test approach to the current state of that product. Your tests and test plans can never be static; they always have to adapt, and be improved, to meet the current conditions. And it's important that this improvement be continuous, and not only a task that you think about at the end of a project. In terms of a software product's development and testing lifecycle, waiting days or weeks to make a change to a test, or to adapt your test plan to deal with an unexpected set of new bugs, is simply waiting too long.
So, how (specifically) can you implement a process of continuous improvement? Some ways are to:
- Turn every new bug into a new test - This one is easy, whenever you find a new bug, write an automated regression test for it. This will ensure that if the bug is ever re-introduced, you'll be able to catch it.
- Mine the customer support calls for new test ideas, and for missing tests - In the course of preparing your test plan, you should collect input from other product team members. You should also review the problems that actual customers are having. You may find that they are using the product in unexpected ways. I once saw a problem where a telephone voicemail system offered customers a feature where they could have the system call them back. People started to use this as a wakeup service, but the callback processes always ran at a lower priority level than incoming call processes. What happened? A lot of people received some very late wakeup calls. ;-)
- Regularly re-review existing plans and tests - Like I said earlier, the plans should be dynamic, not static.
And, remember, "there is always something left to improve" - No matter how successful your testing is, there will be bugs that you missed. The complexity of software products, coupled with the flexible (or is it "brittle"?) nature of software as a medium, means that changes will be made and bugs will be introduced. So, if you think that your test plan and tests are "complete," think again. The surest way to fall behind is by standing still!
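The "turn every new bug into a new test" practice can be sketched in a few lines. The `paginate` function, the bug number, and the off-by-one scenario below are all hypothetical, invented purely to show the shape of a bug-driven regression test.

```python
# Hypothetical sketch: a regression test written for a just-fixed bug.
# Suppose bug #1234 reported that a paginate() helper silently dropped
# a final page shorter than page_size. Function and bug number are
# invented for illustration.

def paginate(items, page_size):
    """Split items into pages of at most page_size elements."""
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

def test_bug_1234_last_partial_page_is_kept():
    # The original defect lost the trailing partial page [5];
    # this test pins the fixed behavior so a regression is caught.
    pages = paginate([1, 2, 3, 4, 5], page_size=2)
    assert pages == [[1, 2], [3, 4], [5]]

test_bug_1234_last_partial_page_is_kept()
```

Naming the test after the bug report keeps the traceability link visible: when the test fails years later, anyone can find the original defect and its root-cause analysis.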
As much as I'd like to take credit for the line "there is always something left to improve," I can't. It's attributed to Ben Hogan (1912-1997), the great American golfer of the 1940's and 1950's. His life story is really fascinating. He was born into poverty, reportedly witnessed his father's suicide at age 7, survived a crippling automobile accident, and after decades of failure and constant practice, made himself into the best professional golfer of his time. He is also, without a doubt, the greatest self-taught athlete in history. No one, in any sport, has ever worked harder to constantly improve than Hogan. After he retired from competition, he founded a golf club manufacturing company. His products were known for their high level of quality, as he would never sell anything of low quality with his name on it. In the early 1990's, I happened to have a broken golf club manufactured by the Hogan company. It was broken in such a way that it could not be repaired, so I sent a letter to the company asking to buy a replacement. I received a personal letter from Mr. Hogan, asking me to send him the club. He wanted to examine it himself to understand if it was a manufacturing defect that could be improved on. (He later sent me a free replacement club too!)
(Special thanks to Jirka for the Kaizen inspiration!)