(We'll get back to the "what do I do now?" track in the next few posts. But, hey, it's baseball season!)
The year was 1929.
Legendary baseball manager George Stallings, who had led the 1914 Boston Braves from last place the year before to win the World Series, lay on his deathbed.
When a friend asked him what was hurting him, he reportedly replied, "bases on balls."
This story may require some translation. In American baseball, a "base on balls" is awarded when a batter receives four pitches outside the strike zone that he is not obliged to attempt to hit; the batter is then awarded first base. This is also called a "walk," since the batter does not have to run to reach first base. Walks are extremely frustrating to managers because they are a self-inflicted problem: when your pitcher walks a batter, you've given your opponent an advantage they did not have to earn by actually hitting the ball.
That's all very interesting, but what does this have to do with software testing?
If you research software bug taxonomies, you'll find information on types of bugs such as incorrect-requirements bugs, data translation bugs, interface bugs, and so forth. Bug taxonomies are actually great test planning tools. If you haven't researched these taxonomies before, you really should; you'll probably quickly find ways to apply these bug types to your own project's test planning.
One type of bug that you may not find in a standard bug taxonomy is the self-inflicted bug. I was thinking about self-inflicted bugs the other day when I had lunch with some former co-workers and we reminisced about a long-ago software project. The project had many types of bugs: reliability, installation and configuration, data conversion, usability, build integrity, and so on. And in the course of trying to deal with these bugs, we inadvertently introduced some self-inflicted bugs of our own.
Familiarity Breeds Politeness, and Sometimes Missed Bugs
What happened was that, over time, we had all become so accustomed to the project's many problems that we knew that in order to make progress testing some of the project's features, we would have to avoid certain actions that we knew would fail. This approach can be useful when some testing is blocked, as it lets you work on the tests that are not blocked. It is, however, a slippery slope: once you start working around some of a product's bugs, you can slip into a mindset where you forget that both those bugs and the other bugs you encounter are BUGS that must be fixed.
I once worked with a very diligent and competent software tester who spent days trying to execute a complex test on the product he was testing. He worked through many scenarios, but the program always failed. When he asked me in frustration, "Why doesn't this work?" my answer was, "Maybe because it's broken?" He had fallen into the workaround trap: he was trying everything he could think of to help a broken piece of software work, when what it really needed was for some bugs to be fixed.
In a way, becoming too familiar with a product's limitations can be a liability, in that you may unconsciously avoid performing certain actions because they will push the program under test beyond its limits and cause a failure. This familiarity is a little like the way families sometimes operate. It's the situation where your uncle comes to Christmas dinner and tells the same unfunny jokes year after year. Everyone smiles and laughs, because "we all know that's how your uncle is." In reality, what you'd like to do is fix the bug and tell him to just keep quiet!
This is actually an important reason to have an independent software testing team: it's very difficult to be hostile toward your own work. If you're developing a software product, your goal is to see it succeed. You may build test suites to verify its operation, but you may unconsciously avoid building in tests that you know will break it. Software development is a creative process. Testing, however, is destructive.
In the case of our project, we worked around many bugs, and were therefore able to find many others. Some of these workarounds took the form of manual corrections to system configuration errors caused by a buggy installation utility. Our options at the time were either to halt all testing until the installer was fixed, or to work around the problems so that we could make progress testing the product. In the process of relying on the workarounds, however, we lost track of some of them and ended up having to get them fixed very late in the testing cycle.
What could we have done differently? We tended to track the workarounds in emails or meeting minutes. What we needed to do was track them in our bug tracking system as bugs, and flag them as high-priority bugs that needed to be fixed. But most of all, we needed to remember that the workarounds we put in place were never intended to be part of the product; they were only temporary constructs. Our goal was for the product to work without the workarounds. Just like Christmas dinner without your uncle's jokes.
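One lightweight way to keep workarounds from getting lost is to file each one as a first-class, high-priority record in the bug tracker the moment it's applied. Here is a minimal sketch of the idea in Python; the names and fields are hypothetical illustrations, not the tracker our project actually used:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch: every workaround becomes a tracked,
# high-priority bug instead of a note in an email thread.
@dataclass
class Bug:
    summary: str
    priority: int              # 1 = highest priority
    is_workaround: bool = False
    opened: date = field(default_factory=date.today)

tracker: list[Bug] = []

def file_workaround(summary: str) -> Bug:
    """Record a workaround as a high-priority bug, never as a side note."""
    bug = Bug(summary=summary, priority=1, is_workaround=True)
    tracker.append(bug)
    return bug

file_workaround("Manually patched config after installer failure")

# Open workarounds stay visible until the underlying bug is fixed,
# so they can't silently survive into late testing cycles.
open_workarounds = [b for b in tracker if b.is_workaround]
print(f"{len(open_workarounds)} workaround(s) still awaiting a real fix")
```

The point is not the specific code but the discipline: a workaround that lives in the same queue as every other high-priority bug is much harder to forget.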
Further reading: Software Testing Techniques by Boris Beizer