The flowers are opening and the ice rinks will soon be closing. Before the season completely changes, I thought it would be a good idea to revisit my friend the software manager/hockey dad.
When I asked him how things were going with his previously troubled QA team he said:
"Oh man, they're doing better and actually finding bugs, but now I have a new problem. They never stop finding bugs. Here's the deal - we planned out an 8 week project release schedule. So, we're in week 8, working on the final build before we want to ship, and they found 18 new bugs and almost all of them were in the same feature!"
At this point, I asked him, "So, what was the find/fix rate looking like?"
His response was a blank stare.
I tried again, "Were you tracking the history of the bugs that were found in each component?"
Another blank stare.
So I told him, "Dude, you gotta get some bug tracking metrics. You need to start planning your future based on what happened in your past. You're treating your bug tracking system like a write-only database. Remember what Seinfeld's friend Newman said about the mail. When you control it, you control information. You shouldn't just record the information, you need to start using it."
(Ok, I am fictionalizing things a bit. But, he does really talk like that, dude.)
Mining Bug Data - Gently Introducing Metrics
There's a joke in golf that goes something like this:
Q: What's the most important shot in golf?
A: The next one.
It can be the same way when you test software. Technology changes quickly, but designs and code implementations can change more quickly, and the current state of the software under test can change more quickly still, as tests expose code that is at greater risk than your original plans accounted for. It's easy to fall into the trap of only looking forward. If, however, you make more proactive use of your bug tracking information, that information can assist you both in making decisions for the present and in making plans for the future. What you have to do is track some bug metrics and "mine" that information.
OK - how can you introduce metrics to a team with no experience in tracking or using them?
The short answer is "gently."
The longer answer is that you introduce a small number of metrics initially, ensure that tracking these metrics adds little or no overhead to the team's already heavy workload, and show the team an immediate benefit/payback from tracking the metrics.
What are the best metrics to start with? That's easy: you can start with the metrics that your bug tracking system already records for you.
Find Rate -vs- Fix Rate
Regardless of the software development model that you follow, you should see a pattern in the number of new bugs logged. The numbers will start slow when the QA team is more occupied with test development, rise when the tests are being debugged, peak when each test is being run for the first time, then decline sharply as test cycles continue and the only new bugs found are either in functional areas where testing had previously been blocked by other bugs or are bugs that are newly introduced when other bugs are resolved. The fix rate should follow the same general pattern, with a slight time lag.
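Most bug tracking systems can export the dates each bug was opened and resolved, which is all you need to chart this pattern. As a minimal sketch (the record layout and the sample numbers here are invented for illustration, not taken from any particular tracker), you can bucket bugs by the week they were found and the week they were fixed and compare the two counts:

```python
from collections import Counter

# Hypothetical bug records: (bug_id, week_found, week_fixed or None
# if still open). A real tracker export would carry similar fields.
bugs = [
    (1, 1, 2), (2, 1, 3), (3, 2, 2), (4, 2, 4),
    (5, 3, 5), (6, 3, None), (7, 4, 5), (8, 5, None),
]

# Find rate: how many new bugs were logged each week.
find_rate = Counter(week for _, week, _ in bugs)

# Fix rate: how many bugs were resolved each week.
fix_rate = Counter(week for _, _, week in bugs if week is not None)

for week in range(1, 6):
    found, fixed = find_rate[week], fix_rate[week]
    print(f"week {week}: found {found}, fixed {fixed}, net {found - fixed:+d}")
```

A healthy project shows the find-rate counts peaking and then declining toward zero, with the fix rate tracing the same curve a week or two behind. A find rate that holds steady week after week, as in the hockey dad's project, is the warning sign discussed next.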
A sign of trouble in the find rate for our hockey dad's product would have been a find rate that stayed consistent from week to week. The calendar might have told him that he was near the end of his planned schedule, but the find rate, since it was not decreasing, would have told him that the product was not complete.
What might have caused this consistent find rate? Maybe test development was not complete, so that each week new tests were being run for the first time. Or maybe each new weekly build introduced new bugs. To have a clear picture of the state of the project, he would have to start tracking one more metric: the root cause of each bug.
Where is the Next Bug Coming From?
I suggested that if the QA team was consistently finding new bugs with old tests, it was time for him to look beyond just the number of bugs being found and also look at the locations in the code where the bugs were being found.
First, as to the location of the bugs, here's another software riddle:
Q: Where are you likely to find the next 100 bugs in your product?
A: In the same places you found the last 100 bugs.
(As much as I'd like to take credit for this concept, I can't. It's from Glenford Myers in his groundbreaking book, The Art of Software Testing.)
It may be that the code in these locations is complex, which makes it difficult to maintain, so that every change runs the risk of introducing a new bug. Or the code may have been changed and patched so many times that the original design integrity has been compromised. Or it may be that the code is just plain buggy, so that running any new tests finds new bugs.
Whatever the specifics, changes to code introduce risk: along with the code changes, new bugs can also be added. So the places where you found your last bugs are often the places where you will find your next bugs.
OK - now that you have this information about where in the code you may find future bugs, what do you do with it? You can allocate more time and resources to building additional tests to give those functional areas more test coverage. And you can also ensure that any change made to the code in those areas, even a seemingly minor one, is made carefully and with an eye to the likelihood of a minor bug fix inadvertently opening up a major new bug. In short, this information gives you a road map of where in the code you should tread lightly.
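Turning the bug history into that road map can be as simple as tallying bugs by the component they were logged against. As a minimal sketch (the component names and counts below are invented for illustration), ranking components by bug count surfaces the hot spots:

```python
from collections import Counter

# Hypothetical history: the component each closed bug was found in,
# as tagged in the bug tracking system.
bug_components = [
    "billing", "billing", "ui", "billing", "export",
    "billing", "ui", "billing", "export", "billing",
]

# Rank components from most to least buggy.
hotspots = Counter(bug_components).most_common()
for component, count in hotspots:
    print(f"{component}: {count} bugs")
```

The components at the top of the list are the candidates for extra test coverage, and the places where even "minor" changes deserve a careful review.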
I didn't want to overload him with helpful suggestions all at once, but as he left, I reminded him that there is always uncertainty in software testing. But if he was able to leverage information from his project's past, he might be able to better predict, or at least plan for, possible events in its future. And some of the initial information that he needed to do this was already being recorded by his bug tracking system.