The end of summer is a good time to think about...time.
Maybe it's just the change of seasons, but the end of summer seems to make everyone especially reflective. We all look back fondly on the warm, sunny summer days, and, here in Boston, we look forward less fondly to another long, cold New England winter. And we also think about all the jobs around the house that we had planned to complete during the summer, but somehow never got around to starting.
I was thinking about time this week, and about the first time I heard the phrase "COB" (i.e., close of business). It was my first job out of college. In the middle of the day, my boss sent me an email requesting (demanding, actually) that I complete a task "by COB." To be honest, I had absolutely no clue what COB meant. I was a recent college graduate, trying very hard to learn and memorize what seemed like an endless stream of acronyms related to the company's products and technologies. After I was unable to find a definition for COB in any technical documents, I asked a co-worker who, once he stopped laughing, explained the term to me.
It's an interesting phrase, "COB." It implies that there is a closing time for the business conducted, or work performed, on any given day.
As I'm writing this blog entry, it's late at night in Boston. (The Red Sox just defeated the Chicago White Sox in a night game, so all's right with my world.) My calendar day is closing, but I just noticed that some of my project co-workers have returned from lunch. It's already tomorrow for them, as they are based in Brisbane. The rest of the project team is based in China, the UK, the Czech Republic, Germany, Sweden, and elsewhere in the US and EU.
Given the geographic diversity of the project team, is there ever a COB? I think that the answer is...rarely. The wide variety of locations involved means that there's almost always someone at work on the project.
It's really not that unusual these days to work on a team that isn't located in one place. In order for the team to be successful, everyone has to be aware of everyone else's timezone, and make effective use of email and on-line chat tools to communicate. One useful practice is to always refer to time (for example, when scheduling video or teleconference meetings) in terms of UTC, so as to avoid errors in converting between timezones.
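To keep the conversions honest, it helps to let a library do the arithmetic. Here's a minimal sketch in Java (the JDK classes are standard; the meeting time and the list of zones are just examples) that defines one meeting in UTC and prints it in each teammate's local time:

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.TimeZone;

public class MeetingTimes {
    public static void main(String[] args) {
        // Define the meeting once, in UTC.
        Calendar utc = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        utc.set(2008, Calendar.SEPTEMBER, 2, 14, 0, 0); // 14:00 UTC

        // Example zones for a dispersed team; substitute your own.
        String[] zones = { "America/New_York", "Europe/Prague",
                           "Australia/Brisbane", "Asia/Shanghai" };

        Date meeting = utc.getTime();
        SimpleDateFormat fmt = new SimpleDateFormat("EEE HH:mm zzz");
        for (String id : zones) {
            fmt.setTimeZone(TimeZone.getTimeZone(id));
            System.out.println(id + " -> " + fmt.format(meeting));
        }
    }
}
```

Everyone sees the same instant, rendered in their own zone, and no one has to do timezone math in their head.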
But, beyond any scheduling or communications tools, I think that the most important things to keep in mind involve how you perceive a couple of things.
First, the perception of "remoteness." It's natural to view the rest of the world relative to your location, your home, your city, your country. For example, on my project's team, I sometimes think about my UK and EU and APAC co-workers as being in remote locations east of me, and my western US co-workers as being in remote locations west of me. [1] The reality, however, is that on this project, everyone is remote. Some offices host more or fewer team members than others, but there really is no "central" office for the project. So, it's important to keep in mind that it's not "they" who are remote. We all are. No one is a "satellite" or second-class team member by virtue of their physical location.
Second, the perception of just what a "day" is. Everyone's personal working day is a finite thing. There's a beginning and an end. With a widely dispersed team, however, if you can organize things properly, the calendar days can blend together into a much more flexible and fluid construct. It's sort of like a ship at sea. The ship doesn't turn off its engines at 5:00 PM local time. Instead, the crew is divided into teams that work in shifts that are seamlessly put together. It takes effort and organization to ensure that tasks are handed off cleanly between shifts, both on a ship and on a software engineering project. You have to be sure that your work can stand alone if a fellow team member is relying on it at a time when you are unavailable. For example, in testing it is extremely important to fully document all bugs that you find with stack traces, server log files, and example code that illustrates the bug. If any of this information is missing from a bug report, then anyone trying to recreate and fix the bug will have to request it from you, and it may be several hours before you even see the request. The key is to remember that your "day" doesn't exist in a vacuum, but that it is part of an ongoing stream of time and effort, where calendar days can blend together.
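As an illustration, here's the shape of a bug report that can "stand alone" overnight while you sleep and a teammate in another timezone picks it up. Every product name, version, and detail below is invented for the example:

```
Summary:     NullPointerException when saving a profile with an empty address
Version:     acme-server 2.1.0, build 4711 (hypothetical product and build)
Environment: RHEL 5, JDK 1.6, standalone server
Steps:       1. Create a new user profile
             2. Leave the address field empty
             3. Click "Save"
Expected:    Profile is saved with a blank address
Actual:      HTTP 500; NullPointerException in ProfileDao.save()
Attached:    full server.log, stack trace, and SaveProfileTest.java,
             a small test that reproduces the failure without the UI
```

Nothing here requires a follow-up question, which is exactly the point.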
I have to wrap up this post now. I have to send some answers to my friends getting back from lunch in Brisbane, and some questions to my friends who will be sitting down to breakfast in Brno in a couple of hours. It's all in a day...
References:
[1] A note on Bostonians - some of us still see Boston as the "hub of the universe." http://www.boston-online.com/glossary/hub.html
Monday, September 1, 2008
Tuesday, August 5, 2008
Self-Inflicted Bugs - The Workaround Trap
(We'll get back to the "what do I do now?" track in the next few posts. But, hey, it's baseball season!)
The year was 1929.
Legendary baseball manager George Stallings, who had led the 1914 Boston Braves from last place the year before to win the World Series, lay on his deathbed.
When a friend asked him what was hurting him, he reportedly replied, "bases on balls."[1]
This story may require some translation. In American baseball, a "base on balls" refers to a batter being awarded first base if the batter receives four pitches that are outside of the strike zone, and which he is not obliged to attempt to hit. This is also called a "walk," as the batter does not have to run to reach first base. Walks are extremely frustrating to managers, as they are a self-inflicted problem. When your pitcher walks a batter, you've given your opponent an advantage that they did not have to earn by actually hitting the ball.[2]
That's all very interesting, but what does this have to do with software testing?
If you research software bug taxonomies[3], you'll find information on types of bugs such as incorrect requirements bugs, data translation bugs, interface bugs, and so forth. Actually, software bug taxonomies are great test planning tools. If you haven't researched these taxonomies before, then you really should. You'll probably quickly find ways to apply these bug types to your own project test planning.
One type of bug that you may not find in a standard bug taxonomy is the self-inflicted bug. I was thinking about self-inflicted bugs the other day when I had lunch with some former co-workers and we reminisced about a long-ago software project. The project had many types of bugs: reliability, installation and configuration, data conversion, usability, build integrity, etc. And, in the course of trying to deal with these bugs, we inadvertently introduced some self-inflicted bugs.
Familiarity Breeds Politeness, and Sometimes Missed Bugs
What happened was that, over time, we had all become so accustomed to the project's many problems that we knew that, in order to make progress testing some of the project's features, we would have to avoid certain actions that we knew would fail. This approach can be useful when some testing is blocked, as it enables you to work on the tests that are not blocked. It is, however, a "slippery slope": in software testing, once you start working around some of a product's bugs, you can get into a mindset where you forget that both those bugs and other bugs you encounter are BUGS that must be fixed.
I once worked with a very diligent and competent software tester who spent days trying to execute a complex test on the product he was testing. He worked through many scenarios, but the program always failed. When he asked me in frustration, "Why doesn't this work?" my answer was "maybe because it's broken?" He had fallen into the workaround trap. He was trying everything he could think of to help a broken piece of software work, when what it really needed was for some bugs to be fixed.
In a way, becoming too familiar with a product's limitations can be a liability, in that you may unconsciously avoid performing certain actions because they will push the program under test beyond its limits and cause a failure. This familiarity is a little like the way families sometimes operate. It's like when your uncle comes to Christmas dinner and tells the same unfunny jokes year after year. Everyone smiles and laughs, because "we all know that's how your uncle is." In reality, what you'd like to do is fix the bug and tell him to just keep quiet!
This is actually an important reason why you want to have an independent software testing team; it's very difficult to be hostile toward your own work. If you're developing a software product, your goal is to see it succeed. You may build test suites to verify its operation, but, you may unconsciously avoid building in tests that you know will break it. Software development is a creative process. Testing, however, is destructive.
In the case of our project, we worked around many bugs, and were therefore able to find many other bugs. Some of these workarounds took the form of manual corrections to system configuration problems caused by a buggy installation utility. Our options at the time were to either halt all testing until the installer was fixed, or work around the problems so that we could make progress testing the product. In the process of relying on the workarounds, however, we lost track of some of them and ended up having the underlying bugs fixed very late in the testing cycle.
What could we have done differently? We tended to track the workarounds in emails or meeting minutes. What we needed to do was to track them in our bug tracking system as bugs, and highlight them as high-priority bugs that needed to be fixed. But, most of all, we needed to remember that the workarounds we put into place were never intended to be part of the product; they were only temporary constructs. Our goal, after all, was for the product to work without the workarounds. Just like Christmas dinner without your uncle's jokes.
References:
[1] http://www.baseballlibrary.com/ballplayers/player.php?name=George_Stallings_1867
[2] http://mlb.mlb.com/mlb/official_info/official_rules/definition_terms_2.jsp
[3] Software Testing Techniques by Boris Beizer
Wednesday, July 9, 2008
The First Person to Add to Your Team?
While you'll have to work on the people, tools, and processes tracks simultaneously, we'll take them one at a time in our discussions.
Let's think about the people that you need for your team first. A while ago, I wrote an essay for IBM developerWorks [1] that described some of the characteristics (and some of the characters) that you should look for in software QE engineers in order to build an effective team. I'll expand on some of the ideas from this essay in a later post. The specific subject that I want to address in this post, however, concerns the first person that you should try to add to your new team.
The first person that you need to look for is a "grinder."
OK. What's a grinder? And, why do you need one first? Shouldn't the first member of the team be a test framework architect or automation coder? Think back to our hypothetical project situation. You have to juggle the creation of the team, its tools and the project's set of QE processes while you also ship a product. The skill set of the first person on your team has to be that of a general practitioner (sort of a software Swiss Army knife) who can both execute and build tests. But beyond his/her technical skills, you also need to find someone with the right attitude.
Once again - what's a grinder?
It's a sports metaphor. It's sometimes applied to baseball pitchers, but it is most frequently used when describing golfers. In golf terminology, a "grinder" is someone who is faced with adverse conditions and is not playing their best game, but is still able, through effort and determination, to "grind" out a victory. A great example of this happened at last month's US Open. [2] Tiger Woods was playing in his first event after knee surgery. As it turns out, he came back from the surgery before his knee was fully healed, and as the tournament proceeded, he was playing in more and more pain. For me, the image of what it means to "grind" was watching Woods experiment with different swings and shot patterns as he searched for a way to swing without collapsing in pain. In spite of the pain and the difficulty of playing the most difficult course of the entire season, he was able to focus on his goal.
You'll need the first person on your team to have this type of attitude. (You'll need to be something of a grinder yourself.) If your first team member is a test architect, he or she may become frustrated by the need to meet short-term product delivery goals instead of being able to concentrate on a longer-term design effort. Likewise, if your first team member is a test automation specialist, he or she may become frustrated by the need to "jump in" and perform manual or exploratory tests. [3]
So - start with a general practitioner, but be sure to find one who can "grind it out."
References:
[1] http://www.ibm.com/developerworks/rational/library/content/RationalEdge/sep04/dimaggio
[2] http://www.usopen.com/en_US/index.html
[3] http://swqetesting.blogspot.com/2007/11/in-praise-of-exploratory-testing.html
Three Tracks
I was wandering around FUDCon[1] in Boston a couple of weeks ago, and ran into someone who said, "I really like your blog, but the entries have been very short recently."
Ouch.
The awkward thing was that he was correct. I had been unable to contribute much to the blog for a while as I had been concentrating my writing time on Red Hat Magazine[2].
Let's get back to our "now what do I do?" thread. In the last post, we talked about dealing with risk including understanding that risks are always present. OK, you know that your new friend and constant companion is Mr. Risk. Now what do you do?
You have to make progress on three tracks:
* People
* Tools
* Processes
We'll examine aspects of each of these tracks in the coming posts to this blog.
References:
[1] http://fedoraproject.org/wiki/FUDCon/FUDConF10
[2] http://www.redhatmagazine.com
Friday, May 30, 2008
Open Source ESB - New Magazine Article
I mentioned Red Hat Magazine a while ago - it's really a great resource for technical open source content. And, speaking of content, I was able to publish this new article a few days ago:
http://www.redhatmagazine.com/2008/05/22/adapters-for-an-esb
The subject this time is the manner in which the JBoss ESB (Enterprise Service Bus) supports adapters so that apps (including legacy apps) can integrate with the ESB. Very cool stuff.
Sunday, April 27, 2008
You'll Never Walk Alone
Before we start finding QE team members, building automated tools, and defining lightweight and repeatable processes, there's one point I want to make. You may be feeling a bit overwhelmed by the scale of the work you have to do in your new position. You may also be feeling a little lonely. But, fear not. You are not alone. You have a new constant companion. A companion that will be right beside you every step of the way. Who or what is this companion?
Risk.
That's right. Risk will always be with you. How you handle risk as it applies to the software under test will determine how successful your testing is. So, how do you handle risk? I think that there are three things you have to keep in mind:
1) There is no "Out" - A few years ago, a good friend of mine called me to say that he was completely debt free. He had paid off his mortgage and his car. He said that he would "never again" be in debt. Then, his car needed major repairs. He bought a new car. Then, he and his wife decided that their kitchen was outdated. They remodeled. Then they had a baby. You probably get the point. Risk will always be there, and you will always have to work to minimize its impact on your project. Something, whether new features, new requirements, or new or old bugs, will put the project at risk. Speaking of bugs...
2) There are Always Bugs - Have you found all the bugs in that code? No? Well, don't worry. You never will. The simple fact is that there's no such thing as "bug-free" software. Why is this so? First, on the development side of the equation, you have to deal with changing technology, a complex and often flawed application design, the difficulties inherent in integrating new and existing systems, and so on. Human error is also a huge factor. Although modern application development tools can generate code, people must be involved at some point in the development process. And people make mistakes. On the testing side, you will always have to concentrate on finding the bugs that put your project most at risk. Most at risk today that is...
3) "What's at Risk?" The Better Question to Ask is, "What's at Risk NOW?" - No software project will even be static. To be successful, your project will have to respond to your customers' changing requirements and will have to adapt to and incorporate new technologies. There's a great line from JFK talking about political parties that I used in article to refer to software test planning. The line is that these parties aren't locked in amber, but flow like rivers through time. (Theodore Sorenson actually wrote many of his speeches, but JFK always claimed this line as his own.) It's like that for test plans too. You can try to anticipate everything in advance, but in the course of testing, you always learn something new and have to change the plan to incorporate new or changed tests based on the risks that the project is currently facing. These risks will change over the life of a project. Early in a project's development, the major risk may be that new features are simply buggy, or that integrations between major project subsystems are in conflict, or that the build process is so immature that unintended software changes are introduced. Later on in a project's development, the major risk may be that the original designers of a project have been replaced by new engineers who, in the course of fixing bugs, break the existing design.
So - how do you cope with all this?
First, don't expect that your job will ever be risk free. Your job is to manage that risk.
Second, don't expect that your software will ever be bug-free. You have to manage your resources and testing to find the bugs that matter most.
Third, don't expect that you can relax because "all the risks" have been mitigated. In the time that it took you to read this blog post, a new risk was probably either exposed, inadvertently coded, or dreamed up down the hall in marketing. So, don't lock your thinking or your test planning in amber. Think of your project as a river. Maybe even the kind that people go white-water rafting on.
Monday, April 7, 2008
"Now What Do I Do?"
I've been thinking of taking this blog in a slightly different direction. The motivation for this change came from the question that is the title of this post.
A couple of years ago, a good friend of mine, a product manager, called me up on a Sunday night asking for help. His software startup company had been growing quickly, as had the number of bugs in its product. They needed to implement a formal testing process, put together a QE team, and build automated tests. And fast. Unfortunately, no one at the startup had ever done this before. My friend was brave enough to say, "hey, I know someone who could help." So - he was placed in charge of the task. I wrote up a couple of pages of ideas for him so that he could get started.
I've thought often of his question as other people have also asked me about how to set up a QE team from scratch. One of them was also at a startup, a couple of others were working on new projects at larger companies, while another had just been promoted, with no advance notice, to manage a non-existent QE team. What they all had in common is that they had to build a new team, find or develop tools, and define a testing process, all while they were also trying to meet an aggressive product release schedule. Sort of like building a bus while it roars down a highway.
Anyway, what I'm going to try to do in the next series of posts to this blog is to answer that question. I'm hoping that this series of posts will be a useful roadmap or reference guide. The posts will probably run longer than most blog posts, but I'm going to keep their combined length shorter than a book. After all, if you're in a situation similar to the one my friend found himself in, you don't have a lot of time to read!
Friday, March 21, 2008
Great Tool for Swimming with Network Sharks
I was looking to debug a problem involving clients connecting to FTP servers and was in need of a packet sniffer. A colleague of mine pointed me at an open source tool named "Wireshark" (http://www.wireshark.org). This is a great tool. It's a bit like tcpdump, but it includes a beautiful GUI and does a great job at filtering packets and at exporting/importing data. It also runs on Linux, Windows, and Mac OS X.
Here's a screen shot: [Wireshark capture window; image not reproduced]
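For the FTP debugging itself, Wireshark's display filters did most of the work. A few examples in standard display-filter syntax (the IP address is made up; the # notes are annotations, not part of the filter):

```
ftp                      # FTP control channel: commands and responses
ftp || ftp-data          # control channel plus the actual data transfers
tcp.port == 21           # anything touching the default FTP control port
ip.addr == 192.168.1.50  # all traffic to or from one test client
```

Being able to narrow thousands of captured packets down to one conversation is what makes the tool so much friendlier than raw tcpdump output.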
Tuesday, March 11, 2008
From my favorite e-zine...
That last post about "what is middleware?" received a nice response from many readers - strangely, no one actually commented on the post.
Anyway - a longer version of the topic was just published in Red Hat Magazine:
http://www.redhatmagazine.com/2008/03/11/what-is-middleware-in-plain-english-please/
Thursday, February 14, 2008
What is Middleware? Huh? No, now try that again in a language that I can understand...
I listened to a webcast from JBoss World today with a group of people. After hearing several speakers announce new middleware products and initiatives (as JBoss is the leading force in open source middleware), one of them turned to me and asked, "Just what is middleware?" When I started to describe transaction servers and database connection pool sharing, she held up a hand and asked, "No. I want to know what it is in real world terms, and why it's a big deal."
That got me thinking, and sent me to Google to look for a short definition of middleware. I found a lot of them, but they were mostly either too vague or too dependent on the reader already having some knowledge about middleware. Then, I found this one:
'...middleware: The kind of word that software industry insiders love to spew. Vague enough to mean just about any software program that functions as a link between two other programs, such as a Web server and a database program. Middleware also has a more specific meaning as a program that exists between a "network" and an "application" and carries out such tasks as authentication. But middleware functionality is often incorporated in application or network software, so precise definitions can get all messy. Avoid using at all costs...'[1]
And that really got me thinking about how to describe middleware and why it matters. What I was searching for was a real-world analogy that would make sense to people with varying levels of computer and software experience. And then, it hit me.
Middleware is plumbing.
There are four ways that this is true.
First, it ties together disparate parts of complex systems.
In your house, you have kitchens, heating systems, bathrooms, washing machines, garden faucets, etc. Each plays an important part in making your house livable. You almost never have to worry about not having running water, because the plumbing is robust and reliable. It just keeps the water moving through the pipes. Middleware keeps information moving through complex web-based applications. One of its primary tasks is to connect systems, applications, and databases together in a secure and reliable way. For example, let's say you bought an over-priced sweater at a store web site last night. What happened? You looked through various sweaters' images, selected color and size, entered a charge card number, and that was it, right? Well, behind the scenes, middleware made sure that the store's inventory database showed that sweater in stock, connected to the charge card company's database to make sure that your card was not maxed out, and connected to the shipping company's database to verify a delivery date. And, it made sure that hundreds or thousands of people could all shop that site at the same time. Also, while it looked to you like you were looking at one web site, middleware tied together many different computers, each in a different location, all running the store's e-commerce application, into a cluster. Why is this important? To make sure that you can always get to the store on-line, even if some of these computers are down due to maintenance or power failures.
Second, it's mostly invisible.
You don't generally see much of the plumbing in your house. What you see is the water. As a consumer, you don't see middleware. You see the web sites and the information flow that middleware makes possible.
Third, it provides a standard way of doing things.
If you wanted to build your own plumbing from scratch, you could. But, it's much easier to just buy plumbing fixtures. You, as a software developer, could design and build your own application servers, database connection drivers, authentication handlers, messaging systems, etc. But these would not be easy to build and maintain. It's much easier to make use of middleware components that are built according to established (and especially open!) standards. In middleware, these standards take the form of libraries of functions that your programs call through well-defined application programming interfaces (APIs). You call these functions instead of having to invent your own.
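Here's a tiny, concrete example of what "calling the plumbing" looks like in Java. Instead of writing its own connection pool, the application asks the server for a pooled DataSource through the standard JNDI and JDBC APIs. This is just a sketch: the "java:/OrdersDS" name and the inventory table are invented, JBoss-style examples:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class InventoryCheck {
    public int unitsInStock(String sku) throws Exception {
        // The application server configured, pooled, and secured this
        // DataSource; the application just looks it up by name.
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("java:/OrdersDS");

        Connection conn = ds.getConnection(); // borrowed from the pool
        try {
            PreparedStatement st = conn.prepareStatement(
                "SELECT units FROM inventory WHERE sku = ?");
            st.setString(1, sku);
            ResultSet rs = st.executeQuery();
            return rs.next() ? rs.getInt(1) : 0;
        } finally {
            conn.close(); // returns the connection to the pool
        }
    }
}
```

Notice what's missing: no socket handling, no retry logic, no pool management. That's the middleware's job, just as the pipes are the plumber's.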
Fourth and finally, it lets you worry about other things.
When you put an addition onto a house, what do you worry about? Bathroom fixtures, kitchen appliances, flooring, colors, and how to pay for it all. It's a very stressful process. The last thing you want to worry about is whether you want 3/4 inch or half inch pipe, copper or PVC connectors, #9 or #17 solder, etc. With middleware taking care of all the invisible functions, you, again as a software developer, or a business owner, can concentrate on building software to solve your business problems and fulfill your customers' needs.
My father once told me, "when you move to a new town, don't look for a doctor - first thing, you find a good plumber so you can sleep nights." It's like that with middleware. It may be mostly invisible, but it keeps things running so a lot of developers and managers and customers can sleep at night.
References:
[1] http://www.salon.com/tech/fsp/glossary/index.html
Sunday, February 3, 2008
Untimely Assumptions to Look for in Test Design
Having no time to blog got me thinking about...time.
In my Sept. 2007 Red Hat Magazine article[1] about software communities, I referred to the need to deal with different timezones when your team and community are spread across multiple geographic locations. The folks with whom I'm working these days can attest to my being challenged when it comes to converting times between timezones. I always seem to ask them to attend meetings at inconvenient times for them! ;-)
Anyway, I received an email yesterday that got me thinking about time, and about assumptions that sometimes get made in system design about the availability of "quiet time." The email announced that a lab was being powered down at 2:00 AM for some necessary maintenance. The time of day was selected as a "quiet time" when the lab systems would be idle.
While I was reading the email, I had to think about when 2:00 AM local time was in the remote offices, as we frequently share test servers in remote locations. As it turned out, the lab outage was not a serious problem, based on the locations and timezones of the people who would be accessing the lab systems.
But, wow, that email caused me to have a sudden flashback to a software project on which I worked several years ago. The project was a voicemail call accounting system. Every day at 2:00 AM the system ran several accounting tasks to calculate billing for that day. These billing tasks were large and complex, and they reduced overall system throughput while they were running. The problem was that while 2:00 AM might be a quiet time in your own location, it might be a busy time for users accessing the system from elsewhere in the world, and those users would see poor performance.
I don't think that we ever solved this problem, and I have long since left that company, but it illustrates an interesting problem in developing real-world stress tests. If the system under test embodies faulty assumptions about traffic patterns, then unless you are able to identify the implications of those assumptions and attack them, your testing may be flawed. The discrete tasks that the system has to perform may vary at different times of the day, or on different days of the week or month.
Let's wrap this up: what are some good faulty design assumptions to look for in today's globally networked world?
The notion that the requirements that the system under test must fulfill are unchanging over time. Just as traffic patterns will change over a time period, the tasks performed by the system may vary from hour to hour.
Also, the notion that the system under test is the center of that world. These days, the system under test may be in Bonn, but it may have to simultaneously support people in Boston, Bogotá, and Brno. And remember, 2:00 AM arrives at a different moment in time for everyone!
[1] http://www.redhatmagazine.com/2007/09/11/a-tale-of-three-communities/
Tuesday, January 29, 2008
Need to Slow Down the Blogging for a Bit
I'm going to have to slow down on the blogging for a while, as I want to start writing some new articles for Red Hat Magazine. If you haven't seen the magazine yet, it's really worth a look. (And I'm not just saying that because I've been lucky enough to have published articles in it.) It's a great resource for information on Linux, open source software, and, of course, Red Hat and JBoss. And - it's updated daily.
http://www.redhatmagazine.com
OK. I spoke too soon. There's one other resource that I really want to highlight.
http://www.opensourcetesting.org
This is a great site for open source testing tools, ideas and discussions. All open source of course! ;-)
Sunday, January 13, 2008
The Father of the Bride Question
(A follow-up post to "Why Are the Fountains Broken?")
What question is asked by all fathers of the bride when a wedding is being planned? "How much will this cost me - can we make it cost less?"
In a recent post to this blog, I talked about the often forgotten cost of maintenance for automated tests. Let's now talk about how much this maintenance may cost, and how to limit that cost. In other words, how much of my project schedule do I have to devote to maintaining my automated tests? Let's walk through an example. Suppose you are building the first automated tests for a new product. For the sake of this example, let's assume that your test plan calls for you to create 50 automated tests. Let's also assume that you are using an existing test framework such as JUnit or TestNG.
The first test may take you up to a day to create as you will be learning the product under test. You will likely spend a good deal of time simply getting a simple test running. After this, you will probably be able to write perhaps 50% of the tests at the rate of two per day, and perhaps the other 50% at the rate of three to four a day. When you have them completed, you'll need some additional time to refactor and generally clean up the tests. Let's allocate three more days for that.
So, in summary:
the first test = 1 day
the next 24 tests = 12 days
the next 25 tests = 7 days
clean up = 3 days
For a total of 23 days: in other words, perhaps a month.
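By the way, the "first test" that eats most of a day really can be this simple; the day goes into builds, fixtures, and deployment, not the assertion. A minimal JUnit 4-style sketch, where ProfileStore is a hypothetical class under test:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ProfileStoreTest {

    @Test
    public void savedProfileCanBeReadBack() {
        // ProfileStore is a stand-in for whatever your product exposes.
        ProfileStore store = new ProfileStore();
        store.save("jsmith", "John Smith");

        // One focused assertion; the value of the first test is proving
        // that the whole write-build-run loop works at all.
        assertEquals("John Smith", store.lookup("jsmith"));
    }
}
```

Once this one runs in your build, the second test takes minutes, not hours.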
Now, let's think about maintenance. Let's assume that your new product ships quarterly updates, which contain bug fixes and minor new features, and annual major releases. All of these releases and bug fixes require that you write new automated tests, so over time your test library grows and grows. But how much of your time and staffing should you plan to spend on maintaining these tests?
To a large degree, the investment in maintaining the tests will depend on how effective the tests are, and on how much the code they actually exercise changes. Let's say that 10% of the code in your project changes in each of your quarterly releases. Where will you spend your time and resources in updating your tests?
First, you will have to review all the tests to determine which tests must be updated, and which new tests must be created. Then you have to design and implement the changes. Let's say that you can review ten tests per day, and that updating a test takes about half a day for each test in the affected 10%.
So, in summary, maintenance on the first set of tests will cost:
review all the tests = 5 days
update the tests = 3 days
For a total of 8 days, or roughly a third of the time it took to write the tests in the first place. If this sounds expensive, well, it is! What we need to do is find a way to reduce the time needed to review all the tests, because, remember, the number of tests is always growing.
What's the answer? There's no silver bullet for maintenance. But part of the answer is the mundane task of documenting the tests so that the specific tests that ought to be changed can be easily found.
This documentation should record not just the design of the tests, but the goals of the tests. This documentation becomes the "institutional memory" of the test automation in that it is persistent, and it outlives any one person's involvement with the project or the tests. Another part of the answer is to keep this documentation in a form that can be easily reviewed, that maps directly to the project requirements' definitions, and is easily edited. Javadoc is a great approach for this as it enables you to keep the test documentation in the actual test source files.
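To make that concrete, here's the sort of Javadoc header I have in mind on a test. The requirement ID and the ProfileStore class are invented for the example; the point is that the goal and the requirement mapping live in the test source itself:

```java
import static org.junit.Assert.assertNull;
import org.junit.Test;

public class ProfileStoreLookupTest {

    /**
     * Goal: looking up a user that was never saved must return null
     * rather than throwing an exception.
     *
     * Requirement: PROF-104, "graceful handling of unknown users"
     * (a hypothetical requirement ID).
     *
     * A reviewer with project knowledge but no programming background
     * can match this test to its requirement from the generated
     * Javadoc alone, without reading the method body.
     */
    @Test
    public void missingUserReturnsNull() {
        ProfileStore store = new ProfileStore(); // hypothetical class under test
        assertNull(store.lookup("nobody"));
    }
}
```

Run Javadoc over the test tree and you get a reviewable catalog of what the suite covers, no source-reading required.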
A few years ago, I was involved in a situation where I had to locate the test coverage for a specific feature within a test suite of several thousand tests. Over the years that it took to develop the tests, the knowledge of the precise actions performed by any one test had been lost. The only way to review the tests was to review the source code of each test. It was possible for development and QE engineers to do this, but other project team members who had project knowledge, but not programming skills, were not able to contribute to the review.
So, why are the fountains broken? Maybe no one planned to or was able to maintain them. Why are the tests outdated? Maybe for the same reasons. But, if you plan for maintenance and make it possible for everyone on the project team can easily review the tests in a form that they can understand, then maybe you can direct your finite test automation resources at the tests that need updating. And in the process save some time and money.
What question is asked by all fathers of the bride when a wedding is being planned? "How much will this cost me - can we make it cost less?"
In a recent post to this blog, I talked about the often-forgotten cost of maintaining automated tests. Let's now talk about how much that maintenance may cost, and how to limit the cost. In other words: how much of my project schedule do I have to devote to maintaining my automated tests? Let's walk through an example. Suppose you are building the first automated tests for a new product. For the sake of this example, let's assume that your test plan calls for you to create 50 automated tests, and that you are using an existing test framework such as JUnit or TestNG.
The first test may take you up to a day to create, as you will still be learning the product under test; much of that day will go into simply getting one basic test to run. After that, you will probably be able to write perhaps half of the tests at a rate of two per day, and the other half at a rate of three to four per day. When they are complete, you'll need some additional time to refactor and generally clean up the tests. Let's allocate three more days for that. (A sketch of what that first test might look like follows the summary below.)
So, in summary:
the first test = 1 day
the next 24 tests = 12 days
the next 25 tests = 7 days
cleanup = 3 days
For a total of 23 days: in other words, roughly a working month.
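As promised, here is a minimal sketch of what that first "smoke" test might look like in JUnit 4. ProductUnderTest is a hypothetical stand-in for the real product's API, stubbed inline so the example compiles on its own; in practice, most of that first day goes into the build scripts, classpath, and test-bed setup around a test this small.

import org.junit.Test;
import static org.junit.Assert.assertNotNull;

public class FirstSmokeTest {

    // Hypothetical stand-in for the product's API, stubbed so the example is self-contained.
    static class ProductUnderTest {
        static ProductUnderTest start() { return new ProductUnderTest(); }
        String version() { return "1.0"; }
    }

    // The simplest possible check: the product starts and reports a version.
    @Test
    public void productStartsAndReportsAVersion() {
        ProductUnderTest product = ProductUnderTest.start();
        assertNotNull("expected the product to report a version", product.version());
    }
}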
Now, let's think about maintenance. Let's assume that your new product ships quarterly updates containing bug fixes and minor new features, plus annual major releases. All of these releases require new automated tests, so over time your test library grows and grows. But how much of your time and staffing should you plan to spend on maintaining these tests?
To a large degree, the investment in maintaining the tests will depend on how effective the tests are, and on how much the code they exercise actually changes. Let's say that 10% of the code in your project changes in each quarterly release. Where will you spend your time and resources in updating your tests?
First, you will have to review all of the tests to determine which must be updated, and which new tests must be created. Then you have to design and implement the changes. Let's say that you can review ten tests per day, and that each test in the changed 10% takes about half a day to update.
So, in summary, maintenance on the first set of tests will cost:
review all the tests = 5 days
update the tests = 3 days
For a total of 8 days, or about a third of the time it took to write the tests in the first place. If this sounds expensive, well, it is! What we need is a way to reduce the time needed to review all the tests, because, remember, the number of tests is always growing.
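For the record, here is that back-of-the-envelope arithmetic as a small, runnable sketch. The rates are the assumptions stated above, not measured data.

// Quarterly maintenance estimate for the 50-test suite, using the assumed
// rates above: 10 test reviews per day, 10% of tests changed per release,
// and half a day to update each changed test.
public class MaintenanceEstimate {
    public static void main(String[] args) {
        int tests = 50;
        double reviewDays = tests / 10.0;           // 50 / 10 = 5 days
        double updateDays = (tests * 0.10) * 0.5;   // 5 changed tests * 0.5 = 2.5 days (rounded up to 3 above)
        double creationDays = 23.0;                 // total from the creation estimate
        double total = reviewDays + updateDays;
        System.out.printf("review %.1f days + update %.1f days = %.1f days (about %.0f%% of creation)%n",
                reviewDays, updateDays, total, 100.0 * total / creationDays);
    }
}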
What's the answer? There's no silver bullet for maintenance. But part of the answer is the mundane task of documenting the tests, so that the specific tests that need to change can be found quickly.
This documentation should record not just the design of the tests, but their goals. It becomes the "institutional memory" of the test automation: it is persistent, and it outlives any one person's involvement with the project or the tests. Another part of the answer is to keep this documentation in a form that can be easily reviewed and edited, and that maps directly to the project's requirements definitions. Javadoc is a great approach for this, as it enables you to keep the test documentation in the actual test source files.
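As a sketch of what this can look like, here is a JUnit 4 test documented with Javadoc. The requirement ID (PRICING-042) and the class under test are hypothetical, and the class is stubbed inline so the example stands alone; the point is that the goal and the requirement mapping live in the test source, where they can be generated into browsable HTML for non-programmers to review.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

/**
 * Verifies the volume-discount pricing feature.
 *
 * Goal: confirm that quantity discounts are applied per the pricing rules.
 * Requirement: PRICING-042 (hypothetical requirement ID).
 */
public class DiscountCalculatorTest {

    // Hypothetical class under test, stubbed here so the example is self-contained.
    static class DiscountCalculator {
        double totalFor(int quantity, double unitPrice) {
            double total = quantity * unitPrice;
            return quantity >= 10 ? total * 0.95 : total; // 5% discount at 10+ items
        }
    }

    /**
     * Goal: a ten-item order receives the 5% volume discount.
     * Requirement: PRICING-042, rule 2.
     */
    @Test
    public void volumeDiscountAppliedAtTenItems() {
        assertEquals(95.0, new DiscountCalculator().totalFor(10, 10.0), 0.001);
    }
}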
A few years ago, I was in a situation where I had to locate the test coverage for a specific feature within a test suite of several thousand tests. Over the years it took to develop the tests, the knowledge of the precise actions performed by any one test had been lost. The only way to review the tests was to review the source code of each test. Development and QE engineers could do this, but other project team members who had project knowledge, but not programming skills, were not able to contribute to the review.
So, why are the fountains broken? Maybe no one planned to maintain them, or was able to. Why are the tests outdated? Maybe for the same reasons. But if you plan for maintenance, and make it possible for everyone on the project team to easily review the tests in a form they can understand, then maybe you can direct your finite test automation resources at the tests that actually need updating, and save some time and money in the process.
Saturday, January 5, 2008
To Make Sure the Software is Functional, Don't Forget the Non-Functional Tests
When I first heard the term "non-functional test," I thought that it was a joke. I was working for a now-defunct company on a now-forgotten product. The product's code was anything but fully "functional," so when I was asked about non-functional tests, it seemed more like an appropriately funny and sarcastic comment than a serious question. It may sound odd when you first hear it, but the classification of tests as "functional" or "non-functional" is actually very logical.
What are functional tests? These are the tests that you build and run to exercise (and exorcise the bugs out of) the code that supports the functions and features of the product under test. In other words, these tests tell you if the product fulfills its functional requirements. What are non-functional tests? The best description that I've seen is that these tests verify how well the product fulfills its functional requirements. A good way to think about this difference between functional and non-functional tests is that the functional tests verify the "whats" and the non-functional tests verify the "hows."
The details of the functional tests for a product will be specific to that product. For example, tests for a text editor will differ from tests for a firewall. The types of non-functional tests that apply to a product will also depend on the characteristics of the product, but will likely include types of tests such as these[1] (a brief sketch follows the list):
* Compatibility testing
* Compliance testing
* Documentation testing
* Endurance testing
* Load testing
* Localization testing and Internationalization testing
* Performance testing
* Resilience testing
* Security testing
* Scalability testing
* Stress testing
* Usability testing
* Volume testing
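To make the functional/non-functional distinction concrete, here is a crude load-test sketch in JUnit 4. A functional test would verify what a sort produces; this test verifies how well it holds up under volume. The one-million-element size and the two-second budget are assumptions picked purely for illustration.

import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class SortLoadTest {

    // Non-functional (load/performance) check: the "how well," not the "what."
    @Test
    public void sortsOneMillionIntegersWithinBudget() {
        int[] data = new int[1000000];
        java.util.Random random = new java.util.Random(42); // fixed seed for repeatability
        for (int i = 0; i < data.length; i++) {
            data[i] = random.nextInt();
        }
        long start = System.currentTimeMillis();
        java.util.Arrays.sort(data);
        long elapsed = System.currentTimeMillis() - start;
        assertTrue("sort took " + elapsed + " ms; budget is 2000 ms", elapsed < 2000);
    }
}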
So, functional tests or non-functional tests? Which types should you build and run? For a thorough test cycle, you really need both. Hmm. What about dysfunctional tests? I'll get back to talking about these in a later post.
[1] http://en.wikipedia.org/wiki/Non-functional_tests