I listened to a webcast from JBoss World today with a group of people. After hearing several speakers announce new middleware products and initiatives (as JBoss is the leading force in open source middleware), one of them turned to me and asked, "Just what is middleware?" When I started to describe transaction servers and database connection pool sharing, she held up a hand and asked, "No. I want to know what it is in real world terms, and why it's a big deal."
That got me thinking and sent me to Google to look for a short definition of middleware. I found a lot of definitions, but most were either too vague or too dependent on the reader already knowing something about middleware. Then, I found this one:
'...middleware: The kind of word that software industry insiders love to spew. Vague enough to mean just about any software program that functions as a link between two other programs, such as a Web server and a database program. Middleware also has a more specific meaning as a program that exists between a "network" and an "application" and carries out such tasks as authentication. But middleware functionality is often incorporated in application or network software, so precise definitions can get all messy. Avoid using at all costs...'[1]
And that really got me thinking about how to describe middleware and why it matters. What I was searching for was a real-world analogy that would make sense to people with varying levels of computer and software experience. And then, it hit me.
Middleware is plumbing.
There are four ways that this is true.
First, it ties together disparate parts of complex systems.
In your house, you have kitchens, heating systems, bathrooms, washing machines, garden faucets, etc. Each plays an important part in making your house livable. You almost never have to worry about your running water because the plumbing is robust and reliable. It just keeps the water moving through the pipes. Middleware keeps information moving through complex web-based applications. One of its primary tasks is to connect systems, applications, and databases together in a secure and reliable way.

For example, let's say you bought an over-priced sweater at a store's web site last night. What happened? You looked through images of various sweaters, selected a color and size, entered a charge card number, and that was it, right? Well, behind the scenes, middleware made sure that the store's inventory database showed that sweater in stock, connected to the charge card company's database to make sure that your card was not maxed out, and connected to the shipping company's database to verify a delivery date. And it made sure that hundreds or thousands of people could all shop at that site at the same time. Also, while it looked to you like you were visiting one web site, middleware tied together many different computers, each in a different location, all running the store's e-commerce application, into a cluster. Why is this important? To make sure that you can always get to the store on-line, even if some of these computers are down due to maintenance or power failures.
Second, it's mostly invisible.
You don't generally see much of the plumbing in your house. What you see is the water. As a consumer, you don't see middleware. You see the web sites and the information flow that middleware makes possible.
Third, it provides a standard way of doing things.
If you wanted to build your own plumbing from scratch, you could. But, it's much easier to just buy plumbing fixtures. You, as a software developer, could design and build your own application servers, database connection drivers, authentication handlers, messaging systems, etc. But these would not be easy to build and maintain. It's much easier to make use of middleware components that are built according to established (and especially open!) standards. In middleware, these standards take the form of libraries of functions that your programs call through well-defined application programming interfaces (APIs). You call these functions instead of having to invent your own.
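To make that concrete, here's a minimal sketch (my illustration, not anything announced at the webcast) of application code using two such standard APIs, JNDI and JDBC, to borrow a database connection from a pool that the middleware manages. The JNDI name and the inventory table are hypothetical:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class InventoryCheck {
    public int quantityInStock(String sku) throws NamingException, SQLException {
        // Ask the application server for its managed connection pool through
        // the standard JNDI API. The name "java:comp/env/jdbc/StoreDS" is a
        // made-up example; the real name depends on the server configuration.
        InitialContext ctx = new InitialContext();
        DataSource pool = (DataSource) ctx.lookup("java:comp/env/jdbc/StoreDS");

        Connection conn = pool.getConnection(); // borrowed from the pool
        try {
            PreparedStatement stmt = conn.prepareStatement(
                    "SELECT quantity FROM inventory WHERE sku = ?");
            stmt.setString(1, sku);
            ResultSet rs = stmt.executeQuery();
            return rs.next() ? rs.getInt(1) : 0;
        } finally {
            conn.close(); // returns the connection to the pool, not to the OS
        }
    }
}

Notice that there's not a line of pooling, clustering, or transaction code in sight - the middleware supplies all of that behind those two API calls.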
Fourth and finally, it lets you worry about other things.
When you put an addition onto a house, what do you worry about? Bathroom fixtures, kitchen appliances, flooring, colors, and how to pay for it all. It's a very stressful process. The last thing you want to worry about is whether you want 3/4-inch or 1/2-inch pipe, copper or PVC connectors, #9 or #17 solder, etc. With middleware taking care of all the invisible functions, you, as a software developer or a business owner, can concentrate on building software to solve your business problems and fulfill your customers' needs.
My father once told me, "when you move to a new town, don't look for a doctor - first thing, you find a good plumber so you can sleep nights." It's like that with middleware. It may be mostly invisible, but it keeps things running so a lot of developers and managers and customers can sleep at night.
Ref:
[1] http://www.salon.com/tech/fsp/glossary/index.html
Sunday, February 3, 2008
Untimely Assumptions to Look for in Test Design
Having no time to blog got me thinking about...time.
In my Sept. 2007 Red Hat Magazine article[1] about software communities, I referred to the need to deal with different timezones when your team and community are spread across multiple geographic locations. The folks with whom I'm working these days can attest to my being challenged when it comes to converting times between timezones. I always seem to ask them to attend meetings at times that are inconvenient for them! ;-)
Anyway, I received an email yesterday that got me thinking about time, and about assumptions that sometimes get made in system design about the availability of "quiet time." The email announced that a lab was being powered down at 2:00 AM for some necessary maintenance. The time of day was selected as a "quiet time" when the lab systems would be idle.
While I was reading the email, I had to work out when 2:00 AM local time would be in our remote offices, as we frequently share test servers in remote locations. As it turned out, the lab outage was not a serious problem, based on the locations and timezones of the people who would be accessing the lab systems.
But, wow, that email caused me to have a sudden flashback to a software project I worked on several years ago. The project was a voicemail call accounting system. Every day at 2:00 AM, the system ran several accounting tasks to calculate billing for that day. These billing tasks were large and complex, and they reduced overall system throughput while they ran. The only problem was that while 2:00 AM might be a quiet time in your own location, it might be a busy time for users accessing the system from elsewhere in the world, and those users would see poor performance.
I don't think that we ever solved this problem, and I have long since left that company, but it illustrates an interesting problem in developing real-world stress tests. If the system under test embodies faulty assumptions about traffic patterns, then unless you are able to identify the implications of those assumptions and attack them, your testing may be flawed. The discrete tasks that the system has to perform may vary at different times of the day, or days of the week or month.
Let's wrap this up: what are some good faulty design assumptions to look for in today's globally networked world?
First, the notion that the requirements that the system under test must fulfill are unchanging over time. Just as traffic patterns change over time, the tasks performed by the system may vary from hour to hour.
Second, the notion that the system under test is the center of that world. These days, the system under test may be in Bonn, but it may have to simultaneously support people in Boston, Bogotá, and Brno. And remember, 2:00 AM arrives at a different moment in time for everyone!
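As a quick illustration of that last point, here's a sketch using Java's java.time API (from much newer Java than this post, used purely for illustration) showing when a Boston lab's 2:00 AM "quiet time" actually arrives at two other offices; the cities and date are hypothetical:

import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class QuietTimeCheck {
    public static void main(String[] args) {
        // 2:00 AM "quiet time" as scheduled by the (hypothetical) Boston lab.
        ZonedDateTime bostonQuiet = LocalDateTime.of(2008, 2, 4, 2, 0)
                .atZone(ZoneId.of("America/New_York"));

        // The same instant as experienced in Brno (the middle of the workday
        // morning!) and in Bogotá (the same wall-clock time, by luck).
        System.out.println(bostonQuiet.withZoneSameInstant(ZoneId.of("Europe/Prague")));
        System.out.println(bostonQuiet.withZoneSameInstant(ZoneId.of("America/Bogota")));
    }
}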
[1] http://www.redhatmagazine.com/2007/09/11/a-tale-of-three-communities/
Tuesday, January 29, 2008
Need to Slow Down the Blogging for a Bit
I'm going to have to slow down on the blogging for a while, as I want to start writing some new articles for Red Hat Magazine. If you haven't seen the magazine yet, it's really worth a look. (And I'm not just saying that because I've been lucky enough to have published articles in it.) It's a great resource for information on Linux, open source software, and, of course, Red Hat and JBoss. And it's updated daily.
http://www.redhatmagazine.com
OK. I spoke too soon. There's one other resource that I really want to highlight.
http://www.opensourcetesting.org
This is a great site for open source testing tools, ideas and discussions. All open source of course! ;-)
Sunday, January 13, 2008
The Father of the Bride Question
(A follow-up post to "Why Are the Fountains Broken?")
What question is asked by all fathers of the bride when a wedding is being planned? "How much will this cost me - can we make it cost less?"
In a recent post to this blog, I talked about the often-forgotten cost of maintaining automated tests. Let's now talk about how much this maintenance may cost, and how to limit that cost. In other words, how much of my project schedule do I have to devote to maintaining my automated tests? Let's walk through an example. Suppose you are building the first automated tests for a new product. For the sake of this example, let's assume that your test plan calls for you to create 50 automated tests, and that you are using an existing test framework such as JUnit or TestNG.
The first test may take you up to a day to create, as you will be learning the product under test; you will likely spend a good deal of time just getting a simple test running. After this, you will probably be able to write perhaps 50% of the tests at a rate of two per day, and the other 50% at a rate of three to four per day. When they're complete, you'll need some additional time to refactor and generally clean up the tests. Let's allocate three more days for that.
So, in summary:
the first test = 1 day
the next 24 tests = 12 days
the next 25 tests = 7 days
clean up = 3 days
For a total of 23 days; in other words, perhaps a month.
Now, let's think about maintenance. Let's assume that your new product ships quarterly updates which contain bug fixes and minor new features, plus annual major releases. All of these new releases and bug fixes require that you write new automated tests, so over time your test library grows and grows. But how much of your time and staff should you plan to spend on maintaining these tests?
To a large degree, the amount of investment in maintaining the tests will depend on how effective the tests are, and on how much the code actually tested by the tests changes. Let's say that 10% of the code in your project changes in each of your quarterly releases. Where will you spend your time and resources in updating your tests?
First, you will have to review all the tests to determine which tests must be updated and which new tests must be created. Then you have to design and implement the changes. Let's say that you can review ten tests per day, and that updating a test takes about half a day. With 10% of the 50 tests needing updates, that's five tests, or about three days of update work.
So, in summary, maintenance on the first set of tests will cost:
review all the tests = 5 days
update the tests = 3 days
For a total of 8 days. Or about a third of the time it took to write the tests in the first place. If this sounds expensive, well, it is! What we need to do is find a way to reduce the time needed to review all the tests, because, remember, the number of tests is always growing.
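To see how this grows, here's a back-of-the-envelope sketch in Java; the rates are just my assumptions from above (ten reviews per day, half a day per update, 10% of tests touched per release), not measured data:

public class MaintenanceEstimate {
    public static void main(String[] args) {
        double reviewsPerDay = 10.0; // tests reviewed per day (assumed)
        double updateDays = 0.5;     // days to update one test (assumed)
        double changeRate = 0.10;    // fraction of tests touched per release

        for (int suiteSize = 50; suiteSize <= 400; suiteSize *= 2) {
            // Review time grows linearly with the size of the whole suite.
            double days = suiteSize / reviewsPerDay
                    + suiteSize * changeRate * updateDays;
            System.out.printf("%d tests -> %.1f days per quarterly release%n",
                    suiteSize, days);
        }
    }
}

At 50 tests this matches the eight days above; by 400 tests, the review-and-update work has quietly become a full-time job, unless the review time can be cut.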
What's the answer? There's no silver bullet for maintenance. But part of the answer is the mundane task of documenting the tests so that the specific tests that ought to be changed can be easily found.
This documentation should record not just the design of the tests, but the goals of the tests. It becomes the "institutional memory" of the test automation: it is persistent, and it outlives any one person's involvement with the project or the tests. Another part of the answer is to keep this documentation in a form that can be easily reviewed, that maps directly to the project's requirements definitions, and that is easily edited. Javadoc is a great approach for this, as it enables you to keep the test documentation in the actual test source files.
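Here's a minimal sketch of the idea, assuming JUnit 4; the class under test and the requirement ID are hypothetical, but the pattern - goals and requirement coverage recorded in Javadoc, right in the test source - is the point:

import static org.junit.Assert.assertFalse;
import org.junit.Test;

public class CardValidatorTest {

    /** Hypothetical class under test, stubbed here so the example compiles. */
    static class CardValidator {
        boolean isValid(String expiry) {
            return !expiry.startsWith("01/07"); // toy rule for this sketch
        }
    }

    /**
     * Goal: an expired charge card must be rejected at checkout.
     * Covers requirement REQ-1234 (payment validation) - a made-up ID.
     * A reviewer reading only the generated Javadoc can tell what this
     * test verifies, without reading the code below.
     */
    @Test
    public void rejectsExpiredCard() {
        assertFalse(new CardValidator().isValid("01/07"));
    }
}

Run javadoc over the test sources, and anyone on the team can review what's covered without opening a single .java file.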
A few years ago, I was in a situation where I had to locate the test coverage for a specific feature within a test suite of several thousand tests. Over the years that it took to develop the tests, the knowledge of the precise actions performed by any one test was lost. The only way to review the tests was to review the source code of each test. It was possible for Development and QE engineers to do this, but other project team members who had project knowledge, but not programming skills, were not able to contribute to the review.
So, why are the fountains broken? Maybe no one planned to, or was able to, maintain them. Why are the tests outdated? Maybe for the same reasons. But if you plan for maintenance and make it possible for everyone on the project team to easily review the tests in a form that they can understand, then maybe you can direct your finite test automation resources at the tests that need updating. And, in the process, save some time and money.
Saturday, January 5, 2008
To Make Sure the Software is Functional, Don't Forget the Non-Functional Tests
When I first heard the term "non-functional test," I thought that it was a joke. I was working for a now-defunct company on a now-forgotten product. The product's code was anything but fully "functional," so when I was asked about non-functional tests, it seemed more like an appropriately funny and sarcastic comment than a serious question. It may sound odd when you first hear it, but the classification of tests as "functional" or "non-functional" actually is very logical.
What are functional tests? These are the tests that you build and run to exercise (and exorcise the bugs out of) the code that supports the functions and features of the product under test. In other words, these tests tell you if the product fulfills its functional requirements. What are non-functional tests? The best description that I've seen is that these tests verify how well the product fulfills its functional requirements. A good way to think about this difference between functional and non-functional tests is that the functional tests verify the "whats" and the non-functional tests verify the "hows."
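To illustrate the difference, here's a small sketch assuming JUnit 4; the SearchService class is a hypothetical stand-in, stubbed so the example compiles. The first test checks a "what" (correctness), the second checks a "how" (a crude response-time budget):

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class SearchServiceTest {

    /** Hypothetical class under test, stubbed here so the example compiles. */
    static class SearchService {
        int count(String term) {
            return term.isEmpty() ? 0 : 1; // toy behavior for this sketch
        }
    }

    @Test // functional: does search return the right answer?
    public void emptyQueryReturnsNoHits() {
        assertEquals(0, new SearchService().count(""));
    }

    @Test // non-functional: is the right answer delivered fast enough?
    public void queryCompletesWithinBudget() {
        long start = System.nanoTime();
        new SearchService().count("sweater");
        long elapsedMs = (System.nanoTime() - start) / 1000000;
        assertTrue("took " + elapsedMs + " ms", elapsedMs < 100);
    }
}

A real load or stress test would, of course, drive many concurrent requests rather than one call, but the division of labor is the same: the "what" and the "how well" are separate checks.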
The details of the functional tests for a product will be specific to that product. For example, tests for a text editor will be different from tests for a firewall. The types of non-functional tests that apply to a product will also depend on the characteristics of the product, but will likely include types of tests such as these[1]:
* Compatibility testing
* Compliance testing
* Documentation testing
* Endurance testing
* Load testing
* Localization testing and Internationalization testing
* Performance testing
* Resilience testing
* Security testing
* Scalability testing
* Stress testing
* Usability testing
* Volume testing
So, functional tests or non-functional tests? Which types should you build and run? For a thorough test cycle, you really need both. Hmm. What about dysfunctional tests? I'll get back to talking about these in a later post.
[1] http://en.wikipedia.org/wiki/Non-functional_tests
Saturday, December 15, 2007
When Less May Be More - A Lighter Weight Test Plan
I was talking to a couple of colleagues about test plans this week; both the format of the plans and the content.
The IEEE standard is, well, a "standard." It may, however, be intimidating to people unaccustomed to software testing. And it lacks an easy-to-use construct for differentiating between classes of tests. The discussion of test plans came up this week because one of my colleagues is trying to - gently, but effectively - introduce a structured test process into an organization whose members are largely unfamiliar with one. He very much wanted to take a medical approach to this and "first, do no harm." For him, a lightweight adaptation of the IEEE standard is a better starting point. We came up with this template for a test plan:
Introduction - what are we doing?
Test Strategy - how are we doing it?
Test Priorities - what's most important?
Scope - what's tested?
Scope - what's beyond the scope of testing?
Test Pass/Fail Criteria - how do we know that it's good or bad?
Test Deliverables - docs and programs that we'll build
Test Cases - Functional Tests
* Tests mapped to product features
Test Cases - Non-functional tests - whichever apply[1]
* Compatibility testing
* Compliance testing
* Documentation testing
* Endurance testing
* Load testing
* Localization testing and Internationalization testing
* Performance testing
* Resilience testing
* Security testing
* Scalability testing
* Stress testing
* Usability testing
* Volume testing
Test Environment/Configurations
Responsibilities - who's doing what?
Schedule/Milestones - when are they doing it?
Risks and Contingencies - what might go wrong and how we'll handle it
Approvals - do we agree?
References - pointers to background docs
Revision history - why did the plan change and how?
Appendices - anything else?
It's worth noting how this template explicitly separates the descriptions of functional and non-functional tests as the differences between these test class definitions may be new concepts to some of the team members. What's that? Are these test class differences also a new concept to you? I'll discuss this subject in the next post to this blog!
Ref:
[1] http://en.wikipedia.org/wiki/Non-functional_tests
P.S. My Fedora friends have picked up on this test plan outline - it's the basis for their test plan template: https://fedoraproject.org/wiki/QA:Test_Plan_Template