Sponsored By:
E-Guide
Requirements-based tests are the most important tests, for only they confirm that the product is doing something useful. Requirements-based tests are "black box" (or functional) tests: so long as relevant inputs and/or conditions produce expected results demonstrating that the requirements are met, how the product produces them is not of concern. Engineers describe this as treating the processing as an unknown "black box." Most of the tests that professional testers and users run are black box, and they use many different techniques for designing those tests. No single technique by itself can be called the one and only method of requirements-based testing.
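The black-box idea above can be sketched in a few lines of code. This is a minimal illustration, not anything from the article: the function under test, `apply_discount`, is entirely hypothetical, and the test exercises it purely through inputs and expected results, with no knowledge of how the result is computed internally.

```python
# A minimal black-box test sketch. apply_discount is a hypothetical piece of
# product code; the test below knows nothing about its internals and checks
# only that relevant inputs produce the results the requirement demands.

def apply_discount(order_total, customer_years):
    """Hypothetical requirement: 5% off per year of loyalty, capped at 25%."""
    discount = min(customer_years * 0.05, 0.25)
    return round(order_total * (1 - discount), 2)

def test_apply_discount_black_box():
    # Each case pairs inputs with the expected, requirement-derived result.
    cases = [
        ((100.00, 0), 100.00),   # new customer: no discount
        ((100.00, 2), 90.00),    # 2 years: 10% off
        ((100.00, 10), 75.00),   # long-time customer: capped at 25%
    ]
    for (total, years), expected in cases:
        assert apply_discount(total, years) == expected

test_apply_discount_black_box()
```

If the internals were rewritten (a lookup table, a pricing service call), this test would remain valid unchanged, which is the point of treating the processing as a black box.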
These particular test design techniques help, but they are usually less effective than presumed. There is a tendency to over-rely on a chosen technique, which can result in missing other important tests. Paradoxically, the many additional test conditions that these disciplined techniques reveal can become overwhelming; and even with the greater number of identified tests, the typical focus of these techniques frequently leaves significant requirements and test conditions overlooked. Although the specifics are beyond the scope of this article, the good news is that the proactive testing methodology includes a number of more powerful test design techniques that identify considerably more (and often more important) requirements and test conditions that ordinarily are overlooked. The methodology also facilitates managing and prioritizing the far greater number of identified test conditions.
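To make concrete how a disciplined technique multiplies identified test conditions, here is a sketch of one widely used black-box design technique, boundary value analysis. The requirement ("quantity must be 1..99") is invented for illustration; the article does not prescribe this particular technique.

```python
# Boundary value analysis sketch for a hypothetical requirement:
# "order quantity must be between 1 and 99 inclusive."
# The technique tests at, just inside, and just outside each boundary,
# turning one requirement into six distinct test conditions.

def boundary_values(low, high):
    """Return candidate test inputs around both boundaries of a valid range."""
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

conditions = boundary_values(1, 99)
print(conditions)  # -> [0, 1, 2, 98, 99, 100]
```

Applied across dozens of requirements, this is exactly how the number of identified test conditions grows from manageable to overwhelming, which is why the prioritization the article mentions matters.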
Requirements-based testing also needs to demonstrate that the product is not only built as designed but in fact satisfies the REAL business requirements that provide value. Therefore, it's generally considered helpful to trace the requirements to the tests that demonstrate those requirements have been met. Typical traceability matrices, however, start with the product requirements. To be truly effective, one should start with the business requirement whats, trace them to the high-level design product requirement hows, and in turn trace the product requirements to the detailed design, implementation components, and tests of the implemented product. Effective user acceptance testing is intended to demonstrate that the product as designed and created in fact satisfies the REAL business requirements, which therefore need to be traced directly to the user acceptance tests that demonstrate they've been met. Commercially available automated tools can reduce some of the effort involved in creating and maintaining traceability matrices. A caution, though: business requirements are hierarchical and need to be driven down to detail, usually to a far greater extent than people are accustomed to. The resulting large numbers of individually itemized business requirements and corresponding detailed product requirements and tests may make maintaining full traceability impractical. Thus, at least initially, it may be advisable to keep the traceability matrix high-level.
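A high-level traceability matrix of the kind described can be sketched as a simple nested mapping. All requirement and test identifiers below are invented for illustration; real tools offer far richer models, but even this skeleton supports the key query: which business requirements have no demonstrating tests at all?

```python
# A minimal traceability sketch: business requirement "whats" trace to
# product requirement "hows", which trace to the tests that demonstrate
# them. All names are hypothetical.

trace = {
    "BR-1 reduce order-entry errors": {
        "PR-1.1 validate quantity field": ["TC-101", "TC-102"],
        "PR-1.2 confirm order summary": ["TC-110"],
    },
    "BR-2 speed up checkout": {
        "PR-2.1 one-page payment form": [],   # designed but not yet tested
    },
}

def untested_business_requirements(matrix):
    """Business requirements none of whose product requirements have tests."""
    return [br for br, prs in matrix.items()
            if not any(tests for tests in prs.values())]

print(untested_business_requirements(trace))  # -> ['BR-2 speed up checkout']
```

Keeping the matrix at this coarse granularity, as the article advises initially, keeps maintenance practical while still exposing coverage gaps against the business requirements rather than only the product requirements.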
settled upon. In fact, much of that creep occurs because the product requirements, regardless of how clear and testable they are, don't meet the REAL business requirements. The main reason is that the REAL business requirements usually have not been defined adequately, largely because people think the product requirements are the requirements. Moreover, a requirement can be clear and testable but wrong, and clarity and testability are irrelevant for overlooked requirements. Typical requirements reviews tend to use one or two techniques that are unlikely to detect wrong or overlooked requirements. In contrast, the proactive testing methodology uses more than 21 techniques to review requirements, including many more powerful ones that indeed can detect incorrect and overlooked requirements. Similarly, more than 15 powerful techniques are used to review designs. In addition, proactive testing has ways to more fully define use case test conditions and to make seemingly untestable requirements testable. By being aware of the issues with requirements-based testing as traditionally advocated, we can apply more powerful review and test design methods to truly enhance the effectiveness of these most important tests.
repeated painful experience that their biggest source of mistakes and rework is inadequately defined requirements, which this approach assures.

Second, foes of requirements-based testing often speak monolithically, as though the only alternative to their favored just-go-code-and-test approach is interminable analysis paralysis in a mindless, inflexible, exaggerated "waterfall" chasing the impossible task of getting every possible requirement defined perfectly. I'd contend that such blaming of the methodology is usually the excuse for inept development, not the cause of it. Capable developers don't follow any methodology so slavishly. Reasonableness, not perfection, is the real practical standard.

Third, without requirements, what exactly do developers develop and testers test? They can only be working on guesses, especially the testers. Without a requirements context, simply exercising the developed code may reveal problems, but it's illusory to think these are the most important problems. Moreover, by definition they are only being found at the tail end, when they are hardest to fix.

Fourth, this approach assumes that the only reason they can't get the requirements reasonably right is that it must be impossible. A more probable reason is that they don't know how to discover requirements very well. Instead of calling their weakness a virtue and using it as the excuse to "just go code and test," perhaps they should consider learning how to discover requirements better.

Fifth, this approach invariably is predicated upon thinking that the product design is the requirements. Iterative development guesses about, builds, and repeatedly revises products. Product requirements change rapidly when the REAL business requirements the product must satisfy haven't been defined adequately. REAL business requirements don't change nearly so much as the awareness of those requirements does.
Tests that demonstrate that the developed product meets the REAL business requirements are essential and the most important tests. Denying their possibility, let alone their primary importance, is illusory and ill-advised.
About the author: Robin F. Goldsmith, JD, has been president of consultancy Go Pro Management Inc. since 1982. He works directly with and trains business and systems professionals in requirements analysis, quality and testing, software acquisition, project management and leadership, metrics, process improvement and ROI. Robin is the author of the Proactive Testing and REAL ROI methodologies and also the recent book Discovering REAL Business Requirements for Software Project Success.
application space. This can become a more formal traceability matrix by including a more specific inventory of the testing intent within each functional area -- a list of test case names. The danger is that the exploratory testing effort begins to move from being adaptive to predictive and therefore less exploratory. Depending on the current needs of the testing organization, this may or may not be appropriate, but it is important to be cognizant of what is actually occurring within the testing process.
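The informal-to-formal transition described above can be sketched as a simple coverage outline: functional areas (adaptive, exploratory) optionally extended with named test cases (more predictive, more scripted). All area and test names below are invented for illustration.

```python
# A sketch of an exploratory-testing coverage outline. Functional areas
# keep the effort adaptive; attaching named test cases to an area is the
# first step toward the more formal, predictive matrix described above.

session_outline = {
    "login": ["TC login lockout", "TC password reset"],
    "search": ["TC empty query", "TC special characters"],
    "checkout": [],   # explored, but no scripted intent recorded yet
}

def coverage_summary(outline):
    """Report each functional area with its count of named test cases."""
    return {area: len(cases) for area, cases in outline.items()}

print(coverage_summary(session_outline))
# -> {'login': 2, 'search': 2, 'checkout': 0}
```

The more entries each area accumulates, the more predictive (and less exploratory) the effort becomes, which is exactly the trade-off the article asks testers to stay conscious of.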
An inquiring mind -- An exploratory tester must think outside the box and approach the challenge of testing from all angles (business, development, technology, governance, etc.).

A skilled test designer -- An exploratory tester must be an effective test designer, someone who systematically explores and attempts to break the application space.

A skilled observer -- An exploratory tester must notice not only obvious defects within the application space but also more subtle variances in behavior that could be potential defects. They must function as an investigative reporter.

A skilled toolsmith -- An exploratory tester must be a skilled toolsmith, a tester who creates collections of tools to expedite the investigative process.

An excellent communicator -- To communicate findings, an exploratory tester must be an effective communicator. Since there is no test script, the only guidance explaining the nature of a detected defect and how to replicate it comes directly from the tester. When exploratory testing is being leveraged to create more structured or scripted test cases, the tester must also be able to capture the intent of the exploratory test in terms of those artifacts.
Leveraging the information gained from ongoing formal exploratory testing efforts can help create an itemized list of test cases that should be captured as formal testing scripts and automated using keyword-based test case design and automation techniques. The test case inventory to be formally scripted and automated should be based on two key selection criteria: application risk and time to test. The concept of application risk is probably more familiar than time to test, which simply refers to the resource hours required to perform the test. When looking at using a mixture of testing approaches, this should be one of the main criteria for automation: How do I free up my skilled testers to perform more exploratory testing? The key is not to dispose of one testing technique or approach in favor of another; it is to select the mixture of tools, techniques, and skills that creates the greatest opportunity to reduce production issues and improve the overall quality of the product. For example, I would never recommend that all testing be automated. There is always room for improved testing coverage, and exploratory testing is one of the best ways to discover these opportunities.
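The two selection criteria named above can be sketched as a simple ranking. The scoring (risk weighted by manual hours) and all test names are invented for illustration; the article names the criteria but not a formula.

```python
# A sketch of risk/time-to-test selection for automation. Automating the
# high-risk, long-running manual tests frees skilled testers for more
# exploratory work. Scoring and names are hypothetical.

candidates = [
    {"name": "nightly batch reconciliation", "risk": 5, "hours": 6.0},
    {"name": "profile photo upload",         "risk": 2, "hours": 0.5},
    {"name": "payment authorization",        "risk": 5, "hours": 2.0},
]

def automation_priority(tc):
    """Higher risk and longer manual run time -> stronger automation candidate."""
    return tc["risk"] * tc["hours"]

ranked = sorted(candidates, key=automation_priority, reverse=True)
print([tc["name"] for tc in ranked])
# -> ['nightly batch reconciliation', 'payment authorization', 'profile photo upload']
```

Whatever weighting a team chooses, making it explicit keeps the automation inventory a deliberate mixture of approaches rather than an all-or-nothing choice.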
About the Author: David W. Johnson (DJ) is a senior test architect with more than 23 years of experience in information technology across several industries. He has played key roles in business needs analysis, software design, software development, testing, training, implementation, organizational assessments and support of business solutions. Johnson has developed specific expertise over the past 15 years on implementing "test ware," including test strategies, test planning, test automation (functional and performance) and test management solutions.
A project plan does the following:

- Identifies what needs to be done, how it can be done, and what it will take in terms of resources, effort, and duration;
- Describes the tasks, sequences, and dependencies required;
- Sets the timing needed to accomplish the work and mitigate potential risks.
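The "tasks, sequences, and dependencies" idea above can be sketched with a tiny dependency graph. The tasks are hypothetical examples, and the ordering uses Python's standard-library topological sorter.

```python
# A sketch of test-project task sequencing: each task lists the tasks it
# depends on, and a topological sort yields an order in which no task is
# scheduled before its prerequisites. Task names are invented.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

tasks = {
    "write test plan": [],
    "prepare test data": ["write test plan"],
    "build test environment": ["write test plan"],
    "execute system tests": ["prepare test data", "build test environment"],
}

order = list(TopologicalSorter(tasks).static_order())
print(order)  # "write test plan" comes first; "execute system tests" comes last
```

Even this skeletal model captures what a plan must make visible: which tasks gate which others, and therefore where a slip ripples through the schedule.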
The plan helps assure that you have what you need when you need it, and it guides execution to make sure you've done what is needed while identifying when things go off track and indicating needed adjustments. By serving these purposes, a test plan helps the testing project succeed. It's the thought process that's important, not tons of verbiage. One writes down the test plan so the thinking is not lost or forgotten and so it can be shared, reviewed, and refined. A written test plan facilitates scheduling resources and carrying out intended tasks without missing any or going out of sequence. The plan also serves as a record of what's been done, both for ascertaining project status and for suggesting improvements that can be implemented during the current project or on subsequent ones. True agility involves writing no more than is helpful, but no less!
IEEE Std. 829-2008 is a standard for test documentation that clearly distinguishes between test plans and test cases. It is a somewhat controversial standard, because many people interpret it as a dictate for generating documentation. Instead, I use it to organize my thinking, making the writing incidental and just enough to be helpful.

As can be seen in the diagram, the standard suggests using four levels of test planning and design documents. We start with a Master Test Plan, which is the project plan for the testing project. It identifies the set of Detailed Test Plans that, taken together, must be demonstrated to be confident that the overall testing project works. Detailed Test Plans also are project plans, but for smaller sub-projects within the overall testing project: unit tests, integration tests, special tests -- such as usability, performance, and security -- system tests, independent QA tests and user acceptance tests.

Detailed Test Plans indicate the set of features, functions, and capabilities that taken together must be demonstrated to be confident that the respective unit, integration, etc. detailed tests work. For each such feature, function, and capability, we can have a Test Design Specification. This is a little-known but valuable technique for identifying and dealing economically with the sets of executable Test Cases that, taken together, must be demonstrated to show that the Test Design Specification works.

It's understandable why many people misinterpret the standard as dictating a huge amount of documentation. At first glance, the above description does sound like a lot of busywork. In fact, it's just the opposite. When used appropriately, the structure enables more efficient and effective testing, wherein important test conditions are less likely to be overlooked, forgotten, or skipped. The key to using the standard's structure is to realize that each level is largely a list of items at the next level down.
For example, identifying the needed set of detailed tests in the Master Test Plan doesn't take much time but pays off hugely by revealing big risks that otherwise often are missed.
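The four-level structure, where each level is largely a list of items at the next level down, can be sketched as nested lists. All plan, specification, and test names below are invented for illustration.

```python
# A sketch of the IEEE 829-style hierarchy: a Master Test Plan lists
# Detailed Test Plans, which list Test Design Specifications (TDS), which
# list executable Test Cases. Names are hypothetical.

master_test_plan = {
    "System Test Plan": {
        "TDS: order entry": ["TC place order", "TC cancel order"],
        "TDS: reporting": ["TC daily sales report"],
    },
    "User Acceptance Test Plan": {
        "TDS: month-end close": ["TC reconcile totals"],
    },
}

def count_test_cases(mtp):
    """Total executable test cases implied by the outline."""
    return sum(len(cases) for plan in mtp.values() for cases in plan.values())

print(count_test_cases(master_test_plan))  # -> 4
```

Because the upper levels are just lists, they are cheap to produce, yet they expose scope and risk before any effort goes into writing the executable Test Cases at the bottom level.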
Then, time is devoted mainly to those Detailed Test Plans that risk analysis indicates are the highest priority. These in turn are largely lists of Test Design Specifications, which are themselves largely lists of Test Cases; and time is spent, respectively, only on those that address the highest risks. The net effect is that the structure's minimal but sufficient documentation enables identifying the set of most important Test Cases much more thoroughly, reliably, and quickly before getting into the more time-consuming work of creating the executable Test Cases. Effective test planning and design is like plotting one's route for a trip: a small amount of added effort that helps you reach your desired destination with the least time, effort, and aggravation.

About the author: Robin F. Goldsmith, JD, has been president of consultancy Go Pro Management Inc. since 1982. He works directly with and trains business and systems professionals in requirements analysis, quality and testing, software acquisition, project management and leadership, metrics, process improvement and ROI. Robin is the author of the Proactive Testing and (with ProveIT.net) REAL ROI methodologies and also the recent book Discovering REAL Business Requirements for Software Project Success.
Solution brief: From strategy to solutions: Enterprise Architecture Management in action
White paper: Creating business value by dynamically connecting business and technology with enterprise architecture: Issues and challenges for coordinating and managing business change and transformation
White paper: Turning strategy into success - Aligning organizations to create better business results
About IBM
At IBM, we strive to lead in the creation, development and manufacture of the industry's most advanced information technologies, including computer systems, software, networking systems, storage devices and microelectronics. We translate these advanced technologies into value for our customers through our professional solutions and services businesses worldwide. http://www.ibm.com