
E-Guide

Software Testing Techniques


Software testing comes in many flavors, and knowing which flavor to choose for your next testing project -- and what your plan is -- are critical steps in the software development lifecycle. In this e-guide, learn about the pros and cons of requirements-based software testing, the potential pitfalls of exploratory testing and how to turn them in your favor, and how to streamline test planning and design and the value that brings.

Sponsored By: IBM


Table of Contents
Pros and cons of requirements-based software testing
Exploratory testing: Tips and techniques for perfecting software quality
Streamlining test planning and design
Resources from IBM


Pros and cons of requirements-based software testing


By Robin F. Goldsmith, JD

For something that is essential, fairly fundamental and seemingly straightforward, requirements-based software testing sure does generate a lot of discussion. Rather than representing opposite extremes of the same continuum, the pro and con camps come at the topic from disparate perspectives. Advocates of requirements-based testing tend to be analytical, whereas opponents tend to couch their objections in more emotional terms. Each approach has its own strengths and issues. We'll start by examining those that affect advocates.

Appropriating the "requirements-based testing" term


Several prominent testing authorities have endeavored to equate the term "requirements-based testing" with their own favored technique. Cited methods include designing tests based on use cases, decision tables and decision trees, and logic diagramming techniques such as cause-and-effect graphing. Each of these gurus essentially claims that his or her method alone qualifies as the one and only true requirements-based testing. Since the various gurus can't all have the one and only technique, they are probably contributing to general doubt and confusion about requirements-based testing. That may turn some people off to requirements-based testing, or at least to the advocated methods. And those who buy into the premise that a particular promoted technique equals requirements-based testing undoubtedly miss the benefits of other requirements-based testing techniques.

The most important tests


Quite simply, requirements-based software tests demonstrate that the product/system/software (I'll subsequently use just "product" to refer to system and software too) meets the requirements and thereby provides value.


Consequently, requirements-based tests are the most important tests, for only they confirm that the product is doing something useful.

Requirements-based tests are "black box" (or functional) tests. So long as relevant inputs and/or conditions produce expected results that demonstrate the requirements are met, how the product achieves those results is not of concern; engineers describe this as treating the processing as an unknown "black box." Most of the tests that professional testers and users run are black box tests, and they use many different techniques for designing them. No single technique by itself can be said to be the one and only method of requirements-based testing.
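
To make the black-box idea concrete, here is a minimal sketch in Python's built-in unittest framework. The free-shipping requirement, the calculate_shipping() function, and the dollar thresholds are all hypothetical; the point is that the test asserts only on inputs and expected outputs, never on how the result is computed.

```python
# A hedged sketch of a black-box, requirements-based test. The requirement
# and the stand-in implementation below are hypothetical.
import unittest


def calculate_shipping(order_total: float) -> float:
    """Stand-in implementation: orders of $50 or more ship free."""
    return 0.00 if order_total >= 50.00 else 5.95


class TestFreeShippingRequirement(unittest.TestCase):
    """Hypothetical requirement: orders of $50 or more ship free."""

    def test_order_at_threshold_ships_free(self):
        # Only the observable result is checked, not the internals.
        self.assertEqual(calculate_shipping(50.00), 0.00)

    def test_order_below_threshold_pays_standard_rate(self):
        self.assertEqual(calculate_shipping(49.99), 5.95)


if __name__ == "__main__":
    unittest.main()
```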

Poorly defined requirements


Here's the rub. Requirements are almost always poorly defined, or at least not defined as well as is desirable, and requirements-based tests can be no better than the requirements they are testing. Most developers, testers, users and others recognize this, but the problem is usually worse than even the most critical of them realizes. When requirements are poorly defined, developers are likely to develop the wrong things and/or develop things in the wrong way, and testers generally are not able to tell any better than the developers what's right and wrong.

Some authorities assume that documenting requirements with use cases automatically solves this problem. The premise is that one test of each scenario path through the use case suffices to fully test all the ways the use case can be executed. This premise fails to take into account that use cases can be missed or can have numerous wrong and missing paths, and that each path can be invoked in multiple ways that need to be demonstrated.

The advocated decision tree/table and logic diagramming test design techniques represent systematic, disciplined ways to identify more thorough tests of the requirements by improving the defined requirements, largely by making explicit aspects of the requirements that had previously been stated implicitly at best.
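
As one illustration of the decision-table style, here is a small sketch; the loan-approval conditions and rules are hypothetical. Writing the table out forces every combination of conditions to be enumerated, which is exactly how the technique surfaces implicit or overlooked requirements.

```python
# A hedged sketch of decision-table test design: each row of the table
# becomes one test case. The loan-approval rules are hypothetical.
from itertools import product

# (good_credit, income_verified) -> expected approval decision
decision_table = {
    (True, True): True,
    (True, False): False,
    (False, True): False,
    (False, False): False,
}


def approve_loan(good_credit: bool, income_verified: bool) -> bool:
    """Stand-in implementation under test."""
    return good_credit and income_verified


# Check the table covers every condition combination (nothing overlooked),
# then execute one test per rule.
assert set(decision_table) == set(product([True, False], repeat=2))
for (good_credit, income_verified), expected in decision_table.items():
    actual = approve_loan(good_credit, income_verified)
    assert actual == expected, f"rule {(good_credit, income_verified)} failed"
print(f"{len(decision_table)} decision-table rules checked")
```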


These particular test design techniques help, but they are usually less effective than presumed. There is a tendency to over-rely on a chosen technique, which can result in missing other important tests. Paradoxically, using these disciplined techniques that reveal many additional test conditions can become overwhelming; and even with the greater number of identified tests, the typical focus of these test design techniques frequently leaves significant requirements/test conditions overlooked.

Although the specifics are beyond the scope of this article, the good news is that the proactive testing methodology includes a number of more powerful test design techniques that identify considerably more (and often more important) requirements and test conditions than are ordinarily found. The methodology also facilitates managing and prioritizing a far greater number of identified test conditions.

REAL requirements vs. design


Another big reason that requirements-based test design techniques often miss important requirements is that not all black box tests demonstrate that the requirements have been met. In fact, most black box tests actually demonstrate that the product conforms to its design. Realize that designs often are referred to as "requirements."

REAL requirements are business deliverable whats that provide value when delivered by some product how. What people ordinarily refer to as "requirements," on the other hand, are actually requirements of the high-level design product how, which will provide value if and only if the product satisfies the REAL business requirement whats. This is especially true of use cases, which despite being called the "user's requirements" are usually usage requirements of the product expected to be created. For further explanation of these important distinctions, see my book and previous article on using REAL business requirements to avoid requirements creep.

Traditional testing, which I refer to as development or technical testing, is intended to demonstrate that the product indeed conforms to its design. It is necessary but not sufficient; the design is presumed to be responsive to the requirements.


Requirements-based testing also needs to demonstrate that the product is not only built as designed but in fact actually satisfies the REAL business requirements that provide value. Therefore, it's generally considered helpful to trace the requirements to the tests that demonstrate those requirements have been met.

However, typical traceability matrices start with the product requirements. To be truly effective, one should start with the business requirement whats, tracing them to the high-level design product requirement hows, and in turn tracing the product requirements to the detailed design, the implementation components, and the tests of the implemented product. Effective user acceptance testing is intended to demonstrate that the product as designed and created in fact satisfies the REAL business requirements, which therefore need to be traced directly to the user acceptance tests that demonstrate they've been met.

Commercially available automated tools can reduce some of the effort involved in creating and maintaining traceability matrices. A caution, though: Business requirements are hierarchical and need to be driven down to detail, usually to a far greater extent than people are accustomed to. The resulting large numbers of individually identified, itemized business requirements and corresponding detailed product requirements and tests may make maintaining full traceability impractical. Thus, at least initially, it may be advisable to keep the traceability matrix high-level.
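
As a rough sketch of that direction of tracing, the structure below starts from a business requirement what and hangs product requirement hows and their tests off it. All identifiers and wording are hypothetical.

```python
# A hedged sketch of hierarchical traceability: business requirement whats
# trace to product requirement hows, which trace to demonstrating tests.
from dataclasses import dataclass, field


@dataclass
class ProductRequirement:            # high-level design "how"
    req_id: str
    text: str
    tests: list = field(default_factory=list)   # IDs of demonstrating tests


@dataclass
class BusinessRequirement:           # REAL business requirement "what"
    req_id: str
    text: str
    product_reqs: list = field(default_factory=list)


biz = BusinessRequirement("BR-1", "Reduce order-entry errors by half")
biz.product_reqs.append(
    ProductRequirement("PR-1.1", "Validate part numbers on entry",
                       tests=["TC-1.1.1", "TC-1.1.2"]))
biz.product_reqs.append(
    ProductRequirement("PR-1.2", "Require confirmation of large orders"))

# Flag product requirements (and thus business value) with no tests behind them.
for pr in biz.product_reqs:
    if not pr.tests:
        print(f"{biz.req_id} -> {pr.req_id}: no demonstrating tests yet")
```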

Clarity and testability


To improve poorly defined requirements, a number of advocates consider requirements-based software testing to include reviews of the requirements' clarity and testability. A requirement lacks testability when one cannot create tests to demonstrate that the requirement has been met, and the most common reason a requirement is not testable is that it is not sufficiently clear. Unclear/untestable requirements are likely to be implemented incorrectly; regardless, testers have no way to tell.

Widely held conventional wisdom is that unclear/untestable (product) requirements are the main cause of creep -- expensive changes to requirements which had supposedly been settled upon.


In fact, much of creep occurs because the product requirements, regardless of how clear and testable they are, don't meet the REAL business requirements. The main reason is that the REAL business requirements usually have not been defined adequately, mainly because people think the product requirements are the requirements. Moreover, a requirement can be clear and testable but wrong, and clarity/testability is irrelevant for overlooked requirements.

Typical requirements reviews tend to use one or two techniques that are unlikely to detect wrong and overlooked requirements. In contrast, the proactive testing methodology uses more than 21 techniques to review requirements, including many more powerful techniques that indeed can detect incorrect and overlooked requirements. Similarly, more than 15 powerful techniques are used to review designs. In addition, proactive testing has ways to more fully define use case test conditions and ways to make seemingly untestable requirements testable.

By being aware of the issues with requirements-based testing as traditionally advocated, we can apply powerful improved review and test design methods to truly enhance the effectiveness of these most important tests.

Critics of requirements-based testing


A key requirements-based testing issue is that some prominent voices within the testing community deride it, often loudly and with great emotion. They say that the rapid pace of constantly changing business and technology makes it essentially impossible to define requirements, and that trying to test based on defined requirements is therefore a waste of time. Instead of spending time trying to define the requirements, they say, just go code and run tests.

Such rationale can be very appealing, especially in organizations that seem to have lots of requirements changes. I'll caution, though, that this reasoning has numerous pitfalls.

First, it creates a self-fulfilling prophecy that requirements cannot be defined adequately.


Ironically, both developers and testers often welcome this opportunity to get busy, even though they both know from repeated painful experience that their biggest source of mistakes and rework is inadequately defined requirements, which this approach assures.

Second, foes of requirements-based testing often speak monolithically, as though the only alternative to their favored just-go-code-and-test approach is interminable analysis paralysis in a mindless, inflexible, exaggerated "waterfall" chasing the impossible task of getting every possible requirement defined perfectly. I'd contend that such blaming of the methodology is usually the excuse for inept development, not the cause of it. Capable developers don't follow any methodology so slavishly. Reasonableness, not perfection, is the real practical standard.

Third, without requirements, what exactly do developers develop and testers test? They can only be working on guesses, especially the testers. Without a requirements context, simply exercising the developed code may reveal problems, but it's illusory to think these are the most important problems. Moreover, by definition they are only being found at the tail end, when they are hardest to fix.

Fourth, this approach assumes that the only reason they can't get the requirements reasonably right is that it must be impossible. A more probable reason is that they don't know how to discover requirements very well. Instead of calling their weakness a virtue and using it as the excuse to "just go code and test," perhaps they should consider learning how to better discover requirements.

Fifth, this approach invariably is predicated upon thinking that the product design is the requirements. Iterative development guesses about, builds, and repeatedly revises products. Product requirements change rapidly when the REAL business requirements the product must satisfy haven't been defined adequately; the REAL business requirements themselves don't change nearly so much as the awareness of them. Tests that demonstrate that the developed product meets the REAL business requirements are essential and the most important tests. Denying their possibility, let alone their primary importance, is illusory and ill-advised.


About the author: Robin F. Goldsmith, JD, has been president of consultancy Go Pro Management Inc. since 1982. He works directly with and trains business and systems professionals in requirements analysis, quality and testing, software acquisition, project management and leadership, metrics, process improvement and ROI. Robin is the author of the Proactive Testing and REAL ROI methodologies and also the recent book Discovering REAL Business Requirements for Software Project Success.


Exploratory testing: Tips and techniques for perfecting software quality


By David W. Johnson

Exploratory testing is widely used and often misunderstood. Managing exploratory testing requires an overall team effort, as well as a careful selection of the best testers for in-depth discovery and publication of defects. When used correctly, exploratory testing can yield meaningful returns in the quest for the best possible software quality.

How to manage exploratory testing


Managing team members who engage in exploratory testing is, in some ways, no different than managing any other testing activity. You can lead by example -- by participating in the exploratory testing effort -- or you can manage by delegating responsibility. What becomes more problematic during exploratory testing is how to manage what is being tested, how thorough the testing is, what defects are detected, and how much risk remains within the application space in terms of existing and undiscovered issues.

Agile development practices provide some insight and guidance into managing an exploratory testing effort. In both cases the activity is more adaptive than predictive, so any management techniques applied must be consistent and people-centric. In terms of testing status, a periodic team standup meeting seems to work best. The scheduling or frequency of these meetings should depend on the velocity of the ongoing testing effort: the greater the effort, the more frequent the meetings. These standups should be supplemented by meeting minutes and, when required, formal status reports.

To track what is being tested and by whom, the starting point should be a simple functional decomposition of the system, leveraged as a testing checklist. If required, this can be developed into a simple matrix that tracks testing progress against functional areas of the application space.


This can become a more formal traceability matrix by including a more specific inventory of the testing intent within each functional area -- a list of test case names. The danger is that the exploratory testing effort begins to move from being adaptive to predictive, and therefore becomes less exploratory. Depending on the current needs of the testing organization, this may or may not be appropriate, but it is important to be cognizant of what is actually occurring within the testing process.
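
Here is a minimal sketch of that kind of tracking matrix, with the functional decomposition used as a checklist. The functional areas, tester names, and statuses are all hypothetical.

```python
# A hedged sketch of a functional-decomposition checklist for tracking an
# exploratory testing effort. All areas, names, and statuses are made up.
coverage = {
    "Login / authentication": {"tester": "DJ",    "status": "done",        "defects": 2},
    "Order entry":            {"tester": "Priya", "status": "in progress", "defects": 5},
    "Reporting":              {"tester": None,    "status": "not started", "defects": 0},
}

# A standup-ready summary: where we've been, what we found, what's untouched.
for area, row in coverage.items():
    owner = row["tester"] or "(unassigned)"
    print(f"{area:<24} {owner:<12} {row['status']:<12} defects: {row['defects']}")

unexplored = [area for area, row in coverage.items() if row["status"] == "not started"]
print("unexplored areas:", ", ".join(unexplored) or "none")
```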

Who should perform exploratory testing


Informal exploratory testing should be performed by everyone involved in the deployment of the application, including business analysts, developers, testers and, finally, end users. Exploratory testing is an excellent way to learn both the strengths and weaknesses of the application space and the business needs being addressed by the application. The larger the community exploring and exercising the application space, the more likely that defects and issues will be discovered before the application reaches production.

Formal exploratory testing, which measures the production readiness of the application, should be performed by skilled context-driven testers. Exploratory testing is a creative, context-driven activity that should be focused on finding defects and issues within the application and then publishing all detected defects. Resources engaged in this type of formal exploratory testing need to be skilled testers, but more importantly, they need to be experts at the craft of testing.

How do you identify the skilled testers within your organization who are potential candidates to lead exploratory testing? It requires a combination of skill, experience, aptitude and passion for testing. Key characteristics of an effective exploratory tester are:

- Passion for testing: An exploratory tester must have a passion for the art and science of testing, because any rigor applied during this type of testing comes from within the tester.


- An inquiring mind: An exploratory tester must think outside the box and approach the challenge of testing from all angles (business, development, technology, governance, etc.).

- A skilled test designer: An exploratory tester must be an effective test designer, someone who systematically explores and attempts to break the application space.

- A skilled observer: An exploratory tester must notice not only obvious defects within the application space but also the more subtle variances in behavior that could signal a potential defect. They must function as an investigative reporter.

- A skilled toolsmith: An exploratory tester must be a skilled toolsmith, a tester who creates collections of tools to expedite the investigative process.

- An excellent communicator: To communicate findings, an exploratory tester must be an effective communicator. Since there is no test script, the only guidance that explains the nature of a detected defect and how to replicate it comes directly from the tester. A tester must also be able to capture the intent of an exploratory test as a more structured or scripted test case when exploratory testing is being leveraged to create those kinds of artifacts.


What to leverage from exploratory testing


I stated earlier that exploratory testing is part of a testing continuum that stretches from very informal exploratory testing to formal testing scripts that treat test cases as intellectual property, created, executed and maintained to a specific standard. Most testing engagements -- certainly larger, more complex engagements -- call for a mixture of testing approaches, leveraging the strengths of each to harvest the most value from the testing effort.

Let us assume we have access to all the resources required to take full advantage of each. What could the testing landscape look like? Assuming there is little or no existing documentation on the application space and we are responsible for both functional and system testing, what are our options?

Begin by performing informal exploratory testing of the application space. During this process, capture the basic functionality of the application by formulating a functional decomposition of the application space within a test management tool or spreadsheet. This provides a definitive target to begin working against. Then verify this functional decomposition with the business, development, and production support teams, and make appropriate adjustments.

Now use the functional decomposition as a checklist to assign skilled testing resources to particular areas of the application. During their first sweep through the application space, the testers should focus on four primary goals: learning the application space, capturing test case names for future reference, detecting and publishing defects, and confirming the functional decomposition of the application space.

Once the first formal exploratory testing sweep is complete, size the ongoing testing effort. Based on this information, determine what additional testing processes will be required to meet testing velocity and software quality goals. Let us assume that this will be an ongoing testing effort involving a large and complex application space under tight time constraints, which is becoming the norm within the world of software development. What additional tools and techniques should we use?


Leveraging the information gained from ongoing formal exploratory testing can help create an itemized list of test cases that should be captured as formal testing scripts and automated using keyword-based test case design and automation techniques. The test case inventory to be formally scripted and automated should be based on two key selection criteria: application risk and time to test. The concept of application risk is probably more familiar than time to test, which simply refers to the resource hours required to perform the test. When using a mixture of testing approaches, this should be one of the main criteria for automation: How do I free up my skilled testers to perform more exploratory testing?

The key here is not to dispose of one testing technique or approach in favor of another. The key is to select the mixture of tools, techniques, and skills that creates the greatest opportunity to reduce production issues and improve the overall quality of the product. For example, I would never recommend that all testing be automated. There is always room for improved testing coverage, and exploratory testing is one of the best ways to discover these opportunities.
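
The sketch below illustrates both ideas under stated assumptions: a keyword-based test is just a list of (keyword, arguments) rows dispatched by a tiny interpreter, and automation candidates are ranked by the product of application risk and time to test. The keywords, test cases, and scores are all hypothetical.

```python
# A hedged sketch of keyword-based test design plus risk/time-to-test
# selection for automation. Everything named here is hypothetical.

def open_page(url):            print(f"open {url}")
def enter_text(field, value):  print(f"type {value!r} into {field}")
def click(button):             print(f"click {button}")

KEYWORDS = {"open_page": open_page, "enter_text": enter_text, "click": click}

# A scripted test composed from reusable keywords rather than code.
login_test = [
    ("open_page",  ["https://app.example.test/login"]),
    ("enter_text", ["username", "dj"]),
    ("enter_text", ["password", "secret"]),
    ("click",      ["Sign in"]),
]

for keyword, args in login_test:
    KEYWORDS[keyword](*args)   # the interpreter dispatches each row

# Rank automation candidates: high risk x high manual cost first, which
# frees skilled testers for more exploratory testing.
candidates = [   # (test case, application risk 1-5, manual hours per run)
    ("login", 5, 0.5), ("order entry", 4, 2.0), ("report export", 2, 0.25),
]
for name, risk, hours in sorted(candidates, key=lambda c: c[1] * c[2], reverse=True):
    print(f"automate {name}: priority {risk * hours:.2f}")
```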

Final testing tips


Exploratory testing is an iterative process of learning, test design, and test execution. It is a context-driven, adaptive approach to software testing. When performed by skilled testers it will detect a substantial number of defects in a short period of time -- arguably more than any other testing approach, especially during the first couple of rounds of testing.

For larger, complex testing engagements, exploratory testing alone is often not enough, and it certainly becomes less cost-efficient over an extended period of time. This is when other testing approaches and techniques can be brought in, leveraging testing resources, automation tools, management tools, and other technologies to reduce the weight of the testing process. If testing is a continuum, then exploratory testing is the leading edge of that continuum: a highly productive testing effort that helps ensure overall product quality.


About the author: David W. Johnson (DJ) is a senior test architect with more than 23 years of experience in information technology across several industries. He has played key roles in business needs analysis, software design, software development, testing, training, implementation, organizational assessments and support of business solutions. Johnson has developed specific expertise over the past 15 years on implementing "test ware," including test strategies, test planning, test automation (functional and performance) and test management solutions.


Streamlining test planning and design


By Robin F. Goldsmith

Many software quality assurance (QA) and testing people, including some industry leaders, vociferously denounce test planning and design as worthless busywork that generates voluminous documentation pretty much for its own sake. They say that time spent writing plans reduces the time available to run tests, so it's better to skip test planning and design and just run tests.

Instead of "throwing the baby out with the bath water" by removing test planning and design entirely, consider the possibility that it can be valuable if you avoid some of the misguided practices that diminish its worth. In this two-part tip, I identify two of those bad practices. In this installment, I look at the situation in which test cases are confused with test planning and design. In part two, I explore the drudgery of overwriting test cases.

Are your test plans really just test cases?

In my seminars, I often informally survey participants about what their organizations mean by "test plan." Usually, half describe their test plans as the set of test cases that they will execute. While it's indeed valuable to have a set of test cases in hand, test plans that are only test cases miss the considerable additional benefits of actually planning the testing.

A test plan should be the project plan for the testing project. Testing can be more effective when it's treated as a project, which in turn is a sub-project within the overall development project. Treating testing as a project provides the opportunity to take advantage of the various project management techniques that have been found helpful in increasing the likelihood of success. It's generally recognized that planning is the highest-payback project success technique.


A project plan does the following:

- Identifies what needs to be done, how it can be done, and what it will take in terms of resources, effort, and duration;
- Describes the tasks, sequences, and dependencies required;
- Sets the timing needed to accomplish the work and mitigate potential risks.

The plan helps assure that you have what you need when you need it, and it guides execution to make sure you've done what is needed, while identifying when things go off track and indicating needed adjustments. By serving these purposes, a test plan helps the testing project succeed.

It's the thought process that's important, not tons of verbiage. One writes down the test plan so the thinking is not lost or forgotten and so it can be shared, reviewed, and refined. A written test plan facilitates scheduling resources and carrying out intended tasks without missing any or performing them out of sequence. The plan serves as a record of what's been done, both for ascertaining project status and for suggesting improvements that can be implemented during the current project or on subsequent ones. True agility involves writing no more than is helpful, but no less!
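
To make "tasks, sequences and dependencies" concrete, here is a minimal sketch of a test plan treated as a project plan: tasks with prerequisites and effort estimates, put into an order in which nothing runs before what it depends on. The tasks and estimates are hypothetical.

```python
# A hedged sketch of a test plan as a project plan: hypothetical tasks with
# dependencies and effort, put into a workable execution order.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

tasks = {  # task: (prerequisites, effort in days)
    "define test environment": ((), 2),
    "design test cases":       ((), 5),
    "build test data":         (("design test cases",), 3),
    "execute system tests":    (("define test environment", "build test data"), 10),
}

schedule = TopologicalSorter({task: deps for task, (deps, _) in tasks.items()})
total = 0
for task in schedule.static_order():   # dependency-respecting order
    effort = tasks[task][1]
    total += effort
    print(f"{task:<26} {effort:>2} days")
print(f"total effort: {total} days")
```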


IEEE Std. 829-2008 is a standard for test documentation that clearly distinguishes between test plans and test cases. This is a somewhat controversial standard, because many people interpret it as a dictate for generating documentation. Instead, I use it to organize my thinking, making the writing incidental and just enough to be helpful. The standard suggests using four levels of test planning and design documents.

We start with a Master Test Plan, which is the project plan for the testing project. It identifies the set of Detailed Test Plans which, taken together, must be demonstrated to be confident that the overall testing project works. Detailed Test Plans also are project plans, but for smaller sub-projects within the overall testing project: unit tests, integration tests, special tests -- such as usability, performance, and security -- system tests, independent QA tests and user acceptance tests.

Detailed Test Plans indicate the set of features, functions, and capabilities that, taken together, must be demonstrated to be confident that the respective unit, integration, etc. detailed tests work. For each such feature, function and capability, we can have a Test Design Specification. This is a little-known but valuable technique for identifying and dealing economically with the sets of executable Test Cases that, taken together, must be demonstrated to show that the Test Design Specification works.

It's understandable why many people misinterpret the standard as dictating a huge amount of documentation. At first glance, the above description does sound like a lot of busywork. In fact, it's just the opposite. When used appropriately, the structure enables more efficient and effective testing, wherein important test conditions are less likely to be overlooked, forgotten, or skipped. The key to using the standard's structure is to realize that each level is largely a list of items at the next level down. For example, identifying the needed set of detailed tests in the Master Test Plan doesn't take much time but pays off hugely by revealing big risks that otherwise often are missed.
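
Here is one way to picture that four-level structure, sketched as nested lists; the plan names and test cases are hypothetical. The deliberately unelaborated branches show how the upper levels can stay cheap lists until risk analysis justifies more detail.

```python
# A hedged sketch of the four IEEE 829-style levels, where each level is
# largely a list of items at the next level down. All names are hypothetical.
master_test_plan = {                 # level 1: Master Test Plan
    "Unit tests":        None,       # level 2: Detailed Test Plans
    "Integration tests": None,
    "System tests": {
        "Order entry": [             # level 3: a Test Design Specification
            "TC: valid order accepted",           # level 4: Test Cases
            "TC: invalid part number rejected",
        ],
        "Reporting": None,           # lower risk -- not yet elaborated
    },
    "User acceptance tests": None,
}


def count_cases(node):
    """Count executable Test Cases under a node; placeholders count zero."""
    if isinstance(node, dict):
        return sum(count_cases(child) for child in node.values())
    return len(node) if node else 0


print("executable test cases so far:", count_cases(master_test_plan))
```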


Then, time is mainly devoted to those Detailed Test Plans which risk analysis indicates are the highest priority. These in turn are largely lists of Test Design Specifications, which are themselves largely lists of Test Cases; and time is spent, respectively, on only those that deal with the highest risks. The net effect is that the structure's minimal but sufficient documentation enables identifying the set of most important Test Cases far more thoroughly, reliably, and quickly before getting into the more time-consuming work of creating the executable Test Cases.

Effective test planning and design is like plotting one's route for a trip; it's a small amount of added effort that helps you get to your desired destination with the least time, effort, and aggravation.

About the author: Robin F. Goldsmith, JD, has been president of consultancy Go Pro Management Inc. since 1982. He works directly with and trains business and systems professionals in requirements analysis, quality and testing, software acquisition, project management and leadership, metrics, process improvement and ROI. Robin is the author of the Proactive Testing and (with ProveIT.net) REAL ROI methodologies and also the recent book Discovering REAL Business Requirements for Software Project Success.


Resources from IBM

Solution brief: From strategy to solutions: Enterprise Architecture Management in action

White paper: Creating business value by dynamically connecting business and technology with enterprise architecture: Issues and challenges for coordinating and managing business change and transformation

White paper: Turning strategy into success - Aligning organizations to create better business results

About IBM
At IBM, we strive to lead in the creation, development and manufacture of the industry's most advanced information technologies, including computer systems, software, networking systems, storage devices and microelectronics. We translate these advanced technologies into value for our customers through our professional solutions and services businesses worldwide. http://www.ibm.com
