
Software Testing:

Software Testing is the process of exercising software with the intent of ensuring that it meets its requirements and user expectations and does not fail in an unacceptable manner. The IEEE/ANSI (1990) definition is: the process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

What is a bug/error?
A bug or error in a software product is any exception that can hinder the functionality of either the whole software or a part of it.
What is a Bug Life Cycle?
The duration or time span between the first time a bug is found (status: New) and the time it is closed successfully (status: Closed), rejected, postponed or deferred is called the Bug/Error Life Cycle. From the first time a bug is detected until it is fixed and closed, it is assigned various statuses: New, Assigned, Open, Fixed, Pending Retest, Retest, Reopen, Pending Reject, Rejected, Postponed, Deferred, and Closed. (For more information about the statuses used during a bug life cycle, refer to the article "Software Testing Bug & Statuses Used during a Bug Life Cycle".)

[Bug life cycle diagram showing the states New, Open, Resolved and Closed, with Reopen and Not a Bug transitions]

Statuses associated with a bug:

New: When a bug is found/revealed for the first time, the software tester communicates it to his/her team leader (Test Leader) in order to confirm whether it is a valid bug. After getting confirmation from the Test Lead, the software tester logs the bug and the status New is assigned to it.

Assigned: After the bug is reported as New, it comes to the development team. The development team verifies whether the bug is valid. If the bug is valid, the development leader assigns it to a developer to fix it, and the status Assigned is given to it.

Open: Once the developer starts working on the bug, he/she changes the status of the bug to Open to indicate that he/she is working on a solution.

Fixed: Once the developer makes the necessary changes in the code and verifies the code, he/she marks the bug as Fixed and passes it over to the Development Lead in order to pass it to the testing team.

Pending Retest: After the bug is fixed, it is passed back to the testing team to be retested, and the status Pending Retest is assigned to it.

Retest: The testing team leader changes the status of the bug from Pending Retest to Retest and assigns it to a tester for retesting.

Closed: After the bug is assigned the status Retest, it is tested again. If the problem is solved, the tester closes it and marks it with the Closed status.

Reopen: If, after retesting the software for the reported bug, the system behaves in the same way or the same bug arises once again, the tester reopens the bug and sends it back to the developer, marking its status as Reopen.

Pending Reject: If the developers consider that a particular behavior of the system, which the tester reports as a bug, is intended behavior and that the bug is invalid, the bug is rejected and marked as Pending Reject.

Rejected: If the Testing Leader finds that the system is working according to the specifications, or that the bug is invalid as per the explanation from the development team, he/she rejects the bug and marks its status as Rejected.

Postponed: Sometimes, fixing or testing a particular bug has to be postponed for an indefinite period. This situation may occur for many reasons, such as unavailability of test data or unavailability of a particular functionality. At that time, the bug is marked with the Postponed status.

Deferred: In some cases a particular bug is of little importance and its fix can be deferred; at that time it is marked with the Deferred status.
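The life cycle described above is essentially a small state machine. The sketch below is a minimal Python illustration, assuming a transition table inferred from the status descriptions above rather than taken from any particular defect-tracking tool; the names and allowed transitions are assumptions for illustration only.

    from enum import Enum

    class BugStatus(Enum):
        NEW = "New"
        ASSIGNED = "Assigned"
        OPEN = "Open"
        FIXED = "Fixed"
        PENDING_RETEST = "Pending Retest"
        RETEST = "Retest"
        CLOSED = "Closed"
        REOPEN = "Reopen"
        PENDING_REJECT = "Pending Reject"
        REJECTED = "Rejected"
        POSTPONED = "Postponed"
        DEFERRED = "Deferred"

    # Allowed transitions, inferred from the descriptions above (an assumption, not a standard).
    ALLOWED_TRANSITIONS = {
        BugStatus.NEW: {BugStatus.ASSIGNED, BugStatus.PENDING_REJECT, BugStatus.POSTPONED, BugStatus.DEFERRED},
        BugStatus.ASSIGNED: {BugStatus.OPEN},
        BugStatus.OPEN: {BugStatus.FIXED, BugStatus.PENDING_REJECT, BugStatus.POSTPONED},
        BugStatus.FIXED: {BugStatus.PENDING_RETEST},
        BugStatus.PENDING_RETEST: {BugStatus.RETEST},
        BugStatus.RETEST: {BugStatus.CLOSED, BugStatus.REOPEN},
        BugStatus.REOPEN: {BugStatus.ASSIGNED},
        BugStatus.PENDING_REJECT: {BugStatus.REJECTED, BugStatus.REOPEN},
        BugStatus.REJECTED: set(),
        BugStatus.POSTPONED: {BugStatus.OPEN},
        BugStatus.DEFERRED: {BugStatus.OPEN},
        BugStatus.CLOSED: {BugStatus.REOPEN},
    }

    def change_status(current: BugStatus, new: BugStatus) -> BugStatus:
        """Return the new status if the transition is allowed, otherwise raise ValueError."""
        if new not in ALLOWED_TRANSITIONS[current]:
            raise ValueError(f"Illegal transition: {current.value} -> {new.value}")
        return new

    # Example: a bug that is assigned, fixed, retested and closed.
    status = BugStatus.NEW
    for nxt in (BugStatus.ASSIGNED, BugStatus.OPEN, BugStatus.FIXED,
                BugStatus.PENDING_RETEST, BugStatus.RETEST, BugStatus.CLOSED):
        status = change_status(status, nxt)
    print(status.value)  # Closed

Real defect-tracking tools enforce a similar rule set when a tester or developer tries to move a bug from one status to another.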

Steps in STLC:
Requirements
- Requirements gathering
- Specification document analysis
- Document reviews

Design & Planning
- Scope identification
- Testing type identification
- Mapping of test cases to requirements
- Creation of the test plan
- Writing test cases

Execution
- Setup of the test environment
- Build creation
- Sanity / smoke test (see the sketch after this list)
- Execution of the respective test cases

Defect Tracking
- Defect logging
- Defect management

Reporting
- Test matrix
- Preparation of test results and test summary documents
- Reports and graphs
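As a rough illustration of the sanity/smoke step referenced above, the sketch below uses Python with pytest; the myapp package and its health_check function are hypothetical names used only for this example. A smoke test checks only that the build installs and its most basic functions respond before the detailed test cases are executed.

    # smoke_test.py - minimal build verification (hypothetical 'myapp' package).
    import pytest

    myapp = pytest.importorskip("myapp")  # skip this module entirely if the build did not install

    def test_application_imports():
        # The package is assumed to expose a version string.
        assert hasattr(myapp, "__version__")

    def test_core_function_responds():
        # One representative call; deeper functional coverage comes later in the cycle.
        assert myapp.health_check() == "ok"  # hypothetical API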

General Idea Steps in STLC:


Project Initiation -> System Study -> Test Plan -> Test Case Design -> Test Automation -> Execute Test Cases -> Report Defects -> Regression Test -> Bugs Analysis -> Summary Report

Defect Classification
The following are the defect priorities, defined according to their precedence:

1. P1 - Further development and/or testing cannot occur until the defect has been repaired. The system cannot be used, or it is not advisable to use it, until the repair has been effected. The defect must be resolved as soon as possible because it is impairing development and/or testing activities, or it is violating an important business constraint (implementation contrary to requirements). System use will be severely affected, or will be insecure for the user, until the defect is fixed.

2. P2 - The defect should be resolved in the normal course of development activities. It can wait until a new build or version is created. The defect does not result in a failure, but causes the system to produce incorrect, incomplete, or inconsistent results, or the defect impairs the system's usability.

3. P3 - The defect is an irritant which should be repaired, but which can be repaired after more serious defects have been fixed. The defect does not cause a failure, does not impair usability, and the desired processing results are easily obtained by working around the defect.

The following are the defect severities, defined according to their precedence:

1. Causes Crash - Defect leads to crashing of the system.
2. Critical - System cannot work if the defect is not fixed.
3. Major - Defect has a large impact on the working of the system.
4. Minor - Defect does not largely affect the working of the system.
5. Enhancement - These are features requested for improving the current system.
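As a small illustration of how these classifications might be recorded, the sketch below uses Python; the Defect class and its fields are hypothetical and not taken from any specific defect-tracking tool.

    from dataclasses import dataclass
    from enum import IntEnum

    class Priority(IntEnum):
        # Lower number = higher precedence, as in the priority table above.
        P1 = 1  # blocks development/testing; resolve as soon as possible
        P2 = 2  # resolve in the normal course of development
        P3 = 3  # irritant; fix after more serious defects

    class Severity(IntEnum):
        CAUSES_CRASH = 1  # defect leads to crashing of the system
        CRITICAL = 2      # system cannot work if the defect is not fixed
        MAJOR = 3         # large impact on the working of the system
        MINOR = 4         # does not largely affect the working of the system
        ENHANCEMENT = 5   # feature requested to improve the current system

    @dataclass
    class Defect:
        defect_id: str
        summary: str
        priority: Priority
        severity: Severity

    # Example: a crash found during system testing, logged with the highest priority.
    bug = Defect("DEF-101", "Application crashes on login", Priority.P1, Severity.CAUSES_CRASH)
    print(bug.priority.name, bug.severity.name)  # P1 CAUSES_CRASH

Keeping priority (business urgency) and severity (technical impact) as separate fields reflects the two separate scales above; a minor-severity defect can still carry a high priority.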

Types of testing:
The development process involves various types of testing. Each test type addresses a specific testing requirement. The most common types of testing involved in the development process are:

- Unit Test
- System Test
- Integration Test
- Functional Test
- Performance Test
- Beta Test
- Acceptance Test

Unit Test
The first test in the development process is the unit test. The source code is normally divided into modules, which in turn are divided into smaller units called units. These units have specific behavior. The test done on these units of code is called a unit test. Unit testing depends upon the language in which the project is developed. Unit tests ensure that each unique path of the project performs accurately to the documented specifications and contains clearly defined inputs and expected results.
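For illustration, a minimal unit test might look like the sketch below, written in Python with the standard unittest module; the add function being tested is a hypothetical unit, not part of any real project. Each test exercises the unit with clearly defined inputs and expected results.

    import unittest

    def add(a, b):
        # Hypothetical unit under test: a small function with specific behavior.
        return a + b

    class TestAdd(unittest.TestCase):
        def test_adds_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_adds_negative_numbers(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()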

System Test
Several modules constitute a project. If the project is a long-term project, several developers write the modules. Once all the modules are integrated, several errors may arise. The testing done at this stage is called a system test. System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points. It also covers testing a specific hardware/software installation; this is typically performed on a COTS (commercial off the shelf) system or any other system comprised of disparate parts where custom configurations and/or unique installations are the norm.

Functional Test
Functional testing can be defined as testing two or more modules together with the intent of finding defects, demonstrating that defects are not present, verifying that the module performs its intended functions as stated in the specification, and establishing confidence that a program does what it is supposed to do. (A short sketch appears after these definitions.)

Acceptance Testing
Testing the system with the intent of confirming readiness of the product and customer acceptance.

Ad Hoc Testing
Testing without a formal test plan or outside of a test plan. With some projects this type of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed. Ad hoc testing is sometimes referred to as exploratory testing.

Alpha Testing
Testing after the code is mostly complete or contains most of the functionality, and prior to users being involved. Sometimes a select group of users is involved. More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.
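As referenced under Functional Test above, the sketch below shows what such a test might look like in Python with pytest; the pricing and billing functions and the 10% tax rule are hypothetical and stand in for two real modules and their specification.

    import pytest

    def calculate_tax(amount):
        # Hypothetical pricing module: flat 10% tax for this sketch.
        return round(amount * 0.10, 2)

    def build_invoice(amount):
        # Hypothetical billing module: combines the amount with the tax from the pricing module.
        tax = calculate_tax(amount)
        return {"net": amount, "tax": tax, "total": round(amount + tax, 2)}

    def test_invoice_total_matches_specification():
        # Functional expectation from the (hypothetical) specification: total = net + tax.
        invoice = build_invoice(100.00)
        assert invoice["total"] == pytest.approx(110.00)

The two functions are exercised together, and the assertion comes from the stated specification rather than from the internals of either module.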

Automated Testing
Software testing that utilizes a variety of tools to automate the testing process, reducing the need for a person to test manually. Automated testing still requires a skilled quality assurance professional with knowledge of the automation tool and of the software being tested to set up the tests.

Beta Testing
Testing after the product is code complete. Betas are often widely distributed, or even distributed to the public at large, in the hope that users will buy the final product when it is released.

Black Box Testing
Testing software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document.

Compatibility Testing
Testing used to determine whether other system software components such as browsers, utilities, and competing software will conflict with the software being tested.

Configuration Testing
Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.

Independent Verification & Validation
The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The individual or group doing this work is not part of the group or organization that developed the software. The term is often applied to government work, or where the government regulates the products, as in medical devices.

Installation Testing
Testing with the intent of determining if the product will install on a variety of platforms and how easily it installs.

Integration Testing
Testing two or more modules or functions together with the intent of finding interface defects between the modules or functions. It is often completed as a part of unit or functional testing and sometimes becomes its own standalone test phase. On a larger level, integration testing can involve putting together groups of modules and functions with the goal of completing and verifying that the system meets the system requirements. (See system testing.)
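Following the Integration Testing definition above, the sketch below tests two units together, focusing on the interface between them; both functions are hypothetical, and the test runs with pytest or as a plain script.

    # Hypothetical module A: parses a raw record into a dictionary.
    def parse_record(raw):
        name, qty = raw.split(",")
        return {"name": name.strip(), "quantity": int(qty)}

    # Hypothetical module B: formats a parsed record for display.
    def format_record(record):
        return f"{record['name']} x {record['quantity']}"

    def test_parser_output_is_accepted_by_formatter():
        # Interface check: the dictionary produced by module A must satisfy module B's expectations.
        record = parse_record("widget, 3")
        assert format_record(record) == "widget x 3"

An interface defect, such as module A renaming the 'quantity' key, would be caught here even though each module might pass its own unit tests.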

Load Testing
Testing with the intent of determining how well the product handles competition for system resources. The competition may come in the form of network traffic, CPU utilization or memory allocation.

Stress Testing
Testing the hardware and software under extraordinary operating conditions to observe the negative results.

Performance Testing
Testing with the intent of determining how quickly a product handles a variety of events. Automated test tools geared specifically to test and fine-tune performance are used most often for this type of testing. (A small timing sketch appears after these definitions.)

Pilot Testing
Testing that involves the users just before actual release, to ensure that users become familiar with the release contents and ultimately accept it. It is often considered a move-to-production activity for ERP releases, or a beta test for commercial products. It typically involves many users, is conducted over a short period of time and is tightly controlled. (See beta testing.)

Regression Testing
Testing with the intent of determining whether bug fixes have been successful and have not created any new problems. This type of testing is also done to ensure that no degradation of baseline functionality has occurred.

Security Testing
Testing of database and network software in order to keep company data and resources secure from mistaken/accidental users, hackers, and other malevolent attackers.

Software Testing
The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The organization and management of the individuals or groups doing this work is not relevant. This term is often applied to commercial products such as internet applications. (Contrast with independent verification and validation.)
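As referenced under Performance Testing above, a very small timing check might look like the sketch below; the process_events function and the 0.5-second budget are hypothetical, and real performance testing would rely on dedicated tools and far more realistic workloads.

    import time

    def process_events(events):
        # Hypothetical unit whose speed is being measured.
        return [e.upper() for e in events]

    def test_processing_stays_within_budget():
        events = ["click"] * 10_000
        start = time.perf_counter()
        process_events(events)
        elapsed = time.perf_counter() - start
        # Hypothetical performance budget: 10,000 events in under 0.5 seconds.
        assert elapsed < 0.5, f"processing took {elapsed:.3f}s, budget is 0.5s"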

Stress Testing
Testing with the intent of determining how well a product performs when a load is placed on the system resources that nears and then exceeds capacity.

User Acceptance Testing
See Acceptance Testing.

White Box Testing
Testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose.

What's a 'test plan'?
A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the effort needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:

- Title
- Identification of software including version/release numbers
- Revision history of document including authors, dates, approvals
- Table of contents
- Purpose of document, intended audience
- Objective of testing effort
- Software product overview
- Relevant related document list, such as requirements, design documents, other test plans, etc.
- Relevant standards or legal requirements
- Traceability requirements
- Relevant naming conventions and identifier conventions
- Overall software project organization and personnel/contact-info/responsibilities
- Test organization and personnel/contact-info/responsibilities
- Assumptions and dependencies
- Project risk analysis
- Testing priorities and focus
- Scope and limitations of testing
- Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc., as applicable

- Outline of data input equivalence classes, boundary value analysis, error classes (see the sketch after this list)
- Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
- Test environment validity analysis - differences between the test and production systems and their impact on test validity
- Test environment setup and configuration issues
- Software migration processes
- Software CM processes
- Test data setup requirements
- Database setup requirements
- Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
- Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
- Test automation - justification and overview
- Test tools to be used, including versions, patches, etc.
- Test script/test code maintenance processes and version control
- Problem tracking and resolution - tools and processes
- Project test metrics to be used
- Reporting requirements and testing deliverables
- Software entrance and exit criteria
- Initial sanity testing period and criteria
- Test suspension and restart criteria
- Personnel allocation
- Personnel pre-training needs
- Test site/location
- Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
- Relevant proprietary, classified, security and licensing issues
- Open issues
- Appendix - glossary, acronyms, etc.
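As referenced in the first item of the list above, equivalence classes and boundary values are often captured directly as test data. The sketch below uses Python with pytest; the validate_age function and the 18-65 range are hypothetical. One representative is chosen for the valid class, plus the values on and just outside each boundary.

    import pytest

    def validate_age(age):
        # Hypothetical unit under test: accepts ages in the inclusive range 18-65.
        return 18 <= age <= 65

    @pytest.mark.parametrize("age, expected", [
        (17, False),  # just below the lower boundary (invalid class)
        (18, True),   # lower boundary
        (40, True),   # representative of the valid equivalence class
        (65, True),   # upper boundary
        (66, False),  # just above the upper boundary (invalid class)
    ])
    def test_age_boundaries_and_classes(age, expected):
        assert validate_age(age) is expected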

What's a 'test case'?

A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results. Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.
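The sketch below shows one lightweight way such particulars might be recorded in code form; it is Python, and the field names and the login scenario are hypothetical rather than a prescribed template.

    # A single test case captured as a plain dictionary.
    test_case = {
        "id": "TC-LOGIN-001",
        "name": "Valid user can log in",
        "objective": "Verify that a registered user reaches the dashboard after logging in",
        "preconditions": "User 'demo' exists and the application is running",
        "input_data": {"username": "demo", "password": "correct-password"},
        "steps": [
            "Open the login page",
            "Enter the username and password",
            "Click the 'Log in' button",
        ],
        "expected_result": "The dashboard page is displayed and shows 'Welcome, demo'",
    }

    for step_no, step in enumerate(test_case["steps"], start=1):
        print(f"{step_no}. {step}")
    print("Expected:", test_case["expected_result"])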

How can it be known when to stop testing?
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:

- Deadlines (release deadlines, testing deadlines, etc.)
- Test cases completed with a certain percentage passed
- Test budget depleted
- Coverage of code/functionality/requirements reaches a specified point
- Bug rate falls below a certain level
- Beta or alpha testing period ends
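Several of these factors are quantitative, so teams sometimes encode them as explicit exit criteria. The sketch below is a Python illustration only; all threshold values and metric names are hypothetical and would come from the project's own test plan.

    # Hypothetical exit criteria.
    EXIT_CRITERIA = {
        "pass_rate": 0.95,             # at least 95% of executed test cases passed
        "requirement_coverage": 0.90,  # at least 90% of requirements covered by tests
        "open_critical_bugs": 0,       # no open critical bugs
    }

    def ready_to_stop(metrics):
        """Return True if the collected metrics satisfy every exit criterion."""
        return (
            metrics["pass_rate"] >= EXIT_CRITERIA["pass_rate"]
            and metrics["requirement_coverage"] >= EXIT_CRITERIA["requirement_coverage"]
            and metrics["open_critical_bugs"] <= EXIT_CRITERIA["open_critical_bugs"]
        )

    current = {"pass_rate": 0.97, "requirement_coverage": 0.92, "open_critical_bugs": 1}
    print(ready_to_stop(current))  # False - one critical bug is still open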

What if there isn't enough time for thorough testing?
Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate for most software development projects. This requires judgment skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:

- Which functionality is most important to the project's intended purpose?
- Which functionality is most visible to the user?
- Which functionality has the largest safety impact?
- Which functionality has the largest financial impact on users?
- Which aspects of the application are most important to the customer?
- Which aspects of the application can be tested early in the development cycle?
- Which parts of the code are most complex, and thus most subject to errors?
- Which parts of the application were developed in rush or panic mode?
- Which aspects of similar/related previous projects caused problems?
- Which aspects of similar/related previous projects had large maintenance expenses?
- Which parts of the requirements and design are unclear or poorly thought out?
- What do the developers think are the highest-risk aspects of the application?
- What kinds of problems would cause the worst publicity?
- What kinds of problems would cause the most customer service complaints?
- What kinds of tests could easily cover multiple functionalities?
- Which tests will have the best high-risk-coverage to time-required ratio?
