
Software Development Life Cycle (SDLC)

SDLC is a problem-solving procedure to examine an information system and identify appropriate processes to improve it.
Four common SDLC models:

Waterfall Model
Incremental Waterfall Model
Prototyping Model
Spiral Model

Accenture Delivery Methods (ADM) is a standard approach or framework for delivering high-quality solutions and services to the client.

SDLC in Accenture includes:


Plan
Analyze
Design
Build
Test
Deploy

The ADM V-model is a systems development model designed to simplify the understanding of the complexity associated with developing systems.
Each phase in the ADM V-model has a complementary
Verification/Validation phase.

Four major phases of a standard project are:

Project Initiation
Project Planning
Project Execution
Project Closure

The phases of the Product Life Cycle are:

Idea or innovation
Development
Introduction or adaptation
Growth
Maturity
Decline

Introduction to Testing
The goal of testing is to ensure that the business system meets
the established functional and technical requirements.
The different types of software testing are (according to the
Accenture Testing Framework):
Unit testing - Tests individual units of the application.
Assembly testing - sometimes called String test. Ensures
that the interactions between the components function
correctly.
Product testing - sometimes called System test, as it tests
that all business requirements are met by the system.
The Application Product test validates that the
product requirements are met by each individual
application.
The Integration Product test is an end-to-end test of the product requirements across all applications and platforms. It can occur only after the Application Product test has been successfully completed.
Performance testing - Carried out to ensure that a release is capable of operating at the load levels specified in the business performance requirements and any agreed-on Service Level Agreements (SLAs); it covers load, stress, endurance/volume, and throughput testing.
Acceptance testing - Ensures that the users and
stakeholders are satisfied with the solution.
User Acceptance Testing (UAT) - Testing with the client.
Operational Acceptance Test - Focuses on the operations/support users, who perform a set of tests to verify that the system operates as intended. It verifies that the correct functionality, architecture, and procedures are defined and established to allow production support teams to run, maintain, and support the system in production in accordance with common practices, Standard Operating Procedures (SOPs), or defined SLAs, where applicable.
Deployment testing - Tests the production environment's readiness to handle the new system.
The Testing Framework helps the software development project
teams to plan and execute testing tasks.
Quality Management (QM) is a method for ensuring that all the
activities necessary to design, develop, and implement a product
or service are effective and efficient with respect to the system
and its performance.
There are four QM programs at Accenture:
Quality and Process Improvement (QPI) - Provides projects
with standard processes, tools, coaching, and training on
systems/software engineering and project management
disciplines.
Capability Maturity Model Integration (CMMI) - Provides best practices covering a product's life cycle, organized into process areas. It is used for continuous improvement of the development practices.
Delivery Excellence (DEX) - Establishes process standards, maintains process metrics and quality criteria, and provides key process education.
Software Quality Assurance (SQA) - Verifies that processes
and deliverables/work products comply with project
standards and procedures. Reviews support the delivery of
high-quality products and services.

Accenture Delivery Suite (ADS) Overview


Accenture Delivery Suite (ADS) is an integrated set of
components that increases the reliability, quality, speed, and
predictability of the end-to-end delivery work.

There are four components of ADS:

Accenture Delivery Methods (ADM) and ADM Estimators - provides projects with common starting points such as roles and activities. Projects can then customize to specifications without breaking the tight integration among ADS components. ADM Estimators enable the process of determining the probable cost or effort of something, using historical information, experience, judgement, practice, analysis, and other techniques.
Accenture Delivery Tools (ADT) - a set of standard industry-leading tools to support application development and maintenance work.
Accenture Delivery Architectures (ADA)
Accenture Delivery Metrics - a metric is the measurement of attributes that allows comparison or prediction of a process or a product. Use of Accenture Delivery Metrics enables engagements to better understand when execution is within normal operating tolerance.
ADM is a collection of Methods that cover management, strategy
and planning, development, and operations.
The Testing Framework helps the software development project
teams to plan and execute testing tasks.
Accenture Delivery Tools (ADT) are standard tools that automate
and simplify processes, tasks, and activities.
ADS Practices are common management and development
processes used so projects can perform consistently across all
workforces for multi-service, multi-location deals.
Accenture Delivery Metrics is a set of attributes that help
engagements improve delivery performance and support the
improvement of key delivery processes.

Requirements and Test Strategy


A requirement is a condition or capability that a system must meet or perform.

Good requirements are clear, concise, have a tangible benefit, and fully capture the customer's expectations.
Bad requirements may be suggestive, relative, ambiguous, or unrealistic.
The customer or their Domain Expert is the source of
requirements.
The Business Analyst manages and controls the requirements.
Requirements Traceability captures all functional and technical
requirements and can easily track whether requirements are
covered by specific work products.
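A requirements traceability matrix can be kept as a simple mapping from requirement IDs to the work products that cover them; a minimal Python sketch, with hypothetical requirement IDs, work products, and helper function:

```python
# Minimal sketch of a requirements traceability matrix.
# Requirement IDs and work products below are illustrative only.
traceability = {
    "REQ-001": ["Design-Spec-A", "Test-Condition-12"],
    "REQ-002": ["Design-Spec-B"],   # designed but not yet covered by a test
    "REQ-003": [],                  # not covered at all
}

def uncovered_requirements(matrix, keyword="Test"):
    """Return requirement IDs with no work product whose name contains `keyword`."""
    return [req for req, items in matrix.items()
            if not any(keyword in item for item in items)]

if __name__ == "__main__":
    print("Requirements without test coverage:",
          uncovered_requirements(traceability))
```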
The Test Strategy documents the overall strategy for testing the
application, technical architecture, and training and performance
support.
Accenture Requirements Engineering Suite (AcRES) - Created by Accenture Technology Labs, in conjunction with Accenture Delivery Tools (ADT) and the Requirements and Analysis Capability. Composed of three tools:
Accenture Requirements Analysis Tool (RAT): Increases the quality of requirements and reduces project rework by analyzing requirements text to help you bring it into alignment with Accenture best practices for clear requirements. It highlights potential trouble spots and provides suggestions on how risks can be addressed. The newest version also provides enhanced visualizations: Visio diagrams generated automatically to help you see the interactions depicted in your requirements text.
Requirements Glossary Assistant (RGA): Automates much
of the work of populating entity glossaries by scanning
requirements documents and suggesting potential entity
glossary entries automatically.
Requirements Documentation Assistant (RDA): Helps reduce
the effort to write requirements by making it easy to create
well-structured requirements the first time/every time. It
provides a dynamic wizard allowing users to focus on the
requirements content; it then automatically generates a
well-formed requirements sentence from that content.
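As a small illustration of generating a well-formed requirement sentence from structured content, in the spirit of the RDA wizard, a minimal Python sketch; the sentence template and inputs are hypothetical and not RDA's actual format:

```python
# Hypothetical wizard-style inputs; the template is illustrative only.
def build_requirement(actor, action, obj, condition=None):
    """Assemble a well-structured requirement sentence from its parts."""
    sentence = f"The {actor} shall {action} {obj}"
    if condition:
        sentence += f" when {condition}"
    return sentence + "."

if __name__ == "__main__":
    print(build_requirement("billing system", "generate", "an invoice",
                            "an order is confirmed"))
    # -> The billing system shall generate an invoice when an order is confirmed.
```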

Testing Technique
In the ADM for Testing perspective, both Discrete Testing and Embedded Testing (testing within development projects) share the common disciplines Analyze Test, Design Test, Prepare Test, and Execute Test.
In the Analyze stage, the Test Team will Identify Application Test
Requirements, plan for Functional and Non-functional Testing,
Define the Test Environment and Data, and Confirm Test Analysis
Deliverables.
Requirements guide the design, building, and testing of the product, and are initially developed during the Solution Planning/Plan stage.
The Requirements and Traceability Work Product is used to maintain bi-directional traceability from the application requirement items back to the high-level business requirements and to the downstream analysis, design, build, and test components.
Product Testing is the first point where the solution's functional and business requirements are tested together.
Assembly Test ensures that related application components function properly when assembled.
When a Production system is updated with new functionality,
existing code must be tested to ensure the new functionality will
not adversely impact it. This is called a Regression Test.
UAT proves to the responsible business units that the application
performs as expected and meets all the customer and product
requirements included in the current release.
Non-Functional Testing is the testing of an application against its non-functional requirements.
Testing Work Products are created during Test Planning, Design, and Preparation. These work products are:
Test Approach (Analyze Test)
Test Scenarios (Design Test)
Test Conditions and Expected Results (Design Test)
Test Cycle Control Sheet (Design Test)
Test Scripts (Prepare Test)

Regression Testing Techniques


Retest-All Technique - All test cases are executed.
Regression Test Selection Technique - Only a selected subset of test cases is re-executed (see the sketch after this list).
Dataflow Technique - Only test cases that exercise the data flow (or data interactions) affected by the modifications are re-executed.
Test Case Prioritization Technique - Test cases are prioritized for re-execution based on a number of factors, such as criticality of the module, impact to the business, and modules with more user interactions.
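A minimal sketch of the Regression Test Selection Technique in Python; the test-to-module mapping and module names are hypothetical, and a real project would derive them from traceability or coverage data:

```python
# Hypothetical mapping of test cases to the modules they exercise.
test_coverage = {
    "TC-01": {"billing", "invoicing"},
    "TC-02": {"login"},
    "TC-03": {"billing"},
    "TC-04": {"reporting"},
}

def select_regression_tests(coverage, changed_modules):
    """Select only the test cases that touch a changed module."""
    return sorted(test for test, modules in coverage.items()
                  if modules & changed_modules)

if __name__ == "__main__":
    # Example: only the billing module changed in this release.
    print(select_regression_tests(test_coverage, {"billing"}))
    # -> ['TC-01', 'TC-03']
```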

Test Planning Overview


The purpose of the Design Test discipline is to create the test
conditions and expected results, test scenarios, and test cycle
control sheets for all of the major test streams.
Some of the most important testing techniques are:
Static
Dynamic
Positive
Negative
Black Box
White Box
Grey Box
Static Testing is also called Dry Run Testing.
Dynamic Testing requires the code to be executed.
Positive Testing is performed on the system by providing valid data as input.
Negative Testing is intended to make the test fail, for example by providing invalid input.
Black Box testing does not require the knowledge of the code or
internal structure.
Performance Engineering is the systematic process by which
performance of an application is ensured.
White Box Testing tests the structure of the code.
Grey Box Testing is a combination of Black Box and White Box
testing.
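As a small illustration of positive and negative tests written in a black-box style, a minimal sketch using Python's built-in unittest; the parse_age function and its behavior are hypothetical:

```python
import unittest

def parse_age(text):
    """Hypothetical function under test: convert text to an age in years."""
    value = int(text)            # raises ValueError for non-numeric input
    if value < 0 or value > 150:
        raise ValueError("age out of range")
    return value

class AgeTests(unittest.TestCase):
    def test_positive_valid_input(self):
        # Positive test: valid data is accepted and converted.
        self.assertEqual(parse_age("42"), 42)

    def test_negative_invalid_input(self):
        # Negative test: invalid data is expected to be rejected.
        with self.assertRaises(ValueError):
            parse_age("-5")

if __name__ == "__main__":
    unittest.main()
```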

Litmus is a test design artifact generation tool that semi-automates the test design phase.

Applied Statistics
Applied Statistics testing is a statistical approach to Test Design
that provides maximum coverage with a minimum number of test
scripts.
Applied Statistics provides three key benefits during the Test
Design phase:
Addresses common challenges in the Test Design process.
Provides a structured approach to Test Design.
Provides actionable information to determine the
appropriate amount of testing.
The Applied Statistics process involves the following key steps:
Define parameters and values.
Prepare and generate tests.
Analyze and adjust coverage.
Confirm expected results and scripts.
Review and sign-off.
Update test execution plan.
Two tools on the market that use Applied Statistics for test design
are:
Hexawise, a part of Accenture Delivery Tools (ADT):
Is a software Test Design tool.
Ensures all important combinations are tested by maximizing test coverage with a minimum number of test conditions.
AllPairs Testing or Pair-wise Testing (see the sketch after the Hexawise feature list below):
Tests combinations of the parameter value sets in pairs.
Is a method that finds all double-mode faults, that is, faults caused by two parameters conflicting with each other.
The following are the features of the Hexawise Test Design tool:
Creates test conditions quickly and easily.
Flexible to use on any kind of test project.
Generates the smallest possible number of tests to meet the coverage requirements.
Automatically maximizes the variation between tests and minimizes the repetition in tests.
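A minimal sketch of all-pairs (pair-wise) test generation in Python, using a simple greedy selection over the full Cartesian product; the parameters and values are hypothetical, and commercial tools typically use more sophisticated algorithms:

```python
from itertools import combinations, product

# Hypothetical parameters and values for a form under test.
parameters = {
    "browser": ["Chrome", "Firefox", "Edge"],
    "os": ["Windows", "macOS"],
    "payment": ["card", "invoice"],
}

def required_pairs(params):
    """Every pair of values drawn from two different parameters."""
    names = list(params)
    pairs = set()
    for a, b in combinations(names, 2):
        for va, vb in product(params[a], params[b]):
            pairs.add(((a, va), (b, vb)))
    return pairs

def pairs_in(test, names):
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

def allpairs(params):
    """Greedily pick tests from the full product until every pair is covered."""
    names = list(params)
    remaining = required_pairs(params)
    candidates = [dict(zip(names, values)) for values in product(*params.values())]
    selected = []
    while remaining:
        best = max(candidates, key=lambda t: len(pairs_in(t, names) & remaining))
        selected.append(best)
        remaining -= pairs_in(best, names)
    return selected

if __name__ == "__main__":
    tests = allpairs(parameters)
    print(f"{len(tests)} tests cover all pairs (the full product has 12):")
    for t in tests:
        print(t)
```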

Test Prep Overview


Test Preparation for the ADM Perspective involves three key
steps:
Prepare Test Scripts and Data:
Expand the Test Deliverables by defining Input Data
Refine the Expected Results of the Test Conditions
Create Test Scripts Deliverable from the refined TCERs
Prepare and Finalize Test Cycle Control Sheet (TCCS)
Prepare Test Environment:
Establish the Test Environment
Prove that the Test Environment is working before the
Test Execution
Manage Test:
Periodically measure the test progress
Periodically report the test progress
Document progress
Test scripts detail the exact steps a Tester must follow while
executing a test.
Test data is data that has been specifically identified for the purpose of testing a particular system or application being developed.
Application Product test (ADM task 4720) is designed to validate the functional requirements of the system's internal operations.
ADM task 4720 comprises four steps:
Prepare Application Product Test scripts.
Prepare Application Product Test script transaction data.
Prepare Application Product Test script expected results.
Perform peer review.
Integration Product Test is an end-to-end test of the product
requirements for all applications and platforms.
ADM task 4725 comprises four steps:
Prepare integration product test scripts.
Prepare integration product test script transaction data.
Prepare integration product test script expected results.
Perform peer review.
Performance test (ADM task 4730) addresses performance-related issues before the system goes live.
ADM task 4730 comprises four steps:
Prepare performance test scripts.
Prepare performance test script transaction data.
Prepare performance test script expected results.
Perform peer review.
User Acceptance test (UAT) is performed by the client.
UAT is generally taken up as the last phase of testing.
UAT (ADM task 4750) is devised to answer the important question: does the system enable the users to do their jobs?
ADM task 4750 has four steps:
Prepare user acceptance test scripts.
Prepare user acceptance test script transaction data.
Prepare user acceptance test script expected results.
Perform peer review.
Operational Acceptance Test (ADM task 4755) is meant to
validate the usability and operability of the systems involved in
the test.
ADM task 4755 has four steps:
Prepare operational acceptance test scripts.
Prepare operational acceptance test script transaction data.
Prepare operational acceptance test script expected results.
Perform peer review.
Automation (ADM task 4760) involves designing automation frameworks and components based on the automation test scripts.
ADM task 4760 has the following steps:
Create test automation scripts.
Update transaction data and common test data.
Perform peer review.
ADM Task 4791: Review Test Script Deliverables involves reviewing the deliverables with the various stakeholder groups.

ADM Task 4793: Start Assembly Test to Verify Readiness involves loading the test scripts and test data into the test execution tools/environments.
ADM Task 4795: Finalize Changes and Revise Assembly Test
Estimates involves reviewing the work plan for the Assembly Test.
The purpose of ADM Task T4799: Transition Test Script
Deliverables is to transition the test preparation deliverables to
the Test Execution team.
Rational Functional Tester (RFT) is an object-oriented testing tool
that tests Java, HTML, VB.net, and Windows applications.
Quick Test Professional (QTP) is an advanced solution for
functional and regression test automation.

Peer Review
Verification and Validation are two important review methods.
Verification ensures that you build the product right.
Validation ensures that you build the right product.
Peer reviews:
Promote early detection and removal of defects from a
deliverable
Facilitate knowledge transfer
Provide an understanding of the content
Gain inputs from the reviewer
Encourage continuous improvement
Identify areas of improvement in overall processes
Support overall quality and solution delivery excellence
Reduce time and costs resulting from rework.
The roles involved in a peer review:
Project Manager
Deliverable Owner (Author)
Reviewer
Moderator (Optional).
The phases in the Peer Review process are:
Planning for the Peer Review
Scheduling the Peer Review
Preparing for the Peer Review
Conducting the Peer Review
Performing Rework
Reviewing Changes
Reviewing and Submitting Team Results
Analyzing Project Results.
The Peer Review process is followed on all system development
lifecycle deliverables:
Requirements, Design, Build, Test, and Deploy.
Metrics are processed values that help to quantitatively evaluate
the efficiency of the Peer Review process.
Metrics are interpreted by comparing them with a set of baseline
targets.

Test Data and Environments
The Test Environment is a setup of the software and hardware on
which the Testing Team is going to test the newly built software
product or application.
Common test data provides control over the quality of testing,
which significantly improves the productivity of the testing effort.
Test Data Management helps define standards, tasks, ownership,
roles, and responsibilities to ensure the accuracy, completeness,
and integrity of test data.
The five areas covered in an effective Test Data Management
Approach are:
Data Identification - the data items which the project will
require in order to test the application and state sources of
this data and any dependencies which exist.
Data Acquisition - the methods used to acquire data, such
as data stubs, data mining, data extracts and data
conversion.
Data Conditioning - Processes which need to be followed on the acquired data, such as scrubbing, modification, or transformation (see the sketch after this list).
Data Population - The processes to define how the data will be populated to test environments (for example, will it be deployed with a build). Consideration should be given to versions of data if multiple versions need to be active and deployable at one time.
Data Maintenance - Detail how this data model will be maintained and updated. The baseline recommendation is the project's standard defect management process.
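A minimal sketch of Data Conditioning, here simple scrubbing/masking of acquired data before it is populated into a test environment; the field names and masking rules are hypothetical:

```python
import hashlib

# Hypothetical records acquired from production via a data extract.
acquired_records = [
    {"customer_id": 1001, "name": "Jane Doe", "email": "jane@example.com"},
    {"customer_id": 1002, "name": "John Roe", "email": "john@example.com"},
]

def condition_record(record):
    """Scrub personally identifiable fields before loading into test."""
    masked = dict(record)
    masked["name"] = f"Customer {record['customer_id']}"
    # Replace the email with a deterministic but non-identifying value.
    digest = hashlib.sha256(record["email"].encode()).hexdigest()[:8]
    masked["email"] = f"user_{digest}@test.invalid"
    return masked

if __name__ == "__main__":
    for rec in acquired_records:
        print(condition_record(rec))
```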
A poorly managed test environment can impact timelines,
quality, and budget.
When dealing with Test Data and Environments, the Test Strategy determines:
Which environments are available.
Environment configurations and test data management.
Which environments are dedicated or shared.
The approach to managing test data and environments.
Poor test data management compromises the efficiency of an enterprise's testing process in many ways.
Benefits of Test Data Management Tools:
Test Data Management tools enable standardized and easy
creation of test data.
Users can easily view and edit data in a familiar
environment.
Test data created can be checked to make sure all table
links are valid.
Production data can be reused and duplicated many times.
Test cases can be created to test invalid data.

Test Execution Overview


To execute assembly test, follow the steps below:
Execute Assembly Test
Manage Defects
Re-test Fixes
Create Test Closure Memo
Ensure that entry criteria have been met to start any test
execution.

The Test Closure Memo summarizes the testing activities.
A defect is any deviation from the end client's requirements that manifests during a project.
Defect Management is the process of capturing, reviewing, assigning, fixing, deferring, rejecting, and closing defects throughout the system's life cycle.
The SIR is the key form of communication between the Tester and the Fix-it team.
The four most typically-used SIR Priority Levels are: Critical, High,
Medium, Low.
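A minimal sketch of a defect (SIR) record carrying the four priority levels named above; the fields and statuses are hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum

class Priority(Enum):
    CRITICAL = "Critical"
    HIGH = "High"
    MEDIUM = "Medium"
    LOW = "Low"

@dataclass
class DefectRecord:
    """Illustrative defect/SIR record passed between the Tester and Fix-it team."""
    sir_id: str
    summary: str
    priority: Priority
    status: str = "Open"              # e.g. Open, Assigned, Fixed, Closed
    comments: list = field(default_factory=list)

if __name__ == "__main__":
    defect = DefectRecord("SIR-0042", "Total not recalculated after discount",
                          Priority.HIGH)
    defect.comments.append("Assigned to Fix-it team for analysis.")
    print(defect)
```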
Defect meetings should include:
Agenda
Roles and Responsibilities
Meeting Minutes
Representation
Schedule
Defect Reports

Other Testing Methodologies


The types of testing used by agile development teams include:
Automated testing - using an automation tool to execute
test case suite for an application. In this technique, the
automation tool enters test data into the system under test,
compares expected and actual results, and generates
detailed test reports.
Regression testing - A full or partial selection of already executed test cases which are re-executed to ensure existing functionalities still work. Regression testing is done to make sure that new code changes do not have side effects on existing functionalities, and that old code still works once the new code changes are made.
Acceptance testing - The focus of Acceptance testing is not to find defects but to check whether the software application meets the project specifications or customer requirements, since this is the first time the client sees their requirements, previously captured in plain text, as an actual working system.
The Agile testing life cycle has four phases:
Initiation - The project's foundation is established. This phase is usually of relatively short duration. For projects that have fixed deadlines, it is essential to use the Initiation phase to determine when the project needs to enter the End Game phase.
Construction - The system is developed in an evolutionary
or iterative and incremental manner. Most of the testing
conducted during an agile software development project
occurs during the Construction phase.
End Game - The system is transitioned into production.
During the End Game phase, some critical software systems
may often require Final or Full System tests and Acceptance
tests. For example, life-critical systems, such as medical
software.
Production - Involves operating the system and providing support for its users. The goal of the Production phase is to keep the product useful and productive for users after deployment. The Production phase ends when support for the release ends or when a newer release is being readied.
Most tests occur during the Construction phase.
During the End Game phase, a team may need to perform full
System and Acceptance tests.
Risk-based testing is a testing technique where tests are
designed on the concept of refutability.
In Risk-based testing, Testers design tests to challenge their basic
or critical assumptions.
Model-based testing derives tests from models. Models are created and used to represent the desired behavior of the SUT (system under test).
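A minimal sketch of model-based testing in Python, where a small state-transition model of the SUT is used to derive test sequences; the states, events, and model are hypothetical:

```python
from itertools import product

# Hypothetical state-transition model of a login feature (the SUT).
# (current_state, event) -> next_state
model = {
    ("LoggedOut", "enter_valid_credentials"): "LoggedIn",
    ("LoggedOut", "enter_invalid_credentials"): "LoggedOut",
    ("LoggedIn", "log_out"): "LoggedOut",
}

def derive_test_sequences(model, start, length=2):
    """Derive event sequences of a given length that the model allows."""
    events = sorted({event for _, event in model})
    sequences = []
    for candidate in product(events, repeat=length):
        state, valid = start, True
        for event in candidate:
            if (state, event) not in model:
                valid = False
                break
            state = model[(state, event)]
        if valid:
            sequences.append((candidate, state))
    return sequences

if __name__ == "__main__":
    for seq, final_state in derive_test_sequences(model, "LoggedOut"):
        print(f"test sequence {seq} should end in state {final_state}")
```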
Scenario-based testing is based on a scenario that revolves
around how the application will be used in a real-world situation.

Domain-based testing is used to ensure that the application only accepts valid input.

More Tools of the Trade


SQL is a standard language for accessing and manipulating
databases.
The commonly used SQL commands, exercised in the sketch after this list, are:

SELECT: Used for selecting data from a database
WHERE: Used for extracting only specific records
DELETE: Used for deleting a row in a table
UPDATE: Used for updating the records in a table
INSERT: Used for inserting a new row in a table
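A minimal Python sketch exercising the commands above, using the standard-library sqlite3 module with an in-memory database and a hypothetical employees table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway in-memory database
cur = conn.cursor()

# Hypothetical table used only for illustration.
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")

# INSERT: add new rows to the table.
cur.executemany("INSERT INTO employees (name, dept) VALUES (?, ?)",
                [("Asha", "Testing"), ("Ben", "Development"), ("Carla", "Testing")])

# UPDATE: change existing records.
cur.execute("UPDATE employees SET dept = 'QA' WHERE dept = 'Testing'")

# DELETE: remove a row that matches a condition.
cur.execute("DELETE FROM employees WHERE name = 'Ben'")

# SELECT with WHERE: read back only specific records.
for row in cur.execute("SELECT id, name, dept FROM employees WHERE dept = 'QA'"):
    print(row)

conn.commit()
conn.close()
```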


Depending on the functionality, SQL commands are classified into four categories:
Data Definition Language (DDL) - A language used by a
database management system, which allows users to define
the database and its objects, specifying data types,
structures, and constraints on the data.
Data Manipulation Language (DML) - a language that
enables users to access or manipulate data as organized by
the appropriate data model. DML commands are used to
query and manipulate existing objects like tables.
Data Control Language (DCL) - a language that provides users with privilege commands. It is the segment of SQL used for controlling access to data in a database. DCL allows protecting the tables and other objects created by a user from accidental manipulation by another user, and allows control of the privileges assigned to users.
Transaction Control Language (TCL) - Transactions are a collection of operations that form a single logical unit of work. Transactions ensure that either all changes made are committed to the database or, if there is a failure at any point in time, none of the changes are committed. Transaction changes can be made permanent in a database only if they are committed.
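A minimal sketch of TCL behavior using Python's sqlite3, continuing the hypothetical employees table from the earlier sketch; SQLite does not support DCL commands such as GRANT/REVOKE, so only commit/rollback is shown:

```python
import sqlite3

def add_employees(conn, names, fail=False):
    """Insert all names as one logical unit of work; commit only if every step succeeds."""
    cur = conn.cursor()
    try:
        for name in names:
            cur.execute("INSERT INTO employees (name) VALUES (?)", (name,))
        if fail:
            raise RuntimeError("simulated failure in the middle of the transaction")
        conn.commit()                  # make the changes permanent
    except Exception:
        conn.rollback()                # failure: none of the changes are kept

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT)")
conn.commit()

add_employees(conn, ["Asha", "Ben"], fail=True)    # rolled back
add_employees(conn, ["Carla"])                     # committed
print(conn.execute("SELECT name FROM employees").fetchall())  # [('Carla',)]
conn.close()
```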
