
Master of Computer Application (MCA) – Semester 3

Assignment Set – 1

Que 1. Describe the following Software Development Models:


A) Parallel or Concurrent development model B) Hacking

Ans:

The Parallel or Concurrent Development Model

The concurrent process model can be represented schematically as a


series of major technical activities, tasks, and their associated states.
For example, the engineering activity defined for the spiral model is
accomplished by invoking the following tasks: prototyping and/or
analysis modeling, requirements specification, and design.

Figure 2.4 provides a schematic representation of one
activity within the concurrent process model. The analysis activity may
be in any one of the states noted at any given time. Similarly, other
activities (e.g. Design or customer communication) can be represented
in an analogous manner. All activities exist concurrently but reside in
different states. For e.g., early in a project the customer communication
activity has completed its first iteration and exists in the awaiting
Changes State. The analysis activity (which existed in the none state
while initial customer communication was completed) now makes a
transition into the under development state. If the customer indicates
that changes in requirements must be made, the analysis activity
moves from the under development state into the awaiting changes
state.

The concurrent process model defines a series of events that will


trigger transition from state to state for each of the software
engineering activities. For example, during the early stages of design, an
inconsistency in the analysis model is uncovered. This generates the
event analysis model correction, which will trigger the analysis activity
from the done state into the awaiting changes state.
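These state transitions can be pictured as a simple table-driven state machine. The sketch below (in Python) only illustrates the idea; the activity, state and event names are taken from the description above, while the class and function names are invented for the example.

# Minimal sketch of an activity state machine for the concurrent process model.
# States and events follow the description above; names are illustrative only.

TRANSITIONS = {
    # (current state, event) -> next state
    ("none", "start analysis"): "under development",
    ("under development", "requirements change"): "awaiting changes",
    ("done", "analysis model correction"): "awaiting changes",
    ("awaiting changes", "changes resolved"): "under development",
    ("under development", "analysis complete"): "done",
}

class Activity:
    """One software engineering activity (e.g. analysis) with its own state."""
    def __init__(self, name, state="none"):
        self.name = name
        self.state = state

    def handle(self, event):
        """Move to the next state if the event triggers a defined transition."""
        key = (self.state, event)
        if key in TRANSITIONS:
            self.state = TRANSITIONS[key]
        return self.state

# All activities exist concurrently, each in its own state.
analysis = Activity("analysis")
design = Activity("design")

analysis.handle("start analysis")          # none -> under development
analysis.handle("requirements change")     # under development -> awaiting changes
print(analysis.name, analysis.state)       # analysis awaiting changes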

Fig: One element of the concurrent process model

The concurrent process model is often used as the paradigm for the
development of client/server applications. A client/server system is
composed of a set of functional components. When applied to
client/server, the concurrent process model defines activities in two
dimensions: a system dimension and a component dimension. System-
level issues are addressed using three activities: design, assembly, and
use. The component dimension is addressed with two activities: design and
realization. Concurrency is achieved in two ways: (1) system and
component activities occur simultaneously and can be modeled using
the state-oriented approach; (2) a typical client/server application is
implemented with many components, each of which can be designed
and realized concurrently.

The concurrent process model is applicable to all types of software


development and provides an accurate picture of the current state of a
project. Rather than confining software-engineering activities to a
sequence of events, it defines a network of activities. Each activity on
the network exists simultaneously with other activities. Events
generated within a given activity or at some other place in the activity
network trigger transitions among the states of an activity.

Hacking

The growing dependence of society on software also places


tremendous social responsibilities on the shoulders of software
engineers and their managers. When the software is being used to
monitor the health of patients, control nuclear power plants, apply the
brakes in an automobile, transfer billions of dollars in an instant, launch
missiles, or navigate an airplane, it is not simply good engineering to
build reliable software; it is also the engineer’s ethical responsibility to
do so.

Program defects are not merely inconvenient “bugs” or interesting


technical puzzles to be captured, but potentially serious business- or
life-threatening errors. Building reliable software is a technical objective
of the software engineer, but it also has ethical and social implications
that must guide the actions of a serious professional. In this light,
“hacking” (inserting “playful” bugs into programs, creating viruses,
writing quick-and-dirty code just to meet a schedule or a market
window, shipping defective software, and even shipping software that
works but does not meet the agreed-upon specifications) is unethical.

Que 2. Explain the Software Architectural design.

Ans:-

Architectural Design:-

The initial design process of identifying these sub-systems and


establishing a framework for sub-system control and communication is
called Architectural design.

Because architectural design comes before detailed system specification, it
should not include detailed design information. Architectural design is
necessary to structure and organize the specification. This model is the
starting point for the specification of the various parts of the system.

There is no generally accepted process model for architectural design.


The process depends on application knowledge and on the skill and
intuition of the system architect. For the process, the following activities
are usually necessary:

(1) System structuring: The system is structured into a number of


principal sub-systems where a sub-system is an independent software
unit. Communications between sub-systems are identified.

(2) Control modeling: A general model of the control relationships


between the parts of the system is established.

(3) Modular decomposition: Each identified sub-system is
decomposed into modules. The architect must decide on the types of
module and their interconnections.

System structuring

The first phase of the architectural design activity is usually concerned


with decomposing a system into a set of interacting sub-systems. At its
most abstract level, an architectural design may be depicted as a block
diagram in which each box represents a sub-system. Boxes within
boxes indicate that the sub-system has itself been decomposed to sub-
systems. Arrows mean that data and/or control is passed from sub-
system to sub-system in the direction of the arrows.

Fig: Block diagram of a packing robot control system

Figure shows an architectural design for a packing robot system. This


robotic system can pack different kinds of objects. It uses a vision sub-
system to pick out objects on a conveyor, identifies the type of object,
and selects the right kind of packaging from a range of possibilities. It
then moves objects from the delivery conveyor to be packaged.
Packaged objects are placed on another conveyor.

More specific models of the structure may be developed which show


how sub-systems share data, how they are distributed and how they
interface with each other. In this section three of these standard
models, namely a repository model, a client-server model and an
abstract machine model are discussed.

The repository model

Sub-systems making up a system must exchange information so that


they can work together effectively. There are two ways in which this
can be done:

(1) All shared data is held in a central database that can be accessed
by all sub systems. A system model based on a shared database is
sometimes called a repository model.

(2) Each sub-system maintains its own database. Data is interchanged


with other sub-systems by passing messages to them.

The majority of systems that use large amounts of data are


organized around a shared database or repository. This model is
therefore suited to applications where data is generated by one sub-
system and used by another.
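As a rough illustration of the two data-sharing styles, the following sketch shows one sub-system writing to a shared repository and another reading from it. It is a minimal, hypothetical example; the class and method names are not taken from the text.

# Sketch: the repository model, where sub-systems share data through a
# central store rather than passing messages to each other directly.

class Repository:
    """Central shared data store accessible to all sub-systems."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

class ProducerSubSystem:
    def __init__(self, repo):
        self.repo = repo

    def run(self):
        # Data generated by one sub-system is written to the repository ...
        self.repo.put("design_model", {"modules": ["A", "B"]})

class ConsumerSubSystem:
    def __init__(self, repo):
        self.repo = repo

    def run(self):
        # ... and read by another, without the two communicating directly.
        return self.repo.get("design_model")

repo = Repository()
ProducerSubSystem(repo).run()
print(ConsumerSubSystem(repo).run())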

Fig: The architecture of an integrated CASE tool set


The client–server model

The client-server architectural model is a distributed system model


which shows how data and processing are distributed across a range of
processors.

The major components of this model are:

(1) A set of stand-alone servers which offer services to other sub-


systems. Examples of servers are print servers which offer printing
services, file servers which offer file management services and a
compile server which offers language translation services.

(2) A set of clients that call on the services offered by servers. These
are normally sub-systems in their own right. There may be several
instances of a client program executing concurrently.

(3) A network which allows the clients to access these services. In


principle, this is not necessary as both the clients and the servers could
run on a single machine. Clients must know the names of the available
servers and the services that they provide. However, servers need not
know either the identity of clients or how many clients there are. Clients
access the services provided by a server through remote procedure
calls.
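The following toy sketch illustrates the client-server idea, with ordinary in-process calls standing in for remote procedure calls; all server, service and registry names are invented for the example.

# Toy client-server sketch: clients know server names; servers know nothing
# about their clients.

class PrintServer:
    def print_document(self, doc):
        return f"printed: {doc}"

class FileServer:
    def read_file(self, name):
        return f"contents of {name}"

# A simple name registry standing in for the network naming service.
SERVERS = {"print": PrintServer(), "file": FileServer()}

class Client:
    def run(self):
        # The client must know the server's name and the service it offers;
        # in a real system this call would be a remote procedure call.
        file_server = SERVERS["file"]
        doc = file_server.read_file("report.txt")
        print_server = SERVERS["print"]
        return print_server.print_document(doc)

print(Client().run())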

Fig: The architecture of a film and picture library system

The client–server approach can be used to implement a repository-


based system where the repository is provided as a system server.
Sub-systems accessing the repository are clients. Normally, however,
each sub-system manages its own data. Servers and clients exchange
data for processing. This can result in performance problems when

large amounts of data are exchanged. However, as faster networks are
developed, this problem is becoming less significant.

The most important advantage of the client-server model is that


distribution is straightforward. Effective use can be made of networked
systems with many distributed processors. It is easy to add a new
server and integrate it gradually with the rest of the system or to
upgrade servers transparently without affecting other parts of the
system.

The abstract machine model

The abstract machine model of architecture (sometimes called a


layered model) models the interfacing of sub-systems. It organizes a
system into a series of layers each of which provides a set of services.
Each layer defines an abstract machine whose machine language (the
services provided by the layer) is used to implement the next level of
abstract machine. For example, a common way to implement a
language is to define an ideal ‘language machine’ and compile the
language into code for this machine. A further translation step then
converts this abstract machine code to real machine code.
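A minimal sketch of the layered idea is given below, assuming hypothetical layer names loosely modeled on the version management example discussed next; each layer is implemented only in terms of the services of the layer beneath it.

# Sketch of an abstract machine (layered) architecture: each layer uses only
# the services of the layer immediately below it. Names are illustrative.

class OperatingSystemLayer:
    def store_bytes(self, key, data):
        return f"OS stored {len(data)} bytes under {key}"

class DatabaseLayer:
    def __init__(self, os_layer):
        self.os = os_layer

    def save_record(self, record_id, record):
        # Implemented using the "machine language" of the layer below.
        return self.os.store_bytes(record_id, str(record).encode())

class ObjectManagementLayer:
    def __init__(self, db_layer):
        self.db = db_layer

    def save_object(self, obj_id, obj):
        return self.db.save_record(obj_id, {"object": obj})

class VersionManagementLayer:
    def __init__(self, object_layer):
        self.objects = object_layer

    def commit_version(self, name, version, contents):
        return self.objects.save_object(f"{name}-v{version}", contents)

system = VersionManagementLayer(
    ObjectManagementLayer(DatabaseLayer(OperatingSystemLayer())))
print(system.commit_version("module_a", 3, "source text"))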

Fig: Abstract machine model of a version management system

A well-known example of this approach is the OSI reference model of


network protocols. Another influential example of this approach is the
three-layer model proposed for an Ada programming support
environment (APSE).

The version management system relies on managing versions of


objects and provides general configuration management facilities. To
support these configuration management facilities, it uses an object
management system which provides information storage and
management services for objects. This system uses a database
system to provide basic data storage and services such as transaction
management, rollback and recovery, and access control. The database
management system uses the underlying operating system facilities and
file store in its implementation.

The layered approach supports the incremental development of


systems. As a layer is developed, some of the services provided by that
layer may be made available to users. This architecture is also
changeable and portable.

A disadvantage of the layered approach is that structuring systems in


this way can be difficult. Inner layers may provide basic facilities, such
as file management, which are required by all abstract machines.
Services required by the user may therefore require access to an
abstract machine that is several levels beneath the outermost layer.
This subverts the model, as an outer layer is no longer simply
dependent on its immediate predecessor.

Performance can also be a problem because of the multiple levels of


command interpretation that are required. If there are many layers,
some overhead is always associated with layer management. To avoid
these problems, applications may have to communicate directly with
inner layers rather than use facilities provided in the abstract machine.

Control models

The models for structuring a system are concerned with how a system
is decomposed into sub-systems. To work as a system, sub-systems
must be controlled so that their services are delivered to the right place
at the right time. Structural models do not (and should not) include
control information. Rather, the architect should organize the sub-
systems according to some control model that supplements the
structural model used. Control models at the architectural level are
concerned with the control flow between sub-systems.

Two general approaches to control can be identified:

(1) Centralized control: One sub-system has overall responsibility for


control and starts and stops other sub-systems. It may also devolve
control to another sub-system but will expect to have this control
responsibility returned to it.

(2) Event-based control: Rather than control information being
embedded in a sub-system, each sub-system can respond to externally
generated events. These events might come from other sub-systems
or from the environment of the system.

Control models supplement structural models. All the above structural


models may be implemented using either centralized or event-based
control.

Centralized control

In a centralized control model, one sub-system is designated as the


system controller and has responsibility for managing the execution of
other sub-systems.

Fig: Centralized control model of a real-time system

The figure illustrates a centralized management model of control for a


concurrent system. This model is often used in ‘soft’ real-time systems,
which do not have very tight time constraints. The central controller
manages the execution of a set of processes associated with sensors
and actuators.
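The following sketch illustrates centralized control in such a system: a single controller starts, drives and stops the sensor and actuator processes. The class names and behavior are illustrative only, not part of any real-time framework.

# Sketch of centralized control: one controller starts, monitors and stops the
# sensor and actuator processes.

class Process:
    def __init__(self, name):
        self.name = name
        self.running = False

    def start(self):
        self.running = True

    def stop(self):
        self.running = False

    def step(self):
        return f"{self.name} did one unit of work"

class CentralController:
    """Holds overall responsibility for control of the other sub-systems."""
    def __init__(self, processes):
        self.processes = processes

    def run(self, cycles=2):
        log = []
        for p in self.processes:
            p.start()
        for _ in range(cycles):
            # Control decisions are made from state held by the controller.
            for p in self.processes:
                if p.running:
                    log.append(p.step())
        for p in self.processes:
            p.stop()
        return log

controller = CentralController([Process("sensor"), Process("actuator")])
for line in controller.run():
    print(line)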

Event-driven systems

In centralized control models, control decisions are usually determined


by the values of some system state variables. By contrast, event-driven
control models are driven by externally generated events.

The distinction between an event and a simple input is that the timing
of the event is outside the control of the process which handles that
event.
A sub-system may need to access state information to handle these
events, but this state information does not usually determine the flow of
control.

There are two event-driven control models:

(1) Broadcast models: In these models, an event is, in principle,


broadcast to all sub-systems. Any sub-system, which is designed to
handle that event, responds to it.

(2) Interrupt-driven models: These are exclusively used in real-time


systems where an interrupt handler detects external interrupts. They
are then passed to some other component for processing.

Broadcast models are effective in integrating sub-systems distributed


across different computers on a network. Interrupt-driven models are
used in real-time systems with stringent timing requirements.

The advantage of this approach to control is that it allows very fast


responses to events to be implemented. Its disadvantages are that it is
complex to program and difficult to validate.
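A small sketch of broadcast event-based control is shown below; the broker and the two example sub-systems are hypothetical, and a real implementation would dispatch events across a network rather than within one program.

# Sketch of broadcast event-based control: an event is broadcast to all
# sub-systems, and any sub-system designed to handle it responds.

class EventBroker:
    def __init__(self):
        self.subscribers = []

    def register(self, handler):
        self.subscribers.append(handler)

    def broadcast(self, event, payload=None):
        # Every sub-system sees the event; only interested ones react.
        for handler in self.subscribers:
            handler(event, payload)

def logging_subsystem(event, payload):
    print(f"logger saw event: {event}")

def alarm_subsystem(event, payload):
    if event == "temperature_high":
        print(f"alarm raised, reading = {payload}")

broker = EventBroker()
broker.register(logging_subsystem)
broker.register(alarm_subsystem)
broker.broadcast("temperature_high", payload=98.4)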

Modular decomposition

After a structural architecture has been designed, another level of


decomposition may be part of the architectural design process. This is
the decomposition of sub-systems into modules.

Here we consider two models which may be used when decomposing a


sub-system into modules:

(1) An object-oriented model, in which the system is decomposed into a set of


communicating objects.

(2) A data-flow model, in which the system is decomposed into functional


modules, which accept input data and transform it, in some way, to
output data. This is also called a pipeline approach.

In the object-oriented model, modules are objects with private state


and defined operations on that state. In the data-flow model, modules
are functional transformations. In both cases, modules may be
implemented as sequential components or as processes.

The advantages of the object-oriented approach are that objects are loosely
coupled: the implementation of objects can be modified without
affecting other objects. Objects are often representations of real-world
entities so the structure of the system is readily understandable.
Because these real-world entities are used in different systems, objects
can be reused. Object-oriented programming languages have been
developed which provide direct implementations of architectural
components.

However, the object-oriented approach does have disadvantages. To


use services, objects must explicitly reference the name and the
interface of other objects. If an interface change is required to satisfy
proposed system changes, the effect of that change on all users of the
changed object must be evaluated. More complex entities are
sometimes difficult to represent using an object model.

In a data-flow model, functional transformations process their inputs


and produce outputs. Data flows from one transformation to another and is
transformed as it moves through the sequence. Each processing step
is implemented as a transform. Input data flows through these
transforms until converted to output. The transformations may execute
sequentially or in parallel. The data can be processed by each
transform item by item or in a single batch.
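A minimal sketch of the pipeline idea follows, using an invented order-processing example: each transform accepts the data produced by the previous one. The transform names and pricing rule are hypothetical.

# Sketch of the data-flow (pipeline) style: input flows through a sequence of
# functional transformations until it becomes output.

def parse_order(raw):
    items = raw.split(",")
    return {"items": items, "count": len(items)}

def validate_order(order):
    order["valid"] = order["count"] > 0
    return order

def price_order(order):
    order["total"] = order["count"] * 10  # flat price per item, for illustration
    return order

def run_pipeline(raw, transforms):
    data = raw
    for transform in transforms:       # each step transforms the data and
        data = transform(data)         # passes it on to the next transform
    return data

result = run_pipeline("pen,notebook,stapler",
                      [parse_order, validate_order, price_order])
print(result)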

The advantages of this architecture are:

(1) It supports the reuse of transformations.

(2) It is intuitive in that many people think of their work in terms of input
and output processing.

(3) Evolving the system by adding new transformations is usually


straightforward.

(4) It is simple to implement either as a concurrent or a sequential


system.

Domain-specific architectures

The previous architectural models are general models. They can be


applied to many different classes of application. As well as these
general models, architectural models, which are specific to a particular
application domain, may also be used. Instances of these systems
differ in detail. The common architectural structure can be reused when
developing new systems. These architectural models are called
domain-specific architectures.
There are two types of domain-specific architectural model:

(1) Generic models which are abstractions from a number of real


systems. They encapsulate the principal characteristics of these
systems. The class of systems modeled using a generic model is
usually quite restricted. For example, in real-time systems, there might
be generic architectural models of different system types such as data
collection systems, monitoring systems, and so on.

(2) Reference models, which are more abstract and describe a larger
class of systems. They provide a means of informing system architects
about that class of system.

Generic models are usually derived “bottom-up” from existing systems


whereas reference models are derived “top-down”.

Que. 3. Describe the following system models giving appropriate


real time examples:
A) Data Flow Models B) Semantic Models C) Object Models

Ans:-

Data-flow models

A data-flow model is a way of showing how data is processed by a


system. At the analysis level, they should be used to model the way in
which data is processed in the existing system. The notations used in
these models represent functional processing, data stores and data
movements between functions.

Data-flow models are used to show how data flows through a


sequence of processing steps. The data is transformed at each step
before moving on to the next stage. These processing steps or
transformations are program functions when data-flow diagrams are
used to document a software design. Figure 4.1 shows the steps
involved in processing an order for goods (such as computer
equipment) in an organization.

Fig. 4.1: Data-flow diagram of order processing

The model shows how the order for the goods moves from process to
process. It also shows the data stores that are involved in this process.

There are various notations used for data-flow diagrams. In the figure,
rounded rectangles represent processing steps, arrows annotated with
the data name represent flows, and rectangles represent data stores
(data sources). Data-flow diagrams have the advantage that, unlike
some other modeling notations, they are simple and intuitive. These
diagrams are not, however, a good way to describe sub-systems with complex
interfaces.

Semantic data models

Large software systems make use of a large database of


information. In some cases, this database exists independently of the
software system. In others, it is created for the system being
developed. An important part of system modeling is to define the
logical form of the data processed by the system. An approach to data
modeling, which includes information about the semantics of the data,
allows a better abstract model to be produced. Semantic data models
always identify the entities in a database, their attributes and explicit
relationship between them. Approach to semantic data modeling
includes Entity-relationship modeling. Semantic data models are
described using graphical notations. These graphical notations are
understandable by users so they can participate in data modeling. The
notations used are in figure 4.2 shown below.

Fig: Notations for semantic data models.

Relations between entities may be 1:1, which means one entity


instance participates in a relation with one other entity instance. They
may be 1:M, where an entity instance participates in a relationship
with more than one other entity instance, or M:N, where several entity
instances participate in a relation with several others.
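The following sketch shows how entities, attributes and an explicit relation might be expressed in code; the Reader, Book and Borrows names are hypothetical, and the cardinalities are noted in comments rather than enforced.

# Sketch of entities, attributes and a relation in a semantic
# (entity-relationship) data model.

from dataclasses import dataclass
from typing import List

@dataclass
class Reader:                      # entity with attributes
    reader_id: int
    name: str

@dataclass
class Book:                        # entity with attributes
    isbn: str
    title: str

@dataclass
class Borrows:                     # explicit relation between entities
    reader: Reader
    book: Book
    due_date: str

# Cardinality is a property of the relation:
#   1:1  one reader instance relates to one book instance
#   1:M  one reader may borrow many books
#   M:N  many readers relate to many books (e.g. "has read")
alice = Reader(1, "Alice")
loans: List[Borrows] = [
    Borrows(alice, Book("111", "Software Engineering"), "2024-06-01"),
    Borrows(alice, Book("222", "Databases"), "2024-06-15"),
]
print(len(loans), "loans for", alice.name)   # an instance of a 1:M relation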

Entity-relationship models have been widely used in database design.


The database schemas derived from these models are naturally in third
normal form, which is a desirable characteristic of relational schemas.
Because of the explicit typing and the recognition of sub and super
types, it is also straightforward to map these models onto object-
oriented databases.

Object models

To support object-oriented programming, an object-oriented


development approach may be adopted. This means expressing the
system requirements using an object model, designing using an object-
oriented approach and developing the system in object-oriented
programming languages such as C++.

Object models developed during requirements analysis are used to


represent both system data and its processing. They combine some of
the uses of data-flow and semantic data models. They are useful for
showing how entities in the system may be classified and composed of
other entities.

Object models of systems, which are developed during requirement


analysis, should not include details of the individual objects in the
system. They should model classes of objects representing entities. An
object class is an abstraction over a set of objects which identifies
common attributes and the services or operations which are provided
by each object.

Various types of object models can be produced showing how object


classes are related to each other, how objects are aggregated to form
other objects, how objects use the services provided by other objects
and so on. Figure 4.3 shows the notation, which is used to represent
an object class. There are three parts to this. The object class name
has its obvious meaning and the attribute section lists the attributes of
that object class. When objects are created using the class as a
template, all created objects acquire these attributes. They may then
be assigned values that are conformant with the attribute type declared
in the object class. The service section shows the operations
associated with the object. These operations may modify attribute
values and may be activated from other classes.
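A minimal sketch of the three-part notation expressed in code is given below, using a hypothetical LibraryBook class: the class name, its attributes and its services correspond to the three sections of the diagram.

# Sketch of the three-part object class notation: class name, attributes, and
# services (operations). The LibraryBook class is a made-up example.

class LibraryBook:
    """Class name: LibraryBook"""

    def __init__(self, title, author, on_loan=False):
        # Attribute section: every created object acquires these attributes,
        # with values conforming to the (implied) declared types.
        self.title = title
        self.author = author
        self.on_loan = on_loan

    # Service section: operations that may modify attribute values and
    # may be activated (called) from other classes.
    def lend(self):
        self.on_loan = True

    def return_to_shelf(self):
        self.on_loan = False

book = LibraryBook("System Models", "A. Author")
book.lend()
print(book.title, "on loan:", book.on_loan)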

Fig. Notation to represent an object class

Que. 4. Describe the following software life cycle models:


A) Waterfall Model B) Incremental Model C) Iterative Model

Ans: - Waterfall Model:-

The Serial or Linear Sequential Development Model

This model is also called the classic life cycle or the waterfall model.
The Linear sequential model suggests a systematic sequential
approach to software development that begins at the system level and
progresses through analysis, design, coding, testing, and support.
Figure 2.1 shows the linear sequential model for software engineering.
Modeled after a conventional engineering cycle, the linear sequential
model has the following activities:

Fig: The linear sequential model

Software requirement analysis

The requirement gathering process is intensified and focused


specifically on software. To understand the nature of the program to be
built, the software engineer (analyst) must understand the information
domain for the software, as well as required function, behavior,
performance and interface. Requirements for both the system and the
software are documented and reviewed with the customer.

Design

Software design is actually a multistep process that focuses on four


distinct attributes of a program: data structure, software architecture,
interface representations, and procedural (algorithmic) detail. The
design process translates requirements into a representation of the
software that can be assessed for quality before coding begins. Like
requirements, the design is documented and becomes part of the
software configuration.

Code Generation

The design must be translated into a machine-readable form. The


code generation step performs this task. If design is performed in a
detailed manner, code generation can be accomplished
mechanistically.

Testing

Once code has been generated, program testing begins. The testing
process focuses on the logical internals of the software, ensuring that
all statements have been tested, and on the functional externals; that
is, conducting tests to uncover errors and ensure that defined input will
produce actual results that agree with required results.

Support

Software will undergo change after it is delivered to the customer.


Change will occur because errors have been encountered, because
the software must be adapted to accommodate changes in its external
environments or because the customer requires functional or
performance enhancements. Software maintenance re-applies each of
the preceding phases to an existing program rather than a new one.

A successful software product is one that satisfies all the objectives of


the development project. These objectives include satisfying the
requirements and performing the development within time and cost
constraints. Generally, for any reasonable size projects, all the phases
listed in the model must be performed explicitly and formally.

Whether this linear ordering of phases is always appropriate is now
under debate. For many projects the linear ordering of these phases is
clearly the optimum way to organize the activities. However, some argue
that for many projects this ordering of activities is infeasible or
suboptimal. Still, the waterfall model is conceptually the simplest process
model for software development and has been used most often.

Incremental Model:-

The incremental Development Model

The incremental model combines elements of the linear sequential


model with the iterative philosophy of prototyping. Figure 2.3 shows how the
incremental model applies linear sequences in a staggered fashion as
calendar time progresses. Each linear sequence produces a
deliverable “increment” of the software. For example, word processing
software developed using the incremental paradigm might deliver basic
file management, editing, and document production functions in the
first increment; more sophisticated editing and document production
capabilities in the second increment; spelling and grammar checking in
the third increment; and advanced page layout capability in the fourth
increment. It should be noted that the process flow for any increment
could incorporate the prototyping paradigm.
Fig: The incremental model

When an incremental model is used, the first increment is a core


product. That is, basic requirements are addressed, but many
supplementary features remain undelivered. The customer uses the
core product. As a result of use and/or evaluation, a plan is developed
for the next increment. The plan addresses the modification of the core
product to better meet the needs of the customer and the delivery of
additional features and functionality. This process is repeated following
the delivery of each increment, until the complete product is produced.
The incremental process model is iterative in nature. The incremental
model focuses on the delivery of an operational product with each
increment.

Incremental development is particularly useful when staffing is


unavailable for a complete implementation by the business deadline
that has been established for the project. Early increments can be
implemented with fewer people. If the core product is well received,
then additional staff can be added to implement the next increment. In
addition, increments can be planned to manage technical risks. For
example, a major system might require the availability of new hardware that is
under development and whose delivery date is uncertain. It might be
possible to plan early increments in a way that avoids the use of this
hardware, thereby enabling partial functionality to be delivered to end
users without inordinate delay.

Iterative Development Model

The iterative enhancement model counters the third limitation of the


waterfall model and tries to combine the benefits of both prototyping and
the waterfall model. The basic idea is that the software should be
developed in increments, each increment adding some functional
capability to the system until the full system is implemented. At each
step, extensions and design modifications can be made. An advantage
of this approach is that it can result in better testing because testing
each increment is likely to be easier than testing the entire system as in
the waterfall model. The increments provide feedback to the
client that is useful for determining the final requirements of the system.

In the first step of this model, a simple initial implementation is done for
a subset of the overall problem. This subset is one that contains some
of the key aspects of the problem that are easy to understand and
implement and which form a useful and usable system. A project
control list is created that contains, in order, all the tasks that must be
performed to obtain the final implementation. This project control list
gives an idea of how far the project is at any given step from the final
system.

Each step consists of removing the next task from the list, designing
the implementation for the selected task, coding and testing the
implementation, performing an analysis of the partial system obtained
after this step, and updating the list as a result of the analysis. These
three phases are called the design phase, implementation phase and
analysis phase. The process is iterated until the project control list is
empty, at which time the final implementation of the system will be
available. The iterative enhancement process model is shown in figure
2.2.

Fig: The iterative enhancement model

The project control list guides the iteration steps and keeps track of all
tasks that must be done. Based on the analysis, one of the tasks in the
list can include redesign of defective components or redesign of the
entire system. Redesign of the system will occur only in the initial
steps. In the later steps, the design would have stabilized and there is
less chance of redesign. Each entry in the list is a task that should be
performed in one step of the iterative enhancement process and should
be completely understood. Selecting tasks in this manner will minimize
the chance of error and reduce the redesign work. The design and
implementation phases of each step can be performed in a top-down
manner or by using some other technique.
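The following sketch simulates the control loop described above: tasks are removed from a project control list, each step produces an increment, and analysis may add tasks back to the list. The task names and the analysis rule are invented for illustration.

# Sketch of the iterative enhancement process: work through a project
# control list of tasks, one design/implement/analyse step per task,
# until the list is empty.

project_control_list = [
    "basic file management",
    "editing functions",
    "spelling checker",
]

completed = []
while project_control_list:
    task = project_control_list.pop(0)     # remove the next task from the list

    # Design, implement and test the selected task (simulated here).
    increment = f"increment implementing: {task}"

    # Analyse the partial system and update the control list if needed;
    # analysis might add redesign tasks in the early steps.
    if task == "basic file management":
        project_control_list.append("redesign file format")

    completed.append(increment)

print(f"{len(completed)} increments delivered; control list empty")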

One effective use of this type of model is for product development, in


which the developers themselves provide the specifications and
therefore have a lot of control on what specifications go in the system
and what stay out.

In a customized software development, where the client has to


essentially provide and approve the specifications, it is not always clear
how this process can be applied. Another practical problem with this
type of development project comes in generating the business
contract: how will the cost of additional features be determined and
negotiated, particularly because the client organization is likely to be
tied to the original vendor who developed the first version. Overall, in
these types of projects, this process model can be useful if the “core” of
the applications to be developed is well understood and the
“increments” can be easily defined and negotiated. In client-oriented
projects, this process has the major advantage that the client’s
organization does not have to pay for the entire software at once; it
can get the main part of the software developed and perform a cost-
benefit analysis for it before enhancing the software with more
capabilities.

Que. 5. Describe various types of Object Oriented and Reuse


Models.

Ans:-

Object-Oriented and Reuse Models:-

Object-oriented techniques (see Table 2.2) can be used at


different points in the software life cycle, from problem analysis and
requirements specification to programming. At analysis, the result of an
object-driven approach is an object-oriented model of the application
domain. At requirements, the outcome is a description of the system to
be designed in an object-oriented manner. At implementation, the
source programming is done using an object-oriented programming
language.

Using a traditional software process model in conjunction with object-


oriented programming has little impact on the overall process structure
because the object-oriented aspects are subordinated to the classic
development framework. A typical example of this would be to develop
a system using the Waterfall Model with implementation done in an
object-oriented language such as C++, Java, or Smalltalk. When an
object-oriented design strategy is used, the system modules become
classes or objects that are defined, analyzed, associated, and
aggregated using object-oriented analysis, design, and implementation
techniques and notations. Examples of this approach include
component-based process models, COTS development, and the UML-
based Rational Unified Process. These strategies have gained
considerable attention in rapid application development because they
can significantly improve productivity due to the reusability of the
objects or components. Furthermore, these approaches can be
extensively supported by CASE tools.

Consider an object-oriented approach to requirements and specification.
Requirements engineering entails identifying the requirements that
the user expects of a system and specifying those requirements in an
appropriate manner. The process involves the elicitation, specification,
and validation of stakeholder objectives for an application in a problem
domain. The requirements document tells what is acceptable to the
user. The correct requirements are critical because, without correctly
identified requirements, the project is doomed to failure and
irrelevance. The specifications, on the other hand, must also accurately
reflect what the user wants, but they are primarily for the benefit of the
developer.

Social or collaborative factors are involved in requirements gathering


because the elicitation of requirements is based on a cooperative
social interaction between the developers and users. The requirements
can be defined via use cases, with subsequent development iterations
planned around the use cases. The specifications can be represented
by various means: formal, informal, or based on natural language.
Nonfunctional requirements must also be addressed, including quality,
reliability, usability, and performance.

Typically, the specification aspect of requirements engineering has
been done using a structured approach based on data flow diagrams
and structure charts. However, this can also be done using an object-
oriented approach (Dawson & Swatman 1999). An object-oriented
problem analysis is first performed to understand the real-world
problem domain. A domain model is created to give a visual description
of the partitioning of the application domain into conceptual objects,
which can be determined, for example, from use cases. The emphasis
on objects as opposed to functions distinguishes object-oriented
analysis from structured analysis; the focus of the latter is on the
identification of system functions rather than domain objects. The
purpose of the object analysis is to discover the objects in the
application domain and the information and behaviors needed by the
objects to meet the requirements. Blank observes that object-oriented
analysis and design depend critically on correctly “assigning
responsibilities to objects.”

The system design phase involves creating a conceptual solution to the


problem that meets the requirements. This includes architectural,
database, and object design. Object-oriented design involves
identifying the software objects needed to fulfill the requirements
analysis, including their properties, methods, and how they interact or
collaborate with one another. Such objects are anticipated to be
relatively stable throughout development.

UML diagrammatic models or tools are widely used for describing


objects and their interactions. Static UML models are used to define
the objects, their properties, and relations. Dynamic UML models are
used to define the states of the objects, their state transitions, event
handling, and message passing. The interaction between the objects
reflects the system flow of control. UML collaboration diagrams are
used to illustrate the interactions between objects visually. UML
sequence diagrams are used to illustrate the interactions between
objects arranged in a time sequence (the sequence of messages
between objects) and to clarify the logic of use cases. These are
examples of so-called interaction diagrams.

System sequence diagrams show the system events that the so-called
actors generate (see the RUP discussion for further details), their order
during a scenario, and the system responses to the events and their
order. A system sequence diagram is a visual illustration for the system
responses in the use case for a scenario; it describes the system
operations triggered by a use case (Blank 2004). UML activity
diagrams are used to understand the logic of use cases and business
processes. Traditional state machine diagrams illustrate the behavior of
an object in response to events and as a function of its internal state.
For a further discussion of UML modeling, refer to the section on the
Rational Unified Process. Larman (2001) provides an important
treatment of UML and object-oriented analysis and design. Incidentally,
Liu et al. (1998) describe the application of SOFL (Structured Object-
Oriented Formal Language) for integrating structured and object-
oriented methodologies. SOFL combines static and dynamic modeling
and may potentially overcome some of the problems with formal
methods that have limited their use.

One can also model the entire development process in an object-


oriented way for such purposes as to apply automation for process
improvement (Riley 1994). In this perspective, the development
process is what is captured and formulated in an object-oriented
manner. Riley observed that current process descriptions are often
“imprecise, ambiguous, incomprehensible, or unusable,” and there is
also frequently “a lack of fidelity between actual behavior and a
[development] organization’s stated process.” To address this, he
proposed an object-oriented approach for modeling software processes
based on a language called DRAGOON, which is also object oriented.

DRAGOON is used as a metamodeling language to represent the


process model. This type of process representation is intended to
facilitate improving process quality via automation. Riley claimed that
his approach avoided some of the drawbacks associated with process
modeling techniques based on functional approaches – such as
structured analysis and design for defining system data flows and
control. The idea is that his approach can be used to develop a
theoretical model of software process, including formalization, as well
as support simulation and automated enactment of processes.

Models like this are needed to develop life-cycle support environments


that can partially automate process enactment. Automation could help
ensure that processes were enacted in a standard manner by the
individuals and teams using the processes. This could allow
“enforcement and verification of the process and the unobtrusive
collection of metrics,” which could then be used to improve the process
(Riley 1994). Riley’s metamethod is based on a four-step approach:

1. Define an object-oriented process model

2. Specify the DRAGOON syntax for each model object

3. Develop object behavior models for DRAGOON

4. Develop object interaction models for the overall process.

Rational Unified Process Model (RUP)

UML has become a widely accepted, standard notation for object-


oriented architecture and design. The widespread acceptance of UML
allows developers to perform system design and provide design
documentation in a consistent and familiar manner. The
standardization reduces the need for developers to learn new
notational techniques and improves communication among the
development team and stakeholders. The Rational Rose software suite
is a GUI or visual modeling tool available from Rational Software that
lets developers model a problem, its design, implementation, and
indeed the entire development process, all the way through testing and
configuration management, using the UML notation.

Component-based development allows the use (or reuse) of


commercially available system components and ultimately continuous
(re)development, but involves the complexities of gluing the
components together. This is also highly consistent with the
fundamental principle of separation of concerns.

The RUP constitutes a complete framework for software development.


The elements of the RUP (not of the problem being modeled) are the
workers who implement the development, each working on some
cohesive set of development activities and responsible for creating
specific development artifacts. A worker is like a role a member plays
and the worker can play many roles (wear many hats) during the
development. For example, a designer is a worker and the artifact that
the designer creates may be a class definition. An artifact supplied to a
customer as part of the product is a deliverable. The artifacts are
maintained in the Rational Rose tools, not as separate paper
documents. A workflow is defined as a “meaningful sequence of
activities that produce some valuable result” (Kruchten 2003). The
development process has nine core workflows: business modeling;
requirements; analysis and design; implementation; test; deployment;
configuration and change management; project management; and
environment. Other RUP elements, such as tool mentors, simplify
training in the use of the Rational Rose system. These core workflows
are spread out over the four phases of development:

· The inception phase defines the vision of the actual user end-product
and the scope of the project.

· The elaboration phase plans activities and specifies the architecture.

· The construction phase builds the product, modifying the vision and
the plan as it proceeds.

· The transition phase transitions the product to the user (delivery,


training, support, maintenance).

Commercial Off-the-Shelf Model (COTS)

The component-based approach to development such as that


represented by the use of commercial off-the-shelf (COTS) products is
a good example of how object-oriented methodologies have
dramatically affected development strategies. COTS development
reflects a radically expanded concept of reuse in which proposed
systems are configured out of prebuilt components or subsystems. The
software economics of COTS components is very unlike that of
custom-built components. Cost estimates call for components that
must be reused at least two or three times to recoup the development
expense; however, commercial components can be reused thousands
of times by different developers.

The Reengineering Model

The Reengineering Process originally emerged in a business or


organizational context as a response to the customary business
metrics of time, cost, and risk reduction. Reengineering is especially
attractive in situations in which significant advances in existing
technologies have been made that may enable breakthroughs in the
performance or functionality of an existing system. Although
reengineering has influenced the software process modeling literature
and some reengineering models have been introduced, it has
nonetheless been a somewhat overlooked approach. This status is
perhaps attributable to its relatively recent introduction or to its
categorization as a technique that usually appears integrated with other
process modeling approaches.

Sommerville (2000) identifies three main phases in software
reengineering: defining the existing system; understanding and transformation;
and reengineering the system. The process entails “taking existing
legacy systems and reimplementing them to make them more
maintainable. As part of this reengineering process, the system may be
redocumented or restructured or even retranslated to a more modern
programming language.” The system may be implemented on a
different architectural platform and data may be “migrated to a different
database management system.” Pressman (1996) introduced a six-
phase software reengineering process model in which the phases
worked together in a cyclical, iterative fashion: inventory analysis;
document restructuring; reverse engineering; code restructuring; data
restructuring; and forward engineering.

An inventory analysis makes a detailed review of all existing business


applications with respect to their longevity, size, maintainability, and
criticality.

· Reverse engineering refers to the attempt to extract and abstract


design information from the existing system’s source code; in other
words, it attempts to recover the design implemented in the code.
Reverse engineering uses the information about the system’s scope
and functionality provided by the inventory analysis.

· Forward engineering refers to the use of the process results or


products from the reverse engineering phase to develop the new
system. Obviously, one of the most common adaptations is the
development of new interactive interfaces. These may not precisely
match the characteristics of the old interface but may use new styles of
interaction instead.

· Data reengineering refers to translation of the current model to the


target data model – for example, converting from flat tables or a
hierarchical organization to a relational data model.

· Redocumentation creates new system documentation from legacy


documentation according to an appropriate documentation standard.

Que. 6. Describe productivity driven dynamic process modeling.

Ans:-

Productivity-Driven Dynamic Process Modeling

Abdel–Hamid and Madnick described a simulated approach to process


modeling in a series of papers in 1983, 1989, and 1991. Their research
represented an attempt to understand the impact of project
management and economic effects on software development by using
simulation. They examined the effect of management and process
structure on team effectiveness by designing a computer model of
software project management based on a systems dynamics
methodology. The work was motivated by a perceived fundamental
shortcoming in previous research on software project management: its
“inability to integrate our knowledge of the microcomponents of the
software development process such as scheduling, productivity, and
staffing to derive implications about the behavior of the total
sociotechnical system” (Abdel–Hamid & Madnick 1989).

The 1989 article includes an intentionally simplified but nonetheless


instructive flow diagram of project management. In this model, people
and resources are first allocated, then simulated work is done and its
progress is monitored. A revised simulated estimate of the project com-
pletion date is made on the basis of progress so far, and resource
allocation is revised depending on the business requirements and the
available budget. The computer model allows one to address certain
obvious management concerns in an automated scenario-like fashion.
For example, suppose a project is behind schedule. Should the
completion date be revised, should new staff be added, or should
existing staff work overtime? Scenarios like this can be automatically
implemented for each alternative. Consider the perennial problem
concerning the trade-off among quality assurance, project completion
time, and cost versus the “impact of different effort distributions among
project phases.” The simple flow model can address these what-ifs too.
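The following sketch imitates that simplified flow: work is simulated, the completion date is re-estimated from progress, and staffing is revised against a deadline and budget. All numbers and the management rule are invented for illustration and are not taken from the Abdel-Hamid and Madnick model.

# Sketch of the simplified project management loop: allocate resources,
# simulate work, monitor progress, revise the estimate and the staffing.

total_work = 100.0          # task units to complete
productivity = 1.0          # task units per person per period
staff = 5
budget_limit = 12           # maximum staff the budget allows
deadline = 15               # target completion period

done = 0.0
period = 0
while done < total_work:
    period += 1
    done += staff * productivity                 # simulated work this period

    remaining_periods = (total_work - done) / max(staff * productivity, 1e-9)
    estimated_completion = period + remaining_periods

    # Management rule: if the revised estimate slips past the deadline,
    # add staff (up to the budget), accepting that a fuller model would
    # first charge a learning and communication penalty for newcomers.
    if estimated_completion > deadline and staff < budget_limit:
        staff += 1

print(f"finished in {period} periods with {staff} staff")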

A more complete model incorporates additional organizational sub-


systems, including human resource management, software production,
process control, and planning and allows one to test scenarios experi-
mentally via the simulation and a built-in COCOMO style framework for
cost estimation. Modeling the impact of quality assurance procedures
is also an important ability of the model. The purpose of this work was
to understand how to improve productivity in software development by
understanding the dynamic inter-relations among key project elements
such as task management, team effectiveness, and quality control.

The system dynamics model can be used to examine the so-called


productivity paradox, which refers to the often disappointing lack of
improvement in software development productivity despite the
application of powerful new development techniques and automated
support like CASE tools. Abdel–Hamid (1996) observes that, although
laboratory-scale experiments on the effect of using CASE tools often
report dramatic improvements in development productivity, actual
results in field studies of real development environments appear to
reflect modest or no improvement. The explanation for this
phenomenon suggested by the systems dynamics model is that the
shortfall in performance improvement does not reflect a failure in
management implementation of design strategies, but rather inherent
complexities in the social system in which the development is
implemented.

Abdel–Hamid (1996) argues that predictably modeling the behavior of


the complex organizational systems underlying software development
requires using computer-based systems dynamics models to create
micro-worlds that are digital replicas of the organizations. The details of
these models must, of course, be defined or specified on the basis of
close empirical review and analysis of existing project management
environments and statistical studies. In the absence of such simulation
techniques, the observed behavior of development environments may
seem counterintuitive or puzzling and may exhibit unintended side
effects.

The model applies Steiner’s (1972) theory of group productivity, which


computes actual productivity as potential productivity minus losses
due to faulty processes. The simulation models factors such as human resource
management, turnover rate, error rates, learning, development rate,
workforce level, schedule pressure, project task control, etc., with cost
estimates done using COCOMO – although simulation-driven
variations of the COCOMO estimation strategy and data are also used.
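A small worked example of this relation is given below, with invented loss fractions; the real model derives such losses from factors like schedule pressure and intrateam communication rather than fixed percentages.

# Worked sketch of Steiner's group productivity relation as used in the model:
#   actual productivity = potential productivity - losses due to faulty process
# The loss terms and numbers below are invented for illustration only.

def actual_productivity(potential, communication_loss, motivation_loss):
    """Potential productivity reduced by process losses (fractions of potential)."""
    losses = potential * (communication_loss + motivation_loss)
    return potential - losses

# A nominal team able to complete 10 task units per day, losing 15% of its
# effort to intrateam communication and 5% to schedule-pressure effects:
print(actual_productivity(10.0, 0.15, 0.05))   # -> 8.0 task units per day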

According to the Steiner theory, faulty processes are a key element in


explaining why group problem-solving processes fall short of their
potential. The faulty processes that may detract from the potential
productivity of a system include so-called dynamic motivation factors
and communication overhead. The dynamic motivation factors include
things such as the impact of schedule pressures on work and the effect
of budget slack (excess) on the temptation to gold-plate features or
underwork. The communication overhead is related to the
requirements of intrateam communications. The consequences of
faulty processes are not always clear. For example, schedule pressure
can make personnel work harder; however, more rapid work can also
increase the likelihood of errors, which can affect product quality or
entail inefficient rework later in the project.

The results of the simulated development indicated significant shortfalls


in productivity over what had been forecast on the basis of the
standard COCOMO model. One of the factors causing this was the
result of a classic staffing mistake: an initial underestimation of needed
project resources was followed (in the simulation) by a reactive, quick
increase of resources when the (simulated) project began to fall behind
schedule. This generated (in the simulation) an equally classic
disruptive effect. Namely, the added personnel required learning to
become familiar with the development project and the correlated
instruction had to be supplied by the current project staff, thereby
slowing down the development rather than expediting it.
Communication costs also escalated as a result of the increased
staffing in the simulated developments.

Significantly, managerial productivity estimates could also affect the


outcomes by affecting how personnel spend their time. For example,
when there is “fat in the estimate, Parkinson’s law indicates that people
will use the extra time for training, personal activities…slack
components that can make up more than 20 percent of a person’s time
on the job” (Abdel–Hamid 1996). It is noteworthy that such
overestimates can easily become hardwired into the prediction process
because the prediction model is based on empirical data about
productivity from past projects, which can easily reify past development
problems. This kind of misestimate based on misunderstood prior
experience represents a type of failure to learn from the past. In a
sense, a variant of the Heisenberg Uncertainty Principle is at work here
because of “the difficulty in separating the quality of estimates from the
effect of actions based on those estimates!” Additionally, cognitive
errors like the saliency with which distinctive past events stand out in
memory also affect the interpretation of experience from previous
projects.

A major advantage of the system dynamics model is that it permits


computerized, simulated controlled experiments to test the impact of
different development strategies – that is, hypothesis-testing. With
respect to the productivity paradox, Abdel–Hamid suggests that a
possible explanation for the failure of productivity to increase
adequately in response to new technologies and methods may be what
is referred to in engineering as compensating feedback. This refers to a
phenomenon in complex systems in which potentially beneficial
exogenous effects such as new technologies produce “natural
feedback effects within the intended system that counteract the
intended effect” of the external intervention.

