
What's the difference between public, private, and protected?

• A member (either data member or member function) declared in a private
section of a class can only be accessed by member functions and friends of that
class
• A member (either data member or member function) declared in a protected
section of a class can only be accessed by member functions and friends of that
class, and by member functions and friends of derived classes
• A member (either data member or member function) declared in a public section
of a class can be accessed by anyone
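A minimal sketch in C++ of the three access levels (the class and member names are illustrative):

class Base
{
private:
    int secret;     // member functions and friends of Base only
protected:
    int shared;     // Base, plus member functions and friends of derived classes
public:
    int open;       // anyone
};

class Derived : public Base
{
    void touch()
    {
        shared = 1;     // OK: protected is visible to derived classes
        open = 2;       // OK: public is visible to everyone
        // secret = 3;  // error: private is not accessible here
    }
};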
How can I protect derived classes from breaking when I change the
internal parts of the base class?
A class has two distinct interfaces for two distinct sets of clients:
• It has a public interface that serves unrelated classes
• It has a protected interface that serves derived classes
Unless you expect all your derived classes to be built by your own team, you should
declare your base class's data members as private and use protected inline access
functions by which derived classes will access the private data in the base class. This
way the private data declarations can change, but the derived class's code won't break
(unless you change the protected access functions).
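A minimal sketch of this accessor pattern (names are illustrative):

class Base
{
protected:
    // Derived classes go through these instead of touching the data directly.
    int rate() const { return rate_; }
    void setRate(int r) { rate_ = r; }
private:
    int rate_;   // the representation can change without breaking derived code
};

class Derived : public Base
{
    void adjust() { setRate(rate() + 1); }   // survives changes to rate_
};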

Access control of base class members (C++ only)
When you declare a derived class, an access specifier can precede each base class in the base list of the
derived class. This does not alter the access attributes of the individual members of a base class as seen by
the base class, but allows the derived class to restrict the access control of the members of a base class.
You can derive classes using any of the three access specifiers:
• In a public base class, public and protected members of the base class remain public and protected
members of the derived class.
• In a protected base class, public and protected members of the base class are protected members
of the derived class.
• In a private base class, public and protected members of the base class become private members
of the derived class.
In all cases, private members of the base class remain private. Private members of the base class cannot be
used by the derived class unless friend declarations within the base class explicitly grant access to them.
In the following example, class d is derived publicly from class b. Class b is declared a public base class by
this declaration.
class b { };
class d : public b // public derivation
{ };
You can use both a structure and a class as base classes in the base list of a derived class declaration:
• If the derived class is declared with the keyword class, the default access specifier in its base list
specifiers is private.
• If the derived class is declared with the keyword struct, the default access specifier in its base list
specifiers is public.
In the following example, private derivation is used by default because no access specifier is used in the base
list and the derived class is declared with the keyword class:
struct B
{ };
class D : B // private derivation
{ };
Members and friends of a class can implicitly convert a pointer to an object of that class to a pointer to either:
• A direct private base class
• A protected base class (either direct or indirect)
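A minimal sketch of this rule (names are illustrative); outside the class, the same conversion is ill-formed because the base is inaccessible:

class B { };

class D : private B
{
public:
    // Inside a member of D, a D* converts implicitly to B*
    // even though B is a private base.
    B* asBase() { return this; }
};

int main()
{
    D d;
    B* p1 = d.asBase();   // OK: the conversion happened inside a member of D
    // B* p2 = &d;        // error: B is an inaccessible (private) base of D
}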
• Description: In a multi-form WinForms application we normally create a
common base class that captures behavior shared by all forms in the
application. Every form deriving from that common base form then behaves
like the base form.

Implementing the base-derived model in WPF is a bit more complex because
both the XAML and the code-behind (.cs) class must know which base class to
derive from.
• Let's implement this model in WPF using one practical scenario:
• Example: Suppose we want every WPF dialog used in our application to be a
modal dialog. Our goal is to create a base class which ensures that every
dialog deriving from it will be a modal dialog.
• Implementation Approach:
• Step 1: Create a base class which exposes one public method,
ShowModalDialog(), that associates the WPF window with the currently active
window. Any class deriving from this base class calls this method instead of
the normal ShowDialog() method to show a modal dialog.

// Requires System.Windows, System.Windows.Interop and System.Runtime.InteropServices.
public class DialogBase : Window
{
    [DllImport("user32.dll")]
    internal static extern IntPtr GetActiveWindow();   // returns the active window handle

    public bool? ShowModalDialog()
    {
        WindowInteropHelper helper = new WindowInteropHelper(this);
        helper.Owner = GetActiveWindow();   // set the active window as the owner
        return this.ShowDialog();
    }
}

• Step 2: Any new WPF window now needs to derive from the above base class.

public class SampleDialog : DialogBase
{
    public SampleDialog()
    {
    }
}

• Step 3: Every WPF dialog also has XAML associated with it, so the XAML
needs a reference to the base class as well:

<!-- The root element is the base class; xmlns:my includes its namespace. -->
<my:DialogBase
    xmlns:my="clr-namespace:MyNameSpace"
    Title="Sample WPF Dialog">
    <!-- XAML bindings and other content as usual. -->
</my:DialogBase>

• Step 4: Now we have the basic framework available and we are ready to use
it when showing a WPF dialog. Instead of calling the ShowDialog() method we
call the base class's ShowModalDialog() method.

static void Main()
{
    SampleDialog wpfDialog = new SampleDialog();
    wpfDialog.ShowModalDialog();
}
• Derived Classes
• C++ allows you to use one class declaration, known as a base class, as the basis for the
declaration of a second class, known as a derived class. Suppose you had a class called
CPushButton that implemented a simple push-button, like the one shown in Figure 1.


• Figure 1. Simple push-button.
• The CPushButton class definition might look something like this:
/* 1 */
class CPushButton
{
protected:
    Rect bounds;
    Str255 title;
    Boolean isDefault;
    WindowPtr owningWindow;
    ControlHandle buttonControl;

public:
    CPushButton( WindowPtr owningWindow, Rect *bounds,
                 Str255 title, Boolean isDefault );
    virtual ~CPushButton();
    virtual void Draw();
    virtual void DoClick();
};
• For the moment, ignore the keyword virtual that’s sprinkled throughout the CPushButton class
definition. We’ll get to it in a bit.
• Take a look at the CPushButton data members. Notice that they were declared using the
protected access specifier. protected is very similar to the private access specifier. A data
member or member function marked as private is only accessible from within member
functions of that class. For example, if you defined an object in main() that featured a private
data member, referring to the data member within a member function works just fine, but
referring to the data member within main() will cause a compile error, telling you that you are
trying to access a member inappropriately.
• A member marked as protected is accessible from member functions of the class and also
from member functions of any classes derived from that class. For example, the bounds data
member might be accessed by the CPushButton class's Draw() function, but never by an
outside function like main().

• Deriving One Class From Another


• Why would you want to derive one class from another? You might want to extend the
functionality of a class, or customize it in some way. For example, you might create a
CPictureButton class that creates a push button with a picture instead of a text label. By
basing the CPictureButton class on CPushButton, you automatically inherit all of CPushButton's
data members and member functions.
• Here’s my definition of a CPictureButton class:
/* 2 */
class CPictureButton : public CPushButton
{
protected:
    PicHandle pic;

public:
    CPictureButton( WindowPtr owningWindow, Rect *bounds,
                    PicHandle pic, Boolean isDefault );
    virtual ~CPictureButton();
    virtual void Draw();
    virtual void DoClick();
};
• The first line tells you that this class is derived from the CPushButton class. Every
CPictureButton object you create will inherit all the CPushButton member functions and data
members. Here’s what this means.
• When you create a CPictureButton object, space is allocated for the CPictureButton data
member, as well as for all of the CPushButton data members. This means that a
CPictureButton member function can refer to both CPictureButton and non-private
CPushButton data members. Here's a CPictureButton definition:
/* 3 */
CPictureButton *myPictureButton;

myPictureButton = new CPictureButton( myWindow, &buttonRect,
                                      myPicture, true );
• A myPictureButton member function might refer to pic, which is a CPictureButton data
member, or perhaps bounds or owningWindow, which were provided courtesy of CPushButton.
The point here is, the CPictureButton class didn’t need to define the data members that gave it
its infrastructure. All it needed were the members that provided what wasn’t already provided
by its base class.
• If you declared a third class based on CPictureButton, the new class would inherit the
members all the way up the inheritance chain, from both CPictureButton and CPushButton.
• Access to member functions follows the same rules as access to data members. In the previous
code, the CPictureButton member functions can access all of the non-private CPushButton
member functions. Why non-private? Though your derived class does inherit all of the base
class members, the compiler won't let you access inherited private members.
• To prove this yourself, add a private data member to the CGramps class in this month’s
program, then try to reference it from the class derived from CGramps, which is CPops.
• My general strategy is to mark my member functions as public and my data members as
protected. If for some reason you don’t want to mark your member functions as public, be
sure you at least mark your constructor and destructor as public, otherwise you won’t have
the access you need to create your object!
• In the definition of the CPictureButton class above, you may have noticed the public keyword
on the first line:
• class CPictureButton : public CPushButton

• This keyword tells the compiler how the members of the base class are accessed by the
derived class. The public keyword says that public members from the base class are public in
the derived class, protected members from the base class are protected in the derived class,
and private members are not accessible at all.
• If the private keyword is used, public and protected members from the base class are private
in the derived class, and private members remain inaccessible.
• You don't have to include the access keyword. If the derived class is defined using struct, the
derivation defaults to public. If the derived class is defined using class, the derivation defaults
to private. My preference is to use class to define all my classes and to use the public keyword
in my derived class definitions.
REUSABILITY CONCEPT
• Object-oriented programming is the most preferred programming technique nowadays
largely because of the flexibility of reusability. Reusability is the attribute that makes
object-oriented programming more flexible and attractive.

This style of programming is based on objects. An object is a collection of data and the
functions that operate on that data. Objects are defined as independent entities: the data
inside each object represents the attributes of the object, and the functions are used to
change those attributes as required by a program. Objects act like the spare parts of a
program; they are not limited to any specific program, but can be used in more than one
application as required. Once an object is defined as an independent entity in one program,
it can be used in any other program that needs the same functionality. This reuse of objects
helps in reducing the development time of programs.

This reuse also plays a basic role in inheritance, which is a characteristic of object-oriented
programming. In inheritance, new objects are defined that inherit from existing objects and
provide extended functionality as required. If we require some new object that also needs
some functionality of an existing one, it is easier to inherit from the existing one than to
write all the required code once again.

What is the use of method overriding?

When you want a derived-class implementation to be called in place of the base class's
virtual method, the base class's virtual method is overridden in the derived class.

class A
{
    public virtual void func1()
    {
        Console.Write("Base function1");
    }
}

class B : A
{
    public override void func1()
    {
        Console.Write("Derived function1");
    }
}

static void Main(string[] args)
{
    A a = new B();   // base-class reference to a derived object
    a.func1();       // prints "Derived function1": the override is called
}
Method overriding, in object oriented programming, is a language feature that allows a subclass
to provide a specific implementation of a method that is already provided by one of its
superclasses. The implementation in the subclass overrides (replaces) the implementation in the
superclass.
A subclass can give its own definition of methods which also happen to have the same signature
as the method in its superclass. This means that the subclass's method has the same name and
parameter list as the superclass's overridden method. Constraints on the similarity of return type
vary from language to language, as some languages support covariance on return types.
Method overriding is an important feature that facilitates polymorphism in the design of object-
oriented programs.
Some languages allow the programmer to prevent a method from being overridden, or disallow
method overriding in certain core classes. This may or may not involve an inability to subclass
from a given class.
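As a minimal sketch in C++ (using C++11's final specifier; names are illustrative):

struct Base
{
    virtual void step() final { }   // derived classes may not override step()
};

struct Derived : Base
{
    // void step() { }   // error: cannot override a final function
};

int main() { Derived d; (void)d; }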
In many cases, abstract classes are designed — i.e. classes that exist only in order to have
specialized subclasses derived from them. Such abstract classes have methods that do not
perform any useful operations and are meant to be overridden by specific implementations in the
subclasses. Thus, the abstract superclass defines a common interface which all the subclasses
inherit.
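A minimal sketch of such an abstract superclass in C++ (names are illustrative):

#include <iostream>

class Shape
{
public:
    virtual double area() const = 0;   // pure virtual: no useful base implementation
    virtual ~Shape() { }
};

class Circle : public Shape
{
    double r;
public:
    explicit Circle(double radius) : r(radius) { }
    double area() const override { return 3.14159265 * r * r; }
};

int main()
{
    Circle c(2.0);
    std::cout << c.area() << '\n';   // 12.566...
    // Shape s;   // error: cannot instantiate an abstract class
}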
Method overloading is a feature found in various programming languages such as Ada, C#, C++,
D, and Java that allows the creation of several methods with the same name which differ from
each other in terms of the type of the input and the type of the output of the function.
For example, doTask() and doTask(object O) are overloaded methods. To call the latter, an
object must be passed as a parameter, whereas the former does not require a parameter, and is
called with an empty parameter field. A common error would be to assign a default value to the
object in the second method, which would result in an ambiguous call error, as the compiler
wouldn't know which of the two methods to use.
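A minimal sketch in C++ of that ambiguous-call error, using the doTask example from above:

void doTask() { }
void doTask(int o = 0) { (void)o; }   // the default value makes a bare call ambiguous

int main()
{
    // doTask();   // error: both overloads match a call with no arguments
    doTask(5);     // OK: only doTask(int) matches
}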
Another example would be a Print(object O) method. In this case one might like the method to
be different when printing, for example, text or pictures. The two different methods may be
overloaded as Print(text_object T) and Print(image_object P). If we write overloaded print
methods for all the objects our program will "print", we never have to worry about the type of
the object: the correct overload is chosen, and the call is always Print(something).
Method overloading is usually associated with statically-typed programming languages which
enforce type checking in function calls. When overloading a method, you are really just making
a number of different methods that happen to have the same name. It is resolved at compile time
which of these methods are used.
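A minimal sketch of this in C++ (names are illustrative): two functions share a name and the compiler resolves each call from the argument types.

#include <iostream>
#include <string>

void print(const std::string& text) { std::cout << "text: " << text << '\n'; }
void print(int number)              { std::cout << "number: " << number << '\n'; }

int main()
{
    print("hello");   // resolves to print(const std::string&)
    print(42);        // resolves to print(int)
}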

Object database
From Wikipedia, the free encyclopedia

An object database (also object-oriented database) is a database model in which information
is represented in the form of objects as used in object-oriented programming.
(Figure: example of an object-oriented model.[1])

Object databases are a niche field within the broader DBMS market dominated by relational
database management systems (RDBMS). Object databases have been considered since the early
1980s and 1990s but they have made little impact on mainstream commercial data processing,
though there is some usage in specialized areas.


Overview
When database capabilities are combined with object-oriented (OO) programming language
capabilities, the result is an object database management system (ODBMS).
Today's trend in programming languages is to utilize objects, making OODBMSs ideal for OO
programmers because they can develop the product, store it as objects, and replicate or modify
existing objects to make new objects within the OODBMS. Information today includes not only
data but video, audio, graphs, and photos, which are considered complex data types. Relational
DBMSs aren't natively capable of supporting these complex data types. By
being integrated with the programming language, the programmer can maintain consistency
within one environment because both the OODBMS and the programming language will use the
same model of representation. Relational DBMS projects using complex data types would have
to be divided into two separate tasks: the database model and the application.
As the usage of web-based technology increases with the implementation of Intranets and
extranets, companies have a vested interest in OODBMS to display their complex data. Using a
DBMS that has been specifically designed to store data as objects gives an advantage to those
companies that are geared towards multimedia presentation or organizations that utilize
computer-aided design (CAD)[2].
Some object-oriented databases are designed to work well with object-oriented programming
languages such as Ruby, Python, Perl, Java, C#, Visual Basic .NET, C++, Objective-C and
Smalltalk; others have their own programming languages. ODBMSs use exactly the same model
as object-oriented programming languages.

History
Object database management systems grew out of research during the early to mid-1970s into
having intrinsic database management support for graph-structured objects. The term "object-
oriented database system" first appeared around 1985.[3] Notable research projects included
Encore-Ob/Server (Brown University), EXODUS (University of Wisconsin–Madison), IRIS
(Hewlett-Packard), ODE (Bell Labs), ORION (Microelectronics and Computer Technology
Corporation or MCC), Vodak (GMD-IPSI), and Zeitgeist (Texas Instruments). The ORION
project had more published papers than any of the other efforts. Won Kim of MCC compiled the
best of those papers in a book published by The MIT Press.[4]
Early commercial products included Gemstone (Servio Logic, name changed to GemStone
Systems), Gbase (Graphael), and Vbase (Ontologic). The early to mid-1990s saw additional
commercial products enter the market. These included ITASCA (Itasca Systems), Jasmine
(Fujitsu, marketed by Computer Associates), Matisse (Matisse Software), Objectivity/DB
(Objectivity, Inc.), ObjectStore (Progress Software, acquired from eXcelon which was originally
Object Design), ONTOS (Ontos, Inc., name changed from Ontologic), O2[5] (O2 Technology,
merged with several companies, acquired by Informix, which was in turn acquired by IBM),
POET (now FastObjects from Versant which acquired Poet Software), Versant Object Database
(Versant Corporation), VOSS (Logic Arts) and JADE (Jade Software Corporation). Some of
these products remain on the market and have been joined by new open source and commercial
products such as InterSystems CACHÉ (see the product listings below).
Object database management systems added the concept of persistence to object programming
languages. The early commercial products were integrated with various languages: GemStone
(Smalltalk), Gbase (LISP), Vbase (COP) and VOSS (Virtual Object Storage System for
Smalltalk). For much of the 1990s, C++ dominated the commercial object database management
market. Vendors added Java in the late 1990s and more recently, C#.
Starting in 2004, object databases have seen a second growth period when open source object
databases emerged that were widely affordable and easy to use, because they are entirely written
in OOP languages like Smalltalk, Java or C#, such as db4o (db4objects), DTS/S1 from Obsidian
Dynamics and Perst (McObject), available under dual open source and commercial licensing.
Adoption of object databases
Object databases based on persistent programming acquired a niche in application areas such as
engineering and spatial databases, telecommunications, and scientific areas such as high energy
physics and molecular biology. They have made little impact on mainstream commercial data
processing, though there is some usage in specialized areas of financial services.[6] It is also
worth noting that object databases held the record for the World's largest database (being the first
to hold over 1000 terabytes at Stanford Linear Accelerator Center)[7] and the highest ingest rate
ever recorded for a commercial database at over one Terabyte per hour.
Another group of object databases focuses on embedded use in devices, packaged software, and
real-time systems.

Technical features


Most object databases also offer some kind of query language, allowing objects to be found by a
more declarative programming approach. It is in the area of object query languages, and the
integration of the query and navigational interfaces, that the biggest differences between
products are found. An attempt at standardization was made by the ODMG with the Object
Query Language, OQL.
Access to data can be faster because joins are often not needed (as in a tabular implementation of
a relational database). This is because an object can be retrieved directly without a search, by
following pointers. (It could, however, be argued that "joining" is a higher-level abstraction of
pointer following.)
Another area of variation between products is in the way that the schema of a database is
defined. A general characteristic, however, is that the programming language and the database
schema use the same type definitions.
Multimedia applications are facilitated because the class methods associated with the data are
responsible for its correct interpretation.
Many object databases, for example VOSS, offer support for versioning. An object can be
viewed as the set of all its versions. Also, object versions can be treated as objects in their own
right. Some object databases also provide systematic support for triggers and constraints which
are the basis of active databases.
The efficiency of such a database is also greatly improved in areas which demand massive
amounts of data about one item. For example, a banking institution could get the user's account
information and provide them efficiently with extensive information such as transactions,
account information entries etc. The Big O Notation for such a database paradigm drops from
O(n) to O(1), greatly increasing efficiency in these specific cases.

Standards
The Object Data Management Group (ODMG) was a consortium of object database and object-
relational mapping vendors, members of the academic community, and interested parties. Its goal
was to create a set of specifications that would allow for portable applications that store objects
in database management systems. It published several versions of its specification. The last
release was ODMG 3.0. By 2001, most of the major object database and object-relational
mapping vendors claimed conformance to the ODMG Java Language Binding. Compliance to
the other components of the specification was mixed. In 2001, the ODMG Java Language
Binding was submitted to the Java Community Process as a basis for the Java Data Objects
specification. The ODMG member companies then decided to concentrate their efforts on the
Java Data Objects specification. As a result, the ODMG disbanded in 2001.
Many object database ideas were also absorbed into SQL:1999 and have been implemented in
varying degrees in object-relational database products.
In 2005 Cook, Rai, and Rosenberger proposed to drop all standardization efforts to introduce
additional object-oriented query APIs but rather use the OO programming language itself, i.e.,
Java and .NET, to express queries. As a result, Native Queries emerged. Similarly, Microsoft
announced Language Integrated Query (LINQ) and DLINQ, an implementation of LINQ, in
September 2005, to provide close, language-integrated database query capabilities with its
programming languages C# and VB.NET 9.
In February 2006, the Object Management Group (OMG) announced that they had been granted
the right to develop new specifications based on the ODMG 3.0 specification and the formation
of the Object Database Technology Working Group (ODBT WG). The ODBT WG plans to
create a set of standards that incorporates advances in object database technology (e.g.,
replication), data management (e.g., spatial indexing), and data formats (e.g., XML) and to
include new features into these standards that support domains where object databases are being
adopted (e.g., real-time systems).
In January 2007 the World Wide Web Consortium gave final recommendation status to the
XQuery language. XQuery has enabled a new class of applications that manage hierarchical
data built around the XRX web application architecture that also provide many of the advantages
of object databases. In addition XRX applications benefit by transporting XML directly to client
applications such as XForms without changing data structures.

Advantages and disadvantages

The main benefit of creating a database with objects as data is speed. OODBMS are faster than
relational DBMS because data isn’t stored in relational rows and columns but as objects[8].
Objects have a many to many relationship and are accessed by the use of pointers. Pointers are
linked to objects to establish relationships. Another benefit of OODBMS is that it can be
programmed with small procedural differences without affecting the entire system[9]. This is most
helpful for those organizations that have data relationships that aren’t entirely clear or need to
change these relations to satisfy the new business requirements. This ability to change
relationships leads to another benefit which is that relational DBMS can’t handle complex data
models while OODBMS can.
Benchmarks between ODBMSs and RDBMSs have shown that an ODBMS can be clearly
superior for certain kinds of tasks. The main reason for this is that many operations are
performed using navigational rather than declarative interfaces, and navigational access to data is
usually implemented very efficiently by following pointers.
Critics of navigational database-based technologies like ODBMS suggest that pointer-based
techniques are optimized for very specific "search routes" or viewpoints; for general-purpose
queries on the same information, pointer-based techniques will tend to be slower and more
difficult to formulate than relational. Thus, navigation appears to simplify specific known uses at
the expense of general, unforeseen, and varied future uses. However, with suitable
language support, direct object references may be maintained in addition to normalised, indexed
aggregations, allowing both kinds of access; furthermore, a persistent language may index
aggregations on whatever its content elements return from a call to some arbitrary object access
method, rather than only on attribute value, which allows a query to 'drill down' into complex
data structures.
Other things that work against ODBMS seem to be the lack of interoperability with a great
number of tools/features that are taken for granted in the SQL world, including but not limited to
industry standard connectivity, reporting tools, OLAP tools, and backup and recovery standards.
Additionally, object databases lack a formal mathematical foundation, unlike the
relational model, and this in turn leads to weaknesses in their query support. However, this
objection is offset by the fact that some ODBMSs fully support SQL in addition to navigational
access, e.g. Objectivity/SQL++, Matisse, and InterSystems CACHÉ. Effective use may require
compromises to keep both paradigms in sync.
In fact there is an intrinsic tension between the notion of encapsulation, which hides data and
makes it available only through a published set of interface methods, and the assumption
underlying much database technology, which is that data should be accessible to queries based
on data content rather than predefined access paths. Database-centric thinking tends to view the
world through a declarative and attribute-driven viewpoint, while OOP tends to view the world
through a behavioral viewpoint, maintaining entity-identity independently of changing attributes.
This is one of the many impedance mismatch issues surrounding OOP and databases.
Although some commentators have written off object database technology as a failure, the
essential arguments in its favor remain valid, and attempts to integrate database functionality
more closely into object programming languages continue in both the research and the industrial
communities.
Features of Object oriented Programming
Object-oriented programming languages support all the features of normal programming
languages. In addition they support some important concepts and terminology that have made
the methodology popular.

The important features of Object Oriented programming are:

• Inheritance
• Polymorphism
• Data Hiding
• Encapsulation
• Overloading
• Reusability
Let us see a brief overview of these important features of Object Oriented programming.

But before that, it is important to know two new terms used in Object Oriented programming,
namely:
• Objects
• Classes

Objects:
An object is a bundle of data and the functions that operate on that data; in other words, an
object is an instance of a class.

Classes:
A class contains data and functions bundled together as a unit; in other words, a class is a
collection of similar objects. Defining a class only creates a template or skeleton, so no
memory is allocated when a class is defined. Memory is occupied only by objects.

Example:

class ClassName
{
    // data members
    // member functions
};

int main()
{
    ClassName objectname1, objectname2;   // objects of the class
}

In other words, classes act as data types for objects.

Member functions:
The functions defined inside the class, as above, are called member functions.
This is where the concept of data hiding comes in.

Data Hiding:
This concept is the heart of object-oriented programming. Data is hidden inside the class by
declaring it as private. When data or functions are declared private, they can be accessed only
by the class in which they are defined. When data or functions are declared public, they can be
accessed anywhere outside the class. Object-oriented programming emphasizes protecting the
data, which is vital in any system. This is done by declaring data as private, making it
accessible only to the class in which it is defined. This concept is called data hiding.
Member functions, however, can be kept public.
So the above class structure becomes:

Example:

class ClassName
{
private:
    datatype data;

public:
    // member functions
};

int main()
{
    ClassName objectname1, objectname2;
}

Encapsulation:
The technical term for combining data and functions together as a bundle is encapsulation.

Inheritance:
Inheritance, as the name suggests, is the concept of deriving the properties of an existing
class to get a new class or classes. We may have common features or characteristics needed
by a number of classes, so those features can be placed in a common class called the base
class, and the other classes which share these characteristics can inherit from it, defining
only the new things they add on their own. These classes are called derived classes. The main
advantage of inheritance in object-oriented programming is that it helps reduce code size,
since the common characteristics are placed separately in the base class and merely referenced
in the derived classes. This gives users the important capability called reusability.

Reusability:
Reusability is achieved through inheritance, explained above. Reusability is the re-use of an
existing structure, without changing it, by adding new features or characteristics to it. It is
very much needed by programmers in many situations, and it gives users the following
advantages.

It helps in reducing code size, since classes can simply be derived from existing ones and only
the new features need to be added, which saves the user's time.
For instance, suppose there is a class defined to draw graphical figures, and a user wants to
draw a graphical figure and also color it. Instead of defining a new class that both draws a
graphical figure and colors it, the user can reuse the existing drawing class by deriving from
it and adding only the new feature, coloring, to the derived class, as the sketch below
illustrates.
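A minimal sketch of that scenario in C++ (names are illustrative):

class Figure
{
public:
    void draw() { /* draw the figure */ }
};

// Reuse Figure unchanged; add only the new capability: color.
class ColoredFigure : public Figure
{
public:
    void setColor(int rgb) { color = rgb; }
private:
    int color = 0;
};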

Polymorphism and Overloading:

"Poly" means many. Polymorphism, as the name suggests, is a single item appearing in
different forms or ways: making a function or operator act in different forms depending on
where it appears is called polymorphism. Overloading is a kind of polymorphism. For instance,
we know that + and - operate on integer data types to perform arithmetic addition and
subtraction. With operator overloading we define new behaviors for these operators and make
them operate on different data types; in other words, we overload the existing functionality
with new functionality. This is a very important feature of object-oriented programming
methodology, as it extends the handling of data types and operations.
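A minimal sketch of operator overloading in C++ (names are illustrative): + is given a new meaning for a user-defined type.

#include <iostream>

struct Point
{
    int x, y;
};

Point operator+(const Point& a, const Point& b)
{
    return Point{ a.x + b.x, a.y + b.y };   // "+" now works on Points
}

int main()
{
    Point p = Point{1, 2} + Point{3, 4};
    std::cout << p.x << ", " << p.y << '\n';   // prints 4, 6
}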

Hiding Data within Object-Oriented Programming


• July 29, 2004
• By Matt Weisfeld

Introduction
This is the seventh installment in a series of articles about fundamental object-oriented
(OO) concepts. The material presented in these articles is based on material from the
second edition of my book, The Object-Oriented Thought Process. The
Object-Oriented Thought Process is intended for anyone who needs to understand the
basic object-oriented concepts before jumping into the code. Click here to start at the
beginning of the series.
Now that we have covered the conceptual basics of classes and objects, we can start to
explore specific concepts in more detail. Remember that there are three criteria that are
applied to object-oriented languages: They have to implement encapsulation,
inheritance, and polymorphism. Of course, these are not the only important terms, but
they are a great place to start a discussion.
In the previous article, this article, and several of the ones that follow, we will focus on a
single concept and explore how it fits in to the object-oriented model. We will also begin
to get much more involved with code. In keeping with the code examples used in the
previous articles, Java will be the language used to implement the concepts in code.
One of the reasons that I like to use Java is that you can download the Java
compiler for personal use from the Sun Microsystems Web site http://java.sun.com/. You
can download the J2SE 1.4.2 SDK (software development kit) to compile and execute
these applications; I will provide the code listings, figures, and output for all examples
in this article. I have the SDK 1.4.0 loaded on my machine. See the previous article in
this series for detailed descriptions of compiling and running all the code examples in
this series.
Checking Account Example
Recall that in the previous article, we created a class diagram that is represented in the
following UML diagram; see Figure 1.

Figure 1: UML Diagram for Checking Account Class


This class is designed to illustrate the concept of encapsulation. This class
encapsulates both the data (attributes) and methods (behaviors). Encapsulation is a
fundamental concept of object-oriented design—all objects contain both attributes and
behavior. The UML class diagram is composed of only three parts: the class name, the
attributes, and the behaviors. This class diagram maps directly to the code in Listing 1.
class CheckingAccount {

    private double balance = 0;

    public void setBalance(double bal) {
        balance = bal;
    }

    public double getBalance() {
        return balance;
    }
}

Listing 1: CheckingAccount.java
Data Hiding
Returning to the actual CheckingAccount example, we can see that while the class
contains both attributes and behavior, not all of the class is accessible to other classes.
For example, consider again the balance attribute. Note that balance is defined as
private.
private double balance = 0;
We proved in the last article that attempting to access this attribute directly from an
application would produce an error. The application that produces this error is shown in
Listing 2.
class Encapsulation {

    public static void main(String args[]) {

        System.out.println("Starting myEncapsulation...");
        CheckingAccount myAccount = new CheckingAccount();
        myAccount.balance = 40.00;   // compile error: balance is private
        System.out.println("Balance = " + myAccount.getBalance());
    }
}
Listing 2: Encapsulation.java
The offending line is where this main application attempts to set balance directly.

myAccount.balance = 40.00;

This line violates the rule of data hiding. As we saw in last month's article, the compiler
does not allow this; it rejects the assignment precisely because the access was
other languages, allows for the attribute to be declared as public. In this case, the main
application would indeed be allowed to directly set the value of balance. This then would
break the object-oriented concept of data hiding and would not be considered a proper
object-oriented design.
This is one area where the importance of the design comes in. If you abide by the rule
that all attributes are private, all attributes of an object are hidden, thus the term data
hiding. This is so important because the compiler now can enforce the data hiding rule.
If you declare all of a class's attributes as private, a rogue developer cannot directly
access the attributes from an application. Basically, you get this protection checking for
free.
Whereas the class's attributes are hidden, the methods in this example are designated
as public.
public void setBalance(double bal) {
    balance = bal;
}

public double getBalance() {
    return balance;
}
- Data hiding is a characteristic of object-oriented programming. Because an
object can only be associated with data in predefined classes or templates, the
object can only "know" about the data it needs to know about. There is no
possibility that someone maintaining the code may inadvertently point to or
otherwise access the wrong data. Thus, all data not required by an object can
be said to be "hidden."

What is the difference between public, private, protected inheritance?

Answer #1:
public --> protected members of the base class stay protected in the derived class, and
public members stay public in the derived class.

protected --> public and protected members of the base class become protected in the
derived class.

private --> public and protected members of the base class become private in the
derived class.

Re: What is the difference between public, private, protected inheritance?

Answer #2:
public: class members are accessible from outside the class.
private: class members can be accessed only within the class.
protected: class members can be accessed within the class and from derived classes.
Templates
Published by Juan Soulie
Last update on Nov 16, 2007 at 9:36am UTC

Function templates
Function templates are special functions that can operate with generic types. This allows us to create a
function template whose functionality can be adapted to more than one type or class without
repeating the entire code for each type.

In C++ this can be achieved using template parameters. A template parameter is a special kind of
parameter that can be used to pass a type as an argument: just as regular function parameters can
be used to pass values to a function, template parameters allow types to be passed to a function
as well. These function templates can use those parameters as if they were any other regular type.

The format for declaring function templates with type parameters is:

template <class identifier> function_declaration;


template <typename identifier> function_declaration;

The only difference between both prototypes is the use of either the keyword class or the keyword
typename. Its use is indistinct, since both expressions have exactly the same meaning and behave
exactly the same way.

For example, to create a template function that returns the greater one of two objects we could use:

template <class myType>
myType GetMax (myType a, myType b) {
    return (a>b? a : b);
}

Here we have created a template function with myType as its template parameter. This template
parameter represents a type that has not yet been specified, but that can be used in the template
function as if it were a regular type. As you can see, the function template GetMax returns the greater
of two parameters of this still-undefined type.

To use this function template we use the following format for the function call:

function_name <type> (parameters);

For example, to call GetMax to compare two integer values of type int we can write:
int x, y;
GetMax <int> (x, y);

When the compiler encounters this call to a template function, it uses the template to automatically
generate a function replacing each appearance of myType by the type passed as the actual template
parameter (int in this case) and then calls it. This process is automatically performed by the compiler
and is invisible to the programmer.

Here is the entire example:

// function template
#include <iostream>
using namespace std;

template <class T>
T GetMax (T a, T b) {
    T result;
    result = (a>b)? a : b;
    return (result);
}

int main () {
    int i=5, j=6, k;
    long l=10, m=5, n;
    k=GetMax<int>(i,j);
    n=GetMax<long>(l,m);
    cout << k << endl;
    cout << n << endl;
    return 0;
}

Output:
6
10

In this case, we have used T as the template parameter name instead of myType because it is shorter
and in fact is a very common template parameter name. But you can use any identifier you like.

In the example above we used the function template GetMax() twice. The first time with arguments
of type int and the second one with arguments of type long. The compiler has instantiated and then
called each time the appropriate version of the function.

As you can see, the type T is used within the GetMax() template function even to declare new
objects of that type:

T result;

Therefore, result will be an object of the same type as the parameters a and b when the function
template is instantiated with a specific type.

In this specific case where the generic type T is used as a parameter for GetMax the compiler can find
out automatically which data type has to instantiate without having to explicitly specify it within angle
brackets (like we have done before specifying <int> and <long>). So we could have written instead:

int i, j;
GetMax (i, j);

Since both i and j are of type int, the compiler can automatically deduce that the template
parameter can only be int. This implicit method produces exactly the same result:

// function template II
#include <iostream>
using namespace std;

template <class T>
T GetMax (T a, T b) {
    return (a>b? a : b);
}

int main () {
    int i=5, j=6, k;
    long l=10, m=5, n;
    k=GetMax(i,j);
    n=GetMax(l,m);
    cout << k << endl;
    cout << n << endl;
    return 0;
}

Output:
6
10

Notice how in this case, we called our function template GetMax() without explicitly specifying the
type between angle-brackets <>. The compiler automatically determines what type is needed on each
call.

Because our template function includes only one template parameter (class T) and the function
template itself accepts two parameters, both of this same type T, we cannot call our function
template with two objects of different types as arguments:

int i;
long l;
k = GetMax (i, l);

This would not be correct, since our GetMax function template expects two arguments of the same
type, and in this call to it we use objects of two different types.

We can also define function templates that accept more than one type parameter, simply by specifying
more template parameters between the angle brackets. For example:

template <class T, class U>
T GetMin (T a, U b) {
    return (a<b? a : b);
}

In this case, our function template GetMin() accepts two parameters of different types and returns
an object of the same type as the first parameter (T) that is passed. For example, after that
declaration we could call GetMin() with:

int i, j;
long l;
i = GetMin<int,long> (j, l);

or simply:

i = GetMin (j,l);

even though j and l have different types, since the compiler can determine the appropriate
instantiation anyway.

Class templates
We also have the possibility to write class templates, so that a class can have members that use
template parameters as types. For example:

template <class T>
class mypair {
    T values [2];
  public:
    mypair (T first, T second)
    {
      values[0]=first; values[1]=second;
    }
};

The class that we have just defined serves to store two elements of any valid type. For example, if we
wanted to declare an object of this class to store two integer values of type int with the values 115
and 36 we would write:
mypair<int> myobject (115, 36);

This same class can also be used to create an object that stores any other type:

mypair<double> myfloats (3.0, 2.18);

The only member function in the previous class template has been defined inline within the class
declaration itself. In case that we define a function member outside the declaration of the class
template, we must always precede that definition with the template <...> prefix:

// class templates
#include <iostream>
using namespace std;

template <class T>
class mypair {
    T a, b;
  public:
    mypair (T first, T second)
      {a=first; b=second;}
    T getmax ();
};

template <class T>
T mypair<T>::getmax ()
{
  T retval;
  retval = a>b? a : b;
  return retval;
}

int main () {
  mypair <int> myobject (100, 75);
  cout << myobject.getmax();
  return 0;
}

Output:
100

Notice the syntax of the definition of member function getmax:

template <class T>
T mypair<T>::getmax ()

Confused by so many T's? There are three T's in this declaration: The first one is the template
parameter. The second T refers to the type returned by the function. And the third T (the one
between angle brackets) is also a requirement: It specifies that this function's template parameter is
also the class template parameter.

Template specialization
If we want to define a different implementation for a template when a specific type is passed as
template parameter, we can declare a specialization of that template.

For example, let's suppose that we have a very simple class called mycontainer that can store one
element of any type and that it has just one member function called increase, which increases its
value. But we find that when it stores an element of type char it would be more convenient to have a
completely different implementation with a function member uppercase, so we decide to declare a
class template specialization for that type:

// template specialization
#include <iostream>
using namespace std;

// class template:
template <class T>
class mycontainer {
    T element;
  public:
    mycontainer (T arg) {element=arg;}
    T increase () {return ++element;}
};

// class template specialization:
template <>
class mycontainer <char> {
    char element;
  public:
    mycontainer (char arg) {element=arg;}
    char uppercase ()
    {
      if ((element>='a')&&(element<='z'))
        element+='A'-'a';
      return element;
    }
};

int main () {
  mycontainer<int> myint (7);
  mycontainer<char> mychar ('j');
  cout << myint.increase() << endl;
  cout << mychar.uppercase() << endl;
  return 0;
}

Output:
8
J

This is the syntax used in the class template specialization:

template <> class mycontainer <char> { ... };

First of all, notice that we precede the class template name with an empty template<> parameter
list. This is to explicitly declare it as a template specialization.

But more important than this prefix, is the <char> specialization parameter after the class template
name. This specialization parameter itself identifies the type for which we are going to declare a
template class specialization (char). Notice the differences between the generic class template and
the specialization:

template <class T> class mycontainer { ... };
template <> class mycontainer <char> { ... };

The first line is the generic template, and the second one is the specialization.

When we declare specializations for a template class, we must also define all its members, even those
exactly equal to the generic template class, because there is no "inheritance" of members from the
generic template to the specialization.

Type conversions
An expression of a given type is implicitly converted in the following situations:
• The expression is used as an operand of an arithmetic or logical operation.
• The expression is used as a condition in an if statement or an iteration statement (such as a for
loop). The expression will be converted to a Boolean (or an integer in C89).
• The expression is used in a switch statement. The expression will be converted to an integral type.
• The expression is used as an initialization. This includes the following:
○ An assignment is made to an lvalue that has a different type than the assigned value.
○ A function is provided an argument value that has a different type than the parameter.
○ The value specified in the return statement of a function has a different type from the
defined return type for the function.
You can perform explicit type conversions using a cast expression, as described in Cast expressions. The
following sections discuss the conversions that are allowed by either implicit or explicit conversion, and the
rules governing type promotions:
• Arithmetic conversions and promotions
• Lvalue-to-rvalue conversions
• Pointer conversions
• Reference conversions (C++ only)
• Qualification conversions (C++ only)
• Function argument conversions
What is Type Conversion
Type conversion is the process of converting a value of one type into another. In other words,
converting an expression of a given type into another type is called type casting.

How to achieve this

There are two ways of achieving type conversion, namely:

Automatic Conversion, otherwise called Implicit Conversion

Type casting, otherwise called Explicit Conversion

Let us see each of these in detail:

Automatic Conversion, otherwise called Implicit Conversion

This is not done by any cast operator; the value gets automatically converted to the type of
the variable to which it is assigned.

Let us see this with an example:

#include <iostream>

int main()
{
    short x = 6000;
    int y;
    y = x;   // x is implicitly converted from short to int
    return 0;
}

In the above example the short variable x is implicitly converted to int and assigned to the
integer variable y.

So as above it is possible to convert short to int, int to float and so on.

Type casting, otherwise called Explicit Conversion

Explicit conversion can be done using the type cast operator, and the general syntax is

datatype (expression);

where datatype is the type to which the programmer wants the expression converted.

In C++, type casting can be done in either of the two ways mentioned below, namely:

C-style casting
C++-style casting

C-style casting takes the syntax

(type) expression

This can also be used in C++.

Apart from the above, the other form of type casting, specific to the C++ programming
language, is C++-style casting:

type (expression)

This approach was adopted because it provides more clarity to C++ programmers than C-style
casting. For instance, the C-style cast

(type) firstVariable * secondVariable

is not clear, but the C++-style cast is much clearer:

type (firstVariable) * secondVariable

Let us see the concept of type casting in C++ with a small example:

#include <iostream>
using namespace std;

int main()
{
    int a;
    float b, c;
    cout << "Enter the value of a: ";
    cin >> a;
    cout << "\nEnter the value of b: ";
    cin >> b;
    c = float(a) + b;   // C++-style cast: a is explicitly converted to float
    cout << "\nThe value of c is: " << c;
    return 0;
}

The output of the above program is

Enter the value of a: 10
Enter the value of b: 12.5
The value of c is: 22.5
Type conversion or typecasting refers to changing an entity of one data type into another. This is
done to take advantage of certain features of type hierarchies. For instance, values from a more
limited set, such as integers, can be stored in a more compact format and later converted to a
different format enabling operations not previously possible, such as division with several
decimal places' worth of accuracy. In object-oriented programming languages, type conversion
allows programs to treat objects of one type as one of their ancestor types to simplify interacting
with them.
There are two types of conversion: implicit and explicit. The term for implicit type conversion is
coercion. The most common form of explicit type conversion is known as casting. Explicit type
conversion can also be achieved with separately defined conversion routines such as an
overloaded object constructor.
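A minimal sketch in C++ showing both kinds of conversion (names are illustrative):

int main()
{
    int whole = 7;
    double coerced = whole;            // implicit conversion (coercion): 7.0
    double half = (double)whole / 2;   // explicit conversion (cast): 3.5
    (void)coerced; (void)half;
}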

Entity-relationship model
From Wikipedia, the free encyclopedia

(Figure: a sample entity-relationship diagram using Chen's notation)

In software engineering, an entity-relationship model (ERM) is an abstract and conceptual
representation of data. Entity-relationship modeling is a database modeling method, used to
produce a type of conceptual schema or semantic data model of a system, often a relational
database, and its requirements in a top-down fashion. Diagrams created by this process are called
entity-relationship diagrams, ER diagrams, or ERDs.
The definitive reference for entity-relationship modeling is Peter Chen's 1976 paper.[1] However,
variants of the idea existed previously,[2] and have been devised subsequently.

Overview
The first stage of information system design uses these models during the requirements analysis
to describe information needs or the type of information that is to be stored in a database. The
data modeling technique can be used to describe any ontology (i.e. an overview and
classifications of used terms and their relationships) for a certain area of interest. In the case of
the design of an information system that is based on a database, the conceptual data model is, at a
later stage (usually called logical design), mapped to a logical data model, such as the relational
model; this in turn is mapped to a physical model during physical design. Note that sometimes,
both of these phases are referred to as "physical design".
There are a number of conventions for entity-relationship diagrams (ERDs). The classical
notation mainly relates to conceptual modeling. There are a range of notations employed in
logical and physical database design, such as IDEF1X.

The building blocks: entities, relationships, and attributes

(Figures: two related entities; an entity with an attribute; a relationship with an attribute;
a primary key)
An entity may be defined as a thing which is recognized as being capable of an independent
existence and which can be uniquely identified. An entity is an abstraction from the complexities
of some domain. When we speak of an entity we normally speak of some aspect of the real world
which can be distinguished from other aspects of the real world.[3]
An entity may be a physical object such as a house or a car, an event such as a house sale or a car
service, or a concept such as a customer transaction or order. Although the term entity is the one
most commonly used, following Chen we should really distinguish between an entity and an
entity-type. An entity-type is a category. An entity, strictly speaking, is an instance of a given
entity-type. There are usually many instances of an entity-type. Because the term entity-type is
somewhat cumbersome, most people tend to use the term entity as a synonym for this term.
Entities can be thought of as nouns. Examples: a computer, an employee, a song, a mathematical
theorem. Entities are represented as rectangles.
A relationship captures how two or more entities are related to one another. Relationships can be
thought of as verbs, linking two or more nouns. Examples: an owns relationship between a
company and a computer, a supervises relationship between an employee and a department, a
performs relationship between an artist and a song, a proved relationship between a
mathematician and a theorem. Relationships are represented as diamonds, connected by lines to
each of the entities in the relationship.
The model's linguistic aspect described above is utilized in the declarative database query
language ERROL, which mimics natural language constructs.
Entities and relationships can both have attributes. Examples: an employee entity might have a
Social Security Number (SSN) attribute; the proved relationship may have a date attribute.
Attributes are represented as ellipses connected to their owning entity sets by a line.
Every entity (unless it is a weak entity) must have a minimal set of uniquely identifying
attributes, which is called the entity's primary key.
Entity-relationship diagrams don't show single entities or single instances of relations. Rather,
they show entity sets and relationship sets. Example: a particular song is an entity. The collection
of all songs in a database is an entity set. The eaten relationship between a child and her lunch is
a single relationship. The set of all such child-lunch relationships in a database is a relationship
set. In other words, a relationship set corresponds to a relation in mathematics, while a
relationship corresponds to a member of the relation.
Certain cardinality constraints on relationship sets may be indicated as well.
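Since the rest of this compilation uses C++, here is a loose, hypothetical C++ rendering of entity sets and a relationship set; all names (Employee, Department, Supervises) are invented for illustration:

#include <string>
#include <vector>

// Entity-types rendered as structs; each instance is an entity.
struct Employee   { int ssn; std::string name; };   // ssn acts as the primary key
struct Department { int id;  std::string name; };

// One relationship instance links one employee to one department.
struct Supervises { int employeeSsn; int departmentId; };

int main()
{
    // Entity sets: collections of all entities of one type.
    std::vector<Employee>   employees;
    std::vector<Department> departments;

    // Relationship set: all supervises-links in the "database".
    std::vector<Supervises> supervises;
    return 0;
}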
Re: Difference: Object Oriented Analysis (OOA) and Object Oriented Design (OOD)?

Answer #1: Object-Oriented Analysis (OOA) aims to model the problem domain, the problem
to be solved, by developing an OO system. The source of the analysis is generally a written
requirements statement. Object-Oriented Design (OOD) is an activity of looking for logical
solutions to solve a problem by using encapsulated entities called objects.

Answer #2: OOA focuses on what the system does, OOD on how the system does it.
Object diagrams show instances instead of classes. They are useful for explaining small pieces with complicated
relationships, especially recursive relationships.
This small class diagram shows that a university Department can contain lots of other Departments.

Class Diagrams
A class diagram focuses on a set of classes (see Chapter 1) and the structural relationships among them
(see Chapter 2). It may also show interfaces (see the section “Interfaces, Ports, and Connectors” in Chapter
1).
The UML allows you to draw class diagrams that have varying levels of detail. One useful way to classify
these diagrams involves three stages of a typical software development project: requirements, analysis, and
design. These stages are discussed in the following sections.

Object diagram
From Wikipedia, the free encyclopedia

Example of an object diagram.

An object diagram in the Unified Modeling Language (UML), is a diagram that shows a
complete or partial view of the structure of a modeled system at a specific time.
An object diagram focuses on some particular set of object instances and attributes, and the links
between the instances. A correlated set of object diagrams provides insight into how an arbitrary
view of a system is expected to evolve over time. Object diagrams are more concrete than class
diagrams, and are often used to provide examples, or act as test cases for the class diagrams.
Only those aspects of a model that are of current interest need be shown on an object diagram.


Object diagram topics
Instance specifications
Each object and link on an object diagram is represented by an InstanceSpecification. This can
show an object's classifier (e.g. an abstract or concrete class) and instance name, as well as
attributes and other structural features using slots. Each slot corresponds to a single attribute or
feature, and may include a value for that feature.
The name on an instance specification optionally shows an instance name, a ':' separator, and
optionally one or more classifier names separated by commas. The contents of slots, if any, are
included below the names, in a separate attribute compartment. A link is shown as a solid line,
and represents an instance of an association.
Object diagram example
Initially, when n=2, and f(n-2) = 0, and f(n-1) = 1, then f(n) = 0 + 1 = 1.

As an example, consider one possible way of modeling production of the Fibonacci sequence.
In the first UML object diagram on the right, the instance in the leftmost instance specification is
named v1, has IndependentVariable as its classifier, plays the NMinus2 role within the
FibonacciSystem, and has a slot for the val attribute with a value of 0. The second object is
named v2, is of class IndependentVariable, plays the NMinus1 role, and has val = 1. The
DependentVariable object is named v3, and plays the N role. The topmost instance, an
anonymous instance specification, has FibonacciFunction as its classifier, and may have an
instance name, a role, and slots, but these are not shown here. The diagram also includes three
named links, shown as lines. Links are instances of an association.
After the first iteration, when n = 3, and f(n-2) = 1, and f(n-1) = 1, then f(n) = 1 + 1
= 2.

In the second diagram, at a slightly later point in time, the IndependentVariable and
DependentVariable objects are the same, but the slots for the val attribute have different values.
The role names are not shown here.
After several more iterations, when n = 7, and f(n-2) = 5, and f(n-1) = 8, then f(n) =
5 + 8 = 13.

In the last object diagram, a still later snapshot, the same three objects are involved. Their slots
have different values. The instance and role names are not shown here.

Usage
If you are using a UML modeling tool, you will typically draw object diagrams using some other
diagram type, such as on a class diagram. An object instance may be called an instance
specification or just an instance. A link between instances is generally referred to as a link. Other
UML entities, such as an aggregation or composition symbol (a diamond) may also appear on an
object diagram.

Object oriented technology is based on a few simple concepts that, when combined,
produce significant improvements in software construction. Unfortunately, the basic
concepts of the technology often get lost in the excitement over more advanced
features. The basic characteristics of the OOM are explained ahead.
Characteristics of Object Oriented Technology:

* Identity
* Classification
* Polymorphism
* Inheritance
Identity:
The term Object Oriented means that we organize the software as a collection of
discrete objects. An object is a software package that contains the related data and the
procedures. Although objects can be used for any purpose, they are most frequently
used to represent real-world objects such as products, customers and sales orders.
The basic idea is to define software objects that can interact with each other just as
their real world counterparts do, modeling the way a system works and providing a
natural foundation for building systems to manage that business.
Classification:
In principle, packaging data and procedures together makes perfect sense. In practice,
it raises an awkward problem. Suppose we have many objects of the same general
type- for example a thousand product objects, each of which could report its current
price. Any data these objects contained could easily be unique for each object. Stock
number, price, storage dimensions, stock on hand, reorder quantity, and any other
values would differ from one product to the next. But the methods for dealing with
these data might well be the same. Do we have to copy these methods and duplicate
them in every object?
No, this would be ridiculously inefficient. All object-oriented languages provide a simple
way of capturing these commonalties in a single place. That place is called a class.
The class acts as a kind of template for objects of similar nature.
Polymorphism:
Polymorphism is a Greek word meaning "many forms". It is used to express the fact
that the same message can be sent to many different objects and interpreted in
different ways by each object. For example, we could send the message "move" to
many different kinds of objects. They would all respond to the same message, but they
might do so in very different ways. The move operation will behave differently for a
window and differently for a chess piece.
Inheritance:
Inheritance is the sharing of attributes and operations among classes based on a hierarchical
relationship. A class can be defined in a generalized form and then specialized in a
subclass. Each subclass inherits all the properties of its superclass and adds its own
properties to them. For example, a car and a bicycle are subclasses of a class road
vehicle, as they both inherit all the qualities of a road vehicle and add their own
properties to it.
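A small sketch tying the last two ideas together, using the road-vehicle example from the text (the member names and printed messages are invented for illustration):

#include <iostream>

class RoadVehicle {
public:
    RoadVehicle() : wheels(4) {}
    virtual ~RoadVehicle() {}
    virtual void move() { std::cout << "vehicle moves\n"; } // same message, many forms
protected:
    int wheels;
};

class Car : public RoadVehicle {
public:
    void move() { std::cout << "car drives\n"; }            // overrides RoadVehicle::move
};

class Bicycle : public RoadVehicle {
public:
    Bicycle() { wheels = 2; }                               // adds its own property value
    void move() { std::cout << "bicycle is pedalled\n"; }
};

int main()
{
    Car c;
    Bicycle b;
    RoadVehicle* v = &b;
    c.move();  // prints "car drives"
    v->move(); // prints "bicycle is pedalled": each object interprets the message its own way
    return 0;
}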
Dynamic Binding
Last updated Mar 1, 2004.
Earlier, I explained how dynamic binding and polymorphism are related. However, I didn't explain how this
relationship is implemented. Dynamic binding refers to the mechanism that resolves a virtual function call at
runtime. This mechanism is activated when you call a virtual member function through a reference or a pointer
to a polymorphic object. Imagine a class hierarchy in which a class called Shape serves as a base class for
other classes (Triangle and Square):
class Shape
{
public:
virtual void Draw() const {} //dummy implementation
virtual ~Shape() {} //virtual destructor so deletion through a Shape* is safe
//..
};
class Square : public Shape
{
public:
void Draw() const { /*draw a square*/ } //overriding Shape::Draw
};
class Triangle : public Shape
{
public:
void Draw() const { /*draw a triangle*/ } //overriding Shape::Draw
};
Draw() is a dummy function in Shape. It's declared virtual in the base class to enable derived classes to
override it and provide individual implementations. The beauty in polymorphism is that a pointer or a reference
to Shape may actually point to an object of class Square or Triangle:
void func(const Shape* s)
{
s->Draw();
}
int main()
{
Shape *p1 = new Triangle;
Shape *p2 = new Square;
func(p1);
func(p2);
delete p1; //safe because Shape's destructor is virtual
delete p2;
}
C++ distinguishes between a static type and a dynamic type of an object. The static type is determined at
compile time. It's the type specified in the declaration. For example, the static type of both p1 and p2 is
"Shape *". However, the dynamic types of these pointers are determined by the type of object to which they
point: "Triangle *" and "Square *", respectively. When func() calls the member function Draw(), C++
resolves the dynamic type of s and ensures that the appropriate version of Draw() is invoked. Notice how
powerful dynamic binding is: You can derive additional classes from Shape that override Draw() even after
func() is compiled. When func() invokes Draw(), C++ will still resolve the call according to the dynamic
type of s.
As the example shows, dynamic binding isn't confined to the resolution of member function calls at runtime;
rather, it applies to the binding of a dynamic type to a pointer or a reference that may differ from its static type.
Such a pointer or reference is said to be polymorphic. Likewise, the object bound to such a pointer is a
polymorphic object.
Dynamic binding exacts a toll, though. Resolving the dynamic type of an object takes place at runtime and
therefore incurs performance overhead. However, this penalty is negligible in most cases. Another advantage
of dynamic binding is reuse. If you decide to introduce additional classes at a later stage, you only have to
override Draw() instead of writing entire classes from scratch. Furthermore, existing code will still function
correctly once you've added new classes. You only have to compile the new code and relink the program.

Multiple inheritance
From Wikipedia, the free encyclopedia

Multiple inheritance refers to a feature of some object-oriented programming languages in
which a class can inherit behaviors and features from more than one superclass. This contrasts
with single inheritance, where a class may inherit from at most one superclass.
Languages that support multiple inheritance include: Eiffel, C++, Dylan, Python, Perl, Perl 6,
Curl, Common Lisp (via CLOS), OCaml, Tcl (via Incremental Tcl)[1], and Object REXX (via the
use of mixin classes).


Overview
Multiple inheritance allows a class to take on functionality from multiple other classes, such as
allowing a class named StudentMusician to inherit from a class named Person, a class named
Musician, and a class named Worker. This can be abbreviated StudentMusician : Person,
Musician, Worker.
Ambiguities arise in multiple inheritance, as in the example above, if for instance the class
Musician inherited from Person and Worker and the class Worker inherited from Person. This is
referred to as the Diamond problem. There would then be the following rules:
Worker : Person
Musician : Person, Worker
StudentMusician : Person, Musician, Worker
If a compiler is looking at the class StudentMusician it needs to know whether it should join
identical features together, or whether they should be separate features. For instance, it would
make sense to join the "Age" features of Person together for StudentMusician. A person's age
doesn't change if you consider them a Person, a Worker, or a Musician. It would, however, make
sense to separate the feature "Name" in Person and Musician if they use a different stage name
than their given name. The options of joining and separating are both valid in their own context
and only the programmer knows which option is correct for the class they are designing.
Languages have different ways of dealing with these problems of repeated inheritance.
• Eiffel allows the programmer to explicitly join or separate features that are
being inherited from superclasses. Eiffel will automatically join features
together if they have the same name and implementation. The class writer
has the option to rename the inherited features to separate them. Eiffel also
allows explicit repeated inheritance such as A: B, B.
• C++ requires that the programmer state which parent class the feature to
use should come from, e.g. "Person::Age". C++ does not support
explicit repeated inheritance since there would be no way to qualify which
superclass to use (see criticisms). C++ also allows a single instance of the
multiply inherited class to be created via the virtual inheritance mechanism
(i.e. "Worker::Person" and "Musician::Person" will reference the same
object); a short sketch of this follows below the list.
• Perl uses the list of classes to inherit from as an ordered list. The compiler
uses the first method it finds by depth-first searching of the superclass list or
using the C3 linearization of the class hierarchy. Various extensions provide
alternative class composition schemes. Python has the same structure, but
unlike Perl includes it in the syntax of the language. In Perl and Python, the
order of inheritance affects the class semantics (see criticisms).
• The Common Lisp Object System allows full programmer control of method
combination, and if this is not enough, the Metaobject Protocol gives the
programmer a means to modify the inheritance, method dispatch, class
instantiation, and other internal mechanisms without affecting the stability of
the system.
• Logtalk supports both interface and implementation multi-inheritance,
allowing the declaration of method aliases that provide both renaming and
access to methods that would be masked out by the default conflict
resolution mechanism.
• Curl allows only classes that are explicitly marked as shared to be inherited
repeatedly. Shared classes must define a secondary constructor for each
regular constructor in the class. The regular constructor is called the first
time the state for the shared class is initialized through a subclass
constructor, and the secondary constructor will be invoked for all other
subclasses.
• Ocaml chooses the last matching definition of a class inheritance list to
resolve which method implementation to use under ambiguities. To override
the default behavior one simply qualifies a method call with the desired class
definition.
• Tcl allows multiple parent classes; their order affects the name resolution
for class members.[2]
Smalltalk, C#, Objective-C, Object Pascal / Delphi, Java, Nemerle, and PHP do not allow
multiple inheritance, and this avoids any ambiguity. However, all but Smalltalk allow classes to
implement multiple interfaces.
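The multiple inheritance example that follows does not cover the diamond case, so here is a brief sketch of the virtual inheritance mechanism mentioned in the C++ bullet above (the age member is invented for illustration):

class Person {
public:
    int age;
};

// "virtual" makes Worker and Musician share a single Person subobject.
class Worker : virtual public Person {};
class Musician : virtual public Person {};

class StudentMusician : public Worker, public Musician {};

int main()
{
    StudentMusician s;
    s.age = 20; // unambiguous: only one Person::age exists;
                // without virtual, s.age would be ambiguous (Worker's age vs Musician's age)
    return 0;
}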

Multiple inheritance example

#include <iostream>
using std::cout;
using std::endl;

class Base1 {
public:
Base1( int parameterValue )
{
value = parameterValue;
}
int getData() const
{
return value;
}
protected:
int value;
};

class Base2
{
public:
Base2( char characterData )
{
letter = characterData;
}
char getData() const
{
return letter;
}
protected:
char letter;
};

class Derived : public Base1, public Base2
{
public:
Derived( int integer, char character, double double1 )
: Base1( integer ), Base2( character ), real( double1 ) { }

double getReal() const {
return real;
}
void display()
{
cout << " Integer: " << value << "\n Character: "
<< letter << "\nReal number: " << real;
}

private:
double real;
};

int main()
{
Base1 base1( 10 ), *base1Ptr = 0;
Base2 base2( 'Z' ), *base2Ptr = 0;
Derived derived( 7, 'A', 3.5 );

cout << base1.getData()
<< base2.getData();
derived.display();

cout << derived.Base1::getData()
<< derived.Base2::getData()
<< derived.getReal() << "\n\n";

base1Ptr = &derived;
cout << base1Ptr->getData() << '\n';

base2Ptr = &derived;
cout << base2Ptr->getData() << endl;
return 0;
}

10Z Integer: 7
Character: A
Real number: 3.57A3.5

7
A

9.19. Multiple base classes

9.19.1. An example of multiple base classes
9.19.2. Multiple inheritance with employees and degrees
9.19.3. Multiple inheritance with English Distances
9.19.4. Multiple inheritance to have the features from both parents
9.19.5. Resolving ambiguity in case of multiple inheritance involving common base classes
9.19.6. Multiple inheritance in both private and public way
9.19.7. In cases of multiple inheritance: constructors are called in order of derivation, destructors in reverse order
9.19.8. Multiple inheritance example

How to Create and Destroy Objects

C++ offers software developers two philosophies for creating and destroying objects:
static and dynamic. In programs with predictable object lifetimes, objects should be stored
in stack memory. Stack or static memory is efficient, and memory management is done
automatically by the compiler. In user-driven programs objects should be stored in heap
memory. Heap or dynamic memory, although slower, is fully manageable by the programmer.
It is the area of choice for storing data in complex applications where program flow is
dictated by the user.
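A minimal sketch of the two styles before the longer example below (the Widget class is invented for illustration):

#include <iostream>

class Widget {
public:
    Widget()  { std::cout << "constructed\n"; }
    ~Widget() { std::cout << "destroyed\n"; }
};

int main()
{
    {
        Widget onStack;          // static/stack storage: freed automatically
    }                            // "destroyed" prints here, at end of scope

    Widget* onHeap = new Widget; // dynamic/heap storage: programmer-managed
    delete onHeap;               // must be released explicitly
    return 0;
}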

#include <iostream>
#include <string>
#include <vector>

using namespace std;

class Customer
{
public:
Customer() : m_name("") {}
std::string& getName() { return m_name; }
void setName(std::string const& name) { m_name = name; }
friend std::ostream& operator<<(std::ostream& os, Customer const& rhs)
{
os << rhs.m_name;
return os;
}
friend std::istream& operator>>(std::istream& is, Customer& rhs)
{
is >> rhs.m_name;
return is;
}
private:
std::string m_name;
};

typedef std::vector<Customer> vCustomers;

class Bank
{
public:
Bank() {}
Bank(const size_t numCustomers)
{
for(size_t i = 0; i < numCustomers; i++)
{
Customer c;
m_vCustomers.push_back(c);
}
}
void addCustomer(Customer const& customer)
{
m_vCustomers.push_back(customer);
}
void showCustomers()
{
vCustomers::iterator it;
for(it = m_vCustomers.begin(); it != m_vCustomers.end(); it++)
{
std::cout << (*it) << endl;
}
}
private:
vCustomers m_vCustomers;
};

int main()
{
size_t numCustomers = 0;
cout << "Enter the number of customers for a new Bank object: ";
cin >> numCustomers;
if(numCustomers > 0)
{
Bank b;

for(size_t i = 0; i < numCustomers; i++)
{
Customer c;
cout << "Enter a name for Customer: ";
cin >> c;
b.addCustomer(c);
}
b.showCustomers();
}

return 0;
}
Section 7.3: Constructors and Destructors
In addition to all of the member functions you'll create for your objects, there are two special
kinds of functions that you should create for every object. They are called constructors and
destructors. Constructors are called every time you create an object, and destructors are called
every time you destroy an object.
Constructors
The constructor's job is to set up the object so that it can be used. Remember in Chapter 3.2,
when we first declared a variable? Before we initialized the variable, it stored a garbage value.
We needed to initialize the variable to 0 or to some other useful value before using it. The same
is true of objects. The difference is that with an object, you can't just assign it a value. You can't
say:
Player greenHat = 0;
because that doesn't make sense. A player is not a number, so you can't just set it
to 0. The way object initialization happens in C++ is that a special function, the
constructor, is called when you instantiate an object. The constructor is a function
whose name is the same as the object, with no return type (not even void). For our
video game, we'll probably want to initialize our Players' attributes so that they
don't contain garbage values. We might decide to write the constructor like this:

Player::Player() {
strength = 10;
agility = 10;
health = 10;
}
We would also have to change the class declaration so that it looks like this:

class Player {
int health;
int strength;
int agility;

public: // without this, the members below would be private by default
Player(); // constructor - no return type

void move();
void attackMonster();
void getTreasure();
};
One problem with this constructor is that all of the players will be initialized to have
strength=10, agility=10, and health=10. We might want to create players with different values
for strength and agility to make our game more interesting. So, we can add a second constructor,
which has parameters for strength and agility. Our class declaration would now look like this:
class Player {
int health;
int strength;
int agility;

public: // as before, the constructors must be public to be usable
Player(); // constructor - no return type
Player(int s, int a); // alternate constructor takes two parameters
void move();
void attackMonster();
void getTreasure();
};
and we would add a function definition for the alternate constructor, which looks
like this:

Player::Player(int s, int a) {
strength = s;
agility = a;
health = 10;
}
Now, when we want to instantiate the Player object four times, we can do the
following:

Player redHat; // default constructor
Player blueHat(14, 7); // alternate constructor
Player greenHat(6, 12); // alternate constructor
Player yellowHat(10, 10); // alternate constructor

Destructors
Destructors are less complicated than constructors. You don't call them explicitly (they are called
automatically for you), and there's only one destructor for each object. The name of the
destructor is the name of the class, preceded by a tilde (~). Here's an example of a destructor:
Player::~Player() {
strength = 0;
agility = 0;
health = 0;
}

Since a destructor is called after an object is used for the last time, you're probably
wondering why they exist at all. Right now, they aren't very useful, but you'll see
why they're important in Section 8.3.
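As a hedged preview (this is not the book's Section 8.3 example, just an illustration): destructors become important once a constructor acquires a resource that must be given back:

class ScoreBoard {
    int* scores;                              // memory acquired in the constructor
public:
    ScoreBoard() { scores = new int[100]; }
    ~ScoreBoard() { delete[] scores; }        // released automatically when the object dies
};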

Need for Friend Function:


As discussed in the earlier sections on access specifiers, when data is declared as private
inside a class, it is not accessible from outside the class. A function that is not a member,
or an external class, will not be able to access the private data. A programmer may have a
situation where he or she would need to access private data from non-member functions and
external classes. For handling such cases, the concept of Friend functions is a useful tool.
What is a Friend Function?
A friend function is used for accessing the non-public members of a class. A class can allow
non-member functions and other classes to access its own private data, by making them
friends. Thus, a friend function is an ordinary function or a member of another class.

How to define and use Friend Function in C++:


The friend function is written like any other normal function, except that its declaration
is preceded with the keyword friend. The friend function is typically passed an object of
the class to which it is a friend as an argument.

Some important points to note while using friend functions in C++:


• The keyword friend is placed only in the function declaration of the
friend function and not in the function definition.
• It is possible to declare a function as friend in any number of
classes.
• When a class is declared as a friend, the friend class has access to
the private data of the class that made this a friend.
• A friend function, even though it is not a member function, has
the right to access the private members of the class.
• It is possible to declare the friend function as either private or
public.
• The function can be invoked without the use of an object. The friend
function takes objects as arguments, as seen in the example below.
Example to understand the friend function:

#include <iostream>
using namespace std;

class exforsys
{
private:
int a,b;
public:
void test()
{
a=100;
b=200;
}
friend int compute(exforsys e1);
//Friend function declaration with keyword friend, taking an object of the
//class exforsys to which it is a friend as its argument
};

int compute(exforsys e1)
{
//Friend function definition, which has access to the private data
return int(e1.a+e1.b)-5;
}

int main()
{
exforsys e;
e.test();
cout<<"The result is:"<<compute(e);
//Calling of friend function with object as argument.
return 0;
}

The output of the above program is

The result is:295

The function compute() is a non-member function of the class exforsys. In order to make this
function have access to the private data a and b of class exforsys, it is created as a friend
function for the class exforsys. As a first step, the function compute() is declared as friend in
the class exforsys as:

friend int compute(exforsys e1);

need help to define a class in C++

hi

I am learning to program in C++. I have got some difficulties in defining a class; the public
and private access types are confusing.
Anyway,

the class I want to define should collect information about integers. It accepts integers
through a method add(). At any time it returns the average of the integers, the median
(the value in the middle of the sorted list), the sum, and the number of integers.
I have thought about define like this:


CPP / C++ / C Code:

class statistics {

private:

double mean(void);
vector<int> mode(void);
double median(void);
int sum(void);
int count(void);

public:
void add(int t);

};

I don't think it is working, because I don't think I need to read the file "list" in every function.

void
statistics::count (void)
{
ifstream list("list.txt");
container c;
int n;
a=0;
while (list >> n)
{
c.push_back(n);
a++;
}
c.sort();
cout << a << endl;
}
void
statistics::sum (void)
{
ifstream list("list.txt");
container c;
int n;
sum=0;
while (list >> n)
c.push_back(n);
for (container::iterator i = c.begin(); i != c.end(); i++)
sum = addsum(*i);

cout << "the sum is " << sum << endl;


}

void
statistics::mean (void)
{
ifstream list("list.txt");
container c;
int n;
mean=0;
a=0;
while (list >> n)
{
c.push_back(n);
a++;
}
for (container::iterator i = c.begin(); i != c.end(); i++)
sum = addsum(*i);

mean=sum/a;
cout << "the mean is " << mean << endl;
}

I don't think it is working, because I don't think I need to read the file "list" each time.
If someone can give me hints about that, that would be great.
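A hedged sketch of one possible restructuring, as a reply to the question above: keep the integers inside the object via add() and compute the statistics from the stored values, so no file reading is needed inside the class. The design is only a suggestion, not the poster's code:

#include <algorithm>
#include <cstddef>
#include <vector>

class statistics {
public:
    void add(int t) { values.push_back(t); }            // collect integers as they arrive

    int count() const { return (int)values.size(); }
    int sum() const {
        int s = 0;
        for (std::size_t i = 0; i < values.size(); i++) s += values[i];
        return s;
    }
    double mean() const { return count() ? (double)sum() / count() : 0.0; }
    double median() const {                             // middle value of the sorted list
        if (values.empty()) return 0.0;
        std::vector<int> sorted(values);
        std::sort(sorted.begin(), sorted.end());
        return sorted[sorted.size() / 2];
    }
private:
    std::vector<int> values;                            // the data lives here, not in a file
};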

Definition
The definition of the operator<< function can be in any file. It is not a member
function, so it is defined with two explicit operands. The operator<< function must
return the value of the left operand (the ostream) so that multiple << operators may
be used in the same statement. Note that operator<< for your type can be defined
in terms of << on other types, specifically the types of the data members of your
class (e.g., ints x and y in the Point class).

//=== Point.cpp file ===========================


. . .
ostream& operator<<(ostream& output, const Point& p) {
output << "(" << p.x << ", " << p.y <<")";
return output; // for multiple << operators.
}

Overloading << and >>


Perhaps the most common use of friend functions is overloading << for I/O. This
example overloads << (i.e., defines an operator<< function) so that Point objects can
use cout and <<.

// example usage
Point p;
. . .
cout << p; // not legal without << friend function
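The heading also mentions >>, which the excerpt does not show. Assuming the same Point class with int members x and y, and that this function (like operator<<) is declared a friend inside Point, a matching extractor might look like this (a sketch, not from the original text):

//=== Point.cpp file (sketch) ==================
istream& operator>>(istream& input, Point& p) {
    input >> p.x >> p.y;  // read the two coordinates; needs friend access like <<
    return input;         // so that cin >> p >> q; can chain
}

// example usage
Point p;
cin >> p;  // not legal without the >> friend declaration inside Point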

Operator Overloading
by Andrei Milea

In C++ the overloading principle applies not only to functions, but to operators too. That is, the meaning of
operators can be extended from built-in types to user-defined types. In this way a programmer can provide
his or her own operator to a class by overloading the built-in operator to perform some specific computation
when the operator is used with objects of that class. One question may arise here: is this really useful in
real world implementations? Some programmers consider that overloading is not useful most of the time.
This and the fact that overloading makes the language more complicated is the main reason why operator
overloading is banned in Java. Even if overloading adds complexity to the language it can provide a lot of
syntactic sugar, and code written by a programmer using operator overloading can be easy, but sometimes
misleading, to read. We can use operator overloading easily without knowing all the implementation's
complexities. A short example will make things clear:

Complex a(1.2,1.3); //this class is used to represent complex numbers


Complex b(2.1,3); //notice the construction taking 2 parameters for the real
and imaginary part
Complex c = a+b; //for this to work the addition operator must be overloaded

The addition without an overloaded operator + could look like this:
Complex c(a);
c.Add(b);

This piece of code is not as suggestive as the first one and the readability becomes poor. Using operator
overloading is a design decision, so when we deal with concepts where some operator seems fit and its use
intuitive, it will make the code more clear than using a function to do the task. However, there are many
cases when programmers abuse this technique, when the concept represented by the class is not related to
the operator (like using + and - to add and remove elements from a data structure). In these
cases operator overloading is a bad idea, creating confusion.

In order to be able to write the above code we must have the "+" operator overloaded to make the proper
addition between the real members and the imaginary ones and also the assignment operator. The
overloading syntax is quite simple, similar to function overloading, the keyword operator followed by the
operator we want to overload as you can see in the next code sample:
class Complex
{
public:
Complex(double re,double im)
:real(re),imag(im)
{}
double GetRealPart() const { return real; }
double GetImagPart() const { return imag; }
Complex operator+(Complex);
Complex& operator=(const Complex&);
private:
double real;
double imag;
};
Complex Complex::operator+(Complex num)
{
//return a new value instead of modifying *this, so that a+b leaves a unchanged
return Complex(real + num.GetRealPart(),
imag + num.GetImagPart());
}

The assignment operator can be overloaded similarly. Notice that we called the accessor
functions to get the real and imaginary parts from the parameter. Alternatively, we could
have made the operator + a friend (a friend function is a function which is permitted to
access the private members of a class) of the Complex class, so that a non-member operator
can read real and imag directly; note that a non-member binary operator takes two operands:
friend Complex operator+(Complex, Complex);

We could have defined the addition operator globally and called a member to do the actual work:
Complex operator+(Complex &num1,Complex &num2)
{
Complex temp(num1); //note the use of a copy constructor here
temp.Add(num2);
return temp;
}

The motivation for doing so can be understood by examining the difference between the two choices: when
the operator is a member the first object in the expression must be of that particular type, when it's a global
function, the implicit or user-defined conversion can allow the operator to act even if the first operand is not
exactly of the same type:
Complex c = 2+b; //if the integer 2 can be converted by the Complex class,
this expression is valid
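A self-contained sketch of that difference; note this assumes a hypothetical converting constructor Complex(double re, double im = 0.0) and invented accessors Re()/Im(), which differ from the article's class above:

#include <iostream>

class Complex {
public:
    // converting constructor: allows 2 to become Complex(2.0, 0.0)
    Complex(double re, double im = 0.0) : real(re), imag(im) {}
    double Re() const { return real; }
    double Im() const { return imag; }
private:
    double real, imag;
};

// global operator+: conversions apply to BOTH operands
Complex operator+(const Complex& a, const Complex& b)
{
    return Complex(a.Re() + b.Re(), a.Im() + b.Im());
}

int main()
{
    Complex b(2.1, 3.0);
    Complex c = 2 + b;   // works: 2 converts to Complex first;
                         // with a member operator+, 2 + b would not compile
    std::cout << c.Re() << "+" << c.Im() << "i\n"; // prints 4.1+3i
    return 0;
}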
The number of operands can't be changed by overloading: a binary operator takes two operands, a
unary operator only one. The same restriction applies to precedence; for example, multiplication
takes place before addition. Some operators must be overloaded as non-static member functions:
operator=, operator(), operator[] and operator->; they can't be overloaded globally. The
operator=, operator& and operator, (sequencing) already have default meanings for all objects,
but those meanings can be changed by overloading or removed by making the operators
private.

Another intuitive meaning of the "+" operator from the STL string class which is overloaded to do
concatenation:
string prefix("de");
string word("composed");
string composed = prefix+word;

Using "+" to concatenate is also allowed in Java, but note that this is not extensible to other classes, and it's
not a user defined behavior. Almost all operators can be overloaded in C++:

+    -    *    /    %    ^    &    |
~    !    ,    =    <    >    <=   >=
++   --   <<   >>   ==   !=   &&   ||
+=   -=   /=   %=   ^=   &=   |=   *=
<<=  >>=  [ ]  ( )  ->   ->*  new  delete

Exceptions are the operators for scope resolution (::), member selection (.), and member
selection through a pointer to member (.*). Overloading means specifying a behavior for an
operator that acts on a user defined type; it can't be applied to general pointers alone. The
standard behavior of operators for built-in (primitive) types cannot be changed by overloading.

C++ Encapsulation

Introduction
Encapsulation is the process of combining data and functions into a single unit called a class.
Using the method of encapsulation, the programmer cannot directly access the data. Data is only
accessible through the functions present inside the class. Data encapsulation led to the important
concept of data hiding: the implementation details of a class are hidden from the
user. The concept of restricted access led programmers to write specialized functions or
methods for performing the operations on hidden members of the class. Attention must be paid to
ensure that the class is designed properly.
Neither too much access nor too much control must be placed on the operations in order to make
the class user friendly. Hiding the implementation details and providing restrictive access leads
to the concept of abstract data type. Encapsulation leads to the concept of data hiding, but the
concept of encapsulation must not be restricted to information hiding. Encapsulation clearly
represents the ability to bundle related data and functionality within a single, autonomous entity
called a class.

For instance:

class Exforsys
{
public:
int sample();
int example(char *se);
int endfunc();
.........
......... //Other member functions

private:
int x;
float sq;
..........
......... //Other data members
};
In the above example, the data members integer x, float sq and other data members and member
functions sample(),example(char* se),endfunc() and other member functions are bundled and put
inside a single autonomous entity called class Exforsys. This exemplifies the concept of
Encapsulation. This special feature is available in object-oriented language C++ but not available
in procedural language C. There are advantages of using this encapsulated approach in C++. One
advantage is that it reduces human errors. The data and functions bundled inside the class take
total control of maintenance and thus human errors are reduced. It is clear from the above
example that the encapsulated objects act as a black box for other parts of the program through
interaction. Although encapsulated objects provide functionality, the calling objects will not
know the implementation details. This enhances the security of the application.

The key strength behind data encapsulation in C++ is that the access specifiers
can be placed in the class declaration as public, protected or private. A member placed after the
keyword public is accessible to all the users of the class. The elements placed after the keyword
private are accessible only to the methods of the class. In between the public and the private
access specifiers, there exists the protected access specifier. Elements placed after the keyword
protected are accessible only to the methods of the class or of classes derived from that class.
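A minimal sketch of the three access levels (the class and member names are invented for illustration):

class Account {
public:
    void deposit(double amt) { balance += amt; }  // anyone may call this
protected:
    double balance;                               // derived classes may touch this
private:
    int auditCode;                                // only Account's own methods may touch this
};

class SavingsAccount : public Account {
public:
    void addInterest() { balance *= 1.05; }       // OK: protected is visible here
    // auditCode is NOT accessible here, nor from outside code
};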

The concept of encapsulation shows that a non-member function cannot access an object's
private or protected data. This adds security, but in some cases the programmer might require an
unrelated function to operate on an object of two different classes. The programmer is then able
to utilize the concept of friend functions. Encapsulation alone is a powerful feature that leads to
information hiding, abstract data type and friend functions.

Features and Advantages of the concept of Encapsulation:


* Makes Maintenance of Application Easier:

Complex and critical applications are difficult to maintain. The cost associated with maintaining
the application is higher than that of developing the application properly. To resolve this
maintenance difficulty, the object-oriented programming language C++ created the concept of
encapsulation which bundles data and related functions together as a unit called class. Thus,
making maintenance much easier on the class level.

* Improves the Understandability of the Application

* Enhanced Security:
There are numerous reasons for the enhancement of security using the concept of Encapsulation
in C++. The access specifier acts as the key strength behind the concept of security and provides
access to members of class as needed by users. This prevents unauthorized access. If an
application needs to be extended or customized in later stages of development, the task of adding
new functions becomes easier without breaking existing code or applications, thereby giving
additional security to the existing application.

4. System Design
Before you purchase any hardware, it may be a good idea to consider the design of your system.
There are basically two hardware issues involved with the design of a Beowulf system: the type of
nodes or computers you are going to use, and the way you connect the computer nodes. There is one
software issue that may affect your hardware decisions: the communication library or API. A
more detailed discussion of hardware and communication software is provided later in this
document.
While the number of choices is not large, there are some important design decisions that must be
made when constructing a Beowulf system. Because the science (or art) of "parallel computing"
has many different interpretations, an introduction is provided below. If you do not like to read
background material, you may skip this section, but it is advised that you read the section
Suitability before you make your final hardware decisions.

4.1 A brief background on parallel computing.


This section provides background on parallel computing concepts. It is NOT an exhaustive or
complete description of parallel computing science and technology. It is a brief description of the
issues that may be important to a Beowulf designer and user.
As you design and build your Beowulf, many of these issues described below will become
important in your decision process. Due to its component nature, a Beowulf Supercomputer
requires that we consider many factors carefully because they are now under our control. In
general, it is not all that difficult to understand the issues involved with parallel computing.
Indeed, once the issues are understood, your expectations will be more realistic and success will
be more likely. Unlike the "sequential world" where processor speed is considered the single
most important factor, processor speed in the "parallel world" is just one of several factors that
will determine overall system performance and efficiency.

4.2 The methods of parallel computing


Parallel computing can take many forms. From a user's perspective, it is important to consider
the advantages and disadvantages of each methodology. The following section attempts to
provide some perspective on the methods of parallel computing and indicate where the Beowulf
machine falls on this continuum.
Why more than one CPU?
Answering this question is important. Using 8 CPUs to run your word processor sounds a little
like "over-kill" -- and it is. What about a web server, a database, a rendering program, or a
project scheduler? Maybe extra CPUs would help. What about a complex simulation, a fluid
dynamics code, or a data mining application? Extra CPUs definitely help in these situations.
Indeed, multiple CPUs are being used to solve more and more problems.
The next question usually is: "Why do I need two or four CPUs, I will just wait for the 986
turbo-hyper chip." There are several reasons:
1. Due to the use of multi-tasking Operating Systems, it is possible to do several
things at once. This is a natural "parallelism" that is easily exploited by more
than one low cost CPU.
2. Processor speeds have been doubling every 18 months, but what about RAM
speeds or hard disk speeds? Unfortunately, these speeds are not increasing
as fast as the CPU speeds. Keep in mind most applications require "out of
cache memory access" and hard disk access. Doing things in parallel is one
way to get around some of these limitations.
3. Predictions indicate that processor speeds will not continue to double every
18 months after the year 2005. There are some very serious obstacles to
overcome in order to maintain this trend.
4. Depending on the application, parallel computing can speed things up by any
where from 2 to 500 times faster (in some cases even faster). Such
performance is not available using a single processor. Even supercomputers
that at one time used very fast custom processors are now built from multiple
"commodity- off-the-shelf" CPUs.
If you need speed - either due to a compute bound problem and/or an I/O bound problem -
parallel computing is worth considering. Because parallel computing is implemented in a variety
of ways, solving your problem in parallel will require some very important decisions to be made.
These decisions may dramatically affect portability, performance, and cost of your application.
Before we get technical, let's take a look at a real "parallel computing problem" using an
example with which we are familiar - waiting in long lines at a store.
The Parallel Computing Store
Consider a big store with 8 cash registers grouped together in the front of the store. Assume each
cash register/cashier is a CPU and each customer is a computer program. The size of the
computer program (amount of work) is the size of each customer's order. The following
analogies can be used to illustrate parallel computing concepts.
Single-tasking Operating System
One cash register open (is in use) and must process each customer one at a time.
Computer Example: MS DOS
Multi-tasking Operating System:
One cash register open, but now we process only a part of each order at a time, move to the next
person and process some of their order. Everyone "seems" to be moving through the line
together, but if no one else is in the line, you will get through the line faster.
Computer Example: UNIX, NT using a single CPU
Multitasking Operating Systems with Multiple CPUs:
Now we open several cash registers in the store. Each order can be processed by a separate cash
register and the line can move much faster. This is called SMP - Symmetric Multi-processing.
Although there are extra cash registers open, you will still never get through the line any faster
than just you and a single cash register.
Computer Example: UNIX and NT with multiple CPUs
Threads on a Multitasking Operating System with extra CPUs:
If you "break-up" the items in your order, you might be able to move through the line faster by
using several cash registers at one time. First, we must assume you have a large amount of
goods, because the time you invest "breaking up your order" must be regained by using multiple
cash registers. In theory, you should be able to move through the line "n" times faster than
before*; where "n" is the number of cash registers. When the cashiers need to get sub- totals,
they can exchange information quickly by looking and talking to all the other "local" cash
registers. They can even snoop around the other cash registers to find information they need to
work faster. There is a limit, however, as to how many cash registers the store can effectively
locate in any one place.
Amdahl's law will also limit the application speed-up to the slowest sequential portion of the
program.
Computer Example: UNIX or NT with extra CPU on the same motherboard running multi-
threaded programs.
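Amdahl's law can be written compactly (this is the standard form, not from this document): with p the fraction of the program that can be parallelized and n the number of processors (cash registers),

S(n) = \frac{1}{(1-p) + p/n}, \qquad \lim_{n\to\infty} S(n) = \frac{1}{1-p}

so even arbitrarily many registers cannot beat the 1/(1-p) bound set by the sequential part.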
Sending Messages on Multitasking Operating Systems with extra CPUs:
In order to improve performance, the store adds 8 cash registers at the back of the store. Because
the new cash registers are far away from the front cash registers, the cashiers must call on the
phone to send their sub-totals to the front of the store. This distance adds extra overhead (time) to
communication between cashiers, but if communication is minimized, it is not a problem. If you
have a really big order, one that requires all the cash registers, then as before your speed can be
improved by using all the cash registers at the same time, but the extra overhead must be considered. In
some cases, the store may have single cash registers (or islands of cash registers) located all over
the store - each cash register (or island) must communicate by phone. Since all the cashiers
working the cash registers can talk to each other by phone, it does not matter too much where
they are.
Computer Example: One or several copies of UNIX or NT with extra CPUs on the same or
different motherboard communicating through messages.
The above scenarios, although not exact, are a good representation of constraints placed on
parallel systems. Unlike with a single CPU (or cash register), communication is an issue.

4.3 Architectures for parallel computing


The common methods and architectures of parallel computing are presented below. While this
description is by no means exhaustive, it is enough to understand the basic issues involved with
Beowulf design.
Hardware Architectures
There are basically two ways parallel computer hardware is put together:
1. Local memory machines that communicate by messages (Beowulf Clusters)
2. Shared memory machines that communicate through memory (SMP
machines)
A typical Beowulf is a collection of single CPU machines connected using fast Ethernet and is,
therefore, a local memory machine. A 4 way SMP box is a shared memory machine and can be
used for parallel computing - parallel applications communicate using shared memory. Just as in
the computer store analogy, local memory machines (individual cash registers) can be scaled up
to large numbers of CPUs, while the number of CPUs shared memory machines (the number of
cash registers you can place in one spot) can have is limited due to memory contention.
It is possible, however, to connect many shared memory machines to create a "hybrid" shared
memory machine. These hybrid machines "look" like a single large SMP machine to the user and
are often called NUMA (non uniform memory access) machines because the global memory
seen by the programmer and shared by all the CPUs can have different latencies. At some level,
however, a NUMA machine must "pass messages" between local shared memory pools.
It is also possible to connect SMP machines as local memory compute nodes. Typical CLASS I
motherboards have either 2 or 4 CPUs and are often used as a means to reduce the overall system
cost. The Linux internal scheduler determines how these CPUs get shared. The user cannot (at
this point) assign a specific task to a specific SMP processor. The user can, however, start two
independent processes or a threaded process and expect to see a performance increase over a
single CPU system.
Software API Architectures
There are basically two ways to "express" concurrency in a program:
1. Using Messages sent between processors
2. Using operating system Threads
Other methods do exist, but these are the two most widely used. It is important to remember that
the expression of concurrency is not necessarily controlled by the underlying hardware. Both
Messages and Threads can be implemented on SMP, NUMA-SMP, and clusters - although, as
explained below, efficiency and portability are important issues.
Messages
Historically, message passing technology reflected the design of early local memory parallel
computers. Messages require copying data while Threads use data in place. The latency and
speed at which messages can be copied are the limiting factor with message passing models. A
Message is quite simple: some data and a destination processor. Common message passing APIs
are PVM and MPI. Message passing can be implemented efficiently using Threads, and Messages
work well both on SMP machines and between clusters of machines. The advantage of using
messages on an SMP machine, as opposed to Threads, is that if you decide to use clusters in the
future it is easy to add machines or scale your application.
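A minimal sketch of a message - some data and a destination processor - using MPI, which the text names as a common API. This is a generic illustration, not from this document; it assumes an MPI installation (commonly compiled with mpicc or mpiCC and run with mpirun -np 2):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv)
{
    int rank, subtotal;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {                  /* the "back of the store" cashier */
        subtotal = 42;                /* some data and a destination: that is a message */
        MPI_Send(&subtotal, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else if (rank == 0) {           /* the front cashier collects sub-totals */
        MPI_Recv(&subtotal, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received subtotal %d\n", subtotal);
    }
    MPI_Finalize();
    return 0;
}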
Threads
Operating system Threads were developed because shared memory SMP (symmetrical
multiprocessing) designs allowed very fast shared memory communication and synchronization
between concurrent parts of a program. Threads work well on SMP systems because
communication is through shared memory. For this reason the user must isolate local data from
global data, otherwise programs will not work properly. In contrast to messages, a large amount
of copying can be eliminated with threads because the data is shared between processes
(threads). Linux supports POSIX threads. The problem with threads is that it is difficult to extend
them beyond one SMP machine and because data is shared between CPUs, cache coherence
issues can contribute to overhead. Extending threads beyond the SMP boundary efficiently
requires NUMA technology, which is expensive and not natively supported by Linux.
Implementing threads on top of messages has been done
(http://syntron.com/ptools/ptools_pg.htm), but Threads are often inefficient when implemented
using messages.
The following can be stated about performance:
              SMP machine    cluster of machines    scalability
              performance    performance
              -----------    -------------------    -----------
messages      good           best                   best
threads       best           poor*                  poor*

* requires expensive NUMA technology.


Application Architecture
In order to run an application in parallel on multiple CPUs, it must be explicitly broken into
concurrent parts. A standard single CPU application will run no faster than a single CPU
application on multiple processors. There are some tools and compilers that can break up
programs, but parallelizing codes is not a "plug and play" operation. Depending on the
application, parallelizing code can be easy, extremely difficult, or in some cases impossible due
to algorithm dependencies.
Before the software issues can be addressed the concept of Suitability needs to be introduced.

4.4 Suitability
Most questions about parallel computing have the same answer:
"It all depends upon the application."
Before we jump into the issues, there is one very important distinction that needs to be made -
the difference between CONCURRENT and PARALLEL. For the sake of this discussion we will
define these two concepts as follows:
CONCURRENT parts of a program are those that can be computed independently.
PARALLEL parts of a program are those CONCURRENT parts that are executed on separate
processing elements at the same time.
The distinction is very important, because CONCURRENCY is a property of the program and
efficient PARALLELISM is a property of the machine. Ideally, PARALLEL execution should
result in faster performance. The limiting factor in parallel performance is the communication
speed and latency between compute nodes. (Latency also exists with threaded SMP applications
due to cache coherency.) Many of the common parallel benchmarks are highly parallel and
communication and latency are not the bottle neck. This type of problem can be called
"obviously parallel". Other applications are not so simple and executing CONCURRENT parts
of the program in PARALLEL may actually cause the program to run slower, thus offsetting any
performance gains in other CONCURRENT parts of the program. In simple terms, the cost of
communication time must pay for the savings in computation time, otherwise the PARALLEL
execution of the CONCURRENT part is inefficient.
The task of the programmer is to determine which CONCURRENT parts of the program
SHOULD be executed in PARALLEL and which parts SHOULD NOT. The answer to this will
determine the EFFICIENCY of the application. The following graph summarizes the situation for
the programmer:

| *
| *
| *
% of | *
appli- | *
cations | *
| *
| *
| *
| *
| *
| ****
| ****
| ********************
+-----------------------------------
communication time/processing time
In a perfect parallel computer, the ratio of communication/processing would be equal and
anything that is CONCURRENT could be implemented in PARALLEL. Unfortunately, real
parallel computers, including shared memory machines, are subject to the effects described in
this graph. When designing a Beowulf, the user may want to keep this graph in mind because
parallel efficiency depends upon the ratio of communication time to processing time for A
SPECIFIC PARALLEL COMPUTER. Applications may be portable between parallel
computers, but there is no guarantee they will be efficient on a different platform.
IN GENERAL, THERE IS NO SUCH THING AS A PORTABLE AND EFFICIENT
PARALLEL PROGRAM
There is yet another consequence of the above graph. Since efficiency depends upon the
comm./process. ratio, changing just one component of the ratio does not necessarily mean a
specific application will perform faster. A change in processor speed, while keeping the
communication speed the same, may have non-intuitive effects on your program. For example,
doubling or tripling the CPU speed, while keeping the communication speed the same, may make
some previously efficient PARALLEL portions of your program more efficient if they
are executed SEQUENTIALLY. That is, it may now be faster to run the previously
PARALLEL parts as SEQUENTIAL. Furthermore, running inefficient parts in parallel will
actually keep your application from reaching its maximum speed. Thus, by adding a faster
processor, you may actually slow down your application (you are keeping the new CPU from
running at its maximum speed for that application).
UPGRADING TO A FASTER CPU MAY ACTUALLY SLOW DOWN YOUR
APPLICATION
So, in conclusion, to know whether or not you can use a parallel hardware environment, you
need to have some insight into the suitability of a particular machine for your application. You
need to look at a lot of issues including CPU speeds, compiler, message passing API, network,
etc. Please note, just profiling an application does not give the whole story. You may identify a
computationally heavy portion of your program, but you do not know the communication cost
for this portion. It may be that, for a given system, the communication cost makes
parallelizing this code inefficient.
A final note about a common misconception: it is often stated that "a program is
PARALLELIZED", but in reality only the CONCURRENT parts of the program have been
located. For all the reasons given above, the program is not yet PARALLELIZED. Efficient
PARALLELIZATION is a property of the machine.

4.5 Writing and porting parallel software


Once you decide that you need parallel computing and would like to design and build a Beowulf,
a few moments considering your application with respect to the previous discussion may be a
good idea.
In general there are two things you can do:
1. Go ahead and construct a CLASS I Beowulf and then "fit" your application to
it. Or run existing parallel applications that you know work on your Beowulf
(but beware of the portability and efficiency issues mentioned above)
2. Look at the applications you need to run on your Beowulf and make some
estimations as to the type of hardware and software you need.
In either case, at some point you will need to look at the efficiency issues. In general, there are
three things you need to do:
1. Determine the concurrent parts of your program
2. Estimate parallel efficiency
3. Describe the concurrent parts of your program
Let's look at these one at a time.
Determine the concurrent parts of your program
This step is often thought of as "parallelizing your program", but the parallelization decisions
will be made in step 2. In this step, you need to determine the data dependencies.
From a practical standpoint, applications may exhibit two types of concurrency: compute
(number crunching) and I/O (database). Although in many cases compute and I/O concurrency
are orthogonal, there are applications that require both. There are tools available that can perform
concurrency analysis on existing applications. Most of these tools are designed for FORTRAN.
There are two reasons FORTRAN is used: historically, most number crunching applications were
written in FORTRAN, and it is easier to analyze. If no tools are available, then this step can be
somewhat difficult for existing applications.
Estimate parallel efficiency
Without the help of tools, this step may require trial-and-error tests or just a plain old educated
guess. If you have a specific application in mind, try to determine whether it is CPU limited (compute
bound) or hard disk limited (I/O bound). The requirements of your Beowulf may be quite
different depending upon your needs. For example, a compute bound problem may need a few
very fast CPUs and a high speed, low latency network, while an I/O bound problem may work
better with more, slower CPUs and fast Ethernet.
This recommendation often comes as a surprise to most people, because the standard assumption
is that faster processors are always better. While this is true if you have an unlimited budget, real
systems have cost constraints within which performance should be maximized. For I/O bound problems, there is a
little known rule (called the Eadline-Dedkov Law) that is quite helpful:
For two given parallel computers with the same cumulative CPU performance index, the one
which has slower processors (and a probably correspondingly slower interprocessor
communication network) will have better performance for I/O-dominant applications.
While the proof of this rule is beyond the scope of this document, you may find it interesting to
download the paper Performance Considerations for I/O-Dominant Applications on Parallel
Computers (Postscript format, 109K) (ftp://www.plogic.com/pub/papers/exs-pap6.ps).
Once you have determined what type of concurrency you have in your application, you will need
to estimate how efficient it will be in parallel. See the Software section for a description of
software tools.
In the absence of tools, you may try to guess your way through this step. If a compute bound
loop is measured in minutes and the data can be transferred in seconds, then it might be a good
candidate for parallelization. But remember, if you take a 16 minute loop and break it into 32
parts, and your data transfers require several seconds per part, then things are going to get tight.
You will reach a point of diminishing returns.
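As a back-of-the-envelope illustration (a hypothetical sketch; the 5 second transfer time per part is assumed for illustration only, not taken from any measurement), the following C++ fragment estimates the speedup for that 16 minute (960 second) loop split into 32 parts, with the transfers serialized over the network:

#include <iostream>

int main()
{
    const double seq = 960.0;             // 16 minute sequential loop, in seconds
    const int parts = 32;
    const double compute = seq / parts;   // 30 s of computation per part
    const double comm = 5.0 * parts;      // assumed 5 s of transfer per part,
                                          // serialized: 160 s total
    const double par = compute + comm;    // roughly 190 s of wall clock time
    std::cout << "speedup ~ " << seq / par
              << "x instead of " << parts << "x\n";
}

Under these assumed numbers the speedup is only about 5x on 32 processors; the communication time has eaten most of the savings in computation time.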
Describe the concurrent parts of your program
There are several ways to describe concurrent parts of your program:
1. Explicit parallel execution
2. Implicit parallel execution
The major difference between the two is that explicit parallelism is determined by the user,
whereas implicit parallelism is determined by the compiler.
Explicit Methods
These are basically methods where the user must modify source code specifically for a parallel
computer. The user must either add messages using PVM or MPI, or add threads using POSIX
threads. (Keep in mind, however, that threads cannot move between SMP motherboards.)
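For instance, here is a minimal POSIX threads sketch (a hypothetical example, not from the original document): the user explicitly creates two threads that each scale half of a shared array. Because the array lives in shared memory, this only works within a single SMP machine.

#include <pthread.h>
#include <cstdio>

static double data[1000];                 // shared between the threads

void *worker(void *arg)
{
    long half = (long)arg;                // 0 = first half, 1 = second half
    for (long i = half * 500; i < (half + 1) * 500; i++)
        data[i] *= 2.0;                   // each thread scales its own half
    return nullptr;
}

int main()
{
    pthread_t t[2];
    for (long h = 0; h < 2; h++)
        pthread_create(&t[h], nullptr, worker, (void *)h);
    for (int h = 0; h < 2; h++)
        pthread_join(t[h], nullptr);      // wait for both threads to finish
    std::printf("done\n");
    return 0;
}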
Explicit methods tend to be the most difficult to implement and debug. Users typically embed
explicit function calls in standard FORTRAN 77 or C/C++ source code. The MPI library has
added some functions to make some standard parallel methods easier to implement (i.e.
scatter/gather functions). In addition, it is also possible to use standard libraries that have been
written for parallel computers. (Keep in mind, however, the portability vs. efficiency trade-off.)
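As a concrete sketch of message passing (a minimal, hypothetical example using the MPI C API, which is also callable from C++; it is not from the original document), each process sums its own share of the numbers 1..1000000, and MPI_Reduce, one of MPI's standard collective functions, combines the partial results on rank 0:

#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); // this process's id
    MPI_Comm_size(MPI_COMM_WORLD, &size); // number of processes

    long n = 1000000, local = 0, total = 0;
    for (long i = rank + 1; i <= n; i += size)
        local += i;                       // each rank sums its own slice

    // Explicit communication: combine the partial sums on rank 0.
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        std::printf("sum = %ld\n", total);
    MPI_Finalize();
    return 0;
}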
For historical reasons, most number crunching codes are written in FORTRAN. For this reason,
FORTRAN has the largest amount of support (tools, libraries, etc.) for parallel computing. Many
programmers now use C or rewrite existing FORTRAN applications in C with the notion that C
will allow faster execution. While this may be true, as C is the closest thing to a universal
machine code, it has some major drawbacks: the use of pointers in C makes determining data
dependencies, and thus automatic analysis, extremely difficult. If you
have an existing FORTRAN program and think that you might want to parallelize it in the future
- DO NOT CONVERT IT TO C!
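As a small, hypothetical illustration of the pointer problem: if the two pointer parameters below may refer to overlapping memory, the loop iterations are not independent, and neither a compiler nor an analysis tool can prove the loop safe to execute in PARALLEL.

void scale(double *a, double *b, int n)
{
    for (int i = 0; i < n; i++)
        a[i] = 2.0 * b[i];   // if a and b overlap (e.g. a == b + 1),
                             // iteration i writes the value that
                             // iteration i + 1 reads, so the iterations
                             // must run in order
}

In FORTRAN 77, by contrast, the language rules let the compiler assume that modified array arguments do not alias, which is one reason its analysis tools are more effective.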
Implicit Methods
Implicit methods are those where the user gives up some (or all) of the parallelization decisions
to the compiler. Examples are FORTRAN 90, High Performance FORTRAN (HPF), Bulk
Synchronous Parallel (BSP), and a whole collection of other methods that are under
development.
Implicit methods require the user to provide some information about the concurrent nature of
their application, but the compiler will then make many decisions about how to execute this
concurrency in parallel. These methods provide some level of portability and efficiency, but
there is still no "best way" to describe a concurrent problem for a parallel computer.

Object-oriented design
From Wikipedia, the free encyclopedia

Object-oriented design is the process of planning a system of interacting objects for the purpose
of solving a software problem. It is one approach to software design.

Contents
• 1 Overview
• 2 Object-oriented design topics
○ 2.1 Input (sources) for object-oriented design
○ 2.2 Object-oriented concepts
○ 2.3 Designing concepts
○ 2.4 Output (deliverables) of object-oriented design
○ 2.5 Some design principles and strategies
• 3 See also
• 4 References
• 5 External links

Overview
An object contains encapsulated data and procedures grouped together to represent an entity. The
'object interface', how the object can be interacted with, is also defined. An object-oriented
program is described by the interaction of these objects. Object-oriented design is the discipline
of defining the objects and their interactions to solve a problem that was identified and
documented during object-oriented analysis.
From a business perspective, Object Oriented Design refers to the objects that make up that
business. For example, in a certain company, a business object can consist of people, data files
and database tables, artifacts, equipment, vehicles, etc.
What follows is a description of the class-based subset of object-oriented design, which does not
include object prototype-based approaches, where objects are not typically obtained by
instantiating classes but by cloning other (prototype) objects.

Object-oriented design topics


Input (sources) for object-oriented design
The input for object-oriented design is provided by the output of object-oriented analysis.
Realize that an output artifact does not need to be completely developed to serve as input of
object-oriented design; analysis and design may occur in parallel, and in practice the results of
one activity can feed the other in a short feedback cycle through an iterative process. Both
analysis and design can be performed incrementally, and the artifacts can be continuously grown
instead of completely developed in one shot.
Some typical input artifacts for object-oriented design are:
• Conceptual model: The conceptual model is the result of object-oriented analysis;
it captures concepts in the problem domain. The conceptual model is
explicitly chosen to be independent of implementation details, such as
concurrency or data storage.
• Use case: A use case is a description of sequences of events that, taken
together, lead to a system doing something useful. Each use case provides
one or more scenarios that convey how the system should interact with the
users, called actors, to achieve a specific business goal or function. Use case
actors may be end users or other systems. In many circumstances use cases
are further elaborated into use case diagrams. Use case diagrams are used to
identify the actors (users or other systems) and the processes they perform.
• System Sequence Diagram: A System Sequence Diagram (SSD) is a picture that
shows, for a particular scenario of a use case, the events that external actors
generate, their order, and possible inter-system events.
• User interface documentations (if applicable): Document that shows and
describes the look and feel of the end product's user interface. It is not
mandatory to have this, but it helps to visualize the end-product and
therefore helps the designer.
• Relational data model (if applicable): A data model is an abstract model that
describes how data is represented and used. If an object database is not
used, the relational data model should usually be created before the design,
since the strategy chosen for object-relational mapping is an output of the OO
design process. However, it is possible to develop the relational data model
and the object-oriented design artifacts in parallel, and the growth of an
artifact can stimulate the refinement of other artifacts.
Object-oriented concepts
The five basic concepts of object-oriented design are the implementation level features that are
built into the programming language. These features are often referred to by these common
names:
• Object/Class: A tight coupling or association of data structures with the
methods or functions that act on the data. This is called a class, or object (an
object is created based on a class). Each object serves a separate function. It
is defined by its properties, what it is and what it can do. An object can be
part of a class, which is a set of objects that are similar.
• Information hiding: The ability to protect some components of the object from
external entities. This is realized by language keywords to enable a variable
to be declared as private or protected to the owning class.
• Inheritance: The ability for a class to extend or override functionality of
another class. The so-called subclass has a whole section that is derived
(inherited) from the superclass and then it has its own set of functions and
data.
• Interface: The ability to defer the implementation of a method; the ability to
define function or method signatures without implementing them.
• Polymorphism: The ability to replace an object with its subobjects; the ability
of an object variable to contain not only that object but also all of its
subobjects. (A minimal C++ sketch illustrating these five concepts follows this list.)
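The following is a minimal, hypothetical C++ sketch (all names are illustrative and not from the article) showing how these five concepts surface in an actual language:

#include <iostream>
#include <memory>
#include <vector>

// Interface: a method signature deferred to subclasses (pure virtual).
class Shape {
public:
    virtual double area() const = 0;
    virtual ~Shape() = default;
};

// Inheritance: Circle extends Shape. Information hiding: 'radius'
// is private and cannot be touched by external entities.
class Circle : public Shape {
    double radius;
public:
    explicit Circle(double r) : radius(r) {}
    double area() const override { return 3.14159265 * radius * radius; }
};

class Square : public Shape {
    double side;
public:
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }
};

int main()
{
    // Objects are created from classes; polymorphism lets a Shape
    // variable refer to any of its subobjects.
    std::vector<std::unique_ptr<Shape>> shapes;
    shapes.push_back(std::make_unique<Circle>(1.0));
    shapes.push_back(std::make_unique<Square>(2.0));
    for (const auto& s : shapes)
        std::cout << s->area() << '\n';
}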
Designing concepts
• Defining objects, creating class diagrams from the conceptual diagram: usually,
map each entity to a class.
• Identifying attributes.
• Use design patterns (if applicable): A design pattern is not a finished design;
it is a description of a solution to a common problem, in a context[1]. The main
advantage of using a design pattern is that it can be reused in multiple
applications. It can also be thought of as a template for how to solve a
problem that can be used in many different situations and/or applications.
Object-oriented design patterns typically show relationships and interactions
between classes or objects, without specifying the final application classes or
objects that are involved.
• Define application framework (if applicable): Application framework is a term
usually used to refer to a set of libraries or classes that are used to
implement the standard structure of an application for a specific operating
system. By bundling a large amount of reusable code into a framework, much
time is saved for the developer, who is spared the task of rewriting
large amounts of standard code for each new application that is developed.
• Identify persistent objects/data (if applicable): Identify objects that have to
last longer than a single runtime of the application. If a relational database is
used, design the object relation mapping.
• Identify and define remote objects (if applicable).
Output (deliverables) of object-oriented design
• Sequence Diagrams: Extend the System Sequence Diagram to add specific
objects that handle the system events.
A sequence diagram shows, as parallel vertical lines, different processes or
objects that live simultaneously, and, as horizontal arrows, the messages
exchanged between them, in the order in which they occur.
• Class diagram: A class diagram is a type of static structure UML diagram that
describes the structure of a system by showing the system's classes, their
attributes, and the relationships between the classes. The messages and
classes identified through the development of the sequence diagrams can
serve as input to the automatic generation of the global class diagram of the
system.
Some design principles and strategies
• Dependency injection: The basic idea is that if an object depends upon
having an instance of some other object, then the needed object is "injected"
into the dependent object; for example, being passed a database connection
as an argument to the constructor instead of creating one internally (see the
C++ sketch after this list).
• Acyclic dependencies principle: The dependency graph of packages or
components should have no cycles. This is also referred to as having a
directed acyclic graph. [2] For example, package C depends on package B,
which depends on package A. If package A also depended on package C, then
you would have a cycle.
• Composite reuse principle: Favor polymorphic composition of objects over
inheritance.[1]
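Here is a minimal, hypothetical C++ sketch of dependency injection (the class and query names are illustrative only): the ReportService does not construct its own database connection; the caller passes one in through the constructor, so the service can be tested or reconfigured with a different connection.

#include <iostream>
#include <string>

class Connection {                        // the dependency
public:
    void query(const std::string& q) { std::cout << "run: " << q << '\n'; }
};

class ReportService {
    Connection& conn;                     // held, never created internally
public:
    // The needed object is "injected" via the constructor.
    explicit ReportService(Connection& c) : conn(c) {}
    void monthlyReport() { conn.query("SELECT * FROM sales"); }
};

int main()
{
    Connection db;                        // constructed by the caller
    ReportService service(db);            // dependency passed in
    service.monthlyReport();
}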

Structured Systems Analysis and Design Method


From Wikipedia, the free encyclopedia

Structured Systems Analysis and Design Method (SSADM) is a systems approach to the
analysis and design of information systems. SSADM was produced for the Central Computer and
Telecommunications Agency (now Office of Government Commerce), a UK government office
concerned with the use of technology in government, from 1980 onwards.

Contents
• 1 Overview
• 2 History
• 3 SSADM techniques
• 4 Stages
○ 4.1 Stage 0 - Feasibility study
○ 4.2 Stage 1 - Investigation of the current
environment
○ 4.3 Stage 2 - Business system options
○ 4.4 Stage 3 - Requirements specification
○ 4.5 Stage 4 - Technical system options
○ 4.6 Stage 5 - Logical design
○ 4.7 Stage 6 - Physical design
• 5 Advantages and disadvantages
• 6 References
• 7 External links

Overview
SSADM is a waterfall method by which an Information System design can be arrived at.
SSADM can be thought to represent a pinnacle of the rigorous document-led approach to system
design, and contrasts with more contemporary Rapid Application Development methods such as
DSDM.
SSADM is one particular implementation and builds on the work of different schools of
structured analysis and development methods, such as Peter Checkland's Soft Systems
Methodology, Larry Constantine's Structured Design, Edward Yourdon's Yourdon Structured
Method, Michael A. Jackson's Jackson Structured Programming, and Tom DeMarco's Structured
Analysis.
The names "Structured Systems Analysis and Design Method" and "SSADM" are now
Registered Trade Marks of the Office of Government Commerce (OGC), which is an Office of
the United Kingdom's Treasury.

History
• 1980: Central Computer and Telecommunications Agency (CCTA) evaluate
analysis and design methods.
• 1981: Learmonth & Burchett Management Systems (LBMS) method chosen
from shortlist of five.
• 1983: SSADM made mandatory for all new information system developments
• 1984: Version 2 of SSADM released
• 1986: Version 3 of SSADM released, adopted by NCC
• 1988: SSADM Certificate of Proficiency launched, SSADM promoted as ‘open’
standard
• 1989: Moves towards Euromethod, launch of CASE products certification
scheme
• 1990: Version 4 launched
• 1993: SSADM V4 Standard and Tools Conformance Scheme Launched
• 1995: SSADM V4+ announced, V4.2 launched

SSADM techniques


The three most important techniques that are used in SSADM are:
Logical Data Modeling

This is the process of identifying, modeling and documenting the data
requirements of the system being designed. The data are separated into
entities (things about which a business needs to record information) and
relationships (the associations between the entities).

Data Flow Modeling

This is the process of identifying, modeling and documenting how data moves
around an information system. Data Flow Modeling examines processes
(activities that transform data from one form to another), data stores (the
holding areas for data), external entities (what sends data into a system or
receives data from a system), and data flows (routes by which data can flow).

Entity Behavior Modeling

This is the process of identifying, modeling and documenting the events that
affect each entity and the sequence in which these events occur.

Stages
The SSADM method involves the application of a sequence of analysis, documentation and
design tasks concerned with the following.
Stage 0 - Feasibility study
To determine whether a given project is feasible, there must be some form
of investigation into the goals and implications of the project. For very small scale projects this
may not be necessary at all, as the scope of the project is easily apprehended. In larger projects,
the feasibility study may be done, but in an informal sense, either because there is no time for a formal
study or because the project is a "must-have" and will have to be done one way or the other.
When a feasibility study is carried out, there are four main areas of consideration:
• Technical - is the project technically possible?
• Financial - can the business afford to carry out the project?
• Organizational - will the new system be compatible with existing practices?
• Ethical - is the impact of the new system socially acceptable?
To answer these questions, the feasibility study is effectively a condensed version of a full-blown
systems analysis and design. The requirements and users are analyzed to some extent,
some business options are drawn up, and even some details of the technical implementation are
sketched out.
The product of this stage is a formal feasibility study document. SSADM specifies the sections
that the study should contain including any preliminary models that have been constructed and
also details of rejected options and the reasons for their rejection.
Stage 1 - Investigation of the current environment
This is one of the most important stages of SSADM. The developers of SSADM understood that
though the tasks and objectives of a new system may be radically different from the old system,
the underlying data will probably change very little. By coming to a full understanding of the
data requirements at an early stage, the remaining analysis and design stages can be built up on a
firm foundation.
In almost all cases there is some form of current system, even if it is entirely composed of people
and paper. Through a combination of interviewing employees, circulating questionnaires,
observations and existing documentation, the analyst comes to a full understanding of the system
as it is at the start of the project. This serves many purposes:
• the analyst learns the terminology of the business, what users do and how
they do it
• the old system provides the core requirements for the new system
• faults, errors and areas of inefficiency are highlighted and their reparation
added to the requirements
• the data model can be constructed
• the users become involved and learn the techniques and models of the
analyst
• the boundaries of the system can be defined
The products of this stage are:
• Users Catalogue describing all the users of the system and how they interact
with it
• Requirements Catalogues detailing all the requirements of the new system
• Current Services Description, further composed of:
• Current environment logical data structure (ERD)
• Context diagram (DFD)
• Levelled set of DFDs for current logical system
• Full data dictionary including relationship between data stores and entities
To produce the models, the analyst works through the construction of the models as we have
described. However, the first set of data-flow diagrams (DFDs) is the current physical model,
that is, with full details of how the old system is implemented. The final version is the current
logical model, which is essentially the same as the current physical model but with all reference to
implementation removed, together with any redundancies such as repetition of process or data.
In the process of preparing the models, the analyst will discover the information that makes up
the users and requirements catalogues.
Stage 2 - Business system options
Having investigated the current system, the analyst must decide on the overall design of the new
system. To do this, he or she, using the outputs of the previous stage, develops a set of business
system options. These are different ways in which the new system could be produced, varying
from doing nothing to throwing out the old system entirely and building an entirely new one. The
analyst may hold a brainstorming session so that as many and as varied ideas as possible are
generated.
The ideas are then collected to form a set of two or three different options which are presented to
the user. The options consider the following:
• the degree of automation
• the boundary between the system and the users
• the distribution of the system, for example, is it centralized to one office or
spread out across several?
• cost/benefit
• impact of the new system
Where necessary, the option will be documented with a logical data structure and a level 1 data-
flow diagram.
The users and analyst together choose a single business option. This may be one of the ones
already defined or may be a synthesis of different aspects of the existing options. The output of
this stage is the single selected business option together with all the outputs of stage 1.
Stage 3 - Requirements specification
This is probably the most complex stage in SSADM. Using the requirements developed in stage
1 and working within the framework of the selected business option, the analyst must develop a
full logical specification of what the new system must do. The specification must be free from
error, ambiguity and inconsistency. By logical, we mean that the specification does not say how
the system will be implemented but rather describes what the system will do.
To produce the logical specification, the analyst builds the required logical models for both the
data-flow diagrams (DFDs) and the entity relationship diagrams (ERDs). These are used to
produce function definitions of every function which the users will require of the system, entity
life-histories (ELHs), and effect correspondence diagrams; the latter are models of how each event
interacts with the system, a complement to the entity life-histories. These are continually matched
against the requirements and, where necessary, the requirements are added to and completed.
The product of this stage is a complete Requirements Specification document which is made up
of:
• the updated Data Catalogue
• the updated Requirements Catalogue
• the Processing Specification, which in turn is made up of:
• user role/function matrix
• function definitions
• required logical data model
• entity life-histories
• effect correspondence diagrams
Though some of these items may be unfamiliar to you, it is beyond the scope of this unit to go
into them in great detail.
Stage 4 - Technical system options
This stage is the first towards a physical implementation of the new system. Like the Business
System Options, in this stage a large number of options for the implementation of the new
system are generated. This is honed down to two or three to present to the user from which the
final option is chosen or synthesised.
However, the considerations are quite different, being:
• the hardware architectures
• the software to use
• the cost of the implementation
• the staffing required
• the physical limitations, such as the space occupied by the system
• the distribution, including any networks that it may require
• the overall format of the human computer interface
All of these aspects must also conform to any constraints imposed by the business such as
available money and standardisation of hardware and software.
The output of this stage is a chosen technical system option.
Stage 5 - Logical design
Though the previous level specifies details of the implementation, the outputs of this stage are
implementation-independent and concentrate on the requirements for the human computer
interface.
The first of the three main areas of activity is the definition of the user dialogues. These are the main
interfaces with which the users will interact with the system. The logical design specifies the
main methods of interaction in terms of menu structures and command structures.
The other two activities are concerned with analyzing the effects of events in updating the
system and the need to make enquiries about the data on the system. Both of these use the events,
function descriptions and effect correspondence diagrams produced in stage 3 to determine
precisely how to update and read data in a consistent and secure way.
The product of this stage is the logical design which is made up of:
• Menu structures
• Command structures
• Requirements catalogue
• Data catalogue
• Required logical data structure
• Logical process model, which includes dialogues and models for the update and enquiry processes
Stage 6 - Physical design
This is the final stage, where all the logical specifications of the system are converted to
descriptions of the system in terms of real hardware and software. This is a very technical stage
and only a simple overview is presented here.
The logical data structure is converted into a physical architecture in terms of database
structures. The exact structure of the functions and how they are implemented is specified. The
physical data structure is optimized where necessary to meet size and performance requirements.
The product is a complete Physical Design that could tell software engineers how to build the
system using specific hardware and software, and to the appropriate standards.
Advantages and disadvantages
Using this methodology involves a significant undertaking, which may not be suitable for all
projects.
The main advantages of SSADM are:
• Three different views of the system
• Mature
• Separation of logical and physical aspects of the system
• Well-defined techniques and documentation
• User involvement
The size of SSADM is a big hindrance to using it in all circumstances. There is a large
investment in cost and time in training people to use the techniques. The learning curve is
considerable as not only are there several modeling techniques to come to terms with, but there
are also a lot of standards for the preparation and presentation of documents.
