
Database Systems versus File Systems

Consider the scenario of the record-keeping system in your campus or university: records must be kept for students, teachers, employees, books, and so on. All of these data can be stored in sequential (flat) files created with Notepad or any other text editor, with a separate file for students, teachers, books, and staff so that users are able to manipulate the data at the right time. Files were the best storage medium before the introduction of the DBMS, and many enterprises kept their records in flat files.
A file-based system is a collection of application programs that perform services for the end-users, such as the production of reports, where each program defines and manages its own data.
A typical file-processing system is supported by a conventional operating system. The system stores permanent records in various files, and it needs different application programs to extract records from, and add records to, the appropriate files. Before database management systems (DBMS) came along, organizations usually stored information in such systems. Keeping records in flat files has numerous disadvantages, which are listed below:

Fig. File-oriented approach: each data file (Student, Teacher, Employee, Library) is accessed by its own application program (Student Program, Teacher Program, Employee Program, Library Program), which produces reports for its users.

o Data redundancy and inconsistency.
Since different programmers create the files and application programs over a long period, the various files are likely to have different formats and the programs may be written in several programming languages. Moreover, the same information may be duplicated in several places (files). For example, the address and telephone number of a particular student may appear in a file that consists of marks sheet records and in a file that consists of sports records. This redundancy leads to higher storage and access cost. In addition, it may lead to data inconsistency; that is, the various copies of the same data may no longer agree. For example, a changed student address may be reflected in marks sheet records but not elsewhere in the system.
o Difficulty in accessing data.
Suppose that the campus director needs to find out the names of all students who live within a particular area. The director asks the data-processing department to generate such a list. Because the designers of the original system did not anticipate this request, there is no application program on hand to meet it, so it is difficult to retrieve the data on time.
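By contrast, a database system with a query language can answer such an unanticipated request directly. A minimal SQL sketch, assuming a hypothetical student table with name and address columns:

    SELECT name
    FROM   student
    WHERE  address LIKE '%Baneshwor%';   -- all students living in a particular area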


o Data isolation.
Because data are scattered in various files, and files may be in different formats, writing new application programs to retrieve the appropriate data is difficult.
o Integrity problems.
The data values stored in the database must satisfy certain types of consistency constraints. For example, the registration fee for a campus may never fall below a prescribed amount (say, Rs 2500). Developers enforce these constraints in the system by adding appropriate code in the various application programs. However, when new constraints are added, it is difficult to change the programs to enforce them. The problem is compounded when constraints involve several data items from different files.
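In a database system, such a constraint can be stated once, declaratively, and the DBMS enforces it for every program. A minimal SQL sketch, assuming a hypothetical registration table:

    CREATE TABLE registration (
        student_id INT,
        fee        DECIMAL(10,2),
        CHECK (fee >= 2500)   -- the DBMS rejects any row that violates the rule
    );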
o Atomicity problems
A computer system, like any other mechanical or electrical device, is subject to failure. In many applications,
it is crucial that, if a failure occurs, the data be restored to the consistent state that existed prior to the
failure. Consider a program to transfer Rs 50 from account A to account B. If a system failure occurs during
the execution of the program, it is possible that the Rs 50 was removed from account A but was not credited
to account B, resulting in an inconsistent database state. Clearly, it is essential to database consistency that
either both the credit and debit occur, or that neither occurs. That is, the funds transfer must be atomic: it must happen in its entirety or not at all. It is difficult to ensure atomicity in a conventional file-processing system.
o Concurrent-access anomalies.
For the sake of overall performance of the system and faster response, many systems allow multiple users to
update the data simultaneously. In such an environment, interaction of concurrent updates may result in
inconsistent data. Consider bank account A, containing Rs 500. If two customers withdraw funds (say Rs 50
and Rs 100 respectively) from account A at about the same time, the result of the concurrent executions
may leave the account in an incorrect (or inconsistent) state.
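For instance, if both withdrawal programs read the balance of Rs 500 before either one writes it back, one update overwrites the other (a lost update). A minimal sketch of the interleaving, assuming a hypothetical account table:

    -- Account A starts with balance 500
    -- T1: SELECT balance FROM account WHERE id = 'A';      -- T1 reads 500
    -- T2: SELECT balance FROM account WHERE id = 'A';      -- T2 also reads 500
    -- T1: UPDATE account SET balance = 450 WHERE id = 'A';
    -- T2: UPDATE account SET balance = 400 WHERE id = 'A';
    -- Final balance is 400 instead of the correct 350: T1's update is lost.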
o Security problems
Not every user of the database system should be able to access all the data. For example, in a banking
system, payroll personnel need to see only that part of the database that has information about the various
bank employees. They do not need access to information about customer accounts. But, since application
programs are added to the system in an ad hoc manner, enforcing such security constraints is difficult.
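A DBMS lets such restrictions be declared centrally instead of being re-coded in every program. A minimal SQL sketch, assuming hypothetical employee table and payroll user names:

    CREATE VIEW employee_details AS
        SELECT emp_id, emp_name, salary
        FROM   employee;                                 -- contains no customer account data

    GRANT SELECT ON employee_details TO payroll_user;   -- payroll staff can see only this view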
o Fixed queries/proliferation of application programs.
File-based systems are very dependent on the application programmer: any new query or report requires a new application program to be written, so programs proliferate while ad hoc requests that were not anticipated cannot be answered at all.

Fig. DBMS-oriented approach: the application programs (Student Program, Teacher Program, Employee Program, Book Program) no longer access separate files; instead they access one database (containing the Student, Teachers, Employee, and Book tables) through the database management system, which produces the required reports.

View of Data
A database system is a collection of interrelated files and a set of programs that allow users to access and modify these files. A major purpose of a database system is to provide users with an abstract view of the data.
That is, the system hides certain details of how the data are stored and maintained.

The Three-Schema Architecture
The goal of the three-schema architecture, illustrated in the figure below, is to separate the user applications from the physical database. In this architecture, schemas can be defined at the following three levels:
1. The internal (Physical) level has an internal schema, which describes the physical storage structure of the
database. The internal schema uses a physical data model and describes the complete details of data storage
and access paths for the database.
2. The conceptual (Logical) level has a conceptual schema, which describes the structure of the whole
database for a community of users. The conceptual schema hides the details of physical storage structures and
concentrates on describing entities, data types, relationships, user operations, and constraints.
Usually, a representational data model is used to describe the conceptual schema when a database system is
implemented. This implementation conceptual schema is often based on a conceptual schema design in a
high-level data model.
3. The external or view level includes a number of external schemas or user views. Each external schema
describes the part of the database that a particular user group is interested in and hides the rest of the database
from that user group. As in the previous level, each external schema is typically implemented using a
representational data model, possibly based on an external schema design in a high-level data model.


Fig. The Three-Schema Architecture

To understand the three-schema architecture, consider the three levels of the BOOK file in an Online Book database, as shown in the figure below. In this figure, two views (view 1 and view 2) of the BOOK file have been
defined at the external level. Different database users can see these views. The details of the data types are
hidden from the users. At the conceptual level, the BOOK records are described by a type definition. The
application programmers and the DBA generally work at this level of abstraction. At the internal level, the
BOOK records are described as a block of consecutive storage locations such as words or bytes. The database
users and the application programmers are not aware of these details; however, the DBA may be aware of
certain details of the physical organization of the data.
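A minimal SQL sketch of how the conceptual schema and one external view of BOOK might be declared (the column names are assumed for illustration; the internal level is handled by the DBMS and the DBA):

    -- Conceptual level: the complete BOOK record type
    CREATE TABLE book (
        book_id INT,
        title   VARCHAR(100),
        author  VARCHAR(60),
        price   DECIMAL(8,2)
    );

    -- External level: a user view that hides book_id and price
    CREATE VIEW book_view1 AS
        SELECT title, author FROM book;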


INSTANCES, SCHEMAS AND SUBSCHEMA in DBMS
A database changes over time as information is inserted or deleted. The collection of information stored in the database at a particular moment is called an instance of the database. The overall design of the database is called the database schema. A schema diagram displays only the names of record types (entities) and the names of data items (attributes) and does not show the relationships among the various files.



The schema will remain the same while the values filled into it change from instant to instant. When the schema framework is filled in with data item values, it is referred to as an instance of the schema. The data in the database at a particular moment of time is called a database state or snapshot, which is also called the current set of occurrences or instances in the database. In other words, "the description of a database is called the database schema, which is specified during database design and is not expected to change frequently". A displayed schema is called a schema diagram.

A schema diagram displays only some aspects of a schema, such as the number of record types and data items,
and some types of constraints. Other aspects are not specified in the schema diagram. It does not specify the
data type of each data item and the relationships among the various files.
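A minimal sketch of the distinction, using a hypothetical student table: the CREATE TABLE statement belongs to the schema, while the rows present at any given moment form an instance (database state):

    -- Schema: the description, specified at design time and rarely changed
    CREATE TABLE student (
        student_id INT,
        name       VARCHAR(50),
        address    VARCHAR(100)
    );

    -- Instance: the current state, which changes with every insert, update, or delete
    INSERT INTO student VALUES (1, 'Sita', 'Kathmandu');
    INSERT INTO student VALUES (2, 'Ram',  'Pokhara');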
Subschema
A subschema is a subset of the schema and inherits the same properties that a schema has. The plan (or scheme) for a view is often called a subschema. A subschema refers to an application programmer's (user's) view of the data item types and record types that he or she uses. It gives the user a window through which he or she can view only that part of the database that is of interest to him or her. Therefore, different application programmers (users) can have different subschemas (views) of the same database.
Data abstraction
One fundamental characteristic of the database approach is that it provides some level of data abstraction.
Data abstraction generally refers to the suppression of details of data organization and storage, and the
highlighting of the essential features for an improved understanding of data. One of the main characteristics of
the database approach is to support data abstraction so that different users can perceive data at their preferred
level of detail.
Data model
A collection of concepts that can be used to describe the structure of a database provides the necessary means
to achieve this abstraction. By structure of a database we mean the data types, relationships, and constraints
that apply to the data.


o Hierarchical Database Model
The hierarchical database model is one of the oldest database models, dating from the late 1950s. One of the first hierarchical databases, Information Management System (IMS), was developed jointly by North American Rockwell and IBM. This model is like the structure of a tree, with the records forming the nodes and the fields forming the branches of the tree.
The hierarchical model organizes data elements as tabular rows, one for each instance of an entity. Consider a company's organizational structure. At the top we have a General Manager (GM). Under him we have several Deputy General Managers (DGMs). Each DGM looks after a couple of departments, and each department has a manager and many employees. When represented in the hierarchical model, there are separate rows for the GM, each DGM, each department, each manager, and each employee. The row position implies a relationship to other rows: a given employee belongs to the department that is closest above it in the list, the department belongs to the manager that is immediately above it in the list, and so on, as sketched below.
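A small illustrative sketch of such a hierarchy (the names are assumed):

    GM
        DGM-1
            Department: Accounts
                Manager: M1
                    Employee: E1
                    Employee: E2
        DGM-2
            Department: Sales
                Manager: M2
                    Employee: E3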

Advantages
1. Simplicity
Data naturally have hierarchical relationships in most practical situations, so it is easier to view data arranged in this manner. This makes this type of database more suitable for such purposes.
2. Security
These database systems can enforce varying degrees of security, unlike flat-file systems.
3. Database Integrity
Because of its inherent parent-child structure, database integrity is highly promoted in these systems.
4. Efficiency: The hierarchical database model is very efficient when the database contains a large number of 1:N (one-to-many) relationships and when the users require a large number of transactions using data whose relationships are fixed.
Disadvantages
1. Complexity of Implementation: The actual implementation of a hierarchical database depends on the
physical storage of data. This makes the implementation complicated.
2. Difficulty in Management: The movement of a data segment from one location to another causes all the accessing programs to be modified, making database management a complex affair.
3. Complexity of Programming: Programming a hierarchical database is relatively complex because the
programmers must know the physical path of the data items.
4. Poor Portability: The database is not easily portable mainly because there is little or no standard existing
for these types of database.
5. Database Management Problems: If you make any changes in the database structure of a hierarchical
database, then you need to make the necessary changes in all the application programs that access the
database. Thus, maintaining the database and the applications can become very difficult.
6. Lack of structural independence: Structural independence exists when changes to the database structure do not affect the DBMS's ability to access data. Hierarchical database systems use physical storage
paths to navigate to the different data segments. So, the application programs should have a good knowledge
of the relevant access paths to access the data. So, if the physical structure is changed the applications will
also have to be modified. Thus, in a hierarchical database the benefits of data independence are limited by
structural dependence.
7. Programs Complexity: Due to the structural dependence and the navigational structure, the application
programs and the end users must know precisely how the data is distributed physically in the database in order
to access data. This requires knowledge of complex pointer systems, which is often beyond the grasp of
ordinary users (users who have little or no programming knowledge).
8. Operational Anomalies: As discussed earlier, the hierarchical model suffers from insert, update, and deletion anomalies; the retrieval operation is also complex and asymmetric, thus the hierarchical model is not suitable for all cases.
9. Implementation Limitation: Many of the common relationships do not conform to the 1:N format required by the hierarchical model. Many-to-many (N:N) relationships, which are more common in real life, are very difficult to implement in a hierarchical model.
o Network Model
The popularity of the network data model coincided with the popularity of the hierarchical data model. Some
data were more naturally modeled with more than one parent per child. So, the network model permitted the
modeling of many-to-many relationships in data. In 1971, the Conference on Data Systems Languages
(CODASYL) formally defined the network model. The basic data modeling construct in the network model is
the set construct. A set consists of an owner record type, a set name, and a member record type. A member
record type can have that role in more than one set; hence the multi-parent concept is supported. The network model is a collection of data in which records are physically linked through linked lists. A DBMS is said to be a network DBMS if the relationships among data in the database are of the many-to-many type. Many-to-many relationships appear in the form of a network; thus the structure of a network database is extremely complicated because of these many-to-many relationships, in which one record can be used as a key to the entire database. A network database is structured in the form of a graph, which is also a data structure.

Advantages:
1. Speed of access is faster because of the predefined data paths.
2. Provide very efficient "High-speed" retrieval.
3. Ability to handle more relationship types. The network model can handle the one-to-many and many-to-
many relationships.
4. Simplicity
The network model is conceptually simple and easy to design.
5. Data Integrity
In a network model, no member can exist without an owner. A user must therefore first define the owner
record and then the member record. This ensures the integrity.
6. Ease of data access
In the network database terminology, a relationship is a set. Each set comprises two types of records: an owner record and a member record. In a network model, an application can access an owner record and all the member records within a set.
7. Data Independence
The network model draws a clear line of demarcation between programs and the complex physical
storage details. The application programs work independently of the data. Any changes made in the data
characteristics do not affect the application program.
Disadvantages
1. System complexity.
In a network model, data are accessed one record at a time. This makes it essential for the database designers, administrators, and programmers to be familiar with the internal data structures to gain access to the data. Therefore, a user-friendly database management system cannot be created using the network model.
2. Lack of Structural independence.
Making structural modifications to the database is very difficult in the network database model as the
data access method is navigational. Any changes made to the database structure require the application
programs to be modified before they can access data. Though the network model achieves data
independence, it still fails to achieve structural independence.
3. Procedural access language.
Data in a network database are accessed by navigating from record to record through the sets, so the data manipulation language is procedural: the programmer must specify how to reach the required data, not merely what data are required.

o Relational Model
The Relational Model was the first theoretically founded and well-thought-out data model, proposed by E. F. Codd, then a researcher at IBM, in 1970. It has been the foundation of most database software and theoretical database research ever since. The Relational Model is a depiction of how each piece of stored information relates to the other stored information: it shows how tables are linked, what types of links exist between tables, what keys are used, and what information is referenced between tables. It is an essential part of developing a normalized database structure that prevents repeated and redundant data storage.
The basic idea behind the relational model is that a database consists of a series of unordered tables (or relations) that can be manipulated using non-procedural operations that return tables. This model was in stark contrast to the more traditional database theories of the time, which were much more complicated, less flexible, and dependent on the physical storage methods of the data. The relational database model is based on relational algebra, set theory, and predicate logic.
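A minimal SQL sketch of how tables are linked through keys in the relational model (the table and column names are assumed for illustration):

    CREATE TABLE student (
        student_id INT PRIMARY KEY,
        name       VARCHAR(50)
    );

    CREATE TABLE marks (
        student_id INT REFERENCES student(student_id),   -- foreign key linking the two tables
        subject    VARCHAR(30),
        score      INT
    );

    -- A non-procedural operation on the tables that itself returns a table
    SELECT s.name, m.subject, m.score
    FROM   student s JOIN marks m ON s.student_id = m.student_id;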

The relational data model provides several advantages, but the main advantages are;
1. It allows for developing a simple but powerful structure for databases.
2. It allows separation of the logical and physical level, so that logical design can be performed without
considering the storage structures.
3. It allows for designing or expressing the data of an organization in a simple way, which can easily be
understood.
4. The data operations can easily be expressed in a simple way.
5. It allows for applying data integrity rules on the relations of database.
6. Data from multiple tables can be retrieved very easily.
7. It allows users to insert, modify, and delete rows in a table without facing any problems.
Disadvantages:
1. Performance:
A major constraint, and therefore disadvantage, in the use of a relational database system is machine performance. If the number of tables between which relationships are to be established is large, and the tables themselves are large, performance in responding to SQL queries suffers.
2. Physical Storage Consumption:
With an interactive system, an operation like a join also depends upon the physical storage. It is therefore common to tune relational databases, choosing the physical data layout so as to give good performance for the most frequently run operations; as a natural result, the less frequently run operations tend to become even slower.
3. Slow extraction of meaning from data:
If the data are naturally organized in a hierarchical manner and stored as such, the hierarchical approach may extract meaning from those data more quickly.

o The Entity-Relationship Model
The entity-relationship (E-R) data model is based on a concept of a real world that consists of a collection of
basic objects, called entities, and of relationships among these objects. An entity is a thing or object in the
real world that is distinguishable from other objects. For example, each student is an entity, and each book can also be considered an entity. Entities are described in a database by a set of attributes. For example, the attributes book name and author may describe one particular book in a library, and they form attributes of the book entity set. Similarly, the attributes student-name, student address, and gender may describe a student entity. An extra attribute, student_id, is used to uniquely identify each student; a unique student identifier must be assigned to each student. A relationship is an association among several entities. For example, a marks relationship associates a student with each of the marks records that he or she has. The set of all entities of the same type and the set of all
relationships of the same type are termed an entity set and relationship set, respectively. The overall logical
structure (schema) of a database can be expressed graphically by an E-R diagram, which is built up from the
following components:
Rectangles, which represent entity sets
Ellipses, which represent attributes
Diamonds, which represent relationships among entity sets
Lines, which link attributes to entity sets and entity sets to relationships


Fig E-R diagram corresponding to customers and loans



ER Diagrams Usage
While able to describe just about any system, ER diagrams are most often associated with complex databases
that are used in software engineering and IT networks. In particular, ER diagrams are frequently used during
the design stage of a development process in order to identify different system elements and their relationships
with each other. For example, inventory software used in a retail shop will have a database that monitors
elements such as purchases, items, item types, item sources, and item prices.
Advantages:
Conceptual simplicity
Visual representation
Effective communication
Integration with the relational database model

Disadvantages:
Limited constraint representation
Limited relationship representation
No representation of data manipulation
Loss of information


Database Languages
o DDL (data definition language)
For describing data and data structures, a suitable description tool, a data definition language (DDL), is needed. With its help a database schema can be defined and also changed later. Typical DDL operations (with their respective keywords in the Structured Query Language, SQL) are:
Creation of tables and definition of attributes (CREATE TABLE ...)
Change of tables by adding or deleting attributes (ALTER TABLE ...)
Deletion of a whole table including its content (DROP TABLE ...)
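A minimal sketch of these DDL operations on a hypothetical student table:

    CREATE TABLE student (
        student_id INT,
        name       VARCHAR(50)
    );
    ALTER TABLE student ADD address VARCHAR(100);   -- add an attribute
    DROP TABLE student;                             -- delete the whole table including its content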
o DML (data manipulation language)
Additionally, a language for describing operations on data, such as store, search, read, and change (the so-called data manipulation), is needed. Such operations can be done with a data manipulation language (DML). Within such languages, keywords like insert, modify, update, delete, and select are common. Typical DML operations (with their respective keywords in the Structured Query Language, SQL) are:
Add data (INSERT)
Change data (UPDATE)
Delete data (DELETE)
Query data (SELECT)
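A minimal sketch of these DML operations, again on a hypothetical student table:

    INSERT INTO student (student_id, name) VALUES (1, 'Sita');       -- add data
    UPDATE student SET name = 'Sita Sharma' WHERE student_id = 1;    -- change data
    DELETE FROM student WHERE student_id = 1;                        -- delete data
    SELECT * FROM student;                                           -- query data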
o DCL (Data Control Language): provides utilities for privileges, such as GRANT and REVOKE.
o TCL (Transaction Control Language): provides utilities for transaction management, such as COMMIT and ROLLBACK.
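A minimal sketch of DCL and TCL statements (the user and table names are assumed):

    GRANT SELECT, INSERT ON student TO clerk_user;   -- DCL: give privileges
    REVOKE INSERT ON student FROM clerk_user;        -- DCL: take a privilege back

    UPDATE student SET address = 'Lalitpur' WHERE student_id = 1;
    COMMIT;                                          -- TCL: make the change permanent
    -- ROLLBACK; would instead undo the uncommitted change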

An object-oriented database management system (OODBMS)
An object-oriented database management system (OODBMS), sometimes shortened to ODBMS (for object database management system), is a database management system (DBMS) that supports the modeling and
creation of data as objects. This includes some kind of support for classes of objects and the inheritance of
class properties and methods by subclasses and their objects. There is currently no widely agreed-upon
standard for what constitutes an OODBMS, and OODBMS products are considered to be still in their infancy.
In the meantime, the object-relational database management system (ORDBMS), the idea that object-oriented
database concepts can be superimposed on relational databases, is more commonly encountered in available
products. An object-oriented database interface standard is being developed by an industry group, the Object
Data Management Group (ODMG). The Object Management Group (OMG) has already standardized an
object-oriented data brokering interface between systems in a network.
A core object-oriented data model consists of the following basic object-oriented concepts:
(1) Object and object identifier: Any real world entity is uniformly modeled as an object (associated with a
unique id: used to pinpoint an object to retrieve).
(2) Attributes and methods: every object has a state (the set of values for the attributes of the object) and a
behavior (the set of methods - program code - which operate on the state of the object). The state and behavior
encapsulated in an object are accessed or invoked from outside the object only through explicit message
passing. An attribute is an instance variable, whose domain may be any class: user-defined or primitive. A
class composition hierarchy (aggregation relationship) is orthogonal to the concept of a class hierarchy. The
link in a class composition hierarchy may form cycles.
(3) Class: a means of grouping all the objects which share the same set of attributes and methods. An object
must belong to only one class as an instance of that class (instance-of relationship). A class is similar to an
abstract data type. A class may also be primitive (no attributes), e.g., integer, string, Boolean.
(4) Class hierarchy and inheritance: derive a new class (subclass) from an existing class (superclass). The
subclass inherits all the attributes and methods of the existing class and may have additional attributes and
methods. single inheritance (class hierarchy) vs. multiple inheritance (class lattice).
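As a rough illustration of how object concepts such as inheritance can be superimposed on a relational system (the ORDBMS idea mentioned above), a PostgreSQL-flavoured sketch with assumed table names:

    CREATE TABLE person (
        person_id INT,
        name      VARCHAR(50)
    );

    -- student is a subclass of person: it inherits person's attributes and adds its own
    CREATE TABLE student (
        roll_no INT
    ) INHERITS (person);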

























Object Database Advantages over RDBMS
o Objects do not require assembly and disassembly, which saves coding time and execution time.
o Reduced paging
o Easier navigation
o Better concurrency control - A hierarchy of objects may be locked.
o Data model is based on the real world.
o Works well for distributed architectures.
o Less code required when applications are object oriented.
o Reduced Maintenance
o Improved Reliability and Flexibility
o High Code Reusability
Object Database Disadvantages compared to RDBMS
o Lower efficiency when data is simple and relationships are simple.
o Relational tables are simpler.
o Late binding may slow access speed.
o More user tools exist for RDBMS.
o Standards for RDBMS are more stable.
o Support for RDBMS is more certain and change is less likely to be required.




Fig. Difference between OODBMS and RDBMS
Transaction
A transaction is a logical unit of work that contains one or more SQL statements. A transaction is an atomic
unit. The effects of all the SQL statements in a transaction can be either all committed (applied to the
database) or all rolled back (undone from the database).
A transaction begins with the first executable SQL statement. A transaction ends when it is committed or
rolled back, either explicitly with a COMMIT or ROLLBACK statement or implicitly when a DDL statement
is issued.


To illustrate the concept of a transaction, consider a banking database. When a bank customer transfers money
from a savings account to a checking account, the transaction can consist of three separate operations:
1. Decrement the savings account
2. Increment the checking account
3. Record the transaction in the transaction journal
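A minimal SQL sketch of such a transfer as one atomic transaction (account numbers, amount, and table names are assumed; exact transaction syntax varies between DBMSs):

    BEGIN;                                                    -- start the transaction
    UPDATE savings  SET balance = balance - 500 WHERE account_no = 'S-101';
    UPDATE checking SET balance = balance + 500 WHERE account_no = 'C-101';
    INSERT INTO journal (from_acct, to_acct, amount) VALUES ('S-101', 'C-101', 500);
    COMMIT;                                                   -- apply all three, or ROLLBACK to undo them all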
ACID Properties
1. Atomicity
Either all operations of a transaction are reflected in the database or none of them (all or nothing)
2. Consistency
If the database was in a consistent state before the transaction started, it will be in a consistent state
after the transaction has been executed
3. Isolation
If transactions are executed in parallel, the effects of an ongoing transaction must not be visible to
other transactions
4. Durability
After a transaction finished successfully, its changes are persistent and will not be lost (e.g. on system
failure)














Transaction States
1. Active.
Initial state; transaction is in this state while executing
2. Partially committed.
After the last statement has been executed.
3. Committed.
After successful completion.
4. Failed.
After the discovery that normal execution is no longer possible, due to a logical error (e.g. bad input), a system error (e.g. deadlock), or a system crash.
5. Aborted.
After the rollback of the transaction.



Database System structure.



Query processor: This is a major DBMS component that transforms queries into a series of low-level
instructions directed to the database manager.
Database manager (DM): The DM interfaces with user-submitted application programs and queries. The DM accepts queries and examines the external and conceptual schemas to determine what conceptual records are required to satisfy the request. The DM then places a call to the file manager to perform the request. The major software components of the DM are listed further below.
File manager: The file manager manipulates the underlying storage files and manages the allocation of storage space on disk. It establishes and maintains the list of structures and indexes defined in the internal schema.
DML preprocessor: This module converts DML statements embedded in an application program into
standard function calls in the host language. The DML preprocessor must interact with the query processor to
generate the appropriate code.
DDL compiler: The DDL compiler converts DDL statements into a set of tables containing metadata. These tables are then stored in the system catalog, while control information is stored in data file headers.
Catalog manager: The catalog manager manages access to and maintains the system catalog. The system
catalog is accessed by most DBMS components.

The major software components for the database manager are as follows:
Authorization control: This module checks that the user has the necessary authorization to carry out the required operation.
Command processor: Once the system has checked that the user has the authority to carry out the operation, control is passed to the command processor.
Integrity checker: For an operation that changes the database, the integrity checker checks that the requested operation satisfies all necessary integrity constraints (such as key constraints).
Query optimizer: This module determines an optimal strategy for the query execution.
Transaction manager: This module performs the required processing of operations it receives from transactions.
Scheduler: This module is responsible for ensuring that concurrent operations on the database proceed without conflicting with one another. It controls the relative order in which transaction operations are executed.
Recovery manager: This module ensures that the database remains in a consistent state in the presence of failures. It is responsible for transaction commit and abort.
Buffer manager: This module is responsible for the transfer of data between main memory and secondary storage, such as disk and tape. The recovery manager and the buffer manager are sometimes referred to collectively as the data manager. The buffer manager is sometimes known as the cache manager.

Application architectures
In a two-tier client-server architecture, the client takes the user's request, checks the syntax, and generates database requests in SQL or
another database language appropriate to the application logic. It then transmits the message to the server,
waits for a response, and formats the response for the end-user. The server accepts and processes the
database requests, then transmits the results back to the client. The processing involves checking
authorization, ensuring integrity, maintaining the system catalog, and performing query and update
processing. In addition, it also provides concurrency and recovery control.

















Two-tier client-server architecture.


There are many advantages to this type of architecture. For example:
1. It enables wider access to existing databases.
2. Increased performance: if the clients and the server reside on different computers, then different CPUs can process applications in parallel. It should also be easier to tune the server machine if its only task is to perform database processing.
3. Hardware costs may be reduced: only the server requires storage and processing power sufficient to store and manage the database.
4. Communication costs are reduced: applications carry out part of the operations on the client and send only requests for database access across the network, resulting in less data being sent across the network.
5. Increased consistency: the server can handle integrity checks, so that constraints need be defined and validated in only one place, rather than having each application program perform its own checking.
6. It maps onto an open systems architecture quite naturally.

















Three-Tier Client-Server Architecture




Three-Tier Client-Server Architecture includes:
1. The user interface layer, which runs on the end-user's computer (the client).
2. The business logic and data processing layer. This middle tier runs on a server and is often called the
application server.
3. A DBMS, which stores the data required by the middle tier. This tier may run on a separate server called
the database server.

Advantages:
1. The need for less expensive hardware because the client is thin.
2. Application maintenance is centralized, with the transfer of the business logic for many end-users into a single application server. This eliminates the concerns of software distribution that are problematic in the traditional two-tier client-server model.
3. The added modularity makes it easier to modify or replace one tier without affecting the other tiers.
4. Load balancing is easier with the separation of the core business logic from the database functions.

An additional advantage is that the three-tier architecture maps quite naturally to the Web environment, with
a Web browser acting as the thin client, and a Web server acting as the application server. The three-tier
architecture can be extended to n-tiers, with additional tiers added to provide more flexibility and scalability.
For example, the middle tier of the three-tier architecture could be split into two, with one tier for the Web
server and another for the application server.

Database users and administrators
A primary goal of a database system is to retrieve information from and store new information in the
database. People who work with a database can be categorized as database users or database administrators.
There are four different types of database-system users, differentiated by the way they expect to interact with
the system. Different types of user interfaces have been designed for the different types of users.
1. Naive users are unsophisticated users who interact with the system by invoking one of the application
programs that have been written previously. For example, consider a user who wishes to find her account balance over the World Wide Web. Such a user may access a form where she enters her account number. An application program at the Web server then retrieves the account balance, using the given account number, and passes this information back to the user.
2. Application programmers are computer professionals who write application programs. Application
programmers can choose from many tools to develop user interfaces. Rapid application development (RAD) tools, such as Visual Basic or .NET, enable an application programmer to construct forms and reports without writing a program.
3. Specialized users are sophisticated users who write specialized database applications that do not fit into
the traditional data-processing framework. Among these applications are computer-aided design systems,
knowledge base and expert systems, systems that store data with complex data types (for example,
graphics data and audio data), and environment-modeling systems.
4. Database Administrator is a person who has central control over both data and application programs.
The responsibilities of DBA vary depending upon the job description and corporate and organization
policies. Some of the responsibilities of DBA are given here.
Schema definition and modification: The overall structure of the database is known as database
schema. It is the responsibility of the DBA to create the database schema by executing a set of data
definition statements in DDL. The DBA also carries out the changes to the schema according to the
changing needs of the organization.
New software installation: It is the responsibility of the DBA to install new DBMS software,
application software, and other related software. After installation, the DBA must test the new software.
Security enforcement and administration:
DBA is responsible for establishing and monitoring the security of the database system. It involves
adding and removing users, auditing, and checking for security problems.
Data analysis: DBA is responsible for analyzing the data stored in the database, and studying its
performance and efficiency in order to effectively use indexes, parallel query execution, etc.
Preliminary database design: The DBA works along with the development team during the database
design stage due to which many potential problems that can arise later (after installation) can be avoided.
Physical organization modification: The DBA is responsible for carrying out the modifications in the
physical organization of the database for better performance.
Routine maintenance checks: The DBA is responsible for taking database backups periodically in order to recover from any hardware or software failure (if one occurs). Other routine maintenance checks carried out by the DBA include checking data storage and ensuring the availability of free disk space for normal operations, upgrading disk space as and when required, and so on.
