
Oracle DBA - Backup and Recovery Scripts

Date: Dec 27, 2002 By Rajendra Gutta. Sample Chapter is provided courtesy of Sams.
Having the right backup and recovery procedures is crucial to the operation of any database. It is the
responsibility of the database administrator to protect the database from system faults, crashes, and
natural calamities resulting from a variety of circumstances. Learn how to choose the best backup and
recovery mechanism for your Oracle system.

Having the right backup and recovery procedures is the lifeblood of any database. Companies live on data,
and, if that data is not available, the whole company collapses. As a result, it is the responsibility of the
database administrator to protect the database from system faults, crashes, and natural calamities
resulting from a variety of circumstances.

The choice of a backup and recovery mechanism depends mainly on the 
following factors: 

• Database mode (ARCHIVELOG, NOARCHIVELOG)
• Size of the database
• Backup and recovery time
• Required uptime
• Type of data (OLTP, DSS, Data Warehouse)

The types of backup are

• Offline backup (Cold or closed database backup)
• Online backup (Hot or open database backup)
• Logical export

Logical exports create an export file that contains the SQL statements needed to re-create the database. An export is performed while the database is open and does not affect users' work. Offline backups can be performed only when the database has been shut down cleanly, and the database is unavailable to users while the offline backup is being performed. Online backups are performed while the database is open and do not affect users' work. The database must run in ARCHIVELOG mode to perform online backups.

The database can run in either ARCHIVELOG mode or NOARCHIVELOG mode. In ARCHIVELOG mode,
the archiver (ARCH) process archives the redo log files to the archive destination directory. These archive
files can be used to recover the database in the case of a failure. In NOARCHIVELOG mode, the redo log
files are not archived.
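If you are not sure which mode a database is in, you can check it from SQL*Plus and, if required, switch it to ARCHIVELOG mode while the database is mounted but not open. The following is a minimal sketch; on Oracle 8i and earlier you would also set LOG_ARCHIVE_START=TRUE and an archive destination in the Init.ora file so that the archiver starts automatically.

SQL>select log_mode from v$database;
SQL>shutdown immediate
SQL>startup mount
SQL>alter database archivelog;
SQL>alter database open;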

When the database is running in ARCHIVELOG mode, the choice can be one or more of the following:

• Export
• Hot backup
• Cold backup

When the database is running in NOARCHIVELOG mode, the choice of backup is as follows:

• Export
• Cold backup

Cold Backup
Offline or cold backups are performed when the database is completely shut down. The disadvantage of an offline backup is that it cannot be done if the database needs to run 24/7. Additionally, you can only recover the database up to the point of the last backup, unless the database is running in ARCHIVELOG mode.
The general steps involved in performing a cold backup are shown in Figure 3.1. These general steps are
used in writing cold backup scripts for Unix and Windows NT.

Figure 3.1 Steps for cold backup.

The steps in Figure 3.1 are explained as follows.

Step 1—Generating File List

An offline backup consists of physically copying the following files:

• Data files
• Control files
• Init.ora and config.ora files

CAUTION

Backing up online redo log files is not advised, except when performing a cold backup with the database running in NOARCHIVELOG mode. If you make a cold backup in ARCHIVELOG mode, do not back up the redo log files; there is a chance that you may accidentally overwrite your real online redo logs, preventing you from doing a complete recovery.

If your database is running in ARCHIVELOG mode, when you perform a cold backup you should also back up any archived redo logs that exist.

Before performing a cold backup, you need to know the location of the files that need to be backed up. Because the database structure changes from day to day as files are added or moved between directories, it is always better to query the database to get the physical structure of the database before making a cold backup.

To get the structure of the database, query the following dynamic data dictionary tables:

• V$datafile Lists all the data files used in the database

SQL>select name from v$datafile;

• Control file Back up the control file and generate a trace of the control file using

SQL>alter database backup controlfile to '/u10/backup/control.ctl';
SQL>alter database backup controlfile to trace;

• Init.ora and config.ora Located under the $ORACLE_HOME/dbs directory

Step 2—Shut down the database

You can shut down a database with the following commands:

$su - oracle
$sqlplus "/ as sysdba"
SQL>shutdown

Step 3—Perform a backup

In the first step, you generated a list of files to be backed up. To back up the files, you can use the Unix copy command (cp) to copy them to a backup location, as shown in the following code. You have to copy all the files that you generated in Step 1.
$cp /u01/oracle/users01.dbf /u10/backup

You can perform the backup of the Init.ora and config.ora files as follows:

$cp $ORACLE_HOME/dbs/init.ora /u10/backup


$cp $ORACLE_HOME/dbs/config.ora /u10/backup

Step 4—Start the database

After the backup is complete, you can start the database as follows:

$su - oracle
$sqlplus "/ as sysdba"
SQL> startup

Hot Backup
An online backup, or hot backup, is also referred to as an ARCHIVELOG backup. An online backup can only be done when the database is running in ARCHIVELOG mode and the database is open. When the database is running in ARCHIVELOG mode, the archiver (ARCH) background process copies each filled online redo log file to the archive destination.

An online backup consists of backing up the following files. Because the database is open while the backup is performed, you have to follow the procedure shown in Figure 3.2 to back up the files:

• Data files of each tablespace
• Archived redo log files
• Control file
• Init.ora and config.ora files

Figure 3.2 Steps for hot backup.

The general steps involved in performing hot backup are shown in Figure 3.2. These general steps are
used in writing hot backup scripts for Unix and Windows NT.

The steps in Figure 3.2 are explained as follows.

Step 1—Put the tablespace in the Backup mode and copy the data files.

Assume that your database has two tablespaces, USERS and TOOLS. To back up the files for these two
tablespaces, first put the tablespace in backup mode by using the ALTER statement as follows:

SQL>alter tablespace USERS begin backup;

After the tablespace is in Backup mode, you can use the SELECT statement to list the data files for the
USERS tablespace, and the copy (cp) command to copy the files to the backup location. Assume that the
USERS tablespace has two data files—users01.dbf and users02.dbf.

SQL>select file_name from dba_data_files where tablespace_name='USERS';
$cp /u01/oracle/users01.dbf /u10/backup
$cp /u01/oracle/users02.dbf /u10/backup

The following command ends the backup process and puts the tablespace back in normal mode.

SQL>alter tablespace USERS end backup;


You have to repeat this process for all tablespaces. You can get the list of tablespaces by using the
following SQL statement:

SQL>select tablespace_name from dba_tablespaces;

Step 2—Back up the control and Init.ora files.

To back up the control file, use

SQL>alter database backup controlfile to '/u10/backup/control.ctl';

You can copy the Init.ora file to a backup location using

$cp $ORACLE_HOME/dbs/initorcl.ora /u10/backup

Step 3—Stop archiving.

Archiving is a continuous process, and without stopping the archiver you might unintentionally copy the file that the archiver is currently writing. To avoid this, first stop the archiver and then copy the archive files to the backup location. You can stop the archiver as follows:

SQL>alter system switch logfile;
SQL>alter system archive log stop;

The first command switches the redo log file, and the second command stops the archiver process.

Step 4—Back up the archive files.

To avoid backing up the archive file that is currently being written, find the lowest sequence number that has not yet been archived from the V$LOG view, and then back up all the archive files with lower sequence numbers. The archive file location is defined by the LOG_ARCHIVE_DEST_n parameter in the Init.ora file.

select min(sequence#) from v$log
where archived='NO';
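For example, if the query returns 143 (a made-up value) and the archive files are written to /u03/oracle/arch (also an example path and naming format), you would copy only the files with lower sequence numbers:

$cp /u03/oracle/arch/arch_140.arc /u10/backup/arch
$cp /u03/oracle/arch/arch_141.arc /u10/backup/arch
$cp /u03/oracle/arch/arch_142.arc /u10/backup/arch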

Step 5—Restart the archive process.

The following command restarts the archiver process:

SQL>alter system archive log start;

Now you have completed the hot backup of the database.

An online backup keeps the database open and functional for 24/7 operations. It is advisable to schedule online backups when there is the least user activity on the database, because backing up the database is very I/O intensive and users might see slower response during the backup period. Additionally, if user activity is very high, the archive destination might fill up very quickly.

Database Crashes During Hot Backup

There can be many reasons for the database to crash during a hot backup, such as a power outage or a reboot of the server. If this happens during a hot backup, chances are that a tablespace is left in backup mode. In that case you must manually recover the files involved, and the recovery operation ends the backup mode of the tablespace. It is important to check the status of the files as soon as you restart the instance, and to end the backup for any tablespace that is still in backup mode.

select a.name, b.status from v$datafile a, v$backup b
where a.file#=b.file# and b.status='ACTIVE';

or

select a.tablespace_name, a.file_name, b.status from dba_data_files a, v$backup b
where a.file_id=b.file# and b.status='ACTIVE';

These statements list files with ACTIVE status. If a file is in ACTIVE state, the corresponding tablespace is in backup mode. The second statement also gives the tablespace name, but it can't be used unless the database is open. You need to end the backup mode of the tablespace with the following command:

alter tablespace tablespace_name end backup;
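If several tablespaces are involved, you can generate the END BACKUP commands instead of typing them one by one. This is a minimal sketch that assumes Oracle 8 or later, where the V$TABLESPACE view is available (it works with the database mounted but not yet open):

select distinct 'alter tablespace '||t.name||' end backup;'
from v$tablespace t, v$datafile d, v$backup b
where t.ts#=d.ts# and d.file#=b.file# and b.status='ACTIVE';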

Logical Export
Export is the single most versatile utility available to back up the database, defragment the database, and port the database or individual objects from one operating system to another.

Export backup detects block corruption

Although you perform other types of backup regularly, it is good to perform a full export of the database at regular intervals, because export detects any data or block corruption in the database. By using the export file, it is also possible to recover individual objects, whereas the other backup methods do not support individual object recovery.
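For example, to restore a single table from a full export file, you can run the import utility against the dump file; the dump file name, schema, and table below are illustrative only:

$imp system/manager file=/u10/backup/DEV.dmp fromuser=scott touser=scott tables=T1 ignore=y log=imp_T1.log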

Export can be used to export the database at different levels of functionality:

• Full export (full database export) (FULL=Y)
• User-level export (exports objects of specified users) (OWNER=userlist)
• Table-level export (exports specified tables and partitions) (TABLES=tablelist)
• Transportable tablespaces (TABLESPACES=tools, TRANSPORT_TABLESPACE=y)

There are two methods of export:

• Conventional Path (default)—Uses the SQL layer to create the export file. The SQL layer introduces CPU overhead for character-set conversion and for converting numbers, dates, and so on, which makes it slower.
• Direct path (DIRECT=YES)—Skips the SQL layer and reads directly from the database buffers or private buffers, so it is much faster than the conventional path.

Sample exp commands for these export levels and methods are shown next.
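The following exp invocations illustrate the levels and methods just described; system/manager, scott, and the file names are placeholders for your own credentials, schemas, and paths:

$exp system/manager full=y file=/u10/backup/full.dmp log=/u10/backup/full.log
$exp system/manager owner=scott file=/u10/backup/scott.dmp log=/u10/backup/scott.log
$exp system/manager tables=scott.emp,scott.dept file=/u10/backup/tabs.dmp log=/u10/backup/tabs.log
$exp system/manager full=y direct=y file=/u10/backup/full_direct.dmp log=/u10/backup/full_direct.log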

We will discuss scripts that perform full, user-level, and table-level exports of the database. The scripts also show you how to compress and split the export file while the export is running. This is especially useful if the underlying operating system has a 2GB maximum file size limit.

Understand scripting

This chapter requires an understanding of basic Unix shell and DOS batch programming techniques, which are described in Chapter 2, "Building Blocks." That chapter explains some of the common routines used across most of the scripts presented here.

This book could have provided much simpler scripts. But, for the sake of standardization across all the scripts and the reusability of individual sections in your own scripts, I am focusing on providing comprehensive scripts rather than quick fixes. After you understand one script, it is easy to follow the flow of the rest of the scripts.

Backup and Recovery under Unix

The backup and recovery scripts discussed here have been tested under Sun Solaris 2.x, HP-UX 11.x, and AIX 4.x. The use of a particular command is discussed where it differs between these operating systems. The scripts might also work in later versions of the same operating systems, because they are written based on the common ground among these three Unix flavors. However, I advise that you test the scripts in your environment for both backup and recovery before using them as regular backup scripts. This testing not only gives you confidence in the scripts, it also gives you an understanding of how to use them in case a recovery is needed and gives you peace of mind when a crisis hits.

Backup Scripts for HP-UX, Sun Solaris, and AIX

The backup scripts provided here work for HP-UX, Sun Solaris, and AIX with one slight modification. The scripts use v$parameter and v$controlfile to get the user dump destination and control file information. Because the dollar sign ($) is a special character in Unix, you have to precede it with a backslash (\) to tell the shell to treat it as a regular character. However, this differs between the flavors of Unix: AIX and HP-UX need one backslash, and Sun OS needs two backslashes to make the dollar sign a regular character.

Sun OS 5.x needs two \\

AIX 4.x needs one \

HP-UX 11.x needs one \
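For example, the SELECT line that reads user_dump_dest inside the backquoted here-documents in these scripts differs only in the number of backslashes; test the form your shell expects before relying on it:

AIX 4.x and HP-UX 11.x (one backslash):
select value from v\$parameter where name='user_dump_dest';

Sun OS 5.x (two backslashes):
select value from v\\$parameter where name='user_dump_dest';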

These scripts are presented in a modular approach. Each script consists of a number of small functions and a main section. Each function is designed to meet a specific objective, so that it is easy to understand and modify. These small functions are reusable and can be used in the design of your own scripts. If you want to change a script to fit your unique needs, you can do so easily in the function where you want the change, without affecting the whole script.

After the backup is complete, it is necessary to check the backup status by reviewing log and error files
generated by the scripts.

Cold Backup
The cold backup program (see Listing 3.1) performs a cold backup of the database under the Unix environment. The script takes two input parameters, SID and OWNER. SID is the instance to be backed up, and OWNER is the Unix account under which Oracle is running. Figure 3.3 describes the functionality of the cold backup program. Each box represents a corresponding function in the program.
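For example, assuming an instance named DEV owned by the Unix user oracle and the script installed in the my_dba tools directory used throughout this chapter (both are placeholders), the program can be run interactively or scheduled through cron:

$su - oracle
$coldbackup_ux DEV oracle

# crontab entry to run the cold backup every Sunday at 2:00 a.m.
0 2 * * 0 /u01/oracomn/admin/my_dba/coldbackup_ux DEV oracle 1>/tmp/coldbackup.log 2>&1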

Figure 3.3 Functions in cold backup script for Unix.

Listing 3.1 coldbackup_ux

#####################################################################
# PROGRAM NAME:coldbackup_ux

# PURPOSE: Performs cold backup of the database. The database
# should be online when you start the script. It will shut down
# the database, take a cold backup, and bring the database up again.

# USAGE:$coldbackup_ux SID OWNER

# INPUT PARAMETERS: SID(Instance name), OWNER(Owner of instance)

#####################################################################
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_verify(): Verify that database is online
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_verify(){
STATUS=`ps -fu ${ORA_OWNER} | grep -v grep | grep ora_pmon_${ORA_SID}`
funct_chk_ux_cmd_stat "Database is down for given SID($ORA_SID), Owner($ORA_OWNER). Can't generate files to be backed up"
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_verify_shutdown(): Verify that database is down
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_verify_shutdown(){
STATUS=`ps -fu ${ORA_OWNER} | grep -v grep | grep ora_pmon_${ORA_SID}`
if [ $? = 0 ]; then
echo "`date`" >> $LOGFILE
echo "COLDBACKUP_FAIL: ${ORA_SID}, Database is up, can't make coldbackup if the database is online." | tee -a ${BACKUPLOGFILE} >> $LOGFILE
exit 1
fi
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_shutdown_i(): Shutdown database in Immediate mode
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_shutdown_i(){
${ORACLE_HOME}/bin/sqlplus -s << EOF
/ as sysdba
shutdown immediate;
exit
EOF
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_shutdown_n(): Shutdown database in Normal mode
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_shutdown_n(){
${ORACLE_HOME}/bin/sqlplus -s << EOF
/ as sysdba
shutdown normal;
exit
EOF
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_startup_r(): Startup database in restricted mode
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_startup_r(){
${ORACLE_HOME}/bin/sqlplus -s << EOF
/ as sysdba
startup restrict;
exit
EOF
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_startup_n(): Startup database in normal mode
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_startup_n(){
${ORACLE_HOME}/bin/sqlplus -s << EOF
/ as sysdba
startup;
exit
EOF
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_build_dynfiles(): Identify the files to backup
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_build_dynfiles(){
# Build datafile list
echo "Building datafile list ." >> ${BACKUPLOGFILE}
datafile_list=`${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
select file_name from dba_data_files order by tablespace_name;
exit
EOF`

echo "############### SQL for Temp Files " >> ${RESTOREFILE}
${ORACLE_HOME}/bin/sqlplus -s <<EOF >> ${RESTOREFILE}
/ as sysdba
set heading off feedback off
select 'alter tablespace '||tablespace_name||' add tempfile '||''''||
file_name||''''||' reuse'||';'
from dba_temp_files;
exit
EOF

echo "Backing up controlfile and trace to trace file" >> ${BACKUPLOGFILE}
${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
alter database backup controlfile to '${CONTROLFILE_DIR}/backup_control.ctl';
alter database backup controlfile to trace;
exit
EOF

# Backup trace of control file
CONTROL=`ls -t ${udump_dest}/*.trc | head -1`
if [ ! -z "$CONTROL" ]; then
grep 'CONTROL' ${CONTROL} 1> /dev/null
if test $? -eq 0; then
cp ${CONTROL} ${CONTROLFILE_DIR}/backup_control.sql
fi
fi

# Prepare restore file for control file
echo "###### Control File " >> ${RESTOREFILE}
echo "# Use your own discretion to copy control file, not advised unless required..." >> ${RESTOREFILE}
echo " End of backup of control file" >> ${BACKUPLOGFILE}
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_cold_backup(): Perform cold backup
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_cold_backup(){

#Copy datafiles to backup location
echo "############### Data Files " >> ${RESTOREFILE}
for datafile in `echo $datafile_list`
do
echo "Copying datafile ${datafile} ..." >> ${BACKUPLOGFILE}
#Prepare a restore file to restore coldbackup in case a restore is necessary
echo cp -p ${DATAFILE_DIR}/`echo $datafile | awk -F"/" '{print $NF}'` $datafile >> ${RESTOREFILE}
cp -p ${datafile} ${DATAFILE_DIR}
funct_chk_ux_cmd_stat "Failed to copy datafile to backup location"
done

#Copy current init<SID>.ora file to backup directory
echo " Copying current init.ora file" >> ${BACKUPLOGFILE}
cp -p ${init_file} ${INITFILE_DIR}/init${ORA_SID}.ora
funct_chk_ux_cmd_stat "Failed to copy init.ora file to backup location"

echo "################ Init.ora File " >> ${RESTOREFILE}
echo cp -p ${INITFILE_DIR}/init${ORA_SID}.ora ${init_file} >> ${RESTOREFILE}
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_parm(): Check for input parameters
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_chk_parm() {
if [ ${NARG} -ne 2 ]; then
echo "COLDBACKUP_FAIL: ${ORA_SID}, Not enough arguments passed"
exit 1
fi
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_bkup_dir(): Create backup directories if they do not already exist
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_chk_bkup_dir() {

RESTOREFILE_DIR="${BACKUPDIR}/restorefile_dir"
BACKUPLOG_DIR="${BACKUPDIR}/backuplog_dir"
DATAFILE_DIR="${BACKUPDIR}/datafile_dir"
CONTROLFILE_DIR="${BACKUPDIR}/controlfile_dir"
REDOLOG_DIR="${BACKUPDIR}/redolog_dir"
ARCLOG_DIR="${BACKUPDIR}/arclog_dir"
INITFILE_DIR="${BACKUPDIR}/initfile_dir"

BACKUPLOGFILE="${BACKUPLOG_DIR}/backup_log_${ORA_SID}"
RESTOREFILE="${RESTOREFILE_DIR}/restorefile_${ORA_SID}"
LOGFILE="${LOGDIR}/${ORA_SID}.log"

if [ ! -d ${RESTOREFILE_DIR} ]; then mkdir -p ${RESTOREFILE_DIR}; fi


if [ ! -d ${BACKUPLOG_DIR} ]; then mkdir -p ${BACKUPLOG_DIR}; fi
if [ ! -d ${DATAFILE_DIR} ]; then mkdir -p ${DATAFILE_DIR}; fi
if [ ! -d ${CONTROLFILE_DIR} ]; then mkdir -p ${CONTROLFILE_DIR}; fi
if [ ! -d ${REDOLOG_DIR} ]; then mkdir -p ${REDOLOG_DIR}; fi
if [ ! -d ${ARCLOG_DIR} ]; then mkdir -p ${ARCLOG_DIR}; fi
if [ ! -d ${INITFILE_DIR} ]; then mkdir -p ${INITFILE_DIR}; fi

if [ ! -d ${DYN_DIR} ]; then mkdir -p ${DYN_DIR}; fi


if [ ! -d ${LOGDIR} ]; then mkdir -p ${LOGDIR}; fi

# Remove old backup


rm -f ${RESTOREFILE_DIR}/*
rm -f ${BACKUPLOG_DIR}/*
rm -f ${DATAFILE_DIR}/*
rm -f ${CONTROLFILE_DIR}/*
rm -f ${REDOLOG_DIR}/*
rm -f ${ARCLOG_DIR}/*
rm -f ${INITFILE_DIR}/*

echo "${JOBNAME}: coldbackup of ${ORA_SID} begun on ´date +\"%c\"´" >


${BACKUPLOGFILE}
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_get_vars(): Get environment variables
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_get_vars(){

ORA_HOME=`sed /#/d ${ORATABDIR} | grep -i ${ORA_SID} | nawk -F ":" '{print $2}'`
ORA_BASE=`echo ${ORA_HOME} | nawk -F "/" '{for (i=2; i<=NF-2; i++) print "/"$i}'`
ORACLE_BASE=`echo $ORA_BASE | tr -d " "`
init_file=$ORA_HOME/dbs/init$ORA_SID.ora
#log_arch_dest1=`sed /#/d $init_file | grep -i log_archive_dest | nawk -F "=" '{print $2}'`
#log_arch_dest=`echo $log_arch_dest1 | tr -d "'" | tr -d '"'`

udump_dest=`${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
select value from v\\$parameter
where name='user_dump_dest';
exit
EOF`

if [ x$ORA_HOME = 'x' ]; then
echo "COLDBACKUP_FAIL: Can't get ORACLE_HOME from oratab file for $ORA_SID" | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi

if [ ! -f $init_file ]; then
echo "COLDBACKUP_FAIL: init$ORA_SID.ora does not exist in ORACLE_HOME/dbs" | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi

if [ x$udump_dest = 'x' ]; then
echo "COLDBACKUP_FAIL: user_dump_dest not defined in init$ORA_SID.ora" | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi

ORACLE_HOME=${ORA_HOME}; export ORACLE_HOME
ORACLE_SID=${ORA_SID}; export ORACLE_SID
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_ux_cmd_stat(): Check the exit status of Unix command
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_chk_ux_cmd_stat() {
if [ $? != 0 ]; then
echo "`date`" | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
echo "COLDBACKUP_FAIL: ${1} " | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi
}

############################################################
# MAIN
############################################################

NARG=$#
ORA_SID=$1
ORA_OWNER=$2

# Set environment variables


BACKUPDIR="/u02/${ORA_SID}/cold"
ORATABDIR=/etc/oratab
TOOLS="/u01/oracomn/admin/my_dba"

DYN_DIR="${TOOLS}/DYN_FILES"
LOGDIR="${TOOLS}/localog"
JOBNAME="dbcoldbackup"

echo " Starting coldbackup of ${ORA_SID} "

funct_chk_parm
funct_chk_bkup_dir
funct_get_vars
funct_verify
funct_build_dynfiles
funct_shutdown_i
funct_startup_r
funct_shutdown_n
funct_verify_shutdown
funct_cold_backup
funct_startup_n

echo "${ORA_SID}, Coldbackup Completed successfully on ´date +\"%c\"´"


|tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
######## END MAIN ##########################

Cold Backup Script under Unix Checklist

• In the main function, set correct values for the BACKUPDIR, ORATABDIR, and TOOLS variables
highlighted in the cold backup script. The default location of ORATABDIR is different for each
flavor of Unix. For information about the default location of the ORATAB file for different flavors of
Unix, refer to Chapter 13, "Unix, Windows NT, and Oracle."
• Check for the existence of SID in oratab file. If not already there, you must add the instance.
• Check for existence of initSID.ora file in the ORACLE_HOME/dbs directory. If it is in a
different location, you can create a soft link to the ORACLE_HOME/dbs directory.
• Pass SID and OWNER as parameters to the program.
• The database must be running when you start the program. It gets required information by
querying the database and then shuts down the database and performs cold backup.
• main() The main function defines the required variables and calls the functions to be executed. The variable BACKUPDIR defines the backup location, and ORATABDIR defines the oratab file location. The oratab file maintains the list of instances and their home directories on the machine; it is created by default when Oracle is installed, and if it is not there, you must create one (a sample entry is shown after this checklist, and a full sample oratab file can be found at the end of the chapter). OWNER is the owner of the Oracle software directories.
• funct_get_vars() This function gets ORACLE_HOME from the oratab file and
USER_DUMP_DEST from the initSID.ora file. The value of USER_DUMP_DEST is used to
back up the trace of the control file.
• funct_build_dynfiles() This function generates a list of files from the database for
backup. It also creates SQL statements for temporary files. These temporary files do not need to
be backed up, but can be recreated when a restore is performed. These temporary files are
session-specific and do not have any content when the database is closed.
• funct_shutdown_i() This function shuts down the database in Immediate mode, so that
any user connected to the database will be disconnected immediately.
• funct_startup_r() This function starts up the database in Restricted mode, so that no one can connect to the database except users with the RESTRICTED SESSION privilege.
• funct_shutdown_n() This function performs a clean shutdown of the database.
• funct_chk_ux_cmd_stat() This function is used to check the status of Unix commands,
especially after copying files to a backup location.
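For reference, each line in the oratab file has the form SID:ORACLE_HOME:Y|N, where the last field tells the dbstart and dbshut scripts whether the instance should be started at boot time. A hypothetical entry for the DEV instance used in the examples would look like this:

DEV:/u02/apps/DEV/oracle/8.1.7:Y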

Restore File
The cold backup program creates a restore file that contains the commands to restore the database. This functionality is included because many DBAs perform backups but, when it comes to recovery, have no procedures in place to make the recovery faster. With the restore file, it is easier to restore files to their original locations because it has all the commands ready to restore the backup. Otherwise, you would need to know the structure of the database and which files are located where. A sample restore file is shown in Listing 3.2.
Listing 3.2 Sample Restore File

######### SQL for Temp Files
alter tablespace TEMP add tempfile '/u03/oracle/DEV/data/temp03.dbf' reuse;
alter tablespace TEMP add tempfile '/u03/oracle/DEV/data/temp04.dbf' reuse;
######### Data Files
cp -p /bkp/DEV/cold/datafile_dir/INDX01.dbf /u02/oracle/DEV/data/INDX01.dbf
cp -p /bkp/DEV/cold/datafile_dir/RBS01.dbf /u02/oracle/DEV/data/RBS01.dbf
cp -p /bkp/DEV/cold/datafile_dir/SYSTEM01.dbf /u02/oracle/DEV/data/SYSTEM01.dbf
cp -p /bkp/DEV/cold/datafile_dir/TEMP01.dbf /u02/oracle/DEV/data/TEMP01.dbf
cp -p /bkp/DEV/cold/datafile_dir/USERS01.dbf /u02/oracle/DEV/data/USERS01.dbf
######### Control Files
cp -p /bkp/DEV/cold/controlfile_dir/cntrl01.dbf /u02/oracle/DEV/data/cntrl01.dbf
######### Init.ora File
cp -p /bkp/DEV/cold/initfile_dir/initDEV.ora /u02/apps/DEV/oracle/8.1.7/dbs/initDEV.ora

Cold Backup Troubleshooting and Status Check

The important thing here is that the backup log file, defined by BACKUPLOGFILE, contains detailed information about each step of the backup process. This is a very good place to start investigating why a backup failed or to look for related errors. This file also has the start and end times of the backup.

A single line about the success or failure of a backup is appended to SID.log file every time a backup is
performed. This file is located under the directory defined by the LOGDIR variable. This file also has the
backup completion time. A separate file is created for each instance. This single file maintains the history
of performed backups and their status and timing information. The messages for a cold backup are
'COLDBACKUP_FAIL' if a cold backup failed and 'Coldbackup Completed successfully' if
a backup completes successfully.

Apart from the BACKUPLOGFILE and SID.log files, it is always good to capture the out-of-the-ordinary
errors displayed onscreen if you are running the backup unattended. You can capture these errors by
running the command shown next. The same thing can be done for hot backups. This command captures
onscreen errors to the coldbackup.log file.

coldbackup_ux SID OWNER 1> coldbackup.log 2>&1

The following is an excerpt from the SID.log file:

Tue Jul 18 16:48:46 EDT 2000


COLDBACKUP_FAIL: DEV, Failed to copy control file to backup location

BACKUPLOGFILE
Listing 3.3 Sample BACKUPLOGFILE

dbcoldbackup: coldbackup of DEV begun on Sun May 20 21:15:27 2001


dbcoldbackup: building datafile list .
dbcoldbackup: Building controlfile list

Copying datafile /u02/oracle/DEV/data/INDX01.dbf ...


Copying datafile /u02/oracle/DEV/data/RBS01.dbf ...
Copying datafile /u02/oracle/DEV/data/SYSTEM01.dbf ...
Copying datafile /u02/oracle/DEV/data/TEMP01.dbf ...
Copying datafile /u02/oracle/DEV/data/USERS01.dbf ...
Copying control file /u02/oracle/DEV/data/cntrl01.dbf ...
Copying redolog file /u03/oracle/DEV/data/log01a.dbf ...
Copying redolog file /u03/oracle/DEV/data/log01b.dbf ...
Copying current init.ora file
DEV, Coldbackup Completed successfully on Sun May 20 21:19:38 2001

Hot Backup

Listing 3.4 provides the script to perform the hot backup of a database under the Unix environment. The
hot backup script takes two input parameters—SID and OWNER. SID is the instance to be backed up,
and OWNER is the Unix account under which Oracle is running.

Figure 3.4 shows the functionality of the hot backup program. Each box represents a corresponding
function in the program.

Figure 3.4 Functions in hot backup script for Unix.

Listing 3.4 hotbackup_ux

#####################################################################
# PROGRAM NAME: hotbackup_ux

# PURPOSE: This utility will perform a warm backup of


# the database
# USAGE: $hotbackup_ux SID OWNER

# INPUT PARAMETERS: SID(Instance name), OWNER(Owner of instance)


#####################################################################

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_verify(): Verify that database is online
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_verify(){
STATUS=`ps -fu ${ORA_OWNER} | grep -v grep | grep ora_pmon_${ORA_SID}`
funct_chk_ux_cmd_stat "Database is down for given SID($ORA_SID), Owner($ORA_OWNER). Can't perform hotbackup "
}

#:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_dblogmode(): Check DB log mode
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_dblogmode(){
STATUS=`${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
select log_mode from v\\$database;
exit
EOF`

if [ $STATUS = "NOARCHIVELOG" ]; then
echo "`date`" | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
echo "HOTBACKUP_FAIL: $ORA_SID is in NOARCHIVELOG mode. Can't perform hotbackup " | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_control_backup(): Backup control file
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_control_backup(){
echo "Begin backup of controlfile and trace to trace file" >> ${BACKUPLOGFILE}
${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
alter database backup controlfile to '${CONTROLFILE_DIR}/backup_control.ctl';
alter database backup controlfile to trace;
exit
EOF

# Backup trace of control file
CONTROL=`ls -t ${udump_dest}/*.trc | head -1`
if [ ! -z "$CONTROL" ]; then
grep 'CONTROL' ${CONTROL} 1> /dev/null
if test $? -eq 0; then
cp ${CONTROL} ${CONTROLFILE_DIR}/backup_control.sql
fi
fi

# Prepare restore file for control file
echo "###### Control File " >> ${RESTOREFILE}
echo "# Use your own discretion to copy control file, not advised unless required..." >> ${RESTOREFILE}
echo " End of backup of control file" >> ${BACKUPLOGFILE}
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_archivelog_backup(): Backup archivelog files
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_archivelog_backup(){
echo "Begin backup of archived redo logs" >> ${BACKUPLOGFILE}
#Switch logs to flush current redo log to archived redo before back up
${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
alter system switch logfile;
alter system archive log stop;
exit
EOF

# This gets the redo sequence number that is being archived
# and remove this from the list of files to be backed up
ARCSEQ=`${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
select min(sequence#) from v\\$log
where archived='NO';
exit
EOF`
#Get current list of archived redo log files
ARCLOG_FILES=`ls ${log_arch_dest}/* | grep -v $ARCSEQ`

${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
alter system archive log start;
exit
EOF

#Prepare restore file for arc log files
echo "##### Archive Log Files" >> ${RESTOREFILE}
for arc_file in `echo $ARCLOG_FILES`
do
echo cp -p ${ARCLOG_DIR}/`echo $arc_file | awk -F"/" '{print $NF}'` $arc_file >> ${RESTOREFILE}
done

#Copy arc log files to backup location
#Remove the archived redo logs from the log_archive_dest if the copy is successful
cp -p ${ARCLOG_FILES} ${ARCLOG_DIR}
if [ $? = 0 ]; then
rm ${ARCLOG_FILES}
else
echo "`date`" | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
echo "HOTBACKUP_FAIL: Failed to copy Archive log files" | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi
echo "End backup of archived redo logs" >> ${BACKUPLOGFILE}
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_init_backup(): Backup init.ora file
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_init_backup(){

#Copy current init<SID>.ora file to backup directory
echo " Copying current init${ORA_SID}.ora file" >> ${BACKUPLOGFILE}
cp -p ${init_file} ${INITFILE_DIR}/init${ORA_SID}.ora
funct_chk_ux_cmd_stat "Failed to copy init.ora file to backup location"

# Prepare restore file for init.ora
echo "############# Parameter Files" >> ${RESTOREFILE}
echo cp -p ${INITFILE_DIR}/init${ORA_SID}.ora ${init_file} >> ${RESTOREFILE}
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_temp_backup(): Prepare SQL for temp files
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_temp_backup(){
echo "############# Recreate the following Temporary Files" >> ${RESTOREFILE}
${ORACLE_HOME}/bin/sqlplus -s <<EOF >> ${RESTOREFILE}
/ as sysdba
set heading off feedback off
select 'alter tablespace '||tablespace_name||' add tempfile '||''''||
file_name||''''||' reuse'||';'
from dba_temp_files;
exit
EOF
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
#funct_hot_backup(): Backup datafiles
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_hot_backup(){

# Get the list of tablespaces
echo "Building tablespace list " >> ${BACKUPLOGFILE}
tablespace_list=`${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
select distinct tablespace_name from dba_data_files
order by tablespace_name;
exit
EOF`

echo "##### DATE:" `date` > ${RESTOREFILE}
echo "####Data Files(Please restore only corrupted files)" >> ${RESTOREFILE}
for tblspace in `echo $tablespace_list`
do
# Get the datafiles for the current tablespace
datafile_list=`${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
select file_name from dba_data_files
where tablespace_name = '${tblspace}';
exit
EOF`

echo " Beginning back up of tablespace ${tblspace}..." >> ${BACKUPLOGFILE}
${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
alter tablespace ${tblspace} begin backup;
exit
EOF

# Copy datafiles of current tablespace
for datafile in `echo $datafile_list`
do
echo "Copying datafile ${datafile}..." >> ${BACKUPLOGFILE}
# The next command prepares restore file
echo cp -p ${DATAFILE_DIR}/`echo $datafile | awk -F"/" '{print $NF}'` $datafile >> ${RESTOREFILE}
cp -p ${datafile} ${DATAFILE_DIR}
if [ $? != 0 ]; then
echo "`date`" | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
echo "HOTBACKUP_FAIL: Failed to copy file to backup location " | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}

# Ending the tablespace backup before exiting
`${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
alter tablespace ${tblspace} end backup;
exit
EOF`

exit 1
fi
done

${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
alter tablespace ${tblspace} end backup;
exit
EOF
echo " Ending back up of tablespace ${tblspace}.." >> ${BACKUPLOGFILE}
done
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_parm(): Check for input parameters
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_chk_parm() {
if [ ${NARG} -ne 2 ]; then
echo "HOTBACKUP_FAIL: ${ORA_SID}, Not enough arguments passed"
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_bkup_dir(): Create backup directories if they do not already exist
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_chk_bkup_dir() {

RESTOREFILE_DIR="${BACKUPDIR}/restorefile_dir"
BACKUPLOG_DIR="${BACKUPDIR}/backuplog_dir"
DATAFILE_DIR="${BACKUPDIR}/datafile_dir"
CONTROLFILE_DIR="${BACKUPDIR}/controlfile_dir"
REDOLOG_DIR="${BACKUPDIR}/redolog_dir"
ARCLOG_DIR="${BACKUPDIR}/arclog_dir"
INITFILE_DIR="${BACKUPDIR}/initfile_dir"

BACKUPLOGFILE="${BACKUPLOG_DIR}/backup_log_${ORA_SID}"
RESTOREFILE="${RESTOREFILE_DIR}/restorefile_${ORA_SID}"
LOGFILE="${LOGDIR}/${ORA_SID}.log"

if [ ! -d ${RESTOREFILE_DIR} ]; then mkdir -p ${RESTOREFILE_DIR}; fi


if [ ! -d ${BACKUPLOG_DIR} ]; then mkdir -p ${BACKUPLOG_DIR}; fi
if [ ! -d ${DATAFILE_DIR} ]; then mkdir -p ${DATAFILE_DIR}; fi
if [ ! -d ${CONTROLFILE_DIR} ]; then mkdir -p ${CONTROLFILE_DIR}; fi
if [ ! -d ${REDOLOG_DIR} ]; then mkdir -p ${REDOLOG_DIR}; fi
if [ ! -d ${ARCLOG_DIR} ]; then mkdir -p ${ARCLOG_DIR}; fi
if [ ! -d ${INITFILE_DIR} ]; then mkdir -p ${INITFILE_DIR}; fi

if [ ! -d ${DYN_DIR} ]; then mkdir -p ${DYN_DIR}; fi


if [ ! -d ${LOGDIR} ]; then mkdir -p ${LOGDIR}; fi

# Remove old backup


rm -f ${RESTOREFILE_DIR}/*
rm -f ${BACKUPLOG_DIR}/*
rm -f ${DATAFILE_DIR}/*
rm -f ${CONTROLFILE_DIR}/*
rm -f ${REDOLOG_DIR}/*
rm -f ${ARCLOG_DIR}/*
rm -f ${INITFILE_DIR}/*

echo "${JOBNAME}: hotbackup of ${ORA_SID} begun on ´date +\"%c\"´" >


${BACKUPLOGFILE}
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_get_vars(): Get environment variables
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_get_vars(){

ORA_HOME=`sed /#/d ${ORATABDIR} | grep -i ${ORA_SID} | nawk -F ":" '{print $2}'`
ORA_BASE=`echo ${ORA_HOME} | nawk -F "/" '{for (i=2; i<=NF-2; i++) print "/"$i}'`
ORACLE_BASE=`echo $ORA_BASE | tr -d " "`
init_file=$ORA_HOME/dbs/init$ORA_SID.ora
#log_arch_dest1=`sed /#/d $init_file | grep -i log_archive_dest | nawk -F "=" '{print $2}'`
#log_arch_dest=`echo $log_arch_dest1 | tr -d "'" | tr -d '"'`

udump_dest=`${ORACLE_HOME}/bin/sqlplus -s <<EOF
/ as sysdba
set heading off feedback off
select value from v\\$parameter
where name='user_dump_dest';
exit
EOF`

if [ x$ORA_HOME = 'x' ]; then
echo "HOTBACKUP_FAIL: can't get ORACLE_HOME from oratab file for $ORA_SID" | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi

if [ ! -f $init_file ]; then
echo "HOTBACKUP_FAIL: init$ORA_SID.ora does not exist in ORACLE_HOME/dbs" | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi

if [ x$log_arch_dest = 'x' -o x$udump_dest = 'x' ]; then
echo "HOTBACKUP_FAIL: user_dump_dest or log_archive_dest not defined " | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi

ORACLE_HOME=${ORA_HOME}; export ORACLE_HOME
ORACLE_SID=${ORA_SID}; export ORACLE_SID
}

#:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_ux_cmd_stat(): Check the exit status of Unix command
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_ux_cmd_stat() {
if [ $? != 0 ]; then
echo "`date`" | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
echo "HOTBACKUP_FAIL: ${1} " | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi
}

############################################################
# MAIN
############################################################

NARG=$#
ORA_SID=$1
ORA_OWNER=$2

# Set environment variables


BACKUPDIR="/u02/${ORA_SID}/hot"
ORATABDIR=/etc/oratab
TOOLS="/u01/oracomn/admin/my_dba"
log_arch_dest="/export/home/orcl/arch"

DYN_DIR="${TOOLS}/DYN_FILES"
LOGDIR="${TOOLS}/localog"
JOBNAME="dbhotbackup"

echo " Starting hotbackup of .... ${ORA_SID}"

funct_chk_parm
funct_chk_bkup_dir
funct_get_vars
funct_verify
funct_chk_dblogmode
funct_hot_backup
funct_temp_backup
funct_control_backup
funct_archivelog_backup
funct_init_backup

echo "${ORA_SID}, hotbackup Completed successfully on ´date +\"%c\"´"


|
tee -a ${BACKUPLOGFILE} >> ${LOGFILE}

######## END MAIN #########################

Hot Backup Script under Unix Checklist

• In the main function, set the correct values for BACKUPDIR, ORATABDIR, TOOLS, and
log_arch_dest variables highlighted in the script. The default location of ORATABDIR is
different for each flavor of Unix.
• Check for existence of the SID instance in the oratab file. If not already there, you must add
the instance.
• Check for the existence of the initSID.ora file in the ORACLE_HOME/dbs directory. If it is
in a different location, you must create a soft link to the ORACLE_HOME/dbs directory.

Pass SID and OWNER as parameters to the program:

• main() BACKUPDIR defines the backup location. ORATABDIR defines the oratab file
location. oratab files maintain the list of instances and their home directories on the machine.
This file is created by default when Oracle is installed. If it is not there, you must create one.
OWNER is the owner of the Oracle software directories.
• funct_get_vars() Make sure that the USER_DUMP_DEST parameter is set correctly in the Init.ora file. I was reluctant to get LOG_ARCHIVE_DEST from the Init.ora file because there are some changes between Oracle 7 and 8 in the way the archive destination is defined, and there are a variety of ways to define log_archive_dest depending on how many destinations you are using. Consequently, I have given the option to define log_archive_dest in the main function.
• funct_temp_backup() Oracle 7 and Oracle 8 support permanent temporary tablespaces (created with create tablespace tablespace_name ... temporary). In addition, Oracle 8i can create true temporary tablespaces that do not need to be backed up (created with create temporary tablespace ... tempfile ...). Data in these temporary tablespaces is session-specific and is deleted as soon as the session disconnects. Because of the nature of these temporary tablespaces, you do not need to back them up; in the case of a restore, you can simply add the tempfiles back. The files for these temporary tablespaces are listed in the dba_temp_files data dictionary view.
• funct_control_backup() In addition to backing up the control file, this function also backs up a trace of the control file. The trace of the control file is useful for examining the structure of the database. This is the single most important piece of information you need to perform a good recovery, especially if the database has hundreds of files.
• funct_chk_bkup_dir() This function creates backup directories for data, control, redo log,
archivelog, init files, restore files, and backup log files.
Restore file

The restore file for a hot backup looks similar to the one for a cold backup. Refer to the explanation under the "Restore File" heading in the cold backup section.

Hot Backup Troubleshooting and Status Check


The important thing here is that the backup log file, defined by BACKUPLOGFILE, contains detailed information about each step of the backup process. This is a very good place to start investigating why a backup has failed or to look for related errors. This file also has the start and end times of the backup.

A single line about the success or failure of a backup is appended to the SID.log file every time a
backup is performed. This file is located under the directory defined by the LOGDIR variable. This file also
has the backup completion time. A separate file is created for each instance. This single file maintains the
history of the performed backups, their status, and timing information. The messages for a hot backup are
'HOTBACKUP_FAIL', if the hot backup failed, and 'Hotbackup Completed successfully', if
the backup completes successfully.

The following is an excerpt from the log file:

Tue Jul 18 16:48:46 EDT 2000


HOTBACKUP_FAIL: DEV, Not enough arguments passed

Export
The export program (see Listing 3.5) performs a full export of the database under Unix environment. The
export script takes two input parameters—SID and OWNER. SID is the instance to be backed up, and
OWNER is the Unix account under which Oracle is running. Figure 3.5 shows the functionality of the export
and split export programs. Each box represents a corresponding function in the program.

Figure 3.5 Functions in export and split export scripts for Unix.

Listing 3.5 xport_ux

######################################################################
# PROGRAM NAME: xport_ux
# PURPOSE: Performs export of the database
# USAGE: $xport_ux SID OWNER
# INPUT PARAMETERS: SID(Instance name), OWNER(Owner of instance)

######################################################################

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_verify(): Verify that database is online
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_verify(){
STATUS=`ps -fu ${ORA_OWNER} | grep -v grep | grep ora_pmon_${ORA_SID}`
funct_chk_unix_command_status "Database is down for given SID($ORA_SID), Owner($ORA_OWNER). Can't perform export "
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_cleanup(): Cleanup interim files
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_cleanup() {
echo "Left for user convenience" > /dev/null
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_build_parfile(): This will create parameter file
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_build_parfile() {
# This line makes sure that it always creates a new parameter file
echo " " >${PARFILE}
echo "userid=system/manager">>${PARFILE}
# if you use connect string. see next line.
#userid=system/manager@${CON_STRING}
#echo "Owner=scott">>${PARFILE}
#echo "Tables=scott.T1">>${PARFILE}
echo "Full=Y">>${PARFILE}
#echo "Direct=Y">>${PARFILE}
echo "Grants=Y">>${PARFILE}
echo "Indexes=Y">>${PARFILE}
echo "Rows=Y">>${PARFILE}
echo "Constraints=Y">>${PARFILE}
echo "Compress=N">>${PARFILE}
echo "Consistent=Y">>${PARFILE}
echo "File=${FILE}">>${PARFILE}
echo "Log=${EXPORT_DIR}/${ORA_SID}.exp.log">>${PARFILE}
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_export(): Export the database
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_export() {
# Remove old export file
rm -f ${FILE}

${ORACLE_HOME}/bin/exp parfile=${PARFILE}
if [ $? != 0 ]; then
echo `date` >> $LOGDIR/${ORA_SID}.log
echo "EXPORT_FAIL: ${ORA_SID}, Export Failed" >> $LOGDIR/${ORA_SID}.log
funct_cleanup
exit 1
fi
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_parm(): Check for input parameters
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_chk_parm() {
if [ ${NARG} -ne 2 ]; then
echo "EXPORT_FAIL: ${ORA_SID}, Not enough arguments passed"
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_bkup_dir(): Create backup directories if not already exist
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_chk_bkup_dir() {
EXPORT_DIR=${BACKUPDIR}
if [ ! -d ${EXPORT_DIR} ]; then mkdir -p ${EXPORT_DIR}; fi
if [ ! -d ${DYN_DIR} ]; then mkdir -p ${DYN_DIR}; fi
if [ ! -d ${LOGDIR} ]; then mkdir -p ${LOGDIR}; fi

FILE="${EXPORT_DIR}/${ORA_SID}.dmp"
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_get_vars(): Get environment variables
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_get_vars(){
ORA_HOME=`sed /#/d ${ORATABDIR} | grep -i ${ORA_SID} | nawk -F ":" '{print $2}'`
ORA_BASE=`echo ${ORA_HOME} | nawk -F "/" '{for (i=2; i<=NF-2; i++) print "/"$i}'`
ORACLE_BASE=`echo $ORA_BASE | tr -d " "`
ORACLE_HOME=${ORA_HOME}; export ORACLE_HOME
ORACLE_SID=${ORA_SID}; export ORACLE_SID
#CON_STRING=${ORA_SID}.company.com
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_unix_command_status(): Check exit status of Unix command
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_chk_unix_command_status() {
if [ $? != 0 ]; then
echo "´date´" >> ${LOGDIR}/${ORA_SID}.log
echo "EXPORT_FAIL: ${1} " >> ${LOGDIR}/${ORA_SID}.log
exit 1
fi
}

######################################
# MAIN
######################################

NARG=$#
ORA_SID=$1
ORA_OWNER=$2

# Set up the environment


BACKUPDIR="/u02/${ORA_SID}/export"
ORATABDIR=/etc/oratab
TOOLS="/u01/oracomn/admin/my_dba"

DYN_DIR="${TOOLS}/DYN_FILES"
PARFILE="${DYN_DIR}/export.par"
LOGDIR="${TOOLS}/localog"

echo "... Now exporting .... ${ORA_SID}"

funct_chk_parm
funct_get_vars
funct_verify
funct_chk_bkup_dir
funct_build_parfile
funct_export
funct_cleanup

echo `date` >> $LOGDIR/${ORA_SID}.log
echo "${ORA_SID}, export completed successfully" >> $LOGDIR/${ORA_SID}.log

####################### END MAIN ###############################

Export Script under Unix Checklist

• In the main function, set the correct values for BACKUPDIR, ORATABDIR, and TOOLS
variables highlighted in the export script. The default location of ORATABDIR is different for each
flavor of Unix.
• Check for existence of SID in the oratab file. If not already there, you must add the instance.
• The funct_build_parfile() function builds the parameter file. By default, it performs a
full export. You can modify the parameters to perform a user- or table-level export.

Pass SID and OWNER as parameters to the program:

• funct_build_parfile() Builds the export.par parameter file dynamically, based on the information provided in this function. This function is configured for a full export of the database. To perform a different type of export (user- or table-level), set the correct parameters.
• funct_cleanup() Removes the interim files.

Export Troubleshooting and Status Check

The 'Log' parameter value set in the parameter file will have detailed information about the status of
export. This is a very good place to start investigating why an export has failed or for related errors.

A single line about the success or failure of export is appended to SID.log file every time an export is
performed. This file is located under the directory defined by the LOGDIR variable. This file also has the
backup completion time. A separate file is created for each instance. This single file maintains the history
of performed backups, their status, and timing information. The messages for an export are
'EXPORT_FAIL', if the export failed, and 'Export Completed successfully', if the export
completes successfully.

The following is an excerpt from a log file:

Tue Apr 8 16:07:12 EST 2000


DEV , export completed successfully

Split Export
The split export program (see Listing 3.6) performs an export of the database. Additionally, if the export file is larger than 2GB, the script compresses the export file and splits it into multiple files to overcome the 2GB file size limitation of some file systems. This is the only way to split the export file prior to Oracle 8i. New features in 8i allow you to split the export file into multiple files, but they do not compress the files on-the-fly to save space. The script uses the Unix commands split and compress to split and compress the files. The functions of the script are explained in Figure 3.5.

The split export script takes two input parameters—SID and OWNER. SID is the instance to be backed up,
and OWNER is the Unix account under which Oracle is running.

Export New Features in 8i

In 8i, Oracle introduced two new export parameters called FILESIZE and QUERY. FILESIZE specifies the maximum size of each dump file. This overcomes the 2GB file size limitation that the export command faces on some operating systems. By using the QUERY parameter, you can export a subset of a table's data. During an import of split export files, you have to specify the same FILESIZE limit.
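As an illustration of these parameters (the file names, sizes, and WHERE clause are examples only), a full export split into 2GB pieces and the matching import could be written as follows:

$exp system/manager full=y filesize=2GB file=full1.dmp,full2.dmp,full3.dmp log=full.log
$imp system/manager full=y filesize=2GB file=full1.dmp,full2.dmp,full3.dmp log=imp_full.log

The QUERY parameter is easiest to use from a parameter file, because the quotes otherwise have to be escaped on the command line. A sketch of such a parameter file (emp10.par) and its invocation:

tables=scott.emp
query="where deptno=10"
file=emp10.dmp
log=emp10.log

$exp system/manager parfile=emp10.par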

Listing 3.6 splitZxport_ux

######################################################################
# PROGRAM NAME: splitZxport_ux
#
# PURPOSE: Performs export of the database
# Compresses the export file on the fly while splitting.
# Useful if the size of export file goes beyond 2GB
# USAGE: $splitZxport_ux SID OWNER
# INPUT PARAMETERS: SID(Instance name), OWNER(Owner of instance)

######################################################################

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_verify(): Verify that database is online
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_verify(){
STATUS=`ps -fu ${ORA_OWNER} | grep -v grep | grep ora_pmon_${ORA_SID}`
funct_chk_unix_command_status "Database is down for given SID($ORA_SID), Owner($ORA_OWNER). Can't perform export "
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_cleanup(): Cleanup interim files
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_cleanup() {
rm -f ${PIPE_DEVICE}
rm -f ${SPLIT_PIPE_DEVICE}
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_splitcompress_pipe(): Creates pipes for compressing and splitting the file
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_splitcompress_pipe() {
# Creates pipe for compressing
if [ ! -r ${PIPE_DEVICE} ]; then
/etc/mknod ${PIPE_DEVICE} p
fi

#Creates pipe for splitting


if [ ! -r ${SPLIT_PIPE_DEVICE} ]; then
/etc/mknod ${SPLIT_PIPE_DEVICE} p
fi

# Splits the file into 1000MB pieces
# As it splits, it appends aa, ab, ac, and so on to the name
nohup split -b1000m - ${ZFILE} < ${SPLIT_PIPE_DEVICE} &
nohup compress < ${PIPE_DEVICE} > ${SPLIT_PIPE_DEVICE} &
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_build_parfile(): Creates parameter file
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_build_parfile() {
# This line makes sure that it always creates a new parameter file
echo " " >${PARFILE}
echo "userid=system/manager">>${PARFILE}
echo "Full=Y">>${PARFILE}
#echo "tables=scott.t1">>${PARFILE}
echo "Grants=Y">>${PARFILE}
echo "Indexes=Y">>${PARFILE}
echo "Rows=Y">>${PARFILE}
echo "Constraints=Y">>${PARFILE}
echo "Compress=N">>${PARFILE}
echo "Consistent=Y">>${PARFILE}
echo "File=${PIPE_DEVICE}">>${PARFILE}
echo "Log=${EXPORT_DIR}/${ORA_SID}.exp.log">>${PARFILE}
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_export(): Export the database
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_export() {
# Remove old export file
rm -f ${ZFILE}

${ORACLE_HOME}/bin/exp parfile=${PARFILE}
if [ $? != 0 ]; then
echo `date` >> $LOGDIR/${ORA_SID}.log
echo "EXPORT_FAIL: ${ORA_SID}, Export Failed" >> $LOGDIR/${ORA_SID}.log
funct_cleanup
exit 1
fi
}
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_parm(): Check for input parameters
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_chk_parm() {
if [ ${NARG} -ne 2 ]; then
echo "EXPORT_FAIL: ${ORA_SID}, Not enough arguments passed"
exit 1
fi
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_bkup_dir(): Create backup directories if they do not already exist
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_chk_bkup_dir() {
EXPORT_DIR=${BACKUPDIR}
if [ ! -d ${EXPORT_DIR} ]; then mkdir -p ${EXPORT_DIR}; fi
if [ ! -d ${DYN_DIR} ]; then mkdir -p ${DYN_DIR}; fi
if [ ! -d ${LOGDIR} ]; then mkdir -p ${LOGDIR}; fi
ZFILE="${EXPORT_DIR}/${ORA_SID}.dmp.Z"
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_get_vars(): Get environment variables
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_get_vars(){
ORA_HOME=`sed /#/d ${ORATABDIR} | grep -i ${ORA_SID} | nawk -F ":" '{print $2}'`
ORA_BASE=`echo ${ORA_HOME} | nawk -F "/" '{for (i=2; i<=NF-2; i++) print "/"$i}'`
ORACLE_BASE=`echo $ORA_BASE | tr -d " "`
ORACLE_HOME=${ORA_HOME}; export ORACLE_HOME
ORACLE_SID=${ORA_SID}; export ORACLE_SID
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_unix_command_status(): Check exit status of Unix command
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_chk_unix_command_status() {
if [ $? != 0 ]; then
echo "`date`" >> ${LOGDIR}/${ORA_SID}.log
echo "EXPORT_FAIL: ${1} " >> ${LOGDIR}/${ORA_SID}.log
exit 1
fi
}

#######################################
## MAIN
#######################################

NARG=$#
ORA_SID=$1
ORA_OWNER=$2
# Set up environment
BACKUPDIR="/u02/${ORA_SID}/export"
ORATABDIR=/etc/oratab
TOOLS="/u01/oracomn/admin/my_dba"

DYN_DIR="${TOOLS}/DYN_FILES"
PARFILE="${DYN_DIR}/export.par"
LOGDIR="${TOOLS}/localog"

PIPE_DEVICE="/tmp/export_${ORA_SID}_pipe"
SPLIT_PIPE_DEVICE="/tmp/split_${ORA_SID}_pipe"

echo "... Now exporting .... ${ORA_SID}"

funct_chk_parm
funct_get_vars
funct_verify
funct_chk_bkup_dir
funct_splitcompress_pipe
funct_build_parfile
funct_export
funct_cleanup

echo `date` >> $LOGDIR/${ORA_SID}.log
echo "${ORA_SID}, export completed successfully" >> $LOGDIR/${ORA_SID}.log

####################### END MAIN ###############################

Split Export Script under Unix Checklist

The checklist of items to verify before running splitZxport is the same as for the export program.

• funct_splitcompress_pipe() This function creates two pipes—one for compressing and another for splitting. The export dump file is written to the compress pipe, the output of the compress command is passed through the split pipe to the split command, and the output of the split command is written to a file. The split command splits the dump file into pieces of 1000MB. As it splits, it appends aa, ab, ac, and so on to the name of the original file so that each piece gets a distinct name. compress and split are Unix commands. (A minimal sketch of this pipe chain follows this list.)
• funct_build_parfile() In building the parameter file, we pass the pipe name as a
filename to the export command. The pipe acts as a medium to transfer output from one
command to another.
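The following is a minimal sketch of the same pipe chain outside the script, with hard-coded names for readability (the pipe names, dump file path, and system/manager login simply mirror the defaults used in the listing for an instance called ORCL):

# create the two named pipes
/etc/mknod /tmp/export_ORCL_pipe p
/etc/mknod /tmp/split_ORCL_pipe p

# background readers: split into 1000MB pieces, and compress into the split pipe
nohup split -b1000m - /u02/ORCL/export/ORCL.dmp.Z < /tmp/split_ORCL_pipe &
nohup compress < /tmp/export_ORCL_pipe > /tmp/split_ORCL_pipe &

# the export writes its dump straight into the compress pipe
exp system/manager full=y consistent=y file=/tmp/export_ORCL_pipe log=ORCL.exp.log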

Split Import
The split import program (see Listing 3.7) performs an import using the compressed split export dump files created by the splitZxport program. The script takes two input parameters—SID and OWNER. SID is the instance into which the import is performed, and OWNER is the Unix account under which Oracle is running.

Listing 3.7 splitZmport_ux

######################################################################
# PROGRAM NAME: splitZmport_ux

# PURPOSE: Performs import of the database using export files created
#          by the splitZxport program. Uncompresses the dump file on
#          the fly while desplitting.

# USAGE: $splitZmport_ux SID OWNER


# INPUT PARAMETERS: SID(Instance name), OWNER(Owner of instance)

######################################################################

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_verify(): Verify that database is online
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_verify(){
STATUS=`ps -fu ${ORA_OWNER} | grep -v grep | grep ora_pmon_${ORA_SID}`
funct_chk_unix_command_status "Database is down for given SID($ORA_SID), Owner($ORA_OWNER). Can't perform import"
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_cleanup(): Cleanup interim files
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_cleanup() {
rm -f ${PIPE_DEVICE}
rm -f ${SPLIT_PIPE_DEVICE}
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_desplitcompress_pipe(): Creates pipes for uncompressing and desplitting the file
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_desplitcompress_pipe() {
# Creates pipe for uncompressing
if [ ! -r ${PIPE_DEVICE} ]; then
/etc/mknod ${PIPE_DEVICE} p
fi

#Creates pipe for desplitting


if [ ! -r ${SPLIT_PIPE_DEVICE} ]; then
/etc/mknod ${SPLIT_PIPE_DEVICE} p
fi

nohup cat ${ZFILES} > ${SPLIT_PIPE_DEVICE} &


sleep 5
nohup uncompress < ${SPLIT_PIPE_DEVICE} >${PIPE_DEVICE} &
sleep 30
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_build_parfile(): Creates parameter file
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_build_parfile() {
# This line makes sure that it always creates a new parameter file
echo " " >${PARFILE}
echo "userid=system/manager">>${PARFILE}
#echo "indexfile=${BACKUPDIR}/${ORA_SID}.ddl">>${PARFILE}
#echo "Owner=scott">>${PARFILE}
#echo "Fromuser=kishan">>${PARFILE}
#echo "Touser=aravind">>${PARFILE}
#echo "Tables=T1,T2,t3,t4">>${PARFILE}
echo "Full=Y">>${PARFILE}
echo "Ignore=Y">>${PARFILE}
echo "Commit=y">>${PARFILE}
echo "File=${PIPE_DEVICE}">>${PARFILE}
echo "Log=${BACKUPDIR}/${ORA_SID}.imp.log">>${PARFILE}
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_import(): Import the database
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_import() {
${ORACLE_HOME}/bin/imp parfile=${PARFILE}
if [ $? != 0 ]; then
echo `date` >> $LOGDIR/${ORA_SID}.log
echo "IMPORT_FAIL: ${ORA_SID}, Import Failed" >> $LOGDIR/${ORA_SID}.log
funct_cleanup
exit 1
fi
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_parm(): Check for input parameters
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_chk_parm() {
if [ ${NARG} -ne 2 ]; then
echo "IMPORT_FAIL: ${ORA_SID}, Not enough arguments passed"
exit 1
fi
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_bkup_dir(): Check for backup directories
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_chk_bkup_dir() {
if [ ! -d ${DYN_DIR} ]; then mkdir -p ${DYN_DIR}; fi
if [ ! -d ${LOGDIR} ]; then mkdir -p ${LOGDIR}; fi
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_get_vars(): Get environment variables
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_get_vars(){
ORA_HOME=`sed /#/d ${ORATABDIR} | grep -i ${ORA_SID} | nawk -F ":" '{print $2}'`
ORA_BASE=`echo ${ORA_HOME} | nawk -F "/" '{for (i=2; i<=NF-2; i++) print "/"$i}'`
ORACLE_BASE=`echo $ORA_BASE | tr -d " "`
ORACLE_HOME=${ORA_HOME}; export ORACLE_HOME
ORACLE_SID=${ORA_SID}; export ORACLE_SID
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_unix_command_status(): Check exit status of Unix command
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_chk_unix_command_status() {
if [ $? != 0 ]; then
echo "`date`" >> ${LOGDIR}/${ORA_SID}.log
echo "IMPORT_FAIL: ${1} " >> ${LOGDIR}/${ORA_SID}.log
exit 1
fi
}

#######################################
## MAIN
#######################################

NARG=$#
ORA_SID=$1
ORA_OWNER=$2

# Set up environment
BACKUPDIR="/u02/${ORA_SID}/export"
ORATABDIR=/etc/oratab
TOOLS="/u01/oracomn/admin/my_dba"
# List all split files in ZFILES variable
#ZFILES=´echo ${BACKUPDIR}/file.dmp.Z??|sort´
ZFILES="${BACKUPDIR}/file.dmp.Zaa ${BACKUPDIR}/file.dmp.Zab"

DYN_DIR="${TOOLS}/DYN_FILES"
PARFILE="${DYN_DIR}/import.par"
LOGDIR="${TOOLS}/localog"

PIPE_DEVICE="/tmp/import_${ORA_SID}_pipe"
SPLIT_PIPE_DEVICE="/tmp/split_${ORA_SID}_pipe"
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1;export NLS_LANG

echo "... Now importing .... ${ORA_SID}"

funct_chk_parm
funct_get_vars
funct_verify
funct_chk_bkup_dir
funct_desplitcompress_pipe
funct_build_parfile
funct_import
funct_cleanup

echo `date` >> $LOGDIR/${ORA_SID}.log
echo "${ORA_SID}, import completed successfully" >> $LOGDIR/${ORA_SID}.log

####################### END MAIN ###############################

Split Import Script under Unix Checklist

• In the main() function, set the correct values for the BACKUPDIR, ORATABDIR, and TOOLS
variables highlighted in the import script. The default location of ORATABDIR is different for each
flavor of Unix.
• Check for the existence of the SID in the oratab file. If not already there, you must add the
instance.
• List all split filenames in the ZFILES variable in the main() function.
• The funct_build_parfile() function builds the parameter file. By default, it performs a
full import. You can modify the settings to perform a user or table import.

Pass SID and OWNER as parameters to the program:

• funct_desplitcompress_pipe() The only trick here is that we need to recombine the split files and uncompress them before they can be used as input to the import command. That is accomplished by creating two pipes. Here, we use the cat command to send the split files to the split pipe device. The split pipe device is passed to the uncompress command, and the output of the uncompress command is sent to the Oracle import command. cat and uncompress are Unix commands. Everything else is the same as a regular import. (A stripped-down sketch of this chain follows.)
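A stripped-down sketch of the same chain, again with example names that mirror the listing's defaults for an instance called ORCL:

# recreate the two named pipes
/etc/mknod /tmp/split_ORCL_pipe p
/etc/mknod /tmp/import_ORCL_pipe p

# cat the pieces back together into the split pipe, then uncompress into the import pipe
nohup cat /u02/ORCL/export/ORCL.dmp.Zaa /u02/ORCL/export/ORCL.dmp.Zab > /tmp/split_ORCL_pipe &
nohup uncompress < /tmp/split_ORCL_pipe > /tmp/import_ORCL_pipe &

# the import reads the reassembled, uncompressed dump from the import pipe
imp system/manager full=y ignore=y file=/tmp/import_ORCL_pipe log=ORCL.imp.log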

Oracle Software Backup


This section discusses backing up the software directories of Oracle. We have already discussed how to
back up the database. Backing up software is also a very important part of a backup strategy. The
software might not need to be backed up as often as the database because it does not change quite as
often. But before you upgrade or apply any patches to the existing software, it is important to make a backup copy of it so that you can fall back if something goes wrong.

Listing 3.8 contains the script to perform a backup of Oracle software. The script takes two input
parameters—SID and OWNER. SID is the instance to be backed up, and OWNER is the Unix account
under which Oracle is running.

Listing 3.8 OraSoftware_ux

#####################################################################
# PROGRAM NAME: OraSoftware_ux

# PURPOSE: Backup ORACLE_HOME & ORACLE_BASE


# USAGE: $OraSoftware_ux SID OWNER
# INPUT PARAMETERS: SID(Instance name), OWNER(Owner of instance)

#####################################################################

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_verify_shutdown(): Verify that database is down
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_verify_shutdown(){
STATUS=`ps -fu ${ORA_OWNER} | grep -v grep | grep ora_pmon_${ORA_SID}`
if [ $? = 0 ]; then
echo "`date`" >> ${LOGFILE}
echo "SOFTWAREBACKUP_FAIL: ${ORA_SID}, Database is up, can't do software backup if the database is online." | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_shutdown_i(): Shutdown database in Immediate mode
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_shutdown_i(){
${ORACLE_HOME}/bin/sqlplus -s << EOF
/ as sysdba
shutdown immediate;
exit
EOF
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_startup_n(): Startup database in normal mode
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_startup_n(){
${ORACLE_HOME}/bin/sqlplus -s << EOF
/ as sysdba
startup;
exit
EOF
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_software_bkup(): Backup software
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_software_bkup(){

echo "tarring ${ORA_HOME}" >> ${BACKUPLOGFILE}
echo "tarring ${ORA_BASE}" >> ${BACKUPLOGFILE}

nohup tar cvlpf - ${ORA_HOME} | compress > ${ORAHOMEFILE} 2> ${BACKUPLOGFILE}
nohup tar cvlpf - ${ORA_BASE} | compress > ${ORABASEFILE} 2> ${BACKUPLOGFILE}

#Prepare restore file


echo "zcat ${ORAHOMEFILE}| tar xvlpf - ${ORA_HOME}" > ${RESTOREFILE}
echo "zcat ${ORABASEFILE}| tar xvlpf - ${ORA_BASE}" >> ${RESTOREFILE}
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_parm(): Check for input parameters
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_parm() {
if [ ${NARG} -ne 2 ]; then
echo "SOFTWAREBACKUP_FAIL: ${ORA_SID}, Not enough arguments passed"
exit 1
fi
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_bkup_dir(): Create backup directories if they do not already exist
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

funct_chk_bkup_dir() {

RESTOREFILE_DIR="${BACKUPDIR}/restorefile_dir"
BACKUPLOG_DIR="${BACKUPDIR}/backuplog_dir"
SOFTWARE_DIR="${BACKUPDIR}/software_dir"

BACKUPLOGFILE="${BACKUPLOG_DIR}/backup_log_${ORA_SID}"
RESTOREFILE="${RESTOREFILE_DIR}/restorefile_${ORA_SID}"
ORAHOMEFILE="${SOFTWARE_DIR}/orahome_${ORA_SID}.tar.Z"
ORABASEFILE="${SOFTWARE_DIR}/orabase_${ORA_SID}.tar.Z"
LOGFILE="${LOGDIR}/${ORA_SID}.log"

if [ ! -d ${RESTOREFILE_DIR} ]; then mkdir -p ${RESTOREFILE_DIR}; fi


if [ ! -d ${BACKUPLOG_DIR} ]; then mkdir -p ${BACKUPLOG_DIR}; fi
if [ ! -d ${SOFTWARE_DIR} ]; then mkdir -p ${SOFTWARE_DIR}; fi

if [ ! -d ${DYN_DIR} ]; then mkdir -p ${DYN_DIR}; fi


if [ ! -d ${LOGDIR} ]; then mkdir -p ${LOGDIR}; fi

# Remove old files


rm -f ${RESTOREFILE_DIR}/*
rm -f ${BACKUPLOG_DIR}/*
rm -f ${SOFTWARE_DIR}/*

echo "${JOBNAME}: software backup of ${ORA_SID} begun on `date +\"%c\"`" >> ${BACKUPLOGFILE}
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_get_vars(): Get environment variables
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_get_vars(){
ORA_HOME=`sed /#/d ${ORATABDIR} | grep -i ${ORA_SID} | nawk -F ":" '{print $2}'`
ORA_BASE=`echo ${ORA_HOME} | nawk -F "/" '{for (i=2; i<=NF-2; i++) print "/"$i}'`
ORACLE_BASE=`echo $ORA_BASE | tr -d " "`
init_file=$ORA_HOME/dbs/init$ORA_SID.ora
ORACLE_HOME=${ORA_HOME}; export ORACLE_HOME
ORACLE_SID=${ORA_SID}; export ORACLE_SID

if [ x$ORA_HOME = 'x' ]; then
echo "SOFTWAREBACKUP_FAIL: Can't get ORACLE_HOME from oratab for $ORA_SID" | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi
if [ ! -f $init_file ]; then
echo "SOFTWAREBACKUP_FAIL: init$ORA_SID.ora does not exist in ORACLE_HOME/dbs. Used by funct_startup_n to start database" | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_unix_command_status(): Check the exit status of Unix
command
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_unix_command_status() {
if [ $? != 0 ]; then
echo "`date`" | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
echo "SOFTWAREBACKUP_FAIL: ${1}" | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}
exit 1
fi
}

############################################################
## MAIN
############################################################
NARG=$#
ORA_SID=$1
ORA_OWNER=$2

# Set environment variables


BACKUPDIR="/u02/${ORA_SID}/software"
ORATABDIR=/etc/oratab
TOOLS="/u01/oracomn/admin/my_dba"

DYN_DIR="${TOOLS}/DYN_FILES"
LOGDIR="${TOOLS}/localog"
JOBNAME="orasoftware"

echo "Preparing to make software backup of ${ORA_SID}"

funct_chk_parm
funct_chk_bkup_dir
funct_get_vars
funct_shutdown_i
funct_verify_shutdown
funct_software_bkup
funct_startup_n
echo "${ORA_SID}, Software Backup Completed successfully on `date +\"%c\"`" | tee -a ${BACKUPLOGFILE} >> ${LOGFILE}

######## END MAIN #########################


Oracle Software Backup Script under Unix Checklist

• In the main function, set correct values for BACKUPDIR, ORATABDIR, and TOOLS variables
highlighted in the software backup script. The default location of ORATABDIR is different for
each flavor of Unix.
• Check for the existence of the SID in the oratab file. If not already there, you must add the
instance.
• If your Oracle software directory structure does not follow OFA guidelines, set ORA_BASE and
ORA_HOME manually in funct_get_vars().

Pass SID and OWNER as parameters to the program:
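For example (the instance name ORCL and the Unix owner oracle are placeholder values only):

$ OraSoftware_ux ORCL oracle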

• funct_software_bkup() This function tars the software directories of ORACLE_HOME and ORACLE_BASE and compresses the output using the compress command. Here, we are assuming that the Oracle software is installed using OFA (Optimal Flexible Architecture) guidelines. If not, you have to set ORA_BASE and ORA_HOME manually in the funct_get_vars() function. The nohup command keeps the tar jobs running on the server even if the session is disconnected.
• main() If the database is running, it shuts down the database and starts backing up the
software directories. When the backup is complete, it restarts the database.

Troubleshooting and status check:

The important thing here is that the backup log file defined by BACKUPLOGFILE contains detailed
information about each step of the backup process. This is a very good place to start investigating why a
backup has failed or for related errors. This file will also have the start and end time of backup.

A single line about the success or failure of the backup is appended to the SID.log file every time a backup is performed. This file is located under the directory defined by the LOGDIR variable. The messages for a software backup are 'SOFTWAREBACKUP_FAIL', if the software backup failed, and 'Software Backup Completed successfully', if the backup completes successfully.
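For example, assuming the default TOOLS and BACKUPDIR locations used in the script and an instance named ORCL (all of these are assumptions; adjust the paths to your own settings), a quick status check might look like this:

$ tail -2 /u01/oracomn/admin/my_dba/localog/ORCL.log
$ tail -20 /u02/ORCL/software/backuplog_dir/backup_log_ORCL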

Restoring Oracle Software


The steps to restore the software are as follows:

1. Shut down the database.
2. Use restore file from the backup to restore the directories.
3. Start up the database.

The restore command in the restore file first does a zcat (uncompress and cat) of the output file
and passes it to tar for extraction. For example,

zcat ora_home.tar.Z | tar xvlpf - /oracle/ora81

ora_home.tar.Z: File to extract
/oracle/ora81: Destination directory

Backup and Recovery under Windows NT

Before reading through this section, I strongly recommend that you go through the Windows NT
programming section in Chapter 2. This section presents and explains the scripts for taking a backup and
recovering a database in the Windows NT environment. Here, we use the DOS Shell batch programming
techniques to automate the backup process.

After the backup is complete, it is important to check the backup status by reviewing log and error files
generated by the scripts.
Cold Backup
Listing 3.9 performs a cold backup of a database under the Windows NT environment. The cold backup
script takes SID, the instance to be backed up, as the input parameter. The general steps to write a
backup script in Unix and Windows NT are the same. The only difference is that we will be using
commands that are understood by Windows NT. Figure 3.6 shows the functionality of a cold backup
program under Windows NT. Each box represents a corresponding section in the program. For example,
the Parameter Checking section checks for the necessary input parameters and also checks for the
existence of the backup directories.

Figure 3.6 Functions in cold backup script for Windows NT.

Listing 3.9 coldbackup_nt.bat

@echo off
REM ##############################################################
REM PROGRAM NAME: coldbackup_nt.bat

REM PURPOSE: This utility performs cold backup of


REM the database on Windows NT
REM USAGE: c:\>coldbackup_nt.bat SID

REM INPUT PARAMETERS: SID (Instance name)


REM ###############################################################

REM ::::::::::::::::::::Begin Declare Variables Section

set ORA_HOME=c:\oracle\ora81\bin
set CONNECT_USER="/ as sysdba"
set ORACLE_SID=%1
set BACKUP_DIR=c:\backup\%ORACLE_SID%\cold
set INIT_FILE=c:\oracle\admin\orcl\pfile\init.ora
set TOOLS=c:\oracomn\admin\my_dba
set LOGDIR=%TOOLS%\localog
set LOGFILE=%LOGDIR%\%ORACLE_SID%.log

set CFILE=%BACKUP_DIR%\log\coldbackup.sql
set ERR_FILE=%BACKUP_DIR%\log\cerrors.log
set LOG_FILE=%BACKUP_DIR%\log\cbackup.log
set BKP_DIR=%BACKUP_DIR%

REM :::::::::::::::::::: End Declare Variables Section

REM :::::::::::::::::::: Begin Parameter Checking Section

if "%1" == "" goto usage

REM Create backup directories if they do not already exist


if not exist %BACKUP_DIR%\data mkdir %BACKUP_DIR%\data
if not exist %BACKUP_DIR%\control mkdir %BACKUP_DIR%\control
if not exist %BACKUP_DIR%\redo mkdir %BACKUP_DIR%\redo
if not exist %BACKUP_DIR%\log mkdir %BACKUP_DIR%\log
if not exist %LOGDIR% mkdir %LOGDIR%

REM Check to see that there were no create errors


if not exist %BACKUP_DIR%\data goto backupdir
if not exist %BACKUP_DIR%\control goto backupdir
if not exist %BACKUP_DIR%\redo goto backupdir
if not exist %BACKUP_DIR%\log goto backupdir
REM Deletes previous backup. Make sure you have it on tape.
del/q %BACKUP_DIR%\data\*
del/q %BACKUP_DIR%\control\*
del/q %BACKUP_DIR%\redo\*
del/q %BACKUP_DIR%\log\*

echo. > %ERR_FILE%


echo. > %LOG_FILE%
(echo Cold Backup started & date/T & time/T) >> %LOG_FILE%

echo Parameter Checking Completed >> %LOG_FILE%


REM :::::::::::::::::::: End Parameter Checking Section

REM :::::::::::::::::::: Begin Create Dynamic files Section


echo. >%CFILE%
echo set termout off heading off feedback off >>%CFILE%
echo set linesize 300 pagesize 0 >>%CFILE%
echo set serveroutput on size 1000000 >>%CFILE%
echo. >>%CFILE%
echo spool %BACKUP_DIR%\log\coldbackup_list.bat >>%CFILE%
echo. >>%CFILE%
echo exec dbms_output.put_line('@echo off' ); >>%CFILE%
echo. >>%CFILE%
echo exec dbms_output.put_line('REM ******Data files' ); >>%CFILE%
echo select 'copy '^|^| file_name^|^| ' %BKP_DIR%\data ' >>%CFILE%
echo from dba_data_files order by tablespace_name; >>%CFILE%
echo. >>%CFILE%
echo exec dbms_output.put_line('REM ******Control files' ); >>%CFILE%
echo select 'copy '^|^| name^|^| ' %BKP_DIR%\control ' >>%CFILE%
echo from v$controlfile order by name; >>%CFILE%
echo. >>%CFILE%
echo exec dbms_output.put_line('REM ******Init.ora file ' ); >>%CFILE%
echo select ' copy %INIT_FILE% %BKP_DIR%\control ' >>%CFILE%
echo from dual; >>%CFILE%
echo exec dbms_output.put_line('exit;'); >>%CFILE%
echo spool off >>%CFILE%
echo exit >>%CFILE%

echo Dynamic files Section Completed >> %LOG_FILE%


REM :::::::::::::::::::: End Create Dynamic files Section

REM :::::::::::::::::::: Begin ColdBackup Section

%ORA_HOME%\sqlplus -s %CONNECT_USER% @%CFILE%


%ORA_HOME%\sqlplus -s %CONNECT_USER% @shutdown_i_nt.sql
%ORA_HOME%\sqlplus -s %CONNECT_USER% @startup_r_nt.sql
%ORA_HOME%\sqlplus -s %CONNECT_USER% @shutdown_n_nt.sql

REM Copy the files to backup location


start/b %BACKUP_DIR%\log\coldbackup_list.bat 1>> %LOG_FILE% 2>> %ERR_FILE%
%ORA_HOME%\sqlplus -s %CONNECT_USER% @startup_n_nt.sql

(echo ColdBackup Completed Successfully & date/T & time/T) >> %LOG_FILE%
(echo ColdBackup Completed Successfully & date/T & time/T) >> %LOGFILE%
goto end

REM :::::::::::::::::::: End ColdBackup Section

REM :::::::::::::::::::::::::::: Begin Error handling section

:usage
echo Error, Usage: coldbackup_nt.bat SID
goto end

:backupdir
echo Error creating Backup directory structure >> %ERR_FILE%
(echo COLDBACKUP_FAIL:Error creating Backup directory structure & date/T & time/T) >> %LOGFILE%
REM :::::::::::::::::::: End Error handling section

REM :::::::::::::::::::: Cleanup Section


:end
set ORA_HOME=
set ORACLE_SID=
set CONNECT_USER=
set BACKUP_DIR=
set INIT_FILE=
set CFILE=
set ERR_FILE=
set LOG_FILE=
set BKP_DIR=

Cold Backup Script for Windows NT Checklist

• Check to see that ORA_HOME, BACKUP_DIR, and TOOLS are set to correct values according to
your directory structure. These variables are highlighted in the script.
• Verify that CONNECT_USER is set to the correct username and password.
• Define the INIT_FILE variable to the location of the Init.ora file.
• Be sure that the user running the program has Write access to backup directories.
• When you run the program, pass SID as a parameter.

Cold Backup under Windows NT Troubleshooting and Status Check

Backup log files defined by LOG_FILE contain detailed information about each step of the backup
process. This is a very good place to start investigating why a backup has failed or for related errors. This
file will also have the start and end time of backup. ERR_FILE has error information.

A single line about the success or failure of backup is appended to the SID.log file every time a backup
is performed. This file is located under the directory defined by the LOGDIR variable. The messages for a
cold backup are 'COLDBACKUP_FAIL', if the cold backup failed, and 'Cold Backup Completed
successfully', if the backup completes successfully.

You can schedule automatic backups using the 'at' command, as shown in the following:

at 23:00 "c:\backup\coldbackup_nt.bat ORCL"


Runs at 23:00 hrs on current date.

at 23:00 /every:M,T,W,Th,F "c:\backup\coldbackup_nt.bat ORCL "

This command runs a backup at 23:00 hours every Monday, Tuesday, Wednesday, Thursday, and Friday.
The "Create Dynamic Files" section in the coldbackup_nt.bat program creates the
coldbackup.sql file (see Listing 3.10) under the log directory. coldbackup.sql is called from
coldbackup_nt.bat and generates a list of data, control, and redo log files to be backed up from the
database. A sample coldbackup.sql is shown in Listing 3.10 for your understanding. The contents of
this file are derived based on the structure of the database.

Listing 3.10 coldbackup.sql

set termout off heading off feedback off


set linesize 300 pagesize 0
set serveroutput on size 1000000

spool c:\backup\orcl\cold\log\coldbackup_list.bat

exec dbms_output.put_line('@echo off' );

exec dbms_output.put_line('REM ******Data files' );


select 'copy '|| file_name|| ' c:\backup\orcl\cold\data '
from dba_data_files order by tablespace_name;

exec dbms_output.put_line('REM ******Control files' );


select 'copy '|| name|| ' c:\backup\orcl\cold\control '
from v$controlfile order by name;

exec dbms_output.put_line('REM ******Init.ora file ' );


select ' copy c:\oracle\admin\orcl\pfile\init.ora c:\backup\orcl\cold\control '
from dual;
exec dbms_output.put_line('exit;');
spool off
exit

When the coldbackup.sql file is called from the coldbackup_nt.bat program, it spools output to the
coldbackup_list.bat DOS batch file (see Listing 3.11). This file has the commands necessary for
performing the cold backup.

This is only a sample file. Note that in this file the data files, control files, and the Init.ora file are copied to their respective backup directories.

Listing 3.11 coldbackup_list.bat

@echo off

REM ******Data files


copy C:\ORADATA\DSGN01.DBF c:\backup\orcl\cold\data
copy C:\ORADATA\INDX01.DBF c:\backup\orcl\cold\data
copy C:\ORADATA\OEM01.DBF c:\backup\orcl\cold\data
copy C:\ORADATA\RBS01.DBF c:\backup\orcl\cold\data
copy C:\ORADATA\SYSTEM01.DBF c:\backup\orcl\cold\data
copy C:\ORADATA\TEMP01.DBF c:\backup\orcl\cold\data
copy C:\ORADATA\USERS01.DBF c:\backup\orcl\cold\data

REM ******Control files


copy C:\ORADATA\CONTROL01.CTL c:\backup\orcl\cold\control
copy C:\ORADATA\CONTROL02.CTL c:\backup\orcl\cold\control

REM ******Init.ora file


copy c:\oracle\admin\orcl\pfile\init.ora c:\backup\orcl\cold\control
exit;
Hot Backup
The hot backup program (see Listing 3.12) performs a hot backup of a database under the Windows NT
environment. The hot backup script takes SID, the instance to be backed up, as the input parameter.

Listing 3.12 hotbackup_nt.bat

@echo off
REM
#####################################################################
REM PROGRAM NAME: hotbackup_nt.bat

REM PURPOSE: This utility performs hot backup of


REM the database on Windows NT
REM USAGE: c:\>hotbackup_nt.bat SID

REM INPUT PARAMETERS: SID (Instance name)


REM
#####################################################################

REM :::::::::::::::::::: Begin Declare Variables Section

set ORA_HOME=c:\oracle\ora81\bin
set CONNECT_USER="/ as sysdba"
set ORACLE_SID=%1
set BACKUP_DIR=c:\backup\%ORACLE_SID%\hot
set INIT_FILE=c:\oracle\admin\orcl\pfile\init.ora
set ARC_DEST=c:\oracle\oradata\orcl\archive

set TOOLS=c:\oracomn\admin\my_dba
set LOGDIR=%TOOLS%\localog
set LOGFILE=%LOGDIR%\%ORACLE_SID%.log

set HFILE=%BACKUP_DIR%\log\hotbackup.sql
set ERR_FILE=%BACKUP_DIR%\log\herrors.log
set LOG_FILE=%BACKUP_DIR%\log\hbackup.log
set BKP_DIR=%BACKUP_DIR%
REM :::::::::::::::::::: End Declare Variables Section

REM :::::::::::::::::::: Begin Parameter Checking Section

if "%1" == "" goto usage

REM Create backup directories if they do not already exist


if not exist %BACKUP_DIR%\data mkdir %BACKUP_DIR%\data
if not exist %BACKUP_DIR%\control mkdir %BACKUP_DIR%\control
if not exist %BACKUP_DIR%\arch mkdir %BACKUP_DIR%\arch
if not exist %BACKUP_DIR%\log mkdir %BACKUP_DIR%\log
if not exist %LOGDIR% mkdir %LOGDIR%

REM Check to see that there were no create errors


if not exist %BACKUP_DIR%\data goto backupdir
if not exist %BACKUP_DIR%\control goto backupdir
if not exist %BACKUP_DIR%\arch goto backupdir
if not exist %BACKUP_DIR%\log goto backupdir

REM Deletes previous backup. Make sure you have it on tape.


del/q %BACKUP_DIR%\data\*
del/q %BACKUP_DIR%\control\*
del/q %BACKUP_DIR%\arch\*
del/q %BACKUP_DIR%\log\*

echo. > %ERR_FILE%


echo. > %LOG_FILE%
(echo Hot Backup started & date/T & time/T) >> %LOG_FILE%
echo Parameter Checking Completed >> %LOG_FILE%
REM :::::::::::::::::::: End Parameter Checking Section

REM :::::::::::::::::::: Begin Create Dynamic files Section


echo. >%HFILE%
echo set termout off heading off feedback off >>%HFILE%
echo set linesize 300 pagesize 0 >>%HFILE%
echo set serveroutput on size 1000000 >>%HFILE%
echo spool %BACKUP_DIR%\log\hotbackup_list.sql >>%HFILE%

echo Declare >>%HFILE%


echo cursor c1 is select distinct tablespace_name from dba_data_files order by tablespace_name; >>%HFILE%
echo cursor c2( ptbs varchar2) is select file_name from dba_data_files where tablespace_name = ptbs order by file_name; >>%HFILE%
echo Begin >>%HFILE%
echo dbms_output.put_line('set termout off heading off feedback off'); >>%HFILE%

echo. >>%HFILE%
echo dbms_output.put_line(chr(10) ); >>%HFILE%
echo dbms_output.put_line('host REM ******Data files' ); >>%HFILE%
echo for tbs in c1 loop >>%HFILE%
echo dbms_output.put_line(' alter tablespace '^|^|tbs.tablespace_name^|^|' begin backup;'); >>%HFILE%
echo for dbf in c2(tbs.tablespace_name) loop >>%HFILE%
echo dbms_output.put_line(' host copy '^|^|dbf.file_name^|^|' %BKP_DIR%\data 1^>^> %LOG_FILE% 2^>^> %ERR_FILE%'); >>%HFILE%
echo end loop; >>%HFILE%
echo dbms_output.put_line(' alter tablespace '^|^|tbs.tablespace_name^|^|' end backup;'); >>%HFILE%
echo end loop; >>%HFILE%

echo. >>%HFILE%
echo dbms_output.put_line(chr(10) ); >>%HFILE%
echo dbms_output.put_line('host REM ******Control files ' ); >>%HFILE%
echo dbms_output.put_line(' alter database backup controlfile to '^|^|''''^|^|'%BKP_DIR%\control\control_file.ctl'^|^|''''^|^|';'); >>%HFILE%
echo dbms_output.put_line(' alter database backup controlfile to trace;'); >>%HFILE%

echo. >>%HFILE%
echo dbms_output.put_line(chr(10) ); >>%HFILE%
echo dbms_output.put_line('host REM ******Init.ora file ' ); >>%HFILE%
echo dbms_output.put_line(' host copy %INIT_FILE% %BKP_DIR%\control 1^>^> %LOG_FILE% 2^>^> %ERR_FILE%'); >>%HFILE%
echo. >>%HFILE%
echo dbms_output.put_line(chr(10) ); >>%HFILE%
echo dbms_output.put_line('host REM ******Archivelog files' ); >>%HFILE%
echo dbms_output.put_line(' alter system switch logfile;'); >>%HFILE%
echo dbms_output.put_line(' alter system archive log stop;'); >>%HFILE%
echo dbms_output.put_line('host move %ARC_DEST%\* %BKP_DIR%\arch 1^>^> %LOG_FILE% 2^>^> %ERR_FILE%' ); >>%HFILE%
echo dbms_output.put_line(' alter system archive log start;'); >>%HFILE%

echo dbms_output.put_line('exit;'); >>%HFILE%


echo End; >>%HFILE%
echo / >>%HFILE%
echo spool off >>%HFILE%
echo exit; >>%HFILE%

echo Dynamic files Section Completed >> %LOG_FILE%


REM :::::::::::::::::::: End Create Dynamic files Section

REM :::::::::::::::::::: Begin HotBackup Section

%ORA_HOME%\sqlplus -s %CONNECT_USER% @%HFILE%


REM Copy the files to backup location
%ORA_HOME%\sqlplus -s %CONNECT_USER% @%BACKUP_DIR%\log\hotbackup_list.sql

(echo HotBackup Completed Successfully & date/T & time/T) >> %LOG_FILE%
(echo HotBackup Completed Successfully & date/T & time/T) >> %LOGFILE%
goto end

REM :::::::::::::::::::: End HotBackup Section

REM :::::::::::::::::::: Begin Error handling section

:usage
echo Error, Usage: hotbackup_nt.bat SID
goto end

:backupdir
echo Error creating Backup directory structure >> %ERR_FILE%
(echo HOTBACKUP_FAIL:Error creating Backup directory structure & date/T & time/T) >> %LOGFILE%
REM :::::::::::::::::::: End Error handling section

REM :::::::::::::::::::: Cleanup Section


:end
set ORA_HOME=
set ORACLE_SID=
set CONNECT_USER=
set BACKUP_DIR=
set INIT_FILE=
set ARC_DEST=
set HFILE=
set ERR_FILE=
set LOG_FILE=
set BKP_DIR=

The hot backup program's functionality can be illustrated with a diagram similar to the one for the cold backup. The sections and their purposes in the program are the same as for a cold backup.

Hot Backup Script under Windows NT Checklist

• Check to see that ORA_HOME, BACKUP_DIR, and TOOLS are set to the correct values
according to your directory structure. These variables are highlighted in the script.
• Verify that CONNECT_USER is set to the correct username and password.
• Define the INIT_FILE variable to the location of the Init.ora file.
• Define the ARC_DEST variable to the location of the archive destination.
• Be sure that the user running the program has Write access to the backup directories.
• When you run the program, pass SID as a parameter.

Hot Backup under Windows NT Troubleshooting and Status Check

The backup log file defined by LOG_FILE contains detailed information about each step of the backup
process. This is a very good place to start investigating why a backup has failed or for related errors. This
file will also have the start and end time of backup. ERR_FILE has error information.

A single line about the success or failure of backup is appended to the SID.log file every time a backup
is performed. This file is located under the directory defined by the LOGDIR variable. The messages for a
hot backup are 'HOTBACKUP_FAIL', if a hot backup failed, and 'Hot Backup Completed
successfully', if a backup completes successfully.

The "Create Dynamic Files" section in hotbackup_nt.bat creates the hotbackup.sql file (see Listing 3.13) under the log directory. This file, which is called from the hotbackup_nt.bat program, generates the list of tablespaces, data files, control files, and archive log files to be backed up from the database.

Listing 3.13 hotbackup.sql

set termout off heading off feedback off


set linesize 300 pagesize 0
set serveroutput on size 1000000
spool c:\backup\orcl\hot\log\hotbackup_list.sql
Declare
cursor c1 is select distinct tablespace_name from dba_data_files
order by tablespace_name;
cursor c2( ptbs varchar2) is select file_name from dba_data_files
where tablespace_name = ptbs order by file_name;
Begin
dbms_output.put_line('set termout off heading off feedback off');

dbms_output.put_line(chr(10) );
dbms_output.put_line('host REM ******Data files' );
for tbs in c1 loop
dbms_output.put_line(' alter tablespace '|| tbs.tablespace_name ||' begin backup;');
for dbf in c2(tbs.tablespace_name) loop
dbms_output.put_line(' host copy '||dbf.file_name||' c:\backup\orcl\hot\data 1>> hbackup.log 2>> herrors.log');
end loop;
dbms_output.put_line(' alter tablespace '||tbs.tablespace_name ||
' end backup;');
end loop;
dbms_output.put_line(chr(10) );
dbms_output.put_line('host REM ******Control files ' );
dbms_output.put_line(' alter database backup controlfile to '||''''||'c:\backup\orcl\hot\control\control_file.ctl'||''''||';');
dbms_output.put_line(' alter database backup controlfile to trace;');

dbms_output.put_line(chr(10) );
dbms_output.put_line('host REM ******Init.ora file ' );
dbms_output.put_line('host copy c:\oracle\admin\orcl\pfile\init.ora c:\backup\orcl\hot\control 1>> hbackup.log 2>> herrors.log');

dbms_output.put_line(chr(10) );
dbms_output.put_line('host REM ******Archivelog files' );
dbms_output.put_line(' alter system switch logfile;');
dbms_output.put_line(' alter system archive log stop;');
dbms_output.put_line('host move c:\oracle\oradata\orcl\archive\* c:\backup\orcl\hot\arch 1>> hbackup.log 2>> herrors.log' );
dbms_output.put_line(' alter system archive log start;');
dbms_output.put_line('exit;');
End;
/
spool off
exit;

The hotbackup.sql file is called from hotbackup_nt.bat and it spools output to the hotbackup_list.sql SQL
file (see Listing 3.14). This file has the commands necessary for performing a hot backup.

This is only a sample file. Note in the file that the data, control, archive log, and Init.ora files are copied to their respective backup directories. First, it puts a tablespace into backup mode, copies the corresponding data files to the backup location, and then takes that tablespace out of backup mode. This process is repeated for each tablespace, and each copy command writes the status of the copy operation to hbackup.log and reports any errors to the herrors.log file.

Listing 3.14 is generated based on the structure of the database. In a real environment, the database
structure changes as more data files or tablespaces get added. Because of this, it is important to generate
the backup commands dynamically, as shown in hotbackup_list.sql. It performs the actual backup
and is called from hotbackup_nt.bat.

Listing 3.14 hotbackup_list.sql

set termout off heading off feedback off

host REM ******Data files


alter tablespace DESIGNER begin backup;
host copy C:\ORADATA\DSGN01.DBF c:\backup\orcl\hot\data 1>> hbackup.log 2>> herrors.log
alter tablespace DESIGNER end backup;
alter tablespace DESIGNER_INDX begin backup;
host copy C:\ORADATA\DSGN_INDX01.DBF c:\backup\orcl\hot\data 1>> hbackup.log 2>> herrors.log
alter tablespace DESIGNER_INDX end backup;
alter tablespace INDX begin backup;
host copy C:\ORADATA\INDX01.DBF c:\backup\orcl\hot\data 1>> hbackup.log 2>> herrors.log
alter tablespace INDX end backup;
alter tablespace OEM_REPOSITORY begin backup;
host copy C:\ORADATA\OEMREP01.DBF c:\backup\orcl\hot\data 1>> hbackup.log 2>> herrors.log
alter tablespace OEM_REPOSITORY end backup;

host REM ******Control files


alter database backup controlfile to 'c:\backup\orcl\hot\control\control_file.ctl';
alter database backup controlfile to trace;

host REM ******Init.ora file


host copy c:\oracle\admin\orcl\pfile\init.ora c:\backup\orcl\hot\control 1>> hbackup.log 2>> herrors.log

host REM ******Archivelog files


alter system switch logfile;
alter system archive log stop;
host move c:\oracle\oradata\orcl\archive\* c:\backup\orcl\hot\arch 1>> hbackup.log 2>> herrors.log
alter system archive log start;
exit;

Export
The export program (see Listing 3.15) performs a full export of the database under a Windows NT
environment. The export script takes SID, the instance to be backed up, as the input parameter.

Listing 3.15 export_nt.bat

@echo off
REM
#####################################################################
REM PROGRAM NAME: export_nt.bat

REM PURPOSE: This utility performs a full export of


REM database on Windows NT
REM USAGE: c:\>export_nt.bat SID

REM INPUT PARAMETERS: SID (Instance name)


REM
#####################################################################

REM :::::::::::::::::::: Begin Declare Variables Section

set ORA_HOME=c:\oracle\ora81\bin
set ORACLE_SID=%1
set CONNECT_USER=system/manager
set BACKUP_DIR=c:\backup\%ORACLE_SID%\export

set TOOLS=c:\oracomn\admin\my_dba
set LOGDIR=%TOOLS%\localog
set LOGFILE=%LOGDIR%\%ORACLE_SID%.log

REM :::::::::::::::::::: End Declare Variables Section

REM :::::::::::::::::::: Begin Parameter Checking Section

if "%1" == "" goto usage

REM Create backup directories if they do not already exist
if not exist %BACKUP_DIR% mkdir %BACKUP_DIR%
if not exist %LOGDIR% mkdir %LOGDIR%

REM Check to see that there were no create errors


if not exist %BACKUP_DIR% goto backupdir

REM Deletes previous backup. Make sure you have it on tape.


del/q %BACKUP_DIR%\*

REM :::::::::::::::::::: End Parameter Checking Section

REM :::::::::::::::::::: Begin Export Section

%ORA_HOME%\exp %CONNECT_USER% parfile=export_par.txt


(echo Export Completed Successfully & date/T & time/T) >> %LOGFILE%
goto end

REM :::::::::::::::::::: End Export Section

REM :::::::::::::::::::: Begin Error handling section

:usage
echo Error, Usage: export_nt.bat SID
goto end

:backupdir
echo Error creating Backup directory structure
(echo EXPORT_FAIL:Error creating Backup directory structure
& date/T & time/T) >> %LOGFILE%

REM :::::::::::::::::::: End Error handling section

REM ::::::::::::::::::::Cleanup Section


:end
set ORA_HOME=
set ORACLE_SID=
set CONNECT_USER=
set BACKUP_DIR=

This program performs an export of the database by using the parameter file export_par.txt. Listing 3.16 shows a sample parameter file that performs a full export of the database. You can modify the parameter file to suit your requirements.

Listing 3.16 export_par.txt

file= %BACKUP_DIR%\export.dmp
log= %BACKUP_DIR%\export.log
full=y
compress=n
consistent=y
Export Script under Windows NT Checklist

• Check to see that ORA_HOME, BACKUP_DIR, and TOOLS are set to correct values according to your directory structure. These variables are highlighted in the program.
• Verify that CONNECT_USER is set to the correct username and password.
• Be sure that the user running the program has Write access to the backup directories.
• Edit the parameter file to your specific requirements. Specify the full path of the location of your
parameter file in the program.
• When you run the program, pass SID as a parameter.

Export under Windows NT Troubleshooting and Status Check

The log file specified in the parameter file contains detailed information about each step of the export
process. This is a very good place to start investigating why an export has failed or for related errors.

A single line about the success or failure of export is appended to the SID.log file every time an export
is performed. This file is located under the directory defined by the LOGDIR variable. The messages for
an export are 'EXPORT_FAIL', if the export failed, and 'Export Completed successfully',
if the export completes successfully.
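For example, assuming the default TOOLS and BACKUP_DIR locations in the script and an instance named ORCL (all assumptions; adjust to your environment), you could check the status with the Windows find command:

C:\>find /i "export" c:\oracomn\admin\my_dba\localog\ORCL.log
C:\>find /i "exp-" c:\backup\ORCL\export\export.log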

Recovery Principles

Recovery principles are the same, regardless of whether you are in a Unix or Windows NT environment.
The following are general guidelines for recovery using a cold backup, hot backup, and export.

Definitions
• Control File—The control file contains records that describe and maintain information about the
physical structure of a database. The control file is updated continuously during database use
and must be available for writing whenever the database is open. If the control file is not
accessible, the database will not open.
• System Change Number (SCN)—The system change number is a clock value for the database
that describes a committed version of the database. The SCN functions as a sequence generator
for a database and controls concurrency and redo record ordering. Think of the SCN as a
timestamp that helps ensure transaction consistency.
• Checkpoint—A checkpoint is a data structure in the control file that defines a consistent point of
the database across all threads of a redo log. Checkpoints are similar to SCNs and they also
describe which threads exist at that SCN. Checkpoints are used by recovery to ensure that
Oracle starts reading the log threads for the redo application at the correct point. For a parallel
server, each checkpoint has its own redo information.

Media Recovery Commands


To perform either a complete media recovery or incomplete media recovery, you need to be familiar with
the following three media recovery commands.

• RECOVER DATABASE This command performs a media recovery on all the data files that
require the application of redo.
• This can be used only when the database is mounted but not open.
• This command is generally used when the system data file is lost.
• RECOVER TABLESPACE tablespace_name This command performs a media recovery on
all the data files in the tablespaces listed.
• The database must be mounted and open.
• The tablespace in question must be offline to perform the media recovery.
• To recover the tablespace, first mount the database, take the damaged data file offline, then open the database and take the tablespace offline. Then issue the recover tablespace tablespace_name command and bring the tablespace online when the recovery is complete.
• RECOVER DATAFILE 'filename' This command performs a recovery on listed data files.
• The database can be open or closed.
• If the database is open, data file recovery can only recover offline files.
• To recover the data file in question, mount the database and take the troubled data file offline, open the database, issue the RECOVER DATAFILE 'file_name' command, and then bring the data file back online. This command is generally used when a non-system data file is lost. (A minimal sketch of this sequence follows this list.)
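As a minimal sketch of that sequence, using the example data file and tablespace names that appear in the step-by-step procedures later in this chapter:

startup mount
alter database datafile '/u01/oradata/users01.dbf' offline;
alter database open;
alter tablespace users offline;
recover datafile '/u01/oradata/users01.dbf';
alter tablespace users online;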

Performing Recovery, Where to Start?

You are a new DBA and you get a call from the project manager saying that the users are not able to
connect to the database.

As a first step, try to establish a connection for yourself as a DBA as shown. If the connection succeeds,
try to connect as a regular user and see if you receive any errors during connection, because some errors
that are seen by regular users do not show up when you connect as Internal or SYSDBA (such as Max
sessions reached).

$sqlplus user/pwd

Now you determined that you are not able to connect to the database.

As a second step, try to see whether the processes are running by using the following command.

$ps -ef | grep -i ORCL

This should list the processes that are running. If it does not list any processes, you are sure that the
database is down.

As a third step, check the alert log file for any errors. The alert log file is located under the directory defined
by BACKGROUND_DUMP_DEST in the Init.ora file.

This file lists any errors encountered by database. If you see any errors, note the time of the error, error
number, and error message. If you do not see any errors, start up the database (sometimes it will report an
error when you try to startup the database). If the database starts, that is wonderful! If it doesn't start, it will
generally complain about the error onscreen and also report the error in the alert log file. Check the alert
log again for more information.
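For example, on Unix you might locate and read the alert log as follows; the init.ora name and the OFA-style bdump path are only assumptions, so use whatever BACKGROUND_DUMP_DEST actually points to on your system:

$ grep -i background_dump_dest $ORACLE_HOME/dbs/initORCL.ora
$ tail -100 /u01/app/oracle/admin/ORCL/bdump/alert_ORCL.log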

Now you determined from the error that the database is not finding one of the data files.

As a fourth step, inform the project manager that somebody has caused a problem in the database and try
to find out what happened (a hard disk problem or perhaps somebody deleted the file). Limit your time to
this research based on time available.

As a fifth step, try to determine what kind of backups you have taken recently and see which one is most
beneficial for recovering as much data as possible. This depends on the types of backups your site is
employing to protect from database crashes.

If you have a hot backup mechanism in place, you can be sure that you can recover all or most of the data.
If you have an export or cold backup mechanism in place, the data changes since the time of last backup
will be lost.

As a sixth step, follow the instructions in this chapter, given your recovery scenario.
Recovery Using Cold Backup
To restore a full database, do the following:

1. Shut down the database.
2. Copy all data files, control files, and redo log files from the backup location to the original location. Verify the owner and permissions of the files (Unix only).
3. Start up the database.

Recovery When a Data File Is Lost

To recover a database using a cold backup, just restore all the files from the backup location to their
original locations and open the database. You can find the original physical location in the trace file you
generated as part of the backup. You cannot recover the transactions that occurred between the last
backup and the point of failure—that information is lost.

Recovery When a Redo Log File Is Lost

To recover the database when a redo log file is lost or corrupted, use the following command:

alter database clear logfile group 1;

Where group 1 is the corrupted log group number.

Or you can create a new control file and open the database in the Reset Logs mode (alter database open resetlogs). For this, the database needs to be in the NOMOUNT state (startup nomount). The reset logs option resets the redo log sequence numbering and recreates any missing log files. To create the new control file, you need to know the full structure of the database. We took a trace of the control file by using alter database backup controlfile to trace as part of the backup. Follow the steps explained in Chapter 10, "Database Maintenance and Reorganization," for creating a new control file. A rough sketch of the sequence follows.
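As a rough sketch only, assuming you have already edited the CREATE CONTROLFILE statement out of the controlfile trace into a script (the name create_control.sql is hypothetical; Chapter 10 covers the details):

startup nomount
@create_control.sql
alter database open resetlogs;

Here create_control.sql stands for the script containing the CREATE CONTROLFILE ... RESETLOGS statement built from the trace.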

Recovery When a Control File Is Lost

To recover the database in case of a lost control file, you simply recreate the control file knowing the
structure of the database (from the trace of control file) and open the database with reset logs. Follow the
steps explained in Chapter 10 for creating a new control file.

Recovery Using Hot Backup


When the database is running in ARCHIVELOG mode and online backup is being used, there are a variety
of options for recovering the database, up to the point of failure, that provide maximum protection for your
data.

Recovery can be classified as follows:

• Complete media recovery
  • Closed database recovery
  • Open database/offline tablespace recovery
  • Open database/offline tablespace/individual data file recovery
• Incomplete media recovery
  • Cancel-based recovery
  • Time-based recovery
  • Change-based recovery

Complete Media Recovery

At all costs, we want to be able to fully recover the data in case of a database failure. Consequently, we
always try to perform a complete recovery unless the need is to recover the database only to a specific
point in time for specific reasons, such as those discussed in the next section, "Incomplete Media
Recovery."

The choice of whether to use a closed or open database recovery is based on the type of failure. If you
lose system data files, the only choice is a closed database recovery. If a non-system data file is lost, you
can perform recovery by using either a closed or open database method. Suppose that you are running a
24/7, mission-critical database, and only part of the database (non-system) is damaged. In this situation,
you can open the database for users by taking the damaged data files offline and then performing a
recovery on the damaged files. This way, users can access the rest of the database while the recovery is
being performed on the damaged data files.

Incomplete Media Recovery

Incomplete media recovery is also very useful when, for example, a user accidentally drops a table and comes to you for help. If you know the time the table drop occurred, you can restore the database from a backup. By using the latest control file, you can roll forward the changes by applying redo log files up to the point just before the accidental drop (time-based recovery).

Point in Time Recovery

There was a database corruption at 5 p.m. and the database crashed. When I tried to bring up the database, it opened and immediately died as soon as I started executing any SQL statement. This crippled my ability to troubleshoot the problem. I restored the database from a backup, applied the archived redo log files up to just before the time of the crash, and the database came up fine. Remember, you have to use the latest control file to roll forward with the archived redo log files, so that Oracle knows which archived redo log files to apply.

Closed Database Recovery Steps

1. Restore the damaged files from backup.


2. With the following command, mount the database but do not open it:

startup mount

3. Start media recovery as follows:

recover database

At this point, you will be prompted for the location of the archived redo log files, if necessary.

4. Open the database:

alter database open

Verify that the recovery worked.

Offline Tablespace Recovery Steps

1. Restore the damaged files from the backup.


2. With the following command, mount the database but do not open it:

startup mount

3. Take the corrupted data file offline:

alter database datafile '/u01/oradata/users01.dbf' offline;

4. Open the database as follows:

alter database open;

5. After the database is open, take the tablespace offline. For example, if the corrupted data file
belongs to USERS tablespace, use the following command:

alter tablespace users offline;

Here, tablespace can be taken offline either with a normal, temporary, or immediate priority. If
possible, take the damaged tablespace offline with a normal or temporary priority to minimize the
amount of recovery.

6. Start the recovery on the tablespace:

recover tablespace users;

At this point, you will be prompted for the location of the archived redo log files, if necessary.

7. Bring the tablespace online:

alter tablespace users online;

8. Verify that the recovery worked.

Offline Datafile Recovery Steps

1. Restore the damaged files from the backup.


2. Using the following command, mount the database but do not open it:

startup mount

3. Take the corrupted data file offline:

alter database datafile '/u01/oradata/users01.dbf' offline;

4. Open the database:

alter database open;

5. After the database is open, take the tablespace offline. For example, if the corrupted data file
belongs to USERS tablespace, use the following command:

alter tablespace users offline;

Here, tablespace can be taken offline either with a normal, temporary, or immediate priority. If
possible, take the damaged tablespace offline with a normal or temporary priority to minimize the
amount of recovery.

6. Start the recovery on the data file:

recover datafile '/u01/oradata/users01.dbf';

At this point, you will be prompted for the location of the archived redo log files, if necessary.
7. Bring the tablespace online:

alter tablespace users online;

8. Verify that the recovery worked.

Cancel-Based Recovery Steps

1. Restore the damaged files from the backup.


2. Using the following command, mount the database but do not open it:

startup mount

3. Start the recovery:

recover database until cancel [using backup controlfile]

At this point, you will be prompted for the location of the archived redo log files, if necessary.
Enter cancel to cancel recovery after Oracle has applied the archived redo log file just prior to the
point of corruption. If a backup control file or recreated control file is being used with incomplete
recovery, you should specify the using backup controlfile option. In cancel-based recovery, you
cannot stop in the middle of applying a redo log file. You either completely apply a redo log file or
you don't apply it at all. In time-based recovery, you can apply to a specific point in time,
regardless of the archived redo log number.

4. Open the database:

alter database open resetlogs

Whenever an incomplete media recovery is being performed or the backup control file is used for
recovery, the database should be opened with the resetlogs option. The resetlogs option will
reset the redo log files.

5. Perform a full backup of database.

If you open the database with resetlogs, a full backup of the database should be performed
immediately after recovery. Otherwise, you will not be able to recover changes made after you
reset the logs.

6. Verify that the recovery worked.

Time-Based Recovery Steps

1. Restore the damaged files from the backup.


2. Using the following command, mount the database but do not open it:

startup mount

3. Start the recovery:

recover database until time [using backup controlfile]

For example

recover database until time '1999-01-01:12:00:00' using backup controlfile

At this point, you will be prompted for the location of the archived redo log files, if necessary.
Oracle automatically terminates the recovery when it reaches the correct time. If a backup control
file or recreated control file is being used with incomplete recovery, you should specify the using
backup controlfile option.

4. Open the database:

alter database open resetlogs

Whenever an incomplete media recovery is being performed or the backup control file is used,
the database should be opened with the resetlogs option, so that it resets the log numbering.

5. Perform a full backup of the database.

If the database is opened with resetlogs, a full backup of the database should be performed
immediately after recovery. Otherwise, you will not be able to recover the changes made after you
reset the logs.

6. Verify that the recovery worked.

Change-Based Recovery Steps

1. Restore the damaged files from the backup.


2. Using the following command, mount the database but do not open it:

startup mount

3. Start the recovery:

recover database until change [using backup controlfile]

For example

recover database until change 2315 using backup controlfile

At this point, you will be prompted for the location of the archived redo log files, if necessary.
Oracle automatically terminates the recovery when it reaches the correct system change number
(SCN).

If a backup control file or a recreated control file is being used with an incomplete recovery, you
should specify the using backup controlfile option. (A query that can help identify the target SCN
is sketched after these steps.)

4. Open the database.

alter database open resetlogs;

5. Perform a full backup of the database.

If the database is opened with resetlogs, a full backup of the database should be performed
immediately after recovery. Otherwise, you will not be able to recover the changes made after you
reset the logs.

6. Verify that the recovery worked.
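
As a hedged sketch for choosing the SCN to recover to (the change number 2315 above is only an
illustration), you can review the first SCN recorded for each archived redo log in v$log_history:

select sequence#, first_change#, first_time
from v$log_history
order by sequence#;

This helps you map a system change number to a particular archived log and point in time.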


System Tablespace Versus a Non-System Tablespace Recovery

When a data file belonging to the SYSTEM tablespace is lost or damaged, the only way to recover the
database is by performing a closed database recovery, with the database mounted but not open, using
the RECOVER DATABASE command. A damaged non-system data file, by contrast, can be recovered while
the rest of the database remains open, as shown in the data file recovery steps earlier in this section.
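
As a hedged sketch (the steps mirror the data file recovery procedure above, but with the database
kept closed), recovery of a damaged SYSTEM data file might look like this: restore the damaged file
from backup, then run:

startup mount
recover database;
alter database open;

You will be prompted for archived redo log files as needed. Because this is a complete recovery, the
database is opened normally, without the resetlogs option.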

Checking for Files Needing Recovery

The following command can be used to check the data file status. This command works when the
database is mounted or open.

select name, status from v$datafile;

Before you actually start recovering the database, you can obtain information about the files that need
recovery by executing the following command. To execute the statement, the database must be mounted.
The command also gives error information.

select b.name, a.error
from v$recover_file a, v$datafile b
where a.file# = b.file#;

Recovery Using Import


The import utility is used to import the database from the dump file generated through the export
utility. This is very useful for transferring data across platforms and importing only specific objects or users.
It works whether archiving is turned on or off. Full database import performance can be improved by
turning off archiving during the import.
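
As a hedged sketch of turning archiving off for a large import and back on afterward (run from
SQL*Plus, connected with the appropriate administrative privileges):

shutdown immediate
startup mount
alter database noarchivelog;
alter database open;

REM run the full import at this point

shutdown immediate
startup mount
alter database archivelog;
alter database open;

Take a fresh full backup after re-enabling ARCHIVELOG mode, because you cannot roll forward through
the period when archiving was turned off.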

There are three levels of Import:

• Full
• User-level
• Table-level

Full Import

A full import can be used to restore the database in case of a database crash. For example, you have a full
export of the database from yesterday and your database crashed this afternoon. You can use the
import command to restore the database from the previous day's backup. The restore steps are as
follows.

1. Create a blank database—Refer to Chapter 10 for instructions on how to create a database.


2. Import the database—The following command performs a full database import, assuming that
your export dump filename is export.dmp. The IGNORE=Y option ignores any create errors,
and the DESTROY=N option does not destroy the existing tablespaces.
C:\>imp system/manager file=export.dmp log=import.log full=y
ignore=y destroy=n

3. Verify the import log for any errors—With this import, the data changes between your previous
backup and the crash will be lost.

Table-Level Import

A table level import allows you to import specific objects without importing the whole database.

Example 1:

For example, suppose one of the developers requests that you transfer the EMP and DEPT tables of user
SCOTT from database ORCL to database TEST. You can use the following steps to transfer these two tables.

1. Set your ORACLE_SID to ORCL.


C:\>set ORACLE_SID=ORCL

This step sets the correct database to which to connect.

2. Perform an export of EMP and DEPT.

C:\>exp system/manager tables=(scott.emp,scott.dept) file=export.dmp log=export.log

This command exports the table data, constraints, and any indexes on the tables. Because the tables
belong to the owner scott, we need to precede them with the owner name in the export command. Verify
the export.log file to make sure there are no errors in the export.

3. Connect to TEST database.

SQL>Connect system/manager@TEST

4. Drop the tables if they already exist.

If the TEST database already has EMP and DEPT tables, you can truncate the tables or drop the
tables as shown.

SQL>Truncate table EMP;
SQL>Truncate table DEPT;

Or

SQL>Drop table EMP;
SQL>Drop table DEPT;

5. Import the tables to TEST.


C:\>set ORACLE_SID=TEST
C:\>imp system/manager fromuser=scott touser=scott tables=(EMP,DEPT)
file=export.dmp log=import.log ignore=Y

Check for any errors in the import log file.

Example 2:

Suppose you walk into the office in the morning and a developer meets you in the hallway and says that
he accidentally dropped the SALES table. He wants to see whether you can do anything to restore the
table.

Well, you could do something if you have an export dump file from your previous backup. The steps to
restore the table are as follows (assuming that this happened in the TEST database):

1. Set your ORACLE_SID to the TEST database.

C:\>set ORACLE_SID=TEST

2. Import the table from previous backup.


C:\>imp system/manager tables=(SCOTT.SALES) file=export.dmp
log=import.log ignore=Y

This command imports the SALES table from the previous backup. After the import, check the import
log file for any errors.

Backup and Recovery Tools

Recovery Manager (RMAN)


RMAN is an Oracle-provided tool that allows you to perform backup and recovery operations on the
database. Using RMAN, you can back up and restore data files, control files, and archived redo log files.

RMAN uses a recovery catalog to store metadata about backup and recovery operations. Typically, the
recovery catalog is stored in a separate database. If you do not want to use a recovery catalog, RMAN
can use the target database's control file to perform backup and recovery operations; this works
because most of the information in the recovery catalog is also available in the control file. The
disadvantage of relying on the control file alone is that RMAN cannot restore or recover the database
if that control file is lost, so you should make frequent backups of the control file. Using the
control file is especially appropriate for small databases, where installing and administering another
database for the sole purpose of maintaining the recovery catalog would be burdensome.

A single recovery catalog is able to store information for multiple target databases. Consequently, loss of
the recovery catalog can be disastrous. You should back up the recovery catalog frequently. If the
recovery catalog is destroyed and no backups of it are available, then you can partially reconstruct the
catalog from the current control file or control file backups.
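
One hedged way to back up the recovery catalog is a schema-level export of the catalog owner; the
schema name rman and the connect string rcat below are assumptions for illustration:

C:\>exp rman/rman@rcat owner=rman file=rcat_export.dmp log=rcat_export.log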

When you perform a backup using RMAN, information about the backup is stored in the catalog, and the
actual backups (the physical files) are stored on disk or on tape (which requires media management
software). When you use RMAN with a recovery catalog, the RMAN environment comprises the following
components; a minimal backup session is sketched after the list:

• RMAN executable
• Recovery catalog database (Database to hold the catalog)
• Recovery catalog schema in the recovery catalog database (Schema to hold the metadata
information)
• Optional Media Management Software (for tape backups)
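
The following is a hedged sketch rather than a definitive script; the connect strings, the catalog
schema (rman/rman@rcat), and the backup format strings are assumptions for illustration. It backs up
the whole database and the archived logs to disk using an Oracle8i-style run block with an explicitly
allocated channel:

rman target system/manager catalog rman/rman@rcat

run {
  allocate channel c1 type disk;
  backup database format '/u04/backup/db_%d_%s_%p.bkp';
  backup archivelog all format '/u04/backup/arch_%d_%s_%p.bkp';
  release channel c1;
}

Because each backup is recorded in the catalog, a later restore and recover can locate the backup
pieces automatically.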

Sample Files

Sample oratab File


Listing 3.17 shows a sample oratab file, which is created by the Oracle installer when you install the
Oracle database on a Unix operating system. The installer adds the instance name, Oracle home
directory, and auto-startup flag (Y/N) for each database in the format SID:ORACLE_HOME:FLAG. The
auto-startup flag indicates whether the Oracle database should be started automatically when the
system is rebooted.

Listing 3.17 oratab

# All the entries in oratab file follow the
# following syntax. Each instance listed on a separate line
# SID:ORACLE_HOME:Y/N

DEV:/u02/oracle/DEV/oracle/8.1.7:N
TEST:/u05/oracle/TEST/oracle/8.1.7:N
#PREPROD:/u06/oracle/PREPROD/oracle/8.1.7:N

Sample Trace of Control File


Listing 3.18 shows a trace of the control file, which contains the structure of the database. It lists
the data files, control files, and redo log files and their locations. This is useful if you need to
recreate the control file. A trace of the control file can be generated with the alter database backup
controlfile to trace command, as shown below.
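
For example (the exact trace file name varies; the file is written to the directory specified by the
user_dump_dest initialization parameter, as in the path at the top of Listing 3.18):

SQL> alter database backup controlfile to trace;
SQL> show parameter user_dump_dest
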
Listing 3.18 trace of control file

/u02/oracle/DEV/common/admin/udump/DEV_ora_11817.trc
Oracle8i Enterprise Edition Release 8.1.7.1.0 - Production
With the Partitioning option
JServer Release 8.1.7.1.0 - Production
ORACLE_HOME = /u02/oracle/DEV/oracle/8.1.7
System name: SunOS
Node name: mking07
Release: 5.6
Version: Generic_105181-25
Machine: sun4u
Instance name: DEV
Redo thread mounted by this instance: 1
Oracle process number: 10
Unix process pid: 11817, image: oracle@mking07 (TNS V1-V3)

*** SESSION ID:(9.13) 2001-05-17 21:15:28.730


*** 2001-05-17 21:15:28.730
# The following commands will create a new control file and use it
# to open the database.
# Data used by the recovery manager will be lost. Additional logs may
# be required for media recovery of offline data files. Use this
# only if the current version of all online logs are available.
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "DEV" NORESETLOGS NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 4
MAXDATAFILES 1022
MAXINSTANCES 1
MAXLOGHISTORY 453
LOGFILE
GROUP 1 (
'/u03/oracle/DEV/data/log01a.dbf',
'/u03/oracle/DEV/data/log01b.dbf'
) SIZE 400M,
GROUP 2 (
'/u03/oracle/DEV/data/log02a.dbf',
'/u03/oracle/DEV/data/log02b.dbf'
) SIZE 400M,
DATAFILE
'/u02/oracle/DEV/data/system01.dbf',
'/u02/oracle/DEV/data/indx01.dbf',
'/u02/oracle/DEV/data/rbs01.dbf',
'/u02/oracle/DEV/data/temp01.dbf',
'/u02/oracle/DEV/data/users.dbf',
CHARACTER SET WE8ISO8859P1
;
# Recovery is required if any of the datafiles are restored backups,
# or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# Database can now be opened normally.
ALTER DATABASE OPEN;
# Commands to add tempfiles to temporary tablespaces.
# Online tempfiles have complete space information.
# Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE '/u03/oracle/DEV/data/temp04.dbf'
REUSE;
ALTER TABLESPACE TEMP ADD TEMPFILE '/u03/oracle/DEV/data/temp03.dbf'
REUSE;
# End of tempfile additions.
