
Oracle DBA Job Interview Questions

Q.1. Tell me a real-time recovery scenario that you have done. Ans: Follow this URL:
http://gavinsoorma.com/recovery-scenarios/

Q.2.(i): Can we apply archive logs to a cold backup?

Ans: This is not a complicated question, but it is a tricky one.

a. SQL> shutdown immediate;
b. SQL> startup mount;
c. SQL> restore database;      /* applies the base backup (cold backup) */
d. SQL> recover database;      /* applies all the changes made in the database, using the archived logs */
e. SQL> alter database open;

After that, take a complete cold backup of the database; it now reflects the previous cold backup plus the archived logs.

Q.2.(ii): My database operates in ARCHIVELOG mode. I took a cold backup (datafiles, control files, and redo log files) on Monday night. On Tuesday my system had a media failure and I lost all files, but all the archive log files generated since the last cold backup survived. How can I recover the database up to the time of failure from the cold backup and the archive log files? How do I apply archive log files to a cold backup?

Ans:
C:\> sqlplus /nolog
SQL> conn sys as sysdba
SQL> startup mount
SQL> recover database using backup controlfile until cancel;
     (apply your logs, then enter CANCEL)
SQL> alter database open resetlogs;
SQL> shutdown immediate;
SQL> startup mount

Then take a user-managed cold backup of the database, containing all the datafiles.

Q3. While exporting a table, which is faster: expdp or exp? Why? How is expdp faster than exp, and what did Oracle change internally to speed up Data Pump?

Ans: Data Pump works in block mode while exp works in byte mode, and block mode is always faster than byte mode.

Oracle Export (exp) vs. Data Pump (expdp)

Oracle provides two external utilities to transfer database objects from one database to another. The traditional export utilities (exp/imp) were introduced before 10g; from 10g onward, Oracle introduced Data Pump (expdp/impdp) as an enhancement to the traditional export utility.

Traditional Export (exp/imp)

This is an Oracle database external utility used to transfer database objects from one database server to another. It allows transferring database objects across different platforms and different hardware and software configurations. When an export command is executed against a database, database objects are extracted along with their dependent objects: if it extracts a table, dependencies such as indexes, comments, and grants are extracted as well and written into an export file (a binary-format dump file). The following is the command to export a full database:

Cmd> exp userid=username/password@exportdb_tns file=export.dmp log=export.log full=y statistics=none

The above command exports the database to a binary dump file named export.dmp. The imp utility can then be used to import this data into another database. The following is the command to import:

Cmd> imp userid=username/password@importdb_tns file=export.dmp log=import.log full=y statistics=none

Datapump Export (expdp/impdp):-

This is also an Oracle database external utility used to transfer objects between databases. It was introduced in Oracle 10g and has many enhancements over the traditional exp/imp utilities. This utility also creates dump files, which are in binary format and contain database objects, object metadata, and their control information. The expdp and impdp commands can be executed in three ways:

1. Command-line interface (specify expdp/impdp parameters on the command line)
2. Parameter-file interface (specify expdp/impdp parameters in a separate file)
3. Interactive-command interface (entering various commands at the export prompt)

There are five different modes of data unloading using expdp:

1. Full Export Mode (the entire database is unloaded)
2. Schema Mode (the default mode; specific schemas are unloaded)
3. Table Mode (a specified set of tables and their dependent objects are unloaded)
4. Tablespace Mode (the tables in the specified tablespaces are unloaded)
5. Transportable Tablespace Mode (only the metadata for the tables and their dependent objects within a specified set of tablespaces is unloaded)
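As an illustrative sketch of table mode (the username, password, tables, and directory object below are hypothetical, not taken from this document), a table-mode export could look like:

```
C:\> expdp scott/tiger directory=dpump_dir dumpfile=emp_dept.dmp logfile=emp_dept.log tables=emp,dept
```

This unloads only the EMP and DEPT tables and their dependent objects, rather than a whole schema or database.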

The following is the way to export a full database using expdp:

C:\> expdp userid=username/password dumpfile=expdp_export.dmp logfile=expdp_export.log full=y directory=export

The impdp utility should then be used to import this file into another database.

What is the difference between traditional Export and Data Pump?

Data Pump operates on a group of files called a dump file set, whereas normal export operates on a single file. Data Pump accesses files on the server (using Oracle directory objects); traditional export can access files on both the client and the server (without using Oracle directory objects). Exports (exp/imp) represent database metadata as DDL in the dump file, while Data Pump represents it in XML document format. Data Pump supports parallel execution; exp/imp use single-stream execution. Data Pump does not support sequential media such as tapes, but traditional export does. Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import.

Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously. Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files; the directory objects enforce a security model that DBAs can use to control access to these files. Data Pump has a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations. Data Pump also allows you to disconnect from and reconnect to a session: because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress. Finally, Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk.
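The network transfer mentioned above can be sketched as follows. This is only a hedged illustration: the database-link name (source_db), TNS alias, credentials, and schema are assumptions, and the link must already exist in the importing database.

```
-- On the importing (target) database: create a link to the source (hypothetical names)
SQL> create database link source_db connect to system identified by manager using 'SRCDB_TNS';

-- Import directly over the link; note there is no DUMPFILE parameter,
-- because no dump file is ever written to disk
$ impdp system/***** directory=DATA_PUMP_DIR network_link=source_db schemas=scott logfile=net_import.log
```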

Data Pump uses the Direct Path data access method, which permits the server to bypass SQL and go straight to the data blocks on disk; it has been rewritten to be much more efficient and now supports both Data Pump Import and Export. Original Export is deprecated as of Oracle Database 11g.

Q4. How do you calculate the free space in tablespaces?

Ans:
column "Tablespace" format a13
column "Used MB" format 99,999,999
column "Free MB" format 99,999,999
column "Total MB" format 99,999,999

select fs.tablespace_name                           "Tablespace",
       (df.totalspace - fs.freespace)               "Used MB",
       fs.freespace                                 "Free MB",
       df.totalspace                                "Total MB",
       round(100 * (fs.freespace / df.totalspace))  "Pct. Free"
from   (select tablespace_name,
               round(sum(bytes) / 1048576) TotalSpace
        from   dba_data_files
        group  by tablespace_name) df,
       (select tablespace_name,
               round(sum(bytes) / 1048576) FreeSpace
        from   dba_free_space
        group  by tablespace_name) fs
where  df.tablespace_name = fs.tablespace_name;

Q5. You have a database that is on a non-ASM file system, and you need to move all its datafiles to ASM. How will you do it?

Ans: This is an interesting one; moving datafiles from non-ASM to ASM is a lengthy task, so you need to understand the whole procedure. I am not going to cover configuring the ASM instance itself; I assume that is already done.

Step summary:

1. Using RMAN, take an image-copy backup of all datafiles into the ASM diskgroup +DG1. (Note: DG1 is an ASM diskgroup.)
RMAN> backup as copy database format '+dg1';

2. Switch the database file paths from their existing locations to the new ASM diskgroup path. This switches your database datafiles from the operating-system file system to the copies already created in the ASM diskgroup.
RMAN> switch database to copy;

Converting a Non-ASM Database to an ASM Database:

1. Create the ASM instance and start it. Add the diskgroup, say DG1. DG1 is used for the datafiles, controlfile, redo log files, and tempfiles.
2. Create a database ORCL in /u01/oracle/oradata/.
3. Shut down the database if it is running:
SQL> shutdown immediate
4. If your database was running on an spfile, create a pfile from the spfile:

SQL> create pfile from spfile;

5. Edit the pfile:
$ vi $ORACLE_HOME/dbs/initORCL.ora

6. Make the necessary changes:
CONTROL_FILES='+DG1'
DB_CREATE_FILE_DEST='+DG1'
DB_CREATE_ONLINE_LOG_DEST_1='+DG1'

7. Execute the RMAN utility:
$ export ORACLE_SID=ORCL
$ rman target /
Recovery Manager: Release 10.2.0.1.0 - Production on Thu Dec 9 14:01:50 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database (not started)
RMAN>

8. Start the RDBMS instance to the NOMOUNT phase using RMAN:
RMAN> startup nomount
Oracle instance started
Total System Global Area  629145600 bytes
Fixed Size                  1220964 bytes
Variable Size             171970204 bytes
Database Buffers          452984832 bytes
Redo Buffers                2969600 bytes

9. Restore the controlfile from the existing path using RMAN:
RMAN> restore controlfile from '/u01/oracle/oradata/controlfile/cntrl_01.ctl';
Starting restore at 09-DEC-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=36 devtype=DISK

channel ORA_DISK_1: copied control file copy
output filename=+DG1/orcl/controlfile/backup.256.737301987
Finished restore at 09-DEC-10

This creates a copy of the controlfile in the +DG1 diskgroup. You can verify the creation with asmcmd:

$ export ORACLE_SID=+ASM
$ asmcmd -p
ASMCMD [+] > ls
DG1
ASMCMD [+] > cd DG1/ORCL/CONTROLFILE/
ASMCMD [+DG1/ORCL/CONTROLFILE] > ls -l
Type         Redund  Striped  Time             Sys  Name
CONTROLFILE  HIGH    FINE     DEC 09 14:00:00  Y    Backup.261.737302375

10. Bring the database to the MOUNT stage using RMAN:
RMAN> alter database mount;
database mounted
released channel: ORA_DISK_1

The database is brought to the mount state using the controlfile that was created in the ASM diskgroup; you can examine this with:
$ export ORACLE_SID=ORCL
$ sqlplus / as sysdba
SQL> select name from v$controlfile;

NAME
--------------------------------------------
+DG1/orcl/controlfile/backup.261.737302375

11. Back up the database as a copy to the diskgroup +DG1 using RMAN:
RMAN> backup as copy database format '+dg1';
Starting backup at 09-DEC-10
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=32 devtype=DISK

channel ORA_DISK_1: starting datafile copy
input datafile fno=00003 name=/u01/oracle/oradata/datafile/SYSAUX01.dbf
output filename=+DG1/orcl/datafile/sysaux.257.737302111 tag=TAG20101209T140830 recid=1 stamp=737302214
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:46
channel ORA_DISK_1: starting datafile copy
input datafile fno=00001 name=/u01/oracle/oradata/datafile/SYSTEM01.dbf
output filename=+DG1/orcl/datafile/system.258.737302219 tag=TAG20101209T140830 recid=2 stamp=737302318
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:45
channel ORA_DISK_1: starting datafile copy
input datafile fno=00002 name=/u01/oracle/oradata/datafile/UNDOTBS01.dbf
output filename=+DG1/orcl/datafile/undotbs.259.737302325 tag=TAG20101209T140830 recid=3 stamp=737302340
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile fno=00004 name=/u01/oracle/oradata/datafile/DEF_PERM01.dbf
output filename=+DG1/orcl/datafile/def_perm.260.737302349 tag=TAG20101209T140830 recid=4 stamp=737302365
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
copying current control file
output filename=+DG1/orcl/controlfile/backup.261.737302375 tag=TAG20101209T140830 recid=5 stamp=737302378
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
Finished backup at 09-DEC-10

Now RMAN has created the copy of the database.

12. Switch the database file paths from their existing locations to the new ASM diskgroup path:
RMAN> switch database to copy;
datafile 1 switched to datafile copy +DG1/orcl/datafile/system.258.737302219
datafile 2 switched to datafile copy +DG1/orcl/datafile/undotbs.259.737302325

datafile 3 switched to datafile copy +DG1/orcl/datafile/sysaux.257.737302111
datafile 4 switched to datafile copy +DG1/orcl/datafile/def_perm.260.737302349

13. Perform incomplete recovery and open the database with the RESETLOGS option:
SQL> recover database using backup controlfile until cancel;
ORA-00279: change 7937583 generated at 12/09/2010 20:33:55 needed for thread 1
ORA-00289: suggestion : +DG1
ORA-00280: change 7937583 for thread 1 is in sequence #36
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
CANCEL
Media recovery cancelled.

Then bring the database to the open phase with the RESETLOGS option:
RMAN> alter database open resetlogs;
database opened

You can verify the datafile paths for the tablespaces using views such as dba_data_files or v$datafile:
SQL> select tablespace_name, file_name from dba_data_files;

TABLESPACE_NAME   FILE_NAME
----------------  -----------------------------------------
SYSTEM            +DG1/orcl/datafile/system.258.737302219
UNDOTBS           +DG1/orcl/datafile/undotbs.259.737302325
SYSAUX            +DG1/orcl/datafile/sysaux.257.737302111
DEF_PERM          +DG1/orcl/datafile/def_perm.260.737302349

We had one more tablespace, DEF_TEMP, the default temporary tablespace, made up of tempfiles. RMAN copies the structure of temporary tablespaces but not the tempfiles themselves; we will see how to restore the tempfile in step 15.

14. If you want to delete the old copies of the datafiles, you can delete them using RMAN:
RMAN> delete copy of database;
released channel: ORA_DISK_1
allocated channel: ORA_DISK_1

channel ORA_DISK_1: sid=270 devtype=DISK

List of Datafile Copies
Key  File  S  Completion Time  Ckp SCN  Ckp Time   Name
---  ----  -  ---------------  -------  ---------  ----------------------------------
8    1     A  09-DEC-10        392632   09-DEC-10  /u01/oracle/oradata/system01.dbf
9    2     A  09-DEC-10        392632   09-DEC-10  /u01/oracle/oradata/undotbs01.dbf
10   3     A  09-DEC-10        392632   09-DEC-10  /u01/oracle/oradata/sysaux01.dbf
11   4     A  09-DEC-10        392632   09-DEC-10  /u01/oracle/oradata/def_perm01.dbf

Do you really want to delete the above objects (enter YES or NO)? YES
deleted datafile copy
datafile copy filename=/u01/oracle/oradata/system01.dbf recid=8 stamp=541172332
deleted datafile copy
datafile copy filename=/u01/oracle/oradata/undotbs01.dbf recid=9 stamp=541172332
deleted datafile copy
datafile copy filename=/u01/oracle/oradata/sysaux01.dbf recid=10 stamp=541172332
deleted datafile copy
datafile copy filename=/u01/oracle/oradata/def_perm01.dbf recid=11 stamp=541172333
Deleted 4 objects

15. Add the tempfiles to the DEF_TEMP tablespace, since RMAN does not physically back up temporary files:
SQL> alter tablespace def_temp add tempfile;
Tablespace altered.

The temporary file is created in the ASM diskgroup because we set the DB_CREATE_FILE_DEST parameter. You can examine it using the v$tempfile view, or check the creation of the tempfile in the ASM diskgroup:
ASMCMD [+] > cd dg1/orcl/tempfile
ASMCMD [+DG1/ORCL/TEMPFILE] > ls

DEF_TEMP.265.737391815

16. We have only moved the datafiles and tempfiles so far. To change the non-ASM online redo log files to ASM redo logs, add logfile groups:
SQL> alter database add logfile;
Database altered.

Here the logfile group is created in the ASM diskgroup with a size of 100m; if you want a different size, use the SIZE keyword.

17. Add some more logfile groups:
SQL> alter database add logfile;
Database altered.
SQL> alter database add logfile;
Database altered.

To examine the paths of the redo log files:
SQL> select group#, member from v$logfile;

GROUP#  MEMBER
------  ---------------------------------------------
1       /u01/oracle/oradata/redologfile/log_01_01.log
2       /u01/oracle/oradata/redologfile/log_02_01.log
3       +DG1/orcl/onlinelog/group_3.262.737304349
4       +DG1/orcl/onlinelog/group_4.263.737304363
5       +DG1/orcl/onlinelog/group_5.264.737304371

18. Identify the status of the logfile groups:
SQL> select group#, status from v$log;

GROUP#  STATUS
------  --------
1       ACTIVE
2       CURRENT
3       UNUSED
4       UNUSED
5       UNUSED

19. Perform manual log switches so that groups 1 and 2 become INACTIVE:
SQL> alter system switch logfile;
System altered.

20. Remove the groups that are placed on the OS file system once their status turns INACTIVE, i.e. groups 1 and 2:
SQL> alter database drop logfile group 1;
Database altered.
SQL> alter database drop logfile group 2;
Database altered.

21. Place the parameter file in the ASM diskgroup:
SQL> create spfile='+DG1' from pfile;

You can verify the creation with:
ASMCMD [+] > cd dg1/orcl/parameterfile
ASMCMD [+dg1/orcl/parameterfile] > ls
spfile.267.737406545

22. Edit the pfile and make the changes:
$ vi $ORACLE_HOME/dbs/initORCL.ora
Comment out (#) all the parameters and enter:
spfile=+DG1/ORCL/PARAMETERFILE/spfile.267.737406545

23. Save and quit.

24. Restart the database; it will now start using the spfile from the ASM diskgroup. Your database is now completely placed in the ASM diskgroups.

Startup Restrict Mode:-

Sometimes it is necessary to do work on a database without any other users being logged in. In such a case it is possible to restrict database sessions. When the database starts in restricted mode, only users with the RESTRICTED SESSION privilege can get access to the database, even though it is technically in open mode.

Enable / Disable Restricted Session

Start the database in restricted mode:
SQL> startup restrict
ORACLE instance started.
Total System Global Area  504366872 bytes
Fixed Size                   743192 bytes
Variable Size             285212672 bytes
Database Buffers          218103808 bytes
Redo Buffers                 307200 bytes
Database mounted.
Database opened.

The ALTER SYSTEM command can be used to put the database in and out of restricted session once it is open:
SQL> alter system enable restricted session;
System altered.
SQL> alter system disable restricted session;
System altered.

NOTE: Find and disconnect users connected during a restricted session. Any users connected to the database when restricted session is enabled will remain connected and need to be disconnected manually. To check which users are connected to the database, run the following:

SQL> SELECT username, logon_time, process from v$session;

USERNAME   LOGON_TIM  PROCESS
---------  ---------  --------
SYS        17-NOV-10  1310796
           17-NOV-10  1343899

Using the process id, you can then issue a kill -9 <process_id> at the operating-system level to disconnect the connected user. The blank usernames in v$session refer to background database processes.

Check whether the database is in restricted mode. If you are unsure whether the database is in restricted session or not, run the following query:
SQL> SELECT logins from v$instance;

LOGINS
----------
RESTRICTED

Startup Upgrade:

What happens inside the database when you start it in STARTUP UPGRADE mode? What is the difference between a normal startup and STARTUP UPGRADE? This article applies to Oracle 10g.

The answer is the same for both questions.

Basically, STARTUP UPGRADE opens the database after setting the parameters below in memory (not in the spfile):

ALTER SYSTEM enable restricted session;
ALTER SYSTEM SET _system_trig_enabled=FALSE SCOPE=MEMORY;
ALTER SYSTEM SET _undo_autotune=FALSE SCOPE=MEMORY;
ALTER SYSTEM SET undo_retention=900 SCOPE=MEMORY;
ALTER SYSTEM SET aq_tm_processes=0 SCOPE=MEMORY;
ALTER SYSTEM SET resource_manager_plan='' SCOPE=MEMORY;

It is otherwise just a normal database startup, but it prepares the environment for migrating to the new version.

The alert log will show all these details:

Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 2
Autotune of undo retention is turned on.
IMODE=BR
ILAT =18
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.3.0.
---------------------------------------------
Mon Mar 25 19:54:01 2013
Database mounted in Exclusive Mode
Completed: ALTER DATABASE MOUNT
Mon Mar 25 19:54:02 2013
ALTER DATABASE OPEN MIGRATE
---------------------------------------------
Starting background process MMNL
MMNL started with pid=12, OS id=2536
Mon Mar 25 19:54:14 2013
ALTER SYSTEM enable restricted session;
Mon Mar 25 19:54:14 2013
ALTER SYSTEM SET _system_trig_enabled=FALSE SCOPE=MEMORY;
Autotune of undo retention is turned off.
Mon Mar 25 19:54:14 2013
ALTER SYSTEM SET _undo_autotune=FALSE SCOPE=MEMORY;
Mon Mar 25 19:54:14 2013
ALTER SYSTEM SET undo_retention=900 SCOPE=MEMORY;
Mon Mar 25 19:54:14 2013
ALTER SYSTEM SET aq_tm_processes=0 SCOPE=MEMORY;
Mon Mar 25 19:54:14 2013
ALTER SYSTEM SET resource_manager_plan='' SCOPE=MEMORY;
replication_dependency_tracking turned off (no async multimaster replication found)
Completed: ALTER DATABASE OPEN MIGRATE

Source: http://www.arunsankar.in/2013/03/startupupgrade.html

Changing the Name of a Database

If you ever want to change the name of a database, or to change the MAXDATAFILES, MAXLOGFILES, or MAXLOGMEMBERS settings, you have to create a new control file.

Creating a New Control File

Follow the steps given below to create a new controlfile.

Steps:
1. First, generate the CREATE CONTROLFILE statement:

SQL> alter database backup controlfile to trace;

After this statement, Oracle writes the CREATE CONTROLFILE statement to a trace file. The trace file is randomly named, something like ORA23212.TRC, and is created in the USER_DUMP_DEST directory.

2. Go to the USER_DUMP_DEST directory and open the latest trace file in a text editor. This file contains the CREATE CONTROLFILE statement. It has two sets of statements, one with RESETLOGS and another without RESETLOGS. Since we are changing the name of the database, we have to use the RESETLOGS option of the CREATE CONTROLFILE statement. Now copy and paste the statement into a file; let it be c.sql.

3. Now open the c.sql file in a text editor and change the database name from ica to prod, as shown in the example below:

CREATE CONTROLFILE SET DATABASE prod
LOGFILE GROUP 1 ('/u01/oracle/ica/redo01_01.log',
                 '/u01/oracle/ica/redo01_02.log'),
        GROUP 2 ('/u01/oracle/ica/redo02_01.log',
                 '/u01/oracle/ica/redo02_02.log'),
        GROUP 3 ('/u01/oracle/ica/redo03_01.log',
                 '/u01/oracle/ica/redo03_02.log')
RESETLOGS
DATAFILE '/u01/oracle/ica/system01.dbf' SIZE 3M,
         '/u01/oracle/ica/rbs01.dbs' SIZE 5M,
         '/u01/oracle/ica/users01.dbs' SIZE 5M,
         '/u01/oracle/ica/temp01.dbs' SIZE 5M
MAXLOGFILES 50
MAXLOGMEMBERS 3
MAXLOGHISTORY 400
MAXDATAFILES 200
MAXINSTANCES 6
ARCHIVELOG;

4. Start the instance without mounting the database:

SQL> STARTUP NOMOUNT;

5. Now execute the c.sql script:

SQL> @/u01/oracle/c.sql

6. Now open the database with RESETLOGS:

SQL> ALTER DATABASE OPEN RESETLOGS;

ORA-31655 and ORA-39154 during a Data Pump import:-

Today I was performing a SYSFM schema import as the SACORP user on my test server:

$ cat /etc/redhat-release Red Hat Enterprise Linux Server release 5.4 (Tikanga)

I already had the dump files from the prod server and was ready to do the import on the test server with my parfile.

$ vi sysfm_impdp.par
DIRECTORY=DPUMP_OMEGA_DIR1
DUMPFILE=DPUMP_OMEGA_DIR1:SYSFM_%U.dmp
LOGFILE=LOGFILE_OMEGA_dir1:sysfm_impdp.log
PARALLEL=10
SCHEMAS=SYSFM
JOB_NAME=sysfm_Import

$ impdp sacorp/***** parfile=sysfm_impdp.par

Import: Release 11.2.0.2.0 - Production on Thu Feb 16 15:45:25 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters and Automatic Storage Management options
ORA-31655: no data or metadata objects selected for job
ORA-39154: Objects from foreign schemas have been removed from import
Master table "SACORP"."SYSFM_IMPORT" successfully loaded/unloaded
Starting "SACORP"."SYSFM_IMPORT": sacorp/******** parfile=sysfm_impdp.par
Job "SACORP"."SYSFM_IMPORT" successfully completed at 15:47:11

I checked the database and found that no schema had been imported. After struggling for some time, I found the cause of the errors above: the user had no privileges to operate on another schema. So I granted IMP_FULL_DATABASE to the SACORP user, from which I was performing the Data Pump schema import.

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

SQL> grant IMP_FULL_DATABASE to SACORP;

Grant succeeded.

Then I started the import again, and this time it ran successfully. Check dba_datapump_jobs to confirm the import job is running:

SQL> select OWNER_NAME,JOB_NAME,STATE from dba_datapump_jobs;

OWNER_NAME            JOB_NAME              STATE
--------------------  --------------------  --------------------
SACORP                SYSFM_IMPORT          EXECUTING

Q4:- Differences between STATSPACK and AWR

1. AWR is the next evolution of the Statspack utility.
2. AWR is a new feature in 10g, but Statspack can still be used in 10g.
3. The AWR repository holds all the statistics available in Statspack, as well as some additional statistics that are not (10g new features).

4. Statspack does not store the ASH statistics that are available in the AWR dba_hist_active_sess_history view.
5. An important difference between the two is that Statspack doesn't store history for the new metric statistics introduced in Oracle 10g. The key AWR views here are dba_hist_sysmetric_history and dba_hist_sysmetric_summary.
6. AWR contains views such as dba_hist_service_stat, dba_hist_service_wait_class and dba_hist_service_name.
7. The latest version of Statspack included with 10g contains specific tables that track the history of statistics reflecting the performance of the Oracle Streams feature: stats$streams_capture, stats$streams_apply_sum, stats$buffered_subscribers, and stats$rule_set.
8. AWR does not contain specific tables that reflect Oracle Streams activity. Therefore, a DBA who relies on Oracle Streams may find it useful to monitor its performance with the Statspack utility.
9. AWR snapshots are scheduled every 60 minutes by default.
10. Statspack snapshot purges must be scheduled manually, but AWR snapshots are purged automatically by MMON.

Data Recovery Advisor (DRA)

Oracle 11g comes with one cool new feature for database backup and recovery called the Data Recovery Advisor (DRA), which helps us recover the database without much trouble using a few RMAN commands.

What is the Data Recovery Advisor? DRA is an Oracle database tool that automatically diagnoses data failures, determines and presents appropriate repair options, and executes repairs at the user's request.

The following RMAN commands are used to drive the Data Recovery Advisor:

1. List Failure
2. Advise Failure
3. Repair Failure
4. Change Failure

1. List Failure: the LIST FAILURE command gives us information about failures and their effect on database operations. Each failure is uniquely identified by a failure number.

2. Advise Failure: the ADVISE FAILURE command tells us how to deal with a failure; that is, it suggests a solution for the particular failure.

3. Repair Failure: the REPAIR FAILURE command runs RMAN-generated scripts that restore and recover the database from backup.

4. Change Failure: the CHANGE FAILURE command changes a failure's status or priority. A failure's status is OPEN or CLOSED, and its priority is HIGH or LOW; if RMAN shows a failure as HIGH and we want to change it to LOW, we can do so with CHANGE FAILURE.

An example scenario:
1. Suppose I lost my SYSTEM datafile.
2. I am not worried, because I have a backup of this 11g database.
3. I just need to connect to RMAN; after connecting, I do the following.

RMAN> list failure;

using target database control file instead of recovery catalog

List of Database Failures
=========================
Failure ID  Priority  Status  Time Detected  Summary
----------  --------  ------  -------------  -------
602         CRITICAL  OPEN    26-JUL-08      System datafile 1: 'C:\APP\M.TAJ\ORADATA\TEST\SYSTEM01.DBF' is missing

After getting the failure description, we can get advice from Oracle about the failure through the ADVISE FAILURE command.

RMAN> advise failure;

List of Database Failures
=========================
Failure ID  Priority  Status  Time Detected  Summary
----------  --------  ------  -------------  -------
602         CRITICAL  OPEN    26-JUL-08      System datafile 1: 'C:\APP\M.TAJ\ORADATA\TEST\SYSTEM01.DBF' is missing

analyzing automatic repair options; this may take some time
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=152 device type=DISK
analyzing automatic repair options complete

Mandatory Manual Actions
========================
no manual actions available

Optional Manual Actions
=======================
1. If file C:\APP\M.TAJ\ORADATA\TEST\SYSTEM01.DBF was unintentionally renamed or moved, restore it

Automated Repair Options
========================
Option  Repair Description
------  ------------------
1       Restore and recover datafile 1

Strategy: The repair includes complete media recovery with no data loss
Repair script: c:\app\m.taj\diag\rdbms\test\test\hm\reco_2508517227.hm

The above is RMAN's advice on this particular failure. If the suggested repair option fixes the current problem, fine; otherwise we need to call Oracle Support Services. Now check Oracle's suggested repair options or scripts.

RMAN> repair failure preview;

Strategy: The repair includes complete media recovery with no data loss
Repair script: c:\app\m.taj\diag\rdbms\test\test\hm\reco_2508517227.hm
contents of repair script:
   # restore and recover datafile
   restore datafile 1;
   recover datafile 1;

Above is the script RMAN suggests to restore and recover the database for this particular failure. If we want to execute it, we run the REPAIR FAILURE command again, this time without the PREVIEW keyword.

RMAN> repair failure;
Strategy: The repair includes complete media recovery with no data loss
Repair script: c:\app\m.taj\diag\rdbms\test\test\hm\reco_2508517227.hm
contents of repair script:
   # restore and recover datafile
   restore datafile 1;
   recover datafile 1;
Do you really want to execute the above repair (enter YES or NO)? YES
executing repair script
Starting restore at 26-JUL-08
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to C:\APP\M.TAJ\ORADATA\TEST\SYSTEM01.DBF
channel ORA_DISK_1: reading from backup piece C:\APP\M.TAJ\PRODUCT\11.1.0\DB_1\DATABASE\05JMEU48_1_1
channel ORA_DISK_1: piece handle=C:\APP\M.TAJ\PRODUCT\11.1.0\DB_1\DATABASE\05JMEU48_1_1 tag=TAG20080726T124808
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:05:25
Finished restore at 26-JUL-08
Starting recover at 26-JUL-08
using channel ORA_DISK_1
starting media recovery
media recovery complete, elapsed time: 00:00:03
Finished recover at 26-JUL-08

Note: if we lose tempfiles, in 10gR1 we need to re-create the temporary tablespace manually, but in 11g Oracle does it automatically.

Finding current SCN of a database:-

There are two ways to get the current SCN (System Change Number) for an Oracle 10g or 11g database: DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER and v$database.

Method 1: Using the DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER function
SQL> SELECT TO_CHAR(dbms_flashback.get_system_change_number) FROM dual;

GET_SYSTEM_CHANGE_NUMBER
----------------------------------------
11626841778005

Method 2: Using the current_scn column from v$database
SQL> SELECT TO_CHAR(CURRENT_SCN) FROM V$DATABASE;

CURRENT_SCN
--------------------
11626841778008

RAC Questions: SINGLE CLIENT ACCESS NAME (SCAN)

Single Client Access Name (SCAN) is a new Oracle Real Application Clusters (RAC) 11g Release 2 feature that provides a single name for clients to access Oracle databases running in a cluster. The benefit is that the client's connection information does not need to change if you add or remove nodes in the cluster. Having a single name to access the cluster allows clients to use the EZConnect client and the simple JDBC thin URL to access any database running in the cluster, independent of which server(s) in the cluster the database is active on. SCAN provides load balancing and failover for client connections to the database, and works as a cluster alias for databases in the cluster.
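For example, with a hypothetical SCAN name sales-scan.example.com, port 1521, and a service named oltp (all assumptions, not from the original), clients could connect like this:

```
-- EZConnect from SQL*Plus
sqlplus user/password@sales-scan.example.com:1521/oltp

-- Equivalent JDBC thin URL
jdbc:oracle:thin:@//sales-scan.example.com:1521/oltp
```

Either way, the SCAN resolves to the cluster rather than to any single node, so the connect string survives adding or removing nodes.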

NETWORK REQUIREMENTS FOR USING SCAN

The SCAN is configured during the installation of Oracle Grid Infrastructure, which is distributed with Oracle Database 11g Release 2. Oracle Grid Infrastructure is a single Oracle home that contains Oracle Clusterware and Oracle Automatic Storage Management. You must install Oracle Grid Infrastructure first in order to use Oracle RAC 11g Release 2. During the interview phase of the Oracle Grid Infrastructure installation, you will be prompted to provide a SCAN name. There are two options for defining the SCAN:

1. Define the SCAN in your corporate DNS (Domain Name Service)
2. Use the Grid Naming Service (GNS)

USING OPTION 1 - DEFINE THE SCAN IN YOUR CORPORATE DNS

If you choose option 1, you must ask your network administrator to create a single name that resolves to three IP addresses using a round-robin algorithm. Three IP addresses are recommended for load balancing and high availability, regardless of the number of servers in the cluster. The IP addresses must be on the same subnet as your public network in the cluster. The name must be 15 characters or less in length, not including the

domain, and must be resolvable without the domain suffix (for example, sales1-scan must be resolvable, as opposed to sales1-scan.example.com). The IPs must not be assigned to a network interface on the cluster, since Oracle Clusterware will take care of that.

You can check the SCAN configuration in DNS using nslookup. If your DNS is set up to provide round-robin access to the IPs resolved by the SCAN entry, run the nslookup command at least twice to see the round-robin algorithm work. The result should be that each time, nslookup returns the set of three IPs in a different order.

Note: If your DNS server does not return a set of three IPs as shown in figure 3, or does not round-robin, ask your network administrator to enable such a setup. DNS using a round-robin algorithm on its own does not ensure failover of connections; however, the Oracle client typically handles this. It is therefore recommended that the minimum version of the client used is the Oracle Database 11g Release 2 client.

USING OPTION 2 - THE GRID NAMING SERVICE (GNS)

If you choose option 2, you only need to enter the SCAN during the interview. During the cluster configuration, three IP addresses will be acquired from a DHCP service (using GNS assumes you have a DHCP service available on your public network) to create the SCAN, and name resolution for the SCAN will be provided by the GNS.

IF YOU DO NOT HAVE A DNS SERVER AVAILABLE AT INSTALLATION TIME

Oracle Universal Installer (OUI) enforces providing a SCAN resolution during the Oracle Grid Infrastructure installation, since the SCAN concept is an essential part of the creation of Oracle RAC 11g Release 2 databases in the cluster. All Oracle Database 11g Release 2 tools used to create a database (e.g. the Database Configuration Assistant (DBCA) or the Network Configuration Assistant (NetCA)) assume its presence. Hence, OUI will not let you continue with the installation until you have provided a suitable SCAN resolution.
However, in order to meet the installation requirement without setting up a DNS-based SCAN resolution, you can use a hosts-file based workaround. In this case, you use a typical hosts-file entry to resolve the SCAN to one IP address, and one IP address only. It is not possible to simulate the round-robin resolution that the DNS server provides using a local hosts file: the hosts-file lookup the OS performs will only return the first IP address that matches the name, and you cannot list three addresses in one entry (one line in the hosts-file). Thus, you will create only one SCAN IP for the cluster. (Note that you will have to change the hosts-file on all nodes in the cluster for this purpose.)

This workaround might also be used when performing an upgrade from former (pre-Oracle Database 11g Release 2) releases. However, it is strongly recommended to enable the SCAN configuration as described under option 1 or option 2 above shortly after the upgrade or the initial installation. In order to make the cluster aware of the modified SCAN configuration, delete the entry in the hosts-file and then issue "srvctl modify scan -n <scan_name>" as the root user on one node in the cluster. The scan_name provided can be the existing fully qualified name (or a new name), but should be resolvable through DNS, with three IPs associated with it, as discussed. The remaining reconfiguration is then performed automatically.

SCAN CONFIGURATION IN THE CLUSTER

During cluster configuration, several resources are created in the cluster for SCAN. For each of the three IP addresses that the SCAN resolves to, a SCAN VIP resource and a SCAN listener are created. The SCAN listener is dependent on the SCAN VIP, and the three SCAN VIPs (along with their associated listeners) are dispersed across the cluster. This means each pair of resources (SCAN VIP + listener) will be started on a different server in the cluster, assuming the cluster consists of three or more nodes. In case a 2-node cluster is used (for which three IPs are still recommended for simplification reasons), one server in the cluster will host two sets of SCAN resources under normal operations. If the node where a SCAN VIP is running fails, the SCAN VIP and its associated listener fail over to another node in the cluster. If, by means of such a failure, the number of available servers in the cluster becomes less than three, one server will again host two sets of SCAN resources.
If a node becomes available in the cluster again, the dispersion mentioned above will take effect and relocate one set accordingly.

DATABASE CONFIGURATION USING SCAN

For Oracle Database 11g Release 2, SCAN is an essential part of the configuration, and therefore the REMOTE_LISTENER parameter is set to the SCAN by default, assuming that the database is created using standard Oracle tools (e.g. the previously mentioned DBCA). This allows the instances to register with the SCAN listeners as remote listeners, providing information on what services are being provided by the instance, the current load, and a recommendation on how many incoming connections should be directed to the instance. In this context, the LOCAL_LISTENER parameter must be considered. The LOCAL_LISTENER parameter should be set to the node-VIP. If

you need fully qualified domain names, ensure that LOCAL_LISTENER is set to the fully qualified domain name (e.g. node-VIP.example.com). By default, a node listener is created on each node in the cluster during cluster configuration. With Oracle Grid Infrastructure 11g Release 2, the node listeners run out of the Oracle Grid Infrastructure home and listen on the node-VIP using the specified port (the default port is 1521). Unlike in former database versions, it is not recommended to set your REMOTE_LISTENER parameter to a server-side TNSNAMES alias that resolves the host to the SCAN (HOST=sales1-scan, for example) in the address list entry; instead, use the simplified SCAN:port syntax as shown in figure 5.

[oracle@mynode] srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521

[oracle@mynode] srvctl config scan
SCAN name: sales1-scan, Network: 1/133.22.67.0/255.255.255.0/
SCAN VIP name: scan1, IP: /sales1-scan.example.com/133.22.67.192
SCAN VIP name: scan2, IP: /sales1-scan.example.com/133.22.67.193
SCAN VIP name: scan3, IP: /sales1-scan.example.com/133.22.67.194

NAME             TYPE    VALUE
---------------- ------- ------------------------------------------------------
local_listener   string  (DESCRIPTION=(ADDRESS_LIST=
                         (ADDRESS=(PROTOCOL=TCP)(HOST=133.22.67.111)(PORT=1521))))
remote_listener  string  sales1-scan.example.com:1521

Note: if you are using the easy connect naming method, you may need to modify your SQLNET.ORA to ensure that EZCONNECT is in the list when specifying the order of the naming methods used for client name resolution lookups (the Oracle 11g Release 2 default is NAMES.DIRECTORY_PATH=(tnsnames, ldap, ezconnect)).

HOW CONNECTION LOAD BALANCING WORKS USING SCAN

For clients connecting using Oracle SQL*Net 11g Release 2, three IP addresses will be received by the client by resolving the SCAN name through DNS, as discussed.
The client will then go through the list it receives from the DNS and

try connecting through one of the IPs received. If the client receives an error, it will try the other addresses before returning an error to the user or application. This is similar to how client connection failover works in previous releases when an address list is provided in the client connection string.

When a SCAN listener receives a connection request, it checks for the least-loaded instance providing the requested service. It then redirects the connection request to the local listener on the node where the least-loaded instance is running. Subsequently, the client is given the address of the local listener, and the local listener finally creates the connection to the database instance.

Note: This example assumes an Oracle 11g Release 2 client using a default TNSNAMES.ORA:

ORCLservice =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = sales1-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = MyORCLservice)
    ))

VERSION AND BACKWARD COMPATIBILITY

The successful use of SCAN to connect to an Oracle RAC database in the cluster depends on the ability of the client to understand and use the SCAN, as well as on the correct configuration of the REMOTE_LISTENER parameter in the database. If the version of the Oracle client connecting to the database and the Oracle Database version used are both Oracle Database 11g Release 2, and the default configuration is used as described in this paper, no changes to the system are typically required. The same holds true if the Oracle client version and the version of the Oracle Database that this client is connecting to are both pre-11g Release 2 versions (e.g. Oracle Database 11g Release 1 or Oracle Database 10g Release 2, or older). In this case, the pre-11g Release 2 client would use a TNS connect descriptor that resolves to the node-VIPs of the cluster, while the pre-11g Release 2 database would still use a REMOTE_LISTENER entry pointing to the node-VIPs. The disadvantage of this configuration is that SCAN would not be used, and hence the clients are still exposed to changes every time the

cluster changes in the backend. The same applies if an Oracle Database 11g Release 2 is used but the clients remain on a former version. The solution is to change the Oracle client and/or the Oracle Database REMOTE_LISTENER settings accordingly.

Note: If using a pre-11g Release 2 client (Oracle Database 11g Release 1 or Oracle Database 10g Release 2, or older), you will not fully benefit from the advantages of SCAN. Reason: the Oracle client will not be able to handle a set of three IPs returned by the DNS for the SCAN. It will try to connect to only the first address returned in the list and will more or less ignore the others. If the SCAN listener listening on this specific IP is not available, or the IP itself is not available, the connection will fail. In order to ensure load balancing and connection failover with pre-11g Release 2 clients, you will need to change the TNSNAMES.ora of the client so that it uses three address lines, where each address line resolves to one of the SCAN VIPs.

Sample TNSNAMES.ora for pre-11g Release 2 Oracle clients:

sales.example.com =
  (DESCRIPTION=
    (ADDRESS_LIST=
      (LOAD_BALANCE=on)(FAILOVER=ON)
      (ADDRESS=(PROTOCOL=tcp)(HOST=133.22.67.192)(PORT=1521))
      (ADDRESS=(PROTOCOL=tcp)(HOST=133.22.67.193)(PORT=1521))
      (ADDRESS=(PROTOCOL=tcp)(HOST=133.22.67.194)(PORT=1521)))
    (CONNECT_DATA=(SERVICE_NAME=salesservice.example.com)))

USING SCAN IN A MAXIMUM AVAILABILITY ARCHITECTURE ENVIRONMENT

If you have implemented a Maximum Availability Architecture (MAA) environment, in which you use Oracle RAC for both your primary and standby databases (at both your primary and standby sites), synchronized using Oracle Data Guard, using SCAN provides a simplified TNSNAMES configuration that a client can use to connect to the database independent of whether the primary or the standby database is the currently active (primary) database.
In order to use this simplified configuration, Oracle Database 11g Release 2 introduces two new SQL*Net parameters that can be used in the connection strings of individual clients. The first parameter is CONNECT_TIMEOUT. It specifies the timeout duration (in seconds) for a client to establish an Oracle Net connection to an Oracle database. This parameter overrides SQLNET.OUTBOUND_CONNECT_TIMEOUT in the SQLNET.ORA. The second parameter is RETRY_COUNT, which specifies the number of times an ADDRESS_LIST is traversed before the connection attempt is terminated. Using these two parameters, both the SCAN on the primary site

and the SCAN on the standby site can be used in the client connection strings. Even if the randomly selected address points to the site that is not currently active, the timeout allows the connection request to fail over before the client has waited unreasonably long (the default timeout, depending on the operating system, can be as long as 10 minutes).

sales.example.com =
  (DESCRIPTION=
    (CONNECT_TIMEOUT=10)(RETRY_COUNT=3)
    (ADDRESS_LIST=
      (LOAD_BALANCE=on)(FAILOVER=ON)
      (ADDRESS=(PROTOCOL=tcp)(HOST=sales1-scan)(PORT=1521))
      (ADDRESS=(PROTOCOL=tcp)(HOST=sales2-scan)(PORT=1521)))
    (CONNECT_DATA=(SERVICE_NAME=salesservice.example.com)))

USING SCAN WITH ORACLE CONNECTION MANAGER

If you use Oracle Connection Manager (CMAN) with your Oracle RAC database, the REMOTE_LISTENER parameter for the Oracle RAC instances should include the CMAN server, so that the CMAN server receives load-balancing-related information and can therefore load balance connections across the available instances. The easiest way to achieve this is to add the CMAN server as an entry to the REMOTE_LISTENER of the databases that clients want to connect to via CMAN, as shown in figure 10. Note also that you will have to remove the SCAN from the TNSNAMES connect descriptor of the clients, and further configuration will be required for the CMAN server. See the CMAN documentation for more details.

SQL> show parameters listener

NAME              TYPE    VALUE
----------------- ------- ------------------------------------------------------
listener_networks string
local_listener    string  (DESCRIPTION=(ADDRESS_LIST=
                          (ADDRESS=(PROTOCOL=TCP)(HOST=148.87.58.109)(PORT=1521))))
remote_listener   string  stscan3.oracle.com:1521,(DESCRIPTION=
                          (ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)
                          (HOST=CMANserver)(PORT=1521))))

Virtual IP in RAC

How is a new connection established in Oracle RAC?

For a failover configuration, we need to configure the physical IP of the host name in the listener configuration. The listener process accepts new connection requests and hands the user process over to a server process or dispatcher process.

In other words, new connections are established through the listener. Once a connection is established, the listener process is no longer needed. If a new connection tries to get a session in the database while the listener is down, the user process gets an error message and the connection fails. But in an Oracle RAC environment the database is shared by all connected nodes, which means more than one listener is running on the various nodes.

In an Oracle RAC database, if a user process tries to connect through a listener and finds that the listener or its node is down, Oracle RAC automatically transfers the request to another listener on another node. Up to Oracle 9i, we used the physical IP address in the listener configuration: if the requested connection failed, it was diverted to a surviving node using that node's physical IP address. But during this automatic transfer, the connection has to wait for the TCP/IP connection timeout before it receives the "node down" or "listener down" error. Only once the error message is received does Oracle RAC divert the new connection request to a surviving node.

With physical IP addresses, waiting for the TCP/IP timeout is the biggest gap in failover detection: the session must sit out that timeout, and the high availability of Oracle RAC suffers from this time-wasting wait for the error message.

Why is a VIP (Virtual IP) needed in Oracle RAC?

From Oracle 10g, a virtual IP is used in the listener configuration. Virtual IPs avoid the TCP/IP timeout problem because Oracle Notification Service (ONS) maintains communication between the nodes and listeners. Once ONS finds that a listener or node is down, it notifies the other nodes and listeners of the situation. When a new connection tries to reach the failed node or listener, the virtual IP of the failed node is automatically diverted to a surviving node, and the session is established on that surviving node. This process does not wait for a TCP/IP timeout event, so new connections get a faster session establishment on the surviving nodes/listeners.

Characteristics of Virtual IP in Oracle RAC:

A virtual IP (VIP) enables fast connection establishment when a failover is detected. We can still use physical IP addresses in Oracle 10g listeners if failover timing is not a concern, and we can change the default TCP/IP timeout using operating-system utilities or commands and keep it small. But taking advantage of VIPs in an Oracle 10g RAC database is advisable. There is also a utility provided to configure virtual IPs in a RAC environment, called VIPCA. Its default path is $ORA_CRS_HOME/bin, and it is executed during the installation of Oracle RAC.

Advantage of Virtual IP deployment in Oracle RAC:

With a VIP configuration, the client can get a connection quickly even when a connection request must fail over to another node, because the VIP is reassigned to a surviving node automatically and the client does not wait for the old-fashioned TNS timeout.

Disadvantage of Virtual IP deployment in Oracle RAC:

Some extra configuration is needed in the system to assign virtual IP addresses to the nodes (e.g. in /etc/hosts and elsewhere), and some misunderstanding or confusion may occur because multiple IPs are assigned to the same node.

Important for VIP configuration:

The VIPs should be registered in the DNS. The VIP addresses must be on the same subnet as the public host network addresses. Each Virtual IP (VIP) configured requires an unused and resolvable IP address.
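As an illustrative sketch of those requirements (all hostnames and addresses below are invented for the example), a typical /etc/hosts layout for a two-node 10g/11g RAC cluster pairs each public address with a VIP on the same subnet:

```text
# Public network (one physical IP per node)
192.168.2.101   node1       node1.example.com
192.168.2.102   node2       node2.example.com

# Virtual IPs - same subnet as the public network; not bound to a NIC
# by the OS, since Oracle Clusterware brings them up and fails them over
192.168.2.111   node1-vip   node1-vip.example.com
192.168.2.112   node2-vip   node2-vip.example.com

# Private interconnect (separate subnet)
10.0.0.1        node1-priv
10.0.0.2        node2-priv
```

The same entries (public, VIP, private) are normally replicated on every node, and the VIPs are also registered in DNS as noted above.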

Dataguard: Data Guard Protection Modes

Oracle Data Guard (known as Oracle Standby Database prior to Oracle9i) forms an extension to the Oracle RDBMS and provides organizations with high availability, data protection, and disaster recovery for enterprise databases. One of the new features in Oracle9i Release 2 is the ability for the DBA to place the database into one of the following protection modes:

Maximum Protection Maximum Availability Maximum Performance

A Data Guard configuration will always run in one of the three protection modes listed above. Each of the three modes provides a high degree of data protection; however, they differ with regard to data availability and performance of the primary database.

Log Transport Services Log Transport Services enables and controls the automated transfer of redo data within a Data Guard configuration from the primary site to each of its standby sites.

Log transport services also control the level of data protection for your database. The DBA will configure log transport services to balance data protection and availability against database performance. Log transport services also coordinate with Log Apply Services and Role Transition Services for switchover and failover operations.

Maximum Protection Mode Maximum Protection mode offers the ultimate in data protection. It guarantees no data loss will occur in the event the primary database fails. In order to provide this level of protection, the redo data needed to recover each transaction must be written to both the local (online) redo log and to a standby redo log on at least one standby database before the transaction can be committed. In order to guarantee no loss of data can occur, the primary database will shut down if a fault prevents it from writing its redo data to at least one remote standby redo log. In a multiple-instance RAC database environment, Data Guard will shut down the primary database if it is unable to write the redo data to at least one properly configured database instance (see minimum requirements below). In order to participate in Maximum Protection mode:

At least one standby instance has to be configured with standby redo logs. When configuring the standby destination service in the LOG_ARCHIVE_DEST_n initialization parameter on the primary database, you must use the LGWR, SYNC, and AFFIRM attributes.

NOTE: It is highly recommended that a Data Guard configuration operating in Maximum Protection mode contain at least two physical standby databases that meet the requirements listed in the table above. That way, the primary database can continue processing if one of the physical standby databases cannot receive redo data from the primary database. If only one standby database is configured with the minimum requirements listed above, the primary database will shut down when the physical standby databases cannot receive redo data from the primary database!

Maximum Availability Mode Maximum Availability mode provides the highest level of data protection that is possible without affecting the availability of the primary database. This protection mode is very similar to Maximum Protection, in that a transaction will not commit until the redo data needed to recover that transaction is written to both the local (online) redo log and to at least one remote standby redo log. Unlike Maximum Protection mode, however, the primary database will not shut down if a fault prevents it from writing its redo data to a remote standby redo log. Instead, the primary database will operate in Maximum Performance mode until the fault is corrected and all log gaps have been resolved. After all log gaps have been resolved, the primary database automatically resumes operating in Maximum Availability mode.

NOTE: Please note that Maximum Availability mode guarantees that no data will be lost if the primary fails, but only if a second fault does not prevent a complete set of redo data from being sent from the primary database to at least one standby database.

Just like Maximum Protection mode, Maximum Availability requires:

At least one standby instance has to be configured with standby redo logs. When configuring the standby destination service in the LOG_ARCHIVE_DEST_n initialization parameter on the primary database, you must use the LGWR, SYNC, and AFFIRM attributes.

Maximum Performance Mode Maximum Performance mode is the default protection mode and provides the highest level of data protection that is possible without affecting the performance or availability of the primary database. With this protection mode, a transaction is committed as soon as the redo data needed to recover the transaction is written to the local (online) redo log. When configuring the standby destination service in the LOG_ARCHIVE_DEST_n initialization parameter on the primary database, log transport services can be set to use either LGWR / ASYNC or ARCH. In order to reduce the amount of data loss on the standby destination if the primary database were to fail, set the LGWR and ASYNC attribute. Using this configuration, the primary database writes its redo stream to the standby redo logs on the standby database asynchronously with respect to the commitment of the transactions that create the redo data. When the nodes are connected with sufficient bandwidth, this mode provides a level of data protection that approaches that of Maximum Availability mode with minimal impact on primary database performance. Note that the use of standby redo logs while operating in Maximum Performance mode is only necessary when configuring log transport services to use LGWR. When log transport services is configured to use ARCH, standby redo logs are not required.
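The LOG_ARCHIVE_DEST_n attributes described for the three modes can be sketched as follows (the service name and DB_UNIQUE_NAME "boston" are invented for the example; 10g/11g syntax):

```sql
-- Maximum Protection / Maximum Availability style destination:
-- synchronous shipping with disk-write confirmation (LGWR SYNC AFFIRM).
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=boston LGWR SYNC AFFIRM
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston'
  SCOPE=BOTH;

-- Maximum Performance style destination:
-- asynchronous shipping (LGWR ASYNC), minimal impact on the primary.
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=boston LGWR ASYNC
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston'
  SCOPE=BOTH;
```

Only one of the two settings would be in effect at a time; the choice follows the protection mode you intend to run.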

SQL> alter database set standby database to maximize protection;
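Raising the protection level to Maximum Protection requires the primary database to be mounted rather than open; a typical sequence, sketched under that assumption, is:

```sql
-- On the primary database:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PROTECTION;
ALTER DATABASE OPEN;
```

Lowering the mode (e.g. to MAXIMIZE PERFORMANCE) can generally be done while the database is open.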

SQL> SELECT PROTECTION_MODE, PROTECTION_LEVEL, DATABASE_ROLE FROM V$DATABASE;

PROTECTION_MODE      PROTECTION_LEVEL     DATABASE_ROLE
-------------------- -------------------- ----------------
MAXIMUM PROTECTION   MAXIMUM PROTECTION   PRIMARY

SUMMARY:

1) Maximum protection (zero data loss)
   Redo synchronously transported to the standby database
   Redo must be applied to at least one standby before transactions on the primary can be committed
   Processing on the primary is suspended if no standby is available

2) Maximum availability (minimal data loss)
   Similar to maximum protection mode
   If no standby database is available, processing continues on the primary

3) Maximum performance (default)
   Redo asynchronously shipped to the standby database
   If no standby database is available, processing continues on the primary

Que:- Can we increase the block size of an Oracle database in real time?

Ans (Jonathan Lewis):- I think I've warned you before that you shouldn't believe everything you read in the documentation or on Metalink. The stuff you've quoted there (is it in the "real" documentation, or just a Metalink note?) isn't very well thought through, and hasn't been described accurately. For example:

"smaller: good for small rows with lots of random access."

Where's the justification, what's the complete scenario? A smaller block on (say) an order-lines table increases the chances that the order lines for a single order now span two blocks instead of fitting in one. So a query for an order with

its order lines now visits two blocks, increasing the latch activity - which increases the chances of latch contention. Should I ignore this example because "access to order-lines wouldn't be random enough"? Virtually every one of the suggestions you've quoted needs to be carefully justified. The only one that's fairly sound is the one that says "don't split rows across multiple blocks" - and even then there's room for error.

> The problem here is that the OP did not mention and clarify what he was trying to say when he said "improve the performance". If he is going to have a lot of huge bulk loads and use the database as a warehouse, I think it is better for him to use a larger block size. Technically and on paper, I don't know the real impact, but it should help the database and may increase the overall performance.

That's why I asked him about the performance improvements he thinks he might get and how he's going to measure the benefit. In his case, if he's merging several databases into one (and not simply using transportable tablespaces to hold several systems in one database) then the extra cost and risk of change is probably insignificant - nevertheless it would be interesting to hear what metrics he intends to use and how he's going to demonstrate that his final choice of blocksize is correct.

Que:- Find the information about the Oracle block size and create a tablespace with multiple block sizes.

Ans:-

HowTo: Create Oracle tablespaces using multiple DB block sizes.

Summary: Instructions provided describe how to create Oracle tablespaces using multiple DB block sizes. In some cases, it is necessary to restore Oracle databases using exactly the same settings as the original ones. The DB block size is one of the Oracle system parameters. The default DB block size cannot be changed once the database is created, but multiple DB block sizes can be set up to meet the requirement.

Procedure: Since Oracle 9i, databases can have multiple block sizes. Every database has a 'standard' block size specified by db_block_size. The SYSTEM and temporary tablespaces use the standard block size. Application tablespaces can use other, non-standard block sizes. All partitions of a table or index must use the same block size. The SGA (System Global Area) has a separate buffer cache for each block size:

DB_2K_CACHE_SIZE
DB_4K_CACHE_SIZE
DB_8K_CACHE_SIZE
DB_16K_CACHE_SIZE
DB_32K_CACHE_SIZE

Use the following workflow to change the buffer cache size to use multiple DB block sizes.

1. Check the current available SGA and buffer size:

   SQL> show sga

   Total System Global Area  419430400 bytes
   Fixed Size                  2073288 bytes
   Variable Size             251661624 bytes
   Database Buffers          159383552 bytes
   Redo Buffers                6311936 bytes

   SQL> select name, block_size, current_size from v$buffer_pool;

   NAME                 BLOCK_SIZE CURRENT_SIZE
   -------------------- ---------- ------------
   DEFAULT                    8192          152

2. Try creating a tablespace with block size = 4 KB; this fails with an error because no 4K buffer cache is configured.

3. Alter the system to add the db_4k_cache_size parameter:

   SQL> alter system set db_4k_cache_size = 60M;

   System altered.

4. A new tablespace can now be created using the desired block size.

5. Double-check the SGA and buffer usage:

   SQL> select name, block_size, current_size from v$buffer_pool;

   NAME                 BLOCK_SIZE CURRENT_SIZE
   -------------------- ---------- ------------
   DEFAULT                    8192           92
   DEFAULT                    4096           60
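The create-tablespace step in the workflow above can be sketched as follows (the tablespace name and datafile path are invented for the example):

```sql
-- While no 4K buffer cache is configured, this statement fails
-- (ORA-29339: tablespace block size does not match configured block sizes).
CREATE TABLESPACE ts_4k
  DATAFILE '/u01/app/oracle/oradata/orcl/ts_4k01.dbf' SIZE 100M
  BLOCKSIZE 4K;

-- After: ALTER SYSTEM SET db_4k_cache_size = 60M;
-- the same statement succeeds, and segments in ts_4k use 4K blocks.
```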

1. Give methods of transferring a table from one schema to another.


Export/import, CREATE TABLE AS SELECT, COPY, etc.

2. What is the purpose of the IMPORT option IGNORE? What is the default setting?
The IMPORT IGNORE option tells import to ignore "object already exists" errors. If it is not specified, tables that already exist will be skipped. The default is IGNORE=N.
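A minimal sketch of the exp/imp usage discussed in questions 1 and 2 (usernames, passwords, and file names below are invented for the example):

```shell
# Export one schema's objects to a dump file...
exp scott/tiger file=scott.dmp owner=scott

# ...then import into another schema; ignore=y suppresses
# "already exists" errors so rows load into pre-created tables.
imp system/manager file=scott.dmp fromuser=scott touser=blake ignore=y
```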

3. You have a rollback segment in an Oracle8 database that has expanded beyond optimal. How can it be restored to optimal? Use the ALTER ROLLBACK SEGMENT ... SHRINK command.

4. If the DEFAULT and TEMPORARY tablespace clauses are left out of the CREATE USER statement in Oracle8i, what happens? Is this good or bad? Why? The user is assigned SYSTEM as both the default and temporary tablespace. This is bad, because no user objects should be placed in SYSTEM.

5. What are some of the Oracle-provided packages that a DBA should be aware of? Owned by the SYS user: dbms_shared_pool, dbms_utility, dbms_sql, dbms_ddl, dbms_session, dbms_output, dbms_snapshot. Also the CAT*.SQL and UTL*.SQL scripts.

6. What happens if the constraint name is left out of a constraint clause? Oracle will use the default name SYS_Cxxxx, where xxxx is a system-generated number. This makes the constraint hard to track.

7. What happens if a tablespace clause is left off a primary key constraint clause? The index that is automatically generated is placed in the user's default tablespace. Since this is the same tablespace as the table, it can cause serious performance problems.

8. What is the proper method for disabling and re-enabling a primary key constraint? You can use ALTER TABLE for both. However, for the enable clause you must specify the USING INDEX and TABLESPACE clauses.

9. What happens if a primary key constraint is disabled and then re-enabled without specifying the index clause? The index is created in the user's default tablespace and all sizing information is lost.

10. When should more than one DB writer (DBWR) be used? How many? If the UNIX system is capable of asynchronous I/O, only one is required. If the UNIX system is incapable of asynchronous I/O, use up to twice the number of disks used by Oracle, or twice the number of CPUs.

11. You are using a hot backup without being in ARCHIVELOG mode. Can you recover in the event of failure? Why not? You can't use a hot backup without being in ARCHIVELOG mode.

12. What causes the "Snapshot too old" error? How can this be prevented or mitigated? This is caused by large or long-running transactions that have either wrapped onto their own rollback space or had another transaction write on part of their rollback space. It can be mitigated by breaking the transactions into smaller pieces and running them separately, or by increasing the size of the rollback segments and their extents.

13. How can you tell if a database object is invalid? By checking the STATUS column in the DBA_OBJECTS, ALL_OBJECTS, or USER_OBJECTS views.

14. If a user gets the ORA-00942 error, yet you know you granted them the privilege, what else should you check? Check whether the user specified the full name of the object (schema.object), or has a synonym pointing to that object.

15. A developer is trying to create a view but the database will not let him. He has the DEVELOPER role, which has the CREATE VIEW system privilege, and the SELECTs on the table he is using. What is the problem? You need to verify that the developer has direct grants on the tables on which the view is based.
You can not create a stored object based with grants given through views. 16. If you are using an example table what is the best way to get sizing data for the production table implementation? The best way is to analyze the table and then use the data provided in the dba_tables view to get the average row length and other data for the calculation. The quick and the dirty way is to look at the number of blocks the table is actually using and ratio the number of rows to its number of blocks against expected number of rows. 17. How can you tell how many users are currently logged into the database? How can you find their system ID? Query V$SESSION and V$PROCESS. The other is to check the current_logins parameter in the V$SYSSTAT. Another way on UNIX is to do a ps ef|grep oracle| wc l command, but this works only against a single instance installation. 18. The user selects from a sequence and gets back 2 values, his select is SELECT pk_seq.nextval FROM dual; What is the problem? Somehow two rows have been inserted into DUAL. 19. How do you determine if an index has to be dropped or rebuilt? Run the analyze index to validate structure and then calculate the ratio of LF_ROWS_LEN / LF_ROWS_LEN + DEL_LF_ROWS_LEN and if it is not at least 70% the index should be rebuilt. Or id the ratio of DEL_LF_ROWS_LEN / LF_ROWS_LEN + DEL_LF_ROWS_LEN is nearing 30%.
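The ratio from question 19 comes from the INDEX_STATS view, which is populated for the current session by ANALYZE ... VALIDATE STRUCTURE. A sketch, assuming an illustrative index named EMP_PK:

```sql
ANALYZE INDEX emp_pk VALIDATE STRUCTURE;

-- INDEX_STATS holds one row: the index last analyzed in this session.
-- NULLIF guards the division when the index is empty.
SELECT name,
       lf_rows_len,
       del_lf_rows_len,
       ROUND(lf_rows_len /
             NULLIF(lf_rows_len + del_lf_rows_len, 0) * 100, 1) AS pct_live
  FROM index_stats;

-- If pct_live falls below roughly 70, rebuild:
-- ALTER INDEX emp_pk REBUILD;
```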

SQL * PLUS and SQL Job Questions

1. How can you pass variables into a SQL routine? By the use of the & or && symbol. To pass in positional variables use &1, &2, and so on. To be prompted for a specific variable, place the ampersand variable in the code itself: SELECT * FROM dba_tables WHERE owner = '&owner_name'; The double ampersand tells Oracle to reuse the variable on subsequent references, unless ACCEPT is used to get a new value from the user. 2. You want to include a carriage return/linefeed in the output from a SQL script. How can you do this? The best way is to use the CHR() function (CHR(10) is a linefeed) together with the concatenation operator. Another method is to embed the return/linefeed inside a quoted string.

3. How do you call a PL/SQL procedure from SQL? By using EXECUTE, or by wrapping the call in a BEGIN ... END; block and treating it as an anonymous block. 4. How do you execute a host OS command from within SQL? By using the ! or HOST command. 5. You want to use SQL to generate SQL. What is it called? Give an example. This is called dynamic SQL. An example:
SET LINES 90 PAGES 0 TERMOUT OFF FEEDBACK OFF VERIFY OFF
SPOOL drop_all.sql
SELECT 'DROP USER '||username||' CASCADE;' FROM dba_users
 WHERE username NOT IN ('SYS', 'SYSTEM');
SPOOL OFF
6. What SQL*Plus command is used to format output from a select? This is done with the COLUMN command. 7. You want to group the following set of select returns; what can you group on? MAX(sum_of_cost), MIN(sum_of_cost), COUNT(item_no)? The only column you can group on is item_no; the rest are results of aggregate functions.

8. What special Oracle feature allows you to specify how the cost-based optimizer (CBO) treats a SQL statement? You can use hints such as FIRST_ROWS, ALL_ROWS, RULE, INDEX, STAR. 9. You want to determine the location of identical rows in a table before attempting to place a unique index on the table. How can this be done? If you use the MIN or MAX function against the ROWID, then select against the proposed primary key, you can squeeze out the ROWIDs of duplicate rows quickly:
SELECT rowid FROM emp e
 WHERE e.rowid > (SELECT MIN(x.rowid) FROM emp x
                   WHERE x.emp_no = e.emp_no);
If multiple columns make up the proposed key, they all must be used in the WHERE clause.
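The same correlated subquery can drive the cleanup before the unique index is created. A hedged sketch, assuming the intent is simply to delete the duplicates in EMP on EMP_NO, keeping the row with the lowest ROWID:

```sql
DELETE FROM emp e
 WHERE e.rowid > (SELECT MIN(x.rowid)
                    FROM emp x
                   WHERE x.emp_no = e.emp_no);

-- After the duplicates are gone, the unique index should succeed:
CREATE UNIQUE INDEX emp_pk ON emp (emp_no);
```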

10. What is a Cartesian product? A Cartesian product is the result of an unrestricted join of two or more tables: every row of one table is joined to every row of the others. 11. You are joining a local and a remote table, and the network manager complains about the network traffic involved. How do you reduce the amount of traffic?

Push the processing of the remote data to the remote server by using a view to preselect the information for the join. This results in only the data needed for the join being sent across the network.
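A minimal sketch of this fix. The database link SALES_LINK, the remote table REMOTE_ORDERS and all column names are illustrative assumptions:

```sql
-- Created on the remote server: the view restricts rows and columns
-- before anything crosses the network.
CREATE VIEW recent_orders_v AS
  SELECT order_id, cust_id, order_total
    FROM remote_orders
   WHERE order_date > SYSDATE - 30;

-- The local join then pulls only the preselected data over the link:
SELECT c.cust_name, o.order_total
  FROM customers c, recent_orders_v@sales_link o
 WHERE c.cust_id = o.cust_id;
```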

12. What is the default ordering of an ORDER BY clause? Ascending. 13. What is TKPROF and how is it used? TKPROF is a tuning tool used to determine the execution time of SQL statements. Use it by first setting the TIMED_STATISTICS parameter to TRUE, then turning on SQL_TRACE for the entire instance or just for the session with an ALTER SESSION command. Once the trace file is produced, you run TKPROF against it to generate a readable report with an explain plan. 14. What is EXPLAIN PLAN and how is it used? EXPLAIN PLAN is used to tune SQL statements. You must have the plan table generated for the user producing the explain plan; this is done with the utlxplan.sql script. Once the plan table exists, you run the EXPLAIN PLAN command with the statement to be explained, then query the plan table to see the execution plan. 15. How do you set the number of lines per page? The width? The SET command in SQL*Plus controls the number of lines per page and the width of those lines. For example, SET PAGESIZE 60 LINESIZE 80 generates reports 60 lines long with a line width of 80 characters. 16. How do you prevent output from coming to the screen? The SET option TERMOUT controls output to the screen; SET TERMOUT OFF turns it off. 17. How do you prevent Oracle from giving you informational messages during and after SQL statement execution? The SET options FEEDBACK and VERIFY can be set to OFF.
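Questions 15 through 17 combine naturally in a report script. A sketch, with an illustrative spool file name (note that TERMOUT OFF only suppresses display when the script is run from a file, not typed interactively):

```sql
SET PAGESIZE 60 LINESIZE 80   -- 60 lines per page, 80 characters wide
SET TERMOUT OFF               -- no screen output while spooling
SET FEEDBACK OFF VERIFY OFF   -- no "n rows selected" or substitution messages
SPOOL emp_report.lst
SELECT * FROM emp;
SPOOL OFF
SET TERMOUT ON FEEDBACK ON VERIFY ON
```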

18. How do you generate file output from SQL? By use of the SPOOL command.

Oracle Performance Tuning Job Interview Questions

1. A tablespace has a table with 300 extents in it. Is this bad? Why or why not? Multiple extents in themselves are not bad. However, if you also have chained rows, this can hurt performance. 2. How do you set up tablespaces during an Oracle installation? You should always use the OFA layout. For the best results, SYSTEM, ROLLBACK, UNDO, TEMPORARY, INDEX and DATA segments should be separated. 3. You see multiple fragments in the SYSTEM tablespace. What should you check first? Ensure that users don't have the SYSTEM tablespace as their default or temporary tablespace, by checking DBA_USERS. 4. What are the indications that you need to increase or decrease the SHARED_POOL_SIZE parameter? Poor data dictionary or library cache hit ratios, or ORA-04031 errors. Another indication is steadily decreasing performance while all other tuning parameters stay the same.

5. What are the general guidelines for sizing DB_BLOCK_SIZE and DB_FILE_MULTIBLOCK_READ_COUNT for an application that does many table scans? The OS almost always reads in 64K chunks. The two should have a product of 64K, a multiple of 64K, or the value of the read size for your OS.
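The arithmetic, using an illustrative 8K block size and 64K OS read size:

```sql
-- DB_BLOCK_SIZE * DB_FILE_MULTIBLOCK_READ_COUNT ~= OS read size
--   8192 bytes * 8 = 65536 bytes (64K)
-- so in the init.ora:
--   db_block_size                 = 8192
--   db_file_multiblock_read_count = 8
```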

6. What is the fastest query method for a table under the rule-based optimizer? Fetch by ROWID. 7. Explain the use of TKPROF. What parameter should be set to get full TKPROF output? TKPROF is a tuning tool used to determine the execution time of SQL statements. Use it by first setting the TIMED_STATISTICS parameter to TRUE, then turning on SQL_TRACE for the entire instance or just for the session with an ALTER SESSION command. Once that is done, you run TKPROF against the trace file and generate a readable report with an explain plan.

8. When looking at V$SYSSTAT you see that sorts (disk) is high. Is that bad or good? If bad, how do you correct it? Excessive disk sorts are bad. They mean you need to tune the sort area parameters in the init file; SORT_AREA_SIZE is the major one.

9. When should you increase copy latches? What parameter controls copy latches? When you get excessive contention for the copy latches, as shown by the redo copy latch hit ratio, you can increase copy latches via the init parameter LOG_SIMULTANEOUS_COPIES, up to twice the number of CPUs on your system.

10. Where can you get a list of all initialization parameters for your system? How about whether they are at the default settings or have been changed? You can look in the init.ora file, or query the V$PARAMETER view, whose ISDEFAULT column shows whether a parameter has been changed from its default.
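A sketch of the TKPROF workflow from question 7, assuming a dedicated-server session and an illustrative trace file name:

```sql
-- In the session to be traced:
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET sql_trace = TRUE;
-- ... run the statements of interest ...
ALTER SESSION SET sql_trace = FALSE;

-- Then, from the OS prompt (file names are illustrative):
--   tkprof ora_12345.trc report.prf explain=scott/tiger sys=no
```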

11. Describe hit ratios as they pertain to the database buffers. What is the difference between an instantaneous and a cumulative hit ratio? Which one should you use for tuning? A hit ratio is a measurement of how many times the database was able to read a value from the buffers instead of from disk. A value of 80-90% is good. If you simply take the ratio from the existing statistics, it is cumulative since the instance started. If you compare readings taken at two arbitrary points in time, you get the instantaneous ratio for that time span, which is the more valuable one for tuning.

12. Discuss row chaining. How does it happen? How do you correct it? Row chaining happens when a variable-length value is updated, the new value is longer than the old one and will not fit into the remaining block space, so the row chains to another block. You can reduce it by setting appropriate values in the table storage clause (PCTFREE), and correct it by exporting and importing the table.

13. When looking at the estat events report you see that you are getting buffer busy waits. Is this bad? How can you find what is causing it? Buffer busy waits can indicate contention in redo, rollback or data blocks. Check V$WAITSTAT to see what areas are causing the problem: the COUNT column tells you how big the problem is, and the CLASS column tells you where it is.
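A sketch of the cumulative buffer cache hit ratio from V$SYSSTAT, using the classic statistic names; an instantaneous ratio would be computed from the difference between two snapshots of the same query:

```sql
SELECT ROUND((1 - phy.value / (db.value + con.value)) * 100, 1) AS hit_pct
  FROM v$sysstat phy, v$sysstat db, v$sysstat con
 WHERE phy.name = 'physical reads'
   AND db.name  = 'db block gets'
   AND con.name = 'consistent gets';
```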

14. If you see contention for library caches, how can you fix it? Increase the size of the shared pool. 15. If you see statistics that deal with UNDO, what are they really talking about? Rollback segments and their associated structures. 16. If a tablespace has a default PCTINCREASE of 0, what will this cause (in relation to SMON)? SMON will not automatically coalesce its free space fragments.

17. If a tablespace shows excessive fragmentation, what are the methods to defragment the tablespace?

In Oracle 7.0 and 7.2, the ALTER SESSION SET EVENTS 'IMMEDIATE TRACE NAME COALESCE LEVEL ts#'; command is the easiest way to defragment the free space (the ts# value is in the ts$ table owned by SYS). In version 7.3, use ALTER TABLESPACE <tablespace_name> COALESCE;. If the free space is not contiguous: export, drop and re-import the tablespace contents.

18. How can you tell if a tablespace has excessive fragmentation? If a select against DBA_FREE_SPACE shows that the count of a tablespace's free-space extents is greater than the count of its datafiles, then it is fragmented.

19. You see the following on your status report: redo log space requests 23, redo log space wait time 0. Is this something to worry about? What if the redo log wait time is high? How can you fix this? Since the wait time is zero, no. If the wait time were high, it might indicate a need for more or larger redo logs.
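The DBA_FREE_SPACE check from question 18 as a query (standard data dictionary views):

```sql
SELECT tablespace_name, COUNT(*) AS free_extents
  FROM dba_free_space
 GROUP BY tablespace_name
 ORDER BY free_extents DESC;

-- Compare free_extents with the datafile count per tablespace in
-- DBA_DATA_FILES: many more free extents than datafiles suggests fragmentation.
```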

20. What can cause a high value for recursive calls? How can this be fixed? A high value for recursive calls is caused by improper cursor usage, extensive dynamic space management actions, and excessive statement re-parses. You need to determine the cause and correct it: relink applications to hold cursors, use proper space management techniques (proper storage and sizing), and place repeated queries in packages for proper reuse.

21. If you see a pin hit ratio of less than 0.8 in the estat library report, is this a problem? If so, how do you resolve it? Yes, this indicates that the shared pool is too small. Increase the size of the shared pool.

22. If you see a high value for reloads in the estat library cache report, is this a matter for concern? Yes: you should strive for zero reloads if possible. If you see excessive reloads, increase the size of the shared pool.

23. You look at DBA_ROLLBACK_SEGS and see a large number of shrinks, and they are relatively small in size. Is this a problem? How can it be fixed? A large number of small shrinks indicates a need to increase the size of the rollback segment extents. Ideally, you should have no shrinks, or a small number of large shrinks. To alleviate this, increase the size of the extents and adjust OPTIMAL accordingly.
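A sketch of how to see the shrink counts directly; the shrink statistics live in V$ROLLSTAT, joined to the segment names in V$ROLLNAME:

```sql
SELECT n.name,
       s.shrinks,
       s.aveshrink AS avg_shrink_bytes,
       s.optsize   AS optimal_bytes
  FROM v$rollstat s, v$rollname n
 WHERE s.usn = n.usn;
```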

24. You look at DBA_ROLLBACK_SEGS and see a large number of wraps. Is this a problem? A large number of wraps indicates that the rollback segment extent size is too small; increase the size of your extents. You can look at the average transaction size in the same view to help size them.

25. You have room to grow extents by 20%. Is there a problem? Should you take any action? No, it is not a problem. You have 40 extents showing and an average of 40 users; since there is plenty of room to grow, there is no problem.

26. You see multiple extents in the temporary tablespace. Is this good or bad? As long as they are all the same size it is not a problem.

Oracle Installation and Configuration Job Interview Questions

1. Define OFA. OFA is Optimal Flexible Architecture. It is a method of placing directories and files in an Oracle system for future tuning and file placement.

2. How do you set up tablespaces on installation? Use at least 7 disk arrays: SYSTEM on one, two mirrored redo logs on different disks, the TEMPORARY tablespace on one, the ROLLBACK tablespace on another, and still have 2 disks for data and indexes. 3. What should be done prior to installation of Oracle? Adjust the kernel parameters and check disk space. 4. You have installed Oracle and are now setting up the actual instance. You have been waiting for an hour for the initialization script to finish. What should you check first? Check to make sure that the archiver is not stuck. If archive logging is turned on during install, it generates an enormous amount of archived log space; if the destination is full, Oracle stops and waits until you free more space.

5. When configuring SQL*NET on the server, what files should be set up? The SQLNET.ORA, TNSNAMES.ORA and LISTENER.ORA files. 6. When configuring SQL*NET on the client, what files need to be set up? SQLNET.ORA and TNSNAMES.ORA. 7. What must be installed with ODBC on the client in order to work with Oracle? SQL*NET and the network protocol (TCP/IP, etc.). 8. You have started a new instance with a large SGA on a busy existing server. Performance is terrible and the users are complaining. What should you check for first? Check that the large SGA is not being swapped out.

9. What OS user should be set up first on UNIX as an Oracle user? You must use root first, then create the dba and oinstall groups and the oracle owner account. 10. When should the default Oracle parameter values be used?
Never.

11. How many control files should you have? Where should they be located? At least 2, on separate disks, not just on 2 file systems. 12. How many redo logs should you have, and how should they be configured for maximum recoverability? At least 2 are required. The OFA specifies at least 3 groups with at least 2 members each. The log files should be on 2 different disks, mirrored by Oracle. The files should not be on raw devices on UNIX.

13. You have a simple application with no hot tables (uniform I/O and access requirements). How many disk arrays should you have, assuming a standard layout for the SYSTEM, USER, TEMP and ROLLBACK tablespaces? At least 7 disks; see question 2 above.

Oracle Data Modeling Job Interview Questions

1. Describe the third normal form. All attributes in an entity relate to the primary key, and only to the primary key. 2. Is this statement TRUE or FALSE: all databases must be in third normal form?

FALSE. While third normal form (3NF) is good for logical design, most databases will not perform well under full 3NF. Usually, some entities will be denormalized in the logical-to-physical transfer process.

3. What is an ERD? ERD stands for Entity Relationship Diagram. 4. Why are recursive relationships bad? How do you resolve them? A recursive relationship (a table related to itself) is bad when it is a hard relationship (neither side is optional; both sides are mandatory), as this makes it impossible to place the entity at either the top or the bottom of the hierarchy. You resolve it by using an intersection entity. 5. What does a hard one-to-one relationship mean (one that is mandatory on both ends)? It means the two entities should probably be made one.

6. How is a many-to-many relationship handled? By adding an intersection entity (table). 7. What is an artificial (derived) primary key? When should it be used? A derived key comes from a sequence. Usually, it is used when a concatenated key becomes too cumbersome to use as a foreign key. 8. When should you consider denormalization? When performance analysis indicates it would be beneficial, and it can be done without compromising data integrity.

Oracle Troubleshooting Job Interview Questions

1. How can you determine from the OS level whether an Oracle instance is up? The background processes smon, pmon, dbwr and lgwr will be running. Using ps -ef | grep dbwr will show which instances are up. 2. Users on PC clients are getting the message ORA-06114 (Cnct err, can't get err txt. See Servr Msgs & Codes Manual). What could be the problem? The instance name is probably incorrect in their connection string.

3. Users on the PC clients are getting the following error: ORA-01034: ORACLE not available. What is the problem? The Oracle instance is shut down. Restart the instance.

4. How can you determine whether the SQL*NET process is running for SQL*NET V1? How about V2 / NET8? For SQL*NET V1, check for the orasrv process; you can use TCPCTL STATUS to get the full status of the SQL*NET server. For V2 / NET8, check for the existence of the LISTENER process, or use lsnrctl status.

5. What file will give you Oracle instance status information? Where is it located? The alert<SID>.log file, located in the directory given by the BACKGROUND_DUMP_DEST parameter (visible in V$PARAMETER).

6. Users are not being allowed into the system: ORA-00257: archiver is stuck. Connect internal only, until freed. What is the problem? The archive destination is probably full. Back up the archived logs and remove them, and the archiver will restart.

7. Where should you look to find out if a redo log was corrupted, assuming you are using Oracle-mirrored redo logs? You must check the alert<SID>.log for this information.

8. You attempt to add a datafile and get: ORA-01118: cannot add any more database files: limit of 40 exceeded. What is the problem and how do you fix it? When the database was created, the DB_FILES parameter was set to 40. You can specify a higher value, up to the MAXDATAFILES setting in the control file; to go beyond MAXDATAFILES, you will have to rebuild the control file.

9. You look at the fragmentation report and see that SMON has not coalesced any of your tablespaces. What is the problem? Check DBA_TABLESPACES for the value of PCT_INCREASE for the tablespaces. If PCT_INCREASE is 0, SMON will not coalesce their free extents.
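The check from question 9 as a query:

```sql
SELECT tablespace_name, pct_increase
  FROM dba_tablespaces
 WHERE pct_increase = 0;   -- SMON will not coalesce these tablespaces
```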

10. Your users get the following error: ORA-00055: maximum number of DML locks exceeded. What is the problem and how do you fix it? The number of DML locks is set by the DML_LOCKS initialization parameter. If the value is set too low (the default is low) you will get this error. If you think it is a temporary problem, you can wait for it to clear; otherwise, increase DML_LOCKS and restart the instance.

11. You get a call from your backup DBA while you are on vacation. He has corrupted all of the control files while playing with ALTER DATABASE BACKUP CONTROLFILE. What do you do? As long as the datafiles are OK and he was successful with the BACKUP CONTROLFILE command, you can use the following:
CONNECT INTERNAL
STARTUP MOUNT
(take read-only tablespaces' datafiles offline before the next step: ALTER DATABASE DATAFILE <file> OFFLINE;)
RECOVER DATABASE USING BACKUP CONTROLFILE
ALTER DATABASE OPEN RESETLOGS;
(bring the read-only tablespaces back online)
Shut down, back up the system, then restart. If no control file backup is available, the following is required instead:
CONNECT INTERNAL
STARTUP NOMOUNT
CREATE CONTROLFILE ...;
However, you will need to know all the datafiles and logfiles, plus the settings for MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY and MAXDATAFILES for the database.
