A much-overlooked issue when building UAT and DEV databases is that they do not model the Production database. I've seen that people tend to ignore, unless pushed hard, keeping the UAT and DEV databases as close as possible to Production in database design, data distribution, and hardware/software environment.
For example: there was a Java application running on Apache Tomcat on Solaris 10 on 64-bit SPARC machines.
1. The UAT database was refreshed from an export dump instead of from a physical hot/cold/RMAN backup of Prod. Extent sizes on UAT and Prod were different. Prod had heavy fragmentation in some tables and indexes, while UAT did not, since it was refreshed from an export dump.
2. On UAT, all database files were placed on a single disk array. Production had three mirrored copies of the redo logs, while UAT had no mirrored redo logs.
3. The application used connection pooling implemented in custom Java code inside the application itself (instead of using Oracle's default connection pooling, or the connection pooling of WebLogic, etc.).
4. The UAT middle tier used a different JDBC driver than Production.
5. The application was hitting the open_cursors limit on UAT because connections from the pool were not closed and some result sets were still open. Raising the limit on UAT worked around it, but Production never showed the issue, because the Production servers had more connections in the pool.
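A quick way to spot sessions approaching the open_cursors limit is to compare each session's currently opened cursors against the parameter. A minimal sketch against the standard v$ views:

```sql
-- Sessions ordered by currently opened cursors; compare the top
-- values against the open_cursors init parameter.
SELECT s.sid,
       s.username,
       st.value AS opened_cursors
FROM   v$sesstat  st
JOIN   v$statname n ON n.statistic# = st.statistic#
JOIN   v$session  s ON s.sid = st.sid
WHERE  n.name = 'opened cursors current'
ORDER  BY st.value DESC;
```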
Don't assume!
Consider a GROUP BY query, say grouping on column c1. In earlier versions it returns the result set sorted on c1, but this changes in 10g R2, because 10g uses a HASH GROUP BY operation to implement the grouping rather than the SORT GROUP BY of earlier versions. So if sorted output is desired, the query must contain an explicit ORDER BY.
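A minimal sketch of the point, assuming a table t with a column c1 (placeholder names):

```sql
-- May happen to come back sorted on c1 under SORT GROUP BY, but a
-- 10gR2 HASH GROUP BY can return the groups in arbitrary order.
SELECT c1, COUNT(*) FROM t GROUP BY c1;

-- If sorted output is required, say so explicitly:
SELECT c1, COUNT(*) FROM t GROUP BY c1 ORDER BY c1;
```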
Similarly, there can be join queries for which users happen to get a sorted result set, but they cannot rely on it: if the execution plan changes, the result set may no longer come back sorted. If sorting is required, developers must specify an explicit ORDER BY clause in the query.
I remember a case in which a junior developer wrote a query to dump table data to an ASCII CSV file. It was obviously required that the column data in the CSV be in the same order as in the table. After I told him about the view user_tab_columns, he used a query on it to estimate the maximum record length of the table in the CSV file (rather than manually summing all the column widths). An alternative would have been to set a large LINESIZE along with TRIMSPOOL ON, but he wanted to cut short the work of typing SELECT c1||','||c2||','||c3||','||... FROM table, so he generated this SELECT from user_tab_columns. But he assumed the column order returned would be the same as in the table. The result was the wrong column order in the CSV file. So please don't assume: it was a view, so the order was not guaranteed.
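For illustration, the generated query could have pinned the order down explicitly; user_tab_columns carries a column_id column for exactly this purpose (the table name EMP is a placeholder):

```sql
-- Build the c1||','||c2||... select list in the table's real
-- column order.
SELECT column_name || '||'',''||'
FROM   user_tab_columns
WHERE  table_name = 'EMP'
ORDER  BY column_id;  -- without this, the row order is not guaranteed
```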
I never advise rules of thumb, but some can be taken as a checklist, worked through one by one, while tuning I/O. Mind how much I/O capacity you have! So rule 1 is: minimize I/O.
1. Cut unnecessary fetches. Be restrictive about the columns in the select list. Make sure all columns fetched in explicit/implicit cursors are actually used somewhere in the code.
2. Check the usefulness of indexed columns. Some indexes may be slowing DMLs heavily while yielding no query performance gain. Identify and drop such indexes.
3. Avoid triggers that perform a lot of transactions and auditing internally; these may actually be slowing DMLs, especially when DMLs are issued in bulk.
4. Check that all tables/indexes have appropriate values set for PCTFREE and PCTUSED. PCTFREE defaults to 10%, so for objects that will not undergo future updates you may be not only wasting 10% extra disk/cache memory but also causing more I/O.
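As a sketch, for an insert-only table whose rows are never updated (the table name is a placeholder), the free-space reserve can be dropped to zero:

```sql
-- Pack blocks full; affects only blocks formatted after the change.
ALTER TABLE audit_log PCTFREE 0;
```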
5. If CPU resources are available, some tables can be compressed. This not only minimizes I/O at the expense of CPU but also meets the objective "maximize cache". How? Because the table now needs fewer buffers, you have more free buffers that can be assigned to other objects. This is very useful when there is no shortage of CPU but memory is scarce.
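A sketch of basic table compression on a hypothetical history table:

```sql
-- Enable compression for future direct-path loads...
ALTER TABLE sales_hist COMPRESS;
-- ...and rebuild the existing blocks compressed (this invalidates
-- the table's indexes, which must be rebuilt afterwards).
ALTER TABLE sales_hist MOVE COMPRESS;
```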
6. If using materialized views for replication or reporting, try to make their refresh possible by the FAST method.
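A FAST refresh needs a materialized view log on the master table. A minimal sketch with placeholder names (assumes orders has a primary key):

```sql
CREATE MATERIALIZED VIEW LOG ON orders;

CREATE MATERIALIZED VIEW orders_mv
  REFRESH FAST ON DEMAND
  AS SELECT * FROM orders;
```

DBMS_MVIEW.EXPLAIN_MVIEW can be used to check why a given materialized view is not fast-refreshable.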
7. Explore whether you need to configure KEEP and RECYCLE pools in your database for frequently accessed (small) and rarely accessed (bigger) tables, then set and size them appropriately and assign the related objects to these pools.
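A sketch with illustrative sizes and placeholder object names:

```sql
-- Carve out the pools (needs SGA room; sizes are examples only).
ALTER SYSTEM SET db_keep_cache_size    = 512M SCOPE=BOTH;
ALTER SYSTEM SET db_recycle_cache_size = 256M SCOPE=BOTH;

-- Pin a small, hot lookup table in KEEP; push a big, rarely
-- re-read table through RECYCLE.
ALTER TABLE small_lookup STORAGE (BUFFER_POOL KEEP);
ALTER TABLE big_history  STORAGE (BUFFER_POOL RECYCLE);
```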
8. If using a bigger SGA (> 16 GB) on Linux, use HugePages memory.
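A hedged sketch of the OS side (the page count below is an assumption; it must be at least the SGA size divided by the 2 MB huge page size):

```shell
# Reserve huge pages for roughly a 16 GB SGA (8500 x 2 MB ~ 16.6 GB).
echo "vm.nr_hugepages = 8500" >> /etc/sysctl.conf
sysctl -p
grep Huge /proc/meminfo   # verify HugePages_Total / HugePages_Free
```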
*
ERROR at line 1:
ORA-01325: archive log mode must be enabled to build into the logstream
And again below, you get the same error when you try to use the redo logs for building the dictionary:
SQL> EXECUTE DBMS_LOGMNR_D.BUILD ( options=>DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
BEGIN DBMS_LOGMNR_D.BUILD ( options=>DBMS_LOGMNR_D.STORE_IN_REDO_LOGS); END;
*
ERROR at line 1:
ORA-01325: archive log mode must be enabled to build into the logstream
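The fix ORA-01325 points at is to put the database in ARCHIVELOG mode before building the dictionary into the redo stream:

```sql
-- Requires a clean restart into MOUNT state.
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Now the build into the redo logs succeeds.
EXECUTE DBMS_LOGMNR_D.BUILD(options => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
```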
Querying from another session:
SELECT t.session_info, t.sql_redo, t.* FROM t1_log t WHERE UPPER(sql_redo) LIKE UPPER('%truncate%') OR operation LIKE 'DDL'
shows that the DDL operations issued above at the SQL> prompt are also tracked.