
| GREP ORA

| sort quality | head 200

GrepOra Team
All rights reserved GrepOra Team - http://grepora.wordpress.com

The book of the blog http://grepora.wordpress.com


from 29 January 2015 to 25 January 2017
Disclaimer
The content in this blog (GrepOra.com) and book is protected as intellectual work under Brazilian Federal Law 9,610, of February 19, 1998. This content must not be published, distributed or transmitted without the authors' permission.

This is an independent site. The opinions published here are personal and do not


represent the opinion of Oracle or any other institution, as expressed on CC BY-NC
3.0


About the Blog
GrepOra is a blog between friends to learn and share about our daily experiences and
challenges with Oracle technologies.

The blog started in January 2015 as MatheusDBA, focused on Oracle Database topics and written only by Matheus. In November 2015, more authors specialized in other Oracle-related technologies joined Matheus, and the blog was renamed to GrepOra.com.

The common thing is that we all work with Oracle in different ways: Database, Middleware, Integration and Application. Someday we realized we were always having conversations, frequently about Oracle stuff. So we decided to run a "grep" on these conversations to filter the ones related to Oracle and share them.
And this is the origin of the name "GrepOra.com" (or |GREP ORA).

GrepOra is also our way to say "thank you" to the community and give back part of all the learning we got through blogs and communities.

Feel welcome to read the book and the blog, follow, share and get in touch with us.
It'll be great to have you with us in every post!

To know more about each one of us, access the Members section (https://grepora.com/members/) in the blog or take a look at the next pages.
To understand the posting schedule, access the Posting Schedule section in the blog (https://grepora.com/agenda/).

Sincerely,
Matheus, Maiquel, Dieison, Rafael, Jackson and Cassiano.

GrepOra.com in 2016…
Hello!
Today’s post is to share with you some information about what 2016 represented for
GrepOra.com .

In 2016, the first official year of GrepOra.com, we had over 26,000 accesses from more than 160 different countries. Indeed, almost every country in the world was on GrepOra.com this year. And this is spectacular considering we discuss very specific topics about Oracle Database and Applications.

The accesses are still growing every day, which shows us we can expect even bigger numbers to celebrate in 2017. See below our monthly access graph for 2016.

See below our access map for 2016.

Besides that, some accomplishments make us even prouder, like being recognized by OTN LA (Oracle Technology Network – Latin America) as a technical reference blog in the Database Management and Performance category.

Since this recognition in June, we have had the OTN LA logo on our blog page. Also, since August, we have the GUOB logo, as I participated in the last GUOB Tech Day as an Official Blogger.

All this, however, was not achieved only by having the blog. Since the beginning we organized the weekly posting schedule and the authors' pages. The consistency proves itself in our monthly access growth. The organization and commitment to keep posting relevant content is what led us to this point.

Notwithstanding the numbers, recognition, networking, self-improvement in technical skills, writing and mindset changes, there is a rewarding feeling of giving back to the community a little bit of what we took. Since I started as a DBA, I have sometimes relied on independent blogs and references in the most critical and desperate moments. This way, nothing is better than feeling useful to someone else. Don't you think?

Be sure we are preparing lots of news and even more quality content on GrepOra.com for the next year.

Thank you all and have a great New Year!

Matheus.

GrepOra Team
As already mentioned, we are a group of friends crazy enough to share our experiences with you and with the Oracle community, as a payback for our own consumption of it.

On the next pages you are going to see some of our backgrounds and brief professional descriptions. So, for now, we are only going to share some photos of our occasional meetings.

(Maiquel, Matheus, Cassiano, Jackson, Dieison and Rafael)


(First GrepOra meeting)

(Maiquel, Rafael, Jackson, Matheus, Dieison and Cassiano)
(Last GrepOra Meeting – by now)

And this is it!


We hope you enjoy the book and the experience.

Let us know what you think about the book and the blog. Reach out to us on social media like LinkedIn and Twitter. Collaborate and engage with the community!

Cheers!

About the Book
Hello!

Welcome to our book, our blog and our world: have fun and view/review/learn/laugh with some of our struggles and personal notes for our future selves.

These posts are basically our notes, with some of our discoveries and tips to review in the future. I believe everyone who works with this kind of technology has some personal notes, right? So, ours are being published to share with you.

We believe in sharing and mutual growth, so feel free to reach us to share your notes and tips, to fix anything you think is wrong or could be better explained, or anything else. This is not only the GrepOra team's blog. This is our blog. Which includes you.

Ok then. But we are publishing a book? Just why? Who is the target audience? How
should I read it? How is it structured? What should I expect?

Why:

This week we complete 2 years since the blog was created (at that time, called MatheusDBA). And we decided to review our best moments of these last years and compile them for you. It's, above all, a good opportunity to refresh some posts that are still relevant.

For whom:

We are compiling it as a best-moments review, to engage new readers with the best past posts and to reach those readers who enjoy reading a book on their mobile reading devices. Actually, we believe that writing material for this kind of media is the future (or the present). So if you prefer to read PDF files on your Kindle, iPad, or similar, especially if you prefer the offline mode to avoid being bothered by social media notifications, instant messages and the rest: this is for you.

How to read:

This is a book generated from the best posts on the blog. If you read the blog, you know that the posts are not continuous and mostly have no relation between them. So, this is a book to read some curiosities and tips, to learn and review some useful stuff and to be aware of some daily challenges and struggles of working with Oracle technologies. This is not a book to be read in sequence, by chapters or anything like that. Feel free to read whatever you want and whatever you feel is interesting for you, and to enrich your own experience with Oracle techs… Simple as that.

The structure:

There are no boundaries for our posts and ideas. Of course we have specialties, but everyone can write about everything. So there are no chapters with strictly fixed boundaries. However, to give it a little sense, we roughly organized the posts by following our blog categories:

• Oracle Database, RAC and Dataguard;

• ASM;

• Datapump, RMAN, Exp/Imp;

• Enterprise Manager;

• Application and Middleware;

• Golden Gate and Data Integrator;

• Linux and Shellscripts;

• Cloud Computing;

• Heterogeneous Databases;

• Web Development and APEX;

• PL/SQL and SQL Scripts;

• Errors and Bugs.

What to expect:

Basically: "to read some curiosities and tips, to learn and review some useful stuff and to be aware of some daily challenges and struggles of working with Oracle technologies". But mostly: to have fun! This is a book written by Oracle geeks for Oracle geeks.

Welcome to our world!

ADRCI Retention Policy and Ad-Hoc Purge
Script for all Bases
As you know, since 11g we have the Automatic Diagnostic Repository (ADR). To better manage it, we also have a command-line interface, called ADRCI.
The ADR contains all the diagnostic information for the database (logs, traces, incidents, problems, etc.).

ADR Structure

ADRCI is a powerful tool, but unfortunately misunderstood and under-used.

But I'm not going to retype all that Tim Hall has already done for us.

The objective of this post, however, isn't to show all the good things in ADRCI, but to share how to configure the retention policy and a quick script to clean the logs of all the homes on the server:

1. Setting the Retention Policy:

First thing is to understand these two guys:
– LONGP_POLICY (long term): defaults to 365 days and relates to things like incidents and Health Monitor warnings.
– SHORTP_POLICY (short term): defaults to 30 days and relates to things like trace and core dump files.

They are set by default to 720 hours (30 days) for the short term and 8760 hours (one year) for the long term category. See:

adrci> show control

ADR Home = /u01/app/oracle/diag/rdbms/mydb/mydb:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                             LAST_AUTOPRG_TIME
-------------------- -------------------- -------------------- ----------------------------------------- -----------------------------------------
1067873839           720                  8760                 2013-08-10 15:42:04.686159 +00:00         2016-04-25 20:53:28.159552 +00:00

We can change this by using the ADRCI command 'set control'. Look at this example, changing the retention to 15 days for the short term policy attribute (note it's defined in hours!):

adrci> set control (SHORTP_POLICY = 360)
adrci> show control

ADR Home = /u01/app/oracle/diag/rdbms/mydb/mydb:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                             LAST_AUTOPRG_TIME
-------------------- -------------------- -------------------- ----------------------------------------- -----------------------------------------
1067873839           360                  8760                 2016-04-29 13:30:03.361811 +00:00         2016-04-25 20:53:28.159552 +00:00

Now let’s run a “Purge” using this parameter:

adrci> purge

And see last autopurge:

adrci> show control

ADR Home = /u01/app/oracle/diag/rdbms/mydb/mydb:
*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                             LAST_AUTOPRG_TIME
-------------------- -------------------- -------------------- ----------------------------------------- -----------------------------------------
1067873839           360                  8760                 2016-04-29 13:30:03.361811 +00:00         2016-04-29 13:30:29.153407 +00:00

2. Ad-Hoc Purge Script:

Manual execution basically uses the "purge -age" clause. The important thing is that in this case you must use minutes instead of hours. Pay attention!

There are a lot of scripts on the net, but my personal script for ad-hoc/manual purges is:

AGE7DAYS=10080
AGE10DAYS=14400
AGE15DAYS=21600
AGE30DAYS=43200
PURGETARGET=$AGE15DAYS

for f in $( adrci exec="show homes" | grep -v "ADR Homes:" ); do
  echo "Purging ${f}:"
  adrci exec="set home $f; purge -age $PURGETARGET;"
done
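Just as an addition (not in the original post): purge also accepts a -type clause, in case you want to clean only one kind of content. A minimal sketch, assuming the home below is one of those listed by "show homes":

adrci> set home diag/rdbms/mydb/mydb
adrci> purge -age 10080 -type trace

This would remove only trace files older than 7 days (10080 minutes) from that home.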

That’s it!
Have a nice day!
Matheus.

High CPU usage by LMS and Node
Evictions: Solved by Setting
“_high_priority_processes”
Another thing that may help you in environments with highly interdependent
applications:

Our environment has a high rate of block exchanges over the interconnect and, as a consequence, high CPU usage by the Global Cache Services (GCS)/Lock Manager Server processes (LMS).

This way, for each little latency on the interconnect interface, we were having a node eviction, with all the impacts to the legacy application you can imagine (without GridLink or any solution to make the relocation 'transparent', as is usual for legacy applications) and, of course, the business impact.

Oracle obviously suggested that we reduce the block concurrency over the cluster nodes by grouping the application by affinity. But it's just not applicable to our environment…

When nothing seemed to help, the workaround came from here: Top 5 Database
and/or Instance Performance Issues in RAC Environment (Doc ID 1373500.1) .

Here is our change:

boesing@proddb> alter system set "_high_priority_processes"='LMS*|LGWR|VKTM' scope=spfile sid='*';

System altered.

No magic, but the problem stopped happening. After that, we started having some warnings about clock synchronization over the cluster nodes in the CRS alerts. Like this:

CRS-2409: The clock on host proddb1 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.

I believe it happens because VKTM lost priority. But it's OK: the node evictions have stopped!

Matheus.

Application Looping Until Lock a Row with
NOWAIT Clause
Yesterday I handled an interesting situation:
A batch process stayed on the "SQL*Net message from client" event, but its last_call_et was always 1 or 0. Seems OK, just some client contention to send the commands to the DBMS, right? Nope.

It was caused by a loop in the application code "waiting" for a row lock but without any DBMS wait event (something like "select * from table for update nowait"). Take a look at how it was identified below.

First, the session with no SQL_ID, no wait events and last_call_et=0 of a "BATCH_PROCESS" user:

proddb2> @sid
Sid: 9796
Inst:

LAST_CALL_ET SQL_ID EVENT                       STATUS   SID  SERIAL# INST_ID USERNAME
------------ ------ --------------------------- -------- ---- ------- ------- -------------
           0        SQL*Net message from client INACTIVE 9796   45117       2 BATCH_PROCESS

proddb2> @trace
Enter value for sid: 9796
Enter value for serial: 45117
PL/SQL procedure successfully completed.

As you see, with no idea of what was happening, I started a trace. The trace was stuck at this:

*** 2015-06-15 14:03:25.755
WAIT #4574470448: nam='SQL*Net message from client' ela=993072 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833326636999
CLOSE #4574470448:c=10,e=15,dep=0,type=3,tim=12833326637228
PARSE #4574470448:c=25,e=41,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1139820409,tim=12833326637286
BINDS #4574470448:
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=01 fl2=1000000 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=110a8d0d8  bln=22  avl=05  flg=05
  value=5022011
WAIT #4574470448: nam='gc cr block 2-way' ela= 709 p1=442 p2=5944 p3=8483 obj#=0 tim=12833326638533
WAIT #4574470448: nam='gc cr block 2-way' ela= 541 p1=3 p2=2088264 p3=4367 obj#=0 tim=12833326639352
WAIT #4574470448: nam='gc cr block 2-way' ela= 651 p1=442 p2=5944 p3=8483 obj#=0 tim=12833326641673
WAIT #4574470448: nam='enq: TX - row lock contention' ela= 1093 name|mode=1415053318 usnobj#=23141074 tim=12833326643029
EXEC #4574470448:c=1776,e=5836,p=0,cr=117,cu=1,mis=0,r=0,dep=0,og=1,plh=1139820409,tim=12833326643150
ERROR #4574470448:err=54 tim=12833326643172
WAIT #4574470448: nam='SQL*Net break/reset to client' ela= 9 driver id=1413697536 break?=1 p3=0 obj#=23141074 tim=12833326643373
WAIT #4574470448: nam='SQL*Net break/reset to client' ela= 503 driver id=1413697536 break?=0 p3=0 obj#=23141074 tim=12833326643891
WAIT #4574470448: nam='SQL*Net message to client' ela= 2 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833326643915

AHÁ!
Did you see the “err=54” there? Yes. You know this error:

ORA-00054: Resource busy and acquire with NOWAIT specified

It's caused by a

SELECT FOR UPDATE NOWAIT

in the code.
But this select is inside a loop, so the session doesn't go ahead until it gets the lock.
(Obviously it could be coded with some treatment/better logic for this loop and its errors, buuuut…)

What can we do now?

The easy way is to discover the holding session and kill it.
And sometimes the easy way is the best way.

For that, we use the "obj#" and "value" highlighted in the trace.
As I know the application, I know that the field used in all the "where clauses" is the "RECNO" column. But if you don't, you need to discover it. With this information in mind:

proddb2> select * from dba_objects where object_id='23141074';

OWNER                          OBJECT_NAME
------------------------------ ----------------
OWNER_EXAMPLE                  TABLE_XPTO

proddb2> select * from OWNER_EXAMPLE.TABLE_XPTO where recno=5022011;

COL_KEY FSAMED0 FSAMED1 FSMNEG1 FSMNEG2 FSMNEG3 COL_DATE            RECNO
------- ------- ------- ------- ------- ------- ------------------- -------
1002974       0       0  -516.8       0       0 15/06/2015 00:00:00 5022011

Ok, now I know the row that is being held by the other session.

Let's discover which session is causing the lock by locking the row myself (but, in my case, without the "NOWAIT" clause, to have time to find the holder):

proddb5> select * from OWNER_EXAMPLE.TABLE_XPTO where recno=5022011 for update;

In another sqlplus session:

proddb2> @me

INST_ID   SID SERIAL# USERNAME        EVENT                         BLOCKING_SE BLOCKING_SESSION BLOCKING_INSTANCE
------- ----- ------- --------------- ----------------------------- ----------- ---------------- -----------------
      5 14174     479 MATHEUS_BOESING enq: TX - row lock contention VALID                  11006                 1
      2  4233   12879 MATHEUS_BOESING PX Deq: Execution Msg         NOT IN WAIT
      1 15410    7697 MATHEUS_BOESING PX Deq: Execution Msg         NOT IN WAIT

AHÁ again!
The SID 11006. Let’s see who is there:

proddb2> @sid
Sid: 11006
Inst:

SQL_ID         SEQ# EVENT                         STATUS   SID  SERIAL# INST_ID USERNAME
------------- ----- ----------------------------- ------ ----- ------- ------- -------------------------
9jzm6vn5j06js 24919 enq: TX - row lock contention ACTIVE 11006   44627       1 DBLINK_OTHER_BATCH_SCHEMA

Ok, it's another session, from a different batch process in a remote database, holding this row. As it's less relevant, let's kill it! Muahaha!
Then, you'll see, my session gets the lock and is in the middle of a transaction:

proddb1> @kill
*** sid : 11006 serial : 44627 ***
System altered.

proddb1> @me

INST_ID   SID SERIAL# USERNAME        EVENT                 BLOCKING_SE BLOCKING_SESSION BLOCKING_INSTANCE
------- ----- ------- --------------- --------------------- ----------- ---------------- -----------------
      5 14174     479 MATHEUS_BOESING transaction           UNKNOWN
      2  4332   56037 MATHEUS_BOESING PX Deq: Execution Msg NOT IN WAIT
      1 12058       9 MATHEUS_BOESING class slave wait      NO HOLDER
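By the way, the helper scripts used along this post (@sid, @me, @kill) are not listed in it. A minimal sketch of the blocker lookup that something like @me relies on, using gv$session (the username filter is hypothetical; adjust it to your case):

select inst_id, sid, serial#, username, event,
       blocking_session_status, blocking_session, blocking_instance
  from gv$session
 where username = 'MATHEUS_BOESING';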

To release the locked row to my main process, let's commit suicide (kill my own session, in this case, which is the one holding the row lock right now):

proddb5> @kill
*** sid : 14174 serial : 479 ***
System altered.

After killing all the holding sessions, my BATCH_PROCESS simply went on!

Take a look at the trace (running OK):

WAIT #4576933904: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981531019
FETCH #4576933904:c=45,e=71,p=0,cr=3,cu=0,mis=0,r=5,dep=0,og=1,plh=419358542,tim=12833981531062
WAIT #4576933904: nam='SQL*Net message from client' ela= 562 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981531654
WAIT #4576933904: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981531788
FETCH #4576933904:c=55,e=86,p=0,cr=2,cu=0,mis=0,r=5,dep=0,og=1,plh=419358542,tim=12833981531826
WAIT #4576933904: nam='SQL*Net message from client' ela= 715 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981532576
WAIT #4576933904: nam='SQL*Net message to client' ela= 4 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981532721
FETCH #4576933904:c=61,e=96,p=0,cr=2,cu=0,mis=0,r=5,dep=0,og=1,plh=419358542,tim=12833981532758
WAIT #4576933904: nam='SQL*Net message from client' ela= 600 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981533617
WAIT #4576933904: nam='SQL*Net message to client' ela= 3 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981534163
FETCH #4576933904:c=52,e=82,p=0,cr=2,cu=0,mis=0,r=5,dep=0,og=1,plh=419358542,tim=12833981534203
WAIT #4576933904: nam='SQL*Net message from client' ela= 517 driver id=1413697536 #bytes=1 p3=0 obj#=23141074 tim=12833981534752

Now, with the problem solved, let's disable the trace and continue with the other daily tasks…

proddb2> @untrace
Enter value for sid: 9796
Enter value for serial: 45117
PL/SQL procedure successfully completed.

I hope it was useful!

If it helped you, leave a comment!

See ya!
Matheus.

VKTM Hang – High CPU Usage
Today a database (RHEL 6, single instance, 11.2.0.4) suddenly started to "explode" CPU in the VKTM process (100% CPU).
After some minutes (completely) lost on support.oracle.com (there were just a few notes about binary permissions on Solaris), I decided to MacGyver it by myself.

By Oracle words: “ VKTM acts as a time publisher for an Oracle instance. VKTM
publishes two sets of time: a wall clock time using a seconds interval and a higher
resolution time (which is not wall clock time) for interval measurements. The VKTM
timer service centralizes time tracking and offloads multiple timer calls from other
clients. ”

This way, my solution:

SQL> alter system set "_high_priority_processes"='LMS*' scope=spfile;

System altered.

And restart the database, of course.

So, VKTM is no longer a "priority" process. The problem was solved.
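Just as a note (not from the original post): on Linux, a quick way to watch the CPU taken by the VKTM background process is something like the sketch below; the process name follows the ora_vktm_<SID> pattern:

# show CPU usage of the VKTM background process
# (the [o] trick keeps grep from matching itself)
ps -eo pid,pcpu,etime,args | grep [o]ra_vktm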

Another possibility would be to disable VKTM (through the undocumented boolean parameter "_disable_vktm"). But I wanted to keep it running, changing as little of the database configuration as possible, just reducing its priority.

KB:
Master Note: Troubleshooting Oracle Background Processes (Doc ID 1509616.1)
A great post about hidden parameters: http://oracleinaction.com/undocumented-params-11g/
Another reference: http://www.orafaq.com/parms/index.htm

Hugs!
Matheus.

Oracle TPS: Evaluating Transaction per
Second
Sometimes this information has a bit of a 'myth atmosphere' around it… Maybe because Oracle doesn't expose this information very clearly, and it's not the most useful metric.
But for comparison with other systems, and also to compare performance/'throughput' between different infrastructure/database configurations, it can be useful.

It can be seen in the AWR, in the "Report Summary" section, under "Load Profile", in the "Transactions" item:

But what if you want to calculate it through a SQL query?
And what if you want a history of this metric?

I found a reference for this calculation here, using v$sysstat. It's the only reference I found, and it's in the 10g documentation… It defines this metric as:

Number of Transactions = (DeltaCommits + DeltaRollbacks) / Time

DeltaCommits and DeltaRollbacks are, respectively, the "user commits" and "user rollbacks" statistics.

Here goes a possible SQL to do that:

WITH hist_snaps AS
 (SELECT instance_number,
         snap_id,
         round(begin_interval_time, 'MI') datetime,
         (begin_interval_time + 0 -
          LAG(begin_interval_time + 0)
            OVER (PARTITION BY dbid, instance_number ORDER BY snap_id)) * 86400 diff_time
    FROM dba_hist_snapshot),
hist_stats AS
 (SELECT dbid,
         instance_number,
         snap_id,
         stat_name,
         VALUE - LAG(VALUE)
           OVER (PARTITION BY dbid, instance_number, stat_name ORDER BY snap_id) delta_value
    FROM dba_hist_sysstat
   WHERE stat_name IN ('user commits', 'user rollbacks'))
SELECT datetime,
       ROUND(SUM(delta_value) / 3600, 2) "Transactions/s"
  FROM hist_snaps sn, hist_stats st
 WHERE st.instance_number = sn.instance_number
   AND st.snap_id = sn.snap_id
   AND diff_time IS NOT NULL
 GROUP BY datetime
 ORDER BY 1 desc;
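For the current rate, instead of the AWR history, a two-sample approach over v$sysstat also works. A minimal sketch (assuming execute privilege on DBMS_LOCK and serveroutput on; the 60-second window is arbitrary):

DECLARE
  v_start NUMBER;
  v_end   NUMBER;
BEGIN
  -- first sample of the cumulative commits + rollbacks counters
  SELECT SUM(value) INTO v_start
    FROM v$sysstat
   WHERE name IN ('user commits', 'user rollbacks');
  DBMS_LOCK.SLEEP(60); -- sampling interval in seconds
  -- second sample, then print the delta per second
  SELECT SUM(value) INTO v_end
    FROM v$sysstat
   WHERE name IN ('user commits', 'user rollbacks');
  DBMS_OUTPUT.PUT_LINE('TPS: ' || ROUND((v_end - v_start) / 60, 2));
END;
/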

I like to use PL/SQL Developer to see this kind of data, and it allows us to make very good charts very quickly. I tried it on a small database here, just as an example:

Jedi Master Jonathan Lewis wrote a good post about transactions and this kind of AWR metric here.

See ya!
Matheus.

Leap Second and Impact for Oracle
Database
Don't know what this is? Oh boy, I suggest you take a look…

It may sound a little crazy, but it's about a universal adjustment of the atomic time. Something like that. To understand it, take a look at:
Something like that. To understand, take a look on:
http://www.meinberg.de/english/info/leap-second.htm
http://en.wikipedia.org/wiki/Coordinated_Universal_Time
http://en.wikipedia.org/wiki/International_Atomic_Time
http://www.britannica.com/EBchecked/topic/136395/Coordinated-Universal-Time
http://www.britannica.com/EBchecked/topic/290686/International-Atomic-Time

Okey doke!
But what about the Oracle Database adjustment? Good news: nothing to do!

In Oracle words: “ The Oracle RDBMS needs no patches and has no problem with the
leap second changes on OS level. ”

But, attention!
If your application uses timestamps or sysdate, verify the adjustment at the OS level. If it consists of a "60" second, it can result in "ORA-01852", since 60 seconds is an illegal value for the date or timestamp datatype.
(Insert leap seconds into a timestamp column fails with ORA-01852 (Doc ID 1553906.1))

Other possibilities are documented in these notes:

NTP leap second event causing Oracle Clusterware node reboot (Doc ID 759143.1)
(Oracle VM and RHEL 4.4 to 6.2): Leap Second Hang – CPU Can Be Seen at 100% (Doc ID 1472421.1)
(OEM on Linux): Enterprise Manager Management Agent or OMS CPU Use Is Excessive near Leap Second Additions on Linux (Doc ID 1472651.1)

So, pay attention!

Here are other Oracle notes that I recommend taking a look at:


Leap seconds (extra second in a year) and impact on the Oracle database. (Doc
ID 730795.1)
Leap Second Time Adjustment (e.g. on June 30, 2015 at 23:59:59 UTC) and Its
Impact on Exadata Database Machine (Doc ID 1986986.1)
How Leap Second Affects The OS Clock on Linux and Oracle VM (Doc ID
1453523.1)
NOTE:1461363.1 – What Leap Second Affects Occur In Tuxedo?
NOTE:1553906.1 – Insert leap seconds into a timestamp column fails with
ORA-01852
NOTE:412160.1 – Updated DST Transitions and New Time Zones in Oracle RDBMS
and OJVM Time Zone File Patches
NOTE:1453523.1 – How Leap Second Affects The OS Clock on Linux and Oracle VM
NOTE:1019692.1 – Leap Second Handling in Solaris – NTPv3 and NTPv4
NOTE:1444354.1 – Strftime(3c) Does Not Show The Leap Second As 23:59:60
NOTE:1461606.1 – Any Effect of Leap Seconds to MessageQ?

Matheus.

HANGANALYZE Part 1
Hi all!
I realized I have some posts about database hangs, but no posts about hanganalyze, systemstate or ashdump usage. So let's fix that.
To organize the ideas, I'm going to split the subject into three posts. This first one is about hanganalyze.

See the second part of this post here: HANGANALYZE Part 2.

Ok, so let me quote the clearest Oracle words I could find:
"Hanganalyze tries to work out who is waiting for who by building wait chains, and then depending on the level will request various processes to dump their errorstack."

This is very similar to what we can do manually through v$wait_chains, but it's quicker and 'official', so let's use it!
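For reference, the manual approach would be something like this (a sketch over v$wait_chains, available since 11g; the column list is trimmed to the essentials):

select chain_id, instance, sid, sess_serial#,
       blocker_instance, blocker_sid, blocker_sess_serial#,
       wait_event_text, in_wait_secs
  from v$wait_chains
 order by chain_id;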

But before I show how you can do it, it's important to mention that Oracle does not recommend using 'numeric events' without an SR (MOS), according to Note 75713.1.

So, how to do it? Basically, in 2 ways:

1) ALTER SESSION SET EVENTS 'immediate trace name HANGANALYZE level LL';
   or, as an init parameter: EVENT="60 trace name HANGANALYZE level 5"
2) ORADEBUG hanganalyze LL

I prefer to use ORADEBUG on the database server if possible, given you already are facing some hang:

sqlplus / as sysdba
oradebug setmypid;
oradebug unlimit;
oradebug hanganalyze LL

For example, connected with sqlplus / as sysdba:

SQL> oradebug setmypid;
Statement processed.
SQL> oradebug unlimit;
Statement processed.
SQL> oradebug hanganalyze 3
Hang Analysis in /db/oracle/diag/rdbms/greporadb/greporadb/trace/greporadb_ora_2096.trc

What does this 'level' mean?

Level  Description                                             Comment
1      Very minimal output                                     Could be useful…
2      Minimal output                                          Useful for some cases…
3      Dump only processes thought to be in a hang             Most common level
4      Dump leaf nodes in wait chains                          You really need this info?
5      Dump all processes involved in wait chains              can be a lot!
6      Dump errorstacks of processes involved in wait chains   can be high overhead
10     Dump all processes                                      not a good idea…

But take care! Using too high a level will cause lots of processes to be asked to dump their stacks. This can be very expensive…
In summary: remember Note 75713.1!

What if you have a RAC?

SQL> oradebug setmypid
SQL> oradebug unlimit
SQL> oradebug setinst all
SQL> oradebug -g def hanganalyze LL

OR

SQL> oradebug setmypid
SQL> oradebug unlimit
SQL> oradebug -g all hanganalyze LL

Nice, and what does the hanganalyze output look like?

Here goes an Oracle example of the output:

==============
HANG ANALYSIS:
==============
Open chains found:
This process (below) is running
Chain 1 :  :
Below is a wait chain. Sid 16 waits for Sid 17
Chain 2 :  :
--
Other chains found:
Chain 3 :  :
Extra information that will be dumped at higher levels:
This just shows which nodes would be dumped at each level
[level  4] : 2 node dumps -- [LEAF] [LEAF_NW] [IGN_DMP]
[level  5] : 2 node dumps -- [NLEAF]
[level 10] : 10 node dumps -- [IGN]
State of nodes
All nodes are listed below. The "state" column shows the state that the session is in
([nodenum]/sid/sess_srno/session/state/start/finish/[adjlist]/predecessor):
The first nodes are IGN (ignore)
[0]/1/1/0x826f94c0/IGN/1/2//none
[1]/2/1/0x826f9d2c/IGN/3/4//none
[2]/3/1/0x826fa598/IGN/5/6//none
[3]/4/1/0x826fae04/IGN/7/8//none
[4]/5/1/0x826fb670/IGN/9/10//none
[5]/6/1/0x826fbedc/IGN/11/12//none
[6]/7/1049/0x826fc748/IGN/13/14//none
[7]/8/1049/0x826fcfb4/IGN/15/16//none
[8]/9/1049/0x826fd820/IGN/17/18//none
[9]/10/1049/0x826fe08c/IGN/19/20//none
Below are LEAF nodes in various states
[12]/13/158/0x826ff9d0/LEAF_NW/21/22//none
[15]/16/416/0x82701314/NLEAF/23/26/[16]/none
[16]/17/941/0x82701b80/LEAF/24/25//15
[17]/18/344/0x827023ec/NLEAF/27/28/[16]/none
You are told which processes are being dumped
They will dump errorstacks to their own trace files
Dumping System_State and Fixed_SGA in process with ospid 18668
Dumping Process information for process with ospid 18656
Dumping Process information for process with ospid 18658
...
================================
PROCESS DUMP FROM HANG ANALYZER:
================================
This process dumps its errorstack and processstate. See for details of this information
----- Call Stack Trace -----
calling call entry ...
======================================
END OF PROCESS DUMP FROM HANG ANALYZER
======================================
====================
END OF HANG ANALYSIS
====================

And what about the node states?

State    Meaning
IGN      Ignore
LEAF     A waiting leaf node
LEAF_NW  A running (using CPU?) leaf node
NLEAF    An element in a chain but not at the end (not a leaf)

Cool, right?
It is a very useful tool to analyze hang chains, and it also generates files that can be added to an SR, if needed.

There is an observation in MOSC about the "DUMP" keyword; let me reproduce it:

“Note that in 11g+ the “ORADEBUG HANGANALYZE NN” form will also try to include
SHORT_STACK dumps in the hanganalyze chains for level 3 and higher. Short stacks
will NOT be included in event triggered HANGANALYZE (like from ALTER SESSION)
nor from “ORADEBUG DUMP HANGANALYZE nn”, only from ORADEBUG
HANGANALYZE nn (no DUMP keyword).”

OK, but I'm in a hang situation: what if I can't log in as SYSDBA to my database?
In that case, wait for next week's post. There is a very useful kludge.

# KB:
Troubleshooting Database Hang Issues (Doc ID 1378583.1)
How to Collect Diagnostics for Database Hanging Issues (Doc ID 452358.1)
Troubleshooting Database Contention With V$Wait_Chains (Doc ID 1428210.1)
EVENT: HANGANALYZE – Reference Note (Doc ID 130874.1)
Important Customer information about using Numeric Events (Doc ID 75713.1)

Matheus.

HANGANALYZE Part 2
Hi!
See the first part of this post here: HANGANALYZE Part 1.

This post is just a complement, with a little kludge I liked…

First, let's remember that hanganalyze is used when you are facing some hang in your environment, of course.

But what if you are having difficulty accessing the database, even with '/ as sysdba'?

You can create a 'preliminary connection', without creating a session, like this:

sqlplus -prelim / as sysdba

This 'feature' has been available since Oracle 10g, and it basically skips the session creation part (which could block) when logging on as SYSDBA.

When you log on normally (even as SYSDBA), this is what happens:


1) A new Oracle process is started
2) The new process attaches to SGA shared memory segments
3) The new process allocates process and session state objects and initializes new
session structures in SGA

Step 3, obviously, can hit some 'lock', since it is allocating (locking) memory structures (usually latches/KGX mutexes).
So the preliminary connection consists of not executing step 3. And this is the reason it solves 'memory hang' situations…

But there is another observation: with -prelim you are able to get a systemstate or an ashdump, but since 11.2.0.2 you cannot get a hanganalyze. The statements are processed anyway:

SQL> oradebug setmypid;
Statement processed.
SQL> oradebug unlimit;
Statement processed.
SQL> oradebug hanganalyze 3
Statement processed.

Uuups, and what if I get this error in the trace file?

ERROR: Can not perform hang analysis dump without a process state object and a
session state object.

No problem, MacGyver can be applied again; there is a kludge for the kludge: you can use another ospid to generate the hanganalyze. It's not recommended to use a vital process (just to mention).
I listed some sessions connected to the database and used one of them to generate the hanganalyze:

[oracle@devdb09]$ ps -ef | grep greporadb | grep LOCAL=NO | head
oracle 2418    1  0 13:54 ?  00:00:00 oraclegreporadb (LOCAL=NO)
oracle 2420    1  0 13:54 ?  00:00:00 oraclegreporadb (LOCAL=NO)
oracle 2422    1  0 13:54 ?  00:00:00 oraclegreporadb (LOCAL=NO)
oracle 2565    1  0 13:55 ?  00:00:00 oraclegreporadb (LOCAL=NO)
oracle 2567    1  0 13:55 ?  00:00:00 oraclegreporadb (LOCAL=NO)
oracle 2569    1  0 13:55 ?  00:00:00 oraclegreporadb (LOCAL=NO)
oracle 2571    1  0 13:55 ?  00:00:00 oraclegreporadb (LOCAL=NO)
oracle 2573    1  0 13:55 ?  00:00:00 oraclegreporadb (LOCAL=NO)
oracle 2575    1  0 13:55 ?  00:00:00 oraclegreporadb (LOCAL=NO)
oracle 2577    1  0 13:55 ?  00:00:00 oraclegreporadb (LOCAL=NO)

[oracle@devdb09 trace]$ sqlplus -prelim / as sysdba
SQL> oradebug setospid 2577
Oracle pid: 133, Unix process pid: 2577, image: oracle@devdb09
SQL> oradebug dump hanganalyze 3
Statement processed.
SQL> exit
Disconnected from ORACLE

Ok, now the hanganalyze was generated in that spid's trace file. Let's see:

[oracle@devdb09 trace]$ ls -lrt | grep 2577
-rw-rw---- 1 oracle oracle  125 Jun 16 14:02 greporadb_ora_2577.trm
-rw-rw---- 1 oracle oracle 2772 Jun 16 14:02 greporadb_ora_2577.trc
[oracle@devdb09 trace]$ cat greporadb_ora_2577.trc | grep hanganalyze
Received ORADEBUG command (#1) 'dump hanganalyze 3' from process 'Unix process pid: 4068, image: '
Finished processing ORADEBUG command (#1) 'dump hanganalyze 3'

Awesome, huh?

There are some similar posts about this:

Tanel Poder: Oradebug hanganalyze with a prelim connection and "ERROR: Can not perform hang analysis dump without a process state object and a session state object"
Arup Nanda: Diagnosing Library Cache Latch Contention: A Real Case Study
How to log on even when SYSDBA can't do so?
How to Use HANGANALYZE and How to Interpret HANGANALYZE trace files

Matheus.

ASHDUMP for Instance Crash/Hang ‘Post
Mortem’ Analysis
Hi guys!
In recent weeks I talked about ASHDUMP in the post HANGANALYZE Part 1. Let's think about it now…

Imagine the situation: the database is hanging, you cannot find what is going on and you decide to restart the database, OR your leader/boss yelled at you to do so, OR you know the database is going to go down anyway…
Everyone has been through this kind of situation at least once. After the restart everything becomes OK and the 'problem' is solved. But now you are being asked for an RCA (what caused this situation?). The database was hanging, so no snapshot was closed and you lost the ASH info…

For these cases, I think it is very useful to take one minute before the database goes down to generate an ASHDUMP. It's very simple:

sqlplus / as sysdba
oradebug setmypid
oradebug unlimit
oradebug dump ashdumpseconds 30
oradebug tracefile_name

An example of execution:

SQL> oradebug setmypid
Statement processed.
SQL> oradebug unlimit
Statement processed.
SQL> oradebug dump ashdumpseconds 30
Statement processed.
SQL> oradebug tracefile_name
/db/oracle/diag/rdbms/grepora/GREPORA/trace/GREPORA_ora_22024.trc

The command above generates an ASH dump of the last 30 seconds into the trace file. You can also generate an ASHDUMP for the last N minutes, replacing the ashdumpseconds line with:

SQL> oradebug dump ashdump 5

Another way to do it is:

SQL> alter session set events 'immediate ashdump(5)';

Or the equivalent for ASHDUMPSECONDS:

SQL> alter session set events 'immediate ashdumpseconds(300)';

If you cannot create a connection to the database with SQL*Plus (even as SYSDBA), given it's a hang situation, you can use a preliminary connection, as shown in the post HANGANALYZE Part 2.

The trace file is generated with instructions to import the data with SQL*Loader. This way you can do your 'post mortem' analysis.
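Just to illustrate the flow (file names here are hypothetical): after extracting the control file section from the trace into, say, ashldr.ctl, the load would be something like:

# the trace file itself is the datafile; non-data header lines are rejected and skipped
sqlldr userid=system/password control=ashldr.ctl data=GREPORA_ora_22024.trc errors=1000000

Then the ashdump table created in step 1 below can be queried by SAMPLE_TIME, SESSION_ID, SQL_ID and so on, just like the ASH views.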

An example of an ASHDUMP file:

ASHDUMPSECONDS
=====================================================
Processing Oradebug command 'dump ashdumpseconds 30'
ASH dump
**************** SCRIPT TO IMPORT ****************
------------------------------------------
Step 1: Create destination table
------------------------------------------
CREATE TABLE ashdump AS
SELECT * FROM SYS.WRH$_ACTIVE_SESSION_HISTORY WHERE rownum < 0
----------------------------------------------------------------
Step 2: Create the SQL*Loader control file as below
----------------------------------------------------------------
load data
infile * "str '\n####\n'"
append
into table ashdump
fields terminated by ',' optionally enclosed by '"'
( SNAP_ID CONSTANT 0, DBID, INSTANCE_NUMBER, SAMPLE_ID,
  SAMPLE_TIME TIMESTAMP ENCLOSED BY '"' AND '"' "TO_TIMESTAMP(:SAMPLE_TIME,'MM-DD-YYYY HH24:MI:SSXFF')",
  SESSION_ID, SESSION_SERIAL#, SESSION_TYPE, USER_ID,
  SQL_ID, SQL_CHILD_NUMBER, SQL_OPCODE, FORCE_MATCHING_SIGNATURE,
  TOP_LEVEL_SQL_ID, TOP_LEVEL_SQL_OPCODE, SQL_PLAN_HASH_VALUE,
  SQL_PLAN_LINE_ID, SQL_PLAN_OPERATION#, SQL_PLAN_OPTIONS#, SQL_EXEC_ID,
  SQL_EXEC_START DATE 'MM/DD/YYYY HH24:MI:SS' ENCLOSED BY '"' AND '"' ":SQL_EXEC_START",
  PLSQL_ENTRY_OBJECT_ID, PLSQL_ENTRY_SUBPROGRAM_ID, PLSQL_OBJECT_ID, PLSQL_SUBPROGRAM_ID,
  QC_INSTANCE_ID, QC_SESSION_ID, QC_SESSION_SERIAL#,
  EVENT_ID, SEQ#, P1, P2, P3, WAIT_TIME, TIME_WAITED,
  BLOCKING_SESSION, BLOCKING_SESSION_SERIAL#, BLOCKING_INST_ID,
  CURRENT_OBJ#, CURRENT_FILE#, CURRENT_BLOCK#, CURRENT_ROW#,
  TOP_LEVEL_CALL#, CONSUMER_GROUP_ID, XID, REMOTE_INSTANCE#, TIME_MODEL,
  SERVICE_HASH, PROGRAM, MODULE, ACTION, CLIENT_ID, MACHINE, PORT, ECID )
---------------------------------------------------
Step 3: Load the ash rows dumped in this trace file
---------------------------------------------------
sqlldr userid/password control=ashldr.ctl data= errors=1000000
---------------------------------------------------
####
4092499541,1,93736863,"06-15-2016 16:58:00.581442000",118,13423,1,152,"a3dj32s553jwz",0,3,16794496187212003770,"",0,3121342805,1,20,0,27310348,"06/15/2016 16:57:59",0,0,0,0,0,0,0,310662678,642,1415053318,9371681,422864,0,511985,590,62515,1,289642,7,1595,0,94,12553,,0,1024,3427055676,"","","","","devapp16",35734,""
####
4092499541,1,93736863,"06-15-2016 16:58:00.581442000",309,869,1,0,"",65535,0,0,"",0,0,0,0,0,0,"",0,0,0,0,0,0,0,112941199,13,0,0,0,0,499675,4294967295,0,1,4294967295,0,0,0,86,12553,,0,0,3427055676,"sqlplus@devdb09 (TNS V1-V3)","sqlplus@devdb09 (TNS V1-V3)","","","devdb09",0,""
####

*** 2016-06-15 16:58:13.931
Oradebug command 'dump ashdumpseconds 30' console output:

Very nice, right?

Matheus.

SYSTEMSTATE DUMP
Hi guys!
I already posted about hanganalyze (part 1, part 2) and ASHDUMP. Now, in the same 'package', let me show you the SYSTEMSTATE DUMP.

A systemstate is basically made of the process state of every process in the instance (or instances) at the time the systemstate is taken.
Through a systemstate it's possible to identify enqueues, row cache locks, mutexes, library cache pins and locks, latch free situations, and other kinds of chains.

It's a good thing to attach to an SR, but it's quite hard to get used to reading/interpreting the file. To understand exactly how to read a systemstate, I recommend the best source: read the manual!
The doc Reading and Understanding Systemstate Dumps (Doc ID 423153.1) has a very good explanation with examples; I'm not able to do it better.

What I can do is share the SYSTEMSTATE levels. I had some difficulty finding them…
But before I show how you can do it, it's important to mention that Oracle does not recommend using 'numeric events' without an SR (MOS), according to Note 75713.1.

So, systemstate dump has several levels:

Level  Content
2      dump (not including the lock element)
10     dump
11     dump + global cache of RAC
256    short stack (function stack)
258    256 + 2: short stack + dump (not including the lock element)
266    256 + 10: short stack + dump
267    256 + 11: short stack + dump + global cache of RAC

Levels 11 and 267 dump the global cache, which generates a large trace file; under normal circumstances they are not recommended.

Under normal circumstances, if there are not too many processes, it is recommended to use level 266, because it dumps the processes' function stacks, which can be used to analyze what each process is doing.
But generating the short stacks is time-consuming; if there are a lot of processes (say, 2000), it may take more than 30 minutes. In that case, you can generate level 10 or level 258: level 258 collects the short stacks, which level 10 does not, but collects less lock element data than level 10.

In addition, on RAC systems, pay attention to Bug 11800959 – A SYSTEMSTATE dump with level >= 10 in RAC dumps huge BUSY GLOBAL CACHE ELEMENTS – can hang / crash instances (Doc ID 11800959.8). The bug is fixed in 11.2.0.3. For RAC versions <= 11.2.0.2, when the system has been holding a lock element for a long time, executing a systemstate dump at level 10, 266 or 267 can cause a database hang or crash. It may be avoided by using level 258 instead.

To generate it:

oradebug setmypid;
oradebug unlimit;
oradebug dump systemstate 266
oradebug tracefile_name

OR, for the whole cluster:

oradebug setmypid;
oradebug unlimit;
oradebug -g all dump systemstate 266
oradebug tracefile_name

An example of execution:

SQL> oradebug setmypid;
Statement processed.
SQL> oradebug unlimit;
Statement processed.
SQL> oradebug dump systemstate 266
Statement processed.
SQL> oradebug tracefile_name
/db/oracle/diag/rdbms/grepora/GREPORA/trace/GREPORA_ora_18256.trc

If you cannot create a connection to the database with SQL*Plus (even as SYSDBA), given it's a hang situation, you can use a preliminary connection, as shown in the post "HANGANALYZE Part 2".

I'd recommend you read:


How to Collect Systemstate Dumps When you Cannot Connect to Oracle (Doc ID
121779.1)
Important Customer information about using Numeric Events (Doc ID 75713.1)

An example of a SYSTEMSTATE level 266 dumpfile:

*** 2016-06-15 16:59:00.180
Processing Oradebug command 'dump systemstate 266'
===================================================
SYSTEM STATE (level=10, with short stacks)
------------
System global information:
     processes: base 0xbb5b6850, size 1000, cleanup 0xbb63b780
     allocation: free sessions 0xbba6fb48, free calls (nil)
     control alloc errors: 0 (process), 0 (session), 0 (call)
     PMON latch cleanup depth: 0
     seconds since PMON's last scan for dead processes: 39
     system statistics:
     0 OS CPU Qt wait time
     727657760 Requests to/from client
     7106305 logons cumulative
     274 logons current
     231675649 opened cursors cumulative
     1737 opened cursors current
     43388218 user commits
     1682106 user rollbacks
     835078962 user calls
     752889391 recursive calls
     12720247 recursive cpu usage
     150 pinned cursors current
     15100586388 session logical reads
     0 session logical reads in local numa group
     0 session logical reads in remote numa group
[...]
PROCESS 1:
----------------------------------------
SO: 0xbb63a6d0, type: 2, owner: (nil), flag: INIT/-/-/0x00 if: 0x3 c: 0x3
 proc=0xbb63a6d0, name=process, file=ksu.h LINE:12616, pg=0
(process) Oracle pid:1, ser:0, calls cur/top: (nil)/(nil)
          flags : (0x20) PSEUDO
          flags2: (0x0), flags3: (0x10)
          intr error: 0, call error: 0, sess error: 0, txn error 0
          intr queue: empty
 ksudlp FALSE at location: 0
 (post info) last post received: 0 0 0
             last post received-location: No post
             last process to post me: none
             last post sent: 0 0 0
             last post sent-location: No post
             last process posted by me: none
 (latch info) wait_event=0 bits=0
 O/S info: user: , term: , ospid: (DEAD)
 OSD pid info: Unix process pid: 0, image: PSEUDO
 ----------------------------------------
 SO: 0x6000c838, type: 5, owner: 0xbb63a6d0, flag: INIT/-/-/0x00 if: 0x3 c: 0x3
  proc=(nil), name=kss parent, file=kss2.h LINE:138, pg=0
 PSO child state object changes :
 Dump of memory from 0x00000000BB5BA8E8 to 0x00000000BB5BAAF0
 0BB5BA8E0          00000000 00000000           [........]
 0BB5BA8F0 00000000 00000000 00000000 00000000  [................]
   Repeat 31 times
PROCESS 2: PMON
----------------------------------------
SO: 0xbb63b780, type: 2, owner: (nil), flag: INIT/-/-/0x00 if: 0x3 c: 0x3
 proc=0xbb63b780, name=process, file=ksu.h LINE:12616, pg=0
(process) Oracle pid:2, ser:1, calls cur/top: 0xbaea88a8/0xbaea88a8
          flags : (0xe) SYSTEM
          flags2: (0x0), flags3: (0x10)
          intr error: 0, call error: 0, sess error: 0, txn error 0
          intr queue: empty
 ksudlp FALSE at location: 0
 (post info) last post received: 0 0 16
             last post received-location: ksu.h LINE:13945 ID:ksupsc
             last process to post me: bb75a490 25 0
             last post sent: 0 0 19
             last post sent-location: ksu.h LINE:13957 ID:ksuxfd
             last process posted by me: bb63b780 1 14
 (latch info) wait_event=0 bits=0
 Process Group: DEFAULT, pseudo proc: 0xbba5e848
 O/S info: user: oracle, term: UNKNOWN, ospid: 14438
 OSD pid info: Unix process pid: 14438, image: oracle@devdb09 (PMON)
 Short stack dump:
[...]

Matheus.

Upgrade your JDBC and JDK before
Upgrade your Database to 12c Version!
Ok, now everyone is upgrading to 12c, right? Thank God; this version was released in 2013!

But there are some things to be aware of when planning an upgrade, especially regarding old applications and legacy. And pay attention: not all of the requirements are inside the database. That's the case with the JDBC version requirement.

The Database 12c documentation explicitly mentions that JDBC versions 11.1.x and below are not supported anymore. It doesn't mean they don't work; they are just unsupported and you'll have no assistance from MOS if you need it. Better to avoid, right?

Anyway, checking the JDBC support matrix, if you are on version 11.2 or below you have been unsupported since August 2015. So Database 12c is helping you, who don't have a patching policy, to keep on the right path. Thanks, Database 12c!

If this is your situation, I highly recommend you upgrade directly to JDBC version 7, the latest available by now. See the JDBC version matrix:

But test! Test in your dev/test/QA environments before upgrading the production environment!

Why? Because JDBC also has its own compatibility matrix. JDBC 7, for example, demands your JDK to be at least version 7 (released in 2011!). So, you need to be at least on JDK version 6, as you can see below.
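By the way (not from the original post), a quick way to check which JDBC driver version you actually have is to ask the jar itself: ojdbc jars print their version banner when executed, and the manifest carries it too. A sketch (the jar path is illustrative):

# print the version banner embedded in the driver jar
java -jar $ORACLE_HOME/jdbc/lib/ojdbc6.jar
# or inspect the manifest directly
unzip -p $ORACLE_HOME/jdbc/lib/ojdbc6.jar META-INF/MANIFEST.MF | grep -i version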


OK doke?

Some interesting links for you:


Verifying a JDBC Client Installation
What are the various supported Oracle database version vs JDBC compliant versions
vs JDK version supported?
Checking the Oracle JDBC Driver Version on a Weblogic Server (by Cristóbal Soto)

Matheus.

Unplug/Plug PDB between different Clusters
Everyone tests, writes and shows how to move pluggable databases between containers (CDBs) in the same cluster, but far fewer write/show about moving pluggable databases between different clusters, with isolated storage. So, let's do that:

OBS: To make it easy to understand, this post is about the migration of a pluggable database (BACENDB) from a cluster named ORAGRID12C and a container database named INFRACDB to the cluster CLBBGER12, into the container CDBBGER.
(The original post illustrates each step with screenshots.)

1. Access the container INFRACDB (Cluster GRID12C) and List the PDBs:

2. Shutdown BACENDB:

(of course it didn't work with a normal shutdown. I don't know what I was thinking… haha)

3. Unplug BACENDB (PDB) to XML (must be done from the Pluggable, as you see…), as sketched below:
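Since the screenshots are not reproduced here, a sketch of what steps 2 and 3 look like in SQL (the XML path is illustrative; run from the root container of INFRACDB):

alter pluggable database BACENDB close immediate instances=all;
alter pluggable database BACENDB unplug into '/migration/bacendb.xml';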

4. Created an ACFS filesystem (180G) to use as a "migration area", mounted on "/migration/" in the ORAGRID12C cluster:

5. Copied the datafiles and tempfiles to "/migration" through ASMCMD cp (example below):
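Something along these lines, for each file listed in the unplug XML (the file name, GUID directory and diskgroup below are hypothetical):

ASMCMD> cp +DGINFRA/INFRACDB/<PDB_GUID>/DATAFILE/USERS.280.912345678 /migration/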

6. ACFS exported and mounted as NFS on destination (CLBBGER12):

7. Pluggable created (plugged) on the new cluster (CDBBGER), using MOVE with FILE_NAME_CONVERT to send the files to the diskgroup +DGCDBBGER, as sketched below:
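In SQL, the plug step would be along these lines (a sketch using the names from the post; run from the root container of CDBBGER):

create pluggable database BACENDB using '/migration/bacendb.xml'
  move
  file_name_convert = ('/migration/', '+DGCDBBGER/');
alter pluggable database BACENDB open instances=all;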

7.1 How does it look in the alert.log?

7.2 How about the Datafiles?

7.3 Checking database by remote sqlplus:

8. Creating the services as needed:

9. Dropping Pluggable from INFRACDB:
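In SQL, something like this (a sketch; INCLUDING DATAFILES also removes the old file copies from the source diskgroup, while KEEP DATAFILES would retain them):

drop pluggable database BACENDB including datafiles;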

That's okay? Of course there are a few other ways to copy the files from one infrastructure to another, like scp rather than mount.nfs, RMAN copy, or other possibilities…

By the way, one of the restrictions of pluggable database migration is that both sides must use the same endian format. But it's possible to use RMAN Convert Platform and convert the datafiles through a filesystem, isn't it?
So, I guess it's not a necessary limitation. I must test it and write another post… haha

About this post: this link helped, but, again, it doesn't mention anything about "another" cluster/infra/storage.

Matheus.

Database Migration/Move with RMAN: Are
you sure nothing is missing?
Forced by destiny to make a migration using backup/restore (with a little downtime), how can you be sure nothing will be lost during the migration?
Here is a way: create your own data just before migrating.

It seems like a kludge, and it is… haha… But it works. Take a look:

# Original Database

SQL> shu immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup restrict;
ORACLE instance started.

Total System Global Area 2689060864 bytes
Fixed Size                  2229520 bytes
Variable Size            1996491504 bytes
Database Buffers          671088640 bytes
Redo Buffers               19251200 bytes
Database mounted.
Database opened.
SQL> create table matheus_boesing.migration (text varchar2(10));
Table created.
SQL> insert into matheus_boesing.migration values ('well done!');
1 row created.
SQL> commit;
Commit complete.
SQL> alter system switch logfile;
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> /
System altered.
SQL> shu immediate;
SQL> exit;
$ rman target /
RMAN> connect catalog rman_mydb/password@catalogdb
RMAN> run { backup archivelog all; }

# Destination Database

$ rman target /
RMAN> connect catalog rman_mydb/password@catalogdb
RMAN> run { recover database; }
$ sqlplus / as sysdba
SQL> select count(1), to_char(CHECKPOINT_TIME,'DD/MM/YYYY HH24:MI:SS')
       from V$DATAFILE_HEADER t
      group by to_char(CHECKPOINT_TIME,'DD/MM/YYYY HH24:MI:SS')
      order by 2;

  COUNT(1) TO_CHAR(CHECKPOINT_
---------- -------------------
        51 27/06/2015 22:15:28
-- All datafiles with synchronized headers...
SQL> alter database open read only;
-- If needed, you can do more recover, this way...
Database altered.
SQL> select * from matheus_boesing.migration;

TEXT
----------
well done!
-- Means no more recover is needed :)
SQL> shutdown immediate;
SQL> alter database open resetlogs;
Database altered.

And be Happy!

Matheus.

Vulnerability: Decrypting Oracle DBlink
password (<11.2.0.2)
Hi all,
It's not a new vulnerability, but it's a good thing to have a personal note about it. Besides the security problem, it can save you in situations where you need the database link password but don't have it.
It works only if the database link was created before 11.2.0.2.

The vulnerability is only exposed if the user has one of the following privileges:
SYS
SYSDBA
DBA
SYS WITHOUT SYSDBA
SYSASM
EXP_FULL_DATABASE
DATAPUMP_EXP_FULL_DATABASE
DATAPUMP_IMP_FULL_DATABASE

Starting with 11.2.0.2, Oracle changed the hash format for database link passwords, solving this vulnerability. But it only applies to dblinks created in this version or higher.
If you have a dblink created when the database was on 11.2.0.1, for example, and upgrade the database to 11.2.0.4, the problem remains until you recreate the database link.

So, if you are upgrading a database from 11.2.0.1 or lower to 11.2.0.2 or higher, remember to recreate the database links!

The vulnerability was exposed in 2012 by Paul Wright. Here is his PoC.
And here is his post.

To make it different, below I made the same test (using a PL/SQL block, to make it prettier) with an upgraded database, from 11.2.0.1 to 11.2.0.4:

testdb11204> select passwordx from sys.link$ where name='MY_DBLINK';

PASSWORDX
--------------------------------------------------------------------------------
0540C5B8090D021649C5C614E8E0C242AF33F71C08C535900C

1 row selected.

testdb11204> set serveroutput on
testdb11204> declare
  2    db_link_password varchar2(100);
  3  begin
  4    db_link_password := '0540C5B8090D021649C5C614E8E0C242AF33F71C08C535900C';
  5    dbms_output.put_line ('Password: ' ||
         utl_raw.cast_to_varchar2 (
           dbms_crypto.decrypt (
             substr (db_link_password, 19),
             dbms_crypto.DES_CBC_PKCS5,
             substr (db_link_password, 3, 16) ) ) );
  6  end;
  7  /
Password: P4SSW0RD

Note that the simple upgrade does not solve the issue. The database link needs to be recreated.

Matheus.

Ordering Sequences over RAC – Hang on
‘DFS lock handle’
Hi all!
What's up?
I had a fun weekend, so there are some things to write about.

This post is just to show an experience with the 'DFS lock handle' event, related to sequence ordering over the cluster nodes.

I started a process to make PK id adjustments, related to an application number limitation (32-bit int: 4294967296), looking to move older entries and "release" some ids.
As a legacy of a database unification, we have a lot of tables with the same name in different schemas, which use the same sequence for PK id generation.
The readjustment involves selecting from a sequence, and it runs in a lot of parallel "sqlplus" calls to optimize the time and fit into the business maintenance window.

When I started, I just used the global service name (dedicated connection) and the scan listener. The result was connections distributed over the 5 nodes of the cluster. Bad idea.
At first, I suspected concurrency on the sequence over the different nodes (which could occur if the node caches are too small), based on a few XA transaction bugs involving this event.

By the way, if you're facing this hang with XA transactions, please take a look at "High rdbms ipc reply and DFS lock handle in 11gR2 RAC With XA Enabled Application (Doc ID 1361615.1)".
It can be solved by setting "_clusterwide_global_transactions" to FALSE.
It's recommended, additionally, to read the Best Practices for Using XA with RAC.

Take a look at the blocking sessions over the cluster nodes:

proddb4> @sess
User:MATHEUS
SID SERIAL# INST_ID EVENT SQL_ID BLOCKING_SE BLOCKING_SESSION
BLOCKING_INSTANCE
------ ---------- ---------- ----------------- -------------- ----------- ---------------- -----------------
9386 147 4 DFS lock handle 9zr9vpvmkqzkv VALID 10968 4
9499 179 4 DFS lock handle fqk9y9q7u2d5c VALID 11082 4
8821 153 4 DFS lock handle 2jd84taf7krh2 VALID 13902 4
22442 1155 3 DFS lock handle 8ycpfxq2jthq3 VALID 9067 3
9860 1339 3 DFS lock handle 2jmzv23ug9kth VALID 10299 3
9772 1529 3 DFS lock handle 802kn9htah6pt VALID 22442 3
22543 1673 5 DFS lock handle 6tgvwkt6cqngk VALID 3074 5

22307 135 5 DFS lock handle 5b3zgqgq7bbdz VALID 3665 5
21010 91 5 DFS lock handle gkmycubvn9aa3 VALID 3546 5
9508 1459 3 DFS lock handle 7cw6bcjsf8xf2 VALID 10387 3
10299 4669 3 DFS lock handle 7y2tnuckh37wp VALID 11795 3
121 139 5 DFS lock handle 6tgvwkt6cqngk VALID 3310 5
596 113 5 DFS lock handle 8yqbzu29shvnm VALID 2603 5
360 113 5 DFS lock handle dv49pafm9z8zy VALID 596 5
10740 3177 3 DFS lock handle c6q65hnq0ju7x VALID 11707 3
9838 181 4 DFS lock handle aqa7afq2upkuq VALID 9386 4
714 77 5 DFS lock handle ft8xzyzhycpn2 VALID 360 5
9951 147 4 DFS lock handle 697mts944db7y VALID 9725 4
950 109 5 DFS lock handle cd2gsz5rb2qw9 VALID 3192 5
10387 1529 3 DFS lock handle 2tqnrbh0x60dp VALID 12238 3
10064 143 4 DFS lock handle d833wg4u9cfyb VALID 10649 1
833 1503 5 DFS lock handle 7ynbg2t4taxha VALID 2366 5
10649 53 1 DFS lock handle 0sgzmj1tbx4rh VALID 10737 1
2249 149 5 DFS lock handle aa6jr8ugxaz4z VALID 833 5
9612 175 4 DFS lock handle d2nrr4gtdjq9b VALID 9499 4
10825 57 1 DFS lock handle acmyc4sw7zzc2 VALID 10649 1
2603 1415 5 DFS lock handle fg47vs5wa8zq8 VALID 22307 5
2485 65 5 DFS lock handle 702x9zwtfktu6 VALID 714 5
10737 55 1 DFS lock handle bthxrpmz0ug63 VALID 12148 3

Okey dokey, let's cancel the sessions and rerun the process in just one node (by SID). It
should solve the hang caused by small caches across the cluster, without needing to
modify the sequence, right?
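Side note: if small per-node caches really had been the issue, the fix sketch would be
simply a bigger cache (illustrative value, using this post's sequence):

-- Sketch only: a larger per-instance cache reduces cross-node
-- coordination for NOORDER sequences.
ALTER SEQUENCE SEQ_OWNER.SEQ_NAME CACHE 20000;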

Beeep. Wrong:

proddb4> @sess
User:MATHEUS
   SID    SERIAL# INST_ID EVENT           SQL_ID        BLOCKING_SE BLOCKING_SESSION BLOCKING_INSTANCE
------ ---------- ------- --------------- ------------- ----------- ---------------- -----------------
2494 53953 4 DFS lock handle fc3cam368zsp6 UNKNOWN
6561 32113 4 DFS lock handle f618p0hd4xsy0 UNKNOWN
9269 111 4 DFS lock handle fkn8hxbsfkfnz UNKNOWN
9047 175 4 DFS lock handle fqk9y9q7u2d5c VALID 8931 4
459 12605 4 DFS lock handle 5b3zgqgq7bbdz VALID 9271 4
1929 305 4 DFS lock handle 6tgvwkt6cqngk VALID 8026 4
7349 1013 4 DFS lock handle 802kn9htah6pt UNKNOWN
7800 175 4 DFS lock handle 0hc1bmqj1fp4f UNKNOWN
21475 17349 4 DFS lock handle cfh3r4sq788vu VALID 9042 4
8026 641 4 DFS lock handle 6tgvwkt6cqngk VALID 459 4
14919 59 4 DFS lock handle gkmycubvn9aa3 VALID 15373 4
15032 2267 4 DFS lock handle 9zr9vpvmkqzkv VALID 7688 4
15145 2411 4 DFS lock handle ddkqx4xttc9s9 UNKNOWN
15373 1657 4 DFS lock handle 2jd84taf7krh2 VALID 15713 4
8934 157 4 DFS lock handle 8ycpfxq2jthq3 VALID 1929 4
15826 551 4 DFS lock handle d8dhmr2sx08xq VALID 9612 4
15713 3357 4 DFS lock handle 2jmzv23ug9kth VALID 10177 4
8821 155 4 DFS lock handle 9fpmw9cwak21s UNKNOWN
16050 7007 4 DFS lock handle 4t5qkth35r2um VALID 8705 4
2042 1269 4 DFS lock handle 7cw6bcjsf8xf2 UNKNOWN

What the hell!
Let's take a look at one of the SQLs to find the sequence…

proddb4> @sqlid 6tgvwkt6cqngk


UPDATE TABLE_XPTO SET RECNO = SEQ_OWNER.SEQ_NAME.NEXTVAL WHERE ROWID=:B1

And what about the sequence configuration?

proddb4> @getddl sequence SEQ_OWNER SEQ_NAME


create sequence SEQ_OWNER.SEQ_NAME
minvalue 1
maxvalue 9999999999
start with 85669803
increment by 1
cache 120000
cycle
order;

ORDER!
Man, of course. It creates extra coordination across the nodes just to keep the sequence
in order, as explained in this post by Christo Kutrovsky.

In my situation, within the business maintenance window, this is not an important constraint.


So, let's disable the ordering:

proddb4> alter sequence SEQ_OWNER.SEQ_NAME noorder;

Sequence altered.

Then, TAADÃÃ!

proddb4> @sess
User:MATHEUS
  SID    SERIAL# INST_ID EVENT                    SQL_ID        BLOCKING_SE BLOCKING_SESSION BLOCKING_INSTANCE
----- ---------- ------- ------------------------ ------------- ----------- ---------------- -----------------
15145 2411 4 library cache: mutex X ddkqx4xttc9s9 UNKNOWN
15032 2267 4 library cache: mutex X 9zr9vpvmkqzkv UNKNOWN
14919 59 4 library cache: mutex X gkmycubvn9aa3 NOT IN WAIT
9269 111 4 library cache: mutex X fkn8hxbsfkfnz UNKNOWN
9047 175 4 library cache: mutex X fqk9y9q7u2d5c UNKNOWN
8934 157 4 library cache: mutex X 8ycpfxq2jthq3 UNKNOWN
8821 155 4 library cache: mutex X 9fpmw9cwak21s UNKNOWN
8026 641 4 library cache: mutex X 6tgvwkt6cqngk UNKNOWN
7800 175 4 library cache: mutex X 0hc1bmqj1fp4f UNKNOWN
7349 1013 4 library cache: mutex X 802kn9htah6pt UNKNOWN
2042 1269 4 library cache: mutex X 7cw6bcjsf8xf2 UNKNOWN
9160 1205 4 library cache: mutex X 6tgvwkt6cqngk NOT IN WAIT
9042 293 4 library cache: mutex X 4jc1u6n2qx94z UNKNOWN
10177 5611 4 library cache: mutex X c6q65hnq0ju7x UNKNOWN
9271 235 4 library cache: mutex X d2nrr4gtdjq9b NOT IN WAIT
9951 1191 4 library cache: mutex X bkdhxhhqdbqb9 UNKNOWN
8931 291 4 library cache: mutex X 697mts944db7y UNKNOWN
9838 1315 4 library cache: mutex X 6q6r7ht1hnctg NOT IN WAIT
8818 325 4 library cache: mutex X 2tqnrbh0x60dp UNKNOWN

Of course we’re having some mutex x, but it’s a lot better then DFS lock, and the
process just “go”.

After it's done, to keep the original configuration, let's enable ordering again:

proddb4> alter sequence SEQ_OWNER.SEQ_NAME order;

Sequence altered.

Matheus.

Infiniband Error: Cable is present on Port
“X” but it is polling for peer port
Facing this error? Let me guess: ports 03, 05, 06, 08, 09 and 12 are alerting? You
have a Quarter Rack? Have you recently updated the Exadata plugin to version 12.1.0.3 or
higher?
Don’t panic!

This is probably related to Bug 15937297: EM 12C HAS ERRORS CABLE IS
PRESENT ON PORT 'N' BUT IT IS POLLING FOR PEER PORT. The full message
might be like "Cable is present on Port 6 but it is polling for peer port. This could
happen when the peer port is unplugged/disabled".

In fact, the bug was closed as not a bug.


As part of the 12.1.0.3 Exadata plugin, the IB switch ports are now checked for
non-terminated cables, so these 'polling for peer port' errors are the expected
behavior. Since 'polling for peer port' checking is a new feature of the 12.1.0.3 plugin,
this explains why you most likely did not see these errors until you upgraded the OMS
to 12.1.0.2 and then updated the plugins.

In Quarter Racks, ports 3, 5, 6, 8, 9 and 12 are usually cabled ahead of
time, but not terminated. In some racks port 32 may also be unterminated. Checking
for the incident in OEM, you might see something like this image:

Or, if you prefer, you can go to the command line and run listlinkup on the InfiniBand
switch through the ILOM CLI:

[root@exa1db2 ~]# ssh -l root exa1db2sw-ibb0
The authenticity of host 'exa1db2sw-ibb0 (1.1.1.1)' can't be established.
RSA key fingerprint is be:6b:01:27:90:91:0a:f9:ab:7f:fd:99:81:76:4a:45.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'exa1db2sw-ibb0,1.1.1.1' (RSA) to the list of known hosts.
Last login: Thu Aug 4 17:34:20 2016 from exa1db1
You are now logged in to the root shell.
It is recommended to use ILOM shell instead of root shell.
All usage should be restricted to documented commands and documented config files.
To view the list of documented commands, use "help" at linux prompt.
[root@exa1db2sw-ibb0 ~]# listlinkup
Connector 0A  Not present
Connector 1A  Not present
Connector 2A  Not present
Connector 3A  Not present
Connector 4A  Not present
Connector 5A  Present  Switch Port 30 is up (Enabled)
Connector 6A  Present  Switch Port 35 is up (Enabled)
Connector 7A  Present  Switch Port 33 is up (Enabled)
Connector 8A  Present  Switch Port 31 is up (Enabled)
Connector 9A  Present  Switch Port 14 is up (Enabled)
Connector 10A Present  Switch Port 16 is up (Enabled)
Connector 11A Present  Switch Port 18 is up (Enabled)
Connector 12A Present  Switch Port 11 is up (Enabled)
Connector 13A Present  Switch Port 09 is down (Enabled)
Connector 14A Present  Switch Port 07 is up (Enabled)
Connector 15A Present  Switch Port 05 is down (Enabled)
Connector 16A Present  Switch Port 03 is down (Enabled)
Connector 17A Present  Switch Port 01 is up (Enabled)
Connector 0B  Not present
Connector 1B  Not present
Connector 2B  Not present
Connector 3B  Not present
Connector 4B  Present  Switch Port 27 is up (Enabled)
Connector 5B  Present  Switch Port 29 is up (Enabled)
Connector 6B  Present  Switch Port 36 is up (Enabled)
Connector 7B  Present  Switch Port 34 is up (Enabled)
Connector 8B  Not present
Connector 9B  Present  Switch Port 13 is up (Enabled)
Connector 10B Present  Switch Port 15 is up (Enabled)
Connector 11B Present  Switch Port 17 is up (Enabled)
Connector 12B Present  Switch Port 12 is down (Enabled)
Connector 13B Present  Switch Port 10 is up (Enabled)
Connector 14B Present  Switch Port 08 is down (Enabled)
Connector 15B Present  Switch Port 06 is down (Enabled)
Connector 16B Present  Switch Port 04 is up (Enabled)
Connector 17B Present  Switch Port 02 is up (Enabled)

And since it is not a bug, there is no fix or workaround.


Ok then, but how do we shush it?

Basically 2 options:

1. Disable the switch port with the command disableswitchport, as per the example below
(complete reference guide at the bottom of the post):

# disableswitchport 13A
Disable connector 13A Switch port 9
reason: Blacklist
Initial PortInfo:
# Port info: DR path slid 65535; dlid 65535; 0 port 9
LinkState:.......................Down
PhysLinkState:...................Polling
LinkWidthSupported:..............1X or 4X
LinkWidthEnabled:................1X or 4X
LinkWidthActive:.................4X
LinkSpeedSupported:..............2.5 Gbps or 5.0 Gbps or 10.0 Gbps
LinkSpeedEnabled:................2.5 Gbps or 5.0 Gbps or 10.0 Gbps
LinkSpeedActive:.................2.5 Gbps
After PortInfo set:
# Port info: DR path slid 65535; dlid 65535; 0 port 9
LinkState:.......................Down
PhysLinkState:...................Disabled
#
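If the cable is terminated later, the counterpart command on these switches should bring
the port back; a sketch (check the switch reference guide mentioned below):

# enableswitchport 13A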

2. In OEM, go to Infiniband Switch > Monitoring > Metric and Collection Settings. In
"Switch Port State", click the edit pencil, then click "Add" to add a new entry; for
this new one, click the magnifying glass in the Port Number column and add the ports
you want to stop monitoring. Of course, remember to leave the thresholds empty.
Repeat this process for all metrics under "Switch Port State". You'll have something like
this:

A good reference for the commands is the doc: Controlling the InfiniBand Fabric.
I'd also recommend, of course, the MOS note 12c: Red Arrow Down Status on IB ports
or False Alert "Cable Is Present On Port 'N' But It Is Polling For Peer Port" (Doc
ID 1514940.1), besides the already mentioned "Bug" note in MOS.

See you!
Matheus.

After adding Datafile in Primary the MRP
Stopped in Physical Standby (Dataguard)
Hi all!
After adding a datafile in the PRIMARY database, the STANDBY MRP stopped. An "ALTER
DATABASE RECOVER MANAGED STANDBY DATABASE" did not solve the
problem, as you can see:

SQL> SELECT SEQUENCE#, Name, APPLIED FROM V$ARCHIVED_LOG
  2  where APPLIED <> 'YES' and SEQUENCE# <= (select max(SEQUENCE#) -1 from V$ARCHIVED_LOG);

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Database altered.

SQL> SELECT SEQUENCE#, Name, APPLIED FROM V$ARCHIVED_LOG
  2  where APPLIED <> 'YES' and SEQUENCE# <= (select max(SEQUENCE#) -1 from V$ARCHIVED_LOG);

 SEQUENCE# NAME                                                         APPLIED
---------- ------------------------------------------------------------ ---------
     15075 /db/u1004/oracle/admin/MYDB/arch/arch_1_823102978_15075.arc  NO

Ok, this happens when standby_file_management is set to MANUAL; let's check:

SQL> show parameters standby_file_management

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      MANUAL

That’s right. Let’s see alert log what is happening:

Thu May 05 19:26:21 2016
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION
Attempt to start background Managed Standby Recovery process (MYDB_DG)
Thu May 05 19:26:21 2016
MRP0 started with pid=25, OS id=5670
MRP0: Background Managed Standby Recovery process started (MYDB_DG)
started logmerger process
Thu May 05 19:26:26 2016
Managed Standby Recovery not using Real Time Apply
MRP0: Background Media Recovery terminated with error 1111
Errors in file /db/u1001/oracle/diag/rdbms/MYDB_DG/MYDB_DG/trace/MYDB_DG_pr00_5672.trc:
ORA-01111: name for data file 15 is unknown - rename to correct file
ORA-01110: data file 15: '/u01/app/oracle/product/11.2/dbs/UNNAMED00015'
ORA-01157: cannot identify/lock data file 15 - see DBWR trace file
ORA-01111: name for data file 15 is unknown - rename to correct file
ORA-01110: data file 15: '/u01/app/oracle/product/11.2/dbs/UNNAMED00015'
Slave exiting with ORA-1111 exception
Errors in file /db/u1001/oracle/diag/rdbms/MYDB_DG/MYDB_DG/trace/MYDB_DG_pr00_5672.trc:
ORA-01111: name for data file 15 is unknown - rename to correct file
ORA-01110: data file 15: '/u01/app/oracle/product/11.2/dbs/UNNAMED00015'
ORA-01157: cannot identify/lock data file 15 - see DBWR trace file
ORA-01111: name for data file 15 is unknown - rename to correct file
ORA-01110: data file 15: '/u01/app/oracle/product/11.2/dbs/UNNAMED00015'
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION
Recovery Slave PR00 previously exited with exception 1111
MRP0: Background Media Recovery process shutdown (MYDB_DG)

Precisely. Now, how to fix it?

Let's first add the datafile, with the same name it was given on the primary.
Another thing: standby_file_management set to MANUAL only makes sense
when using raw devices on the standby. This is not my case, so let's set it to AUTO too.
This way, it's not going to happen again.

SQL> ALTER DATABASE CREATE DATAFILE '/u01/app/oracle/product/11.2/dbs/UNNAMED00015'
     AS '/db/u1002/oradata/MYDB/EZM_DATA_08.dbf';

Database altered.

SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO scope=both;

System altered.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Database altered.

SQL> SELECT SEQUENCE#, Name, APPLIED FROM V$ARCHIVED_LOG
  2  where APPLIED <> 'YES' and SEQUENCE# <= (select max(SEQUENCE#) -1 from V$ARCHIVED_LOG);

no rows selected

Solved!
See alert log:

Thu May 05 19:36:29 2016
ALTER DATABASE CREATE DATAFILE '/u01/app/oracle/product/11.2/dbs/UNNAMED00015'
AS '/db/u1002/oradata/MYDB/EZM_DATA_08.dbf'
Completed: ALTER DATABASE CREATE DATAFILE '/u01/app/oracle/product/11.2/dbs/UNNAMED00015'
AS '/db/u1002/oradata/MYDB/EZM_DATA_08.dbf'
Thu May 05 19:37:31 2016
ALTER SYSTEM SET standby_file_management='AUTO' SCOPE=BOTH;
Thu May 05 19:37:49 2016
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION
Attempt to start background Managed Standby Recovery process (MYDB_DG)
Thu May 05 19:37:49 2016
MRP0 started with pid=25, OS id=8148
MRP0: Background Managed Standby Recovery process started (MYDB_DG)
started logmerger process
Thu May 05 19:37:54 2016
Managed Standby Recovery not using Real Time Apply
Parallel Media Recovery started with 16 slaves
Waiting for all non-current ORLs to be archived...
All non-current ORLs have been archived.
Media Recovery Log /db/u1004/oracle/admin/MYDB/arch/arch_1_823102978_15075.arc
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION
Thu May 05 19:38:05 2016
Media Recovery Log /db/u1004/oracle/admin/MYDB/arch/arch_1_823102978_15076.arc
Thu May 05 19:38:35 2016
Media Recovery Log /db/u1004/oracle/admin/MYDB/arch/arch_1_823102978_15077.arc

KB:
Managing Primary Database Events That Affect the Standby Database

Matheus.

Lock by DBLink – How to locate the remote
session?
If you identify a lock or other unwanted operation by a DBLink session, how can you
identify the original session in the remote database (the origin database)?
The million-dollar answer is simple: through the PROCESS column of v$session. By the way,
it looks like it's even easier than finding the local process (spid)… Take a look at my
example (scripts at the end of the post):

dest> @sid
Sid: 10035
Inst: 1

 SEQ# EVENT                       MODULE                      STATUS     SID    SERIAL#    INST_ID
----- --------------------------- --------------------------- -------- ----- ---------- ----------
29912 SQL*Net message from client oracle@origin2 (TNS V1-V3)  INACTIVE 10035         35          1

dest> @spid
    SPID        SID    PID PROCESS_FOR_DB_LINK MACHINE  LOGON_TIME
-------- ---------- ------ ------------------- -------- -------------------
16188960      10035    882 17302472            origin2  24/08/2015 07:43:40

Now I know that SID 10035 refers to local process 16188960 and that the process on the origin
database is 17302472. So I can do whatever I want with this process:

root@origin2:/oracle/diag/rdbms/origin/origin2/trace> ps -ef |grep 17302472
    grid 17302472        1  97 07:42:42      -  5:58 oracleorigin2 (LOCAL=NO)
    root 24445782 36700580  0 08:05:45  pts/3  0:00 grep 17302472

Which includes locating the session in the database by its spid, seeing the SQL, et cetera:

origin> @spid2
Enter value for process: 17302472

   SID    SERIAL# USERNAME   OSUSER          PROGRAM                                       STATUS
------ ---------- ---------- --------------- --------------------------------------------- ------
  7951      41323 USER_XPTO  scheduler_user  sqlplus@scheduler_app.domain.net (TNS V1-V3)  ACTIVE

database2> @sid
Sid: 7951
Inst: 2

SQL_ID         SEQ# EVENT                   MODULE       STATUS   SID    SERIAL# INST_ID
------------- ----- ----------------------- ------------ ------ ----- ---------- -------
1w1wz2mdunya1 56778 db file sequential read REMOTE_LOAD  ACTIVE  7951      41323       2

That’s OK?
Simple isn’t?

The scripts used (except "sid", which is a simple SQL on gv$session):

Get SPID and PROCESS FOR DBLINK from a SID:

# spid:
col machine format a30
col process format 999999
select p.spid, b.sid, p.pid, b.process as process_for_db_link, machine, logon_time
from v$session b, v$process p
where b.paddr = p.addr
and sid = &sid;

Get SID from SPID:

# spid2:
SELECT s.sid, s.serial#, s.username, s.osuser, s.program, s.status
FROM v$session s, v$process p
WHERE s.paddr = p.addr
AND p.spid IN (&process);
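The "sid" script itself isn't listed in the post; a minimal sketch of what it might look
like, with the column list assumed from the outputs above:

-- Hypothetical "sid" sketch: a simple lookup on gv$session.
select s.sql_id, s.seq#, s.event, s.module, s.status,
       s.sid, s.serial#, s.inst_id
from gv$session s
where s.sid = &sid;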

See ya!
Matheus.

Listing Sessions Connected by SID
When we are preparing to move a database or something like that, it’s useful to know
if there is any session connecting by SID, right?

It can be done with:

select distinct machine, username, inst_id
from gv$session
where service_name = 'SYS$USERS';

Today, another quick post. Life is busy.

Matheus.

VPD: “row cache objects” latch contention
The other day, we found a high occurrence of latch events in our principal/core
environment (11.2.0.3.0). The origins are all the "different business channels" that
access objects through VPD. The latch events were bit by bit dominating the
environment during the last months and turned on an "attention alarm" for us.

Then we found the note: Bug 12772404 – Significant "row cache objects" latch
contention when using VPD – superseded (Doc ID 12772404.8).

The situation is right the same:

"When VPD is used, intense row cache objects latch contention (dc_users) may be
caused by an internal Exempt Access Policy privilege check. Rediscovery Information:
VPD is in use
Significant "latch: row cache objects" waits occur
The waits are for the latch covering dc_users"

Take a look on the DC_USERS latches:
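If you want to list the hot latch children yourself, a sketch along these lines should
work on 11g (v$latch_children is a standard view):

-- Sketch: rank "row cache objects" latch children by sleeps.
select child#, gets, misses, sleeps
from v$latch_children
where name = 'row cache objects'
order by sleeps desc;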

And about the workaround:


“There is no direct workaround available.
The following guidelines may help to alleviate the problem :
– Dropping the database roles from our user:
The Number of Roles granted to user can increase the row cache
look-ups proportionally. When database is required to check whether
a system privilege is granted to User, it checks if that privilege
is granted to any of the User’s roles. Hence, it’s not helpful
to do something like "set role A, B, C, D, F …" to recreate its
environment for every execution.
– Changing the policy function might be helpful in some cases
eg: To use CONTEXT dependent policies instead of DYNAMIC policies”

Take a look at one of the examples:

boesing@mydb4> /

P1RAW            EVENT                    USERNAME          SQL_ID        SQL_CHILD_NUMBER LAST_CALL_ET SID SEQ# WAIT_TIME SECOND
---------------- ------------------------ ----------------- ------------- ---------------- ------------ --- ---- --------- ------
0700011807B50D08 latch: row cache objects CHANNELAPP        4nwvpx8xt3h3m 22 0 1276 59113 0
0700011807B50D08 latch: row cache objects CHANNELAPP        fp3mft3usb74w 0 21719 16636 0
0700011807B50D08 latch: row cache objects CHANNELAPP        58pund2p09hgg 0 6774 11061 0
0700011807B50D08 latch: row cache objects OTHER_CHANNELAPP  54a2wfa60rgu1 1 0 8046 12386 0
0700011807B50D08 latch: row cache objects CHANNELAPP        1gwr69wduk9v4 42 0 9454 53927 0
0700011807B50D08 latch: row cache objects OTHER_CHANNELAPP  9pqrqqfzukrq4 68 0 9732 19311 0
0700011807B50D08 latch: row cache objects CHANNELAPP        d1bnq8wb0nhrf 0 1 11425 56830 -1
0700011807B50D08 latch: row cache objects CHANNELAPP        32aqdd8cbmc4b 0 11711 39182 0
0700011807B50D08 latch: row cache objects IB_RUN            adgnrpwazbfmz 0 12133 3372 0
0700011807B50D08 latch: row cache objects IB_RUN            cqmgxvb78q9hy 0 17913 6345 0
0700011807B50D08 latch: row cache objects CHANNELAPP        byzm159jbjxaa 0 6 19606 52624 0
0700011807B50D08 latch: row cache objects OTHER_CHANNELAPP  2kbjztd9yzqfm 61 0 20732 28687 0
0700011807B50D08 latch: row cache objects CHANNELAPP        6dvagdabts9nx 19 7 21011 504 0
0700011807B50D08 latch: row cache objects CHANNELAPP        9pqrqqfzukrq4 78 0 21439 19030 0
0700011807B50D08 latch: row cache objects CHANNELAPP        gq1avu79h2np3 85 0 3815 33831 -1

boesing@mydb4> SELECT child# FROM v$latch_children WHERE addr='0700011807B50D08';

    CHILD#
----------
         8

boesing@mydb4> select s.kqrstcln latch#, s.kqrstcid cache#, kqrsttxt name
  2  from x$kqrst s where s.kqrstcln=8;

    LATCH#     CACHE# NAME
---------- ---------- --------------------------------
         8         10 dc_users
         8          7 dc_users
         8          7 dc_users
         8          7 dc_users

The problem was definitively solved by applying the 11.2.0.4.2 PSU. No problems
after that.
Good luck, if this is your situation.

Hugs!
Matheus.

Compilation Impact: Object Dependencies
Hi all!
It’s not necessarily the DBA function, but how often someone of business came and
ask you wich is the impact on recompiling one or other procedure?
It probably happen because the DBA usually make some magic and have a better
understanding about objects relationship. It happens specially in cases there is no
code governance…

So, you don’t have to handle all responsability and can switch some of that with
developer, through DBA_DEPENDENCIES view.

The undertstanding is easy: The depended objects and the refered objects. If ou
change the refered, all depended will be impacted by.

GREPORADB> @dependencies
Enter value for owner: GREPORA
Enter value for obj_name: TABLE_EXAMPLE

OWNER    Name                       TYPE     DEPE REFERENCED REFERENCED_OWNER REFERENCED_NAME
-------- -------------------------- -------- ---- ---------- ---------------- ---------------
GREPORA  TOTALANSWEREDQUESTIONS     FUNCTION HARD TABLE      GREPORA          TABLE_EXAMPLE
GREPORA  USERRESPONSESTATUS         FUNCTION HARD TABLE      GREPORA          TABLE_EXAMPLE
GREPORA  VW_INPROGRESSFEEDBACKOPTS  VIEW     HARD TABLE      GREPORA          TABLE_EXAMPLE
GREPORA  EVENTSTARTDT               FUNCTION HARD TABLE      GREPORA          TABLE_EXAMPLE
GREPORA  HAVEUSERANSWEREDANYTHING   FUNCTION HARD TABLE      GREPORA          TABLE_EXAMPLE

Nice, hãn?

## @dependencies
col owner for a18
col name for a35
col type for a10
col referenced_owner for a18
col referenced_name for a35
col referenced_type for a10
select owner, name, type, dependency_type, referenced_type, referenced_owner, referenced_name
from dba_dependencies
where referenced_owner like upper('%&owner%')
and referenced_name like upper('%&obj_name%');

See ya!
Matheus.

RAC on AIX: Network Best Practices
Hi all!
A while ago I ran into some performance issues on AIX, working with instances
with different configurations (proc/mem). The root cause was basically an inefficient
network configuration for the interconnect (UDP).
As you know, UDP is a protocol without acknowledgements (for that reason with less
metadata, and faster). By default, every server has a buffer pool to send UDP (and TCP)
messages and another to receive them.
In my situation, since there was an 'inferior' instance, the pools were automatically set
smaller on it, and that was causing heavy interconnect block traffic from the other
instances. Indeed, there were lots of resends caused by overflows in this
smaller instance…

Here is one way to evaluate how much UDP loss you are having on your AIX
server:

netstat -s | grep 'socket buffer overflows'

If you are seeing a considerable number of overflows, it's recommended to re-evaluate
the size of your udp_recvspace. And, of course, keep the calculation of the pools.

Oracle recommends, at least:

tcp_recvspace = 65536
tcp_sendspace = 65536
udp_sendspace = ((DB_BLOCK_SIZE * DB_FILE_MULTIBLOCK_READ_COUNT) + 4 KB), but no lower than 65536
udp_recvspace = 655360 (minimum recommended value is 10x udp_sendspace; the value must be less than sb_max)
rfc1323 = 1
sb_max = 4194304
ipqmaxlen = 512
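On AIX these are network tunables set with the no command; a sketch of applying them
(values from the list above; validate with your sysadmin before touching production):

# Sketch, run as root; -p applies now and persists across reboots.
no -p -o tcp_recvspace=65536
no -p -o tcp_sendspace=65536
no -p -o udp_sendspace=65536
no -p -o udp_recvspace=655360
no -p -o rfc1323=1
no -p -o sb_max=4194304
no -r -o ipqmaxlen=512   # reboot-time parameter, takes effect on next boot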

These and other details about configuring RAC on AIX can be found in the note: RAC and
Oracle Clusterware Best Practices and Starter Kit (AIX) (Doc ID 811293.1).

I’d recommend you take a look too.

Have a nice day!


Matheus.

Grepping Entries from Alert.log
Hey hey,
One more MacGyver solution by me! Haha
Again, to find some information in the alert log. This time, I'm looking to count and list all
occurrences of an action in the alert. To achieve this, I made the script below.

The functionality is just a little more complex than the script from the last post, but it's
still quite simple. Take a look:

Parameters:
PAR1: name of the alert file (the main alert.log)
PAR2: searched token
PAR3: start day, in the format "Mon dd" or just "Mon". Below are examples.
PAR4: start year (4 digits)
PAR5: [optional] end day, in the format "Mon dd" or just "Mon". The
default value is "until now".
PAR6: [optional] end year (4 digits). The default value is "until now". If you use
PAR5, you have to use PAR6.
PAR7: [optional] list all entries and when they happened. If you want to use this PAR, you
must use PAR5 and PAR6.

Examples (looking for service reconfigurations):

Ex1: sh grep_entries_alert.sh alert_xxdb_1.log "services=" "Apr 12" 2015
(Search between April 12 and now and count the entries.)
Ex2: sh grep_entries_alert.sh alert_xxdb_1.log "services=" "Apr 01" 2015 "May 30" 2015
(Search between April 01 and May 30 and count the entries.)
Ex3: sh grep_entries_alert.sh alert_xxdb_1.log "services=" "Apr 01" 2015 "May 30" 2015 LIST
(Search between April 01 and May 30, count the entries and list them all…)

# Script grep_entries_alert.sh

if [ $# -lt 6 ]; then
  FIN=`cat $1 | wc -l`
else
  FIN=`cat $1 | grep -n "$5" | grep "$6$" | head -n 1 | cut -d':' -f1`
fi
BEG=`cat $1 | grep -n "$3" | grep "$4$" | head -n 1 | cut -d':' -f1`
NMB=`expr $FIN - $BEG`
ENTR=`cat $1 | head -n $FIN | tail -$NMB | grep $2 | wc -l`
echo Number of Entries: $ENTR > log.log
if [ $# -ge 7 ]; then   # list the entries only when PAR7 is given
  echo "------- Complete List Of Entries and When ----------" >> log.log
  for line in `cat $1 | head -n $FIN | tail -$NMB | grep -n $2 | cut -d':' -f1`; do
    LR=`expr $line + $BEG`  # the "real" line, without the displacement
    DAT=`expr $LR - 1`      # the line holding the date of the entry
    echo awk \'NR==$DAT\' $1 >> aux.sh  # printing the lines just calculated
    echo awk \'NR==$LR\' $1 >> aux.sh   # through aux.sh
  done
  sh aux.sh >> log.log
fi
cat log.log

It’s not beautiful. But it works!

After that, there is the new blog sponsor:

(Hahahaha)

Matheus.

Grepping Alert by Day
Hi all,
For that moment when your alert log is very big and the OS doesn't "work very well with
it" (in my case it was AIX), I jerry-rigged the shell script below. It puts into a new
log just the entries of a selected day.

The call can be made with two or three parameters, this way:

Parameters:
PAR1: name of the alert file (the main alert.log)
PAR2: day you want, in the format "Mon dd". Below are examples.
PAR3: [optional] desired year. The default is the current year, but it is useful especially
during the "new year" period…

Examples:
Ex1: sh grep_day.sh alert_xxdb_1.log “Apr 12”
Ex2: sh grep_day.sh alert_xxdb_1.log “Apr 12” 2014

Generated files:
dalert_2015Apr12.log
dalert_2014Apr12.log

# Script grep_day.sh

if [ $# -lt 3 ]; then
  YEAR=`date +"%Y"`
else
  YEAR=$3
fi
DATEFORMAT=`echo $2 | cut -d' ' -f1`""`echo $2 | cut -d' ' -f2`
BEG=`cat $1 | grep -n "$2" | grep $YEAR | head -1 | cut -d':' -f1`
FIN=`cat $1 | grep -n "$2" | grep $YEAR | tail -1 | cut -d':' -f1`
NMB=`expr $FIN - $BEG`
cat $1 | head -$FIN | tail -$NMB > dalert_$YEAR$DATEFORMAT.log

Believe me! It can be useful… haha

See ya!

Matheus.

Searching entries on Alert.log: A Better Way
Hi all!
As the oldest readers know, one day I had to find some entries in the alert log and I
had a really big file. So I jerry-rigged some scripts to grep the alert with auxiliary files
and so on.
You can see the posts here: Grepping Alert by Day and Grepping Entries from Alert.log.

So… they are functional, but probably the worst way to do it. I didn't know, and was
innocent enough, not to search for the view x$dbgalertext.
It is also possible to write to the alert log through the procedure
SYS.DBMS_SYSTEM.KSDWRT.

Ok, so let me fix this situation with these two good guys: @write_alert and
@find_alert.

greporadb> @write_alert
Enter value for text: GrepOra.com best blog ever!

PL/SQL procedure successfully completed.

greporadb> @find_alert
Enter value for inst: 1
Enter value for host:
Enter value for message: GrepOra.com

ORIGINATING_TIMESTAMP                    Inst# HOST_ID         MESSAGE_TEXT
---------------------------------------- ----- --------------- ---------------------------------------
13/06/16 16:53:13,699 +00:00                 1 greporasrvr     GrepOra.com best blog ever!

1 row selected.

In alert log we can see:

[oracle@greporasrvr trace]$ tail -3 alert_GREPORADB.log
Archived Log entry 29824 added for thread 1 sequence 15786 ID 0x87039d01 dest 1:
Mon Jun 13 16:53:13 2016
GrepOra.com best blog ever!

And the scripts:

## write_alert.sql
EXEC SYS.DBMS_SYSTEM.KSDWRT(2, '&TEXT');

## find_alert.sql
col originating_timestamp for a40
col host_id for a15
col inst_id for 99
col message_text for a100
set linesize 500
SELECT originating_timestamp, inst_id, host_id, message_text
FROM x$dbgalertext
where 1=1
and inst_id like '%&INST%'
and upper(host_id) like upper('%&HOST%')
and upper(message_text) like upper('%&MESSAGE%')
order by record_id asc;

Ok, fixed!
See ya!
Matheus.

Alter (Fix) Oracle Database Date
When you haven’t access to SO and just have to alter database date…

# Fix Date:

ALTER SYSTEM SET fixed_date = '2016-04-05-12:00:00';

# Unfix Date:

ALTER SYSTEM SET fixed_date = NONE;

Note: just to make it clear, the date will be really "fixed". The time will "stop":
seconds and minutes will not advance…
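A quick sketch of how to see the effect; with fixed_date set, running this twice should
return exactly the same timestamp:

SQL> select to_char(sysdate, 'dd/mm/yyyy hh24:mi:ss') from dual;
-- wait a few seconds and run it again: the result stays at 05/04/2016 12:00:00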
Matheus.

Explain ORA-XXX on SQL*Plus
For those moments when the error is unknown/rare, SQL*Plus helps us. Just call "oerr"
from the OS.

See the Linux example (made on RHEL):

SQL> !oerr ora 01652
01652, 00000, "unable to extend temp segment by %s in tablespace %s"
// *Cause:  Failed to allocate an extent of the required number of blocks for
//          a temporary segment in the tablespace indicated.
// *Action: Use ALTER TABLESPACE ADD DATAFILE statement to add one or more
//          files to the tablespace indicated.
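The same utility can, of course, be called straight from the shell, outside SQL*Plus:

$ oerr ora 01652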

Pretty cool, han?

Have a nice week!


Matheus.

Oracle Database Licensing: First Step!
Oracle licensing is always a complex question, right?
I did some research about it today and decided to share it, in quick mode, as I usually
do. I focused on the Database side, by the way.

The first step is to understand the relation between Features, Options and Packs. Oracle
documentation is always good for that. I recommend spending some time on the
Database Licensing Information User Manual.

Ok, now the best way to understand how to evaluate my environment is searching on
Oracle Support, right?
And it does not disappoint: Database Options/Management Packs Usage Reporting
for Oracle Databases 11gR2 and 12c (Doc ID 1317265.1).
In this note you can get a complete and up-to-date script used to evaluate
feature/option/pack utilization (options_packs_usage_statistics.sql). This is a
good way to prepare if you are expecting an audit…

I made some simple queries to validate/understand the results from the Oracle script. So, if
you don't have access to Oracle Support, they might help you:
Get Features usage:

SELECT u1.name, u1.detected_usages, u1.currently_used, u1.version, u1.description
FROM   dba_feature_usage_statistics u1
WHERE  u1.version = (SELECT MAX(u2.version)
                     FROM   dba_feature_usage_statistics u2
                     WHERE  u2.name = u1.name)
AND    u1.dbid = (SELECT dbid FROM v$database)
--WHERE DETECTED_USAGES > 0 -- To get used features only
ORDER BY name
/

Get Options usage:

col parameter for a50
select parameter, value
from v$option
--where value = 'TRUE' -- To get used options only
/

Information about Session license limits:

SELECT sessions_max s_max, sessions_warning s_warning,
       sessions_current s_current, sessions_highwater s_high, users_max
FROM v$license;

Information about CPU license limits:

select cpu_count_current, cpu_core_count_current, cpu_socket_count_current,
       cpu_count_highwater, cpu_core_count_highwater, cpu_socket_count_highwater
from v$license;

An interesting point is that you can disable and enable options through the chopt
command. But you must shut the database down first. Example to disable the Partitioning
option:

chopt disable partitioning
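And to bring it back, the counterpart is the same command with enable:

chopt enable partitioning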

The complete explanation and examples (including the right values to activate/deactivate
options) can be found in Oracle Database Postinstallation Tasks – Enabling and
Disabling Database Options.

Some time ago I wrote a post about evaluating database licenses across your whole
database estate through OEM. It remains valid; I recommend you take a look at that post too.

KB and other interesting stuff:

Database Options/Management Packs Usage Reporting for Oracle Databases 11gR2
and 12c (Doc ID 1317265.1)
12c Release 1 Database Licensing Information User Manual
11g Release 1 Database Licensing Information User Manual
Enabling and Disabling Database Options
Excellent and up-to-date presentation by Martin Berger
Article about top license pitfalls, good for reflection, written by OMTCO Consulting

Matheus.

Getting Oracle Parameters: Hidden and
Unhidden
Today’s post is a quick post!
Very quick post! very very quick post!
But it’s a helpful post!

Connected as sys with sysdba:

select x.ksppinm name,
       ksppdesc description,
       y.kspftctxvl value,
       y.kspftctxdf isdefault,
       decode(bitand(y.kspftctxvf, 7), 1, 'MODIFIED', 4, 'SYSTEM_MOD', 'FALSE') ismod,
       decode(bitand(y.kspftctxvf, 2), 2, 'TRUE', 'FALSE') isadj
from sys.x$ksppi x, sys.x$ksppcv2 y
where x.inst_id = userenv('Instance')
and y.inst_id = userenv('Instance')
and x.indx + 1 = y.kspftctxpn
order by name;
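And if you want only the hidden (underscore) parameters, a common filter sketch is to
add a predicate like this to the query above:

-- add to the WHERE clause to list hidden parameters only
and x.ksppinm like '\_%' escape '\'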

Matheus.

Application Hangs: resmgr:become active
Application APP hangs with resmgr:become active. There is a resource plan defined
that has a specific group for this application. What is wrong and how to fix it?

Here I presume you know what a resource manager and a resource plan are, and, of course,
for what purpose they exist. You must also know that this event is related to too many
active sessions in a group of the resource plan.

Before everything else, figure out whether this is acceptable behavior for the
application. Then, check in which resource group the sessions waiting on this event are.
Is there another application in this same group with unacceptable behavior? Yes? So, fix it.
No? Consider adjusting the resource plan, switching the application to a new group, or,
as in my case, remapping the Application APP to the right group… ¬¬

To make it clear: in my case, the mapping was missing, so the schema MYAPP
(Application APP) fell into OTHER_GROUPS, where we usually set minimal limits:

   SID    SERIAL#    INST_ID USERNAME RESOURCE_CONSUMER_GROUP EVENT
------ ---------- ---------- -------- ----------------------- ------------------------
   492      29459          2 MYAPP    OTHER_GROUPS            resmgr:become active
  1102      19145          2 MYAPP    OTHER_GROUPS            resmgr:become active
   955      33161          2 MYAPP    OTHER_GROUPS            resmgr:become active
  1084      33839          2 MYAPP    OTHER_GROUPS            db file sequential read

MYDB> show parameters resource_manager_plan

NAME                  TYPE   VALUE
--------------------- ------ --------------
resource_manager_plan string MYDB_PLAN

MYDB> select group_or_subplan, active_sess_pool_p1, cpu_p1, cpu_p2, cpu_p3, cpu_p4
  2  from DBA_RSRC_PLAN_DIRECTIVES where plan = '&plano';
Enter value for plano: MYDB_PLAN

GROUP_OR_SUBPLAN ACTIVE_SESS_POOL_P1 CPU_P1 CPU_P2 CPU_P3 CPU_P4
---------------- ------------------- ------ ------ ------ ------
BATCH_GROUP      60 0 10 0 0
SYS_GROUP        80 0 0 0
APP_PLAN         20 0 30 0 0
OTHER_GROUPS     20 0 20 0 0
GGATE_GROUP      0 10 0 0
PAYTRUE_GROUP    40 0 30 0 0
DBA_GROUP        20 0 0 0

You can configure the mapping by user like that:

BEGIN
  DBMS_RESOURCE_MANAGER.clear_pending_area;
  DBMS_RESOURCE_MANAGER.create_pending_area;
  DBMS_RESOURCE_MANAGER.set_consumer_group_mapping (
    attribute      => DBMS_RESOURCE_MANAGER.oracle_user, -- or DBMS_RESOURCE_MANAGER.service_name (or a lot of possibilities. Google it!)
    value          => 'MYAPP',
    consumer_group => 'APP_PLAN');
  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/

To switch the connected sessions, it can be done like:

SELECT 'EXEC DBMS_RESOURCE_MANAGER.SWITCH_CONSUMER_GROUP_FOR_SESS('''
       ||SID||''','''||SERIAL#||''',''APP_PLAN'');'
FROM V$SESSION
where username = 'MYAPP'
and RESOURCE_CONSUMER_GROUP = 'OTHER_GROUPS';

Remember that creating a resource plan without making the mappings is a bit
pointless…

Matheus.

How to Prevent Automatic Database Startup
This is a quick post!
– About Oracle Restart
– Reference to SRVCTL

Ok!
In a nutshell, my notes:

To register the database, if not already registered:


srvctl add database -d $DBNAME -o $ORACLE_HOME -p $ORACLE_HOME/dbs/spfile.ora -y manual

Once the database is registered, change the management policy for the
database to manual:
srvctl modify database -d $DBNAME -y manual
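To confirm the change, srvctl itself can show the policy (a sketch; the output format
varies a bit by version):

srvctl config database -d $DBNAME
# look for a line like "Management policy: MANUAL"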

Matheus.

TFA – Collecting Period
I like quick posts, you already know that. It’s like a quick memo to myself in the future.

Here is a simple example to collect files with TFA by date interval:

tfactl diagcollect -all -from "Jul/24/2015 15:30:00" -to "Jul/24/2015 16:30:00"
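If you prefer a relative window, recent TFA versions should also accept -since (a sketch;
check tfactl diagcollect -h on your version):

tfactl diagcollect -all -since 1h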

See ya!

Matheus.

ARCH Process Killed – Fix Without Restart
Hi all,
What if your ARCH processes hang or get killed? How to keep archiving going without
restarting the database?

Take a look…

Problem:

Current log# 2 seq# 1484 mem# 1: +DGFRA/dbprod01/onlinelog/group_2.380.898680659
ARC1: Detected ARCH process failure
ARC1: STARTING ARCH PROCESSES
ARC1: STARTING ARCH PROCESSES COMPLETE
Master archival failure: 28
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance dbprod01_1 - Archival Error
ORA-00028: your session has been killed
LGWR: Detected ARCH process failure
Thread 1 advanced to log sequence 1485 (LGWR switch)
Current log# 5 seq# 1485 mem# 0: +DGFRA/dbprod01/onlinelog/group_5.870.898689599
Fri Apr 26 10:51:36 2016
Master archival failure: 28
Fri Apr 26 10:52:00 2016

Solution:
Increase the number of ARCn processes…

ALTER SYSTEM SET log_archive_max_processes=8 SCOPE=BOTH;

Fri Apr 26 10:55:15 2016
ARC4 started with pid=59, OS id=76169
Fri Apr 26 10:55:15 2016
ARC5 started with pid=60, OS id=76171
Fri Apr 26 10:55:15 2016
ARC6 started with pid=61, OS id=76173
Fri Apr 26 10:55:15 2016
ARC7 started with pid=62, OS id=76175
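To check the archiver processes afterwards, a quick sketch on v$archive_processes:

-- Each ARCn slot and its current state.
select process, status, log_sequence, state
from v$archive_processes
where status <> 'STOPPED';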

Matheus.

DBA_TAB_MODIFICATIONS
Do you know the view "dba_tab_modifications"?
It's very useful to know what has changed since the last stats gathering on a table, and
all the decisions/information that come with that… See the example below.

The only requirement is to run "dbms_stats.flush_database_monitoring_info" before
checking… take a look:

mydb> create TABLE matheus_boesing.test (nro number);

Table created.

mydb> begin
  2  for i in 1..1000 loop
  3  insert into matheus_boesing.test values (i);
  4  end loop;
  5  commit;
  6  end;
  7  /

PL/SQL procedure successfully completed.

mydb> select table_owner, table_name, inserts, updates, deletes
      from dba_tab_modifications
      where table_name = 'test' and table_owner = 'MATHEUS_BOESING';

no rows selected

mydb> exec dbms_stats.flush_database_monitoring_info;

PL/SQL procedure successfully completed.

mydb> select table_owner, table_name, inserts, updates, deletes
      from dba_tab_modifications
      where table_name = 'test' and table_owner = 'MATHEUS_BOESING';

TABLE_OWNER            TABLE_NAME    INSERTS    UPDATES    DELETES
---------------------- ------------- ---------- ---------- ----------
MATHEUS_BOESING        test                1000          0          0

mydb> EXEC DBMS_STATS.GATHER_TABLE_STATS('MATHEUS_BOESING','test');

PL/SQL procedure successfully completed.

mydb> select table_owner, table_name, inserts, updates, deletes
      from dba_tab_modifications
      where table_name = 'test' and table_owner = 'MATHEUS_BOESING';

no rows selected

For more information:


http://docs.oracle.com/cd/B19306_01/server.102/b14237/statviews_4149.htm

Have a nice day!


Matheus.

Oracle – Lost user’s password?
Hi everyone,

Sometimes we need to connect to the database using an unusual schema whose password
we don't even know, maybe because it was created through a script in a
release, or the previous DBA never stored it somewhere public, or for any other
reason (whatever, right? you just need to connect there and end of story). Anyhow,
you need to log in using this specific schema (to create/delete synonyms,
dblinks, jobs etc…). How would you do that without the password?

Well, there are 2 simple and very useful ways to do that:

• Use proxy connection (connect through);

• Save the password hash, change it, perform what you need, then change it back to the
original password using the hash.

PS: The second approach might be more risky because the password may be set into
some application, datasource, etc… So be aware of the impact before actually
changing the password.

Using Proxy connection:

This is very simple, and in order to do that, you have to connect as sysdba:

sqlplus / as sysdba

Then you will say to the database: "Alright mate, now you will connect to user A
through user B, even without knowing user A's password", with the following
command:

alter user userA grant connect through userB;

By running this command, you'll be able to access user A through user B.
But how does the connection work?

When you are connecting to the database, do it like this:

conn userB[userA]/passB@database

See that we have put the schema's name in [ ]'s. This is how it works. Once you
connect to the database, run:

show user

You will see: “userA”

Using Password Hash:

As said before, this one should be handled more carefully, as it might affect something,
because we will temporarily change the user's password.

First of all, connect to the database with a user who has "grant select any
dictionary" or at least select on dba_users. Then run:

select password from dba_users where username='schema';

You will have a result like this:

PASSWORD
------------------------------
F894844C34402B67

Now that you have the CURRENT password hash saved, change the user's password:

alter user schema identified by newpassword;

By doing that, you will be able to connect to the user using the new password. Do what
you need, and when you are done, change the password back to the original one like
this:

alter user schema identified by values 'F894844C34402B67';

Please notice the keyword VALUES there, using the saved password hash. This is
what allows us to set the user's password using the hash.

That’s it for today guys, very simple, but useful.

Have a wonderful week.

Rafael.

Scheduler Job by Node (RAC Database)
Sometimes you want to run something just in one node of the RAC. Here is an
example to do it:

create or replace procedure USER_JOB.PRC_SOMETHING is
begin
  -- do something
  null;
end;
/

begin
  sys.dbms_scheduler.create_job(
    job_name        => 'USER_JOB.JOB_SOMETHING',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'USER_JOB.PRC_SOMETHING;',
    start_date      => sysdate,
    repeat_interval => 'Freq=Minutely;Interval=30',
    end_date        => to_date(null),
    job_class       => 'DEFAULT_JOB_CLASS',
    enabled         => true,
    auto_drop       => false,
    comments        => 'Something Job.');
end;
/

begin
  dbms_scheduler.set_attribute(
    name      => 'USER_JOB.JOB_SOMETHING',
    attribute => 'INSTANCE_ID',
    value     => 1);
end;
/
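To double-check where the job is pinned, a quick sketch on the dictionary
(INSTANCE_ID is exposed in dba_scheduler_jobs on 11g and later):

select owner, job_name, instance_id, enabled
from dba_scheduler_jobs
where job_name = 'JOB_SOMETHING';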

Matheus.

ORA-01950 On Insert but not on Create
Table
Sounds weird that creating a table does not raise any error, but inserting a correct tuple
into it raises a permission error, right? Just take a look:

SQL> create table matheusdba.table_test(a number) tablespace TEST_TABLESPACE;

Table created.

SQL> insert into matheusdba.table_test values (1);
insert into matheusdba.table_test values (1)
*
ERROR at line 1:
ORA-01950: no privileges on tablespace 'TEST_TABLESPACE'

It probably it’s a new user or a tablespace for which user doesn’t has quota. But why
table creation doesn’t result in error but only on inserting?

Certainly the database is 11.2 or above, because this mechanism is related to
deferred_segment_creation, introduced in that release. This parameter defaults
to true, and means that the segments for tables and their dependent objects
(LOBs, indexes) will not be created until the first row is inserted into the table.
So, only when allocating the segment for the first insert will the database check privileges
on the tablespace.
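By the way, the usual fix for the ORA-01950 itself is simply granting quota on the
tablespace; a sketch using this post's names:

-- Grant quota so segment allocation succeeds on the first insert.
alter user matheusdba quota unlimited on TEST_TABLESPACE;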

It’s a good way to save space. But it causes too some situations when exporting with
EXP, like described here .

Anyway, I think Oracle could validate the segment at create table time; it would
avoid a lot of misunderstanding…
For now, creating a table doesn't imply the insert will happen successfully, unless you
disable deferred_segment_creation and bring back the behavior of earlier
versions:

SQL> show parameters deferred

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
deferred_segment_creation            boolean     TRUE

SQL> alter system set deferred_segment_creation=FALSE;

System altered.

SQL> create table matheusdba.table_test2(a number) tablespace TEST_TABLESPACE;
create table matheusdba.table_test2(a number) tablespace TEST_TABLESPACE
*
ERROR at line 1:
ORA-01950: no privileges on tablespace 'TEST_TABLESPACE'

See ya!
Matheus.

Adding datafile hang on “enq: TT –
contention”
Yesterday a colleague asked me about the "enq: TT – contention" event on his session,
which was adding a datafile to a tablespace that ran out of space in an 11.1.0.7 database.
I had faced this situation once before and decided to document it.

Oracle refers to Bug 8332021: CANNOT ADD A DF WHEN SESSIONS ARE
REPORTING ORA-1653 ON 11.1.0.7 for this situation.

The suggested solutions are:

– "Apply Patch 8332021"
– "Alternatively, you can upgrade to 11.2.0.2 or higher as the patch is included in the
11.2.0.2 patch set."

The undocumented workaround (just for you, by Matheus :D) is:

– Cancel the session adding the datafile.
– Extend any existing datafile to resume the waiting sessions (in resumable state).
– Re-add the datafile.
The extend action releases the blocks and will allow you to add the datafile.
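To see which sessions are suspended in resumable state while you do this, a quick
sketch on dba_resumable:

-- Suspended statements waiting on space, with their error text.
select user_id, session_id, status, name, error_msg
from dba_resumable
where status = 'SUSPENDED';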

Hugs!

Matheus.

Quick guide about SRVCTL
Hi everyone!

Often we catch ourselves trying to remember some simple commands to achieve
what we need. And SRVCTL and its variations may be one of them.

Sometimes we need to create a specific service_name to connect to an
existing database. We can, for example, have an application that must use a SPECIFIC
NODE, so we can configure the service name that way. And we find
ourselves looking for the right syntax for it. Ok, here are some
basic examples that may be helpful.

In order to check ALL the available services already created via SRVCTL, we should
use:

srvctl status service -d <db_name>

It should retrieve an output like this:

dbsrv {/home/oracle}: srvctl status service -d dbgrepora
Service grepora-app1 is running on instance(s) dbgrepora1

Please bear in mind that the <db_name> does not necessarily match the instance name, so to
make sure about the database name, run:

srvctl config database

Example:

dbsrv {/home/oracle}: srvctl config database
dbgrepora

If you have more than one database on that server, it will be returned too.

Ok, now let’s try to create a new service name for your database. In the node that you
want to create the service_name, please run the following.

srvctl add service -d -s

where follow the rule already described above, and you can create as you wish.

Ok GREPORA, but what if I want to create a service_name for multiple instances?
You got it!

The syntax follows the same idea, but we should include a different parameter,
which is:

-r <instance_list>

Example:

srvctl add service -d dbgrepora -s service_dbg -r dbgrepora1,dbgrepora2

Creating the service_dbg service, and checking the status, you’ll have an output like:

dbsrv {/home/oracle}: srvctl status service -d dbgrepora -s service_dbg
Service service_dbg is running on instance(s) dbgrepora1,dbgrepora2

To stop and remove a created service, just use:

srvctl stop service -d <db_name> -s <service_name>

srvctl remove service -d <db_name> -s <service_name>
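For admin-managed services, you can also move a running service between instances; a
sketch using this post's names (options may vary slightly by version):

srvctl relocate service -d dbgrepora -s service_dbg -i dbgrepora1 -t dbgrepora2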

Hope it comes to help!

Best Regards,

Rafael.

Saving database space with ASSM
It’s good way reclaim WASTED space from tables and index using  the Segment
Advisor.

To perform a database reclaim procedure using Automatic Segment Space
Management (ASSM), it is preferred to create tablespaces with the option below:

grepdb> CREATE TABLESPACE HR DATAFILE '+GREPORADG/' SIZE 10M
        EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

Only tablespaces with segment space management auto are eligible for the Segment Advisor.
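To check which of your tablespaces already qualify, a quick sketch on the dictionary:

-- AUTO here means ASSM; MANUAL tablespaces are not eligible.
select tablespace_name, segment_space_management
from dba_tablespaces;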

To manually run the Segment Advisor in OEM:

It will save some database storage, and make space management more effective thanks
to the low/high high-water marks (LHWM/HHWM) that ASSM maintains.

Maiquel.

Flashback- Part 1 (Flashback Drop)
Hi everyone!

Flashback is a technology that comes in handy when you need to recover
the database from logical issues, and it is considered a great feature for
recovery scenarios, besides RMAN. Compared with Recovery Manager (RMAN),
Flashback is a way simpler way to recover from logical issues (caused by end users, most
of the time), while RMAN is better for physical issues. These issues can be things like:

• DELETE operation with a wrong WHERE clause;

• A table mistakenly DROPPED;

• Wrong UPDATE commands;

• Flashback the whole database, to a time in the the past.

And so on… The scenarios are plenty. So, in order to understand each of them better,
we'll explain them in detail, separately, in different posts, so we don't get tired of reading
that much.

The Flashback Types are:

• Flashback Drop

• Flashback Query

• Flashback Versions Query

• Flashback Transactions Query

• Flashback Table

• Flashback Database

• Flashback Data Archive

For this Part 1, we'll discuss the first item only, and in the next posts we will continue
this saga!

Most of the flashback operations are undo-based, so it's up to the DBA to set up a
good retention based on their own environment. The steps are:

• Create the UNDO tablespace

• Set the undo_retention good enough for your needs

• Configure the tablespace to be auto-extend

Okay then, enough with the talking and let’s go right to the point.

FLASHBACK DROP

To perform Flashback Drop operations, we must have the RecycleBin enabled on the
database. To make sure that your RecycleBin is enabled, you can check it like this:

SQL> show parameter recyclebin

NAME                   TYPE    VALUE
---------------------- ------- ------------------------------
recyclebin             string  on

This feature allows us to restore a table that was accidentally dropped, using the
RecycleBin as a source. The RecycleBin is basically where your tables and associated
objects (such as indexes, constraints, triggers, etc…) are sent when they are dropped
(yes, they are still in the database somehow, even after you have dropped them). Flashback
Drop is capable of restoring dropped tables based on the RecycleBin. Ok
GrepOra, but for how long are we gonna have the dropped objects available in the
RecycleBin? They remain available until someone purges them explicitly or due to space
pressure.

Here is an example of FLASHBACK DROP operation:

Create table:

SQL> CREATE TABLE grepora (
  2    column1 VARCHAR2(30),
  3    column2 VARCHAR2(40),
  4    column3 VARCHAR2(20) )
  5  TABLESPACE users;

Table created.

Then drop the table:

SQL> drop table grepora;

Table dropped.

Check the dropped table in the RecycleBin, with the following command:

SQL> select original_name, object_name, type, droptime
     from user_recyclebin where original_name='GREPORA';

ORIGINAL_N OBJECT_NAME                    TYPE   DROPTIME
---------- ------------------------------ ------ -------------------
GREPORA    BIN$NRwjojcna3XgUzvONgooCA==$0 TABLE  2016-06-12:16:06:01

Please have a look at the OBJECT_NAME column, which now contains the current
name of the dropped table in the database, while the ORIGINAL_NAME column shows
the name as it was before the drop. This happens because we can have an object
with the same name created and dropped several times, so all its
versions remain available in case we need a specific one.

To prove this is real, we can simply query the dropped table using the RecycleBin’s
name:

SQL> select count(*) from "BIN$NRwjojcna3XgUzvONgooCA==$0";

  COUNT(*)
----------
         0

Now we have to actually use the flashback command to restore the dropped table and
make it available again under the right name. To do that, we have a few different ways.

Note: In case we have different versions of the table with the same name on the
RecycleBin, Oracle will always choose the most recent one. If you want to restore an
older version, you should use the OBJECT_NAME for the operation.

Examples:

SQL> flashback table grepora to before drop;

Flashback complete.

SQL> select count(*) from grepora;

  COUNT(*)
----------
         0

In the example above, we have successfully restored the GREPORA table using its
ORIGINAL_NAME. But what if we had different versions of the same table?

First, let’s drop the table that we have restored, and check it on the RecycleBin.

SQL> drop table grepora;

Table dropped.

SQL> select original_name, object_name, type, droptime
     from user_recyclebin where original_name='GREPORA';

ORIGINAL_N OBJECT_NAME                    TYPE   DROPTIME
---------- ------------------------------ ------ -------------------
GREPORA    BIN$NRxYdbc4hpjgUzvONgrFng==$0 TABLE  2016-06-12:16:20:48

Create the table again, using the same DDL, and then drop it:

SQL> CREATE TABLE grepora (
  2    column1 VARCHAR2(30),
  3    column2 VARCHAR2(40),
  4    column3 VARCHAR2(20) )
  5  TABLESPACE users;

Table created.

SQL> drop table grepora;

Table dropped.

Check the RecycleBin. We will find the two versions of our table, at different times.

SQL> select original_name, object_name, type, droptime
     from user_recyclebin where original_name='GREPORA';

ORIGINAL_N OBJECT_NAME                    TYPE   DROPTIME
---------- ------------------------------ ------ -------------------
GREPORA    BIN$NRxYdbc4hpjgUzvONgrFng==$0 TABLE  2016-06-12:16:20:48
GREPORA    BIN$NRxYdbc5hpjgUzvONgrFng==$0 TABLE  2016-06-12:16:21:41

Check that the ORIGINAL_NAME for both lines is the same. Now we can flashback
any version of the same table, using the OBJECT_NAME:

SQL flashback table "BIN$NRxYdbc4hpjgUzvONgrFng==$0" to before drop;


Flashback complete.

As we still have the other table and want to restore it as well, we obviously cannot
have the same name for both of them, so we can restore it with the RENAME TO
clause:

SQL flashback table "BIN$NRxYdbc5hpjgUzvONgrFng==$0" to before drop rename


to grepora_2; Flashback complete.

And now we have both versions available to the database:

SQL> select table_name from user_tables;

TABLE_NAME
------------------------------
GREPORA_2
GREPORA

Please stay tuned for the upcoming Flashback posts! We'll cover it all. I hope it
was all clear to everyone. Thanks for reading and have a wonderful week!

Rafael.

Flashback – Part 2 (Flashback Query)
Hey team,

This is the second part of our Flashback Tutorial and today we’re gonna talk about
FLASHBACK QUERY. Please check here for the first post about Flashback Drop .

Let’s go:

FLASHBACK QUERY

In the last Flashback post, we learnt about restoring tables that were dropped from the
database, with the RecycleBin facility. But if you think about it, it's way more likely that
a table suffers an undesirable change than actually being dropped. For example, when you
UPDATE a table with values that are not correct, or delete values (and commit, of
course), and so on, wouldn't it be great if we could go back in time and see how
it was before the change? Thanks to the almighty Oracle Database, we can! We can
use Flashback Query to see how a table was at a specific time in the past. And the
best part of it is that if you are the owner of the table, you can do it yourself, no need to
bother the DBA with that (definitely the best part), and you can correct your own
mistakes. Also, please keep in mind that for FLASHBACK QUERY to work, we
need to have our undo properly configured. To illustrate that, let's see an example:
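First, a quick sketch of checking the undo configuration just mentioned (the retention
value you need depends on how far back you want to query):

SQL> show parameter undo_retention
-- undo_retention (in seconds) must cover your look-back window, and the
-- undo tablespace must be large enough to actually keep that history.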

Let’s create our same old table:

SQL> CREATE TABLE grepora (
  2    column1 VARCHAR2(30),
  3    column2 VARCHAR2(40),
  4    column3 VARCHAR2(20))
  5  TABLESPACE users;

Table created.

Then, let’s insert some values on it:

SQL> insert into grepora values ('value1', 'value2', 'value3');
1 row created.
SQL> insert into grepora values ('line2', 'line2', 'line2');
1 row created.
SQL> insert into grepora values ('line3', 'line3', 'line3');
1 row created.
SQL> insert into grepora values ('line4', 'line4', 'line4');
1 row created.
SQL> insert into grepora values ('line5', 'line5', 'line5');
1 row created.
SQL> commit;

See how the table is at the moment:

SQL> select * from grepora;

COLUMN COLUMN COLUMN
------ ------ ------
value1 value2 value3
line2  line2  line2
line3  line3  line3
line4  line4  line4
line5  line5  line5

Get the SYSDATE, to record the exact time when the table still had this data:

SQL> alter session set nls_date_format='dd/mm/yyyy hh24:mi:ss';

Session altered.

SQL> select sysdate from dual;

SYSDATE
-------------------
20/06/2016 16:36:07

Now, let’s make some “mistakes” here, try to change the content of the table, deleting
and updating values:

SQL> delete from grepora where column1='line5';
1 row deleted.
SQL> update grepora set column1='line1', column2='line1', column3='line1'
     where column1='value1';
1 row updated.
SQL> commit;
Commit complete.

And see how the table is right now:

SQL> select * from grepora;

COLUMN COLUMN COLUMN
------ ------ ------
line1  line1  line1
line2  line2  line2
line3  line3  line3
line4  line4  line4

Note that, after our changes, the content of the table is different from the original
version. How can we revert that if we don't know how it was before?

We use the famous AS OF TIMESTAMP clause, which allows us to see the table at
a different time in the past.

In the example below, note that by using the AS OF TIMESTAMP clause with
the date we captured before changing our table, we can see the same previous
data:

SQL> select * from grepora as of timestamp
     (to_timestamp('20/06/2016 16:36:07', 'dd/mm/yyyy hh24:mi:ss'));

COLUMN COLUMN COLUMN
------ ------ ------
value1 value2 value3
line2  line2  line2
line3  line3  line3
line4  line4  line4
line5  line5  line5

And the current version:

SQL> select * from grepora;

COLUMN COLUMN COLUMN
------ ------ ------
line1  line1  line1
line2  line2  line2
line3  line3  line3
line4  line4  line4

With this feature, we can see how a table was “before the mistake” and do the proper
actions to fix it.

I hope it was clear to everyone, if you have any doubt, please get in touch with
GrepOra and we’ll be glad to help.

For the next post, we’ll be doing a test case for FLASHBACK VERSIONS QUERY!
Stay Tuned!

Rafael.

Flashback- Part 3 (Flashback Versions
Query)
Hi Everyone,

Here we are to continue our Flashback Saga! If you missed our first 2 posts
and are in the mood for a good reading, please go through the links below:

Flashback Drop

Flashback Query

Today we are going to discuss Flashback Versions Query, which has a strong link
with the previous post, the Flashback Query (AS OF). With this feature, we are able to
verify all changes made between 2 points in time in the past, using SCNs or
timestamps. Of course, the Flashback Versions Query will retrieve only the committed
data. Just like Flashback Query, the Flashback Versions Query is undo-based, so
make sure your undo tablespace and undo retention period are good enough for you.

What is the difference between Flashback Query and Flashback Versions Query?
Well, basically using Flashback Query, you’ll see an EXACT point in the past for one
single value. Using the Versions Query, you can see all versions of that value between
two times in the past. Interesting huh?

This feature is enabled by using the clause VERSIONS BETWEEN in a SELECT
statement, so then, you can view all the variations of some value between 2 points in
the past.

Let’s do an example that may clarify our doubts:

We have our table already created on the previous Flashback Posts, so let’s use it:

SQL> desc grepora
 Name                                      Null?    Type
 ----------------------------------------- -------- ---------------------
 COLUMN1                                            VARCHAR2(30)
 COLUMN2                                            VARCHAR2(40)
 COLUMN3                                            VARCHAR2(20)

• Insert values into the table, and then get the SCN:

SQL> insert into grepora values ('line1', 'line1', 'line1');
1 row created.
SQL> commit;
Commit complete.

SQL> @scn
   2498333363867

• This is the script used to get the SCN:

set echo off feedback off lines 200 pages 0
column scn format 999999999999999
SELECT dbms_flashback.get_system_change_number scn FROM DUAL;

• So, currently our table has only one value, which is:

SQL> select * from grepora;

COLUMN1 COLUMN2 COLUMN3
------- ------- -------
line1   line1   line1

Let’s modify these values several times:

SQL> update grepora set column1='line2', column2='line2', column3='line2'
     where column1='line1';
1 row updated.
SQL> commit;
Commit complete.
SQL> update grepora set column1='line3', column2='line3', column3='line3'
     where column1='line2';
1 row updated.
SQL> commit;
Commit complete.
SQL> update grepora set column1='line4', column2='line4', column3='line4'
     where column1='line3';
1 row updated.
SQL> commit;
Commit complete.
SQL> update grepora set column1='line5', column2='line5', column3='line5'
     where column1='line4';
1 row updated.
SQL> commit;
Commit complete.

• We still have only one line in our table, but we have changed it several times, with
the above UPDATE commands.

SQL> select * from grepora;

COLUMN1 COLUMN2 COLUMN3
------- ------- -------
line5   line5   line5

Now, it’s time. Let’s use this very nice feature to check all the versions of that this
value has during two points in time.

• First, get the SCN again, in order to have the second point in time to compare:

SQL> @scn
   2498333943172

• Now, we can compare all existing values for this table/columns having the 2 SCNs as
reference (we could also use timestamps for that).

SQL> select * from grepora versions between scn 2498333363867 and 2498333943172;

COLUMN1 COLUMN2 COLUMN3
------- ------- -------
line5   line5   line5
line4   line4   line4
line3   line3   line3
line2   line2   line2
line1   line1   line1

Done! With this example we could see all the versions of that table between 2 times in
the past using SCN!
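
If you'd rather work with time instead of SCNs, the same clause accepts timestamps; a minimal sketch, reusing the table above (the 15-minute window is just an assumed example and must fit inside your undo retention):

SQL> select versions_xid, versions_operation, column1
     from grepora
     versions between timestamp systimestamp - interval '15' minute and maxvalue;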

For the next post, we will check Flashback TRANSACTIONS query, which can go a
little further than this one. We’ll see a little more next week!

Please let us know if you have any doubt or suggestion.

Have a wonderful week.

Rafael.

Flashback – Part 4 (Flashback Transaction
Query)
Hi all,

If you have missed the previous Flashback posts, please go through these links to find
and read them if you feel like it!

Flashback – Part 1 (Flashback Drop)


Flashback – Part 2 (Flashback Query)
Flashback – Part 3 (Flashback Versions Query)

And now we are halfway to the end of the Flashback posts; let's see a little
more about FLASHBACK TRANSACTION QUERY.

Putting it very simply, Flashback Transaction Query is pretty much the same as
Flashback Versions Query, where you can see all changes made between two times
in the past. The difference here is that the TRANSACTION query facilitates the
rollback of an operation by providing us the proper SQL to undo it.

FTQ is also undo-based, so as usual, make sure you have enough space on the undo
tablespace and an undo_retention that fits your scenario.

There are some things that need to be configured before using FTQ, so make sure it
is properly set up:

• Your DB must be running with version 10.0 compatibility or higher.

• Supplemental logging must be enabled (alter database add supplemental log data).

• Grants: Any user who might need to use FTQ must have the SELECT ANY
TRANSACTION privilege, and also the FLASHBACK privilege on the tables he
wants to be able to flashback (or FLASHBACK ANY TABLE).
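
A minimal sketch of those prerequisites (SCOTT is just an assumed user here, and GREPORA is the table from the previous posts):

SQL> select supplemental_log_data_min from v$database;
SQL> alter database add supplemental log data;
SQL> grant select any transaction to scott;
SQL> grant flashback on grepora to scott;
-- or, broader: grant flashback any table to scott;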

In order to operate the Flashback Transaction Query, we should use the
FLASHBACK_TRANSACTION_QUERY view. This view determines what changes
were made in a specific transaction or between two times in the past. Make sure you
set a WHERE clause in your select statement, indicating either the transaction identifier or
a timestamp. Let's have a look into the view columns:

SQL> desc flashback_transaction_query
 Name                                      Null?    Type
 ----------------------------------------- -------- ---------------------
 XID                                                RAW(8)
 START_SCN                                          NUMBER
 START_TIMESTAMP                                    DATE
 COMMIT_SCN                                         NUMBER
 COMMIT_TIMESTAMP                                   DATE
 LOGON_USER                                         VARCHAR2(30)
 UNDO_CHANGE#                                       NUMBER
 OPERATION                                          VARCHAR2(32)
 TABLE_NAME                                         VARCHAR2(256)
 TABLE_OWNER                                        VARCHAR2(32)
 ROW_ID                                             VARCHAR2(19)
 UNDO_SQL                                           VARCHAR2(4000)

See the XID column there? This is our Transaction identifier, so how would we know
the identifier of our transaction if we don’t have this information?

There are some hidden columns on every table, named VERSIONS_%, that contain
all this information when we use VERSIONS BETWEEN. Some of them are:

versions_startscn
versions_starttime
versions_endscn
versions_endtime
versions_xid
versions_operation

In order to clarify all of this, let’s use an example to illustrate every statement read
here today.

After doing all the pre-requisites described above:

• Compatibility 10.0

• Enable Supplemental log

• Grant SELECT ANY TRANSACTION and FLASHBACK ANY TABLE

Now, we wanna know the values of some of the hidden columns for the GREPORA table
(created on previous posts), such as VERSIONS_XID, in order to identify the
transaction IDs and properly use FTQ. Let's use the following query to get it:

SELECT versions_startscn,
       versions_starttime,
       versions_endscn,
       versions_endtime,
       versions_xid,
       versions_operation,
       grepora.*
  FROM grepora
  VERSIONS BETWEEN TIMESTAMP
           to_timestamp('20/06/2016 16:35:00', 'dd/mm/yyyy hh24:mi:ss')
       AND to_timestamp('20/06/2016 16:40:00', 'dd/mm/yyyy hh24:mi:ss');

Obviously, please adjust your script to run between the desired timestamps.

Once you have the information captured above, we can figure out the transaction ID
(XID) and query the FLASHBACK_TRANSACTION_QUERY view, to be able to
rollback our transaction:

SELECT xid, undo_sql
  FROM flashback_transaction_query t
 WHERE table_owner='RNOLIO'
   and table_name='GREPORA'
   and XID="";

• The output should be such as:

XID              UNDO_SQL
---------------- ---------------------------
000200030000002D insert into "RNOLIO"."GREPORA" ("column1","column2","column3") values ('111','Mike','655');
000200030000002D delete from "RNOLIO"."GREPORA" where ROWID = 'AAAKD4AABAAAJ3BAAB';
000200030000002D update "RNOLIO"."GREPORA" set "column1" = 'value1' where ROWID = 'AAAKD2AABAAAJ29AAA';

Please note that we have the UNDO_SQL column, indicating to us the exact
command to be executed to rollback that exact transaction. This is awesome, right?
Also, instead of using the XID as a filter, you can use any other hidden column that you
want, or even use the timestamp between two points in time.

Please let us know if you have any doubt on this, and have an awesome week.

Rafael.

Flashback – Part 5 (Flashback Table)
Hi everybody,

Today we are going to discuss FLASHBACK TABLE. As usual, first, I am
tagging here the previous posts about Flashback Technology, so feel free to check
them out if you want:

Flashback – Part 1 (Flashback Drop)


Flashback – Part 2 (Flashback Query)
Flashback – Part 3 (Flashback Versions Query)
Flashback – Part 4 (Flashback Transaction Query)

So let’s do it people.

Flashback Table is a very interesting facility that our almighty Oracle Database
provides us, giving us the ease of flashing back a table (obviously) to a point-in-time in the
past or even to an SCN.

An interesting part is: if you have dependent values on this table, they will be reverted as
well when you perform the flashback table! Awesome, right?

The difference, compared to all other previous sections until now, is: none of them
affected the table as a whole, they were very targeted. Now we have the possibility to take
the entire table back with only one simple command.

The use of flashback table is a MUCH QUICKER, SIMPLER and more INDEPENDENT
option to recover a table to a previous position, compared to an incomplete recovery,
for example. Why independent? Because if you are having a bad day and do
something wrong, you can flashback your own table quickly, bothering nobody.

There is an important point to be mentioned:

• All triggers are disabled when you perform a Flashback Table operation, and they
remain disabled afterwards, regardless of whether they were enabled or disabled before.
So make sure to identify the enabled ones before executing the Flashback Table.
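
A quick way to capture the enabled triggers beforehand (a minimal sketch, using the GREPORA table from these posts):

SQL> select trigger_name, status
     from user_triggers
     where table_name = 'GREPORA';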

Steps to perform a Flashback Table:

• Enable Row Movement on the table you desire to perform the flashback

• Get an SCN or Timestamp to go back in time (I wish I could do it with my life
sometimes).

• Check the current values on your table.

• Do some changes on it.

• Check the table with the wrong values.

• Flashback the table to the SCN or Timestamp you caught in step 2.

• Check that everything is as expected on your table.

So let’s do a real example here:

Step 1:

SQL> alter table grepora enable row movement;

Table altered.

Step 2:

SQL> @scn
   2503726817930

Step 3:

SQL> select * from grepora;

COLUMN COLUMN COLUMN
------ ------ ------
line1  line1  line1
line2  line2  line2
line3  line3  line3
line4  line4  line4
line5  line5  line5

Step 4:

SQL> update grepora set column1='grepora';
5 rows updated.
SQL> commit;
Commit complete.

Step 5:

SQL> select * from grepora;

COLUMN1  COLUMN COLUMN
-------- ------ ------
grepora  line1  line1
grepora  line2  line2
grepora  line3  line3
grepora  line4  line4
grepora  line5  line5

Step 6:

SQL> flashback table grepora to scn 2503726817930;

Flashback complete.

Step 7:

SQL> select * from grepora;

COLUMN1 COLUMN2 COLUMN3
------- ------- -------
line1   line1   line1
line2   line2   line2
line3   line3   line3
line4   line4   line4
line5   line5   line5

PS: If you want the triggers to be re-enabled along with the flashback command,
please use:

flashback table grepora to scn 2503726817930 enable triggers;

And there we go! The table is reverted to its previous position. This time we used an
SCN to flashback.

Also bear in mind that, like most of the Flashback operations, this one is also
undo-based, so make sure you have the size and retention that you need.

Please feel free to comment and e-mail us in case of any doubt or suggestion.

Have a wonderful week.

Rafael.

Flashback – Part 6 (Flashback Database)
Hi people,

We’re almost there to finish this Flashback Tutorial

To check the previous posts, please go through:

Flashback – Part 1 (Flashback Drop)


Flashback – Part 2 (Flashback Query)
Flashback – Part 3 (Flashback Versions Query)

Flashback – Part 4 (Flashback Transaction Query)

Flashback – Part 5 (Flashback Table)

Today’s post is gonna be about Flashback Database, a pretty good feature for
non-production levels of your structure, I would say.

It’s very unlikely that you are going to rollback your entire production database to a
point-in-time in the past, right? But if you need to, this facility is there.

Why do I say that it is great for non-production environments?

For example: I have my DEV/TEST database and I know that it is running
perfectly fine now; then, as a test measure, I change a lot of things and end up
messing up the database, affecting a lot of ends. Then, like magic, you can move
your ENTIRE DATABASE back with Flashback Database to a point in the past where
everything was fine.

Different from all other flashback operations, Flashback Database is not undo-based; it
has its own Flashback Logs, which are used to perform these operations. We can see
how far we can go back by querying the V$FLASHBACK_DATABASE_LOG view,
columns OLDEST_FLASHBACK_SCN and OLDEST_FLASHBACK_TIME.
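
For instance (a minimal sketch):

SQL> select oldest_flashback_scn, oldest_flashback_time
     from v$flashback_database_log;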

To make sure that you can perform Flashback Database operations, please make
sure that you have enabled the Flashback, as:

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
YES

If it is set to NO, we need to do the following (see the consolidated sketch after the list):

• Shutdown the database.

• Startup mount.

• Run:

alter database flashback on;

• Open the database.
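
Putting those steps together in SQL*Plus (a minimal sketch; it assumes the flash recovery area, db_recovery_file_dest, is already configured):

SQL> shutdown immediate
SQL> startup mount
SQL> alter database flashback on;
SQL> alter database open;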

Using the FLASHBACK DATABASE operation:

To perform a FLASHBACK DATABASE operation, make sure that your database is in
MOUNT mode, otherwise you won't be able to do so.

Once your database is properly setup for flashback database operations, we have 3
ways to perform this:

• SCN

• Timestamp

• Restore Point

You must already be familiar with the first two: you can go back to a specific past SCN or
to a time in the past using a timestamp. The commands follow this syntax:

FLASHBACK DATABASE TO SCN 73834;

FLASHBACK DATABASE TO TIME "TO_DATE('09/20/05','MM/DD/YY')";

Executing this, you are rolling back your whole database to the point in time defined.

Then you have the Restore Point feature, which is nothing more than YOU,
manually, marking the database at some point, and then turning back to that point. The good
part here is that you can name this point-in-time as you prefer.

Let’s do an example:

CREATE RESTORE POINT BEFORE_CHANGES;

The name of our restore point is BEFORE_CHANGES, but it can be named as you
prefer. Thinking about our first example for non-production databases, we can
use it just like we said:

• Create the restore point

• Perform all the changes that you need to do

• Go back in time with the whole database using the restore point created.

To perform the recovery using the Restore Point, you must have your database in
MOUNT mode. Once you have it, you are going to need to execute:

FLASHBACK DATABASE TO RESTORE POINT BEFORE_CHANGES;

When the database finishes the Flashback operation, you will need to open the
database with the RESETLOGS option:

alter database open resetlogs;

There you go, guys. As we've seen, we have several ways to use the
Flashback Database operation, and it is very useful in a lot of situations. I have just
illustrated the most common one (for me).

I hope that it has been a good read for you guys and not boring.

We have only one flashback type left to publish (Flashback Data Archive), and then we
are going to move on to different subjects.

Have a wonderful week everyone!

Cheers,
Rafael.

Flashback – Part 7 (Flashback Data Archive)
Hey everyone,

Finally, the last part of our flashback posts: FLASHBACK DATA ARCHIVE! If you
didn't have a chance to check the previous posts, please do not hesitate to take a look
if you need to or if you just got curious.

Flashback – Part 1 (Flashback Drop)


Flashback – Part 2 (Flashback Query)
Flashback – Part 3 (Flashback Versions Query)

Flashback – Part 4 (Flashback Transaction Query)

Flashback – Part 5 (Flashback Table)


Flashback – Part 6 (Flashback Database)

Well, there we go then

The Flashback Data Archive is a great option if you need to keep track of all changes
for a very long time in your database. I mean, when all other Flashback options aren't
good enough for you and you need to keep way more history, you need to use
Flashback Data Archive, which is gonna keep track of changes over the table's lifetime.

Why would I want to use that? Well, one of the options that I see, is about auditing
your DB.

Considering the configuration and use of Flashback Data Archive, we’re gonna list the
steps and then explain them with more details:

• Create a tablespace with enough space for your data archive (It can be an existing
one, but how about we keep ourselves better organized?)

• Create the Flashback Data Archive using the tablespace created on step1 and
define quota to the tablespace (optional) and define the retention of the FDA
(optional).

• Create/Alter a table to use the flashback data archive.

It is pretty straightforward and simple to configure and use it. So let’s get into the
details:

1. Create the tablespace:

If you are here reading this post, we assume that you already know how to create a
simple tablespace.

2. Create the Flashback Data Archive:

SQL> create flashback archive audit_grepora
     tablespace tbs_grepora_archive
     quota 25g retention 2 year;

Please note that here you can set up:

* The Flashback Data Archive name
* The tablespace
* The quota
* The retention

Of course, you can change all the parameters as you need using the ALTER
command, such as:

SQL> alter flashback archive audit_grepora
     modify tablespace tbs_grepora_archive quota 10g;

or

SQL> alter flashback archive audit_grepora modify retention 200 day;

Also, you can clean up your Flashback Data Archive as you need. Imagine that you
are running out of space and your data is too big and you don’t need the oldest data.
Then we can PURGE the flashback data archive using SCN or timestamp:

SQL> alter flashback archive audit_grepora purge before scn 9835743;

or

SQL> alter flashback archive audit_grepora purge before timestamp (SYSDATE - 365);

3. Create/Alter a table to use the flashback data archive:

This is the simplest step. If you want your table to use a specific flashback data
archive to keep the history of all its changes, then you need to run the following:

SQL> alter table grepora flashback archive audit_grepora;

Or, if you are creating a new table, just add "flashback archive" at the end of your DDL:

SQL> create table grepora (column1 varchar2(20), column2 varchar2(20)) flashback archive;

If you want to stop your table from using the FDA, simply do it with the alter command:

SQL> alter table grepora no flashback archive;

Imagine that you want to check how that table looked 200 days ago. Then just use the
AS OF TIMESTAMP clause in your SELECT statement, as already discussed in
previous posts.
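
For example (a sketch, assuming the retention configured above covers 200 days):

SQL> select * from grepora as of timestamp (systimestamp - interval '200' day);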

If you want to check Flashback Data Archive information, please go through these
views:

DBA_FLASHBACK_ARCHIVE – Information about flashback data archive

DBA_FLASHBACK_ARCHIVE_TS – All tablespaces used by FDA

DBA_FLASHBACK_ARCHIVE_TABLES – List all tables with FDA enabled.
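
A couple of quick checks against those views (a minimal sketch):

SQL> select flashback_archive_name, retention_in_days, status
     from dba_flashback_archive;

SQL> select owner_name, table_name, flashback_archive_name
     from dba_flashback_archive_tables;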


I sincerely hope that this flashback tutorial helped you to define your best strategies for
recovering your data and also cleared up some doubts that may have shown up during
your day job.

We from GrepOra.com are very grateful to have the opportunity to share knowledge
and experience with everyone and we seriously want to help!

This is the end of the Flashback posts. See you next week with some other subject!

Rafael.

Alert Log: “Private Strand Flush Not
Complete” on Logfile Switch
Hi all!
Just a curiosity: have you ever noticed in a database alert log the occurrence of the
following message for every logfile switch?

Thread 1 cannot allocate new log, sequence 9281
Private strand flush not complete
  Current log# 5 seq# 9280 mem# 0: /db/u5001/oradata/GREPORADB/redo05a.log
Thread 1 advanced to log sequence 9281 (LGWR switch)
  Current log# 6 seq# 9281 mem# 0: /db/u5001/oradata/GREPORADB/redo06a.log

It happens because, before every logfile switch, all private strands have to be flushed
to the current log.
It’s well described by the docs Alert Log Messages: Private Strand Flush Not
Complete (Doc ID 372557.1 ) and Manual Log Switching Causing “Thread 1
Cannot Allocate New Log” Message in the Alert Log (Doc ID 435887.1) .

The unpublished Bug 5241081 says:


“Technically all the strands need to be flushed when the log switch is being initiated
and hence the messages for checkpoint not complete and private strand flush not
complete are synonymous. The crux is that the strand still have transactions active
which need to be flushed before this redo can be overwritten, would recommend
letting Oracle perform the checkpointing by default and tune fast_start_mttr_target to
achieve what one is looking for.”
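
If you want to follow that recommendation, a quick look at the current MTTR targets (a minimal sketch):

SQL> show parameter fast_start_mttr_target

SQL> select target_mttr, estimated_mttr from v$instance_recovery;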

So, it’s an expected behavior and normal to transactional environments, don’t worry!
It’s simple to be reproduced too… Take a look:

session1> update teste set a=5 where a=2;
1 row updated.

session2> select t1.sid, t1.username, t2.xidusn, t2.used_urec, t2.used_ublk
          from v$session t1, v$transaction t2
          where t1.saddr = t2.ses_addr;

 SID USERNAME   XIDUSN USED_UREC USED_UBLK
---- ---------- ------ --------- ---------
 304 MBOESING        4         1         1

session2> alter system switch logfile;
System altered.

So, there is an uncommitted session. How does it look in the alert log?

Thread 1 cannot allocate new log, sequence 9289
Private strand flush not complete
  Current log# 4 seq# 9288 mem# 0: /db/u5001/oradata/GREPORADB/redo04a.log
Thread 1 advanced to log sequence 9289 (LGWR switch)
  Current log# 5 seq# 9289 mem# 0: /db/u5001/oradata/GREPORADB/redo05a.log

Ok! The expected behavior. Now let’s commit the transaction and repeat the process:

session1> commit;
Commit complete.

session2> select t1.sid, t1.username, t2.xidusn, t2.used_urec, t2.used_ublk
          from v$session t1, v$transaction t2
          where t1.saddr = t2.ses_addr;

no rows selected

session2> alter system switch logfile;
System altered.

And the alert log:

Thread 1 advanced to log sequence 9290 (LGWR switch)
  Current log# 6 seq# 9290 mem# 0: /db/u5001/oradata/GREPORADB/redo06a.log

Have a nice week!


Matheus.

TPS Chart on PL/SQL Developer
Hi all,
Since the last post, some people asked me how to make the charts using PL/SQL
Developer. It basically works for every kind of query/data, like MS Excel.
I'd recommend you to use it with historic data, setting time as the "X" axis.

Here is the example for the post Oracle TPS: Evaluating Transaction per Second:

And get:

PL/SQL Developer is a commercial tool of Allround Automations .


You can access more information about licensing here .

Have a nice day!
Matheus.

PL/SQL Developer Taking 100% of Database
CPU
When using PL/SQL Developer (Allround Automations), an internal query may take a lot
of CPU cycles on the database server (100% of a CPU).
Is this your problem? Please check if the query looks like this:

select s.synonym_name object_name, o.object_type
  from sys.all_synonyms s, sys.all_objects o
 where s.owner in ('PUBLIC', user)
   and o.owner = s.table_owner
   and o.object_name = s.table_name
   and o.object_type in ('TABLE', 'VIEW', 'PACKAGE', 'TYPE', 'PROCEDURE', 'FUNCTION', 'SEQUENCE')

It’s caused by the Describe Context Option of Code Assistant. To disable it:
Tools Preferences Code Assistant and disable the “Describe Context” option.

PL/SQL Developer is a commercial tool of Allround Automations .

By tool documentation:
“Describe context context to determine if the Code Assistant should describe the
context of the current user, editor and program unit.
The minimum number of characters identified in the context described can be called
before the word of how many characters need to be typed. Note that you can always
manually invoke code assist, even if the characters have not been typed.
Description of standard functions in the case of default, Code Assist will describe the
function of the standard the to_char, add_months. If you are familiar with these
functions, you can disable this option.”

I hope it helped you.
See ya!
Matheus.

Installing and Configuring ASMLIb on
Oracle Linux 7
Hi all!
For those who are familiar with RHEL/OEL 4 and 5, there are some differences in starting
ASMLib on OEL 6 and 7.

So, a quick guide to install (done on OEL 7), start and configure:

1. Install the ASMLib kernel module package as root using the following command:

yum install kmod-oracleasm

2. Install the ASMLib library package and utilities package

yum install oracleasm-support oracleasmlib oracleasm-`uname -r`

It’s possible some package to not found. For example:

No package oracleasmlib available.

So, you can download rpm libs from here and install via rpm:

[root@dbsrv01 oracle]# rpm -Uvh ~/oracleasmlib-2.0.12-1.el6.x86_64.rpm
Preparing...                ################################# [100%]
Updating / installing...
   1:oracleasmlib-2.0.12-1.el6        ################################# [100%]

Ok, now let's configure/start the services:

[root@dbsrv01 ~]# /etc/init.d/oracleasm configure

Nothing happened? Ok, let's try to start it:

[root@dbsrv01 ~]# /etc/init.d/oracleasm start
Starting oracleasm (via systemctl): Job for oracleasm.service failed because the
control process exited with error code. See "systemctl status oracleasm.service"
and "journalctl -xe" for details. [FAILED]

Hmmm… Are these commands correct?

[root@dbsrv01 ~]# /etc/init.d/oracleasm
Usage: /etc/init.d/oracleasm {configure|createdisk|deletedisk|querydisk|listdisks|scandisks|status}

Ok… So, what to do?

Take a look:

[root@dbsrv01 ~]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm

Victory!
Now, let’s configure:

[root@dbsrv01 ~]# oracleasm configure
ORACLEASM_UID=
ORACLEASM_GID=
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"

It shows the current configuration, but how do we change it?

Just add the "-i" flag, like this:

[root@dbsrv01 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library driver.
The following questions will determine whether the driver is loaded on boot
and what permissions it will have. The current values will be shown in
brackets ('[]'). Hitting <ENTER> without typing an answer will keep that
current value. Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: oinstall
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

And you can list again:

[root@dbsrv01 ~]# oracleasm configure
ORACLEASM_UID=grid
ORACLEASM_GID=oinstall
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"
[root@dbsrv01 ~]# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

To add a disk, the same process from earlier versions can be followed:

[root@dbsrv01 ~]# oracleasm createdisk SDD /dev/sdd1
Writing disk header: done
Instantiating disk: done
[root@dbsrv01 ~]# oracleasm listdisks
SDD

For all commands:

[root@dbsrv01 ~]# oracleasm -h
Usage: oracleasm [--exec-path=] [ ]
       oracleasm --exec-path
       oracleasm -h
       oracleasm -V

The basic oracleasm commands are:
    configure      Configure the Oracle Linux ASMLib driver
    init           Load and initialize the ASMLib driver
    exit           Stop the ASMLib driver
    scandisks      Scan the system for Oracle ASMLib disks
    status         Display the status of the Oracle ASMLib driver
    listdisks      List known Oracle ASMLib disks
    querydisk      Determine if a disk belongs to Oracle ASMlib
    createdisk     Allocate a device for Oracle ASMLib use
    deletedisk     Return a device to the operating system
    renamedisk     Change the label of an Oracle ASMlib disk
    update-driver  Download the latest ASMLib driver

And to see arguments for each one:

[root@dbsrv01 ~]# oracleasm configure -h
Usage: oracleasm-configure [-l ] [-i|-I] [-e|-d] [-u ] [-g ] [-b|-p] [-s y|n] [[-o ] ...] [[-x ] ...]

Have a nice day!


See ya!
Matheus.

ASM: Adding disk “_DROPPED%” FORCE
Ok doke,
First, let me make it clear: adding a disk with force should be avoided, mainly because of all the
rebalance involved. The best choice, if you have "time", is to just put the disks online, like:

1) ALTER DISKGROUP <dg> ONLINE DISK <disk>; or
2) ALTER DISKGROUP <dg> ONLINE DISKS IN FAILGROUP <failgroup>; or
3) ALTER DISKGROUP <dg> ONLINE ALL;
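
For instance, with the DGDATA/FGAUX names used below (a minimal sketch):

SQL> alter diskgroup DGDATA online disks in failgroup FGAUX;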

But this post is about adding the dropped disks back to the DG.

To understand my situation, imagine you lost contact with one of your two storage
sites… In this example, represented by the failgroup FGAUX. You would see the disks
like this:

SQL> select name, failgroup, state from v$asm_disk a where state <> 'NORMAL';

NAME                           FAILGROUP                      STATE
------------------------------ ------------------------------ --------
_DROPPED_0000_DGDATA           FGAUX                          FORCING
_DROPPED_0001_DGDATA           FGAUX                          FORCING
_DROPPED_0002_DGDATA           FGAUX                          FORCING

So, you know your disks by the name pattern (the 0xx ones are FGMAIN and the 1xx ones
are FGAUX, the problematic failgroup). You can do something like:

[root@database-host ~]# /etc/init.d/oracleasm listdisks |grep DGDATA
DGDATA001
DGDATA002
DGDATA003
DGDATA101
DGDATA102
DGDATA103

Now, the simple part…

SQL> ALTER DISKGROUP DGDATA ADD
     FAILGROUP FGAUX
     DISK
     'ORCL:DGDATA101' name DGDATA101 FORCE,
     'ORCL:DGDATA102' name DGDATA102 FORCE,
     'ORCL:DGDATA103' name DGDATA103 FORCE;

Diskgroup altered.

SQL> ALTER DISKGROUP DGDATA rebalance power 8;

Diskgroup altered.

While waiting for the rebalance, let's see the disks in the DG:

SQL> select * from v$asm_operation where group_number =
     (select group_number from v$asm_diskgroup where name='DGDATA');

GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES ERROR_CODE
------------ ----- ---- ---------- ---------- ---------- ---------- ---------- ----------- ----------
           3 REBAL WAIT          8

SQL> select name, failgroup, state from v$asm_disk a where group_number =
     (select group_number from v$asm_diskgroup where name='DGDATA');

NAME                           FAILGROUP                      STATE
------------------------------ ------------------------------ --------
_DROPPED_0000_DGDATA           FGAUX                          FORCING
_DROPPED_0001_DGDATA           FGAUX                          FORCING
_DROPPED_0002_DGDATA           FGAUX                          FORCING
DGDATA101                      FGAUX                          NORMAL
DGDATA102                      FGAUX                          NORMAL
DGDATA103                      FGAUX                          NORMAL
DGDATA001                      FGMAIN                         NORMAL
DGDATA002                      FGMAIN                         NORMAL
DGDATA003                      FGMAIN                         NORMAL

And, when the rebalance ends, the situation will be OK:

SQL> select * from v$asm_operation where group_number =
     (select group_number from v$asm_diskgroup where name='DGDATA');

GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES ERROR_CODE
------------ ----- ---- ---------- ---------- ---------- ---------- ---------- ----------- ----------
           3 REBAL RUN           8          8        629      19087      10143           1

SQL> select * from v$asm_operation where group_number =
     (select group_number from v$asm_diskgroup where name='DGDATA');

no rows selected

SQL> select name, failgroup, state from v$asm_disk a where group_number =
     (select group_number from v$asm_diskgroup where name='DGDATA');

NAME                           FAILGROUP                      STATE
------------------------------ ------------------------------ --------
DGDATA101                      FGAUX                          NORMAL
DGDATA102                      FGAUX                          NORMAL
DGDATA103                      FGAUX                          NORMAL
DGDATA001                      FGMAIN                         NORMAL
DGDATA002                      FGMAIN                         NORMAL
DGDATA003                      FGMAIN                         NORMAL

OK? Easy!

Matheus.

Adding ASM Disks on RHEL Cluster with
Failgroups
# Recognizing as ASMDISK on ASM Libs (ORACLEASM):

1) All cluster nodes: /etc/init.d/oracleasm scandisk

[root@db1host1p ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@db2host2p ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]

2) One of cluster nodes:

[root@db1host1p ~]# /etc/init.d/oracleasm createdisk DGDATA059 /dev/asmdsk/DGDATA059
Marking disk "DGDATA059" as an ASM disk: [ OK ]
[root@db1host1p ~]# /etc/init.d/oracleasm createdisk DGDATA060
/dev/asmdsk/DGDATA060
Marking disk "DGDATA060" as an ASM disk: [ OK ]
[root@db1host1p ~]# /etc/init.d/oracleasm createdisk DGDATA061
/dev/asmdsk/DGDATA061
Marking disk "DGDATA061" as an ASM disk: [ OK ]
[root@db1host1p ~]# /etc/init.d/oracleasm createdisk DGDATA062
/dev/asmdsk/DGDATA062
Marking disk "DGDATA062" as an ASM disk: [ OK ]
[root@db1host1p ~]# /etc/init.d/oracleasm createdisk DGDATA159
/dev/asmdsk/DGDATA159
Marking disk "DGDATA159" as an ASM disk: [ OK ]
[root@db1host1p ~]# /etc/init.d/oracleasm createdisk DGDATA160
/dev/asmdsk/DGDATA160
Marking disk "DGDATA160" as an ASM disk: [ OK ]
[root@db1host1p ~]# /etc/init.d/oracleasm createdisk DGDATA161
/dev/asmdsk/DGDATA161
Marking disk "DGDATA161" as an ASM disk: [ OK ]
[root@db1host1p ~]# /etc/init.d/oracleasm createdisk DGDATA162
/dev/asmdsk/DGDATA162
Marking disk "DGDATA162" as an ASM disk: [ OK ]

3) All cluster nodes: /etc/init.d/oracleasm scandisk

[root@db1host1p ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@db2host2p ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]

# Adding Disk on Diskgroup (sqlplus / as sysasm – ASM Instance)
1) Listing Failgroups

SQL> select distinct failgroup from v$asm_disk where group_number in
     (select group_number from v$asm_diskgroup where name='DGDATA');

FAILGROUP
----------------------------------------------------
FGMASTER
FGAUX

2) Adding Disks (naming and setting rebalance power):

SQL> alter diskgroup DGDATA
  2  add failgroup FG01 disk
  3  'ORCL:DGDATA059' name DGDATA059,
  4  'ORCL:DGDATA060' name DGDATA060,
  5  'ORCL:DGDATA061' name DGDATA061,
  6  'ORCL:DGDATA062' name DGDATA062
  7  add failgroup FG02 disk
  8  'ORCL:DGDATA159' name DGDATA159,
  9  'ORCL:DGDATA160' name DGDATA160,
 10  'ORCL:DGDATA161' name DGDATA161,
 11  'ORCL:DGDATA162' name DGDATA162
 12  rebalance power 10 nowait;

Diskgroup altered

3) Be patient, and wait for the rebalancing:

SQL> select * from v$asm_operation;

GROUP_NUMBER OPERATION STATE POWER ACTUAL  SOFAR EST_WORK EST_RATE EST_MINUTES ERROR_CODE
------------ --------- ----- ----- ------ ------ -------- -------- ----------- ----------
           4 REBAL     RUN      10     10 191386   540431     1651         211
           5 REBAL     WAIT      4

SQL> /

GROUP_NUMBER OPERATION STATE POWER ACTUAL  SOFAR EST_WORK EST_RATE EST_MINUTES ERROR_CODE
------------ --------- ----- ----- ------ ------ -------- -------- ----------- ----------
           4 REBAL     RUN      10     10 443438   548118     2345          44
           5 REBAL     WAIT      4

SQL> /

no rows selected

Well done!
Matheus.

Manually Mounting ACFS
A server rebooted and I needed to remount the ACFS where the Oracle Home is.
About that:
Today's post: Manually Mounting ACFS
Someday's post: Kludge: Mounting ACFS Through Shellscript
Another day's post: Auto Mounting Cluster Services Through Oracle Restart

But first, some useful links:

– ACFS Introduction
– ACFS Advanced
– ACFS Command-Line Utilities

# Manually Mounting ACFS

I checked that my $ORACLE_HOME (mounted on ACFS) is not available to start the
database, and that the ACFS service is down. So, let's do the whole process:

# Starting ACFS

[root@db1host1p ~]$ $GRID_HOME/bin/acfsload start -s

# Volumes OFFLINE: Let’s Enable it:

[root@db1host1p ~]$ $GRID_HOME/bin/crsctl stat res -t |grep acfs


ora.dghome.sephome.acfs
ONLINE OFFLINE db1host1p
[root@db1host1p ~]$ su - grid
[grid@db1host1p ~]$ asmcmd
ASMCMD> volinfo -a
Diskgroup Name: DGHOME
Volume Name: LVHOME
Volume Device: /dev/asm/lvhome-270
State: DISABLED
Size (MB): 10240
Resize Unit (MB): 32
Redundancy: MIRROR
Stripe Columns: 4
Stripe Width (K): 128
Usage: ACFS
Mountpath: /oracle/MYDB
ASMCMD> volenable -a
ASMCMD> volinfo -a
Diskgroup Name: DGHOME
Volume Name: LVHOME
Volume Device: /dev/asm/lvhome-270
State: ENABLED

Size (MB): 10240
Resize Unit (MB): 32
Redundancy: MIRROR
Stripe Columns: 4
Stripe Width (K): 128
Usage: ACFS
Mountpath: /oracle/MYDB

[root@db1host1p ~]$ $GRID_HOME/bin/crsctl stat res -t |grep acfs


ora.dghome.sephome.acfs
ONLINE ONLINE db1host1p mounted on /oracle/MYDB
ONLINE ONLINE db2host2p mounted on /oracle/MYDB

# As root, let’s mount it:

[root@db1host1p ~]# mount -t acfs /dev/asm/lvhome-270 /oracle/MYDB

# Then, with the $ORACLE_HOME available:

[oracle@db1host1p ~]$ srvctl start instance -d MYDB -i MYDB001

Matheus.

Kludge: Mounting ACFS Through Shellscript
Just the script. The history is here .
This is a "workaround" script. As always, it is recommended to use Oracle Restart instead, like I
posted here .

#!/bin/sh
$GRID_HOME/bin/srvctl add filesystem -d /dev/asm/dbhome-270 -g 'DGHOME' -v DBHOME -m /oracle/db -u oracle
if [ $? = "0" -o $? = "2" ]; then
  $GRID_HOME/bin/srvctl start filesystem -d /dev/asm/dbhome-270
  if [ $? = "0" ]; then
    chown oracle:oinstall /oracle/db
    chmod 775 /oracle/db
    $GRID_HOME/bin/srvctl status filesystem -d /dev/asm/dbhome-270
    exit 0
  fi
  $GRID_HOME/bin/srvctl status filesystem -d /dev/asm/dbhome-270
fi

There is a good post on ACFS and ACFS restart scripting:
https://levipereira.wordpress.com/2011/07/28/oracle-acfs-filesystem-managed-by-ohas-on-oracle-restart/

See ya!

Matheus.

CRSCTL: AUTO_START of Cluster Services
(ACFS)
As I said a long time ago ( Manually Mounting ACFS )… here it is:

To set the autostart of a resource (in my case an ACFS) via CRSCTL, here is a simple
example:

# Check How it is currently configured:

[root@db1database1p bin]# ./crs_stat -p ora.dghome.dbhome.acfs |grep AUTO_START
AUTO_START=restore

# Set Autostart (and check):

[root@db1database1p bin]# ./crsctl modify resource ora.dghome.dbhome.acfs -attr AUTO_START=always
[root@db1database1p bin]# ./crs_stat -p ora.dghome.dbhome.acfs |grep AUTO_START
AUTO_START=always

It can also be done with "AUTO_START=1". We have 3 possibilities (always, restore
and never).

# KB
http://docs.oracle.com/cd/E11882_01/rac.112/e16794/resatt.htm#CWADD91444

Matheus.

Changing ACFS mount point
I checked there's no good way to change an ACFS mount point in the asmca
assistant, so I decided to document how I quickly change an ACFS mount point:

• MAKE A BACKUP (in my case, there was no data loss);

• Do as below:

root@mymachine:/oracle/product # /grid/product/12.1.0.2/bin/srvctl stop filesystem -d /dev/asm/ggatebin-68
root@mymachine:/ # /usr/sbin/acfsutil registry -d /dev/asm/ggatebin-68
acfsutil registry: successfully removed ACFS volume /dev/asm/ggatebin-68 from Oracle Registry
root@mymachine:/ # /usr/sbin/acfsutil registry -a /dev/asm/ggatebin-68 /oracle/product/goldengate12c/
acfsutil registry: mount point /oracle/product/goldengate12c successfully added to Oracle Registry
root@mymachine:/oracle/product # chown -R oracle.oinstall goldengate12c
root@mymachine:/oracle/product # chmod 755 goldengate12c

Maiquel.

ORA-27054: NFS file system where the file is
created or resides is not mounted with
correct options
Due to the ease with which we can go to the future or return to the past using GoldenGate, it
becomes increasingly necessary to recover archivelogs from backup; sometimes it is
necessary to recover several days of them. To do it, we generally need large disk space,
and at this point the search for storage disks starts.

After finding a disk, we need to mount it; I did it with simple mount options on AIX.

oracle@grepora1.net:/ tail -8 /etc/filesystems
/archives:
        dev       = "/ggate"
        vfs       = nfs
        nodename  = grepora2.net
        mount     = true
        options   = bg,hard,intr,sec=sys,rw,acl
        account   = false
oracle@grepora1.net:/ mount -a

After trying to move the first archivelog piece, I got the error:
ORA-27054: NFS file system where the file is created or resides is not mounted with
correct options.

The solution for this issue can be found in MOS note Doc ID 359515.1.

Using the mount options table in that note, just adjust the mount point options according to your system.

Dieison.

Error: Starting ACFS in RHEL 6 (Can’t exec
“/usr/bin/lsb_release”)
Quick tip:

# Error:
[root@db1gridserver1 bin]# ./acfsload start -s
Can't exec "/usr/bin/lsb_release": No such file or directory at /grid/product/11.2.0/lib/osds_acfslib.pm line 511.
Use of uninitialized value $LSB_RELEASE in split at /grid/product/11.2.0/lib/osds_acfslib.pm line 516.

# Solution:
[root@db1gridserver1 bin]# yum install redhat-lsb-core-4.0

Note: Bug 17359415 – Linux: Configuring ACFS reports that cannot execute
‘/usr/bin/lsb_release’ (Doc ID 17359415.8)

Matheus.

Create SPFILE on ASM from PFILE on
Filesystem
Some basics, right?
Another thing that is not usual, and every time I do it someone is surprised: "shu" as an
abbreviation for "shutdown":

SQL> create spfile='+DGDATA/MYDB/spfilemydb.ora'
     from pfile='/oracle/product/11.2/dbs/init_mydb.ora';

File created.

SQL> shu immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

Master Burleson also wrote about it. Take a look at a better detailed post on
this subject: http://www.dba-oracle.com/concepts/pfile_spfile.htm .

Matheus.

ORA-15186: ASMLIB error function
Almost a month away… my bad!
Here I go again with a quick tip on something I faced today. Our kernel was "changed"
without notice and this began to happen:

ORA-15186: ASMLIB error function = [asm_init], error = [18446744073709551611], mesg = [Driver not installed]
ERROR: error ORA-15186 caught in ASM I/O path

The solution is basically to update the ASMLib packages, which are based on the kernel version.
For RHEL, the solution is well described here:

http://www.oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
https://access.redhat.com/solutions/315643

Just to remember: after a kernel change, a relink of your Oracle Home is highly
recommended.

Have a nice day!


Matheus.

Charsets: Single-Byte vs Multibyte
Encoding Scheme Issue
Sad history:

IMP-00019: row rejected due to ORACLE error 12899
IMP-00003: ORACLE error 12899 encountered
ORA-12899: value too large for column "SCHEMA"."TABLE"."COLUMN" (actual: 61, maximum: 60)

To understand: it happens when the export/import is made between different
charsets, usually when the destination is a superset with multibyte encoding and the source
is a single-byte one. The reason is that the less specific the charset is, the more bytes
are used to represent a character (c-ç, a-ã, o-õ-ô, for example); this way, the
columns that use byte as their length semantics will be sized differently between these
databases.
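
A minimal sketch to check the character sets involved (run it on both the source and the destination databases):

SQL> select parameter, value from nls_database_parameters
     where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');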

Of course, the more specific a charset configuration is, the better it is for performance
(especially for sequential reads), because the database needs to work with fewer bytes
in the datasets/datablocks for the same tuples, to explain it in a simple way. On the other
hand, this is a quite specific configuration. Performance issues are mostly related to
simpler tunings (SQL access plan, indexing, statistics or solution architecture) than to
this kind of detail. But it's important to mention it if you're working on a database that
is already well tuned…

For more information, I recommend this (recent) documentation:
https://docs.oracle.com/database/121/NLSPG/ch2charset.htm . Please invest your
time in understanding the relation between "Single-Byte Encoding Schemes" and
"Multibyte Encoding Schemes" in this doc.

The following image illustrates in a simple way the difference in bytes used to address
more characters (a characteristic of supersets):

Ok, doke!
And the solution is…

Let’s summarize the problem first: The char (char, varchar) columns uses more
bytes to represent the same characters. So the situations where, in the source, the
column was used by the maximum lengh, it “explodes” the column lengh in the
destination database with a multibyte encoding scheme.
For consideration, I’m not using datapump (expdp/impdp or impdb with networklink)

129
just because it’s a legacy system with long columns. Datapump doesn’t support this
“deprecated” type of data.
So, my solution, for this pontual problem occouring during a migration was to change
the data lengh of the char columns from “byte” to “char”. This way, the used metric is
the charchain rather than bytesize. Here is my “kludge” for you:

select 'ALTER TABLE '||owner||'.'||table_name||' MODIFY '||column_name||' CHAR('||data_length||' CHAR);'
  from dba_tab_cols where data_type='CHAR' and owner='&SCHEMA'
union all
select 'ALTER TABLE '||owner||'.'||table_name||' MODIFY '||column_name||' VARCHAR2('||data_length||' CHAR);'
  from dba_tab_cols where data_type='VARCHAR2' and owner='&SCHEMA';

And it works!
Hugs and see ya!
Matheus.

Date Format in RMAN: Making better!
I know… the date format in RMAN is not good, but it's easy to make it better. Take a look:

db-server$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Wed Aug 12 11:00:59 2015
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
connected to target database: MYDB (DBID=1286311368)

RMAN> list backup of controlfile;

using target database control file instead of recovery catalog

List of Backup Sets
===================
BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
541     Incr 1  17.80M     DISK        00:00:01     12-AUG-15
        BP Key: 541   Status: AVAILABLE  Compressed: NO  Tag: BKPINCR_LV1_20150812_0923
        Piece Name: +DGFRA/MYDB/backupset/2015_08_12/ncnnn1_bkpincr_lv1_20150812_0923_0.4613.887534683
  Control File Included: Ckp SCN: 7301745      Ckp time: 12-AUG-15

RMAN> exit

Recovery Manager complete.

I’ts a simple NLS export on SO before access RMAN:

db-server$ export NLS_DATE_FORMAT='yyyy/mm/dd hh24:mi:ss'
db-server$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Wed Aug 12 11:05:57 2015
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
connected to target database: MYDB (DBID=1286311368)

RMAN> list backup of controlfile;

using target database control file instead of recovery catalog

List of Backup Sets
===================
BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
541     Incr 1  17.80M     DISK        00:00:01     2015/08/12 09:24:42
        BP Key: 541   Status: AVAILABLE  Compressed: NO  Tag: BKPINCR_LV1_20150812_0923
        Piece Name: +DGFRA/MYDB/backupset/2015_08_12/ncnnn1_bkpincr_lv1_20150812_0923_0.4613.887534683
  Control File Included: Ckp SCN: 7301745      Ckp time: 2015/08/12 09:24:41

Matheus.

Creating RMAN Backup Catalog
It can sound repetitive, but it's always good to have notes about it.

1. Create a schema for the catalog on the CatalogDB:

-- Create the user
create user RMAN_MYDB identified by &PASS;
-- Grant/Revoke role privileges
grant recovery_catalog_owner to RMAN_MYDB;
-- Grant/Revoke system privileges
grant create session to RMAN_MYDB;

2. Create the catalog and register the database:

-- Connected to the target database via RMAN
RMAN> connect catalog rman_mydb/password@catdb.sicredi.net:1521/catalogdb

connected to recovery catalog database

RMAN> CREATE CATALOG;

recovery catalog created

RMAN> REGISTER DATABASE;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

Well done!
Matheus.

EXP Missing Tables on 11.2
Made an exp and some tables are missing, right? The database is 11.2+? The
missing tables have no rows in the source database, right? Bingo!
This happens because Oracle implemented a space saving feature on 11.2 called
Deferred Segment Creation.

This feature basically makes the first segment of a table be allocated only when
the first row is inserted. It was implemented because Oracle realized it is not rare to find
databases with lots of tables that have never had a row. You can spot such tables as below.
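
A minimal sketch to check the feature and list the affected tables:

SQL> show parameter deferred_segment_creation

SQL> select table_name from user_tables where segment_created = 'NO';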

The situation occurs because the EXP client uses dba_segments as the index for
exporting, and this feature makes no segment be allocated. For Oracle, it's not a
problem, considering the use of Datapump (EXPDP/IMPDP).

But (there always exists a "but"), let's suppose you have to export the file to a different
location not accessible by a directory object and with no local space, or your table has a
LONG column (yes, it's deprecated, I know… but let's suppose this is a legacy
system…). Then, you can do:

1) For all tables that have no rows, allocate an extent:

alter table owner.tabela allocate extent;

To generate, the script:

select 'alter table '||owner||'.'||table_name||' allocate extent;'
  from all_tables where num_rows=0;

2) Export using the clause VERSION=11.1 or lower on EXP.

More about Deferred Segment Creation:
https://oracle-base.com/articles/11g/segment-creation-on-demand-11gr2

Hope It helped.
See ya!
Matheus.

DDBoost: sbtbackup:
dd_rman_connect_to_backup_host failed
A common error. It happens when the datadomain host or mtree is unreachable.
For the first situation, contact the OS/Network administrator. Is can be a firewall
limitation, DNS miss (if using DNS hosting) or, in some cases, networks physically
unreachable.

For the second case, try to [re]send user/pass to access datadomain:

Starting backup at 24-OCT-15
using target database control file instead of recovery catalog
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=191 instance=almdbdw_1 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Data Domain Boost API
allocated channel: ORA_SBT_TAPE_2
input datafile file number=00001 name=+DGMYDB/almdbdw/datafile/system.267.849463017
channel ORA_SBT_TAPE_1: starting piece 1 at 22-JUL-15
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_SBT_TAPE_1 channel at 10/24/2015 10:03:50
ORA-19506: failed to create sequential file, name="a4qcme1l_1_1", parms=""
ORA-27028: skgfqcre: sbtbackup returned error
ORA-19511: Error received from media manager layer, error text:
   sbtbackup: dd_rman_connect_to_backup_host failed
channel ORA_SBT_TAPE_1 disabled, job failed on it will be run on another channel

Send the user/password to access the Data Domain as follows and, after that, re-run your
action:

RUN {
  ALLOCATE CHANNEL t1 TYPE SBT_TAPE
    PARMS 'BLKSIZE=1048576, SBT_LIBRARY=$ORACLE_HOME/lib/libddobk.so,ENV=(STORAGE_UNIT=$STORAGE_UNIT,BACKUP_HOST=$DATADOMAIN_HOST,ORACLE_HOME=$ORACLE_HOME)'
    FORMAT '%U-%d';
  send 'set username $DDBOOST_USER password $PASSWORD servername $DATADOMAIN_HOST';
  RELEASE CHANNEL t1;
}

Hugs!

Matheus.

EXP-00079 – Data Protected
A quick one: I began to have this problem on 12c backup catalog schemas. The
reason is that, by now, all the information is protected by policies (VPD). The error:

EXP-00079: Data in table "&TABLE;" is protected. Conventional path may only be exporting partial table.

The solution:

catalogdb> GRANT exempt access policy TO &exp_user;

Grant succeeded.

Hugs!
Matheus.

Backup Not Backuped Archivelogs and
Delete Input
Hi all!
Sometimes you are caught in a situation where your database is not backing up
archivelogs, and you need to quickly generate backup commands for those not yet
backed up, deleting them afterwards, right?
I saw this situation in this archived discussion at OTN . Unfortunately I couldn't give
my answer there… But here is how I do it:

select 'backup archivelog from sequence '||sequence#||' until sequence '||sequence#||
       ' thread '||thread#||' filesperset=1 delete input;', first_time
  from v$archived_log
 where backup_count=0 and name is not null
 order by first_time desc;

It generates an output like:

greporadb> select 'backup archivelog from sequence '||sequence#||' until sequence '||sequence#||
  2         ' thread '||thread#||' filesperset=1 delete input;', first_time
  3    from v$archived_log where backup_count=0 and name is not null
  4   order by first_time desc;

'BACKUPARCHIVELOGFROMSEQUEN
--------------------------------------------------------------------------------------------------
backup archivelog from sequence 152153 until sequence 152153 thread 1 filesperset=1 delete input;
backup archivelog from sequence 152152 until sequence 152152 thread 1 filesperset=1 delete input;
backup archivelog from sequence 152151 until sequence 152151 thread 1 filesperset=1 delete input;

3 rows selected.

And be happy!

But one observation! It does not work this way for databases with Data Guard. For those
cases you'll need to add "and name <> '&dgname'" to the select's where clause…

See ya!
Matheus.

How to list all my Oracle Products from
Database park?
This is part of the DBA role: knowing and prospecting the use of Oracle products for the
periodical Oracle contract review, isn't it?
It usually represents a huge problem or, at least, demands a long time to refresh your
spreadsheet…

Well, if you use OEM, we offer you a better option!

(I said "we" because Dieison Santos came to me with this problem these days… So
we talked about it, I gave some directives and he mainly solved the problem. This way,
a great part of "we" should be "he"… haha)

Without further ado, here's a query that can map your environment (at least your Oracle
database products).
You can use it to automate a report and set thresholds. Be creative…

PS: From now on, I'll post everything in English. Just for fun.

select distinct(ddi.host_name) "Host",
       (case
          when opt.name like '%Active Data Guard%' then 'Oracle Active Data Guard'
          when opt.name like '%Advanced Compression%' then 'Oracle Advanced Compression'
          when opt.name like '%Audit Vault%' then 'Oracle Audit Vault'
          when opt.name like '%Database Vault%' then 'Oracle Database Vault'
          when opt.name like '%Partitioning (User)%' then 'Oracle Partitioning'
          when opt.name like '%Real Application Clusters%' then 'Oracle Real Application Clusters'
          when opt.name like '%Real Application Testing%' then 'Oracle Real Application Testing'
          when (opt.name like '%ADDM%'
             or opt.name like '%Automatic Database Diagnostic Monitor%'
             or opt.name like '%Automatic Workload Repository%'
             or opt.name like '%AWR%'
             or opt.name like '%Baseline%'
             or opt.name like '%Diagnostic Pack%') then 'Oracle Diagnostic Pack'
          when (opt.name like '%SQL Monitoring%'
             or opt.name like '%SQL Performance%'
             or opt.name like '%SQL Profile%'
             or opt.name like '%SQL Tuning%'
             or opt.name like '%SQL Access Advisor%'
             or opt.name like '%Tuning Pack%') then 'Oracle Tuning Pack'
          when opt.name like '%Change Management Pack%' then 'Oracle Change Management Pack'
          when ddi.edition like 'Enterprise Edition' then 'Oracle Database Enterprise Edition'
          else opt.name
        end) "Produto Oracle",
       hcd.num_cores "Cores",
       ohs.virtual "Virtual",
       hcd.impl "Processador",
       ddi.dbversion "Versao"
  from mgmt$hw_cpu_details hcd,
       mgmt$os_hw_summary ohs,
       mgmt$db_dbninstanceinfo ddi,
       (select h.host_name as host, h.target_name as database_name,
               i.instance_name as instance_name, h.target_type as target_type,
               h.target_guid as target_guid, f.DBID, f.NAME, f.CURRENTLY_USED,
               f.DETECTED_USAGES, f.FIRST_USAGE_DATE, f.LAST_USAGE_DATE,
               f.VERSION, f.LAST_SAMPLE_DATE, f.LAST_SAMPLE_PERIOD,
               f.TOTAL_SAMPLES, f.AUX_COUNT, f.DESCRIPTION
          from mgmt_db_featureusage f, mgmt_targets h,
               mgmt_db_dbninstanceinfo_ecm i, gc$ecm_gen_snapshot s
         where s.is_current = 'Y'
           and s.snapshot_guid = i.ecm_snapshot_id
           and s.target_guid = f.target_guid
           and h.target_type in ('oracle_database','rac_database')
           and s.target_type = h.target_type
           and s.snapshot_type in ('oracle_dbconfig','oracle_racconfig')
           and f.DETECTED_USAGES > 0) opt
 where hcd.target_guid = ohs.target_guid
   and ohs.host_name = ddi.host_name
   and ddi.target_guid = opt.target_guid
   and (opt.name like '%Active Data Guard%'                     -- Active Data Guard
     or opt.name like '%Advanced Compression%'                  -- Advanced Compression
     or opt.name like '%Audit Vault%'                           -- Audit Vault
     or opt.name like '%Database Vault%'                        -- DB Vault
     or opt.name like '%Partitioning (user)%'                   -- Partitioning
     or opt.name like '%Real Application Clusters%'             -- RAC
     or opt.name like '%Real Application Testing%'              -- RAT
     or opt.name like '%ADDM%'                                  -- Diagnostic Pack
     or opt.name like '%Automatic Database Diagnostic Monitor%' -- Diagnostic Pack
     or opt.name like '%Automatic Workload Repository%'         -- Diagnostic Pack
     or opt.name like '%AWR%'                                   -- Diagnostic Pack
     or opt.name like '%Baseline%'                              -- Diagnostic Pack
     or opt.name like '%Diagnostic Pack%'                       -- Diagnostic Pack
     or opt.name like '%SQL Monitoring%'                        -- Tuning Pack
     or opt.name like '%SQL Performance%'                       -- Tuning Pack
     or opt.name like '%SQL Profile%'                           -- Tuning Pack
     or opt.name like '%SQL Tuning%'                            -- Tuning Pack
     or opt.name like '%SQL Access%'                            -- Tuning Pack
     or opt.name like '%Tuning Pack%'                           -- Tuning Pack
     or opt.name like '%Change Management Pack%'                -- Change Management Pack
     or ddi.edition like 'Enterprise Edition')
 order by ddi.host_name;
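As suggested above, the report is easy to automate. A minimal sketch of the idea, assuming
the cx_Oracle module, a connection to the OEM repository and the query above saved in a file
(all names below are hypothetical):

import cx_Oracle  # assumption: cx_Oracle is installed

# Hypothetical connection to the OEM repository
conn = cx_Oracle.connect('sysman', 'password', 'oemrepo')
cur = conn.cursor()
sql = open('products_query.sql').read().strip().rstrip(';')  # cx_Oracle rejects a trailing ';'
cur.execute(sql)

# Count hosts per detected product and flag anything beyond your threshold
usage = {}
for host, product, cores, virtual, cpu, version in cur:
    usage.setdefault(product, set()).add(host)

THRESHOLD = 0  # e.g. alert on any usage of packs you did not license
for product in sorted(usage):
    flag = ' <== check licensing!' if len(usage[product]) > THRESHOLD else ''
    print '%-50s %3d host(s)%s' % (product, len(usage[product]), flag)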

Matheus.

How to list all my Oracle Products from
Application park?
YES!
I knew you would like the last post!

So, a doubt remains: what about my Oracle Application park?

Take it easy, I'm glad to help. Actually, Dieison Santos and me. As I said in the last post,
it was his problem these days…

Here is a query to list your Oracle Application products (including Oracle SOA Suite,
of course) from OEM.

Use it wisely:

select distinct *
  from (select (LBL_HOSTNAME) "Host",
               (case
                  when LBL_PRODUCTNAME like 'WebLogic Server' then 'WebLogic Suite'
                  when LBL_PRODUCTNAME like '%WebTier and Utilities%' then 'WebLogic Suite'
                  when LBL_PRODUCTNAME like '%EM Platform (OMS)%' then 'WebLogic Suite'
                  when LBL_PRODUCTNAME like '%Web Services Manager%' then 'Diagnostics Pack for Internet Application Server'
                  when LBL_PRODUCTNAME like '%Application Server 10g%' then 'Internet Application Server Enterprise Edition'
                  when LBL_PRODUCTNAME like '%Application Server Infrastructure 10g%' then 'Oracle Enterprise Single Sign-On Suite'
                  when LBL_PRODUCTNAME like '%Business Intelligence%' then 'Oracle Business Intelligence Suite Enterprise Edition Plus'
                  when LBL_PRODUCTNAME like '%Oracle SOA Suite%' then 'SOA Suite for Oracle Middleware'
                  when LBL_PRODUCTNAME like '%Oracle BAM%' then 'SOA Suite for Oracle Middleware'
                  when LBL_PRODUCTNAME like '%WebCenter Portal Suite 11g%' then 'Oracle WebCenter Portal'
                  when LBL_PRODUCTNAME like '%Oracle Business Process Management%' then 'Unified Business Process Management Suite'
                  when LBL_PRODUCTNAME like '%Oracle Remote Intradoc Client%' then 'Oracle WebCenter Content'
                  when LBL_PRODUCTNAME like '%Oracle Application Server Guard%' then 'Internet Application Server Enterprise Edition'
                  when LBL_PRODUCTNAME like '%Application Server Configuration%' then 'Configuration Management Pack for Internet Application Server'
                  else LBL_PRODUCTNAME
                end) "Produto",
               LBL_BASEVERSION "Versao",
               LBL_PROCESSOR "Processador",
               lbl_virtual "Virtual",
               decode(LBL_CPUS, null, 1, LBL_CPUS) "CPUS"
          from (select M.EXTERNAL_NAME LBL_PRODUCTNAME,
                       M.NAME LBL_COMPONENTNAME,
                       M.BASE_VERSION LBL_BASEVERSION,
                       M.HOST_NAME LBL_HOSTNAME,
                       p.virtual lbl_Virtual,
                       p.system_config || nvl2(p.freq, p.freq || ' MHz FSB ', '') LBL_PROCESSOR,
                       p.cpu_count LBL_CPUS
                  from (MGMT$SOFTWARE_COMPONENTS M
                        inner join mgmt$os_hw_summary p on M.HOST_NAME = P.HOST_NAME))
         where (LBL_PRODUCTNAME like 'WebLogic Server'
             or LBL_PRODUCTNAME like '%WebTier and Utilities%'
             or LBL_PRODUCTNAME like '%EM Platform (OMS)%'
             or LBL_PRODUCTNAME like '%Oracle Remote Intradoc Client%'
             or LBL_PRODUCTNAME like '%Application Server 10g%'
             or LBL_PRODUCTNAME like '%Application Server Infrastructure 10g%'
             or LBL_PRODUCTNAME like '%Business Intelligence%'
             or LBL_PRODUCTNAME like '%Oracle SOA Suite%'
             or LBL_PRODUCTNAME like '%Oracle BAM%'
             or LBL_PRODUCTNAME like '%WebCenter Portal Suite 11g'
             or LBL_PRODUCTNAME like '%Oracle Business Process Management%'
             or LBL_PRODUCTNAME like '%Application Server Configuration%'
             or LBL_PRODUCTNAME like '%Oracle Application Server Guard%')
         order by "Produto");

Matheus.

Service Detected on OEM but not in SRVCTL
or SERVICE_NAMES Parameter?
Okay, it happens.
To me, it was after moving a database from one cluster to another. The service was
registered via SRVCTL in the old cluster but was no longer needed, so it was not
registered in the new cluster.
But OEM insists on listing, for example, "service3" as offline. The problem is that you
cannot remove it with SRVCTL, because you never registered it there, right? See the
example below:

Listing services:

srvdatabase1:/home/oracle> srvctl status service -d systemdb

Service service1_systemdb is running on nodes: srvdatabase1
Service service2 is running on nodes: srvdatabase1
Service service2_systemdb is running on nodes: srvdatabase1

In the service_names parameter:

srvdatabase1:/home/oracle> sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Mon Jun 8 15:21:00 2015
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> show parameters service

NAME            TYPE    VALUE
--------------- ------- ------------------------------
service_names   string  service2,test,systemdb

And the offline alarm keeps pointing to "service3"?

The easiest fix:

SQL> exec dbms_service.DELETE_SERVICE('service3');

PL/SQL procedure successfully completed.

Matheus.

Manipulating JMS queues using WLST
Script
Hi.

Today let's talk about the Java Message Service (JMS). What led me to talk about this
is my environment: a complex message architecture where we have more than two
hundred queues in the same domain.
The administration of queues in the WebLogic console is very simple but, if you need
to remove a million messages from a hundred queues, you have a problem!
To make checking message counts, states and other queue properties more agile,
nothing better than using WLST.

This post shows a script that can grow as far as you can imagine; for now it has just
the three options most useful to me, and nothing prevents it from having more:

1 – Pause consumer
2 – Resume consumer
3 – Delete messages

You just need to edit the script to add user, password and admin console url.

# @author Dieison Larri Santos
# 30/04/2016
print "What do you need?"
print " "
print "1 - Pause Consumer"
print "2 - Resume Consumer"
print "3 - Delete Messages"
task = int(raw_input("choose an option: "))

connect('username', 'password', 't3://admin_console.net:7001')
servers = domainRuntimeService.getServerRuntimes()
if (len(servers) > 0):
    for server in servers:
        jmsRuntime = server.getJMSRuntime()
        for jmsServer in jmsRuntime.getJMSServers():
            for destination in jmsServer.getDestinations():
                pen = destination.getMessagesPendingCount()
                cur = destination.getMessagesCurrentCount()
                sum = pen + cur
                print 'Name: ' + destination.getName(), '; Messages Count:', sum, '; Paused: ', destination.isPaused()
                if task == 1:
                    destination.pauseConsumption()
                if task == 2:
                    destination.resumeConsumption()
                if task == 3:
                    destination.deleteMessages('')
disconnect()

To execute:  $WL_HOME/common/bin/wlst.sh script_name.py.

Dieison.

Decrypting WebLogic Datasource Password
Hi Guys,

Today I bring you a script that I use to decrypt datasource passwords and also the
AdminServer password, which is very useful on a daily basis.

The script uses the encrypted password found in the datasource configuration files
($DOMAIN_HOME/config/jdbc/*.xml).
To decrypt the AdminServer password, it uses the encrypted password contained in
boot.properties ($DOMAIN_HOME/servers/AdminServer/security).

Below the script (decryptPassword.py):

#=======================================================================
# This Script decrypts WebLogic passwords
#
# Usage:
#       wlst decryptPassword.py <DOMAIN_HOME> <encrypted_password>
#
#=======================================================================
import os
import weblogic.security.internal.SerializedSystemIni
import weblogic.security.internal.encryption.ClearOrEncryptedService

def decrypt(domainHomeName, encryptedPwd):
    domainHomeAbsolutePath = os.path.abspath(domainHomeName)
    encryptionService = weblogic.security.internal.SerializedSystemIni.getEncryptionService(domainHomeAbsolutePath)
    ces = weblogic.security.internal.encryption.ClearOrEncryptedService(encryptionService)
    clear = ces.decrypt(encryptedPwd)
    print "RESULT:" + clear

try:
    if len(sys.argv) == 3:
        decrypt(sys.argv[1], sys.argv[2])
    else:
        print "INVALID ARGUMENTS"
        print " Usage: java weblogic.WLST decryptPassword.py <DOMAIN_HOME> <encrypted_password>"
        print " Example:"
        print " java weblogic.WLST decryptPassword.py D:/Oracle/Middleware/user_projects/domains/base_domain {AES}819R5h3JUS9fAcPmF58p9Wb3swTJxFl0t8NInD/ykkE="
except:
    print "Unexpected error: ", sys.exc_info()[0]
    dumpStack()
    raise

Syntax: java weblogic.WLST decryptPassword.py $DOMAIN_HOME encrypted_password

For example:

[oracle@app1osbgrepora1l scripts]$ source /oracle/domains/osb_domain/bin/setDomainEnv.sh
[oracle@app1osbgrepora1l osb_domain]$ java weblogic.WLST decryptPassword.py /oracle/domains/osb_domain/ {AES}WdbfYhD1EbVXmIe62hLftef4WtNPvyRDGc1/lsyQ014=

Initializing WebLogic Scripting Tool (WLST) ...
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands

RESULT:OSBPASS123

That’s all for today
Jackson.

Setting up a weblogic Result cache on
Oracle Service Bus
Hi Guys,

These days, even with the new ideas around agile methods and the various attempts
to bring infrastructure and development together (DevOps), we still have a lot of code
developed at a great distance from the machines and the OS.

In this scenario, a lot of exceptions show up in the application logs, but most of them
can't be considered the actual problem.

This post is about an exception that occurs when a Business Service in an Oracle
Service Bus flow is configured to use a result cache instead of calling the external
service every time. Why do we get an exception when calling the result cache?
Because the result cache was not configured on the WebLogic server:

com.bea.wli.sb.service.resultcache.ResultCacheException: An unexpected exception was thrown while using the result cache:
java.lang.ClassNotFoundException: com.bea.wli.sb.transports.jca.JCAResponseMetaDataImpl

Let's configure the WebLogic result cache (Coherence)!

For this lab, two machines and two managed servers on a cluster will be used.

First, let's create two Coherence servers, one for each machine.

For each Coherence server we must add one lib and one module to the classpath; this
box is found in the "Start Server" page of the Coherence server.

In the same page, we need to fill the "Arguments:" box to define the Coherence hosts
and ports.

Pay attention to filling 'localhost' properly: for Coherence server1 the localhost value
is machine01; for Coherence server2 it is machine02.

After setting up the two Coherence servers, let's create a Coherence cluster; the target
must be the managed servers or the WebLogic server cluster.

After setting up the Coherence cluster, set the new cluster on each Coherence server.

The last step is to configure the Coherence server parameters on each managed
server, in the "Arguments" box of the "Server Start" page of each managed server.

Again, pay attention to filling 'localhost' properly: for managed server1 the localhost
value is machine01; for managed server2 it is machine02.

To validate the settings, start the Coherence servers and wait for the RUNNING status.
Celebrate with a good wine!

Dieison.

Avoiding lost messages in JDBC Persistent
Store, when processing Global Transactions
with JMS.
A few months ago I had a problem with the persistent store of JMS queues: right after
a server restart, I got this error from the persistent store while it recovered messages:

weblogic.store.PersistentStoreFatalException: [Store:280064]invalid handle 55981
(server="EVENTS01" store="JDBCStore_3022" table="JDBCStore_3022WLStore")

To work around this problem, just add this parameter to the server startup arguments:

-Dweblogic.store.StoreBootOnError=true

With this parameter the server starts with OK status in WebLogic 11g and with
FAILED status in WebLogic 12c, but in both cases the processing of messages
continues when active. To remove the FAILED status in WebLogic 12c, you just need
to truncate the persistence table in the database and restart the server (this solution
can be found in the Oracle docs).

That solution did not solve my problem, though, because I can't lose or delete messages.

Let's analyse the problem.

If I start the server with the parameter mentioned above, I get this error:

weblogic.store.PersistentStoreFatalException: [Store:280064]invalid handle 55981...

If I start the server without the parameter, I get these errors:

BEA-280061 The store "JDBCStore_3022" could not be deployed:
weblogic.store.io.jdbc.JDBCStoreException: [Store:280065]open failed
(server="EVENTS01" store="JDBCStore_3022" table="JDBCStore_3022WLStore"):(Linked Cause, "java.lang.Exception: java.lang.AssertionError")

BEA-310006 - Critical subsystem PersistentStore.JDBCStore_3022 has failed. Setting server state to FAILED. Reason:
weblogic.store.PersistentStoreFatalException: [Store:280064]invalid handle 55981
(server="EVENTS01" store="JDBCStore_3022" table="JDBCStore_3022WLStore")

After analysing the two behaviors, and paying special attention to this error ("Ignoring
2PC record for sequence…"), I went to investigate the best configuration for using JMS
with global transactions, because I always had a doubt about why the persistence
datasource is non-XA: what is the behavior of global transactions in this case? And
then I found out about the LLR Optimization (Logging Last Resource).

This configuration explains why you cannot use an XA driver for the JDBC persistent store.

The information below can be found HERE .

About the LLR Optimization:

In many cases a global transaction becomes a two-phase commit (2PC) transaction
because it involves a database operation (using JDBC) and another non-database
operation, such as a message queueing operation (using JMS). In cases such as this
where there is one database participant in a 2PC transaction, the Logging Last
Resource (LLR) Optimization transaction option can significantly improve transaction
performance by eliminating some of the XA overhead for database processing and by
avoiding the use of JDBC XA drivers, which typically are less efficient than non-XA
drivers. The LLR transaction option does not incur the same data risks as borne by the
Emulate Two-Phase Commit JDBC data source option and the NonXAResource
resource adapter (Connector) option.

LLR processing details

At server boot or data source deployment, LLR data sources load or create a table on
the database from which the data source pools database connections. The table is
created in the schema determined by the user specified to create database
connections. If the database table cannot be created or loaded, then server boot will
fail.
Within a global transaction, the first connection obtained from an LLR data source
reserves an internal JDBC connection that is dedicated to the transaction. The internal
JDBC connection is reserved on the specific server that is also the transactions’
coordinator. All subsequent transaction operations on any connections obtained from
a same-named data source on any server are routed to this same single internal
JDBC connection.
When an LLR transaction is committed, the WebLogic Server transaction manager
handles the processing transparently. From an application perspective, the transaction
semantics remain the same, but from an internal perspective, the transaction is
handled differently than standard XA transactions. When the application commits the
global transaction, the WebLogic Server transaction manager atomically commits the
local transaction on the LLR connection before committing transaction work on any
other transaction participants. For a two-phase commit transaction, the transaction
manager also writes a 2PC record on the database as part of the same local
transaction. After the local transaction completes successfully, the transaction
manager calls commit on all other global transaction participants. After all other
transaction participants complete the commit phase, the related LLR 2PC transaction
record is freed for deletion. The transaction manager will lazily delete the transaction
record after a short interval or with another local transaction.

If the application rolls back the global transaction or the transaction times out,
the transaction manager rolls back the work in the local transaction and does
not store a 2PC record in the database.

First step:

Create a datasource for the persistent store with a non-XA driver.

Second step:

Go to the Transaction page of the new datasource and select the check boxes as below:
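The check boxes from the (missing) screenshot correspond to the Logging Last Resource
option of the datasource. For reference, a minimal WLST sketch of the same change; the
datasource name and connection data are hypothetical:

# WLST online sketch: switch a datasource to Logging Last Resource (LLR)
connect('weblogic', 'password', 't3://adminhost:7001')
edit()
startEdit()
cd('/JDBCSystemResources/JDBCStore_DS/JDBCResource/JDBCStore_DS/JDBCDataSourceParams/JDBCStore_DS')
# Equivalent to checking "Supports Global Transactions" + "Logging Last Resource"
set('GlobalTransactionsProtocol', 'LoggingLastResource')
save()
activate()
disconnect()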

Restart the server and hope to never lose messages in persistence again!

Dieison.

Reset the AdminServer Password in
WebLogic 11g and 12c
Reset the AdminServer password in WebLogic 11g and 12c:

source $DOMAIN_HOME/bin/setDomainEnv.sh
cd $DOMAIN_HOME/servers/AdminServer/
mv data data-old
cd $DOMAIN_HOME/security
java weblogic.security.utils.AdminAccount weblogic <new_password> .

Restart the AdminServer.

If the domain has the boot.properties file in
$DOMAIN_HOME/servers/AdminServer/security/, the user and password credentials in
it should be adjusted before restarting the AdminServer.

OBS: check the post on decrypting datasource passwords, which can also be used to
decrypt the credentials of the boot.properties file, avoiding the above procedure if this
file exists.

That’s all for today.


Jackson.

Configuration Coherence Server
Out-of-Process in OSB 12C
Hello guys,

Today I’m going to introduce the Coherence Server Out-of-Process configuration.


Once this configuration has changed a lot between versions 11G to 12C, the post will
be a little bit more detailed (than usual).

The table below summarize these changes:

From                                 To
-----------------------------------  -------------------------------
osb-coherence-cache-config.xml       Coherence Cache Config resource
osb-coherence-override.xml           Coherence Cluster resource
Out-Of-Process Cache Server          New WLS node/cluster

Follow the steps:

1 – Create the managed servers for coherence and also the cluster for these
managed servers:

2 – Restart all managed servers (including OSB);

3 – Add the cluster created for the managed servers of coherence to the targets of the
“Coherence cluster” automatically created in the default installation
(defaultCoherenceCluster):

4 – Restart the coherence’s managed servers;

5 – Deploy the artifact “resultcache.gar” in Coherence_Cluster target:

6 – Add to the Server Start arguments of the OSB managed servers:
"-DOSB.coherence.cluster=CoherenceCluster -Dtangosol.coherence.distributed.localstorage=false";

7 – Add to the Server Start arguments of the Coherence managed servers:
"-DOSB.coherence.cluster=CoherenceCluster";

8 – Restart all managed servers;

That’s all for today.


Jackson.

WebLogic AdminServer Startup stopped at
“Initializing self-tuning thread pool”
After starting the AdminServer, it remains in STARTING status and stops writing to its
log file at "Initializing self-tuning thread pool".

Check the disk space used, to make sure there is no partition with 100% utilization,
including /tmp.
After that, make sure the owner of the WebLogic processes (oracle) has write
permission on /tmp:

[root@app1xptoosb1 /]# ls -tlhr / | grep tmp
drwxr-xr-x 5 root root 4.0K Nov 15 09:11 tmp

If the WebLogic owner does not have write permission, it must be set, because the
application server writes some temporary files in this directory:

[root@app1xptoosb1 /]# chmod 777 /tmp
[root@app1xptoosb1 /]# ls -tlhr / | grep tmp
drwxrwxrwx 10 root root 4.0K Nov 18 09:44 tmp

Jackson.

Weblogic starting with the operating system
Hi,
Today let's configure the WebLogic services to start when the machine starts.
In some blogs we can find a bunch of customized scripts that create and set variables
to start the AdminServer, NodeManager and managed servers but, in my case, I just
need the AdminServer and NodeManager to start when the machine boots right after
an incident.

For this situation, we need the application startup to not interrupt the operating
system startup.

*The operating system in question is Red Hat 6.5.

Without creating scripts or complex configurations, we just need to add the services
startup to the file /etc/rc.local:

su - oracle -c "nohup /oracle/domains/domain_name/bin/startWebLogic.sh > /oracle/logs/Adminserver.log 2>&1 &"
su - oracle -c "nohup /oracle/binaries/wlserver_10.3/server/bin/startNodeManager.sh > /oracle/logs/Nodemanager.log 2>&1 &"

When you use "su - oracle -c", the operating system runs the command as the oracle
user. And since rc.local is the last file executed after OS startup, you guarantee the
system startup is not interrupted.

Enjoy.
Dieison.

WLST easeSyntax
Those who work with WLST know it's pretty boring to navigate through MBeans,
because it's always necessary to wrap commands in parentheses () and quotation
marks ' '. When we forget, we need to retype the whole command again.
I found a command that helps a lot when navigating the MBean tree: it eliminates the
need for parentheses and quotation marks.
After entering WLST, type:

wls:/xpto_domain/serverConfig> easeSyntax()

wls:/xpto_domain/serverConfig> ls
dr--   AdminConsole

dr--   SelfTuning
dr--   Servers
dr--   ShutdownClasses
dr--   SingletonServices

wls:/xpto_domain/serverConfig> cd Servers
wls:/xpto_domain/serverConfig/Servers> ls
dr--   AdminServer
dr--   WLS1_MSWS1
dr--   WLS1_MSWS2

wls:/xpto_domain/serverConfig/Servers> cd WLS1_MSWS1
wls:/xpto_domain/serverConfig/Servers/WLS1_MSWS1> cd Log
wls:/xpto_domain/serverConfig/Servers/WLS1_MSWS1/Log> cd ..
wls:/xpto_domain/serverConfig/Servers/WLS1_MSWS1> cd Machine
wls:/xpto_domain/serverConfig/Servers/WLS1_MSWS1/Machine> ls
dr--   app1wsmachine1

I have not tested it within Python scripts, only for browsing the MBean tree.

Jackson.

Quickly change Weblogic to Production
Mode
You were racing to deploy your newest project on WebLogic 12c and only lately
discovered that you created your environment in development mode (OOPS =/).

Quickly set the 'Production mode' check box on your domain tab.

It will be necessary to bounce the WebLogic server.
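If you prefer the command line, a WLST offline sketch should achieve the same (the domain
path is hypothetical, and the bounce is still needed):

# WLST offline sketch: flip a domain to production mode
readDomain('/oracle/domains/base_domain')
cd('/')
set('ProductionModeEnabled', 'true')
updateDomain()
closeDomain()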

Thanks, Oracle, for this checkbox in 12c!

Maiquel.

Weblogic in debug mode
Usually, in non-production environments, it is necessary to check applications
deployed on a WebLogic server. The default log (.out) does not report or detail
conclusively the real cause of the problem.
In this case, beyond the log levels that can be configured via the WebLogic console
(Managed Server > Logging > Advanced), we can add the following arguments to the
JVM startup arguments (Managed Server > Configuration > Server Start > Arguments):

-Dweblogic.webservice.verbose=true -Dweblogic.wsee.verbose=*
-Dweblogic.wsee.verbose=weblogic.wsee.* -Dweblogic.wsee.verbose.timestamp=true

Recommended only during troubleshooting, because it generates a lot of logs.

Jackson.

Apache 2.4 with port redirect to Weblogic
12c
According to the Oracle guys, there is a vanilla Apache 2.4 proxy module for WebLogic
12c, and the same module runs with WebLogic 11g.

The modules are available for download at:

https://blogs.oracle.com/WebLogicServer/entry/announcing_web_socket_proxy_and

# httpd -version
Server version: Apache/2.4.6 (Red Hat Enterprise Linux)
Server built:   Mar 21 2016 02:33:00

In httpd.conf it is necessary to load the Apache 2.4 module:

LoadModule weblogic_module modules/lib/mod_wl_24.so

My virtual host config:

ServerName grepora.com
RedirectMatch ^/$ https://grepora.com/members/maiquel-oliveira/
DirectoryIndex "index.html"
MatchExpression /maiquel-oliveira WebLogicCluster=weblogicmachine:8001

So, after the config and an httpd restart, errors caused by mod_wl_24.so started
showing up in the system messages:

# tail -f /var/log/messages
httpd: Syntax error on line 56 of /etc/httpd/conf/httpd.conf: Cannot load modules/mod_wl_24.so into server: libstdc++.so.5: cannot open shared object file: No such file or directory

# yum install -y libstdc++.so.5
# systemctl start httpd.service

# tail -f /var/log/messages
httpd: Syntax error on line 56 of /etc/httpd/conf/httpd.conf: Cannot load modules/mod_wl_24.so into server: libopmnsecure.so: cannot open shared object file: No such file or directory

Now the magic solution for "libopmnsecure.so: cannot open shared object file":
edit /etc/ld.so.conf:

include ld.so.conf.d/*.conf
/etc/httpd/modules/lib/mod_wl_24.so

# ldconfig
# systemctl start httpd.service
# tail -f /var/log/messages
systemd: Started The Apache HTTP Server.

Maiquel.

Oracle Licensing: Weblogic Tip!
As a complement to yesterday's post about Oracle Database licensing, today's post is
a little tip for evaluating WebLogic licensing, considering an audit…

The restricted services can be checked in WebLogic Server Basic – Restricted
Primary Services in WebLogic Server.

There is also a py script on Oracle Support that can be executed via WLST on the
Admin Server. Please take a look at: WebLogic Server Basic License Feature Usage
Measurement Script (Doc ID 885587.1)

And it’s all by now!

Hugs!
Matheus.

Weblogic JFR files in /tmp
Problem:
In WebLogic 11g, there are several JFR files in the /tmp directory:
[root@app1wsora3 tmp]# pwd; find . -name *.jfr | xargs ls -tlhr
/tmp
-rw------- 1 oracle oinstall    0 Aug 11 18:41 ./2016_06_02_13_50_22_4317/2016_08_11_18_41_43_4317.jfr
-rw------- 1 oracle oinstall  37M Aug 11 18:41 ./2016_06_02_13_50_22_4317/2016_08_01_11_22_51_4317.jfr
-rw------- 1 oracle oinstall  14M Aug 16 09:25 ./2016_06_02_13_50_15_4341/2016_08_16_03_24_12_4341.jfr
-rw------- 1 oracle oinstall    0 Aug 16 12:02 ./2016_06_02_13_50_15_4341/2016_08_16_12_02_02_4341.jfr
-rw------- 1 oracle oinstall  14M Aug 16 12:02 ./2016_06_02_13_50_15_4341/2016_08_16_09_25_39_4341.jfr
-rw------- 1 oracle oinstall    0 Aug 16 12:43 ./2016_06_02_13_50_24_4344/2016_08_16_12_43_28_4344.jfr
-rw------- 1 oracle oinstall 150M Aug 16 12:43 ./2016_06_02_13_50_24_4344/2016_08_16_12_17_36_4344.jfr

These files are from the DMS (Dynamic Monitoring Service) and are created while the
application server is running.

By default these files are generated in this directory and it is not possible to turn them
off. As a workaround, you can redirect where they are generated with the parameter
"-XX:FlightRecorderOptions=repository".
For example: -XX:FlightRecorderOptions=repository=/oracle/tmp/

This parameter can be adjusted in the setDomainEnv.sh script or in the startup
arguments of each server. In this case I set it in the startup arguments.

Navigate to Environment > Servers > (server) > Server Start and, in "Arguments", add:
"-XX:FlightRecorderOptions=repository=/oracle/tmp/"

Restart the servers.

[root@app1wsora3 tmp]# pwd; find . -name *.jfr
/oracle/tmp
./2016_08_16_13_08_45_11049/2016_08_16_13_08_46_11049.jfr

That’s all for today.

Jackson.

Bypass user and password in the Oracle
BAM ICommand.
Every time you need to execute ICommand, you must enter the user and password of
the application server running Oracle BAM.
With the configuration below, it is no longer necessary to inform them every time.

In the file BAMICommandConfig.xml, located in $BEA_HOME/Oracle_SOA1/bam/config/,
add the lines below at the end of the file, before the closing </BAMICommand> tag:

<ICommand_Default_User_Name>weblogic</ICommand_Default_User_Name>
<ICommand_Default_Password>YOUR_PASSWORD</ICommand_Default_Password>

Restart the Oracle BAM.

Jackson.

<EJB Exception in method: ejbPostCreate:
java.sql.SQLException: XA error:
XAResource.XAER_RMFAIL start() failed on
resource 'ggds-datasource_domain':
XAER_RMFAIL : Resource manager is
unavailable
Some incidents we face are expected: usually we wait for problems when something
changes in an environment.
But sometimes, for no apparent reason and with no systemic alteration, we run into
errors where our first reaction is: what the f***!?

This time we found a Java exception in a standard domain for GoldenGate Director.

For months the application behaved stable and functional, until it failed for no
apparent reason.

When I saw the "XAER_RMFAIL: Resource manager is unavailable" part of the
exception, I went straight to one of the best DBAs I know, Matheus Boesing, to request
a check on the resource manager database (no problem found).

…then we fell into a bug: Bug 11672297 - ORA-01092 MAPPED TO XAER_RMERR
instead of XAER_RMFAIL (Doc ID 1329800.1)

In version 12.1 this bug is fixed but, as a palliative solution, you can do the following:
increase the value of "Maximum Duration of XA Calls" in the JTA configuration of the
WebLogic domain. The default value is 12000; in my case, I adjusted it to 48000.
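For reference, a WLST online sketch of that JTA change; the domain name and credentials
are hypothetical, and MaxXACallMillis is the attribute behind the "Maximum Duration of XA
Calls" console field:

# WLST online sketch: raise the JTA Maximum Duration of XA Calls
connect('weblogic', 'password', 't3://adminhost:7001')
edit()
startEdit()
cd('/JTA/ggds_domain')         # hypothetical domain name
set('MaxXACallMillis', 48000)  # default is 12000
save()
activate()
disconnect()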

The problem was solved, at least for now.
Dieison.

Error BAD_CERTIFICATE in Node Manager
Error:

Mar 8, 2016 2:41:16 PM weblogic.nodemanager.server.Handler run
WARNING: Uncaught exception in server handler
javax.net.ssl.SSLKeyException: [Security:090482]BAD_CERTIFICATE alert was received from app1osbxpto1.localhost.net - 192.28.140.25. Check the peer to determine why it rejected the certificate chain (trusted CA configuration, hostname verification). SSL debug tracing may be required to determine the exact reason the certificate was rejected.
javax.net.ssl.SSLKeyException: [Security:090482]BAD_CERTIFICATE alert was received from app1osbxpto1.localhost.net - 192.28.140.25. Check the peer to determine why it rejected the certificate chain (trusted CA configuration, hostname verification). SSL debug tracing may be required to determine the exact reason the certificate was rejected.
        at com.certicom.tls.interfaceimpl.TLSConnectionImpl.fireException(Unknown Source)
        at com.certicom.tls.interfaceimpl.TLSConnectionImpl.fireAlertReceived(Unknown Source)
        at com.certicom.tls.record.alert.AlertHandler.handle(Unknown Source)
        at com.certicom.tls.record.alert.AlertHandler.handleAlertMessages(Unknown Source)
        at com.certicom.tls.record.MessageInterpreter.interpretContent(Unknown Source)
        at com.certicom.tls.record.MessageInterpreter.decryptMessage(Unknown Source)
        at com.certicom.tls.record.ReadHandler.processRecord(Unknown Source)
        at com.certicom.tls.record.ReadHandler.readRecord(Unknown Source)
        at com.certicom.tls.record.ReadHandler.readUntilHandshakeComplete(Unknown Source)
        at com.certicom.tls.interfaceimpl.TLSConnectionImpl.completeHandshake(Unknown Source)
        at com.certicom.tls.record.ReadHandler.read(Unknown Source)
        at com.certicom.io.InputSSLIOStreamWrapper.read(Unknown Source)
        at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264)
        at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306)
        at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158)
        at java.io.InputStreamReader.read(InputStreamReader.java:167)
        at java.io.BufferedReader.fill(BufferedReader.java:136)
        at java.io.BufferedReader.readLine(BufferedReader.java:299)
        at java.io.BufferedReader.readLine(BufferedReader.java:362)
        at weblogic.nodemanager.server.Handler.run(Handler.java:71)
        at java.lang.Thread.run(Thread.java:662)

Solution:

source $DOMAIN_HOME/bin/setDomainEnv.sh
. $WL_HOME/server/bin/setWLSEnv.sh

java utils.CertGen -cn `hostname` -keyfilepass DemoIdentityPassPhrase -certfile mycert -keyfile mykey
java utils.ImportPrivateKey -keystore DemoIdentity.jks -storepass DemoIdentityKeyStorePassPhrase -keyfile mykey.pem -keyfilepass DemoIdentityPassPhrase -certfile mycert.pem -alias demoidentity
cp DemoIdentity.jks $WL_HOME/server/lib

$WL_HOME/common/bin/wlst.sh
connect('weblogic','password','t3://app1osbxpto1.localhost.net:7001')
nmEnroll('/oracle/domains/osb_domain','/oracle/binaries/wlserver_10.3/common/nodemanager/')
exit()

Restart the node manager.

Jackson.

Weblogic – Wrong listening address
This week we had an unexpected stop on a WebLogic server. After being started, the
server played a trick: it refused any telnet request on the managed server port, even
from localhost, although it had started successfully.

It's easy to resolve. Check this:

Configure listen addresses

The server instance for which you configure the listen address does not need to be
running.

To configure a server’s listen address:

• If you have not already done so, in the Change Center of the Administration
Console, click Lock & Edit (see Use the Change Center ).

• In the left pane of the Console, expand Environment and select Servers .

• On the Servers page, click the name of the server.

• Select Configuration > General.

• On the Servers: Configuration: General page, enter a value in Listen Address.
See Configuration Options for guidelines.

• Click Save .

• To activate these changes, in the Change Center of the Administration Console,


click Activate Changes . Not all changes take effect immediately—some require
a restart (see Use the Change Center ).

After you finish

If the server is running, restart it.

On Administration console:
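And, if you prefer to script it instead of clicking through the console, a WLST online sketch
of the same change; the server and address names are hypothetical:

# WLST online sketch: fix a managed server's listen address
connect('weblogic', 'password', 't3://adminhost:7001')
edit()
startEdit()
cd('/Servers/WLS1_MSWS1')               # hypothetical managed server name
cmo.setListenAddress('app1wsmachine1')  # address the server should bind to
save()
activate()
disconnect()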

Maiquel.

Enabling GoldenGate 12c DDL replication
For some IT demands it's necessary to replicate DDL (Data Definition Language) to
keep source and target equalized.

Oracle has delivered the excellent DDL Replication feature since GG 10g.

In GG 12c the procedure became simpler.

So, under the GoldenGate home, open SQL*Plus as sysdba:

SQL> @role_setup.sql

GGS Role setup script

Enter GoldenGate schema name: ggate
Wrote file role_setup_set.txt

PL/SQL procedure successfully completed.

Role setup script complete

Grant this role to each user assigned to the Extract, GGSCI, and Manager processes,
by using the following SQL command:

GRANT GGS_GGSUSER_ROLE TO <loggedUser>

where <loggedUser> is the user assigned to the GoldenGate processes.

SQL> GRANT GGS_GGSUSER_ROLE TO ggate;

Grant succeeded.

SQL> @ddl_enable.sql

Trigger altered.

Configure the Extract/Pump/Replicat with DDL replication: insert the parameter
"DDL INCLUDE MAPPED" into those processes' parameter files.

Sample:

EXTRACT E_DBA
USERID OGG PASSWORD OGGPASS
EXTTRAIL ./dirdat/fo
TRANLOGOPTIONS EXCLUDEUSER OGG
DDL INCLUDE MAPPED
TABLE OGG.*;

Maiquel.

How to find GoldenGate recovery time
Sometimes it's necessary to restart a GoldenGate process and, after starting the GG
Extract, it takes a long time in 'In recovery' status.

It's an interesting subject, and more about it can be found here (before reading below).

GGSCI (greporagg) 16> send EXT status

EXTRACT EXT (PID 23068830)
Current status: In recovery[1]: Processing data

Current read position:
Redo thread #: 2
Sequence #: 4246
RBA: 223285824
Timestamp: 2016-10-08 07:32:36.000000
SCN: 1658.1839128718

Current write position:
Sequence #: 29295
RBA: 74336127
Timestamp: 2016-10-14 17:59:43.476624
Extract Trail: ./dirdat/TR

So let's check how to find the transaction:

GGSCI (greporagg) 17> send EXT showtrans

Sending SHOWTRANS request to EXTRACT EXT ...
------------------------------------------------------------
XID:          783.27.1959817
Items:        0
Extract:      EXT
Redo Thread:  4
Start Time:   2016-10-08:07:33:51
SCN:          1658.1839293825 (7122895070593)
Redo Seq:     3388
Redo RBA:     224131088
Status:       Running
------------------------------------------------------------

On the database (dark) side:

SQL> select s.sid, s.serial#, s.username, s.machine, s.status, s.lockwait,
            t.used_ublk, t.used_urec, t.start_time, t.xidusn, t.xidslot, t.xidsqn
       from gv$transaction t
      inner join gv$session s on t.addr = s.taddr
      order by start_time asc;

Maiquel.

GoldenGate Integrated Capture and
Integrated Replicat Healthcheck Script
GoldenGate Integrated Extract gives DBAs a powerful tool to check GoldenGate's
operation in the database; the package can be downloaded from Doc ID 1448324.1.

This healthcheck is similar to AWR reports and has been very useful to find errors and
bottlenecks.

The tool gives some advice and parameter tips.

Let's check my lab HC topics:

Environment overview:

Performance tips:

This HC uses system views created by OGG, so you can customize your own HC.
Maiquel.

GoldenGate: RAC One Node Archivelog
Missing
The situation:

We have a GoldenGate running some extracts in Allow Mode on a RAC One Node
database (reading the archivelogs). Then, suddenly, the instance crashed (the network
lost contact with the server) and the other instance (thread) was auto-started by CRS.
For the database, no problem: the other node's redologs were used during the startup
recovery and everything was ok.

The application, running with a WebLogic server pool and GridLink, just had a little
contention and continued operating through the started instance. The GoldenGate
switch was made manually, but some sequences were lost. What did we find? The
sequences were still in the old thread's redo log files. They would have been backed
up if fast_start_mttr_target were different from zero. Buuut, the world is not so beautiful:

raconenodedb> show parameters mttr

NAME                    TYPE     VALUE
----------------------- -------- ------
fast_start_mttr_target  integer  0

How did we solve it?
Simple solution: we identified the group/thread and made a cp from ASM. The copied
redolog was used as an archivelog by GoldenGate and everything was ok.

raconenodedb> select sequence#, group#, thread# from v$log where thread#=2 order by 1;

 SEQUENCE#     GROUP#    THREAD#
---------- ---------- ----------
     39636          6          2
     39637          7          2
     39638          8          2
     39639          9          2
     39640         10          2

ASMCMD> cp group_10.288.859482805 /oracle/grup10_thread2
copying +DGDATA/MYDB/ONLINELOG/group_10.288.859482805 -> /oracle/grup10_thread2

Easy like that.

Matheus.

GoldenGate GGSCI> shortcut tips
GGSCI (GoldenGate Software Command Interface) has some interesting shortcuts:
quick and handy for day-to-day GoldenGate use.

My preferred ones:

Command history (h):

GGSCI (grepora) 4> h

GGSCI Command History
1: info all
2: shell tail -f ggserr.log
3: edit params extr
4: h

OS commands (shell):

GGSCI (grepora) 5> shell pwd
/oracle/product/goldengate12c/

GGSCI (grepora) 6> shell ls *.obey
create_process.obey register_extract.obey

GGSCI (grepora) 7> shell tail -f ggserr.log

Repeat the last successfully executed command (!):

GGSCI (grepora) 8> shell ls *.obey
create_process.obey register_extract.obey

GGSCI (grepora) 9> !
shell ls *.obey
create_process.obey register_extract.obey

RegEx:

GGSCI (grepora) 10> info e*

See you!
Maiquel.

Skipping database transaction on Oracle
GoldenGate
Sometimes the GoldenGate EXTRACT captures a long transaction from the database,
and it could be some B.O.F.H. doing a DUMMY one. If that's the case, it's an
'unwanted' transaction and you can skip it in ggsci:

(GUARANTEED DATA LOSS – the database transaction is skipped)

GGSCI (cloud-db) 60> send ext2 showtrans

Sending SHOWTRANS request to EXTRACT EXT2 ...

Oldest redo log files necessary to restart Extract are:
Redo Thread 1, Redo Log Sequence Number 20322, SCN 1661.3085726936 (7137026405592), RBA 597023248
------------------------------------------------------------
XID:          2049.13.3951869
Items:        1
Extract:      EXT2
Redo Thread:  1
Start Time:   2016-11-07:15:22:07
SCN:          1661.3085726936 (7137026405592)
Redo Seq:     20322
Redo RBA:     597023248
Status:       Running
------------------------------------------------------------

GGSCI (cloud-db) 62> send ext2 SKIPTRANS 2049.13.3951869 THREAD 1

Sending SKIPTRANS request to EXTRACT EXT2 ...
Are you sure you sure you want to skip transaction [XID 2049.13.3951869, Redo Thread 1, Start Time 2016-11-07:15:22:07, SCN 1661.3085726936 (7137026405592)]? (y/n) y

Sending SKIPTRANS request to EXTRACT EXT2 ...
Transaction [XID 2049.13.3951869, Redo Thread 1, Start Time 2016-11-07:15:22:07, SCN 1661.3085726936 (7137026405592)] skipped.

Check your applications and kill it in the database

Maiquel.

GoldenGate: Replicate data from SQLServer
to TERADATA – Part 1
Since we are arriving at the end of the year, I have taken on the mission of replicating
data between SQL Server and Teradata. The worst part of this task is to install and
configure a GoldenGate in a Windows environment.

Believe me, it is not possible to do a Unix installation of GoldenGate to collect data
from SQL Server; the GoldenGate binary needs to be installed on the Windows SQL
Server host.

After installing the GG binaries, it is good practice to add the MGR as a Windows service:

C:\goldengate> install addevents addservice manualstart

Oracle GoldenGate messages installed successfully.
Service 'GGSMGR' created.
Install program terminated normally.

In order for GG to access the SQL database, you need to create a data source
(ODBC) and configure a new system DSN (here it is db0sql1), selecting SQL Server
as the database driver.
To perform a DBLOGIN:

DBLOGIN SOURCEDB db0sql1, USERID ggate, PASSWORD ??????

To configure the extract, use the same approach as for other GG processes:

ADD EXTRACT E_SQL, TRANLOG, BEGIN NOW
ADD EXTTRAIL ./dirdat/tr, EXTRACT E_SQL, MEGABYTES 100
ADD TRANDATA dbo.DLOG_ERRORS allcols

GGSCI (sqlserverdb) 2> edit param e_sql

EXTRACT E_SQL
TRANLOGOPTIONS MANAGESECONDARYTRUNCATIONPOINT
SOURCEDB db0sql1, USERID ggate, PASSWORD ??????
CACHEMGR CACHESIZE 1GB
EXTTRAIL ./dirdat/tr
--TABLE MAP
TABLE dbo.DLOG_ERRORS;
TABLE dbo.SAC_DATA;
TABLE dbo.SAC_LIST;
TABLE dbo.SAC_TITLE;

It is possible to configure the extract process to send trails directly to the destination,
which in this case would be the GG-Teradata; but we always configure a pump
process as good practice, because in the event of any communication problem it will
not affect the extraction process.

The Pump and Replicat process configuration will be presented in Part 2.

Dieison.

GoldenGate: Replicate data from SQLServer
to TERADATA – Part 2
These steps should still be performed on the SQL Server host.

The pump process configuration is very simple; its only function is to transport the trail
files to the destination.

ADD EXTRACT P_MSQL, EXTTRAILSOURCE ./dirdat/tr

C:\goldengate> edit param P_MSQL

EXTRACT P_MSQL
SOURCEDB db0sql1, USERID ggate, PASSWORD ??????
CACHEMGR CACHESIZE 2GB
RMTHOST teradata1.net, MGRPORT 8809
RMTTRAIL ./dirdat/td
--TABLE MAP
TABLE dbo.DLOG_ERRORS;
TABLE dbo.SAC_DATA;
TABLE dbo.SAC_LIST;
TABLE dbo.SAC_TITLE;

Still on the SQL Server host, we need to create a definitions file, which will be used by
the GG-Teradata.
First, create a "tables.def" file that contains the db login and the tables that will be
replicated:

defsfile tables_sqlserver.sql purge
SOURCEDB db0sql1, USERID ggate, PASSWORD ??????
TABLE dbo.DLOG_ERRORS;
TABLE dbo.SAC_DATA;
TABLE dbo.SAC_LIST;
TABLE dbo.SAC_TITLE;

C:\goldengate> defgen.exe paramfile tables.def

This process results in a new file (tables_sqlserver.sql); copy it to the destination
(GG-Teradata).

These steps must be performed on the GG-Teradata host:

To let GoldenGate access the Teradata base, you must install the Teradata ODBC
driver; you can download the ODBC driver here.

After installing the ODBC driver, you need to adjust odbc.ini, which should contain the
Teradata connection information.

Here is an example of the odbc.ini file:

[teradata_dev]
Driver=/opt/teradata/client/ODBC/lib/tdata.so
Description=Teradata base
DBCName=teradata1.net
LastUser=
Username=GG_TERA
Password=????????
Database=
DefaultDatabase=dbs
LoginTimeout=3600
SessionMode=ANSI
DateTimeFormat=AAA
NoScan=Yes
characterSet=UTF16

After configuring odbc.ini, add an environment variable in the OS, making the file
visible to GoldenGate:

export ODBCINI=$GGATE_HOME/.odbc.ini

*You can add this export to the oracle user's profile; if it's not set, GoldenGate will fail.

Now, let's configure the Replicat process:

ADD REPLICAT R_MSQL, EXTTRAIL ./dirdat/td NODBCHECKPOINT

teradata1.net:/oracle/ggate> edit param R_MSQL

REPLICAT R_MSQL
--This information comes from the odbc.ini file
TARGETDB teradata_dev
SOURCECHARSET PASSTHRU
DISCARDFILE ./dirrpt/R_MSQL.dsc, MEGABYTES 1024, PURGE
SOURCEDEFS ./dirdef/tables_sqlserver.sql
--Map
MAP dbo.DLOG_ERRORS, TARGET T_DB1_SAC_V.VW_DLOG_ERRORS;
MAP dbo.SAC_DATA, TARGET T_DB1_SAC_V.VW_SAC_DATA;
MAP dbo.SAC_LIST, TARGET T_DB1_SAC_V.VW_SAC_LIST;
MAP dbo.SAC_TITLE, TARGET T_DB1_SAC_V.VW_SAC_TITLE;

This is a simple example of replication between SQL Server and Teradata; a bunch of
customizations can be performed depending on the business need.

Enjoy.
Dieison.

Access denied on GoldenGate Manager
After applying GoldenGate fix 12.1.2.1.10 on GoldenGate for Oracle Database 11g, we
started getting the error below during GoldenGate Director Server access:

ggserr.log: WARNING OGG-00936 Oracle GoldenGate Manager for Oracle, mgr.prm: Access denied (request from 10.1.1.10, rule #4).

To allow a remote Director Server connection, you must add to ./GLOBALS:

_DISABLEFIX21427144
_DISABLE21427144

Maiquel.

GoldenGate – exclude Oracle database
thread#
Your Oracle database instance status changed, so you need to dismiss some thread#
on GoldenGate.

SQL> select inst_id, thread#, status from gv$thread;

   INST_ID    THREAD# STATUS
---------- ---------- ------
         1          1 OPEN
         1          2 CLOSED

Try inserting 'THREADOPTIONS PROCESSTHREADS EXCEPT X' into your
GoldenGate PRM file, where X is the thread# of the instance you want to 'exclude'.

It may cause data loss.

Maiquel.

GoldenGate 12.1.2 not firing insert trigger
I had to troubleshoot a situation where, after GoldenGate captured some DML and
replicated it, the Oracle database needed to fire an insert trigger to do some business
integration.

After upgrading this environment from GG 11.1.1.1 to 12.1.2, and the DB from
11.2.0.3 to 12.1.0.2, we identified that GoldenGate wasn't firing these triggers.

So, I found an interesting resolution in the Oracle docs:

SUPPRESSTRIGGERS | NOSUPPRESSTRIGGERS

Valid for nonintegrated Replicat for Oracle. Controls whether or not triggers are fired
during the Replicat session. Provides an alternative to manually disabling triggers.
(Integrated Replicat does not require disabling of triggers on the target system.)

SUPPRESSTRIGGERS is the default and prevents triggers from firing on target objects
that are configured for replication with Oracle GoldenGate. SUPPRESSTRIGGERS is
valid for Oracle 11.2.0.2 and later 11gR2 versions. SUPPRESSTRIGGERS is not valid
for 11gR1.

So, I added 'DBOPTIONS NOSUPPRESSTRIGGERS' to the Replicat parameter file.

Regards!
Maiquel.

How to sincronize high data volume with
GoldenGate
I was facing a high workload with traditional data load methods, so I decided to move
out of my comfort zone and fortunately discovered an excellent way to copy/move a
high data volume with GoldenGate Initial Load.

It's well documented by Oracle and by gavinsoorma.com (the best and simplest one).

# On source GoldenGate (ggsci):

GGSCI> ADD EXTRACT load1, SOURCEISTABLE
GGSCI> EDIT PARAMS load1

EXTRACT load1
USERID ggate@goldengate
RMTHOST target-mgr.grepora.com, MGRPORT 7809, FORMAT LEVEL 4
RMTTASK replicat, GROUP load2
---Loading tables
map CUSTOMER.TABLE1;

# On target GoldenGate (target-mgr.grepora.com) - it's possible to be on the same installation:

GGSCI> ADD REPLICAT load2, SPECIALRUN
GGSCI> EDIT PARAMS load2

REPLICAT load2
USERID ggate@goldengate
ASSUMETARGETDEFS
SOURCECHARSET PASSTHRU
TABLE CUSTOMER.TABLE1, TARGET CUSTOMER_CLOUD.TARGET_TABLE1;

# On source GoldenGate (ggsci):

GGSCI> start load1

Extracting from CUSTOMER.TABLE1 to CUSTOMER_CLOUD.TARGET_TABLE1:

*** Total statistics since 2016-08-12 19:04:16 ***
Total inserts       8007208.00
Total updates             0.00
Total deletes             0.00
Total discards            0.00
Total operations    8007208.00

I tested this feature with source GG 12.2 and target GG 12.1, so it was necessary to
specify "FORMAT LEVEL 4" on the rmthost line.

This feature worked very well, and it wasn't necessary to create DB links, bulk batches
or technical workarounds.

I hope it helps keep our lives simpler.


Maiquel.

How to sincronize high data volume with
GoldenGate – Part II
In the latest post, I documented how to copy/move a high table data volume using
GoldenGate Initial Load (with the SPECIALRUN option).

Sometimes we (DBAs/sysadmins) need to move HUGE data (tables with billions of
rows) in the shortest time possible.

So, I'm sharing useful tips that help to reach this goal.

Making GoldenGate Initial Load work in PARALLEL:

To run GoldenGate in PARALLEL (with the RANGE option), it's necessary to create a
remote trail file; so, differently from the first post, this works with the 'RMTFILE' option
added in GGSCI.
# On source GoldenGate, add the following EXTRACT (ggsci):

GGSCI> ADD EXTRACT load1, SOURCEISTABLE
GGSCI> EDIT PARAMS load1

EXTRACT load1
USERID ggate@goldengate
RMTHOST target-mgr.grepora.com, MGRPORT 7809
RMTFILE ./dirdat/il, MEGABYTES 1024, FORMAT LEVEL 4, PURGE
---Loading tables
map CUSTOMER.TABLE1;

GGSCI> START LOAD*

# On target GoldenGate, add the following REPLICATs:

GGSCI> ADD REPLICAT LOAD11, exttrail ./dirdat/il, checkpointtable GGATE.CHECKPOINT
GGSCI> EDIT PARAMS load11

REPLICAT load11
USERID ggate@goldengate
ASSUMETARGETDEFS
SOURCECHARSET PASSTHRU
TABLE CUSTOMER.TABLE1, TARGET CUSTOMER_CLOUD.TARGET_TABLE1, filter (@RANGE (1,2));

GGSCI> ADD REPLICAT LOAD12, exttrail ./dirdat/il, checkpointtable GGATE.CHECKPOINT
GGSCI> EDIT PARAMS load12

REPLICAT load12
USERID ggate@goldengate
ASSUMETARGETDEFS
SOURCECHARSET PASSTHRU
TABLE CUSTOMER.TABLE1, TARGET CUSTOMER_CLOUD.TARGET_TABLE1, filter (@RANGE (2,2));

GGSCI> START LOAD*

Tuning database inserts/updates:

According to Oracle, the following suggestions can make the load go faster and help
you to avoid errors (a scripted shortcut for the constraints step comes after the list):

Data: Make certain that the target tables are empty. Otherwise, there may be
duplicate-row errors or conflicts between existing rows and rows that are being
loaded.

Constraints: Disable foreign-key constraints and check constraints. Foreign-key
constraints can cause errors, and check constraints can slow down the loading
process. Constraints can be reactivated after the load concludes successfully.

Indexes: Remove indexes from the target tables. Indexes are not necessary for
inserts. They will slow down the loading process significantly. For each row that is
inserted into a table, the database will update every index on that table. You can add
back the indexes after the load is finished.
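A minimal sketch for that constraints step on the target, assuming the cx_Oracle module and
a hypothetical CUSTOMER_CLOUD schema; it only prints the generated DDL so you can
review it before running anything:

import cx_Oracle  # assumption: cx_Oracle is installed on the target side

conn = cx_Oracle.connect('ggate', 'password', 'targetdb')  # hypothetical credentials
cur = conn.cursor()

# Generate DDL to disable FK ('R') and check ('C') constraints on the target schema
cur.execute("""
    select 'alter table ' || owner || '.' || table_name ||
           ' disable constraint ' || constraint_name
      from all_constraints
     where owner = 'CUSTOMER_CLOUD'
       and constraint_type in ('R', 'C')""")

for (ddl,) in cur:
    print ddl + ';'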

Shazam! \o/
Maiquel.

Failure unregister integrated extract
Sometimes it's impossible to unregister an Integrated Extract; however, it needs to be
excluded to avoid RMAN failures.

Follow below to hack the GoldenGate registration:

SQL> select capture_name from dba_capture;

CAPTURE_NAME
------------------------------
OGG$CAP_IE_CAPT

GGSCI (myhost as ggate@foodb) 13> unregister extract IE_CAPT database

ERROR OGG-08222 EXTRACT IE_CAPT must be registered with the database to perform this operation.

Try it:

SQL> select 'exec DBMS_CAPTURE_ADM.DROP_CAPTURE ('''||capture_name||''');' from dba_capture;

'EXECDBMS_CAPTURE_ADM.DROP_CAPTURE('''||CAPTURE_NAME||''');'
----------------------------------------------------------------------
exec DBMS_CAPTURE_ADM.DROP_CAPTURE ('OGG$CAP_IE_CAPT');

SQL> exec DBMS_CAPTURE_ADM.DROP_CAPTURE ('OGG$CAP_IE_CAPT');

PL/SQL procedure successfully completed.

Maiquel.

Auto start GoldenGate
How to autostart GoldenGate services after system startup?

On Linux, in /etc/rc.local:

#Auto start GoldenGate
su - oracle -c "/oracle/goldengate/./ggsci paramfile startGG.obey"

On the GoldenGate ggsci path, create the following file and MGR parameters:

cd /oracle/goldengate/
echo "start mgr" > startGG.obey

./ggsci
GGSCI 1> edit params mgr

--Startup MGR
AUTOSTART er *
AUTORESTART er *, RETRIES 5, WAITMINUTES 5, RESETMINUTES 30

Maiquel.

Quick find ODI repository version
How to check which ODI repository component/version is created?

HOT SQL:

SELECT COMP_ID, COMP_NAME, OWNER, VERSION
  FROM SCHEMA_VERSION_REGISTRY;

Maiquel.

ODI 10gR1: Connection to Repository Failed
after Database Descriptor Change
After migrating the database to another host, port or SID, the error below started to
happen when running a scenario.
"Of course" all the mapped connections were correctly set in Topology… But the
environment is complex; it's possible something is missing…
java.sql.SQLException: Io Exception: The Network Adapter could not establish the connection
        at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:125)
        at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:162)
        at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:274)
        at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:328)
        at oracle.jdbc.driver.PhysicalConnection.(PhysicalConnection.java:361)
        at oracle.jdbc.driver.T4CConnection.(T4CConnection.java:151)
        at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
        at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:595)
        at com.sunopsis.sql.SnpsConnection.u(SnpsConnection.java)
        at com.sunopsis.sql.SnpsConnection.c(SnpsConnection.java)
        at com.sunopsis.sql.h.run(h.java)

Ok, so something really is missing… But where?

Try to check this:

select * from ODIM_PROD.SNP_MTXT_PART where txt like '%MYOLDDB%';

TXT                                      TXT_ORD    I_TXT
---------------------------------------- ---------- --------
jdbc:oracle:thin:@dbsrvr:1521:myoldb_1   0          2913000
jdbc:oracle:thin:@dbsrvr:1521:myoldb_1   0          2000
jdbc:oracle:thin:@dbsrvr:1521:myoldb_1   0          1000
jdbc:oracle:thin:@dbsrvr:1521:myoldb_1   0          92999
jdbc:oracle:thin:@dbsrvr:1521:myoldb_1   0          2923000

To fix:

update snp_mtxt_part
   set TXT = 'jdbc:oracle:thin:@dbsrvrnew:1521:mynewdb_1'
 where i_txt in (...);
commit;

So, to check detailed Topology connection information, as posted here, you can check this:

SELECT DISTINCT SNP_TECHNO.TECHNO_NAME AS TECHNOLOGY,
       SNP_CONNECT.CON_NAME      AS DATA_SERVER,
       SNP_PSCHEMA.EXT_NAME      AS PHYSICAL_SCHEMA,
       SNP_PSCHEMA.SCHEMA_NAME   AS SCHEMA_NAME,
       SNP_PSCHEMA.WSCHEMA_NAME  AS WORK_SCHEMA,
       SNP_CONTEXT.CONTEXT_NAME  AS CONTEXT_NAME,
       SNP_LSCHEMA.LSCHEMA_NAME  AS LOGICAL_SCHEMA,
       SNP_CONNECT.JAVA_DRIVER   AS DRIVER_INFO,
       SNP_MTXT_PART.TXT         AS URL
  FROM SNP_TECHNO
  LEFT OUTER JOIN SNP_CONNECT ON SNP_CONNECT.I_TECHNO = SNP_TECHNO.I_TECHNO
  LEFT OUTER JOIN SNP_PSCHEMA ON SNP_PSCHEMA.I_CONNECT = SNP_CONNECT.I_CONNECT
  LEFT OUTER JOIN SNP_PSCHEMA_CONT ON SNP_PSCHEMA_CONT.I_PSCHEMA = SNP_PSCHEMA.I_PSCHEMA
  LEFT OUTER JOIN SNP_LSCHEMA ON SNP_LSCHEMA.I_LSCHEMA = SNP_PSCHEMA_CONT.I_LSCHEMA
  LEFT OUTER JOIN SNP_CONTEXT ON SNP_CONTEXT.I_CONTEXT = SNP_PSCHEMA_CONT.I_CONTEXT
  LEFT OUTER JOIN SNP_MTXT_PART ON SNP_MTXT_PART.I_TXT = SNP_CONNECT.I_TXT_JAVA_URL
 WHERE SNP_CONNECT.CON_NAME IS NOT NULL
 ORDER BY SNP_TECHNO.TECHNO_NAME;

Cool, isn’t it?

Have a nice day!


Matheus.

Failure to create ODI schedule
Hi,

Today, as on other normal days, I found a problem with the ODI schedule in a newly
created environment. While creating a schedule for a scenario execution, and then
clicking Update Schedule in Topology > Agents > OracleDIAgent, I received an exception:

ODI-1274: Agent Exception
Caused by: Could not find the AgentScheduler instance in order to process 'OdiComputePlanning' request

Oracle Support has a solution for this exception, but only for ODI 12c; it happens that
my environment is ODI 11.1.1.6. In the Oracle community there is the same question,
but without an answer.

I could not find any solution. Then, after crying a lot and restarting everything
(AdminServer, Managed Server and Node Manager), I saw another error when starting the
Node Manager:

weblogic.nodemanager.common.ConfigException: Native version is enabled but nodemanager native library could not be loaded

To solve it, I found only methods to bypass the problem, but none saying how to
actually fix it. To bypass, just change the parameter NativeVersionEnabled to false in
$BEA_HOME/common/nodemanager/nodemanager.properties;
this solves the Node Manager problem, but will not solve the problem with the ODI
schedule.

To solve both exceptions (Node Manager and ODI schedule), keep the Node Manager
parameter NativeVersionEnabled=true and set LD_LIBRARY_PATH in
$domain_home/bin/setDomainEnv.sh as below:

LD_LIBRARY_PATH=$BEA_HOME/server/native/linux/x86_64/
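
Depending on your scripts, you may also need to export the variable; a minimal sketch of
the lines added to setDomainEnv.sh (assuming a Linux x86_64 platform):

LD_LIBRARY_PATH=$BEA_HOME/server/native/linux/x86_64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH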

Then, perform an environment restart and a logout/login in ODI Studio.

If this procedure helped you to solve the problem, or not, send us your comments!
Dieison.

ODI – Import(ANT) Modes
Oracle introduced in Data Integrator 12c a spectacular way to avoid object duplication
(10g/11g users will remember the pain).

With the “Global ID”, the ODI repository generates a special hash for each object created
in the repository (sometimes it will be updated).

This internal ID should be available on the “Version” tab, as below:

So, why does this Global ID make sense?

According to the Oracle docs, “read carefully this section in order to determine the import
mode you need.”

By changing the ODI import modes, you will be able to import/customize duplicated
objects generated by devops scripts.

Let’s understand the Import Modes:

Duplication

This mode creates a new object (with a new internal ID) in the target Repository, and
inserts all the elements of the export file. The ID of this new object will be based on
the ID of the Repository in which it is to be created (the target Repository).
Dependencies between objects which are included in the export, such as
parent/child relationships, are recalculated to match the new parent IDs. References to
objects which are not included in the export are not recalculated.

Note that this mode is designed to insert only ‘new’ elements.

The Duplication mode is used to duplicate an object into the target repository. To
transfer objects from one repository to another, with the possibility to ship new
versions of these objects, or to make updates, it is better to use the three Synonym
modes.

This import mode is not available for importing master repositories. Creating a new
master repository using the export of an existing one is performed using the master
repository Import wizard.

Synonym Mode INSERT

Tries to insert the same object (with the same internal ID) into the target repository.
The original object ID is preserved. If an object of the same type with the same internal
ID already exists, then nothing is inserted.

Dependencies between objects which are included into the export such as parent/child
relationships are preserved. References to objects which are not included into the
export are not recalculated.

If any of the incoming attributes violates any referential constraints, the import
operation is aborted and an error message is thrown.

Synonym Mode UPDATE

Tries to modify the same object (with the same internal ID) in the repository. This
import mode updates the objects already existing in the target Repository with the
content of the export file.

If the object does not exist, the object is not imported.

Note that this mode is able to delete information in the target object if this information
does not exist in the export file.

This import mode does NOT create objects that do not exist in the target. It only
updates existing objects. For example, if the target repository contains a project with
no variables and you want to replace it with one that contains variables, this mode will
update the project name for example but will not create the variables under this
project. The Synonym Mode INSERT_UPDATE should be used for this purpose.

Synonym Mode INSERT_UPDATE

If no ODI object exists in the target Repository with an identical ID, this import mode
will create a new object with the content of the export file. Already existing objects
(with an identical ID) will be updated; the new ones, inserted. Existing child objects will
be updated, non-existing child objects will be inserted, and child objects existing in the
repository but not in the export file will be deleted.

Dependencies between objects which are included into the export such as parent/child
relationships are preserved. References to objects which are not included into the
export are not recalculated.

This import mode is not recommended when the export was done without the child
components. This will delete all sub-components of the existing object.

Import Replace

This import mode replaces an already existing object in the target repository by one
object of the same object type specified in the import file. This import mode is only
supported for scenarios, Knowledge Modules, actions, and action groups, and replaces
all children objects with the children objects from the imported object.

Note the following when using the Import Replace mode:

If your object is currently used by another ODI component (like, for example, a KM
used by an integration interface), this relationship will not be impacted by the import;
the interfaces will automatically use the new KM in the project.

Warnings:

• When replacing a Knowledge module by another one, Oracle Data Integrator sets
the options in the new module using option name matching with the old module’s
options. New options are set to the default value. It is advised to check the values
of these options in the interfaces.

• Replacing a KM by another one may lead to issues if the KMs are radically
different. It is advised to check the interface’s design and execution with the new
KM.

See you!
Maiquel.

GoldenGate supplemental log check
Are you bored of GoldenGate objects with no supplemental log on the Oracle
Database?

This script checks ALL tables in the GG PRM file, then checks the supplemental log
information on the database.

The PRM file name and the dblogin credentials should be changed to match your environment.

Try this script on crontab:

vi gg_trandata_checkup.sh

cd $GGATE
echo > gg.out
echo "dblogin USERID <user>@<db> PASSWORD <password>" > template_check.tmp
echo "" >> template_check.tmp
cat dirprm/ext_1.prm | grep -i TABLE | awk {' print $1" "$2 '} | awk -F"," {' print $1 '} | sed 's/TABLE/INFO TRANDATA/g' >> template_check.tmp
./ggsci -s > gg.out <<EOF
obey template_check.tmp
EOF
cat gg.out | sed '/^$/d' | grep -v ": ALL." | grep -v -i "info trandata" | grep -v "data is enabled for table" | grep -v "ERROR OGG-01784"

Mail this to GGAdmins.
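
A hypothetical crontab entry for it, mailing the output to the admins (path, schedule
and address are illustrative):

00 7 * * * /ggate/scripts/gg_trandata_checkup.sh 2>&1 | mailx -s "GG trandata check" ggadmins@company.com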


Maiquel.

OGG-01224 Oracle GoldenGate Command
Interpreter for Oracle: Bad file number
I noticed a strange coincidence between a GoldenGate Director monitoring failure and
the GoldenGate Manager messages.

During GoldenGate operation, it appends the never-ending failure messages below;
however, no GoldenGate process changes to “ABENDED” status:

2016-10-26 08:47:28 ERROR OGG-01224 Oracle GoldenGate Command Interpreter for Oracle: Bad file number.
2016-10-26 08:47:29 ERROR OGG-01668 Oracle GoldenGate Command Interpreter for Oracle: PROCESS ABENDING.

It’s caused by the GoldenGate logfile size (ggserr.log), so correct it with this:

grepora-gg@machine oracle$ cp ggserr.log ggserr.log-err-temp-log && > ggserr.log

It’s a good idea to stop the manager process (if possible) before truncating the log file.

Keep this in mind!


Maiquel.

ERROR OGG-02636 when creating an
integrated extract in GoldenGate 12c on a
Pluggable Database 12c
While creating an integrated extract in GoldenGate 12c on a Pluggable Database 12c, I
came across the following error, stating that the catalog name is mandatory and
was not being provided.

ERROR OGG-02636 Oracle GoldenGate Capture for Oracle, ext1.prm: The TABLE
specification ‘TABLE table_name’ for the source table ‘table_name’ does not include a
catalog name. The database requires a catalog name.

There are two ways to solve this case. The first, though less recommended, is to add the
name of the pluggable database (catalog) before the owner name on the table maps, for
example:

GGSCI (host1.net) 1> edit param ext1

--Tables
TABLE PDB_NAME.SCHEMA_OWNER.TABLE_NAME;

Not really enjoying this solution, and after searching for long hours without any other
result, our friend Maiquel DC indicated a parameter that identifies the catalog name
for all tables in the extract.

Add the following parameter to the extract configuration file:

GGSCI (host1.net) 1> edit param ext1

--Parameters
SOURCECATALOG PDB_NAME

That’s all folks.


Dieison.

OGG-0352: Invalid character for character
set UTF-8 was found while performing
character validation of source column
Almost a month without a post!
My bad, December is always a crazy time for DBAs, right?

This post’s title error happens because the charset is different between the databases
used on replication through GoldenGate, and it occurs only with alphanumeric columns
(CHAR, VARCHAR, VARCHAR2): even if the char length is the same, the
data length will be different (like I explained here). Take a look at this example:

sourcedb> select table_name, column_name, data_length, char_length
          from dba_tab_cols where column_name='NAME' order by 1,2,3;

TABLE_NAME    COLUMN_NAME  DATA_LENGTH CHAR_LENGTH
------------- ------------ ----------- -----------
TABLE_EXAMPLE NAME                  25          25

destdb> select table_name, column_name, data_length, char_length
        from dba_tab_cols where column_name='NAME' order by 1,2,3;

TABLE_NAME    COLUMN_NAME  DATA_LENGTH CHAR_LENGTH
------------- ------------ ----------- -----------
TABLE_EXAMPLE NAME                 100          25

There are basically two solutions:

1) Change one of the charsets.
2) Add the “SOURCECHARSET PASSTHRU” clause to the replicat file.

I usually prefer the second option, just because it’s less intrusive than the first.

See ya!
Matheus.

OGG-01934 Datastore repair failed,
OGG-01931 Datastore ‘dirbdb’ cannot be
opened
After moving GoldenGate 12c to an ACFS filesystem, I got the eternal WARNING
OGG-01931, even though the Datastore was created:
WARNING OGG-01931 Oracle GoldenGate Manager for Oracle, mgr.prm: Datastore
‘dirbdb’ cannot be opened. Error 2 (No such file or directory).

Complete solution steps to resolve the issue on a SHARED FILESYSTEM:

1. Log in to the ggsci prompt and stop all OGG processes, including jagent and manager.
2. Run the “delete datastore” command. Confirm the delete of the datastore.
3. Run the “CREATE DATASTORE SHM” command.
4. Start all OGG processes: start manager, start *, start jagent.

On GGSCI: CREATE DATASTORE SHM [ID n]

This indicates that the datastore should use System V shared memory for interprocess
communications.

Maiquel.

ERROR OGG-00446 – Unable to lock file “*”
(error 11, Resource temporarily unavailable).
GoldenGate 12c was running over an NFS filesystem and had an unexpected stop; then,
when trying to start, it raised OGG-00446:

ERROR OGG-00446 Oracle GoldenGate Capture for Oracle, e_crm01.prm: Unable to lock file “/mnt/ggate/dirchk/MYCRM.cpe” (error 11, Resource temporarily unavailable).

Here is the solution:

Move the file /mnt/ggate/dirchk/MYCRM.cpe to /mnt/ggate/dirchk/MYCRM.cpe_backup.

Then copy /mnt/ggate/dirchk/MYCRM.cpe_backup back to /mnt/ggate/dirchk/MYCRM.cpe.
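
In shell terms (using the paths from the error above):

mv /mnt/ggate/dirchk/MYCRM.cpe /mnt/ggate/dirchk/MYCRM.cpe_backup
cp /mnt/ggate/dirchk/MYCRM.cpe_backup /mnt/ggate/dirchk/MYCRM.cpe

This works presumably because the copy is a brand-new file (new inode), so the stale NFS
lock on the old one no longer applies.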

I can’t understand why Oracle keeps these stupid bugs in 12c.

C’est La Vie!
Maiquel.

Error OGG-00354 Invalid BEFORE
column:(column_name)
When we use an extraction process with certain macro filters and send the trails to a
GoldenGate with the JAVA adapter, the Java extract process fails with the following error:
OGG-00354 Invalid BEFORE column: (column_name).

EXTRACT PROCESS

EXTRACT EXT01
exttrail ./dirdat/e1, FORMAT RELEASE 11.1
GETUPDATEBEFORES
include ./dirmac/filter.mac
TABLE OWNER01.TABLE01, #filter01();

PUMP PROCESS

EXTRACT PUMP01
rmthost ggjava.net
rmttrail ./dirdat/j1, format release 11.1
GETUPDATEBEFORES
TABLE OWNER01.TABLE01;

EXTRACT GG JAVA

EXTRACT JAVA01
GETUPDATEBEFORES
TABLE OWNER01.TABLE01;

In some cases, this issue can be resolved just by removing the
“GETUPDATEBEFORES” clause, as reported in the Oracle note (Doc ID 2151605.1). But in
some environments this procedure does not resolve it, because it is an undocumented bug in
GoldenGate JAVA 11.1, which is caused by the use of format release 11.1.
This same process was tested on GoldenGate 12.1, with format release 12.1, and
the problem does not occur.

The solution is the upgrade!  \o/


Dieison.

Export/Backup directly to Zip using MKNOD!
We have all faced that situation where we have to make a logical backup/export and don’t
have enough area to do it, right?
We know the export usually compresses a lot with zip/gzip… Wouldn’t it be great if we could
export directly to a compressed file?

This situation became much more common because of Datapump, which requires a
directory accessible by the database server. If you have no possibility of adding a
mount point or any other area, this can help…

## BKP with MKNOD
BKP_DEST=/just/example
DATE=`date +%Y%m%d%H%M`
cd $BKP_DEST
mknod bkp_$DATE.dmp p
gzip < bkp_$DATE.dmp > bkp_$DATE.dmp.gz &
### Uncomment and adjust one of:
## MySQL:
#mysqldump -u $user -p$password $database > bkp_$DATE.dmp
## Oracle (Datapump or EXP):
expdp \"/ as sysdba\" dumpfile=bkp_$DATE.dmp full=y directory=DIRECTORY_EXAMPLE logfile=log_bkpzipped.log
#expdp $user/$password dumpfile=bkp_$DATE.dmp full=y directory=DIRECTORY_EXAMPLE logfile=log_bkpzipped.log
#exp \"/ as sysdba\" file=bkp_$DATE.dmp log=log_bkpzipped.log compress=y [tables=owner.table,..] [owner=schema1,..] [...]
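
To restore, the same trick works in reverse with the classic imp (a sketch, assuming one
of the exports above; the pipe name is illustrative):

mknod restore_pipe.dmp p
gunzip < bkp_$DATE.dmp.gz > restore_pipe.dmp &
imp \"/ as sysdba\" file=restore_pipe.dmp full=y log=log_restore.log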

Hugs!
Matheus.

“tail -f” vs “tail -F”: Do you know the
difference?
Hi all!
Do you know the difference between “tail -f” and “tail -F”?

Ok, don’t feel bad. It’s very difficult to find someone who knows… And with reason: I
couldn’t find any link explaining it by Googling.
It’s possible that I just don’t know how to search for it. But I searched the way I would
if I didn’t know the answer… and couldn’t find anything about it…

Let’s take a look at --help, then:

[root@mbdbasrvr]# tail --help
Mandatory arguments to long options are mandatory for short options too.
      --retry              keep trying to open a file even if it is inaccessible when tail
                           starts or if it becomes inaccessible later; useful when
                           following by name, i.e., with --follow=name
  -f, --follow[={name|descriptor}]
                           output appended data as the file grows;
                           -f, --follow, and --follow=descriptor are equivalent
  -F                       same as --follow=name --retry
  -n, --lines=N            output the last N lines, instead of the last 10
      --max-unchanged-stats=N
                           with --follow=name, reopen a FILE which has not changed size
                           after N (default 5) iterations to see if it has been unlinked
                           or renamed (this is the usual case of rotated log files)

If the first character of N (the number of bytes or lines) is a `+', print beginning with
the Nth item from the start of each file, otherwise, print the last N items in the file.
N may have a multiplier suffix: b 512, k 1024, m 1024*1024.

With --follow (-f), tail defaults to following the file descriptor, which means that even
if a tail'ed file is renamed, tail will continue to track its end.  This default behavior
is not desirable when you really want to track the actual name of the file, not the file
descriptor (e.g., log rotation).  Use --follow=name in that case.  That causes tail to
track the named file by reopening it periodically to see if it has been removed and
recreated by some other program.

Report bugs to .

(Yes, I cut off the non-useful options. But you can check on your OS if you want to see
all of them.)

So, ok! The information is there, but it isn’t very clear; you have to connect some points to
understand it. I couldn’t find examples using -F or --retry… So, let’s innovate by posting
about it…
The effect of -F (capital) is the same as “--follow=name --retry”. It basically keeps
working if the inode of the file changes. Very useful on systems with log rotation and
the like.

Let me show you:

Session1:

[root@mbdbasrvr]# echo "test1" > test.log

Session2:

[root@mbdbasrvr]# tail -f test.log
test1

Session1:

[root@mbdbasrvr]# echo "test2" > test.log

Session2:

[root@mbdbasrvr]# tail -f test.log
test1

Oook, we truncated the file but the tail didn’t change. But what if we use an append now?

Session1:

[root@mbdbasrvr]# echo "test2" >> test.log
[root@mbdbasrvr]# echo "test3" >> test.log
[root@mbdbasrvr]# cat test.log
test2
test2
test3

Session2:

[root@mbdbasrvr]# tail -f test.log
test1
[root@mbdbasrvr]# tail -f test.log
test2
test2
test3

Still not working, unless you restart the command… It’s because the inode changed.

Let’s do it with -F (capital), to see the difference:

Session1:

[root@mbdbasrvr]# mv test.log oldtest.log
[root@mbdbasrvr]# echo "new_test" > test.log

Session2:

[root@mbdbasrvr]# tail -F test.log
new_test

Session1:

[root@mbdbasrvr]# echo "new_test2" >> test.log

Session2:

[root@mbdbasrvr]# tail -F test.log
new_test
new_test2

Session1:

[root@mbdbasrvr]# echo "new_test3" > test.log
[root@mbdbasrvr]# echo "new_test4" >> test.log

Session2:

[root@mbdbasrvr]# tail -F test.log
new_test
new_test2
tail: test.log: file truncated
new_test3
new_test4

Session1:

[root@mbdbasrvr]# mv test.log oldtest.log
mv: overwrite `oldtest.log'? y
[root@mbdbasrvr]# echo "new_test_with_capital" > test.log
[root@mbdbasrvr]# rm test.log
[root@mbdbasrvr]# echo "xxx new file" > test.log

Session2:

[root@mbdbasrvr]# tail -F test.log
new_test
new_test2
tail: test.log: file truncated
new_test3
new_test4
tail: `test.log' has become inaccessible: No such file or directory
tail: `test.log' has appeared;  following end of new file
new_test_with_capital
tail: `test.log' has become inaccessible: No such file or directory
tail: `test.log' has appeared;  following end of new file
xxx new file

Owoooooow! Cool, hãn?

Even if we truncate, move, or remove and recreate the file, the command keeps working!

Very cool and very useful in some situations… Unfortunately just a few people know it…
Let’s spread this information by sharing this post!

Have a nice day, see ya!


Matheus.

GB vs GiB | MB vs MiB | KB vs KiB
Oh man!
Is it just me, or didn’t you know about this either?

Okey. Here the difference is well explained. I saw it for the first time in the EMC
DataDomain interface and it sounded a little “strange”, but ok. Last week I heard a
friend talking about it and decided to search… What a surprise! haha

In a nutshell, the units as we know them (1 Gigabyte = 1000 Megabytes) were
proposed by the Système International d’Unités (SI), and the other way (1 Gibibyte = 1024
Mebibytes, with much more “precision”) was proposed by the International
Electrotechnical Commission (IEC), in 1999.
The main difference is that the first uses 10^x measurement, rather than 2^x (1024
base), like the IEC. For example:

For a DVD:
4.7 GB == 4.38 GiB
8.5 GB == 7.92 GiB
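
You can check the numbers yourself with bc (a quick sketch):

echo "scale=3; 4.7*10^9/2^30" | bc    # 4.377 GiB
echo "scale=3; 8.5*10^9/2^30" | bc    # 7.916 GiB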

Interesting, isn’t it?


So, again, I suggest you spend some time reading this …

Matheus.

RHEL: Figuring out CPUs, Cores and
Hyper-Threading
Hi all!
It’s a recurring subject, right? But no one is ever 100% sure of how to figure it out… So, let
me quickly show you my way:

– Physical CPUs (sockets):

[root@mysrvr ~]# grep -i "physical id" /proc/cpuinfo | sort -u | wc -l
2
[root@mysrvr ~]# dmidecode -t processor | grep CPU
Version: Intel(R) Xeon(R) CPU X5570 @ 2.93GHz
Version: Intel(R) Xeon(R) CPU X5570 @ 2.93GHz

So, 2 physical CPUs.

– Physical Cores

[root@mysrvr ~]# egrep -e "core id" -e ^physical /proc/cpuinfo | xargs -l2 echo | sort -u
physical id : 0 core id : 0
physical id : 0 core id : 1
physical id : 0 core id : 2
physical id : 0 core id : 3
physical id : 1 core id : 0
physical id : 1 core id : 1
physical id : 1 core id : 2
physical id : 1 core id : 3

Each one of the physical processors has 4 cores.

So, there are two quad-cores, which gives us 8 cores in total.

– Logical CPUs

[root@mysrvr ~]# grep -i "processor" /proc/cpuinfo | sort -u | wc -l
16

Ok, so we have twice as many logical CPUs as cores.

This means we have Hyper-Threading (a technology of Intel processors).
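
By the way, if lscpu is available on your distro, it summarizes all of the above in one
shot (a sketch; the values shown match this server):

[root@mysrvr ~]# lscpu | egrep -i 'socket|core|thread'
Thread(s) per core:    2
Core(s) per socket:    4
Socket(s):             2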

Not so hard, right?

Those links are similar and quite cool to understand the concepts:
https://access.redhat.com/discussions/480953
https://www.redhat.com/archives/redhat-list/2011-August/msg00009.html
http://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/
hyper-threading-technology.html

Matheus.

Shellscript: Using eval and SQLPlus
I have always liked bash programming, and sometimes I need to set Bash variables using
information from Oracle tables.

To achieve that I use the solution below, which I explain in detail later.

# SQLPlus should return a string with all Bash commands
$ORACLE_HOME/bin/sqlplus -S -L -R 3 / as sysdba > /tmp/sqlplus.log <<EOF
SET PAGES 0 FEEDBACK OFF TIMING OFF VERIFY OFF LINES 1000
SELECT 'OK:DBNAME='||UPPER(D.NAME)||'; INST_NAME='||I.INSTANCE_NAME AS STR
  FROM V\$DATABASE D, V\$INSTANCE I;
EOF
# Now, test if sqlplus exited fine, and check if the result string starts with the OK keyword
if [ $? -eq 0 ] && [ "$( cat /tmp/sqlplus.log | head -n 1 | cut -d: -f1 )" == "OK" ]; then
  sed -i 's/OK://g' /tmp/sqlplus.log
  while read r; do eval "$r"; done < /tmp/sqlplus.log
else
  echo "Failed to search local instance $ORACLE_SID"
  return 2
fi

In the first part, I call sqlplus, whose SELECT should return a string that contains valid
bash commands to set all the variables I need. In this example, sqlplus returns the Database
Name and the Instance Name:

OK:DBNAME=xpto; INST_NAME=xpto_1;

The second part exists only for consistency checks. It verifies that the result string starts
with the “OK” keyword. If all went fine, it executes the result string using the bash
command eval.

eval – That is where magic happens!

The command eval can be used to evaluate (and execute) an ordinary string using
the current bash context and environment. That is different from when you put your
commands in a subshell.
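
A tiny illustration of the difference (values are made up):

r='DBNAME=XPTO; INST_NAME=xpto_1'   # string as returned by sqlplus
eval "$r"                           # executed in the CURRENT shell
echo $DBNAME                        # prints XPTO: the variables persist
echo "$r" | sh                      # a subshell would set and then lose them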

The code below reads sqlplus.log and executes every line using eval:

while read line; do eval "$line"; done < /tmp/sqlplus.log

Cassiano.

Linux Basic: Creating a Filesystem
From disk to filesystem:

Rescan the SCSI controller to detect the disk (controller id 0, in this example):

echo "- - -" > /sys/class/scsi_host/host0/scan

– List disks

fdisk -l

fdisk, choosing the options: n (new partition), p (primary), 1 (partition number):

fdisk /dev/sdm

Create physical volume

pvcreate /dev/sdm1

Create Volume Group

vgcreate oracle /dev/sdm1

Rename Volume Group

vgrename oracle vgoracle

Create LV

lvcreate -L 19G -n lvoracle vgoracle

Extend LV

lvextend -L +990M /dev/vgoracle/lvoracle

Make FileSystem

mkfs.ext3 -m 0 -v /dev/vgoracle/lvoracle

OBS: “-m 0” sets the percentage of blocks reserved for the super-user. “0” because I don’t
want reserved space now, so 100% of the disk will be available for use on the fs.

Mount filesystem on Directory

mount -t ext3 /dev/vgoracle/lvoracle /oracle/

Just to check:

$ df -h
/dev/mapper/vgoracle-lvoracle   20G  173M   20G   1% /oracle
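
To make the mount persistent across reboots, you would also add it to /etc/fstab (a
sketch; adjust the options to your needs):

/dev/vgoracle/lvoracle  /oracle  ext3  defaults  0 0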

Have a nice day!
Matheus.

Linux: Resizing Swap Online
Hi all!
Quick one to resize swap online:

[root@server-db ~]# swapon -s
Filename                   Type       Size     Used    Priority
/dev/mapper/rootvg-lvswap  partition  5242872  373624  -1
[root@server-db ~]# vgs
VG      #PV #LV #SN Attr   VSize   VFree
[...]
rootvg    1   6   0 wz--n- 135.69G 5.69G
[...]
[root@server-db ~]# lvextend -L +2048M /dev/mapper/rootvg-lvswap
Extending logical volume lvswap to 7.00 GB
Logical volume lvswap successfully resized
[root@server-db ~]# vgs
VG      #PV #LV #SN Attr   VSize   VFree
[...]
rootvg    1   6   0 wz--n- 135.69G 3.69G
[...]
[root@server-db ~]# mkswap /dev/mapper/rootvg-lvswap
Setting up swapspace version 1, size = 7516188 kB
[root@server-db ~]# swapoff /dev/mapper/rootvg-lvswap
[root@server-db ~]# swapon /dev/mapper/rootvg-lvswap
[root@server-db ~]# swapon -s
Filename                   Type       Size     Used    Priority
/dev/mapper/rootvg-lvswap  partition  7516188  373624  -1

See ya!
Matheus.

nc -l – Starting up a fake service
Hi everyone!

Recently I faced a situation that made me find out a very nice and useful
command that helped me a lot, and I hope it helps you guys as well. It’s
named:

nc

Situation: We have an environment replicated from one datacenter to another (using
Golden Gate), where the ETL happens. So basically it is:

Datacenter 1 (root data)

Replicates to datacenter 2 (transforming the data)

that replicates to datacenter 3 (production itself)

In Datacenter 2, we have a Data Guard configured. So then came the questions:

• What if we need to do the switchover to the standby environments?

• Are we going to have everything we need properly set up for the replication?

• How are we going to test the ports if nothing is up in there? Aren’t we gonna get
“connection refused”?

Calm down! There is a very nice workaround for this.

All you need to do is install the nc command as root (if it is not installed already):

yum install nc

Then execute it as follows, on the server you wanna test:

nc -l

example:

I wanna make sure that on the standby server the port 7809 (Golden Gate MANAGER
port) is open. On the standby server you run:

nc -l 7809

Then, from a remote server, you are going to be able to connect through a simple
telnet command:

telnet server.domain port

example:

telnet standby.company.com 7809

ON PRACTICE:

• Try the telnet from the remote server to the standby:

remoteserver {/home/oracle}: telnet standby.server 7809

Trying 192.168.0.10…

telnet: connect to address 192.168.0.10: Connection refused

• Then we start the fake service on the standby server!

standby.server {/home/oracle}: nc -l 7809

• And try the telnet again:

remoteserver {/home/oracle}: telnet standby.server 7809

Trying 192.168.0.10…

Connected to standby.server.

Escape character is ‘^]’.

Cheers!

Rafael.

Is My Linux Server Physical or Virtual?
Suppose you are in a server shell and don’t know whether your machine is virtualized (a
VM). One way to check that (supposing VMware as the hypervisor solution) is:

[root@mydbsrvr ~]# dmidecode | grep -i vmware
Manufacturer: VMware, Inc.
Product Name: VMware Virtual Platform
Serial Number: VMware-xx xx xx xx xx xx xx xx-xx xx xx xx xx xx xx
Description: VMware SVGA II

If you get an answer like this, yes, it’s a VM.

Matheus.

VMWare: Adding Shared Disks for Clustered
Oracle Database
Hi folks!
Today a friend asked about how to configure disks on VMWare to create a virtualized
cluster database. I revisited my old notes and decided to share. Here it goes…

First, I really have some constraints about it:


– Fake “high availability”: To have HA with VM it’s not needed 2 vms, if a host fail
VMWare should make a VMotion (if well configured), and no services will be affected.
So, one VM is ok.
– Not real “horizontally scallated”: It probably would be better to use one server as
physical than have two vms on it. Not make sense to do it…

So, why?
To prove concept, evaluate RAC configuration (caches on sequences, etc) and labs,
to learn and practice RAC stuffs…

Ok, now how to make it happen?

1. Add a new disk to one of the machines. One of them will be the “primary” and will
share its disks with the other.

2. Set Mode Thick Eager Zeroed

3. Create a specific controller for these “shared disks”

4. Set controller to virtual sharing

# On the other machine:
5. Add the existing disk to the other VM (not the primary; from the primary)

6. Select disk from primary

7. Create a new controller, as you did on the primary, and select it:

8. Set controller to virtual sharing

OBS:
If this error happens, one of your controllers is not in sharing mode. Please check it.

And here we are!


Good lab!
Matheus.

VMware: Recognize Memory Addition Online
A quick script to do that:

grep -l line /sys/devices/system/memory/memory*/state | while read bla; do
  echo online > ${bla}
done

Have a nice day!


Matheus.

Recursive string change
If you want to recursively change one string to another, it’s simple: you need a list of full
file paths called ‘output_list’, then run the command below:

cat output_list | while read line;
do
  cp -p $line $line.bkp;
  cat $line | sed 's/SOURCE_STRING/TARGET_STRING/g' > $line.bkp && mv $line.bkp $line;
done

Keep in mind it’s a DANGEROUS command: double-check your file list and, if
necessary, make a full backup of your system.

It will run on UNIX(ES) and Linux.

Maiquel.

Kludge to keep Database Alive
It’s not so pretty, and Oracle has the Oracle Restart services for that. But for a
temporary and quick need, this script solves the problem:

if ps -fu oracle | grep -v grep | grep ora_smon_orcl > /dev/null
then
  echo "orcl instance is up and running"
else
  echo "orcl instance is down"
  sqlplus /nolog > /dev/null 2>&1 <<EOF
conn / as sysdba
startup
exit
EOF
fi

Matheus.

RHEL7: rc.local service not starting
It’s very common to automate application startup in rc.local on Linux systems.

I was testing Red Hat 7.2 (Maipo) and found that the apps weren’t being started.

I found this on a Red Hat blog:

“Systemd is a system and service manager for Linux operating systems. It is
designed to be backwards compatible with SysV init scripts, and provides a number of
features such as parallel startup of system services at boot time, on-demand
activation of daemons, support for system state snapshots, or dependency-based
service control logic. In Red Hat Enterprise Linux 7, systemd replaces Upstart as the
default init system.”

The default /etc/rc.local comes with useful info:

#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run ‘chmod +x /etc/rc.d/rc.local’ to ensure
# that this script will be executed during boot .

touch /var/lock/subsys/local

So, this ‘chmod’ is what enables rc.local during system startup.
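
In practice, that means (a minimal sketch):

chmod +x /etc/rc.d/rc.local
systemctl start rc-local.service    # or just reboot to test it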

To get familiar with this new feature:

[root@somesystem~]# systemctl status rc-local
● rc-local.service - /etc/rc.d/rc.local Compatibility
   Loaded: loaded (/usr/lib/systemd/system/rc-local.service; static; vendor preset: disabled)
   Active: active (running) since Mon 2016-07-11 13:16:18 BRT; 28min ago
  Process: 1046 ExecStart=/etc/rc.d/rc.local start (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/rc-local.service
           ├─2272 /bin/sh /oracle/domains/mywl_domain/startWebLogic.sh
           ├─2284 /bin/sh /oracle/domains/mywl_domain/bin/startWebLogic.sh
           ├─2374 /bin/sh /oracle/domains/mywl_domain/bin/startNodeManager.sh
           ├─2377 /bin/sh /oracle/binaries/wlserver/server/bin/startNodeManager.sh
           ├─2428 /oracle/jdk1.7.0_25/bin/java -Dwls.home=/oracle/binaries/wlserver/server -Dweblogic.home=/oracle/binaries/wlserver/server -server -Xms1g -Xmx1g -XX:MaxPermSize=512m -Dcoherence...
           └─2442 /oracle/jdk1.7.0_25/bin/java -server -Xms1g -Xmx1g -XX:MaxPermSize=512m -Dweblogic.Name=AdminServer -Djava.security.policy=/oracle/binaries/wlserver/server/lib/weblogic.policy .

Maiquel.

Mount Directory from Remote RHEL7 Server
(NFS)
Quick Post: To mount a directory via NFS from a RHEL7 remote server:

Source Host:

[root@sourcehost ~]# cat /etc/exports
/oracle/sharedir targethost(rw,no_root_squash,insecure)
[root@sourcehost ~]# /bin/systemctl restart nfs.service

* Note: “/bin/systemctl” is the new way as of RHEL7. For other versions you can just use
“service nfs restart”.

Target Host:

[root@targethost ~]# mkdir -p /sourcehost/sharedir
[root@targethost ~]# mount -t nfs sourcehost:/oracle/sharedir /sourcehost/sharedir
[root@srac-his ~]# df -h /sourcehost/sharedir
Filesystem                   Size  Used Avail Use% Mounted on
sourcehost:/oracle/sharedir  100G  279M  100G   1% /sourcehost/sharedir

Have a nice weekend!


Matheus.

AIX: NTP Service Basics
Hi all,
I always forget these commands and have to search for them again. For further searches, I
expect to find them in my own posts…

To start Service

startsrc -s xntpd

To stop Service

stopsrc -s xntpd

Configuration File

/etc/ntpd.conf

I expect it to be useful to you too.


See ya!
Matheus.

Flush DNS Cache
Need to flush the DNS cache? Easy as this:

# Linux
1) Flush DNS – “Auto”

service nscd restart

2) Flush DNS – “Manual”

service nscd stop
rm /var/db/nscd/*
service nscd start

# Windows
1) Flush DNS

ipconfig /flushdns

For quick referece:
http://www.cyberciti.biz/faq/rhel-debian-ubuntu-flush-clear-dns-cache/

Matheus.

Flush DNS on Linux
I began posting about ORA-12514 after a database migration involving DNS adjustments.
Then, to make it clearer, I wrote about How to Flush DNS Cache.

Now, just complementary information that can be useful:

# To invalidade DNS Cache:

ls /var/db/nscd/
group  hosts  netgroup  passwd  services
nscd --invalidate=hosts   (or -i hosts)

Hugs!

Matheus.

RHEL: Adding User/Group to SSH and
SUDOERS file
Some Linux basics… To add a group or a user (in this case, “new_group”) to the ssh and
sudoers files:

[root@db-server ~]# vi /etc/ssh/sshd_config
[root@db-server ~]# cat /etc/ssh/sshd_config | grep new_group
AllowGroups ssh_group1 root oinstall linux new_group
[root@db-server ~]# vi /etc/sudoers
[root@db-server ~]# cat /etc/sudoers | grep new_group
%new_group ALL=(ALL) PASSWD: ALL
[root@db-server ~]# service sshd restart
Stopping sshd: [ OK ]
Starting sshd: [ OK ]
[root@db-server ~]#

Matheus.

Oracle Database: Compression Algorithms
for Cloud Backup
Hi all!
Again talking about cloud backups for on-premises databases: an important aspect is
to compress the data, so network consumption is reduced, since less data is
being transferred.

It’s also important to evaluate CPU consumption: the higher the compression
algorithm, the more CPU it uses. So, pay attention!

Now, how to choose the compression algorithm? Here are the options Oracle gives us:

SQL> col ALGORITHM_NAME for a15
SQL> set line 200
SQL> select ALGORITHM_NAME, INITIAL_RELEASE, TERMINAL_RELEASE,
  2         ALGORITHM_DESCRIPTION, ALGORITHM_COMPATIBILITY
  3    from v$rman_compression_algorithm;

ALGORITHM_NAME  INITIAL_RELEASE  TERMINAL_RELEASE  ALGORITHM_DESCRIPTION                        ALGORITHM_COMPATIB
--------------- ---------------- ----------------- -------------------------------------------- ------------------
BZIP2           10.0.0.0.0       11.2.0.0.0        good compression ratio                       9.2.0.0.0
BASIC           10.0.0.0.0                         good compression ratio                       9.2.0.0.0
LOW             11.2.0.0.0                         maximum possible compression speed           11.2.0.0.0
ZLIB            11.0.0.0.0       11.2.0.0.0        balance between speed and compression ratio  11.0.0.0.0
MEDIUM          11.2.0.0.0                         balance between speed and compression ratio  11.0.0.0.0
HIGH            11.2.0.0.0                         maximum possible compression ratio           11.2.0.0.0

How to identify our compression algorithm?

RMAN> show COMPRESSION ALGORITHM;
RMAN configuration parameters for database with db_unique_name EZM_PRFL are:
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default

And how to change it?

RMAN> CONFIGURE COMPRESSION ALGORITHM 'HIGH';
new RMAN configuration parameters:
CONFIGURE COMPRESSION ALGORITHM 'HIGH' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;
new RMAN configuration parameters are successfully stored
RMAN> show COMPRESSION ALGORITHM;
RMAN configuration parameters for database with db_unique_name EZM_PRFL are:
CONFIGURE COMPRESSION ALGORITHM 'HIGH' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE;

Ok,
but how to evaluate my compression ratio?

See the difference between the INPUT_BYTES_DISPLAY and
OUTPUT_BYTES_DISPLAY columns from the query:

prddb> col STATUS for a10
prddb> col INPUT_BYTES_DISPLAY for a15
prddb> col OUTPUT_BYTES_DISPLAY for a15
prddb> col TIME_TAKEN_DISPLAY for a20
prddb> SELECT SESSION_KEY,
  2         INPUT_TYPE,
  3         STATUS,
  4         TO_CHAR(START_TIME, 'mm/dd/yy hh24:mi') start_time,
  5         TO_CHAR(END_TIME, 'mm/dd/yy hh24:mi') end_time,
  6  --     ELAPSED_SECONDS / 3600 hrs,
  7         COMPRESSION_RATIO,
  8         INPUT_BYTES_DISPLAY,
  9         OUTPUT_BYTES_DISPLAY,
 10         TIME_TAKEN_DISPLAY
 11    FROM V$RMAN_BACKUP_JOB_DETAILS
 12   where input_type like 'DB%'
 13   ORDER BY SESSION_KEY
 14  /

SESSION_KEY INPUT_TYPE STATUS    START_TIME     END_TIME       COMPRESSION_RATIO INPUT_BYTES_DIS OUTPUT_BYTES_DI TIME_TAKEN_DISPLAY
----------- ---------- --------- -------------- -------------- ----------------- --------------- --------------- ------------------
          2 DB FULL    COMPLETED 04/22/16 12:59 04/22/16 13:06        6,84838963 4.26G           636.50M         00:06:57
          9 DB FULL    COMPLETED 04/22/16 13:47 04/22/16 13:54        6,83764706 4.26G           637.50M         00:06:37
         14 DB FULL    COMPLETED 04/22/16 16:26 04/22/16 16:33        6,84189878 4.26G           637.25M         00:06:48
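
Note how the ratio checks out: 4.26G of input over 636.50M of output is roughly 6.85,
matching the COMPRESSION_RATIO column.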

KB: https://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmconfa.htm#BRADV89466

Done?
If you have any question, please let me know in the comments!
Matheus.

Oracle Database Backup to Cloud: KBHS –
01602: backup piece 13p0jski_1_1 is not
encrypted
Hi all!
I’m preparing material about downloading, configuring and using Oracle Database Cloud
Backup. My case is about backing up a local database to the Cloud.

So, as an avant-première for you from the blog, a quick situation about it:

# Error

RMAN-03009: failure of backup command on ORA_SBT_TAPE_1 channel at 04/14/2016 13:58:45
ORA-27030: skgfwrt: sbtwrite2 returned error
ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
KBHS-01602: backup piece 12p1krsi_1_1 is not encrypted

# Solution (one of)

RMAN> set encryption on identified by "mypassword" only;
executing command: SET encryption

Why?

To use Oracle Database Backup to Cloud you need to use at least one encryption
method.
Oracle offers basically 3:
– Password Encryption
– Transparent Data Encryption (TDE)
– Dual-Mode Encryption (a combination of password and TDE)

In this post I used the easiest one, but I recommend you take a look at the KB:
https://docs.oracle.com/cloud/latest/dbbackup_gs/CSDBB.pdf

Matheus.

RMAN Raise ORA-19913 ORA-28365 On
Restore from Cloud Backup
At first I thought it was some error with Database Backup to Cloud while testing. Then I
realized it was a simple mistake of my own.

Let me show you. First, trying to restore a datafile:

[oracle@mydbsrvr archivelogs]$ rman target /
RMAN> restore datafile 6;
Starting restore at 03-MAY-2016 20:00:30
using channel ORA_SBT_TAPE_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=178 device type=DISK
channel ORA_SBT_TAPE_1: starting datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_SBT_TAPE_1: restoring datafile 00006 to /db/u1001/test/cloud_test/test_restore.dbf
channel ORA_SBT_TAPE_1: reading from backup piece 0sr4mdun_1_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 05/03/2016 20:00:34
ORA-19870: error while restoring backup piece 0sr4mdun_1_1
ORA-19913: unable to decrypt backup
ORA-28365: wallet is not open

Ok, it might have happened because I forgot to set the encryption password:

RMAN> SET ENCRYPTION ON IDENTIFIED BY "matheusdba" only;
executing command: SET encryption
RMAN> restore datafile 6;
Starting restore at 03-MAY-2016 20:00:30
using channel ORA_SBT_TAPE_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=178 device type=DISK
channel ORA_SBT_TAPE_1: starting datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_SBT_TAPE_1: restoring datafile 00006 to /db/u1001/test/cloud_test/test_restore.dbf
channel ORA_SBT_TAPE_1: reading from backup piece 0sr4mdun_1_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 05/03/2016 20:00:34
ORA-19870: error while restoring backup piece 0sr4mdun_1_1
ORA-19913: unable to decrypt backup
ORA-28365: wallet is not open

It happened again?
At this point I suspected some kind of bug… But it was my mistake, and it is not related to
the Cloud, but to encryption usage. To understand:
For Backup: use ENCRYPTION
For Restore/Recover: use DECRYPTION

Obvious, but it took me a minute to realize…

Setting decryption, and problem solved:

RMAN> set DECRYPTION identified by "matheusdba";
executing command: SET decryption
RMAN> restore datafile 6;
Starting restore at 03-MAY-2016 20:00:58
using channel ORA_SBT_TAPE_1
using channel ORA_DISK_1
channel ORA_SBT_TAPE_1: starting datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_SBT_TAPE_1: restoring datafile 00006 to /db/u1001/test/cloud_test/test_restore.dbf
channel ORA_SBT_TAPE_1: reading from backup piece 0sr4mdun_1_1
channel ORA_SBT_TAPE_1: piece handle=0sr4mdun_1_1 tag=TAG20160503T193239
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:03
Finished restore at 03-MAY-2016 20:01:02

See ya!
Matheus.

UnknownHostException: Could not
authenticate to Oracle Database Cloud
Backup Module
Hi all!
When running the Oracle Database Cloud Backup Module, I found this error.

Command:

java -jar opc_install.jar -serviceName Storage -identityDomain usmatheusdba -opcId 'matheus@boesing.com.br' -opcPass 'BestBlog2016' -walletDir /db/oracle/admin/cloud/wallet -libDir /db/oracle/admin/cloud/libs

(Credential values changed, of course…)

Error:

Oracle Database Cloud Backup Module Install Tool, build 2016-02-04
java.net.UnknownHostException: usmatheusdba.storage.oraclecloud.com
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:175)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:385)
at java.net.Socket.connect(Socket.java:546)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:602)
at sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:160)
at sun.net.NetworkClient.doConnect(NetworkClient.java:178)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:427)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
at sun.net.www.protocol.https.HttpsClient.(HttpsClient.java:275)
at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:332)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:891)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1226)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)
at oracle.backup.opc.install.OpcConfig.testConnection(OpcConfig.java:235)
at oracle.backup.opc.install.OpcConfig.doOpcConfig(OpcConfig.java:204)
at oracle.backup.opc.install.OpcConfig.main(OpcConfig.java:197)
Could not authenticate to Oracle Database Cloud Backup Module

Solution:
Set the Replication Policy of the Oracle Storage Cloud Service.
In My Services Home, the Oracle Storage Cloud Service has a link to “Set Replication
Policy”. Simply set it.
But pay attention: once you select a replication policy, you can’t change it.

As you can see, I already did it:

After that, everything worked fine.

KB:
Problems with Installing the Backup Module
Selecting a Replication Policy for Oracle Storage Cloud Service

See ya!
Matheus.

Cloud Computing Assessment – Free
Hi folks!
I’ve been away a few days, right? My bad. I’m sorry.
But I have good news. I’m preparing a new site where the content of this blog will be
more efficiently organized. Of course, the daily posts will continue. You’ll like it, I
promise.

For now, I’d suggest you take this assessment about Cloud Computing provided by
Cloud-Institute.org.
The questions themselves raise some good points for reflection. Follow the link:

http://cloud-institute.org/cloud-open-exam.html

See ya!

Matheus.

Monitoring MySQL with Nagios – Quick View
Hi all!
As you know, we have some commercial solutions for monitoring/alerting on MySQL, like
MySQL Enterprise Monitor or Oracle Grid/Cloud Control.

But, given we are using MySQL instead of Oracle Database, we can assume it’s
probably a decision based on cost. So, considering Open Source solutions, we
basically have Nagios, Zabbix, OpenNMS…

Thinking of Nagios, in my opinion the “supra sumo” is mysql_health_check.pl.


Below, the white paper and presentation:
White Paper
Presentation
Code
A good one by Sheeri Cabral, posted here!

Anyway, with these two we can make lots of magic:

1. check_mysql.pl
– Checks the status of the MySQL server (slow queries, etc.)
– Queries-per-second graph

2. check_db_query.pl
– Allows running SQL queries and setting thresholds for warning and critical. Ex:

check_db_query.pl -d database -q query [-w warn] [-c crit] [-C conn_file] [-p placeholder]

Example of a Nagios call:

241
define command{
    command_name    check_db_entries
    command_line    /usr/local/bin/perl $USER1$/check_db_query.pl -d "$ARG1$" -q "$ARG2$" $ARG3$
}
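
And a hypothetical service definition using it (host, query and thresholds are
illustrative):

define service{
    use                   generic-service
    host_name             mysqlsrv01
    service_description   Pending orders
    check_command         check_db_entries!mydb!select count(*) from orders where status='PENDING'!-w 100 -c 500
}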

So now it’s just a matter of writing your queries and implementing your free monitoring on MySQL!
Matheus.

Optimize fragmented tables in MySQL
It happens on MySQL, as you know. Running an OPTIMIZE TABLE solves the issue.
BUT, be careful! During the optimize the table stays locked (writing is not possible).

(Fragmented Table)

So what?
To avoid causing a lock on every table, the script below shows and optimizes (if you want
to list but not run, comment the OPTIMIZE line) only the tables that actually have
fragmentation.

It was very useful to me!

#!/bin/sh
echo -n "MySQL username: " ; read username
echo -n "MySQL password: " ; stty -echo ; read password ; stty echo ; echo
mysql -u $username -p"$password" -NBe "SHOW DATABASES;" | grep -v 'lost+found' | while read database ; do
  mysql -u $username -p"$password" -NBe "SHOW TABLE STATUS;" $database | while read name engine version rowformat rows avgrowlength datalength maxdatalength indexlength datafree autoincrement createtime updatetime checktime collation checksum createoptions comment ; do
    if [ "$datafree" -gt 0 ] ; then
      fragmentation=$(($datafree * 100 / $datalength))
      echo "$database.$name is $fragmentation% fragmented."
      mysql -u "$username" -p"$password" -NBe "OPTIMIZE TABLE $name;" "$database"
    fi
  done
done

The result will be like:

MySQL username: root
MySQL password:
...
mysql.db is 12% fragmented.
mysql.db optimize status OK
mysql.user is 9% fragmented.
mysql.db optimize status OK
...

This script is a full copy from this post by Robert de Bock .
Thanks, Robert!

Matheus.

MySQL Network Connections on
‘TIME_WAIT’
Hi all!
Recently I caught a bunch of connections in ‘TIME_WAIT’ on a MySQL Server through
‘netstat -antp’, filtering for port 3306…
After some time, we identified this was caused by the environment not using DNS,
only fixed IPs (uuugh!)…

As you know, for security measures MySQL maintains a host cache for connections
established. From MySQL docs:

“For each new client connection, the server uses the client IP address to check
whether the client host name is in the host cache. If not, the server attempts to resolve
the host name. First, it resolves the IP address to a host name and resolves that host
name back to an IP address. Then it compares the result to the original IP address to
ensure that they are the same. The server stores information about the result of this
operation in the host cache. If the cache is full, the least recently used entry is
discarded.”
9.12.6.2 DNS Lookup Optimization and the Host Cache

For this reason, the DNS reverse lookup done for each login was hanging these
connections.

The solution?
The right way: add an A record in the DNS for the hosts. Use DNS!
The quick way: add the mapping for the connecting hosts to /etc/hosts on the database
server, avoiding the DNS lookup.
The quicker way: set the skip-name-resolve variable in /etc/my.cnf. This variable
avoids this behavior at the database layer for new connections and solves the problem.
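
For the “quicker way”, the change is just this (a sketch of /etc/my.cnf):

[mysqld]
skip-name-resolve

Just keep in mind that, with this option, grants based on hostnames stop matching; use
IP-based grants instead.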

This is a good (Portuguese) post about it: http://wagnerbianchi.com/blog/?p=831

See ya!
Matheus.

MySQL: Difference Between current_date(),
sysdate() and now()
Do you know the difference?

current_date(): only gives you the date.

now(): the datetime when the statement, procedure, etc. started.
sysdate(): the current datetime.

Take a look at now() and sysdate() after executing a sleep of 5
seconds:

SQL> select current_date(),now(),sysdate(),SLEEP(5),now(),sysdate();

"2016-03-24";"2016-03-24 16:00:43";"2016-03-24 16:00:43";"0";"2016-03-24 16:00:43";"2016-03-24 16:00:48"

Matheus.

Getting today’s Errors and Warnings from
MySQL log
Quick one!

# Warnings

cat /var/log/mysqld.log |grep `date +%y%m%d` | grep "\[Warning\]"

# Errors

cat /var/log/mysqld.log |grep `date +%y%m%d` | grep "\[ERROR\]"

And a Bonus!
To get entries from X days ago:

cat /var/log/mysqld.log |grep `date --date="46 days ago" +%y%m%d`

Matheus.

MySQL: Unable to connect to database ‘xxx’
on server ‘xxx’ on port xx with user ‘root’
Quick tip:

# Problem:

MySQL: Unable to connect to database 'xxx' on server 'xxx' on port xx with user 'root' - Access denied for user 'root'@'xxxxx'

Solution:

GRANT ALL PRIVILEGES ON *.* TO root@'xxxxx' IDENTIFIED BY '$PASSWORD' WITH GRANT OPTION;
FLUSH PRIVILEGES;

Have a nice week!


Matheus.

Say Hello to Oracle Apex and to the new
Blog member too!
Hi people!

This is my first post, so I would like to introduce myself and the great tool I will talk
about here at this site. But let’s start with Oracle Application Express, or Apex,
which is probably your main interest here! You can read about my history with Apex
at the end of this article.

Oracle Apex is a development tool that enables you to build applications using only
your web browser, basically with PL/SQL; this familiarity helps to create
departmental applications. Even DBAs can create good web applications easily. Apex
is not a new tool: it was first released in 2006, and previously it was called HTML DB.
After 10 years of development, it offers a modern IDE and, with more dedication, you
can use it to build complex solutions using CSS and JavaScript.

Apex also comes with an entire system to manage your development life cycle. Using
the Team Development feature it is possible to track your project’s progress, from
brainstorming to bug tracking and continuous maintenance.

You can start using and testing Oracle Apex right now, just by accessing
apex.oracle.com and creating your own workspace. Just click Get Started and select
Free Workspace. Remember that it should be used for educational purposes.

By the way, in the next weeks and over the articles to come, I intend to write about how
to create an entire application, describing most of the standard options and explaining
Oracle Apex in detail.

I started using Apex in version 2, when the standard templates produced applications
that looked like the Enterprise Manager of some years ago. The latest version, 5.0.2, was
released in October 2015. Apex 5 has a revolutionary IDE, which is at the same
time powerful, intuitive, clean and easy to use.

Version 2

Version 5

Enjoy and welcome to the Apex World! There is an active community on OTN that
supports most users’ needs and questions through discussion web forums.

Cassiano.

Understanding Apex URL
A basic step in Apex development is to understand the URL syntax.
I keep this note in my favorites folder, to check anytime.

http://apex.oracle.com/ords/f?p=4350:1:220883407765693447

or

f?p=App:Page:Session:Request:Debug:ClearCache:itemNames:itemValues:PrinterFri
endly

where

• App - Application ID or alias.

• Page - Page number or alias.

• Session - Identifies a session ID.

• Request - A keyword that you can use to react in your process workflow. When
you press a button, Request will be set to the button action name; e.g., when you press
Submit or Next Page, your Request variable should have the “submit” value.

• Debug - Set this flag to YES to increase the log level (must be uppercase).

• ClearCache - Specify the numeric page number to clear cached items on a single
page; this flag sets all item values to null. To clear cached items on multiple
pages, use a comma-separated list of page numbers. Clearing a page’s cache
also resets any stateful processes on the page.

• itemNames - Comma-delimited list of item names.

• itemValues - Comma-delimited list of item values.

• PrinterFriendly - Set to YES to use a printer-friendly template.
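
As a worked example (application, page and item names are illustrative), the URL below
opens page 3 of application 100 in the current session, clears the cache of page 3 and
sets two items:

f?p=100:3:&SESSION.::NO:3:P3_DEPTNO,P3_EMPNO:10,5433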

I hope this helps you too.


Cassiano.

javascript:apex.confirm
The simplest way to get your user’s attention is to pop up a JavaScript browser
question. Something like “Do you really want to proceed?”

In the APEX world, just remember: you do not need to reinvent the wheel!
Let’s use the native Apex JavaScript API, which comes with a function named Confirm
that asks the user for confirmation before submitting the page or running some process.

Easy Example

First, select the button you want this behavior on, then set the property Target to URL.
Second, set the target URL to the JavaScript code below, and don’t forget to adapt the
message to your needs.

javascript:apex.confirm('Delete the user &APP_USER.. Really?', 'DELETE');

The second parameter can be used to set the value of REQUEST when the page is
submitted. You can use this value to selectively run some process point, by setting the
property Condition to “when request = value”.

Complex Example

For more complex needs, you can set Apex Items values, before to proceed with page
submit. In this case, the second parameter should be an Object, with all items and
values necessary for your page flow and correct process.

javascript:apex.confirm("Save Department?", { request:"SAVE", set:{


"P1_DEPTNO":10, "P1_EMPNO":5433 } } );

Cassiano.

APEX: Let’s Talk About Charts Attributes
(Inverted Scale)

Hello! If you have played with Apex before, you know how easy it is to build a simple report
to present your data. But sometimes your boss will ask you to build something more
“graphical”, or with a better design. I never thought about color themes or pictures
when I developed my simple reports in SQL*Plus; those colorful themes and design
things are, most of the time, not familiar to DBAs.

Thinking of that, I decided to write this article, always focusing on the standard chart
plugin that comes with Apex by default. Take a look at the chart below.

First of all, to change the chart attributes, you must select, on the left side, the item named
“Attributes”. Only this way will you see all chart properties, in the box on the right side of
the Apex Development IDE.

After that, you should see the chart attributes in the right-side box, like the pic below:

So, what have I changed in the above chart?

Rendering - Apex 5 comes with an HTML5 plugin; prefer this instead of the old Flash charts.
HTML5 charts are mobile-friendly and should run better in modern browsers, which is
the standard right now.

Show Grid - Which lines should be rendered? By default, the chart shows only vertical
lines. You can choose here to display horizontal lines, as well as secondary gray lines
between the black main lines.

Marker - You can change the marker for each series, making the chart clearer.
Several options are available: squares, circles, cross lines and many others. In the
example I used the Diamond marker.

Next challenge? I was asked how to invert the graph: because the data represents
“errors”, the customer asked for lower values to be at the top of the list. My first idea
was to use math and multiply the results by (-1). This way the graph line is inverted as
necessary, but the displayed values are no longer the real ones.

The correct way to do it is by modifying the X axis properties. Let’s take a look at the
available Axis properties.

Title, Prefix/Postfix - Title doesn’t need explanation. Other modify how every value
and hint are rendered in chart canvas.

Label Rotation - to write label in top-down or even with inclination, like below
example.

Label Font - modify color and font face.

Invert Scale! Here is our wonder! Change it to invert your chart scale and achieve what
my customer needs.

Major/Minor Interval - Specify how much space there is between the major (black) and
minor (gray) lines in the chart. Check the results: as you can see, in this example I
inverted the scale on both the X and Y axes.

That's it, folks! In the next articles I'll write more about chart styles and customizations!
Have a nice week.

Cassiano.

Script: Copy Large Table Through DBLink
To understand the situation:

Task: migrate a large database from 11.1.0.6 to a 12c Multi-Tenant Database with
minimum downtime.
To better use the new features, reorganize objects and compress data, I decided to
migrate the data logically (not physically).
The first option was to migrate schema by schema through Data Pump over a database
link. There are no LONG columns.

Problem 1: The database was veeery slow, a perfect match for Bug 7722575
- DATAPUMP VIEW KU$_NTABLE_DATA_VIEW causes poor plan / slow Expdp.
Workaround: none.
Solution: upgrade to 11.2. (No way.)
Other things: yes, I tried changing cursor sharing, the estimate from blocks to
statistics, and everything else documented. It didn't work.

Ok doke! Let's use the traditional exp/imp tools (with some migration area), right?
Problem 2: ORA-12899 on import, related to multibyte vs single-byte charsets.
Solution: https://grepora.com/2015/11/20/charsets-single-byte-vs-multibyte-issue/

Done? Not for all. For some tables, this error happened:

EXP-00006: internal inconsistency error
EXP-00000: Export terminated unsuccessfully

And what does Oracle say? "Solution: Use Datapump!"

Well, well… I realized I was on my own…


Ok, so let's create table as select using a database link. For most tables, OK…
But one of the missing tables, for example, has 700 million rows (350GB of
compressed, non-partitioned data).
Just remember that a DBLink excludes parallel options (the query is always serial).

The solution was to make a MacGyver, this way:


1) Create an aux table (on the source database):

alter session force parallel query parallel 12;

create table SCHEMA_OWNER.AUX_ROWID(ROW_ID, NUM) as
select rowid, rownum from SCHEMA_OWNER.TABLE;

alter session disable parallel query;

* This table will be used to break the table into chunks.

2) Script run_chunck.sql, to run each chunk of data:

DECLARE
  counter number;
  CURSOR cur_data is
    select row_id
      from (select row_id, num
              from SCHEMA_OWNER.AUX_ROWID@SOURCEDB
             order by num)
     where num >= &1
       and num <= &2;
BEGIN
  counter := 0;
  FOR x IN cur_data LOOP
    BEGIN
      counter := counter + 1;
      insert into SCHEMA_OWNER.TABLE
      select * from SCHEMA_OWNER.TABLE@SOURCEDB
       where rowid = x.row_id;
      if counter = 1000 then -- commit every 1000 rows
        commit;
        counter := 0;
      end if;
    EXCEPTION
      when OTHERS then
        dbms_output.put_line('Error ROW_ID: '||x.row_id||sqlerrm);
    END;
  END LOOP;
  COMMIT;
END;
/
exit;

3) Run it from a BAT or SH like below (my example was made for a BAT, with chunks of
50 million rows, and commits every 1k rows, as defined in item 2):

@echo off
set /p db="Target Database.: "
set /p user="Username.......: "
set /p pass="Password..................: "
pause
START sqlplus %user%/%pass%@%db% @run_chunck.sql 1 2060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 2060054 52060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 52060054 102060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 102060054 152060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 152060054 202060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 202060054 252060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 252060054 302060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 302060054 352060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 352060054 402060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 402060054 452060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 452060054 502060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 502060054 552060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 552060054 602060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 602060054 652060053
START sqlplus %user%/%pass%@%db% @run_chunck.sql 652060054 702060053
-- count(*) from table
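
Typing all those boundaries by hand is error-prone. A small sketch that generates the chunk lines for you (assuming clean 50M-row chunks over 700M rows, so the boundaries differ slightly from the hand-made list above):

-- Generates one START line per 50M-row chunk (sizes assumed from the example)
select 'START sqlplus %user%/%pass%@%db% @run_chunck.sql '
       || to_char((level-1)*50000000 + 1) || ' '
       || to_char(least(level*50000000, 700000000)) cmd
  from dual
connect by level <= ceil(700000000/50000000);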

Watching the inserts running…

targetdb> @sess
User: MATHEUS

USERNAME          EVENT                          SQL_ID
----------------  -----------------------------  -------------
MATHEUS_BOESING   SQL*Net message from dblink    6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink    6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink    6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink    6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink    6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink    6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink    6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink    gt3mq5ct7mt6r
MATHEUS_BOESING   SQL*Net message from dblink    6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink    6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink    6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink    6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink    6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink    6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from dblink    6qc1hsnkkfhnw
MATHEUS_BOESING   SQL*Net message from client
MATHEUS_BOESING   SQL*Net message from client
MATHEUS_BOESING   SQL*Net message to client      c7a5tcc3a84k6

After a few (26 hours), the copy was successfully concluded.

Matheus.

Oracle Convert Number into Days, Hours,
Minutes
There’s a little trick…
Today I had to convert a “number” of minutes into hours:minutes format. Something
like convert 570 minutes in format hour:minutes. As you know, 570/60 is “9,5” and
should be “9:30”.

Lets use 86399 seconds (23:59:59) as example:

I began testing “to_char(to_date)” functions:

boesing@db> select to_char(to_date(86399,'sssss'),'hh24:mi:ss') formated from dual;

FORMATED
--------
23:59:59

OK, it works, but using "seconds past midnight" (SSSSS). By the way, it only works
between 0 and 86399:

boesing@db> select to_char(to_date(86400,'sssss'),'hh24:mi:ss') from dual;
select to_char(to_date(86400,'sssss'),'hh24:mi:ss') from dual
*
ERROR at line 1:
ORA-01853: seconds in day must be between 0 and 86399

The problem remains. How to handle, for example, 570 minutes (which should read
9:30)? The best way I solved it was:

--- Seconds in hours:minutes:seconds
--- If you comment out the first "TO_CHAR" line, it can format minutes in hours:minutes too.
--- (vlr = your value in seconds)
select
TO_CHAR(TRUNC(vlr/3600),'FM9900') || ':' ||        -- hours
TO_CHAR(TRUNC(MOD(vlr,3600)/60),'FM00') || ':' ||  -- minutes
TO_CHAR(MOD(vlr,60),'FM00')                        -- seconds
from dual;

It always works:

boesing@db> select
  2  TO_CHAR(TRUNC(86399/3600),'FM9900') || ':' ||        -- hours
  3  TO_CHAR(TRUNC(MOD(86399,3600)/60),'FM00') || ':' ||  -- minutes
  4  TO_CHAR(MOD(86399,60),'FM00')                        -- seconds
  5  from dual;

TO_CHAR(TRUNC
-------------
23:59:59

boesing@db> select
  2  TO_CHAR(TRUNC(570/3600),'FM9900') || ':' ||        -- hours
  3  TO_CHAR(TRUNC(MOD(570,3600)/60),'FM00') || ':' ||  -- minutes
  4  TO_CHAR(MOD(570,60),'FM00')                        -- seconds
  5  from dual;

TO_CHAR(TRUNC
-------------
00:09:30

boesing@db> select
  2  TO_CHAR(TRUNC(MOD(570,3600)/60),'FM00') || ':' ||  -- hours
  3  TO_CHAR(MOD(570,60),'FM00')                        -- minutes
  4  from dual;

TO_CHAR
-------
09:30
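
If you need this often, it may be worth wrapping the expression in a function. A minimal sketch (the function name is mine):

-- Formats a number of seconds as hh:mi:ss
create or replace function fmt_hms(p_seconds number) return varchar2 is
begin
  return TO_CHAR(TRUNC(p_seconds/3600),'FM9900') || ':' ||
         TO_CHAR(TRUNC(MOD(p_seconds,3600)/60),'FM00') || ':' ||
         TO_CHAR(MOD(p_seconds,60),'FM00');
end;
/
-- select fmt_hms(86399) from dual;  -- returns 23:59:59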

Any better way? Leave a comment. Thanks!

Matheus.

Purge SYSAUX Tablespace
Is your SYSAUX bigger than the rest of the database?
It's not uncommon in "old", usually badly administrated databases. Some database
configurations may cause this situation.

The general indication is to review the stats and report retention of objects and the database.

But if you need to clean it now, how to do it?


1) PURGE_STATS: it's recommended to execute it in smaller steps, otherwise the undo
tablespace will be blown up.
2) Oracle sometimes creates new extents for the SYSAUX stats tables in other
tablespaces. They will be moved back to the SYSAUX tablespace.
3) The index rebuild will decrease the size of the indexes. They are usually larger than
the raw data.
4) The indexes are partly function-based, therefore the order in which the index
rebuilds are done is important. Otherwise you would have to re-execute these steps
again and again.
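
Before going further, a quick check of what actually occupies SYSAUX can confirm the stats history is the villain (V$SYSAUX_OCCUPANTS is a standard view; this query is an addition of mine):

-- Top SYSAUX occupants, biggest first
select occupant_name, space_usage_kbytes
  from v$sysaux_occupants
 order by space_usage_kbytes desc;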

Going practical, I used the following:

exec DBMS_STATS.PURGE_STATS(SYSDATE-180);
exec DBMS_STATS.PURGE_STATS(SYSDATE-160);
exec DBMS_STATS.PURGE_STATS(SYSDATE-140);
exec DBMS_STATS.PURGE_STATS(SYSDATE-120);
exec DBMS_STATS.PURGE_STATS(SYSDATE-100);
exec DBMS_STATS.PURGE_STATS(SYSDATE-80);
exec DBMS_STATS.PURGE_STATS(SYSDATE-60);
exec DBMS_STATS.PURGE_STATS(SYSDATE-40);
exec DBMS_STATS.PURGE_STATS(SYSDATE-20);
exec DBMS_STATS.PURGE_STATS(SYSDATE-7);

alter table WRI$_OPTSTAT_TAB_HISTORY move tablespace sysaux;
alter table WRI$_OPTSTAT_IND_HISTORY move tablespace sysaux;
alter table WRI$_OPTSTAT_HISTHEAD_HISTORY move tablespace sysaux;
alter table WRI$_OPTSTAT_HISTGRM_HISTORY move tablespace sysaux;
alter table WRI$_OPTSTAT_AUX_HISTORY move tablespace sysaux;
alter table WRI$_OPTSTAT_OPR move tablespace sysaux;
alter table WRH$_OPTIMIZER_ENV move tablespace sysaux;

alter index SYS.I_WRI$_OPTSTAT_IND_ST rebuild tablespace SYSAUX;
alter index SYS.I_WRI$_OPTSTAT_IND_OBJ#_ST rebuild tablespace SYSAUX;
alter index SYS.I_WRI$_OPTSTAT_HH_ST rebuild tablespace SYSAUX;
alter index SYS.I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST rebuild tablespace SYSAUX;
alter index SYS.I_WRI$_OPTSTAT_TAB_ST rebuild tablespace SYSAUX;
alter index SYS.I_WRI$_OPTSTAT_TAB_OBJ#_ST rebuild tablespace SYSAUX;
alter index SYS.I_WRI$_OPTSTAT_OPR_STIME rebuild tablespace SYSAUX;

Matheus.

Statistics not Being Auto Purged – Splitting
Purge
Hi all!
The post Purge SYSAUX Tablespace, from February 8th, is still highly accessed. So, if
you're interested, here goes another post about it:

Last week I supported a database that was not purging statistics through the MMON
job, because it was timing out. Worse than that, the database had not purged statistics
since 2012 and SYSAUX was huge!
To understand: by default, MMON performs the automatic purge that removes all
history older than:
1) current time – statistics history retention (by default 31 days)
2) time of the most recent analyze in the system – 1
MMON performs the purge of the optimizer stats history automatically, but it has an
internal limit of 5 minutes to perform this job. If the operation takes more than 5
minutes, it is aborted and the stats are not purged.

The problem was very clear in the alert.log, through this entry:

Unexpected error from flashback database MMON timeout action
Errors in file /oracle/diag/rdbms/oracle/trace/oracle_mmon_1234567.trc:
ORA-12751: cpu time or run time policy violation

But it’s happening since 2012! How to address that?


First, let’s take a look on KB:
Bug 18608261 – Slow MMON auto-purge task (Doc ID 18608261.8)
Bug 16903536 – ORA-12751 in MMON during regular AWR purge (Doc ID
16903536.8)
SYSAUX Grows Because Optimizer Stats History is Not Purged (Doc ID
1055547.1)

You can still follow the post Purge SYSAUX Tablespace; it solves the question and
implements the "shrinks". But for a huge database it might take some time… and
occasionally you might have to do it in maintenance windows, in more than one part…
So, this can help you:

Checking how old your stats are:

select DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY from dual;
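
While you're at it, you can also check the current retention and, if it makes sense for you, reduce it so there is less history to keep going forward (both are standard DBMS_STATS calls; the 14 days below is just an example of mine):

select dbms_stats.get_stats_history_retention from dual;
exec dbms_stats.alter_stats_history_retention(14);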

Script to purge day by day (max 2,000 days, ~5 years per execution :P):

set serveroutput on size unlimited
set time on
set timing on
spool purge_stats.log
declare
  vRetentionLimit Date;
  vOldestStat     Date := to_date('13/02/2012 00:00','dd/mm/yyyy hh24:mi'); -- inform oldest stats date
  vStopExecuting  Date := to_date('29/04/2016 08:30','dd/mm/yyyy hh24:mi'); -- inform maintenance window ending
begin
  select to_date(sysdate-dbms_stats.get_stats_history_retention)
    into vRetentionLimit from dual;
  for i in 1..2000 loop
    if sysdate >= vStopExecuting then
      exit;
    end if;
    if vOldestStat <= vRetentionLimit then
      dbms_output.put_line(to_char(sysdate,'dd.mm.yyyy hh24:mi:ss') || ' - Purging from: ' || to_char(vOldestStat,'dd.mm.yyyy hh24:mi:ss'));
      dbms_stats.purge_stats(vOldestStat);
      dbms_output.put_line(to_char(sysdate,'dd.mm.yyyy hh24:mi:ss') || ' - Purged from: ' || to_char(vOldestStat,'dd.mm.yyyy hh24:mi:ss')||chr(13)||chr(10));
    end if;
    vOldestStat := vOldestStat + 1;
  end loop;
end;
/
spool off

This way, the purge can be split into day-by-day windows. Now you can do the moves
and rebuilds described in Purge SYSAUX Tablespace.

Hope it helped you!


Cheers!
Matheus.

Sqlplus: Connect without configure
TNSNAMES
Okay, you may already know this, but it's always useful to remember… If you don't
want to configure your TNSNAMES, you can connect directly using the description of
your database, this way:

sqlplus> conn matheus_boesing@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=mydb.domain.net)(PORT=1531)))(CONNECT_DATA=(service_name=mydb)))
Enter password: ********
Connected.
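
As a side note (not in the original post): the EZConnect syntax achieves the same with much less typing, as long as it is enabled in sqlnet.ora:

sqlplus matheus_boesing@//mydb.domain.net:1531/mydb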

Based on this, I made two scripts to connect using the SID (c.sql) or the service_name
(s.sql) and make my life easier. Here are the scripts:

sqlplus> get c
DEFINE VHOST = &1.
DEFINE VPORT = &2.
DEFINE VSID = &3.
DEFINE VDESC='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=&VHOST)(PORT=&VPORT)))(CONNECT_DATA=(SID=&VSID)(server=dedicated)))'
disconnect
connect matheus_boesing@&&VDESC
set linesize 1000
set sqlprom '&&VSID> '
select instance_name, host_name
  from v$instance;
exec dbms_application_info.SET_MODULE('MATHEUS_BOESING','DBA');
alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS';
UNDEFINE VDESC
UNDEFINE 1
UNDEFINE 2
UNDEFINE 3

sqlplus> get s
DEFINE VHOST = &1.
DEFINE VPORT = &2.
DEFINE VSID = &3.
DEFINE VDESC='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=&VHOST)(PORT=&VPORT)))(CONNECT_DATA=(SERVICE_NAME=&VSID)(server=dedicated)))'
disconnect
connect matheus_boesing@&&VDESC
set linesize 1000
set sqlprom '&&VSID> '
select instance_name, host_name
  from v$instance;
exec dbms_application_info.SET_MODULE('MATHEUS_BOESING','DBA');
alter session set nls_date_format='DD/MM/YYYY HH24:MI:SS';
UNDEFINE VDESC
UNDEFINE 1
UNDEFINE 2
UNDEFINE 3

It can be used like this:

sqlplus> @s mydb.domain.net 1531 mydb
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=mydb.domain.net)(PORT=1531)))(CONNECT_DATA=(SERVICE_NAME=mydb)(server=dedicated)))
Enter password: ********
Connected.

Ok, but, let’s suppose you are working in a cluster and wants to connect directly to the
another instance. I made the script below (ci.sql). It’s not beautiful, but is a lot hopeful:

sqlplus get ci 1 DEFINE VINT = &1. 2 undefine VHOST 3 undefine VSID 4 VARIABLE
VCONN varchar2(100) 5 PRINT ret_val 6 BEGIN 7
SELECT '@c '||host_name||' 1521 '||INSTANCE_NAME 8 INTO :VCONN 9 FROM
gv$instance where INSTANCE_NUMBER=&VINT; 10 END; 11 / 12 set head off; 13
spool auxcon.sql 14 prompt set head on; 15 print :VCONN 16 prompt set head on; 17
spool off; 18* @auxcon sqlplus

As you see, you inform the inst_id you want to connect to. It can be used like:

mydb> @instance
INSTANCE_NAME
------------------------------
mydb_2

mydb> @instances
INST_NUMBER INST_NAME
----------- ---------------------------------------
          1 db2srvr2p.sicredi.net:mydb_1
          2 db1srvr1p.sicredi.net:mydb_2

mydb> @ci 1
@c db2srvr2p.sicredi.net 1521 mydb_1
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options
Enter password: ********
Connected.
mydb_1> @instance
INSTANCE_NAME
------------------------------
mydb_1

That’s right?
The scripts a shared help me a lot every day, and it’s exclusive.
I not founded nothing like this. So, I made.

Matheus.

ASM: Disk Imbalance Query
It can be useful if you work frequently with OEM metrics…

# OEM Query

SELECT file_num,
       MAX(extent_count) max_disk_extents,
       MIN(extent_count) min_disk_extents,
       MAX(extent_count) - MIN(extent_count) disk_extents_imbalance
  FROM (SELECT number_kffxp file_num,
               disk_kffxp disk_num,
               COUNT(xnum_kffxp) extent_count
          FROM x$kffxp
         WHERE group_kffxp = 1
           AND disk_kffxp != 65534
         GROUP BY number_kffxp, disk_kffxp
         ORDER BY number_kffxp, disk_kffxp)
 GROUP BY file_num
HAVING MAX(extent_count) - MIN(extent_count) > 5
 ORDER BY disk_extents_imbalance DESC;

# MatheusDBA Query

select max(free_mb) biggest,
       min(free_mb) lowest,
       avg(free_mb) avg,
       trunc(GREATEST((avg(free_mb)*100/max(free_mb)),(min(free_mb)*100/avg(free_mb))),2)||'%' as balanced,
       trunc(100-(GREATEST((avg(free_mb)*100/max(free_mb)),(min(free_mb)*100/avg(free_mb)))),2)||'%' as imbalanced
  from v$asm_disk
 where group_number in (select group_number
                          from v$asm_diskgroup
                         where name = upper('&DG'));

I made my own query for two reasons:


1) I didn’t have the OEM query in the time i made it.
2) My query measures the inbalance with the avg of the disks (if everydisk would
balanced, how would be the difference), rather than the real/present difference
between the disk with the maximum and the minimum usage…

So, you can choose the one you need…

Matheus.

Rebuild all indexes of a Partitioned Table
Another quick post!

Since you frequently need to rebuild all the indexes of a partitioned table (local and
global indexes), here is a quick script that makes the task a little bit easier:

begin
-- local indexes
for i in (select p.index_owner owner, p.index_name, p.partition_name
            from dba_indexes i, dba_ind_partitions p
           where i.owner='&OWNER'
             and i.table_name='&TABLE'
             and i.partitioned='YES'
             and i.visibility='VISIBLE' -- rebuild only the visible indexes, to get a real effect :)
             and p.index_name=i.index_name
             and p.index_owner=i.owner
           order by 1,2) loop
  execute immediate 'alter index '||i.owner||'.'||i.index_name||' rebuild partition '||i.partition_name||' online parallel 12'; -- parallel 12 solves most of the problems
  execute immediate 'alter index '||i.owner||'.'||i.index_name||' parallel 1'; -- if you don't use parallel indexes in your database; or use the default parallel of the index, or whatever you want...
end loop;
-- global indexes
for i in (select i.owner owner, i.index_name
            from dba_indexes i
           where i.owner='&OWNER'
             and i.table_name='&TABLE'
             and i.partitioned='NO'
             and i.visibility='VISIBLE' -- same comment
           order by 1,2) loop
  execute immediate 'alter index '||i.owner||'.'||i.index_name||' rebuild online parallel 12'; -- same
  execute immediate 'alter index '||i.owner||'.'||i.index_name||' parallel 1'; -- same :)
end loop;
end;
/

I hope this script makes your life easier. Hugs!

Matheus.

Solving Simple Locks Through @lock2s and
@killlocker
Hi guys!
This post shows the simplest and most common kind of object locks and the simplest
way to solve them (killing the locker).
It's so common that I scripted it. Take a look:

greporadb> @lock2s

Inst  SID SERIAL# UserName STATUS   LOGON_TIME          LMODE REQUEST LC_ET TY    ID1     ID2 CTIME LOCKWAIT         EVENT
---- ---- ------- -------- -------- ------------------- ----- ------- ----- -- ------ ------- ----- ---------------- -----------------------------
   1  354   18145 MATHEUS  ACTIVE   17/06/2016 14:25:19 X     NONE     4032 TX 393238  424490   715 00000000DB0DF900 enq: TX - row lock contention
   1  169   25571 GREPORA  ACTIVE   17/06/2016 14:22:48 NONE  X         714 TX 393238  424490   714 00000000DB0D5ED8 enq: TX - row lock contention
   1  252   63517 MATHEUS  INACTIVE 17/06/2016 14:17:49 X     NONE      714 TX 655363 1550347  4195                  SQL*Net message from client
   1  846   65011 GREPORA  ACTIVE   17/06/2016 14:20:18 NONE  X        4075 TX 655363 1550347   715 00000000DB0ECB88 enq: TX - row lock contention
   1  354   18145 GREPORA  ACTIVE   17/06/2016 14:25:19 NONE  S        4032 TX 655363 1550347   715 00000000DB0DF900 enq: TX - row lock contention

5 rows selected.

You can identify the locker by the LMODE column, and all its waiters by the REQUEST
column (any value other than 'NONE'), right below each locker…

So, let’s kill the lockers:

greporadb> @killlocker

'ALTERSYSTEMKILLSESSION'''||SID||','||SERIAL#||'''IMMEDIATE;'
-------------------------------------------------------------
alter system kill session '252,63517' immediate;
alter system kill session '354,18145' immediate;

2 rows selected.

greporadb> alter system kill session '252,63517' immediate;
System altered.

greporadb> alter system kill session '354,18145' immediate;
System altered.

greporadb> @lock2s
no rows selected

Solved!
My magic scripts? Here they go:

get lock2s.sql:

set lines 10000
set trimspool on
col serial# for 999999
col lc_et for 999999
col l1name for a50
col lmode for a6
col username for a25
select /*+ rule */ distinct
       b.inst_id, a.sid, b.serial#, b.username, b.status,
       --b.audsid, --b.module, --b.machine, b.osuser,
       b.logon_time,
       decode(lmode,1,'null',2,'RS',3,'RX',4,'S',5,'SRX',6,'X',0,'NONE',lmode) lmode,
       decode(request,1,'null',2,'RS',3,'RX',4,'S',5,'SRX',6,'X',0,'NONE',request) request,
       b.last_call_et LC_ET, a.type TY, a.id1, a.id2,
       d.name||'.'||c.name l1name, a.ctime, b.lockwait, b.event
  from gv$lock a, gv$session b, sys.obj$ c, sys.user$ d,
       (select a.id1 from gv$lock a where a.request > 0) lock1
 where a.id1 = c.OBJ# (+)
   and a.sid = b.sid
   and c.owner# = d.user# (+)
   and a.inst_id = b.inst_id
   and b.username is not null
   and a.id1 = lock1.id1
 order by id1, id2, lmode desc
/

get killlocker.sql:

select 'alter system kill session '''||sid||','||serial#||''' immediate;'
  from v$session
 where sid in (select BLOCKING_SESSION
                 from v$session
                where BLOCKING_SESSION is not null);

Now you can put on your LinkedIn that you are a JR DBA…


haha

Matheus.

ORA-04091: Table is Mutating,
Trigger/Function may not see it
No!
This is not a super-table nor an x-table (an X-Men joke; it was awful, I know… I'm
sorry).

ORA-04091: Table "TABLE NAME" is Mutating, Trigger/Function may not see it
ORA-06512: at "TRC_INSERT_TABLE", line 14
ORA-04088: error during execution of trigger 'TRC_INSERT_TABLE'

Very interesting, but not hard to understand. The cause is that the trigger (or a
user-defined PL/SQL function referenced in the statement) attempted to look at (or
modify) a table that was in the middle of being modified by the statement that fired it.

In other words: you are trying to read data that you are modifying. That obviously
causes an inconsistency, which is the reason for this error. The data is "mutant". But
the error could be less annoying, right? Oracle and its jokes…

The solution is to rewrite the trigger to not use the table or, in some situations, to use
an autonomous transaction to make it independent. This can be done using the
clause PRAGMA AUTONOMOUS_TRANSACTION.
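
A minimal sketch of the autonomous approach (every name here is mine, just for illustration; keep in mind the autonomous transaction sees only committed data, which must be acceptable for your logic):

create or replace trigger trg_audit_my_table
after insert on my_table
for each row
declare
  pragma autonomous_transaction;  -- runs independently of the firing statement
  v_total number;
begin
  -- no ORA-04091 here: the autonomous transaction sees committed rows only
  select count(*) into v_total from my_table;
  insert into my_table_audit (snap_date, total_rows) values (sysdate, v_total);
  commit;  -- an autonomous transaction must end with commit or rollback
end;
/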

This FAQ can be useful to you: http://www.orafaq.com/wiki/Autonomous_transaction

Matheus.

ORA-12014: table does not contain a
primary key constraint
OK, you are trying to create a materialized view involving a database link and found
an ORA-12014, right?

CREATE MATERIALIZED VIEW &OWNER..MVW_NAME
REFRESH FORCE ON DEMAND AS
SELECT COL1, COL2, COL3 FROM TABLE@REMOTE_DB;
*
ERROR at line 1:
ORA-12014: table 'TABLE' does not contain a primary key constraint

It blew me up some time ago. But it's not complicated to work around; just try:

CREATE MATERIALIZED VIEW &OWNER..MVW_NAME
REFRESH FORCE ON DEMAND AS
select * from (
  SELECT COL1, COL2, COL3 FROM TABLE@REMOTE_DB
);

An alternative is to use an MV log + WITH ROWID on the REMOTE_DB side:

CREATE MATERIALIZED VIEW LOG ON TABLE WITH ROWID;

And

CREATE MATERIALIZED VIEW &OWNER..MVW_NAME
REFRESH FORCE ON DEMAND
WITH ROWID AS
SELECT COL1, COL2, COL3 FROM TABLE@REMOTE_DB;

PS: Make sure the username used in the REMOTE_DB database link has select
privileges on the MV log. On the source DB, issue:

SELECT LOG_TABLE FROM DBA_MVIEW_LOGS
 WHERE LOG_OWNER='OWNER' AND MASTER='TABLE';

This will give you the MV log table name. On the target side, issue:

SELECT * FROM MVW_LOG_NAME@remote_db;

See ya!
Matheus.

ORA-02062: distributed recovery
# Error/Alert

Errors in file /oracle/diag/rdbms/mydb/mydb2/trace/mydb2_reco_26083546.trc:
ORA-02062: distributed recovery received DBID e450df78, expected 0311e884

# Solution

begin
  commit;
  for d in (select local_tran_id from dba_2pc_pending) loop
    dbms_transaction.purge_lost_db_entry(d.local_tran_id);
    commit;
  end loop;
end;
/
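
Afterwards, a quick sanity check should return no rows (DBA_2PC_PENDING is a standard view; this check is an addition of mine):

select local_tran_id, state from dba_2pc_pending;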

Matheus.

Windows: “ORA-12514” After Database
Migration/Moving (Using DNS Alias)
It’s usual to use DNS Aliases pointing to scanlistener. This way, we create an
abstraction/layer bewteen clients/application and the cluster where database is. Some
activities like tierization/consolidation and database moving between clusters
(converting to Pluggable, etc), would be much more transparent.

Buuuut, if after a database migration, with all the services online and listening, your
client is stuck with:

ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Remember you are using DNS to build this layer. Have you tried to flush the DNS
cache?
I faced this problem with a Windows application. The solution:

C:\Users\matheus_boesing> ipconfig /flushdns

Windows IP Configuration
Successfully flushed the DNS Resolver Cache.

Everything worked fine after that.

Matheus.

RS-7445 [Serv MS leaking memory] [It will
be restarted] [] [] [] [] [] [] [] [] [] []
Hello!
Getting this error in the cell alerthistory.log? Don't panic!
Take a look at MOS: Exadata Storage Cell reports error RS-7445 [Serv MS
Leaking Memory] (Doc ID 1954357.1). It's related to Bug – RS-7445 [SERV MS
LEAKING MEMORY].

The issue is a memory leak in the Java executable and affects systems running JDK
7u51 or later. This is relevant for all versions from release 11.2 to 12.1.

What happens is that the MS process consumes a lot of memory (up to 2GB). Normally
MS uses around 1GB, but because of the bug the allocated memory can grow up to
2GB. You can check it as per the example below:

[root@exaserver ~]# ps -feal | grep java
0 S root 16493 14737 0 80 0 -  15317 pipe_w 18:34 pts/0 00:00:00 grep java
0 S root 22310 27043 2 80 0 - 267080 futex_ 18:15 ?     00:00:27 /usr/java/default/bin/java -Xms256m -Xmx512m -XX:-UseLargePages -Djava.library.path=/opt/oracle/cell/cellsrv/lib -Ddisable.checkForUpdate=true -jar /opt/oracle/cell/oc4j/ms/j2ee/home/oc4j.jar -out /opt/oracle/cell/cellsrv/deploy/log/ms.lst -err /opt/oracle/cell/cellsrv/deploy/log/ms.err

Note that 267080 pages * 4096 bytes = ~1043MB (about 1GB). If your number is higher
than this, it indicates the presence of the bug.

In case you want to see the memory in use by the MS processes, it can be checked
with this command from any DB node:

[root@exaserver ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group \
"ps -feal | grep java | grep cellsrv | grep -v bash | awk '{mem=\$10*4096/1048576; print \$12 \" - \" mem \" Mb - \" \$15}'"

This error is ignorable, since MS will restart automatically, resetting the process and
its memory. There is no impact on services; this is just the monitoring process.

Thanks in advance and see you!


Matheus.

kernel.panic_on_oops: New Oracle 12c
Installation Requirement
Hi all,
Do you know what this parameter means when installing 12c?

This parameter controls the kernel’s behaviour when an oops or bug is encountered:

• 0: try to continue operation

• 1: panic immediately.  If the `panic’ sysctl is also non-zero then the machine will
be rebooted.

An OOPS is a deviation from the correct behavior of the Linux kernel, one that
produces a certain error log.
The better-known kernel panic condition results from many kinds of oops, but other
instances of an oops event may allow continued operation with compromised
reliability.

This is recommended on systems where we want the node evicted in case of any
hardware failure or other issue.

How to adjust it as recommended by Oracle?

1. Put an entry in sysctl.conf to make it permanent:

kernel.panic_on_oops = 1

2. Refresh, running the command:

sysctl -p

KB: https://www.kernel.org/doc/Documentation/sysctl/kernel.txt

Matheus.

Tip for the Future: Segmentation fault
because of LD_LIBRARY_PATH
More than once I have forgotten to set LD_LIBRARY_PATH in new environments, and
sometimes I faced awkward errors. The most common is "Segmentation Fault".
Today I lost almost 15 minutes searching about a Segmentation Fault related to
Data Pump on 11.2; then I realized I had forgotten the LD_LIBRARY_PATH again…

Another day, in an upgrade from 11.2.0.3.6 to 11.2.0.4.2, I got stuck in lots of errors
during the upgrade process. Bullshit again: after a few minutes of errors and searching,
I found a post somewhere talking about the variable settings.

So, Matheus from the Future: check if LD_LIBRARY_PATH and the other variables are
set for the right Oracle Home.
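
Something along these lines in the session or profile (the paths here are illustrative, adjust to your Oracle Home):

# illustrative paths, not from the original post
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH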

I expect this post to save me from this same pain in the future.
Thanks.

Matheus.

ORA-02296: cannot enable (string.) – null
values found
Hi all!
Found the error below?

greporadb> alter table TABLE_TEST modify COLUMN_TEST not null;
alter table TABLE_TEST modify COLUMN_TEST not null
*
ERROR at line 1:
ORA-02296: cannot enable (MATHEUSDBA.) - null values found

It happens basically because you have null values in this column. Let's check:

greporadb> SELECT COUNT(*) FROM TABLE_TEST WHERE COLUMN_TEST IS NULL;

  COUNT(*)
----------
        99

Okey dokey!
Now, what can we do?
1) Fix the problem, updating the null values to a real (or dummy) value.
2) Use the NOVALIDATE clause, like:

greporadb> alter table TABLE_TEST modify COLUMN_TEST not null NOVALIDATE;

Table altered.

It's a good practice to set a default value for this column too: there is a chance that
some behavior (even a bug) in the application is causing the null values, and by setting
a default value you avoid "new errors" in the application layer.
On the other hand, if you want to expose where the bug is occurring, maybe it's better
not to set it…
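
For the record, option 1 plus the default could look like this (the dummy value is mine):

update TABLE_TEST set COLUMN_TEST = 'N/A' where COLUMN_TEST is null;
alter table TABLE_TEST modify COLUMN_TEST default 'N/A';
alter table TABLE_TEST modify COLUMN_TEST not null;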

Hope it helped you.


Have a nice day!
Matheus.

(12c) RMAN-07539: insufficient privileges to
create or upgrade the catalog schema
Another "the problem / the fix" post.

# KB:
Upgrade Recovery Catalog fails with RMAN-07539: insufficient privileges (Doc ID
1915561.1)
Unpublished Bug 17465689 – RMAN-6443: ERROR UPGRADING RECOVERY
CATALOG

# Problem

[oracle@databasesrvr dbs]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Tue Jul 21 14:17:09 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
connected to target database: MYDB (not mounted)

RMAN> connect catalog catalog_mydb/catalog_mydb@catalogdb
connected to recovery catalog database
PL/SQL package CATALOG_MYDB.DBMS_RCVCAT version 11.02.00.03 in RCVCAT database is too old

RMAN> upgrade catalog
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-07539: insufficient privileges to create or upgrade the catalog schema

RMAN> exit

# Solution
– Connect to the catalog database with the 12c (local) OH and run dbmsrmansys.sql
(and don't worry about the errors on alter session):

[oracle@databasesrvr dbs]$ sqlplus sys/magicpass@catalogdb as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Tue Jul 21 14:21:02 2015
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SQL> @?/rdbms/admin/dbmsrmansys.sql
alter session set "_ORACLE_SCRIPT" = true
*
ERROR at line 1:
ORA-02248: invalid option for ALTER SESSION
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
Grant succeeded.
Grant succeeded.
Grant succeeded.
Grant succeeded.
Grant succeeded.
Grant succeeded.
Grant succeeded.
Grant succeeded.
alter session set "_ORACLE_SCRIPT" = false
*
ERROR at line 1:
ORA-02248: invalid option for ALTER SESSION

– Then try to upgrade the catalog again:

[oracle@databasesrvr dbs]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Tue Jul 21 14:21:27 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
connected to target database: MYDB (not mounted)

RMAN> connect catalog catalog_mydb/catalog_mydb@catalogdb
connected to recovery catalog database
PL/SQL package CATALOG_MYDB.DBMS_RCVCAT version 11.02.00.03 in RCVCAT database is too old

RMAN> upgrade catalog;
recovery catalog owner is CATALOG_MYDB
enter UPGRADE CATALOG command again to confirm catalog upgrade

RMAN> upgrade catalog;
recovery catalog upgraded to version 12.01.00.02
DBMS_RCVMAN package upgraded to version 12.01.00.02
DBMS_RCVCAT package upgraded to version 12.01.00.02

Matheus.

ORA-27302: failure occurred at:
sskgpcreates
# Error:

dbsrvr1:/home/oracle> srvctl start database -d mydb
PRCR-1079 : Failed to start resource ora.mydb.db
CRS-5017: The resource action "ora.mydb.db start" encountered the following error:
ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device
ORA-27302: failure occurred at: sskgpcreates
. For details refer to "(:CLSN00107:)" in
"/grid/product/11.2.0.4/log/dbsrvr2/agent/crsd/oraagent_oracle/oraagent_oracle.log".
CRS-2674: Start of 'ora.mydb.db' on 'dbsrvr2' failed
CRS-2632: There are no more servers to try to place resource 'ora.mydb.db' on that would satisfy its placement policy

Seems the error is happening on dbsrvr2, right?

The doc below says more about the error and the semaphore calculation:
Database Startup Fails with ORA-27300: OS system dependent
operation:semget failed with status: 28 (Doc ID 949468.1)

Let's make an adjustment here (the last field of kernel.sem is SEMMNI, the number of
semaphore sets):

[root@dbsrvr2 ~]# cat /etc/sysctl.conf | grep sem
kernel.sem = 250 32000 100 142
[root@dbsrvr2 ~]# vi /etc/sysctl.conf
[root@dbsrvr2 ~]# cat /etc/sysctl.conf | grep sem
kernel.sem = 250 32000 100 256
[root@dbsrvr2 ~]# sysctl -p

And try again:

dbsrvr1:/home/oracle> srvctl start database -d mydb
dbsrvr1:/home/oracle>

Well done!

Matheus.

ORA-15081: failed to submit an I/O operation
to a disk
After losing some disks and one RAC instance, the database was stuck with
ORA-15081. A recover was needed. #StayTheTip

# Error

dbsrvr:/home/oracle> sqlplus / as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Fri Jun 29 19:51:37 2015
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.
Total System Global Area 1.3462E+10 bytes
Fixed Size                  2239232 bytes
Variable Size            7214204160 bytes
Database Buffers         6241124352 bytes
Redo Buffers                4489216 bytes
Database mounted.

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-15081: failed to submit an I/O operation to a disk

# Solution

SQL> recover database;
Media recovery complete.

SQL> alter database open;
Database altered.

Be happy with this!

Matheus.

PRCR-1079 CRS-2674 CRS-5017 ORA-27102:
out of memory Linux-x86_64 Error: 28: No
space left on device
# Problem

myserver:/home/oracle> srvctl start database -d mydb
PRCR-1079 : Failed to start resource ora.mydb.db
CRS-5017: The resource action "ora.mydb.db start" encountered the following error:
ORA-27102: out of memory
Linux-x86_64 Error: 28: No space left on device
. For details refer to "(:CLSN00107:)" in
"/grid/product/11.2.0/log/myserver/agent/crsd/oraagent_oracle/oraagent_oracle.log".
CRS-2674: Start of 'ora.mydb.db' on 'myserver' failed
CRS-5017: The resource action "ora.mydb.db start" encountered the following error:
ORA-27102: out of memory
Linux-x86_64 Error: 28: No space left on device
. For details refer to "(:CLSN00107:)" in
"/grid/product/11.2.0/log/myserver2/agent/crsd/oraagent_oracle/oraagent_oracle.log".
CRS-2674: Start of 'ora.mydb.db' on 'myserver2' failed
CRS-2632: There are no more servers to try to place resource 'ora.mydb.db' on that would satisfy its placement policy
myserver:/home/oracle>

# Solution

In /etc/sysctl.conf, adjust as below and then reload sysctl ("sysctl -p" as root):

# Old
# kernel.shmall = 24641536
# New
kernel.shmall = 4294967296

Matheus.

ORA-06512 ORA-48168 ORA-12012 for ADR
Job Raising Errors
A database is raising the stack below in the alert log:

Errors in file /db/u7011/oracle/admin/MYDB/trace/MYDB_j002_22935.trc:
ORA-12012: error on auto execute of job "SYS"."DRA_REEVALUATE_OPEN_FAILURES"
ORA-48168: the ADR sub-system is not initialized
ORA-06512: at "SYS.DBMS_IR", line 522

But database isn’t with ADR enabled:

SQL select * from V$DIAG_INFO where NAME='Diag Enabled'; INST_ID


NAME                                                             VALUE ----------
----------------------------------------------------------------
------------------------------------------------------- 1 Diag
Enabled                                                     FALSE

The note ORA-12012 And ORA-48168: ADR Sub-system Is Not Initialized (Doc ID
1601769.1) indicates maintenance involving a database shutdown… But I don't want
that.

The note Getting Error In Alert Log ORA-51108: Unable To Access Diagnostic
Repository – Retry Command (Doc ID 1586736.1) indicates recreating the Health
Monitor information, through:

SQL> exec dbms_hm.drop_schema;
SQL> exec dbms_hm.create_schema;

But it went wrong:

SQL> exec dbms_hm.drop_schema;
BEGIN dbms_hm.drop_schema; END;
*
ERROR at line 1:
ORA-51026: Diag ADR not enabled, can't run check
ORA-06512: at "SYS.DBMS_HM", line 261
ORA-06512: at line 1

As I said, Diag is not enabled. So, the easiest "workaround" is to simply disable the
job:

SQL> exec dbms_scheduler.disable('DRA_REEVALUATE_OPEN_FAILURES');

PL/SQL procedure successfully completed.

See ya!
Matheus.

x$kglob: ORA-02030: can only select from
fixed tables/views
Hi all!
Selecting from x$kglob with DBA credentials was failing with:

SQL> select count(*) from sys.x$kglob;
ERROR at line 1:
ORA-00942: table or view does not exist

But as SYS it succeeded. OK, let's grant the privilege:

SQL> grant select on sys.x$kglob to dba;
grant select on sys.x$kglob to dba
*
ERROR at line 1:
ORA-02030: can only select from fixed tables/views

What the hell! I couldn't grant it any way!

So the MacGyver solution was:

create or replace view sys.bla_x$kglob as select * from sys.x$kglob;
create or replace public synonym x$kglob for sys.bla_x$kglob;
grant select on sys.bla_x$kglob to dba;

It works. Be happy with that.

Matheus.

RHEL5: Database 10g Installation –
Checking operating system version error
Everything here is old: the RHEL and database versions. But it can be useful if you are
preparing a non-prod lab of your legacy environment, right?

Let’s see the problem:

[oracle@dbsrv database]$ ./runInstaller
Starting Oracle Universal Installer...
Checking installer requirements...
Checking operating system version: must be redhat-3, SuSE-9, redhat-4, UnitedLinux-1.0, asianux-1 or asianux-2
Failed

The “easiest” workaround:

[oracle@dbsrv database]$ ./runInstaller -ignoreSysPrereqs

The "hard way" workaround:

1. Copy the parameter file:
   $ cp database/install/oraparam.ini /tmp
2. Edit the parameter file:
   $ vi /tmp/oraparam.ini
   Change:
   [Certified Versions]
   Linux=redhat-3,SuSE-9,redhat-4,UnitedLinux-1.0,asianux-1,asianux-2
   to:
   [Certified Versions]
   Linux=redhat-3,SuSE-9,redhat-4,UnitedLinux-1.0,asianux-1,asianux-2,redhat-5
3. Run the installer again:
   $ ./runInstaller -paramFile /tmp/oraparam.ini

KB: Requirements For Installing Oracle10gR2 On RHEL 5/OEL 5 (x86_64) [ID 421308.1]

Matheus.

ORA-10456: cannot open standby database;
media recovery session may be in progress
Easy, easy… Take a look:

# Error

db2database2p:/home/oracle> srvctl status database -d database
Instance database1 is running on node db1database1p
Instance database2 is not running on node db2database2p

db2database2p:/home/oracle> srvctl start instance -d database -i database2
PRCR-1013 : Failed to start resource ora.database.db
PRCR-1064 : Failed to start resource ora.database.db on node db2database2p
CRS-5017: The resource action "ora.database.db start" encountered the following error:
ORA-10456: cannot open standby database; media recovery session may be in progress
. For details refer to "(:CLSN00107:)" in "/grid/product/11.2.0/log/db2database2p/agent/crsd/oraagent_oracle/oraagent_oracle.log".
CRS-2674: Start of 'ora.database.db' on 'db2database2p' failed.

# Solution

db2database2p:/home/oracle> sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Thu Jun 4 20:27:46 2015
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to an idle instance.

SQL> startup
ORACLE instance started.
Total System Global Area 1.1224E+11 bytes
Fixed Size                  2234920 bytes
Variable Size            6.1472E+10 bytes
Database Buffers         5.0466E+10 bytes
Redo Buffers              299741184 bytes
Database mounted.
ORA-10456: cannot open standby database; media recovery session may be in progress

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Database altered.

SQL> ALTER DATABASE OPEN READ ONLY;
Database altered.

SQL> exit

Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

db2database2p:/home/oracle> srvctl status database -d database
Instance database1 is running on node db1database1p
Instance database2 is running on node db2database2p

Matheus.

ORA-28004: invalid argument for function
specified in
PASSWORD_VERIFY_FUNCTION
An unexpected error, right?

SQL> CREATE PROFILE TEST_PROF LIMIT
  2    FAILED_LOGIN_ATTEMPTS 5
  3    PASSWORD_LIFE_TIME 180
  4    PASSWORD_GRACE_TIME 30
  5    PASSWORD_REUSE_MAX 15
  6    PASSWORD_VERIFY_FUNCTION fnc_validation;
CREATE PROFILE TEST_PROF LIMIT
*
ERROR at line 1:
ORA-28004: invalid argument for function specified in PASSWORD_VERIFY_FUNCTION FNC_VALIDATION

The fix is simple: the function must take exactly 3 parameters, username varchar2,
password varchar2 and old_password varchar2, and return boolean.
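
A minimal skeleton with the required signature (the validation rule inside is just an example of mine):

create or replace function fnc_validation
  (username varchar2, password varchar2, old_password varchar2)
  return boolean is
begin
  -- example rule: enforce a minimum length (replace with your own checks)
  if length(password) < 8 then
    raise_application_error(-20001, 'Password must have at least 8 characters');
  end if;
  return true;
end;
/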

Matheus.

ORA-27369: job of type EXECUTABLE failed
with exit code: Operation not permitted
When running an external script through the scheduler. The solution:

chown root $ORACLE_HOME/bin/extjob
chmod 4750 $ORACLE_HOME/bin/extjob
chown root $ORACLE_HOME/rdbms/admin/externaljob.ora
chmod 640 $ORACLE_HOME/rdbms/admin/externaljob.ora
chown root $ORACLE_HOME/bin/jssu
chmod 4750 $ORACLE_HOME/bin/jssu

Have a nice week!


Matheus.

Package Body
APEX_030200.WWV_FLOW_HELP Invalid
after Oracle Text Installing
Hi all!
The package body APEX_030200.WWV_FLOW_HELP became invalid after the Oracle
Text installation, with the following errors:

Compilation errors for PACKAGE BODY APEX_030200.WWV_FLOW_HELP

Error: PL/SQL: ORA-00942: table or view does not exist - Line: 189
Error: PL/SQL: SQL Statement ignored - Line: 188
Error: PLS-00201: identifier 'CTX_DDL.DROP_PREFERENCE' must be declared - Line: 191
Error: PL/SQL: Statement ignored - Line: 191
Error: PL/SQL: ORA-00942: table or view does not exist - Line: 197
Error: PL/SQL: SQL Statement ignored - Line: 196
Error: PLS-00201: identifier 'CTX_DDL.DROP_PREFERENCE' must be declared - Line: 199
Error: PL/SQL: Statement ignored - Line: 199
Error: PLS-00201: identifier 'CTX_DDL.CREATE_PREFERENCE' must be declared - Line: 261
Error: PL/SQL: Statement ignored - Line: 261
Error: PLS-00201: identifier 'CTX_DDL.SET_ATTRIBUTE' must be declared - Line: 262
Error: PL/SQL: Statement ignored - Line: 262
Error: PLS-00201: identifier 'CTX_DDL.SET_ATTRIBUTE' must be declared - Line: 265
Error: PL/SQL: Statement ignored - Line: 265
Error: PLS-00201: identifier 'CTX_DDL.CREATE_PREFERENCE' must be declared - Line: 280
Error: PL/SQL: Statement ignored - Line: 280
Error: PLS-00201: identifier 'CTX_DOC.FILTER' must be declared - Line: 292
Error: PL/SQL: Statement ignored - Line: 292
Error: PLS-00201: identifier 'CTX_DOC.FILTER' must be declared - Line: 312
Error: PL/SQL: Statement ignored - Line: 312

It happens basically because the APEX schema has not been granted execute
privileges on CTX_DDL and CTX_DOC. The note below is exactly about it:
The WWV_FLOW_HELP PACKAGE Status is Invalid After Installing Oracle Text
(Doc ID 1335521.1)

The solution is simple:

mydb> grant execute on ctx_ddl to APEX_030200;
Grant succeeded.

mydb> grant execute on ctx_doc to APEX_030200;
Grant succeeded.

mydb> alter package APEX_030200.WWV_FLOW_HELP compile;
Package altered.

mydb> alter package APEX_030200.WWV_FLOW_HELP compile body;
Package body altered.

Have a nice day!


Matheus.

ORA-12012: error on auto execute of job
“SYS”.”BSLN_MAINTAIN_STATS_JOB”
Hi all,
Evaluating a database, I detected it was failing to execute the default scheduler job
SYS.BSLN_MAINTAIN_STATS_JOB. This is an Oracle-defined automatic moving
window baseline statistics computation job that runs only on weekends.
Below, the last error stack in the alert log:

2016-04-24 00:00:10.064000 +00:00
Errors in file /db/u1001/oracle/diag/rdbms/MYDB/MYDB/trace/MYDB_j000_15675.trc:
ORA-12012: error on auto execute of job "SYS"."BSLN_MAINTAIN_STATS_JOB"
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at "DBSNMP.BSLN_INTERNAL", line 2073
ORA-06512: at line 1
2016-04-26 15:54:07.480000 +00:00

And the full tracefile:

Trace file /db/u1001/oracle/diag/rdbms/MYDB/MYDB/trace/MYDB_j000_15675.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/product/11.2
System name:    Linux
Node name:      prddb09
Release:        2.6.18-164.el5
Version:        #1 SMP Tue Aug 18 15:51:48 EDT 2009
Machine:        x86_64
Instance name: MYDB
Redo thread mounted by this instance: 1
Oracle process number: 151
Unix process pid: 15675, image: oracle@prddb09 (J000)

*** 2016-04-24 00:00:10.064
*** SESSION ID:(586.10305) 2016-04-24 00:00:10.064
*** CLIENT ID:() 2016-04-24 00:00:10.064
*** SERVICE NAME:(SYS$USERS) 2016-04-24 00:00:10.064
*** MODULE NAME:(DBMS_SCHEDULER) 2016-04-24 00:00:10.064
*** ACTION NAME:(BSLN_MAINTAIN_STATS_JOB) 2016-04-24 00:00:10.064

ORA-12012: error on auto execute of job "SYS"."BSLN_MAINTAIN_STATS_JOB"
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at "DBSNMP.BSLN_INTERNAL", line 2073
ORA-06512: at line 1

According to the notes below, the recommended action is to recreate the DBSNMP
component:
Bug 10110625 – DBSNMP.BSLN_INTERNAL reports ORA-6502 running
BSLN_MAINTAIN_STATS_JOB (Doc ID 10110625.8)
ORA-12012: Error on Auto Execute of job SYS.BSLN_MAINTAIN_STATS_JOB
(Doc ID 1413756.1)
KEWBMBTA: Maintain BSLN Thresholds Failed, Check For Details. (Doc ID
1490391.1)

However, it’s a process that can affect other mechanisms. So, I found the follow note
with the same error pointing to a privilege issue:
Ora-06508: Pl/Sql: Could Not Find Program Unit Being Called:
“DBSNMP.BSLN_INTERNAL” (Doc ID 1323597.1)

But after granting the privilege as the workaround suggested, the failure remained…

MYDB> select * from dba_tab_privs where table_name='DBMS_JOB';

GRANTEE       OWNER  TABLE_NAME  GRANTOR  PRIVILEGE
------------- ------ ----------- -------- ---------
APEX_030200   SYS    DBMS_JOB    SYS      EXECUTE
SYSMAN        SYS    DBMS_JOB    SYS      EXECUTE
EXFSYS        SYS    DBMS_JOB    SYS      EXECUTE
PUBLIC        SYS    DBMS_JOB    SYS      EXECUTE

SQL> GRANT EXECUTE ON sys.dbms_job to DBSNMP;
Grant succeeded.

MYDB> select * from dba_tab_privs where table_name='DBMS_JOB';

GRANTEE       OWNER  TABLE_NAME  GRANTOR  PRIVILEGE
------------- ------ ----------- -------- ---------
SYSMAN        SYS    DBMS_JOB    SYS      EXECUTE
APEX_030200   SYS    DBMS_JOB    SYS      EXECUTE
EXFSYS        SYS    DBMS_JOB    SYS      EXECUTE
DBSNMP        SYS    DBMS_JOB    SYS      EXECUTE
PUBLIC        SYS    DBMS_JOB    SYS      EXECUTE

SQL> EXEC DBMS_SCHEDULER.RUN_JOB('BSLN_MAINTAIN_STATS_JOB');
BEGIN DBMS_SCHEDULER.RUN_JOB('BSLN_MAINTAIN_STATS_JOB'); END;
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at "DBSNMP.BSLN_INTERNAL", line 2073
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_ISCHED", line 185
ORA-06512: at "SYS.DBMS_SCHEDULER", line 486
ORA-06512: at line 1

After that, while querying DBSNMP, I noticed another instance name active in
DBSNMP.BSLN_BASELINES.
I guess this database was created under another instance name and then renamed
without a DBNID change.

MYDB> select * from DBSNMP.BSLN_BASELINES;

      DBID INSTANCE_NAME    BASELINE_ID BSLN_GUID                        TI A STATUS
---------- ---------------- ----------- -------------------------------- -- - ------
4092499541 MYDB                       0 75B49690F8B4742084990643EEFFB6AA HX Y ACTIVE
4092499541 oldname                    0 415373CD9959B77AAEE1804F06D88B60 NW Y ACTIVE

So, I deleted the row and the job started to run successfully:

MYDB> DELETE FROM DBSNMP.BSLN_BASELINES WHERE INSTANCE_NAME = 'oldname';
1 row deleted.

MYDB> commit;
Commit complete.

SQL> EXEC DBMS_SCHEDULER.RUN_JOB('BSLN_MAINTAIN_STATS_JOB');
PL/SQL procedure successfully completed.

Execution logs:

MYDB> select *
  2    from (select owner, job_name, log_date, status, run_duration
  3            from dba_scheduler_job_run_details a
  4           where job_name = 'BSLN_MAINTAIN_STATS_JOB'
  5           order by log_date)
  6   where rownum < 10;

OWNER  JOB_NAME                 LOG_DATE                         STATUS     RUN_DURATION
------ ------------------------ -------------------------------- ---------- -------------
SYS    BSLN_MAINTAIN_STATS_JOB  03/04/16 00:00:08,484972 +00:00  FAILED     +000 00:00:08
SYS    BSLN_MAINTAIN_STATS_JOB  10/04/16 00:00:07,943598 +00:00  FAILED     +000 00:00:07
SYS    BSLN_MAINTAIN_STATS_JOB  17/04/16 00:00:08,486526 +00:00  FAILED     +000 00:00:08
SYS    BSLN_MAINTAIN_STATS_JOB  24/04/16 00:00:10,067848 +00:00  FAILED     +000 00:00:09
SYS    BSLN_MAINTAIN_STATS_JOB  29/04/16 13:58:10,779201 +00:00  FAILED     +000 00:00:01
SYS    BSLN_MAINTAIN_STATS_JOB  29/04/16 14:01:04,162900 +00:00  SUCCEEDED  +000 00:00:00

I hope it helps you too!

Matheus.

Materialized View with DBLink: ORA-00600:
internal error code, arguments: [kkzuasid]
Hello guys!
Not being able to refresh your materialized view because of this error?

bamdb> exec dbms_mview.refresh('PROD_ORABAM.MVIEW_TEST','C');
BEGIN dbms_mview.refresh('PROD_ORABAM.MVIEW_TEST','C'); END;
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [kkzuasid], [2], [0], [1], [], [], [], [], [], [], [], []
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2809
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 3025
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2994
ORA-06512: at line 1

The bad news is there is no workaround (I usually prefer a workaround for this kind of
thing; it's quicker and less complicated).
But the good news is there is a patch for it: Patch 17705023: ORA-600
[KKZUASID] ON MV REFRESH.

This error is related to a defect when refreshing a materialized view that uses Query
Rewrite in RDBMS 11.2.0.4, and is fixed in 12.2 (Bug 17705023: ORA-600
[KKZUASID] ON MV REFRESH).
You can find more info in MOS Bug 17705023 – ORA-600 [kkzuasid] on MV refresh
(Doc ID 17705023.8).

In my situation, as per the documentation, I applied the patch and solved the situation
as quickly as possible. But reviewing the case to write this post, especially regarding
the Query Rewrite feature, I see you could maybe recreate your materialized view with
the NOREWRITE hint OR set the parameter QUERY_REWRITE_ENABLED to false
and have a shot. Maybe an undocumented workaround?

If you try this, please add your experience as a comment!

After applying the patch, of course:

bamdb> exec dbms_mview.refresh('PROD_ORABAM.MVIEW_TEST','C');
PL/SQL procedure successfully completed.

Bye bye, see you next Wednesday!

Matheus.

OUI: RHEL Permission Denied error
Another quick tip, this time about running the OUI:

# Error:

[oracle@dbsrv database]$ ./runInstaller
Starting Oracle Universal Installer...
Checking installer requirements...
All installer requirements met.
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-06-25_06-37-23PM. Please wait ...
Error in CreateOUIProcess(): 13: Permission denied

# Solution:

mount -t ext3 -o remount,defaults /tmp

Ok doke?

Matheus.

ORA-19751: could not create the change
tracking file
Let’s make it simple to solve the problem:

# Error:

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-19751: could not create the change tracking file
ORA-19750: change tracking file: '+DGDATA/mydb/changetracking/ctf.470.859997781'
ORA-17502: ksfdcre:1 Failed to create file +DGDATA/mydb/changetracking/ctf.470.859997781
ORA-17501: logical block size 4294967295 is invalid
ORA-15001: diskgroup "DGDATA" does not exist or is not mounted
ORA-17503: ksfdopn:2 Failed to open file +DGDATA/mydb/changetracking/ctf.470.859997781
ORA-15001: diskgroup "DGDATA" does not exist or is not mounted
ORA-15001: diskgroup "DGDATA" does not exist or is not mounted

# Solution:

SQL> alter database disable BLOCK CHANGE TRACKING;
Database altered.

SQL> alter database open;
Database altered.

Then, after everything is OK, you fix the situation by recreating the BCTF:

ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '+DGDATANEW';

MTFBWU!

Matheus.

ORA-01548: active rollback segment found,
terminate
# Problem

SQL> drop tablespace UNDOTBS1;
drop tablespace UNDOTBS1
*
ERROR at line 1:
ORA-01548: active rollback segment '_SYSSMU10_1251904955$' found, terminate dropping tablespace

SQL> drop rollback segment "_SYSSMU3_1251904955$";
Rollback segment dropped.

SQL> drop tablespace UNDOTBS1;
drop tablespace UNDOTBS1
*
ERROR at line 1:
ORA-01548: active rollback segment '_SYSSMU10_1251904955$' found, terminate dropping tablespace

# Solution

CREATE ROLLBACK SEGMENT rb1 STORAGE(INITIAL 1M NEXT 1M MINEXTENTS 20) tablespace UNDOTBS5;
CREATE ROLLBACK SEGMENT rb2 STORAGE(INITIAL 1M NEXT 1M MINEXTENTS 20) tablespace UNDOTBS5;
CREATE ROLLBACK SEGMENT rb3 STORAGE(INITIAL 1M NEXT 1M MINEXTENTS 20) tablespace UNDOTBS5;

# Why?
UNDO_MANAGEMENT is set to 'MANUAL', right? To drop any undo tablespace, the
default undo must have at least one segment.

Matheus.

RMAN-06059: expected archived log not
found
# Error

RMAN-03002: failure of backup command at 06/28/2015 14:56:30
RMAN-06059: expected archived log not found, loss of archived log compromises recoverability
ORA-19625: error identifying file +DGFRA/corpdb/archivelog/2015_06_27/thread_1_seq_20198.1192.883524615
ORA-17503: ksfdopn:2 Failed to open file +DGFRA/corpdb/archivelog/2015_06_27/thread_1_seq_20198.1192.883524615
ORA-15012: ASM file '+DGFRA/corpdb/archivelog/2015_06_27/thread_1_seq_20198.1192.883524615' does not exist

# Solution
First of all, you need to know which files exist and which don't:

RMAN> CROSSCHECK ARCHIVELOG ALL;

Then clear the missing ones and run another backup:

RMAN> DELETE EXPIRED ARCHIVELOG ALL;

It's highly recommended to take a full backup after that, to ensure you have a
recoverable state.

Matheus.

ORA-29760: instance_number parameter not
specified
I felt stupid when I lost a few minutes trying to understand this error:

SQL> startup pfile=init_corpdb.ora
ORA-29760: instance_number parameter not specified

Do you belive the solution was simply to set a number in ORACLE_SID?


Take a look:

dbsrvr> echo $ORACLE_SID
corpdb
dbsrvr> export ORACLE_SID=corpdb_1
dbsrvr> sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Sun Jun 28 00:18:05 2015
Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup pfile=init_corpdb.ora
ORACLE instance started.

Total System Global Area 4275781632 bytes
Fixed Size                  2220200 bytes
Variable Size             889196376 bytes
Database Buffers         3372220416 bytes
Redo Buffers               12144640 bytes
Database mounted.
Database opened.
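For context: this pfile evidently came from a RAC database, where parameters like instance_number are set per instance name, so the ORACLE_SID must match one of those names. Illustrative pfile entries (names assumed, not from the original post):

corpdb_1.instance_number=1
corpdb_2.instance_number=2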

I hope to never waste time with this again…

Matheus.

301
ORA-00600: internal error code, arguments:
[ktecgetsh-inc], [2]
Alert showing:

Errors in file /oracle/diag/rdbms/mydb/mydb/trace/mydb_smon_6024.trc


(incident=9666): ORA-00600: internal error code, arguments: [ktecgetsh-inc], [2], [], [],
[], [], [], [], [], [], [], []

This is a non-fatal internal error that happened while SMON was dropping a temporary segment. My SMON encountered 9 out of a maximum of 100 non-fatal internal errors.

So, set event 10061, which disables SMON's temporary segment cleanup:

alter system set event="10061 trace name context forever, level 10" scope=spfile;

Restart Database and:

alter system reset event scope=both sid='*';
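A simple verification after the reset, if you want to be sure the event is gone (not part of the original post):

SQL> show parameter event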

See ya!
Matheus.

302
ORA-10456: cannot open standby database;
media recovery session may be in progress
A Data Guard quick tip!

# Error

SQL> ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE OPEN READ ONLY
*
ERROR at line 1:
ORA-10456: cannot open standby database; media recovery session may be in progress

# Solution

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Database altered.

SQL> ALTER DATABASE OPEN READ ONLY;
Database altered.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
Database altered.
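To confirm that real-time apply resumed, you can check v$managed_standby again (a quick check; with real-time apply the MRP0 status is typically APPLYING_LOG):

SQL> select process, status from v$managed_standby where process like 'MRP%';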

See ya!
Matheus.

303
ORA-01994: GRANT failed: password file
missing or disabled
Quick tip:

[oracle@server ~]$ orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=$password entries=$num_users force=y

KB:
http://docs.oracle.com/cd/B28359_01/server.111/b28310/dba007.htm#ADMIN12478

# OBS 1
“If you are running multiple instances of Oracle Database using Oracle Real
Application Clusters, the environment variable for each instance should point to the
same password file.”

# OBS 2
REMOTE_LOGIN_PASSWORDFILE needs to be set to EXCLUSIVE to grant SYSDBA to users.

# OBS 3
Users can be checked in V$PWFILE_USERS.

# OBS 4
The entries parameter defines the maximum number of users allowed in the password file (i.e., with SYSDBA grants).
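For example, once the file is recreated, the failing GRANT should work and can be verified (the user name below is hypothetical):

SQL> grant sysdba to scott;
SQL> select username, sysdba, sysoper from v$pwfile_users;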

Matheus.

304
11.2.0.1: ORA-00600: internal error code,
arguments: [7005], [0], [], [], [], [], [], [], [], [],
[], []
# Error

Errors in file /oracle/diag/rdbms/mydb/mydb_1/trace/mydb_1_ora_972.trc (incident=195818):
ORA-00600: internal error code, arguments: [7005], [0], [], [], [], [], [], [], [], [], [], []
Incident details in: /oracle/diag/rdbms/mydb/mydb_1/incident/incdir_195818/mydb_1_ora_972_i195818.trc

# Cause
The query causing this error uses a CONTAINS clause on an alphanumerical column using bind variables. This is a perfect match with note ORA-600 [7005] on a Select Query Using Contains Clause (Doc ID 1176276.1), referencing the unpublished Bug 8770557 ORA-600 [7005] While Running Text Queries.
The symptoms include these two key factors:
– presence of a CONTAINS clause
– use of bind variables
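To illustrate the symptoms, here is a hypothetical query shape that matches both factors (table and column names are made up):

SQL> variable term varchar2(100)
SQL> exec :term := 'oracle'
SQL> select doc_id from my_docs where contains(doc_text, :term) > 0;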

# Solution
Apply the 11.2.0.2 patchset or higher, where this issue is fixed, or apply the one-off Patch 8770557 if available for your version/platform.

See ya!
Matheus.

305
ORA-00845: MEMORY_TARGET not
supported on this system (RHEL)
# Solution:
Make sure that /dev/shm is mounted. You can check this by typing df -k at the
command prompt. It will look something like this:

Filesystem            Size  Used Avail Use% Mounted on
shmfs                   1G  512M  512M  50% /dev/shm

If you don't find it, you will have to mount it manually as the root user. The size should be greater than MEMORY_TARGET or MEMORY_MAX_TARGET.

For example, if the MEMORY_TARGET is less than 2 GB, you can mount it like this:

#root: mount -t tmpfs shmfs -o size=2048m /dev/shm

I recommend you add an entry in /etc/fstab so that the mount remains persistent even after a reboot.
To do that, add the following entry in /etc/fstab:

shmfs /dev/shm tmpfs size=2048m 0 0
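With the fstab entry in place, you can apply the new size without a reboot (assuming the entry above):

#root: mount -o remount /dev/shm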

Helped?
Share this post!

Matheus.

306
ORA-01153: an incompatible media recovery
is active
When trying to start, or increase the parallelism of, the managed recovery process (MRP) on a Data Guard standby:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION
*
ERROR at line 1:
ORA-01153: an incompatible media recovery is active

It simply happens because there is already a recovery process running. Let's check:

SQL> select PROCESS, CLIENT_PROCESS, THREAD#, SEQUENCE#, BLOCK#
     from v$managed_standby
     where process = 'MRP0' or client_process = 'LGWR';

PROCESS   CLIENT_P    THREAD#  SEQUENCE#     BLOCK#
--------- -------- ---------- ---------- ----------
MRP0      N/A               1         26          0

If you want to change it, just stop it first and then start it with the clauses you want:

SQL> alter database recover managed standby database cancel;
Database altered.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
Database altered.

See ya!
Matheus.

307
308
Table of contents

Disclaimer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
About the Blog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
GrepOra.com in 2016… . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
GrepOra Team . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
About the Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
ADRCI Retention Policy and Ad-Hoc Purge Script for all Bases . . . . . . . . . . . . . . . . 12
High CPU usage by LMS and Node Evictions: Solved by Setting “_high_priority_processes” . 14
Application Looping Until Lock a Row with NOWAIT Clause . . . . . . . . . . . . . . . . . . 15
VKTM Hang – High CPU Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Oracle TPS: Evaluating Transaction per Second . . . . . . . . . . . . . . . . . . . . . . . . 20
Leap Second and Impact for Oracle Database . . . . . . . . . . . . . . . . . . . . . . . . . 22
HANGANALYZE Part 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
HANGANALYZE Part 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
ASHDUMP for Instance Crash/Hang ‘Post Mortem’ Analysis . . . . . . . . . . . . . . . . . 30
SYSTEMSTATE DUMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Upgrade your JDBC and JDK before Upgrade your Database to 12c Version! . . . . . . . . 36
Unplug/Plug PDB between different Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Database Migration/Move with RMAN: Are you sure nothing is missing? . . . . . . . . . . . 42
Vulnerability: Decrypting Oracle DBlink password (<11.2.0.2) . . . . . . . . . . . . . . . . . 43
Ordering Sequences over RAC – Hang on ‘DFS lock handle’ . . . . . . . . . . . . . . . . . 45
Infiniband Error: Cable is present on Port “X” but it is polling for peer port . . . . . . . . . . . 49
After adding Datafile in Primary the MRP Stopped in Physical Standby (Dataguard) . . . . . 52
Lock by DBLink – How to locate the remote session? . . . . . . . . . . . . . . . . . . . . . 55
Listing Sessions Connected by SID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
VPD: “row cache objects” latch contention . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Compilation Impact: Object Dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
RAC on AIX: Network Best Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Grepping Entries from Alert.log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Grepping Alert by Day . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Searching entries on Alert.log: A Better Way . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Alter (Fix) Oracle Database Date . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Explain ORA-XXX on SQL*Plus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Oracle Database Licensing: First Step! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Getting Oracle Parameters: Hidden and Unhidden . . . . . . . . . . . . . . . . . . . . . . . 71
Application Hangs: resmgr:become active . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
How to Prevent Automatic Database Startup . . . . . . . . . . . . . . . . . . . . . . . . . . 74
TFA – Collecting Period . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
ARCH Process Killed – Fix Without Restart . . . . . . . . . . . . . . . . . . . . . . . . . . 76
DBA_TAB_MODIFICATIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Oracle – Lost user’s password? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Scheduler Job by Node (RAC Database) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
ORA-01950 On Insert but not on Create Table . . . . . . . . . . . . . . . . . . . . . . . . . 81
Adding datafile hang on “enq: TT – contention” . . . . . . . . . . . . . . . . . . . . . . . . 82
Quick guide about SRVCTL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Saving database space with ASSM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Flashback- Part 1 (Flashback Drop) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Flashback – Part 2 (Flashback Query) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Flashback- Part 3 (Flashback Versions Query) . . . . . . . . . . . . . . . . . . . . . . . . . 92
Flashback – Part 4 (Flashback Transaction Query) . . . . . . . . . . . . . . . . . . . . . . 94
Flashback – Part 5 (Flashback Table) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Flashback – Part 6 (Flashback Database) . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Flashback – Part 7 (Flashback Data Archive) . . . . . . . . . . . . . . . . . . . . . . . . . 103
Alert Log: “Private Strand Flush Not Complete” on Logfile Switch . . . . . . . . . . . . . . 106
TPS Chart on PL/SQL Developer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
PL/SQL Developer Taking 100% of Database CPU . . . . . . . . . . . . . . . . . . . . . . 110
Installing and Configuring ASMLIb on Oracle Linux 7 . . . . . . . . . . . . . . . . . . . . . 112
ASM: Adding disk “_DROPPED%” FORCE . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Adding ASM Disks on RHEL Cluster with Failgroups . . . . . . . . . . . . . . . . . . . . . 118
Manually Mounting ACFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Kludge: Mounting ACFS Thought Shellscript . . . . . . . . . . . . . . . . . . . . . . . . . 122
CRSCTL: AUTO_START of Cluster Services (ACFS) . . . . . . . . . . . . . . . . . . . . 123
Changing ACFS mount point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
ORA-27054: NFS file system where the file is created or resides is not mounted with correct
options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Error: Starting ACFS in RHEL 6 (Can’t exec “/usr/bin/lsb_release”) . . . . . . . . . . . . . 126
Create SPFILE on ASM from PFILE on Filesystem . . . . . . . . . . . . . . . . . . . . . . 127
ORA-15186: ASMLIB error function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Charsets: Single-Byte vs Multibyte Encoding Scheme Issue . . . . . . . . . . . . . . . . . 129
Date Format in RMAN: Making better! . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Creating RMAN Backup Catalog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
EXP Missing Tables on 11.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
DDBoost: sbtbackup: dd_rman_connect_to_backup_host failed . . . . . . . . . . . . . . . 134
EXP-00079 – Data Protected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Backup Not Backuped Archivelogs and Delete Input . . . . . . . . . . . . . . . . . . . . . 136
How to list all my Oracle Products from Database park? . . . . . . . . . . . . . . . . . . . 137
How to list all my Oracle Products from Application park? . . . . . . . . . . . . . . . . . . 139
Service Detected on OEM but not in SRVCTL or SERVICE_NAMES Parameter? . . . . . . 141
Manipulating JMS queues using WLST Script . . . . . . . . . . . . . . . . . . . . . . . . 142
Decrypting WebLogic Datasource Password . . . . . . . . . . . . . . . . . . . . . . . . . 143
Setting up a weblogic Result cache on Oracle Service Bus . . . . . . . . . . . . . . . . . . 145
Avoiding lost messages in JDBC Persistent Store, when processing Global Transactions with
JMS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Reset the AdminServer Password in WebLogic 11g and 12c . . . . . . . . . . . . . . . . . 151
Configuration Coherence Server Out-of-Process in OSB 12C . . . . . . . . . . . . . . . . 152
WebLogic AdminServer Startup stopped at “Initializing self-tuning thread pool” . . . . . . . 155
Weblogic starting with the operating system . . . . . . . . . . . . . . . . . . . . . . . . . 156
WLST easeSyntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Quickly change Weblogic to Production Mode . . . . . . . . . . . . . . . . . . . . . . . . 158
Weblogic in debug mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Apache 2.4 with port redirect to Weblogic 12c . . . . . . . . . . . . . . . . . . . . . . . . 160
Oracle Licensing: Weblogic Tip! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Weblogic JRF files in /tmp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Bypass user and password in the Oracle BAM ICommand . . . . . . . . . . . . . . . . . . . 164
Error BAD_CERTIFICATE in Node Manager . . . . . . . . . . . . . . . . . . . . . . . . . 167
Weblogic – Wrong listening address . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Enabling GoldenGate 12c DDL replication . . . . . . . . . . . . . . . . . . . . . . . . . . 171
How to find GoldenGate recovery time . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
GoldenGate Integrated Capture and Integrated Replicat Healthcheck Script . . . . . . . . . 173
GoldenGate: RAC One Node Archivelog Missing . . . . . . . . . . . . . . . . . . . . . . . 174
GoldenGate GGSCI> shortcut tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Skipping database transaction on Oracle GoldenGate . . . . . . . . . . . . . . . . . . . . 176
GoldenGate: Replicate data from SQLServer to TERADATA – Part 1 . . . . . . . . . . . . 177
GoldenGate: Replicate data from SQLServer to TERADATA – Part 2 . . . . . . . . . . . . 178
Access denied on GoldenGate Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
GoldenGate – exclude Oracle database thread# . . . . . . . . . . . . . . . . . . . . . . . 181
GoldenGate 12.1.2 not firing insert trigger . . . . . . . . . . . . . . . . . . . . . . . . . . 182
How to sincronize high data volume with GoldenGate . . . . . . . . . . . . . . . . . . . . 183
How to sincronize high data volume with GoldenGate – Part II . . . . . . . . . . . . . . . . 184
Failure unregister integrated extract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
Auto start GoldenGate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
Quick find ODI repository version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
ODI 10gR1: Connection to Repository Failed after Database Descriptor Change . . . . . . 189
Failure to create ODI schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
ODI – Import(ANT) Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
GoldenGate supplemental log check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
OGG-01224 Oracle GoldenGate Command Interpreter for Oracle: Bad file number . . . . . 196
ERROR OGG-02636 when creating an integrated extract in Goldengate 12C on a Pluggable
database 12C . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
OGG-0352: Invalid character for character set UTF-8 was found while performing character
validation of source column . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
OGG-01934 Datastore repair failed, OGG-01931Datastore ‘dirbdb’ cannot be opened . . . 199
ERROR OGG-00446 – Unable to lock file “*” (error 11, Resource temporarily unavailable). . 200
Error OGG-00354 Invalid BEFORE column:(column_name) . . . . . . . . . . . . . . . . . 201
Export/Backup directly to Zip using MKNOD! . . . . . . . . . . . . . . . . . . . . . . . . . 202
“tail -f” vs “tail -F”: Do you know the difference? . . . . . . . . . . . . . . . . . . . . . . . . 203
GB vs GiB | MB vs MiB | KB vs KiB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
RHEL: Figuring out CPUs, Cores and Hyper-Threading . . . . . . . . . . . . . . . . . . . 207
Shellscript: Using eval and SQLPlus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
Linux Basic: Creating a Filesystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Linux: Resizing Swap Online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
nc -l – Starting up a fake service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Is My Linux Server Physical or Virtual? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
VMWare: Adding Shared Disks for Clustered Oracle Database . . . . . . . . . . . . . . . 215
VMware: Recognize Memory Addition Online . . . . . . . . . . . . . . . . . . . . . . . . . 221
Recursive string change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Kludge to keep Database Alive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
RHEL7: rc.local service not starting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Mount Diretory from Remote RHEL7 Server (NFS) . . . . . . . . . . . . . . . . . . . . . . 226
AIX: NTP Service Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Flush DNS Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Flush DNS on Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
RHEL: Adding User/Group to SSH and SUDOERS file . . . . . . . . . . . . . . . . . . . . 230
Oracle Database: Compression Algorithms for Cloud Backup . . . . . . . . . . . . . . . . 231
Oracle Database Backup to Cloud: KBHS-01602: backup piece 13p0jski_1_1 is not encrypted . 234
RMAN Raise ORA-19913 ORA-28365 On Restore from Cloud Backup . . . . . . . . . . . 236
UnknownHostException: Could not authenticate to Oracle Database Cloud Backup Module 238
Cloud Computing Assessment – Free . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Monitoring MySQL with Nagios – Quick View . . . . . . . . . . . . . . . . . . . . . . . . . 241
Optimize fragmented tables in MySQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
MySQL Network Connections on ‘TIME_WAIT’ . . . . . . . . . . . . . . . . . . . . . . . . 245
MySQL: Difference Between current_date(), sysdate() and now() . . . . . . . . . . . . . . 246
Getting today’s Errors and Warnings from MySQL log . . . . . . . . . . . . . . . . . . . . 247
MySQL: Unable to connect to database ‘xxx’ on server ‘xxx’ on port xx with user ‘root’ . . . 248
Say Hello to Oracle Apex and for the new Blog member too! . . . . . . . . . . . . . . . . . 249
Understanding Apex URL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
javascript:apex.confirm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
APEX: Let’s Talk About Charts Attributes (Inverted Scale) . . . . . . . . . . . . . . . . . . 253
Script: Copy Large Table Through DBLink . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Oracle Convert Number into Days, Hours, Minutes . . . . . . . . . . . . . . . . . . . . . . 260
Purge SYSAUX Tablespace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
Statistics not Being Auto Purged – Splitting Purge . . . . . . . . . . . . . . . . . . . . . . 264
Sqlplus: Connect without configure TNSNAMES . . . . . . . . . . . . . . . . . . . . . . . 266
ASM: Disk Imbalance Query . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Rebuild all indexes of a Partioned Table . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
Solving Simple Locks Through @lock2s and @killlocker . . . . . . . . . . . . . . . . . . . 270
ORA-04091: Table is Mutating, Trigger/Function may not see it . . . . . . . . . . . . . . . 272
ORA-12014: table does not contain a primary key constraint . . . . . . . . . . . . . . . . . 273
ORA-02062: distributed recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Windows: “ORA-12514” After Database Migration/Moving (Using DNS Alias) . . . . . . . . 275
RS-7445 [Serv MS leaking memory] [It will be restarted] [] [] [] [] [] [] [] [] [] [] . . . . . . . . . 276
kernel.panic_on_oops: New Oracle 12c Installation Requirement . . . . . . . . . . . . . . 277
Tip for the Future: Segmentation fault because of LD_LIBRARY_PATH . . . . . . . . . . . 278
ORA-02296: cannot enable (string.) – null values found . . . . . . . . . . . . . . . . . . . 279
(12c) RMAN-07539: insufficient privileges to create or upgrade the catalog schema . . . . . 280
ORA-27302: failure occurred at: sskgpcreates . . . . . . . . . . . . . . . . . . . . . . . . 282
ORA-15081: failed to submit an I/O operation to a disk . . . . . . . . . . . . . . . . . . . . 283
PRCR-1079 CRS-2674 CRS-5017 ORA-27102: out of memory Linux-x86_64 Error: 28: No
space left on device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
ORA-06512 ORA-48168 ORA-12012 for ADR Job Raising Errors . . . . . . . . . . . . . . 285
x$kglob: ORA-02030: can only select from fixed tables/views . . . . . . . . . . . . . . . . 286
RHEL5: Database 10g Installation – Checking operating system version error . . . . . . . . 287
ORA-10456: cannot open standby database; media recovery session may be in progress . 288
ORA-28004: invalid argument for function specified in PASSWORD_VERIFY_FUNCTION . 290
ORA-27369: job of type EXECUTABLE failed with exit code: Operation not permitted . . . . 291
Package Body APEX_030200.WWV_FLOW_HELP Invalid after Oracle Text Installing . . . 292
ORA-12012: error on auto execute of job “SYS”.”BSLN_MAINTAIN_STATS_JOB” . . . . . 293
Materialized View with DBLink: ORA-00600: internal error code, arguments: [kkzuasid] . . . 296
OUI: RHEL Permission Denied error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
ORA-19751: could not create the change tracking file . . . . . . . . . . . . . . . . . . . . 298
ORA-01548: active rollback segment found, terminate . . . . . . . . . . . . . . . . . . . . 299
RMAN-06059: expected archived log not found . . . . . . . . . . . . . . . . . . . . . . . . 300
ORA-29760: instance_number parameter not specified . . . . . . . . . . . . . . . . . . . 301
ORA-00600: internal error code, arguments: [ktecgetsh-inc], [2] . . . . . . . . . . . . . . . 302
ORA-10456: cannot open standby database; media recovery session may be in progress . 303
ORA-01994: GRANT failed: password file missing or disabled . . . . . . . . . . . . . . . . 304
11.2.0.1: ORA-00600: internal error code, arguments: [7005], [0], [], [], [], [], [], [], [], [], [], [] . 305
ORA-00845: MEMORY_TARGET not supported on this system (RHEL) . . . . . . . . . . . 306
ORA-01153: an incompatible media recovery is active . . . . . . . . . . . . . . . . . . . . 307
GrepOra Team

GrepOra is a blog between friends to learn and share about our daily experiences and challenges with Oracle technologies.
Someday we realized we were always having conversations about Oracle stuff. So we decided to make a “grep” in these conversations to filter those related to Oracle and share them.
And this is the origin of the name “GrepOra.com” (or |GREP ORA).

Welcome to our book, our blog and our world: have some fun and view/review/learn/laugh with some of our struggles and personal notes for ourselves in the future.

This is a commemorative book for GrepOra's 2 years!

Use it to view, learn and review some curiosities, tips and useful stuff for daily challenges and struggles when working with Oracle tech. But mostly to have fun! This is a book written by Oracle geeks for Oracle geeks.

| GREP ORA
http://grepora.wordpress.com
