
Data Pump

Data Pump replaces EXP and IMP (exp and imp were not removed in 10g). It provides high-speed,
parallel, bulk data and metadata movement of Oracle database contents across platforms and
database versions. Oracle states that Data Pump is about 60% faster than Export on data retrieval
and 20% to 30% faster than Import on data loading. If a Data Pump job is started and fails for any
reason before it has finished, it can be restarted at a later time.
The command-line clients are expdp and impdp, respectively. Data Pump can write to files as well
as transfer data directly over the network. Clients can detach from and reattach to a running job,
and jobs can be monitored through several views such as dba_datapump_jobs. Data Pump's public API
is the DBMS_DATAPUMP package.
Two access methods are supported: Direct Path (DP) and External Tables (ET). DP is the fastest
but does not support intra-partition parallelism; ET does, and may therefore be chosen to load
or unload a very large table or partition. Data Pump export and import are not compatible with
the old exp and imp, so if you need to import into a pre-10g database it is best to stick with the
original export utility.
Data Pump is useful for migrating large databases.
To use Data Pump across schemas you must have the EXP_FULL_DATABASE or IMP_FULL_DATABASE
role, depending on the operation to be performed. These roles allow you to expdp and impdp across
ownership for items such as grants, resource plans, and schema definitions, and to remap, rename,
or redistribute database objects or structures. By definition, a DIRECTORY object gives a user
access to a file system location that the user would not normally have access to.
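For example, granting these roles to a hypothetical administrative user named dba_ops would look like this (adjust the user name to your environment):

GRANT EXP_FULL_DATABASE TO dba_ops;
GRANT IMP_FULL_DATABASE TO dba_ops;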
Data Pump runs only on the server side. You may initiate the export from a client, but the job(s)
and the files will run inside an Oracle server. No dump files (expdat.dmp) or log files will be
created on your local machine.
Oracle creates dump and log files through DIRECTORY objects. So before you can use Data
Pump you must create a DIRECTORY object. Example:

CREATE DIRECTORY datapump AS 'C:\user\datafile\datapump';

Then, as you use Data Pump you can reference this DIRECTORY as a parameter for export
where you would like the dump or log files to end up.
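For instance, after granting access on the directory to a user (scott here is only an illustrative name), the directory can be referenced on the expdp command line:

GRANT READ, WRITE ON DIRECTORY datapump TO scott;

expdp scott/password directory=datapump dumpfile=scott.dmp logfile=scott.log schemas=scott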

The default Data Pump directory object is DATA_PUMP_DIR, which on this installation points to

'C:\oracle\product\10.2.0\admin\die\dpdump\'

Advantages of Data Pump


1. We can perform exports in parallel, writing to multiple files on different disks (specify
PARALLEL=2 and the two directory names with the file specifications, e.g.
DUMPFILE=ddir1:file1.dmp,ddir2:file2.dmp).
2. Ability to attach to and detach from a job and monitor the job progress remotely.
3. More options to filter metadata objects, e.g. EXCLUDE and INCLUDE.
4. The ESTIMATE_ONLY option can be used to estimate disk space requirements before performing the
job (see the sketch after this list).
5. Data can be exported from a remote database by using a database link.
6. An explicit DB version can be specified, so only supported object types are exported.
7. During impdp, we can change the target file names, schema, and tablespace (REMAP_DATAFILE,
REMAP_SCHEMA, REMAP_TABLESPACE).
8. Option to filter data rows during impdp as well. With traditional exp/imp the filter option exists only
in exp, but here the filter option is available in both expdp and impdp.
9. Data can be imported from one DB to another without writing a dump file, using the NETWORK_LINK
parameter.
10. The data access method is decided automatically. In traditional exp/imp we specify a value for the
DIRECT parameter; here, where direct path cannot be used, the external tables method is used.
11. Job status can be queried directly from the data dictionary (for example, dba_datapump_jobs,
dba_datapump_sessions, etc.)
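As a quick illustration of item 4, an estimate-only run might look like this. This is a minimal sketch assuming the datapump directory created earlier and a scott schema; no dump file is written:

expdp system/password directory=datapump schemas=scott estimate_only=y estimate=blocks
logfile=scott_estimate.log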

Some Parameters

Equivalent exp & expdp parameters:


The following parameters are equivalent between exp and expdp:

exp Command expdp Command

FEEDBACK STATUS

FILE DUMPFILE

LOG LOGFILE

OWNER SCHEMAS

TTS_FULL_CHECK TRANSPORT_FULL_CHECK

New parameters in expdp Utility

ATTACH Attach the client session to existing data pump jobs


CONTENT Specify what to export (ALL, DATA_ONLY, METADATA_ONLY)
DIRECTORY Location to write the dump file and log file.
ESTIMATE Show how much disk space each table in the export job consumes.
ESTIMATE_ONLY Estimates the space but does not perform the export.
EXCLUDE List of objects to be excluded.
INCLUDE List of objects to be included.
JOB_NAME Name of the export job.
KEEP_MASTER Specify Y to keep (not drop) the master table after the export.
NETWORK_LINK Specify a dblink to export from a remote database.
NOLOGFILE Specify Y if you do not want to create a log file.
PARALLEL Specify the maximum number of threads for the export job.
VERSION DB objects that are incompatible with the specified version will not be exported.
ENCRYPTION_PASSWORD If a table column is encrypted and no password is specified, it is written
as clear text in the dump file set. We can define any string as the password for this parameter.
COMPRESSION Specifies whether to compress metadata before writing to the dump file set.
The default is METADATA_ONLY. There are two values (METADATA_ONLY, NONE); use NONE to
disable compression during the expdp.
SAMPLE Allows you to specify a percentage of data to be sampled and unloaded from the
source database. The sample percent indicates the probability that a block of rows will be
selected as part of the sample.
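To illustrate JOB_NAME and ATTACH, a named export can be started and then re-attached to from another session. This is a minimal sketch; the job name scott_exp and the file names are purely illustrative:

expdp system/password directory=datapump dumpfile=scott_%U.dmp logfile=scott_exp.log
schemas=scott job_name=scott_exp parallel=2

expdp system/password attach=scott_exp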

Equivalent imp & impdp parameters


The following parameters are equivalent between imp and impdp:
imp Command impdp Command
DATAFILES TRANSPORT_DATAFILES
DESTROY REUSE_DATAFILES
FEEDBACK STATUS
FILE DUMPFILE
FROMUSER SCHEMAS, REMAP_SCHEMA
IGNORE TABLE_EXISTS_ACTION(SKIP,APPEND,TRUNCATE,REPLACE)
INDEXFILE, SHOW SQLFILE
LOG LOGFILE
TOUSER REMAP_SCHEMA

New parameters in impdp Utility


FLASHBACK_SCN Performs import operation that is consistent with the SCN specified from the source
database. Valid only when NETWORK_LINK parameter is used.
FLASHBACK_TIME Similar to FLASHBACK_SCN, but Oracle finds the SCN closest to the time specified.
NETWORK_LINK Performs the import directly from a source database using the database link name specified in
the parameter. The dump file will not be created on the server when we use this parameter. To get a
consistent export from the source database, we can use the FLASHBACK_SCN or FLASHBACK_TIME
parameters. These two parameters are only valid when the NETWORK_LINK parameter is used.
REMAP_DATAFILE Changes name of the source DB data file to a different name in the target.
REMAP_SCHEMA Loads objects to a different target schema name.
REMAP_TABLESPACE Changes name of the source tablespace to a different name in the target.
TRANSFORM We can specify that the storage clause should not be generated in the DDL for import. This
is useful if the storage characteristics of the source and target database are different. The valid values
are SEGMENT_ATTRIBUTES, STORAGE. STORAGE removes the storage clause from the CREATE
statement DDL, whereas SEGMENT_ATTRIBUTES removes physical attributes, tablespace, logging, and
storage attributes.
TRANSFORM = name:boolean_value[:object_type], where boolean_value is Y or N.
For instance, TRANSFORM=storage:N:table
ENCRYPTION_PASSWORD It is required on an import operation if an encryption password was specified
on the export operation.
CONTENT, INCLUDE, and EXCLUDE are the same as in the expdp utility.
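As a sketch of a network-mode import (assuming a database link named source_db_link already exists and points at the source instance; the link and schema names are illustrative), no dump file is written at all:

impdp system/password directory=dumplocation logfile=netimp.log network_link=source_db_link
schemas=scott remap_schema=scott:training table_exists_action=replace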

Some Examples:

Scenario 1 Export the whole ORCL database.


expdp userid=system/password@ORCL dumpfile=expfulldp.dmp
logfile=expfulldp.log full=y directory=dumplocation

Scenario 2 Export the scott schema from ORCL and import into the DEST
database.
expdp userid=system/password@ORCL dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log directory=dumplocation schemas=scott
impdp userid=system/password@DEST dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log directory=dumplocation

Another example: during import, exclude some objects (sequences, views, packages, clusters, and
the LOAD_EXT table). Load the objects that came from the RES tablespace into the USERS
tablespace in the target database.
impdp userid=system/password@DEST dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log directory=dumplocation table_exists_action=replace
remap_tablespace=res:users
exclude=sequence,view,package,cluster,table:"in('LOAD_EXT')"

Scenario 3 Clone a User


In the past, when a DBA needed to create a new user with the same
structure as an existing one (all objects, tablespace quotas, synonyms, grants, system
privileges, etc.), it was a very painful experience. Now it can all be done very
easily using Data Pump. As an example, say you want to create the
user "Z" exactly like the user "A". To achieve this goal, all you need to
do is first export the schema "A" definition and then import it again, telling
Data Pump to change the schema "A" to the new schema "Z" using
the "remap_schema" parameter available with impdp.

expdp user/password schemas=A directory=datapump dumpfile=Schema_A.dmp


[optional: content=metadata_only]
impdp user/password remap_schema=A:Z directory=datapump dumpfile=
Schema_A.dmp
And your new user Z is now created just like your existing user A. That easy!

Scenario 4 Create a Metadata File


You can generate a SQL file from an existing export file. As an example, I
am going to expdp the FRAUDGUARD schema and then use the impdp
command with the SQLFILE option to generate a SQL file containing all the objects
that I already exported:
expdp system schemas=fraudguard content=metadata_only directory=EXPORTPATH
dumpfile=metadata_24112010.dmp
impdp system directory=EXPORTPATH dumpfile=metadata_24112010.dmp
sqlfile=metadata_24112010.sql

Scenario 5 Export the part_emp table from the scott schema at the ORCL instance and
import it into the DEST instance.
expdp userid=system/password@ORCL logfile=tableexpdb.log
directory=dumplocation tables=scott.part_emp dumpfile=tableexpdb.dmp
impdp userid=system/password@DEST dumpfile=tableexpdb.dmp
logfile=tabimpdb.log directory=dumplocation table_exists_action=REPLACE

Scenario 6 Create smaller Copies of PROD


This is a very common task for a DBA: you have to create a copy of
your database (for development or test purposes) but your destination server
doesn't have enough space for a full copy of it!

This can be easily solved with Data Pump. For this example, let's say that you
only have space for 70% of your production database. To know how to
proceed, we need to decide whether the copy will contain metadata only (no
data/rows) or whether it will include the data as well. Let's see how to do it each way:

a) Metadata Only
First do a full export of your source database.
expdp user/password content=metadata_only full=y directory=datapump
dumpfile=metadata_24112010.dmp

Then, let's import the metadata and tell Data Pump to reduce the size of
extents to 70%. You can do this using the "transform" parameter available with
"impdp"; it represents the percentage multiplier that will be used to alter
extent allocations and datafile sizes.
impdp user/password transform=pctspace:70 directory=datapump
dumpfile=metadata_24112010.dmp

b) Metadata and data


First, do a full export of your source database using the export parameter
"sample". This parameter specifies a percentage of the data rows to be sampled
and unloaded from your source database; in this case, let's use 70%.
expdp user/password sample=70 full=y directory=datapump
dumpfile=expdp_70_24112010.dmp

Then, just as in the previous example, all you need to do is import it, telling
Data Pump to reduce the size of extents to 70%, and that's it!
impdp user/password transform=pctspace:70 directory=datapump
dumpfile=expdp_70_24112010.dmp

Scenario 7 Export only specific partitions of the part_emp table from the scott schema at
ORCL and import them into the DEST database.
expdp userid=system/password@ORCL dumpfile=partexpdb.dmp
logfile=partexpdb.log directory=dumplocation
tables=scott.part_emp:part10,scott.part_emp:part20

If we want to overwrite the exported data in the target database, then we first need to
delete the rows of part_emp with deptno in (10,20).
scott@DEST> delete part_emp where deptno in (10,20);
scott@DEST> commit;
impdp userid=system/password@DEST dumpfile=partexpdb.dmp logfile=tabimpdb.log
directory=dumplocation table_exists_action=append

----------------------------------------------

expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tables_part.dmp


TABLES=sh.sales:sales_Q1_2000,sh.sales:sales_Q2_2000

----------------------------------------------

Scenario 8 Export only tables (no code) in the scott schema at ORCL and import them
into the DEST database.
expdp userid=system/password@ORCL dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log directory=dumplocation include=table schemas=scott
impdp userid=system/password@DEST dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log directory=dumplocation table_exists_action=replace

Scenario 9 Export only rows belonging to departments 10 and 20 in the emp and
dept tables from the ORCL database. Import the dump file into the DEST database.
While importing, load only deptno 10 into the target database.
expdp userid=system/password@ORCL dumpfile=data_filter_expdb.dmp
logfile=data_filter_expdb.log directory=dumplocation content=data_only
schemas=scott include=table:"in('EMP','DEPT')" query="where deptno in(10,20)"

impdp userid=system/password@DEST dumpfile=data_filter_expdb.dmp


logfile=data_filter_impdb.log directory=dumplocation schemas=scott
query="where deptno = 10" table_exists_action=APPEND
Scenario 10 Export the scott schema from the ORCL database and split the dump
file into 50M pieces. Import the dump file into the DEST database.
Expdp parfile content:
userid=system/password@ORCL
logfile=schemaexp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
filesize=50M
schemas=scott
include=table

As per the above expdp parfile, the schemaexp_split_01.dmp file will
be created first. Once that file reaches 50MB, the next file, called
schemaexp_split_02.dmp, will be created. Say the total dump size is
500MB; then it creates 10 dump files, each 50MB in size.
Impdp parfile content:
userid=system/password@DEST
logfile=schemaimp_split.log
directory=dumplocation
dumpfile=schemaexp_split_%U.dmp
table_exists_action=replace
remap_tablespace=res:users
exclude=grant

Scenario 11 Export the scott schema from the ORCL database and split the dump file into four files. Import
the dump file into the DEST database.
Expdp command:
expdp userid=system/password@ORCL logfile=schemaexp_split.log
directory=dumplocation dumpfile=schemaexp_split_%U.dmp parallel=4
schemas=scott include=table

As per the above command, four files will be created initially:
schemaexp_split_01.dmp, schemaexp_split_02.dmp, schemaexp_split_03.dmp, and
schemaexp_split_04.dmp. Notice that every occurrence of the substitution
variable is incremented each time. Since there is no FILESIZE parameter, no
more files will be created.
Impdp command:
impdp userid=system/password@DEST logfile=schemaimp_split.log
directory=dumplocation dumpfile=schemaexp_split_%U.dmp
table_exists_action=replace
remap_tablespace=res:users exclude=grant

Scenario 12 Export the scott schema from the ORCL database and split the dump file into three file sets.
The dump files will be stored in three different locations. This method is
especially useful if you do not have enough space in one file system to
perform the complete expdp job. After the export is successful, import the dump
files into the DEST database.
Expdp command:
expdp userid=system/password@ORCL logfile=schemaexp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
filesize=50M schemas=scott include=table

As per the above expdp command, the dump files are placed in three
different locations. Say the entire expdp dump size is 1500MB. Then
it creates 30 dump files (each 50MB in size) and places 10 files in
each file system.
Impdp command:
impdp userid=system/password@DEST logfile=schemaimp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
table_exists_action=replace

Scenario 13 Expdp the scott schema in ORCL and impdp the dump file into the training schema in the DEST database.
expdp userid=scott/tiger@ORCL logfile=networkexp1.log directory=dumplocation
dumpfile=networkexp1.dmp schemas=scott include=table

impdp userid=system/password@DEST logfile=networkimp1.log


directory=dumplocation dumpfile=networkexp1.dmp table_exists_action=replace
remap_schema=scott:training

More Examples
To export only a few specific objects--say, function LIST_DIRECTORY and
procedure DB_MAINTENANCE_DAILY--you could use
expdp ananda/iclaim directory=DPDATA1 dumpfile=expprocs.dmp
include=PROCEDURE:\"=\'DB_MAINTENANCE_DAILY\'\",FUNCTION:\"=\'LIST_DIRECTORY\
'\"

This dump file serves as a backup of the sources. You can even use it to
create DDL scripts to be used later. A special parameter called SQLFILE
allows the creation of the DDL script file.
The following command creates a file named procs.sql in the directory specified by
DPDATA1, containing the scripts of the objects inside the export dump file.
This approach helps you re-create the sources quickly in another schema.
impdp ananda/iclaim directory=DPDATA1 dumpfile=expprocs.dmp sqlfile=procs.sql

The OWNER parameter of exp has been replaced by the SCHEMAS parameter which is used to
specify the schemas to be exported. The following is an example of the schema export and
import syntax:
expdp scott/tiger schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp
logfile=expdpSCOTT.log
impdp scott/tiger schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp
logfile=impdpSCOTT.log

The REMAP_TABLESPACE option in the impdp command allows you to move the objects from one
tablespace to another.
impdp system SCHEMAS=SCOTT directory=EXPORTPATH DUMPFILE=SCOTT.dmp
LOGFILE=imp.log REMAP_TABLESPACE=FGUARD_DATA:FG_DATA

You can also use several REMAP_TABLESPACE clauses in the impdp command:
impdp system SCHEMAS=SCOTT directory=EXPORTPATH DUMPFILE=SCOTT.dmp
LOGFILE=imp.log REMAP_TABLESPACE=FGUARD_DATA:FG_DATA
remap_tablespace=FGUARD_INDX:FG_INDX

The FULL parameter indicates that a complete database export is required. The following is an
example of the full database export and import syntax:
expdp system/password full=Y directory=TEST_DIR dumpfile=DB10G.dmp
logfile=expdpDB10G.log
impdp system/password full=Y directory=TEST_DIR dumpfile=DB10G.dmp
logfile=impdpDB10G.log

Data pump performance can be improved by using the PARALLEL parameter. This should be
used in conjunction with the "%U" wildcard in the DUMPFILE parameter to allow multiple
dumpfiles to be created or read:
expdp scott/tiger schemas=SCOTT directory=TEST_DIR parallel=4
dumpfile=SCOTT_%U.dmp logfile=expdpSCOTT.log
Each thread creates a separate dumpfile, so the parameter dumpfile should have as many entries
as the degree of parallelism.
Note how the dumpfile parameter has a wild card %U, which indicates the files will be created as
needed and the format will be SCOTT_nn.dmp, where nn starts at 01 and goes up as needed.
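A matching parallel import would read the same wildcarded file set (a sketch using the same illustrative names):

impdp scott/tiger schemas=SCOTT directory=TEST_DIR parallel=4
dumpfile=SCOTT_%U.dmp logfile=impdpSCOTT.log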

The INCLUDE and EXCLUDE parameters can be used to limit the export/import to specific
objects. When the INCLUDE parameter is used, only those objects specified by it will be
included in the export. When the EXCLUDE parameter is used all objects except those specified
by it will be included in the export:
expdp scott/tiger schemas=SCOTT include=TABLE:\"IN (\'EMP\',
\'DEPT\')\" directory=TEST_DIR dumpfile=SCOTT.dmp
logfile=expdpSCOTT.log
expdp scott/tiger schemas=SCOTT exclude=TABLE:\"= \'BONUS\'\"
directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log

Export/Import a few tables:


expdp scott/tiger tables=EMP,DEPT directory=TEST_DIR
dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log
impdp scott/tiger tables=EMP,DEPT directory=TEST_DIR
dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log

Monitoring Export:
While Data Pump Export is running, pressing Control-C will stop the display of the messages on
the screen, but not the export process itself. Instead, it will display the Data Pump Export prompt
as shown below. The process is now said to be in "interactive" mode:

Export>
This approach allows several commands to be entered on that Data Pump Export job. To find a
summary, use the STATUS command at the prompt:

Export> status
Job: CASES_EXPORT
Operation: EXPORT
Mode: TABLE
State: EXECUTING
Degree: 1
Job Error Count: 0
Dump file: /u02/dpdata1/expCASES.dmp
bytes written = 2048

Worker 1 Status:
State: EXECUTING
Object Schema: DWOWNER
Object Name: CASES
Object Type: TABLE_EXPORT/TBL_TABLE_DATA/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 1
Completed Rows: 4687818

Remember, this is merely the status display. The export is working in the background. To
continue to see the messages on the screen, use the command CONTINUE_CLIENT from the
Export prompt.
While Data Pump jobs are running, you can pause them by issuing STOP_JOB on the Data
Pump Export or Data Pump Import prompts and then restart them with START_JOB.
This functionality comes in handy when you run out of space and want to make corrections
before continuing.
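For instance, a running export can be paused and later resumed like this (a minimal sketch; the job name SYS_EXPORT_SCHEMA_01 is illustrative):

Export> stop_job=immediate

expdp system/password attach=SYS_EXPORT_SCHEMA_01

Export> start_job
Export> continue_client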

A simple way to gain insight into the status of a Data Pump job is to look into a few views
maintained within the Oracle instance in which the Data Pump job is running.
These views are DBA_DATAPUMP_JOBS, DBA_DATAPUMP_SESSIONS, and
V$SESSION_LONGOPS. They are critical in the monitoring of your export jobs: with them you can
attach to a Data Pump job and modify the execution of that job.

DBA_DATAPUMP_JOBS
This view will show the active Data Pump jobs, their state, degree of parallelism, and the number
of sessions attached.

select * from dba_datapump_jobs;

OWNER_NAME JOB_NAME             OPERATION JOB_MODE STATE     DEGREE ATTACHED_SESSIONS
---------- -------------------- --------- -------- --------- ------ -----------------
JKOOP      SYS_EXPORT_FULL_01   EXPORT    FULL     EXECUTING      1                 1
JKOOP      SYS_EXPORT_SCHEMA_01 EXPORT    SCHEMA   EXECUTING      1                 1

DBA_DATAPUMP_SESSIONS
This view gives the SADDR that assists in determining why a Data Pump session may be
having problems. Join it to the V$SESSION view for further information.

SELECT * FROM DBA_DATAPUMP_SESSIONS;

OWNER_NAME JOB_NAME                       SADDR
---------- ------------------------------ --------
JKOOPMANN  SYS_EXPORT_FULL_01             225BDEDC
JKOOPMANN  SYS_EXPORT_SCHEMA_01           225B2B7C

V$SESSION_LONGOPS
This view helps determine how well a Data Pump export is doing. It also shows any
operation that is taking a long time to execute, and basically gives you a progress
indicator through the MESSAGE column.

select username, opname, target_desc, sofar, totalwork, message
from v$session_longops;

USERNAME OPNAME               TARGET_DES SOFAR TOTALWORK MESSAGE
-------- -------------------- ---------- ----- --------- --------------------------------------------------
JKOOP    SYS_EXPORT_FULL_01   EXPORT       132       132 SYS_EXPORT_FULL_01:EXPORT:132 out of 132 MB done
JKOOP    SYS_EXPORT_FULL_01   EXPORT        90       132 SYS_EXPORT_FULL_01:EXPORT:90 out of 132 MB done
JKOOP    SYS_EXPORT_SCHEMA_01 EXPORT        17        17 SYS_EXPORT_SCHEMA_01:EXPORT:17 out of 17 MB done
JKOOP    SYS_EXPORT_SCHEMA_01 EXPORT        19        19 SYS_EXPORT_SCHEMA_01:EXPORT:19 out of 19 MB done
SQL> select sid, serial#, sofar, totalwork, dp.owner_name, dp.state, dp.job_mode
     from gv$session_longops sl, gv$datapump_job dp
     where sl.opname = dp.job_name and sofar != totalwork;

SID SERIAL# SOFAR TOTALWORK OWNER_NAME STATE     JOB_MODE
--- ------- ----- --------- ---------- --------- --------
122   64151  1703      2574 SYSTEM     EXECUTING FULL

You can monitor an Oracle import in several ways:

Monitor at the OS level - Do a "ps -ef" on the Data Pump process and watch it consume CPU. You can
also monitor the Data Pump log file with the "tail -f" command, watching the progress of the
import in real time. If you watch an import log, be sure to include the feedback=1000 parameter
so that the import displays a dot every 1,000 rows of inserts.

Monitor with the Data Pump views - The main views for monitoring import jobs are
dba_datapump_jobs and dba_datapump_sessions.

Monitor with longops - You can query v$session_longops to see the progress of Data Pump,
querying the sofar and totalwork columns.

select sid, serial# from v$session s, dba_datapump_sessions d where s.saddr = d.saddr;

select sid, serial#, sofar, totalwork from v$session_longops;

select x.job_name,b.state,b.job_mode,b.degree
, x.owner_name,z.sql_text, p.message
, p.totalwork, p.sofar
, round((p.sofar/p.totalwork)*100,2) done
, p.time_remaining
from dba_datapump_jobs b
left join dba_datapump_sessions x on (x.job_name = b.job_name)
left join v$session y on (y.saddr = x.saddr)
left join v$sql z on (y.sql_id = z.sql_id)
left join v$session_longops p ON (p.sql_id = y.sql_id)
WHERE y.module='Data Pump Worker'
AND p.time_remaining > 0;

The following are the major new features that provide this increased performance, as well as enhanced
ease of use:

- The ability to specify the maximum number of threads of active execution operating on behalf of
  the Data Pump job. This enables you to adjust resource consumption versus elapsed time. See
  PARALLEL for information about using this parameter in export and in import. (This feature is
  available only in the Enterprise Edition of Oracle Database 10g.)
- The ability to restart Data Pump jobs. See START_JOB for information about restarting export
  and import jobs.
- The ability to detach from and reattach to long-running jobs without affecting the job itself. This
  allows DBAs and other operations personnel to monitor jobs from multiple locations. The Data
  Pump Export and Import utilities can be attached to only one job at a time; however, you can
  have multiple clients or jobs running at one time. (If you are using the Data Pump API, the
  restriction on attaching to only one job at a time does not apply.) You can also have multiple
  clients attached to the same job. See ATTACH for information about using this parameter in
  export and in import.
- Support for export and import operations over the network, in which the source of each operation
  is a remote instance. See NETWORK_LINK for information about using this parameter in export
  and in import.
- The ability, in an import job, to change the name of the source datafile to a different name in all
  DDL statements where the source datafile is referenced. See REMAP_DATAFILE.
- Enhanced support for remapping tablespaces during an import operation. See
  REMAP_TABLESPACE.
- Support for filtering the metadata that is exported and imported, based upon objects and object
  types. For information about filtering metadata during an export or import operation, see
  INCLUDE and EXCLUDE.
- Support for an interactive-command mode that allows monitoring of and interaction with ongoing
  jobs. See Commands Available in Export's Interactive-Command Mode and Commands Available
  in Import's Interactive-Command Mode.
- The ability to estimate how much space an export job would consume, without actually
  performing the export. See ESTIMATE_ONLY.
- The ability to specify the version of database objects to be moved. In export jobs, VERSION
  applies to the version of the database objects to be exported. See VERSION for more information
  about using this parameter in export.

  In import jobs, VERSION applies only to operations over the network. This means that VERSION
  applies to the version of database objects to be extracted from the source database. See
  VERSION for more information about using this parameter in import.

- Most Data Pump export and import operations occur on the Oracle database server. (This
  contrasts with original export and import, which were primarily client-based.) See Default
  Locations for Dump, Log, and SQL Files for information about some of the implications of server-
  based operations.

The DBA_DATAPUMP_JOBS and USER_DATAPUMP_JOBS Views

The DBA_DATAPUMP_JOBS and USER_DATAPUMP_JOBS views identify all active Data Pump jobs,
regardless of their state, on an instance (or on all instances for Real Application Clusters). They also
show all Data Pump master tables not currently associated with an active job. You can use the job
information to attach to an active job. Once you are attached to the job, you can stop it, change its
parallelism, or monitor its progress. You can use the master table information to restart a stopped job or
to remove any master tables that are no longer needed.

Table 1-1 describes the columns in the DBA_DATAPUMP_JOBS view and the USER_DATAPUMP_JOBS
view.

Table 1-1 DBA_DATAPUMP_JOBS View and USER_DATAPUMP_JOBS View


Column Datatype Description

OWNER_NAME VARCHAR2(30) User who initiated the job (valid only for
DBA_DATAPUMP_JOBS)

JOB_NAME VARCHAR2(30) User-supplied name for the job (or the default name generated
by the server)

OPERATION VARCHAR2(30) Type of job

JOB_MODE VARCHAR2(30) Mode of job

STATE VARCHAR2(30) State of the job

DEGREE NUMBER Number of worker processes performing the operation

ATTACHED_SESSIONS NUMBER Number of sessions attached to the job

Note:

The information returned is obtained from dynamic performance views associated with the
executing jobs and from the database schema information concerning the master tables. A query
on these views can return multiple rows for a single Data Pump job (same owner and job name)
if the query is executed while the job is transitioning between an Executing state and the Not
Running state.

The DBA_DATAPUMP_SESSIONS View

The DBA_DATAPUMP_SESSIONS view identifies the user sessions that are attached to a job. The
information in this view is useful for determining why a stopped operation has not gone away.

Table 1-2 describes the columns in the DBA_DATAPUMP_SESSIONS view.

Table 1-2 The DBA_DATAPUMP_SESSIONS View

Column Datatype Description

OWNER_NAME VARCHAR2(30) User who initiated the job.

JOB_NAME VARCHAR2(30) User-supplied name for the job (or the default name generated by the
server).

SADDR RAW(4) (RAW(8) on 64-bit systems) Address of the session attached to the job. Can be used
with the V$SESSION view.

Monitoring the Progress of Executing Jobs

Data Pump operations that transfer table data (export and import) maintain an entry in the
V$SESSION_LONGOPS dynamic performance view indicating the job progress (in megabytes of table data
transferred). The entry contains the estimated transfer size and is periodically updated to reflect the
actual amount of data transferred.

Note:

The usefulness of the estimate value for export operations depends on the type of
estimation requested when the operation was initiated, and it is updated as required
if exceeded by the actual transfer amount. The estimate value for import operations
is exact.

The V$SESSION_LONGOPS columns that are relevant to a Data Pump job are as follows:

- USERNAME - job owner
- OPNAME - job name
- TARGET_DESC - job operation
- SOFAR - megabytes (MB) transferred thus far during the job
- TOTALWORK - estimated number of megabytes (MB) in the job
- UNITS - 'MB'
- MESSAGE - a formatted status message of the form:
  '<job_name>: <operation_name> : nnn out of mmm MB done'
/*
|| Usage Notes:
|| This script is provided to demonstrate various features of Oracle 10g's
|| new DataPump and should be carefully proofread before executing it against
|| any existing Oracle database to ensure that no potential damage can occur.
||
*/

-----
-- Listing 1.1: Setting up a DIRECTORY object for DataPump use.
--   Note that the directory folder need not exist for this command
--   to succeed, but any subsequent attempt to utilize the DIRECTORY
--   object will fail until the folder is created on the server.
--   This should be run from SYSTEM for best results
-----
DROP DIRECTORY export_dir;
CREATE DIRECTORY export_dir as 'c:\oracle\export_dir';
GRANT READ, WRITE ON DIRECTORY export_dir TO hr, sh;

-----
-- Listing 1.2: Determining what object types can be exported/imported
-- and filtering levels available
-----
COL object_path FORMAT A25 HEADING 'Object Path Name'
COL comments FORMAT A50 HEADING 'Object Description'
COL named FORMAT A3 HEADING 'Nmd|Objs'

TTITLE 'Database-Level Exportable Objects'


SELECT
object_path
,named
,comments
FROM database_export_objects;

TTITLE 'Schema-Level Exportable Objects'


SELECT
object_path
,named
,comments
FROM schema_export_objects;

TTITLE 'Table-Level Exportable Objects'


SELECT
object_path
,named
,comments
FROM table_export_objects;

-----
-- Listing 1.3: A simple DataPump Export operation. Note that if the export
-- dump file already exists when this is executed, Oracle will
-- return an ORA-39000 error and terminate the operation
-----
EXPDP hr/hr DUMPFILE=export_dir:hr_schema.dmp
LOGFILE=export_dir:hr_schema.explog

>> DataPump Export command issued:

SET ORACLE_SID=zdcdb
EXPDP system/******** PARFILE=c:\rmancmd\dpe_1.expctl

>> DataPump Export parameters file (dpe_1.expctl):

DIRECTORY=export_dir
SCHEMAS=HR,OE
JOB_NAME=hr_oe_schema
DUMPFILE=export_dir:hr_oe_schemas.dmp
LOGFILE=export_dir:hr_oe_schemas.explog

>> Results of Export Operation:

Export: Release 10.1.0.2.0 - Production on Thursday, 10 March, 2005 17:52


Copyright (c) 2003, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 -
Production
With the Partitioning, OLAP and Data Mining options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSTEM"."HR_OE_SCHEMA": system/********
parfile=c:\rmancmd\dpe_1.expctl
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 2.562 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/SE_PRE_SCHEMA_PROCOBJACT/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/AUDIT_OBJ
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type
SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/VIEW/GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/INDEX/SE_TBL_FBM_INDEX_INDEX/INDEX
Processing object type
SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/SE_TBL_FBM_IND_STATS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/SE_POST_SCHEMA_PROCOBJACT/PROCACT_SCHEMA
. . exported "HR"."LOBSEP" 9.195 KB 1
rows
. . exported "HR"."BIGATFIRST" 277.7 KB 10000
rows
. . exported "HR"."APPLICANTS" 10.46 KB 30
rows
. . exported "HR"."APPLICANTS_1" 11.5 KB 45
rows
. . exported "HR"."APPROLES" 6.078 KB 7
rows
. . exported "HR"."APPS" 5.632 KB 3
rows
. . exported "HR"."COST_CENTERS" 6.328 KB 29
rows
. . exported "HR"."COST_CENTER_ASSIGNMENTS" 6.312 KB 20
rows
. . exported "HR"."COUNTRIES" 6.093 KB 25
rows
. . exported "HR"."DATEMATH" 6.984 KB 1
rows
. . exported "HR"."DEPARTMENTS" 7.101 KB 28
rows
. . exported "HR"."DIVISIONS" 5.335 KB 3
rows
. . exported "HR"."EMPLOYEES" 16.67 KB 118
rows
. . exported "HR"."EMPLOYEE_HIERARCHY" 6.414 KB 5
rows
. . exported "HR"."JOBS" 7.296 KB 27
rows
. . exported "HR"."JOB_HISTORY" 6.765 KB 15
rows
. . exported "HR"."LOCATIONS" 7.710 KB 23
rows
. . exported "HR"."MY_USER_ROLES" 6.453 KB 10
rows
. . exported "HR"."PAYROLL_CHECKS" 7.609 KB 6
rows
. . exported "HR"."PAYROLL_HOURLY" 6.039 KB 3
rows
. . exported "HR"."PAYROLL_SALARIED" 5.687 KB 3
rows
. . exported "HR"."PAYROLL_TRANSACTIONS" 7.195 KB 6
rows
. . exported "HR"."REGIONS" 5.296 KB 4
rows
. . exported "HR"."TIMECLOCK_PUNCHES" 5.718 KB 6
rows
. . exported "HR"."USERS" 5.968 KB 3
rows
. . exported "HR"."USER_ROLES" 6.453 KB 10
rows
. . exported "HR"."IOT_TAB" 0 KB 0
rows
. . exported "HR"."NO_UPDATES" 0 KB 0
rows
. . exported "HR"."PLAN_TABLE" 0 KB 0
rows
Master table "SYSTEM"."HR_OE_SCHEMA" successfully loaded/unloaded
*****************************************************************************
*
Dump file set for SYSTEM.HR_OE_SCHEMA is:
C:\ORACLE\EXPORT_DIR\HR_OE_SCHEMAS.DMP
Job "SYSTEM"."HR_OE_SCHEMA" successfully completed at 17:53

-----
-- Listing 1.4: A simple DataPump Import. Note that only database objects from
--   the HR schema will be used to populate a new schema (HR_OLTP),
--   and all objects other than tables and their dependent objects
--   will be excluded from the import
-----

>> SQL to create new HR_OLTP schema:

DROP USER hr_oltp CASCADE;


CREATE USER hr_oltp
IDENTIFIED BY misdev
DEFAULT TABLESPACE example
TEMPORARY TABLESPACE temp02
QUOTA 50M ON example
PROFILE default;
GRANT CONNECT TO hr_oltp;
GRANT RESOURCE TO hr_oltp;

>> DataPump Import command issued:

SET ORACLE_SID=dbaref
IMPDP system/****** PARFILE=export_dir:dpi_1.impctl

>> DataPump Import parameters file (dpi_1.impctl):

DIRECTORY=export_dir
JOB_NAME=hr_oltp_import
DUMPFILE=export_dir:hr_oe_schemas.dmp
LOGFILE=export_dir:hr_oltp_import.implog
REMAP_SCHEMA=hr:hr_oltp
STATUS=5

>> Results of Import operation:

Import: Release 10.1.0.2.0 - Production on Thursday, 10 March, 2005 18:02


Copyright (c) 2003, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 -
Production
With the Partitioning, OLAP and Data Mining options
Master table "SYSTEM"."HR_OLTP_IMPORT" successfully loaded/unloaded
Starting "SYSTEM"."HR_OLTP_IMPORT": system/********
parfile=c:\rmancmd\dpi_1.impctl
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "HR_OLTP"."LOBSEP" 9.195 KB 1
rows
. . imported "HR_OLTP"."BIGATFIRST" 277.7 KB 10000
rows
. . imported "HR_OLTP"."APPLICANTS" 10.46 KB 30
rows
. . imported "HR_OLTP"."APPLICANTS_1" 11.5 KB 45
rows
. . imported "HR_OLTP"."APPROLES" 6.078 KB 7
rows
. . imported "HR_OLTP"."APPS" 5.632 KB 3
rows
. . imported "HR_OLTP"."COST_CENTERS" 6.328 KB 29
rows
. . imported "HR_OLTP"."COST_CENTER_ASSIGNMENTS" 6.312 KB 20
rows
. . imported "HR_OLTP"."COUNTRIES" 6.093 KB 25
rows
. . imported "HR_OLTP"."DATEMATH" 6.984 KB 1
rows
. . imported "HR_OLTP"."DEPARTMENTS" 7.101 KB 28
rows
. . imported "HR_OLTP"."DIVISIONS" 5.335 KB 3
rows
. . imported "HR_OLTP"."EMPLOYEES" 16.67 KB 118
rows
. . imported "HR_OLTP"."EMPLOYEE_HIERARCHY" 6.414 KB 5
rows
. . imported "HR_OLTP"."JOBS" 7.296 KB 27
rows
. . imported "HR_OLTP"."JOB_HISTORY" 6.765 KB 15
rows
. . imported "HR_OLTP"."LOCATIONS" 7.710 KB 23
rows
. . imported "HR_OLTP"."MY_USER_ROLES" 6.453 KB 10
rows
. . imported "HR_OLTP"."PAYROLL_CHECKS" 7.609 KB 6
rows
. . imported "HR_OLTP"."PAYROLL_HOURLY" 6.039 KB 3
rows
. . imported "HR_OLTP"."PAYROLL_SALARIED" 5.687 KB 3
rows
. . imported "HR_OLTP"."PAYROLL_TRANSACTIONS" 7.195 KB 6
rows
. . imported "HR_OLTP"."REGIONS" 5.296 KB 4
rows
. . imported "HR_OLTP"."TIMECLOCK_PUNCHES" 5.718 KB 6
rows
. . imported "HR_OLTP"."USERS" 5.968 KB 3
rows
. . imported "HR_OLTP"."USER_ROLES" 6.453 KB 10
rows
. . imported "HR_OLTP"."IOT_TAB" 0 KB 0
rows
. . imported "HR_OLTP"."NO_UPDATES" 0 KB 0
rows
. . imported "HR_OLTP"."PLAN_TABLE" 0 KB 0
rows
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/AUDIT_OBJ
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
ORA-39082: Object type TRIGGER:"HR_OLTP"."BIN$55fGDdubQL6YVYB0dGS/nw==$1"
created with compilation warnings
ORA-39082: Object type TRIGGER:"HR_OLTP"."BIN$55fGDdubQL6YVYB0dGS/nw==$1"
created with compilation warnings
ORA-39082: Object type TRIGGER:"HR_OLTP"."SECURE_EMPLOYEES" created with
compilation warnings
ORA-39082: Object type TRIGGER:"HR_OLTP"."SECURE_EMPLOYEES" created with
compilation warnings
ORA-39082: Object type TRIGGER:"HR_OLTP"."TR_BRIU_APPLICANTS" created with
compilation warnings
ORA-39082: Object type TRIGGER:"HR_OLTP"."TR_BRIU_APPLICANTS" created with
compilation warnings
ORA-39082: Object type TRIGGER:"HR_OLTP"."UPDATE_JOB_HISTORY" created with
compilation warnings
ORA-39082: Object type TRIGGER:"HR_OLTP"."UPDATE_JOB_HISTORY" created with
compilation warnings
Processing object type SCHEMA_EXPORT/TABLE/INDEX/SE_TBL_FBM_INDEX_INDEX/INDEX
Processing object type
SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/SE_TBL_FBM_IND_STATS/INDEX_STATISTICS
Job "SYSTEM"."HR_OLTP_IMPORT" completed with 8 error(s) at 18:02

-----
-- Listing 1.5: Querying status of DataPump operations
-----
TTITLE 'Currently Active DataPump Operations'
COL owner_name FORMAT A06 HEADING 'Owner'
COL job_name FORMAT A20 HEADING 'JobName'
COL operation FORMAT A12 HEADING 'Operation'
COL job_mode FORMAT A12 HEADING 'JobMode'
COL state FORMAT A12 HEADING 'State'
COL degree FORMAT 9999 HEADING 'Degr'
COL attached_sessions FORMAT 9999 HEADING 'Sess'

SELECT
owner_name
,job_name
,operation
,job_mode
,state
,degree
,attached_sessions
FROM dba_datapump_jobs
;

TTITLE 'Currently Active DataPump Sessions'


COL owner_name FORMAT A06 HEADING 'Owner'
COL job_name FORMAT A06 HEADING 'Job'
COL osuser FORMAT A12 HEADING 'UserID'

SELECT
DPS.owner_name
,DPS.job_name
,S.osuser
FROM
dba_datapump_sessions DPS
,v$session S
WHERE S.saddr = DPS.saddr
;

--------------------------------------------------------------

expdp and impdp tablespace on same database


Just an example of exporting a tablespace and importing it back into the same
database when the tablespace already exists.

Step 1:

Step 2: export the tablespace


Step 3: import the tablespace; errors are shown at the end because the objects already
exist there

Step 4: use table_exists_action=replace (a minimal command sketch of the steps follows)
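A minimal command sketch of those steps, assuming a tablespace named USERS_TBS and the dumplocation directory used in the earlier scenarios (both names are illustrative):

expdp system/password directory=dumplocation dumpfile=tbsexp.dmp logfile=tbsexp.log tablespaces=users_tbs

impdp system/password directory=dumplocation dumpfile=tbsexp.dmp logfile=tbsimp.log tablespaces=users_tbs
(this run reports errors because the objects already exist)

impdp system/password directory=dumplocation dumpfile=tbsexp.dmp logfile=tbsimp.log tablespaces=users_tbs table_exists_action=replace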


Can I Store Datapump dumpfiles in ASM diskgroup

One of my friends asked me: can I store a Data Pump dump file in an ASM diskgroup? Yes, you
can. Let's see how to create the directory and store the dump file there.

Step 1: Go To ASM Instance and Create New Directory.

Documents and Settings\Administrator>set oracle_sid=+asm


C:\Documents and Settings\Administrator>sqlplus
SQL*Plus: Release 10.2.0.1.0 - Production on Tue Feb 22 15:27:13 2011
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Enter user-name: / as sysdba
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> alter diskgroup data add directory '+DATA/dumpset';
Diskgroup altered.

Step 2: Go to DB Instance

Create Directory for dumpfile and logfile

Dumpfile Directory (ASM Disk)


SQL> create or replace directory dp_asm as '+DATA/dumpset';
Directory created.
Log file Directory (Local File System).
SQL> create or replace directory logfile as 'C:\azar';
Directory created.
SQL> grant read,write on directory dp_asm to system;
Grant succeeded.
SQL> grant read,write on directory logfile to system;
Grant succeeded.

SQL> $expdp system/Admin123 directory=dp_asm dumpfile=testasm.dmp


schemas=scott
logfile=logfile:testasm.log
Export: Release 10.2.0.1.0 - Production on Tuesday, 22 February, 2011
15:35:00
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 -
Produc
tion
With the Partitioning, OLAP and Data Mining options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** directory=dp_asm
dump
file=testasm.dmp schemas=scott logfile=logfile:testasm.log
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."DEPT" 5.656 KB 4
rows
. . exported "SCOTT"."EMP" 7.820 KB 14
rows
. . exported "SCOTT"."SALGRADE" 5.585 KB 5
rows
. . exported "SCOTT"."BONUS" 0 KB 0
rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
*****************************************************************************
*
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
+DATA/dumpset/testasm.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 15:36:01
SQL>
Step 4: Go to ASM Instance and Check the file created in ASM

SQL> select file_number,creation_date,bytes from v$asm_file where


type='DUMPSET';
FILE_NUMBER CREATION_ BYTES
----------- --------- ----------
283 22-FEB-11 212992
SQL>

--------------------------------------------------------

How to export from Oracle 11.2 and import into a 10.2 version

Many OTN users have repeatedly asked this question: how do I export from a higher
version and import into a lower version? I'll just show how to do it here. First: using
Data Pump to export from Oracle 11.2 and import into a 10.2 version.

Source DB 11.2 Version :

Step 1: Create Directory

SQL> create or replace directory test_dir as 'D:\backupnew\dump';

Directory created.

SQL> grant read,write on directory test_dir to scott;

Grant succeeded.

SQL> conn scott/tiger@azardb


Connected.
SQL> create table testversion(version varchar2(20));

Table created.

SQL> insert into testversion values('oralce11gr2');

1 row created.

SQL> commit;

Commit complete.

Step 2: Export Table using Datapump

C:\Users\mazar>expdp scott/tiger@azardb directory=test_dir


dumpfile=testver.dmp tables=testversion

Export: Release 11.2.0.1.0 - Production on Sun Jan 23 15:54:13 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_TABLE_01": scott/********@azardb
directory=test_dir dumpfile=testver.dmp tables=testversion
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "SCOTT"."TESTVERSION" 5.031 KB 1
rows
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
*****************************************************************************
*
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
D:\BACKUPNEW\DUMP\TESTVER.DMP
Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 15:54:40

Now Go to Target DB 10.2 Version

Step 3: Create Directory for Scott User.

SQL> create or replace directory test_dir as 'd:\newdump';


Directory created.
SQL> grant read,write on directory test_dir to scott;
Grant succeeded.

Step 4: I just copied this TESTVER.DMP file into the target DB 10.2 directory and imported it

D:\oracle\product\10.2.0\db_2\BIN>impdp scott/tiger@ace directory=test_dir


dumpfile=testversion.dmp tables=testversion

Import: Release 10.2.0.1.0 - 64bit Production on Sunday, 23 January, 2011


16:04:
31

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 -


64bit
Production
With the Partitioning, OLAP and Data Mining options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-39143: dump file "d:\newdump\testversion.dmp" may be an original export
dump
file

It shows an error, so you need to export the data in the source 11.2 DB using the VERSION
parameter.
Step 5:

C:\Users\mazar>expdp scott/tiger@azardb directory=test_dir


dumpfile=testver.dmp tables=testversion
version=10.2 reuse_dumpfiles=yes

Export: Release 11.2.0.1.0 - Production on Sun Jan 23 16:06:47 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -


Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_TABLE_01": scott/********@azardb
directory=test_dir dumpfile=testver.dmp tables=testversion
version=10.2 reuse_dumpfiles=yes
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "SCOTT"."TESTVERSION" 4.968 KB 1
rows
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
*****************************************************************************
*
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
D:\BACKUPNEW\DUMP\TESTVER.DMP

Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 16

Step 6: Again I copied this dump file to the target DB 10.2 directory and imported it.

D:\oracle\product\10.2.0\db_2\BIN>impdp scott/tiger@ace directory=test_dir


dumpfile=testver.dmp tables=testversion remap_tablespace=users_tbs:users

Import: Release 10.2.0.1.0 - 64bit Production on Sunday, 23 January, 2011


16:08:
37

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 -


64bit
Production
With the Partitioning, OLAP and Data Mining options
Master table "SCOTT"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "SCOTT"."SYS_IMPORT_TABLE_01": scott/********@ace
directory=test_dir d
umpfile=testver.dmp tables=testversion remap_tablespace=users_tbs:users
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SCOTT"."TESTVERSION" 4.968 KB 1
rows
Job "SCOTT"."SYS_IMPORT_TABLE_01" successfully completed at 16:08:39
D:\oracle\product\10.2.0\db_2\BIN>
--------------------------------------------------------------------------

Datapump REUSE_DUMPFILES parameter


The REUSE_DUMPFILES parameter is used to overwrite a preexisting dump file.

Its default value is NO.

Data Pump Export will return an error if you specify a dump file name that already
exists. The REUSE_DUMPFILES parameter allows you to override that behavior and reuse
a dump file name.

This parameter is available in Oracle 11g, not in Oracle 10g.

See the example below:

C:\Users\mazar>expdp scott/tiger@azardb dumpfile=reusedump.dmp


directory=data_pump_dir

Export: Release 11.2.0.1.0 - Production on Sat Dec 11 12:17:12 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -


Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": scott/********@azardb
dumpfile=reusedump.dmp directory=data_pump_dir
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."DEPT" 5.945 KB 4
rows
. . exported "SCOTT"."EMP" 8.578 KB 14
rows
. . exported "SCOTT"."MYTEST" 5.429 KB 1
rows
. . exported "SCOTT"."SALGRADE" 5.875 KB 5
rows
. . exported "SCOTT"."BONUS" 0 KB 0
rows
Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
*****************************************************************************
*
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
C:\APP\ORACLE\MAZAR\ADMIN\AZARDB\DPDUMP\REUSEDUMP.DMP

Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 12

C:\Users\mazar>expdp scott/tiger@azardb dumpfile=reusedump.dmp


directory=data_pump_dir

Export: Release 11.2.0.1.0 - Production on Sat Dec 11 12:18:17 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -


Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file
"C:\app\oracle\mazar\admin\azardb\dpdump\reusedump.dmp"
ORA-27038: created file already exists
OSD-04010: <create> option specified, file already exists

C:\Users\mazar>expdp scott/tiger@azardb dumpfile=reusedump.dmp


directory=data_pump_dir reuse_dumpfiles=y

Export: Release 11.2.0.1.0 - Production on Sat Dec 11 12:18:31 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -


Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": scott/********@azardb
dumpfile=reusedump.dmp directory=data_pump_dir reuse_dumpfiles=y
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."DEPT" 5.945 KB 4
rows
. . exported "SCOTT"."EMP" 8.578 KB 14
rows
. . exported "SCOTT"."MYTEST" 5.429 KB 1
rows
. . exported "SCOTT"."SALGRADE" 5.875 KB 5
rows
. . exported "SCOTT"."BONUS" 0 KB 0
rows
Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
*****************************************************************************
*
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
C:\APP\ORACLE\MAZAR\ADMIN\AZARDB\DPDUMP\REUSEDUMP.DMP

Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 12

--------------------------------------------------------------------------------------------------

Datapump compression parameter

Here I would just like to show how the Data Pump COMPRESSION parameter works in Oracle 11g R2 (see the
output below for how the resulting sizes vary).

Default: METADATA_ONLY

Purpose

Specifies which data to compress before writing to the dump file set

Syntax and Description

COMPRESSION={ALL | DATA_ONLY | METADATA_ONLY | NONE}

- ALL enables compression for the entire export operation. The ALL option requires that the
  Oracle Advanced Compression option be enabled.
- DATA_ONLY results in all data being written to the dump file in compressed format. The
  DATA_ONLY option requires that the Oracle Advanced Compression option be enabled.
- METADATA_ONLY results in all metadata being written to the dump file in compressed format.
  This is the default.
- NONE disables compression for the entire export operation.

Restrictions

- To make full use of all these compression options, the COMPATIBLE initialization parameter must be set to at least 11.0.0.
- The METADATA_ONLY option can be used even if the COMPATIBLE initialization parameter is set to 10.2.
- Compression of data (using values ALL or DATA_ONLY) is valid only in the Enterprise Edition of Oracle Database 11g.
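
As a side note, COMPRESSION (like any other Data Pump parameter) can also be supplied through a parameter file instead of on the command line. A minimal sketch, assuming a parameter file named comp.par and an illustrative dump file name:

Contents of comp.par:
directory=data_pump_dir
dumpfile=comp_all.dmp
compression=all

C:\Users\mazar>expdp scott/tiger@azardb parfile=comp.par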

For example, compare the four runs below:

COMPRESSION=METADATA_ONLY

C:\Users\mazar>set oracle_sid=azardb
C:\Users\mazar>expdp scott/tiger@azardb dumpfile=compressmeta.dmp directory=data_pump_dir compression=metadata_only

Export: Release 11.2.0.1.0 - Production on Sat Dec 11 10:19:38 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": scott/********@azardb
dumpfile=compressmeta.dmp directory=data_pump_dir compression=metadata_only
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."DEPT"                          5.945 KB       4 rows
. . exported "SCOTT"."EMP"                           8.578 KB      14 rows
. . exported "SCOTT"."MYTEST"                        5.429 KB       1 rows
. . exported "SCOTT"."SALGRADE"                      5.875 KB       5 rows
. . exported "SCOTT"."BONUS"                             0 KB       0 rows
Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
C:\APP\ORACLE\MAZAR\ADMIN\AZARDB\DPDUMP\COMPRESSMETA.DMP

Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 10

COMPRESSION=ALL

C:\Users\mazar>expdp scott/tiger@azardb dumpfile=compressall.dmp directory=data_pump_dir compression=all

Export: Release 11.2.0.1.0 - Production on Sat Dec 11 10:21:59 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": scott/********@azardb
dumpfile=compressall.dmp directory=data_pump_dir compression=all
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."DEPT"                          4.984 KB       4 rows
. . exported "SCOTT"."EMP"                           5.625 KB      14 rows
. . exported "SCOTT"."MYTEST"                        4.789 KB       1 rows
. . exported "SCOTT"."SALGRADE"                      4.898 KB       5 rows
. . exported "SCOTT"."BONUS"                             0 KB       0 rows
Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
C:\APP\ORACLE\MAZAR\ADMIN\AZARDB\DPDUMP\COMPRESSALL.DMP

Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 10

COMPRESSION=DATA_ONLY

C:\Users\mazar>expdp scott/tiger@azardb dumpfile=compressdata.dmp directory=data_pump_dir compression=data_only

Export: Release 11.2.0.1.0 - Production on Sat Dec 11 10:23:23 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": scott/********@azardb
dumpfile=compressdata.dmp directory=data_pump_dir compression=data_only
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."DEPT"                          4.984 KB       4 rows
. . exported "SCOTT"."EMP"                           5.625 KB      14 rows
. . exported "SCOTT"."MYTEST"                        4.789 KB       1 rows
. . exported "SCOTT"."SALGRADE"                      4.898 KB       5 rows
. . exported "SCOTT"."BONUS"                             0 KB       0 rows
Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
C:\APP\ORACLE\MAZAR\ADMIN\AZARDB\DPDUMP\COMPRESSDATA.DMP

Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 10

COMPRESSION=NONE

C:\Users\mazar>expdp scott/tiger@azardb dumpfile=compressnone.dmp directory=data_pump_dir compression=none

Export: Release 11.2.0.1.0 - Production on Sat Dec 11 10:24:28 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": scott/********@azardb
dumpfile=compressnone.dmp directory=data_pump_dir compression=none
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 256 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."DEPT"                          5.945 KB       4 rows
. . exported "SCOTT"."EMP"                           8.578 KB      14 rows
. . exported "SCOTT"."MYTEST"                        5.429 KB       1 rows
. . exported "SCOTT"."SALGRADE"                      5.875 KB       5 rows
. . exported "SCOTT"."BONUS"                             0 KB       0 rows
Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
C:\APP\ORACLE\MAZAR\ADMIN\AZARDB\DPDUMP\COMPRESSNONE.DMP
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 10

The dump file sizes from the four runs above show how the COMPRESSION setting affects the output: with ALL or DATA_ONLY the table data is written compressed (for example, EMP shrinks from 8.578 KB to 5.625 KB), while METADATA_ONLY and NONE leave the data uncompressed.
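
To compare the resulting dump file sizes on disk, a simple directory listing of the dump directory is enough (the path is taken from the output above; exact sizes will vary):

C:\Users\mazar>dir C:\app\oracle\mazar\admin\azardb\dpdump\compress*.dmp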

Can I rename an export dump file?


One user asked me whether an export dump file can be renamed. Yes, you can.

Just see the example below:

SQL> conn scott/tiger;


Connected.
SQL> create table mytest(empname varchar2(20),city varchar2(20));
Table created.
SQL> insert into mytest values('azar','riyadh');
1 row created.
SQL> commit;
Commit complete.
SQL> conn / as sysdba
Connected.
SQL> grant read,write on directory data_pump_dir to scott;
Grant succeeded.

First, I run the export:

SQL> $expdp scott/tiger directory=data_pump_dir dumpfile=expscott.dmp tables=mytest
Export: Release 11.2.0.1.0 - Production on Wed Dec 8 10:50:32 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_TABLE_01": scott/********
directory=data_pump_dir dumpfile=expscott.dmp tables=mytest
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "SCOTT"."MYTEST"                        5.429 KB       1 rows
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
C:\APP\ORACLE\MAZAR\ADMIN\AZARDB\DPDUMP\EXPSCOTT.DMP

Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 10

SQL> conn scott/tiger


Connected.
SQL> drop table mytest;
Table dropped.
SQL> commit;
Commit complete.
SQL> select * from mytest;
select * from mytest
*
ERROR at line 1:
ORA-00942: table or view does not exist

At the operating-system level, I then renamed EXPSCOTT.DMP to TESTSCOTT.DMP and ran the import against the new file name.
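
The rename itself is an ordinary file-system operation in the directory that DATA_PUMP_DIR points to (path taken from the export output above):

C:\Users\mazar>rename C:\app\oracle\mazar\admin\azardb\dpdump\expscott.dmp testscott.dmp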

SQL> $impdp scott/tiger directory=data_pump_dir dumpfile=testscott.dmp tables=mytest
Import: Release 11.2.0.1.0 - Production on Wed Dec 8 10:56:03 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SCOTT"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "SCOTT"."SYS_IMPORT_TABLE_01": scott/********
directory=data_pump_dir dumpfile=testscott.dmp tables=mytest
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SCOTT"."MYTEST"                        5.429 KB       1 rows

Job "SCOTT"."SYS_IMPORT_TABLE_01" successfully completed at 10

SQL> conn scott/tiger;


Connected.
SQL> select * from mytest;
EMPNAME CITY
-------------------- --------------------
azar riyadh

---------------------------

KEEP_MASTER and METRICS in EXPDP/IMPDP

Recently, I came across KEEP_MASTER and METRICS, two undocumented parameters of EXPDP/IMPDP. METRICS reports the time taken to process each group of objects, and KEEP_MASTER prevents the Data Pump master table from being dropped after the export/import job completes.

Let's check:

D:\scripts>expdp directory=DATA_PUMP_DIR dumpfile=abc.dmp logfile=abc.log tables=TM_CONS,FAKE_IND_TEST metrics=y
Export: Release 11.2.0.2.0 - Production on Wed Aug 17 19:13:08 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Username: anand

Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production

With the Partitioning, OLAP, Data Mining and Real Application Testing options

Starting "ANAND"."SYS_EXPORT_TABLE_01": anand/******** directory=DATA_PUMP_DIR dumpfile=abc.dmp logfile=abc.log tables=TM_CONS,FAKE_IND_TEST metrics=y

Estimate in progress using BLOCKS method...

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 152 MB

Processing object type TABLE_EXPORT/TABLE/TABLE

Completed 2 TABLE objects in 1 seconds

Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX

Completed 1 INDEX objects in 1 seconds

Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

Completed 1 INDEX_STATISTICS objects in 1 seconds

Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Completed 2 TABLE_STATISTICS objects in 0 seconds

. . exported "ANAND"."FAKE_IND_TEST"                 62.61 MB 1000000 rows
. . exported "ANAND"."TM_CONS"                       27.65 MB  871080 rows

Master table "ANAND"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded

******************************************************************************

Dump file set for ANAND.SYS_EXPORT_TABLE_01 is:

D:\ORACLE\APP\ADMIN\MATRIX\DPDUMP\ABC.DMP

Job "ANAND"."SYS_EXPORT_TABLE_01" successfully completed at 19:13:28

As the job completed successfully and KEEP_MASTER was not specified, the export master table "SYS_EXPORT_TABLE_01" is dropped:

SQL> select owner, segment_name, segment_type, tablespace_name, (bytes/1024/1024) MB
     from dba_segments where segment_name='SYS_EXPORT_TABLE_01';

no rows selected
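
As a side note (not part of the run above), while a Data Pump job is still executing, or if it stopped abnormally, its master table remains and the job is visible in the dba_datapump_jobs view; a minimal check:

SQL> select owner_name, job_name, operation, job_mode, state
     from dba_datapump_jobs;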

Now, let's see what happens when we use KEEP_MASTER.

D:\scripts>impdp directory=DATA_PUMP_DIR dumpfile=abc.dmp logfile=abc_imp_chk.log full=y metrics=y keep_master=y sqlfile=abc_sqlfile.lst

Import: Release 11.2.0.2.0 - Production on Wed Aug 17 19:15:05 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Username: anand

Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Master table "ANAND"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded

Starting "ANAND"."SYS_SQL_FILE_FULL_01": anand/******** directory=DATA_PUMP_DIR dumpfile=abc.dmp logfile=abc_imp_chk.log full=y metrics=y keep_master=y sqlfile=abc_sqlfile.lst

Processing object type TABLE_EXPORT/TABLE/TABLE

Completed 2 TABLE objects in 1 seconds

Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX

Completed 1 INDEX objects in 0 seconds

Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS

Completed 1 INDEX_STATISTICS objects in 0 seconds

Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Completed 2 TABLE_STATISTICS objects in 1 seconds

Job "ANAND"."SYS_SQL_FILE_FULL_01" successfully completed at 19:15:14

SQL> select owner, segment_name, segment_type, tablespace_name, (bytes/1024/1024) MB
     from dba_segments where segment_name='SYS_SQL_FILE_FULL_01';

OWNER    SEGMENT_NAME           SEGMENT_TYPE   TABLESPACE_NAME   MB
-------- ---------------------- -------------- ----------------- -----
ANAND    SYS_SQL_FILE_FULL_01   TABLE          TEST              .125

SQL> select object_type, object_name, object_schema, original_object_schema,
     original_object_name, object_tablespace, size_estimate, object_row
     from SYS_SQL_FILE_FULL_01 where original_object_schema is not null;

OBJECT_TYPE  OBJECT_NAME    OBJECT_SCHEMA  ORIGINAL_OBJECT_SCHEMA  ORIGINAL_OBJECT_NAME  OBJECT_TABLESPACE  SIZE_ESTIMATE  OBJECT_ROW
-----------  -------------  -------------  ----------------------  --------------------  -----------------  -------------  ----------
TABLE        TM_CONS        ANAND          ANAND                   TM_CONS               TEST                         736           1
TABLE        FAKE_IND_TEST  ANAND          ANAND                   FAKE_IND_TEST         TEST                         736           1
INDEX        FAKE_CUST_ID   ANAND          ANAND                   FAKE_CUST_ID          TEST                                       1
TABLE_DATA   TM_CONS        ANAND          ANAND                   TM_CONS               TEST                    75497472
TABLE_DATA   FAKE_IND_TEST  ANAND          ANAND                   FAKE_IND_TEST         TEST                    83886080

OBJECT_TYPE --> Shows the object type.

OBJECT_SCHEMA --> Contains the schema name into which the object will be imported.

ORIGINAL_OBJECT_SCHEMA --> Contains the original object's schema name.

OBJECT_TABLESPACE --> Shows the tablespace into which the object will be imported.

SIZE_ESTIMATE --> Estimated size of the table data in bytes.

This can be used to find the owners, objects, and other information contained in a dump file when you do not know what the dump contains. Of course, the SQLFILE parameter can be used to get the same information, but this is another option.
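
For completeness, a minimal sketch of the SQLFILE alternative mentioned above (the output file name abc_ddl.sql is chosen for illustration): it writes the DDL contained in the dump file to a text file without importing anything. And once you have finished inspecting a retained master table, it can be dropped like any ordinary table:

D:\scripts>impdp anand directory=DATA_PUMP_DIR dumpfile=abc.dmp sqlfile=abc_ddl.sql

SQL> drop table anand.SYS_SQL_FILE_FULL_01 purge;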

---------------------------------------------------------------------------------------------
