Data Pump replaces the original EXP and IMP utilities (although exp and imp were not removed in 10g). It provides high-speed, parallel, bulk data and metadata movement of Oracle database contents across platforms and database versions. Oracle states that Data Pump is about 60% faster than Export on data retrieval and 20% to 30% faster than Import on data loading. If a Data Pump job is started and fails for any reason before it has finished, it can be restarted at a later time.
The command-line clients are expdp and impdp, respectively. Data Pump can write to files as well as transfer data directly over the network. Clients can detach from and reattach to a running Data Pump job, and jobs can be monitored through several views such as dba_datapump_jobs. Data Pump's public API is the DBMS_DATAPUMP package.
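As a brief, illustrative sketch of that API (not one of the examples below; it assumes a DIRECTORY object named DUMPLOCATION already exists and that the caller has the EXP_FULL_DATABASE role), a schema-mode export could be driven from PL/SQL like this:
DECLARE
  h         NUMBER;
  job_state VARCHAR2(30);
BEGIN
  -- Create a schema-mode export job
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');

  -- Dump file and log file go to the (assumed) DUMPLOCATION directory object
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'scott_api.dmp',
                         directory => 'DUMPLOCATION');
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'scott_api.log',
                         directory => 'DUMPLOCATION',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- Export only the SCOTT schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR',
                                value  => 'IN (''SCOTT'')');

  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
  DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || job_state);
END;
/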
Two access methods are supported: Direct Path (DP) and External Tables (ET). DP is the fastest but does not support intra-partition parallelism; ET does, and may therefore be chosen to load or unload a very large table or partition. Data Pump dump files are not compatible with the old exp and imp utilities, so if you need to import into a pre-10g database it is best to stick with the original export utility.
Data Pump is particularly useful for migrating large databases.
To use Data Pump across schemas you must have the EXP_FULL_DATABASE or IMP_FULL_DATABASE role, depending on the operation to be performed. These roles allow you to expdp and impdp across ownership boundaries for items such as grants, resource plans, and schema definitions, and to remap, rename, or redistribute database objects and structures. Note that granting access to a DIRECTORY object gives a user access to files that the user would not normally be able to reach.
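For example, granting the roles to a hypothetical user dp_admin:
GRANT EXP_FULL_DATABASE TO dp_admin;
GRANT IMP_FULL_DATABASE TO dp_admin;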
Data Pump runs only on the server side. You may initiate the export from a client but the job(s)
and the files will run inside an Oracle server. There will be no dump files (expdat.dmp) or log
files created on your local machine.
Oracle creates dump and log files through DIRECTORY objects. So before you can use Data
Pump you must create a DIRECTORY object. Example:
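A minimal sketch (the path and grantee are placeholders; later scenarios in this document refer to a directory object named dumplocation):
CREATE OR REPLACE DIRECTORY dumplocation AS '/u01/app/oracle/dpump';
GRANT READ, WRITE ON DIRECTORY dumplocation TO scott;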
Then, as you use Data Pump you can reference this DIRECTORY as a parameter for export
where you would like the dump or log files to end up.
Some Parameters (original exp/imp parameter -> Data Pump equivalent)
FEEDBACK        -> STATUS
FILE            -> DUMPFILE
LOG             -> LOGFILE
OWNER           -> SCHEMAS
TTS_FULL_CHECK  -> TRANSPORT_FULL_CHECK
Some Examples:
Scenario 2: Export the scott schema from ORCL and import into the DEST database.
expdp userid=system/password@ORCL dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log directory=dumplocation schemas=scott
impdp userid=system/password@DEST dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log directory=dumplocation
Scenario 5: Export the part_emp table from the scott schema at the ORCL instance and import it into the DEST instance.
expdp userid=system/password@ORCL logfile=tableexpdb.log
directory=dumplocation tables=scott.part_emp dumpfile=tableexpdb.dmp
impdp userid=system/password@DEST dumpfile=tableexpdb.dmp
logfile=tabimpdb.log directory=dumplocation table_exists_action=REPLACE
Another common problem, copying a database into a target with less disk space, can be easily solved with Data Pump. For this example, let's say that you only have space for 70% of your production database. To know how to proceed, we first need to decide whether the copy will contain metadata only (no data/rows) or whether it will include the data as well. Let's see how to do it each way:
a) Metadata Only
First do a full export of your source database.
expdp user/password content=metadata_only full=y directory=datapump
dumpfile=metadata_24112010.dmp
Then import the metadata and tell Data Pump to reduce the size of the extents to 70%. You can do this with the "transform" parameter of "impdp"; pctspace is a percentage multiplier that is used to alter extent allocations and datafile sizes.
impdp user/password transform=pctspace:70 directory=datapump
dumpfile=metadata_24112010.dmp
b) Metadata and Data
If the copy must include the rows as well, first take a full export without the content=metadata_only option. Then, just as in the example before, import it telling Data Pump to reduce the size of the extents to 70%, and that's it!
impdp user/password transform=pctspace:70 directory=datapump
dumpfile=expdp_70_24112010.dmp
Scenario 7: Export only specific partitions of the part_emp table in the scott schema at ORCL and import them into the ordb database.
expdp userid=system/password@ORCL dumpfile=partexpdb.dmp
logfile=partexpdb.log directory=dumplocation
tables=scott.part_emp:part10,scott.part_emp:part20
----------------------------------------------
Scenario 8: Export only the tables (no code) in the scott schema at ORCL and import them into the DEST database.
expdp userid=system/password@ORCL dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log directory=dumplocation include=table schemas=scott
impdp userid=system/password@DEST dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log directory=dumplocation table_exists_action=replace
Scenario 11: Export the scott schema from the ORCL database and split the dump file into four files. Import the dump file into the DEST database.
Expdp command:
expdp userid=system/password@ORCL logfile=schemaexp_split.log
directory=dumplocation dumpfile=schemaexp_split_%U.dmp parallel=4
schemas=scott include=table
With the above command, four files will initially be created:
schemaexp_split_01.dmp, schemaexp_split_02.dmp, schemaexp_split_03.dmp and
schemaexp_split_04.dmp. Notice that every occurrence of the substitution
variable %U is incremented each time. Since there is no FILESIZE parameter, no
more files will be created.
Impdp command:
impdp userid=system/password@DEST logfile=schemaimp_split.log
directory=dumplocation dumpfile=schemaexp_split_%U.dmp
table_exists_action=replace
remap_tablespace=res:users exclude=grant
Scenario 12: Export the scott schema from the ORCL database and split the dump file into three files.
The dump files will be stored in three different locations. This method is
especially useful if you do not have enough space in one file system to
perform the complete expdp job. After the export succeeds, import the dump
files into the DEST database.
Expdp command:
expdp userid=system/password@ORCL logfile=schemaexp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
filesize=50M schemas=scott include=table
With the above expdp command, the dump files are placed in three different
locations. Let us say the entire export is 1500 MB; expdp then creates 30
dump files (each 50 MB) and places 10 files in each file system.
Impdp command:
impdp userid=system/password@DEST logfile=schemaimp_split.log
directory=dumplocation
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
table_exists_action=replace
Scenario 13: Export the scott schema from ORCL and import the dump file into the training schema in the DEST database.
expdp userid=scott/tiger@ORCL logfile=networkexp1.log directory=dumplocation
dumpfile=networkexp1.dmp schemas=scott include=table
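The corresponding import is not shown above; a sketch of what it could look like, assuming the same directory object and a hypothetical log file name, and remapping scott to training, is:
impdp userid=system/password@DEST dumpfile=networkexp1.dmp
logfile=networkimp1.log directory=dumplocation remap_schema=scott:training
table_exists_action=replace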
More Examples
To export only a few specific objects--say, function LIST_DIRECTORY and
procedure DB_MAINTENANCE_DAILY--you could use
expdp ananda/iclaim directory=DPDATA1 dumpfile=expprocs.dmp
include=PROCEDURE:\"=\'DB_MAINTENANCE_DAILY\'\",FUNCTION:\"=\'LIST_DIRECTORY\'\"
This dump file serves as a backup of the sources. You can even use it to
create DDL scripts to be used later. A special parameter called SQLFILE
allows the creation of the DDL script file:
impdp ananda/iclaim directory=DPDATA1 dumpfile=expprocs.dmp sqlfile=procs.sql
This instruction creates a file named procs.sql in the directory specified by
DPDATA1, containing the scripts of the objects inside the export dump file.
This approach helps you create the sources quickly in another schema.
The OWNER parameter of exp has been replaced by the SCHEMAS parameter which is used to
specify the schemas to be exported. The following is an example of the schema export and
import syntax:
expdp scott/tiger schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp
logfile=expdpSCOTT.log
impdp scott/tiger schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp
logfile=impdpSCOTT.log
The REMAP_TABLESPACE parameter in the impdp command allows you to move objects from one
tablespace to another.
impdp system SCHEMAS=SCOTT directory=EXPORTPATH DUMPFILE=SCOTT.dmp
LOGFILE=imp.log REMAP_TABLESPACE=FGUARD_DATA:FG_DATA
You can also use several REMAP_TABLESPACE clauses in the same impdp command:
impdp system SCHEMAS=SCOTT directory=EXPORTPATH DUMPFILE=SCOTT.dmp
LOGFILE=imp.log REMAP_TABLESPACE=FGUARD_DATA:FG_DATA
remap_tablespace=FGUARD_INDX:FG_INDX
The FULL parameter indicates that a complete database export is required. The following is an
example of the full database export and import syntax:
expdp system/password full=Y directory=TEST_DIR dumpfile=DB10G.dmp
logfile=expdpDB10G.log
impdp system/password full=Y directory=TEST_DIR dumpfile=DB10G.dmp
logfile=impdpDB10G.log
Data pump performance can be improved by using the PARALLEL parameter. This should be
used in conjunction with the "%U" wildcard in the DUMPFILE parameter to allow multiple
dumpfiles to be created or read:
expdp scott/tiger schemas=SCOTT directory=TEST_DIR parallel=4
dumpfile=SCOTT_%U.dmp logfile=expdpSCOTT.log
Each thread creates a separate dumpfile, so the parameter dumpfile should have as many entries
as the degree of parallelism.
Note how the dumpfile parameter has a wild card %U, which indicates the files will be created as
needed and the format will be SCOTT_nn.dmp, where nn starts at 01 and goes up as needed.
The INCLUDE and EXCLUDE parameters can be used to limit the export/import to specific
objects. When the INCLUDE parameter is used, only those objects specified by it will be
included in the export. When the EXCLUDE parameter is used all objects except those specified
by it will be included in the export:
expdp scott/tiger schemas=SCOTT include=TABLE:\"IN (\'EMP\',
\'DEPT\')\" directory=TEST_DIR dumpfile=SCOTT.dmp
logfile=expdpSCOTT.log
expdp scott/tiger schemas=SCOTT exclude=TABLE:\"= \'BONUS\'\"
directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
Monitoring Export:
While Data Pump Export is running, press Control-C; it will stop the display of the messages on
the screen, but not the export process itself. Instead, it will display the Data Pump Export prompt
as shown below. The process is now said to be in "interactive" mode:
Export>
This approach allows several commands to be entered on that Data Pump Export job. To find a
summary, use the STATUS command at the prompt:
Export> status
Job: CASES_EXPORT
Operation: EXPORT
Mode: TABLE
State: EXECUTING
Degree: 1
Job Error Count: 0
Dump file: /u02/dpdata1/expCASES.dmp
bytes written = 2048
Worker 1 Status:
State: EXECUTING
Object Schema: DWOWNER
Object Name: CASES
Object Type: TABLE_EXPORT/TBL_TABLE_DATA/TABLE/TABLE_DATA
Completed Objects: 1
Total Objects: 1
Completed Rows: 4687818
Remember, this is merely the status display. The export is working in the background. To
continue to see the messages on the screen, use the command CONTINUE_CLIENT from the
Export prompt.
While Data Pump jobs are running, you can pause them by issuing STOP_JOB on the Data
Pump Export or Data Pump Import prompts and then restart them with START_JOB.
This functionality comes in handy when you run out of space and want to make corrections
before continuing.
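A minimal sketch of that workflow, assuming a job named CASES_EXPORT like the one above:
expdp system/password attach=CASES_EXPORT
Export> stop_job=immediate
(fix the space problem, then reattach)
expdp system/password attach=CASES_EXPORT
Export> start_job
Export> continue_client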
A simple way to gain insight into the status of a Data Pump job is to look at a few views
maintained within the Oracle instance where the Data Pump job is running.
These views are DBA_DATAPUMP_JOBS, DBA_DATAPUMP_SESSIONS, and
V$SESSION_LONGOPS. They are critical in monitoring your export jobs, and with the job
information they provide you can attach to a Data Pump job and modify its execution.
DBA_DATAPUMP_JOBS
This view will show the active Data Pump jobs, their state, degree of parallelism, and the number
of sessions attached.
DBA_DATAPUMP_SESSIONS
This view gives the SADDR that assists in determining why a Data Pump session may be
having problems. Join it to the V$SESSION view for further information.
V$SESSION_LONGOPS
This view helps determine how well a Data Pump export is doing. It also shows you any
operation that is taking a long time to execute.
It basically gives you a progress indicator through the MESSAGE column.
Monitor at the OS - Do a "ps -ef" on the Data Pump processes and watch them consume CPU. You can
also monitor the Data Pump log file with the "tail -f" command, watching the progress of the
import in real time. If you watch a log from the original import utility, note that its
feedback=1000 parameter directs import to display a dot every 1,000 rows inserted.
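For example (the process name patterns refer to the Data Pump master DMnn and worker DWnn background processes; the log file path is only a placeholder):
ps -ef | grep ora_dm        # Data Pump master process (DMnn)
ps -ef | grep ora_dw        # Data Pump worker processes (DWnn)
tail -f /u02/dpdata1/expCASES.log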
Monitor with the Data Pump views - The main views to monitor import jobs are
dba_datapump_jobs and dba_datapump_sessions.
Monitor with longops - You can query the v$session_longops to see the progress of data pump,
querying the sofar and totalwork columns.
select x.job_name,b.state,b.job_mode,b.degree
, x.owner_name,z.sql_text, p.message
, p.totalwork, p.sofar
, round((p.sofar/p.totalwork)*100,2) done
, p.time_remaining
from dba_datapump_jobs b
left join dba_datapump_sessions x on (x.job_name = b.job_name)
left join v$session y on (y.saddr = x.saddr)
left join v$sql z on (y.sql_id = z.sql_id)
left join v$session_longops p ON (p.sql_id = y.sql_id)
WHERE y.module='Data Pump Worker'
AND p.time_remaining > 0;
The following are the major new features that provide this increased performance, as well as enhanced
ease of use:
The ability to specify the maximum number of threads of active execution operating on behalf of
the Data Pump job. This enables you to adjust resource consumption versus elapsed time. See
PARALLEL for information about using this parameter in export. See PARALLEL for information
about using this parameter in import. (This feature is available only in the Enterprise Edition of
Oracle Database 10g.)
The ability to restart Data Pump jobs. See START_JOB for information about restarting export
jobs. See START_JOB for information about restarting import jobs.
The ability to detach from and reattach to long-running jobs without affecting the job itself. This
allows DBAs and other operations personnel to monitor jobs from multiple locations. The Data
Pump Export and Import utilities can be attached to only one job at a time; however, you can
have multiple clients or jobs running at one time. (If you are using the Data Pump API, the
restriction on attaching to only one job at a time does not apply.) You can also have multiple
clients attached to the same job. See ATTACH for information about using this parameter in
export. See ATTACH for information about using this parameter in import.
Support for export and import operations over the network, in which the source of each operation
is a remote instance. See NETWORK_LINK for information about using this parameter in export.
See NETWORK_LINK for information about using this parameter in import.
The ability, in an import job, to change the name of the source datafile to a different name in all
DDL statements where the source datafile is referenced. See REMAP_DATAFILE.
Enhanced support for remapping tablespaces during an import operation. See
REMAP_TABLESPACE.
Support for filtering the metadata that is exported and imported, based upon objects and object
types. For information about filtering metadata during an export operation, see INCLUDE and
EXCLUDE. For information about filtering metadata during an import operation, see INCLUDE
and EXCLUDE.
Support for an interactive-command mode that allows monitoring of and interaction with ongoing
jobs. See Commands Available in Export's Interactive-Command Mode and Commands Available
in Import's Interactive-Command Mode.
The ability to estimate how much space an export job would consume, without actually
performing the export. See ESTIMATE_ONLY.
The ability to specify the version of database objects to be moved. In export jobs, VERSION
applies to the version of the database objects to be exported. See VERSION for more information
about using this parameter in export.
In import jobs, VERSION applies only to operations over the network. This means that VERSION
applies to the version of database objects to be extracted from the source database. See
VERSION for more information about using this parameter in import.
Most Data Pump export and import operations occur on the Oracle database server. (This
contrasts with original export and import, which were primarily client-based.) See Default
Locations for Dump, Log, and SQL Files for information about some of the implications of server-
based operations.
The DBA_DATAPUMP_JOBS and USER_DATAPUMP_JOBS views identify all active Data Pump jobs,
regardless of their state, on an instance (or on all instances for Real Application Clusters). They also
show all Data Pump master tables not currently associated with an active job. You can use the job
information to attach to an active job. Once you are attached to the job, you can stop it, change its
parallelism, or monitor its progress. You can use the master table information to restart a stopped job or
to remove any master tables that are no longer needed.
Table 1-1 describes the columns in the DBA_DATAPUMP_JOBS view and the USER_DATAPUMP_JOBS
view.
Column      Datatype      Description
OWNER_NAME  VARCHAR2(30)  User who initiated the job (valid only for DBA_DATAPUMP_JOBS)
JOB_NAME    VARCHAR2(30)  User-supplied name for the job (or the default name generated by the server)
The DBA_DATAPUMP_SESSIONS view identifies the user sessions that are attached to a job. The
information in this view is useful for determining why a stopped operation has not gone away.
Column    Datatype                           Description
JOB_NAME  VARCHAR2(30)                       User-supplied name for the job (or the default name generated by the server)
SADDR     RAW(4) (RAW(8) on 64-bit systems)  Address of the session attached to the job. Can be used with the V$SESSION view.
Data Pump operations that transfer table data (export and import) maintain an entry in the
V$SESSION_LONGOPS dynamic performance view indicating the job progress (in megabytes of table data
transferred). The entry contains the estimated transfer size and is periodically updated to reflect the
actual amount of data transferred.
Note:
The usefulness of the estimate value for export operations depends on the type of
estimation requested when the operation was initiated, and it is updated as required
if exceeded by the actual transfer amount. The estimate value for import operations
is exact.
The V$SESSION_LONGOPS columns that are relevant to a Data Pump job include USERNAME (job owner), OPNAME (job name), TARGET_DESC (job operation), SOFAR (megabytes transferred so far), TOTALWORK (estimated total megabytes), UNITS (MB), and MESSAGE (a formatted status message).
-----
-- Listing 1.1: Setting up a DIRECTORY object for DataPump use.
-- Note that the directory folder need not exist for this command
-- to succeed, but any subsequent attempt to utilize the DIRECTORY
-- object will fail until the folder is created on the server.
-- This should be run from SYSTEM for best results
-----
DROP DIRECTORY export_dir;
CREATE DIRECTORY export_dir as 'c:\oracle\export_dir';
GRANT READ, WRITE ON DIRECTORY export_dir TO hr, sh;
-----
-- Listing 1.2: Determining what object types can be exported/imported
-- and filtering levels available
-----
COL object_path FORMAT A25 HEADING 'Object Path Name'
COL comments FORMAT A50 HEADING 'Object Description'
COL named FORMAT A3 HEADING 'Nmd|Objs'
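The query itself is not shown above; a sketch that fits those column definitions, using the DATABASE_EXPORT_OBJECTS view (SCHEMA_EXPORT_OBJECTS and TABLE_EXPORT_OBJECTS exist as well), could be:
SELECT object_path, comments, named
  FROM database_export_objects
 WHERE object_path NOT LIKE '%/%'   -- top-level object paths only
 ORDER BY object_path;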
-----
-- Listing 1.3: A simple DataPump Export operation. Note that if the export
-- dump file already exists when this is executed, Oracle will
-- return an ORA-39000 error and terminate the operation
-----
EXPDP hr/hr DUMPFILE=export_dir:hr_schema.dmp
LOGFILE=export_dir:hr_schema.explog
SET ORACLE_SID=zdcdb
EXPDP system/******** PARFILE=c:\rmancmd\dpe_1.expctl
# Contents of the dpe_1.expctl parameter file:
DIRECTORY=export_dir
SCHEMAS=HR,OE
JOB_NAME=hr_oe_schema
DUMPFILE=export_dir:hr_oe_schemas.dmp
LOGFILE=export_dir:hr_oe_schemas.explog
-----
-- Listing 1.4: A simple DataPump Import. Note that only database objects from
-- the HR schema will be used to populate a new schema (HR_OLTP),
-- and all objects other than tables and their dependent objects
-- will be excluded from the import
-----
SET ORACLE_SID=dbaref
IMPDP system/****** PARFILE=export_dir:dpi_1.impctl
# Contents of the dpi_1.impctl parameter file:
DIRECTORY=export_dir
JOB_NAME=hr_oltp_import
DUMPFILE=export_dir:hr_oe_schemas.dmp
LOGFILE=export_dir:hr_oltp_import.implog
REMAP_SCHEMA=hr:hr_oltp
STATUS=5
-----
-- Listing 1.5: Querying status of DataPump operations
-----
TTITLE 'Currently Active DataPump Operations'
COL owner_name FORMAT A06 HEADING 'Owner'
COL job_name FORMAT A20 HEADING 'JobName'
COL operation FORMAT A12 HEADING 'Operation'
COL job_mode FORMAT A12 HEADING 'JobMode'
COL state FORMAT A12 HEADING 'State'
COL degree FORMAT 9999 HEADING 'Degr'
COL attached_sessions FORMAT 9999 HEADING 'Sess'
SELECT
owner_name
,job_name
,operation
,job_mode
,state
,degree
,attached_sessions
FROM dba_datapump_jobs
;
SELECT
DPS.owner_name
,DPS.job_name
,S.osuser
FROM
dba_datapump_sessions DPS
,v$session S
WHERE S.saddr = DPS.saddr
;
--------------------------------------------------------------
Step 1:
One of my friends asked me whether a Data Pump dump file can be stored in an ASM disk group. Yes, it
can. Let's see how to create the directory and store the dump file there.
Step 2: Go to the DB instance.
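The actual commands are not reproduced here; a minimal sketch, assuming a disk group named +DATA and remembering that Data Pump log files cannot be written into ASM (so the log goes to a regular file-system directory), could be:
-- On the ASM instance: create the ASM directory
ALTER DISKGROUP data ADD DIRECTORY '+DATA/dpump';
-- On the DB instance: directory objects for the dump file (ASM) and the log file (file system)
CREATE DIRECTORY dp_asm AS '+DATA/dpump';
CREATE DIRECTORY dp_log AS '/u01/app/oracle/dpump_log';
GRANT READ, WRITE ON DIRECTORY dp_asm TO scott;
GRANT READ, WRITE ON DIRECTORY dp_log TO scott;
-- Export, writing the dump file into ASM and the log file to disk
expdp scott/tiger schemas=scott dumpfile=dp_asm:scott_asm.dmp logfile=dp_log:scott_asm.log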
--------------------------------------------------------
How to export from Oracle 11.2 and import into a 10.2 database
Many OTN users have repeatedly asked how to export from a higher version and import into a
lower version. Here is how to do it, using Data Pump to export from Oracle 11.2 and import
into 10.2.
Step 1: On the 11.2 source database, create a directory object (test_dir), grant it to scott,
create a test table TESTVERSION in the scott schema, and insert and commit one row.
Step 2: Export the table from the 11.2 source database:
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 -
Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_TABLE_01": scott/********@azardb
directory=test_dir dumpfile=testver.dmp tables=testversion
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "SCOTT"."TESTVERSION"                        5.031 KB       1 rows
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
D:\BACKUPNEW\DUMP\TESTVER.DMP
Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 15:54:40
Step 3: I copied this TESTVER.DMP file into the target 10.2 database directory and tried to import it.
The import fails with an error, so you need to export the data from the 11.2 source database using
the VERSION parameter.
Step 4: Re-export from the 11.2 source database, this time specifying the target version.
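The actual command is not reproduced here; a sketch, reusing the directory and table from Step 2 and a hypothetical dump file name, would be:
expdp scott/tiger@azardb directory=test_dir dumpfile=testver102.dmp tables=testversion version=10.2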
Step 5: I copied this new dump file to the target 10.2 database directory and imported it.
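A sketch of that import on the 10.2 target, assuming a directory object named test_dir exists there as well:
impdp scott/tiger directory=test_dir dumpfile=testver102.dmp tables=testversion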
Data Pump Export will return an error if you specify a dump file name that already
exists. The REUSE_DUMPFILES parameter allows you to override that behavior and reuse
a dump file name.
This parameter is available in Oracle 11g, not in Oracle 10g.
--------------------------------------------------------------------------------------------------
Here I would like to show how the COMPRESSION Data Pump parameter works in Oracle 11g R2 and how
the resulting dump file sizes vary between the options.
Default: METADATA_ONLY
Purpose
Specifies which data to compress before writing to the dump file set
ALL enables compression for the entire export operation. The ALL
option requires that the Oracle Advanced Compression option be
enabled.
DATA_ONLY results in all data being written to the dump file in
compressed format. The DATA_ONLY option requires that the Oracle
Advanced Compression option be enabled.
METADATA_ONLY results in all metadata being written to the dump file
in compressed format. This is the default.
NONE disables compression for the entire export operation.
Restrictions: To make full use of all these compression options, the COMPATIBLE initialization
parameter must be set to at least 11.0.0.
Compression =METADATA_ONLY
C:\Users\mazar>set oracle_sid=azardb
C:\Users\mazar>expdp scott/tiger@azardb dumpfile=compressmeta.dmp
directory=data_pump_dir compression=metadata_only
Compression =ALL
Compression =DATA_ONLY
Compression =NONE
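The corresponding commands for the other settings follow the same pattern (the screenshots with the resulting file sizes are not reproduced here, and the dump file names are placeholders):
expdp scott/tiger@azardb dumpfile=compressall.dmp directory=data_pump_dir compression=all
expdp scott/tiger@azardb dumpfile=compressdata.dmp directory=data_pump_dir compression=data_only
expdp scott/tiger@azardb dumpfile=compressnone.dmp directory=data_pump_dir compression=none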
Running the export with each setting and comparing the resulting dump file sizes shows the effect of each COMPRESSION option.
---------------------------
Recently, I came to know about KEEP_MASTER and METRICS, two undocumented parameters of
EXPDP/IMPDP. METRICS reports the time it took to process each class of objects, and KEEP_MASTER
prevents the Data Pump master table from being deleted after an export/import job completes.
Let's check:
First, a normal table-mode export was run as user ANAND; the dump file set was written to:
D:\ORACLE\APP\ADMIN\MATRIX\DPDUMP\ABC.DMP
As the job completed successfully, the export master table SYS_EXPORT_TABLE_01 was dropped automatically:
SQL> select owner, segment_name, segment_type, tablespace_name, (bytes/1024/1024) MB
     from dba_segments
     where segment_name = 'SYS_EXPORT_TABLE_01';
no rows selected
Next, a full-mode import with the SQLFILE parameter was run as user ANAND, this time with
KEEP_MASTER=Y, so its master table (SYS_SQL_FILE_FULL_01) is retained and can be queried.
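The command itself is not shown above; a sketch of what it could have looked like, assuming the same dump file and a hypothetical directory object and SQL file name, is:
impdp anand/password full=y dumpfile=abc.dmp sqlfile=abc_ddl.sql
      directory=dpdump_dir keep_master=y metrics=y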
SQL> select owner, segment_name, segment_type, tablespace_name, (bytes/1024/1024) MB
     from dba_segments
     where segment_name = 'SYS_SQL_FILE_FULL_01';
SQL> select object_type, object_name, object_schema, original_object_schema,
            original_object_name, object_tablespace, size_estimate, object_row
     from SYS_SQL_FILE_FULL_01
     where original_object_schema is not null;
OBJECT_TABLESPACE -> shows the tablespace into which the object will be imported.
The retained master table can be used to find the owners, objects, and other information contained in
the dump file, in case you do not know what the dump contains. Of course, the SQLFILE parameter can be
used for the same purpose, but this is another option.
---------------------------------------------------------------------------------------------