v EXPORT AND IMPORT
There are four types of exports; example commands for each are sketched after the list:
1) Full database export – requires the EXP_FULL_DATABASE role; SYS has this role by default
2) Schema export
3) Table export
4) Tablespace export
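As a rough sketch of what each type looks like on the command line (the directory, dump file, and object names below are placeholders, not values from this environment):
$ expdp system/**** full=y directory=data_pump_dir dumpfile=full.dmp logfile=full.log
$ expdp system/**** schemas=scott directory=data_pump_dir dumpfile=scott.dmp logfile=scott.log
$ expdp system/**** tables=scott.emp directory=data_pump_dir dumpfile=emp.dmp logfile=emp.log
$ expdp system/**** tablespaces=users directory=data_pump_dir dumpfile=users.dmp logfile=users.log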
v TABLES EXPORT AND IMPORT:-
Log in to WPIBM072
SQL> create or replace directory test_dir as '/oracle/ps/exports2/sreeji/dev7prod/';
SQL> grant read, write on directory test_dir to PSHCM;
expdp PSHCM/ACC3SS7D@PSHCMD7 tables=PS_A_WK_ROLLUP_ERN directory=TEST_DIR dumpfile=exportdev7tbl.dmp logfile=exportdev7tbl.log
Login to WPIBM026
SQL> create or replace directory test_dir as '/oracle/ps/exports2/sreeji/dev7prod/';
SQL> grant read, write on directory test_dir to PSHCM;
expdp PSHCM/ACC3SS1P@PSHCMP1 tables=PS_A_WK_ROLLUP_ERN directory=TEST_DIR dumpfile=exportpro1tbl.dmp logfile=exportpro1tbl.log
Once this table is backed up by the above script, run the import below:
impdp PSHCM/ACC3SS1P@PSHCMP1 tables=PS_A_WK_ROLLUP_ERN DIRECTORY=test_dir dumpfile=exportdev7tbl.dmp logfile=exportdev7tbl.log TABLE_EXISTS_ACTION=REPLACE
PARAMETER: TABLE_EXISTS_ACTION options (a usage sketch follows the list):
· SKIP: leaves the table as is and moves on to the next object. This is the default. It is not a valid option if the CONTENT parameter is set to DATA_ONLY.
· APPEND loads rows from the source and leaves existing rows unchanged.
· TRUNCATE deletes existing rows and then loads rows from the source.
· REPLACE drops the existing table and then recreates and loads it from the source. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.
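For example, to keep the existing table definition (along with its grants and indexes) but reload the rows, the same import can be run with TRUNCATE instead of REPLACE; a sketch based on the command above:
impdp PSHCM/ACC3SS1P@PSHCMP1 tables=PS_A_WK_ROLLUP_ERN DIRECTORY=test_dir dumpfile=exportdev7tbl.dmp logfile=exportdev7tbl.log TABLE_EXISTS_ACTION=TRUNCATE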
Afterwards, for safety, copy both dump files to your RDT using WinSCP from the location /oracle/ps/exports2/sreeji/dev7prod/.
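To confirm the refresh, a simple row count run on both databases is a quick sanity check (table name as used above):
SQL> select count(*) from PSHCM.PS_A_WK_ROLLUP_ERN;
The source and target counts should match after the import.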
v SCHEMAS EXPORT: Datapump Schema Refresh:-
Refresh SCOTT Schema from PROD TO TARGET:
SQL> select banner from v$version; -- shows the database version
On the Source Database PROD:
Create a directory or use an existing directory (ex: data_pump_dir) and grant read and write permissions on this directory to user SYSTEM:
CREATE OR REPLACE DIRECTORY DATA_PUMP_DIR as '/oracle/ps/exports2/sreeji/';
Grant read, write on directory DATA_PUMP_DIR to system;
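To double-check that the directory object points at the intended path, dba_directories can be queried:
SQL> select directory_name, directory_path from dba_directories where directory_name = 'DATA_PUMP_DIR';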
Step 1: Exporting the data from prod (source)
$ vi expdp_refresh_schema.sh
Put the export command into the script:
expdp system/****@sourcehostname dumpfile=expdpschema.dmp directory=data_pump_dir logfile=export.log schemas=scott
Then run the script in the background:
$ nohup sh expdp_refresh_schema.sh > refresh_schema.out &
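Since the export runs in the background, its progress can be followed by tailing the nohup output file named above:
$ tail -f refresh_schema.out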
Step 2: Copying the dump files from source to target
To copy dump files from one server to another we can use WinSCP (a graphical tool for copying files between Windows and Linux), FTP, SFTP, SCP, etc.
$ scp expdpschema.dmp system@targethostname:/oracle/admin/datapump
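A checksum computed on both hosts is a cheap way to confirm the file arrived intact (assuming md5sum is available on both servers):
$ md5sum expdpschema.dmp
Run it on the source and the target; the two sums should match.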
Step 3: Importing data from the dump file into the target database
Before importing the dump file into the target (TEST), make sure you delete or back up all the objects in that schema so the import starts from a clean slate.
$ impdp system/****@targethostname dumpfile=expdpschema.dmp directory=data_pump_dir logfile=import.log remap_schema=scott:newscott
Step 4: Verify target database objects against the source
SQL> select count(*) from dba_objects where owner = 'NEWSCOTT'; -- on the target
SQL> select count(*) from dba_objects where owner = 'SCOTT'; -- on the source
The two counts should match: the NEWSCOTT schema on the target should contain the same objects as the source SCOTT schema.
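If the totals differ, the same comparison broken down by object type (run on both sides with the respective owner) narrows down what is missing:
SQL> select object_type, count(*) from dba_objects where owner = 'NEWSCOTT' group by object_type order by object_type;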
v DATAPUMP SCENARIOS:
1) Import 11g data dump into 10g:
You can use Oracle Data Pump to export a dump from an 11g server and import it into 10g using the Data Pump parameter VERSION. When you use VERSION, Data Pump exports only those objects that are compatible with your target version.
$ expdp hr/hr dumpfile=dumpfilename.dmp directory=data_pump_dir logfile=logfile.log schemas=SCOTT version=10.2
Once the export is done, you do a regular import on the 10g server, as sketched below.
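The import side is then an ordinary impdp run on the 10g host; a sketch, assuming the dump file has been copied into that server's data_pump_dir:
$ impdp hr/hr dumpfile=dumpfilename.dmp directory=data_pump_dir logfile=import10g.log schemas=SCOTT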
v IMPORTING MULTIPLE DUMPFILES:
If a dump file is large, it will usually be split into smaller chunks for easier FTP upload/download.
If you are trying to import a dump file that is split into many chunks, you need to modify the DUMPFILE parameter to include the %U substitution variable.
Ex: If the dump files are named EXP_PROD_1.dmp, EXP_PROD_2.dmp etc., then use DUMPFILE=EXP_PROD_%U.DMP
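On the export side, the same %U pattern together with the FILESIZE parameter is what produces the chunks in the first place; a sketch (schema, directory, and chunk size are placeholders):
$ expdp system/**** schemas=scott directory=data_pump_dir dumpfile=EXP_PROD_%U.dmp filesize=2g logfile=exp_prod.log
$ impdp system/**** schemas=scott directory=data_pump_dir dumpfile=EXP_PROD_%U.dmp logfile=imp_prod.log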
v HOW TO KILL DATAPUMP JOBS:
When you import or export using the Data Pump impdp or expdp tools, the work is done by a server-side job. You also have the option to give the job a name using the JOB_NAME parameter.
The following SQL gives you the list of Data Pump jobs:
select * from dba_datapump_jobs;
If you want to kill your impdp or expdp job:
1) Make sure that your impdp/expdp command prompt window is active.
2) Press Control-C. This interrupts the logging and drops you to the Import> (or Export>) prompt; the job itself keeps running. Don't press Control-C a second time, or it will close the command prompt window while the job continues running in the background.
3) Type kill_job:
ex: Import> kill_job
Are you sure you wish to stop this job (y/n): y
· If by mistake you closed the window and your import/export job is still running:
1) Get the name of the job:
select * from dba_datapump_jobs;
2) Open a new command prompt window and attach to the job (use expdp rather than impdp if it is an export job):
impdp username/password@database attach=name_of_the_job
3) Once you are attached to the job, type kill_job:
ex: Import> kill_job
Are you sure you wish to stop this job (y/n): y
The job is now killed and will no longer show up in dba_datapump_jobs.
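Note that kill_job deletes the job and its master table for good. If the intent is only to pause the work and resume it later, the interactive stop_job command is the gentler alternative, since a stopped job can be re-attached and restarted; a sketch (job name is a placeholder):
Import> stop_job=immediate
$ impdp username/password@database attach=name_of_the_job
Import> start_job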
v REUSE DUMPFILE:
REUSE_DUMPFILES (overwrite existing dump file):
This is an option of the Data Pump expdp utility. Normally, when you perform an export with expdp and the dump file is already present in the export directory, it throws the error "ORA-27038: created file already exists". This happens when you want to perform repeated exports using the same dump file name. Oracle provides the option REUSE_DUMPFILES=[Y/N] to avoid this error: set it to Y to overwrite the existing dump file. By default the option is N.
$ expdp scott/tiger directory=exp_dir dumpfile=x.dmp tables=example reuse_dumpfiles=y