
Exports can be Done in 4 Levels

Exports can be done at 4 Levels.

  1. Table-Level
  2. User-Level
  3. Tablespace-Level
  4. DB-Level (you need the DBA privilege)
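
As a quick orientation, one exp command per level might look like the sketch below (the dump and log file names are placeholders, and the TABLESPACES parameter only exists in later exp releases):

Table-Level:      $ exp scott/tiger tables=emp,dept file=tabs.dmp log=tabs.log
User-Level:       $ exp system/manager owner=scott file=scott.dmp log=scott.log
Tablespace-Level: $ exp system/manager tablespaces=users file=ts.dmp log=ts.log
DB-Level:         $ exp system/manager full=y file=full.dmp log=full.log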

At Table-Level we can mention one or more tables, according to our need; the same rule applies at User-Level (one or more schemas). But we can't mix different levels and options in one run. For example, "Scott's full schema plus Steve's TABLE_A and TABLE_B" can't be expressed: once you select User-Level, the complete schema is exported. At Table-Level, however, a DBA may list tables from several schemas, so this is allowed:

tables=scott.emp,scott.dept,steve.tab1,steve.tab2
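Run from a DBA account, the complete command line might look like this (the dump and log file names are just placeholders):

$ exp system/manager tables=scott.emp,scott.dept,steve.tab1,steve.tab2 \
  file=multi_schema.dmp log=multi_schema.log
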
To see help in Exp or Imp, simply say
$ exp help=y
This will show you all the different parameters with small descriptions and with their default values.

An export is basically: table definitions + data + constraint definitions + index definitions + view definitions + synonym definitions + PL/SQL procedures + packages + DB links + sequences, etc. As we can see, except for the table data everything else is definition-only. This is the reason the export is a lot smaller (probably 8-10 times) than the database itself. And this DMP file (which came from our Exp) can be compressed further using the UNIX "compress" utility (roughly another 10 times). So ultimately we can back up a 10 GB database in about 120 MB and make it very portable.
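For example, with the standard UNIX utilities (the dump file name is just a placeholder):

$ compress scott.dmp        (produces scott.dmp.Z)
$ uncompress scott.dmp.Z    (restore it before running imp)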

When we do a DB-Level export, it contains definitions for all tablespaces, undo segments, users and everything else. But the reasons why the export file is still so small are:

  1. It carries definitions only. For example, a table may have 8 indexes; in that case the indexes occupy more space in the database than the table itself, yet at export time we only carry the INDEX definitions.
  2. SYS tables (base tables) are not exported. The reason is that these base tables belong to this database; we don't have to take them to the target database, since that database has its own set of base tables.
  3. For TEMP, only the definition is exported, not the contents.
  4. REDO logs and the controlfile are not exported. All these are the reasons why our export file is so small compared to the physical database, even though we exported the FULL database.

In Oracle there is no utility that gives you the complete schema definition, but we can get this out using Exp and Imp. What we do:

$ exp scott/tiger file=scott.dmp log=scott.log
$ imp scott/tiger file=scott.dmp show=y log=scott_schema.sql

Once the second command is done, we should have a file "scott_schema.sql". By editing this file, we can get the complete schema for SCOTT.
Exp must be done while the database is up and running (same with Import).
While Exp is going on, users may access and modify the database. But when this happens, Exp does not guarantee a point-in-time image. This is the reason we do Exp at night, before doing the physical backup.
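(Classic exp also offers a CONSISTENT=Y parameter, which makes the whole dump read-consistent as of the moment the export started, at the cost of extra undo usage; the file names below are placeholders:)

$ exp system/manager full=y consistent=y file=full.dmp log=full.log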
Always mention the file names, i.e. the DMP-file and LOG-file names. Otherwise Oracle will use the default DMP file name, which is "expdat.dmp".
By default Oracle only offers a 256K buffer, which is very small when exporting very big tables. So supply your own buffer size as a parameter.
In case there are many tables to be exported, instead of mentioning all their names on the command line, you can create a "parameter file" in which you list all the tables, the buffer size, the file names, etc.
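A minimal sketch of such a parameter file (the name exp_scott.par and its contents are made-up examples):

$ cat exp_scott.par
file=scott_tables.dmp
log=scott_tables.log
buffer=2000000
tables=(emp,dept,bonus)

$ exp scott/tiger parfile=exp_scott.par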
If you want to export some tables from SCOTT and import them into STEVE, you can do it from a DBA account.

$ exp system/manager tables=scott.emp file=scott_emp.dmp \
  log=scott_emp.log
$ imp system/manager fromuser=scott touser=steve file=scott_emp.dmp

Exp can be done in 4 different ways:

  1. Command line
  2. Interactive method
  3. Parameter file
  4. Oracle Enterprise Manager (OEM)

But usually we use the command-line method, since we can incorporate it all into a batch file (a shell script).
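A minimal sketch of such a script (the script name, account, and file names are all assumptions), launched with NOHUP as recommended in the points below:

$ cat exp_nightly.sh
#!/bin/sh
# Nightly full export; the dump and log names carry the date
DT=`date +%d%b%y`
exp system/manager full=y buffer=2000000 \
    file=expfull_$DT.dmp log=expfull_$DT.log
compress expfull_$DT.dmp

$ nohup sh exp_nightly.sh > exp_nightly.out 2>&1 &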

EXAMPLES:

  1. If you want to import all of Scott's objects into JOHN

$ imp system/manager file=scott.dmp log=scott_imp.log fromuser=scott touser=john

  2. If you want to import the full database

$ imp system/manager full=y file=exp.dmp log=imp.log

  3. If you want to import only EMP and DEPT into STEEVE

$ imp system/manager file=scott1.dmp tables=emp,dept \
  fromuser=scott touser=steeve

  4. If you want to import only the data of EMP and ignore errors

$ imp system/manager file=steeve.dmp log=steeve_ign.log tables=emp ignore=y

  5. Migrating data across partitions and tables, using the table:partition syntax (here QC and QD are partitions of table E)

$ imp scott/tiger file=export.dmp tables=(e:qc,e:qd) ignore=y

We do logical Backups for:

  1. Moving database objects from one user account to another user account.
  2. Upgrading from one Oracle version to another.
  3. Migrating from one platform to another.
  4. As an alternative to the regular COLD or HOT (physical) backup: if the tape doesn't work, at least we can recover from the logical backup.
  5. For a database REORG: if the database has been growing for a while, it has to manage too many extents (since the segments keep on growing). The only way to get rid of so many extents is a full export/import, as in the steps below.
  6. Moving database objects from one database to another.

- Take a FULL database export
- Drop the database
- Re-create the database
- Import the FULL database

By doing so, every segment is re-created with an INITIAL extent as big as the table's total space (basically it adds up all the extents' space and makes one INITIAL extent). Another important thing is that a REORG ensures the INTEGRITY of Oracle data. Although we probably never see any problem with the data, there is a small chance that Oracle holds bad data because of disk-block corruption, which would hide in the dark until you try to select the data from that table. These kinds of intricacies can be avoided by doing a REORG at least every 4 months.
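
To judge whether a REORG is due, a DBA can count the extents per segment; a minimal query sketch (the SCOTT filter is just an example):

SQL> SELECT owner, segment_name, COUNT(*) extents
     FROM dba_extents
     WHERE owner = 'SCOTT'
     GROUP BY owner, segment_name
     ORDER BY 3 DESC;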

Points to remember before doing export or import:

  1. Always specify 'LOG=<logfile>' when you do an EXPORT, so you know exactly what happened while you were away from the machine. It will also be helpful at IMPORT time: we can compare the LOG files and make sure all tables were rightly imported with all their rows.
  2. Always specify 'BUFFER=2000000' so that it overrides the system default BUFFER parameter, which is only 256K; EXP writes to the export file whenever it has accumulated a buffer's worth of data in memory.
  3. Always specify 'FILE=<filename>', otherwise Oracle starts writing to a default file called EXPDAT.DMP, by which you may overwrite an earlier export that might still be needed.
  4. Always perform Exp by giving the complete syntax on the command line rather than using the interactive method.
  5. Whenever you do EXP or IMP, don't run it at the prompt; instead write the command in a file and execute it as a SHELL program. And not directly, but by using either NOHUP or AT (UNIX commands). This will ensure that even if there is a problem with your terminal or modem, the command will still run as a background process.
  6. After the EXPORT finishes, compress the dump file so that it won't occupy much space.
  7. If you are doing an EXP with 'FULL=Y' and the database is relatively large, don't create indexes as part of the IMP; rather, IMP with 'INDEXES=N' and create the indexes manually after the IMP is done.
  8. EXP and IMP are the only way to get all the SCHEMA definitions. You can get the SCHEMA definitions by giving the following commands.

$ exp scott/tiger file=expscott.dmp log=expscott.log
$ imp scott/tiger file=expscott.dmp log=scott_DDL.sql

  9. Issue these commands immediately one after the other, without dropping any tables or deleting any data or objects. The import fails because we are trying to create all of SCOTT's objects once again while they already exist, so the IMP process generates errors for every statement and dumps them, together with the statements, into the LOG file. Once the IMP finishes, we can go into the LOG file and, by removing the error messages, get the entire SCHEMA definition (undocumented).
  10. In point 7 we discussed that INDEXES should be created after the IMP. But we don't have any SQL script to generate the indexes. Please see the commands:

$ exp sys/sys full=y file=expfull_Dec08.dmp log=expfull.log \
  buffer=2000000
$ imp sys/sys full=y file=expfull_Dec08.dmp indexfile=cr8_indexes.sql

(At this point no objects are created; Oracle only writes all the INDEX creation statements to this file.)

Now re-create the database and issue the following command

$ imp sys/sys full=y file=expfull_Dec08.dmp log=impfull.log \
  buffer=2000000 indexes=n commit=y

Edit the file "cr8_indexes.sql", since it has info like this:

CONNECT SCOTT;

CREATE INDEX ...

CONNECT STEVE;

As we know, the CONNECT statements would fail because no password is given with the user ID; hence the entire file would have incomplete statements. This can be fixed by turning each CONNECT into the following statement:


SQL> ALTER SESSION SET CURRENT_SCHEMA = SCOTT;

This can be done in the 'vi' editor with:

:1,$s/CONNECT /ALTER SESSION SET CURRENT_SCHEMA = /
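
After the substitution the file reads, for example:

ALTER SESSION SET CURRENT_SCHEMA = SCOTT;
CREATE INDEX ...
ALTER SESSION SET CURRENT_SCHEMA = STEVE;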

This is very helpful since we don't really know all the users' passwords; that is why, from SYS or SYSTEM, we switch into each user's schema indirectly instead of logging in as that user.

  11. If you have large tables and the undo segments are not big enough to hold an entire table's worth of undo, you should use COMMIT=Y at import time (as shown in the example above). This ensures the data is committed to the TABLE whenever the BUFFER is full, so the undo segments don't fill up. There is a disadvantage to 'COMMIT=Y': if the IMPORT fails midway (for any reason), the last imported table may contain a PARTIAL set of rows, which would cause more failures when this table acts as a master table for other tables.

Solution: In the above scenario it is best to just drop the last table and start the same command once again.
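
A sketch of that recovery (the table and file names are made-up): drop the partially loaded table, then re-run the same import. The tables that were already imported fail their CREATE and are skipped, while the dropped table is re-created and fully re-loaded.

SQL> DROP TABLE steeve.big_tab;

$ imp system/manager file=steeve.dmp log=steeve_retry.log \
  fromuser=scott touser=steeve commit=y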

