Sunday, 1 July 2018

Interview Q and A for Oracle Export Import Part - 1

1. In which cases are imp/exp used?
-Eliminate database fragmentation
-Schema refresh (move a schema from one database to another; see the example after this list)
-Detect database corruption. Ensure that all the data can be read (if the data can be read that means there is no block corruption)
-Transporting tablespaces between databases
-Backup database objects
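For example, a simple schema refresh with the classic utilities might look like the sketch below (the SCOTT schema, passwords and file names are placeholders, not taken from this post):
exp system/password owner=scott file=scott.dmp log=exp_scott.log
imp system/password fromuser=scott touser=scott file=scott.dmp ignore=y log=imp_scott.log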

2.  Which are the common IMP/EXP problems?
ORA-00001: Unique constraint ... violated - You are probably importing duplicate rows. Use IGNORE=N to skip tables that already exist (imp will report an error when it tries to re-create the object), or drop/truncate the table and re-import it if you need a table refresh.
IMP-00015: Statement failed ... object already exists... - Use the IGNORE=Y import parameter to ignore these errors, but be careful as you might end up with duplicate rows.
ORA-01555: Snapshot too old - Ask your users to STOP working while you are exporting, or use the parameter CONSISTENT=NO (however, this option can create referential integrity problems, because the tables are not exported from a single point-in-time snapshot).
ORA-01562: Failed to extend rollback segment - Create bigger rollback segments or set the parameter COMMIT=Y (with an appropriate BUFFER parameter) while importing, as in the example below.
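A hedged example of re-importing into an existing table while limiting rollback usage (table, password and file names are placeholders):
imp system/password tables=emp file=emp.dmp ignore=y commit=y buffer=2000000 log=imp_emp.log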

3. How can we check whether a DATAPUMP dump file is corrupted or not?
Sometimes we need to check whether a dump file exported a long time back is still valid, or the application team reports that the dump file we provided is corrupted.
Use the SQLFILE parameter with an import command to detect corruption. With this parameter, impdp reads the entire Data Pump export dump file and reports an error if corruption is detected.
impdp system/*** directory=dump_dir dumpfile=expdp.dmp logfile=corruption_check.log sqlfile=corruption_check.sql
This will write all the DDL statements (which would be executed if an import were performed) into the file mentioned in the command; no data is actually imported.

4. How can we find the elapsed time for a particular object type during a Data Pump export?
Data Pump has an undocumented parameter METRICS that reports how much time it took to export the different object types.
expdp system/passwd directory=dump_dir dumpfile=expdp_full.dmp logfile=expdp_full.log full=y metrics=y

5. How to move a table from one tablespace to another?

You can use either of the methods below:
1. Export the table, drop the table, create the table definition in the new tablespace and then import the data with imp ignore=y (see the command sketch after this list).
2. Create a new table in the new tablespace, then drop the original table and rename the temporary table to the original table name:
CREATE TABLE temp_name TABLESPACE new_tablespace AS SELECT * FROM real_table;
DROP TABLE real_table;
RENAME temp_name TO real_table;
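A minimal sketch of method 1: export the table, drop and re-create it in the new tablespace from SQL*Plus, then import the rows back. The cut-down EMP table owned by SCOTT and the tablespace NEW_TBS are hypothetical names, not from the original post:
exp scott/tiger tables=emp file=emp.dmp log=exp_emp.log
DROP TABLE emp;
CREATE TABLE emp (empno NUMBER(4) PRIMARY KEY, ename VARCHAR2(10), sal NUMBER(7,2)) TABLESPACE new_tbs;
imp scott/tiger tables=emp file=emp.dmp ignore=y log=imp_emp.log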

6. What is the difference between SQL*loader and Import utilities?
Both utilities are used for loading data into the database. The difference is that the Import utility relies on data produced by the Oracle Export utility, while SQL*Loader is a high-speed data loading mechanism that can load data produced by other utilities from different data sources. SQL*Loader loads data from standard OS (flat) files into Oracle database tables; Export/Import moves existing data in Oracle's own format into and out of Oracle databases.
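As an illustration, a minimal SQL*Loader run might look like the sketch below (the control file name, CSV file and EMP columns are hypothetical). Contents of emp.ctl:
LOAD DATA
INFILE 'emp.csv'
APPEND INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename, sal)
Then run:
sqlldr scott/tiger control=emp.ctl log=emp_load.log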

7. How to re-organize schema?
We can use the dbms_redefinition package for online re-organization of schema objects. Otherwise, you can re-create or re-organize the schema using the export/import or Data Pump utilities.
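A minimal online-redefinition sketch using dbms_redefinition (SCOTT.EMP and the pre-created interim table EMP_INTERIM are hypothetical names, not from the original post):
DECLARE
  l_errors PLS_INTEGER;
BEGIN
  -- verify that the table can be redefined online using its primary key
  DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'EMP', DBMS_REDEFINITION.CONS_USE_PK);
  -- start copying rows into the interim table
  DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'EMP', 'EMP_INTERIM');
  -- clone indexes, triggers, constraints and grants onto the interim table
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT', 'EMP', 'EMP_INTERIM',
      DBMS_REDEFINITION.CONS_ORIG_PARAMS, TRUE, TRUE, TRUE, FALSE, l_errors);
  -- swap the original and interim table definitions
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'EMP', 'EMP_INTERIM');
END;
/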

8. How can we improve EXP performance?
1. Set the BUFFER parameter to a high value (e.g. 2 MB).
2. If you run multiple export sessions, ensure they write to different physical disks.
3. Stop unnecessary applications to free up resources.
4. Do not export to NFS (Network File System); exporting to a local disk is faster.
5. Set the RECORDLENGTH parameter to a high value.
6. Use DIRECT=Y (direct path export), as in the example below.
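A hedged example of a direct path full export (user, password and file names are placeholders; note that BUFFER applies only to conventional path exports, while RECORDLENGTH is honoured in direct mode):
exp system/password full=y direct=y recordlength=65535 file=full.dmp log=exp_full.log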

9. How can we improve IMP performance?
1. Import the table using the INDEXFILE parameter (no data is imported, but a file containing the index creation statements is generated), import the data, and then recreate the indexes (see the sketch after this list).
2. Store the dump file to be imported on a separate physical disk from the Oracle data files.
3. If there are any constraints on the target table, disable them during the import and re-enable them afterwards.
4. Set the BUFFER parameter to a high value (the default is 256 KB; e.g. BUFFER=30000000 for roughly 30 MB) together with COMMIT=Y so that import commits after each array insert. The default COMMIT=N commits only after each table is loaded, which uses a lot of rollback segment or undo space for huge tables.
5. Create the dump file with a direct path export (exp DIRECT=Y); note that the classic imp utility itself has no DIRECT parameter.
6. (If possible) increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) considerably in the init<SID>.ora file.
7. (If possible) set LOG_BUFFER to a big value and restart the instance.
8. Stop redo log archiving, if possible.
9. Use COMMIT=N, if possible (it is faster, but see the undo-space trade-off in point 4).
10. It is advisable to drop indexes before importing, or to set INDEXES=N and build the indexes after the import; indexes can easily be recreated once the data has been imported successfully.
11. Use STATISTICS=NONE.
12. Disable INSERT triggers, as they fire during import.
13. Set the parameter COMMIT_WRITE=NOWAIT (in Oracle 10g) or COMMIT_WAIT=NOWAIT (in Oracle 11g) during the import.
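A hedged sketch combining points 1, 4 and 10 (schema, password, file names and buffer size are placeholders):
imp system/password fromuser=scott touser=scott file=scott.dmp indexfile=scott_indexes.sql
imp system/password fromuser=scott touser=scott file=scott.dmp ignore=y indexes=n buffer=30000000 commit=y log=imp_scott.log
Afterwards, run scott_indexes.sql in SQL*Plus to recreate the indexes.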

10. How does NLS affect export/import (exp/imp)?
Import and export are client products, in the same way as SQL*Plus or Oracle Forms, and will therefore translate characters from the database character set to the one defined by NLS_LANG. The character set used for the export is stored in the export file and, when the file is imported, import checks the character set that was used. If it is different from the one defined by NLS_LANG at the import site, the characters are translated to the import character set and then, if necessary, to the database character set.

11. How should NLS_LANG be set when using export?
Oracle recommends ALWAYS setting the character set part of the NLS_LANG environment variable to the character set of the database you are exporting:
select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
That way no conversion takes place and the export file is created in the same character set as the original database, containing ALL the data from the original database (even incorrectly stored data, if there is any).
Even if the plan is to import this into a database with a different character set later, the conversion can be postponed until the import. Note that this has no relation to the operating system. If you have a WE8MSWIN1252 database on a Unix server (which is fully supported), then you should set NLS_LANG to AMERICAN_AMERICA.WE8MSWIN1252 before the export.
During *interaction* with the database (e.g. SQL*Plus) you need to configure your Unix client properly, and that cannot be WE8MSWIN1252, since Unix does not have a 1252 character set.
Note:264157.1 The correct NLS_LANG setting in Unix Environments

12. How should NLS_LANG be set when using import?
If the source and target database have the same character set, the character set part of NLS_LANG should be set to that same character set for both the export and the import.
Even if the character sets of the exporting and importing databases are not the same, the best (preferred) value for the character set part of NLS_LANG, on both export and import, is still the character set of the source database:
select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
Setting NLS_LANG to the character set of the target database during import is also correct as such, but import has some limitations when going to a multibyte character set (like UTF8), so using the SOURCE NLS_CHARACTERSET for both the imp and exp sessions is simply the best option, avoiding problems like IMP-00016 "Required character set conversion (type %lu to %lu) not supported".
So the preferred place to do the conversion is between the import executable and the target database.
Note that this has no relation to the operating system. If your source database is a WE8MSWIN1252 database, then you simply set NLS_LANG to AMERICAN_AMERICA.WE8MSWIN1252 before the import, even on a Unix server.
Note that during *interaction* with the database (e.g. SQL*Plus) you need to configure your Unix client properly, and that cannot be WE8MSWIN1252, since Unix does not have a 1252 character set.
Note:264157.1 The correct NLS_LANG setting in Unix Environments

Example: you want to go from a WE8MSWIN1252 database to a UTF8 database.
Note that this is only the exp/imp example; if you want to migrate to AL32UTF8 or UTF8, check Note:260192.1 Changing the NLS_CHARACTERSET to AL32UTF8 / UTF8 (Unicode) for the full story.
1) double check the NLS_CHARACTERSET on the SOURCE database
select * from nls_database_parameters where parameter = 'NLS_CHARACTERSET';
and export with NLS_LANG set to AMERICAN_AMERICA.<NLS_CHARACTERSET>.
In this case we want to create an export file containing WE8MSWIN1252 data.
(This is also the setting you want to use if you take an export as a backup.)
On Unix (sh/ksh/bash) this is:
$ NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
$ export NLS_LANG
$ exp ....
On Windows this is:
c:\>set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
c:\>exp ....
2) Import with NLS_LANG set to AMERICAN_AMERICA.WE8MSWIN1252 (= source NLS_CHARACTERSET) into the new UTF8 database.
On Unix (sh/ksh/bash) this is:
$ NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
$ export NLS_LANG
$ imp ....
On Windows this is:
c:\>set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
c:\>imp ....
The conversion to UTF8 is done while the imp connection inserts the data into the UTF8 database.
We recommend setting NLS_LANG explicitly in the current shell on Unix
(-> Note:131207.1 How to Set Unix Environment Variable)
or in the DOS box used for the exp or imp tool on Windows
(-> "c:\>set NLS_LANG=AMERICAN_AMERICA.<characterset of the source database>").

13. How is import affected by NLS_LANGUAGE and NLS_TERRITORY?
It is not. Normally you use the AMERICAN_AMERICA default, but if you imported with NLS_LANG set to FRENCH_FRANCE, for example, you will not have problems, even if the originating environment used GERMAN_GERMANY or similar.

14. How to know in what character set a dmp (export) file was created?
Simply issue:
imp system/oracle@database show=yes file=test.dmp
The output gives you:
import done in US7ASCII character set and AL16UTF16 NCHAR character set
-> this is the current NLS_LANG value set in the environment and the NCHAR character set of the target database.
import server uses WE8MSWIN1252 character set (possible charset conversion)
-> this line is only shown if the NLS_LANG during this import session is different from the target database character set, so if you see three lines you might have problems :-)
export client uses UTF8 character set (possible charset conversion)
-> this is the character set used during the export session and the character set used in the dmp file.

15. How does NLS affect datapump (expdp/impdp)?
Datapump does not use the NLS_LANG to do conversion between databases.
Conversion between two database character sets is done purely based on the NLS_CHARACTERSET (or NLS_NCHAR_CHARACTERSET for NCHAR, NVARCHAR2 and NCLOB datatypes) of the source and target database.
However, if you specify a parameter file then the NLS_LANG *is* used. This is only important if you use non-English characters (e.g. for the QUERY parameter) in the parameter file.
If you use non-English characters in the parameter file then the NLS_LANG environment variable should be set (in the session where the Data Pump job is started) to the correct encoding of the parameter file (!).
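A hedged illustration (the directory object, table, file names and the city value are hypothetical; the parameter file is assumed to be saved in UTF-8). Contents of exp_query.par:
directory=dump_dir
dumpfile=expdp_query.dmp
logfile=expdp_query.log
tables=scott.customers
query=scott.customers:"WHERE city = 'München'"
Then, in the shell where the Data Pump job is started:
$ export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
$ expdp system/*** parfile=exp_query.par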

16. What causes ORA-01401 or ORA-12899 during import (imp and impdp)?
9i and lower gives ORA-01401: inserted value too large for column; 10g and up gives ORA-12899: value too large for column.
This is seen when exporting from a database with an 8-bit NLS_CHARACTERSET (like WE8ISO8859P1, WE8MSWIN1252, WE8DEC ...) to a database with a 16-bit NLS_CHARACTERSET (like JA16SJIS, ZHS16GBK, KO16MSWIN949) or with NLS_CHARACTERSET set to AL32UTF8 or UTF8. Characters that occupy one byte in the source character set can expand to two or more bytes in the target character set, so converted values can exceed the byte length defined for the column.

17. What are the different kinds of export backups?
Full backup - the complete database.
Incremental - only tables modified since the last export of any type (complete, cumulative or incremental).
Cumulative backup - only tables modified since the last cumulative or complete export (see the example commands after this list).
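A hedged example of the three types with the classic exp utility (incremental exports are always full-database mode; passwords and file names are placeholders):
exp system/password full=y inctype=complete file=full.dmp log=full.log
exp system/password full=y inctype=cumulative file=cum.dmp log=cum.log
exp system/password full=y inctype=incremental file=inc.dmp log=inc.log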

18. What is the use of FILE option in EXP command ?
Specifies the name of the export dump file (default expdat.dmp).

19. What is the use of COMPRESS option in EXP command ?
Flag to indicate whether export should compress fragmented segments into single extents.
If we specify COMPRESS=Y during export, then at import time the table is created with an INITIAL extent as large as the sum of all the extents allocated to the table in the original database.
If we specify COMPRESS=N during export, then the table is created at import time with the same INITIAL extent value as in the original database.

Now let's say I have a table of 100 MB. There have been some deletes and updates and only 50 MB of actual data remains. I export the table with COMPRESS=Y and recreate it in another database. Export sums all the extents and uses that as the INITIAL extent when creating the table, so although there is only 50 MB of data, 100 MB is allocated immediately. If you have limited space, this is not a very good option.
If I export with COMPRESS=N and then import the table, its INITIAL extent will be the same as the INITIAL extent in the original database, and new extents will be allocated as required, so the table in the new database will be approximately 50 MB in size.
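A hedged pair of commands illustrating the two settings (schema, table and file names are placeholders):
exp scott/tiger tables=big_table compress=y file=big_table_c.dmp log=exp_c.log
exp scott/tiger tables=big_table compress=n file=big_table_n.dmp log=exp_n.log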

20. What is the use of GRANTS option in EXP command ?
A flag ('Y' or 'N') to indicate whether grants on database objects will be exported.

21. What is the use of INDEXES option in EXP command ?
A flag to indicate whether indexes on tables will be exported.

22. What is the use of ROWS option in EXP command ?
Flag to indicate whether table rows should be exported. If 'N', only DDL statements for the database objects will be created.

23. What is the use of CONSTRAINTS option in EXP command ?
A flag to indicate whether constraints on table need to be exported.

24. What is the use of FULL option in EXP command ?
A flag to indicate whether full database export should be performed.

25. What is the use of OWNER option in EXP command ?
List of user accounts (schemas) whose objects should be exported.

26. What is the use of TABLES option in EXP command ?
List of tables to be exported.

27. What is the use of RECORDLENGTH option in EXP command ?
Length of the file record in bytes; it is needed when you plan to transfer the export file to an operating system that uses a different default record length.

28. What is the use of INCTYPE option in EXP command ?
Type of incremental export to be performed: COMPLETE, CUMULATIVE or INCREMENTAL.

29. What is the use of RECORD option in EXP command ?
For incremental exports, a flag indicating whether a record of the export will be stored in the data dictionary tables that track incremental exports.

30. What is the use of PARFILE option in EXP command ?
Name of the parameter file to be passed to export.
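A hedged example (the schema and file names are placeholders). Contents of exp_scott.par:
owner=scott
file=scott.dmp
log=exp_scott.log
statistics=none
Then run:
exp system/password parfile=exp_scott.par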
