Tuesday, 3 July 2018

Interview Q and A for Oracle Remaining Topics - Part 1

Data Masking

1 . Can Data Masking be used with Oracle 8i and 9i databases?
Data Masking can be run against 9i (9.2) databases, as well as 10g and 11g. However, it cannot be used with 8i, as it was released well after 8i was desupported.

2. Can Data Masking be used with third-party vendor databases?
Oracle Data Masking Pack can support masking of data in heterogeneous databases, such as IBM DB2 and Microsoft SQL Server, through the use of Oracle Database Gateways.

3. How can one obtain a trial copy of the Data Masking Pack?
There is no trial version.

4. Is there a Data Masking API?
This functionality is not currently available. Enhancement Request 7582922 has been logged to this end.

5. Is the Data Masking Pack capable of masking based on criteria or business logic?
This functionality is not currently available.

6. After implementing a mask definition and format, how can one verify that Data Masking is working?
Issue a SELECT against the masked column(s).
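For example, a minimal sketch, assuming a hypothetical table HR.EMPLOYEES whose SSN column was included in the mask definition; compare the output with the known pre-mask values from the source system:

SQL> SELECT employee_id, ssn FROM hr.employees WHERE ROWNUM <= 5;

The masked values should differ from the originals while still matching the format defined in the masking format library.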

7. Can data masked with Data Masking be unmasked?
No. Masking data effectively changes the data. The purpose of Data Masking is to prevent sensitive data from ever
being viewed, while retaining the ability to test using similarly-formatted data.

8. Is Grid Control the only way to use data masking?
The Data Masking Pack is a separately licensed Oracle Enterprise Manager management pack available with Oracle
Enterprise Manager Grid Control 10.2.0.4 and 10.2.0.5. 10g EM DB Console does not have this feature but the 11g EM
DB Console includes this functionality.

9. How is the DM_FMTLIB package set up in Oracle Enterprise Manager Cloud Control 12.1?
The 12.1 Cloud Control documentation did not include the instructions on setting up the DM_FMTLIB package on source
databases to enable pre-mask definitions that are included in the Data Masking module.
The location of this package on the 12.1 installation is found here:
<MIDDLEWARE_HOME>/plugins/oracle.sysman.db.oms.plugin_12.1.0.2.0/sql/db/latest/masking/dm_fmtlib_pkgbody.sql
The source databases can continue to be set up as per the 11g documentation.

SQL Performance Analyzer (SPA) and Database Replay (DB Replay)

10. What is the difference between SPA and DB replay?
Both SPA and DB Replay are tools that allow you to test workloads against a Database. SPA works on a specific set of SQL that you have defined and recorded in a SQL Tuning Set (STS). Database replay replays ALL the activity for a captured period.

11. When using both SPA and DB Replay together, what is the recommended order in which to use them?
First run SPA in order to iron out any problems with individual statements. Once you have fixed the problems then capture and run a replay. This way you can eliminate most of the issues within the smaller workload so that you do not have to deal with them within the context of any more general problems that DB Replay might identify.

12. Can SPA/DB Replay be used when migrating from non-RAC to RAC or vice versa?
Yes. SPA works on a specific STS while DB Replay captures all database activity irrespective of the node it originates from.

13. Should a Test Database be an exact replica of the Production?
It depends. The more similar the servers, the less divergence in results you will encounter. With SPA, you only need the objects that support that set of SQL. With DB Replay you are likely to need a closer representation of production.

14. How can I use SPA to deal with SQL that is already slow?
SQL Performance Analyzer does not assume any SQL is good or bad; it just analyzes performance and suggests improvements. This means that you can treat all SQL (good or bad) the same, i.e. (a sketch follows the list):
Capture a STS containing the slow SQL.
Put the STS on to the Test system.
Apply Tuning effort to the STS on the test system.
Implement any improvements on the production system.
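The following is a minimal sketch of the capture and transport steps using DBMS_SQLTUNE (the STS name, staging table name and time limits are illustrative):

BEGIN
  DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'SLOW_SQL_STS');

  -- Poll the cursor cache for 30 minutes, sampling every 5 minutes.
  DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET(
    sqlset_name     => 'SLOW_SQL_STS',
    time_limit      => 1800,
    repeat_interval => 300);

  -- Pack the STS into a staging table so it can be exported to the test system.
  DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(table_name => 'STS_STAGE');
  DBMS_SQLTUNE.PACK_STGTAB_SQLSET(
    sqlset_name        => 'SLOW_SQL_STS',
    staging_table_name => 'STS_STAGE');
END;
/

On the test system, import the staging table and call DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET before running the SPA trials.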

15. Does SPA make any changes to the database? For example, my STS may contain inserts and updates. Do these change the data?
No. SPA does not make any changes to your schema so experiments can be repeated without restoring the database.

16. Can you suggest a good article for SPA implementation?
The following document is a good starting point for SPA: Document 1577290.1 SQL Performance Analyzer Summary

17. Does the database need to be restarted before a workload capture takes place?
No, you do not need to restart the database. Replay can still be done and will be useful, but be aware that if you have a high number of "in-flight" transactions or a busy system, you may see a high level of divergence when you conduct the replay analysis.

18. How can I estimate the likely disk space requirements for capture?
Perform a short duration capture (10-30min) using a similar load, then extrapolate to the real duration based upon that.
Look at the 'bytes received via SQL*Net from client' statistic in an AWR report covering the workload and double it.
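As a rough sketch of the second approach, assuming AWR snapshot IDs 101 and 113 (illustrative) bracket the workload period:

SELECT (e.value - b.value) / 1024 / 1024 AS mb_received
  FROM dba_hist_sysstat b, dba_hist_sysstat e
 WHERE b.stat_name = 'bytes received via SQL*Net from client'
   AND e.stat_name = b.stat_name
   AND b.snap_id = 101
   AND e.snap_id = 113
   AND b.dbid = e.dbid
   AND b.instance_number = e.instance_number;

Doubling the result gives a rough upper bound for the capture directory size.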

19. What overhead is likely from using workload capture?
Workload capture has been designed to be lightweight and very efficient. Typically the overhead is less than 5% in the worst case, and it adds only about 64K of extra memory per session plus the overhead of writing the capture data.
The main overhead is from the writing of the capture files; it is about 4% to 5%, depending on the type of database application that is running. The overhead depends on the application workload being captured and is proportional to the data involved in each database request, not to the work the request actually performs. For example, an insert-intensive workload will have a higher overhead, since all the inserted data is captured. A workload that mostly consists of long-running queries will not have a high capture overhead, because only the statement text and bind data are captured.

20. Can I capture workload from 2 different databases and replay it together?
Yes. This is called Consolidated Replay.

21. When Replay is taking place, are client connections replicated the same as they were applied in real time?
Yes. Client connections are replayed by a special replay client that removes the need for the actual clients/middleware while preserving the timing and concurrency characteristics of the original workload.

22. How many clients are required for the replay?
You can run the calibrate function of the workload replay client (WRC) to get a recommended number.
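For example, a minimal sketch (the replay directory path is illustrative); run this on a client machine that can reach the test database:

wrc mode=calibrate replaydir=/u01/app/oracle/replay_capture

The output reports the recommended number of replay clients and hosts for the captured workload.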

23. Can I use fewer clients than what the Workload Replay Calibrate function suggests?
You can use fewer clients but then there will be a higher chance of encountering issues such as the client being unable to spawn sufficient threads for the workload, etc.

24. Can I use Database Replay with Data Guard?
Yes. You can capture the workload on the primary, then disengage the standby and do the replay on it.

25. Can Database Replay test the impact of middle-tier changes?
No. You cannot test the impact of middle-tier changes with Database Replay. However, there is another product, Oracle Application Replay Pack, that can test a middle-tier upgrade.

26. Can I Replay on an application server rather than a database server?
Database replay is intended to run a database workload against a database. It does not test the application server. To test the application server use the Oracle Application Replay Pack.

27. Can I Replay on a different platform to the capture platform?
Yes. The database workload capture is stored in a platform-independent format. So you can capture workload on one OS platform, e.g. Windows NT, and replay on a different one, e.g. Linux.

28. Do the database initialisation parameters of the source and target database need to be the same for Database Replay?
It depends on what you are trying to test. If you want to test what difference those parameters make to the workload, then that is fine. If you are trying to test something else, you should try to make the systems as similar as possible to avoid unnecessarily divergent results.

29. How do I make a second replay consistent with the first if the database has moved forward in time?
The best way to do this is to set up a guaranteed restore point (using Flashback Database) before beginning the replay. Once the replay has completed, you can flash back to the previous state so the test is repeatable, i.e. (a sketch follows the list):
Create a guaranteed restore point.
Do the replay.
Export reports and do the analysis.
Flash back to restore the original state.
Once restored, delete the old flashback logs so that there is space for the new ones.
Make changes and repeat as necessary.
There are other ways of doing this, such as export or an RMAN backup, but flashback is the most convenient.
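A minimal sketch of the flashback cycle (the restore point name is illustrative; the FLASHBACK DATABASE step must be run from a mounted instance):

-- Before the first replay:
CREATE RESTORE POINT before_replay GUARANTEE FLASHBACK DATABASE;

-- After exporting the replay reports:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT before_replay;
ALTER DATABASE OPEN RESETLOGS;

-- Drop the old restore point and recreate it before the next replay iteration:
DROP RESTORE POINT before_replay;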

30. If my test system is not as powerful as my production system, how can I compare executions?
Establishing a replay baseline can help: replay on the reduced-capacity system first, then make your changes, replay again, and compare the second replay against the baseline.
Another good option is to do explain-plan-only comparisons, especially if you bring the optimizer statistics over from the production system.

31. Can I generate a SQL profile based upon the replay on the Test Database?
Yes.

32. Can you test character set changes with DB replay?
Yes. You can test character set changes using RAT, but there are some finer points to this.

33. How can I perform load testing using Database Replay?
You can perform load testing by varying replay parameters such as think time or connect time scaling to a non-default value. The replay options are set when preparing the replay (a sketch follows). For example, by reducing the think time scaling to 50%, the same workload runs faster (in less time), thus creating a load/stress test. Alternatively, you may be able to get an even faster replay if both think_time_scale and connect_time_scale are set to zero.
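A minimal sketch using DBMS_WORKLOAD_REPLAY (the 50% value is illustrative); call it after initializing the replay and before starting it:

BEGIN
  DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY(
    think_time_scale   => 50,    -- halve client think time
    connect_time_scale => 100);  -- keep connect times as captured
END;
/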

Real Application Testing (RAT) Licensing

34. Is SPA/DB Replay/RAT part of the Enterprise Edition License?
No. SPA and DB Replay together make up the Real Application Testing (RAT) option, which is separately licensed and only available on top of Enterprise Edition. To find out about this and other features see: Document 1361401.1 Where to Find Information About Performance Related Features
35. Are RAT features available in OEM 11g and 12c Cloud Control?
Yes. RAT is extensively supported within OEM and this is the recommended interface.

36. The RAT Run AWR Compare Period Report button is not currently available in EM 12c, however this is available in 11g. Will this functionality be available in 12c?
Yes. That will be provided at a future date.

37. Does capturing a workload affect the performance of the database as a whole? Is it OK to do this at peak times on a production database?
Workload capture has a minimal performance impact. Information is captured within the database and is independent of the operating system. Workload capture is designed to capture a real workload, so capturing an interesting workload such as at peak times is fully expected. You want to capture a workload that is realistic for your application so that you can have a meaningful test when you replay it. Remember that once you have captured a workload you can replay it multiple times. If you are concerned with the overhead, capture a small workload at a less busy time and use that to assess the overhead.

Advanced Replication

38. What are the replication system resource requirements?
System tablespace - at least 30M
Rollback segment space - at least 5M (Note: catrep.sql requires a rollback segment that can extend to 5M to run successfully.)
shared_pool_size - minimum 25M

39. What are the replication init.ora parameter requirements?
COMPATIBLE
JOB_QUEUE_PROCESSES
JOB_QUEUE_INTERVAL
SHARED_POOL_SIZE
DISTRIBUTED_LOCK_TIMEOUT
DISTRIBUTED_TRANSACTIONS
GLOBAL_NAMES (Must be TRUE)
OPEN_LINKS
DB_NAME
DB_DOMAIN
Hint: If using a 9.0.1 RDBMS, this can be set in the init.ora as DISTRIBUTED_TRANSACTIONS=0.
Please note this is an obsoleted parameter; see Reference Init.ora Parameter "DISTRIBUTED_TRANSACTIONS" (Doc ID 30720.1).
A sample init.ora sketch follows.
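A minimal init.ora sketch for a replication site (all values are illustrative; size them for your own environment):

# Illustrative replication-related settings only
compatible               = 9.0.1
global_names             = TRUE
job_queue_processes      = 4
job_queue_interval       = 60
shared_pool_size         = 30000000
distributed_lock_timeout = 60
open_links               = 4
db_name                  = repdb1
db_domain                = example.com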

40. How to handle the ORA-1403 error?
In a shared-ownership environment using asynchronous propagation, conflicts are bound to happen. Oracle strongly recommends using conflict resolution routines in such an environment.
Oracle's Advanced Replication option comes with some standard routines which can be used to resolve the conflicts, or user-defined conflict resolution routines can be written to better suit one's needs.
If conflict resolution cannot be implemented, the replication administrator must make sure the conflicts are resolved manually in order to maintain a synchronized replicated environment.

41. What data types does replication Support?
Oracle7: NUMBER, DATE, CHAR, VARCHAR2, RAW, ROWID
Oracle8: NUMBER, DATE, CHAR, VARCHAR2, NCHAR, NVARCHAR2, RAW, ROWID, BLOB, CLOB, and NCLOB

42. What types of objects can be replicated?
You can replicate the following object types:
Tables, Views, Indexes, Synonyms, Triggers and Packages
LONG datatypes are not replicated due to the limitations with PL/SQL.

43. What is Asynchronous/Synchronous replication?
Asynchronous Replication:
- Available since 7.1.6
- Store-and-forward mechanism: all transactions are stored in a deferred RPC queue, and deferred transactions are then propagated at a specific interval.
- Changes from your local site are not immediately reflected at other sites.
- Conflicts can occur but will not be detected until the transactions are propagated.
- Response time for updates is improved because we do not wait for a response from the remote site.
- Failures of remote sites or networks do not block other sites from local data access.
Synchronous Replication:
- Changes at other sites are immediately reflected at your local site.
- No worry about conflicting updates.
- Failure of any replicated site will block you from performing local updates.
- Updates have a slower response time because you are waiting for a response from the remote sites.

44. Can Workgroup Server be used as a replication server?
Oracle7 Workgroup Server supports basic replication only. This means the Workgroup Server can be either the master site for read-only snapshots or the read-only snapshot site itself.
Oracle7 Enterprise Server and Oracle8 Enterprise Server support both basic and advanced replication.
Personal Oracle and Personal Oracle Lite can support both read-only and updatable snapshots, but they cannot be master sites.

48. Can LONG datatypes be replicated?
LONG data type columns cannot be replicated using row-level replication.
If a table has a LONG column, Oracle will not error out while doing a CREATE_MASTER_REPOBJECT or GENERATE_REPLICATION_SUPPORT, but modifications to that column are not propagated.
Also, Oracle8 replication does not support user-defined object types or file-based LOBs (BFILEs).

49. How to use database link connection qualifiers in V7.3 and up?
The database link connection qualifiers can be used to push transactions at the group level. The grain of parallelism was reduced to the group level in 7.3, from the site level in 7.2.
In Oracle8, the grain was further reduced to the transaction level. Parallel propagation in Oracle8 does not require any connection qualifiers.
When connection qualifiers are used in Oracle7 server release 7.3 or in Oracle8 with serial propagation, a transaction cannot span repgroups that use different connection qualifiers.

50. How to locate a failed defcall at the remote site?
A transaction with multiple calls fails at the remote site. The calls within the transaction which were applied before the conflict are rolled back, and the entire transaction is logged in the error log.
Two methods can be used to find the call which failed due to a conflict: looking at DEFERROR, or using Replication Manager. These methods are discussed below (a query sketch follows).
Method 1: The CALLNO column in the DEFERROR view contains the call which caused the error. From the DEFCALL view, we can then get the DML which failed, and from there print the calls of the transaction to get to the actual data.
Method 2: Using Replication Manager, the "Administration" outline element has a Local Errors line. Open this line and the transaction which failed will be listed there. The property sheet of the transaction has a page called "Error Call" which describes the call in question.
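A minimal query sketch for Method 1, assuming the standard DEFERROR view columns:

SELECT deferred_tran_id, callno, destination, error_number, error_msg
  FROM deferror;

The DEFERRED_TRAN_ID and CALLNO values can then be used to look up the failing call in DEFCALL.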

51. What does EXECUTE_AS_USER do in a replicated environment?
It determines who pushes the deferred transaction queue (DTQ).
The DTQ can be pushed as the user who originally queued the transaction, or as the connected session user if EXECUTE_AS_USER = TRUE.
If EXECUTE_AS_USER = TRUE:
The DTQ is pushed as the connected session user (typically REPADMIN pushes the deferred transaction queue).
No additional database links are necessary.
Grant REPADMIN the EXECUTE ANY PROCEDURE system privilege at each site, OR grant REPADMIN the EXECUTE object privilege on each of the generated replication procedures (a sketch follows this answer).
For security reasons, it is a more practical implementation because you DON'T have to create database links for all users at all sites.
EXECUTE_AS_USER=TRUE will also allow a smoother migration to the Oracle8 security model.
Time should be spent understanding security in order to set things up correctly.
If EXECUTE_AS_USER = FALSE (the default):
The DTQ is pushed as the user who originated the transaction.
Each user should have a private database link to each site in the replicated environment.
This link must have a username and password, and the username specified must have the necessary EXECUTE privileges on the generated replication procedures.
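A minimal sketch of the two grant approaches (the schema and generated package name in the second grant are hypothetical):

-- Broad approach, once per site:
GRANT EXECUTE ANY PROCEDURE TO repadmin;

-- Restrictive approach, per generated replication package:
GRANT EXECUTE ON scott.emp$rp TO repadmin;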

52. How do jobs get marked broken and how to get around this?
A job gets marked broken when it fails to execute after 16 attempts, or when you have executed DBMS_JOB.BROKEN(job#, TRUE) to break it.
Always look at the SNP trace file that gets created in the background_dump_dest for more information as to why your job is failing.
Once you have fixed the cause of the failures, you can execute DBMS_JOB.BROKEN(job#, FALSE) to unbreak the job, or execute DBMS_JOB.RUN(job#), which also clears the broken flag.
Be sure to issue a COMMIT after your command when running the DBMS_JOB package (a sketch follows).
Reference bulletin: 'HOW TO GET AROUND THE 16-ATTEMPT LIMIT FOR JOBS'
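A minimal sketch (job number 42 is hypothetical):

EXECUTE DBMS_JOB.BROKEN(42, FALSE);
COMMIT;

-- Or run the job immediately, which also clears the broken flag on success:
EXECUTE DBMS_JOB.RUN(42);
COMMIT;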

53. How to debug the DO_DEFERRED_REPCAT_ADMIN procedure?
Symptom: the status in DBA_REPCATLOG on the master definition site is AWAIT_CALLBACK while the status on the master site is READY. Check the following (a query sketch follows the list):
1) Check for SNP processes running at both sites and relevant SNP.TRC files.
2) Check the ERRNUM and MESSAGE columns in DBA_REPCATLOG.
3) Check the USERID column of DBA_REPCATLOG and LOG_USER column of the DBA_JOBS view to ensure that the job is run on behalf of the REPADMIN user.
4) Ensure the databases are up and running and that the user in the CONNECT TO clause, REPSYS has the necessary privileges.
5) Ensure that the replication administrator, REPADMIN has the necessary private database link and granted the necessary privileges.
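A minimal query sketch for steps 2 and 3, assuming the standard DBA_REPCATLOG and DBA_JOBS columns (run as the replication administrator):

SELECT id, request, oname, status, userid, errnum, message
  FROM dba_repcatlog
 ORDER BY id;

SELECT job, log_user, what FROM dba_jobs;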

54. How to cleanup a replicated environment?
Scenario: errors were made in the setup of the environment and you would like to start over and recreate it (a sketch of the calls follows).
Execute dbms_repcat.remove_master_databases() on the Master Definition site.
Execute dbms_repcat.drop_master_repgroup on the Master site.
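A minimal sketch of the cleanup calls (the group name and site names are illustrative):

-- On the master definition site:
BEGIN
  DBMS_REPCAT.REMOVE_MASTER_DATABASES(
    gname       => 'SCOTT_REP',
    master_list => 'PROD2.EXAMPLE.COM,PROD3.EXAMPLE.COM');
END;
/

-- On each master site:
BEGIN
  DBMS_REPCAT.DROP_MASTER_REPGROUP(gname => 'SCOTT_REP');
END;
/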

55. How to relocate a master definition site?
The procedure DBMS_REPCAT.RELOCATE_MASTERDEF needs to be invoked from any of the existing master sites. In the following table, Y indicates availability and N indicates non-availability of the sites.
Masterdef  Master  notify_masters  include_old_masterdef
Y          Y       TRUE            TRUE
N          Y       TRUE            FALSE
N          N       FALSE           FALSE
When the master definition site is recovered, if that site now needs to be added back as a master site, call DBMS_REPCAT.RELOCATE_MASTERDEF with NOTIFY_MASTERS = FALSE.

56. How do I print the contents of the deferred transaction queue?
By using Server Manager in releases 7.1.6 and 7.2.X: click on the transaction ID and do a SHOW CALLS to see the defcall entries.
By using Replication Manager in release 7.3.X.
By running the scripts in the GSX entry.

57. How do I move the SYSTEM owned replication tables from SYSTEM tablespace?
(Oracle7 only. A SQL sketch of steps 7 through 9 and step 11 follows the step list.)
Step 1: Make sure DEFTRAN, DEFCALL, DEFERROR and DBA_REPCATLOG are empty.
Step 2: Issue DBMS_REPCAT.SUSPEND_MASTER_ACTIVITY
Step 3: Export user SYSTEM.
Step 4: Drop the tables in the order listed:
a. SYSTEM.DEF$_CALLDEST
b. SYSTEM.DEF$_CALL
c. SYSTEM.DEF$_DEFAULTDEST
d. SYSTEM.DEF$_ERROR
Step 5: Drop table SYSTEM.REPCAT$_REPSCHEMA cascade constraints.
Step 6: Drop table SYSTEM.DEF$_DESTINATION.
Step 7: Alter user system default tablespace symrep_objs quota unlimited on symrep_objs.
Step 8: Alter user system quota 0 on system.
Step 9: Revoke unlimited tablespace on system.
Step 10: Import the following tables in the order listed:
a. SYSTEM.DEF$_CALL
b. SYSTEM.DEF$_CALLDEST
c. SYSTEM.DEF$_DEFAULTDEST
d. SYSTEM.DEF$_DESTINATION
e. SYSTEM.DEF$_ERROR
f. SYSTEM.REPCAT$_REPSCHEMA
Step 11: Grant unlimited tablespace to system.
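A minimal SQL sketch of steps 7 through 9 and step 11 (the symrep_objs tablespace name comes from the steps above):

ALTER USER system DEFAULT TABLESPACE symrep_objs QUOTA UNLIMITED ON symrep_objs;
ALTER USER system QUOTA 0 ON system;
REVOKE UNLIMITED TABLESPACE FROM system;
-- ... re-import the DEF$_ and REPCAT$_ tables (step 10) ...
GRANT UNLIMITED TABLESPACE TO system;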

58. The DBMS_SNAPSHOT package body is invalid after running catrep.sql. Why?
This is due to bug 389795.
There is no need to rerun catrep.sql.
One can compile the package body manually to validate the package. To compile it manually:
SQL> alter package SYS.DBMS_SNAPSHOT compile body;

59. What are the different types of conflict resolution?
There are 3 types of conflicts:
Update
Uniqueness
Delete
System-defined implementations of update conflict resolution:
Latest Timestamp
Earliest Timestamp
Additive
Average
Minimum value (always decreasing)
Maximum value (always increasing)
Priority Group
Site Priority
Overwrite (multiple updatable snapshot sites)
Discard (multiple updatable snapshot sites)
System-defined implementations of uniqueness constraint conflict resolution:
Append Site Name
Append Sequence
Discard

60. How do I handle DELETE conflicts?
Handling DELETE conflicts:
The replicated application should not overuse deletes. They are difficult to resolve because they require a history of deleted rows, which Oracle does not maintain.
Instead, the application should mark a row as deleted, and these rows should be periodically purged from the replicated table using procedural replication.
See the bulletins 'CONFLICT RESOLUTION, A SITE PRIORITY EXAMPLE' and 'CONFLICT RESOLUTION, A TIMESTAMP EXAMPLE'.

61. How do I export/import in a multi-master replicated environment?
Steps to do an export/import in a multi-master replicated environment (a sketch of the procedure calls follows the steps).
From Oracle7 server release 7.3 onwards, OFFLINE INSTANTIATION can be used when a new master site is to be added.
Step 1: Synchronize all the existing master sites.
Step 2: Make sure DEFTRAN, DEFCALL, DEFERROR and DBA_REPCATLOG are empty
Step 3: As replication administrator, call: DBMS_REPCAT.SUSPEND_MASTER_ACTIVITY
Step 4: From the master definition site, call the procedure: DBMS_OFFLINE_OG.BEGIN_INSTANTIATION
Step 5: Run the export utility and export individual objects from the master group from any master site.
Step 6: Non-administrative activities can be resumed at the other master sites by calling: DBMS_OFFLINE_OG.RESUME_SUBSET_OF_MASTERS
Step 7: Transfer the export file to the new master site.
Step 8: At the new master site issue: DBMS_OFFLINE_OG.BEGIN_LOAD
This procedure will disable the triggers and propagation to this site.
Step 9: Using the import utility, import the data.
Step 10: Once the import is complete, call: DBMS_OFFLINE_OG.END_LOAD
Step 11: From the master definition site issue the following procedure call to resume master activity at the new site.
DBMS_OFFLINE_OG.END_INSTANTIATION
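A minimal sketch of the DBMS_OFFLINE_OG calls from the steps above (the group name 'SCOTT_REP' and the new site 'NEWDB.EXAMPLE.COM' are illustrative):

-- Step 4, at the master definition site:
EXECUTE DBMS_OFFLINE_OG.BEGIN_INSTANTIATION('SCOTT_REP', 'NEWDB.EXAMPLE.COM');

-- Step 6, at the master definition site:
EXECUTE DBMS_OFFLINE_OG.RESUME_SUBSET_OF_MASTERS('SCOTT_REP', 'NEWDB.EXAMPLE.COM');

-- Steps 8 and 10, at the new master site:
EXECUTE DBMS_OFFLINE_OG.BEGIN_LOAD('SCOTT_REP', 'NEWDB.EXAMPLE.COM');
EXECUTE DBMS_OFFLINE_OG.END_LOAD('SCOTT_REP', 'NEWDB.EXAMPLE.COM');

-- Step 11, at the master definition site:
EXECUTE DBMS_OFFLINE_OG.END_INSTANTIATION('SCOTT_REP', 'NEWDB.EXAMPLE.COM');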

Note: From 11.2 onwards, Advanced Replication is supported with ASM.
