READ-ONLY SNAPSHOTS
62. How do
you refresh a snapshot remotely?
You can call DBMS_REFRESH.CHANGE in a local procedure to change the next refresh time of the snapshot's refresh group so that the refresh happens almost immediately.
Note 1031959.6
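For illustration, a minimal sketch, assuming the snapshot belongs to a refresh group (the group name and interval are placeholders):
BEGIN
  DBMS_REFRESH.CHANGE(
    name      => 'SCOTT.SNAP_GROUP',  -- placeholder refresh group name
    next_date => SYSDATE,             -- refresh almost immediately
    interval  => 'SYSDATE + 1');      -- placeholder: daily thereafter
END;
/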
63. Why do
I have multiple entries in DBA_SNAPSHOT_LOGS for the same table?
There are as many entries in
DBA_SNAPSHOT_LOGS for the same master table as there are snapshots that are
refreshing against that master table. Note 1024394.6
64. Can
you put triggers on snapshots?
Triggers on snapshots are not supported, primarily because Oracle cannot guarantee how a refresh is performed internally, only that the snapshot is a read-consistent view of the master after the refresh; refresh operations could therefore fire the triggers unpredictably.
Note 67424.1
65. What
is the maximum number of snapshots in a refresh group?
100 Note 1022286.6
66. How do
you offline instantiate a snapshot using export/import on releases earlier than 7.3.x?
This is NOT supported prior to 7.3.x; however, there are circumstances that might justify doing it (see Note 1021306.6).
67. What
is the refsnap refresh utility?
The refresh utility referred to in the Administration Manual is "refsnap". This utility was used to refresh snapshots automatically. The init.ora parameters
SNAPSHOT_REFRESH_INTERVAL
SNAPSHOT_REFRESH_PROCESSES
or
JOB_QUEUE_INTERVAL
JOB_QUEUE_PROCESSES
should be used instead, so that the SNP background processes perform the refreshes.
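For illustration, a minimal init.ora sketch (the values are examples only, not recommendations):
JOB_QUEUE_PROCESSES = 4    # number of SNP background processes
JOB_QUEUE_INTERVAL  = 60   # seconds between SNP wake-ups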
68. What
are read-only snapshots?
Oracle7 introduced the table snapshot feature, which allows a master table to be replicated by an unlimited number of snapshots on different nodes in a distributed network. The snapshot cannot be updated; it is a read-only copy of the master table.
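A minimal sketch of creating such a snapshot (the table name and database link are placeholders; a snapshot log on the master is assumed for fast refresh):
CREATE SNAPSHOT emp_snap
  REFRESH FAST
  START WITH SYSDATE
  NEXT SYSDATE + 1
  AS SELECT * FROM scott.emp@master_db;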
69. Can
you have synonym-based snapshots?
NO. This is not supported.
70. What
is locked during a snapshot refresh?
The SYS.SNAP$ dictionary table and the snapshot's base table, SNAP$_<snapshot_name>, are locked.
71. How do
you avoid complete loss of data if a complete refresh fails?
A complete refresh truncates the snapshot's base table first, so if the refresh itself then fails, the data is lost. You can avoid this by creating a refresh group, since the group performs a delete from the base table instead of a truncate. Note 1029824.6
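A minimal sketch of creating such a refresh group (names and interval are placeholders):
BEGIN
  DBMS_REFRESH.MAKE(
    name      => 'SCOTT.EMP_GROUP',  -- placeholder group name
    list      => 'SCOTT.EMP_SNAP',   -- snapshot(s) in the group
    next_date => SYSDATE,
    interval  => 'SYSDATE + 1');
END;
/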
72. Can
you create unique indexes on snapshots?
No, it is not supported. You face the following risks:
1) You risk changing the execution plan of the queries performed during a fast refresh. This can result in extremely poor refresh performance.
2) Operations performed during the refresh may temporarily violate the constraint. Violation of the constraint would abort your refresh. We only guarantee that the snapshot will be a read-consistent view of the master after the refresh; we don't have any guarantee on how that is achieved. Note 67424.1
73. Can
you create a snapshot off of a snapshot?
No. Note, however, that in Oracle8i and Oracle9i a number of these restrictions have been removed.
LogMiner
74. What
is the LogMiner utility?
The Oracle LogMiner utility
enables you to query redo log files through a SQL interface. Redo log files
contain information about the history of activity on a database including every
change made to user data and to the data dictionary in a database. Therefore,
redo log files are the only source that contains all the necessary information
to perform recovery operations. Because redo log data is often kept in archived
files, the data is already available. There should be no additional operations
needed to obtain the data that LogMiner uses. Note 62508.1 The LogMiner Utility
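For illustration, a minimal 9i-style LogMiner session (the archived log file name is a placeholder):
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    logfilename => '/oradata/arch/arch_1_1234.arc',  -- placeholder
    options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/
SELECT scn, operation, seg_name, sql_redo
  FROM v$logmnr_contents;
EXECUTE DBMS_LOGMNR.END_LOGMNR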
75. What
versions of Oracle support LogMiner?
The LogMiner utility was
shipped with the Oracle8i and Oracle9i databases.
76. Can
LogMiner be used against Oracle8 (8.0.x) logs?
Yes - both the Oracle8i and
Oracle9i LogMiner utilities can be used against Oracle 8.0.x redo logs.
However, some of the LogMiner release 9.0.1 features only work with redo log
files produced on an Oracle9i or later database.
Note 74988.1 How to Build an
Oracle 8.0 LogMiner Dictionary
78. Can
LogMiner be used against Oracle7 logs?
No - the Oracle8i and
Oracle9i LogMiner utilities cannot read Oracle7 redo logs.
79. What
datatypes are supported by LogMiner?
LogMiner has the following
restrictions:
In 8.x and 9.0.x LogMiner
cannot be used with the following:
- Data types LONG and LOB
- Simple and nested abstract
data types (ADTs)
- Collections (nested tables
and VARRAYS)
- Object Refs
- Index Organized Tables
(IOTs)
In 9.2.x, LogMiner can be
used with LONG and LOB, but it cannot be used with:
- Simple and nested abstract
data types (ADTs)
- Collections (nested tables
and VARRAYS)
- Object Refs
- Index Organized Tables
(IOTs)
For a complete list of
restrictions, please view the LogMiner documentation located in the Database
Administrator's Guide.
80. How
does LogMiner handle Chained Rows?
The Oracle8i LogMiner utility
does not support Chained Rows. Even though this is not supported, there is a document
available on this subject: Note 74750.1 LogMiner Analysis of DML on Chained
Rows
The Oracle9i LogMiner utility
does support extracting information on Chained Rows against Oracle9i log files
only. See:
Note 148616.1 Oracle9i
LogMiner New Features
81. Is
there a GUI interface for LogMiner available anywhere?
Yes - for Oracle9i the
LogMiner Viewer is available in Enterprise Manager.
Note 170484.1 Overview of the
LogMiner Viewer 9.0.1 Used to Query Redo Log Files
Note 170054.1 How To Create a
Standalone LogMiner Viewer Repository
Note 151365.1 How to
Configure the EM Log Miner Viewer 9.0.1 Feature
82. Can
LogMiner extract DDL too?
Oracle8i - No. In this version LogMiner only supports obtaining information about DML on conventional tables; only the internal operations against the data dictionary are recorded, and it is difficult to map these back to the original DDL statements. However, you can determine the time of some DDL operations.
Fail Safe
83. What
is Oracle Fail Safe?
Oracle Fail Safe is software
that works with Microsoft Cluster Server (MSCS) to provide highly available business
solutions on Microsoft clusters. Oracle Fail Safe works with MSCS cluster
software to provide high availability for applications and single-instance
databases running on a cluster. With Oracle Fail Safe, you can reduce downtime
for single-instance Oracle databases and almost any application that can be
configured as a Microsoft Windows service.
Oracle Fail Safe consists of
two main components: Oracle Services for MSCS and Oracle Fail Safe Manager. Oracle
Services for MSCS works with the MSCS software to configure fast, automatic
failover during planned and unplanned outages for resources configured for high
availability. Oracle Fail Safe Manager provides a user-friendly interface and
wizards that help you to configure and manage cluster resources, and troubleshooting
tools that help you to diagnose problems.
Together, these components
enable rapid deployment of highly available database, application, and Internet
business solutions. No changes are required to existing applications to access
Oracle Fail Safe databases.
Oracle Fail Safe offers these
features:
- Highly Available Resources
and Applications (Fast automatic Failover).
- Easy to Use and Support
(GUI, Wizards, Configuration replication)
- Easy to Integrate with OCI
and ODBC Applications
Note: Oracle Services for
MSCS was referred to as Oracle Fail Safe Server in previous releases of Oracle
Fail Safe.
84. What
are the prerequisites for using Oracle Fail Safe?
Oracle Fail Safe runs under Windows NT, Windows 2000, and Windows Server 2003 EE. Windows NT Enterprise Edition requires Microsoft Cluster Server (MSCS) Version 1.0, whereas MSCS is included with Windows 2000/2003. MSCS is the cluster software that Oracle Fail Safe uses. Currently, MSCS (and thus Oracle Fail Safe) supports up to 8 nodes in a cluster system.
Cluster
Nodes:
1. On each cluster node,
install:
a. Microsoft Windows on a
private (system) disk.
b. Microsoft Cluster Server
(MSCS) on a private disk if you installed Microsoft Windows NT in step a.
(MSCS is included in
Microsoft Windows 2000 and Microsoft Windows Server 2003 installations.)
2. Validate that the IP
addresses and hostnames resolve properly across the cluster.
3. On each cluster node for
each Oracle product that you intend to install, create an Oracle home on a
private disk. Ensure that the Oracle homes on each cluster node have the same
name.
4. On each cluster node,
install into the Oracle home (or homes) the optional Oracle software (Oracle Database,
Oracle HTTP Servers, and other applications) that you plan to use with Oracle
Fail Safe.
Client
Nodes (running Oracle Fail Safe Manager):
On one or more client systems, optionally install Oracle Fail Safe Manager. Client nodes where you will be running Oracle Fail Safe Manager must be running Windows NT or Windows 2000/2003.
85. Does
Fail Safe support multiple Oracle homes?
Oracle Fail Safe supports
Oracle databases in multiple homes under these conditions:
· Install Oracle Services for
MSCS in any one Oracle home on all cluster nodes. Only one version of Oracle
Services for MSCS can be installed and running on a node.
Fail Safe can be installed in
any Oracle home but only once on each node.
If a database is to be made
Fail Safe (added to a Fail Safe group), the database must exist on both nodes. Any Oracle home that is to be used
with Fail Safe must be symmetrical on the cluster (i.e. same home name and
Oracle version).
All resources in a Fail Safe group must reside in the same Oracle home. For example, you cannot add a database from one home and another database from a different home into the same Fail Safe group.
You can install multiple
versions of Oracle Fail Safe Manager on a system, but each version must be installed
in a different Oracle home, and the latest release of Oracle Fail Safe Manager
must be installed last.
86. In
what types of situations is Oracle Fail Safe useful?
Fail Safe has two principal
uses:
Allows an Oracle database,
Oracle Forms, or Oracle Reports to run on either node of a cluster, for continued
availability in the event of node downtime.
Shares workload between the
cluster nodes, eliminating the need to have a standby machine that, typically,
is idle and thus underutilized.
87. How is
Oracle Fail Safe installed and managed?
Oracle Fail Safe has been
designed to be easy to install and manage. A component called Oracle Fail Safe Manager
acts as a console, allowing standard Windows operations such as drag-and-drop
capabilities to be used in management.
There is also a command-line
interface in Fail Safe (FSCMD) to support scripting and batch automation of administration
tasks (such as backups).
A powerful set of verification and troubleshooting tools is provided with release 2.1.2 and above. Verify Cluster, Verify Group, and Verify Standalone Database enable the system manager or DBA to check the health of a Fail Safe installation, monitor the progress of configuration changes, and quickly identify the source of any problems.
88. What
terminology does Fail Safe introduce?
There are a number of terms
specific to Oracle Fail Safe and cluster systems:
Cluster
Alias:
The cluster alias is a
node-independent network name that identifies a cluster and is used for
cluster-related system management. MSCS creates a group called the Cluster
Group, and the cluster alias is the virtual address of this group. Oracle Fail
Safe Manager interacts with the cluster components and MSCS using the cluster
alias. Clients do not connect to the cluster alias.
Fail
Safe Group:
A logical collection of cluster resources that forms a minimal unit of failover. Minimally, a Fail Safe group consists of a virtual server, a network name (chosen when Oracle Fail Safe is installed), and an IP address, which is used by clients to access the service provided by the Fail Safe group. A Fail Safe group runs on only one cluster node at a time. A Fail Safe group and a virtual server are synonymous.
Failover:
The process by which a Fail
Safe group (virtual server) is made available on the surviving node.
Failover
Time:
The total time taken between
the database becoming unavailable on the failed node and available on the surviving
node.
Failback:
The process by which a database is (optionally) made available again on the first node, after the cause of the failover has been rectified.
Virtual
Server:
A collection of services that
gives the appearance of a physical Windows server to clients, regardless of the
actual physical node on which the services are running. (To clients accessing
the Fail Safe resources, the virtual server looks like a physical node.) A
virtual server contains all of the resources necessary to run a particular
application, including a network name and an IP address resource. Thus, you may
have several virtual servers--one for each of the Fail Safe (failover) groups.
A Fail Safe group and a virtual server are synonymous.
89. What
happens when database failover occurs?
The sequence of events
depends on whether the failover is planned or unplanned.
Failover
events in a planned failover:
1. Oracle Fail Safe initiates a checkpoint and, when it is complete, shuts down the database.
2. Oracle Fail Safe requests MSCS to move the Fail Safe group resources to the other cluster node.
3. MSCS starts the instance on the second node.
4. Client applications can then connect to the Oracle Fail Safe database on the second node, using the same virtual server address.
Failover
events in an unplanned failover:
1. MSCS detects that a problem has occurred on one node. A predetermined failover policy for the Fail Safe group is followed; often, one or more attempts will be made to restart the database rather than fail it over, in case the problem was transitory.
2. If failover is activated,
MSCS starts the database instance on the surviving node.
3. Instance recovery takes
place.
4. Client applications can then connect to the Oracle Fail Safe database on the second node, using the same virtual server address.
90. Does
Fail Safe work only with databases?
No. Fail Safe also works with other Oracle products, and there is limited support for third-party applications. Oracle has worked with vendors to make sure that third-party applications can work with a Fail Safe database; however, Oracle does not support configuring the third-party applications themselves.
The current releases provide
support for Oracle Forms and Oracle Reports.
91. Can
Fail Safe be used to share a cluster workload?
Yes. Oracle Fail Safe minimizes the need to have a standby machine that is idle until a failover occurs. However, care is needed in capacity planning. If both cluster nodes perform useful work (this is called an active/active configuration), loading may be such that the surviving node is unable to cope effectively with the combined workload. A compromise would be to plan for both nodes to be, say, 75% utilized under normal operation, so that after failover the performance of the surviving node would be slower but hopefully acceptable.
92. How do
I make an Oracle database into a Fail Safe database?
Take the following steps to create a Fail Safe database:
1. Create a database on either cluster node, placing the data files, log files, and control files on the shared cluster disks. Database files such as the initialization file can reside on a private (non-cluster) disk.
2. Create a Fail Safe group using Oracle Fail Safe Manager, supplying a virtual server network name and IP address.
3. Drag the standalone database to the Fail Safe group to invoke the Add Resource to Group Wizard. The wizard will then determine the disks used by the database, add them to the Fail Safe group, configure the SQL*Net V2 or Net8 files to work with the virtual server, and test that the Fail Safe database works correctly on each node.
Configuring a Fail Safe database is an example of a long-running operation that takes several minutes to complete, because such operations typically are performed on both nodes in the cluster. During long-running operations, a window displays the progress of the operation. This window can provide useful diagnostic information in the event of a problem.
93. How
long does it take to fail over a database?
In tests using Oracle databases with SAP R/3, both planned and unplanned failovers took from about 30 seconds on systems with a few users to about a minute on systems with 300 users. For database failover, Oracle8 and later releases allow users to connect before recovery is complete.
94. Can
Oracle Fail Safe support applications that use OCI or ODBC?
An application perceives
failover as a network problem. There are two issues:
How to re-establish the
connection to the database
How to re-establish the state
of the application after failover has taken place
If you want to configure an
existing application to access databases or other applications configured with Oracle
Fail Safe, then few or no changes are required. Because applications always
access cluster resources at the same virtual address, applications treat
failover as a quick node restart. After a failover occurs, database clients or
users must reconnect and replay any transactions that were left undone (such as
database transactions that were rolled back during instance recovery).
Applications developed with OCI (including ODBC clients that use the Oracle
ODBC driver) can take advantage of automatic reconnection after failover.
With transparent application
failover, clients do not need to explicitly reconnect after a group fails over.
The OCI connection handles reconnection and state recovery automatically for
the client application. In fact, applications that are not actively updating
the database at the time of a failure may not notice that failover is occurring.
The transparent application
failover feature is supported for Oracle Database Enterprise Edition. For
Oracle Database 10g Release 10.1.0.3 and later, the transparent application
failover feature is also supported for Oracle Database 10g Standard Edition.
95. In
what state are transactions left after failover?
When failover occurs, the
database running on the failed node is shut down using Abort mode. Thus, any active
transactions will be rolled back when recovery takes place on the second node
(but also see the next question).
96. What
does the Oracle Resource DLL do?
This DLL is the heart of
Oracle Fail Safe. Its main tasks include the following:
Bring the database server
online and take it offline
Act as a bridge between MSCS
and the database
Perform "Is Alive"
polling to test queries against the database to check all is well.
Change the timing parameters
(Pending Timeout, etc) based on database workload
97. What
SQL*Net changes are needed to use Fail Safe?
Oracle Fail Safe automatically
updates the tnsnames.ora and listener.ora files on all cluster nodes (that are possible
owners for the single-instance database) and on the client node where you are
running Oracle Fail Safe Manager. If you need to enable remote clients (that
are not running Oracle Fail Safe) to process work against a single-instance
Oracle database through a cluster node, you must edit the tnsnames.ora files to
update the host name with the virtual address information. Edit each client's
local tnsnames.ora file using a network configuration tool.
When you configure a Fail Safe group, the most important information that you supply is the group name and the associated virtual server address. The host name, database instance, SID entry, and protocol information must match on both cluster nodes and on each client system that is running Oracle Fail Safe Manager. Adding a single-instance database to the group updates the net service name entry in the tnsnames.ora file for the database to use the virtual address of the group. If there are multiple Oracle homes on the node, then all tnsnames.ora files are updated.
Oracle Fail Safe checks the tnsnames.ora file for an existing entry with the same name; if an entry for the net service name exists, Oracle Fail Safe updates it, otherwise it writes a new entry.
Oracle Fail Safe also creates a new Oracle Fail Safe listener that is configured to listen on the virtual address associated with the single-instance database, and automatically adds it to the Fail Safe group.
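For illustration, a client tnsnames.ora entry pointing at the virtual server might look like this (the net service name, host, and service are placeholders):
SALES =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = fs-virtual-server)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = sales))
  )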
98. How
does Oracle Fail Safe differ from Microsoft Cluster Server?
Oracle Fail Safe has built-in knowledge of Oracle resources (databases, Reports, Forms) and how they work on shared-nothing cluster systems. MSCS by itself is not adequate for configuration and management of Oracle resources.
99. How
does Fail Safe compare to RAC?
There are some similarities
between Oracle Fail Safe and RAC, but there are a greater number of
differences.
Both products support Oracle databases on more than one node. Oracle Fail Safe currently supports up to eight nodes (a limitation of MSCS clusters), whereas RAC can, depending on the vendor, support larger numbers of nodes. The key differences are in cost and scalability: Oracle Fail Safe is a much lower-cost solution than RAC, but does not offer the scalability of RAC for expanding the cluster and its workload as a business's needs grow.
Other differences lie in the technology used. Oracle Fail Safe uses the MSCS clustering software, which means the cluster nodes share no resources. RAC uses shared disks, and all nodes have concurrent access to the data on all disks. Sophisticated lock and cache management technologies, used in conjunction with partners' disk cluster technologies, allow RAC to offer a highly available, highly scalable solution for Windows clusters.
Fail Safe is suited for
easily partitioned workloads and data for customers who:
Require high availability for the database, where a single-instance Oracle database will suffice.
Expect most configuration and
management of clusters and Oracle databases to be handled automatically.
Expect tuning and management
of Oracle Fail Safe databases to be the same as single instance Oracle10g/Oracle11g
databases.
Real Application Clusters is
suited for corporate database customers who:
Require parallel access to a
single database by multiple systems
Have access to multi-instance
Oracle database and cluster tuning and configuration knowledge
100. When
will Fail Safe support more than two nodes?
This depends entirely on how many nodes MSCS supports. For now, both Windows Server 2003 Enterprise Edition and Datacenter Edition support server cluster configurations of up to 8 nodes.
101. Is
Fail Safe always going to use Enterprise Manager?
Oracle Fail Safe Manager also works with Oracle Enterprise Manager; the needed software is included on the Oracle Fail Safe CD. Oracle Enterprise Manager is not a requirement for Oracle Fail Safe, but you can use Enterprise Manager for routine database administration tasks (such as database backup and restore operations, or SGA analysis) on Fail Safe databases.
Integration with Oracle Enterprise Manager became optional starting with the 2.1.3 release. Prior to 2.1.3, Oracle Enterprise Manager was required in order to discover any standalone databases running on the cluster nodes. Starting with the 2.1.3 release, this discovery is performed directly by the Oracle Fail Safe server.
Enterprise Manager can be
used (optionally) to help you with managing events and jobs in the cluster.
102. Which Oracle Enterprise Manager releases integrate with which Oracle Fail Safe releases?
Oracle Fail Safe Release    Oracle Enterprise Manager Integration
Oracle Fail Safe 3.4.1      OEM 9.2.0 and 10.1.0
Oracle Fail Safe 3.3.x      OEM 1.5
Oracle Fail Safe 2.1.3      OEM 1.6 (optional)
103. Does
Fail Safe support rolling upgrades and patches?
Yes, Fail Safe supports rolling upgrades. OFS allows a cluster system to continue to provide service while the software is being upgraded to the next release. During the process you need to perform a planned failover so that the cluster resources running on one node move to the other node. This requires a brief downtime, typically less than 1 minute.
STREAMS CONCEPTS
104. What is Streams?
Oracle Streams is an information sharing feature that provides replication, message queuing, data warehouse loading, and event notification. It is also the foundation behind Oracle Advanced Queuing and Oracle Data Guard SQL Apply.
Streams is extremely flexible. Using the Streams components in different combinations, Streams can also be used to minimize downtime during application upgrades and to audit DML/DDL activity within the Oracle database.
105. Architecture / components of Streams
Capture
Captures database changes and application-generated messages into the staging area (queue).
Changes are captured in two
ways.
With implicit capture, the
server captures DML and DDL events at a source database.
Explicit capture allows
applications to explicitly generate events and place them in the staging area.
Implicit capture mines the redo log, either by hot mining the online redo log or, if necessary, by mining archived log files.
After retrieving the data,
the capture process formats it into a Logical Change Record (LCR) and places it
in a staging area for further processing.
The capture process can
intelligently filter LCRs based upon defined rules. Thus, only changes to
desired objects are captured.
User applications can
explicitly enqueue user messages representing events into the staging area.
These messages can be
formatted as LCRs, which will allow them to be consumed by the apply engine, or
they can be formatted for consumption by another user application.
Staging
Once captured, events are
placed in a staging area. The staging area is a queue that provides a service
to store and manage captured events.
Subscribers examine the contents of the staging area and determine whether or not they have an interest in an event. A subscriber can be a user application, another staging area (usually on another system), or the default apply process.
Propagation
If the subscriber is another
staging area, the event is propagated to the other staging area, either within
the same database or in a remote database, as appropriate.
To simplify network routing and reduce WAN traffic, events need not be sent to all databases and applications. Rather, they can be directed through staging areas on one or more systems until they reach the subscribing system. For example, an event may propagate via a hub database that does not actually apply the event. A single staging area can stage events from multiple databases, simplifying setup and configuration.
Consumption
Events in a staging area are consumed by the apply engine, where the changes they represent are applied to a database, or they are consumed by an application. Oracle Streams includes a flexible apply engine that allows use of a standard or custom apply function. This enables data to be transformed when necessary. Support for explicit dequeue allows application developers to use Oracle Streams to notify applications of changes to data, while still leveraging the change capture and propagation features of Oracle Streams.
Default
Apply
The default apply engine
applies DML changes and DDL changes represented by implicitly or explicitly
captured LCRs.
The default apply engine will detect conflicts where the destination row has been changed and does not contain the expected values. If a conflict is detected, then a resolution routine may be invoked.
User-Defined
Function Apply
The apply engine can pass the LCR or a user message to a user-defined function. This provides the greatest amount of flexibility in processing an event. A typical application of a user-defined function would be to reformat the data represented by the LCR before applying it to a local table, for example, field format, object name, and column name mapping transformations. A user-defined function could also be used to perform column subsetting, or to update other objects that may not be present in the source database.
Explicit
Dequeue
User applications can explicitly dequeue LCRs or user messages from the receiving staging area. This allows a user application to efficiently access the data in a Streams staging area. Streams can send notifications to registered PL/SQL or OCI functions, giving the applications an alternative to polling for new messages. Of course, applications can still poll, or even wait, for new subscribed messages in the staging area to become available.
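As an illustrative sketch of how a capture process and its rules could be configured with DBMS_STREAMS_ADM (the queue and table names are placeholders):
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.streams_queue_table',
    queue_name  => 'strmadmin.streams_queue');
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'scott.emp',
    streams_type => 'capture',
    streams_name => 'capture_emp',
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => TRUE,
    include_ddl  => FALSE);
END;
/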
106. What are rules and rule sets?
Streams lets users control which information to share and where to send it by specifying rules. At the highest level, users can indicate whether they want to capture, propagate, or apply changes at the table, schema, or global (database) level.
For more complex requirements, for example, to apply only a particular subset of data at a given location, users can specify a rule condition similar to the condition in the WHERE clause of a SQL query. If necessary, related rules can be grouped into rule sets.
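For illustration, a subset-rule sketch (names and condition are placeholders); the dml_condition acts like a WHERE clause:
BEGIN
  DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
    table_name    => 'scott.emp',
    dml_condition => 'deptno = 10',   -- rows to apply at this site
    streams_type  => 'apply',
    streams_name  => 'apply_emp',
    queue_name    => 'strmadmin.streams_queue');
END;
/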
107. What are transformations?
A transformation is a change
in the form of an object participating in capture and apply or a change in the
data it holds. Transformations can include changing the datatype representation
of a particular column in a table at a particular site, adding a column to a
table at one site only, or including a subset of the data in a table at a
particular site.
A transformation can be
specified during enqueue, to transform the message to the correct type before
inserting it into the staging area. It can also be specified for propagation,
which may be useful for subsetting data before it is sent to a remote site.
Finally, it can be specified at dequeue or local apply, which can be useful for
formatting a message in a manner appropriate for a specific destination.
108. Why do we need Streams?
Replication of data across
different sites.
Message Queuing
Data warehouse loading.
Event Management and
Notification.
Data Protection.
Database Availability During
Upgrade and Maintenance Operations
109. What are the advantages of Streams over Advanced Replication?
Streams provides enhanced
functionality and flexibility, when compared to Advanced Replication.
Streams provides some
features that are not provided in Advanced Replication:
No quiesce for DDL changes
Setting up Streams is online, whereas Advanced Replication requires a minimal downtime on the source
Streams can have a different structure for tables on source and target (even the schema can be different), which is not possible with Advanced Replication
Streams captures transactions by mining the redo logs, whereas Advanced Replication uses triggers, which have a performance overhead
Streams can be used to
replicate only a subset of data
Streams can be used for
heterogeneous support.
110. What are the different types of Streams configuration?
Unidirectional Streams
Bi-directional Streams
Hub-spoke Configuration
Downstream Capture
Real time Downstream Capture
111. What is spilling?
The logical change records are staged in a memory buffer associated with the queue; they are not ordinarily written to disk. If the messages/LCRs stay in the buffer for a period of time without being dequeued, or if there is not enough space in memory to hold all of the captured events, then they are spilled to disk.
AQ$_<queue_table_name>_P is the table that stores the captured events that spill from memory.
112. Can Oracle Streams be used between different hardware platforms and OS versions?
Streams can be used across different hardware, different operating systems, and different Oracle versions.
113. What is a Streams tag?
Every redo entry in the redo log has a tag associated with it. The datatype of the tag is RAW. By default, when a user or application generates redo entries, the value of the tag is NULL for each redo entry, and a NULL tag consumes no space in the redo entry. The size limit for a tag value is 2000 bytes.
In Streams, rules can have
conditions relating to tag values to control the behavior of Streams clients.
For example, a tag can be used to determine whether an LCR contains a change
that originated in the local database or at a different database, so that you
can avoid change cycling (sending an LCR back to the database where it
originated). Also, a tag can be used to specify the set of destination
databases for each LCR. Tags can be used for other LCR tracking purposes as
well.
You can specify Streams tags
for redo entries generated by a certain session or by an apply process. These
tags then become part of the LCRs captured by a capture process. Typically,
tags are used in Streams replication environments, but you can use them
whenever it is necessary to track database changes and LCRs.
You can control the value of
the tags generated in the redo log in the following ways:
1. Use the
DBMS_STREAMS.SET_TAG procedure to specify the value of the redo tags generated
in the current session.
When a database change is
made in the session, the tag becomes part of the redo entry that records the
change.
Different sessions can have
the same tag setting or different tag settings.
2. Use the CREATE_APPLY or
ALTER_APPLY procedure in the DBMS_APPLY_ADM package to control the value of the
redo tags generated when an apply process runs. All sessions coordinated by the
apply process coordinator use this tag setting. By default, redo entries
generated by an apply process have a tag value that is the hexadecimal
equivalent of '00' (double zero).
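For illustration, a minimal sketch of setting a session tag (the tag value is an example):
BEGIN
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('1D'));  -- example tag value
END;
/
-- DML performed now carries tag '1D' in its redo entries
BEGIN
  DBMS_STREAMS.SET_TAG(tag => NULL);            -- restore the default NULL tag
END;
/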
114. Streams heterogeneous services
Oracle Streams is an open information sharing solution. Each element supports industry-standard languages and standards. Streams supports capture and apply from Oracle to non-Oracle systems. Changes can be applied to a non-Oracle system via a transparent gateway. Streams also includes an API to allow non-Oracle data sources to easily submit or receive change records, allowing for heterogeneous data movement in both directions. In addition, messages can be sent to and received from other message queuing systems such as MQ Series via the Message Gateway.
115. What are the different SCNs with respect to Streams?
First SCN
The first SCN is the lowest
SCN in the redo log from which a capture process can capture changes. If you
specify a first SCN during capture process creation, then the database must be
able to access redo data from the SCN specified and higher.
Start SCN
The start SCN is the SCN from
which a capture process begins to capture changes. You can specify a start SCN
that is different than the first SCN during capture process creation, or you
can alter a capture process to set its start SCN. The start SCN does not need
to be modified for normal operation of a capture process. Typically, you reset
the start SCN for a capture process if point-in-time recovery must be performed
on one of the destination databases that receive changes from the capture
process. In these cases, the capture process can be used to capture the changes
made at the source database after the point-in-time of the recovery.
Captured SCN
The system change number (SCN) that corresponds to the most recent change scanned in the redo log by a capture process.
Applied SCN
A system change number (SCN) relating to a capture process that corresponds to the most recent message dequeued by an apply process that applies changes captured by the capture process.
Required Checkpoint SCN
The SCN that corresponds to
the lowest checkpoint for which a capture process requires redo data is the
required checkpoint SCN. The redo log file that contains the required
checkpoint SCN, and all subsequent redo log files, must be available to the
capture process. If a capture process is stopped and restarted, then it starts
scanning the redo log from the SCN that corresponds to its required checkpoint
SCN. The required checkpoint SCN is important for recovery if a database stops
unexpectedly. Also, if the first SCN is reset for a capture process, then it must be set to a value that is less than or equal to the required checkpoint SCN for the capture process. You can determine the required checkpoint SCN for a capture process by querying the REQUIRED_CHECKPOINT_SCN column in the DBA_CAPTURE data dictionary view.
Instantiation SCN
The system change number
(SCN) for a table which specifies that only changes that were committed after
the SCN at the source database are applied by an apply process.
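You can view these SCN values for a capture process with a query such as:
SELECT capture_name, first_scn, start_scn,
       required_checkpoint_scn, captured_scn, applied_scn
  FROM dba_capture;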
116. How are LOBs queued and propagated? What happens when there is an update/insert operation involving a 1 GB LOB?
A single logical change record (LCR) for all non-LOB columns is queued. For each LOB column, LCRs based on the chunk size of the LOB indicated in the redo log are queued. The number of LCRs queued depends on the size of the LOB and the chunk size indicated. As an implementation detail: Streams uses a queue buffer and may buffer all LOB chunks; this may cause spill-over to disk.
117. When does Streams read the Oracle online redo logs? Does the presence of Streams replication affect the log-switching/archiving mechanism?
Streams Capture reads the changes after they are written to the redo log. Streams is independent of the log-switching and archiving mechanism. Streams can seamlessly switch between reading the archived logs and the online logs, and back again, if necessary.
In Streams, Logical Change
Records (LCRs) are eagerly captured and propagated. Each captured change is
staged and propagated independent of the transaction boundary. The APPLY
process, however, only applies committed transactions.
118. Can I
still use Streams if I can't set the init.ora parameter GLOBAL_NAMES to TRUE?
The init.ora parameter,
GLOBAL_NAMES, should be set to TRUE in any distributed system. This parameter
requires that
the GLOBAL_NAME (from the
GLOBAL_NAME view) for the dblink target database match the database link name.
Streams does not enforce this setting and will run independently of it. In a replicated scenario, setting this parameter to TRUE is an additional confirmation that the correct database is being updated.
119. Can
Streams be used to maintain availability during migrations between platforms or
database upgrades?
Yes, Streams can be used to
ensure database availability to users while a database maintenance or migration
is in progress.
120. How do you remove the Streams configuration?
Note 276648.1 Remove Streams Procedure for 9.2.0.X.
In 10g there is a procedure that enables you to remove the entire Streams configuration locally.
NOTE: THIS WILL REMOVE THE ENTIRE STREAMS SETUP IN THE GIVEN DATABASE. IF YOU JUST WANT TO REMOVE ONE STREAMS CONFIGURATION, THIS CANNOT BE USED.
It is part of the
dbms_streams_adm package and is run simply as-is:
SQL> connect / as sysdba
connected.
SQL> execute
DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();
You can run this multiple
times so if the procedure fails to complete, there is no harm in running it
again.
Note that the procedure will not remove the STRMADMIN user, which must be dropped separately, and that the procedure needs to be run once at each database where a Streams environment resides.
121. Supplied
PL/SQL packages for administering Streams.
DBMS_STREAMS
DBMS_STREAMS_ADM
DBMS_STREAMS_AUTH
DBMS_STREAMS_MESSAGING
DBMS_STREAMS_TABLESPACE_ADM
DBMS_CAPTURE_ADM
DBMS_APPLY_ADM
DBMS_PROPAGATION_ADM
122. Just as the capture process in Streams is identified in the format 'Cnnn', is there an equivalent 'apply' background process?
Yes, the apply process is an optional Oracle background process that dequeues logical change records (LCRs); it is denoted as A001 (Annn) in the alert log file.
In the alert log file you could see messages like the following:
Streams APPLY A001 started
with pid=22, OS id=2176
Streams Apply Reader P000
started with pid=23 OS id=2008
Streams Apply Server P001
started with pid=24 OS id=2524
Streams Apply Reader P000
pid=23 OS id=2008 stopped
Streams Apply Server P001
pid=24 OS id=2524 stopped
Streams Apply Server P000
pid=23 OS id=2008 stopped
When the first message is received at the destination (apply) site, a trace file is generated in the udump destination:
*** SERVICE NAME:(orcl)
2004-05-09 12:40:36.849
*** SESSION ID:(131.1073)
2004-05-09 12:40:36.849
05/09/2004 12:40:36
kpoaqxeq: received first msg
123. If using Streams to replicate at the SCHEMA level, what type of supplemental logging is needed? Is there a performance degradation if we enable DATABASE supplemental logging?
Preferably, enable the supplemental logging at the table level (first priority).
Enabling database-level supplemental logging will not have much overhead or major impact on the database, but more redo will be generated.
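For illustration, minimal sketches of both levels (table, group, and column names are placeholders):
ALTER TABLE scott.emp
  ADD SUPPLEMENTAL LOG GROUP log_group_emp_pk (empno) ALWAYS;
-- or, database wide:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;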
124. What options are available for the third-party replication product called SharePlex, and what datatypes, DML, and DDL are supported for replication?
The options for SharePlex are the same as those used in the Oracle database.
Refer:
Note 238457.1 What DML and
DDL is Captured by Streams
Note 238455.1 Streams
Supported and Unsupported Datatypes (ORA-902 Invalid Datatype)
The Streams documentation can also be obtained from the otn.oracle.com site. Chapter 2, Streams Capture Process, gives the details about the configuration. Demos and examples can be obtained from the Oracle Streams Advanced Queuing User's Guide and Reference, Release 10.1, Part Number B10785-01.
125. How do I evaluate setup/administration using OEM?
Kindly refer to Note 206018.1 Configuring Streams 9.2 Using Oracle Enterprise Manager 9.2.0. The note applies even for 10g.
126. A capture process has aborted and gone undetected for a period of time. The archived logs have expired and been removed from disk. What are the steps to take to resync the CAPTURE and APPLY processes?
+ stop apply
+ delete the apply errors
+ stop the capture process
+ recreate the capture
+ instantiate the tables
+ start apply
+ start the capture
127. Can Streams be used to replicate between 9i and 10g, and vice versa (provided the datatypes exist in the two different database versions)? Are there any limitations/problems?
Yes, you can replicate between 9.2 and 10g (both ways), provided you have applied at least patch set 9.2.0.5.
There are no known problems with this configuration. See Note 742060.1 Release Schedule of Current Database Patch Sets.
128. The capture and apply processes are meant to resume their previous state after a database shutdown/startup. After recreation of the controlfile, how does one resync the capture and apply processes?
Yes, the capture and apply processes will maintain their state.
Drop and recreate the capture process using DBMS_CAPTURE_ADM.STOP_CAPTURE and DBMS_CAPTURE_ADM.DROP_CAPTURE.
A capture process reads online redo logs whenever possible and archived redo logs otherwise. For this reason, the database must be running in ARCHIVELOG mode when a capture process is configured to capture changes, and you must keep the archived redo logs available from the capture process's required checkpoint SCN onwards.
129. If I drop/recreate my capture process, how can I ensure that I retain all the rules previously associated with it?
When you drop the capture process alone, the associated rules and rule sets are also removed.
You can check the names of the rules from DBA_RULES, DBA_CAPTURE, and DBA_RULE_SET:
SELECT capture_name, rule_set_name, rule_set_owner FROM dba_capture;
130. From which query can I find out the start and
end SCN for each of the archivelogs that I presently have on disk?
FIRST_CHANGE# and NEXT_CHANGE# from V$LOG_HISTORY will give the start and end SCN for each log.
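For example:
SELECT sequence#, first_change#, next_change#
  FROM v$log_history
 ORDER BY sequence#;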
131. The APPLY process has shown apply errors on the target database. Is there any method of identifying the DML or DDL that was executed on the source database, to determine the data difference?
Use DBA_APPLY_ERROR to check the error message.
Chapter 17 of the Streams manual has the print LCR procedure that can be used to view details.
Refer to Note 265201.1 Troubleshooting Streams Apply Error ORA-1403, ORA-26787 or ORA-26786.
132. How do I monitor my environment and ensure everything in Streams is functioning properly?
a) Check the alert log file on the source and target databases for any errors
b) Check the capture Cnnn.trc at the source and Annn.trc at the target for warnings and errors
c) Check the diagnostic views from Note 231226.1 Capture Process Diagnostic SQL Queries and Note 231350.1 Streams: Apply Process Diagnostic SQL Queries
d) Review the output from the health check script on both sites: Note 273674.1 Streams Configuration Report and Health Check Script
e) Monitor OEM alerts; the STRMMON utility can also be used, see Note 290605.1 Oracle Streams STRMMON Monitoring Utility
f) To make sure data is in sync, run the following query from the source site:
SELECT capture_name,
queue_owner, queue_name, rule_set_name, status, start_scn,required_checkpoint_scn,max_checkpoint_scn,
captured_scn, applied_scn
FROM dba_capture;
Make sure captured_scn and applied_scn are moving and are the same.
133. Do all instances in a Streams environment need to be running in archivelog mode?
In a unidirectional replication, the source must be in archivelog mode.
In a bidirectional replication, both the source and target must be in archivelog mode.
In a downstream capture setup, the archived logs from the source must additionally be copied to the designated directory on the downstream server.
134. Can we implement Streams between different platforms?
Yes, you can implement Streams between different platforms.
135. What
is difference between Streams and Advanced replication?
See Note 370850.1 Comparison Between Features RAC, Dataguard, Streams, Advanced Replication and Basic Replication.
136. How can we remove/clean Streams completely?
See Note 260028.1 Remove Streams in 10g.
138. Can we implement downstream capture between different platforms?
No. For a downstream capture database, the platforms must match.