Monday, 2 July 2018

Interview Q and A for Oracle RAC Part - 3

131. Is Veritas Storage Foundation supported with Oracle RAC?
Veritas certifies Veritas Storage Foundation for Oracle RAC with each release. Check Certify and the Veritas Support Matrix for the latest details.

132. Is there a cluster file system (CFS) Available for Linux?
Yes, ACFS (ASM Cluster File System with Oracle Database 11g Release 2) and OCFS (Oracle Cluster Filesystem) are available for Linux. The following Metalink note has information for obtaining the latest version of OCFS:
Note 238278.1 - How to find the current OCFS version for Linux

133. Is it possible to run Oracle RAC on logical partitions (i.e. LPARs) or virtual separate servers?
Yes, it is possible. Check Certify to understand the current details for the different hardware solutions.
High-end servers can be partitioned into domains (partitions) of smaller sizes, each domain with its own CPU(s) and operating system. Each domain is effectively a virtual server. Oracle RAC can be run on a cluster comprised of domains. The benefits are similar to those of a regular cluster: any domain failure will have little effect on other domains. In addition, management of the cluster may be easier since there is only one physical server. Note, however, that one E10K is still just one server, so there are single points of failure. Any failure, such as a backplane failure, that brings down the entire server will shut down the virtual cluster. That is the tradeoff users have to make in deciding how best to build a cluster database.


134. What are the implications of using srvctl disable for an instance in my Oracle RAC cluster? I want to have it available to start if I need it, but at this time do not want to run this extra instance for this database.
During a node reboot, any disabled resources will not be started by the Clusterware, so this instance will not be restarted. It is recommended that you leave the VIP, ONS, and GSD enabled on that node. For example, the VIP address for this node is present in the address list of database services, so a client connecting to these services will still reach some other database instance providing that service via listener redirection.
Be aware that disabling an instance on a node only means that the instance itself does not start. If the database was originally created with 3 instances, there are 3 threads of redo. So, while the instance itself is disabled, its redo thread is still enabled and will occasionally cause log switches. The archived logs for this 'disabled' instance will still be needed in any potential database recovery scenario. Therefore, if you are going to disable the instance through srvctl, you may also want to consider disabling the redo thread for that instance:
srvctl disable instance -d orcl -i orcl2
SQL> alter database disable public thread 2;
Do the reverse to enable the instance.
SQL> alter database enable public thread 2;
srvctl enable instance -d orcl -i orcl2

135. If using PL/SQL native compilation, the plsql_native_library_dir parameter needs to be defined. In an Oracle RAC environment, must the directory be on shared storage?
In an Oracle RAC configuration, this parameter must be set in each instance. The instances are not required to have a shared file system; on each instance plsql_native_library_dir can be set to point to an instance-local directory. Alternatively, if the Oracle RAC configuration supports a shared (cluster) file system, you can use a common directory (on the shared file system) for all instances. You can also check out the PL/SQL Native Compilation FAQ on OTN: www.oracle.com/technology/tech/pl_sql/htdocs/ncomp_faq.html With Oracle RAC 11g Release 2, use ACFS (ASM Cluster File System).
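As a minimal sketch (assuming an spfile and hypothetical local paths), the parameter can be set per instance:
SQL> alter system set plsql_native_library_dir='/u01/app/oracle/ncomp' scope=spfile sid='orcl1';
SQL> alter system set plsql_native_library_dir='/u01/app/oracle/ncomp' scope=spfile sid='orcl2';
Each directory exists locally on its own node, so no shared storage is required.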

136. How do I identify which node was used to install the cluster software and/or database software?
You can find out which node by running the olsnodes command. The node returned first is the node from which the software was installed and from which patches should be installed.
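For example (node names here are hypothetical), the first node listed is the install node:
$ olsnodes
racnode1
racnode2
racnode3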
Note: When applying patches in a rolling fashion, you are recommended to run the rolling scripts from the last node added to the cluster first and follow the list in reverse order.

137. Are the Oracle Clusterware bundle patches cumulative, do they conflict with one another?
Fix-wise, the Oracle Clusterware bundles are cumulative, that is, CRS bundle #3 fixes all the issues that bundle #2 did, and some additional ones, see Note:405820.1 for complete list of bugs fixed in each bundle.
However, OPatch does not allow you to apply any patch if there are any overlapping libraries or binaries between an already installed patch and the to-be-installed patch. If two patches touch a particular file, e.g. kcb.o, then the existing patch must be manually removed before the new one is applied.
So, bundle patches are cumulative; however, they do conflict with one another due to the way OPatch allows patch application, hence the previous bundle must be manually removed before a new one is applied. To check if any two patches conflict, invoke OPatch as per Note:458485.1 or using: $ OPatch/opatch prereq CheckConflictAmongPatches -phbasefile patchlist
where patchlist is a text file containing all the patch numbers to be checked, separated by a newline.
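A quick sketch of the check (the patch numbers are purely illustrative):
$ cat patchlist
1234567
2345678
$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAmongPatches -phbasefile patchlist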

138. I have added a second network to my cluster, can I load balance my users across this network?
Server side load balancing will only work on a single network, which is configured as the public network with the Oracle VIPs. If you add a second network with a second listener, do not add this new listener to the local_listener and remote_listener parameters. You can use client-side load balancing and failover for users connecting to this network; however, you will be unable to use server-side load balancing or receive FAN events for this network.
Oracle RAC 11g Release 2 adds the support for multiple public networks. Connections will be load balanced across the instances. Each network will have its own service. To enable load balancing use the LISTENER_NETWORKS parameter instead of LOCAL_LISTENER and REMOTE_LISTENER.
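A minimal sketch of that parameter (listener names and the SCAN address are hypothetical):
SQL> alter system set listener_networks=
  '((NAME=net1)(LOCAL_LISTENER=lsnr_net1)(REMOTE_LISTENER=rac-scan:1521))',
  '((NAME=net2)(LOCAL_LISTENER=lsnr_net2))' scope=both sid='*';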

139. How is Oracle Enterprise Manager integrated with the Oracle RAC 11g Release 2 stack?
Oracle Enterprise Manager (EM) is available in 2 versions: Oracle EM Grid Control and Oracle EM Database Control. Oracle EM Grid Control follows a different release cycle than the Oracle Database, while a new version of Oracle EM Database Control ships with every new database release. At the time of writing, Oracle EM Grid Control is available in version 10.2.0.5. This version does not support the new features of Oracle Database 11g Release 2. An Oracle Database 11g Release 2 database can, however, be managed with the current version of Oracle EM Grid Control with some restrictions (no 11.2 feature support).
With Oracle Database and Grid Infrastructure 11g Release 2, Oracle EM Database Control is now able to manage the full Oracle RAC 11g Release 2 stack. This includes: Oracle RAC Databases, Oracle Clusterware, and Oracle Automatic Storage Management.
The new feature that needs to be noted here is the full management of Oracle Clusterware 11g Release 2 with Oracle EM Database Control 11g Release 2.

140. What storage option should I use for Oracle RAC on Linux? ASM / OCFS / Raw Devices / Block Devices / Ext3 ?
The recommended way to manage large amounts of storage in an Oracle RAC environment is ASM (Automatic Storage Management). If you really need/want a clustered filesystem, Oracle offers OCFS (Oracle Cluster File System); for the 2.4 kernel (RHEL3/SLES8) use OCFS Version 1, and for the 2.6 kernel (RHEL4/SLES9) use OCFS2. All these options are free to use and completely supported: ASM is bundled in the RDBMS software, and OCFS as well as ASMLib are freely downloadable from Oracle's OSS (Open Source Software) website.
EXT3 is out of the question, since its data structures are not cluster aware; that is, if you mount an ext3 filesystem from multiple nodes, it will quickly become corrupted.
Other options are NFS and iSCSI; both are outside the scope of this FAQ but included for completeness.
If for any reason the above options (ASM/OCFS) are not good enough and you insist on using raw devices or block devices, here are the details on the two (this information is still very useful to know in the context of ASM and OCFS).
On Unix/Linux there are two types of devices:
block devices (e.g. /dev/sde9) are BUFFERED devices! Unless you explicitly open them with O_DIRECT you will get buffered (Linux buffer cache) IO.
character devices (e.g. /dev/raw/raw9) are UN-BUFFERED devices! No matter how you open them, you always get unbuffered IO, hence there is no need to specify O_DIRECT on the file open call.
The above is not a typo: block devices on Unix do buffered IO by default (cached in the Linux buffer cache), which means that RAC cannot operate on them (unless opened with O_DIRECT), since the IOs will not be immediately visible to other nodes.
You may check whether a device is a block or a character device by the first letter printed by the "ls -l" command:
crw-rw---- 1 root disk 162, 1 Jan 23 19:53 /dev/raw/raw1
brw-rw---- 1 root disk 8, 112 Jan 23 14:51 /dev/sdh
Above, "c" stands for character device, and "b" for block devices.
Starting with Oracle 10.1, an RDBMS fix added the O_DIRECT flag to the open call (the O_DIRECT flag tells the Linux kernel to bypass the Linux buffer cache and write directly to disk). In the case of a block device, that meant that a create datafile on '/dev/sde9' would succeed (you need to set filesystemio_options=directio in the init.ora). This enhancement was well received, and shortly after, bug 4309443 was fixed (by adding the O_DIRECT flag on the OCR file open call), meaning that starting with 10.2 (there are several 10.1 backports available) the Oracle OCR file could also access block devices directly. For the voting disk to be opened with O_DIRECT you need the fix for bug 4466428 (5021707 is a duplicate). This means that both voting disks and OCR files could live on block devices. However, due to OUI bug 5005148, there is still a need to configure raw devices for the voting or OCR files during installation of RAC; not such a big deal, since it's just 5 files in most cases. It is not possible to ask for a backport of this bug since it would mean a full re-release of 10g; one alternative, if raw devices are not a good option, is to use 11g Clusterware (with a 10g RAC database).
By using block devices you no longer have to live with the limitation of 255 raw devices per node; you can access as many block devices as the system can support. Also, block devices carry persistent permissions across reboots, while with raw devices one would have to customize that after installation, otherwise the Clusterware stack or database would fail to start up due to permission issues.
ASM or ASMLib can be given the raw devices (/dev/raw/raw2), as was done in the initial deployment of 10g Release 1, or, the more recommended way, ASM/ASMLib should be given the block devices directly (e.g. /dev/sde9).
Since raw devices are being phased out of Linux in the long term, it is recommended that everyone switch to using block devices (meaning, pass these block devices to ASM, OCFS/2, or Oracle Clusterware).
Note: With Oracle Database 11g Release 2, Oracle Clusterware files (OCR and Voting Disk) can be stored in ASM, and this is the Best Practice. The Oracle Universal Installer and the configuration assistants (i.e. DBCA, NETCA) will not support raw/block devices. All command line interfaces will support raw/block for this release.
Therefore if you are using raw/block today, you can continue to use it, and upgrading to 11g Release 2 will not change the location of any files. However, due to the desupport in the next release, you are recommended to plan a migration to a supported storage option. All files supported natively in ASM will not be supported in production with the ASM Cluster File System (ACFS).

141. How can I validate the scalability of my shared storage? (Tightly related to RAC / Application
scalability)
Storage vendors tend to focus their sales pitch mainly on the storage unit's capacity in Terabytes (1000 GB) or Petabytes (1000 TB); however, for RAC scalability it's critical to also look at the storage unit's ability to process I/Os per second (throughput) in a scalable fashion, specifically from multiple sources (nodes). If that criterion is not met, RAC / Application scalability will most probably suffer, as it partially depends on storage scalability as well as a solid and capable interconnect (for network traffic between nodes).
Storage vendors may sometimes discourage such testing, boasting about their amazing front or backend battery-backed memory caches that "eliminate" all I/O bottlenecks. This is all great, and you should take advantage of such caches as much as possible; however, there is no substitute for a real-world test. You may uncover that the HBA (Host Bus Adapter) firmware or the driver versions are outdated (before you claim poor RAC / Application scalability).
It is highly recommended to test this storage scalability early on so that expectations are set accordingly. On Linux there is a freely available tool released on OTN called ORION (Oracle I/O test tool) which simulates Oracle I/O.
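A minimal ORION invocation might look like this (a sketch; the executable name and disk count depend on your platform and LUN layout):
$ ./orion -run simple -testname mytest -num_disks 4
The simple run exercises small random and large IO workloads and writes summary and CSV result files you can use to compare throughput as you add nodes.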
On other Unix platforms (as well as Linux) one can use IOzone; if a prebuilt binary is not available you should build from source. Make sure to use version 3.271 or later, and if testing raw/block devices add the "-I" flag.
In a basic read test you will try to demonstrate that a certain IO throughput can be maintained as nodes are added. Try to simulate your database io patterns as much as possible, i.e. blocksize, number of simultaneous readers, rates, etc.
For example, on a 4 node cluster, from node 1 you measure 20MB/sec; then you start a read stream on node 2 and see another 20MB/sec while the first node shows no decrease. You then run another stream on node 3 and get another 20MB/sec; in the end you run 4 streams on 4 nodes and get an aggregated 80MB/sec or close to it. This proves that the shared storage is scalable. Obviously, if you see poor scalability in this phase, it will carry over and be observed or interpreted as poor RAC / Application scalability.
In many cases RAC / Application scalability is blamed for no real reason when, in fact, the underlying IO subsystem is not scalable.

142. I was installing Oracle 9i RAC and my Oracle files did not get copied to the remote node(s). What went wrong?
First make sure the cluster is running and is available on all nodes. You should be able to see all nodes when running an 'lsnodes -v' command.
If lsnodes shows that all members of the cluster are available, then you may have an rcp/rsh problem on Unix, or shares may not have been configured on Windows. You can test rcp/rsh on Unix by issuing the following from each node:
[node1]/tmp> touch test.tst
[node1]/tmp> rcp test.tst node2:/tmp
[node2]/tmp> touch test.tst
[node2]/tmp> rcp test.tst node1:/tmp
On Windows, ensure that each node has administrative access to all these directories within the Windows environment by running the following at the command prompt:
NET USE \\host_name\C$
Clustercheck.exe also checks for this.

143. How should I deal with space management? Do I need to set free lists and free list groups?
Manually setting free list groups is a complexity that is no longer required. We recommend using Automatic Segment Space Management rather than trying to manage space manually. Unless you are migrating from an earlier database version with OPS and have already built and tuned the necessary structures, Automatic Segment Space Management is the preferred approach.
Automatic Segment Space Management is NOT the default; you need to set it. For more information see:
Note: 180608.1 Automatic Space Segment Management in RAC Environments
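A hedged example of creating a tablespace with ASSM (the tablespace name and disk group are hypothetical):
SQL> create tablespace app_data datafile '+DATA' size 500M
     extent management local segment space management auto;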

144. A customer is currently using RAC in a 2 node environment. How should one review the ability to
scale out to 4, 6, 8 or even more nodes? What should the requirements of a scale-out test be?
Once a customer is using RAC on a two node cluster and wants to see how far they can actually scale it, the following are some handy tips to follow:
1. Ensure they are using a real enough workload that it does not have false bottlenecks.
2. Have tuned the application so it is reasonably scalable in their current RAC environment.
3. Make sure you are measuring a valid scalability measure. This should either be doing very large batch jobs quicker (via parallelism) or being able to support a greater number of short transactions in a shorter time.
4. Actual scalability will vary for each application and its bottlenecks. Thus the request to do the above items. You would see similar scalability if scaling up on an SMP.
5. For failover, you should see what happens if you lose a node. If you have 2 nodes, losing one costs half your capacity, so you either get into real trouble under load or must carry lots of extra capacity.
6. Measure that load balancing is working properly. Make sure you are using RCLB and a FAN-aware connection pool.
7. The customer should also test using DB Services.
8. Get familiar with EM Grid Control to manage a cluster; it helps eliminate a lot of the complexity of managing many nodes.
9. Why stop at 6 nodes? A maximum of 3-way messaging ensures RAC can scale much, much further.

145. What are the changes in memory requirements from moving from single instance to RAC?
If you are keeping the workload requirements per instance the same, then about 10% more buffer cache and 15% more shared pool is needed. The additional memory requirement is due to data structures for coherency management. The values are heuristic and are mostly upper bounds. Actual resource usage can be monitored by querying current and maximum columns for the gcs resource/locks and ges resource/locks entries in V$RESOURCE_LIMIT.
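A sketch of such a query:
SQL> select resource_name, current_utilization, max_utilization, limit_value
     from v$resource_limit
     where resource_name like 'gcs%' or resource_name like 'ges%';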
But in general, please take into consideration that memory requirements per instance are reduced when the same user population is distributed over multiple nodes. In this case, assuming the same user population, with N = number of nodes and M = buffer cache for a single system:
buffer cache per instance = (M / N) + ((M / N) * 0.10) [ + extra memory to compensate for failed-over users ]
Thus, for example, with M = 2G, N = 2, and no extra memory for failed-over users:
= (2G / 2) + ((2G / 2) * 0.10)
= 1G + 100M

146. What are my options for setting the Load Balancing Advisory GOAL on a Service?
The load balancing advisory is enabled by setting the GOAL on your service either through PL/SQL DBMS_SERVICE package or EM DBControl Clustered Database Services page. There are 3 options for GOAL:
None - Default setting; turns the advisory off.
THROUGHPUT - Work requests are directed based on throughput. This should be used when the work in a service completes at homogeneous rates. An example is a trading system where work requests are of similar length.
SERVICE_TIME - Work requests are directed based on response time. This should be used when the work in a service completes at varying rates. An example is an internet shopping system where work requests are of various lengths.
Note: If using GOAL, you should set CLB_GOAL=SHORT
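A hedged PL/SQL sketch setting both (the service name is hypothetical):
SQL> begin
       dbms_service.modify_service(
         service_name => 'oltp_svc',
         goal         => dbms_service.goal_service_time,
         clb_goal     => dbms_service.clb_goal_short);
     end;
     /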

147. Will adding a new instance to my Oracle RAC database (new node to the cluster) allow me to scale the workload?
YES! Oracle RAC allows you to dynamically scale out your workload by adding another node to the cluster. You must remember that adding more work to the database means that in addition to the CPU and Memory that the new node brings, you will have to ensure that your I/O subsystem can support the additional I/O requirements. In an Oracle RAC environment, you need to look at the total I/O across all instances in the cluster.

148. How do I change my Veritas SF RAC installation to use UDP instead of LLT?
Using UDP with Veritas Clusterware and Oracle RAC 10g seems to require an exception from Veritas, so this may be something you should check with them. To make it easier for customers to convert their LLT environments to UDP, Oracle has created Patch 6846006 on 10.2.0.3, which contains the libraries that were overwritten by the Veritas installation. Converting from specialized protocols to UDP requires a relink after the Oracle libraries have been restored. This needs a complete cluster shutdown and cannot be accomplished in a rolling fashion.
NOTE: Oracle RAC 11g will not support LLT for interconnect.

149. Can I have different servers in my Oracle RAC? Can they be from different vendors? Can they be
different sizes?
Oracle Real Application Clusters (RAC) requires all the nodes to run the same operating system binary in a cluster (i.e. all nodes must be Windows 2008, or all nodes must be OEL 4). All nodes must be the same architecture, i.e. all nodes must be 32-bit, or all nodes must be 64-bit, or all nodes must be HP-UX PA-RISC (you cannot mix PA-RISC with Itanium).
Oracle RAC does support a cluster with nodes that have different hardware configurations. An example is a cluster with 3 nodes with 4 CPUs and another node with 6 CPUs. This can easily occur when adding a new node after the cluster has been in production for a while. For this type of configuration, customers must consider some additional features to get the optimal cluster performance. The servers used in the cluster can be from different vendors; this is fully supported as long as they run the same binaries. Since many customers implement Oracle RAC for high availability, you must make sure that your hardware vendor will support the configuration. If you have a failure, will you get support for the hardware configuration?
The installation of Oracle Clusterware expects the network interface to be the same name on all nodes in the cluster. If you are using different hardware, you may need to work with your operating system vendor to make sure the network interface names are the same name on all nodes (IE eth0). Customers implementing uneven cluster configurations need to consider how they will balance the workload across the cluster. Some customers have chosen to manually assign different workloads to different nodes. This can be done using database services however it is often difficult to predict workloads and the system cannot dynamically react to changes in workload. Changes to workload require the DBA to modify the service. You will also need to consider how you will survive failures in the cluster. Will the service levels be maintained if the larger node in the cluster fails? Especially in a small cluster, the impact of losing a node could impact the ability to continue processing the application workload.
The impact of different-sized nodes depends on how much difference there is in the size. If there is a large difference between the nodes in terms of memory and CPU, then the "bigger" nodes will obviously attract more load, and in the case of a failure the "smaller" node(s) can become overwhelmed. In such a case, static routing of workload via services (e.g. batch and certain services that can be suspended/stopped if the large node fails and the cluster has significantly reduced capacity) may be advisable. The general recommendation is that the nodes should be sized in such a way that the aggregated peak load of the large node(s) can be absorbed by the smaller node(s), i.e. the smaller nodes should have sufficient capacity to run the essential services alone. Another option is to add another small node to the cluster on demand in case the large one fails.
It should also be noted, especially if there is a large difference between the sizes of the nodes, that the small nodes can slow down the larger node. This can be critical if the smaller node is very busy and must serve data to the large node.
To help balance workload across a cluster, Oracle RAC 10g Release 2 and above provides the Load Balancing Advisory (LBA). The load balancing advisory runs in an Oracle RAC database and monitors the work executed by the service on all instances where the service is active in the cluster. The LBA provides recommendations to the subscribed clients about the state of the service and where the client should direct connection requests. Setting the GOAL on the service activates the load balancing advisory. Clients that can utilize the load balancing advisory are Oracle JDBC Implicit Connection Cache, Oracle Universal Connection Pool for Java, Oracle Call Interface Session Pool, ODP.NET Connection Pool, and Oracle Net Services Connection Manager. The Oracle Listener also uses the Load Balancing Advisory if CLB_GOAL parameter is set to SHORT (recommended Best Practice if using an integrated Oracle Client mentioned here). If CLB_GOAL is set to LONG (default), the Listener will load balance the number of sessions for the service across the instances where the service is available.

150. I am seeing the wait events 'ges remote message', 'gcs remote message', and/or 'gcs for action'. What should I do about these?
These are idle wait events and can be safely ignored. The 'ges remote message' event might show up in a 9.0.1 Statspack report as one of the top wait events. To keep this wait event from showing up, you can add it to the PERFSTAT.STATS$IDLE_EVENT table so that it is not listed in Statspack reports.
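A hedged sketch of that Statspack tweak (run as PERFSTAT; assumes the default single-column table definition):
SQL> insert into perfstat.stats$idle_event (event) values ('ges remote message');
SQL> commit;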

151. Do I have to link my OCI application with a thread library? Why?
YES, you must link the application to a threads library. This is required because the AQ notifications occur asynchronously, over an implicitly spawned thread.

152. How do the datasource properties initialLimit, minLimit, and maxLimit affect Fast Connection Failover processing with JDBC?
The initialLimit property on the Implicit Connection Cache is effective only when the cache is first created. For example, if initialLimit is set to 10, you will have 10 connections pre-created and available when the connection cache is first created. Please do not confuse minLimit with initialLimit. The current behavior is that after a DOWN event, once the affected connections are cleaned up, it is possible for the number of connections in the cache to be lower than minLimit.
An UP event is processed for both (a) new instance joins, as well as (b) down followed by an instance UP.
This has no relevance to initialLimit, or even minLimit. When an UP event comes into the JDBC Implicit Connection Cache, some new connections are created. Assuming you have your listener load balancing set up properly, those connections should go to the instance that was just started. When your application requests a connection from the pool, it is given an idle connection; if you are running 10.2 and have the load balancing advisory turned on for the service, the session is allocated based on the defined goal to provide the best service level.
MaxLimit, when set, defines the upper boundary limit for the connection cache. By default, maxLimit is unbounded - your database sets the limit.

153. Will FAN/OCI work with Instant Client?
Yes, FAN/OCI will work with Instant Client. Both client and server must be Oracle Database 10g Release 2.

154. What clients provide integration with FAN through FCF?
With Oracle Database 10g Release 1, JDBC clients (both thick and thin driver) are integrated with FAN by providing FCF. With Oracle Database 10g Release 2, we have added ODP.NET and OCI. Other applications can integrate with FAN by using the API to subscribe to the FAN events.
Note: If you are using a 3rd party application server, then you can only use FCF if you use the Oracle driver and, except for OCI, its connection pool. If you are using the connection pool of the 3rd party application server, then you do not get FCF. Your customer can subscribe directly to FAN events, however that is a development project for the customer. See the white paper Workload Management with Oracle RAC 10g on OTN.

155. What type of callbacks are supported with OCI when using FAN/FCF?
There are two separate callbacks supported. The HA Events (FAN) callback is called when an event occurs. When a down event occurs, for example, you can clean up a custom connection pool. i.e. purge stale connections. When the failover occurs, the TAF callback is invoked. At failover time you can customize the newly created database session. Both FAN and TAF are client-side callbacks. FAN also has a separate server side callout that should not be confused with the OCI client callback.

156. Does FCF for OCI react to FAN HA UP events?
OCI does not perform any implicit actions on an up event, however if a HA event callback is present, it is invoked. You can take any required action at that time.

157. Can I use FAN/OCI with Pro*C?
Since Pro*C (sqllib) is built on top of OCI, it should support HA events. You need to precompile the application with the option EVENTS=TRUE and make sure you link the application with a thread library. The database connection must use a service that has been enabled for AQ events. Use dbms_service.modify_service to enable the service for events (aq_ha_notifications => true) or use the EM Cluster Database Services page.
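A hedged sketch of both steps (file and service names are hypothetical):
$ proc events=true iname=app.pc
SQL> exec dbms_service.modify_service(service_name => 'oltp_svc', aq_ha_notifications => true);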

158. Do I need to install ONS on all my mid-tier servers in order to enable JDBC Fast Connection Failover (FCF)?
With 10g Release 1, the middle tier must have ONS running (started by the same user as the application). With 10g Release 2 or later, you do not need to install ONS on the middle tier. The JDBC driver allows the use of remote ONS (i.e. it uses the ONS running in the RAC cluster). Just use the datasource parameter ods.setONSConfiguration("nodes=racnode1:4200,racnode2:4200");

159. Will FAN/FCF work with the default database service?
No. If you want the advanced features of RAC provided by FAN and FCF, then create a cluster managed service for your application. Use the Clustered Managed Services Page in Enterprise Manager DBControl to do this.

160. My customer has an XA application with an Oracle RAC database; can I do load balancing across the Oracle RAC instances?
No, not with traditional Oracle Net Services load balancing. We have written a document that explains the best practices for 9i, 10g Release 1 and 10g Release 2. With Oracle Database 10g Services, life gets easier. With Oracle RAC 11g, Oracle provides transparent support for XA global transactions in an Oracle RAC environment, which supports load balancing with Oracle Net Services across Oracle RAC instances.

161. Are rcp and/or rsh required for normal Oracle RAC operation?
rcp"" and ""rsh"" are not required for normal Oracle RAC operation. However in older versions ""rsh"" and ""rcp"" should to be enabled for Oracle RAC and patchset installation. In later releases, ssh is used for these operations.
Note Oracle Enterprise Manager uses rsh.s

162. Do we have to have Oracle Database on all nodes?
Each node of a cluster that is being used for a clustered database will typically have the database and Oracle RAC software loaded on it, but not actual datafiles (these need to be available via shared disk). For example, if you wish to run Oracle RAC on 2 nodes of a 4-node cluster, you would need to install the clusterware on all nodes, Oracle RAC on 2 nodes and it would only need to be licensed on the two nodes running the Oracle RAC database. Note that using a clustered file system, or NAS storage can provide a configuration that does not necessarily require the Oracle binaries to be installed on all nodes.
With Oracle RAC 11g Release 2, if you are using policy managed databases, then you should have the Oracle RAC binaries accessible on all nodes in the cluster.

163. What software is necessary for Oracle RAC? Does it have a separate installation CD to order?
Oracle Real Application Clusters is an option of Oracle Database and therefore part of the Oracle Database CD. With Oracle 9i, Oracle 9i RAC is part of Oracle9i Enterprise Edition. If you install 9i EE onto a cluster, and the Oracle Universal Installer (OUI) recognizes the cluster, you will be provided the option of installing RAC. Most UNIX platforms require an OSD installation for the necessary clusterware. For Intel platforms (Linux and Windows), Oracle provides the OSD software within the Oracle9i Enterprise Edition release. With Oracle Database 10g, Oracle RAC is an option of EE and available as part of SE. Oracle provides Oracle Clusterware on its own CD included in the database CD pack.
With Oracle Database 11g Release 2, Oracle Clusterware and Automatic Storage Management are installed as a single set of binaries called the grid infrastructure. The media for the grid infrastructure is on a separate CD or under the grid directory. For standalone servers, Automatic Storage Management and Oracle Restart are installed as the grid infrastructure for a standalone server which is installed from the same media.

164. Is Infiniband supported for the Oracle RAC interconnect?
IP over IB is supported. RDS on Linux is supported from 10.2.0.3 onward.

165. Are Sun Logical Domains (ldoms) supported with RAC?
Sun Logical Domains (ldoms) are supported with Oracle Database (both single instance and RAC).

166. The Veritas installation asks for setting LD_LIBRARY_PATH_64. Should I remove this?
Yes. You do not need to set LD_LIBRARY_PATH for Oracle.

167. I am receiving an ORA-29740 error. What should I do?
This error can occur when problems are detected on the cluster:
Error: ORA-29740
Text: evicted by member %s, group incarnation %s
---------------------------------------------------------------------------
Cause: This member was evicted from the group by another member of the cluster database for one of several reasons, which may include a communications error in the cluster, failure to issue a heartbeat to the control file, etc.
Action: Check the trace files of other active instances in the cluster group for indications of errors that caused a reconfiguration.

168. Is Oracle Application Server integrated with FAN and FCF?
Yes. For detailed information on the integration, refer to the documentation for the various releases of Application Server 10g.

169. What does the Virtual IP service do? I understand it is for failover but do we need a separate network card? Can we use the existing private/public cards? What would happen if we used the public ip?
The 10g Virtual IP Address (VIP) exists on every RAC node for public network communication. All client communication should use the VIPs in their TNS connection descriptions. The TNS ADDRESS_LIST entry should direct clients to VIPs rather than using hostnames. During normal runtime the behaviour is the same as with hostnames; however, when the node goes down or is shut down, the VIP is hosted elsewhere on the cluster and does not accept connection requests. This results in a silent TCP/IP error and the client fails immediately to the next TNS address. If the network interface fails within the node, the VIP can be configured to use alternate interfaces in the same node. The VIP must use the public interface cards. There is no requirement to purchase additional public interface cards (unless you want to take advantage of within-node card failover).
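A hedged tnsnames.ora sketch using VIPs in the address list (host and service names are hypothetical):
SALES =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = yes)
      (ADDRESS = (PROTOCOL = TCP)(HOST = racnode1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = racnode2-vip)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = sales_svc)))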

170. I want to configure a secure environment for ONS, so I have added a Wallet; however, I am seeing errors (SSL handshake failed) after adding the wallet?
Remember that if you enable SSL for one instance of ONS, you must enable SSL for all instances with ONS (including any AS instances running OPMN).
The error message in this case showed that SSL is enabled for the local ONS server, but the SSL handshake is failing when another ONS or OPMN server attempts to connect to it, indicating that the remote server does not have SSL enabled (or has an incompatible wallet configured).

171. How does OCSSD start first if the voting disk & OCR reside in ASM diskgroups? Or: how can CSSD, which is required to start the clustered ASM instance, be started if the voting disks are stored in ASM?
This sounds like a chicken-and-egg problem: without access to the voting disks there is no CSS, hence the node cannot join the cluster; but without being part of the cluster, CSSD cannot start the ASM instance.
To solve this problem the ASM disk headers contain new metadata in 11.2: you can use kfed to read the header of an ASM disk containing a voting disk. The kfdhdb.vfstart and kfdhdb.vfend fields tell CSS where to find the voting file. This does not require the ASM instance to be up. Once the voting disks are located, CSS can access them and join the cluster.
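A hedged sketch of reading those fields (the device path is hypothetical; the values printed depend on your disk layout):
$ kfed read /dev/sde1 | grep -E 'vfstart|vfend'
Non-zero kfdhdb.vfstart/kfdhdb.vfend values indicate the extent range of the voting file on that disk.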

172. If my OCR and Voting Disks are in ASM, can I shut down the ASM instance?
No. You will have to stop the Oracle Clusterware stack on the node on which you need to stop the Oracle ASM instance. Either use "crsctl stop cluster -n node_name" or "crsctl stop crs" for this purpose.

173. What combinations of Oracle Clusterware, Oracle RAC and ASM versions can I use?
See Note:337737.1 for a detailed support matrix. Basically, the Clusterware version must be at least the highest release of ASM or Oracle RAC in the cluster. ASM must be at least 10.1.0.3 to work with a 10.2 database.
Note: With Oracle Database 11g Release 2, you must upgrade Oracle Clusterware and ASM to 11g Release 2 at the same time.

174. I had a 3 node Oracle RAC. One of the nodes had to be completely rebuilt as a result of a problem. As there are no backups, what is the proper procedure to remove the 3rd node from the cluster so it can be added back in?
Follow the documentation for removing a node, but you can skip all the steps in the node-removal doc that need to be run on the node being removed, like steps 4, 6 and 7 (see Chapter 10 of the Oracle RAC Admin and Deployment Guide). Make sure that you remove any database instances that were configured on the failed node with srvctl, and listener resources also; otherwise rootdeletenode.sh will have trouble removing the nodeapps.
Just running rootdeletenode.sh isn't really enough, because you need to update the installer inventory as well; otherwise you won't be able to add the node back using addNode.sh. And if you don't remove the instances and listeners, you'll also have problems adding the node and instance back again.

175. Where do I find Oracle Clusterware binaries and ASM binaries with Oracle Database 11g Release 2?
With Oracle Database 11g Release 2, the binaries for Oracle Clusterware and Automatic Storage Management (ASM) are distributed in a single set of binaries called the grid infrastructure. To install the grid infrastructure, go to the grid directory on your 11g Release 2 media and run the Oracle Universal Installer.
Choose the Grid Infrastructure for a Cluster. If you are installing ASM for a single instance of Oracle Database on a standalone server, choose the Grid Infrastructure for a Standalone Server. This installation includes Oracle Restart.

176. I have the 11.2 Grid Infrastructure installed and now I want to install an earlier version of Oracle Database (11.1 or 10.2), is this supported ?
Yes, however you need to "pin" the nodes in the cluster before trying to create a database using an earlier version of Oracle Database (i.e. not 11.2). The command to pin a node is crsctl pin css -n nodename. You should also apply the patch for Bug 8288940 to make DBCA work in an 11.2 cluster.

177. I get an error with DBCA from 10.2 or 11.1 after I have installed the 11.2 Grid Infrastructure?
You will need to apply the patch for Bug 8288940 to your database home in order for it to recognize ASM running from the new grid infrastructure home. Also make sure you have "pinned" the nodes.

178. Can I use iSCSI storage with my Oracle RAC cluster?
For iSCSI, Oracle has made the statement that, as a block protocol, this technology does not require validation for single instance database. There are many early adopter customers of iSCSI running Oracle9i and Oracle Database 10g. As for Oracle RAC, Oracle has chosen to validate the iSCSI technology (not each vendor's targets) for the 10g platforms; this has been completed for Linux and Windows. For Windows we have tested up to 4 nodes; any Windows iSCSI products that are supported by the host and storage device are supported by Oracle. We don't support NAS devices for Windows, however some NAS devices (e.g. NetApp) can also present themselves as iSCSI devices. If this is the case then a customer can use this iSCSI device with Windows as long as the iSCSI device vendor supports Windows as an initiator OS. No vendor-specific information will be posted on Certify.

179. What would you recommend to customer, Oracle Clusterware or Vendor Clusterware (I.E. HP Service Guard, HACMP, Sun Cluster, Veritas etc.) with Oracle Real Application Clusters?
You will be installing and using Oracle Clusterware whether or not you use the Vendor Clusterware. Oracle Clusterware provides a complete clustering solution and is required for Oracle RAC or Automatic Storage Management (including ACFS).
Vendor clusterware is only required with Oracle 9i RAC. Check the certification matrix in MyOracleSupport for details of certified vendor clusterware.

180. Can I run Oracle 9i RAC and Oracle RAC 10g in the same cluster?
YES. However, Oracle Clusterware (CRS) will not support an Oracle 9i RAC database, so you will have to leave the current configuration in place. You can install Oracle Clusterware and Oracle RAC 10g into the same cluster. On Windows and Linux, you must run the 9i Cluster Manager for the 9i database and Oracle Clusterware for the 10g database. When you install Oracle Clusterware, your 9i srvconfig file will be converted to the OCR. Both Oracle 9i RAC and Oracle RAC 10g will use the OCR. Do not restart the 9i gsd after you have installed Oracle Clusterware. With Oracle Clusterware 11g Release 2, the GSD resource will be disabled by default. You only need to enable this resource if you are running Oracle 9i RAC in the cluster.
Remember to check Certify for details of what vendor clusterware can be run with Oracle Clusterware.
For example, on Solaris, your Oracle 9i RAC will be using Sun Cluster. You can install Oracle Clusterware and Oracle RAC 10g in the same cluster that is running Sun Cluster and Oracle 9i RAC.

181. What storage is supported with Standard Edition Oracle RAC?
As per the licensing documentation, you must use ASM for all database files with SE Oracle RAC. There is no support for CFS or NFS. From Oracle Database 10g Release 2 Licensing Doc:
Oracle Standard Edition and Oracle Real Application Clusters (RAC) When used with Oracle Real Application Clusters in a clustered server environment, Oracle Database Standard Edition requires the use of Oracle Clusterware. Third-party clusterware management solutions are not supported. In addition, Automatic Storage Management (ASM) must be used to manage all database-related files, including datafiles, online logs, archive logs, control file, spfiles, and the flash recovery area. Third-party volume managers and file systems are not supported for this purpose.

182. How many NICs do I need to implement Oracle RAC?
At minimum you need 2: external (public) and interconnect (private). When storage for Oracle RAC is provided by Ethernet-based networks (e.g. NAS/NFS or iSCSI), you will need a third interface for I/O, so a minimum of 3. Anything less will cause performance and stability problems under load. From an HA perspective, you want these to be redundant, thus needing a total of 6.

183. Can I run Oracle RAC 10g with Oracle RAC 11g?
Yes. Oracle Clusterware should always run at the highest level. With Oracle Clusterware 11g, you can run both Oracle RAC 10g and Oracle RAC 11g databases. If you are using ASM for storage, you can use either Oracle Database 10g ASM or Oracle Database 11g ASM; however, to get the 11g features, you must be running Oracle Database 11g ASM. It is recommended to use Oracle Database 11g ASM.
Note: When you upgrade to 11g Release 2, you must upgrade both Oracle Clusterware and Automatic Storage Management to 11g Release 2. This will support Oracle Database 10g and Oracle Database 11g (both RAC and single instance). You can run Oracle 9i RAC in the cluster as well; 9i RAC requires the clusterware that is certified with Oracle 9i RAC to be running in addition to Oracle Clusterware 11g.

184. Can I have multiple public networks accessing my Oracle RAC?
Yes, you can have multiple networks; however, with Oracle RAC 10g and Oracle RAC 11g, the cluster can only manage a single public network with a VIP, and the database can only load balance across a single network. FAN will only work on the public network with the Oracle VIPs.
Oracle RAC 11g Release 2 supports multiple public networks. You must set the new init.ora parameter LISTENER_NETWORKS so users are load balanced across their network. Services are tied to networks so users connecting with network 1 will use a different service than network 2. Each network will have its own VIP.

185. I could not get the user equivalence check to work on my Solaris 10 server when trying to install
10.2.0.1 Oracle Clusterware. The install ran fine without issue. << Message: Result: User equivalence check failed for user "oracle". >>
Cluvfy and the OUI try to find SSH on Solaris at /usr/local/bin. The workaround is to create a soft link in /usr/local/bin pointing to /usr/bin/ssh.
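A hedged sketch of the workaround (run as root; paths assume the stock Solaris 10 SSH location):
# ln -s /usr/bin/ssh /usr/local/bin/ssh
# ln -s /usr/bin/scp /usr/local/bin/scp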
Note: User equivalence is required for installations (i.e. using the OUI) and patching. DBCA, NETCA, and DBControl also require user equivalence.

186. Can we output the backupset onto regular file system directly (not onto flash recovery area) using RMAN command, when we use SE RAC?
Yes. Customers might want to back up their database to offline storage, so this is also supported.

187. Can RMAN backup Oracle Real Application Cluster databases?
Absolutely. RMAN can be configured to connect to all nodes within the cluster to parallelize the backup of the database files and archive logs. If files need to be restored, using SET AUTOLOCATE ON alerts RMAN to search for backed-up files and archive logs on all nodes.
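A hedged RMAN sketch that spreads channels across two instances (connect strings and aliases are hypothetical):
RMAN> run {
        allocate channel c1 device type disk connect 'sys/pwd@orcl1';
        allocate channel c2 device type disk connect 'sys/pwd@orcl2';
        backup database plus archivelog;
      }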

188. Does Oracle support rolling upgrades in a cluster?
This answer is for clusters running the Oracle stack. If 3rd party vendor clusterware is included, you need to check with the vendor about their support of a rolling upgrade.
By a rolling upgrade, we mean upgrading software (Oracle Database, Oracle Clusterware, ASM or the OS itself) while the cluster is operational by shutting down a node, upgrading the software on that node, and then reintegrating it into the cluster, and so forth one node at a time until all the nodes in the cluster are at the new software level.
For the Oracle Database software, rolling upgrade is possible only for certain single patches that are marked as rolling-upgrade compatible. Most bundle patches and Critical Patch Updates (CPUs) are rolling upgradeable.
Patchsets and database version changes (10g to 11g) are not supported in a rolling fashion. One reason this may be impossible is that across major releases there may be incompatible versions of the system tablespace, for example. To upgrade these in a rolling fashion, you will need to use a logical standby with Oracle Database 10g or 11g; see Note: 300479.1 for details.
Read the MAA Best Practice on Rolling Database Upgrades using Data Guard SQL Apply or, with Oracle RAC 11g, Rolling Database Upgrades for Physical Standby Databases using Transient Logical Standby 11g. The Oracle Clusterware software always fully supports rolling upgrades, while the ASM software is rolling upgradeable at version 11.1.0.6 and beyond.
For Oracle Database 11g Release 2, Oracle Clusterware and ASM binaries are combined into a single ORACLE_HOME called the grid infrastructure home. This home fully supports rolling upgrades for patches, bundles, patchsets and releases. (If you are upgrading ASM from Oracle Database 10g to 11g Release 2, you will not be able to upgrade ASM in a rolling fashion.)
The Oracle Clusterware and Oracle Real Application Clusters both support rolling upgrades of the OS software when the version of the Oracle Database is certified on both releases of the OS (and the OS is the same; no Linux and Windows, or AIX and Solaris, or 32-bit and 64-bit, etc.). This can be a patch to the operating system, a patchset (such as EL4u4 to EL4u6), or a release (EL4 to EL5).
Stay within a 24-hour upgrade window and fully test this path, as it's not possible for Oracle to test all the different paths and combinations.

189. I have a 2 node Oracle RAC cluster, if I pull the interconnect on node 1 to simulate failure, why does node 2 reboot?
When Oracle Clusterware recognizes a problem on the interconnect, it will try to keep the largest sub-cluster running. However, in a 2 node cluster we can only keep one node up, so the first node that joined the cluster will be the node that stays up, and Oracle Clusterware will reboot the other node even if you pulled the cable from the node that stayed up. In the case above, if node 1 was the first node to join the cluster (i.e. the first one started), then even if you pull the interconnect cable from node 1, node 2 will be rebooted.

190. If I change my cluster configuration, do I need to update the ONS configuration on my middle tier?
For the best availability and to ensure the application receives all FAN events, yes, you should update the configuration. To a certain degree, ONS will discover nodes: ONS runs on each node in the cluster and is aware of all other nodes in the cluster. As long as ONS on the middle tier can find at least one node in the cluster when it starts, it will find the rest of the nodes. However, if the only node up when the middle tier starts is a node newly added to the cluster (and thus not in the middle-tier configuration), the middle tier will not find the cluster.

191. Why do we have a Virtual IP (VIP) in Oracle RAC 10g or 11g? Why does it just return a dead connection when its primary node fails?
The goal is application availability. When a node fails, the VIP associated with it is automatically failed over to some other node. When this occurs, the following things happen.
(1) The VIP detects the public network failure, which generates a FAN event.
(2) The new node re-arps the world indicating a new MAC address for the IP.
(3) Connected clients subscribing to FAN immediately receive an ORA-3113 error or equivalent. Those not subscribing to FAN will eventually time out.
(4) New connection requests rapidly traverse the tnsnames.ora address list, skipping over the dead nodes, instead of having to wait on TCP/IP timeouts.
Without using VIPs or FAN, clients connected to a node that died will often wait for a TCP timeout period (which can be up to 10 min) before getting an error.
As a result, you don't really have a good HA solution without using VIPs and FAN. The easiest way to use FAN is to use an integrated client with Fast Connection Failover (FCF) such as JDBC, OCI, or ODP.NET.

192. What do the VIP resources do once they detect a node has failed/gone down? Are the VIPs
automatically acquired, and published, or is manual intervention required? Are VIPs mandatory?
With Oracle RAC 10g or higher, each node requires a VIP. With Oracle RAC 11g Release 2, 3 additional SCAN VIPs are required for the cluster. When a node fails, the VIP associated with the failed node is automatically failed over to one of the other nodes in the cluster. When this occurs, two things happen:
1. The new node re-arps the world indicating a new MAC address for this IP address. For directly connected clients, this usually causes them to see errors on their connections to the old address;
2. Subsequent packets sent to the VIP go to the new node, which will send error RST packets back to the clients. This results in the clients getting errors immediately.
In the case of existing SQL connections, errors will typically be in the form of ORA-3113 errors, while a new connection using an address list will select the next entry in the list. Without using VIPs, clients connected to a node that died will often wait for a TCP/IP timeout period before getting an error. This can be as long as 10 minutes or more. As a result, you don't really have a good HA solution without using VIPs.
With Oracle RAC 11g Release 2, you can delegate the management of the VIPs to the cluster. If you do this, the Grid Naming Service (part of Oracle Clusterware) will automatically allocate and manage all VIPs in the cluster. This requires a DHCP service on the public network.

193. What are my options for load balancing with Oracle RAC? Why do I get an uneven number of
connections on my instances?
All the types of load balancing currently available (9i-10g) occur at connect time. This means that it is very important how one balances connections and what these connections do on a long-term basis. Since establishing connections can be very expensive for your application, it is good programming practice to connect once and stay connected. This means one needs to be careful as to which option one uses. Oracle Net Services provides load balancing, or you can use external methods such as hardware-based or clusterware solutions. The following options exist prior to Oracle RAC 10g Release 2 (for 10g Release 2 see the Load Balancing Advisory):
Random: Either client-side load balancing or hardware-based methods will randomize the connections to the instances. On the negative side, this method is unaware of the load on the connections or even whether they are up, meaning it might cause waits on TCP/IP timeouts.
Load Based: Server-side load balancing (by the listener) redirects connections by default depending on the RunQ length of each of the instances. This is great for short-lived connections, but terrible for persistent connections or login storms. Do not use this method for connections from connection pools or application servers.
Session Based: Server-side load balancing can also be used to balance the number of connections to each instance. Session count balancing is the method used when you set the listener parameter prefer_least_loaded_node_listenername = OFF. Note that the listener name is the actual name of the listener, which is different on each node in your cluster and by default is listener_nodename. Session-based load balancing takes into account the number of sessions connected to each node and then distributes new connections to balance the number of sessions across the different nodes.
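A hedged listener.ora sketch for the session-count method (the listener name is hypothetical and node-specific):
# in listener.ora on node1
PREFER_LEAST_LOADED_NODE_LISTENER_NODE1 = OFF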

194. What do I do if I am getting handshake failed messages in my ONS.LOG file every minute?
For example, the client gets this error message in production in the ons.log file every minute or so:
06/11/10 10:11:14 [2] Connection 0,129.86.186.58,6200 SSL handshake failed
06/11/10 10:11:14 [2] Handshake for 0,129.86.186.58,6200: nz error = 29049 interval = 0 (180 max)
These annoying messages in ons.log are telling you that you have a configuration mismatch for ONS somewhere in the farm. Oracle RAC has its own ONS server, for which SSL is disabled by default. You must either enable SSL for the Oracle RAC ONS, or disable it for the OID ONS (OPMN). You need to create a wallet for each Oracle RAC ONS server, or copy one of the wallets from OPMN on the OID instances.
In ons.conf you need to specify the wallet file and password:
walletfile=
walletpassword=
ONS only uses SSL between servers, and so ONS clients will not be affected. You specify the wallet password when you create the wallet. If you copy a wallet from an OPMN instance, then use the same password configured in opmn.xml. If there is no wallet password configured in opmn.xml, then you don't need to specify a wallet password in ons.conf either.

195. What should I do to make my Oracle RAC deployment highly available?
Customers often deploy Oracle Real Application Clusters (RAC) to provide a highly available infrastructure for their mission-critical applications. Oracle RAC removes the server as a single point of failure. Load balancing your workload across many servers, along with fast recovery from failures, means that the loss of any one server should have little or no impact on the end user of the application. The level of impact to the end user depends on how well the application has been written to mask failure. If an outage occurs on an Oracle RAC instance, the ideal situation would be that the failover time plus the transaction response time is less than the maximum acceptable response time. Oracle RAC has many features that customers can take advantage of to mask failures from the end user; however, it requires more work than just installing Oracle RAC. To the application user, the availability metric that means the most is the response time for their transaction. This is the end-to-end response time, which means all layers must be available and performing to a defined standard for the agreed times.
If you are deploying Oracle RAC and require high availability, you must make the entire infrastructure of the application highly available. This requires detailed planning to ensure there are no single points of failure throughout the infrastructure. Oracle Clusterware constantly monitors every process under its control, which includes all the Oracle software such as the Oracle instance, listener, etc., and it is programmed to recover from failures that occur in those Oracle processes. In order to do its monitoring and recovery, various system activities happen on a regular basis, such as user authentication, sudo, and hostname resolution. For the cluster to be highly available, it must be able to perform these activities at all times. For example, if you choose to use the Lightweight Directory Access Protocol (LDAP) for authentication, then you must make the LDAP server highly available, as well as the network connecting the users, application, database, and LDAP server. If the database is up but users cannot connect to it because the LDAP server is not accessible, then the entire system is down in the eyes of your users. When using external authentication such as LDAP or NIS (Network Information Service), a public network failure will cause failures within the cluster. Oracle recommends that the hostnames, VIPs, and interconnect addresses are defined in the /etc/hosts file on all nodes in the cluster, as illustrated below.
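For example, a two-node cluster's /etc/hosts might look like the sketch below (all names and addresses are illustrative only, not from this document):
# public hostnames
192.168.1.101   node1.example.com   node1
192.168.1.102   node2.example.com   node2
# virtual IPs (VIPs)
192.168.1.111   node1-vip
192.168.1.112   node2-vip
# private interconnect
10.0.0.1        node1-priv
10.0.0.2        node2-priv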
During the testing of the Oracle RAC implementation, you should include a destructive testing phase. This is a systematic set of tests of your configuration to ensure that 1) you know what to expect if the failure occurs and how to recover from it and 2) that the system behaves as expected during the failure. This is a good time to review operating procedures and document recovery procedures. Destructive testing should include tests such as node failure, instance failure, public network failure, interconnect failures, storage failure, storage network failure, voting disk failure, loss of an OCR, and loss of ASM.
Using features of Oracle Real Application Clusters and Oracle Clients including Fast Application Notification (FAN), Fast Connection Failover (FCF), Oracle Net Service Connection Load Balancing, and the Load Balancing Advisory, applications can mask most failures and provide a very highly available application. For details on implementing best practices, see the MAA document Client Failover Best Practices for Highly Available Oracle Databases and the Oracle RAC Administration and Deployment Guide.

196. Can our Oracle RAC 10g VIP fail over from NIC to NIC as well as from node to node?
Yes. The Oracle RAC 10g VIP implementation is capable of failing over within a node from NIC to NIC (and back once the failed NIC is online again), as well as failing over between nodes. The NIC-to-NIC failover is fully redundant if redundant switches are installed. See the sketch below for inspecting and changing the VIP's interface list.
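To inspect or change which NICs a node's VIP may use, the nodeapps configuration can be queried and modified with srvctl (10g-style syntax; the node name, address, netmask, and interface names below are assumptions):
# show the current VIP configuration for a node:
srvctl config nodeapps -n node1 -a
# allow the VIP to use either eth0 or eth1:
srvctl modify nodeapps -n node1 -A "192.168.1.111/255.255.255.0/eth0|eth1"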

197. With three primary load balancing options (client-side connect-time load balancing, server-side connect-time load balancing, and runtime connection load balancing), is it fair to say Runtime Connection Load Balancing is the only option that leverages FAN up/down events?
No. The listener is a subscriber to all FAN events (both load balancing advisory events and HA events), so server-side connection load balancing leverages FAN HA events as well as load balancing advisory events. With the Oracle JDBC driver 10g Release 2, enabling Fast Connection Failover also enables Runtime Connection Load Balancing (one knob for both). On the server side, the relevant service attributes can be set as sketched below.
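A sketch of wiring a service into FAN and the load balancing advisory on the server side (the service name and goal choices are assumptions; the DBMS_SERVICE package ships with the database):
BEGIN
  DBMS_SERVICE.MODIFY_SERVICE(
    service_name        => 'oltp',
    goal                => DBMS_SERVICE.GOAL_SERVICE_TIME,  -- feed the load balancing advisory
    clb_goal            => DBMS_SERVICE.CLB_GOAL_SHORT,     -- connection load balancing goal
    aq_ha_notifications => TRUE);                           -- publish HA (up/down) events
END;
/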

198. Is there a way to provide or configure HA for the interconnect using InfiniBand on AIX?
HA support is provided by a VIPA (virtual IP address) configured over two separate InfiniBand interfaces. The two interfaces can be either two ports on one adapter (not ideal for HA) or ports from two different adapters. This VIPA configuration is different from the AIX EtherChannel configuration; EtherChannel is not supported with InfiniBand.

199. What is the startup sequence in Oracle 11g RAC?
This question is about understanding the startup sequence of the Grid Infrastructure daemons and their resources in 11gR2 RAC. In 11g RAC, also known as Grid Infrastructure, there are a number of additional background daemons and agents, and the documentation is not always clear about the order in which they start. The sequence, phase by phase, is explained below; commands to observe it on a live system follow the list.
OHASD Phase: OHASD (Oracle High Availability Services Daemon) starts first.
OHASD Agent Phase: the OHASD agent starts, and in turn this starts
gipcd
Grid Inter-Process Communication daemon, used for inter-node messaging and for monitoring the cluster interconnect.
mdnsd
Multicast DNS service daemon; it resolves DNS requests on behalf of GNS.
gns
The Grid Naming Service (GNS), a gateway between DNS and mdnsd, which resolves DNS requests for the cluster.
gpnpd
Grid Plug and Play daemon. It maintains a profile (similar in content to the OCR) stored in XML format under $GI_HOME/gpnp/profiles/peer. OCSSD reads the ASM disk locations from this profile so it can start without ASM being up, and the profile is distributed across all nodes of the cluster.
evmd/evmlogger
The Event Manager (EVM) service, provided by the evmd daemon, which publishes information about events happening in the cluster: node stop, node start, instance start, etc.
cssdagent (Cluster Synchronization Service agent), which in turn starts
ocssd
The Cluster Synchronization Service daemon, which manages node membership in the cluster.
If cssdagent finds that ocssd is down, it reboots the node to protect data integrity. cssdmonitor (Cluster Synchronization Service monitor) replaces oprocd and provides I/O fencing.
OHASD orarootagent (Oracle root agent) starts, and in turn starts
crsd.bin
Cluster Ready Services daemon, which manages the high availability of cluster resources: stopping, starting, failing over, etc.
diskmon.bin
Disk monitor daemon, which provides I/O fencing for Exadata storage.
octssd.bin
Cluster Time Synchronization Service daemon, which provides NTP-like time services for the cluster but manages time itself rather than depending on the OS.
CRSD Agent Phase: crsd.bin starts two more agents.
crsd orarootagent (Oracle root agent) starts, and in turn this starts
gns
The Grid Naming Service resource (when GNS is configured), which resolves DNS requests for names in the cluster's subdomain.
gns vip
The virtual IP address on which GNS listens for name resolution requests.
Network
The network resource, which monitors the public network used by the cluster.
Scan vip
Monitors the SCAN VIP; if it fails or becomes unreachable, it is failed over to another node.
Node vip
Monitors the node VIP; if it fails or becomes unreachable, it is failed over to another node.
crsd oraagent (Oracle agent) starts, and in turn starts the following resources (functionality that in 10g and 11gR1 was handled by the racgmain and racgimon background processes is now managed by the CRSD Oracle agent itself).
ASM & disk groups
Starts and monitors the local ASM instance and its disk groups.
ONS
FAN feature; provides event notifications to interested clients.
eONS
Enhanced ONS; a FAN feature that also provides event notifications to interested clients.
SCAN Listener
Starts and monitors the SCAN listener.
Node Listener
Starts and monitors the node listener.
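To observe this sequence on a live 11gR2 system, the OHASD-level (startup) resources and the CRSD-managed resources can be listed separately; both commands are standard Grid Infrastructure tools:
# lower-stack resources started by OHASD (ora.cssd, ora.crsd, ora.gipcd, ...):
crsctl stat res -t -init
# cluster resources managed by CRSD (VIPs, listeners, ASM, databases, ...):
crsctl stat res -t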

200. How does the cluster start up when the OCR and Voting Disks are stored in ASM?
The startup sequence has been replaced with a two-phased, optimized approach:
Phase I
· OHASD starts up "local" resources first.
· CSSD uses the GPnP profile, which stores the location of the voting disks, so there is no need to access ASM (voting disks are stored differently within ASM than other files, so their location is known without the ASM instance being up).
Simultaneously,
· ORAAGENT starts up and the ASM instance is started (a subset of the OCR information is stored in the OLR, enough to start local resources), and ORAROOTAGENT starts CRSD.
So the first phase of Clusterware startup essentially starts up local resources.
Phase II
· At this point ASM is up and the full OCR information is available, and the node is joined to the cluster. The commands below can be used to verify the OCR and voting disk locations.
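Once the stack is up, the locations described above can be verified with the standard tools (output details vary by version):
# list the voting disks and the ASM disk group holding them:
crsctl query css votedisk
# show the OCR location(s) and check OCR integrity:
ocrcheck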
