Monday, 2 July 2018

Interview Q and A for Oracle RAC Part - 4

201. What is gsdctl in RAC? List gsdctl commands in Oracle RAC?
GSDCTL stands for Global Services Daemon Control. We can use gsdctl commands to start, stop, and obtain the status of the GSD service on any platform. The options for gsdctl are:
$ gsdctl start -- To start the GSD service
$ gsdctl stop  -- To stop the GSD service
$ gsdctl stat  -- To obtain the status of the GSD service
Log file location for gsdctl:
$ORACLE_HOME/srvm/log/gsdaemon_node_name.log

202. How do I stop the GSD?
If you are on 9.0 on Unix, you would issue:
$ ps -ef | grep jre
$ kill -9 <gsd process>
On Windows, stop the OracleGSDService. Note: make sure the process you kill is actually the one used by the GSD.
If you are on 9.2, you would issue: $ gsdctl stop

203. What is the purpose of the GSD service?
GSD is only needed for configuration/management of the cluster database. Once the database has been configured and started, and you do not use srvctl or EM to manage it, dbca to extend/remove it, or dbua to upgrade it, the GSD can be stopped.
In Oracle 9i RAC, the GSD does not write anywhere unless tracing is turned on, in which case traces go to stdout.
Note: With Oracle RAC 11g Release 2, the gsd resource is disabled by default. You will only need to enable the resource if you are running Oracle 9i RAC in the same cluster.

204. What is Oracle RAC One Node?
Oracle RAC One Node is a single instance running on one node of the cluster while the second node is in cold standby mode. If the instance fails for some reason, RAC One Node detects it and restarts the instance on the same node, or the instance is relocated to the second node if there is a failure or fault on the first node. The benefit of this feature is that it provides a cold failover solution and automates instance relocation without downtime and without manual intervention. Oracle introduced this feature with the release of 11gR2 (available with Enterprise Edition).
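A minimal sketch of a planned (online) relocation, assuming a hypothetical RAC One Node database named RACONE and a target node named node2 (11.2.0.2+ syntax):
$ srvctl status database -d RACONE                        -- confirm where the instance currently runs
$ srvctl relocate database -d RACONE -n node2 -w 30 -v    -- relocate, allowing a 30-minute window for sessions to drain
$ srvctl status database -d RACONE                        -- verify the instance is now running on node2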

205. What is RAC and how is it different from non-RAC databases?
RAC stands for Real Application Clusters. It is a clustering solution from Oracle Corporation that ensures high availability of databases by providing instance failover and media failover features.
Oracle RAC is a cluster database with a shared cache architecture that overcomes the limitations of traditional shared-nothing and shared-disk approaches to provide a highly scalable and available database solution for all business applications.
Oracle RAC provides the foundation for enterprise grid computing. Oracle Real Application Clusters allows multiple instances to access a single database; the instances run on multiple nodes.
In Real Application Clusters environments, all nodes concurrently execute transactions against the same database.
Real Application Clusters coordinates each node's access to the shared data to provide consistency and integrity.

206. What are the advantages of RAC (Real Application Clusters)?
 Reliability - if one node fails, the database won't fail
Availability - nodes can be added or replaced without having to shutdown the database
Scalability - more nodes can be added to the cluster as the workload increases

207. What is Cache Fusion? How does this affect applications?
Oracle RAC is composed of two or more instances. When a block of data is read from a datafile by an instance within the cluster and another instance needs the same block, it is cheaper to ship the block image from the instance that already holds it in its SGA than to read it again from disk. To enable this inter-instance communication, Oracle RAC uses the interconnect. The Global Enqueue Service (GES) monitors, and the Instance Enqueue Process manages, Cache Fusion.
Cache Fusion is a parallel database architecture for exploiting clustered computers to achieve scalability for all types of applications. Cache Fusion is a shared cache architecture that uses the high-speed, low-latency interconnects available on clustered systems to maintain database cache coherency. Database blocks are shipped across the interconnect to the node where access to the data is needed. This is accomplished transparently to the application and the users of the system. Because Cache Fusion uses at most a three-point protocol, it scales easily to clusters with a large number of nodes.

In a RAC environment, it is the combining of data blocks, which are shipped across the interconnect from remote database caches (SGA) to the local node, in order to fulfill the requirements for a transaction (DML, Query of Data Dictionary).

208. What command would you use to check the availability of the RAC system?
  crs_stat -t -v (-t -v are optional)
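On 11gR2, crs_stat is deprecated; a commonly used equivalent (a sketch, command names as in 11.2) is:
$ crsctl stat res -t          -- tabular status of all registered cluster resources
$ crsctl check cluster -all   -- health of the clusterware stack on every node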

209. How do we verify that RAC instances are running?
 SQL>select * from V$ACTIVE_INSTANCES;
The query gives the instance number under the INST_NUMBER column and host_name:instance_name under the INST_NAME column.

210. How can you connect to a specific node in a RAC environment?
In tnsnames.ora, ensure that you have INSTANCE_NAME specified in the CONNECT_DATA section of the entry, for example as shown below.
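A hedged example entry, assuming a hypothetical service ORCL with instance ORCL1 reachable through node1-vip; INSTANCE_NAME in CONNECT_DATA pins the connection to that instance:
    ORCL1 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
        (CONNECT_DATA =
          (SERVICE_NAME = ORCL)
          (INSTANCE_NAME = ORCL1)
        )
      )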

211. Which is the "MASTER NODE" in RAC? OR
How do I determine which node in the cluster is the "Master" node?
The node with the lowest node number becomes the master node, and dynamic remastering of resources takes place. To find the master node for a particular resource, query the MASTER_NODE column of V$GES_RESOURCE (examples below).
To find out which node is the master, you can also look in the
$GI_HOME/log/cssd/ocssd.log file and search for "master node number".
When the current master node fails, the surviving node with the lowest node number becomes the new master node.
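Illustrative checks (the exact ocssd.log path varies; on many installations it sits under a host-named directory inside $GI_HOME/log):
SQL> SELECT master_node, COUNT(*) FROM v$ges_resource GROUP BY master_node;
$ grep -i "master node number" $GI_HOME/log/*/cssd/ocssd.log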

212. What components in RAC must reside in shared storage?
All datafiles, controlfiles, SPFILEs, and redo log files must reside on cluster-aware shared storage.

213. Give a few examples of solutions that support cluster storage?
·ASM (automatic storage management),
·Raw disk devices,
·Network file system (NFS),
·OCFS2 and
·OCFS (Oracle Cluster File System).

214. What are Oracle Cluster Components? OR What are the Oracle Clusterware key components?
1.Cluster Interconnect (HAIP)
2.Shared Storage (OCR/Voting Disk)
3.Clusterware software
4.Oracle Kernel Components

215. What are Oracle Kernel Components?
Basically, the Oracle kernel needs to be relinked with the RAC ON option when you convert to RAC; that is the difference, as it enables the RAC-specific background processes such as LMON, LCK, LMD, LMS, etc.

216. How to turn on RAC?
 # link the oracle libraries
$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk rac_on
# rebuild oracle
$ cd $ORACLE_HOME/bin
$ relink oracle
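For completeness, the reverse operation (a hedged sketch; shut down all instances using this home first) relinks the kernel with RAC disabled:
$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk rac_off
$ make -f ins_rdbms.mk ioracle   -- relink and replace the oracle binary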

217. Disk architecture in RAC?
SAN (Storage Area Network) - generally using fibre to connect to the SAN
NAS (Network Attached Storage) - generally using a network to connect to the NAS using either NFS or iSCSI

218. What is Oracle Clusterware?
The Clusterware software allows nodes to communicate with each other and forms the cluster that makes the nodes work as a single logical server. The software is run by the Cluster Ready Services (CRS) using the Oracle Cluster Registry (OCR) that records and maintains the cluster and node membership information and the voting disk which acts as a tiebreaker during communication failures. Consistent heartbeat information travels across the interconnect to the voting disk when the cluster is running.

219. What is Voting Disk and OCR?
Voting Disk
The Voting Disk files are used by Oracle Clusterware to determine which nodes are currently members of the cluster. The voting disk files are also used in concert with other cluster components such as CRS to maintain the cluster's integrity.
Oracle RAC uses the voting disk to manage cluster membership by way of a health check and arbitrates cluster ownership among the instances in case of network failures. The voting disk must reside on shared disk.
All nodes in the cluster register their heartbeat information in the voting disk to confirm that they are all operational. If the heartbeat information of any node is not available in the voting disk, that node will be evicted from the cluster. The CSS (Cluster Synchronization Services) daemon in the clusterware maintains the heartbeat of all nodes to the voting disk. When a node is not able to send its heartbeat to the voting disk, it will reboot itself, thus helping to avoid split-brain syndrome.
For high availability, Oracle recommends that you have an odd number (3 or greater) of voting disks.
A node must be able to access more than half of the voting disks at any time. For example, if you have 3 voting disks configured, then a node must be able to access at least two of the voting disks at any time. If a node cannot access the minimum required number of voting disks it is evicted, or removed, from the cluster.
Oracle Database 11g Release 2 provides the ability to store the voting disks in ASM along with the OCR. Oracle Clusterware can access the OCR and the voting disks present in ASM even if the ASM instance is down. As a result CSS can continue to maintain the Oracle cluster even if the ASM instance has failed.
Oracle Cluster Registry (OCR)
The cluster registry holds all information about the nodes, instances, services, and ASM storage (if used); it also contains state information, i.e. whether they are up and available, and similar details.
The OCR must reside on shared disk that is accessible by all of the nodes in your cluster.

220. What are the administrative tasks involved with voting disk?
 Following administrative tasks are performed with the voting disk :
1) Backing up voting disks
2) Recovering Voting disks
3) Adding voting disks
4) Deleting voting disks
5) Moving voting disks

221. Can you add voting disk online? Do you need voting disk backup?
Yes. As per the documentation, if you have multiple voting disks you can add another one online. But if you have only one voting disk and it is lost, the cluster will be down; you then need to start CRS in exclusive mode and add the voting disk using crsctl add votedisk <path> (see the sketch below).
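Hedged sketches of the relevant commands (paths and diskgroup names are illustrative):
$ crsctl query css votedisk                      -- list current voting disks
$ crsctl add css votedisk /u02/storage/vote03    -- add a voting disk on raw/CFS storage (pre-11.2 style)
$ crsctl replace votedisk +CRSDG                 -- 11gR2 with ASM: recreate the voting files in a diskgroup
# crsctl start crs -excl                         -- exclusive mode, used when the only voting disk was lost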

222. What is the Oracle Recommendation for backing up voting disk?
Oracle recommends us to use the dd command to backup the voting disk with a minimum block size of 4KB.

223. How do we backup voting disks?
1) Oracle recommends that you back up your voting disk after the initial cluster creation and after we complete any node addition or deletion procedures.
2) First, as root user, stop Oracle Clusterware (with the crsctl stop crs command) on all nodes. Then, determine the current voting disk by issuing the following command:
crsctl query css votedisk
3) Then, issue the dd or ocopy command to back up a voting disk, as appropriate.
Syntax for backing up voting disks:
On Linux or UNIX systems:
dd if=voting_disk_name of=backup_file_name
where voting_disk_name is the name of the active voting disk and backup_file_name is the name of the file to which we want to back up the voting disk contents.
On Windows systems, use the ocopy command:
ocopy voting_disk_name backup_file_name

224. How do we verify an existing current backup of OCR? OR How do you find out what OCR backups are available? OR How to take a backup of, or check backups of, the OCR?
We can verify the current backups of the OCR using the following command:
ocrconfig -showbackup
ocrconfig -manualbackup   (to take a manual backup)

225. Are block devices supported for OCR, Voting Disks, and ASM devices?
Block devices are only supported on Linux. On other Unix platforms, the directio semantics are not applicable (or rather not implemented) for block devices.
Note: De-support of raw/block devices is scheduled for Oracle Database 12c. The Oracle Database 10g OUI does not support block devices, however Oracle Clusterware and ASM do. With Oracle RAC 11g Release 2, the Oracle Universal Installer and the Configuration Assistants no longer support raw or block devices. The command-line interfaces still support raw/block devices, and hence the Oracle Clusterware files can be moved after the initial installation.

226. You have lost OCR disk, what is your next step?
The cluster stack will be down because CSSD is unable to maintain its integrity; this is the behaviour in 10g. From 11gR2 onwards, only the CRSD stack will be down while OHASD stays up and running. You can add the OCR back by restoring an automatic backup or importing a manual backup, as sketched below.
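A hedged sketch of restoring the OCR from an automatic backup (run as root; the backup file name shown is illustrative):
# crsctl stop crs -f                     (on all nodes)
# ocrconfig -showbackup                  -- list the available automatic and manual backups
# ocrconfig -restore $GRID_HOME/cdata/mycluster/backup00.ocr
# crsctl start crs                       (on all nodes)
# ocrcheck                               -- verify OCR integrity after the restore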

227. What are the major RAC wait events?
In a RAC environment the buffer cache is global across all instances in the cluster, and hence the processing differs. The most common wait events related to this are gc cr request and gc buffer busy.
GC CR REQUEST: the time it takes to retrieve the data from the remote cache.
Reason: RAC traffic using a slow connection, or inefficient queries (poorly tuned queries increase the number of data blocks requested by an Oracle session; the more blocks requested, the more often a block must be read from a remote instance via the interconnect).
GC BUFFER BUSY: the time the remote instance spends accessing the requested data block locally.

228. What do you do if you see GC CR BLOCK LOST in top 5 Timed Events in AWR Report? 
This is most likely due to a fault in the interconnect network.
Check netstat -s:
84884742 total packets received
1201 fragments dropped after timeout
3384 packet reassembles failed
You do not want to see fragments dropped or packet reassemblies failed.
Check ifconfig -a:
eth0 Link encap:Ethernet HWaddr 00:0B:DB:4B:A2:04
inet addr:130.35.25.110 Bcast:130.35.27.255 Mask:255.255.252.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:21721236 errors:135 dropped:0 overruns:0 frame:95
TX packets:273120 errors:0 dropped:0 overruns:0 carrier:0
You do not want to see a high number of errors.
If you see "fragments dropped" or "packet reassembles failed", work with your system administrator to find the fault in the network.

229. Srvctl cannot start instance, I get the following error PRKP-1001 CRS-0215, however sqlplus can start it on both nodes? How do you identify the problem?
Set the environment variable SRVM_TRACE to true and start the instance with srvctl; you will then get a detailed error stack (see the example below).
This could be many things, but a popular issue is when you have a separate ASM home and the listener is running out of that home (it was the first home installed). Srvctl needs a TNS_ADMIN alias to the network/admin directory in that home instead of using the default ORACLE_HOME/network/admin for the database you are trying to start. For srvctl to work you must run
srvctl setenv nodeapps -n node -T TNS_ADMIN=<full path>
on each node in the cluster. You cannot rely on a TNS_ADMIN environment variable. See Note 420977.1.
Another cause is a non-existent spfile, see Note 732683.1.
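A short illustration, assuming a hypothetical database RACDB with instance RACDB1:
$ export SRVM_TRACE=true
$ srvctl start instance -d RACDB -i RACDB1    -- now prints a detailed Java trace with the real error
$ unset SRVM_TRACE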

230. What are Oracle Clusterware processes for 10g on Unix and Linux?
Cluster Synchronization Services (ocssd) — Manages cluster node membership and runs as the oracle user; failure of this process results in a node restart.
Cluster Ready Services (crsd) — The crsd process manages cluster resources (which could be a database, an instance, a service, a listener, a virtual IP (VIP) address, an application process, and so on) based on the resource's configuration information that is stored in the OCR. This includes start, stop, monitor, and failover operations. This process runs as the root user.
Event Manager daemon (evmd) — A background process that publishes the events that CRS creates.
Process Monitor Daemon (oprocd) — This process monitors the cluster and provides I/O fencing. OPROCD performs its check, stops running, and if the wake-up is beyond the expected time, OPROCD resets the processor and reboots the node. An OPROCD failure results in Oracle Clusterware restarting the node. OPROCD uses the hangcheck timer on Linux platforms.
RACG (racgmain, racgimon) — Extends the clusterware to support Oracle-specific requirements and complex resources. Runs server callout scripts when FAN events occur.

231. What are Oracle database background processes specific to RAC? OR
What are Oracle RAC software components?
Oracle RAC is composed of two or more database instances. They are composed of memory structures and background processes, just like a single-instance database. Oracle RAC instances use two services, GES (Global Enqueue Service) and GCS (Global Cache Service), that enable Cache Fusion. Oracle RAC instances also have the following additional background processes:
ACMS—Atomic Controlfile to Memory Service (ACMS)
GTX0-j—Global Transaction Process
LMON—Global Enqueue Service Monitor
LMD—Global Enqueue Service Daemon
LMS—Global Cache Service Process
LCK0—Instance Enqueue Process
RMSn—Oracle RAC Management Processes (RMSn)
RSMN—Remote Slave Monitor
To ensure that each Oracle RAC database instance obtains the block that it needs to satisfy a query or transaction, Oracle RAC instances use two processes, the Global Cache Service (GCS) and the Global Enqueue Service (GES). The GCS and GES maintain records of the statuses of each data file and each cached block using a Global Resource Directory (GRD). The GRD contents are distributed across all of the active instances.

232. What is GRD?
GRD stands for Global Resource Directory. The GES and GCS maintain records of the status of each datafile and each cached block using the Global Resource Directory. This process is referred to as Cache Fusion and helps ensure data integrity.

233. What is ACMS?
ACMS stands for Atomic Controlfile to Memory Service. In an Oracle RAC environment, ACMS is an agent that ensures distributed SGA memory updates, i.e. SGA updates are either globally committed on success or globally aborted in the event of a failure.

234. What is SCAN listener?
A SCAN listener is additional to the node listeners: it listens for incoming database connection requests that reach the cluster through the SCAN IPs, and it has endpoints configured for the node listeners, to which it routes each connection request on the appropriate node.
The SCAN can be disabled if not required; however, SCAN is mandatory during RAC installation. Enabling/disabling the SCAN is mostly done in Oracle Apps environments for the Concurrent Manager (a kind of job scheduler in Oracle Apps).
Steps to disable the SCAN:
i.  Do not use the SCAN name/IP at the client end.
ii. Stop the SCAN listener
    srvctl stop scan_listener
iii.Stop the SCAN
    srvctl stop scan (this will stop the SCAN VIPs)
iv. Disable the SCAN and the SCAN listener
    srvctl disable scan
    srvctl disable scan_listener

235. What are the different network components in 10g RAC?
Public, private, and VIP components.
The private interface is for inter-node communication. The VIP is all about availability of the application: when a node fails, the VIP component fails over to another node. This is why all applications should be based on the VIP components, i.e. the TNS entries should have the VIP entries in the host list.

236. What is an interconnect network?
An interconnect network is a private network that connects all of the servers in a cluster. The interconnect network uses a switch/multiple switches that only the nodes in the cluster can access.

237. What is the use of cluster interconnect (Private Interconnect)? 
Cluster interconnect is used by Cache Fusion for inter-instance communication. The Clusterware uses the private interconnect for cluster synchronization (network heartbeat) and daemon communication between the clustered nodes. This communication is based on the TCP protocol.
RAC uses the interconnect for Cache Fusion (UDP) and inter-process communication (TCP).

238. How can we configure the cluster interconnect?
· Configure User Datagram Protocol (UDP) on Gigabit Ethernet for cluster interconnects.
· On UNIX and Linux systems, UDP and RDS (Reliable Datagram Sockets) are the protocols used by Oracle Clusterware.
· Windows clusters use the TCP protocol.
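A sample oifcfg session (a hedged sketch; interface names and subnets are illustrative) showing how the interface classification can be viewed or changed:
$ oifcfg getif
eth0  192.168.10.0  global  public
eth1  192.168.0.0   global  cluster_interconnect
$ oifcfg setif -global eth1/192.168.0.0:cluster_interconnect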

239. What is a virtual IP address or VIP?
A virtual IP address or VIP is an alternate IP address that the client connections use instead of the standard public IP address. To configure VIP address, we need to reserve a spare IP address for each node, and the IP addresses must use the same subnet as the public network.

240. What is the use of VIP in Oracle RAC? 
Without using VIPs or FAN, clients connected to a node that died will often wait for a TCP timeout period (which can be up to 10 min) before getting an error. As a result, you don't really have a good HA solution without using VIPs.
If a node fails, then the node's VIP address fails over to another node on which the VIP address can accept TCP connections but it cannot accept Oracle connections. New node re-arps the world indicating a new MAC address for the IP. Subsequent packets sent to the VIP go to the new node, which will send error RST packets back to the clients. This results in the clients getting errors immediately.

241. Give situations under which VIP address failover happens?
VIP address failover happens when the node on which the VIP address runs fails, when all interfaces for the VIP address fail, or when all interfaces for the VIP address are disconnected from the network.

242. What is the significance of VIP address failover?
When a VIP address failover happens, clients that attempt to connect to the VIP address receive a rapid connection refused error; they do not have to wait for TCP connection timeout messages.

243. What is the use of a service in Oracle RAC environment?
Applications should use the services feature to connect to the Oracle database. Services enable us to define rules and characteristics to control how users and applications connect to database instances.

244. What are the characteristics controlled by Oracle services feature?
The characteristics include a unique name, workload balancing, failover options, and high availability.

245. What enables the load balancing of applications in RAC?
Oracle Net Services enable the load balancing of application connections across all of the instances in an Oracle RAC database.

246. What are the types of connection load-balancing?
Connection workload management is one of the key aspects when you have RAC instances, as you want to distribute the connections to specific nodes/instances or to those that have less load.
There are two types of connection load balancing:
1. Client-side load balancing (also called connect-time load balancing)
2. Server-side load balancing (also called listener connection load balancing)

247. What is the difference between server-side and client-side connection load balancing?
Client-side load balancing is done by the client, which randomizes connection requests among the listener addresses listed in the connect descriptor. With server-side load balancing, the listener uses the load balancing advisory to redirect connections to the instance providing the best service.
Client-side load balancing: Oracle's client-side load balancing feature enables clients to randomize connection requests among all the available listeners.
A TNS entry that contains the addresses of all nodes and uses
LOAD_BALANCE=ON (on by default) will use connect-time load balancing, i.e. client-side load balancing.
Sample client-side TNS entry:
    finance =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = myrac2-vip)(PORT = 2042))
        (ADDRESS = (PROTOCOL = TCP)(HOST = myrac1-vip)(PORT = 2042))
        (ADDRESS = (PROTOCOL = TCP)(HOST = myrac3-vip)(PORT = 2042))
        (LOAD_BALANCE = yes)
        (FAILOVER = ON)
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = FINANCE)
          (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))
        )
      )
Server-side load balancing: This improves connection performance by balancing the number of active connections among multiple instances and dispatchers. In a single-instance shared server environment, the listener selects the least loaded dispatcher to handle the incoming client requests. In a RAC environment, PMON registers the load of all instances and dispatchers with the listeners, and based on that load information the listener redirects the connection to the least loaded node.
In a RAC environment, the REMOTE_LISTENER parameter, which points to a TNS entry containing the addresses of all nodes, needs to be set so that the load balance advisory updates reach every listener.
Sample listener parameters on an instance of the RAC cluster (a sample REMOTE_LISTENER TNS entry follows below):

    local_listener=LISTENER_MYRAC1
    remote_listener=LISTENERS_MYRACDB
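A hedged sketch of the matching REMOTE_LISTENER address list and how it is set (host names follow the earlier sample; listener port 1521 is assumed):
    LISTENERS_MYRACDB =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = myrac1-vip)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = myrac2-vip)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = myrac3-vip)(PORT = 1521))
      )
SQL> alter system set remote_listener='LISTENERS_MYRACDB' scope=both sid='*';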

248. What are the different types of Server-Side Connection Load Balancing?
With server-side load balancing, the SCAN listener directs a connection request to the best instance currently providing the service by using the load balancing advisory. The two types of connection load balancing are:
· SHORT—Connections are distributed across instances based on the amount of time that the service is used. Use the SHORT connection load balancing goal for applications that have connections of brief duration. When using connection pools that are integrated with FAN, set the connection load balancing goal to SHORT. SHORT tells the listener to use CPU-based statistics.
· LONG—Connections are distributed across instances based on the number of sessions in each instance, for each instance that supports the service. Use the LONG connection load balancing goal for applications that have connections of long duration. This is typical for connection pools and SQL*Forms sessions. LONG is the default connection load balancing goal, and tells the listener to use session-based statistics.
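A hedged example of setting the goal for a hypothetical service OLTP on database RACDB (pre-12c srvctl syntax, where -j sets the connection load balancing goal):
$ srvctl modify service -d RACDB -s OLTP -j SHORT
$ srvctl config service -d RACDB -s OLTP      -- confirm the goal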

249. What are the administrative tools used for Oracle RAC environments?
Oracle RAC cluster can be administered as a single image using the below
·       OEM (Enterprise Manager),
·       SQL*PLUS,
·       Server control (SRVCTL),
·       Cluster Verification Utility (CLUVFY),
·       DBCA,
·       NETCA

250. Name some Oracle Clusterware tools and their uses?
·OIFCFG - allocating and deallocating network interfaces; can also be used to identify the interconnect being used.
·OCRCONFIG - command-line tool for managing the Oracle Cluster Registry (backup/restore, export/import, add/replace OCR locations).
·OCRDUMP - dumps the contents of the Oracle Cluster Registry to a file for viewing.
·CVU - Cluster Verification Utility, used to verify cluster prerequisites and the integrity of the cluster configuration.

251. What is the difference between CRSCTL and SRVCTL?
crsctl manages clusterware-related operations:
    Starting and stopping Oracle Clusterware
    Enabling and disabling Oracle Clusterware daemons
    Registering cluster resources
srvctl manages Oracle resource-related operations:
    Starting and stopping database instances and services
    Also, from 11gR2 onwards it manages cluster resources such as the network, VIPs, and disks (see the examples below)
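Typical examples (the database name RACDB is hypothetical; crsctl commands are usually run as root or the grid owner):
# crsctl check crs                            -- health of the local clusterware stack
# crsctl stop crs                             -- stop the whole stack on the local node
$ srvctl status database -d RACDB             -- status of every instance of the database
$ srvctl stop instance -d RACDB -i RACDB2     -- stop a single instance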

252. How do we remove ASM from an Oracle RAC environment?
We need to stop and delete the instance on the node first, in interactive or silent mode. After that, ASM can be removed using the srvctl tool as follows:
srvctl stop asm -n node_name
srvctl remove asm -n node_name
We can verify if ASM has been removed by issuing the following command:
srvctl config asm -n node_name

253. How do we verify that an instance has been removed from OCR after deleting an instance?
Issue the following srvctl command:
srvctl config database -d database_name
cd CRS_HOME/bin
./crs_stat

254. What are the modes of deleting instances from Oracle Real Application Clusters databases?
We can delete instances using silent mode or interactive mode using DBCA(Database Configuration Assistant).

255. What are the background process that exists in 11gr2 and functionality?
Process Name     Functionality
crsd     •The CRS daemon (crsd) manages cluster resources based on configuration information that is stored in Oracle Cluster Registry (OCR) for each resource. This includes start, stop, monitor, and failover operations. The crsd process generates events when the status of a resource changes.
cssd     •Cluster Synchronization Service (CSS): Manages the cluster configuration by controlling which nodes are members of the cluster and by notifying members when a node joins or leaves the cluster. If you are using certified third-party clusterware, then CSS processes interfaces with your clusterware to manage node membership information. CSS has three separate processes: the CSS daemon (ocssd), the CSS Agent (cssdagent), and the CSS Monitor (cssdmonitor). The cssdagent process monitors the cluster and provides input/output fencing. This service formerly was provided by Oracle Process Monitor daemon (oprocd), also known as OraFenceService on Windows. A cssdagent failure results in Oracle Clusterware restarting the node.
diskmon     •Disk Monitor daemon (diskmon): Monitors and performs input/output fencing for Oracle Exadata Storage Server. As Exadata storage can be added to any Oracle RAC node at any point in time, the diskmon daemon is always started when ocssd is started.
evmd     •Event Manager (EVM): Is a background process that publishes Oracle Clusterware events
mdnsd     •Multicast domain name service (mDNS): Allows DNS requests. The mDNS process is a background process on Linux and UNIX, and a service on Windows.
gnsd     •Oracle Grid Naming Service (GNS): Is a gateway between the cluster mDNS and external DNS servers. The GNS process performs name resolution within the cluster.
ons     •Oracle Notification Service (ONS): Is a publish-and-subscribe service for communicating Fast Application Notification (FAN) events
oraagent     •oraagent: Extends clusterware to support Oracle-specific requirements and complex resources. It runs server callout scripts when FAN events occur. This process was known as RACG in Oracle Clusterware 11g Release 1 (11.1).
orarootagent     •Oracle root agent (orarootagent): Is a specialized oraagent process that helps CRSD manage resources owned by root, such as the network, and the Grid virtual IP address
oclskd     •Cluster kill daemon (oclskd): Handles instance/node evictions requests that have been escalated to CSS
gipcd     •Grid IPC daemon (gipcd): Is a helper daemon for the communications infrastructure
ctssd     •Cluster Time Synchronization Service daemon (ctssd): Manages time synchronization between the nodes, rather than depending on NTP

256. Under which user or owner the process will start?
Component                                           Name of the Process                 Owner
Oracle High Availability Service                    ohasd                               init, root
Cluster Ready Service (CRS)                         crsd                                root
Cluster Synchronization Service (CSS)               ocssd, cssdmonitor, cssdagent       grid owner
Event Manager (EVM)                                 evmd, evmlogger                     grid owner
Cluster Time Synchronization Service (CTSS)         octssd                              root
Oracle Notification Service (ONS)                   ons, eons                           grid owner
Oracle Agent                                        oraagent                            grid owner
Oracle Root Agent                                   orarootagent                        root
Grid Naming Service (GNS)                           gnsd                                root
Grid Plug and Play (GPnP)                           gpnpd                               grid owner
Multicast domain name service (mDNS)                mdnsd                               grid owner

257. What is the major difference between 10g and 11g RAC?
There is not much difference between 10g and 11gR1 RAC. But there is a significant difference in 11gR2.
In 10g and 11gR1 RAC, the following were managed by Oracle CRS:
    Databases
    Instances
    Applications
    Node Monitoring
    Event Services
    High Availability
From 11gR2 onwards it is a complete HA stack, managing and providing the following resources just like other cluster software such as VCS:
    Databases
    Instances
    Applications
    Cluster Management
    Node Management
    Event Services
    High Availability
    Network Management (provides DNS/GNS/mDNS services on behalf of other traditional services), SCAN (Single Client Access Name) and HAIP
    Storage Management (with help of ASM and other new ACFS filesystem)
    Time synchronization (rather depending upon traditional NTP)
    Removed the OS-dependent hang checker etc.; manages this with its own additional monitor processes

258. What is hangcheck timer?
The hangcheck timer regularly checks the health of the system. If the system hangs or stops, the node will be restarted automatically.
There are 2 key parameters for this module:
-> hangcheck-tick: this parameter defines the period of time between checks of system health. The default value is 60 seconds; Oracle recommends setting it to 30 seconds.
-> hangcheck-margin: this defines the maximum hang delay that should be tolerated before hangcheck-timer resets the RAC node.

259. State the initialization parameters that must have same value for every instance in an Oracle RAC database?
Some initialization parameters are critical at database creation time and must have the same value for every instance. Their value must be specified in the SPFILE or PFILE of every instance. The parameters that must be identical on every instance are listed below (a query to compare them across instances follows the list):
ACTIVE_INSTANCE_COUNT
ARCHIVE_LAG_TARGET
COMPATIBLE
CLUSTER_DATABASE
CLUSTER_DATABASE_INSTANCES
CONTROL_FILES
DB_BLOCK_SIZE
DB_DOMAIN
DB_FILES
DB_NAME
DB_RECOVERY_FILE_DEST
DB_RECOVERY_FILE_DEST_SIZE
DB_UNIQUE_NAME
INSTANCE_TYPE (RDBMS or ASM)
PARALLEL_MAX_SERVERS
REMOTE_LOGIN_PASSWORDFILE
UNDO_MANAGEMENT
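A quick way to compare these values across instances is a GV$PARAMETER query such as the sketch below (the parameter list shown is just an illustrative subset):
SQL> SELECT name, inst_id, value
     FROM gv$parameter
     WHERE name IN ('cluster_database','db_block_size','control_files','undo_management')
     ORDER BY name, inst_id;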

260. How many voting disks are you maintaining ? OR
Why we need to keep odd number of voting disks ?
http://www.toadworld.com/KNOWLEDGE/KnowledgeXpertforOracle/tabid/648/TopicID/RACR2ARC6/Default.aspx
A node must be able to access more than half of the voting disks at any time. For example, if you have 3 voting disks configured, then a node must be able to access at least two of the voting disks at any time. If a node cannot access the minimum required number of voting disks it is evicted, or removed, from the cluster.
By default Oracle will create 3 voting disk files in ASM.
Oracle expects that you will configure at least 3 voting disks for redundancy purposes. You should always configure an odd number of voting disks >= 3. This is because loss of more than half your voting disks will cause the entire cluster to fail.
You should plan on allocating 280MB for each voting disk file. For example, if you are using ASM and external redundancy then you will need to allocate 280MB of disk for the voting disk. If you are using ASM and normal redundancy you will need 560MB.

Why do we have to create odd number of voting disk?
As far as voting disks are concerned, a node must be able to access strictly more than half of the voting disks at any time. So if you want to be able to tolerate a failure of n voting disks, you must have at least 2n+1 configured. (n=1 means 3 voting disks). You can configure up to 32 voting disks, providing protection against 15 simultaneous disk failures.
Oracle recommends that customers use 3 or more voting disks in Oracle RAC 10g Release 2. Note: for best availability, the 3 voting files should be on physically separate disks. It is recommended to use an odd number because 4 disks are not any more highly available than 3 disks: half of 3 is 1.5, rounded up to 2, and half of 4 is 2, so once we lose 2 disks the cluster will fail whether we have 4 voting disks or 3.

261. What is SCAN? (11gR2 feature) ? Which version of JDBC supports SCAN ?
SCAN or Single Client Access Name is a new Oracle Real Application Clusters (RAC) 11g Release 2 feature that provides a single name for clients to access an Oracle database running in a cluster. The benefit is that clients using SCAN do not need to change if you add or remove nodes in the cluster. Having a single name to access the cluster allows clients to use the EZConnect client and the simple JDBC thin URL to access any database running in the cluster, independently of which server(s) in the cluster the database is active on. SCAN provides load balancing and failover of client connections to the database. The SCAN works as an IP alias for the cluster. A typical JDBC URL using SCAN would look like:
jdbc:oracle:thin:@sales1-scan:1521/oltp
SCAN feature is supported only from JDBC version 11.2.0.1. This was one of the features added to 11.2 as a part of "IPv6 support".

Single Client Access Name (SCAN) is a single name that allows client connections to connect to any database in an Oracle cluster independently of which node in the cluster the database (or service) is currently running on. The SCAN should be used in all client connection strings and does not change when you add/remove nodes from the cluster. SCAN eliminates the need to change the TNSNAMES entry when nodes are added to or removed from the cluster. SCAN allows clients to use EZConnect or the thin JDBC URL:
sqlplus system/manager@sales1-scan:1521/oltp
jdbc:oracle:thin:@sales1-scan:1521/oltp
For high availability and scalability, Oracle recommends configuring the SCAN to resolve to three addresses.
The SCAN is defined as a single name resolving to 3 IP addresses in either the cluster's GNS (Grid Naming Service) or your corporate DNS (Domain Name Service). The SCAN must resolve to at least one address on the public network.
SCAN provides a single domain name (via DNS), allowing end-users to address a RAC cluster as if it were a single IP address. SCAN works by replacing a hostname or IP list with virtual IP addresses (VIPs).

262. What are SCAN components in a cluster?
1.SCAN Name
2.SCAN IPs (3)
3.SCAN Listeners (3)
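These can be viewed with srvctl (11gR2 commands; the output naturally differs per cluster):
$ srvctl config scan              -- SCAN name and its three IP addresses
$ srvctl config scan_listener     -- SCAN listener names and ports
$ srvctl status scan_listener     -- node on which each SCAN listener is currently running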

263. What is FAN?
Fast Application Notification (FAN) relates to events concerning instances, services, and nodes. It is a notification mechanism that Oracle RAC uses to notify other processes about configuration and service-level information, including service status changes such as UP or DOWN events. Applications can respond to FAN events and take immediate action.

264. What is TAF? OR  What is Server-side Transparent Application Failover (TAF) and how do I use it?
TAF (Transparent Application Failover) is a configuration that allows session fail-over between different nodes of a RAC database cluster. If a communication link failure occurs after a connection is established, the connection fails over to another active node. Any disrupted transactions are rolled back, and session properties and server-side program variables are lost. In some cases, if the statement executing at the time of the failover is a Select statement, that statement may be automatically re-executed on the new connection with the cursor positioned on the row on which it was positioned prior to the failover.
After an Oracle RAC node crashes—usually from a hardware failure—all new application transactions are automatically rerouted to a specified backup node. The challenge in rerouting is to not lose transactions that were "in flight" at the exact moment of the crash. One of the requirements of continuous availability is the ability to restart in-flight application transactions, allowing a failed node to resume processing on another server without interruption. Oracle's answer to application failover is a new Oracle Net mechanism dubbed Transparent Application Failover. TAF allows the DBA to configure the type and method of failover for each Oracle Net client.
TAF architecture offers the ability to restart transactions at either the transaction (SELECT) or session level. Oracle Database 10g Release 2 introduces server-side TAF when using services. After you create a service, you can use the dbms_service.modify_service PL/SQL procedure to define the TAF policy for the service. Only the BASIC method is supported. Note that this is different from the TAF policy (traditional client TAF) that is supported by srvctl and the EM Services page. If your service has a server-side TAF policy defined, then you do not have to encode TAF in the client connection string. If the instance to which a client is connected fails, the connection will be failed over to another instance in the cluster that is supporting the service. All restrictions of TAF still apply.
NOTE: both the client and server must be 10.2, and aq_ha_notifications must be set to true for the service.
Sample code to modify service:
execute dbms_service.modify_service (service_name => 'gl.us.oracle.com' -
, aq_ha_notifications => true -
, failover_method => dbms_service.failover_method_basic -
, failover_type => dbms_service.failover_type_select -
, failover_retries => 180 -
, failover_delay => 5 -
, clb_goal => dbms_service.clb_goal_long);
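The resulting policy can be checked from the data dictionary; a hedged verification query (service name as in the sample above):
SQL> SELECT name, failover_method, failover_type, failover_retries, failover_delay, clb_goal
     FROM dba_services
     WHERE name = 'gl.us.oracle.com';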

265. What are the requirements for Oracle Clusterware?
1. External shared disk to store the Oracle Clusterware files (Voting Disk and Oracle Cluster Registry - OCR)
2. Two network cards on each clusterware node (and three sets of IP addresses):
Network Card 1 (with IP address set 1) for the public network
Network Card 2 (with IP address set 2) for the private network (for inter-node communication between RAC nodes, used by the clusterware and the RAC database)
IP address set 3 for the Virtual IP (VIP) (used as a virtual IP address for client connections and for connection failover)
3. Storage option for OCR and Voting Disk: RAW, OCFS2 (Oracle Cluster File System), NFS, …

266. What enables the load balancing of applications in RAC?
 Oracle Net Services enable the load balancing of application connections across all of the instances in an Oracle RAC database.

267. How to find location of OCR file when CRS is down?
If you need to find the location of the OCR (Oracle Cluster Registry) but your CRS is down:
When CRS is down, look into the ocr.loc file; the location of this file depends on the OS:
On Linux: /etc/oracle/ocr.loc
On Solaris: /var/opt/oracle/ocr.loc

When CRS is UP: Set ASM environment or CRS environment then run the below command:
ocrcheck

268. In a 2-node RAC, how many NICs are you using?
2 network cards on each clusterware node:
Network Card 1 (with IP address set 1) for the public network
Network Card 2 (with IP address set 2) for the private network (for inter-node communication between RAC nodes, used by the clusterware and the RAC database)

269. In a 2-node RAC, how many IPs are you using (Oracle 10g)?
6 - 3 sets of IP addresses:
## eth1-Public:  2
## eth0-Private: 2
## VIP: 2

270. How to find the IP information in RAC?
Check the /etc/hosts file; a typical layout looks like this:
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               localhost.localdomain localhost
## Public Node names
192.168.10.11           node1-pub.hingu.net     node1-pub
192.168.10.22           node2-pub.hingu.net     node2-pub
## Private Network (Interconnect)
192.168.0.11            node1-prv               node1-prv
192.168.0.22            node2-prv               node2-prv
## Private Network (Network Attached Storage)
192.168.1.11            node1-nas               node1-nas
192.168.1.22            node2-nas               node2-nas
192.168.1.33            nas-server              nas-server
## Virtual IPs
192.168.10.111          node1-vip.hingu.net     node1-vip
192.168.10.222          node2-vip.hingu.net     node2-vip
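The clusterware's view of the same information can be obtained with the following (a hedged sketch; in 10g the nodeapps form is per node):
$ oifcfg getif                          -- public and cluster_interconnect networks registered with the clusterware
$ srvctl config nodeapps -n node1 -a    -- VIP configuration for a node (10g syntax; in 11gR2 simply: srvctl config nodeapps)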
