231. What is Local Inventory?
The inventory inside each Oracle Home is called the local inventory (or Oracle Home inventory). This inventory holds information relevant to that Oracle Home only.
232. What is Oracle Home Inventory?
The Oracle Home inventory, or local inventory, is present inside each Oracle Home. It contains only information relevant to that particular Oracle Home and is located at $ORACLE_HOME/inventory.
It contains the following files and folders:
· Components File
· Home Properties File
· Other Folders
233. Can I have multiple Global Inventories on a machine?
A quite common question is whether you can have multiple global inventories, and the answer is yes, you can. However, if you are upgrading or applying a patch, change the inventory pointer file oraInst.loc to the respective location first. If you follow a single global inventory and you wish to uninstall any software, remove it from the global inventory as well.
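For reference, oraInst.loc is a small text file (on Linux typically /etc/oraInst.loc) that points the installer at a central inventory. A minimal sketch of its contents, with an assumed inventory path and install group:
inventory_loc=/u01/app/oraInventory
inst_group=oinstall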
234. What to do if my Global Inventory is corrupted?
There is no need to worry if your global inventory is corrupted. You can recreate the global inventory on the machine using the Oracle Universal Installer and attach the already installed Oracle Home with the -attachHome option:
./runInstaller -silent -attachHome -invPtrLoc $location_to_oraInst.loc
ORACLE_HOME="Oracle_Home_Location" ORACLE_HOME_NAME="Oracle_Home_Name"
CLUSTER_NODES="{}"
235. If any one of the 6 mandatory background processes is killed/not running, will the instance be aborted?
Background processes are started automatically when the instance is started. The mandatory background processes are DBWn, LGWR, CKPT, SMON, PMON, and RECO. All other processes are optional and are invoked only if the corresponding feature is activated.
If any one of these 6 mandatory background processes is killed or is not running, the instance will be aborted. Any issues related to background processes should be monitored and analyzed from the generated trace files and the alert log.
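As a quick check, the background processes that are currently started can be listed from V$BGPROCESS; a minimal sketch:
SQL> SELECT name, description FROM v$bgprocess WHERE paddr <> '00';   -- only rows with a process address are running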
237. What is SGA_TARGET and SGA_MAX_SIZE?
SGA_MAX_SIZE is the largest amount of memory that will be available for the SGA in the instance, and it is allocated from memory at startup. You do not have to use it all, but it is potentially wasted if you set it too high and don't use it. It is not a dynamic parameter. Basically it gives the Oracle instance room to grow.
SGA_TARGET is the actual memory in use by the current SGA. This parameter is dynamic and can be increased up to the value of SGA_MAX_SIZE.
SGA_MAX_SIZE and SGA_TARGET are both parameters used to change the SGA size. SGA_MAX_SIZE sets the maximum value for SGA_TARGET.
SGA_TARGET is a 10g feature used to change the SGA size dynamically. It specifies the total amount of SGA memory available to an instance. This feature is called Automatic Shared Memory Management (ASMM). With ASMM, the parameters java_pool_size, shared_pool_size, large_pool_size and db_cache_size are affected.
SGA_MAX_SIZE & SGA_TARGET
SGA_MAX_SIZE sets the overall amount of memory the SGA can consume but is not dynamic.
The SGA_MAX_SIZE parameter is the maximum allowable size to which the SGA memory area parameters can be resized. If SGA_TARGET is set to some value, then Automatic Shared Memory Management (ASMM) is enabled; the SGA_TARGET value can be adjusted up to the SGA_MAX_SIZE parameter, but not beyond it.
I.e., if SGA_MAX_SIZE=4GB and SGA_TARGET=2GB, at a later period of time you can resize your SGA_TARGET parameter up to the value of SGA_MAX_SIZE, i.e. 4GB, but you cannot resize SGA_TARGET to more than 4GB.
It is significant that SGA_TARGET includes the entire memory for the SGA, in contrast to earlier releases in which memory for the internal and fixed SGA was added to the sum of the configured SGA memory parameters. Thus, SGA_TARGET gives you precise control over the size of the shared memory region allocated by the database. If SGA_TARGET is set to a value greater than SGA_MAX_SIZE at startup, then the latter is bumped up to accommodate SGA_TARGET.
Do not dynamically set or unset the SGA_TARGET parameter; it should be set only at startup.
SGA_TARGET is a database initialization parameter (introduced in Oracle 10g) that can be used for automatic SGA memory sizing.
SGA_TARGET provides the following:
§ Single parameter for total SGA size
§ Automatically sizes SGA components
§ Memory is transferred to where it is most needed
§ Uses workload information
§ Uses internal advisory predictions
§ STATISTICS_LEVEL must be set to TYPICAL
§ SGA_TARGET is dynamic
§ Can be increased up to SGA_MAX_SIZE
§ Can be reduced until some component reaches its minimum size
§ A change in the value of SGA_TARGET affects only automatically sized components
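A minimal sketch of how these two parameters are typically set (the sizes below are assumed values for illustration only):
SQL> ALTER SYSTEM SET SGA_MAX_SIZE = 4G SCOPE=SPFILE;   -- static: takes effect only after restart
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
SQL> ALTER SYSTEM SET SGA_TARGET = 2G SCOPE=BOTH;       -- dynamic: can later be raised up to SGA_MAX_SIZE
SQL> SHOW PARAMETER sga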
238. If I keep SGA_TARGET=0 then what will happen?
Setting SGA_TARGET=0 disables automatic SGA tuning (ASMM). SGA_TARGET is a database initialization parameter (introduced in Oracle 10g) that can be used for automatic SGA memory sizing; its default value is 0, which means SGA auto tuning is disabled.
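A minimal sketch, assuming ASMM is currently enabled:
SQL> ALTER SYSTEM SET SGA_TARGET = 0;
-- With ASMM disabled, the individual pools (db_cache_size, shared_pool_size,
-- large_pool_size, java_pool_size) must then be sized manually.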
239. What happens when you run ALTER DATABASE OPEN RESETLOGS?
The current online redo logs are archived, the log sequence number is reset to 1, a new database incarnation is created, and the online redo logs are given a new time stamp and SCN.
The reason to open the database with RESETLOGS is that after an incomplete recovery, the data files and control files do not come to the same point as the redo log files. As long as the database is not consistent across all three file types (data, redo and control), you cannot open the database. The RESETLOGS clause resets the log sequence numbers in the log files, starting the sequence over at 1, thus enabling you to open the database, but at the cost of losing whatever was in the redo log files.
240. In what scenarios is open resetlogs required?
An ALTER DATABASE OPEN RESETLOGS statement is required:
1. after incomplete recovery (point-in-time recovery), or
2. after recovery with a backup control file, or
3. after recovery with a control file recreated with the RESETLOGS option.
Whenever you perform incomplete recovery or recovery with a backup control file, you must reset the online logs when you open the database. The new version of the reset database is called a new incarnation.
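A minimal RMAN sketch of a point-in-time recovery that ends with a RESETLOGS open (the target time is an assumed value; the database must be in MOUNT state):
RMAN> RUN {
  SET UNTIL TIME "TO_DATE('2024-01-15 10:00:00','YYYY-MM-DD HH24:MI:SS')";  # assumed recovery target
  RESTORE DATABASE;
  RECOVER DATABASE;
}
RMAN> ALTER DATABASE OPEN RESETLOGS;   # on older releases, run this from SQL*Plus instead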
241. Difference between RESETLOGS and NORESETLOGS?
After a recover database operation, open the database with: ALTER DATABASE OPEN [NO]RESETLOGS
NORESETLOGS:
The NORESETLOGS option does not clear the redo log files during startup and allows the online redo logs to be used for recovery. It is only used in the scenario where MANUAL RECOVERY is started, CANCEL is used, and then RECOVER DATABASE is started.
RESETLOGS:
CAUTION: Never use RESETLOGS unless necessary.
Once RESETLOGS is used, the existing redo log files cannot be used and any completed transactions in those redo logs are lost!
Before using the RESETLOGS option, take an offline backup of the database.
242. What is Database Incarnation?
A database incarnation is effectively a new "version" of the database that is created when you reset the online redo logs using ALTER DATABASE OPEN RESETLOGS. Database incarnations fall into the following categories: Current, Parent, Ancestor and Sibling.
i) Current Incarnation : The database
incarnation in which the database is currently generating redo.
ii) Parent Incarnation : The database
incarnation from which the current incarnation branched following an OPEN
RESETLOGS operation.
iii) Ancestor Incarnation : The parent of the
parent incarnation is an ancestor incarnation. Any parent of an ancestor
incarnation is also an ancestor incarnation.
iv) Sibling Incarnation : Two incarnations that
share a common ancestor are sibling incarnations if neither one is an ancestor
of the other.
243. How to view the Database Incarnation history of a Database?
Using SQL: SQL> select * from v$database_incarnation;
Using RMAN: RMAN> LIST INCARNATION;
You can use the RESET DATABASE TO INCARNATION command to specify that SCNs are to be interpreted in the frame of reference of another incarnation.
For example, if my current database incarnation is 3 and I run FLASHBACK DATABASE TO SCN 3000, then SCN 3000 will be searched for in the current incarnation, which is 3. However, if I want to get back to SCN 3000 of incarnation 2, then I have to use:
RMAN> RESET DATABASE TO INCARNATION 2;
RMAN> RECOVER DATABASE TO SCN 3000;
244. What does the parameter LOG_ARCHIVE_MAX_PROCESSES do?
Since Oracle 8i supports multiple archive destinations, it also supports multiple archiver processes to ease the load on a single archiver. LOG_ARCHIVE_MAX_PROCESSES specifies the number of ARCH processes to be invoked.
This value is evaluated at instance startup if the LOG_ARCHIVE_START initialization parameter has the value TRUE. If LOG_ARCHIVE_START is TRUE and LOG_ARCHIVE_MAX_PROCESSES is not set explicitly, only the arc0 process is started. Otherwise, if LOG_ARCHIVE_MAX_PROCESSES is explicitly set to a value (range 1-10), it starts additional archiver processes named arc0, arc1, arc2, etc. There is no need to change this parameter from its default value of 1, because the system determines how many ARCn processes are needed and LGWR automatically starts up more ARCn processes when the database workload requires more.
255. Is the parameter dynamic?
It can be changed dynamically with ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=n, where n varies between 1 and 10. However, this has no effect if LOG_ARCHIVE_START is FALSE.
256. What is the nature of dynamism of the archiver processes and how are they activated and deactivated?
If LOG_ARCHIVE_START is set to TRUE, Oracle starts up a single archiver process named ARC0. Subsequently, if the parameter is changed using the ALTER SYSTEM command, it will start the specified number of archiver processes. For example:
SVRMGRL> alter system set LOG_ARCHIVE_MAX_PROCESSES=4;
will invoke the additional processes ARC1, ARC2 and ARC3 in the following sequence:
a. The shadow process contacts the primary archive process (ARC0 here) and sends a message to increase the number of processes.
b. The archiver process invokes the kcrrschd function (kcrrschd: Schedule Multiple arch Processes).
c. It checks whether the requested (new) number of processes is lower than the current (old) number of archiver processes. If it is lower, or if ARCHIVING is DISABLED, then it simply returns. Else, if it is more than the supported limit (10), it limits the requested number of processes to 10.
d. The scheduler function acquires a latch on the kcrrxs{} structure (ARCH Activation status).
e. The function then loops over the specified number of processes and updates the structure to change the state to scheduled (KCRRSCHED).
f. It then releases the latch and hands over to the kcrrsmp function (kcrrsmp: Start Multiple arch Processes) for invocation.
g. The kcrrsmp function acquires a latch prematurely on the kcrrxs{} structure (ARCH Activation status) to serialize access to the code section.
h. It schedules any pending archiver processes and cleans up dead processes, if any.
i. The function then loops over the specified number of processes and changes the state of each from KCRRSCHED to KCRRSTART, i.e. the archiver process is pending activation.
j. The latch is then released and the ARCH processes are started.
k. The kcrrsmp function then re-acquires the latch. Individual archiver processes are then notified to activate themselves. The archiver processes activate themselves and update the alert file.
l. The calling function waits for all the archiver processes to activate themselves, and finally when all are active it updates the state in the kcrrxs structure.
m. Finally, when the current number of archiver processes matches the requested number of archiver processes, it releases the latch and breaks.
The alert.log reflects the sequence of events as follows:
sql: prodding the archiver
ALTER SYSTEM SET log_archive_max_processes=4;
Tue Jul 13 02:15:14 1999
ARC0: changing ARC1 KCRRNOARCH->KCRRSCHED
ARC0: changing ARC2 KCRRNOARCH->KCRRSCHED
ARC0: changing ARC3 KCRRNOARCH->KCRRSCHED
ARC0: STARTING ARCH PROCESSES
ARC0: changing ARC1 KCRRSCHED->KCRRSTART
ARC0: changing ARC2 KCRRSCHED->KCRRSTART
ARC0: changing ARC3 KCRRSCHED->KCRRSTART
ARC0: invoking ARC1
Tue Jul 13 02:15:15 1999
ARC1: changing ARC1 KCRRSTART->KCRRACTIVE
Tue Jul 13 02:15:15 1999
ARC0: Initializing ARC1
ARC0: ARC1 invoked
ARC0: invoking ARC2
ARC1 started with pid=10
ARC1: Archival started
Tue Jul 13 02:15:15 1999
ARC2: changing ARC2 KCRRSTART->KCRRACTIVE
Tue Jul 13 02:15:15 1999
ARC0: Initializing ARC2
. . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . .
The same activation sequence follows for ARC2 and ARC3.
Interestingly, the number of processes can also be reduced. For example:
SVRMGRL> alter system set LOG_ARCHIVE_MAX_PROCESSES=2;
will do the following sequentially:
a. The shadow process contacts the currently active archiver process.
b. The archiver process then invokes the kcrrxmp function (kcrrxmp: Stop Multiple Arch Processes).
c. The kcrrxmp function acquires a latch on the kcrrxs{} structure (ARCH Activation status) so that another process cannot update the structure simultaneously.
d. It checks whether the requested (new) number of processes is lower than the current (old) number of archiver processes.
e. If less, then it scans the "head" of the list to find which processes have been scheduled recently and are less likely to get rescheduled for archival.
f. It loops over each process and asks it to change status from KCRRACTIVE to KCRRSHUTDN.
g. Once the state is changed, it signals the operating system to terminate the process and the state changes to KCRRDEAD. The relevant process state information is cleared and the kcrrxs{} structure is updated. The loop mentioned in (f) and (g) continues until the current number of archiver processes matches the requested number of archiver processes.
h. The kcrrxs structure is updated with the new number of archiver processes.
i. The latch is released.
The sequence of states is recorded in the alert.log file as follows:
sql: prodding the archiver
Tue Jul 13 00:34:20 1999
ARC3: changing ARC0 KCRRACTIVE->KCRRSHUTDN
ARC3: sending ARC0 shutdown message
ARC3: changing ARC1 KCRRACTIVE->KCRRSHUTDN
ARC3: sending ARC1 shutdown message
ARC3: received prod
Tue Jul 13 00:34:20 1999
ALTER SYSTEM SET log_archive_max_processes=2;
Tue Jul 13 00:34:20 1999
ARCH shutting down
ARC0: Archival stopped
ARC0: changing ARC0 KCRRSHUTDN->KCRRDEAD
Tue Jul 13 00:34:20 1999
ARCH shutting down
ARC1: Archival stopped
ARC1: changing ARC1 KCRRSHUTDN->KCRRDEAD
257. How does one know which archiver process has archived an online log?
The archiver processes are scheduled to do the archiving in a round-robin manner. There could be scenarios, however, when multiple archiver processes are activated depending on the load. Since Oracle8i supports multiple archive log destinations as well as duplexing of archive logs, it becomes all the more necessary to record which process has archived which logfile. Any successful archival in Oracle8i generates a trace file with the archiver process name. Extracts from one such trace follow:
Instance name: v815
Redo thread mounted by this instance: 1
Oracle process number: 12
Unix process pid: 3658, image: oracle@oracle8i (ARC3)
*** Session ID:(12.1) 1999.07.13.02.15.15.000
*** 1999.07.13.02.15.15.000
*** 1999.07.13.02.33.06.000
ARC3: Begin archiving log# 1 seq# 38 thrd# 1
ARC3: VALIDATE
ARC3: PREPARE
ARC3: INITIALIZE
ARC3: SPOOL
ARC3: Creating archive destination 1 : '/bigdisk/oracle8i/dbs/arch/1_38.dbf'
ARC3: Archiving block 1 count 1 to : '/bigdisk/oracle8i/dbs/arch/1_38.dbf'
ARC3: Closing archive destination 1 : /bigdisk/oracle8i/dbs/arch/1_38.dbf
ARC3: FINISH
ARC3: Archival success destination 1 : '/bigdisk/oracle8i/dbs/arch/1_38.dbf'
ARC3: COMPLETE, all destinations archived
ARC3: ArchivedLog entry added: /bigdisk/oracle8i/dbs/arch/1_38.dbf
ARC3: ARCHIVED
*** 1999.07.13.02.33.06.000
ARC3: Completed archiving log# 1 seq# 38 thrd# 1
From this it is evident that archiver process 3 archived log sequence 38 onto destination 1: /bigdisk/oracle8i/dbs/arch.
258. Is my password encrypted when I logon?
Yes. The Oracle password protocol provides security for client-server and server-server password communication by encrypting passwords passed over the network. The Oracle password protocol uses a session key, valid for a single database connection attempt, to encrypt the user's password. Each connection attempt uses a separate key for encryption, making the encryption more difficult to decipher.
As of version 10.2, password encryption during session authentication is based on the AES 128 standard, except for JDBC thin clients, which support AES as of 11g. Post Oracle Database 10g Release 1, clients and servers use a NIST-approved AES algorithm without any modification. Older clients use a modified Data Encryption Standard (DES) algorithm; for reasons of backward compatibility, the best algorithm available in both the server and the client is negotiated during the session handshake.
259. What about the parameters ora_encrypt_login and dblink_encrypt_login?
Do not use these parameters; as of version 9.2 they have become obsolete. Some documentation may refer to them as being necessary, but the problems with clear-text passwords were all fixed as of Oracle version 7.1.
The password during a logon is ALWAYS encrypted, and has been since Oracle 7.1. The parameters addressed the issue that if a 7.1 client connected to a 7.0 server, which did no credential encryption, a second unencrypted transfer was done by default. To specify whether this second unencrypted transfer should happen, ora_encrypt_login was provided for logins by a client and dblink_encrypt_login for logins over a database link. So today the parameters would make sense only if a newer client connected to a 7.0 database. A 9.2 client cannot even connect to a 7.3 or lower database, so the parameters are not needed; the second unencrypted transfer does not happen.
260. What encryption does Oracle use to store the passwords?
After the key-encrypted password is passed to the server, the server decrypts it, then re-encrypts it using a hashing algorithm and compares it with the password verifier stored in the data dictionary table USER$. If they match, the user successfully connects to the database. Before 11g, a proprietary Data Encryption Standard (DES) based one-way (irreversible) algorithm was used. In 11g, the case-insensitive password hashing algorithm was replaced by the 160-bit SHA-1 hashing algorithm, in better alignment with industry standards. Some documentation may state that Oracle uses a modified version of AES, which is a documentation error.
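As an illustration (requires SYS access; treat this as a sketch, since the exact columns used are release-dependent): the stored verifiers can be seen in SYS.USER$, where PASSWORD holds the pre-11g DES-based verifier and SPARE4 holds the 11g SHA-1 based one.
SQL> SELECT name, password, spare4 FROM sys.user$ WHERE name = 'SCOTT';   -- SCOTT is an assumed example user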
261. What is Row Movement?
Row movement lets you specify whether the database can move a table row. It is possible for a row to move, for example, during table compression or an update operation on partitioned data.
262. How to enable row movement?
Use the following option with the ALTER TABLE statement: { ENABLE | DISABLE } ROW MOVEMENT
The default value is DISABLE.
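A minimal sketch, assuming an existing table named EMP:
SQL> ALTER TABLE emp ENABLE ROW MOVEMENT;
SQL> ALTER TABLE emp DISABLE ROW MOVEMENT;
SQL> SELECT table_name, row_movement FROM user_tables WHERE table_name = 'EMP';   -- check the current setting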
263. Concepts / how does Row Movement work internally?
When row movement is not enabled, a rowid is assigned when you insert a row and it never changes.
After row movement is enabled, a simple update can change your rowid.
When you "Enable Row Movement", you are just giving "permission" to move a row. You do not need to delete and insert; you just update the row, and that causes a delete and insert internally, which in turn changes the ROWID. So, delete and insert is the implicit operation that happens in the background. Row movement is enabled at the table level, and is disabled by default.
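A hypothetical demonstration of the ROWID change (the table SALES, its partition key SALE_DATE and the SALE_ID values are all assumed names): updating the partition key of a row in a partitioned table with row movement enabled relocates the row, so its ROWID changes.
SQL> SELECT rowid FROM sales WHERE sale_id = 100;
SQL> UPDATE sales SET sale_date = DATE '2024-06-01' WHERE sale_id = 100;
SQL> COMMIT;
SQL> SELECT rowid FROM sales WHERE sale_id = 100;   -- a different ROWID if the row moved to another partition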
264. Is Row Movement only for partitioned tables?
No, row movement applies to partitioned as well as non-partitioned tables. It has been allowed for non-partitioned tables since 9.2. For non-partitioned tables it comes into effect when operations such as table compression are performed.
265. Does Row Movement invalidate indexes?
No, it does not invalidate indexes.
266. Does Row Movement create row chaining?
Row movement will not cause row chaining. If a row doesn't fit into a single data block, it must be chained; row chaining is basically the distribution of a single table row across multiple data blocks.
Row movement is different: it updates the corresponding indexes because the ROWID actually changes. This has benefits in the long run, because the additional block read can be avoided in the TABLE ACCESS BY INDEX ROWID operation.
267. Does Row Movement have an impact on performance?
Yes, it can cause performance problems, as it can cause Oracle to move rows to discontinuous data blocks. There can also be some performance impact because it necessarily consumes processing resources on your machine while running.
The reason is that it will:
- read the table
- delete/insert the rows at the bottom of the table to move them up
- generate redo
- generate undo
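The read/delete/insert/redo/undo work listed above is typical of a segment shrink, one operation that relies on row movement. A minimal sketch, with an assumed table name:
SQL> ALTER TABLE emp ENABLE ROW MOVEMENT;
SQL> ALTER TABLE emp SHRINK SPACE CASCADE;   -- also shrinks dependent segments such as indexes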
268. What are the restrictions associated with "Enable Row Movement"?
You cannot specify this clause for a non-partitioned index-organized table.
Tables need to be in an ASSM (Automatic Segment Space Management) tablespace for this to work.
269. Row Movement Related Errors
ORA-14402: updating partition key column would cause a partition change
You get this error during an UPDATE of a row if row movement is not enabled on the partitioned table and the row, with its new partition key value, would need to be placed into a different partition than the one it was in before the update.
See Document 236191.1 for more details.
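A sketch of the scenario (the partitioned table SALES and its partition key SALE_DATE are assumed names):
SQL> UPDATE sales SET sale_date = DATE '2024-02-01' WHERE sale_id = 1;
-- ORA-14402: updating partition key column would cause a partition change
SQL> ALTER TABLE sales ENABLE ROW MOVEMENT;
SQL> UPDATE sales SET sale_date = DATE '2024-02-01' WHERE sale_id = 1;   -- now the row simply moves to the target partition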
ORA-10636: ROW MOVEMENT is not enabled
See Document 1132163.1: What is the Meaning of SHRINK SPACE CHECK?
ORA-08189: cannot flashback the table because row movement is not enabled
See Document 270060.1: Use Flashback Table Feature and Resolve errors
See Document 287698.1: OERR: ORA-8189 cannot flashback the table because row movement is not enabled
270. What are some methods for transferring a table from one schema to another?
Export-Import, CREATE TABLE AS SELECT, or COPY.
271. What happens if a tablespace clause is left off a primary key constraint?
This results in the automatically generated index being placed in the user's default tablespace, which is usually the same tablespace as where the table is being created, and this can cause performance problems.
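A sketch of how to avoid this (the tablespace INDX and the table/column names are assumed):
SQL> ALTER TABLE emp ADD CONSTRAINT emp_pk PRIMARY KEY (empno)
     USING INDEX TABLESPACE indx;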
272. Why should you care about the NLS_DATE_FORMAT?
Because its value (dd-mon-yy or dd-mon-rr) determines the results of your date arithmetic when you are dealing with years of 99 and 00..nn.
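A minimal sketch of the difference (assuming the query is run in the 2000s): with YY the two-digit year lands in the current century, while RR maps 50-99 to the previous century.
SQL> SELECT TO_CHAR(TO_DATE('01-JAN-99', 'DD-MON-YY'), 'YYYY') AS yy_year,
            TO_CHAR(TO_DATE('01-JAN-99', 'DD-MON-RR'), 'YYYY') AS rr_year
     FROM dual;
-- yy_year = 2099, rr_year = 1999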
273. Why is the high water mark important?
The high water mark is used in association with each individual table and tells Oracle:
1) where to start loading data during a SQL*Loader process, and
2) how far to scan a table's information when doing a full-table scan.
274. What is the mechanism provided by ORACLE
for table replication ?
Snapshots and SNAPSHOT LOGs
275. What is a SNAPSHOT?
Snapshots are read-only copies of a master table located on a remote node, which are periodically refreshed to reflect changes made to the master table.
276. What is a SNAPSHOT LOG?
A snapshot log is a table in the master database that is associated with the master table. ORACLE uses a snapshot log to track the rows that have been updated in the master table. Snapshot logs are used in updating the snapshots based on the master table.
277. What is a Shared SQL pool?
The data dictionary cache is stored in an area of the SGA called the Shared SQL Pool. This allows sharing of parsed SQL statements among concurrent users.
278. What is a snapshot?
A snapshot is an object used to dynamically replicate data between distributed databases at specified time intervals. In version 7.0 they are read-only.
279. What are the various types of snapshots?
Simple and Complex.
280. Differentiate between simple and complex snapshots?
- A simple snapshot is based on a query that does not contain GROUP BY clauses, CONNECT BY clauses, JOINs, sub-queries or set operations.
- A complex snapshot contains at least one of the above.
281. What is dynamic data replication?
Updating or inserting records in a remote database through database triggers. It may fail if the remote database is having any problems.
282. How can you enforce referential integrity in snapshots?
Time the references to occur when the master tables are not in use, or perform the references manually, immediately locking the master tables. We can join tables in snapshots by creating a complex snapshot that is based on the master tables.
283. What are the options available to refresh snapshots?
COMPLETE - Tables are completely regenerated using the snapshot's query and the master tables every time the snapshot is refreshed.
FAST - If a simple snapshot is used, then a snapshot log can be used to send the changes to the snapshot tables.
FORCE - Default value. If possible it performs a FAST refresh; otherwise it performs a COMPLETE refresh.
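A minimal sketch of creating a snapshot with a refresh option (the snapshot name, master table and database link are assumed; a FAST refresh also needs a snapshot log on the master table):
SQL> CREATE SNAPSHOT emp_snap
     REFRESH FAST START WITH SYSDATE NEXT SYSDATE + 1/24
     AS SELECT * FROM emp@remote_db;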
284. What is a snapshot log?
It is a table that maintains a record of modifications to the master table in a snapshot. It is stored in the same database as the master table and is only available for simple snapshots. It should be created before creating snapshots.
285. When will the data in the snapshot log be used?
We must be able to create an AFTER ROW trigger on the table (i.e., one should not already exist), after granting the table privileges.
We cannot specify the snapshot log name, because Oracle uses the name of the master table in the names of the database objects that support its snapshot log.
The master table name should be less than or equal to 23 characters.
(The table created will be MLOG$_tablename, and the trigger will be TLOG$_tablename.)
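A minimal sketch of creating a snapshot log on the master table (table name assumed), run in the master database before the snapshot is created:
SQL> CREATE SNAPSHOT LOG ON emp;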
286. What are snapshots?
Snapshots are copies of remote data, based upon
queries. In their simplest form, they can be thought of as a table created by a
command such as: create table t as select * from z;