301. Part of the sqlhc.sql script inserts rows into the plan_table, but the documentation states that SQLHC leaves "no database footprint". How can this be the case?
The SQLHC script performs a rollback at the end. Any additions made to the plan table (or any other object) are removed at the end of the script and the original state is restored.
302. The
SQLHC Documentation states that there is no configuration required, but it does
not state how to create the 'plan_table'. How can I create the plan_table?
On most databases, the plan table is pre-created centrally and referenced by a public synonym, using the 'catplan.sql' script from $ORACLE_HOME/rdbms/admin. If it is not present or accessible then you should check your installation logs for issues.
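If the plan table is genuinely missing, a minimal sketch of creating it with that script (assuming the script is present in your installation; run with SYSDBA privilege):
-- '?' is the SQL*Plus shorthand for $ORACLE_HOME
CONNECT / AS SYSDBA
@?/rdbms/admin/catplan.sql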
303. Is
the SQLHC script the same for all versions?
Yes. The script is identical for all versions.
304. Can
SQLHC be used on Oracle 9i or below?
No. SQLHC was designed to work on 10g and above. It will not work on 9i or below because it uses SQL_ID to identify the SQL statement, and SQL_ID is not available in those versions.
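For illustration, a simple way to find the SQL_ID of a statement on 10g and above (the '%TARGET%' search string is a hypothetical marker; substitute text from your own statement):
SELECT sql_id, sql_text
  FROM v$sql
 WHERE sql_text LIKE '%TARGET%';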
305. What
is the licensing requirement for SQLHC?
SQLHC requires no license and is FREE. Since SQLHC can use information from AWR reports if the Diagnostic or Tuning pack is installed, it asks whether these packs are licensed at your site. If they are licensed, then answer "Yes" to this question to perform additional checks. If they are not licensed, then answer "No". You should check with your site license manager if you have any doubts regarding the licensing of Oracle packs. SQLHC does NOT check for a valid license.
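As a sketch of a typical invocation (check the header of sqlhc.sql and Document 1366133.1 for the exact parameters in your version), the first parameter answers the licensing question (T = Tuning pack, D = Diagnostic pack, N = none) and the second is the SQL_ID:
-- connected in SQL*Plus as a DBA user; the SQL_ID below is a placeholder
START sqlhc.sql T 0w2qpuc6u2zsp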
306. What
is missing from the report if the Tuning Pack is not installed?
SQLHC uses packs to give it
access to AWR (Automatic Workload Repository) based information (AWR
information is accessible via the Diagnostic pack; the Tuning pack is a
superset of the Diagnostic pack so either will provide access to this
information). The AWR infrastructure is installed by default (because the database
uses AWR information for internal purposes); the real question is whether you are licensed to use it or not.
Report sections that use AWR information are annotated with a 'captured by AWR' key phrase. These sections currently include: "Indexes Summary", "Historical SQL Statistics (DBA_HIST_SQLSTAT)" and "Historical Execution Plans". If AWR information is not made available by a license for one of the aforementioned packs, then the AWR-related information in these sections will not be reported.
307. When
using SQLHC, is it more beneficial to have AWR than not having AWR?
Yes. AWR information is a component of the Diagnostic pack and is fundamental for performance diagnostics and interpretation. Generally, if you do not have AWR then you would need to use Statspack, but this is of limited use on later versions. If you have Enterprise Edition it is very useful to have AWR to diagnose performance issues. Specifically for SQLHC, having AWR information provides more usable diagnostic output than not having it, so we would recommend it, but it is not a prerequisite.
308. Are
the global health checks specific to any version of Oracle?
Some of the checks (such as those for features introduced in a particular version, e.g. automatic statistics gathering) are version-specific. Otherwise, no; the checks are a combination of best practices and indicators of invalid information, among other things.
309. Does SQLHC work with --Insert application here--?
SQLHC is a simple Health
Check script that uses SQL commands run against the data dictionary to produce
a report. It is a good, fast way to check your SQL for issues independent of
the app you are using. This means that it can work against any SELECT generated
by any application software. We would recommend that you execute the script in
SQL*Plus but you can probably execute it anywhere that can run SQL Statements.
310. Are
there any limitations on using this script in a database running E-Business Suite?
No.
311. Is
it practical to use this tool for SAP?
Yes.
312. Does
this SQL Health Check tool take into consideration that the query is executed
on an Exadata system/database?
No.
313. Are
there any Exadata specific settings?
Not at this time.
314. Can
this tool be used with Data Guard?
Yes.
315. How can we integrate this script into Grid Control?
SQLHC is a simple SQL script
that you run outside of Grid Control.
316. Are
there specific checks related to Peoplesoft?
Not currently.
317. Are
there specific checks related to Siebel?
Yes, there is at least one health check for Siebel included in SQLHC: "Compatibility with Specific Database Features".
318. Can
SQLHC be used on a remote database?
SQLHC assumes that you have
access to the database where you want to run the health check so connecting to
a remote database will work fine.
319. Can
SQLHC be used on SQL that accesses a remote database?
SQLHC assumes that you have access to the database where you want to run the health check, so selects like:
select * from
table@remote_database
will not work. With this select
and selects like:
select * from local_table,
remote_table@remote_database
we suggest using SQLTXPLAIN
instead of SQLHC.
320. Does
SQLHC work with XML Type and XML indexes?
No.
321. Does
it work with encrypted columns?
Yes. When gathering
statistics, DBMS_STATS does not treat these columns any differently from any
others. Since SQLHC
simply reports on the health
and validity of the statistics on these columns it will work fine.
322. Does
SQLHC work with LOB columns?
Yes it does, but since we do not collate the same type of statistics information for LOB columns as for standard columns, and we do not directly access these columns with standard predicates, the same output as for a standard column is not provided.
323. Does SQLHC advise on extended statistics or index reordering?
No, extended statistics and index reordering are not considered by SQLHC.
324. Is
SQLHC RAC aware?
Yes, it reports on information available across the entire RAC (i.e. AWR/statistics) and also that which is specific to the node (e.g. initialisation parameters, etc.).
325. Can
I use SQLHC to analyze a call to a PL/SQL procedure?
Not with this tool. We
recommend that you use SQLT for more advanced capabilities.
326. Is there a way to run without having to execute the SQL (like EXPLAIN PLAN)?
For example: EXPLAIN PLAN
FOR SELECT /* TARGET SQL */ * FROM dual;
No. Currently, if you run SQLHC against the SQL_ID from an 'EXPLAIN PLAN FOR' command, it runs and completes successfully. However, it does not pick up the statistics for the tables in the query, nor does it produce the current explain plan for it. For SQLHC to retrieve information, the SQL has to have been executed at some time on the instance and the SQL data still needs to be in memory or in the Automatic Workload Repository. Essentially, SQLHC creates a report by extracting existing 'historical' data about a SQL statement. It does not execute and monitor a current statement.
327. Does
SQLHC show multiple plans from the SQL Tuning Advisor?
No. Since SQLHC is designed as
a lightweight health check it does not have the same functionality as a more
comprehensive
tool such as SQLTXPLAIN or
SQL Tuning Advisor itself. If you want to include SQL Tuning Advisor in a SQL
report which includes Health Checks and much more then use SQLTXPLAIN instead
(ensuring you have the appropriate license for the Advisor).
328. How
does SQLHC differ from SQLT?
SQLT is far more advanced
than SQLHC; while it incorporates similar checks to those done in SQLHC it is
far more
extensive generally. SQLHC
is simply a quick health check for a statement.
329. What
is the difference between SQLHC and AWR and TKProf?
SQLHC, AWR and TKProf are
completely different tools for different purposes.
SQLHC is a FREE script to check the health of the environment as it applies to a single SQL statement. See: Document 1366133.1 SQL Tuning Health-Check Script
AWR is the Automatic Workload Repository and is a fundamental mechanism for the storage of general performance information in the database. Based on this stored information, reports are provided that can help diagnose and resolve issues related to the performance of the database and statements within it. See: Document 1363422.1 Automatic Workload Repository (AWR) Reports - Start Point
TKProf is a tool that formats
trace files to provide collated information about a process as recorded in that
trace file. See:
Document 760786.1 TKProf
Interpretation (9i and above)
330. SQLHC
mentions some issues with system statistics. Are system statistics supposed to
be gathered?
Statistics pertaining to the actual system itself are useful so that Oracle can determine the likely load of the environment that queries will be running in and adjust plans accordingly. See: Document 470316.1 Using Actual System Statistics (Collected CPU and IO information)
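As an illustration (a minimal sketch; the choice of workload window is yours), system statistics are gathered with DBMS_STATS:
-- open a workload statistics gathering window:
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('START')
-- ... let a representative workload run, then close it:
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('STOP')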
331. Are
statistics on dictionary objects supposed to be gathered?
Yes. Since the cost-based optimizer relies on accurate statistics, it is sensible to gather statistics on all tables and maintain them. See: Document 457926.1 How to Gather Statistics on SYS Objects and 'Fixed' Objects?
Alternatively, dynamic sampling can be used. See: Document 336267.1 Optimizer Dynamic Sampling (OPTIMIZER_DYNAMIC_SAMPLING)
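A minimal sketch of gathering both dictionary and fixed object statistics with DBMS_STATS (run as a suitably privileged user):
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS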
332. Are
statistics required on temporary tables?
It is usually best to use dynamic sampling on Global Temporary tables, because of their volatile nature. Alternatively, you can fix their statistics to a 'general' value to preserve a particular 'acceptable' access path. See: Document 130899.1 How to Set User-Defined Statistics Instead of RDBMS Statistics
See the following document for other suggestions: Document 356540.1 How to workaround issues with objects that have no statistics
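As an illustration of fixing statistics to a 'general' value, a sketch using DBMS_STATS.SET_TABLE_STATS (the schema, table name and values are hypothetical):
BEGIN
  DBMS_STATS.SET_TABLE_STATS(
    ownname => 'APP_OWNER',  -- hypothetical schema
    tabname => 'MY_GTT',     -- hypothetical global temporary table
    numrows => 1000,
    numblks => 100);
END;
/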
333. Can the tool determine whether statistics were gathered using ANALYZE instead of DBMS_STATS?
Although the statistics collected by ANALYZE may differ from those collected by DBMS_STATS, the differences may not be sufficient to positively identify the use of ANALYZE, nor is there any view that identifies ANALYZE usage. The tool will instead identify issues with the statistics' validity, which is likely more important for plan generation than their source, assuming the statistics are correct. Note that the ANALYZE command should not be used in version 11; only DBMS_STATS should be used to collect statistics.
334. Since
11g Oracle automates the statistics collection, why is the script checking for
statistics health?
Just because statistics collection is automated does not mean that large data loads or different collection intervals have not changed them. We also cannot guarantee that the automated statistics collection has not been disabled.
335. Does
SQLHC use just the most recent statistics for health check?
Yes. SQLHC uses the
statistics at the time that it is run. For a more comprehensive look at the
historic statistics on
objects use SQLT.
336. If
the statistics have been locked, then will SQLHC give the desired results?
Yes. SQLHC will give the
same results whether the statistics are locked or not. Locking statistics does
not prevent them
from being invalid.
337. SQLHC
says that Value A is greater than Value B. This is impossible. How can this
have occurred?
SQLHC reports on what it finds in the dictionary and if it finds issues it outputs a warning. It does not necessarily comment on the origin of the problem. As for potential causes:
1. Different statistics may have been collected at different times.
2. Index and table statistics may have been collected separately when row counts were different (for example, if you gathered index statistics, then deleted rows from the table and then gathered statistics on the table alone, you would have a mismatch).
So it is possible for meaningless statistics to exist. The tool checks for such issues so that plans are not affected.
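To avoid the mismatch described above, table and index statistics are normally gathered together in one pass; a minimal sketch (schema and table names are hypothetical):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP_OWNER',  -- hypothetical schema
    tabname => 'MY_TABLE',   -- hypothetical table
    cascade => TRUE);        -- gather the index statistics in the same pass
END;
/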
338. If a
collective parameter such as "OPTIMIZER_FEATURES_ENABLE" has been
set, does SQLHC just report on that parameter change or on all the affected
underlying parameters?
SQLHC just reports on the top-level parameter change. Each parameter is handled independently of the others. Individual issues may generate more than one observation.
339. Does
SQLHC show all the checks done or just the results that are non-compliant?
SQLHC shows only the non-compliant observations. All statistics and plans are shown for the associated objects, however.
340. Is
the "Tables" and "Indexes Summary" for all tables or only
for the tables affected by a particular SQL?
SQLHC reports on all the
tables referenced in the query you have submitted to the tool.
341. Does
SQLHC check the dictionary statistics against the actual values in the objects?
No. SQLHC just compares the dictionary statistics against each other and against known best practices.
342. Does
the health check account for bind variables in any way?
Yes, but for a more detailed analysis of the binds and histograms on relevant columns, SQLTXPLAIN will provide more detail (SQLHC is designed to be very fast).
343. Will SQLHC tell you that the same SQL is used with different plans?
Yes. It will show there are multiple plans, but that is not the primary function of the tool. For that you would be better off using SQLT. If you have an extremely large number of versions then refer to: Document 296377.1 Troubleshooting: High Version Count Issues
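For illustration, a quick way to see whether a statement currently has multiple plans in memory (the SQL_ID is a placeholder):
SELECT sql_id, plan_hash_value, child_number, executions
  FROM v$sql
 WHERE sql_id = '0w2qpuc6u2zsp';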
344. Can
the health check script help me identify changes to the explain plan over a
long period of time?
Yes, if AWR data is used. With AWR, SQLHC can help show plan changes by comparing the plan in memory with those stored in the AWR repository. It will list all the execution plans of a query from the past (assuming they are still available in AWR).
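Outside of SQLHC, the same historical plans can be listed directly with DBMS_XPLAN, assuming you are licensed for the Diagnostic pack (the SQL_ID is a placeholder):
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('0w2qpuc6u2zsp'));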
345. Does SQLHC provide any advice to improve the performance or is it just a health check?
SQLHC does not provide any direct advice. However, as a result of implementing the suggested changes, performance may be improved. For specific SQL tuning advice, you should use the SQL Tuning Advisor. See: Document 262687.1 How to use the Sql Tuning Advisor
346. Does SQLT provide SQL recommendations or does it only provide the execution plan?
SQLTXPLAIN provides far more
than just the Execution plan. It generates a whole set of detailed information
about a query and its execution and can also link to the tuning advisor to
provide specific plan advice.
347. Is
SQLHC able to identify disk I/O performance issues?
It is not currently designed to incorporate this kind of general check. We welcome any additional health-checks that you may suggest. If any specific health-checks are needed and not covered by this script, then as long as the health-check can be produced with a SQL command (leaving no database footprint) then these can be implemented in future versions.
348. How
is it best to use the information provided by SQLHC?
SQLHC does a number of checks on the validity of various statistics and parameters, with a brief explanation of why they may be a cause for concern. Typically you would be running SQLHC because you are concerned about the performance of a particular SQL statement. Since invalid or ill-advised settings may cause the optimizer to pick a sub-optimal access path, it makes sense to rectify any potential problems found by the health check.
So, review the findings,
assess their potential impact on the statement in question and implement fixes
as appropriate.
349. Does
SQLHC take into account session parameter changes as well as global ones?
Yes.
350. Could
SQLHC not be integrated with advisor tools rather than having multiple tools?
SQLHC was deliberately
created from SQLT so as to provide a very lightweight initial check. If you
want a more
comprehensive report then
use SQLT. See:
Document 215187.1 SQLT
(SQLTXPLAIN) - Tool That Helps To Diagnose SQL Statements Performing Poorly
351. Application users are complaining about the slowness of the database; what should I do? /
If I want to quickly check for the cause of
a performance issue with AWR, what are the principal things that I need to
check?
Firstly, we would suggest
that you examine the findings from an ADDM report from the time period and
implement the suggestions.
Then look at the AWR report.
Very quickly, get some
background information: take a look at the version to get an idea of what you
are dealing with, then look at the Elapsed time and DB time to determine how
much load the database is under.
Then look at the top wait
events that are occurring on the system. If possible compare this to another
report where the load was similar. Then look for a general area where most of
the waits originate. For example, if the top 5 waits are:
library cache load lock
cursor: pin S wait on X
row cache lock
latch: row cache objects
library cache: mutex X
Based on this information, decide what the majority of the problems are related to. If you look at the names of the events you can get some clues: "library cache" and "cursor" are to do with parsing; "row cache" is to do with the in-memory cache of dictionary information. From this you can deduce that most of the time is being spent parsing and manipulating dictionary objects; now you can use this to target the next stages of your investigation. Now pick one of the waits and address it: pick the worst bottleneck to start with as this will give you the biggest benefit.
Once you have made changes, examine the effect on the system. Compare it to the previous period and the baseline. Repeat until acceptable performance has been achieved. For reference, see: Document 1359094.1 How to Use AWR reports to Diagnose Database Performance Issues
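As a practical starting point, both reports can be generated from the standard scripts shipped with the database (a minimal sketch; run in SQL*Plus as a DBA and answer the prompts for instance and snapshot range):
-- ADDM findings and recommendations:
@?/rdbms/admin/addmrpt.sql
-- AWR report for the same period:
@?/rdbms/admin/awrrpt.sql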
352. What
is the principal event that causes performance issues?
There is no principal event applicable to all applications. Some applications may put more pressure on I/O while others may contend for CPU or for memory structures. Instead of focusing on individual events that may not be pertinent to your system, you need to focus on whatever is consuming time on the system. You want the system to spend most of its time on the CPU, without "wasting" CPU on poorly tuned SQL statements.
353. What does the AWR DB time represent?
DB Time is the time used by
the database sessions and processes during the AWR report period.
354. Why does the DB time exceed elapsed
time at the top of the report?
On modern, multiprocessor systems more than one database call may take place simultaneously. For example, if there are four heavily loaded CPUs, then in one hour of elapsed time there is an upper limit of 240 minutes of DB time possible (4 x 60 mins).
355. Why
does the sum of the percentages of DB time under "Time Model
Statistics" equal more than 100?
In the time model statistics
hierarchy, a child statistic may be counted under more than one parent.
356. How do you account for multiple CPUs in AWR reports?
Near the top of the report,
"CPUs" reflects the number of threads. "Cores" is the
number of physical CPUs.
357. In
the Load Profile, how do you interpret DB Time(s) of 63.8 (64) seconds vs. DB CPU(s) of 1.8 seconds?
Near the top of the report, the AWR reports a number of statistics, among which are:
Elapsed time (this is the
Wall clock time)
DB Time (this is the total
time used within the database within the Elapsed Time Period)
As an example, you might
see:
Elapsed : 60.68 (mins)
DB Time : 3,856.09 (mins)
This means that for every minute of Elapsed time there are 63.6 minutes of work done in the database (3,856.09 / 60.68). This figure is reported (in seconds) as DB Time(s) in the Load
Profile section alongside
the amount of CPU used per second (DB CPU (s)):
DB Time(s): 63.6 (per
second)
DB CPU (s): 1.8 (per second)
On a multi-CPU system, the total CPU time available to you (assuming 100% CPU usage) is the number of CPUs in CPU-seconds per second. In other words, if you have 8 CPUs then you could potentially use 8 seconds of CPU time per second. In this case, DB CPU(s): 1.8 (per second) is reporting that the system is using 1.8 seconds of CPU of the potential 8 seconds that it could be using (i.e. about 1/4 of the CPU capacity). This means that 61.8 seconds per second of the DB time is not on the CPU and is therefore something else. As correlation, this should be reflected in the O/S CPU statistics from the period.
In terms of interpretation, "DB Time(s): 63.6 (per second)" means that the system is heavily loaded; "DB CPU(s): 1.8 (per second)" means that most of the time is not CPU but something else.
358. How
to obtain CPU consumption by user for each hour from AWR?
It is not possible to report
an individual user's CPU usage using an AWR report.
359. What
is the difference between the instance CPU and host CPU sections of the AWR
report?
Host CPU information comes directly from the operating system and reflects the CPU usage of the whole machine. Instance CPU reflects the CPU used by the instance itself.
360. What do waits for 'CPU + Wait for CPU' indicate?
These waits show the time spent in the CPU run queue.
361. How does Support interpret AWR? Does Oracle Support have tools to interpret the results from an AWR report?
No. We just interpret the
output manually. It is good practice to start with an ADDM report then progress
to AWR analysis as necessary. Document 1359094.1 FAQ: How to Use AWR reports to
Diagnose Database Performance Issues
There is no substitute for
experience however.
362. I
see high number of waits for 'Event XYZ' in my AWR report. How should I
proceed?
Start by looking at the
Troubleshooting Assistant and selecting the "Troubleshooting Database
Resource Contention" section:
Document 1543445.2
Troubleshooting Assistant: Troubleshooting Performance Issues
This allows you to identify
content based on symptoms you are encountering.
You can also look at the
Troubleshooting guide to identify potential problem causes:
Document 1377446.1
Troubleshooting Performance Issues
If these suggestions do not
help you then you probably will need to ask in the Community or file a Service
Request.
363. In the Top 5 Timed Foreground Events
'event XYZ' has high "time(s)" and "avg wait(ms)". What
does that mean?
The Top 5 Timed Foreground
Events is ordered by percentage of the total database time used. Something with
a high percentage and a high time is likely to be using the most database resource.
The "avg wait(ms)" is the average time for all the waits in the
period. The average could be made up with everything close to the average, a
few peaks and lots of lower values or a mixture. Whether an average is a high
value or not is dependent on the operation. Without significant contention, you
would expect latches to be waited on for less than 1 or 2 milliseconds.
If they are being held for
long periods then that is likely to be causing significant problems. For their
waits it may be possible to have longer waits and still have acceptable overall
performance. Remember to keep any waits in context of the overall time spent
and proportion of the total database time. A single 50 millisecond wait for a
latch in an hour is unlikely to even register in the top 5. Don't focus on that
at the expense of the 500 million 2 millisecond waits for the different latch
that is taking all your resources!
Note that some waits for
wait events will be expected based upon your activity. For example if you are
running a large number of SQL statements heavily using Parallel Execution, then
you will start to see Parallel Query events appearing in your Top 5 events
simply because the management of the parallel query infrastructure takes up
resource. It does not necessarily mean that there is a problem. However, if
your users start to report performance degradation at this time then you might
want to look at how efficiently these queries are running and whether it is realistic to expect the system to cope with the load being imposed on it at that time.
364. Where
do we find the "event XYZ" in the AWR? It is not listed in the top
waits.
If an event did not account
for any "meaningful" amount of time then it might not show up at all.
Remember AWR shows only what is significant to your system.
365. Sometimes, in the Instance
Efficiency Percentages section of an AWR report, the "Soft Parse %"
value is low but the database response is good. Is this something we should be concerned about and need to investigate?
If you do not have a
performance issue, then there is little to be gained from focusing on
individual measures. The "Top 5 Timed Foreground Events" section is
the area that is making the most difference to your performance. If you make a
change to something that is not affecting the top waits significantly then you
are unlikely to notice a large benefit. There will always be waits. If your
users do not have an issue with the performance of the system then there is no
problem.
366. What is meant "User calls"
and "Parses" ?
Parse calls are requests to
parse the SQL statements that have been presented to the database.
User calls are calls to the
database by user sessions. Parse calls are a subset of the user calls.
367. What
is SQL ordered by Parse Calls?
This lists the SQL which has had the most parse calls (i.e. soft parses or hard parses).
368. How
do I interpret the "SQL ordered by Physical Reads (UnOptimized)"
Section?
See: Document 1466035.1 How to Interpret the "SQL ordered by Physical Reads (UnOptimized)" Section in AWR Reports (11.2 onwards) For Smart Flash Cache Database
369. Why
does the report not concentrate on cursor waits?
AWR looks at the whole system rather than at the cursor level. If you want to look at individual cursor waits you can use ASH reports or a 10046 trace.
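For illustration, an ASH report can be generated with the standard script shipped with the database (run in SQL*Plus as a DBA; ASH, like AWR, requires the Diagnostic pack license):
@?/rdbms/admin/ashrpt.sql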
370. If
the number of executions is very small (for example one execution), does this
mean that the sql needs to be tuned?
No, it just means the SQL
statement was only executed once in the snapshot period. You should select SQL
to be tuned based upon the amount of overall resource it is taking up on the system.
For example, SQL with high buffer gets, CPU or I/O usage may be a candidate for
tuning.
371. How
do I interpret the "Buffer Cache Advisory" Section?
See: Document 754639.1 How to Read Buffer Cache Advisory Section in AWR and Statspack Reports
372. Is the
"Av Rd (ms)" from disk or from a buffer?
"Av Rd(ms)"
indicates the average time to read data from disk into a buffer.
373. Does
Invalidations or Reloads in the SQL Area indicate the Cache is insufficient?
In most cases invalidations or reloads are an indication that sharing is not occurring efficiently within the application, rather than that there is a need to tune the shared pool memory areas.
If cursors are not shared then the memory will fill up with cursors and space will need to be reclaimed to fit in new cursors. Existing cursors may then be removed and have to be reloaded the next time they are executed. Adding more space just means that more unshared cursors can be stored and does not relieve the pressure. Investigate how well things are being shared before looking to add more memory (a starting query is sketched after the references below).
Document 62143.1
Understanding and Tuning the Shared Pool
Document 296377.1
Troubleshooting: High Version Count Issues
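As a starting point for investigating sharing, a minimal sketch listing the statements with the highest version counts (the threshold of 20 is an illustrative choice):
SELECT sql_id, version_count, executions
  FROM v$sqlarea
 WHERE version_count > 20
 ORDER BY version_count DESC;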
374. What are the meanings of the "Library Cache Activity" Namespaces?
The library cache is divided
into a number of different areas called namespaces that contain library cache
objects dependent on the application profile you are using.
Taking a step back, assuming
you have a performance issue, you should only be concerned with "Library
Cache Activity" if you are seeing a high proportion of library cache type
waits. If that is the case then you are primarily looking for reloads and
invalidations.
Reloads are where an object is re-executed but is found to have been removed from the library cache for some reason (for example, aged out due to space pressure and lack of use). This means that it has to be reloaded. This is inefficient, as the cursor will need to be re-parsed when the information could potentially have already been available.
Invalidations are where a
cursor has been made obsolete (or invalidated) because one of the underlying
pieces of information that it was created from has been changed. For example,
if statistics have been re-gathered then a plan based upon those old statistics
needs to be re-generated because it cannot be relied upon.
The namespace these occur in can help you narrow down the area of the code that is causing the problem. In the majority of cases issues will be in the "SQL AREA" namespace: a problem with standard application SQL cursors is most likely to appear there. Problems with high reloads and invalidations in other namespaces are rare and may require input from Support to diagnose the cause.
375. How should I deal with high waits
for 'Buffer Busy Waits' on RAC?
Identify hot blocks from the "GC Buffer Busy" waits section of the AWR report.
376. How
do I interpret the "PGA Memory Advisory" Section?
See: Document 786554.1 How to Read PGA Memory Advisory Section in AWR and Statspack Reports
377. How
do I interpret the "OS Statistics" Section?
See: Document 762526.1 How to Interpret the OS stats section of an AWR report
378. Why
does my storage administrator tell me that the response time measurements
reported by the storage device indicate that average read time for a file
("Av Rd(ms)") is much lower than what is reported in the AWR?
As with any other
application, Oracle Database is not cognizant of what goes on within operating
system code, once it submits the I/O request. All it can do is to measure the
starting and ending times of the request. Whether, in addition to the actual
seek, rotational delay, and transfer time, one or more of the following events
consumes time is unknown to Oracle Database, as the OS does not report back to
the application that level of detail:
1. Swap time needed to
restore the swapped out process to memory.
2. Time during which the
thread issuing the read request waits in the run queue for a CPU, after an OS
interrupt has detected that the read request has completed and marks the thread
as runnable.
3. I/O queuing. The output
from "iostat" will display average queue length under the column
labeled "avgqu-sz".
379. Can you use AWR reports to identify a DB performance issue due to a physical limitation of the hardware or some problem with the configuration/SQL?
Start with the Top 5 waits
as usual. You would expect I/O waits such as 'db file sequential read' and/or
'db file scattered read' to be high for an I/O problem. Note that I/O can be
across the whole system or limited to single disks so you might see slow
performance against some waits and not others depending on the activity.
If you see high numbers for
I/O related wait events and high average read/write times in the following
section:
IO Stats > Tablespace IO
Stats
IO Stats > File IO Stats
then you might suspect some issue with the I/O subsystem. This could be because it is performing slowly, perhaps being over-stressed by forces external to the database, or possibly by excessive I/O from the database itself. If the I/O is coming from the database then you would expect to see the SQL responsible for this in the "Top SQL" sections. Look at:
SQL Statistics > SQL
ordered by Reads
and look for queries with
excessive Physical Reads. This could be caused by a single massive execution or
a huge number of smaller executions (or a combination).
380. In
AWR Efficiency Percentages, "Parse CPU to Parse Elapsd %" we are
consistently getting a very low % (e.g. 0.3%) but every other Efficiency Percentage is >95%.
Looking at efficiency
percentages alone can lead to dangerous assumptions. Always start with the Top
5 waits and work from there. If there is no performance problem and you are
running within your target service level, then does it matter that some ratio
is not 'optimal'?
Parse elapsed time should be
mostly CPU but, with things like dynamic sampling, it is possible that other
waits will account for the time. If you have identified this as causing a
problem then you should trace the session (use 10046 level 8) to see where the
other 99.7% of parse time is being spent: Document 376442.1 Recommended Method
for Obtaining 10046 trace for Tuning.
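A minimal sketch of the trace commands themselves (see Document 376442.1 for the full method, including locating the trace file):
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
-- ... run the statement being investigated, then:
ALTER SESSION SET EVENTS '10046 trace name context off';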