166. What versions of which packages does Oracle 11gR2 require that ship only with the Solaris 10 Update 6 and greater releases?
According to Oracle Development in Bug 9152554, the Oracle Database 11gR2 product is built and tested on the Solaris 10 U6 OS update level, so Oracle Global Software Support cannot support anything below that level. The runtime environment for running the 11gR2 Database needs to be at the Solaris 10 U6 or higher OS update level.
167. Must the OS kernel be patched via Sun Updates, or is applying the kernel patches sufficient?
According to Oracle/Sun, applying individual Solaris patches or the appropriate MU (Maintenance Update) patch bundle (MU6 or later) to upgrade to U6 only updates existing packages. The additional packages that come with Solaris 10 and were introduced after the update level you originally installed are not added by such an upgrade. So the environment required to run Oracle Database 11gR2 is not met.
Reference: Oracle Development in Bug 9152554.
168. Our kernel level
is current. However, the /etc/release file shows update 3 level. Is this going
to present a problem?
According to Oracle/Sun, applying individual Solaris patches or the appropriate patch bundle (MU6 or later) to upgrade to U6 only updates existing packages. The additional packages that come with Solaris 10 are not added by such an upgrade. So the environment required to run Oracle Database 11gR2 is not met.
The /etc/release file is updated to reflect the new update level (e.g. U6) only during either a fresh install or during the application of an OS upgrade to an update release. The MU (Maintenance Update) patch bundle does NOT amend /etc/release. Since the MU is not the same as a fresh install or OS upgrade, the original update level data in /etc/release is maintained.
169. Is Oracle 11gR2
certified on less than "update 6" if the newest patches are
installed? If not, why not?
According to Oracle Development in Bug 9152554, the Oracle Database 11gR2 product is built and tested on the Solaris 10 U6 OS update level, so Oracle Global Software Support cannot support anything below that level. The runtime environment for running the 11gR2 Database needs to be at the Solaris 10 U6 or higher OS update level.
170. Why is the /etc/release file not reflecting the fact that additional Solaris patch bundle(s) are installed?
When a Solaris system is
first installed, it creates the /etc/release file based on the release version
of the media the system was installed from. The purpose of this is to document
a baseline patch level and all packages installed on the system.
The /etc/release file
doesn't get updated with the kernel updates. You can find the kernel patch
level installed by use of the "uname -a" command. If you just apply
the kernel patch and no other patches, your kernel is up to date, not the whole
release. This is why the /etc/release will reflect what was initially installed
and not the current kernel version or patch level you have.
Oracle's Solaris maintenance team confirms that applying an MU (Maintenance Update) Patch Bundle does not modify the update release version that the system reports. Applying the MU does, however, amend /etc/release to advise you of the latest MU revision installed.
171. Is applying a
kernel patch or a Solaris patch bundle the equivalent to installing the
specific Solaris 10 "update 6" image?
No, it is not. The OS kernel can be updated to U6 levels (or later) with MU (Maintenance Update) patch bundles, or with the individual kernel patch. However, according to Oracle's Solaris team, this is not the update level that Oracle Development built and certified 11gR2 with.
Oracle/Sun has specifically stated that "installing patches will not bring it to Update 6".
Oracle 11gR2 software is
NOT certified for:
a.) Solaris 10
"less than update 6" with individual patches.
b.) Solaris 10
"less than update 6" with a certified MU (Maintenance Update) Solaris
update method
It is only certified for a base install image of Solaris 10 Update 6 or greater, or for an earlier Solaris 10 update image that has been upgraded to Update 6 or greater.
172. Instead of installing Solaris 10 update 6 or greater, could I upgrade my "update 4" non-global zone from its "update 6" global zone host, and then copy over the /etc/release file?
No, you'll need to
update the non-global zones to a level of at least Solaris 10 Update 6.
173. Okay, you have
convinced me. What is it that I need to do to get my Solaris 10 system to be a
true and full "update 6" base image?
You can upgrade the
Solaris OS by using two upgrade methods: standard and Solaris Live Upgrade.
a.) Standard Upgrade is
a full re-install of Solaris 10 from an "update 6" media kit.
1.) Pros - known, more
familiar to Solaris customers
2.) Cons - harder,
inconvenient, larger downtime
b.) Solaris Live Upgrade lets you keep your system running while you upgrade, and you can switch back and forth between Solaris OS releases with a simple reboot (a sketch of the basic command sequence follows this list).
1.) Pros - very little
downtime
2.) Cons - potentially
more complicated, can infrequently fail to boot because not all configurations
can be properly duplicated with Live Upgrade
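As a rough sketch only (the exact lucreate options depend on your root file system layout, the media path is just a placeholder, and you should always follow the Solaris 10 Installation Guide), a Live Upgrade to Update 6 looks something like this, run as root:
# lucreate -n s10u6_be    ## create an alternate boot environment
# luupgrade -u -n s10u6_be -s /cdrom/sol_10_1008    ## upgrade it from the U6 media
# luactivate s10u6_be    ## make the new boot environment active
# init 6    ## reboot into the upgraded environment
If anything goes wrong you can boot back into the original boot environment, which is the main attraction of this method.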
174. How can I
determine what "update" level of Solaris 10 was installed on my
system?
Although not included in
the Installation Guide, the correct output from the "cat
/etc/release" command for a Solaris 10 update 6 system is:
Solaris 10 10/08
s10s_u6wos_07b SPARC
It is the "u6"
in the line "Solaris 10 10/08 s10s_u6wos_07b SPARC" that shows that
this is the minimum required Solaris 10, update 6. A value of "u3"
(for example) would indicate an unacceptable "update Other acceptable
outputs would be (for example):
Solaris 10 10/08 s10s_u6
Solaris 10 5/09 s10s_u7
Solaris 10 10/09 s10s_u8
Solaris 10 9/10 s10s_u9
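To see both the installed update level and the running kernel patch level on your own system (the lines above are only examples; your strings will differ), you can run:
% cat /etc/release
% uname -a
The first line of /etc/release carries the "uN" update designation discussed above, while uname -a reports the running kernel and its patch revision.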
175. How and where
does the 11gR2 OUI retrieve its values?
According to Oracle
Development in Bug 9152554, the "Actual Value" reported by the OUI
comes from the output of the following:
(/usr/bin/pkginfo -l SUNWsolnm | /usr/bin/nawk -F= '/VERSION/ {"/usr/bin/uname -r" | getline uname; print uname "-" $2}' ) > /var/tmp/CVU_11.2.0.1.0_oracle/scratch/exout1489.out 2>&1
You can manually execute
the main part of that on your system as follows:
/usr/bin/pkginfo -l SUNWsolnm | /usr/bin/nawk -F= '/VERSION/ {"/usr/bin/uname -r" | getline uname; print uname "-" $2}'
176. Can I just copy
over an /etc/release file from another Solaris 10 system that really is
"update 6"?
The /etc/release file is a validated file. Even if you did manage to copy, edit, or otherwise simulate it, you would have an unsupported configuration, and as such Oracle would reserve the right to ask you to install the correct version in the correct manner before proceeding with your support case.
177. Does the 11gR2
OUI read the /etc/release file, or use the /usr/bin/pkginfo -l SUNWsolnm
command ?
It does both. For example, if you were to copy or edit your way around the OUI's check of /etc/release, you would still be stopped by the incorrect kernel message that results from the SUNWsolnm command. If you were to simply ignore the OUI's messages, the linking portion of the 11gR2 install would fail.
178.What's included
in a GI PSU ?
Unlike other Grid Infrastructure patches, 11gR2 GI PSUs contain both the GI PSU and the Database PSU (YES, both the GI and DB PSU) for a particular quarter. For example, the 11.2.0.2.2 GI PSU contains both the 11.2.0.2.2 GI PSU and the 11.2.0.2.2 Database PSU.
You can see this when you extract a GI PSU: you will see two directories (named with the patch number), one for the GI PSU and one for the RDBMS PSU.
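For example, with the 11.2.0.2.3 GI PSU used in the examples later in this note (patch numbers and stage directory taken from those examples), extracting the zip and listing the stage area shows the two patch directories:
% unzip -q -d /u01/stage/gipsu3 p12419353_112020_Linux-x86-64.zip
% ls /u01/stage/gipsu3
Among the contents you will find a 12419353 directory (the GI portion) and a 12419331 directory (the Database PSU portion).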
179.How do I find out
whether a bug is fixed in a Clusterware or Grid Infrastructure PSU ?
To find out, check the patch readme; alternatively, "opatch lsinventory -bugs_fixed" will list each individual bug that has been fixed by all installed patches.
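As a quick sketch (the bug number below is just a placeholder), you can filter that output for the bug you care about:
% $ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep 1234567
If the bug number appears in the output, a patch already installed in that home includes the fix.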
180. How to apply a
Clusterware or Grid Infrastructure patch manually?
It is recommended to use OPatch auto when applicable; but in cases where OPatch auto does not apply or fails to apply the patch, refer to the patch README for manual patch instructions. The following examples may also be of use in this situation:
Examples
OPatch Auto Example to Apply a GI PSU (includes Database
PSU)
Platform is Linux 64-bit
All Homes (GI and Database)
are not shared
All Homes are 11.2.0.2
The Grid Infrastructure
owner is grid
The Database owner is
oracle
The Grid Infrastructure
Home is /ocw/grid
Database Home1 is
/db/11.2/db1
Database Home2 is
/db/11.2/db2
The 11.2.0.2.3 GI PSU will be applied in this example
ACFS is NOT in use on this
cluster
Note: This example only covers the application of the patch itself. It does NOT cover the additional database, ACFS or any other component-specific steps required for the PSU installation. That said, you must ALWAYS consult the patch README prior to attempting to install any patch.
1. Create an EMPTY
directory to stage the GI PSU as the GI software owner (our example uses a
directory named
gipsu3):
% mkdir /u01/stage/gipsu3
Note: The directory must be readable,
writable by root, grid and all database users.
2. Extract the GI PSU into the empty
stage directory as the GI software owner:
% unzip -d /u01/stage/gipsu3
p12419353_112020_Linux-x86-64.zip
3. Verify that opatch in ALL 11.2.0.2 homes that will be patched meets the minimum version requirement documented in the PSU README (see "How to find out the opatch version?"). If the version of OPatch in any one (or all) of the homes does not meet the minimum version required for the patch, OPatch must be upgraded in this/these homes prior to continuing (see "How do I install the latest OPatch release?").
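A minimal way to perform this check (paths taken from this example) is to run opatch version from each home and compare the result against the minimum stated in the PSU README:
% /ocw/grid/OPatch/opatch version
% /db/11.2/db1/OPatch/opatch version
% /db/11.2/db2/OPatch/opatch version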
4. As grid user repeat the following to
validate inventory for ALL applicable homes on ALL nodes:
% $GI_HOME/OPatch/opatch lsinventory -detail -oh <home-path>
Note: If any errors or inconsistencies
are returned corrective action MUST be taken prior to applying the patch.
5. As the user root, execute OPatch
auto to apply GI PSU 11.2.0.2.3 to all 11.2.0.2 Grid Infrastructure and
Database
homes:
# cd /u01/stage/gipsu3
# export GI_HOME=/ocw/grid
# $GI_HOME/OPatch/opatch auto /u01/stage/gipsu3
Note: You can optionally apply the
patch to an individual 11.2.0.2 home by specifying the -oh <home path> to
the
opatch auto command:
# $GI_HOME/OPatch/opatch auto /u01/stage/gipsu3 -oh /ocw/grid
This would apply the 11.2.0.2.3 GI PSU
to ONLY the 11.2.0.2 GI Home.
6. As the grid user repeat the inventory validation for all patched homes:
% $GI_HOME/OPatch/opatch lsinventory -detail -oh <home-path>
7. Repeat steps 1-6 on each of the
remaining cluster nodes, 1 node at a time.
8. If you applied the Database PSU to the Database Homes (as shown in this example), you must now load the Modified SQL Files into the Database(s) as follows:
Note: The patch README should be
consulted for additional instructions!
% cd $ORACLE_HOME/rdbms/admin
% sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql psu apply
SQL> QUIT
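As an optional sanity check (not part of the steps above; dba_registry_history is a standard dictionary view populated by catbundle), you can reconnect and confirm that the PSU was recorded in the database:
SQL> SELECT action_time, action, version, comments FROM dba_registry_history;
The most recent row should show the PSU you just loaded.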
EXAMPLE: Apply a CRS patch manually
The example assumes the
following:
Platform is Solaris SPARC
64-bit
All homes (CRS, ASM and DB)
are non-shared
All homes are version 11.1.0.7
The Clusterware Home is
/ocw/crs
The Clusterware, ASM and
Database owner is oracle
The ASM Home is /db/11.1/asm
Database Home 1 is
/db/11.1/db1
Database Home 2 is
/db/11.1/db2
Note: This example only covers the application of the patch itself. It does NOT cover the additional database or any other component-specific steps required for the patch installation. That said, you must ALWAYS consult the patch README prior to attempting to install any patch.
1. Create an EMPTY
directory to stage the 11.1.0.7.7 CRS PSU as the CRS software owner (our
example uses a
directory named crspsu7):
% mkdir /u01/stage/crspsu7
2. Extract the CRS PSU into
the empty stage directory as the CRS software owner:
% unzip -d /u01/stage/crspsu7
p11724953_11107_Solaris-64.zip
3. Verify that opatch in ALL 11.1.0.7 homes (for the Database homes there is a Database component to CRS patches) meets the minimum version requirement documented in the PSU README (see "How to find out the opatch version?"). If the version of OPatch in any one (or all) of the homes does not meet the minimum version required for the patch, OPatch must be upgraded in this/these homes prior to continuing (see "How do I install the latest OPatch release?").
4. As oracle repeat the
following to validate inventory for ALL applicable homes:
% $CRS_HOME/OPatch/opatch lsinventory -detail -oh <home-path>
5. On the local node, stop all instances, listeners, ASM and CRS (a sketch of the stop sequence follows; see the patch README for the exact commands for your environment):
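This stop sequence is only a sketch, mirroring the start commands shown in step 15 of this example plus a CRS stack shutdown; substitute your own instance, database and node names:
% $ORACLE_HOME/bin/srvctl stop instance -i <local instance name> -d <db name>
% $ASM_HOME/bin/srvctl stop asm -n <local node>
% $CRS_HOME/bin/srvctl stop nodeapps -n <local node>
# $CRS_HOME/bin/crsctl stop crs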
root", it is the software install
owner, this is a commonly made mistake
7. As the oracle user execute the prepatch
script for the CRS Home:
% /u01/stage/crspsu7/11724953/custom/scripts/prepatch.sh -crshome /ocw/crs
8. As the oracle user execute the
prepatch script for the Database/ASM Homes:
% /u01/stage/crspsu7/11724953/custom/server/11724953/custom/scripts/prepatch.sh -dbhome /db/11.1/asm
% /u01/stage/crspsu7/11724953/custom/server/11724953/custom/scripts/prepatch.sh -dbhome /db/11.1/db1
% /u01/stage/crspsu7/11724953/custom/server/11724953/custom/scripts/prepatch.sh -dbhome /db/11.1/db2
9. As oracle apply the CRS PSU to the CRS Home using opatch napply:
% export ORACLE_HOME=/ocw/crs
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/crspsu7/11724953
10. As the oracle user apply the database component of the CRS PSU to the Database/ASM Homes using opatch napply:
% export ORACLE_HOME=/db/11.1/asm
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/crspsu7/11724953/custom/server/11724953/
% export ORACLE_HOME=/db/11.1/db1
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/crspsu7/11724953/custom/server/11724953/
% export ORACLE_HOME=/db/11.1/db2
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/crspsu7/11724953/custom/server/11724953/
11. As the oracle user execute the
postpatch script for the CRS Home:
% /u01/stage/crspsu7/11724953/custom/scripts/postpatch.sh -crshome /ocw/crs
12. As the oracle user execute the
postpatch script for the Database/ASM Homes:
% /u01/stage/crspsu7/11724953/custom/server/11724953/custom/scripts/postpatch.sh -dbhome /db/11.1/asm
% /u01/stage/crspsu7/11724953/custom/server/11724953/custom/scripts/postpatch.sh -dbhome /db/11.1/db1
% /u01/stage/crspsu7/11724953/custom/server/11724953/custom/scripts/postpatch.sh -dbhome /db/11.1/db2
13. As root execute the postrootpatch
script (this script will start the CRS stack):
# /u01/stage/crspsu7/11724953/custom/scripts/postrootpatch.sh -crshome /ocw/crs
14. As the oracle user repeat the
following to validate inventory for ALL applicable homes:
% $CRS_HOME/OPatch/opatch lsinventory -detail -oh <home-path>
15. Start all instances, listeners and
ASM on local node (CRS was started with the postrootpatch script):
% $ORACLE_HOME/bin/srvctl start instance -i <local instance name> -d <db name>
% $ASM_HOME/bin/srvctl start asm -n <local node>
% $CRS_HOME/bin/srvctl start nodeapps -n <local node>
16. Repeat steps 1-15 on each node in
the cluster, one node at a time.
EXAMPLE: Applying a GI PSU patch manually
The example assumes the
following:
Platform is Linux 64-bit
All Homes (GI and Database)
are not shared
The Grid Infrastructure
owner is grid
The Database owner is
oracle
The Grid Infrastructure
Home is /ocw/grid
Database Home1 is
/db/11.2/db1
Database Home2 is
/db/11.2/db2
All Homes are 11.2.0.2
ACFS is NOT in use on this
cluster
Note: This example only covers the application of the patch itself. It does NOT cover the additional database or any other component-specific steps required for the patch installation. That said, you must ALWAYS consult the patch README prior to attempting to install any patch.
1. Create an EMPTY
directory to stage the GI PSU as the GI software owner (our example uses a
directory named
gipsu3):
% mkdir /u01/stage/gipsu3
Note: The directory must be readable,
writable by root, grid and all database users.
2. Extract the GI PSU into the empty
stage directory as the GI software owner:
% unzip -d /u01/stage/gipsu3 p12419353_112020_Linux-x86-64.zip
3. Verify that opatch in ALL 11.2.0.2 homes that will be patched meets the minimum version requirement documented in the PSU README (see "How to find out the opatch version?"). If the version of OPatch in any one (or all) of the homes does not meet the minimum version required for the patch, OPatch must be upgraded in this/these homes prior to continuing (see "How do I install the latest OPatch release?").
4. As grid user repeat the following to
validate inventory for ALL applicable homes on ALL nodes:
% $GI_HOME/OPatch/opatch lsinventory -detail -oh <home-path>
Note: If any errors or inconsistencies
are returned corrective action MUST be taken prior to applying the patch.
5. On the local node, use the
"srvctl stop home" command to stop the database resources:
% $GI_HOME/bin/srvctl stop home -o /db/11.2/db1 -s /tmp/statefile_db1.out -n <local node>
% $GI_HOME/bin/srvctl stop home -o /db/11.2/db2 -s /tmp/statefile_db2.out -n <local node>
6. As root unlock the Grid
Infrastructure Home as follows:
# export ORACLE_HOME=/ocw/grid
# $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/crs/install/rootcrs.pl -unlock   ## execute this in a Grid Infrastructure cluster; for Oracle Restart see the note below.
Note: If you are in an Oracle Restart environment, you will need to use the roothas.pl script instead of the rootcrs.pl script as follows:
# $ORACLE_HOME/perl/bin/perl
$ORACLE_HOME/crs/install/roothas.pl -unlock
7. Execute the prepatch script for the
Database Homes as the oracle user:
% /u01/stage/gipsu3/12419353/custom/server/12419353/custom/scripts/prepatch.sh -dbhome /db/11.2/db1
% /u01/stage/gipsu3/12419353/custom/server/12419353/custom/scripts/prepatch.sh -dbhome /db/11.2/db2
8. As the grid user apply the patch to the local GI Home using opatch napply:
% export ORACLE_HOME=/ocw/grid
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/gipsu3/12419353
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/gipsu3/12419331
9. As the oracle user apply the patch to the local Database Homes using opatch napply:
% export ORACLE_HOME=/db/11.2/db1
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/gipsu3/12419353/custom/server/12419353
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/gipsu3/12419331
% export ORACLE_HOME=/db/11.2/db2
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/gipsu3/12419353/custom/server/12419353
% $ORACLE_HOME/OPatch/opatch napply -oh $ORACLE_HOME -local /u01/stage/gipsu3/12419331
10. Execute the postpatch script for
the Database Homes as the oracle user:
% /u01/stage/gipsu3/12419353/custom/server/12419353/custom/scripts/postpatch.sh -dbhome /db/11.2/db1
% /u01/stage/gipsu3/12419353/custom/server/12419353/custom/scripts/postpatch.sh -dbhome /db/11.2/db2
11. As the root user execute the rootadd_rdbms script from the GI Home:
# export ORACLE_HOME=/ocw/grid
# $ORACLE_HOME/rdbms/install/rootadd_rdbms.sh
12. As root relock the Grid
Infrastructure Home as follows (this will also start the GI stack):
# export ORACLE_HOME=/ocw/grid
# $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/crs/install/rootcrs.pl -patch   ## execute this in a Grid Infrastructure cluster; for Oracle Restart see the note below.
Note: If you are in an Oracle Restart environment, you will need to use the roothas.pl script instead of the rootcrs.pl script as follows:
# $ORACLE_HOME/perl/bin/perl
$ORACLE_HOME/crs/install/roothas.pl -patch
13. Restart the database resources on
the local node using the "srvctl start home" command:
% $GI_HOME/bin/srvctl start home -o /db/11.2/db1 -s /tmp/statefile_db1.out -n <local node>
% $GI_HOME/bin/srvctl start home -o /db/11.2/db2 -s /tmp/statefile_db2.out -n <local node>
14. As grid user repeat the following
to validate inventory for ALL patched homes:
% $GI_HOME/OPatch/opatch lsinventory -detail -oh <home-path>
15. Repeat steps 1-14 on each node in
the cluster, one node at a time.
16. If you applied the Database PSU to the Database Homes (as shown in this example), you must now load the Modified SQL Files into the Database(s) as follows:
Note: The patch README should be
consulted for additional instructions!
% cd $ORACLE_HOME/rdbms/admin
% sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql psu apply
SQL> QUIT