Thursday, February 28, 2008

Upgrade 10g to 11g without DBUA

  1. Install Oracle 11g software.
  2. Run utlu111i.sql (located in 11g_ORACLE_HOME/rdbms/admin) on the 10g database and examine the output.
  3. Update the time zone file version to 4. Check the current version:
    select * from v$timezone_file;
    FILENAME VERSION
    ------------ ----------
    timezlrg.dat 3
  4. SELECT CASE COUNT(DISTINCT(tzname))
    WHEN 183 then 1
    WHEN 355 then 1
    WHEN 347 then 1
    WHEN 377 then 2
    WHEN 186 then case COUNT(tzname) WHEN 636 then 2 WHEN 626 then 3 ELSE 0 end
    WHEN 185 then 3
    WHEN 386 then 3
    WHEN 387 then case COUNT(tzname) WHEN 1438 then 3 ELSE 0 end
    WHEN 391 then case COUNT(tzname) WHEN 1457 then 4 ELSE 0 end
    WHEN 392 then case COUNT(tzname) WHEN 1458 then 4 ELSE 0 end
    WHEN 188 then case COUNT(tzname) WHEN 637 then 4 ELSE 0 end
    WHEN 189 then case COUNT(tzname) WHEN 638 then 4 ELSE 0 end
    ELSE 0 end VERSION
    FROM v$timezone_names;


    VERSION
    ----------
    3
  5. If the version of the existing time zone files is less than 4, then apply the patch for version 4 time zone files: download patch 5632264 and apply it to update the time zone files.
  6. Verify the new version: select * from v$timezone_file;

    FILENAME VERSION
    ------------ ----------
    timezlrg.dat 4

    SELECT CASE COUNT(DISTINCT(tzname))
    WHEN 183 then 1
    WHEN 355 then 1
    WHEN 347 then 1
    WHEN 377 then 2
    WHEN 186 then case COUNT(tzname) WHEN 636 then 2 WHEN 626 then 3 ELSE 0 end
    WHEN 185 then 3
    WHEN 386 then 3
    WHEN 387 then case COUNT(tzname) WHEN 1438 then 3 ELSE 0 end
    WHEN 391 then case COUNT(tzname) WHEN 1457 then 4 ELSE 0 end
    WHEN 392 then case COUNT(tzname) WHEN 1458 then 4 ELSE 0 end
    WHEN 188 then case COUNT(tzname) WHEN 637 then 4 ELSE 0 end
    WHEN 189 then case COUNT(tzname) WHEN 638 then 4 ELSE 0 end
    ELSE 0 end VERSION
    FROM v$timezone_names;


    VERSION
    ----------
    4
  7. Gather statistics on the schemas listed in the utlu111i.sql output.
    EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
    EXEC DBMS_STATS.GATHER_SCHEMA_STATS('user_name');
  8. Grant privileges revoked from PUBLIC to resolve invalid objects.
  9. Save Oracle Enterprise Manager Database Control Data

    1. Install Oracle Database 11g Release 1 (11.1).
    2. Set ORACLE_HOME to your old Oracle home.
    3. Set ORACLE_SID to the SID of the database being upgraded.
    4. Set PATH, LD_LIBRARY_PATH and SHLIB_PATH to point to the Oracle home from
    which the database is being upgraded.
    5. Go to the Oracle Database 11g Release 1 (11.1) home.
    6. Execute one of the following:
    a. For a single-instance database, run the following command, where old_SID is
    the SID of the database being upgraded and save_directory is the path to
    the storage place you have chosen for your Database Control files and data:
    emdwgrd -save -sid old_SID -path save_directory
    b. If the database is an Oracle RAC database, remote copy is required across the
    cluster nodes. Define an environment variable to indicate which remote copy
    is configured. For example: setenv EM_REMCP /usr/bin/scp
    Then, execute the following save command:
    emdwgrd -save -cluster -sid old_SID -path save_directory
    If 10g Oracle home is on a shared device, add -shared to the previous command line.
    7. Enter the SYS password for the database to be upgraded.
  10. Shut down the database and set the environment variables to point to 11g:
    ORACLE_HOME=/oraclebase/app/oracle/product/11.1.0/test
    ORACLE_SID=test
    PATH=$ORACLE_HOME/bin:$PATH
  11. Start the database in upgrade mode: startup upgrade;
  12. Run $ORACLE_HOME/rdbms/admin/catupgrd.sql to upgrade the database. Once done, the database is shut down automatically. Start it up in normal mode.
  13. List the status of the database components
    select comp_name,status,version from dba_registry;
  14. To see the output of the upgrade
    @?/rdbms/admin/utlu111s.sql
  15. To run the post-upgrade tasks that don't require the database to be in upgrade mode
    @?/rdbms/admin/catuppst.sql
  16. Check for invalid objects and run utlrp.sql if there are any (a query to list them is shown after this list).
  17. Create listener on 11g.
  18. Set the compatible parameter:
    alter system set compatible='11.1.0.0.0' scope=spfile;
  19. Add an entry to /etc/oratab.
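
To find the invalid objects mentioned in step 16, a query along these lines can be used (a minimal sketch; utlrp.sql in $ORACLE_HOME/rdbms/admin remains the way to recompile them):

select owner, object_type, count(*)
from dba_objects
where status = 'INVALID'
group by owner, object_type;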


Upgrade 10g to 11g with DBUA

  1. Install Oracle 11g software.
  2. Run utlu111i.sql (located in 11g_ORACLE_HOME/rdbms/admin) on the 10g database and examine the output.
  3. Update the time zone file version to 4. Check the current version:
    select * from v$timezone_file;
    FILENAME VERSION
    ------------ ----------
    timezlrg.dat 3
  4. SELECT CASE COUNT(DISTINCT(tzname))
    WHEN 183 then 1
    WHEN 355 then 1
    WHEN 347 then 1
    WHEN 377 then 2
    WHEN 186 then case COUNT(tzname) WHEN 636 then 2 WHEN 626 then 3 ELSE 0 end
    WHEN 185 then 3
    WHEN 386 then 3
    WHEN 387 then case COUNT(tzname) WHEN 1438 then 3 ELSE 0 end
    WHEN 391 then case COUNT(tzname) WHEN 1457 then 4 ELSE 0 end
    WHEN 392 then case COUNT(tzname) WHEN 1458 then 4 ELSE 0 end
    WHEN 188 then case COUNT(tzname) WHEN 637 then 4 ELSE 0 end
    WHEN 189 then case COUNT(tzname) WHEN 638 then 4 ELSE 0 end
    ELSE 0 end VERSION
    FROM v$timezone_names;


    VERSION
    ----------
    3
  5. If the version of the existing time zone files is less than 4, then apply the patch for version 4 time zone files: download patch 5632264 and apply it to update the time zone files.
  6. Verify the new version: select * from v$timezone_file;

    FILENAME VERSION
    ------------ ----------
    timezlrg.dat 4

    SELECT CASE COUNT(DISTINCT(tzname))
    WHEN 183 then 1
    WHEN 355 then 1
    WHEN 347 then 1
    WHEN 377 then 2
    WHEN 186 then case COUNT(tzname) WHEN 636 then 2 WHEN 626 then 3 ELSE 0 end
    WHEN 185 then 3
    WHEN 386 then 3
    WHEN 387 then case COUNT(tzname) WHEN 1438 then 3 ELSE 0 end
    WHEN 391 then case COUNT(tzname) WHEN 1457 then 4 ELSE 0 end
    WHEN 392 then case COUNT(tzname) WHEN 1458 then 4 ELSE 0 end
    WHEN 188 then case COUNT(tzname) WHEN 637 then 4 ELSE 0 end
    WHEN 189 then case COUNT(tzname) WHEN 638 then 4 ELSE 0 end
    ELSE 0 end VERSION
    FROM v$timezone_names;


    VERSION
    ----------
    4
  7. Gather statistics on the schemas listed in the utlu111i.sql output.
    EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
    EXEC DBMS_STATS.GATHER_SCHEMA_STATS('user_name');
  8. Grant privileges revoked from PUBLIC to resolve invalid objects.
  9. Save Oracle Enterprise Manager Database Control Data

    1. Install Oracle Database 11g Release 1 (11.1).
    2. Set ORACLE_HOME to your old Oracle home.
    3. Set ORACLE_SID to the SID of the database being upgraded.
    4. Set PATH, LD_LIBRARY_PATH and SHLIB_PATH to point to the Oracle home from
    which the database is being upgraded.
    5. Go to the Oracle Database 11g Release 1 (11.1) home.
    6. Execute one of the following:
    a. For a single-instance database, run the following command, where old_SID is
    the SID of the database being upgraded and save_directory is the path to
    the storage place you have chosen for your Database Control files and data:
    emdwgrd -save -sid old_SID -path save_directory
    b. If the database is an Oracle RAC database, remote copy is required across the
    cluster nodes. Define an environment variable to indicate which remote copy
    is configured. For example: setenv EM_REMCP /usr/bin/scp
    Then, execute the following save command:
    emdwgrd -save -cluster -sid old_SID -path save_directory
    If 10g Oracle home is on a shared device, add -shared to the previous command line.
    7. Enter the SYS password for the database to be upgraded.
  10. Create the listener on 11g for the upgrade.
  11. Run dbua from the 11g environment to upgrade.
  12. Set the compatible parameter:
    alter system set compatible='11.1.0.0.0' scope=spfile;


Wednesday, February 27, 2008

Search and replace in vi editor

search and replace globally
:%s/old_text/new_text/g


search and replace interactively
:%s/old_text/new_text/c

search and replace a text in current line
:.s/old_text/new_text

/proc/statm , /proc/status and top

From top

PID ... VIRT RES SHR ... COMMAND
4743 ... 2388m 722m 3532 ... java

From cat /proc/4743/status

VmSize: 2445800 kB
VmLck: 0 kB
VmRSS: 740004 kB
VmData: 2373700 kB
VmStk: 24 kB
VmExe: 72 kB
VmLib: 53241 kB
StaBrk: 00515000 kB
Brk: 00557000 kB


getconf PAGE_SIZE
4096

From cat /proc/4743/statm
611450 185001 883 18 0 593431 0

1. size :- total program size (611450 X 4096/1024 = 2445800 kB = 2388M)
2. resident :- resident set size (185001 X 4096/1024 = 740004 kB = 722M)
3. share :- shared pages (883 X 4096/1024 = 3532 kB)
4. trs :- text (code) (18 X 4096/1024 = 72 kB = VmExe)
5. lrs :- library (0 on 2.6 kernels)
6. drs :- data/stack (593431 X 4096/1024 = 2373724 kB = VmData + VmStk)
7. dt :- dirty pages


Script to list /proc/pid/statm content


#!/bin/sh
# Usage: statm.sh <pid> <interval_seconds>

pageSize=$(getconf PAGE_SIZE)

printf "VMSize(KB)\tResident(KB)\tShared(KB)\tCode\tLibrary\tData/Stack\tDirty\n"

while true
do

# /proc/pid/statm fields: size resident shared text lib data dirty
vals=`cat /proc/$1/statm`

totalVM=`echo $vals $pageSize | awk '{print $1*$8/1024}'`
resident=`echo $vals $pageSize | awk '{print $2*$8/1024}'`
shared=`echo $vals $pageSize | awk '{print $3*$8/1024}'`
code=`echo $vals $pageSize | awk '{print $4*$8/1024}'`
library=`echo $vals $pageSize | awk '{print $5*$8/1024}'`
datastack=`echo $vals $pageSize | awk '{print $6*$8/1024}'`
dirty=`echo $vals $pageSize | awk '{print $7*$8/1024}'`

printf "$totalVM\t\t$resident\t\t$shared\t\t$code\t$library\t$datastack\t\t$dirty\n"

sleep $2

done

Tuesday, February 26, 2008

rpm with different architecture

to look up the architecture
rpm -q --queryformat "%{NAME} %{ARCH} \n" package_name


to delete
rpm -e libstdc++.i386
rpm -e libstdc++.x86_64

to update 32 bit versions
up2date --arch=i386 -i compat-db

to update 64 bit versions
up2date --arch=x86_64 -i compat-gcc-32-c++

LoadAvg and LoadFactor script


#!/bin/sh

printf "\tLoad Average\tLoad Factor(loadavg/#CPU)\n"
cpuno=$(cat /proc/cpuinfo | grep processor | wc -l)

while true
do

x=$(cat /proc/loadavg | awk '{print $1}')
#y=$(echo $x / $cpuno | bc)
y=`echo $x $cpuno | awk '{print $1/$2}'`
printf "\t$x\t\t$y\n"
sleep $1

done



Using Mercury Profiler on x86_64

Install jrockit-R27.2.0-jdk1.6.0 or above.

Copy JAVA_HOME/mercuryprofiler/lib/x86-linux to JAVA_HOME/mercuryprofiler/lib/x86-linux64.

Put JAVA_HOME/mercuryprofiler/lib/x86-linux64 in LD_LIBRARY_PATH.

Put -javaagent:$JAVA_HOME/mercuryprofiler/lib/probeagent.jar in the application start line.

Start the application.

Hard Parse Vs Soft Parse

A hard parse consists of the following steps.

1. Load the SQL into memory. This memory area is the shared pool.

2. Syntax parse. Check the syntax of the SQL for spelling mistakes, keywords used in the wrong places, and so on.

3. Semantic parse. Check that the objects referred to in the SQL exist and that the user is authorized to access them.

4. Optimization. An execution plan is created based on the schema statistics or dynamic sampling.

5. Create executable. An executable is created with native file calls to service the SQL query.

A soft parse consists of all the steps mentioned above except the first one. If the SQL is already in the shared pool, the cached version is reused instead of being loaded into the shared pool again.
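
To see how the two kinds of parses add up on a running instance, the parse statistics in v$sysstat can be checked; a minimal sketch:

select name, value
from v$sysstat
where name in ('parse count (total)', 'parse count (hard)');

The gap between the two values approximates the soft parses.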

Monday, February 25, 2008

Changing SQL prompt

Add the following to end of $ORACLE_HOME/sqlplus/admin/glogin.sql

set sqlprompt "_USER'@'_CONNECT_IDENTIFIER SQL>"

The code shown below can also be added to get the same output, and it can even run a custom query at SQL*Plus login. As it relies on a database query, it cannot be used when the database is down.


define _editor=vi
set termout off
column prompter new_value m_prompt

select sys_context('USERENV','CURRENT_USER') || '@' || sys_context('USERENV','DB_NAME') || ' SQL>' prompter from dual;

set sqlprompt '&m_prompt'
undefine m_prompt
commit;

set termout on



expdp / impdp

create directory mydir as '/home/oracle/dpdump';
grant read,write on directory mydir to asanga;


expdp asanga/asa dumpfile=mydir:test.dmp schemas=asanga flashback_scn=14073963 logfile=mydir:expdp.log


expdp asanga/asa directory=mydir dumpfile=test.dmp schemas=asanga flashback_scn=14073963 logfile=expdp.log
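
The SCN passed to flashback_scn can be picked up from the database just before running the export; a minimal sketch (10g and above):

select current_scn from v$database;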


impdp asanga/asa dumpfile=mydir:test.dmp logfile=mydir:impdp.log full=y
impdp asanga/asa directory=mydir dumpfile=test.dmp logfile=impdp.log full=y

To remap to a different schema and generate new OIDs for type objects:
impdp directory=mydir dumpfile=remap.dmp logfile=rmap.log full=y remap_schema=sbx:sbxway transform=oid:n:type


FLASHBACK_TIME


This Export parameter specifies a timestamp of the form "YYYY-MM-DD HH24:MI:SS". Export finds the SCN that most closely matches the specified timestamp.

flashback_time='"2008-02-25 15:30:00"'

to get the date from the system on linux
flashback_time=\"$(date +%F' '%T)\"

Default value: none


Can specify the FLASHBACK_TIME parameter on the command line.


flashback_time=\"TO_TIMESTAMP\(\'19-06-2004 13:24:26\', \'DD-MM-YYYY HH24:MI:SS\'\)\"

You can also specify the FLASHBACK_TIME parameter in a parameter file. When using the TO_TIMESTAMP function in a parameter file in Oracle 10g, use double quotes.
E.g.:
FLASHBACK_TIME="TO_TIMESTAMP('19-06-2004 13:24:26', 'DD-MM-YYYY HH24:MI:SS')"

Exclude Objects

expdp hr/hr EXCLUDE=INDEX:\"LIKE \'EMP%\'\" DUMPFILE=dpump_dir1:exp.dmp

Importing only a subset of rows
Using the query option it is possible to apply a SELECT statement to a single table or to all the tables in a schema. If no table is specified, the query is applied to all the tables in the schema by default.
create table x (a number);

Table created.

SQL> begin
  2  for i in 1 .. 100
  3  loop
  4  insert into x values(i);
  5  end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> commit;

SQL>  create table y as select * from x;

Table created.

SQL> delete from y where mod(a,3)=0;

33 rows deleted.

SQL> commit;
Tables X and Y now have 100 and 67 rows respectively. Export the two tables:
Starting "ASANGA"."SYS_EXPORT_SCHEMA_01":  asanga/******** directory=dumpdir dumpfile=test.dmp logfile=test.log schemas=asanga
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 128 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
. . exported "ASANGA"."X"                                5.585 KB     100 rows
. . exported "ASANGA"."Y"                                5.359 KB      67 rows
Master table "ASANGA"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Drop the tables from the schema before the import:
SQL> drop table x;

Table dropped.

SQL> drop table y;

Table dropped.
Import only the rows of table X where the value of column A is between 10 and 20, and only the even-valued rows of table Y:
impdp asanga/asa directory=dumpdir dumpfile=test.dmp logfile=imp.log query=X:'"where a between 10 and 20"',
Y:'"where mod(a,2)=0"'
...
...
Master table "ASANGA"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "ASANGA"."SYS_IMPORT_FULL_01":  asanga/******** directory=dumpdir dumpfile=test.dmp logfile=imp.log query=X:"where a between 10 and 20",
Y:"where mod(a,2)=0"
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"ASANGA" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "ASANGA"."X"                                5.585 KB      11 out of 100 rows
. . imported "ASANGA"."Y"                                5.359 KB      34 out of 67 rows
Job "ASANGA"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 13:33:47


Saturday, February 23, 2008

DBWn and LGWR writes

DBWn writes when....
  1. A server process takes too long to find a free buffer (too long is determined by internal parameters).
  2. The checkpoint queue (the list of dirty buffers waiting to be written to disk) becomes too long.
  3. Three-second timeout (every three seconds DBWn writes some dirty buffers to disk).
  4. alter system checkpoint is called; all dirty buffers are written to disk.

checkpoints automatically occur when

  1. tablespace is dropped
  2. tablespace is made read only
  3. tablespace is put into hot backup mode
  4. tablespace is taken offline

LGWR writes when....

  1. A user commits a transaction.
  2. The log buffer is one-third full.
  3. Just before DBWn writes.
  4. Three-second timeout.
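
The effect of these triggers can be observed through the statistics in v$sysstat; a minimal sketch using 10g statistic names:

select name, value
from v$sysstat
where name in ('user commits', 'DBWR checkpoints', 'redo writes', 'redo synch writes');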


Database/Instance Recovery in RAC

When a database starts oracle performs two consistency checks (among others).

  1. Check if the start SCN value of each datafile header matches the corresponding stop SCN value in the controlfile.
  2. Check if the checkpoint counter values match.
If these two checks are successful then no instance recovery is needed.

If datafile header SCNs are out of sync then at least an instance recovery is needed.

If checkpoint counter value check fails then Oracle knows datafile was replaced with a backup copy while it was down and requires a media recovery.

Instance recovery is completed when Oracle has performed

  1. Cache recovery : replays the contents of the online redo logs of the failed instance.
  2. Transaction recovery : rolls back the uncommitted transactions of the failed instance.
  • During the first phase of the recovery, GES (Global Enqueue Service) remasters the enqueues and GCS (Global Cache Service) remasters its resources from the failed instance among the remaining instances.
  • The first step in GCS remastering is for Oracle to assign a new incarnation number.
  • Oracle determines how many nodes are left in the cluster.
  • In an attempt to recreate the resource master of the failed node, all GCS resource requests and write requests are temporarily suspended. The GRD (Global Resource Directory) is frozen.
  • All the dead shadow processes related to the GCS are cleaned up from the failed instance.
  • After the enqueues are reconfigured, one of the surviving instances grabs the instance recovery enqueue.
  • At the same time as the GCS resources are remastered, SMON determines the blocks that need recovery. This is known as the recovery set. Due to the nature of cache fusion, SMON needs to merge the contents of all online redo logs of each failed instance to determine the blocks that need recovery (the recovery set) and the order of apply.
  • At this stage buffer space for recovery is allocated, and GCS resources identified by reading the online redo logs are claimed as recovery resources. This prevents the other instances from accessing these resources.
  • SMON performs roll forward (cache recovery) and roll back (transaction recovery).
  • A new master node is assigned to the cluster if the failed node was the previous master. All GCS shadow processes are traversed and the GRD is removed from the frozen state. This completes the reconfiguration process.


Friday, February 22, 2008

Instance Recovery in Single Instance Database

When a database starts, Oracle performs two consistency checks (among others).
  1. Check if the start SCN value of each datafile header matches the corresponding stop SCN value in the controlfile.
  2. Check if the checkpoint counter values match.
If these two checks are successful then no instance recovery is needed.

If datafile header SCNs are out of sync then at least an instance recovery is needed.

If checkpoint counter value check fails then Oracle knows datafile was replaced with a backup copy while it was down and requires a media recovery.
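
The SCN values compared in the first check can be seen by joining the controlfile view with the datafile headers; a minimal sketch (LAST_CHANGE# in v$datafile is the stop SCN recorded in the controlfile and is NULL while the database is open):

select df.file#, df.last_change# stop_scn, dh.checkpoint_change# start_scn
from v$datafile df, v$datafile_header dh
where df.file# = dh.file#;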

Instance recovery is completed when Oracle has performed
  1. Cache recovery : replays the contents of the online redo logs of the failed instance.
  2. Transaction recovery : rolls back the uncommitted transactions of the failed instance.

In a nutshell, instance recovery is nothing more than using the contents of the online redo logs to rebuild the database buffer cache to the state it was in before the crash.

At startup the database goes through the nomount and mount phases. If during the checks it is found that instance recovery is needed, then SMON automatically starts the instance recovery process. This phase is known as the roll forward phase and it reinstates all changes to both data blocks as well as undo blocks, for both committed and uncommitted transactions.

Each redo record has the bare minimum information needed to reconstruct a change (block address, new value). In the roll forward phase blocks are brought into the database buffer cache, the changes are applied, and the blocks are then written back to disk.

At the end of the roll forward phase, cache recovery is complete and it is as if the crash never happened. Once the roll forward phase is over, the database is opened for users to connect.

But there are still uncommitted transactions in the database. To complete transaction recovery these uncommitted transactions need to be rolled back, and Oracle does this automatically during the rollback phase. If a user connects and hits a data block that needs to be rolled back, it is rolled back automatically, since the roll forward phase has populated the undo segments that were protecting the uncommitted transactions. This ensures read consistency.


Wednesday, February 20, 2008

running iptraf in background

list help
iptraf -h

detailed statistics
iptraf -d eth0 -L /home/sbx/iptraf.log -B

output
Average rates:
Total: 2.00 kbits/s, 1.90 packets/s
Incoming: 0.00 kbits/s, 0.00 packets/s
Outgoing: 2.00 kbits/s, 1.90 packets/s

Peak total activity: 5.71 kbits/s, 4.40 packets/s
Peak incoming rate: 0.00 kbits/s, 0.00 packets/s
Peak outgoing rate: 5.71 kbits/s, 4.40 packets/s


to stop
kill -USR2 pid

Monday, February 18, 2008

Clone new Cluster Node

Cloning Oracle Clusterware

  1. Archive the Oracle Clusterware home from node A (existing node) and copy it to node B and node C. (node B and C are new nodes) ($CRS_HOME)
  2. Unarchive the home on the new nodes B and C. In case of shared home, unarchive the home only once on either of the nodes.
  3. On nodes B and C go to $CRS_HOME/clone/bin and execute the following command:
  4. perl clone.pl ORACLE_HOME=Path to the Oracle_Home being cloned
    ORACLE_HOME_NAME=Oracle_Home_Name for the Oracle_Home being cloned
    '-On_storageTypeVDSK=2' '-On_storageTypeOCR=2'
    '-O"sl_tableList={node_B:node_B_priv:node_B-vip,
    ode_C:node_C_priv:node_C-vip}"' '-OINVENTORY_LOCATION=inventory location'
  5. On UNIX, navigate to the directory and run orainstRoot.sh on nodes B and C. This populates /etc/oraInst.loc with the location of the Central Inventory. On nodes B and C, go to $CRS_HOME and run root.sh. This brings up the Oracle Clusterware stack on node B. Repeat this step on node C.
  6. To get the remote port information, view the ons.config file in the $CRS_HOME/opmn/conf directory.
  7. On node B, execute the following from $CRS_HOME/bin:
    ./racgons add_config node_B:Remote_Port node_C:Remote_Port
  8. Execute the following command to get the interconnect information. You can use this information in the next step.
    $CRS_HOME/bin/oifcfg iflist -p
  9. Execute oifcfg command as follows:
    oifcfg setif -global interface_name/subnet:public interface_name/subnet:cluster_interconnect

Important Considerations when Cloning Oracle Clusterware

Should have an existing home on the remote node and it should be writable.
Should have executed root.sh and should have run other configuration tools on the source node.
Can also use a response file instead of passing these parameters through the command line.
The tar operation need not be performed as root.
For a shared home need to also pass -cfs parameter on the command line.
Never pass the CLUSTER_NODES and LOCAL_NODE parameters through the command line.

Cloning Real Application Clusters

  1. Archive the Real Application Clusters home from node A (existing) and copy it to node B and node C. (new nodes) ($ORACLE_HOME)
  2. Unarchive the home on the nodes B and C. In case of shared home, unarchive the home only once on either of the nodes.
  3. On nodes B and C go to $ORACLE_HOME/clone/bin and execute the following command:
    perl clone.pl ORACLE_HOME=Path to the Oracle_Home being cloned ORACLE_HOME_NAME=Oracle_Home_Name for the Oracle_Home being cloned '-O"CLUSTER_NODES={node_B,node_C}"' '-OLOCAL_NODE=node_B'
  4. For UNIX, perform the following additional steps: On node B, go to $ORACLE_HOME and run root.sh. Repeat this step on node C. On node B, set the ORACLE_HOME environment variable. Also add $ORACLE_HOME/lib to LD_LIBRARY_PATH.
  5. Run Net Configuration Assistant on node B.
  6. Run Database Configuration Assistant on node B.

Important Considerations when Cloning Real Application Clusters
The order of nodes specified should always be the same on all hosts.
Oracle Clusterware should be installed on the cluster nodes prior to starting Real Application Clusters installation.
The nodes for the Real Application Clusters installation should be a subset of the nodes for the Oracle Clusterware installation.
For a shared home you need to also pass -cfs parameter on the command line.

Oracle Support Status and Alerts

Oracle 10g Release 2 (10.2) Support Status and Alerts

Metalink note ID 316900.1

Oracle 11g Release 1 (11.1) Support Status and Alerts

Metalink note ID 454507.1

Oracle 11g Release 2 (11.2) Support Status and Alerts

Metalink note ID 880782.1

Oracle 12c Release 1 (12.1) Support Status and Alerts

Metalink note ID 1565065.1

Time Zone File patches

Metalink note ID 412160.1


Introduction to Oracle Recommended Patches Metalink note ID 756388.1

Recommended Patches for Oracle databases Metalink note ID 756671.1

Oracle Enterprise Manager Metalink note ID 822485.1

Oracle Fusion Middleware Metalink note ID 859115.1

Oracle Solaris Operating System Metalink note ID 1272947.1

Patch Set Updates Known Issues Notes Metalink note ID 1227443.1

Quick Reference to Patchset Patch Numbers Metalink note ID 753736.1

Release Schedule of Current Database Releases Metalink note ID 742060.1

Certification Information for Oracle Database on Linux x86-64 Metalink note ID 1304727.1

Oracle Database (RDBMS) Releases Support Status Summary Metalink note ID 161818.1

Wednesday, February 13, 2008

Classes of Wait Events

Administrative
Waits resulting from DBA commands that cause users to wait (for example, an index rebuild)

Application
Waits resulting from user application code (for example, lock waits caused by row level locking or explicit lock commands)

Cluster
Waits related to Real Application Clusters resources (for example, global cache resources such as 'gc cr block busy')

Commit
This wait class only comprises one wait event - wait for redo log write confirmation after a commit (that is, 'log file sync')

Concurrency
Waits for internal database resources (for example, latches)

Configuration
Waits caused by inadequate configuration of database or instance resources (for example, undersized log file sizes, shared pool size)

Idle
Waits that signify the session is inactive, waiting for work (for example, 'SQL*Net message from client')

Network
Waits related to network messaging (for example, 'SQL*Net more data to dblink')

Other
Waits which should not typically occur on a system (for example, 'wait for EMON to spawn')

Scheduler
Resource Manager related waits (for example, 'resmgr: become active')

System I/O
Waits for background process IO (for example, DBWR wait for 'db file parallel write')

User I/O
Waits for user IO (for example 'db file sequential read')
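
A wait class summary for the whole instance can be pulled from v$system_event, which carries the wait class of each event from 10g onwards; a minimal sketch:

select wait_class, sum(total_waits) waits, sum(time_waited) time_waited
from v$system_event
group by wait_class
order by time_waited desc;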

Monday, February 11, 2008

Oracle JDBC Logging

Select the appropriate jar for logging:
ojdbc14_g.jar
ojdbc5_g.jar
ojdbc6_g.jar
ojdbc5dms.jar minimal logging
ojdbc6dms.jar minimal logging
ojdbc5dms_g.jar
ojdbc6dms_g.jar

Put it on the classpath and make sure it is the only JDBC jar there.

Using System properties

Create a properties file to set the different levels of tracing, for example:

handlers=java.util.logging.ConsoleHandler,java.util.logging.FileHandler

# default file output is in user's home directory.

java.util.logging.FileHandler.pattern = jdbc.log
java.util.logging.FileHandler.limit = 50000

java.util.logging.FileHandler.count = 1

java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter


# Predefined levels are:

# ALL, SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST, OFF
# Setting this to SEVERE avoids duplicate output from default logger

java.util.logging.ConsoleHandler.level = SEVERE

java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter

oracle.jdbc.level = FINEST

To run the test with tracing enabled, use the following command (have the trace-enabled jars, *_g.jar, in the classpath instead of the optimized JDBC jars):

java -Doracle.jdbc.Trace=true -Djava.util.logging.config.file=<properties_file_location> <program>

Using java.util.logging

import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

import oracle.jdbc.driver.OracleLog;

// set the file handler (FileHandler throws IOException)
FileHandler handler = new FileHandler("oracle.log");

// designate the module to be traced
Logger logger = Logger.getLogger("oracle.jdbc");

// set the tracing level
logger.setLevel(Level.FINE);

// add the file handler to the desired logger
logger.addHandler(handler);

// turn on trace
OracleLog.setTrace(true);

// application code goes here

// turn off trace
OracleLog.setTrace(false);

Programmatically

// compute the ObjectName

String loader= Thread.currentThread().getContextClassLoader().toString().replaceAll("[,=:\"]+", "");

javax.management.ObjectName name= new javax.management.ObjectName("com.oracle.jdbc:type=diagnosability,name="+loader);

// get the MBean server

javax.management.MBeanServer mbs= java.lang.management.ManagementFactory.getPlatformMBeanServer();

// find out if logging is enabled or not

System.out.println("LoggingEnabled = " +mbs.getAttribute(name, "LoggingEnabled"));

// enable logging

mbs.setAttribute(name,new javax.management.Attribute("LoggingEnabled", true));

// disable logging

mbs.setAttribute(name,new javax.management.Attribute("LoggingEnabled", false));



ASMLib on Multipath Disks

The ASMLib configuration file (/etc/sysconfig/oracleasm) contains many configuration variables. The ORACLEASM_SCANORDER variable specifies disks to be scanned first.

The ORACLEASM_SCANEXCLUDE variable specifies the disks that are to be ignored. The variables take a whitespace-separated list of prefix strings to match. In other words, if a disk starts with one of the prefix strings, it matches. For example, the prefix string sd will match all SCSI devices. The disk sda starts with sd. Note that these are not globs. They do not use wildcards. They are simple prefixes. Also note that the /dev/ path is not part of the prefix.


# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER=emcpowera


# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sda sdb sdc sdd"

more on metalink note 309815.1



Useful metalink notes
Configuring Oracle ASMLib on Multipath Disks [ID 309815.1]
How to Partition DM-Multipath Pseudo Devices [ID 470913.1]
How To Setup ASM & ASMLIB On Native Linux Multipath Mapper disks? [ID 602952.1]
Configuring non-raw multipath devices for Oracle Clusterware 11g (11.1.0, 11.2.0) on RHEL5/OL5 [ID 605828.1]
Configuring raw devices (multipath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5 [ID 564580.1]
How to Configure LUNs for ASM Disks using WWID, DM-Multipathing, and ASMLIB on RHEL 5/OEL 5 [ID 1365511.1]

Wednesday, February 6, 2008

Meanings of four digits in *Nix permissions

SUID or setuid: change user ID on execution. If the setuid bit is set, when a file is executed by a user, the process will have the same rights as the owner of the file being executed.
If set, it replaces the "x" in the owner permissions with "s" if the owner has execute permission, or with "S" otherwise.

SGID or setgid: change group ID on execution. Same as above, but the process inherits the rights of the group of the owner of the file on execution. For directories it also may mean that a new file created in the directory will inherit the group of the directory (and not that of the user who created the file).
If set, it replaces the "x" in the group permissions with "s" if the group has execute permission, or with "S" otherwise.


Sticky bit. It was originally used to make a process "stick" in memory after it finished; this usage is now obsolete. Currently its use is system dependent, and it is mostly used to prevent deletion of files belonging to other users in a folder where you have "write" access.
If set, it replaces the "x" in the others permissions with "t" if others have execute permission, or with "T" otherwise.


setuid, setgid, sticky bits are cleared
chmod 0777 test.txt = -rwxrwxrwx test.txt
chmod 0666 test.txt = -rw-rw-rw- test.txt

sticky bit is set
chmod 1777 test.txt = -rwxrwxrwt test.txt
chmod 1666 test.txt = -rw-rw-rwT test.txt

setgid bit is set
chmod 2777 test.txt = -rwxrwsrwx test.txt
chmod 2666 test.txt = -rw-rwSrw- test.txt

setgid and sticky bits are set
chmod 3777 test.txt = -rwxrwsrwt test.txt

setuid bit is set
chmod 4777 test.txt = -rwsrwxrwx test.txt
chmod 4666 test.txt = -rwSrw-rw- test.txt

setuid and sticky bits are set
chmod 5777 test.txt = -rwsrwxrwt test.txt

setuid and setgid bits are set
chmod 6777 test.txt = -rwsrwsrwx test.txt

setuid, setgid, sticky bits are set
chmod 7777 test.txt = -rwsrwsrwt test.txt

Tuesday, February 5, 2008

Cluster in Brief

Major Oracle Clusterware components

  • The Oracle Clusterware Software Components

Cluster Synchronization Services (CSS)

Manages the cluster configuration by controlling which nodes are members of the cluster and by notifying members when a node joins or leaves the cluster. If you are using third-party clusterware, then the css process interfaces with your clusterware to manage node
membership information.

Processes connected to this service are init.cssd and ocssd, which run as the root user, and ocssd.bin, which runs as the oracle user. Failure of this process (ocssd) results in a cluster restart.

Cluster Ready Services (CRS)

The primary program for managing high availability operations within a cluster. Anything that the crs process manages is known as a cluster resource which could be a database, an instance, a service, a Listener, a virtual IP (VIP) address, an application process, and so on. The crs process manages cluster resources based on the resource’s configuration information that is stored in the OCR. This includes start, stop, monitor and failover operations. The crs process generates events when a resource status changes. When you have installed RAC, crs monitors the Oracle instance, Listener, and so on, and automatically restarts these components when a failure occurs. By default, the crs process makes five attempts to restart a resource and then does not make further restart attempts if the resource does not restart.

The process connected with this service is crsd.bin, which runs as the root user. This process restarts automatically upon failure.


Event Management (EVM)

A background process that publishes events that crs creates.

Processes connected with this service are evmd, which runs as root, while evmd.bin and evmlogger run as the oracle user. This process also starts the racgevt process to manage FAN server callouts.


Oracle Notification Service (ONS)

A publish and subscribe service for communicating Fast Application Notification (FAN) events.

The ons process is connected with this service and runs as the oracle user.

RACG

Extends clusterware to support Oracle-specific requirements and complex resources. Runs server callout scripts when FAN events occur.


Process Monitor Daemon (OPROCD)

This process is locked in memory to monitor the cluster and provide I/O fencing. OPROCD performs its check, sleeps, and if the wake-up takes longer than expected, OPROCD resets the processor and reboots the node. An OPROCD failure results in Oracle Clusterware restarting the node. OPROCD uses the hangcheck timer on Linux platforms.

The oprocd process is connected with this service and runs as the root user.


  • The Real Application Clusters Software Components


To ensure that each RAC database instance obtains the block that it needs to satisfy a query or transaction, RAC instances use two processes, the Global Cache Service (GCS) and the Global Enqueue Service (GES). The GCS and GES maintain records of the statuses of each data file and each cached block using a Global Resource Directory (GRD). The GRD contents are distributed across all of the active instances, which effectively increases the size of the System Global Area for a RAC instance.

These RAC-specific processes and the GRD collaborate to enable Cache Fusion.

The RAC-specific processes:

LMS—Global Cache Service Process

LMD—Global Enqueue Service Daemon

LMON—Global Enqueue Service Monitor

LCK0—Instance Enqueue Process
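
The cache fusion traffic these processes handle can be glanced at through the global cache statistics; a minimal sketch using 10g statistic names:

select inst_id, name, value
from gv$sysstat
where name in ('gc cr blocks received', 'gc current blocks received')
order by inst_id, name;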


The Oracle Clusterware Voting Disk and Oracle Cluster Registry

Voting Disk

Manages cluster membership by way of a health check and arbitrates cluster ownership among the instances in case of network failures. RAC uses the voting disk to determine which instances are members of a cluster. The voting disk must reside on shared disk. For high availability, Oracle recommends that you have multiple voting disks. The Oracle Clusterware enables multiple voting disks but you must have an odd number of voting disks, such as three, five, and so on. If you define a single voting disk, then you should use external
mirroring to provide redundancy.


Oracle Cluster Registry (OCR)

Maintains cluster configuration information as well as configuration information about any cluster database within the cluster. The OCR also manages information about processes that the Oracle Clusterware controls. The OCR stores configuration information in a series of key-value pairs within a directory tree structure. The OCR must reside on shared disk that is accessible by all of the nodes in your cluster. The Oracle Clusterware can multiplex the OCR and Oracle recommends that you use this feature to ensure cluster high availability. You can replace a failed OCR online, and you can update the OCR through supported APIs such as Enterprise Manager, the Server Control Utility (SRVCTL), or the Database Configuration Assistant (DBCA).