Friday, January 27, 2012

Linux Root Exploit CVE-2012-0056

A Linux vulnerability has been identified that allows a local user to gain root privileges. It is caused by the "failure of the Linux kernel to properly restrict access to the /proc/&lt;pid&gt;/mem file".

Computerworld has a detailed article about this (the same article is available on CIO).

Red Hat has a support note regarding this vulnerability, along with test code to check whether a system is vulnerable.
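The kernels at risk are mainline 2.6.39 and later (until patched), since that is the release where /proc/&lt;pid&gt;/mem became writable. Checking the running kernel version is a quick first pass before running the test code:
# 2.6.39+ kernels without the fix should be tested further
uname -r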

Resizing VirtualBox and Linux Partitions with GParted

VirtualBox 4 has an option to modify the logical size of a virtual disk after it has been created, without affecting its physical size. The size given to this option is the new overall size of the disk, not an increment: if the current size is 100MB, resizing to 200MB makes the disk 200MB, not 300MB (100 + 200). Once the size is increased it cannot be decreased (yet).
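For example (a sketch; the disk name is illustrative, and --resize takes the size in megabytes):
# grow the disk to 200 MB total -- the value is absolute, not an increment
VBoxManage modifyhd disk.vdi --resize 200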

Current size of the disk
-rw------- 1 root root  18G Jan 27 16:43 rac4.vdi
The size of the disk seen by the OS using this virtual disk
Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Size of the partitions
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              16G   13G  1.8G  88% /
/dev/sda1              99M   12M   83M  13% /boot
tmpfs                1006M     0 1006M   0% /dev/shm
Increase the size with modifyhd option
VBoxManage modifyhd rac4.vdi --resize 40960
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
The physical size is unaffected; the file will grow as the space is used.
-rw------- 1 root root  18G Jan 27 16:45 rac4.vdi
But the size of the disk seen by the OS has changed
Disk /dev/sda: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
The additional space is unallocated, and the Linux partitions are unaffected by the increase.
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              16G   13G  1.8G  88% /
/dev/sda1              99M   12M   83M  13% /boot
tmpfs                1006M     0 1006M   0% /dev/shm
Using the Gnome Partition Editor (GParted) it is possible to resize the Linux partition and allocate the unused space to it, effectively increasing its size.

Download the GParted Live CD ISO and boot the VirtualBox VM from it.

Select the default option and complete the startup.



GParted shows the size of current partitions (15.9 GB) and the unallocated space (20 GB).

Right click on the partition to resize and select the resize/move item from the popup menu.

Resize the partition by either dragging the highlighted box or adjusting the numerical spinners.



Click apply to complete the resizing.



Restart the virtual server and the resized Linux partition is visible.
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              35G   13G   21G  39% /
/dev/sda1              99M   12M   83M  13% /boot
tmpfs                1006M     0 1006M   0% /dev/shm
The size of the new partition is 35 GB (roughly 16 GB previously allocated plus the 20 GB added with GParted).

Connection Leak Detection on Applications Servers

Connection leak detection on 3-tier application servers presents a challenge, as the connecting database user name is the same for all the connections. Therefore, when a connection leak happens it is difficult to detect from the database side unless the use case is known (or the leak started recently, in which case it is probably due to a new deployment). One way is to count the number of sessions sharing the same SQL ID for their previous SQL:
select inst_id, machine, username, prev_sql_id, count(*)
from gv$session
where username = 'ASANGA'
group by inst_id, machine, username, prev_sql_id
order by 1, 2, 3, 4;
The SQL ID with the highest count is a suspect, though this may not always be the case.

Logon triggers are of no use when a connection pool is involved, as the logon trigger doesn't fire when connections are handed out from the pool.

One way to identify the exact point in the code where the leak happens is to populate the CLIENT_INFO column with the class name, method name and line number before handing over the connection. This could be accomplished by overriding the getConnection method.
// frame 0 is the current method; frame 1 is the caller requesting the connection
Throwable throwable = new Throwable();
StackTraceElement[] stackTraceElements = throwable.getStackTrace();
StackTraceElement ste = stackTraceElements[stackTraceElements.length > 1 ? 1 : 0];
CallableStatement clm = con.prepareCall("begin DBMS_APPLICATION_INFO.set_client_info(?); end;");
// client_info column has a limit of varchar2(64); the string is truncated to 63 chars
StringBuilder y = new StringBuilder(ste.getClassName()).append(":").append(ste.getMethodName()).append(":").append(ste.getLineNumber());
clm.setString(1, (y.length() > 63 ? y.substring(0, 63) : y.toString()));
clm.execute();
clm.close();
The above code snippet populates the CLIENT_INFO column with the class name, method name and line number. Querying v$session will show these values:
select username,client_info from v$session where username is not null;

USERNAME CLIENT_INFO
-------- ---------------------------
ASANGA   com.Test : main : 86
ASANGA   com.Test : main : 79
ASANGA   com.Test : main : 79
ASANGA   com.Test : main : 79
ASANGA   com.Test : main : 85
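For context, here is one hypothetical shape for such an override — a delegating DataSource wrapper (a sketch; the class and field names are invented for illustration):
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class TracingDataSource {

    private final DataSource delegate;

    public TracingDataSource(DataSource delegate) {
        this.delegate = delegate;
    }

    public Connection getConnection() throws SQLException {
        Connection con = delegate.getConnection();
        // frame 0 is this method; frame 1 is whoever asked for the connection
        StackTraceElement[] stack = new Throwable().getStackTrace();
        StackTraceElement ste = stack[stack.length > 1 ? 1 : 0];
        String info = ste.getClassName() + ":" + ste.getMethodName() + ":" + ste.getLineNumber();
        CallableStatement clm = con.prepareCall("begin DBMS_APPLICATION_INFO.set_client_info(?); end;");
        // client_info is varchar2(64); truncate to 63 chars
        clm.setString(1, info.length() > 63 ? info.substring(0, 63) : info);
        clm.execute();
        clm.close();
        return con;
    }
}
The application borrows connections through the wrapper instead of the raw DataSource, so every session in v$session carries its borrower's location.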




Update 02 October 2012
There's another way to populate the v$session columns without making a database call: setting end-to-end metrics on the connection. The above code, which makes a call using DBMS_APPLICATION_INFO, could be changed to use end-to-end metrics (this populates the client_identifier column instead of the client_info column used above).
// frame 0 is the current method; frame 1 is the caller requesting the connection
Throwable throwable = new Throwable();
StackTraceElement[] stackTraceElements = throwable.getStackTrace();
StackTraceElement ste = stackTraceElements[stackTraceElements.length > 1 ? 1 : 0];
String metrics[] = new String[OracleConnection.END_TO_END_STATE_INDEX_MAX];
// client_identifier column has a limit of varchar2(64); the string is truncated to 63 chars
StringBuilder y = new StringBuilder(ste.getClassName()).append(":").append(ste.getMethodName()).append(":").append(ste.getLineNumber());
metrics[OracleConnection.END_TO_END_CLIENTID_INDEX] = y.length() > 63 ? y.substring(0, 63) : y.toString();
((OracleConnection) con).setEndToEndMetrics(metrics, (short) 0);
This way there's no need to make an upfront database call to populate the columns in v$session. Besides the end-to-end client ID index, other constants that could be used are
OracleConnection.END_TO_END_ACTION_INDEX :- populate action column
OracleConnection.END_TO_END_MODULE_INDEX :- populate module column
Columns are not populated as soon as the metrics are set on the connection; a SQL query must be executed for the columns to get populated. This is explained in Bug 3735857 - V$SESSION.OSUSER not populated for JDBC clients [ID 3735857.8]
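For example, any throwaway round trip on the connection is enough to flush them (a minimal sketch; java.sql.Statement assumed imported):
Statement stmt = con.createStatement();
stmt.execute("select 1 from dual"); // the metrics set above travel with this round trip
stmt.close();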

Update 05 October 2012
How to Set the value for the column "PROGRAM" of View V$SESSION from a Universal Connection Pool (UCP) [ID 1152523.1] explains a somewhat similar procedure. The main difference is that earlier the information to populate v$session was set at the connection level, whereas in this note, using UCP, it is done at the connection pool level, meaning all connections will carry the same information.

Update 16 January 2013
Besides the method for setting the class name, method name and line number, there must be a mechanism to clear this information when the JDBC connection is closed. This information is not cleared when the connection's close method is called, so it wouldn't otherwise be possible to distinguish connections that were opened but never closed from properly closed ones. OracleConnectionWrapper could be used to wrap the close method and clear the text set in getConnection. Following is an example where the text "closed" is set in the client_info column for closed connections (the wrapper class name here is illustrative).
import java.sql.CallableStatement;
import java.sql.SQLException;
import oracle.jdbc.OracleConnection;
import oracle.jdbc.OracleConnectionWrapper;

public class TrackedConnection extends OracleConnectionWrapper {

    public TrackedConnection(OracleConnection connection) {
        super(connection);
    }

    @Override
    public void close() throws SQLException {
        // overwrite the leak-tracking text before releasing the connection
        CallableStatement clm = super.prepareCall("begin DBMS_APPLICATION_INFO.set_client_info(?); end;");
        clm.setString(1, "closed");
        clm.execute();
        clm.close();
        super.close();
    }
}
Clearing cannot be done using end-to-end metrics, as they require a SQL statement to be executed before the columns get populated. Also, the PL/SQL call (for clearing) and end-to-end metrics (for setting) cannot be used together, as they populate two different columns.

Useful metalink notes
Simple EndToEndMetrics Demonstration [ID 1264780.1]
Bug 3735857 - V$SESSION.OSUSER not populated for JDBC clients [ID 3735857.8]
"No more data to read from socket" is Thrown When End-to-end Metrics is Used [ID 1081275.1]

Thursday, January 19, 2012

2012 January PSU and the System Change Number (SCN) growth bug

With the 2012 January PSU/CPU Oracle has patched the SCN growth bug, the case where a database has a high rate of change for the SCN. Computerworld has a comprehensive article about this issue.

The following Metalink notes provide means to test whether a database is affected by this bug.
Information on the System Change Number (SCN) and how it is used in the Oracle Database [ID 1376995.1]
Installing, Executing and Interpreting output from the "scnhealthcheck.sql" script [ID 1393363.1]
Evidence to collect when reporting "high SCN rate" issues to Oracle Support [ID 1388639.1]
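As a rough illustration of what "headroom" means: per the description in 1376995.1, the soft limit grows at 16,384 SCNs per second from 1 January 1988, so the gap between that limit and the current SCN can be estimated with something like the following (a sketch, not the supplied script):
select ((sysdate - to_date('1988-01-01','YYYY-MM-DD')) * 86400 * 16384
        - dbms_flashback.get_system_change_number) / (86400 * 16384) headroom_days
from dual;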

The scnhealthcheck.sql script mentioned in 1393363.1 can be used to verify whether the database is affected.
SQL> @scnhealthcheck.sql
--------------------------------------------------------------
ScnHealthCheck
--------------------------------------------------------------
Current Date: 2012/01/19 10:39:57
Current SCN:  4165911
Version:      11.2.0.3.0
--------------------------------------------------------------
Result: A - SCN Headroom is good
Apply the latest recommended patches
based on your maintenance schedule
For further information review MOS document id 1393363.1
--------------------------------------------------------------
The recommendations are: if the output is Result: A - SCN Headroom is good, then the system's SCN health is good; apply the recommended patches (756671.1) during the normal maintenance schedule.

If the output is Result: B - SCN Headroom is low, then apply the recommended patches immediately, outside the normal maintenance schedule, and continue to monitor SCN health. Once patched, headroom will increase over time (it could take days or weeks) for the script to report Result: A - SCN Headroom is good.

If the output is Result: C - SCN Headroom is low, then apply the recommended patches immediately, collect the required information listed in 1388639.1 and raise an SR so additional advice can be given.

With this PSU a new hidden parameter called "_external_scn_rejection_threshold_hours" has been introduced in connection with this bug. Once the patch is applied, Oracle recommends setting this to 24 on 10g and 11gR1 systems; that was done here on 11gR1 (11.1.0.7). To set the value to 24:
alter system set "_external_scn_rejection_threshold_hours" = 24 
   comment='Set threshold on dd/mon/yyyy - See MOS Document 1393363.1' 
   scope=spfile ;
On 11gR2 systems this is set to 24 by default once the patch is applied.
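To verify what value the parameter actually ends up with, the usual hidden-parameter lookup against the x$ views can be used (run as SYS; a sketch, not from the MOS notes):
select i.ksppinm name, v.ksppstvl value
from x$ksppi i, x$ksppcv v
where i.indx = v.indx
and i.ksppinm = '_external_scn_rejection_threshold_hours';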
Apart from the test cases mentioned in 1376995.1, an IOUG newsletter mentioned another "way to test is to look at the backups with a restore process to validate SCN values and recoverability which is part of a best practice for database administrators. With these tests and test plans for applying the Patch Set Updates (PSU) or traditional Critical Patch Update patches, the vulnerabilities can be validated with appropriate fixes of the environment."

More clarifications on the issue from Computerworld.

Wednesday, January 18, 2012

dbms_cube_exp.schema_info_imp_beg error while importing with impdp

The following error came up when doing a schema import from one 11gR1 Standard Edition database to another. At the beginning of the import:
ORA-39083: Object type PROCACT_SCHEMA failed to create with error:
ORA-12663: Services required by client not available on the server
ORA-36961: Oracle OLAP is not available.
Failing sql is:
BEGIN 
dbms_cube_exp.schema_info_imp_beg(0, '11.01.00.00.00');
dbms_cube_exp.schema_info_imp_end(0, '11.01.00.00.00');COMMIT; END;
and at the end
ORA-39083: Object type PROCACT_SCHEMA failed to create with error:
ORA-12663: Services required by client not available on the server
ORA-36961: Oracle OLAP is not available.
Failing sql is:
BEGIN 
dbms_cube_exp.schema_info_imp_beg(1, '11.01.00.00.00');
dbms_cube_exp.schema_info_imp_loop(1, '11.01.00.00.00', '<?xml version="1.0" encoding="UTF-16"?>
<Metadata
Version="1.1"
CreateOnly="True">
</Metadata>');
dbms_cube_exp.schema_info_imp_end(1, '11.01.00.00.00');COMMIT; END; 
The import completed without any further errors and the schema objects were available in the imported database. Metalink note DataPump Import (IMPDP) Receives The Errors ORA-39083 PLS-201 identifier DBMS_CUBE_EXP.SCHEMA_INFO_IMP_BEG [ID 789549.1] mentions that this is an ignorable error.

OLAP is not an option available with a Standard Edition database, but it gets added, in an invalid state, when a database is created using one of the database templates provided with Oracle (e.g. the transaction processing template).
Metalink note "How To Find Out If OLAP Is Being Used And How To Remove OLAP [Doc ID 739032.1]" can be used to verify whether OLAP is installed and used (it being used on a Standard Edition system is unlikely). The removal steps listed in this note cannot be run on Standard Edition, as the required files are not installed with Standard Edition.

There's another Metalink note, "How to Remove OLAP From the Standard Edition database. [ID 1362752.1]", whose approach is to drop the OLAPSYS user and run utlrp afterwards. This removes the OLAP Catalog option from the database, but OLAP Analytic Workspace and Oracle OLAP API remain in an invalid state.
SQL> select comp_id, comp_name, version, status
  2         from dba_registry
  3         where comp_name like '%OLAP%';

COMP_ COMP_NAME                 VERSION    STATUS
----- ------------------------- ---------- --------
APS   OLAP Analytic Workspace   11.1.0.7.0 INVALID
XOQ   Oracle OLAP API           11.1.0.7.0 INVALID
Metalink note Remove Invalid OLAP Objects From SYS And OLAPSYS Schemas [ID 565773.1] could be used to resolve any invalid object issues after the removal of the OLAP catalog.

An SR has been raised to find out ways to remove these options from a Standard Edition database.

Useful metalink notes
How To Remove or De-activate OLAP After Migrating From 9i To 10g or 11g [ID 467643.1]
ORA-39127 SYS.DBMS_CUBE_EXP.INSTANCE_EXTENDED_INFO_EXP ORA-44002 On Expdp After Upgrade To 11.2.0.2 [ID 1353491.1]
How To Find Out If OLAP Is Being Used And How To Remove OLAP [Doc ID 739032.1]
How to Remove OLAP From the Standard Edition database. [ID 1362752.1]
Remove Invalid OLAP Objects From SYS And OLAPSYS Schemas [ID 565773.1]
How To Remove Or To Reinstall The OLAP Option To 10g And 11g [ID 332351.1]
DataPump Import(IMPDP) Receives The Errors ORA-39083 PLS-201 identifier DBMS_CUBE_EXP.SCHEMA_INFO_IMP_BEG [ID 789549.1]

Workaround
First of all it must be strenuously stressed that this workaround is not an officially Oracle-supported one (yet! maybe!).

Copied the olap folder ($ORACLE_HOME/olap) from an Enterprise Edition installation into the Standard Edition home and ran the removal scripts mentioned in Doc ID 739032.1.
@?/olap/admin/catnoamd.sql
@?/olap/admin/olapidrp.plb
@?/olap/admin/catnoaps.sql
@?/olap/admin/catnoxoq.sql
Afterwards ran @?/rdbms/admin/utlrp, and the execution didn't show any errors. But three SYS objects related to OLAP came up as invalid:
SQL> select object_name,object_type from dba_objects where owner='SYS' and status <> 'VALID';

OBJECT_NAME               OBJECT_TYPE
------------------------- -------------
INTERACTIONEXECUTE        FUNCTION
XOQ_VALIDATE              PROCEDURE
CWM2_OLAP_INSTALLER       PACKAGE BODY
Dropped them and ran utlrp again; still no errors.
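The drop statements, roughly (run as SYS; inferred from the object names and types listed above):
drop function sys.interactionexecute;
drop procedure sys.xoq_validate;
drop package body sys.cwm2_olap_installer;
After this, the OLAP options that had remained invalid in dba_registry are shown as removed: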
SQL> select comp_id, comp_name, version, status
  2         from dba_registry
  3         where comp_name like '%OLAP%';

COMP_ COMP_NAME                 VERSION    STATUS
----- ------------------------- ---------- --------
APS   OLAP Analytic Workspace   11.1.0.7.0 REMOVED
XOQ   Oracle OLAP API           11.1.0.7.0 REMOVED

Update 02 February 2012
The outcome from the SR was that these options were never intended to be in SE (Standard Edition) but can get in through imp or by using a template that has some EE (Enterprise Edition) components (which was the case here). So there's no "official" way to remove these components from SE; a custom approach is required, since the scripts provided to remove these components from EE are not even installed on SE. Therefore it isn't an issue if those scripts are copied from EE to SE to remove the unsupported components.

Related post : ORA-39127: unexpected error from call to export_string :=SYS.DBMS_CUBE_EXP.SCHEMA_INFO_EXP while Exporting

Running Opatch throws up org.xml.sax.SAXParseException: : XML-20108: (Fatal Error) Start of root element expected

The server was hosting multiple Oracle homes: 10gR2, 11gR1 and 11gR2. It is primarily used for testing PSU patches before applying them to production systems. There hadn't been any new installation of Oracle homes, and the inventory had been without any errors.

Trying to apply the January PSU threw up the following error.
Oracle Interim Patch Installer version 11.2.0.1.9
Copyright (c) 2011, Oracle Corporation.  All rights reserved.

Oracle Home       : /opt/app/oracle/product/11.2.0/ent11.2.0.3
Central Inventory : /opt/app/oraInventory
   from           : /opt/app/oracle/product/11.2.0/ent11.2.0.3/oraInst.loc
OPatch version    : 11.2.0.1.9
OUI version       : 11.2.0.3.0
Log file location : /opt/app/oracle/product/11.2.0/ent11.2.0.3/cfgtoollogs/opatch/opatch2012-01-18_12-16-37PM.log

org.xml.sax.SAXParseException: : XML-20108: (Fatal Error) Start of root element expected.
        at oracle.xml.parser.v2.XMLError.flushErrorHandler(XMLError.java:415)
        at oracle.xml.parser.v2.XMLError.flushErrors1(XMLError.java:284)
        at oracle.xml.parser.v2.NonValidatingParser.parseRootElement(NonValidatingParser.java:331)
        at oracle.xml.parser.v2.NonValidatingParser.parseDocument(NonValidatingParser.java:293)
        at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:209)
        at oracle.sysman.oii.oiii.OiiiOracleHomeInfoXMLReader.updateHomeProperties(OiiiOracleHomeInfoXMLReader.java:86)
        at oracle.sysman.oii.oiii.OiiiInstallInventory.updateOracleHomesProperties(OiiiInstallInventory.java:803)
        at oracle.sysman.oii.oiii.OiiiInstallInventory.updateOracleHomesProperties(OiiiInstallInventory.java:781)
        at oracle.sysman.oii.oiii.OiiiInstallInventory.readHomes(OiiiInstallInventory.java:737)
        at oracle.sysman.oii.oiii.OiiiInstallAreaControl.loadPartialInstallInv(OiiiInstallAreaControl.java:776)
        at oracle.sysman.oii.oiii.OiiiInstallAreaControl.initInstallInv(OiiiInstallAreaControl.java:821)
        at oracle.sysman.oii.oiii.OiiiInstallAreaControl.loadInstallInventory(OiiiInstallAreaControl.java:592)
        at oracle.sysman.oii.oiii.OiiiInstallAreaControl.initAreaControl(OiiiInstallAreaControl.java:1977)
        at oracle.sysman.oii.oiii.OiiiInstallAreaControl.initAreaControl(OiiiInstallAreaControl.java:1930)
        at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:301)
        at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:240)
        at oracle.sysman.oii.oiic.OiicStandardInventorySession.initSession(OiicStandardInventorySession.java:189)
        at oracle.opatch.OUIInventorySession.initSession(OUIInventorySession.java:63)
        at oracle.opatch.OUISessionManager.setupSession(OUISessionManager.java:150)
        at oracle.opatch.OUISessionManager.lockCentralInventory(OUISessionManager.java:267)
        at oracle.opatch.OUISessionManager.instantiate(OUISessionManager.java:87)
        at oracle.opatch.OUISessionManager.updateOPatchEnvironment(OUISessionManager.java:661)
        at oracle.opatch.InventorySessionManager.updateOPatchEnvironment(InventorySessionManager.java:91)
        at oracle.opatch.OPatchSession.main(OPatchSession.java:1661)
        at oracle.opatch.OPatch.main(OPatch.java:653)
No matter which Oracle home was chosen, the same error came up.

Two metalink notes related to the issue are
OPatch Is Failing With "org.xml.sax.SAXParseException" XML-20108: (Fatal Error) [ID 1365569.1]
Opatch command throws warning:org.xml.sax.SAXParseException: : XML-20108: (Fatal Error) Start of root element expected. [ID 1344580.1]

The issue mentioned in the first Metalink note isn't the case here, as the OPatch version matches the Oracle home version.
The second one mentions that this error is shown if the $ORACLE_HOME/inventory/ContentsXML/oraclehomeproperties.xml file is 0 bytes. But the size of this file in the Oracle home wasn't 0 bytes:
ls -l $ORACLE_HOME/inventory/ContentsXML/oraclehomeproperties.xml
-rw-rw---- 1 oracle oinstall 526 Nov 16 14:20 /opt/app/oracle/product/11.2.0/ent11.2.0.3/inventory/ContentsXML/oraclehomeproperties.xml
So the cause mentioned in the second Metalink note also seemed not to apply. But checking all the homes on the server, it was found that one Oracle home (running 11gR1) had this file at 0 bytes. There was a cloned copy of this home in which the file wasn't 0 bytes; copying it into this Oracle home's inventory/ContentsXML resolved the issue.

Therefore, when there are multiple Oracle homes on a server and the inventory/ContentsXML/oraclehomeproperties.xml file gets corrupted in at least one of them, OPatch will throw the above XML parse error.
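When hunting for the culprit across many homes, something like the following can sweep for zero-byte copies of the file (the base path here is this server's; adjust to suit):
find /opt/app/oracle/product -name oraclehomeproperties.xml -size 0 -ls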

Tuesday, January 17, 2012

Bitmap Index Locking

Bitmap indexes are useful when the cardinality of a column is low and multiple such columns need to be combined. They are primarily designed for use in data warehouse environments where DML activity is infrequent. A bitmap index can introduce performance overheads if used in an OLTP system where DML activity is frequent. The reason is the way rows are locked to make changes to the bitmap index when a row in the table is changed or inserted.

This post shows these locks in action. Bitmap indexes are only available with Oracle Enterprise Edition.

Set up the initial environment
create table y (a number, b varchar2(20));

begin
    for i in 1 .. 10
    loop
        insert into y values(mod(i,5),dbms_random.string('X',10));
    end loop;
end;
/

select DBMS_ROWID.ROWID_OBJECT(rowid) "OBJECT",
       DBMS_ROWID.ROWID_RELATIVE_FNO(rowid) "FILE",
       DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) "BLOCK",
       DBMS_ROWID.ROWID_ROW_NUMBER(rowid) "ROW", rowid,y.* from y order by 5,6;
    
    OBJECT       FILE      BLOCK        ROW ROWID                       A B
---------- ---------- ---------- ---------- ------------------ ---------- ----------
     93735          4        583          0 AAAW4nAAEAAAAJHAAA          1 1ML8L3NZBE
     93735          4        583          1 AAAW4nAAEAAAAJHAAB          2 E79SCS1B26
     93735          4        583          2 AAAW4nAAEAAAAJHAAC          3 OAMU9H2Y7B
     93735          4        583          3 AAAW4nAAEAAAAJHAAD          4 OAIF1LZZLE
     93735          4        583          4 AAAW4nAAEAAAAJHAAE          0 FU68J1S48B
     93735          4        583          5 AAAW4nAAEAAAAJHAAF          1 7WD1OB3O6T
     93735          4        583          6 AAAW4nAAEAAAAJHAAG          2 FC9UTGS8RY
     93735          4        583          7 AAAW4nAAEAAAAJHAAH          3 7H58F7BEZH
     93735          4        583          8 AAAW4nAAEAAAAJHAAI          4 S9UKCCLM38
     93735          4        583          9 AAAW4nAAEAAAAJHAAJ          0 EEBV89AYHA

10 rows selected.

create bitmap index aidx on y(a) compute statistics;
The values shown for column B will be different in your run; replace them accordingly in the statements that follow.

A bitmap index entry contains the column A value as the index key, a low rowid and high rowid range encompassing all the rows covered by that entry, and the bitmap for that rowid range. See the bitmap index storage section of the documentation.

The above bitmap index could be conceptually visualised as
A       low rowid           high rowid          bitmap
1,  AAAW4nAAEAAAAJHAAA,  AAAW4nAAEAAAAJHAAJ,  1000010000
2,  AAAW4nAAEAAAAJHAAA,  AAAW4nAAEAAAAJHAAJ,  0100001000
3,  AAAW4nAAEAAAAJHAAA,  AAAW4nAAEAAAAJHAAJ,  0010000100
4,  AAAW4nAAEAAAAJHAAA,  AAAW4nAAEAAAAJHAAJ,  0001000010
0,  AAAW4nAAEAAAAJHAAA,  AAAW4nAAEAAAAJHAAJ,  0000100001
Now, to see how the locking takes place, open two SQL*Plus sessions and set autocommit off on one of them. Run a DML statement to change the column A value from 1 to 2 for a single row.
SQL> update y set a = 2 where b  = '1ML8L3NZBE';

1 row updated.
As a result of this update, two bitmap index entries will be locked: the index entry for 1 (to flip the row's bit from 1 to 0) and the index entry for 2 (to flip the row's bit from 0 to 1).

Because of this, trying to update column A from 1 to any other value for another row (1 ---> 0,2,3,4), or from any other value to 1 (0,2,3,4 ---> 1), from a second session will result in the second session being blocked, waiting for the first session to commit.

Similarly, any other row that tries to change its column A value to or from 2 will also block and wait. That is, 0,1,3,4 ---> 2 and 2 ---> 0,1,3,4 will block and wait.

To test, run the following updates from the second session.
-- the other row with 1 in column A will block and wait;
-- the wait will appear as enq: TX - row lock contention
update y set a = 4 where b = '7WD1OB3O6T';

-- a row with 0 in column A trying to update to 1 blocks
update y set a = 1 where b = 'EEBV89AYHA';

-- another row with 2 in column A trying to update blocks and waits
update y set a = 3 where b = 'FC9UTGS8RY'; -- (could have used E79SCS1B26 as well)

-- a row with 0 in column A trying to update to 2 blocks and waits
update y set a = 2 where b = 'EEBV89AYHA';
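While one of these statements is blocked, the wait can be observed from a third session using standard v$session columns:
select sid, event, blocking_session, seconds_in_wait
from v$session
where event = 'enq: TX - row lock contention';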
Trying to delete any of the other rows with values 1 (B = 7WD1OB3O6T) or 2 (B = E79SCS1B26 or B = FC9UTGS8RY) will also block and wait. But it is possible to insert rows with values 1 or 2 for column A.

This is why it is considered bad practice to use bitmap indexes on tables with frequent DML activities.

Rollback the update statement in the first session and execute an insert statement
insert into y values (1,'ABCDEF');
Now trying to update or delete any row with value 1 in column A will block and wait.

The above locking takes place as long as the rows in question fall within the same index entry's low rowid and high rowid range. If the updated rows' rowids are in two different index entries, then the updates and deletes go through without blocking. Taking the above example, if new index entries are created in the bitmap index (represented by the rowid range starting with BBBW) as below
A       low rowid           high rowid          bitmap
1,  AAAW4nAAEAAAAJHAAA,  AAAW4nAAEAAAAJHAAJ,  1000010000
2,  AAAW4nAAEAAAAJHAAA,  AAAW4nAAEAAAAJHAAJ,  0100001000
3,  AAAW4nAAEAAAAJHAAA,  AAAW4nAAEAAAAJHAAJ,  0010000100
4,  AAAW4nAAEAAAAJHAAA,  AAAW4nAAEAAAAJHAAJ,  0001000010
0,  AAAW4nAAEAAAAJHAAA,  AAAW4nAAEAAAAJHAAJ,  0000100001
1,  BBBW4nAAEAAAAJHAAA,  BBBW4nAAEAAAAJHAAJ,  1000010000
2,  BBBW4nAAEAAAAJHAAA,  BBBW4nAAEAAAAJHAAJ,  0100001000
3,  BBBW4nAAEAAAAJHAAA,  BBBW4nAAEAAAAJHAAJ,  0010000100
4,  BBBW4nAAEAAAAJHAAA,  BBBW4nAAEAAAAJHAAJ,  0001000010
0,  BBBW4nAAEAAAAJHAAA,  BBBW4nAAEAAAAJHAAJ,  0000100001
then when a row is updated in the 1,AAAW4nAAEAAAAJHAAA-AAAW4nAAEAAAAJHAAJ rowid range, it is possible to update another row in the 1,BBBW4nAAEAAAAJHAAA-BBBW4nAAEAAAAJHAAJ rowid range without being blocked. This can be tested by inserting many more rows into the table, which results in additional index entries being created. Run the following from a session different from the two already open:
begin
    for i in 1 .. 90000
    loop
        insert into y values(mod(i,5),dbms_random.string('X',10));
    end loop;
end;
/
commit;

select DBMS_ROWID.ROWID_OBJECT(rowid) "OBJECT",
           DBMS_ROWID.ROWID_RELATIVE_FNO(rowid) "FILE",
           DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) "BLOCK",
           DBMS_ROWID.ROWID_ROW_NUMBER(rowid) "ROW", rowid,y.* from y where a = 1 order by 5,6;
     
...
...
   OBJECT       FILE      BLOCK        ROW ROWID                       A B
---------- ---------- ---------- ---------- ------------------ ---------- ----------
     93735          4       5375        356 AAAW4nAAEAAABT/AFk          1 PQCAAA3OI7
     93735          4       5375        361 AAAW4nAAEAAABT/AFp          1 6G20GPXCSL
     93735          4       5375        366 AAAW4nAAEAAABT/AFu          1 XH9BDPW8XL
     93735          4       5375        371 AAAW4nAAEAAABT/AFz          1 WFAI39JSNU
     93735          4       5375        376 AAAW4nAAEAAABT/AF4          1 GXE3899NJE
     93735          4       5375        381 AAAW4nAAEAAABT/AF9          1 X8X0I0PMRB

18002 rows selected.
Running
update y set a = 2 where b  ='GXE3899NJE';
on session 1 and
update y set a = 2 where b  ='X8X0I0PMRB';
on session two will result in session 2 being blocked. It is safe to presume (though it may not always be the case) that those two rowids are in the same range. But
update y set a = 2 where b  ='GXE3899NJE';
and
update y set a = 2 where b  = '1ML8L3NZBE';
will not block each other, presuming these rowids are in different index entries.

Tuesday, January 10, 2012

KFOD - ASM Tool

The KFOD tool, available in the ASM home (Grid home) in 11gR2 and in the Oracle home in other versions (11gR1 and 10gR2), gives a view of the disks from the operating system's point of view. This is useful in diagnosing disk access errors in ASM.

Options available with KFOD:
kfod help=y
_asm_a/llow_only_raw_disks              KFOD allow only raw devices [_asm_allow_only_raw_disks=TRUE/(FALSE)]
_asm_l/ibraries         ASM Libraries[_asm_libraries=lib1,lib2,...]
_asms/id                ASM Instance[_asmsid=sid]
a/sm_diskstring         ASM Diskstring [asm_diskstring=discoverystring, discoverystring ...]
c/luster                KFOD cluster [cluster=TRUE/(FALSE)]
db/_unique_name         db_unique_name for ASM instance[db_unique_name=dbname]
di/sks          Disks to discover [disks=raw,asm,all]
ds/cvgroup              Include group name [dscvgroup=TRUE/(FALSE)]
g/roup          Disks in diskgroup [group=diskgroup]
h/ostlist               hostlist[hostlist=host1,host2,...]
metadata_a/usize                AU Size for Metadata Size Calculation
metadata_c/lients               Client Count for Metadata Size Calculation
metadata_d/isks         Disk Count for Metadata Size Calculation
metadata_n/odes         Node Count for Metadata Size Calculation
metadata_r/edundancy            Redundancy for Metadata Size Calculation
n/ohdr          KFOD header suppression [nohdr=TRUE/(FALSE)]
o/p             KFOD options type [OP=DISKS/CANDIDATES/MISSING/GROUPS/INSTS/VERSION/CLIENTS/RM/RMVERS/DFLTDSTR/GPNPDSTR/METADATA/ALL]
p/file          ASM parameter file [pfile=parameterfile]
s/tatus         Include disk header status [status=TRUE/(FALSE)]
v/erbose                KFOD verbose errors [verbose=TRUE/(FALSE)]
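For example, to list all the disks KFOD can discover together with their header status (using the options above):
kfod disks=all status=true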

More in Metalink note ASM tool to check access to ASM disks [ID 1359873.1]