12/16/11

WHAT HAPPENS TO AN INDEX ENTRY WHEN WE DELETE/UPDATE THE CORRESPONDING ROW IN THE TABLE ??



create a table and index:

SQL> create table test_table (id number, name varchar2(20));
Table created.


SQL> insert into test_table values (1, 'JUNK1');
1 row created.

SQL> insert into test_table values (2, 'JUNK2');
1 row created.

SQL> commit;
Commit complete.

SQL> create index test_table_idx on test_table (id);
Index created.


SQL> select object_id from dba_objects where
     object_name='TEST_TABLE_IDX';

OBJECT_ID
----------
  73515

Now dump the Index structure:

SQL> alter session set events
     'immediate trace name treedump level 73515';
Session altered.


----- begin tree dump
leaf: 0x10007d3 16779219 (0: nrow: 2 rrow: 2)
----- end tree dump

nrow: number of all index entries, including entries marked as deleted

rrow: number of current (non-deleted) index entries

DBA: 16779219 (data block address of the leaf block)


OBSERVATION: there is only one leaf block (which is also the root block), and it has two row entries (no deleted entries, since nrow = rrow).


Dump the leafblock:

We have only one leaf block, which is also the root block, and the
root block is always the block right after the segment header block.

SQL> select header_file, header_block from dba_segments where
     segment_name='TEST_TABLE_IDX';

HEADER_FILE HEADER_BLOCK
----------- ------------
          4         2002

So root block# = 2002 + 1 = 2003

Or we can find the file and block numbers from the DBA (which we got from the treedump):

SQL> select DBMS_UTILITY.DATA_BLOCK_ADDRESS_FILE(16779219),
     DBMS_UTILITY.DATA_BLOCK_ADDRESS_BLOCK(16779219) from dual;

File#      Block#
---------- ----------
         4       2003
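The same 32-bit split that DBMS_UTILITY performs can be sketched by hand: on a smallfile tablespace, a DBA keeps the relative file number in its top 10 bits and the block number in the low 22 bits. An illustrative sketch (Python used here purely for illustration):

```python
def dba_to_file_block(dba):
    """Split a 32-bit data block address into (relative file#, block#).

    Smallfile tablespaces use the top 10 bits for the relative file
    number and the low 22 bits for the block number.
    """
    return dba >> 22, dba & 0x3FFFFF

# The DBA from the treedump:
print(dba_to_file_block(16779219))  # (4, 2003), matching DBMS_UTILITY
```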

SQL> alter system dump datafile 4 block 2003;
System altered.

Leaf block dump
===============
header address 3086706276=0xb7fb6664
kdxcolev 0
KDXCOLEV Flags = - - -
kdxcolok 0
kdxcoopc 0x80: opcode=0: iot flags=--- is converted=Y
kdxconco 2
kdxcosdc 0
kdxconro 2
kdxcofbo 40=0x28
kdxcofeo 8008=0x1f48
kdxcoavs 7968
kdxlespl 0
kdxlende 0
kdxlenxt 0=0x0
kdxleprv 0=0x0
kdxledsz 0
kdxlebksz 8032
row#0[8020] flag: ------, lock: 0, len=12
col 0; len 2; (2):  c1 02              --> value of indexed column
col 1; len 6; (6):  01 00 07 cd 00 00  --> rowid appended
row#1[8008] flag: ------, lock: 0, len=12
col 0; len 2; (2):  c1 03              --> value of indexed column
col 1; len 6; (6):  01 00 07 cd 00 01  --> rowid appended
----- end of leaf block dump -----
End dump data blocks tsn: 4 file#: 4 minblk 2003 maxblk 2003
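The col 0 bytes above are Oracle's internal NUMBER format. For small positive integers, the first byte is a base-100 exponent biased by 193 and each following mantissa byte is a base-100 digit plus 1. A simplified decoder (positive integers only, illustrative only):

```python
def decode_oracle_number(raw):
    """Decode Oracle's internal NUMBER format for positive integers.

    The first byte is the base-100 exponent biased by 193; each
    following mantissa byte is a base-100 digit plus 1.
    """
    exp = raw[0] - 193
    value = 0
    for i, b in enumerate(raw[1:]):
        value += (b - 1) * 100 ** (exp - i)
    return value

print(decode_oracle_number([0xC1, 0x02]))  # 1  (col 0 of row#0)
print(decode_oracle_number([0xC1, 0x03]))  # 2  (col 0 of row#1)
```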


OBSERVATION: the leaf block contains two row entries (one for each row in the table).


DELETE A ROW FROM TABLE:

SQL> delete from test_table where id=1;
1 row deleted.

SQL> commit;
Commit complete.

SQL>  alter session set events
      'immediate trace name treedump level  73515';
Session altered.

----- begin tree dump
leaf: 0x10007d3 16779219 (0: nrow: 2 rrow: 1)
----- end tree dump


The dump shows that we have only one current entry (rrow = 1),
and that the total number of entries, including those marked as deleted, is 2 (nrow = 2).

 --The index still has two row entries even after we deleted a row from the table, i.e. the corresponding index entry is only marked as DELETED but not physically removed.
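If you want to pull nrow/rrow out of a treedump trace programmatically, a leaf line like the one above can be parsed with a small regex; the deleted-but-not-purged entries are simply nrow - rrow. An illustrative sketch:

```python
import re

def parse_treedump_leaf(line):
    """Extract (nrow, rrow, deleted) from a treedump leaf line.

    Entries still present in the block but marked as deleted
    are nrow - rrow.
    """
    m = re.search(r'nrow:\s*(\d+)\s+rrow:\s*(\d+)', line)
    nrow, rrow = int(m.group(1)), int(m.group(2))
    return nrow, rrow, nrow - rrow

line = "leaf: 0x10007d3 16779219 (0: nrow: 2 rrow: 1)"
print(parse_treedump_leaf(line))  # (2, 1, 1): one entry marked as deleted
```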

Let's dump the leaf block:


SQL> alter system dump datafile 4 block 2003;
System altered


Leaf block dump
===============
header address 3086706276=0xb7fb6664
…………………………………………………………………………………………….
row#0[8020] flag: ---D--, lock: 2, len=12
col 0; len 2; (2):  c1 02
col 1; len 6; (6):  01 00 07 cd 00 00  --> deleted rowid
row#1[8008] flag: ------, lock: 0, len=12
col 0; len 2; (2):  c1 03
col 1; len 6; (6):  01 00 07 cd 00 01  --> current existing rowid
----- end of leaf block dump -----
End dump data blocks tsn: 4 file#: 4 minblk 2003 maxblk 2003


Observations:

The first row entry is still there, but it is marked as deleted (flag 'D').
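The 6-byte col 1 value is the restricted rowid: a 4-byte relative DBA (the same 10/22-bit split used for the treedump DBA) followed by a 2-byte row slot. A sketch decoding the deleted entry's rowid under that layout (illustrative only):

```python
def decode_index_rowid(raw):
    """Split the 6-byte restricted rowid stored in an index entry
    into (relative file#, block#, row slot)."""
    rdba = int.from_bytes(bytes(raw[:4]), 'big')  # 4-byte relative DBA
    slot = int.from_bytes(bytes(raw[4:]), 'big')  # 2-byte row number
    return rdba >> 22, rdba & 0x3FFFFF, slot

# col 1 of the deleted entry: 01 00 07 cd 00 00
print(decode_index_rowid([0x01, 0x00, 0x07, 0xCD, 0x00, 0x00]))
# (4, 1997, 0): file 4, table block 1997, row slot 0
```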


Now let's see what happens if we insert a new row into the table:


SQL> insert into test_table values (3, 'JUNK2');
1 row created.

SQL> commit;
Commit complete.


SQL> alter system dump datafile 4 block 2003;
System altered


Leaf block dump
===============
header address 3086706276=0xb7fb6664
…………………………………………………………………………………………….
row#0[8008] flag: ------, lock: 0, len=12
col 0; len 2; (2):  c1 03
col 1; len 6; (6):  01 00 07 cd 00 01  --> old remaining rowid
row#1[7996] flag: ------, lock: 2, len=12
col 0; len 2; (2):  c1 04
col 1; len 6; (6):  01 00 07 cd 00 02  --> new rowid
----- end of leaf block dump -----
End dump data blocks tsn: 4 file#: 4 minblk 2003 maxblk 2003


Observation:

As soon as a new row entry was added to the leaf block, the entry that was previously marked as DELETED was physically removed from the leaf block.

So row entries that are marked as deleted are physically removed when a new row is inserted into the leaf block.
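The observed behaviour (DELETE only marks an entry; a later INSERT purges the marked ones) can be modelled with a toy leaf block. This is a simplified illustration of the mechanism, not Oracle's actual code:

```python
class LeafBlock:
    """Toy model of the observed behaviour: DELETE only flags an
    entry, and a later INSERT purges the flagged entries first."""
    def __init__(self):
        self.entries = []  # list of (key, deleted_flag)

    def insert(self, key):
        # Purge entries previously marked as deleted, then add the new one.
        self.entries = [(k, d) for k, d in self.entries if not d]
        self.entries.append((key, False))

    def delete(self, key):
        # Only set the deleted flag; keep the entry in place.
        self.entries = [(k, d or k == key) for k, d in self.entries]

    def stats(self):
        rrow = sum(1 for _, d in self.entries if not d)
        return len(self.entries), rrow  # (nrow, rrow)

blk = LeafBlock()
blk.insert(1); blk.insert(2)
blk.delete(1)
print(blk.stats())  # (2, 1): entry kept, only marked as deleted
blk.insert(3)
print(blk.stats())  # (2, 2): the marked entry was purged by the insert
```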


This behaviour is also confirmed by dumping the tree structure:

----- begin tree dump
leaf: 0x10007d3 16779219 (0: nrow: 2 rrow: 2)  --> no more deleted entries
----- end tree dump




WHAT HAPPENS WHEN A ROW IS UPDATED ??

Now we know that when rows are deleted, the corresponding index row entries are initially only marked as DELETED, and that any new row entries inserted into the leaf block physically remove the entries previously marked as deleted.

But what happens to the index row entries when corresponding rows are updated ??

SQL> select * from test_table;

        ID NAME
---------- --------------------
         2 JUNK2
         3 JUNK2

SQL> alter system dump datafile 4 block 2003;


Leaf block dump
===============
header address 3086706276=0xb7fb6664
…………………………………………………………………………………………….
row#0[8008] flag: ------, lock: 0, len=12
col 0; len 2; (2):  c1 03
col 1; len 6; (6):  01 00 07 cd 00 01 
row#1[7996] flag: ------, lock: 2, len=12
col 0; len 2; (2):  c1 04
col 1; len 6; (6):  01 00 07 cd 00 02     
----- end of leaf block dump -----
End dump data blocks tsn: 4 file#: 4 minblk 2003 maxblk 2003



Now update the row:

SQL> update test_table set id=4 where name='JUNK2';
1  row updated.

SQL> commit;
Commit complete.

Leaf block dump
===============
header address 3086706276=0xb7fb6664
…………………………………………………………………………………………….
row#0[8008] flag: ------, lock: 0, len=12
col 0; len 2; (2):  c1 03
col 1; len 6; (6):  01 00 07 cd 00 01
row#1[7996] flag: ---D--, lock: 2, len=12   --> entry marked as deleted
col 0; len 2; (2):  c1 04
col 1; len 6; (6):  01 00 07 cd 00 02
row#2[7984] flag: ------, lock: 2, len=12   --> new entry added
col 0; len 2; (2):  c1 05
col 1; len 6; (6):  01 00 07 cd 00 02
----- end of leaf block dump -----
End dump data blocks tsn: 4 file#: 4 minblk 2003 maxblk 2003


Observations: Even for updates, Oracle follows the same procedure as for deletes.

i.e. it marks the old entry as DELETED and inserts a new entry for the updated value. If any further rows are inserted into the same leaf block, the entries previously marked as deleted are physically removed.
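The same toy-model view works for the update: mark the old entry as deleted and append a new entry carrying the same rowid. Again an illustration of the mechanism, not Oracle internals:

```python
def update_entry(entries, old_key, new_key, rowid):
    """Model of the observed update behaviour: mark the old index
    entry as deleted and append a new entry with the same rowid."""
    entries = [(k, r, d or k == old_key) for k, r, d in entries]
    entries.append((new_key, rowid, False))
    return entries

# (key, rowid slot, deleted?) entries before the update:
entries = [(2, 1, False), (3, 2, False)]
entries = update_entry(entries, old_key=3, new_key=4, rowid=2)
nrow = len(entries)
rrow = sum(1 for *_, d in entries if not d)
print(nrow, rrow)  # 3 2, matching the treedump after the update
```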

It can be confirmed by treedump also:

----- begin tree dump
leaf: 0x10007d3 16779219 (0: nrow: 3 rrow: 2)
----- end tree dump


Current rows (rrow) = 2
Total rows including those marked as deleted (nrow) = 3


Once new rows are inserted into the same leaf block, Oracle physically removes the row entries previously marked as deleted.


8/28/11

"root.sh" failed on the second node during the installation of 11.2.0.2 Grid Infrastructure

While installing 11.2.0.2 Grid Infrastructure, root.sh succeeded on the first node but failed on the second node (node2) with the following errors:

CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node prittoprfdb1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Failed to start Oracle Clusterware stack
Failed to start Cluster Synchorinisation Service in clustered mode at /u01/app/crs/11.2.0.2/crs/install/crsconfig_lib.pm line 1016.
/u01/app/crs/11.2.0.2/perl/bin/perl -I/u01/app/crs/11.2.0.2/perl/lib -I/u01/app/crs/11.2.0.2/crs/install /u01/app/crs/11.2.0.2/crs/install/rootcrs.pl execution failed



There may be a number of different reasons for root.sh to fail on the second node, but in my case it failed due to a "MULTICASTING BUG".


What is Multicasting, and why do we need it?


Oracle introduced a new feature called "Redundant Interconnect Usage" which provides redundancy for the interconnect without using any external NIC bonding. Oracle provides the redundancy internally if you specify two private interfaces at the time of installation of 11.2.0.2 Grid Infrastructure.


To use this new feature, multicasting must be enabled on the private interfaces and also on the switches used for the interconnect.


Multicast-based communication on the private interconnect is used to establish communication with the peers in the cluster on each startup of the stack on a node. Once the connection with the peers has been established, the communication is switched back to unicast.


In the CSSD log you will find errors like the following:


2010-09-16 23:13:14.862: [GIPCHGEN][1107937600] gipchaNodeCreate: adding new node 0x2aaab408d4a0 { host 'node1', haName 'CSS_ttoprf10cluster', srcLuid 54d7bb0e-ef4a0c7e, dstLuid 00000000-00000000 numInf 0, contigSeq 0, lastAck 0, lastValidAck 0, sendSeq [0 : 0], createTime 9563084, flags 0x0 }


2010-09-16 23:13:15.839: [ CSSD][1087465792]clssnmvDHBValidateNCopy: node 1, node1, has a disk HB, but no network HB, DHB has rcfg 180134562, wrtcnt, 8627, LATS 9564064, lastSeqNo 8624, uniqueness 1284701023, timestamp 1284703995/10564774




Solution:


Apply patch 9974223 on both nodes and run root.sh again on node2.



ORA-00210, ORA-00202, ORA-17503, ORA-15001, ORA-27140, ORA-27300-27303 while installing a RAC database


After successfully installing GI, I tried to install a RAC database but got the following error message:

ORA-00210: cannot open control file
ORA-00202: error in writing''+RECODG/utsdb/controlfile/current.256.732754521''
ORA-17503: ksfdopn: 2 Failed to open file +RECODG/utsdb/controlfile/current.256.732754521
ORA-15001: diskgroup "RECODG" does not exist or is not mounted
ORA-15055: unable to connect to ASM instance
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_euid failed with status: 1
ORA-27301: OS failure message: Not owner
ORA-27302: failure occurred at: skgpwinit5
ORA-27303: additional information: startup euid = 100 (grid), current euid = 101 (oracle)


The error message is a little confusing because it complains about ASM, but in fact ASM is working fine; the real problem is the permissions on the "oracle" executable.

Cause
The issue is caused by wrong permissions of the GI_HOME/bin/oracle executable.

In one case, customer changed the file permission of GI_HOME/bin/oracle from "-rwsr-s--x" to "-rwxrwxr-x".
The correct permission should be "-rwsr-s--x".
Solution
1. Change the file permission of GI_HOME/bin/oracle to "-rwsr-s--x":
$ su - grid
$ cd GI_HOME/bin
$ chmod 6751 oracle
$ ls -l oracle
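As a quick sanity check (not an Oracle tool), mode 6751 on a regular file renders exactly as the expected permission string: the 6000 part is setuid + setgid, and 751 is rwxr-x--x, so ls shows "-rwsr-s--x":

```python
import stat

# 6751 = setuid + setgid + rwxr-x--x; on a regular file ls shows:
mode = stat.S_IFREG | 0o6751
print(stat.filemode(mode))  # -rwsr-s--x, the expected permission string
```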