Monday, April 14, 2014

Adding a Node to 12cR1 RAC

This post lists the steps for adding a node to a 12cR1 standard cluster (not a Flex Cluster), which are similar to those for adding a node to 11gR2 RAC. Node addition is done in three phases. Phase one adds the clusterware to the new node, the second phase adds the database software, and the final phase extends the database to the new node by creating a new instance on it. Node addition can be done either in silent mode or interactively with the GUIs. This post uses the latter method (the earlier 11gR2 post used silent mode, and the steps for 12c are similar).
1. It is assumed that the physical connections (shared storage and network) to the new node are already in place. The pre-node-add checks can be run with cluvfy from an existing node, passing the hostname of the new node (in this case rhel12c2 is the new node).
[grid@rhel12c1 ~]$ cluvfy stage -pre nodeadd -n rhel12c2

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "rhel12c1"

Checking user equivalence...
User equivalence check passed for user "grid"
Package existence check passed for "cvuqdisk"

Checking CRS integrity...

CRS integrity check passed

Clusterware version consistency passed.

Checking shared resources...

Checking CRS home location...
Location check passed for: "/opt/app/12.1.0/grid"
Shared resources check for node addition passed

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity using interfaces on subnet "192.168.0.0"
Node connectivity passed for subnet "192.168.0.0" with node(s) rhel12c1,rhel12c2
TCP connectivity check passed for subnet "192.168.0.0"

Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) rhel12c1,rhel12c2
TCP connectivity check passed for subnet "192.168.1.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.0.0".
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251" passed.

Check of multicast communication passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "rhel12c2:/usr,rhel12c2:/var,rhel12c2:/etc,rhel12c2:/opt/app/12.1.0/grid,rhel12c2:/sbin,rhel12c2:/tmp"
Free disk space check passed for "rhel12c1:/usr,rhel12c1:/var,rhel12c1:/etc,rhel12c1:/opt/app/12.1.0/grid,rhel12c1:/sbin,rhel12c1:/tmp"
Check for multiple users with UID value 501 passed
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "binutils"
Package existence check passed for "compat-libcap1"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check passed for "ksh"
Package existence check passed for "make"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "nfs-utils"
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed
Group existence check passed for "asmadmin"
Group existence check passed for "asmoper"
Group existence check passed for "asmdba"

Checking ASMLib configuration.
Check for ASMLib configuration passed.

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
No NTP Daemons or Services were found to be running

Clock synchronization check using Network Time Protocol(NTP) passed

User "grid" is not part of "root" group. Check passed
Checking integrity of file "/etc/resolv.conf" across nodes

"domain" and "search" entries do not coexist in any  "/etc/resolv.conf" file
All nodes have same "search" order defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all nodes

Check for integrity of file "/etc/resolv.conf" passed

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

Pre-check for node addition was successful.
2. To extend the cluster by installing the clusterware on the new node, run addNode.sh from the $GI_HOME/addnode directory as the grid user on an existing node. As mentioned earlier, this post uses the interactive method to add the node; a sketch of the invocation is shown below.
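A minimal sketch of how addNode.sh is launched, assuming the grid home used in this post (/opt/app/12.1.0/grid). The silent-mode form in the comments is the alternative to the GUI; its parameter names should be verified against the 12cR1 documentation.
# Run as the grid user on an existing node (rhel12c1 in this post).
export GI_HOME=/opt/app/12.1.0/grid
cd $GI_HOME/addnode

# Interactive (GUI) method used in this post:
./addNode.sh

# Rough silent-mode equivalent (not used here):
# ./addNode.sh -silent "CLUSTER_NEW_NODES={rhel12c2}" \
#              "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rhel12c2-vip}"
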
Click the Add button and enter the hostname and VIP name of the new node.

Fix any prerequisite issues and click Install to begin the GI installation on the new node.

Execute the root scripts on the new node

Output from root script execution
[root@rhel12c2 12.1.0]# /opt/app/12.1.0/grid/root.sh
Performing root user operation for Oracle 12c

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /opt/app/12.1.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /opt/app/12.1.0/grid/crs/install/crsconfig_params
2014/03/04 16:16:06 CLSRSC-363: User ignored prerequisites during installation

OLR initialization - successful
2014/03/04 16:16:43 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rhel12c2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rhel12c2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel12c2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rhel12c2'
CRS-2672: Attempting to start 'ora.evmd' on 'rhel12c2'
CRS-2676: Start of 'ora.evmd' on 'rhel12c2' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rhel12c2'
CRS-2676: Start of 'ora.gpnpd' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rhel12c2'
CRS-2676: Start of 'ora.gipcd' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rhel12c2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rhel12c2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rhel12c2'
CRS-2676: Start of 'ora.diskmon' on 'rhel12c2' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'rhel12c2'
CRS-2676: Start of 'ora.cssd' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rhel12c2'
CRS-2672: Attempting to start 'ora.ctssd' on 'rhel12c2'
CRS-2676: Start of 'ora.ctssd' on 'rhel12c2' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rhel12c2'
CRS-2676: Start of 'ora.asm' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rhel12c2'
CRS-2676: Start of 'ora.storage' on 'rhel12c2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rhel12c2'
CRS-2676: Start of 'ora.crsd' on 'rhel12c2' succeeded
CRS-6017: Processing resource auto-start for servers: rhel12c2
CRS-2672: Attempting to start 'ora.ons' on 'rhel12c2'
CRS-2676: Start of 'ora.ons' on 'rhel12c2' succeeded
CRS-6016: Resource auto-start has completed for server rhel12c2
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2014/03/04 16:22:11 CLSRSC-343: Successfully started Oracle clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/03/04 16:22:37 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
This concludes phase one. The next phase is to add the database software to the new node.



3. To add the database software, run addNode.sh from the $ORACLE_HOME/addnode directory as the oracle user on an existing node. When the OUI starts, the new node is selected by default.
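Again only a minimal sketch of the invocation, assuming the Oracle home used in this post (/opt/app/oracle/product/12.1.0/dbhome_1):
# Run as the oracle user on an existing node.
export ORACLE_HOME=/opt/app/oracle/product/12.1.0/dbhome_1
cd $ORACLE_HOME/addnode

# Interactive (GUI) method used in this post:
./addNode.sh

# Rough silent-mode equivalent (not used here):
# ./addNode.sh -silent "CLUSTER_NEW_NODES={rhel12c2}"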

At the end of the database software installation a message is shown prompting to invoke DBCA to extend the database to the new node.

This concludes phase two.
4. The final phase is to extend the database to the new node. Before invoking DBCA, change the permissions on the $ORACLE_BASE/admin directory to include write permission for the oinstall group so that the oracle user is able to write into it. After the database software was installed, the permissions on this directory were as follows.
[oracle@rhel12c2 oracle]$ ls -l
drwxr-xr-x. 3 grid   oinstall 4096 Mar  4 16:21 admin
Since the oracle user doesn't have write permission (because the oinstall group doesn't have write permission), DBCA fails with a permissions error.
Change permissions with
chmod 775 admin
and invoke the DBCA.
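A minimal sketch of the fix, assuming ORACLE_BASE is /opt/app/oracle and the admin directory is owned by the grid user as shown above:
# Run as the grid user (the owner of the admin directory) on the new node.
cd /opt/app/oracle          # ORACLE_BASE assumed in this post
ls -ld admin                # verify current permissions (drwxr-xr-x)
chmod 775 admin             # grant write permission to the oinstall group
ls -ld admin                # should now show drwxrwxr-x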
5. Select instance management in DBCA and then add instance.
Select which database to extend (if there are multiple databases in the cluster) and confirm the new instance details (these come auto-populated).
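The same step can be scripted with DBCA in silent mode. This is only a rough sketch (not used in this post); the parameter names should be checked against the 12cR1 DBCA documentation, the names are the ones used in this post and the SYS password is a placeholder.
# Rough sketch - add instance cdb12c2 on the new node rhel12c2 without the GUI.
dbca -silent -addInstance \
     -nodeList rhel12c2 \
     -gdbName cdb12c \
     -instanceName cdb12c2 \
     -sysDBAUserName sys \
     -sysDBAPassword <sys_password>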

6. Check that the new instance is visible in the cluster
[oracle@rhel12c1 addnode]$ srvctl config database -d cdb12c
Database unique name: cdb12c
Database name: cdb12c
Oracle home: /opt/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/cdb12c/spfilecdb12c.ora
Password file: +DATA/cdb12c/orapwcdb12c
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: cdb12c
Database instances: cdb12c1,cdb12c2
Disk Groups: DATA,FLASH
Mount point paths:
Services: pdbsvc
Type: RAC
Start concurrency:
Stop concurrency:
Database is administrator managed

SQL> select inst_id,instance_name,host_name from gv$instance;

   INST_ID INSTANCE_NAME    HOST_NAME
---------- ---------------- -------------------
         1 cdb12c1          rhel12c1.domain.net
         2 cdb12c2          rhel12c2.domain.net


SQL> select con_id,name from gv$pdbs;

    CON_ID NAME
---------- ---------
         2 PDB$SEED
         3 PDB12C
         2 PDB$SEED
         3 PDB12C
The service created for this PDB is not yet available on the new node. As seen below, only one instance appears as a preferred instance and none as an available instance.
[oracle@rhel12c1 ~]$ srvctl config service -d cdb12c -s pdbsvc
Service name: pdbsvc
Service is enabled
Server pool: cdb12c_pdbsvc
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Global: false
Commit Outcome: false
Failover type:
Failover method:
TAF failover retries:
TAF failover delay:
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Pluggable database name: pdb12c
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Replay Initiation Time: 300 seconds
Session State Consistency:
Preferred instances: cdb12c1
Available instances:
Modify the service to include the instance on the newly added node as well
[oracle@rhel12c1 ~]$ srvctl modify service -db cdb12c -pdb pdb12c -s pdbsvc -modifyconfig -preferred "cdb12c1,cdb12c2"

[oracle@rhel12c1 ~]$ srvctl config service -d cdb12c -s pdbsvc
Service name: pdbsvc
Service is enabled
Server pool: cdb12c_pdbsvc
Cardinality: 2
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Global: false
Commit Outcome: false
Failover type:
Failover method:
TAF failover retries:
TAF failover delay:
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Pluggable database name: pdb12c
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Replay Initiation Time: 300 seconds
Session State Consistency:
Preferred instances: cdb12c1,cdb12c2
Available instances:
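After the modification the service can be started on the newly added instance and its status checked. A minimal sketch; the exact srvctl options should be verified against the 12cR1 documentation.
# Start pdbsvc on the new instance and confirm it is running on both instances.
srvctl start service -db cdb12c -service pdbsvc -instance cdb12c2
srvctl status service -db cdb12c -service pdbsvc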
7. Use cluvfy to perform the post node add checks
[grid@rhel12c1 ~]$ cluvfy stage -post nodeadd -n rhel12c2
This concludes the addition of a new node to 12cR1 RAC.

Related Posts
Adding a Node to 11gR2 RAC
Adding a Node to 11gR1 RAC

Tuesday, April 1, 2014

Deleting a Node From 12cR1 RAC

Deleting a node from 12cR1 RAC is similar to deleting one from 11gR2. Deletion has three distinct phases: removing the database instance, removing the Oracle database software, and finally removing the clusterware. However, as per the Oracle documentation, "you can remove the Oracle RAC database instance from the node before removing the node from the cluster but this step is not required. If you do not remove the instance, then the instance is still configured but never runs." It must also be noted from the Oracle documentation that "deleting a node from a cluster does not remove a node's configuration information from the cluster. The residual configuration information does not interfere with the operation of the cluster". For example, it is possible to see some information related to the deleted node in an OCR dump file, and this shouldn't be a cause for concern.
The RAC setup in this case is a two-node RAC and the node named rhel12c2 will be removed from the cluster. The database is a CDB which has a single PDB.
SQL>  select instance_number,instance_name,host_name from gv$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME
--------------- ---------------- --------------------
              1 cdb12c1          rhel12c1.domain.net
              2 cdb12c2          rhel12c2.domain.net # node and instance to be removed

SQL> select con_id,dbid,name from gv$pdbs;

    CON_ID       DBID NAME
---------- ---------- ---------
         2 4066687628 PDB$SEED
         3  476277969 PDB12C
         2 4066687628 PDB$SEED
         3  476277969 PDB12C
1. The first phase is removing the database instance from the node to be deleted. For this, run DBCA on any node except the one hosting the instance being deleted. In this case DBCA is run from node rhel12c1. Follow the instance management option to remove the instance; a sketch of the silent equivalent is shown below.
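A rough sketch of the silent DBCA equivalent (not used in this post); the parameter names should be verified against the 12cR1 DBCA documentation and the SYS password is a placeholder.
# Rough sketch - remove instance cdb12c2 (hosted on rhel12c2), run from rhel12c1.
dbca -silent -deleteInstance \
     -nodeList rhel12c2 \
     -gdbName cdb12c \
     -instanceName cdb12c2 \
     -sysDBAUserName sys \
     -sysDBAPassword <sys_password>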

The following message is shown because the pdbsvc service was created to connect to the PDB.

2. At the end of the DBCA run the database instance has been removed from the node to be deleted
SQL> select instance_number,instance_name,host_name from gv$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME
--------------- ---------------- -------------------
              1 cdb12c1          rhel12c1.domain.net

SQL> select con_id,dbid,name from gv$pdbs;

    CON_ID       DBID NAME
---------- ---------- ---------
         2 4066687628 PDB$SEED
         3  476277969 PDB12C

[oracle@rhel12c1 ~]$ srvctl config database -d cdb12c
Database unique name: cdb12c
Database name: cdb12c
Oracle home: /opt/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/cdb12c/spfilecdb12c.ora
Password file: +DATA/cdb12c/orapwcdb12c
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: cdb12c
Database instances: cdb12c1 # only shows the remaining instance
Disk Groups: DATA,FLASH
Mount point paths:
Services: pdbsvc
Type: RAC
Start concurrency:
Stop concurrency:
Database is administrator managed
3. Check whether the redo log threads of the deleted instance have been removed from the database
SQL> select inst_id,group#,thread# from gv$log;

   INST_ID     GROUP#    THREAD#
---------- ---------- ----------
         1          1          1
         1          2          1
As it only shows redo threads for instance 1, no further action is needed. If DBCA had not removed them, the redo log thread could be disabled manually with alter database disable thread thread# (see the sketch below). With this step the first phase is complete.
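If the thread had still been present, a minimal sketch of the manual cleanup (assuming thread 2 belonged to the deleted instance; the group numbers are only examples):
# Run as the oracle user on a remaining node.
sqlplus / as sysdba <<'EOF'
ALTER DATABASE DISABLE THREAD 2;
-- drop the redo log groups that belonged to thread 2 (example group numbers)
-- ALTER DATABASE DROP LOGFILE GROUP 3;
-- ALTER DATABASE DROP LOGFILE GROUP 4;
EOF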
4. The second phase is to remove the Oracle database software. In 12c the listener runs out of the grid home by default. However, it is possible to set up a listener to run out of the Oracle home (RAC home) as well. If this is the case, stop and disable any listeners running out of the RAC home (a sketch follows). In this configuration there are no listeners running out of the RAC home.
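Only a sketch, since no such listener exists in this configuration; the listener name RAC_LISTENER is hypothetical and the srvctl options should be checked against the 12cR1 documentation.
# Hypothetical listener running out of the RAC home on the node being deleted.
srvctl stop listener -listener RAC_LISTENER -node rhel12c2
srvctl disable listener -listener RAC_LISTENER -node rhel12c2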
5. On the node to be deleted, run the node update command so that the node list for the Oracle home includes only the node being deleted. Before the command is run, inventory.xml lists all the nodes under the Oracle home; after the command is run it lists only the node to be deleted. The inventory.xml on the other nodes will still list all the nodes in the cluster under the Oracle home.
<HOME NAME="OraDB12Home1" LOC="/opt/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="rhel12c1"/>
      <NODE NAME="rhel12c2"/>
   </NODE_LIST>
</HOME>


[oracle@rhel12c2 ~]$ /opt/app/oracle/product/12.1.0/dbhome_1/oui/bin/runInstaller.sh -updateNodeList ORACLE_HOME=/opt/app/oracle/product/12.1.0/dbhome_1 "CLUSTER_NODES={rhel12c2}" -local
Starting Oracle Universal Installer...

<HOME NAME="OraDB12Home1" LOC="/opt/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="rhel12c2"/>
   </NODE_LIST>
</HOME>
6. Run the deinstall command with the -local option. Without the -local option this will remove the Oracle home from all the nodes!
[oracle@rhel12c2 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /opt/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /opt/app/oracle/product/12.1.0/dbhome_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /opt/app/oracle
Checking for existence of central inventory location /opt/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /opt/app/12.1.0/grid
The following nodes are part of this cluster: rhel12c2,rhel12c1
Checking for sufficient temp space availability on node(s) : 'rhel12c2,rhel12c1'
## [END] Install check configuration ##

Network Configuration check config START
Network de-configuration trace file location: /opt/app/oraInventory/logs/netdc_check2014-02-27_11-08-10-AM.log
Network Configuration check config END
Database Check Configuration START
Database de-configuration trace file location: /opt/app/oraInventory/logs/databasedc_check2014-02-27_11-08-16-AM.log
Use comma as separator when specifying list of values as input

Specify the list of database names that are configured locally on this node for this Oracle home. Local configurations of the discovered databases will be removed []:
Database Check Configuration END
Oracle Configuration Manager check START
OCM check log file location : /opt/app/oraInventory/logs//ocm_check9786.log
Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /opt/app/12.1.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:rhel12c2,rhel12c1
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rhel12c2'.
Oracle Home selected for deinstall is: /opt/app/oracle/product/12.1.0/dbhome_1
Inventory Location where the Oracle home registered is: /opt/app/oraInventory
Checking the config status for CCR
rhel12c2 : Oracle Home exists with CCR directory, but CCR is not configured
rhel12c1 : Oracle Home exists and CCR is configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/opt/app/oraInventory/logs/deinstall_deconfig2014-02-27_11-07-43-AM.out'
Any error messages from this session will be written to: '/opt/app/oraInventory/logs/deinstall_deconfig2014-02-27_11-07-43-AM.err'
######################## CLEAN OPERATION START ########################
Database de-configuration trace file location: /opt/app/oraInventory/logs/databasedc_clean2014-02-27_11-08-52-AM.log

Network Configuration clean config START
Network de-configuration trace file location: /opt/app/oraInventory/logs/netdc_clean2014-02-27_11-08-52-AM.log
Network Configuration clean config END
Oracle Configuration Manager clean START
OCM clean log file location : /opt/app/oraInventory/logs//ocm_clean9786.log
Oracle Configuration Manager clean END
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/opt/app/oracle/product/12.1.0/dbhome_1' from the central inventory on the local node : Done
Delete directory '/opt/app/oracle/product/12.1.0/dbhome_1' on the local node : Done
The Oracle Base directory '/opt/app/oracle' will not be removed on local node. The directory is in use by Oracle Home '/opt/app/12.1.0/grid'.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END

## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2014-02-27_11-06-59AM' on node 'rhel12c2'
Clean install operation removing temporary directory '/tmp/deinstall2014-02-27_11-06-59AM' on node 'rhel12c1'
## [END] Oracle install clean ##

######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
Cleaning the CCR configuration by executing its binaries
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully detached Oracle home '/opt/app/oracle/product/12.1.0/dbhome_1' from the central inventory on the local node.
Successfully deleted directory '/opt/app/oracle/product/12.1.0/dbhome_1' on the local node.
Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
7. After the deinstall completes, run the node update command on any remaining node. This updates the node list by removing the deleted node from the Oracle home's node list. The inventory.xml output before and after the command has executed is shown. The command shown is for non-shared Oracle homes; for shared homes follow the Oracle documentation.
<HOME NAME="OraDB12Home1" LOC="/opt/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="rhel12c1"/>
      <NODE NAME="rhel12c2"/>
   </NODE_LIST>
</HOME>

[oracle@rhel12c1 ~]$ /opt/app/oracle/product/12.1.0/dbhome_1/oui/bin/runInstaller.sh -updateNodeList ORACLE_HOME=/opt/app/oracle/product/12.1.0/dbhome_1 "CLUSTER_NODES={rhel12c1}" LOCAL_NODE=rhel12c1
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5119 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

<HOME NAME="OraDB12Home1" LOC="/opt/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="rhel12c1"/>
   </NODE_LIST>
</HOME>
This conclude the second phase. Final phase is to remove the clusterware.



8. Check that the node to be deleted is active and unpinned. If the node is pinned, unpin it with the crsctl unpin command (a sketch follows the output below). The following can be run as either the grid user or root.
[grid@rhel12c2 ~]$ olsnodes -t -s
rhel12c1        Active  Unpinned
rhel12c2        Active  Unpinned
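Only needed if the node had shown as Pinned above; a minimal sketch, run as root from the grid home:
# Unpin the node being deleted (a no-op here since it is already unpinned).
/opt/app/12.1.0/grid/bin/crsctl unpin css -n rhel12c2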
9. On the node to be deleted run the node update command to update the node list for the grid home such that it will include only the node being deleted. The inventory.xml output before and after the command has been executed is shown below.
<HOME NAME="OraGI12Home1" LOC="/opt/app/12.1.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="rhel12c1"/>
      <NODE NAME="rhel12c2"/>
   </NODE_LIST>
</HOME>

[grid@rhel12c2 ~]$ /opt/app/12.1.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/opt/app/12.1.0/grid "CLUSTER_NODES={rhel12c2}" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5119 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

<HOME NAME="OraGI12Home1" LOC="/opt/app/12.1.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="rhel12c2"/>
   </NODE_LIST>
</HOME>
10. If the GI home is non-shared, run deinstall with the -local option. If the -local option is omitted this will remove GI from all nodes.
[grid@rhel12c2 ~]$ /opt/app/12.1.0/grid/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2014-02-27_04-47-10PM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /opt/app/12.1.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /opt/app/oracle
Checking for existence of central inventory location /opt/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /opt/app/12.1.0/grid
The following nodes are part of this cluster: rhel12c2,rhel12c1
Checking for sufficient temp space availability on node(s) : 'rhel12c2,rhel12c1'
## [END] Install check configuration ##

Traces log file: /tmp/deinstall2014-02-27_04-47-10PM/logs//crsdc_2014-02-27_04-48-27PM.log
Network Configuration check config START
Network de-configuration trace file location: /tmp/deinstall2014-02-27_04-47-10PM/logs/netdc_check2014-02-27_04-48-29-PM.log
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /tmp/deinstall2014-02-27_04-47-10PM/logs/asmcadc_check2014-02-27_04-48-29-PM.log
Database Check Configuration START
Database de-configuration trace file location: /tmp/deinstall2014-02-27_04-47-10PM/logs/databasedc_check2014-02-27_04-48-29-PM.log
Database Check Configuration END

######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /opt/app/12.1.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:rhel12c2,rhel12c1
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'rhel12c2'.
Oracle Home selected for deinstall is: /opt/app/12.1.0/grid
Inventory Location where the Oracle home registered is: /opt/app/oraInventory
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2014-02-27_04-47-10PM/logs/deinstall_deconfig2014-02-27_04-48-01-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2014-02-27_04-47-10PM/logs/deinstall_deconfig2014-02-27_04-48-01-PM.err'

######################## CLEAN OPERATION START ########################
Database de-configuration trace file location: /tmp/deinstall2014-02-27_04-47-10PM/logs/databasedc_clean2014-02-27_04-48-33-PM.log
ASM de-configuration trace file location: /tmp/deinstall2014-02-27_04-47-10PM/logs/asmcadc_clean2014-02-27_04-48-33-PM.log
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall2014-02-27_04-47-10PM/logs/netdc_clean2014-02-27_04-48-34-PM.log
Network Configuration clean config END

---------------------------------------->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "rhel12c2".
/tmp/deinstall2014-02-27_04-47-10PM/perl/bin/perl -I/tmp/deinstall2014-02-27_04-47-10PM/perl/lib -I/tmp/deinstall2014-02-27_04-47-10PM/crs/install /tmp/deinstall2014-02-27_04-47-10PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2014-02-27_04-47-10PM/response/deinstall_OraGI12Home1.rsp"
Press Enter after you finish running the above commands

<----------------------------------------

[root@rhel12c2 ~]# /tmp/deinstall2014-02-27_04-47-10PM/perl/bin/perl -I/tmp/deinstall2014-02-27_04-47-10PM/perl/lib -I/tmp/deinstall2014-02-27_04-47-10PM/crs/install /tmp/deinstall2014-02-27_04-47-10PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2014-02-27_04-47-10PM/response/deinstall_OraGI12Home1.rsp"
Using configuration parameter file: /tmp/deinstall2014-02-27_04-47-10PM/response/deinstall_OraGI12Home1.rsp
Network 1 exists
Subnet IPv4: 192.168.0.0/255.255.255.0/eth0, static
Subnet IPv6:
VIP exists: network number 1, hosting node rhel12c1
VIP Name: rhel12c1-vip
VIP IPv4 Address: 192.168.0.89
VIP IPv6 Address:
VIP exists: network number 1, hosting node rhel12c2
VIP Name: rhel12c2-vip
VIP IPv4 Address: 192.168.0.90
VIP IPv6 Address:
ONS exists: Local port 6100, remote port 6200, EM port 2016

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rhel12c2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.FLASH.VOLUME1.advm' on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.CLUSTERDG.dg' on 'rhel12c2'
CRS-2677: Stop of 'ora.FLASH.VOLUME1.advm' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'rhel12c2'
CRS-2677: Stop of 'ora.FLASH.dg' on 'rhel12c2' succeeded
CRS-2677: Stop of 'ora.CLUSTERDG.dg' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rhel12c2'
CRS-2677: Stop of 'ora.DATA.dg' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rhel12c2'
CRS-2677: Stop of 'ora.asm' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rhel12c2'
CRS-2677: Stop of 'ora.net1.network' on 'rhel12c2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rhel12c2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.storage' on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rhel12c2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rhel12c2' succeeded
CRS-2677: Stop of 'ora.storage' on 'rhel12c2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rhel12c2' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rhel12c2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rhel12c2'
CRS-2673: Attempting to stop 'ora.asm' on 'rhel12c2'
CRS-2677: Stop of 'ora.ctssd' on 'rhel12c2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rhel12c2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rhel12c2'
CRS-2677: Stop of 'ora.cssd' on 'rhel12c2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rhel12c2'
CRS-2677: Stop of 'ora.gipcd' on 'rhel12c2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel12c2' has completed
CRS-4133: Oracle High Availability Services has been stopped.

2014/02/27 16:53:43 CLSRSC-336: Successfully deconfigured Oracle clusterware stack on this node

Failed to delete the directory '/opt/app/oracle/product/12.1.0'. The directory is in use.
Failed to delete the directory '/opt/app/oracle/diag/rdbms/cdb12c/cdb12c2/log/test'. The directory is in use.
Removal of some of the directories failed but this had no impact on removing the node from the cluster. These directories can be cleaned up manually afterwards.
11. From any remaining node run the following command with the remaining nodes as the node list. The inventory.xml output is given before and after the command is run.
<HOME NAME="OraGI12Home1" LOC="/opt/app/12.1.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="rhel12c1"/>
      <NODE NAME="rhel12c2"/>
   </NODE_LIST>
</HOME>

[grid@rhel12c1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/opt/app/12.1.0/grid "CLUSTER_NODES={rhel12c1}" CRS=TRUE -silent
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 5119 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

<HOME NAME="OraGI12Home1" LOC="/opt/app/12.1.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="rhel12c1"/>
   </NODE_LIST>
</HOME>
12. From a node remaining in the cluster, run the node deletion command as root
[root@rhel12c1 bin]# crsctl delete node -n rhel12c2
CRS-4661: Node rhel12c2 successfully deleted.
13. Finally, use the cluster verification utility to check that the node deletion has completed successfully.
[grid@rhel12c1 bin]$ cluvfy stage -post nodedel -n rhel12c2

Performing post-checks for node removal

Checking CRS integrity...
CRS integrity check passed
Clusterware version consistency passed.
Node removal check passed
Post-check for node removal was successful.
This concludes the deletion of a node from 12cR1 RAC.

Related Post
Deleting a Node From 11gR2 RAC
Deleting a 11gR1 RAC Node