I recently installed a 2-node RAC cluster using the following configuration:
Operating System: Solaris 10 (SPARC-64)
Oracle Clusterware: 220.127.116.11
Oracle ASM: 18.104.22.168
Oracle RDBMS: 10.2.0.3
Because the servers had 4 network interface cards, I asked the system administrators to configure IPMP on the Virtual IP and Private Interconnect interfaces.
The Clusterware, ASM, and RDBMS installations went as planned. However, whenever we restarted the ASM instance, it took several minutes to come up. While it was starting, I ran ptree against the racgimon process and found it hanging on the command “sh -c /usr/sbin/arp -a | /usr/xpg4/bin/grep SP”. It took a while to sort out, but by piecing together enough blog posts and Metalink notes I was finally able to figure out what needed to be done:
- Collect the hostname, VIP, and private interconnect aliases and IP addresses for each RAC node from /etc/hosts.
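A quick way to pull those entries together is to grep /etc/hosts on each node. The hostnames (racnode1/racnode2) and the -vip/-priv suffixes below are assumptions; substitute your own naming convention. The sample file stands in for the real /etc/hosts:

```shell
# Sample /etc/hosts entries; on a live node you would grep the real file.
# racnode1/racnode2 and the -vip/-priv suffixes are assumed names.
cat > /tmp/hosts.sample <<'EOF'
192.168.1.10   racnode1
192.168.1.11   racnode2
192.168.1.20   racnode1-vip
192.168.1.21   racnode2-vip
10.0.0.10      racnode1-priv
10.0.0.11      racnode2-priv
EOF

# Public, VIP, and private interconnect addresses for each node
grep -E 'racnode[0-9]+(-vip|-priv)?([[:space:]]|$)' /tmp/hosts.sample
```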
- Collect network interface information on each node, identifying which interfaces are part of each IPMP group.
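On Solaris, IPMP group membership shows up as a "groupname" line in ifconfig -a output. Here is a sketch that maps interfaces to their IPMP groups; the interface names (ce0..ce3) and group names are assumptions, and the heredoc stands in for the real `ifconfig -a`:

```shell
# Sample ifconfig -a output (abbreviated); pipe the real command in instead.
cat > /tmp/ifconfig.sample <<'EOF'
ce0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.1.10 netmask ffffff00 broadcast 192.168.1.255
        groupname ipmp-pub
ce1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet 192.168.1.12 netmask ffffff00 broadcast 192.168.1.255
        groupname ipmp-pub
ce2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        inet 10.0.0.10 netmask ffffff00 broadcast 10.0.0.255
        groupname ipmp-priv
ce3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 5
        inet 10.0.0.12 netmask ffffff00 broadcast 10.0.0.255
        groupname ipmp-priv
EOF

# Print "interface group" pairs
awk '/^[a-z]/ { iface = $1; sub(":", "", iface) }
     /groupname/ { print iface, $2 }' /tmp/ifconfig.sample
```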
- Identify which interfaces nodeapps is using on each node.
srvctl config nodeapps -n <hostname>
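The interface list is the last /-delimited field of the VIP line that srvctl prints. The sample line below mirrors the 10.2-era "VIP exists." format, but treat the exact layout as an assumption and check your own output first:

```shell
# Sketch: extract the interface list from srvctl config nodeapps output.
# The sample line (hostname, addresses, interfaces) is assumed, not real output.
echo 'VIP exists.: /racnode1-vip/192.168.1.20/255.255.255.0/ce0|ce1' |
  awk -F'/' '{ print $NF }'
```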
- Update the nodeapps interfaces as necessary.
srvctl modify nodeapps -n <hostname> -A <ip_address>/<subnet_mask>/<ipmp_interface1>\|<ipmp_interface2>
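The pipe separating the two IPMP interfaces must reach srvctl literally, hence the backslash escape on the command line. A sketch of building that -A argument, using assumed addresses and interface names:

```shell
# Assumed values; substitute your own VIP, netmask, and IPMP group members.
VIP=192.168.1.20
MASK=255.255.255.0
IF1=ce0; IF2=ce1

# Inside double quotes the pipe needs no escaping; on a bare command
# line you would write ce0\|ce1 so the shell does not treat it as a pipe.
ARG="${VIP}/${MASK}/${IF1}|${IF2}"
echo "srvctl modify nodeapps -n racnode1 -A ${ARG}"
```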
- Identify the OCR private interface(s).
- Delete the OCR private interface(s).
oifcfg delif -global <if_name>
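The interface name to delete comes from `oifcfg getif`, which lists each registered interface with its subnet, scope, and role. The sketch below parses a sample of that output for the cluster_interconnect entries; the interface names and subnets are assumptions standing in for the real command:

```shell
# Sample oifcfg getif output: interface, subnet, scope, role (assumed values).
cat > /tmp/oifcfg.sample <<'EOF'
ce0  192.168.1.0  global  public
ce2  10.0.0.0  global  cluster_interconnect
EOF

# The interface name(s) to pass to "oifcfg delif -global"
awk '$4 == "cluster_interconnect" { print $1 }' /tmp/oifcfg.sample
```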
- Set the CLUSTER_INTERCONNECTS parameter in each ASM and database instance pfile/spfile.
alter system set cluster_interconnects='ip_address' scope=spfile sid='SID1';
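The parameter is per-instance, so each SID gets its own entry pointing at that node's private address. The instance names (+ASM1, ORCL1, ...) and addresses below are assumptions; a sketch of the full set of statements for a 2-node cluster:

```sql
-- Assumed instance names and private interconnect addresses.
-- ASM instances (run from the ASM home):
alter system set cluster_interconnects='10.0.0.10' scope=spfile sid='+ASM1';
alter system set cluster_interconnects='10.0.0.11' scope=spfile sid='+ASM2';
-- Database instances (run from the RDBMS home):
alter system set cluster_interconnects='10.0.0.10' scope=spfile sid='ORCL1';
alter system set cluster_interconnects='10.0.0.11' scope=spfile sid='ORCL2';
```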
- Restart all services on each node.
- Verify that each database and ASM instance is using the appropriate Private Interconnect.
select * from gv$cluster_interconnects;
The ASM startup will now take a fraction of the time it was taking before, and the correct interconnect IP address will be used.