Category Archives: Database

Oracle Log and Trace File Cleanup

UPDATE: Several script bugs brought to my attention by a comment posted below have been fixed. The script should now be compatible with Linux and Solaris. Please let me know if any additional bugs are identified.

Every running Oracle installation has several directories and files that need to be rotated and/or purged. Surprisingly (or not), Oracle does not include this basic maintenance in its software. I have come across the oraclean utility in the past, but that script does not do everything I need.

To achieve what I required, I recently hacked together a single script that does the following things:

  • Cleans audit_dump_dest.
  • Cleans background_dump_dest.
  • Cleans core_dump_dest.
  • Cleans user_dump_dest.
  • Cleans Oracle Clusterware log files.
  • Rotates and purges alert log files.
  • Rotates and purges listener log files.

The script has been tested on Solaris 9 and 10 with Oracle database versions 9i and 10g. It has also been tested with Oracle Clusterware and ASM 11g. The script can be scheduled on each server having one or more Oracle homes installed, and it will clean all of them up using the retention policy specified. The limitation is that log file retention is specified per server, not per instance. However, I find that placing a single crontab entry on each database server is easier than setting up separate log purge processes for each one.

The script finds all unique Oracle Homes listed in the oratab file and retrieves the list of running Oracle instances and listeners. Once the script knows that information, it rotates and cleans the trace, dump, and log files.
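The oratab scan described above can be sketched as a small shell function. This is an illustrative reconstruction, not the script itself; oratab lives at /var/opt/oracle/oratab on Solaris and /etc/oratab on Linux, and entries take the form SID:ORACLE_HOME:&lt;Y|N&gt;.

```shell
#!/bin/sh
# Sketch: list each unique ORACLE_HOME found in an oratab file.
unique_homes() {
    # Skip comment and blank lines, take the second colon-delimited
    # field (the Oracle Home path), and de-duplicate the results.
    grep -v '^[[:space:]]*#' "$1" | awk -F: 'NF >= 2 && $2 != "" { print $2 }' | sort -u
}
```

Called as `unique_homes /etc/oratab`, this prints one home per line. Running instances are typically identified separately, for example by scanning `ps` output for `ora_pmon_*` processes.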


Usage: -d DAYS [-a DAYS] [-b DAYS] [-c DAYS] [-n DAYS] [-r DAYS] [-u DAYS] [-t] [-h]
   -d = Mandatory default number of days to keep log files that are not explicitly passed as parameters.
   -a = Optional number of days to keep audit logs.
   -b = Optional number of days to keep background dumps.
   -c = Optional number of days to keep core dumps.
   -n = Optional number of days to keep network log files.
   -r = Optional number of days to keep clusterware log files.
   -u = Optional number of days to keep user dumps.
   -t = Optional test mode. Does not delete any files.
   -h = Optional help mode.
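The purge itself boils down to an age-based find in each dump directory. The sketch below is a minimal reconstruction, not the script's actual code; the directory, file pattern, and retention are parameters, and the test mode mirrors the -t flag by printing matches instead of deleting them.

```shell
#!/bin/sh
# Sketch of the age-based purge applied to a dump directory.
purge_old_files() {
    dir=$1 pattern=$2 days=$3 testmode=$4
    [ -d "$dir" ] || return 0
    if [ "$testmode" = "yes" ]; then
        # Test mode: list files older than the retention, delete nothing.
        find "$dir" -type f -name "$pattern" -mtime +"$days" -print
    else
        # Normal mode: remove files older than the retention period.
        find "$dir" -type f -name "$pattern" -mtime +"$days" -exec rm -f {} \;
    fi
}
```

A call such as `purge_old_files /u01/app/oracle/admin/ORCL/udump '*.trc' 7 no` would remove user dump trace files older than a week (the path here is hypothetical). A crontab entry scheduling the full cleanup script might look like `0 3 * * * /usr/local/bin/oracle_cleanup.sh -d 7 -a 30`, with the script path again being illustrative.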

Solaris 10 + IPMP + Oracle RAC

I recently installed a 2-node RAC cluster using the following configuration:

Operating System: Solaris 10 (SPARC-64)
Oracle Clusterware:
Oracle ASM:
Oracle RDBMS:

Because the servers had 4 network interface cards, I asked the system administrators to configure IPMP on the Virtual IP and Private Interconnect interfaces.

The Clusterware, ASM, and RDBMS installations went as planned. However, when we tried restarting the ASM instance, it would take several minutes to come up. While it was starting, I ran ptree against the racgimon process and found that it was hanging on the “sh -c /usr/sbin/arp -a | /usr/xpg4/bin/grep SP” command. It took a while to sort out, but I was finally able to piece together enough blog posts and Metalink notes to figure out what needed to be done.

  1. Collect the hostname, VIP, and private interconnect aliases and IP addresses for each RAC node from /etc/hosts.
  2. Collect network interface information on each node, identifying which interfaces are part of each IPMP group.
    ifconfig -a
  3. Identify which interfaces nodeapps is using on each node.
    srvctl config nodeapps -n <hostname>
  4. Update the nodeapps interfaces as necessary.
    srvctl modify nodeapps -n <hostname> -A <ip_address>/<subnet_mask>/<ipmp_interface1>\|<ipmp_interface2>
  5. Identify the OCR private interface(s).
    oifcfg getif
  6. Delete the OCR private interface(s).
    oifcfg delif -global <if_name>
  7. Set the CLUSTER_INTERCONNECTS parameter in each ASM and database instance pfile/spfile.
    alter system set cluster_interconnects='ip_address' scope=spfile sid='SID1';
  8. Restart all services on each node.
  9. Verify that each database and ASM instance is using the appropriate Private Interconnect.
    select * from gv$cluster_interconnects;
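Steps 3 through 6 above can be collected into a small helper that only prints the commands for one node, so they can be reviewed before anything touches a live cluster. The node name, VIP, netmask, and interface names used below are hypothetical.

```shell
#!/bin/sh
# Dry-run sketch: print the nodeapps and OCR interface commands for
# one node instead of executing them.
show_ipmp_fixup() {
    node=$1 vip=$2 mask=$3 if1=$4 if2=$5 priv_if=$6
    echo "srvctl config nodeapps -n $node"
    echo "srvctl modify nodeapps -n $node -A $vip/$mask/$if1|$if2"
    echo "oifcfg getif"
    echo "oifcfg delif -global $priv_if"
}
```

For example, `show_ipmp_fixup racnode1 192.168.1.101 255.255.255.0 ce0 ce1 ce2` prints the four commands for a node named racnode1. Note that when running the srvctl modify command directly in a shell, the pipe between the two IPMP interfaces must be escaped or quoted, as shown in Step 4.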

The ASM startup will now take a fraction of the time it was taking before, and the correct interconnect IP address will be used.

Metalink Note 283107.1 – Configuring Solaris IP Multipathing (IPMP) for the Oracle 10g VIP
Metalink Note 368464.1 – How to Setup IPMP as Cluster Interconnect

How To Move Online Redo Logs While Database Is Running

You have online redo log files in the wrong place and want to move them.


  1. Run the following SQL to locate the current redo log groups and their members:
    select l.group# group_number
    , l.status group_status
    , f.member group_member
    , f.status file_status
    from v$log l
    , v$logfile f
    where l.group# = f.group#
    order by l.group#
    , f.member;

  2. Find a location that can contain the new log files.
  3. Create the new log files using the following SQL. Note that at least two logfile groups must remain after the unneeded groups are dropped.
    sqlplus /nolog
    SQL> connect / as sysdba
    SQL> alter database add logfile ('/path/to/new/redo_01_01.log','/path/to/new/redo_01_02.log') size 150M;
    SQL> alter database add logfile ('/path/to/new/redo_02_01.log','/path/to/new/redo_02_02.log') size 150M;

  4. Run the SQL from Step 1 to determine which logfile group is CURRENT.
  5. Run the following command to switch log files, then rerun the SQL from Step 1. Repeat until one of the newly created logfile groups has a CURRENT status.
    SQL> alter system switch logfile;

  6. Once one of the new logfile groups is CURRENT, drop an unneeded group with the following SQL. Only drop logfile groups that have an INACTIVE status.
    SQL> alter database drop logfile group GROUP#;

  7. Repeat Step 6 for each undesired logfile group.
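The switch-and-check cycle above (switch log files, re-query v$log, repeat until a new group is CURRENT) can be sketched as a loop. The two sqlplus calls are left as command parameters here so the loop logic stands alone, and the sketch assumes the newly added groups received higher group numbers than the existing ones.

```shell
#!/bin/sh
# Sketch: keep switching log files until one of the newly added
# groups (numbered >= new_first) becomes CURRENT.
switch_until_current() {
    # $1: command printing the CURRENT group# (e.g. a sqlplus -s query
    #     against v$log); $2: command issuing "alter system switch
    #     logfile"; $3: lowest group# among the newly added groups.
    get_current_cmd=$1 switch_cmd=$2 new_first=$3
    while :; do
        cur=$("$get_current_cmd")
        if [ "$cur" -ge "$new_first" ]; then
            break
        fi
        "$switch_cmd"
    done
    echo "$cur"
}
```

In practice the first parameter would wrap something like `select group# from v$log where status = 'CURRENT';` run through `sqlplus -s / as sysdba`, and the second would issue the switch command the same way.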

If the database uses Oracle-Managed Files, the dropped log files are deleted from the file system automatically; otherwise, remove the old files manually after dropping their groups.


Oracle® Database Administrator’s Guide – Managing the Redo Log