
Home Backup Server Using Bacula

I have several personal computers that store documents, pictures, music, and videos that I need to back up regularly. Google Drive, Google Picasa, and Google Play Music provide offsite storage for my most critical files, and while these cloud-based copies are convenient, I do not completely trust the files to Google. What if the services are compromised or the files are corrupted?

The general recommendation is to have at least three copies of your important files and data: a primary online copy, a local backup, and an offsite backup.

I debated the need for a local backup and researched several secondary cloud-based options, including Amazon S3, Backblaze, and CrashPlan. The catch with any cloud backup solution is bandwidth. A full backup or restore of a terabyte can take weeks, and it is often faster to load the files onto an external hard drive and ship it to or from the cloud storage vendor.
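The bandwidth math is easy to sanity-check. A rough estimate, assuming a 10 Mbit/s uplink (adjust the rate for your connection):

```shell
# Back-of-the-envelope: time to push 1 TB over a 10 Mbit/s link (rate assumed).
# 1 TB = 8 * 10^12 bits; 10 Mbit/s = 10^7 bits per second.
awk 'BEGIN {
    tb_bits = 8 * 10^12          # one terabyte, in bits
    rate    = 10 * 10^6          # uplink speed, bits per second
    days    = tb_bits / rate / 86400
    printf "%.1f days\n", days   # prints 9.3 days
}'
```

At typical residential upload speeds, even the initial full backup stretches into weeks once protocol overhead and retries are factored in.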

After ruling out a cloud-based backup solution due to prohibitively long backup and restore times, I looked into a few different local backup solutions. Most of my experience has revolved around commercial solutions including NetBackup, CommVault, and EMC NetWorker. Given that the commercial solutions are overpriced and overly complicated for home backup needs, I looked into Amanda and Bacula as open source solutions. Either one works fine, and Amanda has a few more enterprise-class options, but I decided to go with Bacula because it is included in the Ubuntu repositories.

After selecting Bacula as the backup software solution, I researched a few different small servers. The backup server needed to be able to support at least 3 disks and RAID5, and I ended up purchasing an HP ProLiant N40L MicroServer. The base server comes with 2GB of memory, a single 250GB hard drive, and a 4-port SATA RAID controller. I had two 1TB hard drives lying around, so I purchased two more for a total of 4TB of raw storage. The MicroServer has space for a 5.25″ optical drive, so I opted not to install an optical drive and instead purchased a 5.25″ to 3.5″ bay adapter to hold the included 250GB hard drive. Beyond the storage, I upgraded the memory to 2x4GB memory sticks and bought a remote access card for headless administration.

If you are interested in buying the same components, you can check them out on Amazon. 2TB hard drives are probably a better choice if you need the capacity. I just wanted to make use of the two 1TB drives I already had.

After the hardware arrived, I installed Ubuntu 12.04 LTS Server (64-bit) on the 250GB drive mounted in the optical bay. The OS drive is not redundant, but I figure the OS can be rebuilt on a new drive in a pinch. Then I installed the ZFS on Linux kernel module and configured the 4x1TB drives as a ZFS file system. Finally, I installed the Bacula server and configured all of the clients.
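The post does not show the pool creation itself; a minimal sketch, assuming the four 1TB drives show up as /dev/sdb through /dev/sde and that single-parity raidz (ZFS's RAID5 analogue, roughly 3TB usable from 4TB raw) is the layout:

```shell
# Device names are assumptions - confirm with lsblk or fdisk -l first.
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
zfs create tank/bacula              # dataset to hold Bacula volumes
zfs set compression=on tank/bacula  # backup data usually compresses well
zpool status tank                   # verify the pool is ONLINE
```

raidz tolerates a single drive failure, which matches the RAID5 requirement the server was sized for.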

Overall, the MicroServer fits nicely in my home media center, and the Bacula software has been doing a good job of consistently backing up my files for the past couple of months.
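For anyone reproducing the setup, the server-side definition of each backup client in Bacula looks roughly like this — a hedged sketch for bacula-dir.conf, with the name, address, and password invented for illustration:

```conf
# bacula-dir.conf fragment (hostname and password are placeholders)
Client {
  Name = desktop1-fd
  Address = desktop1.home.lan
  FDPort = 9102
  Catalog = MyCatalog
  Password = "changeme"       # must match the client's bacula-fd.conf
  File Retention = 60 days    # prune file records after two months
  Job Retention = 6 months
  AutoPrune = yes
}
```

Each client machine then runs bacula-fd with a matching password.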


Posted in Linux.


Ubuntu Linux + Sony BDP-S570 + DLNA

I bought a Sony BDP-S570 3D Blu-ray Disc Player a couple of months ago, and I also happen to run a MythTV server where I store all of my music, videos, and recordings. The BDP-S570 says it is a DLNA client, but for whatever reason it does not recognize the MythTV UPnP/DLNA server.

This is what I did to share my MythTV media with the Sony BDP-S570:

  • Add the unofficial MiniDLNA Ubuntu PPA:
    sudo add-apt-repository ppa:stedy6/stedy-minidna
  • Update the APT package index:
    sudo apt-get update
  • Install MiniDLNA:
    sudo apt-get install minidlna
  • Edit the /etc/minidlna.conf file:
    sudo vi /etc/minidlna.conf

    friendly_name=MythTV DLNA Server

  • Restart the MiniDLNA server, removing the existing media list:
    sudo /etc/init.d/minidlna stop
    sudo rm -r /tmp/minidlna
    sudo /etc/init.d/minidlna start
  • Turn on your Sony BDP-S570, and see if your media server is listed:
    Setup menu > Network Settings > Connection Server Settings

  • Scan for the media server if it is not already listed. It will have a status of “Shown” if it has been found.
  • Now try to play some of the media!

The instructions above will most likely work for other Sony BDP-S* players, too.
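Besides friendly_name, the settings most people need to touch are the media directories — a hedged sketch of /etc/minidlna.conf, with the MythTV storage paths assumed:

```conf
# /etc/minidlna.conf (paths are assumptions - point at your MythTV storage)
media_dir=A,/var/lib/mythtv/music      # A = audio
media_dir=V,/var/lib/mythtv/videos     # V = video
media_dir=P,/var/lib/mythtv/pictures   # P = pictures
friendly_name=MythTV DLNA Server
inotify=yes                            # pick up newly recorded files automatically
```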

Posted in Linux, Music, Video.

AD Patch Worker Hangs on XDOLoader Process

Have you run an Oracle E-Business Suite R12 patch that slowed down or hung at the Java loader steps for no apparent reason? I first encountered this issue in January, and finding a workable solution took several hours of research. No Oracle Support notes pointed directly to the issue at the time, although several more recent notes make it easier to identify and solve. Hopefully this post will be useful to someone else.

Platform: Red Hat Enterprise Linux Server
Application Version: E-Business Suite 12.1+


Symptom:

The patch runs fine until it begins to slow down and hang partway through the Java loader (e.g., XDOLoader) steps for no apparent reason. There are no indications that the hang is caused by a database performance or locking issue.


AD patch worker log error:

Io exception: Connection reset

Run jstack on the hanging java process:

"main" prio=10 tid=0x08937000 nid=0x22ea runnable [0xf73e1000]
   java.lang.Thread.State: RUNNABLE
        at ...(Native Method)
        - locked <0xf29b25a0> (a ...)
        - locked <0xf29b2370> (a ...)
        - locked <0xf29b1fd0> (a ...)
        - locked <0xf29b2250> (a ...)
        at ...(Unknown Source)
        at oracle.jdbc.driver.T4CTTIoauthenticate.marshalOauth(...)
        at oracle.jdbc.driver.T4CConnection.logon(...)
        at oracle.jdbc.driver.PhysicalConnection.<init>(...)
        at oracle.jdbc.driver.T4CConnection.<init>(...)
        at oracle.jdbc.driver.T4CDriverExtension.getConnection(...)
        at oracle.jdbc.driver.OracleDriver.connect(...)
        at java.sql.DriverManager.getConnection(...)
        at java.sql.DriverManager.getConnection(...)
        at oracle.apps.xdo.oa.util.XDOLoader.initAppsContext(...)
        at oracle.apps.xdo.oa.util.XDOLoader.init(...)
        at oracle.apps.xdo.oa.util.XDOLoader.<init>(...)
        at oracle.apps.xdo.oa.util.XDOLoader.main(...)

Check /dev/random entropy:

cat /proc/sys/kernel/random/entropy_avail
NOTE: Higher numbers are better. The patch will begin to slow down or hang whenever entropy is ~50 or less.
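To watch entropy while a patch runs, a quick check like this works (the 200-bit warning margin is my own headroom above the ~50 bits where the hangs start):

```shell
# Read the kernel's available entropy estimate and flag low values.
avail=$(cat /proc/sys/kernel/random/entropy_avail)
if [ "$avail" -lt 200 ]; then
    echo "low entropy: $avail bits - SecureRandom reads may stall"
else
    echo "entropy ok: $avail bits"
fi
```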


Cause:

The Java process depends on the /dev/random device to provide random numbers to the SecureRandom Java class. If /dev/random runs out of entropy, any patch worker calling SecureRandom blocks until enough random bytes are available.

Solutions:

NOTE: Pick one of the solutions below. Solution 1 is my preferred fix, since it is specific to the E-Business Suite and should not affect other processes on the server.

  1. Search for all jre/lib/security/java.security files under the application tier and replace:
    securerandom.source=file:/dev/random
    with:
    securerandom.source=file:/dev/./urandom
    (The /./ is deliberate: the JDK special-cases the literal string file:/dev/urandom and falls back to /dev/random.)
  2. Run the rngd daemon to seed /dev/random from /dev/urandom:
    Install the rng-utils package on Red Hat 5 or kernel-utils on Red Hat 4.
    rngd -r /dev/urandom -o /dev/random -f -t 1
  3. Replace the /dev/random device with /dev/urandom. (Not recommended for security reasons.)

    sudo mv /dev/random /dev/random.bak
    sudo ln -s /dev/urandom /dev/random
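Solution 1 can be scripted. A hedged sketch — EBS_BASE is an assumed install root, so point it at your actual application-tier Oracle homes before running, and back up each java.security file first so the change is easy to revert:

```shell
# Switch every bundled JRE to the non-blocking urandom device.
# The /dev/./urandom spelling is deliberate: the JDK special-cases the
# literal string file:/dev/urandom and silently falls back to /dev/random.
EBS_BASE=${EBS_BASE:-/u01/app/oracle}   # assumption - adjust for your system
find "$EBS_BASE" -name java.security -path '*jre/lib/security*' 2>/dev/null \
    -exec sed -i 's|^securerandom.source=file:/dev/random|securerandom.source=file:/dev/./urandom|' {} + || true
```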


Posted in Applications, Linux, Oracle.


Oracle Log and Trace File Cleanup

UPDATE: Several script bugs brought to my attention by a comment posted below have been fixed. The script should now be compatible with Linux and Solaris. Please let me know if any additional bugs are identified.

Every running Oracle installation has several directories and files that need to be rotated and/or purged. Surprisingly, or not, Oracle has not included this basic maintenance in their software. I have come across the oraclean utility in the past, but the script does not do everything I need.

To achieve what I required, I recently hacked together a single script that does the following things:

  • Cleans audit_dump_dest.
  • Cleans background_dump_dest.
  • Cleans core_dump_dest.
  • Cleans user_dump_dest.
  • Cleans Oracle Clusterware log files.
  • Rotates and purges alert log files.
  • Rotates and purges listener log files.

The script has been tested on Solaris 9 and 10 with Oracle database versions 9i and 10g. It has also been tested with Oracle Clusterware and ASM 11g. The script can be scheduled on each server having one or more Oracle homes installed, and it will clean all of them up using the retention policy specified. The limitation is that log file retention is specified per server, not per instance. However, I find that placing a single crontab entry on each database server is easier than setting up separate log purge processes for each one.

The script finds all unique Oracle Homes listed in the oratab file and retrieves the list of running Oracle instances and listeners. Once the script knows that information, it rotates and cleans the trace, dump, and log files.
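The discovery step amounts to a little oratab parsing. A sketch of the equivalent commands, assuming the Linux oratab location (/etc/oratab; Solaris keeps it in /var/opt/oracle/oratab):

```shell
# Collect the unique ORACLE_HOME values from oratab.
# Entries look like SID:ORACLE_HOME:STARTUP_FLAG; comments start with '#'.
ORATAB=${ORATAB:-/etc/oratab}
if [ -r "$ORATAB" ]; then
    awk -F: '!/^[[:space:]]*(#|$)/ && NF >= 2 { print $2 }' "$ORATAB" | sort -u
fi
```

Running instances and listeners can then be matched against this list from ps output.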


Usage: -d DAYS [-a DAYS] [-b DAYS] [-c DAYS] [-n DAYS] [-r DAYS] [-u DAYS] [-t] [-h]
   -d = Mandatory default number of days to keep log files that are not explicitly passed as parameters.
   -a = Optional number of days to keep audit logs.
   -b = Optional number of days to keep background dumps.
   -c = Optional number of days to keep core dumps.
   -n = Optional number of days to keep network log files.
   -r = Optional number of days to keep clusterware log files.
   -u = Optional number of days to keep user dumps.
   -h = Optional help mode.
   -t = Optional test mode. Does not delete any files.
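A typical deployment is then one cron entry per server. A sketch — the script name and install path are hypothetical, and the retention values are only examples:

```conf
# Nightly at 03:00: keep 30 days of logs by default, only 7 days of core
# dumps, and discard the output (script name and path are placeholders).
0 3 * * * /usr/local/bin/oracle_log_cleanup.sh -d 30 -c 7 >/dev/null 2>&1
```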

Posted in Database, Oracle.


Copy Tables From DB2 to Oracle – The Free Way

Part of a recent project I was working on involved the decommissioning of an old DB2 database on an IBM z/OS mainframe. As part of the decommissioning process, the business wanted to keep the data available for potential audit reporting. The Oracle Migration Workbench for DB2 sounded like the best option, but it turned out not to be supported on z/OS.

After several attempts at using SQL*Loader to move the 350 tables, a colleague suggested Oracle’s Generic Connectivity. After coordinating with several other groups, this is the process that finally worked:

  1. Have a DB2 account created, so that the data can be queried.
  2. Install the DB2 Connect client on the UNIX server on which the Oracle database resides.
  3. Configure the DB2 Connect client.
    – The DB2 administrator and UNIX administrator coordinated on this, so I do not have the specifics.
  4. Test the DB2 connection.
    . /export/home/db2inst1/sqllib/cfg/db2profile
    db2 connect to MYDB2DATABASE user <username>
    db2 => select current time as DB2_TIME from sysibm.sysdummy1
    db2 => terminate
  5. Install the unixODBC package on the Oracle database server.
  6. Configure the odbc.ini file (usually located in /usr/local/etc/odbc.ini).
    [MYDB2DATABASE]
    Description = DB2 Driver
    Driver = /export/home/db2inst1/sqllib/lib/
  7. Test the unixODBC connection.
    isql -v MYDB2DATABASE username password
    SQL> select current time as DB2_TIME from sysibm.sysdummy1
    SQL> quit
  8. Create an initialization file for Oracle Generic Connectivity.
    cd $ORACLE_HOME/hs/admin
    vi initMYDB2DATABASE.ora
    # HS init parameters
    HS_FDS_TRACE_LEVEL = debug
    HS_FDS_SHAREABLE_NAME = /usr/local/lib/
    # ODBC specific environment variables
    set ODBCINI=/usr/local/etc/odbc.ini
    # Environment variables required for the non-Oracle system
    set DB2INSTANCE=db2inst1
  9. Create a listener entry in the Oracle listener.ora.
    (SID_DESC =
      (SID_NAME = MYDB2DATABASE)
      (ORACLE_HOME = /path/to/your/oracle/home)
      (PROGRAM = hsodbc)
    )
  10. Ensure the listener connection timeout is unlimited in the listener.ora.
  11. Ensure the connection timeout is unlimited in the sqlnet.ora.
  12. Restart the database listener.
    lsnrctl stop listener_name; lsnrctl start listener_name
  13. Add a tnsnames.ora entry for the HS listener.
    MYDB2DATABASE =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521))
        (CONNECT_DATA = (SID = MYDB2DATABASE))
        (HS = OK)
      )
  14. Log into the Oracle database as a user that has the CREATE DATABASE LINK privilege.
  15. Create a database link to the DB2 database.
  16. Test the database link.
    select current time as DB2_TIME from sysibm.sysdummy1@MYDB2DATABASE;
  17. Move as many tables as possible using:
    create table table_name as select * from db2_schema.db2_table_name@MYDB2DATABASE;
  18. Move the tables that fail with “ORA-00997: illegal use of LONG datatype” using the SQL*Plus COPY command:
    COPY FROM username/password@ORACLE_SID TO username/password@ORACLE_SID -
    CREATE table_name USING SELECT * from db2_schema.db2_table_name@MYDB2DATABASE;
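Steps 14–16 boil down to a few statements. A hedged sketch of the link DDL — every identifier here is a placeholder, and the USING clause must match the tnsnames.ora entry pointing at the HS listener:

```sql
-- Run as a user with the CREATE DATABASE LINK privilege.
CREATE DATABASE LINK mydb2database
  CONNECT TO "db2user" IDENTIFIED BY "db2password"
  USING 'MYDB2DATABASE';

-- Smoke test: pull a row from DB2 through the link.
SELECT * FROM sysibm.sysdummy1@mydb2database;
```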

Known Issues:

  1. ORA-28511: lost RPC connection to heterogeneous remote agent using
    Solution: Set the connections to not timeout.
  2. ORA-00997: illegal use of LONG datatype
    Solution: Use the SQL*Plus COPY command.
  3. Error when running SQL*Plus COPY command.
    ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
    [Generic Connectivity Using ODBC]DRV_BlobRead: DB_ODBC_ENGINE (1489): ;
    [unixODBC][IBM][CLI Driver][DB2] SQL0805N Package
    “MYDB2DATABASE.NULLID.SYSLH203.5359534C564C3031” was not found. SQLSTATE=51002
    (SQL State: 51002; SQL Code: -805)
    Solution: This error is due to packages missing on the DB2 side. I had the DB2 database admin create the missing package.

  4. ORA-01400: cannot insert NULL into ("oracle_schema"."table_name"."column_name")
    Solution: Create an empty table and alter the column to accept NULL.
    COPY FROM username/password@ORACLE_SID TO username/password@ORACLE_SID -
    CREATE table_name USING SELECT * from db2_schema.db2_table_name@MYDB2DATABASE WHERE 1=2;
    ALTER TABLE table_name MODIFY column_name NULL;
    COPY FROM username/password@ORACLE_SID TO username/password@ORACLE_SID -
    APPEND table_name USING SELECT * from db2_schema.db2_table_name@MYDB2DATABASE;
  5. Need DB2 ODBC driver tracing for debugging?
    Solution: Enable tracing in the db2cli.ini file.

MetaLink Note 375624.1 – How to Configure Generic Connectivity (HSODBC) on Linux 32 bit using DB2Connect

Posted in Oracle.
