Database Install Notes

This document contains notes on installing several databases on a Linux platform, setting up ODBC and using database connections in a program (Perl, PHP).

Installing MySQL on Linux

Before you proceed, you should read the build instructions in the MySQL documentation. A PDF of the latest documentation can be downloaded from http://dev.mysql.com/. The "MySQL Reference Manual" contains everything that you need.

If you haven't already done so, create a userid and group for MySQL. The standard group ID and userid for CentOS/RedHat are both 27, so use them if they are available. Under Ubuntu, the standard userid is 102 and the standard group ID is 105. Note that you want to create a system account (i.e. one whose ID number is below 500 for CentOS/RedHat, or below 1000 for Ubuntu, and that has no home directory). This example shows how to do it on CentOS/RedHat:

     su
     /usr/sbin/groupadd -g 27 mysql
     /usr/sbin/useradd -c "MySQL Server" -d /var/mysql -g mysql -s /sbin/nologin \
                       -M -n -r -u 27 mysql

Download the latest MySQL tar file from http://dev.mysql.com/.

Note that some of the later builds (e.g. MySQL Community Server 5.5.15 GA) don't build or install cleanly, especially if you wish to use the ODBC connector. As of this writing, MySQL Community Server 5.5.28 GA does appear to work, but you may want to pass over some of the intermediate versions and go with a known working version. The latest version before 5.5.28 that we know to work is mysql-5.1.58.

Make sure you get the source files and not one of the myriad pre-built distributions (you will experience far fewer problems if you build it yourself). Untar the tar file in the top-level source directory (e.g. /rpm/MySQL):

     tar -xvzf mysql-m.n.xx.tar.gz

The tar command will create a new directory for the m.n.xx version of MySQL. Switch to that directory and build MySQL. Earlier versions of MySQL required some compiler flags to build properly, like this:

     cd mysql-m.n.xx
     CFLAGS="-O3" CXX=gcc CXXFLAGS="-O3 -felide-constructors -fno-exceptions \
       -fno-rtti" ./configure --prefix=/usr/local/mysql --enable-assembler \
       --enable-thread-safe-client --with-mysqld-ldflags=-all-static \
       --enable-local-infile
     make

The mysql-5.1.58 version doesn't appear to need these flags so you can build it like this:

     cd mysql-5.1.58
     ./configure --prefix=/usr/local/mysql --enable-assembler \
       --enable-thread-safe-client --with-mysqld-ldflags=-all-static \
       --enable-local-infile
     make

For mysql-5.5.x and later versions, Oracle has switched the build to use cmake instead of GNU configure, thereby making the build portable across many different systems. You will first need to make sure that cmake is installed on your system. Under RedHat/CentOS, you can install cmake like this:

     su
     yum install cmake

Under Ubuntu, you can install cmake like this:

     su
     apt-get install cmake

Once you have cmake installed, you can build MySQL by first creating a build directory (of your choice) and then running cmake:

     cd mysql-5.5.28
     mkdir bldprod
     cd bldprod
     cmake .. -DCMAKE_INSTALL_PREFIX=/usr/local/mysql \
       -DMYSQL_DATADIR=/var/mysql -DSYSCONFDIR=/etc \
       -DENABLED_LOCAL_INFILE=ON
     make

Note that for mysql-5.1.x or mysql-5.5.x, InnoDB may no longer be built into MySQL by default. If you want to be sure that it is, you should include the "--with-plugins=innobase" parameter on the ./configure line for mysql-5.1.x or "-DWITH_INNOBASE_STORAGE_ENGINE=1" on the cmake line for mysql-5.5.x or later. There are lots of other plugins that can be built in a similar manner, so consult the docs if you need one.

Incidentally, when using cmake, if you ever need to find out what options were used to build the makefile, you can look in your build directory at a file named CMakeCache.txt.
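Since each cache entry has the form NAME:TYPE=VALUE, you can pull a configured value back out with sed. This is a sketch that builds a tiny sample cache in a scratch directory so it runs anywhere; on a real system, just run the sed line in your build directory:

```shell
# Sketch: recover a configured option from CMakeCache.txt.
# The sample cache below is created purely for illustration.
cd "$(mktemp -d)"
cat > CMakeCache.txt <<'EOF'
CMAKE_INSTALL_PREFIX:PATH=/usr/local/mysql
ENABLED_LOCAL_INFILE:BOOL=ON
MYSQL_DATADIR:PATH=/var/mysql
EOF
# Strip the NAME:TYPE= prefix to get just the value.
infile=$(sed -n 's/^ENABLED_LOCAL_INFILE:[A-Z]*=//p' CMakeCache.txt)
echo "ENABLED_LOCAL_INFILE is $infile"
```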

After make is done, switch to super-duper user and install MySQL:

     su
     make install

Copy the default config file to /etc, where MySQL can find it:

     cp support-files/my-medium.cnf /etc/my.cnf

Configure the client and server as you require. Here are some options to consider:

     [client]
       port = 3306            # This should be the default but you never know
       socket = /tmp/mysql.sock
     [mysqld]
       port = 3306            # This should be the default but you never know
       datadir = /var/mysql   # Data directory
       socket = /tmp/mysql.sock
       skip-networking        # If the server is just going to be used locally

If you want InnoDB, you should uncomment all of the lines pertaining to it and pay attention to the lines that point to the data directory:

     [mysqld]
       innodb_data_home_dir = /var/mysql/
       innodb_log_group_home_dir = /var/mysql/
       innodb_log_arch_dir = /var/mysql/
            .
            .
            .
       innodb_lock_wait_timeout = ...

Also, some bright guy got the idea to turn off local infile as a security enhancement in later versions of MySQL. If you want this capability (trust me, you do), make sure it is turned on for all clients and the server by editing the appropriate sections of the configuration file (the "loose" prefix actually turns it on only for those clients that understand the option):

     [client]
       loose-local-infile = 1
            .
            .
            .
     [mysqld]
       local-infile = 1

Note that, apparently, calmer heads thought better of the "no local infile" thing and have partially reversed the decision to turn it off. Later versions of MySQL (e.g. 5.5.28) are pre-built with it turned on and only if you are building it from source do you need to supply the build option. You still have to pass "local-infile=1" to mysqld (as is shown in the config file, above) but you no longer need to tell the clients about it. So, for later versions of MySQL "loose-local-infile=1" becomes an error and it should be left out of the config file. Nothing like having your stuff together.
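A short awk one-liner can confirm that the config file's [mysqld] section enables local-infile. This is a sketch that writes a sample my.cnf in a scratch directory; on a real system, point the awk line at /etc/my.cnf instead:

```shell
# Sketch: check the [mysqld] section of a config file for local-infile.
# A sample my.cnf is written here for illustration.
cd "$(mktemp -d)"
cat > my.cnf <<'EOF'
[client]
loose-local-infile = 1
[mysqld]
local-infile = 1
EOF
# Track the current [section]; print the value only inside [mysqld].
val=$(awk '/^\[/{sect=$0}
           sect=="[mysqld]" && /local-infile/ {print $NF}' my.cnf)
echo "server local-infile = $val"
```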

Install the predefined tables and then set the file permissions on them (see next paragraph first). For earlier versions of MySQL, mysql_install_db is found in /usr/local/mysql/bin. For later versions of MySQL, mysql_install_db is found in /usr/local/mysql/scripts. Run it from the appropriate place:

     su
     /usr/local/mysql/scripts/mysql_install_db \
          --basedir=/usr/local/mysql --user=mysql
     chown -R mysql:mysql /var/mysql
     chmod g=rwx /var/mysql
     chmod g=rw /var/mysql/
     chgrp root /var/mysql/mysql /var/mysql/test
     chmod go= /var/mysql/mysql /var/mysql/perform /var/mysql/test

Probably, the permissions will already be set this way so you could just check before messing with them. They should look like this:

     drwxrwx--- 4 mysql mysql 4096 Mar 13 23:20 /var/mysql
     /var/mysql
       drwx------ 2 mysql root    4096 Mar 13 23:20 mysql
       -rw-rw---- 1 mysql mysql  15147 Mar 13 23:20 mysql-bin.000001
       -rw-rw---- 1 mysql mysql 510697 Mar 13 23:20 mysql-bin.000002
       -rw-rw---- 1 mysql mysql     38 Mar 13 23:20 mysql-bin.index
       drwx------ 2 mysql mysql   4096 Mar 13 23:20 performance_schema
       drwx------ 2 mysql root    4096 Mar 13 23:20 test
     /var/mysql/mysql
       -rw-rw---- 1 mysql mysql   8820 Mar 13 23:20 columns_priv.frm
       -rw-rw---- 1 mysql mysql      0 Mar 13 23:20 columns_priv.MYD
            .
            .
            .
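Rather than eyeballing the listing, you can check a directory's mode programmatically with stat. This sketch demonstrates the idea on a temporary directory; substitute /var/mysql on a real install:

```shell
# Sketch: verify a directory has the drwxrwx--- (770) mode these notes
# expect for the data directory.  Demonstrated on a temp directory.
d=$(mktemp -d)
chmod 770 "$d"
mode=$(stat -c '%a' "$d")    # GNU stat: numeric mode
if [ "$mode" = "770" ]; then
    echo "permissions OK"
else
    echo "expected 770, got $mode"
fi
```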

To complete the next steps, MySQL must be running. Start it as follows:

     su
     /usr/local/mysql/bin/mysqld_safe --user=mysql &

You can test to see if it is up and running by:

     /usr/local/mysql/bin/mysqladmin version
     /usr/local/mysql/bin/mysqladmin variables

You should see information about the server version from the first command and typical query output from the second.

If you don't want the test database, you can get rid of it while the server is up and running:

     /usr/local/mysql/bin/mysqladmin -u root drop test

You should assign a password to the root userid. Also, if you don't want the anonymous userids, you should get rid of them (it's a good idea, especially if you got rid of the test database):

     /usr/local/mysql/bin/mysql -u root
       set password for 'root'@'localhost' = password('newpwd');
       set password for 'root'@'host_name' = password('newpwd');
       delete from mysql.user where User = '';
       flush privileges;

Since we installed MySQL in /usr/local/mysql, it is probable that most users won't have this directory in their PATH environment variable so they won't be able to execute the MySQL programs by name only. We like to add a couple of symlinks to the most commonly-used MySQL programs to a standard location so that they can be found by everyone, without having to update their PATH environment variable. Here is what we add:

     ln -s /usr/local/mysql/bin/mysql /usr/local/bin/mysql
     ln -s /usr/local/mysql/bin/mysqldump /usr/local/bin/mysqldump

Sometimes the host name used in the user table is qualified by the local domain (e.g. myhost.localdomain). Also, you may need to set the password for the root user, qualified by '127.0.0.1' and/or '::1'. You can see all of the rows in this table with the following select:

     select * from mysql.user;

If you need to add another root user for the machine (e.g. when you change the machine's name), the following should do the trick:

     grant all privileges on *.* to 'root'@'newhost_name'
               identified by 'newpwd' with grant option;

Note that you should replace "host_name" with the actual name of your server and "newpwd" with the new password that you wish to use.

If you want to add new users with access to everything, here's a quick synopsis of what to do:

     grant all privileges on *.* to 'monty'@'localhost'
               identified by 'some_pass' with grant option;
     grant all privileges on *.* to 'monty'@'host_name'
               identified by 'some_pass' with grant option;

If you want a user with access to everything, who can login from anywhere (dangerous, wot?), try this:

     grant all privileges on *.* to 'monty'@'%'
               identified by 'some_pass' with grant option;

If you want to add a user with privileges on just one database, try something like:

     grant all privileges on OneDatabase.* to 'monty'@'localhost'
               identified by 'some_pass';
     grant all privileges on OneDatabase.* to 'monty'@'host_name'
               identified by 'some_pass';

If monty gets out of hand and you'd like to revoke his privileges, this should do the trick:

     revoke all privileges, grant option from 'monty'@'localhost',
                'monty'@'host_name';

If you'd like to expunge all traces of monty from your otherwise pristine database, first revoke his privs (as shown above) and then do:

     delete from mysql.user where User='monty';

OK, finished playing? Shut down the server:

     /usr/local/mysql/bin/mysqladmin -u root -pnewpwd shutdown

Note that the install doesn't appear to tell the loader where it puts the dynamic link libraries that are necessary to access MySQL databases. If you are developing any applications that will access a MySQL database, you may have to fix this situation manually. One fix is to add symlinks for the MySQL modules, in an obvious location such as /usr/lib. Another fix is to change ld.so.conf or ld.so.conf.d to point the loader at the directory where the MySQL modules can be found.

If you prefer the first approach, this command can take care of things:

     find /usr/local/mysql/lib -name \*.so -exec ln -s \{\} /usr/lib \;
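To see what that one-liner does without touching /usr/lib, here is a sketch on a throwaway tree, with lib/ and links/ standing in for /usr/local/mysql/lib and /usr/lib:

```shell
# Sketch of the find/ln one-liner: every *.so under lib/ gets a symlink
# in links/.  The static .a file is deliberately skipped by -name \*.so.
cd "$(mktemp -d)"
mkdir lib links
touch lib/libmysqlclient.so lib/libmysqlclient.a
find "$PWD/lib" -name \*.so -exec ln -s {} links \;
ls links
```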

Our preference is to add a file to /etc/ld.so.conf.d that points the loader at the MySQL dynamic link libraries. The file should look like this:

/etc/ld.so.conf.d/mysql-i386.conf:

     /usr/local/mysql/lib

Note that this file may already exist as a result of a previous MySQL installation not being properly removed. This can occur if your OS install thinks it knows better about what should be on your system and installs the MySQL RPM, for example, after which you remove the RPM before building MySQL from source. In that case, the file may already point to /usr/lib/mysql. We remove that library's name and replace it with /usr/local/mysql/lib. If you are paranoid, you can just order the two libraries so that the modules from /usr/local/mysql/lib take precedence but any missing modules will be found in /usr/lib/mysql. Your config file will look like:

/etc/ld.so.conf.d/mysql-i386.conf:

     /usr/local/mysql/lib
     /usr/lib/mysql

Run the loader configuration command to inform the loader about the new dynamic link libraries:

     su
     /sbin/ldconfig

Starting/Stopping MySQL at Boot Time

Once you have MySQL running nicely, you can set up the database server to start at boot time by switching to the install directory, copying the startup script to the init directory and then installing it at the correct run levels:

     cd /rpm/MySQL/mysql-m.n.xx
     or
     cd /rpm/MySQL/mysql-m.n.xx/bldprod
     su
     cp support-files/mysql.server /etc/rc.d/init.d/mysql
     chmod ugo+x /etc/rc.d/init.d/mysql
     /sbin/chkconfig --add mysql
     /sbin/chkconfig mysql on

After that, if you wish to start the MySQL daemon immediately, do this:

     /etc/rc.d/init.d/mysql start

It should start/stop automatically when the system is brought up/down.

Resetting MySQL's Root Password

If you ever forget the root password, you can reset it with this sequence of commands:

     su
     cd /etc/init.d
     ./mysql stop
     mv mysql mysql.orig
     cp mysql.orig mysql
     emacs mysql

Change the line in the startup section that reads:

     /usr/bin/mysqld_safe ...

To read:

     /usr/bin/mysqld_safe --skip-grant-tables ...

Save the file, exit the editor, and restart the mysql server:

     ./mysql start

Run the MySQL client from the command prompt:

     /usr/local/mysql/bin/mysql

Issue the following statements in the mysql client. Replace the MyNewPass with the password that you want to use:

     update mysql.user set Password=password('MyNewPass') where User='root';
     flush privileges;
     quit

Stop the mysql server and restore the original startup script:

     ./mysql stop
     rm -f mysql
     mv mysql.orig mysql

Restart the mysql server:

     ./mysql start

You should now be able to login to mysql using the root password:

     /usr/local/mysql/bin/mysql -uroot -pMyNewPass

Perl MySQL DBD Interface

It is quite common to develop applications that use the Perl MySQL DBD interface (e.g. a Web UI to your data). If this is your plan, you will need some extra Perl modules so you might as well fire up CPAN and get started with their installation. In order to do this, make sure that the server has access to the outside world (through your firewall) so it can download the Perl modules from the selected CPAN site. Once it does, type:

     su
     perl -MCPAN -e shell

The first time through, you can let automatic configuration set everything up. All that remains is to set up the URL list, which you do with:

     o conf init urllist

CPAN will then ask you to pick some mirror sites for downloading sources. Apparently, those sites that use FTP are typically slow and prone to fail so you may wish to only consider those sites that use HTTP. If you pick North America and United States, here are the sites that we use:

     http://cpan.belfry.net/
     http://cpan.mirror.facebook.net/
     http://mirrors.ibiblio.org/CPAN/
     http://www.perl.com/CPAN/

If you ever need to redo the list of sites, you can rerun the configuration at any time from within the CPAN shell with:

     o conf init urllist

Once you have made your choices, remember to commit the configuration with:

     o conf commit

There is more information at:

     http://rackerhacker.com/2008/06/16/adjusting-cpan-mirror-list/

Here is a list of modules that we install before we install the MySQL DBD interface:

     install Bundle::CPAN
     force install Date::Calc  (later versions have it already)
     install Time::HiRes

If the install of Time::HiRes fails, exit CPAN, do the following, and then rerun the install of Time::HiRes from scratch:

     export LC_ALL=C

If you care that the install gets to run all of its built-in tests, to verify that the install worked OK, then before installing the MySQL bundle you will need to exit CPAN and set the environment variables needed by the Perl install script:

     export DBD_MYSQL_TESTDB=stores
     export DBD_MYSQL_TESTUSER=root
     export DBD_MYSQL_TESTPASSWORD=itsasecret  (use the actual root password)
     export DBD_MYSQL_TESTHOST=localhost
     export DBD_MYSQL_TESTPORT=3306

Note that the super user password is now stored in an environment variable in plain text. This is a big security hole. Only do this for the time needed to run the Perl install script.

Create a database that can be used by the install script. To do so, run mysql and do the following:

     mysql -uroot -p
       rootsecretpassword
       create database stores;
       quit

After exiting from mysql, restart CPAN and then install DBD:

     perl -MCPAN -e shell
     install Bundle::DBD::mysql

We're now done with the Perl test database so we can get rid of it:

     mysql -uroot -p
       rootsecretpassword
       drop database stores;
       quit

At this point, you must log off to clear the environment variables with the super user password in them. Then, you need to edit root's bash history to delete the line where you set the password. To do that, log on as root again. With your favorite editor, edit /root/.bash_history and remove any lines that contain the root password. Save the file. Log out again (the command that sets the password is still in the in-memory command history). You can log in for a third time and check that the plaintext password is no longer visible in the command history. What a pain in the butt! Nice install procedure.
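If you'd rather not hand-edit the history file, something like the following does the scrubbing non-interactively. It is sketched here on a copy in a scratch directory, with a hypothetical password string; on the real system, the file is /root/.bash_history:

```shell
# Sketch: remove every history line mentioning the (hypothetical)
# password "itsasecret", working on a sample file in a temp directory.
cd "$(mktemp -d)"
printf 'ls\nexport DBD_MYSQL_TESTPASSWORD=itsasecret\npwd\n' > history
grep -v 'itsasecret' history > history.clean && mv history.clean history
cat history
```

The in-memory command history still has to be cleared by logging out, as described above.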

Incidentally, if you wish, you can skip all of the steps to create a test database and just install the MySQL DBD Interface without it. All of the install script's test steps will be skipped and the install will complete successfully. Most of the time, this is all that it takes. It's a lot simpler.

Oh, and while we're talking about incidentals, you can often install this module directly with your Operating System's package manager. If this option is available, you should probably take it.

Exporting/Importing a MySQL Database

If you have occasion to move a MySQL database from one place to another, the following two scripts are offered as examples of how to do it. Basically, you just export the original database with mysqldump, copy it to the new location and then import it with the mysql command. These two scripts add lots of bells and whistles.

MyDBExport:

     #!/bin/sh
     #
     # Shell script to export a MySQL database and create a zip file.
     #
     # This script can be run by any user because the user name and password
     # that are to be used to access the database tables are supplied as
     # parameters to this script.  Run the script like this:
     #
     #      MyDBExport dbname username [password]
     #          [rmthost rmtdest rmtuser rmtpasswd]
     #
     # Note that, if the password isn't supplied, you will be prompted for it
     # at the appropriate time.  However, if you wish to have the exported
     # database copied to a remote host, via FTP, you'll need to supply the
     # password and then you can supply the remote host, destination directory,
     # username, and password.
     #
     # The exported database is zipped and dropped in the current directory.
     # If you supply the rmthost (and other parameters), the zipped database
     # will also be FTPed to the remote host specified.
     #
     # The exported database may be loaded onto another system by running the
     # mysql command on that machine and piping the exported database into it,
     # like this:
     #
     #      gunzip -c db.exp.sql.gz | mysql -uuser -p
     #
     # You can also use the MyDBImport script, which is meant to be used as a
     # companion to this script.
     #
     # Incidentally, the exported data produced by this script can be used to
     # export a MyISAM database and import in into InnoDB.  Thus, this script
     # can be used to convert databases stored with the default (MyISAM) engine
     # used under MySQL 5.1.x to the new default (InnoDB) engine used under
     # MySQL 5.5.x.
     #
     ############################################################################
     #
     # Let's see if the user has supplied a password or not.
     #
     if [ "x$3" != "x" ]; then
         PasswdParm="-p$3"
     else
         PasswdParm="-p"
     fi
     #
     # Export the database and pass it through gzip.  This produces a zipped
     # SQL file that can be loaded into another system, etc.
     #
     mysqldump --databases --add-drop-database --lock-tables --order-by-primary \
         -u$2 $PasswdParm $1 2>/dev/null | gzip >$1.exp.sql.gz
     #
     # If the user supplied a remote host name, FTP the exported database to
     # its destination there.
     #
     if [ "x$4" != "x" ]; then
         echo -e "user $6 $7\\nbin\\nput $1.exp.sql.gz $5/$1.exp.sql.gz" | \
             ftp -n $4
     fi
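The optional-password handling at the top of the script is worth a closer look: a present third argument is glued onto -p (mysql accepts no space there), while an absent one leaves a bare -p so the client prompts interactively. This sketch simulates the argument handling outside the script:

```shell
# Sketch of the optional-password convention used by both scripts.
# "set --" simulates running: MyDBExport mydb monty secret
set -- mydb monty secret
if [ "x$3" != "x" ]; then
    PasswdParm="-p$3"       # password supplied: -pPASSWORD, no space
else
    PasswdParm="-p"         # no password: mysql will prompt for it
fi
printf '%s\n' "$PasswdParm"
```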

DBImport:

     #!/bin/sh
     #
     # Shell script to import a MySQL database from a zip file.
     #
     # This script can be run by any user because the user name and password
     # that are to be used to create the database and its tables are supplied
     # as parameters to this script.  Run the script like this:
     #
     #      MyDBImport dbname username [password]
     #
     # Note that, if the password isn't supplied, you will be prompted for it
     # at the appropriate time.
     #
     # This script presumes that the exported/zipped database, that is to be
     # imported by this script, was somehow dropped in the current directory.
     # It further presumes that the exported database was created by the
     # companion script MyDBExport, which can be asked to copy exported
     # databases to remote machines via FTP, and that the name of the zip
     # file matches the conventions used by MyDBExport.
     #
     # This script only accepts the name of the exported database which is to
     # be imported.  Based on the naming conventions used by MyDBExport, the
     # zip file name is constructed like this:
     #
     #      dbname.exp.sql.gz
     #
     # The actual name of the database that is restored by the zipped import
     # file depends on the name of the database that was backed up by
     # MyDBExport but, in general, this should match the name of the zipped
     # import file.
     #
     # Note that this script looks for a dummy file, named dbname.lastexport,
     # in the current directory.  If it finds such a file, it checks the
     # dummy file's timestamp against the import file's timestamp.  If the
     # import file isn't newer than the dbname.lastexport file, the database
     # isn't imported.  If the last export file isn't found, it is not
     # created.  Thus, to turn on this behavior, you must create the dummy
     # file, in the current directory, ahead of time, like this:
     #
     #      touch mydb.lastexport
     #
     ############################################################################
     #
     # Let's see if the user has supplied a password or not.
     #
     if [ "x$3" != "x" ]; then
         PasswdParm="-p$3"
     else
         PasswdParm="-p"
     fi
     #
     # Check to see if the user wants us to check, and, if so, is there anything
     # worth doing.
     #
     HaveLast=0
     if [ -w $1.lastexport ]; then
         HaveLast=1
         if test $1.lastexport -nt $1.exp.sql.gz; then
             echo The database is right up to date with the last export.
             exit 0
         fi
     fi
     #
     # Also, check to see if the export crapped out or looks OK.  Any export
     # that is less than 10K, we assume that it crapped out.
     #
     ZipFile=`find . -name $1.exp.sql.gz -follow -size +10k -print`

     if [ "x$ZipFile" == "x" ]; then
         echo Looks like the export of the $1 database failed.
         exit 1
     fi
     #
     # Export the current database, for backup porpoises, and pass it through
     # gzip.  If this fails, we won't try the restore, since the first thing
     # it does is blow away the existing database and we don't want to try
     # that unless we are assured of a high degree of success.
     #
     mysqldump --databases --add-drop-database --lock-tables --order-by-primary \
         -u$2 $PasswdParm $1 2>/dev/null | gzip >$1.tmp.sql.gz
     RetVal=$?
     if [ $RetVal != 0 ]; then
         echo Backup of the existing $1 database failed.
         rm -f $1.tmp.sql.gz
         exit 2
     fi
     #
     # Get rid of the old back-em-up file (if any) and make the new
     # back-em-up the "One".
     #
     rm -f $1.bak.sql.gz >/dev/null 2>&1
     mv $1.tmp.sql.gz $1.bak.sql.gz
     echo Old database exported and saved as $1.bak.sql.gz
     #
     # Import the exported database from the zipped file.
     #
     echo Loading the new $1 database.
     #db gunzip -c $1.exp.sql.gz | mysql -u$2 $PasswdParm >/dev/null 2>&1
     gunzip -c $1.exp.sql.gz | mysql -u$2 $PasswdParm
     RetVal=$?
     if [ $RetVal != 0 ]; then
         echo Load of the new $1 database failed.
         gunzip -c $1.bak.sql.gz | mysql -u$2 $PasswdParm >/dev/null 2>&1
         exit 3
     fi
     #
     # Indicate we're done with this zip file.
     #
     if [ $HaveLast = 1 ]; then
         touch $1.lastexport
     fi
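The freshness check at the heart of MyDBImport is just test's -nt ("newer than") operator. This sketch demonstrates it with empty files in a scratch directory:

```shell
# Sketch of MyDBImport's freshness check: only reload the database when
# the export file is newer than the dbname.lastexport marker.
cd "$(mktemp -d)"
touch mydb.lastexport
sleep 1
touch mydb.exp.sql.gz               # a newer export has arrived
if test mydb.exp.sql.gz -nt mydb.lastexport; then
    result="import needed"
else
    result="up to date"
fi
echo "$result"
```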

Installing Informix on Linux

Create an informix user and group:

     su
     /usr/sbin/useradd -c "Informix database" -m informix

If you want to give the new userid a password:

     su
     passwd informix

Load the latest tar files for the Linux distribution onto a suitable Linux machine (e.g. CentOS 4, 5, 6, RedHat 9, RedHat EL4, EL5 or EL6). At a minimum, the three files for the 4GL RDS, SQL Development and Informix SE are required. You may also have files for the debugger and esql for C. And, finally, you will probably need the files for the client SDK and possibly Connect.

For later versions of Informix, the tar files will have names like CZ22MEN.tar and C57Z3NA.tar. You'll just have to keep the names straight, as the files are named by IBM to indicate what version of the software they supply, not the names of the components actually contained within the tar files. You may be able to figure out which file installs which component by listing the contents of the tar file and looking for the install program itself. For example:

     tar -tf CZ22MEN.tar
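To see how the listing gives the component away, this sketch builds a stand-in tar file and greps the listing for the install program; with the real IBM distributions, you would list e.g. CZ22MEN.tar instead:

```shell
# Sketch: identify a component by finding its install program in the
# tar listing.  A sample tar is built here purely for illustration.
cd "$(mktemp -d)"
mkdir pkg
touch pkg/install4gp pkg/somedata.dat
tar -cf sample.tar pkg
tar -tf sample.tar | grep install
```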

Even if you can figure out which component is contained within a file, you'll still have to remember which version it represents and whether it is 32-bit or 64-bit, and carefully pick the one that you want. Below are some examples of Informix install files that we know to work on different versions of Linux.

The install files for RedHat Linux, versions 8 and 9 are:

     4GL_RDSDeveloper-7.32.UC1.tar
     ClientSDK.2.81.UC3.LINUX-I32.tar
     SQLDevelopment-7.32.UC1.tar
     InformixSE-7.25.UC6.tar

The install files for RedHat Enterprise Linux (4 or 5) or CentOS, version 5 are:

     C97BLNA.tar - contains install4db (4GL debugger)
     C97BJNA.tar - contains install4gp
     C97BMNA.tar - contains installsql
     se.7.25.uc6r1.linux32.tar - contains installse

Later versions of Enterprise Linux 5 or CentOS (5.5) are known to work with these install files:

     CZFQ1EN.tar - contains install4db (4GL debugger)
     CZFV6EN.tar - contains install4gp
     CZFZ5EN.tar - contains installsql
     C56DANA.tar - contains installse (7.25.UC6R1)

CentOS 6.3 and 6.4 (and presumably RedHat 6) work with the following install files:

     CZ22QEN.tar - contains install4db (7.50.FC3, 4GL debugger, 32-bit)
     CZ22MEN.tar - contains install4gp (7.50.FC3, 32-bit)
     C57Z3NA.tar - contains esql.rpm
     CZ22EEN.tar - contains installsql (7.50.FC3, 32-bit)
     C56DANA.tar - contains installse (7.25.UC6R1)

In the last case, the files are all 32-bit files. Be sure that you have the correct install files (i.e. all 32-bit or all 64-bit), depending on the OS that you've installed. The RDS, for example, comes in both a 32-bit file (CZ22MEN.tar) and a separate 64-bit file with a similarly cryptic name. If in doubt, install the 32-bit versions since they should also run on a 64-bit OS.

Informix is typically installed in the /usr/share/informix directory. You will need to create this directory as follows:

     su
     mkdir /usr/share/informix
     chown informix:informix /usr/share/informix

The tar files should be placed in a suitable archive location. Since you created an informix user, their home directory (e.g. /home/informix) is a good spot for the archive. Or, an "informix" subdirectory in the home directory of the installing user is another possible spot where the tar files can be archived. You will also need to temporarily copy them to /usr/share/informix, because you will be untarring them there. For example:

     cp /home/informix/distributions/CZ22MEN.tar /usr/share/informix

The install program doesn't do much. Basically, you untar the distribution files in the top level directory where they will live (/usr/share/informix) and tar builds the ultimate directory structure. The install scripts just set up file permissions and a few links and things, and register the installed modules, but don't do any actual copying. This clever move was done in order to make it much easier for the guy who wrote the installation script but it does nothing but create trouble for the guy installing the product.

Because of the way installation is done, you must always install the Informix software in the following order:

     Application development tools like the 4GL debugger, rapid development
     system, and the SQL development environment (oldest to newest)
     SQL APIs like esqlc (oldest to newest)
     Database servers like Informix SE (oldest to newest)
     Any additional products like documentation (oldest to newest)

Failure to do so will result in a major disaster. The reason is that the extracted files from later component installations will overwrite any files with the same names from earlier component installations. Since the Informix database server is the most important component, if its files were overwritten by the debugger's files, problems could arise. Installing the debugger and other, lesser components first precludes this from happening.

So, if you have any other distributions than those specifically mentioned herein, install them in the order shown above.

Begin the installation by setting the Informix environment variable INFORMIXDIR and exporting it:

     export INFORMIXDIR=/usr/share/informix

Untar the distribution files directly in /usr/share/informix:

     su informix
     cd /usr/share/informix
     tar -xvf install_file_1
          .
          .
          .
     tar -xvf install_file_n

This will build the proper directory structure, unpack the files that belong in it and create install scripts for each distribution. Run the install scripts as super user, in this order:

     su
     ./install4db (if you have the debugger and this install file is present)
     ./install4gp
     ./installsql
     ./installse

Note that the install may ask for license numbers and keys. It does not seem to be too discriminating. We have used license keys from many years ago and they still seem to be acceptable. So, give whatever number/key you have on hand a try and see what happens before you spend a lot of time looking for the "proper" one, if it's not readily available.

It would appear that the client SDK is also required for Informix applications to work on most versions of Linux. If it turns out to be required on your version of Linux (if in doubt, it probably is), load its tar file onto the system and untar it:

     cp /home/informix/distributions/clientsdk.tar /usr/share/informix
     su informix
     cd /usr/share/informix
     tar -xvf clientsdk.tar

For earlier versions of the Client SDK, untarring the distribution file will add some doc files to the documentation directory and then create a cpio file. Run the following command to extract the cpio file:

     cpio -i <csdk.cpi

Several smaller cpio files and some install files will result. With the INFORMIXDIR environment variable still set and running as super user, do the install:

     su
     ./installclientsdk

For later versions of the Client SDK (and the Connect component), it would appear that some genius has decided that what used to be a simple install script should now be a Java application from InstallShield. Apart from the "good" news that you now require about 24M of bloatware to do a few simple copies, this fabulous code requires certain runtime routines that preclude it from working with the standard Open Java Project JRE that is installed by your OS.

Consequently, to get this piece of junk to work, you need the proper Java runtime: if your OS is CentOS, RedHat or any of the other operating systems that use RPMs to install packages, go to the Oracle Web site and get it from Larry; if your OS is Ubuntu, install it from the Partner Repository.

Unfortunately, there is one other wrinkle. The latest Java runtime also appears not to work -- only the SE 6 version is any good. Luckily, it is still available from Oracle and Canonical.

Under CentOS/RedHat/etc., begin by selecting and downloading the latest SE 6 update from this site:

     http://www.oracle.com/technetwork/java/javase/downloads/index.html

On the page mentioned above, under the "Java SE 6 Update xx" row, you’ll see a button for "JRE Download". Click that button and, on the page that ensues, select the Linux x86 "rpm.bin" file (e.g. jre-6u37-linux-i586-rpm.bin). Save the downloaded file somewhere where you can run it.

At some point, Oracle will fix the latest 7uxx version of Java so that it no longer causes conflicts with almost every Java application currently in use, including Libre Office. At that time, you can use it instead. Until then, stick with the latest update of 6uxx.

The downloaded file is not your typical RPM. To install it, make sure it has execute permissions and then, as root, run it:

     chmod ugo+x jre-6u37-linux-i586-rpm.bin
     su
     ./jre-6u37-linux-i586-rpm.bin

If you wish, since the Oracle version of the Java runtime is better than the Open Java Project runtime, which is typically bundled with most versions of Linux, you can tell the OS to start using this version of Java. This is done by pointing the alternative Java at the one we've just installed:

     /usr/sbin/alternatives --install /usr/bin/java java \
       /usr/java/default/bin/java 20000

If you are doing this, you should also make Mozilla, and any other browser on the system, use this new Java. We'll use the alternatives command once again:

     /usr/sbin/alternatives --install /usr/lib/mozilla/plugins/libjavaplugin.so \
       libjavaplugin.so /usr/java/default/jre/lib/i386/libnpjp2.so 20000

Under Ubuntu, the Oracle/Sun Java has been moved to the Partner Repository but it still can be easily installed by apt-get, if you add the Partner Repository to the list of repositories that apt-get uses. The instructions for installing from the Partner Repository depend on which version of Ubuntu you are running. Where the word "osvername" appears, you should replace that name with the name of your OS version:

     Ubuntu 10.04 LTS  - lucid
     Ubuntu 10.10      - maverick
     Ubuntu 11.04      - natty
     Ubuntu 11.10      - oneiric
     Ubuntu 12.04 LTS  - precise
     Ubuntu 12.10      - quantal

Here are the instructions:

     su
     add-apt-repository "deb http://archive.canonical.com/ osvername partner"
     apt-get update
     apt-get install sun-java6-jre sun-java6-plugin

As with CentOS/RedHat, since the Oracle version of the Java runtime is better than the Open Java Project runtime which is bundled with Ubuntu, if you wish, you can tell the OS to start using this version of Java. This is done by pointing the alternative Java at the one we've just installed:

     su
     update-alternatives --config java

After you install the Oracle/Sun Java, you should be able to run the install script. As far as we can tell, except for being a major pain in the butt, the new install script does nothing that a plain old shell script wouldn't have done (but, hey, this is progress). Apparently, IBM doesn't see the irony in requiring you to install software from the OTHER database vendor before you can install their database.

Just in case you now find yourself in another directory from the Informix install directory, you should change back to it. You should also make sure that the Informix environment variable INFORMIXDIR is set correctly and then take care to export it. Once these are done, become super user and run the client install:

     cd /usr/share/informix
     export INFORMIXDIR=/usr/share/informix
     su
     ./installclientsdk

If you still get an error message or the install fails, under CentOS/RedHat you can try:

     ./installclientsdk -javahome /usr/java/default

Under Ubuntu you can try:

     ./installclientsdk -javahome /usr/lib/jvm/java-6-sun/jre/

Some intermediate versions of Informix may require Connect for your Informix applications to work. If this proves to be the case (it doesn't appear to be so with the latest version of Informix), begin by loading its tar file onto the Linux machine and untarring it. For example:

     tar -xvf connect.3.50.UC6.LINUX.tar

This will contain the installconnect script, which you should then run as super user:

     su
     ./installconnect

Note that the main install appears to install GLS and the messages so there is no need to run install_gls or install_msg. Also note that we say "No" for version 9 of the engine as this seems to be a good choice.

As we noted earlier, the Informix software must always be installed in the following order because failure to do so will result in a major disaster:

     Application development tools (oldest to newest)
     SQL APIs (oldest to newest)
     Database servers (oldest to newest)
     Any additional products like documentation (oldest to newest)

This being the case, if you have any other distributions than those that we have explicitly covered in these instructions, you will need to figure out where they fit in the big scheme of things and install them in correct order.

Once all of the installs are done and before you proceed too much further, you may wish to test a few of the Informix programs that were installed. Testing them can be as simple as typing their names at the command line. For example, we'd suggest you try:

     /usr/share/informix/bin/dbaccess
     /usr/share/informix/bin/fglpc
     /usr/share/informix/bin/isql
     /usr/share/informix/bin/secheck

These commands will test most of the installed packages and let you know if the basic Informix functionality will run. If you see an error that says "cannot execute binary file", it is probable that you installed the wrong version of Informix on your system. For example, on a system that has the 32-bit C-library installed, you've installed the 64-bit version of Informix. You can find out more information about the executables that you've installed with the file command, like this:

     file /usr/share/informix/bin/isql

The output from this command should reveal what type of executable is installed. For example, you might see:

     /usr/share/informix/bin/isql: ELF 32-bit LSB executable, Intel 80386,
       version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux
       2.2.5, not stripped

If your system is a 32-bit system, you're in top shape. If it is a 64-bit system, your day just got worse. You might consider reinstalling (in the proper order) the Informix packages again, but this time using the correct ones.

Whether Informix is to be invoked by remote clients, through TCP, or locally in loopback mode, you need to assign a service number for the database server in /etc/services (if you think the unnamed pipes/IPC thing works, give your head a shake). Since Informix didn't see fit to request a well-known port from IANA, we'll use the Sybase port (1498). Hack /etc/services and change or add port 1498:

     informix      1498/tcp                   # Sybase (usurped by Informix)
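If you'd rather script that edit than hack /etc/services by hand, here is a sketch. It deliberately works on a scratch copy (./services.test) rather than the real file, comments out any earlier uncommented claim on 1498/tcp, and appends the informix entry only if one isn't already there:

```shell
# Scratch copy for demonstration; point SERVICES at /etc/services (as root)
# to do it for real.
SERVICES=./services.test
printf 'sybase-sqlany 1498/tcp # Sybase SQL Anywhere\n' > "$SERVICES"

if ! grep -q '^informix[[:space:]]' "$SERVICES"; then
    # Comment out any earlier, uncommented claim on port 1498/tcp ...
    sed -i 's|^\([^#].*[[:space:]]1498/tcp\)|#\1|' "$SERVICES"
    # ... then add our entry for it.
    echo 'informix      1498/tcp                   # Sybase (usurped by Informix)' >> "$SERVICES"
fi
cat "$SERVICES"
```

Running it twice is harmless: the grep guard keeps a second informix line from being appended.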

Rename the sqlhosts file in the /usr/share/informix/etc subdirectory to sqlhosts.demo and then make a copy of it as sqlhosts:

     cd /usr/share/informix/etc
     mv sqlhosts sqlhosts.demo
     cp sqlhosts.demo sqlhosts

If you are just setting up a database server that can be used by TCP clients, hack the sqlhosts file to read:

     servname sesoctcp machname informix

Where servname is the name you'll call the Informix service (it can equal the machine name, for simplicity's sake), "sesoctcp" specifies that TCP sockets are to be used (remember that unnamed pipes are a losing proposition), machname is the name of the machine that runs the database (as defined in /etc/hosts or resolved through DNS) and "informix" is the well-known port name of the database server, as defined in /etc/services.

If you want to use IPX, you'll need to read the Database Server Administrator's Guide because it is beyond what we care enough about to describe herein.

Note that, in order for the server and all its clients to play the same game, they must all be shown the same files at /usr/share/informix/etc/sqlhosts and /etc/services. Here is an example of /usr/share/informix/etc/sqlhosts:

     #*************************************************************************
     #
     # File used to describe Informix servers, which protocols they use and
     # what is run to service the requests.
     #
     #*************************************************************************
     deltoids       sesoctcp  deltoids       informix  # Well-known port
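Since a malformed sqlhosts line is a common source of grief, a quick field-count check can help. This sketch validates a scratch copy built from the example entry above; every non-comment, non-blank line should have exactly the four fields described earlier:

```shell
# Scratch sqlhosts using the example entry from the text; for the real file,
# set SQLHOSTS=/usr/share/informix/etc/sqlhosts.
SQLHOSTS=./sqlhosts.test
cat > "$SQLHOSTS" <<'EOF'
# Informix servers, their protocols and their services.
deltoids       sesoctcp  deltoids       informix  # Well-known port
EOF

# Every non-comment, non-blank line must have exactly four fields:
# servername, nettype, hostname, servicename.
awk '!/^[[:space:]]*(#|$)/ {
    sub(/#.*/, "")                     # drop any trailing comment
    if (NF != 4) { printf "line %d: %d fields (want 4)\n", NR, NF; bad = 1 }
} END { exit bad }' "$SQLHOSTS" && echo "sqlhosts OK" > sqlhosts.result
cat sqlhosts.result
```
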

Before you can do anything with the database server, you must start it as root. As a matter of fact, a good test to see whether Informix is properly installed is to start it as root. The following shell commands will take care of starting it:

     su
     export INFORMIXDIR=/usr/share/informix
     $INFORMIXDIR/lib/sqlexecd deltoids -l /var/log/sqlexecd.log

Note that, if you get a -930 error, it is probably because your host's name does not appear in the /etc/hosts file exactly the way Informix is looking for it, or because the name is not being looked up properly in DNS. The message that accompanies the -930 error indicates which host name Informix is looking for.
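A quick way to check the hosts-file side of a -930 error is to look for the name among the name fields of a hosts file. This sketch runs against a scratch copy (the names are made up); for the real thing, point HOSTSFILE at /etc/hosts, set NAME to the host name from the error message, and remember that DNS may also be in play:

```shell
# Scratch hosts file; for real checks, set HOSTSFILE=/etc/hosts and NAME to
# the host name from the -930 error message.
HOSTSFILE=./hosts.test
NAME=deltoids
cat > "$HOSTSFILE" <<'EOF'
127.0.0.1   localhost
10.0.0.5    deltoids deltoids.example.com
EOF

# Accept the name only as a whole name field (field 2 onward), never as a
# substring of some other name.
if awk -v name="$NAME" '!/^[[:space:]]*#/ {
        for (i = 2; i <= NF; i++) if ($i == name) found = 1
    } END { exit !found }' "$HOSTSFILE"; then
    echo "$NAME found" > hosts.result
else
    echo "$NAME missing" > hosts.result
fi
cat hosts.result
```
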

Starting/Stopping Informix at Boot Time

If it is desirable to start/stop Informix at the system level, the following startup script (which should be installed in /etc/rc.d/init.d/sqlexecd) will do the trick.

     #! /bin/sh
     #
     # informix - Script to start/stop the Informix database servitron.
     #
     # Revision History:
     # ewilde      2004Mar23  Initial coding.
     #
     # chkconfig: 2345 12 78
     # description: Informix SQL database server.
     # processname: sqlexecd
     # config: /usr/share/informix/etc/sqlhosts
     # pidfile: /var/run/informix/sqlexecd.pid
     # Source function library.
     . /etc/rc.d/init.d/functions
     # Source networking configuration.
     . /etc/sysconfig/network
     # Information about what and where we're running.
     prog="Informix"
     INFORMIXDIR="/usr/share/informix"
     lockfile="/var/lock/subsys/sqlexecd"
     logfile="/var/log/sqlexecd.log"
     process="sqlexecd"
     service="deltoids"
     owner="informix"
     export INFORMIXDIR
     # Start up Informix.
     start()
     {
     # Set up the logfile.
     touch ${logfile}
     chown ${owner}.${owner} ${logfile}
     chmod 0664 ${logfile}
     # Start the servitron.
     ${INFORMIXDIR}/lib/${process} ${service} -l ${logfile} >/dev/null 2>&1
     # How'd that go?
     ret=$?
     if [ $ret -eq 0 ]; then
          action $"Starting $prog: " /bin/true
     else
          action $"Starting $prog: " /bin/false
     fi
     # Locking, if necessary.
     [ $ret -eq 0 ] && touch ${lockfile}
     return $ret
     }
     # Stop Informix.
     stop()
     {
     # If there's a lock file, try to find the PID.
     if [ -f ${lockfile} ]; then
          SQLPID=`ps -eo pid,args | grep ${INFORMIXDIR} | grep ${process} \
              | grep -v grep | sed -n 's/^ *\([0-9][0-9]*\).*/\1/p'`
          if [ x"$SQLPID" != x ]; then
               # Kill the PID.
               kill -9 $SQLPID
               ret=$?
               # See how it went.
               if [ $ret -eq 0 ]; then
                    action $"Stopping $prog: " /bin/true
               else
                    action $"Stopping $prog: " /bin/false
               fi
               # Clean up the lock file.
               [ $ret -eq 0 ] && rm -f ${lockfile}
               return $ret
          fi
     fi
     # Not running.
     return 0
     }
     # Restart Informix.
     restart()
     {
     stop
     start
     }
     # Conditional restart of Informix.
     condrestart()
     {
     [ -e ${lockfile} ] && restart || :
     }
     # See how we were called.
     case "$1" in
          start)
               start
               ;;
          stop)
               stop
               ;;
          status)
               status ${process}
               ;;
          restart)
               restart
               ;;
          condrestart)
               condrestart
               ;;
          *)
               echo $"Usage: $0 {start|stop|status|condrestart|restart}"
               exit 1
     esac
     exit $?

To install it into the system, use:

     /sbin/chkconfig --add sqlexecd
     /sbin/chkconfig sqlexecd on
     /sbin/chkconfig --list sqlexecd

The startup script shown above will write logfiles to /var/log. You should add a logrotate file in /etc/logrotate.d/sql that looks like this one:

     /var/log/sqlexecd.log {
         notifempty
         missingok
         create 0664 informix informix
         copytruncate
     }

Missing Informix Links

The fabulous Informix install scripts don't put links to the dynamic libraries into the /usr/lib directory. Nor do they update /etc/ld.so.conf with a list of the Informix directories or add a file containing a list of the Informix directories to /etc/ld.so.conf.d. If you'll be building 4GL programs or trying to run certain Informix utilities, you'll need to fix this problem.

On earlier systems that don't use /etc/ld.so.conf (or on any system that you wish to have all dynamic libraries in /usr/lib), you may install links to the Informix dynamic libraries yourself to get them to work. The following command will list the files needing links:

     find /usr/share/informix -name \*.so -print

If you are happy with what you see (i.e. it makes sense that the libraries listed should be included in the dynamic libraries load path), the following command will actually make the links in /usr/lib:

     find /usr/share/informix -name \*.so -exec ln -s \{\} /usr/lib \;
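To see what that find/ln incantation does without touching /usr/lib, you can dry-run it against a scratch tree. The directory and library names below are made up for the demonstration:

```shell
# Fake Informix tree and a stand-in for /usr/lib; for real use, the find
# root is /usr/share/informix and the link target directory is /usr/lib.
SRC="$(pwd)/informix.link.test"
DEST="$(pwd)/usrlib.test"
mkdir -p "$SRC/lib/esql" "$DEST"
touch "$SRC/lib/libifasf.so" "$SRC/lib/esql/libifsql.so"

# Same shape as the command in the text: one symlink per shared library.
find "$SRC" -name '*.so' -exec ln -s {} "$DEST" \;
ls "$DEST"
```

Because the find root is an absolute path, the symlinks it creates resolve no matter what directory you list them from.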

On modern Linux systems including the latest CentOS/RedHat and Ubuntu, the preferred method for fixing this problem is to add a file to /etc/ld.so.conf.d that contains a list of all of the dynamic library directories. You can determine all of the directories that contain dynamic libraries in the Informix install path with the same command as shown above:

     find /usr/share/informix -name \*.so -print

We did so on our system and came up with this file:

/etc/ld.so.conf.d/informix-i386.conf:

     /usr/share/informix/lib/c++
     /usr/share/informix/lib/cli
     /usr/share/informix/lib/client/csm
     /usr/share/informix/lib/dmi
     /usr/share/informix/lib/esql
     /usr/share/informix/lib/icc/icclib
     /usr/share/informix/lib

After you've added the file to /etc/ld.so.conf.d, you'll need to rebuild the loader's cache of dynamic libraries with these commands:

     su
     ldconfig

You can check that the cache has been updated with:

     ldconfig -p | grep /usr/share/informix
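Generating the conf file from the find output can be automated. Here's a sketch against a scratch tree; swap in /usr/share/informix for the find root and /etc/ld.so.conf.d/informix-i386.conf for the output file when doing it for real:

```shell
# Fake tree for demonstration only; the library names are made up.
ROOT=./informix.conf.test
mkdir -p "$ROOT/lib/esql" "$ROOT/lib/cli"
touch "$ROOT/lib/libifasf.so" "$ROOT/lib/esql/libifsql.so" \
      "$ROOT/lib/cli/libifcli.so"

# One directory per line, deduplicated: exactly the format ld.so.conf.d wants.
find "$ROOT" -name '*.so' | xargs -n1 dirname | sort -u > informix-i386.conf
cat informix-i386.conf
```

After copying the result into /etc/ld.so.conf.d, run ldconfig as before.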

Informix Threads Fix

Note that the gang at RedHat has screwed up the threads library implementation on RedHat 9 such that the Informix tools get segmentation violations (later versions of RedHat such as EL3, EL4, EL5 and EL6, as well as CentOS 4, 5 and 6 don't have this problem).

The segmentation violation or segmentation fault problems are due to RedHat 9.0 including the Native POSIX Thread Library (NPTL), a new implementation of POSIX threads for Linux.

This library provides performance improvements and increased scalability for i686 or better processors. This thread library is designed to be binary compatible with the old Linux Threads implementation; however, applications that rely on the places where the Linux Threads implementation deviates from the POSIX standard will need to be fixed.

If an application does not work properly with NPTL, you can run it using the old Linux Threads implementation by setting the following environment variable:

     LD_ASSUME_KERNEL=kernel-version

The following versions are available:

     2.4.1 - Linuxthreads with floating stacks
     2.2.5 - Linuxthreads without floating stacks

If you want to know what the current kernel version of Red Hat 9.0 issue this command:

     rpm -q kernel

If your kernel is something like "kernel-2.4.20-8", use "2.4.1". Otherwise, if it is a version 2.2 kernel, use "2.2.5". Note that the LD_ASSUME_KERNEL environment variable will have to be exported for it to work. Also note that the development environment setup script (setupinfx) shown in the "Using an Informix Database" section does this properly.
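The version test described above can be wrapped in a small helper. This is a sketch (the function name assume_kernel is our own invention); it maps a `uname -r` style string to the LD_ASSUME_KERNEL value to export, printing nothing for later kernels that need none:

```shell
# Map a kernel release (as printed by `uname -r`) to the LD_ASSUME_KERNEL
# value that keeps the Informix tools happy; prints nothing for kernels
# where NPTL is fine and the variable should stay unset.
assume_kernel() {
    case "$1" in
        2.2.*) echo 2.2.5 ;;  # Linuxthreads without floating stacks
        2.4.*) echo 2.4.1 ;;  # Linuxthreads with floating stacks
        *)     : ;;           # 2.6 and later: leave LD_ASSUME_KERNEL unset
    esac
}

assume_kernel "2.4.20-8"            > assume.out
assume_kernel "2.6.18-92.1.18.el5" >> assume.out
cat assume.out
```

For real use you'd capture the output of `assume_kernel \`uname -r\`` and export LD_ASSUME_KERNEL only when it is non-empty, as the setupinfx script does.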

Broken Informix Dynamic Libraries

When compiling C programs to be included in "fglgo", you may encounter the following problem, caused by IBM building the Informix dynamic libraries on an older Linux platform. Here is an example:

     cfglgo ad.c ad2.c fgiusr.c read_dir.c -o newfglgo
     /usr/share/informix/lib/esql/libifgls.so: undefined reference to `__ctype_b'
     /usr/share/informix/lib/esql/libifgls.so: undefined reference to
         `__ctype_toupper'
     /usr/share/informix/lib/esql/libifgls.so: undefined reference to
         `__ctype_tolower'
     collect2: ld returned 1 exit status

The reference to `__ctype_b' should be satisfied by the C runtime library.

The problem is related to glibc. Anything on RedHat9 with a glibc >= glibc-2.3.2-11.9 is causing the problem. Actually, any stock glibc from at least 2.3.0 up is causing problems.

glibc is no longer exporting `__ctype_b'.

The compat_symbol lines were removed from ctype-info.c in the newer glibc libraries. RedHat8 and below kept the lines in ctype-info.c to support old static libraries.

To fix the problem, add these stubs to one of the modules being compiled and linked with fglgo.

     /*
      * RH9 is missing externals for certain functions from the ctype library.
      * Since these functions are referenced in the Informix libraries (which
      * we can't recompile to fix the problem), we must include stubs herein to
      * resolve the unreferenced externals and call the correct replacement
      * functions.  <ctype.h> declares toupper(), tolower() and __ctype_b_loc().
      */
     #include <ctype.h>
     const unsigned short int *__ctype_b (void)
         { return (*__ctype_b_loc()); }
     int __ctype_toupper(int c)
         { return (toupper(c)); }
     int __ctype_tolower(int c)
         { return (tolower(c)); }

Perl Informix DBD Interface

You may wish to develop some applications that use the Perl Informix DBD interface (e.g. a Web UI to your data). If you plan to do so, you will need some extra Perl modules so you might as well fire up CPAN and get started with their installation. However, before invoking CPAN, be sure that the server has access to the outside world (through your firewall) so it can download the Perl modules from the selected CPAN site. Once it does, type:

     su
     perl -MCPAN -e shell

The first time through, you can let automatic configuration set everything up. All that remains is to set up the URL list which, from within the CPAN shell, you do with:

     o conf init urllist

CPAN will then ask you to pick some mirror sites for downloading sources. Apparently, those sites that use FTP are typically slow and prone to fail so you may wish to only consider those sites that use HTTP. If you pick North America and United States, here are the sites that we use:

     http://cpan.belfry.net/
     http://cpan.mirror.facebook.net/
     http://mirrors.ibiblio.org/CPAN/
     http://www.perl.com/CPAN/

If you ever need to redo the list of sites, you can rerun the configuration at any time from within the CPAN shell with:

     o conf init urllist

Once you have made your choices, remember to commit the configuration with:

     o conf commit

There is more information at:

     http://rackerhacker.com/2008/06/16/adjusting-cpan-mirror-list/

Here is a list of modules that need to be installed:

     install Bundle::CPAN
     force install Date::Calc  (later versions have it already)
     install Time::HiRes

If the install of Time::HiRes fails, exit CPAN, do the following and then rerun the install of HiRes from scratch:

     export LC_ALL=C

Before installing the Informix bundle, you will need to exit CPAN and do the following:

     PATH=/usr/share/informix/bin:$PATH
     INFORMIXDIR=/usr/share/informix
     INFORMIXSERVER=deltoids  (or whatever your Informix server is named)
     export PATH INFORMIXDIR INFORMIXSERVER

Set the environment variables needed by the Perl install script.

     DBPATH=/home/tst/perltest  (pick the user who will own the test directory)
     DBD_INFORMIX_DATABASE=stores
     DBD_INFORMIX_USERNAME=root
     DBD_INFORMIX_PASSWORD=itsasecret  (use the actual root password)
     export DBPATH DBD_INFORMIX_DATABASE DBD_INFORMIX_USERNAME \
         DBD_INFORMIX_PASSWORD

Note that the super user password is now stored in an environment variable in plain text. This is a big security hole. Only do this for the time needed to run the Perl install script.

Make a test directory where the database used by the install script can be stored.

     su
     mkdir /home/tst/perltest
     chown tst:tst /home/tst/perltest
     chmod ug=rwx,o=rx /home/tst/perltest

Change to the newly created test directory. Despite the fact that the DBPATH environment variable is set to the test directory, Informix is such a brain dead piece of scrap that it still creates the new database in the current directory. So, do:

     cd /home/tst/perltest

Create a database in this directory that can be used by the install script. To do so, run "isql" and use the "DATABASE" menu item followed by the "CREATE" menu item. Enter the name of the new database (e.g. "stores") and press enter to create it. Back out of the database menu and switch to the Query Language menu where you should enter and execute the SQL statement:

     grant dba to public

When done, exit from "isql", restart CPAN and then install DBD:

     perl -MCPAN -e shell
     install Bundle::DBD::Informix

We're now done with the Perl test database so we can get rid of it:

     cd /root
     rm -rf /home/tst/perltest

At this point, you must log off to clear the environment variables with the super user password in them. Then, you need to edit root's bash history to delete the line where you set the password. To do that, log on as root again. With your favorite editor, edit /root/.bash_history and remove any lines that contain the root password. Save the file. Log out again (the command that sets the password is still in the in-memory command history). You can log in for a third time and check that the plaintext password is no longer visible in the command history. What a pain in the butt! Nice install procedure.
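If editing /root/.bash_history by hand seems error-prone, the scrubbing step can be scripted. A sketch, run here against a scratch copy with a made-up password (point HIST at /root/.bash_history, as root, for the real cleanup):

```shell
# Scratch copy of a shell history containing the plaintext password; for
# real use, set HIST=/root/.bash_history and SECRET to the actual password.
SECRET=itsasecret
HIST=./bash_history.test
cat > "$HIST" <<'EOF'
cd /home/tst/perltest
export DBD_INFORMIX_PASSWORD=itsasecret
perl -MCPAN -e shell
EOF

# Keep every line that does NOT mention the secret, then swap the file in.
grep -v "$SECRET" "$HIST" > "$HIST.clean" && mv "$HIST.clean" "$HIST"
cat "$HIST"
```

Remember that the live shell's in-memory history still holds the offending line, so you must still log out afterward.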

Setting Up Termcap for Informix

Many of the Informix tools and applications (e.g. dbaccess, any 4GL apps that you've written) use CURSES to display their output and interface with the user. For CURSES to work, termcap must be installed.

Unfortunately, on many of the later versions of your favorite OS, termcap is now obsolete (which is not to say that nobody is using it but simply that, "we got bored supporting it, especially when we could be coming up with another cool-looking-but-essentially-useless skin for some really-hot-or-maybe-not-so-much app"). Lucky for us, a replacement for termcap exists in the form of a compatibility library for terminfo.

Under CentOS/RedHat/etc., you can install the compatibility library like this:

     su
     yum install compat-libtermcap

Under Ubuntu, you can install the compatibility library with:

     su
     apt-get install termcap-compat

Under Ubuntu, you may also want:

     apt-get install libncurses5-dev

Alternately, instead of using the terminfo compatibility library, Informix also comes with its own copy of termcap, which you can use from the Informix install library by setting the environment TERMCAP, like this:

     export TERMCAP=${INFORMIXDIR}/etc/termcap

if the INFORMIXDIR environment variable is set, or, depending on where you installed Informix, something like this:

     export TERMCAP=/usr/share/informix/etc/termcap

In many cases, we prefer to use our own version of TERMCAP which we got from an old version of RedHat and which we keep around just for running old, legacy stuff. As with the termcap file in the Informix install library, the TERMCAP environment is pointed at the old RedHat termcap file:

     export TERMCAP=/home/dbusr/termcap.OLDRH

Any of these approaches should work equally well. Or, so one would think. Unfortunately, if you choose the third approach and use an old RedHat termcap file, you may run into a little problem that Informix has with termcap entries that are chained. In a termcap file, chaining is done with the "tc=" parameter. The problem manifests itself when you run any of the Informix tools or 4GL apps with this error:

     Termcap entry too long
     Too many tc= indirections

Apparently, in the case of Informix, "Too many" means four (i.e. more than three). The vt100 entry in older RedHat termcap files has this problem, for example, while the vt200 and vt220 entries appear not to. The simple solution is to pick vt200 or some other entry that works.
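If you want to count the indirections in a chain yourself, here is a sketch of a small helper (our own invention, not an Informix tool). It assumes each entry sits on a single line, whereas real termcap files use backslash continuations that you'd flatten first:

```shell
# Tiny sample termcap with the vt100 chain from the text, one line per entry.
cat > termcap.test <<'EOF'
vt100|dec vt100:co#80:li#24:tc=vt100+fnkeys:
vt100+fnkeys|fnkeys:k0=\EOy:tc=vt100+pfkeys:
vt100+pfkeys|pfkeys:k1=\EOP:tc=vt100+keypad:
vt100+keypad|keypad:K1=\EOq:
EOF

# Follow tc= links from an entry and count the hops. Assumes entry names
# contain no regex metacharacters other than "+".
count_tc() {  # usage: count_tc <termcap-file> <entry-name>
    file=$1; name=$2; hops=0
    while entry=$(grep "^$name[|:]" "$file"); do
        next=$(printf '%s\n' "$entry" | sed -n 's/.*:tc=\([^:]*\):.*/\1/p')
        [ -z "$next" ] && break
        hops=$((hops + 1))
        name=$next
    done
    echo "$hops"
}

count_tc termcap.test vt100 > tc.count
cat tc.count
```

A count of four or more on your vt100 entry means Informix will choke on it.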

However, it is possible to fix the problem with your favorite text editor. Open up /etc/termcap and look for something like this:

     vt100|vt100-am|dec vt100 (w/advanced video):\
             :5i:am:bs:ms:xn:xo:\
             :co#80:it#8:li#24:vt#3:\
                  .
                  .
                  .
             :us=\E[4m:tc=vt100+fnkeys:

If you follow the chain to the vt100+fnkeys entry you might see:

     vt100+fnkeys|dec vt100 numeric keypad:\
             :k0=\EOy:k5=\EOt:k6=\EOu:k7=\EOv:k8=\EOl:k9=\EOw:k;=\EOx:\
             :tc=vt100+pfkeys:

Following the chain to the vt100+pfkeys entry you might see:

     vt100+pfkeys|dec vt100 numeric keypad:\
             :@8=\EOM:k1=\EOP:k2=\EOQ:k3=\EOR:k4=\EOS:tc=vt100+keypad:

And, following the chain to the fourth vt100+keypad entry you might see:

     vt100+keypad|dec vt100 numeric keypad no fkeys:\
             :K1=\EOq:K2=\EOr:K3=\EOs:K4=\EOp:K5=\EOn:

Simply follow all of the indirections in the chain and smack them all together in the single vt100 entry, like this:

     vt100|vt100-am|dec vt100 (w/advanced video):\
             :5i:am:bs:ms:xn:xo:\
             :co#80:it#8:li#24:vt#3:\
                  .
                  .
                  .
             :us=\E[4m:\
             :k0=\EOy:k5=\EOt:k6=\EOu:k7=\EOv:k8=\EOl:k9=\EOw:k;=\EOx:\
             :@8=\EOM:k1=\EOP:k2=\EOQ:k3=\EOR:k4=\EOS:\
             :K1=\EOq:K2=\EOr:K3=\EOs:K4=\EOp:K5=\EOn:

We have also noticed some strange behavior when connecting to later Linux systems (e.g. CentOS 6.4) via SSH only. If you use telnet, everything is fine. But, we all know we shouldn't use telnet (the world is an evil place, my friend) so we use SSH instead. If you do, you may find that somebody in kernel-land, or perhaps it's SSH, is monkeying with the key codes that your terminal emulator is sending. Despite the fact that your terminal emulator correctly sends "\EOA", "\EOB", etc. for the cursor keys, cursor movement is broken.

You can figure out what is being received by Informix (and other apps, for that matter) by running:

     showkey -a

Type the broken keys on your terminal emulator and you'll see what codes are actually being sent. We're betting that someone is mapping:

     \EOA  -->  \E[A
     \EOB  -->  \E[B
     \EOC  -->  \E[C
     \EOD  -->  \E[D

These codes ain't strictly incorrect. They work fine with the vt200 termcap entry, for example. But they are wrong for vt100s. So, once again, you can pick the vt200 or vt220 entry or you can "fix" the vt100 entry. If you choose the latter, it should look something like this:

     vt100|vt100-am|dec vt100 (w/advanced video):\
                  .
                  .
                  .
             :ku=\E[A:kd=\E[B:kr=\E[C:kl=\E[D:\
                  .
                  .
                  .

Whether you use the Informix-supplied termcap file or an older RedHat termcap file, you'll probably be faced with this problem either way. As for the terminfo compatibility library, it's anybody's guess.

Emulator Key Mapping for Running Informix Apps

Many Informix applications use the function keys F1-F10 to implement special functions directly. Most PC-based emulators are not set up correctly to send these keys to the Informix application.

To set an emulator up properly, map the keys F1 thru F10 as follows:

     F1  - \eOy
     F2  - \eOP
     F3  - \eOQ
     F4  - \eOR
     F5  - \eOS
     F6  - \eOt
     F7  - \eOu
     F8  - \eOv
     F9  - \eOl
     F10 - \eOw

where "\e" is whatever string the emulator needs to see in order to send an escape. These settings correspond to the termcap entries "k0", "k1", ... "k9", as noted in the Informix document "IBM Informix 4GL Reference Manual" under "Modifying termcap and terminfo", the "Extending Function Key Definitions" section.

Any logins that wish to use Informix applications should set "TERM=vt100". Note that the Informix termcap in $INFORMIXDIR/etc/termcap has some whack-oh entries for VT100 keys so don't use it. The regular Linux VT100 termcap definition seems to work OK.

Using an Informix Database

If you'd like to set up the databases for a particular system in a separate user's home directory, you should add those users now. For example:

     /usr/sbin/useradd -c "Collection System" -m coll

If you want to give the new userid a password:

     passwd coll

To run Informix from a user's home directory, you must set up several environment variables. The following shell script, or something like it altered as appropriate for the system in question, might be useful.

setupinfx:

     #!/bin/sh
     #
     # Set up the development environment for Collect under Informix on Linux.
     #
     # Be sure to run this script with the "." command.  For example:
     #
     #   . /home/coll/devenv
     #
     # If you do not use the "." command, the values of the environment
     # variables will not be set in the current shell environment.
     #
     INFORMIXDIR=/usr/share/informix
     INFORMIXSERVER=deltoids
     DBPATH=/home/coll/colldev
     DBTEMP=/home/coll/dbtmp
     MAKE_INSTALL=/home/coll/colldev
     export INFORMIXDIR INFORMIXSERVER DBPATH DBTEMP MAKE_INSTALL
     #
     # Set up the locale, if need be.
     #
     #CLIENT_LOCALE=en_us
     #DB_LOCALE=en_us
     #SERVER_LOCALE=en_us
     #export CLIENT_LOCALE DB_LOCALE SERVER_LOCALE
     #
     # Note that the gang at RedHat has screwed up the threads library
     # implementation on RedHat 9 such that the Informix tools get segmentation
     # violations.
     #
     # The segmentation violation or segmentation fault problems are due to
     # RedHat 9.0 including the Native POSIX Thread Library (NPTL), a new
     # implementation of POSIX threads for Linux.
     #
     # This library provides performance improvements and increased scalability
     # for i686 or better processors.  This thread library is designed to be
     # binary compatible with the old Linux Threads implementation; however,
     # applications that rely on the places where the Linux Threads implementation
     # deviates from the POSIX standard will need to be fixed.
     #
     # If an application does not work properly with NPTL, you can run it using
     # the old Linux Threads implementation by setting the following environment
     # variable:
     #
     #     LD_ASSUME_KERNEL=kernel-version
     #
     # The following versions are available:
     #
     #     2.4.1 - Linuxthreads with floating stacks
     #     2.2.5 - Linuxthreads without floating stacks
     #
     # To find out what the current kernel version of Red Hat is, we issue this
     # command:
     #
     #     uname -r
     #
     # If the kernel version is something like "2.4.18-26.8.0" or "2.4.20-8", we
     # use "2.4.1".  Otherwise, if it is a version 2.2 kernel, we use "2.2.5".
     # For later versions of the kernel, such as "2.6.18-92.1.18.el5", nothing
     # need be done.
     #
     KernelVer=`uname -r`
     echo $KernelVer | grep -q "^2\.2"
     DirVal=$?
     if [ $DirVal = 0 ]; then
         echo Assuming kernel version 2.2.5
         LD_ASSUME_KERNEL=2.2.5; export LD_ASSUME_KERNEL
     else
         echo $KernelVer | grep -q "^2\.4"
         DirVal=$?
         if [ $DirVal = 0 ]; then
             echo Assuming kernel version 2.4.1
             LD_ASSUME_KERNEL=2.4.1; export LD_ASSUME_KERNEL
         fi
     fi
     #
     # The fabulous Informix install scripts don't put links to the dynamic
     # libraries into the /usr/lib directory.  If you'll be building 4GL programs
     # or trying to run certain Informix utilities, you may install the links
     # yourself to get them to work.  The following command will list the files
     # needing links:
     #
     #     find /usr/share/informix -name \*.so -print
     #
     # If you are happy with what you see (i.e. it makes sense that the libraries
     # listed should be included in the dynamic libraries load path), the
     # following command will actually make the links in /usr/lib:
     #
     #     find /usr/share/informix -name \*.so -exec ln -s \{\} /usr/lib \;
     #
     # On modern Linux systems including the latest CentOS/RedHat and Ubuntu,
     # the preferred method for fixing this problem is to add a file to
     # /etc/ld.so.conf.d that contains a list of all of the dynamic library
     # directories.  You can determine all of the directories that contain
     # dynamic libraries in the Informix install path with the same command as
     # shown above:
     #
     #   find /usr/share/informix -name \*.so -print
     #
     # If you do this on your system, you'll probably come up with a file,
     # call it /etc/ld.so.conf.d/informix-i386.conf, that looks like this:
     #
     #   /usr/share/informix/lib/c++
     #   /usr/share/informix/lib/cli
     #   /usr/share/informix/lib/client/csm
     #   /usr/share/informix/lib/dmi
     #   /usr/share/informix/lib/esql
     #   /usr/share/informix/lib/icc/icclib
     #   /usr/share/informix/lib
     #
     # If neither of these approaches is taken, a final alternative is to
     # add the Informix libraries to the loader's dynamic library search
     # path, which is what we do on the fly, here.  It will work in conjunction
     # with either the symlink approach or the ld.so.conf approach, mentioned
     # above so there is no harm in doing it here, just in case.  Kind of like
     # wearing a belt and suspenders too.
     #
     # Note that we only do this if the Informix libraries are not already on the
     # search path.
     #
     echo $LD_LIBRARY_PATH | grep -q "$INFORMIXDIR/lib"
     DirVal=$?
     if [ $DirVal != 0 ]; then
         LD_LIBRARY_PATH=$INFORMIXDIR/lib:$INFORMIXDIR/lib/esql:$INFORMIXDIR/lib/tools:$LD_LIBRARY_PATH
         export LD_LIBRARY_PATH
     fi
     #
     # Set the path to include the Informix binary directory at the front, if
     # it isn't already there.
     #
     echo $PATH | grep -q "$INFORMIXDIR/bin"
     DirVal=$?
     if [ $DirVal != 0 ]; then
         PATH=$INFORMIXDIR/bin:$PATH
     fi
     #
     # Now, add the current directory.
     #
     echo $PATH | grep -q "\.:"
     DirVal=$?
     if [ $DirVal != 0 ]; then
         PATH=.:$PATH
     fi
     export PATH
     #
     # Tell the database to use the termlp program for printing.  This program
     # will redirect output to the terminal in such a way as to cause the
     # emulator to print it on the user's local printer.
     #
     # Basically, this program sends:
     #
     #   <esc>[5i
     #   <esc>E
     #   <esc>(s12H
     #
     # Followed by the file to be printed.
     #
     # Followed by:
     #
     #   <esc>[4i
     #
     # This switches on the printer port on the local PC, resets the printer,
     # sets it to 12 cpi (96 columns in portrait, 136 columns in landscape),
     # prints the file, then turns the printer port off.
     #
     DBPRINT=/home/coll/colldev/termlp
     export DBPRINT
     #
     # The regular termcap doesn't work with Informix 4GL programs.  You can
     # use the one that Informix supplies but we have our own.
     #
     # Incidentally, later versions of Linux use terminfo, not termcap, but
     # Informix didn't get that memo, hence the need for its own termcap.
     #
     #TERMCAP=${INFORMIXDIR}/etc/termcap; export TERMCAP
     TERMCAP=/home/coll/termcap.OLDRH; export TERMCAP
     #INFORMIXTERM=terminfo; export INFORMIXTERM
     #
     # Display the Informix directory choices for the user.
     #
     echo Informix directory: $INFORMIXDIR
     echo Informix server and path: $INFORMIXSERVER, $DBPATH
     #
     # Change to the build directory (this is important, since Informix can
     # screw over the wrong database tables, if you don't do this).  Note that
     # the ACTIVE_COLL_DIR is just a variable that we set for our convenience.
     # It has nothing to do with Informix.
     #
     ACTIVE_COLL_DIR=/home/coll/colldev; export ACTIVE_COLL_DIR
     cd /home/coll/colldev

After you've set up this script, you can invoke it from the user's login script or you can run it from the command line like this:

     . ./setupinfx
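Incidentally, the kernel-detection logic in the script can be exercised on its own. Here is a standalone sketch (the function name is ours, not part of Informix or the script above):

```shell
#!/bin/sh
# Standalone version of the LD_ASSUME_KERNEL selection logic used in
# setupinfx.  Pass a kernel version string, as printed by "uname -r".
pick_ld_assume_kernel() {
    case "$1" in
        2.2*) echo 2.2.5 ;;   # Linuxthreads without floating stacks
        2.4*) echo 2.4.1 ;;   # Linuxthreads with floating stacks
        *)    echo none ;;    # modern kernels: leave LD_ASSUME_KERNEL unset
    esac
}

pick_ld_assume_kernel "2.4.20-8"           # prints 2.4.1
pick_ld_assume_kernel "2.6.18-92.1.18.el5" # prints none
```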

This should now allow you to run Informix, Informix tools and any 4GL programs with impunity. However, if you experience an error that looks like this:

     Program stopped at "xxx.4gl", line number 0.
     SQL statement error number -404.
     The cursor or statement is not available.

It is probably because the Informix database engine isn't running, or because you've set the wrong values for INFORMIXSERVER or DBPATH. This is basically Informix' way of telling you that it can't find your database. It's just not real strong on transparency. Better you should have to work a bit to figure out what's going on.
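A quick way to rule out environment problems is to dump the key variables before blaming the database. This little checker is our own sketch, not an Informix tool:

```shell
#!/bin/sh
# Print the key Informix environment variables, flagging any that are unset.
check_informix_env() {
    for v in INFORMIXDIR INFORMIXSERVER DBPATH; do
        eval val=\"\$$v\"
        if [ -z "$val" ]; then
            echo "$v is NOT set"
        else
            echo "$v=$val"
        fi
    done
}

check_informix_env
```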

Exporting/Importing an Informix Database

Should you wish to move an Informix database from one place to another, this procedure should work.

On the source server, set the Informix path and database directory to the place where Informix was installed and the place where the database lives:

     PATH=/usr/informix/bin:$PATH
     DBPATH=/work2/dbpath
     export PATH DBPATH

Running as the "informix" user, export the database to flat files:

     su informix
     dbexport -o /export/dir dbname

This will create a bunch of export files in the path /export/dir/dbname.exp. If you want to FTP this to another machine, tar it up:

     cd /export/dir
     tar -cvf dbname.exp.tar dbname.exp

or

     cd /export/dir
     tar -cvf - dbname.exp | gzip -c >dbname.exp.tar.gz
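Since the tar file is about to take a trip over the network, it doesn't hurt to record a checksum before the transfer and verify it on the far end. A sketch, using example paths and a throw-away export directory:

```shell
#!/bin/sh
# Build a gzipped tar of an export directory and record its checksum so the
# copy can be verified after the FTP transfer.  Paths here are examples.
workdir=$(mktemp -d)
cd "$workdir" || exit 1
mkdir dbname.exp
echo "0|sample row|" > dbname.exp/table.unl
tar -cf - dbname.exp | gzip -c > dbname.exp.tar.gz
md5sum dbname.exp.tar.gz > dbname.exp.tar.gz.md5
# On the destination machine, after the copy:
md5sum -c dbname.exp.tar.gz.md5
```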

FTP the tar file to the other machine and then untar it:

     cd /import/dir
     tar -xvf dbname.exp.tar

or

     cd /import/dir
     tar -xvzf dbname.exp.tar.gz

Set up the environment for importing:

     PATH=/apps/informix/bin:$PATH
     DBPATH=/import/dir
     INFORMIXDIR=/apps/informix
     INFORMIXSERVER=munis1
     export PATH DBPATH INFORMIXDIR INFORMIXSERVER

Run the import as the informix user:

     su informix
     dbimport -i /import/dir [-l /log/file/name [ansi]] dbname

If your database needs transaction logging (you can find out by running the SQL command "select dirpath from systables where tabid = 0" on the original database), you'll need to create the logfile when you do the import, using the "-l" parameter and possibly the "ansi" parameter. If you don't do it at this time, you are screwed because Informix is so brain-dead as to not allow you to add the log after the fact.

Also, be sure to use an absolute path name for the logfile since the "-l" option appears not to work for Informix under Linux when relative path names are used. All that happens is a "database not found" error. Any "-l" option with a relative path appears to cause the problem, even when it is in the current directory and/or the directory where the database will live.

One word of caution, though. Full-blown logging, transactions and rollback are not something most applications are prepared to deal with so, if you use the "ansi" parameter, you are probably screwed too. It doesn't seem to work very well with Informix so it might be a good idea not to use it for anything but a special database, where you really know what you're doing. However, it is there if you think you know better.

The permissions on any imported databases will probably be set wrong. You can either give each database directory and the files within it all permissions (at a minimum, the user permissions are not set so you should at least add them) or you can create a group that each person who will access the database can belong to and then make the group of each database this group.

To set general permissions (admittedly a security hole), do something like this:

     chmod ugo=rwx /home/coll/collprod/coll.dbs
     chmod ugo=rwx /home/coll/collprod/coll.dbs/*

To add each user of the database to a group that can access the database, first create the group (as super user), in this fashion:

     /usr/sbin/groupadd collusers

Add each of the users who must access the database to the group (once again as super user), something like this:

     gpasswd -a coll collusers

Finally, set the group of the database directory and all its files to the group just created, for example:

     chgrp collusers /home/coll/collprod/coll.dbs
     chgrp collusers /home/coll/collprod/coll.dbs/*
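To confirm that the membership took (users may need to log out and back in first), you can check the group list for each user. The function below is our own helper; it is shown against the current user's primary group so it can be run anywhere, but on the real system you would check, e.g., "coll" against "collusers":

```shell
#!/bin/sh
# Check whether a user belongs to a group, by scanning "id -nG" output.
user_in_group() {
    id -nG "$1" 2>/dev/null | tr ' ' '\n' | grep -qx "$2"
}

if user_in_group "$(id -un)" "$(id -gn)"; then
    echo "membership OK"
fi
```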

We now include a couple of scripts that we use to export and import a production database from the production server to the hotbackup server, on a nightly basis. These scripts should be fairly self-explanatory:

DBExport:

     #!/bin/sh
     #
     # Shell script to export the production database and create a tar file.
     #
     # This script must be run as the informix user.  Either logon as informix or
     # do the following as root:
     #
     #      su -c /bin/path/DBExport informix
     #
     #
     # Clean up the export directory, if there's any left-over junk.
     #
     rm -rf /backup/export/*
     #
     # Set up the environment variables needed by Informix.
     #
     INFORMIXDIR=/usr/share/informix
     INFORMIXSERVER=elproducto
     DBPATH=/var/informix/prod
     #
     # Set the path to include the Informix binary directory at the front, if it
     # isn't already there.
     #
     echo $PATH | grep -q "^$INFORMIXDIR/bin"
     DirVal=$?
     if [ $DirVal != 0 ]; then
         PATH=$INFORMIXDIR/bin:$PATH
     fi
     export INFORMIXDIR INFORMIXSERVER DBPATH PATH
     #
     # Change to the production directory.
     #
     cd /var/informix/prod
     #
     # Export the production database to the export directory.  This will create
     # /backup/export/prod.exp.
     #
     dbexport -o /backup/export prod 2>&1 >/dev/null
     #
     # Tar and gzip the exported database so that FTP can copy it.
     #
     cd /backup/export
     tar -cf - prod.exp | gzip >/backup/export/prod.exp.tar.gz
     #
     # We don't need the export directory any longer so we'll delete it.  All we
     # need is the gzipped tar file.
     #
     rm -rf /backup/export/prod.exp
     #
     # FTP the exported database to its destination.
     #
     printf 'user prod shhh\nbin\nput /backup/export/prod.exp.tar.gz /home/prod/prod.exp.tar.gz\n' | \
         ftp -n 10.100.0.1 2>&1 >/dev/null

DBImport:

     #!/bin/sh
     #
     # Shell script to import the production database from a tar file.
     #
     # This script must be run as the informix user.  Either logon as informix
     # or do the following as root:
     #
     #      su -c /bin/path/DBImport informix
     #
     #
     # Check to see if there's anything worth doing.
     #
     if test /home/prod/lastexport -nt /home/prod/prod.exp.tar.gz; then
         echo The database is right up to date with the last export.
         exit 0
     fi
     #
     # Also, check to see if the export crapped out or looks OK.
     #
     TarFile=`find /home/prod -name prod.exp.tar.gz -follow -size +1024k -print`
     if test x"$TarFile" = x; then
         echo Looks like the export of the production database failed.
         exit 1
     fi
     #
     # Set up the environment variables needed by Informix.
     #
     INFORMIXDIR=/usr/share/informix
     INFORMIXSERVER=eltestoh
     DBPATH=/home/prod/prod
     #
     # Set the path to include the Informix binary directory at the front, if it
     # isn't already there.
     #
     echo $PATH | grep -q "^$INFORMIXDIR/bin"
     DirVal=$?
     if [ $DirVal != 0 ]; then
         PATH=$INFORMIXDIR/bin:$PATH
     fi
     export INFORMIXDIR INFORMIXSERVER DBPATH PATH
     #
     # Change to the production directory.
     #
     cd $DBPATH
     #
     # Just in case, get rid of the old export directory.
     #
     rm -rf  $DBPATH/prod.exp
     #
     # Extract the export directory from the tar file.  The export
     # (DBExport) thoughtfully copied it where we could find it (in
     # /home/prod/prod.exp.tar.gz).
     #
     echo Extracting the exported database.
     tar -xzf /home/prod/prod.exp.tar.gz
     RetVal=$?
     if test $RetVal != 0; then
         echo Extract of production database from tar file failed.
         exit 2
     fi
     #
     # Now that we have something good to install on this machine, get rid of
     # the old database.
     #
     echo Making a back-em-up of the current database.
     rm -rf $DBPATH/prod.dbs.bak
     mv $DBPATH/prod.dbs $DBPATH/prod.dbs.bak
     rm -f $DBPATH/prod.log
     #
     # Import the production database from the export directory (which we built,
     # above).
     #
     echo Loading the new database.
     dbimport -i $DBPATH -l $DBPATH/prod.log prod 2>&1 >/dev/null
     RetVal=$?
     if test $RetVal != 0; then
         echo Load of the new production database failed.
         rm -rf $DBPATH/prod.dbs
         mv $DBPATH/prod.dbs.bak $DBPATH/prod.dbs
         exit 3
     fi
     #
     # We don't need the export directory any longer so we'll delete it.  We'll
     # keep the tar file for yucks.  We also don't need the data in the logfile.
     #
     echo Cleaning up after load of new database.
     rm -rf  $DBPATH/prod.exp
     cat /dev/null >$DBPATH/prod.log
     #
     # Save the old database for posterity.
     #
     SaveDate=`date +%y%b%d`
     mv $DBPATH/prod.dbs.bak $DBPATH/prod-$SaveDate.dbs
     echo Old database saved as $DBPATH/prod-$SaveDate.dbs
     #
     # Indicate we're done with these tar files.
     #
     touch /home/prod/lastexport
     #
     # Since we're doing target practice here, we'll make a copy of the database
     # for quick restores.
     #
     echo Copying new database to quick restore database
     rm -rf $DBPATH/quickrest.dbs
     cp -r $DBPATH/prod.dbs $DBPATH/quickrest.dbs

Informix Log Files

Informix is quite capable of chewing up all of the disk space it can find with transaction log files, if you let it. A common technique to circumvent this situation is to copy /dev/null to each log file on a daily or weekly basis (this technique can be employed for a test system). Another technique is to rotate the log files, keeping only a certain number and deleting older files after a certain time interval has elapsed (this technique can be used for a production system). Keeping log files for a reasonable length of time will allow debugging of transactions, should the need arise to determine when a problem occurred.

If your database is used only during certain hours of the day, the users will see no effect if log file management is done regularly at off-peak times. This allows a daily cron job to clear or rotate the log files with no noticeable impact.

If you have a test database, where you don't care about log files, set up the cron table (/etc/crontab) or set up a daily script (in /etc/cron.daily) to clear the log files for all transaction logs for all databases that are in use (sample crontab entry shown):

     05 2 * * *  root /usr/bin/find /home/coll/colldev -name \*.log \
                     -exec cp /dev/null \{\} \;
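The clearing step itself can be tried out safely on a scratch directory before wiring it into cron. In this sketch the directory and file names are stand-ins for the real database path:

```shell
#!/bin/sh
# Zero out every log file below a directory, exactly as the cron entry does.
# LOGDIR here is a scratch directory; point it at the real database path
# (e.g. /home/coll/colldev) in production.
LOGDIR=$(mktemp -d)
echo "old transactions" > "$LOGDIR/coll.log"
find "$LOGDIR" -name '*.log' -exec cp /dev/null {} \;
wc -c < "$LOGDIR/coll.log"    # the log is now zero bytes
```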

For a production system, where you'd like to keep a few revisions of the log, a logrotate file should be added to /etc/logrotate.d/dbname that looks like this one. It will rotate the main database transaction log weekly, keeping 60 weeks worth of log files, compressing the rotated copies and then zero out any other log files that happen to be present. Only the main transaction log is really important.

     /home/coll/collprod/coll.log {
         weekly
         rotate 60
         notifempty
         missingok
         compress
         create 0660 coll informix
         copytruncate
         postrotate
             /usr/bin/find /home/coll/collprod -name \*.log \
                 -exec cp /dev/null \{\} \;
         endscript
     }

UnixODBC Install

Instead of connecting directly to various databases, one can use the ODBC functions of programming languages such as PHP or Perl to connect to a database in a database-independent manner. This allows one to write code that can run on top of any database that has an ODBC driver. In the case of UnixODBC, the supported databases include MySQL, Informix and Oracle.

Before installing UnixODBC, you might want to install any and all of the databases that you wish to use with UnixODBC -- it makes the install easier if the databases are already in place. However, contrary to common sense, you should not attempt to install or build the database connectors until you've installed UnixODBC. So, after you've installed the databases and before you install their ODBC connectors, install UnixODBC.

This document specifically describes how to install the MySQL and Informix connectors, both of which may be used through UnixODBC, in subsequent sections.

Also, you should check to see if UnixODBC is available through the package manager of your operating system. If it is, you should install it that way because it is much easier than building it from scratch. Any version of UnixODBC should work with the database connectors described herein. Under CentOS/RedHat, you can use yum to check if UnixODBC is already installed:

     su
     yum list installed | grep ODBC

You can check if it is available and install it if it is:

     su
     yum search unixODBC
     yum install unixODBC.i686

Under Ubuntu, you can use apt to check if UnixODBC is already installed:

     su
     apt-show-versions | grep ODBC

You can check if it is available and install it if it is:

     su
     apt-cache search unixodbc
     apt-get install unixodbc

If you do decide that you want to build UnixODBC from source, either because it isn't available on your system or you want the latest bug fixes, etc., that haven't made it to the official OS release yet, begin by downloading the latest UnixODBC tar file from http://www.unixodbc.org/. However, beware that later versions of UnixODBC include some crazy stuff (for a database driver, that is) that you may want to disable (e.g. there is a KDE GUI that requires a graphics toolkit to build). The 2.2.11 version appears to work quite well and it does not seem to be as left-leaning as the later versions. If it works with your database connector, you may want to use it. Also, the 2.3.1 version seems to build and work well, so it is another good choice.

Untar the chosen tar file in the top level source directory (e.g. /rpm/UnixODBC):

     tar -xvzf unixODBC-a.b.yy.tar.gz

It will create a new directory for that version of UnixODBC. Switch to that directory and build UnixODBC:

     cd unixODBC-a.b.yy
     ./configure --prefix=/usr/local/unixODBC --enable-ltdl-convenience
     make

Note that the "--enable-ltdl-convenience" flag is not necessary for versions of UnixODBC >= 2.2.14 (the problem with ltdl has been fixed in that version's configure file). If you have one of those versions, you'd use:

     ./configure --prefix=/usr/local/unixODBC

When you get the error about Qt headers, if you don't want to download and install yet another graphics toolkit to build UnixODBC (one has to ask oneself w.t.f. a database driver needs a GUI for, anyway), the following build should work for version 2.2.12 and later:

     cd unixODBC-a.b.yy
     ./configure --prefix=/usr/local/unixODBC --disable-gui
     make

Switch to super-duper user and install UnixODBC:

     su
     make install

The install may not inform the loader about where it put the dynamic link libraries that are necessary to access databases through UnixODBC. You may have to fix this situation manually. One fix is to add symlinks for the UnixODBC modules, in an obvious location such as /usr/lib. Another fix is to change ld.so.conf or ld.so.conf.d to point the loader at the directory where UnixODBC can be found.

If you prefer the first approach, this command can take care of things:

     find /usr/local/unixODBC/lib -name \*.so -exec ln -s \{\} /usr/lib \;

We prefer to add a file to /etc/ld.so.conf.d, which points the loader at the UnixODBC modules, like this:

/etc/ld.so.conf.d/UnixODBC.conf:

     /usr/local/unixODBC/lib

Run the loader configuration command to inform the loader about the new dynamic link library:

     su
     /sbin/ldconfig

If there are any databases that will be used by ODBC, they need to be configured by adding a section to the ODBC initialization file. If UnixODBC was installed in /usr/local/unixODBC (as we show above), the file is /usr/local/unixODBC/etc/odbc.ini. For installations done by a package manager (e.g. RedHat or CentOS), the file may be in /etc/odbc.ini.

Here is a sample connection for a MySQL student tracking database:

     [StudentTracking]
     Description = Student tracking database
     Trace       = Off
     TraceFile   = stderr
     Driver      = MySQL
     SERVER      = localhost
     PORT        =
     SOCKET      =
     USER        = mytrack
     PASSWORD    = SecretSquirrel
     DATABASE    = StudentTracking

Note that, for some reason, it would appear that the user name and password must be set in the DSN definition. This being the case, the USER name and PASSWORD should match the name set when the database was created and permissions assigned. Of course, the database name must also match the name given to the database. The permissions on odbc.ini can be:

     -rw-r--r--    root     root

However, since the password to the database is in this file, it may be better to use something like this:

     -rw-r-----    root     mysql

But, note that if you do this, you will need to include all of the users of ODBC in the mysql group. Typically, this will include apache, since httpd now runs under the apache userid and PHP programs are typical users of ODBC to access databases for web pages.
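Expressed as commands, the tighter permissions above amount to a chmod and a chown (the chown needs root, and the scratch file below stands in for the real /etc/odbc.ini):

```shell
#!/bin/sh
# Set a file to mode 640 (-rw-r-----), as recommended for an odbc.ini that
# carries a password.  A temp file stands in for /etc/odbc.ini here.
F=$(mktemp)
chmod 640 "$F"
# chown root:mysql /etc/odbc.ini    # requires root, on the real file
ls -l "$F" | cut -c1-10             # prints -rw-r-----
```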

Also note that you may need to supply the socket path if you are using a non-standard socket in the database's initialization file. Furthermore, some versions of MySQL, for example, don't work all that well with TCP so, once again, you may need to supply a socket path to force the use of the socket instead of TCP.
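For example, to force a DSN onto a Unix socket, add a SOCKET value to the DSN. The path below is the common CentOS/RedHat default; check the socket setting in your my.cnf before trusting it:

```ini
[StudentTracking]
Driver   = MySQL
SERVER   = localhost
SOCKET   = /var/lib/mysql/mysql.sock
DATABASE = StudentTracking
```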

To open this sample database in PHP you would do something like this:

     odbc_connect("StudentTracking", "mytrack", "SecretSquirrel");

Here is another sample showing an Informix connection to your collections database (but see Installing the Informix ODBC Connector, in these notes, for more information, since getting Informix to work is very tricky):

     [CollectOh]
     Description=Collections Database
     Driver=Informix
     Server=localhost
     Port=1498
     Database=/my/database/path/collectdb
     CLIENT_LOCALE=en_us.8859-1
     DB_LOCALE=en_us.8859-1
     TRANSLATIONDLL=/usr/share/informix/lib/esql/igo4a304.so

In this case, the user can supply the username and password to the ODBC connect function when they connect to the database so it has been left out of the DSN. To open this sample database in PHP you would do something like this:

     odbc_connect("CollectOh", "theman", "payme");

Finally, it may be possible to open a DSN-less connection something like this (once again for MySQL in PHP):

     odbc_connect(
       "Driver={MySQL};" .
       "CommLinks=tcpip(Host=localhost);" .
       "DatabaseName=StudentTracking;" .
       "uid=mytrack;pwd=SecretSquirrel",
       "mytrack", "SecretSquirrel");

Installing MySQL Connector/ODBC or MyODBC

UnixODBC does not work without an intermediate shim or connector between it and the database. In the case of MySQL, the connector is supplied by MySQL so one should download the source for the latest ODBC connector from the MySQL Web site: http://dev.mysql.com/downloads/connector/odbc/. You can find the source by selecting the "Source Code" Platform and then scrolling to the end of the list where you'll see "Compressed TAR Archive".

The latest version of Connector/ODBC (e.g. 5.2.2) works with the 5.5.x versions of MySQL. Earlier versions of MySQL (e.g. 5.1.x or 5.0.x) used a connector called MyODBC. The later versions of MyODBC included some crazy stuff (for a database shim, that is) that you may want to disable (e.g. there is a KDE GUI that requires a graphics toolkit to build). If you can find it, the 3.51.11 version had some known bugs in it but it appeared to work reasonably well and it did not appear to be as left-leaning as the later versions so you may want to use it instead. The 3.51.12 version also appeared to work OK, so that was the version that we used for some time. Currently, if you try to download source for MyODBC from the MySQL Web site, you'll see source for 3.51.30. We have no idea how this works, since we've moved on to the 5.5.x MySQL along with Connector/ODBC 5.2.2.

That being the case, without further ado, here's how to build the 5.2.2 connector:

     tar -xvzf mysql-connector-odbc-5.2.2-src.tar.gz
     cd mysql-connector-odbc-5.2.2-src
     cmake -DWITH_UNIXODBC=1 -DODBC_INCLUDES=/usr/local/unixODBC/include \
         -DODBC_LIB_DIR=/usr/local/unixODBC/lib
     make

Once the build is done successfully, install it as root:

     su
     make install

Meanwhile, for earlier versions of MySQL, before building MyODBC, make sure that you have libtool and the libtool development tools installed. You can use the typical package version that comes with your OS but make sure the development package is also installed (e.g. libtool-devel). If it is not, you'll get an error that reads:

     configure: error: Could not find compile MySQL single user test program.
       Please check config.log for errors

When you check config.log, you'll be no further ahead, since it is trying to compile a test program that it made up (i.e. it ain't missing). If you just install libtool-devel (or whatever it is called), you'll save yourself a whole heap o' trouble.

Untar the source tar file in the top level source directory (e.g. /rpm/MySQL):

     tar -xvzf MyODBC-a.b.yy.tar.gz

It will create a new directory for that version of the MyODBC connector. Switch to that directory and build MyODBC:

     cd MyODBC-a.b.yy
     ./configure --prefix=/usr/local/unixODBC \
       --with-unixODBC=/usr/local/unixODBC \
       --with-mysql-path=/usr/local/mysql --enable-thread-safe
     make

When you get the error about Qt headers, if you don't want to download and install yet another graphics toolkit to build MyODBC (one has to ask oneself w.t.f. a database shim needs a GUI for, anyway), the following build should work for version 3.51.12 and later:

     cd MyODBC-a.b.yy
     ./configure --prefix=/usr/local/unixODBC \
       --with-unixODBC=/usr/local/unixODBC \
       --with-mysql-path=/usr/local/mysql --enable-thread-safe --disable-gui
     make

Switch to super-duper user and install MyODBC:

     su
     make install

This will put MyODBC in the same library where the UnixODBC modules were placed. Why not? They work together. Storing them together makes sense.

To make the hookup between UnixODBC and the MySQL connector or MyODBC, make sure there is an entry in the ODBC instances file that points to the MySQL connector or MyODBC. If UnixODBC was installed in /usr/local/unixODBC (as we show above), the file is /usr/local/unixODBC/etc/odbcinst.ini. For installations done by a package manager (e.g. RedHat or CentOS), the file may be in /etc/odbcinst.ini.

If you are using MyODBC, the connection to MyODBC should look like this:

     [MySQL]
     Description     = MySQL driver for Linux
     Driver          = /usr/local/unixODBC/lib/libmyodbc3.so
     Setup           = /usr/local/unixODBC/lib/libmyodbc3S.so
     FileUsage       = 1

Note that the setup library (libmyodbc3S.so) can be missing but it does not matter, since it is only used by the GUI.

If you are using the MySQL connector, the connection to it should look like this:

     [MySQL]
     Description     = MySQL driver for Linux
     Driver          = /usr/local/lib/libmyodbc5w.so
     FileUsage       = 1

In either case, the permissions on odbcinst.ini can be:

     -rw-r--r--    root     root

Define your MySQL DSNs in odbc.ini. A typical file might look something like this:

     [Trackor]
     Description=Where Is Waldo Database
     Trace       = Off
     TraceFile   = stderr
     Driver      = MySQL
     Server      = localhost
     Port        =
     Socket      =
     User        = secretsquirrel
     Password    = DecoderRing
     Database    = WaldoTrack

Each DSN is listed, beginning with a descriptive name enclosed in square brackets. You can include comments, indicated by an initial '#', if you wish. The driver name must match the name that was given to the MySQL driver in odbcinst.ini (in our case, we named the MySQL driver "MySQL", just to be boring and predictable).

The "Server" parameter can be set to "localhost" or it can be the name or IP address of a remote host that is running the MySQL engine, presuming that you can connect to databases that are hosted on the remote machine, via TCP.

If the port used by MySQL is other than the default value (3306), its number should be included with each DSN. Alternately, if your connection to the MySQL engine uses a socket (e.g. on localhost) and the socket that is used by MySQL is non-standard, you should include its name with each DSN. This is not necessary, however, if you have set these values in the [Client] section of the my.cnf file used by MySQL since the client parameters set therein are passed to UnixODBC.
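For reference, a [client] section of my.cnf that makes the per-DSN Port and Socket values unnecessary might look like this (the values shown are the usual defaults; yours may differ):

```ini
[client]
port   = 3306
socket = /var/lib/mysql/mysql.sock
```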

The permissions on odbc.ini can be:

     -rw-r--r--    root     root

Though, if for some reason it is necessary that the user name and password be set in the DSN definition, it may be better to use something like this:

     -rw-r-----    root     mysql

But, note that if you do this, you will need to include all of the users of ODBC in the mysql group. Typically, this will include apache, since httpd now runs under the apache userid and PHP programs are typical users of ODBC to access databases for web pages.

If you do set a user name and password in any of the DSNs, make sure that the values for the User parameter and Password parameter both match the user information set when the database was created and permissions assigned.

You should now (assuming you have permissions to connect to the database) be able to connect to the database using ODBC. Try the unixODBC command:

     isql -v Trackor

If you need to use a username and possibly a password, try this:

     isql -v Trackor user [password]

Note that you may need to supply a path name for isql, if UnixODBC was installed somewhere that is not in the current path:

     /usr/local/unixODBC/bin/isql -v Trackor

If isql can't connect to the database, you should consult the "Troubleshooting UnixODBC Connector Problems" section in these notes, for tips on how to diagnose and fix UnixODBC connector problems.

If you finally get UnixODBC isql to connect to the database, it will give you an SQL> prompt. At that point, you can type in any valid SQL query against your chosen database and you should see an answer. A good, quick test is the "show tables;" command, which lists all of the tables in the chosen database. When you are done playing and marvelling at how wonderful ODBC is, you can type "quit" to get out. Congrats! Your ODBC shim works.

Installing the Informix ODBC Connector

UnixODBC does not work without an intermediate shim or connector between it and the database. In the case of Informix, the connector is supplied by IBM in the CLI distribution files. If you ran installclientsdk during the install procedure (described above), you should be in good shape. Look for the files libifcli.so and igo4a304.so (maybe in /usr/lib or perhaps under cli and esql in /usr/share/informix/lib, /usr/local/informix/lib or /opt/informix/lib).

Before you begin, make sure that the usual Informix setup has been completed (see the section Using an Informix Database, above), specifically that:

  1. The INFORMIXDIR environment variable is set to the root of your Informix installation (typically /usr/share/informix, /usr/local/informix or /opt/informix).
  2. The well-known service number is defined for the online service in /etc/services.
  3. An sqlhosts file has been constructed in the $INFORMIXDIR/etc directory that defines all your database servers. And, be sure that you define either the "onsoctcp" or "sesoctcp" protocol for the database server (depending on which version of Informix you're using), since UnixODBC only speaks TCP.
  4. All of the Informix libraries can be found by ld (these are located in $INFORMIXDIR/lib and its subdirectories and must include at least "cli" and "esql"). This usually involves adding a list of these directories to /etc/ld.so.conf and re-running the ldconfig utility as root. However, in the installable module spirit of things, we create a file in /etc/ld.so.conf.d that, at a minimum, points to the two Informix load libraries, like this:
     /etc/ld.so.conf.d/informix-i386.conf:

          /usr/share/informix/lib/cli
          /usr/share/informix/lib/esql

  5. Once this file is created, you can rerun ldconfig:

          su
          ldconfig
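For reference, the sqlhosts entry mentioned in item 3 is a four-column line: database server name, protocol, host name, and service name. A hypothetical TCP entry might look like this (all four values are examples; use your own server name and the service name you defined in /etc/services):

```
demo_on    onsoctcp    localhost    sqlexec
```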

To make the hookup between UnixODBC and the Informix connector, make sure there is an entry in the ODBC instances file that points to the Informix connector. If UnixODBC was installed in /usr/local/unixODBC (as we show above), the file is /usr/local/unixODBC/etc/odbcinst.ini. If an RPM or other OS package was installed, the file may be /etc/odbcinst.ini.

Make sure, when you add the Informix driver to odbcinst.ini, that there is no stray white space between the parameter names and their values. When you access the database, if you get "Server %s is not listed as a dbserver name in sqlhosts" or anything similar, it is probably caused by stray white space in either the Informix driver definition in odbcinst.ini or the DSN entries, in odbc.ini, below, that use the Informix driver. The driver's parser seems to be very easily confused.

The connection to the Informix connector should look like this:

     [Informix]
     Description=Informix driver for Linux
     Driver=/usr/share/informix/lib/cli/libifcli.so
     APILevel=1
     ConnectFunctions=YYY
     DriverODBCVer=03.51
     FileUsage=0
     SQLLevel=1
     smProcessPerConnect=Y

Note that we obtained the ODBC version that the driver speaks for the DriverODBCVer parameter from the UnixODBC drivers table found at:

     http://www.unixodbc.org/drivers.html

From that chart, the 2.2.14 version of UnixODBC speaks version 3.51 of the ODBC protocol to the Informix connector.

The permissions on odbcinst.ini can be:

     -rw-r--r--    root     root

Define the ODBCINI environment variable to point at your system wide odbc.ini. Do something like:

     export ODBCINI=/usr/local/unixODBC/etc/odbc.ini

or

     export ODBCINI=/etc/odbc.ini

Note that you must do this, since ODBCINI is the only way to control where the Informix driver looks for its configuration information.

Define your Informix DSNs in odbc.ini (which should be located in the same directory as the odbcinst.ini file). Once again, don't include any whitespace between the parameter names and their values. Also, there must be no comments or whitespace at the beginning of the odbc.ini file. The first line must say "[ODBC Data Sources]" or the driver will fail to find your DSNs. Trust us, it is best to make this file look just like this:

     [ODBC Data Sources]
     CollectOh=Informix
     [CollectOh]
     Description=Collections Database
     Driver=Informix
     Server=localhost
     Port=1498
     Database=/my/database/path/collectdb
     CLIENT_LOCALE=en_us.8859-1
     DB_LOCALE=en_us.8859-1
     TRANSLATIONDLL=/usr/share/informix/lib/esql/igo4a304.so

Each DSN is listed under the "[ODBC Data Sources]" section with its value set to the name given to the Informix driver in odbcinst.ini. Once you've listed all of the DSNs under the "[ODBC Data Sources]" section, you can include comments but no whitespace. The "Driver" parameter of each DSN should also have its value set to the name given to the Informix driver in odbcinst.ini.

The "Server" parameter can be set to "localhost" or it can be set to the same name used for the database engine, as it is set up in sqlhosts. If you use the database engine name, presumably you can connect to databases that are hosted on remote machines via TCP.

If the port used in the sqlhosts file is something non-intuitive, its number should be included with each DSN. We're not sure what "intuitive" is so we just get the number from /etc/services and/or sqlhosts and always include it.
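A quick way to "just get the number" is to pull it out of /etc/services by service name. This sketch parses a sample services file so it is self-contained; the service name "sqlexec" is hypothetical, so substitute whatever name appears in your sqlhosts file:

```shell
# Sample /etc/services-style entry (the real file is /etc/services).
services=$(mktemp)
echo 'sqlexec    1498/tcp    # Informix online service' > "$services"

# Print just the port number for the named service.
awk '$1 == "sqlexec" { split($2, a, "/"); print a[1] }' "$services"

rm -f "$services"
```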

For full-blown Informix, the "Database" parameter should be set to just the database name, since the server knows where to find all of the databases (e.g. "Database=CollectOh"). For Informix SE, the "Database" parameter should be set to the complete path name (minus the ".dbs" part) of the database, as shown above.

You will have to locate igo4a304.so in the directory where it actually lives and put its name in for the "TRANSLATIONDLL" parameter but it should be found somewhere in the same path where Informix was installed, usually under lib/esql.

The permissions on odbc.ini can be:

     -rw-r--r--    root     root

However, if for some reason it is necessary that the user name and password be set in the DSN definition, it may be better to use something like this:

     -rw-r-----    root     informix

But, note that if you do this, you will need to include all of the users of ODBC in the informix group. Typically, this will include apache, since httpd now runs under the apache userid and PHP programs are typical users of ODBC to access databases for web pages.

If you do set a user name and password in any of the DSNs, make sure that the values for the User parameter and Password parameter both match the user information set when the database was created and permissions assigned.

You should now (assuming you have permissions to connect to the database) be able to connect to the database using ODBC. Try the unixODBC command:

     isql -v CollectOh

If you need to use a username and possibly a password, try this:

     isql -v CollectOh user [password]

Note that you may need to supply a path name for isql, if UnixODBC was installed somewhere that is not in the current path:

     /usr/local/unixODBC/bin/isql -v CollectOh

If isql can't connect to the database, you may wish to consult the "Troubleshooting UnixODBC Connector Problems" section in these notes, to diagnose and fix any generic UnixODBC problems. Errors that are specific to the Informix driver follow in the next paragraphs.

If you get an error message that reads "Unspecified System Error = -11041", it could well be due to the fact that you didn't set the INFORMIXDIR environment variable to the root of your Informix installation (typically /usr/share/informix, /usr/local/informix or /opt/informix) and/or export it properly. Do something like:

     export INFORMIXDIR=/usr/share/informix

Or, if you get an error saying "Unspecified System Error = -25553", it may mean that, like us, you are an idiot and have set INFORMIXDIR to the wrong path name.

If you get an error message that reads "Data source name not found and no default driver specified", it could be caused by your failure to set the ODBCINI environment variable to point to your odbc.ini file. Do something like:

     export ODBCINI=/usr/local/unixODBC/etc/odbc.ini

or

     export ODBCINI=/etc/odbc.ini

Other reasons for this error are the "[ODBC Data Sources]" section header not being the very first line in the odbc.ini file, the DSN not being mentioned in that section, or anything else that is bunged up in the DSN's parameters. Have fun debugging this one. The best advice is to make the odbc.ini file look EXACTLY like it is shown above.

If you finally get UnixODBC isql to connect to the database, it will give you an SQL> prompt. At that point, you can type in any valid SQL query against your chosen database and you should see an answer. A good, quick test is the "select tabname from SYSTABLES;" command, which will enumerate all of the tables in the chosen database (there is no "show tables" in Informix). When you are done playing and marvelling at how wonderful ODBC is, you can type "quit" to get out. Congrats! Your ODBC shim works.

A useful application of the Informix/ODBC connector is to access an Informix database from PHP. If you'd like to use an Informix database from PHP, this example may prove instructive:

     <html>
     <head>
     <title>ODBC test script</title>
     </head>
     <body>
     <p align="center"><b>ODBC Test Script</b></p>
     <?php
       // We must set these environment variables for Informix to work.
       // Either do it here or in php.ini.
       putenv("INFORMIXDIR=/usr/share/informix");
       putenv("ODBCINI=/usr/local/unixODBC/etc/odbc.ini");
       // Open up a connection to the database.
       if (!($con = odbc_connect("CollectOh", "", "")))
         echo "<p>Connection to CollectOh failed.</p>\n";
       else
         {
         // Let's try enumerating all of the tables in the database (there
         // ain't no "show tables" here).
         if (($res = odbc_exec($con, "select * from SYSTABLES")))
           {
           echo "<p>\n";
           odbc_result_all($res);
           echo "</p>\n";
           }
         // Close up shop, like good dobies.
         odbc_close($con);
         }
     ?>
     </body>
     </html>

Troubleshooting UnixODBC Connector Problems

This section describes the steps that we take to troubleshoot problems with UnixODBC connectors, regardless of which connector is being used. UnixODBC is not big on reporting the actual cause of an error so we are all reduced to acting like Hercule Poirot to figure out what's going on.

If, for example, you receive this error:

     Can't open lib '/usr/local/lib/xxx.so': file not found

It doesn't really mean that the file can't be found. This is UnixODBC's way of telling you that there is something wrong with the database connector: the connector itself may be broken, you may have included a spurious carriage return on the end of its name, or the resources that the connector needs may be missing.

There's not much we can do about the database connector being broken. It comes from the database vendor. Check that you have the correct shim. Make sure that you aren't trying to use the 64-bit shim on a 32-bit OS or with the 32-bit version of UnixODBC, and vice versa. If you built the database from source, check that the build was done correctly and that the properly-built shim was installed where you think it was. Verify that a vestigial shim (from a prior build or install) isn't lurking about, playing havoc.

Make sure that the name that you used for the database connector doesn't have any secret, hidden characters in it. Many people have noted that, when cutting and pasting connector names from sources on the Internet, it is possible that a spurious carriage return or some other invisible, evil character can get included in the name, where you can't see it but where UnixODBC can use it to go screaming off into the weeds.
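One way to expose those invisible characters is cat -A, which prints carriage returns as ^M and marks each line end with $. This sketch plants a stray CR in a sample file so it is self-contained; on a real system, run the same commands against your actual odbcinst.ini or odbc.ini:

```shell
# Create a sample config line with a hidden carriage return at the end.
ini=$(mktemp)
printf 'Driver=/usr/local/lib/libmyodbc5w.so\r\n' > "$ini"

# Count lines containing a carriage return (anything above 0 is trouble).
cr=$(printf '\r')
grep -c "$cr" "$ini"

# Make the offenders visible: the CR shows up as ^M before the $.
cat -A "$ini"

rm -f "$ini"
```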

For resource problems, the ldd command is your friend. Use it to display the resources that the database connector needs and resolve any missing ones. For example, when you display the MySQL connector like this:

     ldd /usr/local/lib/libmyodbc5w.so

You should see something like this:

     linux-gate.so.1 =>  (0x00bb1000)
     libodbc.so.2 => not found
     libodbcinst.so.2 => not found
     libmysqlclient.so.18 => not found
     libpthread.so.0 => /lib/libpthread.so.0 (0x00547000)
     libm.so.6 => /lib/libm.so.6 (0x001d6000)
     libdl.so.2 => /lib/libdl.so.2 (0x003d2000)
     libc.so.6 => /lib/libc.so.6 (0x00d57000)
     /lib/ld-linux.so.2 (0x00711000)

Well, actually, you shouldn't see something like this. The part where "libodbc.so.2", "libodbcinst.so.2" and "libmysqlclient.so.18" aren't found? That's not good.

If its resources aren't found, the database connector isn't going to work. The fix for this problem is to either add symlinks for the missing resources, in an obvious location such as /usr/lib, or to modify ld.so.conf or ld.so.conf.d to point the loader at the locations where these resources can be found. Typically, the libodbc resources can be found in /usr/local/unixODBC/lib or, if an RPM was used to install UnixODBC, /usr/lib.

Additionally, you may need to add symlinks from generic module names, such as libodbc.so.2, which should point to the actual module such as libodbc.so.2.0.0, in the appropriate place. Usually, the install for UnixODBC and your database connector takes care of these symlinks but you never know.
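Creating such a symlink is a one-liner. This sketch does it in a temporary directory with a stand-in file, so it is self-contained; on a real system you would run the ln command as root against the actual library in, say, /usr/local/unixODBC/lib:

```shell
# Scratch directory standing in for the real library directory.
libdir=$(mktemp -d)
touch "$libdir/libodbc.so.2.0.0"

# Point the generic name at the versioned module.
ln -s "$libdir/libodbc.so.2.0.0" "$libdir/libodbc.so.2"

# Confirm where the symlink points.
readlink "$libdir/libodbc.so.2"

rm -rf "$libdir"
```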

If you get an error message that looks something like this (from isql, your program or a Web application such as a PHP program):

     [IM002][unixODBC][Driver Manager]Data source name not found, and no
       default driver specified

It may or may not mean what it says. It is possible that the DSN is missing or that the driver is not correct. You should first check that there is a valid DSN entry for the database that you are trying to open in the odbc.ini file (well, first you should check that the odbc.ini file is where UnixODBC expects it, i.e. where you told it to look when you built UnixODBC -- in a pinch you should put it in /usr/local/unixODBC/etc and then you can symlink to it from other places like /etc/odbc.ini).

The database that you are trying to open should have a separate section with its name in square brackets. In that section, it should mention the driver (e.g. "MySQL").

The next thing to check is that there is an odbcinst.ini file in the same place as odbc.ini. Once again /usr/local/unixODBC/etc or maybe /usr/unixODBC/etc are good spots and symlinks are a good way to cover all of the bases.

The odbcinst.ini file should have a separate section with the driver name that was used in odbc.ini, in brackets. The driver should be pointed to in this section (e.g. /usr/local/lib/libmyodbc5w.so).
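For example, if odbc.ini names its driver "MySQL" (as in the Trackor DSN earlier in these notes), the matching odbcinst.ini section might look like this sketch (the description text is illustrative; the driver path follows the example above):

```ini
[MySQL]
Description = MySQL ODBC driver
Driver      = /usr/local/lib/libmyodbc5w.so
```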

Finally, the database (and all of its associated files) should be accessible by the user executing the program that is failing. If you did something like setting the ownership of the database files to mysql:mysql or informix:informix, you need to set the group permissions of these files to "rw" and add the user to the mysql or informix group in /etc/group. Note that, if you are running a PHP program under a Web server such as httpd, the user accessing the database is probably apache. So, in /etc/group one might expect to find:

     mysql:x:27:apache,jschmoe

Also, a word to the wise. If you can't get:

     /usr/local/unixODBC/bin/isql -v DSN

To work against the database, you might as well figure that out before you go any further, since it eliminates a lot of middlemen during debugging.