All posts by Matt Davies

32 bit ODBC driver with 32 bit unixODBC on a 64 bit Ubuntu OS

In this article I’m going to build a 64 bit Ubuntu 10.04 OS, then install 32 bit unixODBC and get it to connect to a remote ODBC data source using a third party 32 bit driver. No 64 bit driver available :(


If I manage to pull it off, and then get ruby-odbc to work, I won’t have to build and manage a new server in order to get data from a remote data source into our current ruby on rails applications.

First up, let’s install the OS

I’m building the OS with VMware Fusion on my mac using ubuntu-10.04.2-server-amd64.iso

Do not install any additional packages during the OS install

sudo locale-gen en_GB.UTF-8
sudo /usr/sbin/update-locale LANG=en_GB.UTF-8
sudo aptitude update
sudo aptitude safe-upgrade
sudo reboot
sudo aptitude -y install ssh
sudo aptitude install build-essential linux-headers-`uname -r`
sudo ln -s /usr/src/linux-headers-`uname -r` /usr/src/linux
sudo reboot
sudo aptitude install ia32-libs
sudo apt-get install g++-multilib
mkdir ~/src
cd ~/src
tar -xzvf unixODBC-2.3.0.tar.gz
cd unixODBC-2.3.0/
CFLAGS=-m32 LDFLAGS=-m32 CXXFLAGS=-m32 ./configure
make
sudo make install

That has installed the 32 bit unixODBC libraries into /usr/local/lib; you can confirm that by running

file /usr/local/lib/

should say

/usr/local/lib/ ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, not stripped

Now upload the third party Sassafras driver into that directory, and then test it

ldd /usr/local/lib/ => (0xf770e000) => /usr/lib32/ (0xf7632000) => /lib32/ (0xf760c000) => /lib32/ (0xf74b1000) => /usr/lib32/ (0xf7400000) => /usr/lib32/ (0xf73dc000) => /lib32/ (0xf73d8000) => /usr/lib32/ (0xf73d0000) => /lib32/ (0xf73cb000) => /lib32/ (0xf73c7000) => /lib32/ (0xf73b3000)
/lib/ (0xf770f000) => /lib32/ (0xf739a000)

Voila, it all looks rosy in the garden.

Now we need to define our odbc connections in /etc/odbc.ini and /etc/odbcinst.ini

These settings are specific to the driver you are using; check with the driver manufacturer for the recommended setup.

sudo vi /etc/odbc.ini

[ksdsn]
Driver = ksodbc
Description = foobar
RemoteHost = [IP ADDRESS]

sudo vi /etc/odbcinst.ini

[ksodbc]
Description = Sassafras Keyserver Database
Driver = /usr/local/lib/
UsageCount = 1

Let’s test it

isql -v ksdsn [username] [password]

| Connected! |
| |
| sql-statement |
| help [tablename] |
| quit |
| |

If you get that back, you have reached that joyous moment in time: you are connected.

Happy days.

Thanks to the good people on the unixODBC mailing list for a lot of help with this process, the Ubuntu forums of course, without whom I would be a lesser man, and last but not least, Julian Delvin at Sassafras.

Part 2 will be configuring ruby and then rails to talk to the database, wish me luck!

Bundler update, they grow up so fast!

Recently I wrote a post about using bundler, well so much has changed since then I thought we needed a little update.

Firstly, we’re now using bundler 0.9.7, so you need to remove the old bundler and install the new one

gem uninstall bundler

gem install bundler -v 0.9.7

The new bundler installs the files it needs in different places, so you’ll need to edit your .gitignore file and remove the old structure from your app. (Make sure you haven’t put anything in bin that is not connected with bundler; if you have, you’ll need to keep it.)

rm -rf bin

rm -rf vendor/bundler_gems

All you need to add to .gitignore is .bundle

The Gemfile has changed a lot; you’ll need to check the documentation on the github site for all the specifics, but here’s our new one if you need a start. Sample Gemfile
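For a flavour of the 0.9-style syntax, here is a sketch; the gems, versions and source below are placeholders rather than our actual manifest:

```ruby
# Gemfile for bundler 0.9.x (gem names and versions illustrative)
source "http://gemcutter.org"

gem "rails", "2.3.5"
gem "mysql"

# 0.9 introduced group blocks in place of the old only/except syntax
group :test do
  gem "rspec"
  gem "cucumber"
end
```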

How you incorporate bundler into your app has also changed, and it also depends on what version of rails you are running. For this project, we’re still on rails 2.3.5, so this is what you need to do.

Running bundler has changed as well; you used to run things like gem bundle, now you run bundle install, bundle lock, etc. We’ve locked our project down, so after a pull from the repo you have to run bundle install --relock to get all your new code and relock the app. (This is going to change shortly to just having to run bundle install I think, waiting to see what happens with that one.)

And last, but not least, the new bundler has the ability to run the binaries from the gems it uses via a new command

bundle exec (gem binary)

e.g. bundle exec cucumber

This will ensure that you are using the binary from the gem you have installed via bundler, pretty neat.

There’ll be more updates as soon as we get time to upgrade; bundler is already on 0.9.10 so expect one soon. I hit some teething troubles with the newest version on deploy, so we’re holding fire on upgrading until I can work out what went wrong.

Happy bundling everyone.

Continuous Integration with Integrity

What is he banging on about this time?

Having a Continuous Integration system in place is a central practice of a functioning agile development team.

Our old CI server was bardy, supa dupa bardy, and over complicated. We were using cruisecontrol.rb; it did the trick, but we were using java to run our selenium tests, which was a bit of a pain, so we decided to use Integrity to handle it all instead. (And dropped the selenium tests for the time being; we just use cucumber and rspec.)

So you’ve built a bog standard Ubuntu Server build with ruby and apache with passenger installed, as well as all the normal security, monitoring and maintenance software loaded and configured. (Hopefully we’ll get an article up soon on how to do all this, without giving away too many secrets of course :-))

You’ll also need to install all the software on the system that you’ll need to run whatever tests you want to run in every project that needs to be tested. Nearly all the gems will be bundled, so they’re not so much of a worry, but things like imagemagick and relevant development libraries and system tools need to be installed.

Install Integrity

Let’s grab the code, install it and create the sqlite db

$ gem install bundler
$ git clone git://
$ cd integrity
$ git checkout -b deploy v0.2.3
$ gem bundle --only default
$ ./bin/rake db

Configure Integrity


Uncomment this line (we’re going to use the campfire notifier)

require "integrity/notifier/campfire"

and change c.base_url to be the url of your integrity server

c.base_url     ""

add this block prior to running the app to add some security to the site

use Rack::Auth::Basic do |user, pass|
  user == "admin" && pass == "secret"
end

Uncomment these two lines

gem "broach", :git => "git://"
gem "nap", :git => "git://"

Now you need to run gem bundle again inside the integrity app to add the new gems for the campfire notifier

$ gem bundle --only default

apache config

I’m going to serve it with apache, so I need to create the virtual host definition and enable it.

sudo vi /etc/apache2/sites-available/integrity

<VirtualHost *:80>
	CustomLog /var/log/apache2/integrity_access.log combined
	DocumentRoot /path/to/integrity/public
	<Directory /path/to/integrity/public>
		AllowOverride all
		Options -MultiViews
	</Directory>
</VirtualHost>

sudo a2ensite integrity
sudo /etc/init.d/apache2 restart

When you browse to your server you should now see the integrity application running, but obviously with no projects built yet, let’s build one.


We use the same name as our git repository on github, e.g.

Repository Url

This is the clone URL to your github project with .git at the end removed (current issue).

Branch to Track


Build Script

We currently use a bash script to perform some tasks prior to running the test suite. We were using a rake task, but ran into some issues with Rack 1.1, so we’re using this method now.
/path/to/ && script/cucumber

sudo vi /path/to/bashscript

#!/bin/bash
cp /path/to/database.yml config/
RAILS_ENV=test rake db:migrate --trace
/usr/bin/gem bundle > /dev/null 2>&1

Tick the Campfire Notifications and you’ll see some extra configuration options. We’re going to use this to send the result of the test back to our campfire site so we can all see the test results, good or bad :)


This is your campfire subdomain, so if my campfire site was at it would be



Tick it

Room Name

This is the name of the room itself, not the URL to it. e.g., if my room was called matt’s pictures 1.0, then you need to put in this box exactly that, matt’s pictures 1.0.

API Token

This is your API authentication token from Campfire. If you log in to campfire and click on My info you’ll see it there.

Notify on success

Tick it, it’s always good to know when the tests pass, in a vain attempt to balance out the failures :(

Update the project.

Configure Github

Firstly I had to add the contents of my public key file (e.g. ~/.ssh/id_rsa.pub) as an SSH Public Key on my github account.

Now we need to configure our Post Receive Hook on the Repo we’re testing, so click on the repository, then click on Admin. In there click on Service Hooks and then select Post receive URLS

Now add one with this format

this is defined in

c.github       "TOKEN"

Once you’ve updated your settings you can test that hook. What should happen is that the payload is delivered to your integrity server and the test process starts. It may not (more than likely won’t), so I’d go back to my integrity project and try manual builds from there until you get it to work.

Try out a manual build to see how it goes.

Couple of tips

1. Always have tail -f integrity.log open; it will give you some information about what’s happening.

2. Output the results of the different commands happening in your build so you can see what’s happening in detail. I found this really helpful.

e.g. in the Build Scripts section you could do this

/path/to/ && script/cucumber > /path/to/cucumber.log

you can then tail that log during a build to output any debugging you need

3. The bash scripts and whatever code you put into the Build Scripts section of the app will run in the new build folder. If you’re having problems running anything, just cd into that new build and try running the commands there. NB, you must be the same user that apache is running as, which is the same user that owns the integrity files and folders and your app’s files and folders.

Now when anyone pushes to our master branch of our application on github, our complete test suite runs and we all get notified of whether or not the tests passed in our campfire room. Now the fun never ends :)

MySQL Replication

The situation

We’ve got a web application that uses a mysql database as its backend. Some of the data held in that application needed to be used in another web application, but only read, never written to. So we had to come up with a method of sharing that data.


1. Use the Rails ActiveResource class

Using this method we could read restful data from the remote system directly, and use the ruby objects we pull into our site to display the necessary data.
This is a relatively easy method of achieving our goal, requiring minimal amounts of coding on the remote application and we keep one source of data.

Some of the downsides to this method are that if the remote system goes away (network failure, mysql crashes, etc.), then our new web app falls over. Also the remote system will always have to do most of the grunt work, running the mysql queries, creating the xml, etc. If it’s a busy remote system this may have a negative impact on how both systems run.

2. Use the Rails ActiveRecord class

The downsides to this method are the same as for the ActiveResource method, with a couple of additional problems. We’d then have access to all the database tables, and there are some tables that contain sensitive data, so that would require some additional work. Also, you can only define one database per application, so if we wanted to add any tables to our new web application, we’d have to add them to the database that powers the remote application as well. That could prove very difficult to manage.

3. XML

This is really a less efficient ActiveResource-type solution with the same pros and cons.
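For illustration, options 1 and 3 both boil down to fetching XML over HTTP and turning it into ruby objects. Here is a minimal sketch of the parsing half using only the standard library; the document structure and element names are made up:

```ruby
require "rexml/document"

# In the real app this string would come back from the remote system,
# e.g. via Net::HTTP; it's inlined here so the parsing step is clear.
xml = <<-XML
<products>
  <product><id>1</id><name>Widget</name></product>
  <product><id>2</id><name>Sprocket</name></product>
</products>
XML

doc = REXML::Document.new(xml)

# Pull the name of every product element out of the document
names = doc.elements.to_a("//product/name").map { |e| e.text }
puts names.inspect  # => ["Widget", "Sprocket"]
```

ActiveResource does the fetch, the parsing and the object mapping for you; the point is that every request makes the remote system do this work.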

4. Master/Slave MySQL Replication

The only real downside for this method is that we’d never done it before. After a bit of testing we quickly found out that we could eliminate all the problems of the other methods.

We can synchronise only the tables we want, so we wouldn’t run into problems with sensitive data.
If the remote system goes away then our system will remain unaffected and the data changes will “catch up” once the remote comes back online.
There is no additional load on the remote system as the new system will query itself.
If we need to add any tables to our new application then that will have no effect on the remote application at all.

MySQL Replication

Replication follows a Master/Slave methodology, in our case the master is the application that is already in place, and the slave will be the new application.

I’ve left out how to create the user for replication (slave_user) and give it the necessary permissions; you can find out how to do that in one of the pages in the article above if you don’t know how. Also don’t forget to set the bind-address on the master to an address the slave can reach (the master’s own IP, not 127.0.0.1), and open up the firewall on the master on port 3306 for the IP of the slave.

First off we need to enable binary logging on the master, set the server-id, the database we want to replicate, (we’ll specify the tables we want in the slave), and some other variables to keep the system from getting out of hand.

sudo vi /etc/mysql/my.cnf

Either add these lines to the [mysqld] section or uncomment and edit them if they are already there.
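A typical master block for this setup (paths and retention values are illustrative, not our exact lines) looks like:

```ini
[mysqld]
# unique ID for this server within the replication setup
server-id        = 1
# enable the binary log that the slave reads from
log_bin          = /var/log/mysql/mysql-bin.log
# only log changes to the database we want to replicate
binlog_do_db     = database_name
# stop the binary logs eating the disk
expire_logs_days = 10
max_binlog_size  = 100M
```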


restart the mysql server

sudo /etc/init.d/mysql restart

Now we need to tell the slave mysql server that it is the slave, and also which tables to replicate. You can also set some other management options here.

sudo vi /etc/mysql/my.cnf

Either add these lines to the [mysqld] section or uncomment and edit them if they are already there.
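A typical slave block (again illustrative; your table names go in replicate-do-table) looks like:

```ini
[mysqld]
# must differ from the master's server-id
server-id          = 2
relay-log          = /var/log/mysql/mysql-relay-bin.log
# replicate only the tables we care about
replicate-do-table = database_name.table_name
```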


restart the mysql server

sudo /etc/init.d/mysql restart

As we already have data in our master system, we’re using the mysqldump method to get the current data (via Sequel Pro). In order for the data in the master to be the same as that in the slave initially, we need to stop any commits, or LOCK the tables of the master database, prior to taking the data dump for the slave.

mysql -uusername -ppassword
mysql>use database_name;
mysql>FLUSH TABLES WITH READ LOCK;

This will block all commits until you close that mysql session. Export the SQL dump from the master now and import it into the slave. I did this with Sequel Pro and the import/export tools available in it.

We now need to grab some details from the master that we’ll need later on when we tell the slave to start replicating.

mysql>SHOW MASTER STATUS;

| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
| mysql-bin.000004 | 20060 | database_name | |

Now back on the slave we need to use these variables and some others to setup the replication.

mysql -uusername -ppassword
mysql>STOP SLAVE; (check if the slave is already running or not)
mysql>CHANGE MASTER TO MASTER_HOST='IP of master', MASTER_USER='slave_user', MASTER_PASSWORD='slave_users password', MASTER_LOG_FILE='mysql-bin.000004', MASTER_LOG_POS=20060;

Go back to the master and close the mysql session you started earlier. This will release the lock on the tables and allow updates and commits on the master again.

Now, back on the slave, start replication:

mysql>START SLAVE;
That’s it: any changes in database_name.table_name on the master will be replicated over to the slave. If the master goes away for any reason, the slave system will still function and will ‘catch up’ when the master comes back online.

What we’ve now got working is a central data storage point that pushes any changes in its own data out to a remote system as and when the data changes. It’s pretty easy to add more slaves if you need to for scalability.

If you had a lot of people/systems all working on the same dataset, and you wanted to make certain that they were all using the same data, then the remote systems that only read data could be the slaves, and any systems that need to write data could use the master. You can also set up master to master replication, so that if some remote systems needed to write data as well, then they could.

My little bundler of Joy

Bit of Background

One of the new bits of technology coming into Rails 3.0 is the all singing and dancing Gem Bundler. If you’ve ever found yourself in config.gem hell, or have pushed a perfectly fine project from development to production and it’s all gone kaput due to gem problems, then this is the medicine that you need. Here’s the blurb

Bundler is a tool that manages gem dependencies for your ruby application. It takes a gem manifest file and is able to fetch, download, and install the gems and all child dependencies specified in this manifest. It can manage any update to the gem manifest file and update the bundled gems accordingly. It also lets you run any ruby code in context of the bundled gem environment.

We’re not running Rails 3.0 yet, but that’s no reason not to start using bundler.

Let’s install it, I’ve settled on version 0.7.1 as I was having odd problems with any of the newer versions.

sudo gem install bundler -v 0.7.1

Next step is to create a file called Gemfile in your application’s root directory, and then fill it with the gems you need, and also which environments you need them in. Here’s the one I’m using.

Sample Gemfile
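For a flavour of the 0.7-era DSL, a sketch (gem names and versions are placeholders; note it used only rather than the later group syntax):

```ruby
# Gemfile for bundler 0.7.x (names and versions illustrative)
source "http://gemcutter.org"

# don't fall back to system gems; everything comes from the bundle
disable_system_gems

gem "rails", "2.3.4"

# 0.7 scoped gems with only/except rather than group blocks
only :test do
  gem "rspec"
  gem "cucumber"
end
```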

Now add these lines to your .gitignore file, this will make deploys easier and faster
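With 0.7’s layout, the usual candidates are the generated binaries and the expanded gems (check against what gem bundle actually creates in your app):

```
bin/
vendor/bundler_gems/gems/
vendor/bundler_gems/specifications/
vendor/bundler_gems/environment.rb
```

Keeping vendor/bundler_gems/cache checked in is part of what makes deploys faster, as the .gem files don’t need fetching again.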


Now it’s time to start a bundling. From inside your application run

gem bundle

That will now do all its magical stuff for you, creating the bin executables and the necessary files and folders in the vendor/bundler_gems folder.

Will it work yet? Not quite :(

To get it working with rails 2.3.4 I had to do the following things.

1. Create a file called config/preinitializer.rb with this code in

require "#{File.dirname(__FILE__)}/../vendor/bundler_gems/environment"

2. Add the following line to all environment files, i.e., config/environments/*.rb

Bundler.require_env RAILS_ENV

Now when I start my app using script/server, or nginx and passenger it all works as it should. I even removed all my local gems just to check. (stupid idea as not all my rails projects are bundled yet :-))

Deploying the medicine

I’ve unashamedly nicked the deploy stuff from numerous people; I love you all, even though I can’t remember your names.

Capistrano Deploy Script

Watch out for….

It might sound obvious but some gems can’t be handled in this way. Have a good think about that prior to using bundler; one for us was passenger. I had a hell of a time trying to work out how to use it with bundler, then I got some sane advice to just install passenger as a module of nginx, problem gone. The unicorn is stalking me at the moment though, so we may have a passenger who wants to get off shortly.

We were using REE in production, and in my experience REE and bundler didn’t get on, so I had to remove REE and install the ruby that comes down the pipe with aptitude, then rebuild nginx with the passenger module. After doing that, it all worked ok.

I now deploy from the application like so

./bin/cap staging/production deploy

That way I know what version of capistrano I’m using per app

Don’t delete all your local gems until all your projects are bundled, and even then be careful.

If you have rake tasks that are being run by cronjobs, make sure that the rake you are accessing is present. RAILS_ENV=production rake some:jobname will not work from a cron job now, as rake is not available like that any more. You’ll need to edit it to point at the rake in your project’s bin.

RAILS_ENV=production /path/to/project/bin/rake some:jobname

Or add the bin folder in your project path to your $PATH, that might be a better solution.
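For example, a crontab entry along those lines (paths, schedule and task name are all illustrative) might read:

```
# Make the project's bundled binaries the first thing cron finds
PATH=/path/to/project/bin:/usr/bin:/bin

# Run the task hourly from the project root
0 * * * * cd /path/to/project && RAILS_ENV=production rake some:jobname
```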

Happy bundling everyone.

Our Current Technology

ubuntu 8.04, rails 2.3.4, ruby 1.8.6, nginx 0.7.64 (with passenger 2.2.8 installed as a module)

People who’ve helped immeasurably

#carlhuda on freenode