32-bit ODBC driver with 32-bit unixODBC on a 64-bit Ubuntu OS

In this article I’m going to build a 64-bit Ubuntu 10.04 OS, then install 32-bit unixODBC and get it to connect to a remote ODBC data source using a third-party 32-bit driver. No 64-bit driver available :-(

Why?

If I manage to pull it off, and then get ruby-odbc to work, I won’t have to build and manage a new server in order to get data from a remote data source into our current Ruby on Rails applications.

First up, let’s install the OS

I’m building the OS with VMware Fusion on my Mac, using ubuntu-10.04.2-server-amd64.iso

Do not install any additional packages during the OS install

sudo locale-gen en_GB.UTF-8
sudo /usr/sbin/update-locale LANG=en_GB.UTF-8
sudo aptitude update
sudo aptitude safe-upgrade
sudo reboot
sudo aptitude -y install ssh
sudo aptitude install build-essential linux-headers-`uname -r`
sudo ln -s /usr/src/linux-headers-`uname -r` /usr/src/linux
sudo reboot
sudo aptitude install ia32-libs
sudo apt-get install g++-multilib
mkdir ~/src
cd ~/src
wget ftp://ftp.unixodbc.org/pub/unixODBC/unixODBC-2.3.0.tar.gz
tar -xzvf unixODBC-2.3.0.tar.gz
cd unixODBC-2.3.0/
CFLAGS=-m32 LDFLAGS=-m32 CXXFLAGS=-m32 ./configure
make
sudo make install

That has installed the 32-bit unixODBC libraries into /usr/local/lib. You can confirm that by running

file /usr/local/lib/libodbcinst.so.1.0.0

which should say

/usr/local/lib/libodbcinst.so.1.0.0: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, not stripped

Now upload the third-party Sassafras driver into that directory, and then test it:

ldd /usr/local/lib/libksodbc.dbg.so
linux-gate.so.1 => (0xf770e000)
libgssapi_krb5.so.2 => /usr/lib32/libgssapi_krb5.so.2 (0xf7632000)
libm.so.6 => /lib32/libm.so.6 (0xf760c000)
libc.so.6 => /lib32/libc.so.6 (0xf74b1000)
libkrb5.so.3 => /usr/lib32/libkrb5.so.3 (0xf7400000)
libk5crypto.so.3 => /usr/lib32/libk5crypto.so.3 (0xf73dc000)
libcom_err.so.2 => /lib32/libcom_err.so.2 (0xf73d8000)
libkrb5support.so.0 => /usr/lib32/libkrb5support.so.0 (0xf73d0000)
libdl.so.2 => /lib32/libdl.so.2 (0xf73cb000)
libkeyutils.so.1 => /lib32/libkeyutils.so.1 (0xf73c7000)
libresolv.so.2 => /lib32/libresolv.so.2 (0xf73b3000)
/lib/ld-linux.so.2 (0xf770f000)
libpthread.so.0 => /lib32/libpthread.so.0 (0xf739a000)

Voila, it all looks rosy in the garden.

Now we need to define our odbc connections in /etc/odbc.ini and /etc/odbcinst.ini

These settings are specific to the driver you are using; check with the driver manufacturer for the recommended setup.

sudo vi /etc/odbc.ini
[ksdsn]
Driver = ksodbc
Description = foobar
RemoteHost = [IP ADDRESS]

sudo vi /etc/odbcinst.ini
[ksodbc]
Description = Sassafras Keyserver Database
Driver = /usr/local/lib/libksodbc.dbg.so
UsageCount = 1

Let’s test it

isql -v ksdsn [username] [password]

+---------------------------------------+
| Connected! |
| |
| sql-statement |
| help [tablename] |
| quit |
| |
+---------------------------------------+
SQL>

If you get that back, you are at a joyous moment in time when you are connected.
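Since the end goal is getting ruby-odbc talking to this DSN, here’s a rough preview of the kind of sanity check Part 2 will build on. This is a minimal sketch only, assuming the ruby-odbc gem has been built against the 32-bit unixODBC above; the username, password and query are placeholders:

require "odbc"

# Connect via the DSN defined in /etc/odbc.ini
conn = ODBC.connect("ksdsn", "username", "password")

# Run a trivial query and print the first row
stmt = conn.run("SELECT 1")
p stmt.fetch

stmt.drop
conn.disconnect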

Happy days.

Thanks to the good people on the unixODBC mailing list for a lot of help with this process, the Ubuntu forums of course, without whom I would be a lesser man, and last but not least, Julian Delvin at Sassafras.

Part 2 will be configuring ruby and then rails to talk to the database. Wish me luck!

Posted in Server Side | Leave a comment

There’s an app for that ..

More details soon..

Posted in Code, Design, Opinion | Leave a comment

IWMW2010 – a conference with the theme ‘the Web in turbulent times’

IWMW Talks I remembered

Stylesheets for mobile phones with Helen from Cambridge.

I enjoyed this session where Helen from Cambridge showed the steps she’d considered and taken to experiment with media queries and different styles being fed to different devices. The slides are available, and there was some good discussion about linearising the page for mobile devices. There was no consensus about the best way to do this, illustrating that the focus of a homepage can be difficult to maintain when having to distill it down. If nothing else, the constraints of smaller devices mean you have to make some hard decisions about what is really useful and important to your visitors; decisions that can perhaps be fudged with the screen space available to desktop users. I was struck by the amount of work and thought that Helen had put into considering the user, and yet it may all be to no avail, since devices have improved so much that they provide their own methods for navigating sites.

Course Advertising and XCRI

This session was quite a broad one, giving an overview of a project that has been running for a while to try to standardise a format that describes and structures course information. The session focussed on the XCRI-CAP part of the project, which looks at the marketing information of courses. Some good tools were presented to check how ready an institution would be to start using this. It struck me that it could be really useful in our circumstances, where we have courses across the Glamorgan Group at a variety of different levels, and a standard way of referring to them would help. Other universities are starting to use this format, so the real benefits of standards might actually accrue to the end user.

Slate my website barcamp

Really fun session run by Mike Nolan from Edgehill. The idea was to have a quick look around a university’s site and mark it on design, content and code. Reading, Nottingham and Edgehill were reviewed (Dan from York did the honours for the Edgehill review), and marks were then given whilst everyone discussed aspects of the site. It was really useful to see a site through someone else’s eyes, and it worked really well to quickly identify things that could be done better. It was a shame it didn’t go on longer.

The Web in Turbulent Times

Really good broad talk about IT and where the web fits in. Nice video here with twitter responses from the time. She made a very good point that IT projects are considered separate from business projects when they are in fact integral; there is an unhelpful perception that IT is somehow separate from the business. Chris also made some interesting points about shared services and the pressure from government for the education sector to share things more.

HTML5 (and friends)

Enjoyed the good talk from Patrick Lauke, and thought it worked well as a tactical talk, encouraging a look at the practical steps one can take to get started with HTML5. It struck me that there was an appetite in the audience to get cracking, and Patrick made it seem less daunting and complicated than many people (myself included) imagine it to be.

‘So what do you do exactly?’ In challenging times justifying the roles of the web teams

Galvanising talk about stats and measuring what we do. I particularly liked the reminder that universities are big businesses and the web is central to how we do business. I think the whole room saw the value of taking the time to present the case for what we do in business terms (going back to the unhelpful separation between IT and business goals). The importance of providing context for cost per click was nicely made, with Sid explaining that the cost of a link to gocompare.com on Google seems high in isolation but was worth it to that company. Similarly, the link to download a brochure from a car manufacturer’s site could be measured and used to make the case for that method of communication.

No money? No matter – Improve your website with next to no cash

Another talk, by Paul Boag, that had many nodding their heads and resolving to implement the suggestions. The key one for me was the idea of content curation. With hard times forecast ahead, he suggested that we take the opportunity to scale down sites to provide a better user experience, focusing on making smaller but better sites.

Sharepoint, Sheffield CMS and Student Portal

There was a mixture of talks on the last day, which were beginning to merge a little by then. Josef Lapka presented a very nice student portal that they have created at Canterbury, which lots of people were impressed with. Richard Brierton gave a talk about the process of rolling out a new CMS at Sheffield, and people were eager to hear about the practicalities and problems that they had faced. We then came to a talk on Sharepoint by James Lapping and Peter Gilbert that provoked a very busy twitter back channel, with most coming out strongly against it.

General Themes

  • 2 years is too long for an IT project
  • Lots more people seem to be doing or thinking about agile.
  • CMS – The eternal search for the holy grail goes on.
  • Mobile Apps vs Mobile Web
  • Practical talks versus strategic vision

Rather than link individually to each talk, it’s better if I point you to the Resources page where the organisers have done a great job in collecting and presenting much of the event content.

Posted in Conference | Leave a comment

Why your PDF should be HTML

Over the last few months the issue of formats has come up a few times, with librarians, educators and marketeers all wanting to use PDFs to deliver information to the user. I thought now would be an opportune time to state why, in many cases, this is a bad idea (even if done with the best of intentions).

What the experts say

Jakob Nielsen’s Alertbox, July 14, 2003 – PDF: Unfit for Human Consumption
Usability guru Jakob Nielsen is forthright in his appraisal of PDF as a format on web sites.

Joe Clark’s 2005 article about PDF accessibility is included here for the section where Joe clearly elucidates why most things should be HTML and, even better, provides a thorough list of exceptions. If your information doesn’t fall into one of these categories then you really should be using HTML.

Where we are going wrong

Of the many PDFs currently available on our various web sites, very few can really justify the format they are in if we use the criteria laid out in the articles linked to above. I believe that a combination of overstating the role of a particular visual style and understating the inconvenience to the user leads to a situation where simply uploading a document is deemed to suffice. I don’t believe it does. If we want to provide the best experience for users then we need to make that extra (small) effort to put the information in the right format. It’s not that hard, and everyone benefits.

Issues with ISSUU

A beta subject guides page has been created by the proactive librarians we have at Glamorgan that uses issuu.com, a service for hosting PDFs that wraps them up in flash and adds various user interface features like page-turning animations, zooming and various views, plus useful social features like commenting and sharing. I think it’s unfortunate that the useful features have been mingled with user interface fluff that actually makes the information harder to retrieve.

Putting it into practice

To show what’s possible I downloaded a PDF of Lighting Design and Technology & Live Event Technology from a subject guides page, and spent an hour or two copying and pasting to create an HTML version. The PDF is 115k to download; the HTML is 41.5k. In addition to the smaller file size, the user does not need to wait for the PDF reader to open, can navigate via a table of contents and, most usefully, can click on the many URLs to go straight to the info. The HTML format enables the user to interact with the links directly rather than read, then copy, them.

It may not have the visual impact of the issuu PDF version, but it is more functional, and sits in the browser window that people are used to. Also, none of this precludes making the PDF available for those people who wish to download it.

Summary

I hope people find this a useful position statement, and I’d love to see some responses in the comments.

Posted in Accessibility, Code, Opinion | Comments Off

Bundler update, they grow up so fast!

Recently I wrote a post about using bundler; so much has changed since then that I thought we needed a little update.

Firstly, we’re now using bundler 0.9.7, so you need to remove the old bundler and install the new one

gem uninstall bundler

gem install bundler -v 0.9.7

The new bundler installs the files it needs from different places, so you’ll need to edit your .gitignore file and remove the old structure from your app. (Make sure you haven’t put anything in bin that is not connected with bundler; you’ll need to keep that if you have.)

rm -rf bin

rm -rf vendor/bundler_gems

All you need to add to .gitignore is .bundle

The Gemfile has changed a lot; you’ll need to check the documentation on the GitHub site for all the specifics, but here’s our new one if you need a start.  Sample Gemfile
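For a flavour of the new syntax, a bundler 0.9 Gemfile looks roughly like this. A sketch only; the gems and versions below are illustrative assumptions, not our actual manifest:

source "http://rubygems.org"

gem "rails", "2.3.5"
gem "mysql"

# group replaces the only/except blocks of bundler 0.7
group :test do
  gem "rspec"
  gem "cucumber"
end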

How you incorporate bundler into your app has also changed, and it depends on what version of Rails you are running.  For this project we’re still on Rails 2.3.5, so this is what you need to do.
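From memory, the wiring the bundler docs described for Rails 2.3 at the time went roughly like this; treat it as a sketch and check the documentation for your exact version. First, create config/preinitializer.rb:

begin
  # Use the locked gem environment if `bundle lock` has been run.
  require File.expand_path('../../.bundle/environment', __FILE__)
rescue LoadError
  # Fall back to resolving the Gemfile at runtime.
  require "rubygems"
  require "bundler"
  Bundler.setup
end

Then hook gem loading in config/boot.rb, just above the final Rails.boot! line:

class Rails::Boot
  def run
    load_initializer

    # Load the bundled gems in place of config.gem entries
    Rails::Initializer.class_eval do
      def load_gems
        @bundler_loaded ||= Bundler.require :default, Rails.env
      end
    end

    Rails::Initializer.run(:set_load_path)
  end
end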

Running bundler has changed as well: you used to run things like gem bundle; now you run bundle install, bundle lock, etc.  We’ve locked our project down, so after a pull from the repo you have to run bundle install --relock to get all your new code and relock the app.  (This is going to change shortly to just having to run bundle install, I think; waiting to see what happens with that one.)

And last, but not least, the new bundler has the ability to run the binaries from the gems it uses via a new command

bundle exec (rails binary)

e.g. bundle exec cucumber

This will ensure that you are using the binary from the gem you have installed via bundler. Pretty neat.

There’ll be more updates as soon as we get time; bundler is already on 0.9.10, so expect one soon.  I had some teething troubles with the newest version on deploy, so we’re holding fire on upgrading until I can work out what went wrong.

Happy bundling everyone.

Posted in Uncategorized | Leave a comment

Continuous Integration with Integrity

What is he banging on about this time?

Having a Continuous Integration system in place is a central practice of a functioning agile development team.

Our old CI server was bardy, supa dupa bardy, and over complicated. We were using cruisecontrol.rb; it did the trick, but we were using Java to run our Selenium tests, which was a bit of a pain, so we decided to use Integrity to handle it all instead. (We also dropped the Selenium tests for the time being and just use cucumber and rspec.)

So you’ve built a bog-standard Ubuntu server with ruby, and apache with passenger installed, as well as all the normal security, monitoring and maintenance software loaded and configured. (Hopefully we’ll get an article up soon on how to do all this, without giving away too many secrets of course :-))

You’ll also need to install all the software on the system that you’ll need to run whatever tests you want to run in every project that needs to be tested. Nearly all the gems will be bundled, so they’re not so much of a worry, but things like imagemagick and relevant development libraries and system tools need to be installed.

Install Integrity

Let’s grab the code, install it and create the sqlite db

$ gem install bundler
$ git clone git://github.com/integrity/integrity
$ cd integrity
$ git checkout -b deploy v0.2.3
$ gem bundle --only default
$ ./bin/rake db

Configure Integrity

init.rb

Uncomment this line (we’re going to use the campfire notifier)

require "integrity/notifier/campfire"

and change c.base_url to be the URL of your integrity server

c.base_url     "myintegrityserver.com"

config.ru

Add this block prior to running the app, to add some basic auth security to the site

use Rack::Auth::Basic do |user, pass|
  user == "admin" && pass == "secret"
end

Gemfile

Uncomment these two lines

gem "broach", :git => "git://github.com/Manfred/broach.git"
gem "nap", :git => "git://github.com/qrush/nap.git"

Now you need to run gem bundle again inside the integrity app to add the new gems for the campfire notifier

$ gem bundle --only default

apache config

I’m going to serve it with apache, so I need to create the virtual host definition and enable it.

sudo vi /etc/apache2/sites-available/integrity

<VirtualHost myintegrityserver.com:80> 
	ServerName myintegrityserver.com 
	CustomLog /var/log/apache2/integrity_access.log combined 
	DocumentRoot /path/to/integrity/public 
	<Directory /path/to/integrity/public> 
		AllowOverride all
		Options -MultiViews
	</Directory>
</VirtualHost>

sudo a2ensite integrity
sudo /etc/init.d/apache2 restart

When you browse to your server you should now see the integrity application running, but obviously with no projects built yet. Let’s build one.

Name

We use the same name as our git repository on github, e.g.
web_app

Repository Url

This is the clone URL to your github project, with the .git at the end removed (a current issue).
e.g.
git@github.com:username/reponame

Branch to Track

master

Build Script

We currently use a bash script to perform some tasks prior to running the test suite. We were using a rake task, but ran into some issues with rack 1.1, so we’re using this method now.
e.g.
/path/to/bashscript.sh && script/cucumber

sudo vi /path/to/bashscript.sh

#!/bin/sh
# Put the test database config in place for this build
cp /path/to/database.yml config/
# Bring the test database schema up to date
RAILS_ENV=test rake db:migrate --trace
# Install the bundled gems, discarding the output
/usr/bin/gem bundle > /dev/null 2>&1

Tick the Campfire Notifications and you’ll see some extra configuration options. We’re going to use this to send the result of the test back to our campfire site so we can all see the test results, good or bad :-)

Subdomain

This is your campfire subdomain, so if my campfire site was at matt.campfirenow.com it would be

matt

SSL

Tick it

Room Name

This is the name of the room itself, not the URL to it. E.g., if my room was called matt’s pictures 1.0, then you need to put in this box exactly that: matt’s pictures 1.0.

API Token

This is your API authentication token from Campfire. If you log in to campfire and click on My info you’ll see it there.

Notify on success

Tick it; it’s always good to know when the tests pass, in a vain attempt to balance out the failures :-(

Update the project.

Configure Github

Firstly I had to add the contents of ~/.ssh/id_rsa.pub as an SSH public key on my github account.

Now we need to configure our Post-Receive Hook on the repo we’re testing, so click on the repository, then click on Admin. In there click on Service Hooks and then select Post-Receive URLs.

Now add one with this format

http://myintegrityserver.com/github/TOKEN

this is defined in config.ru

c.github       "TOKEN"

Once you’ve updated your settings you can test that hook. What should happen is that the payload is delivered to your integrity server and the test process starts. It may not (more than likely it won’t), so go back to your integrity project and try manual builds from there until you get it to work.

Try out a manual build at http://myintegrityserver.com to see how it goes.

Couple of tips

1. Always have tail -f integrity.log open; it will give you some information about what’s happening.

2. Output the results of the different commands happening in your build so you can see what’s happening in detail. I found this really helpful.

e.g. in the Build Scripts section you could do this

/path/to/bashscript.sh && script/cucumber > /path/to/cucumber.log

You can then tail that log during a build to see any debugging output you need.

3. The bash scripts and whatever code you put into the Build Scripts section of the app will run in the new build folder. If you’re having problems running anything, just cd into that new build and try running the commands there. NB: you must be the same user that apache is running as, which is the same user that owns the integrity files and folders and your app’s files and folders.

Now when anyone pushes to the master branch of our application on github, the complete test suite runs and we all get notified in our campfire room of whether or not the tests passed. The fun never ends :-)

Posted in Uncategorized | Leave a comment

MySQL Replication

The situation

We’ve got a web application that uses a mysql database as its backend. Some of the data held in that application needed to be used in another web application, but only ever read, never written to. So we had to come up with a method of sharing that data.

Options

1. Use the Rails ActiveResource class

Using this method we could read RESTful data from the remote system directly, and use the ruby objects we pull into our site to display the necessary data.
This is a relatively easy method of achieving our goal, requiring a minimal amount of coding on the remote application, and we keep one source of data.
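For illustration, the ActiveResource route looks something like this; a sketch in which the resource name and remote URL are hypothetical:

# A resource exposed RESTfully by the remote application
class Article < ActiveResource::Base
  self.site = "http://remote-app.example.com"
end

# Fetches and parses http://remote-app.example.com/articles/1.xml
article = Article.find(1)
puts article.title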

Some of the downsides to this method are that if the remote system goes away (network failure, mysql crashes, etc.), then our new web app falls over. Also, the remote system will always have to do most of the grunt work: running the mysql queries, creating the XML, and so on. If it’s a busy remote system this may have a negative impact on how both systems run.

2. Use the Rails ActiveRecord class

The downsides to this approach are the same as for the ActiveResource method, with a couple of additional problems. We’d have access to all the database tables, and some tables contain sensitive data, so that would require some additional work. Also, you can normally only define one database per application, so if we wanted to add any tables to our new web application, we’d have to add them to the database that powers the remote application as well. That could prove very difficult to manage.
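For reference, reading the remote database from Rails 2.3 via ActiveRecord would look something like this, using a per-model connection; the credentials and table name below are hypothetical:

# A model wired directly to the remote application's database
class RemoteArticle < ActiveRecord::Base
  establish_connection(
    :adapter  => "mysql",
    :host     => "remote-host",
    :username => "readonly_user",
    :password => "secret",
    :database => "remote_app_production"
  )
  set_table_name "articles"
end

RemoteArticle.first # reads straight from the remote database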

3. XML

This is really a less efficient ActiveResource-type solution with the same pros and cons.

4. Master/Slave MySQL Replication

The only real downside to this method was that we’d never done it before. After a bit of testing we quickly found that it could eliminate all the problems of the other methods:

  • We can synchronise only the tables we want, so we wouldn’t run into problems with sensitive data.
  • If the remote system goes away, our system will remain unaffected and the data changes will “catch up” once the remote comes back online.
  • There is no additional load on the remote system, as the new system will query itself.
  • If we need to add any tables to our new application, that will have no effect on the remote application at all.

MySQL Replication

Replication follows a Master/Slave methodology; in our case the master is the application that is already in place, and the slave will be the new application.

I’ve left out how to create the user for replication (slave_user) and give it the necessary permissions; you can find out how to do that in one of the pages in the article above if you don’t know how. Also don’t forget to set the bind-address on the master to an address the slave can reach (not just 127.0.0.1), and open up the firewall on the master on port 3306 for the IP of the slave.

First off we need to enable binary logging on the master, set the server-id and the database we want to replicate (we’ll specify the tables we want on the slave), and some other variables to keep the system from getting out of hand.

MASTER
sudo vi /etc/mysql/my.cnf

Either add these lines to the [mysqld] section, or uncomment and edit them if they are already there.

server-id=1
log_bin=/var/log/mysql/mysql-bin.log
expire_logs_days=10
max_binlog_size=100M
binlog_do_db=database_name

restart the mysql server

sudo /etc/init.d/mysql restart

Now we need to tell the slave mysql server that it is the slave, and also which tables to replicate. You can also set some other management options here.

SLAVE
sudo vi /etc/mysql/my.cnf

Either add these lines to the [mysqld] section, or uncomment and edit them if they are already there.

server-id=2
master-connect-retry=60
replicate-do-table=database_name.table_name

restart the mysql server

sudo /etc/init.d/mysql restart

As we already have data in our master system, we’re using the mysqldump method to get the current data (via Sequel Pro). In order for the data in the slave to initially match the master, we need to stop any commits, or LOCK the tables of the master database, prior to taking the data dump for the slave.

MASTER
mysql -uusername -ppassword
mysql>use database_name;
mysql>FLUSH TABLES WITH READ LOCK;

This will block all commits until you close that mysql session. Export the SQL dump from the master now and import it into the slave. I did this with Sequel Pro and the import/export tools available in it.

We now need to grab some details from the master that we’ll need later on when we tell the slave to start replicating.

mysql>SHOW MASTER STATUS;
+------------------+----------+---------------+------------------+
| File             | Position | Binlog_Do_DB  | Binlog_Ignore_DB |
+------------------+----------+---------------+------------------+
| mysql-bin.000004 | 20060    | database_name |                  |
+------------------+----------+---------------+------------------+

Now back on the slave we need to use these variables and some others to setup the replication.

SLAVE
mysql -uusername -ppassword
mysql>STOP SLAVE; (check if the slave is already running or not)
mysql>CHANGE MASTER TO MASTER_HOST='IP of master', MASTER_USER='slave_user', MASTER_PASSWORD='slave_users password', MASTER_LOG_FILE='mysql-bin.000004', MASTER_LOG_POS=20060;
mysql>START SLAVE;
mysql>quit;

Go back to the master and close the mysql session you started earlier. This will release the lock on the tables and allow updates and commits on the master again.

MASTER
mysql>quit;

That’s it: any changes to database_name.table_name on the master will be replicated over to the slave (you can check how replication is getting on by running SHOW SLAVE STATUS; on the slave). If the master goes away for any reason, the slave system will still function and will ‘catch up’ when the master comes back online.

What we’ve now got working is a central data storage point that pushes any changes in its own data out to a remote system as and when the data changes. It’s pretty easy to add more slaves if you need to for scalability.

If you had a lot of people/systems all working on the same dataset, and you wanted to make certain that they were all using the same data, then the remote systems that only read data could be the slaves, and any systems that need to write data could use the master. You can also set up master-to-master replication, so that if some remote systems needed to write data as well, they could.

Posted in Code | Leave a comment

Designing a Course Details Page

Have been looking at improving some of the previous ideas I’ve had for our course details pages. They are a crucial part of our site, so I wanted to have a good think about the way they should work. An essential part of design is clarity about the different priorities of the information you have to convey. If everything you have to say were of equal importance then that could be reflected in the visual representation. However, this rarely happens, and there is usually a definite hierarchy from the most important information down. While this is an often tricky set of decisions when organising a home page, the structure of a course information page should be a little more obvious. This set of priorities then informs the semantic structure of the page.

To help me arrive at a sensible page structure, I had a ponder about what someone might want from a course detail page. This page should firstly collect as much relevant info about the course as possible and, secondly, supplemental info that might help someone make a decision. Marketing have provided the broad headings that most of the information comes under, and these are pretty common across other universities’ course pages. (give examples) So, a user would click through to the course page about English, and see a summary that helps them decide if they are on the right page. If so, then the other headings are designed to give them more detailed info that would help them decide, whilst hopefully making them interested in the possibility of studying that course.

Ideally, once someone has read enough of the exciting words about the course, they would want to act upon what they’ve just read. To enable that, the sidebar has a set of links with a variety of designs, to attract attention and emphasise that there are things the user can do. There’s a link to apply online, to book an open day, and to contact the university. Hopefully more similarly tailored tools will follow. Currently styled with a predominantly blue theme, I envisage versions based on the faculty colour themes.

Further down the page there are boxes with related info. Based on the presumption that all the previous info didn’t quite match what they were looking for, these are provided as jumping-off points: links to related courses, and links to different parts of the site for Fees, Accommodation and a parent-focused site.

It’s best to keep this info together as much as possible, rather than spread it across multiple pages with the attendant difficulty of managing links and urls. A consequence of that is that the page could be very long. I don’t mind long pages myself, but it’s good to give the user the choice about which sections are relevant to them. Putting them in an accordion interface puts that control back in the hands of the user.

As per Paul Boag’s article (http://24ways.org/2009/what-makes-a-website-successful), I think it’s really important that we try to create pages with very clear ideas of why they exist and what we would like them to achieve. If we can manage that then that clarity will benefit the people who use the site.

I tried a more rigid representation of this hierarchy, with the ‘Information’, ‘Links’ and ‘Tools’ blocks following one another in a single column. As you can see, it’s probably not a good idea to enforce such a rigid hierarchy.

Annotated design idea

I looked at placing the tools on the left, but felt that they then became too prominent, since the info block is designed to be read first. It’s common practice to have in-page navigation on the left, and ordinarily I can see that it’s a convenient place to put things to make it easy to move around. However, since these pages are designed to be the end result of a search, it would be counterproductive to give links that take a user away the same visual priority as the main information.

Annotated Design Idea

In the end I’ve settled on a right aligned tools block, with the idea that it’s easily placed for jumping off, but doesn’t demand too much attention. The task priority on this page would then be

  • Read
  • Act
  • Continue Searching

Annotated Design

Posted in Design | Comments Off

My little bundler of Joy

Bit of Background

One of the new bits of technology coming into Rails 3.0 is the all-singing, all-dancing Gem Bundler. If you’ve ever found yourself in config.gem hell, or have pushed a perfectly fine project from development to production and it’s all gone kaput due to gem problems, then this is the medicine that you need. Here’s the blurb:

Bundler is a tool that manages gem dependencies for your ruby application. It takes a gem manifest file and is able to fetch, download, and install the gems and all child dependencies specified in this manifest. It can manage any update to the gem manifest file and update the bundled gems accordingly. It also lets you run any ruby code in context of the bundled gem environment.

We’re not running Rails 3.0 yet, but that’s no reason not to start using bundler.

Let’s install it. I’ve settled on version 0.7.1, as I was having odd problems with any of the newer versions.

sudo gem install bundler -v 0.7.1

The next step is to create a file called Gemfile in your application’s root directory, and then fill it with the gems you need, and the environments you need them in. Here’s the one I’m using.

Sample Gemfile
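If you’ve not seen one before, a bundler 0.7 Gemfile looks roughly like this. A sketch only; the gems listed are illustrative assumptions, not our actual manifest. Note that 0.7 uses only/except blocks for environments, rather than the group syntax of later versions:

source "http://gemcutter.org"

gem "rails", "2.3.4"
gem "mysql"

# Gems needed only in the test environment
only :test do
  gem "rspec"
  gem "cucumber"
end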

Now add these lines to your .gitignore file; this will make deploys easier and faster

bin/
vendor/bundler_gems/*
!vendor/bundler_gems/cache/*

Now it’s time to start a bundling. From inside your application run

gem bundle

That will now do all its magical stuff for you, creating the bin executables and the necessary files and folders in the vendor/bundler_gems folder.

Will it work yet? Not quite :-(

To get it working with rails 2.3.4 I had to do the following things.

1. Create a file called config/preinitializer.rb with this code in

require "#{File.dirname(__FILE__)}/../vendor/bundler_gems/environment"

2. Add the following line to all environment files, i.e. config/environments/*.rb

Bundler.require_env RAILS_ENV

Now when I start my app using script/server, or nginx and passenger, it all works as it should. I even removed all my local gems just to check. (Stupid idea, as not all my rails projects are bundled yet :-))

Deploying the medicine

I’ve unashamedly nicked the deploy stuff from numerous people; I love you all, even though I can’t remember your names.

Capistrano Deploy Script

Watch out for….

It might sound obvious, but some gems can’t be handled in this way. Have a good think about that prior to using bundler; one for us was passenger. I had a hell of a time trying to work out how to use it with bundler, then I got some sane advice to just install passenger as a module of nginx; problem gone. The unicorn is stalking me at the moment though, so we may have a passenger who wants to get off shortly.

We were using REE in production; in my experience REE and bundler didn’t get on, so I had to remove REE and install the ruby that comes down the pipe with aptitude, then rebuild nginx with the passenger module. After doing that, it all worked OK.

I now deploy from the application like so

./bin/cap staging/production deploy

That way I know what version of capistrano I’m using per app

Don’t delete all your local gems until all your projects are bundled, and even then be careful.

If you have rake tasks that are being run by cronjobs, make sure that the rake you are accessing is present.
e.g.
RAILS_ENV=production rake some:jobname will not work from a cron job now, as rake is not available like that any more. You’ll need to edit it to point at the rake in your project bin.

RAILS_ENV=production /path/to/project/bin/rake some:jobname

Or add the bin folder in your project to your $PATH; that might be a better solution.

Happy bundling everyone.

Our Current Technology

ubuntu 8.04, rails 2.3.4, ruby 1.8.6, nginx 0.7.64 (with passenger 2.2.8 installed as a module)

People who’ve helped immeasurably

http://github.com/wycats/bundler

http://litanyagainstfear.com/blog/2009/10/14/gem-bundler-is-the-future/

http://tomafro.net/2009/11/a-rails-template-for-gem-bundler

http://yehudakatz.com/2009/11/03/using-the-new-gem-bundler-today/

http://alexjsharp.com/2009/12/23/bundler-is-the-new-hotness-deploying-rails-apps-with-bundler-and-capistrano

#carlhuda on freenode

Posted in Code | Comments Off

Security Essentials

They say you can’t get over a problem until you admit you have one..

Microsoft finally give away a free anti-virus application

Posted in Opinion | Comments Off