Cartoons on the Homepage

Screenshot of old site

What we used to get away with

First of a series of trips down memory lane, with the aim of rediscovering some of the lessons of our history.

A review of the University’s web presence has just been completed, and that process of examination has made me reflect on the strengths and weaknesses of our current site and the way we are doing things. To that end, I thought it’d be useful to take a look at the way we used to do things and see where we’ve improved, how the web world has changed and what lessons our particular history tells us. It might also be an educational journey for those who don’t remember some of the things we’ve done.

My five year old could do better

As critiques of University web pages go, I’ll bet that not many have had that particular sentence thrown at them. It all came about when we decided that the existing style of home page was not really appealing to our target audience, so we resolved to produce something more in line with a younger audience. Looking back at older versions of the site, and relying on my sketchy memory, it seems that the web at that time was louder, bolder and brighter.

Web guru Jeffrey Zeldman’s site from then was very different. The language of the time was a limited color palette, bold, often pixelated graphics and blocks of color. We were all using tables for layout back then. It’s funny looking back through the archives at pages I remember, and illuminating that the general excitement and enthusiasm of the time still comes across. It was in that context that we decided to be bold and create some character illustrations that would be different from the stock images other places used at the time.

I’d produced some illustrations for various parts of the site, and I’d love to be able to say we did some in-depth user testing allied to extensive market research, but that would be a lie. In fact, we spoke to the representative from marketing at the time and outlined our plans to take the site in this very bold new direction, and to their credit they went with the idea.

So how far have we come?

What strikes me during this meander down memory lane is the lack of links – eight in total, linking to broad categories of information. The absence of a search button reveals that we were very much in the business of guessing what users might want and laying out browsable options for them. In the intervening years the web has very much become a searchable medium, where users expect a quick interaction to deliver the information they require.

The intense demands for space on the modern-day homepage make me feel that we need to revisit our search and really explore how it’s being used and how we can improve it. Perhaps the desire to be up front and on the homepage stems from anxiety about whether all the stakeholders’ information will be discovered. The decision to put things in these broad categories was, I remember, taken with marketing. The overall site was smaller, which probably explains how it was possible to collect things into these areas.

The size of our site has grown dramatically, reflected in the 60+ links currently on our homepage. As a team we will need to really examine the function and purpose of the different parts of the site, and reassess the role of the homepage in that process. Should the home page function like a table of contents, a brochure, a billboard, a directory, or a storefront? All the analogies are relevant, but if we try to do all of them in one place then we will end up failing at them all.

My thoughts on IWMW2009

Where I went

I had the opportunity to attend IWMW2009 at the University of Essex in Colchester. To digest all I saw and did I thought I’d write a post.

What I saw

Headlights on Dark roads

Described as ‘Derek will review the recent history of libraries and the challenges now facing them.’ In fact, the talk was far more interesting than that sounds: a wide-ranging meditation on the state of current literacy, the culture that libraries have traditionally worked in, and the large changes that technology has wrought.

One of his interesting ideas was the shift from a literary culture to a visual one. He used a great slide to emphasize how images stay in our memories better than words, with a challenge to name all the images. I think I got a few, but he didn’t put all the answers up.

Also very good across the whole conference was the use of Twitter. Brian Kelly talks about this on his blog.

So, we can see what other people thought of the plenary at the time via Twitter.

An Introduction to WAI-ARIA

I attended a barcamp where Dan Jackson from UCL took us through the concepts and some possible ways to implement ARIA. It was very good, and you really need to view all the slides to appreciate how much information is there. Dan was an engaging speaker who helped me get to grips with a subject I’d been putting off learning about, because the whole issue seems wrapped up in a big W3C bun-fight at the moment.

Servicing ‘Core’ and ‘Chore’: A framework for understanding a Modern IT Working Environment

For me, this talk was a call to get to grips with the emerging reality of users no longer being dependent on IT departments for their tools, and of IT departments taking a much more active role in helping users. People are increasingly able to help themselves to the menu of external IT tools that give them what they need very quickly. Rather than competing with them, perhaps we should form a relationship with the users of our services that helps us and them work out where our best efforts should be directed. It seems very sensible that IT should be an unobtrusive part of people’s work, and external services are part of our set of tools to achieve that.

Making your killer applications killer

Despite a technology failure, Paul Boag gave an enthusiastic talk about the context into which Universities release their course information. The rest of the web increasingly uses dynamic and interactive on-screen features that give people the chance to try things like comparisons and reviews to help them make their choice. He contends that Universities need to start providing richer and deeper experiences around their course information. He rattled through some examples of sites that provide interactivity and personality. I found this point particularly interesting, because it’s often the case that an organization’s persona becomes pretty dry and conservative. It’s quite a leap in mindset to have a clear and distinct character shine through the writing. Hard to do, but probably highly rewarding.

He also touched on the reasons why things are as they are: Universities taking their requirements to produce accessible sites seriously, limits on resources, and a lack of experience in producing this more engaging and interactive experience. Universities have traditionally offered large amounts of rather dry information, but the nature of the web and the audience requires us to adapt the way we get our message over.

He then encouraged us to ‘just do it’ – especially with regard to creating proofs of concept. He acknowledged the importance of showing a new feature, rather than trying to describe it, in order to get the go-ahead to do the work. He presented the idea of Hijax (which I’d never heard of) to help with accessibility. To cut costs he advocated not reinventing the wheel, and using existing libraries, APIs and third-party websites.

Overall, a good call to arms if perhaps a little daunting. If we implemented at least some of the things he talked about we’d be heading in the right direction.

What is the web

James Curran ran a brave experiment in presenting an idea. He talked around the nebulous question of ‘What is the web?’, I think with the idea of getting people who work on ‘it’ every day to consider the fundamental concepts, to help us have a vision of where it is taking us. The brave part was the continually refreshing Twitter feed displayed on the screen, which James was attempting to respond to. It was intriguing, especially when people in the room were critical; I thought people might be too polite. Quite a tricky task to maintain the focus of the talk, but I thought it was definitely worth a go.

Hub websites for youth participation

I have to admit this talk didn’t really do much for me. I think I was expecting a more fully formed idea, and perhaps it suffered by being in the early stages of the project. At this stage it gave me the impression of a heavily academic treatment of a potentially very interesting project. Maybe it is too large in its scope. The idea of a generation who are growing up with a technology having a way to express their opinions of it seems good, but I wonder if the web itself will provide a place for those opinions to be expressed.

iTunes U

I attended a session on iTunes U, again just to find out about something I knew nothing about. It was great to see how much great content is available from the various universities, but Barry did a great job of explaining just how much work needs to be done around that content. Oxford had lecturers who had established podcasts well before iTunes U existed, which helped them greatly. There are lots of things you need to do when creating the content, and if you are thinking of this then Barry’s slides are a comprehensive guide to just how much work you are proposing to take on.

How the BBC make websites

I enjoyed the BBC session the most. Obviously they have brilliant content, as the organization’s whole business is producing great stuff. They emphasized that they see their main job as making that resource available, so everything is geared around that end. The bit about hackable URLs provoked lots of sage nodding from the audience. I was also surprised by how much thinking goes into things before they get anywhere near writing code. They did lots of paper prototyping, wire-framing and story-boarding, and once the code was written they emphasized testing, testing and more testing.

What I missed

The only thing I was mildly disappointed about was not being able to catch some of the other barcamps; hopefully some of them will appear online over the next week or so.


Lots of slides can be found on slideshare

What I did

All the talks were only one part of the experience for me. The rest of the time was taken up with meeting people from lots of other Universities, and realizing that we are all facing the same issues and that sometimes we even come up with the same ways of solving them. It was an eye-opener for me just how many other Universities were looking for or implementing CMSs.

We were unusual in taking a pretty open source approach to the CMS systems we use, and talking to people it was clear that every CMS has strengths and weaknesses. If the mythical CMS exists that will magically transform business processes, make people better writers, satisfy end users, manage its own infrastructure and take University web presences to a new level, then I don’t think anyone there has found it.

On a personal note, I found it really useful to go on my own, which forced me to get out and say hello to people – which, as it turns out, is much easier than I’d thought. Despite being engaged in the dreaded ‘networking’, I enjoyed the chance to tell some people how impressed I am with the work they are doing. Hopefully I can go back next year with a list of things we’ve done that started with going to IWMW2009.

All Kicking Off

The IKO today seemed like a good time to take stock of how I think things are going with our recently adopted agile approach, and to see what I’ve learnt along the way. We’ve been agile since the start of September 2008 and the time has really flown by. I’ve looked back through some of the things I’ve written about the process.

Early Enthusiasm

Back on September 2, I wrote

Now that I have a book describing the practices of an agile developer, I’m quite excited by the possibilities of following a method. Boundaries and structures are good to work within.

The book in question, The Practices of an Agile Developer, was a good introduction to the tenets of Agile development. Particularly good were the ‘What it feels like’ sections, which softened the often technical and rational ideas on show. Even if the Agile Manifesto seems a little overblown, I was convinced that the realistic attitude prevalent in Agile development is far better than the unrealistic and unwieldy project planning I have bumped up against in my time.

How we got the ball rolling

An external contractor was appointed to help us out with a chunk of the Making IT Personal project, and this was an opportune time for someone with practical experience of Agile to get us started. We initially decided to use physical cards, as it was suggested that they gave a strong sense of the reality of the work, and that the at-a-glance nature of the whiteboard would be an advantage. So, on September 11, I began the first story card of the first day of our first sprint.

I’ll just quickly say that in Agile there is the concept of an iteration period, during which a team commits to a body of work to be done by the end of the period. Some teams choose longer periods, some shorter. We chose a week as the best fit, enabling us to report often to our customers (more of which later). This period is called a sprint – I guess no methodology is without its jargon.

I can clearly remember enjoying the novel sense of achievement engendered by completing physical cards, which in Agile earns points towards the team’s total. This is meant to help the team get better at estimating how long the incoming work will take – something that Hofstadter’s Law suggests everyone is pretty bad at. The jargon for this in Agile is ‘velocity’. In truth I think we struggled with the scoring system, based as it is on a relative measure of how long something would take, using a Fibonacci sequence of numbers, but I think we have since simplified that as a consequence of switching tools.
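As a toy illustration of the scoring (plain Ruby – the numbers, sprint names and the simple averaging here are made up for the example, not our actual tracker’s logic): each completed card scores its estimated points for that sprint, and velocity is the average score over recent sprints.

```ruby
# Toy illustration of story points and velocity (not our tracker's code).
# Estimates come from a Fibonacci-ish scale: 1, 2, 3, 5, 8...
completed_points = {
  "sprint 1" => [1, 2, 3, 5],  # points of the cards finished that week
  "sprint 2" => [2, 2, 5],
  "sprint 3" => [1, 3, 5, 8],
}

# Total the points completed in each sprint...
sprint_totals = completed_points.transform_values { |points| points.sum }

# ...and velocity is the average over the sprints we've measured.
velocity = sprint_totals.values.sum / sprint_totals.size.to_f

puts sprint_totals  # {"sprint 1"=>11, "sprint 2"=>9, "sprint 3"=>17}
puts velocity       # about 12.3 points per sprint
```

Once you have a stable velocity, it tells you roughly how many points’ worth of cards to commit to at the next kick off, however bad the individual estimates are.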

Initial Kick Off

Phew! Started our second sprint today after a pretty exhausting Initial Kick Off meeting.

Then it was a good buzz having a large stack of tasks to get through, and the atmosphere was really focused.
September 18, 2008

The Initial Kick Off (yet more jargon) is the name given to the meeting we have at the start of the sprint, where we look at all the work we have and our customers are asked which work is the highest priority for them. This is great for concentrating minds and getting everyone to realise that not everything can (or should) be done at once. It’s a good time for the people who bring jobs to us to communicate how important those jobs are to them, and in return they get really realistic and useful information that they can feed into their own planning processes.

Central to Agile is the idea that we do the work for customers, and rather than making them wait for a grand unveiling of a product, there is an incremental approach where they have a weekly opportunity to give us feedback. What I find refreshing is the idea that it’s natural for things to change, and that it’s much easier for people to discuss things they are shown rather than the abstract ideas thrown up by specification documents and wish lists. Because the sprints are short, the theory is that the work is kept close to what the customers ask for.

All the work that can’t be done during the current sprint is put in a backlog, for all to see – again making planning easier – and as new work arises it is slotted into the priorities in the backlog.

One problem we have come up against in this process is the way we estimate and frame the work so that it can be broken down into manageable chunks. It can be quite easy to be vague when writing the card for the work, which can come back and bite you when you realise that the job is too big for one sprint. I guess the good thing is that it becomes clear pretty early in the process when that is happening.

This can be a particular problem when bringing more design-based work into the process. By its very nature there’s a little more subjectivity, and consequently defining the completion of particular parts of a design process can be tricky. Cenydd Bowles has written a nice article about this very thing.

Virtually There

During the first four months of the process we tried physical cards on a whiteboard, then a Google spreadsheet, then a ticketing system similar to the one we’ve used for years, before finally settling on our current tool, Pivotal Tracker, at the turn of the year. It’s been a good step forward. It gives us a good set of tools for managing our workload, and developing in full view of our customers, while initially quite scary, turns out to be confidence building.

Wrapping Up

I’ve found the structure of developing this way suits me. It’s probably not for everyone.

  • I enjoy the short turnarounds where things are not allowed to drift.
  • The discipline of having to make an estimate (even if it’s wrong) feels worthwhile.
  • The (mostly) daily updates where we (quickly) communicate what we’re individually working on are useful.

Hope you’ve enjoyed your tour of our silo.

What’s long, hard and… green

Don’t be so disgusting, it’s obviously a cucumber.

Or Cucumber with a capital C, perhaps, which was frequently long, hard and red (steady!) until our shiny new testing regime was unveiled!

As part of our commitment to use Cucumber more effectively, here are a few of the things I have learned about its use, and if you learn them too (from me) you will enjoy using Cucumber just as much as this guy enjoys being brutally murdered:

Background, the new GivenScenario

Remember GivenScenario? If you don’t, I don’t blame you. It died in infancy, and in fact was boxed and buried before I even had the chance to use it.

The idea was that you could define some sort of ‘set up’ scenario that saved you from having the same code repeated all over your feature spec. There were a number of problems with this solution though: you ended up with scenarios that had no value other than as leg-ups for other scenarios, it could get pretty confusing if your GivenScenario had a GivenScenario itself, and so on. It makes you wonder why they ever thought it was a good idea. IDIOTS.

So anyway, without it you write ugly, repetitive features like this:

Scenario: view preferences page
  Given I am logged into cas as "Peter" "Portal" with an username of "00700001"
  And I am a Business School student
  And I am enrolled on a course
  And there are campuses
  And I am on the preferences page

Scenario: should show user personal details
  Given I am logged into cas as "Peter" "Portal" with an username of "00700001"
  And I am a Business School student
  And I am enrolled on a course
  And there are campuses
  Then I should see "Peter Portal"
  And I should see ""

There is a newer and better way to do this though. Introducing Background, which you can use in pretty much the same way, but which doesn’t count as a scenario itself. Joy.

Background:
  Given I am logged into cas as "Peter" "Portal" with an username of "00700001"
  And I am a Business School student
  And I am enrolled on a course
  And there are campuses
  And I am on the preferences page

Scenario: view preferences page
  Then I should see "Your Profile and Preferences"

Scenario: should show user personal details
  Then I should see "Peter Portal"
  And I should see ""

Lovely (this gets much more lovely than you can see here, as there are a lot of scenarios with the same background).


As I was writing some particularly awkward Cucumber steps – involving assigning something to something else and then assigning it to something else (see, pretty confusing) – and having been told that assigning stuff was much harder than it actually turned out to be, I was really struggling to figure out what was going on.

I ended up putting in a load of print statements to see what my variables were doing.

There is a better way though. I was pointed to this article, which describes how you can use breakpoint in Cucumber steps.

However, you shouldn’t use that, because breakpoint is deprecated: it’s now called debugger. You can put it in anywhere, and when you run the test you get an interactive prompt where you can inspect any variable you wish (and probably even set things if you’re trying something out). The most convenient way to do it, as described in the article, is to put it in a step by itself:

Then /^I debug$/ do
  debugger
end

That way you can simply drop in the step whenever you want to know what’s happening:

Scenario: list users
  Given I am logged into cas as "Peter" "Portal" with an username of "00700001"
  And I am on the glamlife users index page
  And I debug
  Then I should see "Glamlife Users"

When you’re finished with the debugger, just press Ctrl+D, and Cucumber will continue on its merry green way.

You can even put it in a Background and debug all your scenarios at once if you really want to!

An honourable mention goes to save_and_open_page, which will open the page in a browser so you can see visually what’s wrong with it, instead of having to interpret the HTML; see the Technical Pickles blog for more on this.


As usual there’s a Railscast covering much of this stuff and more (there’s some particularly good material in there for people with many similar features and just a couple of variables), and for anyone who is just here to look at the pictures, there’s also a good one with an introduction to Cucumber.

Don’t be an idiot

All this won’t help you though (it didn’t help me) if you miss something obvious – like the fact that if your ‘visit’ line doesn’t come after your other Givens, your data won’t be there, no matter how much you debug. 🙁

Liquid Refreshment

This week I have been experimenting with Liquid.

Not an excuse for boozing at work, but a method of allowing content editors to incorporate dynamic content without having access to all the wonderful and dangerous things that Ruby allows.

In short, you can use tags which look like {{ variable }} to output variables, or {% %} tags to do simple computations. If you’re interested (and who wouldn’t be?), you can read more about Liquid here.

The object of this investigation was to allow content editors to include polls in their content that users could vote on. Polls are a feature we’ve been thinking about adding to Glamlife for a while, and it was decided that giving editors the option to add them into Glamlife’s ‘chunks’ would be the ideal way to give them as much flexibility as possible.

So my first task was to try and dig up as much information about Liquid as I could find. As is so often the case with these cutting-edge technologies, there isn’t much out there. The Liquid developers’ wiki was useful, but not exactly comprehensive. I was also able to find a few blog posts about it, for example this one about custom tags, and this overview of Liquid’s features.

If a picture is worth a thousand words, how much is a video worth? I found that the most useful resource by far was Ryan Bates’s ‘Railscast’ video tutorial on using Liquid. After watching this and having a little play with it in the Rails console environment, I felt pretty good about Liquid and set about trying to shoehorn the poll form into it.

As many of you will be aware, however, life has a nasty habit of crushing your dreams just when you think they might be coming to fruition. Imagine my despair, then, when I found that the Rails helper methods (which are required for the form) are unavailable in the lib files in which you are required to code your custom tags. As yet, with my relatively limited knowledge of Rails, and despite watching another Railscast video, I have been unable to solve this problem.

An interim solution will be to simply hard-code the form into a view. This is inflexible, and will only show the latest poll created, but it will at least get the feature out into the wild and satisfy the clamour for exciting new features. Keep your eyes peeled on Glamlife (if you have access to it!) to see if we have more success with this!

Cucumber Stories

Cucumber is a replacement for RSpec’s Story Runner. David Chelimsky has a good write-up of the history and the plans.

UPDATE: There seems to be some work on creating a repository of common scenarios and stories at the aptly named Cucumber Stories.

Why is this interesting?

Well, the theory goes that once the feature requirements are written in featurename.feature files, the coders write corresponding steps in featurename_steps.rb files. This is where the code that actually does the things being described lives. By defining features in this way, the coders are clear about exactly what is required, and the time-consuming back and forth of working out what people mean is reduced. Writing down what is meant gives people a solid starting point. If the stories need to change then they can, but this aspect of the project is a clear way to get clarity from the clients on what they want.
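That correspondence between feature lines and step code is essentially pattern matching. As a toy illustration in plain Ruby (this is not Cucumber’s own code, and the steps are made up), each step definition pairs a regular expression with a block, and each line of a scenario runs against the first expression that matches:

```ruby
# A toy model of how lines in a .feature file find their step definitions
# (an illustration of the idea, not Cucumber's implementation).
STEP_DEFINITIONS = {
  /^an editor called "(.*)" is added$/ => ->(name) { "created editor #{name}" },
  /^"(.*)" can add news$/              => ->(name) { "checked #{name} can add news" },
}

def run_step(line)
  STEP_DEFINITIONS.each do |pattern, body|
    match = pattern.match(line)
    # The captured groups become the block's arguments, just as the
    # quoted parts of a Cucumber step become step-definition arguments.
    return body.call(*match.captures) if match
  end
  raise "Undefined step: #{line}"  # what Cucumber reports as an undefined step
end

puts run_step('an editor called "Sian" is added')
# => created editor Sian
```

When no expression matches, Cucumber helpfully prints a skeleton step definition for you to fill in, which is how a feature file drives the coding work.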

That’s the theory.

To get started, Craig has some instructions and background on how to get it up and running.

And once you’ve done those things you’ll be itching to get cracking and write some features and scenarios. Craig also helps us out here.

What it feels like to write them

I’ve been doing this for a couple of days now and, to be honest, I’m still struggling a little with how much detail I need to go into in the scenarios. So I decided to describe the behaviour in broad terms. If that is acceptable to the coders in the team, that is fine; if they think more detail is needed, then they – or any other member of the team – can define it.

Here is an example of what I’m talking about.

Feature: Provide News
In order that news is relevant and timely
As an editor
I want ability to add news

Scenario: Add person to system
Given They are allowed
And they have been trained
When an editor is added
Then they have ability to add, create and edit news

In this example, the business value is to provide news from my area, the role is defined as an editor, and the action is the ability to add news.

So, I then go into a scenario, which details how the feature will actually work. In the example we describe the behaviour needed to add a person to the system. Given is a reserved word which sets the context. In this example it assumes that the user only arrives at this step once approved. Note that it doesn’t specify how that is done, just that it should have been done (we will come to that in another post).

And is also a reserved word, which allows us to add to the behaviours. In the example above I am requiring that they have been trained. Again, how we verify this is for elsewhere; all we need here is that it has taken place. When those steps have taken place, we define that the next step is to add the editor, and to verify that this has happened we use the reserved word Then to check that the previous step has been completed.

The next stage

This is the first part of the process. The really clever bit is that steps are written in corresponding step files to make the behaviours we defined here a reality. Steps are beyond the scope of this article, but there are some good examples and explanations available.

I’ll let you know how we get on with this method.

Git for Designers


Git is a version control system that has been getting a lot of good reviews lately, and that we’ve started to use. I thought I’d relate a methodology we’ve arrived at, in very simple terms that even a designer can understand.

We all have GitHub repositories, and we are using local branches to work on our respective development tasks. So the dilemma is how to share the work in our branches and repositories with each other, without having to commit to the master repository.

An answer comes in the form of remote branches and pull requests.

The theory goes like this…

  • Make a copy of the latest version of the code in your own repository, get a local copy of that code, and develop to your heart’s content – including creating branches for different aspects of the work.
  • Periodically check the original to make sure your repository and local code are up to date.
  • Commit changes to your local and own repositories – including any branches you think you might want to share.
  • Get those changes into the master repository.

Ok, so let’s make it happen.

The first thing to do is make a fork of the original repository, by pressing the fork option on the GitHub homepage of the repository.

This creates a repository that you can then get onto your machine with the git clone command.

git clone <url-of-your-fork>

cd into the folder that this has created.

Have a little check of all your branches with

git branch -a

So, the next thing we need to do is track the original repository, so that we get any changes from the repository we forked from.

git remote add branchname <url-of-the-original-repository>

branchname is the name you give to the remote. It doesn’t show up when you run git branch -a yet, but some lines have been added to your config – open up .git/config and you will see that branchname points to the original repository.

[remote "branchname"]
url = <url-of-the-original-repository>
fetch = +refs/heads/*:refs/remotes/branchname/*

So now, I’d like to get something into this branch to check it’s tracking correctly. I can do that by running

git fetch branchname

which fetches the branch information from the original repository. So, this time when you run

git branch -a

your remote branch called branchname appears in your list of branches.

So now you are tracking this branch, but according to the Git manual you cannot check out a remote tracking branch. Instead you need to create a local branch, which you do with the following command:

git checkout --track -b newlocalbranchname branchname/master

Just to be sure, have a look with git branch -a and you should see the new branch marked with an asterisk, indicating the branch you are currently on.

So, if you remember the theory, we’ve now got a local branch with the changes from the original repository, and a local master branch with the changes from our forked repository. This gives us the mechanism to get any changes from the original.

We run git pull whilst in our new local branch, which fetches and merges. We can then switch to our master branch with git checkout master, from where we run git merge newlocalbranchname, pulling the changes over.

The next bit of the theory was to put our changes up. That is pretty easy – after your minutes of productive work, once you’re happy with your changes:

git commit -a

and then git push to get it onto your GitHub repository.

The final piece of the jigsaw is to get your changes into the original repository. You can do this by sending a pull request to the admin for the repository; the button is on the GitHub homepage of the repository.
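Putting it all together: the whole flow can be rehearsed locally, with ordinary directories standing in for GitHub (all the repository names and the upstreamrepo remote below are made up for the demo), which is a nice way to build confidence before doing it for real:

```shell
# Rehearse the fork / track / merge flow with local repositories.
set -e
demo="${TMPDIR:-/tmp}/git-fork-demo"
rm -rf "$demo"
mkdir -p "$demo"
cd "$demo"

# Stand-in for the original repository that you would fork on GitHub
git init --bare upstream.git
git -C upstream.git symbolic-ref HEAD refs/heads/master

# Give the original one commit, so there is something to share
git clone "$demo/upstream.git" seed
cd seed
git config user.email "demo@example.com"
git config user.name "Demo"
git symbolic-ref HEAD refs/heads/master
git commit --allow-empty -m "initial work"
git push origin master
cd "$demo"

# Your fork of it (in real life: git clone <url-of-your-fork>)
git clone "$demo/upstream.git" fork
cd fork
git config user.email "demo@example.com"
git config user.name "Demo"

# Track the original repository as a remote, and fetch its branches
git remote add upstreamrepo "$demo/upstream.git"
git fetch upstreamrepo
git branch -a                  # upstreamrepo/master is now listed

# Create a local branch tracking the original's master, then merge it in
git checkout --track -b fromupstream upstreamrepo/master
git pull                       # fetch + merge any new changes from the original
git checkout master
git merge fromupstream         # the original's work is now in our master
git log --oneline              # shows "initial work"
```

Deleting the demo directory afterwards costs nothing, so you can run through it as many times as it takes for the remote/branch distinction to sink in.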

And there you have it.

Click where?

When placing hyperlinks on a site, there can be a temptation to use ‘click here’ as the link text, assuming that the context of a link is immediately apparent.

The rest of this article contains some help and advice on this issue.

One of the issues we face as developers of various CMSs is what to do when the people writing the content write in a way that is contrary to the WCAG. Point 6.1 of the guidelines explains the problem in a rather technical way. For a more informal discussion and some real-world examples, I’ve linked to the article on why ‘click here’ is bad practice.

I’ve selected a few highlights from the page –

“Click here” is device-dependent. There are several ways to follow a link, with or without a mouse. Users probably recognize what you mean, but you are still conveying the message that you think in a device-dependent way.

There’s usually a fairly simple way to do things better. Instead of the text “For information on pneumonia, click here”, you could simply write “pneumonia information”.

Accessibility isn’t something that can be left to developers to worry about.

Evaluating accessibility the TechDis way

Yesterday I received a Word document via email from a colleague, which served as my introduction to the project. This is a JISC-funded resource that seeks to assist in implementing accessibility and usability in a range of organisations. Read their site for a more detailed explanation.

I’ve had a quick look at the document and it piqued my curiosity. Over the past three or so years we’ve aimed to integrate accessibility and usability into all the new work that we create, but we’ve been remiss in formalising what we’ve done and in documenting the methods we’ve used to make our sites usable and accessible.

Over that period we’ve had many discussions within the team about the pros and cons of particular methods, but we’ve no record of our thought processes. I reasoned that an evaluation exercise would shine a light on our decision making and help us to do things better. An immediate attraction of the Word document is its brevity.

Once you have selected the URLs you wish to evaluate, away you go with the technical stuff. I’ve decided to look at Glamlife, one of our sites that should be pretty accessible, so that I don’t have too daunting a list of things to fix, and to evaluate the evaluation! The URLs I’ve chosen to test are

The home page, a section home page and a content page.

Technical HTML Conformance

“Each HTML page should be put through at least one HTML validator and each CSS should be validated using a CSS validation service.”

I used The W3C Mark-up Validation Service

The one remaining error on the home page is an element that we use in our rollover javascript. I currently don’t know how to change this so that it validates, but it shouldn’t cause the page to choke on any accessibility tests, so we can leave it in. It’s only on this page, so my efforts are better employed elsewhere.

The section homepages are a different layout and based on a different template, and all validate as XHTML 1.0 Strict. Being based on templates, the overwhelming majority of pages validate, unless things entered in the content break the validation. This is an unavoidable hazard of a CMS. However, it’s worth remembering that valid does not equal accessible. Validation is just one of the tools available to us that assists in ascertaining that our code is of a certain standard.

CSS Validation

A quick run through of the CSS we use for Glamlife came up with a few errors, which were easily fixed. I think the CSS for the site could do with some tidying up, to make development easier.

Screen Size/Resolution

The TechDis site recommends a resource to test ‘common’ browser resolutions, though they seem pretty tiny to me. The copyright info on the page says 1995-2001, so presumably this is when the info was relevant. Instead I’ve chosen to check our general site stats, and as of Feb 2007 around 10% of our users are on 800×600 resolution. We’ve been pretty conservative with our design to accommodate this size.

Enlarging the font size

The font size scales up well in Safari, and goes up and down through the text size options in IE6 on XP (the smallest size is almost illegible, which I think is a common problem).

Is the site usable without images?


Does the site work without JavaScript?


Can you use the site without using the mouse?

I can tab through the site, but to test this thoroughly we would need input from someone for whom this method of navigating a site is important. How we let users know about the keyboard shortcuts we use is an important question that needs more work. We have, however, chosen our access keys with regard to the UK government’s advice.

Navigation without a mouse (or other pointing device) is something that we need to get out and learn more about, and the tab order of the page is something that we can definitely develop some University standards on. The access keys for Glamlife can be found on the Accessibility Statement page.
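For illustration, access keys chosen in line with the UK government’s advice might be marked up like this. The assignments shown are the commonly cited ones from that recommended set, and the URLs are made up; they are not necessarily Glamlife’s actual keys, which are listed on its Accessibility Statement page.

```html
<!-- Illustrative access keys from the UK government's recommended set;
     the hrefs and the exact assignments here are assumptions -->
<ul id="navigation">
  <li><a href="/" accesskey="1">Home</a></li>
  <li><a href="/search/" accesskey="4">Search</a></li>
  <li><a href="#content" accesskey="s">Skip navigation</a></li>
  <li><a href="/accessibility/" accesskey="0">Access key details</a></li>
</ul>
```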

Automatic Checkers

The guidance from TechDis recommends some automatic checkers. I used one to check against the WCAG priority 1, 2 and 3 options. One error we had was on link text; the checkpoint is 13.1, Clearly identify the target of each link. The fix for this will be a small change to the content.


We used webxact throughout the development phase of Glamlife, and so it does not flag up many errors or warnings. The one error that comes up is something that we have looked into and may look into again. It’s certainly not a showstopper.

Accessibility Heuristic Evaluation

This sounded quite daunting to me until I realised that it’s a method of including some judgement-based criteria for any site that you care to evaluate. Essentially, you answer the following questions and assign a score for how fully you feel the site satisfies each one. There is a fuller explanation on the TechDis site.

  • Does the website have a clear, intuitive and logical navigation structure?
  • Does the website have a clean, consistent and clear visual design?
  • Does the site provide appropriate alternative descriptions for all visual elements?
  • Are all the website interactions, forms, navigation scripts etc accessible and usable?
  • Does the website use clear and simple language, appropriate for its audience?

We can answer yes to all these questions, gaining 3-4 points for each answer, which tells us that we are doing OK. The questions are very useful in providing a mechanism to evaluate non-technical issues, matters of personal preference and other qualitative issues.

Usability Heuristic Evaluation

Similarly, we score well on the usability heuristics too. We’ve kept the site pretty free of inappropriate frames, Java, Flash and animation, and the content is clear and well written.

Assistive Technology Testing

We have tested the site with the screen reader that is installed as standard in the IT labs here at Glamorgan. However, this is of limited value because we are not regular users of the software, which skews the tests. Useful feedback would come from a regular screen reader user.

Just prior to launch I posted a request for users’ opinions on the accessify forums, and was rewarded with some useful feedback. Specifically, a user explained what his screen reader was actually reading out when he viewed the site, and suggested some changes to our access keys and a few other things, which we then implemented. The whole area of assistive technology testing is one that the University needs to get to grips with; we have done what we can.

Browser Compatibility Testing

We have tested on IE6, Firefox and Safari, which are the most popular browsers on our site, accounting between them for approximately 98% of users. We also feel that the web standards based approach we’ve taken is likely to be helpful to users with browsers that we have not directly tested.


The evaluation exercise has been good to go through, but it does take some time. If one were doing it for sites with bigger accessibility issues, I can imagine it would be an arduous task, but a completely necessary one. Accessibility and usability were central to Glamlife from the start, yet the exercise still threw up some issues, and continues to do so. They are not features that, once provided, can be ticked off; they require a continual process of evaluation and development.

If you’ve found this article interesting, please add some comments. As a team we are always keen to get feedback on what we do.

A cite for sore eyes

Researchers need to refer to papers, articles and the like as evidence of their research activity. As a consequence, the research sites will soon have lots of publication information available, and a quick glance at the sites shows that there is work to be done on making semantic sense of the information.

The LRC have produced

This example of a journal publication from

  • Burke, S and Kirk, KM1
    Genetics education in the nursing professions: a literature review
    Journal of Advanced Nursing, 2006 54(2): 228–237,
    ISSN 0309–2402.

This comprises the author name(s), the article title (usually quite long), the journal name, the date when the article was published, the volume of the journal (in this case 54), the issue number, the pages the article can be found on, and an ISSN (explanation here).

This is the Textile necessary:

*(pub) Burke, S and Kirk, KM"^1^":#1
??Genetics education in the nursing professions: a literature review??
_Journal of Advanced Nursing_, 2006 *54*(2): 228-237,
ISSN 0309-2402.

One of the problems with this is that only the title is contained within the cite tags. It would be better if all the text were contained within the cite. For such semantically rich information, I think that HTML is the best way to mark it up.

  • Burke, S and Kirk, KM 1
    Genetics education in the nursing professions: a literature review
    Journal of Advanced Nursing, 2006 54(2): 228–237,
    ISSN 0309–2402.

I think this is a better, more useful way to mark up a publication. The whole entry is now cited rather than just the title; there are also more classes that we can hang CSS styles on, and the classes can serve to explain to authors of the code a little of what the information means.
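As a sketch of what that fuller markup might look like (the element and class names here are illustrative assumptions, not necessarily the ones used on the site):

```html
<!-- Illustrative only: the class names and footnote link are assumptions,
     not the markup actually used on the site -->
<cite class="publication">
  <span class="authors">Burke, S and Kirk, KM</span><sup><a href="#1">1</a></sup>
  <span class="article-title">Genetics education in the nursing professions:
    a literature review</span>
  <span class="journal">Journal of Advanced Nursing</span>, 2006
  <span class="volume">54</span>(<span class="issue">2</span>):
  <span class="pages">228&ndash;237</span>,
  <span class="issn">ISSN 0309&ndash;2402</span>.
</cite>
```

Wrapping each part of the citation in its own classed span is what gives us the styling hooks, and it also maps naturally onto fields in a database when we come to generate these entries automatically.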

It can serve as a useful starting point for when we start to pull publication info out of a database.

Other things to think about are the possible use of a citation microformat, and we also need to find out more about how we can display the data to correspond to the variety of citation styles out there.