All posts by Kevin Mears

IWMW2010 – a conference with the theme ‘the Web in turbulent times’

IWMW Talks I remembered

Stylesheets for mobile phones with Helen from Cambridge.

I enjoyed this session, where Helen from Cambridge showed the steps she’d considered and taken to experiment with media queries and with different styles being fed to different devices. The slides are available, and there was some good discussion about linearising the page for mobile devices. There was no consensus about the best way to do this, illustrating that the focus of a homepage can be difficult to maintain when it has to be distilled. If nothing else, the constraints of smaller devices mean you have to make some hard decisions about what is really useful and important to your visitors. Decisions that can perhaps be fudged with the screen space available to desktop users. I was struck by the amount of work and thought that Helen had put into considering the user, and yet it may all be to no avail, since devices have improved so much that they provide their own methods for navigating sites.

Course Advertising and XCRI

This session was quite a broad one, giving an overview of a project that has been running for a while to try to standardise a format that describes and structures course information. This session focussed on the XCRI-CAP part of the project, which looks at the marketing information of courses. Some good tools were presented to check how ready an institution would be to start using this. It struck me that it could be really useful in our circumstances, where we have courses across the Glamorgan Group at a variety of different levels and need a standard way of referring to them. Other universities are starting to use this format, so the real benefits of standards might actually accrue to the end user.

Slate my website barcamp

Really fun session run by Mike Nolan from Edgehill. The idea was to have a quick look around a university’s site and mark it on design, content and code. Reading, Nottingham and Edgehill were reviewed (Dan from York did the honours for the Edgehill review), and marks then given whilst everyone discussed aspects of the site. Really useful to see the site through someone else’s eyes and it worked really well to quickly identify things that can be done better. Was a shame it didn’t go on longer.

The Web in Turbulent Times

A really good, broad talk about IT and where the web fits in. Nice video here with twitter responses from the time. She made the very good point that IT projects are considered separate from business projects when they are in fact integral. There is an unhelpful perception that IT is somehow separate from the business. Chris also made some interesting points about shared services and the pressure from government for the Education sector to share things more.

HTML5 (and friends)

I enjoyed the good talk from Patrick Lauke, and thought it worked well as a tactical talk, encouraging a look at the practical steps one can take to get started with HTML5. It struck me that there was an appetite in the audience to get cracking, and Patrick made it seem less daunting and complicated than many people (myself included) imagine it to be.

‘So what do you do exactly?’ In challenging times justifying the roles of the web teams

A galvanising talk about stats and measuring what we do. I particularly liked the reminder that Universities are big businesses and the web is central to how we do business. I think the whole room saw the value of taking the time to present the case for what we do in business terms (going back to the unhelpful separation between IT and business goals). The point about providing context for cost per click was a nice one, with Sid explaining that the cost of a link to gocompare.com on Google seems high in isolation but was worth it to that company. Similarly, the link to download a brochure from a car manufacturer’s site could be measured and used to make the case for that method of communication.

No money? No matter – Improve your website with next to no cash

Another talk, by Paul Boag, that had many nodding their heads and resolving to implement the suggestions. The key one for me was the idea of content curation. With hard times forecast ahead, he suggested that we take the opportunity to scale down sites to provide a better user experience, focusing on making smaller but better sites.

Sharepoint, Sheffield CMS and Student Portal

There was a mixture of talks on the last day, which began merging a little by then. Josef Lapka presented a very nice Student Portal that they have created at Canterbury, which lots of people were impressed with. Richard Brierton gave a talk about the process of rolling out a new CMS at Sheffield, and people were eager to hear about the practicalities and problems that they had faced. We then came to a talk on Sharepoint by James Lapping and Peter Gilbert that provoked a very busy twitter back channel, which came out strongly against it.

General Themes

  • 2 years is too long for an IT project
  • Lots more people seem to be doing or thinking about agile.
  • CMS – The eternal search for the holy grail goes on.
  • Mobile Apps vs Mobile Web
  • Practical talks versus strategic vision

Rather than link individually to each talk, it’s better if I point you to the Resources page where the organisers have done a great job in collecting and presenting much of the event content.

Why your PDF should be HTML

Over the last few months the issue of formats has come up a few times, when librarians, educators and marketeers have all wanted to use PDFs to deliver information to the user. I thought now would be an opportune time for me to state why, in many cases, this is a bad idea (even if done with the best of intentions).

What the experts say

Jakob Nielsen’s Alertbox, July 14, 2003 – PDF: Unfit for Human Consumption
Usability guru Jakob Nielsen is forthright in his appraisal of PDF as a format on web sites.

Joe Clark’s 2005 article about PDF accessibility is included here for the section where Joe clearly elucidates why most things should be HTML and, even better, provides a thorough list of exceptions. If your information doesn’t fall into one of these categories then you really should be using HTML.

Where we are going wrong

Of the many PDFs currently available on our various web sites, very few can really justify the format they are in if we use the criteria laid out in the articles linked to previously. I believe that a combination of overstating the role of a particular visual style and understating the inconvenience to the user leads to a situation where simply uploading a document is thought to suffice. I don’t believe it does. If we want to provide the best experience for users then we need to make that extra (small) effort to put the information in the right format. It’s not that hard, and everyone benefits.

Issues with ISSUU

A beta subject guides page has been created by the proactive librarians that we have at Glamorgan, using issuu.com, a service for hosting PDFs that wraps them up in Flash and adds various user interface features like page-turning animations, zooming and various views, plus useful social features like commenting and sharing. I think it’s unfortunate that the useful features have been mingled with user interface fluff that actually makes the information harder to retrieve.

Putting it into practice

To show what’s possible I downloaded a PDF of Lighting Design and Technology & Live Event Technology from a Subject guides page, and spent an hour or two copying and pasting to create an HTML version. The PDF is 115KB to download; the HTML is 41.5KB. In addition to the smaller file size, the user does not need to wait for the PDF reader to open up, can navigate via a table of contents and, most usefully, can click on the many URLs to go straight to the info. The HTML format enables the user to interact directly rather than read, then copy, the links.

It may not have the visual impact of the issuu PDF version, but it is more functional, in the browser window that people are used to. Also, none of this precludes making the PDF available for those people who wish to download it.

Summary

I hope that people find this a useful position statement, and I would love to see some responses in the comments.

Designing a Course Details Page

I have been looking at improving some of the previous ideas I’ve had for our course details pages. They are a crucial part of our site, so I wanted to have a good think about the way they should work. An essential part of design is clarity about the different priorities of the information you have to convey. If everything you have to say were of equal importance then that could be reflected in the visual representation. However, this rarely happens, and there is usually a definite hierarchy from the most important information down. While this is often a tricky set of decisions when organising a home page, the structure of a course information page should be a little more obvious. This set of priorities then informs the semantic structure of the page.

To help me arrive at a sensible page structure, I had a ponder about what someone might want from a course detail page. This page should collect as much relevant info about the course as possible and, secondly, supplemental info that might help someone make a decision. Marketing have provided the broad headings that most of the information comes under, and these are pretty common across other universities’ course pages. (give examples) So, a user would click through to the course page about English, and see a summary that helps them to decide if they are on the right page. If so, then the other headings are designed to give them more detailed info that would help them to decide, whilst hopefully making them interested in the possibility of studying that course.

Ideally, once someone has read enough of the exciting words about the course, they would want to act upon what they’ve just read. To enable that, the sidebar has a set of links with a variety of designs, to attract attention and emphasise that there are things the user can do. There’s a link to apply online, to book an open day, and to contact the university. Hopefully more similarly tailored tools will follow. Currently styled with a predominantly blue theme, I envisage there being versions based on the faculty color themes.

Further down the page there are boxes with related info. Based on the presumption that all the previous info didn’t quite match what they were looking for, these are provided as jumping-off points. There are links to related courses, and links to other parts of the site such as Fees, Accommodation and a parent-focused site.

It’s best to keep this info together as much as possible, rather than spread it across multiple pages with the attendant difficulty of managing links and urls. A consequence of that is that the page could potentially be very long. I don’t mind long pages myself, but it’s better to give the user the choice of which sections are relevant to them. Putting them in an accordion interface puts that control back in the hands of the user.

As per Paul Boag’s article (http://24ways.org/2009/what-makes-a-website-successful), I think it’s really important that we try to create pages with very clear ideas of why they exist and what we would like them to achieve. If we can manage that then that clarity will benefit the people who use the site.

I tried a more rigid representation of this hierarchy, with the ‘Information’, ‘Links’ and ‘Tools’ blocks following one another in a single column. As you can see, it’s probably not a good idea to enforce such a rigid hierarchy.

Annotated design idea

I looked at placing the tools on the left, but felt that they then became too prominent, since the info block is designed to be read first. It’s common practice to have in-page navigation on the left, and ordinarily I can see that it’s a convenient place to put things to make it easy to move around. However, since these pages are designed to be the end result of a search, it would be counterproductive to give links that take a user away the same visual priority as the main information.

Annotated Design Idea

In the end I’ve settled on a right-aligned tools block, with the idea that it’s easily placed for jumping off, but doesn’t demand too much attention. The task priority on this page would then be:

  • Read
  • Act
  • Continue Searching

Annotated Design

Cartoons on the Homepage

Screenshot of old site

What we used to get away with

First of a series of trips down memory lane, with the aim of rediscovering some of the lessons of our history.

A review of the web presence of the University has just been completed, and that process of examination has made me reflect on the strengths and weaknesses of our current site and the way that we are doing things. To that end, I thought it’d be useful to take a look at the way we used to do things and see where we’ve improved, how the web world has changed and what lessons our particular history teaches us. It might also be an educational journey for those who don’t remember some of the things we’ve done.

My five year old could do better

Of all the critiques of University web pages, I’ll bet that not many have had that particular sentence thrown at them. It all came about when we decided that the existing style of home page we had at that time was not really appealing to our target audience, so we resolved to produce something more in line with a younger audience. Looking back at older versions of wired.com and my sketchy memory, it seems that the web at that time was louder, bolder and brighter.

Web guru Jeffrey Zeldman’s site from then was very different. The language of the time was a limited color palette, bold, often pixelated graphics and blocks of color. We were all using tables for layout back then. It’s funny looking back through the archives of pages I remember, and illuminating that the general excitement and enthusiasm of the time seems to come across. It was in that context that we decided to be bold and create some character illustrations that would be different from the stock images other places used at the time.

I’d produced some illustrations for various parts of the site, and I’d love to be able to say we did some in-depth user testing, allied to extensive market research, but that would be a lie. In fact we spoke to the representative from marketing at the time and outlined our plans to take the site in this very bold new direction, and to their credit they went with the idea.

So how far have we come?

What strikes me during this meander down memory lane is the lack of links – eight in total, linking to broad categories of information. The absence of a search button reveals that we were very much in the business of guessing what users might want and laying out browsable options for them. In the intervening years the web has changed very much to a searchable medium, where users expect a quick interaction will deliver the info that they require.

The intense demands for space on the modern-day homepage make me feel that we need to revisit our search and really explore how it’s being used and how we can improve it. Perhaps the desire to be up front and on the homepage stems from anxiety about whether all the stakeholders’ information will be discovered. The decision to put things in these broad categories was, I remember, taken with marketing. The overall site was smaller, which probably explains how it was possible to collect things in these areas.

The size of our site has grown dramatically, reflected in the 60+ links currently on our homepage. As a team we will need to really examine the function and purpose of the different parts of the site, and reassess the role of the homepage in that process. Should the home page function like a table of contents, or a brochure, or a billboard, or a directory, or a storefront? All the analogies are relevant, but if we try to do all of them in one place then we will end up failing at them all.

My thoughts on IWMW2009

Where I went

I had the opportunity to attend IWMW2009 at the University of Essex in Colchester. To digest all I saw and did I thought I’d write a post.

What I saw

Headlights on Dark roads

Described on http://iwmw.ukoln.ac.uk/iwmw2009/talks/law/ as ‘Derek will review the recent history of libraries and the challenges now facing them.’ In fact, the talk was far more interesting than that sounds: a wide-ranging meditation on the state of current literacy, the culture that libraries have traditionally worked in and the large changes that technology has wrought.

One of his interesting ideas was the shift from a literary culture to a visual one. He used a great slide to emphasize how images stay in our memories rather than words, with a challenge to name all the images. I think I got a few, but he didn’t put all the answers up.

Also very good from the whole conference was the use of twitter. Brian Kelly talks about this on his blog.

So, we can see what other people thought of the plenary at the time via twitter.

An Introduction to WAI-ARIA

I attended a barcamp where Dan Jackson from UCL took us through the concepts and some possible ways to implement ARIA. It was very good, and you really need to view all the slides to appreciate how much info is there. Dan was an engaging speaker who helped me get to grips with a subject that I’d been putting off learning about, because the whole issue seems wrapped up in a big W3C bun-fight at the moment.

Servicing ‘Core’ and ‘Chore’: A framework for understanding a Modern IT Working Environment

For me, this talk was a call to get to grips with the emerging reality of users not being dependent on IT departments for their tools, and of IT departments taking a much more active role in helping users. Users are increasingly able to help themselves to the menu of external IT tools that give them what they need very quickly. Rather than competing with them, perhaps we should form a relationship with the users of our services that helps us and them work out where our best efforts should be directed. It seems very sensible that IT should be an unobtrusive part of people’s work, and external services are part of our set of tools to achieve that.

Making your killer applications killer

Despite a technology failure, Paul Boag gave an enthusiastic talk about the context that Universities release their course information into. The rest of the web is increasingly using dynamic and interactive features that give people the chance to try things like comparisons and reviews to help them make their choice. He contends that Universities need to start providing richer and deeper experiences around the course information. He rattled through some examples of sites that provided interactivity and personality. I found this point particularly interesting, because it’s often the case that an organization’s persona becomes pretty dry and conservative. It’s quite a leap in mindset to have a clear and distinct character shine through the writing. Hard to do, but probably highly rewarding.

He also touched on the reasons why things are as they are, with Universities taking their requirements to produce accessible sites seriously, limits on resources, and a lack of experience in producing this more engaging and interactive experience. Universities have traditionally offered large amounts of rather dry information, but the nature of the web and the audience requires us to adapt the way we get our message over.

He then encouraged us to ‘just do it’, especially with regard to creating proof-of-concept things. He acknowledged the importance of showing a new feature, rather than trying to describe it, to get the go-ahead to do the work. He presented the idea of Hijax (which I’d never heard of) to help with accessibility. To cut costs he advocated not reinventing the wheel and using existing libraries, APIs and third-party websites.

Overall, a good call to arms if perhaps a little daunting. If we implemented at least some of the things he talked about we’d be heading in the right direction.

What is the web

James Curran ran a brave experiment in presenting an idea. He talked around the nebulous question of ‘What is the web?’, I think with the idea of getting people who work on ‘it’ every day to consider the fundamental concepts, to help us have a vision of where it is taking us. The brave part was the continually refreshing twitter feed displayed on the screen, which James was attempting to respond to. It was intriguing, especially when people in the room were critical; I thought people might be too polite. It was quite a tricky task to maintain the focus of the talk, but I thought it was definitely worth a go.

Hub websites for youth participation

I have to admit this talk didn’t really do much for me. I think I was expecting a more fully formed idea, and perhaps it suffered by being in the early stages of the project. At this stage it gave me the impression of a heavily academic treatment of a potentially very interesting project. Maybe it is too large in its scope. The idea of giving a generation who are growing up with a technology a way to express their opinions seems good, but I wonder if the web itself will provide a place for those opinions to be expressed.

iTunes U

I attended a session on iTunes U, again just to find out about something that I knew nothing about. It was great to see how much great content is available from the various universities, but Barry did a great job of explaining just how much work needed to be done around that content. Oxford had lecturers who had established podcasts well before the opportunity of iTunes U existed, which helped them greatly. There are lots of things that you need to do when creating the content, and if you are thinking of this then Barry’s slides are a comprehensive guide to just how much work you are proposing to take on.

How the BBC make websites

I enjoyed the BBC session the most. Obviously, they have brilliant content, as the organization’s whole business is producing great stuff. They emphasized that they see their main job as making that resource available, so everything is geared around that end. The bit about hackable URLs provoked lots of sage nodding from the audience. I was also surprised by how much thinking goes into things before they get anywhere near writing code. They did lots of paper prototyping, wire-framing and story-boarding, and once the code was written they emphasized testing, testing and more testing.

What I missed

The only thing I was mildly disappointed about was not being able to catch some of the other barcamps; hopefully some of them will appear online over the next week or so.

Resources

Lots of slides can be found on slideshare

What I did

All the talks were only one part of the experience for me. The rest of the time was taken up with meeting people from lots of other Universities, and realizing that we are facing the same issues and that sometimes we come up with ways to solve them. It was an eye-opener for me just how many other Universities were looking for or implementing CMSs.

We were unusual in that we take a pretty open source approach to the CMS systems that we use, and talking to people it was clear that every CMS has strengths and weaknesses. If the mythical CMS exists that will magically transform business processes, make people better writers, satisfy end users, manage its own infrastructure and take University web presences to a new level, then I don’t think anyone there has found it.

On a personal note, I found it really useful to go on my own, which forced me to get out and say hello to people, which as it turns out is much easier than I’d thought. Despite being engaged in the dreaded ‘networking’, I enjoyed the chance to tell some people how impressed I am with the work they are doing. Hopefully I can go back next year with a list of things that we’ve done that started with going to IWMW2009.

All Kicking Off

The IKO today seemed like a good time to take stock of how I think things are going with our recently adopted agile approach, and to see what I’ve learnt along the way. We’ve been agile since the start of September 2008 and the time has really flown by. I’ve looked back through some of the things I’ve written about the process.

Early Enthusiasm

Back on September 2, I wrote

Now that I have a book describing the practices of an agile developer, I’m quite excited by the possibilities of following a method. Boundaries and structures are good to work within.

The book in question, The Practices of an Agile Developer, was a good introduction to the tenets of Agile development. Particularly good were the ‘What it feels like’ bits in the book, which softened the often technical and rational ideas on show. Even if the Agile Manifesto seems a little overblown, I was convinced that the realistic attitude that seems prevalent in Agile development is far better than the unrealistic and unwieldy project planning that I have bumped up against in my time.

How we got the ball rolling

An external contractor was appointed to help us out with a chunk of the making IT personal project, and this was an opportune time for someone with practical experience of Agile to get us started. We initially decided that we would use physical cards, as it was suggested that they gave a strong sense of the reality of the work, and the at-a-glance nature of the whiteboard would be an advantage. So, on September 11, I began the first story card of the first day of our first sprint.

I’ll just quickly say that in Agile there is the concept of an iteration period, during which a team commits to a body of work to be done by the end of the period. Some teams choose longer periods, some shorter. We chose a week as the best fit, enabling us to report often to the customers (more of which later). This period is called a sprint – I guess no methodology is without its jargon.

I can clearly remember enjoying the novel sense of achievement engendered by picking up physical cards, which in Agile results in points towards the team’s total. This method helps the team get better at estimating how long it takes to do the work that is coming in – something that Hofstadter’s Law suggests everyone is pretty bad at. The jargon for this in Agile is ‘velocity’. In truth I think we struggled with the scoring system, based as it is on a relative measure of how long something would take, using a Fibonacci sequence of numbers, but I think we have since simplified that as a consequence of switching tools.

Initial Kick Off

Phew! Started our second sprint today after a pretty exhausting Initial Kick Off meeting.

Then it was a good buzz having a large stack of tasks to get through, and the atmosphere was really focused.
September 18, 2008

The Initial Kick Off (yet more jargon) is the name given to the meeting that we have at the start of the sprint, where we look at all the work we have and our customers are asked which work is highest priority for them. This is great for concentrating minds and getting everyone to realise that not everything can (or should) be done at once. It’s a good time for the people who bring jobs to us to communicate how important each one is to them, and in return they get really realistic and useful information that they can then feed into their own planning processes.

Central to Agile is the idea that we do the work for customers, and rather than making them wait for a grand unveiling of a product, there is an incremental approach where they have a weekly opportunity to give us feedback. What I find refreshing is the idea that it’s natural for things to change, and much easier for people to discuss things that are shown rather than the abstract ideas thrown up by specification documents and wish lists. Because the sprints are short, the theory is that the work is kept close to what the customers ask for.

All the work that can’t be done during the current sprint is put in a backlog, for all to see, again making planning easier; as new work arises it is placed according to priority in the backlog.

One problem that we have come up against in this process is the way we estimate and frame the work so that it can be broken down into manageable chunks. It can be quite easy to be vague when writing the card for the work, which can come back to bite you when you realise that the job is too big for one sprint. I guess the good thing is that it becomes clear pretty early in the process when that is happening.

This can be a particular problem when bringing more design-based work into the process. By its very nature there’s a little more subjectivity, and consequently defining the completion of particular parts of a design process can be tricky. Cenydd Bowles has written a nice article about this very thing.

Virtually There

During the first four months of the process we tried physical cards on a whiteboard, then a Google spreadsheet, then a ticketing system similar to the one we’ve used for years, before finally settling on our current tool, Pivotal Tracker, at the turn of the year. It’s been a good step forward. It gives us a good set of tools for managing our workload, and developing in full view of our customers is initially quite scary but turns out to be confidence-building.

Wrapping Up

I’ve found the structure of developing this way suits me. It’s probably not for everyone.

  • I enjoy the short turnarounds where things are not allowed to drift.
  • The discipline of having to make an estimate (even if it’s wrong) feels worthwhile.
  • The (mostly) daily updates we have, where we (quickly) communicate what we’re individually working on, are useful.

Hope you’ve enjoyed your tour of our silo.

Git for Designers

Git is a version control system that has been getting a lot of good reviews lately, and which we’ve started to use. I thought I’d relate a methodology that we’ve arrived at in very simple terms – terms that even a designer can understand.

We all have GitHub repositories, and we are using local branches to work on our respective development tasks. So, the dilemma is how to share the work in our branches and repositories with each other without having to commit to the master repository.

An answer comes in the form of remote branches and pull requests.

The theory goes like this…

  • Make a copy of the latest version of the code in one’s own repository, get a local copy of that code, and develop to your heart’s content – including creating branches for different aspects of the work.
  • Periodically check the original to make sure one’s repository and local code are up to date.
  • Commit changes to your local and own repositories – including any branches that you think you might want to share.
  • Get those changes into the master repository.
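The steps above can be sketched as a shell session. Since this is only an illustration, local bare repositories stand in for GitHub (the fork button is simulated with a bare clone), and every repository and remote name below is made up:

```shell
#!/bin/sh
# Sketch of the fork-and-track workflow. Local bare repositories stand in
# for GitHub; all names are made up for illustration.
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
tmp=$(mktemp -d) && cd "$tmp"

# The "original" repository, with one commit on master
git init --bare original.git
git -C original.git symbolic-ref HEAD refs/heads/master
git clone "$tmp/original.git" seed
cd seed
echo "hello" > readme.txt
git add readme.txt
git commit -m "initial commit"
git push origin master
cd "$tmp"

# "Fork" the original (on GitHub: press the fork button), then clone your fork
git clone --bare "$tmp/original.git" fork.git
git clone "$tmp/fork.git" work
cd work

# Track the original so its changes can be picked up later
git remote add upstream "$tmp/original.git"
git fetch upstream
git merge upstream/master        # already up to date here, but harmless

# Commit your own work and push it up to your fork
echo "my change" > mine.txt
git add mine.txt
git commit -m "my change"
git push origin master
# On GitHub, you would now send a pull request to the original's admin
```

The same shape applies with real GitHub URLs: swap the local paths for `git@github.com:...` addresses and do the fork on the website rather than with `git clone --bare`.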

Ok, so let’s make it happen.

First thing to do is make a fork of the original repository, by pressing the fork option on the GitHub home page.

This creates a repository that you can then get onto your machine with the git clone command.

git clone git@github.com:bossuser/forkedrepos.git

cd into the folder that has been created.

Have a little check of all your branches with

git branch -a

So, the next thing we need to do is add a remote that tracks the original repository, so that we get any changes from the repos that we forked from.

git remote add branchname git@github.com:user/originalrepos.git

branchname is the name you give to the remote you are tracking. It doesn’t show up when you run git branch -a yet, but some lines have been added to your config – open up .git/config and you will see that branchname points to the original repos.

[remote "branchname"]
url = git@github.com:user/originalrepos.git
fetch = +refs/heads/*:refs/remotes/branchname/*

So now, I’d like to get something into this branch to check that it’s tracking correctly. I can do that by running

git fetch branchname

which fetches the branches from the original repos. So, this time when you run

git branch -a

the remote branches under branchname appear in your list of branches.

So now you are tracking this branch, but according to the Git manual you cannot check out a remote-tracking branch directly. Instead you need to create a local branch from it, which you do with the following command.

git checkout --track -b newlocalbranchname branchname/master

Just to be sure, have a look with git branch -a and you should see the new branch with an asterisk, indicating the branch you are currently on.

So, if you remember the theory, we’ve now got a local branch with the changes from the original repository, and a local master branch with changes from our forked repos. This gives us the mechanism to get any changes from the original repos.

We run git pull whilst on newlocalbranchname. This fetches and merges. We can then switch to our master branch with git checkout master, from where we run git merge newlocalbranchname, pulling the changes over.
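The merge step can be rehearsed safely in a throwaway local repository. This is only a sketch – the branch names and file contents below are made up for the demonstration:

```shell
#!/bin/sh
set -e

# Work in a throwaway directory so nothing real is touched.
dir=$(mktemp -d)
cd "$dir"
git init -q repo
cd repo
git config user.email "demo@example.com"
git config user.name "Demo"

# An initial commit on the default branch.
echo "original" > file.txt
git add file.txt
git commit -qm "initial commit"

# A branch standing in for newlocalbranchname, carrying a change
# pulled from the original repos.
git checkout -qb newlocalbranchname
echo "upstream change" >> file.txt
git commit -qam "change pulled from the original repos"

# Switch back to the default branch and merge the changes over.
git checkout -q -
git merge -q newlocalbranchname
grep "upstream change" file.txt
```

Because newlocalbranchname is a descendant of the default branch, the merge here is a simple fast-forward.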

The next bit of the theory was to put our changes up. That is pretty easy – after some productive work, when you decide you’re happy with your changes, run

git commit -a

And then git push to get them onto your GitHub repository.

The final piece of the jigsaw is to get your changes into the original repository. You can do this by sending a pull request to the admin for the repository. The button is on the GitHub homepage of the repository.

And there you have it.
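If you want to see the whole cycle end to end without touching GitHub, here’s a self-contained rehearsal that uses local directories in place of the GitHub URLs. All the repository, remote and branch names here are illustrative:

```shell
#!/bin/sh
set -e
work=$(mktemp -d)
cd "$work"

# 1. The "original" repository we would normally fork on GitHub.
git init -q original
(cd original &&
 git config user.email "demo@example.com" &&
 git config user.name "Demo" &&
 echo "hello" > readme.txt &&
 git add readme.txt &&
 git commit -qm "first commit")

# 2. Our "fork": a clone standing in for our own GitHub repository.
git clone -q original fork
cd fork
git config user.email "demo@example.com"
git config user.name "Demo"

# 3. Add the original as a remote and fetch its branches.
git remote add branchname "$work/original"
git fetch -q branchname

# Meanwhile, the original moves on without us...
(cd "$work/original" &&
 echo "a new line" >> readme.txt &&
 git commit -qam "upstream change")

# 4. Create a local branch from the tracked remote, then pull
#    to pick up the change made since we fetched.
branch=$(cd "$work/original" && git symbolic-ref --short HEAD)
git checkout -q --track -b newlocalbranch "branchname/$branch"
git pull -q branchname "$branch"

# 5. Merge the upstream changes into our own default branch.
git checkout -q -
git merge -q newlocalbranch
grep "a new line" readme.txt
```

In real use, steps 1 and 2 are replaced by the fork button and git clone against GitHub URLs, and getting your own work back into the original is the pull request described above.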

Click where?

When placing hyperlinks on a site, there can be a temptation to use ‘click here’ as the link text, assuming that the context of a link is immediately apparent.

The rest of this article contains some help and advice on this issue.

One of the issues that we face as developers of various CMSes is what to do when the people writing the content write in a way that is contrary to the WCAG. Checkpoint 13.1 of the guidelines explains, in a rather technical way, the problem. For a more informal discussion and some real-world examples I’ve linked to the article on why ‘click here’ is bad practice.

I’ve selected a few highlights from the page –

“Click here” is device-dependent. There are several ways to follow a link, with or without a mouse. Users probably recognize what you mean, but you are still conveying the message that you think in a device-dependent way.

There’s usually a fairly simple way to do things better. Instead of the text “For information on pneumonia, click here”, you could simply write “pneumonia information”.

Accessibility isn’t something that can be left to developers to worry about.
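The pneumonia example above can be sketched as markup (the URL here is invented for illustration):

```html
<!-- Device-dependent, and meaningless when read out of context: -->
<p>For information on pneumonia, <a href="/health/pneumonia">click here</a>.</p>

<!-- Better: the link text itself describes the target: -->
<p>We have a short guide to <a href="/health/pneumonia">pneumonia information</a>.</p>
```

A screen reader user tabbing through a list of links hears only the link text, so the second version works where the first does not.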

Evaluating accessibility the TechDis way

Yesterday I received a Word document via email from a colleague which served as my introduction to the http://www.techdis.ac.uk project. This is a JISC-funded project that seeks to provide a resource to assist in implementing accessibility and usability in a range of organisations. Read their site for a more detailed explanation.

I’ve had a quick look at the document and it piqued my curiosity. Over the past three or so years we’ve aimed to integrate accessibility and usability into all the new work that we create, but we’ve been remiss in formalising what we’ve done and documenting the methods we’ve used in aiming to make our sites usable and accessible.

Over that period we’ve had many discussions within the team about the pros and cons of particular methods but we’ve no record of our thought processes. I reasoned that an evaluation exercise would shine a light on our decision making and help us to do things better. An immediate attraction of the word document is its brevity.

Once you’ve selected the URLs you wish to evaluate, away you go with the technical stuff. I’ve decided to look at Glamlife, one of our sites that should be pretty accessible, so that I don’t have too daunting a list of things to fix, and to evaluate the evaluation! The URLs I’ve chosen to test are

The home page, a section home page and a content page.

Technical HTML Conformance

“Each HTML page should be put through at least one HTML validator and each CSS should be validated using a CSS validation service.”

I used the W3C Markup Validation Service.

The one remaining error on the home page is an element that we use in our rollover javascript. An explanation of how this works can be found at http://jehiah.cz/archive/simple-swap. I currently don’t know how to change this so that it validates, but it shouldn’t cause the page to choke on any accessibility tests, so we can leave it in. It’s only on this page, so my efforts are better employed elsewhere.

The section homepages are a different layout and based on a different template, and all validate as XHTML 1.0 Strict. Being based on templates, the overwhelming majority of pages validate, unless there are things entered in the content that break the validation. This is an unavoidable hazard of a CMS. However, it’s worth remembering that valid does not equal accessible. Validation is just one of the tools available to us that assists in ascertaining that our code is of a certain standard.

CSS Validation

A quick run through of the CSS we use for Glamlife came up with a few errors, which were easily fixed. I think the CSS for the site could do with some tidying up, to make development easier.

Screen Size/Resolution

The TechDis site recommends http://www.anybrowser.com/ScreenSizeTest.html as a resource to test ‘common’ browser resolutions, though they seem pretty tiny to me. The copyright info on the page says 1995-2001, so presumably this is when the info was relevant. Instead I’ve chosen to check our general site stats, and as at Feb 2007 around 10% of our users are on 800×600 resolutions. We’ve been pretty conservative with our design to accommodate this size.

Enlarging the font size

The font size scales up well in Safari, and goes up and down through the text size options in IE6 on XP (though the smallest size is almost illegible, which I think is a common problem).

Is the site usable without images?

Yes

Does the site work without JavaScript?

Yes

Can you use the site without using the mouse?

I can tab through the site, but to thoroughly test this you would need to have input from someone for whom this method of navigating a site is important. How one lets the user know about the keyboard shortcuts that we use is an important question that needs more work. We have, however, chosen our access keys with regard to the UK government’s advice.

Navigation without a mouse (or other pointing device) is something that we need to get out and learn more about, and the tab order of the page is something that we can definitely develop some University standards on. The access keys for Glamlife can be found on the Accessibility Statement page.

Automatic Checkers

The guidance from TechDis recommends some automatic checkers

http://www.cynthiasays.com/ I used this to check against the WCAG priority 1, 2 and 3 options. One error we had was on link text. The checkpoint is 13.1: clearly identify the target of each link. The fix for this will be a small change to the content.

WebXACT

We used WebXACT throughout the development phase of Glamlife, and so it does not flag up many errors and/or warnings. The one error that comes up is something that we have looked into and may look into again. It’s certainly not a showstopper.

Accessibility Heuristic Evaluation

This sounded quite daunting to me until I realised that it’s a method to include some judgement-based criteria for any site that you care to evaluate. Essentially one answers the following questions and assigns a score for how fully you feel the site satisfies each one. There is a fuller explanation on the TechDis site.

  • Does the website have a clear, intuitive and logical navigation structure?
  • Does the website have a clean, consistent and clear visual design?
  • Does the site provide appropriate alternative descriptions for all visual elements?
  • Are all the website interactions, forms, navigation scripts etc accessible and usable?
  • Does the website use clear and simple language, appropriate for its audience?

We can answer yes to all these questions, gaining 3-4 points for each answer, which tells us that we are doing OK. The questions are very useful for providing a mechanism to evaluate non-technical issues, matters of personal preference or other qualitative issues.

Usability Heuristic Evaluation

Similarly, we score well on the usability heuristics too. We’ve kept the site pretty free of inappropriate frames, Java, Flash and animation, and the content is clear and well written.

Assistive Technology Testing

We have tested the site with the screen reader that is installed as standard in the IT labs here at Glamorgan. However, this is of limited value because we are not regular users of the software, and consequently this skews the tests. Useful feedback would come from a regular screen reader user.

Just prior to launch I posted a request for users’ opinions on the Accessify forums, and was rewarded with some useful feedback. Specifically, a user explained what was actually being read out by his screen reader when he viewed the site, and suggested some changes to our access keys and a few other things, which we then implemented. The whole area of assistive technology testing is one that the University needs to get to grips with. We have done what we can.

Browser Compatibility Testing

We have tested on IE6, Firefox and Safari, which are the most popular on our site, with approx 98% of users using them. We also feel that the web standards based approach we’ve taken is likely to be helpful to users with browsers that we have not directly tested.

Conclusion

The evaluation exercise has been good to go through, but it does take some time. If one was doing it for sites with bigger accessibility issues then I can imagine it would be an arduous task, but a completely necessary one. Accessibility and usability were central to Glamlife from the start and it still threw up some issues, and continues to do so. They are not features that, once provided, can be ticked off; it is a continual process of evaluation and development.

If you’ve found this article interesting, please add some comments. As a team we are always keen to get feedback on what we do.

A cite for sore eyes

Researchers are required to refer to papers, articles etc. as evidence of their research activity. As a consequence the research sites will soon have lots of publication information available, and a quick glance at the sites shows that there is work to be done on making semantic sense of the information.

The LRC have produced a guide to citations:
http://www.glam.ac.uk/lrc/about/help/guides/gencitations.php

Here is an example of a journal publication from http://genomics.research.glam.ac.uk:

  • Burke, S and Kirk, KM 1
    Genetics education in the nursing professions: a literature review
    Journal of Advanced Nursing, 2006 54(2): 228–237,
    ISSN 0309–2402.

This comprises the author name/s, the article title (usually quite long), the journal name, the date when the article was published, the volume of the journal (in this case 54), the issue number, the pages the article can be found on, and an ISSN number (explanation here: http://www.bl.uk/services/bibliographic/issn.html).

This is the Textile markup necessary:

*(pub) Burke, S and Kirk, KM"^1^":#1
??Genetics education in the nursing professions: a literature review??
_Journal of Advanced Nursing_, 2006 *54*(2): 228-237,
ISSN 0309-2402.

One of the problems with this is that only the title is contained within the cite tags. It would be better if all the text were contained within the cite. For such semantically rich information I think that HTML is the best way to mark it up.

  • Burke, S and Kirk, KM 1
    Genetics education in the nursing professions: a literature review
    Journal of Advanced Nursing, 2006 54(2): 228–237,
    ISSN 0309–2402.
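The markup behind an entry like the one above might look something like this – a sketch only, as the class names here are suggestions rather than a fixed scheme:

```html
<p class="publication">
  <cite>
    <span class="authors">Burke, S and Kirk, KM</span>
    <span class="title">Genetics education in the nursing professions: a literature review</span>
    <span class="journal">Journal of Advanced Nursing</span>,
    <span class="year">2006</span>
    <span class="volume">54</span>(<span class="issue">2</span>):
    <span class="pages">228&ndash;237</span>,
    ISSN <span class="issn">0309-2402</span>.
  </cite>
</p>
```

Each component of the citation gets its own class, so a stylesheet can style the journal name or the page range independently.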

I think this is a better, more useful way to mark up a publication. The whole entry is now cited rather than just the title, there are also more classes that we can hang CSS styles on, and the classes can also serve to explain a little to authors of the code what the information means.

It can serve as a useful starting point for when we start to pull publication info out of a database.

Other things to think about are the possible use of a citation microformat, and we also need to find out more about how we can display the data to correspond to the variety of citation styles out there.