On that person who understands end users AND developers

When digital humanities projects first got going many years ago, one of the prized members of any project team was the person who could connect what the researchers wanted with what the technical developers had to build.

That person never really had an official title, but if you didn’t have that role, you tended to end up with horribly ugly sites that served the research aims of at most two researchers, if you were lucky. Think of 20 search boxes on one screen, each with drop-down lists of over 50 categories to choose from.

It’s amazing to think how the universe of web design has moved on since then. You never hear the term ‘webmaster’ any more; there’s a spectrum of different tasks and titles (from user researcher to front-end developer) needed to convert user needs into a gleaming digital product.

Any digital humanities project (or better still, centre) that wants to run successful and lasting services over time needs those roles. And as expectations of what the web can deliver continue to rise, so does the need for someone who can create a loop between what the users want and do, and what the developers then build.

In large private companies (oozing with the cash that digital humanities projects can only dream of), there are separate roles for each part of this: undertaking user research, drawing wireframe outlines, designing graphics and ‘look-and-feel’, shaping user interaction, and then running user testing and feedback.

Most public sector bodies are fortunate if they have one person to do any of that. Europeana has been lucky to have Dean Birkett as part of that connection between what users want and how a website works.

Dean’s work is highly impressive: he is able to understand user needs and quickly sketch and conceptualise ideas that can be passed on to developers. He’s heading off to do some freelance work and he will be sorely missed in the office.

Before he left, he mentioned some books that are key for bridging that gap between user needs and completed digital products. They are useful for any digital project that wants to make sure it is delivering what its users want.

Six Themes from Europeana Tech

There were numerous great presentations and round-table discussions at the Europeana Tech event last week, held at the Bibliothèque nationale de France. Here are some of the key points that might be of interest to libraries and other cultural heritage organisations.

Bibliothèque nationale de France

1. Maps and geo-referencing remain cool

Thousands of maps were extracted and identified from the British Library Labs collection, and Wikimedians then did an awesome job categorising them. Alternatively, use the LoCloud Historic Place Names service to help identify places within documents.

2. Publishing images online? Use the International Image Interoperability Framework – IIIF

Want your digital images to be used in a controlled way by others, without the difficulty of FTP or hard-disk transfers? If you use the IIIF standard for publishing images, it becomes easy to share images, manage their re-use, and analyse how others use them.
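
As a rough illustration of why that is (not from the original post): IIIF images are addressed by a predictable URL pattern, so a short sketch like the one below, against a hypothetical IIIF image server and image identifier, is all it takes to ask for a resized copy of an image.

```python
# A minimal sketch of the IIIF Image API URL pattern. The server and
# identifier are hypothetical; the {region}/{size}/{rotation}/{quality}.{format}
# path segments come from the IIIF Image API specification.
import requests

BASE = "https://iiif.example.org/iiif"   # hypothetical IIIF image server
IDENTIFIER = "map-0042"                  # hypothetical image identifier

# info.json describes the sizes and tiles the server will deliver.
info = requests.get(f"{BASE}/{IDENTIFIER}/info.json").json()
print(info["width"], info["height"])

# Request the full image, scaled to 600 pixels wide, unrotated,
# at default quality, as a JPEG.
url = f"{BASE}/{IDENTIFIER}/full/600,/0/default.jpg"
with open("map-0042.jpg", "wb") as fh:
    fh.write(requests.get(url).content)
```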

3. Metadata is often minimal … face it!

Collections like DigitalNZ and the Cooper Hewitt design museum have many records with sparse metadata. Interfaces accept this and try to adapt, rather than just leaving lots of inexplicable white space.

4. But there are ways to improve metadata

OpenRefine is ‘Excel on steroids’: a powerful way to make bulk adjustments to open data. Meanwhile, the amazing release of millions of images to Flickr by BL Labs has been accompanied by automated methods of tagging that data.
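
OpenRefine itself is a point-and-click tool, but as a rough sketch of the kind of bulk adjustment it makes easy (the file and column names below are invented for illustration), here is roughly the same sort of clean-up done with pandas:

```python
import pandas as pd

# Hypothetical metadata export; the file and column names are invented.
df = pd.read_csv("collection_metadata.csv")

# Trim stray whitespace and normalise capitalisation of creator names.
df["creator"] = df["creator"].str.strip().str.title()

# Collapse a few known variant spellings into one preferred form,
# roughly what OpenRefine's clustering feature does interactively.
variants = {"Brit. Library": "British Library", "brit library": "British Library"}
df["publisher"] = df["publisher"].replace(variants)

# Normalise assorted date strings to ISO 8601 where they can be parsed.
df["date"] = pd.to_datetime(df["date"], errors="coerce").dt.strftime("%Y-%m-%d")

df.to_csv("collection_metadata_clean.csv", index=False)
```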

5. Unconnected project-based services

There are plenty of great tools for cultural heritage created by different EU projects. But sustainability remains an issue. Could the Europeana Cloud service provide a better way to connect data to the most valuable services for curating, enriching and exploiting that data?

6. Wikidata as the basis for everything

Wikipedia has increasingly become a popular way for libraries to embed links to their content. But there is growing interest in what libraries and others can do with Wikidata: providing structured data about books, manuscripts, letters, diaries and other collections, and forming a backbone of verifiable statements that can actually support and improve Wikipedia.
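
As a small illustration of what that structured data looks like in practice (the query below is my own, not from the event), Wikidata’s public SPARQL endpoint can be asked for books and their authors in a few lines:

```python
import requests

# Illustrative query against the public Wikidata SPARQL endpoint:
# a handful of items that are instances of "book" (Q571) with an
# author statement (P50), returned with English labels.
query = """
SELECT ?book ?bookLabel ?authorLabel WHERE {
  ?book wdt:P31 wd:Q571 ;
        wdt:P50 ?author .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "dh-blog-example/0.1"},  # polite identification
)
for row in resp.json()["results"]["bindings"]:
    print(row["bookLabel"]["value"], "by", row["authorLabel"]["value"])
```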

Some quick principles for creating digitised culture

Getting lost in the mire of massive European projects, I am trying to put together some principles to remind me of what I am trying to work on. A first draft is below!

  • Always do user research. However great your knowledge and intelligence, you cannot know what tens, hundreds or thousands of users will do.
  • Use existing infrastructure to make life easier. C’mon, Google Docs is pretty cool.
  • “Nobody ever complained about a website being too easy to read” (thanks Dean Birkett)
  • Data should be free and easy to download at a granular level. PDF bad, CSV good …
  • … but think context too … CSVs will mystify some people.
  • Be open and transparent in your process. Yes, it hurts. But then everyone knows where you are and what you are trying to do.
  • Avoid vapourware. If something’s not really ready yet, don’t say it is.

They are all pretty obvious, but they are useful to remind yourself of from time to time. I’m also thinking about doing a similar set on workplace behaviour.

The Great Twentieth-Century Hole, or, What the Digital Humanities Miss


Presentation given at DH Benelux June 2014

Presentation on Europeana Newspapers

Presentation given at British Library information day on digitised newspapers

Digitisation Projects Classified by Date of Corpus

At the DH Benelux Conference in The Hague in June, I’m looking into the extent to which the Digital Humanities ignores the twentieth century. The abstract is here.

As part of this work, I’ve been investigating the projects undertaken at various DH centres, in particular those projects that are working with a specific corpus of data (as opposed to doing networking, or tools development), and the dates of those corpora.

I’ve taken some significant DH Centres and marked each of the projects according to a very rudimentary temporal classification: ‘Classical, Medieval, Renaissance, 18th century, 19th century, 1900-1950, 1950 onwards’.
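
The bucketing itself is trivial; a sketch like the one below shows the idea, though the cut-off years are my own rough approximations and the real classification was done by hand, project by project.

```python
# Rough sketch of the temporal bucketing described above. The cut-off
# years are approximate conventions chosen for illustration only.
def classify_corpus(year: int) -> str:
    if year < 500:
        return "Classical"
    if year < 1400:
        return "Medieval"
    if year < 1700:
        return "Renaissance"
    if year < 1800:
        return "18th century"
    if year < 1900:
        return "19th century"
    if year < 1950:
        return "1900-1950"
    return "1950 onwards"

print(classify_corpus(1843))  # -> "19th century"
print(classify_corpus(1962))  # -> "1950 onwards"
```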

The Google spreadsheet with the results so far is at

So far, I’ve included:

Department of Digital Humanities, King’s College London
Huygens Institute, National Library of the Netherlands (The Hague)
Maryland Institute for Technology in the Humanities, University of Maryland
Centre for Literary and Linguistic Computing, University of Newcastle
Center for Digital Research in the Humanities, University of Nebraska

There is a sixth tab with the total number of projects.

There is a fuller list I wish to explore on the ‘Totals’ tab of the published spreadsheet. Any more links to identifiable lists of projects based at DH Centres would be gratefully received!

PS I’m aware there are a whole bunch of methodological/sampling problems with focussing on ‘projects in DH Centres’! I’m hoping to bring these out in the paper.

