About Me
- Jeremy
- Web person at the Imperial War Museum, having just completed a PhD on digital sustainability in museums (the original motivation for this blog was as my research diary). Posting occasionally, usually museum tech stuff but prone to stray. I welcome comments if you want to take anything further. These are my opinions and should not be attributed to my employer or anyone else (unless they thought of them too). Twitter: @jottevanger
Thursday, May 07, 2009
A new Head of Communications
Hopefully before that point we will have started or even completed the process of finding a replacement for Mia, whom we lost to the Science Museum all those months ago. The vacant post is being boosted to "Digital Museum Manager", to make up for the fact that we have had no manager responsible for web and digital media since October, for reasons it would be imprudent to expand upon here. We need someone at that level to take on the planning, policy and strategic work that the HoC will be too busy to deal with, given that he's covering the whole of communications (internal and external), but we also have to have a developer to fill the gap that Mia left, so this will be a pretty hands-on post, with probably more time spent coding than managing. We'll have to see if this proves sufficient, since even when we were fully staffed we were short-staffed.
Looking at Mr Robbin's profile, it is good to see that internal communications are part of his skill-set. I think it's broadly felt at all levels here that MOL needs to work on this area in order to strengthen us as a corpus of colleagues with a commonly understood direction, and it will be interesting to see how our internal comms evolve in the coming months. Between now and July there's a lot that needs doing, so we'll have to muddle on in the meantime, but overall an interesting time lies ahead.
Tuesday, May 05, 2009
CFP for VALA2010
VALA promotes the use and understanding of information and communication technologies across the Galleries, Libraries, Archives and Museum sectors.
The CFP is here, but the deadline is nearly up (although the conference isn't until Feb 2010).
Museums Association digital events
The only conference in that area that I recall the MA running was, ooh, 2006 or so, but there are two more coming up. In June we have World wide wonder: museums on the web (NOT to be confused with the long-standing MCG-run UK Museums on the Web conference that I presume will take place later that month). There are some great people lined up for that, with perspectives ranging from academic to managerial to dirty-hands coder to strategic.
Then on September 18th is "Go digital: New trends in electronic media", which looks like it draws upon the sources interviewed for the MP special (including the director of public programmes here, David Spence). In contrast to June, it looks like it's going to be focussed on off-line media.
Monday, May 04, 2009
A dawning realisation?
Nowadays everyone I talk to questions the metrics they use. More than that, people seem keener to dig into what they may mean in terms of value. Seb Chan is amongst those in our sector who are exploring how to make better measurements, and better use of them; closer to home, Dylan Edgar's work with the London Hub dug into similar issues.
Last week in a catch-up with the director of my division we touched on his own objective of "improving the website". In itself it's encouraging that the objective is there, as part of the reorganisation we are currently experiencing, but "improving the website" is a pretty broad ambition. I think it's a subject that we'll revisit in more depth soon, but it was clear that our director was as aware as we web types were that when you lift up that rock you'll find a tangled mess of questions. Before you talk about "improving" you need to identify what you consider to be valuable, and to disentangle theoretical "improvements" from impact, preparedness, experimentation etc. Obviously a set of measurements that to some degree reflect these valued qualities is a sine qua non for managing their realisation, so here's a reference to provoke a little more thought on the subject, which I won't dig into here but which has had me rethinking my own attitudes to web stats and the whole evaluation problem: Douglas W. Hubbard, 2007, How to Measure Anything: Finding the Value of "Intangibles" in Business.*
In any case I find it encouraging that in this discussion and others with senior colleagues there seems to be a dawning awareness that we have a complex, multidimensional environment to deal with, wherein the varieties of "success" may be as varied as those between all the departments within a museum. I'm not sure that it would always have been the case that the higher echelons were aware of the perils of trying to evaluate our digital programmes, although perhaps any senior manager worth their salt will have long ago twigged that a website is not "improved" merely by adding pages, Flash splashes and video - evaluating the more familiar physical museum is no easier, after all, and nor is improving it. We do need to have that conversation about what we mean by "website" with senior management, though. Is it only geeks who see this as just one part of our digital presence?
When it comes to the use of web stats of various sorts, there have always been plenty of complaints about them, but I suspect that in this discussion too we are seeing greater recognition that it's not about visitors versus hits. Maybe it's not even enough to focus on "impact", since the heart of the matter arguably lies a level deeper than that: the first step is figuring out what impact itself means in the context of the museum's mission and, in this networked environment, in the mission of the meta-museum that we must realise we are a part of.
Rhetorical question for the day, then: Is there a mission for the meta-museum, and do we measure up to it?
*I hope to post about this book properly eventually, but don't wait for that: do check out the book, which, for all its flaws of repetition, is full of useful ideas and tools.
From the library: Renaissance and metrics
It's not a brilliant piece, to be honest; it's limited by reference to online publications and ends up muddling the question of what data are gathered with that of what is made available on public websites. Everitt was writing in advance of a review being conducted for the MLA (review FAQs) by an advisory group led by Sara Selwood, Phase 1 of which was to be completed last autumn so as to inform the business plan for the years ahead [note to self: track down other Selwood refs on data collection in cultural heritage]. Because of this it's quite likely that Everitt's findings were out of date before they were even accepted for publication. All the same there are some interesting points within the paper. For example, despite the declared intention of Renaissance to standardise methods of evaluating impact, Everitt finds notable variability in how this is actually undertaken. Two Public Service Agreement targets are applied to Renaissance, and measurements against these seem to be uniform, but beyond this and the headline figures there is less consistency; likewise the approaches to making data, analysis and reports public vary greatly. I also discovered that the MLA offer a set of Data Collection Guidelines and templates, which I now need to digest. Presumably this 2008 manual (PDF) is the replacement for the 2006 version that Everitt was referring to, and here's a page on the MLA site about the results to 2006.
I look forward to seeing whatever parts of the Selwood-led review are published. The overall direction of Renaissance is up for grabs, it would seem, which could have a big impact on the Museum of London, for one. I will be especially interested, though, in the data collection strand, and in how they suggest we evaluate impact.
ICHIM and DISH
I hadn't twigged that the 2007 ICHIM was in fact the last of that long-running series of biennial conferences, which ran, amazingly, from 1991. April's issue of Curator starts off with an interview with David Bearman on ICHIM's history, why it ended, and what comes next. Let's not forget that dbear and Jennifer Trant also run the universally adored and enormous Museums and the Web conferences, but ICHIM covered somewhat different territory and arguably there's a space that needs filling now...
...which is why it was timely that on the same day I found that interview, I also read about DISH2009:
"Digital Strategies for Heritage (DISH) is a new bi-annual international
conference on digital heritage and the opportunities it offers to cultural
organisations."
DISH 2009 takes place in Rotterdam December 8-10th, and the CFP is up. It looks interesting: taking a step back to look at strategic questions of innovation, collaboration, management etc.
Thursday, April 30, 2009
NMM, YQL, COBOAT, CODS
As with similar tools/services (Pipes, Dapper, dbpedia, and various things nearer the surface like GMaps), YQL is clearly a blessing from both ends of the telescope: we get to use it for its intended purpose - the grandiose ambition is to be "select * from Internet" - knitting together data sources from Yahoo! and beyond; and we also get to offer our data in a developer-friendly way, encouraging its reuse, by creating Open Tables [note that these are purely a machine-friendly description of how to access data: no data is handed over as such]. Jim has already been busy creating Open Tables and experimenting with YQL.
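To give a rough idea of the shape of this, a query against a hypothetical museum Open Table might look like the following in the YQL console (the table URL, alias and field name are all invented for illustration):

USE "http://example.org/yql/museum.collections.xml" AS collections;
SELECT * FROM collections WHERE keyword = "roman" LIMIT 10;

The USE statement points YQL at the Open Table definition, which describes how to call the underlying web service; the SELECT then runs against it just like any of Yahoo!'s built-in tables (e.g. select * from flickr.photos.search where text="london").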
Following the talk we headed for a pint (and one of the most jaw-dropping jokes I've heard, from Chris), and it was good to talk to Tristan from Cogapp. When I stopped raving incoherently about the marvel that is Solr (yes, still in love even as I gradually find out more about it), Tristan cleared up some questions for me about Cogapp's COBOAT app. They recently open-sourced this (as far as possible) in the context of the Museum Data Exchange project with OCLC (see Gunter Waibel's recent post), where it plays the role of connecting various collections management systems to an OAI gateway-in-a-box, OAICatMuseum (it seems it's only used with TMS in the project, but the point of COBOAT is that it makes life easier to map one data structure onto another, and another CollMS would slot in just fine).
For me, both COBOAT and OAICatMuseum are of interest for the role they could play in the revamped Collections Online Delivery System* we'll build this year, resources allowing (in other words, don't hold your breath; mission critical, yeah, but worth paying for? I await the answer with interest). Integrating and re-mapping data sources, an OAI gateway, and sophisticated, fast search are key requirements, as is a good clean API, and taking these two applications together with Solr I feel I may have identified candidates for achieving all of these aims. We're a long way from a decision, of course, at least on the architecture as a whole, but I have some tasty stuff to investigate, and I'm already well down the track in my tests of Solr.
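To give a flavour of why Solr appeals for the search part of this, a query against a hypothetical collections index is just an HTTP GET, with filtering, faceting and output format all controlled by URL parameters (the field names below are invented, not a real schema):

http://localhost:8983/solr/select?q=object_name:lamp&fq=period:roman&facet=true&facet.field=material&rows=20&wt=json

That one request returns the matching records plus counts per material, as JSON, which is the sort of thing that could drive both a public search page and an API with very little glue code.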
Thanks again to Jim for arranging the talk. He's got another great guest coming up, hopefully I can make it to that one too.
*I'm resigned to this thing being called CODS but still hoping for something less, well, shit
Sunday, April 26, 2009
Catching up with Europeana v1.0 [pt.2]
So April 2nd and 3rd saw the kick-off meeting for Europeana v1.0, the project to take the prototype that launched last November and develop it into a full service. There may have been glitches at the launch, but at the meeting there was a tremendous feeling of optimism, sustained I suppose by the knowledge that those glitches were history, and by the strength of the vision that has matured in people's minds.
The meeting was about getting the various re-shuffled (and trimmed) work-groups organised, with their scope understood by their members and refined in some initial discussions before the proper work begins. There are tight dependencies going in all directions between the work-groups. My problem was, on reflection, a very encouraging one: it was difficult to decide which WG I should work with, since nearly all of them now have some mention of APIs in their core tasks. Given that concern over APIs was the reason I got involved with Europeana, it's great to see how central a place they occupy in the plans for v1.0. Not surprising, perhaps, given the attitudes I've discovered since joining, but it feels more real now that they're boosted up the agenda. For those who worry (as I used to) that Europeana is all about a portal, this shows that the fear is groundless. Jill Cousins (the project's director) distilled the essence of Europeana's purpose as being an aggregator, distributor, catalyst, innovator and facilitator; the portal, whilst necessary, is but a small part of this vision.
In the end I elected to join WG3.3, which will develop the technical specs of the service, including APIs. Jill is also organising a group to work up the user requirements (to feed to WG3.3), which I'll participate in. I guess this will also help to co-ordinate all the other API-related activity, and I'm thrilled to see several great names on the list for that group, not least Fiona Romeo of the National Maritime Museum. Hi Fiona! I hope to see more from the UK museum tech community raising their hands to contribute to a project that's actually going to do something, but for now it's great to have this vote of confidence from a museum that puts many of us to shame with its attitude and its actions.
So we heard about the phasing of developments; about the "Danube" and "Rhine" releases planned for the next two years; about the flotilla of projects like EuropeanaLocal, ApeNet, Judaica, Biodiversity Heritage Library, and especially EuropeanaConnect (a monster of a project supplying some core semantic and multilingual technology, and content too); and about the sandbox environment that will in due course be opened up to developers to test out Europeana, share code and develop new ideas. Though we await more details, this last item is particularly exciting for people like me, who will have the chance to both play with the contents and perhaps contribute to the codebase of Europeana itself, whilst becoming part of a community of like-minded digi-culture heads.
Man, you know, I've got so much stuff in my notes about specific presentations and discussions, but you don't want all that, so here's the wrap. As you can tell, I've come away feeling pretty positive about the shape it's all taking. There are undoubtedly big challenges, in terms of achieving detailed aims in areas like semantic search and multilinguality, but also in ensuring the long-term viability of the service Europeana hopes to supply; nevertheless the plans are good and, crucially, there are big rewards even if some ambitions aren't realised.
Within the UK there are a number of large museums with great digital teams and programmes that are not yet part of Europeana. There are also, obviously, lots of smaller ones with arguably even more to gain from being in it, but they face more of a practical challenge to participation right now. So why is it that those big fish are not on board yet? Is it just too early for them, or are there major deterrents at work?

I know that there are people out there, including friends of mine, who are sceptical of Europeana's chances of success and sometimes of its validity as an idea. The former is still fair enough, I suppose, or at least the long-term prospects are hard to predict; the latter, though, still mystifies me. If we want cross-collection, cross-domain search - and other functionality - based on the structured content of large numbers of institutions, there's really no alternative to bringing the metadata (not the content) into one place. Google and the like are not adequate stand-ins, despite their undoubted power and despite the future potential for enabling more passive means of aggregation by getting, say, Yahoo! to take content off the page with POSH of some sort (which certainly gets my vote, but again relies on agreed standards). Mike Ellis and Dan Zambonini, and I myself separately, have done experiments with this sort of scraping into a centralised index, turning the formal aggregation model around, and there's something in that approach, it's true. Federated search is no panacea either, given that it requires an API from each content holder and is inferior for a plethora of reasons. Both are good approaches in their own ways and for the right problem - as Mike often reminds us, we can do a lot with relatively little effort and needn't get fixated on delivering the perfect heavyweight system if quick and light is going to get us most of the way sooner and cheaper.

But I can't help detecting some sort of submerged philosophical or attitudinal* objection to putting content into Europeana - a big, sophisticated and (perhaps the greatest sin of all) European service. I sense a paranoia that being part of it could somehow reduce our control of our content, or make us seem less clever because it does things we haven't done, even if we're otherwise agile, clever web teams in big and influential museums. But the fact is that a single museum is by definition incapable of doing this, and if you believe in network effects, in the wisdom of crowds, in the virtues of having many answers to a question available in one place, then you need also to accept that your content and your museum should be part of that crowd, a node in that network, an answer amongst many. If your stuff is good, it will be found. Stay out of the crowd and you don't become more conspicuous, you become less so. Time will doubtless throw up other solutions to this challenge, but right now a platform for building countless cultural heritage applications on top of content from across Europe (and beyond?) looks pretty good to me. It's heavyweight, sure, but that's not innately bad.
If your heritage organisation is inside the EU but isn't part of Europeana, or if it's in it but you aren't part of the discussions that are helping to shape it, then get on board and get some influence!
Flippin' 'eck, I didn't really plan on a rant.
*is this a made up word?
Saturday, April 25, 2009
Catching up with Europeana v1.0 [pt.1]
Prototyping done, a bid was assembled to develop a full-blown service, "Europeana v1.0". This bid to the European Commission was successful, and just before Easter a kick-off meeting was held at the Koninklijke Bibliotheek in The Hague to initiate the project. This is actually but one of a suite of projects under the EDL Foundation umbrella, all working in the same direction, but I guess you could say it's the one responsible for tying them together.
So how is Europeana shaping up now? Having spent three days finding out, I can tell you that I came back feeling good - and not just because I was heading straight off again on holiday. Day 1 was about travel and (obviously) a long and lovely trip to the Mauritshuis, but it ended with an hour in the company of Sjoerd Siebinga, lead developer on the project, and a session with Jill Cousins, Europeana's director. I went to see Sjoerd because I wanted to find out how Europeana's technical solution would fit with our plans at the Museum of London for a root-and-branch overhaul of our collections online delivery system. I knew that they'd be opening the source code up later this year, and I also knew that in essence what Europeana does is a superset of what we want to do, so I figured I'd find out whether there'd be a good fit and whether there are things I could start to use or plan for now. Laughably, I thought that we might actually be able to help out by testing and developing the code further in a different environment - as if they needed me! I'll save this for another post, but in short Sjoerd took me on a tour of what they use as the core of the system (Solr) and blew me away. There are layers that they have built/will build above and below Solr that make Europeana what it is and may also prove helpful to us, but straight out of the box Solr is, quite simply, the bollocks. I've known of it for ages, but until given a tour of it didn't really grasp how it would work for us. Many, many thanks to Sjoerd for that.
Next I met with Jill for an interview for my research on digital sustainability in museums, where we dug into the roots of Europeana, its vision, key challenges, and of course sustainability (especially in terms of financial and political support). This was fascinating and revealing and added a lot to my understanding of the context of the project's birth and its fit in the historical landscape of EC-funded initiatives in digital/digitised cultural heritage. As a research exercise it was a test of my ability to work as an embedded researcher: one who is not just observing the processes of the project but contributing and arguing and necessarily developing opinions of his own. I really don't know how well I did in this regard - I'm not sure how often my attempts to be probing may in fact be leading, or whether my concerns with the project distort the approach I take in interviewing. Equally I don't know if this matters. A debate to expand upon another time, perhaps.
Days 2 and 3 were the kick-off meeting, and I'll put that in another post.
Thursday, April 16, 2009
Museums and digital sustainability: the other meaning
Pirate Bay server becomes museum artefact
Whether the Swedish National Museum of Science and Technology will be sustaining the file-sharing service is another matter.
I guess, joking aside, that really does highlight the key difference between (my definitions of) sustaining and preserving: the latter is about keeping stuff in existence, the former about keeping it fulfilling its purpose.
Friday, March 27, 2009
Evaluate this
My problem: to create/declare JavaScript variable names dynamically. I have a loop in this little SVG experiment I'm doing with the Raphael JavaScript SVG library (another post to come on this) where I want to make a "set" of SVG elements out of each item in an array of unknown length. I also need to attach an onclick function to each set. For creating the set, putting items into it, and attaching the event handler I need a variable name to be made on the fly. This is how to declare it:
eval("var r" +i +" = dynamically named variable'");If i is currently 2, this creates a string variable with the name r2 and the value "dynamically named variable". To get the variable value you have to use the eval() method again, thus:
alert(eval('r' + i));

Probably old hat, this stuff, but I'm not too proud to show my ignorance, at least when I've just reduced it slightly!
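For completeness, here's a minimal sketch of the pattern inside a loop, stripped of the Raphael specifics (the array and its contents are invented for illustration):

var items = ["bridge", "wharf", "wall"]; // stand-in for the real data array
for (var i = 0; i < items.length; i++) {
    // declares a variable named r0, r1, r2... in the enclosing scope
    eval("var r" + i + " = '" + items[i] + "'");
}
// and you need eval() again to read one back:
alert(eval("r" + 1)); // "wharf"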
Wednesday, March 25, 2009
OT: Holy crap, it's the eefin' muppets
WFMU's Beware of the Blog: EEF BEAT MANIFESTO (MP3s)
Tuesday, March 24, 2009
Visualising SKOS
Hmm, there are quite a few flaws with the SKOS too (but see above). I fixed some last night - I found that I needed to train Dapper differently for a couple of pages and there was no way of reconciling it without doing two separate dapps. I guess that's not too surprising considering the number and size of some pages, and the fact that there are over 2000 terms, each with multiple relationships. It precludes Dapper from being part of a live service, but I'd already established that there wasn't much sense in that - better to use it as a screen scraper and then move on.
Incidentally I, as a total JSON noob, had problems accessing what looked like arrays within the JSON array - for narrower and related terms. I couldn't treat them as arrays but finally, using a tip I found on the O'Reilly forums, took the element, did toString() and split it into a proper array at the commas (I'd tried split before, but it failed without toString()), then I could cycle through the terms. Perhaps repeating the problem here will help others with the same issue.
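In case it helps, this is roughly what that workaround looks like (the property name narrowerTerms and the sample data are invented; use whatever your Dapp actually emits):

// one entry from the Dapper JSON output (shape invented for illustration);
// in the real output the narrower terms only *looked* like an array
var item = { term: "ceramic", narrowerTerms: ["earthenware", "porcelain", "stoneware"] };

// flatten to a string, then split at the commas to get a proper array
var terms = item.narrowerTerms.toString().split(",");
for (var j = 0; j < terms.length; j++) {
    alert(terms[j]); // each narrower term in turn
}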
Monday, March 23, 2009
I'm a Dapper Dan man
The next step is KML for this lot, but I will talk to the content owners before doing anything like that, since it might put a load on their servers were I to layer their content onto our maps. One helpful thing, though, is that their app takes a parameter for the size of square to return results for, which means you could write a map application that would only query for the area you're showing at the time.
Sunday, March 22, 2009
Playing with SKOS
I took the British Museum's material thesaurus, which is hosted by CT here. I went to Dapper and tried to get it to learn well enough to go straight to nice XML, with all the different relationships having their own elements. There were too many exceptions for that: it stopped learning them after a while and I was going in circles I'd never escape, so I made a simpler Dapp (here) which just puts out the term, the linked terms, and comments. I later had to retrain it to cope with the H page, but since running that page correctly once it's refused to do so again: it shows the results for A instead. Not to worry - add a querystring and it thinks it's a new page.
Anyway, then I had XML but still wanted to get this into nice nodes for different relationship types between terms (though I wasn't really thinking about SKOS at this point. Doh!). I had high hopes for Pipes. Another doh! Because I would need to go through each item multiple times, renaming each sub-element according to its contents (e.g. broader terms all start "BT ") and trimming the string contents, I was scuppered: you can't loop operator modules, which are the ones that would allow renaming. And you can't rename by a rule, or I couldn't find how, and it would probably rely on an operator module anyway. So after a lot of time wasted I thought, sod this, I know how to do this in a minute using XSLT, and how important is it to have this as a web service? Fact is, it's not, or at least not in the form of a simple list - I may as well just have a static file.
So that's what I did. It took more than a minute, though the core code scarcely did. What took longer was digging into SKOS, once it had struck me that it would be the obvious (only) format of choice. It works in a pretty straightforward way, or at least it's easy to do the basics and I didn't need to do more than that. Finding out how to represent it as RDF/XML was not so easy, coz the W3C pages don't show any - they just show Turtle, which isn't that much use to me, really. I needed a full RDF document. XML.com came up with the goods - old, but hopefully valid. So I went ahead and knocked up SKOS RDF for all the letters of the alphabet (bar X - there's nothing in the list that starts with X) and merged them into one RDF file, which I hope is valid. I actually have my doubts, but I do know that with this file I can navigate around terms in a way that would be useful to me, so that's good enough for me. It's here. I think it would be useful to put a web service on top of this now (perhaps Pipes can come in useful at last) so that it's really an API. Feel free! Oh, go on then, here's a first pass. Won't render as RSS and (consequently?) the "Run Pipe" screen shows nowt, but in debug mode you see results, and as e.g. JSON and PHP.
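For anyone wondering what that boils down to, here's a minimal sketch of a single concept in SKOS RDF/XML (the URIs and the example term are invented, not the real BM data; broader/related/narrower links become resource references to other concepts):

<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:skos="http://www.w3.org/2004/02/skos/core#">
  <skos:Concept rdf:about="http://example.org/material/earthenware">
    <skos:prefLabel>earthenware</skos:prefLabel>
    <skos:broader rdf:resource="http://example.org/material/ceramic"/>
    <skos:related rdf:resource="http://example.org/material/terracotta"/>
    <skos:scopeNote>Any note scraped from the comments column goes here.</skos:scopeNote>
  </skos:Concept>
</rdf:RDF>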
Next up there are a bunch of thesauri on those sites that I'd like to do a similar thing with, though some are going to be more fiddly. Others may be easier to dapp, but actually I reckon going to SKOS is a better bet, and I can take it from there, as long as the content owners aren't too pissy about me playing with their stuff. Actually what would be most useful is probably to play with some of the word/term lists, e.g. the RCHME Archaeological Periods List.
I could get into this.
Tuesday, March 10, 2009
The Guardian's API arrives
Here's the documentation: http://api.guardianapis.com/docs/
And don't forget the various sets of data they've compiled (from government and other sources) for the mashing thereof: http://www.guardian.co.uk/data-store (pointing at Google spreadsheets)
Quite how we'd be able to use the Guardian API here will take a little consideration (and the answer may be "not at all"), but it's all good. It looks like newspapers will have to start tackling (or asking) their own version of the question that Richard Light recently posed and that Mia is chasing us museum tech types to answer: what can we standardise across the APIs that are starting to emerge from museums, and where do we start?
Monday, March 09, 2009
Shakespeare's first theatre
Thursday, March 05, 2009
Multi-sense VR helmet? Want one o' them!
Hmm, actually maybe I definitely do NOT want one o' them. Not only do I dislike the idea of not being able to tell reality from virtuality (and no, I didn't think The Matrix was a docu-drama), but you'd look a bit of a knob in this. But it's still a pretty cool bit of vapour-ware.
Wednesday, March 04, 2009
Uncontroversial title
Frankie kicked things off today, and after a bumpy start the discussion in those various venues has been fruitful, I think. My thoughts are scattered around the place but, as I say, precede any real knowledge. Basically I think it's an idea with plenty going for it: a necessary experiment that we'll all learn from and which, at the very least, will hopefully leave a legacy of some infrastructure (technical and organisational/political) for the 9 partners. It may be that there's a lot more to it than that, and it will make a very interesting case study. In the meantime, genuine congratulations to Carolyn Royston and her team for battling through all the challenges and getting this far. It's humbling, when you produce the sort of modest stuff I do, to see what's possible (if you have some resources).