About Me

Web person at the Imperial War Museum; just completed a PhD about digital sustainability in museums (the original motivation for this blog was as my research diary). Posting occasionally, usually museum tech stuff but prone to stray. I welcome comments if you want to take anything further. These are my opinions and should not be attributed to my employer or anyone else (unless they thought of them too). Twitter: @jottevanger

Thursday, December 20, 2007

The Reg: no fan of the PSP (I don't mean the Sony one)

The Reg is clearly not a great fan of Ofcom's Public Service Publisher idea!
TV licence fee 'to fund Welfare For W*nkers' The Register
I'd say not a huge fan of the Guardian either...

Monday, December 10, 2007

Cross-post: The web-monkey speaks

Cross-posted for completeness from our MoL blog

*************************************************

Hi. My name is Jeremy and I’m a museum web developer. There, I’ve said it. I’m a keyboard-jockey burdened (or as I see it blessed) by at least two sorts of geekdom: a late-born pleasure in ‘pooters; and a long-standing love of museums, and of the special sort of residue of our world that flocculates there (especially, it must be said, “real” things).

Seven-odd years ago I thought about setting up a museum-orientated web development company, to be called MuseioNet or some such nonsense. Thankfully it never got much beyond a cheesy name, because now I’m starting at least to get an idea of what I don’t know about the subject of building web-based resources for museums. I would have crashed and burned horribly if I’d tried to go it alone back then. I’ve been at the Museum of London about 6 years now, learning on the job. That’s pretty much inevitable anywhere, I suppose, and certainly in technology it’s a basic requirement owing to the speed of change – no matter how much you are on top of your chosen specialism today, tomorrow you’ll be slipping backwards.

Mia has already talked about her job, and although hers is more database-y and mine more webby, our roles have a fair degree of overlap so I’m not going to say much about what we do in a general sense; I want to talk instead about some current projects.

First thing to say is that roughly half of my time is spent on project-related work. Right now I’m really just wrapping up some odds and ends and catching up with some of the day-to-day stuff that’s piled up – bug fixes, data extractions, style-sheet changes. The odds and ends include a map interface for the London Sugar and Slavery website, which gives another way of exploring some of the people and events that feature in the gallery of the same name. The map can be found in the gallery too. Things have been somewhat held up by our attempts to make the interface simpler and more intuitive for users, which often makes things dramatically more complex behind the scenes. I don’t know how many more such applications we will build: we use ESRI products in-house, and their ArcIMS product powers the LSS map (and this one), but the power and flexibility of free mapping applications out there (Google and Yahoo! are amongst the most prominent) make them increasingly attractive. There would be a bit of a learning curve to do all the things we want with these, but ArcIMS is pretty complex too. We’ve already used Google Maps here and Yahoo! Maps to show our location map in context.
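Just to show what I mean about the free APIs: a map like our location one boils down to a few lines with the Google Maps v2 JavaScript API. This is only a sketch, of course - the API key is a placeholder and the coordinates are rough, not our actual embed:

    <div id="map" style="width: 400px; height: 300px"></div>
    <script src="http://maps.google.com/maps?file=api&amp;v=2&amp;key=YOUR_API_KEY" type="text/javascript"></script>
    <script type="text/javascript">
      // Centre a draggable map near West India Quay and drop a marker on it.
      var map = new GMap2(document.getElementById("map"));
      var quay = new GLatLng(51.507, -0.024);   // approximate coordinates, for illustration only
      map.setCenter(quay, 15);
      map.addControl(new GSmallMapControl());
      map.addOverlay(new GMarker(quay));
    </script>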

Another project that I dearly hope will soon be wrapped up is a very cool tool for creating quizzes and presentations, primarily for use within the context of our Learning Online site. The tool is complex and there have been problems with its development and implementation, but you can see examples of what it has been used to create in the Black History section of that site. Teachers can use these interactives in the classroom, on interactive whiteboards or regular desktop computers. The application is being developed by a Brighton company, and my role has been as an advisor and in integrating it technically with our systems, as well as testing the darned thing whenever a fix is applied.

Mariruth Leftwich, who is overseeing the latter project, is also responsible for the recently launched poster maker, built by e-bloc to Mariruth’s brief. We can load up a bunch of images on a theme and visitors can put together a poster with them, print or submit it and finally see it in a gallery. It’s pretty cool. Again, my role was as advisor and in ensuring that it would work with our core systems and that we’d be able to live with it long-term without needing too much support. Mariruth is also leading on another project I’m advising on, a site for Key Stage 1 kids about the Great Fire of London. This is a partnership with several other London organisations, and will fill a gaping hole in the provision of decent online resources on that subject for that age group.

The question of long-term sustainability is a key one to me, and has become so in large part because of another “project” on which I am working, namely my PhD on sustaining digital resources in museums. The museum gives me a great deal of support in this, and hopefully I am starting to return something to them as I develop as a practitioner. It’s not just me, though: I think we’re all thinking a little more explicitly about the question of longevity now, bringing to the surface something that was always at least in the back of our minds. There’s a whole host of things going on in the MoL Group that tie into the question of digital sustainability: the projects I’ve mentioned and many others (not least Mia’s social software, including this blog); a review of records management; the evolving plans for IT in the Capital City Galleries that will open a couple of years hence.

Well, I’ve gone on enough for now. I should have said less about maps and more about Capital City but this is a week late already and I’d better get it online. It’ll wait till next time. Bye.

Fotowoosh breaks cover

IIRC, you can download the necessary code to run Fotowoosh or whatever they originally called it yourself, but who wants to do that? Now (I discover, thanks to OUseful) they have made it available through Facebook. I hope I'm not breaking any rules that I've not read about by embedding here an instance that I've made through that site. Here you go, anyway, a view of the Getty Museum with LA in the background. It's evidently not the perfect sort of photo for the job but you get the idea.





Friday, November 30, 2007

RWW's interview with Dr. Paul Miller

A good read. Talis seem a really interesting company (which I'd already gathered from what Keith Alexander was saying earlier this year). Which reminds me, I must find the time to pursue that lead properly... Too much on, dammit! Anyway, RWW are well into SW at the moment, likewise TechCrunch, and they're right, there is a lot going on and some really exciting convergence with social software. Can we keep up?

Semantic Technology In Action: An Interview with Dr. Paul Miller

Monday, November 19, 2007

OpenID in HE

eFoundations reports on last week's OpenID meeting which looked at the situation from the perspective of higher education (in part).

Friday, November 16, 2007

Brooklyn Museum and social software

Nina Simon has posted this interview with Shelley Bernstein on Museum 2.0. Interesting from the POV of museums experimenting with various current social software gubbins, and how they feel this fits in with their mission.

Wednesday, November 14, 2007

Odds and sods 3

Well, what with disease, conferences and various other out-of-office experiences, I have had very little time at my desk to do real work, let alone meta-work like this, and I have some catching up to do. However, real work must come first, so this will have to be super-skimpy.

First, Ross's new book should be out any day now. I can't wait to read it. I just flicked through the proof in his office and know it's going to be a great and stimulating read.

We've opened a new gallery at MiD, "London, Sugar and Slavery", which I can't wait to see tomorrow when I'm at Museum in Docklands for the MCG meeting. My part has been to do with the ArcIMS mapping application, which isn't yet on the web but is in the gallery. Let's be honest, ArcIMS is a pain in the rear and you need a pretty good reason to justify the effort involved if you choose to use it over one of the free mapping apps, although of course they also have their learning curves and limitations. What they don't have is installation issues; OTOH you can't install them, and hence your client machines must have web access enabled. As our experiences this summer with web access on gallery machines were so dreadful, we're keen to avoid this, although from past experience we know it's perfectly possible to do this safely and effectively - we just seem to be lacking the skills at present. On the subject of installation, I should say that the current version of IMS is actually pretty straightforward, perhaps disturbingly so - I think I was looking for all sorts of post-installation configuration changes to do that didn't actually need doing. But there are always complicating factors, and it's still taken me the best part of 3 days to get the thing working on our internal CMS server.
Anyway, our app uses ArcSDE, a new departure for us, and Pete's written some cool queries to make this a little more interactive than some of our previous efforts. We've got some bugs to iron out, to do with the way we merge and override tool behaviours, but it's reasonably presentable.

Next up, Mia. Our social software torch-bearer has been working hard in all sorts of directions trying to get us off the ground with blogs, forums etc., not to mention organising our chaotic efforts with Flickr and the like. She's now got us going here: http://mymuseumoflondon.org.uk/. BIG congratulations, we're in the 21st century! Now we need to work out our management practices, encourage authors, look at how to embed and integrate this with our main sites, and see how it takes off.

I guess I should mention Jonty, but I'd rather not. If you insist you can check him out on our sites or on YouTube.

CHArt: the conference last week deserves a post of its own. For now, I'd like to give honorary mentions to J Milo Taylor, Tara Chittenden, Jon Pratty and Bridget Mackenzie, Tanya Szrajber, and Douglas Dodds, whose presentations I particularly enjoyed.

EDLNet. Did I write about this yet? I hope so. Watching over the mail list and looking at some discussion documents (so far simply lurking) I have some hope that the project will place the right emphasis on function over interface, given limited resources. Jon and Bridget talked about "Your Paintings" at CHArt, so far just a proposal but one that I would think could be designed to mesh well with EDL. I hope to talk more to Jon about this tomorrow.

Martin Bazley and Nick Poole are keen to get together with some of the people involved in IT in the London Hub, so we've set up a meeting at MoL next week to see what we can draw out, initially to help them with a strategy for the SE Hub; I'm interested to see what they come up with. I also know that Martin wants to pursue some of the issues around stats that Dylan and I were talking about before, since he has got the job of writing a report for the London Hub on the question. I'm just a half-blind opinionated fool on the subject but if I have anything useful to offer I'll try.

Kurt Stuchell has put together a widget bringing together podcasts and blogs from/about museums worldwide. I reserve judgement on the thing itself, which I'm sure will be of use to some, possibly me included. The main point is that it's nice to see this happening in museums, full-stop. There must be lots of other imaginative ideas out there for what museum material can be widgetised. The Rijksmuseum's widget is perhaps obvious but effective nevertheless, and perhaps we should do something a little similar: push object data out in an RSS feed to be consumed via a client-side JS snippet, perhaps. As I say, not that imaginative but worth a crack.
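For what it's worth, here's roughly the kind of snippet I have in mind - just a sketch, assuming a same-domain RSS feed of object records (the feed URL is made up; a cross-domain feed would need a server-side proxy):

    <div id="object-feed">Loading recent objects...</div>
    <script type="text/javascript">
      // Pull the RSS feed and list the first few items as links.
      var xhr = window.XMLHttpRequest ? new XMLHttpRequest() : new ActiveXObject("Microsoft.XMLHTTP");
      xhr.open("GET", "/feeds/recent-objects.xml", true);
      xhr.onreadystatechange = function () {
        if (xhr.readyState != 4 || xhr.status != 200) return;
        var items = xhr.responseXML.getElementsByTagName("item");
        var html = "<ul>";
        for (var i = 0; i < items.length && i < 5; i++) {
          var title = items[i].getElementsByTagName("title")[0].firstChild.nodeValue;
          var link = items[i].getElementsByTagName("link")[0].firstChild.nodeValue;
          html += '<li><a href="' + link + '">' + title + '</a></li>';
        }
        document.getElementById("object-feed").innerHTML = html + "</ul>";
      };
      xhr.send(null);
    </script>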

Micah Blue Smaldone. Do yourself a favour and get some. He may not be your cup of tea but you need to find out for yourself. The more I hear the more I'm ensnared. Follow far enough from the link above and you'll reach this where you can hear some spell-binding live renditions.

Thursday, November 01, 2007

What else is going on...APIs again

There's a lot to say about EDL since I attended the meeting of Working Group 3 (which concerns users) last week. I haven't the time right now, but I should note that (a) I was quite taken aback by the way that my arguments for an API ended up sending the whole meeting onto a different track, about which I felt a little guilty, and (b) I am now clearer that an API is the essential thing, but that a portal site really does have a place, and that an API can help ensure that its potential is realised by opening up the possibility that third parties can build widgets etc. to work within it, or integrate their applications with the portal. But a lot of the talk was like juggling jelly: so many things are not firmly established, or at least weren't clear in that context (like data structure, or the location of the content vs the catalogue metadata), and it makes it rather tricky to keep all these unknowns in the air and have a sensible conversation. But it keeps me coming back to the point that concentrating on the functionality, not the user interface to it, is where EDL should expend its efforts.
 
On a very related tip, Andy Powell on eFoundations today mentions the "repository as platform" idea (http://efoundations.typepad.com/efoundations/2007/10/repository-as-p.html), in respect of recent ePrints developments. I haven't read and digested this lot properly yet but it's pretty much where I think EDL needs to be too.
 
Finally, Google's latest endeavour has them offering a standard API model for social networks, OpenSocial. Along with OpenID, we may find that this takes us a good way towards standardisation. The next thing should perhaps be a standard for maps. At present you can use libraries to negotiate the different APIs of the various providers, but a standard API would be better. More sustainability for all!
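To illustrate what those negotiating libraries do: each provider gets a little adapter with the same couple of methods, and your own code only talks to the wrapper. Everything below is hypothetical - not any real library's API - except the Google Maps v2 calls inside the first adapter:

    // A wrapper that knows nothing about any particular provider.
    function SimpleMap(containerId, adapter) {
      this.adapter = adapter;
      this.impl = adapter.create(document.getElementById(containerId));
    }
    SimpleMap.prototype.centre = function (lat, lng, zoom) {
      this.adapter.centre(this.impl, lat, lng, zoom);
    };

    // One adapter per vendor hides the differences.
    var googleAdapter = {
      create: function (el) { return new GMap2(el); },
      centre: function (map, lat, lng, zoom) { map.setCenter(new GLatLng(lat, lng), zoom); }
    };
    var yahooAdapter = { /* same shape, wrapping the Yahoo! Maps calls */ };

    var map = new SimpleMap("map", googleAdapter);   // swap the adapter, keep your code
    map.centre(51.518, -0.096, 13);                  // rough coordinates, for illustration

A standard API would make the adapters redundant, which is rather the point.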

I'm through

Got through my APG transfer. Phew. It wasn't entirely painful (I had a sympathetic, patient and friendly panel) and I came out with some useful suggestions, especially WRT my research plans (ignore small museums), my schedule (give myself another year) and where I might go with this question of value (look into other types of value). Clearly I could have done a better job with my paper, and my talking about it evidently cleared up some questions for the panel, but I don't think I gave a very good idea of the amount of theoretical work I've done. In other words I sort of cracked up. I know the paper was pretty crap, though, which has something to do with writing it in a rush after finishing paper 3 at the end of September, meeting Ross to discuss it a fortnight later, and getting the APG paper in a week after that. It's also not unrelated to the fact that in 1000 words or so it's hard to summarise the 30,000 words I've written - all I could do was say what I'd looked at, not what I'd found.
 
But hey, I'm through, they said some nice things and made useful suggestions so now I must move on and knock that research plan into shape.

Friday, October 19, 2007

Names authorities

The Virtual International Authority File OCLC project looks like another possible step in the right direction, different from but kind of paralleling this JISC project. I don't really understand either but need to get 'em down here for their relevance to the Semantic Web (which in this context probably merits capitalisation)

Monday, October 08, 2007

The European Digital Library

I've been trying to get to the bottom of what is really envisaged with the EDL project. There seem to be a large number of overlapping or sub-setting activities going on (i2010, EDL and EDLNet, TEL, DELOS etc.), and what you read in one part sometimes seems to contradict what you see elsewhere. There's a bit written in various places about the digitisation and preservation aspects of EDL, but I've had trouble understanding the degree of centralisation that is planned for the access side (will it be a central repository of metadata or a more distributed approach?). Anyway, this FAQs page seems to answer some of this, although it's very broad and I'm still unclear on the architecture - if any.
A couple of relevant quotes:
"The Commission is promoting and co-ordinating work to build a common European 'digital library', by which we mean a common multilingual access point to Europe’s cultural heritage."
"Technology is moving fast and there are potentially many different ways of creating virtual European libraries. We should not aim at one single site or structure, but combine efforts in all the countries. What matters is to integrate access. This does not mean that the libraries or digital collections should be merged in a single database or library." [but is there one service behind it all? Or are we talking distributed search?]
"The needs of the users should be central. Developments will be demand-driven, but it is important to take a longer term and visionary view of what the user will get from the library in the way of services. Different users will have different needs and uses: One can imagine researchers wanting annotation tools; other users may wish to develop their family histories and genealogies using the materials in historical community archives." [but who is going to be able to do this development? I want to be able to point to my own sites at EDL and use its searching power and language tools to build my own applications on, which might include UGC or whatever; it would be less satisfactory to have to depend on them to build any such tools. In other words, I would want an API]
I'm still looking for clarity, then, and I've written to find out more. EDL should be fantastic, but the more open it is the more fantastic it will be, and I think that institutions will be keener to provide material if they can then hook into the back-end and really make something of it. That can only foster innovation. Fingers crossed

Friday, October 05, 2007

Adactio on the new paradigm of public/private

The ever thought-provoking Jeremy Keith writes in Lock up your data of the clash of paradigms between those who assume that all their data will be public unless they specifically say otherwise vs those who expect the default to be privacy (old skool, like Jeremy and me, I guess). He suggests that this is not a technological question but a cultural one, to be addressed by a conversation between those offering APIs/access to data, developers using those APIs, people creating the data, and indeed search engines that crawl API-driven sites. I haven't much to add except that I find the point about the cultural implications being key especially interesting.

Thursday, October 04, 2007

.Net goes OS (sort of)

Microsoft to release .Net as Shared Source (as in, you can read it but you can't modify it). Still, an interesting development. However I would really like to be able to get into their damn calendar control and fix it, for one thing.

Friday, September 21, 2007

Digital collections policies

I just discovered from this page that we contribute to a network
"supporting organisations in London that hold collections of moving image material. It helps researchers and the public find these collections and tries to ensure the preservation of important material made in or about our city"

That page notes that "Film and video is barely mentioned in our current collecting policy", which is also true for multimedia more widely. I stumbled over it whilst googling for digital collecting policies from museums. Few turned up, though of course library ones do, and there are odd statements from museums about digital material, but nothing very concrete.

This despite the fact my own university offers a module on the MA course looking at "particular issues of digital collections and digital collection management". I guess the problem is partly that I'm distinguishing between a lower-case "collection" of digital assets, and acquisition and accessioning of digital material into the upper-case Collection of the museum. Where the MLA have looked at "digital collections" they mean digitised collections, quite a different thing. Well that's not quite accurate (see here) but it's a somewhat different perspective, and certainly not guidance on a digital collecting policy as such. Netful of Jewels was the same: talking about collections of digital assets in the lower-case "c" sense.

A couple of exceptions: Denver Museum of Natural History's policy

Wellcome Trust's Library's preservation policy

And of course these folks at least have a digital collection, even if I haven't found the policy: http://www.brooklynmuseum.org/research/digital-collections/brooklynbridge/

I'm still reading Karen Verschooren's dissertation, which references others, so I know there are some out there. Google just hasn't got them all in its brain.

Tuesday, September 18, 2007

Forthcoming: Yahoo Teachers

Yahoo Teachers looks worth investigating, for my kids' school, as a(nother) model for collaborative web-scraping, and as a possible outlet for MoL educational materials.

Monday, September 10, 2007

MS REST interface action

http://astoria.mslivelabs.com/
I didn't get far with this; it's been sitting in draft for weeks...

Cross-post: SL memorial to 9/11

[also on The Attic]

Museums and memorials both deal with memory (I know, the clue's in the name) and both need sustaining for this to work. Of course many museums act as memorials and sometimes it's not really clear which we're looking at.

The memorial to 9/11 mentioned in 3pointD's post sounds like a case in point (I haven't seen it - still no SL account, but I'm slowly getting keener on trying it out). "Artworks" in SL are, it would appear, common enough, and indeed so are museums and galleries; but something like this memorial seems to be on a level very appropriate to the question I want to ask regarding experience-level resources, and when and how we decide what will happen to them in the future. There are many other questions of a more museological bent: who can feel ownership of this? does that matter anyway? can we be confident of what we see there?


Maybe I'll sign up soon and decide for myself. If it's as powerful as it sounds I hope the memorial is durable, but the chances of that are hard to assess in these hosted virtual worlds. Empty as it may sound, my thoughts go out to those who lost loved ones that day six years ago, and indeed to those who were lost or scarred. The media is doing a good job of memorialising right now, and my mind is quite full of those terrible events; they never seem to settle down into becoming assimilated knowledge, bleached of much of their original emotion, in the way that other disasters so often seem to. The shock is still there.

Friday, September 07, 2007

DAMIA: IBM's "data mashup" editor

Here's IBM's answer to Pipes, Gears and the rest: DAMIA. It looks interesting in itself, and I was pleased to hear in an introductory video that they use the term "data mashup" to distinguish it from properly user-facing mashups, which might be built using the data produced by DAMIA. It takes the usual inputs - RSS, data as a spreadsheet - plus XML and, soon, database connections.
The other thing that is interesting about the phenomenon of the mashup editor as a whole is that it's an example of a class of application that has totally bypassed the "packaged software" phase. Although one could well imagine, in a previous age, some software company selling a mashup generator for installation on a developer machine and private server, it's only fitting that the sort of development tool that exists purely because of web services (lower case) should itself be born and flourish on the web.
There are of course great benefits arising from letting people build their mashups online, aside from not needing to buy the software (after all, Google et al could in theory charge). There is nothing to do to deploy, there's no need to have hosting for your software etc. The advantages are plain, but since you could see similar advantages for other software it's still interesting that we have entirely skipped the installed phase. Perhaps that's yet to come? There would doubtless be advantages to that approach, too.

Thursday, September 06, 2007

Ross Gardler on Web 2.0 and Sustainability at UK Web Focus

Brian Kelly's guest blogger today is Ross Gardler (of OSS Watch) writing on Web 2.0 and Sustainability

Monday, September 03, 2007

And what are the IP connotations of this?

Glad I don't have to worry about this one, anyway:

http://simpsonizeme.com/

And here's my lovely daughter, Isabella Simpson. Sort of.



She's always wanted a dog

IP IP 'ooray

Although the life drains from my face when I contemplate IP, I guess I'd better keep sort of on top of it. Here are a couple of things from today:

All useful stuff, though I just can't seem to keep my eyes open.

Friday, August 31, 2007

DesignMyRoom - any good for museums?

TechCrunch brought this to my attention: DesignMyRoom. Could be worth a shot for sharing exhibition ideas, mocking up historical scenes etc. Third party dependencies, we love you!?

Thursday, August 30, 2007

PLANETS in alignment?

Thanks to my co-stude at Leicester, Alex Whitfield, who drew this project to my attention (it involves the British Library, where Alex works). From the "About" page of the site:

Planets will deliver:

  • Preservation Planning services that empower organisations to define, evaluate, and execute preservation
  • Methodologies, tools and services for the Characterisation of digital objects
  • Innovative solutions for Preservation Actions tools which will transform and emulate obsolete digital assets
  • An Interoperability Framework to seamlessly integrate tools and services in a distributed service network
  • A Testbed to provide a consistent and coherent evidence-base for the objective evaluation of different protocols, tools, services and complete preservation plans
  • A comprehensive Dissemination and Takeup program to ensure vendor adoption and effective user training.

Odds and sods 2

A couple of really interesting links from today:
  • From hand-crafted to mass digitized by Gunter Waibel at Hanging Together. There's loads I identify with in these summarised remarks, and a fair bit to argue with. An evolving discussion on the balance between the practical, the ideal, and the flexible. Gunter also wrote recently about the environment in which LAMs operate, pointing to Lawrence Lessig's "modalities of constraint" - factors regulating behaviour. The latter fits nicely into the paper I'm writing, at least if I can work through the 32 odd pages of this reference.
  • Where Do We Put It? Fitting the Web Into Museums from Nina Simon (on Museum 2.0) pointed me to this thesis by Karen A. Verschooren. Fascinating thinking and material on internet art, much of whose logic can probably be applied beyond that precise field to other museum digital resources. I'll have to read another 200 pages or so to confirm this, but it looks promising! Nina's post is interesting in itself for its response to the thesis. I'd like to do them both justice here but it will have to wait.

Friday, August 17, 2007

O'Reilly: Virtual Worlds for Education

On the quandary posed to educational institutions (though could be CH) by the plethora of virtual world platforms:
Virtual Worlds for Insurrection and Revolution in Education
How to build for the future and generate a community?

Records management

[Hmm, looks like this has stayed in draft form for quite a while. Not finished but better get it up there anyway]

Yesterday [i.e. some time ago] I met up with Sarah Demb, another newish recruit to the London Hub, who is running a project looking at records management across the Hub museums (4 core partners and a smaller one, I believe), where she's interviewing a large number of staff. The idea is to develop a co-ordinated strategy, which seems most sensible. I've got a shed load of notes I made on the train this AM but I'm going to cut them down.

We talked about what I do generally, website management questions (backups, archives, security), how the CMS works, documentation, data management and sources. I was interested that Sarah saw her brief so widely, since I had rather narrowly conceived of "records management" as being business-centred activity, lots to do with finance and collections data, e-mails too, and a dash of archiving. In fact I think she's right to look more widely because aside from anything else we have a dearth of policies WRT "records management" pretty much everywhere I'm concerned with.

In our conversation, I noted that we use some 3rd parties to hold or represent some data, such as Google and Flickr, and this will probably increase (not even counting the forthcoming picture library arrangement). I think we agreed on the need for a digital collections policy too, which to my mind would be assembled with reference to diverse categories of material: UGC; born-digital material from the museum itself; derivatives; some things that might be suitable for full accessioning, others that might be required for a good archive. I mentioned knowledge management issues, and the complexity involved in recovering old versions of the CMS site (owing to the interwoven nature of content, template definitions, TP files, and executables).

With luck, the outcome of Sarah's work will be some form of archive post within the Hub. We certainly need one here, and I hope that we can use it as an opportunity to develop ideas about how our digital activities fit into the picture: which materials are like traditional "archive" material, which are like publications or ephemera, which should be valued and treated differently from other material, how all this ties to collections, collections metadata, oral history etc. Such a conceptual framework seems essential for deciding how to treat any sort of asset.

Wednesday, August 15, 2007

Simulacra's Omega Day

I heard from Andy Sawyer on Monday that Simulacra has gone into liquidation. I think this is public knowledge and it actually happened a couple of weeks ago, although their websites don't show this. I wonder if they are expecting to keep things ticking over whilst all the parties decide what to do next. Andy was a key figure there and was also my non-academic supervisor. This unfortunate news also disrupts that, obviously.
Simulacra was formerly MWR, which took over the original Simulacra last year but adopted the name as it had a higher profile and an established product, OpenHarmonise. They were working hard to find a business model that balanced their values with the harsh financial realities of being a small fish in the software world, although MWR had a pretty long history in fact, stretching back to the '80s (I think). And whilst I say "small", a look at some of the work they've done and the clients they've left happy doesn't reflect this (the Churchill speeches interactive was so cool!). Whether life is especially hard in the education and cultural heritage sectors, which is where they pitched their camp, I don't know, but it's regrettable that even an adventurous and forward-thinking company like that can expire (with a little help from the bank, I think). I guess it's also part of the deal of being adventurous, but the museum world and teachers will be poorer, I think, if their Magic Studio, for one thing, is gone for good.
There's plenty to mull over for me in this story, but it feels (a) a bit close and (b) a bit unappealing to conduct an autopsy now. Also there's still the chance that something might rise from the ashes and I'm crossing my fingers. In any case I wish Andy and all of the other Simulacra folks well, and thank them for all the help they gave me with my PhD work, most especially Andy. Good luck all.

[with apologies to Bill Fay for the title]

Tuesday, August 14, 2007

Moaning about stats, coz I hate them, basically

We have been having a very stimulating debate on the whole "what's the point of (automated) web stats?" question (by which I mean to exclude data gathered through surveys, focus groups etc.). I guess I should acknowledge from the off that my problem is that I'm lazy, so since doing anything useful with web stats involves some work and I'd rather be doing something else, I don't do enough with them. Consequently I subconsciously look for a reason not to use them that doesn't require acknowledging my lazy-arseness, and such reasons are abundant. However in truth I know that there is utility to be found in server logs or client-based methods or whatever you choose to employ, as long as you look deeply enough. The main issue I have is with using simplistic figures as KPIs that are meant somehow to make results comparable across projects and institutions. I still think that's stupid.

Of course, if I get what I wish for, there's going to be a whole load more work in the evaluation process than churning out some stats, which isn't ideal for a lazy sod, but it's not really the effort that bothers me, it's the pointlessness. Worse than that, it's the potential to distract you from what really matters, taking you in the direction of what doesn't, simply because that's what's measured. Happily we're in a sector where at least people don't die if you do this sort of idiot target-chasing thing, unlike health, but still, it's not good.

So below are e-mails relating to a forthcoming project to evaluate web stats methods in the London Hub: their use at present and recommendations for future practice. I wrote a response to the whole idea, which is being shepherded by the estimable Dylan Edgar (formerly of the Scottish Museums Council, and now ICT co-ordinator of the London Hub). Dylan wrote back with a comprehensive response, for which I'm most grateful.


[I wrote:]


If you stand in the sea in a pair of wellies waiting to see how quickly they fill up, in the hope that this will tell you how quickly sea levels are changing in response to global warming, you might learn something. However it will probably be more to do with what the tide is doing, how big the waves are, whether any large boats have passed by or whether you're generally unlucky and keep falling over. If your friend is doing the same, you can compare your results better if your wellies are the same height, but perhaps they're standing on a rock or by chance got hit by a large piece of chop.

Web stats tell you a similar story about how successful your web offering is in achieving your mission-relevant goals. If we have the same software as our peers, it's like having the same wellies - a small improvement but still measuring a proxy that is quite some way off representing what we're really interested in. I know we all recognise this, and I know too that we all have to, firstly, give our funders what they ask for in terms of KPIs, and secondly, be able to lean on some kind of indicator to see how well we're doing. But I think it would be much more interesting and much more useful if we were to explore means other than site stats as indicators of our impact and success relative to mission. Of course it would be good in a pedantic, mechanical way if we used a common platform to record and "analyse" these obligatory KPIs, but it's like polishing turnips (sic). I'd love to see us making an argument to MLA (and above that to the DCMS) to develop KPIs that actually mean something - and these may be different from institution to institution and project to project. They may be amenable. You'll know the debates going on about cultural/social value at the moment (Tessa Jowell, John Holden, Peter Cannon-Brookes, Jim Collins and others), so perhaps this is a cue to start prodding the funders about finding better ways of evaluating success - ones that might not resolve into easy figures but that keep us focussing on what matters.

Having said all that, this is clearly an area that is thought by the Hub and its partners to be important and I certainly don't know all the ways in which people use our stats and may find them useful. Right now, though, what we are collecting is (as far as I can tell) chiefly used for reporting. We in MST look at them sometimes to give us a clue about what parts of the site are doing well, what search terms seem to drive traffic to us, who's linking to us etc., but we look through sceptical lenses and interpret everything we see with a lot of knowledge that simply isn't embodied in those stats, and isn't available to funders. In that sense they can be a useful tool for making improvements, but not really for demonstrating success except in the most clumsy, questionable way. Similarly, if the cows are lying down you might want to think about taking a mac with you on a walk, but if you were making a weather report you would only say "it rained today" if you saw it raining.

I should just expand on why stats are becoming ever more deficient PIs. Our digital resources aren't only used by our website visitors. Our data are taken from feeds and reused on other websites (for which we cannot access the stats), our images are taken and put into personal sites and blogs, we are bookmarked in del.icio.us and events taken from our pages and added to personal Google Calendars. As the semanticised web inches forward, more and more of our stuff moves into places where we aren't measuring it (and can't). The OAI data handed over to the PNDS is part of a Hub (and MoL) job of work, but if they were to decide to open up their repositories for other parties to build applications with, how do we measure the impact of our data? Not through measuring browser-based visits to our websites. We have started pilot work with Simulacra that encourages teachers to build interactives from our content, but whilst we will get stats from Simulacra we can't assess how good these interactives are, how much of their content is "ours" (they can combine all sorts of material), how they're used in the classroom once they're downloaded, and so on. And if we decide to put some curatorial time into creating great content on Wikipedia, how do we know if it's "successful" in the sense of contributing to our strategic goals? Not from web stats. But if we can't measure success like that, should we accept that the whole idea is wrong? If I make a KML file and upload it to Google, or even host it on our server, how do I know how much use someone has had from it by downloading it once and viewing it perhaps many times in their local installation of Google Earth? If we can't know, or can't prove it with hits and visits data, should we not do it? I'd say yes, we should do it, but we need some new ways of guesstimating how successful it is. I want to be able to do things that can't be demonstrated to have had a great impact through the current KPIs (or perhaps through any means). I think this means moving beyond crude web stats, as well as thinking more about what those stats mean.

I don't think the project is wrong, but I do think it's perhaps putting a lot of effort into improving something that is never going to be that useful, whereas it could be about making the case for useful KPIs and developing new techniques for assessing them. I note that there are some references to other tools in the brief (surveys etc.), which is good. Maybe we can slant it more in this direction, but keeping in mind the reason: not to come up with new ways of measuring usage, but assessing impact relative to mission.

See also Seb Chan, talking about how to keep an eye on what people are saying about you (citing our very own Mia Ridge, too): http://www.powerhousemuseum.com/dmsblog/index.php/2007/08/09/social-media-measurement-brand-awareness-and-trust-in-the-cultural-sector/


[and Dylan replied:]


Thanks for getting back to me on this. Of course you're right - web stats are always going to be a rather blunt instrument, and I'm not suggesting that they provide a complete means of establishing the impact or otherwise of our web sites. Throw in Web 2, 3 and the rest of it and it gets even messier of course! I've always been an advocate of taking a more rounded approach to evaluating the impact of online delivery - in fact I commissioned some research back in 2004 which resulted in guidance supporting museums to start doing exactly the kind of thing you're suggesting. This was subsequently embedded into government funding streams for ICT (although not as a KPI): http://www.scottishmuseums.org.uk/areas_of_work/ICT/digitisation.asp

One of the most important recommendations from this work was that museums shouldn't rely on web stats in isolation as a measure of impact. However, they can make a useful contribution to establishing impact if used in conjunction with other methodologies.

Some other work has been done in this area. The EU ran a conference last year which brought experts together from around Europe and set out to look at ways in which cultural heritage bodies can evaluate the impact that their digitised resources are having on people. I was invited along to talk on the subject, and there was some useful discussion on the various different approaches that could be taken as well as looking at definitions of 'impact': http://www.nba.fi/nrg

...however, we didn't get anywhere near to coming up with a consistent measure that could be applied across the board. Incidentally, MLA and DCMS were involved in this conference, so don't be too hard on them - they are well aware of the issues here!

The Hub partners are already doing a lot of this of course, but it's still not formalised as a KPI. I agree that in an ideal world we'd be working towards a more meaningful indicator for establishing the impact of web delivery. However, in reality this is still a very very long way off.

A recent review of a selection of Renaissance KPIs by Morris Hargreaves McIntyre recommended that the one measure relating to web use (visits) remain as is for the foreseeable future, but MLA should "assess need for additional web performance indicators". There's a very good reason for taking such a cautious approach when looking at new indicators, and that is museums' capacity to deliver. We have already found that the current web KPI is collected inconsistently across the country, and even within the regional Hubs themselves. This is mainly because of loose definitions and the range of different systems that museums are using to collect the data.

If we tried to go straight from this situation to working up a more complex set of National indicators that really do set out to measure impact in the round, I don't think that museums would be able to provide meaningful data consistently throughout the sector (at least without a great deal of investment in systems, training, staff etc.). This would be counterproductive because the resulting KPI would probably fail, and confidence in the whole idea of measuring the value of web delivery would be lost, both within government and the wider museum community. This is why I think we need to take a more gradual, staged approach.

Despite their limitations, I do believe that web stats are important. Firstly, they are important politically. For years we have been arguing that the web is an essential delivery route for museums. This is only now being acknowledged by government, who are (quite rightly in my opinion) asking us to justify their investment and tell them just how important it is through this KPI, admittedly in a rather primitive way. It's naive to think that we can continue to sermonise about the importance of the web if we aren't then prepared to stand up and be counted on it when we are asked to. So it's an important opportunity for us to engage and show willing, rather than dismiss out of hand.

Reporting web stats in this way is also important because it shows that it can be done! As you rightly point out, establishing the real impact of online delivery is complex and is by no means an exact science. However by getting this relatively simple measure right, we can demonstrate how museums can deliver consistent data on web use that can be built on towards more meaningful indicators for the future. If we can't or won't do it for a simple KPI, how can we expect to do the more fancy stuff?

As you say though, the stats are limited in what they can tell us in isolation. However analysis is becoming more sophisticated, and if used correctly stats can provide us with a usable indication of how much the different aspects of our sites are being used, and even something of how they are being used and who is using them. So I don't think this is worthless information by any means, as long as we accept that there are limits to what it can tell us and there's always going to be a certain level of inaccuracy involved (that everyone is going to be subject to of course, which helps). But then what data of this kind is ever 100% accurate? I would say that web stats are on a par with gallery visitor numbers, which are often collated automatically and don't tell us anything about visitor experience or the museum's ultimate impact on the people coming through the door. However, as a sector we've been happy to allow this to become accepted currency and have supplied the data for years now in return for government funding. So why not web stats as well? If anything, they tell us more about our users than automated gallery counters.

In this project we want to look at how the London Hub collects data for the existing KPI. We also want to build on this by establishing a more detailed set of quantitative measures that will provide more detail on the extent to which the sites are being used, and that a small group of four museums can collect in a consistent and meaningful way. This will be a challenge in itself. More importantly in my view, we also want to look at how the Hub can be using this information, and ultimately why we're collecting it. As you say much of the KPI data goes into a black hole, which doesn't help because museums don't necessarily see the relevance of the data that they have collected. So by producing practical recommendations on how the Hub can be actively sharing and using this information in the future, we make it more relevant and it becomes a more helpful resource.

This project isn't going on in isolation though. It's being supplemented by more qualitative work this financial year, profiling the Hub's online audiences, understanding their requirements and expectations, and establishing the extent to which these are currently being met. Importantly, this work will also provide a shared methodology for the Hub to continue this process in the coming years. This combination of quantitative and qualitative work will, I believe, gradually build up a better overview of the impact that the Hub's collective online product is having on users.

My own view is that we need to take it slowly, and better methodologies for establishing impact will emerge as we become more comfortable with the idea and as we start implementing this kind of initiative. The whole concept is still in its infancy, and we need to be careful to bring museums with us rather than imposing something that will be unworkable in the short-term and jeopardise the work that's already happened.


[There wasn't much need for further debate in a sense, since we both had pretty clear positions on the utility of stats and Dylan's case is strong for pursuing them in the context of this project, but my final thoughts were:]


Thanks for the comprehensive response. I know I was very negative about web stats when they're not in fact the devil's spawn, and used wisely can tell us useful things. You've made a lot of very valid points (not least about visitor figures), and you clearly have a strategy for moving towards a situation where we have to rely on stats less whilst working with the political reality. Unfortunately the sophisticated analysis that can make these stats less blunt and more insightful isn't performed before they are given to the funders that use them as KPIs. Nor could it be, because such analyses are case-specific and context-dependent, so we keep on working to deliver targets that at worst might seriously divert us from doing what is really important, simply because we aren't allowed to demonstrate excellence in any other way. So whilst there's more that can be done with stats, we aren't allowed to do so where it counts, so they remain most useful internally (where we can try to use all the knowledge that can be squeezed out) rather than externally, where they cherry-pick the most desiccated fruit.
If MLA, DCMS et al really are interested in working towards better measures, and I'm sure you're right that they are, it would be nice to think that this project could in some small way start to explore that shift and demonstrate to them that we as museums are also keen to look at what really matters. Having said that, I accept what you say, that to move to a whole new national system of KPIs would be a huge step and not one to take in one go. But I wish we could, at least, start discussing with funders the possibility of experimenting with more interesting measures on a project-by-project basis.


[and then Dylan responded with more clarifications and some thoughts on the direction of the London Hub. Some of it is not really for public consumption but here's a bit]


Funding organisations like HLF, MLA etc have always expected museums to have clear plans for how they will evaluate the success of their projects. It's the job of the applicant to make the case for how they will do this - it's part of the application process and a good project needs to have good evaluation built in from the outset. Having said that though, funding organisations do have a role to play in providing guidance to applicants on the kind of thing they are expecting.

The way we approached this in Scotland was to come up with high-level guidelines for museums on how to start evaluating the impact of web resources, beyond using stats alone (see previous email). The guidance was quite generic, but designed to get people thinking about how they can establish the overall success or otherwise of their projects. This broke down into two areas - process and outcomes (i.e. impact).

One of the main reasons for doing this was to integrate it into our funding streams, which was starting to pay dividends by the time I moved down here. The way this worked in practice was if a museum came to us with an application for ICT funding they were expected to show us how they would integrate the recommendations in practice into their project, and they would be held to account if they didn't end up doing what they said they would...

The challenge is making the link clear to museums who don't necessarily see the value of evaluation, and just view it as another hoop to jump through.

Demonstrate the impact of web site = providing clear case to funders = more funding further down the line...


Many thanks to Dylan for the debate and for permission to publish that which isn't libellous! :-)

Thursday, August 09, 2007

Odds and sods

IBM posting on New elements in HTML 5. WhatWG and W3C are moving closer together on HTML5, also known as Web Applications 1.0. I'm wondering if there will be an XHTML version of it, though, since evidently it's not planned to be XML-compliant itself (no need to close elements, for one thing - see the little markup illustration at the end of this post). I'm not sure I understand the logic of that. Anyway, it's clearly making progress and I wonder what effect (if any) it will have on current microformat efforts, not to mention our CMS (MS CMS 2002 is no longer available and presumably won't be updated - it's been replaced by SharePoint Server 2007. Will the placeholder controls for the latter be usable in the former, and if so does this mean we'll be able to use HTML5-compliant versions if and when they release them for SPS2007?)
A good expression of some of the doubts and opportunities of 3D worlds WRT education, from Tony Hirst on OUseful Info. Applies equally to the cultural heritage sector.
Nick Carr responding to Google's Eric Schmidt and his somewhat reluctant attempt to define Web 3.0. As Carr sees it, the apparent disparity between the Semantic Web meaning of Web 3.0 and Schmidt's conception of it (a web where it is super-simple for people or machines to build apps dynamically from modules and data in the cloud - close to where we are now with Gears, Pipes, PopFly etc.) is not a real disparity.
JISC report on the challenges of shifting HEIs to XML-based office documents, and the importance of some guidance right now, as Microsoft brings native XML to Office 2007
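And that markup illustration, for the record - a minimal sketch of the element-closing difference (file names made up):

    <!-- XHTML: every element, including empty ones, must be explicitly closed -->
    <p>A Roman lamp<br />from the collection</p>
    <img src="lamp.jpg" alt="Roman lamp" />

    <!-- HTML5 serialisation: void elements take no closing slash, and some end tags (e.g. </p>) may be omitted -->
    <p>A Roman lamp<br>from the collection
    <img src="lamp.jpg" alt="Roman lamp">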

Thursday, August 02, 2007

More Web2.0 scariness

Much of Artur Bergman's post Your browser is a tcp/ip relay goes quite high above my bald patch, but I get the gist: as users, we all need to worry a bit more about what the funky web apps appearing daily might be able to do, such as allow the website owner to grab hold of your machine via the browser. Wild! I guess we were lulled into a false sense of security by the assurances we've had for years that browsers would let us know if javascript was going to try anything iffy. In fact we did use to be a lot more cautious (and evidently Flash is a concern too; perhaps ActiveX and Silverlight as well?) but since it's so hard to imagine the web now without AJAX we sort of laid the concerns aside.
Not a lot to do with my research, perhaps, but a little worrying.

Wednesday, July 18, 2007

Seb Chan on web stats

A thoughtful post on web analytics, re-emphasising the different needs of museums compared to e-commerce sites with regard to making sense of their stats, and the confusion and tail-chasing that result. We aren't especially sophisticated users of stats, and I'm the worst culprit of the lot: I don't get WebTrends and leave it to our web content manager to set up the profiles and deal with all that. However I do keep a fairly close eye on our Google Analytics data, although I'm not exactly a power user. We also don't have the tracking script on quite all our pages, which means that I'm missing data on some of our most popular but older parts of the sites.
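(For anyone wondering, "the tracking script" is just the standard urchin-style snippet pasted into each page template - the account ID below is a placeholder, of course:)

    <script src="http://www.google-analytics.com/urchin.js" type="text/javascript"></script>
    <script type="text/javascript">
      _uacct = "UA-XXXXXXX-X";   // our real account ID goes here
      urchinTracker();           // records the page view
    </script>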
With GA it's a lot easier than with WebTrends to get at lots of revealing information about the source of our traffic (geographically and referral-wise), or stuff like entry and exit pages - you can hop from one statistic to another quite intuitively, and that's encouraging me to delve a bit deeper. But I'm not particularly methodical; it's more that if I see something unusual (like spikes in visits from the BBC or games websites) I dig into it.
The other form of web-related stats we gather (but which, again, I pay probably too little attention to) is on the exit surveys from the museum or from temporary exhibitions. These typically ask whether users discovered the museum on the web or used our sites for visit information or anything else. We've tried web surveys to little effect, and they're too self-selecting anyway, but the exit surveys are pretty informative.
There's lots more to say but I'm clearing out a backlog of drafts and this will have to do for now...

The National Archives and MS

I'd wondered what to make of the news that MS are to provide a futureproofing "strategy" for TNA by giving them copies of Virtual PC 2007, apart from it being a pragmatic step, and fortunately Chris Rusbridge has done the thinking for me. I guess I agree also that the progress with Open XML can only be good, since it's pretty much irrelevant that they aren't using the same standard as Open Office - the point is that it's an open standard and XML is designed to be transformed.
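As a little illustration of that transformability point, here's a minimal sketch of an XSLT that turns WordprocessingML paragraphs into plain XHTML ones - nothing like a complete converter, obviously:

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main"
        exclude-result-prefixes="w">
      <!-- Wrap the document body in a bare XHTML page. -->
      <xsl:template match="/w:document/w:body">
        <html><body><xsl:apply-templates select="w:p"/></body></html>
      </xsl:template>
      <!-- Each Word paragraph becomes an XHTML paragraph containing its text. -->
      <xsl:template match="w:p">
        <p><xsl:value-of select="."/></p>
      </xsl:template>
    </xsl:stylesheet>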
I'd be interested to know how TNA plan to integrate the emulation approach with any need they may have to preserve non-Windows or non-MS Office formats.

Monday, July 16, 2007

eFoundations: The Repository Roadmap - are we heading in the right direction?

Too busy to have thoughts on this at the moment but here's the link:
eFoundations: The Repository Roadmap - are we heading in the right direction?

See also Nik Honeysett's sceptical posting on Trusted Digital Repositories. Right or wrong he encapsulates the sort of frustration we must all sometimes feel when we're looking at something a little outside our own area of obsession and wondering, "why does it have to be so complicated?" Actually that's probably misrepresenting what he's saying and where he's coming from; in any case he makes a good point: most of the time we're migrating stuff frequently, and most of the time this is perfectly adequate.

Monday, July 09, 2007

Perpetuity, huh? Remind me, how long is that?

Another thing I meant to write about an awfully long time ago: http://ahds.ac.uk/news/futureAHDS.htm
The death of the AHDS (but not, it appears, the ADS for now) is not just a blow for the service itself and its users, but for the very idea that data repositories funded by UK Research Councils can be trusted. Data preservation is for the long term, and a broken thread may as well never have existed. With the AHRC (declaration of interest: they fund my PhD) pulling the rug from beneath the AHDS, and JISC being obliged to follow, who is going to want to hand their data to another such data service? I don't think it needs me to say much; there are better-informed people out there who have plenty to say.

Thursday, July 05, 2007

Ketchup!

Well here are a couple of things from the last couple of weeks that might turn out to be interesting.
The new Hub e-learning officer, Mairiruth, has a number of projects in mind, which will probably not involve us directly (i.e. we won't develop them) but which we'll advise on and host - so ultimately they're our problem. She has specified a five-year lifespan, which is great - not the length, but the fact that durability is being considered and will hopefully be built in and accounted for in the resources allowed over that lifespan.
One idea Mairiruth is musing on, inspired by the V&A's "Design a Worksheet", is especially interesting for (a) its complexity, (b) its small budget! and (c) the various possible solutions to it, including perhaps the use of Gathery. Others might employ Magic Studio.
We had a meeting with a mobile media company, discussing options for the Capital City galleries currently in development, which was stimulating and cleared up a few questions whilst, of course, raising others. I guess we're now a little more wary of RFID, but we're too early in developing ideas of what we want to achieve to settle on a technology yet. With any luck, by 2009 some of the barriers to mobile phone use may have been reduced (especially, for a PAYG tightwad like me, cost), but phones may not be ideal anyway; or perhaps we can adopt a strategy flexible enough to work on various current and future platforms. This would seem ideal, but perhaps the costs of ultimate flexibility aren't justified - a real sustainability dilemma.
Later that day we had a meeting to bring us web types up to date with the Capital City plans and to talk about a (long overdue) plan to converge our online collections database infrastructure and data models more. That didn't put it very clearly, but in short whilst we're going in the right direction there's a fair bit to do, and with the upgrade to Mimsy XG later this year we have an opportunity to sort out a lot of stuff. Groovy.
Going home now so that will do.

Semacodes

http://semacode.org/ This has sat a while as a draft waiting for a comment. Hmm, what can I add? Could be handy in the museum? That'll do. See also Semafox

Wednesday, July 04, 2007

On the back foot with SMTP nightmares

Lots of catching up to do. Though I didn't post much before the holidays, with too much going on at work, there are quite a few things I need to note down for my own benefit if no-one else's.
Firstly, though, my current trials. The last week or so has been a battle with e-mail. Mail from the web server has failed to get through, and we have lost key capacity to diagnose and rectify the problem. The trouble is that it is a function that bridges several areas of competence, and when we lost Rich May, network/helpdesk manager and good friend, back in April we lost vital knowledge, not to mention simple capacity. One of the great things about our team's structure is that we work side-by-side, pitching in as appropriate, whether we're developers, managers or HDEs, and I know it gives us an advantage over larger organisations with more differentiation/compartmentalisation, because we can have rapid, informal communication and the flexibility that comes from all being in the same department. On the down side, the loss of one central member, not to mention the pitifully slow process of replacing him, leaves us badly holed. By the time a replacement is in position it will be three, perhaps four months since he left, plus one for his notice period. Of course, no-one will have exactly the same patchwork of skills nor the case-specific knowledge of the person they're replacing, but this is a study in how not to manage knowledge - far from allowing for a cross-over between outgoing and incoming employees, the organisation has ensured that we have a three-month gap between them. As much as possible was handed over to our excellent HDEs and to the rest of us on the team, but being short-staffed the Help Desk has been unable to exercise much of that knowledge as they fight fires elsewhere. Projects have slipped and broken stuff has gone unfixed - unavoidably, given the policies that left us underpowered for so long.
This brings me back to my e-mail issue. It turns out that e-mail from the web server does get through to external addresses (in fact some, especially spam, gets through to our own mailboxes), and it's looking like a spam filtering or, more probably, a DNS/SPF problem. I dabbled with these possibilities early on in diagnosing the problem, but there were, as always, a number of overlapping or coinciding problems and red herrings, and I spent a lot of time following these up. This is a very good example for me of how loss of capacity or knowledge can incapacitate our services or cost us dearly in time.

Friday, June 01, 2007

More mapping

Google's new Mapplets offering is a nice way to get going quickly with their Maps, I like its XML base. It prompted me once more to look, with more success this time, for something to translate our OSGB data to lat/long and I found this nice .Net project which has classes for back-and-forth translation as well as a SOAP interface. This should be good for either producing XML/KML data dumps or querying dynamically, with a little work. There are many potential uses for our geographical data and we are looking at ArcIMS's web service interface too, but this is hackable, could cut out an extra layer, and should let us offer data for people to mash up as they please.
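Not working code from that project, just a minimal sketch of the KML dump idea, assuming a hypothetical OsgbConverter standing in for the conversion classes mentioned above (the record type and field names are illustrative too):

```csharp
// Sketch of a KML dump built from OSGB easting/northing data.
// OsgbConverter is a hypothetical stand-in for the conversion classes in the
// .Net project mentioned above; ObjectRecord is an illustrative record type.
using System.Collections.Generic;
using System.Text;
using System.Xml;

public interface OsgbConverter
{
    void ToLatLong(double easting, double northing, out double lat, out double lon);
}

public class ObjectRecord
{
    public string Name;
    public double Easting;
    public double Northing;
}

public static class KmlDump
{
    public static string Build(IEnumerable<ObjectRecord> records, OsgbConverter converter)
    {
        StringBuilder sb = new StringBuilder();
        XmlWriterSettings settings = new XmlWriterSettings();
        settings.Indent = true;

        using (XmlWriter w = XmlWriter.Create(sb, settings))
        {
            w.WriteStartElement("kml", "http://earth.google.com/kml/2.1"); // KML namespace
            w.WriteStartElement("Document");
            foreach (ObjectRecord r in records)
            {
                double lat, lon;
                converter.ToLatLong(r.Easting, r.Northing, out lat, out lon);

                w.WriteStartElement("Placemark");
                w.WriteElementString("name", r.Name);
                w.WriteStartElement("Point");
                // KML coordinates are longitude,latitude
                w.WriteElementString("coordinates", lon.ToString("F6") + "," + lat.ToString("F6"));
                w.WriteEndElement(); // Point
                w.WriteEndElement(); // Placemark
            }
            w.WriteEndElement(); // Document
            w.WriteEndElement(); // kml
        }
        return sb.ToString();
    }
}
```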

Thursday, May 31, 2007

Is Surface more than superficial?

Now this looks like it has potential for the MoL's redevelopment project, now underway and scheduled to open in '09.

http://www.microsoft.com/surface/
Coverage all round the web on this, of course (though it's a pretty heavy news time, with announcements from all sorts at Where 2.0, Ignite and D5).
Basically Surface is a touch-screen table, but with multi-touch ability. You can not only drag stuff around but use two fingers or other tools to perform other gestures (!), stretch images, write and draw etc. It recognises things placed onto it and has proximity detection so you can, for example, put a Bluetooth phone or camera onto it and it will show the photos on there. Video here:
http://on10.net/Blogs/larry/first-look-microsoft-surfacing-computing/

Supposedly the table is around the $10,000 mark, and if we come up with appropriate ideas for its use in the Capital City gallery it's surely a reasonable figure. There's obviously a nice user-generated content possibility, and in any case it's nice to get away from keyboards and mice, but Surface seems to add quite a lot to the basic touch-screen functionality. I'm assuming Silverlight is the platform.

Can't really put an intelligent sustainability slant on it at the moment beyond the obvious. But in short, it's every bit as cool for museums as for hotels and phone shops!

Google's stab at the online/offline thing

To join the various announcements from Adobe, MS, Dojo et al, here is Google Gears, a browser plugin and API (well, I believe 3 APIs) to enable online applications to work offline too. They are apparently working with Adobe to integrate with Flex but we need to see how all this shakes out in terms of standardisation. The plugin and SDK are released under BSD.
As always, what's the relevance to my project? Well, for one thing there's the changing nature of applications - where they reside, who has power over them, what they can do, how we maintain our identity in them. There are questions, as I say, related to standards or the need for them, and what their absence can mean for the long-term viability of applications built without them. And of course it's pretty cool tech.

Thursday, May 24, 2007

SL Van Gogh Painting

From Beth's Blog: Step into a Van Gogh Painting in Second Life.
Looks quite fun, though I haven't signed up to SL so I dunno. I was thinking about how one might be able to use the power of that new application for extracting 3D data from 2D images to interpret paintings, and at least get a head start (and better textures) for something like this. No idea how that would happen, but it has to be said that though this van Gogh reconstruction is nice and potentially useful and instructive in itself, it's pretty far from the painting.

Monday, May 21, 2007

Crosspost: Watercolors of Second Life

3pointD writes of a book of watercolours of SL now for sale. If this were purchased by a museum, accessioned, and perhaps digital derivatives of it put online, maybe even stuck on someone's wall in SL (or There, or anywhere), the head starts to spin. Unless, that is, we give up ideas of real and virtual as a pain in the butt and just MOVE ON!
Just kidding. Still, it's another little mind-twister to chuck into the mix. Real, authentic, virtual, inauthentic, valuable, worthless, durable, ephemeral and so on.
Whether her paintings are any good is something else entirely, but who am I to talk? Luca did a better lighthouse than me last time we had a draw-off. He probably wasn't yet 4 then. And my pirates and dragons suck next to his, too.

Also on The Attic

Friday, May 18, 2007

GRDDL

Following on from the eRDF post, I also need to look into GRDDL. Some useful sources:

Wednesday, May 16, 2007

eRDF - another way to semantic HTML for museums?

After posting some stuff to the microformats-discuss mailing list I had a fascinating off-list reply from Keith Alexander, suggesting that I look into eRDF (embeddable RDF) as an alternative to a new microformat (and not only because new ones seem never to get anywhere these days, and perhaps rightly so). The links he pointed at have really whetted my appetite. There are a couple of extra complications, at least from the point of view of keeping authoring as simple as possible (the need for link tags, for one), but it sorts out some of my concerns immediately and opens the "format" up to existing eRDF parsers. Perhaps if we were to come up with some recommended schemas and element sets for museum objects we could reach some sort of stability, and with a little adaptation even the work I've already done on the Gathery might not be wasted; as Keith pointed out, the data would also be available to Piggy Bank and more. Hmm, definitely one to pursue, and I'm glad I exposed myself on the list, even if it does mean I drop the µƒ idea altogether.

More on eRDF: http://www.getsemantic.com/wiki/ERDF

Thursday, May 10, 2007

ExhibitFiles

http://www.exhibitfiles.org/ They're archiving the process of exhibit design. Given Jim Spadaccini's digital concerns, I wonder if he'll turn his attention next to the process of digital media design for museums? Probably not, but there are overlapping concerns and I look forward to wandering through the new site. See also http://www.ideum.com/blog/2007/04/23/the-exhibitfiles-is-live/

Small new app?

Probable reuse of the events RSS feed, or perhaps something more direct from the data access layer: a screen in the Museum of London foyer to show the day's forthcoming events. We have most of what's needed to do this at present but need to (a) tweak things to show just the (remainder of the) current day and (b) resolve questions of how to make it look nice, given that people aren't attaching images to events in the database at present, though they can.
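A rough sketch of the filtering step, on the assumption (mine, not a decision) that we reuse the existing events RSS feed and that each item's pubDate carries the event's start time; the feed URL is a placeholder until the source is settled:

```csharp
// Sketch: pick out today's remaining events from the events RSS feed.
// Assumes pubDate holds the event start time - that's my assumption, and the
// real feed or data access layer may expose the data differently.
using System;
using System.Collections.Generic;
using System.Xml;

public static class FoyerScreen
{
    public static List<string> TodaysRemainingEvents(string feedUrl)
    {
        List<string> titles = new List<string>();
        XmlDocument doc = new XmlDocument();
        doc.Load(feedUrl); // placeholder: the existing events feed URL

        DateTime now = DateTime.Now;
        foreach (XmlNode item in doc.SelectNodes("/rss/channel/item"))
        {
            XmlNode dateNode = item.SelectSingleNode("pubDate");
            XmlNode titleNode = item.SelectSingleNode("title");
            if (dateNode == null || titleNode == null) continue;

            DateTime start;
            if (!DateTime.TryParse(dateNode.InnerText, out start)) continue;

            // Keep only events still to come today
            if (start.Date == now.Date && start >= now)
                titles.Add(titleNode.InnerText);
        }
        return titles;
    }
}
```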

tRaining in my heart

[with apologies to the Pretty Things]
Lots of training over the last few days.
Less useful: "using display screen equipment"
More useful: generic research skills at Leicester. The odd useful tip on exploiting Word better; PhD planning; APG upgrade.
More useful: project management at Corporation of London (or City of London Corporation or whatever it's called now). This was a good full-day thing, not formal PRINCE2 training (though it was briefly discussed) but more about good analysis, preparation and, well, management. It brought out some useful themes for my next paper (on "the institution") as well as some concrete issues to follow up with the Capital City digital media sub-project, when that really gets off the ground (well actually, before then).

Forthcoming project: Slavery

A project in the works for Museum in Docklands will involve mapping slavery-related sites around London. We had a conversation yesterday about how we might accomplish the content creation for this. There are complications relating to other parties contributing content, and possibly some UGC too, but essentially we decided it was worth pursuing the idea of authoring in XML (probably TEILite, actually) since the work I've done with the format previously gives a good foundation for developing the links between sections, outputting geoRSS or KML, building indexes and glossaries etc, which would not really be an option with regular MCMS templates. Will monitor how this develops as it may turn out that the benefits don't justify the extra me-time that will be required, but there's potential there.
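As a sketch of the GeoRSS output side of this (the TEI conventions here - place, placeName and a geo element holding "lat long" - are my guesses, not a schema we've agreed):

```csharp
// Sketch: build a GeoRSS (Simple) feed from TEI-ish place markup.
// The element names (place, placeName, geo) and the "lat long" content of geo
// are assumed conventions; the project's actual TEI Lite profile may differ.
using System.Text;
using System.Xml;

public static class SlaveryMapFeed
{
    public static string ToGeoRss(XmlDocument tei)
    {
        const string georss = "http://www.georss.org/georss";
        StringBuilder sb = new StringBuilder();
        XmlWriterSettings settings = new XmlWriterSettings();
        settings.Indent = true;

        using (XmlWriter w = XmlWriter.Create(sb, settings))
        {
            w.WriteStartElement("rss");
            w.WriteAttributeString("version", "2.0");
            w.WriteAttributeString("xmlns", "georss", null, georss);
            w.WriteStartElement("channel");
            w.WriteElementString("title", "Slavery-related sites around London");

            foreach (XmlElement place in tei.GetElementsByTagName("place"))
            {
                XmlNodeList names = place.GetElementsByTagName("placeName");
                XmlNodeList geos = place.GetElementsByTagName("geo");
                if (names.Count == 0 || geos.Count == 0) continue;

                w.WriteStartElement("item");
                w.WriteElementString("title", names[0].InnerText);
                // GeoRSS Simple point: "lat long", the same order assumed in the geo element
                w.WriteElementString("point", georss, geos[0].InnerText.Trim());
                w.WriteEndElement(); // item
            }

            w.WriteEndElement(); // channel
            w.WriteEndElement(); // rss
        }
        return sb.ToString();
    }
}
```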

Extensible, reusable, Impressionable

Ben from Surface Impression came in yesterday to install the new presentation authoring tool they have built us. Things went pretty smoothly (great bloke, too); we sorted out a couple of wee PHP issues relating to the old version we still run and then it all seemed to work. It's a cool app: basically we have a couple of framework SWFs that draw together a load of smaller ones for different interactions - quizzes, fill-the-gaps, matching words and pictures, video etc. - into a single "presentation" for use on the Learning Online site (or on whiteboards). Authoring is done in the same context via a collection of other SWFs, and underneath it all XML is authored. We will upload the XML and collections of assets to the web server once it's all ready, pass the path to the XML into the framework SWF, and hey presto. It's similar to but much more complex than other Flash stuff we've commissioned in the last couple of years, and the authoring environment is cute. I'm really keen to get my teeth into the XML, actually, since I'd like to see how else we can use what is put out - a simple HTML version of a presentation should be a cinch, anyway, and it will also be easy to cut-and-paste or copy-and-edit existing presentations to create new ones.
So, we're not live with it yet but it's looking good.
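I haven't examined the real XML yet, so the element names in this sketch of the "simple HTML version" idea (presentation, page, prompt, answer) are pure guesses at a schema I haven't seen:

```csharp
// Sketch: flatten the presentation XML into a very plain HTML page.
// The element names (presentation, page, prompt, answer) are guesses at a
// schema I haven't yet seen; the authoring tool's actual XML will differ.
using System.Text;
using System.Web; // HttpUtility.HtmlEncode
using System.Xml;

public static class PresentationHtml
{
    public static string Render(XmlDocument presentation)
    {
        StringBuilder html = new StringBuilder();
        html.Append("<html><body>");

        XmlNode title = presentation.SelectSingleNode("/presentation/title");
        if (title != null)
            html.AppendFormat("<h1>{0}</h1>", HttpUtility.HtmlEncode(title.InnerText));

        foreach (XmlNode page in presentation.SelectNodes("/presentation/page"))
        {
            XmlNode prompt = page.SelectSingleNode("prompt");
            if (prompt != null)
                html.AppendFormat("<h2>{0}</h2>", HttpUtility.HtmlEncode(prompt.InnerText));

            XmlNodeList answers = page.SelectNodes("answer");
            if (answers.Count > 0)
            {
                html.Append("<ul>");
                foreach (XmlNode answer in answers)
                    html.AppendFormat("<li>{0}</li>", HttpUtility.HtmlEncode(answer.InnerText));
                html.Append("</ul>");
            }
        }

        html.Append("</body></html>");
        return html.ToString();
    }
}
```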

Value of heritage sites

This series of workshops looks interesting, at least to keep an eye on (there's only one left anyway) as a locus of discussions on how to identify value in real-world heritage.
http://www.le.ac.uk/ms/contactus/valhistenvir.html

Fair referencing

A little bit of a dilemma. I did a lot of work over the last few weeks on a new section for my paper on definitions. I've greatly expanded and restructured a section on value, especially the parts on reality and authenticity. Now I have come across a chapter by Evans, Mull and Poling in "Perspectives on object-centered learning in museums", and another by Frost in the same volume, which make a number of the same points I'd thought I was being somewhat original in making. In fact they cite some of the same sources I'd already used. I suppose it just brings home how much we build on the work of others: we have used the same sources and drawn similar conclusions. The dilemma is whether I should reference Evans, Mull and Poling, and Frost, when in fact they were not the source of what I wrote (though they got there first). It's just pride, though, that makes me hesitate - after all, what they really are is useful citations in support of what I've written, and it's not that important to lay claim to the original thought - especially when it turns out not to be so original!
So get on with it, cite them!

Thursday, May 03, 2007

Cross-posted comment: on real museums and digital value

Cross-posted comment to Holly Witchey's post on musematic (since it's what I had on my mind today anyway):

Hi Holly, you won’t remember me but we shared breakfast in Pasadena…. I do understand how you can sometimes get to feel like this. I don’t have much to add to your thoughts on communication breakdown – you’re right, communication takes both effective transmission and reception. Your remarks on the value of our whole digital enterprise, though, chimed very serendipitously with my own musings on the way to work this morning. I was thinking: some museums hold a preponderance of “real” objects, others contain more in the way of dioramas, reconstructions, replicas, interactives and experiences; indeed some have nothing “real” at all. Does this lead them to have different attitudes to their digital holdings or place different value upon them? Despite the AAM’s Code of Ethics, not all museums (in the broad definition that the AAM also holds) have collections per se. In fact, the section you quote includes not just collections but “exhibition materials”, and maybe that’s where we can salve our consciences a little: exhibition materials could well include digital resources. In the end it’s true, most of the time in most cases it’s the collections that really count and they must take priority, and museums always have to balance, to choose, and they do. As well as going on building and maintaining collections they have to use them in all sorts of ways to get value now as well as in the future. That’s where we come in – only rarely are we creating works of art; mostly we’re making stuff that brings art, history, science, ideas to people. It would be foolish to spend too much on that, like it would be foolish to spend all the money on gallery refurbs and none on building and caring for collections, but still it’s valuable work. Occasionally we might even make something that could become digital heritage (i.e. a digital thing worth keeping) as opposed to digitized heritage, and in this time of flux and exciting experiments we are surely seeing some genuinely valuable bodies of knowledge and experience that perhaps we’ll want to “preserve for posterity”. I’m really interested to find out if and when our digital stuff is anywhere near as precious as our real stuff (it’s my research area, as it happens), but anyway don’t be too down! If people needed museums in the past, not just collections, then they need us now, in the same way.

Wednesday, May 02, 2007

Silverlight dawns

Silverlight, the plugin and surrounding technology for rich internet applications announced yesterday by Microsoft, is getting a lot of people very excited. There's a strong OS element to it, although the plugin itself isn't OS, and it supports a bunch of new languages in the Common Language Runtime (that's a good thing) including such OS faves as Python and Ruby (not PHP, I think). Nice to be able to use Visual Studio to develop these, and I presume that we'll start to see them supported in other .Net scenarios too, along with JavaScript and the regular .Net languages. By all accounts it is massively fast compared to unmanaged client-side code (JavaScript in AJAX, for example) and has a really nice interface - this is the Windows Presentation Foundation (Avalon as was) in action. Now waiting for the applications to start appearing. I will be keeping an eye on this and Flex (the Adobe competitor, basically) to monitor the continuing merging of web and desktop.

Monday, April 30, 2007

Convergence of metaverse modelling tools

Obviously SL isn't the only player in town but it is nevertheless significant that it is moving towards making it possible to import and export from other environments: modelling tools like 3D Max or other virtual worlds.
3pointD.com: Second Life Build Tools Support More Formats
Not that I need to spell it out, but if this is the start of a pattern then we will see greater incentives and lower risks for museums (and everyone else) to invest in Second Life, or more fundamentally in virtual objects that might be used in such environments, and corresponding improvements in sustainability.
And why do I ALWAYS insert an unnecessary "e" in "environment"?

Somehow the API will make you pay for the honey

Nat Torkington on Six Basic Truths of Free APIs. Ties in well with my Paper 2, especially around the ecology of applications that API providers need in order to create value for themselves - if this fails, they will have no incentive to support or develop the API. We have seen Google, for example, shifting the ground from under the feet of developers, and frankly it's a case of buyer beware (well, "freeloader beware" sounds a bit too judgemental). Brian Kelly's work on risk assessment is right in this area.

Friday, April 27, 2007

OSMOSA

3pointD points us to the new Open Source Museum of Open Source Art in Second Life. Lots of fun! Interesting to see that they are talking about documenting and archiving this thing, and of course there are relevant issues relating to authority, reality, value, fun, impact, education, mission and, indeed, whether it's all that museum-like at all. They do frame the background article in museological terms; it will be as interesting to see how the debate progresses as to see the museum itself.

Wednesday, April 25, 2007

Jeremy Keith on Identity and authority

Funny where stuff turns up. Jeremy Keith often has interesting things to say, but I would not necessarily have expected him to talk about something that so directly hits a museological button - until, that is, I thought about it. After all, everything I've been writing for the last year has been observing and depending upon this sort of connection between disparate areas. So, Adactio talks about the problem of authority in the fragmented, distributed environment, and I find I can use it almost without translation in the discussion I'm working on about museum authority and the impact of the web.
Adactio: Identity and authority

The apps that just went online can now go...offline again

Brady Forrest wrote Dojo Offline Toolkit Released. There will presumably be a flood of taking-web-apps-offline technology in due course, once more changing the nature of the online/offline dynamic.

Monday, April 23, 2007

Virtually altered state

Nicholas Carr's posting Go ask Alice's avatar
Those wild and crazy guys!
This is one for the reality/VR/authenticity file.

Wednesday, April 18, 2007

Salad Bowl alpha

A couple of weeks ago I put the Salad Bowl thing on to the MoL server here: http://www.museumoflondon.org.uk/museumObjects/index.aspx. Since access to this blog is, as from today, unrestricted, I guess I need to explain this a little bit. I have imagined the stub of a museum object microformat (very much a stub at the moment: hardly any of the data structure is sorted, and I may drop it in favour of work-of-art anyway). Core parts of m-objects include identification with an institution, a URI and a unique identifier for the item (plus a name). To do something with data in this form (which I have enabled on a few of our catalogue-driven sites like London's Burning and Exploring 20th Century London) there is a bookmarklet which highlights objects and lets you submit them to an application where your collection of objects is held - the Salad Bowl.
Right now it needs a log-in to see even the intro, which I should change. Anyway, the thing is full of bugs but essentially the bookmarklets work (on Mozilla and IE) and the new version, which submits data via forms rather than querystring, seems OK (as long as you are logged in BEFORE you submit an object, something else to fix). Once your objects are in there you can tag and describe them but no more at present. Obviously doing something cool with the tags would be best but this is a proof of concept and I'm not going to try to replicate work that could be borrowed from e.g. Steve.museum.
If you are interested in trying out the very alpha application, leave me a comment.
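For anyone wondering what the form-based submission amounts to on the receiving end, it's roughly the sketch below; the field names and the SaveToBowl call are placeholders rather than the actual Salad Bowl code.

```csharp
// Sketch of the Salad Bowl's receiving end for the bookmarklet's form POST.
// The field names (institution, objectUri, objectId, objectName) and SaveToBowl
// are placeholders; the real application differs in its details.
using System.Web;

public class SubmitObjectHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        if (!context.Request.IsAuthenticated)
        {
            // Current behaviour: you must be logged in before you submit an object
            context.Response.StatusCode = 401;
            return;
        }

        // The new bookmarklet POSTs the m-object data rather than using the querystring
        string institution = context.Request.Form["institution"];
        string objectUri = context.Request.Form["objectUri"];
        string objectId = context.Request.Form["objectId"];
        string objectName = context.Request.Form["objectName"];

        SaveToBowl(context.User.Identity.Name, institution, objectUri, objectId, objectName);

        context.Response.ContentType = "text/plain";
        context.Response.Write("Object added to your Salad Bowl");
    }

    // Placeholder for the real persistence code
    private static void SaveToBowl(string user, string institution, string uri, string id, string name)
    {
        // e.g. insert a row into the user's collection table
    }
}
```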

Brian Kelly's thoughts on SWTT and the MW2007 presentation

UK Web Focus: UK Museums and the Semantic Web Thinktank
Brian's input at (especially) the closing meeting was useful; it was helpful in keeping the balance between the various possibilities open to us, and in maintaining the focus on end users. It's nice to see, though, that he has also (like the rest of us) realised that there are more areas where SW may have an impact than we first thought, specifically in terms of intra- and inter-organisational communication of collections data.
My own preoccupation is with ensuring that there is evident reward for anyone putting effort into this, so that funders and directors see benefits, and so do curators, documentation professionals and the rest. It seems plausible that doing relatively simple stuff facing end users is a way to achieve this, so I'm to some extent in agreement with Brian. Equally, attacking SW from the other side, with a more hard-core approach to collections data etc., will have its own benefits, and the two may meet in the middle to the advantage of all.

Thursday, March 29, 2007

"Repatriating" some of our collection

An organisation would like to use our data and images relating to objects we have on long-term loan from them. This is an opportunity to develop an (overdue) API or some other means for distributing content dynamically.

Why we should help:

- it's their collection
- we should anyway open our cleaned collection data to anyone for reuse (expansion of the museum's domain and role)
- a model of good behaviour for the hub
- a model of techniques for the hub – building distributed use with centralised authorship
- an opportunity to try new techniques – web services (broadly speaking), AJAX, Semantic Web or just RSS, perhaps m-objects

How we could do it:

- Assume minimal technical know-how and no facility for server-side scripting
- Assume we want to keep the source data under our control so that we can amend it as we please (although perhaps supplemented with input from the organisation in question?). This will mean less work for them, given the first assumption
- Offer an AJAX interface onto a service or onto a static XML file. This will take a string of IDs. Alternatively, it could look to an RSS feed from my m-objects application (see the sketch after this list)
- We would supply:
  - a JS snippet and files to insert into each HTML file (including an onload() for the body tag)
  - a spreadsheet of objects and IDs
  - the XSL and possibly CSS
- IDs could be passed via a hidden form element, a JS include or the querystring
- If accessing (loading) static XML, the XSL will need to deal with a string of IDs. If accessing a dynamic service, it need only transform whatever is returned
- Other functions of the service could include filtering by keyword, date or collector. Keyword needs to look at related people and themes
- What format of data? OAI with unqualified DC? CDWA Lite? Talk to Mia
- Alternatively, we could develop the API to my app so that they can pick what they want
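To make the "AJAX interface onto a service" option concrete, here's a rough sketch of a handler that takes a comma-separated string of IDs and returns unqualified Dublin Core for each object. The handler name, the in-memory lookup and the fields chosen are all illustrative; the real thing would sit on our data access layer, and the format question above is still open.

```csharp
// Sketch: a service taking ?ids=ID1,ID2,... and returning unqualified Dublin
// Core XML for each object. The dictionary stands in for our data access layer
// and the handler/field names are illustrative only.
using System.Collections.Generic;
using System.Web;
using System.Xml;

public class ObjectFeedHandler : IHttpHandler
{
    private const string DcNs = "http://purl.org/dc/elements/1.1/";

    // Stand-in for the real catalogue lookup: id -> { title, description }
    private static readonly Dictionary<string, string[]> Catalogue = new Dictionary<string, string[]>
    {
        { "A123", new string[] { "Sample object", "An illustrative record" } }
    };

    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        string ids = context.Request.QueryString["ids"] ?? "";
        context.Response.ContentType = "text/xml";

        using (XmlWriter w = XmlWriter.Create(context.Response.Output))
        {
            w.WriteStartElement("objects");
            foreach (string rawId in ids.Split(','))
            {
                string id = rawId.Trim();
                string[] record;
                if (!Catalogue.TryGetValue(id, out record)) continue;

                w.WriteStartElement("object");
                w.WriteElementString("identifier", DcNs, id);
                w.WriteElementString("title", DcNs, record[0]);
                w.WriteElementString("description", DcNs, record[1]);
                w.WriteEndElement(); // object
            }
            w.WriteEndElement(); // objects
        }
    }
}
```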