As many of you know, Tagger is an addictive game of 'say what you see'. Art UK's army of taggers has so far contributed around 3.5 million tags describing the subjects of paintings in the national collection.
Professor Andrew Zisserman from Oxford University's Visual Geometry Group says, with regard to visual abilities, 'If a human can do it, a computer should be able to do it.' He and his team worked with Art UK and the BBC to develop innovative image-recognition software to identify subjects in paintings. Such software can produce good results from photographs, but given the numerous stylistic representations of horses, clouds, vases etc. in paintings, using it for art has proved more difficult. Until now…
Using the 3.5 million or so tags provided by taggers, the research team at Oxford 'educated' image-recognition software to recognise the top tagged terms.
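The idea of 'educating' software from crowd-sourced tags can be sketched as a simple supervised-learning setup: each human tag becomes a training label for a classifier. The sketch below is purely illustrative and assumes nothing about the team's actual pipeline — real systems learn from image features produced by a neural network, so synthetic two-dimensional 'features' stand in for them here.

```python
# Minimal, hypothetical sketch: train one binary classifier per tag
# ('horse' in this toy example) from human-provided labels.
# Synthetic 2-D points stand in for real image features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Paintings taggers labelled 'horse' cluster in one region of feature space.
horse_feats = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(50, 2))
other_feats = rng.normal(loc=[-2.0, -2.0], scale=0.3, size=(50, 2))
X = np.vstack([horse_feats, other_feats])
y = np.array([1] * 50 + [0] * 50)  # 1 = a human tagger applied 'horse'

clf = LogisticRegression().fit(X, y)

# The trained model can now propose the 'horse' tag for unseen paintings:
# a point near the horse cluster is classified 1, one far from it is 0.
print(clf.predict([[2.1, 1.9], [-2.2, -1.8]]))
```

In practice one such classifier (or one output of a shared multi-label model) would be trained for each of the top tagged terms.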
The results were impressive. Impressive but, unsurprisingly, not perfect. We asked some of our top taggers to help us perfect them by looking through the machine-generated tags and telling us whether the computer was right. Through an interface designed by Oxford student Elliot Crowley, taggers were shown pages of 50 images at a time.
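The verification step amounts to filtering the machine's suggestions by human judgement. A minimal sketch of that logic, with all names and data invented for illustration:

```python
# Hypothetical sketch: keep only the machine-generated tags
# that a human tagger confirmed.
def confirm_tags(machine_tags, human_verdicts):
    """machine_tags: {painting_id: [tag, ...]};
    human_verdicts: {(painting_id, tag): True/False}."""
    return {
        pid: [t for t in tags if human_verdicts.get((pid, t), False)]
        for pid, tags in machine_tags.items()
    }

machine = {"painting_1": ["horse", "cloud"], "painting_2": ["vase"]}
verdicts = {
    ("painting_1", "horse"): True,   # tagger agrees with the computer
    ("painting_1", "cloud"): False,  # tagger rejects this tag
    ("painting_2", "vase"): True,
}
print(confirm_tags(machine, verdicts))
# {'painting_1': ['horse'], 'painting_2': ['vase']}
```

Only the confirmed tags would then be attached to paintings and fed back to improve the models.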
We aren't seeking to replace Tagger, but rather exploring a way to tag some of the subjects in paintings at greater speed. Tagging more esoteric subjects will continue to be the job of the human eye, as will tagging concepts, people, places and events.
Over the course of two months, 250 taggers made almost a million selections and tagged 99,600 individual paintings with over 200 varied topics. A big thank you to all those who participated in the project. The new tags were used to improve the visual models, making future image recognition more accurate. We may run the Art UK Image Recognition Check again. There is scope for further iterations of the project: the computer might identify the seagulls from the paintings containing birds, and the Spitfires from the selection of aeroplanes.
So, what's next for the Visual Geometry Group? Professor Andrew Zisserman: 'Longer term we are investigating how to recognise anything in the paintings starting from a text search and learning from a Google Image search.' One day, visitors to the site (art historians and the general public alike) might be able to type questions about the national collection into a search box and receive an instant response. For example, type in 'When did earrings first appear in portraits?' and the computer would build an object model of an earring, learning from Google Images, and then instantly search the Art UK database for instances of that object within paintings it recognised as portraits. Adding the execution dates the database currently stores would yield an image from the national collection, and a full answer to the question.
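The long-term idea described above — build an object model from web-image exemplars, then search the collection for matches — can be outlined in a few lines. Everything below is a hypothetical sketch: the function names are invented, and synthetic feature vectors stand in for both the Google Image exemplars and the Art UK paintings.

```python
# Hypothetical sketch: learn a prototype from exemplar features,
# then retrieve paintings in the collection that match it.
import numpy as np

def build_object_model(exemplar_feats):
    """Average the exemplar features into a simple prototype vector."""
    return exemplar_feats.mean(axis=0)

def search_collection(prototype, collection_feats, threshold=0.9):
    """Return indices of paintings whose features are similar
    (by cosine similarity) to the prototype."""
    norms = np.linalg.norm(collection_feats, axis=1) * np.linalg.norm(prototype)
    sims = collection_feats @ prototype / norms
    return [i for i, s in enumerate(sims) if s > threshold]

rng = np.random.default_rng(1)
# Stand-in for features of web-image search results for 'earring'.
earring_exemplars = rng.normal([1.0, 0.0], 0.05, size=(20, 2))
# Stand-in for features of three paintings in the collection.
collection = np.array([[1.0, 0.02], [0.0, 1.0], [0.98, -0.01]])

prototype = build_object_model(earring_exemplars)
matches = search_collection(prototype, collection)
print(matches)  # paintings 0 and 2 match the prototype
```

Cross-referencing such matches against the execution dates already stored in the database would then let the system answer a question like the earrings example above.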
Rachel Collings, Art UK Senior Editor