Research Robots: More Than Meets The Eye
By Ari Popper
There has been a lot of buzz in the press about robots taking our jobs. Wired’s January edition has a cover story “The Robots Take Over!” and 60 Minutes covered it as well a few weeks ago: http://www.cbsnews.com/video/watch/?id=50138922n
A little closer to home for the MR industry, BrainJuicer has a clever tool called Digividuals that uses research algorithms to automatically mine the internet for ethnographic insights.
This post adds to that trend with a research robot that goes by the cute name Eye2d2. (In fairness, Eye2d2 is not a robot per se but a mathematical model of visual attention.)
The way it works is actually very simple:
- You take a picture of whatever you want to test.
- You upload the image.
- A short while later, it generates a saliency heat map of that image.
What is cool is that it is fast, cost-effective, and scalable, and there is no need for any consumers or hardware.
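To give a concrete sense of what a saliency model computes under the hood, here is a minimal sketch in Python using the classic spectral-residual approach (Hou & Zhang, 2007). To be clear, Eye2d2's actual model is not public and is surely more sophisticated than this; the toy image and every name below are my own illustration, not Eye2d2's method:

```python
import numpy as np

def saliency_map(gray):
    """Spectral-residual saliency: frequency components that deviate
    from the image's smoothed log-spectrum tend to correspond to
    regions that 'pop out' visually."""
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # Smooth the log-amplitude spectrum with a 3x3 box filter.
    h, w = log_amp.shape
    pad = np.pad(log_amp, 1, mode="edge")
    avg = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    residual = log_amp - avg
    # Recombine the residual spectrum with the original phase.
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()  # normalise to [0, 1] for heat-mapping

# Toy "booth photo": flat background with one bright square (the "logo").
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
heat = saliency_map(img)
```

The heat map is just a grid of 0-to-1 scores per pixel; rendering it as a colour overlay on the photo gives the familiar red-hot-spots image.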
The model claims to predict saliency, or visual magnetism (what will capture the eye in the first few split seconds of viewing), with over 80% accuracy when compared to hardware-based eye tracking. If you are interested in the validation work, here is a link: http://eye2d2.com/eye2d2-science/
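A validation figure like that boils down to scoring the model's heat map against where real eyes actually landed. One common metric in the saliency literature is AUC: the probability that a pixel viewers fixated gets a higher predicted score than a pixel they ignored. Eye2d2's actual validation pipeline is not public, so here is only a generic sketch with invented toy data:

```python
import numpy as np

def fixation_auc(sal, fix_mask):
    """AUC: probability that a fixated pixel outscores a non-fixated one
    under the predicted saliency map (0.5 = chance, 1.0 = perfect)."""
    pos = sal[fix_mask]    # predicted saliency at fixated pixels
    neg = sal[~fix_mask]   # predicted saliency everywhere else
    # Compare every fixated/non-fixated pair (fine for small maps).
    wins = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return wins + 0.5 * ties

# Invented toy data: the model's hot spots coincide with both fixations.
sal = np.zeros((8, 8))
sal[3, 3] = 1.0
sal[2, 4] = 0.8
fix = np.zeros((8, 8), dtype=bool)
fix[3, 3] = fix[2, 4] = True
print(fixation_auc(sal, fix))  # prints 1.0: every fixation outscores every miss
```

A score around 0.8 against real eye-tracking data, as claimed, would mean the model ranks a truly fixated pixel above a non-fixated one roughly four times out of five.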
In full disclosure, its creator, the neuroscientist Thomas Ramsoy of Copenhagen Business School, has asked me to help him get it ready for prime time. So I took it along to CES for a test run amid the madness of the ultimate ‘attention seeking’ environment.
Here are three images I took of different expo booths at CES:
Image 1 – Zagg
Image 2 – 3M
Image 3 – EckoUNLTD
Each of these images has a different visual magnetism profile as predicted by Eye2d2.
Image 1, Zagg – seems to perform well because the brand name is highly salient; one would expect people to be more likely to notice and remember the Zagg expo space, especially if they have an affinity for the brand.
Image 2, 3M – does well on logo saliency too, but I can’t help thinking about all the seemingly unnecessary images on the back curved wall. The results suggest that no one will notice those images, and I wonder whether 3M could have used that space more effectively for something else.
Image 3, EckoUNLTD – had very poor results according to Eye2d2. The brand name and logo are not salient, and if you look closely there is a very large and, presumably, very expensive-to-ship gold-plated rhino. According to Eye2d2, it is completely lost and probably should have stayed at home.
I don’t think we will be losing our jobs to robots like Eye2d2 anytime soon, or, even worse, find ourselves pleading with Eye2d2 to “open the pod bay doors, HAL!” However, given the tool’s upside potential, this research-robot innovation could change the way marketers work, quickly correcting mistaken assumptions about what they think is effective.
If you want to help Thomas out, please vote for his submission here: http://www.iicompetition.org/idea/view/120.