
Francesco D’Orazio on How to Extract Insight from Images



Seth’s Sentiment Analysis Symposium conference, taking place July 15-16 in New York, is a must-attend event for insights professionals who want to stay on the cutting edge of this exciting space, which has such a profound impact on market research. I encourage all GBB readers to go if you can; you can get a 10% registration discount with the code GREENBOOK.


By Seth Grimes

“Photos are the atomic unit of social platforms,” asserted Om Malik, writing last December on the “visual Web.” “Photos and visuals are the common language of the Internet.”

There’s no disputing visuals’ immediacy and emotional impact. That’s why, when we look at social analytics broadly, and in close-focus at sentiment analysis — at technologies that discern and decipher opinion, emotion, and intent in data big and small — we have to look at the use and sense of photos and visuals.

Francesco D’Orazio, chief innovation officer at FACE and VP of product at Pulsar

Francesco D’Orazio, chief innovation officer at UK agency FACE, vice president of product at FACE spin-off Pulsar, and co-founder of the Visual Social Media Lab, has been doing just that. Let’s see whether we can get a sense of image understanding — of techniques that uncover visuals’ content, meaning, and emotion — in just a few minutes. Francesco D’Orazio — Fran — is up to the challenge. He’ll be presenting on Analysing Images in Social Media in just a few days (from this writing) at the Sentiment Analysis Symposium, a conference I organize, taking place July 15-16 in New York. And Fran has gamely taken a shot at a set of interview questions I posed to him. Here, then, is Francesco D’Orazio’s explanation of how —

How to Extract Insight from Images

Seth Grimes> You’ve written, “Images are way more complex cultural artifacts than words. Their semiotic complexity makes them way trickier to study than words and without proper qualitative understanding they can prove very misleading.” How does one gain proper qualitative understanding of an image?

Francesco D’Orazio> Images are fundamental to understanding social media. Discussion is interesting, but it’s the window into someone’s life that keeps us coming back for more.

There are a number of frameworks you can use to analyse images qualitatively, sometimes in combination, from iconography to visual culture, visual ethnography, semiotics, and content analysis. At FACE, qualitative image analysis usually happens within a visual ethnography or content analysis framework, depending on whether we’re analysing the behaviours in a specific research community or a phenomenon taking place in social media.

Qualitative methods help you understand the context of an image better than any algorithm does. By context I mean what’s around the image: who the author is, the mode and site of production, who the audience of the image is, what the main narrative is and what surrounds it, and what the image itself tells me about the author. Also, and fundamentally: who is sharing this image, when and after what, how the image is circulating, what networks are being created around it, and how the meaning of the image mutates as it spreads to new audiences.

Seth> You refer to semiotics. What good is semiotics to an insights professional?

Francesco> Professor Gillian Rose frames the issue nicely by studying an image in 4 contexts: the site of production, the site of the image, the site of audiencing and the site of circulation.

Semiotics is essential to break down the image you’re analysing into codes and systems of codes that carry meaning. And if you think about it, semiotics is the closest thing we have in qualitative methods to the way machine learning works: extracting features from an image and then studying the occurrence and co-occurrence of those features in order to formulate a prediction, or a guess to put it bluntly.
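Fran’s parallel between semiotics and machine learning can be made concrete with a toy sketch. The codes, images, and counts below are invented for illustration (not FACE’s actual coding scheme): a human assigns semiotic codes to each image, and the occurrence and co-occurrence statistics over those codes are exactly the raw material a learning system builds its guesses from.

```python
from collections import Counter
from itertools import combinations

# Hypothetical semiotic codes assigned to each image by a human coder.
images = [
    {"red_can", "beach", "smiling_face"},
    {"red_can", "kitchen"},
    {"beach", "smiling_face", "sunset"},
    {"red_can", "beach", "sunset"},
]

# Occurrence and co-occurrence counts: the same statistics a
# machine-learning model relies on when it "guesses".
occurrence = Counter(code for img in images for code in img)
cooccurrence = Counter(
    pair for img in images for pair in combinations(sorted(img), 2)
)

print(occurrence["red_can"])               # 3
print(cooccurrence[("beach", "red_can")])  # 2
```

From here, either a human or a model can ask the same question: given that “beach” appears, how likely is “red_can” to appear with it?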

Seth> Could you please sketch the interesting technologies and techniques available today, or emerging, for image analysis?

Francesco> There are many methods and techniques currently used to analyse images, and they serve hundreds of use cases. You can generally split these methods into two types: image analysis/processing and machine learning.

Image analysis would focus on breaking down the images into fundamental components (edges, shapes, colors etc.) in order to perform statistical analysis on their occurrence and based on that make a decision on whether each image contains a can of Coke. Machine learning instead would focus on building a model from example images that have been marked as containing a can of Coke. Based on that model, ML would guess, for instance, whether the image contains a can of Coke or not, as an alternative to following static program instructions. Machine learning is pretty much the only effective route when programming explicit algorithms is not feasible because, for example, you don’t know how the can of Coke is going to be photographed. You don’t know what it is going to end up looking like so you can’t pre-determine the set of features necessary to identify it.
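A minimal sketch of that contrast, with invented feature numbers (real systems extract far richer features than these two): the rule-based route hard-codes what a can photo looks like, while the learned route builds a model from labelled examples — here a simple nearest-centroid classifier — and generalises to a shot the static rule misses.

```python
# Toy feature vectors: (fraction of red pixels, edge density).
# Hypothetical numbers for illustration, not real image features.
labelled = [
    ((0.60, 0.30), True),   # photos known to contain the can
    ((0.55, 0.40), True),
    ((0.05, 0.20), False),  # photos known not to
    ((0.10, 0.60), False),
]

def rule_based(features):
    # Explicit program: "a can photo is mostly red".
    # Brittle: fails when the can is photographed at a distance.
    return features[0] > 0.5

def train_centroids(examples):
    # "Learning": average the feature vectors of each class.
    def mean(vecs):
        return tuple(sum(v[i] for v in vecs) / len(vecs) for i in range(2))
    pos = mean([f for f, y in examples if y])
    neg = mean([f for f, y in examples if not y])
    return pos, neg

def ml_based(features, centroids):
    # Classify by distance to the nearest class centroid.
    pos, neg = centroids
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return dist(features, pos) < dist(features, neg)

centroids = train_centroids(labelled)
photo = (0.45, 0.35)               # the can photographed at a distance
print(rule_based(photo))           # False: the static rule misses it
print(ml_based(photo, centroids))  # True: closer to the "can" centroid
```

The point of the sketch is the failure mode Fran describes: the explicit rule breaks as soon as the photo deviates from the programmer’s assumptions, while the model degrades more gracefully.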

Seth> What about the case for information extraction?

Francesco> Having a set of topics attached to an image means you can explore, filter, and mine the visual content more effectively. So for example, if you are an ad agency, you want to set your next ad in a situation that’s relevant to your audience. To this end, you quantitatively assess pictures, a bit like a statistical mood-board. We’re working with AlchemyAPI on this and it’s coming to Pulsar in September 2015.

But topic extraction is just one of the visual research use cases we’re working on. We’re planning the release of Pulsar Vision, a suite of 6 different tools for visual analysis within Pulsar, including extracting text from an image, identifying the most representative image in a news article, blog post, or forum thread, face detection, similarity clustering, and contextual analysis. This last one is one of the most challenging. It involves triangulating the information contained in the image with the information we can extract from the caption and the information we can infer from the profile of the author to offer more context to the content that’s being analysed (brand recognition + situation identification + author demographic), e.g., when do which audiences consume what kind of drink in which situation?
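A hypothetical sketch of that triangulation — the field names and the caption heuristic here are mine for illustration, not Pulsar Vision’s actual pipeline — combining what the image analysis, the caption, and the author profile each contribute:

```python
# Hypothetical outputs of the three analyses for a single post.
image_tags = {"brand": "Coke", "situation": "picnic"}
caption = "Sunday lunch in the park with the kids"
author_profile = {"age_band": "25-34", "gender": "f"}

def infer_meal(text):
    # Naive caption cue for the meal occasion -- illustration only.
    for meal in ("breakfast", "lunch", "dinner"):
        if meal in text.lower():
            return meal
    return "unknown"

# Triangulated record: brand + situation + audience, ready for the
# question "when do which audiences consume what kind of drink?"
record = {
    "brand": image_tags["brand"],
    "situation": image_tags["situation"],
    "occasion": infer_meal(caption),
    "audience": author_profile["age_band"],
}
print(record["occasion"])  # lunch
```

Aggregating many such records is what turns individual posts into an answer about audiences and consumption situations.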

Seth> In that Q1 quotation above, you contrast the semiotic complexity of words and of images. But isn’t the answer, analyze both? Analyze all salient available data, preferably jointly or at least with some form of cross-validation?

Francesco> Whenever you can, absolutely yes. The challenge is how you bring the various kinds of data together to support the analyst in making inferences and coming up with new research hypotheses based on the combination of all the streams. At the moment we’re looking at a way of combining author, caption, engagement, and image data into a coherent model capable of suggesting, for example, Personas, so you can segment your audience based on a mix of behavioural and demographic traits.
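One crude way to sketch such a segmentation — using hypothetical trait fields rather than Pulsar’s actual model — is to key personas on a mix of demographic and behavioural traits:

```python
from collections import defaultdict

# Hypothetical per-author records mixing behavioural and demographic
# traits; invented for illustration.
authors = [
    {"age": "18-24", "posts_per_week": 12, "top_topic": "fitness"},
    {"age": "18-24", "posts_per_week": 9,  "top_topic": "fitness"},
    {"age": "35-44", "posts_per_week": 2,  "top_topic": "cooking"},
    {"age": "35-44", "posts_per_week": 3,  "top_topic": "cooking"},
]

def segment(author):
    # A crude persona key: demographic band x activity level x interest.
    activity = "heavy" if author["posts_per_week"] >= 7 else "light"
    return (author["age"], activity, author["top_topic"])

personas = defaultdict(list)
for a in authors:
    personas[segment(a)].append(a)

for key, members in personas.items():
    print(key, len(members))
# ('18-24', 'heavy', 'fitness') 2
# ('35-44', 'light', 'cooking') 2
```

A production model would cluster on continuous features rather than hard-coded buckets, but the principle — one coherent record per author, segmented on mixed traits — is the same.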

Seth> You started out as a researcher and joined an agency. Now you’re also a product guy. Compare and contrast the roles. What does it take to be good at each, and at the intersection of the three?

Francesco> I started as a social scientist focussed on online communication and then specialised in immersive media, which is what led me to study the social web. I started doing hands-on research on social media in 1999. Back then we were mostly interested in studying rumours and how they spread online. Then I left academia to found a social innovation startup and got into product design, user experience, and product management. When I decided to join FACE, I saw the opportunity to bring together all the things I had done until then — social science, social media, product design, and information design — and Pulsar was born.

Other than knowing your user really well, and being one yourself, being good at product means constantly cultivating, questioning, and shaping the vision of the industry you’re in, while at the same time being extremely attentive to the details of the execution of your product roadmap. Ideas are cheap and can be easily copied. You make the real difference when you execute them well.

Seth> Why did the agency you work for, FACE, find it necessary or desirable to create a proprietary social intelligence tool, namely Pulsar?

Francesco> There are hundreds of tools that do some sort of social intelligence. At the time of studying the feasibility of Pulsar, I counted around 450 tools including free, premium, and enterprise software. But they all shared the same approach. They were looking at social media data as quantitative data, so they were effectively analysing social media in the same way that Google Analytics analyses website clicks. That approach throws away 80% of the value. Social data is qualitative data on a quantitative scale, not quantitative data, so we need tools to be able to mine the data accordingly.

The other big gap in the market we spotted was research. Most of the tools around were also fairly top line and designed for a very basic PR use case. No one was really catering for the research use case — audience insights, innovation, brand strategy, etc. Coming from a research consultancy, we felt we had a lot to say in that respect so we went for it.


Seth> Please tell us about a job that Pulsar did really well, that other tools would have been hard-pressed to handle. Extra points if you can provide a data viz to augment your story.

Francesco> Pulsar introduced unprecedented granularity and flexibility in exploring the data (e.g. better filters, more data enrichments); a solid research framework on top of the data, such as new ways of sampling social data by topic, audience, or content, and the ability to perform discourse analysis to spot conversational patterns (see the attached visual on what remedies British people discuss when talking about flu); a great emphasis on interactive data visualisation to make the data mining experience fast, iterative, and intuitive; and generally a user experience designed to make research with big data easier and more accessible.

Seth> What does Pulsar not do (well), that you’re working to make it do (better)?

Francesco> We always saw Pulsar as a real-time audience feedback machine, something that you peek into to learn what your audience thinks, does, and looks like. Social data is just the beginning of the journey. The way people use it is changing, so we’re working on integrating data sources beyond social media such as Google Analytics, Google Trends, sales data, stock price trends, and others. The pilots we have run clearly show that there’s a lot of value in connecting those datasets.

Human-content analysis is also still not as advanced as I’d like it to be. We integrate Crowdflower and Amazon Mechanical Turk. You can create your own taxonomies, tag content, and manipulate and visualise the data based on your own frameworks, but there’s more we could do around sorting and ranking which are key tasks for anyone doing content analysis.

I wish we had been faster at developing both sides of the platform, but if there’s one thing I’ve learned in 10 years of building digital products, it’s that you don’t want to be too early to the party. It just ends up being very expensive (and awkward).

Seth> You’ll be speaking at the Sentiment Analysis Symposium on Analysing Images in Social Media. What one other SAS15 talk, not your own, are you looking forward to?

Francesco> Definitely Emojineering @ Instagram by Thomas Dimson!

Thanks, Fran! 😸😸


Our View on Facial Recognition Privacy Talks Breaking Down

Brian Brackeen from Kairos shares his opinion on the recent dramatic events surrounding the NTIA's talks on facial recognition technology and consumer privacy.



Editor’s Note: The biggest news in the facial recognition industry this month has been the walkout from the ongoing facial recognition privacy talks by the nine consumer/privacy representatives. While the media has widely reported the walkout, as in this New York Times article, often quoting from the privacy advocates’ press statements, there has been little public comment from members of the facial recognition industry. As strong supporters of these talks, we invited our friends at facial recognition technology platform provider Kairos to share their viewpoint.


By Brian Brackeen

Over the last 16 months, the National Telecommunications & Information Administration (NTIA), a division of the United States Commerce Department, has been hosting both facial recognition industry representatives and delegates representing consumer and privacy groups. They have been working to come up with a voluntary agreement on how facial recognition data should be collected and used. The talks are known as the Privacy Multistakeholder Process: Facial Recognition Technology.

This process led by the NTIA has nothing to do with government collection and use of facial recognition data. These talks are designed to come up with a set of voluntary guidelines that commercial facial recognition companies could choose to adhere to or not. It is important to remember that any decisions from these talks have no jurisdiction over the federal government, law enforcement, the military, etc.

The agreement envisaged by these talks is more akin to the voluntary privacy standards that companies on the web agree to on their own. Once they do publicly agree to abide by a standard, if they violate their own policy, they are subject to sanction by the FTC. The idea is that the working group will come up with something that the industry will volunteer for, giving some level of protection to consumers.

Who Walked Out From These Talks, and Why?
There are nine groups representing consumers and privacy advocates involved in the talks. They are the American Civil Liberties Union, Center for Democracy & Technology, Center for Digital Democracy, Alvaro M. Bedoya (the executive director of the Center on Privacy & Technology at Georgetown University Law Center), Consumer Action, Consumer Federation of America, Consumer Watchdog, Common Sense Media, and the Electronic Frontier Foundation.

They set themselves a bottom line that companies should seek and obtain permission before employing facial recognition to identify individual people in public spaces. They believe, as do we at Kairos, “at a base minimum, people should be able to walk down a public street without fear that companies they’ve never heard of are tracking their every movement – and identifying them by name – using facial recognition technology”.

They want assurance that a person has full knowledge of how facial recognition will be used and that they will give meaningful prior consent. The prior consent is the main sticking point that caused the walkout.

What is the Kairos View?
Our view is “We LOVE the talks”. We LOVE that we are working through the tough questions. We think that, given Congress’s current makeup, binding rules will not be applied, and that this is an area where the United States will need to lead to drive meaningful change.

We believe that all parties can find a path forward, but it’s going to take FULL participation and a commitment to a shared goal. Walking out of the process has been harmful to that reality.

We strongly value the contributions that the privacy groups have made to date. We think that these privacy groups are the protectors at the gate for all of us, and we want them to be part of the talks. We are people too, and we want our own privacy to be protected.

We believe the NTIA has done an outstanding job of bringing this process together. They organize the sessions and bring in numerous speakers from both sides, as well as from the education and research sectors. It always feels deliberately inclusive. We enjoy working with John Verdi, Director of Privacy Initiatives at the NTIA, who is tasked with the difficult job of “herding cats together”, which he does admirably and with grace.

Is There Still a Future For the Talks?
Juliana Gruenwald, an NTIA spokesperson, is quoted in the New York Times article linked above as saying that she thinks the decision to walk away from the table will hurt the interest groups’ cause more than help it. Their departure is not going to halt the talks. Industry groups still intend to continue to pursue a workable code of conduct for facial recognition privacy, whether the consumer groups participate in the discussions or not.

Even if there is no consensus on the pivotal area of consent by those being observed, there are still areas where the stakeholders should be able to reach an agreement such as transparency, notifications, and data security.

Alas, if the privacy groups are not present for future discussions, there is a danger that any future agreement will lack the checks and balances that the privacy groups have been there to provide.

It is a pity that the walkout by some of the participants has disrupted the real focus and purpose of the talks. The talks never drew significant media discussion before, and now the coverage has concentrated on the wrong thing: the walkout, rather than the genuine issues involved. As we have shown in our recent blog post, Inspiring Uses of Facial Recognition in the Real World, there are many valuable and helpful uses for this technology that are in danger of being overlooked because of the media focus on privacy threats. Discussion needs to move away from potential threats of Big Brother tracking your every movement, and towards showing how we can use the technology to better society.

At Kairos we are not against the concept of people having to opt in to facial recognition in most circumstances, certainly in the commercial and retail situations that are the focus of these talks. We do feel that there are a few situations, for example child trafficking (using the Helping Faceless app), where it is simply not feasible to get opt-in consent.

It is unfortunate that the privacy advocates will not be able to present their case for change if they are not present at future discussion meetings. Lines in the sand and entrenched bottom lines never lead to genuine, meaningful discussion and agreement; only compromise, empathy, and understanding do.

While we are disappointed that some important parties left the table, we are still wholeheartedly supporting the efforts, and we look forward to opting into a new standard. The multi-stakeholder process is inherently tricky, but we believe good people, with a little trust, a little compromise, and the best intentions, can overcome any hurdle. We would love to see the door left open for the privacy groups’ return.

At Kairos, we have found this process to be incredibly valuable. We welcome the opportunity to create a framework that can shine a light on those in our industry who do not put the level of importance on privacy that we believe is necessary. Privacy and commerce do not have to be exclusive. There is a way for all parties to win. We firmly believe in the individual, and yet we are a commercial enterprise. How? By focusing on outcomes and not transactions. We want people to have better experiences. For Kairos, serving people is the end goal, not making products out of them. We believe in humanity and privacy, and that we can still grow a viable business around it.


Buck the Trend of Market Research Mobile Rewards

Looking past the latest fad in incentive programs to the tried-and-true retention solution.



By Greg Cicatelli 

If you’re involved in market research, you understand the importance of incentives. People are rarely willing to give up their time for free, and if they don’t feel that you’re going to reward them proportionately, you might as well forget about your study. As such, it’s crucial for market research organizations to keep up to date with the newest, shiniest incentive programs, and right now, that’s mobile rewards.

Let’s chew on some statistics. According to a November 2014 Salesforce Marketing Cloud report, mobile rewards programs ranked as the most effective outreach option for mobile marketers. Moreover, a report by 451 Research found that 37% of US mobile phone users have used a mobile reward program app to collect and redeem loyalty rewards, while another 21% were interested in trying one.

With figures like these, you’d have to be a fool not to be excited about mobile rewards programs, right?

Not so fast.

Let me clarify: I understand that rewards and incentives are a great way to encourage participants to contribute their time and information to your cause, but not all rewards are created equal. While mobile reward programs are becoming increasingly popular, the vast majority of these existing platforms offer just one type of reward: closed loop gift cards.

Out of the Loop

‘Closed loop’ refers to gift cards that can only be redeemed through their associated retailer. For example, an iTunes gift card can only be redeemed through iTunes. Likewise, a Starbucks gift card can only be redeemed through… well, you get the idea.

In the most common type of mobile rewards program, participants are awarded points that they can spend on closed loop gift cards from a selection of retailers and brands. Sure, it’s a monetary incentive, but once participants choose a gift card, they’re forced to redeem it with just one retailer.

Why is that a problem? Well, according to an Ebiquity study commissioned by American Express, nearly three-quarters of Americans prefer rewards programs that allow them to shop with many retailers, not just one. Furthermore, a 2013 survey conducted by Forrester Research found that consumers were far more motivated to share their personal information with companies in return for cash incentives (41%) than loyalty program points (28%). In short, when it comes to maximizing the returns on mobile rewards, companies need to start thinking past points and gift cards and instead focus their efforts on providing participants with what they really want: cash.

Power to the Participant

Cash-based incentive programs offer participants far more flexibility than closed loop gift cards. It goes without saying that individuals can use their incentive payments to make purchases with any number of retailers, but unlike gift cards, cash-based incentives give participants the freedom to put their rewards towards the things they really need: rent payments, bills, and so on.

Beyond that, advances in payments technology have given participants better, more flexible access to cash-based rewards. Top incentive fulfillment platforms let individuals choose how they wish to receive their monetary incentive, whether directly to their bank, via old-school check delivery, or in a virtual prepaid card format. Worried about servicing on-the-go participants? Gift card loyalty programs aren’t the only ones with mobile access: the best cash-based incentive programs make it easy for participants to accept, transfer, and manage their rewards in real time, thanks to online portals and native smartphone apps.

Recruitment & Retention

When it comes to improving market research participation specifically, cash-based incentive programs are ideal. For one, cash-based incentives have been shown to improve participant response rates. In a 2004 study on web-based survey incentives, cash-based rewards generated higher response rates than physical and digital gift certificates with both new and repeat participants. That means that cash-based incentive programs are a more effective method of attracting new respondents and retaining existing ones. For an industry that depends on high response rates and participant retention, that’s a big deal.

Going Global

Looking to expand your studies globally? Multi-currency cash-based incentive platforms make it easy to provide flexible incentives to a globally dispersed participant pool. Sending gift cards internationally can be a major financial and administrative burden: there’s the issue of finding and purchasing region-specific gift cards, then organizing and paying for the distribution of the cards, to say nothing of the difficulties presented by foreign exchange. In contrast, a multi-currency incentive platform makes it easy to send rewards in the appropriate currency, quickly and affordably.

Trends Fade; Solutions Last Forever

In the past, cash-based incentive programs have been a frightening prospect for market research organizations. Monetary rewards used to come with their own set of challenges, both administrative and economic. Not so with a modern, top-tier payments provider. Hyperwallet simplifies cash-based rewards by managing the process from start to finish, distributing global incentive payments rapidly and securely through its international network of payment partners. Respondents are able to access and manage their incentive payments online or on-the-go, with options to receive the payment by check delivery, prepaid card, or bank deposit, all in the local currency.

So, sure, mobile rewards programs might be the hot new thing. But when it comes to increasing response rates, maximizing participant retention, and reducing churn, cash is, and always will be, king.


Hashtagging Your Way To Social Media Relevance

Those Seemingly Inconsequential Hashtags Are Crucial To Gaining More Exposure For Your Brand


By Jay York

Not so many years ago, many people probably paid little attention to that pound sign on the computer keyboard. You know, the one that looks like this: #.

Then along came Twitter and what we have come to call the “hashtag,” and social media marketing was changed forever.

Yet not everyone takes advantage of hashtags the way they should, and that’s unfortunate because if you are not using hashtags you are missing out on exposure for you and your brand.

When you are on social media sites such as Twitter or Instagram, your goal should be to become part of the conversation. The hashtag allows more people to find your contributions to that conversation. Without them, you miss out on lots of eyes that could be viewing your content.

For example, let’s say 1,000 people follow you on Twitter. Not counting re-tweets, only 1,000 people will see your posts if you don’t use a hashtag.

Add the hashtag, though, and you start picking up momentum because the post has the potential of being seen by, and re-tweeted by, any number of people.  A common hashtag, such as #love, can position your post to be seen by potentially millions of people.

But be warned.  While there are great benefits to hashtags, there also are pitfalls. Hashtags don’t come with exclusivity. Anyone can use them, so a hashtag can become a weapon that works both for you and against you. Critics of your brand, or just the usual assortment of Internet trolls, may attempt to hijack your hashtag, putting you or your business in a bad light.

A prime example of a hijacked hashtag happened a few years ago when McDonald’s, apparently hoping for a flattering conversation about the restaurant chain, introduced #McDStories on Twitter.

#McDStories went viral, but not in a good way as the Twitter world had a field day tweeting unflattering tales of their alleged bad experiences with the restaurant.

Don’t let such cautionary tales deter you, though. March boldly into hashtagging, but as you do keep in mind these suggestions for getting the most out of your efforts.

•  Use proprietary hashtags. One of the advantages of a proprietary hashtag, such as “Orange is the New Black’s” hashtag #OITNB, is that it is linked directly to your brand. These hashtags typically are not used as widely as more generic hashtags, but the goal is to brand yourself through the hashtag with the hope it could go viral.

•  Don’t overdo it. A post littered with too many hashtags can be difficult to read, so your message might become obscured as your followers see what appears to be gibberish. Perhaps you saw the skit Justin Timberlake and Jimmy Fallon once performed in which they spoofed the device’s overuse by lacing their spoken conversation with seemingly endless hashtags. It was hilarious and annoying all at the same time.

Twitter itself suggests using no more than two hashtags per Tweet. Certainly, three should be the very maximum on Twitter. A different etiquette exists on Instagram, though, and most Instagram followers will tolerate excess hashtags. Meanwhile, although hashtags can be used on Facebook, there’s little reason to include even one. That’s not the way people use that social media site.

•  Think geographically. If you are a local company that depends mainly on local clientele, a hashtag that links to your location works well. Hashtags such as #Seattle or #Bangor drop you into numerous conversations about your hometown.

Since social media has become such a vital element of any comprehensive marketing strategy, understanding all of the nuances is critical.

A hashtag may not look like much, but it’s really a powerful tool, and a double-edged sword. Used correctly, it can greatly bolster your marketing reach. Used incorrectly, it can have adverse effects or unintended consequences.

With social media, your hashtag is your brand, so use it wisely.


An Open Discussion on the Impact of Respondent Sourcing

Frank Kelly explains how the quality of your research relies on the source of your respondents.

By Frank Kelly


Research clients should learn how the source of their respondents affects research data. At Lightspeed GMI, we have been observing some very significant differences depending on the source of the data. The way a respondent enters a study will impact the overall data quality obtained from that respondent. It is not a story of good and bad, but rather just a case of significant differences; ignoring these differences could result in poor quality research.

Clients still believe the respondent world is a choice between research panel respondents and river respondents. In reality, the river sourcing technique has not existed for several years. What we have instead is a range of panel types that mostly have nothing to do with research; members of these panels need to do something, in this case complete a survey, in order to get something they want. The important factor here is that river used to indicate ‘freshness,’ which was seen as a virtue when compared to research panel respondents, who were seen as ‘conditioned.’ A dynamically sourced respondent (dynamic sourcing is what we call the process of amalgamating non-research panel respondents) actually takes many more surveys than a research panel respondent.

The majority of respondents sourced from most panel companies are actually dynamically sourced respondents that come from a variety of places (e.g., social media sites, game sites, and loyalty sites). These are often mixed with respondents sourced from traditional research panels. The key point here is that we get very different data depending on the source. Dynamically sourced respondents consistently rate product concepts higher, consistently have a higher incidence of qualifying for studies (due to over-claiming behavior), and are less attentive survey takers. We can fix attentiveness and over-claiming through our data quality process, but the difference in the way they answer questions cannot be fixed.

At Lightspeed GMI, we believe that research panel respondents are the preferred method for your research needs, but we also feel that a small amount of dynamically sourced respondents can be beneficial for most studies. We normally recommend an 85% panel/15% dynamic source blend. This blend enables us to take advantage of clear dynamic sourcing benefits: it is inexpensive and has complementary demographics to panels (lower income, larger households, less education, and more ethnic populations). If we blend at this level, we tend to stay closer to category benchmarks than at higher levels of dynamic sourcing.

Consistency is key. You should determine whether you will be using research panel respondents, dynamically sourced respondents or a blend. The ratio of these sources should be held consistent for all quota cells in the study, and if the study is a tracker or wave study, the blend should be consistent from period to period.
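The consistency rule above is easy to express in code. Here is a minimal illustrative sketch: the 85/15 ratio comes from the text, but the function name, cell labels and target sizes are hypothetical.

```python
# Hold an 85% panel / 15% dynamic-source blend constant across every
# quota cell, so source mix never confounds cell-to-cell comparisons.

def allocate_blend(cell_sizes, panel_share=0.85):
    """Split each quota cell's target completes into panel and
    dynamically sourced counts at a fixed ratio."""
    allocation = {}
    for cell, n in cell_sizes.items():
        panel_n = round(n * panel_share)
        allocation[cell] = {"panel": panel_n, "dynamic": n - panel_n}
    return allocation

# Example: three hypothetical quota cells with different target sizes.
cells = {"18-34": 200, "35-54": 300, "55+": 100}
print(allocate_blend(cells))
# Every cell keeps the same 85/15 split; for a tracker, the same ratio
# would be applied wave after wave.
```

The same function would be run for each wave of a tracker, which is exactly the period-to-period consistency the article recommends.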


How the emergence of dashboard sampling is impacting quarantine periods

Mike Misel discusses the future of sampling science.


By Mike Misel

The growth of push-suppliers in the MR industry has been exponential, especially in the US market, and they will continue to grow and become a real force in the sample supply chain. This has been further consolidated by the emergence of dashboard solutions and advanced API technology to make the most of this movement. Push-supply will continue to provide an important sample source at a time when supply is a major consideration and distractions for consumers – turning their heads from survey participation – are rife.

The surge in popularity of this 'offerwall' style of survey delivery, however, raises questions about the frequency of respondent participation and what this means for the industry. Quarantine periods exist to protect the respondent experience – ensuring respondents aren't bombarded with invitations to take surveys – as well as to help weed out professional respondents.

Upholding these quarantine periods is becoming more challenging as the supply chain has become more convoluted, with the result that respondents can take part in as many surveys as they like, at a frequency of their choosing. In the modern sampling world, we also have a respondent experience concern that didn't exist when the idea of quarantine periods was conceptualized: device-friendly studies. Nowadays most people check email on their smartphones, and receiving a 28-minute grid study makes for an awful experience on mobile. Dashboard sampling alleviates this by presenting opportunities when people are actively looking to be rewarded, because they are in effect opting in, in real time.
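The quarantine mechanism itself is simple to state in code. This is a hypothetical sketch of how a supplier might enforce it; the field names, window length and dates are invented for illustration.

```python
# Exclude respondents whose most recent completed survey falls inside
# the quarantine window, so nobody is re-invited too soon.
from datetime import datetime, timedelta

def eligible(respondents, now, quarantine_days=14):
    """Return only respondents outside the quarantine window."""
    cutoff = now - timedelta(days=quarantine_days)
    return [r for r in respondents if r["last_complete"] <= cutoff]

now = datetime(2015, 7, 1)
pool = [
    {"id": "r1", "last_complete": datetime(2015, 6, 1)},   # eligible
    {"id": "r2", "last_complete": datetime(2015, 6, 28)},  # quarantined
]
print([r["id"] for r in eligible(pool, now)])  # ['r1']
```

The difficulty the article describes is not the logic – it is that in an offerwall world no single party holds the `last_complete` history across all suppliers, so the filter cannot be applied reliably.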

There is concern in the industry about what this means for survey quality: does it open the door to professional survey takers whose main objective is earning incentives? (Incentives are, of course, important for keeping consumers engaged.) Generally, a researcher may not be aware of the data-gathering process, and therefore may not know that their insights could be based on the opinions of people who take surveys very regularly.

How can this be addressed in a modern-day marketplace?

Some may argue that quarantine periods should be removed altogether, since there is no real way of policing them once dashboard sampling enters the picture; and with the demand for faster insights gathering, some researchers may not be overly concerned about duplication if a quick turnaround on a high-volume sample is needed.

Of course, those concerned about quality can take the blending route. This is a great solution whenever researchers are after a more accurate result with less bias or skew, which can occur from using a single panel even when multiple survey-taking isn't an issue. Respondents in any one panel share common ground, so pulling together and blending different sources to gain a more widely representative opinion can deliver a great result.

It’s difficult to state a right or wrong in the case of push supply and the potential overuse of respondents, as it will concern some in the industry more than others, depending on the type of insights needed or the project being worked on. But in a world where supply isn’t infinite, additional sources from push-suppliers certainly cannot be ruled out and will only grow further.

In addition, the growing reliance on what can be termed as ‘non-traditional’ sample sources also comes at a time when there is heightened scrutiny on the quality of online samples, highlighted recently by various papers presented at events hosted by the likes of the MRS and Esomar.

Reg Baker and others have called on the industry to revisit and review sampling practices, raising awareness of the importance of ensuring that the fundamentals and ‘science’ behind sampling, and setting sample frames, remain a high priority for all suppliers and their clients. To help, here are five tips to consider when using non-traditional sample sources:

  1. Know the source. Expect transparency about the sources you are using: how the respondents are recruited, and from where
  2. Understand the incentive model
  3. Understand the respondent flow from the supply source origin to the survey and whether there is routing or other techniques applied prior to a respondent reaching your survey
  4. Enquire as to the profiling and targeting capabilities of the supply sources you are using. For example, can the same respondent be contacted in the future should you need to?
  5. Appreciate the potential biases which exist with any source or methodology of sample, and account for those in your sample-frame designs.

What The LRW Investment Means For The Market Research Industry

I interview David Sackman, CEO of LRW, on their just announced successful capital raise.


Today Lieberman Research Worldwide announced the successful raise of significant growth capital from Tailwind Capital, a private equity firm. This marks the first time LRW has raised outside capital, and the latest in a series of significant PE-backed moves in our category (previous notable examples include deals for Research Now, Macromill/MetrixLab, SSI, and Focus Vision).

Before we jump into my take, here is a snippet of the press release for details:

Los Angeles – LRW (Lieberman Research Worldwide), a leading market research and data analytics company, announced it has raised significant growth capital from Tailwind Capital, a private equity firm focused on investing in growth-oriented middle market companies. This capital will be used to drive LRW’s innovation strategy and vision to be the leading integrated data analytics research and marketing strategy consulting firm. This capital infusion will help LRW more rapidly develop and expand its suite of innovations in the areas of Pragmatic Brain Science®, virtual reality, social media analytics and big data. The capital will also be used for targeted acquisitions to strengthen LRW’s global footprint and leading edge capabilities in the “new” market research industry. The current management team of Chairman and CEO Dave Sackman and President, Jeff Reynolds will continue in their current leadership roles and remain significant owners of the business. Terms of the transaction were not disclosed.

Dave Sackman, Chairman and CEO of LRW said, “The market research industry is at an inflection point, and we see tremendous opportunity. Our partnership with Tailwind will enable us to innovate even more rapidly than we have the past few years and to meet the changing needs of CMOs and their marketing organizations. The world of data and digital marketing is growing exponentially, and with growth capital, we can shape that future. We fully intend to be recognized years from now as one of the pioneers and true leaders in creating what people are now calling the ‘new MR’ and we’re calling Integrated Data Analytics Consulting.”

Adam Stulberger, Partner at Tailwind, said, “This transaction represents a tremendous opportunity for Tailwind to invest in a proven winner that provides high quality services to a dynamic industry in the midst of a transformation. LRW has a very experienced management team and is well-positioned for future growth and expansion. We look forward to supporting LRW as it pursues future organic initiatives and acquisitions.”

LRW is recognized as one of ten most innovative firms in its industry and is one of the top 25 largest marketing research firms in the world.  Since 1973, LRW has been providing its data-driven consulting services to management teams of top global brands on issues such as strategy, branding, communications, new product development, and customer experience.  LRW leverages its unique “so what?®” consulting model, sophisticated marketing science capabilities and recent innovations in Pragmatic Brain Science® to deliver real business impact for their clients across a wide range of industries, including entertainment, pharmaceutical, technology, consumer packaged goods, health care, retail, food service, financial and business services, automotive, and many more.

The LRW deal stands apart for one important reason: this is the first time in this latest wave of interest in MR that a primarily service-focused consultancy has raised capital to pivot to a combined technology and services growth strategy. LRW has certainly invested significantly in technology in the last few years (its VR and new BX offerings, for example), but now it is poised to begin acquiring new technology capabilities it can wrap services around and, perhaps even more importantly, leverage multiple data sources for a holistic data synthesis offering – an important distinction. The combination of globally leading consulting talent, a developing contextual framework for data analytics, and a mix of proprietary technology and smart tech partnerships is a forward-looking formula for success.

Will LRW pivot to a tech play? No, I don’t believe so, but they are clearly focusing on the intersection of consulting and technology (specifically around data that drives insights), and I think that is where the future of the industry lies. Increasingly we are seeing demand from clients for pure tech/self-service offerings to be augmented with a service solution. Just as most consultancies can’t fully pivot to tech companies, pure tech companies can’t pivot to service organizations either: new hybrid models are necessary to meet the need. This new SwaS (Software with a Service) category is likely to be one of the major trends in the research space over the next five years and is a smart play across the board.

I had the opportunity to chat with CEO Dave Sackman on the news of this deal and get his view on what it means for LRW and the industry as a whole. It’s a great discussion and one that everyone should pay attention to, since LRW is now poised to become not just one of the smartest companies in our space, but also likely one of the largest. Here is the interview:



One last takeaway from this deal: private equity money continues to be available in this space, and many investors are looking for opportunities. If your business fundamentals are strong, you have a proven track record of success, and you are making the changes necessary not just to remain competitive but to actually lead with innovation that delivers real business impact, then you can find deals. Despite the myriad dynamics impacting our industry, it’s still a great time to be in the insights space, as LRW just proved yet again.


8 Reasons Your Company Needs an Open Data Strategy

Emily Fullmer discusses the multitude of benefits that an open-data platform could provide for your company.


By Emily Fullmer

Data seems to like reinventing itself. We’re still in the midst of  ‘data’ becoming ‘big data’, while also coping and learning to capitalize on data as an asset class. And yet, we are faced with yet another simultaneous data revolution—the Open Data Economy. 

As a reader of the GreenBook blog, I know you either strive to be an innovator, or already are. For this reason, I urge you to consider exactly how the forthcoming restructuring of data will impact your firm, your competitors, and your value offering. As 2015 takes off, what will your Open Data Strategy be? Will your firm be proactive or reactive? You need a game plan. Here’s why:

1) Data-as-a-service model

The entire concept of open data may seem counterintuitive to the private sector. Giving away data means giving up both earned and proprietary information, and putting it into the hands of competitors. But opening up your data portals doesn’t mean you have to open all the floodgates – consider using your most basic data as a ‘hook’ for attracting new clients. SaaS businesses are thriving on subscription and ‘freemium’ pricing models; why shouldn’t the same apply to data providers? The concept of data-as-a-service will become increasingly applicable to businesses that capitalize on merging government data with their own proprietary information.

2) Crowdsourced solutions

Crowdsourcing platforms have recently proven their value in a variety of spaces. For data, the possibilities of crowd-sourced solutions are endless. Platforms like Kaggle are bringing together self-proclaimed data scientists and hard-to-crack data sets. Organizations such as GE, Expedia, and Liberty Mutual sponsor challenges backed by cash rewards. In response to a challenge being posted, as many as 1,300 teams submit their predictive models.

The solutions being created are faster, more innovative, and of higher quality because an entire world of data scientists is being tapped, rather than a single department, individual, or algorithm. Crowd-sourced solutions should have a prominent role in your open data strategy, but the extent will depend on exactly how much information you’re willing to share.

3) Attract higher quality talent

When predictive models and solutions are submitted through crowdsourcing platforms, quality talent sets itself apart from the pack quickly. Instead of chasing the best and brightest, and then gambling on their expertise, why not let them identify themselves?  By attracting talent through this method, you can avoid the cost of hiring a full-time team and acquire the winning individual(s) before the competition discovers them. 

4) Contribution to a more efficient economy

It needs to be clear what the Open Data Economy is and is not. It is an ecosystem in which every stakeholder benefits in some form. It is not Data Philanthropy (a noble cause, but a smaller concept within the whole, discussed below). By making your data public in some form, the entire ecosystem becomes more efficient, and thus more profitable. McKinsey & Co. estimates that there is approximately $3 trillion USD in “potential annual value enabled by open data in seven ‘domains’”. That’s a figure that would make a believer out of anyone.

5) Better storytelling

A subset within the Open Data Economy is Data Philanthropy—the UN describes the neologism as “a new form of partnership in which private sector companies share data for public benefit”. It might not be the most glamorous way to give back, but sharing seemingly useless data sets could have profound social benefits. The UN cites, “academic researchers have shown how cell-phone location data was used to understand how human travel affects the spread of malaria in Kenya, while mining of anonymized Yahoo! email messages provided a detailed view of international migration rates.” Anecdotes such as these lead to far better material for compelling and organic press. And by capitalizing on your current assets and internal digital humanitarians, there’s no need to waste resources on campaigns that often negate the credibility of intentions.

6) Rejuvenation of innovation

In the MR space, innovation and tech often go hand in hand. At conferences like IIeX, we are often drawn to the disruptors with tangible, unfamiliar technologies. We sometimes forget that innovation doesn’t always have to be hi-tech. By leveraging this new economy, innovation can emerge from the most unlikely partners and situations, if you allow yourself to be found through your data.

7) Leadership positioning

Buying your way to the top of the thought ladder is becoming a thing of the past. Take a proactive role in accepting and understanding the open data ecosystem landscape. Strategize and position yourself within it now. What value will you be offering? What value will you seek? Early adopters will be rewarded through both value and leadership status. 

8) Unlock new value

It’s time to dust off our old data sets, and realize we have a hoarding problem. What’s the point of sitting on old data, while it depreciates in value? By sharing and publicizing the analysis and trends of the data you own, you open yourself up to others who may have complementary data that allows both entities to create new value from old data.

Still not convinced? Take a look at the McKinsey and Deloitte Open Data Reports for in-depth reads on the subject.


The Rise of Machines: Is DIY Going to Eliminate Your Job?

Sami Kaipa of GlimpzIt reflects on how DIY research tools have quickly taken the industry by storm.


By Sami Kaipa

As we bring a fully self-serve GlimpzIt offering live to the market in the next few days, I’ve had the opportunity to reflect on the revolution that is DIY research. The value of DIY research tools in a word – TREMENDOUS. I don’t think many sensible individuals disagree at this point. The concept of DIY, even just a few years after its birth, doesn’t divide market researchers like it once did. There is a very safe niche for full-service market research professionals amid these indispensable tools: to use them to make sense of the world more effectively. And smart researchers and marketers get that. These tools help researchers do more, faster and better, and they help marketers do research that was otherwise untenable.

Today’s DIY tools accomplish varying degrees of work. In some cases they help with data collection, in others with data categorization and organization, and in others with visualization. In more extreme applications, they perform almost human-like tasks, such as making sense of unstructured text. The fact remains, however, that they make practitioners’ lives easier in all cases. A sensible data scientist running factor analysis these days wouldn’t think of proceeding without Excel, or at the very least a calculator. These tools make the task far easier. A qual researcher, similarly, should consider using sentiment analysis tools because, if you believe in their efficacy, they can make tasks far easier for the same reasons.

Ok, enough with my diatribe on why we should use DIY tools. We all get it right?

Well, I’m pleased to note that most people do. My experience at the North American Insight Innovation Exchange conference in Atlanta this year was reassuring. When describing GlimpzIt, I could freely use terms like DIY, self-service, or self-guided with confidence, knowing that folks understood our offering and, even more so, our value proposition. Contrast that with just over a year ago at IIeX 2014 Amsterdam, when you could hear a scornful murmur in the crowd as Paul MacDonald from Google talked about how Google Consumer Surveys could help bring cost-effective research to small businesses. Moreover, in Atlanta I didn’t sense the same fear of losing business or, even more personally, losing your job. That underlying sentiment seemed to permeate the Amsterdam conference center, and it certainly continues to at other conferences like CASRO.

It takes only a slightly deeper study of DIY tools to appreciate their value even more, and realize that they make all of our jobs more productive, not obsolete. During my exploration of the more common DIY tools in the insights space, I learned quite a bit about DIY strategies and their respective value. Let’s take a quick look:

Survey Tools:

SurveyMonkey, Qualtrics, Google Consumer Surveys – these are the giants and pioneers in the space, and we all know what they do. The takeaway from these services is to make products valuable and applicable to researchers, but also accessible and usable by non-researchers. A marketer or product developer who in the past might have been completely unfamiliar with executing a survey now has that potential, and that’s a good thing!


Research Marketplaces:

Zappistore, Qualtrics, CoolTools – these provide platforms for companies to submit their products for researchers to pull off the shelf and use independently. As an example, you might buy a “standard” NPS test where the question is pre-formulated, the survey is pre-programmed, and everything is executed for you automatically, all online. From these offerings, we’ve learned that DIY tools should strive for more turn-key solutions. We don’t need to stop at just delivering a function. There is merit in layering methods, analysis and presentation into our DIY offerings to make them more valuable and complete.

Other Somewhat DIY Solutions:

GutCheck – based on client needs, they make research methodology recommendations and are able to pull the appropriate “products” off the shelf to meet these needs. The learning from this strategy is that solutions that mix consulting and automation are just as effective, if not more so, than DIY alone.

DIY Panel:

Google Consumer Surveys, Branded Research, TAP Research, Cint, Federated – through the use of online web apps and APIs, anyone with some basic programming know-how can recruit sample and achieve completes – no need anymore for professional services to confuse us with terms like LOI and incidence.
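To make the API-based workflow concrete, here is a purely hypothetical sketch of what programmatic sample sourcing can look like. The endpoint shape, parameter names (including the LOI and incidence terms just mentioned) and response fields are invented for illustration and do not match any real supplier's API.

```python
# Hypothetical sample-supply API workflow: assemble a feasibility
# request, then parse the supplier's quote. No real vendor implied.
import json

def build_feasibility_request(country, incidence_pct, loi_minutes, completes):
    """Assemble the query a buyer might send before fielding."""
    return {
        "country": country,
        "incidence": incidence_pct,   # expected qualify rate, %
        "loi": loi_minutes,           # length of interview, minutes
        "completes": completes,       # target number of completes
    }

def parse_quote(raw_json):
    """Pull feasibility and cost-per-interview out of a supplier reply."""
    quote = json.loads(raw_json)
    return quote["feasible"], quote["cpi_usd"]

req = build_feasibility_request("US", incidence_pct=40, loi_minutes=10, completes=500)
mock_reply = '{"feasible": true, "cpi_usd": 2.75}'  # canned response for illustration
print(req["completes"], parse_quote(mock_reply))  # 500 (True, 2.75)
```

The point of the sketch is the article's: the vocabulary that once required a professional services layer (LOI, incidence, feasibility) is now just a handful of request parameters.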

Passive Analytics:

Google Analytics, Kissmetrics – perhaps we don’t think of these tools as research, but their value is clearly aligned with the goals of many researchers, i.e. to understand consumer behavior and help with making more informed business decisions.

Our own tool, GlimpzIt, is used in scenarios ranging from ethnography and ideation to issue identification and political message testing. Cool stuff, right? Sure, but we have a lot to learn from our DIY predecessors. As a starting point, we adopted a similar vision: to democratize our brand of insight generation and make visual conversations accessible to anyone, regardless of job role, expertise, or budget.

I am happy to see the direction in which the industry has evolved in just a matter of a few short months. It’s my firm belief that, as these tools get better and researchers get more comfortable using them, their use will become even more widespread. It’s time we sharpened our resolve to innovate, not dust off our résumés. With less time spent on the mechanics, the door is wide open for researchers to focus on revealing deeper insights and deriving innovative methods to get to them.


The Technology Dilemma

We live in a tech-centric world. It is challenging and revamping almost every brand and industry, and market research isn’t immune to this change.



By Zoe Dowling

We live in a tech-centric world. It is rapidly changing how we live. It is driving cultural change. It is challenging and revamping almost every brand and industry, and market research isn’t immune to this change. The Insights Innovation Exchange (IIeX) conference, held in Atlanta 15th–17th June, demonstrated just how much technology has infiltrated our industry. Not only did the conference have a disproportionately large tech presence, but its parlance also infused many sessions, with references to ‘lean’ and ‘agile’ approaches, ‘hacking’ solutions and a ‘start-up’ mentality.

Technologies providing automation and computation are leading forces, followed closely by those providing access to consumers, be it as a sample source or a means to connect directly with them. The question then becomes how to leverage these technologies, how to innovate in this new order. And how to do so with the same refrain of ‘better, faster, cheaper’.

In one of the opening talks, RealityMine’s Rolfe Swinton talked about the exponential growth of massive computing power, coupled with the trend towards technology becoming nearly ubiquitous and nearly free. This inadvertently hits upon a Catch-22. Technology is powerful and cheap compared with 10 or even 5 years ago. Furthermore we, as consumers ourselves, have become used to getting technology for free. How many of the apps on your phone did you actually pay for?

This hides a truth: while the technology may be freely available via open-source code or inexpensive off-the-shelf packages, customizing it to meet research needs takes specific skills, time and financial resources. That quickly removes free, and even inexpensive, from the equation, and yet talk of better, faster, and cheaper prevails. TNS’ Kris Hull suggested that a language disconnect between clients and agencies is a factor: when clients hear ‘innovation’, they hear ‘faster and cheaper’, but the agencies saying it actually mean ‘upfront investment’. There need to be more upfront conversations about the real cost of implementing these new technologies.

More broadly, it feels like ‘innovation’ has become the ultimate buzzword. There are continual cries that the industry needs to innovate, or that we aren’t innovating fast enough by adopting all the new technologies available. This skirts over the gritty truth that innovation isn’t easy, even beyond simple financial constraints. It was encouraging (perhaps even comforting) to hear some focus on this reality. Lisa Courtade of Merck & Co. spoke directly to the point that innovation is hard. It requires focus and tenacity. We need to ‘do it again, and again, and again’ before we’ll get it right. Another refreshing talk came from Lowe’s Tanya Franklin, who highlighted the need to build a culture of innovation in order to be successful. That means bringing in the right skillsets and individuals to do so.

So what should we take from this? The first is to celebrate that there are a lot of exciting, and very worthy, technologies for us to employ. The second is to openly acknowledge that employing them involves a lot of hard work as well as significant human, financial and time investment to get it right.