Why Most Respondents Don’t Like Participating In Research (And What We Can Do About It)

The newly released GRIT CPR (Consumer Participation in Research) study showed that the majority of people who willingly give up their time, often for little or no reward, are dissatisfied with their experience participating in research. As an industry we need to collectively increase our respect for those who take the time to answer our questions, and the study found that this can be done through things like shorter, more intuitive surveys sent less frequently.


In the Q3-Q4 2016 edition of the GRIT Report, we asked participants to rank various factors in importance when designing a study. Respondent Experience was at the absolute bottom of the list, which we found quite alarming. Participants are the lifeblood of market research, and disregarding the respondent experience in the research process is counter-productive to say the least.



Customer-centricity, user experience, engagement, and design are at the heart of product development and marketing, yet they are hardly even a consideration in research. In years past they didn’t have to be, but that is a legacy perspective that MUST be jettisoned for our industry to be effective in the 21st century. More than that, it’s necessary for our survival as an industry: many, many options outside of the traditional MR space now exist that insights buyers can use to get the information they need to support their decisions; we stopped being the only game in town long ago. Those competing approaches have user experience built in from the ground up and often overtly reinforce brand relationships with consumers.

As a researcher I get it; every time we field the full GRIT study I get an earful from other researchers about their experience with the survey design, usually negative. It’s easy to get defensive and rationalize those concerns away (and perhaps even rightfully so!), but the bottom line is that people have a choice in how they spend their time. If we ask for some of that time and don’t make it a good experience, we run the risk of becoming like that friend or family member who is always asking for a favor we don’t want to do, so we just start ignoring them as much as possible. Or even worse, research starts to be equated with other unpleasant things like going to the DMV, preparing taxes, dental appointments, or cleaning cat litter boxes!


We weren’t the only ones who found this situation to be cause for concern, so we reached out to various key stakeholders in the industry and developed a concept for asking consumers directly about their experience participating in research. AYTM – Ask Your Target Market, Dalia, Focus Pointe Global, G3 Translate, GRBN, the Global Research Business Network, reportbook by IfaD, Lightspeed, Mobile Digital Insights (MDI), Multivariate Solutions, RECOLLECTIVE (Ramius Corporation), Reconnect Research, Research Now, SSI, Toluna, and Virtual Incentives all joined us in fielding this new GRIT CPR (Consumer Participation in Research) study in March of 2017.

The groundbreaking study was conducted in 15 countries and 8 languages among 6,208 consumers via online, telephone, and mobile-only surveys.

We asked questions surrounding types of research they participate in (qual and quant), frequency of participation, preferred method/device for participation, how they want to receive invitations, what rewards they want, the impact of survey design, and more. In doing so, we discovered a tremendous amount about how consumers view research, and much of it is less than optimal for our industry.

It’s time to bring the participant experience to the forefront, and this report is a key tool to help us do so.

You can download the full report and access the data here, but I’ve included some of the highlights in this post as well.

A key finding is that, in aggregate, only a quarter of all respondents globally are satisfied with their experience participating in research, indicating that researchers’ failure to prioritize the respondent experience shows through to respondents.



Additional eye-opening findings are:

  • Over half of all respondents admitted that the design of a survey impacts their willingness to complete it. 



  • 45% of respondents said surveys should be less than 10 minutes in length. 



  • 1/3 of all respondents cite a desire to earn rewards or prizes as their primary reason for participating.



  • Cash may be King, but Virtual Cards are Queen: across all sample types, countries and demographics respondents want incentive flexibility.


Overall, the results of the study just reinforced our belief, set forth initially in the GRIT Report, that our industry does a poor job of putting the respondent first, despite having the means and knowledge to do so. We should capitalize on that and bring the participant experience to the forefront.

So what to do? Well, based on these data a “Top 5” priority action list could be:

1.) Go “mobile first” in designing studies.

2.) Stay under 10 minutes.

3.) Think like game designers, marketers, or UI experts when designing research.

4.) Offer a fair value exchange: reward respondents the way they want to be rewarded and give them choices.

5.) Use research as a brand engagement and relationship building opportunity.

Other ideas can be found in the recent GRBN Special Report: Improving the online survey user experience.

The GRIT CPR study is a global call to action for the entire industry: clients, suppliers, and everyone in between. We MUST change, or risk losing access to respondents.

What is the “so what” in all of this? We as an industry must change our ways, and respondents have just given us a pretty clear set of directions on how to do that. The way we have always conducted research may have met our needs in the past, but the world has changed and people simply expect more from their relationships, including research.

We’ve distilled the message from the GRIT CPR study into a blueprint for success: a three-part action plan that we believe will go far in helping the industry capitalize on these learnings and overcome the challenges we have identified.


Finally, what isn’t measured isn’t managed, so we encourage everyone to participate in the GRBN TRUST & PARTICIPANT ENGAGEMENT Initiative for UX benchmarking. You can find out more here: http://grbnnews.com/pei_partners_set_goals/

If you want to explore the results of this study on your own, the full report and data can be found here: https://www.greenbook.org/grit/cpr

Getting to Know All About You: Employing Empathy to Holistically Understand Your Consumer

To understand consumers holistically, we must look at a variety of contexts with their corresponding behaviors, cognitions, and emotions, as well as overall values and goals

Editor’s Note: This post is part of our Big Ideas Series, a column highlighting the innovative thinking and thought leadership at IIeX events around the world. Katja Cahoon will be speaking at IIeX North America (June 12-14 in Atlanta). If you liked this article, you’ll LOVE IIeX NA. Click here to learn more.

By Katja Cahoon

Every day, 20 veterans commit suicide[1]. What does this shocking statistic bring to mind? Probably well-established images and associations with PTSD, reintegration struggles, and lack of support for our veterans. What kind of veteran do you picture? A young Iraq/Afghanistan vet? Actually, 65% of suicides are committed by vets who are 50 years and older. That means we are mostly talking about long-neglected Vietnam War veterans, who also struggle more with homelessness.

What about a completely different insight? Veterans are 45% more likely to start a business than people with no active duty military experience[2]. Is this something you knew? Something that neatly fit into your view (stereotype) of veterans? Despite their struggles, veterans are incredibly resourceful, resilient, and persistent.

My point here is that isolated facts and insights can make us miss the whole story and prevent us from developing a true, deep understanding of all aspects of consumers. They can also feed into existing, and sometimes damaging, biases and stereotypes (e.g., about the danger of people with PTSD or other types of mental illness). True cognitive empathy requires the whole story – a person’s struggles and challenges, their strengths, their goals, dreams, and the contexts in which they operate.

Cognitive empathy is the mostly conscious drive to recognize and understand another’s emotional state. This is sometimes called “perspective taking.” It is different from compassion (concern or pity for the suffering or troubles of other people). And empathy is not just necessary in difficult situations or circumstances; it is always necessary. A lack of true empathy is behind quite a few communication and product failures – they simply missed the mark because they did not truly understand consumer reality. Misunderstandings and judgments can arise and lead to a lack of empathy when we look at consumers as one-dimensional beings, instead of seeing them holistically and with an understanding that consumers, and indeed all of us, consist of many selves.

For example, what comes to mind when I tell you (as you probably know) that Millennials don’t save enough money? Well-established stereotypes about the “lazy, careless” Millennial? Apart from the fact that many Millennials face very real challenges when it comes to income, debt reduction, and saving, another aspect comes to bear: Kahneman’s Experiencing and Remembering Self. The Experiencing Self might feel “as if they would be giving up their money to an elderly stranger.”[3] Daniel Goldstein’s TED talk[4] on the topic is insightful for anyone needing to understand long-term decision making, especially as it relates to money. Who wants to give their hard-earned money to an elderly stranger as opposed to getting the immediate gratification of spending it on ourselves, in the here and now? Knowing this leads to more empathy and hopefully better, behaviorally grounded strategies.

In my recent articles on bias and empathy  I reference the concept of multiple selves and the importance of being aware of the multiplicity of our existence, as opposed to focusing on one small aspect of consumers’ lives. In psychology and psychotherapy, the idea of multiple selves is well established. Our personality, our reality, our very lives have many different facets, which are influenced by different contexts and experiences. As such they lead to different needs, behaviors, emotions, cognition, and of course decisions, including purchase decisions.

The goal of multiplicity awareness is to avoid bias and develop greater empathy. We have already touched on the Experiencing and the Remembering Self. Another important aspect is the idea of the inner and outer self. Rather than judging consumers for having different inner (“true”) and outer (often idealized) selves we have to understand that how we appear to others is a crucial aspect of human behavior. To be part of and belong is a strong driver. To be cast out is a significant fear, indeed, it is one of the most evolutionarily ingrained fears of humans. We know that consumers overstate, for example, how often they wash their hands. Rather than accusing them of “lying” or judging, perhaps we have to assume that they believe this to be true and/or subconsciously want to convey that they belong (being seen as dirty is a big part of being/feeling like an outcast). Again, this calls for behavioral approaches and methodologies that go beyond what people say. Advertising very much taps into the idealized self, often in stereotypical ways. Dove’s Real Beauty and Image_Hack initiatives challenge the fantasy and validate that the ideal self is not attainable. That is empathy in action! As Cindy Crawford famously said: “Even I don’t wake up looking like Cindy Crawford.”

Perhaps even more important for the development of deep and empathy-generating insights is a broader view of multiplicity, a true exploration of the multiple selves. “The self in independent cultures is far from being unitary, consistent, and separate from social context….social roles are undoubtedly an important component of the self…. For other people, self-aspects might also consist of goals (e.g., who I want to be), affective states (e.g., being moody), and behavioral situations (e.g., meeting new people).”[5]

To understand consumers holistically, i.e., their many selves, we have to pay attention to five factors, with context being key. Context has a major influence on which self gets activated and acts. Context can be physical, online, time of day, situational (e.g., vacation vs. daily routine), and role related (e.g., dad vs. co-worker), among others. Four other factors play a role:

  • Cognition
  • Emotions
  • Values & Lifestyle
  • Goals (conscious or subconscious)

Depending on which part of a consumer’s life is activated, different self-brand interactions or connections are formed. For example, Smartwater might be proudly displayed on one’s desk at work, in the gym or yoga class. The same consumer might prefer simple, filtered water at home because she does not care about the badge value and this fulfills her need for frugality – all these aspects are part of her. Buying life insurance is driven by a responsible, rational, long-term focused self, and often activated by very specific life circumstances that add or change an aspect of oneself (e.g., marriage, birth of a child, loss of a parent). This long-term self is in competition with the many short-term focused selves a person experiences throughout the day, which all take energy and focus (e.g., employee, wife, mom). Empathy for that can help insurance companies develop easier approaches to getting life insurance and develop milestones along the way that make it more tangible and provide some instant gratification.

Let’s illustrate this with one final example. In a CPG study consumers had two dominant selves, as well as a few less dominant ones. In this case, they expressed themselves as archetypes with different priorities, behaviors, and emotions. In Caregiver mode consumers were focused on efficiency, routine, and a sense of pride about how much they are able to accomplish. They used the product in a routine and habitual way and appreciated any innovation that helped them be faster while also enabling healthy behaviors. It was all about their kids and their significant others. But consumers, especially busy moms, are not just Caregivers, nor do they want to be stereotyped as such.

The second dominant archetype was the Innocent – a childlike, worry-free self. In that mode, they were playful and took time to experiment and do more unusual, less routine things with the product. They were having fun and took pride in being open-minded and creative. This was much more about and for themselves. You can easily see that the same product has two related yet distinct strategy options. And you can see how it relates to stereotyped or overused communication. So often moms are depicted as the traditional Caregiver. Activating another archetype that taps into a different territory can be powerful and validate that they are more than a person who cares for others.

How did we get there? By not focusing too narrowly on just one aspect of the consumer but by looking at a variety of contexts with their corresponding behaviors, cognitions, and emotions, as well as overall values and goals. In short, we asked more and different questions. Exactly that will be part of my very interactive presentation about bias and empathy at IIeX. I will – literally – make you get up from your seats. Please join me!


[1] https://www.va.gov/opa/pressrel/pressrelease.cfm?id=2807

[2] https://www.entrepreneur.com/article/246557

[3] http://www.theatlantic.com/magazine/archive/2008/11/first-person-plural/307055/

[4] http://www.ted.com/talks/daniel_goldstein_the_battle_between_your_present_and_future_self

[5] http://journals.sagepub.com/doi/pdf/10.1177/1088868310371101


Marketing Analytics for Data Rich Environments

A lot is changing in the world of marketing analytics. Marketing scientist Kevin Gray asks Professor Michel Wedel, a leading authority on this topic from the Robert H. Smith School of Business at the University of Maryland, what marketing researchers and data scientists most need to know about it.

By Kevin Gray and Michel Wedel

Kevin Gray: There has been a lot of buzz about data science, big data, analytics and so on in the past few years, and a lot of marketing researchers seem confused about what all this means. Could you give us a simple layperson’s definition of marketing analytics?

Michel Wedel: Marketing analytics involves the examination of data about how customers feel, act, and interact around products and services, using descriptive, diagnostic, and predictive metrics and mathematical methods. Its purpose is to obtain insights into customer behavior and to improve the effectiveness of marketing performance. Marketing analytics is increasingly interdisciplinary, combining methods from business, mathematics, statistics, econometrics, psychometrics, and computer science.

KG: How did all this evolve? Could you give us a brief history of marketing analytics?

MW: The beginning of the systematic use of data in marketing is widely credited to Charles Coolidge Parlin around 1910, with his work on advertising for the Curtis Publishing Company in Boston. In the 1920s and 1930s the first marketing research companies, such as Nielsen, Burke, and GfK, were established. They have long used analytics to support their clients’ marketing decisions. One could argue that the use of big data analytics in marketing began in the 1970s, with the introduction of IBM’s point-of-sale scanning devices for Universal Product Codes. This marked the onset of large-scale capture and analysis of digital transaction data by marketers. The availability of digital data exploded in the 1990s, when the World Wide Web came into existence (clickstream data) and Google was founded (search data). Then, from 2000 to 2010, things quickly evolved with the launch of Facebook (social network data), YouTube (video data), and the iPhone (location data). These new sources of data are now widely used for analytics by marketers, which, along with the analytical techniques developed for these data, has given rise to many entirely new forms of marketing.

KG: Flashing back a decade – to 2007 – what was the buzz then? What were experts predicting about marketing analytics then that they basically got right, and where did they go wrong?

MW: Although the term big data had been used before, around that time it began appearing regularly in scientific articles, blogs, Google search terms, and job postings. It is now clear that initially many businesses’ expectations about the potential of big data were overhyped. Much of the initial surge in technology revolved around data storage, and companies invested too much in data capture without concise plans for how the data should be used to improve marketing decision making. Moreover, investments in analytics capabilities lagged behind. Today, the success of industry leaders such as Amazon, Google, and Facebook has made it clear that the potential of big data for marketing decision making can be leveraged only through the use of analytical tools. In addition, it is becoming clear that the availability of big data itself may give rise to data-driven decision cultures in companies. Analytical tools have been shown to provide companies with competitive advantages and to positively impact their financial performance.

KG: Thinking about data and analytics now, in 2017, what should marketing researchers and data scientists focus on most?

MW: Historically, the development of marketing analytics has progressed through three stages: (1) the description of observable market conditions through descriptive statistics and dashboards, (2) the development of statistical and econometric models as diagnostic tools, and (3) the evaluation, optimization and automation of marketing decisions. New, unstructured digital data in the form of blogs, search results, reviews, images, locations, video and tweets enable deep and actionable insights into the economics and psychology of consumer behavior. But, the usage of these new data sources in marketing practice is still mostly in the first stage of its development (description), and its full potential remains to be tapped. Machine learning methods such as deep neural networks and cognitive systems have ushered in the second stage of analytics for big data, and are becoming more and more popular in practice. The challenge is to combine and utilize all these sources of unstructured data to optimize and automate marketing decisions. Developments in practice have mostly involved “small stats on big data”, while academic research has used “big stats on small data”. Collaboration between researchers in academia and in practice is needed to develop methods that solve important marketing problems with an eye for application and computation.

Key areas of development involve: (1) how to include new rich data in marketing mix models to improve explanations and predictions of the marketing effects; (2) how to attribute elements of the marketing-mix to various touchpoints in the customer purchase funnel; (3) how to dynamically allocate marketing resources across various offline and online channels and multiple devices; (4) how to assess causal effects of marketing control variables through analytical methods and field experiments; (5) how to personalize marketing mix elements in fully automated closed loop cycles; and finally (6) how to best apply analytical methods to protect data security and consumer privacy.

KG: And, thinking a decade or so ahead, can you offer any thoughts on what marketing analytics might look like then?  Do you think there are major surprises in store for marketing researchers and data scientists?

MW: In my view, two of the most important developments in the coming decade are the Internet of Things (IoT) and Natural User Interfaces (NUI). Well over 15 billion devices already have sensors that enable them to connect with other devices and transfer data without human interaction. Add to this the trend toward wearable devices, which increasingly collect physiological data and communicate with other devices, in some cases blurring the line between the consumer and the device. The IoT has already started to bring the offline world online, and thus to open up many offline behaviors and interactions to the same type of analysis as is now possible for online behaviors. The IoT will change the way people interact with their man-made environment, become a major source of new product and service development, and generate massive data in the process. In addition, the rapid development of NUIs enables people to interact with their devices through voice, motion control, gaze, facial expressions, and in some cases even through mere thought. Thus, people are increasingly interacting with devices as if they were other people. This will change the nature of the interactions, and as data on speech, eye movements, facial expressions, and motions are recorded and made available to marketers at massive scale, it will necessitate the development of tools for their real-time analysis. This will open up entirely new ways for product and service customization and marketing-mix personalization, but very little work has yet addressed the analytical requirements for these developments.

KG: Lastly, when looking to hire analytics staff or subcontract analytics work, what are the most important considerations? What should clients be looking for in an analyst or analytics company?

MW: Marketing analysts typically work at the interface of statistics, computer science, and marketing, and they need to have broad and deep skills. In addition, areas of marketing such as advertising, product development, search marketing, and segmentation each have their own requirements for data and analytics. Analysts therefore need deep knowledge of marketing modeling, marketing-mix optimization, and personalization, and must be able to apply state-of-the-art statistical, operations research, and machine learning methods. People with only excellent technical skills are often not as effective in companies as people who also have significant domain knowledge, because the latter know how to interpret data and findings in the light of extant knowledge on marketing and consumer behavior. Moreover, successful analysts are capable of effectively communicating the results obtained from analytical techniques to decision makers.

Business leaders believe that the difficulty of finding talent with these skills is the main barrier to implementing big data analytics. In successful companies, analysts are often cultivated through continuous on-the-job training. The education of marketing analysts with such a broad and deep skill set has posed a challenge for business schools. In addition to existing specializations in undergraduate and MBA programs at many universities, recently created master’s programs in marketing and business analytics focus on developing these multidisciplinary skills in students who already have rigorous training in the basic disciplines. These programs offer great promise. Closer collaboration between universities and companies is needed, however, to make sure that educational programs remain relevant to the requirements of the industry.

KG: Thank you, Michel!


For a detailed look at marketing analytics see Wedel, M. and P.K. Kannan, 2016. Marketing Analytics for Data-Rich Environments. Journal of Marketing, 80 (6), 97-121.

Kevin Gray is president of Cannon Gray, a marketing science and analytics consultancy. 

Michel Wedel is Distinguished University Professor, University of Maryland. He has received the Dr. Hendrik Muller Prize for outstanding contributions to the social sciences from the Royal Dutch Academy of the Sciences, the Charles C. Parlin award for exceptional contributions to Marketing Research from the American Marketing Association and the Churchill Award for lifetime contributions to the study of marketing research from the American Marketing Association. He is a fellow of the American Statistical Association and the Institute for Operations Research and Management Science.

Innovate With Confidence, Not Buzzwords

Too many organizations resort to innovation theater, but the only real option is to replace buzzwords with action.

Editor’s Note: This post is part of our Big Ideas Series, a column highlighting the innovative thinking and thought leadership at IIeX events around the world. Thor Ernstsson will be speaking at IIeX North America (June 12-14 in Atlanta). If you liked this article, you’ll LOVE IIeX NA. Click here to learn more.

By Thor Ernstsson

By now you’ve seen the stats: the average duration of a company being listed in the Fortune 500 has rapidly decreased over the past few decades. Innovate or die is the mantra, but what exactly does that mean?

Unfortunately, too many organizations resort to innovation theater – setting up labs, yelling at people to think faster, and slinging buzzwords for PR purposes rather than out of any genuine emphasis on delivering value to customers. Announcements in TechCrunch, casual dress codes, and sleek offices work for a while, enabling the company to feel like it’s making progress…at least at first.

But it all eventually comes crashing down when people start taking a look at real outcomes…or even the bottom line. Well-intentioned companies do try, but many learn the same lessons as Blockbuster, Kodak, and RadioShack a bit too late.

The only real option is to replace buzzwords with action. It’s not easy and there is no magic to it either. Here are three quick examples:

  • “Customer centricity”: When organizations talk about returning to their roots and becoming ‘customer-centric,’ they often limit that to their marketing team instead of the core organization. However, customer centricity isn’t just a codeword for another re-org. It’s about a simple shift in mindset from focusing on internal capabilities to externally focusing on customers’ jobs-to-be-done.
  • “Intrapreneurship”: I’ve had dozens of former business leaders tell me about ‘inspirational’ speeches Fortune 500 executives give where the key takeaway is for everyone to “just be more creative like Apple.” Intrapreneurship isn’t about willing a startup culture into existence. It takes a realignment of incentives, risk tolerances, and a safe space for experimentation.
  • “Fail fast”: This Silicon Valley mantra has backfired for many global organizations who tarnished their brands by launching tons of new, un-validated products. In reality, the emphasis shouldn’t be on failing fast, but on learning fast. Product and innovation teams need to learn to cut through the red tape to optimize for learning and get actionable customer feedback before investing resources in engineering solutions.

If you’re interested in shifting from buzzwords to actual results, come hear my talk at IIeX NA: How to cut through bullshit to build great products. I’m also excited to attend talks by Paul Field and Lisa Courtade.

Some Help in Evaluating Subconscious, Implicit, System 1 Measures

If we claim a measure is “implicit”, let’s define the implicit features we’re measuring.

Editor’s Note: This post is part of our Big Ideas Series, a column highlighting the innovative thinking and thought leadership at IIeX events around the world. Paul Conner will be speaking at IIeX North America (June 12-14 in Atlanta). If you liked this article, you’ll LOVE IIeX NA. Click here to learn more.

By Paul Conner, Founder & CEO of Emotive Analytics

Ah, the subconscious! Many marketers are searching for what’s there because the subconscious can greatly influence consumer behavior.

There are many ways to see what’s in the subconscious. Psychophysiology (e.g., brain scans, biometrics, facial coding, etc.), metaphor elicitation and analysis, and implicit measurement are three prominent approaches. I can’t address all of them here, but I will comment on implicit measurement.

In marketing and consumer research, the subconscious is also referred to as nonconscious, implicit, and System 1. When it comes to implicit measurement, I’ll suggest the term “implicit” needs more clarification and should not be completely synonymous with subconscious, nonconscious, and System 1.

In our (Emotive Analytics’) implicit measurement work, we see confusion and disagreement on what is called implicit measurement. Specifically, we see measures characterized as “true implicit” versus “fast explicit”. The primary difference between the two comes from whether the stimulus of interest (SOI; e.g., a brand, ad, package design, etc.) is directly and consciously evaluated in the data collection process.

What are called “true implicit” measures use “indirect measurement” and sometimes, but not always, reaction time in their implicit scores. For these techniques, indirect measurement means that respondents are not directly, not consciously evaluating the SOI, but the SOI is incidentally influencing implicit measures.

What are called “fast explicit” measures use “direct measurement” and reaction time in their implicit scores. For these techniques, direct measurement means that respondents are directly, consciously evaluating the SOI. Implicit associations with SOIs are those that occur very fast (the exact threshold is not universal), before explicit processing (a.k.a. “thinking” or System 2) has time to kick in.
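To make the distinction concrete, here is a minimal, purely hypothetical sketch of a fast-explicit scoring rule. The 800 ms cutoff, the rating scale, and the function name are all assumptions chosen for illustration; no specific vendor’s method is being described, and as noted above, real thresholds vary.

```python
# Hypothetical "fast explicit" scoring sketch: a respondent directly rates
# the stimulus of interest (SOI), but only responses faster than an assumed
# cutoff are treated as implicit (i.e., before System 2 kicks in).

FAST_CUTOFF_MS = 800  # assumed threshold; not a universal standard

def implicit_score(responses):
    """responses: list of (rating, reaction_time_ms) tuples for one SOI.

    Returns the mean rating over fast responses only, or None if no
    response beat the cutoff (nothing qualifies as implicit).
    """
    fast = [rating for rating, rt in responses if rt < FAST_CUTOFF_MS]
    if not fast:
        return None
    return sum(fast) / len(fast)

# Example: four evaluations of one brand (rating 1-5, reaction time in ms).
# The two slow responses are discarded as explicit processing.
data = [(5, 450), (4, 620), (2, 1900), (1, 2400)]
print(implicit_score(data))  # -> 4.5
```

Note how the slow, deliberate ratings pull in the opposite direction from the fast ones: the whole point of the threshold is that the two can diverge.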

Users of each approach call their technique “implicit”, implying that it measures subconscious, implicit, System 1 content. The true implicit camp claims that fast explicit approaches aren’t truly implicit because conscious reflection on an SOI, no matter how fast its evaluation, makes the process explicit.

Let me negotiate a truce. Jan De Houwer and Agnes Moors have extensively studied implicit processing and measurement. They suggest that implicit processing can possess any or all of the following features: uncontrolled, unintentional, goal independent, nonconscious (in one of several ways), efficient, and fast. Any or all allows many approaches to be called implicit, including those, like fast explicit, that involve direct, conscious evaluation of SOIs.

However, De Houwer and Moors (2012) strongly suggest that researchers identify the specific features that make up their implicit measure. In that way, clients can know, and make decisions based on, the features involved. For instance, if truly nonconscious associations are important, then fast explicit measures are not appropriate. However, if fast conscious reactions are OK, then fast explicit measures are OK, too, and they can be called implicit.

Let’s try following De Houwer and Moors’ advice. If we claim a measure is “implicit”, let’s define the implicit features we’re measuring. Furthermore, with those features defined, let’s make sure we use implicit measures with features consistent with our clients’ applications.

De Houwer, J. and Moors, A. (2012). What are implicit and explicit processes? In R. Proctor & J. Capaldi (Eds.), Psychology of Science: Implicit and Explicit Processes.

Jeffrey Henning’s #MRX Top 10: Superpowers and the Mad Scientists of MRX

Posted by Jeffrey Henning Wednesday, April 26, 2017, 6:00 am
Of the 2,663 unique links shared on the Twitter #MRX hashtag over the past two weeks, here are 10 of the most retweeted...

By Jeffrey Henning


  1. What Are Opinion Polls – With the UK’s snap General Election upcoming, the UK Market Research Society has republished a useful document to educate journalists and other non-researchers on how to effectively understand and evaluate opinion polls.
  2. ESOMAR Congress 2017 Programme – Get a sneak peek at the program for the 2017 ESOMAR Congress, which takes place this September in Amsterdam.
  3. Understanding the Most Important Shopper – Hannah Chapple analyzes different types of mothers, from Corporate Moms to Celebrity Gossip Moms, and how the Consumer Packaged Goods industry can utilize these segments to more effectively reach their target audience.
  4. 4 Ways to Raise the Profile of Your Insight Team – On behalf of Vision Critical University, Ray Poynter shares best practices from insight teams around the country to help you increase your influence in your organization.
  5. What is the Scientific Method, and How Does it Relate to Insights and Marketing? – Ray Poynter outlines the scientific method and discusses why it is critical for market researchers to follow it.
  6. Be a Part of the Future of Market Research (In Just 15 Minutes) – A call to action by Anke Moerdyck at InSites Consulting to take the most recent GRIT survey.
  7. What’s Your Superpower? – Danielle Todd shares some inspirational advice and takeaways from the superpower-themed Women in Research (WIRe) London Spring event.
  8. Research Methodologies for Africa: Desktop Research – Yannick Lefang compares desktop surveys against mobile for the nascent market research landscape growing in Africa.
  9. These Natural Beauty Brands Are Using Big Data to Give Cosmetics a Makeover – Elizabeth Segran outlines cosmetic retailer Follain’s efforts to use a data-driven method to bring all-natural beauty products to market.
  10. Marketing Budgets Grow But Not for Market Researchers – A report by the Institute of Practitioners in Advertising shows a continuing trend of declines in market research budgets, even as overall marketing budgets grow.

Note: This list is ordered by the relative measure of each link’s influence in the first week it debuted in the weekly Top 5. A link’s influence is a tally of the influence of each Twitter user who shared the link and tagged it #MRX, ignoring retweets from closely related accounts. The following links are excluded: links promoting RTs for prizes, links promoting events in the next week, and links outside of the research industry (sorry, Bollywood).

3 Reasons Why Online Surveys are Better Today Than A Year Ago

There have been many advancements in the evolutionary process of online survey technology and online sampling.

By Jim Whaley

If you ask most researchers about online surveys, I think most would agree they’ve become a principal practice, and perhaps the first choice of clients and practitioners, for commercial work and a steadily increasing amount of public-sector work around the globe.

There have been many advancements in the evolution of online survey technology and online sampling. As a research industry management consultant, it is both a personal joy and a constant vigil to stay on top of short-term and long-term trends.

The following are three observations I firmly believe are contributing to the overall improvement of the respondent experience and of the quality and value of the data ultimately delivered to the client:

“Our culture runs on coffee and gasoline, the first often tasting like the second.” – Edward Abbey

  1. Technology Democratization: Let the Specialization Begin

In the past few years, technology has done 2 important things:

  • It has allowed us to reach or surpass critical mass connectivity across the globe.
  • Systems that attract and engage people in research have become commercially affordable to businesses unwilling to invest millions in proprietary systems.

As the title of this post alludes to, the days of mass optimization of the panelist experience and the client experience are being replaced by the “Experience Economy Age”. We are seeing the resurgence of many smaller firms with specialty experience in healthcare, B2B, or tech.

There are certainly firms that have great vertical skills such as programming and hosting as well as platforms that can deliver data in portals and dashboards providing you the ability to do your own online analysis and charting. Or, they can do it for you.

2. Mobile: Just Not a Thing

I participated in several marketing and industry shows this year (including technology-focused events), and I noticed that no one was passionately clamoring about how we must solve the “Mobile Imperative”. At IIEX North America, Patrick Comer of Lucid stated that of the 1 million respondents passing through Fulcrum® Exchange (the largest sample exchange in the world) daily, 30% are now on a mobile device, up significantly from the year before. This means that as an industry we have done the work to optimize our surveys and engage respondents. Mobile is now mainstream, normal, and expected. We are taking advantage of the broad access and the deep insight experience mobile affords us.

3. API: Abundant Persons to Interview

Seriously though, when we first started to build online panels, we thought recruiting people who matched the census and paying them well to keep doing surveys would work. Well, it did for a while, but all we really did was take an old model and replicate it on new technology. We never achieved operational efficiency, even after continuing to apply well-intentioned optimization paradigms like survey routers, etc.

Then we decided to let the Internet be the Internet.

Thinking beyond the traditional market research panel: respondents live in many vibrant, engaged communities throughout the web. They opt in to surveys they qualify for, whether regularly or, for some people, just occasionally.

With each of these communities being open to each other, application programming interface (API) resource sharing is not only possible, it’s fast and efficient. More people have the opportunity to take a survey now than ever before. We know who respondents are, and we can build very representative and/or targeted samples based on profile and prescreening data very rapidly and cost-effectively.

The point is this: You can pick a boutique or a big house and know that they have the same level playing field to access the same quality respondents. You can work with a trusted firm with specialized skills and know that you will get a custom solution and a good value. And maybe best of all – your coffee can taste like anything you want.

Using Text Analytics to Tidy a Word Cloud

The trick to a great word cloud is to first tidy up the raw text using automated text analytics.

By Tim Bock

When people create word clouds, it is common to want more control: limit the word cloud to frequently occurring words, join words together into phrases, and automatically group words that have the same meaning. The trick to doing this is to first tidy up the raw text using automated text analytics, then create the word cloud from the tidied text.

Why don’t people like Tom Cruise?

In my earlier post, I explained how you can create and interactively modify word clouds in Displayr using an example about why people dislike Tom Cruise. In this post, I use text analytics to create a better word cloud, faster.

As discussed in this post, text analytics routinely involves a pre-processing phase, where uninteresting words are removed, spelling is corrected, words with a common root are merged, phrases are learned, and infrequent words are removed. This can be automated in Displayr by selecting Insert > More (Analysis) > Text Analysis > Setup Text Analysis, selecting the appropriate options in the object inspector, and then ticking Automatic.

Below, the left side shows the main output of the text analysis setup in Displayr, showing the frequency with which words appear after the text analysis. When this output is selected, as below, you can also see the settings on the right. For example, you can see the Text Variable being analyzed, which words have been removed, and that it is limited to showing words that appear 10 times or more. 

When doing this, keep in mind that pairs of words and phrases (e.g., don’t like) are better dealt with interactively in the word clouds, rather than by the text analysis.
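Outside of Displayr, the same tidying steps can be sketched in a few lines of Python. The stopword list, the root-merging table, and the frequency threshold below are illustrative stand-ins for what the automated setup does; a real pipeline would use a proper stemmer and stopword dictionary.

```python
# Minimal sketch of text tidying before a word cloud: lowercase, strip
# punctuation, drop stop words, merge simple word variants, and keep only
# words above a frequency threshold. All lists here are toy examples.
import re
from collections import Counter

STOPWORDS = {"i", "he", "his", "is", "are", "a", "an", "the", "of", "and", "to"}
MERGE = {"movies": "movie", "acting": "act", "acts": "act"}  # toy root merging
MIN_COUNT = 2  # keep words appearing at least this often (Displayr used 10)

def tidy_counts(texts):
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[a-z']+", text.lower()):
            word = MERGE.get(word, word)  # merge words with a common root
            if word not in STOPWORDS:
                counts[word] += 1
    # drop infrequent words
    return {w: n for w, n in counts.items() if n >= MIN_COUNT}

responses = ["He is a crazy scientologist", "Crazy! And his movies are bad",
             "Bad acting, bad movies"]
print(tidy_counts(responses))
```

The resulting word-to-frequency mapping is exactly the kind of input a word cloud renderer needs: word sizes proportional to the tidied counts.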


Creating a word cloud from the tidied text


Now that we have tidied the text data, we need to create a new variable in the data file with the tidied text. We need to do this because the word clouds take a variable as an input. To create a variable, select the output, and then select Insert > More (Analysis) > Text Analysis > Techniques > Save Tidied Text, which causes a new variable to appear at the top of the data tree, as shown to the right.

To create a word cloud, we now create a new table by dragging the new variable onto the page, and then select Charts > Word Cloud, adding any phrases that we want to appear (e.g., Tom Cruise). We then get the much tidier word cloud below.

If you want to try it yourself, click here.

10 Best Practices to Fast Track Your Qual

Scoot Insights shares 10 of their Best Practices that allow teams to conduct agile qualitative research without compromising depth or quality of insights.

By Katrina Noelle and Janet Standen

We at Scoot Insights answer core business questions efficiently and effectively with agile, iterative, collaborative, qualitative research. We’d like to share 10 of our Best Practices that allow teams to conduct agile qualitative research without compromising depth or quality of insights.

  1. Set goals collaboratively and narrow in on one core objective
    • To propel research forward in an agile manner, scope creep must be resisted at all costs. Center your team on their primary reason for doing the piece of research, namely, the key business decision they will make based on the insights.
    • We recommend scheduling an internal objective alignment session even before briefing your research partner. Use this session to get input from the key stakeholders and understand the following elements that will impact your research design:
      1. Timing of decisions to be made based on the research
      2. Deliverables that will help you make those decisions
    • If other objectives start to be added to the project, consider giving them their own fieldwork or expanding the timeline of the original.
  2. Use experienced & immersed moderators
    • If you are looking to make business decisions quickly, you need research partners who can be briefed efficiently on the objectives and roll with iterative development of the stimuli, discussion guide, etc.
    • Choose moderators with experience in your product category and chosen research methodology. This is not the time to train internal staff to moderate.
    • Ensure the moderator is comfortable with iterative materials and on-the-fly updates during the fieldwork.
  3. Maximize stakeholder involvement throughout
    • Immerse your team early in the research objectives and parameters.
    • Ensure they are actively involved in the fieldwork; give everyone a job/role.
  4. Integrate iterative design
    • Be open to changing materials. Be ready to change concepts, stimuli, and discussion flow as you learn throughout the qualitative fieldwork. Once you hear enough helpful feedback on one version of your idea, integrate it and test that iteration, and so on.
  5. Use back-to-back, time-efficient audience sessions
    • We pack our research days, scheduling back-to-back mini group discussions over the span of an 8-hour workday. This allows us to hear from many more participants than traditional research scheduling allows.
    • Each participant has more airtime and a more intimate environment in which to share their opinion.
    • Another benefit is that the presence and time commitment of the behind-the-mirror team is fully maximized.
  6. Real-time synthesis by backroom facilitator
    • Leverage dual moderators: one in the front room and one in the backroom. This will help your team with your real-time iteration while providing live qualitative synthesis of the insights.
    • In the back room, be sure to use good old-fashioned flip charts, Sharpies, and Post-it notes to capture customer and team member insights in the moment.
  7. Conduct immediate debrief workshop
    • Inspire shared understanding by bringing cross-functional teams closer to customers.
    • Invite your backroom moderator to conduct a group discussion on the themes and take-aways from the day with team attendees.
  8. Merge client expertise and audience learning
    • This is where your expert moderating team steps in again to help. They can help your internal teams to identify competitively distinctive and customer-driven actions.
    • They can also keep the voice of the customer in the room during the debrief session, merging customer insights with client expertise.
  9. Report out in 24 hours
    • We recommend moving quickly to synthesize the interactive debrief workshop conclusions and action items into a short and sweet report.
    • Be sure to circulate this summary the following day to help the team take action quickly.
  10. Integrate insights into action quickly!
    • Work with your stakeholders to integrate the insights directly into business action. Keep the insights top of mind as your clients move through the decision-making process. If you’ve kept them engaged in the research process throughout, they should be more easily able to integrate those insights into business action.

Often qualitative learning is seen as a slow-moving, costly part of the research process. Consider revving up your qualitative insights so that business decisions can be informed by valuable in-depth qualitative feedback more efficiently and effectively.

How to Solve the Most Common Data Problems in Retail

The most successful retail companies use data science and predictive analytics to improve efficiency, improve marketing campaigns, and gain customer insights that give them a competitive advantage.

By Pauline Brown

In the retail business, big data is poised in the coming years to open up huge opportunities in the way stores (both physical and online) fundamentally operate and serve customers. Given retail’s incredibly small margins, big data will also provide much-needed efficiency improvements – from tighter supply chain management to more targeted marketing campaigns – that can make a big difference to a retail business of any size.

Making data-driven decisions is no longer about learning from the past; it means making changes to the business constantly based on real-time input from all data sources across the organization. Making predictions and applying machine learning draws on traditional data, but also on new and innovative sources like connected Internet of Things (IoT) devices and sensors, or, going a step further with deep learning, on unstructured data from things like static images or cameras monitoring stock in warehouses. Consumers can be fickle, so being able to accurately anticipate what they will do next and quickly react is what puts the most innovative and successful retailers above the rest.

Data science software maker Dataiku recently explored the most common data problems facing retail, how to solve them, and the steps that any retail organization can take to become more data-driven.

PROBLEM #1: Siloed, Static Customer Views

Many retailers still struggle with siloed data – transaction data lives apart from web logs, which in turn are separate from CRM data, and so on.

SOLUTION: Complete, Real-Time Customer Views

Cutting-edge retailers look at customers as a whole, combining traditional data sources with non-traditional ones (like social media or other external data sources that can provide valuable insight). The payoff:


  • More accurate and targeted churn prediction.
  • Robust fraud detection systems.
  • More effective marketing campaigns due to more advanced customer segmentation.
  • Better customer service.
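As a rough sketch of what combining siloed sources into a single customer view means in practice (plain Python with hypothetical field names; a real pipeline would use a database or a data-prep tool):

```python
# Minimal sketch of a "customer 360" join: fold records from siloed sources
# (transactions, web logs, CRM) into one view keyed on customer ID.
# All field names and the merge rule are illustrative assumptions.
from collections import defaultdict

transactions = [{"cust_id": 1, "spend": 120.0}, {"cust_id": 1, "spend": 80.0}]
web_logs     = [{"cust_id": 1, "visits": 14}]
crm          = [{"cust_id": 1, "segment": "loyal", "email_optin": True}]

def customer_360(*sources):
    view = defaultdict(dict)
    for source in sources:
        for record in source:
            cid = record["cust_id"]
            for key, value in record.items():
                if key == "cust_id":
                    continue
                # sum repeated numeric facts (e.g. spend); otherwise keep latest
                if (isinstance(value, (int, float)) and not isinstance(value, bool)
                        and key in view[cid]):
                    view[cid][key] += value
                else:
                    view[cid][key] = value
    return dict(view)

print(customer_360(transactions, web_logs, crm))
# e.g. {1: {'spend': 200.0, 'visits': 14, 'segment': 'loyal', 'email_optin': True}}
```

With every fact attached to one customer key, downstream models (churn, fraud, segmentation) can consume a single record per customer instead of querying each silo.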

PROBLEM #2: Time Consuming Vendor & Supply Chain Management

Supply chains are already driven by numbers and analytics, but retailers have been slow to embrace the power of real-time analytics and to harness huge, unstructured data sets.

SOLUTION: Automation and Prediction for Faster, More Accurate Management

Combine structured and unstructured data in real time for things like more accurate forecasts or automatic reordering. The payoff:


  • More efficient inventory management based on real-time data and behavior.
  • Optimized pricing strategies.
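A minimal sketch of the automatic-reordering idea, using the classic reorder-point rule (all quantities below are hypothetical):

```python
# Reorder when on-hand stock drops below expected demand over the supplier
# lead time plus a safety-stock buffer. A textbook rule, shown here only to
# illustrate automated reordering; real systems use forecasted demand.

def reorder_point(daily_demand, lead_time_days, safety_stock):
    return daily_demand * lead_time_days + safety_stock

def should_reorder(on_hand, daily_demand, lead_time_days, safety_stock):
    return on_hand <= reorder_point(daily_demand, lead_time_days, safety_stock)

# 40 units/day forecast demand, 5-day supplier lead time, 50 units safety stock
print(reorder_point(40, 5, 50))        # 250
print(should_reorder(230, 40, 5, 50))  # True: stock is below the reorder point
```

Feeding the `daily_demand` input from a real-time forecast, rather than a fixed average, is where the structured-plus-unstructured data described above comes in.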

PROBLEM #3: Analysis Based on Historical Data

Looking back at shoppers’ past activity often isn’t a good indication of what they will do next.

SOLUTION: Prediction and Machine Learning in Real Time

Instead, real-time prediction based on current trends and behaviors from all sources of data is the key. The payoff:


  • Anticipating what a customer will do next.
  • A more agile business based on up-to-the-minute signals.
  • The ability to adapt automatically to customer behavior.

PROBLEM #4: One-Time Data Projects

Completing one-off data projects that aren’t reproducible is frustrating and inefficient.

SOLUTION: Automated, Scalable, and Reproducible Data Initiatives

The best data teams in retail focus on putting a data project into production that is completely automated and scalable. The payoff:


  • A more efficient team that can scale as the company grows.
  • With reproducible workflows, the team can work on more projects.

While each organization is different, the data challenges are the same. It takes a data production plan to guide a team of any size to successfully producing a working predictive model that yields meaningful insights for the business.

How to Complete any Data Project in Retail

The most successful retail companies worldwide solve these four issues by efficiently leveraging all of the data at their fingertips and by following set processes to see data projects through from start to finish. They also ensure those data projects are reproducible and scalable, so the data team is constantly able to work on new projects instead of maintaining old ones. This can be as easy as following seven fundamental steps to completing a data project:

  1. DEFINE: Define your business question or business need: what problem are you trying to solve? What are the success metrics? What is the timeframe for completing the project?
  2. IDENTIFY DATA: Mix and merge data from different sources for a more robust data project.
  3. PREPARE & EXPLORE: Understand all variables. Ensure clean, homogenous data.
  4. PREDICT: Avoid the common error of training your model on both past and future events.  Train only on data that will be available to you when a predictive model is actually running.  Choose your evaluation method wisely; how you evaluate your model should correspond to your business needs.
  5. VISUALIZE: Communicate with product/marketing teams to build insightful visualizations.  Use visualizations to uncover additional insights to explore in the predictive phase.
  6. DEPLOY: Determine if the project is addressing an ongoing business need, and if so, ensure the model is deployed into production for a continuous strategy and to avoid one-off data projects.
  7. TAKE ACTION: Determine what should be done next with the insights you’ve gained from your data project.  Is there more automation to be done? Can teams around the company use this data for a project they’re working on?
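Step 4's warning about training on future events boils down to splitting data by time rather than at random, so the model is evaluated only on events after the training window. A minimal sketch (the records and cutoff date are hypothetical):

```python
# Time-based train/test split: everything before the cutoff is training data,
# everything on or after it is held out for evaluation. This prevents the
# model from "seeing the future" during training (data leakage).
from datetime import date

orders = [
    {"date": date(2017, 1, 10), "basket": 35.0},
    {"date": date(2017, 2, 2),  "basket": 42.0},
    {"date": date(2017, 3, 15), "basket": 58.0},
    {"date": date(2017, 4, 1),  "basket": 61.0},
]

def time_split(records, cutoff):
    train = [r for r in records if r["date"] < cutoff]
    test  = [r for r in records if r["date"] >= cutoff]
    return train, test

train, test = time_split(orders, date(2017, 3, 1))
print(len(train), len(test))  # 2 2
```

A random split of the same records could put an April order in training and a January order in test, quietly inflating the evaluation metrics.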

There is no doubt that data science, machine learning, and predictive analytics combined with big data will become an even more fundamental part of both online and traditional retail in the coming years. All retail organizations will use them, but only the successful ones will have an effective data production plan that yields insights giving them an edge over the competition.