
Five Things to Look Forward to at the ESOMAR 2015 Congress

JD Deitch details what he's most looking forward to at next week's ESOMAR Congress.



Editor’s Note: The 2015 ESOMAR Annual Congress begins this Sunday, September 27th in Dublin, Ireland. JD Deitch (Twitter: @JDDeitch), a frequent contributor to GreenBook and COO of Ask Your Target Market, sets up the event with this article and will be providing daily updates.


By JD Deitch

The Annual ESOMAR Congress is always an event to look forward to on the conference circuit. The event is well-attended, content and networking are of high quality, and the locations are always terrific. Here’s what I’m looking forward to at the 2015 installment.

1: The Networking

ESOMAR brings together a broad swath of the industry in terms of size and geographic reach. Big research suppliers—global by nature and naturally well represented in Europe (which is what the E in ESOMAR stands for)—are always there in force. While it’s not “the” event to see clients, big clients are always represented. This year’s attendee list includes recognizable CPG, beverage, media, and fashion brands, among others from all over the globe. Finally, while it can sometimes be a budgetary stretch, many medium-sized companies (clients and suppliers) come as well.

What this means is that the opportunity for expanding one’s horizons is greater than at many other conferences. Personally, I have found ESOMAR to be a great conference for serendipitous connections.

2: The Exhibition Floor

I love to stroll through the exhibition floor at a conference, as it says a lot about the firmographics of the event. It also helps me stay up to date on what my suppliers and competitors are doing. By virtue of its broad reach, the ESOMAR Congress delivers on this front as well, with a great global mix of big and not-so-big providers. I find it particularly interesting to talk to the smaller companies who are spending a meaningful part of their marketing budget on a dedicated three square meters of space. There’s a reason they’re there, and it’s interesting to hear their stories.

3: Good Content

ESOMAR’s content is always a broad and balanced mix. Specialists beware: this isn’t the show to attend if you are trying to get a real lay of the land of what’s new in the industry. IIEX holds that distinction for me. Nor is it the place to go for a more detailed look at methodology, as ARF and CASRO do this better. But there is something of interest for everyone, which is a credit to the organizers. This year’s event blends new and traditional approaches, from methodology to application.

Here are some sessions that look promising to me:

Sunday (Workshop day)

  • Survey Design for Mobile Devices (3pm) – presented for free by SSI. This is a must for anyone who writes surveys or designs survey research and who has been living in isolation for the past two years or has otherwise ignored the bell tolling for long, punishing, non-smartphone-friendly questionnaires. SSI doesn’t have any particular insight into this topic that distinguishes it from other panel and research companies. They do, however, have motive. Regardless of their interest, the hands-on opportunity to get free advice from a smart researcher (Pete Cape) is worth it.

Monday (Day 1)

  • The Chance of a Lifetime (2:40pm Room 1) – presented by Namita Mediratta from Unilever UK’s market intelligence group. A client-side researcher puts her career on the line with one piece of research? I hope the presentation is as good as its abstract. If it is, then this is a must for all research suppliers. You can’t possibly hope to deliver anything meaningful without empathizing with what your client is really going through.
  • Reliability and Predictive Validity in Consumer Neuroscience (3:50pm Room 1) – presented by Michael Smith from Nielsen’s Consumer Neuro group. Neuroscience is one of the most fascinating new branches of marketing science out there. Its detractors, however, point out that typical sample sizes raise issues of generalizability, reproducibility, and explanatory power. This is the first work I’ve seen that addresses these issues head-on. If Michael has, as his abstract states, found that neuro results are reliable and predictive, then it would be a huge affirmation of this technique.
  • Leveraging Passively Monitored Communities for Ongoing Insight (4:10pm Room 2) – presented by Chad Maxwell and Dave Choate from Starcom Mediavest. I like the look of this for two reasons. First, it’s great to see an advertising agency take on research as they’re going to be laser-focused on results without the fluff. Second, they’re blending data sources (survey and social), which I believe is going to define the industry in years to come.

Tuesday (Day 2)

  • Wow, You Do Research WHERE? (10am Room 1) – a panel moderated by Jon Puleston from Lightspeed GMI. I’ll confess that I’m jealous of Jon’s spot on this panel. Anyone who tends to focus on the highly-developed world should attend this just on general principle. I’ve had the opportunity to work in both big and small markets and it has made me indisputably better as a researcher and a business partner. The origins of the participants on the panel should make this a can’t-miss conversation.
  • Hooked on Shopping (11:45am Room 1) – presented by AOL UK, USA, and Canada. The folks at AOL observe that people are, in effect, constantly shopping. Their work aims to uncover these motivations from which advertisers can develop creative and editorial content. It doesn’t sound terribly new, but I am curious about the underlying motivations, and the list of presenters suggests a multinational focus. But there’s a time conflict with the presentation below.
  • You Call it a Snack? (11:55am Room 2) – presented by Brett Ao, Labbrand, a candidate for Young Researcher of the Year. A European entrant in the Chinese food market tries to understand the country’s rich diversity while maintaining brand authenticity? This sounds like a “boil the ocean” type of project, but I suspect it’s on the Young Researcher of the Year shortlist because Brett has been able to find clarity for the client. There’s a time conflict here with the AOL shopping session above though. Right now I’m leaning toward Brett’s session.
  • Brand Tracking Revelations (2pm) – roundtable moderated by Infotools. If there is one body of work that is in desperate need of overhaul, it’s brand tracking studies. The traditional industry is hopelessly stuck in a vicious circle of economic self-interest that prevents meaningful change short of disaster. I’m really hopeful this is a provocative discussion and not warmed-up platitudes about disruption and change.
  • Insight to Action: Using Survey Data to Target Customers and Increase ROI Through Digital Media (5:05pm) – presented by TNS. TNS has gotten a lot of mileage out of Jan Hofmeyr’s Conversion Model, but here they claim to have connected it to actual programmatic marketing operations. In case anyone has any doubt, this is what it means for research to create real value: it needs to be tied to execution. Firms competing for advertising business that can’t do this don’t have a chance.

4: Catching Up with Former Colleagues & Friends

I genuinely like the research industry and the people who work in it, and there is no better time to reconnect with colleagues and friends than the nightly cocktail hours and festivities. After a long day of presentations and meetings, it’s really pleasant to catch up with people you haven’t seen in a while, trade stories, and just relax. We’re a group that knows how to have a good time and is honest enough to admit it, though I think it’s fair to say we’re all glad these shows don’t last more than a few days.

5: Dublin!

Long may ESOMAR reign for its selection of great destinations! The Congress always ends up on my short list for this alone. This is not an airport-hotel-in-a-nondescript-city show. Dublin adds its name to the list of fantastic ESOMAR destinations—Berlin, Istanbul, and Nice, the previous three—that any traveler would be happy to visit.

Resist the urge to go the cliché tourist route. The Guinness and Jameson’s factory tours are nice, but not where you should spend your limited time. Give Temple Bar, Dublin’s version of Bourbon Street, the swerve as well.

It’s a twenty-minute walk to Trinity College and the eastern end of a huge concentration of great restaurants and pubs, Dublin Castle, its glorious cathedrals, and St Stephen’s Green. Dublin has a great food scene. As you might expect, you’ll be able to find top-notch Irish cooking, traditional and modern dishes, at all price points. If you’re seeking something more cosmopolitan, there are sushi bars, wine bars, a very good Mexican restaurant, and dozens of others. I typically triangulate through TripAdvisor. Email or tweet me if you’re looking for ideas or a companion!

Stay tuned to the GreenBook blog for daily updates. See you there!


Storytelling: New Science Is Enriching An Ancient Art

To better understand the importance of storytelling and how to master the art and science of it, it helps to take a look at what experts across a number of industries are saying.


Editor’s Note: Communicating impactful information in a concise, engaging way is not just a business imperative, but also a cultural shift. As we increasingly move to a universal “visual literacy” model driven by technology, the need to condense needed information into intuitive graphics is only going to increase in importance.

In today’s post, David Paull of Dialsmith uses examples from his business to explore how this topic is impacting not just their deliverables, but also design and use cases. It’s a great lens to view a big topic that will continue to be an imperative for the insights industry.


By David Paull: 

In the current landscape of buzzwords, “storytelling” is certainly up there. The question is: why is one of the most ancient forms of communication getting so much attention right now, especially in the market research landscape? I think the answer lies in the fact that people are looking for a more authentic way to connect, to be understood, and to persuade.

In a well-communicated story, there is a connection, and at the heart of storytelling is a two-way interaction. While one side is more active (that of the story-teller) and the other side is more passive (that of the story-receiver), there is still an interaction that is connecting both sides and it’s that interaction that’s so unique to effective storytelling. But, what makes for a well-communicated story and how do our interactions differ between stories told verbally versus stories told through written word or graphics?

My company has been fascinated with this for some time, especially because so much of our clients’ research is focused on how to craft and deliver a compelling story through advertising, messaging, legal arguments, political dialog, and entertainment. To better understand the importance of storytelling and how to master the art and science of it, it helps to take a look at what experts across a number of industries are saying.

Telling a story that resonates begins, first and foremost, with having something compelling to say. As Russ Rubin of Cambiar puts it,

“[It] is the proverbial, ‘tail wagging the dog.’ You can’t be a good storyteller without having a good story. And the telling of the story isn’t necessarily dependent on tools, technologies or methods. It’s dependent on having something worth sharing that will matter…”

When not focused so much on tools and technologies, storytellers can focus more on other verbal, and non-verbal, techniques to connect with their audience. Vanessa Van Edwards, Science of People behavioral investigator, asserts,

“Content is less important than the way the information is presented.”

Her work focuses on the importance of body language, vocal modulation, and mannerisms in connecting with an audience. This maps interestingly onto work done by Elizabeth Merrick, former senior manager of customer insights at global retailer HSN, in her study on how viewers reacted differently based on their familiarity with HSN’s on-air hosts. Merrick said,

“We learned that while more- and less-familiar hosts eventually got to the same [viewer opinion] rating level, more familiar hosts started out the segment with a faster boost while less familiar hosts got there at a much more gradual pace. This is important to know because if we’re running a shorter segment, or need to get to the value quicker, a more familiar host will better accomplish that. However, there is a higher cost to HSN for more familiar hosts and when we can reduce costs and use less familiar hosts we do so and that has a positive impact on the bottom line.”

What this tells us is that to succeed in conveying a story to viewers when time is short, it’s more about the non-verbal connection viewers have with more-familiar hosts than the fact that both more- and less-familiar hosts may be saying the same thing.

Of course, not all stories are told verbally. Especially in market research, stories are often told through data, charts, infographics, and other visuals. In those cases, how visuals tell a story is critically important. Derrak Richard, senior information designer at Market Strategies International, puts it this way,

“Today, it’s all about storytelling and infographics, and visuals are a key part of that. All researchers want to connect the reader with the data, and that’s what data visualizations and storytelling can do. But finding the story is the key. The infographics and other visuals are on top of that, helping to communicate an already good story.”

As much as effective storytelling is about communicating the right material in the right way, it’s also largely about what’s selectively left out. Kristin Luck, serial entrepreneur and growth-hacking expert, encourages efficiency in storytelling, saying,

“Applying the principles of brutal efficiency and distilling down the essence of your message both really resonate with me. Too often I see marketing and sales folks get bogged down in the details and lose the audience before they’ve communicated their message.”

Kristin also touts finding “the hook,” saying,

“I think what you leave out is important because it’s the essence of what defines ’the hook’. You want to leave your audience hungry for information.”

Dial testing (a topic near and dear to my heart) is one method Kristin relies on to help keep a story tight and on message, saying,

“Dial testing is amazing at determining which specific points are resonating with your target audience and which aren’t, and testing the impact of your ‘hook’.”

Another example of dial testing as a means of measuring the impact and persuasiveness of how a story is told is work done by Dr. Charlton McIlwain, associate professor of media, culture and communication at New York University. Dr. McIlwain and his colleague used dial testing to dissect race-based messaging in political ads, and this is what they learned:

“We set up an experiment where we wanted to expose people to political ads that had either no race-based message, one that had an implicit race-based message or one that had an explicit race-based message.

For this study, we chose ads that featured a white candidate discussing the difference between himself and his opposing African American candidate. The ads were not created as attack ads, but the candidate did focus on pointing out that the other candidate was “different.”

Our goal was to see if there was any impact on participants’ views of the candidate based on which ads they viewed. We tracked 113 participants’ reactions with the Perception Analyzer® dials. We looked to see if their ratings changed when the race-based content was introduced in the ad to see if there was a connection there.

In previous research, we were only able to gauge at the end how participants felt about a candidate, but we wanted to see how the message, images and text during the ad impacted that end judgment while participants watched and rated it in the moment. The dial results clearly showed that there are a lot of complicated and sophisticated things going on over the course of an ad that have an impact on where people end up at the end. We were able to see the movement (of the dial result lines) at precise moments— the clear static lines and then the stark point where the ratings start to fall in very close proximity to the moment when the race-based content was introduced.”

The dial results showed a clear correlation between the introduction of race-based messaging and losing the audience, which was even starker when the race-based messaging was implicit. This could certainly be useful intel to storytellers out on the campaign trail.
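To make the mechanics concrete, here is a minimal Python sketch of how moment-to-moment dial traces like these can be summarized. The data is simulated for illustration, not the study's actual Perception Analyzer output.

```python
# Simulate 113 respondents rating a 30-second ad on 0-100 dials, average the
# per-second traces, and flag the moment the mean rating falls most sharply.
import numpy as np

rng = np.random.default_rng(3)
seconds, n = 30, 113
traces = 60 + rng.normal(0, 5, size=(n, seconds))  # per-second dial positions
traces[:, 18:] -= 15                               # simulated drop at second 18

mean_trace = traces.mean(axis=0)
drop_at = int(np.argmin(np.diff(mean_trace))) + 1  # steepest one-second decline
print(f"Mean rating falls most sharply going into second {drop_at}")
```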

Storytelling as a means of communication has been around since the beginning of human interaction, and it’s the compelling nature of this communication method that has us continually studying its merits. However, storytelling best practices are not always clear cut. You can’t be a good storyteller without a good story, yet how something is communicated is often as important as what is being communicated. And what is left out is often as important as what is included. What research techniques such as dial testing allow us to do is pinpoint the behaviors of the storyteller, and the elements of the story itself, so we know what to leave in, take out, and change in order to best connect each story with its intended audience. It’s that chase for the perfect connection and interaction with an audience that makes storytelling so fascinating to keep studying and refining.


Ethnography Essentials

There are several steps to accomplishing a successful ethnography study. Here are key pieces of advice for starting your own:


By Brian Fletcher:

In new product development, how do companies determine what the consumer truly wants and needs?

Without a clear picture of how the target lives, interacts with others, and uses the category and/or the products in it, it is impossible to allocate resources toward the product development process. There is nothing more frustrating than coming up with solutions to the wrong problems—but such scenarios are difficult to avoid without good, solid information about what category and/or product users want and need. More and more, product developers rely on ethnography, the study of people’s behavior in their actual environment, to generate insights about their needs. The product observed could be the new product itself or, if it is not yet available, a competitor’s product.

There are several steps to accomplishing a successful ethnography study. Here are key pieces of advice for starting your own:

  • Understand the objectives. As with any project, the objectives should be clear and agreed to among the Team. Clear objectives for observing consumers will help the Team identify the areas they want to mine, pain points that are important and what may not fall into the scope of the project.
  • Know your target. It is important to clearly define the target before observing users. The biggest mistake the Team can make is observing/talking to the wrong consumer as it could provide direction that is at odds with how the target consumer behaves.
  • Homework could be helpful. While the methodology is designed to watch consumers function in their actual habitat, it is sometimes helpful to have them do some pre-work to make the time with them more productive. Having them shop for certain products, for example, can help get them to the point you want to observe. Asking them to videotape an infrequent behavior that you can then view together will also make the findings richer and the session more efficient.
  • Limit the number of attendees. Be mindful in considering how many observers will attend. Quite often, all key stakeholders want to attend. However, the more observers there are, the more unnatural the environment and the more uncomfortable the participant is likely to feel. That has the potential to result in a less impactful research project, so be prepared to make choices about who will attend which interviews and help the rest of the team understand why it’s so important.
  • Keep an eye on the time. Schedule enough time to allow participants to understand the objective, get comfortable with the people in attendance and begin to act naturally. Trying to compress that behavior into too short a timeframe is, well, unnatural – contradicting the main point of the methodology.
  • Keep questions to a minimum. If the objective is to observe behavior, the Team should do just that. Interrupting activity both interrupts the flow of an otherwise natural behavior and has the effect of making consumers think more about what they are doing and how they are doing it. This could actually get them to change their natural rhythm. Save questions until the end and ask about behavior observed.
  • Logistics are key. You’ll want to:
    • Allow enough travel time from one location to the next.
    • Have a schedule, directions and maps handy for all observers; send electronically beforehand and have hard-copies on hand.
    • During in-homes, be sure to find out if there will be any circumstances that would be helpful for attendees to know (e.g., large dogs in the house?)
    • During in-homes, always ensure that at least one adult will be home.
    • Let consumers know how many will be in attendance so they can be prepared when they open the door.
    • Additionally, it is less threatening if there is a mix of genders in attendance at the interviews. If that is not possible, it is important to communicate it ahead of time to the consumer.

There are challenges that come with an ethnography study, especially in new product development. For example, such studies involve a smaller sample size, which means more observations may be required to get a representative sample of behaviors across a cross-section of the target. And, of course, there are the normal human quirks to account for, such as people running late and cancellations. Given the limited number of participants in the study, those obstacles can be particularly challenging.

However, when the product Team is looking to understand actual, not claimed behavior or there is a need to see how consumers live and interact with each other and with categories/products, this technique will help you understand your consumer like never before.


First, Psychology Studies – Is #MRX Next?

Recent results from the Reproducibility Project cast a pall of doubt on any study done by anyone in any field. What are the implications for MR?



By Zontziry Johnson

On August 27, an article was published in the New York Times detailing the efforts of a team called the Reproducibility Project to replicate findings from psychology studies published in reputable journals (and by reputable, I’m referring to peer-reviewed journals like Science). In short, the results of a number of those studies could not be recreated, casting something of a pall of doubt on any study done by anyone in any field.

Should we really worry?

The first time I read through the article, I worried for anyone doing any type of research and trying to get it published. I’ve worked at a scientific research institution and am familiar with the various levels of trustworthiness of scientific journals. There’s a reason studies take so long to be published in the most credible journals: they go through a rigorous peer-review process to be sure the study was conducted according to sound scientific principles, passing a scientific “sniff test,” if you will.

However, closer scrutiny made me wonder a bit about the way that the studies were being reproduced. This quote in particular bothered me: “…there could be differences in the design or context of the reproduced work that account for the different findings.” One such example cited was a study that was reproduced using women from the United States instead of women from Italy. In this study, the findings from the reproduced study were found to be weaker than in the original study; a closer look shows that cultural differences can certainly play a role in the findings.

What’s the real issue?

I think there are two real issues at play here. The first is a question of how we talk about original studies. Are global inferences being made from studies focused on one particular culture? For example, in the study on how women rated men’s attractiveness based on where they were in their fertility cycle, which used a sample of women primarily from Italy, are generalizations being made without taking into account factors such as cultural bias? Recently, another study made headlines for finding that, as the headlines went, “Having children is one of the crappiest things that can happen to an adult.” An actual reading of the study showed, first, that it concerned German parents’ experiences with parenthood, and second, that it was looking at why German parents were more likely to have only one child, even if they were expecting to have two when first thinking about how many children they wanted. The idea explored was how supported parents were by their peers and families, and their perceptions of how the parenting experience would be. Those who didn’t have good support in place when they had their first child, and whose experiences didn’t work out as they had expected, were less likely to have a second child.

So, we need to stop generalizing results, misinterpreting them, and misrepresenting them when talking about them in the media – from well-known media outlets to our own blogs and social media shares.

Second, when a study is being reproduced, well, it should be reproduced, not approximately reproduced. I understand that doing such a thing takes significant time, effort, and money – much like the original studies did, I’m sure. But in order to be truly credible, you can’t say you’re going to recreate an apple, end up with a jicama instead (if you haven’t eaten a jicama, the texture and flavor are close to those of some apples), and then say the apple wasn’t an apple after all.

Implications for market research

What does this mean for the field of market research? I’ve been thinking about this since reading the NYT article a couple of weeks ago. Here are some of my conclusions.

  • Be sure we’re using sound methodology for our studies. Be up-front when reporting the results, specifically identifying the sample used (again, cultural biases play a role in results) and whether the results are representative of the population being studied. Remember to publish the sample size and the confidence interval for your results (see the sketch after this list). I think in the current push for faster studies and visual reports, the rigor behind some of the research can be lost, and we can end up with poorly run projects and misleading results.
  • When talking about other studies, be careful of making broad generalizations or misrepresenting the original data.
  • If something you see reported seems a bit outlandish, or very surprising, go check the original source of data.
  • Don’t just re-share a headline because the headline seemed interesting and because it’s gone viral. Read the source material. Too often, items are being reshared on social media or commented on by others without people taking the time to read the original source material. Conclusions are too often made based on others’ comments, not based on reading the original item that was shared.
  • Some studies in market research won’t be reproducible simply because we are often measuring changing perceptions among audiences. Based on a variety of factors – marketing campaigns, market influences, etc. – those perceptions are likely to change, or will have changed by the time the same study is conducted again, even if it’s done among the exact same respondents as the original study. I don’t think even trackers could be reproduced, for this very reason; they are typically tracking changes in an audience, from changes in satisfaction to changes in perception to changes in behavior.
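As a concrete illustration of the first bullet's point about reporting sample size and confidence intervals, here is a minimal Python sketch of the normal-approximation interval for a sample proportion; the 54%-of-400 figures are made up.

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """95% normal-approximation confidence interval for a sample proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# Report the interval alongside the headline number, e.g. 54% agreement, n=400:
low, high = proportion_ci(0.54, 400)
print(f"54% agree (n=400), 95% CI: {low:.1%} to {high:.1%}")  # about 49.1% to 58.9%
```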

In short, do good research and take the time to review claims before passing them along. Let’s be good stewards of our own and of others’ data.


“Pull” vs. “Push” Market Research

Have you combined "Push and Pull” Market Research techniques?



By Adriana Rocha

The business terms push and pull originated in logistics and supply chain management, but are also widely used in marketing:

Push Marketing pushes content to consumers. Also known as “traditional marketing” or “outbound marketing,” push is the “grandmother” of modern marketing. Direct mail marketing, such as catalogs and brochures, as well as radio and TV ads, are prime examples of push marketing. The marketer is in control of what the message is and how, when and where it is seen.

Pull Marketing is the opposite of push marketing. Also known as “inbound marketing”, this type of marketing “pulls” a consumer into the business, meaning: the customer seeks out your company. Today’s consumer is an avid researcher. He or she reads reviews, conducts keyword searches and asks Facebook friends for suggestions. Pull marketing creates an opportunity to attract the customers who want answers you already provide. When you see a social media offer for a product you love, this is pull marketing at work. Blog posts, eBooks and other online-content machines are also forms of pull marketing that live on the web.

Let’s face it – traditional push marketing tactics are pricey and steadily becoming less effective. With an ever increasing number of ways for consumers to easily ignore advertising and find the information they want quickly online, it’s critical to understand the paradigm shift in consumer behavior that continues to rapidly proliferate: people are increasingly ignoring push marketing, and embracing inbound, or pull marketing.

What Caused the Shift from Push to Pull

More than anything, the Internet is what has had the greatest effect. Through search engines and social networks, consumers have all of the information they need right at their fingertips, and no longer want or need ads to tell them about products or services – they find out about them on their own terms.

Until the early 2000s, when the Internet exploded into mass popularity, the main methods for businesses to market to consumers were traditional advertising and PR. And for a long time, this worked just fine. Consumers received ads without a big fuss, and the ads were fairly effective at generating sales, although nearly impossible to track accurately.

But as advertising messages became more and more prevalent in virtually every aspect of daily life, we as consumers became immune to them and started to subconsciously filter and tune out anything that smelled of advertising or sales. Then DVRs, satellite radio, email, and countless other filtering mechanisms empowered us to ignore advertising even more easily.

“Push Market Research” is in decline. 

For many years, marketing has also relied on “push market research” methodologies to understand consumers’ needs, habits and behaviors. “Push market research” could be defined as a research methodology in which a marketer or researcher attempts to get their questions in front of potential respondents, with or without those respondents having any desire or interest to respond. Examples: door-to-door interviews, street intercepts, telephone surveys, online surveys. Unless someone is passionate about answering surveys, they probably don’t find those surveys entertaining or well timed.

I’m not saying that “push market research” should be considered negative, since it can be very efficient if executed properly. However, with the rapid growth of online research methodologies and the proliferation of online panel companies, survey routers, etc., consumers have been bombarded with invitations to participate in surveys. People have developed a similar immunity, ignoring survey invitations, especially because of the poor user experience offered by most of those surveys. Additionally, consumers nowadays have many ways to contact the brands and organizations they want to communicate with directly, especially through social media. They can also express their opinions and sentiments and share experiences with other consumers easily, at any time, using their mobile devices, so they don’t need to answer surveys to get their opinions out and be heard.

“Pull Market Research” is on the rise. 

Brands have already learned the importance of “pull market research”. Instead of just using push methods, brands have used social media listening and advanced text analytics tools to understand what consumers spontaneously share on social media and public websites. However, analyzing public social media data to understand consumers’ needs, habits and behavior has many limitations. In addition to the lack of profiling data and the superficial information available – which don’t allow researchers to dig into the whys – accessing public social media data has become increasingly challenging, with networks such as Facebook and Twitter limiting access to their data through public APIs.

Smart Brands have then created their own private spaces for consumers to dialogue with them directly. “Pull market research” creates an opportunity to attract the consumers you want to talk with, empowering users with tools that let them spontaneously express themselves; you can then “push” questions just when needed.

I truly believe that successful market research should adopt the best of both worlds, push and pull methods, such as insights communities – one of the most quickly adopted methods in the market research industry worldwide. Companies have built research communities for many years now, but more than ever we see how important those communities have become as a source of innovation and inspiration for marketing, as well as a key part of the standard consumer insights tool-kit.

Have you combined “Push and Pull” Market Research techniques? I would love to know your thoughts and experience.




Participate In The Q3-Q4 2015 GreenBook Research Industry Trends (GRIT) Survey

Join thousands of global researchers and help our community better understand where we are and where we’re headed.


We’d like to invite you to share your experiences and perspective with us in the Q3-Q4 2015 GreenBook Research Industry Trends (GRIT) Survey.

Join thousands of global researchers and help our community better understand where we are and where we’re headed.

Here is the link to make your voice heard (the hyperlink is in the text in case you have a display issue):

Participate in the Survey!

Only with the support of marketing and insights professionals like you can GRIT continue to yield insights into how research buyers and providers are adapting to the rapidly evolving research landscape, and we appreciate your participation very much!

We’re always working to improve the survey to make it more engaging, more device agnostic, and most importantly, SHORTER! The survey takes less than 15 minutes to complete.

What’s new:

  • Panel Providers: A new set of questions has been added to better understand your involvement and satisfaction with panel providers.
  • Professional Development: How proactive are research firms when it comes to staff training? What organizations are filling this need?
  • Market Research Transformation: We’ve added questions to gauge both your opinions and strategic reactions to industry disruption.
  • Defining & Understanding Partnership: New open-ended questions aim to understand what a successful partnership consists of and how it can be achieved.

Tracking questions:

  • Frequency of use for qualitative and quantitative methodologies
  • Adoption of new methods and technology
  • Evolution of the modern researcher
  • Budget/revenue projections for 2016

Don’t miss this chance to give back and support your profession. All who complete the survey will receive:

  1. Full version of the GRIT report detailing the results of this survey
  2. Exclusive access to an interactive online dashboard with the complete dataset for your own analysis
  3. Priority registration to webinars featuring industry experts and thought leaders who will discuss GRIT’s results and implications

As our industry changes rapidly, it’s more important than ever to truly understand what is happening and what the implications are for the business and profession of market research.

Who Should Participate

  • Marketing insights and intelligence suppliers, technology providers, and consultants
  • Client-side marketing and insights professionals

Special Thanks to All GRIT Partners

RESEARCH PARTNERS: Dapresy, Gen2 Advisors, Keen as Mustard, Lightspeed GMI, NewMR, Q Research Software, Researchscape

SAMPLE PARTNERS: ACEI, AIM, AIP, AMAI, AMSRS, APRC, ARIA, AVIA, BAQMAR, Blauw, BVA, CASRO, CEIM, ESTIME, FeedBACK, Gen2 Advisors, GIM, Insight Innovation, Lightspeed GMI, LYNX Research, Michigan State University, MRIA, MROC Japan, MRS, New MR, NGMR, NMSBA, NYAMA, OdinText, PROVOKERS, QRCA, Researchscape, SAIMO, Sands Research, The Research Club, Toluna, University of Georgia | MRII, University of Texas, Vision Critical, Wisconsin School of Business

Thank you in advance for sharing your time and experience!


Using Creative Incentives to Increase Panel Engagement

There are many ways that companies can create better experiences for research participants through the use of incentives.



By Jonathan Price

The end goal of any good market research project is simple: garner actionable insights that can inform successful business decisions. Getting there is the hard part. Researchers must have high completion and response rates and, in order to get the right data, respondents must be engaged. This piece of the puzzle can sometimes fit perfectly into place when you appeal to respondents with just the right incentive. The right incentive, at the right time, can have a huge positive impact on engagement and response.

In the past, many research companies relied on one type of reward for research respondents. Many times, the same reward over and over causes target audiences to lose interest. In addition, as companies expand and possibly shift product offerings across multiple consumer and business audiences, a single type of reward loses its panache.

There are many ways that companies can create better experiences for research participants through the use of incentives. Some things to keep in mind when creating an incentive program include:

  • Choice: Make the reward compelling by offering respondents a myriad of reward types to fit their lifestyle and needs. A single reward type can result not only in additional hard costs as companies try to motivate respondents with increasingly large rewards, but could also bias results and adversely affect data.
  • Immediacy: Create an instant reward delivery platform so respondents don’t have to wait for their incentive. An automatic, streamlined delivery process frees up time for turning research data into actionable insights instead of troubleshooting incentives.
  • Partner Technology: Consider using an integrated API (application programming interface) that allows automatic delivery and fulfillment of rewards; a sketch of what such an integration might look like follows this list. By creating the right partnership with a company that specializes in incentive delivery, one click can accomplish a complete solution.
  • Customization: Not all market research is created equal. It’s important to make sure that each incentive program fits the client, the study, the desired sample and more. Thinking ahead with careful planning can help with developing creative new ideas for incentive fulfillment and types of offerings. This approach promises to bolster the upward trend in quality response rates and keep panelists continually engaged.
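As a rough sketch of the “Partner Technology” idea above, here is what a one-call reward-fulfillment integration might look like. The endpoint, payload fields, and authentication scheme are hypothetical, not any specific vendor's API.

```python
import requests

API_URL = "https://api.example-incentives.com/v1/rewards"  # hypothetical endpoint

def deliver_reward(respondent_email, amount_usd, reward_type="respondent-choice"):
    """Order an instant reward for a completed survey via a (hypothetical) provider."""
    payload = {
        "recipient": respondent_email,
        "amount": amount_usd,
        "currency": "USD",
        "type": reward_type,  # e.g., let the respondent pick a gift-card brand
    }
    resp = requests.post(API_URL, json=payload,
                         headers={"Authorization": "Bearer <api-token>"}, timeout=10)
    resp.raise_for_status()
    return resp.json()["reward_id"]  # store for reconciliation and support queries
```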

Matt Thurston, COO of icanmakeitbetter, recently partnered with incentive solution company Virtual Incentives for his company’s respondent rewards program. Thurston stated, “As we expanded we worried that a single reward type could cause response rates to become stagnant. By working with a company that specializes in incentives we were able to get creative, develop new products and make a streamlined reward process. For us, this has increased response rates and saved our staff and our clients’ time.”

Thurston stated that in the year icanmakeitbetter has been working on providing incentives through this new partnership, he’s seen response rates improve considerably and membership attrition rates cut in half. This equals a high rate of return for his clients and, in the end, gives a more complete data picture.


Jeffrey Henning’s #MRX Top 10: Social Media Engagement, Analytics and Forecasting

Of the 29,013 (!) unique links shared on #MRX last week, here are 10 of the most retweeted.


By Jeffrey Henning

Of the 29,013 (!) unique links shared on #MRX last week, here are 10 of the most retweeted that rose above the spam…

  1. Using social to fast forward to the future – Rosie Hawkins of TNS, writing for Research, sees social-media market research as posing “the ‘microscope vs. telescope’ dilemma.” She writes, “There is a balance to strike between the strategy (based on a longer-term view of category and consumer dynamics – the telescope) and taking quick, reactive advantage of specific opportunities or disruptions in the market (the microscope). Marketers need to be truly agile as too much focus on real-time marketing risks losing the bigger picture.”
  2. MR disruption continues: Barnes & Noble College rolls out research offering – Lenny Murphy discusses how the Barnes & Noble campus-bookstore arm is extending its research to more than 5 million U.S. college students, offering students points towards bookstore purchases in exchange for survey responses, while developing custom Millennial research services.
  3. Why Big Data alone is an inadequate source of customer intelligence – Tyler Douglas of Vision Critical covers three reasons why the ROI from Big Data can be low: “1. Most companies don’t know how to use Big Data for strategic decisions. 2. Big Data doesn’t provide a complete picture. 3. It lacks the ‘why.'” Big Data doesn’t remove the need for traditional types of research.
  4. Social media engagement rates decline – Bronwen Morgan of Research recaps Forrester and Mobile Marketer studies on social media; as the volume of social-media posts has increased, engagement with those posts has decreased.
  5. The future of research in talent and training – Simon Chadwick of Cambiar recaps research from his organization into how the skills necessary for market research are shifting from the tactical (e.g., report writing, project management, PowerPoint skills) to the strategic (e.g., storytelling, consulting, synthesis). Unfortunately, firms are doing a poor job of training their staff for these strategic skills.
  6. AMSRS 2015 conference workshop wrap-up – Victoria Gamble of Blaze Research compiled a round-up of links to the tools and books discussed at this AMSRS workshop for streamlining social-media marketing and presenting.
  7. Social media analytics: promises, challenges and the future – Marketing scientists Kevin Gray and Koen Pauwels raise questions about the claims of social-media market research, especially the claim that social desirability bias is a bigger issue with surveys than social media.
  8. While some things change, others will stay the same – Zontziry Johnson argues that the core approach and ethics of market research will remain constant, even as new techniques and technologies emerge.
  9. Smartphones become the ‘remote control’ to people’s lives – Jane Bainbridge of Research summarizes Deloitte research into UK consumers’ pervasive usage of smartphones: 55% check their phone within 15 minutes of waking up, 36% look at their phone at least 25 times a day, and 28% check their phone within 5 minutes of going to bed.
  10. When Big Data becomes bad data: The limits of analytics – Lauren Kirchner of ProPublica discusses how algorithmic bias from data mining and predictive analytics can amplify the impact of past discrimination.

Note: This list is ordered by the relative measure of each link’s influence in the first week it debuted in the weekly Top 5. A link’s influence is a tally of the influence of each Twitter user who shared the link and tagged it #MRX, ignoring retweets from closely related accounts. Only links with a research angle are considered.


Cross-Platform Data: Where Sound Bites Meet Research Reality

The idea of “truly 360-degree measurement” is not new, and attempts to get closer to it are always appearing; but they bump up against hard, cold realities.



By Florian Kahlert

At a recent research conference, a panelist (representing an agency that shall remain unnamed) was yet again proclaiming the death of panel data and advocating a move to cross-platform, census-level data for planning and buying.

I had to bite my tongue, as I often do in these cases. Actually, in theory, I agreed with him. If we had reliable cross-platform data for TV, radio, print, online and mobile for the same individuals, accurately linked and at levels that approach census-level (millions, not thousands) — and then could connect that data to the same person’s product ownership and consumer behavior — we would be in advertising Nirvana. (Or, Orwell’s 1984.)

The idea of “truly 360-degree measurement” is not new, and attempts to get closer to it are always appearing; but they bump up against hard, cold realities. Let’s talk about a few:

First, that same agency person who would be so delighted by the abilities of this system would likely be unwilling to pay for the services. To do what he envisioned would be prohibitively expensive.

Second, what sounds awesome as a conference sound bite (or written in a blog post, like this one) is technologically incredibly complex, massively big, and – unless you are the NSA, with virtually unlimited funds – extremely hard to do. To combine passive data for the same person reliably across multiple platforms at scale is something Richard Branson might consider beyond rocket science.

Just to provide a simple detail – managing a passive panel (not even census) requires constant oversight as people change devices, move to different states, and more. It also demands ongoing software development, as mobile companies change the way they do things; and it generates terabytes of data every day that need to be cleaned, processed, and made actionable by running them through ever-evolving taxonomies. And that is just for one platform; do this across digital and TV, and you are multiplying the complexities.

Third, we have not even talked about the definition of “census-level.” Does it mean all 200 million US adults? Is a sample of 20 million enough? In other words, do big numbers without actual sampling methodology truly “represent”? What if my 20 million represent a population of mostly high-income people living in big metro areas? Is that “good enough”?

A path forward

Now, just being a nay-sayer is not really helpful. Let us look at some things we can actually do.

First, there are companies out there that have excellent program-level TV data – Nielsen and Rentrak come to mind.

Second, other companies have awesome product ownership and print media consumption information (such as yours truly, GfK MRI); but this is mostly recall data, not passive.

Third, there are companies like Tapad that do an excellent job of connecting different devices (mobile and online), but they do not know much about the person’s TV viewing habits or OOH behavior.

The key to a way forward is to acknowledge that no one can afford to own and generate all the data by themselves anymore; we need to find ways to combine different data sets to come up with a more universal view. And the only way I see to do this without bumping up against Orwellian levels of privacy intrusion is anonymized matching, modeling the unmatched sets, and then calibrating them all against representative, carefully managed reference panels. And there again, you have the need for panels — the very opposite of census data.
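To make the calibration step less abstract, here is a minimal Python sketch of raking (iterative proportional fitting): reweighting a large, skewed dataset so its margins match a reference panel. The margins and target shares are illustrative assumptions, not GfK MRI's actual procedure.

```python
import numpy as np

def rake(weights, groups, targets, n_iter=50):
    """Adjust unit weights so weighted category shares match panel targets.

    groups:  {margin name: array of category labels, one per unit}
    targets: {margin name: {category: target share from the reference panel}}
    """
    w = weights.astype(float).copy()
    for _ in range(n_iter):
        for margin, labels in groups.items():
            for cat, share in targets[margin].items():
                mask = labels == cat
                current = w[mask].sum() / w.sum()
                if current > 0:
                    w[mask] *= share / current
    return w

# Toy example: a big sample that over-represents high-income metro dwellers.
rng = np.random.default_rng(0)
income = rng.choice(["high", "low"], size=1000, p=[0.8, 0.2])
region = rng.choice(["metro", "rural"], size=1000, p=[0.9, 0.1])
w = rake(np.ones(1000),
         {"income": income, "region": region},
         {"income": {"high": 0.35, "low": 0.65},
          "region": {"metro": 0.60, "rural": 0.40}})
```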

Needless to say, we are constantly working on it. Living in the real world is no small task, it seems.


The Statisticians Have Taken Over The Segmentation Asylum?  Hardly.

Over twenty years of working with these tools have convinced me that conducting segmentation studies using MaxDiff and Latent Class models represents a powerful combination of tools for marketing researchers.



By Steve Cohen

It’s not often that I take offense at something written on the Internet.  After all, it’s a wild, wild West out there where moderation is often in short supply and signs of intelligent life are hard to find.

With this in mind, during a recent Google search I came across a White Paper that claims to “debunk” the use of “fancy, schmancy” segmentation procedures.  In particular, the author laments the fact that the statisticians who “are running the asylum” recommend for segmentation studies the use of MaxDiff Scaling and Latent Clustering (which is called Latent Class or Mixture Models by all people I know).  In fact, the author states that using both in segmentation studies is a “recipe for disaster.”

Wow.  Just wow.

As someone who has won several awards for my work introducing and using MaxDiff and Latent Class segmentation in the marketing research community, I had to read this document in depth.  When I did, I took immediate offense at several of the boneheaded assertions in it.

Let’s take a brief tour of what the author claims.  First, MaxDiff

“… is a great measurement tool that should not be used as the source of segmentation inputs. Sound segmentation inputs need to be measured at the individual level and use a method that can be readily reproduced in “short forms” applied during follow-up research.  MaxDiff does neither.”

I have two very serious problems with this. First, in my experience using MaxDiff in segmentation studies since 1997, I know that MaxDiff can produce stable, reliable, and very usable results that provide much better differentiation and interpretability than traditional methods. And, second, I contend that a clever analyst can develop a very compact and accurate short form from MaxDiff results that can be applied in follow-up research.


MaxDiff does measure segmentation inputs at the individual level.  These measures are the responses collected in the best-worst choice tasks.  Under certain circumstances, what MaxDiff can do is yield individual-level utilities that are estimated using a Hierarchical Bayesian multinomial logit (HB-MNL) model.
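For readers unfamiliar with what those individual-level measures look like, here is a minimal Python sketch of the simple MaxDiff "counting" analysis: best picks minus worst picks per item, per respondent. This is the counting shortcut, not the HB-MNL estimation described above, and the items are invented.

```python
from collections import defaultdict

def maxdiff_counts(tasks):
    """tasks: list of (shown_items, best_pick, worst_pick) for one respondent."""
    appearances = defaultdict(int)
    score = defaultdict(int)
    for shown, best, worst in tasks:
        for item in shown:
            appearances[item] += 1
        score[best] += 1
        score[worst] -= 1
    # Normalize by how often each item was shown to this respondent.
    return {item: score[item] / appearances[item] for item in appearances}

# Three best-worst tasks from one respondent over four benefit statements:
tasks = [
    (["price", "quality", "speed", "brand"], "quality", "brand"),
    (["price", "quality", "speed", "brand"], "quality", "speed"),
    (["price", "quality", "speed", "brand"], "price", "brand"),
]
print(maxdiff_counts(tasks))  # quality strongly positive, brand strongly negative
```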

The author seems to be blissfully unaware of the discussions in the marketing science literature these past few years about the nature of segments.  Prof. Greg Allenby of Ohio State has argued that heterogeneity (segments) should be measured on a person-by-person basis and we should think of segments as people who behave at the extremes, based on an examination of the individual-level utilities.  Others, like Michel Wedel at Maryland and Wagner Kamakura at Rice, claim that segments are really constructs that help managers deal with the complexity of markets by providing shorthand ways of talking about consumers and customers in aggregates — which we call segments.

My own view leans heavily toward not using individual-level utilities estimated with hierarchical Bayesian tools, since those utilities are assumed to be drawn from a normal distribution — meaning the distribution of utilities is smooth and thus does not display any obvious places to “cut” into groups. What I do instead is use Latent Class Models, which allow the utilities to be estimated as lumpy and multi-modal – meaning that segments, if they exist, can be discovered.
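The "lumpy versus smooth" point can be sketched in a few lines. Latent class models for choice data are not Gaussian mixtures, but the model-selection logic is analogous, so the sketch below uses scikit-learn's GaussianMixture with BIC as a stand-in, on simulated two-segment utilities.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Two hidden segments with different mean utilities on three items:
seg_a = rng.normal([2.0, -1.0, 0.0], 0.5, size=(150, 3))
seg_b = rng.normal([-1.0, 2.0, 0.5], 0.5, size=(100, 3))
utilities = np.vstack([seg_a, seg_b])

# Fit 1-5 classes and keep the model BIC prefers; lumpy data should pick 2.
best = min((GaussianMixture(n_components=k, random_state=0).fit(utilities)
            for k in range(1, 6)),
           key=lambda m: m.bic(utilities))
print(best.n_components)            # expect 2
print(best.predict(utilities)[:5])  # per-respondent segment assignments
```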

By the way, since Choice-Based Conjoint Analysis also uses choice inputs and then estimates individual-level utilities using HB-MNL, would the author make the same argument to debunk CBCA?  Somehow, I think not.

My guess is that the author has been using Sawtooth Software, which does generate individual-level utilities, in a rote way too often and has not paid much attention to the behavioral science behind segmentation nor to the assumptions underlying these tools.

Short Form MaxDiff?

Let’s examine the second claim that MaxDiff does not yield a method that can be used in a short-form after the segmentation study is complete.  Specifically, the author says,

“There is no way to reproduce the MaxDiff importance scores in a short-form classification algorithm.”

First of all, follow-up short-form classification surveys are never designed to reproduce the MaxDiff importance scores. What is this claim all about?

Rather, as in traditional segmentation studies which employ Discriminant Analysis for post hoc classification, the function of the short-form is to assign people to known segments which have known characteristics by using as few questions as is reasonably possible.  Got that?  We are not looking to reproduce importances, but just to put people into groups with good accuracy.

I find it hilarious that this wrong-headed assertion is compounded by this declaration about short-form classification tools:

“… the accuracy rates are so low they would scare you.  As a result, short forms generated off MaxDiff segmentation schemes tend to be both lengthy and inaccurate.”

I can state categorically that, in my experience, we can create short forms that are as accurate as, or even more accurate than, traditional methods, and far more compact. I have personally created such short forms; they typically contain fewer than 10 questions, with accuracy rates in excess of 85%.
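A minimal sketch of how such a short form can be built: with segments already discovered in the full study, greedily select a handful of questions and fit a compact classifier to assign respondents to the known segments. The data here is simulated and the hit rate illustrative, not a claim about any particular study.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 40))                    # 40 candidate survey questions
segments = (X[:, 3] + X[:, 17] > 0).astype(int)   # toy "known" segment labels

clf = LogisticRegression(max_iter=1000)
picker = SequentialFeatureSelector(clf, n_features_to_select=8)  # 8-question short form
picker.fit(X, segments)
X_short = picker.transform(X)
print(cross_val_score(clf, X_short, segments, cv=5).mean())  # short-form hit rate
```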

Latent Clustering (sic)

Latent Class (LCM) or Mixture Models are based on sound statistical foundations and have a long history of use in marketing science and many other disciplines for uncovering hidden (latent) groups (classes or segments).

So what is the author’s beef with Latent Clustering (sic)?  Again, I quote:

“Consumer segmentations are generally done on survey data and respondents have the unfortunate tendency to use scales in slightly different ways from each other (see benefits of MaxDiff). The reason this is a problem in Latent Clustering is that frequently the model tends to form segments based on how people use the scale (e.g., high raters or middle raters) rather than what people were trying to tell us on the scale.”

Hello?  Respondents using a rating scale badly is a ubiquitous problem, not only for clustering or grouping of any flavor, but also for brand ratings and many other typical marketing research tasks.  Blaming LCMs for how people answer surveys in a biased way is just absurd.

Is there a suggested alternative?

“Transformations (e.g., within-respondent-standardization) that are an effective solution to this issue in Euclidean distance models do not prevent Latent Clustering from generating these meaningless groups,”

I really tried to untangle this word salad, but there are so many ideas happening in this one sentence, I was forced to reach for the aspirin bottle.

But suppose, just for example, that there are some survey respondents with little or no within-person variation. Claiming that within-respondent standardization solves this issue is wrong; it can create yet another set of thorny problems. Think about it. If a respondent “straight-lines” a series of survey attitudes (which happens quite frequently), within-respondent standardization requires subtracting that person’s mean from each response and dividing by his/her own standard deviation, which is exactly equal to or very close to zero. Good luck with that being an effective solution.
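A two-line check makes the problem plain, using a simulated straight-liner: the respondent's own standard deviation is exactly zero, so within-respondent standardization divides by zero.

```python
import numpy as np

def within_respondent_standardize(row):
    return (row - row.mean()) / row.std()  # blows up when the respondent's sd == 0

varied = np.array([1.0, 3.0, 5.0, 2.0, 4.0])
straight_liner = np.array([4.0, 4.0, 4.0, 4.0, 4.0])

print(within_respondent_standardize(varied))          # well-behaved z-scores
print(within_respondent_standardize(straight_liner))  # 0/0 -> array of nan
```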

Mixed Levels of Measurement

Yet another beef with LCMs!

“The ability to mix metrics generates the temptation to throw in the kitchen sink and segment on virtually the entire survey (attitudes, needs, behaviors, demographics and even brand usage!).”

Good lord!  You mean to say that there are researchers in our industry who dump the kitchen sink in a segmentation analysis without even thinking about what they are doing?  Oh, no!  Where have I been all these years?

My contention is that, used judiciously and wisely, variables at mixed levels of measurement are a great help in developing actionable segmentation solutions.  Dumping everything in at once is not a flaw of LC models, but rather of an ineffective analyst.

So what is the suggested alternative?

You dear readers who have actually spent the time to read the quoted article were, no doubt, eager to hear the punch line.

Once the author has “debunked” these tools, surely the magic bullet, the keys to the kingdom, the secrets of life, and the sacred tablets as written by the author will be shown to us lowly mortals.

And what do we get?  What do we hear? What is the long-awaited wisdom?  What should we do instead of using these heinous methods?

(That is the sound of crickets.)


I suggest that this author clearly needs to get a firm grip on the behavioral and statistical assumptions, theories, and methods of MaxDiff, Latent Class Models, and Hierarchical Bayesian modeling.  Spending time trashing these modern advances, misunderstanding their uses and application, and then suggesting nothing to replace them is not even remotely helpful.

Expecting everyone in marketing research to be an above-average analyst born in Lake Wobegon is foolhardy.  Perhaps the author will come to realize that some people are just good examples of the Dunning-Kruger effect.

Over twenty years of working with these tools have convinced me that conducting segmentation studies using MaxDiff and Latent Class models represents a powerful combination of tools for marketing researchers and is not at all a recipe for disaster.  Is this combination to be used all of the time?  Of course not.  Marketing researchers should select the best methods and statistical procedures to meet the objectives at hand.

Are the statisticians running the segmentation asylum?  Hardly.

Let’s not follow flawed guidance that may not be based on a full picture of the collective experience and best thinking of many experts (not just me!).  Otherwise, the incompetents may end up running the segmentation asylum and that is really why it could get scary out there.