
Skin In The Game: Revealing The Honest Truth

Skin in the game is a powerful concept and is necessary to understand true human reality.


By Anouar El Haji

There’s data, and then there’s data. The most challenging kind is data collected from people: it’s hard to verify whether what people claim to value and believe is true. For example, Bob might say he prefers coffee to tea, but there’s no way to be sure; he alone knows whether this is really the case.

To reveal what Bob truly prefers, we should present him with the real option to be served either coffee or tea and observe what he chooses. Bob is now much less likely to misrepresent his actual preference because he has to bear the consequences of consuming his choice. This is called having ‘skin in the game’.

If you have skin in the game, you expose yourself to the real consequences of your claims, positive or negative. Skin in the game ensures that your statements and actions are aligned with your true preferences and beliefs. It’s no surprise that people who have skin in the game are taken more seriously than those who don’t. A recent study shows that stock analysts (whose job is to figure out which shares to buy or sell) are considered more trustworthy if they follow their own advice, while analysts who have no skin in the game are mostly ignored.

Here’s the problem though: most data is collected from people who have zero skin in the game. Whether it’s collecting data using surveys, social media, focus groups or interviews, data without skin in the game is at worst simply wrong, and at best doubtful. That’s quite alarming because a lot of decision-making, both in the public and private sector, depends on this type of data.

In the academic community, especially among behavioral economists, this problem is well understood. For this reason, experiments in which participants have skin in the game are considered much more trustworthy. To illustrate this point, researchers asked participants whether they would be willing to donate $8 to a college scholarship fund—hypothetically speaking. The participants were explicitly asked to imagine that the choice was real. A clear majority of 71% answered ‘Yes’. Afterwards, however, the very same participants were asked whether they would actually be willing to donate $8 to the fund. The percentage who said ‘Yes’ dropped all the way to 38%!

Skin in the game is a powerful concept and is necessary to understand true human reality. High-impact recommendations that are not based on reliable data can have far reaching negative consequences for business and society. The increasing availability of data is actually harmful if the data cannot be relied on. In fact, more spurious data just leads to more doubtful conclusions. Data with skin in the game is not a luxury but a necessity.


Five Reasons Why the Market-Research Industry Is Ready to Join the Sharing Economy

Market research is clearly one of those industries primed and ready to join the sharing economy to spur meaningful growth.


By Peter Zollo

The “sharing economy,” also known as the “collaborative economy,” is rapidly spreading across key industries with tremendous success. And even more sectors are poised to leverage “sharing” to catapult growth.

Some of the most prominent of the new sharing brands have become household names, including Uber, Airbnb, and Kickstarter, as they harness the Internet to connect like-minded partners and customers while driving down costs. There are now (count ‘em!) 17 companies with revenues in excess of $1 billion operating within the sharing economy, employing more than 60,000 people.

Market research is clearly one of those industries primed and ready to join the sharing economy to spur meaningful growth. The two most important pillars of successful sharing businesses — collaboration and technological innovation — are already cornerstones of the industry, reflecting market research’s readiness for serious sharing. Yet, despite the current level of innovation we’re enjoying, when nearly two dozen market-research leaders were asked how technology is currently impacting the industry, not a single comment even touched on integrating technology and the sharing economy for the industry’s benefit.

So, here are five key reasons why the market-research industry is ready to join the sharing economy right now:

  1. Data has become less proprietary because of technology.

For many, the idea of sharing data (aka collaborating with competitors on research projects) sounds counterintuitive. As Bob Meyers, former Global CEO of Millward Brown, put it: “It’s 2016. It’s no longer about data being proprietary. Today, data is ubiquitous. So, embrace this reality and look for ways to reduce your data-collection costs by sharing and collaborating. That way you can focus on what really matters: analysis.” There’s an inherent advantage in sharing. As Stan Sthanunathan, SVP of Consumer and Market Insights at Unilever, advocates: “In a rapidly changing world, we cannot operate in silos.”

  2. We have actually been doing this (without a tech platform) for years.

Something that most of us don’t really think about: all those syndicated research studies you’ve been purchasing are shared with your competitors. You’ve probably also “co-sponsored” a study with others along the way. And that’s good, showing an openness to sharing that can now be enabled and accelerated at scale. Whereas the cost of producing a large-scale syndicated study can approach or exceed seven figures, an individual subscription is priced at a fraction of that. Why? Because syndicated subscribers share the research and the costs — that’s the model. So, imagine the benefits of an online sharing marketplace, where insights projects are posted that address timely issues and opportunities rather than simply the same old studies.

  3. Collaborating with others can often result in better outcomes than going it alone.

Here’s one quick example market researchers can learn from: physicians Dr. Ijad Madisch and Dr. Sören Hofmayer and computer scientist Horst Fickenscher believe in the power of collaboration. So they created a company that moved academic and scientific research out of the “silos” to which Stan Sthanunathan refers. They founded ResearchGate in 2008 to connect scientists and researchers so they can collaborate, coordinate, and advise their peers on current research. The site’s 10 million members have taken a dramatically new approach to research, turning what used to be a solitary and slow process into a collaborative one, producing faster and better results and disrupting the way scientific and academic research had always been conducted.

  4. Sharing drives down costs.

Many of our new realities include working in a zero-based budgeting environment, making the funding of unplanned-for insights projects more challenging than ever. We’re asked to be more agile — to do more with less. Guess what? Sharing data can dramatically drive down costs, allowing you to fund more projects and to re-allocate your budget to analysis and insights.

  5. There’s a new platform, recently introduced on stage at IIeX, that brings market research into the sharing economy.

Research suppliers and clients alike can post research projects on Collaborata, allowing multiple parties to share in the cost of funding a project for as little as 10% of its original cost. And those funding the project can also help guide and shape it. So, Collaborata resembles syndicated research in that it makes large-scale projects affordable for the individual client – all for the pragmatic trade-off of sharing results. In fact, everything about Collaborata has been strategically and carefully designed to promote collaboration. For example, anybody registered on Collaborata can “refer” a project to anybody else, leveraging his or her professional network to help get a project funded. Collaborata credits your account (or, if you prefer, they’ll actually send you a check) in an amount equivalent to 20% of the spend of whomever you referred, assuming the project funds. And, if a project exceeds its funding goal, any extra revenue is shared with the research supplier and the client who originally posted the project.
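
The cost-sharing and referral arithmetic described above is simple enough to sketch. Here is a minimal illustration in Python, with made-up figures and function names that are not Collaborata’s actual billing logic:

```python
def backer_share(project_cost: float, n_backers: int) -> float:
    """Each party's share when a project's cost is split evenly."""
    return project_cost / n_backers

def referral_credit(referred_spend: float) -> float:
    """Referral credit: 20% of the referred party's spend, per the terms above."""
    return 0.20 * referred_spend

# Ten backers on a $500,000 study each pay 10% of the solo cost.
share = backer_share(500_000, 10)    # 50,000.0
credit = referral_credit(share)      # 10,000.0
print(f"Per-backer share: ${share:,.0f}; referral credit: ${credit:,.0f}")
```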

Collaborata helps to solve many of the problems currently faced in the market-research industry. A 2015 Greenbook blog post by Leonard Murphy noted “dwindling budgets” as one of the top 10 challenges in the industry. Collaborata reduces redundancy by connecting clients with similar research needs so they can pool their funds. Michalis Michael’s 2015 Greenbook blog on the future of the marketing-research industry made two significant predictions. First, that “traditional market-research agencies that refuse to change will go out of business.” Collaborating to drive down costs seems like not only a reasonable but also a strategic change. Michael also suggested that “agile research will become mainstream,” reflecting that the time is right for the sharing economy to be embraced within the industry now.

There are already nearly 20 projects, totaling $2 million in value, posted on Collaborata, ranging in content from “Millennial Mealtime” and “Gen Z Luxe” to “Omni-Channel Shopping” and even “Sizing the Global Legal Marijuana Market.” A 60-second video at www.collaborata.com explains how Collaborata works and how you can quickly become a sharer — to the benefit of your company or organization.


Research on Research Respondents: What Have We Learned?

Kerry Hecht details learnings from meetings with real respondents at IIeX.


As always, IIeX was a great experience for the entire Recollective team and for me, personally.  We made some great connections, learned a lot and participated in collaborative problem solving.

Part of the experience for us was hosting a roundtable with people who participate in marketing research studies. We did this to further our ongoing conversation about what our industry is like for them. Our goal is to create a more open dialogue between our side of the industry and theirs; the hope being that we can work towards some foundational shifts in our thinking and find some solutions to our shared problems with quality and trust.

First, as a reminder, this wouldn’t have been possible without the help and support of our industry, and I’d like to give a personal thank you to everyone who made it happen. While Jessica Broome and I conceptualized and brought this to life, we never would have been able to do it without help…  lots and lots of help!

To bring you up to speed: we first conducted an online community among people who had participated in multiple kinds of research. To read the initial blog post about those findings, please follow this link.

Next, we ran a 1,500-person quant study to further explore and validate what we learned in Phase 1, and also to explore a few theories we came up with. Our thinking was that we might want to consider or profile things like creativity levels, empathy levels, and learning styles of participants as we develop our questionnaires and think through how we recruit for different kinds of methodologies. We were fortunate enough to present these findings at IIeX. If you missed our workshop but would like to see the presentation, you can read through our findings here.

The roundtable discussion, with actual research participants, proved to be as fruitful as the first two phases.  We had six participants recruited through a variety of sources and incentivized by Tango Card. Each of the participants was, again, recruited to have participated in multiple kinds of research. We learned so much from them, but there are definitely themes that have emerged across these three phases.

Some of them are:

  • They are all what we consider to be professional respondents. There is definite cause and effect going on, though. They do not feel informed or respected by the way we screen or the information we provide them around how we select them.
  • The screening takes up too much of their time – so, they feel misled and therefore they try to ‘game’ the system.
  • Additionally, they often feel in the dark about why we ask what we do, so they try to give us the answers they think we want. This creates an environment where, at the point of screening, they fudge…  not lie, but ‘fudge’.  Sometimes, though, this is because they can’t come up with accurate answers to the ‘impossible’ questions we ask.

It’s my fundamental belief that we, as an industry, already know this and just don’t know what to do about it. So, I’ll throw out a few thoughts and, hopefully, a foundation for some solutions.

If we can’t beat them, join them

  • Instead of trying to trick them with complicated, impossible-to-answer screening questions, can we be more transparent upfront about what we are looking for?
  • We know they don’t like to participate in exercises where they feel they don’t have much to add. If we told them what we need and why (as much as we can), maybe screening would feel less like a chess match and more like a conversation. We could also avoid scenarios where they think we want an expert on a topic, so they educate themselves before the research to be helpful, when we actually wanted a novice.
  • It’s also obvious that these aren’t inherently manipulative or dishonest people; they want to help. So let’s tell them how they can, in a straightforward fashion.
  • While few things are more important to them than the monetary incentives, some things are equally important: they love seeing the results. It doesn’t have to be the final results; just enough to know where they sit in the group or in the survey. We could definitely be doing more of this.
  • We need to talk to them more about us: who we are, why we need them, why the truth is important. We can do this in an automated fashion by embedding video into screeners or surveys. Let’s entertain them and help them understand us.
  • Ultimately, it comes down to respect, transparency, and cooperation. These things are the foundation for all good relationships. This is no different.

Jessica Broome will have more on this, and we’ve got Phases 4, 5, and 6 in the hopper. In the meantime, we’d love to hear from you about what you’d like to learn, so please reach out and let’s further the conversation.


Going Big with Qualitative Market Research

The ability to ask and analyze at scale has given rise to a different breed of qualitative.

 

By Adam Rossow

The qualitative research renaissance is in full swing. Brands have figured out that context is the key to understanding what’s relevant. And we have seen, through countless examples, that left to its own devices, data of any size can lead to missteps based on assumptions.

Despite qualitative’s growing value, drawbacks, both perceived and actual, are still a drag on this more touchy-feely research discipline. A major one is its lack of scale. Many people need the security blanket of large base sizes: the ability to say “3,000 people said this,” even if that mass response is too vague or cloudy to prompt action.

It’s understandable. There is always safety in numbers.

But qualitative is branching out. While it’s still the home of the intimate focus group and the in-depth individual conversation, it can also be the vehicle that empowers 1,500 consumers to paint a picture of the type of person who stays at an Airbnb, or the platform for 1,000 digital natives to provide insight into technology’s role in the shopping experience.

Thanks to advances in sampling, outreach platforms, and text analytics, quickly launching a series of open-ended questions to the masses is possible. And extracting insights from a mountain of responses isn’t nearly as tedious or susceptible to missteps and vagaries as it used to be. That ability to ask and analyze at scale has given rise to a different breed of qualitative. And while a few questions don’t provide the same depth as hour-long IDIs, they do provide a valuable utility for brands: quick contextual insight from a large number of individuals.
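
As a rough illustration of that “ask and analyze at scale” workflow, the sketch below themes a pile of open-ended responses with off-the-shelf scikit-learn components. It is a generic stand-in on toy data, not any particular vendor’s text-analytics pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

responses = [
    "I stay at Airbnbs because hotels feel impersonal",
    "Airbnb is cheaper and the hosts give local tips",
    "I shop on my phone while standing in the store aisle",
    # ...imagine thousands more open-ended answers here
]

# Vectorize the responses, then factor them into a handful of themes.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(responses)
model = NMF(n_components=2, random_state=0).fit(X)

terms = tfidf.get_feature_names_out()
for i, component in enumerate(model.components_):
    top = [terms[j] for j in component.argsort()[-5:][::-1]]
    print(f"Theme {i}: {', '.join(top)}")
```

Real projects layer question testing and human review on top; as noted below, word counts alone are not a story.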

With just a few open ends, clients can get an up-to-the-minute view of their customers, understand whether what they are communicating to consumers resonates, uncover which product features are most important, see how their competition is perceived, and more.

Despite needing only a few key ingredients to achieve qualitative at scale, the recipe for success is not so simple. Anyone with experience working with open ends and text analytics has seen their fair share of one-word answers, word clouds, and uninformative findings. Effectiveness lies in meticulous question formulation and testing, as well as analysis that provides a story, not just word counts and nebulous themes. As with any good qualitative initiative, preparedness and skilled people are vital.

So next time you need answers from the masses, don’t immediately default to quantitative research that’s devoid of context. You may be missing out on the opportunity to have your scale and your story too.

 


Personalizing Respondent Rewards for the Global Marketplace

Today’s global marketplace necessitates a thoughtful and nimble approach to doing business. For market research, this means building that connection with respondents to gather actionable data that brands can use on a broad level.


By Jonathan Price, CEO, Virtual Incentives

Globalization has had a significant impact on nearly every facet of our daily lives – scholars continue to explore and study its effect on diversity, culture, politics and more. From a business standpoint, the economic implications of globalization are immense, ranging from increased competition and more efficient markets all the way, some would argue, to wealth equity. One thing is for sure: globalization is changing the way we do business on a fundamental level.

A far cry from the idyllic “It’s a Small World” concept of each nation and culture working together in harmony under “just one moon and one golden sun,” today’s global marketplace necessitates a thoughtful and nimble approach to doing business. The number of consumers in developing countries continues to rise rapidly, and the global flow of products, services, money and even information is rising in tandem. (Throw government, labor and risk management into the mix, and you have a complex web of opportunities and challenges that we won’t get into here.) Reaching these audiences means ably wielding technology (which is driving globalization in the first place) to meet them in a space that makes sense, both to the target audience and from a business standpoint.

To narrow this giant concept down to its implications for the market research industry: we need to look at ways to effectively and efficiently embrace the global nature of today’s marketplace. Companies that don’t will quickly be left behind. Doing so goes beyond simply making products and services available in multiple languages and countries. It means building that connection with respondents to gather actionable data that brands can use on a broad level.

For research projects that have a global scope, or are being conducted beyond the “likely suspects” (e.g. the United States), a respondent reward offering can be important in boosting response rates. Virtual Incentives has always had a global offering, with technology like our real-time API and easy-to-use management portal. But to us, “going global” means more than just technology that works across country lines. One good example is the Global eGiftcard, which goes beyond just taking an existing product and sending it overseas. This solution offers advanced personalization, customization and ordering options with more than 600 brands to choose from in various denominations – all with seamless delivery to more than 43 countries in 16 currencies.

Perhaps the most important point that this solution drives home is its focus on the survey respondent. The instant, flexible delivery of culturally relevant, top in-country brands for more than 40 countries goes beyond simply changing language or currency to make a U.S. product fit across borders. In order to effectively conduct research, partners in the process need to deliver true international solutions.

So if “going global” is part of the plan – and it probably should be – there are several things to keep in mind:

  • One size doesn’t fit all when you are crossing geographic and cultural boundaries. Customization that goes beyond the “superficial” is key to success.
  • Think about the respondent audience and their needs first. Technology can make the world a whole lot smaller, so it needs to resonate with respondents no matter the country where they reside.
  • Cover the basics – make sure you are delivering your solution in the language, currency and method (e.g. mobile friendly) that fits the audience. A no-brainer, right?
  • Evaluate continuously by reviewing the effectiveness of any global solution – gather data and feedback and implement it in future products for constant improvement.

With something as vast as a global presence, it’s important to make sure all the i’s are dotted and t’s are crossed. The world may be getting smaller, but for most businesses this means a larger audience and a new approach.


5 Guardrails to Guide Qualitative Learning

It is the perfect time for customer information to meet customer understanding. Acquiring valuable qualitative insights before applying assumptions to data just might provide that powerful point of difference opportunity no one else has discovered.


Earlier this year, Schlesinger Associates installed THE WALL at select facilities.  This dynamic, multi-window interactive wall serves as both a free form and structured display of qualitative stimuli. 

By Mark Murray, Managing Director, MarketResponse International

It’s not in my nature to be the adult in the room, but it’s time to clear the air on using “qualitative” to breathe one’s own exhaust.

Each morning research facilities the world over are replenishing M&M dispensers, filling minibars and testing sound levels for the next episode of “how much do they like us.” And while it would be easy to launch into a cynical diatribe, the productive course of action is to implore the researcher to cling to their objectivity, hone interpretive skills and apply new qualitative methods able to reveal the next product or service consumers couldn’t imagine life without.

Here are some guidelines.

1. Bring “Context” to the project.
There’s value in any opportunity to hear customers’ and prospects’ reactions to concepts, products, and offers. Be sure to clarify what to expect, what the realistic take-away is and, perhaps most importantly, which comments must be ignored.

Sure, it’s invigorating to be the purist and say “exploratory research” is not the proper forum for a Facebook thumbs-up-or-down reaction. Instead, be the realist. Rally around creating a productive conversation behind the glass. Just keep the word “Research” off the report. “Customer Audit” has a nice ring to it.

Accept that in some situations your talents are being used to “facilitate” instead of “moderate”. Recognize the difference and get the work done.

2. Objectivity
It must be unwavering. Some marketers design, field and report qualitative implications internally. Many have productive, long-term relationships with go-to moderators. Both can be valuable resources steeped with an understanding of customer and category.

That said, if there’s a fork in the road between self-preservation and autonomy, you’re on the wrong path.
In your first visit with a client’s research director, you’ll know through their candor the level of objectivity, senior-management sponsorship and respect they’ve achieved. Cherish those who have it. Propose quant for those who don’t.

3. Annihilate Narcissism
It’s fine to applaud passion for the business, but keep it out of the moderator’s guide.

Esprit de corps is healthy. It fuels incredible accomplishments. Celebrate opportunities to live vicariously through your client’s success. Just remember, a productive devil’s advocate can be the saving grace in getting a strategy right and keeping expectations in line with results. You weren’t invited to validate. Qualitative’s role is to investigate.

4. Research Designed to Stimulate
Perhaps there was a day when research came in chocolate and vanilla. And while the first question is often quant or qual, the first response should be, “what will you do with the findings?”

The menu of exploratory methods is ever growing. Researching research has become a more important part of being a resilient practice. The tools and forums to deliver stimulus are far-reaching, and the morsels of characteristics available for recruiting are virtually limitless. Without letting the process become cumbersome, embrace and use all means available.

We’ve organized methods across Brand, Engagement, Product, and Communications. Each of these Practice Areas houses methods designed to deliver the learning needed to answer and measure specific client requests. For instance, Brand understanding asks us to isolate the core motivations that form strong connections. Communications checks focus on interpretation of the message, capacity to understand, and overall appeal. Each request calls for a specific approach and offers the challenge to add new techniques and exercises over time.

Recognize that today, life is woven with threads of e-mail, and our conversations have been replaced by a series of texts. The challenge of hosting a focused dialogue today makes research design paramount. Commit to learning and incorporating new platforms while distinguishing the “tool” from the job of getting constructive observations and valuable insights through the consumer narrative.
With all that said, the simple handwritten notes of participants’ “ideal moments” remain some of the most insightful treasures of our studies – there’s nothing wrong with the tried and true.

5. The Art of the Question and the Empathetic Ear
The proliferation of bias in virtually all content consumed these days is overwhelming. Developing an objective question as a means of getting an unadulterated response has become more difficult. More than ever, we need to guard against discussion guide rewrites that unconsciously entice respondents to draw a target around your dart. Continually ask yourself if a question is designed to prompt an answer or launch a narrative that reveals their story. You need to understand the world in which your client hopes to play a role and not the other way around.

It’s time to face the facts with respect to Qualitative. “We” have reached a point where behavioral tracking, algorithms, and rigid experience designs are asking consumers to do business on the marketers’ terms. All of these factors make it the perfect time for customer information to meet customer understanding.

Acquiring valuable qualitative insights before applying assumptions to data just might provide that powerful point of difference opportunity no one else has discovered.


Do We Have a Place in the Lives of Respondents?

For online data collection companies, respondents are every bit as important as clients. When we don’t value our respondents’ experience, our data becomes compromised.


By Sima Vasa

For online data collection companies, respondents are every bit as important as clients. When we don’t value our respondents’ experience, our data becomes compromised.

Among all the data-collection techniques used, the internet offers the least direct human contact. This can leave respondents feeling alienated, as though they are a mere statistic. As leaders in online data collection, we must place greater emphasis on our respondents’ experience. Without respondents, we have no customers — and without customers, we have no business! The technology driving our industry is incredible, and it’s worthy of attention. But we often forget that the feedback we collect from people is what’s driving this technology.

In this article, I’d like to take some time to learn more about our respondents. After all, they’re the ones who help us make a living! They want to share their opinions and earn rewards, true, but we also have a place in their daily lives.

Whether we acknowledge it or not, the experience we deliver to our respondents has an impact. We need to make sure that our respondents are as satisfied as our customers. After all, they are the heart and soul of what we deliver.

Adopting this perspective, I chose to look more deeply into how the experience we offer shapes respondents’ opinions of us. We developed and fielded a survey of 2,500 of our most active panelists. Here is what we found:

  1. Respondents count on us and plan to take our surveys. 80% of people indicated that they like taking surveys on weekdays, but only when they have a free moment. We learned from this that we need to provide more opportunities for those who can’t make time to respond during the week. One respondent said:

The only thing I have trouble with is doing these surveys on weekdays […] I am doing this one on a Wednesday.  I am a teacher, and weekdays are the most difficult for me to get them done and they sometimes expire before I get a chance to do them.

  2. The older demographic has value and wants to contribute in a more meaningful way. The younger demographic, out of all age groups, rated their experience with us the highest. As I dug deeper, I learned that the older demographic wished we offered more surveys geared toward them. Here’s some of what they said:

[I’d like] more surveys geared to older adults. Just because we are over 65 doesn’t mean we don’t contribute.

I may be older but I’m quite aware of what’s going on in the world.  I’m an avid reader, [and] love music, including what my grandchildren listen to.  I’m interested in tech, though [I’m] not a geek.

Reading these comments, and many others like them, we saw the need to give our older demographic more opportunities to share their opinion, and to target more surveys to their age group. They are loyal and want to contribute.

  3. Integration of mobile technology enhances respondents’ experience and increases participation. Respondents frequently requested that surveys be optimized for mobile devices. We received many comments like these:

Ensure surveys can be completed on a mobile device… it’s no longer new technology.

Many surveys are not able to be taken on a mobile device. This needs to be changed as more and more people use their mobile devices for things other than talking/texting.

  4. Respondents want us to value their time. Many respondents expressed frustration at being disqualified after answering fifteen questions. 87% of our respondents indicated that they would rather answer 2-5 preliminary questions (even if these questions are repeated in the main survey) than answer 15-20 questions before being disqualified. Most would prefer to know upfront, at the cost of a redundant question or two, whether they qualify. They understand the trade-off and prefer being pre-screened.
  5. While the primary reason for membership in our panel is to earn rewards, this varies by age group. Earning rewards was, unsurprisingly, cited as the primary reason for membership in our panel. While this response was the most common, it varied by age group. For the 18-34 segment, fun rewards were the largest driver (69%). In contrast, 25% of the 35-54 segment cited “having my voice heard” as the primary driver for membership, while only 15% of the 18-34 segment listed this as their motivation.

It’s natural for us to be concerned with the service we offer our customer base. This article is designed not in opposition to that demand for excellence, but as a supplement to it. We offer our customers data from respondents — if we don’t consider respondents’ concerns, the data we offer is compromised.

Our respondents are as essential as our customers. Their concerns must be taken into account if we seek to maximize our product’s value. When we consider our respondents, we ensure higher-quality data for those with whom we do business.


Why You Should Never Sample On Auto-Pilot

How do you decide the right sample variables to control on? The sample supplier needs to understand the objectives of the research as well as the analytic plan in order to make solid recommendations.


By Susan Frede, VP of Research Methods and Best Practices, Lightspeed GMI

Sampling often seems to be an afterthought with clients, as many simply state they want a ‘nationally representative sample.’ The question is: what does the client mean by a nationally representative sample? One client might think it means representation on age and gender only, while another might expect it to include controls on additional variables like region, income, education, etc.

How do you decide the right variables to control on? The sample supplier needs to understand the objectives of the research as well as the analytic plan in order to make solid recommendations. Without this understanding it is difficult to build an appropriate sample. This understanding should include a discussion of the category and how different groups react to the category. Clients may not always know every group that is important, but most will have a general understanding of how various groups might respond.

Research-Live (May 2016) recently reported an excellent example of the importance of understanding the objectives and the category. Voters in the UK will soon be voting in a referendum on whether or not to remain in the European Union. Results of polls have varied greatly, and originally people thought the difference was driven by online versus phone. With further digging, however, it was discovered that the decision to remain or not is highly correlated with education. Many of the polls are not controlling on education, which can lead to skews in the results. Those online are also more likely to have higher education levels, which exacerbates the difference between online and phone.

Sampling differences may also be accounting for some of the large differences in political polling in the U.S. for the next presidential race. It is important to look at the types of people who support each candidate and ensure the groups are appropriately represented in the sample. In some cases it may go beyond demographic variables. Certainly in U.S. politics, political party is key as many people vote along party lines.

Some might be saying ‘but you have just given us two political examples and this doesn’t apply in the marketing research world’. But it does! Say a client is testing a new idea for a high-end product with an expensive price tag. Logic suggests that those with higher incomes will be more able to afford the product and purchase it. If the income of your sample skews low, then it may appear the product is not viable. Income might become even more important if you are comparing several product ideas and trying to pick a winner. If one of the samples skews high on income and the other low, it could look as if the idea tested with the higher-income sample is the winner when in fact it is the sample that is driving the difference.

Generally age and gender are the most common quota variables, but below are a number of examples of what might be important to control on depending on the category. For any category, the key is to think about what demographics might impact respondents’ behaviors and answers.

  • Banking and finance – Income impacts the types of financial products people may own and use.
  • Product consumption – Household size is key because larger households have higher consumption levels.
  • Shopper study – Stores can vary by region.
  • Entertainment/music – Tastes may vary by race/ethnic group.
  • Insurance – Insurance needs change as life stage changes so controlling on things like marital status or presence of children is important.
  • Toys – Age and gender of children can drive toy preference.
  • Hispanics/Canadians – Language is important because it can drive product choice.

Even when sampling is carefully done, there can still be unexpected results. This is why the first thing to check when receiving a data file should be the demographics. Do the demographics look like what is expected of the target group? Next, brand usage and category habits should be examined. Balancing on demographics reduces the chance of brand-usage and habit skews, but differences can still occur. For example, having significantly more users of the brand can greatly impact key measures. When differences in demographics, brand usage, and category habits are discovered, the data can be weighted to bring them in line with expectations.
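
A minimal sketch of that first check, assuming a pandas DataFrame of completes and census-style targets for one control variable; the simple cell weighting shown here is one of several approaches (raking across multiple variables is a common alternative):

```python
import pandas as pd

# Hypothetical completes and target proportions for one quota variable.
df = pd.DataFrame({"age_group": ["18-34"] * 500 + ["35-54"] * 300 + ["55+"] * 200})
targets = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Step 1: do the demographics look like the target group?
achieved = df["age_group"].value_counts(normalize=True)
print(achieved.round(2))  # 18-34: 0.50, 35-54: 0.30, 55+: 0.20 -- skews young

# Step 2: if not, weight each cell back to its target proportion.
df["weight"] = df["age_group"].map(lambda g: targets[g] / achieved[g])
assert abs(df["weight"].mean() - 1.0) < 1e-9  # weights average to 1
```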

Bottom line, sampling needs the same consideration as the rest of the research design and should never be done on auto-pilot.


References

Bainbridge, J. (May 2016). Education not taken into account sufficiently in polls. Retrieved from https://www.research-live.com/article/news/education-not-taken-into-account-sufficiently-by-polls/id/5007442


The Analytics of Language, Behavior, and Personality

Computational linguists and computer scientists, among them University of Texas professor Jason Baldridge, have been working for over fifty years toward algorithmic understanding of human language. They’re not there yet. They are, however, doing a pretty good job with important tasks such as entity recognition, relation extraction, topic modeling, and summarization.


By Seth Grimes

Computational linguists and computer scientists, among them University of Texas professor Jason Baldridge, have been working for over fifty years toward algorithmic understanding of human language. They’re not there yet. They are, however, doing a pretty good job with important tasks such as entity recognition, relation extraction, topic modeling, and summarization. These tasks are accomplished via natural language processing (NLP) technologies, implementing linguistic, statistical, and machine learning methods.

Computational linguist Jason Baldridge, co-founder and chief scientist of start-up People Pattern

NLP touches our daily lives, in many ways. Voice response and personal assistants — Siri, Google Now, Microsoft Cortana, Amazon Alexa — rely on NLP to interpret requests and formulate appropriate responses. Search and recommendation engines apply NLP, as do applications ranging from pharmaceutical drug discovery to national security counter-terrorism systems.

NLP, part of text and speech analytics solutions, is widely applied for market research, consumer insights, and customer experience management. The more consumer-facing systems know about people — individuals and groups — their profiles, preferences, habits, and needs — the more accurate, personalized, and timely their responses. That form of understanding — pulling clues from social postings, behaviors, and connections — is the business Jason’s company, People Pattern, is in.

I think all this is cool stuff, so I asked two favors of Jason. #1 was to speak at a conference I organize, the upcoming Sentiment Analysis Symposium. He agreed. #2 was to respond to a series of questions — responses relayed in this article — exploring approaches to —

The Analytics of Language, Behavior, and Personality

Seth Grimes> People Pattern seeks to infer human characteristics via language and behavioral analyses, generating profiles that can be used to predict consumer responses. What are the most telling, most revealing sorts of things people say or do that, for business purposes, tell you who they are?

Jason Baldridge> People explicitly declare a portion of their interests in topics like sports, music, and politics in their bios and posts. This is part of their outward presentation of their selves: how they wish to be perceived by others and which content they believe will be of greatest interest to their audience. Other aspects are less immediately obvious, such as interests revealed through the social graph. This includes not just which accounts they follow, but the interests of the people they are most highly connected to (which may have been expressed in their posts and their own graph connections).

A person’s social activity can also reveal many other aspects, including demographics (e.g. gender, age, racial identity, location, and income) and psychographics (e.g. personality and status). Demographics are a core set of attributes used by most marketers. The ability to predict these (rather than using explicit declarations or surveys) enables many standard market research questions to be answered quickly and at a scale previously unattainable.

Seth> And what can one learn from these analyses?

People Pattern Portrait Search

Personas and associated language use.

As a whole, this kind of analysis allows us to standardize large populations (e.g. millions of people) on a common set of demographic variables and interests (possibly derived from people speaking different languages), and then support exploratory data analysis via unsupervised learning algorithms. For example, we use sparse factor analysis to find the correlated interests in an audience and furthermore group the individuals who are best fits for those factors. We call these discovered personas because they reveal clusters of individuals with related interests that distinguish them from other groups in the audience, and they have associated aggregate demographics—the usual things that go into building a persona segment by hand.
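
People Pattern’s production pipeline isn’t published, so the following is only a rough sketch of the idea on synthetic data, with scikit-learn’s sparse PCA standing in for sparse factor analysis: factor an audience-by-interest matrix, then assign each individual to the factor they load on most heavily.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
# Rows are individuals; columns are interest scores (sports, music, ...).
interests = rng.random((1000, 20))

# Factor the audience into a few sparse bundles of correlated interests.
spca = SparsePCA(n_components=4, random_state=0)
loadings = spca.fit_transform(interests)   # shape (1000, 4)

# "Discovered personas": each individual joins their best-fitting factor.
persona = np.abs(loadings).argmax(axis=1)
for p in range(4):
    print(f"Persona {p}: {(persona == p).sum()} members")
```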

We can then show the words, phrases, entities, and accounts that the individuals in each persona discuss with respect to each of the interests. For example, one segment might discuss Christian themes with respect to religion, while others might discuss Muslim or New Age ones. Marketers can then use these to create tailored content for ads that are delivered directly to the individuals in a given persona, using our audience dashboard. There are of course other uses, such as social-science questions. I’ve personally used it to look into audiences related to Black Lives Matter and understand how different groups of people talk about politics.

Our audience dashboard is backed by Elasticsearch, so you can also use search terms to find segments via self-declared allegiances for such polarizing topics.

A shout-out —

Personality and status are generally revealed through subtle linguistic indicators that my University of Texas at Austin colleague James Pennebaker has studied for the past three decades and is now commercializing with his start-up company Receptiviti. These include detecting and counting different types of words, such as function words (e.g. determiners and prepositions) or cognitive terms (such as “because” and “therefore”), and seeing how a given individual’s rates of use of those word classes compare to known profiles of the different personality types.
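
The LIWC dictionaries behind this work are proprietary, but the mechanics described, counting word-class rates and comparing them to known profiles, can be sketched with placeholder word lists:

```python
import re

# Tiny placeholder word classes; the real dictionaries are far larger.
WORD_CLASSES = {
    "function": {"the", "a", "an", "of", "in", "on", "to"},
    "cognitive": {"because", "therefore", "think", "know", "reason"},
}

def class_rates(text: str) -> dict:
    """Usage rate of each word class, per 100 words."""
    words = re.findall(r"[a-z']+", text.lower())
    return {
        name: 100 * sum(w in vocab for w in words) / max(len(words), 1)
        for name, vocab in WORD_CLASSES.items()
    }

print(class_rates("I think we won because the team trained hard."))
```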

So personas, language use, topics. How do behavioral analyses contribute to overall understanding?

Many behaviors reveal important aspects of an account that a human would struggle to infer. For example, the times at which an account regularly posts are a strong indicator of whether it is a person, organization or spam account. Organization accounts often automate their sharing, and they tend to post at regular intervals or common times, usually on the hour or half hour. Spam accounts often post at a regular frequency — perhaps every 8 minutes, plus or minus a minute. An actual person posts in accordance with sleep, work, and play activities, with greater variance — including sporadic bursts of activity and long periods of inactivity.
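
That posting-time signal is easy to sketch. Assuming a list of post timestamps, the coefficient of variation of the gaps between posts separates metronomic accounts from bursty human ones:

```python
from datetime import datetime
from statistics import mean, stdev

def interval_regularity(timestamps: list) -> float:
    """Coefficient of variation of inter-post gaps, in seconds.
    Near 0 = metronomic (likely automated); large = bursty (likely human)."""
    gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    return stdev(gaps) / mean(gaps)

# A spam-like account posting every 8 minutes on the dot scores ~0.
posts = [datetime(2016, 6, 1, 9, 8 * i) for i in range(1, 8)]
print(round(interval_regularity(posts), 3))  # 0.0
```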

Any other elements?

Graph connections are especially useful for bespoke, super-specific interests and questions. For example, we used graph connections to build a pro-life/pro-choice classifier for one client, to rank over 200,000 individuals in greater Texas on a scale from most likely to be pro-life to most likely to be pro-choice. By using known pro-life and pro-choice accounts, it was straightforward to gather examples of individuals with a strong affiliation to one side or the other and learn a classifier based on their graph connections, which was then applied to the graph connections of individuals who follow none of those accounts.
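
A skeletal version of that approach, with invented account handles standing in for the real seed accounts: follows become binary features, individuals with a known affiliation become training labels, and the fitted model then scores everyone else’s graph connections.

```python
from sklearn.linear_model import LogisticRegression

# Invented handles; in practice these are thousands of followed accounts.
ACCOUNTS = ["@seed_side_a1", "@seed_side_a2", "@seed_side_b1",
            "@seed_side_b2", "@news_outlet", "@local_team"]

def follow_vector(follows: set) -> list:
    """Binary feature vector: does this individual follow each account?"""
    return [int(a in follows) for a in ACCOUNTS]

# Training examples: individuals strongly affiliated with one side.
X = [follow_vector({"@seed_side_a1", "@local_team"}),
     follow_vector({"@seed_side_a2"}),
     follow_vector({"@seed_side_b1", "@news_outlet"}),
     follow_vector({"@seed_side_b2", "@local_team"})]
y = [1, 1, 0, 0]  # 1 = one side of the issue, 0 = the other

clf = LogisticRegression().fit(X, y)
# Rank someone who follows none of the seed accounts directly.
print(clf.predict_proba([follow_vector({"@news_outlet"})])[0])
```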

Could you say a bit about how People Pattern identifies salient data and makes sense of it, the algorithms?

The starting point is to identify an audience. Often this is simply the people who follow a brand and/or its competitors, or who comment on their products or use certain hashtags. We can also connect the individuals in a CRM to their corresponding social accounts. This process, which we refer to as stitching, uses identity-resolution algorithms that make predictions based on names, locations, email addresses and how well they match corresponding fields in the social profiles. After identifying high-confidence matches, we can then append their profile analysis to their CRM data. This can inform an email campaign, be the start of lead generation, and more.
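
A toy rendering of that stitching step; the fields, weights, and threshold are illustrative, and production identity resolution is certainly more involved:

```python
from difflib import SequenceMatcher

def match_score(crm: dict, profile: dict) -> float:
    """Crude identity-resolution score from name, location, and handle."""
    def sim(a: str, b: str) -> float:
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()
    return (0.5 * sim(crm["name"], profile["name"])
            + 0.3 * sim(crm["location"], profile["location"])
            + 0.2 * sim(crm["email"].split("@")[0], profile["handle"]))

crm_row = {"name": "Dana Smith", "location": "Austin, TX",
           "email": "dana.smith@example.com"}
candidate = {"name": "Dana Smith", "location": "Austin", "handle": "danasmith"}

# Append social-profile analysis to the CRM record only on a confident match.
if match_score(crm_row, candidate) > 0.8:
    print("stitched:", candidate["handle"])
```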

Making sense of data — let’s look at three aspects — demographics, interests, and location —

Our demographics classifiers are based on supervised training from millions of annotated examples. We use logistic regression for attributes like gender, race, and account type. For age, we use linear regression techniques that allow us to characterize the model’s confidence in its predictions — this allows us to provide more accurate aggregate estimates for arbitrary sets of social profiles. This is especially important for alcohol brands that need to ensure they are engaging with age-appropriate audiences. All of these classifiers are backed by rules that detect self-declared information when it is available (e.g. many people state their age in their bio).
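
On synthetic data, that split might be sketched like this: a plain logistic regression for a categorical attribute, and a Bayesian linear model for age so each prediction carries a confidence estimate (the actual models and features are, of course, People Pattern’s own).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, BayesianRidge

rng = np.random.default_rng(0)
X = rng.random((500, 10))                      # stand-in profile features
gender = rng.integers(0, 2, 500)               # annotated labels (toy)
age = 20 + 40 * X[:, 0] + rng.normal(0, 3, 500)

# Categorical attributes (gender, race, account type): logistic regression.
gender_clf = LogisticRegression().fit(X, gender)

# Age: a linear model whose predictions come with a standard deviation,
# so aggregate estimates over an audience can be qualified.
age_model = BayesianRidge().fit(X, age)
pred, std = age_model.predict(X[:3], return_std=True)
print(np.round(pred, 1), np.round(std, 1))
```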

We capture explicit interests with text classifiers. We use a proprietary semi-supervised algorithm for building classifiers from small amounts of human supervision and large amounts of unlabeled texts. Importantly, this allows us to support new languages quickly and at lower cost, compared to fully supervised models. We can also use classifiers built this way to generate features for other tasks. For example, we are able to learn classifiers that identify language associated with people of different age groups, and this produces an array of features used by our age classifiers. They are also great inputs for deep learning for NLP and they are different from the usual unsupervised word vectors people commonly use.
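
The proprietary algorithm isn’t described further, but generic self-training is the closest textbook analogue and conveys the flavor: a supervised learner bootstraps from a small labeled seed set by pseudo-labeling a large unlabeled pool. A sketch on synthetic data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 20))                 # features from a pile of posts
y = np.full(1000, -1)                      # -1 marks unlabeled examples
y[:30] = (X[:30, 0] > 0.5).astype(int)     # a small human-labeled seed set

# Confidently pseudo-labeled examples are folded back into training.
clf = SelfTrainingClassifier(LogisticRegression(), threshold=0.8)
clf.fit(X, y)
print(clf.predict(X[-5:]))
```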

For location, we use our internally developed adaptation of spatial label propagation. With this technique, you start with a set of accounts that have explicitly declared their location (in their bio or through geo tags), and then these locations are spread through graph connections to infer locations for accounts that have not stated theirs explicitly. This method can resolve over half of individuals to within 10 kilometers of their true location. Determining this information is important for many marketing questions (e.g. how does my audience in Dallas differ from my audience in Seattle?). It obviously also brings up privacy concerns. We use these determinations for aggregate analyses but don’t show them at the individual profile level. However, people should be aware that variations of these algorithms are published and there are open-source implementations, so leaving the location field blank is by no means sufficient to ensure your home location isn’t discoverable by others.
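
A toy rendering of spatial label propagation (published implementations use geometric medians and careful iteration schedules; this sketch just averages neighbors): declared locations spread through the follow graph to accounts that never declared one.

```python
# Known (lat, lon) points spread through the follow graph; unlocated
# accounts adopt the average position of their located friends.
known = {"a": (30.27, -97.74), "b": (30.25, -97.75)}   # self-declared
edges = {"c": ["a", "b"], "d": ["c"]}                  # follow graph

for _ in range(5):                                     # propagation rounds
    for node, friends in edges.items():
        points = [known[f] for f in friends if f in known]
        if points and node not in known:
            known[node] = (sum(p[0] for p in points) / len(points),
                           sum(p[1] for p in points) / len(points))

print(known["d"])  # inherits a location despite never declaring one
```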

My impression is that People Pattern, with an interplay of multiple algorithms and data types and multi-stage analysis processes, is a level more complex than most new-to-the-market systems. How do you excel while avoiding over-engineering that leads to a brittle solution?

It’s an ongoing process, with plenty of bumps and bruises along the way. I’m very fortunate that my co-founder, Ken Cho, has deep experience in enterprise social media applications. Ken co-founded Spredfast [an enterprise social media marketing platform]. He has strong intuitions about what kind of data will be useful to marketers, and we work together to figure out whether it is possible to extract and/or predict that data.

We’ve struck on a number of things that work really well, such as predicting core demographics and interests and doing clustering based on those. Other things have worked well, but didn’t provide enough value or were too confusing to users. For example, we used to support both interest-level keyword analysis (which words does this audience use with respect to “music”) and topic modeling, which produces clusters of semantically related words given all the posts by people in the audience, in (almost) real-time. The topics were interesting because they showed groupings of interests that weren’t captured by our interest hierarchy (such as music events), but it was expensive to support topic model analysis given our RESTful architecture and we chose to deprecate that capability. We have since reworked our infrastructure so that we can support some of those analyses in batch (rather than streaming) mode for deeper audience analyses. This is also important for supporting multiple influence scores computed with respect to a fixed audience rather than generic overall influence scores.

Ultimately, I’ve learned to approach a new kind of analysis not just with respect to the modeling but, as importantly, to consider whether we can get the data needed at the time the user wants the analysis, how costly the infrastructure to support it will be, and how valuable it is likely to be. We’ve done some post-hoc reconsiderations along these lines, which has led us to streamline capabilities.

Other factors?

Another key part of this is having the right engineering team to plan and implement the necessary infrastructure. Steve Blackmon joined us a year ago, and his deep experience in big data and machine learning problems has allowed us to build our people database in a scalable, repeatable manner. This means we now have 200+ million profiles that have demographics, interests and more already pre-computed. More importantly, we now have recipes and infrastructure for developing further classifiers and analyses. This allows us to get them into our product more quickly. Another important recent hire was our product manager Omid Sedaghatian. Omid is doing a fantastic job of figuring out what aspects of our application are excelling, which aren’t delivering expected value, and how we can streamline and simplify everything we do.

Excuse the flattery, but it’s clear your enthusiasm and your willingness to share your knowledge are huge assets for People Pattern. Not coincidentally, your other job is teaching. Regarding teaching — to conclude this interview — Sentiment Analysis Symposium in New York, and pre-conference you’ll present a tutorial, Computing Sentiment, Emotion, and Personality. [Use the registration code GREENBOOK for a 10% discount.] Could you give us the gist of the material you’ll be covering?

Actually, I just did. Well, almost.

I’ll start the tutorial with a natural language processing overview and then cover sentiment analysis basics — rules, annotation, machine learning, and evaluation. Then I’ll get into author modeling, which seeks to understand demographic and psychographic attributes based on what someone says and how they say it. This is in the tutorial description: We’ll look at additional information that might be determined from non-explicit components of linguistic expression, as well as non-textual aspects of the input, such as geography, social networks, and images, things I’ve described in this interview. But with an extended, live session you get depth and interaction, and an opportunity to explore.

Thanks, Jason. I’m looking forward to your session.


GRIT Says Panel Woes Are Jeopardizing MR’s Future. There’s an Answer.

State-of-the-art mobile research is the innovation our industry needs to embrace. But before that can happen, we have to overcome a common misconception about what mobile research really is, and what it can accomplish.


By Michael Smith

A running theme through the 82 pages of the most recent Greenbook Research Industry Trends Report (GRIT) is that the quality of survey sample has eroded to the point of crisis for market research.

GRIT sounds a repeated alarm over what its authors call “a known problem…with no solution in sight.” But there is a solution — all-mobile panels — which I’ll explore in a bit.

First, some facts and quotes from the report that lay out the dimensions of the panel crisis:

  • 38% of GRIT’s more than 2,000 industry respondents expect sample quality to get worse over the coming three years; fewer than 28% believe it will improve – and among clients of market research providers, optimism sank to 23%.
  • “Clients and suppliers agree that sample quality is getting worse, and there is little alignment on what to do about it. This is a perennial topic; when will the industry do something about it?”
  • “The smartphone revolution and declining participation are indeed problems that need to be addressed. Few disagree with this belief, but there is far less consensus around the extent of the problem, its implications and the range of solutions.”
  • “The difficulty of accessing truly representative sample sources… could be viewed as the single largest area of concern for the industry… We are running out of online panelists…”
  • “There are few legitimate excuses one can muster for not confronting the sample problems that plague the industry. There’s no doubt that the solutions are hard, but…far too many people…are dragging their feet.”
  • “The real existential threat to our industry is…the future of research participation. The real question therefore is when will people catch on? When will responses to these questions drive change?”
  • “We believe that the death spiral is accelerating for those researchers who fail to act. The poor experiences they create are starting to contrast markedly against the unique and engaging experiences by new entrants as well as the small number of innovators who’ve been unafraid to embrace change.”

The last sentence points the only way forward. Innovate. Embrace change.

A Formula for Successful Mobile Research

My argument is that state-of-the-art mobile research is the innovation our industry needs to embrace. But before that can happen, we have to overcome a common misconception about what mobile research really is, and what it can accomplish.

The misconception is borne out by one of GRIT’s most telling findings: 74% of respondents think they’re already doing mobile research, more than any other “emerging method.” An additional 17% are considering trying mobile for the first time.

MFour has long struggled to make the industry realize that not all mobile research is created equal. There’s good mobile and bad mobile, mobile that’s artless and mobile that’s state-of-the-art. There’s pure mobile that’s solely geared to smartphones, and diluted mobile that ties smartphone respondents to fading online survey technology. There’s mobile that fails and mobile that works.

MFour Mobile Research, Inc.’s aim since 2011 has been to define what mobile research can and should be, then create the new software and new approaches to panel-building that alone can make mobile work. Success means solving both ends of the equation: developing the right technology and recruiting and cultivating the right panel.

Developing The Right Mobile Technology

We’ve broken with all trappings of online research. Instead, we deploy technology that’s new to market research, the native app. Our proprietary app, Surveys on the Go®, instantly loads an entire survey into the respondent’s phone – including any pictures or multimedia content needed to enhance questions and answers. Embedding the survey into the phone is what makes it “native.”

Why does it matter? Because it frees respondents to complete surveys at their convenience. They don’t have to interrupt what they’re doing. They don’t need to be connected to the internet. Consequently, there’s no risk that the survey will become intolerably slow because of poor connections that lead to snail’s-pace downloads and data transfers. The survey can’t be dropped because of a lost signal.

At the opposite end of the spectrum are hybrid approaches that tether mobile devices to online survey software. A separate, back-and-forth exchange must take place for each and every question and answer. It’s a method that puts the respondent’s experience and the survey’s success at the mercy of internet connections that, as we all know, can bog down or disappear.

Essentially, mobile surveys embedded through a native app don’t have to be short and simplistic. Immune to smartphone signal issues, they can be long and sophisticated, and they can exploit special smartphone capabilities such as multimedia and geolocation, which allows inviting panelists to surveys while they are still shopping or have just left a store. In our experience, app-based interviews run smoothly regardless of location, even with interview lengths exceeding 20 minutes. Even at that LOI, we’ve experienced just a 6% drop-off rate. So much for the five-minute survey limit that’s commonly but wrongly posited for mobile research.
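
To make the architectural contrast concrete, here is a hypothetical sketch of the native-app pattern described above (the field names are invented, not Surveys on the Go’s actual format): the whole survey, media included, ships to the device up front, and answers queue locally until a connection is available.

```python
import json

# Hypothetical preloaded survey payload: everything needed to run the
# interview ships with the survey, so there are no per-question round trips.
survey = {
    "id": "S-1042",
    "questions": [
        {"id": "q1", "text": "Which store did you just visit?",
         "type": "single", "options": ["Store A", "Store B", "Other"]},
        {"id": "q2", "text": "How was the checkout experience?",
         "type": "open"},
    ],
    "assets": ["img/shelf_display.jpg"],  # media bundled, not streamed
}

# Answers accumulate offline and sync whenever a connection appears.
answer_queue = []
answer_queue.append({"survey": survey["id"], "q": "q1", "a": "Store A"})
print(json.dumps(answer_queue))
```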

As for building a reliable, representative sample, good technology that begets a good respondent experience goes a long way toward drastically improving participation.

Curating A Winning Mobile Panel

With the right mobile technology, it’s possible to recruit the right kind of mobile panel. Our panel numbers more than a million active respondents who take surveys solely on their smartphones and other mobile devices. They seem to like it, as reflected in strong ratings and comments on the App Store and Google Play. The mere fact that respondents can give us direct, unsolicited and very public feedback on their survey experiences makes app-based mobile a superior tool for becoming aware of panel problems as they arise – and taking quick action to solve them. It makes us accountable – as any firm that’s serious about its responsibilities and confident in its capabilities ought to be.

There’s much more to tell about the all-mobile approach – not least its ability to reach Millennials, Hispanics and African-Americans who, as GRIT notes, are vital to research but increasingly inaccessible to online surveys.

The Solution to Successful Mobile Research

But my main point is that the industry needs to understand that the available mobile technologies differ drastically. Then firms can make the natural comparisons, try different mobile providers, and see which can deliver a good panel and fast, reliable, representative data.

I think the most important sentence in the new edition of GRIT comes near the end, in a section called “Opportunities for the Market Research Industry” that examines ways forward from the current dead end.

“Mobile research has been seen as an opportunity for many years, but there is a sense that now we are at the stage where we can really start to exploit mobile data gathering techniques.”

Before you can exploit mobile techniques, you must learn to tell one technique from the next. You have to stop stereotyping all iterations of mobile research as prone to the same limitations and drawbacks.

GRIT has done our field a great service by refusing to sugarcoat the sample problem and by sounding a clear alarm that something has to be done about it. There’s just one point I dispute.

I wouldn’t say that market research’s pervasive sample woes are “a known problem…with no solution in sight.” There is a solution, but until now it has been overlooked.

That appears to be changing. MFour’s year-by-year growth since we debuted our native app in 2011 suggests that an increasing number of researchers are starting to make the kinds of distinctions about mobile research that need to be made.

Market researchers need to do some research in their own backyards, to gain insights into their own most crucial interests – especially, as GRIT makes clear, when the industry’s health depends on it. Getting a more sophisticated understanding of mobile, market research’s most widely-adopted but least understood “emerging method,” would be a good start.
