Who Are The 50 Most Innovative MR Industry Firms? Take the Latest GRIT Survey & Tell Us!

We’d like to invite you to participate in the Q1-Q2 2017 GreenBook Research Industry Trends (GRIT) Survey which helps us write the GRIT Report.

The GRIT Report was created to help insights professionals like you better understand where the industry is heading so you can make the right decisions for your organization. Click here to check out the most recent GRIT Report.

This edition of the GRIT Report features the always-popular GRIT Top 50 – a tracker of the 50 suppliers and clients perceived to be the industry’s most innovative.

Plus, we’ve updated the survey to include some of the most hot-button topics in our industry today, including:

  • The Role of Sample Providers
  • Sample Quality
  • The Future of Sampling
  • Adoption of Automation Technologies/Strategies
  • Post-Secondary Education Programs in Market Research
  • US MMR Programs Brand Ranking
  • Skillsets for MMR Program Graduates
  • Project Spending/Revenue in 2017
  • Industry Benchmarking:
    • Success Factors Of The “Perfect Study”
    • Importance of Knowledge, Influence & Marketing impact for study
    • Organization Ratings and Benchmarking
    • Organization Technology Strategies
    • Methodology Prioritization Factors
  • Prediction Market Exercise: Emerging Methods Adoption

Will you support our profession by completing the survey? It shouldn’t take you more than 15 minutes.

Take the survey now.

As a thank you, we’ll send a PDF copy of the report straight to your inbox as soon as it hits the (virtual) shelves so you’ll be among the first to see it.

Prefer to take the survey in Chinese, German, Japanese, or Spanish? You’ll be able to select one of those languages at the start of the questionnaire.

Thanks in advance for giving back in support of our profession!

Thanks to Our GRIT Partners

Research Partners
Ascribe, AYTM – Ask Your Target Market, Bakamo Social, Consensus Point, G3 Translate, Gen2 Advisors, Lightspeed, Michigan State University, MROC Japan – Community Solutions Company, mTAB, Multivariate Solutions, NewMR, OfficeReports, Research Now, Researchscape International, Stakeholder Advisory Services, Virtual Incentives

Sample Partners
A.C. Nielsen Center for Marketing Research at The Wisconsin School of Business, AIM, AMAI, American Marketing Association New York, Asia Pacific Research Committee (APRC), ARIA, Australian Market & Social Research Society (AMSRS), BAQMaR, BVA, MRS, Next Gen Market Research (NGMR), OdinText Inc., Provokers, Qualitative Research Consultants Association, The Research Club, The UTA MSMR Alumni Association, University of Georgia | MRII, Women In Research

 


Using Text Analytics to Tidy a Word Cloud

The trick to a great word cloud is to first tidy up the raw text using automated text analytics.

By Tim Bock

People creating word clouds commonly want more control: to limit the cloud to frequently occurring words, to join words together into phrases, and to automatically group words that have the same meaning. The trick to doing this is to first tidy up the raw text using automated text analytics, and then create the word cloud using the tidied text.

Why don’t people like Tom Cruise?

In my earlier post, I explained how you can create and interactively modify word clouds in Displayr using an example about why people dislike Tom Cruise. In this post, I use text analytics to create a better word cloud, faster.

As discussed in this post, text analytics routinely involves a pre-processing phase, where uninteresting and infrequent words are removed, spelling is corrected, words with a common root are merged, and phrases are learned. This can be automated in Displayr by selecting Insert > More (Analysis) > Text Analysis > Setup Text Analysis, selecting the appropriate options in the object inspector, and then ticking Automatic.
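
To make the pre-processing concrete, here is a minimal Python sketch of the same kind of tidying. The article itself does all of this inside Displayr's Setup Text Analysis; the stop-word list below is an illustrative assumption, the frequency threshold mirrors the "10 times or more" setting described below, and spelling correction and stemming are omitted for brevity.

```python
import re
from collections import Counter

# Illustrative assumptions, not Displayr's actual settings: a tiny stop-word
# list and the "10 times or more" frequency threshold described in the text.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "is", "he", "his",
              "i", "it", "that", "in", "about", "because", "just", "so"}
MIN_FREQUENCY = 10

def tidy_word_counts(responses):
    """Count words in open-ended responses after basic tidying."""
    words = []
    for response in responses:
        # Lower-case and keep runs of letters/apostrophes (so "don't" survives).
        tokens = re.findall(r"[a-z']+", response.lower())
        # Drop uninteresting (stop) words.
        words.extend(t for t in tokens if t not in STOP_WORDS)
    counts = Counter(words)
    # Drop infrequent words.
    return {word: n for word, n in counts.items() if n >= MIN_FREQUENCY}
```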

Below, the left side shows the main output of the text analysis setup in Displayr: the frequency with which each word appears after tidying. When this output is selected, as below, you can also see the settings on the right. For example, you can see the Text Variable being analyzed, which words have been removed, and that the output is limited to words that appear 10 times or more.

When doing this, keep in mind that pairs of words and phrases (e.g., don’t like) are better dealt with interactively in the word clouds, rather than by the text analysis.

[Screenshot: the text analysis setup output and its options]

Creating a word cloud from the tidied text

[Screenshot: the new tidied text variable at the top of the data tree]

Now that we have tidied the text data, we need to create a new variable in the data file with the tidied text. We need to do this because the word clouds take a variable as an input. To create a variable, select the output, and then select Insert > More (Analysis) > Text Analysis > Techniques > Save Tidied Text, which causes a new variable to appear at the top of the data tree, as shown to the right.

To create a word cloud, we now create a new table by dragging the new variable onto the page, and then select Charts > Word Cloud, adding any phrases that we want to appear (e.g., Tom Cruise). We then get the much tidier word cloud below.
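
For readers working outside Displayr, a rough equivalent can be sketched with the open-source wordcloud package (an assumption on my part, not something Displayr uses): glue together any phrases you want kept intact, count the remaining terms in the tidied text, and render the counts.

```python
from collections import Counter
from wordcloud import WordCloud  # open-source package (pip install wordcloud)

def word_cloud_from_tidied_text(tidied_responses, phrases=("tom cruise",),
                                min_frequency=10, outfile="word_cloud.png"):
    """Render a word cloud from already-tidied text, keeping phrases intact."""
    counts = Counter()
    for response in tidied_responses:
        text = response.lower()
        # Glue each phrase into a single token so it is counted as one term.
        for phrase in phrases:
            text = text.replace(phrase, phrase.replace(" ", "_"))
        counts.update(text.split())
    frequencies = {term.replace("_", " "): n
                   for term, n in counts.items() if n >= min_frequency}
    cloud = WordCloud(width=800, height=600, background_color="white")
    cloud.generate_from_frequencies(frequencies)
    cloud.to_file(outfile)
    return frequencies
```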

If you want to try it yourself, click here.

10 Best Practices to Fast Track Your Qual

Scoot Insights shares 10 of their Best Practices that allow teams to conduct agile qualitative research without compromising depth or quality of insights.

By Katrina Noelle and Janet Standen

We at Scoot Insights answer core business questions efficiently and effectively with agile, iterative, collaborative, qualitative research. We’d like to share 10 of our Best Practices that allow teams to conduct agile qualitative research without compromising depth or quality of insights.

  1. Set goals collaboratively and narrow in on one core objective
    • To propel research forward in an agile manner, scope creep must be resisted at all costs. Center your team on their primary reason for doing the piece of research, namely, the key business decision they will make based on the insights.
    • We recommend scheduling an internal objective alignment session even before briefing your research partner. Use this session to get input from the key stakeholders and understand the following elements that will impact your research design:
      1. Timing of decisions to be made based on the research
      2. Deliverables that will help you make those decisions
    • If other objectives start to be added to the project, consider giving them their own fieldwork or expanding the timeline for the original one.
  2. Use experienced & immersed moderators
    • If you are looking to make business decisions quickly, you need research partners who can be efficiently briefed on the objectives and roll with iterative development of the stimuli, discussion guide, etc.
    • Choose moderators with experience in your product category and chosen research methodology. This is not the time to train internal staff to moderate.
    • Ensure the moderator is comfortable with iterative materials and on-the-fly updates during the fieldwork.
  3. Maximize stakeholder involvement throughout
    • Immerse your team early in the research objectives and parameters.
    • Ensure they are actively involved in the fieldwork; give everyone a job/role.
  4. Integrate iterative design
    • Be open to changing materials. Be ready to change concepts, stimuli, and discussion flow as you learn throughout the qualitative fieldwork. Once you hear enough helpful feedback on one version of your idea, integrate it and test that iteration, and so on.
  5. Use back-to-back, time-efficient audience sessions
    • We pack our research days, scheduling back-to-back mini group discussions over the span of an 8-hour workday. This allows us to hear from many more participants than traditional research scheduling allows.
    • Each participant has more airtime and a more intimate environment in which to share their opinion.
    • Another benefit is that the presence and time commitment of the behind-the-mirror team is fully maximized.
  6. Real-time synthesis by backroom facilitator
    • Leverage dual moderators: one in the front room and one in the backroom. This will help your team with your real-time iteration while providing live qualitative synthesis of the insights.
    • In the back room, be sure to use good old-fashioned flip charts, sharpies and post it notes to capture customer and team member insights in the moment.
  7. Conduct immediate debrief workshop
    • Inspire shared understanding by bringing cross-functional teams closer to customers.
    • Invite your backroom moderator to conduct a group discussion on the themes and take-aways from the day with team attendees.
  8. Merge client expertise and audience learning
    • This is where your expert moderating team steps in again to help. They can help your internal teams to identify competitively distinctive and customer-driven actions.
    • They can also keep the voice of the customer in the room during the debrief session, merging customer insights with client expertise.
  9. Report out in 24 hours
    • We recommend moving quickly to synthesize the interactive debrief workshop conclusions and action items into a short and sweet report.
    • Be sure to circulate this summary the following day to help the team take action quickly.
  10. Integrate insights into action quickly!
    • Work with your stakeholders to integrate the insights directly into business action. Keep the insights top of mind as your clients move through the decision-making process. If you’ve kept them engaged in the research process throughout, they should be more easily able to integrate those insights into business action.

Often qualitative learning is seen as a slow-moving, costly part of the research process. Consider revving up your qualitative insights so that business decisions can be informed by valuable in-depth qualitative feedback more efficiently and effectively.

How to Solve the Most Common Data Problems in Retail

The most successful retail companies use data science and predictive analytics to improve efficiency, improve marketing campaigns, and gain customer insights that give them a competitive advantage.

By Pauline Brown

In the retail business, big data is poised in the coming years to open up huge opportunities in the way stores (both physical and online) fundamentally operate and serve customers. Given the incredibly small margins, Big Data will also provide much-needed efficiency improvements – from tighter supply chain management to more targeted marketing campaigns – that can make a big difference to a retail business of any size.

Making data-driven decisions is no longer about learning from the past; it means making changes to the business constantly based on real time input from all data sources across the organization. Making predictions and applying machine learning is based on traditional data but also on new and innovative sources like connected Internet of Things (IoT) devices and sensors or, going a step further with deep learning, unstructured data from things like static images or cameras monitoring stock in warehouses. Consumers can be fickle, so being able to accurately anticipate what they will do next and quickly react is what puts the most innovative and successful retailers above the rest.

Data science software maker Dataiku recently explored the types of data problems facing retail, how to solve them, and the steps that any retail organization can take to become more data driven.

PROBLEM #1: Siloed, Static Customer Views

Many retailers still struggle with siloed data – transaction data lives apart from web logs, which in turn are separate from CRM data, etc.

SOLUTION: Complete, Real-Time Customer Views

Cutting-edge retailers look at customers as a whole, combining traditional data sources with the non-traditional (like social media or other external data sources that can provide valuable insight).

RESULTS:

  • More accurate and targeted churn prediction.
  • Robust fraud detection systems.
  • More effective marketing campaigns due to more advanced customer segmentation.
  • Better customer service.
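
As a rough illustration of what a unified customer view can look like in practice, here is a minimal pandas sketch that joins hypothetical extracts from three silos on a shared customer ID; the file and column names are invented for the example.

```python
import pandas as pd

# Hypothetical extracts from three silos, all keyed by customer_id.
transactions = pd.read_csv("transactions.csv")  # customer_id, order_date, amount
web_logs = pd.read_csv("web_logs.csv")          # customer_id, sessions_30d, pages_30d
crm = pd.read_csv("crm.csv")                    # customer_id, segment, signup_date

# Roll transactions up to one row per customer before joining.
spend = (transactions
         .groupby("customer_id", as_index=False)
         .agg(orders=("order_date", "count"), total_spend=("amount", "sum")))

# Left-join everything onto the CRM base to build a single customer view.
customer_view = (crm
                 .merge(spend, on="customer_id", how="left")
                 .merge(web_logs, on="customer_id", how="left")
                 .fillna({"orders": 0, "total_spend": 0.0}))

# customer_view now has one row per customer and can feed churn models,
# fraud checks, segmentation, or customer-service dashboards.
```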

PROBLEM #2: Time Consuming Vendor & Supply Chain Management

Supply chains are already driven by numbers and analytics, but retailers have been slow to embrace the power of real-time analytics and of harnessing huge, unstructured data sets.

SOLUTION: Automation and Prediction for Faster, More Accurate Management

Combine structured and unstructured data in real time for things like more accurate forecasts or automatic reordering.

RESULTS:

  • More efficient inventory management based on real-time data and behavior.
  • Optimized pricing strategies.

PROBLEM #3: Analysis Based on Historical Data

Looking back at shoppers’ past activity often isn’t a good indication of what they will do next.

SOLUTION: Prediction and Machine Learning in Real Time

Instead, real-time prediction based on current trends and behaviors from all sources of data is the key.

RESULTS:

  • Anticipating what a customer will do next.
  • A more agile business based on up-to-the-minute signals.
  • The ability to adapt automatically to customer behavior.

PROBLEM #4: One-Time Data Projects

Completing one-off data projects that aren’t reproducible is frustrating and inefficient.

SOLUTION: Automated, Scalable, and Reproducible Data Initiatives

The best data teams in retail focus on putting a data project into production that is completely automated and scalable.

RESULTS:

  • A more efficient team that can scale as the company grows.
  • With reproducible workflows, the team can work on more projects.

While each organization is different, the data challenges are the same. It takes a data production plan to guide a team of any size to successfully producing a working predictive model that yields meaningful insights for the business.

How to Complete any Data Project in Retail

The most successful retail companies worldwide solve these four issues by efficiently leveraging all of the data at their fingertips and following set processes to see data projects through from start to finish. They also ensure those data projects are reproducible and scalable so the data team is constantly able to work on new projects rather than maintaining old ones. This is as easy as following the seven fundamental steps to completing a data project:

  1. DEFINE: Define your business question or business need: what problem are you trying to solve? What are the success metrics? What is the timeframe for completing the project?
  2. IDENTIFY DATA: Mix and merge data from different sources for a more robust data project.
  3. PREPARE & EXPLORE: Understand all variables. Ensure clean, homogenous data.
  4. PREDICT: Avoid the common error of training your model on both past and future events. Train only on data that will be available to you when a predictive model is actually running (see the sketch after this list). Choose your evaluation method wisely; how you evaluate your model should correspond to your business needs.
  5. VISUALIZE: Communicate with product/marketing teams to build insightful visualizations.  Use visualizations to uncover additional insights to explore in the predictive phase.
  6. DEPLOY: Determine if the project is addressing an ongoing business need, and if so, ensure the model is deployed into production for a continuous strategy and to avoid one-off data projects.
  7. TAKE ACTION: Determine what should be done next with the insights you’ve gained from your data project.  Is there more automation to be done? Can teams around the company use this data for a project they’re working on?
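
To illustrate the point in step 4 about not training on future events, here is a minimal sketch of a time-based split. The file, column names, and cutoff date are hypothetical, and logistic regression simply stands in for whatever model you would actually use.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical weekly customer snapshots with a next-week purchase label.
data = pd.read_csv("customer_weeks.csv", parse_dates=["week_start"])

# Split by time rather than at random, so the model is trained only on
# information that would have been available when it was running.
cutoff = pd.Timestamp("2017-01-01")  # illustrative cutoff
train = data[data["week_start"] < cutoff]
test = data[data["week_start"] >= cutoff]

feature_cols = [c for c in data.columns
                if c not in ("customer_id", "week_start", "purchased_next_week")]

model = LogisticRegression(max_iter=1000)
model.fit(train[feature_cols], train["purchased_next_week"])

# Evaluate on the later period only; pick the metric that matches the
# business need (accuracy here purely for brevity).
print("Hold-out accuracy:", model.score(test[feature_cols], test["purchased_next_week"]))
```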

There is no doubt that data science, machine learning, and predictive analytics combined with big data will become an even more fundamental part of both online and traditional retail in the coming years. All retail organizations will use them, but only the successful ones will have an effective data production plan that yields the insights that give them an edge over the competition.

Ever Thought of Researching Ethnic Minorities Online?

A treasure trove of insights awaits those who have the confidence to challenge mistaken assumptions and to give a proper voice to ethnic minorities.

By Dr. Marie-Claude Gervais

I simply cannot recall the number of times clients have ruled out conducting online research with people from ethnic minority backgrounds on the grounds that, somehow, ‘ethnic minorities are not online’. As face-to-face research is too expensive, clients quickly give up altogether on the idea of conducting research specifically with ethnic minority customers. Sadly this means that a whole segment of the population – one whose needs and experiences may be very specific – ends up excluded from research, and companies and service providers fail to reach and engage this lucrative group of consumers.

It is puzzling that assumptions about digital non-participation should persist, especially when robust evidence shows that, in the UK, people from ethnic minority backgrounds are more likely to have broadband, to own a smartphone, to be active online and to have positive attitudes towards new technologies. Surprised? Well, this is true of all the main ethnic minority communities in the UK, whether they are of Indian, Pakistani, Bangladeshi, Black African and Black Caribbean descent, or more recent migrants from Eastern Europe.

This evidence is not new either. An authoritative report produced by OFCOM in 2013 showed that internet penetration is deeper within these groups, with considerably more people from ethnic minority backgrounds owning a broadband connection. This is often because possessing a good internet service is essential to maintaining family ties and connecting with people ‘back home’ (on Facebook or Skype) in order to get international news and remain linked to their culture and society. This early internet adoption has increased their technological confidence; usage is more frequent, innovative and complex, as indicated by the fact that people from ethnic minority backgrounds are more likely to connect to WiFi hotspots. Importantly, ethnic minority people are also significantly more influenced by comments and reviews posted online than White British respondents. I know from my own research that people from all ethnic minority backgrounds place a greater importance on community networks and word of mouth, as a source of social and cultural capital. The fact that minority experiences are often marginalised, misrepresented or ignored in the media and elsewhere means that people develop the habit of turning to their network of trusted family and friends for advice, to share experiences and to access a perspective on the world (e.g. events, politics, brands and products) with which they can identify, and to which they can relate.

A similar picture emerges with respect to usage of mobile phones. Ownership of mobile phones is very deep, with virtually every ethnic minority household having at least one mobile phone and fewer people relying on a fixed landline. While the data is based on households instead of individuals, we also know that a greater proportion of ethnic minority individuals also own mobile phones.

Looking at the attitudes which people from ethnic minority backgrounds have towards the technology that enables and enriches online research sheds no further light on why they are currently often excluded from digital research. Indeed, all ethnic minority groups report being ‘less confused by computers’, ‘loving gadgets’ more and generally being keener to welcome ‘the latest technology’.

With all this in mind, why would we not seek to engage with these connected, forward-thinking, technology-loving consumers online? The answer is more likely to be found in the minds of research buyers than in the attitudes and lifestyles of the consumers. A treasure trove of insights awaits those who have the confidence to challenge mistaken assumptions and to give a proper voice to these consumers. Online research communities enable qualitative researchers to work with people from ethnic minority backgrounds to explore their world, wherever they are (not just in London, Birmingham or Bradford!), in a cost-effective and meaningful way.

I certainly can recall the number of times that clients who have used online research communities with ethnic minority consumers have expressed their delight at the outcomes of these projects. These experiences not only had a positive impact on their business but, perhaps more importantly, also on their mindset.

Originally posted here

Five Challenges to Overcome for Successful Market Research Online Communities

As with every methodology, there are some potential challenges to overcome with online communities to ensure you generate strong results.

By Abby Pearson

Market research online communities are a popular, fast growing research method with numerous benefits for researchers and participants alike. When well-managed, MROCs can deliver unbeatable insights instantly, giving researchers access to participants’ thoughts that they just didn’t have before. Not only that, but because they take place online in a safe and secure setting, participants feel comfortable voicing their opinions and can do so anytime, anywhere – meaning that MROCs easily fit in around respondents’ busy lifestyles. When you add in the fact that market research online communities can often be quicker and more cost efficient than traditional methods, it’s clear to see why they are such a hit in the market research world. However, as with every methodology, there are some potential challenges to overcome to ensure you generate the results you deserve…

  1. Eliminating recruitment risks and the danger of dropouts

If you have certain quotas and specific segments you need to adhere to, it can be a struggle to find respondents – and when you do find them, you’ll want to do everything you can to keep them! There are several strategies you can enlist during recruitment to ensure that your MROC runs smoothly and your respondents stay engaged. Firstly, make sure you let them know exactly what is expected of them so there are no surprises. Do they need to dedicate 30 minutes a day for a month? Or is your project more ad-hoc and flexible? Either way, outlining the requirements from the beginning in a clear document will make a dramatic difference. Additionally, we always recommend an over-recruit of 20-30% so if the worst does happen and you do have dropouts on the day, you’ll be prepared.

  2. Ensuring your onboarding goes smoothly

Not having a sufficient onboarding process can impact the whole study. From problems logging onto the platform to being asked to carry out tasks they weren’t expecting to complete, not having an onboarding strategy can result in disgruntled respondents dropping out. Don’t worry, though – there are a few things you can do to prevent this happening, such as allowing plenty of time for onboarding before the community starts (up to 48 hours where possible) and including an ice-breaker to help respondents feel more comfortable with each other – which in turn leads to more honest and insightful responses in the tasks.

  3. Encouraging participation and boosting engagement

It’s important that your participants feel valued throughout the project to keep them engaged, and offering bonus incentives in addition to the initial incentive received for taking part in the community can be a fantastic way to do this. You can also enlist a Community Manager to deal with any concerns and answer any questions to help boost engagement. It’s important to keep in contact with your respondents, so think about how long your community will last and plan how often you will follow up with your participants – if your research community is lasting a week, for example, you might want to reach out to participants every day. A phone call is generally the best way to do this as it means any issues can be addressed immediately. You should also think about how you will remind participants to complete their tasks – a gentle nudge via text message can work well to remind them of upcoming deadlines.

  4. Overcoming tricky technology issues

One of the biggest challenges to overcome is choosing the right software, with problems such as a lack of support or software that is difficult to use resulting in dropouts. That’s why it’s important to ask yourself exactly what you need from the research and tailor your software accordingly. It’s also worth considering using an app to make life easy for your participants, enabling them to access the community and complete tasks on the go. Before the community starts, be sure to provide respondents with an information sheet outlining what to do and who to contact, and you should also demo the software as well as testing it on respondents beforehand to iron out any issues. To combat any problems once the community is up and running, make sure there is a contact number for respondents to call both in and outside of office hours, so any issues are dealt with swiftly. Find out more about choosing the right software here.

  5. Increasing engagement through interesting tasks

Another issue with market research online communities is ensuring you choose the right tasks. By creating a variety of different tasks, you can keep the project exciting and your respondents engaged. From blogging and discussion tasks to idea generation and picture books, making sure you offer fun, frequent and flexible activities that appeal to your participants can ensure your respondents are keen to get involved and share their opinions with you. Have a look here for some inspiration on how to choose exciting tasks to keep your community engaged.

The High Cost of Cheap Sample

The fact is, bad or "cheap" sample can give you information that is dead wrong.

By Rob Berger

“At least it will give us directional information. I mean, how wrong can it be?” I’ve heard those remarks and many like them bandied about when people rationalize using cheap sample sources.

The fact is, bad or “cheap” sample can give you information that is dead wrong. So wrong that the information it provides is directional—it’s just pointing in the wrong direction.

We’re into the third year of a study where we test the reliability and validity of a popular and well known consumer survey service. We compare the findings of that research with the same data tracked by the Pew Research Center, who are well known for their methodological rigor. The question we are tracking is about the use of social media sites and apps.

The consumer survey results and the Pew Research Center findings could hardly be more different. For starters, the consumer survey suggests social media usage is half the level that Pew and many other sources say it is.

The consumer survey source says, for example, that 39% of online Americans are using Facebook. Pew puts that number at 79%. For other social media sites the differences are equally stark. Furthermore, the consumer survey data would suggest great volatility in the use of social media – with vast surges and declines. Pew shows slow, steady growth.

Why this discrepancy? We believe a lot of it has to do with why people are completing the survey. The consumer survey source obtains respondents by working with publishers to intercept people who are seeking to access “premium content” on their sites. Potential respondents are asked to answer a few questions in order to get access to content. In a whitepaper on this topic we consider a host of potential reasons, but the one that seems to be the most important is respondent motivation.

These people are not taking the survey because they want to. They are doing it to get to their desired content. They don’t have any stake in how they answer—in fact, the question is just a nuisance. When you treat people like that, it is no surprise that the data they give you may be inaccurate. Why would they bother? After all, they have just been frustrated in their pursuit of something they want.

That’s why we take respondent engagement so seriously. Whether it be on our Market Communities—recruited to be representative of the US and Canadian populations—or our clients’ specially recruited Insight Communities, we take care to ensure respondents know that their opinion is valued.

We’ve researched why people respond, and we work hard to let respondents know their opinion is important to us and that their input makes a difference. We strive to provide them with feedback on what we’re learning and we try to expose them to interesting new things.

When we invite people to join our communities we mean community in the fullest sense of the word. That’s an important part of what allows us to collect accurate and consistent information to inform our clients’ decision making.

People conduct research to help them make better decisions. When the research is wrong, they make bad, even terrible, decisions. In those cases the data is hurting rather than helping them. That makes cheap sample very expensive.

To learn more, download our whitepaper The High Cost of Cheap Sample: Evaluating the Reliability and Validity of a Publisher-driven Online Sample Source.

The NPS Recoding Trick: The Smart Way to Compute the Net Promoter Score

The Net Promoter Score is most people’s go-to measure for evaluating companies, brands, and business units. However, the standard way of computing the NPS is a bit of a pain.

By Tim Bock 

The Net Promoter Score is most people’s go-to measure for evaluating companies, brands, and business units. However, the standard way of computing the NPS – subtracting the percentage of detractors from the percentage of promoters – is a bit of a pain. And, in most apps, you cannot use it in stat tests, so when the NPS moves you are never really sure whether it reflects a change in performance or just some random noise in the data.

The standard way of computing the NPS

[Table: likelihood to recommend Apple, ratings of 0 through 10]

The table to the right shows data for Apple. Fourteen percent (14.4%) said they were Not at all likely to recommend Apple, 2.6% gave a rating of 1 out of 10, 2.0% gave a rating of 2, etc. If we add up the first seven categories (0 through 6), 51% of people in this data set are Detractors. Adding up categories 7 and 8 gives us 31% Neutrals, and then the remaining 18% are Promoters. So, in this data set, Apple’s NPS is -33.3, which is not great. (Among Apple customers, the NPS is much higher.)

A smarter way

The table below shows the raw data for the 153 people in the data set. The actual ratings, on the 0-10 scale, are shown in the Recommend: Apple column. The column on the right shows the recoded data, where the original values are replaced by new values: values of 0 through 6 are replaced with -100, values of 7 or 8 become 0, and values of 9 or 10 get a new value of 100.

Why is this recoding clever? Once you have recoded the data this way, you can compute the NPS by computing the average, and you get exactly the same answer as you do when using the standard way.
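
In pandas terms (my own sketch, not Displayr's internals), the whole trick fits in a few lines; the ratings below are made up simply to keep the example small.

```python
import pandas as pd

# Illustrative likelihood-to-recommend ratings on the 0-10 scale.
ratings = pd.Series([0, 3, 6, 7, 8, 9, 10, 10, 5, 8])

# The recoding trick: 0-6 -> -100, 7-8 -> 0, 9-10 -> +100.
recoded = ratings.apply(lambda r: -100 if r <= 6 else (0 if r <= 8 else 100))

# The NPS is now simply the mean of the recoded values...
nps_from_mean = recoded.mean()

# ...and it matches the standard "% promoters minus % detractors" calculation.
promoters = (ratings >= 9).mean() * 100
detractors = (ratings <= 6).mean() * 100
assert abs(nps_from_mean - (promoters - detractors)) < 1e-9

# Because the recoded score is just a number, any tool that can average can
# also cut it by segment, e.g. df.groupby(["age", "gender"])["nps"].mean().
```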

Why the smarter way is so much smarter

The genius of the smarter way is that we can now compute the NPS using any analysis app that is able to compute an average. For example, I have used the multiway table feature in Displayr to compute the NPS for Apple by age and gender, by just selecting the three variables (age, gender, and the recoded NPS variable).

The multiway table is created using Insert > More > Tables > Multiway Table.

Doing this with your own data using Displayr

The fastest way to do this is to start using Displayr, and then:

  1. Import a data set: Home > Data Set (Data). If you want to play around with a live example where the data is already in Displayr, click here.
  2. Drag the variable containing the likelihood to recommend data from the Data tree onto the page, so that it creates a table (like the first table in this post).
  3. Select the table, and select Data Manipulation > Utilities (far right) > Compute > Net Promoter Score. This will add the NPS to the bottom of the table.
  4. (Optional) Select the variable (in the Data tree, bottom-left), and change the Structure to Numeric (in the Object Inspector on the right). This will mean that you only ever see the NPS, rather than seeing both the NPS and the percentages in each category.

Alternatively, if you want to do the calculations “by hand”:

  1. Import a data set: Home > Data Set (Data). If you want to play around with a live example where the data is already in Displayr, click here.
  2. Drag the variable containing the likelihood to recommend data from the Data tree onto the page, so that it creates a table (like the first table in this post).
  3. Select the variable in the Data tree.
  4. Select Data Manipulation > Values (Data Values). 
  5. Change the entries in the Value column so that they look like those to the right, and press OK.
  6. Change the Structure to Numeric (in the Object Inspector on the right). This will cause the table to show the Average, which gives us the NPS.

 

Jeffrey Henning’s #MRX Top 10: GRIT, SMEs, and Other Research Stories

Of the 3,001 unique links shared on the Twitter #MRX hashtag over the past two weeks, here are 10 of the most retweeted...

By Jeffrey Henning

Of the 3,001 unique links shared on the Twitter #MRX hashtag over the past two weeks, here are 10 of the most retweeted…

  1. Participate in the GRIT Study – A call to market research professionals to take the twice-annual GreenBook Research Industry Trends survey.
  2. The Tragic Tale of Research Participants – Melanie Courtright of Research Now discussed past GRIT research that found that only 15% of surveys are optimized for… Worse, under 10% of researchers considered it to be of significant importance for research participants to have a positive impression of market research after participating, and even fewer felt it was important for participants to speak highly of their research experience. As Melanie writes, “And we wonder why respondent rates are falling?”
  3. Pepsi’s Ad Failure Shows the Importance Of Diversity and Market Research – Did you know the Kendall Jenner ad was produced by an in-house creative team rather than an outside agency? Interviewed by Marketing Week, Andy Nairn of agency Lucky General commented, “If you run an in-house creative department like Pepsi does you need to really interrogate your own approach and make sure you’re not blinkered by your own…”
  4. MRS Introduces Market Research Training Simulation – The Market Research Society has introduced a simulator where students design a research program to support a brand launch, using qual and quant techniques across six…
  5. The State of the SME Nation – Microsoft sponsored a pan-European study of 13,000 SMEs (Small and Medium-sized Enterprises): “69% of SMEs want to be known for the quality of the service they provide, but despite this, interactions with their customer rely on low-tech solutions, such as face-to-face interactions (26%), or phone…”
  6. ComRes Launches Brexit Unit – The agency ComRes has created a new unit that will aggregate public research about the impact of Brexit, supplemented by its own proprietary…
  7. Analysis Shows Long-Term Effect of Newsbrands – Research Live discussed Peter Field’s research into advertising with newspapers (online and offline): “Campaigns using newsbrands were 43% more likely to deliver very large market share growth, 36% more likely to deliver profit, and 85% more likely to drive customer…”
  8. Better Marketing for Market Researchers: Lessons from Insights Marketing Day – Stefanie Mackenzie recaps presentations from Dan Brotzel of Sticky Content, “7 Ways to Nudge Your Company Towards More Conversions”, and Tom Ewing of System 1, “The Cat’s Lifejacket: Thought Leadership in a Thoughtless…”
  9. What’s the Story with Market Research? Q&A with Author and Leading Business Storytelling Expert Paul Smith – Brian Izenson of Dialsmith interviewed Paul Smith, who provided two useful templates for research storytelling, the “How We Got Here” story and the “Discovery Journey” story – “Explain in story form: context, challenge, conflict…”
  10. Time and Convenience Biggest Opportunities for Engaging Post-Millennials – IGD ShopperVista surveyed 500 British 18-25 year olds. The top 5 time-savers for grocery are: “self checkouts (77%); buying prepared food in jars, tins, packets or cartons (69%); spending less time cooking (68%); shopping in smaller convenience stores (67%); and buying food-to-go (64%).”

Note: This list is ordered by the relative measure of each link’s influence in the first week it debuted in the weekly Top 5. A link’s influence is a tally of the influence of each Twitter user who shared the link and tagged it #MRX, ignoring retweets from closely related accounts. The following links are excluded: links promoting RTs for prizes, links promoting events in the next week, and links outside of the research industry (sorry, Bollywood).

Food for Thought: Challenges and Ideas for Addressing Bias

If given the right tools, bias can be overcome.

By Katja Cahoon

In a previous article, I outlined a topic that is not talked about (a lot). Bias impacts all of us – even, and sometimes especially, experts – as Kahneman and others outline in detail in their writing. I promised ideas for how to address it. Daniel Kahneman himself states that it is very hard, if not impossible, to change bias and cognitive errors[1], and I am humble enough not to think that I can outdo the Master! Research has even shown that certain kinds of bias training do not lead to noticeable change and can have the opposite effect.

On the other hand, I believe in our innate ability to change if given the right tools. Innovation guru Stephen Shapiro states: “Branch out and start studying tangents. I learn about innovation by studying magic.” I am not going to draw on magic but on my training and experience as a psychotherapist as well as having worked with dozens of wonderful, smart, inspiring insights and brand teams all over the world. As a psychotherapist, I am acutely aware of the amazing human ability to change unhelpful thought patterns and behaviors, in some cases quite fundamentally and profoundly.

Specifically, I am going to use the 12 Steps of Alcoholics Anonymous. Bear with me, please! They are actually very applicable as they deal with recognizing maladaptive patterns of thought and behavior, working to change those, and finally integrating new ways of thinking into one’s daily routine. The 12 Steps break down neatly into three distinct categories, which I will translate here into a business context.

Market research and marketing are of course not as dangerous or even lethal as addiction, but mistakes can certainly be costly (as in losing an election, or expensive product or communication missteps). So, let’s look at the steps and how they create a structure for change, and end with some practical and inspiring ideas:

1) In AA, steps 1 to 3 are called the surrender steps. This is where a person realizes and accepts that they have a problem. We as an industry know that consumers are biased (it is pretty well established at this point). Most of us know that this applies to all human beings, marketers included. But, have we truly accepted it?

Knowing is different from accepting. The former happens at a cognitive, detached level. It is not that hard to say, “of course we are all biased.” It is even easier to point the finger at bias and we tend to forget that in those cases three fingers are pointing back at us. Acceptance happens at a deeper, more emotional level. It is unpleasant to realize that I have been biased, as I also discuss in my previous article. And more importantly, acceptance does not just theoretically call for some type of action but rather makes that action necessary. Acceptance requires more work, reflection, discussion, and practice.

  • The fundamental question is, what would it take to move you and/or your organization – meaning the people within it – from knowledge to acceptance and consequently action?
  • One starting point is to challenge yourself and encourage others to do so. Harvard’s Implicit Bias project offers a variety of well-designed tests that are pretty quick to take. They are eye-opening with regard to well-established cultural and societal biases (gender, race, ageism, etc.) – you will be surprised one way or another.
  • Another one is the BigThink article testing how rational you are – as with the IAT, these are quick, fun, and eye-opening.

Core Task: Move from knowledge to acceptance.

2) Steps 4 to 9 are called the working or action steps. These are designed to “clean house” and change patterns of behavior. If I have accepted that I am biased, I am open to a “Kaizen” approach to it: continuous learning and improvement, resulting, over time, in different associations, attitudes, and behaviors.

  • One of the action steps calls for a personal inventory. What would it look like if you took a “bias inventory?” What are the most common biases you, your team, your organization experience? Does it differ by context, e.g., is there a risk of confirmation bias skewing your views when observing a focus group or indeed any research?

It goes beyond the scope of this article to provide a comprehensive overview of common biases and heuristics. The following are important with regard to insights and research: anchoring bias, availability heuristic, bandwagon effect, confirmation bias, stereotyping. Here is a helpful quick list of the biases I mention and a few others.

With regard to the workplace in general, it is helpful to understand performance bias, competence/likeability bias, and information bias. Information bias refers to the need to seek more and more data without the information impacting action. It is particularly worthy of attention in the context of market research, as it is rampant and costly. Is your company culture one in which you are more likely to run with a small amount of quality research (or, problematically, with biased and surface-level research), or one that tends to postpone and delay by getting ever more research that either goes unused or cripples action?

Core Task: Make a bias inventory and use it for learning and improvement.

3) Steps 10 to 12 are called the maintenance steps. They are designed to integrate new behavior and thought patterns in order to keep the house “clean.” This does not call for dramatic, one-time action but rather continued diligence, awareness, and a practical tool kit. Small changes over time have a significant impact, a point also reiterated by the well-done and publicly accessible Facebook Bias Training.

  • Attune yourself to bias before important insight or brainstorming meetings. My former co-worker, Olson Zaltman Account Director Jessica Kukreti, starts many of her client meetings with a quick exercise around common biases among research professionals. She thereby integrates bias language, thinking, and awareness into the discussion, which also gives people permission to call it out.
  • Focus groups are viewed increasingly critically, among other things for their high bias risk. Interestingly, many brainstorming meetings are sort of like focus groups. One simple trick is to write down ideas, opinions, judgements about important questions/topics before the discussion and make sure to hear all voices, especially dissenting ones.
  • Put yourself into consumers’ shoes – literally! One of my clients used to make her team use their product at home, while on the road, and in other situations. That does not sound very impressive until you realize that this involved incontinence products. Yes, the team wore diapers to create more empathy and consumer understanding!
  • And that is my last point: work on seeing consumers as holistic, diverse, complex human beings. As one participant stated during a study on Millennials we conducted a while ago: “I want companies to realize that we are people just like them. Don’t look at me and my friends as a just another profit but people with voices. People with dreams, aspirations and goals. If they can see that then we can create dialogue to better deliver the products and services we need.”

Having true empathy is a way around bias, and a way to build that empathy is to understand consumer complexity and multiplicity. I will write more about this soon.

Core Task: Integrate bias busting processes into your routine.

To wrap it up, I am honored to be speaking about this and related topics at IIEX North America (June 12-14). If you are attending please come and see me: I would love to hear your thoughts, experiences, learning, and stories about this important topic. Let’s work together to tackle bias, one day at a time!

[1] Daniel Kahneman, Thinking, Fast and Slow