Researchers and the Love of Learning

Posted by Jeffrey Henning Wednesday, November 22, 2017, 6:55 am
How is the market research industry doing in career satisfaction, growth opportunities, and learning preferences? Find out in this new report from MRII.


By Jeffrey Henning

On behalf of MRII, Researchscape surveyed 129 market researchers from the US, UK, and Australia using an online survey in order to better understand career satisfaction, growth opportunities, and learning preferences. The survey was fielded from July 23 to August 7, 2017. Half of the sample worked for research agencies, 19% worked as corporate researchers, and 20% were suppliers to both (e.g., panel companies, tools providers).

Fielded to a panel and a house list, this survey differs from many industry studies in reaching a younger, less-connected audience within the research industry, including part-time telephone and field interviewers as well as project managers. Nearly a third of respondents have worked in the research industry less than a year, and the majority had worked in market research for just 1 to 9 years (59%). In fact, a third of respondents are employed only part-time, reflecting the only-as-needed staffing of interviewers.

A quarter of respondents had planned a research career while in college, while a third entered the industry with their first job and simply stayed.

Job Satisfaction

Not quite half of researchers surveyed (47%) were very or completely satisfied with their job overall.

What is your overall satisfaction with your job?

Sample Size: 105 (81% of Respondents)

Respondents then rated their satisfaction across 13 aspects of their jobs. The following quadrant analysis of satisfaction vs. derived importance (shared variance) splits the axes by the median value of each dimension.

How satisfied are you with each of the following aspects of your job?

Higher importance, lower satisfaction (Weaknesses)

  • Pay
  • Opportunities for advancement
  • Workload
  • Expectations set for work

Higher importance, higher satisfaction (Key Strengths)

  • Opportunities to learn and grow
  • Freedom to innovate

Lower importance, lower satisfaction (Vulnerabilities)

  • Level of communication from executive management
  • Benefits
  • The company’s executive management

Lower importance, higher satisfaction (Assets)

  • The materials and equipment provided to do the job
  • Relationship with your direct reports (if any)
  • Relationship with manager
  • Relationship with coworkers


The top right quadrant contains attributes that respondents rated above the median for satisfaction and whose shared variance with overall satisfaction (derived importance) is also above the median. These key strengths include opportunities to learn and grow as well as freedom to innovate: education and self-improvement are key strengths of research jobs. Weaknesses were pay, opportunities for advancement, workload, and expectations set for work.
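For readers who want to reproduce this kind of quadrant analysis, here is a minimal Python sketch. The data frame and column names are hypothetical, and derived importance is computed here as shared variance (the squared correlation of each attribute with overall satisfaction); other derivations are possible.

```python
import pandas as pd

def quadrant_analysis(ratings: pd.DataFrame, overall_col: str = "overall") -> pd.DataFrame:
    """Classify job attributes into a satisfaction vs. derived-importance quadrant.

    `ratings` has one row per respondent: one column per attribute rating plus
    an overall-satisfaction column. Importance is derived as shared variance
    (squared Pearson correlation with overall satisfaction).
    """
    attrs = [c for c in ratings.columns if c != overall_col]
    summary = pd.DataFrame({
        "satisfaction": ratings[attrs].mean(),                              # mean rating per attribute
        "importance": ratings[attrs].corrwith(ratings[overall_col]) ** 2,   # shared variance
    })
    sat_med = summary["satisfaction"].median()
    imp_med = summary["importance"].median()

    def label(row):
        if row["importance"] >= imp_med:
            return "Key strength" if row["satisfaction"] >= sat_med else "Weakness"
        return "Asset" if row["satisfaction"] >= sat_med else "Vulnerability"

    summary["quadrant"] = summary.apply(label, axis=1)
    return summary
```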

Skills and Learning

Qualitative comments back up the finding that “opportunities to learn and grow” are vital to the industry. Researchers are enamored with learning; it’s a central theme in what they like about a research career:

  • “Learning new skills all the time.”
  • “It’s an awesome job and I learn new things every day.”
  • “You are always learning new things in every sector. You learn how to face problems and how to solve them in the best possible way. It’s very interesting and challenging.”
  • “I enjoy learning about new advances in healthcare (which is my market research focus), and analyzing data to come up with conclusions.”
  • “It is a very interesting science.  I like studying my company’s business and learn how it works so at some point I can get to a similar point as a successful company with my own company.”

Company-provided in-house training (42%) and formal schooling (39%) are the most common ways that respondents have learned the required skills to be a market researcher. For formal schooling, 16% of respondents learned research as part of their undergraduate major and 27% through a graduate degree (4% took both). Three out of ten learned by reading on their own.

Career Satisfaction

Only 43% are very or completely likely to recommend research as a career, but three out of four researchers (77%) have said something positive about market research as a career to someone else directly, compared to 22% who have said something negative. And 18% have posted a positive comment online, compared to 1% who have posted a negative comment.

Only 59% of researchers think that it is very or completely likely that they will be working in market research a year from now, and 16% think of leaving the industry on a daily (11%) or weekly (5%) basis. Of the respondents who have worked less than a year in market research, 39% reported they were not at all likely or slightly likely to be working in the industry a year from now in contrast to 11% of those with 1 to 9 years of tenure.

Industry Outlook

While a research career might not be right for them, that didn’t mean researchers thought the industry was in trouble. For all the doom and gloom from industry pundits, four out of five researchers (81%) reported that the research industry today was generally headed in the right direction, with only one respondent (1%) saying the industry was on the wrong track (18% were unsure).

72% of corporate researchers report that market research is very or extremely important to the mission and purpose of their employer, similar to the 74% of agency researchers and suppliers who believe market research is very or extremely important to the success of organizations in general.

Recommendations

  1. Firms can improve employee satisfaction (and research quality) by providing more opportunities to learn. Given the appetite for learning, and that only 42% of researchers reported they receive in-house training, this is an important step that organizations should be taking. Unfortunately, the broader trend – in the research industry as well as with employers in general – has been to cut back on employer-provided training.
  2. Unfortunately, when in-house training is provided, it tends to be very narrow and focused on the elements of a specific job done according to that single firm’s specifications. Organizations too often train people to work in silos. This is not a substitute for learning the fundamentals of market research in ways that are transferrable to other job functions within the organization.
  3. Third-party training courses are even less used: only 13% of research employees had received them. Organizations like the Burke Institute, RIVA, Research Rockstar, and – yes – the University of Georgia offer affordable training opportunities for research staff.
  4. Economical ways that organizations can support autodidacts (30% of researchers) and encourage a higher proportion of them among their staff include building a research library of reference books and providing reimbursements for business books.
  5. The front lines of research – soliciting opinions from the general public through telephone and face-to-face interviews – are hard, often unsatisfying jobs, with high attrition. But battle-tested interviewers develop a richer understanding of respondent behavior than project managers who field only online surveys and achieve a better appreciation of well-designed questionnaires. Organizations should formalize career paths from interviewing into research design and analysis.

This exploratory research was designed to help the MRII better understand career satisfaction and education in the market research industry. Sadly, like most research within the research industry itself, this study uses convenience samples, in this case derived from a Researchscape house list and from self-identified market researchers across a range of panels.

The hope is that trade associations and other organizations will expand on this research in the future.

The full report can be downloaded from the MRII, a non-profit focused on offering global, market-leading continuing education programs for the practice of market research and insights.

4 Agile Market Research Techniques to Expect in 2018

Posted by Kevin Lonnie Tuesday, November 21, 2017, 6:55 am
See what market research techniques are expected to make news in 2018.

 
By Kevin Lonnie

Businesses in almost every niche need to stay agile in order to survive. Agile market research focuses on delivering the best outcomes in the shortest amount of time. Expect this practice to dominate in 2018: Seventy-seven percent of market researchers plan to increase adoption of agile methods during the next 12 months. Here are four agile market research techniques to look out for over the coming year.

1. Innovation

Agile market research is all about innovation. The best agile market researchers are able to blend both traditional market research methods — surveys, focus groups, etc.— with new and original ways to gather information about core consumers. They often capitalize on current events such as award ceremonies and sports competitions in order to reach target demographics and use platforms like Facebook and Twitter to collect valuable information from social-savvy customers. They rely on real-time methods to ascertain what people are saying about products and services at a precise moment in time.

2. Productivity

In 2018, more businesses will utilize agile market research to boost productivity in the workplace. Agile market research techniques improve workflows and streamline many front and back-end office functions, too. Not only does agile market research improve productivity, but customers prefer it, too. No longer do consumers have to fill in long surveys with dozens of questions or stay in a focus group for hours at a time. Instead, they can interact with market researchers through the web, email, social media and SMS. The best agile market researchers ask short, relevant questions across various digital platforms — and get accurate results.

3. Integration

As more businesses become familiar with agile market research, they will incorporate this method into their traditional market research processes. Companies can still take advantage of conventional market research procedures such as surveys and sampling, but will profit from newer techniques like real-time analytics. “Agile market research is an approach that takes its inspiration from agile software development which values: numerous small experiments over a few large bets, rapid iterations over big-bang campaigns, testing and data over opinions and conventions, and responding to change over following a plan,” says Mark Antonacci, Global Head of Sales at social media network SERMO.

A holistic approach to market research will provide companies who adopt agile market research with a competitive advantage, too. These companies can ask consumers fast, flexible questions about a wide range of topics and get products and services to market in a quicker time frame. Using both conventional and agile market research techniques can also solve some of the problems associated with traditional methods, such as survey non-responses and survey bias. Professionals use a combination of methods to increase participant engagement and gather samples that are more reflective of society.

4. Social Media Mining

Unlike conventional market researchers, agile market researchers have a wealth of data at their fingertips. Real-time social media analytics, for example, provide them with insights into customers who use Facebook, Twitter and Instagram — three social networks that have a combined 3 billion active users.

Social media analytics provide agile market researchers with information about how customers interact with social content. Businesses can then use this valuable data to optimize their marketing campaigns, reduce churn rates and engage with existing customers. They can also utilize this information to move prospects through the sales life cycle and generate new leads.

Agile market researchers should have a clear focus when using social media analytics. “If marketers don’t know where their marketing problems are and what they should be testing, they put themselves at risk of being bogged down in data that doesn’t actually add anything to their campaigns,” says Youtse Sung, writing for Econsultancy. “In many ways, this can be an agile marketer’s worst nightmare.”

12 Crucial Tips to Designing Smartphone Qualitative that Gets Great Insight

Posted by Ross McLean Monday, November 20, 2017, 6:55 am
Learn 12 crucial tips to designing successful and engaging smartphone-based qualitative projects.


By Ross McLean

Over the Shoulder has been helping put smartphone-based qualitative into the toolkit of qualitative researchers and insight seekers for almost nine years now. We’re often asked by clients to list the biggest tips and “watch-outs” that smartphone qual practitioners should bear in mind to make their jobs easier and their deliverables to clients more valuable.

So we put the question to our in-house team of smartphone qualitative designers and turned their years of experience into the list of 12 crucial tips below.

1. Design your project to be entertaining and engaging.

Great smartphone qualitative leverages the intimacy people have with their smartphones, and the enjoyment participants get out of telling their stories and sharing their truths. Your study design should always reflect this. So deformalize and “conversationalize” your language. “Gamify” your assignments. Fill your study with “Easter Egg” questions that provide moments of levity and little emotional rewards.

Use a platform that lets you set up logic to give real-time acknowledgement to your participants. For example, if you ask a participant to rate the test-drive experience from “Amazing” to “Disappointing” and they choose “Disappointing,” follow up with “Oh no! What do you mean when you say ‘Disappointing’?” It makes participants feel like they’re engaging with someone who really wants to hear what they have to say, not a machine.

Use emoticons to let participants express themselves, and use them in your design for stand-out visual appeal and clarity.

If you include scales, remember that rating the moment you’ve just experienced on a scale of “Best time ever” through “Major bummer” is more fun and conducive to emotional disclosure than rating it on a scale of “1-7, with 7 being extremely enjoyable.” Just about any project can be designed to be engaging to interact with, and the insights you get back get better when your project is entertaining and fun to be part of.

2. Ask only the most important questions, and as few of them as possible.

The biggest surprise our first-time clients get is the sheer volume of response you get from a project. And that’s great, as long as you’ve been disciplined in your design, and kept the number of audio and video recordings you ask for to a reasonable level. But if you ask too many audio and video questions, you’ll be awash in media response, and all the time we spend building tools to make your analysis more efficient will be powerless to help you. Asking for too much is the number one mistake first-time practitioners make and it can simultaneously kill your project profitability and annoy your participants while adding nothing to the insightfulness of your project.

3. The right sample is the smallest one possible.

Again, smartphone qualitative produces a large quantity of rich photo, audio, video and other data. Keeping your sample as small as possible reduces the amount of data you’ll need to analyze.

Most studies don’t need more than 20-30 carefully-selected, engaged participants to produce great insight. Our “rule of thumb” is that each segment of your sample that you want to be able to understand and isolate from the others should have about 15 participants in it. Fifteen participants typically get you to the feeling of “saturation” (where you start hearing the same themes and stories repeatedly, and the number of new ideas diminishes quickly).

4. Choose a study length that lets you see a proper window into the behaviors you want to understand.

Smartphone qualitative projects can be as long or as short as needed. We’ve helped clients do everything from single-day projects to ones where they participate in journaling for over a year (!).

The basic guidelines we recommend are:

Keep your study period as short as possible (more study days = more data to go through).

Design your study so that it’s long enough to let you see the relevant behaviors you want to understand. For example, if you want to understand daily snacking behavior, be sure to include weekdays and weekends, as behaviors tend to vary between the two.

Remember that your participants DON’T have to be doing assignments every day. If you want to understand how they find recipes, shop and prepare new foods, you can design your study length so that it spans the typical length of the behavioral cycle you want to observe. If your consumers typically do the “inspiration-preparation-serve & reaction” cycle over the course of 2 weeks, that’s a good study length. You may not need your participants to be answering assignments every day over the whole two weeks. You can give them “rest days” and let them journal the behaviors as they naturally unfold. There’s often no need to make up a daily assignment to understand a weekly behavior.

5. Always participate in an on-device test of your project before you launch it.

Seriously. We never let a study we’ve designed and built for a client go into the field without its designer going through it on their own smartphone. Even our most experienced Project Designers, who have designed hundreds of smartphone qualitative projects for our clients, will tell you they almost invariably learn something that can make the participant experience better and the project more successful. Walking through your project on your smartphone will instantly reveal if you’ve broken #1 or #2. Plus, knowing what it feels like to be on the receiving end of your assignments, journals and questions is always good practice.

6. Make participating easy for your participants.

Eight years of experience in smartphone qual have taught us one important (if obvious) truth. “Make it easy for participants = Get better insight.” Our whole platform is designed around having the simplest, clearest and easiest participant experience in the business.

When it comes to project design, there are many ways to make life easier for participants. Always make it clear to participants where they are in your study and what’s coming up. Tell them how long the assignment they’re about to start is going to take them (and never underestimate it).

If you have people journaling their “joys and frustrations” in the moment, make sure your journaling assignment takes 60 seconds or less from opening it to hitting “submit” and don’t make them wait while their answers upload. You’ll be amazed how many more moments you capture, and the quality of those moments.

Use logic and skip patterns so that participants never have to “forward through” questions that aren’t relevant to them.

Be incredibly clear on when projects start and end. Our rule of thumb is that any important project detail needs to be communicated three times to avoid confusion. Participants should be able to get back to an explanation of the rules, dates and expectations of your project right within the app at any time.

Ensure that you’re clear up front (at the recruiting stage) exactly how much time participating in your project is going to take.

Avoid extending studies past their original finish dates, and if you have to extend them, offer bonus incentives. Few things irritate participants more than “adding on a few extra assignments” that will require them to continue participating after the date you told them they’d be finished.

7. Choose response media so that participants can easily and comfortably express themselves.

One of the most exciting things about smartphone qualitative is obviously the ability to submit beautiful “selfie” videos in answer to your questions. And there’s no doubt that a great in-the-moment HD video can be an insightful showstopper in a presentation. But video isn’t the right capture medium in all situations.

For example, if you’ve sent your hemorrhoid-suffering participants into the drug store to survey the shelves and tell you about the product that’s most relevant to them and why, asking for a selfie video will make them uncomfortable (or should we say “even more uncomfortable”). But they can easily take a photo of the product, then hold their phone up to their ear (feigning a phone call) and tell you about their inner monologue in an audio recording. You’ll get far better insights for it, not to mention better compliance. More on choosing the right media can be found in “In praise of audio recordings.”

8. Review your results while your study is live, and ask follow-up questions.

The beauty of good smartphone qualitative lies in its interactivity and flexibility. A good smartphone qual platform will let you know the instant participants submit responses and let you send them individual probes and follow-up questions when you need to. Over the Shoulder not only lets you send probes and follow-ups, it even lets you ask for responses in ANY media you choose.

We recommend you or someone on your team dedicate a block of time each day during your fieldwork so that you can take advantage of the opportunity. Someone will need to go through your participants’ submissions regardless: that someone may as well be doing it in real time so that they can take advantage of the ability to probe individuals and get to deeper layers of insight.

Another important advantage of working in “real time” with your project (or having a Community Manager who does it for you) is how motivating it is to your participants. Remember, participating in a smartphone research project feels, well, weird at first for participants. Imagine yourself sending your personal thoughts off into the ether as photos, text, audio and video recordings, and wondering if anyone’s even looking at them on the other side.

Giving individual participants a push notification addressing them by name and telling them that they’re doing a great job is extremely engaging and will get you better insight. And it’s something that we can say with authority will increase engagement with your project and the quality of the response you get.

Same goes for participants who aren’t performing up to grade. An artful prod from a Community Manager can often turn a marginal participant into a superstar and avoid dropout and replacement costs and delays.

9. Ensure that you have participants who are real people, engaging fully in your project.

Getting the right participants into your project takes effort, planning and money. But unlike online quant and even some online qual, the audios and videos smartphone qualitative generates will reveal a poor recruit immediately.

Find a good recruiting partner (we’re happy to help), pay a motivating incentive, and manage your participant community actively (if you’d like Over the Shoulder can assign an internal Community Manager to your project to ensure this happens).

Some of our practitioners even schedule a 10-minute live intro call with each participant at the beginning of the project to get them warmed up, comfortable and engaged before they download the app and start participating.

Pay participants an incentive that makes it worth their time to engage fully with your project (we recommend paying roughly the same per-hour rate that you’d pay for face-to-face qualitative). The recruiting and incentive tab for your job may be a little higher, but so will the engagement with your study and the value of the insights it lets you bring your client. Replacing bad participants and chasing down participants who are not fully engaged will end up costing you more than a compelling incentive and well-screened participants would have in the first place, and it will put your schedule at risk. “Pay peanuts, get monkeys” definitely applies here.

10. Have an analysis plan, and design your study to make analysis easier and more efficient.

Our smartphone qual designers quite literally consider what kind of answers they’ll get to an assignment or question, and how those answers will be analyzed and presented as they work through their initial design. It’s like a research geek’s version of “beginning with the end in mind.”

You should ensure that you’re familiar with the tools you’ll be using to monitor and analyze your results well before the submissions from your participants start rolling in. If you’re using a new platform, or it’s your first smartphone qual project, make sure you’re totally comfortable using it BEFORE your responses roll in. We actually set new clients up with access to a “demo” Project Viewing Portal so that we can make sure they’re up to speed with how it works and the analysis tools they’ll be using before their fieldwork starts. And we make sure that they have an analysis plan specifically based on their project’s design so they know what they’ll be doing with the answers to all of the questions and assignments before the project even starts.

11. Make your project entertaining and engaging for your clients.

Remember that your clients are giving up the “focus group ritual” when you use smartphone qualitative. The “back room chatter” and the group focus that comes with it are a valuable part of the face-to-face research process.

Happily, good smartphone qualitative platforms offer lots of ways to rally your key clients around the project and engage them with it so that they get great value out of it. Try “daily reports” with key submissions. “Buddy up” key clients with a research participant and make it easy for them to see their buddy participant’s responses as they come in. Even let them ask follow-up questions and probes (with you as editor and controller) so that they feel involved and learn as the project unfolds.

Use your “ripped from reality” photos, audios and videos to make a powerful presentation. Even use tools like online media collages that let your clients share and access key participant submissions long after the project is over.

12. Respect the privacy of your participants.

In many studies, we’re asking participants to capture and share moments that are intensely private. So, it’s crucial to be good guardians of the secrets of people who participate in studies.

Ensure you’re using a system that actively protects participants’ identifiable information and everything they submit. If your research subject is intensely private, ensure you’re working in a system that can keep participant identities entirely separate from what they submit.

If a submission looks so great that you can’t resist using it outside of the project that collected it, make sure you can get the participant’s explicit permission to use it. We find that most participants are happy to have their submissions shared if they can review what’s going to be shared, and the exact way their submissions will be used.

We’ve built technological, operational and organizational security at the highest level right in to the Over the Shoulder platform so that we can keep people’s secrets safe, and help you make sure you’re using their submissions appropriately and respectfully.

Smarten Up: 5 Tips to Become a Research Technologist

Posted by Stephen Phillips Thursday, November 16, 2017, 6:55 am
Technology is changing the market research industry. Learn how to stay one step ahead by becoming a research technologist.


By Stephen Phillips

In research we are often accused of being too conservative; trying to give our (typically marketing) clients what they are used to rather than rocking the boat. But we all know times are changing and we need to provide great quality research, faster and cheaper than ever before if we are to stay relevant.

The times, they are a-changin’

You can see the effects right now: areas of research are being taken from under our noses. The likes of Qualtrics, Medallia, and Survey Monkey have snatched swathes of revenue, particularly in customer satisfaction, while not being hugely engaged with the research industry itself.

With the advent of programmatic ads, there is a chance this shift could spread to the whole creative development area of research. More and more, we might see micro surveys cut into areas such as tracking, product development, and general brand positioning – all owned and managed by technology companies who may not value the training and rigour of our research thinking. We need to fight back, fight for great quality research, and to do this we must embrace a technology mindset.

This of course requires using new technology, but few of us have had the technical training that would help with this transition. As someone who has made the jump from a research company to a technology company (in the field of research), I thought I could suggest some things that have helped me on my journey:

1. Read ‘The Lean Startup’

Whether you’re starting a new business, trying to change a company you work for, or launching a new product or service, you must read this book by Eric Ries about testing your vision and learning what your customers want. It is also a great help in understanding how technology is built and how you can work with it.

2. Understand AI (or IA)

Uncover which role you can play in and around the emergence of Artificial Intelligence. AI will complement everything we do in research in less than two years. Get a view of how we as human researchers can overcome the ‘AI will do everything’ crowd by watching this Ted Talk by Garry Kasparov. At Zappi, we believe in IA – see my Greenbook blog post on this.

3. Code in a day

As technology continues to swamp not only market research, but life in general, it’s important to contextualise its inner workings even if you don’t work in code. To do this, send yourself on a ‘code in a day’ course (via decoded.com). It will give you a much better appreciation for what software development is.

4. Imagine a single platform and your role in it

Imagine a world wherein clients have just one technology platform for running all of their market research and data analytics. Just as Salesforce has taken over CRM, Google owns search and Amazon owns retail, clients will come to have one single data platform, and your job will be providing something within that platform. Try to understand what your role could be in helping facilitate their use of this platform.

5. Stop thinking questions, start thinking data

Too often we think about question formats when actually it’s only the data that matters. It is a hard shift, but an important one. See how you can add value to the morass of data clients already have (whether making sense of data, integrating data, analysing data, or helping clients act on data).

As usual, I will await the GRIT Report with bated breath. It’s important to keep a finger on the pulse (rather than bury your head in the sand) and make moves to embrace disruptive technology – or risk becoming a research dinosaur!

I would love to hear any other suggestions for making researchers more technically astute; if you post comments, I will compile the ideas and post an updated version soon.

Market Research Firms Fight to Survive: Top 5 Ways MR Firms are Overcoming Challenges

The market research industry is changing. Find out the top five ways MR firms are overcoming challenges from this shift.


By Jitesh Marlecha

The market research industry has been hit hard by the rise of the digital consumer. With enterprises increasingly emphasizing a consumer-centric approach, their consumer insights teams are constantly pushed by the C-Suite to get data-driven insights in a very short period of time. This drives a greater need for agile research, which traditional market research firms are not set up to cater to. So, enterprises are turning to DIY technologies to collect their own data and conduct research. As a result, the global growth rate for the industry has remained flat at 2%, driven only by inflation. It’s evident that traditional leaders in this space are struggling to sustain and scale.

The market research firms that have been able to sustain themselves, and in some cases even show double-digit growth, are those that recognize the urgency of transformation. As those that are slow to evolve bleed faster each year, those that are managing to adapt and succeed are doing it in a few key ways:

Specialization

It’s no longer enough for a firm to provide generic market research. The key to attracting customers is to figure out how to be unique in some way, usually by delivering a unique type of data, or by providing multi-source data or some type of unique technology. Successful firms are the ones that combine deep domain expertise in a given industry with a robust data inventory that isn’t available with others. Promising startups are now usually built on the premise of some niche capability, enabling them to find investors that can allow them to continue their journey.

Acquisitions

Many larger firms that have become more specialized have done so through acquisitions. These organizations acquire a specialist firm so they can sustain their business and continue to attract new clients with a unique proposition. Stagwell Group, for example, bought the National Research Group from Nielsen in 2015 and revived its entertainment research specialty, and then this summer added to that by acquiring TV pilot testing assets from Nielsen as well.

Mid-sized independent generalist MR firms are the ones struggling the most. They lack a niche, as well as the resources to acquire a firm that has one. They also tend to execute research operations in-house instead of using partners. This eats into their margin and limits the funds they can invest in R&D to stay unique and drive future growth.

Outsourcing non-core functions

This is how adaptive firms are staying nimble and focused only on what they do best. They use trusted partners to help with everything that falls outside of their core competency. This can include all sorts of research operations services, including data collection, data processing, project management, QA services, text analytics, data warehousing and reporting.

Many clients are moving research in-house as technology is making it easier to do it themselves, versus working with a research agency. Unless those clients see deep expertise, actionable insights and high value-add from their MR firm, they are likely to make this move. So it is critical for MR firms to make themselves indispensable by doubling down on their core competency. They should not dilute their focus by engaging in research operations, if the client can handle that part on their own anyway. End clients come to them for their research expertise, not their research operational capabilities.

Embracing new technologies

It used to be that research was considered a “nice-to-have.” But now, all business decisions are backed with data and research. Tech giants like Google and Amazon do loads of research at each stage of their product and services lifecycle. The new imperative is to move beyond traditional data collection and analysis methods. Every day, new technologies are emerging which can not only collect and analyze customer, employee and brand experiences, but also provide action and impact measurement based on the insights.

Firms that are slow to adopt new technologies will fall behind their customers’ needs and will experience a faster churn rate among their customers.

Delivering faster turnaround times

There is greater pressure now to offer products and services that are highly competitive in terms of both price and turnaround time. Even MR firms that are solving highly complex problems with deep domain expertise and unique data still need to offer greater speed and lower cost. In many cases, they’re doing this by building strategic partnerships with suppliers that can deliver things like technology infrastructure and expertise, flexible operating models, automation, and process-driven, efficient setup, as well as 24/7 and global operations.

Market research firms around the world, in all stages of growth, are getting stuck. Getting unstuck will mean delivering value faster, cheaper and more uniquely than ever before.

Lloyd Shapley’s Value

Learn the basics of the Shapley Value, a solution concept in cooperative game theory, and then explore its most common uses in market research.

 
By Michael Lieberman

The Shapley Value, named in honor of Lloyd Shapley, who introduced it in 1953, is a solution concept in cooperative game theory. To each cooperative game it assigns a unique distribution of a total surplus generated by the coalition of all players.

Basically, the Shapley Value is the average expected marginal contribution of one player after all possible combinations have been considered. This has been proven to be the fairest approach to allocate value. A ‘player’ can be a product sold in a store, an item on a restaurant menu, a party injured in a car accident or a group of investors in a large real estate deal. It is employed in economic models, product line distribution, procurement measures for embassies and industry, market mix models and calculations for tort damages.

The Shapley Value shows up in several popular marketing research techniques. Below, we will set out the ABCs of the Shapley Value, and then show its most common uses in marketing research.

Shapley Value ABCs

Here’s the simplest case of the Shapley Value. Let’s say there are three players, A, B, and C. When they enter a game, they add points to the score. The total point-value in the game is 10.

As the chart below illustrates, when the order of entry is A B C, A’s and B’s contributions are each 4, while C’s is 2. However, in the second order of entry, A’s contribution is 3, while B’s is 5.

 

In total there are six possible different orders of entry. If we play all six, and then take the average contribution of each player, we arrive at the Shapley Value.
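As a concrete illustration, here is a short Python sketch that computes Shapley Values exactly this way, by averaging marginal contributions over every order of entry. The three-player characteristic function below is hypothetical (the article’s chart is not reproduced here), but it respects the 10-point game described above.

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all orders of entry."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: totals[p] / len(orders) for p in players}

# Hypothetical 3-player game worth 10 points in total (illustrative values only).
game = {
    frozenset(): 0, frozenset("A"): 4, frozenset("B"): 4, frozenset("C"): 2,
    frozenset("AB"): 8, frozenset("AC"): 7, frozenset("BC"): 6, frozenset("ABC"): 10,
}
print(shapley_values("ABC", game.__getitem__))  # the three averages sum to 10
```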

Now we will see several common applications of the Shapley Value in marketing research. The Value is quite useful: it yields highly equitable solutions and thus provides several vital research measures.

Shapley Value – Regression and Brand Equity

Let’s say that a major automobile company has a public relations disaster. In order to regain trust in their brand equity, the company commissions a series of regression analyses to gauge how buyers are viewing their type of vehicle. However, what they really want to know is how American auto buyers view trust.

The disaster is fresh, so our company would like a composite of which values go into ‘Is this a Company I Trust’ across the industry. Thus, it surveyed ten of its major competitors on various elements of automobile purchase. We then stack the data into one dataset and run a Shapley regression. What we hope to see are the major components of Trust.

 

Not surprisingly, family safety is the leading driver of Trust. However, we now have Shapley Values of the major components. These findings would normally be handed over to the public relations team to begin damage control.
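For those curious how such a Shapley regression can be computed, here is a rough Python sketch. It decomposes the model’s R² by averaging each driver’s incremental R² over every order of entry; the DataFrame and the “trust” outcome column are hypothetical, and enumerating permutations is only practical for a modest number of drivers.

```python
from itertools import permutations
import numpy as np
import pandas as pd

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R-squared of an ordinary least squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def shapley_regression(df: pd.DataFrame, outcome: str = "trust") -> pd.Series:
    """Average each driver's incremental R-squared over all orders of entry."""
    drivers = [c for c in df.columns if c != outcome]
    y = df[outcome].to_numpy(dtype=float)
    totals = {d: 0.0 for d in drivers}
    orders = list(permutations(drivers))  # factorial growth: practical only for a handful of drivers
    for order in orders:
        used, prev = [], 0.0
        for d in order:
            used.append(d)
            cur = r_squared(df[used].to_numpy(dtype=float), y)
            totals[d] += cur - prev
            prev = cur
    return pd.Series({d: totals[d] / len(orders) for d in drivers}).sort_values(ascending=False)
```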

Shapley Value – Product Design

The Shapley concept of relative importance comes from product design, where we are able to piece together components in any way we wish. Products are bundles of attributes, and attributes are collections of levels. We’ll take a typical conjoint study for a product design.

An energy drink company may be thinking of how best to configure a package with attributes like number of cans in a bundle, number of ounces in a can, amount of caffeine, flavor and price. By systematically varying these attribute levels according to an experimental design, they can generate descriptions of a hypothetical energy drink that are presented one at a time to respondents, who rate their preferences for all the product configurations.

In a conjoint study, relative importance is defined as the percentage contribution of each attribute. We sum the effects of all the attributes to get the total variation, and then we divide the effect of each attribute by the total variation to get the percent contribution. The attribute with the largest percent contribution is where we have the most leverage. This is, in effect, the Shapley Value. For our energy drink client, the Shapley Values for three different customer bases are shown below.

 

Changing the number of ounces in a can impacts the likelihood of purchase most heavily. Price is way up there too, with a Shapley Value of around 25%. Flavor and strength (caffeine) are really secondary factors in purchase intent, but they still matter.
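A minimal sketch of how those percentage contributions can be computed from conjoint part-worth utilities; the utilities below are made-up placeholders, not the client’s actual estimates.

```python
# Hypothetical part-worth utilities from a conjoint model (one entry per attribute level).
part_worths = {
    "ounces":   {"8 oz": -0.9, "12 oz": 0.2, "16 oz": 0.7},
    "price":    {"$1.99": 0.6, "$2.49": 0.0, "$2.99": -0.6},
    "caffeine": {"low": -0.2, "high": 0.2},
    "flavor":   {"citrus": 0.1, "berry": -0.1},
}

# Each attribute's effect is the spread (range) of its level utilities.
ranges = {attr: max(levels.values()) - min(levels.values())
          for attr, levels in part_worths.items()}
total = sum(ranges.values())

# Relative importance: each attribute's share of the total variation.
importance = {attr: 100 * r / total for attr, r in ranges.items()}
for attr, pct in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{attr}: {pct:.0f}%")
```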

Shapley Value – Attribute Attrition/Maximizing Product Lines

In our final example, we will demonstrate how to use a Shapley Value to maximize product lines displayed in a store. Adding the right combination of new items will grow your business; introducing the wrong new items will result in no growth or even cannibalize your top performers, leading to a revenue decline.

Perhaps a supermarket chain, Gigantic Market, wishes to determine the maximum number of laundry soaps it should display. The first thing to do is deploy a Maximum Difference (MaxDiff) choice exercise. For purposes of illustration, let’s say that Gigantic is trying to decide which of 28 brands to carry.

We take our 28 brands and divide them into 7 questions of 4 products. That way, each respondent sees each brand once. Below is a sample question from the MaxDiff.

Of the laundry brands shown below, which are you most likely to purchase and which are you least likely to purchase?

  1. Woolite
  2. Wisk
  3. Cold Power
  4. Daz

The beauty of this analysis is that we can create many different splits (a split is a full set of the 7 choice questions) in random order so that each respondent sees a different set of questions. This is performed using a random-design Excel macro. If the sample is, say, 2000, we may design 200 splits so that each is seen 10 times. We could, if requested, design a split for each respondent, but it is not usually necessary to do so.
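As an illustration of that design step (sketched here in Python rather than an Excel macro), each split simply shuffles the 28 brands and slices them into 7 questions of 4, so every respondent sees every brand exactly once. The brand names and counts below are placeholders.

```python
import random

def make_splits(brands, n_splits, per_question=4, seed=0):
    """Generate randomized MaxDiff 'splits': each split shows every brand once,
    grouped into questions of `per_question` brands."""
    rng = random.Random(seed)
    splits = []
    for _ in range(n_splits):
        shuffled = brands[:]
        rng.shuffle(shuffled)
        questions = [shuffled[i:i + per_question]
                     for i in range(0, len(shuffled), per_question)]
        splits.append(questions)
    return splits

brands = [f"Brand {i + 1}" for i in range(28)]   # placeholder brand names
splits = make_splits(brands, n_splits=200)       # e.g. 200 splits, each seen ~10 times at n=2000
```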

The MaxDiff exercise yields a data structure in which we can calculate a Bayesian coefficient using logistic regression for each brand for each respondent. The coefficients are then normalized across each respondent. That is, the sum of all brand coefficients equals 0 for each respondent. Thus, some are positive and some are negative.

In a nutshell, we now have the odds of purchase for each brand for each respondent—the likelihood of purchase. If we take the average across the entire sample of the coefficients, we get the average contribution of each brand to the store. That is the Shapley Value.
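A small sketch of that aggregation step, assuming the respondent-level coefficients have already been estimated and sit in a hypothetical DataFrame with one row per respondent and one column per brand:

```python
import pandas as pd

def brand_scores(coefs: pd.DataFrame) -> pd.Series:
    """Zero-center each respondent's brand coefficients, then average across the sample."""
    centered = coefs.sub(coefs.mean(axis=1), axis=0)      # each respondent's coefficients now sum to 0
    return centered.mean().sort_values(ascending=False)   # average contribution of each brand
```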

In the table below we see the Shapley Value for each of the 28 brands. Those in blue are positive. Those in red are negative.

 

Once the Shapley value is calculated, we simply choose those brands which add a positive revenue stream to the product line. Those in red that are near 0 such as Surf and Persil may be added to the inventory if Gigantic would like to sell 14 brands.

We would tell Gigantic Supermarket to stock those brands in blue. To maximize product placement, we would then suggest a TURF analysis. A full explanation is beyond the scope of this article.

Conclusion

The Shapley Value makes a positive allocation of items or value to that which generates positive revenue. How will this help a marketing research professional? In maximizing flows. The conditions under which the Shapley Value makes a positive allocation exclusively to items or value involved in maximizing flows are of extreme interest to our clients, and thus to us.

The Right Reward: Fifty Percent of Respondents Demand It. How to Deliver?

How do respondents prefer to be rewarded for their survey participation? Find out how to take a strategic approach to boost engagement.

 
By Jacilyn Bennett

Did you know that more than half of market research respondents participate in order to win rewards or prizes? That’s what our white paper, which is based on data from the bi-annual GRIT CPR (Consumer Participation in Research) study, found. That in itself is not surprising: today’s consumer population is used to being rewarded for everything from credit card purchases to travel. In addition, they are used to being in control and are demanding more from their interactions. A new survey from the CMO Council, in partnership with SAP Hybris, found that consumers want service and experience wherever they go. Rewards feed right into this mindset.

So what’s next? We know that most respondents want to be rewarded, but how? Highly personalized experiences are the name of the game, and this also applies to rewards – people want what they want, when they want it. The data showed that respondent satisfaction is tied up in incentive type.

“When we were analyzing the data from the study that applied specifically to respondent incentive preferences it became clear fairly quickly which options stood out from the crowd,” said Lenny Murphy of Greenbook, publisher of the bi-annual GRIT studies. “Cash is always a welcome reward, but when you look at the type of incentive that makes sense for market research companies, virtual cards led the pack.”

When it came to the types of rewards respondents prefer, virtual cards were the number one selection in North America. While cash was the number one reward overall, it presents complications for market research companies and isn’t a practical option in most cases.

In fact, across all demographic cuts and comparisons by other variables in the study, virtual cards scored well. When broken down by age group, the sought-after Boomers picked it as a number one choice and, factoring out the impractical cash choice, virtual cards were the top choice almost across the board for every generation. In addition, those elusive, high quality respondents who participate in research less frequently have a strong preference for virtual cards.

Data like this mean that market research companies need to be thoughtful about their approach to incentives for respondents, asking questions like:

  • What are the top reward choices by various age groups and geographic regions and what constituents make up my target audience?
  • Where is my audience participating in the research: mobile, online, in-person, telephone, mail? The platform may help indicate the preferred reward type.
  • Is my audience made up of frequent or infrequent participants?
  • What is the respondent’s motivation for participation in research?

All of these factors can be examined when looking at a specific study in order to tailor an incentive program that resonates most with the target audience. Taking a strategic approach like this can help boost engagement and, ultimately, market research outcomes.

 

The CPR study, on which the “Improving the Research Respondent Experience” white paper is based, was conducted in 14 countries and 8 languages among 6,750 consumers via online, telephone, and mobile-only surveys. The full white paper can be found here: http://www.virtualincentives.com/improving-research-respondent-experience

 

Growing the Industry by Funding More Research

Welcome to our next post featuring two insights projects currently offered on Collaborata, the market-research marketplace. GreenBook is happy to support a platform whose mission is to fund more research. We believe in the idea of connecting clients and research providers to co-sponsor projects. We invite you to Collaborate!

Collaborata is the first platform to crowd-fund research, saving clients upwards of 90% on each project. We’ve asked Collaborata to feature projects they are currently funding on a biweekly basis.

Collaborata Featured Project:  

“Hacking Longevity: A Three-Generation Perspective on Living to 100-Plus”

Context: Fundamental shifts are transforming the older life stages of each generation of Americans, but the effects are largely reported only anecdotally. This study will bring to light the implications of increased longevity on three generational cohorts in the second half of life.

Pitch: To date, increased longevity has been treated as conceptual and aspirational, as in “What will you do with 30 extra years?” Most of what we know about this expansion is anecdotal, even though we see and are experiencing seismic shifts at every stage of life.

Rather than approaching this as an “aging” study, we will be studying these shifts — some subtle and some quite large — with a fresh eye. We want to understand how people are “hacking longevity” and if the idea of longer life informs plans and thinking.

This study is designed to frame the issues for brands and organizations who want to play an active role in serving the longevity economy in meaningful, informed ways.

This research is being underwritten in significant part by AARP; please join the AARP by co-sponsoring this landmark study and assuring its successful launch.

Deliverables: Formal report, including insights and recommendations, detailed analysis and full data tables. In-person and web-based presentations available.

Who’s Behind This: The Business of Aging helps businesses and organizations advance their goals, while advocating for and serving the mature market. Lori Bitter, President, is a well-known, well-respected expert. She is the author of “The Grandparent Economy” and was recently named as one of “The Top 50 Influencers in Aging” by Next Avenue.

To purchase this study or for more info: click here or email info@collaborata.com

 

Know someone who would benefit from this project? Head here for a referral link to offer your “friend” a 10% discount and you a check in the same amount!

Researchers: Is There Poop in Your Brownies?

Posted by Ron Sellers Wednesday, November 8, 2017, 6:55 am
Posted in category Industry Trends, Quality
With the drive for speed in research, are you sacrificing getting quality respondents?

 
By Ron Sellers

Business solutions in 48 hours! Get your survey data overnight! Do agile research! Fast, faster, fastest!

Yes, it seems the insights world is moving faster and faster every day. Many companies are promising turnaround times that would have seemed absurd just a decade ago. Shorter questionnaires, automation, and DIY solutions all offer speed and more speed.

But there’s one big question with this race to be faster than everyone else: what’s getting sacrificed?

No matter how a questionnaire is designed or how data processing or reporting are automated, there’s still an important component to any quantitative study: respondents. And while online research panels can give you access to thousands of respondents in just hours, panel quality ain’t gettin’ any better, folks.

As regular users of panels, we are also regular recipients of bad respondents mixed in with the good ones:

  • Research bots
  • Duplicate respondents
  • Straightliners
  • Speeders
  • Other kinds of obvious cheaters

But aren’t panel companies and field agencies screening out the bad respondents for you? Well, they’re trying, but many of their solutions are automated (again, in the interests of being cheaper and faster). For example, they’ll employ an algorithm that automatically tosses any respondent who answers a questionnaire in less than 50% of the average length, or one that catches straightliners in all your grids (that is, if you’re still using lots of grids).  

Frankly, they just miss a lot.  

Panel quality is atrocious today. Grey Matter Research has adopted the position that every respondent we get is a bad respondent, until we can demonstrate otherwise. This takes a lot more than digital fingerprinting or pre-programmed algorithms. Usually, it requires going line-by-line through the data to find and remove problem respondents. Just a few ways we do this:

  • We review every response to every open-end. Even once the field agency or panel has done their quality control checks, we regularly receive verbatims that just say “great,” give answers that have nothing to do with the question, or even are actual copies of the question itself that the bot picked up from the questionnaire and inserted as the answer.
  • We look hard for duplicates. Despite the claims of how digital fingerprinting removes this problem, we regularly find dozens of duplicates in a sample. The chances that a survey database of 600 respondents contains two 43-year-old Hispanic women from Iowa?  Possible. The chances that both are football fans who spelled their favorite team as the Pittsbergh Stellers? And that they just happened to complete the questionnaire 15 minutes apart? Not so possible.
  • We search for logical anomalies, which are different in every questionnaire. In various recent studies, we’ve thrown out people who claimed to have been in both Boy Scouts and Girl Scouts as kids, those who make under $30,000 annually but had given $40,000 last year to charity, those who supposedly live one mile away from four different local hospitals which are 75 miles apart, and those who belong to a non-existent organization (with a name that couldn’t be confused with a real one).  

Of course, respondents do make mistakes or misread questions, so usually the decision to toss a respondent is based on a combination of factors. They straightlined the one short grid we included? Mark ‘em yellow. They also completed the 12-minute questionnaire in 8 minutes? Downgrade to orange. Also answered the question “What are the main reasons you are not at all interested in learning more about this product” with “I like this advertisement the best”? Buh-bye.
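A minimal sketch of how such a flag-and-escalate review might be scripted in Python; the column names and thresholds are hypothetical, and the final keep-or-toss call still belongs to a human reviewer:

```python
import pandas as pd

def quality_flags(df: pd.DataFrame, median_minutes: float) -> pd.DataFrame:
    """Add simple warning flags per respondent; the toss/keep decision stays manual."""
    flags = pd.DataFrame(index=df.index)
    # Speeders: finished in less than half the median completion time.
    flags["speeder"] = df["minutes"] < 0.5 * median_minutes
    # Straightliners: gave the same answer to every item in a grid.
    grid_cols = [c for c in df.columns if c.startswith("grid_")]
    flags["straightliner"] = df[grid_cols].nunique(axis=1) == 1
    # Possible duplicates: identical demographics plus identical open-end text.
    flags["possible_dupe"] = df.duplicated(subset=["age", "gender", "zip", "open_end"], keep=False)
    # Suspicious open-ends: empty or one-word throwaways (still reviewed by hand).
    flags["weak_open_end"] = df["open_end"].fillna("").str.split().str.len() <= 1
    flags["flag_count"] = flags.sum(axis=1)
    return flags
```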

So what does any of this have to do with speed? (Or with brownies…but I’ll get to that in a moment.) Simple: this cleaning process is not a fast one. It doesn’t have to take days, but it won’t be done in minutes, either. In the quest for getting your data faster, how many of the respondents you’re getting are bots, duplicates, satisficers, or those who just didn’t actually pay attention to the questions you were asking?

Do you have any idea how many respondents had to be replaced on your last study? Or what criteria your vendor used to identify fraudulent or poor-quality respondents?

Most importantly: Did your vendor even do anything beyond some basic, automated checks to assure you got real, quality respondents?

Make no mistake – this is not just a problem with quick turn-around surveys. I’ve seen plenty of databases delivered in no particular hurry that still lacked proper quality control. But going all-out for speed dramatically increases the chances that your data includes some bad respondents, because putting everyone on a rush basis makes it far less likely that there will be time available for quality control.

In a qualitative interview last month, I had a respondent object to a product concept, because she felt one small part of the statement was not true. When I probed for why this undermined the whole concept, she earthily explained, “Even a little bit of poop in the brownie batter means I’m not going to eat the brownies.”

So what proportion of bad respondents are you willing to accept in order to get your data faster:  2%? Five percent? Ten percent? Twenty?  

Or, to paraphrase my favorite respondent of the year so far: How much poop will you accept in your batter in order to get your research brownies baked faster?

Complicated vs. Complex

The world has always faced the unpredictable and unexpected. However, for the past decade it seems that major unforeseen events have been happening more and more often, taking a toll on all of us.

By Ruben Alcaraz

This acceleration of unforeseen events goes hand-in-hand with the technological boom societies around the world are experiencing and the emergence of ubiquitous, billion-user informational hubs — like Google, Amazon, Twitter, Facebook, and LinkedIn — that are easily accessed through mobile and many other web-connected devices. Informational hubs facilitate public access to any individual topic, news item or opinion (no matter how obscure) and make this connection almost effortless. Some claim that the 21st century will be known by future generations as the ‘Age of Connectivity.’

This age of connectivity brings with it major deviations from historical norms. For example, in the past, major events such as wars, revolutions, or rivalries were extremely public events that took years to unfold. Today, events look very different: they seem to emerge out of nowhere and spread globally in a matter of hours. The staggering speed of events has rendered previously tried-and-true approaches useless. This is not just a problem for businesses; even governments have been caught in situations where they were unable to properly identify what was happening and react in time.

This uber-connectivity has changed the nature of everything. An invisible informational battle is constantly taking place: interconnected technology propagates, branches and breaks down information, regardless of its distance from the truth or its source. Exploring and understanding these digital fields will become as important as mastering marketing, advertising and insights once was.

Before I continue, I think it is important to mention that one of the key benefits of technology is the gift of time. We adopt technology more easily when it makes life easier, simplifies tasks and frees us to focus on things we’d rather be doing. For example, it is conceivable that a person could buy a birthday gift, plan a party, send emails, talk to family in another state, and read the news within a span of 60 minutes. This means that the value of time is not constant; an hour today is worth more than an hour ten years ago.

Understanding that technology and time are inversely related serves as the foundation for what has been happening. Technology is an enabler of communication and has impacted time to such a degree that coordination of events can take minutes. The grace period that once existed between hearing about an event and reacting to it has been drastically reduced, as the Egyptian government found out in 2011. Structures and methods born out of experiences from past generations are not as effective as they used to be when addressing an age in which information can spread like a virus.

It is understood today that connection is remarkably non-local, meaning that things can start in places well beyond our physical space and imagination. The scary part is that much of the world is not yet connected so we have not seen the full effect of time compression. Psychologically, this situation creates a constant fear of vulnerability and calls for new ways to navigate a virtual battlefield. To get there, thinking about and making the distinction between two very different sets of systems is required:

1) Complicated systems are often engineered. A cell phone is a complicated system. Its inner workings may be difficult to grasp, but its outcomes can be reliably reproduced and its outputs are fully predictable. It is possible, with enough time and help, for most people to systematically figure out its inner workings and assembly. In other words, complicated things have fixed rules that can be systematically understood by taking them apart and analyzing their details.

2) Complex systems are similar to complicated systems in that they also have many components, but this is where the similarity ends. Parts of a complex system are unpredictable and can never truly be replicated. Imagine a thunderstorm: we know how thunderstorms start, what the interactions generate and what they sound like, but we cannot predict or control them — and may never be able to. However, there are ways to deal with complexity more gracefully.

A complicated system approach assumes a linear future based on past history; it makes life easy, but in today’s world it creates a false sense of confidence. On the other hand, a complex system approach recognizes that everything is in a constant state of change and demands continuous hard work. The latter way of thinking will be crucial in dealing with or containing the impact of unexpected events as they continue to accelerate.