Smarten Up: 5 Tips to Become a Research Technologist

Posted by Stephen Phillips Thursday, November 16, 2017, 6:55 am
Technology is changing the market research industry. Learn how to stay one step ahead by becoming a research technologist.


By Stephen Phillips

In research we are often accused of being too conservative: trying to give our (typically marketing) clients what they are used to rather than rocking the boat. But we all know times are changing, and we need to provide great-quality research, faster and cheaper than ever before, if we are to stay relevant.

The times, they are a-changin’

You can see the effects right now: areas of research are being taken from under our noses. The likes of Qualtrics, Medallia, and Survey Monkey have snatched swathes of revenue, particularly in customer satisfaction, while not being hugely engaged with the research industry itself.

With the advent of programmatic ads, there is a chance this shift could spread to the whole creative development area of research. More and more, we might see micro surveys cut into areas such as tracking, product development, and general brand positioning – all owned and managed by technology companies who may not value the training and rigour of our research thinking. We need to fight back, fight for great quality research, and to do this we must embrace a technology mindset.

This of course requires using new technology, but few of us have had the technical training that would help with this transition. As someone who has made the jump from a research company to a technology company (in the field of research), I thought I could suggest some things that have helped me on my journey:

1. Read ‘The Lean Startup’

Whether you're starting a new business, trying to change a company you work for, or launching a new product or service, you must read this book by Eric Ries about testing your vision and learning what your customers want. It is also a great help in understanding how technology is built and how you can work with it.

2. Understand AI (or IA)

Uncover which role you can play in and around the emergence of Artificial Intelligence. AI will complement everything we do in research in less than two years. Get a view of how we as human researchers can overcome the 'AI will do everything' crowd by watching this TED Talk by Garry Kasparov. At Zappi, we believe in IA – see my Greenbook blog post on this.

3. Code in a day

As technology continues to swamp not only market research but life in general, it's important to contextualise its inner workings even if you don't work in code. To do this, send yourself on a 'code in a day' course (via decoded.com). It will give you a much better appreciation for what software development is.

4. Imagine a single platform and your role in it

Imagine a world wherein clients have just one technology platform for running all of their market research and data analytics. Just as Salesforce has taken over CRM, Google owns search and Amazon owns retail, clients will come to have one single data platform, and your job will be providing something within that platform. Try to understand what your role could be in helping facilitate their use of this platform.

5. Stop thinking questions, start thinking data

Too often we think about question formats when actually it’s only the data that matters. It is a hard shift, but important. See how you can add value to the morass of data clients already have (whether making sense of data, integrating data, analysing data, or helping clients act off data).

As usual, I will await the GRIT Report with bated breath. It's important to keep a finger on the pulse (rather than bury your head in the sand) and make moves to embrace disruptive technology – or risk becoming a research dinosaur!

I would love to hear any other suggestions for making researchers more technically astute; if you post comments, I will compile the ideas and post an updated version soon.

Market Research Firms Fight to Survive: Top 5 Ways MR Firms are Overcoming Challenges

The market research industry is changing. Find out the top five ways MR firms are overcoming challenges from this shift.


By Jitesh Marlecha

The market research industry has been hit hard by the rise of the digital consumer. With enterprises increasingly emphasizing a consumer-centric approach, their consumer insights teams are constantly pushed by the C-Suite to get data-driven insights in a very short period of time. This drives a greater need for agile research, which traditional market research firms are not set up to cater to. So, enterprises are turning to DIY technologies to collect their own data and conduct research. As a result, the global growth rate for the industry has remained flat at 2%, driven only by inflation. It’s evident that traditional leaders in this space are struggling to sustain and scale.

The market research firms that have been able to sustain themselves, and in some cases even showcase double-digit growth, are those that recognize the urgency of transformation. While those that are slow to evolve bleed faster each year, those that are managing to adapt and succeed are doing it in a few key ways:

Specialization

It's no longer enough for a firm to provide generic market research. The key to attracting customers is to figure out how to be unique in some way, usually by delivering a unique type of data, providing multi-source data, or offering some type of unique technology. Successful firms are the ones that combine deep domain expertise in a given industry with a robust data inventory that isn't available elsewhere. Promising startups are now usually built on the premise of some niche capability, enabling them to find investors who can allow them to continue their journey.

Acquisitions

Many larger firms that have become more specialized have done so through acquisitions. These organizations acquire a specialist firm so they can sustain their business and continue to attract new clients with a unique proposition. Stagwell Group, for example, bought the National Research Group from Nielsen in 2015 and revived its entertainment research specialty, and then this summer added to that by acquiring TV pilot testing assets from Nielsen as well.

Mid-sized independent generalist MR firms are the ones that are struggling the most. They lack a niche, as well as the resources to acquire a firm that has one. They also tend to execute research operations in-house instead of using partners. This eats into their margin and limits the funds they can invest in R&D to stay unique and drive future growth.

Outsourcing non-core functions

This is how adaptive firms are staying nimble and focused only on what they do best. They use trusted partners to help with everything that falls outside of their core competency. This can include all sorts of research operations services, including data collection, data processing, project management, QA services, text analytics, data warehousing and reporting.

Many clients are moving research in-house as technology is making it easier to do it themselves, versus working with a research agency. Unless those clients see deep expertise, actionable insights and high value-add from their MR firm, they are likely to make this move. So it is critical for MR firms to make themselves indispensable by doubling down on their core competency. They should not dilute their focus by engaging in research operations, if the client can handle that part on their own anyway. End clients come to them for their research expertise, not their research operational capabilities.

Embracing new technologies

It used to be that research was considered a “nice-to-have.” But now, all business decisions are backed with data and research. Tech giants like Google and Amazon do loads of research at each stage of their product and services lifecycle. The new imperative is to move beyond traditional data collection and analysis methods. Every day, new technologies are emerging which can not only collect and analyze customer, employee and brand experiences, but also provide action and impact measurement based on the insights.

Firms that are slow to adopt new technologies will fall behind their customers' needs and will experience faster churn in their customer base.

Delivering faster turnaround times

There is greater pressure now to offer products and services that are competitive on both price and turnaround time. Even MR firms that are solving highly complex problems with deep domain expertise and unique data still need to offer greater speed and lower cost. In many cases, they're doing this by building strategic partnerships with suppliers that can deliver things like technology infrastructure and expertise, flexible operating models, automation, and efficient, process-driven setup, as well as 24/7 and global coverage.

Market research firms around the world, in all stages of growth, are getting stuck. Getting unstuck will mean delivering value faster, cheaper and more uniquely than ever before.

Lloyd Shapley’s Value

Learn the basics of the Shapley Value, a solution concept in cooperative game theory, and then explore its most common uses in market research.

 
By Michael Lieberman

The Shapley Value, named in honor of Lloyd Shapley, who introduced it in 1953, is a solution concept in cooperative game theory. To each cooperative game it assigns a unique distribution of a total surplus generated by the coalition of all players.

Basically, the Shapley Value is the average expected marginal contribution of one player after all possible combinations have been considered. This has been proven to be the fairest approach to allocate value. A ‘player’ can be a product sold in a store, an item on a restaurant menu, a party injured in a car accident or a group of investors in a large real estate deal. It is employed in economic models, product line distribution, procurement measures for embassies and industry, market mix models and calculations for tort damages.
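In standard notation, for a game on player set N with n players and value function v, player i's Shapley Value is:

```latex
\varphi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n-|S|-1)!}{n!}\,\Bigl(v\bigl(S \cup \{i\}\bigr) - v(S)\Bigr)
```

This is exactly the average of player i's marginal contribution, v(S ∪ {i}) − v(S), taken over all n! possible orders of entry, which is how it is computed in the ABC example below.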

The Shapley Value shows up in several popular marketing research techniques. Below, we will set out the ABCs of the Shapley Value, and then show its most common uses in marketing research.

Shapley Value ABCs

Here’s the simplest case of the Shapley Value. Let’s say there are three players, A, B, and C. When they enter a game, they add points to the score. The total point-value in the game is 10.

As the chart below illustrates, when the order of entry is A, B, C, A's and B's contributions are 4 each, and C's is 2. However, in the second round of the game, A's contribution is 3, while B's is 5.

 

In total there are six possible different orders of entry. If we play all six, and then take the average contribution of each player, we arrive at the Shapley Value.
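To make the averaging step concrete, here is a minimal Python sketch of the three-player game. The coalition point values are hypothetical stand-ins for the chart, chosen only so that the A-B-C order of entry reproduces the contributions of 4, 4 and 2 described above; the point is the averaging logic, not the specific numbers.

```python
# Minimal sketch of the three-player example. Coalition values are hypothetical,
# chosen so the A-B-C order of entry gives marginal contributions of 4, 4 and 2.
from itertools import permutations

v = {  # v(coalition) = points that coalition scores on its own
    frozenset(): 0,
    frozenset("A"): 4, frozenset("B"): 4, frozenset("C"): 2,
    frozenset("AB"): 8, frozenset("AC"): 7, frozenset("BC"): 6,
    frozenset("ABC"): 10,
}

players = ["A", "B", "C"]
orders = list(permutations(players))          # all 3! = 6 orders of entry
shapley = {p: 0.0 for p in players}

for order in orders:
    coalition = frozenset()
    for p in order:
        marginal = v[coalition | {p}] - v[coalition]   # what p adds on entry
        shapley[p] += marginal / len(orders)           # average over the 6 orders
        coalition = coalition | {p}

print(shapley)   # the three averages sum to the total of 10 points
```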

Now we will see several common applications of the Shapley Value in marketing research. The Value is quite useful: it yields highly equitable solutions and thus provides several vital research measures.

Shapley Value – Regression and Brand Equity

Let's say that a major automobile company has a public relations disaster. In order to rebuild its brand equity and regain trust, the company commissions a series of regression analyses to gauge how buyers are viewing its type of vehicle. However, what it really wants to know is how American auto buyers view trust.

The disaster is fresh, so our company would like a composite of which values go into 'Is this a Company I Trust' across the industry. Thus, it surveyed ten of its major competitors on various elements of automobile purchase. We then stack the data into one dataset and run a Shapley regression. What we hope to see are the major components of Trust.

 

Not surprisingly, family safety is the leading driver of Trust. However, we now have Shapley Values of the major components. These findings would normally be handed over to the public relations team to begin damage control.
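For readers who want to see the mechanics, "Shapley regression" is commonly operationalized as a Shapley decomposition of R-squared (also known as LMG relative importance): each driver is credited with its average gain in explained variance across all orders in which the drivers could enter the model. Below is a minimal Python sketch under that assumption; the column names and input file are hypothetical.

```python
# Minimal sketch of a Shapley (LMG) driver analysis: each driver's importance is
# its average marginal gain in R-squared over all possible orders of entry.
# Column names and the input file are hypothetical.
from itertools import permutations
import pandas as pd
from sklearn.linear_model import LinearRegression

def r_squared(df, predictors, target):
    """R-squared of an OLS fit of `target` on `predictors` (empty set -> 0)."""
    if not predictors:
        return 0.0
    X, y = df[list(predictors)], df[target]
    return LinearRegression().fit(X, y).score(X, y)

def shapley_importance(df, drivers, target):
    """Average each driver's marginal R-squared contribution over all orderings."""
    orders = list(permutations(drivers))
    scores = {d: 0.0 for d in drivers}
    for order in orders:
        included = []
        for d in order:
            gain = r_squared(df, included + [d], target) - r_squared(df, included, target)
            scores[d] += gain / len(orders)
            included.append(d)
    return scores

# Hypothetical usage on the stacked, multi-brand dataset described above:
# df = pd.read_csv("stacked_auto_survey.csv")
# print(shapley_importance(df, ["family_safety", "reliability", "value", "dealer_service"], "trust"))
```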

Shapley Value – Product Design

The Shapley concept of relative importance comes from product design, where we are able to piece together components in any way we wish. Products are bundles of attributes, and attributes are collections of levels. We’ll take a typical conjoint study for a product design.

An energy drink company may be thinking of how best to configure a package with attributes like the number of cans in a bundle, the number of ounces in a can, the amount of caffeine, flavor and price. By systematically varying these attribute levels according to an experimental design, they can generate descriptions of a hypothetical energy drink that are presented one at a time to respondents, who rate their preferences for all the product configurations.

In a conjoint study, relative importance is defined as the percentage contribution of each attribute. We sum the effects of all the attributes to get the total variation, and then we divide the effect of each attribute by the total variation to get the percent contribution. The attribute with the largest percent contribution is where we have the most leverage. This is, in effect, the Shapley Value. For our energy drink client, the Shapley Values for three different customer bases are shown below.

 

Changing the number of ounces in a can has the heaviest impact on likelihood of purchase. Price is way up there too, with a Shapley Value of around 25%. Flavor and strength (caffeine) are secondary factors in purchase intent, but they still matter.
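As a worked illustration of the percent-contribution arithmetic described above, here is a short Python sketch. The part-worth utilities are hypothetical, and each attribute's "effect" is taken here as the range of its level utilities (a common convention).

```python
# Minimal sketch of attribute importance as percent contribution.
# Part-worth utilities below are hypothetical illustrations only.
partworths = {
    "cans_per_bundle": {"4": -0.3, "6": 0.0, "12": 0.4},
    "ounces_per_can":  {"8": -0.8, "12": 0.1, "16": 0.9},
    "caffeine":        {"low": -0.2, "medium": 0.1, "high": 0.1},
    "flavor":          {"citrus": -0.1, "berry": 0.2, "original": 0.0},
    "price":           {"$1.99": 0.6, "$2.49": 0.0, "$2.99": -0.7},
}

# Each attribute's effect = range of its level utilities
effects = {attr: max(levels.values()) - min(levels.values())
           for attr, levels in partworths.items()}
total_variation = sum(effects.values())

# Percent contribution of each attribute (sums to ~100%)
importance = {attr: round(100 * effect / total_variation, 1)
              for attr, effect in effects.items()}
print(importance)
```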

Shapley Value – Attribute Attrition/Maximizing Product Lines

In our final example, we will demonstrate how to use a Shapley Value to maximize product lines displayed in a store. Adding the right combination of new items will grow your business; introducing the wrong new items will result in no growth or even cannibalize your top performers, leading to a revenue decline.

Perhaps a supermarket chain, Gigantic Market, wishes to determine the maximum number of laundry soaps it should display. The first thing to do is deploy a Maximum Difference (MaxDiff) choice exercise. For purposes of illustration, let’s say that Gigantic is trying to decide which of 28 brands to carry.

We take our 28 brands and divide them into 7 questions of 4 products. That way, each respondent sees each brand once. Below is a sample question from the MaxDiff.

Of the laundry brands shown below, which are you most likely to purchase and which are you least likely to purchase?

  1. Woolite
  2. Wisk
  3. Cold Power
  4. Daz

The beauty of this analysis is that we can create many different splits (a split is one full set of the 7 choice questions) in random order, so that each respondent sees a different set of questions. This is performed using a random-design Excel macro. If the sample is, say, 2,000, we may design 200 splits so that each is seen 10 times. We could, if requested, design a split for each respondent, but it is not usually necessary to do so.
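As an illustration of the split-generation step (sketched here in Python rather than the Excel macro mentioned above), each split simply shuffles the 28 brands and chunks them into 7 questions of 4, so every brand appears exactly once per split. Brand names are placeholders.

```python
# Minimal sketch of MaxDiff split generation: each split shuffles the 28 brands
# and chunks them into 7 questions of 4 brands, covering every brand once.
import random

BRANDS = [f"Brand {i+1}" for i in range(28)]   # placeholder names

def make_split(brands, per_question=4):
    """Shuffle the brand list and chunk it into questions of `per_question` brands."""
    shuffled = random.sample(brands, len(brands))
    return [shuffled[i:i + per_question] for i in range(0, len(shuffled), per_question)]

# e.g. 200 splits for a sample of 2,000, so each split is seen about 10 times
splits = [make_split(BRANDS) for _ in range(200)]
print(splits[0])   # first split: 7 questions of 4 brands each
```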

The MaxDiff exercise yields a data structure from which we can calculate a Bayesian coefficient, using logistic regression, for each brand for each respondent. The coefficients are then normalized within each respondent. That is, the sum of all brand coefficients equals 0 for each respondent. Thus, some are positive and some are negative.

In a nutshell, we now have the odds of purchase for each brand for each respondent, i.e. the likelihood of purchase. If we take the average of the coefficients across the entire sample, we get the average contribution of each brand to the store. That is the Shapley Value.
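A minimal sketch of that scoring step follows, assuming a hypothetical table of per-respondent brand utilities (one row per respondent, one column per brand). Normalization is done here by mean-centering each row so a respondent's coefficients sum to zero, then averaging by brand.

```python
# Minimal sketch of the scoring step: center each respondent's brand utilities
# so they sum to zero, then average across the sample. Input frame is hypothetical.
import pandas as pd

def brand_scores(utilities: pd.DataFrame) -> pd.Series:
    """Center each respondent's utilities to sum to 0, then average by brand."""
    centered = utilities.sub(utilities.mean(axis=1), axis=0)   # row-wise centering
    return centered.mean(axis=0).sort_values(ascending=False)  # average per brand

# Hypothetical usage:
# utilities = pd.read_csv("maxdiff_utilities.csv", index_col="respondent_id")
# print(brand_scores(utilities))   # positive = adds to the line; negative = drags it down
```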

In the table below we see the Shapley Value for each of the 28 brands. Those in blue are positive. Those in red are negative.

 

Once the Shapley Value is calculated, we simply choose those brands which add a positive revenue stream to the product line. Those in red that are near zero, such as Surf and Persil, may be added to the inventory if Gigantic would like to carry 14 brands.

We would tell Gigantic Supermarket to stock those brands in blue. To maximize product placement, we would then suggest a TURF analysis. A full explanation is beyond the scope of this article.

Conclusion

The Shapley Value allocates credit to the items, or the value, that generate positive revenue. How will this help a marketing research professional? In maximizing flows. Knowing the conditions under which the Shapley Value makes a positive allocation exclusively to the items or value involved in maximizing flows is of extreme interest to our clients, and thus to us.

The Right Reward: Fifty Percent of Respondents Demand It. How to Deliver?

How do respondents prefer to be rewarded for their survey participation? Find out how to take a strategic approach to boost engagement.

 
By Jacilyn Bennett

Did you know that more than half of market research respondents participate in order to win rewards or prizes? That’s what our white paper, which is based on data from the bi-annual GRIT CPR (Consumer Participation in Research) study, found. That in itself is not surprising: today’s consumer population is used to being rewarded for everything from credit card purchases to travel. In addition, they are used to being in control and are demanding more from their interactions. A new survey from the CMO Council, in partnership with SAP Hybris, found that consumers want service and experience wherever they go. Rewards feed right into this mindset.

So what’s next? We know that most respondents want to be rewarded, but how? Highly personalized experiences are the name of the game, and this also applies to rewards – people want what they want, when they want it. The data showed that respondent satisfaction is tied up in incentive type.

“When we were analyzing the data from the study that applied specifically to respondent incentive preferences it became clear fairly quickly which options stood out from the crowd,” said Lenny Murphy of Greenbook, publisher of the bi-annual GRIT studies. “Cash is always a welcome reward, but when you look at the type of incentive that makes sense for market research companies, virtual cards led the pack.”

When it came to the types of rewards respondents prefer, virtual cards were the number one selection in North America. While cash was the number one reward overall, it presents complications for market research companies and isn’t a practical option in most cases.

In fact, across all demographic cuts and comparisons by other variables in the study, virtual cards scored well. When broken down by age group, the sought-after Boomers picked them as their number one choice and, factoring out the impractical cash option, virtual cards were the top choice almost across the board for every generation. In addition, those elusive, high-quality respondents who participate in research less frequently have a strong preference for virtual cards.

Data like this means that market research companies need to be thoughtful about their approach to incentives for respondents, asking questions like:

  • What are the top reward choices by various age groups and geographic regions and what constituents make up my target audience?
  • Where is my audience participating in the research: mobile, online, in-person, telephone, mail? The platform may help determine the preferred reward type.
  • Is my audience made up of frequent or infrequent participants?
  • What is the respondent’s motivation for participation in research?

All of these factors can be examined when looking at a specific study in order to tailor an incentive program that resonates most with the target audience. Taking a strategic approach like this can help boost engagement and, ultimately, market research outcomes.

 

The CPR study, on which the “Improving the Research Respondent Experience” white paper is based, was conducted in 14 countries and 8 languages among 6,750 consumers via online, telephone, and mobile-only surveys. The full white paper can be found here: http://www.virtualincentives.com/improving-research-respondent-experience

 

Growing the Industry by Funding More Research

Welcome to our next post featuring two insights projects currently offered on Collaborata, the market-research marketplace. GreenBook is happy to support a platform whose mission is to fund more research. We believe in the idea of connecting clients and research providers to co-sponsor projects. We invite you to Collaborate!

Collaborata is the first platform to crowd-fund research, saving clients upwards of 90% on each project. We’ve asked Collaborata to feature projects they are currently funding on a biweekly basis.

Collaborata Featured Project:  

“Hacking Longevity: A Three-Generation Perspective on Living to 100-Plus”

Context: Fundamental shifts are transforming the older life stages of each generation of Americans, but the effects are largely reported only anecdotally. This study will bring to light the implications of increased longevity on three generational cohorts in the second half of life.

Pitch: To date, increased longevity has been treated as conceptual and aspirational, as in “What will you do with 30 extra years?” Most of what we know about this expansion is anecdotal, even though we see and are experiencing seismic shifts at every stage of life.

Rather than approaching this as an “aging” study, we will be studying these shifts — some subtle and some quite large — with a fresh eye. We want to understand how people are “hacking longevity” and if the idea of longer life informs plans and thinking.

This study is designed to frame the issues for brands and organizations who want to play an active role in serving the longevity economy in meaningful, informed ways.

This research is being underwritten in significant part by AARP; please join the AARP by co-sponsoring this landmark study and assuring its successful launch.

Deliverables: Formal report, including insights and recommendations, detailed analysis and full data tables. In-person and web-based presentations available.

Who’s Behind This: The Business of Aging helps businesses and organizations advance their goals, while advocating for and serving the mature market. Lori Bitter, President, is a well-known, well-respected expert. She is the author of “The Grandparent Economy” and was recently named as one of “The Top 50 Influencers in Aging” by Next Avenue.

To purchase this study or for more info: click here or email info@collaborata.com

 

Know someone who would benefit from this project? Head here for a referral link to offer your "friend" a 10% discount and earn yourself a check for the same amount!

Researchers: Is There Poop in Your Brownies?

Posted by Ron Sellers Wednesday, November 8, 2017, 6:55 am
Posted in category Industry Trends, Quality
With the drive for speed in research, are you sacrificing getting quality respondents?

 
By Ron Sellers

Business solutions in 48 hours! Get your survey data overnight! Do agile research! Fast, faster, fastest!

Yes, it seems the insights world is moving faster and faster every day. Many companies are promising turnaround times that would have seemed absurd just a decade ago. Shorter questionnaires, automation, and DIY solutions all offer speed and more speed.

But there’s one big question with this race to be faster than everyone else: what’s getting sacrificed?

No matter how a questionnaire is designed or how data processing or reporting are automated, there’s still an important component to any quantitative study: respondents. And while online research panels can give you access to thousands of respondents in just hours, panel quality ain’t gettin’ any better, folks.

As regular users of panels, we are also regular recipients of bad respondents mixed in with the good ones:

  • Research bots
  • Duplicate respondents
  • Straightliners
  • Speeders
  • Other kinds of obvious cheaters

But aren’t panel companies and field agencies screening out the bad respondents for you? Well, they’re trying, but many of their solutions are automated (again, in the interests of being cheaper and faster). For example, they’ll employ an algorithm that automatically tosses any respondent who answers a questionnaire in less than 50% of the average length, or one that catches straightliners in all your grids (that is, if you’re still using lots of grids).  
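For context, the automated screening described above usually boils down to a couple of lines of logic. A rough Python sketch (thresholds, column names and the input frame are all hypothetical):

```python
# Rough sketch of typical automated panel checks: flag speeders (under 50% of the
# average completion time) and grid straightliners. Columns are hypothetical.
import pandas as pd

def flag_suspects(df: pd.DataFrame, grid_cols: list[str]) -> pd.DataFrame:
    """Return respondents flagged as speeders or straightliners."""
    speed_cutoff = 0.5 * df["duration_seconds"].mean()       # under 50% of average length
    is_speeder = df["duration_seconds"] < speed_cutoff
    is_straightliner = df[grid_cols].nunique(axis=1) == 1    # same answer across the grid
    return df[is_speeder | is_straightliner]

# Hypothetical usage:
# suspects = flag_suspects(survey, grid_cols=["q5_a", "q5_b", "q5_c", "q5_d"])
```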

Frankly, they just miss a lot.  

Panel quality is atrocious today. Grey Matter Research has adopted the position that every respondent we get is a bad respondent, until we can demonstrate otherwise. This takes a lot more than digital fingerprinting or pre-programmed algorithms. Usually, it requires going line-by-line through the data to find and remove problem respondents. Just a few ways we do this:

  • We review every response to every open-end. Even once the field agency or panel has done their quality control checks, we regularly receive verbatims that just say "great," give answers that have nothing to do with the question, or are even actual copies of the question itself that a bot picked up from the questionnaire and inserted as the answer.
  • We look hard for duplicates. Despite the claims of how digital fingerprinting removes this problem, we regularly find dozens of duplicates in a sample. The chances that a survey database of 600 respondents contains two 43-year-old Hispanic women from Iowa?  Possible. The chances that both are football fans who spelled their favorite team as the Pittsbergh Stellers? And that they just happened to complete the questionnaire 15 minutes apart? Not so possible.
  • We search for logical anomalies, which are different in every questionnaire. In various recent studies, we’ve thrown out people who claimed to have been in both Boy Scouts and Girl Scouts as kids, those who make under $30,000 annually but had given $40,000 last year to charity, those who supposedly live one mile away from four different local hospitals which are 75 miles apart, and those who belong to a non-existent organization (with a name that couldn’t be confused with a real one).  

Of course, respondents do make mistakes or misread questions, so usually the decision to toss a respondent comes from a combination of factors. They straightlined the one short grid we included? Mark 'em yellow. They also completed the 12-minute questionnaire in 8 minutes? Downgrade to orange. And they answered the question "What are the main reasons you are not at all interested in learning more about this product" with "I like this advertisement the best"? Buh-bye.

So what does any of this have to do with speed? (Or with brownies…but I’ll get to that in a moment.) Simple: this cleaning process is not a fast one. It doesn’t have to take days, but it won’t be done in minutes, either. In the quest for getting your data faster, how many of the respondents you’re getting are bots, duplicates, satisficers, or those who just didn’t actually pay attention to the questions you were asking?

Do you have any idea how many respondents had to be replaced on your last study? Or what criteria your vendor used to identify fraudulent or poor-quality respondents?

Most importantly: Did your vendor even do anything beyond some basic, automated checks to assure you got real, quality respondents?

Make no mistake – this is not just a problem with quick turn-around surveys. I’ve seen plenty of databases delivered in no particular hurry that still lacked proper quality control. But going all-out for speed dramatically increases the chances that your data includes some bad respondents, because putting everyone on a rush basis makes it far less likely that there will be time available for quality control.

In a qualitative interview last month, I had a respondent object to a product concept, because she felt one small part of the statement was not true. When I probed for why this undermined the whole concept, she earthily explained, “Even a little bit of poop in the brownie batter means I’m not going to eat the brownies.”

So what proportion of bad respondents are you willing to accept in order to get your data faster:  2%? Five percent? Ten percent? Twenty?  

Or, to paraphrase my favorite respondent of the year so far: How much poop will you accept in your batter in order to get your research brownies baked faster?

Complicated vs. Complex

The world has always faced the unpredictable and unexpected. However, for the past decade it seems that major unforeseen events have been happening more and more often, taking a toll on all of us.

By Ruben Alcaraz

This acceleration of unforeseen events goes hand-in-hand with the technological boom societies around the world are experiencing and the emergence of ubiquitous, billion-user information hubs (Google, Amazon, Twitter, Facebook, LinkedIn) that are easily accessed through mobile and many other web-connected devices. These information hubs give the public access to any individual topic, news item or opinion (no matter how obscure) and make that connection almost effortless. Some claim that the 21st century will be known by future generations as the 'Age of Connectivity.'

This age of connectivity brings with it major deviations from historical norms. In the past, major events such as wars, revolutions, or rivalries were extremely public affairs that took years to unravel. Today they look very different: events seem to emerge out of nowhere and spread globally in a matter of hours. The staggering speed of events has rendered previously tried-and-true approaches useless. This has not just been a problem for businesses; even governments have been caught in situations where they were unable to properly identify what was happening and react in time.

This hyper-connectivity has changed the nature of everything. An invisible informational battle is constantly taking place: interconnected technology propagates, branches and breaks down information, regardless of its distance from the truth or its source. Exploring and understanding these digital fields will become as important as mastering marketing, advertising and insights once was.

Before I continue, I think it is important to mention that one of the key benefits of technology is the gift of time. We adopt technology more easily when it makes life easier, simplifies tasks and frees us to focus on things we'd rather be doing. For example, it is conceivable that a person could buy a birthday gift, plan a party, send emails, talk to family in another state, and read the news within a span of 60 minutes. This means that the value of time is not constant; an hour today is worth more than an hour ten years ago.

Understanding that technology and time are inversely related serves as the foundation for what has been happening. Technology is an enabler for communication and has compressed time to such a degree that coordinating events can take minutes. The grace period that once existed between hearing about an event and reacting to it has been drastically reduced, as the Egyptian government found out in 2011. Structures and methods born out of the experiences of past generations are not as effective as they used to be in an age in which information can spread like a virus.

It is understood today that connection is remarkably non-local, meaning that things can start in places well beyond our physical space and imagination. The scary part is that much of the world is not yet connected, so we have not seen the full effect of time compression. Psychologically, this situation creates a constant fear of vulnerability and calls for new ways to navigate a virtual battlefield. To get there, we need to think about, and distinguish between, two very different kinds of systems:

1) Complicated systems are often engineered. A cell phone is a complicated system: its inner workings may be difficult to grasp, but its outcomes can be reliably reproduced and its outputs are fully predictable. It is possible, with enough time and help, for most people to systematically figure out its inner workings and assembly. In other words, complicated things have fixed rules that can be systematically understood by taking them apart and analyzing their details.

2) Complex systems are similar to complicated systems in that they also have many components, but this is where the similarity ends. The behavior of a complex system is unpredictable and can never truly be replicated. Imagine a thunderstorm: we know how thunderstorms start, what their interactions generate and what they sound like, but we cannot predict or control them, and may never be able to. However, there are ways to deal with complexity more gracefully.

A complicated-system approach assumes a linear future based on past history; it makes life easy, but in today's world it creates a false sense of confidence. A complex-system approach, on the other hand, recognizes that everything is in a constant state of change and demands continuous hard work. The latter way of thinking will be crucial in dealing with, or containing the impact of, unexpected events as they continue to accelerate.

 

Monthly Dose of Design: Improve Your Questionnaires with Visual Design

Learn how visual design can improve your surveys in the latest edition of Monthly Dose of Design.


By Emma Galvin & Nicholas Lee

In last month’s Monthly Dose of Design, we identified how to improve your discussion guide with visual design. This month we will focus on how visual design can improve surveys.

Nowadays, most quantitative research is done online. However, this provides researchers with some problems:

  • Online content is consumed in a skimming culture, so it is harder than ever to capture participants' attention
  • The standards for online content presentation – and therefore survey appearance –  are not set by research providers. They are set by mainstream platforms such as Instagram, Facebook, Twitter and the BBC

For online surveys, this means that design is as much about user experience (UX) as it is about using the right scales. Poor UX equals poor participant engagement, which leads to poor data, resulting in poor insights and eventually poor business decisions.

To avoid this chain of events, and create a more engaging online survey experience, use these visual design tips:

1. Know Your Audience & Their Design Preferences

In the era of personalisation, we should not have a generic survey appearance. Engaging tech-savvy millennials will require different visuals than those needed to engage C-level executives. To begin to understand the appropriate visual cues, you will need to get to know your audience and what resonates with them. Do this by looking at any existing research and imagery on the target audience and choosing your questionnaire design elements accordingly.

2. Engage Participants Throughout

Make sure that participant engagement begins on the landing page. The landing page should be an advert for participation: use an enticing image and a benefit statement to get participants into the survey motivated and engaged. Once they are in, clearly signpost progress and questionnaire content, and deliver on the 'advert' you used on the landing page with design cues like icons and appropriate colours. Finally, make sure the participant leaves feeling positive, and treat your closing page like an advert for future research participation.

3. On-Screen Layout

Online surveys have a whole screen to utilise – so make the most of it! Use images to fill the screen and make your page look more engaging. On the screen, make sure your content is well organised. This means using appropriate spacing between questions and answer options and keeping question length consistent.

Gestalt theory can help greatly with this. Gestalt theory is the idea that visual elements communicate more successfully together than they would working separately, which is achieved through characteristics like similarity and proximity. For example, by grouping related questions, you can make the survey more relatable for the participant, help them engage with the bigger picture, and make the questions easier to answer. If you are worried about giving participants the wrong impression through your layout, you can keep less space between related questions but place a thin bounding box around each individual question, or put a faint coloured background (with clear borders between questions) behind each one. This way the viewer can see that the questions are related but still separate.

4. Use Appropriate Typefaces & Colours

Use one legible typeface with a range of weights, e.g. light, regular, semi-bold and bold. This will support your hierarchy of information and allow your audience to easily distinguish what matters most. Copy is central to a successful survey, and the correct typeface is vital in communicating your questions successfully.

A complementary colour palette can bring your questionnaire alive and make it more visually impactful. Remember to use contrasting colours that are easy to read. Try using interesting background imagery or a unique visual pattern; however, make sure your background doesn't interfere with your content. If it does, try putting a white box, at a slightly lower opacity, underneath the text to make it stand out.

What’s next…
Next time we go back into the world of qualitative research and show you how to make qualitative outputs more visually impactful, along with the key design rules you should use when communicating qualitative insights to clients.

I Have A Dream

Category management is not a new idea, but is it something the industry should return to for shopper marketing?


By Dr. Stephen Needel

So began a pretty famous speech, and so begins any number of stories about Category Management 2.0, in which there is once again a call for retailers and manufacturers to work together to deliver what shoppers want. For those of you unfamiliar with CatMan (as those in the know call it, because who can resist a catchy abbreviation), it was the 1990s brainchild of Brian Harris. He created a process by which retailers would have category captains – a major player in the category – who would help guide the retailer to better assortments, shelf layouts, and pricing. The retailers got more profitable categories and, in return, the captains got somewhat preferential treatment when it came time to allocate space or promotion slots, or to delist items.

As I wrote for ESOMAR back in 2007, this all went horribly wrong. The process to do CatMan “correctly” filled large 3-ring binders with pages of forms that nobody ever looked at. Manufacturers created armies of category analysts at major retailers to assist them with the intricacies of the process, which often ended up being free labor for the retailer (planogramers of the world, unite!). The town of Rogers, Arkansas, it is said, was created for this very purpose. I talked with a number of manufacturers for the ESOMAR paper and asked them the very pregnant question, “Are you making any money off this process?” Mostly people would look with eyes downcast and not say much; they either didn’t know or, more often, didn’t want to know.

The concept was broken for two very simple and obvious reasons:

  • What is good for the retailer is not always good for the manufacturer, and vice-versa.
  • Understanding shoppers, which is supposed to underpin the whole CatMan process, is not an easy thing, and few do it well.

ASL has been doing CatMan research for 25 years now. As we’ve repeatedly shown, the ability to produce a shelf assortment and/or a shelf layout that helps both the category and the manufacturer’s brand has been limited. Only 15% of the time have we seen a win-win outcome, in a process where all the outcomes should be win-win. Trust me when I tell you that most of the scenarios we’ve tested have never seen the light of day at a retailer presentation.

RESULTS OF 327 CATEGORY MANAGEMENT TESTS

                        CATEGORY
                        Positive   Neutral   Negative   Total
  BRAND    Positive     15%        18%       7%         40%
           Neutral      1%         38%       6%         44%
           Negative     1%         7%        7%         15%
           Total        18%        63%       19%        100%

 

There are a number of reasons why these attempts to improve the assortment or the shelf layout failed; but mostly, they fell into two camps:

  • We think we understand the shopper but our understanding is incomplete or incorrect. So when we make a change, it’s not aligned with what the shopper wants or needs.
  • We understand what the shopper wants or needs, but we either can’t translate that to the shelf or we translate it incorrectly.

The new calls for category management simply echo the past rationales for the process. CatMan has always been designed to be shopper-centric, it has always been designed for all parties to share relevant information, and it has always been Pollyanna-ish in believing altruism will prevail over self-interest.

At its heart, category management and its poor step-child, shopper marketing, have always been about fact-based selling, which existed well before either came to life. We have yet to meet a retailer who wasn't interested in a better way to merchandise a category. Being a neutral third party, we always get a hearing for our recommendations. Do good shopper research. Design better in-store programs based on that research. Test the programs before going to the retailer. Stop dreaming about unicorns and true category management.

The Insights Revolution: Where Behavioral Science Fits In

How is behavioral science changing the insights industry? Find out below, and join us for IIeX Behavior on November 7-8 in Chicago.


By Alex Hunt & Tom Ewing

It’s no secret to readers of GreenBook that the insights industry is changing. And the exact shape of that change is becoming clearer by the day. What’s not always so obvious is where behavioral science and the latest findings in human psychology fit in. Talk of “System 1 and System 2”, of heuristics, and of measuring the non-conscious has become as common at insights industry events as talk of big data, automation and AI. But these two forces of change have often felt like they’re trains hurtling down quite separate tracks. It wasn’t obvious how these two might be part of the same revolution in our industry.

That is changing, and IIeX Behavior, held in Chicago next week, will showcase exactly how behavioral science and new technologies fit together. The key lies in what behavioral science, as well as the metrics based on it, lets us achieve: more accurate prediction of human behavior.

But before we talk about that, let's recap how the center of gravity is shifting in the insights industry today. As followers of GreenBook know, the most important factor in change isn't technology per se; it's the precision and speed technology can unlock. In a broader business environment, where new competitive challenges often come from more agile and disruptive start-ups, client-side buyers of market research have to move ever faster and make quicker, better decisions in support of their businesses' growth agendas.

Marketers have more data than ever to help them do that, and finding a niche in such a world is the central problem faced by research providers. The traditional, complex, customized ad hoc project – i.e. the blue-chip product full-service agencies have been selling for years – is becoming increasingly rarified. Even tracking, which has been the bread-and-butter of many big research providers, is under substantial cost and timing pressure, with some research buyers dropping tracking programs entirely.

What marketers still need is accurate prediction. They need to know which new product ideas will succeed, which ads are worthy of media spend, which pack and promotional choices will boost sales, and where their brands are headed. Sometimes a launch-and-learn philosophy is tempting, but more often the guiding principle should be test-and-learn: aim for zero waste by accurately predicting which pieces of marketing are going to drive profitable growth rather than throwing stuff at the wall and seeing what sticks.

Because marketers today are operating so fast, in so many markets, and through so many channels, testing needs to happen at scale, and it needs to be both affordable and rapid. Otherwise the advantage you gain from accurate prediction can be offset by slow decision cycles or expensive research. That's where technology advances help researchers, using greater automation to turn results around faster than ever. It's been possible for a while to get next-day or same-day results on testing. At System1 Research we anticipate that the norm for delivery of consumer predictions will become almost instant; to demonstrate this, we tested and published all Super Bowl advertising LIVE earlier this year.

The trade-off for researchers looking to improve speed is a loss of detail. Most research work today includes a lot of legacy questions and "nice to have" data. That has to go. But that doesn't mean all research will be quantitative and without diagnostics. From our conversations with insight buyers, we know there's still a vast hunger for meaning and strategic insight, as well as an urgent need not just to filter and sort new product ideas, communications and brands, but to improve them. If the job of research is to accurately predict which marketing efforts will drive growth, why not go further and improve the work which doesn't quite get there? This requires collaboration, creativity and specialist insight, so it won't be as cheap as testing should be. But it's less expensive and faster than developing a new idea, ad or brand positioning from scratch.

So that’s two niches researchers can fill in a changing marketing world. First, they can provide accurate prediction at scale to guarantee profitable growth. In addition, they can deliver on optimization and creative consultancy to achieve maximum potential and zero waste.

But we still have to answer that initial question. Where does behavioral science fit in?

Simple: It’s the only foundation upon which accurate and predictive research can be done. To predict response to marketing accurately, you have to understand how people make decisions. Wrong assumptions will lead to wrong predictions. Behavioral science is what allows us to understand human decision making. Researchers and marketers need to be guided by behavioral science and psychology when building their predictive models, and they have to apply those lessons when doing consultancy work.

This sometimes means embracing ideas and results that seem counter-intuitive. For instance, when Les Binet and Peter Field published their findings about emotion being the driver of advertising effectiveness in The Long & Short of It, they faced plenty of pushback from marketers worried about message, persuasion and other rational measures they’d become used to monitoring. In the decade since then, almost every copy-testing supplier has embraced emotional measurement – even if for many the core model hasn’t evolved to match the rhetoric! What used to be a dangerous idea has become standard thinking thanks to a better understanding of behavioral science.

That’s why behavioral science matters in the fast-changing research world. Tools and methods which take behavioral science as their foundation will be more accurate in their predictions and more valuable to marketers. Of course, they must also be cost-effective, automatic and able to work at scale.

Naturally, at System1 Research we don't simply feel our own methods do this best; we know it to be true. But we aren't the only insights provider at IIeX Behavior, and competition is especially healthy for this still-emerging discipline. What you'll see at IIeX next week are the best and brightest companies and the most forward-thinking clients helping to cement behavioral science in its rightful place, at the center of the ongoing insights revolution.