1. Research Now
  2. SIS International
  3. ORC International
  4. 20-20ad

The Sad State Of Mobile-First MR: A GRIT Sneak Peek

Suppliers who aren’t leading with a mobile optimized and shorter solution to client requests are doing the client, themselves, and the industry a disservice.



Also new to this wave of GRIT was a series of questions about how GRIT respondents are adapting to mobile. This is an excerpt from the rough draft (the charts will be much prettier in the report!) on the findings from that area of exploration in the study.

First, we asked how many of their surveys are designed for mobile. Respondents were asked to enter a whole number as an open-ended response, and for ease of analysis we have grouped the responses into five buckets.

The results show that we still have a ways to go. 45% of suppliers and 30% of clients indicate that between 75% and 100% of all surveys they deploy are designed for mobile participation, but that leaves well over 50% of all surveys NOT mobile optimized.

A few stark differences stand out between clients and suppliers, perhaps understandably so. If we accept the proposition that buyers of research expect suppliers to drive best-practice adoption for all studies deployed through them, then the higher percentage of suppliers ensuring that the majority of the surveys they field are mobile optimized is good news; however, it is still a minority overall.


The numbers don’t look much better when looking at the total sample (N=1,497): 13% of respondents said none of the surveys they deploy are mobile optimized; 22% said 1–25% are; 15% said 26–50%; 8% said 51–75%; and 42% said 76–100%. The bottom line is that only about 50% of all surveys are designed for mobile. That is progress to be sure, but we as an industry still have much work to do to fit well in a mobile-first world.
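For the analytically inclined, here is a minimal sketch of that bucketing step in Python. The band edges follow the groupings reported above, but the responses are invented for illustration, not GRIT data.

```python
from collections import Counter

def bucket(pct: int) -> str:
    """Map a 0-100 whole-number response to its reporting band."""
    if pct == 0:
        return "0%"
    if pct <= 25:
        return "1-25%"
    if pct <= 50:
        return "26-50%"
    if pct <= 75:
        return "51-75%"
    return "76-100%"

responses = [0, 10, 25, 40, 80, 100, 90, 5, 60, 75]  # hypothetical verbatims
counts = Counter(bucket(r) for r in responses)
for band in ["0%", "1-25%", "26-50%", "51-75%", "76-100%"]:
    share = 100 * counts[band] / len(responses)
    print(f"{band:>8}: {share:.0f}% of respondents")
```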




In the next question we perhaps uncovered the reason why mobile is still not the first thing survey designers focus on: the average length of surveys being fielded.

In general, GRIT respondents split roughly evenly across three groups: surveys of less than 10 minutes, surveys of between 11 and 15 minutes, and surveys of over 16 minutes. To dive a bit deeper, here is a full breakdown of the specific brackets we identified:



Perhaps most surprising is the finding that clients report fielding surveys of less than 5 minutes at a two-to-one ratio compared to suppliers, which is a hopeful sign to be sure, although suppliers seem to be the culprits in conducting longer surveys of between 11 and 20 minutes.



Overall, averaging all responses yields an average survey length of 15 minutes, but as the range of responses clearly shows, over a third of all surveys still run longer than that, which almost by default means they are likely unsuited for a mobile participant.
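If you want to replicate that kind of summary on your own data, here is a toy version of the two statistics; the minute values below are invented for illustration.

```python
# Reported survey lengths in minutes; invented, not GRIT microdata.
lengths = [5, 8, 12, 14, 15, 15, 16, 18, 22, 25]

mean_len = sum(lengths) / len(lengths)                         # 15.0 minutes
share_over = sum(m > mean_len for m in lengths) / len(lengths)

print(f"average length: {mean_len:.0f} minutes")
print(f"share of surveys over the average: {share_over:.0%}")  # 40% here
```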

Finally, we asked GRIT participants what they thought the maximum length of an online survey should be. Importantly, we did not ask specifically what the maximum length should be on a mobile device; that was perhaps our own oversight, and it is possible results might have been different had we stated it explicitly. However, since this question was clustered with the previous questions specifically related to mobile, we do expect that mobile was at least a consideration in the responses. Regardless, the results could perhaps best be summed up as “whatever we answered previously as the average length is the best length,” since the results do in fact closely mirror one another overall, although a higher percentage of research clients felt that under 10 minutes was ideal vs. what is currently being deployed.

Again, when averaging all responses, 15 minutes is the result, which also corresponds to the largest response category.

Considering the myriad studies by both panel and survey software providers, presented in whitepapers, at events, via webinars, etc., that support the idea that the optimal length of a survey in a mobile-first world is less than 10 minutes, it’s encouraging to see that 52% of research clients report that as the ideal, while only 36% of suppliers report a similar goal, with fully another 32% of suppliers stating that between 11 and 15 minutes is the ideal.


Often we hear suppliers state that their inability to migrate to a mobile-first or shorter survey design is due to client demand. Certainly the GRIT data shows that this might be true to some extent, but a large contingent of clients seem to be embracing shorter, mobile optimized surveys (which also likely explains the rapid growth of suppliers such as Google Consumer Surveys).

For this round we also tracked participation by mobile vs. PC, and the results are telling: almost 40% of GRIT respondents participated via a mobile device (phone, tablet, or “phablet”), with the remainder via a desktop/laptop. GRIT was optimized to be device agnostic in terms of survey design, although the length of the survey was still around 15 minutes. Interestingly, the drop-off rate was higher among PC/laptop users, indicating that a “mobile first” design can reduce drop-off and increase respondent engagement.
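For anyone wanting to run the same check on their own fieldwork, here is a sketch of that drop-off comparison; the start and completion counts are hypothetical, not the GRIT figures.

```python
# Hypothetical survey starts and completions, tracked per starting device.
starts = {"mobile": 600, "pc/laptop": 900}
completes = {"mobile": 480, "pc/laptop": 630}

for device in starts:
    drop_off = 1 - completes[device] / starts[device]
    print(f"{device}: drop-off rate {drop_off:.0%}")
# mobile: drop-off rate 20%
# pc/laptop: drop-off rate 30%  <- the pattern GRIT observed: higher PC drop-off
```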

Based on our own sample, it must be pointed out that it is increasingly important to account for a large percentage of mobile device users within any sample, whether B2B or consumer.

Perhaps the key implication here is that suppliers who aren’t leading with a mobile optimized and shorter solution to client requests are doing the client, themselves, and the industry a disservice.


Where Are They Now? Catching Up With Insight Innovation Competition Winners RIWI

RIWI won the North American IIeX Competition in 2013 for its disruptive online data capture methodology, and in this article we discuss their place in the insights industry – successes, growing pains, advice, and other good stuff.

By Gregg Archibald

Welcome to the second installment of a new series that focuses on companies that won the IIeX Competition.  Over the next several months, we will be interviewing these companies to discuss their place in the insights industry – successes, growing pains, advice, and other good stuff.

This installment features Neil Seeman, Founder and CEO of RIWI.  As CEO, Neil leads overall strategy for the company globally, with a focus on healthcare and international security solutions. He is responsible for overall company performance.  RIWI won the North American IIeX Competition in 2013 for its disruptive online data capture methodology.

Tell us, in a few sentences, what your business is all about:

RIWI is a global survey technology and risk measurement company using its proprietary, patented methods to capture a new stream of opinion data in any region of the world. RIWI offers access to true market and citizen opinion data in all regions of the world – including in otherwise hard-to-reach markets. These data can, for example, be descriptive or predictive of emergent attitudinal trends in brand affinity, purchase intent, civil unrest, human rights, or economic, health policy and risk or other indicators.

What has changed for your business since winning the competition? (Listing on CSE: RIWI) 

As of Aug 31, 2015, RIWI (CSE: RIW) was listed on the Canadian Securities Exchange. Listing on the CSE is a natural step for a company like ours. The CSE is a modern exchange that offers all the benefits of the public markets without the cost and burdensome attendant regulation for small companies. RIWI is a global data technology platform designed for rapid scalability and a public listing is a milestone enabling our reach, customer base, partnerships, and corporate professionalism to continue to grow. 

The press release announcing our public listing on August 31, 2015 is here: https://riwi.com/wp-content/uploads/2015/08/RIWI-NR-Aug-27-15-Final.pdf 

Winning the IIeX competition affixed significant brand value to our company, which, in turn, was instrumental in securing attention, revenue-generating contracts, and strategic investments from angel investors in our pre-public company financing. Most important, I feel the IIeX events are self-selecting in terms of the attendees from other companies and clients, who are therefore intent on seeking innovative solutions to short- or long-term data needs outside of the traditional boundaries of market research; this was helpful to me as an Internet and health policy researcher whose background in traditional market research was limited. I see RIWI as part of a growing ‘data ecosystem’ of diverse offerings that recognize that data solutions are often complementary. I credit IIeX with cementing that vision in our strategic path.

Since winning the award, we have expanded our work in the global NGO sector, largely due, we feel, to the global recognition we have earned from the global data community that vouches for our reach into all parts of the world. The recognition from the award directly contributed to our data science in inferential analytics and geo-location that we worked on to support the 2013 and 2014 GRIT Consumer Participation in Research studies (published by Greenbook), which also reviewed the differential typology and respondent profiles of habitual vs. non-habitual survey respondents globally. We have expanded into the risk and security sectors, and, especially warm to my heart, have expanded into the global public health sector; this is pleasing to me since the roots of RIWI lie in pre-commercial work in government-commissioned pandemic surveillance in my former research unit. 

Where do you see your business going in the next couple of years? 

Our focus is expanding our customer base and growing our revenues. We expect revenues will, in 2015, be three times those of 2014 and we believe we have reached the stage where past efforts in development and sales, since 2012, are bearing fruit.

Like other companies in the global data ecosystem, we recognize that long-term partnerships and integration with other data suppliers and global analytics companies and in-market experts are critical to revenue-generating success. We see ourselves as the ‘plug-in’ that can enable any number of different companies in the technology and analytics sectors to expand globally, to reach more diverse and random audiences of non-panel respondents, and to reach voices that would not otherwise be heard. Specific applications include novel data-driven insights into issues ranging from human rights audits to the measurement of new global indicators to supporting research into the design of new policy solutions for human health challenges, such as the needs and gaps in caring for those with mental illness, Alzheimer’s or other diseases and conditions. We are, for example, very proud of a forthcoming academic paper that marks the first time researchers have measured mental health stigma globally and self-reported rates of mental illness in all countries. Although we have enjoyed many academic and expert reviews of our patented data collection methodology, this is a third party expert validation of the global replicability and falsifiability of our proprietary data stream in a prestigious scholarly journal (the Journal of Affective Disorders) and it is a data set of over 1 million people run over 19 months in more than 200 countries. Further, we know that it will help researchers understand baseline data in mental health stigma and surveillance, thus enabling intervention impact assessment.

What has taken you by surprise? 

The large diversity of sectors that has expressed interest in our random global data stream. Further, I’ve been pleasantly surprised by the global nature of our enthusiastic supporters. We now self-identify as a ‘micro-multinational’.

What is the most important thing you’ve learned since winning the competition? 

It’s all about the people and the team; this includes every employee, client, board member, investor, advisor, consultant, mentor and friendly supporter. Our success is positively co-linear with the success of the global data ecosystem.

What advice would you give a company getting started in the insights industry?

Be wary of free and paid advice early on — nobody knows more than you about your vision. That said, I recommend ‘productizing’ your vision before anything else. Only bring people on board who share that vision passionately. Be confident but humble. Admit when you’re wrong and abandon innovations gone awry (credit to Steve Jobs RIP), yet, at the same time, be careful about leaning too heavily on the advice of anyone with impressive-sounding credentials despite the instinctive temptation to do so. This means being wary of advice from industry veterans, or from marketing mavens or self-proclaimed Internet ‘experts’. And it means asking a ton of questions.


‘Tis the Season: What’s Your Holiday Gift Strategy?

A study by the Advertising Specialty Institute found that only 62 percent of businesses surveyed were planning to give gifts to employees, clients, and prospects. Really? Here's a solution for the remaining 38%.



By Jonathan Price

This season means many things when you are a business owner. Year-end reporting and ROI delivery from all your teams, strategic planning for the upcoming New Year, budgets, numbers, staffing and lots of people taking time off for the holidays. Not to mention the infamous holiday office party. The other item on the checklist is holiday gifts: for employees, clients, prospects.

In a study by the Advertising Specialty Institute, only 62 percent of businesses surveyed said they were planning to give gifts to the above audiences. If that sounds low to you, it is. The survey cited closed on October 27, which means the remaining 38 percent just hadn’t made any plans yet for gift giving. Like them, many of us are caught up in the busyness of the season and may have put this task off until the last minute. Thus, we need something that can be delivered quickly and still get our message across.

Let’s take a couple of steps back. Why are we giving the gift in the first place? For employees, it’s easy: to boost morale and say “thank you” for a job well done. After all, without them, your company probably wouldn’t exist. With unemployment rates dropping and the demand for good employees rising, it’s a good idea to reward your employees and keep them around. Think about how hard it is to buy one person just the right gift. How much harder is it to make that connection with 500, 5,000 or even 50,000 employees?

These companies need a quick solution that goes beyond a tchotchke with a company logo slapped on (which will often find its way to the “round file”) or the frozen turkey or popcorn tin, which poses a delivery nightmare. For progressive employers, an instant, digitally delivered reward makes the most sense for ensuring that a diverse, and sometimes far-flung, employee base receives its gift in an effective, efficient manner.

A gift card that is delivered virtually and instantly can have many benefits beyond a simple year-end “thank you” to employees:

  • Delivers seamlessly to the recipient’s platform of choice: desktop, tablet, mobile
  • Fills demand for instant gratification and ease-of-use
  • Fills demand for options and choice – recipients can use the gift for what they want, when they want
  • Allows options for customization and personalization based on company branding
  • Reinforces a company’s brand internally as progressive, savvy and technology driven

In addition, rewards are proven to increase engagement across the board. Sending a virtual gift card to employees gives employers an opportunity to push out internal messages that will actually be consumed. For example, consider including a message from the CEO in the gift card delivery email, providing a recap of the year and a look to the future and taking advantage of a chance for a meaningful touchpoint with employees.

For clients and prospects, a virtually delivered gift card provides many of the same value propositions. It gives you another connection with them, a chance to reinforce your branding and message, and a way to create goodwill. In a competitive marketplace, you should get your company in front of those who matter most at any appropriate opportunity.

So, if nearly 40 percent of you are late to the game here (see above), pushing send on an instant delivery gift card is not just an option that saves time. It can meet your target audience of employees or clients on their own turf by providing mobile options that are instantly redeemable, while conveying your important messages – among many other benefits. They’ll think of you at the holidays and all year long… synergy, all the way around. Season’s greetings!


Robert P. Moran’s Taxonomy of Insights

Robert’s taxonomy presents a powerful framework for thinking about the design and delivery of insights in our new age of information abundance.

By Jeffrey Henning

My absolute favorite presentation from the Marketing Research Association’s Corporate Researcher Conference was “What the Hell is an Insight?” by Robert P. Moran (@robertpmoran) of Brunswick Insight.

He referenced the classic information hierarchy and asked, “What is the research industry’s product?”

DIKW Pyramid

Too often, researchers have thought of themselves as providers of data. “But what do you call a product that is priced by volume?” Robert asked.

A commodity.

So we’ve limited ourselves to something with the value of cotton, or grain, or metal ingots.

Mike Cooke of GfK, at the 2009 ESOMAR Annual Congress, said, “As an industry, we’re often criticized for our lack of insight and an over-reliance on an industrialized view of research.”

But Robert was forgiving of our old business paradigm, arguing that inventing market research in the industrial era made it hard to pattern ourselves after anything but an industrial model. So it is no surprise that in the first era of mass markets, mass advertising, and mass production, researchers would build “data production factories.”

Yet the Industrial Age model doesn’t work in the “Dream Age” (a term coined by Rolf Jensen in The Dream Society). “Market research was born when information was scarce,” Robert said. “Information is now super-abundant.”

Not only is information super-abundant, but complex practices are becoming automated. As Cory Doctorow wrote, “Pick something that’s difficult, complicated, and expensive for people to do, then imagine that thing becoming easy, simple, and inexpensive…. That’s what’s happening today.”

In a world of information abundance, we need to move up the hierarchy to knowledge and wisdom. We need to sift and synthesize the many different sources of data. We need to move beyond research being “something you produce” to research resulting from synthesis.



Robert then loaded us into Peabody and Sherman’s WABAC Machine and took us back to the days of the Library of Alexandria. When the Library was founded, papyrus, native to northern Africa, was the dominant medium for recording information, in the form of scrolls. The Library came to monopolize these, even taking scrolls from all the ships that put into the port of Alexandria. For the Ptolemaic dynasty that ruled Egypt, controlling the data (the scrolls) meant controlling the information, knowledge, and wisdom. It was an ancient instance of the strategy of information scarcity. The Library was a national strategic advantage, with knowledge producing power. The dynasty even placed export controls on papyrus: “If we deny other nations our papyrus, we deny them the ability to encode information. And we win!”

With papyrus no longer exported from Egypt, rivals needed to find other media. This led to Pergamum developing a technology based on animal skins, which became charta pergamena (parchment). Ultimately, this led to an information explosion and, eventually, the book.

If, as an industry, we follow the Library of Alexandria strategy, we will find that it only creates new rivals.  We need to “focus more on scholars and less on scrolls”, Robert said, meaning that we need to move beyond “data encoding” to “information synthesis” and “knowledge extraction.”

Robert said that research “may not be the single worst name for the industry and its product, but it is awful.” It focuses on the activity rather than the benefit and omits our role in the development of research. No wonder there has been a shift to customer insights!

“But, if we sell insights, shouldn’t we have a definition?” Too often the popular definitions today are “I know it when I see it” and the “A-HA moment.” Robert said that these are subjective definitions that immediately discount the value of the new knowledge provided by research.

In Creating Market Insight: How Firms Create Value from Market Understanding, Brian Smith and Paul Raspin argue that an insight has value, rarity, and inimitability, and that the firm must be organized to act on it:

  • Value – “Does this knowledge help the firm respond to market opportunities?”
  • Rarity – “Is this knowledge owned only by the firm and not competitors?”
  • Inimitability – “Is this information difficult for competitors to obtain?”
  • Organization – “Can the firm organize in a way to exploit this knowledge?”

For Robert, this provides a useful framework for identifying an insight, though he would advise against including rarity, since it is impossible to know for sure whether competitors have reached the same insight.

So if an insight is valuable, rare, and inimitable, what types of insights are there?

Here’s Robert’s taxonomy:



So confirmed assumptions are building blocks (if they have a tactical impact) or operating principles (if they have a strategic impact). Discoveries resulting from a systematic process can produce adjustments (think of a concept test of a tactical product line extension) or gamechangers (insight into how to meet a previously unmet need). Finally, unanticipated discoveries are outliers (if tactical) or wildcards (if strategic). The example of a wildcard that came to my mind as Robert presented was 3M’s Arthur Fry [pictured above] realizing that Dr. Spencer Silver’s accidental invention of a weak adhesive could be used as bookmarks or notes (Post-it notes).

This insight taxonomy is a powerful way to think about the types of knowledge produced by our research. The one gap I saw in it, and brought up in the Q&A, was the challenge of “disconfirmed assumptions.”



I admit my names for them are probably too dark: roadblocks (tactical impact) and landmines (strategic impact). But I think disconfirmed assumptions are important, if perilous, insights. Disconfirmed assumptions are the hardest to present. They call into question the validity of the research at all stages (from sampling to interpretation), and they are hard to convey to decision makers. Instead of presenting in the Associated Press inverted-pyramid style, with the most important takeaways first, I start by describing the research methodology and sample sources in detail, then slowly build the case with individual data points, drawing to a conclusion of the overturned assumption.
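Since the extended taxonomy is effectively a grid of discovery type by impact level, here is a minimal sketch encoding it as a lookup table; the key names are my paraphrases of the categories above.

```python
# (how the insight arose, its level of impact) -> label
TAXONOMY = {
    ("confirmed assumption", "tactical"): "building block",
    ("confirmed assumption", "strategic"): "operating principle",
    ("systematic discovery", "tactical"): "adjustment",
    ("systematic discovery", "strategic"): "gamechanger",
    ("unanticipated discovery", "tactical"): "outlier",
    ("unanticipated discovery", "strategic"): "wildcard",
    # the two cells proposed in this article:
    ("disconfirmed assumption", "tactical"): "roadblock",
    ("disconfirmed assumption", "strategic"): "landmine",
}

print(TAXONOMY[("unanticipated discovery", "strategic")])  # -> wildcard
```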

For one recent study, we found that the old paradigm of doing business was far more prevalent and entrenched than our disruptive, rapidly growing client believed. After we cut and re-cut the data three times, the insight still persisted. It was a landmine, and blew up in our face. That’s the challenge of strategic disconfirmation.

On the tactical side, for PR surveys, we often begin by brainstorming great headlines that might come out of the research. From that, we formulate a list of hypotheses, and approach the research design by objectively testing many hypotheses. In this case, the disconfirmed assumption is more of a roadblock, a temporary obstacle, as we look to other hypotheses for our key takeaways from the research.

Robert’s insight taxonomy has changed how I think about presenting findings in my executive summaries, and has helped me realize the different communications challenges that emerge from different types of insights. I’m still pricing survey research projects as a commodity (my bad), but that’s something I need to rethink.

Robert’s taxonomy presents a powerful framework for thinking about the design and delivery of insights in our new age of information abundance.


The 10 Things Researchers Should Be Thankful for this Holiday

Here are 10 things we as researchers should be thankful for this holiday.



By Adam Rossow

When we’re drowning in data or working on the eighth version of a guide, it can be hard to remember how lucky we are to be in the insights field. But now it’s that time of year – time to reflect and appreciate. As members of the research community we’re all grateful for our clients, colleagues, and the opportunities we have to make an impact. But there’s still a lot we take for granted. As a reminder of all that we have going for us, here are 10 things we as researchers should be thankful for this holiday.

  1. We should be thankful that companies have yet to figure out Big Data, which means researchers can still claim to have the secret formula to make it all come together.
  2. We should be thankful that selfies are now socially acceptable, making respondent photo uploads all the rage.
  3. We should be thankful that big brands still don’t know it all and their actions often remind everyone why a little research always helps.
  4. We should be thankful that marketers are always scrambling to figure out how to engage customers on the latest “game-changing social platform”. Yes, marketers, we’re happy to run a study to see what people want from brands on SnapFaceChat8.
  5. We should be thankful that we’re smart enough to compete in the insights space with a Jeopardy-dominating supercomputer from the future.
  6. We should be thankful that many brands think they need to figure out how to incorporate “On Fleek,” “DadBod,” and “Bae” into their lexicon.
  7. We should be thankful that we don’t need those intimidatingly smart neuroscientists to tell us how people really feel anymore. Self-reporting is back and better than ever….
  8. We should be thankful that seemingly every retailer is hopping on the “let’s close on Black Friday” bandwagon. One less actual shopping day, one more day for taking surveys about shopping.
  9. We should be thankful that Gen Z is on the scene, meaning now only 75% of our work week is dedicated to understanding the mystical millennial.
  10. We should be thankful that Data Science is somehow sexy and we’re all a little cooler to be even loosely associated with it.

Happy Thanksgiving from your thankful friends at iModerate.


The Top 20 Emerging Methods In Market Research For 2015: A GRIT Sneak Peek

An advance view of one of our most popular question areas from the forthcoming GRIT Report: the adoption of emerging approaches in the industry.



Editor’s Note: The latest wave of the GRIT Report is in the hands of the designers now and will be published in just a few weeks! However, I’m a big fan of releasing sneak peeks of some of the findings, so today we’re giving you an advance view of one of our most popular question areas: the adoption of emerging approaches in the industry. Ray Poynter wrote the analysis for the report, and here is his take right from the rough draft.  There are some important insights here, especially regarding the continued client-side adoption of some approaches that market research suppliers may be missing out on, so we hope you use this as a comparison point for your own offerings as we head into 2016.


By Ray Poynter

When reviewing the market research approaches and techniques being used or considered we need to keep in mind that the GRIT sample tends to be drawn from people more interested in change and new approaches. This means the data should not be taken as being an audit of the whole research industry; rather the data are an indication of change and rate of change.

Four Categories of Adoption of New Techniques

As the chart below shows, GRIT participants’ usage of techniques produces four categories of adoption: Already Mainstream, Wide Level of Interest, Third Tier, and Niche.



Already Mainstream

This group consists of Mobile Surveys and Online Communities, which, as the trend data shows, has been the picture for a couple of years.

Wide Level of Interest

This group has two elements: the first is the analytics/Big Data group, and the second is the mobile-enabled qual group. Both of these groups score well in terms of ‘In Use’ and ‘Considering’.

Third Tier

This group shows interesting levels of adoption and interest, but has not really broken through. It comprises Eye Tracking, Micro-Surveys, Behavioral Economics, and Research Gamification.


Niche

The remaining items are all clearly niche at the moment. Only a few of the GRIT participants are using them and relatively few are considering them.

The Trends

The table below shows the key data since Q1 2013, i.e. over the last 2.5 years.


% In Use                            Q1-Q2 2013   Q3-Q4 2013   Q1-Q2 2014   Q1-Q2 2015   Q3-Q4 2015
Mobile Surveys                         42%          41%          64%          67%          68%
Online Communities                     45%          49%          56%          59%          50%
Social Media Analytics                 36%          36%          46%          45%          43%
Text Analytics                         32%          33%          40%          38%          38%
Big Data Analytics                      –           31%          32%          31%          34%
Mobile Qualitative                     24%          22%          37%          43%          34%
Webcam-Based Interviews                26%          27%          34%          38%          33%
Mobile Ethnography                     20%          21%          30%          35%          31%
Eye Tracking                           22%          26%          34%          28%          28%
Micro-surveys                           –           19%          25%          30%          25%
Behavioral Economics Models             –            –           25%          27%          21%
Research Gamification                  15%          16%          23%          21%          20%
Facial Analysis                         9%          13%          18%          18%          18%
Prediction Markets                     17%          17%          19%          21%          17%
Neuromarketing                          9%          11%          13%          14%          15%
Crowdsourcing                          13%          14%          17%          19%          12%
Virtual Environments/VR                17%          14%          17%          15%          10%
Biometric Response                      7%           8%          13%          10%          10%
IoT/Sensor based Data Collection        –            –           12%          10%           9%
Wearables Based Research                –            –            7%           7%           8%
Sensor/Usage/Telemetry                  –            –            –            –            7%

(– = no figure available for that wave)


The key change over the last 2.5 years has been the arrival (in Q1 2014) of Mobile Surveys as the most widely adopted new technique.

The most recent data suggest that people are beginning to specialize, picking the techniques which best suit them. For example, the average number of techniques mentioned as ‘In Use’ in Q1/2 of this year was 5.8; by Q3/4 this had fallen to 5.2.
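For readers who track this themselves, here is a sketch of how that per-respondent average can be computed; the respondent data below is invented for illustration.

```python
def avg_in_use(wave):
    """wave: one set of 'In Use' techniques per respondent."""
    return sum(len(techs) for techs in wave) / len(wave)

# Two invented respondents; a real GRIT wave has hundreds.
wave = [
    {"Mobile Surveys", "Online Communities", "Text Analytics",
     "Eye Tracking", "Mobile Qualitative", "Big Data Analytics"},
    {"Mobile Surveys", "Online Communities", "Social Media Analytics",
     "Webcam-Based Interviews"},
]
print(f"average techniques in use: {avg_in_use(wave):.1f}")  # 5.0 here
```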

The data do not show any sign that the newest or ‘hottest’ techniques, for example Wearables or the Internet of Things, are gaining widespread traction yet.

Users and Providers are Not the Same

When we look at buyers/users and sellers/providers of research, we see a lot of similarity and some interesting differences.


% In Use                         Buyer/User   Provider   Gap
Mobile Surveys                       54           72      -18
Online Communities                   46           50       -5
Social Media Analytics               53           41       12
Text Analytics                       38           38        0
Mobile Qualitative                   26           36      -10
Big Data Analytics                   40           32        8
Webcam-based Interviews              27           34       -8
Mobile Ethnography                   25           33       -8
Eye Tracking                         28           28        0
Micro-surveys                        17           27      -10
Behavioral Economics Models          17           23       -6
Research Gamification                12           21       -9
Facial Analysis                      14           19       -5
Prediction Markets                   22           16        6
Neuromarketing                       15           15        0
Crowdsourcing                        16           11        5
Virtual Environments/VR               9           10       -1
Biometric Response                   11           10        1
Internet Of Things Data              10            8        2
Wearables Based Research              5            9       -4
Sensor/Usage/Telemetry Data           6            7       -1

Base: Buyer/User=212, Provider/Vendor=810
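For clarity, here is how the Gap column is derived (buyer minus provider), sketched on a few rows copied from the table; a positive gap means buyers report higher usage. (The published gaps appear to be computed before rounding, which is why a couple of rows differ by a point.)

```python
# Buyer/user and provider 'In Use' percentages for a few rows of the table.
in_use = {
    "Mobile Surveys": (54, 72),
    "Social Media Analytics": (53, 41),
    "Big Data Analytics": (40, 32),
    "Prediction Markets": (22, 16),
}

for technique, (buyer, provider) in in_use.items():
    gap = buyer - provider  # positive: buyers ahead of providers
    side = "buyers ahead" if gap > 0 else "providers ahead"
    print(f"{technique}: gap {gap:+d} ({side})")
```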

The rows with a negative gap are those where the In Use figures are higher for the providers of research. These may reflect the greater awareness that providers have about the techniques being used, for example an awareness that mobile is being used or that research gamification has been employed to optimize the research design.

The rows with a positive gap are those where the users/buyers of research report higher In Use figures. These cases may reflect situations where clients are not buying these services from traditional market research sources, for example Social Media and Big Data Analytics, although interestingly Prediction Markets and Crowdsourcing, which have vibrant and growing suppliers outside of mainstream research, also show positive gaps.

The implication here may be that research suppliers are missing out on both new revenue opportunities and the chance to serve a larger client base by not offering these capabilities credibly.


Facebook: Your New Marketing Research Consultant?

Topic Data, a new product/service from Facebook, offers marketers access to what users are saying about brands, products, events and activities.



By Doug Pruden and Terry Vavra

As more and more organizations are ‘following’ their brands in Facebook posts to gain insights about their products and customers, the social media giant has recently announced a new service to help marketers learn even more.  Topic Data, a new product/service from Facebook, offers marketers access to what users are saying about brands, products, events and activities.  A likely impetus for the introduction was Twitter’s reported revenues of $47 million in the fourth quarter of 2014 from similar “data licensing and other information services”.

The Allure of the Social Media

So what value is Facebook offering? We’ve always believed ‘observational data collection’ was underutilized in marketing practice. Too often companies rush to conduct an opinion survey when answers to their questions may already be available through observation of behaviors, their customer database, comments, or posts. The potential is attractive, but there are some pitfalls as well. As with any new research tool, users should gain a sound understanding of the technique’s limitations and have a plan for how to best capitalize on its exclusive strengths.

Some Unique Opportunities

Here are some suggestions on possible uses for research implemented via social media. We draw on a Social Media Today article by Ray Nelson.

  1. Tracking Trends  Periodic searches of social media for brands or product categories can provide a wonderful longitudinal reading of the strengths and weaknesses of a product, the salience of a brand or product category, etc.
  2. Understanding the Language of Your Customers  Within posts about a brand or product may be wonderfully descriptive information about how customers describe a product and/or the way they use it.  Such descriptions help create a rich lexicon for the marketer and its promotional agencies.  (Focus groups are often assembled to ‘explore the terminology’ of a brand.  However, a social media search is likely more representative of actual, geographically diverse terms.)
  3. The Real-Time Aspects of Social Media Promise Faster Research  Obviously social media channels with established users can offer accelerated speed in fielding a research study, but there’ll still be the need to strategically plan the information project and the time to properly analyze the results.
  4. Discover Emerging Trends and Insights   By ‘observing’ current social media behavior (posts, etc.), marketers can learn much without ever having created a ‘questionnaire’.  Questionnaires, because they depend upon language to pose questions, unavoidably influence the responses they collect.  Eavesdropping on ‘native’ and ongoing conversations escapes this problem.
  5. Using Social Media May Reduce Costs  Social media channels and their users already exist and are in operation, eliminating the need for sampling, recruitment, gaining participation, and even remuneration.  Consequently research conducted with social media can be more cost efficient.
  6. Social Media Can Extend the Reach of Research  Consider broadening the representation of your next research project by using a ‘cell of social media respondents’.  These respondents will increase your total sample size and may add informational value as well.

Strong Potential, But for Specific Uses

In our previous discussion of the current fascination with pull research, we warned against using social media as a primary research tool.  We’re not reversing our position.  We firmly believe that any process that allows research participants to ‘self-select’ violates the discipline of good sampling and therefore produces findings that aren’t projectable to any extended group of customers.  However, we’re also all for using the most powerful and penetrating methods available to complement our current repertoire of techniques.  It appears to us that social media offers a unique approach to matters like issue discovery, exploration of the terms and language pertinent to a product category, and tracking, to name just a few.  Social media based ‘research’ shouldn’t be touted as a replacement for current methods.  But we encourage you to consider some of these possible uses as adjuncts to your current marketing research processes.


Goodbye Big Data, Hello Thick Data

Big Data is here to stay, but it’s only half the job - Thick Data fills the gaps and enables truly people-shaped or human-centred development and visceral business.



By Stephen Cribbett

I hate to start this article talking about Big Data because it’s been well-covered, some might even say exhausted, but in doing so you’ll get a better sense of where I’m going.

So let me start by just saying that Big Data has been around the block and on stage more than most hot topics over the last five years. I see Big Data as data that is delivered with a velocity and veracity hitherto unseen. It comprises data sets large and small, from all areas of a business, such as customer loyalty programs, customer service centers, online transactions, social media, and so on. The promise of Big Data was that it offered the potential to better understand and anticipate consumers’ behavior and turn that into competitive advantage. You get the idea, right?

From all the conversations I have with brands and the many talks I attend, few brands have capitalized on this promise, because they ended up drowning in the immense volume of data and its various formats, and also didn’t do the job of turning data into actionable insight the business could work with. (Note: I plan to share my ideas on a new role and activity called Data Scouting in a later post, so keep watching!)

There are brands and organisations out there that are getting it and doing some spectacular work using Big Data and real-time mission rooms that help them join the dots and act on the learnings fast, Unilever being one such example.

From the outside looking in, one might think that all brands, like Unilever, have impressive Big Data dashboards and mission control rooms where things happen in an instant, managed centrally, gracefully, and with great agility. Not true. The Head of Customer Insight at BT herself confessed that this just isn’t what it looks like on the inside, and that in reality her team was paddling hard to stay afloat on the sea of data.

So here’s where this gets interesting. Big Data is what it is, and the majority of brands and organisations find it to be a bit of a sword of Damocles. Due to the sheer size and volume of these data sets, they are largely structured and quantitative. Big Data can, I believe, still help organisations understand new trends, behaviors, and preferences, but like most other quant data, it leaves the organisation bereft of any knowledge of why its customers are doing what they do.

Thick Data plugs the gap between what organisations have and what they need to be more instinctive and to truly understand how people feel and the emotions that underpin the customer experience. For as long as I can remember, designers have been working with Thick Data in the creation of human-centered products and services. Now is the time for marketers to take advice from these practices and be more people-shaped, more emotional and empathetic in their approach. Engaging with Thick Data enables organisations to develop real, positive relationships with people (their targets) and stop thinking about them as numbers or respondents. It also brings in the context of their complex lives, which, as you well know, plays a significant role in what they do and why. Take, for example, the act of buying a new car. It’s a long process that involves research and interaction across many channels, online and offline. But there are many actors influencing the decision, like friends, family, and of course the female head of the family. Without this insight, car brands and showrooms would miss the target and continue to focus on the male of the species.

By harnessing qualitative research techniques and tools, you can build organisational empathy for and instinct about your customers. You’ll start to think like them, sympathize with them, and be able to turn this into valuable use cases that lead to game-changing products and services that people want and that solve their problems. Thick Data thinking must be embedded within the organisation, not just handled by an external agency, since your employees are the conduits, the activators, and the front line. This is important and must be considered carefully, perhaps using customer closeness programs where your teams get to meet people regularly to share and understand the issues they face first-hand.

Big Data is here to stay, and we’re getting better at making use and sense of it. But it’s only half the job – Thick Data fills the gaps and enables truly people-shaped or human-centered development and visceral business.


A Debate Between Survey Length and Data Quality

The next time you’re thinking of fitting an existing survey to a mobile experience, try starting fresh with a mobile-first approach.



By Zontziry Johnson

The setup

Here’s the scenario for you: a new panel has been identified that has high-quality, pre-qualified respondents for a survey you have fielded a few times in the past using other panels. The original survey hovers near the 30-minute mark (taken online), and because it has been fielded a few times already, a number of stakeholders use the data to inform various decisions and efforts and are interested in keeping trending intact. The idea is that the survey should be shortened to about a fifth of its original size using this new panel, so it’s up to you to trim the survey. As a side note, this particular panel is comprised of heavy mobile users; previous surveys done with this panel have shown that the majority take the surveys via mobile devices.

The issues

The first issue is that the panelist information being used to validate responses can only be matched to responses after the survey results are back, meaning there is a risk of ending up with a smaller sample size than desired by the time the data has been cleaned.

The second issue is that the panelist information being used doesn’t contain all of the information needed, so at least some of the questions used to determine how to pipe respondents to the rest of the survey (to ask about product usage) need to remain in the survey.

The third issue is that the survey is being used for multiple objectives. While the objectives have some overlap, it’s not enough to mean that we can measure along both with the same set of questions. Instead, we need to find a way to add the minimum number of questions possible to the core set in order to achieve both objectives with this single study.

The give and take

Ultimately, this combination of issues makes for a very difficult time trying to trim a survey to the desired length. At first pass, we trimmed only ten minutes from the survey. Between the fact that most of the questions were matrix questions and the number of “must-haves” being included, it grew more and more difficult to see where questions could be cut. Finally, it took multiple discussions about trimming the number of objectives behind the study, and all interested parties sitting together to really ask, “Do we actually need this in the study?” for each question, to get us down to a study roughly one-third the size of the original.

The debate between length and data accuracy

This exercise caused me to reflect quite a bit on the debate between the length of the survey and the data accuracy that can be gained from a super-short survey.

For this particular scenario, the desired end-state for the survey was that the survey was no longer than five minutes. I get it: our attention spans are shrinking dramatically (such as seen in Canada, per a Microsoft study), response rates are getting more and more difficult to achieve, and so the shorter the survey, the higher the likelihood of achieving the desired response rates. But I’m not entirely convinced that 5-minute studies can meet the same needs that longer studies achieve.

Please note: I am not advocating for a 30-minute online survey.

Instead, I’m calling for a need to examine this type of exercise, and the call for ever-shorter surveys, from a few different angles. First, how rigorous do you need to be with respondent and data quality (before applying data cleaning processes)? Depending on the panel being used, you may need more respondent qualification questions up front. For example, say you want to get opinions from doctors: if you field the survey with a medical panel full of professionals involved in the medical industry, you will need something more than a quick “Are you a doctor?” to be certain you’re getting responses from the group you need. And while high-quality panels can help (i.e., a panel that is comprised only of doctors to begin with), some surveys may still need to home in on the desired audience (are they family practice physicians or podiatrists?).

Second, what are the actual data needs? Sure, there are many, many questions that would be “great to know.” But when faced with needing to make the most of the short time you have with your respondents, you need to stick with what you need to know and save the rest for another study. The reality, though, is that for surveys that are routinely fielded, the list of items we need to know inevitably gets longer and longer as the group of stakeholders gets wider and wider.

Third, while response rates might be fantastic with five-minute studies, when dealing with studies that need an extra level of rigor around the respondent qualification process, I think expanding to 10 minutes to increase confidence in the data and reduce the amount of data that has to be discarded is just fine. Ultimately, it can come down to this: do we create a five-minute study without a rigorous respondent qualification process that results in only 200 of 500 responses being usable, or a ten-minute study with a rigorous respondent qualification process that results in 200 of 200 responses being usable?
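To make that trade-off concrete, here is a back-of-envelope calculation; the cost-per-interview figures are invented assumptions, not quoted rates.

```python
def cost_per_usable(completes, usable, cpi):
    """Total field cost divided by the number of usable responses."""
    return completes * cpi / usable

# Invented cost-per-interview figures: $2 for a 5-minute study, $4 for 10.
five_min = cost_per_usable(completes=500, usable=200, cpi=2.00)
ten_min = cost_per_usable(completes=200, usable=200, cpi=4.00)

print(f"5-minute study:  ${five_min:.2f} per usable response")   # $5.00
print(f"10-minute study: ${ten_min:.2f} per usable response")    # $4.00
```

Under these assumptions, the longer study with rigorous qualification is actually cheaper per usable response, before even counting the analytic cost of the discarded data.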

Why mobile-first is still better

The scenario I’ve described involves trimming an existing study instead of starting from scratch. I know that’s a path that will be taken often, especially for existing studies with existing stakeholders. But I still think that, whenever possible, it’s better to take a mobile-first approach rather than an adjust-it-for-mobile approach. Mobile-first design allows you to start with the information already available about panelists, which can pre-qualify them for the study; to create questions that are mobile-friendly from the beginning (instead of trying to figure out how to make that long, wide matrix into a more mobile-friendly question); and to naturally keep the study focused on a single objective, knowing that drop-out rates increase quickly as the length of the survey increases. So, the next time you’re thinking of fitting an existing survey to a mobile experience, try starting fresh with a mobile-first approach.


Jeffrey Henning’s #MRX Top 10: The Sharing Economy & The Seat at the Table

Of the 18,729 unique links shared on the Twitter #MRX hashtag last week, here are 10 of the most retweeted...


Of the 18,729 unique links shared on the Twitter #MRX hashtag last week, here are 10 of the most retweeted…

  • Evaluation highlights areas for improvement in long-term conditions care – On behalf of the UK’s National Health Service, the Ipsos Ethnography Centre of Excellence conducted 36 ethnographic interviews with people suffering from long-term conditions. Such patients report that the healthcare system is often not aligned with the complex needs caused by chronic ailments.
  • On the job – Matt Valle of GfK, writing for the American Marketing Association (members-only post), provides 8 reasons why market research makes for a great career.
  • CASRO & MRA to merge – In a joint statement, CASRO (historically a vendor-oriented association of survey-research organizations) and the MRA (a professional society with a mix of corporate researchers and research vendors) have announced their intention to explore a merger and welcome feedback from their respective memberships.
  • Marketing seat at the table – The Marketoonist pokes fun at the desire of marketing (and, by extension, marketing research) to have “a seat at the table” of C-level decision makers.
  • Want to win $25,000? Submissions & voting are open for the Insight Innovation Competition at IIEX Europe 2016! – The next Insight Innovation Competition is a chance for organizations with new research offerings to win publicity and additional capital.
  • Video creative in a digital world – Millward Brown’s AdReaction study looks at how multiscreen users across 42 countries watch video. Even if you’re not interested in the slow fade of live TV to static, this report is worth checking out just for its interactive infographic.
  • Success lies in collaboration, not competition – Writing for Research, Bronwen Morgan recaps a presentation from the Customers Exposed conference, in which Alison Camps of Quadrangle discusses the early adoption of the sharing economy and its impact on research.
  • Caring about sharing – Jack Miles of Northstar looks at entrepreneurs in the sharing economy, fusing the traditional needs of a consumer with the need to promote their microbusinesses. Instead of B2B or B2C, Jack calls this new sector B&C, Business and Consumer.
  • Embracing the future – NewMR is offering a great webinar November 18th, with Sue York presenting “Creating a Personal Social Media Brand” (I saw her present this in Sydney and highly recommend it), Gaelle Bertrand discussing “Strategic Social Media Listening,” and Martina Olbertova looking at “Brand Curation.”
  • Getting valid results from surveys: Meet the Survey Octopus – Caroline Jarrett uses the many arms of an octopus to discuss improving your survey research projects.


Note: This list is ordered by the relative measure of each link’s influence in the first week it debuted in the weekly Top 5. A link’s influence is a tally of the influence of each Twitter user who shared the link and tagged it #MRX, ignoring retweets from closely related accounts. Only links with a research angle are considered.