
Change In Marketing Will Never Be As Slow As It Is Today

Is MarTech improving the CMO’s performance? If so, what is the nature of the improvement: tactical or strategic?


 

By Peter Orban

MarTech entered the marketing consciousness with a bang when Gartner predicted that – in a span of five years, by 2017 – the CMO would spend more on IT than the CIO. Interestingly, the average tenure of the CMO has doubled in the past six years, despite increased expectations of marketing. Is MarTech improving the CMO’s performance? If so, what is the nature of the improvement: tactical or strategic?

I was pondering these and similar questions while attending the second MarTech conference in early April, chaired by Scott Brinker, the creator of the MarTech Lumascape. Based on the presentations and conversations with participants and exhibitors alike, here are a few observations.

  • The big change in how marketing operates and fits into the enterprise is not coming – it is here.
  • The foundation of the emerging new marketing function is a combination of internal & external data.
  • Sitting on top of the data is the “decisioning layer,” the intelligence which – combining deep user/consumer insights with business logic and machine learning – provides direction.
  • Finally comes the “execution layer,” which carries out the interaction with customers and captures the data generated in the process, feeding it back to the first layer. (A purely illustrative sketch of this loop follows this list.)
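To make the three layers concrete, here is a purely illustrative sketch of that loop in Python; all names, rules, and data are invented for the example and do not represent any particular vendor’s architecture.

```python
# Purely illustrative: data layer -> decisioning layer -> execution layer -> back to data.

def decisioning_layer(profile: dict) -> str:
    """Combine a (toy) consumer insight with (toy) business logic to pick an action."""
    if profile.get("recent_visits", 0) > 3 and not profile.get("purchased", False):
        return "send_offer"
    return "do_nothing"

def execution_layer(action: str, profile: dict) -> dict:
    """Carry out the interaction with the customer and capture the resulting data."""
    responded = action == "send_offer" and profile.get("recent_visits", 0) > 5
    return {"last_action": action, "responded": responded}

# Data layer: internal and external data combined into a single customer profile.
customer = {"recent_visits": 6, "purchased": False}

result = execution_layer(decisioning_layer(customer), customer)
customer.update(result)   # feed the interaction data back into the data layer
print(customer)
```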

David Raab’s presentation about Customer Data Platforms illuminated the new architecture. Corey Craig of HP gave a great example of combining business logic with an understanding of the user journey. Tony Ralph of Netflix shared a story about building its own execution layer when the ones for sale were not good enough.

  • To successfully implement the emerging function you cannot do it piecemeal: you need to adopt systems thinking covering structure, processes, and people at the same time.
  • A new organizational blueprint is emerging, incorporating Marketing, IT, and even Sales. It is not a new silo replacing old silos, but a flexible and agile organizational “quicksand,” adapting as necessary, frequently in real time.

Laura Ramos, Cynthia Gumbert, and others spoke about a merger between Marketing & IT. Jill Rowley had similar thoughts but related to Marketing & Sales, arguing that sales is becoming more social, and that marketing is capable not just of delivering a lead but also of converting it. Jeff Cram brought in customer service and the principles of service design to integrate thinking around people, content, processes, and platforms when designing customer experiences. In any case, most in the room thought that – to paraphrase David Packard – IT is too important to leave to IT.

  • With the death of “the campaign,” processes also need to become flexible and able to handle exceptions to deliver an “uninterrupted experience” – just like a computer program.

Advocating Lean and Agile approaches, Jeff Gothelf (Neo.com) underscored the need to combine customer-centric experimentation, data-informed iteration, and humility. Isaac Wyatt pointed out the increasing similarity between creating a marketing program and creating a computer program. According to Pat Spenner, traditional marketing is like an orchestra (or waterfall), while the new one is more like a jazz band (agile).

  • The new org will need new frameworks, tools, and trained marketing technologists. The road there runs through culturally fitting experiments, pointing to the many variables in the path to becoming a marketing technologist and specifying the role of the new MarTech department.

Joseph Kurian (Aetna) and Saad Hameed (LinkedIn) described their respective journeys of identifying or ‘creating’ the hybrid talent necessary for their operations.

However, the other big change – a tectonic shift in consumer/customer behavior – was not nearly as well represented on the agenda. Granted, most speakers assumed as a starting point that consumer and user intent is well understood, that signal has been separated from noise, and that the ingredients of user experience have been successfully isolated and attributed. But only a few actually dissected the benefits of new insights and a deeper understanding of consumer behavior.

Perhaps because of their relative scarcity, the few “outside-in” sessions were eye-opening.

  • With the right consumer insights, MarTech can effectively extend the business model of the organization into the digital realm. Inevitably, the return on technology will start to diminish at some point, but the return on creativity – rooted in ever-improving consumer insights – will not. (Gerry Murray, IDC)

Perhaps it is not surprising that venture investors interviewed on the closing panel agreed that future investments will target the “decisioning layer,” while the other layers are more likely to experience consolidation.

This tells me that the most exciting days of MarTech are yet to come.


Why I Don’t Miss Angry MR Client

Fortunately, Angry MR Client, Angry MR Respondent, and Angry MR Vendor seem to have faded away.  Unfortunately, too often the complaints vendors and clients have about “the other side” are still there.  Don’t get angry – get better.  Or buy a saddle (and you’ll have to read to the end to know what that means).


 

By Ron Sellers

You know what I don’t miss?  The multiple blog posts from Angry MR Client and Angry MR Respondent that hit the industry a couple of years ago.  There were also a few posts by those claiming to be an Angry MR Vendor, along with plenty of angry vendor rebuttals to the other angry folks.  Remember those?

Why was everyone so angry?  Are they still in a rage, or have they simmered down to being mildly annoyed or maybe just dyspeptic?

While these specific Angry bloggers seem to have stepped away from the keyboard, there are still a lot of complaints vendors have about clients, and clients about vendors.  Too often, these complaints are directed at an entire class of researchers or at the industry as a whole.

Having been both a vendor and a client, I understand many of the complaints from each perspective – shoddy service from vendors, repeated abuse of respondents, unreasonable client demands, payment problems, unhelpful reporting, stale methodological approaches, awful sales attempts, micromanagement, lack of project management, etc.  In fact, I’ve complained about many of these myself, either here in the Greenbook blog or in informal gripe sessions with other researchers.

I get the desire for our industry to be better, but I just don’t get the anger directed so often at an entire sector of our industry, or at the tendency to lump all clients or all vendors together.  Too many times, sentences start with “What vendors fail to do is…” or “What clients don’t understand is…”

Many times, I’ve been a survey respondent, and I’ve experienced the horrors of what’s out there masquerading as research.  But I don’t get angry – I just refuse to waste my time with poorly designed questionnaires.  I see no reason to get angry; instead, I just walk away and let the researchers get angry at the low completion rate and high abandonment rate on their shoddy surveys (if they even know or care what those problems are).

I’ve been badly treated by clients.  I’ve had people fail to pay me on time (or at all – to the tune of $40,000 in one case).  I’ve lost a $30,000 project solely because someone underbid me by $100.  I’ve had a client ask me to falsify data.  I’ve had a client try to frame me for a major mistake she made (fortunately I had a copy of her mistake in her own handwriting).  I’ve had a client scream at me because he never provided approval on the questionnaire and I would not field it without his approval (leading to one of my all-time favorite lines from a client:  “I don’t care if it’s right!  I need data!  Just get me data!”).

And certainly I got angry.  But I only got angry at those people.  Not even at their companies, because there were other very fine people in those firms.  Just at those people.  I’m not angry at clients in general.  I’m not expecting the worst from every client with whom I work, and I’m happy to say I truly enjoy working with most of my clients.

I also learned early in my career that anger just isn’t worth it.  My boss at that time (who was the owner of the company) heard about the client screaming at me.  Her response taught me a lot about how to conduct my business:  “Do your best to finish up this project and once it’s done, don’t even accept any phone calls from him in the future.  I will not have my employees treated that way.  We don’t need his business that badly.  I refuse to work with him again.”

This was why, a decade later, I fired my biggest client.  Well, I didn’t specifically tell them not to contact me any more – I just stopped asking for their business and instead concentrated on finding replacement business.  They sort of quietly went away to inflict torture on other vendors.

I got tired of the fact that they rotated people in and out of the research department so often that I never worked with the same person twice and had no chance to build relationships.  I got tired of their ridiculous demands (like me begging them for two weeks to allow me to over-recruit a project, then having them demand the night before the focus groups that we add more recruits; or having their people wake me up early on a Sunday morning to discuss something insignificant).  I got tired of their nickel-and-dime approach to work, like the fact that they refused to reimburse vendors for lunch in their travel expenses (because if I were in my Phoenix office rather than on the road, I’d be going out to lunch anyway, so I could darn well pay for my own lunch in Atlanta or Detroit just like I would in Phoenix).  Toward the end, one of their analysts confided to me that one-third of the RFPs they sent out would come back marked “declined to bid.”  Seems that many others had the same perspectives I did.

But other than swapping funny war stories with other researchers, I’m not angry at that client, nor at clients in general.  Rather than getting angry, I got better.  I got better at finding people I could respect and who would respect me, and I concentrated on doing my best to serve them.

I’ve also used scores of vendors over the years, and found some of them to be so incompetent as to boggle my mind.  Like the project that was supposed to go into the field on a Thursday night, but I couldn’t get through to the company at all Friday morning for an update.  Right before noon, someone finally picked up the phone, and told me that his company had just bought the vendor I had contracted with.  They had fired the entire staff (giving them one hour to clean out their desks), and he had taken over.  When I asked about my project, his response was, “We don’t have time to do it, and even if we did, they underbid the project, so we would charge you three times the amount for it.”  And in those days of printed phone lists, he had no idea where my one copy of the list was, nor when he could be bothered to return it.

Or how about the company that was recruiting clergy for my focus groups.  When I asked them to send me a list of local churches, moments later I was the surprised recipient of a list of every Church’s Fried Chicken restaurant in their market.

Or maybe the three different field vendors I’ve had who have utterly falsified data they tried to give me.  One simply made up quantitative surveys and tried to pass them off to me as completed interviews; when I started questioning some of the oddities a junior staff member admitted their attempted swindle.  Two were qualitative recruiters who didn’t want to face up to the fact that they weren’t getting recruits, so they plied me with fictitious reports until the day before the groups, when they finally admitted they had almost no recruits.  Yes, I was very angry – even to the point of getting one person fired for the ruse.  But I don’t lay the blame at the feet of all vendors for these misdeeds.

If you simply cannot find good vendors or good clients, maybe you need to reconsider your approach.  For difficult clients, consider charging them more (at least to be compensated for your misery), or standing up to them (in a nice way) and explaining why you are not going to work all weekend because they forgot to give you something until Friday at 4:59 p.m.  If it’s bad enough, maybe you need to find other clients.

If you simply do not have good vendors who can meet your needs, maybe you need to look harder for new vendors, or do a better job of vetting them and investigating their work before you trust them with a project.  If your vendors are consistently underwhelming you with their work, why do you continue to use those vendors?

Or maybe you need to reconsider your expectations.  I would love to find a car that has 500 horsepower and gets 100 miles to the gallon, but I’m not going to curse all car companies because no one is giving me what I want.

Constant complaints about how vendors are incompetent remind me of the single guy who gripes that there are no good women available – but his definition of “good women” is that they are rich, gorgeous, compliant, and willing to devote themselves entirely to his needs.  (Ironically, this also is usually the guy who hasn’t come within ten yards of a stick of deodorant in days and who thinks a classy date is buying the name-brand pork rinds instead of the store brand for when the two of them watch Married with Children reruns.)

Or maybe you need to reconsider how you work with vendors.  Vendors can’t write a strategically meaningful report if you refuse to tell them what your strategic needs are, or you make them do their work in a vacuum.  Vendors are unlikely to move heaven and earth to meet your deadlines if they know you sat on the RFP for two weeks, or that you’re likely to pay them six weeks late.  Vendors are not going to wrack their brains to come up with innovative ways of getting you what you need if you’ve previously used their ideas but given the actual work to someone less expensive, or if you’re constantly micromanaging their work (or taking credit for it).  Vendors probably won’t absorb unexpected costs or gladly do extra work if you make a habit of demanding they reduce their bid by 20%, or requiring line-item bids so you can question every expense individually.

I have been extremely angry at individual clients and vendors at different times, and I have no doubt I have made some clients and vendors angry at me over the course of my career.  I’ve messed up royally a few times (although I also try to acknowledge and correct the mistakes and make things right with the affected party).  But I don’t see any of this as a reason to feel there are no good clients or no good vendors, or that our industry is a morass of feeblemindedness, group think, and incompetence.  There are plenty of bad clients and vendors in our world, but also plenty of very, very good ones.  That’s why I have a variety of clients I’ve worked with for multiple decades:  we both make a habit of working towards a great partnership where everybody benefits.

I received a valuable piece of wisdom years ago from my pastor, who said this:  “If one person calls you a horse’s behind, don’t worry about it.  If two people call you a horse’s behind, take a good hard look in the mirror.  If three people call you a horse’s behind…buy a saddle.”

If your vendors inevitably fail your expectations or your clients generally make your life miserable, don’t get angry, get better.  Get better at finding good people to partner with, and get better at giving them what they need to succeed.

Or take a good hard look at the price of saddles these days.


The 2nd Edition Of the GRIT Consumer Participation in Research (CPR) Report Is Available

Now in its second year, the GRIT Consumer Participation in Research (CPR) report is our effort to answer the who, what, when, where, and why of global consumer participation.


Respondents are the lifeblood of market research. Whether it’s qual or quant, surveys or communities, neuromarketing or ‘Big Data’ and everything in between, knowing how to reach, engage, and understand people is the very bedrock of insights.

In our interconnected world, achieving that goal is in some ways easier, and in many more ways more difficult. Until now, little data have existed to help researchers understand this basic question: how do we get consumers to engage with us and what do those folks look like?

Now in its second year, the GRIT Consumer Participation in Research (CPR) report is our effort to answer the who, what, when, where, and why of global consumer participation.

VIEW GRIT CPR REPORT ONLINE »

The report includes the most up-to-date data in the world on the profiles of fresh vs. frequent responders. It answers questions such as:


  • Are “Frequent Responders” categorically different from “Fresh Responders”, and, if so, in what ways? Does this matter? Why?
  • Is the difference significant enough that it should be of concern, or be of strategic benefit, to different stakeholders in the research process?
  • Do the differences necessitate a form of ‘data triangulation’ whereby customers need to receive a blend of respondents, some “fresh”, and some less so? Or should all respondents be “fresh”? Why?
  • Is there a confounding factor at play? If a majority of all responders online share a more dominant characteristic about which we do not know, such as intellectual curiosity (no matter how frequently they answer a survey), how much weight should we assign to the “freshness” findings shown here?
  • The people who were intercepted are likely somewhat biased toward heavier Web users. Since one can make this same observation of all Web-based respondent data capture modalities, does this matter? Why?
  • What are the implications that need to be addressed as an industry from these findings, specifically, for those who make data-based decisions?

We hope this report will become the go-to resource that researchers globally can use to validate and benchmark their own research. Enjoy!


How Addressable TV Changes Media Measurement Forever (Infographic)

If you haven’t heard of addressable TV, it’s time to start getting familiar with the concept.

If you haven’t heard of addressable TV, it’s time to start getting familiar with the concept. Addressable TV is a technology and marketing practice that selectively segments the ads seen by TV viewers, allowing groups of people to watch the same program yet see different, more effectively targeted ads – regardless of their physical distance from one another.

Here’s how it works: marketers use data-driven household profiles to send targeted ads to specific households. With information like income, family composition, and even car leases and mobile contracts, marketers can designate specific ads to be shown to certain families. This effective targeting increases sales ROI and enhances analytic potential, just like targeted online advertising.
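To make the mechanics concrete, here is a purely illustrative sketch; the household attributes, segment rules, and ad names are all invented, and real addressable systems rely on far richer models and inventory rules.

```python
from dataclasses import dataclass

@dataclass
class Household:
    household_id: str
    income_band: str      # e.g. "under_50k", "50k_100k", "over_100k" (invented bands)
    has_children: bool
    has_car_lease: bool

def choose_ad(home: Household) -> str:
    """Toy rule-based ad selection for a single ad slot in a shared program."""
    if home.has_car_lease and home.income_band == "over_100k":
        return "luxury_suv_spot"
    if home.has_children:
        return "family_minivan_spot"
    return "general_brand_spot"

# Two households watching the same program can be served different ads.
print(choose_ad(Household("hh-001", "over_100k", has_children=False, has_car_lease=True)))
print(choose_ad(Household("hh-002", "50k_100k", has_children=True, has_car_lease=False)))
```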

It’s programmatic for television, which also means it’s driven by a virtuous cycle of consumer-centric data, including viewing and impact data.

Sounds complicated, but as it turns out, setting up the technology is simple. Set-top boxes have their own IP addresses, which allows a TV’s Nielsen data to be integrated with the data from other devices and databases.

Addressable TV advertising has several advantages that set it apart from its advertising counterparts. Television has the largest audience reach of any medium today at 96%, and it draws more than $70 billion annually in media spending. And even though spend is trending away from broadcast media (radio and television), the average adult in 2014 spent five hours a day watching television, despite the growth of mobile.

In 2014, addressable TV was estimated to only represent $200-$300 million of the $70 billion ad spend on TV. However, industry leaders predict that 25% of TV ad budgets will be spent on addressable TV within three years. Addressable TV will revolutionize the way that advertisers plan their campaigns, and the focus will change from quantity of ads to quality.

As that shift happens, it will speed the transition from panel-based measurement to real-time, single-source market measurement. The implications for researchers, marketers, consumers, and advertisers are simply immense.

Check out the great infographic below, developed by the fine folks at Signal, a tech company that is aiming to play a central role in this brave new world. It’s well worth a read and paints a compelling picture of the data-driven marketing world of addressable TV we’re entering now.

[Infographic: How Addressable TV Changes Media Measurement Forever – Signal]


“Analytics is Easy”

Analytics is a lot harder than some seem to realize.


 

By Kevin Gray

Erroneous thinking about analytics continues to hang on in the marketing research community.  Often it is tacit, but at times it is articulated candidly. This is worrisome given that marketing research is a research industry and no longer a young one.  Some, for example, see analytics as little more than cross tabs and charting that can be done by anyone who has point-and-click software installed on their PC.  This is a bit like saying that if you can talk, you can do qualitative research.  Others think it’s “just programming.”  There are other misperceptions as well, and one consequence of all this confusion is shoddy analytics which, in turn, raises doubts about the value of analytics.1  In this short article, I will demonstrate that analytics, in fact, is not easy, and why this mistaken belief is potentially costly for any marketing researcher to hold.

Cross tabulations and graphics are an indispensable part of analytics but only part of it, and marketing researchers have long had a vast assortment of sophisticated tools at their disposal.  Even basic analyses should not be undertaken in a slapdash fashion, however.  Churning out stacks of cross tabs is not unheard of in our business but is very risky because even with big data there always will be fluke results.  Instead of placing our bets on shotgun empiricism, as researchers, we should plan cross tabulations and other analyses when designing the research, and interpret the patterns of our findings in the context of other pertinent information, not simply highlight isolated results.  The Improbability Principle: Why Coincidences, Miracles, and Rare Events Happen Every Day by David Hand, a past president of the Royal Statistical Society, is a great read and I can recommend it to marketing researchers.
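To make that concrete, here is a minimal, hypothetical sketch in Python (the variable names and data are invented) contrasting a planned cross tabulation with shotgun empiricism; with many unplanned tables, some “significant” results are guaranteed to be flukes, so any p-value needs to be read in context or corrected for multiple comparisons.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical survey data for one *planned* hypothesis, not a shotgun run of every pairing.
df = pd.DataFrame({
    "segment":   ["value", "premium", "value", "premium", "value", "premium"] * 50,
    "purchased": ["yes", "yes", "no", "no", "yes", "no"] * 50,
})

# The planned cross tabulation...
table = pd.crosstab(df["segment"], df["purchased"])
print(table)

# ...and a test of independence. Churning out hundreds of such tables without a plan
# will always yield some "significant" fluke results.
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
```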

Another example of substandard analytics can be found in mapping.  Nowadays mapping, in practice, frequently seems to mean junior research execs or even clerical personnel mass-producing correspondence analysis maps, usually with the software’s default settings.  The maps are nearly always brand maps; user maps and other kinds of mapping are underutilized, in my opinion.  Moreover, though correspondence analysis is a wonderful technique, it is just one of many appropriate for mapping, and biplots, MDPREF, MDS, factor analysis, discriminant analysis, canonical mapping or other methods may be better suited to the problem at hand.  What’s more, I still see maps being interpreted incorrectly.
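As one small illustration of an alternative to default correspondence analysis maps, here is a hedged sketch of a perceptual map built with multidimensional scaling in scikit-learn; the brand names and the dissimilarity matrix are invented for the example.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical brand dissimilarities (symmetric, zero diagonal), e.g. derived from
# substitution or confusion data rather than attribute ratings.
brands = ["Brand A", "Brand B", "Brand C", "Brand D"]
dissimilarity = np.array([
    [0.0, 0.3, 0.7, 0.8],
    [0.3, 0.0, 0.6, 0.7],
    [0.7, 0.6, 0.0, 0.2],
    [0.8, 0.7, 0.2, 0.0],
])

# Metric MDS on a precomputed dissimilarity matrix yields 2-D coordinates for a map.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)

for brand, (x, y) in zip(brands, coords):
    print(f"{brand}: ({x:.2f}, {y:.2f})")
```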

A somewhat more elaborate but nonetheless debatable practice is psychographic segmentation with what has been called the tandem approach.  Though it began to be seriously questioned many years ago, this method is still quite popular and, put simply, consists of K-means or hierarchical cluster analysis of factor scores derived from attitudinal ratings.  Tandem refers to the dual use of factor and cluster analysis in the segmentation.  The psychographic statements respondents rate are often improvised, making matters worse.  Poor questionnaire design plagues many kinds of marketing research, and items that make little sense to respondents or mean different things to different people will sink a segmentation whatever statistical methods are used.  In the tandem approach, segments obtained from the cluster analysis are cross tabulated with demographics and other data in the hope that meaningful and actionable segments will materialize.  They often do not and, accordingly, I sometimes call this the “Factor, Cluster & Pray” method.
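For readers unfamiliar with the mechanics being criticized, here is a minimal sketch of the tandem approach using scikit-learn; the ratings data, the number of factors, and the number of clusters are all invented, and the point is precisely that nothing in this pipeline guarantees meaningful or actionable segments.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

# Hypothetical attitudinal ratings: 500 respondents x 20 psychographic statements (1-5 scale).
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(500, 20)).astype(float)

# Step 1: factor-analyze the ratings and keep the factor scores.
factor_scores = FactorAnalysis(n_components=5, random_state=0).fit_transform(ratings)

# Step 2: cluster the factor scores -- the "tandem" step.
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(factor_scores)

# Step 3: cross tabulate segments against demographics and hope something meaningful
# appears -- the "pray" part of "Factor, Cluster & Pray".
age_band = rng.choice(["18-34", "35-54", "55+"], size=500)
print(pd.crosstab(pd.Series(segments, name="segment"), pd.Series(age_band, name="age_band")))
```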

Regression is perhaps the most widely-used statistical method of them all but is also deceptively complex.  Many books have been written which detail how regression analysis can be badly abused and Frank Harrell’s Regression Modeling Strategies is the most comprehensive and hard-hitting I’ve read.  Marketing researchers seem to make the sorts of mistakes people working in other disciplines do, though perhaps more often.  Some examples are using highly correlated predictors, neglecting residual analyses, ignoring correlations across time (e.g., in weekly sales data) or space (e.g., regions of a country), categorizing the dependent variable and confusing correlation with causation.
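As a small, hedged illustration of two of those checks (collinearity diagnostics and residual analysis) using statsmodels; the data are simulated and the VIF threshold quoted in the comment is only a rule of thumb.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Simulated data with two deliberately correlated predictors.
rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + rng.normal(scale=0.3, size=n)          # highly correlated with x1
y = 2.0 + 1.5 * x1 - 0.5 * x2 + rng.normal(size=n)

X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2}))
model = sm.OLS(y, X).fit()

# Collinearity check: VIFs much above roughly 5-10 flag unstable coefficients.
for i in range(1, X.shape[1]):                          # skip the constant term
    print(X.columns[i], "VIF =", round(variance_inflation_factor(X.values, i), 1))

# Residual analysis: look for structure (non-linearity, heteroscedasticity, serial
# correlation in time-ordered data) instead of ignoring it.
print("R-squared:", round(model.rsquared, 3))
print("mean residual:", round(model.resid.mean(), 4))
```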

Another concern I have, in fact, pertains to causation.  Whenever we say things like “This sort of consumer does this because of that,” we are making a statement about causation whether or not we are conscious of it.  Causal analysis is a subject even bigger than regression and one bible is Experimental and Quasi-Experimental Designs for Generalized Causal Inference (Shadish et al.).  Trying to establish causation can be likened to walking through a minefield, to paraphrase a comment once made to me by a Marketing professor with a PhD in Statistics.  We need to tread carefully!

The next time you’re in a very brave mood, ask your senior finance director if they are no better at their job than they were 10 years ago.  Common sense should tell us that experience counts, particularly in highly technical professions.  Formal education only lays the groundwork for statisticians and even veterans are constantly learning new things and new tricks.  The list of viable analytic options continues to grow (for examples see Analytics Revolution) and we’ve reached the point where we now have so many tools that skill levels are becoming diluted.  Over-specialization, on the other hand, is also something we need to be wary of, and some less-experienced analysts lean on a pet method for nearly any situation – if all you have is a hammer, everything looks like a nail…

Now, here comes the bad news: The math stuff can actually be the easiest part of analytics!  Every so often I’m asked questions such as “If I give you 10 million customer records, what technique would you use?”  To characterize questions like these as naive would be too diplomatic, as they reveal little grasp of the fundamentals of research.  The Cross Industry Standard Process for Data Mining (CRISP-DM), illustrated in the diagram below, will help make clear what I mean by this.

 

[Diagram: the CRISP-DM process]

 

Here are very succinct definitions of each CRISP-DM component, courtesy of Wikipedia.2 

Business Understanding:

This initial phase focuses on understanding the project objectives and requirements from a business perspective, and then converting this knowledge into a data mining problem definition, and a preliminary plan designed to achieve the objectives.

Data Understanding:

The data understanding phase starts with an initial data collection and proceeds with activities in order to get familiar with the data, to identify data quality problems, to discover first insights into the data, or to detect interesting subsets to form hypotheses for hidden information.

Data Preparation:

The data preparation phase covers all activities to construct the final dataset (data that will be fed into the modeling tool(s)) from the initial raw data. Data preparation tasks are likely to be performed multiple times, and not in any prescribed order. Tasks include table, record, and attribute selection as well as transformation and cleaning of data for modeling tools.

Modeling:

In this phase, various modeling techniques are selected and applied, and their parameters are calibrated to optimal values. Typically, there are several techniques for the same data mining problem type. Some techniques have specific requirements on the form of data. Therefore, stepping back to the data preparation phase is often needed.

Evaluation:

At this stage in the project you have built a model (or models) that appears to have high quality, from a data analysis perspective. Before proceeding to final deployment of the model, it is important to more thoroughly evaluate the model, and review the steps executed to construct the model, to be certain it properly achieves the business objectives. A key objective is to determine if there is some important business issue that has not been sufficiently considered. At the end of this phase, a decision on the use of the data mining results should be reached.

Deployment:

Creation of the model is generally not the end of the project. Even if the purpose of the model is to increase knowledge of the data, the knowledge gained will need to be organized and presented in a way that the customer can use it. Depending on the requirements, the deployment phase can be as simple as generating a report or as complex as implementing a repeatable data scoring (e.g. segment allocation) or data mining process. In many cases it will be the customer, not the data analyst, who will carry out the deployment steps. Even if the analyst deploys the model it is important for the customer to understand up front the actions which will need to be carried out in order to actually make use of the created models.

Bravo!  Properly understood, analytics is not just cross tabs, visualization or programming, or even fancy statistical techniques.  It is a process intended to enhance decision-making.  The first step listed above, Business Understanding, is often the most demanding and, along with Data Understanding and Data Preparation, can absorb the bulk of a project’s time and energy.  CRISP-DM was not developed specifically for marketing research but is applicable to our business and drives home the point that analytics is a multifaceted, iterative process which involves more than narrow technical skills…or the ability to use a mouse.  Serious errors can occur anywhere, anytime and even simple mistakes can have important consequences.
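To underline that point, here is a deliberately tiny, runnable sketch of a single pass through the CRISP-DM phases on invented in-memory data; the objective, data, and “model” are all placeholders, and a real project would loop back between phases many times.

```python
import pandas as pd

def run_crisp_dm_pass():
    # Business Understanding: a hypothetical objective -- flag customers at risk of churning.
    objective = "flag customers with above-average churn risk"

    # Data Understanding: inspect the (invented) raw data and note quality problems.
    raw = pd.DataFrame({"tenure_months": [1, 24, 3, 36, None], "churned": [1, 0, 1, 0, 1]})
    print(raw.describe())

    # Data Preparation: clean and construct the modelling dataset.
    prepared = raw.dropna().copy()

    # Modeling: the simplest possible "model" -- a tenure threshold.
    threshold = prepared.loc[prepared["churned"] == 1, "tenure_months"].mean()

    # Evaluation: check the rule against the business objective before any deployment.
    prepared["flagged"] = prepared["tenure_months"] <= threshold
    accuracy = (prepared["flagged"] == prepared["churned"].astype(bool)).mean()
    print(objective, "| threshold:", threshold, "| in-sample accuracy:", accuracy)

    # Deployment: in practice a report, a repeatable scoring job, or a handover to the customer.
    return threshold

run_crisp_dm_pass()
```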

So, the next time someone even suggests that analytics is easy, I’d advise you to be on guard.  It just ain’t so.

_________________________________________________________________________

Notes

1 Some other reactions I have come across are that analytics is “too complicated,” or that it isn’t needed, or that it doesn’t work.

2 For a brief summary of CRISP-DM see Wikipedia: http://en.wikipedia.org/wiki/Cross_Industry_Standard_Process_for_Data_Mining.  For a more in-depth look, see Data Mining Techniques: For Marketing, Sales, and Customer Relationship Management (Linoff and Berry), a popular, non-technical introduction to Data Mining and Predictive Analytics.


The 15 Most Innovative Market Research Clients (GRIT Spring 2015 Sneak Peek)

New in the GRIT study this year is an investigation of the most innovative clients. The table below shows the top 15.


Editor’s Note: The newest edition of the GRIT report will be published at the end of May. It’s all new in many ways: a new design of the report, new sections, new data, new commentators, and of course, new insights. I promise it will be worth the wait!

One of the new sections is part of the sneak peek we’re offering today. As part of the “GRIT 50” section, for this wave we decided to expand from the supplier-centric questions to ask all participants (nearly 2000 globally!) who they considered to be the most innovative client-side companies!

Like the GRIT 50 Suppliers ranking, the questions are straightforward verbatims:

Q27. Now we want to switch gears and think about innovative client-side organizations. A client-side organization is defined as: an organization that commissions research or data analysis projects using external suppliers. When thinking of market research client-side innovators, which companies come to mind? Please rank your top 5.

Q28. What factors make your first pick firm most innovative?

The goal here was to uncover not just which clients are driving innovation in the marketplace, but in fact which are leading it by words, actions, and spend. We’re also looking for the overlap between what these client organizations are doing and how it might relate to the GRIT data on emerging methods and the 50 Most Innovative Suppliers. We’ll reveal those results in the final report.

GRIT authors Amber Strain and Ray Poynter led this analysis, and here is an excerpt from the upcoming report on the responses to this question area. It’s compelling in many ways, not least of which is the validation that these leaders are increasingly taking an active role in the industry dialogue via events and public partnerships to engage with the market (for instance, almost all of these brands are also IIeX Corporate Partners and regular participants at our events).

The essence here is that despite all of the shouts that innovation in MR is supplier led, the truth is much more a reflection of traditional demand-side economics: clients are stating a need, and suppliers are working to try to meet that need. These are just some of the clients leading that charge.

 

By Ray Poynter & Amber Strain

New in the GRIT study this year is an investigation of the most innovative clients. The table below shows the top 15, with P&G being ranked as number 1.

  1. Procter & Gamble
  2. Coca-Cola
  3. Google
  4. Unilever
  5. General Mills
  6. Lowe's
  7. Apple
  8. 3M
  9. Microsoft
  10. PepsiCo
  11. Mondelez
  12. ESPN
  13. Intel
  14. L'Oreal
  15. Red Bull

This table is based on 1,871 brand mentions (with over 600 unique brands) from 787 respondents.

Who’s on the list, and who is missing?

There are two interesting things about most lists: who is on the list and who is not, and this is very true of this list.

On the list

Business sectors that show themselves to be innovative include:

  • CPG – P&G, Unilever, General Mills, Mondelez, and L’Oreal – with Kimberly Clark and Nestle just outside the top 15.
  • Non-alcoholic Beverage – Coke, Pepsi, and relative newcomer Red Bull
  • Tech – Google, Apple, 3M, Microsoft, and Intel – with Google being the only company in the top 15 for both the supplier and buyer categories.

Less innovative

There was only one media company in the top 15, ESPN, and only one retailer (Lowe's). Note that Lowe's performance was particularly strong given its North American focus and the global nature of the responses – quite possibly a result of Lowe's track record of speaking at and being involved in international MR events.

Missing?

Major buyers of research who did not make the top 15 include:

  • Auto
  • Finance
  • Pharma
  • Consumer durables

What’s driving innovation?

Decooda analyzed the open-ended reasons respondents gave for their innovative picks and identified three core themes:

  1. Using novel technologies.
  2. Delivering impressive products and services.
  3. Being cutting edge and taking risks.

Here are their notes about the top five brands in the list.

 


#1 Procter and Gamble

P&G was perceived as innovative because of its effective and useful methodologies, its cutting-edge research, and its forward-thinking mindset. Specifically, P&G was acknowledged for focusing on human emotion and its correlates to behavior, for learning new methodologies for analyzing qualitative data instead of relying as heavily on quantitative data, and for hosting webinars and conferences in order to share its methodologies and insights with others.

 


#2 Coca-Cola

Coke was recognized for its commitment to staying at the forefront of technology, techniques, and methodology. Commentators applauded Coke for its innovative ways of understanding consumers at a deeply emotional level and then engaging with them at that level. They also admired Coke for its willingness to take big risks that pay off in the long run. For instance, it was highly risky to release the “America the Beautiful” spot at the Super Bowl, but despite the detractors the spot was still a big success.

 

#3 Google

Google was lauded as one of the most cutting-edge companies in the field. People reported that Google is so far ahead of the game that it is able to give answers to problems before anyone even realizes there is a problem. “First to market” was an expression commonly used in reference to Google. This indicates that respondents view Google as an industry leader that brings forward new ideas that are often rapidly adopted by other companies.

 


#4 Unilever

Unilever was recognized for its smart, fast, and effective methodologies, its use of novel technologies and techniques, and its cutting-edge work. Unilever was applauded for being the first to make significant discoveries, and then for sharing those discoveries with others in a collaborative way. Respondents mentioned that Unilever also has a unique ability to challenge its business partners. Unilever is forward thinking and has high expectations, both for itself and for those who work with it.

 


#5 General Mills

General Mills received the unique distinction of being a company that not only utilizes new technologies and techniques, but also helps to establish the benchmarks that need to be in place for other companies to use those techniques and technologies successfully. It was also applauded for being bold and willing to take risks in its approach, something that many respondents reported as being an important facet of innovation.

The full list and additional analysis will be revealed in the upcoming GRIT report. Stay tuned!


Copy, Copy, Copy: Asking “What Kinda?” Questions To Transform Insights

Have we – in our laudable search for better technique and technique better grounded in contemporary science’s account of how people behave – missed a bigger opportunity? Especially at scale.


 

Guest blog by Mark Earls, who will be presenting a NewMR webinar on this topic on Thursday April 16 – click here to register.


Boom time

The last decade has been boom-time for insights professionals who embrace innovation.

 

Whereas back in the 2000s many in our community were still angst-ing about the impact of doing online surveys and groups on the quality of the data, today we are awash with new techniques and new frameworks (from neuroscience to semantic analysis, from agent-based modelling to emotional response). Just look at the changing agendas of any of the many insights conferences around the world to see how far we’ve come. Some of this flourishing is down to the continuous outpouring of new insights into our specialist subject (human behaviour) from the cognitive and behavioural sciences, which has called into question many of the assumptions behind established research practices. Equally, changes in available technology are also driving innovation (some of it, it must be said, seeming more like technology in search of a solution than market-led innovation). And of course demand-side pressures continue to draw out new practice: the need to provide more powerful techniques to “get behind” the consumer and their unreliability as witnesses to their own lives (e.g. Zaltman’s critique in Inside the Mind of the Consumer), and to do so faster and cheaper.

 

The flow of innovation doesn’t seem to be slowing either – the insights hosepipe continues to spray us with new stuff. Each and every week, sites like GreenBook document more striking new ideas. And there is still a lot of work to be done in making the science work for us: neuroscience is still relatively young as an academic discipline; it still has a long way to go to demonstrate a reliable link between observable physiological response and real-world behaviour. For my own part, having championed the social aspect of human nature for more than a decade (back before it was cool!), I believe we’re still too stuck on looking at what goes on between an individual’s ears and not open enough – ideologically and in terms of practical technique – to the power of what goes on between individuals. We still cling – willingly or otherwise – to the notion that insight is somehow to be found inside individuals (in their brain or similar), rather than in the space between them. It’s not that all choices are social, but – as great work by the likes of the HM Government’s Behavioural Insights Team (“The Nudge Unit”) and Ogilvy Change has shown – what others are doing and thinking and saying remains a central heuristic used by all kinds of people in all kinds of situations to make their choices.

So far, so good. However, my challenge is of a very different nature: have we – in our laudable search for better technique and technique better grounded in contemporary science’s account of how people behave – missed a bigger opportunity? Especially at scale.

Beware the singularity

Here’s a thing: each time we approach a problem, it’s fair to say that most of us treat it as if it were unique – something no-one has ever seen before. And we assume that the way to unlock a singular problem is to examine it in ever greater detail – to dig deeper (pace Zaltman) to find that nugget, or to find a technological way to see the problem better (e.g. using brain scanning or some new analytic technique). This is what you’d expect from an innovative insights professional of the last 10 years.

But is it the most useful approach for the business paying for it?

“Kinda” questions, cycling & surgery

Copy Copy Copy argues that the most powerful innovations are not to be found in sweating the singular problem but in seeing individual things and problems as instances of other things, and bringing solutions from distant sources to new contexts.

David Brailsford

For example, when David Brailsford took over the British Olympic Cycling team, he didn’t just sweat the track and road performance issues from the perspective of cycling (although, as a big fan of Moneyball and sports stats, he did examine them that way at length). No, his strategy of “aggregating small advantages” was based instead on identifying problems of a different sort – sleep problems, health problems, etc. In each case, once the problem is identified – once you know “what kind of thing” you’re dealing with – it’s easy enough to work out where to look for good solutions. Brailsford is no expert in epidemiology, but recognising he had an epidemiology-shaped problem was essential to finding and applying the best expertise to his cycling team. Similarly, when Professor Martin Elliott of Great Ormond Street Hospital sought to improve the outcomes for his tiny heart-surgery patients, he looked to F1 rather than other medics, because he saw the problem as a handover one: from an exhausted theatre team and their machines and wires and so on to the ICU team. Ferrari rather than the Lancet. Again, asking “what kinda” questions was an essential step in garnering insight.

What kinda “what kinda”?

Because I’m primarily interested in behaviour change (rather than any particular sphere of human behaviour), the map I use to ask “kinda” questions is built on how individuals choose – the same one at the heart of I’ll Have What She’s Having (Bentley, Earls & O’Brien). {See below}

What kinda thing?

This allows me to ask: what kind of behaviour is it? Is it a considered choice, in which the individual is choosing independently of their peers based on the relative qualities or utilities of the options? Or is it the kind of choice that gets shaped by what experts and authorities say or do? Knowing what kind of thing I’m dealing with provides a clear and useful filter on possible solutions – it helps me find appropriate solutions (rather than merely clever or popular ones).

The question is how this challenges insight research practice: what happens when the big questions the team is asking of a market or a behaviour are “kinda” questions, rather than “singular” ones (how big, how tall, how small, etc.)?

How does that change the kind of research you might do? How does it change the kind of knowledge needed to operate like this? Certainly, I and my collaborators have found it necessary to collate solutions that we find across many different contexts and sort them into the four boxes. {See fig. 2 below}

Mark Earls' cards

Conclusion

We have undoubtedly created a host of much better practices to reflect better the descriptions of how people behave that contemporary science gives us (and harnessing the technology now available to us). And we should be proud of our achievements.

But in creating this new toolkit, have we perhaps forgotten what our users want to get out of it? We’re still using the new toolkit as we did the old – to better describe the characteristics of the singular phenomena we study. 

What if we took the step up to “kinda”?

Want to find out more?

You can buy Mark’s new book COPY, COPY, COPY and/or you can sign up for Mark’s webinar, April 16.

Mark will also be doing a full workshop based on his latest work at IIeX North America: register now to get a seat while you can!


Social Listening And Online Communities: 1+1=3?

We have written about private online communities and social media listening separately many times before, but this blog post is dedicated to the power of integrating the two disciplines.


 

By Michalis Michael

Two of the top three trends in market research repeatedly reported by GreenBook’s GRIT report are social listening and online communities. The third is mobile research, which, being a method of collecting data for surveys, can be part of online communities anyway.

We have written about private online communities and social media listening separately many times before, but this blog post is dedicated to the power of integrating the two disciplines.

Back in February, Eric Salama, the CEO of Kantar, spoke at the Insight Innovation Exchange conference in Amsterdam about his view of the future of market research. One of the concepts that stuck with me was that in the future, market research will be divided into “learning applications” and “action applications”. My interpretation of these two types of apps is that the former is pure market research as we know it, and the latter are adjacent marketing activities that today are not governed by the ESOMAR or MRS codes of conduct. Examples of action applications are programmatic advertising, customer advocacy, and agile customer engagement.

Two of the following three ways to integrate social listening and online community platforms are action applications, and one is a learning application. Let’s see if you agree that 1+1 will equal more than 2 in these three cases:

  1. Member recruitment for online communities
    For the first time in the history of marketing and market research, we can now find respondents for ad-hoc research or members of communities based on their perceptions, without having to use a screener questionnaire. We can use social listening to gather all the posts from the web that are aligned with an idea, agree with a concept, or express love for a brand (a minimal sketch of this filtering step appears after this list). Because the opinions expressed in social media posts are unsolicited, they are of better quality than those expressed in a screener questionnaire used with people from a consumer panel. Panelists have an interest in figuring out how to answer “right” so that they will be invited to participate in a survey (“expert respondents”).
  2. Listen-probe-listen-probe
    A virtuous circle can be created by integrating listening and communities. A brand or organisation can first “listen” to what people say on the web about the subjects of interest, and then engage with the members of their private online communities to ask questions (probe) about what they learnt from the harvesting and analysis of online posts. Through the probing they are bound to discover information that will improve the way they do their social media monitoring. And so on and so forth… Every time they complete a listen-probe-listen cycle, new valuable insights can be extracted that were never attainable before.
  3. Amplified customer advocacy
    Product category influencers can be identified through the content of their online posts and the size of their networks. They can then be invited to join an exclusive private online community for co-creation of digital content and customer advocacy amplification, i.e. the sharing of the digital content with their friends and networks.
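Here is the minimal sketch referred to in point 1: a hypothetical keyword-and-sentiment filter for screening potential community recruits. The posts, brand term, and word lists are invented, and a real social listening platform would supply properly classified, permissioned data instead of these toy rules.

```python
# Hypothetical screening of social posts for potential community recruits.
POSITIVE_WORDS = {"love", "great", "amazing", "favourite"}
BRAND_TERMS = {"acmecola"}            # invented brand handle

posts = [
    {"author": "@user1", "text": "I love AcmeCola with pizza, my favourite combo"},
    {"author": "@user2", "text": "AcmeCola customer service kept me on hold for an hour"},
    {"author": "@user3", "text": "Great weather today"},
]

def is_recruit_candidate(text: str) -> bool:
    words = set(text.lower().replace(",", " ").split())
    mentions_brand = any(term in words for term in BRAND_TERMS)
    sounds_positive = any(word in words for word in POSITIVE_WORDS)
    return mentions_brand and sounds_positive

candidates = [post["author"] for post in posts if is_recruit_candidate(post["text"])]
print(candidates)   # ['@user1'] -- an unsolicited positive mention, no screener needed
```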

Connecting the dots is a very powerful notion in market research. As shared on this blog several times, we firmly believe that a true business insight is more likely to be the result of synthesizing data from multiple sources, as opposed to analysing a (small) data-set to death. The insights expert is a necessary part of this equation (1+1=3). There is also a new breed of human skill-set that is becoming more and more an integral part of those market research agencies that “get it”: the data scientist, who is, among other things, a machine learning specialist not daunted by tera-, peta-, exa- or zettabytes.

Thoughts?


Brave Researchers? Help us to celebrate their stories.

Nominations are now open for the Ginny Valentine Badge of Courage Awards


 

Guest Post by Fiona Blades & John Griffiths

When was the last time you described a market researcher as brave? We are not firefighters, soldiers, or disaster relief workers. Yet market researchers are performing acts of bravery that also deserve recognition. Virginia Valentine was a pioneering woman who brought semiotics to the market research industry through persistence and dedication. Though she won many awards in market research, the one that meant the most to her was the one she received for being a research revolutionary from her peers at the Research Liberation Front event in 2007. When Ginny died, it seemed fitting to commemorate her achievements and honor her tremendous spirit. Thus, in 2012, the Ginny Valentine Badge of Courage Award was established.

Since then, the many stories that have come to light have been inspiring. For example, many clients and market research agencies are unaware of the dangers that can be faced by fieldworkers on the front line. In 2012, ESOMAR nominated ORCA, a market research agency based in Afghanistan, which works to collect unbiased data out of the country. In the course of this work, two of ORCA’s employees were killed on suspicion of being American spies. In 2014, a fieldworker in South America named Catalina was awarded on behalf of all fieldworkers in the region for resilience in the face of danger. While conducting research, Catalina was attacked by a gang of men, escaped, and returned the following day to complete her work. While her story was nominated as representative of the dangers fieldworkers face in South America, it also sparked a debate about the safety conditions faced by all women in market research. Following Catalina’s acceptance of the Ginny Valentine Badge of Courage, WiRe (Women in Research) held a webinar on women’s safety in research, at which Susan Steele (previously Global Chief Human Resources Officer at Millward Brown) pointed out the dangers that exist within the office and at market research social occasions, where women can find themselves in awkward and vulnerable situations.

Bravery can come in many forms. We have seen clients like Ana Alvarez (Brazil), awarded for “Going off Roster” and thereby encouraging new innovative agencies; Jackie Braggs (UK), awarded for “Supporting Brave Creative Work”; and Manish Makhijani (UK), awarded for “Grace Under Fire” for the courageous way he responded to criticism from the industry after initiating a qualitative accreditation system at Unilever. Like Ginny bringing semiotics to the industry, others have been awarded for their persistence in bringing new thinking, such as John Kearon (UK) for “Waking up the Industry,” Stan Sthanunathan (US) for “Leading by Principle” and Steve Cohen (US) for “Taking an Academic Methodology and Giving it Commercial Credence.”

Ginny, a great supporter of women in their careers, would have been delighted at Kristin Luck’s (US) award for “Fearlessly Advocating Gender Equality,” at Batlool Batalvi’s (Pakistan and Canada) for “Raw Bravery for the Greater Good,” and at Catalina Mejía Rozo’s (Colombia) for “Giving a Voice to the People.”

Now in their 4th year, the awards are looking for new stories of bravery. These could come from any part of the market research industry, from any country, and from someone of any age or gender. Variety is at the heart of these awards. Nominating takes five to ten minutes at www.ginnyvalentineawards.com, and the deadline for nominations is April 30th.

There is a perception that market research is a conservative industry, but the stories that have already been collected demonstrate the contribution the people in our industry are making to society. Help us to unearth more awe-inspiring stories that make us proud of what we do.

Further information
The Ginny Valentine Badge of Courage Awards will be held on June 16th at IIeX Atlanta with the support of GreenBook, the Research Liberation Front, KL Communications and TNS.

Fiona Blades is the CEO and Founder of MESH – The Experience Agency, which manages the experiences that build brands. www.meshexperience.com

John Griffiths is the CEO and Founder of Planning Above and Beyond, a consultancy offering research, communications planning and facilitation. www.planningaboveandbeyond.com


Safe Harbor: Is it safe?

Safe Harbor is vital to US data collection companies and needs to be kept safe.


 

By Andrew Jeavons

Safe Harbor is a US government program, in co-operation with the EU and Swiss governments, providing self-certification for companies concerning the security of data gathered outside of the USA but residing on servers within the USA. It tells the overseas participants, the EU and Switzerland, that the data will be kept private and secure within the USA. Norway, Iceland and Liechtenstein have also agreed to be bound by this agreement. You can find out if a company is Safe Harbor compliant on the Safe Harbor website, http://www.export.gov/safeharbor/.

The Safe Harbor framework is vital for any company in the US that carries out data collection (data import, in Safe Harbor terms) in Europe using computer systems based in the USA. Without it, the nightmare of having to comply with 30 countries’ differing security requirements would cripple data collection activities.

The introduction by CASRO of a Safe Harbor assistance program is a tremendous help to US-based MR or survey companies that carry out research in Europe. This program makes it easier for CASRO members to become Safe Harbor certified and also provides a mediation channel for dispute resolution, a requirement for Safe Harbor compliance.

So all is right in the world. Become Safe Harbor compliant and you are now all set to collect data from Europe without violating any security requirements of European countries!

The problem is that this isn’t quite true.

There is a threat to Safe Harbor, and it raises the specter of a world without a substantial Safe Harbor system. This threat started in Düsseldorf, Germany in 2010. Germany has a federal system of regional government; each of the 16 states within the German federation has significant legal powers. In April 2010 the “Düsseldorf Circle” met. This was an informal group of data protection officials from each of the 16 states within Germany. They passed a resolution stating that they no longer accepted membership of the Safe Harbor agreement as, by itself, reliable enough to allow data collection by US entities within the German states. They stated that there was a requirement for further due diligence on the part of German companies “exporting” data to the US, beyond that required by Safe Harbor. In short, German companies need to undertake their own due diligence with the US data importer, and the onus is on them to make sure they are satisfied that the US importer is secure enough.

In practice this means that when you agree a deal with a multinational European company to collect data from all their companies in Europe, you have to not only be a member of the Safe Harbor program but often also sign a separate agreement with the German subsidiary because of German federal law. It also applies to global US-based companies; the German subsidiary will often require an agreement of its own. This agreement often follows the EU directive on data storage, a sort of re-affirmation that the data will be kept safe while in the US. Sometimes the German company simply decides not to be part of the global master agreement and to use local facilities to store German data so it never crosses the shores of the USA.

So far this seems only to be happening with Germany, but it represents a crack in the Safe Harbor system. The United Kingdom has some very strict laws regarding data collection and privacy. For instance, you have to actively agree to allow websites to use cookies on your computer, and all UK websites will ask for this permission when you first visit them. Very often UK companies will require that data collected within the UK resides on servers in the UK and is not exported to the USA. This trend is becoming more common; companies want their data in their own country. It may only be a matter of time before other European countries follow the lead of Germany and require data exporters to have their own agreements, outside of Safe Harbor, with US data importers.

After the controversy surrounding the revelations by Edward Snowden concerning US government spying, the USA is unfortunately regarded with suspicion in much of Europe when it comes to data security. Last year the French and German governments held talks regarding an Internet communications system that would avoid data (mainly email) passing through the USA, to shield it from US government spying. This shows the level of concern in Europe about US data security. It is not in anyone’s interest to go back to having agreements with each nation within the EU concerning data exporting to the USA; that would be very time-consuming and chaotic, and would only serve to stifle business for US companies that want to collect data globally.

Companies such as Amazon can provide one possible technical solution to local country storage requirements. Amazon, along with selling anything you could possibly think of, also sells cloud-computing resources via “Amazon Web Services” (AWS). AWS is also able to localize the cloud services so that your data can be in a specific place, for instance Frankfurt or Ireland. It could be a solution for US based companies gathering data but needing the data to be stored in another country. But it is by no means simple to split data storage across facilities in this way, so while it sounds like a solution, implementing it could be harder than it looks.
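For illustration only, here is a minimal boto3 sketch (the bucket name is invented and valid AWS credentials are assumed) of pinning storage to the Frankfurt region; as the paragraph above notes, actually splitting an operation’s data flows across such regional facilities is the harder part.

```python
import boto3

# Create an S3 bucket constrained to eu-central-1 (Frankfurt) so the stored
# survey data stays in that region's facilities.
s3 = boto3.client("s3", region_name="eu-central-1")
s3.create_bucket(
    Bucket="example-eu-survey-data",   # hypothetical bucket name
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
)
```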

Safe Harbor is very much in the interest of global MR client companies. It allows streamlined data collection operations from a single US source, rather than having data collected in many different countries individually. It makes data collection much more efficient and hence more economical, not to mention cutting down the time taken to implement data collection agreements. Safe Harbor is vital to US data collection companies and needs to be kept safe.
