
The Return on Investment from Insights Part 1 – Why You Need to Care

What would happen to YOU if your CFO / your clients’ CFOs demanded to know, right here, right now, the return on investment from money spent on customer insight and market research last year?


Work on the client-side and want to grow your budget? 

Work on the agency-side and want to win more business? 

The answer to both your dreams might well be a ROI AUDIT.

I know you’re probably pressed for time, and this post will take 5 to 10 minutes to read, so please answer these two questions to decide whether it is worth investing your time in it or not:

  1. Do you believe it is important for market research and customer insight to deliver a strong return on investment (ROI)?
    1. If yes; answer Q2
    2. If no; go read something else
  2. Do you currently measure the return on investment you deliver?
    1. If yes; congratulations! Go read something else
    2. If no; invest 5 to 10 minutes of your time. It might just be worth it.

On The Way Up Or On The Way Down?

A cautionary tale* illustrating that how you deal with the ROI question today might well affect your future quite dramatically

One month earlier at the national association’s CEO networking evening…

Bob, CEO at Great Tools Research, a mid-sized research company, was chatting happily with Susan, his counterpart at Big Impact Research, explaining what a great year his company had had last year, growing its margin from 12% to 15%. Little did he know how things were about to change…

Read below to follow Bob’s path

∞∞∞∞∞∞

Sharon, V.P. Customer Insights at Tight Ship Inc, was sitting alone in her office, head in hands. Four weeks ago she had learnt that her budget proposal had not gone through and that instead her budget was to be cut in half. As a result, she had just had to fire two of her four team members and had just got off the phone with Bob, CEO at her primary research vendor, Great Tools Research, to tell him that he needed to come up with a proposal for delivering as much as they could on a budget cut by 50%, and that she had no choice but to put her account out to tender.

∞∞∞∞∞∞

Two months ago everything had been so different. She thought back to her appraisal meeting with her boss, the Marketing President, and the nice bonus she had received for meeting her key targets of getting 10% more projects out of her budget and creating a fantastic new customer insights portal. She recalled the meeting with the Account Director at Great Tools Research, where they had talked about setting up a customer panel and introducing a new interactive reporting tool this year. She sighed knowing that both projects would now have to be shelved.

∞∞∞∞∞∞

Bob was sitting alone in his office head in hands. He had just had to fire ten of his staff. Two weeks ago he had learnt that their two biggest clients were both cutting their research budget in half, leaving him with a 15% revenue gap versus budget, and his board had agreed that serious cost-cutting was needed to maintain margins.

Four weeks ago everything had been so different. Generous bonuses had been given to the two account directors for increasing the margin on these established “cash cow” clients, and Bob had announced to his staff the plans to invest in new technology, as well as in the hiring of a new Vice President of Business Development.

∞∞∞∞∞∞

Bob was still wondering what the heck had happened at Tight Ship Inc, such a long-standing and reliable client, and Sharon was still in disbelief that half of her budget had been given to the Digital Marketing team, kicking herself for not seeing this coming.

∞∞∞∞∞∞

Six months ago Tracy had joined Tight Ship Inc as the new CFO. Her first job was to ask all of the Presidents in the company to provide a report on the return on investment from each of their budget lines. Unfortunately, the Marketing President was not able to provide any hard numbers for the customer insight budget line, since they had always viewed this as a cost item and not an investment and therefore had no ROI metrics in place.

Upon analyzing the ROI reports, Tracy could clearly see the positive impact of digital marketing on the top and bottom lines, but could not see the impact of their investment in customer insight in the same way. Therefore, cutting the customer insight budget in half and reallocating the spend to activities with a demonstrable return on investment was a “no brainer”, which would immediately demonstrate to her boss her effectiveness as a CFO who drives profitable growth. Tracy set the wheels in motion, which would lead to such a negative impact on Great Tools Research and the people working there.

Read below to follow Susan’s path

∞∞∞∞∞∞

Mark, V.P. Customer Insights at Ahead of the Comp Inc, was having lunch with his team. The team were delighted to hear about their department’s budget increase and Mark’s plans to hire a new Insights Manager to the team. Earlier that day he had spoken with Susan, CEO at his primary research partner, Big Impact Research, to share the good news with her and discuss how to use the extra budget to deliver even greater value to the business.

∞∞∞∞∞∞

Mark reflected that only two months earlier he had proudly presented his latest ROI Audit to his boss, the Marketing President, and received a bonus for meeting his key targets of increasing Customer Insight ROI by 20% and creating a new customer insights portal. He recalled the meeting he had had with the Account Director at Big Impact Research, where they had gone through the agency’s own ROI audit report and agreed on how to adjust the spend for the year ahead in order to focus even more resources on the high-ROI-delivering activities.

∞∞∞∞∞∞

Susan was sitting in the conference room with her team enjoying a glass of champagne. She had just hired four new researchers to her key account team. One month ago she had learnt that their two biggest clients were both increasing their research budget by 20%, and the board had agreed to further investments in order to deliver even more value to their key clients. The key account teams had just presented their plans for improving the return on investment delivered to their clients and everyone had agreed on the internal changes needed to deliver on those plans.

∞∞∞∞∞∞

Susan reflected on how fortunate her company was to have clients who understood the importance of a partnership based on trust and transparency. Mark reflected on how fortunate he was to have a partner agency, who shared his goal of increasing the Customer Insight ROI and who worked openly together with him to audit the return on investment they delivered.

∞∞∞∞∞∞

Six months ago Simon had joined Ahead of the Comp Inc as the new CFO. His first job was to ask all of the Presidents in the company to provide a report on the return on investment from each of their budget lines. The Marketing President presented the data from their recent Customer Insights ROI Audit, which showed that last year Customer Insights had contributed towards both sales growth and cost savings, delivering an overall 27X return on investment.

Upon analyzing the ROI reports, Simon could clearly see the positive impact of Customer Insight on both the top and bottom lines, and therefore had no problem recommending that the customer insight budget be increased by 20%. Increasing spend on an activity with such a demonstrable return on investment was a “no brainer”, which would immediately demonstrate to his boss his effectiveness as a CFO who drives profitable growth. Simon set the wheels in motion, which would lead to such a positive impact on Big Impact Research.

∞∞∞∞∞∞

One month later at the national association’s CEO networking evening…

Susan looked across the room at Bob, feeling slightly sorry for him, but at the same time wondering which of his clients she would be calling in the morning. THE END.

* This is a work of fiction. Names, characters, businesses, places, events and incidents are either the products of the author’s imagination or used in a fictitious manner. Any resemblance to actual persons, living or dead, or actual events is purely coincidental.

∞∞∞∞∞∞

Fact or fiction?

Definitely a bit of both, no doubt, but the question is which of these two scenarios best reflects what would happen to YOU if your CFO / your clients’ CFOs demanded to know, right here, right now, the return on investment from money spent on customer insight and market research last year?

Put simply:

If you work client-side, what return on investment did your customer insights budget deliver to the business?

If you work agency-side, what return on investment were you able to help each of your clients deliver from the money they spent with you?

In either case, assuming that the 80/20 rule also applies to research budgets, do you know which 20% of the budget delivered 80% of the return on that investment? Do you also know why?
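If you want to make that 80/20 question concrete, a simple Pareto-style view of ROI by budget line is one way to start. The sketch below is a minimal illustration with entirely hypothetical budget lines and return figures; the hard part in practice is deciding how to attribute return to each line.

```python
# Minimal sketch: rank hypothetical budget lines by ROI and see how quickly
# the return concentrates. All names and figures below are made up.
budget_lines = [
    # (budget line, spend, return attributed to that work)
    ("Ad pre-testing",       150_000, 2_500_000),
    ("Brand tracker",        200_000, 1_200_000),
    ("Ad-hoc U&A studies",   250_000,   400_000),
    ("Customer panel",       100_000,   300_000),
    ("Segmentation refresh", 100_000,   150_000),
]

total_spend = sum(spend for _, spend, _ in budget_lines)
total_return = sum(ret for _, _, ret in budget_lines)

# Highest return per dollar first.
ranked = sorted(budget_lines, key=lambda line: line[2] / line[1], reverse=True)

cum_spend = cum_return = 0
for name, spend, ret in ranked:
    cum_spend += spend
    cum_return += ret
    print(f"{name:22s} ROI {ret / spend:5.1f}x | "
          f"{cum_spend / total_spend:4.0%} of spend -> "
          f"{cum_return / total_return:4.0%} of return")
```

With figures like these, the top line alone accounts for under 20% of spend but over half of the attributed return, which is exactly the kind of picture an ROI audit is meant to surface.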

Are you a Customer Insights budget holder?

If so, then I recommend you run a ROI audit, if you do not already do so. Even if your CFO isn’t currently asking you for your ROI report, I believe it will only be a question of time before she/he does, and when that time comes, unlike Sharon in the story above, you need to be prepared.

If your company currently considers customer insights as a cost item and not an investment, then preparing and presenting an ROI audit report to senior management will help change that viewpoint and help protect, if not even grow, your budget.

If you happen to work agency-side…

…with client budget responsibility, then my request to you is not to just passively wait and see if your clients come to you with a ROI audit request, but to proactively seek to work with your clients to understand and improve the return on investment you are helping them create.

Forming a win-win ROI partnership to co-create a brighter future

Over the next few years, I believe that more and more companies will make customer-centricity an even more central part of their strategy, and more and more companies will see the effective use of market / customer data and insights as a key driver of competitive advantage.

Customer Insight budget-holders therefore can have an even more critical role to play in guiding business decisions, but they will need to prove their worth and demonstrate the return on investment they are delivering now and can deliver in the future. The onus is on those working agency-side to help give them the ammunition needed to do this.

I ask you to consider making a ROI audit the foundation of a strong client-agency partnership, which unites everyone behind a common objective that will help ensure that everyone’s budgets grow.

Part 2 of this post shares advice on how to set up a ROI Audit and will be released after IIeX North America. If you are going to be in Atlanta for IIeX and want to get further insight into this topic, please come listen to the panel discussion in track 2 on Tuesday from 10.40 to 11.20, during which Andrew Cannon (GRBN), Simon Chadwick (Cambiar), Kathy Cochran (Boston Consulting Group), Lisa Courtade (Merck) and Alex Hunt (Brainjuicer) will share their thoughts on what insights teams and research companies alike should do to add value and deliver a stronger return on investment. Welcome!

 


5 Reasons Online Studies Fail

Based on his experience over the last 12 years as an online qual researcher, Ray Fischer shares five reasons why most online qual studies do not deliver on expectations.


By Ray Fischer, CEO, Aha! Online Research Platform

Let’s admit it: many market researchers are either uncomfortable with online qual or don’t get the results they expect, and therefore shy away from it. That’s too bad, because the new wave of online qual tools and techniques is producing incredible insights for clients who have discovered the benefits of both the technology and the best practices built on years of learning. Based on my experience over the last 12 years as an online qual researcher, I have seen five reasons why most online qual studies do not deliver on expectations. And all of them are fixable…with the right amount of experience and skill. Here is my take:

1. Not Enough Experience

Online qual can be a bit of a black box if you have not used the method before.  It is a bit like skydiving – you might want a guide attached to you on your first jump or two, but after that you’ll feel like an old pro.  In those early studies, make sure your platform provider is committed to your success; they should offer study design consulting and a dedicated project manager to share best practices along the way.  The same is true if you are new to online or are simply trying a different platform.  All platforms are not the same, nor are the services and support they offer.

2. A Boring Activity/Discussion Guide

The discussion (or activity) guide needs to be clear, concise, and dynamic.  Go beyond a battery of open-ended questions and use the variety of projective techniques that modern platforms offer.  A study that goes beyond open-ends and mixes in respondent video, collages, perceptual maps, social activities, and storytelling will make things SO much more interesting to your respondents and your clients.

3. Committee Approach to Study Design

Avoid the committee approach where everyone gets to add in everything they could ever want to learn in one study. Don’t let it become a free-for-all. I’ve seen this more than a few times: you create a mountain of unstructured data loaded with redundancy and irrelevance, ultimately detracting from your objectives. Not only will the data haul yield insufficient results, it will also bore your respondents. A key sign that the committee has left its mark is when you see comments like “I just answered that question…3 times!” Stick to your guns and assure clients that the insights will come out if the questions and projective exercises are well thought out and diverse.

4. Lack of Communication

I firmly believe that communication with respondents – from the beginning of the recruiting process through the completion of the study – is key. Be clear with respondents in the screener: share exactly what the study is about, why they are important to the research, how much time is expected of them, how many days the study will take, and which activities they are required to complete in what time frame. Moderators – send a morning note to everyone each day of the study giving them group encouragement and letting them know what they are doing on that particular day. Send at least one probe to all respondents on day 1 telling them, personally, how much you appreciate their contributions.

5. Insufficient Incentives

Nothing will discourage a respondent more than doing a lot more work than they anticipated when recruited. Typically, a multi-day study should require a respondent to commit at least 30 minutes per day. If the study is interesting and well-designed, respondents will often spend a lot more time sharing because they want to, not because they have to. Typical incentives for online qualitative tend to be $100 for 3 days, $125 for 4, and so on. Store trips including video and/or pictures with added open- and closed-ends should include an additional $25+. Of course the numbers can vary, but these are pretty tried and true guidelines. I have heard of a few clients who use $0.50–$1 per minute that they expect the respondent to engage. I encourage higher incentives if the budget will allow.
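To turn the guideline figures above into something reusable, here is a rough, hypothetical incentive calculator based on the per-minute heuristic mentioned in the paragraph above; the default rate, the store-trip bump and the example values are illustrative assumptions, not fixed rules.

```python
def suggested_incentive(days, minutes_per_day, rate_per_minute=0.75,
                        store_trip=False):
    """Rough per-respondent incentive for a multi-day online qual study.

    Uses the $0.50-$1.00 per expected minute heuristic mentioned above
    (defaulting to $0.75), plus roughly $25 extra for a store trip with
    photos/video. Guideline figures only; adjust to audience and budget.
    """
    incentive = days * minutes_per_day * rate_per_minute
    if store_trip:
        incentive += 25
    return round(incentive)

# Example: a 4-day study at ~40 minutes per day, including a store trip.
print(suggested_incentive(days=4, minutes_per_day=40, store_trip=True))  # ~145
```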

After reading this you may think a few of these points are a blinding glimpse of the obvious.  And they are.  The lessons learned are pretty straightforward:  Keep it simple. Pay attention to the basics of good research. Work with an online qual platform that is intuitive and user-friendly, and most importantly, is supported by seasoned consumer researchers.  With a skilled team guiding you through the process, you should EXPECT better results with your next online study.

 


Online Panels: The Black Sheep of Market Research – Part 2

Panel companies have the background, knowledge, tools and technologies to help make online market research great again.


By Adriana Rocha

A few months ago, I wrote the first part of this article, “Online Panels – The Black Sheep of Market Research?”, and the positive reaction and feedback were inspiring and overwhelming. So, in this second part, I’d like to explore why I believe panel companies have the background, knowledge, tools and technologies to help online market research be great again, as well as how they can “turn the table” and lead innovation in this industry.

Having built and managed online panels for so many years, I’ve felt first-hand the pains and frustrations of panel companies with the bad surveys they have to field. In our case, for example, we have tried to be creative and tested several ways to improve UX, from developing research games, to building 3D worlds where respondents can participate with their avatars, to designing beautiful survey templates, and, more recently, applying gamification techniques. Regardless, the years have proven that all of those methods still depend on one vital ingredient to improve the user experience: how well the questionnaire is written and designed.

It has been a difficult mission to get our clients to write user-friendly surveys, or to treat UX as a priority; however, we have learned a lot by listening to and collecting feedback from our users. And then we realized we could go far beyond delivering consumer data based only on surveys, drawing increasingly on data spontaneously shared by users, including their experiences with products and brands, as well as their mobile and social data. The more data our panelists share, the greater the potential to apply such data to extract consumer intelligence, as well as to improve user experiences with market research. That is where Machine Learning, Deep Learning and AI can play a key role in understanding consumers, and also in helping improve surveys. See a few examples below:

  • Based on users’ profiles, interests and behavior, panels don’t need to keep sending users repeated questions and surveys. They can use existing data and ML algorithms to answer known questions and ask only the ones that are genuinely needed (see the sketch after this list);
  • Using data shared by users about their experiences with products and brands (e.g. product reviews and customer-service satisfaction), panels can apply ML methods and deliver brand KPIs, product preferences, etc.;
  • Panel companies have extensive profiling data about their members, from socio-demographics to thematic profiles on health, travel, technology, etc. By combining such data with users’ own social media data (e.g. Facebook or Twitter), panels can provide market researchers with insights they will never get by just analyzing public social media;
  • Using historic survey data and input from both researchers and panelists, panels can create “smart surveys”, that is, surveys that improve over time, recommending the right questions to the right audiences.
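As a purely illustrative sketch of the first idea above – inferring answers the panel can already predict and only re-asking where the model is uncertain – the snippet below uses a generic scikit-learn classifier on made-up profile data. It is an assumption-laden toy, not a description of any particular panel platform.

```python
# Hypothetical sketch: predict a known profiling answer from existing panel
# data and only re-ask members the model is unsure about. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Pretend profile features: age, household size, urban (0/1), past purchases.
X_history = rng.normal(size=(5000, 4))
# Pretend answers to a profiling question already asked in past surveys.
y_history = (X_history[:, 0] + 0.5 * X_history[:, 3] > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_history, y_history)

# New survey: 1,000 panelists with the same profile features available.
X_new = rng.normal(size=(1000, 4))
confidence = model.predict_proba(X_new).max(axis=1)

ask_again = confidence < 0.8  # only re-ask where the model is uncertain
print(f"Answer inferred for {np.mean(~ask_again):.0%} of members; "
      f"question shown to the remaining {np.mean(ask_again):.0%}.")
```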

Well, the application of machine learning and multi-source panel data is an example of how old and new technologies can make a big impact and create new opportunities in the Market Research industry. As commented before, I still believe in a bright future for online research and panels, at least the ones who are taking the right steps now. Am I dreaming too high?

(BTW, I’ll be at IIeX in two weeks presenting “The Next Generation of Online Research: when Machine Learning empowers Surveys”, and a platform designed to implement it. I would love to continue the conversation with those of you who will attend the conference.)

 


Jeffrey Henning’s #MRX Top 10: The Research Renaissance in the Age of Analytics

Of the 10,441 unique links shared on #MRX in the past two weeks, here are 10 of the most retweeted…


  1. Time for a Research Renaissance? – Paul Laughlin argues that corporate market researchers, often facing pressure from internal Big Data and analytics teams, need to bring an understanding of customer emotion to their work to differentiate themselves from their number-driven colleagues. And to get their results across they need to develop multiple styles of influencing their coworkers.
  2. Market Research Needs Less Insight (Fast) – Christopher Martin of FlexMR argues that if every research agency claims its unique selling point is providing insight, then it’s not a unique selling point at all. Agencies must do a better job differentiating themselves beyond simply providing insights.
  3. GreenBook Research Industry Trends (GRIT) Report – The latest wave of this twice-annual industry benchmark looks at the attitudes of researchers around adopting automation, the future of sample, and seeking training.
  4. Young ESOMAR Society Pitch Competition – Researchers under 30 are invited to participate in a competition at the 2016 ESOMAR Congress, which will be in New Orleans in September. Winners of the round of 60-second presentations will earn a spot for a Pecha Kucha presentation (20 slides shown for 20 seconds each).
  5. IIeX North America – This year’s Insight Innovation Exchange will once again be held in Atlanta: June 13-15.
  6. Does Machine Learning Signal the End for MR Pros? – Sinead Hasson goes full Star Trek: “Not all insights can be gained from logic. The ‘right’ decision isn’t always made from an operational analysis, or as a result of a statistical forecast. It’s why Mr. Spock needed Captain Kirk… machine learning is Kirk delegating to Spock so he can get the broadest possible picture, upon which he can then impose his ego-centric, ever-so-human judgement.”
  7. How to Understand the Next Generations and their Trends for Guaranteed Reach – Peggy Bieniek interviews Jane Buckingham of Trendera about when to take a generational approach to marketing and when to look at “mindset over the market.”
  8. The OmniChannel Movement Forces Retailers to Move Faster to Keep Up with Shoppers – In another interview for The Market Research Blog, Alana Joy Feldman of Bayer HealthCare looks at the importance of linking digital and physical shopper marketing: “Access to information via technology is now the norm, which has created an omniscient shopper who has taken control of the retail experience.”
  9. Affectiva Raises $14 million to Bring Apps, Robots Emotional Intelligence – Writing in TechCrunch, Lora Kolodny discusses how Affectiva is diversifying from market research to adding emotional intelligence (recognition of faces and non-verbal cues) to technology products and services.
  10. Should We Ditch Segmentation, Targeting and Positioning? – Nigel Hollis of Millward Brown argues against taking a one-size-fits-all approach to segmentation. For instance, a needs-based segmentation might be perfect for developing a new product portfolio – but completely useless for a mass-media marketing campaign.

Note: This list is ordered by the relative measure of each link’s influence in the first week it debuted in the weekly Top 5. A link’s influence is a tally of the influence of each Twitter user who shared the link and tagged it #MRX, ignoring retweets from closely related accounts. Only links with a research angle are considered.


Putting Memory Under the Microscope

What can be done to overcome the challenges around recall- and memory-bias in market research?


By David Paull

At the upcoming IIeX North America conference, I’ll be moderating a panel on the topic: How Flawed Recall and Memory Bias Pollute Market Research and What Can Be Done About It. The panel is part of a broader program centered on exposing the challenges around recall- and memory-bias in market research. The idea for the program came out of a multitude of conversations with clients and colleagues about both the challenges they’ve encountered with recall-based research approaches and the growing demands and tightening budgets market researchers are facing.

The epiphany came when I tried to reconcile these demands with the continued dependence on recall-based methods and approaches that produce questionable or unusable results. How exactly do you deliver “more for less…” when you’re basing outcomes and conclusions on flawed methods that depend on respondents answering questions about what they think they thought?

With this program, I’ve had the opportunity to moderate and participate in discussions with a panel of experts from both the market research and academic sides of the aisle to flesh out their encounters with flawed recall and memory bias in their research. The discussions have brought to light many things we, in the market research community, already knew about recall-based research but have failed to admit or do much about—such as evidence that memory is malleable and can be biased by all sorts of factors including time, interviewer bias, other respondents, etc. Our discussions have also uncovered some new areas worthy of further exploration. Of particular interest is what market research can learn from academic studies of memory manipulation. Unlike in market research, the academic work being done by our expert, Dr. Elizabeth Loftus of the University of California, Irvine, actively introduces factors to manipulate and impact memory, and she knows, when asking her study participants questions, what the right answers should be.

Understanding which factors impact memory the most will help us design methods and approaches to better control for and, when appropriate, mitigate those effects. We look to uncover those specific factors and what we, in the market research community, can and should do to address them as our discussion continues at IIeX and beyond. We hope you’ll join us.

——————————————

This series of articles is part of a broader program, developed and sponsored by Dialsmith, centered on exposing the challenges around recall- and memory-bias in market research. The program consists of a series of discussions with experts from the market research and academic communities and features a live panel session at IIeX North America and a webinar later this year.


People Come First, Or Do They?

Customer centricity needs everyone in the organisation to think customer first in everything they do. It necessitates exceeding and not just meeting their expectations, handling their complaints more quickly & effectively, and co-creating the future with them.


Editor’s Note: This post is part of our Big Ideas Series, a column highlighting the innovative thinking and thought leadership at IIeX events around the world. Denyse Drummond-Dunn will be speaking at IIeX North America (June 13-15 in Atlanta). If you liked this article, you’ll LOVE IIeX NA. Click here to learn more.

By Denyse Drummond-Dunn, President and Chief Catalyst, C3Centricity

Every year I attend IIeX in Atlanta and Amsterdam. Each time I count down the days to the event with a frenzy of excited anticipation. This is the only place I know where “normal” business people, with curious minds, can hear about the latest new thinking on insight. Yes CES may have the groundbreaking technologies, but it’s IIeX that brings us the more practical inspiration for real change.

In my own small way, I will be a part of this. I will be challenging everyone to think beyond the exciting new technologies. I will be reminding participants that it’s the customers who define a business.

Marketing is mutating and the role of the CMO is changing. This is not (only) due to technology or big data, although these have certainly put additional strain on the profession. No, it’s because the customer too is changing, and faster than ever before. The annual marketing plans are insufficient to capture the risks & challenges we face as a result of this transformation.

Several Global CPG companies, including P&G & Nestle, have moved from marketing to brand building, but I don’t believe this is the answer. From what I have seen, they may have changed the name, but they continue to develop their brands & communications in exactly the same way.

People care about solutions not brands, so brand building won’t get companies any closer to their customers than did marketing. And their demand for immediate gratification means that they won’t tolerate poor service, sub-optimal products or slow responses.

I believe that there is a big change that business needs to embrace today. An imperative for all organisations to speed their journey to increased customer centricity. It’s no longer an option & unfortunately it’s not a destination either. Executives can’t just talk about it, they have to be seen to be placing the consumer at the heart of business.

It still surprises me that companies delay walking the talk of customer centricity. After all, it makes sound financial sense. According to Forrester, businesses that prioritise the customer experience grow three times faster than the S&P index! Know any companies that wouldn’t like that sort of progress?

So what is customer centricity really about? After all, putting the customer at the heart of business sounds easy doesn’t it? Well it’s not, because it involves a culture change and that is the biggest hurdle. Customer centricity needs everyone in the organisation to think customer first in everything they do. Seeing every decision they take from their customers’ perspective. It necessitates exceeding and not just meeting their expectations, handling their complaints more quickly & effectively, and co-creating the future with them. In a word “involving” them in everything that is done.

Businesses today are more transparent than ever. Their customers scrutinise and judge everything they do. Putting the customer first is the only way to build and maintain their trust and advocacy.


Causation Matters

Whether or not we are conscious of it, much of marketing research involves ideas about causation.


By Kevin Gray

Whether or not we are conscious of it, much of marketing research involves ideas about causation, for example that a new product failed because it didn’t meet certain consumer needs, or that sales are up because our new ad is working.

“The Environment and Disease: Association or Causation?” is a must-read paper on the topic of causation.  Written by eminent statistician Sir Austin Bradford Hill, the paper is regarded as a classic by statisticians and has been highly influential in the formulation of health, safety and environmental regulations in many countries. 

__________________________________________________________________________

“The Environment and Disease: Association or Causation?” by Sir Austin Bradford Hill CBE DSC FRCP (hon) FRS (Professor Emeritus of Medical Statistics, University of London) in Proceedings of the Royal Society of Medicine, 58 (1965), 295-300.

Amongst the objects of this newly-founded Section of Occupational Medicine are firstly ‘to provide a means, not readily afforded elsewhere, whereby physicians and surgeons with a special knowledge of the relationship between sickness and injury and conditions of work may discuss their problems, not only with each other, but also with colleagues in other fields, by holding joint meetings with other Sections of the Society’; and secondly, ‘to make available information about the physical, chemical and psychological hazards of occupation, and in particular about those that are rare or not easily recognized’.

At this first meeting of the Section and before, with however laudable intentions, we set about instructing our colleagues in other fields, it will be proper to consider a problem fundamental to our own. How in the first place do we detect these relationships between sickness, injury and conditions of work? How do we determine what are physical, chemical and psychological hazards of occupation, and in particular those that are rare and not easily recognized?

There are, of course, instances in which we can reasonably answer these questions from the general body of medical knowledge. A particular, and perhaps extreme, physical environment cannot fail to be harmful; a particular chemical is known to be toxic to man and therefore suspect on the factory floor. Sometimes, alternatively, we may be able to consider what might a particular environment do to man, and then see whether such consequences are indeed to be found. But more often than not we have no such guidance, no such means of proceeding; more often than not we are dependent upon our observation and enumeration of defined events for which we then seek antecedents. In other words we see that the event B is associated with the environmental feature A, that, to take a specific example, some form of respiratory illness is associated with a dust in the environment. In what circumstances can we pass from this observed association to a verdict of causation? Upon what basis should we proceed to do so?

I have no wish, nor the skill, to embark upon philosophical discussion of the meaning of ‘causation’. The ‘cause’ of illness may be immediate and direct; it may be remote and indirect underlying the observed association. But with the aims of occupational, and almost synonymous preventive, medicine in mind the decisive question is whether the frequency of the undesirable event B will be influenced by a change in the environmental feature A. How such a change exerts that influence may call for a great deal of research. However, before deducing ‘causation’ and taking action we shall not invariably have to sit around awaiting the results of the research. The whole chain may have to be unraveled or a few links may suffice. It will depend upon circumstances.

Disregarding then any such problem in semantics we have this situation. Our observations reveal an association between two variables, perfectly clear-cut and beyond what we would care to attribute to the play of chance. What aspects of that association should we especially consider before deciding that the most likely interpretation of it is causation?

(1) Strength. First upon my list I would put the strength of the association. To take a very old example, by comparing the occupations of patients with scrotal cancer with the occupations of patients presenting with other diseases, Percival Pott could reach a correct conclusion because of the enormous increase of scrotal cancer in the chimney sweeps. ‘Even as late as the second decade of the twentieth century’, writes Richard Doll (1964), ‘the mortality of chimney sweeps from scrotal cancer was some 200 times that of workers who were not specially exposed to tar or mineral oils and in the eighteenth century the relative difference is likely to have been much greater.’

To take a more modern and more general example upon which I have now reflected for over fifteen years, prospective inquiries into smoking have shown that the death rate from cancer of the lung in cigarette smokers is nine to ten times the rate in non-smokers and the rate in heavy cigarette smokers is twenty to thirty times as great. On the other hand the death rate from coronary thrombosis in smokers is no more than twice, possibly less, the death rate in non-smokers. Though there is good evidence to support causation it is surely much easier in this case to think of some feature of life that may go hand-in-hand with smoking – features that might conceivably be the real underlying cause or, at the least, an important contributor, whether it be lack of exercise, nature of diet or other factors. But to explain the pronounced excess of cancer of the lung in any other environmental terms requires some feature of life so intimately linked with cigarette smoking and with the amount of smoking that such a feature should be easily detectable. If we cannot detect it or reasonably infer a specific one, then in such circumstances I think we are reasonably entitled to reject the vague contention of the armchair critic ‘you can’t prove it, there may be such a feature’.

Certainly in this situation I would reject the argument sometimes advanced that what matters is the absolute difference between the death rates of our various groups and not the ratio of one to the other. That depends upon what we want to know. If we want to know how many extra deaths from cancer of the lung will take place through smoking (i.e. presuming causation), then obviously we must use the absolute differences between the death rates – 0.07 per 1,000 per year in non-smoking doctors, 0.57 in those smoking 1-14, 1.39 for 15-24, and 2.27 for 25 or more daily. But it does not follow here, or in more specifically occupational problems, that this best measure of the effect upon mortality is also the best measure in relation to etiology. In this respect the ratios of 8, 20 and 32 to 1 are far more informative. It does not, of course, follow that the differences revealed by ratios are of any practical importance. Maybe they are, maybe they are not; but that is another point altogether.
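As an editorial aside (not part of Hill’s original text), the ratio-versus-absolute-difference contrast in the preceding paragraph can be reproduced directly from the death rates quoted there:

```python
# Editorial illustration only – not part of Hill's 1965 paper.
# Lung-cancer death rates per 1,000 doctors per year, as quoted above.
rates = {
    "non-smokers": 0.07,
    "1-14 cigarettes/day": 0.57,
    "15-24 cigarettes/day": 1.39,
    "25+ cigarettes/day": 2.27,
}
baseline = rates["non-smokers"]
for group, rate in rates.items():
    print(f"{group:21s} absolute excess {rate - baseline:+.2f} per 1,000/yr; "
          f"ratio {rate / baseline:4.1f} : 1")
```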

We may recall John Snow’s classic analysis of the opening weeks of the cholera epidemic of 1854 (Snow 1855). The death rate that he recorded in the customers supplied with the grossly polluted water of the Southwark and Vauxhall Company was in truth quite low – 71 deaths in each 10,000 houses. What stands out vividly is the fact that the small rate is 14 times the figure of 5 deaths per 10,000 houses supplied with the sewage-free water of the Lambeth Company.

In thus putting emphasis upon the strength of an association we must, nevertheless, look at the obverse of the coin. We must not be too ready to dismiss a cause-and-effect hypothesis merely on the grounds that the observed association appears to be slight. There are many occasions in medicine when this is in truth so. Relatively few persons harboring the meningococcus fall sick of meningococcal meningitis. Relatively few persons occupationally exposed to rat’s urine contract Weil’s disease.

(2) Consistency: Next on my list of features to be specially considered I would place the consistency of the observed association. Has it been repeatedly observed by different persons, in different places, circumstances and times?

This requirement may be of special importance for those rare hazards singled out in the section’s terms of reference. With many alert minds at work in the industry today many an environmental association may be thrown up. Some of them on the customary tests of statistical significance will appear to be unlikely to be due to chance. Nevertheless whether chance is the explanation or whether a true hazard has been revealed may sometimes be answered only by a repetition of the circumstances and the observations.

Returning to my more general example, the Advisory Committee to the Surgeon-General of the United States Public Health Service found the association of smoking with cancer of the lung in 29 retrospective and 7 prospective inquiries (US Department of Health, Education and Welfare 1964). The lesson here is that broadly the same answer has been reached in quite a wide variety of situations and techniques. In other words, we can justifiably infer that the association is not due to some constant error or fallacy that permeates every inquiry. And we have indeed to be on our guard against that.

Take, for instance, an example given by Heady (1958). Patients admitted to hospital for operation for peptic ulcer are questioned about recent domestic anxieties or crises that may have precipitated the acute illness. As controls, patients admitted for operation for a simple hernia are similarly quizzed. But, as Heady points out, the two groups may not be in pari materia. If your wife ran off with the lodger last week you still have to take your perforated ulcer to hospital without delay. But with a hernia you might prefer to stay at home for a while – to mourn (or celebrate) the event. No number of exact repetitions would remove or necessarily reveal that fallacy.

We have, therefore, the somewhat paradoxical position that the different results of a different inquiry certainly cannot be held to refute the original evidence; yet the same results from precisely the same form of inquiry will not invariably greatly strengthen the original evidence. I would myself put a good deal of weight upon similar results reached in quite different ways, e.g. prospectively and retrospectively.

Once again looking at the obverse of the coin there will be occasions when repetition is absent or impossible and yet we should not hesitate to draw conclusions. The experience of the nickel refiners of South Wales is an outstanding example. I quote from the Alfred Watson Memorial Lecture that I gave in 1962 to the Institute of Actuaries:

‘The population at risk, workers and pensioners, numbered about one thousand. During the ten years 1929 to 1938, sixteen of them had died from cancer of the lung and eleven from cancer of the nasal sinuses. At the age-specific death rates of England and Wales at that time, one might have anticipated one death from cancer of the lung (to compare with the 16), and a fraction of a death from cancer of the nose (to compare with the 11). In all other bodily sites cancer had appeared on the death certificate 11 times and one would have expected it to do so 10 – 11 times. There had been 67 deaths from all other causes of mortality and over the ten years’ period 72 would have been expected at the national death rates. Finally division of the population at risk in relation to their jobs showed that the excess of cancer of the lung and nose had fallen wholly upon the workers employed in the chemical processes.

‘More recently my colleague, Dr. Richard Doll, has brought this story a stage further. In the nine years 1948 to 1956 there had been, he found, 48 deaths from cancer of the lung and 13 deaths from cancer of the nose. He assessed the numbers expected at normal rates of mortality as, respectively, 10 and 0.1.

‘In 1923, long before any special hazard had been recognized, certain changes in the refinery took place. No case of cancer of the nose has been observed in any man who first entered the works after that year, and in these men there has been no excess of cancer of the lung. In other words, the excess in both sites is uniquely a feature in men who entered the refinery in, roughly, the first 23 years of the present century.

‘No causal agent of these neoplasms has been identified. Until recently no animal experimentation had given any clue or any support to this wholly statistical evidence. Yet I wonder if any of us would hesitate to accept it as proof of a grave industrial hazard?’ (Hill 1962).

In relation to my present discussion I know of no parallel investigation. We have (or certainly had) to make up our minds on a unique event; and there is no difficulty in doing so.

(3) Specificity: One reason, needless to say, is the specificity of the association, the third characteristic which invariably we must consider. If as here, the association is limited to specific workers and to particular sites and types of disease and there is no association between the work and other modes of dying, then clearly that is a strong argument in favor of causation.

We must not, however, over-emphasize the importance of the characteristic. Even in my present example there is a cause and effect relationship with two different sites of cancer – the lung and the nose. Milk as a carrier of infection and, in that sense, the cause of disease can produce such a disparate galaxy as scarlet fever, diphtheria, tuberculosis, undulant fever, sore throat, dysentery and typhoid fever. Before the discovery of the underlying factor, the bacterial origin of disease, harm would have been done by pushing too firmly the need for specificity as a necessary feature before convicting the dairy.

Coming to modern times, the prospective investigations of smoking and cancer of the lung have been criticized for not showing specificity – in other words the death rate of smokers is higher than the death rate of non-smokers from many causes of death (though in fact the results of Doll and Hill, 1964, do not show that). But here surely one must return to my first characteristic, the strength of the association. If other causes of death are raised 10, 20 or even 50% in smokers whereas cancer of the lung is raised 900 – 1000% we have specificity – a specificity in the magnitude of the association.

We must also keep in mind that diseases may have more than one cause. It has always been possible to acquire a cancer of the scrotum without sweeping chimneys or taking to mule-spinning in Lancashire. One-to-one relationships are not frequent. Indeed I believe that multi-causation is generally more likely than single causation though possibly if we knew all the answers we might get back to a single factor.

In short, if specificity exists we may be able to draw conclusions without hesitation; if it is not apparent, we are not thereby necessarily left sitting irresolutely on the fence.

(4) Temporality: My fourth characteristic is the temporal relationship of the association – which is the cart and which is the horse? This is a question which might be particularly relevant with diseases of slow development. Does a particular diet lead to disease or do the early stages of the disease lead to those particular dietetic habits? Does a particular occupation or occupational environment promote infection by the tubercle bacillus or are the men and women who select that kind of work more liable to contract tuberculosis whatever the environment – or, indeed, have they already contracted it? This temporal problem may not arise often, but it certainly needs to be remembered, particularly with selective factors at work in the industry.

(5) Biological gradient: Fifthly, if the association is one which can reveal a biological gradient, or dose-response curve, then we should look most carefully for such evidence. For instance, the fact that the death rate from cancer of the lung rises linearly with the number of cigarettes smoked daily, adds a very great deal to the simpler evidence that cigarette smokers have a higher death rate than non-smokers. The comparison would be weakened, though not necessarily destroyed, if it depended upon, say, a much heavier death rate in light smokers and a lower rate in heavier smokers. We should then need to envisage some much more complex relationship to satisfy the cause and effect hypothesis. The clear dose-response curve admits of a simple explanation and obviously puts the case in a clearer light.

The same would clearly be true of an alleged dust hazard in industry. The dustier the environment the greater the incidence of disease we would expect to see. Often the difficulty is to secure some satisfactory quantitative measures of the environment which will permit us to explore this dose-response. But we should invariably seek it.

(6) Plausibility: It will be helpful if the causation we suspect is biologically plausible. But this is a feature I am convinced we cannot demand. What is biologically plausible depends upon the biological knowledge of the day.

To quote again from my Alfred Watson Memorial Lecture (Hill 1962), there was

‘…no biological knowledge to support (or to refute) Pott’s observation in the 18th century of the excess of cancer in chimney sweeps. It was lack of biological knowledge in the 19th that led a prize essayist writing on the value and the fallacy of statistics to conclude, amongst other “absurd” associations, that “it could be no more ridiculous for the stranger who passed the night in the steerage of an emigrant ship to ascribe the typhus, which he there contracted, to the vermin with which bodies of the sick might be infested.” And coming to nearer times, in the 20th century there was no biological knowledge to support the evidence against rubella.’

In short, the association we observe may be one new to science or medicine and we must not dismiss it too light-heartedly as just too odd. As Sherlock Holmes advised Dr. Watson, ‘when you have eliminated the impossible, whatever remains, however improbable, must be the truth.’

(7) Coherence: On the other hand the cause-and-effect interpretation of our data should not seriously conflict with the generally known facts of the natural history and biology of the disease – in the expression of the Advisory Committee to the Surgeon-General it should have coherence.

Thus in the discussion of lung cancer the Committee finds its association with cigarette smoking coherent with the temporal rise that has taken place in the two variables over the last generation and with the sex difference in mortality – features that might well apply in an occupational problem. The known urban/rural ratio of lung cancer mortality does not detract from coherence, nor the restriction of the effect to the lung.

Personally, I regard as greatly contributing to coherence the histopathological evidence from the bronchial epithelium of smokers and the isolation from cigarette smoke of factors carcinogenic for the skin of laboratory animals. Nevertheless, while such laboratory evidence can enormously strengthen the hypothesis and, indeed, may determine the actual causative agents, the lack of such evidence cannot nullify the epidemiological associations in man. Arsenic can undoubtedly cause cancer of the skin in man but it has never been possible to demonstrate such an effect on any other animal. In a wider field John Snow’s epidemiological observations on the conveyance of cholera by water from the Broad Street Pump would have been put almost beyond dispute if Robert Koch had been then around to isolate the vibrio from the baby’s nappies, the well itself and the gentleman in delicate health from Brighton. Yet the fact that Koch’s work was to be awaited another thirty years did not really weaken the epidemiological case though it made it more difficult to establish against the criticisms of the day – both just and unjust.

(8) Experiment: Occasionally it is possible to appeal to experimental, or semi-experimental, evidence. For example, because of an observed association some preventive action is taken. Does it in fact prevent? The dust in the workshop is reduced, lubricating oils are changed, persons stop smoking cigarettes. Is the frequency of the associated events affected? Here the strongest support for the causation hypothesis may be revealed.

(9) Analogy: In some circumstances it would be fair to judge by analogy. With the effects of thalidomide and rubella before us we would surely be ready to accept slighter but similar evidence with another drug or another viral disease in pregnancy.

Here then are nine different viewpoints from all of which we should study association before we cry causation. What I do not believe – and this has been suggested – is that we can usefully lay down some hard-and-fast rules of evidence that must be obeyed before we can accept cause and effect. None of my nine viewpoints can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required as a sine qua non. What they can do, with greater or less strength, is to help us to make up our minds on the fundamental question – is there any other way of explaining the set of facts before us, is there any other answer equally, or more, likely than cause and effect?

Tests of Significance

No formal tests of significance can answer those questions. Such tests can, and should, remind us of the effects that the play of chance can create, and they will instruct us in the likely magnitude of those effects. Beyond that they contribute nothing to the ‘proof’ of our hypothesis.

Nearly forty years ago, amongst the studies of occupational health that I made for the Industrial Health Research Board of the Medical Research Council was one that concerned the workers in the cotton-spinning mills of Lancashire (Hill 1930). The question that I had to answer, by the use of the National Health Insurance records of that time, was this: Do the workers in the cardroom of the spinning mill, who tend the machines that clean the raw cotton, have a sickness experience in any way different from that of the other operatives in the same mills who are relatively unexposed to the dust and fibre that were features of the card room? The answer was an unqualified ‘Yes’. From age 30 to age 60 the cardroom workers suffered over three times as much from respiratory causes of illness whereas from non-respiratory causes their experience was not different from that of the other workers. This pronounced difference with the respiratory causes was derived not from abnormally long periods of sickness but rather from an excessive number of repeated absences from work of the cardroom workers.

All this has rightly passed into the limbo of forgotten things. What interests me today is this: My results were set out for men and women separately and for half a dozen age groups in 36 tables. So there were plenty of sums. Yet I cannot find that anywhere I thought it necessary to use a test of significance. The evidence was so clear cut, the differences between the groups were mainly so large, the contrast between respiratory and non-respiratory causes of illness so specific, that no formal tests could really contribute anything of value to the argument. So why use them?

Would we think or act that way today? I rather doubt it. Between the two world wars there was a strong case for emphasizing to the clinician and other research workers the importance of not overlooking the effects of the play of chance upon their data. Perhaps too often generalities were based upon two men and a laboratory dog while the treatment of choice was deduced from a difference between two bedfuls of patients and might easily have no true meaning. It was therefore a useful corrective for statisticians to stress, and to teach the need for, tests of significance merely to serve as guides to caution before drawing a conclusion, before inflating the particular to the general.

I wonder whether the pendulum has not swung too far – not only with the attentive pupils but even with the statisticians themselves. To decline to draw conclusions without standard errors can surely be just as silly? Fortunately I believe we have not yet gone so far as our friends in the USA where, I am told, some editors of journals will return an article because tests of significance have not been applied. Yet there are innumerable situations in which they are totally unnecessary – because the difference is grotesquely obvious, because it is negligible, or because, whether it be formally significant or not, it is too small to be of any practical importance. What is worse, the glitter of the t table diverts attention from the inadequacies of the fare. Only a tithe, and an unknown tithe, of the factory personnel volunteer for some procedure or interview, 20% of patients treated in some particular way are lost to sight, 30% of a randomly-drawn sample are never contacted. The sample may, indeed, be akin to that of the man who, according to Swift, ‘had a mind to sell his house and carried a piece of brick in his pocket, which he showed as a pattern to encourage purchasers.’ The writer, the editor and the reader are unmoved. The magic formulae are there.

Of course I exaggerate. Yet too often I suspect we waste a deal of time, we grasp the shadow and lose the substance, we weaken our capacity to interpret the data and to take reasonable decisions whatever the value of P. And far too often we deduce ‘no difference’ from ‘no significant difference.’ Like fire, the chi-squared test is an excellent servant and a bad master.

The Case for Action

Finally, in passing from association to causation I believe in ‘real life’ we shall have to consider what flows from that decision. On scientific grounds we should do no such thing. The evidence is there to be judged on its merits and the judgment (in that sense) should be utterly independent of what hangs upon it – or who hangs because of it. But in another and more practical sense we may surely ask what is involved in our decision. In occupational medicine our object is usually to take action. If this be operative cause and that be deleterious effect, then we shall wish to intervene to abolish or reduce death or disease.

While that is a commendable ambition, it almost inevitably leads us to introduce differential standards before we convict. Thus on relatively slight evidence we might decide to restrict the use of a drug for early-morning sickness in pregnant women. If we are wrong in deducing causation from association no great harm will be done. The good lady and the pharmaceutical industry will doubtless survive.

On fair evidence we might take action on what appears to be an occupational hazard, e.g. we might change from a probably carcinogenic oil to a non-carcinogenic oil in a limited environment and without too much injustice if we are wrong. But we should need very strong evidence before we made people burn a fuel in their homes that they do not like or stop smoking the cigarettes and eating the fats and sugar that they do like. In asking for very strong evidence I would, however, repeat emphatically that this does not imply crossing every ‘t’, and swords with every critic, before we act.

All scientific work is incomplete – whether it be observational or experimental. All scientific work is liable to be upset or modified by advancing knowledge. That does not confer upon us a freedom to ignore the knowledge we already have, or to postpone the action that it appears to demand at a given time.

Who knows, asked Robert Browning, but the world may end tonight? True, but on available evidence most of us make ready to commute on the 8:30 the next day.

__________________________________________________________________________

Source: https://www.edwardtufte.com/tufte/hill


Will MR Follow the Path of Business Intelligence?

What can MR learn from Business Intelligence to gain more share of mind and market?


Editor’s Note: This post is part of our Big Ideas Series, a column highlighting the innovative thinking and thought leadership at IIeX events around the world. Steve August will be speaking at IIeX North America (June 13-15 in Atlanta). If you liked this article, you’ll LOVE IIeX NA. Click here to learn more.

By Steve August, Chief Marketing Officer, FocusVision

It may be hard to believe, but the term ‘Business Intelligence’ (BI) was coined in the 19th century. In his 1865 work, “Cyclopedia of Commercial and Business Anecdotes,” Richard Millar Devens first used the term to describe how banker Sir Henry Furnese obtained an understanding of political issues and the market before his competitors:

“Throughout Holland, Flanders, France, and Germany, he maintained a complete and perfect train of business intelligence. The news… was thus received by him first.”

BI came into modern parlance with IBM scientist Hans Peter Luhn’s 1958 article, “A Business Intelligence System.” Luhn – considered the father of BI – described it as “an automatic system developed to disseminate information to the various sections of any industrial, scientific or government organization.”

BI truly exploded into the business mainstream in the 1990s, after Gartner analyst Howard Dresner used the phrase to describe data storage and data analysis technologies such as DSS (decision support systems) and EIS (executive information systems).

Since the 1990s BI has continued to grow in both mind and market share, with the tools making it ever easier for non-technical staff to do reporting, attracting vast amounts of investment capital, expanding to the cloud, and morphing into Big Data.

So what does all this have to do with the Insights Industry? What can MR learn from BI to gain more share of mind and market?

Going back to Luhn’s definition of BI, note how he puts a big emphasis on disseminating information to the organization. This is where I believe the insights industry needs to focus more if we want to reach BI’s level.

In our conferences and publications, so much of our discourse is focused on collection and analysis methods, technologies, and practices. And all of these have merit. However, we pay far less attention to effectively disseminating insights to the organization. Ultimately, the worth of even the best-executed research, using the latest methods and technology, is measured by its impact on the organization.

It is not a stretch to argue that market research is actually a critical aspect of business intelligence: it illuminates the human stories underlying the data, providing the why to BI and Big Data’s what. But to truly succeed, MR’s information needs to be received by the people who need it.

However, comparatively few organizations have set up the equivalent of a BI system for insights. It is a complex exercise requiring coordination amongst internal departments, technology and panel providers, and research agencies. But the benefits and opportunities are huge for organizations that create such systems.

One organization that has created an insights system is Eli Lilly. That’s why I am especially excited to be co-presenting on June 13th at 11:20am at IIeX NA, “Architecting an Insights Nervous System” with David Moore, Senior Director of Global Market Research at Eli Lilly.


Hot Dogs, Sandwiches and The Digital Reward: How Incentives Are Changing the Market Research Space

With instant delivery on multiple digital platforms, along with the opportunity for pushing out branding messages, virtual cards are only going to become more and more mainstream.


Editor’s Note: This post is part of our Big Ideas Series, a column highlighting the innovative thinking and thought leadership at IIeX events around the world. Jonathan Price will be speaking at IIeX North America (June 13-15 in Atlanta). If you liked this article, you’ll LOVE IIeX NA. Click here to learn more.

By Jonathan Price, CEO, Virtual Incentives

Have you ever considered whether or not a hot dog is a sandwich? There was a piece a few months ago in USA Today about this subject – a topic that has been raging on the Internet for years. In this particular article, the author attempts to put the debate to bed once and for all – yes, a hot dog is a sandwich. Served on a bun. Between two pieces of bread.

While this “controversy” is superficial, it can be superimposed on some other business issues – particularly those that have been impacted by technology. In the rewards industry we can ask: is a virtual card still a card? It isn’t the rectangular, plastic item that most think of when the word “card” comes to mind. However, the “card” is simply a medium to convey the value of a specific account. Does it need to be a tangible object in the hand?

Digitally based rewards fill today’s consumer demands for speed, convenience and personalization. Going virtual with the “card” is the natural reinvention of a product that has become almost antiquated. A traditional, physical card has limitations that are unnecessary in today’s technology-driven world. With a physical card you encounter issues like shipping, waste, production, timing and limits on personalization and engagement.

In the market research industry in particular, it’s important to wield every technological advancement we have at our disposal. Respondents have become fickle and are constantly pressed with multiple demands on their time. Our world is moving at breakneck speed. Rewarding respondents instantly and digitally is one way to gain a more complete data picture. Digital “cards” – of all kinds – benefit research companies by offering:

  • Speed: Instant gratification is the name of the game. With the right digital rewards provider, companies can deliver customized incentives in real time, or at least the same day. With no gap or downtime, digital rewards meet the consumer demand for speed.
  • Customization: Delivering digital rewards allows the creation of a memorable interaction…ever more important as consumer attention spans become shorter and shorter.
  • Personalization: Digital rewards can provide the multiple touchpoints that respondents need in order to form a connection. Technology has allowed people to demand “me first” interactions on every level of their lives, and personalization isn’t just a nice touch anymore. It’s expected.
  • Mobile: Latest estimates show that there are 3 billion smartphone users worldwide, and this number continues to rise. Virtual rewards can be delivered straight to mobile devices, so recipients can use them directly from their phones.
  • Delivery: Emailing rewards is fast, easy and reliable. From delivery on multiple platforms all the way to managing lists and data, the delivery vehicle for your rewards program is key to a successful program and to measuring important metrics.

While a hot dog may just be one more product that falls within the sandwich category, a virtual card is an innovation that will likely push the physical card out of existence. Consumers need what they need when they need it. Companies need to appear savvy and modern, and to utilize technology to its full advantage, in order to remain credible with their audiences. With instant delivery on multiple digital platforms, along with the opportunity for pushing out branding messages, virtual cards are only going to become more and more mainstream. I’d argue that they go beyond being an addition to a category, and will eventually completely define the space. One last question: would you like mustard with that?

 


4 Tips for Successful Customer Co-Creation

Kevin Lonnie shares his best practices in customer co-creation.


Editor’s Note: This post is part of our Big Ideas Series, a column highlighting the innovative thinking and thought leadership at IIeX events around the world. Kevin Lonnie will be speaking at IIeX North America (June 13-15 in Atlanta). If you liked this article, you’ll LOVE IIeX NA. Click here to learn more.

By Kevin Lonnie

One of the central tenets behind successful co-creation is to fail early and often. The idea is to keep moving iteratively towards a successful solution.

That said, it’s preferable to start closer to the solution in the first place, so the ramp-up between multiple failures and ultimate success is reduced.

With that in mind, dear reader, I would like to share four tips for successful customer co-creation – tips I have earned through my own “extended” learning curve:

  1. With increasing pressure on the research function to demonstrate ROI, customer co-creation provides an opportunity to boost the odds of a successful product introduction.  But be very selective and wait for the perfect opportunity.  Find some low-hanging fruit, where nothing has worked particularly well in a long while.  Aim for the fences, but protect yourself by keeping expectations low.
  2. To inspire collaboration in a co-creation environment, make gamification a central tenet of the experience.  For example, have participants work together during certain activities to earn participation rewards.  This appeals to their collaborative instincts as well as their sense of competition, so they work together to earn more points.
  3. Don’t just settle for brainstorming.  Bring together your internal team, integrate customer inspiration and in a short sprint, actually build something!  Keep your time commitments short and build something actionable on a shoestring budget.  Fast, cheap and actionable is music to procurement’s ears!
  4. If you want customer co-creation to take root in your company, it’s critical that your internal clients are active participants in this agile process. Assure them that customers will inspire & focus their creativity.  This way, the customer is viewed as a friend and not as a threat to their authority.  NEVER underestimate the combined powers of inertia & internal politics.

Interested in hearing more about best practices in customer co-creation and ways to maximize success in your organization?  We’ll be hosting a workshop on Tuesday (June 14th) at IIeX, along with a few of our clients who will share their tips (a.k.a. war stories) on introducing customer co-creation within their respective organizations.

And if you would like to do some reading on co-creation before IIeX, I recommend “Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days.”

The book is written by a team from Google Ventures.  Check it out and see if it doesn’t stimulate your thinking on leading collaborative sprints in your organization.
