Introducing Savio: A Marketplace for the Insights Industry

Posted by Leonard Murphy Thursday, August 10, 2017, 7:00 am
In response to trends in marketing research as well as in the broader economy, GreenBook has been working to create an online marketplace for the insights industry. We’re now launching the platform in beta and would like to invite you to explore Savio - the marketplace where research buyers and experts connect, transact, and communicate, all within one application.


As technology reshapes our world, the tension between human work and automation has become more and more apparent. The effect of the perennial “cheaper, faster, better” client priorities is compounded by market dynamics such as increasing competition, budget pressures, technology disruption, and disintermediation. The insights industry, like most service-based sectors, is still trying to find a balance between “man and machine”.

However, even as research technology solutions grew in use and share of wallet, a strange thing happened: clients were often not equipped to make full use of these solutions, because most projects still required some level of service from study design specialists, programmers, project managers, moderators, translators, coders, tabbers, designers, report writers, strategy consultants, analysts, and others.

This created a challenge: tech companies are usually NOT built to deliver consultative service; it’s not part of their DNA or their business model. Tech companies are managed in a radically different way than service businesses. Tech is all about low variable costs and scalable growth, while service is inherently about high variable costs and is difficult to scale. The two really don’t sit well together under one roof.

Every tech company in the insights industry has been forced to deal with this issue. Some have bitten the bullet and built a service capability out of sheer necessity. Some have tried to build affiliate programs or partner networks, which have worked fairly well but still require a lot of hand-holding. None of these approaches, however, has produced a true bridge between tech platforms and professional services organizations or individual experts.

Until now.

Leveraging a proven marketplace model, a deep understanding of the needs of clients, suppliers, and technology companies in our industry, and input from key stakeholders, GreenBook has been working on a “gig economy”-inspired solution for the past 18 months. We’re now launching the Savio platform in beta.

In simplest terms, Savio is a talent marketplace where buyers and suppliers connect, transact projects, and communicate all within one platform. Unilever has already called Savio the “Fantasy Football League” of market research services. We believe it is a solution that meets the unique needs of our industry.

Not familiar with the “gig economy”? WhatIs.com has a good overview:

A gig economy is an environment in which organizations contract with independent workers for short-term engagements.

A study by Intuit predicted that by 2020, 40 percent of American workers would be independent contractors. In this digital age, the workforce is increasingly mobile and work can increasingly be done from anywhere, so that job and location are decoupled. Freelancers can select among temporary jobs and projects around the world, while employers can select the best individuals for specific projects from a larger pool than that available in any given area.

In a gig economy, businesses save resources but also have the ability to contract with experts for specific projects who might be too high-priced to maintain on staff. Ideally, the model is powered by independent workers selecting jobs that they’re interested in, rather than one in which people are forced into a position where, unable to attain employment, they pick up whatever temporary gigs they can land.

This model has been growing steadily in recent years. Fast Company writes:

As a recent PwC report asserts, “Economic shifts are redistributing power, wealth, competition, and opportunity around the globe,” and “the expectations of organizations and the aspirations of the people who want to work for them [are] diverging into three distinct ‘worlds’ of work”:

  • The world of corporate capitalism
  • The world of social responsibility
  • The world of smaller, collaborative networks and specialization

It’s interesting that PwC was cited; the firm drank its own Kool-Aid and restructured its business by shifting a big chunk of its workforce into a marketplace called the Talent Exchange.

We paid close attention to what PwC and other marketplaces did to ensure Savio could build on their good work while being purpose-built for the insights industry.

Savio was built to function as an extension to online research tools that don’t offer a service component so users of these tools can get help with anything they need: design, moderation, data analysis, consulting, and virtually any other aspect of the insights process. It’s also a stand-alone application, so anyone can access it whether they are using an online research tool or not.

The system includes ratings & reviews (for both sellers and buyers), communication and project management tools, and payment processing. We have begun the process of having Savio approved by a number of major brands as part of their procurement systems, with the goal of eliminating the need for individual experts and supplier organizations in the marketplace to undergo the painful approval process themselves.

Savio is NOT an RFP engine, a directory, or a pay-to-play lead-gen network. There is no cost to register in Savio and, like all true marketplaces, Savio pays for itself via a transaction fee.

While it’s obvious that it will take some time for Savio to get fully up and running, we’re ramping up now, inviting the industry to participate in the beta launch in three ways:

  • Be a seller: Create a profile and be part of the talent pool available for buyers to choose from. Access new business opportunities by helping research buyers with their projects in your area of expertise (questionnaire design, field management, moderation, analysis, strategy, and many more). Work flexible hours to match your lifestyle or other needs.

  • Be a buyer: Use Savio to find contractors to work on your projects: short- or long-term, ad hoc or ongoing. Savio gives you scale to flex up or down with the right talent to meet your needs. Explore the marketplace to find experienced research pros ready to work on your project and deliver the service you need.

  • Be a partner: Integrate Savio into your online research technology and let the marketplace deliver the service you don’t provide but your users are looking for. The robust API is fully documented, making integration a matter of hours, not weeks. Use Savio to offer service so you can focus on software. By integrating with Savio, you will increase customer satisfaction and make your application more “sticky”.

The Savio team is currently working with 30 online research platforms, exploring and implementing integrations. These online research partners will be announced as they become ready to connect their users with Savio. We’re still at the very beginning, but our beta partners are already taking advantage of a unique opportunity to enhance their offerings.

Equally important, we are also working closely with 20 brands to onboard their teams onto Savio over the next few months.

Savio features include:

  • Multiple Profiles: You can have both an individual and a company profile.
  • Flexible Payments: Prefer to work off a retainer? Would you rather be paid by milestones? Or just invoice your client once the project is completed?
  • Teams & Permissions: Whether you are a buyer or a seller, you can manage multiple people within your team.
  • Sometimes you’re a vendor, and sometimes a client: Being an expert doesn’t mean you can’t post a project yourself.
  • Build Reputation: Get good reviews, and buyers will send you more work.
  • Project Room: Real-time chat built right in. Share documents. Review and sign off on project milestones.

A marketplace has benefits to anyone who functions as a freelancer or individual consultant today, and the positive implications for online research tools and their users are clear as well. But what about agencies? After meeting with many agency leaders, we think it will be a boon for them as well.

Savio will enable any agency to scale globally by contracting talent on demand. Agencies can participate in the talent pool themselves by placing their teams into the marketplace as consultants, with the agency remaining the billing agent. Long term, it will even be possible for agencies to train and certify others on their proprietary techniques, expanding their business through a network of partners.

As with any marketplace, Savio may be disruptive for some, but it’s “incrementally disruptive”. Other marketplaces are targeting our sector without the understanding and the deep relationships necessary to design a solution that limits disruption and instead builds upon the way the industry already operates, in a new, positive way.

As Unilever also told us, “It’s the right solution at the right time.” We hope you’ll join us in making that true!

What’s Hot on Collaborata

Welcome to our next post featuring two projects now offered on Collaborata. GreenBook is happy to support a platform whose mission is to get more research funded. We believe in the idea of connecting clients and research providers to co-sponsor projects. We invite you to Collaborate!

“The Kringle Report: Kids’ Holiday Wishlists Quantified”

(Separate Boys’ and Girls’ Editions)

 

Purpose: To provide the first-ever holiday wish-list tracker quantifying kids’ awareness and purchase interest in toys and games. These wish lists will be augmented by parent interviews to capture the power of “the nudge factor.”

Pitch: Each year as fall approaches, toy and game companies make bets as to what will be on kids’ holiday wish lists, often with only intuition and retailer reaction to guide them. “The Kringle Report” offers a never-before-available look at kids’ wish lists, starting in September and tracking them all the way to and beyond the big day.

Who’s Behind This: ConsumerQuest, Inc., a firm with more than 25 years’ experience conducting children’s research in the toy industry.

Funding stage: Currently funding

Watch a 60-second video previewing this study

For more details, click here or email info@collaborata.com

 

“Scorning Stereotypes: Gender and Identity for Millennials & Gen Z”

Purpose: To provide a deep understanding of young consumers’ shifting perceptions toward gender and personal identity by exploring everyday contexts, key influencers, and how brands can gain and maintain relevance.

Pitch: Young consumers are increasingly looking to brands and influencers that reject stereotypes and defy convention when it comes to gender and identity.

Leveraging ethnography and semiotics, this project delves into the behaviors, attitudes, and key human truths illuminating this young audience’s perceptions of personal identity and the implications for brands.

Who’s Behind This: Spinach Ltd., a U.K.-based boutique research consultancy with expertise in Millennials and Generation Z, is leading this research. Sign Salad, a global semiotics cultural-insight agency, is partnering on this project.

Funding stage: Currently funding

For more details, click here or email info@collaborata.com

Visionary: ESOMAR Congress at 70

ESOMAR celebrates 70 years by reflecting on the past and looking to the future of insights.

By Rhiannon Bryant, ESOMAR Congress Programme Manager

70 years ago at Manchester University in England, two little-known engineers by the names of Freddie Williams and Tom Kilburn developed the aptly named Williams-Kilburn tube. The tube was the first high-speed, entirely electronic memory system. This was the first instance of random-access computer memory, and arguably with it the modern age of computers was born.

Around the same time, against the backdrop of a divided world piecing itself together in the aftermath of global conflict, 29 market and opinion researchers from eight countries met in Amsterdam to establish an apolitical, multi-country professional community dedicated to a better understanding of the world around them. This was, in effect, the first ESOMAR Congress.

Fast forward to 2017, and those early academic experiments in computing have launched a profound shift in the way we view the world and conduct business.  The digital era has also provided the insights industry with significant opportunities and challenges. So it feels fitting that as the world celebrates 70 years of modern computing, ESOMAR is celebrating 70 years as a key figure in the development of the industry with the 70th “town hall” of the insights community, ESOMAR Congress.  

This year at Congress, in Amsterdam, ESOMAR welcomes speakers and delegates from over 70 countries as they come together to share knowledge, discoveries and innovations that will drive the future of the industry for another 70 years. Appropriately the theme for this year’s event is “Visionary” and we’ll be reaching into the worlds of artificial intelligence, automation, big data and virtual reality, to show you just how far we have come in the last 70 years.

At this year’s Congress we’re also delighted to be sharing the stage with brands that have a long history and heritage of consumer research. Coca-Cola, PepsiCo, Nestlé, Air France-KLM, Heineken, the BBC, and Danone – all brands with long and illustrious histories of consumer insights – will be there to share the cutting-edge research that maintains their place as some of the world’s best-known brands.

You can hear how Coca-Cola Japan adopted a tiny UK automation start-up to solve their biggest advertising headache, and how the techniques were then adopted by the global team; how AOL are using video storytelling to amplify research in the boardroom and increase stakeholder engagement; how Microsoft are making sense of big data; and how Rotary International are using global research to permanently improve the lives of the less fortunate.

Most importantly, since 1947 Congress has been the home of the global research community’s discussion of where we are and where we are heading. It’s a time to celebrate the value and legacy of research and insights, and to share knowledge as a community. So join us on 10 – 13 September in Amsterdam to celebrate market research and marketing at one of the biggest events of the year.

Jeffrey Henning’s #MRX Top 10: Millennials and Gen Z; the Curious Computer and the Future of AI

What's trending in #MRX? Get up to date with this recap on the latest buzz in the industry.

By Jeffrey Henning

Of the 2,921 unique links shared on the Twitter #MRX hashtag over the past two weeks, here are 10 of the most retweeted…

  1. Millennial Myths and Realities – Ipsos MORI argues that “many of the claims made about millennial characteristics are simplified, misinterpreted or just plain wrong.” They’ve created an online assessment to help you evaluate your own understanding of this generation.
  2. Online Survey Engagement is an Oxymoron – Reg Baker, executive chairman of the Market Research Institute International, writes “Survey participation has shrunk to a smaller and smaller slice of the population. The vast majority of people are giving us the ultimate dis, ‘I wouldn’t do this even if you paid me.'”
  3. TV & Social Media: Working Together in Harmony – What happens when you mix popular TV shows with social media? You get trending topics, live-tweet-alongs, and massive amounts of reactions to everything.
  4. The Curious Computer – The Market Research Society has rounded up its thinking on uses of artificial intelligence for market research.
  5. Upgrade and Partnerships for the World’s MR Job Board – The Research Club and Women In Research (WIRe) have added the MrWeb job board to their websites, broadening the reach of job listings by an additional 20,000 researchers (gross).
  6. 50 Smartest Companies 2017 – MIT Technology Review runs down its top 50 smartest companies, defined as those that combine an effective business model with innovative technology. The top 3 this year are Nvidia, SpaceX, and Amazon.
  7. The Not So Ordinary – Writing for RWConnect, Hari Blanch Bennett of Kantar Added Value shares a case study of a new skincare brand called The Ordinary. Stripping away fancy, gender-focused packaging and opting instead for quality and transparency, The Ordinary is anything but ordinary as it rattles the beauty industry’s cage.
  8. Tick-Tock: It’s Time to Start Paying Attention to Gen Z – Jailene Peralta, herself a Generation Z member, looks at characteristics of those born 1995 or later: entrepreneurial, globally aware, and disliking corporate BS.
  9. AI May Soon Replace Even the Most Elite Consultants – Writing for Harvard Business Review, Barry Libert and Megan Beck of Open Matters highlight how Amazon’s Alexa enables clients of UBS Wealth Management to ask UBS financial questions—and what this might foretell for providers of professional services.
  10. Why Market Research is Important for the NHS – No longer simply reacting to the perceived needs of patients and their families, the NHS is turning to market research to understand what patients really need, and what they feel.

Note: This list is ordered by the relative measure of each link’s influence in the first week it debuted in the weekly Top 5. A link’s influence is a tally of the influence of each Twitter user who shared the link and tagged it #MRX, ignoring retweets from closely related accounts. The following links are excluded: links promoting RTs for prizes, links promoting events in the next week, pages not in English, and links outside of the research industry (sorry, Bollywood).

Presenting The Top 150 Global Companies In Market Research! Kinda…

Posted by Leonard Murphy Sunday, August 6, 2017, 6:30 pm
Lenny Murphy compiles his own list of the top 150 market research companies in the world, explains why it's both right and wrong at the same time, and why that needs to change for the industry to thrive.

We at GreenBook are proud to present our first Global 150 Research List!

Before we dive into the list though, first a disclaimer:

The list below is wrong.

But it’s mostly right as well.

Sorta. But not really. Maybe….

Confused? Yeah, so was I, for years now. That is why I undertook this exercise, because not only are our industry definitions all over the place, but getting a solid read on company performance to evaluate how the industry is doing (whatever your definition!) is challenging.

I am a huge fan of the iconic AMA Gold Top 50 Report (formerly the Honomichl Report) and the variations of it used in the ESOMAR Global Market Research Report, the MRS ResearchLIVE report, and RFL Communications’ Global Top 50 Research Organizations report. All give a unique perspective on the industry that is valuable, and I applaud the work that these organizations put into them.

These reports have evolved over the years to encompass an ever-expanding definition of what constitutes market research, but have left some critical gaps by not including sample companies, technology platforms, and organizations such as Google, Facebook, and Equifax – companies that fit within other categories yet have active research divisions that are players in the market. So although these reports are incredibly useful and important, I think they are incomplete views of the industry, and that is a significant issue.

Here is why: for many years the need for data-driven insights has generated categories adjacent to research that have very different business models to MR, and very different capital structures. Business Intelligence, EFM, CX, Big Data, AI, Analytics, Data Visualization, Biometrics, Web Analytics, Social Media Analytics, etc. are all examples of defined categories that are largely based on the fundamentals of market research, and all claim “insights” as one of their primary use cases. All also have very different valuation formulas for their businesses and access to capital vs. traditional MR. In many cases, companies that should rightfully just be considered research companies (Qualtrics and SurveyMonkey come to mind) bend over backwards and twist themselves in messaging knots to avoid being considered research companies, positioning themselves as much sexier “Enterprise Feedback Management platforms” or “consumer intelligence data platforms”. And no wonder; those two companies alone have a combined market cap of around $4 billion! Similar companies that have embraced the market research categorization and have great financial fundamentals can’t claim the same.

The crux of the issue is this: if we cannot develop a comprehensive view of the value of our industry, how can we convince the rest of the world of our value? Clients, investors, VC firms, Private Equity groups, strategic acquirers, even prospective employees use industry reports and rankings to validate their decisions to engage with companies. We as an industry have a very hard time delivering on this for a variety of reasons, but we need to get a lot better at it quickly.

I make a big part of my living advising the stakeholders I just listed on this space and helping to identify companies to work with, and even I, perhaps one of the most well-connected individuals in our industry, struggle to define the category and identify companies that are attractive to engage with based on their financial performance. The needed info is often very hard to come by.

These reports are not exercises in collective navel gazing or bragging rights: they are vital information resources that play a critical role in facilitating the growth of the industry.

At IIeX in Atlanta Simon Chadwick presented his view on the structure of the industry as well as an analysis of the flow of capital related to the industry. It’s a great piece of work and helps to underline the challenges MR has in redefining itself (and the companies in it) in order to be better positioned for growth.

As an industry we could do far worse than adopting Simon’s structure as the basis for analyzing the industry. In fact, I hereby challenge ESOMAR, the MRS, the Insights Association, the AMA Gold Report, and RFL to use this model for all future reviews of the industry and in developing industry company rankings.

We need a consistent definition of the industry and a consistent means of evaluating the companies in it – one that is not only comprehensive but can also support the argument that the research industry is the core of all marketing insights-related categories and that the companies participating in it deserve serious attention as growth opportunities.

And that brings us back to the list below. As this is mostly an exercise in information curation, I combined all of the most recent reports from the four sources above, and then added my own list of companies that I thought needed to be included – primarily sample providers, data collection technology companies, and various research suppliers that I suspected were large enough to fit.

In my seemingly contradictory opening for this piece I said the list was both right and wrong. Kind of. Here is why. I’m quite sure it’s missing companies that should be on there, especially at the lower levels. In some cases the revenue for these companies was estimated when I couldn’t find an accurate source. I did not include all of the companies that Simon listed, simply due to time constraints, but it’s an aspiration of mine to re-work this list with that goal in mind in the future. If and when I do so, it will look VERY different than the one below.

The total annual sales revenue attributed to each company is based on several sources, usually the original reports I combined. For the companies I added, some sales revenue data is pulled directly from financial statements or other filings (actual data), while other data is estimated or modeled based on a host of sources.

Similar to the model used in the GRIT 50 Most Innovative Companies, we have rolled up branches, subsidiaries, divisions, etc. into the parent company, while also attempting as best we could to only consider revenue from “marketing intelligence” operations, which generally meant the revenue came from a few key activities:

  • Access to consumers for research purposes
  • Data collection services and/or technology (quant, qual, behavioral, and syndicated)
  • Data Analysis and Reporting
  • Insights-based consulting or advisory work

If you are the CEO of a company who looks at this and says “Hey, my company is $20M, why am I not listed!”, hold your righteous indignation: the reason is that neither I nor the original report authors I built on knew that information. Update your Hoovers, TechCrunch, CB Insights, Owler, or even LinkedIn profiles, or better yet, join one of the relevant trade associations and disclose that info when asked. Transparency is the key to efforts like this.

Finally, I can’t reiterate enough that this is an intellectual exercise more than anything. While “the whole is greater than the sum of its parts” is certainly true here, and I think the revenue figures are pretty close even when estimated, there is much room to argue specifics and the list is not complete. It is only as complete as the knowledge of myself and the authors of the four reports I based it on.

That said, I do believe it is the most comprehensive view of the players that make up the research industry as loosely defined today.

Now, without further ado, here is my take on the Top 150 companies in Market Research. Congratulations to all the companies listed; take pride in the fact that your company is kicking butt and taking names!

 

Rank Company 2016 Revenues ($)
1 Optum $7,333,000,000
2 Nielsen $6,309,000,000
3 Video Research Ltd. $6,288,000,000
4 Equifax $3,985,740,000
5 Kantar $3,847,000,000
6 QuintilesIMS $3,301,000,000
7 Ipsos $1,972,800,000
8 Gartner $1,829,700,000
9 GfK $1,677,200,000
10 Verisk Analytics $1,270,900,000
11 IRI $1,026,700,000
12 Acxiom Corp. $880,000,000
13 Tableau Software $826,900,000
14 Experian Consumer insight $563,000,000
15 Westat, Inc. $512,000,000
16 Rocket Fuel $456,900,000
17 Wood Mackenzie $442,800,000
18 Dunnhumby $429,000,000
19 Intage Holdings $419,200,000
20 Harte-Hanks Marketing $404,400,000
21 IDC $400,000,000
22 Informa Financial Intelligence $393,000,000
23 NPD Group $341,000,000
24 J.D. Power $340,000,000
25 comScore $339,800,000
26 Macromill $300,000,000
27 Simon-Kucher & Partners $266,700,000
28 Qualtrics $250,000,000
29 Gallup $249,200,000
30 Research Now Group, Inc $238,100,000
31 ICF International $224,000,000
32 Information Services Group $216,500,000
33 Forrester Research $214,500,000
34 SurveyMonkey $200,000,000
35 Toluna $194,400,000
36 DRG (Decision Resources Group) $178,000,000
37 MaritzCX $170,000,000
38 Survey Sampling International $167,900,000
39 Abt SRBI $147,200,000
40 LRW (Lieberman Research Worldwide) $144,400,000
41 GlobalData Plc $135,500,000
42 YouGov $130,000,000
43 Leger $120,000,000
44 Mediametrie $118,800,000
45 ORC International $118,500,000
46 Creston $117,200,000
47 National Research Corp $109,400,000
48 FocusVision $102,000,000
49 PRS IN VIVO $97,000,000
50 NRC health $96,000,000
51 Cello health $95,100,000
52 Teradata UK $90,906,000
53 Euromonitor $88,356,000
54 Confirmit $88,000,000
55 Vision Critical $84,700,000
56 C Space $83,400,000
57 Mintel Group $82,974,000
58 Google Analytics 360 $80,412,750
59 Blueocean Market Intelligence $80,400,000
60 Burke, Inc. $80,000,000
61 Periscope by McKinsey $80,000,000
62 Ebiquity $73,874,000
63 Schlesinger Associates $73,200,000
64 Medallia $70,000,000
65 Convergys Analytics $67,400,000
66 Market Strategies International $65,400,000
67 MarketCast $61,300,000
68 Morpace, Inc. $60,400,000
69 Nepa AB $56,600,000
70 Market Force $55,000,000
71 Service Management Group (SMG) $53,300,000
72 Idea Couture $52,000,000
73 LRA by Deloitte $46,800,000
74 Focus Pointe Global $46,200,000
75 Directions Research $45,000,000
76 Precise Media Monitoring $44,012,000
77 Hanover Research $40,900,000
78 MARU / Matchbox $40,400,000
79 Environics $40,200,000
80 De La Riva Group $40,000,000
81 SKIM $39,600,000
82 Phoenix Marketing International $36,200,000
83 MarketVision Research $35,000,000
84 Radius Global Market Research $33,600,000
85 Simmons Research $33,300,000
86 NatCen $32,933,000
87 Brainjuicer $32,800,000
88 Kelton $32,800,000
89 Fors Marsh Group $31,100,000
90 Provokers $30,500,000
91 Isobar Marketing Intelligence $30,000,000
92 Insites Consulting $30,000,000
93 Hall & Partners $29,500,000
94 SSRS $28,600,000
95 Decision Analyst $28,200,000
96 Twitter $28,100,000
97 Double Helix $28,000,000
98 The Link Group $28,000,000
99 Flamingo Research $26,148,000
100 MMR Research Worldwide $25,980,000
101 Zappistore $25,000,000
102 20/20 Research $24,600,000
103 Forethought Research $24,000,000
104 Gongos $23,200,000
105 Stylus $23,000,000
106 Acturus $22,700,000
107 Lucid $22,000,000
108 The Research Partnership $21,814,000
109 NAXIoN $21,800,000
110 KS&R $20,500,000
111 QuestionPro $20,000,000
112 Bellomy Research $19,800,000
113 Chadwick Martin Bailey $18,800,000
114 Illuminas $18,300,000
115 Frost & Sullivan $18,280,000
116 Hypothesis Group $17,700,000
117 Future Thinking $17,416,000
118 RTi Research $17,000,000
119 Azure Knowledge $16,000,000
120 Cint $16,000,000
121 Luth $16,000,000
122 Netbase $16,000,000
123 WorldOne Research $15,500,000
124 Hay Group Insight $15,050,000
125 Amazon Mechanical Turk $15,000,000
126 Hotspex Inc $15,000,000
127 Populus group $13,489,000
128 BDRC Continental $13,302,000
129 Defaqto $13,190,000
130 SurveyGizmo $13,000,000
131 Eolas International $13,000,000
132 Incite Marketing Planning $12,926,000
133 Markit Economics $12,445,000
134 Adelphi international Research $12,100,000
135 Q Research Solutions $12,000,000
136 Datamonitor $11,653,000
137 InCrowd Inc $11,400,000
138 Firefish $11,237,000
139 Realeyes $11,000,000
140 MSW ARS $11,000,000
141 Business Research group $10,974,000
142 Avention UK $10,788,000
143 Quadrangle $10,678,000
144 Marketing sciences Unlimited $10,618,000
145 Strategy Analytics $10,387,000
146 Brandtrust $10,200,000
147 iCM Research Unlimited $9,963,000
148 2CV $9,865,000
149 The Planning shop international $9,633,000
150 Ameritest $9,200,000

The future of video for MRX

Technology is making the use of video in the market research industry easier, cheaper and faster to discover and communicate insights.

By Alistair Vince

‘Je n’ai fait celle-ci plus longue que parce que je n’ai pas eu le loisir de la faire plus courte’
Blaise Pascal

(“I would have written a shorter letter, but I did not have the time.”)

Time is something everyone seems to be struggling with. So it’s our responsibility as people who work within Insights to help.

Help by being able to communicate insights more succinctly.

To be able to save all our clients time across processing, understanding and using insights.

Without losing the quality.

It’s a great challenge and video is a key part of this.

Whilst technology is enabling us to solve most of these problems, it is also creating more in the process of solving others (e.g. making video easier to produce meant more of it, creating a search/storage problem that has now been solved).

Thirty years ago, the use of video in research was almost non-existent. Now video should be an integral part of the insight jigsaw.

There are a number of reasons why. Firstly, access. Think of the number of people willing to give up their evening to attend a group or invite a stranger into their house. Now compare that to the number of people who might make a film on their smartphone recording something they are already doing, without going anywhere, without anyone else being involved. It also enables consistency across geographical borders. It’s also quicker.

Secondly, ability. Making a video of yourself used to be a novelty; now it’s commonplace. Think of the amount of video made every day – it’s something people have become used to, and as such people are much more comfortable in front of the camera.

Thirdly, discovery. Due to the above two reasons, the likelihood of insight discovery through video is higher now than it’s ever been. Higher, as well as more authentic, as you can get as close to natural/normal as possible: you can observe what happens before asking why.

Finally, impact. Quality video is giving people what all market researchers talk about – the ability to tell a better story, to communicate an insight more effectively.

So why is it still so underused? Could it be that many of the well-documented problems with video still remain in the minds of so many? Are the companies who specialize in video (like us) not doing a good enough job of telling people that’s changed?

Well, here we go. I’ve listed those problems below. They’ve gone away; they simply don’t exist anymore.

  • Audio/image/response quality of video is poor
  • Low respondent engagement/response rate
  • Storage difficulties & costs – whose hard drive is it on?
  • Finding the clip you want from the hours of footage
  • Extracting/editing the bit you want from the video
  • Sharing the video with research company/colleagues/clients
  • High cost

They’ve been solved because of the focus of a multitude of companies in this space, not just us. So if you see one you’re still experiencing, then you’re using the wrong supplier.

A lot of this is possible because of the rapid developments in technology and the collapse of barriers such as storage costs (you’ve seen the graph, I hope).

When we opened up the use of searcheditshare.video to qualitative researchers, agencies and clients, we did so to drive the use of video in market research. Because we have seen first-hand what can be achieved using video within our industry, including, amongst others:

  • Speeding up and generating better quality insights across observation studies (for innovation) with high numbers of respondents (180-200) to gather data points previously covered with in-home visits by multiple individuals. This enabled the client to replace a process which had been the same for years, reducing the cost, increasing the speed, and amplifying the impact of the study across the business.
  • Testing new product, pack, web, app, advert prototypes in multiple countries. Capturing in-usage response and emotional reaction. Using outputs to sell the
  • Bringing to life segmentation studies through authentic, visually stimulating content.

Technology underpins all this. Technology enables the videos to be used.

One client said to us that they would dread the day their boss would come to them and say,

‘You remember that video we had last year when that woman said something about health – can you find it and just send me the bit where she mentions our brand for a presentation I have tomorrow?’

That used to take them 3-4 hrs to do. With systems available now (like ours), that takes about 3-4 minutes to complete. That’s a game changer right there.

We feel strongly that by making video easier, cheaper and faster to work with, we’ll help others uncover its power to discover and communicate insight too.

 

ESOMAR at 70 – The future of research and insights associations

ESOMAR celebrates their 70th anniversary and reflects on how the market research industry has changed over the years, and where it is heading in the future.

By Finn Raben  – Director General, ESOMAR

When I joined ESOMAR 8 years ago, the world of research and insights was in the throes of major changes: the financial crisis was in full swing, changing the market research landscape wholesale, and the “comfort” with which many research associations and societies had operated in previous years was being stripped away. But this stripping away forced ESOMAR, as well as many other industry associations, to refocus on what was important, redefine our key services and ensure our relevance to the industry.

While it may not have felt like it at the time, it can be considered a blessing in disguise. Since that time, further changes and challenges to the industry have presented themselves, but we find ourselves in a better position to tackle them, and even get ahead of them in some instances, thanks to that earlier refocusing exercise. Most pertinently, those challenges aren’t going to go away – indeed they will probably grow and multiply; as our profession evolves so too will the threats and the opportunities we will face, and we will need to continually refocus and improve – kaizen.

As ESOMAR celebrates its 70th birthday this year, it feels to me that insights associations across the world are more important than ever before. Now, many will say “well, you would say that, wouldn’t you!” But as a practitioner of research, I am always impressed by associations’ ability to harness diverse opinions and perspectives and focus them into practical, industry-wide solutions and guidance.

Challenges, as the old adage goes, are of course just opportunities in disguise. But framing those challenges to see the opportunity can sometimes be a difficult task. The most important element of that process is understanding that no one person, researcher, company or association is an island. Furthermore, no one is unique in the problems they face as researchers. In that same vein, I will also say that it is very unusual for ESOMAR, or any of the other global and local research associations, to uniquely have the answers to these challenges, or to be able to convert them into opportunities in isolation.

Community is key, and herein lies the strength (and in some cases the challenge) in each and every research association across the globe. As a self-regulating industry, community and collaboration is where we can find the answers and where we can convert those challenges into opportunities. At its heart ESOMAR facilitates a world-wide discussion amongst practitioners, buyers, providers, associations and legislators around the changes in the industry. It also allows the members of the market research community to work together to move the industry forward and celebrate the value of insights and data in driving societal improvement and business growth. The events, papers, publications, and platforms provided by all our associations allow those with the means to innovate technologically and methodologically to share that knowledge and experience with those that aren’t able to do so. This provision for the sharing of ideas, for collaboratively pushing the boundary of what research can do and achieve for businesses and the public sector, has been the key to our profession’s success for seven decades, and an aspect that will continue to be vital for the health of the industry for (I hope) many decades more.   

Associations, of course, do much more than simply facilitate dialogue! Many associations work extensively with national and regional governments in promoting research to law-makers to ensure that research and data collection can continue to be a self-regulated industry. The core of this work is achieved through updated and relevant codes of ethical practice, along with championing compliance of initiatives such as the GDPR (General Data Protection Regulation). In a landscape where consumer data is being produced on scales never before seen, such association activity is vital in ensuring the continued existence of market and opinion research as we know it.

Many reading this may already be members of a research association, and I hope that support continues in the future. For those that aren’t yet members, supporting an insights association, at a national or global level, will provide you with the resources to see challenges as opportunities – as well as the means to act on them – irrespective of whether you are a client-side or agency researcher.

To find out how data, research and insights are innovating, and how you can ensure success in the coming years, please join ESOMAR as we celebrate our 70th anniversary at ESOMAR Congress in Amsterdam from 10 – 13 September. www.ESOMAR/congress

Vital Statistics You Never Learned…Because They’re Never Taught

Marketing scientist Kevin Gray asks Professor Frank Harrell about some important things we often get wrong about statistics.

By Kevin Gray and Frank Harrell

 

KG: Starting from the beginning, what is statistics and how did it come about? Could you give us a short definition and history of the discipline?

 

FH: That’s a loaded question and best answered by referring readers to the many writings of one of the greatest of our statistician-historians, Stephen Stigler (see for example https://en.wikipedia.org/wiki/History_of_statistics). In a brief nutshell, statistics began as a way to understand the workings of states, productivity, life expectancy, agricultural yields, etc., and to make estimates of things from samples (a statistical example of the latter dates back to the 5th century BCE in Athens). Roughly speaking, statistics has developed into a few broad areas: descriptive (e.g., the usual baseball statistics), inferential (e.g., do baseball hitters have different success probabilities when playing at home?), estimative (e.g., from a factorial experiment, what is the effect of changing baking temperature if we hold the amounts of flour and sugar constant?) and predictive (e.g., financial forecasting or predicting how long a patient will go until a disease recurs).

Concerning a definition for statistics, it is a field that is a science unto itself and that benefits all other fields and everyday life. What is unique about statistics is its proven tools for decision making in the face of uncertainty, understanding sources of variation and bias, and most importantly, statistical thinking. Statistical thinking is a different way of thinking that is part detective work, part skepticism, and part considering alternate takes on a problem. Statistics involves measurement refinement, experimental design, data analysis, inference, and interpretation of trends and evidence.

 

KG: What are the most fundamental things decision makers need to know about statistics in order to use it effectively for decision making?

 

FH: Always the most important issues are understanding the meaning and reliability of measurements and understanding the linkage between interpretation of the data and the experimental design. As data have gotten more voluminous, the average data analyst has gotten more relaxed about design, and we are seeing many data interpretation fiascos as a result (see https://youtu.be/TGGGDpb04Yc for a great example). When there is no design (as in casual data collection) or the design used is not consistent with the project goals (prospective vs. retrospective designs; randomized vs. observational designs, etc.), it is very seldom that a statistical analysis can come to the rescue. A famous quote by one of the founders of modern statistics R. A. Fisher nicely summarizes this issue: “To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of.”  

Regarding measurements, I see a lot of statisticians forgetting the adage “question everything” and trusting the client’s selection or computation of measurements. For example, nice continuous measurements are often categorized, resulting in great loss of information, power, precision, and generalizability. Or an investigator may derive a response variable using a normalization procedure that would be better modeled directly than used to create a ratio. Given a good design and appropriate measurements, the approach to statistical analysis needs to be based on good statistical principles, as I attempted to overview here. Then the result needs to be actionable by estimating things on scales that are useful to the client (e.g., relative treatment effects, predicted risks, life expectancy, Bayesian posterior probabilities). A very common problem with the rise of machine learning (see below) is improper use of classification as opposed to prediction; classification makes too many assumptions for the client and does not provide a gray zone. I discuss this in detail here.

 

KG: What are the main areas or branches within statistics? How do they differ?

 

FH: There are three schools of thought: The most commonly used is frequentist statistics, which involves estimation, significance tests, confidence limits, and hypothesis testing in a scenario of imagined repetitions of a study (sampling distributions). Because of the need to consider sampling distributions and the “sample space,” frequentist methods can get quite complicated and require customized solutions for each sampling scheme, e.g., when one does sequential testing and wishes to stop when there is sufficient evidence for an effect. Statistical statements of frequentist results have been shown to be very difficult for non-statisticians (and some statisticians) to interpret.  

Next comes the Bayesian school of statistics, which actually preceded the frequentist school by more than a century due to the work of Bayes and Laplace. It was not used very much until powerful computers became available to statisticians. The Bayesian approach requires one to specify an anchor/starting point (“prior distribution”), which can require much thought but can also just specify a degree of skepticism to apply to the data. The benefits of going through this step are great – no need to create one-off solutions to complex design/sampling schemes, and Bayes provides directly actionable probabilities – probabilities that effects are positive, for example, as opposed to the frequentist p-values, which are probabilities of making assertions about effects being positive when in fact they are zero.
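To make that contrast concrete, here is a minimal Python sketch (the prior, the effect estimate, and its standard error are all assumed purely for illustration) of a conjugate normal-normal update yielding the kind of directly actionable output described above: the posterior probability that a treatment effect is positive.

```python
# Illustrative sketch only; all numbers below are assumed.
from scipy.stats import norm

prior_mean, prior_sd = 0.0, 1.0   # skeptical prior centered on "no effect"
estimate, se = 0.4, 0.25          # observed effect estimate and its standard error

# Conjugate normal-normal update: precisions add, means are precision-weighted.
post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)
post_mean = post_var * (prior_mean / prior_sd**2 + estimate / se**2)

# Directly actionable: the posterior probability that the effect is positive.
p_positive = 1.0 - norm.cdf(0.0, loc=post_mean, scale=post_var**0.5)
print(round(p_positive, 3))
```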

Finally, there is the likelihood school, which is like the Bayesian school without prior distributions. Likelihood methods, like Bayesian ones, avoid the sample space so are much cleaner, but they mainly provide relative rather than absolute evidence and don’t handle models containing a very large number of parameters. Besides the three schools there are different tools within each school, especially within frequentism – such as the bootstrap, nonparametric methods, and missing data imputation methods.

 

KG: Are machine learning and data science different from statistics in your view?

 

FH: Yes. To oversimplify things, I would say that data science is applied statistics + computer science, with less attention to statistical theory and hypothesis testing, in favor of estimation and prediction. Machine learning is an extremely empirical way of doing statistical modeling, without caring very much about being able to separate the effects of variables. Many machine learning practitioners are well grounded in statistics but many are not. The latter group seems to be constantly reinventing the wheel and using approaches that statistics showed decades ago don’t work.

A hallmark of a good statistician is knowing how to quantify accuracy of estimates and predictions. The latter group of machine learning practitioners have never learned the principles and theory behind measures of predictive accuracy (including proper probability accuracy scores) and are constantly developing “classifiers” when predictions or optimal Bayes decisions were needed for the problem. These classifiers have a host of problems including failing to generalize to new samples with much different outcome frequencies, as discussed in more detail here. Machine learning practitioners also seem obsessed with feature selection and don’t realize that torturing data to attempt in vain to determine the “important” predictors is at odds with getting maximum information out of all the predictors, the latter having a lot to do with maximizing predictive discrimination.
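As a hedged illustration of that point, the short Python sketch below (entirely synthetic data, with an assumed 0.5 classification cutoff) shows how classification accuracy can barely distinguish an informative probability model from a useless prevalence-only one, while a proper probability accuracy score such as the Brier score does.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-(2 * x - 3)))   # a fairly rare outcome
y = rng.binomial(1, p_true)

p_good = p_true                  # an "oracle" probability model
p_bad = np.full(n, y.mean())     # a useless prevalence-only model

def accuracy(p, y, cutoff=0.5):
    return np.mean((p > cutoff) == y)

def brier(p, y):                 # a proper probability accuracy score
    return np.mean((p - y) ** 2)

# Hard classification at a fixed cutoff barely separates the two models...
print(accuracy(p_good, y), accuracy(p_bad, y))
# ...while the Brier score consistently favors the informative probabilities.
print(brier(p_good, y), brier(p_bad, y))
```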

 

KG: You question many common statistical practices and are frequently quite vocal in your criticisms. What are the most important things practitioners get wrong?

 

FH: First, I start with arithmetic. It is amazing how many people don’t know that you don’t add ratios unless they represent proportions for mutually exclusive events. In general, ratios multiply. I see papers all the time that either analyzed ratios without taking the log or that analyzed percent change from baseline, failing to note that the math doesn’t work. Take for example a subject who starts at a value of 1.0 and increases to 2.0. This is a 100% increase. Then consider a subject starting at 2.0 who decreases to 1.0. This is a 50% decrease. The average of 100% and -50% is +25%, whereas the two should cancel, arriving at an average of 0%. Percent change is an asymmetric measure and can’t be used in statistical analysis except under special restrictions.
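A minimal Python sketch of the two-subject example above (the values 1.0 and 2.0 are taken from the text) makes the asymmetry explicit:

```python
import numpy as np

baseline = np.array([1.0, 2.0])
followup = np.array([2.0, 1.0])

pct_change = (followup - baseline) / baseline * 100
print(pct_change, pct_change.mean())   # [100., -50.] -> mean of +25%, a spurious "increase"

log_ratio = np.log(followup / baseline)
print(log_ratio, log_ratio.mean())     # [0.693, -0.693] -> mean of 0: the changes cancel
```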

Regarding the improper addition of ratios, many medical papers add odds ratios or hazard ratios when they should have added the logs of these. A simple example shows why. When developing a risk score suppose that two risk factors have regression coefficients of 1 and -1 in a logistic regression model. The two odds ratios are 2.72 and 0.37. Adding these pretends that both risk factors are harmful when in fact the second risk factor is protective. Change from baseline has a host of other problems as described in my blog. Instead we should be analyzing the raw response variable as a dependent variable, covariate-adjusting for the raw baseline variable. Statisticians and other data analysts need to carefully critique the math being used by their collaborators!  
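The same point as a short Python sketch, using the coefficients of +1 and -1 from the example above:

```python
import numpy as np

coefs = np.array([1.0, -1.0])    # logistic regression coefficients (log odds ratios)
odds_ratios = np.exp(coefs)      # [2.72, 0.37]

print(odds_ratios.sum())         # 3.09: adding the ratios makes both factors look harmful
print(coefs.sum())               # 0.0: adding the log odds ratios lets the effects cancel
```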

Turning to statistical models there are many common pitfalls, including trying to learn too much (using feature selection or estimating too many parameters) for what the sample size allows, resulting in overfitting/overinterpretation; making assumptions such as linearity that are unlikely to be true; trying lots of transformations and pretending the final transformation was pre-specified, destroying the statistical inferential properties of the result (as opposed to using spline functions); using improper accuracy scores; using classification instead of prediction in non-large signal:noise situations; and trying different transformations on the dependent variable or being affected by outliers in that variable, as opposed to using robust semiparametric ordinal regression models. Then there is stepwise regression – don’t get me started …
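As one illustration of those pitfalls, the simulated Python sketch below compares a straight-line fit with a more flexible fit to a smoothly non-linear relationship; a plain cubic polynomial stands in for the flexible fit purely for brevity, whereas the recommendation in the text is regression splines.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2_000
x = rng.uniform(-2, 2, size=n)
y = np.sin(1.5 * x) + rng.normal(scale=0.3, size=n)   # smooth but clearly non-linear

def r_squared(degree):
    yhat = np.polyval(np.polyfit(x, y, degree), x)
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

print(r_squared(1))   # assuming linearity leaves much of the structure unexplained
print(r_squared(3))   # a flexible fit recovers most of it
```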

Dichotomania is one of the greatest crimes against data. This is information-losing, arbitrary, and assumes discontinuous relationships not found in nature. It is virtually never a good idea to categorize a continuous dependent or independent variable.
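A small simulated sketch of that information loss (all data synthetic): splitting a continuous predictor at its median noticeably weakens its observed relationship with the outcome.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)             # y depends smoothly on x

x_split = (x > np.median(x)).astype(float)   # "high" vs "low": the dichotomized version

print(np.corrcoef(x, y)[0, 1])               # ~0.45 with the continuous predictor
print(np.corrcoef(x_split, y)[0, 1])         # ~0.36 after the median split
```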

There are so many other problems we see every day, including the use of ineffective graphics such as pie and bar charts.

 

KG: Finally, statistics is evolving very rapidly and new methods are continuously being developed. What do you think statistics will look like in 10-15 years?

 

FH: Whew – another difficult question!  I am certain we will see Bayesian models used much more frequently as they give us the outputs we really need (forward-time, forward-information-flow probabilities) and allow us to formally incorporate external information, even if that information is, for example, just that it is impossible for a certain treatment to have an odds ratio greater than 10 against an outcome. We will also see more interpretable, flexible, and robust predictive methods, more intuitive and powerful statistical software and graphics, and in general more statistical methods that do not assume normality or rely on large sample theory.

 

Thank you, Frank!

 

Kevin Gray is president of Cannon Gray, a marketing science and analytics consultancy.

Frank Harrell is Professor of Biostatistics and Founding Chair of the Department of Biostatistics at the Vanderbilt University School of Medicine.  He also works as an expert statistical advisor in the Office of Biostatistics, Center for Drug Evaluation and Research, FDA.  He is the author of numerous publications, the influential book Regression Modeling Strategies and the R packages rms and Hmisc. He can be followed on his blog Statistical Thinking.

Empathy² – An Innovation Catalyst

Get a behind the scenes look at how P&G's Sion Agami makes innovation happen in this new series with Jeffrey Resnick.

By Jeffrey Resnick

Prologue

This blog post kicks off a new series, one that explores the opposite side of my Transformation IQ eBook and blog series. While the Transformation IQ series focused on CEOs of research supplier-side firms tackling disruption in the industry, this new series focuses on individuals within client-side organizations who help make innovation happen. These are individuals who have that coveted ‘seat at the table’ with executives. In this first article, Sion Agami, a Research Fellow at P&G, shares his thoughts on innovation and how to be a catalyst that makes it happen.

Empathy² – An Innovation Catalyst

Sion brings the experience of 25+ years in research at P&G. His experience leads him to extol the importance of consumer-centricity as the core of innovation. Sion and his team are at the heart of the intersection of technology and the consumer. When creating a product and package that delights consumers, they also need to find the right midpoint of what is profitable, manageable and can be produced on time. From his perspective, understanding a consumer’s struggle is where innovation begins. Sion believes innovation is driven less by a sexy new tool or methodology and much more by fully understanding the business question and finding the optimal approach to answer it. New research methodologies play an important role but are rarely the silver bullet. Additionally, a researcher’s active participation is a crucial element of successful innovation.

Sion draws a distinction between revolutionary innovation – truly disrupting a market with a product or service that has not previously existed – and evolutionary innovation, where the goal is to better address consumer needs through modifications and improvements to an existing product. Our discussion focused on the evolutionary spectrum. Within this context, he holds a deep conviction on two points:

  1. The insight professional must actively engage in the research process. He or she knows best the product and the value proposition it intends to deliver to the target market. It is through the lens of this understanding that product innovation must occur. This doesn’t eliminate the potential for powerful co-creation between a trusted research partner and his insights team. In his world, research conducted in the absence of a member of his team who is in touch with both the product technology and the consumer is largely a non-starter.
  2. He subscribes to ‘lean’ methodologies, where agile research plays a critical role. Explore a key business question using a rudimentary concept, learn from the results, persevere or pivot, and move on to the next required insight – rapidly. Failure should be fast and cheap. This is a very different approach to product development than historically followed – where the goal was to minimize risk to the organization behind a fully developed product concept that required large-scale investment often over several years. Simply stated – innovation is easier to achieve if you approach it in steps, with each step forward driven by a deep understanding of the consumer’s dilemma.

Augmenting these two basic principles, Sion reflected on several additional themes.

Understanding the holistic customer experience is required to drive customer empathy.
Knowing part of the customer’s story is insufficient. Sion believes we have yet to reach the ceiling on empathy. He believes the future holds achieving “empathy-squared (Empathy²)”: deeply understanding the patterns and habits of individuals and how these affect their daily choices. He believes we too often research consumers in silos – the toothpaste bought, the hair products used, etc. Understanding the consumer holistically requires approaching him or her as a user of multiple brands with multiple connections, going beyond what they say and observing what they do.

Innovation requires transforming data into knowledge to drive action.
Empathy helps create the story behind the numbers, and consumer insights convince organizations to act on the data. In order to capture the insights, creating a consumer model helps a lot, as that is where data is linked with theory: “Data without theory is just trivia, and theory without data is just an opinion.” Consumer models leverage both data and theory, and formalize the consumer story, highlighting the relative importance of product experiences and consumer reactions.

Don’t be afraid to go out on a limb – it won’t always break.
Sion’s current primary area of focus is feminine care. He relayed a story that predates today’s ubiquitous presence of mobile phones and demonstrates the meaning of going out on a limb with a new approach to answer a pressing business question – how could P&G improve the placement of a pantyliner? A timid researcher might ask female respondents to show how they place a pantyliner on a pair of underwear within a focus group or research lab environment. Sion took a less timid approach and asked women to take a picture of their panties with their smartphone (when smartphone penetration was just 15%) once they had placed the pantyliner on the underwear they were wearing. The ability to see the placement in a realistic environment led to a deeper, better understanding of how P&G could improve the process.

Co-create research solutions with passionate partners.
Sion sees leveraging the intelligence of strong research partners as accretive to the innovation process. He identifies research partners suitable for co-creation as those who bring not only new tools but also passion and the ability to innovate – the conviction that their solution will provide insights other solutions cannot, penetrating deeper into consumers’ minds. Again, the operative word is co-creation. The researcher must be actively involved in the process, not simply a bystander waiting for results.

Harness the power of AI/machine learning.
Machines won’t replace humans in Sion’s viewpoint but they will be able to see patterns across large-scale multi-source information that will enable the generation of unique insights. He gets visibly excited when contemplating the impact machine learning can have on our understanding of the consumer. From his perspective, leveraging this technology will create new frontiers in insights.

Challenge yourself to always learn – or become extinct.
Driving yourself to continually learn is a core principle to which Sion holds himself accountable. In this fast-evolving industry, where technology is growing exponentially, you need to take action. The reason is simple: he fully believes that if you fail to continually learn, “you can become obsolete in the blink of an eye.”

Sion’s wisdom and experience permeated the interview. My primary takeaway from the interview, however, is that the active participation by the researcher is the secret sauce. This will be very difficult for a machine to replace.

 

______________________________________________________________________________

Sion Agami – Research Fellow.  Procter & Gamble (Feminine Care) Insight Alchemist

Research Fellow at P&G, but he likes the “Insight Alchemist” description as it reflects what he does.

25 years of experience in Product Research/ Product Innovation, inventing and launching new products that have left a mark in consumer minds and in business across the globe, transforming knowledge into action.

Started in Latin American R&D, with assignments in Detergents and Fabric Softeners. In US R&D he worked in Air Care, Snacks, and Feminine Care. Well known for unlocking business-building insights with cutting-edge Product Research methodologies.

  • A change agent, developing new methodologies, consumer relevant test methods, and creating consumer/ technical models.
  • Modernized how Product Research is done by establishing contact with consumers in real time and at relevant moments.
  • Recognized as a master in translating consumer insights into product innovation, and creating holistic product propositions.

“If I Had Asked People What They Wanted…” & Other Elitist Myths

Customer-led inspiration, while often dismissed, is an essential part of a company's success.

By Kevin Lonnie

A popular rebuke to customer led inspiration is attributed to the great 20th Century industrialist, Henry Ford.

If you’re a senior executive who believes in going with his gut, you can trot out (no pun intended) the Ford quote about faster horses as a reproach to customer-led inspiration.

Well, our story could end there, but it’s actually where it starts getting interesting.  

Turns out, Ford never uttered those famous words.

In fact, the first dated reference to Ford’s quote doesn’t appear until roughly 2003, per a Harvard Business Review article written by Patrick Vlaskovits. Further research on the topic indicates the quote was originally used in the third person to describe how Ford would have responded to critiques that his designs were missing the mark.

So why would this very popular 21st-century business quote find itself attributed to a man whose success came 100 years ago? Very simple: it’s highly effective at shutting down the idea of customer-led inspiration. Heck, even Steve Jobs often cited that quote.

You could, of course, choose to position yourself as the next Steve Jobs and convince yourself that your own intuition and instinct will successfully chart the firm’s future course. This is what Jobs disciple Ron Johnson did when he took the helm at J. C. Penney. Despite colleagues’ concerns that Johnson was making radical changes without consulting customers, he responded, “We didn’t test at Apple.” Apparently, the key independent variable is having a visionary like Steve Jobs at the top. Unfortunately, for every Steve Jobs, we have thousands of Ron Johnsons who are ready to crash and burn at the feet of customer displeasure.

As MR evolves from the reactive tools of the past century to proactive insight generation, customer-led insights will prove essential to a firm’s ideation engine. Anyone who feels they can map this strategy relying solely on their gut instincts does so at their own risk.

At the same time, researchers shouldn’t expect a welcome party. The incumbent creative elitists will look to hold their ground by downplaying the value of customer-led inspiration.

At least you’ll be ready to debunk their favorite Henry Ford quote.