
Lessons From Amazon vs. Hachette: Focusing On What Counts

To win any game, you have to know what counts. Then, you have to execute better than everyone else.



Editor’s Note: Market Research is a business, and one facing many changes from multiple angles in the marketplace. It’s important to be reminded that the keys to success in business are fairly universal, and in today’s post master strategist Larry Gorkin reminds us of a few of those core principles, using the recent standoff between Amazon and Hachette publishing as an example. It’s good stuff.


By Larry Gorkin

Leaders who want to win long term need to clearly define their business’ key success factors, and consistently deliver on them with excellence. Failure to do so can undermine the business’ core strategic intent, weaken its competitive position, and erode results. That’s the lesson from the current stand-off between Amazon and book publisher Hachette.

For those unfamiliar, Hachette is a leading global book publisher, representing popular authors like Malcolm Gladwell, James Patterson, and Stephen Colbert. Its dispute with Amazon centers on the pricing and other terms under which the online retailer sells and promotes its books. Amazon wants more favorable terms than it has received historically, to reflect its dominant size and the changed economics of e-books.

With the two sides unable to reach agreement, Amazon has delayed shipping Hachette books, refused to take pre-orders on upcoming releases, and reduced marketing promotions. While Hachette’s business has suffered, the dispute has been a public relations black eye for Amazon. Critics have accused it of abusing its market power, and hurting both customers and authors.

Amazon has responded with a hard line, positioning the negotiation as part of its on-going effort to give consumers the best possible prices and service. The company has apologized for the inconvenience, even suggesting that customers buy impacted books elsewhere until the dispute is over.

What’s important here is the clarity with which Amazon has identified and executed on its key success factors. Amazon needs the best possible terms from every vendor to maintain its own competitive price position. As part of this, it sees continued opportunity to disrupt the book market, particularly with the growth of e-books. It is willing to take a short-term hit to win this battle long term.

Hachette has similarly crystallized what counts, albeit from a clearly defensive perspective. The company sees Amazon as a fundamental business threat, and has defined preservation of its traditional business model as essential. From this view, Hachette may be focused on a critical issue, but its goal of preserving the status quo may not be realistic or achievable.

Of course, the idea that companies should define and execute against the key requirements of their strategy is not a new one. Wal-Mart established supply chain leadership as a foundation of its strategy and continuously invested to maintain advantage there. Steve Jobs made ease of use and elegant design the basis upon which Apple would compete; the company still benefits from that.

Yet, many companies fail to crystallize their key requirements for success. Others know what counts, but lack the organizational discipline to deliver them with excellence.

Using today’s book market example, Hachette sees its future success as dependent on preserving its historic business model with Amazon and other retailers. But I’d argue that’s mistaken. What is essential for Hachette, instead, is to create a new business model that reflects the reality of today’s changed and evolving book market. Even if Hachette strikes an acceptable deal with Amazon, its long-term outlook won’t change; it may survive, but it won’t thrive.

Importantly, a company will have multiple success drivers, all of which should flow from its current strategy and market reality. Both the strategy and success imperatives should change over time. And once defined, leaders must ensure the resources are in place to deliver them successfully.

Given this important issue, here are five ways to identify and focus on what counts for your business.

1. Identify Dependencies– What does your strategy depend on? What must happen for the strategy to succeed? What elements are in place versus missing?

2. Define Drivers– What’s driving growth in your market and business? What capabilities and assets are needed to support that growth? Where are your strengths and gaps?

3. Examine Changes– What is changing in your market? What are the implications for your business? What are the opportunities to target and threats to defend?

4. Evaluate Progress– Where do you stand on previously identified imperatives? What gaps remain? How should priorities change?

5. Align Resources– What resources are needed to deliver the key requirements? What gaps are there? How can they be filled?

To win any game, you have to know what counts. Then, you have to execute better than everyone else.

Questions: Do you have a clear definition of what counts for your business? How well do you deliver against those factors? How could you do better?


Volunteered Personal Information (VPI) and valuing your personal data

Volunteered Personal Information (VPI) plays a critical role in moving the debate about permission marketing forward. It’s a shift that requires everyone to see their profile as an asset - where individuals are actively engaged in valuing and protecting their own data.



Editor’s Note: Presenting the flip side of the data privacy debate started with yesterday’s post by ESOMAR, here is a potential solution for privacy hawks: the personal data economy. This model empowers consumers to leverage their personal data as an asset via a variety of online exchange models, and it holds much opportunity for researchers, since it makes data collection, aggregation, and synthesis permission-driven by default. My friends at Pureprofile (full disclosure: I am on their advisory board) are one of the few companies with roots in research (they started as a panel provider) that have gone far down the path of embracing this new paradigm and thinking through the value proposition for consumers. With that in mind, I thought it worth getting their view on how this shift looks today and its impact on privacy in the future.


By Kim Anderson 

Data anxiety is normal. When Google posts quarterly revenue of $15.4 billion, we shake our heads in disbelief. Just how much money are they making out of selling big data to advertisers? And not just big data, but data about us – obtained in exchange for the ‘search’ service most of us use daily. Personal data is a hugely successful and growing asset – one that many brands are profiting handsomely from.

However, what interests us at Pureprofile is not so much the data being traded behind closed doors (such as the data sets used in programmatic marketing). For us, it’s the information (data) people are happily volunteering that’s really interesting.

Volunteered Personal Information (VPI) plays a critical role in moving the debate about permission marketing forward. It’s a shift that requires everyone to see their profile as an asset – where individuals are actively engaged in valuing and protecting their own data.

There aren’t too many consumers who don’t want to participate in digital life to some extent – whether it’s offering our details in exchange for a discount, enjoying the convenience of online buying, or sharing our latest thoughts and moments via Facebook and Instagram. What we tell the world about ourselves through forms, sign-ups, petitions, participation and personal web pages is a powerful thing. It allows us to express ourselves and have a voice. It also allows advertisers, or anyone in the business of selling, to do their job with greater precision.

VPI is good because it moves us from interruption marketing (TV, radio and pre-roll video) to content such as newsletters, reports, brand monographs or books that are delivered as a result of permission being gained from the individual.

Innovation in content marketing of value has risen greatly in the last couple of years, as organisations recognise that focusing on helping, not selling to, their consumers is the key to obtaining VPI.

The new market for data

Peak consumer research body Ctrl-Shift has identified a huge ‘shift’ back to people power in the Personal Information Economy marketplace. Through extensive research, they’ve identified key trends that place consumers in the driver’s seat.

First and foremost they see increased agency for the individual – painting them more as collaborators than consumers. Audiences of the future play a powerful role in reclaiming their data and their right to participate in the free market.

Secondly, they are keen to communicate that our biggest currency as consumers is not just money but, critically, our time and attention. In this new consumer paradigm everyone’s data profile is regarded as an asset. That is, something of value to be traded, given, or volunteered in return for something tangible and meaningful to the individual.

Profit flows from data

We know that data brokers make their money out of selling individuals’ data. That data mostly lacks legitimacy due to its very nature – the fact that it was not volunteered by the individual. Collected without informed consent, and used out of context, it fast becomes irrelevant and devoid of utility – pushing undesired marketing messages at people who rapidly reject them.

Ctrl-Shift’s research demonstrates that by giving individuals power and more control over online identity and personal data, we create immense new value – worth ten times today’s Google business model within a decade.

This transformation impacts the format and rules of the market, and most organisations are scrambling to prepare for it. Brands must find new ways to engage consumers who seek to better manage their data – adapting everything from the services they deliver and how they create value for their audience to their strategies for managing customer information, all while still driving efficiency and growth for their business.

New flows of personal data will take over and emerging fourth-party services (i.e. those on the side of the individual) will be widely deployed. Businesses need to understand this framework, or be left behind as consumers become advanced in their knowledge of storage, management and data privacy.

Passionate about the role of fourth-party services that place control back in consumers’ hands, Ctrl-Shift predicts enormous growth in decision support services that help individuals research choices and manage their affairs using digital technologies. Examples range from storing your own data securely and blocking advertising to managing identity and providing insights. Within the next ten years, they believe this will evolve into a new UK market for personal data worth £20bn a year.

Indeed, many new services are poised to enter the marketplace in 2014, and some have already emerged – MiiCard, Mydex, nFluence, Pureprofile, Qiy, Reevoo and VisualDNA. The one thing they all have in common is that they use information volunteered by the consumer to add value to the consumer, but also to address particular challenges, stripping out high levels of cost and waste.

What’s next?

With this in mind, Pureprofile believes customers will transform to become active participants in the world of brands, rather than passive consumers.

A great many positives will flow from this data economy –  a landscape where people will be equipped to store, manage and selectively share their own data. A marketplace where consumers (and brands) will greatly value people’s time and attention. We see this evolution as fair and just.  There are many possibilities and implications of users being empowered to better manage their personal data.

One thing is for sure – the value generated will also be shared with the right person: the individual who shared their data.


Is A Digital First World War Looming – And Would We Survive It?

The free-flow of information is critical, not just for us as market, social, and opinion researchers but for the whole of society. By working together, we can ensure that the smoke over EU/US Safe Harbour does not turn into a real fire.



Editor’s Note: Regardless of your position on digital privacy laws, the reality is that many legislative bodies are enacting laws that are often complex, contradictory, and inconsistent. This is new territory for us all, and as an industry that is based on handling consumer data, it is very easy for insights pros to get caught in the morass of these disparate regulations. Our trade organizations, most notably ESOMAR and the Global Research Business Network (comprised of most of the national trade orgs around the world), are attempting to help MR firms navigate the minefields of the rapidly changing digital privacy landscape.

Today’s guest post by Kim Smouter of ESOMAR is an example of the type of leadership and assistance they can provide to researchers who may be (and rightfully so!) confused by the various laws we need to comply with in different areas of the world. We’re very pleased to post it here on GBB and hope you find it helpful and interesting.


By Kim Smouter

For centuries, European and US historical paths have been inextricably linked. In war and in peace, Europeans and Americans have found many reasons to trade, talk, and even wage war together as allies in a tireless effort to impose a shared worldview built on the principles of democracy and self-determination.

Beneath the clichéd stereotypes lies a mutual admiration and a fascination with each other’s histories and achievements. Few societies in this world are quite so intertwined.

Yet the whole topic of personal privacy seems to be a case where the bonds of brotherly love are increasingly giving way to mutual suspicion, jealousy, and a desire to impose a world view designed and defined by “one camp.”

The situation is driven not only by economic concerns but also by real, fundamental differences in values resulting from divergent historical, cultural, and social experiences. One does not need to look very far to see how visible the cracks of discord became when Europe responded to the revelations of the US spying on its allies by calling for immediate changes to the EU/US Safe Harbour framework, in place since 2000.

The ripple effects of the loss of the EU/US Safe Harbour framework should not be under-estimated. The framework was put in place to enable transfers of data between the EU and the US. It was an important legal fix as EU data protection law makes data transfers outside of Europe only possible with countries offering the equivalent levels of protection (adequacy), or through complex company contractual structures which most small and medium enterprises find difficult to implement.

The US has adopted a very different data protection approach compared to the EU’s. The US has elected to respond only in sectors where there are specific concerns, primarily using consumer protection and unfair commercial practice law as the legal basis for action, with the Federal Trade Commission (FTC) as the enforcement body. The US’ sector-specific approach to privacy and data protection is considered inadequate in light of the EU’s own global coverage approach. It is only through the EU/US Safe Harbour scheme that data has been able to flow freely between the two markets. The scheme offers a voluntary self-certification model whereby US companies commit to providing certain levels of redress that comply with the requirements of EU law. Without the Safe Harbour, most cloud services, and any projects involving the transfer of data out of the EU into the US, would be unable to operate legally.

The Snowden revelations woke Europe to the fact that its citizens benefited from lower levels of protection (and particularly levels of redress in the event of abuse from either public authorities or companies) on US soil. Additionally, it was also clear that the EU/US Safe Harbour had been laxly enforced in recent years.

So when Europe’s leading officials on data protection called for the strengthening of the EU/US Safe Harbour scheme or its suspension, leading companies on both sides of the ocean were deeply concerned. These calls emanated from numerous places: from the European Commission [the closest thing Europe has to a federal government], from the European Parliament [its Congress], as well as from the European equivalent to the FTC – the Article 29 Working Party.

The EU followed up by presenting a shopping list of recommendations to its US “partners,” expecting the issue to be resolved by this summer. These recommendations included requirements that (1) privacy policies be disseminated to the public at large and (2) US regulatory authorities step up their enforcement against non-compliance and beef up the redress options offered to EU residents whose data is being sent to the US for processing.

The FTC’s first response was to step up enforcement action, taking 12 companies to task because they had failed to renew their EU/US Safe Harbour certificates (which must be renewed every year) and were falsely claiming compliance. The companies have been hit with 20-year orders and face additional civil penalties if they fail to meet the orders’ requirement not to misrepresent their compliance with schemes like the EU/US Safe Harbour.

At a recent meeting of ESOMAR’s Legal Affairs Committee, companies present at the table were asked whether the loss of the EU/US Safe Harbour scheme would impact their business. Every company around the table agreed on how important the EU/US Safe Harbour is to enabling market, social, and opinion research to be conducted effectively across all our operating bases. This is especially important to small and mid-sized companies, who stand to lose the simplified processes that the EU/US Safe Harbour affords them – processes that save them from having to make major investments in legal support to draft and implement the other, more burdensome schemes available under EU law.

Whether the FTC’s recent enforcement actions will appease Europe enough remains to be seen, but they are clearly the latest in a series of tit-for-tat actions that highlight the differences in approach and attitude towards privacy and data protection on the two sides of the Atlantic. There is not much market research can do about this, but there are some concrete steps that research companies, and the associations tasked with representing them, are taking and should be taking.

Market, social, and opinion research companies must be careful to ensure that when transferring data between the EU and the US, they do take the time to self-certify through the EU/US Safe Harbour and to renew their certifications every year. Ensuring that a company’s entire supply chain is EU/US Safe Harbour compliant is also extremely important (this can be guaranteed through contracts and periodic audits). Offering comprehensive redress in the face of respondent complaints or requests to remove their personal data is also an extremely important requirement for self-certified companies. Losing your EU/US Safe Harbour coverage would mean that the data transfer is illegal and could mean facing legal actions both in the EU and in the US.

ESOMAR, and partner national associations on both sides of the ocean are also working hard to remind legislators of the importance of getting the EU/US Safe Harbour right and not escalating the situation into a full digital world war where we would all lose. Don’t hesitate to let us know how your companies would be affected by the loss of such a scheme so that we can reinforce our messaging to decision makers.

The key decisions that societies make, both in the private and public sector, are increasingly driven by data, both big and small. The free-flow of information is critical, not just for us as market, social, and opinion researchers but for the whole of society. By working together, we can ensure that the smoke over EU/US Safe Harbour does not turn into a real fire.


Kim Smouter is Government Affairs Manager at ESOMAR. For more information on legislative developments in your region visit www.esomar.org/government-affairs


#MRX Top 10: Visualizing World Trade, Consumer Attitudes and Digital Cameras

Of the 2,347 unique links shared on the Twitter #MRX channel in the past two weeks, here are 10 of the most retweeted.


 By Jeffrey Henning

Of the 2,347 unique links shared on the Twitter #MRX channel in the past two weeks, here are 10 of the most retweeted.

  1. The Patterns of World Trade – Euromonitor shares a stunning data visualization of leading exporters: http://euromonitor.typepad.com/.a/6a01310f54565d970c01a3fd2240af970b-800wi
  2. Secrets to Surviving the Customer Revolution – Megan Clothier of Vision Critical recaps a recent webinar involving author John C. Havens. This image, illustrating how the word “consumer” makes people feel, went viral on #MRX: https://pbs.twimg.com/media/BqbrsdbIIAAwhlY.png
  3. Public Views on Ethical Retail – According to a survey that Ipsos MORI conducted for the Department for Business, Innovation and Skills, 49% of UK adults 16 and up believe that UK retailers aren’t very ethical.
  4. Quality of Own-Label Brands on a Par with Branded Goods – Jane Bainbridge of Research describes a survey of 1,000 UK “consumers” questioned by Perception Research Services: 63% consider store brands to be the same quality as national brands, with 14% believing they are better quality; only 3% are embarrassed to buy store brands.
  5. IIeX Atlanta 2014 – Zoë Dowling of Added Value recaps the Insights Innovation Exchange conference for North America: “The industry is on the cusp of a new era; one that is causing a lot of soul searching but also a lot of excitement.”
  6. Picture This: A Smartphone That Satisfies All Your Photo Needs – Adelynne Chao of GfK shares the results of a survey of German and UK consumers about when they prefer to use digital cameras versus the built-in camera of smartphones. http://blog.gfk.com/wp-content/uploads/2014/06/Smartphone-camera-experience-1.jpg
  7. What Does Gamification Offer Healthcare Research? – Joanna Thompson of Adelphi Research, Paola Franco of Janssen, and Jon Puleston of GMI summarize the paper they presented at the British Healthcare Business Intelligence Association annual conference. In a test of a gamified survey of doctors against a conventional survey format, respondents to the gamified survey had a better experience, provided more information, and yet completed the surveys more quickly.
  8. Driving Change: Public Concerned About Safety of Young Drivers and Back Licence Restrictions: 68% of UK adults support a “graduated driver licencing scheme” for new drivers, although young people are less persuaded, according to an Ipsos MORI survey.
  9. Embracing change, cultivating opportunities – Magali Geens and Saartje Van den Branden write, “Researchers are still largely preoccupied stuffing over-abundant PowerPoint decks with sensible graphs drawn from respectable representative samples of meticulously screened participants; thus maximizing the chances of bringing nothing new to the professional who is in dire need of true insights to challenge the status-quo.” Ouch!
  10. Turning Social Media Monitoring into Research: Don’t Be Afraid to Engage – Margaret Roller argues that confining ourselves to monitoring social media handicaps our ability to gain the fuller understanding that comes from asking questions of social-media users. 

Note: This list is ordered by the relative measure of each link’s influence in the first week it debuted in the weekly Top 5. A link’s influence is a tally of the influence of each Twitter user who shared the link and tagged it #MRX. Only market research links are considered, although the #MRX hashtag is occasionally used for other types of tweets, including – recently – tweets about Mr. X, an upcoming Indian 3D thriller film.


The Honomichl / AMA Gold Top 50 Market Research Companies, Confirmit Tech & Innovation Survey & ESOMAR Global Price Study

The 2014 AMA Gold Top 50 Research Companies, Confirmit Research Technology & Innovation & ESOMAR Global Prices reports are out! Here are links to all.



We’re gearing up for the next round of the GRIT Report right now, so it seems appropriate to share the latest from the other research on research that inspires us in our own efforts.

First, the Honomichl Top 50 Report has been renamed the AMA Gold Report; this is the first edition since the death of Jack Honomichl. It’s still the same great content, just under a new brand.

Here are the headlines from the press release:

The U.S. economy’s progress in 2013 was halting and uneven, with a few bright spots, and the U.S.-based market research industry’s progress was no different. Overall, the results are positive, with the 2013 growth rate besting the three previous years, research firms’ full-time employment increasing steadily and the Top 50 firms achieving the highest per-employee productivity rate in the past decade, but not all results were as positive.

Spending for U.S. marketing/advertising/public opinion research services in 2013 reached $10.7 billion among for-profit research firms, up 3.6% over the prior year, according to this 41st-annual analysis of market research industry trends. When 1.5% inflation is taken into account, net growth was 2.1%, the highest since 2009 and mostly in line with annual growth since 2001, with a few exceptions.
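The report’s inflation adjustment is straightforward to reproduce. A quick sketch (the 3.6% nominal growth and 1.5% inflation figures come from the report; the calculation itself is the standard one):

```python
# Convert nominal revenue growth to real (inflation-adjusted) growth.
nominal_growth = 0.036   # 2013 revenue growth vs. prior year (from the report)
inflation = 0.015        # U.S. inflation for the period (from the report)

# Real growth divides out the inflation component rather than simply
# subtracting it, though the two approaches agree to one decimal here.
real_growth = (1 + nominal_growth) / (1 + inflation) - 1
print(f"{real_growth:.1%}")  # 2.1%, matching the report's net growth figure
```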

The 2013 revenue total was determined based on the individual, research-only revenues of 196 research firms. Those firms include the top 50 revenue generators—which are invited to submit their calendar 2013 research-only revenues and are then ranked in this Top 50 Report—and 146 firms from the Council of American Survey Research Organizations (CASRO), which provided combined revenues for these firms. Note that 32 CASRO-member firms are also reported individually among the Top 50 and so are excluded from the remaining CASRO aggregate numbers. Also note that when a firm made an acquisition or divestiture during 2013, adjustments were made to get an apples-to-apples comparison, as with Nielsen’s acquisition of Arbitron.


The Top 50 firms in 2013 had total revenues of $9.8 billion, up 3.7% over 2012. This growth rate is more than double the previous year’s 1.7% increase, which is a definite improvement, but the rate is only about 60% of the average growth rate for the years going back to 2001, not counting the recession years of 2008 and 2009.

The remaining CASRO firms reported total revenues of $836 million, up 3.1% from 2012 and accounting for just 8% of the total for all 196 firms analyzed in this report—with the Top 50 firms’ revenue accounting for the lion’s share at 92%.
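The revenue shares quoted above check out against the dollar figures; a quick sanity check (both amounts taken from the report):

```python
top50 = 9.8e9        # Top 50 firms' 2013 revenue (from the report)
casro_rest = 836e6   # remaining CASRO firms' revenue (from the report)
total = top50 + casro_rest

print(f"Top 50 share: {top50 / total:.0%}")      # 92%
print(f"CASRO share:  {casro_rest / total:.0%}") # 8%
```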

Perhaps more relevant than the inflation-adjusted 2.1% net growth for all 196 firms included in this analysis is the comparison to gross domestic product, the value of all goods and services produced in the U.S., which is calculated by the federal government. Over the last 25 years, the Top 50 firms’ revenue growth rate has regularly surpassed the GDP’s growth rate, indicating the Top 50’s and the general market research industry’s long-term robustness. But since 2008, the firms’ results have fallen more in line with the GDP’s meager growth. This is a sign that the market research industry’s relative value has been on the wane, but with the marketplace’s increasing focus on data-led innovation, that soon may change.

There were a number of changes in the Top 50 rankings, as there are every year. The major change is Nielsen’s acquisition of Arbitron, ranked No. 6 last year, which served to bolster Nielsen’s No. 1 ranking. The industry leader now accounts for nearly one-third of Top 50 revenue and is further ahead in revenue than its nearest Top 50 competitors.

It’s worth noting that besides the Arbitron acquisition, only nine acquisitions (or divestitures) were identified among Top 50 firms in 2013. This is the lowest count in the last several years, including the 2008-2009 recession. Research firms have chosen to grow organically, rather than by acquisition.

However, IMS Health, which regained its No. 3 ranking after listing on the NYSE via an IPO in April, revealed for the first time in its SEC filings not only the firm’s total revenue over the past several years as required of public companies, but also nine acquisitions in 2013 and eight in 2012 worldwide. While a private company, IMS Health previously had kept those acquisitions confidential.

Four firms are “new” to the list: Decision Resources Group, No. 16, operating 10 different healthcare information companies; MetrixLab USA, No. 31, a Dutch research firm that acquired MarketTools Research Solutions in the U.S. in 2012; MarketCast, No. 39, which came back on the list after a one-year absence while its ownership changed; and Bellomy Research No. 47, which returned after a two-year absence. Three firms dropped off of the list: Leo J. Shapiro & Associates, which declined to participate; LRA Worldwide, whose revenue fell below the threshold revenue of $16.7 million; and Public Opinion Strategies, a major player in political polling, which dropped below the threshold because of the lack of election polling in 2013.

Firms improving their position among the largest 20, besides IMS, are comScore, Symphony Health Solutions and ORC International. Most of the remaining 30 firms also earned reshuffled rankings, as is typical of the ups and downs in this segment every year. This is all displayed in the Top 50 rankings chart on page 39.


The Top 50 for-profit research firms are just the tip of the iceberg when accounting for all U.S.-based research spending. In fact, they account for about 60% of it, depending on what you include or exclude. The profiles later in this report have been expanded to include research support industry firms, those that provide research products and services to the full-service Top 50 firms as well as end users such as small businesses, educators and the government. Many of them will process the data collected and provide summary data reports. Turn to page 35 for a broad but brief sampling of these firms. Their revenues were not available for this year’s report, but plans are to expand the list in future years and to collect or estimate their revenue.

Another segment reported for the first time is report and advisory services firms, which include companies such as Forrester, Gartner, Mintel, Ovum, IHS and more. Their revenue is subject to wide swings, much more so than the Top 50’s. In the future, these firms also will be profiled individually and their revenue reported.

As we move to a broader definition of for-profit market research, consideration must be given to new firms and new sectors. Among the new firms are Facebook, Google and IBM, which have recently started their own in-house research services. New sectors gaining prominence include marketing mix modeling and ad targeting, social media-based firms that scrape the Web and apply text analysis to understand behavior, and management and marketing consulting firms—McKinsey, Bain, etc.—which contract with firms such as those in the Top 50 to provide them with data and reports that are then sold to clients at a significant markup. A fair estimate of their total revenue would be up to $700 million.

Reading further, you will find profiles of each of the Top 50 firms that include top management, ownership, acquisition activity, if any, and a description of their offerings. Together with the Top 50 rankings and accompanying charts, a complete picture of the U.S. market research industry emerges.


We are THRILLED to hear that the Gold Report will continue to expand their definition of what market research is and which companies qualify. As I argued upon the publication of the ESOMAR Global Report, the traditional view of the industry is incomplete and must be expanded in order for us all to fully understand the marketplace today.

As always, hats off to Larry Gold and his team for pulling this together; it's a monumental effort and a vital source of much-needed intelligence in a quickly changing industry.

Another perennial favorite is the annual Confirmit Market Research Technology & Innovation Report, by Tim Macer of meaning, ltd.  While there is some overlap with GRIT,  overall Tim and his team tend to focus on a smaller sample of primarily field companies in the traditional MR space to get a feel for what is actually happening from a tech adoption perspective. It is a great snapshot of where a very large chunk of the industry is.

Here is an excerpt from the press release:



Mobile technology, social media and multi-modal research have been the most positive technological developments in Market Research over the past decade, but firms have been challenged by disruptive elements including DIY surveys, data privacy and the displacement of fixed-line telephony during that time.

These are some of the key findings of the Confirmit 2013 Market Research Technology and Innovation Survey, produced by meaning ltd.

The survey, which analyzes the technologies and developments that shape the MR industry, also reveals that there has been a significant shift towards working with businesses on their Voice of the Customer (VoC) and Customer Experience Management (CEM) programs by research companies.

The study reveals 70% of research firms are now undertaking some VoC or CEM work and a majority are using NPS in some way. However, ‘traditional’ customer satisfaction studies are by far the most prevalent way of exploring the Voice of the Customer.

Tim Macer, managing director at meaning said, “The research industry clearly is adapting to the new data-rich, ‘always on’ landscape that we find ourselves in today. VoC is a natural place for research companies to be, but it will not be credible or sustainable if the industry does not embrace innovative methods to deliver a more holistic and integrated product. There are noticeable gaps in provision when it comes to integrating research with non-MR sources of data, and in dealing with the explosion of comment and other unstructured data.”

Meanwhile, the mobile channel remains a key area of development potential for research firms, according to the survey.

Ole Andresen, director of product management at Confirmit said, “The last ten years have been dubbed the ‘decade of the mobile device’ and mobile has certainly been a key technology for MR organizations. The growth in this area shows no sign of slowing down, with 42% of MR organizations now able to deliver surveys via mobile app, and 70% able to run them via mobile browser.”

“The survey findings reflect our own experiences at Confirmit, with many customers now seeking to embrace the mobile channel, using either mobile app, browser or even SMS, across the different programs they run,” added Andresen.

Other key findings of the survey include:

  • Data privacy issues are a major disrupter for organizations collecting large amounts of data
  • The use of the web as a data collection tool remains strong, but CATI is decreasing
  • There is a measurable move towards integrated MR software platforms, and away from separate tools
  • PowerPoint remains the most popular method of delivering research results, although the use of dashboards is expected to grow
  • 10% of firms use word clouds as the main tool for presenting text analytics results – but their usefulness as a standalone tool is questioned.

This year sees the 10th anniversary of the survey. The findings are based on the responses of 240 MR firms globally. Click here to access the full 2013 Market Research Technology Report.


Last but not least we have the ESOMAR Global Prices Study, an invaluable guide and benchmark on global pricing for many research approaches broken down by country. Both suppliers and clients benefit from this review of average pricing, and of course folks looking to disrupt the market can use this as a way to look at their own pricing models.

Here is a description from the website:


For both suppliers and buyers of market research, the ESOMAR Global Prices Study is an essential element of any reference library. A unique guide in the planning and purchasing of market research.

The Global Prices Study, run every two years, provides insights into the price of research around the world: the differences in pricing that exist between countries, between types of research projects (and methodologies), and over time. For almost 20 years, this biennial comparative analysis has been consistently regarded as one of the most important yardsticks in our profession.

The Global Prices Study 2014 is based on a set of dummy projects for which participating agencies prepare bids. The bids were submitted in response to a set of seven market research projects: six consumer research projects (four quantitative, one qualitative and one using online communities), one B2B project, and a set of commercial tariffs for staff time.

This year a new estimate was added, based specifically on a mobile research project. The report also includes some additional qualitative feedback on the understanding of 'nationally representative' sampling, a must-read section for suppliers and buyers alike.

Quotes were provided by 736 agencies across 119 countries (up from 633 agencies and 106 countries in 2012). The data were collected between February and April, 2014.

The Excel tool is made available to you to:

  • Build custom reports by selecting regions, countries and projects of interest
  • Compare median prices
  • Explore price indices and details of samples and quote ranges

Access to the Prices Study report and the Excel tool is available free to ESOMAR members at MyESOMAR.


We'll begin data collection for the next wave of the GRIT study this month and will publish the report in the Fall. Keep an eye out for more info soon!


Artificial Intelligence 101

Kevin Gray offers a glimpse of a very complex subject that has wide ranging implications for not just insights, but the world as we know it.



Editor’s Note: AI may sound like science fiction, but it is not. Just last month a computer allegedly passed the Turing Test for the first time by convincing judges it was a 13-year-old boy. Ray Kurzweil, author of The Singularity is Near (the AI visionary's bible), leads engineering at Google, which has been making massive investments in AI, robotics, and quantum computing to crack the AI problem. He has also launched Singularity University as a think tank/accelerator with NASA, Elon Musk, and many others (including MR's own Kyle Nel of Lowe's) to help bring AI and other new technologies to life. There are hundreds of other examples in universities, private businesses, public companies, and government labs globally. Financial resources that surpass the GDP of many countries are deployed annually to realize the promise that AI holds for our future.

Since AI is inherently based on advanced mathematical models and is driven by data, it dovetails with the world of MR in many ways, and we are already feeling the impact through the early-stage advances of related technologies such as text analytics, predictive analytics, "Big Data", data mining, agent-based modelling, etc. With that in mind, regular blog contributor and marketing scientist Kevin Gray has put together a fantastic primer for everyone interested in the topic, especially insights pros.

This is a topic that will only gain more attention as the future unfolds, so I hope you find it as useful as I do!


By Kevin Gray

For decades computers and robots able to think have captivated our imaginations, sometimes terrifying us and sometimes charming us. Few will fail to recall R2-D2 and C-3PO from Star Wars, or the loquacious android named Data in Star Trek: The Next Generation that also helped popularize the term neural networks. We humans empathize with robots, sometimes to a disturbing degree.

On the other hand, computers and robots are also viewed more and more as threats to our livelihoods and it’s not difficult to locate blogs and newspaper articles offering tips on how to compete with machines in your job hunt.  (Hint: be more charming.) Best-selling author Ray Kurzweil has written at length on Artificial Intelligence and even predicts a “singularity,” when Artificial Intelligence exceeds human intelligence, by the year 2045, with radical implications for humanity.

Humans have contemplated the human mind and human behavior for centuries, and names such as Aristotle, Hobbes, Descartes and Hume will be familiar to all of us. The earliest calculating machine was probably built in the 1620s by Wilhelm Schickard, a German scientist, and the first programmable machine, a loom that used punch cards, in 1805 by Joseph Marie Jacquard. In historical terms, however, AI is a very new field and only began to receive serious attention after WWII. The term "Artificial Intelligence" was not coined until 1955, by John McCarthy, who was then at Dartmouth College. It encompasses several disciplines, for example psychology, neuroscience, natural language processing, machine learning and robotics, and has progressed in a somewhat bumpy fashion over the course of the past half century. It has grown into a major industry, and the subfields within AI have become better integrated, as has AI with other disciplines.

It would be hard not to be interested in the subject of Artificial Intelligence, but it's also hard to separate fact from science fiction. Not being a computer scientist, let alone a specialist in robotics, I found it difficult to get my human head around what is really happening in this field, so to educate myself I decided to look beyond popular media. Artificial Intelligence: A Modern Approach (Russell and Norvig), now in its third edition, appears to be the leading textbook on the subject, and I also found Probabilistic Graphical Models: Principles and Techniques (Koller and Friedman) and Data Clustering: Algorithms and Applications (Aggarwal and Reddy) instructive regarding specialized topics within AI. Wikipedia is also a good source, as is What is Artificial Intelligence?, a wide-ranging interview with John McCarthy.

What follows is a snapshot of what I’ve learned from my informal research.

Something that struck me immediately was that the mathematics and mathematical notation were clearly terrestrial in origin. Probability also plays a leading role in AI. Likewise, terms such as Bayesian Networks, SVM, State Space, MCMC, Utilities and Game Theory will not be alien to most marketing scientists, and those with experience in data mining and predictive analytics in particular will note many similarities with their own work. Computer scientists, of course, will feel even more at home in this field.

First, though, what is Artificial Intelligence? The term “agent” appears recurrently in this literature and refers to something that perceives and acts in an environment. Russell and Norvig define AI as “the study of agents that receive percepts from the environment and perform actions. Each such agent implements a function that maps percept sequences to actions, and [there are] different ways to represent these functions, such as reactive agents, real-time planners, and decision-theoretic systems.” AI is thus concerned with both reasoning and behavior and different researchers have variously emphasized thinking humanly, thinking rationally, behaving humanly and behaving rationally (i.e., getting it “right”, given the goals.)
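Russell and Norvig's definition of an agent as a function from percept sequences to actions can be made concrete with a toy sketch. The two-room "vacuum world" below is a stock teaching example from the AI literature; the specific percepts and rules are my own illustrative assumptions, not something from this article:

```python
# A minimal reflex agent: a function mapping percepts to actions.
# The two-room "vacuum world" (rooms A and B, each Clean or Dirty)
# is a standard teaching example; the rule table is illustrative.

def reflex_vacuum_agent(percept):
    """Map a (location, status) percept to an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    # Room is clean, so move to the other room.
    return "Right" if location == "A" else "Left"

# A percept sequence and the action sequence it produces:
percepts = [("A", "Dirty"), ("A", "Clean"), ("B", "Dirty")]
actions = [reflex_vacuum_agent(p) for p in percepts]
print(actions)  # ['Suck', 'Right', 'Suck']
```

This is the simplest "reactive agent" case: it ignores the percept history entirely. The planners and decision-theoretic systems the authors mention differ precisely in how much of that history, and of a model of the world, they bring to bear on each choice.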

From an AI perspective, there are three fundamental ways to see, or represent, the world. There are atomic representations, in which each state of the world is treated as a black box, i.e., something taken as a given that we are unable to explain. There are also factored representations, in which a state is a set of attribute/variable pairs. Finally, there are structured representations, where the world consists of objects and relationships among them. The last of these presents the greatest challenge to programmers.

A perfectly rational agent is able to find the best solution, given the information it has or has discovered. In reality, the calculations required to achieve perfect rationality are too time-consuming in most settings, so perfect rationality is not a practical goal. Bounded optimality, in which the agent behaves as well as possible within its computational constraints, is more realistic. The goal is the optimal program and not the optimal solution, and the agent is able to adapt to the environment in which it finds itself and to "guess" efficiently and accurately. As mentioned, it must be able to learn from experience and also deal with ambiguity and uncertainty, thus the relevance of Bayesian probabilistic reasoning to AI. Near-instantaneous access to massive databases will facilitate these goals, but the programming will not be trivial and the AI counterpart to general intelligence remains elusive.

The contention that machines could operate as if they were intelligent is called the weak AI hypothesis while the assertion that machines that do so are in fact thinking, not merely simulating thinking, is known as the strong AI hypothesis. The distinction may not have practical relevance to many working in the field, however. The well-known Turing Test was proposed by Alan Turing in 1950 and intended as an operational definition of intelligence. A computer “passes” if a human interrogator is unable to tell whether written responses to written questions came from a person or from a computer. (My own half-serious variant is the more stringent Cowell Test in which, to win, the program must fool the human judge into believing it is Simon Cowell.)

AI is off the drawing board and already used in medical diagnosis, education, navigation, operations, planning and scheduling, security, simultaneous interpretation and, of course, marketing and advertising. Since 1999, for instance, the Educational Testing Service in the US has used software to grade millions of essay questions on GMAT exams. A company based in Hong Kong has recently appointed an AI as an official and equal board member. Even primitive expert systems are examples of AI and, in one form or another, it is working in the background with or without our being aware of it.

Computers have made small but noteworthy discoveries in astronomy, mathematics, chemistry and other fields requiring performance at human expert level. They do well at combinatorial problems (e.g., chess) but now are also able to learn from experience. That at times they can best human experts will come as no surprise to those who’ve worked in predictive analytics, since an important reason for using algorithms is that they frequently perform better than human experts at certain tasks. Their use is not simply a matter of cost.

None of this means computers use insight and understanding to perform these jobs but it does underscore that the same behavior can originate in different processes.

So where is AI headed? Artificial Intelligence has come a very long way though, obviously, “Data” remains a TV character. To quote directly from Russell and Norvig:

“Very powerful logical and statistical techniques have been developed that can cope with quite large problems, reaching or exceeding human capabilities in many tasks – as long as we are dealing with a predefined vocabulary of features and concepts. On the other hand, machine learning has made very little progress on the important problem of constructing new representations at levels of abstraction higher than the input vocabulary. In computer vision, for example, learning complex concepts such as Classroom and Cafeteria would be made unnecessarily difficult if the agent were forced to work from pixels as the input representation; instead, the agent needs to be able to form intermediate concepts first, such as Desk and Tray, without explicit human supervision.”

My self-study has been brief but here are some of my key takeaways:

• AI is no longer Sci-Fi. It’s very real but still very much a work in progress.
• It’s hugely complex and some of the best human brains are hard at work on it, though there remains disagreement among experts regarding important issues.
• AI is not an entirely new discipline without roots, and mathematics and probability lie at its heart.
• Machines still cannot truly think and lack genuine self-awareness. They do not have feelings, irrespective of our own feelings about them.
• Regarding future developments, there is no compelling need to emulate the precise functioning of the human brain which, at any rate, is still inadequately understood.
• AI need not be perfect to be very powerful. Moreover, some problems are unsolvable.
• At some point AI will begin to have an enormous impact on our lives and perhaps eventually human nature as we know it will cease to exist.
• We need to be on-guard against potential abuses of the technology.

The foregoing pertains to software; how advances in hardware such as quantum computing will influence developments in AI is another subject, even farther from my areas of expertise. Again, I am not an authority on AI, and this has been a short rundown of what I think I know.

That is, of course, if I really am…

1 Robot Abuse Is A Bummer For The Human Brain 
2 If You Want To Avoid Being Replaced By A Robot, Here’s What You Need To Know 
3 What Is Artificial Intelligence?
4 Venture Capital Firm Hires Artificial Intelligence To Its Board Of Directors  


Tom H. C. Anderson Explains: “What Is Text Analytics?”

Tom H. C. Anderson explains Text Analytics and the difference between First Generation approaches and Next Generation software 


By Tom H. C. Anderson

While text analytics has been around for quite some time and has gone mainstream to the point of becoming a buzzword, few really know what it is. It's not a word cloud. It is not a qualitative tool. IT IS data mining.
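Anderson's point that text analytics is data mining, not a word cloud, can be illustrated with a toy sketch: instead of merely counting words, a data-mining approach links text features to an outcome variable. The comments, ratings and keywords below are invented for illustration; this is in no way OdinText's actual method:

```python
# Toy illustration of text analytics as data mining: relating text
# features to an outcome score, rather than just drawing a word cloud.
# Comments, ratings, and keywords are all invented examples.
from collections import defaultdict

comments = [
    ("the staff was friendly", 9),
    ("long wait and rude staff", 2),
    ("friendly service, short wait", 8),
    ("wait was terrible", 3),
]

def mean_rating_by_term(data, terms):
    """Average outcome score for comments mentioning each term."""
    ratings = defaultdict(list)
    for text, rating in data:
        for term in terms:
            if term in text.split():
                ratings[term].append(rating)
    return {t: sum(r) / len(r) for t, r in ratings.items()}

print(mean_rating_by_term(comments, ["friendly", "wait"]))
# Comments mentioning "friendly" average 8.5; "wait" averages ~4.3,
# suggesting wait time drives dissatisfaction in this tiny sample.
```

A word cloud would only show that "wait" and "friendly" are frequent; the data-mining framing tells you which one is associated with unhappy customers.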

We've long felt the need to clear the confusion around text analytics in our industry. Surprisingly, there aren't really any good videos on the subject. Until now. Here is our video explaining text analytics, using our own OdinText software as a point of comparison. That may seem a bit self-serving, but we think there needs to be a baseline for reference, and what better point to use than the software we have spent years building based on our market knowledge? I trust you'll forgive the promotional aspects and just enjoy the learning we have tried to distill.

In making a video to explain what text analytics is, we first had to decide who our audience was. So many business videos these days try to reach such a broad audience that they become devoid of any real information. On the other hand, we didn't want to make a geeky video just for 'data scientists' either.

We hope that the middle ground we chose, basically our core customer base (the consumer insights analyst/manager/research director), will provide the right level of detail within a reasonable amount of time: about 4 minutes, down from our original 7-minute version.

Thank you in advance for watching and sharing our video. Should you want to discuss text analytics in greater detail, please don’t hesitate to reach out.


What’s To “Like” About Facebook’s Experiment?

Facebook announced this week that they conducted a blind experiment on emotional triggers among its members in 2012, testing psychological reactions to messaging on nearly 700K of its members. What do we like about the experiment, what do we think is problematic, and what does it mean for us?



Editor’s Note: When the news broke a few days ago about the Facebook Emotional Contagion Experiment  my initial reaction was focused more on the results: compelling proof of the virality of emotions in a social network. The implications for the social sciences, behavioral health, and yes, marketers is pretty astounding.

And then the outcry against the process emerged, and I scratched my head a bit.

My thinking was that this was covered under the ToS of Facebook, so it was OK. Caveat Emptor, yada yada yada…

And how was this different from live A/B testing, a standard and accepted practice in marketing and market research that also deals with large samples? Was it because of the "emotional manipulation" component of the study? Isn't EVERY form of media designed to manipulate emotions for a desired effect and outcome? Billions of dollars a year are spent optimizing the emotional resonance of advertising, movies, TV, etc., money which flows into the coffers of the MR industry.

After all, as Alex Batchelor discussed at IIeX just two weeks ago, the ultimate goal of MR is behavior change. And no, we don't get informed consent from the targets of our research whose emotions are manipulated by its results: consent is implied because they choose to view it.

In short, I just didn’t get why all of the hullabaloo was being raised from anyone in the marketing or MR space: it struck me as short sighted at best, hypocritical at worst.

But then I started to think about the broader implications. Things like Privacy. Corporate Social Responsibility. Ethics. Doing No Harm. In that regard the issue got very cloudy for me. My “daddyness” started to shine through: I have teen girls who are active Facebook users. Were they part of the experiment? Was one of the days they were filled with angst and sadness that broke my heart for them (an admittedly common state for teens, at least mine) fueled by this manipulation? Not cool.

Finally, what about how the backlash here could impact MR? Would this be another straw on the camel’s back for the reactionary element who so often end up influencing legislation that is short sighted and limits our industry?

Honestly, I am still not sure where I stand. Not so much regarding this particular experiment (I think they mishandled it on many levels, but the results are compelling), but about how our technologically connected social age may force us to rethink many sacred cows.

All of that was in my mind when I reached out to folks I trust and respect to see if they wanted to take a stab at writing a post from the MR perspective on this, and one of my all time favorite people, Melanie Courtright of Research Now, jumped in. Mel is a fantastic thinker, a wise leader in the industry, and a straight shooter. There is no one better to dive into this topic with the impact of MR in mind.

So, here is Mel’s take. I suspect this won’t be the last time this topic is addressed here, and I look forward to your comments!


By Melanie Courtright

Facebook announced this week that they conducted a blind experiment on emotional triggers among its members in 2012, testing psychological reactions to messaging on nearly 700K of its members. News of the test has elicited both positive and negative reactions, from member “furor” to researcher intrigue. The pertinent questions are, what do we like about the experiment, what do we think is problematic, and what does it mean for us?

So what was the experiment?

Facebook wanted to test the theory that members seeing negative content on its site became more negative about their lives, while those seeing more positive content became more positive. So it created an algorithm that would automatically omit positive or negative word associations from users' news feeds for one week. During that same time, it would score the users' content to see whether those whose positive content was reduced became less positive in their posts, and whether those whose negative content was reduced became less negative in their posts.
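The two steps of the design can be sketched in a few lines: filter feed items containing words from one emotional category, then score what users subsequently post. The word lists and scoring below are crude stand-ins (the published study reportedly relied on the LIWC dictionary for word categorization); nothing here is Facebook's actual code:

```python
# Illustrative sketch of the experiment's two steps: suppressing posts
# containing words from one emotional category, then scoring users'
# own posts. Word lists and scoring are invented stand-ins.

POSITIVE = {"happy", "great", "love"}
NEGATIVE = {"sad", "awful", "hate"}

def contains_word(post, words):
    """True if the post contains any word from the given set."""
    return any(w in post.lower().split() for w in words)

def filter_feed(feed, suppress):
    """Omit feed items containing words from the suppressed category."""
    return [p for p in feed if not contains_word(p, suppress)]

def positivity_score(posts):
    """Fraction of a user's posts containing a positive word."""
    if not posts:
        return 0.0
    return sum(contains_word(p, POSITIVE) for p in posts) / len(posts)

feed = ["so happy today", "awful commute", "great news", "meh"]
print(filter_feed(feed, POSITIVE))       # ['awful commute', 'meh']
print(positivity_score(["happy", "sad day"]))  # 0.5
```

Comparing the positivity scores of the suppressed group against a control group is what would, in principle, reveal an "emotional contagion" effect.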

Now never mind that the results were determined based on what people chose to share as a result of any psychological change. Never mind the theories around the scoring of the words that were removed, or how they scored user-created content to determine whether users became more or less negative during that time. Never mind any other methodological concerns. Let's even say never mind to the actual findings, though if you are interested, even those are open to interpretation. What's really interesting is: what do we think about the test?

Should we “like” what they did, or “dislike” it? Should we applaud them or chastise them? Or both?

First, okay, I'll say it… A psychographic test among 700,000 people whose data input was unbiased as a result of a truly blind experiment. Wow! That's a huge data set, and it creates the potential for broadly sweeping implications! Am I jealous of that data set? Maybe just a little.

But here it comes. They didn’t tell people what they were doing? And they removed positive content from some feeds to see if they felt more negative? And they removed negative content from feeds to see if they felt more positive? And they didn’t ask permission? They literally ran the risk of affecting people’s psychological state of mind without their approval?

Uh-oh. I think that might go against a few ethical principles we hold very dear.

• Get permission: This one is easy. Nope. No they did not.

• Be transparent: They said nothing, even afterwards, for years.

• Do No Harm: Were some people psychologically harmed? Slate.com is quoted as saying “Facebook intentionally made thousands upon thousands of people sad.” Facebook said the statistical results showed only a small statistical difference in “sadness” results, but they didn’t know going in that would be the result. They did make some people more sad, and what if the results had been more dramatic? What if someone was already in an emotional state? Were other people’s perceptions of a user impacted when their “positive” content was removed from friends’ feeds, leaving only the negative content?

You might say that people accept these risks in the Terms of Service. Okay. Maybe. But I have two issues with that. Reasonableness and Research. Is it reasonable to think, based on the terms, that FB would experiment with your moods? Most people I’ve spoken to would say no. And if you are going to call it research, shouldn’t it adhere to research standards? Most I’ve spoken to would say yes.

If you attended the CASRO session at IIeX on data privacy, you learned that 40% of US research participants have “very little” trust of the MR category with their personal data, and 51% have “very little” trust of social media companies. As a result, 97% say that getting their approval is a universal mandate. This is a classic example of their concern. People don’t want big brother affecting their content. They certainly don’t want to feel like human guinea pigs. And they won’t stand for feeling manipulated. So if you ask me, there’s nothing to “like” about this experiment.

I think FB owes an apology, not only to their members for violating their trust, but to the research industry as well for labeling this social experiment as Market Research.

What do you think?


Visionaries, Innovators, Disruptors & Change Agents Wanted

The Call For Speakers for the next 7 IIeX events are open. Join us in transforming the insights industry!
I don't consider myself an event producer, but as some folks pointed out to me during IIeX Atlanta, I am now. The success of and demand for what we're trying to do with the IIeX events, and other opportunities that have come to us as a result, have led us to beef up our team and develop a full calendar of upcoming events. And great events need great speakers, so… the call for speakers for our next events is now open!

We are looking for dynamic presentations, case studies and provocative points of view.  We want to hear about “the next big thing!”.  All of our events showcase the very best from inside and outside of the market research realm, with a focus on technology innovation, thought leadership, collaboration and networking.

Please use the speaker proposal form here: http://www.insightinnovation.org/callforspeakers/

If you have problems using the form, you can send your synopsis directly to lmurphy@greenbook.org.

Here are the events we have planned in upcoming date order:


Insights Marketing Day presented by GreenBook | SEPTEMBER 23, 2014 IN NEW YORK

One-day event specifically designed for everybody in a marketing function at a market research agency. IMD is filled with the latest best practices, practical tips, demonstrations, and new marketing technologies. Attendance is free to marketing employees of market research agencies that participate in the GreenBook directory.


The Retail Innovation Summit | OCTOBER 16, 2014 IN COLUMBUS, OH

An inaugural event designed to connect senior leaders in the retail industry value chain with thought leaders and emerging and disruptive solutions providers to collectively create a vision of the future of retail. RIS is an exclusive one-day event that will focus on enabling solutions for critical topics like The Maker Economy, Supply Chain Optimization, Experience Design, HR & Training, Big Data Analytics, and Shopper Insights & Impact. With a unique design that, in each session, pairs highly interactive discussion among senior retail industry leaders with the new players creating the solutions of the future, RIS is much more than a conference: it is a collaborative problem-solving work session for the industry itself.


The Nonconscious Measurement Forum, in partnership with Burke Institute | NOVEMBER 2014 IN NEW YORK

A first-of-its kind event focused squarely on the latest innovations, best practices and thought leadership in neuromarketing and non-conscious measurement.


Insight Innovation eXchange Asia-Pacific 2014 (IIeX AP 2014) | DECEMBER 4-5, 2014 IN SYDNEY

Insight Innovation eXchange Europe 2015 (IIeX EU 2015) | FEBRUARY 2015 IN AMSTERDAM

Insight Innovation eXchange Latin America 2015 (IIeX LA 2015) | APRIL 2015 IN MEXICO CITY

Insight Innovation eXchange North America 2015 (IIeX NA 2015) | JUNE 2015 IN ATLANTA


Information on all of these events will be available at http://www.insightinnovation.org/ in the coming weeks.

There are many priorities to balance when building the agenda, so a few things to keep in mind -

None of our events are EVER “pay to play”: we select speakers based on the size of their ideas, not their marketing budgets.

Not all submissions are selected because we also highly curate the content and invite speakers we think are exceptional too.

We default to showcasing the newest, most interesting topics whenever possible.

We think sponsors who support our events deserve time on stage as well as a “thank you”.

Most of all we try to ensure that all speakers help us tell the story we want to share with the world: that the insights space is changing fast, and the industry is rising to the challenge by embracing new ideas, technology, and models to deliver on the value of MR.

So throw your hat in the ring and join us!


The 36 Most In-Demand Types Of Suppliers At IIeX

An analysis of the IIeX Corporate Partner Meetings in Atlanta, and what they might tell us about the future of insights.



IIeX North America was held in Atlanta June 16-18, and once again it can only be described as a huge success. With 650 attendees roughly divided between a third client-side, a third technology providers (many from outside MR) and a third more traditional suppliers/consultancies, it was exactly the mix of stakeholders the event was designed to attract. To paraphrase one of my favorite '80s TV heroes: "I love it when a plan comes together!"

Much has already been written about the public-facing event (see here, here, here, here, here, here and here for examples), but that is just part of the story.

We’ve been very clear that IIeX serves multiple purposes, but in essence I see the events as a funnel to accomplish 6 things:

  1. Connect client-side research organizations with new potential supplier partners
  2. Connect established MR suppliers with potential new emerging technology partners
  3. Connect emerging and early stage technology companies with clients and partners
  4. Connect emerging and early stage companies with funding or acceleration opportunities
  5. Serve as a thought leadership platform and inspiration source for the entire industry
  6. Become a bridge for trade organizations and new players

Of course there is more to it than just these basics and lots of other great byproducts emerge from the process, but those 6 priorities lie at the heart of why we launched this initiative to begin with.

To make this happen it all starts with working with our Corporate Partners to understand what they are looking for and building the event around those needs as much as possible. Part of that is a public process such as the Insight Innovation Challenge, but even more happens in private consultation with the Corporate Partners. To a great extent they are our de facto advisory board.

For IIeX North America the partners were:




The culmination of their work with us occurs at the conferences: right before the event we share with them a list of all attending companies with descriptions of their core offering and then set up private meetings during the conference for them to meet with the companies they find interesting. It’s a great win/win for all and we continue to expand the program to partners globally.

Oh, and we do all of that for free. That's right: no cost. Zilch. Nada.

Unlike other events that charge suppliers for the "speed dating" options (sometimes $20k or more), we don't charge clients or suppliers for participation. The selection process has nothing to do with whether someone is a sponsor, speaker, partner, or even a paid registrant. In fact, in some cases the suppliers selected are start-ups that we have invited to exhibit for free. Selection is based solely on the needs of each corporate partner: the suppliers picked, the length of meetings, and even the number of meetings are determined by the client.

Obviously we cannot talk about specific meetings or vendors, or even which Corporate Partners are interested in what approaches, but in a general sense looking at the classification of vendors from an offering perspective is enlightening.

17 Corporate Partners selected 129 suppliers to meet privately with, for a total of 293 meetings (yes, some individual companies were very much in demand and met with many folks). That is a pretty decent B2B sample, so although it certainly can't be considered quantitative, it is strongly directional of the types of things these large client companies find most interesting right now.

Of course the implication is that they are interested because they believe they can address business issues within their organization, and that is incredibly useful to know as the rest of the industry continues to adapt to the changing marketplace.

Here is a total breakdown:

Unique Meeting Requesters: 17
Total Meetings Scheduled: 293
Unique Vendors Selected: 129
Vendor Categories: 36


And here is the breakdown by supplier category:


Category Meetings Per Category
Nonconscious Measurement 29
Insight Consultancy 26
Gamification 20
Virtual Qualitative 15
Communities 14
Facial Coding 13
Data Collection 13
Full Service MR 13
Big Data Analytics 12
Mobile Behavior Data 11
Text Analytics 11
Image Analytics 10
Behavioral Economics 9
Retail Analytics 9
Digital Ethnography 8
Prediction Market 8
Social Media Analytics 8
Wearable Tech 8
Virtual Environments 5
Loyalty Analytics 5
Crowdsourcing 5
Foresight Consulting 5
Internet of Things Analytics 4
Data Visualization 4
Augmented Reality 4
Cross-platform Tracking 3
Technology Consulting 3
Micro Surveys 3
Data Curation 2
Virtual Reality 2
Video Management 2
Predictive Analytics 2
Web Analytics 1
Sensory Testing 1
Media Measurement 1
Design Consultancy 1
Branding 1
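For readers who want to slice the breakdown themselves, here is a minimal Python sketch. The counts are transcribed from the table above (only the top ten categories are included here to keep it short; the remaining rows can be added to the dictionary in the same way), and the 293-meeting total is the figure reported earlier in this post:

```python
# Top-ten category counts, transcribed from the table above.
meetings_per_category = {
    "Nonconscious Measurement": 29,
    "Insight Consultancy": 26,
    "Gamification": 20,
    "Virtual Qualitative": 15,
    "Communities": 14,
    "Facial Coding": 13,
    "Data Collection": 13,
    "Full Service MR": 13,
    "Big Data Analytics": 12,
    "Mobile Behavior Data": 11,
}

TOTAL_MEETINGS = 293  # total scheduled meetings, as reported above

# Rank categories by demand and express each as a share of all meetings.
ranked = sorted(meetings_per_category.items(), key=lambda kv: kv[1], reverse=True)
for category, count in ranked:
    print(f"{category}: {count} meetings ({count / TOTAL_MEETINGS:.0%} of total)")
```

Even this partial cut makes the headline finding easy to see: Nonconscious Measurement alone accounts for roughly a tenth of all meetings scheduled.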


To dive a little deeper, here is the breakdown by the number of companies selected per category:


IIeX Company Meetings

A few notes on the segmentation we applied:

  • We have bundled many techniques for emotional measurement (EEG, voice analytics, biometrics, and implicit or cognitive based approaches) but kept facial coding/scanning separate due to the massive consumer-technology adoption of the core tech.
  • We separate Big Data Analytics and Predictive Analytics, although arguably there is significant overlap between the two. The reasoning is based on the supplier's approach: Predictive Analytics suppliers may work with smaller or more discrete data sets that fit within the traditional MR paradigm, while Big Data Analytics suppliers work with larger, more disparate data types that are usually novel to MR.
  • Data Collection constitutes any technology platform that is primarily survey based. Most in this category would also be considered DIY.
  • Full Service MR firms are traditional suppliers that generally have in house design, field, and analytical capabilities. They tend to focus on the research process under clearly defined project definitions.
  • Insight Consultancies look more like strategy firms or agencies and are “methodologically agnostic”; although they may have some proprietary approaches or focus areas, overall they tend to focus on the business issue & outcome vs. the process.

Obviously this is a bit of an "off the cuff" cut, but it reflects how we tend to group suppliers at a high level. Additional levels of gradation could easily be applied, and conversely some rollups might be warranted as well, but for our purposes I think it's reflective of the highly fragmented and increasingly specialized nature of the supplier marketplace, especially in light of the continued influx of potentially disruptive technology providers.

Despite the apparently low uptake registered in the most recent GRIT study for some of these categories, like Gamification and Neuromarketing/Biometrics, clients are still keenly interested in the approaches, and collectively they accounted for a large percentage of the meetings. That should be a big wake-up call for the rest of the industry: clients want to get at the nonconscious drivers of behavior as well as new ways to engage consumers, and they are looking at many solutions to achieve it. My personal belief is that any firm that can address the speed, cost, and scale issues related to data collection in those categories will quickly grow at the same pace that we've seen with social, mobile, and communities.

The other big winners echo GRIT: social, mobile and analytics. All were well represented, with analytics being of especially high interest.

Interesting and promising new technologies such as Wearables, Virtual Reality, Augmented Reality, The Internet of Things, Image Analytics & Loyalty Analytics (think single-source data) all had respectable showings and I fully expect to see more of these companies being a part of IIeX in the future due to client demand.

Like everything we do at GreenBook, we share this analysis so that the MR industry can stay on its toes and continue to be a dynamic and creative global sector. Our belief is that since this particular initiative involves some of the largest global research buyers, it is indicative of where clients are looking for new insights-driven competitive advantage today and in the future. As an industry we'd do well to pay close attention to these signals.

As we move forward with more IIeX events we will continue to track this and share with the industry what we can. Since IIeX is focused on connecting supply and demand we’re uniquely positioned to understand what’s happening and share that to help everyone thrive.

I’d be remiss if I didn’t throw a plug in here to wrap things up:

If you’re a client organization and want to join us as a Corporate Partner, send me an email at lmurphy@greenbook.org. We’d love to help you as well.

If you’re a supplier, remember that in order to meet with our Corporate Partners you actually have to be in attendance. Book your spot at one of our future events right now so you can be in the consideration set.