
Why Johnny Can’t Forecast

Why do forecasts go wrong? There are three reasons: the GIGO principle, competitive reaction, and the precarious nature of asking about purchase intent.


Dr. Stephen Needel

I’m reminded of Rudolf Flesch’s 1955 book Why Johnny Can’t Read, wherein he blames the American education system and progressive teaching methods for our kids’ inability to read. I don’t think we can blame the education system for why we don’t do a great job of forecasting new product performance. What can we blame? How about blaming ourselves for not educating our clients about why these forecasts can, and often do, go awry.

Our ability to forecast new product sales has not changed much over the years. New products are still failing at an extraordinary rate; most experts put this in the 80% range. BASES, the leader in new product forecasting, often quotes an error range of +/- 20%, 80% of the time. Put these together and you might wonder why we even bother. One would think we’d have wised up by now and either developed better models or given up on the idea. We keep at it for a number of reasons – because most of the time our forecasts are either close enough, or we can explain why they went wrong, or we think forecasting improves our chances of being right.

Why do forecasts go wrong? There are three reasons: the GIGO principle, competitive reaction, and the precarious nature of asking about purchase intent.

New Product Forecasting (NPF) is very susceptible to the garbage in – garbage out principle. If your seasonality estimates are off, if your distribution build estimates are off, if you radically alter the media plan, or you change your product’s line items, you’ll get a different result from what you forecast. And don’t ignore the impact of repeat and depth of repeat. I’ve seen lots of products fail, not because of trial problems (you can always buy trial) but because the product didn’t live up to expectations; repeat and depth of repeat are inputs to a forecast and the category average may not be appropriate for your product.

NPF is also susceptible to competitive reaction. Your $50 million advertising and promotion plan may be great, but if your competitor comes back with an equally strong plan, your product isn’t going to do as well. As my friend Pete Mimnaugh, who has been forecasting new products since dirt was invented, puts it, category stress is just as important as absolute spend. Category stress is a relative measure tied to the ratio of marketing category expenditures to category sales. A large, highly profitable category such as pain relief carries an extremely high stress level. The advertising and promotion plan needs to be matched to the category stress level.
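
Since category stress is just a ratio, it is easy to make the idea concrete. Here is a minimal sketch in Python; the categories, spend figures, and sales figures are hypothetical illustrations, not Mimnaugh’s actual numbers.

```python
# Category stress as described above: total marketing spend in a category
# relative to that category's sales. All figures below are hypothetical.

def category_stress(category_marketing_spend: float, category_sales: float) -> float:
    """Ratio of category-wide marketing expenditures to category sales."""
    return category_marketing_spend / category_sales

# The same $50M launch plan lands very differently in a low-stress category...
snack_stress = category_stress(200e6, 4e9)        # 0.05
# ...than in a high-stress one such as pain relief.
pain_relief_stress = category_stress(900e6, 6e9)  # 0.15

print(f"snacks: {snack_stress:.2f}, pain relief: {pain_relief_stress:.2f}")
```

The point of the ratio is that it scales a launch plan to its competitive context: the same absolute spend buys far less share of voice in a category where everyone is already shouting.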

Perhaps the most important cause of failure, though, is the fact that we rely on purchase intent as our starting point.  It turns out that purchase intent can be a pretty mediocre predictor of actual purchasing. For purchase intent to be useful, we have to believe that people can give us a true answer to the question. This may not be hard when (a) the respondent is familiar with the product type, (b) the context of the purchase decision is available to them, and (c) they are the consumer of that product. For example, if you tell me about a new breakfast cereal that’s made of oats, is heart healthy, tastes chocolate-y (indeed, it tastes better than Cocoa Puffs, my favorite) and is priced like other cereals, I can give you a good guess as to my purchase intent. I’m thinking chocolate flavored Cheerios and I can imagine that.

Now imagine it’s 25 years ago and someone tells you they’ve created a new portable device. You can make and receive phone calls from it, you can take pictures or videos with it, you can pay at stores just by waving it in front of a scanner, it can give you directions, tell you what’s nearby when you ask it to, you can type on it and send what you typed to friends who can write back to you instantly, it fits in your pocket, and it costs only $500 plus $50 per month. This person asks you how likely you’d be to purchase it – how do you possibly respond? You have no context for the product or the things it does. Email and texting didn’t exist back then – who thought we’d need that? And that’s why forecasting innovative products is nearly impossible – we rely on people to take a guess about something they can’t relate to, for which they have no context.

In the CPG world, there is a simple solution to new product forecasting dilemmas – do test marketing. While this business has shrunk from its heyday in the 1990s, it is still the most effective way to generate a new product forecast. You get to find out that your package is ugly or falls down on the shelf, that sales are so slow the product develops maggots, that once people taste your product they never come back (all things I’ve seen in test marketing), and more. You get a read on trial that is not dependent on purchase intent, and you actually measure repeat and depth of repeat. While it might cost you a couple of million dollars to run, a test market keeps you from blowing $50 million on a bad national launch.

Agile research, one of this year’s hottest buzzwords, is not the solution for consumer packaged goods. Unlike software or technology products, a new CPG product has no inherent demand, and the sell-in process is fundamentally different, with little actual lead time necessary. The technology industry has set up consumers to expect bugs in early versions that can, and will, be repaired shortly after introduction. In the CPG world, you can’t fix the flavors or the packaging or the communication points after introduction – the product is already off the shelves and there are others waiting to take its place.

Niels Bohr is quoted as saying, “Prediction is very difficult, especially if it’s about the future.”


Disrupting Market Research: An Update

It is tempting to try to reduce the industry’s challenges to bite-sized pieces, as if using big data, understanding millennials, or telling better stories might turn the tide. These things need to be done, but the issues are bigger.


Editor’s Note: Earlier this year JD Deitch wrote what I consider to be one of the best deconstructions of the systemic challenges facing the market research industry I have ever read. I was so impressed I asked him to come to IIeX in Atlanta to present his thoughts as well. Last week JD asked if I’d like to post a follow-up, and I jumped at the opportunity. As we close out 2014 and turn our eyes towards 2015, this should become a foundational POV to inform your strategy for the foreseeable future.

 

Jonathan D. Deitch, Ph.D. (Connect on Twitter @JDDeitch)

About ten months ago I wrote an article entitled Disrupting Market Research: Five Keys to Survival. In it, I observed that, like most sectors, traditional market research is being disrupted by new techniques and environmental factors that pose an existential threat to laggard firms regardless of size. I concluded that, to merely survive, research companies needed to stop talking about value and start delivering it, and that this would only happen if they created game-changing assets and approaches that gave clients the tailored services they wanted.

My argument was predicated on changes in spending as data penetrates the business world, changes that have effectively ended the monopoly on budget and methodology traditional researchers once enjoyed. From 2010-2012, spending on traditional MR remained virtually flat, while spending on new techniques grew at a healthy double-digit rate.

With the recent release of 2013 estimates from ESOMAR (as estimated by Outsell) I thought I’d update the original article and reevaluate its themes. The data continue to point to the same conclusions. The industry is innovating and incumbents are facing stiffer headwinds. Worst of all, research participation is now at the edge of an abyss.

2013 Spending: Traditional MR still losing ground, online trends down

Spending on traditional MR in 2013 grew by a scant 2% compared to 8% growth in what ESOMAR calls the ‘expanded market’. Traditional MR thus continues to lose ground to new techniques at what is roughly the same pace as previous years. Moreover, the share of spending on online research declined by 1 point, meaning about $600 million shifted to other modes. ESOMAR attributes these factors to price competition in online research and competition from other methodologies, explanations which are entirely consistent with my observations in January’s article.
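
The dollar figure is straightforward share arithmetic. Here is a quick check, assuming the roughly $60 billion total market implied by the text (an assumption for illustration, not a figure quoted from ESOMAR’s tables):

```python
# Back-of-envelope check on the "~$600 million shifted" claim: a one-point
# share decline applied to an assumed ~$60B total market (illustrative).
total_market = 60e9          # assumed total spend, USD
share_point_decline = 0.01   # online research's share fell by 1 point
dollars_shifted = total_market * share_point_decline
print(f"${dollars_shifted / 1e9:.1f}B shifted to other modes")  # $0.6B
```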

For sample and panel companies, one would expect similar topline headwinds, as traditional MR companies are their core clients. Nevertheless, that sector grew by about 5%. There is evidence that sample/panel companies are diversifying revenue sources, e.g., by going directly to end clients or otherwise leveraging their technology investments.

Meanwhile, new sectors of the insights industry continue their strong growth. Online analytics, web and social media research, social media communities, and marketing reports and research grew by at least 10%. 2013 saw a very healthy 13% increase in the survey software sector as well, which seems a clear vindication of DIY.

2014: More of the same?

In my original article, I postulated that industry growth would top out when clients, recognizing they were spending ever more money on insights, would start making zero-sum decisions, i.e., they would redirect budget from traditional techniques to new methodologies. While the “legacy” MR industry may have staved off the worst in 2013, there are ill winds blowing in 2014.

Dark clouds for traditional MR firms

In July, two of the biggest traditional MR firms announced first-half revenue shortfalls and warned of full-year consequences. The capital markets delivered swift judgment. In October, one of the biggest conglomerates showed a revenue decline in Q3 and warned of weaker sales to come. The weakness appears to be hitting hardest in mature markets.

While they don’t report their finances like publicly-quoted companies, the “upstarts” show no signs of flagging. The leaders of the insurgency might well be not one but two DIY (do-it-yourself) survey software companies, both of which have putative valuations of over $1 billion, making them worth more than all but the biggest players in the industry. At what point do they “acqui-hire” a full service firm for some additional research bona fides for clients who might need higher-level consulting?

Trends to watch: Research “off the rack”

Keeping with the DIY theme, for years now insights clients have been complaining about the haute couture world of full-service research with its high prices, slow delivery, and methodological ostentation. In response, researchers have argued back that DIY solutions are like buying factory seconds: you get what you pay for.

Enter “Ready-to-Wear” Research. A UK-based company offering services in Europe, North America, and Australia now allows customers to buy, with a credit card, the industry’s leading intellectual property for concept testing, copy testing, and brand measurement (with other products coming soon) in explicit partnership with the IP owners. By buying “off the rack”, clients don’t get every bell and whistle, but they do get reliable results using proven techniques, faster and at a lower price, which is exactly what they have been saying they want. This hybrid approach—not exactly DIY but not exactly full-service—gets sample via an API from a large global sample company. The whole thing oozes efficiency, explicitly addresses the issue of quality, and appeals to clients’ own personal consumer experiences of getting just what they want while executing the transaction quickly and conveniently.

This company is not alone in its efforts to create a more templated solution. Many research companies are trying to find the sweet spot between their full service offerings and “lighter” designs that are faster and cheaper. The limiting factors are twofold. One is the operational challenge of simplifying. The other is the deep concern that the lighter product will cannibalize revenue.

Trends to watch: Automation

With sample and panel providers leading the way, the industry is now headed full speed into programmatic execution. From lower operations payrolls to highly efficient and cost-effective matching and delivery of respondents to studies, automation holds a lot of promise.

Suppliers are now offering pricing advantages for using self-service APIs instead of their human project managers. Automation may be both blessing and curse, though. In the longer term, automation must surely reduce the information asymmetry between sample buyers and suppliers (which currently favors suppliers) as it promises far greater price transparency and liquidity. This, in turn, should promote greater pricing parity and create both top- and bottom-line pressure. Of course there is more to sample than just price—respondent quality and a firm’s reliability are important considerations, too—but at some point this information will become equally available and get priced in.

On the acquisition side, panel builders continue to recruit in the fully automated and magnificently intricate digital media ecosystem. The process is entirely data-driven and managed in real-time: predictive models determine, in a matter of milliseconds, whether the person currently looking at the screen (whose demographics are inferred and appended from big data) is likely to be a panel joiner to determine whether they should bid for the ad spot and show creative that has been optimized for the body viewing the screen. The competition for “eyeballs” is fierce. The advertising and affiliate networks and co-registration marketing companies who have traditionally provided the industry’s panelists are also showing a willingness to experiment with new techniques to drive conversion.
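
As a stylized sketch of that decision loop, consider the following. The propensity model, feature names, dollar values, and creative-selection rule are all hypothetical stand-ins, not any vendor’s actual system:

```python
# Stylized per-impression recruitment decision, as described above.
# The model, features, and economics are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class JoinPropensityModel:
    """Stand-in for a trained panel-join propensity model."""
    base_rate: float = 0.02

    def predict_proba(self, demo: dict) -> float:
        # A real model scores inferred demographics; this stub just
        # nudges the base rate upward for the under-35 segment.
        return self.base_rate * (1.5 if demo.get("age", 99) < 35 else 1.0)

def pick_creative(demo: dict) -> str:
    """Choose the ad variant suited to the inferred viewer (stub)."""
    return "mobile_creative" if demo.get("device") == "smartphone" else "desktop_creative"

def decide_bid(demo: dict, model: JoinPropensityModel, ask_price: float,
               value_per_joiner: float = 4.00):
    """Bid only when the expected value of a recruit covers the ad cost."""
    expected_value = model.predict_proba(demo) * value_per_joiner
    if expected_value >= ask_price:
        return expected_value, pick_creative(demo)
    return None, None  # pass on this impression

bid, creative = decide_bid({"age": 28, "device": "smartphone"},
                           JoinPropensityModel(), ask_price=0.05)
print(f"{bid:.2f} {creative}")  # 0.12 mobile_creative
```

The whole loop has to run inside an ad exchange’s response window, which is why it is model-driven rather than human-managed.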

Tipping Point: Smartphones and Young Respondents

Despite all the talk by both suppliers and end clients, and despite more than adequate research-on-research to provide guidelines, the industry is moving at a snail’s pace in its adoption of mobile-first, device-agnostic surveys—and it’s starting to materially affect feasibility in younger audiences.

Even a casual review of research company websites and apps provides ample evidence that there isn’t nearly enough inventory of smartphone-friendly completes. Users give middling ratings to mobile apps and remark that “there are never any surveys available”. In app, users still get diverted to the big-screen versions of surveys and websites. They watch as the “please wait” icons spin, only to finally deliver the frustrating news that the survey is closed.

The industry has reached a tipping point, particularly in the US, where somewhere between 30-50% of new panel recruits (perhaps more, depending on the weight of mobile in the media plan) are joining from their smartphones. The consensus is that better than half, and perhaps as much as two-thirds, of all emails are opened on mobile devices, which matters because email remains the primary method of informing people about surveys. The numbers skew higher for those under 35, as smartphone penetration is higher.

This is a massive slice of the available universe, one that panel companies have paid for and their clients would love to speak to, that is simply unavailable for projects that are not adapted to smartphones. Being tablet-friendly isn’t enough either: both the user experience and the demographics are different. In the US at least, tablet penetration is around 40% and skews toward higher incomes, whereas smartphone penetration is 20 points higher and, importantly, shows no significant differences across racial groups and smaller differences by income.

This problem is of particular concern for trackers and normed studies. While one can understand their reluctance to change, it is no longer avoidable. There are only two sustainable options: either (a) they get shorter and adapt their data collection to be device-agnostic, or (b) they tolerate feasibility declines that lead to trend breaks and bias. Researchers and clients need to confront these issues so they can plan and act deliberately.

If the industry is fortunate, the need to adapt to smartphones will drive greater attention to the user experience and lead to shorter surveys, more reasonable designs, more focused questions, less cumbersome response mechanisms, and ultimately higher participation rates and better data. There really is no other alternative.

Five Keys to Survival: They Still Make Sense

In the context of the above developments in the industry, the Five Keys to Survival from my original article remain relevant.

1: Make money for clients

At the moment, this appears to be taking the form of leveraging technology and tools for greater speed. With data and analytics continuing to be built into execution, the need to undertake a distinct “research project” is shrinking. While this need won’t disappear, it is clear that understanding how clients execute and developing offers that fit their execution is essential.

2: Create new assets

At the moment, this is all about automation and tools. From “ready to wear” research to sample automation to horizontally-integrated field/project management platforms, insights firms are investing in technology that enables them to streamline their own execution to save time and money. If there is nothing else certain about the future of the industry, it is that it will be far less manual than it is today.

What still remains to be seen is whether firms will be able to develop new data assets, either from the data itself or from the analytics on top. A quick scan of the job boards shows a healthy market for data scientists, particularly in the new expanded sectors of the market. Yet while big data is interesting and necessary, it is not sufficient. Even Google is using survey data to inform its own models. It’s hard to see how the insights of the future would not be driven by a combination of techniques, both innovative and traditional.

Equally uncertain is the state of consumer panels in mature markets. The situation is especially disquieting among young audiences. The industry and its clients would be wise to remember that, still, the decision to join a panel or to participate in research studies remains a function of the participant’s perceived value given the cost. All the technical sophistication in the world will not overcome a poor experience, and the bar for what constitutes “acceptable experience” is always rising.

3: Embed Quality

The industry continues to make great strides in terms of understanding the drivers of quality in research design and sample management, particularly on connected lifestyles and mobile research experience. The work is as good as it’s ever been. Self-service tools are getting better, too.

The real issue here though, still, is around translating the research-on-research into action to improve research participation. The solution is a tricky one for incumbent firms. End clients need to agree to more sensible designs, yet these typically imply not only lower price points but also difficult change. Suppliers aiming to avoid big change may, for a time, be able to maintain revenue, but will unquestionably see margins shrink as studies become increasingly costly to field. It is a quintessential prisoner’s dilemma, and escaping it is the biggest challenge the industry and its individual firms face.

4: Ooze efficiency

I’ve mentioned several examples above which illustrate this point. Efficiency via automation is clearly the focus for many companies as they seek to improve speed and cost. These efforts will surely continue in 2015. Expect to start seeing real differences (and consequences) between those who do this well and those who don’t.

5: Reboot the business model

Are we on the cusp of this heading into 2015? It seems so. News has just broken of a merger between two analytics and insight firms. These companies combine a variety of assets, experience, and global presence to form a wholly integrated, and quite innovative looking, global top 10 competitor. It will be interesting to see if they get traction.

Conclusion

It is tempting to try to reduce the industry’s challenges to bite-sized pieces, as if using big data, understanding millennials, or telling better stories (all big topics in the past year’s conferences) might turn the tide. These things need to be done, but the issues are bigger. I noted in the original article that the biggest mistake firms being disrupted can make is to think marginally when more profound action is needed. Firms need to act strategically by envisioning the business they would build if they were starting from scratch.

 

Copyright © 2014. Jonathan D. Deitch, Ph. D. All Rights Reserved.


Submissions and Voting Are Open for the Insight Innovation Competition at IIeX in Amsterdam

Submissions and voting are now open for the latest round of the Insight Innovation Competition (IIC), to be held at the Insight Innovation eXchange (IIeX) 18-19 February in Amsterdam.


Imagined and organized by GreenBook, the Insight Innovation Competition helps entrepreneurs bring disruptive ideas to life while connecting brands to untapped sources of competitive advantage through deeper insights.

The Competition works as follows:

  1. Innovators submit a great idea that will change the future of marketing research and consumer insights.
  2. The market research industry votes on the ideas that have merit.
  3. Five finalists (and possibly a couple of wildcard entrants selected by the judging committee) are selected to pitch their ideas to a panel of judges at the Insight Innovation eXchange (IIeX) Europe 2015.
  4. The winner gets mentoring, fame and exposure to potential funding partners.
  5. The industry benefits from a great new solution that improves how companies understand consumers.

The Competition has been a huge success story. It’s a win for everybody: entrepreneurs with great ideas for improving the business of insights, investors looking for new opportunities in the insights space, and the corporate-side end-users of market research who are looking for new solutions. Additionally, the influx of new thinking and new technology helps further the evolution of the market research community as a whole. IIeX EU 2015 Chair and Chief Architect Ray Poynter and I are looking forward to a spirited round of the Competition at IIeX in Amsterdam.

Stephen Phillips, CEO of past IIC winner ZappiStore, said winning “not only helped us feel great about what we were doing but also helped us attract both clients and potential investors.” Similarly, according to David Johnson of Decooda, winning the IIC “helped accelerate the entrance of our company into the marketplace, gave us massive visibility to potential partners and clients, and led very directly to new business.”

Submissions and voting take place on the Insight Innovation Competition website at http://www.iicompetition.org until 9 January 2015.


To The Future And Beyond: What GRIT Does—And Doesn’t—Tell Us About The Future Of The Insights Industry

While the GRIT report remains the most forward-focused perspective on the forces shaping our industry, more profound change will occur as we release ourselves from pre-defined notions of what research is.

By Greg Heist

The semi-annual release of the GRIT report is always a must-read item. This temperature check of the insights space illustrates a collective view of our unfolding future. When reading GRIT, I tend to look for divergences and gaps. How has time shifted perceptions? Where are the disconnects between client-side and agency-side researchers? What do the results say about what we should pay attention to as we forge this brave new world?

After digesting this issue of GRIT, three things are at the forefront of my mind:

MOBILE HAS BOTH ARRIVED AND BECOME A CENTERPIECE OF OUR INDUSTRY’S FUTURE

Of all the things that mobile promises, the mobile survey is arguably the least compelling. While GRIT cites 64% adoption of mobile surveys, mobile qualitative is really where the game will change. It’s obvious from these results that a whole new tranche of emerging applications is poised to drive a sea change – shifting our discussions from mobile-as-a-collection-device to mobile-as-a-window-of-measurement.

In one sense this shouldn’t be surprising, since the smartphone revolution was really the impetus for this. But at this point in time, mobile is merely an alternative medium for completing a traditional survey, and this “first phase of adoption” is rapidly approaching maturity.

Looking at clients’ future consideration of mobile qualitative, it’s only a matter of time before this “second phase of adoption” brings us a much more vivid representation of consumers and their worlds.

APPLYING FORESIGHT TO THE FRINGES

“Sleepers” often lie where the crowd is most bearish. In GRIT, several techniques scored very low on future consideration, including facial analysis, biometric response, wearables, and the Internet of Things. While this may be true today, there is heavy investment in these areas outside of our industry. Therefore, we simply cannot lose sight of these fringe applications. As technologies like Nest, Hue, Sense, and Quirky gain market traction, they are sure to permeate both the insights and foresight industries.

BIG OPPORTUNITIES LIE BEYOND GRIT

It’s increasingly clear that the insights industry can no longer be defined solely by the collection of primary data and its analysis.

In fact, I believe that the biggest future opportunities lie outside the data collection techniques measured in the GRIT report.

Case in point: while big data is briefly mentioned in the current incarnation of the report, it’s only a matter of time before its disruptive effects are felt broadly within the industry. As organizations learn to harness the fusion of primary and enterprise data, a new tipping point will be reached—one that is less focused on reporting the past and more focused on predicting the future.

Beyond this, organizations are awash in insights gathering “dust” on shared drives. There is incredible opportunity to help clients derive greater meaning by curating and socializing this latent wisdom. This discipline is in its nascent stages, but nonetheless offers great promise for better informing organizational decision-making at all levels.

So while the GRIT report remains the most forward-focused perspective on the forces shaping our industry, more profound change will occur as we release ourselves from pre-defined notions of what research is.

When that occurs, the conversation will shift from incremental change to visible transformation.


Conclusive Proof That Social Media Data Predict Sales…Now What?

Now that we know that social media data are quantitative and predictive, we must create research protocols to harness their full transformative power.


Editor’s Note: We’ve previously posted a few articles by Michael Wolfe on the progress in utilizing social media data to predict sales, and now, thanks to an ambitious multi-sponsor study, it appears that we have independent corroboration that, when using the right tools and the right data, social data does indeed predict sales with a high degree of accuracy. On a personal level I have seen this firsthand via my work with Decooda, which has also demonstrated this consistently in non-public research with several major CPG clients, as well as a deeper level of insight available through combining other frameworks such as the Censydiam model by Ipsos.

Joel Rubinson is THE thought leader in market research on developing a new model for utilizing a variety of datastreams to produce game-changing insights, and in today’s post he highlights recent public studies that pretty conclusively demonstrate that the long-awaited new reality is indeed upon us. He also gives some specific and very useful suggestions on how market researchers need to adapt.

Before we go into a lull for the Thanksgiving holiday here in the U.S. that kicks off the busiest shopping period of the year, there is no more important message we can deliver to the industry than this one.

 

By Joel Rubinson

Last Tuesday, the results of a landmark study were made public proving that the quantity of social media conversations about a brand has a statistically significant relationship to changes in its sales.

 

“Researchers today announced the results of a landmark study that measured the impact of “consumer word of mouth” in six diverse categories, finding that online and offline consumer conversations and recommendations account for 13% of consumer sales, on average…About one-third of the sales impact is attributable to word of mouth acting as an “amplifier” to paid media, such as television, with consumers spreading advertised messages. The study was based on sophisticated econometric modeling of sales and marketing data.”

–Word of Mouth Marketing Association (WOMMA), sponsors include AT&T, Discovery Communications, Intuit, PepsiCo, and Weight Watchers.

 

This industry learning comes on top of an academic paper by Prof. Wendy Moe at the University of Maryland that showed a correlation of .8 between social media listening data and brand equity metrics derived from survey questions.

So now that we know that social media data are truly DATA…with predictive value, how do we act on this?

First, research needs to take social media listening seriously

As I said in an earlier post, “…finding the prediction question”, research needs to become an equal opportunity employer.  If the data has predictive value, it should be hired! Traditional survey researchers need to come to grips with the proven predictive value of social media data.  We need to stop treating social media listening as a hobby and find its mainstream roles alongside surveys and other important data streams such as clickstream and transaction data.

Second, we should create new brand metrics from social media data.

In my last post, about the marketing ATOM, I demonstrated how building brand audiences is the key to success in a digital age. People become part of an audience for a brand that is significant and relevant to them, and audiences talk about the brands they join. Hence, it is no surprise to me that social media would provide an important set of brand metrics. Once researchers enrich brand KPIs beyond the venerable survey tracker with social (and other digital) data, they will become an agent of change for the enterprise. Social KPIs will encourage marketers to focus on building their audiences, creating content that is worth sharing, and tracking advertising and promotional campaigns through peoples’ willingness to talk about them and share them. In fact, turning social media data into must-have metrics has already been done via the Social TV ratings that both Nielsen and Rentrak offer, and these affect the pricing of TV spots.

Third, we extend.

I plan to investigate if social media listening can replace continuous tracking of attributes.  I am optimistic that we can do dipstick studies with attributes but track brand perceptions throughout the year via social data, creating a leaner, more agile, and more effective tracker program.

I would like to see us begin to partition social media conversation by client segment or audience.  To illustrate, it is now possible to match social media profiles to customer lists using machine based logic that matches on name, e-mail, etc.  As such, for example, Verizon could create a segment of customers called “On the bubble” who are more likely to defect and, as an aggregated segment, their social media conversation could be monitored.  What are they saying about Verizon, competitors, life, TV programs, etc.?  Where are the conversations occurring?  This would be very powerful and the technology in fact does exist.
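
Here is a minimal sketch of the name/e-mail matching logic described above, with hypothetical field names; production identity-resolution systems are far richer than this:

```python
# Minimal sketch of matching social profiles to a customer list on
# e-mail and name, as described above. Field names are assumptions;
# real systems use much richer identity-resolution logic.
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_profile(profile: dict, customers: list, threshold: float = 0.8):
    """Return the customer record a social profile most likely belongs to."""
    if profile.get("email"):
        for c in customers:
            if c["email"].lower() == profile["email"].lower():
                return c  # an exact e-mail match wins outright
    best = max(customers, key=lambda c: name_similarity(profile["name"], c["name"]))
    if name_similarity(profile["name"], best["name"]) >= threshold:
        return best
    return None

customers = [{"name": "Jane Q. Doe", "email": "jdoe@example.com",
              "segment": "on the bubble"}]
hit = match_profile({"name": "jane doe", "email": None}, customers)
print(hit["segment"] if hit else "no match")  # on the bubble
```

Once profiles are matched, the conversation stream of an aggregated segment like “on the bubble” can be monitored without reporting on any individual.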

I’d like to see research turn report card trackers into predictive engines built from time series data that includes social media, digital, weather, survey tracking results, etc.  Our goal is to get ahead of future trends for a brand so we can influence these outcomes positively before they happen.
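
As a sketch of what such a predictive engine might look like at its simplest, here is a one-step-ahead regression of a brand KPI on lagged social, survey, and weather signals. All of the data and the model form are hypothetical; a production system would use proper time-series methods and real feeds:

```python
# Toy "predictive tracker": regress next week's brand KPI on this week's
# social volume, survey score, and temperature. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
weeks = 104
social = rng.poisson(1000, weeks).astype(float)             # mention counts
survey = 50 + rng.normal(0, 2, weeks)                       # tracker score
temp = 15 + 10 * np.sin(np.arange(weeks) * 2 * np.pi / 52)  # seasonal weather
kpi = 0.01 * social + 0.5 * survey + 0.2 * temp + rng.normal(0, 1, weeks)

# Lagged design matrix: predict week t from week t-1 signals.
X = np.column_stack([social[:-1], survey[:-1], temp[:-1], np.ones(weeks - 1)])
y = kpi[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-step-ahead forecast from the most recent week's signals.
forecast = np.array([social[-1], survey[-1], temp[-1], 1.0]) @ beta
print(f"next-week KPI forecast: {forecast:.1f}")
```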

I urge research panel providers and brand websites to encourage social log-in so the power of Facebook and Twitter profiles can be harnessed.  In this way, interest profiling and ad targeting merge into one thing.

Yes, the genie is out of the bottle, but as you head into this world of integrative measurement, please be mindful of rigorous practice for social media listening. Different providers can produce very different data streams for the same brand, depending on whether they access the full Twitter firehose, whether they include all social channels, how their semantic engine works, and so on. To understand the complexity of the last point, consider social media listening for Target the retailer. Extracting meaningful conversations about the retailer “Target”, rather than Seeking Alpha talking about a company hitting its financial targets, is not trivial. Also, some conversations map to brand preferences while others map to a hot promotion or topic.
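
Here is a toy illustration of that disambiguation problem; real semantic engines use trained entity-linking models rather than keyword rules, and the cue lists below are invented for the example:

```python
# Toy disambiguation of "Target": count retail vs. financial context cues
# and classify the mention. Cue lists are invented for illustration; real
# semantic engines use trained entity-linking / NER models.
RETAIL_CUES = {"store", "shopping", "aisle", "checkout", "deals", "coupon"}
FINANCE_CUES = {"price target", "earnings", "guidance", "quarterly", "analyst"}

def is_retailer_mention(text: str) -> bool:
    t = text.lower()
    retail = sum(cue in t for cue in RETAIL_CUES)
    finance = sum(cue in t for cue in FINANCE_CUES)
    return "target" in t and retail > finance

print(is_retailer_mention("great deals at the Target store today"))      # True
print(is_retailer_mention("analyst raises price target after earnings")) # False
```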

This is a significant stage in the journey the ARF started in 2008 when I was Chief Research Officer.  We began to explore how social media listening could become a valuable partner or even partial replacement for surveys.  The first meeting included Unilever, Procter, and General Mills…a highly unlikely event…but we all agreed that social media listening had tremendous potential for insights value creation.  This then became the big springboard into the ARF Research transformation super-council.

Now that we know that social media data are quantitative and predictive, we must create research protocols to harness their full transformative power.

Note: For both studies, the social media data streams were provided by Converseon, to whom I am a strategic adviser.

Share

Text Analytics for 2015 – Are You Ready?  

OdinText SaaS Founder Tom H. C. Anderson is on a mission to educate market researchers about text analytics

Judging from the growth of interest in text analytics tracked in GRIT each year, those not using text analytics in market research will soon be a minority. But still, is text analytics for everyone?

Today on the blog I’m very pleased to be talking to text analytics pioneer Tom Anderson, the Founder and CEO of Anderson Analytics, which develops one of the leading Text Analytics software platforms designed specifically  for the market research field, OdinText.

Tom’s firm was one of the first to leverage text analytics in the consumer insights industry, and they have remained a leader in the space, presenting case studies at a variety of events every year on how companies like Disney and Shell Oil are leveraging text analytics to produce remarkably impactful insights.

Lenny: Tom, thanks for taking the time to chat. Let’s dive right in! I think that you, probably more so than anyone else in the MR space, have witnessed the tremendous growth of text analytics within the past few years. It’s an area we post about often here on GreenBook Blog, and of course track via GRIT, but I wonder, is it really the panacea some would have us believe?

Tom: Depends on what you mean by panacea. If you think about it as a solution to dealing with one of the most important types of data we collect, then yes, it can and should be viewed exactly that way.  On the other hand, it can only be as meaningful and powerful as the data you have available to use it on.

Lenny: Interesting, so I think what you’re saying is that it depends on what kind of data you have. What kind of data then is most useful, and which is not at all useful?

Tom: It’s hard to give a one-size-fits-all rule. I’m most often asked about the size of the data. We have clients who use OdinText to analyze millions of records across multiple languages; on the other hand, we have clients who use it on small concept tests. I think it is helpful, though, to keep in mind that Text Analytics = Text Mining = Data Mining, and that data mining is all about pattern recognition. So if you are talking about interviews with five people, well, since you don’t have a lot of data, there aren’t really going to be many patterns to discover.
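
To make Tom’s point concrete, here is a toy illustration (invented responses, not client data): the simplest possible text “pattern” – word pairs that co-occur within a response – barely registers with five documents, while the same counting becomes meaningful at scale.

```python
# Toy illustration: the simplest text "pattern" is a co-occurring word
# pair. With five responses, counts are noise; with thousands, stable
# patterns emerge. The responses below are invented.
from collections import Counter
from itertools import combinations

def top_pairs(docs, n=3):
    """Most frequent within-response word pairs."""
    pairs = Counter()
    for d in docs:
        pairs.update(combinations(sorted(set(d.lower().split())), 2))
    return pairs.most_common(n)

five_interviews = ["checkout was slow", "friendly staff", "good prices",
                   "slow checkout line", "nice store layout"]
print(top_pairs(five_interviews))  # best pair appears twice: noise, not pattern
# Run the same counter over 50,000 open ends and pairs like
# ("checkout", "slow") recur hundreds of times - a real pattern.
```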

Lenny: Good point! I’ve been really impressed with the case studies you’ve released in the past year or two on how clients have been using your software. One in particular was the NPS study with Shell Oil. A lot of researchers (and, more importantly, CMOs) really believed in the Net Promoter Score before that case study. Are those kinds of insights possible with social media data as well?

Tom: Thanks, Lenny. I like to say that “not all data are created equal”. Social media is just one type of data that our clients analyze; often there is far more interesting data to work with. It seems that everyone thinks they should be using text analytics, and often they seem to think all it can be used for is social media data. I’ve made it an early 2015 New Year’s resolution to try to help educate as many market researchers as I can about the value of other text data.

Lenny: Is the situation any different than it was last year?

Tom: Awareness of text analytics has grown tremendously, but knowledge about it has not kept up. We’re trying to offer free mini-consultations with companies to help them understand exactly which of their data (if any) are good candidates for text analytics.

Lenny: What sources of data, if any, don’t you feel text analytics should be used on?

Tom: It seems the hype cycle has been focused on social media data, but our experience is that often these tools can be applied much more effectively to a variety of other sources of data.

However, we often get questions about IDI (in-depth interview) and focus group data. With this smaller-scale qualitative data, while text analytics could theoretically help you discover things like emotions, there aren’t really many patterns in the data because it’s so small. So we usually counsel against using text analytics for qual, in part due to the lower ROI.

Often it’s about helping our clients take an inventory of what data they have, and helping them understand where, if at all, text analytics makes sense.

Many times we find that a client really doesn’t have enough text data to warrant text analytics. What’s sad, however, is when we also find out they do a considerable amount of ad-hoc surveys and/or even longitudinal trackers that go out to tens of thousands of customers, and they’ve purposefully decided to exclude open ends because they don’t want to deal with looking at them later. Human coding is a real pain: it takes a long time, and it is inaccurate and expensive; so I understand their sentiment.

But this is awful in my opinion. Even if you aren’t going to do anything with the data right now, an open-ended question is really the only question every single customer who takes a survey is willing and able to answer. We usually convince them to start collecting open ends.

Lenny:  Do you have any other advice about how to best work with open ends?


Tom: Well, we find that our clients who start using OdinText end up completely changing how they leverage open ends. Usually they get far wiser about their survey real estate and end up asking both fewer closed-ended questions AND fewer open-ended questions. It’s like a light bulb goes off, and everything they learned about survey research is questioned.

Lenny: Thanks, Tom. Well, I love what your firm is doing to help companies do some really interesting things that I don’t think could have been done with traditional research techniques.

Tom: Thanks for having me Lenny.  I know a lot of our clients find your blog useful and interesting.

If any of your readers want a free expert opinion on whether or not text analytics makes sense for them,  we’re happy to talk to them about it. Best way to do so is probably to hit the info request button on our site, but I always try my best to respond directly to anyone who reaches out to me personally on LinkedIn as well.

Lenny: Thanks Tom, always a pleasure to chat with you!

For readers interested in hearing more of Tom’s thoughts on Text Analytics in market research, here are two videos from IIeX Atlanta earlier this year that are chock full of good information:

Panel: The Great Methodology Debate: Which Approaches Really Deliver on Client Needs?

Discussing the Future of Text Analytics with Tom Anderson of Odin Text


What Do Clients Think About MR Impact?

As part of the last round of GRIT we asked 185 MR buyers about their views on the impact and effectiveness of market research studies. Contrasting the ideal characteristics of a market research study and our actual practice reveals a number of interesting gaps.


By Niels Schillewaert & Katia Pallini

As a special addendum to the most recent wave of GRIT, we wanted to get a deeper understanding of the impact and effectiveness of market research studies from the client-side perspective. We partnered with InSites Consulting and Gen2 Advisors on this special “MR Impact Study” addendum. 185 market research users (marketers and insights managers, excluding professional research providers) participated in our survey and reflected on their most recent market research study as well as their ideal study[1].

We share the results of this fascinating study around three facts we uncovered about our profession.

An adaptive system is a flexible organism that changes its behavior in response to its environment. Such change is often required to improve performance or increase the chances of survival. Consumers (our context and most important resource) have changed their behavior significantly over the last years. The surge of social media and mobile has been a major driving force behind consumers gaining power over brands. Behaviors such as participation, information contribution and sharing, social networking, brand liking, product reviewing, user collaboration and co-creation (as both cause and result) have become the new ‘normal’. Gradually, we see digital companies, marketers and software providers move up to collaborate with consumers and achieve goals through them. Gone are the days when we sent out a message and waited for people to respond. Today, marketers need consumers to want to participate in brand activation and market through them, not to them. With 6 in 10 research users indicating they believe in proven and traditional methods, our study indicates that research and the use thereof may not have made that shift to the same extent and has not aligned with contemporary consumer behavior.

While survey research is mainly conducted online, there is a platform gap. Even though 19% of consumers fill out surveys on a mobile device (GRIT study 2014, Greenbook), only 5% of all surveys are actively programmed to be fit for mobile.

 


Qualitative research is mainly conducted offline. 1 in 2 research users still work with traditional focus groups or in-depth interviews. Online research communities are growing as a method, but only 19% of researchers actually use research communities to learn from and collaborate with consumers.

It is not only the channels and platforms that are lagging but also the techniques and tools. Only 9% of quantitative projects apply creative research techniques – at best, surveys use graphical scales (36%). Despite the fact that gamification has been in vogue for quite a few years, leaderboards, badges, challenges and tasks, feedback systems and social interaction are hardly used in surveys. Still, gameplay, audio-visual and creative techniques make it possible to get a better and deeper understanding of consumer behavior. Such tactics allow for better engagement with participants, which leads to a richer consumer understanding. That might explain why the picture is different in qualitative research: 81% of research users feel that qualitative research helps them engage with how consumers really live, while only 1 in 3 believe surveys are capable of bringing consumers to life.

4 in 5 research users stated that the research output was actionable and readily usable for their marketing teams. An overwhelming 92% reported their research projects generate insights worth sharing with their colleagues. Great job, right? Yet only 65% actually share the results of their research internally. So it seems there is a lot of unused potential when it comes to leveraging research internally. In fact, the research we conduct does not seem conducive to telling a good story, and it is not the start of a conversation. The majority of researchers (86%) use PowerPoint reporting to present the research results. A mere 22% have an interactive workshop to discuss the research findings, and less than 10% use creative reporting formats such as interactive videos or infographics.

Related to our first fact, it would be better if research relied on content-rich methodologies and used creative communication channels to convey research results. All too often, we rely on numbers and text as well as single media. We need to combine video, photos, physical spaces (e.g. exhibitions), (private) social media, quizzes, infographics and apps. It would be so much more enriching to have consumers upload pictures and complete a mini-ethnographic self-description in a survey. Make sure you have the ingredients to tell a good story: use consumers as characters, describe their ‘who, what, when, where’ and also explain the ‘why (not)’ of their behavior.

 


It seems research users are satisfied yet not delighted or overly proud to share the results throughout their organization. So, the time is now to step up our game and create reporting formats that help research users share consumer stories with all internal stakeholders more easily.

The first two facts about the status of market research are linked to the fact that our profession is far from adaptive and lacks creativity in the way research projects are conducted; furthermore, the (presentation) output is far from inspirational. Nonetheless, our data indicate that research users are quite proud of what they do and consider it great work. Researchers even seem somewhat tenacious: if we had to run a similar project again, fewer than 1 in 10 would advise a different approach. 86% of researchers believe their research leads to actionable results, and 3 out of 4 declare using the information from their study to steer very concrete actions. This is surprising, considering the fact that we admit our research does not entirely allow us to engage with how consumers really live. Stronger still: we found that 60% of all research (even 71% for surveys) just confirms executives’ thinking, and less than half of all research studies are perceived to generate surprising results (for quantitative surveys we only generate surprise about 30% of the time). Only 1 in 2 projects lead to change within an organization.

It is our interpretation that these numbers are way too low if research wants a seat at the boardroom table. It is about time that we as researchers start to think and self-reflect on that. What service are we providing if we do not make a difference? If we are repeating ourselves continuously, then in the end, what is our value proposition?

Conclusion: we do not deliver on our own expectations

Based on a MaxDiff analysis, we assessed what research users want most. Choosing from 20 characteristics, research users composed their ideal study. By far the most important element was the research’s ability to ‘change the attitude and decisions of marketing executives’, followed by establishing a ‘good connection between researchers and marketers’. Next, ‘rigorous analysis’ and a ‘clear storyline’ tied for 3rd place in importance. Research as a positive touch-point experience for consumers, which provides a ‘good consumer connection’, and results based on ‘a representative sample’ completed the top five of a study’s most desirable characteristics.
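
For readers unfamiliar with the technique, here is a minimal count-based MaxDiff scoring sketch. Real studies typically fit a hierarchical Bayes or multinomial logit model, and the two tasks below are invented to echo the findings, not actual study data:

```python
# Minimal count-based MaxDiff scoring: (times chosen best - times chosen
# worst) / times shown. Tasks below are invented for illustration.
from collections import Counter

def maxdiff_scores(tasks):
    """tasks: list of (shown_items, best_item, worst_item) tuples."""
    best, worst, shown = Counter(), Counter(), Counter()
    for items, b, w in tasks:
        shown.update(items)
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}

tasks = [
    (["changes executive decisions", "low price",
      "rigorous analysis", "clear storyline"],
     "changes executive decisions", "low price"),
    (["low price", "clear storyline",
      "proven traditional methods", "changes executive decisions"],
     "changes executive decisions", "proven traditional methods"),
]
print(maxdiff_scores(tasks))
# 'changes executive decisions' scores 1.0; 'low price' and
# 'proven traditional methods' land at the bottom.
```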

Interestingly, ‘low price’ research and the ‘use of proven traditional methods’ were the least important features of the ideal market research study. The agency’s ‘reputation’ and ‘collaboration with third parties’ were classified as less important overall, while ‘experience with the client’ and its ‘flexibility’ were more important.

But it is apparent there is a gap between what we ‘want’ and what we ‘do’. Contrasting the ideal characteristics of a market research study with our actual practice reveals a number of interesting gaps. First of all, we underachieve in making change happen in executives’ minds and actions; we do not provide systematically rigorous analyses; we clearly underperform in creatively reporting research results; and we could do better at using innovative methods.

 


These findings are in line with the previously discussed facts and provide clear guidance to researchers as to what to focus on to make a difference. However, we can learn quite a bit from our ultimate clients – the marketers. It is our firm belief that market research results should be managed along the lines of content marketing (based on “Insight as Content”, presented by Niels Schillewaert and Mark Uttley at IIeX 2014 in Atlanta). While research findings are our core product, we do not manage them as a ‘product’ or ‘service’. We are actually bad at marketing them – we do not think about their promotion, distribution and delivery, let alone about the ‘experience’ marketers go through when utilizing them. At best, we are good at delivering findings based on solid methods and representative samples. We should make the presentation of results more ‘experiential’. If executives feel consumer realities, experience the findings and co-create the implications, they will feel ownership, and we can extend the shelf life as well as the impact of our work.

 


There are systematic steps a researcher should take in order to treat insight as content. These include:

  1. PLAN – define the goals, develop a strategy and create a calendar.
  2. DO – install research methodologies that allow for a structural collaboration with your consumers, but make sure you produce content-rich observations.
  3. FEEL – market your findings as if you were launching a product. Given the end goal of research, it is best to promote your findings experientially. If executives experience the data, it will amplify the usage and impact of the research.
  4. REVIEW – analyze and measure the impact of what you are doing.

Installing a virtuous circle of treating insight as content will make your insights go viral in your company and enter the consciousness of your executives.

[1] The study was global, with 46% of its participants based in the United States, 17% of the sample from Europe and 11% from Asia. The majority of our participants work in a consumer environment, and 37% focus only on B2B clients. 4 out of 5 participants are active in market research or have a consumer intelligence role for a brand or company, while 19% have a more marketing-oriented function. As for sector spread: 31% were active in professional services; 1 in 4 of the participating professionals came from the financial industry; 22% from CPG / FMCG; and 21% from technology.

 

 


Is Data Science Friend or Foe of Marketing Research?

The term data science has entered business vernacular with a bang...but what exactly is it?


By Kevin Gray

The term data science has entered business vernacular with a bang…but what exactly is it?   Despite all the media buzz, one story that has gone largely untold is that statisticians are asking themselves the very same question: “The exact meaning of this term is a matter of some debate; it seems like a hybrid of a computer scientist and a statistician.”  I have quoted from Statistics and Science: A Report of the London Workshop on the Future of the Statistical Sciences, a product of a meeting in London in November, 2013 that was attended by more than 100 prominent statisticians from around the world.

If such a distinguished body doesn’t have the answer, for me to declare that I do would strain credibility.  In place of suggesting my own definition of data science I will offer some thoughts about it and what I feel is its place in marketing research, based on my experience as a marketing science person as well as interaction with contacts and business associates who describe themselves as data scientists.

The first dimension

As noted in Statistics and Science, “data science” is loosely used to refer to lines of work that make extensive use of computer science and statistics. Most of these occupations are not directly related to marketing – genomic research and seismology are two examples – and data science now plays a role in many fields. Data science is often coupled with the term big data, and I should note that there doesn’t appear to be much agreement about what big data means either (see, for example, http://datascience.berkeley.edu/what-is-big-data/?utm_source=linkedin&utm_medium=social&utm_campaign=blog).

Many working in these areas are computer scientists and mainly concerned with IT matters.  However, I perceive a rough continuum, on the opposite side of which there is greater emphasis on analysis and interpretation of data.  Statisticians and marketing scientists (with assorted job titles) are mostly located on that side of the continuum.  Of course, it’s not quite as simple as IT people on one side and statisticians on the other and there are other attributes, such as industry or subject matter expertise, that distinguish the various kinds of data scientists. There is more than one dimension to data science.  Here is another, psychographic, perspective on data scientists that may also be of interest: http://www.information-age.com/industry/uk-industry/123458536/uks-data-scientists-face-burnout-due-work-related-stress .

The extremes of my (real or imagined) continuum have become increasingly mindful of one another, and in LinkedIn discussion groups and other public forums there are often heated exchanges between them. The former often characterize the statto types as stuck in the past and out of touch, while the latter frequently see the IT-focused as lacking in basic analytical skills and scientific thinking. Both score points in these debates, but what I think is more important is that these two groups differ in educational background and skills, and also seem to be different sorts of people. Statisticians, for instance, are notoriously comfortable with uncertainty; probability, after all, lies at the heart of their discipline, and if you want a quick yes-or-no answer, don’t ask a statistician. (I confess…)

Heavily IT focused data scientists are often not well-versed in statistics and some are actually distrustful of statistical models.  Data management and related tasks are their main concerns.  Conjoint, structural equation modeling, time series analysis and many other statistical tools widely-used in marketing research are a foreign world for some, and statisticians often criticize current data science practice as mechanical and algorithm driven or as focusing too much on the What and not enough on the Why.

To flesh out these criticisms, let’s consider an example from marketing.  While we may be able to predict future purchase patterns of consumers from their demographics and past purchases fairly accurately, by integrating data from various sources, such as consumer surveys, and by using advanced statistical modeling, we can gain insights into why certain types of consumers behave the way they do in certain situations.  Marketing is also about changing behavior, not just predicting it, and these insights can help us develop more effective and profitable marketing, as well as improving our predictions.  Generally speaking, I believe these criticisms have substantial merit but will concede that causal modeling is not feasible or necessary in every data science project.
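
As a concrete, if simplified, version of that example, the sketch below contrasts a purely behavioral purchase model with one augmented by a survey-based attitude measure. The data are synthetic and the variable names are invented; the point is only that the augmented model exposes a coefficient on a lever marketing can actually move:

```python
# Synthetic illustration of "What" vs. "Why": demographics and past
# purchases predict buying, but a survey-based attitude measure adds an
# inspectable, actionable driver. All data below are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
past_purchases = rng.poisson(3, n).astype(float)
age = rng.integers(18, 70, n).astype(float)
value_perception = rng.normal(0, 1, n)   # e.g., from a consumer survey

logit = 0.4 * past_purchases + 0.8 * value_perception - 0.02 * age
buys = rng.random(n) < 1 / (1 + np.exp(-logit))

behavioral = np.column_stack([past_purchases, age])
augmented = np.column_stack([past_purchases, age, value_perception])

what_model = LogisticRegression().fit(behavioral, buys)
why_model = LogisticRegression().fit(augmented, buys)

# The augmented model predicts somewhat better, but more importantly its
# coefficient on value_perception quantifies a "Why" we can act on.
print(what_model.score(behavioral, buys), why_model.score(augmented, buys))
print(why_model.coef_)
```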

Quite a few universities now offer Data Science or Analytics programs that blend statistics and computer science but, with swift advances and increasing specialization within each discipline, these programs may be difficult to sustain.  Needless to say, it will always be hard to develop individuals who are highly competent in both statistics and computer science, to say nothing of subject matter expertise or the political savvy needed to survive in today’s rough work environments.  Admittedly, I am greatly simplifying here: quite a few job descriptions I’ve seen for data science positions are not that dissimilar to what I do for a living, and I now include data science in my LinkedIn headline and on my company website.  More importantly, though, data science teams can include computer scientists, statisticians, economists, psychologists and specialists from many other backgrounds; there is no mandate for such teams to consist of only one type of data scientist.

Not quite plus ça change, plus c’est la même chose

So, what is the role of data science in marketing research?  Many aspects of data science are actually already part of marketing research, even if the term data science is fairly new.  Beyond doubt, in the last few years there has been an explosion in the amount of data we are able to capture, store and retrieve, accompanied by rapid developments in computer hardware and software.  Nevertheless, over the past several decades many organizations have increasingly been using data and analytics in decision-making, including marketing.  Since the 1990s, much of this activity has been referred to as data mining or predictive analytics, though data science is now commonly used in their place.

I can recall a senior colleague who had spent much of his career with multinational manufacturers commenting, in this context, that the strongest competitors MR agencies faced were their own clients.  That was back in the last century!  Likewise, the popular data mining software Clementine was developed by a company called Integral Solutions Limited (ISL) and released in 1994.  SPSS acquired ISL four years later, and SPSS Clementine was launched with much flourish; the rollout event I attended drew a crowd of more than 1,000 people.  So, while many things have changed, many things have remained more or less the same.

That said, I wouldn’t agree with those who believe data science and big data can be dismissed as mere semantic fiddling.  I also disagree with MR colleagues who fear them as tsunamis racing towards us and, instead, I see data science and big data more as opportunities for marketing research than as threats.  Though gut feel will always be a part of most decisions, I concur with those who predict that data and analytics will play a much larger role in management than is now normally the case, and this dovetails very neatly with the essential purpose of marketing research.

Back to the present

We shouldn’t let ourselves get carried away, though.  In A Practitioner’s Guide to Business Analytics, Randy Bartlett devotes considerable space to organizational and cultural challenges, more in fact than he devotes to technical matters.  We should note that the author is not a journalist or software vendor but an analytics veteran of more than 20 years who holds degrees in both computer science and statistics.  I share his view that the old ways still dominate true science in most decisions: “Corporations are not as sophisticated or as successful as we might grasp from the sound bytes appearing in conferences, books, and journals.  Instead opinion-based decision making, statistical malfeasance, and counterfeit analysis are pandemic.  We are swimming in make-believe analytics.”  That is the real world as I see it too.

Big data and data science are Big Business and, in my opinion, have been overhyped.  We humans do not appear to be hard-wired to use data to make decisions; for years, if anything, managers have complained about information overload.  Our schooling, by and large, has not prepared us to fully exploit new data sources and advanced information technology.  Even if there were radical changes in the way we are educated, as long as there are human managers and human consumers, data and analytics will never entirely replace gut feel in decisions.  We are emotional creatures, not easily persuaded by logic or evidence, and the often rancorous debates about data science are ironic reminders of that part of our nature.

Besides, many important decisions cannot simply be calculated; after all, even thermostats are regularly overruled by humans!  Something else we should be alert to is that more data, particularly when the numbers aren’t trending in the same direction, will be more fuel for organizational politics in some companies and only make decision-making more unwieldy.  Add to these our natural inclination to stay the course and the very bureaucratic character of many organizations, and an abrupt and radical transformation in the way we make decisions would seem unlikely.

The future?

Decision-making will gradually evolve and become more, if never wholly, evidence-based.  Over the next few years I foresee decreasing emphasis on data infrastructure and more emphasis on what data tell us and how they can be leveraged.  With bigger and frequently messier data, understanding people will become more critical, not less, and demand will rise for marketing scientists who can see beyond math and programming and who truly understand marketing and consumers.  Incompletely observed behavior or conversations tell us only part of the story and have the potential to mislead.  More analytic options also mean more risk and increase the need for well-trained and experienced researchers.  The resurgence of Bayesian statistics is further evidence that human judgment cannot be purged from analytics; as Noel Cressie and Chris Wikle point out in their heavily mathematical textbook, Statistics for Spatio-Temporal Data, “Science cannot be done by the numbers.”

An unfortunate corollary of rapid technological change is increasing specialization and even more silos and misunderstandings.  Buyers may not really know what they’ve bought and sellers may not really know what they’ve sold.  Closer to home, in marketing research, the well-rounded generalist is already becoming hard to find and I think over-specialization is hurting our profession.  We have lots of shiny new tools that many of us don’t know how to use properly, and MR educational and training programs will need to provide more cross-training to counteract this flip side of progress.

Though there will always be things outside our control, there is much we marketing researchers can do to shape the destiny of our profession.  Besides embracing new technologies and methodologies, less exotic activities such as educating clients about how to use marketing research to make better decisions will not lose their importance.  Just the opposite.  Changing habits of thinking will be crucial and improving our own decision-making skills would do us little harm.   We must also be on guard against dubious claims and pseudo-science, which I see as threats to genuine innovation.  After all, not everything that is far-fetched actually works!

We must also learn to be more effective at marketing marketing research; paradoxical though it may sound, I think many of us will admit that our industry has historically been pretty lousy at marketing itself.  We must respond compellingly to contentions that data science has made marketing research irrelevant, and one way is to demonstrate that “data science” has, in fact, been part of marketing research for quite some time.

Data science is not entirely new and not entirely old.  It can do amazing things but cannot work miracles.  Despite the hype and hogwash, I see it much more as friend than foe.


CASRO Transformation Series: Research Now – Necessity is the Mother of Transformation

In this edition of the Transform blog we interview Kurt Knapton, President and CEO of Research Now. Kurt identifies the principles for company transformation that continue to drive Research Now to success.


By Jeff Resnick of Stakeholder Advisory Services

Most of us think of Research Now as one of the hugely successful companies in our industry. It hosts online survey panels worldwide, with 24 offices in 15 countries. Ten years ago, however, the firm’s revenues were around $10 million, less than one-thirtieth of today’s figure, and roughly twelve years ago it almost went out of business when the dot-com bubble collapsed around it. A bold decision to transform the company not only saved it but created the foundation for its tremendous growth over the next decade. In this edition of the Transform blog we interview Kurt Knapton, President and CEO of Research Now. Kurt identifies the principles for company transformation that continue to drive Research Now to success.

Do not fear the ‘pivot’. Research Now (originally named e-Rewards) started out in the advertising space, offering a ‘by invitation only’ program that rewarded individuals for time spent viewing advertisements. While this business model proved unsuccessful at the time, it demonstrated that online technology could efficiently incentivize targeted groups of people (whether consumers or business executives) to open and respond to email invitations. While this doesn’t sound revolutionary today, Research Now was at the forefront of this technology in the early 2000s. The ‘big pivot’ was the move from the advertising space to the market research industry. The early team’s willingness to step back, objectively analyze its initial failures and refocus as the market changed was a core element of the company’s ultimate emergence as a highly successful business.

Reinvention is a necessity. One of Kurt’s favorite quotes is from Winston Churchill: “To improve is to change; to be perfect is to change often.” This is the essence of the strategy Research Now follows to ensure its sustainability and success: the firm reinvents itself on a regular basis. According to Kurt, the very assumptions on which your company is based must periodically be challenged. A management team must always be able to answer the question, “Where do we need to move faster and what do we need to do better?” For example, with people spending enormous amounts of time on their mobile devices, the conversion from a tethered to an untethered world is in full swing. Understanding this behavioral change has huge implications for the market research industry, in terms of both the skills of the people it employs and the technology it must develop and harness. As Churchill implied, perfection is no more than the willingness to identify what needs to change and the fortitude to do it.

Celebrate continuous improvement. This is one of Research Now’s core values. The firm empowers and encourages employees to challenge the company status quo in order to effect positive change. Research Now’s leadership believes collaboration and cross-pollination of ideas lead to business improvement and innovation, from the lowest level of the organization to the top. Kurt highlighted one example, a ‘dumb rules contest’. This contest asked employees to identify the dumbest company rules Research Now had implemented (for whatever reason) over the years. The official killing of those ‘dumb rules’ was celebrated at a large employee meeting, the result being an ever-transforming company.

Embrace new imperatives. Research Now is evolving from a technology-enabled business to a technology-driven business. While this change may appear subtle, the impact is not. Client demand is moving toward a requirement for real-time data. The implications range from exploring new methods of data collection and assembly to meeting the increasing demand for sophisticated visual data display. The Research Now team knows that if they do not provide these tools to their clients, others will.

Give employees a reason to be proud beyond business success. Research Now leadership fundamentally believes that successful companies should give back. Among other charitable causes, Research Now is an active supporter of Kiva, an organization that provides microfinance solutions to fight poverty in developing countries. Through its employees and panel members, Research Now has funded more than $1 million of microloans through Kiva since announcing its partnership in September 2012. Getting involved with non-profit organizations can provide employees with a new perspective – again helping to fuel ongoing transformation.

Research Now is the story of a great idea transformed out of necessity, in which the willingness to step out of one’s comfort zone and the founders’ tenacious pursuit of a vision, coupled with a culture of continuous improvement and exploration, resulted in business success. It is something the entire Research Now team can be proud of.


Research Now, the global leader in permission-based digital data collection, powers business insights for its clients worldwide. Founded in 1999 and headquartered in Dallas, Texas, Research Now’s integrated data collection capabilities enable fast and accurate business decision-making.



Move Beyond The Insight To Find The Prediction Question

In the age of data driven marketing, we need to find the prediction question in every study and address it.

By Joel Rubinson

Yes, insights are powerful, but there is a downside: focusing on “insights” as the endgame might be a barrier to what we must do…embrace big data.

So if “insights” aren’t the end game, then what is? In the age of data driven marketing, we need to find the prediction question in every study and address it.

When you are in the prediction business, you are trying to predict unknown values of importance to the enterprise.  It might be predicting the future share of a brand, identifying from their cookies which users are most likely to be in play so that advertising can be delivered selectively to the right user, at the right time, on the right screen, or modeling the most relevant content to serve up a personalized experience. Or it might be predicting who will control the Senate, as Nate Silver just did using his data science-based approaches.

To be good at prediction, you need to integrate as many data sources as possible and determine empirically which ones demonstrate predictive value. That is why prediction questions encourage big data approaches. No NIH (not-invented-here) bias, no statistical snootiness about whether or not the data came from a random sample (as if that really exists anymore…). A focus on prediction leads you to integrate data from different sources and score the usefulness of information based on its incremental prediction value.  Data science is an equal opportunity employer.  If the data make sense to use AND they have predictive value, they’re hired!
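
As a minimal sketch of that scoring idea, one common approach is to compare cross-validated accuracy with and without the candidate source. Everything below is simulated for illustration; nothing comes from a real campaign.

```python
# Score a candidate data source by its incremental prediction value:
# compare cross-validated AUC with and without the new features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5000
X_base = rng.normal(size=(n, 5))   # existing features (e.g., demos, past purchases)
X_new = rng.normal(size=(n, 3))    # candidate data source
# Simulate a response that depends on both blocks of features
logit = X_base[:, 0] + 0.8 * X_new[:, 0] + rng.normal(scale=0.5, size=n)
y = (logit > 0).astype(int)

model = LogisticRegression(max_iter=1000)
auc_base = cross_val_score(model, X_base, y, cv=5, scoring="roc_auc").mean()
auc_both = cross_val_score(
    model, np.hstack([X_base, X_new]), y, cv=5, scoring="roc_auc"
).mean()
print(f"AUC without candidate source: {auc_base:.3f}")
print(f"AUC with candidate source:    {auc_both:.3f}")
# "Hire" the new source only if the AUC lift is material.
```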

The prediction business is critical for marketers because it drives up marketing ROI in a repeatable way.  Consider the world of programmatic digital advertising.  For every million page view requests, algorithms are PREDICTING which one thousand should be targeted with your ad because they are most likely to respond.  Such targeting can be based on models that use surveys, clickstream patterns, social media profiles, demos, time of day, weather, etc. and has been proven to drive up marketing ROI.
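
To make the arithmetic concrete, here is a toy sketch of that selection step. The response probabilities are simulated rather than produced by a real bidding model, which would score cookies, context, time of day, and so on.

```python
# Given predicted response probabilities for a batch of page-view
# requests, buy only the top slice of impressions.
import numpy as np

rng = np.random.default_rng(1)
n_requests = 1_000_000
p_response = rng.beta(1, 200, size=n_requests)  # simulated model scores

k = 1_000  # impressions we can afford to buy
top_k = np.argpartition(p_response, -k)[-k:]    # indices of the best prospects

print(f"Mean response prob, all requests:  {p_response.mean():.5f}")
print(f"Mean response prob, targeted 0.1%: {p_response[top_k].mean():.5f}")
```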

A great example of moving to prediction-based thinking comes from Nate Silver, creator of the fivethirtyeight blog and author of the book The Signal and the Noise. He is widely acknowledged to be the most accurate source of election predictions, and he nailed it again in the U.S. mid-term elections earlier this month.

Before Nate, political polling was centered on the single proprietary study. Each pollster ran their own poll, trumpeting its superiority and implying who would win an election as if no other pollsters or predictive factors existed. Nate takes an unprejudiced view.  ALL polls have value and need to be weighted together, but the weights are not equal…they depend on prior track record, sample size, “house effects” leaning towards one party vs. the other, etc.  Also, he doesn’t use only polls.  He finds that other factors add predictive value, such as fundraising, candidate ideology vs. voter views, economic indices, job approval ratings, etc.  In other words, each poll, for all its sampling purity, is INADEQUATE on its own at maximizing prediction accuracy.  However, insights ARE important to framing his model.  He would not use a data stream that made no sense, regardless of statistical correlation, like which league won the World Series this year. What he does is essentially apply big data principles.  He has Moneyballed political polling and is paid millions because he is the most accurate political forecaster on the planet.  I think marketing research practice needs to follow Nate’s footprints in the snow and go beyond the survey.
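
A heavily simplified sketch of that kind of weighting follows. The polls, weights and house-effect adjustments are invented for illustration; this is not Silver’s actual model.

```python
# Weight polls together after removing estimated house effects,
# with weights reflecting track record and sample size.
import numpy as np

# Each poll: (share for candidate A, sample size,
#             track-record weight, estimated house effect)
polls = [
    (0.52, 800,  1.0, +0.01),   # historically leans ~1 point toward A
    (0.49, 1500, 0.8, -0.02),   # leans away from A
    (0.51, 600,  1.2,  0.00),
]

shares = np.array([p[0] - p[3] for p in polls])            # de-bias house effects
weights = np.array([p[2] * np.sqrt(p[1]) for p in polls])  # record x sample size

estimate = np.average(shares, weights=weights)
print(f"Weighted estimate of A's share: {estimate:.3f}")
# A fuller model would also fold in "fundamentals": fundraising,
# economic indices, approval ratings, and so on.
```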

In marketing research, we can find the prediction question by thinking about the future, about differentiating one user from another in terms of how they would respond to a marketing stimulus, or about the sales response to a marketing activity.

For example, when we make a trial forecast from a concept test for a health-oriented new product, we do so without reference to prior studies. Purchase intent results are just accepted without adjustment or enhancement.  Is there really no Bayesian prior that we can extract from the hundreds of other concepts that were tested on similar health claims? Also, we drop out at launch.  Using prediction approaches, we could provide guidance to algorithmic media approaches to predict and target the likely users. Couldn’t we harness other predictive factors, like frequent shopper data patterns for a given user, or the possibility that those visiting retailer websites are more likely to try new things?
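
One minimal way to picture such a Bayesian prior is a beta-binomial update that shrinks a new concept’s top-box score toward the average of past, similar concepts. All numbers below are hypothetical, not from any actual concept database.

```python
# Shrink a new concept's top-box purchase-intent score toward a prior
# built from past tests of similar (e.g., health-claim) concepts.

# Prior: past similar concepts averaged 30% top-box; Beta(30, 70)
# encodes that history with an effective sample size of 100.
a_prior, b_prior = 30.0, 70.0

# New concept test: 54 top-box responses out of 150 respondents.
top_box, n = 54, 150

a_post = a_prior + top_box
b_post = b_prior + (n - top_box)
posterior_mean = a_post / (a_post + b_post)

print(f"Raw top-box score:    {top_box / n:.3f}")    # 0.360
print(f"Shrunken (posterior): {posterior_mean:.3f}")  # 0.336
```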

Another example is brand tracking.  Stop focusing on the report card and start thinking about brand health…predicting the FUTURE trajectory. To do this, we certainly need to include digital and social signals about the health and positioning of the brand. (Note: I am currently working with a leading supplier and have begun bringing this out to the marketplace.)
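
As a toy illustration of folding a digital signal into trajectory prediction, one could regress the next wave’s brand-health score on the current wave plus the signal. Both series below are simulated, and this is not the supplier model mentioned above.

```python
# Predict the trajectory rather than report the score: next wave's
# brand health as a function of the current wave plus a digital signal.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
waves = 40
buzz = rng.normal(size=waves)            # e.g., a social-sentiment index
health = np.zeros(waves)
for t in range(1, waves):                # health responds to last wave's buzz
    health[t] = 0.7 * health[t - 1] + 0.5 * buzz[t - 1] + rng.normal(scale=0.3)

X = np.column_stack([health[:-1], buzz[:-1]])  # this wave's readings
y = health[1:]                                 # next wave's score
model = LinearRegression().fit(X, y)
print("Weights on [current health, social buzz]:", model.coef_.round(2))
```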

To become like Nate Silver, the Moneyballers, and the data scientists, Marketing Insight teams need to challenge themselves to find the prediction question in every study and commit to bringing together the data streams or conducting the experiments that are needed for prediction and then marketing action.
