
Who’s Afraid Of The Big Bad Algorithm?

Captain America: The Winter Soldier deals with many of the very real concerns we have in our very real lives today, including what might be modern society’s new unmentionable “A”-word: Algorithm.


Editor’s Note: Like any good geek, I went to see Captain America: The Winter Soldier this weekend. Suffice to say I LOVED IT, but for reasons I didn’t expect going in. It was a very smart movie in all ways, but perhaps smartest was the integration of smart tech, Big Data, and predictive analytics into the major plot of the story. It was actually called “Operation Insight” in the movie! After the mandatory discussion about the flick with my family who went with me, I broadened  the discussion on social media. Immediately Eric Swayne started tweeting back about his take on the movie along the same lines as mine and we both agreed it would make a great blog post. Since I knew there was no way I could fit it into my schedule this week, Eric volunteered to do the honors and it is an excellent post! I hope it’s the first of many from Eric: he is brilliant. Enjoy!

By Eric Swayne

WARNING: The following post will contain copious references to Captain America: The Winter Soldier, some of which may reveal key plot points in the movie. Proceed at your own preferred level of superhero-movie spoiler risk.


Saw the aforementioned movie this past weekend, and I must admit, it was awesome. And I’m not alone: the movie has already set a box office record for April (beating Fast and Furious 5), and has received critical acclaim from even the most staunch of fanboys. Sure, there were Easter Eggs and references galore, including one brilliant nod to Samuel L. Jackson’s iconic role in Pulp Fiction. But what makes this movie really pack a punch isn’t your standard fare of pecs and abs – I think it wins with audiences at a deeper level because it deals with many of the very real concerns we have in our very real lives today. Drones and electronically-controlled death from the sky? In the movie. Government being the entity you can’t trust anymore, because of their hidden agendas? Got it. Questions about the nature of privacy and the terrifying power of data mining? Check.

But beyond those, you see discussed what might be modern society’s new unmentionable “A”-word: Algorithm.

Algo-fiction

In the movie, a mad scientist (and I’ll leave the description there) creates an ultimate algorithm that can predict which individuals will be dangerous to the “bad guys” in the future, thus giving them targets to attack in their nefarious scheme. Within the movie, it’s stated that this algorithm uses the detritus of our digital lives to accomplish its evil machinations: credit card statements, phone calls, text messages, social networks, et cetera. Of course, this level of data collection sounds a lot like some recently-unveiled REAL government programs that are rocking the international community, so you immediately see the monsters in the shadows the film’s creators are implying. Moreover the concept implies that, given enough data about an individual, an algorithm has almost infinite powers of clairvoyance with deadly accuracy. Here is where fiction diverges from reality, because while the data may be limitless (tremendous privacy issues aside), algorithms have considerable limits they don’t discuss on the silver screen:

1. Algorithms are based on assumptions.

The most common assumption baked into algorithms is that past performance predicts future behavior. That is often true, but not in all cases and not for all time. In the film’s case, the algorithm assumed people were binary – either enemies or not. Pay close attention when assumptions hit binary “chokepoints” like these, because human data are extremely messy at an aggregate level. In fact, assumptions about the world can be perfectly correct at the time they are made, yet rapidly become obsolete. A classic example is Natural Language Processing for sentiment analysis: most solutions currently on the market force their answers into discrete buckets of “positive,” “negative,” or “neutral.” This already presupposes the content has a sentiment, and that a single sentiment can be assigned to the whole of the analyzed segment. It also assumes certain linguistic patterns are reliable clues for that sentiment, when we all know language evolves rapidly, and words with a negative connotation can change or even reverse their value. Sentiment algorithms are great for high-level directional measurement, but at ground level they can be insufficient.
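To make those baked-in assumptions concrete, here is a minimal, purely illustrative sketch of a rule-based sentiment bucketer of the kind described above (Python). The word lists and examples are hypothetical – this is not any vendor’s actual lexicon or algorithm.

    # Minimal sketch of a rule-based sentiment bucketer (illustrative only).
    POSITIVE = {"love", "great", "awesome", "win"}     # hypothetical lexicon
    NEGATIVE = {"hate", "terrible", "awful", "fail"}

    def classify_sentiment(text: str) -> str:
        words = text.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        # Assumption 1: every text has a sentiment, and it fits one of three buckets.
        # Assumption 2: today's word lists stay reliable even as language evolves.
        if score > 0:
            return "positive"
        if score < 0:
            return "negative"
        return "neutral"

    print(classify_sentiment("I love this brand, it is awesome"))  # positive
    print(classify_sentiment("that ad was sick"))  # neutral - slang drift is missed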

It’s extremely critical to be self-aware when baking these assumptions into any equation, because they necessarily limit the outcomes you can create. After all, if the world is only black and white, it’s extremely hard to see color.

2. Algorithms are probabilistic.

Most behavioral algorithms are built on statistical models: they use past events to find patterns that appear with a significant level of consistency, then apply those patterns to newly gathered data to score potential outcomes for the future. The key word is “score” – very often, algorithms provide a probability or a confidence score, not an “answer.” These scores define a range of futures, some more likely than others, but all of them possible. For the ultimate sport of stat nerds – baseball – ESPN often publishes the probability of a given team winning a given game at a given moment. Even when it says my beloved Rangers have a 99% chance of winning, there’s always a chance for the other team to find that 1% – usually with a ball somewhere in the bleachers. I have a lot of certainty before that point, but that certainty isn’t absolute.
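As a toy illustration of “score, not answer”, this sketch (illustrative only) hard-codes a 99% win probability and simulates how often the underdog still wins; the 0.99 figure is an assumption, not the output of any real model.

    import random

    # Illustrative only: a predictor reports a win probability (a score), not an
    # answer; the simulation counts how often the 1% outcome still happens.
    FAVORITE_WIN_PROBABILITY = 0.99  # hypothetical score from some predictor

    def simulate_upsets(n_games: int, p_win: float, seed: int = 42) -> int:
        rng = random.Random(seed)
        # Count the games the heavily favored team loses anyway.
        return sum(rng.random() > p_win for _ in range(n_games))

    upsets = simulate_upsets(10_000, FAVORITE_WIN_PROBABILITY)
    print(f"Favored team lost {upsets} of 10,000 simulated games")  # roughly 1%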

3. Algorithms are reactionary.

Let’s say you wanted to create an algorithm to predict what time I get home from work every weekday. How would you start? Like any good behaviorist, you’d want a set of information about my habits to start from. Especially useful would be data that correlate strongly with the event you want to predict – things like what time I pass the gas station down the street each day. The important nuance is that you have to start from data that are informative about this event, not just any old data. For example, I could tell you my car is grey, has a V6 engine, and that the right-front tire is about 4 psi low. All very personal data points, but totally useless for this purpose: they’re not an effective data set for training an algorithm on my behavior. In fact, it’s entirely possible to build a bad algorithm using this data, with an assumption (there’s that word again) that people who own grey cars consistently arrive home after 5pm. That can be coded into an algorithm, but it’s still totally incorrect. Every algorithm is reactive to the data set available to its creator; it cannot be pulled out of thin air.
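A minimal sketch of the same point, with made-up numbers: one feature (when I pass the gas station) tracks my arrival time almost perfectly, while another (my car being grey) never varies and so carries no signal an algorithm could learn from.

    # Illustrative only: informative vs. uninformative training data (numbers made up).
    gas_station_time = [17.1, 17.4, 16.9, 18.0, 17.2]  # hour I pass the gas station
    arrival_time     = [17.4, 17.7, 17.2, 18.3, 17.5]  # hour I actually get home
    car_is_grey      = [1.0, 1.0, 1.0, 1.0, 1.0]       # constant every day: no signal

    def pearson(xs, ys):
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        var_x = sum((x - mean_x) ** 2 for x in xs)
        var_y = sum((y - mean_y) ** 2 for y in ys)
        if var_x == 0 or var_y == 0:
            return 0.0  # a feature that never varies can't explain variation
        return cov / (var_x * var_y) ** 0.5

    print(pearson(gas_station_time, arrival_time))  # ~1.0: worth training on
    print(pearson(car_is_grey, arrival_time))       # 0.0: useless for this prediction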

Algorithms are tools. Like all tools, they carry no inherent “good” or “bad.” And, like all tools, they carry the flaws of the humans that create them. So the next time you see movies (or real life) treat algorithms as some omniscient source of clairvoyance, just remember they’re only that way in the comics.


Is Your Focus Strategic or Tactical? Why Not Both?

As mobile usage becomes more dominant the options for better, faster and more useful information grow exponentially.


By Ellen Woods

In most research organizations the research is primarily tactical, with the more strategic initiatives left to other areas. If you’ve been at this awhile, you have probably sat through more than your share of meetings where horns were locked over the best approach, with those who have a strategic voice often at odds with the more tactical assessments.

In fact, that is one of the biggest reasons why digital marketing has changed the face of most marketing organizations and why, in many organizations, market research finds itself struggling to stay relevant.

Tactical research is far cheaper to execute and provides answers to specific questions. It usually has little impact beyond the question or project at hand unless normatives are involved, and the execution is usually done quickly. Therefore, it wasn’t a big surprise that it became the basis for tracking and the method of choice when Internet research made frequent data collection cheaper and quicker and, at least by “time to the boardroom”, better.

Therein lies the rub.

Cheaper, quicker, better worked for research up until the time data analytics entered the stage. By design, the research was meant to be directional, and it was quickly discovered that most people would barely sit through a fifteen-minute instrument, let alone a longer one, without an incentive. What happened next is a reality we are living, but it’s important to understand that even in the glory days of trackers and long surveys, there were market scientists looking for more. The short-term solution was self-administered surveys that provided insight into more specific questions and allowed dwindling outside panels to be reserved for the larger surveys. Then came communities, whose power and value lay solely in the hands of the administrator. Planning was hard and, in many cases, respondents became bored. Surveys came fast and furious from check-outs, pop-ups, special requests and direct mail.

The need for speed accelerated and as it did, mobile technology changed the playing field and tactical surveys became even shorter and less effective.

The strategist on the other hand, being the turtle chasing the rabbit, decided to invest in the data. As data analytics became widely accepted, first with real time transaction measurements, the power of existing data began to flourish. The camps, now fully divided, took their corners and their cases moved to the boardroom.

In all fairness, data analytics can never replace tactical evaluations. While “data” can provide context, on its own, it can never answer the all-important why or how questions. Tactical research does a really good job of identifying what doesn’t work but not such a good job of identifying what does. Neither tells us how or why choices were made.

The strategist understood this dilemma far earlier than those of us seeing the trees rather than the forest. Enter stage left, behavioral analytics.

Behavioral measurements have the same problems that plague tactical research, because humans often aren’t logical. Measurements exist largely in snapshots, and aggregation is iffy at best.

Meanwhile, back at the ranch, digital marketing was advancing rapidly and taking market analytics along with it. Geo-location, search incorporation and the general Big Brother nature of data measurement were advancing rapidly into a science known as predictive marketing.

Many researchers, still stubbornly stuck in quagmires of quadrants, were now trying to understand neuroscience and patterning to create relevancy.

Strategists understood what they didn’t know, and they knew many of their answers were in the data – lots of data. By harnessing IT to “sort” the data, a new model was emerging. Applying behavioral measurements to data began to yield a new kind of segment, the kind that exists in real time and has relevancy to the problems at hand. With enough data, they thought, it becomes possible to predict at least the range of reactions for some very specific populations.

The best part: it could match activities in real time. As most researchers and strategists understand, people often say what they think is wanted in a survey, interview or community, or they have an agenda. Now we know what they actually do. When the pieces of the puzzle are connected, we know why.

But there is still a piece of the puzzle missing. We don’t know to what extent. That’s where the tactical aspect of market research loops back into the picture. Short surveys, communities, tactical assessments (taste tests, IDI, etc.) yield a great deal of insight into the potential success of products and services and they tell us in real time how we are doing.

Concept, product and advertising tests yield an assessment of the degree of potential success. Since we know the range of possibilities within our strategic assessments, we can now understand the degree within a specific circumstance and with a very discrete audience. We are one step closer to an ROI and a lot closer to meaningful assessments.

As mobile usage becomes more dominant, the options for data collection grow ever narrower, but the options for better, faster and more useful information grow exponentially. The biggest danger in any new method is the damage it does to the consumer or corporate buyer. The next big frontier for market research may be in determining the responsible use of data, especially if the trend toward more localized purchasing continues to accelerate.


Big Data, Big Research Possibilities Emerge At Re:Think

This is the second of two blogs on the ARF ReThink 2014 conference.


By Joel Rubinson

How should researchers think of big data?

When your data move beyond crosstabs and Excel…when you are predicting rather than profiling…when your data do not fit neatly into rows and columns because they are unstructured in their natural state…when you are anonymously matching user-level data that come from different sources and require some analytic detective work to optimize the match…when you are extracting information in real time from a massive database on servers that does NOT fit on your laptop.

All of these are big data hallmarks, and there were numerous world-class talks at the ARF Re:Think 2014 conference that fit this practical definition. And practically speaking, big data can take you to new territories of insights and actions where 20-minute online surveys cannot go.
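As a rough illustration of what “anonymously matching data at the user level” can look like, here is a minimal sketch that joins two hypothetical data sources on a hashed identifier. Real matching systems use salted keys, third-party match partners and probabilistic steps that this sketch deliberately omits.

    import hashlib

    # Illustrative only: join ad-exposure and purchase records on a pseudonymous key.
    def pseudonym(identifier: str) -> str:
        # Hash a shared identifier so the raw value never has to be exchanged.
        return hashlib.sha256(identifier.strip().lower().encode()).hexdigest()

    tv_exposure = {pseudonym("ann@example.com"): {"brand_x_ads_seen": 12},
                   pseudonym("bob@example.com"): {"brand_x_ads_seen": 0}}

    frequent_shopper = {pseudonym("ann@example.com"): {"brand_x_units_bought": 3},
                        pseudonym("cara@example.com"): {"brand_x_units_bought": 1}}

    # Only keys present in both sources can be linked.
    matched = {key: {**tv_exposure[key], **frequent_shopper[key]}
               for key in tv_exposure.keys() & frequent_shopper.keys()}
    print(matched)  # one user linking ad exposure directly to purchase outcome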

There are three broad classes of big data applications that were brought to life:

  • Matching different databases
  • Using data science to create predictive meaning from massive data
  • Creating structure from unstructured data

Matching different data sets

Different presentations made it clear that the following matches are possible.

  • Cable subscriber TV viewing matched with voter records and via predictive analytics, precisely targeting advertising to swing voters.
  • Individually matching TV viewing, Facebook, digital clickstream, and radio listening to frequent shopper data. Nielsen has combined their audio panel with frequent shopper data via Catalina and their own Homescan panel. IRI and comScore have linked clickstream and purchase data. In every case, we now have a direct linkage between brand communication exposure and sales outcome that can be used for ad targeting and for determining return on marketing more precisely than macro-regression-based marketing mix modeling.
  • Facebook emphasized the importance of matching behaviors across screens using a persistent log-in. Linking behaviors across screens is critical to properly allocate advertising funding in a world where 40% or more of online behaviors seamlessly and subconsciously go from one screen to another. Retailers should pick up on this, linking behaviors by frequent shopper number log-ins irrespective of screen and whether the purchase occurs online or in-store.
  • IRI bringing together store scanner data, TV viewing data and digital behaviors for marketing mix modeling at a much more granular geo level.
  • Conducting surveys among people whose clickstream behavior is known helped Ford to gain great insight into digital behaviors leading up to acquiring a vehicle.
  • Matching attitudinal segmentation with third party data such as hobbies/interests, viewing and purchasing behaviors via data fusion.

Using data science

CivicScience (disclosure: I consult with them) presented a new way of collecting massive amounts of data that can be connected. Instead of a lengthy survey, they ask only three questions at a time, but at such a scale that they have tens of millions of answers across nearly 30,000 questions in their database available for data mining. While the matrix is sparse (i.e. no one has answered all 30,000 questions), any question can be analyzed by any other question, so, for example, you can find unexpected correlations of lifestyle and media factors with being persuadable regarding a media property or brand. This has great value for insights, media targeting, and prediction. In particular, they presented something they call “expectation science”, where they are able to cookie respondents with a good forecasting track record in a given domain and then ask them expectation questions, such as “What movie will win the Best Picture Oscar?”, with very impressive results (8 of 9 winners correctly called).
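Below is a minimal sketch of that kind of sparse cross-analysis, using a small hypothetical answer store: no respondent has answered every question, but any two questions can still be cross-tabbed over the respondents who answered both. Question names and data are invented, not CivicScience’s actual schema.

    # Illustrative only: cross-tab two questions in a sparse respondent-by-question store.
    answers = {
        "r1": {"likes_scifi": "yes", "sees_opening_weekend": "yes"},
        "r2": {"likes_scifi": "no"},                          # second answer missing
        "r3": {"likes_scifi": "yes", "sees_opening_weekend": "no"},
        "r4": {"sees_opening_weekend": "yes"},                # first answer missing
        "r5": {"likes_scifi": "yes", "sees_opening_weekend": "yes"},
    }

    def crosstab(data, question_a, question_b):
        table = {}
        for responses in data.values():
            if question_a in responses and question_b in responses:
                key = (responses[question_a], responses[question_b])
                table[key] = table.get(key, 0) + 1  # count only complete pairs
        return table

    print(crosstab(answers, "likes_scifi", "sees_opening_weekend"))
    # {('yes', 'yes'): 2, ('yes', 'no'): 1}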

Creating structure from unstructured data

Oculus 360 presented a way of mining social media conversations to understand what certain central concepts (like romantic or bohemian in the world of fashion) really mean to consumers and how you can tell if your brand is fully aligned.

Because of the massive number of possibilities to click on, clickstream behaviors can also be thought of as unstructured. Ford and Luth research conducted surveys among those whose online behaviors were metered, and analyzed clickstreams to understand digital behaviors along the path to purchase.

So what do all of these big data applications have in common?

  1. They allow marketers to target advertising based on behaviors and interests rather than simply based on demographics. I call this “precision marketing” and it will improve advertising effectiveness as well as change the ways we measure what is working.
  2. We are using digital data to measure things that respondents cannot accurately recall, like their clickstream behaviors or the effect that a fleeting (but potentially impactful) ad had on purchase outcomes.
  3. To handle massive amounts of data, often unstructured and needed in close to real time, big data applications require technology solutions that go beyond current research tools like cross-tabs and CSV files.
  4. They extract quant insights from naturally occurring data streams like digital, social, and customer data, rather than relying exclusively on surveys.
  5. They require new statistical analysis tools.

The ARF conference did not give us the full array of big data applications that marketers need to focus on (for example, there was little on understanding the power of first party data that come from brand websites and customer data) but enough to represent a call to action that research tool kits and skill sets must evolve beyond the “n=1000, 20 minute survey”.


Mobile Qualitative – How Does It Fit In The Research Toolkit?

Can technology experts, however user-experience focused, create platforms that truly help the touchy-feely world of Qualitative Research? If Revelation is an example, then it appears so.


Editor’s Note: Edward Appleton is doing a series of posts focused on the client-side view of mobile research, with an emphasis on use cases and best practices learned so far. This is the fifth  post in that series that we’ll be publishing over the rest of the month. Parts 1 – 4 can be found here.

 

Edward Appleton

If you’d asked qualitative researchers – in Europe at least – what they thought about online qualitative methods or platform possibilities ten, or even five years ago, you may well have drawn a blank stare, a pause, followed by reasons why face to face is actually superior.

Add “mobile qualitative” to that, and the reactions are likely to have been similar, perhaps even more intense.

Revelation (http://bit.ly/Lv4efy), a research software company based in Oregon, US, has arguably led the way in showing skeptical, skilled but partially tech-averse qualitative research practitioners how technology – notably smartphones – can be used to help enrich and enhance a multi-modal qualitative research design.

Founded in 2007, Revelation currently employs 25 people, is in growth mode, and is expanding its international reach – their mobile app is currently available in 16 languages.

As an innovator in the mobile qual. space, and an agency that counts Procter and Gamble as one of its clients, it’s the sort of New Market Research Company – driven by technology, scale and a visionary approach – that is currently revolutionizing and improving Market Research.

Start-up mode is definitely behind Revelation, as is the validation/proof of concept phase. Their research platform is increasingly being used by Researchers the world over. Its white label offering allows easy-to-do customized branding; 80% of the Company’s work is providing a platform for Researchers. This makes the question of whether Revelation is effectively the Intel of the mobile qual world, “powering” qualitative research experiences, so to speak – a tantalizing one.

It also raises the question: can technology experts, however user-experience focused, create platforms that truly help the touchy-feely world of Qualitative Research?

I first met Steve August, Revelation’s CEO, at an ESOMAR Conference in Valencia, then again in Ghent, Belgium. I took the opportunity to chat with him about Revelation’s approach to and experience with online qualitative and more particularly mobile market research.

Why Mobile?

“I blame my wife for all this” is Steve’s tongue-in-cheek response to my question as to how Revelation originally got started on its path to mobile qual.

The origins of Revelation are an interesting example of how successful innovation is born – taking two approaches or worlds, digital technology and established qualitative research practice and protocol, then merging the two to create something synergistic.

Steve’s spouse Kimberly – the founder of Revelation’s legacy company, KDA – was an ex-Fitch Design user experience research professional who had chosen to go freelance from the mid-1990s. Specializing in immersive in-depth qualitative rather than groups, she often worked on diary studies as a facet of in-context research, traditionally executed in pencil-and-paper formats. These were copied, posted out, and the content sifted through manually at the analysis stage. A mixture of the mechanical and the meticulous: time-consuming, and with elements of non-value-added cost.

Steve’s route to mobile research was more indirect and partly fortuitous. He describes himself as a creative technologist, having worked variously as a producer of multi-media CD-ROMs and documentaries, and as a business intelligence consultant. He occasionally happened upon his wife’s research diaries at the copying stage, and questioned whether existing technology could make the process more efficient.

Online blogging and web-diaries were at the time – 2002 – just becoming part of the web landscape, with increasing numbers of internet users choosing to engage and collaborate online.

Why not take MR diaries online?

The potential to transform pencil-and-paper data collection seemed to make sense – the benefits stood out:

  • Eliminate non-added value logistical activities (copying, postage)
  • Participants could upload photos, videos when equipped with flip-cams
  • Real-time reporting
  • Easy sharing
  • Quicker analysis through text tagging.

The combination of these sounded enticing, game-changing even, but: would respondents share their feelings online? Would the technology work?

An early stage pilot aimed to address these questions.

Piloting Qualitative Mobile: “Understanding Parenthood Better”


Steve and Kimberly had recently become parents, and chose a topic close to their own heart – if or how becoming a parent changes personal identity, and if the sense of freedom is diminished or lost.

The design adopted was projective – participants were equipped with a flipcam, blogging software and asked to find and upload images of themselves both before and after parenthood, then comment on what had changed.

Paired-friend recruitment was chosen as a method – with the aim of accessing both individual and shared responses.

Mobile: a simpler Triangulation Option

The pilot worked well – the technology performed without glitches. Respondents shared a wealth of emotional responses, indicating that they were not inhibited by the medium.

The study also revealed how the online mobile medium allows researchers to pursue different insight avenues more easily through digital sequencing.

Dads were interviewed about their experience with exactly the same questions, then Mums, and then both were “confronted” with the others’ viewpoint – leading to surprises as well as confirmation.

This sort of sequencing is relatively easy to execute in online mobile, offering an efficient route to first-stage triangulation.

Advantages of Mobile Qualitative over Traditional Qualitative Techniques.


The Parenthood pilot study indicated that digital technology could indeed play a new, complementary role in qual. research. This led to further validation studies, each of which highlighted the difference to face-to-face qualitative – focus groups, in-depth interviews.

Mobile qual. “took us into people’s living rooms” to use Steve’s words, allowing participants to comment in their own time. Software also offered the option of making comments personal or open to a group – allowing peer-to-peer interaction, and the beginnings of a community-style set-up.

Revelation’s core argument for online and mobile qual. is built on the simple insight – that most of the interesting things happen in participants’ lives when the researcher isn’t there.

Mass ethnography, whilst arguably methodologically superior, quickly reaches its limitations – time and cost are invariably prohibitive.

Mobile qual. goes the final 3–4 yards; to quote Steve, it “puts us into people’s back pockets”.

Face-to-face qual., by contrast, has systematic limitations:

  • momentary and thereby limited to point-in-time snap-shots
  • reliance on memory
  • empathy gap (an inability to imagine our “hot state” reactions when in a “cold state” of non-arousal)

Revelation summarizes how they see the advantages of online mobile qualitative:

  • ongoing dialogue (as opposed to a point-in-time snapshot)
  • immediacy: participants can record meaningful moments as they happen
  • more vivid capture of reactions, minimizing distortion through rationalization – mobile can get closer to System 1 type reactions
  • easily-executed self-recording
  • ability to show, not just tell (through use of photos, videos)
  • direct access to areas of the home that a laptop wouldn’t easily get to
  • cost and reach (geographic restrictions are overcome, travel is eliminated, saving both out-of-pocket and opportunity costs)
  • non-intrusiveness

Researchers can observe participants’ behaviors in more detail and more frequently, providing a higher level of granularity overall. Asking the question “why” is practically unlimited, and can be focused on behaviors that are particularly relevant to a brief, or that are not clear to the researcher.

Mobile Qual. – Re-Evaluating Methodology

Steve is clear that he sees mobile as a medium, a means to an end – it’s not a goal in itself, nor is it new methodology.

Methodology could and should adapt, however, to what the new vehicle can do – its functionalities – to maximize its value and potential. The data-collection mode needs to adapt.

This means respecting and utilizing the medium’s versatility – especially true for smartphones. These devices profit from the convergence of telecommunications, digital and entertainment technologies; one small phone can take photos, record sounds, and transfer messages and phone calls… the list is long, if not endless.

Good mobile qual. research design means mimicking mobile usage habits, creating a design to suit that. Revelation refer to this as “participant-centric”.

It means moving on from legacy approaches.

Relying on a Q&A approach, however skilfully executed, is likely inadequate – mobile is a playful medium, capturing people’s imagination and holding their attention is key. New approaches need to be more involving, gamified, and above all – in Revelation’s view – activity-based.

Steve gives the following example to illustrate.

If a client wishes to know the contents of a respondent’s fridge, it is less enriching to say “Tell me what’s in your fridge”; far better to say “Give me a video tour of your fridge”.

Video footage shows, better than a respondent can say in words, how tidy the fridge is, how well stocked, whether items are labelled, whether certain people have their own sections – not simply an enumeration of what is there.

Framing is also a key aspect to help maximize success with the mobile medium.

If a client wishes to understand how a certain audience enjoys cake eating, Steve suggests that it’s better to create a contextually focused challenge.

This could be framed as: “Think about cake moments – which we’ll define as any moment you have cake, or any time when cake makes a moment better.” This would be preferable to simply asking “What do you like about cake?” The “cake moments” approach results in a “moments diary”, full of rich detail on the whens, the whats, the who-withs, the what-withs… a richly divergent process full of associative detail that a skilled qualitative practitioner can assemble meaningfully.


This leads to the center of what Revelation refer to as “Online Immersive Qualitative”.

Mobile – Closer to Experiences

Mobile is an immediate and quasi-omnipresent medium, with the ability to capture and transmit pictures, texts, impressions, feelings, behaviors as they occur. 

Mobile qualitative can profit from this by adopting what Revelation describes as “immersive online qualitative”. It encompasses three core dimensions that help understand behavior:

  • Contexts: where are you? Who are you with?
  • Behavior: what are you doing?
  • Emotion: how do you feel?

Mobile can capture this triad of behavioral understanding when a particular experience occurs, making it much more likely to be an authentic and accurate record of an event.

Optimized mobile = imaginative technology + creativity in research

The learnings gained from the various validation and piloting studies reached critical mass in 2007, the year of launch. Revelation accompanied the launch with the announcement of their own mobile qual. App.

The App has evolved to third or fourth generation, but the principles applied are constant – “merge imagination in technology with creativity in research” – so the experience is smooth and engaging.

Some of the stand-out features are as follows:

  • Mimics mainstream current Social Media user experience - the App feels like Instagram, is visually driven, with room for text comment
  • Works off-line – ensuring thoughts or comments are not lost if connectivity breaks down.
  • Superior video and photo handling. The compression offered allows longer videos to be made, they are also easier and faster to upload. Videos can be uploaded in the background with no interruption of other device activity.
  • Push notification. Participants can be pinged a reminder or request by a Researcher who has seen something posted that is particularly interesting or pertinent, asking for more detail or clarification.
  • Device optimization. The interface is highly responsive and adaptive to whatever the device may be, re-sizing and re-visualizing automatically.

For What types of Research?

Steve named the following research areas as particularly appropriate for mobile qual.

  • In-store: respondents can take a picture, make a video
  • Online communities with diary-style activities: participant recording gives greater detail, richer contextual understanding
  • Outdoor activities – anywhere where mobile is at hand, overcoming recall issues associated with capturing later on a laptop.

Revelation see mobile qual. as particularly useful in what Steve refers to as “foundational studies” – where clients are going back to basics, to the fundamentals of what motivates and moves, often against a backdrop of wealth of existing quantified data. Mobile qual. is used to unlock, unleash, bring alive, develop an engagement strategy – move from a static to a dynamic insights approach.

An example: a customer has existing segmentation data and typologies have been identified, but there are questions about how best to engage key segments, and how to approach them. In this context of segmentation and typologies, mobile qual. works well in bringing customer types to life.

Case Study: Digital Dads


In 2010 Yahoo wished to better understand the changing behavior of American “Dads” in the aftermath of the 2008/9 financial crisis. Some of these Dads had recently experienced being laid off, so had begun to assume a different role at home. Yahoo wished to know if, how and to what extent household tasks – cleaning, shopping, looking after the kids – typically assumed to be the role of the “Mom” – were affected by this dynamic.

Were some modern Dads being overlooked by brands and marketing as at least joint decision makers? 

The research design was mixed-methodological – qual/quant – using online diaries and mobile qualitative.

The mobile qual. piece was chosen for the immediacy offered in low-interest categories – shopping for household goods, for example, or carrying out chores about the house. Activities that are easily forgotten. The design looked not just at Dads, but also recruited their families and their network of friends.

The findings suggested that men indeed were being overlooked by advertisers, sometimes portrayed as people unable to do simple chores.

Mobile qual. delivered authenticity – in-store reactions in particular – that brought quant. findings to life, adding a level of veracity and persuasiveness.

Case Study: USA Latinos and Hair Care


Another challenge posed to Revelation was in the area of hair care amongst the population of Latinos living and working in the USA. There are currently 52 million of them, and they represent a fast-growing opportunity as wealth levels rise, with a new and growing Latino middle-class segment emerging.

The client in question – P&G – could see a massive market and wished to gain a cultural hair-care perspective, understand how best to tap into this audience: what were their hair needs, how did they differ if at all, what products and brands did they know and use, which did they aspire to?

Smartphones were known to be the medium of choice for this audience, with many mobile-only households.

Mobile qual. was an obvious research approach suited to the brief – foundational insights – because mobile was a medium the audience would more easily engage with.

The design adopted was both fast – with a 3 day fieldwork period – and immersive. 20 Latinos were recruited and asked to perform exercises designed to understand their personal concepts of health and beauty. Using text and uploaded images, they were asked to use analogies and metaphors incorporating their five senses.

P&G gained an extremely rich, quasi real-time and cultural picture of what healthy and unhealthy hair meant. The insights were of clear value to the company’s R&D efforts.

Comment/ Outlook

  • Mobile qualitative research offers the ability to deliver the pictures, reactions, words of experiences as they happen. Often meaningful moments occur when the Researcher isn’t present – mobile can help overcome that.
  • Mobile qual. is well placed to provide an “aha” moment that a quantitative survey and arguably Group discussions can’t. It illuminates in a unique way.
  • Mobile qual. complements other research forms – it does things, and gets to places, that traditional qual. doesn’t or can’t. It delivers particular value in immersive-type insights studies.
  • It takes researchers into areas – the shopping aisle, the kitchen, the pub or restaurant – where they traditionally haven’t been, and where memory often plays tricks when respondents recount experiences after the fact.
  • The method also allows a social and contextual component to be built in more easily – responses are given with the sense of place, occasion, atmosphere, and to what extent other people were part of an experience.
  • As part of the modern Researchers’ multi-modal armory, mobile qual. seems invaluable – relatively quick, authentic, cost-effective.

Jeffrey Henning’s #MRX Top 10: Bold Experiments in a Multi-Screen World

Of the 2,271 unique links shared by the #MRX community the past two weeks, here are 10 of the most retweeted.


By Jeffrey Henning

Of the 2,271 unique links shared by the #MRX community the past two weeks, here are 10 of the most retweeted.

1. Six Lessons From The MRS IMPACT 2014 Conference – Tom Ewing of Brain Juicer shares the six best ideas he took away from the Market Research Society’s annual conference: 1. Always be testing, 2. Fund bold experiments, 3. Beware stories and their limits, 4. Trust your actual stakeholders, 5. Use influencers to recruit additional hard-to-reach research participants, and 6. Diversify the ranks of researchers.

2. Market Research Debunked – Writing for RW Connect, Martina Olbertová debunks 5 myths: 1. Research is a data report, 2. Research is a substitute for lack of vision, 3. Answers are locked in the heads of consumers, 4. Research is only good for validation, 5. Research is purely analytical.

3. Ipsos European Pulse – An Ipsos survey of 7,000 Europeans in 9 countries reveals that citizens foresee a rise in anti-European movements in upcoming elections, while personally preferring that their country remains in the European Union, especially if the EU’s powers are reduced.

4. Google Flu Trends Gets It Wrong Three Years Running – The Google flu tracker has overestimated the incidence of flu for the last three years, according to David Lazar, of Northeastern University, who argues that adjusting the weighting can improve its accuracy.

5. Top 10 Key Mobile Facts For Market Research – Edward Appleton has compiled ten statistics about mobile device usage, from around the world. Key surprise to me: “The average level of mobile device ownership in 21 countries across the globe was 87% as measured in a 2012 Pew Global Attitudes Survey. The highest level of penetration was 94% (Jordan), the lowest Pakistan (52%).”

6. ESOMAR CEE Forum Bucharest 2014 – Betty Adamou recaps the keynote from ESOMAR’s Central/Eastern Europe conference in Romania (which promoters tout as “the Silicon Valley of Europe”).

7. Beauty Industry Robust Despite Slowdown – Euromonitor shares an infographic on changes in the global beauty market, including continued growth in the Middle East and Africa.

8. Spring into Action: Optimizing Tomorrow’s Market Research Effectiveness – The New England MRA has a call for speakers for its one-day conference in Waltham, Massachusetts, in May.

9. Half of Social Media Activity While Watching TV Relates to TV, Says Study – The Council for Research Excellence conducted a mobile diary study of 1,665 15 to 54 year olds and found that one in six times they watch TV they also use social media, and that social media is twice as effective for attracting viewers to new shows as to returning shows.

10. How to Advertise in a Multiscreen World Where Mobile is the “First Screen” – Millward Brown conducted research in 30 countries to better understand multiscreen use. Consumers now watch 7 hours of screen media a day, with smartphones the primary screen in many countries, taking an average of 2.5 hours a day.

 

Note: This list is ordered by the relative measure of each link’s influence in the first week it debuted in the weekly Top 5. A link’s influence is a tally of the influence of each Twitter user who shared the link and tagged it #MRX.


Cat Polling To Replace Traditional Political Polling

Posted by Brian Singh Tuesday, April 1, 2014, 0:13 am

With remarkably low response rates to telephone and online panel surveys at this time, Canadian polling & market research agency Zinc Tank is pleased to announce the launch of its Pet Electoral Tracking Study (PETS).


By Brian Singh

After a host of polling issues in the 2012 elections, today there is a real option in more accurate public opinion research. Cat polling. But not only cats. All pets.

With remarkably low response rates to telephone and online panel surveys at this time, Canadian polling & market research agency Zinc Tank is pleased to announce the launch of its Pet Electoral Tracking Study (PETS).

With approximately 80 million pets (2012) in the United States, it’s no surprise that owners love to speak on behalf of their cats and dogs. The Zinc Tank research team has tapped into this passion and developed the proprietary neuroscience technique Anthro Linguistic Projective Organization (ALPO). This has led to remarkably incredible response rates and more representative data than has been seen for years. Zinc Tank found that cat surveys are highly accurate because cats are highly random – a 1,000-cat poll is by definition a “random” sample. Such a sample constitutes a herd, and “herd” is the new listening – the successor to social media monitoring.

“We have found that owners considered their pets politically engaged and accurately reflect the opinion of the household,” stated Chief Methodologist, Brian F. Singh. “An intriguing finding was that grey cats tended to vote Republican, while black & white short hairs leaned Democrat. Siamese were almost exclusively Independent.”

“As a critic of all traditional polling methods, we feel that cats’ and dogs’ opinions best represent the state of party preference and performance of leaders. PETS is proven out-of-the-litter-box thinking. With its success, and the US Census Bureau’s cat census notoriously out of date, we are in discussion with Google to develop ‘Google Cat’ to help calibrate their insights algorithms.”

President Obama’s dogs, Bo and Sunny, were unavailable for comment. But rumor has it they are independent voters.

I think we can all look forward to PETS playing a key role in the 2014 mid-term and 2016 election cycles.

We’ll be showcasing this amazing new innovation at the 1st ever IIeX Animal Innovation Forum in collaboration with the ASPCA and American Kennel Club at our event in Atlanta in June.  Stay tuned for more details!

 

 

*Note: This is an April Fool’s prank. With the exception of the opening statement, none of this is true. Polling is an important part of living in a democracy, but we can also have some fun at its expense. We hope that this parody made your day a little more entertaining. No animals were hurt in the preparation of this news release.

From the Client Side: Interview with Stacey Symonds, Senior Director of Consumer Insights for Orbitz Worldwide

To hear more from the perspective of the corporate researcher, this occasional Greenbookblog.org feature will spend some time with a researcher From the Client Side. This fifth interview continues the discussion about Next Generation insights techniques and the future of the consumer insights industry with Stacey Symonds of Orbitz Worldwide.


Editor’s Note: In preparation for the last GRIT report Ron Sellers conducted a series of IDIs with client-side MR professionals. Some of those have been turned into interviews for this blog series on client-side views, and today’s is another amazing example of the very pragmatic and progressive view of the evolution of the role and process of strategic insights espoused by many client leaders. Stacey Symonds offers a very grounded and reasoned view into how Orbitz is embracing the best of the industry, both “traditional” and “emerging” and it should be required reading for all suppliers.

Stacey will be one of over 40 clients joining us on stage at IIeX North America in June and I for one can’t wait to chat with her more in person!

 

By Ron Sellers

This interview features thoughts and observations from Stacey Symonds.  Stacey is Senior Director of Consumer Insights for Orbitz Worldwide.  In her current role, she partners with a range of internal business leaders to integrate the voice of the customer into day-to-day as well as long-term development.  She has over 15 years of experience in client-side customer insights, brand strategy, and market analysis in the automotive, retail, financial services, and travel industries.  Stacey holds an M.A. in Applied Social Research from the University of Michigan and a B.A. in Textile/Apparel Management from Cornell University. She serves on the External Advisory Board of the A. C. Nielsen Center for Marketing Research at the University of Wisconsin-Madison School of Business, and lives in Madison, WI with her husband and two energetic kids.

Ron:  First of all, what research or consumer insights methodologies or approaches have you used over the past 12 months or so?

Stacey:  Well, it’s been a very wide range because in my role I have responsibility for all consumer insights at Orbitz, so that can range from brand tracking to product development where we’re doing some things like workshops with consumers to do some co-creation, to online discussions, to bulletin board discussions.  So really it runs the gamut.  I would say I skew toward quantitative studies, though, like discrete choice and other trade-off models.

Ron:  Are there any methodologies or approaches you’ve intentionally stopped using, or even considering, over the past 12 months or so?

Stacey:  I’m not a huge fan of traditional focus groups with the one-way-glass kind of interactions.  I just think there’s too much of a group dynamic that goes on.  There are very specific cases where they might be okay to use, and I rarely have those cases.  I need decision-making data.  I don’t feel like I can get it from that mode.  I’ve used them more in other places I’ve worked, but here that’s something I definitely don’t tend to use anymore.

Ron:  Is that something you’ve become more uncomfortable with because of some of the new techniques that are out there, or is that totally independent of these new techniques? 

Stacey:  It’s actually independent of the new techniques.  For me, it’s more that I think we were using a screwdriver to hammer in a nail.  When I first got to Orbitz a little over three years ago, they were using focus groups for concept evaluation and progression.  When you just don’t get a definitive sense of which concepts should move forward, it becomes very subject to interpretation, versus doing a quant study where you can actually do some rotation and get some clarity on what is working.  Actually, to your point, though, one thing we are doing is a qual/quant hybrid, which takes the place of a focus group.  It gives you a larger sample size and then also lets you explore language and how people are feeling about things, so you kind of get both sides.  And in that format, too, the consumers are not sitting in a room, but they’re virtual.  So you don’t get that group influence unless you want it to come into play.  You can control it, versus in a focus group where you don’t have control over it, and it’s a small sample size, where you just get this influence that you may or may not want in your decision making.

Ron:  With all the new consumer insights methods that have emerged over the past five years or so, where do you see the market research or consumer insights industry headed in the next few years? 

Stacey:  I think there definitely is a move to quantitative.  I’ve seen that in a variety of forms, even taking data that might previously have been called qualitative and making it quantitative.  Text analytics is a good example.  I think that’s definitely moving into a more mainstream space.

The market research that I have seen used most is information that helps you make decisions. And I feel like quantitative helps you do that in a more deliberate and consistent way.  So I think that’s been an evolution for sure.  Very fast-turnaround, self-service methodologies are here to stay.  I don’t think they’re going anywhere.

I think there may be some discussion, too, about the evolution of panels and communities.  It’s something I’ve kind of struggled with – where I fit on that continuum.  Because I feel like when you empanel somebody, either for just a generic research panel or for an online-community-type engagement, you change something.  The moment they become a part of that community, you change how they feel about the company and how they answer questions, so I don’t know how that’s going to evolve, but I bet it will in some meaningful way.  Either we have to just acknowledge that and just keep using it because it’s convenient and easy, or we find other ways to do it, like what I think Google’s consumer surveys is doing in going out to a broader audience where they can actually get to people that you may not have in other forums.  You may get a rawer, more honest perspective on things when you do that, versus going to a friendly kind of a panel environment.

Ron:  It’s interesting – you’re saying you see more of a move to quantitative, and of course the example you gave was something that’s like big data in that it’s existing information that people are now quantifying.  But what about the traditional quantitative surveys, whether using an online access panel, a panel you create, a phone survey, even a mail survey?  You’re saying there’s a move to quantitative data, but at the same time there are increasing concerns about the representativeness of traditional quantitative methodologies.  I think it was Pew that recently estimated that the typical phone response rate now is nine percent.  It’s not random-probability sampling when you’re using an online panel.  So how do you deal with the lack of representativeness, even as you’re saying more and more you do quantitative? 

Stacey:  First it’s recognizing that issue where it exists and making decisions given that fact.  So I caveat the heck out of things now to make sure people understand.  There’s always been a say/do gap anyway.  So even if it was a completely unbiased perspective, what somebody tells you is not always what they are going to go do.

For one thing, I argue that we should use multiple methods, and we do.  It’s observational plus it’s surveying.  But the other thing is I do feel like we potentially need to adapt how we ask questions, and therefore how surveys are done.  The method might be still useful where you can focus a person’s attention on something for a period of time, whereas if you’re in social media, you have no control over what they talk about, how long it gets talked about, how deep they go, or how much they understand it.

I think there’s still a role for the survey in the world today.  But I do think they need to be made more engaging so that a broader cross section of people will want to do them.  So instead of sitting there and saying, “Gosh, I have to sit here and answer 30 questions, text question after text question,” we have more visually engaging, more interactive methods I think could help counter some of the trends we’re seeing here and still give us useful output.

Ron:  Where and how do you learn about new research methods and new approaches? 

Stacey:  Some of it is definitely going to conferences.  I think attending a couple of major market-research-focused or advertising-research-focused conferences definitely helps a lot.  And definitely scanning publications that are out there, whether GreenBook or Quirk’s, is a really good resource as well.  Then just the grapevine.  Certainly, the first time I ever heard about Google Surveys was somebody at work who’s actually not even a researcher.  They happened upon it when they were surfing the Internet.

Ron:  When evaluating an approach you haven’t used before, what are the factors you look at to determine whether it’s something you believe is valid or something that you kick to the curb because you just don’t think it’s usable? 

Stacey:  Well, a couple things for me.  One, I try to avoid the “shiny new object” syndrome.  Just because something is new doesn’t mean we should go do it.  And I think partially that’s because I’m not dealing with a huge budget.  I think even if I did have a huge budget I’d want to use it very judiciously.  So I don’t try things out just to try them.  I actually want to be fairly educated first to make sure it’s something that could really fill a gap in what we know.

The other is actually more of a cultural reason than anything else.  I want to make sure I’m in tune with what my organization needs and can tolerate, in terms of risk and acceptance.  I am in a very data-driven organization, but consumer insights is a new practice for them over the past couple of years.  So if I went in and said, “Okay, here’s this new, really complicated technique; we’re going to try this,” I think they might be open to it, but whether or not I could really sell it in…  I think it would be harder to do if I didn’t feel confident that it really was something I knew could add value.  So I’m not so speculative about most things.

Ron:  The industry obviously has a lot of what I’ll call the traditional approaches, such as IDIs or focus groups, intercept interviews, ethnography, and all the different survey methods.  Which of these, if any, do you feel are still valid and useful today for your work?  And which them also do you feel will be valid and useful five or ten years from now?

Stacey:  I would say definitely the face-to-face research and survey techniques are just moving online.  I think we still use them, but we’re doing it in a way that’s more flexible for the person involved.  We have done group discussions by webcam.  We’ve used it for ethnography.  Where you want to go really in depth with somebody, I still think qualitative questioning is one of the best ways to do that.  We just might do it through an online bulletin board exchange, an online diary, or something like that, instead of sitting in a room and doing a one-on-one interview.  That I think has changed for sure.  And going forward I wouldn’t expect that to revert back.  That’s just sort of how I feel about focus groups and where I think those are.

I think the idea of co-creation or discussions with consumers, there’s something to that where you can do it either in person or virtually where you can make it less subject to group think, depending on how you structure it.  So that’s definitely something I feel like is evolving into just a little bit of a different space.

Ethnography, I still feel like that’s a great tool. And I think that approach is one that also is evolving with technology.  So ethnography now is:  How do people use mobile phones?  How are they using websites?  All of those things.  I think it’s the same principles; it’s just being used in a different channel.

Ron:  You’re talking about how a lot of different things are moving from in-person to online.  How much of that do you think is driven by, or unique to, the fact that the customer interaction with a company like Orbitz is online?  Shoppers are not going to the Orbitz store or the Orbitz dealership or taking a package of Orbitz home with them.  Do you think it’s influenced partly by the type of company you work in?  Or do you feel like if you were working at Post Foods or Walmart you’d have a lot of that same perspective? 

Stacey:  I think I would still have a perspective that digital is a way to do research that can get you past geographical issues.  It can get you to a consumer who couldn’t show up to a group discussion or to an individual interview.  You can get to them.  So I feel like it’s an access issue.  It is true, if I worked for Walmart, I would be in stores and doing things there which might involve talking to people in person, in a store.   So there’s some of that.  But I do think still the digital piece is something that lets us do more with less as budgets have gotten squeezed.  It also has that added benefit that it just takes some of these barriers away that we’ve had before that really might’ve narrowed our ability to reach different types of consumers.

Ron:  We talked about both traditional and non-traditional research approaches, but what about some of the stuff that’s a little bit more out there; a little newer?  Eye tracking, facial analysis, mobile MR? 

Stacey:  I’m involved with a project from the ARF (Advertising Research Foundation) which tested ads with various methods of neuromarketing research and biometrics, including EEG, skin conductance, fMRI, and eye tracking, along with traditional survey techniques.  It’s almost like a bake-off to figure out which of those is better.

I think the cost/benefit equation is still difficult for any of the really new techniques, especially neuro- or bio-related. These methods do tell you if someone has a reaction to something, but it is not always clear whether it is a positive or a negative reaction, or how that might impact the effectiveness of your advertising. Eye tracking has been around the longest, and it does tell you how much time someone spends looking at something, so you can optimize your use of space and visuals more so than with other methods.

Ron:  What about facial analysis or facial coding?  Do you have any experience with or thoughts on that? 

Stacey:  No.  That was not part of this study, but we are actually using it for the first time now.  It’s the same thing as with those other techniques.  They’re so subject to interpretation.  Yes, there are principles and some things that people have observed over time, but I am still not sure I could go to my CEO and say, “Hey, because this person has this facial expression, we should go do something different.”  That said, we may be able to use facial coding to uncover where people say they have a negative stated reaction to something (like more risqué humor) versus what they actually find amusing.

Ron:  Those are all methods that, to some extent, are used in partnership with traditional research approaches.  Then there are the completely new approaches that are either totally separate from traditional research techniques or some people see as a replacement, which would be behavioral economics, big data, social media monitoring, and neuroscience.  Which of these do you feel like are valid and useful today, or will be very soon? 

Stacey:  I think social media analytics can be overrated depending on the industry you are in and your context.  That is a big disclaimer.  We’re a price-driven industry. We don’t have a lot of people out there debating the benefits of Orbitz versus Expedia in general, or at least it is not done in public.  The discourse is more, “Here, I got a promotion code for Orbitz; do you want to use it?”  We do use social media for customer service issues or for getting reaction to new campaigns.  In other cases, though, the dialogue is typically not meaningful enough for us to use it in a robust way for generating insight.

Now, if you are a P&G, for example, I could see where that could be more useful, to see how products are being discussed and to uncover issues.  But, again, you don’t control the conversation; you’re just observing it and trying to figure out what people are talking about, which takes a tremendous amount of time to filter through.  And it’s probably 1% valuable, 99% not.  And given the time and effort it takes you to get to that 1%, my sense has been I don’t know if it’s worth it.  But again, it could be different in different industries, so I think that one is probably context-dependent.

I think mobile research was one of the other approaches you mentioned.  That one also depends on what business you’re in.  For us, that’s actually just part of my normal mix of tools I use, because we do have mobile apps and that’s very important to our business.  That, to me, is just a channel.  It’s not really a new technique, per se.  It’s still a survey; it’s just on a smaller device and has different usability.  But it’s still a survey at the end of the day.

Ron:  What about big data? 

Stacey:  Big data, to me, can take a lot of forms.  I like to use it in a very focused way.  So I make sure I triangulate, because you can’t trust any one data source these days; you need to have several to be able to really understand what’s going on.

For example, if I want to understand why people are canceling hotels, I want to look at all the people in our big data who have canceled a hotel.  How far ahead of time did they cancel it, and how did they cancel it?  Was it on a phone?  Was it on a desktop?  And then also do a survey of people who canceled and ask them why they did it.  So I feel like using big data in focused ways is meaningful.

It certainly can be valuable to have a complete and accurate record of things, because everything’s in there with click-stream data, for example.  If we want to understand how many people are leaving our site after visiting a certain page, we have that data; we just have to go and query and find it.  And I think those front-end tools that sit on top of big data are actually where the value is.  If you can get good tools that help you quickly get to answers, big data becomes more useful, because your access points really enable you to go into it surgically versus trying to deal with all of it.  I think that’s where there’s more value as well.
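As a rough illustration of the kind of focused click-stream query Stacey describes, here is a minimal sketch in pandas; the column names and data are hypothetical, not Orbitz’s actual schema, and a production version would more likely be a SQL query against the click-stream warehouse.

```python
import pandas as pd

# Hypothetical click-stream extract: one row per page view.
# Column names and values are illustrative only.
clicks = pd.DataFrame({
    "visitor_id": [1, 1, 1, 2, 2, 3, 3, 3],
    "page": ["home", "hotel_detail", "checkout",
             "home", "hotel_detail",
             "home", "hotel_detail", "home"],
    "timestamp": pd.to_datetime([
        "2014-04-01 10:00", "2014-04-01 10:02", "2014-04-01 10:05",
        "2014-04-01 11:00", "2014-04-01 11:03",
        "2014-04-01 12:00", "2014-04-01 12:04", "2014-04-01 12:06",
    ]),
})

# "Exit rate" for a page: how often it is the last page viewed in a visit.
clicks = clicks.sort_values(["visitor_id", "timestamp"])
last_pages = clicks.groupby("visitor_id")["page"].last()

views_per_page = clicks["page"].value_counts()
exits_per_page = last_pages.value_counts().reindex(views_per_page.index, fill_value=0)

# Share of views of each page that ended the visit.
print((exits_per_page / views_per_page).round(2))
```

The pattern is the same one Stacey describes for cancellations: pull only the rows relevant to a single, well-defined question, then summarize, rather than trying to mine the whole data set at once.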

I think just complete, open data mining – potentially you can learn some things, but you have to have the resources to be able to do that.  I think it’s hard for a lot of companies to have somebody whose job it is to explore without knowing what they might find.  That’s just something we don’t really have the luxury of doing.  But if other companies do, that’s great.  I just think it’s a huge amount of effort to uncover something without having some type of business question you’re trying to answer with it.

These companies that came in and started with big data figured out there was something missing, and that was, I think, the why, and that’s where consumer insights comes in.  Why are people doing this?  Why do we see these patterns of behavior?  And it’s become a part of their decision making.

Ron:  Some of these new approaches require fairly specialized skills that a lot of traditional researchers don’t necessarily have.  Do any of these tend to make you nervous about your own skill set or future in the industry? 

Stacey:  I would say not really.  I have sort of an eclectic background anyway.  I have a master’s in applied social research, which was sort of a multidisciplinary program at the University of Michigan, and it has a lot of different aspects to it.  It covered statistics, business, social psychology, research methods, and the cognitive psychology of asking questions.  So I feel like I actually had a very good background that enables me to go in different directions.  It enables me to go to the quant stuff.  It enables me to go to behavioral economics.  I would say I don’t feel too worried.

But I do think that if you want to go into research today, you should have a pretty broad base to draw on.  If you just learn traditional survey questioning and not the digital world of web analytics and big data, that will probably set you up for failure.  It’ll be much harder for you to figure out what’s going on.

I do think, too, there is a flexibility gap sometimes.  I find research can be a very logical, process-oriented activity, but I think there’s actually a need for more rapid thinking and flexibility in the field.   It may be harder for some researchers to move there, especially if you’ve spent your whole life doing a certain type of research and a certain function, especially in a bigger company.  I think it’s harder to adapt after you have been in that place.  It certainly depends where you are, too.  I happen to be in a job where I have to know all these techniques and use them. That’s just part of my job.  For me, I feel confident I can move between methods pretty well.

Ron:  When you consider all these different new techniques and approaches, which of these do you feel tends to be more true about our industry today:  A) Too many research professionals are dragging their feet and need to get on board with these new approaches, or they’re simply going to be left behind; or B) Too many research professionals are abandoning proven methods and jumping on the bandwagon of these new approaches without sufficient proof that they’re valid or meaningful? 

Stacey:  I think I’m going to be totally in the middle.  I think I’ve seen some of both.  And I don’t know which I agree with more.  I think there’s a little bit of each of them.  I don’t think we should forget about what’s good about traditional research.  I feel like sometimes principles are disappearing, like how to ask questions well.  That means we’re not asking things in the most effective ways.

I think some of the move to new methods is also causing people to focus less on the fundamentals.  That is an issue.  On the other hand, these new methods do represent where we’re going.  And I think if you want to be successful going forward, especially in today’s world, where people do have to move between companies or between industries, if you’re not open to some of these new things, I think it will be difficult to keep up.

Some of the core suppliers – some of the biggest suppliers in the world – still do rely on some of the really traditional techniques and bear a significant cost structure because of it.  And I just think they’re also probably going to have to evolve a bit in order to be successful in the long term.

Ron:  When you deal with people who say things such as, “In-person qualitative is dead,” or “Survey research is obsolete,” how do you respond to those folks? 

Stacey:  I feel like nothing is absolute in this world.  At the end of the day, there are different applications where different methods make a lot of sense.  And I think the industry might also be different.  But I tell people, at least for us, it’s like being a reporter.  If you don’t understand all the W’s and the how about something, then you don’t really understand it.  You can’t get the what, where, why, who and how from just observing somebody do something, just from social media analytics or just from click-stream data.  You need to get the why.  And sometimes the only way you can get a why is to ask somebody, or you could just decide, “Okay, I’m just not going to know it.”  And then you’ll be less educated for it.

It’s not an all-or-nothing kind of a thing.  I think there are benefits in these things; these techniques.  There are benefits in focusing a discussion on a topic that somebody may not otherwise think about.  I just think to help businesses grow, we’re going to have to use the best of what’s out there in order to really be successful.  And some of it is going to be more traditional and some of it’s going to be new.  I think that’s where the good intersection of this industry is.

Ron:  One thing I have seen with a lot of these newer approaches is that it’s not just, “Here’s a new tool,” but it’s, “Oh, you do this and you can replace this other thing.”  For example, “If you do social media monitoring, you can replace your surveys.  You don’t ever have to do a survey again.”  What does your reaction tend to be when you see that message coming from vendors offering these new tools?

Stacey:  You know what?  I actually think it erodes their credibility with me personally to say there are these absolutes out there.  At the end of the day, it’s sales, so I am pretty skeptical of that.  I need to have things be more proven before I’ll buy into them.  And I would never buy into that idea that there’s one solution to everything.  It doesn’t make sense to me logically, just given my experience.

Ron:  With some of these newer techniques that you’re using or you would consider using, are you looking to your traditional research vendors; vendors you have used for traditional research in the past?  Are you looking for them to work with you on those approaches, or are you looking for new vendors that specialize in the newer methods? 

Stacey:  I think it is a little of both.  It is difficult and disruptive to change vendors all the time, especially if you have brand tracking, ad testing, or other things like that.  On the other hand, I kind of watch.  I kind of plant some seeds and I watch and see if the companies I’m working with are adaptive or not.  I’ve left working with some companies because they do not seem to be adapting.  For me, it’s also about the relationship.

But I feel there are some vendor partners I work with that have clearly evolved.  I’ve worked with them for more than a decade now because they evolved and are proactive about my business needs in a really positive way.  Others are kind of left behind because I feel like they’re not evolving, or the way they’re evolving is not as value-added for me.  If they’re not moving into these new spaces, then in a way it’s sometimes easier to do things myself than to go through a supplier who does it for me.  So it’s trying to find that balance.


How TNS Is Validating Mobile Globally

TNS is taking the lead with a serious look at mobile in market research as a global opportunity.


 

Editor’s Note: Edward Appleton is doing a series of posts focused on the client-side view of mobile research, with an emphasis on use cases and best practices learned so far. This is the fourth post in the series, which we’ll be publishing over the rest of the month. Parts 1 – 3 can be found here.

 

By Edward Appleton

TNS is one of the leading global Market Research agencies, with operations in over 80 countries, and part of the Kantar Group – a leading provider of insights and intelligence. Millward Brown, the Futures Company, The Added Value Company, Kantar Worldpanel, Kantar Retail and Kantar Media all form part of the group.

TNS is taking the lead with a serious look at mobile in market research as a global opportunity.

It was the main sponsor at the recent MRMW Conference in London (Market Research in the Mobile World – 10/2013 – http://bit.ly/1aeYQ6d); they have presented papers and case studies at various industry Mobile Events over the past 2 years.

More importantly, they are building an evidence-based approach to understanding mobile research. They execute annual large-scale quantitative global studies on mobile – TNS Mobile Life (http://bit.ly/1bloPNd) – covering 43 countries and interviewing just under 38,000 mobile users globally. They are beginning to document the pros and cons of the different types of mobile surveys (Apps, WAP, SMS, USSD).

This empirical approach of global exploration and validation is extremely beneficial to the research industry as a whole, as it will create confidence amongst client-side researchers.

Their approach also extends to a healthy skepticism toward what they see as massive hype surrounding mobile as a panacea for all marketing ills.

A recent blog on the ESOMAR site by Sam Curtis (http://bit.ly/1etPZFf), provocatively entitled “The Big Mobile Lie”, shared evidence from TNS studies showing that mobile devices are hardly used by shoppers at the point of purchase once usage data is broken down to category level – including pet food, alcohol, tobacco, and OTC medicines.

So: what’s hype, what’s “for real”? What does TNS’ empirical approach currently reveal about how best to approach mobile research – for what types of study, how best to design them, and with what caveats?

I caught up with Sam to understand TNS’ views better.

Mobile: Hype or Sea Change?

TNS actually shares much of the underlying excitement for mobile witnessed by many media and marketing organisations.

With the rise of Smartphones, the potential of mobile to change many industries radically is real –  retail (“showrooming”), finance (mobile banking, mobile wallets), health (m-health)  are examples where swift change in value chains is already happening.

Smartphones transform the device into something completely integrated, offering all sorts of experiences in one go. They offer consumers superior convenience and flexibility to explore, interact, and buy – wherever they are, whenever they want – whilst at the same time offering entertainment (music, videos, games), connection to friends via social media, and the opportunity to keep up to date on events.

My sense is that TNS sees both the excitement and the disruptive threat of mobile – something that will likely destroy traditional business models – and wishes to position itself as the go-to insights consultancy offering marketing business advice on a range of issues beyond research.

Significant investment and senior talent within the Kantar group is being focused on understanding how best to use mobile in research, according to Sam.

In the research world, they estimate that within 2 – 3 years, 20 – 30% of all data collection will be mobile, a massive shift from today’s lower levels.

Equally, they see little evidence yet of a mobile “shopper revolution” – many FMCG items are frequent purchases, habit-driven, with little use of the mobile to check prices or find out more about the brand at the POS.

So – mobile is a sea change, but one with significant dangers of unsubstantiated claims – hype – leading marketers to potentially invest indiscriminately in mobile as the hot topic of the moment.

Forces Driving Change

For research, TNS sees the following issues and trends that make them view mobile data collection as a strategically important topic:

  • Emerging markets will go mobile first, skipping the phase of laptop/desktop usage which characterizes many developed markets. Emerging markets are huge growth opportunities for many major multinational companies. Missing out here on growth fueled by mobile isn’t an option.
  • Dropping engagement levels on online panels. Mobile is potentially an answer to what is a widely recognized (if poorly documented) issue - relatively high churn levels of people dropping off MR panels, with recruitment of new users proving increasingly difficult. TNS sees mobile as a way of attracting new, particularly younger audiences to participate in market research.
  • Recall gaps. Our memories are flawed; data sets relying on recall are often not complete and therefore lack accuracy and granularity. Mobile is an “in-the-moment” medium. Not only can we capture the “what” more accurately, but also the way we feel at that point in time. Mobile overcomes the recall gaps.

The most powerful driver to mobile, however, is one that is relevant to the research industry as a whole: the need to do shorter surveys.


Researchers have long been aware that conducting long surveys – arguably anything over 15 minutes – is in danger of being counter-productive: respondents’ engagement levels drop, their enthusiasm for future survey participation falls, and data quality suffers as a result.

Some of the key reasons for long surveys surviving in a world of ever-decreasing attention spans are as follows:

- Comparability: any change in methodology will likely result in the data changing. If you have tracker studies running for a number of years, with internally accepted KPIs, then switching data collection mode needs careful transition and calibration planning, and internal stakeholder sensitization. It is a bold but important call, one that requires senior Insights staff who are respected, well established, and confident.

- Knowledge Needs: Clients (often Marketing staff who are the budget holders, but with little or no formal MR training) naturally wish to squeeze every last question in “once we have them there”. Micro-surveys are a relatively new phenomenon – Google Consumer Surveys was launched in 2011, and was limited initially to the US and Canada. The notion of “chunking” will be new to them – education is called for.

- Commercial forces: many Agencies are reluctant to push back on survey length for fear of losing a piece of work. This is a spiral towards lower quality and needs to be broken out of, with the more commercially stable industry leaders playing a key role.

- More Complicated Scripting: scripting a mobile survey is no doubt more complex due to the myriad types of devices and operating systems that a given survey has to work with. This requires access to sufficient staff with the relevant skill sets and investment in that resource. This can easily raise the base cost of a business, which Agencies may or may not feel comfortable passing on to clients.

The barriers are significant.

Mobile, however, has witnessed such rapid and widespread adoption amongst consumers worldwide that it represents an irresistible force that Companies – Client and Agency – will have to react to. The phase of “simply being prepared” is probably behind us; knowing how to integrate mobile into the research mix is an imperative.

If – as TNS suggests – mobile actually represents an opportunity by which the respondent experience for all types of MR engagement can be both improved and made more predictive, it has the potential to transform – revolutionize even – the way we conduct huge swathes of survey research.

Mobile: Shorter, with Higher Predictive Validity


TNS is currently conducting research-on-research studies in selected geographies and categories – focusing on brand equity, advertising tracking, customer satisfaction studies – to establish potential differences in response patterns between mobile and laptop/ desktop. They also are aiming to find questions that are the most predictive of actual behavior.

Key survey metrics – including purchase intent – are compared across the different devices against actual subsequent purchasing behavior. The analysis is done at the respondent level, which is the more demanding test.

The topline meta-finding is startling: shorter, more relevant surveys have a higher predictive validity than longer ones.

This has major implications for interview length across survey type.

TNS can pinpoint which questions in a given survey type are more predictive and correlate well with actual behavior, and can eliminate questions that are redundant.

Work TNS has conducted in 3 countries across two categories shows the following questions to be redundant, as they have a low correlation with individual purchase behavior (a rough sketch of such a redundancy screen follows the list):

  • aided awareness
  • brand familiarity
  • brand satisfaction
  • purchase intent
  • recommendation (NPS)
  • brands bought – past 3 months, regularly, most often
  • brand attitudes (e.g. “this is a brand I trust”)
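The article does not publish the underlying calculation, but a rough sketch of this kind of respondent-level redundancy screen might look like the following; the data is simulated and the question names are invented, so this is an illustration of the idea rather than TNS’ actual model.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500  # simulated respondents

# Simulated respondent-level data: a few survey metrics plus an observed
# purchase flag. Question names are invented for illustration.
purchased = rng.integers(0, 2, n)
survey = pd.DataFrame({
    "purchase_intent": rng.integers(1, 6, n),                # noise: no real signal
    "aided_awareness": rng.integers(0, 2, n),                # noise: no real signal
    "brand_closeness": purchased * 2 + rng.normal(0, 1, n),  # carries some signal
    "purchased": purchased,
})

# Correlate each question with actual purchase behaviour at the respondent
# level; questions with a weak correlation are candidates for removal.
correlations = survey.drop(columns="purchased").corrwith(survey["purchased"])
redundant = correlations[correlations.abs() < 0.2].index.tolist()

print(correlations.round(2))
print("Candidates to drop:", redundant)
```

In this toy example the invented "brand_closeness" item survives while the noise items are flagged; on real data the correlations would be estimated from matched survey responses and subsequent purchase records.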

Parallel to this work on mobile validation, they are working broadly with clients to implement “survey-length-reduction” principles (a simple illustration of the redundancy check follows the list):

  • Respondent Level Validity (only asking questions that respondents can answer accurately)
  • Principle of Redundancy (detecting respondent level correlations within-survey and eliminating unnecessary questions by using auto-fill)
  • Relevance (only asking questions about the few brands and attributes that participants really care about)
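As a minimal sketch of the redundancy principle – again with invented question names and data rather than TNS’ questionnaire – within-survey correlations can be used to flag near-duplicate questions, and a dropped question can then be auto-filled from its retained partner:

```python
import numpy as np
import pandas as pd

# Hypothetical within-survey responses on 1-5 scales; question names are
# invented for illustration.
answers = pd.DataFrame({
    "brand_trust":     [5, 4, 5, 2, 1, 3, 4, 5],
    "brand_quality":   [5, 4, 4, 2, 1, 3, 4, 5],  # nearly duplicates brand_trust
    "value_for_money": [3, 2, 5, 4, 1, 5, 2, 3],
})

# Flag question pairs whose within-survey correlation is very high; one
# member of each pair could be dropped from the questionnaire.
corr = answers.corr()
threshold = 0.9
near_duplicates = [(a, b, round(corr.loc[a, b], 2))
                   for i, a in enumerate(corr.columns)
                   for b in corr.columns[i + 1:]
                   if abs(corr.loc[a, b]) >= threshold]
print(near_duplicates)  # e.g. [('brand_trust', 'brand_quality', 0.97)]

# Simple auto-fill: estimate the dropped item from its retained partner with
# a least-squares fit, instead of asking both questions.
slope, intercept = np.polyfit(answers["brand_trust"], answers["brand_quality"], 1)
answers["brand_quality_autofill"] = (intercept + slope * answers["brand_trust"]).round()
```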

Brand and attribute lists are reduced radically, and only one brand-rating question is asked per brand.

The resulting shorter survey (an equity study in this example) can be conducted in 3 minutes with a respondent-level validity of R = 0.62, and an impressive correlation to market share of R=0.90+.

Their conclusion: shorter surveys can give higher predictive validity.

Trackers and brand equity studies that could traditionally require 40 minutes of respondents’ time can be reduced to well under 10 minutes – as little as 3 in the case quoted.

This opens the door for many trackers to transition to mobile, saving time, eliminating respondent fatigue, reducing cost, and delivering better results.

The insight that shorter is more powerful is intuitive: we pay more attention if we are allowed to talk about the few things we care about in a category, rather than being forced to answer multiple questions about things we may have no opinion about – which is still what many trackers do.

It is, as TNS states, nothing less than a revolutionary approach to surveys – whatever data-collection mode participants choose to respond in.

In summary, best practice in mobile could drive best practice across mode type. There are probably few types of surveys that cannot be conducted on a mobile device once they are condensed intelligently, using sufficient computing power and intelligent programming.

Mobile: Increased Accuracy


Alongside forcing Survey designers to think carefully about asking fewer questions, mobile helps solve the known problem of memory gaps. It delivers more accuracy.

This is particularly relevant for diary-formats, but also for any out-of-home buying or consumption occasion.

Delivering better accuracy can also be challenging: the emerging picture can be radically different from the one delivered by memory-reliant desktop/laptop surveys.

The following case looks at how drinks are ordered in a pub, and shows how different key outputs can be.

Case Study: Molson Coors On-Trade Lager Drinking


TNS was asked by Molson Coors to take a quantitative look at drinking in pubs in the UK (the on-trade), which in their view was an area thin on robust insights.

Industry wisdom – shared by the UK Government – was that alcohol consumption is price-sensitive, and that raising prices leads to a fall in consumption. Molson Coors wished to validate this and to understand, as accurately as possible, the key reasons driving brand choice, including price. Mobile seemed an ideal vehicle for the research.

TNS used a split-design approach, comparing online laptop/desktop data collection to mobile. They recruited 147 lager drinkers in the UK, asked them questions about their drinking habits at the recruitment stage (using a laptop), and then had them complete the same questions on their mobile whilst in the pub.

The key findings showed clearly that using mobile gives a different response:

  • An average of 3.8 influence sources were mentioned in the recruitment survey – in the mobile survey there were only 1.4.
  • A sharp reversal of priorities was noticeable: price and special offers were stated as the most important influences in the survey completed on a laptop/desktop, whereas the in-the-moment responses via mobile showed “brand” to be the most important factor.
  • Special offers hardly figured as an influence in the mobile survey.

The two versions challenge received opinion.

Whilst the “truth” isn’t necessarily accessible via a single-mode research approach – no triangulation was undertaken, or at least none was made public – price and promotions appear to be less important in on-trade marketing than the traditional approach suggested, whereas the role of the brand is up-weighted.

This has huge implications for the Molson Coors marketing mix – it suggests that brand-building activities to drive preference are key, and the marketing mix needs to respect this. Targeted TV advertising is one example that should be given serious consideration.

The conclusion from this case: mobile is immensely valuable in getting marketing a step closer to out of home experiences as they are felt and recorded at the time.

This greater accuracy can lead directly to:

  • a more efficient marketing mix
  • a higher likelihood of improved ROI.

Mobile – What Types of Research?

Sam sees two main areas where doing mobile research has clear advantages:

  1. Path-to-purchase journeys
  2. Touchpoint analyses

i)  Path to purchase journeys


TNS’ global study on mobile usage – Mobile Life – contains data showing that at present, mobile is used relatively more extensively for in-store intelligence gathering than purchasing.

There are clear differences by level of market and mobile maturity, with retail mobile usage increasing with rising smartphone penetration levels.

In Europe, 38% of mobile users have used their device at some point to research an in-store purchase whilst only 16% have actually used their mobile to buy something. Younger respondents are more likely to browse on their mobile in-store. The use varies strongly by category – usage levels on an individual trip basis, as Sam points out in his RW Connect blog, are often very low. TNS refers to “channel not fulfilling potential”.

Overall, the data suggests that some browsing has shifted from in-home (desktop) to in-store (mobile).

Using mobile to understand better how consumers react to in-store promotions or shelf-displays is therefore of value; a need which is likely to increase.

Mobile research should be executed in categories where the extent of in-store mobile browsing merits attention – books, DVDs/games and computers top the list, albeit still at single digit penetration levels.

A further consideration for mobile path-to-purchase studies is length of purchase cycle.

Engagement levels and diary entry enthusiasm can be held high for a relatively short time period, typically up to 4 days. After this, interest levels and reporting intensity drop.

For categories with particularly long decision making cycles, mobile tracking is not yet easy to execute successfully in TNS’ experience. As mobile panels become larger and more robust, this is likely to become easier.

Mobile diaries work well by using “near-the-moment” self-reporting: respondents are asked 4 – 5 questions at the end of each day, until they make a purchase.

ii) Touchpoint analyses


Mobile offers the following advantages:

- all touchpoints are recorded

- event sequencing can be monitored

Respondents’ behavior can be linked to a touchpoint – allowing researchers to see precisely what prompts (advertising) or interactions (advice, recommendations, expert opinion) lead to an event such as a purchase.

The journey can be mapped over time.

Mobile allows a better understanding of the context in which the decision was made, as the recording is much nearer to the time it happened, and less reliant on erroneous recall.

“Touchpoint correlation” can also be identified: patterns can be detected where the use of two or more touchpoints frequently occurs in clusters. This allows researchers to probe the causality, engaging in the diagnostics of “can you tell us more about…..” as an improvement on the direct and often blunt technique of “why did you….?”

Mobile Surveys – Good Practice Tips


TNS talks of “good” rather than “best practice”, because:

  • the medium is still relatively new
  • insufficient R&D work has been done across audiences, geographies, categories

It is premature to issue a clear and comprehensive set of survey design guidelines.

Their overall comment on the state of mobile research is critical: there is insufficient recognition that simply transferring an online survey onto mobile wholesale isn’t the answer. To quote Sam: too many mobile surveys just look like a regular survey.

Their current guidance on how to shape a good respondent mobile survey experience is as follows, based on pilot studies completed between 2011 and 2013.

  • Short interview length: maximum 10 minutes, but ideally much shorter – around 3 minutes.

This conflicts with all the knowledge needs a given project may have. It would require clients breaking up a single large survey into multiple smaller ones. This means Agencies need to show Clients options that deliver on cost and timing – doing multiple shorter surveys instead of one longer one.

This is an area that as yet is not standard practice in the development of proposals, and represents an opportunity for the industry. Education is needed.

  • Eliminate repetition

Questionnaires often – in TNS’ experience – cover the same areas twice, possibly more often. The drive to brevity requires any potentially overlapping questions to be merged.

For continuous surveys, this involves showing Client stakeholders relevant analysis (factor analysis, predictive analytics), proof of redundancies, and inter-question correlation levels.

  • Filter for Relevance

Respondents should only evaluate brands that are in their consideration set. They don’t need to see a whole battery of say 8-10 brands or a list of 30 product attributes. This makes survey tasks – of image evaluation for example – much shorter, and manageable on a significantly smaller screen.

Filter questions are key: only ask questions on the attributes that respondents care about, for example, or the brands they have in their relevant set.

Grids are much simplified by following this guideline, and responses correlate more strongly with actual behavior.

  • Gamify to suit your audience

Making tasks more intuitive and fun is a way of holding attention levels high – TNS sees gamification as an area of opportunity. This can be very simple gamification – replacing a 1 – 5 touchpad option with a slider, for example.

Challenges for Mobile Transition: Audience Biases

TNS currently see limitations in the size and scope of available mobile-enabled panels.

Where mobile panels are available, they see certain audience skews:

- more tech-savvy respondents

- fewer older respondents

- fewer younger males

No doubt this is a moving target, with many panel providers working actively to increase the number of their participants who are opted-in for mobile research.

Given the current paucity of mobile-enabled panels, TNS adopts a partnering approach, linking up with companies outside of the traditional MR space that can deliver potentially interested MR audiences with the right approach and incentives.  These partnerships are becoming increasingly fruitful in Emerging Markets like India.

Mobile  & Passive Monitoring?

Kantar has been collecting “clickstream” data – respondents’ complete use of their mobile – for over three years now in North America.  They record telephone calls (only duration, not content), pics taken, sites visited, and ads exposed to – all recorded passively by an app respondents download. Cell tower and GPS information reveals where respondents are.

A pilot is also testing an audio recognition app that can listen to and record sounds in the immediate vicinity – similar to the music-recognition app Shazam.

Such tracking allows a comparison between claimed and actual behavior – and is powerful in detecting potential contradictions – useful for insight generation.

As a method, passive mobile monitoring is something TNS is treating as an area of huge potential. There are still challenges in persuading respondents to join these panels over privacy concerns, even though no personally identifiable information is stored.

Whether sufficient consumers will consent to total tracking in future, thereby permitting some degree of representativeness, is an open question.

Unless barriers are overcome, passive monitoring may remain of niche relevance for research.

Summary/ Outlook

  • With its ongoing global mobile research program amongst tens of thousands of mobile phone users, TNS is well positioned to be a global leader in understanding the habits, preferences and movements of the mobile consumer.
  • Using mobile as a vehicle to help convince the industry (clients and Agencies) to shift to shorter surveys with higher predictive validity has multiple benefits. It also has the potential to revolutionize the way surveys are conducted, possibly signaling the demise of the long, tedious 25+ minute tracker.
  • Mobile should improve touch-point and customer journey analyses. Improved efficiencies in media and marketing mix planning should result from mobile’s ability to register more touch-points, better understand how touch-points interrelate and then lead to an event such as purchase, and the context in which someone felt good or bad about a brand or a marketing message.

The ARF Captures Our Marketing And Research Industry Journey

This is the first part of a two part blog series on learnings from the ARF Rethink 2014 conference. Part two will be on “Big data, big research possibilities”.


By Joel Rubinson

The ARF Rethink 2014 conference did a great job of capturing our marketing and research journey.  The keynote address on day one was actually an interview of one of the real 60s “mad men”: Keith Reinhard, Chairman Emeritus of DDB. Soledad O’Brien conducted the interview to artfully extract his view…of course we need research, but it was clear that his focus was on the creative spark that led to “…hear(ing) the bat hit the ball and you know it’s a home run.” So we started the conference by paying homage to ‘the great TV ad’, in a world of single-screen behaviors and little acknowledgement of the contribution of research.  It was a world where brand building was centered on the TV commercial, catching lightning in a bottle, and where there was no guarantee that success on one assignment would be reproduced on the next.

That was the past. But then, the ARF brought us into the present and near future.

At the conference we heard from Facebook how dominant mobile use is becoming for overall Facebook access, and about the need to link behaviors across screens for the same individuals via persistent log-in. They shared a factoid that 40% of people begin an activity on one screen and then complete it on another.

While we are in the digital age, at the same time, we saw evidence that it is also the golden age of television and in fact, digital might be TV’s best friend. Dave Poltrack of CBS shared three startling facts:

  1. TV viewing hours are NOT declining and linear TV is still, by far, the dominant behavior
  2. TV program audiences are NOT declining when you consider the full reach of a program across TV, social, digital, and mobile interactions
  3. There is a clear and conclusive correlation between program engagement and the effectiveness of advertising run on highly engaging programs.

The takeaways: advertisers can neither ignore the power of TV nor take it in isolation from simultaneous second-screen behaviors, and creatives need to realize how much tougher the challenge has become: to create ideas that work across screens to amplify and reinforce one another.

As media behaviors evolve in our digital, social, and mobile age, the marketing questions that research must address are changing, while research is also being armed with tools it never had before.

The ARF showcased new possibilities for measurement from best in class work that leveraged big data.

We heard about the following data streams being leveraged for breakthrough insights:

  • Social media listening to naturally occurring conversations and vocabularies (Occulus 360)
  • The use of web-based micro-surveys to build a previously unheard of library of nearly 30,000 questions that offer the richest playground imaginable for using data science to mine for unexpected insights (CivicScience)
  • A new science, “expectation science”, was described and validated as the needed companion to “measurement science” (CivicScience)
  • Bringing consumer segmentation to life not via focus groups but by creating prototypical digital behavior patterns (Brainjuicer’s Digividuals applied to Allstate segmentation)
  • The merging of massive databases of media behaviors, attitudes, purchase behaviors, media spend, etc. using anonymized matching and data fusion methods  (Nielsen, IRI, Comcast)
  • The power of asking questions to those whose digital behaviors are tracked to create a single source way of understanding path to purchase digital behaviors and motivations (Luth on behalf of Ford)
  • The challenges and opportunities of conducting research via smartphones.  AOL showed powerful evidence that smartphone research participation rates are lower, satisfaction with survey taking is lower, but data quality can be higher.
  • The continued need for the long-form survey for certain business questions: CBS conducted a 40-minute survey among 7,000 respondents as part of a research program that included merging massive data sets together to understand the landscape of video viewing motivations and how program engagement affects advertising effectiveness

Ask yourself: do you have a data strategy to generate insights and measurement that leverages every one of these arrows in the quiver or are you still primarily in a traditional research mode? Are you working as hard to understand the sea changes in media consumption as you are to understand consumption of your brand?

In the next blog in this series, I will describe in more detail some of the emerging big data and data science-based solutions that were described at the ARF Rethink 2014 conference.


Announcing The Finalists Of The 5th Wave Of The Insight Innovation Competition!

After several weeks of intense competition in the 5th wave of the Insight Innovation Competition, the results are in and 6 finalists have decisively risen to the top of the list based on votes by you, the global market research community.


After several weeks of intense competition – with 7 firms competing in the 5th wave of the Insight Innovation Competition, 24,167 views, and 3,100 votes – the results are in, and 6 finalists have decisively risen to the top of the list based on votes by you, the global market research community.

Interestingly, although the number of submissions was down from previous waves (an artifact of this judging round happening in Chile, I believe), the numbers of views and votes are up, so these seven fought hard.

Please join me in congratulating all of the competitors and the finalists. Here they are:

 

FINALISTS

IDEA NAME | AUTHOR | VIEWS | VOTES
eCForce | Adriana Rocha | 14503 | 1656
SocialDecode | Eduardo De Leon | 5940 | 482
Survmetrics, interactive surveys | Ramon Escobar | 2112 | 471
Rolling Labs | Mike Courtney | 846 | 416
Sustainable Research | Fiona Blades | 642 | 64
Vysical Labs: Immersive Video Games for Product & Retail Testing | Rolfe Swinton | 124 | 11

 

The submissions broke down into a few key categories:

Social, Local & Mobile: 1 submission (eCForce)

Big Data Analytics & Visualization Tools: 2 submissions (SocialDecode & Rolling Labs)

Gamification: 1 submission (Vysical Labs)

Adaptive Surveys & Agile Data Collection: 2 submissions (Survmetrics & Television Surveys)

Other: 1 submission (Sustainable Research)

So a pretty wide swath of “buzz terms” was covered, with a great variety of approaches and technologies represented. What is most interesting (to me at least) is how this mirrors broader trends both within and outside of the insights sector: these folks are obviously paying attention to what’s happening and are building businesses that are positioned to tap into those trends.

What happens next:

The finalists will present on stage in front of our panel of judges and the audience at IIeX in Santiago, Chile in April. This is a no-lose proposition for them: past participating companies have seen their businesses accelerate due to their involvement in the Insight Innovation Competition, resulting in funding, partnerships, new clients, and global brand exposure. The single winner will get:

  • Exposure to a large international audience of potential prospects, funding partners and investors, including the Ricoh Innovation Accelerator, Lowe’s Innovation Lab, and independent venture capitalists and angel investors
  • A free consultation provided by Gen2 Advisors to help the winner develop a growth and expansion strategy.
  • A “Hot Desk 60” membership for one year at the Center for Social Innovation’s newest facility in the iconic Starrett-Lehigh building in Manhattan, which includes 60 hours of coworking space and 3 hours of meeting room access per month
  • An invitation to present at the next Insight Innovation eXchange
  • An interview to be posted on the GreenBook Blog, viewed by 36,000+ industry professionals per month
  • An opportunity to work with successful senior leaders within the market research space

On April 9th one of the above companies will join Decooda, Zappistore, Raw Data, RIWI and SocialGlimpz as winners and can look forward to rapid acceleration of their business.


For the runners-up (and the finalists who don’t win this round), the news is still good. All participating companies will be vetted for inclusion in the Ricoh Innovation Accelerator and Lowe’s Innovation Lab programs. Selected participants will gain guaranteed organic funding through pilot programs with program partner companies, as well as access to acceleration resources for marketing, strategy, finance, and business development.

The next round of the Competition will launch in April, aligned with IIeX North America in June. Stay tuned for more details soon.

 

 

Congratulations to everyone who competed and good luck on the next steps for each of you!
