
Are You Shaking Up Market Research?

Nominate the “who’s who” of the market research industry for a prestigious Next Gen Market Research Award

 

By Tom H. C. Anderson

For the 8th Annual Next Generation Market Research Awards, the NGMR Award Committee has decided to shake things up and change the categories a bit.

Announcing New Categories for 2016 NGMR Award Nominations

Since the founding of the NGMR Award in 2009, it has quickly become one of the most prestigious testaments to achievements in our industry. Some have even referred to it as the Nobel Prize of market research!

This year’s three new disruptive change categories are:

Outstanding Disruptive Start-Up

We’re looking for an outstanding start-up (3 years or less in the industry) that is disrupting traditional market research methods. Whether it’s a technology platform or a service-based solution, nominate a firm that’s truly shaking up market research.

Most Innovative Research Method

What are the most innovative research methods you’ve come across in the past year? We’re looking for firms with tested new methods that have delivered exceptional, validated research results.

Industry Change Agent of the Year

Who is blowing your mind? Shaking things up? Doing things differently? This award will go to a truly exceptional change agent – someone who not only talks the talk but walks the walk.

This year both NGMR (Next Gen Market Research Group) and WIRe (Women in Research) have teamed up to encourage their members to submit worthy nominations for this prestigious award.

Winners will receive a complimentary pass to TMRE and the opportunity to discuss their Next Gen Research best practices during the NGMR Award session moderated by Kristin Luck and Tom H. C. Anderson.

Nominations should be sent to NGMRawards [@] NextGenMR.com no later than August 31, 2016, and must include two to three paragraphs (no more than one page) of supporting information, as well as a brief nominee bio and/or company background. If necessary, you may also attach separately up to two supporting charts/images or videos (on no more than one page).

Please note the category you are submitting for and provide the additional information below.

CONTACT INFORMATION

Nominee Name ______________________________
Nominee Title _______________________________
Nominee Firm/Institution_____________________

Email_______________________________________
Phone ______________________________________
Address_____________________________________

 

SUBMISSION INFORMATION

Please provide a brief argument for why one or more of the three award categories are appropriate.

The Judging committee will reach out should we have any further questions about a nomination. Only the award winners will be notified shortly before the event. Winners are expected to be able to attend the event and participate in the award ceremony.

Good Luck!

 


This year’s NGMR Awards are proudly co-sponsored by OdinText and UB Mobile.


Unlimited Length Mobile Video Uploads – Are we there yet?

Mobile phone technology is advancing at a rapid pace. Our challenge as platform providers is getting ever-increasing file sizes to the research platform where they can be viewed and analyzed.


By Dan Weber

Researchers have long known that having participants capture their responses on mobile devices from their kitchen, in the store, or at an event would offer many benefits. Mobile devices give researchers the ability to have participants capture moments as they happen, unobtrusively. Mobile phone technology is advancing at a rapid pace; we can now shoot high-quality, full-length 1080p videos. Our challenge as platform providers is getting these ever-increasing file sizes to the research platform where they can be viewed and analyzed.

History of mobile ethnography

The earliest forms of in-the-moment ethnography studies required a researcher to actually be present to record the responses and reactions of participants as they completed the required activity or event. While I will not argue the merits or drawbacks of having ethnographers present, mobile technology at the very least broadens the range of possibilities for capturing ethnographic-type information.

Researchers first tested these waters by sending participants cameras and instructing them to record their responses. Participants were then responsible for sending the camera back. This made for a very drawn-out and expensive process, often extending the time requirements of the project.

The next testing ground utilized desktop computers and webcams. This method too came with strings attached: participants could not go out of the home to complete activities because they were tethered to their computers. It wasn’t until the progression of smartphones and wireless technology that true in-the-moment ethnography studies became possible.

App or Web

There have been a number of blog posts on the web-versus-app development choice as it relates to functionality and user experience when capturing mobile video. When considering which direction to take, we made what we would consider the easy choice. We chose app over web for one simple reason: so that we could confidently and consistently upload full-length videos. It seemed unreasonable to us to tout the virtues of mobile ethnography while forcing participants to shoot 30-second to one-minute videos because of the limitations of a web-based design. We couldn’t ask participants to show us how they bake cookies in their kitchen and expect them to do it in fifteen one-minute videos.

With the web-based option, the participant must stay on the upload page until the video has fully uploaded. For example, a five-minute video recorded at 1080p / 30 frames per second has an approximate file size of 650 MB. On an average home Wi-Fi connection, this could take 20 minutes or more to upload – longer still outdoors, where signal strength can be intermittent. Not only do most people lack the patience to wait that long before texting a friend or checking their email, but many devices are set to lock automatically after a period of inactivity, which can disrupt the upload. This also assumes the connection is never lost during the upload, which rules out capturing video in remote places and in stores where mobile data may not even be available.
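To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The per-second bitrate and the uplink speed are assumptions chosen to match the example above, not measured figures:

```python
# Rough estimate of video file size and upload time.
# Assumed figures: 1080p/30fps footage at ~2.2 MB per second of video,
# uploaded over a ~5 Mbit/s home Wi-Fi uplink (both illustrative).

def estimate_upload(minutes, mb_per_video_second=2.2, uplink_mbit_s=5.0):
    file_size_mb = minutes * 60 * mb_per_video_second
    upload_minutes = (file_size_mb * 8 / uplink_mbit_s) / 60  # MB -> Mbit -> s -> min
    return file_size_mb, upload_minutes

size_mb, up_min = estimate_upload(5)
print(f"5 min of video = {size_mb:.0f} MB, about {up_min:.0f} min to upload")
# ~660 MB and ~18 minutes: in line with the 20-minutes-or-more figure,
# before accounting for dropped connections or device auto-lock.
```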

Limited by the software’s capability, researchers encouraged participants to record videos of between 30 seconds and a minute, limiting the value of the response in many cases. As qualitative researchers, we are trying to probe for more detail, not less! By capping video length, platform providers were minimizing the risk of a participant walking out of a building or going through a tunnel and losing their internet connection. Compounding the issue, some apps upload videos only over Wi-Fi, to ensure participants do not burn through their data plans and incur costs to participate in the research.

The Challenge

If full-length videos that actually upload regardless of the environment were the challenge, then building an app that can survive inconsistent internet connections was the goal. We found that as large videos were being uploaded, participants often moved in and out of Wi-Fi and various mobile wireless (3G, 4G, LTE, etc.) coverage zones, disrupting the upload process. To make matters worse, the cameras on mobile devices were improving rapidly, further increasing file sizes.

The Solution

Faced with this challenge, itracks set out to build a file compression and upload system that would allow participants to submit full-length videos without the risk of the upload failing. Our video upload system is resilient by design, because it recognizes that mobile internet connectivity is fragile.

We developed a system that chooses Wi-Fi when available but can pause the upload if the connection is lost. Once an internet connection is re-established, the upload resumes where it left off. If a mobile device can capture a long, data-rich video, the app can consistently upload it regardless of length. With this improvement, the length of mobile video that can be posted to our qualitative platform is limited only by the capabilities of the device capturing it. With higher-quality cameras and ever-larger storage, mobile ethnography, in-home usage tests, and shop-along studies can be run with participants free to express themselves in as much time as they need.
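itracks has not published its implementation, so the following is only a minimal sketch of the pause-and-resume pattern described above: a chunked upload that retries from the last acknowledged byte offset. The endpoint URL, the Content-Range handshake, and the chunk size are all illustrative assumptions, not itracks’ actual protocol:

```python
import os
import time
import requests  # any HTTP client would do; assumed available

CHUNK_SIZE = 1024 * 1024  # upload in 1 MB pieces so little is lost on failure

def resumable_upload(path, url):
    """Upload a file in chunks, resuming from the last acknowledged byte."""
    total = os.path.getsize(path)
    offset = 0
    while offset < total:
        try:
            with open(path, "rb") as f:
                f.seek(offset)
                chunk = f.read(CHUNK_SIZE)
            resp = requests.put(
                url,
                data=chunk,
                headers={"Content-Range":
                         f"bytes {offset}-{offset + len(chunk) - 1}/{total}"},
                timeout=30,
            )
            resp.raise_for_status()
            offset += len(chunk)   # server acknowledged this chunk
        except requests.RequestException:
            time.sleep(5)          # connection lost: wait, then retry same offset
    return offset
```

The key design point is that a lost connection costs at most one chunk: the loop waits and retries from the same offset rather than restarting the whole file.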

__________________

 

itracks is an independent market research technology and services company founded in 1998 by Dan Weber. itracks’ online focus groups, video focus groups, discussion boards, online communities, and markup tools are easy to use and come equipped with a wide range of engagement capabilities. itracks is known in the industry for its mobile video data collection and video management capabilities. To learn more about online and mobile research, sign up for a webinar at www.itracks.com.


Level Up: The Possibilities Brought to Life by Pokémon GO

In the few weeks since Pokémon GO’s U.S. release, it’s become a hands-down winner for this summer’s “craze.” And for smartphones, the needle has forever been moved.


By Zoe Dowling

In the few weeks since Pokémon GO’s U.S. release, it’s become a hands-down winner for this summer’s “craze.” Future generations will likely reflect on these times with the same fondness as with the hula-hoop or (more recently) the ice bucket challenge – but for smartphones, the needle has forever been moved.

A Friday evening walk on Los Angeles’ Redondo Beach Pier mirrored many landmark locations around the country – a majority of visitors on the Pokémon hunt, many of whom came furnished with mobile battery packs and chargers. Beyond the volume of active players, it was striking to note how inclusive the game is – from tweens to grandpas; from individuals and couples to groups, everyone wanted to catch ‘em all.

What drove Pokémon GO’s unprecedented popularity?

Given the game’s inclusive fan base, its popularity isn’t just a result of ’90s kids eagerly reliving their youth, nor is it simply techies delighting in the technological convergence and execution. While these are contributing factors, there’s more going on.

Pokémon GO is accessible

The internet, social media and smartphones facilitate a connectivity and global reach to the extent that memes and trends spread almost instantaneously. News about the game swept across the country and the globe. People want to be part of the newest trend.

At the same time, the game’s easy (and free) entry allows anyone with a smartphone to participate. Within minutes of opening the app, you experience the wonder of being virtually positioned within your physical location and catch your very first Pokémon as augmented reality delights. Pokémon GO also highlights the universal popularity of casual mobile gaming, although perhaps for the first time it becomes a visible, in fact public, activity.

Pokémon GO merges technologies in a way its predecessors didn’t manage

Maps aren’t new to gamers but location-based gaming appears to have gone mainstream. The use of GPS and walking your virtual character around your physical world is very neat.

Aside from tracking your movements on the map, your physical and virtual location are also linked by Pokéstops. Here you pick up PokéBalls and other items to add to your stash while learning about the micro-landmarks in your immediate vicinity. During my first walk I discovered that my local diner is 40 years old and that the town library gardens are home to a small remembrance fountain. Not to mention countless, hitherto undetected, Pokémon to add to my Pokédex.

The inclusion of Augmented Reality (AR) – which, as some rightly note, is a limited aspect of the game, appearing only when you encounter a Pokémon and attempt to catch it – nevertheless delivers one of the biggest ‘wow’ moments, acting as the final convincing glue between your physical and virtual worlds. These technologies, coupled with classic game elements – mission-based activity, experience points, level-ups, and traditional video-game combat – deliver a compelling experience.

Pokémon GO allows users to concurrently escape and explore their world

Finally, it’s possible that the game brings welcome relief from this year’s bleak news. It provides a moment of escapism that you can share, even just with slight smiles and nods, with the people around you – bringing us together, albeit for a brief moment, in an increasingly fragmented world.

The branded advantage

Whatever the reasons for Pokémon GO’s immense success, it has given us a glimpse of possibilities for geo-location and AR that until now have felt more like futuristic hyperbole. The opportunities extend well beyond the gaming world. For brands, the race is on to capitalize on people’s engagement with the game and drive traffic to their retail environments. Furthermore, well-considered partnerships can help position a brand as a player within the cultural conversation.

McDonald’s Japan became the first official brand partner, with 400 restaurants as ‘gyms’ and the remaining 2,500 as sponsored Pokéstops, but there have also been many instances of unofficial linkage – signs in shop windows offering “10% discount for any Pokémon captured here” and countless social media posts by brands eager to be part of the moment.

Will Pokémon GO impact market research?

It’s hard not to start considering the implications for research. From an immediate perspective the smartphone message, which should already be loud and clear, is booming. People have smartphones. People are using smartphones. This is where we’ll find them.

The willingness to use GPS and have your movements mapped is an interesting one. In many ways, people already give out this information freely with check-ins on various social media and review sites, but perhaps this takes it to a new level.

What would a shopper journey look like using an app with a map overlay? What if there were virtual items within the retail environment that people found during their journey to signal a feedback loop? What if we could use AR to have people select items from a set of features and overlay them to create a view of the environment as they’d like to see it?

In a matter of a few short weeks, this type of interaction with research respondents has come to feel entirely possible rather than a pipe dream. The challenge now – turning the potential into a reality.

Happy hunting!

 


From Silos to Success: A Decision Maker’s Approach to Big Data

To reap the benefits of data, it needs to be turned into actionable insights. The question is: how do you approach the task of synthesizing big data and all kinds of other information?


By David Albert

There are a million stories playing out on the big data landscape – not all of them pretty. Some marketers are working smoothly with data to drive their businesses to success.  But others are struggling to cope as data gets bigger and more ubiquitous.

The availability of data itself does not resolve marketers’ consumer information issues. To reap the benefits of all this data, they need to turn it into actionable insights, and data must be integrated to fully realize its value. The question is: how do you approach the task of synthesizing big data and all kinds of other information? How can you get the most out of data?

  1. Every good solution starts with a well-defined problem

All too often, data integration starts with someone in an organization – typically a senior manager – asking, “Why don’t we take advantage of all the data available to us?” The request often arises without context, process, or a clear goal.

Then the research team goes to its partners and asks the same unfocused question. If action is taken at all, the knee-jerk response is to bring data streams together with little planning and an emphasis on the technology. Breaking down silos is important; but if that is all you do, you are left with the same problem you had before – lots of data, but no decisions coming from it.

Effective use of multiple data sources must start with well-defined business questions, problems, or issues.  Without these, you are simply creating a bigger haystack to search for the needle.  The process of articulating these questions does not need to be cumbersome. Simply writing down what you and/or your organization are working towards provides a great start. Then prioritize and dig into a topic to expose precise business questions.

  2. The data you need is determined by the answers you seek

Literature is full of quest stories. In these tales (think Odysseus or Frodo), the protagonist is in search of something – he or she has a goal and is working towards it.

Data integration is no different; the business questions and issues help you define the goal.  Having multiple sources of information that can be combined in different ways provides many paths our hero can take.  Successful quests have a common thread – focus.

When Indiana Jones was searching for the Ark, he zeroed in on this goal. This same discipline needs to be applied to data integration efforts.  In the realm of data analysis, focus is achieved by having the right sources and amounts of data; too much can take you far off course.

By building off of precise business goals, you can craft a simple analytic model that will help you determine which sources to interrogate or combine.  For example, your business objective may be to increase share.  This leads to the question, “What are the factors that influence market share?”  There are market pressures – price, competitive products, and competitiveness of your value proposition.  There are internal factors, such as costs, service, and delivery. And there are customer factors, as well – satisfaction, perceptions, and needs.

From these “inputs” you can build a specific business question, such as, “Does our service model facilitate repeat purchase?” Then you can identify sources for the data you need to get to the answer.  Data will typically be sourced internally, from third parties, and from custom, ad-hoc techniques such as customer surveys.  Having a focused list of what you need will help you stay on target in your quest to address the important business questions of the day.

Make sure you pay attention to what you cannot learn from internal data. It tends to be harder to understand why customers make the decisions they make – but knowing the “why” is essential to effective messaging and targeting. This type of insight can be obtained through consumer surveys and focus groups.  Make sure you have sources to understand consumer motivations, perceptions, and decision making before you run off making plans to modify marketing, service design, or products.

  3. Tools make the job easier – but they do not solve business issues

Data integration can be messy. Data sets from different sources do not fit neatly together. You could spend more time pulling data together and making it ready for analysis than actually analyzing the data. Today’s business environment moves too quickly for this kind of slowness.

This is where technology comes in. Across the value chain, programs and services can help you bring data together (think of Hive) and analyze it quickly (Hadoop and others).  But being able to run a report is vastly different from being able to interpret data.  And then there is the important step of communicating the findings and recommendations in a memorable, compelling way to spur change within your organization.
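As a toy illustration of that gap – not any particular vendor’s stack – here is a sketch in Python/pandas: the “bring data together” step is a single line, while the interpretation is where the human work begins. The column names and values are invented for the example:

```python
import pandas as pd

# Hypothetical extracts: an internal transaction summary and a customer survey.
sales = pd.DataFrame({"customer_id": [101, 102, 103, 104],
                      "repeat_purchases": [4, 0, 2, 5]})
survey = pd.DataFrame({"customer_id": [101, 102, 103, 104],
                       "service_satisfaction": [9, 3, 6, 8]})  # 1-10 scale

# The integration step: one line.
merged = sales.merge(survey, on="customer_id")

# A report is easy to produce...
print(merged["repeat_purchases"].corr(merged["service_satisfaction"]))
# ...but deciding whether service really drives repeat purchase, and what to
# change about the service model, still requires an analyst with context.
```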

Do not mistake the ability to generate some output with the bigger task of developing business recommendations and motivating action. For this, you will need specialists who understand the business context, can derive meaning from data, and can effectively communicate to senior stakeholders. These people need to connect the dots, pulling insights from multiple sources, interpreting models, and tapping other aggregate sources.

Think again about Indiana Jones.  Much like his tools, such as his signature whip, big data technology is a tool.  It is his skill, understanding of the situation, and intuition that allow him to successfully find the treasures he is seeking.

  4. Don’t review the past – see the future

When analyzing data, you typically are looking in the rear-view mirror at patterns that emerged in the past. This can help you understand what the future may hold – but there are also robust analytic tools that can take data, in real-time, and predict what will happen next.

Imagine if your retail managers had access to a dashboard that showed them which aspects of the customer experience to focus on today to maximize conversion – with the focus areas for each outlet specific to that location, taking into account macroeconomics, geography, demographics, and historical trends.  Such a tool is possible through data integration and big data analytics. You will still need to review the past to make strategic decisions; but tactical tools that drive the right behavior at the point of sale are going to become essential for a distributed sales force.

Making use of data from multiple sources is a must for every marketer. Going into the endeavor with a clear plan is key to success, as is maintaining what made you successful in the past – your ability to interpret information and use it to make decisions.

Share

A Bad Week for Neuroscience

Recent findings have created serious doubts and cast aspersions on neuroscience.


By Dr. Stephen Needel, Advanced Simulations

For those of us in the business who are scientists, or have pretensions of being scientists, neuroscience is a ridiculously compelling concept. Science uses the term sui generis (because we can’t help but use Latin terms profusely and indiscriminately), which means “cannot be reduced to a lower concept”.  The ability to map the neurological processes – the lowest level of an individual’s response – to marketing concepts is a holy grail for us. If we can map the process to the response, we get away from asking people questions, with all the uncertainty, all the biases, and all the controversy in our scientific endeavors.

So it is with both great sadness and a certain amount of smug self-satisfaction that I read two publications recently that raise serious doubts and cast aspersions on neuroscience. The first is an article in Proceedings of the National Academy of Sciences (no, I haven’t heard of it either) by Eklund, Nichols, and Knutsson called “Cluster failure: Why fMRI inferences for spatial extent have inflated false positive rates”. I give you fair warning – this article is as dense as they come, both from a neurological and a statistical perspective; it is not for the faint of heart. Fortunately, they summarize the issue and the results in language we can all understand. The three most common statistical packages used to analyze fMRI data have a false-positive bias in the range of 70%. That’s worth repeating – they found that up to 70% of the time these packages deliver a false positive, and they call into question the results of some 40,000 fMRI studies. In practical terms, when someone tells you that this type of stimulus excites this area of the brain and that this means it’s good or bad, they are likely wrong.
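To get a feel for how multiple testing can push false positives that high, here is a toy Monte Carlo sketch. It is not the paper’s cluster-extent analysis (which concerns spatial autocorrelation assumptions in real fMRI noise); it only shows how “at least one significant blob somewhere” becomes near-certain when many tests meet a lenient threshold:

```python
import random

random.seed(1)

STUDIES = 1000   # simulated null "studies" with no real effect anywhere
TESTS = 500      # independent voxel/cluster tests per study (toy number)
ALPHA = 0.005    # a lenient per-test threshold, uncorrected

false_positive_studies = 0
for _ in range(STUDIES):
    # Under the null, each test "fires" with probability ALPHA.
    if any(random.random() < ALPHA for _ in range(TESTS)):
        false_positive_studies += 1

print(f"{100 * false_positive_studies / STUDIES:.0f}% of null studies "
      "report at least one significant blob")
# Analytically 1 - 0.995**500 = ~92%: the family-wise error rate the fMRI
# packages were supposed to (but, per the paper, did not fully) control.
```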

The other publication I read was the July 2016 issue of Quirk’s Marketing Research Review. I recognize that this is an advertiser-supported publication – it’s free to subscribers – and I usually find at least one interesting article per issue, sometimes more. There’s an ad (on the inside back page) from a major marketing research supplier, who will go unnamed, promoting their neuroscience business. The headline says, “Think your ad is good? We can make it GREAT.” They use EEG, biometrics, facial coding, eye-tracking, and self-report to “get at the truth of how people respond to your ad, so you can run your campaign with confidence.”

No, not really. They can tell you whether there is or isn’t neurological stimulation, and probably where in the ad the stimulation occurs or doesn’t occur. That tells you two things: the ad generates some stimulation or it doesn’t, and that stimulation occurs when you think it should. Neither of these will make the ad great, or even good for that matter – it’s a report card. They can tell you whether it is more or less stimulating than other ads in your category that have been tested. They can tell you whether people liked the ad, via facial coding and by asking them. That won’t make the ad great either. Why not? Simple – we don’t understand the relationship between neurological stimulation and purchasing, and we barely understand the relationship between ad-liking and purchasing. At the end of the day, the question is whether advertising drives increased purchasing, and we have yet to establish the linkages needed to define this neurologically. Research doesn’t make anything great – it tells us whether it is likely to be great.

I’ve argued for some time that neuromarketing is its own worst enemy, over-promising and under-delivering. Thankfully, we’ve seen less hyperbole in the last couple of years. Until this week.

Reference – www.pnas.org/cgi/doi/10.1073/pnas.1602413113


The Restless Resting State, and Why Brain Scanning is Still Valid

Is brain scanning data invalid? Thomas Zoëga Ramsøy shares some perspective on the intended killing of fMRI.


By Thomas Zoëga Ramsøy

Is brain scanning data invalid? A recent meta-analysis published in the esteemed journal PNAS (http://www.pnas.org/content/113/28/7900.abstract) suggests that decades of brain research studies should be questioned! An estimated 40,000 studies using the established method of fMRI (functional Magnetic Resonance Imaging) may be hampered by a software bug that went undetected for too long.

In the study, analyses of so-called “resting-state data” from several brain scanning studies suggest that as many as 70% of “positive findings” may be false. That is, 7 out of 10 reported “brain blobs” found when people are resting inside a brain scanner may in fact not be there at all!

This is a very interesting finding indeed. It reminds us of the need for replication in science, a part of scientific housecleaning that is never honoured as part of a scientist’s career. It is worth noting that the finding pertains to only one type of neuroimaging study, fMRI (and most likely only so-called blood oxygenation level dependent, or BOLD, fMRI), and not to other methods such as electroencephalography (EEG) or magnetoencephalography (MEG). Still, the problem could be substantial, as many of our insights about the brain may be unsupported.

But is it really that bad? There is a saying that “extraordinary claims require extraordinary evidence”, and here we have a problem. One major caveat of the study is the assumption that the resting-state data used in this meta-analysis can be treated as equal between individuals. After all, when asking people to rest (often between more active tasks), can we assume that the brain at rest is similar for us all? Decades of psychological studies of the mind at rest – including mind-wandering and Task Unrelated Images and Thoughts – have instead shown extreme degrees of variation both within and between individuals. What follows is inspired by an as yet unpublished manuscript that I co-authored with Prof. Bernard J. Baars from the Neuroscience Institute, which I hope can help put the intended killing of fMRI in perspective.

The restless resting state

Considerable attention has been called to a putative “resting state” of the brain, observed during designated rest breaks in neuroimaging experiments. Robust brain differences have been found between task-related (TR) and task-unrelated (TU, rest break) conditions. Some scientists speculate that TU brain activity may reflect a special state of the brain, sometimes called a “resting state,” “default mode,” or “baseline condition”. We suggest that the explanatory use of these terms is premature. Instead, a large empirical literature points to an alternative account: people during rest breaks are reverting to their normal, spontaneous stream of thought, which is subjectively rich and self-relevant, highly variable, multimodal, often explicitly goal-directed, and probably functional. Even the word “state” may be premature, since it suggests a stable condition of the brain. Instead, some five decades of psychological studies show TU activities to be dynamic, heterogeneous, shaped by emotional and motivational primes, and focused on current life concerns. The term “spontaneous thought” might therefore be a more accurate label for task-unrelated brain activity.

The spontaneous brain

Scientists tend to be cautious about self-reported experiences, but some facts about consciousness are as predictable as objects falling in Earth’s gravity. The entire field of psychophysics relates precisely controlled stimuli to reliable subjective reports. Even endogenous events can be reported reliably, as in experimental studies of verbal rehearsal and visual imagery and their brain correlates. Less well known is almost half a century of thought-sampling studies using real-time reports under known conditions. One of the oldest psychological demonstrations is to simply close one’s eyes and try to stop the flow of thought. We can read sources spanning some 26 centuries reporting how difficult that is to accomplish. The flow of spontaneous experience appears to have its own persistence and “urgency,” as William James wrote a century ago. Some five decades of systematic psychological research support the notion that spontaneous cognition is not random, but reflects “current, personal concerns”. There is also evidence for repetitive long-term themes in spontaneous mentation, influenced by major life events, traumatic experiences, and implicit goals. In everyday life, spontaneous, apparently unstructured thinking may be the most common kind of goal-directed thought.

What happens when people go from a focused, externally instructed cognitive task to a condition that is not heavily structured by external demands? For many subjects, designated rest breaks may be a chance to get back to a normal, spontaneous, self-relevant, and active stream of thought. In the 1960s and 1970s a number of studies in experimental psychology focused on internally generated images and thoughts, showing that depriving the mind of sensory information stimulated the occurrence of internally generated experiences. TU thoughts were studied in a more elaborate and specific research program on Task Unrelated Images and Thoughts (TUITs). Here, subjects were asked to report the content of their thoughts at regular intervals. This could be done in laboratory settings, in which subjects were given tasks of varying difficulty and attentional load, or in more everyday settings, where subjects were interrupted at random times during the day and wrote down their ongoing thoughts. One general finding from this research was that there is a continuous shifting of attention between externally and internally generated sources of information. Furthermore, spontaneous thoughts were found to be rather repetitive and predictable, always returning to “current concerns”. The content of thought was found to become increasingly unrelated to external events as those events became more static and predictable. In this sense, the more boring the task, the more time people spent (during testing) on task-irrelevant thoughts.

More recently, the study of TUITs has re-emerged in the scientific literature. A portion of this research has focused on detailed observations about the influence of TUITs on cognitive performance and on the intrusiveness of task-unrelated thoughts on both ongoing and later performance. Other studies have begun to couple the occurrence of TUITs to measurable physiological changes, such as increased heart rate during TUITs. As with the original TUIT studies, these results confirm that increases in task difficulty, raising the attentional load, make TUITs significantly less frequent. On the other hand, the easier the task – leading to more automatic behaviour – the more TUITs subjects report. In this sense, a resting state is just one extreme of how much attentional load is placed on a subject’s mind; at the other end are highly energy- and attention-demanding tasks such as working memory 2-back or 3-back tasks. The conditions we compare in a resting-state study are vital to our interpretation of the results.

Daydreaming was also studied in the same period as TUITs were explored. Using the “Imaginal Processes Inventory”, it was shown that people were aware of some daydreaming every day, and that daydreams ranged from “obvious wishful thinking to elaborate and complex visions of frightening or guilty encounters”. Furthermore, factor analyses have revealed three major types of daydreaming: positive-constructive; guilty-dysphoric; and a poor-attentional-control pattern characterized by fleeting thoughts and an inability to focus on extended fantasy. At the same time, the test-retest reliability of daydreaming reports has been found to be high. As such, the literature on TUITs and daydreaming, both highly relevant to the study of the resting state, is rich both in number of studies and in information about the richness of conscious content during such periods.

The non-death of fMRI findings

So what may initially seem like a failure to replicate brain scanning data may in fact be a failure to understand the human mind. Many neuroscientists just don’t know the psychological literature well enough to see that the human mind does not simply “do nothing” when asked to rest and relax. Indeed, as we have seen above, the mind is never at rest, and when not given an explicit task, it defers to an inner state of “current concerns.”

So the basic assumption of this meta-analysis is flawed, and we should treat its conclusion accordingly. Analysing resting-state brain activity and assuming that the results will be convergent is the same as claiming that every human mind at any time is thinking about the same things.

That said, we still need replications of brain scanning studies, just as we need to establish whether a given scanning method is valid. Here, neuroscience fortunately has a vast toolbox, and now-classical work from leading figures such as Nikos Logothetis has provided clear links between fMRI and other measures of brain activity. For example, Logothetis’ work has clearly shown that fMRI activation “blobs” are related to dendritic activity rather than axonal activity. There is nothing in these data to support a claim that fMRI data are invalid. Indeed, extraordinary claims have not been supported by extraordinary evidence in this case.

__________

Interested in learning more? Sign up for Thomas Zoëga Ramsøy’s free Coursera course on neuromarketing and consumer neuroscience.


Top 9 Considerations When Choosing Tablets for Your CAPI Project

The Market Research industry is transitioning from PAPI (Paper based surveys) to offline Tablet based CAPI. There are many factors to consider when making the transition to tablets, both hardware and software factors.


By Ofer Heijmans, CEO of Dooblo

The Market Research industry is transitioning from PAPI (Paper based surveys) to offline Tablet based CAPI. There are many factors to consider when making the transition to tablets, both hardware and software factors.

In this article I will focus on the tablets themselves and try to shed light on which hardware aspects are the most important to factor in when choosing your devices, and which can take a back seat if you are on a budget. You will be surprised how many low-to-mid-range devices provide very good value for money and can in fact be used for 99% of the actual fieldwork done these days. The below is based on our experience at Dooblo, where we help over 600 customers from 80 countries transition to tablet-based CAPI and perform over 20 million interviews each year.

1. Tablet sizes

Tablets come in different sizes, ranging from smaller 3.5” mini-tablets through 7” and 10” devices. In general, the bigger the device, the higher the cost – but read carefully, because for CAPI usage bigger does not always mean better. The size you need depends heavily on your unique CAPI environment and projects.

Our recommendation is to purchase a 7” tablet. Its blend of screen size, weight, and cost makes it ideal for CAPI usage.

2. Memory

The amount of memory in the device is one of the most important factors for CAPI. CAPI surveys tend to be very long and contain a hefty amount of complex logic. Due to the internal mechanisms of the Android operating system, even the most advanced tablet surveying apps, like SurveyToGo, require lots of memory: typical questionnaires have tens if not hundreds of questions that need to be displayed, along with media and often some basic database access. All of these consume lots of memory.

We highly recommend purchasing a device with 1GB of memory or more. Stay away at all costs from devices with 512MB or less, as you will be risking the data integrity of your fieldwork.

3. GPS

GPS is very important for CAPI. Recording the GPS locations of interviews and tracking interviewer routes is, in 2016, a de facto standard of quality control; without it, you have no way of knowing where a certain interview was performed or where a certain interviewer travelled during a given date range. Luckily, almost all tablets today are equipped with GPS – however, GPS is not a single yes/no parameter.

Make sure the tablet you choose has a true hardware GPS receiver. A-GPS and GLONASS should be considered “nice-to-have” features.

4. Cameras

While the camera might not initially seem an important concern, the more proficient you become with using tablets for CAPI, the more important the cameras become from a quality control point of view. In today’s CAPI world, the camera is used both by interviewers, to snap pictures of where they are right now, and by the QC department, to perform silent photo capturing that gives a direct glimpse of the interviewer’s surroundings and confirms the fieldwork is done to the highest standards.

A rear-facing camera is a must; its specs are not important. If possible, get a device that also has a front-facing camera.

5. Battery power life

The battery within your tablet is responsible for providing “air” to your device so it can breathe. Since CAPI projects naturally are performed in the field, getting the most battery life out of your device is critical.

In general, device manufacturers state that batteries can be recharged 300-600 times before end-of-life is reached, although in real-world use this number is usually much higher. Battery capacity decreases considerably over time and recharge cycles, so even though a battery might last longer than 600 recharges, its capacity is guaranteed to deteriorate. Typical figures suggest a 20% loss in capacity after 250 recharges.

From your shortlisted devices, pay attention to the mAh rating and choose the one with the highest number. Keep in mind, though, that the #1 factor affecting your battery is how you use the device, not the amount of mAh you have.

6. Screen quality

Why does screen quality matter? When thinking about CAPI work we often neglect this aspect, and while it is less important than a few other critical ones, it still matters. Screen quality affects your battery consumption, how visible the screen is in the sun, and how sharp and crisp videos and photos appear on the device.

If the majority of your work is done in bright sun, get an IPS LCD screen. If the majority involves showing videos or photos, insist on Full-HD screens and up. Otherwise, no special consideration is necessary.

7. Networking

While in CAPI the actual fieldwork takes place completely offline, eventually the tablet will upload the data back to the data center, and it will do so using its networking components. Different tablets ship with different components, so it is important to plan ahead and get tablets that have the networking components you need. As networking components are relatively expensive, they can have a real impact on the price you pay for your tablet.

For most cases Wi-Fi is enough, with Bluetooth being optional. If you foresee needing real-time data upload, make sure to get a device with a 3G or other cellular component; even if you do not use it now, you might use it later.

8. CPU & Storage

CPU speed and storage play a big role when choosing a tablet for consumer use, but for tablet CAPI surveys they have almost no implications. Any CPU should run CAPI surveys fine, even on low-end devices. As for storage, surveys with no videos have no implications at all: survey data output measures around 30KB per interview, while even low-end devices these days have around 8GB of internal storage (not memory) available, which essentially means you can store over 200,000 interviews on the device before needing to upload and free the space. The only exception is if you are capturing lots of videos or photos. Photos tend to be about 150KB in size, and full-HD video can quickly consume 3-5MB of storage per second. So if you are capturing a huge number of photos or videos, make sure the device has enough storage and can potentially be expanded with an external SD card to meet your needs.

Do not worry about CPU speeds, as they are not relevant to CAPI. Storage space should only be a concern if you are capturing videos or huge numbers of photos; otherwise it is not a concern.
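A quick sanity check on those storage figures, as a minimal sketch using the numbers quoted above:

```python
# Storage back-of-the-envelope, using the figures from the section above.
STORAGE_GB = 8          # internal storage on a typical low-end device
INTERVIEW_KB = 30       # survey data output per interview, no media
PHOTO_KB = 150          # typical captured photo
VIDEO_MB_PER_SEC = 4    # mid-point of the 3-5 MB/s full-HD range

storage_kb = STORAGE_GB * 1024 * 1024
print(storage_kb // INTERVIEW_KB, "plain interviews")   # ~279,000
print(storage_kb // PHOTO_KB, "photos")                 # ~55,000
print(STORAGE_GB * 1024 // VIDEO_MB_PER_SEC // 60, "minutes of full-HD video")
# Plain survey data is effectively unlimited; video fills the device in well
# under an hour of footage, which is why an SD card slot can matter.
```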

9. Android OS version

Do not worry about the Android OS version, as long as your CAPI vendor supports the OS version your tablet runs. Before buying the device, check with the CAPI vendor that the app supports that Android OS version.

If you have any questions on tablet hardware for CAPI, or wish to consult on which tablet to choose for your projects, you are welcome to contact me personally at ofer@dooblo.com.


6 Themes from Insight Innovation Exchange (IIeX) 2016 North America

Sarah Faulkner shares what's new and what's next in insights based on her takeaways from IIeX 2016.


By Sarah Faulkner, Faulkner Strategic Consulting

This was my second year attending IIeX (Insight Innovation Exchange), and it was great to learn about so many innovative new approaches, technologies, and suppliers in the market research space. Below are the top 6 themes I took away from IIeX 2016 – what’s new and what’s next in insights!

Key Insights Industry Trends:

1. The Commoditization of Research Execution

  • Automating research execution and interpretation opens up more thinking and analysis time (e.g. Research Now standard brand trackers, AYTM analytics).
  • Off-the-shelf research solutions (e.g. ZappiStore) provide a right-sized, pre-packaged research project at a great value.
  • Do-it-yourself (DIY) research (e.g. AYTM, SurveyMonkey) enables client researchers and consultants to do faster, cheaper research for simple objectives.

Key take-away: Automating non-value added work can be faster, cheaper and more accurate. However, it can never replace an actual human for creativity, influencing and engaging, convincing and telling stories.

2. Bite-sized, Right Sized

  • Bite-sized research means surveys that can take 5 minutes (vs. 50) or even a single question at a time (e.g. Google Consumer Surveys, gamified/app research).
  • Bite-sized insights can be easily and quickly understood and are a good way to ensure learning is absorbed in an organization (vs. one massive report).

Key take-away: In our information-overload, time-starved world, collecting and communicating data in bite-sized amounts can increase engagement all around.

3. Storytelling Everywhere

  • As a research tool—especially in qualitative research, but also larger scale via video in quant (e.g. Voxpopme) or online metaphor elicitation (e.g. Meta4Insight).
  • As a reporting tool—think re-telling consumer stories in qual or applying a narrative approach in a quant. summary.
  • As an innovation tool—use stories to bring a possible future to life (e.g. Lowe’s Innovation Lab comic books) or collaborate with sci-fi writers to create a future story (e.g. SciFutures).

Key take-away: Emotion is required for action, whether it’s consumer buying behavior or client/stakeholder decision making, and nothing gets to emotions better than a good story.

4. Rise of Machine Learning

  • Text analytics and sentiment analysis can derive meaning from big data, social listening, and survey data (e.g. OdinText, Converseon).
  • Facial coding can now be done effectively by machines and so opens up new worlds of application and scalability (e.g. Affectiva).

Key take-away: Advances in machine learning mean that computers can take over hours of laborious hand-coding of text and emotion—it’s not perfect yet, but it is much more scalable.

5. Visualization Drives Clarity

  • Visual questionnaire design can capture much more accurate data where the subject could be misinterpreted or hard to understand (e.g. VitalFindings).
  • Visualizing data and reporting is still a hot topic yet still a major client unmet need. Like stories, visuals make insights easier to understand and more likely to stick.
  • Visuals aren’t just for data; they are also essential for bringing strategy to life—think images, video, etc. in addition to text.

Key take-away: Visualization in survey design can help increase accuracy (e.g. visual scales, pictures + words), while in reporting and strategy documents, it’s a way to bring the content to life.

6. Behavioral Research: Actions Speak Louder

  • Re-targeting surveys can reach consumers based on a specific online behavior for research—ad effectiveness, site visitors, audience profiling, etc. (e.g. Survata).
  • Purchase- or receipt-triggered surveys can be a great way to get real-time, accurate sample and data on path to purchase, shopper insights, etc. (e.g. InfoScout, Field Agent).
  • Implicit research, including affective priming, gathers data indirectly, so it can uncover real thoughts and feelings on a range of topics (e.g. Sentient Decision Science, Olson Zaltman).

Key take-away: Identify research respondents via actual behavior (vs. claimed) to increase accuracy. Also, brain science tells us that most decisions are made unconsciously, so don’t rely only on what people say, but also consider implicit and behavioral findings.


#MRX Top 10: Big Data with a Little MR

Of the 6,024 unique links shared on the Twitter #MRX hashtag over the past two weeks, here are 10 of the most retweeted…

 

By Jeffrey Henning

Of the 6,024 unique links shared on the Twitter #MRX hashtag over the past two weeks, here are 10 of the most retweeted…

  1. 10 Most Successful Big Data Technologies – The Forrester Research report, TechRadar: Big Data, Q1 2016, places 22 Big Data technologies on a growth curve, assessing those with the most potential. In terms of generating business value added, Forrester rates MPP data warehouses the highest, followed by predictive analytics and data virtualization.
  2. Big MR – From Big Data to the Big Picture – In this Online MR interview with Darren Mark Noyce, founder of Skopos London, he says, “Data scientists have to ensure … they are uncovering new facts and insights, or describing behaviours that could be interesting and useful to decision-makers in firms, and are able to communicate them well, or the power and the magic will be lost. Market Research has a true tradition and heritage in providing such flexible impactful insight solutions, delivered to decision-makers in an actionable trusted way. Can Big Data do this on its own? Perhaps we should work together? Big MR anyone?”
  3. Pokémon Go: Gamification Lessons For Research – Jason Anderson, president of Datagame.io, beat me to the punch with his take on Pokémon Go, providing six reasons that it has been so successful. Sadly, traditional MR played little role at Niantic Labs, its publisher.
  4. Transformation IQ – Jeff Resnick’s free ebook, Transformation IQ: Reinventing Your Business to Capitalize on a Changing World, provides profiles in the courage of transformation, through conversations with CEOs of 11 MRX companies.
  5. What Clients Want: 3 Key Aspects of the Best Research Reports – Kimberley Bell, writing for FlexMR, argues that the best research reports interpret the data, provide clarity rather than minutiae, and discuss the implications of the data.
  6. Partnering With Data Scientists: How Market Researchers Make the Most of Big Data At LinkedIn – In an interview with Sally Sadosky and Al Nevarez of LinkedIn, Marc Dresner focused on how market research has changed with the availability of Big Data. “At LinkedIn because we are able to look at the behavior of the members, we are able to do a lot more research in advance – looking at behaviors, looking at trends, testing hypotheses. When we actually talk to members, either through quantitative surveys or qualitative methods, we can really focus our questions… Our surveys tend to be a lot shorter, which is great for response rates and completion rates.”
  7. A Model for Predictive Measurements of Advertising Effectiveness – Writing for AMA, Matt Weingarden of Curator Research discusses the 1961 Lavidge-Steiner model (the classic funnel):

[Image: the classic Lavidge-Steiner purchase funnel]

  8. How Consumers Buy Brands: The New Decision Journey – Graham Staplehurst of Millward Brown discusses the shift from such a consumer purchase funnel to a decision cycle:

[Image: the decision cycle]

  9. The Definition of Happiness Changes as You Age, According to Science – If you’re old enough, this article may make you happy.
  10. Before MR, Surveys Made for Fun Parlor Games – Writing for Quirks, Mike Boehm discusses when surveys were so engaging that Karl Marx, Paul Cezanne, Oscar Wilde, and Sir Arthur Conan Doyle took them.

 

Note: This list is ordered by the relative measure of each link’s influence in the first week it debuted in the weekly Top 5. A link’s influence is a tally of the influence of each Twitter user who shared the link and tagged it #MRX, ignoring retweets from closely related accounts. Only links with a research angle are considered.
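For the curious, the scoring described in that note reduces to a simple tally. This hypothetical sketch assumes made-up influence scores and a pre-computed “related account” flag; the real pipeline’s inputs are not published:

```python
from collections import defaultdict

# Hypothetical data: (link, user, user_influence, is_retweet_of_related_account)
shares = [
    ("example.com/big-data", "alice", 3.2, False),
    ("example.com/big-data", "bob",   1.1, False),
    ("example.com/big-data", "bob2",  1.0, True),   # related account: ignored
    ("example.com/funnel",   "carol", 2.5, False),
]

influence = defaultdict(float)
for link, user, user_influence, related_retweet in shares:
    if not related_retweet:               # skip retweets from related accounts
        influence[link] += user_influence # tally each sharer's influence

for link, score in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"{score:5.1f}  {link}")
```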


The Engagement Crisis: There Is a Light at the End of the Tunnel

The lack of engagement from consumers/respondents in market research has generated a growing crisis in the industry. Is there a light at the end of the tunnel?


By Adriana Rocha

In the last 10 years or so, we have witnessed the growth of automation in data collection, analysis, and reporting in the market research industry. We have also seen new technologies (VR, facial coding, mobile, etc.) and numerous technology start-ups entering the market research space. However, it seems little or no advance has been made in using new technologies to improve respondent engagement with market research.

As we well know, the lack of engagement from consumers/respondents has generated a growing crisis in the market research industry. It is no secret that online panel companies have faced many challenges in attracting and retaining new participants. Decreasing response rates and the lack of mobile-optimized surveys are also common problems well known in the industry. As per the latest GRIT (GreenBook Research Industry Trends) report: “The real existential threat to our industry is neither automation nor competing methodologies: it’s the future of research participation.”

Also per GRIT, half of corporate researchers and market research firms think the quality of online sample will get worse in the coming years. Data collection and sample providers are more optimistic; the majority of them think quality will get better, improved by technology. Surprisingly, technology, or the lack thereof, is also the prime culprit when sample gets worse: from bots, to survey design, to mobile-enabled surveys, all of these are driving sample quality down.

Is there a light at the end of the tunnel for this engagement crisis in Market Research? Well, here at eCGlobal, technology is helping us increase data quality and response rates, and not just through automation or fraud detection. Social and mobile technologies are helping us get closer to consumers, providing engaging user experiences and giving them a greater purpose for participating in market research, while at the same time giving us access to new data sources available on their mobile devices, social networks, websites, and apps.

At the end of the day, it’s not just about technology, but about changing the traditional market research mindset in which consumers (“respondents”) are treated as resources to be used and discarded once project goals are achieved. We knew from our inception that we had to give people enjoyable and engaging user experiences. This led us to build a social media platform that replicates how people naturally give feedback on a website, share experiences with followers on platforms such as Twitter, or have fun playing with apps and games on their mobile devices.

As on Facebook, we’ve created tools that empower users to generate more spontaneous content and to connect with others with similar interests and profiles through online communities, which we create or they can create themselves. Instead of traditional online forums, we have given them a personalized news feed based on their preferences and the data they put into the platform. We’ve also turned to games, integrating gamification elements and dynamics into the platform and making fun part of the general user experience. This combination of social interaction, fun, and the greater purpose of helping others make better decisions has hugely increased the amount of user-generated content and conversation between community members, and has delivered superior data quality.

Despite the engagement crisis and current market issues, I have to agree with the GRIT analysts and see reason to be hopeful. I also believe the poor user experiences of market research are starting to contrast with the unique and engaging experiences created by innovators who’ve been unafraid to embrace change and drive innovation in the industry.

How about you? Do you see a light at the end of the tunnel for the engagement crisis in Market Research?

 
