Research Technology (ResTech)

July 16, 2013

Mobile Research Quality: Absolute vs. Relative

Defining mobile research quality, in absolute and relative contexts.

by Scott Weinberg

I’ve found myself in an intriguing position, having both bought and sold mobile research studies, as a client-side broker and as a supplier. These are interesting times, no? I look around and see a buffet of webinars, whitepapers, and similar musings, mostly by authors who have never once participated in a mobile research study themselves. The occasional piece of research-on-research (RoR) pops up, and of course the endless procession of ubiquity and adoption metrics. What I see little of is frank discussion of mobile research ‘quality.’ That’s a broad term, so let’s define it.

Defining Mobile Quality

In this post I’m referring to mobile research, not mobile surveys. My working definition of quality here is how much we can ‘trust’ mobile research results. I view the topic in two contexts: absolute (i.e., as a new methodology on its own terms) and relative (i.e., compared to other quant/qual fieldwork). Some of this overlaps with security issues, which I’ll touch on as well.

In the Absolute

When I think of mobile research quality in absolute terms, it’s hard not to lapse into relativistic comparisons, but I’ll table those for now. Focusing on this as a new methodology, we all know a few things: it’s a recent entrant into our world, the devices are seemingly everywhere, and people have them nearby at all times. As tempting as it is to give mobile a blanket endorsement as ‘automatically’ having quality ‘because these things are so common,’ that would be unwise. I’ve participated in quite a few of these studies myself (I have 11 research apps running on my G4); my guess would be over 50, though I’m not sure of the exact number. And yes, the usual design issues are in effect: test it so it’s not buggy or looping, shorter is better, and so on. Most of the studies I’ve participated in are actually quite thoughtful about the respondent experience. Mobile panelists are precious, and the ease with which one can deliver a one-star savaging in the app stores is on suppliers’ minds.

Survey design, UX, and the rest aside, what is the key issue regarding mobile research quality? It is this: I’m standing in the (insert_name_here) aisle at Target, scanning the barcode of the correct or incorrect product with instant validation, photographing my receipt, or perhaps using the product at home. I have provided evidence that I did indeed purchase said product, or stood in the aisle examining the signage…etc. Moreover, an implementation of geofencing or geovalidation confirms I’m actually inside the store during the study and/or when I hit the submit button. Am I sharing the ‘right’ answers about what I think of the product, signage, and so on? There’s no way to ever know that from any respondent, but why wouldn’t I share the truth? There are no social desirability effects, and my incentive arrives whether I’m yea or nay on the product. The same goes for OOH ad-recall and awareness studies.
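To make the geovalidation idea concrete, here is a minimal sketch of the kind of check a mobile research app might run before accepting a submission: compare the device’s reported GPS fix against the store’s coordinates and flag anything outside the geofence. The coordinates, radius, accuracy padding, and function names below are all hypothetical illustrations of the general technique, not any particular supplier’s implementation.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))


def inside_geofence(device_lat, device_lon, store_lat, store_lon,
                    radius_m=150, gps_accuracy_m=0):
    """Accept the submission only if the reported fix, padded by its stated
    GPS accuracy, falls within the store's geofence radius."""
    distance = haversine_m(device_lat, device_lon, store_lat, store_lon)
    return distance - gps_accuracy_m <= radius_m


if __name__ == "__main__":
    # Hypothetical example: a respondent submitting from inside a store.
    store = (44.8549, -93.2422)       # illustrative store coordinates
    device_fix = (44.8551, -93.2419)  # illustrative device GPS reading
    print(inside_geofence(*device_fix, *store, radius_m=150, gps_accuracy_m=25))
```

In practice the same check can run both at study launch and again when the submit button is pressed, which is roughly what the “during the study and/or when I hit the submit button” point above describes.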

In the Relative

Let’s exit the vacuum and compare this methodology to traditional quant techniques. Having spent many (too many) years inside online panel suppliers, I can attest to how heavily primary market research relies on these panels. The sheer volume of panel-sourced survey completes is staggering.

Frankly, I think comparing mobile research quality to online panel quality is laughable. There is no comparison; this is a slam dunk in favor of mobile. Maybe you think I’m being glib…but if you’d seen what I’ve seen, you would be nodding in agreement. With the exception of invite-only panels, the amount of fraud in the online panel space is greater than you’ve heard or read about. I won’t deep-dive here, as it’s off topic, but it goes beyond satisficing, identity corroboration, recruiting sources, and the other supplier sound bites used to reduce hesitation when buying thousands of targeted completes for $2.35.

Yes, these apps are in the app stores, ergo anyone with a compatible device can install (and rate) them. Some do allow (or require) custom/private recruiting for ethnography, qual, and B2B, but the bulk are freely available to the mobilized masses. Isn’t this then like online panels, in that anyone can sign up? Yes, pretty close. So what’s the difference? One difference is that organized (yes, organized) fraud hasn’t infiltrated this space yet. So there’s that. Another is that because this space is app-powered, the security architecture is entirely different, and stronger relative to the Swiss-cheese firewalls of online panels. Yet another difference is the effort required to secure an incentive; specifically, the requirement to be in a physical location helps.

Effort = Good

These studies require effort. You’re not sitting on your couch mouse-clicking dots on a screen. Effort makes respondents invest in the experience with their time and candor. There is also multimedia verification. For example, I’ve listened to open-ended (OE) audio comments, and I’d encourage you to do the same if you need convincing that these studies are somehow not ‘real’ (I can play some for you). Once you hear the tone, the frustration, the interest, the happiness, your doubts about the realness of these studies will dry up. Incidentally, once you’ve heard OE audio, your definition of the phrase ‘Voice of the Customer’ will get quite a lot more stringent.

I’ll wrap this up and save more for future posts. Thank you for reading; I hope I’ve given you food for thought, and that we can enjoy watching this fascinating technology unfold together.


Tags: data quality, market research suppliers, mobile surveys, online research, survey design

