Google Consumer Surveys: Friend or Foe?
By Katrina Lerman
Armed with more personal data than anyone this side of the FBI, a consumer survey offering from Google is enough to make any research provider sit up and pay attention. Since being rolled out in March of 2012, Google Consumer Surveys hasn’t quite put anyone out of business yet, but its visibility and reach are on the rise. Since this new kid on the block looks here to stay, we decided to take GCS out for a spin to evaluate its capabilities, the user experience, and the benefits it might provide to our clients, both in its own right and as a complement to our private online community offering.
We had to admit, the elevator pitch was pretty tempting: for 10 cents per response, users are intercepted trying to access premium content from one of Google’s publishing partners, and have the option to take a 1-2 question survey rather than paying to continue. A couple of days later, you get your data (carefully weighted to match the U.S. Census) via a nifty web interface, with the ability to slice and dice it based on everything Google thinks it knows about your respondents. That’s a fast, cheap, easy-to-use service with a potential customer base comprising … well, almost any company in the world. To us, this is one of the most exciting aspects of GCS: simply that it makes consumer research accessible to an entire universe of small businesses who previously had to rely on their instincts alone.
Our two-pronged test drive aimed to replicate two use cases we could see in our future: 1) using GCS to gather data from the general population or a broad segment (based on age, gender, or region, through Google’s inferred demographics); and 2) using GCS to augment or validate findings from a private online community, using a screening question to target a custom segment. We aimed to field a variety of question types to a variety of audiences, to get an idea of the range of services Google can provide.
What we found was that, like all research solutions, Google Consumer Surveys has its benefits and limitations—or tradeoffs, as we prefer to think of them.
Purity vs. Engagement
GCS promises to provide a clean sample, in contrast to what it calls the “biased panels” found elsewhere—the assumption being that you can’t trust the responses of those who engage in market research on a regular basis. In an effort to increase response rates and improve the user experience, no one respondent ever answers more than two questions. Your survey can be as long as you want, but each question will be asked of a separate sample.
The benefit here is that there is no risk of survey fatigue, straight-lining, order bias, or any of the other concerns that traditionally accompany large-scale quantitative research. The downside is that there is no way to conduct longitudinal or follow-up research with a given participant. It is a point-in-time transaction, with little personal investment on the part of the person answering the question. They have no context around the topic in question—which may make them unbiased, but can also make them somewhat indifferent.
For example, one of our retail clients had found significant interest among their community members in the idea of a pop-up store and was interested in what items might appeal in this environment. Even with a screener to mimic the income levels found in the community, it was difficult to learn anything from the Google sample, because—with no context—they simply were not engaged with the concept. This is an example of what we call “indifference bias.” At the very least, an additional screening or follow-up question could have helped us either further target our sample or learn more about the lack of interest.
Speed vs. Depth
For most of our target segments, Google was able to get 500 responses per question within two days, with an average response rate of 15%. (By comparison, community surveys run for as long as two weeks, but garner about two-thirds of all responses in the first two days, with a response rate of 36%.) What’s more, you can log into the GCS site and check on your results in real time, including detailed response metrics and all of the dynamic cross-tab features.
This kind of speed and breadth is incredibly valuable for gathering quick, superficial feedback. What it does not provide is depth. While GCS offers many question types (ranking is notably absent), there are character limits on both question and answer text, with a maximum of five answer options per question. They have added an open-end text question with a dynamic word-cloud output, but it is much better suited to straightforward questions, such as “What is your favorite color?”; detailed or multi-part queries tend to garner low-quality responses. And, again, it cannot be used as a follow-up to provide context around a previous answer.
Scale vs. Intimacy
Thanks to Google’s ever-expanding publishing network—Pandora, AdWeek, and the New York Daily News are among their 100+ partners—they are able to gather large, nationally representative samples with ease. This makes GCS ideally suited for certain types of testing, modeling, and forecasting, where sample size and composition are of utmost concern.
However, when you get your sample, you know very little about the people in it: just Google’s inferred demographics, plus any information gathered from a screening question. They are simply data points, nothing close to the three-dimensional human beings who will actually use your products and services. In an online community, by contrast, you learn a great deal about a smaller number of customers, through screener data, self-reported information, observation, and months, or even years, of reciprocal dialogue. This level of intimacy can ultimately provide the deep consumer insight necessary to complement and make sense of large-scale quantitative data.
Simplicity vs. Precision
By focusing on a clean, simple experience for the user (both buyer and respondent), Google has created a self-service tool that can be used by even the most novice of researchers. As a result, however, the GCS platform lacks many of the complexities found in other survey tools—instruments that, for better or worse, have become hallmarks of the quantitative research industry. There is no data piping, no skip logic, no branching, no inter-question cross-tabs, no matrix questions, no ranking, and no rating of multiple concepts or attributes at once.
Creating your ideal sample is not always possible, either. You cannot request custom quotas for their demographic buckets (national representation only), and the limit of a single screening question or target segment means you can’t triangulate on several factors—necessary for any company doing segmentation work. In our case, Google was not able to deliver a full sample for two of our custom segments: diabetics and parents of young children. To be fair, they guarantee targeting only for groups with at least a 5% incidence rate in the population, and both of these segments fall close to that mark. The bottom line is that GCS can reliably deliver samples based on inferred demographics, but if your custom target is too narrow, you may not reach your desired sample size.
The Role of Inferred Demographics
Google provides a very cool web interface that allows you to run dynamic cross-tabs (on age, gender, region, and income) at the click of a button and flags significant differences on a special Insights tab (e.g., there were differences in the way women from the Northeast and women from the South answered this question). However, these analyses are entirely driven by Google’s inferred demographics—algorithmically derived from your browser’s cookies, your Gmail account, etc.—so their value hinges on the accuracy of this data.
In our experience, the inferred demographics—whether used to target a demographic or to cut the results—were generally “good enough,” but we did see some evidence that they were not always accurate, based on contradictions with self-reported data from our screening questions. Additionally, when we asked our community members to check out their inferred demographics (google.com/ads/preferences) and report back, about 40% said that Google had no information on them (these people are excluded from GCS’ data cuts). Those with data reported that Google was most accurate at inferring their gender and interests, and least accurate at predicting their age. For some users, this level of accuracy will be acceptable, but those particularly interested in segmentation may not feel confident they have hit their target.
The Verdict: We Can All Just Get Along
Google itself addresses many of these topics in two whitepapers available on its website—one conducted by its own team and one in partnership with Pew Research—which primarily aim to prove the accuracy of its methodology by comparing results obtained from Google Consumer Surveys to other on- and offline sources. And GCS recently got some outside validation when uber-pollster Nate Silver declared that it was the most accurate online poll leading up to the 2012 elections.
One year after its initial release, GCS is an ever-evolving product. Google continues to roll out new capabilities in both question design and analysis, and to explore new ways to provide value to researchers and publishing partners alike. And it recently launched two new services aimed squarely at the MR crowd: shopper satisfaction benchmarking, courtesy of Harris Worldwide, as an add-on service for clients who want to run their own tracking studies; and “think with Google,” its foray into the syndicated research space, which provides topical and industry trend reports using GCS data.
By connecting researchers and respondents in the most transactional way possible—one question at a time—Google has developed an incredibly appealing and effective quantitative tool, backed by one of the most powerful data engines in the world. It’s easy to let your imagination run wild thinking about the potential of linking GCS with Google’s many other consumer platforms.
Word from inside Google is that they dream, not of market research domination, but of a mecca for advertising partners, seamlessly linking their campaign metrics to survey data measuring ad effectiveness. Yet, as powerful as it would be to unify these data streams in one place, it would still not give you the context behind the numbers; it can’t tell you why a campaign succeeds or fails. For this reason, no survey tool will ever eliminate the need for humanistic methods that provide deep insight into consumer behavior.
Ultimately, like most other research solutions, Google Consumer Surveys is ideal for some use cases and not for others. We encourage other researchers to give it a try and think about how it could complement, rather than compete with, their own products and services. And to GCS, we’d just like to say: welcome to the neighborhood.