The 2nd Edition of the GRIT Consumer Participation in Research (CPR) Report Is Available

Respondents are the lifeblood of market research. Whether it’s qual or quant, surveys or communities, neuromarketing or ‘Big Data’, and everything in between, knowing how to reach, engage, and understand people is the very bedrock of insights.

In our interconnected world, achieving that goal is in some ways easier and in many more ways harder. Until now, little data have existed to help researchers answer a basic question: how do we get consumers to engage with us, and what do those folks look like?

Now in its second year, the GRIT Consumer Participation in Research (CPR) report is our effort to answer the who, what, when, where, and why of global consumer participation.

VIEW GRIT CPR REPORT ONLINE »

The report includes the most up-to-date data in the world on the profiles of fresh vs. frequent responders. It answers questions such as:

  • Are “Frequent Responders” categorically different from “Fresh Responders”, and, if so, in what ways? Does this matter? Why?
  • Is the difference significant enough that it should be of concern, or be of strategic benefit, to different stakeholders in the research process?
  • Do the differences necessitate a form of ‘data triangulation’ whereby customers need to receive a blend of respondents, some “fresh”, and some less so? Or should all respondents be “fresh”? Why?
  • Is there a confounding factor at play? If a majority of all online responders share a dominant characteristic we do not know about, such as intellectual curiosity (no matter how frequently they answer surveys), how much weight should we assign to the “freshness” findings shown here?
  • The people who were intercepted are likely somewhat biased toward heavier Web users. Since one can make this same observation of all Web-based respondent data capture modalities, does this matter? Why?
  • What implications of these findings need to be addressed as an industry, specifically for those who make data-based decisions?

We hope this report will become the go-to resource that researchers globally can use to validate and benchmark their own research. Enjoy!


8 Responses to “The 2nd Edition of the GRIT Consumer Participation in Research (CPR) Report Is Available”

  1. Adriana Rocha says:

    April 21st, 2015 at 6:16 pm

    Hi Lenny and RIWI team,

    Thanks for the newest CPR report. Let me share some feedback, concerns, and a few questions with you:

    1) Concerning the concept of “Frequent Responders” as you define it in this study: you consider a “Frequent Respondent” to be anyone who participated in at least ONE survey in the last 30 days. That includes a person who may have responded to just one random survey (an online website evaluation, a telephone satisfaction survey, or a mall-intercept interview, for example) as well as members of online panels who may be taking several surveys a day. If the main goal here is to understand the different profiles and behavior of “professional survey takers” vs. “non survey takers”, why didn’t you just ask directly “Do you participate in online research panels?” and “How many surveys do you respond to per month, on average?” I have a hard time understanding the “Frequent Respondent” segmentation criteria used here.

    2) 30% of the sample falls into age ranges under 18 or over 55. Those age groups do not represent the majority of the adult population in online research panels, so it is odd that the highest penetration of “Frequent Respondents” is found precisely among 14-17 year olds (34%) and those 65+ (33%). Any thoughts?

    3) It would be good to see the age and gender distribution of “Frequent vs. Non-Frequent Survey Takers” by country. Even in emerging markets, we no longer see much discrepancy between men and women in the online population. Almost 70% male seems like a lot to me, so I wonder what biases might be introduced by this sampling methodology?

    4) It seems the questionnaire had 20 questions, but the report only states the results of 8-10 of them. It would be good to know what the other questions were, along with the general completion rates. You mentioned that “Completion rates were much lower”, so I think it is very important to understand the maximum questionnaire length you would recommend for this random-intercept sampling methodology. Of course, any type of re-contact or validation check would be a limitation too.

    5) Can you share the % of “Fresh vs. Frequent Respondents” who completed the survey from mobile devices (tablets + smartphones) vs. desktop, by country and age group?

    6) Figures 11 and 12 state N = 347,475. What is that N, given that the sample size is supposed to be 55K respondents?

    I look forward to your responses. Thank you!

  2. Grant Miller says:

    April 22nd, 2015 at 2:00 pm

    Hi Adriana,

    Thank you very much for the questions! I’ll answer them in separate comments so they are easier to follow…

    1) Concerning the concept of “Frequent Responders” as you define it in this study: you consider a “Frequent Respondent” to be anyone who participated in at least ONE survey in the last 30 days. That includes a person who may have responded to just one random survey (an online website evaluation, a telephone satisfaction survey, or a mall-intercept interview, for example) as well as members of online panels who may be taking several surveys a day. If the main goal here is to understand the different profiles and behavior of “professional survey takers” vs. “non survey takers”, why didn’t you just ask directly “Do you participate in online research panels?” and “How many surveys do you respond to per month, on average?” I have a hard time understanding the “Frequent Respondent” segmentation criteria used here.

    This is a great question and one we have debated internally. We don’t presuppose that every person who has done a survey in the last month is on a panel, and we don’t want to make this just about paid panel respondents vs. non-panel respondents. We also understand that even among panelists there is huge variance in participation rates, and that examining the conditioning effects on “frequent/professional” panelists vs. occasional panelist responders has been done in the past. The thought behind our question was simple: how do we examine, across the broader potential respondent population, whether there are differences between those who participate frequently in surveys and those who don’t?

    The segmentation criteria can be refined in the next study we do, and we are open to suggestions. However, I believe it is a rational conclusion that the length of time between the last completed survey and the moment we asked the question is a strong indicator of survey participation habits.
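
    To make the criterion concrete, here is a minimal sketch of that recency-based split. The names are illustrative placeholders rather than production code; only the 30-day cutoff comes from the report.

        from datetime import date
        from typing import Optional

        def classify_respondent(last_survey: Optional[date], interview_date: date,
                                cutoff_days: int = 30) -> str:
            """Label a respondent "Frequent" if their last completed survey
            falls within cutoff_days of the interview, else "Fresh"
            (the report's 30-day definition)."""
            if last_survey is None:  # never completed a survey at all
                return "Fresh"
            days_since = (interview_date - last_survey).days
            return "Frequent" if days_since <= cutoff_days else "Fresh"

        # A respondent last surveyed 12 days before the interview is "Frequent".
        print(classify_respondent(date(2015, 4, 10), date(2015, 4, 22)))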

  3. Grant Miller says:

    April 22nd, 2015 at 2:01 pm

    2) 30% of the sample falls into age ranges under 18 or over 55. Those age groups do not represent the majority of the adult population in online research panels, so it is odd that the highest penetration of “Frequent Respondents” is found precisely among 14-17 year olds (34%) and those 65+ (33%). Any thoughts?

    Yes, our findings seem to indicate that the actual frequency of survey participation in the general population is different from what you would expect from the inventory of available research studies conducted using traditional panels, and for which panels are recruited and designed.

    Online surveys tend to be thought of as panels, river samples, or communities being asked to participate in a structured survey written by a market research professional. I would suggest that social media and DIY polling technology have changed that. From one-question polls on popular websites to simple voting surveys you can send your friends, the definition of what constitutes a survey is changing.

    We can all agree that the vast majority of 18-64 year olds in the general population are not on market research panels. When we have helped build panels using RDIT, we have consistently found that fewer than 1 in 2,000 people who are randomly invited to join a panel will accept the offer. In contrast, we generally find that up to 10% of people globally are willing to answer an 8-12 question survey. While the 1-in-2,000 ratio does not necessarily represent the percentage of the general population who are on panels, it does indicate that people are roughly 200 times more willing to participate in research than to join a panel (10% ÷ 0.05% = 200). This is intuitive: go to a street corner and ask people to answer 5 questions, then the next day ask people to hand over a lot of personal information in order to join a panel. I’m sure you will see a similar ratio.

  4. Grant Miller says:

    April 22nd, 2015 at 2:02 pm

    3) It would be good to see the age and gender distribution of “Frequent vs. Non-Frequent Survey Takers” by country. Even in emerging markets, we no longer see much discrepancy between men and women in the online population. Almost 70% male seems like a lot to me, so I wonder what biases might be introduced by this sampling methodology?

    Another great question. There is a bias associated with RDIT: although it is a random sample of people using the Internet in any given country, it is affected by the volume of Internet usage within given age groups and genders. As you can see from our basic age demographics, Internet usage by volume is highest among 14-29 year olds. Numerous Internet usage studies support this, and with a quick look around your city I imagine you would see young people glued to their phones.

    And unfortunately, there remains a significant gender discrepancy in both access to and usage of the Internet, particularly in developing countries (some good stats here: http://dalberg.com/blog/?p=1555), and our work with the World Bank also confirms this (see http://www.openinggovernment.com). Panels address this by using quotas, whereas RDIT allows for natural fall-out; the data can then be weighted to census if desired.
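
    As a rough sketch of that weighting step, each gender cell gets a weight equal to its census share divided by its observed sample share. Both sets of shares below are invented purely for illustration, not real figures.

        # Post-stratification sketch: weight natural fall-out data to census.
        sample_share = {"male": 0.68, "female": 0.32}   # observed in intercept data
        census_share = {"male": 0.49, "female": 0.51}   # hypothetical census targets

        weights = {g: census_share[g] / sample_share[g] for g in sample_share}
        print(weights)  # males down-weighted (~0.72), females up-weighted (~1.59)

        # A weighted estimate multiplies each respondent's answer by the
        # weight for their cell before averaging.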

    Here is a breakdown of Frequent vs. Non-Frequent Survey Takers for a few countries that we included in a recent report… I hope this suffices.

    USA: Males 67% fresh, Females 65% fresh
    India: Males 74% fresh, Females 71% fresh
    China: Males 82% fresh, Females 63% fresh
    Brazil: Males 73% fresh, Females 72% fresh
    UK: Males 64% fresh, Females 66% fresh
    Mexico: Males 74% fresh, Females 71% fresh
    Indonesia: Males 76% fresh, Females 76% fresh
    Nigeria: Males 70% fresh, Females 70% fresh

  5. Grant Miller says:

    April 22nd, 2015 at 2:03 pm

    4) It seems the questionnaire had 20 questions, but the report only states the results of 8-10 of them. It would be good to know what the other questions were, along with the general completion rates. You mentioned that “Completion rates were much lower”, so I think it is very important to understand the maximum questionnaire length you would recommend for this random-intercept sampling methodology. Of course, any type of re-contact or validation check would be a limitation too.

    There were 12 questions we ran on behalf of our clients, and that data is not for public release, although our partners may choose to release it at a later date.

    The maximum suggested questionnaire length is a hard one to answer, as we are doing a lot of work to determine how best to use RDIT. RIWI just finished a 160-question survey with our partners at Environics. We modularized and randomized one of the longest surveys in our market, one that had been running for 20 years. We have had the good fortune to be selected to present a paper at ESOMAR Congress on the results of this study, so I can’t go into great detail, except to say that it dispels the notion that RDIT is merely a nano/micro survey platform; it is, rather, a modular survey platform.

    In this study, we anchored a few questions and then randomized the order in which the rest appeared. People who chose to answer only 5 questions were still included in the overall data set. Our aim was to get 1,000 completed responses to every question within every country.
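
    Here is a minimal sketch of that anchor-then-randomize ordering. The question IDs are placeholders; the actual instrument’s modules are not public.

        import random

        def build_question_order(anchored, pool, rng=random):
            """Per-respondent question order: anchored questions first, in a
            fixed position, followed by the remaining pool in random order."""
            shuffled = list(pool)   # copy so the master pool is left untouched
            rng.shuffle(shuffled)
            return list(anchored) + shuffled

        # Placeholder IDs: Q1/Q2 anchored (e.g., age and gender), rest randomized.
        order = build_question_order(["Q1", "Q2"], [f"Q{i}" for i in range(3, 21)])
        print(order[:5])   # always starts with Q1, Q2, then a random draw

    Because partial completes are kept, each question accumulates answers independently until its per-country target is met.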

    You are quite right to assert that there are limitations to using RDIT, which we are very open about. Re-contacts are not possible unless we specifically ask for email addresses, which we very seldom do. Targeting can be done at the country level, and increasingly at the city level, but we are hardly an efficient source if you are looking for low-incidence members of the population. Long questionnaires that rely on the same person answering all the questions in a linear format are not efficient on RDIT either. RDIT is not a technology that is going to replace legacy data collection methods. RDIT can, however, reach populations that are not being reached by legacy methods. It works on every device connected to the Internet, in every country in the world. Our partners see it as a complement to their current methodologies.

  6. Grant Miller says:

    April 22nd, 2015 at 2:04 pm

    5) Can you share the % of “Fresh vs. Frequent Respondents” who completed the survey from mobile devices (tablets + smartphones) vs. desktop, by country and age group?

    Again, for the sake of time, I hope you are OK with me sharing just a few examples:

    USA: Mobile: 33% Desktop: 67%
    India: Mobile: 41% Desktop: 59%
    China: Mobile: 30% Desktop: 70%
    Brazil: Mobile: 25% Desktop: 75%
    UK: Mobile: 22% Desktop: 78%
    Mexico: Mobile: 35% Desktop: 65%
    Indonesia: Mobile: 40% Desktop: 60%
    Nigeria: Mobile: 61% Desktop: 39%

    6) Figures 11 and 12 state N = 347,475. What is that N, given that the sample size is supposed to be 55K respondents?

    N = 347,475 is the total number of respondents to the age/gender question that was anchored to the start of every survey (i.e., the total number of respondents globally). In the earlier graphs, N = 50,313 was the number of respondents who answered the “freshness” question.

    I hope this answers your great questions!

  7. Adriana Rocha says:

    April 22nd, 2015 at 8:40 pm

    Grant, thank you for your responses! Yes, they answered most of my questions! 🙂
    It is interesting to see that, of the 10% who agreed to take the survey, the majority (53%) had participated in surveys in the past. However, as an industry, I believe we still have a big challenge in reaching the other 90% of the population, who are probably not willing to take surveys at all…. Thanks again for the feedback! 🙂

  8. John Sukup says:

    April 29th, 2015 at 11:10 am

    I had a follow-up question similar to Adriana’s #1: after making the segment split between “fresh” and “frequent” responders in the study, it is clear that the majority of these individuals fall into the “fresh” category (72%, to be precise). Was any adjustment made before comparing these two groups to account for the fact that “fresh” is overrepresented relative to “frequent”? You are naturally going to see differences in responses between two groups when one has a sample more than twice the size of the other. I don’t necessarily agree that any sort of statistical testing should have been used to call out “significant” differences between these two groups… it’s probably very misleading (as it usually is when used frivolously in MR studies).

    Do you have any of the study’s underlying descriptive stats available to justify some of these conclusions?
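
For readers who want to sanity-check such a comparison themselves, here is a minimal two-proportion z-test sketch. The counts are invented for illustration, and this is a generic textbook formula, not the report’s methodology; note that unequal group sizes enter only through the per-group n terms in the standard error.

    from math import sqrt

    def two_prop_z(x1, n1, x2, n2):
        """Two-proportion z-statistic with a pooled standard error. A smaller
        group widens the standard error rather than biasing the comparison."""
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    # Invented counts: 40% of "fresh" (n=36,000) vs. 44% of "frequent" (n=14,000).
    print(round(two_prop_z(14400, 36000, 6160, 14000), 2))  # -8.16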
