Who is Doing Cocaine: Estimating Bad Behavior

The Item Count method can be used to reduce the effects of social desirability distortion.

By Michael Lieberman

Marketing researchers face a number of measurement challenges when interviewing their audiences. Certain factors limit subjects’ ability to provide accurate or reasonable answers to all questions: they may not be informed, may not remember, or may be unable to articulate certain types of responses. And even if respondents are able to answer a particular question, they may be unwilling to disclose, at least accurately, sensitive information because this may cause embarrassment or threaten their prestige or self-image.

Research projects that investigate socially risky behavior (for example, “Have you used illegal drugs in the past week?”) or, conversely, socially expected behavior (e.g., voting, religious attendance, charitable giving) are subject to what are referred to as social desirability pressures. In our experience, these sorts of questions will almost certainly yield inaccurate results if they are not administered correctly.

Social desirability distortion is the tendency of respondents to answer questions in a more socially desirable direction than they would if the survey were administered anonymously. A form of measurement error, it is often referred to as bias, socially desirable responding, or response distortion.

Over the past few years, we have successfully employed a questionnaire technique called the Item Count method to reduce the effects of social desirability distortion. With this technique, the respondent reports only how many of the behaviors on a list he or she has engaged in, never which ones. By comparing the average number of behaviors reported by a group whose list includes the stigmatized behavior with the average reported by a group whose list excludes it, one can estimate the rate of the sensitive behavior in the population.
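
As a rough illustration of that arithmetic, here is a minimal Python sketch of the estimator. The function name and the sample responses are hypothetical, not data from the study.

    # Minimal sketch of the item-count estimator: the prevalence of the
    # sensitive behavior is the difference between the two group means.
    from statistics import mean

    def item_count_estimate(control_counts, treatment_counts):
        # control_counts:   items endorsed by respondents who saw only the
        #                   non-stigmatizing list
        # treatment_counts: items endorsed by respondents who saw the same
        #                   list plus the sensitive item
        return mean(treatment_counts) - mean(control_counts)

    # Hypothetical responses, purely for illustration
    group_1 = [2, 3, 1, 3, 2]
    group_2 = [3, 3, 2, 4, 2]
    print(f"Estimated prevalence: {item_count_estimate(group_1, group_2):.2f}")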

Item Count Method

The item count method allows survey respondents to remain anonymous when reporting a sensitive behavior. This is accomplished by embedding the sensitive behavior of interest in a list of relatively non-stigmatizing behaviors. The method can also be used to estimate socially expected behaviors, such as voting.

Example

In the example below, the researcher is attempting to identify the percentage of college students who tried or used cocaine in the past.

One of the groups received a set of five items. The participants were told that the questionnaire was designed to encourage honest responding, and were asked not to indicate whether any particular item was true for them; rather, they were asked to report how many of the five items were true.

How many of the following have you done in the past six months?

[List of five relatively non-stigmatizing risky behaviors]

Because they indicate only how many of the items are true, respondents never directly endorse any particular item. Someone who responded 3, for instance, was indicating that three of the five items were true for him or her.

Another group of respondents was given the same list, plus one additional behavior: the one we are interested in measuring.

How many of the following have you done in the past six months?

[The same five behaviors, plus the sensitive item: cocaine use]

Subtracting the average number of behaviors reported by the first group from the average number reported by the second group yields an estimate of the proportion of respondents who have engaged in the sensitive behavior.
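
To see why the subtraction works, the short simulation below generates responses for the two groups under an assumed true rate of cocaine use and assumed rates for the five innocuous items (all hypothetical values), and shows that the difference of the group means recovers the assumed rate.

    import random

    random.seed(1)

    TRUE_RATE = 0.35                         # assumed prevalence of the sensitive item
    ITEM_RATES = [0.6, 0.5, 0.4, 0.5, 0.3]   # assumed rates of the five innocuous items
    N_PER_GROUP = 5000

    def innocuous_count():
        # Number of non-stigmatizing behaviors one simulated respondent reports
        return sum(random.random() < p for p in ITEM_RATES)

    # Group 1 sees only the five innocuous items
    group_1 = [innocuous_count() for _ in range(N_PER_GROUP)]
    # Group 2 sees the same five items plus the sensitive one
    group_2 = [innocuous_count() + (random.random() < TRUE_RATE)
               for _ in range(N_PER_GROUP)]

    estimate = sum(group_2) / N_PER_GROUP - sum(group_1) / N_PER_GROUP
    print(f"Assumed rate: {TRUE_RATE:.2f}   Estimated rate: {estimate:.2f}")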

There are a few guidelines when designing an item-count method experiment.

  • The behaviors on the item-count list should be chosen so that few respondents have performed all of them or none of them. A respondent who reports every item (or none of them) effectively reveals his or her answer to the sensitive item, negating the anonymity.
  • Behaviors should be within the same ‘category’. For example, if one is investigating risky sexual behavior, then other risky behaviors should be included on the list. If the goal is to estimate voter turnout, other civic activities should be included.
  • A separate sample for direct reporting may be included for comparison.
  • Larger samples enhance the stability and accuracy of the estimate (see the sketch after this list).
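
On the last point, the sketch below shows how the precision of the item-count estimate improves with sample size, using the usual standard error of a difference between two independent group means. The per-group standard deviation of 1.2 is an assumed value chosen for illustration, not a figure from the study.

    from math import sqrt

    ASSUMED_SD = 1.2   # hypothetical standard deviation of the reported counts

    for n_per_group in (100, 400, 1600):
        # Standard error of the difference between two independent group means
        se = sqrt(ASSUMED_SD ** 2 / n_per_group + ASSUMED_SD ** 2 / n_per_group)
        print(f"n = {n_per_group:>4} per group   standard error = {se:.3f}")

With 100 respondents per group the standard error is roughly 0.17, which is large relative to an estimate in the 0.3 to 0.4 range; quadrupling the sample size cuts it in half.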

Below are preliminary results, in total and by gender.

The base rate estimate for the behavior of interest is found by subtracting the two means: mean(Group 2) - mean(Group 1). In this example, Group 1 reported an average of 2.35 behaviors and Group 2 an average of 2.72 behaviors. The base rate estimate for cocaine use in this population is therefore 2.72 - 2.35 = 0.37; roughly 37% of our college student sample has used cocaine at one time or another.
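
For completeness, the same subtraction in code, using only the group means reported above:

    # Group means reported above; the subtraction reproduces the 0.37 estimate.
    group_1_mean = 2.35   # list without the cocaine item
    group_2_mean = 2.72   # list including the cocaine item

    estimate = group_2_mean - group_1_mean
    print(f"Estimated rate of cocaine use: {estimate:.2f} ({estimate:.0%})")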

Broken out by gender, men show slightly higher mean counts of risky behavior. This is not surprising, given that behaviors such as riding a bike without a helmet or walking through a dangerous neighborhood are viewed as less risky by male students than by female students. Men also had a slightly higher estimated rate of cocaine use.

Closing Thoughts

Reducing measurement error is an ongoing challenge for marketing researchers. Respondents are generally unwilling to respond, or to respond truthfully, to questions they consider inappropriate for the given context, that they do not see as serving a legitimate purpose, or that are sensitive, embarrassing, or threatening to their self-image. The item-count method can be an effective way of reducing the misreporting caused by the social desirability pressures associated with interviewer administration.


One Response to “Who is Doing Cocaine: Estimating Bad Behavior”

  1. Brian Ward says:

    March 15th, 2017 at 11:20 am

    Fascinating approach to this, and it seems to have great potential to address this very real challenge to survey research. Nice work.

    What is the minimum sample size you suggest per group for this type of test?

    Are there established “behavior sets” for types of risky behavior? If not, how do you create?

    Have you had the chance to validate any experimental findings against known population incidences? In other words, does this work? I hope it does.
