
Do Respondents Even Understand Our Surveys?

Language is an imperfect method for communication. How often does the receiver of a message truly understand it exactly as the sender intended?


By Allan Fromen

I’ve always been fascinated by how people perceive the world around them. Even prior to my Psychology degrees, I’ve often thought about how we communicate with each other, and have been particularly interested in understanding how our words and actions can be misconstrued.

Like many teenagers, I waited tables to make extra cash. My friend and fellow waiter once said to me, “I hate waitering so much.” It immediately dawned on me that there were (at least) two meanings to his statement. He could have meant that he really disliked waitering, with “so much” describing the intensity of his dislike (as in “I hate waitering with every fiber of my being”). Conversely, he could have meant that he disliked waitering often: waitering once in a while was fine, but doing it every night was not (“so much” in this case meaning “so often”).

I later learned that this is called a linguistic ambiguity, and refers to phrases that can be understood in more than one way. Consider the following examples:

  • They are hunting dogs
  • I left her behind for you
  • The police shot the rioters with guns
  • I saw the man with binoculars

These statements are all examples of phrases that we speak and write in our everyday life, but which have more than one meaning.

It turns out that language, despite our reliance on it, is an imperfect method for communication. How often does the receiver of a message truly understand it exactly as the sender intended? Does it even matter?

It turns out, it matters a great deal. In the best-seller Superforecasting, the authors introduce us to Sherman Kent, who worked in the intelligence department that eventually became the Central Intelligence Agency (CIA). After Yugoslavia broke from the Soviet Union, Kent’s team issued the following analysis: “Although it is impossible to determine which course of action the Kremlin is likely to adopt, we believe that the extent of [Eastern European] military and propaganda preparations indicates that an attack on Yugoslavia in 1951 should be considered a serious possibility” (emphasis my own).

A State Department official later asked Kent to translate the statement into odds of an attack. Everyone on the team had agreed to use the phrase “serious possibility.” But when Kent went back and asked them to convert the agreed-to phrase into actual odds, one analyst stated 80/20 in favor of an attack, and another stated the exact opposite, 20/80. The other analysts were scattered between the extremes, with no apparent consensus. As the authors write, “A phrase that looked informative was so vague as to be almost useless.”
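The Kent anecdote can be made concrete with a toy calculation. The 0.80 and 0.20 figures come from the story; the other analysts’ numbers are invented fillers, since the source only says they were scattered between the extremes:

```python
# Toy illustration: one phrase ("serious possibility"), many hidden odds.
# 0.80 and 0.20 are from the anecdote; the rest are assumed fillers.
import statistics

odds = [0.80, 0.20, 0.65, 0.40, 0.55, 0.30]  # each analyst's probability

mean = statistics.mean(odds)
spread = max(odds) - min(odds)
print(f"mean estimate: {mean:.2f}")    # a seemingly precise average...
print(f"spread:        {spread:.2f}")  # ...masking a 60-point disagreement
```

Any single summary number reported up the chain would have hidden the fact that the team did not actually agree.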

In market research, it is a best practice to have labeled anchors for every response option. But do we really know how respondents interpret Very Satisfied, or how it differs from Satisfied? What odds would a respondent actually attach to Very Likely on a question that seeks to measure purchase intent? When respondents provide ratings on the Likert-type scales we commonly use in market research, they probably interpret these anchors in myriad ways, with little agreement. Much like the intelligence analysts described above, our respondents bring their own histories and biases, resulting in multiple interpretations of the same scale.

Some research on research addressing this topic would be a great start, and a significant contribution to the market research community. In the meantime, I think we all need to understand the limits of our individual research efforts, and constantly seek to draw conclusions from multiple data sources. By integrating various sources of data, we reduce the biases inherent from any one study, which strengthens our insights and conclusions. I’ll be talking about examining multiple data points, which I refer to as triangulation, at the forthcoming IIeX conference.
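The case for triangulation can be sketched with a toy simulation (all numbers hypothetical): if each study measures the same underlying quantity with its own roughly independent, zero-mean bias, then averaging several studies shrinks the typical error relative to relying on any one of them.

```python
# Minimal sketch, assuming independent study-level biases with sd 0.10.
import random
import statistics

random.seed(0)
TRUE = 0.50  # the "real" metric we are trying to measure

def study():
    # One study: the true value plus that study's own bias/noise.
    return TRUE + random.gauss(0, 0.10)

# Compare the error of single studies vs. averages of 5 studies.
single_errs = [abs(study() - TRUE) for _ in range(2000)]
triang_errs = [abs(statistics.mean(study() for _ in range(5)) - TRUE)
               for _ in range(2000)]

print(f"typical error, one study:      {statistics.mean(single_errs):.3f}")
print(f"typical error, five combined:  {statistics.mean(triang_errs):.3f}")
```

The caveat baked into the assumption matters: averaging only helps with biases that differ across studies. A bias shared by every source (say, the same flawed scale wording) survives triangulation.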

Side note:

While there are many great books about how we think and are prone to cognitive biases – especially in the behavioral economics genre – I highly recommend Superforecasting, as it deals with what we market researchers try to do every day. I hope you enjoy it.


One Response to “Do Respondents Even Understand Our Surveys?”

  1. chris robinson says:

    May 22nd, 2016 at 9:22 pm

In fact, there is a lot of academic work on respondent usage of Likert scales. The evidence seems to be very clear that they both understand them and know how to use them. In one famous study, the scale used for an annual survey was incorrectly selected and almost meaningless for the topic. Respondents worked this out but still responded using the scale magnitudes; the validation was that survey results were in line with previous studies. Respondents may not be as simple as we think. At least they don’t assume unit values between scale points, which is why they use scales to reflect their own cultures, with yea-saying cultures less likely to criticize than others. If there is any criticism, it should be leveled at market researchers and statisticians who assume that the space between scale levels is a uniform “one” on every scale. Practice has told us that this is not the case.
