Agile Research And Agile Running Backs


By Jeffrey Henning 

Agile research is a buzzword that I happen to love, so whenever someone uses it wrong, it stings.

But just like a paper wasp isn’t made out of paper, agile research isn’t made out of agility. It’s a compound word, which means you can’t take agile at face value.

Agile research is not just about being quick. It’s not about being lively, sharp, buoyant, spirited, or fast to hit the hole in the defensive line, either.

Agile research is a play on agile software development. Rather than write my own definition for that, though, let’s use one that was itself developed agilely:

Agile Software Development is a set of software development methods in which requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. It promotes adaptive planning, evolutionary development, early delivery, continuous improvement, and encourages rapid and flexible response to change.

Agile research has not yet been graced with its own crowd-sourced definition, so we’ll have to beg, borrow, and shoplift one:

Agile research is a type of market research in which the requirements and solutions evolve through collaboration between researcher and sponsor. It is not phased research but is iteratively open-ended, refining the research until all key questions are answered.

The agile research projects I’ve conducted have all addressed thorny issues. In one case, we iteratively explored an opportunity for a software startup in the sharing economy by profiling the user experience with services ranging from Airbnb to Uber and ZocDoc. The project ended up involving three phases as we homed in on a market opportunity. In another case, we helped a European company iteratively test several hundred words for use as product names in the United States. By coincidence, we ended up conducting three phases to produce the final list.

Ah, but this is just phased research, Jeffrey, you think to yourself. And you never forget a phase.

Ah, but it isn’t, gentle reader. Unlike traditional phased research, we don’t know in advance how many phases we will conduct. It’s open-ended. We will conduct as many phases as needed to iteratively answer the key questions. The rhythm is: revise the instrument, field it, analyze it, and decide if and how to iterate.

Which of course brings us to the million-dollar problem when selling agile research.

No one wants to buy a million-dollar solution. It’s hard to sell an unbounded offering with an expanding price yet to be determined. We’re not selling public works projects like decontaminating rivers after all.

So I had a crazy idea that I decided to test on myself as a proof of concept. “Dear sponsor,” I said to myself, “I will sell you 1,000 survey completes. We’ll parcel out 100 responses at a time, stop the fielding after every 100 responses, then decide how to iterate. Do we modify the questionnaire? Do we open the taps? Do we play taps?”
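
To make that rhythm concrete, here is a minimal sketch of the batched loop in Python. The questions, function names, and stopping rule are illustrative stand-ins, not an actual fielding system:

    import random

    TOTAL_COMPLETES = 1000
    BATCH_SIZE = 100

    def field_batch(instrument, n):
        # Stand-in for fielding: return n simulated answers to the current questions.
        return [{question: random.choice(["yes", "no"]) for question in instrument}
                for _ in range(n)]

    def key_questions_answered(responses):
        # Placeholder stopping rule; in practice, a judgment call made at each pause.
        return False

    def revise_instrument(instrument, responses):
        # Placeholder revision step, e.g. swap an open-end for a closed-ended grid.
        return instrument

    instrument = ["nfl_satisfaction_open_end", "superstition_index", "team", "demographics"]
    responses = []
    while len(responses) < TOTAL_COMPLETES:
        responses.extend(field_batch(instrument, BATCH_SIZE))  # field the next 100 completes
        if key_questions_answered(responses):                  # pause, analyze, decide
            break
        instrument = revise_instrument(instrument, responses)  # iterate before the next batch
    print(f"Collected {len(responses)} completes over {len(responses) // BATCH_SIZE} batches")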

Our team came up with a list of ideas, all of which lacked the most compelling element. A deadline. Then last Saturday morning it hit me like Earnest Byner. If we researched the Big Game, as my lawyers suggest I call it, we’d have to get it done. And the aspect that we decided to research was sports superstitions. (Also known as why you are not allowed at my house if the game is about to begin and why I am wearing a 15-year-old shirt.)

The challenge was that we didn’t know what we didn’t know. A quick review of the academic literature – done with that convenience sample to end all convenience samples (college students) – left me no wiser than when I’d begun. We wrote a survey instrument with a series of open-ended questions about how satisfied people were with the NFL, which team people were rooting for, and why, and so on. We also had a core set of closed-ended questions that we would keep constant for all 1,000 respondents – a superstition index we created about how paranoid, fearful, and afraid people are about taking certain actions while watching a game – as well as the team they rooted for and their demographics.

After the first 100 responses, we looked at the open-ended question about NFL satisfaction and wrote a closed-ended question to replace it (a grid, I confess). We weren’t getting enough on sports superstitions. People claimed not to have them. “I have absolutely none. Truth is I always feel if I do ‘any’ routine that alone might jinx it.” So, lifting a page from behavioral economics, we added a new open-ended question about other people’s superstitions.

Once we had collected 100 responses to the open-ended question about which team they would root for, I replaced it with a closed-ended, select-all-that-apply question. (To the woman who wrote “Because as a Cleveland Browns fan, I hate the Denver Broncos,” please know that recalling those games still makes my blood boil like the Cuyahoga River.)

We kept the other-people-have-superstitions-not-me question running for 200 responses because of gems like “I have a family member who makes his whole family, even his two dogs, wear Auburn football gear for two days before an important game (like the Iron Bowl).” You want predictable irrationality? We’ve got magical thinking, taboos, jinxes, numerology, and ovomancy. And that’s just me. Think of brand engagement and brand identification so strong that it warps people’s understanding of cause and effect and temporal mechanics. (Dude, you were watching the game on DVR. It didn’t matter whether or not you were holding your daughter. Clearly that only affects the game if you are watching live TV.) After 200 responses, we replaced that essay question with a select-all-that-apply question about what activities people do to help their team win, because the last thing I need is new rituals to try.

We then asked 100 people about the rules, because, boy, do people like to complain about the rules of professional football on Twitter. It turned out not to be a big deal to the average person, so we turned the question off. We added and removed a few other questions as well. Because we had questions. And respondents. And budget.

We were able to collect over 400 responses on the agilely developed questions we cared about, while generating 1,000 responses to our superstition index so that we could tell you with the confidence inspired by Bayesian averages that Buffalo Bills fans are the most superstitious fans, and Cleveland Browns fans (sigh) are the least. (Because God hates Cleveland.)
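
To give a feel for how a Bayesian average keeps a handful of enthusiastic respondents from vaulting a small fan base to the top of the rankings, here is a minimal sketch. The numbers are made up, and this is one common form of the calculation, not necessarily the exact one used here:

    def bayesian_average(scores, overall_mean, prior_weight=25):
        # Shrink a fan base's mean toward the overall mean; small groups get shrunk the most.
        return (prior_weight * overall_mean + sum(scores)) / (prior_weight + len(scores))

    # Hypothetical superstition-index scores (0-100), not the actual study data.
    fan_scores = {
        "Bills":  [72, 80, 65, 90, 77],        # only 5 respondents: pulled hard toward the mean
        "Browns": [40, 35, 50, 45, 38] * 20,   # 100 respondents: moved only slightly
    }
    overall_mean = 55
    for team, scores in fan_scores.items():
        print(team, round(bayesian_average(scores, overall_mean), 1))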

I call the whole approach a mindful survey: one survey, done contemplatively, with pauses, with iterations, to home in on the best questions. In a happier world, I’d have months to do research projects like I did back in 1988. Nothing could be finer.

But in a happier world I’d be taking my dad to see Otto Graham’s successor win the Super Bowl.

OK, now that stings.


5 responses to “Agile Research And Agile Running Backs”

  1. Jeffrey, that was a terrific post. And I’m not saying that just because I now know that you have totally jinxed the Broncos by (gasp) talking openly about superstitions before a game as if they were not God’s truth, and now my Carolina Panthers will certainly emerge victorious on Sunday.

    I’m saying that because you have defined the concept of agile research neatly, and with a nice selection of examples. But I wonder if you have been perhaps too narrow in your description. For example, you talk about instances where the research INSTRUMENT changes iteratively as the project proceeds. But in advertising research, we promote a design where the STIMULUS changes over the course of the research. So we start with our experimental stimulus (a draft advertisement), and solicit feedback on that from target respondents. Then after X number of responses, or X time in the field, we reassess, revise the ad, and go back with an “improved” version of the ad. In this manner, the ad changes, bit by bit, over the duration of the research, and we come out of the process at the end with an optimized product. It’s a sense-and-respond process that can work in other research environments (e.g., new product/concept testing), as well. Would you include that as agile research?

    If so, then maybe you can re-write your post, with the expanded definition of the concept. We can call it Agile Blogging, and declare it to be the precursor to Agile Reporting. :o)

    Again, thanks for the insights.

  2. Nice job, and my condolences re: the Browns. Of course, the Browns may not be succeeding because their fans are not sufficiently superstitious – correlation rules!

    My concern with the term “agile research” is that we often hear (at ESOMAR or IIEX, for example), a bastardization of the idea that one introduces a product with known problems and one incrementally researches those problems and fixes them. This is the world of technology today, and it’s why very few buy into new tech anymore – they wait for the bugs to be fixed – as compared to the old days when we instantly updated Windows when a new version came out.

    I like your definition much better.

  3. What about the importance of having a fixed sample design and clear research questions and hypotheses before you start the research? Tempting as it is to iterate and improve on the questionnaire while in field (or continue fielding certain questions longer than others) while you are able to see the data flooding in, don’t you think this will encourage Type 1 errors – i.e. the odds of seeing an “interesting” or “statistically significant” result purely because of chance? Check out this great npr podcast that addresses this issue: http://www.npr.org/sections/money/2016/01/15/463237871/episode-677-the-experiment-experiment
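
    To illustrate the concern, here is a quick simulation sketch (made-up batch sizes and a rough z-test, no real data). With no true difference between groups, stopping at the first interim look that crosses p < .05 pushes the false-positive rate well above 5%:

        import random
        from math import sqrt
        from statistics import mean, stdev

        def looks_significant(a, b, z_threshold=1.96):
            # Rough z-style two-sample test; close enough for an illustration.
            se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
            return abs(mean(a) - mean(b)) / se > z_threshold

        simulations, false_positives = 2000, 0
        for _ in range(simulations):
            group_a, group_b = [], []
            for _ in range(10):  # ten interim "looks", one per batch
                group_a += [random.gauss(0, 1) for _ in range(50)]
                group_b += [random.gauss(0, 1) for _ in range(50)]
                if looks_significant(group_a, group_b):  # peek and stop at the first hit
                    false_positives += 1
                    break
        # With no true difference, the stop-at-first-hit rate lands well above 5%.
        print(f"False-positive rate with peeking: {false_positives / simulations:.1%}")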

  4. One highly important thing left unsaid is that project design of this nature requires a consistent sample frame. Sample balancing often takes place toward the end of a project (e.g. we’re not getting enough responses from men, so we need to sample more men or start terminating the women). Particularly with panel sample where response rates can differ significantly by age, gender, ethnicity, etc. it is unfortunately quite common to be continually engineering the sample throughout the project, rather than a true RDD phone sample as we used to have where you just consistently dialed all the way through to the end.

    As an example, consider how the process of constantly engineering the sample frame might affect a study of 1,000 people (let’s just use gender). The first 100 completes might be largely women. Seeing this, the panel company invites more men; now the next 100 completes are largely men. Now that it’s more balanced, they let it fall naturally again and the next 200 are 65% women, until it gets corrected again. So if you pause after the first 100 completes, you have a sample mostly of women. If you change the questions and put it back into the field for the next 100 completes, now you have mostly men.

    This is not a problem that is insurmountable, but it must be taken into account during the planning, and that is critically important.

    There are two far more important problems with this article. One is that Earnest Byner and your Big Game euphemism for the Super Bowl don’t belong in the same paragraph. Now, if your example were Franco Harris, it would be much more appropriate.

    Second is that in a post on agile research and agile running backs, the picture is of a wide receiver (Odell Beckham Jr.). Shame on you, Lenny Murphy!
