When companies need to know what their consumer base is thinking, surveying is often the only scalable way to find out. However, surveys take time and can be tedious. As respondents’ patience is sapped by the umpteenth question and their willpower flags, they adopt a coping strategy known as “satisficing”: rather than taking the time to give their best response, they aim only to meet the lowest acceptable threshold for an answer. This shows up when questionnaires come back with every answer marked “5/5” or “extremely happy”, or with other arbitrary patterns that call the responses’ authenticity into question and can hurt data quality.
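One common symptom of satisficing, “straightlining” (picking the identical answer for every item), is straightforward to flag programmatically. The sketch below is illustrative only; the respondent IDs, the 1–5 rating scale, and the zero-variance rule are assumptions, not anything prescribed in the article.

```python
# Sketch: flag likely satisficing respondents by detecting straightlined
# answers, i.e. identical ratings across every item in the questionnaire.
# The data, IDs, and 1-5 scale here are illustrative assumptions.

def is_straightliner(ratings):
    """Return True if every rating in a response is identical."""
    return len(set(ratings)) == 1

responses = {
    "r1": [5, 5, 5, 5, 5],   # suspicious: all "5/5"
    "r2": [4, 2, 5, 3, 4],   # varied answers
}

flagged = [rid for rid, r in responses.items() if is_straightliner(r)]
# flagged respondents can then be reviewed rather than blindly dropped
```

A real screening rule would usually be softer than strict zero variance (e.g. flagging long identical runs), since some scales legitimately invite uniform answers.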
Attention checks are a survey strategy for “catching” respondents who appear not to follow the instructions of—or pay sufficient attention to—the survey. Colloquially, these are known as “trick questions” or “trap questions”. An attention check can be as simple as embedding a command within a block of instructions telling respondents that, to demonstrate attention, they should click the “other” option and enter “I read the instructions” in the corresponding text box. The assumption is that respondents who fail the attention check are “bad” respondents who should simply be eliminated from the dataset, the sooner the better. However, recent studies show that removing such respondents is likely to introduce demographic bias, particularly with respect to age, and can even cause respondents to behave worse later in the survey!
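Before dropping failures, then, it is worth checking whether removal would skew the sample. A minimal sketch of that diagnostic, comparing the mean age of passers and failers, follows; the field names and toy data are illustrative assumptions, not from any cited study.

```python
# Sketch: before discarding attention-check failures, compare the mean age
# of passers vs. failers. A large gap suggests that removal would introduce
# the kind of age-related demographic bias the article warns about.
# Field names and data are illustrative assumptions.

def mean(values):
    return sum(values) / len(values)

respondents = [
    {"age": 62, "passed_check": False},
    {"age": 58, "passed_check": False},
    {"age": 24, "passed_check": True},
    {"age": 31, "passed_check": True},
    {"age": 67, "passed_check": True},
]

passed = [r["age"] for r in respondents if r["passed_check"]]
failed = [r["age"] for r in respondents if not r["passed_check"]]

age_gap = mean(failed) - mean(passed)  # positive gap: failers skew older
```

In practice one would run a proper statistical comparison (and look at more demographics than age), but even this crude check can reveal whether “cleaning” the data quietly removes a particular group.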
Researchers hypothesize that respondents may recognize the attention-check question and subsequently feel they are “past the trap”, which induces them to invest less effort in their remaining responses. An alternative hypothesis is that the trap reduces respondents’ trust in, and reciprocity with, the survey researcher, thus reducing their willingness to engage thoughtfully and carefully with the rest of the survey (Vannette, 2017).
This is not to say, however, that attention checks are always bad. Hauser and Schwarz (2015) found that attention checks led respondents to reconsider their spontaneous answers and adopt a more systematic reasoning strategy. That can be an advantage if the researcher wants to measure something like education level.
Nevertheless, for many surveys—especially in the impact and evaluation realm—deeper thinking may not be the best state for a respondent to be in. Clifford and Jerit (2015) found, for example, that respondents edited or censored their responses because an attention check made them feel they were being watched. In such situations, an honest and natural thinking state, in which respondents reason as they normally would in daily life, is preferable.
Whether attention checks should be used at all is still debated among researchers, so what are we to do in the meantime? Hauser and Schwarz (2015) propose a simple solution: if an attention check is deemed necessary, place it after all crucial measures and manipulations in the study. Doing so at least guarantees that the attention check cannot influence how participants interpret and answer the other questions.
Clifford, S., & Jerit, J. (2015). Do attempts to improve respondent attention increase social desirability bias? Public Opinion Quarterly, 79(3), 790–802. Retrieved July 27, 2017, from http://perpustakaan.unitomo.ac.id/repository/Do%20Attempts%20to%20Improve%20Respondent%20Attention%20Increase%20Social%20Desirability%20Bias.pdf
Hauser, D. J., & Schwarz, N. (2015). It’s a trap! Instructional manipulation checks prompt systematic thinking on “tricky” tasks. SAGE Open, 5(2). Retrieved July 27, 2017, from http://journals.sagepub.com/doi/pdf/10.1177/2158244015584617
Vannette, D. (2017, June 30). Using Attention Checks in Your Surveys May Harm Data Quality. Retrieved July 27, 2017, from https://www.qualtrics.com/blog/using-attention-checks-in-your-surveys-may-harm-data-quality/