
Political Behavior, Volume 36, Issue 3, pp 659–682

Artificial Inflation or Deflation? Assessing the Item Count Technique in Comparative Surveys

  • Chad P. Kiewiet de Jonge
  • David W. Nickerson
Original Paper

Abstract

While the item count technique (ICT), or list experiment, has become increasingly popular among political scientists as a way to estimate attitudes and behaviors subject to social desirability bias, many of the technique's empirical properties remain untested. In this paper, we explore whether estimates are biased by the different list lengths presented to the control and treatment groups rather than by the substance of the treatment items. Using face-to-face survey data from national probability samples of households in Uruguay and Honduras, we assess how effective the ICT is in face-to-face surveys, where social desirability bias should be strongest, and in developing contexts, where literacy rates raise questions about respondents' capacity to perform the cognitively taxing task the ICT requires. We find little evidence that the ICT overestimates the incidence of behaviors; instead, the ICT provides extremely conservative estimates of high-incidence behaviors. Thus, the ICT may be more useful for detecting low-prevalence attitudes and behaviors, and it may overstate social desirability bias when applied to higher-frequency socially desirable attitudes and behaviors. However, we do not find strong evidence that these deflationary effects vary across common demographic subgroups, suggesting that multivariate estimates based on the ICT may not be biased.
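
For readers unfamiliar with the design, the standard ICT analysis compares mean item counts across the two groups: the control group receives a list of J baseline items, the treatment group receives the same list plus the sensitive item, and the estimated prevalence of the sensitive item is the difference in mean counts. The sketch below illustrates this difference-in-means estimator in Python; the function name and the example counts are hypothetical and are not drawn from the Uruguayan or Honduran surveys analyzed in the paper.

```python
# Minimal sketch of the standard ICT (list experiment) difference-in-means
# estimator: control respondents see J baseline items, treatment respondents
# see J + 1 items (baseline plus the sensitive item), and the prevalence of
# the sensitive item is estimated as the difference in mean item counts.
# All data and names here are illustrative, not from the paper's surveys.

import numpy as np

def ict_difference_in_means(control_counts, treatment_counts):
    """Return the estimated prevalence of the sensitive item and its SE."""
    control = np.asarray(control_counts, dtype=float)
    treatment = np.asarray(treatment_counts, dtype=float)

    # Prevalence estimate: mean count (treatment) minus mean count (control).
    estimate = treatment.mean() - control.mean()

    # Standard error of a difference in means for two independent samples.
    se = np.sqrt(treatment.var(ddof=1) / len(treatment)
                 + control.var(ddof=1) / len(control))
    return estimate, se

# Hypothetical responses to "how many of these items apply to you?"
control = [1, 2, 0, 3, 2, 1, 2, 1]    # J = 4 baseline items
treatment = [2, 3, 1, 3, 2, 2, 3, 1]  # J + 1 = 5 items, incl. sensitive item

est, se = ict_difference_in_means(control, treatment)
print(f"Estimated prevalence: {est:.2f} (SE {se:.2f})")
```

The paper's concern can be read directly off this estimator: if longer lists mechanically deflate (or inflate) reported counts in the treatment group, the difference in means will understate (or overstate) the true prevalence regardless of the sensitive item's content.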

Keywords

List experiment · Item count technique · Survey design · Social desirability bias · Uruguay · Honduras

Notes

Acknowledgments

Funding for the surveys was provided by the Kellogg Institute for International Studies and the Institute for Scholarship in the Liberal Arts at the University of Notre Dame. Nickerson is grateful to the Center for the Study of Democratic Politics at Princeton University for the time to work on this project. We thank Equipos Mori for fielding the Uruguayan survey and Borge y Asociados for conducting the Honduran survey. We would also like to thank Scott Desposato, Macartan Humphreys, Jim Kuklinski, Devra Moeller, and anonymous reviewers for helpful comments. We are particularly indebted to Ezequiel Gonzalez Ocantos, Carlos Melendez, and Javier Osorio for their continuing collaboration.


Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  • Chad P. Kiewiet de Jonge (1)
  • David W. Nickerson (2)

  1. Political Studies Division, Centro de Investigación y Docencia Económicas (CIDE), Mexico, Mexico
  2. Department of Political Science, University of Notre Dame, Notre Dame, USA
