Artificial Inflation or Deflation? Assessing the Item Count Technique in Comparative Surveys
While the popularity of using the item count technique (ICT), or list experiment, to obtain estimates of attitudes and behaviors subject to social desirability bias has increased in recent years among political scientists, many of the empirical properties of the technique remain untested. In this paper, we explore whether estimates are biased by the different list lengths provided to control and treatment groups rather than by the substance of the treatment items. Using face-to-face survey data from national probability samples of households in Uruguay and Honduras, we assess how effective the ICT is in the context of face-to-face surveys—where social desirability bias should be strongest—and in developing contexts—where literacy rates raise questions about the capability of respondents to engage in the cognitively taxing process required by the ICT. We find little evidence that the ICT overestimates the incidence of behaviors and instead find that the ICT provides extremely conservative estimates of high incidence behaviors. Thus, the ICT may be more useful for detecting low prevalence attitudes and behaviors, and may overstate social desirability bias when applied to higher frequency socially desirable attitudes and behaviors. However, we do not find strong evidence that the deflationary effects vary across common demographic subgroups, suggesting that multivariate estimates using the ICT may not be biased.
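The ICT design discussed above assigns a control group a list of J nonsensitive items and a treatment group the same list plus the sensitive item; prevalence of the sensitive item is then estimated as the difference in mean reported counts between the two groups. A minimal simulation sketch of this standard difference-in-means estimator (not the paper's own analysis; all parameter values here are illustrative assumptions) is:

```python
import random

random.seed(0)

def ict_estimate(treatment_counts, control_counts):
    """Standard ICT difference-in-means estimator: mean count in the
    treatment group (J + 1 items) minus mean count in the control
    group (J items) estimates the sensitive item's prevalence."""
    mean_t = sum(treatment_counts) / len(treatment_counts)
    mean_c = sum(control_counts) / len(control_counts)
    return mean_t - mean_c

# Illustrative assumptions: J = 4 control items each endorsed with
# probability 0.5, and a true sensitive-item prevalence of 0.30.
J, true_prev, n = 4, 0.30, 5000
control = [sum(random.random() < 0.5 for _ in range(J))
           for _ in range(n)]
treatment = [sum(random.random() < 0.5 for _ in range(J))
             + (random.random() < true_prev)
             for _ in range(n)]

print(round(ict_estimate(treatment, control), 2))  # near the true 0.30
```

Because respondents report only a total count, no individual answer to the sensitive item is revealed, which is the technique's appeal; the paper's question is whether the longer treatment list itself distorts the counts.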
Keywords: List experiment · Item count technique · Survey design · Social desirability bias · Uruguay · Honduras
Funding for the surveys was provided by the Kellogg Institute for International Studies and the Institute for Scholarship in the Liberal Arts at the University of Notre Dame. Nickerson is grateful to the Center for the Study of Democratic Politics at Princeton University for the time to work on this project. We thank Equipos Mori for fielding the Uruguayan survey and Borge y Asociados for conducting the Honduran survey. We would also like to thank Scott Desposato, Macartan Humphreys, Jim Kuklinski, Devra Moeller, and anonymous reviewers for helpful comments. We are particularly indebted to Ezequiel Gonzalez Ocantos, Carlos Melendez, and Javier Osorio for their continuing collaboration.