
Identifying and Addressing Response Errors in Self-Report Surveys

A chapter in the Handbook of Quantitative Criminology

Abstract

Much of the data used by criminologists is generated by self-report surveys of victims and offenders. Although both sources share a common reliance on responses to questions, little overlap exists between the two traditions, mainly because of differences in the original motivating goals and auspices of each. Recent changes in how these data are used, especially self-report offending surveys, necessitate a re-examination of this division. In this chapter, we review the methodological work on response errors conducted in the context of victimization surveys in order to identify ways to improve data accuracy in self-report offending surveys. We find evidence to suggest that several types of response error may affect the results obtained by self-report offending surveys. On the basis of these findings, we conclude that further exploration of sources of response error is needed and that a true understanding of these errors may only be possible with the creation of a “state of the art” survey to serve as a benchmark for less expensive surveys. In the interim, we suggest ways in which researchers can use existing surveys to obtain a better understanding of how response errors affect crime estimation, especially for particular uses such as trajectory modeling.



Notes

1. We use the terms “self-report offending surveys” and “offender surveys” as well as “self-report victimization surveys” and “victimization surveys” to refer to data collection efforts that rely on respondents reporting their own experiences with crime (as either an offender or victim).

2. In particular, the use of sophisticated statistical tools such as latent growth curve analysis or semi-parametric group-based modeling (SPGM) of trajectories has increased dramatically in the past decade. Piquero (2008) identified over 80 studies that have employed these methods to analyze criminal activity over the life course, and many of these studies use data from self-report surveys of offending. (A minimal illustrative sketch of this type of model appears after these notes.)

3. Two smaller victimization surveys are notable for their work on improving our understanding of recall and screening with regard to sexual assault and rape. These two surveys are the National Violence Against Women Survey (Tjaden and Thoennes 1998) and the Campus Survey of Sexual Violence (Fisher and Cullen 2000).

4. In the United States, no federally sponsored survey of offending comparable to the NCVS exists. The United Kingdom takes a different approach: there, the Home Office sponsors the Offending, Crime and Justice Survey, which was first fielded in 2003 (Budd et al. 2005).

5. Martin et al. (1986) also identified strategic response as a problem. Here, respondents recall the event but refuse to report it because they are ashamed, embarrassed, or suspicious. This source of error is not included in our discussion because the victim survey tradition has little to contribute beyond what the offender survey tradition has already done. Both types of surveys employ Computer-Assisted Self-Interviewing (CASI) to reduce strategic response (Thornberry and Krohn 2000; Cantor and Lynch 2000).

6. In fact, the screening interview more than doubled in administration time: the screener developed for the NCVS requires about 18 minutes to complete, compared with about 8 minutes for the screener previously used in the NCS.

7. The NCS did not specifically ask about rape because of concerns that such questions were too sensitive to be asked, especially in a government-sponsored survey. During the redesign, it was determined that societal norms had changed enough to permit directly asking questions about rape and sexual assault (Rennison and Rand 2007).

8. Other self-report offending surveys, such as the NYS, do screen using school, family, and other domains as frames of reference. However, the density of cues is more limited than in the NCVS.

9. In the victimization survey context, studies have compared early surveys that asked respondents to perform these complex exclusions within a single question with surveys that separated the identification of potential events from the exclusion of ineligible events. Respondents were more productive when asked separate questions that allowed them to identify potential events and then exclude ineligible ones (Biderman et al. 1967; Reiss 1967; Cantor and Lynch 2000).

10. In addition, sample size, in turn, brings issues of cost into play.

11. Life event calendars use the recounting of life events on a calendar as a means of improving recall of target events in retrospective surveys and increasing the accuracy of dating these events. For an application in offending surveys, see Horney and Marshall (1992).

12. Three reasons have been posited for how bounding in the crime survey reduces telescoping (see Biderman and Cantor 1984; Skogan 1981). First, the information from the bounding interview generates a reference list for the interviewer. Second, the bounding interview provides a cognitive reference point for the respondent. Third, the bounding interview educates respondents about the precision expected in their responses.

13. Huizinga and Elliot (1985) come closest in the self-report offending survey tradition to an incident-based reverse record check, in which arrest incidents found in police records are matched on a one-to-one basis with reports from the self-report survey (a simplified illustration of such matching appears in the second sketch following these notes). Even in this study, the amount of information used to match is quite limited, and dates are not included.

14. The authors are grateful to Terry Thornberry for providing a copy of the Rochester Youth Development Study interview schedule.

15. The authors thank the editors for suggesting this option.

16. Some debate exists over what “serious” means. Typically, a more serious crime means a more morally objectionable act, but these acts also are more likely to have consequences, such as being reported to the police or insurance companies or requiring hospital visits, all of which make them more memorable. Still others argue that it is the unambiguous nature of certain crime events that makes them more immune to response errors.
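
The following is a minimal, illustrative Python sketch of the semi-parametric group-based modeling mentioned in note 2: a finite mixture of Poisson trajectories whose log rates are quadratic in age, fit with a crude EM algorithm. It is not the software or specification used in the studies cited; the simulated data, the two-group structure, and the quadratic age terms are assumptions made purely for illustration.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(0)

# Hypothetical panel: self-reported offense counts for each respondent at each age.
ages = np.arange(12, 25)
X = np.column_stack([np.ones_like(ages, dtype=float), ages, ages ** 2])

def simulate(counts_per_group, betas):
    # Draw Poisson counts around each latent group's quadratic age trajectory.
    return np.vstack([rng.poisson(np.exp(X @ b), size=(n, len(ages)))
                      for n, b in zip(counts_per_group, betas)])

true_betas = [np.array([-8.0, 1.0, -0.03]),    # adolescence-limited shape
              np.array([0.5, 0.05, -0.002])]   # higher, flatter shape
Y = simulate([150, 50], true_betas)

def fit_spgm(Y, n_groups=2, n_iter=50):
    # Crude EM for a mixture of Poisson regressions on age (illustration only).
    pi = np.full(n_groups, 1.0 / n_groups)
    base = np.log(Y.mean() + 1e-6)
    betas = [np.array([base + 0.2 * j, 0.0, 0.0]) for j in range(n_groups)]
    for _ in range(n_iter):
        # E-step: posterior probability that each respondent belongs to each group.
        logp = np.column_stack([
            poisson.logpmf(Y, np.exp(X @ b)).sum(axis=1) + np.log(p)
            for b, p in zip(betas, pi)])
        logp -= logp.max(axis=1, keepdims=True)
        w = np.exp(logp)
        w /= w.sum(axis=1, keepdims=True)
        # M-step: group shares and a weighted Poisson regression per trajectory group.
        pi = w.mean(axis=0)
        for j in range(n_groups):
            def nll(b, wj=w[:, j]):
                eta = np.clip(X @ b, -30, 30)
                return -(wj[:, None] * (Y * eta - np.exp(eta))).sum()
            betas[j] = minimize(nll, betas[j], method="BFGS").x
    return pi, betas

pi_hat, betas_hat = fit_spgm(Y)
print("estimated group shares:", np.round(pi_hat, 2))

Response errors in the self-reported counts (undercounting, telescoping) enter this kind of model directly through Y, which is why the chapter's concerns about data accuracy bear on trajectory estimates.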
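
Note 13 describes incident-based reverse record checks, in which externally recorded incidents are matched one-to-one against survey reports. The sketch below is a simplified illustration of that idea, not a reconstruction of the Huizinga and Elliot procedure (which used very limited matching information and no dates); the field names, the offense-type requirement, and the 30-day date tolerance are all hypothetical.

from datetime import date

police_records = [
    {"person": 101, "offense": "burglary", "date": date(2005, 3, 14)},
    {"person": 101, "offense": "assault",  "date": date(2005, 7, 2)},
    {"person": 202, "offense": "theft",    "date": date(2005, 1, 20)},
]
self_reports = [
    {"person": 101, "offense": "burglary", "date": date(2005, 3, 1)},
    {"person": 202, "offense": "theft",    "date": date(2004, 11, 5)},
]

def reverse_record_check(records, reports, max_days=30):
    # Greedily pair each police record with the closest unused self-report
    # for the same person and offense type, within a date tolerance.
    used = set()
    matches = []
    for rec in records:
        best, best_gap = None, max_days + 1
        for idx, rep in enumerate(reports):
            if idx in used:
                continue
            if rep["person"] != rec["person"] or rep["offense"] != rec["offense"]:
                continue
            gap = abs((rep["date"] - rec["date"]).days)
            if gap < best_gap:
                best, best_gap = idx, gap
        if best is not None:
            used.add(best)
            matches.append((rec, reports[best], best_gap))
    return matches, len(matches) / len(records)

matches, hit_rate = reverse_record_check(police_records, self_reports)
print(f"matched {len(matches)} of {len(police_records)} records ({hit_rate:.0%})")

The unmatched share of records gives a rough indication of underreporting, while matched pairs with large date gaps point to dating errors such as telescoping.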

References

  • Addington LA (2005) Disentangling the effects of bounding and mobility on reports of criminal victimization. J Quant Criminol 23:321–343
  • Biderman AD (1966) Social indicators and goals. In: Bauer R (ed) Social indicators. MIT Press, Cambridge, MA
  • Biderman AD, Cantor D (1984) A longitudinal analysis of bounding, respondent conditioning and mobility as sources of panel bias in the National Crime Survey. In: American Statistical Association 1984 proceedings of the Section on Survey Research Methods. American Statistical Association, Washington, DC
  • Biderman AD, Lynch JP (1981) Recency bias in data on self-reported victimization. In: American Statistical Association 1981 proceedings of the Social Statistics Section. American Statistical Association, Washington, DC
  • Biderman AD, Lynch JP (1991) Understanding crime incidence statistics: why the UCR diverges from the NCS. Springer, New York
  • Biderman AD, Moore J (1982) Report on the Workshop on Cognitive Issues in Retrospective Surveys. Bureau of Social Science Research and U.S. Census Bureau, Washington, DC
  • Biderman AD, Reiss AJ Jr (1967) On exploring the “dark figure” of crime. Ann Am Acad Pol Soc Sci 374:1–15
  • Biderman AD, Johnson LA, McIntyre J, Weir AW (1967) Report on a pilot study in the District of Columbia on victimization and attitudes toward law enforcement. President’s Commission on Law Enforcement and Administration of Justice, Field Surveys no. 1. U.S. Government Printing Office, Washington, DC
  • Biderman AD, Cantor D, Reiss A (1985) A quasi-experimental analysis of personal victimization by household respondents in the NCS. Paper presented at the annual meetings of the American Statistical Association, Philadelphia
  • Biderman AD, Cantor D, Lynch JP, Martin E (1986) Final report of the National Crime Survey redesign. Bureau of Social Science Research, Washington, DC
  • Biemer PP, Groves RM, Lyberg LE, Mathiowetz NA, Sudman S (2004) Measurement errors in surveys. Wiley, New York
  • Budd T, Sharp C, Mayhew P (2005) Offending in England and Wales: first results of the 2003 Crime and Justice Survey. Home Office Research Study No. 275. Home Office, London
  • Bushery J (1981) Recall bias for different reference periods in the National Crime Survey. In: American Statistical Association 1981 proceedings of the Section on Survey Research Methods. American Statistical Association, Washington, DC, pp 238–243
  • Cantor D, Lynch JP (2000) Self-report surveys as measures of crime and criminal victimization. In: Duffee D, McDowall D, Mazerolle LG, Mastrofski SD (eds) Criminal Justice 2000: measurement and analysis of crime and justice. National Institute of Justice, Washington, DC
  • Cantor D, Lynch JP (2005) Exploring the effects of changes in design on the analytical uses of the NCVS data. J Quant Criminol 21:293–319
  • Dodge R (1970) The Washington, DC Recall Study. Reprinted in: Lehnen RG, Skogan WG (eds) (1981) The National Crime Survey: working papers, vol 1: current and historical perspectives. U.S. Department of Justice, Washington, DC
  • Dodge R, Balog F (1987) Series victimization: a report on a field test. Bureau of Justice Statistics, Washington, DC
  • Eggleston E, Laub JH, Sampson RJ (2004) Methodological sensitivities to latent class analysis of long-term trajectories. J Quant Criminol 20:1–42
  • Elliot D (2008) National Youth Survey [United States]: Wave V, 1980 [Computer file]. Inter-university Consortium for Political and Social Research, Ann Arbor, MI
  • Fisher BS, Cullen FT (2000) Measuring the sexual victimization of women: evolution, current controversies and future research. In: Duffee D, McDowall D, Mazerolle LG, Mastrofski SD (eds) Criminal Justice 2000: measurement and analysis of crime and justice. National Institute of Justice, Washington, DC
  • Gold M (1972) National Survey of Youth, 1972 [Computer file]. Inter-university Consortium for Political and Social Research, Ann Arbor, MI
  • Horney J, Marshall I (1992) An experimental comparison of two self-report methods for measuring lambda. J Res Crime Delinq 29:102–121
  • Hubble D, Wilder BE (1988) Preliminary results from the National Crime Survey CATI experiment. In: American Statistical Association 1988 proceedings of the Section on Survey Research Methods. American Statistical Association, Washington, DC
  • Huizinga D, Elliot DS (1985) A preliminary examination of the reliability and validity of the National Youth Survey self-reported delinquency indices. National Youth Survey Project Report No. 2. Behavioral Research Institute, Boulder, CO
  • Huizinga D, Elliot DS (1986) Reassessing the reliability and validity of self-report delinquency measures. J Quant Criminol 2:293–327
  • Jabine TB, Straf ML, Tanur JM, Tourangeau R (eds) (1984) Cognitive aspects of survey methodology: building a bridge between disciplines. National Academy Press, Washington, DC
  • Junger-Tas J, Marshall I (1999) The self-report methodology in crime research. In: Tonry M (ed) Crime and justice: a review of research, vol 25. University of Chicago Press, Chicago, pp 291–359
  • Kazemian L, Farrington DP (2005) Comparing the validity of prospective, retrospective, and official onset for different offending categories. J Quant Criminol 21:127–147
  • Kindermann C, Lynch JP, Cantor D (1997) The effects of the redesign on victimization estimates. U.S. Department of Justice, Washington, DC
  • Kobelarcik E, Alexander C, Singh R, Shapiro G (1983) Alternative reference periods for the National Crime Survey. In: American Statistical Association 1983 proceedings of the Section on Survey Research Methods. American Statistical Association, Washington, DC
  • Koss M (1996) The measurement of rape victimization in crime surveys. Crim Justice Behav 23:55–69
  • Lauritsen J (1998) The age-crime debate: assessing the limits of longitudinal self-report data. Soc Forces 77:127–155
  • Lauritsen J (1999) Limitations on the use of longitudinal self-report data: a comment. Criminology 37:687–694
  • Lehnen RG, Skogan WG (1981) The National Crime Survey: working papers, vol 1: current and historical perspectives. U.S. Department of Justice, Washington, DC
  • Lepkowski J (1981) Sample design issues from the National Crime Survey. Survey Research Center, University of Michigan, Ann Arbor, MI
  • Loftus E, Marburger W (1983) Since the eruption of Mt. St. Helens, has anyone beaten you up? Mem Cogn 11:114–120
  • Lynch JP (2006) Problems and promise of victimization surveys for cross-national research. In: Tonry M (ed) Crime and justice: a review of research. University of Chicago Press, Chicago, pp 229–287
  • Martin E with Groves R, Maitlin J, Miller P (1986) Report on the Development of Alternative Screening Procedures in the National Crime Survey. Bureau of Social Science Research, Washington, DC
  • Menard S, Elliot D (1990) Longitudinal and cross-sectional data collection and analysis in the study of crime and delinquency. Justice Q 7:11–54
  • Miller PV, Groves RM (1986) Matching survey respondents to official records: an exploration of validity in victimization reporting. Public Opin Q 49:366–380
  • Murphy LR, Cowan CD (1984) Effects of bounding on telescoping in the National Crime Survey. In: Lehnen R, Skogan W (eds) The National Crime Survey: working papers, vol 2. U.S. Department of Justice, Washington, DC
  • Nagin D (1999) Analyzing developmental trajectories: a semi-parametric, group-based approach. Psychol Methods 4(2):139–157
  • Nagin D (2004) Response to “Methodological sensitivities to latent class analysis of long-term criminal trajectories”. J Quant Criminol 20:27–35
  • Penick BKE, Owens M (1976) Surveying crime. National Academy Press, Washington, DC
  • Persley C (1995) The National Crime Victimization Survey redesign: measuring the impact of new methods. In: American Statistical Association 1995 proceedings of the Section on Survey Research Methods. American Statistical Association, Washington, DC
  • Piquero AR (2008) Taking stock of developmental trajectories of criminal activity. In: Lieberman A (ed) The long view of crime: a synthesis of longitudinal research. Springer, New York
  • Planty M (2007) Series victimization and divergence. In: Lynch JP, Addington LA (eds) Understanding crime statistics: revisiting the divergence of the NCVS and UCR. Cambridge University Press, Cambridge, UK
  • Rand M, Cantor D, Lynch JP (1997) Criminal victimization, 1973–95. Bureau of Justice Statistics, Washington, DC
  • Reiss AJ (1967) Measurement of the nature and the amount of crime: studies in crime and law enforcement in major metropolitan areas. President’s Commission on Law Enforcement and Administration of Justice, vol 1 of Field Surveys no. 3. U.S. Government Printing Office, Washington, DC
  • Reiss AJ (1982) Victimization productivity in proxy interviewing. Institution for Social and Policy Studies, Yale University, New Haven, CT
  • Rennison CM, Rand M (2007) Introduction to the National Crime Victimization Survey. In: Lynch JP, Addington LA (eds) Understanding crime statistics: revisiting the divergence of the NCVS and UCR. Cambridge University Press, Cambridge, UK
  • Roberts J, Mulvey E, Horney J, Lewis J, Arter M (2005) A test of two methods of recall for violent events. J Quant Criminol 21:175–194
  • Rosenfeld R (2007) Explaining the divergence between UCR and NCVS aggravated assault trends. In: Lynch JP, Addington LA (eds) Understanding crime statistics: revisiting the divergence of the NCVS and UCR. Cambridge University Press, New York
  • Sampson R, Laub JH, Eggleston E (2004) On the robustness and validity of groups. J Quant Criminol 20:37–42
  • Skogan WG (1981) Issues in the measurement of victimization. U.S. Department of Justice, Bureau of Justice Statistics, Washington, DC
  • Thornberry TP (1989) Panel effects and the use of self-reported measures of delinquency in longitudinal studies. In: Klein MW (ed) Cross-national research in self-reported crime and delinquency. Kluwer, Los Angeles
  • Thornberry TP, Krohn MD (2000) Self-report surveys as measures of delinquency and crime. In: Duffee D (ed) Criminal Justice 2000: measurement and analysis of crime and justice. U.S. Department of Justice, Washington, DC, pp 33–84
  • Tjaden P, Thoennes N (1998) Prevalence, incidence, and consequences of violence against women: findings from the National Violence Against Women Survey. National Institute of Justice, Washington, DC
  • Turner A (1972) San Jose Methods Test of Known Crime Victims. U.S. Department of Justice, Law Enforcement Assistance Administration, National Institute of Law Enforcement and Criminal Justice, Washington, DC




Copyright information

© 2010 Springer Science+Business Media, LLC

About this chapter

Cite this chapter

Lynch, J.P., Addington, L.A. (2010). Identifying and Addressing Response Errors in Self-Report Surveys. In: Piquero, A., Weisburd, D. (eds) Handbook of Quantitative Criminology. Springer, New York, NY. https://doi.org/10.1007/978-0-387-77650-7_13
