Mail Surveys: A Closer Look at Nonresponse Rates
Abstract
The topics of response rates, corresponding nonresponse rates and the associated nonresponse bias are, or should be, of interest to all researchers using survey instruments (Struebbe, Kernan, and Grogan 1986). Given that ad hoc (i.e., one-shot) mail surveys, in comparison to other survey methods including mail panels, have traditionally suffered from much higher nonresponse rates (Visser et al. 1996), both practitioners and academicians conducting mail surveys of this type are generally very concerned with nonresponse rates and normally report these rates in their research output (Yammarino, Skinner, and Childers 1991). These researchers also generally state whether they were able to detect nonresponse bias. That is, they acknowledge that it is important to determine the error that results from a systematic difference between those who responded to a mail survey and those who did not respond, because such a systematic difference raises serious doubts concerning the accuracy of the results (Lambert and Harrington 1990).
Keywords
Public Opinion · Systematic Difference · Research Output · Nonresponse Bias · Mail Survey
References
- Lambert, Douglas M. and Thomas C. Harrington. 1990. “Measuring Nonresponse Bias in Customer Service Mail Surveys.” Journal of Business Logistics 11/2: 5–26.
- Struebbe, Jolene M., Jerome B. Kernan, and Thomas J. Grogan. 1986. “The Refusal Problem in Telephone Surveys.” Journal of Advertising Research 26/3 (Jun/Jul): 29–37.
- Visser, Penny S., Jon A. Krosnick, Jesse Marquette, and Michael Curtin. 1996. “Mail Surveys for Election Forecasting? An Evaluation of the Columbus Dispatch Poll.” Public Opinion Quarterly 60/2 (Summer): 181–227.
- Yammarino, Francis J., Steven J. Skinner, and Terry L. Childers. 1991. “Understanding Mail Survey Response Behavior: A Meta-Analysis.” Public Opinion Quarterly 55/4 (Winter): 613–640.