
Extreme nonresponse and response bias

A “worst case” analysis


Abstract

The article analyzes response bias in the Norwegian Monitor, a series of surveys carried out every second year since 1985, with a response rate of only 4 % in the last wave. One third of the respondents in a telephone interview completed the follow-up mail questionnaire. Their answers in the telephone interview are compared with those of the total telephone sample. Furthermore, results from the mail questionnaire are compared with population statistics and with high-response surveys. Finally, the plausibility of nonresponse bias as an explanation for trends and correlations in the data is discussed. The conclusion is that even in this extreme case of nonresponse most results are not biased, suggesting that survey data with very low response rates may still have scientific value.


Notes

  1. SIRUS (Statens institutt for rusmiddelforskning), the Norwegian government institute for research on alcohol and drug use.

  2. The telephone sample has more than ten thousand respondents, split 3812/6307 between those answering and those not answering the mail questionnaire. As an illustration, this means that a difference of just 1.7 percentage points, between 20.0 and 21.7, would be significant in a two-tailed test at the 5 percent level. See also footnote 7.
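As a rough check on that illustration, here is a minimal sketch of a pooled two-proportion z-test for samples of those sizes; the article does not specify which test was used, so both the choice of test and the code are assumptions:

```python
from math import sqrt

def two_prop_z(p1, n1, p2, n2):
    """Two-sided two-proportion z-statistic with a pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Sizes from the note: 3812 mail respondents vs. 6307 nonrespondents,
# with hypothetical shares of 21.7 and 20.0 percent on some item.
z = two_prop_z(0.217, 3812, 0.200, 6307)
print(round(z, 2))  # about 2.05, just beyond the 1.96 cutoff for a two-tailed 5 % test
```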

  3. During the interviewing, the number of completed interviews within each of five age groups was tracked, so that the supervisor could draw extra selections from the number base to compensate for differences in nonresponse. In 2013, the result was a heavy oversampling of ages 15–24, and less so in 2011. This procedure was not used in earlier waves. A consequence of such oversampling is a lower response rate, which is part of the explanation for the reduction in 2013 (Table 1).

  4. Taking party choice as an example, the difference between the unweighted results in Fig. 2 and the results after weighting for age, gender and region is negligible. Nine of the twelve differences lie between 0.0 and 0.2 percentage points, and the largest difference is 0.4.
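The weighting procedure is not described in the note; as one plausible reading, a minimal sketch of simple cell (post-stratification) weighting by demographic group, with made-up cells and figures rather than actual Norwegian Monitor data, could look like this:

```python
# Illustrative cell weighting: each cell of respondents is weighted by
# population share / sample share. All cells and figures are invented.
population_share = {"young_urban": 0.15, "old_urban": 0.25, "young_rural": 0.20, "old_rural": 0.40}
sample_share     = {"young_urban": 0.25, "old_urban": 0.25, "young_rural": 0.15, "old_rural": 0.35}
party_support    = {"young_urban": 0.30, "old_urban": 0.22, "young_rural": 0.28, "old_rural": 0.20}

unweighted = sum(sample_share[c] * party_support[c] for c in sample_share)
weighted   = sum(population_share[c] * party_support[c] for c in sample_share)
print(round(unweighted, 3), round(weighted, 3))  # 0.242 vs. 0.236 in this toy example
```

Weighting only moves an estimate to the extent that the over- and underrepresented cells differ on the variable of interest, which is consistent with the negligible differences reported in the note.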

  5. In the elections of 2005 and 2009 (Berglund et al. 2011, p. 17).

  6. The full series of vote shares for the five levels of education shown in Table 3 is 2.4, 4.5, 3.4, 6.6 and 9.2 percent for SV, and 14.3, 11.7, 11.1, 6.8 and 4.4 percent for FP (the share saying they would vote for the party, among those eligible and intending to vote).

  7. The result illustrates the point made earlier about the use of significance tests in analyses of response bias. When comparing first party choice for the NM sample and the nonrespondents from the telephone sample, the differences vary between 0 and 2.5 percentage points, averaging 1.3. Only five of the 12 differences are not significant at the 5 % level (six at the 1 % level). This might be interpreted as a sign of substantial differences, in contrast to what is indicated by the corresponding patterns for party choice in the two samples in Table 5. The outcome of a significance test of course depends on sample size: had the two samples been reduced to one tenth of their size (381 and 631), none of the differences would have been significant at the 5 percent level.
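To make the sample-size point concrete, the same kind of pooled two-proportion z-statistic as sketched after note 2 can be applied to the largest difference mentioned here, 2.5 percentage points, assuming for illustration shares of 22.5 versus 20.0 percent (the actual shares are not given in the note):

```python
from math import sqrt

def two_prop_z(p1, n1, p2, n2):
    """Two-sided two-proportion z-statistic with a pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    return (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))

print(round(two_prop_z(0.225, 3812, 0.200, 6307), 2))  # about 3.0: significant even at the 1 % level
print(round(two_prop_z(0.225, 381,  0.200, 631),  2))  # about 0.95: not significant at the 5 % level
```

Shrinking both groups to one tenth of their size divides the z-value by roughly the square root of ten, which is why the same percentage-point differences stop being significant.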

  8. Source: http://www.ssb.no/befolkning/statistikker/fobbolig/hvert-10-aar/2013-02-26.

  9. The trade unions are LO (881 thousand members in 2011), YS (227 in 2012), UNIO (320 in 2014) and Akademikerne (117 in 2014), altogether 1.6 million members. In the 2013 NM, the corresponding figures are 663, 205, 594 and 484, in sum 1.9 million members, out of an adult (age 15 years and above) population of 4 million.

  10. In 2009, when FP as shown in Table 5 was particularly underrepresented in the NM sample, 44 percent said they disliked the party strongly and 22 percent that they disliked it, compared to 46 percent strong dislike for R, 27 for SF, 25 for KF, 16 for SP, 16 for V, 8 for H and 5 for A.

  11. Of the 24 comparisons of trends that can be made in Table 7, 18 (75 %) show the same tendency for change in the election outcomes and in the NM results. In three cases there is a minor difference in tendency, for changes that lie close to zero and are not significant for the NM results (5 % significance level). In the three remaining cases (12.5 %), two of them the FP and SV cases already mentioned in the text, there is a clear difference in trends.

  12. In 2009 the mean deviation from the election result in the NEP was 1.7 percentage points, and in 2013 it was 1.2 points. In both elections, the largest single deviation was for FP, which was underrepresented by 3.6 and 3.8 percentage points, respectively.

  13. There are differences between the two sets of surveys that complicate the comparisons. Part of the NEP sample is a panel whose respondents were also interviewed at the previous election. In the NEP the data are collected by personal interviews, in the NM by telephone interviews. The content of the rest of the interviews may have produced different contextual effects.

  14. 5660 telephone interviews in a national population sample aged 16 years and above.

  15. The Statistics Norway study uses telephone interviewing, while the height and weight questions in the NM are part of the self-completion questionnaire.

  16. The number of respondents lies between 4500 and 5000. A difference between the NM and the SN surveys is the use of a self-completion questionnaire in the former and a telephone interview in the latter.

  17. The Gallup internet panel had around 45,000 members in 2011. They had been recruited through ordinary telephone and mail surveys, where they declared a willingness to participate in internet surveys, for which they are rewarded each time they take part. In the present survey, 3160 members were invited to participate and 1522 responded.

  18. This was due to a change in method for the screener interview, from face-to-face to telephone. This made it necessary to shorten the screener interview, and some of the questions were moved to the self-completion mail questionnaire; their results are thus no longer affected by an interviewer effect.

References

  • Berglund, F., Reymert, I.S., Aardal, B.: Valgundersøkelse 2009. Dokumentasjonsrapport (The 2009 Election Survey. Documentation Report). SSB Notater 29/2011, Oslo (2011)

  • Blanchflower, D.G., Oswald, A.J.: Is well-being U-shaped over the life cycle? Soc. Sci. Med. 66(8), 1733–1749 (2008)

  • Blanchflower, D.G., Oswald, A.J.: The U-shape without controls: a response to Glenn. Soc. Sci. Med. 69(4), 486–488 (2009)

  • Breivik, G., Hellevik, O.: More active and less fit: changes in physical activity in the adult Norwegian population from 1985 to 2011. Sport Soc. Cult. Commer. Media Polit. 17(2), 157–175 (2014)

  • de Leeuw, E., de Heer, W.: Trends in household survey nonresponse: a longitudinal and international comparison. In: Groves, R., Dillman, D., Eltinge, J., Little, R.J.A. (eds.) Survey Nonresponse, pp. 41–54. Wiley, New York (2002)

  • Flanagan, S.C.: Changing values in advanced industrial society. Comp. Polit. Stud. 14, 403–444 (1982)

  • Glenn, N.: Is the apparent U-shape of well-being over the life course a result of inappropriate use of control variables? A commentary on Blanchflower and Oswald (66:8, 2008, 1733–1749). Soc. Sci. Med. 69(4), 481–485 (2009)

  • Goyder, J.C.: The Silent Minority: Nonrespondents on Sample Surveys. Westview Press, Boulder (1987)

  • Groves, R.M.: Nonresponse rates and nonresponse bias in household surveys. Public Opin. Q. 70(5), 646–675 (2006)

  • Groves, R.M., Peytcheva, E.: The impact of nonresponse rates on nonresponse bias: a meta-analysis. Public Opin. Q. 72(2), 167–189 (2008)

  • Hellevik, O.: Postmaterialism as a dimension of cultural change. Intern. J. Public Opin. Res. 5(3), 211–233 (1993)

  • Hellevik, O.: Age differences in value orientation—life cycle or cohort effect? Intern. J. Public Opin. Res. 14(3), 286–302 (2002a)

  • Hellevik, O.: Beliefs, attitudes and behavior towards the environment. In: Lafferty, W.M., Nordskog, M., Aakre, H.A. (eds.) Realizing Rio in Norway, pp. 7–19. Prosus, Oslo (2002b)

  • Hellevik, O.: Economy, values and happiness in Norway. J. Happiness Stud. 4(3), 243–283 (2003)

  • Hellevik, O.: Assessing long-term value changes in societies. In: Donsbach, W., Traugott, M. (eds.) Handbook of Public Opinion Research, pp. 556–569. Sage, London (2008a)

  • Hellevik, O.: Jakten på den norske lykken. Norsk Monitor 1985–2007 (The Pursuit of Happiness in Norway. The Norwegian Monitor 1985–2007). Universitetsforlaget, Oslo (2008b)

  • Hellevik, O.: Mål og mening. Om feiltolking av meningsmålinger. (Measures and Meaning. Misinterpretations of Opinion Polls). Universitetsforlaget, Oslo (2011)

  • Hellevik, O.: Is the good life sustainable? A three decade study of values, happiness and sustainability in Norway. In: Mueller, M.L., Syse, K.V.L. (eds.) Sustainable Consumption and the Good Life, pp. 55–79. Routledge, London & New York (2015)

  • Hellevik, O.: Is there a U-shaped relationship between age and happiness? (forthcoming)

  • Inglehart, R.: The Silent Revolution: Changing Values and Political Styles Among Western Publics. Princeton University Press, Princeton (1977)

  • Inglehart, R.: Culture Shift in Advanced Industrial Society. Princeton University Press, Princeton (1990)

  • Keeter, S., Kennedy, C., Dimock, M., Best, J., Craighill, P.: Gauging the impact of growing nonresponse on estimates from a national RDD telephone survey. Public Opin. Q. 70(5), 759–779 (2006)

  • HL-senteret: Antisemittisme i Norge. Den norske befolkningens holdninger til jøder og andre minoriteter (Antisemitism in Norway. The Norwegian Population's Attitudes towards Jews and Other Minorities). HL-senteret, Oslo (2012)

  • Pew Research Center: Assessing the representativeness of public opinion surveys (2012). http://www.people-press.org/2012/05/15/assessing-the-representativeness-of-public-opinion-surveys/

  • Singer, E.: Introduction: nonresponse bias in household surveys. Public Opin. Q. 70(5), 637–645 (2006)

  • Singleton, R., Straits, B.: Approaches to Social Research, 4th edn. Oxford University Press, New York (2005)

  • Smith, T.W.: Developing nonresponse standards. In: Groves, R., Dillman, D., Eltinge, J., Little, R.J.A. (eds.) Survey Nonresponse, pp. 27–40. Wiley, New York (2002)

  • Vedøy, T.F., Skretting, A.: Ungdom og rusmidler. Resultater fra spørreskjemaundersøkelser 1968–2008 (Youth and Intoxicants. Results from Questionnaire Surveys 1968–2008). SIRUS-Rapport nr. 5/2009. Statens institutt for rusmiddelforskning, Oslo (2009)

Acknowledgments

I have received useful comments and suggestions from anonymous referees; from Erik Dalen, Kristin Rogge Pran, Jan-Paul Brekke, Karen Lillebøe and Arild Sæle at Ipsos MMI; and from Johannes Bergh, Gunnar Sæbø, Tale Hellevik and Erik Neslein Mønness.

Author information

Correspondence to Ottar Hellevik.

Cite this article

Hellevik, O. Extreme nonresponse and response bias. Qual Quant 50, 1969–1991 (2016). https://doi.org/10.1007/s11135-015-0246-5
