The effects of survey mode and sampling in Belgian election studies: a comparison of a national probability face-to-face survey and a nonprobability Internet survey

Abstract

National probability election surveys are increasingly being abandoned. Decreasing response rates and the escalating costs of face-to-face and telephone interviews have strengthened election scholars’ reliance on nonprobability internet samples to conduct election surveys online. In a number of countries, experiments with alternative ways of recruiting respondents and different interview modes have been well documented. For other countries, however, substantially less is known about the consequences of relying on nonprobability internet panels. In this paper, we investigate the effects of survey mode and sampling method in the Belgian context. This is a particularly important and relevant case study because election researchers in Belgium can draw a sample of voters directly from the National Register. In line with previous studies, we find important differences in the marginal distributions of variables measured in the two surveys. When considering vote choice models and the inferences that scholars would draw from them, by contrast, we find only minor differences.

Notes

  1.

    It is important to point out, though, that self-administered surveys come with problems as well, such as challenges related to measuring political knowledge reliably (and preventing cheating) (Motta et al. 2016).

  2.

    For the British Election Studies, the response rate has dropped from 79% in 1963 (Crewe et al. 1977) to 56% in 2015 (Fieldhouse et al. 2016); in Canada, the response rate has dropped from 63% in 1965 (Converse et al. 2002) to 37% in 2015 (Northrup 2016); in Australia, the response rate has declined from 63% in 1987 to 23% in 2015 (Cameron and McAllister 2016).

  3.

    i.e., Canada, Germany, Great Britain, and the United States. See also Appendix A.

  4.

    According to the United Nations, by 2014, 85% of Belgians were internet users. This is substantially higher than 5 or 10 years earlier (70% and 54%, respectively). For more information, see http://data.un.org/Data.aspx?d=WDI&f=Indicator_Code%3AIT.NET.USER.P2.

  5.

    PartiRep stands for ‘Participation and Representation,’ an Inter-University Attraction Pole that was funded by the Belgian Science Policy. More information on the project can be found at www.partirep.eu/.

  6.

    More info on this research project can be found at www.electoraldemocracy.com/.

  7.

    Ansolabehere and Schaffner (2014) include a mail survey in their comparison as well. For a full overview of the exact sampling and mode comparisons in the studies cited in this literature review, see the supplementary materials.

  8.

    Quotas for age were based on three broad age ranges: 18–34, 35–54, and 55–99 years. The education quotas were likewise based on three categories: lower secondary education, upper secondary education, and tertiary education.

  9.

    For the Flemish region: GMI, HPOL, and Toluna. In Wallonia: GMI, HPOL, Toluna, and SSI.

  10.

    Passwords were sent to panelists along with the URL to the survey. In this way, access to the survey was controlled, and only the selected panelists could participate.

  11.

    This was the case for the USC Dornsife/Los Angeles Times Daybreak poll during the 2016 presidential election in the United States, cf. www.nytimes.com/2016/10/13/upshot/how-one-19-year-old-illinois-man-is-distorting-national-polling-averages.html.

  12.

    We apply the FINALweightg weight for the PartiRep dataset and the WEIGHT1 weights for the MEDW data. For the pre-electoral wave, the PartiRep weight varies between 0.55 and 3.19, with a mean of 1.00 and a standard deviation of 0.43. The MEDW basic sociodemographic wave for the pre-electoral wave has a minimum value of 0.54 and a maximum value of 3.71. Its mean is 1.00 with a standard deviation of 0.43.

  13.

    We compare the reported votes in the samples with the vote shares that parties obtained in the Flemish region, excluding voters in Brussels.

  14.

    We do not estimate models explaining the vote for some of the smaller parties, as the datasets only included small numbers of respondents who voted for green, populist radical right, and extreme-left parties. It is important to note that Fig. 2 indicated that differences in the reported vote choices are somewhat larger for radical parties. As a result, not investigating the determinants of voting for these parties might lead us to underestimate differences between the two surveys.

  15.

    When estimating the bivariate models without applying the sociodemographic weight, six interaction terms are significant (results available from the authors). Weighting thus only has a marginal impact when focusing on explanatory models.

  16.

    For a desired significance level of 0.05, we divide 0.05 by the number of tests. In this case, 0.05/28 results in a p value threshold of 0.002 (Gelman et al. 2012).
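
    This correction is simple enough to verify directly; as a minimal sketch (the helper function name is ours, for illustration), the Bonferroni-adjusted threshold is the desired level divided by the number of tests:

```python
# Bonferroni correction: divide the desired significance level
# by the number of hypothesis tests performed.
def bonferroni_threshold(alpha: float, n_tests: int) -> float:
    return alpha / n_tests

# 28 tests at the 0.05 level, as in this note
# (0.05/28 = 0.00179, reported rounded as 0.002):
print(round(bonferroni_threshold(0.05, 28), 3))  # 0.002
# 21 tests give the same rounded threshold:
print(round(bonferroni_threshold(0.05, 21), 3))  # 0.002
```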

  17.

    Compared to a series of bivariate models on unweighted data, there is no improvement in the number of significant interaction terms. (Results available from the authors.)

  18.

    For a significance level of 0.05 and 21 tests, the p value threshold is 0.05/21, or 0.002.

  19.

    In a supplementary analysis, reported in Appendix F, we have also verified whether the impact of political interest on vote choice differs between the two samples. For none of the seven parties (four in the Flemish region and three in the Walloon region) is the interaction term (survey × political interest) significant at conventional levels.

  20.

    For the Walloon sample as well, Kolmogorov–Smirnov tests indicate that the distributions of respondents’ answers on the party like/dislike scales differ significantly between the two samples, without a single exception.
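
    A two-sample Kolmogorov–Smirnov test of this kind can be sketched with SciPy; the data below are simulated for illustration and are not the actual survey responses:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Hypothetical 0-10 like/dislike scores from two samples; the second
# is shifted upward to illustrate a detectable distributional difference.
face_to_face = rng.integers(0, 11, size=1000)
online = np.clip(rng.integers(0, 11, size=1000) + 1, 0, 10)

# The KS test compares the two empirical distribution functions.
result = ks_2samp(face_to_face, online)
print(f"D = {result.statistic:.3f}, p = {result.pvalue:.4g}")
```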

  21.

    Unfortunately, we do not have a large number of alternative dependent variables for which we could investigate whether the two surveys lead to different conclusions about their determinants. Given the overall high turnout, and the overreporting of turnout, explaining reported turnout is not an option. We did pursue an additional analysis explaining hypothetical turnout under voluntary voting rules. These supplementary analyses did not reveal strong differences between the two surveys that could not be attributed to wording differences.

  22.

    However, it also has to be noted that face-to-face interviews can be much longer than standard online surveys. For that reason, it is not straightforward to compare the cost per respondent of the PartiRep survey to that of the MEDW survey.

References

  1. Alvarez, R. Michael, Robert P. Sherman, and Carla VanBeselaere. 2003. Subject Acquisition for Web-Based Surveys. Political Analysis 11 (1): 23–43.

  2. André, Audrey, and Sam Depauw. 2015. A Divided Nation? The 2014 Belgian Federal Elections. West European Politics 38 (1): 228–237.

  3. Ansolabehere, Stephen, and Brian F. Schaffner. 2014. Does Survey Mode Still Matter? Findings from a 2010 Multi-Mode Comparison. Political Analysis 22 (3): 285–303.

  4. Baker, Reg, Stephen J. Blumberg, Michael J. Brick, Mick P. Couper, J. Melanie Courtright, Michael Dennis, Don Dillman, et al. 2010. AAPOR Report on Online Panels. Public Opinion Quarterly 74 (4): 711–781.

  5. Berrens, Robert P., Alok K. Bohara, Hank Jenkins-Smith, Carol Silva, and David L. Weimer. 2003. The Advent of Internet Surveys for Political Research: A Comparison of Telephone and Internet Samples. Political Analysis 11 (1): 1–22.

  6. Breton, Charles, Fred Cutler, Lachance Sarah, and Alex Mierke-Zatwarnicki. 2017. Telephone versus Online Survey Modes for Election Studies: Comparing Canadian Public Opinion and Vote Choice in the 2015 Federal Election. Canadian Journal of Political Science 50 (4): 1005–1036.

  7. Bytzek, Evelyn, and Ina E. Bieber. 2016. Does Survey Mode Matter for Studying Electoral Behaviour? Evidence from the 2009 German Longitudinal Election Study. Electoral Studies 43: 41–51.

  8. Cameron, Sarah M., and Ian McAllister. 2016. Trends in Australian Political Opinion: Results from the Australian Election Study 1987–2016. Canberra: Australian National University.

  9. Chang, Linchiat, and Jon A. Krosnick. 2009. National Surveys via RDD Telephone Interviewing versus the Internet. Comparing Sample Representativeness and Response Quality. Public Opinion Quarterly 73 (4): 641–678.

  10. Converse, Philip, John Meisel, Maurice Pinard, Peter Regenstreif, and Mildred Schwartz. 2002. Canadian National Election Study, 1965. ICPSR 7225 [Data file]. Ann Arbor (MI): Inter-University Consortium for Political and Social Research [distributor].

  11. Couper, Mick P. 2000. Web Surveys: A Review of Issues and Approaches. Public Opinion Quarterly 64 (4): 464–494.

  12. Crewe, Ivor, Bo Sarlvik, and James Alt. 1977. Partisan Dealignment in Britain 1964–1974. British Journal of Political Science 7 (2): 129–190.

  13. Curtin, Richard, Stanley Presser, and Eleanor Singer. 2005. Changes in Telephone Survey Nonresponse over the Past Quarter Century. Public Opinion Quarterly 69 (1): 87–98.

  14. Dassonneville, Ruth, and Dieter Stiers. 2018. Electoral Volatility in Belgium (2009–2014): Is There a Difference between Stable and Volatile Voters? Acta Politica 53 (1): 68–97.

  15. Deschouwer, Kris (ed.). 2018. Mind the Gap. Political Participation and Representation in Belgium. Lanham, MD: Rowman & Littlefield.

  16. Dillman, Don A. 2000. Mail and Internet Surveys: The Tailored Design Method. New York: Harper & Row.

  17. Durand, Claire, André Blais, and Mylène Larochelle. 2004. The Polls in the 2002 French Presidential Election: An Autopsy. Public Opinion Quarterly 68 (4): 602–622.

  18. Fieldhouse, Ed, Jane Green, Geoffrey Evans, Herman Schmitt, Cees van der Eijk, Jonathan Mellon, and Chris Prosser. 2016. British Election Study, 2015: Face-to-Face Post-Election Survey [data collection]. UK Data Service. SN: 7972. https://doi.org/10.5255/UKDA-SN-7972-1.

  19. Fieldhouse, Ed, Jane Green, Geoffrey Evans, Herman Schmitt, Cees van der Eijk, Jonathan Mellon, and Chris Prosser. 2017. British Election Study Internet Panel Waves 1-13 [datafile]. https://doi.org/10.15127/1.293723.

  20. Foucault, Martial. 2017. L’enquête électorale française (www.enef.fr). Paris: CEVIPOF SciencesPo.

  21. Gelman, Andrew, Jennifer Hill, and Masanao Yajima. 2012. Why We (Usually) Don’t Have to Worry About Multiple Comparisons. Journal of Research on Educational Effectiveness 5 (2): 189–211.

  22. Groves, Robert M. 2006. Nonresponse Rates and Nonresponse Bias in Household Surveys. Public Opinion Quarterly 70 (5): 646–675.

  23. Hooghe, Marc, Sofie Marien, and Teun Pauwels. 2011. Where Do Distrusting Voters Turn if There is No Viable Exit or Voice Option? The Impact of Political Trust on Electoral Behaviour in the Belgian Regional Elections of June 2009. Government and Opposition 46 (2): 245–273.

  24. Hooghe, Marc, Sara Vissers, Dietlind Stolle, and Valérie-Anne Mahéo. 2010. The Potential of Internet Mobilization: An Experimental Study on the Effect of Internet and Face-to-Face Mobilization Efforts. Political Communication 27 (4): 406–431.

  25. Keeter, Scott, Courtney Kennedy, April Clark, Trevor Tompson, and Mike Mokrzycki. 2007. What’s Missing from National Landline RDD Surveys? The Impact of the Growing Cell-Only Population. Public Opinion Quarterly 71 (5): 772–792.

  26. Lynn, Peter. 2015. Alternative Sequential Mixed-Mode Designs: Effects on Attrition Rates, Attrition Bias, and Costs. Journal of Survey Statistics and Methodology 1 (2): 183–205.

  27. Malhotra, Neil, and Jon A. Krosnick. 2007. The Effect of Survey Mode and Sampling on Inferences about Political Attitudes and Behavior: Comparing the 2000 and 2004 ANES to Internet Surveys with Nonprobability Samples. Political Analysis 15 (3): 286–323.

  28. Manfreda, Katja Lozar, Jernej Berzelak, Vasja Vehovar, Michael Bosnjak, and Iris Haas. 2008. Web Surveys Versus Other Survey Modes: A Meta-Analysis Comparing Response Rates. International Journal of Market Research 50 (2): 269–291.

  29. MEDW, Making Electoral Democracy Work. 2014. Making Electoral Democracy Work. Belgium Regional, National and European Election-Brussels, Flanders and Wallonia. Technical Report. Montréal.

  30. Mellon, Jonathan, and Christopher Prosser. 2017. Missing Nonvoters and Misweighted Samples. Explaining the 2015 Great British Polling Miss. Public Opinion Quarterly 81 (3): 661–667.

  31. Mood, Carina. 2010. Logistic Regression: Why We Cannot Do What We Think We Can Do, and What We Can Do About It. European Sociological Review 26 (1): 67–82.

  32. Morton, Susan M.B., Dinusha K. Bandara, Elizabeth M. Robinson, and Polly E. Atatoa Carr. 2012. In the 21st Century, What is an Acceptable Response Rate? Australian and New Zealand Journal of Public Health 36 (2): 106–108.

  33. Motta, Matthew P., Timothy H. Callaghan, and Brianna Smith. 2016. Looking for Answers: Identifying Search Behavior and Improving Knowledge-Based Data Quality in Online Surveys. International Journal of Public Opinion Research 29 (4): 575–603.

  34. Northrup, David. 2016. The 2015 Canadian Election Study. Toronto: Institute for Social Research, York University.

  35. PartiRep. 2014. PartiRep Voter Panel Survey 2014. Technical Report. Brussels.

  36. Pasek, Josh. 2016. When will Nonprobability Surveys Mirror Probability Surveys? Considering Types of Inference and Weighting Strategies as Criteria for Correspondence. International Journal of Public Opinion Research 28 (2): 269–291.

  37. Sanders, David, Harold D. Clarke, Marianne C. Stewart, and Paul Whiteley. 2007. Does Mode Matter for Modeling Political Choice? Evidence From the 2005 British Election Study. Political Analysis 15 (3): 257–285.

  38. Schoen, Harald, and Thorsten Faas. 2005. When Methodology Interferes with Substance: The Difference of Attitudes towards E-Campaigning in Online and Offline Surveys. Social Science Computer Review 23 (3): 326–333.

  39. Selb, Peter, and Simon Munzert. 2013. Voter Overrepresentation, Vote Misreporting, and Turnout Bias in Postelection Surveys. Electoral Studies 32 (1): 186–196.

  40. Simmons, Alicia D., and Lawrence D. Bobo. 2015. Can Non-Full-Probability Internet Surveys Yield Useful Data? A Comparison with Full-Probability Face-to-Face Surveys in the Domain of Race and Social Inequality Attitudes. Sociological Methodology 45 (1): 357–387.

  41. Stephenson, Laura B., and Jean Crête. 2010. Studying Political Behavior: A Comparison of Internet and Telephone Surveys. International Journal of Public Opinion Research 23 (1): 24–55.

  42. Stern, Michael J., Ipek Bilgen, and Don A. Dillman. 2014. The State of Survey Methodology: Challenges, Dilemmas, and New Frontiers in the Era of the Tailored Design. Field Methods 26 (3): 284–301.

  43. Vavreck, Lynn, and Douglas Rivers. 2008. The 2006 Cooperative Congressional Election Study. Journal of Elections, Public Opinion and Parties 18 (4): 355–366.

  44. Yeager, David S., Jon A. Krosnick, LinChiat Chang, Harold S. Javitz, Matthew S. Levendusky, Alberto Simpser, and Rui Wang. 2011. Comparing the Accuracy of RDD Telephone Surveys and Internet Surveys Conducted with Probability and Non-Probability Samples. Public Opinion Quarterly 75 (4): 709–747.

Acknowledgements

A previous version of this paper was presented during the Making Electoral Democracy Work mini-conference at the 113th Annual Meeting of the American Political Science Association, San Francisco, August 31–September 3, 2017. We thank Filip Kostelka for providing technical information on the MEDW survey and Fernando Feitosa for research assistance. We are grateful to Shane Singh and Dieter Stiers for commenting on previous drafts of the paper and to the anonymous reviewers of this journal for excellent suggestions.

Author information

Corresponding author

Correspondence to Ruth Dassonneville.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 181 KB)

About this article

Cite this article

Dassonneville, R., Blais, A., Hooghe, M. et al. The effects of survey mode and sampling in Belgian election studies: a comparison of a national probability face-to-face survey and a nonprobability Internet survey. Acta Polit 55, 175–198 (2020). https://doi.org/10.1057/s41269-018-0110-4

Keywords

  • Election study
  • Belgium
  • Survey mode effects
  • Representativeness
  • Nonprobability sample