Adolescent Research Review, Volume 4, Issue 4, pp 329–340

Survey Development for Adolescents Aged 11–16 Years: A Developmental Science Based Guide

  • Atefeh Omrani (corresponding author)
  • Joanna Wakefield-Scurr
  • Jenny Smith
  • Nicola Brown
Qualitative Review

Abstract

Methodological knowledge on surveying young adolescents is scarce, and researchers often rely on theories and methodological studies based on adult respondents. However, young adolescents are still developing their cognitive, psychological, emotional and social skills, and therefore present a unique set of considerations. Question characteristics, including question type and format, question difficulty, wording, ambiguity, the number of response options, and the inclusion of a neutral mid-point, play a pivotal role in the response quality of young adolescents. Failure to address these factors is likely to encourage young adolescents to use satisficing techniques. In this article, we provide a science-based guide for developing surveys for use with adolescents aged 11–16 years. The guide considers the characteristics and developmental stages of adolescents as survey respondents and incorporates advice on appropriate question characteristics, survey layout and question sequence, approaches to pre-testing surveys, and mode of survey administration. It provides recommendations for developmentally appropriate survey design to improve response quality in survey research with young adolescents.

Keywords

Survey development · Adolescents · Satisficing · Respondent characteristic · Question characteristic

Notes

Authors’ Contributions

AO created the first draft of the article, and JWS, JS and NB provided substantive feedback on subsequent drafts. All authors contributed to several subsequent iterations and approved the final version of the article.

Funding

The authors received no financial support or funding for this work.

Compliance with Ethical Standards

Conflict of interest

The authors report no conflicts of interest.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. School of Sport, Health and Applied Science, St Mary’s University, Twickenham, UK
  2. Department of Sport and Exercise Science, University of Portsmouth, Portsmouth, UK
  3. Department of Sport and Exercise Sciences, University of Chichester, Chichester, UK
