A Comparison of Paper and Computer Administered Strengths and Difficulties Questionnaire

  • Praveetha Patalay
  • Daniel Hayes
  • Jessica Deighton
  • Miranda Wolpert

Abstract

The Strengths and Difficulties Questionnaire (SDQ) is one of the most widely used measures of young people's mental health difficulties in research and clinical decision-making. Although the SDQ is available in both paper and computer survey formats, cross-format equivalence has yet to be established. The current study aimed to assess the measure's equivalence across paper- and computer-based survey formats in a community-based school setting. The study examined self-reported measures completed by a matched sample of 11–14-year-olds in secondary schools in England (589 completed the paper version; 589 the online version). Analyses demonstrated that, although the factor structure did not vary by survey format, the models fit poorly in both formats, limiting the use of model-based invariance testing. Results indicate that the measure does not operate similarly across formats: scale-level mean differences were observed for the hyperactivity scale, and consequently for the total difficulties score, with higher scores on the paper version. Responses to the impact supplement were also influenced by survey format, with higher impact in specific domains disclosed on the computer-based measure. Item-level differential item functioning (DIF) was observed for four items in the measure, including two from the prosocial scale, where the DIF was large enough to affect the scale as a whole (DTF ν² = 0.14). The inconsistency across survey formats highlights the need for further assessment of how survey format influences young people, their perceived privacy, and their mental health disclosures via different media. The findings also highlight the potential confounding effect of format when different methods of data collection are used, with a potentially substantive impact on cross-sample comparisons and within-child clinical review.
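The DIF results summarized above rest on comparing item responses across formats after conditioning on overall score, typically via Mantel-Haenszel-type statistics (with polytomous extensions for the SDQ's 3-point items). As a minimal sketch of the core idea only, the Python below computes the dichotomous Mantel-Haenszel common odds ratio over total-score strata; all data, names, and cut points are hypothetical, and this is not the study's actual analysis pipeline, which used dedicated DIF software.

```python
import numpy as np

def mantel_haenszel_or(endorsed, group, stratum):
    """Mantel-Haenszel common odds ratio across matching-score strata.

    endorsed : 0/1 array, item response dichotomized at some cut point
    group    : 0/1 array, survey format (0 = paper/reference, 1 = computer/focal)
    stratum  : int array, total-score stratum for each respondent

    An estimate far from 1 flags potential DIF: after conditioning on
    overall score, one format endorses the item more often than the other.
    """
    endorsed = np.asarray(endorsed, dtype=bool)
    group = np.asarray(group, dtype=bool)
    stratum = np.asarray(stratum)
    num = den = 0.0
    for s in np.unique(stratum):
        m = stratum == s
        n = m.sum()
        a = np.sum(endorsed[m] & ~group[m])   # paper, endorsed
        b = np.sum(~endorsed[m] & ~group[m])  # paper, not endorsed
        c = np.sum(endorsed[m] & group[m])    # computer, endorsed
        d = np.sum(~endorsed[m] & group[m])   # computer, not endorsed
        num += a * d / n
        den += b * c / n
    return num / den

# Toy usage with random data, so the estimate should hover near 1 (no DIF).
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 1000)
stratum = rng.integers(0, 5, 1000)    # e.g. banded total scores
endorsed = rng.integers(0, 2, 1000)
print(mantel_haenszel_or(endorsed, group, stratum))
```

For the SDQ's ordinal items a polytomous analogue (such as the Liu-Agresti cumulative common odds ratio) would be used in practice, and the DTF statistic reported in the abstract (ν²) summarizes such item-level effects at the scale level.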

Keywords

SDQ · Computer · Psychometric properties · Validation · DIF · Format effects

Notes

Acknowledgments

We would like to thank members of the wider research group who are part of the larger study from which data are drawn and the Department for Children, Schools and Families (now the Department for Education) for funding the research. We are grateful to the schools and young people who participated in the study.

Conflict of Interest

Praveetha Patalay, Daniel Hayes, Jessica Deighton, and Miranda Wolpert declare that they have no conflict of interest.

Experiment Participants

Ethics approval for data collection in this study and the wider study from which data are drawn was received from the Research Ethics Committee at University College London.


Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  • Praveetha Patalay (1)
  • Daniel Hayes (1)
  • Jessica Deighton (1)
  • Miranda Wolpert (1)

  1. Evidence Based Practice Unit (EBPU), University College London and the Anna Freud Centre, London, UK
