Opportunities to Participate (OtP) in Science: Examining Differences Longitudinally and Across Socioeconomically Diverse Schools

  • Christine L. Bae
  • Morgan DeBusk-Lane
  • Kathryn N. Hayes
  • Fa Zhang


The purpose of this study was to develop and validate a survey of opportunities to participate (OtP) in science that allows educators and researchers to closely approximate the types of learning opportunities students have in science classrooms. Additionally, we examined whether and how opportunity gaps in science learning exist across schools with different socioeconomic levels. The OtP in science survey consists of four dimensions: acquiring foundational knowledge, planning an investigation, conducting an investigation, and using evidence to communicate findings. A total of 1214 middle school students across 8 diverse school districts completed the survey. Tests of reliability, construct validity, measurement invariance, and external validity were conducted using data collected at the beginning and end of the school year. Results showed that the OtP in science survey scores were internally reliable, invariant across school socioeconomic groups and time points (i.e., lacking systematic biases in responses by group or time point), and externally valid. Given that scores from the survey were reliable and valid indicators of the four dimensions of interest, structural invariance tests were conducted to examine possible differences in OtP in science across schools from high, middle, and low socioeconomic backgrounds. Findings demonstrate specific ways in which opportunity gaps to learn science manifest in lower income schools. We discuss the implications of these gaps for science instruction, professional development, policy, and diverse students’ interest and achievement in science, and propose several lines of future study.


Opportunities in science Measurement invariance Structural invariance Survey Middle school Socioeconomic status 



This research is based upon work supported by the National Science Foundation under Grant No. 096280. We would also like to thank the participating teachers and students, as well as the science education team at the Alameda County Office of Education.



Copyright information

© Springer Nature B.V. 2018

Authors and Affiliations

  1. Department of Foundations of Education, Virginia Commonwealth University, Richmond, USA
  2. Department of Educational Leadership, California State University East Bay, Hayward, USA
