Using Multi-faceted Rasch Models to Understand Middle School Students’ Argumentation Around Scenarios Grounded in Socio-scientific Issues

Chapter in Advances in Applications of Rasch Measurement in Science Education

Part of the book series: Contemporary Trends and Issues in Science Education (CTISE, volume 57)

Abstract

We describe the multi-faceted Rasch model as a definition of measurement in contexts with multiple independent sources of error. We apply it to the validation of open-ended, written, scenario-based assessments of argumentation around socio-scientific issues, where scores are subject to error associated with the argumentation competency being assessed, the rater assigned, and the particular socio-scientific issue given to the student. Through inspection of the hierarchy within each facet and of misfit among particular elements, we tease out the strengths and limitations of particular scenarios and raters, and ultimately derive a more general understanding of how students’ observed argumentation changes as their ability increases.


References

  • Andrich, D. (2004). Controversy and the Rasch model: A characteristic of incompatible paradigms? Medical Care, 42, I7–I16.

  • Bergan, J. R. (2013). Rasch versus Birnbaum: New arguments in an old debate. Assessment Technology.

  • Berland, L. K., & McNeill, K. L. (2010). A learning progression for scientific argumentation: Understanding student work and designing supportive instructional contexts. Science Education, 94(5), 765–793.

  • Berland, L. K., McNeill, K. L., Pelletier, P., & Krajcik, J. (2017). Engaging in scientific argumentation. In B. Reiser, C. Schwarz, & C. Passmore (Eds.), Moving beyond knowing science to making sense of the world: Bringing next generation science and engineering practices in our K-12 classrooms. National Science Teachers Association Press.

  • Boone, W. J. (2016). Rasch analysis for instrument development: Why, when, and how? CBE—Life Sciences Education, 15(4), rm4.

  • Boone, W. J., Staver, J. R., & Yale, M. S. (2013). Rasch analysis in the human sciences. Springer Science & Business Media.

  • Boone, W. J., Townsend, J. S., & Staver, J. R. (2016). Utilizing multifaceted Rasch measurement through FACETS to evaluate science education data sets composed of judges, respondents, and rating scale items: An exemplar utilizing the elementary science teaching analysis matrix instrument. Science Education, 100(2), 221–238.

  • Covitt, B., Dauer, J., & Anderson, C. (2017). The role of practices in scientific literacy. In C. Schwarz, C. Passmore, & B. Reiser (Eds.), Helping students make sense of the world using next generation science and engineering practices (pp. 59–83). NSTA Press.

  • Deane, P., Song, Y., van Rijn, P., O’Reilly, T., Fowles, M., Bennett, R., et al. (2019). The case for scenario-based assessment of written argumentation. Reading and Writing, 32, 1575–1606.

  • Gotwals, A. W., & Songer, N. B. (2010). Reasoning up and down a food chain: Using an assessment framework to investigate students’ middle knowledge. Science Education, 94(2), 259–281.

  • Kinslow, A. T., Sadler, T. D., & Nguyen, H. (2019). Socio-scientific reasoning and environmental literacy in a field-based ecology class. Environmental Education Research, 25, 388–410. https://doi.org/10.1080/13504622.2018.1442418

  • Krajcik, J. (2015). Three-dimensional instruction. The Science Teacher, 82(8), 50–52.

  • Lin, S. S., & Mintzes, J. J. (2010). Learning argumentation skills through instruction in socioscientific issues: The effect of ability level. International Journal of Science and Mathematics Education, 8(6), 993–1017.

  • Linacre, J. M. (2006). WINSTEPS Rasch measurement computer program. WINSTEPS.com

  • Linacre, J. M., & Tennant, A. (2009). More about critical eigenvalue sizes (variances) in standardized residual principal components analysis (PCA). Rasch Measurement Transactions, 23(3), 1228.

  • Linacre, J. M., & Wright, B. D. (2014). Facets: Computer program for many-faceted Rasch measurement. MESA.

  • Linking Science & Literacy for All Learners. (2018). Resources & materials: Multimodal text sets. University of Missouri. Retrieved April 22, 2022, from https://scienceandliteracy.missouri.edu/resources-materials/

  • Massey, G. J. (2007). A new approach to the logic of discovery. Theoria, Beograd, 50(1), 7–27.

  • Masters, G. N. (1988). Item discrimination: When more is worse. Journal of Educational Measurement, 25(1), 15–29.

  • National Research Council. (2014). Developing assessments for the next generation science standards. National Academies Press.

  • NGSS Lead States. (2013). Next generation science standards: For states, by states. National Academies Press.

  • Osborne, J. F., Henderson, J. B., MacPherson, A., Szu, E., Wild, A., & Yao, S. Y. (2016). The development and validation of a learning progression for argumentation in science. Journal of Research in Science Teaching, 53(6), 821–846.

  • Owens, D. C., Sadler, T. D., Petit, D., & Forbes, C. T. (2021). Exploring undergraduates’ breadth of socio-scientific reasoning through domains of knowledge. Research in Science Education, 52, 1643–1658. https://doi.org/10.1007/s11165-021-10014-w

  • Popper, K. R. (1963). Science as falsification. Conjectures and Refutations, 1(1963), 33–39.

  • Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods. Sage.

  • Sadler, T. D. (2004). Informal reasoning regarding socioscientific issues: A critical review of research. Journal of Research in Science Teaching, 41(5), 513–536.

  • Sadler, T. D., Romine, W. L., Stuart, P. E., & Merle-Johnson, D. (2013). Game-based curricula in biology classes: Differential effects among varying academic levels. Journal of Research in Science Teaching, 50(4), 479–499.

  • Sweller, J., Chandler, P., & Kalyuga, S. (2011). Cognitive load theory. Springer.

  • Thurstone, L. L. (1928). Attitudes can be measured. American Journal of Sociology, 33(4), 529–554.

  • Venville, G. J., & Dawson, V. M. (2010). The impact of a classroom intervention on grade 10 students’ argumentation skills, informal reasoning, and conceptual understanding of science. Journal of Research in Science Teaching, 47(8), 952–977.

  • Wallin, J. F., Dixon, D. S., & Page, G. L. (2007). Testing gravity in the outer solar system: Results from trans-Neptunian objects. The Astrophysical Journal, 666(2), 1296–1302.

  • Wilson, M. (2004). Constructing measures: An item response modeling approach. Routledge.

  • Wilson, M. (2009). Measuring progressions: Assessment structures underlying a learning progression. Journal of Research in Science Teaching, 46(6), 716–730.

  • Womack, A. J., Wulff, E., Sadler, T. D., & Romine, W. (2017, April). Assessment of next generation science learning [Paper presentation]. San Antonio, TX.

  • Worrall, J. (1989). Structural realism: The best of both worlds? Dialectica, 43(1–2), 99–124.

  • Wright, B. D. (1992). IRT in the 1990s: Which models work best? 3PL or Rasch? Opening remarks in an invited debate with Ron Hambleton, session 11.05, AERA annual meeting.

  • Wright, B. D., Linacre, J. M., Gustafsson, J. E., & Martin-Löf, P. (1994). Reasonable mean-square fit values. Rasch Measurement Transactions, 8(3), 370.

  • Wright, B. D., & Stone, M. A. (1979). Best test design. MESA Press.

  • Zeidler, D. L., Herman, B. C., & Sadler, T. D. (2019). New directions in socioscientific issues research. Disciplinary and Interdisciplinary Science Education Research, 1(1), 1–9.

Acknowledgements

This research was funded by National Science Foundation DRK-12 grant #2010312. The views expressed are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Author information

Correspondence to William Romine.

Appendix (Rasch Analysis Codes with Annotations)

16.1.1 Multi-faceted Rasch Analysis in FACETS
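
The control file below fits a four-facet rating-scale model to the scenario-based assessment data. As a reminder of what the software estimates, the many-facet Rasch model for these data can be written as follows (the subscript conventions are ours, chosen to match the facet labels in the control file):

\log\left(\frac{P_{nijmk}}{P_{nijm(k-1)}}\right) = B_n - D_i - C_j - S_m - F_k

where B_n is the ability of student n, D_i the difficulty of rubric item i, C_j the severity of rater j, S_m the difficulty of scenario m, and F_k the threshold at which scoring category k becomes more probable than category k-1; P_nijmk is the probability that this combination of elements yields a score in category k.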

Title = SBA Cohort 3            ; title
Missing = .                     ; a "." indicates missing data
Facets = 4                      ; how many facets (student, item, rater, scenario)
Inter-rater = 3                 ; facet 3 is the rater facet
Delements = LN                  ; how the data elements are represented (see the manual for details); other options are N, L, or NL
Non-centered = 1                ; the first facet (students) is non-centered, i.e., items, raters, and scenarios are centered at 0
Yardstick = 0,2                 ; Yardstick = horizontal columns, vertical lines, low range, high range, {"Measure" or "End"}
Model = ?,?,?,?,Score           ; student identifier, item number, rater number, scenario number, score on that mix
; example records: N2MX,1,8,4,4   C68MX,1,5,4,4   N3MX,1,8,4,4
*
Rating Scale = Score,R4,Keep    ; "Score" refers to the 'score on that mix'; R4 is a rating scale with a maximum of 4;
                                ; "Keep" means model all ordinal categories even if not observed
1 = lowest score
4 = highest score
*
Labels =                        ; the section labeling the facets begins here
1, Student                      ; name of the first facet, the students
1 = N2MX
2 = C68MX
3 = N3MX
...
320 = J23YX
321 = J24YX
322 = J25YX
*
2, Item                         ; name of the second facet, the items
1 = Claim
2 = Evidence
3 = Content
4 = Reasoning
5 = Writing
6 = Holistic
*
3, Rater                        ; name of the third facet, the raters; this is the "rater facet"
1 = 1
2 = 2
3 = 3
4 = 4
5 = 5
6 = 6
7 = 7
8 = 8
*
4, Scenario                     ; name of the fourth facet, the scenarios; analogous to alternate test forms with parallel items
1 = 1
2 = 2
3 = 3
4 = 4
5 = 5
6 = 6
7 = 7
8 = 8
*
Query = N                       ; run straight through without pausing for confirmation at each step
Data =                          ; the data records are pasted below
N2MX,1,8,4,4
C68MX,1,5,4,4
N3MX,1,8,4,4
N4MX,1,8,4,2
W15AX,1,5,1,4
W14AX,1,5,1,4
W13AX,1,5,1,4
V29AX,1,5,1,4
...
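
The Data= records are plain comma-separated lines, one scored observation per line, with fields in facet order (student, item, rater, scenario) followed by the score. Below is a minimal Python sketch, offered as a convenience rather than as part of the chapter's materials, of generating such records from a wide scoring sheet; the file name and column names (cohort3_scores.csv, student, rater, scenario) are hypothetical placeholders.

import pandas as pd

# Hypothetical wide scoring sheet: one row per student x rater x scenario,
# with one column per rubric item scored 1-4 (blank where not scored).
wide = pd.read_csv("cohort3_scores.csv")

item_cols = ["Claim", "Evidence", "Content", "Reasoning", "Writing", "Holistic"]
item_number = {name: i + 1 for i, name in enumerate(item_cols)}  # Facets wants numeric element IDs

# Reshape to one observation per line: student, item, rater, scenario, score.
long = wide.melt(
    id_vars=["student", "rater", "scenario"],
    value_vars=item_cols,
    var_name="item",
    value_name="score",
)
long["item"] = long["item"].map(item_number)
# "." is the missing-data indicator declared in the control file above.
long["score"] = long["score"].apply(lambda s: "." if pd.isna(s) else str(int(s)))

with open("facets_data.txt", "w") as f:
    for r in long.itertuples(index=False):
        f.write(f"{r.student},{r.item},{r.rater},{r.scenario},{r.score}\n")

Writing one record per observation lets Facets handle incomplete rating designs naturally: a combination that was never scored either produces no record at all or carries the declared "." indicator.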

16.1.2 Two-Faceted Rasch Analysis in WINSTEPS

&INST                            ; indicates the beginning of the control-file syntax
TITLE = '2-faceted SBA analysis for Cohort 3' ; title given by the researcher
NI = 6                           ; six items
ITEM1 = 1                        ; the first item begins in the first column
XWIDE = 1                        ; each item is one column wide
CODES = 1234                     ; ordinal codes to be modeled; all other codes are treated as missing
NCOL = 6                         ; six columns in the data
MODELS = R                       ; default for dichotomous, rating scale, and partial credit models
STBIAS = Y                       ; correction for estimation bias
PRCOMP = S                       ; principal components analysis on standardized residuals
GROUPS = 0                       ; each item is allowed its own unique rating scale
TABLES = 11111111111111111111111 ; asks for all of the tables
&END                             ; end of the control-variable syntax; item names follow
1 = Claim                        ; name of the first item
2 = Evidence                     ; name of the second item
3 = Content                      ; name of the third item
4 = Reasoning                    ; name of the fourth item
5 = Writing                      ; name of the fifth item
6 = Holistic                     ; name of the sixth item
END NAMES                        ; end of the naming syntax; data records are pasted below
434343
444434
444444
232222
444444
...
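
The data records here are fixed-width strings: six single-character columns (NI = 6, XWIDE = 1), one response string per row. A minimal sketch of flattening the long-format records from the previous sketch into this layout is shown below; it keeps one rating per student-by-item combination, which is an illustrative assumption, not the chapter's linking procedure.

import pandas as pd

# Hypothetical input: the long-format records written for Facets above.
long = pd.read_csv(
    "facets_data.txt",
    header=None,
    names=["student", "item", "rater", "scenario", "score"],
    dtype=str,
)

# Keep one rating per student x item (an illustrative choice), then pivot to
# a students-by-items grid whose cells are the 1-4 codes (or "." if missing).
first = long.drop_duplicates(subset=["student", "item"])
grid = first.pivot(index="student", columns="item", values="score")
grid = grid.reindex(columns=[str(i) for i in range(1, 7)])  # items 1..6 in rubric order

with open("winsteps_data.txt", "w") as f:
    for _, row in grid.iterrows():
        f.write("".join("." if pd.isna(v) else v for v in row) + "\n")

Collapsing to one rating per student-by-item discards the rater facet, which is exactly what moving from the four-facet FACETS analysis to the two-facet WINSTEPS analysis entails; how ratings are chosen or combined is a substantive decision rather than a formatting one.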

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Romine, W., Lannin, A., Kareem, M.K., Singer, N. (2023). Using Multi-faceted Rasch Models to Understand Middle School Students’ Argumentation Around Scenarios Grounded in Socio-scientific Issues. In: Liu, X., Boone, W.J. (eds) Advances in Applications of Rasch Measurement in Science Education. Contemporary Trends and Issues in Science Education, vol 57. Springer, Cham. https://doi.org/10.1007/978-3-031-28776-3_16

  • DOI: https://doi.org/10.1007/978-3-031-28776-3_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-28775-6

  • Online ISBN: 978-3-031-28776-3

  • eBook Packages: Education, Education (R0)
