
An item response theory and Rasch analysis of the NUDKS: a data literacy scale

Published in Educational Assessment, Evaluation and Accountability

Abstract

School systems have a pressing need to reliably assess the data literacy and data use skills of their educators. To address this need, the current study refines the NU Data Knowledge Scale (NUDKS) for assessing teacher data literacy for classroom data. A data-based decision-making framework provides the theoretical underpinnings for the instrument. The study’s objective is to refine the NUDKS so that items are located at various points along the data literacy continuum; in this fashion, the NUDKS should be able to measure teacher data literacy throughout that continuum. To this end, item response theory is used to estimate item locations and teacher data literacy. Analyses revealed that the NUDKS conformed to the Rasch model. To facilitate future use of the NUDKS, concordance tables were created to provide a quick determination of teacher data literacy.
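For readers unfamiliar with the model named in the abstract, the dichotomous Rasch model expresses the probability of a correct item response as a function of the difference between a person's ability and an item's location, both on a common logit scale. The sketch below is a minimal illustration of that standard formula; the values shown are hypothetical and are not estimates from this study.

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Probability of a correct response under the dichotomous Rasch model.

    theta: person ability (e.g., teacher data literacy), in logits
    b:     item location (difficulty), in logits
    P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))
    """
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Illustrative only: when ability equals the item location,
# the probability of a correct response is exactly 0.5.
p_at_location = rasch_probability(theta=0.0, b=0.0)

# An easier item (lower b) yields a higher probability for the same person.
p_easy = rasch_probability(theta=0.0, b=-1.0)
p_hard = rasch_probability(theta=0.0, b=1.0)
```

Because the model depends only on the difference theta − b, items located at various points along the continuum (as the study aims for) provide information about respondents across the full range of ability.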



Acknowledgements

The authors wish to thank the two anonymous reviewers and the editor for their useful comments on this manuscript.

Author information

Correspondence to Pamela S. Trantham.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

NUDKS item stems:

  1. When is an intervention evidence based?
  2. What is baseline data?
  3. Which of the following is an example of a strong goal statement?
  4. Which of the following is an observable behavior?
  5. How would you translate a student’s behavior ratings of rarely, sometimes, often, or almost always into data that could be used in a graph?
  6. Which of the following best describes an effective progress monitoring strategy?
  7. Your kindergarten team is working with a student who is struggling with pre-literacy skills. Which of the following is an acceptable strategy for strengthening the student’s pre-literacy skills?
  8. In addition to class grades, how could you reliably measure the academic performance of students in reading?
  9. What should you do before collecting information on a student?
  10. You have been monitoring the number of times a student was out of seat during class. Your data show that the student has good days and bad days, but it is hard to tell if the student is improving. How could a graph show the parents whether the student is making progress?
  11. Pat is constantly disrupting class by being out of his seat. How would you measure how much Pat was out of his seat during an observation?
  12. Your team has collected data on a student with behavioral disorders for several weeks and is now ready to implement an intervention. How would you show where an intervention started on a line graph?
  13. A teaching team is worried about a student who is not passing her English class. The team’s data show that the student increased her work completion from 45% of assignments to 60% of assignments. Still, the student was failing because her grades on each assignment were still low. Given what the team knows, what would be their next step?
  14. A third grade teacher surveyed the students to see which subject was their favorite: math, science, reading, or social studies. If the teacher wanted to show the students a graph describing the percent of students preferring each subject, which graph should they choose?
  15. A sixth grade teacher notices that her students appear to be arguing and complaining more than usual. She would like to collect data about what is taking place in her classroom. What kind of data collection would be useful in collecting the information she wants to measure?
  16. After collecting data on a student’s distracting behavior in class, a teacher wants to graph the number of times the student engaged in distracting behavior over a two-week period on the line graph below. What would you label the x and y axes in the below graph?
  17. A special education team met to make a plan for a student with a behavior problem. They defined the target behavior and created a rating scale from 0 to 4, with 0 representing a bad day and 4 representing a good day. They set a goal of the student earning a 3 or better each day. They collected data, but when they graphed it they were disappointed by how far the student was from meeting the goal they set. What step did the team forget in creating their plan?
  18. How could you describe the difference in data between the before-intervention and during-intervention phases on the graph below?
  19. A student with cognitive disabilities is being taught the steps of washing her hands. She routinely skips one or two steps and becomes frustrated. To figure out which steps of hand washing the student was skipping, what should the teacher do next?
  20. Your team was referred a student who has a history of being extremely difficult and resistant, with frequent tantrums. Your team is not sure why the problem behavior occurs. What should they do?
  21. A teacher has been implementing an academic intervention to increase a student’s test scores. How should you describe the difference in data between the before-intervention and during-intervention phases on the graph below?
  22. You collected 5 days of baseline data and 5 more days of data after beginning an intervention. You graphed the data on a line graph but are unable to tell whether the student’s on-task behavior improved. What could you do to clarify changes in the student’s on-task behavior over time?
  23. A team of four teachers met briefly to create a plan for collecting data on a student who was consistently disruptive during their classes. They all decided to record the number of times the student was off-task during their class periods by tallying the number of times the student was disruptive. When they met after school, their tallies varied greatly and they could not agree on what the student’s problem behavior was. What did they forget to do before collecting data on the student?
  24. You have been collecting data on a student for several weeks and decided to implement an intervention with the goal that your student would increase work completion from 45 to 80% of assignments. Your student has not missed completing an assignment for the last three weeks and appears to have reached this goal. What should you do next?
  25. A middle school team has been implementing an evidence-based intervention for student bullying. They are concerned that the teasing program might not be appropriate because 90% of their school’s enrollment is Latino/a and the program was written for a low-income, predominantly white college town in Southern Mississippi. Parents requested the school use an evidence-based program, but a counselor urges the team to use a peacemaking program that was written by a former counselor in the district because it is a better match to the students’ culture. What should the team do?
  26. An art teacher wanted to record how much she was praising a first grade classroom for positive behavior during a lesson. How could the teacher measure how much she praised the students?
  27. Midway through the year, a newly enrolled third grade student is referred to your student assistance team because of an inability to read at grade level. How could your team gain the most useful information about this student’s current reading abilities and instructional needs?
  28. You are working with a first year teacher to use a behavioral intervention program to reduce classroom interruptions and teach cooperative work behaviors. Your role is to coach the teacher in using the intervention and provide the materials to implement it properly. After two weeks the teacher stopped the intervention because it was not working. What should you do next?

About this article

Cite this article

Trantham, P.S., Sikorski, J., de Ayala, R.J. et al. An item response theory and Rasch analysis of the NUDKS: a data literacy scale. Educ Asse Eval Acc 34, 113–135 (2022). https://doi.org/10.1007/s11092-021-09372-w
