Learning Analytics: Negotiating the Intersection of Measurement Technology and Information Technology

  • Mark Wilson
  • Kathleen Scalise
Living reference work entry


In this chapter, we review the current state of play in the area of overlap between learning analytics (LA), specifically data mining and exploratory analytics, and the field of measurement science. We review the logic of measurement science, as instantiated in the BEAR Assessment System (BAS), and illustrate it in the context of an LA example. The example shows how complex digital assessments can be designed through the BAS with attention to measurement science, while LA approaches can help to score some of the complex digital artifacts embedded in the design. With that background, we suggest ways in which the two approaches support and complement one another, leading to a larger perspective. The chapter concludes with a discussion of the implications of this emerging intersection and a survey of possible next steps.


Keywords: Learning analytics · Data mining · Measurement science · BEAR Assessment System · Twenty-first-century skills · ATC21S



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. University of California, Berkeley, USA
  2. University of Oregon, Eugene, USA
