Technology, Knowledge and Learning, Volume 21, Issue 2, pp. 151–153

Challenging Authentic Digital Scenarios

Editorial

Components of authentic learning are being investigated globally in light of digital technologies and their potential to transform learning and teaching. Authenticity is an important criterion for observing and analyzing a digital performance, because the validity of observable evidence of knowledge is provided by actions situated in a particular context and culture (Brown et al. 1989; Rosen 2015). Components of authenticity include real-world problems, inquiry learning activities, discourse in a community of learners, and student autonomy (Gibson and Ifenthaler 2016).

Within these digital scenarios, data inputs collected by an interactive computational application come either directly from the learner or secondarily from aggregations of those inputs (Ifenthaler 2015). A mouse click, a tracked eye movement, or a keyboard press is an example of a direct event-level interaction; a group of such actions, such as forming a word with keyboard presses or organizing screen resources into a priority order by dragging and dropping them onto an image, is an example of an aggregated set of actions (Ifenthaler and Widanapathirana 2014; Nasraoui 2006). When the components of authentic learning are enabled by technology and the event-level interactions of learners are recorded as a historical stream of items, a voluminous and varied data record of the performance in the scenario rapidly accumulates into a transcript (Berland et al. 2014; Gibson and de Freitas 2016; Romero and Ventura 2015).
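The distinction between event-level interactions and aggregated actions can be illustrated with a minimal sketch. The event types and field names below are hypothetical, not drawn from any of the cited systems; the example simply shows how a raw stream of keypress events might be aggregated into word-level actions in a transcript.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    """A single event-level interaction, e.g. a keypress, click, or gaze sample."""
    timestamp: float  # seconds since the scenario started
    kind: str         # hypothetical event type, e.g. "keypress" or "click"
    payload: str      # e.g. the key pressed or the element clicked

def aggregate_keypresses(events: List[Event]) -> List[str]:
    """Aggregate event-level keypresses into words: one way a stream of
    low-level interactions becomes higher-level actions in a transcript."""
    words: List[str] = []
    current: List[str] = []
    for event in events:
        if event.kind != "keypress":
            continue  # other event kinds would feed other aggregators
        if event.payload == " ":
            if current:  # a space closes the word being formed
                words.append("".join(current))
                current = []
        else:
            current.append(event.payload)
    if current:  # flush a trailing word with no closing space
        words.append("".join(current))
    return words

stream = [
    Event(0.1, "keypress", "h"), Event(0.3, "keypress", "i"),
    Event(0.5, "keypress", " "), Event(0.7, "click", "resource-panel"),
    Event(0.9, "keypress", "o"), Event(1.1, "keypress", "k"),
]
print(aggregate_keypresses(stream))  # ['hi', 'ok']
```

In a real system the same event stream would feed many such aggregators at once, and the resulting higher-level actions would accumulate into the performance transcript described above.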

Currently, large records of data about the context of authentic digital scenarios and the actual performance of individuals or teams are being collected. However, real-time analysis and feedback have yet to be implemented; they require intelligent adaptive algorithms to enable meaningful analysis as well as personalized and adaptive feedback to the learner (Ifenthaler and Erlandson 2016). The articles in this special issue stem from an interdisciplinary group of researchers and aim to facilitate scholarly research and theory on contemporary technological and pedagogical issues and their implications for learning and instruction in authentic digital scenarios.

1 Paper Selection Process

This special issue is assembled from extended versions of the best papers from the Special Interest Group Technology, Instruction, Cognition and Learning (TICL; http://bit.ly/AERA-TICL) presented at the American Educational Research Association (AERA; www.aera.net) 2015 Annual Meeting, held in Chicago, IL, USA in April 2015. Each contribution represents a unique research or technological approach that highlights the intersection of technology, instruction, cognition and learning.

During the AERA 2015 Annual Meeting, the program committee evaluated the paper presentations. Based on this evaluation and the results of the earlier AERA double-blind review process, the highest-ranked papers were selected for inclusion in this special issue. Authors of the selected papers were invited to extend their manuscripts into full journal articles and were provided with detailed information about the journal's requirements. Authors submitted their full manuscripts by the end of July 2015. Each manuscript was assigned to at least three expert reviewers. Based on the reviewers' comments and individual feedback from the editor, authors were asked to submit their final revised manuscripts by the end of December 2015. The final acceptance of manuscripts was completed by the end of May 2016.

2 Contributors to this Special Issue

This special issue begins with Measurement in Learning Games Evaluation: Review of Methodologies Used in Determining Effectiveness of Math Snacks Games and Animations. The authors, Karen Trujillo (New Mexico State University), Barbara Chamberlin (New Mexico State University), Karin Wiburg (New Mexico State University), and Amanda Armstrong (New Mexico State University), focus on the effectiveness and impact of a set of mathematical educational games and animations for middle-school aged students.

André R. Denham (University of Alabama) investigates, in Improving the Design of a Learning Game Through Intrinsic Integration and Playtesting, the effectiveness of a design and development approach centered on playtesting.

Got Game? A Choice-Based Learning Assessment of Data Literacy and Visualization Skills by Doris B. Chin (Stanford Graduate School of Education), Kristen P. Blair (Stanford Graduate School of Education), and Daniel L. Schwartz (Stanford Graduate School of Education) describes the design of a choice-based assessment and reports on an initial study of the curriculum and game with 10th grade biology students.

Brian C. Nelson (Arizona State University), Younsu Kim (Arizona State University), and Kent Slack (Arizona State University) examine, in Visual Signaling in a High-Search Virtual World-based Assessment: A SAVE Science Design Study, design frameworks intended to help students manage the high cognitive effort they may experience while completing tests.

Flipping the Classroom: Embedding Self-Regulated Learning Prompts in Videos by Daniel C. Moos (Gustavus Adolphus College) and Caitlin Bonde (Gustavus Adolphus College) addresses the effectiveness of embedding self-regulated learning prompts in a video designed for the flipped class model.

Victor Law (University of New Mexico), Xun Ge (University of Oklahoma), and Deniz Eseryel (North Carolina State University) propose, in The Development of a Self-regulation in a Collaborative Context Scale, a new instrument to measure self-regulation in a collaborative context.

MOOCs for Research: The Case of the Indiana University Plagiarism Tutorial and Tests by Theodore Frick (Indiana University) and Cesur Dagli (Indiana University) presents two research studies in progress that demonstrate the value of MOOCs (Massive Open Online Courses) as vehicles for research.

The special issue concludes with an emerging technology report, Screencasts: Formative Assessment for Mathematical Thinking, by Melissa Soto (San Diego State University) and Rebecca Ambrose (University of California, Davis). The authors highlight how a student-generated screencast example can be used as a formative assessment tool.

The eight papers of this special issue demonstrate the many complex interactions between technology, instruction, cognition and learning. Their findings point to implications for the transformative potential of digital technologies in planning and supporting learning in multiple contexts.

References

  1. Berland, M., Baker, R. S., & Blikstein, P. (2014). Educational data mining and learning analytics: Applications to constructionist research. Technology, Knowledge and Learning, 19(1–2), 205–220. doi:10.1007/s10758-014-9223-7
  2. Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32–42.
  3. Gibson, D. C., & de Freitas, S. (2016). Exploratory analysis in learning analytics. Technology, Knowledge and Learning, 21(1), 5–19. doi:10.1007/s10758-015-9249-5
  4. Gibson, D. C., & Ifenthaler, D. (2016). Analysing performance in authentic digital scenarios. In R. Huang, N.-S. Chen, & C. W. W. Kinshuk (Eds.), Authentic learning through advances in technologies. New York, NY: Springer.
  5. Ifenthaler, D. (2015). Learning analytics. In J. M. Spector (Ed.), The SAGE encyclopedia of educational technology (Vol. 2, pp. 447–451). Thousand Oaks, CA: Sage.
  6. Ifenthaler, D., & Erlandson, B. E. (2016). Learning with data: Visualization to support teaching, learning, and assessment. Technology, Knowledge and Learning, 21(1), 1–3. doi:10.1007/s10758-015-9273-5
  7. Ifenthaler, D., & Widanapathirana, C. (2014). Development and validation of a learning analytics framework: Two case studies using support vector machines. Technology, Knowledge and Learning, 19(1–2), 221–240. doi:10.1007/s10758-014-9226-4
  8. Nasraoui, O. (2006). A multi-layered and multi-faceted framework for mining evolving web clickstreams. Adaptive and Personalized Semantic Web, 14, 11–35. doi:10.1007/3-540-33279-0_2
  9. Romero, C., & Ventura, S. (2015). J. A. Larusson, B. White (Eds.): Learning analytics: From research to practice. Technology, Knowledge and Learning. doi:10.1007/s10758-015-9244-x
  10. Rosen, Y. (2015). Computer-based assessment of collaborative problem solving: Exploring the feasibility of human-to-agent approach. International Journal of Artificial Intelligence in Education, 25(3), 380–406. doi:10.1007/s40593-015-0042-3

Copyright information

© Springer Science+Business Media Dordrecht 2016

Authors and Affiliations

  1. University of Mannheim, Mannheim, Germany
  2. Deakin University, Melbourne, Australia