Growing Deeper Learners. How to Assess Robotics, Coding, Making and Tinkering Activities for Significant Learning

As unstructured, project-based activities fill an increasing share of students' learning time, there is a growing need for assessment models for educational robotics, tinkering, making and coding activities. On the one hand, the need for evaluation, and its ability to improve systems and learning outcomes, is poorly understood or underestimated; on the other, suitable assessment frameworks are difficult to identify or devise. The examples drawn from the international context are discussed more for their potential to raise awareness than as definitive answers.


Introduction
The spread of robotics, coding, and making and tinkering activities in K-12 education in Italy is partly due to the PNSD (National Plan for Digital Education, a policy launched to define a strategy for innovation within the school system and move it into the digital age) [1]. Its Action 15 established innovative scenarios for the development of applied digital skills. Since the 2014/15 academic year, the Ministry of Education has promoted its "Program the Future" campaign (Action 17) [2] to offer all students courses in making, robotics, and the Internet of Things.
These are also seen as key developments in STEM/STEAM (science, technology, engineering, art and mathematics) curricula focused on creating future employees for job markets with growing demands [3][4][5]. However, while their enjoyable nature is certainly welcome, it is not enough: according to the PNSD, efforts should focus on their epistemological and cultural dimensions. These are important methodological-didactic challenges for teachers, with no ready-made answers. On the other hand, the goal is completely clear: to promote learning, or rather, to grow deeper learners. To address this critical issue, teacher training programs should raise awareness of the fact that there is no such thing as a neutral medium [6]. All digital tools and all digital environments carry cognitive and social implications, and digital environments can provide support for strategic and deep learning. Yet, while it is important to understand the crucial role of evaluation methods in improving the system's effectiveness [7], both teaching practices and research (which is quite generous toward other digital activities) seem to fall short of suitable answers for evaluating robotics, coding, making and tinkering activities. We present several models here to focus attention on this issue.

Robotics, Coding, Making, and Tinkering as Mindtools
Robotics, coding, making, and tinkering all involve the learning-by-doing approach and relate to the gluing action of computational thinking, which "represents a universally applicable attitude and skill set that everyone, not just computer scientists, would be eager to learn and use" [8]. They are all considered powerful tools that can even support learning in special education. However, in addition to displaying affinities, they also have their specificities.
Tinkering and making are both technology-based extensions of DIY (do-it-yourself) culture, which intersects with hacker culture, but they are more focused on physical objects and concepts than on software. Moreover, tinkering sits at the more creative and improvisational end of the continuum.
Coding is linked to computer programming in English-speaking countries, whereas in some countries, like Italy and Spain, it has become a way of referring to visual and block programming, and a series of unplugged activities with educational objectives.
It is undeniable that there is a lack of agreement on the definition of robotics in schooling (educational robotics, educational robots, robots in education and robots for education), but consensus is strong about its potential. Angel-Fernandez and Vincze [9] suggest that educational robotics (ER) is a field of study that aims to improve the learning experience through activities, technologies, and artifacts in which robots play an active role. They therefore suggest applying the "pedagogical activity" tag to ER activities that have clear learning outcomes and evidence of learning.
A recent paper by Scaradozzi, Screpanti and Cesaretti [10] suggests that robots in education should be considered a broader field encompassing a wide range of applications, from assistive robotics to social robotics or socially assistive robotics [11]. Robotics in education (RiE), they emphasize, is not the same as ER, which is based on constructionism and the need for active exploration of the artifact. Robots playing an active role, as in social robots teaching or assisting teachers, cannot be defined as educational robots. Such activities can be beneficial for some aspects of learning, but they do not help children become "active prosumers of technology".
Hence, if, as Jonassen says, "mindtools are knowledge construction tools that learners learn with, not from" [12], ER, coding, making and tinkering can rightly be considered mindtools, insofar as they support knowledge construction [13].

The Underlying Pedagogies
Generally speaking, all these activities within the STEM/STEAM framework are a way to develop emotional, social and cognitive skills. Maker education is not about things, but rather connections, community and meaning. Also, maker-centered learning environments are built on educational theories like constructivism and constructionism, in order to develop knowledge and awareness through interactive, open-ended, student-driven, multi-disciplinary experiences. Although active learning is recognized as a higher-impact method in education, a recent study shows that most STEM teachers still choose traditional teaching methods (which are also preferred by students). This could be due in part to the increased cognitive effort required during active learning.
Researchers at Harvard University have recently discussed this issue in an article entitled "Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom" [14].

Through the Maze of Assessment
School education is essentially about defining a curriculum, teaching and learning, and assessment. Of the three, assessment is the weak factor. In formal assessment programs it serves certification and accountability. But when it is used formatively (through feedback, in particular), it is one of the most powerful interventions found in the educational research literature, able to improve learning and teaching and therefore the entire school system [15].
However, identifying models and tools to assess relatively new educational activities like robotics, coding, making, and tinkering is an additional critical issue, because of a lack of models. Owing to their fundamental focus on collaborative, iterative, process-based learning, progress in the key 21st-century skills developed in maker education cannot be suitably tracked using traditional assessment methods. The cognitive, social and emotional aspects these activities emphasize are, moreover, plentiful and often interconnected. Teachers therefore first have to identify the subject-specific and transversal results they expect, define which dimensions need to be developed, and then observe them. After this, they have to choose an effective assessment model and, finally, evaluate. The challenge (but also the pedagogical benefit) can be understood by considering, for example, the five dimensions of the Tinkering Learning Dimensions Framework [16]: initiative and intentionality, problem-solving and critical thinking, conceptual understanding, creativity and self-expression, and social-emotional engagement. Nevertheless, it is a challenge that needs to be met, in order to address the issue of how maker-centered learning can truly benefit the broader education system, for both students and teachers, and improve the quality of learning.
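To make the observation task concrete, a teacher's record across the five tinkering dimensions could be kept in a simple structured form. The sketch below is only illustrative: the dimension names come from the framework cited above, but the 0-3 observation levels, class names and helper methods are hypothetical conventions chosen for the example, not part of any published tool.

```python
# Minimal sketch of an observation record for the five Tinkering
# Learning Dimensions. Dimension names follow the framework; the
# 0-3 scale and all identifiers are hypothetical.
from dataclasses import dataclass, field

DIMENSIONS = (
    "initiative and intentionality",
    "problem-solving and critical thinking",
    "conceptual understanding",
    "creativity and self-expression",
    "social-emotional engagement",
)

@dataclass
class ObservationRecord:
    student: str
    scores: dict = field(default_factory=dict)  # dimension -> level 0..3

    def observe(self, dimension: str, level: int) -> None:
        """Log the level observed for one dimension."""
        if dimension not in DIMENSIONS:
            raise ValueError(f"unknown dimension: {dimension}")
        if not 0 <= level <= 3:
            raise ValueError("level must be between 0 and 3")
        self.scores[dimension] = level

    def coverage(self) -> float:
        """Fraction of the five dimensions observed so far."""
        return len(self.scores) / len(DIMENSIONS)

record = ObservationRecord("student A")
record.observe("conceptual understanding", 2)
record.observe("creativity and self-expression", 3)
print(record.coverage())  # 2 of 5 dimensions observed -> 0.4
```

A record like this makes the gap visible: a teacher who has only logged two dimensions knows three still await observation before any overall judgment is made.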
The sections below present validated assessment frameworks from the international landscape.

Assessing Students' Work in Robotics (The Digital Technologies Hub)
Robotics-based tasks feature different types of knowledge and cognitive processes related to the digital, mathematical and socio-cultural contexts inherent to robotics-based learning. They are so tightly connected that they can be difficult to spot and observe. The Digital Technologies Hub, developed by Education Services Australia for the Australian Government Department of Education [17], provides a large array of learning resources and services to support the implementation of quality Digital Technologies programs and curricula in schools. Among these can be found advice and resources to assist teachers in creating quality assessment tasks that are flexible but closely connected to the Australian Curriculum Achievement Standards. First, it is important to identify which elements of the achievement standard are to be targeted, and why and how.

Feedback, AfL, PASA
While the contexts and key functions of evaluation differ (evaluation of the system and the school, for certification and selection purposes, and for accountability), their common denominator is the improvement of learning outcomes. Feedback strategies play a crucial role here: in a frequently quoted article, Hattie and Timperley [18] present a conceptual analysis of feedback and analyze the evidence relating to its impact on learning and student performance. Furthermore, according to Popham [19], feedback is most powerful when used by students themselves to adjust their learning strategies; by this he means peer feedback, which he suggests should be practiced from an early age. Feedback, however, comes second, that is, after the performance itself, achieved through doing or making. Starting from similar premises, in 1998 Black and Wiliam [15] had already developed AfL (Assessment for Learning) strategies, which enable students to become more independent in their learning by taking part in self-assessment and peer assessment. The AfL movement encourages educators to use assessment data primarily for formative purposes, and also features peer and self-assessment (PASA).
There are many PASA models, but it is interesting to observe that better learning outcomes are achieved when the evaluation criteria are negotiated with the students themselves.
Catlin [20] illustrates the role of PASA in the successful use of ERA (Educational Robotic Applications). In fact, PASA is an intrinsic aspect of educational robotics activities, because they normally involve students working in groups, sharing, discussing and evaluating each other's ideas on an almost continuous basis.

Assessing the Development of Computational Thinking: The ScratchEd Framework
Scratch is one of the best-known environments used to support computational thinking. Its first version was developed by the Lifelong Kindergarten Group at the MIT Media Lab in 2003 [21], and a second version was released in 2013. In early 2019, the focus shifted away from the ScratchEd online community to other ways of supporting educators working with Scratch [22]. The framework defines three key dimensions: computational concepts, computational practices, and computational perspectives. To assess them, its authors rely primarily on three approaches: artifact-based interviews, design scenarios, and learner documentation. The focus is on evolving familiarity and fluency with computational thinking practices. Dr. Scratch, an analytical tool (currently in beta) that evaluates Scratch projects, should also be mentioned: teachers can group their students in order to track their progress rapidly and simply [23].
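The general idea behind a Dr. Scratch-style analysis can be sketched as an aggregation of per-concept levels into an overall computational-thinking score. The sketch below is a hypothetical simplification, not the tool's actual implementation or API: the concept names echo computational concepts commonly assessed in Scratch projects, while the `ct_score` function, the level dictionary, and the 0-3 scale per concept are assumptions made for illustration.

```python
# Illustrative sketch: aggregate per-concept competence levels (0-3)
# into one computational-thinking score, in the spirit of automated
# Scratch-project analyzers. Concept list and function are hypothetical.
CT_CONCEPTS = (
    "abstraction", "parallelism", "logic", "synchronization",
    "flow control", "user interactivity", "data representation",
)

def ct_score(levels: dict) -> int:
    """Sum the 0-3 level awarded to each concept (maximum 21)."""
    total = 0
    for concept in CT_CONCEPTS:
        level = levels.get(concept, 0)  # unobserved concepts score 0
        if not 0 <= level <= 3:
            raise ValueError(f"{concept}: level must be between 0 and 3")
        total += level
    return total

# A project showing some logic, strong flow control, basic interactivity:
project = {"logic": 2, "flow control": 3, "user interactivity": 1}
print(ct_score(project))  # -> 6
```

The per-concept breakdown, rather than the single total, is what gives a teacher formative information: it shows which computational concepts a student's projects have not yet exercised.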

Assessing Collaborative Problem-Solving
Since a prominent feature of robotics, coding, making and tinkering activities is collaborative problem-solving, it may be interesting to look at it from the point of view of assessment. For this we will consider the framework developed by the Assessment Research Centre of Melbourne Graduate School of Education [24].
Collaborative problem-solving can act as a unifying element that leads students to collaborate on problems through which they learn higher-order skills in science, mathematics, history or even physical education. In other words, it is both a non-cognitive skill in itself and a skill that promotes other domain-specific and cognitive competences. The project identified many indicators for interpreting scales derived from five dimensions (participation, perspective-taking, social regulation, task regulation, and knowledge building), two dimensions (social, cognitive) or one general dimension of collaborative problem-solving. This sound framework gives teachers an opportunity to identify each student's zone of proximal development, in Vygotsky's sense, for instructional intervention. Even better, it allows developmental progressions to be explored as an assessment strategy.
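The framework's move from five dimensions to two strands can be illustrated with a small sketch. The grouping of participation, perspective-taking and social regulation under a social strand, and of task regulation and knowledge building under a cognitive strand, follows the dimensions named above; the simple averaging scheme, the 0-5 scale and all identifiers are hypothetical simplifications, not the framework's actual scaling procedure (which uses calibrated indicator scales).

```python
# Illustrative sketch: collapse the five collaborative problem-solving
# dimensions into the two-dimension (social/cognitive) view. Averaging
# and the 0-5 scale are assumptions made for the example.
SOCIAL = ("participation", "perspective-taking", "social regulation")
COGNITIVE = ("task regulation", "knowledge building")

def collapse(scores: dict) -> dict:
    """Average per-dimension scores into social and cognitive strands."""
    def mean(dims):
        return sum(scores[d] for d in dims) / len(dims)
    return {"social": mean(SOCIAL), "cognitive": mean(COGNITIVE)}

scores = {
    "participation": 4, "perspective-taking": 2, "social regulation": 3,
    "task regulation": 5, "knowledge building": 3,
}
result = collapse(scores)
print(result)  # {'social': 3.0, 'cognitive': 4.0}
```

Reading the two strands side by side is what lets a teacher target intervention: the profile above suggests a student who regulates the task well but could be supported in perspective-taking.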

Conclusions
Students and families seem to appreciate the spread of robotics, coding, and making and tinkering activities in K-12 teaching practices, because of the fun and commitment these activities promote, but also because they prepare students for employment opportunities.
The efforts of the school system should focus instead on the epistemological and cultural dimensions of these areas. Indeed, research shows they are rich in cognitive, social and emotional affordances that should be suitably encouraged. Their impact should be tracked to see whether the expected outcomes are reached.
As assessment is one of the most powerful interventions, according to the literature in educational research, and is crucial to improving learning, teaching and therefore the entire school system, appropriate assessment frameworks and tools should be introduced, to close achievement gaps and increase equity. However, identifying models and tools to assess relatively new educational activities like robotics, coding, making, and tinkering, all of which are collaborative, iterative and process-based forms of learning, is an additional critical issue, because of a lack of models. In addition to training in different pedagogical and educational paradigms, this critical issue should be addressed urgently through specific training in assessing activities carried out in the digital context and with digital tools [25].
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.