1 Introduction

Personalized learning has been shown to be more effective for students’ progress than traditional instruction (Pane et al. 2015). However, adopting personalized learning is challenging due to the lack of resources. Over the last few years, major developments in robotics have been observed in different settings, including elderly care (Gallagher et al. 2016), education (Mubin et al. 2013; Benitti 2012), mental health (Jeong et al. 2020), and entertainment (Gehle et al. 2017). These developments have a widespread impact on society, especially in the case of social robots, since they can sustain meaningful interactions that help develop a close and personalized connection with the user (Feil-Seifer and Mataric 2005). Thus, social robots can supplement existing teaching structures to provide additional support to students for personalized learning.

Socially assistive robots can be used as educational companions that interpret the emotional responses of students, which can be extremely helpful in developing a personalized motivational strategy for each student. Furthermore, an adaptive or personalized robot can take into account the particular preferences, requirements, and needs of preschool children. This personalization of the behaviors of socially assistive robots forms the primary focus of the current work.

There have been a number of studies on personalization in educational robotics (Chen et al. 2020; Ramachandran et al. 2019; Blancas-Muñoz et al. 2018; Obaid et al. 2018; Jones and Castellano 2018; Ramachandran et al. 2017; Coninx et al. 2016; Gordon et al. 2016). Most of the research in this domain analyzed personalization in different settings (e.g., feedback or reward systems), used multiple supporting technologies as the main medium of delivery (e.g., a tablet), and recruited children aged five and above. In contrast, investigations of varying the form of content delivery (e.g., teaching style) or adapting the content itself have been rare, and notably little work has been conducted with children under five. Studies (Hertzman and Melton 2011; Mounet 2014) emphasized that the first five years of a child’s life are critical, because in these early years the child undergoes massive brain development and begins to learn. Thus, investigating personalized teaching tools using social robots could be informative for preschool children.

This research investigates the feasibility of developing autonomous personalization of educational robot tutors for preschool children, specifically those aged three to five. The study examines adapting the content and difficulty level of the lesson, assessment, and feedback based on three factors: (i) knowledge gain, measured by the correctness of the child’s response, (ii) the executive function of attention, measured by the child’s body orientation and gaze direction, and (iii) working memory or hesitation, measured by the time lag before the answer. This is accomplished through a case study in which we design and deploy an autonomously personalized child–robot interaction with children in their homes to minimize human intervention. The results are analyzed qualitatively from observations and parent interviews, and quantitatively from pre- and posttests and parent questionnaires.

The main focus and contributions of this work are: (1) to identify and understand different methods of teaching preschool children, (2) to design, develop, and deploy autonomous and personalized interaction policies for tutoring lessons and exercises for children aged three to five with minimal human intervention, and (3) to provide general guidelines for the development of personalized-learning factors and strategies for individualized learning during sessions with each participant, based on our validation study.

2 Related work

According to a recent review of robots in education (Belpaeme et al. 2018), work on robot tutors for preschool children is, in general, limited. Studies on adaptive robot tutors for children remain especially scarce: we found only eight such studies, and only one (Gordon et al. 2016) focused on preschool children. Table 1 summarizes the related literature on adaptive robot tutors.

Table 1 Summary of related research on adaptive robot tutors for children

A recent study (Chen et al. 2020) used the robot Tega to investigate an adaptive robot’s role in game-based word-learning activities for children aged five to seven. The study recruited 59 students, who were randomized into three robot-role conditions: (i) tutee, (ii) tutor, and (iii) peer. The authors analyzed facial expression, engagement, and acquired knowledge during each session. The results showed that the adaptive peer-like robot was more effective in terms of learning gain and affective engagement than the other robot roles.

Ramachandran et al. (2019) evaluated how children communicate with social robot tutors, particularly when seeking help. The study recruited fourth-grade students who interacted directly with a NAO robot in math problem-solving sessions presented on a tablet. During each session, the researchers collected data on the “number of hints requested and time between hint requests.” To determine when a tutor’s intervention is helpful, the researchers used the Assistive Tutor partially observable Markov decision process (AT-POMDP). The authors reported that the AT-POMDP policy significantly increased learning gains compared to a fixed help-action selection policy.

Another study (Blancas-Muñoz et al. 2018) evaluated the overall effectiveness of a robot tutor (the NAO robot) for children, particularly its ability to adapt its helping strategies, along with measuring educational outcomes following the tutoring sessions. The study recruited 60 children aged 8–9 to compare actual help with distractions. The results showed the effect that adaptation has on a robot tutor’s helping strategies: providing both help and distracting facts improved knowledge gain.

Another robot tutoring study (Obaid et al. 2018) explored a robot’s ability to describe navigation tasks, and evaluated children’s interactions with the tutor in order to provide adaptive empathetic behaviors, responses, and feedback. The study aimed to evaluate a child’s engagement with a robot tutor through their interactions with it. The methods included evaluating 43 students aged 10–13 during three sessions related to navigating a map. However, the results did not show a significant difference in students’ learning gain between the empathetic and non-empathetic robot conditions.

Jones and Castellano (2018) evaluated a robot tutor’s ability to promote self-learning and to encourage practicing self-regulated learning (SRL) processes. Students’ attitudes toward robot tutors matter, as comfort in asking for help is an indication of active learning, which results in better educational outcomes. The results showed that scaffolding SRL processes in an open learner model approach helps students learn better and improves their ability to reflect; this helped students more than the control condition, in which the robot tutor provided only domain support.

Ramachandran et al. (2017) explored an autonomous tutoring robot (NAO) that uses personalized rewards, such as personalized breaks, and examined whether these rewards can substantially increase learning performance for young students. The study recruited 40 elementary school students of both genders, taken out of class. Overall, the research showed that students who received breaks triggered as a reward or to refocus improved their performance, whereas students who received breaks at fixed times did not. The authors therefore concluded that learning can be improved when personalized breaks are given during study hours.

Another potential application of robot tutors is in physical activities. One study in this area evaluated the behavioral accommodation between students and robot tutors during dancing sessions over time (Coninx et al. 2016). The researchers evaluated three children in terms of their request–response actions during three dance sessions. The study found that behavioral accommodation increased from the first to the last session, and children became more willing to move along with the robot’s rhythm.

In addition to analyzing robot strategies, children’s interactivity and engagement with robots is also an area of importance. A study (Gordon et al. 2016) aimed to measure children’s engagement during individualized tutoring with a robot through nonverbal cues and facial expressions. The study involved 34 preschool students and a system integrating a tablet, headphones, and a tutoring robot. The goal was to help students learn new Spanish words in a game context. The study compared two conditions: (i) personalized affective responses from the robot tutor, and (ii) non-personalized affective responses. The results showed that the robot tutor system helped increase children’s learning of Spanish words, and that affective personalization helped sustain long-term interactions between robot and child.

Even though studies have shown that social robots can significantly impact student learning (Belpaeme et al. 2018), it is important to account for robot behaviors that might negatively affect the student. For example, one study (Kennedy et al. 2015) found that although students achieved a significant learning gain with a robot compared to a tablet alone, this improvement was lost when the robot exhibited social behavior. The study attributed this decrease to distraction, observing that students gazed more at the robot than at the task on the tablet screen, which shifted their attention. Another study (Baxter et al. 2015) reported that children were distracted by the robot’s behavior in the first week of a classroom deployment, but this faded during the second week. This may be due to the novelty effect of the robot, which should also be taken into consideration when introducing children to a new robot. Likewise, a recent survey (Smakman and Konijn 2019) discussed the impact of robot tutors on students, describing concerns about the robots’ efficacy given that current technology (e.g., speech recognition) might negatively impact the student learning process.

As evidenced by this review, only a handful of studies have addressed personalized robot tutoring in HRI research. Most of them analyzed personalization through the robot’s role, feedback, switching activities, and/or providing breaks based on measures such as the child’s state. To the best of our knowledge, adjusting the lesson content or varying the form in which it is delivered has not been investigated. In addition to the robot, many studies also incorporated additional technology, such as touch-screen tables, tablets, and computers, either as the main medium of the teaching activity itself or for evaluating participants’ reactions and responses, instead of letting the robot act as the sole provider of instruction. Moreover, most of the studies were conducted with children above five years of age. Thus, this study contributes to personalizing the learning process by (1) utilizing the robot itself, without any additional instructional technology, to reduce distraction and attention shifts, (2) examining the child’s learning level to adjust the delivery style of the lesson, its content, and its assessment, and (3) recruiting preschool children (three to five years old) to validate the personalization factors and provide guidelines for future studies. The primary idea of using the robot as the sole technology is to reduce screen use at this young age and to deliver the content using tangible materials from the children’s everyday items (e.g., holding an object).

3 Personalized robot tutor interaction and content designs

Early education should not dwell on teaching children how to read and write formally but should be based on familiar objects, such as toys that children use in their environment (UNICEF 2016; Damovska et al. 2009). Most preschools serve children from three to five years old, who learn through play. Teachers introduce them to basic concepts (e.g., shapes, letters, and colors) and help them develop a relationship with learning and with others. At this age, children are inquisitive, enjoy listening to stories, and love asking questions (Damovska et al. 2009; Child 2011), which motivates us to investigate learning effectiveness with a social robot. Understanding children’s capabilities and learning styles helped shape this study’s methodology, especially in designing the human–robot interaction in terms of learning topics and delivery style. Acknowledging that preschool children differ not only in age but also in skill development, we aimed to design autonomous personalization of content, difficulty, and assessment levels.

Besides surveying the literature, we conducted interviews with preschool teachers about curriculum design and teaching methods. This was done to design the optimal method for teaching preschool children with a robot and to determine the best lessons to teach at an early age. We interviewed two preschool teachers: (i) the first works as an elementary-grade supervisor, and (ii) the second holds a master’s degree in curricula and teaching methods. The interviews were semi-structured, with general questions about preschool learning content, children’s behavior, learning styles, and effective feedback. From the interviews, we obtained the following key findings:

  • Aside from identifying colors and some vocabulary, preschool children can categorize things, such as animals and body parts.

  • The characteristics of learners and their learning skills in terms of speed, mental development, ability to learn and gain information, and attention and memorization differ among individuals. For example, some children learn immediately, whereas others learn only after the fifth repetition. Thus, repetition is necessary so that the children can remember and accomplish a given task.

  • Young children learn best through visuals and songs.

  • Children usually concentrate for short periods only. After 15 minutes, the children might ignore further lessons unless the activity changes.

Lesson content and difficulty levels: In this work, we chose to teach the children basic shapes through the robot at three difficulty levels. More specifically, the robot starts with the medium level as the default and then, based on the child’s interaction, either reduces or increases the difficulty level for the next shape (as shown in Fig. 1).

Fig. 1
figure 1

Overview of our personalized sessions

The learning session consists of three lessons to teach the children three basic shapes: (i) circle, (ii) square, and (iii) triangle. This topic was chosen because it can be taught in various ways and was expected to work well with personalized learning in this study. Each shape is presented in a separate lesson, as shown in Fig. 1, and each lesson includes three difficulty levels to personalize the teaching style for each child based on their cognitive gain and engagement.

For level 1, the robot slides a set of cards illustrating a single shape in different sizes and explains the shape. Providing more than one card for the same shape supports repetition, ensuring that the child receives the information and can memorize the shape. In level 2, the robot starts the lesson by drawing a single shape several times in different sizes using a pen, while explaining the shape’s characteristics (see Fig. 2a). The robot also draws a simple example to illustrate where the shape can be found in the child’s environment, such as a circle indicating a ball. For level 3, the lesson is explained and delivered using the robot’s gestures to demonstrate a specific shape several times, as shown in Fig. 2b. While presenting, the robot describes and explains the shape to ensure that the child receives the information, and asks the participant to repeat after it to confirm understanding. This way of teaching might be new to the child; therefore, the robot needs to deliver the lesson clearly while explaining the demonstrated shape.

Fig. 2
figure 2

Personalized lesson delivery styles in different difficulty levels

Lesson assessments: All three lessons include an exercise given to the child at the end of each lesson. The robot will ask the child to solve the given exercise, depending on the lesson’s difficulty level. Moreover, since the lesson delivery differs for each level of the lesson, exercises differ from one level to another as follows:

  • Level 1 exercise: The robot holds a card that presents a specific shape that has been taught and asks the child, “Do you remember what is the name of that shape?”

  • Level 2 exercise: Once the robot draws and explains the shape, it hands over a pen to the child and asks to draw the shape in two sizes: small and large.

  • Level 3 exercise: The robot demonstrates the lesson’s shape and asks, “What is the name of the shape I gestured?”

Personalization factors: Although academic knowledge is important for personalizing the lesson content, it is not by itself the main targeted skill for the age group in this study. Executive functioning, which includes inhibitory control, working memory, and attention, is considered more essential than academic skills in predicting preschool children’s later achievements (Brock et al. 2009). Therefore, in this work we attempt to measure indicators of executive functioning during the child–robot interaction in an educational setting in order to personalize the order and difficulty level of the lessons. We chose three personalization factors: (1) correctness, as the standard, objective assessment of knowledge gain; (2) the child’s attention, one component of executive functioning, measured through the child’s body orientation and gaze toward the robot; and (3) the time lag before answering, an indicator of working memory and hesitation, measured as the time from the robot presenting the exercise until the child solves it (see Table 2). Although these are basic, preliminary measures of academic knowledge and executive functioning, they can be prominent indicators in autonomous agent interaction. These factors are weighted to simplify the evaluation of the children’s responses and behaviors after solving the given exercises. The weight for correctness (cognitive gain) accounts for 50% of the overall evaluation, since the main objective of the learning session is for the child to gain knowledge. The child’s affect, measured by engagement and confidence, shares the other 50% of the weight; this helps us understand the child’s behavior and assess the suitable content and delivery style.

Table 2 Weighted personalization factors
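The weighting described above can be sketched as a simple scoring function. The point values below are illustrative assumptions only (correctness worth half of a 4-point total, with attention and response promptness sharing the other half), chosen so that a threshold of 2 is meaningful; the study’s actual values are given in Table 2.

```python
def assessment_score(correct, attentive, prompt):
    """Combine the three personalization factors into one weighted total.

    correct   -- child answered the exercise correctly (knowledge gain, 50%)
    attentive -- body oriented toward / gazing at the robot (attention, 25%)
    prompt    -- answered without a long hesitation lag (working memory, 25%)
    """
    score = 0.0
    if correct:
        score += 2.0  # correctness carries half of the 4-point total
    if attentive:
        score += 1.0
    if prompt:
        score += 1.0
    return score
```

For example, a correct but hesitant answer given while looking away would score 2.0 under these assumed weights, keeping the child at the same difficulty level.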

Feedback: Children’s well-being and satisfaction are associated with compliments from others and encouragement from the environment around them. Preschool children are sensitive and need supportive guidance and feedback [8]. Therefore, we also personalized the robot’s feedback during each lesson (see Table 3). For instance, if the child could not accomplish a task, the robot provides supportive feedback, such as “Maybe I should explain this in another way.” The robot keeps encouraging the child based on his/her learning and ability to finish the lesson.

Table 3 Examples of feedback that is provided for children based on their level

Personalization process:

Fig. 3
figure 3

The overall personalized tutoring design and the learning scenarios

Figure 3 illustrates the overall tutoring design and the sequence of learning scenarios. The first lesson is about circles and begins at difficulty level 2, where the robot draws the shape in different sizes. We chose this level as the anchor point for the tutoring interaction, assuming it represents the baseline knowledge for children at this age. The difficulty then moves up, moves down, or stays the same according to our personalization factors, tracing the child’s learning path. That is, after giving the first lesson, the robot evaluates the child’s understanding by asking the child to draw two circles of different sizes. Once the child responds, the robot evaluates the answer and the child’s behavior using the three personalized-learning assessment factors. If the assessment-measures total is above 2, the child is ready for a higher level: the robot gives positive feedback and proceeds to the second lesson at a higher difficulty level (see Assessment measures in Fig. 1). If the total is equal to 2, the robot starts the square lesson at the same difficulty level. If the total is below 2, the robot repeats the lesson at a lower level, providing feedback such as, “No worries; we will repeat that in another way”; after the repeated lesson ends, the child is given an exercise to solve at the lower level.

The structure is the same for the rest of the lessons: after each lesson, the robot gives the child an exercise to solve, then evaluates the child’s answer and behavior based on the total of the personalization factors to determine whether to start the next lesson at a higher level or repeat the current lesson at a lower level. When changing the lesson level (in repeated or new lessons), the robot changes the content delivery, the exercise, and the feedback level. Moreover, the robot changes one level at a time. For example, if the child is at level 3 in one concept and cannot answer the exercise correctly, the robot moves the child to level 2 of the same concept; if the child again cannot answer correctly, the robot switches to level 1 of the same concept and repeats at this level until the child answers correctly. However, if the robot detects that the child has left the study area for more than 30 s, it ends the session.
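The level-adjustment rules can be summarized in a short decision routine. This is a hypothetical sketch, not the study’s implementation: `score` stands for the assessment-measures total, levels range from 1 to 3, and the robot moves one level at a time as described above.

```python
def next_step(score, level, lesson_index, num_lessons=3):
    """Decide the next lesson and difficulty level from the assessment total.

    score > 2  -> advance to the next lesson one level higher
    score == 2 -> advance to the next lesson at the same level
    score < 2  -> repeat the current lesson one level lower
    Levels are clamped to 1..3; the session ends after the last lesson.
    Returns (lesson_index, level, done).
    """
    if score > 2:
        level = min(level + 1, 3)
        lesson_index += 1
    elif score == 2:
        lesson_index += 1
    else:
        level = max(level - 1, 1)  # repeat the same lesson, one level down
    done = lesson_index >= num_lessons
    return lesson_index, level, done
```

Starting from the anchor point (lesson 0, level 2), repeated calls to this routine trace the same learning paths sketched in Fig. 3.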

4 Experimental study design

This study utilizes the NAO robot and follows a modified version of the ten-layer research protocol methodology for human–robot interaction described in Shamsuddin et al. (2014) to conduct our case studies with the children. That protocol was designed to address ethical concerns surrounding robotic adjunct therapy for children with autism; nonetheless, its ten stages should be considered before commencing any intervention program in child–robot interaction, as they guide researchers in designing appropriate interaction timelines, intensity, and settings.

Participants: We recruited study participants through the research team’s close circles, explaining the purpose and procedure of the study to the parents. We administered the interaction with five children aged between 3 and 4.5 years (\(\mu {=}3.60, \sigma {=}0.82\)). The children were chosen based on four selection criteria: (i) aged between three and five, (ii) no hearing or vision impairments, (iii) able to understand and speak English (since the robot interaction was performed in English), and (iv) able to work individually without the constant help of a parent. The robot’s autonomous adaptation policy was observed and analyzed both quantitatively and qualitatively. None of the children included in the study had interacted with the NAO robot before. Each child was handed a small gift (a toy) at the end of the learning session as an incentive for participating in the study.

Parent consent: All participants’ parents signed an informed consent form prior to the study, which included an additional approval for media release in research publications. Four parents approved the media release, including their child’s face. Parents were informed that their presence was required during the study to provide the child with a comfortable environment, but that their presence should not hinder the child from interacting with the robot independently. All parents were instructed not to give any assistance or instructions to the children, and were informed that if a parent tried to assist the child, or if a child could not adapt to the situation, that child would be excluded from the study. The study took place in the child’s home to reduce distraction from a new environment and to provide comfort through the parent’s presence. The home visit was arranged with the parent at a time that suited them and their child, and the child gave verbal assent in front of their parent to participate in the study. The study protocol was reviewed and received ethical approval from the institutional review board (IRB).

Parents’ questionnaire and interviews:

Table 4 Parents’ questionnaire

Before the first interaction with each child, each parent filled out a questionnaire. The parent questionnaire and interview questions were derived from the teachers’ interviews and from surveying the literature. The main focus was on the child’s behavior and reactions toward learning and assessments, which helped in understanding and analyzing the factors that should be included for personalization during the observations. The questionnaire therefore contains a set of questions about the child’s learning behavior in general (see Table 4). The goal is to obtain a clear picture of how the child acts when they know the answer, when they regret an answer, and when they do not know the answer. By observing the child’s behaviors and aligning them with their parents’ interpretations, we can estimate the child’s intentions and derive a rationale for their reactions (Havighurst et al. 2004; England-Mason and Gonzalez 2020). This way, we can determine whether the child can respond to a task but ignores it, or cannot understand what is needed from them. If the robot misperceives the child’s answer, its judgment might be unfair to the child, and subsequently the child’s perception of the robot and the learning session, as well as their behavior and emotions, might be affected (Kahn et al. 2007). Thus, the questionnaire is required to evaluate the child–robot interaction at the end, which helps with our quantitative and qualitative analysis, in that it can:

  • Aid assessment and analysis of the child’s interaction during the experiment because the parents know and understand their children’s reactions best.

  • Ensure that the final results of the session from the robot’s side will correspond with the information that the parents provide about their children’s behaviors.

  • Mitigate an unjustified evaluation from the robot.

  • Ensure that the factors of personalized learning do not underestimate the child’s learning level.

Pre- and Posttests: To assess the child’s cognitive gain from the robot interaction, all children participating in this study took pre- and post-quizzes on their knowledge of basic shapes. The pretest is administered by a human experimenter before the learning session commences: the child is asked to name each shape, as illustrated in Fig. 4. The pretest aims to establish the child’s current knowledge level for comparison with the posttest results. The child was not given any feedback on whether their answers were correct.

After completing the experiment, each child took another test to assess his/her knowledge gain. This posttest is performed 10 minutes after the learning session with the robot is completed, to ensure that the child has actually learned the shapes rather than merely memorizing them or repeating after the robot. During this 10-minute delay, the robot was turned off, and the child was free to do whatever he/she wanted. As in the pretest, no feedback was given to the child about their answers.
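The cognitive-gain measure from the pre- and posttests can be sketched as follows. The scoring here is a hypothetical simplification (one point per correctly named shape); the paper does not specify its exact scoring scheme.

```python
SHAPES = ("circle", "square", "triangle")

def learning_gain(pretest, posttest):
    """Knowledge gain as the change in the number of correctly named shapes.

    pretest/posttest -- dicts mapping a shape name to True if the child
    named that shape correctly in the corresponding quiz.
    """
    pre = sum(bool(pretest.get(s)) for s in SHAPES)
    post = sum(bool(posttest.get(s)) for s in SHAPES)
    return post - pre
```

For example, a child who named only the circle before the session but all three shapes afterward would show a gain of 2 under this scheme.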

Fig. 4
figure 4

Pre- and Posttests: “Can you name this shape?” (The shapes were presented individually)

Data collection and video recordings: Besides the parent questionnaire and the pre- and posttests, we also collected log files and recorded videos of the interaction. The log files were used to validate the learning path for each child and to account for any misrecognition of speech, objects, etc. Two video recorders were set up in the experiment room to record the complete learning session from two different angles. This allowed us to evaluate whether personalized learning was achieved through the robot and to ensure that each participant received a clear set of learning targets and the support needed to master each shape before moving to the next lesson.

Child–robot interaction procedure: Once the parental consent was obtained, the interaction procedure was in the following sequence:

  • Pretest, where the human experimenter assesses the child’s current knowledge of the shapes, as described above.

  • The robot introduces itself: standing next to the study table, it meets the child, recognizes his/her face, and starts by welcoming and engaging the child, calling them by name.

  • The robot asks the child to sit on a chair that is facing the robot’s elevating stand, which allows the robot to slightly level up to the child’s level (and for the robot to track the child’s gaze).

  • The robot engages the child in an educational song, where the robot asks the child to sing along a song about shapes. This is to serve as an ice breaker to create a bond and rapport between the child and the robot.

  • The robot starts the personalized learning scenario, where it engages the child through three lessons, switching between explaining and exercises and providing feedback.

  • The robot ends the session once the three lessons are completed successfully, and thanks the child for their achievements.

  • The experimenter hands a gift to the child, which is the only incentive provided for participating.

  • A delayed posttest is administered 10 minutes after the child–robot interaction, as described above.

All robot interaction was conducted in English, including speech recognition, feedback, etc.

5 Technical implementation

As mentioned earlier, this study used the humanoid robot NAO, which is widely used in HRI studies. The robot is equipped with two HD cameras, four microphones, two speakers, and tactile sensors that allow it to interact with and understand the surrounding environment. For this study, several modules from [34] were used to program the robot–child interaction, including:

  • Speech Recognition module — gives the robot the ability to recognize predefined words or phrases when the child responds. A list of words (circle, square, and triangle) was passed as a parameter to this module. Given the low performance of speech recognition on children’s speech, restricting recognition to a word list increases the probability of recognizing these specific words and thus enables the robot to determine whether the child’s answer matched one of them. Moreover, the confidence threshold was tested and tuned to the best value by imitating children’s voices, as some children might have pronunciation difficulties.

  • Face Detection module—is a vision module with which the robot detects and recognizes the child’s face. It enables a personalized greeting (e.g., using the child’s name) and allows the robot to learn, detect, and identify the participants’ faces during the session. To do so, pictures of the participants were collected from their parents before the session and used by the robot to learn each child’s face model. Moreover, the face recognition function was tested several times with different pictures of the children before commencing the child–robot interaction, to avoid detection glitches and delays that might require experimenter intervention and affect the child’s experience.

  • Engagement Zones module—analyzes the child’s position relative to the robot. The robot stops the session when it detects that the child has moved away.

  • Create Movements module—used to animate shapes for lessons at the upper level (level 3). The animations were tested and refined to achieve the best timing, optimize the child’s learning, and avoid confusion, and the spoken description of each shape was synchronized with the relevant movement. Other components created with this function were the sequences for welcoming the child, the dance during the educational song, sitting on the pedestal, and the movements while delivering feedback. These animations were implemented to evoke emotions that would positively affect the child’s motivation and engagement level. This functionality was also used to animate the movements for grabbing objects and drawing shapes (since the study table height is fixed).

  • Text To Speech module—allows the robot to speak to the child to explain the lesson and to ask questions.

  • Vision Recognition module—is a vision module with which the robot attempts to recognize the child’s drawing. It was also used to teach the robot to recognize the shapes to be held up to the child (level 1) and the pen to grab to start drawing (level 3).

  • Audio Player module—provides playback services to play the educational song.

  • Gaze Analysis module—analyzes the child’s gaze direction.
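
The word-list and confidence-threshold behavior of the Speech Recognition module above can be sketched as a small filter. This is our illustrative reconstruction, not the authors’ code: it assumes a NAOqi-style recognition result that alternates recognized phrase and confidence, and the threshold value is a placeholder for the one tuned in the study.

```python
# Illustrative sketch (not the authors' implementation): filter a
# NAOqi-style WordRecognized result against the lesson vocabulary
# and a tuned confidence threshold.

VOCABULARY = ["circle", "square", "triangle"]
CONFIDENCE_THRESHOLD = 0.4  # placeholder; tuned in the study by imitating children's voices

def best_match(word_recognized):
    """word_recognized alternates phrase and confidence, e.g.
    ["circle", 0.62, "square", 0.31]. Return the most confident
    vocabulary word at or above the threshold, or None."""
    pairs = list(zip(word_recognized[0::2], word_recognized[1::2]))
    candidates = [(word, conf) for word, conf in pairs
                  if word in VOCABULARY and conf >= CONFIDENCE_THRESHOLD]
    return max(candidates, key=lambda wc: wc[1])[0] if candidates else None
```

For example, `best_match(["circle", 0.62, "square", 0.31])` yields `"circle"`, while a low-confidence result such as `["square", 0.2]` yields `None`, prompting the robot to re-ask.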

As this study focuses on young children, the robot tutor should ideally mimic the human actions and the supportive manner of delivering content and feedback that would be most conducive to the child’s learning.

Several limitations of the robot must be taken into account during development. Most importantly, the robot’s motors and head processor might overheat. Overheating may eventually cause the robot to shut down or hinder its ability to perform the required behaviors.

6 Results

A pilot experimental study was conducted to validate the effectiveness of a personalized learning policy using a robot tutor for preschool children. We analyze the results of the parent questionnaires and interviews, pre- and posttests, the case studies, and the recorded sessions in the following sections.

6.1 Parent questionnaires

In this study, five mothers approved their children’s participation (see Table 6 for participant information). They were asked to complete the parent questionnaire before the child–robot interaction commenced. The questionnaire results for the five children included in the study are described in Table 5.

Table 5 Parents’ questionnaire responses

The results for the five participating children show that a child who knows the answer and is willing to respond, whether right or wrong, will respond directly without hesitating; the child’s confidence and gaze are clear indications of how they will answer. In contrast, a child who becomes unfocused, shifts attention to other things, or shows lost interest is avoiding the question. The results also show that children start asking questions, or repeating the given question, when they do not know the answer: the child looks around and asks for help, and some smile or become shy. Thus, a child’s behavior indicates whether they are interactive and willing to answer, avoiding the question, or unable to respond.

While robots show promise as tutors for children, understanding the child’s response and reactions to a question will help improve the child–robot interaction as per the following:

  • When children resist answering questions by avoiding them, shifting their attention, or demonstrating that they have lost interest, the robot has to track their gaze for some time while waiting for their response.

  • When children respond directly or keep repeating the answer, the robot should ask the child to repeat the answer if it did not understand them. This is because the child may respond before the voice recognition function starts, or the robot may not understand the child due to the repetition. Giving the child another chance to respond, and the robot another chance to hear the reply, avoids misunderstandings that would lead the robot to select the wrong lesson level and thus affect the child’s learning path.

  • When the child starts asking questions and looking around for help because they do not know the answer or cannot follow what the robot is saying, the robot’s speech must be clear and slow. Moreover, the robot could track the direction of the child’s gaze toward other people in the room (e.g., the child’s mother) during certain portions of the lesson. This would help indicate which information the robot needs to repeat, instead of repeating the full lesson or lowering the lesson level.

6.2 Case studies

The experiment was conducted with five children (three girls and two boys), four of whom completed the full planned learning session: two followed a three-lesson plan and the other two a two-lesson plan. Table 6 shows the participants’ information.

Table 6 Participants’ information
Table 7 Participants’ learning paths (the lesson level and number of repeats for each lesson, as well as the total session duration)

To test the personalized learning path for each child, the first lesson starts at the second level, and subsequent lessons level up or down based on the child’s performance. Table 7 summarizes each child’s path and the number of repetitions of each lesson. Each child’s learning path was unique, and different challenges arose during the robot interaction, which led to modifications of the robot’s logic design and interaction features. Therefore, we analyze each case study individually and explain the modifications made after each one.
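
The leveling behavior just described can be sketched as a small decision function. This is our reconstruction under stated assumptions (a correct answer moves on to the next lesson, an incorrect answer repeats the lesson one level lower, and a response timeout repeats it at the same level, consistent with the paths summarized in Table 7); it is not the authors’ code.

```python
# Sketch of the lesson-leveling policy (our reconstruction, not the
# authors' code). Levels range from 1 (easiest) to 3 (hardest), and
# every child starts at level 2.

MIN_LEVEL, MAX_LEVEL, START_LEVEL = 1, 3, 2

def next_step(level, outcome):
    """outcome is 'correct', 'incorrect', or 'timeout'.
    Returns (next_level, advance), where advance=True means the robot
    moves on to the next lesson."""
    if outcome == "correct":
        return level, True                       # move on to the next lesson
    if outcome == "incorrect":
        return max(level - 1, MIN_LEVEL), False  # repeat at a lower level
    return level, False                          # timeout: repeat at the same level
```

For instance, an incorrect answer at level 2 yields `(1, False)`, i.e., repeat the lesson at level 1, as in Subject 2’s second lesson.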

Subject 1:

The first subject was a three-year-old girl. The learning session lasted five minutes; Fig. 5 illustrates the child’s learning path. The child was excited initially, especially when the robot started the session with the educational song. However, once the robot explained the first shape and waited for her response to the assessment exercise, she lost focus and avoided answering by moving around the table. Thus, the robot had to end the learning session, as the child could not adapt to the situation.

Fig. 5
figure 5

Subject 1 learning path

Fig. 6
figure 6

Subject 1 Robot interaction session

Based on our observations of the recorded videos and during the experiment, the girl listened to the robot and repeated what the robot was saying and doing during the lesson, as shown in Fig. 6a. However, she had difficulty transitioning between the lesson and the exercise: she stopped listening or became inattentive (Fig. 6b) before finally moving away (Fig. 6c). There may be three reasons for this. First, there may not have been sufficient indication that the child would have to answer a question. Second, the child was distracted by the robot’s eyes, which turned blue and blinked to indicate that the robot was listening and waiting for her answer. Third, the child may have been resisting answering: her mother stated that the child will not respond and will draw attention to other things around her when she wants to avoid answering.

After conducting the first experiment on Subject 1, we modified the learning scenario as follows:

  • We slowed down the robot’s speech.

  • At the end of each lesson, the robot informs the child that it will ask a question and that the child should listen and answer.

  • We added a 30-s time-out timer, beginning when the robot starts the voice recognition. If the timer reaches 30 s and the child has not responded, the robot will stop the voice recognition and say, “Okay, I guess we have to learn that again, so listen to me again.” Then, the robot will repeat the same lesson at the same level.

Subject 2:

The second subject was a three-year-old girl. The learning session lasted 21 min, longer than estimated. Figure 7 illustrates the child’s learning path. At the beginning of the session, the child was somewhat afraid of the robot when it started to walk and directed her to sit on her chair. She became calm and engaged when the robot reached its position and performed the educational song. The robot explained the first shape, and she answered the given exercise correctly. During the second lesson, the robot overheated, so we had to give the child a break to allow the robot to rest for a while. Next, the robot repeated the second lesson, and the child’s answer was incorrect, so the robot repeated the lesson at a lower level. The second lesson was repeated three times at the lower level before moving on to the third lesson. The final lesson was also repeated. The child answered correctly in the second round, but the robot registered the answer as incorrect; this might have been due to the child’s tonsil and adenoid issues, which affected her voice tone and pronunciation. When the child’s answer was registered correctly, the robot ended the learning session by thanking the child and giving her a gift as a token of appreciation for her participation.

Fig. 7
figure 7

Subject 2 learning path

Based on our observations of the recorded videos and the experiment, the girl alternated between distraction and interaction with the robot during the learning session. In the middle of the session, her mother had to sit next to her; having the mother beside her was an important source of motivation that kept her attentive and ensured she interacted with what the robot explained and asked. In this case, we allowed the mother to provide some assistance for several reasons: the child often mimicked what the robot was doing, so having her mother beside her limited those behaviors and kept her as engaged as possible. Although the modification we made was sufficient (the child now understood that a question would be asked), the robot’s eyes remained a distraction for children of that age. The child was hyperactive and sometimes yelled, which indicated her reluctance to answer the questions (as her mother had mentioned in the questionnaire). The robot’s processor overheated about three times between the lessons; with the additional delay, the child became tired and bored.

After conducting the second experiment on Subject 2, we considered shortening the learning scenario for the following subjects, especially for children who already have some knowledge of shapes. This would allow the robot to carry out a full session with several repetitions without overheating. However, we wanted to validate this consideration before committing to the change by running the full scenario with an older preschooler (as shown with the following child). Moreover, the distraction caused by the robot’s eyes was not addressed in this study; we suggest a robot introduction session with the child in future studies.

Subject 3:

The third subject was a four-and-a-half-year-old girl. The full learning session lasted six minutes. Figure 8 illustrates the child’s learning path. Compared to the other subjects, she was fully attentive and engaged with the robot, resulting in a smooth learning session and preventing the robot from overheating. The robot explained all the lessons at the same level, without repetition. Eventually, the learning session ended, and the robot thanked the child and gave her a gift as a token of appreciation for her participation.

Based on our observations of the recorded videos and during the experiment, the child did not face any difficulties, and the learning session was smooth. The child’s confidence indicated that she was willing to answer the questions regardless of whether her answer was wrong or correct (as her mother mentioned in the questionnaire). However, the observations showed that this topic was easy for her, as she already had full knowledge of the basic shapes. It would be more challenging for her if the robot taught other shapes (rather than the basic ones) or changed its way of teaching.

Fig. 8
figure 8

Subject 3 learning path

After conducting the third experiment on Subject 3, no modification of the learning scenario was needed. However, based on the experiment with Subject 2, we wanted to test shortening the learning session for the next two subjects, using only two shapes rather than all three, to reduce the chances of the robot overheating. This allowed us to run the experiment without breaks, which might otherwise affect the child’s learning process and thus the results.

Subject 4:

The fourth subject was a four-and-a-half-year-old boy. The learning session lasted eight minutes. Figure 10 illustrates the child’s learning path. The learning session only covered two shapes, as previously mentioned. The child was excited but not fully focused on what the robot was saying; therefore, the robot repeated the lesson at a lower level. Subsequently, the robot explained the last shape, and the child answered correctly, but the robot registered the answer as incorrect. This might stem from the child responding before the voice recognition function began, so that only part of the word was recognized. This happened twice, so we had to remind the child to answer the question only after the robot’s eyes turned blue and blinked, after which his correct response was registered.

Fig. 9
figure 9

Subject 3 robot interaction session

Fig. 10
figure 10

Subject 4 learning path

Fig. 11
figure 11

Subject 4 robot interaction session

Based on our observations of the recorded videos, the child was semi-focused during the experiment. The mother had stated that her child would ask questions if he did not know the answer or did not understand what was asked. The child, at this stage, turned around and said, “What is the robot saying?” as shown in Fig. 11b. The child’s behavior then changed, and he became fully engaged and attentive to what the robot said. However, the child responded too quickly, before the voice recognition function started, which led to repeating the lesson. Although the child was older than four, he only paid attention with some guidance, unlike Subject 3, who needed no assistance.

After conducting the fourth experiment on Subject 4, we modified the learning scenario by adding a 10-s timeout timer, beginning when the robot starts the voice recognition, in addition to the 30-s timeout added after the first experiment. If the timer reaches 10 s and the child has not responded, the robot repeats the question instead of repeating the lesson. This gives the child a second chance to answer, as the child might not have listened to the robot’s question even though they paid attention to the explanation.
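
The combined timing logic after this change can be summarized in a single decision function. This is an illustrative sketch, not the deployed code; the thresholds are the 10-s and 30-s values stated above.

```python
# Sketch of the two-tier answer timeout (illustrative, not the deployed
# code): 10 s of silence after voice recognition starts triggers repeating
# the question; 30 s triggers repeating the whole lesson at the same level.

QUESTION_TIMEOUT_S = 10
LESSON_TIMEOUT_S = 30

def waiting_action(elapsed_s, question_already_repeated):
    """Decide what the robot does while still waiting for an answer."""
    if elapsed_s >= LESSON_TIMEOUT_S:
        return "repeat_lesson"            # same lesson, same level
    if elapsed_s >= QUESTION_TIMEOUT_S and not question_already_repeated:
        return "repeat_question"          # give the child a second chance
    return "keep_listening"
```

At 12 s of silence the robot repeats the question once; only if the silence continues to 30 s does it repeat the lesson.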

Subject 5:

Fig. 12
figure 12

Subject 5 learning path

Fig. 13
figure 13

Subject 5 robot interaction session

The fifth subject was a three-year-old boy. The child’s learning session was smooth and lasted 10 min. He was shy but attentive during the session. Figure 12 illustrates the child’s learning path. The robot explained the first shape. At first, rather than naming the shape, the child grabbed the explained shape (placed among other shapes beside him) and showed it to the robot, as depicted in Fig. 13a. The robot waited for 10 s and then repeated the exercise, to which the child responded, but the robot could not hear him. Therefore, the lesson was repeated at the same level because no response was detected during the permitted answering time. After the second and third repetitions, the child answered correctly, but the robot registered his reply as incorrect. This was due to the child’s tone and pronunciation (as mentioned, the child was very shy). Thus, we manually moved to the last shape to avoid frustrating the child. The robot explained the final shape, but again registered the child’s answer as incorrect. The child was asked to speak with a louder voice, which allowed the robot to recognize the answer correctly. The robot then ended the learning session as usual.

Based on our observations of the recorded videos and during the experiment, the child paid full attention to the robot most of the time, even though he was very shy (see Fig. 13b). Although he was only three years old, he listened to and understood the robot’s explanation. However, during the repetitions, specifically during the lesson part, the child started to shift his gaze around. This might be because he already knew the answer and was uninterested in listening again. Compared to Subjects 1 and 2, the child did not face any issues when the robot moved to the exercise part, as he nodded when the robot explained that a question would be asked. Also, no assistance was required from our side. This shows that the method of teaching was not too challenging for the child, even though one of the shapes was new to him. However, the child’s voice was too low for the robot to detect easily, a common problem in speech recognition with children.

After conducting this session, no further adjustments to the learning scenarios were required. However, we may attach a microphone to the child or place one under the table to enable the robot to hear a child’s low voice reliably. Then, if the child answers correctly, the robot can clap and motivate the child, which may break the ice and encourage the child to answer loudly in the following lessons.

6.3 Summary of system changes

As discussed in the detailed case studies (Sect. 6.2), some aspects of the robot interaction were modified after each case to address the challenges and lessons learned:

  • Slowed robot speech: to allow the child to engage and listen.

  • Clear instruction that a question is coming: to set the child’s expectations and prepare them to answer.

  • 10-s timeout before repeating the question: to give the child a chance to answer before the lesson is repeated.

  • 30-s timeout before repeating the lesson: applied after the question has been repeated.

  • Shortened lesson path: to reduce the chances of overheating during a long learning session.

All the children enjoyed and engaged in the educational song, which also served to create a bond and rapport between the child and the robot. Previous studies showed that the robot’s use of the child’s name and display of familiarity increase its perceived friendliness and thus the child’s openness to interacting with it (Kruijff-Korbayová et al. 2015). Through session and video observations, personalizing the robot interaction with the child’s name and face recognition also helped increase the child’s attention and engagement. Given the scope of this study, we did not compare calling the child by name with a neutral address; nonetheless, this preliminary observation is in line with the literature on young children (Kruijff-Korbayová et al. 2015; Westlund et al. 2018). However, some robot behaviors that we could not change were distracting to the child, including the robot’s eye colors when expecting speech and the overheating. To mitigate these aspects, long-term interaction could allow the child to get used to the robot’s behavior, and shorter sessions could reduce the overheating issues.

6.4 Pre- and posttests

Each participant’s knowledge of shapes was examined before and after the learning session with the robot to measure the child’s learning gain, except for one child who did not complete the learning session. The learning gain also serves to evaluate the designed learning scenarios and the personalization factors. These results were specifically instructive for children who had only partial or no knowledge of the basic shapes, as they reflect the child’s knowledge growth. As shown in Table 8, the participants who had some knowledge of shapes (Subject 2 and Subject 5) had learned all the explained shapes after the lesson. This shows that the robot was able to deliver the lessons and teach the children the different characteristics used to recognize shapes. As expected, the knowledge gain for children who already knew the basic shapes was not notable; nonetheless, a positive interaction and learning experience were observed and were informative regarding the personalized learning paths. Even though some sessions faced challenges (e.g., the robot’s overheating issues), the children’s overall experience and knowledge gain remained positive.

Table 8 Pre- and Posttest results for the five subjects

7 Discussion

7.1 Robot characteristics

The results show that children exhibited different behaviors and learning paths in robot-learning environments, which is in line with the main motivation for personalizing learning (Belpaeme et al. 2018). Based on the observations of children’s behavior, personalizing the robot interaction to include their names through face recognition was accompanied by nonverbal signs of attention and engagement, which led to higher performance in the robot-learning environment. Even though this observation is only preliminary (i.e., we did not compare against a neutral address and did not validate the observations through inter-rater reliability), it is in line with previous studies of young children (Kruijff-Korbayová et al. 2015; Westlund et al. 2018). The same finding was reported in most studies of personalized learning with robots (Chen et al. 2020; Ramachandran et al. 2019; Blancas-Muñoz et al. 2018; Obaid et al. 2018; Jones and Castellano 2018; Ramachandran et al. 2017; Coninx et al. 2016; Gordon et al. 2016). The parent questionnaires and the observed behavior indicated that children were more responsive when the robot gave personalized responses (feedback), in line with (Blancas-Muñoz et al. 2018; Ramachandran et al. 2017). Social presence and social support characteristics were observed to increase child participation. This finding matches previous studies, in which a child’s perceived social presence of a robot increased the child’s feeling of being supported by it, especially when the robot expressed empathy during the interaction (Leite et al. 2014). In the current study, when the robot detects that the child has lost attention (evaluated through gaze and the engagement zone), it provides basic personalized feedback (e.g., calling the child’s name) and repeats the question to attract their attention.
However, additional support from the robot, such as calling the child’s name with personalized, motivating feedback during the lesson, could have a positive impact. Such feedback could also be applied when the child is distracted, for example when attempting to draw other people’s attention or resisting answering (based on the child’s behavior and the detected cues). These robot characteristics could enhance child–robot interaction by maintaining the child’s level of engagement during the learning interaction.

We also made observations on the children’s affective responses, personality, and adaptability. A child’s adaptive personality was observed to be beneficial for successful child–robot interaction, for example openness to interacting with the robot and adapting behavior to the robot’s capabilities (speaking louder, waiting for the robot’s lights, etc.). In our study, only the child’s attention and confidence were considered by the robot as engagement factors. The robot’s expressiveness toward the child’s emotional state has been observed in other studies to affect the depth of the response (e.g., Park et al. 2019). Thus, future child–robot interaction designs might benefit from personalizing the robot to modify its expressiveness according to the child’s perceived difficulty in responding to the lesson delivery style and questions. Robot adaptations could include speech speed differentiation: if the child becomes confused or requires help because they cannot understand what the robot is saying (indicated by gaze direction and body language), the robot would adjust its speech speed accordingly. Studies have shown that robot speech rates closer to natural speech convey friendliness and trustworthiness when interacting with children (Rossi et al. 2019; Song and Luximon 2020).

7.2 Personalized learning path

Given the student-centered approach, each child’s performance in recognizing shapes provided insight into the impact of the lessons. In all cases, the personalized settings, including personalized feedback, repetition while explaining each shape, and the modifications made to the scenario after each session, could improve the child’s performance and outcomes.

The primary analyses of each subject’s learning path, knowledge gain, and observed behaviors showed that the designed learning scenarios and the robot’s lesson delivery were insightful, and each child achieved the study’s educational goal. Even though the robot experienced several breakdowns and made unnecessary repetitions, the children were able to learn the shapes and enjoyed the interaction. Such robot limitations (breakdowns, erroneous recognition and actions, etc.) are common in robot deployments and could negatively impact the learning process if not mitigated (Smakman and Konijn 2019). We believe that the novelty effect of introducing the robot to the children for the first time might have increased their tolerance of these technical complications. However, long-term child–robot interaction should account for these issues to ensure a seamless learning process.

The performance of older children increased in all cases, and older children were more excited about having the robot as a tutor, as it was easy for them to understand what the robot was saying. Moreover, these children required less assistance than younger children because they were less distracted. Such preliminary positive results could be attributed to their prior knowledge of the introduced concepts. Even though repetition is beneficial for preschool students and can increase their confidence (Rosa Paiz and Martinez Herrera 2021), more challenging topics or teaching methods could be used to isolate the effect of prior knowledge on the outcome of the robot interaction.

The results could imply that a personalized robot lesson had a positive impact on these children and could potentially increase their learning across the continuum, although this does not hold for familiar tasks; younger children experiencing incidental learning showed increased performance. While these results are informative, the experiment was performed in an isolated room with parental support and therefore did not account for all classroom-based variables that may affect learning.

Beyond the learning scenarios, the study found that applying personalized learning showed positive results, but the applicability of this finding is limited by the study’s focus on the learning path rather than on what each child needed. Therefore, adding a new dimension of personalization based on each child’s behaviors, in addition to the personalized content, would improve the children’s overall learning experience. Hence, the robot tutor should interpret each child’s social cues, which indicate the child’s task engagement and status (confusion and attention).

In this study, the first lesson level is predefined as level 2, which is then reduced or increased in the subsequent lessons. This is done because the robot has no knowledge of the pretest we performed. Future studies could feed the pretest into the robot’s system to personalize the first lesson’s level as well. For example, if the child does not know any shapes, the robot would set the first lesson to level 1; if the child knows all the shapes, the robot could start with an assessment at a higher level instead of explaining the lesson.
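
This suggested pretest-driven initialization can be sketched as follows. The sketch is hypothetical (the study itself always started at level 2); the mapping from pretest score to starting level is our illustration of the idea just described.

```python
# Hypothetical sketch of pretest-driven initialization (the study itself
# always started at level 2): map how many of the taught shapes the child
# already knows to a starting level, or skip straight to an assessment.

TOTAL_SHAPES = 3

def initial_lesson(known_shapes):
    """Return (start_level, start_with_assessment) from the pretest score."""
    if known_shapes == 0:
        return 1, False   # no prior knowledge: start with the easiest explanation
    if known_shapes == TOTAL_SHAPES:
        return 3, True    # knows everything: assess at a higher level instead
    return 2, False       # partial knowledge: the default used in this study
```

A child who recognizes one shape in the pretest would then start, as in this study, with an explanation at level 2.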

7.3 Distracting robot features

As discussed in Sect. 2, some robot behaviors and features were found to distract from the interaction (Belpaeme et al. 2018; Kennedy et al. 2015; Baxter et al. 2015; Smakman and Konijn 2019). Robot features were observed to impact learning at two levels: while increased interactive properties promoted learning, features not aligned with the learning goals shifted the learner’s attention. When the robot’s eyes turned blue and started blinking while waiting for the speech response, the children became attentive to the expression rather than the content, which increased the number of incorrect responses. This challenge indicates that the robot’s design and expressiveness should be aligned with the educational goal to maximize learning. It could be mitigated through a first-contact introduction session, in which the robot introduces itself and explains how it behaves while talking and listening, with a practice exercise where the child can ask the robot a question. The child could then use the eye light as a cue instead of a distraction, which could also help the child avoid answering prematurely.

From the study, it can be deduced that robot software design—particularly intelligence and responsiveness—could affect the applicability of robots to the learning environment. Designing artificial intelligence that allows the robot to adapt its interaction policy based on the child’s state and ability is challenging, especially in a pedagogical environment, because the robot needs to understand the child’s ability and progress to select the appropriate adjustment (action). Such adaptations could be handled by processing data on a cloud server, offloading tasks with a heavy processor load. This would enable the robot to apply learning abilities while minimizing the risk of overheating due to intensive local data processing. Allowing the robot to take control and to detect and react to specific signals in the child’s behavior would improve the child’s learning experience.

Regardless, improved programming can enhance lesson delivery by accommodating personalized support (additional support from the robot). Several children needed guidance during the learning session, either from the mother or the experimenter. Although the results showed that the participants were willing to listen and respond to the robot, some of them asked for help or resisted responding, so human intervention was needed. This suggests that robots that closely mimic human teachers will have a more positive impact on learning, which can be achieved by applying additional personalization strategies based on the child’s behavior.

8 Limitations

8.1 Robot functionality limitations

The modifications we made after the first couple of sessions reasonably enhanced the learning scenarios; we intend to validate the personalization factors and strategies in a future study. For example, the response-time factor proved important, as it was used to decide whether to repeat an exercise or lesson. However, given the robot's functionality, where the child had to wait for speech recognition to activate (indicated by a blue light around the robot's eyes), children who responded too quickly were not rewarded.
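The timing problem described above can be made concrete by classifying each answer relative to the robot's listening window. This is a hypothetical sketch, not the study's implementation; the window bounds are assumed parameters.

```python
def classify_response(answer_time_s: float,
                      listen_start_s: float,
                      listen_end_s: float) -> str:
    """Classify an answer relative to the speech-recognition window.

    All times are seconds measured from the end of the robot's
    question; the window bounds are illustrative assumptions.
    """
    if answer_time_s < listen_start_s:
        return "too_early"  # spoke before recognition activated; missed
    if answer_time_s <= listen_end_s:
        return "captured"   # heard by the robot and can be rewarded
    return "timeout"        # no answer while listening; repeat the item
```

A system aware of the `"too_early"` case could, for instance, re-open the window or still credit the child, rather than silently dropping a fast correct answer.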

During the implementation phase, the eye-gaze tracking function was tested; however, the results were not satisfactory, as the function introduced processing overhead and caused overheating. Reliable estimation of the duration and orientation of the child's gaze would allow the robot to detect the child's engagement and, in turn, to follow the correct learning path for the child.
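Once gaze orientation can be estimated reliably, engagement detection can be as simple as the fraction of samples directed at the robot or task. The sketch below is a minimal illustration under assumed values: the 15-degree on-target threshold and the 0.6 engagement ratio are hypothetical, not from the study.

```python
def engagement_ratio(gaze_angles_deg, on_target_deg: float = 15.0) -> float:
    """Fraction of gaze samples directed at the robot/task.

    `gaze_angles_deg` holds the angular offset of the child's gaze
    from the robot per sample; the threshold is an assumption.
    """
    if not gaze_angles_deg:
        return 0.0
    hits = sum(1 for a in gaze_angles_deg if abs(a) <= on_target_deg)
    return hits / len(gaze_angles_deg)

def is_engaged(gaze_angles_deg, min_ratio: float = 0.6) -> bool:
    """Binary engagement flag from the on-target gaze ratio."""
    return engagement_ratio(gaze_angles_deg) >= min_ratio
```

Computing only per-sample angles locally and aggregating them like this is cheap; the heavy gaze-estimation step itself is what would be offloaded to a cloud server, as discussed in Sect. 8.1.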

Even though adaptive learning using artificial intelligence techniques is widely investigated in personalized learning systems, these systems typically run on desktop platforms (e.g., the web). Such platforms are flexible and have ample resources (e.g., memory, CPU), which allows for such investigations. However, the robot's limited resources and artificial intelligence libraries did not allow us to investigate such an approach. Future work will investigate cloud-based analysis to provide advanced capabilities while avoiding overheating the robot.

8.2 Study design limitations

There are two primary limitations to this study: (i) the limited number of participants, and (ii) the lack of a control group to assess the effectiveness of the personalized learning path and personalized robot interaction. Nevertheless, this pilot study examined the children's reactions toward the robot tutor, as well as their behavioral cues during the sessions. These findings will be considered in future adjustments, which would substantially impact the assessment methods.

Applying all the factors (correctness, response time, and gaze direction) to assess the child's response, in addition to the suggestions listed above, could enhance the child's learning experience based on their ability. We aim to include further factors to personalize the child's learning based on their behavior, such as parameters of their personality and reactions during normal learning sessions.
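One simple way to combine the three factors is a weighted score per exercise. The weights and the time normalization below are illustrative assumptions for a sketch, not a scoring scheme proposed by this study.

```python
def assessment_score(correct: bool,
                     response_time_s: float,
                     gaze_on_task_ratio: float,
                     max_time_s: float = 10.0) -> float:
    """Combine correctness, speed, and gaze into a score in [0, 1].

    Weights (0.5 / 0.25 / 0.25) and `max_time_s` are hypothetical.
    """
    # Faster answers score higher; anything past max_time_s scores 0.
    speed = max(0.0, 1.0 - response_time_s / max_time_s)
    return 0.5 * float(correct) + 0.25 * speed + 0.25 * gaze_on_task_ratio
```

Such a composite score could then feed the robot's action selection, e.g., advancing the lesson above some threshold and repeating with hints below it.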

Moreover, since this is a pilot study for overall system design and functionality testing, it focused only on child play-testing sessions. However, a more advanced evaluation method should be considered for a larger-scale study and conducted with both the child and the parent/caregiver. Evaluation methods with the child, such as "The fun toolkit" (smiley meter), along with peer-play observation and child interviews (Guran et al. 2020), will be used in future studies. Beyond treating children as users and testers of the technology, participatory design with children could provide insights into developing technology aimed at their age group (Albu et al. 2014; Abbas et al. 2018; Superti Pantoja et al. 2020). Previous participatory design studies with preschool children as co-designers produced intriguing solutions, with researchers using drawing, prototyping, and play-based sessions for this purpose. Such participatory design methods could be investigated in future studies with robot interaction, where the robot could serve as one of the tools (e.g., a teleoperated actor) to elicit and encourage ideas, scripts, and prototyping with children as co-designers. Moreover, the evaluation and satisfaction of parents, caregivers, and/or preschool/kindergarten teachers are also essential for the success of any educational tool and its future use; interviews, workshops, and focus groups could be utilized for this purpose (Guran et al. 2020).

9 Conclusion and future research

This pilot study validated the effectiveness of a designed personalized policy for a robot tutor for preschool children aged 3–5 years in a situational context. The selected personalized learning factors and strategies were tested for individualized learning during sessions with the robot. Moreover, this study designed personalized robot tutor interaction and content by interviewing preschool teachers and preschool curriculum experts, and by surveying the literature on children's early education. The personalization factors included both cognitive gain (e.g., answer correctness) and executive functioning of affect state (e.g., engagement and confidence), which adapt the lesson's delivery style, content, exercises, and feedback. Even though this study did not include a control group, the results of this pilot study showed some potential of the personalized interaction: (i) the robot's personalization of the learning path based on the child's responses showed positive engagement and a potential increase in the children's learning gains; and (ii) the robot's social presence and supportive characteristics improved the children's engagement, as they were responsive when the robot provided personalized feedback (e.g., using the child's name).

Furthermore, this study also identified several challenges and opportunities for which advanced solutions could be integrated into future robot designs when their applications concern children. Robot tutors should closely mimic human teachers to positively impact learning by applying additional personalization strategies based on each child's behavior and affect state (beyond attention and confidence). Therefore, this study concludes that developing and combining these two dimensions of personalization (i.e., knowledge and behavior) would impact the assessment methods and might also improve the personalized interaction policy.

In future research, the authors aim to include further factors to personalize the child's learning based on their behavior, such as parameters relating to their personality and reactions during normal learning sessions (e.g., how they behave when avoiding an answer or needing help). The authors are interested in investigating whether allowing the robot to react based on the child's ability, behavior, and needs enhances multiple aspects of the children's overall learning experience.

One of the ideas investigated in this study is using the robot as the sole technology, utilizing its embodiment to interact and to deliver the content with tangible materials from everyday items. This reduces screen use for young children and allows them to connect with their environment. Future investigations could utilize the robot's embodiment for kinesthetic learning, visual learning, or sensorimotor tasks, for example.