Abstract
The use of AI-based social robots has been shown to be beneficial for learning English as a Second Language (ESL). Not much is known, however, about the drivers of parental intention to use those robots in support of their children’s ESL learning. This study aims to explore the factors that drive parental intention to adopt AI-based social robots for children’s ESL learning. The research model is proposed based on the theories and literature regarding motivations, product smartness, personality traits, and physical risk perception. Data collected from 315 participants are analyzed using the partial least squares structural equation modeling (PLS-SEM) method. The results show that utilitarian and hedonic motivations positively affect parental intention to adopt AI-based social robots. In addition, utilitarian motivation is influenced by robots’ autonomy and parental personal innovativeness, while hedonic motivation is influenced by robots’ autonomy and humanlike interaction, and parental personal innovativeness. Based on these findings, important implications for promoting parental intention to adopt AI-based social robots in children’s ESL learning are offered.
1 Introduction
In recent years, digital technologies have been used and developed to support the learning of English as a Second Language (ESL) (Golonka et al., 2014). Social robots powered by artificial intelligence (AI) have emerged as an exciting innovation in the field of ESL. AI-based social robots that can interact with an individual in multilingual contexts and provide immediate feedback using speech, movement, and facial expressions have the potential to become excellent resources for foreign language learners (Neumann, 2020; Tuna & Tuna, 2019). Recent reviews on the application of AI-based social robots (e.g., Cheng et al., 2018) indicate that they are very promising for formal and informal language learning in educational contexts ranging from preschool to college. In the ESL context, researchers have acknowledged the learning advantages of AI-based social robots for children in terms of their performance while learning English (e.g., Alemi & Haeri, 2020; Tolksdorf et al., 2021; van den Berghe et al., 2021), the creation of enthusiasm and excitement while learning English (e.g., Crompton et al., 2018), and the growth of sustained interest and engagement (e.g., Kim et al., 2014).
While English is often considered a global language used for communicating and socializing with people from around the world (Sharifian, 2009), most Taiwanese parents believe that children can gain a competitive academic advantage and possibly a better career through mastering English (Lan et al., 2012; Lin & Chen, 2016; Oladejo, 2006). Hence, most parents in Taiwan are motivated to have their children learn English and provide immense financial resources for their children to study the English language from an early age. Given that the opportunities parents create to facilitate their child’s English acquisition through practice at home are crucial for achieving English learning goals (Li, 2006), AI-based social robots can play an important role in the English practice that parents provide at home to promote their children’s ESL learning. Because parents have the final say on whether to adopt AI-based social robots to help their children learn English at home, it is important to investigate parental acceptance of these robots to better understand the factors that drive parental behavioral intention to support their use for children’s ESL learning.
To examine how parents intend to behave regarding the use of AI-based social robots, previous studies on technology acceptance that highlight individual motivations, personality traits, and perceptions of technology are essential. For example, some researchers have explored the influences of utilitarian and hedonic motivations on adopting or using technology (e.g., Akdim et al., 2022; Keszey, 2020); others have focused on the impact of personality traits (e.g., conscientiousness: Ma Dangi & Mohamed Saat, 2021; Forgas-Coll et al., 2022; personal innovativeness: Jackson et al., 2013; Kim, 2016; Sung & Jeon, 2020). These studies have verified the significance of the aforementioned variables in the adoption or use of technology. Regarding personality traits, the current study focuses on conscientiousness, one dimension of the Big Five personality traits, and personal innovativeness, both of which are relevant to the adoption of technological innovation. Earlier studies have identified that acceptance intention toward robots is significantly influenced by conscientiousness (e.g., Forgas-Coll et al., 2022) and personal innovativeness (e.g., Sung & Jeon, 2020). Additionally, Yeh et al. (2021) reported that among the Big Five personality traits, conscientiousness was the only predictor of motivations associated with learning technology adoption. Furthermore, risk perception factors (e.g., physical risk) are seen as capable of influencing behavioral intention to adopt or use technology (Gunawan et al., 2022). Past work has demonstrated that physical risks are likely to lower individuals’ adoption of a technology (e.g., Ikhsan & Sunaryo, 2020). However, few studies have integrated physical risk to predict behavioral intention with respect to the adoption of AI-based social robots.
Additionally, given that AI-based social robots are emerging technologies with humanlike features (e.g., intelligence), these robots have various abilities (i.e., autonomy, adaptability, reactivity, ability to cooperate, humanlike interaction, and personality) that are collectively referred to as product smartness (Rijsdijk & Hultink, 2009; Rijsdijk et al., 2007). It is currently unclear how product smartness affects parental intention to use AI-based social robots for their children’s ESL learning.
To fully understand the relationships between parental behavioral intention to support the use of AI-based social robots and its influencing factors, the present study theorizes a model of how parents decide to support the use of AI-based social robots for their children (aged 12 and under) within the ESL context. Inspired by the aforementioned studies, this paper incorporates eleven influencing factors into the analysis of parental behavioral intention to support the use of AI-based social robots: motivations (i.e., utilitarian and hedonic motivations), product smartness (i.e., autonomy, adaptability, reactivity, ability to cooperate, humanlike interaction, and personality), personality traits (i.e., conscientiousness and innovativeness), and physical risk.
2 Theoretical background
2.1 AI-based social robots
Traditionally, a robot is a machine capable of automatically performing programmed actions in sequence to complete particular tasks with intelligence (Neumann, 2020), and is not designed for interacting with humans (van den Berghe et al., 2021). Nowadays, robots are often equipped with social intelligence and are thereby able to engage in social interaction and intelligent communication with humans (Neumann, 2020) in a way that follows “the behavioral norms expected by the people with whom the robot is intended to interact” (Bartneck & Forlizzi, 2004, p. 592). Social robots, as intelligent and social machines, can be either semiautonomous or autonomous (Bartneck & Forlizzi, 2004), and take on different physical forms in the real world, such as humans or animals (van den Berghe et al., 2021). To facilitate robot–human interaction, AI-based social robots can use visual recognition, speech production and recognition, body movements, and facial expressions (Chang et al., 2010; Neumann, 2020).
AI-based social robots can be programmed to take on the role of a teacher, tutor, or friend according to various learning task goals (van den Berghe et al., 2021) and to support pedagogical purposes related to learners’ language and literacy skills (Neumann, 2020). Compared to other technologies, AI-based social robots with humanoid or pet-like appearances often have more natural interactions with children due to their ability to employ nonverbal cues (e.g., eye gaze, pointing, and other types of gestures) (van den Berghe et al., 2021). Moreover, AI-based social robots can manipulate real-life objects and use whole-body movements and gestures to integrate physical exercises or objects into learning tasks so that children can interact with their physical surroundings and thereby increase sensorimotor experiences (Neumann, 2020; Wellsby & Pexman, 2014), which benefits vocabulary learning (Wellsby & Pexman, 2014) and appeals to many types of learners (Causo et al., 2016). In addition, AI-based social robots can connect with other devices, such as computers, through wireless internet connections to promote the sharing and preservation of learning activities and results (Chang et al., 2010).
To date, very few studies have investigated parents’ perceptions of the use of AI-based social robots for language learning (e.g., Lin et al., 2021; Louie et al., 2021; Tolksdorf et al., 2021), and most have used qualitative approaches. These studies have shown that parents found educational robots to be useful and fun (Smakman et al., 2020) for their children’s ESL learning. However, no studies have used quantitative analysis to identify which factors are most influential in determining parental intention to support the use of AI-based social robots in ESL learning.
2.2 Behavioral intention to support the use of AI-based social robots
Behavioral intention refers to the likelihood that individuals believe they will engage in a specific behavior (Fishbein & Ajzen, 1975). The existing information systems literature has assumed that behavioral intention can be transformed into the actual use or adoption of a system or technology (Ajzen, 1991; Venkatesh et al., 2003, 2012). Many studies have provided evidence that the execution of system use behavior depends on the strength of the behavioral intention to use it (e.g., Almaiah et al., 2019; Fang et al., 2021; Gansser & Reich, 2021). For example, Gansser and Reich (2021) found that behavioral intention predicted the use behavior of AI-containing products (e.g., cleaning robots, robots for people with health needs, intelligent personal assistants, and care robots) across all three application segments (i.e., mobility, household, and health) in everyday life environments. Given that behavioral intention to use has long been employed as a reliable predictor of actual use behavior, this study assumes that parents’ willingness to use AI-based social robots for their children’s ESL learning at home can predict their actual application of these robots. Thus, the current study focused on investigating parents’ behavioral intention to use AI-based social robots for their children’s ESL learning at home.
2.3 Utilitarian and hedonic motivations
While individuals’ technology adoption behavior can be directed by motivations related to technology use (Ajzen, 1991), most technology adoption studies have focused on utilitarian motivations, which reflect users’ desire to adopt new technologies for pragmatic reasons and benefits (e.g., Aboelmaged, 2018; Kritzinger & Petzer, 2021). More recently, there has been growing interest in hedonic motivation, a concept rooted in marketing research. Hedonic motivation refers to the extent to which individuals derive fun or pleasure from using a technology (Ramírez-Correa et al., 2019; Venkatesh et al., 2012). Researchers have recognized that hedonic motivation plays a critical role in determining technology adoption (e.g., Brown & Venkatesh, 2005; Venkatesh & Davis, 2000).
Prior research has examined the effects of hedonic and utilitarian motivations in robot settings. For utilitarian motivations, most studies have examined utilitarian motivators such as perceived usefulness and performance expectancy, rather than directly studying the impact of utilitarian motivations, and have shown that utilitarian motivators act as strong predictors of behavioral intention to use AI-containing products (Gansser & Reich, 2021), hotel robot assistants (Lee et al., 2021), assistive social robots (Heerink et al., 2010), and smart home service robots (Hung et al., 2020). However, Ghazali et al. (2020) found that perceived usefulness did not predict intention to use social robots as persuasive agents when making decisions about donating to charities. Regarding the association between hedonic motivation and behavioral intention, existing results are conflicting. Hedonic motivation has been demonstrated to be a significant influencer of behavioral intention to adopt AI-containing products (Gansser & Reich, 2021) and hotel robot assistants (Lee et al., 2021). In contrast, Hung et al. (2020) found that hedonic motivation did not determine behavioral intention to use smart home service robots.
Hence, AI-based social robots can be used as instruments to improve young children’s ESL learning quality and efficiency through fun and enjoyable learning experiences, and utilitarian and hedonic motivations are expected to impact parents’ intention to use these robots. Thus, this study hypothesizes the following:
-
H1: Utilitarian motivation positively impacts parental intention to use AI-based social robots.
-
H2: Hedonic motivation positively impacts parental intention to use AI-based social robots.
2.4 Product smartness
Intelligent products that contain microchips, software, and sensors can collect, process, and produce information like humans (Rijsdijk & Hultink, 2009; Rijsdijk et al., 2007). AI-based social robots with humanlike senses are intelligent products, since robots can detect and recognize users’ voices, speech, gestures, and facial expressions through cameras and sensors; differentiate their intent, facial expressions, and feedback; track their location within their surroundings; and categorize human social activities and behaviors to decode them (Tung & Au, 2018). According to Rijsdijk et al. (2007) and Rijsdijk and Hultink (2009), product smartness refers to intelligent products, such as robots, displaying various abilities that non-intelligent products do not possess. The dimensions of product smartness include autonomy, adaptability, reactivity, ability to cooperate, humanlike interaction, and personality.
The autonomy dimension represents the ability of an intelligent product to act in pursuit of certain goals, and work independently (Rhiu & Yun, 2018; Rijsdijk & Hultink, 2009; Rijsdijk et al., 2007). Intelligent products with high autonomy often reduce the time and energy users expend on tasks related to operation and control (Baber, 1996). Thus, AI-based social robots forego the need for human intervention as they independently perform multiple functions and ESL tasks with younger children in support of their English learning. Concerning utilitarian benefits, the autonomy of AI-based social robots provides convenience to all users with less need for operation and control. Moreover, the higher level of automation associated with AI-based social robots may lead users to concentrate on emotional benefits (e.g., enjoyment) induced by interacting with these robots (Frank et al., 2021). Maria and Christian (2019) found that the perceived autonomy level of a smart product (i.e., smart washing machine) positively influences the perceived functional value (i.e., features, performance, and quality) and emotional value (i.e., emotional experiences and sentiments regarding the perception and usage of a product). Moreover, Chen and Lin (2022) found the automation of leisure-sports appliances positively influences perceived usefulness. Thus, when parents perceive a higher autonomy level of AI-based social robots, this can lead to higher utilitarian and hedonic motivation. In turn, the following hypotheses are posited:
-
H3: The autonomy of AI-based social robots positively impacts parental utilitarian motivation.
-
H4: The autonomy of AI-based social robots positively impacts parental hedonic motivation.
The adaptability dimension is defined as the ability of an intelligent product to respond and adapt to new conditions (Rhiu & Yun, 2018; Rijsdijk et al., 2007; Rijsdijk & Hultink, 2009). AI can learn from previous actions and adapt future ones by analyzing their influence on the environment (High-Level Expert Group on Artificial Intelligence, 2019), based on continuously collected information about users and their usage of the product (Lee & Kim, 2016). This adaptability can generate personal meaning and emotional value (Park & Lee, 2014), but can also raise privacy concerns regarding the collection and use of user information (Lee & Kim, 2016). Similarly, the adaptability of a robot is defined as its perceived ability to customize its functionality to fit users’ changing needs, preferences, and personality (Heerink et al., 2010). In the ESL learning context, it is important that AI-based social robots can recognize ESL learners’ levels and learning progress in order to choose appropriate learning contents that suit those learners’ needs and preferences. In this way, learners can learn at their own pace and benefit from customized learning based on their strengths and needs (Tuna & Tuna, 2019). In a previous study, Park and Lee (2014) identified the perceived adaptability level of a smart product (i.e., smart phones) as positively predicting functional and emotional values. Thus, parental perceptions of AI-based social robots’ adaptability are expected to increase both utilitarian and hedonic motivation. In turn, the following are posited:
-
H5: The adaptability of AI-based social robots positively impacts parental utilitarian motivation.
-
H6: The adaptability of AI-based social robots positively impacts parental hedonic motivation.
The reactivity dimension represents the ability of an intelligent product to make different and instant responses to changes in its environment in a stimulus/response manner (Rijsdijk & Hultink, 2009; Rijsdijk et al., 2007). Reactive behaviors of AI-based social robots can include offering appropriate responses to learners’ performance (e.g., responding with exclamations of joy, or dealing with mistakes by providing hints) when conversing with ESL learners in real time by collecting environmental information about the learners (Chang et al., 2010). While Maria and Christian (2019) noted that the perceived reactivity level of a smart product (i.e., smart washing machine) negatively affects functional value and has no impact on emotional value, Park and Lee (2014) found that the perceived reactivity level of a smart product (i.e., smart phones) positively affects emotional value, but has no impact on functional value. Thus, parental perceptions of AI-based social robots’ reactivity are expected to increase utilitarian motivation and hedonic motivation, leading to the following hypotheses:
-
H7: The reactivity of AI-based social robots positively impacts parental utilitarian motivation.
-
H8: The reactivity of AI-based social robots positively impacts parental hedonic motivation.
The ability to cooperate dimension reflects the capacity of an intelligent product to work cooperatively with other devices to complete a task with a common goal (Rijsdijk & Hultink, 2009; Rijsdijk et al., 2007). For example, AI-based social robots need to communicate with computers or other devices that users already own through a wireless channel (e.g., Bluetooth or Wi-Fi) in order to use their software or programs (Chang et al., 2010). The more products an intelligent product can cooperate with, the more user needs it can satisfy (Rijsdijk et al., 2007). This ability to cooperate may enhance the functionality of the connected devices (Rijsdijk et al., 2007), create convenience for users, and encourage positive emotions in users (Rhiu & Yun, 2018). Until now, no study has investigated the predictive effects of a smart product’s ability to cooperate on utilitarian and hedonic motivation. Even so, parental perception of AI-based social robots’ ability to cooperate is expected to increase both motivation types, as posited below:
-
H9: AI-based social robots’ ability to cooperate positively impacts parental utilitarian motivation.
-
H10: AI-based social robots’ ability to cooperate positively impacts parental hedonic motivation.
The humanlike interaction dimension concerns the ability of an intelligent product to communicate and interact with users in a humanlike way (Rijsdijk & Hultink, 2009; Rijsdijk et al., 2007). In the ESL learning context, it is necessary for AI-based social robots to engage in social interaction by talking, showing facial expressions, exhibiting social behaviors, and being able to act and respond in culturally appropriate manners, all to assist users to practice real-life conversations (Tuna & Tuna, 2019). Meanwhile, users are more likely to become emotionally attached to AI-based social robots that offer humanlike interactions and in turn experience additional emotional benefits (e.g., enjoyment) from these interactions (van Straten et al., 2020). No study has investigated the predicting effect of the perceived humanlike interaction level of a smart product on utilitarian motivation; however, Park and Lee (2014) found that the perceived humanlike interaction level of a smart product (i.e., smart phone) positively affects emotional values. Thus, parental perceptions of AI-based social robots’ humanlike interaction are expected to increase utilitarian motivation and hedonic motivation. In response, the following hypotheses are posited:
-
H11: Humanlike interaction of AI-based social robots positively impacts parental utilitarian motivation.
-
H12: Humanlike interaction of AI-based social robots positively impacts parental hedonic motivation.
The final dimension, personality, refers to the ability of an intelligent product to make an overall impression by showing the properties (Govers, 2004; Rijsdijk & Hultink, 2009; Rijsdijk et al., 2007) or emotional states (Rijsdijk & Hultink, 2009; Rijsdijk et al., 2007) of a credible character. Mou et al.’s (2020) systematic review found that robot personality is manifested through visual appearance (e.g., anthropomorphic, zoomorphic, caricatured, and functional), language style, vocal features (e.g., volume, speaking speed, pitch, and the amount of speech), movement (e.g., kinesics, movement angles, movement speeds, and movement patterns), countenance (e.g., eye contact and gaze behavior), the touch sensation of a robot part, the role that the user plays in the human–robot interaction, and proxemics (i.e., the distance between the robot and an object). Robots that display a particular personality have been found to affect users’ behaviors and task effort (Park et al., 2017), as well as to evoke enjoyable and favorable emotions (Hwang et al., 2013). However, to date, no study has examined the predictive effect of a smart product’s perceived personality on utilitarian and hedonic motivation. In the current study, parental perception of AI-based social robots’ personality is expected to increase both motivation types, leading to the following hypotheses:
-
H13: The personality of AI-based social robots positively impacts parental utilitarian motivation.
-
H14: The personality of AI-based social robots positively impacts parental hedonic motivation.
2.5 Conscientiousness
Conscientiousness is a primary Big Five personality trait that reflects being well-organized, responsible, careful, and efficient (Goldberg, 1990). Highly conscientious individuals tend to be self-disciplined, have concrete goals with a strong desire to fulfil them, and possess a life direction. This implies that they carefully examine whether using a technology can help them effectively reach their goals (Devaraj et al., 2008), which may magnify their desire to use the technology. Moreover, highly conscientious individuals who are self-controlled and responsible tend to excel at controlling their desire to express their own emotions and experience pleasure (Taufik et al., 2019). Previous studies have reported conflicting results regarding the predicting effect of conscientiousness on utilitarian and hedonic motivations. For example, Punnoose (2012) found that conscientiousness has a significant positive direct effect on utilitarian motivation (i.e., perceived usefulness) and a negative direct effect on hedonic motivation (i.e., perceived enjoyment) of using e-learning in the future. However, other researchers have shown conscientiousness to be irrelevant in predicting utilitarian motivation (i.e., perceived usefulness) (e.g., Denden et al., 2022) and hedonic motivation (e.g., Busch, 2020). As such, conscientious parents are more likely to carefully consider the utilitarian benefits of using AI-based social robots to enhance their children’s ESL learning performance, and this deep consideration will make them more utilitarian-motivated to use such robots. However, the hedonic benefits that AI-based social robots can bring to children during the ESL learning process may not reflect goals that highly conscientious parents want to pursue. Therefore, more conscientious parents are more likely to be utilitarian-motivated and less likely to be hedonically motivated, as posited below:
-
H15: Conscientiousness positively impacts parental utilitarian motivation.
-
H16: Conscientiousness negatively impacts parental hedonic motivation.
2.6 Personal innovativeness
Personal innovativeness refers to an individual’s willingness to try out any newly introduced technology at an early stage (Agarwal & Prasad, 1998). While individuals respond differently to new innovations, personal innovativeness can be classified as a type of personality trait (Midgley & Dowling, 1978). Individuals are considered innovative if they are early adopters of new types of technology (Agarwal & Prasad, 1998). Highly innovative individuals tend to recognize the utilitarian and hedonic benefits of adopting new types of technology (Oliveira et al., 2016). Previous studies have demonstrated that personal innovativeness plays a significant positive role in enhancing performance expectancy (e.g., Chayomchai, 2020) and perceived usefulness (e.g., Jackson et al., 2013; Shanmugavel & Micheal, 2022) from a utilitarian motivation perspective, as well as perceived enjoyment (e.g., Alalwan et al., 2018; Rouibah et al., 2016) from a hedonic motivation perspective. AI-based social robots can be seen as a novel type of learning technology, so parents likely need a certain level of personal innovativeness to be both utilitarian- and hedonically motivated to adopt them. In other words, highly innovative parents will not only be motivated to employ AI-based social robots for utilitarian benefits, but will also perceive the hedonic benefits of using them. Thus, the following hypotheses are proposed:
-
H17: Personal innovativeness positively impacts parental utilitarian motivation.
-
H18: Personal innovativeness positively impacts parental hedonic motivation.
2.7 Perceived physical risk
Physical risk refers to perceptions of risk associated with physical harm from the adoption of a technology (Gunawan et al., 2022). Such harm is not necessarily limited to technology adopters themselves; it may also affect others close to them (e.g., family members) who use the technology. Perceptions of risk associated with adopting a technology (e.g., physical risk) may negatively influence both the motivation to use the technology to achieve one’s goals and the enjoyment perceived while using it (Cocosila et al., 2009). Thus, physical risk has become an increasingly popular consideration in the technology adoption literature (Faqih, 2022). However, few studies have examined whether perceived physical risk has negative implications for utilitarian and hedonic motivations. One study by Cocosila et al. (2007) showed a negative effect of physical risk on a utilitarian motivator (i.e., perceived usefulness) with respect to the adoption of 3G cell phones. No study to date has investigated the relationship between physical risk and hedonic motivation in technology adoption; however, Zhong et al. (2021) revealed that hedonic motivation (i.e., enjoyment gained from dining-out behaviors) can be negatively influenced by perceived physical risk.
There are various reasons to perceive AI-based social robots as physically risky. For example, users may suspect that robots possessing intelligence and decision-making capabilities may behave autonomously and unpredictably, which might endanger their lives (Delgosha & Hajiheydari, 2021). Similarly, Lin et al. (2021) found that parents perceived high-autonomy storytelling robots as much more threatening than low-autonomy storytelling robots. Moreover, as robots are electrical machines, users could suffer electric shocks while interacting with them (Woo et al., 2021). In sum, parents who believe robots pose a low physical risk tend to be utilitarian- and hedonically motivated. Hence, the following are posited:
-
H19: Perceived physical risk negatively impacts parental utilitarian motivation.
-
H20: Perceived physical risk negatively impacts parental hedonic motivation.
2.8 Research model
On the basis of a review of the literature, the study proposed a research model involving motivations (i.e., utilitarian and hedonic motivations), product smartness (i.e., autonomy, adaptability, reactivity, ability to cooperate, humanlike interaction, and personality), personality traits (i.e., conscientiousness and innovativeness), and physical risk. In Fig. 1, the conceptual model used in this study and the relationships between the constructs are illustrated. Based on this model, the aim of this study was to identify the factors that influence parental behavioral intention to support the use of AI-based social robots.
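The twenty hypothesized paths can be summarized programmatically, which makes the structure of the model easy to audit. The sketch below is a plain Python representation of the relationships stated in H1–H20 (the construct abbreviations are illustrative, not from the original paper):

```python
# Hypothesized paths of the research model (H1-H20), with expected signs.
# Abbreviations (illustrative): UM = utilitarian motivation,
# HM = hedonic motivation, BI = behavioral intention to use the robots.
HYPOTHESES = {
    "H1":  ("UM", "BI", "+"),
    "H2":  ("HM", "BI", "+"),
    "H3":  ("autonomy", "UM", "+"),
    "H4":  ("autonomy", "HM", "+"),
    "H5":  ("adaptability", "UM", "+"),
    "H6":  ("adaptability", "HM", "+"),
    "H7":  ("reactivity", "UM", "+"),
    "H8":  ("reactivity", "HM", "+"),
    "H9":  ("ability to cooperate", "UM", "+"),
    "H10": ("ability to cooperate", "HM", "+"),
    "H11": ("humanlike interaction", "UM", "+"),
    "H12": ("humanlike interaction", "HM", "+"),
    "H13": ("personality", "UM", "+"),
    "H14": ("personality", "HM", "+"),
    "H15": ("conscientiousness", "UM", "+"),
    "H16": ("conscientiousness", "HM", "-"),
    "H17": ("personal innovativeness", "UM", "+"),
    "H18": ("personal innovativeness", "HM", "+"),
    "H19": ("physical risk", "UM", "-"),
    "H20": ("physical risk", "HM", "-"),
}

# Only the two motivations are hypothesized to predict intention directly;
# every other construct operates through them.
predictors_of_BI = [h for h, (src, dst, _) in HYPOTHESES.items() if dst == "BI"]
```

This layout makes the model's mediation structure explicit: product smartness, personality traits, and physical risk feed the two motivations, which alone drive behavioral intention.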
3 Methodology
3.1 Participants
A total of 315 valid anonymous survey responses were collected in two ways: through an online survey distributed to parents who were members of various online communities related to teaching English to children on social networking websites, and through a paper survey distributed via kindergartens and elementary schools in Taiwan to parents of enrolled children. Participation in the study was voluntary. To encourage participation, all participants were entered into a draw to win one of 100 gift cards valued at NT$100.
As Table 1 shows, participants were parents of children aged under 12 years; most were female (60%). The majority of parents (54.3%) were aged 31 to 40 years; 25.7% were aged 41 to 50; 17.1% were aged 21 to 30; and 2.8% were aged 51 or above. Respondents’ monthly incomes varied, with most earning NT$40,001–60,000 (33.3%), followed by NT$20,001–40,000 (33%), above NT$60,000 (22.5%), and under NT$20,000 (11.1%). In terms of employment, 89.5% of these parents worked full- or part-time, 3.2% were students, and 7.3% identified as homemakers. Moreover, 15.6% reported having a child or children aged between 0 and 2 years, 27.9% between 3 and 6 years, 32.4% in grade 1 or 2, 23.5% in grade 3 or 4, and 16.2% in grade 5 or 6.
3.2 Measures
The study relied on previously validated scales for all research constructs, adjusted to the context of AI-based social robot usage (see the Appendix for details). Before distributing the survey, three experts in information systems and/or English teaching were consulted to examine the intelligibility and appropriateness of all survey items, and items were modified based on their suggestions. Parents' intention to use an AI robot was assessed with three items modified from Zhu et al. (2022). Five items were used to measure parents' utilitarian motivation and three items their hedonic motivation, all adapted from To et al. (2007). Parents' perceptions of an AI-based social robot's autonomy, adaptability, reactivity, ability to cooperate, humanlike interaction, and personality were adapted from Rijsdijk et al.'s (2007) product intelligence inventory. Parents' conscientiousness was assessed with the eight-item conscientiousness subscale of the Big Five Inventory developed by John and Srivastava (1999). Parents' personal innovativeness was assessed using three items adapted from Jackson et al. (2013). Three items from Wiedmann et al. (2011) were modified to assess parents' perceptions of physical risk. All constructs were measured on a seven-point Likert scale ranging from 1 (totally disagree) to 7 (totally agree).
3.3 Data analysis
Partial least squares structural equation modeling (PLS-SEM) was used to test the research model and hypotheses. Following Hair et al. (2017), the PLS-SEM results were evaluated in a two-step process: the measurement models were assessed first, followed by the structural model. The first step focused on the reliability and validity of the construct measures, while the second tested the hypotheses using the path coefficients obtained. Both the measurement model and the structural model were assessed using SmartPLS software.
4 Results
4.1 Measurement model
Convergent validity was assessed by examining the loadings between each item and its corresponding underlying factor, as well as the average variance extracted (AVE). After one conscientiousness item, one ability-to-learn item, and three utilitarian motivation items were deleted due to cross-loadings or weak loadings, all remaining factor loadings exceeded 0.7 and all AVE values exceeded 0.5 (Hair et al., 2017), as shown in Table 2, suggesting good convergent validity. Internal consistency was assessed using Cronbach's alpha and composite reliability (CR). As depicted in Table 2, the Cronbach's alpha and CR values of all constructs exceeded the recommended level of 0.70, demonstrating good internal consistency. Discriminant validity was assessed using the Fornell-Larcker criterion and the Heterotrait-Monotrait Ratio (HTMT). The Fornell-Larcker criterion requires that the square root of the AVE of each construct be larger than its correlations with the other constructs; further, the HTMT must not exceed the threshold of 0.9 (Henseler et al., 2015). As seen in Table 3, the model had good discriminant validity.
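The thresholds applied above follow from standard formulas: AVE is the mean squared standardized loading, CR is (Σλ)² / ((Σλ)² + Σ(1 − λ²)), and the Fornell-Larcker criterion compares √AVE with the inter-construct correlations. A minimal sketch with hypothetical loadings (not the study's data):

```python
import numpy as np

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

def composite_reliability(loadings):
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return float(num / (num + np.sum(1 - lam ** 2)))

def fornell_larcker_ok(aves, corr):
    """Each construct's sqrt(AVE) must exceed its correlations with all others."""
    root = np.sqrt(np.asarray(aves, dtype=float))
    c = np.abs(np.array(corr, dtype=float))
    np.fill_diagonal(c, 0.0)
    return bool(np.all(root > c.max(axis=1)))

# Hypothetical loadings for one construct (illustrative values only)
lam = [0.80, 0.85, 0.90]
print(round(ave(lam), 3))                    # 0.724 > 0.5
print(round(composite_reliability(lam), 3))  # 0.887 > 0.7
```

The HTMT check works analogously but needs the full item-level correlation matrix, so it is omitted here for brevity.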
In addition to the reliability and validity assessments, multicollinearity was checked by confirming that all inner variance inflation factor (VIF) values were below 5 (Hair et al., 2017), ensuring no problematic intercorrelation between the constructs. The inner VIF values in the structural model ranged from 1.036 to 3.027, indicating that the research constructs did not suffer from multicollinearity.
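The inner-VIF check can be reproduced with the textbook definition VIF_j = 1 / (1 − R²_j), where R²_j comes from regressing predictor j on the remaining predictors. A small sketch with hypothetical data:

```python
import numpy as np

def vifs(X):
    """Variance inflation factors for the columns of X.
    VIF_j = 1 / (1 - R^2_j), with R^2_j from regressing (centered)
    predictor j on all other predictors."""
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)          # centering absorbs the intercept
    out = []
    for j in range(Xc.shape[1]):
        y = Xc[:, j]
        Z = np.delete(Xc, j, axis=1)
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - (resid @ resid) / (y @ y)
        out.append(1.0 / (1.0 - r2))
    return out

# Two orthogonal (hypothetical) predictors -> both VIFs are 1
X = np.array([[1.0, 1.0], [2.0, -1.0], [3.0, -1.0], [4.0, 1.0]])
print([round(v, 3) for v in vifs(X)])  # [1.0, 1.0]
```

With near-duplicate columns the same function returns VIFs far above the 5 cutoff, flagging collinearity.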
4.2 Structural model
As seen in Table 4 and Fig. 2, seven of the 20 hypotheses are supported. Utilitarian motivation (H1) and hedonic motivation (H2) were significant in explaining intention to use AI robots (β = 0.391, p < 0.001 and β = 0.426, p < 0.001, respectively). H3 and H4, which concern the relationships between autonomy and utilitarian and hedonic motivation, were both supported (β = 0.189, p < 0.05 and β = 0.189, p < 0.01, respectively). However, Hypotheses 5 and 6, which proposed effects of adaptability on utilitarian (H5: β = 0.005, p = 0.95) and hedonic motivation (H6: β = 0.047, p = 0.575), were not supported. The direct paths from reactivity to utilitarian (H7: β = 0.091, p = 0.249) and hedonic motivation (H8: β = 0.068, p = 0.404) were not significant, nor were the direct effects of the ability to cooperate on utilitarian (H9: β = 0.124, p = 0.088) and hedonic motivation (H10: β = 0.059, p = 0.414). No significant relationship existed between humanlike interaction and utilitarian motivation (H11: β = 0.078, p = 0.298), whereas humanlike interaction positively affected hedonic motivation (H12: β = 0.19, p < 0.01). Hypotheses 13 and 14, which suggested positive effects of personality on utilitarian (H13: β = 0.055, p = 0.383) and hedonic motivation (H14: β = 0.108, p = 0.073), were not supported. Hypotheses 15 and 16 were also not supported: the standardized path coefficients from conscientiousness to utilitarian motivation (H15: β = 0.118, p = 0.055) and to hedonic motivation (H16: β = 0.067, p = 0.267) were not statistically significant. H17 and H18, which concerned the effects of personal innovativeness on utilitarian (β = 0.160, p < 0.01) and hedonic motivation (β = 0.201, p < 0.001), were both supported. The direct links from physical risk to utilitarian (H19: β = -0.048, p = 0.355) and hedonic motivation (H20: β = -0.031, p = 0.540) were not supported.
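In PLS-SEM, the p-values reported for path coefficients are typically obtained by bootstrap resampling rather than analytic formulas. As a simplified, hypothetical stand-in for the resampling SmartPLS performs on the full model, the sketch below bootstraps a single standardized path with synthetic data:

```python
import numpy as np

def bootstrap_path_p(x, y, n_boot=2000, seed=42):
    """Two-sided bootstrap p-value for a standardized path coefficient.
    A simplified stand-in for the whole-model resampling done by
    PLS-SEM software; here the 'path' is a standardized simple-
    regression slope (equal to the Pearson correlation)."""
    rng = np.random.default_rng(seed)
    n = len(x)

    def beta(xs, ys):
        xs = (xs - xs.mean()) / xs.std()
        ys = (ys - ys.mean()) / ys.std()
        return float(np.mean(xs * ys))

    betas = np.array([
        beta(x[idx], y[idx])
        for idx in (rng.integers(0, n, n) for _ in range(n_boot))
    ])
    # share of resampled coefficients on the "wrong" side of zero, two-sided
    p = 2 * min((betas <= 0).mean(), (betas >= 0).mean())
    return beta(x, y), p

# Synthetic data with a built-in positive effect (illustrative only)
rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = 0.4 * x + rng.normal(size=300)
b, p = bootstrap_path_p(x, y)
print(p < 0.05)  # the built-in effect is detected as significant
```

With no true effect in the data, the same procedure yields a large p-value, mirroring the non-significant paths above.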
These results show that the relationship between hedonic motivation and intention to use is the strongest one in the model. The research model explains 57.3% of the variance in use intention, 37.6% of the variance in utilitarian motivation, and 47.6% of the variance in hedonic motivation.
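The variance-explained figures are R² values, computed for each endogenous construct as 1 − SS_residual / SS_total. A minimal sketch with made-up scores (not the study's data):

```python
import numpy as np

def r_squared(y, y_hat):
    """Variance explained: R^2 = 1 - SS_residual / SS_total."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical construct scores and an OLS fit (illustrative only)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.0, 3.0, 5.0])
slope, intercept = np.polyfit(x, y, 1)
print(round(r_squared(y, slope * x + intercept), 3))  # 0.966
```

In the structural model, y would be an endogenous construct's scores and y_hat the prediction from its antecedents.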
5 Discussion
To our knowledge, the present study is the first to theorize a research model for understanding parental intention to support the use of AI-based social robots in the ESL learning of children under 12, linking motivations (i.e., utilitarian and hedonic), product smartness, personality traits (i.e., conscientiousness and innovativeness), and physical risk. The study tests this framework with a sample of parents inclined to use AI-based social robots to help with ESL learning. Understanding the determinants of parental behavioral intention to use educational robots can help instructional designers and AI-based social robot engineers and designers better understand parents' needs and wants and determine whether AI-based social robots fit the context of ESL learning at home.
The results demonstrate that parents’ intention to use AI-based social robots for their kids is not only directly affected by utilitarian motivation—their rational side—but also by their hedonic motivation—their emotional side. Therefore, utilitarian and hedonic motivations are real drivers behind parental intention to use these robots. The findings reveal that utilitarian motivation is significant for the behavioral intention to adopt AI-based social robots, consistent with previous research (e.g., Gansser & Reich, 2021; Heerink et al., 2010; Hung et al., 2020; Lee et al., 2021). This implies that the extent to which AI-based social robots provide utilitarian benefits (i.e., being helpful, effective, functional, necessary, and practical) for kids learning English is significant to the adoption rate of AI-based social robots. Moreover, the results suggest that higher hedonic motivation in terms of using AI-based social robots may result in higher intention to adopt AI-based social robots, similar to studies conducted by Gansser and Reich (2021) and Lee et al. (2021). This may suggest that hedonic reasons such as fun, delight, and enjoyment may lead to the adoption of AI-based social robots.
Moreover, the six smartness attributes of AI-based social robots (i.e., autonomy, adaptability, reactivity, ability to cooperate, humanlike interaction, and personality) were examined as determinants of parental utilitarian and hedonic motivations. While not all these attributes were found to influence those motivations, some notable results emerged. First, autonomy directly influences both parental utilitarian and hedonic motivations, confirming the findings of Maria and Christian (2019) and Chen and Lin (2022). Because AI-based social robots can perform tasks independently, the autonomy attribute can increase parents' feelings of convenience and provide them with hedonic experiences. The results thus show that the autonomy of AI-based social robots is a source of both parental utilitarian and hedonic motivation, which in turn increases the intention to use these robots. Second, humanlike interaction significantly predicts parental hedonic motivation, confirming the findings of Park and Lee (2014), but does not significantly predict parental utilitarian motivation, contrary to the same study. This means that the humanlike interaction attribute can strengthen parental hedonic motivation because AI-based social robots communicate and interact with children in a humanlike way during the ESL learning process; however, parents tend to perceive that humanlike interaction provides no utilitarian benefit (i.e., being helpful, effective, functional, necessary, and practical). These findings suggest that the higher the level of humanlike interaction an AI-based social robot can achieve, the more hedonically motivated parents will be with respect to its adoption. Finally, the study does not support the impacts of adaptability, reactivity, ability to cooperate, or personality on parental utilitarian and hedonic motivation.
Parents apparently did not perceive that the robots' adaptability, reactivity, ability to cooperate, or personality provide utilitarian benefits (i.e., being helpful, effective, functional, necessary, and practical) as their children learn English, so these attributes did not foster utilitarian motivation. Likewise, parents were not hedonically motivated to adopt AI-based social robots because they did not perceive that higher levels of these attributes lead to more fun, delight, and enjoyment for their children as they learn English. The non-significant results for adaptability, reactivity, and personality might be due, to some degree, to the uncanny valley effect: parents may exhibit negative emotional reactions to a robot's human resemblance as expressed through its adaptive and affective capabilities (Lin et al., 2021). The non-significant effects of the ability to cooperate may be explained by the high expectations that parents place on the robots. It is possible that the widespread dissemination of high-technology products means that a robot's ability to work cooperatively with other devices and systems is already regarded as standard rather than as a special feature.
This study explored the influence of two personality traits, conscientiousness and personal innovativeness, on utilitarian and hedonic motivation. The results validate the influence of personal innovativeness on parental utilitarian and hedonic motivations to adopt AI-based social robots, but not the effect of conscientiousness. This study assumed participants would view AI-based social robots as innovative products, which highlights the role of personal innovativeness in increasing parental utilitarian and hedonic motivation. These findings support the results of Chayomchai (2020), Jackson et al. (2013), and Shanmugavel and Micheal (2022) linking personal innovativeness to utilitarian motivators (i.e., performance expectancy and perceived usefulness); they are also in line with research by Alalwan et al. (2018) and Rouibah et al. (2016) relating personal innovativeness to hedonic motivators (i.e., perceived enjoyment). These results suggest that as parents become more innovative, their ability to perceive the utilitarian benefits and hedonic appeal associated with adopting AI-based social robots increases. That is, utilitarian benefits and hedonic appeal may be perceived more strongly where parents pursue innovation and exploration. However, the hypothesis that conscientiousness is positively related to utilitarian and hedonic motivations was not supported. Possible reasons include parental concerns that robots lack empathy, may give out children's data or information through manipulation or hacking, and increase children's screen time, as noted in Louie et al.'s (2021) study.
The findings of this study nevertheless showed no support for the effects of perceived physical risk on parental utilitarian and hedonic motivations, which contradicts the results obtained by Cocosila et al. (2007) and Zhong et al. (2021). One possible reason is the cute, likeable appearance of AI-based social robots: they are usually designed to increase children's learning engagement and motivation and to reduce any fears associated with robots (Randall, 2019). Moreover, in Louie et al.'s (2021) study, parents chose robots that had an appealing, smart-looking appearance. Such choices may reduce parental physical risk perceptions and alleviate the negative effects of perceived physical risk on parental utilitarian and hedonic motivations toward adopting AI-based social robots.
5.1 Theoretical implications
This study is an early attempt to use PLS-SEM to examine determinants of parental behavioral intention to adopt AI-based social robots. The findings make several essential contributions to the existing technology adoption research. First, past studies investigating AI-based social robots in educational contexts were conducted in school settings with a predominant focus on whether using social robots could improve affective learning outcomes, cognitive learning outcomes, and motivation; little work has examined parental willingness to adopt AI-based social robots to support children's ESL learning at home. A research model was proposed and validated in this study to address this gap, and its empirical evidence broadens our understanding of the factors that influence parental intention to use these robots. Second, the findings highlight the importance of utilitarian and hedonic motivation in affecting parental willingness to adopt AI-based social robots to support children's ESL learning at home. Notably, hedonic motivation was the strongest predictor of parental intention, indicating that parents prioritized the fun, delight, and enjoyment associated with using AI-based social robots. Third, comparatively little is known about the role that perceived smartness features of AI-based social robots play in parental intention to adopt such robots for ESL learning, even though product smartness is an important antecedent of adoption. Evidence for the effect of product smartness is therefore critical not only for establishing a link between smartness features and motivation, but also for demonstrating that smartness features can affect how users are motivated to adopt the product, subsequently influencing behavioral intention. The results show that the effect of autonomy on utilitarian motivation surpassed that of personal innovativeness.
This suggests that, in the context of parental acceptance of AI-based social robots, autonomy promotes utilitarian motivation more strongly than personal innovativeness does. Finally, this study establishes links from personal innovativeness to both motivations, expanding our understanding of parental willingness to adopt AI-based social robots by considering the role of innovativeness in this research context. The influence of innovativeness on hedonic motivation was stronger than those of autonomy and humanlike interaction, indicating that personal innovativeness increases hedonic motivation more than autonomy and humanlike interaction do.
5.2 Practical implications
From a practical perspective, this study offers empirical evidence that AI-based social robot designers, developers, and marketers should take into account. First, the results support that utilitarian and hedonic motivation have positive effects on parental intention to adopt AI-based social robots. Thus, designers, developers, and marketers should consider emphasizing the utilitarian benefits and hedonic appeal of these robots in their design, development, and marketing activities. The stronger impact of hedonic appeal indicates that greater effort should be targeted at the fun, delight, and enjoyment associated with using AI-based robots to encourage parental acceptance. Second, the results clearly show that personal innovativeness is a significant predictor of parents' utilitarian and hedonic motivation to adopt AI-based social robots. Further marketing efforts should therefore promote these robots as a pioneering, emerging technology that adds value to children's ESL learning at home; in addition, when introducing AI-based robots to the marketplace, marketers should try to identify parents with high personal innovativeness, which may maximize and expedite product acceptance. Third, the six smartness features of AI-based social robots have different impacts on utilitarian and hedonic motivation. The results provide evidence that autonomy and humanlike interaction increase hedonic motivation, while autonomy also increases utilitarian motivation. For these reasons, designing, developing, and marketing AI-based social robots in ways that highlight autonomy and humanlike interaction is suggested.
5.3 Limitations and future studies
Several limitations should be mentioned. First, the study used a convenience sample in which all respondents came from Taiwan, and half of the participants were mothers aged between 31 and 40; hence, the findings may not be generalizable to other nations and settings. Future researchers could use a more diverse sample and random sampling techniques to verify and generalize the findings across multiple nations. Second, given that this study focused on the Big Five personality trait of conscientiousness, future researchers could include the other traits (e.g., agreeableness, extroversion, openness, and neuroticism) to elaborate on the results. Third, this study focused specifically on the effect of physical risk on utilitarian and hedonic motivation. Future studies could incorporate privacy risk perceptions, which have been identified as a concern for some parents with respect to using robots in children's ESL learning contexts (e.g., Louie et al., 2021). Finally, future researchers could add other factors that may influence utilitarian motivation (e.g., ease of use: Chang et al., 2023) or hedonic motivation (e.g., flow: Zhao & Bacao, 2021), or integrate variables from the Hedonic-Motivation System Adoption Model proposed by Lowry et al. (2013), which may better explain the variance in parental willingness to adopt AI-based social robots.
Data availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
References
Aboelmaged, M. G. (2018). Knowledge sharing through enterprise social network (ESN) systems: motivational drivers and their impact on employees’ productivity. Journal of Knowledge Management, 22(2), 362–383. https://doi.org/10.1108/JKM-05-2017-0188
Agarwal, R., & Prasad, J. (1998). A conceptual and operational definition of personal innovativeness in the domain of information technology. Information Systems Research, 9, 204–216. https://doi.org/10.1287/isre.9.2.204
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179–211. https://doi.org/10.1016/0749-5978(91)90020-T
Akdim, K., Casaló, L. V., & Flavián, C. (2022). The role of utilitarian and hedonic aspects in the continuance intention to use social mobile apps. Journal of Retailing and Consumer Services, 66, 102888. https://doi.org/10.1016/j.jretconser.2021.102888
Alalwan, A. A., Baabdullah, A. M., Rana, N. P., Tamilmani, K., & Dwivedi, Y. K. (2018). Examining adoption of mobile internet in Saudi Arabia: Extending TAM with perceived enjoyment, innovativeness and trust. Technology in Society, 55, 100–110. https://doi.org/10.1016/j.techsoc.2018.06.007
Alemi, M., & Haeri, N. S. (2020). Robot-assisted instruction of L2 pragmatics: Effects on young EFL learners’ speech act performance. Language Learning & Technology, 24(2), 86–103. http://hdl.handle.net/10125/44727
Almaiah, M. A., Alamri, M. M., & Al-Rahmi, W. (2019). Applying the UTAUT model to explain the students’ acceptance of mobile learning system in higher education. IEEE Access, 7, 174673–174686. https://doi.org/10.1109/ACCESS.2019.2957206
Baber, C. (1996). Humans, servants and agents: Human factors of intelligent products [Conference presentation]. IEE Colloquium on Artificial Intelligence in Consumer and Domestic Products.
Bartneck, C., & Forlizzi, J. (2004). A design-centred framework for social human-robot interaction. In Proceedings of 13th IEEE International Workshop on Robot and Human Interactive Communication (pp. 591–594). IEEE. https://doi.org/10.1109/ROMAN.2004.1374827
Brown, S. A., & Venkatesh, V. (2005). A model of adoption of technology in the household: A baseline model test and extension incorporating household life cycle. MIS Quarterly, 29(3), 399–426. https://doi.org/10.2307/25148690
Busch, P. A. (2020). Problematic smartphone use and its associations with personality traits and hedonic motivation [Conference presentation]. PACIS 2020: Pacific Asia Conference on Information Systems, Dubai, UAE.
Causo, A., Vo, G. T., Chen, I. -M., & Yeo, S. H. (2016). Design of robots used as education companion and tutor. In S. Zeghloul, M. Amine Laribi, & J.-P. Gazeau (Eds.), Robotics and mechatronics (pp. 75–84). Springer. https://doi.org/10.1007/978-3-319-22368-1_8
Chang, C.-W., Lee, J.-H., Chao, P.-Y., Wang, C.-Y., & Chen, G.-D. (2010). Exploring the possibility of using humanoid robots as instructional tools for teaching a second language in primary school. Educational Technology & Society, 13(2), 13–24.
Chang, Y.-W., Hsu, P.-Y., Chen, J., Shiau, W.-L., & Xu, N. (2023). Utilitarian and/or hedonic shopping – consumer motivation to purchase in smart stores. Industrial Management & Data Systems, 123(3), 821–842. https://doi.org/10.1108/IMDS-04-2022-0250
Chayomchai, A. (2020). The online technology acceptance model of generation-z people in Thailand during COVID-19 crisis. Management & Marketing: Challenges for the Knowledge Society, 15, 496–513. https://doi.org/10.2478/mmcks-2020-0029
Chen, C., & Lin, C. (2022). How smartness of leisure-sports appliances influence tourists’ intention to use. Advances in Hospitality and Tourism Research, 10(3), 427–447. https://doi.org/10.30519/ahtr.939463
Cheng, Y. W., Sun, P. C., & Chen, N. S. (2018). The essential applications of educational robot: Requirement analysis from the perspectives of experts, researchers and instructors. Computers & Education, 126, 399–416. https://doi.org/10.1016/j.compedu.2018.07.020
Cocosila, M., Turel, O., Archer, N., & Yuan, Y. (2007). Perceived health risks of 3G cell phones. Communications of the ACM, 50(6), 89–92. https://doi.org/10.1145/1247001.1247026
Cocosila, M., Archer, N., & Yuan, Y. (2009). Early investigation of new information technology acceptance: A perceived risk - motivation model. Communications of the Association for Information Systems, 25, 30. https://doi.org/10.17705/1CAIS.02530
Crompton, H., Gregory, K., & Burke, D. (2018). Humanoid robots supporting children’s learning in an early childhood setting. British Journal of Educational Technology, 49(5), 911–927. https://doi.org/10.1111/bjet.12654
Delgosha, M. S., & Hajiheydari, N. (2021). How human users engage with consumer robots? A dual model of psychological ownership and trust to explain post-adoption behaviours. Computers in Human Behavior, 117, 106660. https://doi.org/10.1016/j.chb.2020.106660
Denden, M., Tlili, A., Abed, M., Bozkurt, A., Huang, R., & Burgos, D. (2022). To use or not to use: Impact of personality on the intention of using gamified learning environments. Electronics, 11, 1907. https://doi.org/10.3390/electronics11121907
Devaraj, S., Easley, R. F., & Crant, J. M. (2008). How does personality matter? Relating the five-factor model to technology acceptance and use. Information System Research, 19(1), 93–105. https://doi.org/10.1287/isre.1070.0153
Fang, W.-T., Huang, M.-H., Cheng, B.-Y., Chiu, R.-J., Chiang, Y.-T., Hsu, C.-W., & Ng, E. (2021). Applying a comprehensive action determination model to examine the recycling behavior of Taipei city residents. Sustainability, 13, 490. https://doi.org/10.3390/su13020490
Faqih, K. M. S. (2022). Factors influencing the behavioral intention to adopt a technological innovation from a developing country context: The case of mobile augmented reality games. Technology in Society, 69, 101958. https://doi.org/10.1016/j.techsoc.2022.101958
Fishbein, M. A., & Ajzen, I. (1975). Belief, attitude, intention and behavior: An Introduction to theory and research. Addison- Wesley.
Forgas-Coll, S., Huertas-Garcia, R., Andriella, A., & Alenyà, G. (2022). Does the personality of consumers influence the assessment of the experience of interaction with social robots? International Journal of Social Robotics. Advance online publication. https://doi.org/10.1007/s12369-022-00935-5
Frank, B., Herbas-Torrico, B., & Schvaneveldt, S. J. (2021). The AI-extended consumer: Technology, consumer, country differences in the formation of demand for AI-empowered consumer products. Technological Forecasting & Social Change, 172, 121018. https://doi.org/10.1016/j.techfore.2021.121018
Gansser, O. A., & Reich, C. S. (2021). A new acceptance model for artificial intelligence with extensions to UTAUT2: An empirical study in three segments of application. Technology in Society, 65, 101535. https://doi.org/10.1016/j.techsoc.2021.101535
Ghazali, A. S., Ham, J., Barakova, E., & Markopoulos, P. (2020). Persuasive robots acceptance model (PRAM): Roles of social responses within the acceptance model of persuasive robots. International Journal of Social Robotics, 12, 1075–1092. https://doi.org/10.1007/s12369-019-00611-1
Goldberg, L. R. (1990). An alternative “description of personality”: The Big-Five factor structure. Journal of Personality and Social Psychology, 59(6), 1216–1229.
Golonka, E. M., Bowles, A. R., Frank, V. M., Richardson, D. L., & Freynik, S. F. (2014). Technologies for foreign language learning: A review of technology types and their effectiveness. Computer Assisted Language Learning, 27(1), 70–105. https://doi.org/10.1080/09588221.2012.700315
Govers, P. C. M. (2004). Product personality (Unpublished doctoral dissertation). Delft University of Technology.
Gunawan, I., Redi, A. A. N. P., Santosa, A. A., Maghfiroh, M. F. N., Pandyaswargo, A. H., & Kurniawan, A. C. (2022). Determinants of customer intentions to use electric vehicle in Indonesia: An integrated model analysis. Sustainability, 14, 1972. https://doi.org/10.3390/su14041972
Hair, J. F., Jr., Hult, G. T. M., Ringle, C., & Sarstedt, M. (2017). A primer on partial least squares structural equation modeling. Sage.
Heerink, M., Kröse, B., Evers, V., & Wielinga, B. (2010). Assessing acceptance of assistive social agent technology by older adults: The Almere model. International Journal of Social Robotics, 2, 361–375. https://doi.org/10.1007/s12369-010-0068-5
Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115–135. https://doi.org/10.1007/s11747-014-0403-8
High-Level Expert Group on Artificial Intelligence (2019). Ethics guidelines for trustworthy AI. European Commission. Retrieved July 17, 2023, from https://www.aepd.es/sites/default/files/2019-12/ai-ethics-guidelines.pdf.
Hung, M.-C., Chiu, M.-L., & Chen, C. H. (2020). The adoption determinants of smart home service robot. Journal of Innovation and Management, 16(2), 1–28.
Hwang, J., Park, T., & Hwang, W. (2013). The effects of overall robot shape on the emotions invoked in users and the perceived personalities of robot. Applied Ergonomics, 44(3), 459–471. https://doi.org/10.1016/j.apergo.2012.10.010
Ikhsan, K., & Sunaryo, D. (2020). Technology acceptance model, social influence and perceived risk in using mobile applications: Empirical evidence in online transportation in Indonesia. Jurnal Dinamika Manajemen, 11(2), 127–138. https://doi.org/10.15294/jdm.v11i2.23309
Jackson, J. D., Mun, Y. Y., & Park, J. S. (2013). An empirical test of three mediation models for the relationship between personal innovativeness and user acceptance of technology. Information & Management, 50(4), 154–161. https://doi.org/10.1016/j.im.2013.02.006
John, O. P., & Srivastava, S. (1999). The Big-Five trait taxonomy: History, measurement, and theoretical perspectives. In L. Pervin & O. P. John (Eds.), Handbook of personality: Theory and research (pp. 102–138). Guilford.
Keszey, T. (2020). Behavioural intention to use autonomous vehicles: Systematic review and empirical extension. Transportation Research Part C, 119, 102732. https://doi.org/10.1016/j.trc.2020.102732
Kim, H. J. (2016). Intention to continue using a social network site: Effects of personality traits and site quality. Social Behavior and Personality: An International Journal, 44(9), 1419–1427. https://doi.org/10.2224/sbp.2016.44.9.1419
Kim, Y., Smith, D., Kim, N., & Chen, T. (2014). Playing with a robot to learn English vocabulary. KAERA Research Forum, 1(2), 3–8.
Kline, R. B. (2011). Principles and practice of structural equation modeling. Guilford Press.
Kritzinger, R., & Petzer, D. J. (2021). Motivational factors, customer engagement and loyalty in the South African mobile instant messaging environment: Moderating effect of application usage. European Business Review, 33(4), 642–666. https://doi.org/10.1108/EBR-04-2020-0104
Lan, Y. C., Torr, J., & Degotardi, S. (2012). Taiwanese mothers’ motivations for teaching English to their young children at home. Child Studies in Diverse Contexts, 2(2), 133–144. https://doi.org/10.5723/csdc.2012.2.2.133
Lee, Y., Lee, S., & Kim, D.-Y. (2021). Exploring hotel guests’ perceptions of using robot assistants. Tourism Management Perspectives, 37, 100781. https://doi.org/10.1016/j.tmp.2020.100781
Lee, D.-K., & Kim, M.-S. (2016). An empirical investigation of smart product adoption [Conference presentation]. International Conference on Electronic Business.
Li, G. (2006). Biliteracy and trilingual practices in the home context: Case studies of Chinese Canadian children. Journal of Early Childhood Literacy, 6(3), 355–381. https://doi.org/10.1177/1468798406069797
Lin, C.-Y., & Chen, H.-C. (2016). Parental perceptions of early childhood English education. International Journal on Studies in English Language and Literature, 4(11), 62–70. https://doi.org/10.20431/2347-3134.0411011
Lin, C., Šabanović, S., Dombrowski, L., Miller, A. D., Brady, E., & MacDorman, K. F. (2021). Parental acceptance of children’s storytelling robots: A projection of the Uncanny Valley of AI. Frontiers in Robotics and AI, 8, 579993. https://doi.org/10.3389/frobt.2021.579993
Louie, B., Björling, E. A., & Kuo, A. C. (2021). The desire for social robots to support English language learners: Exploring robot perceptions of teachers, parents, and students. Frontiers in Education, 6, 566909. https://doi.org/10.3389/feduc.2021.566909
Lowry, P. B., Gaskin, J., Twyman, N. W., Hammer, B., & Roberts, T. L. (2013). Taking ‘fun and games’ seriously: Proposing the hedonic-motivation system adoption model (HMSAM). Journal of the Association for Information Systems, 14(11), 617–671.
Maria, K., & Christian, S. (2019). How product intelligence and brand affect consumption value and intended usage: Evidence from a smart washing machine [Conference presentation]. 48th Annual conference of the European Marketing Academy.
Mat Dangi, M. R., & Mohamed Saat, M. (2021). 21st century educational technology adoption in accounting education: Does institutional support moderates accounting educators acceptance behavior and conscientiousness trait towards behavioural intention? International Journal of Academic Research in Business and Social Sciences, 11(1), 304–333. https://doi.org/10.6007/IJARBSS/v11-i1/8288
Midgley, D. F., & Dowling, G. R. (1978). Innovativeness: The concept and its measurement. Journal of Consumer Research, 4(4), 229–242. https://doi.org/10.1086/208701
Mou, Y., Shi, C., Shen, T., & Xu, K. (2020). A systematic review of the personality of robot: Mapping its conceptualization, operationalization, contextualization and effects. International Journal of Human-Computer Interaction, 30(6), 591–605. https://doi.org/10.1080/10447318.2019.1663008
Neumann, M. M. (2020). Social robots and young children’s early language and literacy learning. Early Childhood Education Journal, 48, 157–170. https://doi.org/10.1007/s10643-019-00997-7
Oladejo, J. (2006). Parents’ attitudes towards bilingual education policy in Taiwan. Bilingual Research Journal, 30(1), 147–170. https://doi.org/10.1080/15235882.2006.10162870
Oliveira, T., Thomas, M., Baptista, G., & Campos, F. (2016). Mobile payment: Understanding the determinants of customer adoption and intention to recommend the technology. Computers in Human Behavior, 61, 404–414. https://doi.org/10.1016/j.chb.2016.03.030
Park, H. W., & Lee, H. S. (2014). Product smartness and use-diffusion of smart products: The mediating roles of consumption values. Asian Social Science, 10(3), 54–61. https://doi.org/10.5539/ass.v10n3p54
Park, H. W., Rosenberg-Kima, R., Rosenberg, M., Gordon, G., Breazeal, C. (2017). Growing growth mindset with a social robot peer. In Proceedings of 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (pp. 137–145). ACM Press. https://doi.org/10.1145/2909824.3020213
Punnoose, A. C. (2012). Determinants of intention to use eLearning based on the technology acceptance model. Journal of Information Technology Education: Research, 11, 301–337. https://doi.org/10.28945/1744
Ramírez-Correa, P., Mariano-Melo, A., & Alfaro-Pérez, J. (2019). Predicting and explaining the acceptance of social video platforms for learning: The case of Brazilian YouTube users. Sustainability, 11, 7115. https://doi.org/10.3390/su11247115
Randall, N. (2019). A survey of robot-assisted language learning (RALL). ACM Transactions on Human-Robot Interaction, 9(1), 7. https://doi.org/10.1145/3345506
Rhiu, I., & Yun, M. H. (2018). Exploring user experience of smartphones in social media: A mixed-method analysis. International Journal of Human-Computer Interaction, 34(4), 1–10. https://doi.org/10.1080/10447318.2018.1471572
Rijsdijk, S. A., & Hultink, E. J. (2009). How today’s consumers perceive tomorrow’s smart products. Journal of Product Innovation Management, 26, 24–42. https://doi.org/10.1111/j.1540-5885.2009.00332.x
Rijsdijk, S. A., Hultink, E. J., & Diamantopoulos, A. (2007). Product intelligence: Its conceptualization, measurement and impact on consumer satisfaction. Journal of the Academy of Marketing Science, 35(3), 340–356. https://doi.org/10.1007/s11747-007-0040-6
Rouibah, K., Lowry, P. B., & Hwang, Y. (2016). The effects of perceived enjoyment and perceived risks on trust formation and intentions to use online payment systems: New perspectives from an Arab country. Electronic Commerce Research and Applications, 19, 33–43. https://doi.org/10.1016/j.elerap.2016.07.001
Shanmugavel, N., & Micheal, M. (2022). Exploring the marketing related stimuli and personal innovativeness on the purchase intention of electric vehicles through Technology Acceptance Model. Cleaner Logistics and Supply Chain, 3, 100029. https://doi.org/10.1016/j.clscn.2022.100029
Sharifian, F. (2009). English as an international language: Perspectives and pedagogical issues. Multilingual Matters.
Smakman, M., Jansen, B., Leunen, J., & Konijn, E. (2020). Acceptable social robots in education: A value sensitive parent perspective. In Proceedings of 14th International Technology, Education and Development Conference (pp. 7946–7953). IATED Academy. https://doi.org/10.21125/inted.2020.2161
Sung, H. J., & Jeon, H. M. (2020). Untact: Customer’s acceptance intention toward robot barista in coffee shop. Sustainability, 12, 8598. https://doi.org/10.3390/su12208598
Taufik, T., Prihartanti, N., & Hamid, H. S. A. (2019). Neuroticism, extraversion and conscientiousness as predictors of the hedonistic lifestyle. North American Journal of Psychology, 21(3), 645–660.
To, P.-L., Liao, C., & Lin, T.-H. (2007). Shopping motivations on Internet: A study based on utilitarian and hedonic value. Technovation, 27(12), 774–787. https://doi.org/10.1016/j.technovation.2007.01.001
Tolksdorf, N. F., Viertel, F. E., & Rohlfing, K. J. (2021). Do shy preschoolers interact differently when learning language with a social robot? An analysis of interactional behavior and word learning. Frontiers in Robotics and AI, 8, 676123. https://doi.org/10.3389/frobt.2021.676123
Tuna, A., & Tuna, G. (2019). The use of humanoid robots with multilingual interaction skills in teaching a foreign language: opportunities, research challenges and future research directions. CEPS Journal, 9(3), 95–115. https://doi.org/10.26529/cepsj.679
Tung, V. W. S., & Au, N. (2018). Exploring customer experiences with robotics in hospitality. International Journal of Contemporary Hospitality Management, 30(7), 2680–2697. https://doi.org/10.1108/IJCHM-06-2017-0322
van den Berghe, R., de Haas, M., Oudgenoeg-Paz, O., Krahmer, E., Verhagen, J., Vogt, P., Willemsen, B., de Wit, J., & Leseman, P. (2021). A toy or a friend? Children’s anthropomorphic beliefs about robots and how these relate to second-language word learning. Journal of Computer Assisted Learning, 37(2), 396–410. https://doi.org/10.1111/jcal.12497
van Straten, C. L., Peter, J., & Kühne, R. (2020). Child–robot relationship formation: A narrative review of empirical research. International Journal of Social Robotics, 12, 325–344. https://doi.org/10.1007/s12369-019-00569-0
Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204. https://doi.org/10.1287/mnsc.46.2.186.11926
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540
Venkatesh, V., Thong, J. Y., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178. https://doi.org/10.2307/41410412
Wellsby, M., & Pexman, P. M. (2014). Developing embodied cognition: Insights from children’s concepts and language processing. Frontiers in Psychology, 5, 506. https://doi.org/10.3389/fpsyg.2014.00506
Wiedmann, K. P., Hennigs, N., Pankalla, L., Kassubek, M., & Seegebarth, B. (2011). Adoption barriers and resistance to sustainable solutions in the automotive sector. Journal of Business Research, 64(11), 1201–1206. https://doi.org/10.1016/j.jbusres.2011.06.023
Woo, H., LeTendre, G. K., Pham-Shouse, T., & Xiong, Y. (2021). The use of social robots in classrooms: A review of field-based studies. Educational Research Review, 33, 100388. https://doi.org/10.1016/j.edurev.2021.100388
Yeh, C.-H., Wang, Y.-S., Wang, Y.-M., & Liao, T.-J. (2021). Drivers of mobile learning app usage: An integrated perspective of personality, readiness, and motivation. Interactive Learning Environments. Advance online publication. https://doi.org/10.1080/10494820.2021.1937658
Zhao, Y., & Bacao, F. (2021). How does gender moderate customer intention of shopping via live-streaming apps during the Covid-19 pandemic lockdown period? International Journal of Environmental Research and Public Health, 18, 13004. https://doi.org/10.3390/ijerph182413004
Zhong, Y., Oh, S., & Moon, H. C. (2021). What can drive consumers’ dining-out behavior in China and Korea during the COVID-19 pandemic? Sustainability, 13, 1724. https://doi.org/10.3390/su13041724
Zhu, Z., Liu, Y., Cao, X., & Dong, W. (2022). Factors affecting customer intention to adopt a mobile chronic disease management service: Differentiating age effect from experiential distance perspective. Journal of Organizational and End User Computing, 34(4), 1–23. https://doi.org/10.4018/JOEUC.287910
Acknowledgements
This study was financially supported and ethically approved by the Ministry of Science and Technology of Taiwan under grant no. MOST 111-2410-H-018-028-MY3.
Ethics declarations
Disclosure
Nothing to report.
Appendix. Measurement items used in this study
1.1 Conscientiousness
I do a thorough job.
I am reliable.
I am organized.
I am diligent.
I persevere until the task is finished.
I do things efficiently.
I make plans and follow through on them.
I am not easily distracted.
1.2 Personal innovativeness
I like to try out new types of information technology.
Among my peers, I am usually the first to try out new types of information technology.
If I heard about a new information technology, I would look for ways to try it out and experience it.
1.3 Physical risk
I am concerned about potential physical risks associated with using AI-based social robots for my kids to learn English.
One concern I have about using an AI-based social robot for my kids’ English learning is that the risk of endangering my family members might be high.
I have concerns that uncontrollable circumstances will happen with AI-based social robots for English learning.
1.4 Autonomy
1. I think an AI-based social robot for English learning should be able to determine how it conducts tasks.
2. I think an AI-based social robot for English learning should be able to make decisions by itself.
3. I think an AI-based social robot for English learning should be able to take the initiative.
4. I think an AI-based social robot for English learning should be able to do things by itself.
1.5 Ability to learn
1. I think an AI-based social robot for English learning should possess learning ability.
2. I think an AI-based social robot for English learning should perform better as the amount of time using it goes up.
3. I think an AI-based social robot for English learning should be able to learn from experiences with users.
4. I think an AI-based social robot for English learning should be able to improve on its shortcomings and keep progressing.
5. I think an AI-based social robot for English learning should be able to adapt itself to the environment over time.
1.6 Reactivity
1. I think an AI-based social robot for English learning should perform actions after observing its surroundings.
2. I think an AI-based social robot for English learning should keep an eye on its environment.
3. I think an AI-based social robot for English learning should react to changes in its surroundings.
4. I think an AI-based social robot for English learning should adapt its behavior to its environment.
1.7 Ability to cooperate
1. I think an AI-based social robot for English learning should be able to cooperate with other products.
2. I think an AI-based social robot for English learning should be able to connect with other products and share information.
3. I think an AI-based social robot for English learning should be able to work better in cooperation with other products.
1.8 Humanlike interaction
1. I think an AI-based social robot for English learning should ask for views and opinions of the user in a timely manner.
2. I think an AI-based social robot for English learning should be able to assist the user when the user has a need.
3. I think an AI-based social robot for English learning should be able to start a dialogue with the user.
4. I think an AI-based social robot for English learning should explain to the user how it should be used.
5. I think an AI-based social robot for English learning should explain what it is doing.
1.9 Personality
1. I think an AI-based social robot for English learning should have its own character.
2. I think an AI-based social robot for English learning should be like a person.
3. I think an AI-based social robot for English learning should behave like a human being.
1.10 Utilitarian motivation
I think using AI-based social robots to learn English is helpful for kids.
I think using AI-based social robots is more effective for kids to learn English.
I think using AI-based social robots to learn English is functional.
I think it is necessary to use AI-based social robots to learn English.
I think using AI-based social robots to learn English is practical.
1.11 Hedonic motivation
I think the process of using AI-based social robots to learn English is fun.
I think the process of using AI-based social robots to learn English is delightful.
I think the process of using AI-based social robots to learn English is enjoyable.
1.12 Intention
I plan to let my kids use an AI-based social robot for English learning.
I will recommend an AI-based social robot for English learning to my friends and relatives.
I will often use an AI-based social robot to help my kids learn English in the future.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Lin, GY., Jhang, CC. & Wang, YS. Factors affecting parental intention to use AI-based social robots for children’s ESL learning. Educ Inf Technol 29, 6059–6086 (2024). https://doi.org/10.1007/s10639-023-12023-w