Knowledge-based chatbots: a scale measuring students’ learning experiences in massive open online courses

  • Development Article
  • Published in: Educational Technology Research and Development

Abstract

This paper presents our efforts, across three studies, to develop a scale for measuring students’ learning experiences with knowledge-based chatbots in massive open online courses (MOOCs). In Study 1, we conducted a qualitative synthesis of the current literature and analyzed students’ open-ended responses regarding their experiences with a knowledge-based chatbot. From this, we identified eight salient domains (i.e., social presence, teaching presence, cognitive presence, self-regulation, co-regulation, perceived ease of use, behavioral intention, and enjoyment) and created 53 items. In Study 2, we retained the 30 items that received more than 80% agreement from five experts. Finally, in Study 3, we reported the findings of exploratory and confirmatory factor analyses of the final scale based on student responses (N = 237) and presented 22 items across five domains (i.e., social presence, teaching and cognitive presence, self-regulation, perceived ease of use, and behavioral intention). This research contributes to the current literature by providing an instrument for measuring students’ learning experiences with knowledge-based chatbots in MOOCs, a type of instrument that has not previously been available. The scale developed in this study could be employed in further research aiming to systematically develop knowledge-based chatbots and to investigate the relationships among salient factors influencing students’ learning experiences in MOOCs.


Data availability

The quantitative data set analyzed in this study is available from the corresponding author upon request.


Acknowledgements

We extend our sincere gratitude to Drs. Jason Harron, Martin Hlosta, Amy Ogan, Zilong Pan, and Wenting Ellen Zou for their invaluable expertise and contribution to the content validity evaluation in Study 2. Our gratitude extends to Dr. Henry May for the valuable suggestion to use IRT for scale enhancement in the future. Their insightful feedback and guidance have significantly enriched our research.

Author information

Contributions

SH: conceptualization, methodology, software, validation, formal analysis, investigation, data curation, writing (original draft and editing), visualization. XH: conceptualization, methodology, validation, investigation, resources, supervision. YC and PS: conceptualization, data curation, validation, investigation, writing (original draft). ML: conceptualization, methodology, resources, supervision.

Corresponding author

Correspondence to Songhee Han.

Ethics declarations

Competing interests

The authors report no potential conflicts of interest.

Ethical approval

The Institutional Review Board at the University of Texas at Austin reviewed and approved this study before the research began.


Appendix

See Table 7.

Table 7 Summary of the literature review

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Han, S., Hamilton, X., Cai, Y. et al. Knowledge-based chatbots: a scale measuring students’ learning experiences in massive open online courses. Education Tech Research Dev 71, 2431–2456 (2023). https://doi.org/10.1007/s11423-023-10280-7
