1 Introduction

The advancements in information technologies and their application have led to the increasing adoption of digitalization and automation in various aspects of the maritime industry (Kitada et al. 2019; Janssen et al. 2021). Continuing this trend, and with the ongoing efforts to control and support seaborne ships from remote locations, artificial intelligence (AI) will play an essential role in the maritime industry in the coming decades. Future seafarers will be expected to understand and communicate effectively with the various decision support systems enabled by AI (Alop 2019). Maritime education and training will require a different approach in the face of these changes (Burke and Clott 2016). In addition to the competencies listed in the Standards of Training, Certification, and Watchkeeping (STCW) regulations, maritime stakeholders need to consider cultivating digital skills and AI-enabled education to adequately prepare future seafarers (Sharma and Kim 2021; Baldauf et al. 2016). This study presents a proof of concept for the application of AI in maritime education and training.

Artificial intelligence (AI) has made steady advances in recent years and provides several functional benefits to the domains in which it is used. Although the origin of AI as a concept can be traced back to the 1950s (McCarthy et al. 2006), recent developments in computing capabilities, advancements in machine learning techniques, and enhanced memory and processing capabilities have led to novel applications in a variety of domains. AI is now being used in finance, healthcare, services, and governance, to provide a few examples of its usage (Buchanan et al. 2020; Ferreira et al. 2021; Sharma et al. 2020; Kouziokas 2017). The basic premise for utilizing AI has been its potential to increase efficiency and innovate the associated processes in any field of application. However, the use of AI in such applications also puts focus on the role of the associated human element. Several researchers and professionals have pointed out that the advent of AI will be accompanied by a parallel need to reskill the workforce and redefine their roles (Card and Nelson 2019; Rotatori et al. 2021). In its fundamental premise, AI and its applications are intended to augment human performance.

A recurring theme around the adoption of AI in workspaces has been the awareness and experience of individuals with the technology in question, i.e., AI literacy. In this regard, Long and Magerko (2020, p. 2) have defined the term as “a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace.” They demarcate a set of competencies (human role, data literacy, ethics, etc.) and design considerations (critical thinking, social interaction, low barrier to entry, etc.) to support developers and educators in creating learner-centered AI. Promoting AI literacy, among other considerations, goes hand in hand with the efforts to advance technologies in the workspaces and to train the future workforce. Furthermore, Rahm (2021) has argued that the relationship between technological development and education is a reciprocal one. While AI literacy is needed as a change in the educational system to enable AI adoption, increased AI literacy can by itself also be utilized to direct the desired technical development in various domains.

One of the primary objectives of applying AI in education and training is to improve learning outcomes (Pedro et al. 2019). Like other areas of its application, AI brings the affordances of greater computing power and tailored delivery of content to learners. The application of AI in education (AIEd) is closely linked to the advances in the AI domain at large (Zawacki-Richter et al. 2019). Several studies have pointed out the potential of AI to promote engagement, reduce redundant tasks, personalize educational content, and identify emerging learning gaps in the classroom (Owoc et al. 2021; Schiff 2021). According to Luckin et al. (2016), an AIEd system consists of a domain, a pedagogical, and a learner model. The strength of AIEd lies in the fact that the system can select appropriate content from the domain model according to the requirements of the learner (learner model) while also tracking the intermediate interactions (pedagogical model) (Samuelis 2007). Thus, AIEd can enable tailored content delivery suitable to each learner’s needs.

1.1 AIEd and application of chatbots

The use of AI in education has evolved concurrently with advances in AI technology itself. AIEd has slowly progressed from personal computers in education to Web-based/online learning systems. The current use of embedded systems and other technologies available through advances in computing power has influenced how education is delivered (Chen et al. 2020). These developments have enabled, among other applications, the use of intelligent conversational agents or chatbots that can perform instructor-like functions in a classroom. Similarly, Timms (2016) has argued that in the future, AIEd will break away from merely delivering education through personal devices to provide new solutions for learning and teaching activities. One of the varied directions AIEd can potentially take is the development and use of “educational cobots” designed to support human instructors. These cobots can keep learners engaged and answer simple queries the learners might have. Through social network analysis of selected literature related to the application of AI in education, Goksel and Bozkurt (2019) have demonstrated that terms like expert systems (ES) and intelligent tutoring systems (ITS), which can mimic human behavior and provide immediate as well as customized feedback to learners, have remained at the forefront of AI-related educational research. With the advances in AI, namely natural language processing (NLP), this concept is being reimagined as intelligent agents or systems that can guide individuals towards their learning objectives and help them navigate the associated process. This is also congruent with the increasing instances of human-automation agent teaming taking place in a wide variety of work processes. The larger trend has been towards delegating tasks of a mundane and repetitive nature to intelligent agents.

The application of chatbots in education builds on the advances in AIEd mentioned above. In simple terms, a chatbot or conversational agent is defined as a computer program designed to simulate conversation with human agents (Adamopolou and Moussiades 2020). The development of chatbots and their application has occurred concurrently with AI research. The first known chatbot, ELIZA, was developed in the 1960s and was intended to act as a psychotherapist (Weizenbaum 1966). Since then, there has been a steady progression in chatbot technology, improving NLP capabilities with various applications in different business/operational cases. Recently, there has been an increase in research studies that aim to evaluate the application of chatbots in an educational context. In this regard, Okonkwo and Ade-Ibijola (2021) have conducted a systematic literature review of chatbot applications in education. They list integration of educational content, increased motivation and engagement, ubiquitous access, and simultaneous use by multiple learners as some of the primary benefits of using chatbots in education. They also shed light on some of the challenges that accompany chatbot usage, such as usability and evaluation issues, ethical issues, and programming issues. Similarly, Rapp et al. (2021), adopting a human–computer interaction lens in their literature review, have identified themes such as trust, expectations, experience, and satisfaction, which are relevant in studies focusing on chatbots and associated interaction issues.

1.2 COLREGs in maritime education and training

The maritime industry is a safety-critical industry with ships moving valuable cargo from one geographical location to another. The consequences of accidents in the maritime industry are often catastrophic, with loss of valuable cargo, environmental pollution, and, in extreme cases, loss of passengers and crew members on board (Schröder-Hinrichs et al. 2012). Various frameworks and mechanisms are in place to prevent such undesirable events and ensure the safety of sea transportation. From ship design and guidelines for maritime operations to the training of seafarers, the maritime industry has adopted various codes and regulations to ensure compliance and promote safety at sea. The seafarers working as crew members play a crucial role in day-to-day operations, and their education and training directly impact the safety of operations onboard (Ziarati 2006). The Maritime Education and Training (MET) domain, which follows the broader framework stipulated by the Standards of Training, Certification, and Watchkeeping (STCW’78 as amended), ensures the supply of a skilled and competent workforce for the maritime industry. The STCW lists competence requirements for various operational roles onboard (deck officer, marine engineer, ratings, etc.). The mandatory minimum competence requirements for deck officers in charge of a navigational watch are listed in STCW table A-II/1. There are a total of 19 competence areas that a prospective officer should demonstrate to be deemed worthy of a Certificate of Competency (CoC). Among them, knowledge of the International Regulations for Preventing Collisions at Sea (COLREGs) forms an integral part of the operational knowledge required of a deck officer.

The COLREGs, also sometimes referred to as the “Rules of the Road,” list the various regulations that govern the safe movement of maritime traffic. They assign responsibilities such as “give-way vessel” or “stand-on vessel” to ships encountering each other at sea. Furthermore, they also list the correct light and sound signals that should be exhibited by different ship types in the conditions that apply to them. These rules are crucial in determining the action to be taken by ships when navigating (Chauvin et al. 2013). According to the European Maritime Safety Agency, collisions and groundings were the cause of about 25% of maritime casualties in the year 2020 (EMSA 2021). Improper understanding and application of COLREGs can therefore have serious consequences (Mohovic et al. 2016).

MET institutes across the globe take various measures to develop a good understanding and application of COLREGs during the training period of future deck officers. However, it is also recognized by maritime stakeholders that the various flag states signatory to the STCW differ in terms of educational resources and approaches towards the education and training of seafarers. The educational content delivery, the tools utilized, and how the assessment of learning outcomes is carried out depend upon these factors and are at the discretion of the MET institutes. In Norway, for example, COLREGs training is imparted as part of the 3-year Bachelor’s degree in Nautical Science. The COLREGs training forms part of the curriculum in various ways, with specific learning outcomes expected of the trainees undergoing the 3-year degree.
Firstly, the students are expected to remember and understand the different terminologies associated with COLREGs, their framework and historical background, and some of the rule content by heart. Furthermore, the students develop skills in applying COLREGs in a safe, controlled environment (simulator), where they solve practical assignments and learn the relationship between COLREGs and bridge resource management (BRM). Finally, to further increase their competence and synthesize new ways of using COLREGs to solve emerging challenges in the maritime industry, students can opt to write their Bachelor thesis on a related topic to gain specialization. The above learning outcomes constitute a macromodel of the curriculum for the subdiscipline of navigation, namely COLREGs, as it is conducted in Norway. The culmination of the training occurs when the students go to sea for 12 months as deck cadets, obtaining real-world training in its application before being awarded a CoC by the Norwegian Maritime Authority. COLREGs training, therefore, consists of the demonstration of both innate knowledge and practical skills. The knowledge component forms the building blocks and fundamentals of the understanding of COLREGs. A focus on novel ways to promote understanding and knowledge acquisition can support the overall goal of making deck officer trainees competent in this important subdiscipline of navigation.

1.3 Pedagogical use of chatbot or conversational agent

There are various pedagogical frameworks applicable and in use for supporting professional learning. The most relevant characteristics of a chatbot in supporting professional educational needs are its ubiquity and its simulation of conversations with an instructor or peer. As such, the chatbot or conversational agent is particularly well suited to support self-directed learning (SDL) among individuals. SDL can be defined as a process in which individuals take the initiative in their own learning (Knowles 1975). The benefit of using a chatbot is that it can be incorporated into the learning instruction design at the discretion of the students and can support learning activities outside the traditional classroom. The students can pose targeted queries to the conversational agent and get responses. The agent can also promote reflection as dialogue is initiated in the process. Instead of the students passively learning about COLREGs, the chatbot can promote engagement and offer them an opportunity to exercise initiative. The chatbot can also act as a source of knowledge additional to peers and the instructor (Fig. 1). The acquisition of the knowledge component of COLREGs is iterative in nature, and the chatbot is therefore well suited to support self-directed learning experiences in aiding its understanding. COLREGs are relevant for this context because, in addition to being an essential part of deck officer training, they are also “fixed” in terms of their components and number, thus providing sufficient rationale for the task to be designated to an intelligent agent.

Fig. 1

Student-centered learning activities with chatbot support for self-directed learning

In the present study, we therefore aimed to design and implement a chatbot for supporting COLREGs training in the maritime classroom. The primary research objective was to conceptualize and design the chatbot “FLOKI,” which can act as an intelligent conversational agent for answering queries related to a selected number of COLREGs. Furthermore, we wanted feedback on the usability of the designed chatbot itself. We wanted to understand whether it also offers a better user experience in learning COLREGs, which is often perceived as a routine and repetitive component of nautical education. For this purpose, a standardized questionnaire known as the System Usability Scale (SUS) was utilized. The objective of this study was to provide a “proof of concept” for AIEd in a maritime educational context. The subsequent sections of the paper describe the design and implementation process.

2 Method

2.1 Design of chatbot FLOKI

To achieve the research objective, the chatbot was built using the IBM Watson Assistant service (IBM 2022). It is a service on the IBM Cloud that enables businesses and organizations to build and deploy conversational agents. The service instance was created by the first author on the IBM Cloud and was developed to meet the objective of deploying a conversational agent that could help maritime trainees learn the COLREGs. A chatbot or conversational agent has three primary building blocks as per the IBM Watson Assistant service, namely (1) intent, (2) entity, and (3) dialogue. In simple terms, an intent can be defined as the purpose of the user’s input. Several separate intents have to be described in the chatbot to cater for all possible purposes the user in question can have for interacting with it. An entity refers to an object or term that is related to the intent described by the user and lists all possible synonyms or similar words that can be related to the user’s intent. Finally, the dialogue is the chatbot’s response to the recognized intent. It reverts with the response(s) and option(s) to the query posed by the user and enables the chatbot to supply the most appropriate answers or information the user initially queried for. These building blocks work together seamlessly the moment a user query is received by the interface: the intent block matches the query with a pre-stored intent, the entity block stores the context of the conversation so that the chatbot “remembers” the conversation’s objective, and the dialogue block gives the appropriate response to the query. During the design phase, the chatbot was titled “FLOKI” as a tribute to the Norse navigator Floki Vilgerdson, who is often credited with discovering Iceland (Thirslund 1997), thus providing a maritime persona to the conversational agent. The primary objective of FLOKI was to enable discussions of the COLREGs with the maritime trainees; therefore, the specific regulations had to be entered into it. The International Regulations for Preventing Collisions at Sea (1972, as amended) consist of 41 rules and 4 annexes (IMO 2021).

As the intention with FLOKI was to demonstrate a proof of concept, only a subset of these rules was selected to be introduced in the chatbot. The authors decided to focus on Rules 11–18, which fall under Part B, Section II, of the regulations, titled “Conduct of vessels in sight of one another.” The following intents were created for the chatbot: #Greetings, #COLREGs, #thank_you, and #Goodbye. Several examples were provided under each intent to enable the chatbot to recognize them. In this step, as per IBM’s recommendations, the first author typed many sentences resembling how a student might phrase a query and stored them as examples under each intent. It is often advised to enter as many variations of the query as possible, including misspelled sentences or typos, to ensure an optimal simulation of the actual use case. Furthermore, 8 entities were created, one for each rule, so that the chatbot can capture the context and does not “forget” it when intermediate sentences or queries are directed to it.
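To illustrate the structure described above, the sketch below shows how such intents and entities might be represented in code. It loosely mirrors the layout of a Watson Assistant (classic) workspace export; the example utterances, synonyms, and identifiers are hypothetical and do not reproduce FLOKI's actual training data.

```python
# Hypothetical sketch of FLOKI-like intents and entities, loosely following the
# structure of an IBM Watson Assistant (classic) workspace export.
# All example utterances and synonyms are illustrative, not the actual training data.

intents = [
    {"intent": "Greetings",
     "examples": [{"text": "Hello FLOKI"}, {"text": "hi there"}, {"text": "good morning"}]},
    {"intent": "COLREGs",
     "examples": [{"text": "What does rule 13 say?"},
                  {"text": "explain the overtaking rule"},
                  {"text": "who is the give-way vessel in a crossing situation"}]},
    {"intent": "thank_you",
     "examples": [{"text": "thanks"}, {"text": "thank you FLOKI"}]},
    {"intent": "Goodbye",
     "examples": [{"text": "bye"}, {"text": "see you later"}]},
]

# One entity per rule (Rules 11-18), with synonyms a student might type.
entities = [
    {"entity": f"rule_{number}",
     "values": [{"value": f"Rule {number}", "synonyms": synonyms}]}
    for number, synonyms in {
        11: ["application", "vessels in sight of one another"],
        12: ["sailing vessels"],
        13: ["overtaking"],
        14: ["head-on situation", "head on"],
        15: ["crossing situation", "crossing"],
        16: ["give-way vessel", "give way"],
        17: ["stand-on vessel", "stand on"],
        18: ["responsibilities between vessels"],
    }.items()
]
```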

Finally, the dialogue block was filled with responses to the expected intents. The main components of this block were the actual COLREGs (Rules 11–18), inserted under appropriate headings. These were in textual format; however, some images were also inserted under each rule where applicable to enable a richer response than plain text and offer better multimedia integration.
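Continuing the illustration, a single hypothetical dialogue node for Rule 13 might look as follows; the node identifier, the abbreviated rule text, and the image URL are placeholders rather than FLOKI's actual content.

```python
# Hypothetical sketch of a single dialogue node for Rule 13 (overtaking), loosely
# following the Watson Assistant (classic) dialog node format. The condition fires
# when the #COLREGs intent is detected together with the @rule_13 entity, and the
# response combines abbreviated rule text with a supporting image.

rule_13_node = {
    "dialog_node": "node_rule_13",
    "conditions": "#COLREGs && @rule_13",
    "output": {
        "generic": [
            {"response_type": "text",
             "values": [{"text": "Rule 13 (Overtaking): any vessel overtaking any other "
                                 "shall keep out of the way of the vessel being overtaken."}]},
            {"response_type": "image",
             "source": "https://example.org/figures/rule13_overtaking.png"},  # placeholder URL
        ]
    },
}
```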

The finalization of contents within all three blocks resulted in a hierarchical branching logic for the chatbot FLOKI, which first greets the user with a predefined text describing who it is and its intended purpose, understands the intent of the input, can engage in customary chit-chat (e.g., “Hello to you, let's get started!”), returns the relevant COLREGs when queried, and can also offer conversation-ending salutations (e.g., “Goodbye to you”). The IBM Cloud service provides a trial pop-up on the side, which can be used dynamically throughout the process to test how the chatbot is responding. This service was utilized, and several iterations later the chatbot was deemed suitable for deployment. However, at that stage, the chatbot service was still situated in the IBM Cloud, and to make an actual use case, the service had to be hosted in a “real world” environment. For this purpose, a WordPress® site was deployed (www.flokipress.com) and API integration with the IBM Cloud was enabled via a plugin, which resulted in the chatbot popping up every time the website was accessed.
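As a complementary illustration, a deployed assistant of this kind could also be exercised programmatically, for instance with the ibm-watson Python SDK (V2 API) as sketched below. The API key, service URL, and assistant ID are placeholders, and this script was not part of the study, which relied on the built-in test pane and the WordPress plugin.

```python
# Minimal sketch: querying a deployed Watson Assistant instance via the ibm-watson
# Python SDK (V2 API). Credentials and identifiers below are placeholders.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")               # placeholder
assistant = AssistantV2(version="2021-06-14", authenticator=authenticator)
assistant.set_service_url("https://api.eu-gb.assistant.watson.cloud.ibm.com")  # placeholder region

ASSISTANT_ID = "YOUR_ASSISTANT_ID"                              # placeholder
session = assistant.create_session(assistant_id=ASSISTANT_ID).get_result()

response = assistant.message(
    assistant_id=ASSISTANT_ID,
    session_id=session["session_id"],
    input={"message_type": "text", "text": "What does Rule 14 say?"},
).get_result()

# Print any text responses returned by the dialogue block.
for item in response["output"]["generic"]:
    if item["response_type"] == "text":
        print(item["text"])

assistant.delete_session(assistant_id=ASSISTANT_ID, session_id=session["session_id"])
```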

2.2 Implementation in a maritime classroom

After the completion of the design and deployment of the chatbot, it was introduced in a regular classroom of B.Sc. in Nautical Science students. An informed consent form briefly describing the purpose of the experiment and a few demographics-related questions were provided on a separate sheet. A summary of the demographic data is provided in Table 1. Participation in the study was voluntary, and no personal information was collected throughout the experiment. The study was conducted on 17 September 2021 with 2nd-year B.Sc. Nautical Science students at a university that offers maritime education and training (MET) programs in Norway. A total of n = 18 students participated in the study. The students in the group received an introductory briefing and were given consent forms. After filling out these forms, the students received additional instructions regarding the use of the chatbot FLOKI on a separate information sheet that dealt with interaction instructions and the use of a QR code to quickly access the WordPress site, as shown in Fig. 2. The students were therefore free to select either a smartphone or a laptop to interact with FLOKI. After 20 min of familiarization, the students proceeded to interact with the chatbot regarding COLREGs Rules 11 to 18. A further 20 min was allotted for this phase of the study (Fig. 3).
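As a side note, a QR code pointing to the chatbot site can be generated with a few lines of code, for example with the open-source qrcode Python package as sketched below; the paper does not state how its QR code was actually produced, so this is purely illustrative.

```python
# Minimal sketch: generate a QR code linking to the FLOKI WordPress site.
# Assumes the "qrcode" package (with Pillow) is installed: pip install qrcode[pil]
import qrcode

img = qrcode.make("https://www.flokipress.com")  # URL of the deployed site
img.save("floki_qr.png")                         # hypothetical output filename
```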

Table 1 Demographic characteristics of the respondents
Fig. 2

Design of chatbot with IBM Watson APIs

Fig. 3

Implementation in the classroom and instruction page

The students interacted with FLOKI by first typing customary greetings and then asking specific questions. As per the design of the conversational agent, the input was classified and processed accordingly, and the relevant dialogue block responded with the appropriate rule and supporting images where applicable. The students practiced in this manner for Rules 11–18 as intended in this exercise and compared the experience with reading the rules from a textbook with no interaction (Fig. 4). As intended, all of the students could interact with FLOKI simultaneously and independently. Some of the students used their smartphones, while others used tablets or laptops, according to their convenience.

Fig. 4

Example interaction of a student with FLOKI

Afterwards, the students were handed another questionnaire, the System Usability Scale (SUS), to enable the collection of usability data for the chatbot FLOKI. The SUS provides an overall usability score in line with ISO 9241-11 on characteristics such as effectiveness, efficiency, and satisfaction (Brooke 1986). The whole exercise lasted approximately 1 h for the students, resembling a typical lecture session in the classroom. The collected data was then analyzed using the software packages MS Excel and SPSS. The obtained results, along with the figures and related statistics, are described in the next section.

3 Result

The demographic data of the student respondents is summarized in Table 1.

Usability data for FLOKI was collected from the 18 participating students. For this purpose, the System Usability Scale was utilized. The scale has 10 items, and the respondents were asked to rate each statement on a scale from 1 (strongly disagree) to 5 (strongly agree). The results are summarized in Table 2.

Table 2 The System Usability scores of the chatbot FLOKI

For calculating the overall usability score, the guidelines given by Brooke (2013) were followed. The guidelines involve converting all item scores to a 0–4 scale: for the odd-numbered (positively worded) questions, 1 is subtracted from the mean score, and for the even-numbered (negatively worded) questions, the mean score is subtracted from 5 to compensate for their negative wording. The sum of the contributions from the odd- and even-numbered questions is then multiplied by 2.5.

$$(2.53\:+\:2.97\:+\:2.13\:+\:3.19\:+\:3.04)\:=\:13.86$$
$$(3.19 + 3.74 + 2.53 + 2.90 + 3.27) = 15.63$$
$$(13.86 + 15.63) * 2.5 = 73.72$$
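A minimal sketch of this calculation is given below for transparency; the raw item means are back-derived from the adjusted contributions reported above (odd items: contribution plus 1; even items: 5 minus contribution) and are therefore approximate reconstructions rather than the raw questionnaire data.

```python
# Minimal sketch of the SUS scoring procedure (Brooke 2013) applied to item means.
# Raw means are back-derived from the adjusted contributions reported in the text,
# so they are approximate reconstructions, not the original questionnaire data.

item_means = {
    1: 3.53, 2: 1.81, 3: 3.97, 4: 1.26, 5: 3.13,
    6: 2.47, 7: 4.19, 8: 2.10, 9: 4.04, 10: 1.73,
}

odd_contribution = sum(item_means[i] - 1 for i in range(1, 11, 2))   # positively worded items
even_contribution = sum(5 - item_means[i] for i in range(2, 11, 2))  # negatively worded items
sus_score = (odd_contribution + even_contribution) * 2.5

print(round(odd_contribution, 2), round(even_contribution, 2), round(sus_score, 2))
# -> approximately 13.86, 15.63, and 73.72, matching the values reported above up to rounding
```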

We calculated the internal consistency of the scale using Cronbach’s alpha. The even-numbered, negatively worded questions were re-coded in SPSS to correspond with the positively worded questions. The calculated value of Cronbach’s alpha was 0.884, greater than the recommended value of 0.700 for scales with a similar number of items (Nunnally and Bernstein 1994).
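For reference, a minimal sketch of the underlying Cronbach's alpha formula is given below; the study performed this calculation in SPSS, and the example score matrix here is a hypothetical placeholder.

```python
# Minimal sketch of Cronbach's alpha for a respondents-by-items score matrix.
# The study computed alpha in SPSS; this reproduces the standard formula
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = respondents, columns = (re-coded) items."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical example with 18 respondents and 10 items rated 1-5:
rng = np.random.default_rng(0)
example_scores = rng.integers(1, 6, size=(18, 10)).astype(float)
print(round(cronbach_alpha(example_scores), 3))
```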

As some demographic data was also collected, we examined whether experience with navigation and prior experience using a chatbot had any effect on the perceived usability scores across the groups. For this purpose, we utilized the non-parametric Mann–Whitney U test.
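A minimal sketch of how such a comparison can be run is shown below; the two SUS score arrays are hypothetical placeholders, as individual scores are not reported in the paper.

```python
# Minimal sketch of the group comparison using the Mann-Whitney U test (scipy).
# The SUS score arrays below are hypothetical placeholders; individual scores
# were not reported in the paper.
from scipy.stats import mannwhitneyu

sus_with_experience = [72.5, 80.0, 67.5, 75.0, 82.5, 70.0, 77.5, 72.5, 85.0, 67.5]
sus_without_experience = [65.0, 77.5, 70.0, 80.0, 72.5, 67.5, 75.0, 72.5]

u_stat, p_value = mannwhitneyu(sus_with_experience, sus_without_experience,
                               alternative="two-sided")
print(u_stat, p_value)
```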

For the grouping by experience with navigation on ships, 10 respondents stated that they had some experience in navigation and were consequently experienced in the practical application of COLREGs, whereas 8 respondents stated that they had no experience with navigation on ships. The average SUS scores for these two groups were 74.97 and 72.70, respectively (Fig. 5).

Fig. 5

SUS scores of groups in experience with navigation

The Mann–Whitney U test showed no significant difference between the two groups at the 0.05 significance level (U value = 38, Z score = 0.13328, two-tailed) with p = 0.896.

Similarly, the respondents stated whether they had any prior interaction with chatbots. A total of 11 respondents replied that they had interacted with a chatbot prior to this exercise, while 7 respondents stated that they had not or were not sure. The average SUS scores for these two groups were 78.61 and 66.65, respectively (Fig. 6).

Fig. 6

SUS scores of groups in experience interacting with chatbot

The Mann–Whitney U test showed no significant difference between the two groups at the 0.05 significance level (U value = 21, Z score = 1.5396, two-tailed) with p = 0.123.

4 Discussion

The overall usability data for the chatbot suggest that it was received positively by the students in terms of its effectiveness, efficiency, and satisfaction. The median usability score obtained with the SUS across a large number of product evaluation studies is 70.5 (Bangor et al. 2008). The chatbot FLOKI, with a score of 73.72, scored higher than this established benchmark. It should be noted that the usability score out of 100 is not a percentage score. The median score of 70.5 marks the 50th percentile of the established usability benchmark; the score of 73.72 is above the 50th percentile and lies in the 3rd quartile of the mean scores for the SUS scale. As per the classification given by Bangor et al. (2008), the rating can be described as “Good”; however, higher ratings of “excellent” (SUS scores ranging from 80 to 90) and “best imaginable” (SUS scores ranging from 90 to 100) are also present on the continuum. The non-parametric Mann–Whitney U test results showed no difference in the usability evaluation of the chatbot between students who did and did not have prior experience in navigation and the use of COLREGs. The difference in average SUS scores between students with and without prior experience interacting with a chatbot was relatively larger than for the navigation-experience grouping; however, as in the first comparison, the difference was not statistically significant. The findings indicate that prior experience and familiarization with an AIEd tool may influence how students perceive it; however, more evidence is still needed in this direction.

During the informal debriefing session after the conclusion of the study, some of the students remarked that they found the chatbot “interesting” and “novel” for the purpose of studying COLREGs and would consider it a worthwhile addition to the overall efforts to master the knowledge-related aspects of COLREGs application. Some of the students also mentioned that they found the chatbot “relatable” while interacting with it and would like to practice further to gain a better understanding of the COLREGs. However, as described earlier, the chatbot was trained to respond to a limited number of COLREGs, namely Rules 11–18. To be truly integrated into the curriculum and for possible future usage, dialogue blocks for all of the COLREGs, namely Rules 1 to 41, would need to be included. Due to limitations concerning the handling of personal data, advanced features like voice recognition were not considered. Voice recognition, using artificial neural networks (ANN), would allow the chatbot to have an advanced interface that communicates with the trainees through a textual medium while also recognizing their voice inputs and responding accordingly. This would result in a much-improved interaction experience for the students. Advances in the natural language processing (NLP) capabilities of chatbots can also enable recognition of the voice tone and the corresponding emotion of the students, thereby catering to the students’ emotions and responding with appropriate empathy (Suta et al. 2020).

In several countries that are signatory to the STCW, an oral examination constitutes part of the competence assessment of deck officers. For example, the Maritime and Coastguard Agency (MCA) of the UK states that “The oral examination forms part of the assessment of the attainment of all MCA Certificates of Competency, and all candidates must demonstrate an adequate knowledge of English Language” (MCA 2021). This also applies to the demonstration of knowledge of COLREGs in the oral examinations related to the navigation function for deck officers. Since this part of the assessment can be considered iterative in nature and sufficiently narrow in scope, it has potential for the application of AIEd tools. Specifically, the chatbot FLOKI, with voice recognition integration, could facilitate the self-directed learning process of oral examination preparation for prospective deck officers. Maritime trainees could utilize the chatbot virtually without limit to master this aspect of the curriculum without depending on the instructors or their peers for support.

5 Outlook and conclusions

The ongoing efforts to introduce digital solutions and support for maritime education and training purposes have to go further than merely catering to basic knowledge recall and application. To support higher orders of knowledge development in various scenarios, digital interactive tools such as the one presented in this paper can prove helpful. The stakeholders must understand the potential applications within maritime classrooms and simulators to make optimal use of such solutions. The support from artificial intelligence should be considered in light of rapidly evolving educational technology and changing client expectations. Traditional curriculum design affected by technological integration needs to reflect and be inspired by this continuing innovation in the industry.

Some limitations of the current study can be pointed out, and future research directions can be identified. Firstly, the STCW signatory states differ in their approach towards Maritime Education and Training (MET) and the application of technological resources. The current study presented a proof of concept and was carried out at a Norwegian maritime university offering three levels of maritime education. The assumptions regarding the use of technological tools such as smartphones or laptops to further support the acquisition of knowledge-related components of the B.Sc. in Nautical Science could differ from one geographical region to another. The sample size of the study (n = 18), in addition to the university-specific context, warrants caution when generalizing across other regions and to other STCW signatory states. Furthermore, the usability data gathered was compared with the generic benchmarks established in wider usability studies. The understanding of the application of AIEd tools in MET could be further advanced by longitudinal studies involving the chatbot FLOKI, using the same scale (SUS) and comparing the obtained scores with those of other AIEd interfaces. The text gathered from the numerous interactions with the chatbot FLOKI could also be subjected to conversation analysis to further uncover the knowledge construction process that unfolds while maritime trainees attempt to establish their understanding of COLREGs. It should also be noted that the objective of the paper was to illustrate the application of an AIEd tool, and COLREGs training was selected as a use case. The COLREGs-related content and its presentation would need further refinement before being deemed ready for classroom deployment. Future research should further investigate the application of AIEd tools to support efficiency, competence development, and self-directed learning in MET and provide a multi-faceted approach to tackle the fast-paced evolution of the skillsets required in professional settings such as the maritime domain.

In this study, a proof of concept of AI in maritime education and training, the chatbot FLOKI, was designed and implemented in a maritime classroom. The chatbot demonstrated a use case in COLREGs training for B.Sc. in Nautical Science students. The 10-item SUS was utilized to gather usability data concerning effectiveness, efficiency, and satisfaction. The usability data gathered for the chatbot FLOKI show overall satisfaction in its usage by the maritime students, with a usability score in the 3rd quartile of the established benchmark. The obtained SUS score was found not to depend on prior experience of navigation or chatbot interaction by the maritime students. Future research should further investigate the potential of AI chatbots such as FLOKI for supporting the knowledge components of the B.Sc. in Nautical Science education and explore avenues in MET at large for the application of AIEd to promote efficiency and competence development.