1 Introduction

In healthcare education, there is a general shift towards competency-based education, where students demonstrate learned knowledge and skills to achieve specific predetermined competencies (Ross et al., 2022). To acquire this new knowledge and these skills, feedback—which can include self-, peer-, patient-, and instructor feedback (Holmboe et al., 2015)—is considered crucial to students’ learning process (Baird et al., 2016; Carter et al., 2017; Sidebotham et al., 2018; Tekian et al., 2017; Tickle et al., 2021; Van Ostaeyen et al., 2022).

Since the early 2000s, ePortfolios have replaced paper-based portfolios and have gone through a series of technical advancements (Buzzetto-More, 2006; Gray et al., 2019; Segaran & Hasim, 2021). Nevertheless, the input formats for feedback among stakeholders in ePortfolios have barely evolved. They remain heavily text-based (Bing-You et al., 2017; Birks et al., 2016; Bramley et al., 2021), which may be a legacy of the original paper-based portfolios and of the challenges of handling media in the early days of the Internet (Marty et al., 2023; Siegle, 2002). In addition, feedback is often suboptimal, and research on how to incorporate feedback into ePortfolio design is scarce (Tickle et al., 2022).

Given the discrepancy between technological advancement and feedback format, there is a need to investigate new ways to capture feedback in ePortfolios. To improve the use of feedback in ePortfolios within healthcare education, we compared the preference for three feedback formats among participants from different disciplines within healthcare education. This leads to the following research objectives:

(1) to compare the usefulness, ease of use and attitude towards three feedback formats: open-text feedback, structured-text feedback and speech-to-text feedback;

(2) to explore recommendations for the design of feedback formats for future ePortfolios.

1.1 Implementation of feedback in ePortfolios

An ePortfolio (also known as a digital portfolio, online portfolio, e-portfolio, or eFolio) can be defined as a collection of electronic evidence assembled and managed by a user, usually online (Cedefop, 2023). The digital switch has brought several practical advantages to students and supervisors. ePortfolios offer wireless synchronization, prevention of document loss, no size restrictions, reduced printing costs, increased security, avoidance of duplication, ease of use, and digital notifications when feedback is provided (Janssens et al., 2022). Moores and Parks (2010) proposed 12 tips for introducing ePortfolios in health professions education, which have since been updated to reflect newer technologies and possibilities (Siddiqui et al., 2022).

1.2 Benefits and challenges of effective feedback in clinical education

Effective feedback is defined as a process whereby learners obtain information about their work in order to appreciate the similarities and differences between the appropriate standards for any given work and the qualities of the work itself, in order to generate improved work (Boud & Molloy, 2013). Effective feedback is central to supporting cognitive, technical and professional development in healthcare education (Archer, 2010). It plays an important role in helping the learner to reflect and in allowing progress to be made on the experience ladder. When it is systematically delivered from credible sources, individual feedback can change clinical performance (Veloski et al., 2006).

Research mainly focusses on the content of the feedback and provides guidelines on how to give feedback during assessments (Bing-You et al., 2017; Van der Leeuw & Slootweg, 2013). Among the principles of effective feedback are that it should be clear, specific and based on direct observations. It should be communicated in an appropriate setting and focus on the performance, not on the individual. The wording should be neutral and non-judgmental and emphasize positive aspects, by being descriptive rather than evaluative (Qureshi, 2017).

Bing-You et al. (2017) argue that there are significant gaps in feedback. First, most feedback given to students is delivered orally or in writing. More progressive forms, such as feedback cards, audio tapes and videotapes of feedback interactions, are described in the literature in only a limited number of cases. Moreover, evaluations of preferred feedback types and their use are lacking.

Second, written feedback is time-consuming and, therefore, not given on the spot. Healthcare professionals work in a fast-paced environment and struggle to provide timely feedback and reflection on actions. Students and mentors on the work floor are challenged to balance teaching moments with the demands of patient care. An example of this struggle for qualitative feedback is described by Kamath et al. (2015) in their experimental study with a head-mounted high-definition video camera (GoPro HERO3). They recorded a patient during induction of general anaesthesia to provide qualitative feedback. This study highlights the need for a more dynamic and different type of feedback. The introduction of smartphones on the work floor in the healthcare disciplines opens up more opportunities for digital reflection (Marty et al., 2023).

Nonetheless, despite the attention to feedback, healthcare students have long indicated that they receive insufficient feedback (De Sumit et al., 2004; Liberman et al., 2005) and that it is often mainly noted down by the student at the end of an internship (Tickle et al., 2022).

1.3 Speech-to-text Technology in Education

The domain of Human-Computer Interaction (HCI), the study of how humans and computers interact, has evolved at a firm pace over the years, making computers more ubiquitous, invisible and easier to work with (Weiser, 1991). Speech technology has proven its benefits and is integrated in commercially successful products such as smartphones, car controls, and home speakers (Hoy, 2018).

In an educational context, Automatic Speech Recognition (ASR) technology and speech-to-text technology are less common (Shadiev et al., 2014). The technology supports deaf and hard-of-hearing students by compensating for the lack of access to spoken classroom information through auditory-to-visual adaptations (Kushalnagar et al., 2015). It can also help international students, or students with cognitive or physical limitations, to understand lectures better, take notes concurrently during these lectures, and finish their homework (Shadiev et al., 2014). As the technology matured, the target audience broadened: speech-to-text was adopted to assist not only students with special needs but also the general student population, for example by improving students’ understanding of learning content and by offering guidance for reflective writing and homework (Hwang et al., 2012; Shadiev et al., 2014).

The usefulness of, and user experience with, Natural User Interfaces (NUIs) and other gesture interfaces are frequently evaluated by researchers and software developers (Guerino & Valentim, 2020). In a recent study comparing speech with text input for document processing, college students preferred typing over speech input even though they perceived speech input as simple and helpful (Tsai, 2021). Results showed that functional barriers (i.e. usage, value, and risk barriers) and psychological barriers (i.e. tradition and image barriers) positively affected users’ resistance to change (Tsai, 2021).

In sum, there is a discrepancy between available technology (e.g. AI, audio and video recordings, speech recognition) and the feedback options implemented in current ePortfolios (Birks et al., 2016; Slade & Downer, 2020). Research is needed to confirm whether new technology can increase the quality and efficiency of workplace-based feedback and assessment in professional education (Marty et al., 2023; van der Schaaf et al., 2017).

2 Method

2.1 Study Design

A within-subjects, mixed-methods experiment was conducted between 15/02/2021 and 30/03/2021. The study is part of the SBO SCAFFOLD project, which aims to develop an evidence-based ePortfolio to support workplace learning in healthcare education. We compared three different feedback formats in an ePortfolio prototype: open-text feedback, structured-text feedback and speech-to-text feedback. Fig. 1 shows the sequence of the experiment.

Fig. 1

Sequence of the experiment: watch a scenario through a video, give feedback on the scenario in the ePortfolio in the presented feedback format, fill in the survey to evaluate the feedback format, post-interview

First, a participant watched a video showing a fictitious scenario involving a patient, student and mentor in the hospital workplace (Fig. 2). Secondly, the participant provided feedback in the ePortfolio prototype. Participants were invited to imagine that they were in practice, and to observe and give feedback as they would in real life. The participants gave feedback to the student in the videos through three different formats (described in detail in Data Collection Tools): by means of open-text input, by answering three structured-text questions, or through speech-to-text transcription of spoken input. Thirdly, after having watched two different scenarios and provided feedback twice in the same feedback format, participants completed a survey derived from the Technology Acceptance Model (TAM) (Davis, 1989) to measure their acceptance of the presented feedback format. Finally, at the end of the experiment, when the participant had tested all three feedback formats, a qualitative post-interview was conducted, questioning participants about their experience and the use of the different feedback formats in real situations.

Fig. 2

Example of a video fragment showing a simulated scenario with a patient, student and workplace mentor in a hospital room

2.2 Participants

In total, 85 participants from different disciplines in healthcare education were recruited: 35 midwives (EQF level 6), 23 nurses (EQF level 5) and 27 nurses (EQF level 6) (Europass, 2022). Participants had different roles: 26 students, 24 workplace mentors and 35 supervisors (Table 1), all from the region of Flanders, Belgium. All of them were familiar with the use of an ePortfolio that only offered open-text feedback. Participants were recruited through mailings and posters at three different educational organisations. For their contribution and transport costs, they received a small financial compensation. All participants signed an informed consent before coming to the lab.

Table 1 Overview of participants per discipline and background

2.3 Data collection

The experiment took place under lab conditions due to the Covid countermeasures at the time, which restricted external visitors in hospitals. The experiment lasted between 60 and 90 min. Participants received a personal login for the ePortfolio and were assigned to one of four groups that determined the order of scenario presentation.

2.4 Scenarios through video

The researchers created eight videos of approximately three minutes each. Each video reflected a possible scenario between a student and a mentor in the workplace. The actors in these videos were the researchers involved in the project. All scenarios took place in a simulated hospital room or in the corridor next to the room where a simulated patient was being treated. For example, one scenario was an intravenous insertion to be performed by a student who was confronted with a patient showing little trust in her skills.

To prevent bias and order effects, the videos were presented in a counterbalanced order depending on the participant's assigned group (Table 2); an illustrative sketch of such a group assignment follows the table. This study used a within-subjects design, in which participants had to provide feedback in all conditions (i.e. using all three feedback formats). Each participant watched two videos per format, thus six scenarios in total.

Table 2 Overview of the experimental design for the four different groups of participants in chronological order (top to bottom). Participants answered a questionnaire about Perceived Usefulness (PU), Perceived Ease of Use (PEOU) and Attitude towards the feedback format after watching and giving feedback on two scenarios
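The study's actual four orders are those defined in Table 2. As an illustration only, the following Python sketch shows one common way to counterbalance format order across groups; both the choice of orders and the round-robin assignment rule are assumptions for the sake of the example, not the study's design.

```python
# Illustration only: the study's actual four orders are those in Table 2.
# Both GROUP_ORDERS and the round-robin assignment rule are hypothetical.
from itertools import permutations

FORMATS = ["open-text", "structured-text", "speech-to-text"]

# Four of the six possible format orders, one per group (hypothetical choice).
GROUP_ORDERS = list(permutations(FORMATS))[:4]

def format_order(participant_id: int) -> tuple[str, ...]:
    """Assign participants to the four groups round-robin, in recruitment order."""
    return GROUP_ORDERS[participant_id % len(GROUP_ORDERS)]

# e.g. format_order(0) -> ('open-text', 'structured-text', 'speech-to-text')
```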

2.5 Feedback Formats

In the following paragraphs we describe the technical details of the ePortfolio prototype that was used. After watching each video scenario on the tablet, participants were presented with the general ePortfolio feedback page. They were instructed to press the ‘Add Feedback’ button. On the feedback page, participants were presented with one of the three different feedback formats (i.e. open-, structured- or speech-to-text), depending on the group to which they were assigned. Each feedback module required different user actions in the ePortfolio.

Open-Text Feedback: the open-text feedback module (Fig. 3) showed an open text field with the title “Add feedback”. This type of open text input was common in ePortfolios, and had been used by participants before. There was no space restriction to add feedback.

Fig. 3

Open-Text feedback provided in the prototype

Structured-Text Feedback: the structured-text feedback module (Fig. 4) had three questions with accompanying open text fields for the answers: (1) What were the strong points? What did the student do well? (2) What were the working points? What went less well? and (3) What concrete tips can you give the student to improve a subsequent similar situation? The questions were based on Pendleton’s rules (Pendleton, 1984), a model for providing feedback in advanced life support training.

Fig. 4

Structured-text feedback provided in the prototype. Users give feedback by answering three defined questions

Speech-to-text: the speech-to-text module (Fig. 5) converted speech fragments into text using deep neural network algorithms for automatic speech recognition (ASR) (Google Cloud, n.d.). For our prototype, Google's API-powered module was used. To record a fragment, the user tapped the record button, talked to the computer, and terminated the recording when finished. Transcription of the recording started by default, although the user could choose not to initiate it. Technically, the recorded audio file was sent through the API to the Google Cloud, which processed the file and returned a written transcript. This process was done through a synchronous request: it could only transcribe one file at a time, while other files had to wait in a queue. The transcription typically took a few seconds, after which the user received a written interpretation of the audio file. Finally, the audio file was wiped from the cloud and stored on the mandated servers.

Fig. 5

Feedback module for the Speech-to-text condition. Users press ‘Record Feedback’. After processing, the voice recording is transformed to editable text

Along with the audio file, the transcribed text document was saved and could be accessed and edited by the user. The ePortfolio was a prototype used in Belgium, where Dutch was the main language. A minimal sketch of such a synchronous transcription request is shown below.
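The paper does not include the prototype's integration code; the following is a minimal sketch of one synchronous request, assuming the google-cloud-speech Python client, and an assumed audio encoding and sample rate.

```python
# Minimal sketch of a synchronous transcription request, assuming the
# google-cloud-speech Python client; encoding and sample rate are assumptions,
# not documented details of the prototype.
from google.cloud import speech

def transcribe_feedback(audio_bytes: bytes) -> str:
    """Send one recorded feedback fragment and return its transcript."""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,  # assumed
        sample_rate_hertz=16000,                                   # assumed
        language_code="nl-NL",  # Dutch, the prototype's main language
    )
    audio = speech.RecognitionAudio(content=audio_bytes)
    # recognize() is synchronous: one file at a time, as described above.
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)
```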

2.6 Survey feedback formats

Numerous theoretical models have been developed to explain users’ acceptance of technology; the Technology Acceptance Model (TAM) is one of the most cited. This model builds on the Theory of Reasoned Action (TRA) (Ajzen, 1991) in an effort to understand acceptance related specifically to the uptake of new technologies (Davis, 1989). The survey consists of three subscales: 14 items for Perceived Usefulness (PU), 13 items for Perceived Ease of Use (PEOU) and 4 items for Attitude towards the technology. All 31 items were statements that had to be rated on a 5-point Likert scale ranging from “totally disagree” (1) to “totally agree” (5). In total, 8 items were reverse scored for the analysis, and the results reflect the average score across the subscale items. For example, PU was measured with items such as “In general, I think this feedback module is useful for my job as a supervisor” or “This feedback module helps me to save time”. The PEOU subscale consisted of items such as “I often make mistakes when using this feedback module” or “Interacting with this feedback module requires a lot of mental effort”. Finally, an example of the Attitude subscale is “Usage of this feedback module provides a lot of advantages”.
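As a concrete illustration of the scoring just described, the following Python sketch computes a subscale mean with reverse scoring on a 5-point scale; the item column names, and which items are reverse scored, are hypothetical.

```python
# Sketch of TAM subscale scoring: reverse-scored items map 1<->5, 2<->4, 3<->3,
# and each subscale result is the mean over its items. Column names and the
# choice of reversed items below are hypothetical.
import pandas as pd

LIKERT_MAX = 5  # "totally disagree" (1) to "totally agree" (5)

def subscale_mean(responses: pd.DataFrame,
                  items: list[str],
                  reversed_items: list[str]) -> pd.Series:
    """Average each participant's ratings over one subscale."""
    scored = responses[items].copy()
    scored[reversed_items] = (LIKERT_MAX + 1) - scored[reversed_items]
    return scored.mean(axis=1)

# e.g. pu = subscale_mean(df, [f"PU_{i}" for i in range(1, 15)], ["PU_3", "PU_7"])
```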

2.7 Post-Interview

After the experiment, the qualitative post-interview was conducted by the same researcher. This phase involved a personal semi-structured interview lasting 10 to 15 min. All 85 participants took part in the post-interviews, but due to changes in the interview script, the first 12 participants did not rate their top three format preference and were excluded from that analysis. The following questions were asked: Are there positive or negative remarks on the different feedback formats, and do you see areas for improvement? Can you rank your preferred feedback formats from 1 to 3 and motivate why? What is the possible impact of real-life situations in the workplace when using the feedback formats? All interviews were conducted in (omitted) (the native language of the participants) and were transcribed by the interviewer.

2.8 Data analysis

The data from the usability survey of the different feedback formats were analysed with JASP software (JASP Team, 2022). Separate repeated measures ANOVA models were performed for the three subscales (i.e. PEOU, PU and Attitude) with feedback format condition as a within-subjects factor with three levels (i.e. open, structured, speech-to-text). Degrees of freedom were corrected using Greenhouse–Geisser estimates when Mauchly’s test indicated that the assumption of sphericity had been violated, and partial eta squared effect sizes are reported. Pairwise comparisons between feedback format conditions were conducted with Holm-adjusted p values, and Cohen’s d effect sizes are reported.
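The analyses were run in JASP; for readers who prefer a scripted equivalent, the following sketch uses the pingouin Python library, assuming long-format data with hypothetical columns participant, format and score.

```python
# Scripted equivalent of the JASP analysis, assuming the pingouin library and
# long-format data with hypothetical columns 'participant', 'format', 'score'.
import pandas as pd
import pingouin as pg

def analyse_subscale(df: pd.DataFrame):
    # Repeated measures ANOVA; correction=True applies the Greenhouse-Geisser
    # correction (JASP applies it when Mauchly's test signals non-sphericity),
    # and effsize="np2" reports partial eta squared.
    aov = pg.rm_anova(data=df, dv="score", within="format",
                      subject="participant", correction=True, effsize="np2")
    # Pairwise comparisons with Holm-adjusted p values and Cohen's d.
    posthoc = pg.pairwise_tests(data=df, dv="score", within="format",
                                subject="participant",
                                padjust="holm", effsize="cohen")
    return aov, posthoc
```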

To analyse the data from the post-interviews, thematic content analysis was used (Guest et al., 2014) in order to find common themes in participants’ responses. This means that data were explored without any predetermined framework and themes were inductively drawn from the data. Two researchers (ODR, JM) read the open-ended responses and interview notes several times to get acquainted with the data. Once this was completed, the emerging themes were discussed by the same researchers.

3 Results

We first describe the quantitative results from the survey, followed by the qualitative results from the post-interviews. The sample of participants comprised 10 men and 75 women. They were between 18 and 58 years old; the mean age was 35.5 (SD 12.1).

3.1 Survey on evaluation of feedback formats

Table 3 presents the self-reported Perceived Usefulness (PU), Perceived Ease of Use (PEOU) and Attitude towards the feedback formats used in the ePortfolio. The results are elaborated in the following paragraphs.

Table 3 Post-hoc comparisons and statistics for factor feedback format condition

Usefulness

The analysis confirmed that the three feedback format conditions significantly differed in reported PU, F(2,168) = 14.185, p < 0.001, ηp2 = 0.14 (see Fig. 6). Post-hoc analysis revealed a significant difference in reported usefulness between all conditions. The structured-text feedback format was perceived as the most useful, followed by the open-text format and the speech-to-text format, respectively.

Fig. 6

Comparison between feedback formats for perceived usefulness (PU)

Ease of Use

Similarly, the analysis for PEOU, F(1.85,155.75) = 52.654, p < 0.001, ηp2 = 0.39, also indicated differences between the feedback formats (Fig. 7). Consistently, participants reported a greater subjective experience of ease of use for both the structured-text and open-text formats. Compared to these formats, the speech-to-text feedback format was perceived as more difficult to use (with a mean difference of 0.81 on the scale).

Fig. 7

Comparison between feedback formats for perceived ease of use (PEOU)

Attitude

Finally, the analysis for attitude also confirmed a similar difference between the feedback formats, F(1.84,154.21) = 10.211, p < 0.001, ηp2 = 0.11. Users reported a more positive attitude towards, and thus preferred, both the structured-text and open-text formats compared to the speech-to-text format (Fig. 8).

Fig. 8

Comparison between feedback formats for attitude scores

3.2 Preferred feedback format

In the post-interview, participants were asked to rank the three feedback formats in order of preference. Results of the non-parametric Friedman test indicated a statistically significant difference in preference of feedback format, χ2(2) = 26.042, p < 0.001, Kendall’s W = 0.020. Post-hoc analysis with Conover’s comparisons showed that structured feedback was ranked higher than open-text and speech-to-text feedback, all p’s < 0.001. There was no significant difference in the ranking between open-text and speech-to-text feedback, p = 0.559.
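For completeness, the ranking analysis can also be scripted; the following sketch, assuming SciPy and the scikit-posthocs library and using a toy rank matrix rather than the study's data, shows the Friedman test, Kendall's W derived from it, and Conover's post-hoc comparisons.

```python
# Sketch of the preference-ranking analysis, assuming SciPy and scikit-posthocs.
# The rank matrix below is a toy placeholder, not the study's data.
import numpy as np
from scipy.stats import friedmanchisquare
import scikit_posthocs as sp

rng = np.random.default_rng(0)
# One row per participant: ranks 1-3 over (open, structured, speech-to-text).
ranks = np.array([rng.permutation([1, 2, 3]) for _ in range(73)])

chi2, p = friedmanchisquare(ranks[:, 0], ranks[:, 1], ranks[:, 2])

# Kendall's W follows directly from the Friedman statistic: W = chi2 / (n(k-1)).
n, k = ranks.shape
kendalls_w = chi2 / (n * (k - 1))

# Conover's post-hoc comparisons on the same participants x formats matrix.
posthoc = sp.posthoc_conover_friedman(ranks)
print(chi2, p, kendalls_w)
print(posthoc)
```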

3.3 Thematic analysis of post-interviews

The qualitative post-interviews revealed that, although participants did not score the speech-to-text feedback as the best format, many participants would like to use this type of oral feedback during their daily job. However, the quality of the transcription of the speech-to-text feedback was below participants’ expectations.

The analysis of the qualitative post-interviews resulted in eight themes, later refined to three based on their similarities: (1) usability and quality of feedback, (2) new contexts of use, and (3) shortcomings of and recommendations for speech-to-text feedback. Table 4 summarizes the outcome per feedback format and theme.

Table 4 Overview of results per feedback format and theme

Usability and quality of feedback

Below, we structure the positive and negative comments about the perceived usability and quality of feedback for each feedback format:

(1) Open-text feedback was the common way to provide feedback in ePortfolios. Consequently, the participants perceived this format as easy to use, and it especially suited mentors and supervisors already experienced in providing feedback. It was praised because users felt free to write feedback as they were used to doing, and ideally, this form of feedback resulted in a well-considered and nuanced story. However, participants recognized the risk of telling an incomplete or too one-sided story (e.g. only negative). Some participants critically reflected on this format and mentioned that open-text feedback did not challenge them to improve the quality of their feedback, as the directed questions of the structured format did, or as speech-to-text feedback does by incorporating audio-recorded feedback fragments from during the day into the portfolio.

(2) Structured-text feedback in this experiment proposed three questions: 1. What were the strong points? What did the student do well?; 2. What were the working points? What went less well?; 3. What concrete tips can you give the student to improve a subsequent similar treatment? The majority of participants mentioned that the questions guided them to organise their thoughts and to provide balanced feedback with positive elements and points of improvement. However, the feedback format with an open box under each question gave some participants, especially experienced mentors and supervisors, the feeling of repetition when answers overlapped with feedback that they had already written under another question. Some participants recommended adding a fourth, open text box with a personalised title for feedback that did not fit into one of the pre-given boxes.

“This (structured-text) feedback format takes longer than open-text feedback. For the student this is clear, he receives concrete action points; to me, it doesn't matter because I'm already doing this (giving structured feedback) anyway” – R35.

(3) Speech-to-text feedback was new for all participants. In general, participants found it hard to provide oral feedback to the computer or felt awkward doing so. Some found the feedback lacked structure and could, therefore, be confusing for the person receiving it. The intonation was lost when converting speech to text, so some nuances were not fully captured. Because of the need to rework and correct mistakes in the transcriptions, the tool was not time efficient for the participants; quite the contrary. Participants spontaneously came up with solutions to improve the usability of the tool; an overview can be found below under shortcomings and recommendations for speech-to-text feedback. However, they believed that, provided the technology improved, the content and the quality of speech-to-text feedback could become more accurate and detailed than the open- and structured-text formats. The main reason was that the recording could be made on-premise.

New contexts of use

Nowadays, mentors and supervisors often give feedback at home at the end of the day or on a free day, because of the high workload during patient care. The use of oral feedback during work shifts would make it possible to move away from the computer and give feedback immediately on-premise, in the hospital or workplace.

Instead of recording a complete feedback comment on a scenario, as was tested in this experiment, participants mentioned they would rather make short recordings when they noticed an important event in the workplace related to the student’s positive or negative performance or attitude, and provide more elaborate written feedback later on. By recording just a couple of sentences during the day, the feedback would be richer and more accurate.

Respondents also suggested recording the oral feedback as it was given to the student. The recordings would avoid possible discussions with students about what had or had not been said and would save time when writing the final assessment. In sum, the use of speech-to-text feedback in ePortfolios to take notes and to record oral feedback would be beneficial for the amount of feedback given to the students, a crucial element in ePortfolios. To make this shift from providing feedback at home or during shift hours to feedback on-premise, accessibility on a smartphone rather than a portable computer is important to consider when designing an ePortfolio. This shift to a new context of use was not encountered with structured-text and open-text feedback.

“I would record something quickly, during working hours. At home I would reread the transcript on PC and wonder if it came across the way I wanted it to”—R44.

“The advantage of recording immediately is that it is faster, and it is a reminder to myself. It feels like having a paper with me to quickly write something down” – R35.

Shortcomings and recommendations for speech-to-text feedback

The quality of the transcriptions was the biggest shortcoming of the speech-to-text format. The lack of punctuation and sentences that were misunderstood or transcribed completely differently were some of the technical issues encountered. None of the participants could publish the transcribed text without first making corrections and reworking the text. Especially participants speaking a dialect, or participants whose native language was not Dutch, experienced poor-quality speech-to-text transcription. On top of that, specialized healthcare jargon made it difficult for the AI to transcribe the audio file accurately.

Some participants were not able to record the feedback in one take or needed to retake the audio fragment. They requested a pause or stop button when recording feedback, or a memorization list on the screen in the form of a post-it note, which they could consult while providing oral feedback to structure their monologue. Additionally, they preferred to be in a room alone to make the recording, a situation which is not always possible in a real-life context in the hospital or other healthcare settings with shared lunch- and workspaces. The quality of the recording during the experiment was not influenced by environmental variables, but participants warned about possible problems such as Internet availability in the care unit, background noise, restrictions and rules around smartphone use, or transcription with different voices.

“I would prefer to leave feedback (with voice recording) without having anyone around me. Feedback has to grow, is a process, it will be incomplete if I say something quickly” – R76.

To improve the quality of the transcriptions, instructions could be provided. Based on the analysis of the interviews, we list participants’ recommendations:

  • provide tutorials on how to make a speech-to-text recording, in the form of a video tutorial or text: pay attention to articulation, avoid non-lexical utterances (e.g. “huh”, “uh”, or “erm”), avoid the use of dialects, use complete sentences and no phrases in bullet points, use formal wording, keep the microphone close to the source, limit the background noise

  • make speech-to-text available in different languages to allow participants to record in their native language

  • introduce speech-to-text as a tool to make notes and to record oral feedback when given to a student, instead of using it for final feedback. This would be beneficial for the amount of feedback given to the student during the internship

  • combine the speech-to-text format with the structured-text feedback format. That way, the participant would not need to provide long speech input, but rather shorter answers to the structured questions

  • create pause and stop buttons during the recording to avoid having to repeat the recording

  • create a “post-it note” list which can be personalised with bullet points and which is visible on the side of the screen while giving the oral feedback. It will help participants to structure their thoughts during the recording.

4 Discussion

To our knowledge, this study is novel, since feedback formats in ePortfolios are underexplored. The input formats for feedback among stakeholders in ePortfolios have barely evolved; they are predominantly based on text input and neglect new interface possibilities (Bing-You et al., 2017; Birks et al., 2016; Bramley et al., 2021). However, new technology has the potential to increase the quality and efficiency of workplace-based feedback and assessment in professional education (Marty et al., 2023; van der Schaaf et al., 2017). Up until now, ePortfolios in healthcare education have often been designed without a focus on the users’ experiences. Therefore, understanding how users perceive their experiences can lead to improved practices and policies with regard to ePortfolios and should be considered in future research (Slade & Downer, 2020; Strudler & Wetzel, 2008).

In this experimental study, we focussed on ePortfolio users to compare three feedback formats. Structured-text feedback received the highest scores on perceived usefulness, perceived ease of use and attitude towards technology. Participants preferred this type of feedback over open-text feedback, currently the standard, and the less common speech-to-text feedback. An unintended but valuable result of this study was the interest in using speech-to-text feedback not to record and transcribe complete feedback comments to students, as was foreseen in the experiment, but for short recordings, or to record conversations with a student as proof that they had occurred. These insights are valuable and differ from the initial purpose of the system.

5 Limitations

Previous research has already shown that a major limitation of speech-to-text technology is its accuracy, and that only speech-to-text generated text with a reasonable accuracy is useful and meaningful for students (Colwell et al., 2005; Hwang et al., 2012). Ranchal et al. (2013) observed that speakers using the technology adapted and moderated their speed and volume; they spoke less spontaneously and with better fluency. Clear pronunciation, avoidance of non-lexical utterances (e.g. “huh”, “uh”, or “erm”), the use of short sentences, and a good position towards the microphone are common guidelines to improve the quality of the transcriptions. In this paper, we add some recommendations on how the outcome of the transcripts can be improved, e.g. with a pause button or written notes next to the recording button.

All recommendations should be read bearing in mind that this ePortfolio prototype used the Google Cloud speech-to-text application in Dutch in November 2021, probably the most advanced commercially available product on the market at that time (Google Cloud, n.d.). Even so, our experiment indicated that its ease of use, usefulness and attitude scores fell below participants' expectations. The product website mentions that it is challenging for the API to create transcripts of specialized domain jargon (legal, medical, …), which certainly applied to this experiment. Outdoor or noisy backgrounds can also affect the end result of the transcript, but this effect was minimized in this experiment thanks to the lab setting. However, it should be considered in follow-up research in hospital settings.

Our study involved respondents with good computer skills who were experienced in giving feedback. Years of typing experience and typing speed were not questioned or captured in advance, which is a limitation of our study. However, we noticed that typing skill impacted the preference for feedback format: especially the older participants (mentors and supervisors) preferred speech technology due to their lower typing skills in comparison with the younger students. This study was limited to the region of Flanders, Belgium; the results may vary in regions with different computer and feedback skills.

Another shortcoming of the study was the fact that participants were questioned in a lab, out of their work context. This was due to the Covid regulations at that time, which restricted external visitors in hospitals. Because of this, we could not take into account all real-life situations that could have had an impact on the preferred feedback format, such as being with several people in a crowded room when feedback was given, or receiving calls during the feedback. Also, the cases on which participants provided feedback were recorded on video in several simulated scenarios and were thus not real-life scenarios in the hospital. However, the lab environment gave researchers the possibility to test with a consistent quality of output and more control in an experimental design. The differences with the real-life context were later discussed in the qualitative post-interview, which gives us the possibility to prepare a follow-up study on a smaller scale with a more evolved prototype.

The aim for future research is to translate the recommendations into a working prototype of an ePortfolio. This study focussed on the preference for a feedback format, not on the content or the quality of the feedback given. Future research will, among other things, compare the quality of the given feedback between the different feedback formats, and study how feedback loops can be created and how students' competencies can be entered and visualised. In this upcoming stage, different mock-ups will first be tested in a lab setting and later also tested and implemented in a hospital setting.

6 Conclusion

This study compares preferences for three feedback formats used in an ePortfolio for healthcare education using a mixed-methods design. It goes further by giving recommendations, based on user experience, for the design of feedback formats for future ePortfolios. Structured-text feedback was the preferred feedback format in this study in terms of usefulness, ease of use and attitude, all p’s < 0.001. There was no significant difference in the ranking between open-text feedback and speech-to-text feedback in the top three, p = 0.559. It is, thus, recommended to offer structured-text as a feedback format in ePortfolios.

However, these findings need to be nuanced with the qualitative data, which indicate that participants have high expectations of speech-to-text technology but that the quality of the speech-to-text transcriptions in the experiment was below expectations. This could be because the participants' language was Dutch rather than English. Participants mentioned that they would use speech-to-text in a different way than offered in the experiment. The use of oral feedback would give them the possibility to move away from the computer and give feedback immediately on-premise. Instead of recording complete feedback reports, as tested in this study, participants mentioned they would rather make oral notes as a reminder for more elaborate written feedback later on. In sum, the quality of the speech-to-text technology used in this experiment was insufficient for use in an ePortfolio, but it holds potential to improve the feedback process and increase the amount of feedback given to students, a crucial element in ePortfolios and workplace learning.

The authors provide recommendations for designers and managers developing ePortfolios with feedback modules, among them a list of recommendations from participants to increase the quality of speech-to-text feedback.