1 Introduction

A rapid shift towards telemedicine has taken place, particularly in recent years (Mann et al., 2020; Waller and Stotler, 2018). Telemedicine refers to delivering medical care to patients over distance via technology such as voice calls, email, video calls, and text messaging (Waller and Stotler, 2018). Although telemedicine and other online text-based counseling services can provide many benefits, such as efficiency (Katz and Moyer, 2004), privacy, and accessibility (Moylan et al., 2022), concerns have been raised about the ability of these services to convey empathy (Terry and Cain, 2016; Moylan et al., 2022).

In this paper, we focus on patient-doctor discourse at a chat-based online clinic and on the role of different conversational characteristics used by the doctors in conveying a sense of empathy to the patient.

1.1 Patient – Doctor Empathy

A common definition of empathy includes the skills of understanding others’ thoughts and emotions (cognitive empathy, mentalizing), sharing emotional states with others (affective empathy), and responding to others’ distress with care and compassion (De Waal and Preston, 2017; Levenson and Ruef, 1992).

Empathic responding is known to matter in face-to-face medical consultations. A systematic review by Derksen and colleagues found that medical doctors’ empathy was associated with patients’ satisfaction, enablement, and adherence to treatment, with decreased anxiety and distress, and with better clinical outcomes (Derksen et al., 2013). More recent research has been in line with these findings, showing that empathy is linked with patients’ compliance with treatment plans (Attar and Chandramani, 2012), patient satisfaction (Menendez et al., 2015; Pollak et al., 2011; Wang et al., 2018), enablement (Mercer et al., 2012), self-reported well-being (Mercer et al., 2016), and self-reported treatment outcomes (Steinhausen et al., 2014). Yet, contrasting findings also exist, showing that surgeons’ empathy after a single visit was not associated with patient-reported depression symptoms – and although a weak relation was found between empathy and decreased pain, it was not considered clinically significant (Kootstra et al., 2018). The mechanisms through which empathy improves well-being likely relate to decreased patient stress as well as improved adherence and enablement. For example, a study by Xu and colleagues showed that medical doctors’ empathy was associated with decreased inflammation among patients with Crohn’s disease and that this effect was mediated by better sleep, decreased stress, and improved self-efficacy (Xu et al., 2020).

As in all interactive settings, both verbal and non-verbal communication are important for conveying empathy during medical consultations. Non-verbal cues may include body posture, facial expression, eye contact, and a tone of voice appropriate for the patient’s situation (Vogel et al., 2018). As described by Dean and Street (2014), empathic verbal communication can include verbally recognizing the patient’s emotions (“I know this is stressful to you”), validating their emotional responses and giving space to explore them further (“Tell me about what’s going on”), as well as taking therapeutic action and verbally reassuring the patient about an action plan (“We will figure this out together”).

In addition to the importance of empathy during face-to-face encounters, we have recently shown that medical doctors’ empathy also plays a role during text-based consultations at an online clinic (Martikainen et al., 2022). Patients who rated their doctors as more empathetic also reported lower concern about their symptoms and perceived their symptoms as less severe after the online encounter. Furthermore, two weeks later, patients who had rated their doctors as more empathetic reported greater alleviation of their symptoms than those who had rated their doctors as less empathetic. The exact communicational content improving patient experiences was, however, not a focus of the earlier work. Text-based interaction differs from face-to-face encounters in various ways, especially in terms of displaying empathy. To develop telemedicine encounters and to motivate good-quality communication, it is important to understand what type of communicational content supports the experience of empathy during text-based medical consultations.

1.2 Conversational Features and Empathy in Chats

Communication in chats is often considered ‘quasi-synchronous’, or a ‘hybrid’ of written and spoken language (Garcia and Baker Jacobs, 1999; Keng Wee Ong, 2011). Interaction in chats differs from spoken interaction mainly in terms of the affordances – the possibilities and constraints of a specific technology – that can be used for communication (Norman, 1988; Gaver, 1991; Hutchby, 2001). Affordances determine how users can produce conversational actions and how the overall conversation can unfold in chats. Since chat communication is text-based, the ways in which conversational features are produced and received differ greatly from spoken communication. Although differences between spoken and chat communication are widely studied, researchers have avoided treating chats as “leaner” than spoken communication and have instead examined them as a unique form of mediated discourse (Arminen et al., 2016). This way, the potential of text-based communication can be revealed, not only its deviations from spoken communication (Herring, 1999).

In spoken communication, one expectation is that turns that belong together will occur temporally adjacent to each other (Schegloff, 2007). However, turns in a chat conversation are not always expected to relate to the turn posted immediately above (Meredith, 2019). Instead, multiple topics and conversations can be interwoven, and anyone can intervene between an initiated topic and its response (Herring, 1999). In addition, self- or other-initiated repair – a common feature of spoken communication for dealing with troubles that arise in speaking, hearing, or understanding talk – functions differently in chats. In chats, repairs usually involve correcting a misspelling or some other error in the text (Meredith, 2019). Moreover, although a message is posted in a chat in its entirety (i.e., without repairing it while producing it), participants often edit their messages while still writing them, so that the message better responds to something another participant has posted in the meantime (Garcia and Baker Jacobs, 1999). Thus, participants can utilize the affordances of the medium already during message production and “repair” the message even before it is sent.

One major difference between chat and spoken communication is access to the embodied conduct of co-participants during interaction (Herring, 1999). Nonverbal features such as facial expressions, gaze, body position and movements, intonation, verbal stress, and rhythm of speech remain hidden in chats. This has implications for the display of empathy and affiliative responses in chat interaction. Affiliation refers to actions with which a recipient demonstrates that they have access to and understanding of the speaker’s affective stance and displays that they support or endorse this stance (Sorjonen et al., 2021; Stivers, 2008). Resources for delivering affiliative actions can be verbal (e.g., response cries such as “oh wow!” and assessments such as “that’s wonderful/terrible!”) or nonverbal (e.g., a nod).

Over the years, chat users have created substitutes for expressing embodied cues, socioemotional information, and empathy, using textual resources and typographic marks (Walther et al., 2005; Keng Wee Ong, 2011). Emoticons (i.e., facial expressions produced typographically with letters, numbers, punctuation marks, and symbols), emojis (i.e., facial expressions, people, objects, places, and so on in a more pictorial form), and images in graphics interchange format (GIF) are also commonly used to give information, react to messages, and express emotions (Lyons, 2018). Users vary in how much and how often they use emoticons and may have different understandings of the meaning of specific emoticons (Lu et al., 2016). Thus, users may have a very different strategy for communicating with close ones than with a doctor at a digital clinic.

Despite the possibility of using emoticons and emojis, it has been acknowledged that displaying empathy remains somewhat challenging in chat communication, because people often rely on facial expressions and body movements to understand the thoughts and feelings of the other (Pfeil and Zaphiris, 2007). In addition, since research investigating displays of empathy and intimacy in chat encounters has mostly focused on interactions between friends and family (e.g., Hassib et al., 2017; Y. Hu et al., 2017), this challenge prevails especially in institutional chat encounters, such as online counselling and text-based helplines (Predmore et al., 2017; van Dolen and Weinberg, 2019).

In a recent study by Moylan et al. (2022), volunteers at an online helpline reported that although expressing empathy is an essential part of their work, in the absence of vocal cues such as tone of voice and pauses it was difficult for them to grasp the help-seeker’s emotional state. This created uncertainty about whether they accurately understood the help-seeker’s needs and emotions and whether the emotional level of their own response was suitable for the situation. Moylan et al. (2022) concluded that the difficulty of expressing empathy over text can become a barrier to mutual understanding between the help provider and the help seeker. Apart from this study, research investigating the actual content of emotional and empathetic chat discussions is scarce. Thus, the ways in which participants can, and seek to, display empathy, especially in institutional chat encounters such as an online clinic, require further attention.

Research from another context of text-based interaction – online communities and support groups – has shown that giving and receiving emotional support and empathy in a verbal form is one major characteristic of these communities (Pfeil and Zaphiris, 2007; Rodgers and Chen, 2017; Wright, 2000). As an example, Pfeil and Zaphiris (2007) analyzed messages in an online community directed at the elderly and identified categories related to the display of empathetic content. They divided these categories into messages written by the empathy-seekers (“targets”) and messages written by those displaying empathy (“empathizers”). In the targets’ messages, empathy-seeking appeared as self-disclosure in text units describing the target’s general feelings, narratives of the target’s current situation, information about the target’s medical situation, and requests for others’ support or advice. In the empathizers’ messages, empathy was shown as providing light support: showing interest in the target by asking for more information or clarification, displaying encouragement without going into detail, and expressing best wishes for the target or the whole community. Empathy was also displayed as providing deep support: deep emotional support towards the target, reassurance regarding the information, action, or feelings that the target reported, and giving help and advice concerning the target’s situation. Finally, the empathizers also displayed empathy through self-disclosure, by describing being in a similar situation or having a similar problem as the target. These observations describe in detail how empathy can be displayed in written form and would be worth investigating in a chat context as well.

1.3 Research Questions

In this study, our aim is to further analyze the text-based patient-doctor discourse from our previous study (Martikainen et al., 2022) to investigate which conversational characteristics are present during text-based consultations and how they relate to the patient experience. To categorize the patients’ and doctors’ utterances, we use an adapted version of the Roter Interaction Analysis System (RIAS) (Roter and Larson, 2002), modified to suit the purposes of analyzing online text-based consultations.

Our research questions (RQs) are:

  • RQ 1: How reliably can the text-based utterances be coded using a modified version of the RIAS?

  • RQ 2: What types of utterances are present during text-based communication?

  • RQ 3: What types of utterances are related to the perception of empathy by the patients?

2 Methods

2.1 Research Setting

The study was conducted in a private healthcare provider’s online service. The service is used to treat symptoms and diseases that do not require a physical examination. Patients can use the online clinic through a browser or a mobile application, logging in with their online banking credentials. After logging in, the patient can open a new conversation with a doctor. The consultations are charged per discussion, independent of their duration. The doctors providing care through the service can access the patients’ previous medical records, write prescriptions, and invite the patient to a face-to-face check-up if needed. The interaction is text-based, but the patients can also send photographs to the doctor.

The data were collected as part of a study investigating the role of doctors’ empathy in patients’ experiences online and testing augmentations to an online anamnesis questionnaire to support patient experience, as described earlier (Martikainen et al., 2022). The anamnesis questionnaire was created as part of the online consultation service; patients fill it in when checking in to the online clinic, before interacting with the doctor. The questionnaire includes drop-down menus as well as spaces for the patient to describe their symptoms and requests in their own words. The doctor opens the discussion after reading the patient’s answers to the questionnaire.

2.2 Questionnaires

Patients’ perceptions of doctors’ empathy were assessed using a Finnish translation of the Consultation and Relational Empathy (CARE) questionnaire (Mercer et al., 2004). The patients answered ten questions on the doctors’ ability to convey empathy on a five-point scale (0 – poor, 1 – fair, 2 – good, 3 – very good, 4 – excellent); they could also state that a question did not apply to the situation. Internal consistency of the scale was excellent (Cronbach's α = 0.97). A mean empathy score was calculated for participants who considered at least three of the ten questions applicable (n = 159).
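The two score computations described above can be sketched as follows. This is a minimal illustration assuming responses are stored as NumPy arrays with NaN marking “not applicable” items; the function names and data layout are ours, not the study’s actual analysis code:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def care_score(responses: np.ndarray, min_applicable: int = 3) -> float:
    """Mean CARE score over the applicable (non-NaN) items; returns NaN when
    fewer than min_applicable of the ten questions were considered applicable."""
    if np.count_nonzero(~np.isnan(responses)) < min_applicable:
        return float("nan")
    return float(np.nanmean(responses))
```

For a respondent who answered, say, four of the ten items, `care_score` returns the mean of those four ratings; with only two applicable items the score is treated as missing, matching the inclusion rule above.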

The patients also filled in basic demographic information on their educational level, income, gender, and age. Patients’ stress levels were assessed with the Perceived Stress Scale (Power, 2003).

2.3 Participants

The data were collected from June to November 2019. Patients’ interest in participating in the study was enquired about at the end of the digital anamnesis questionnaire described above. After the online appointment, each patient willing to participate received and signed a written informed consent form indicating whether their chat discussions could be used for the research.

Altogether 209 adult patients participated in the study. Of these participants, valid chat conversation data were available in 201 cases; in the 8 non-valid cases the full dialogue was not recorded due to technical problems. Of the 201 participating patients, 135 (67.2%) were women; 64 (31.8%) had up to high-school-level education, 72 (35.8%) had a Bachelor’s degree or equivalent, and 65 (32.3%) had a Master’s degree or higher. For the analyses of patients’ experiences of doctors’ empathy, 172 (86%) participants with valid conversation data filled out a follow-up questionnaire within two weeks of the online encounter with the doctor (M = 4.9, SD = 3.5 days). Of these participants, 2 had met the same doctor face-to-face after the meeting at the online clinic. These participants’ data were excluded from the analyses regarding perceptions of empathy, since meeting the doctor face-to-face might bias the patients’ evaluations of online empathy.

The doctors’ participation was anonymous, and no background data were collected from the doctors, to make participation as easy as possible. At the time of the data collection, 54 doctors (20 women and 34 men) were working at the online clinic. Of these doctors, 31 were involved in the discussions with the patients participating in this study. The average number of discussions per doctor was 5.7 (SD = 5.6, range 1 to 30).

Most of the doctors working at the clinic during the data collection were experienced general practitioners or occupational physicians, while some were also specialized in pediatrics, otorhinolaryngology, and gynecology. All had previous experience of working at the online clinic. The doctors had not received communication training from the employer prior to the study.

The patients’ consent was obtained only after the dialogue with the doctor had taken place; thus, the patients already knew what type of information they were choosing to share for the study. The doctors were informed about the data collection before it started. No identifying information was collected from the doctors at any point. The data were stripped of all information that might have, even indirectly, led to identification of the participants, and were stored on a secured server accessible only to the researchers involved in the study. The Research Ethics Committee in the Humanities and Social and Behavioural Sciences of the University of Helsinki approved the study protocol.

2.4 Categorizing the Patient – Doctor Discourse

The consultations were analyzed using the RIAS method, modified for the purposes of this study. In addition to coding the discussions between the doctors and the patients, the patients’ answers and requests written in their own words in the anamnesis questionnaire, filled in before the consultation started, were also coded.

The RIAS system has been used to study various kinds of medical interaction settings, including consultations with general practitioners (Bensing, 1991) and medical specialists (Ong et al., 1998). Although most of the studies have examined spoken communication between patients and doctors in face-to-face settings, RIAS has also been modified for technology-mediated interaction in video-based telemedicine consultations (Miller and Nelson, 2005). To our knowledge, RIAS has not been applied to chat-based encounters in earlier research.

The original RIAS includes 62 categories for classifying the communicative utterances: 34 items for doctor communication and 28 items for patient communication (Ong et al., 1998). These 62 categories are merged into clusters and the clusters are further divided into Instrumental/Task focused talk and Affective/Socio-emotional talk (Miller and Nelson, 2005; van den Brink-Muinen et al., 2002). In this paper we will use the terms Instrumental and Affective talk in line with van den Brink-Muinen and colleagues (van den Brink-Muinen et al., 2002).

The clusters pertaining to Instrumental talk are “Biomedical talk”, “Psychosocial talk”, and “Procedural statements”, and the clusters pertaining to Affective talk are “Social talk”, “Agreement”, “Rapport building”, and “Facilitation” (see Table 1 for a description of each category and cluster).

Table 1 Descriptions of the RIAS clusters and categories used in this study

In this study, we used an iterative coding strategy to modify the RIAS to fit the text-based data. The RIAS categories were not translated into Finnish, as they were used only by the researchers. The process of modifying the RIAS is described in detail in Appendix 1. After the modifications were finalized, two coders (authors 3 and 4) analyzed all the recorded data, and the mean value between the two coders was used to indicate the number of utterances pertaining to each conversational category. One new cluster, “Technology related exchange”, and one new category, “Sick-leave related talk” (pertaining to the Biomedical talk cluster), were identified during the process (Table 1).
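As a compact illustration, the cluster structure and the two-coder averaging described above might be represented as follows. The cluster names follow Table 1, while the dictionary layout, the “Added” grouping, and the helper name are our own sketch:

```python
# Clusters of the modified RIAS, grouped by talk type (per Table 1);
# "Technology related exchange" is the cluster added during the modification.
CLUSTERS = {
    "Instrumental": ["Biomedical talk", "Psychosocial talk", "Procedural statements"],
    "Affective": ["Social talk", "Agreement", "Rapport building", "Facilitation"],
    "Added": ["Technology related exchange"],
}

def mean_counts(coder1: dict, coder2: dict) -> dict:
    """Mean utterance count per cluster across the two coders, as used to
    quantify each conversational category; a missing cluster counts as zero."""
    return {
        cluster: (coder1.get(cluster, 0) + coder2.get(cluster, 0)) / 2
        for cluster in coder1.keys() | coder2.keys()
    }
```

For example, if one coder found two Social talk utterances in a dialogue and the other found three, the dialogue's Social talk count would be 2.5.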

2.5 Statistical Analyses

2.5.1 Interrater Reliability

Interrater reliability was calculated by correlating the frequency of codes pertaining to the different clusters in each dialogue observed by the two raters. Due to the non-normality of some of the variables, we used Spearman rank-order correlations as indicators of inter-rater reliability, in line with Ong et al. (1998). Some of the categories were rarely found in the data; thus, we calculated the inter-rater reliability for each cluster but not for individual categories. Additionally, negative expressions were found in only six patient cases and one doctor case and were therefore left out of the analyses (Table 2).

Table 2 Descriptive statistics of the different conversational categories and clusters among doctors and patients (N = 201)

For the separate inter-rater reliability analyses of each cluster, we included only cases in which at least one of the two coders had identified at least one utterance pertaining to the cluster in question. For example, if neither coder identified any technology related utterances for a given patient, that patient’s discourse was left out of the inter-rater reliability analysis of technology related talk. Consequently, a smaller set of discussions was available for evaluating the inter-rater reliability of some of the less frequently occurring clusters (Table 3).
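The reliability computation described above can be sketched in pure NumPy as follows; the function name and the example per-dialogue counts are hypothetical, but the procedure mirrors the text: rank-correlate the two raters' counts, including only dialogues where at least one rater found at least one utterance of the cluster:

```python
import numpy as np

def spearman_rho(a, b):
    """Spearman rank-order correlation: Pearson correlation of the ranks,
    with tied values assigned their average rank."""
    def ranks(v):
        v = np.asarray(v, dtype=float)
        order = np.argsort(v, kind="mergesort")
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)
        for u in np.unique(v):        # average ranks over ties
            tied = v == u
            r[tied] = r[tied].mean()
        return r

    ra, rb = ranks(a), ranks(b)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra ** 2).sum() * (rb ** 2).sum()))

# Hypothetical per-dialogue counts of one cluster's utterances by each rater.
rater1 = np.array([5, 2, 8, 0, 3, 0, 1])
rater2 = np.array([4, 2, 9, 1, 3, 0, 1])

# Keep only dialogues where at least one rater identified >= 1 utterance.
included = (rater1 > 0) | (rater2 > 0)
reliability = spearman_rho(rater1[included], rater2[included])
```

Statistical packages provide equivalent routines; the hand-rolled version is shown only to make the ranking and inclusion steps explicit.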

Table 3 Spearman rank order correlations for clusters of coding categories

2.5.2 Associations Between Conversational Characteristics and Patient’s Experience of Doctor’s Empathy

Due to the non-normality of the conversational variables, we used Spearman correlation analyses to investigate the associations between doctors’ conversational characteristics and patients’ evaluations of doctors’ empathy. After investigating the simple correlations, we ran partial Spearman correlation analyses to account for the patients’ age and gender, self-reported stress, and the overall number of words written by the doctors (to take the differing lengths of the consultations into account).
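A partial Spearman correlation can be computed by rank-transforming all variables, regressing the ranks of the two variables of interest on the covariate ranks, and Pearson-correlating the residuals. The sketch below illustrates this standard construction; the function names are ours, and statistical software implements equivalent routines:

```python
import numpy as np

def _ranks(v):
    """Ranks with tied values assigned their average rank."""
    v = np.asarray(v, dtype=float)
    order = np.argsort(v, kind="mergesort")
    r = np.empty(len(v))
    r[order] = np.arange(1, len(v) + 1)
    for u in np.unique(v):
        tied = v == u
        r[tied] = r[tied].mean()
    return r

def partial_spearman(x, y, covariates):
    """Spearman correlation of x and y after removing the linear effect of
    the covariate ranks (e.g., age, gender, stress, doctor word count)."""
    rx, ry = _ranks(x), _ranks(y)
    # Design matrix: intercept plus the rank-transformed covariates.
    design = np.column_stack([np.ones(len(rx))] + [_ranks(c) for c in covariates])
    res_x = rx - design @ np.linalg.lstsq(design, rx, rcond=None)[0]
    res_y = ry - design @ np.linalg.lstsq(design, ry, rcond=None)[0]
    return float(res_x @ res_y / np.sqrt((res_x ** 2).sum() * (res_y ** 2).sum()))
```

With an empty covariate list, the design matrix reduces to the intercept alone and the result is the plain Spearman correlation of x and y.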

3 Results

3.1 Descriptive Statistics

The mean number of individual words written during the patient-doctor interaction was 57.0 (SD = 33.9) for the doctors and 38.5 (SD = 39.4) for the patients. During the consultations, the most common topics included respiratory infections (19%), musculoskeletal problems (19%), urinary tract infections (17%), eye infections (11%), and eczema or other skin problems (8%).

Table 2 shows that Biomedical talk was the most common type of talk among both patients (67.2%) and doctors (52.2%), followed by Social talk (12.0% for patients and 16.6% for doctors). Doctors also used more Procedural statements (9.8%) and Technology related talk (9.1%) than the patients (2.5% and 1.8%, respectively).

Based on the 201 analyzed chat encounters, the following three original RIAS categories were never used: Legitimizes; Asks questions (open ended) — psychosocial/feelings; Asks questions (open ended) — lifestyle.

3.2 Interrater Reliability

As shown in Table 3, inter-rater reliability was high for Social talk (ρ = 0.950), Biomedical talk (ρ = 0.939), and Technology related talk (ρ = 0.833), and moderate for Procedural statements (ρ = 0.693) and Agreement (ρ = 0.687). Reliability was poor for Rapport building (ρ = 0.193), Facilitation (ρ = 0.008), and Psychosocial talk (ρ = 0.341).

The categories with low inter-rater reliability were also rare in the data, which may explain why they could not be reliably assessed. When the average of raters 1 and 2 was calculated, the mean number of utterances per conversation (doctors and patients combined) was 0.33 (SD = 0.56) for facilitation, 0.72 (SD = 0.84) for rapport building, and 0.58 (SD = 1.45) for psychosocial talk.

3.3 Associations Between Conversation Characteristics and Patient’s Evaluations of Doctor’s Empathy

Table 4 shows that the amount of doctors’ social talk (Spearman’s ρ = 0.23, p = 0.003) and procedural statements (Spearman’s ρ = 0.24, p = 0.002) was positively correlated with the patients’ evaluations of doctors’ empathy. The correlations remained significant after controlling for patients’ age, gender, stress levels, and the number of words written by the doctors. However, the effect sizes of the significant associations were weak (ρ < 0.30).

Table 4 Correlations between doctors’ conversational characteristics and patients’ evaluations of doctor's empathy

4 Discussion

We have shown earlier that patients’ experiences of doctors’ empathy after an online consultation are important for their subjective health status and support positive experiences of the consultation (Martikainen et al., 2022). In this study, our aim was to categorize the text-based patient-doctor discourse from the same participants using an adaptation of the RIAS method: 1) to assess the reliability of the adapted RIAS, 2) to investigate what types of utterances are present during the text-based consultations, and 3) to examine what types of utterances are related to the perception of empathy by the patients.

Regarding the inter-rater reliability of the different clusters, the highest reliability was found for Social talk, Biomedical talk, and Technology related talk. Reliability was unacceptable for the clusters of facilitation, rapport building, and psychosocial talk. These clusters were uncommon in the text-based patient-doctor communication, which may be because the conversations mostly covered narrowly focused medical topics with little emotional content and conditions not demanding face-to-face visits. These results indicate that at least these aspects of the RIAS are not easily applicable to this type of brief online clinic encounter, in which mostly emotionally neutral topics are covered.

Furthermore, neither patients nor doctors posed any open-ended questions regarding psychosocial or emotional states or lifestyle; these topics were rare in the chat, and open-ended questions in general were used less. These categories seem less relevant for a chat setting than for a face-to-face setting because of the apparent neutrality and conciseness of the written conversations at the online clinic.

We found that the conversations over the chat were generally high in instrumental content and low in affective talk. This conversational pattern is close to the “biomedical pattern” recognized by van den Brink-Muinen et al. (2002) in face-to-face consultations using the RIAS method, in which biomedical topics covered 52% of patients’ talk and 46% of the doctors’ talk. In fact, in our data the amount of biomedical talk was even higher, covering 67% of the patients’ talk and 52% of the doctors’ talk.

The high percentage of biomedical talk is probably explained by two factors. First, the issues dealt with at the digital clinic cover only cases that can be treated without a face-to-face consultation; more severe symptoms and situations, which would likely require more empathic responding, are thus typically not included in these interactions. Second, telemedicine encounters might generally favor instrumental talk over affective talk. A previous study has shown that doctors used more empathic words during face-to-face consultations than during video-based consultations (Liu et al., 2007). The lack of physical presence and nonverbal communication might discourage emotional responses (Kruger et al., 2005; Walther, 1992, 1996).

Nevertheless, patients’ experiences of empathy are meaningful also at the digital clinic (Martikainen et al., 2022). In the current study, we found that two clusters of utterances, social talk and procedural statements, were positively associated with patients’ experiences of doctors’ empathy.

The Social talk cluster included the categories of ‘Personal remarks, social conversation’ (e.g., “Good evening”, “Have a nice day”), ‘Shows approval’ (e.g., “Sounds like a good plan”), and ‘Gives compliment’ (e.g., “Thank you for…”). This type of talk can be expected to increase the sense of closeness and cohesion between the patient and the doctor. In line with this result, a previous study has shown that the use of personal words increases patients’ positive evaluations of their communication with the doctor (Sen et al., 2017).

The Procedural statements cluster included the categories of ‘Statement’ and ‘Transition words’. Statements describe the progress of the situation, for example what the doctor has just done or is about to do (“I’ll make a referral”). Transition words introduce a new topic of discussion (“Let’s begin”, “I would like to ask one more thing”). This type of talk may make it easier for the patient to follow the discussion and understand what is going on and what the next steps in their care process are. This can convey a feeling that the patient is involved in their care process and that the doctor actively keeps the patient’s perspective in mind. Aiming to understand the other’s perspective is one of the key components of empathic responding (De Waal and Preston, 2017).

To our knowledge, no previous studies on face-to-face consultations are directly comparable to ours. However, our findings are in line with what is known about patient-centered and empathy-enabling communication. Regarding our findings on the importance of social talk, being friendly and positive towards the patient has been recognized as important for the experience of empathy (Mercer et al., 2004); regarding the procedural statements, verbally reassuring the patient about an action plan and explaining things clearly are known to be important in empathic patient-centered communication (Dean and Street, 2014; Mercer et al., 2004).

These findings have clinical implications. Since medical doctors’ empathy is known to matter also online, it would be beneficial to encourage doctors to use personal talk and to give patients clear information about what is going on during the consultation and what the plan for future action is. The user interface could also be designed to clearly show patients the plan of action and the next steps in the care process. It should be noted, however, that these implications apply to cases of the kind treated at the online clinic (i.e., cases that do not require a face-to-face check-up). These included less severe medical conditions such as respiratory infections, musculoskeletal problems, and urinary tract infections; the findings cannot be generalized to more severe conditions. The findings may also have implications for developing chatbots for telemedicine, as previous research has shown that individuals can experience empathy also during automated interaction (T. Hu et al., 2018). However, more research is needed to better understand how patients would evaluate interaction with chatbots in a medical setting.

Our findings have some similarities to the categories that Pfeil and Zaphiris (2007) identified as displaying empathy in online discussions, although their study focused on a different type of online platform: support communities for the elderly. While the nature of those communications differs from patient-doctor discourse, similar features appear meaningful in both cases, such as providing light support by showing interest in the target through requests for more information or clarification, displaying encouragement without going into detail, expressing best wishes for the target or the whole community, and giving help and advice concerning the target’s situation.

4.1 Strengths, Limitations, and Future Work

This study has several strengths. It was conducted in a real-life environment with a relatively large number of participants. We used previously validated methods for assessing medical doctors’ empathy (Mercer et al., 2004) and for categorizing the patient-doctor discourse (van den Brink-Muinen et al., 2002; Ong et al., 1998). Two raters categorized all the data, and only the categories yielding acceptable reliability were used in further analyses.

The RIAS method has been used extensively in studies of face-to-face clinical encounters (e.g., Ong et al., 1998; Bensing, 1991; van den Brink-Muinen et al., 2002), and it is one of the few validated methods for analyzing medical interaction (Roter and Larson, 2002). Furthermore, it has been adapted for video-based telemedicine encounters as well (Miller and Nelson, 2005). According to Miller and Nelson (2005), two of the main weaknesses of RIAS are that it does not account well for situations with multiple participants or for non-verbal communication. In this sense, applying RIAS to text-based consultations may be more accurate, as non-verbal communication does not occur and only two participants are involved in the dialogue.

The study also has some weaknesses. First, the RIAS method applied to text-based data might not capture all the nuances of the patient-doctor discourse. Although we were able to reliably recognize several communicative categories, some of the analyzed categories showed low interrater reliability. Future studies should examine the discourse content using a qualitative approach that can better capture the subtle differences in communication that may lead to improved experiences of empathy. Second, although this coding scheme is applicable to text-based medical contexts like the one studied here, the findings may not generalize to other types of medical consultations, since this study was conducted at a digital clinic where only medical problems not requiring a face-to-face visit were treated. Analyzing consultations concerning more difficult medical problems or emotional topics might yield different results. Also, the applicability of the coding scheme to medical consultations outside Finland should be tested in future studies. Some elements of the coding scheme might not be useful in all contexts (e.g., the sick leave utterances); however, retaining these elements may improve the accuracy of the method, as they can simply be left unrated in contexts where they are not needed.

Although the partial correlations accounted for the number of words written by the doctor (indicating the length of the conversation) and for the patient’s age and gender, other confounders, such as the doctor’s specialization or whether the patient was interacting with the doctor for the first time, may have affected the results. A future study should investigate these potential confounders more closely. It is also of note that a validation study of the modified method was not carried out and that we did not assess intra-rater reliability. These issues should be addressed in future research.
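The partial-correlation approach described above can be illustrated with a short sketch. This is not the study’s analysis code; the data and variable names below are hypothetical, and the covariate set (word count, age, gender) simply mirrors the controls named in the text.

```python
import numpy as np

def partial_corr(x, y, covars):
    """Correlation between x and y after regressing out covariates.

    covars: 2D array with one column per covariate (here: doctor's
    word count, patient age, patient gender)."""
    # Design matrix with an intercept column
    Z = np.column_stack([np.ones(len(x)), covars])
    # Residualize x and y on the covariates via least squares
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    # Correlate the residuals
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical data: empathy ratings vs. frequency of procedural
# statements, controlling for word count, age, and gender.
rng = np.random.default_rng(0)
n = 200
words = rng.normal(300, 80, n)               # doctor's word count
age = rng.integers(18, 80, n).astype(float)  # patient age
gender = rng.integers(0, 2, n).astype(float) # patient gender (0/1)
procedural = 0.01 * words + rng.normal(0, 1, n)
empathy = 0.5 * procedural + 0.005 * words + rng.normal(0, 1, n)

r = partial_corr(procedural, empathy,
                 np.column_stack([words, age, gender]))
print(f"partial r = {r:.2f}")
```

Residualizing both variables on the covariates and correlating the residuals is equivalent to the standard partial correlation; additional confounders (e.g., doctor specialization) would simply be added as further columns of the covariate matrix.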

Furthermore, it should be noted that when analyzing agreement between coders, we calculated the frequency of the codes per dialogue. This may have overestimated the agreement rates, since it does not consider whether the coders agreed on when in the dialogue the codes were applied.
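The point above can be made concrete with a toy example, assuming hypothetical code labels and a six-utterance dialogue: two raters can produce identical per-dialogue frequencies while disagreeing on most individual utterances, so frequency-based agreement is an upper bound on utterance-level agreement.

```python
from collections import Counter

# Hypothetical codes two raters assigned to the same 6-utterance dialogue
rater_a = ["procedural", "social", "biomedical",
           "social", "procedural", "biomedical"]
rater_b = ["social", "procedural", "biomedical",
           "procedural", "social", "biomedical"]

# Frequency-level comparison: the per-dialogue counts are identical...
freq_match = Counter(rater_a) == Counter(rater_b)
print(freq_match)  # True

# ...yet the raters agree on only 2 of the 6 individual utterances
matches = sum(a == b for a, b in zip(rater_a, rater_b))
utterance_agreement = matches / len(rater_a)
print(round(utterance_agreement, 2))  # 0.33
```

A timing-sensitive reliability analysis would compare codes utterance by utterance (e.g., with Cohen’s kappa over aligned units) rather than comparing per-dialogue totals.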

5 Conclusions

We investigated the characteristics of text-based medical consultations and how they relate to patients’ experiences of doctors’ empathy. In general, the consultations followed a biomedical pattern including little affective talk, in line with previous work showing that empathic expressions are less common in telemedicine encounters. Crucially, the findings stress the importance of positive, personalized talk and of giving patients clear information about the progress of the consultation and the plan for future action.