Abstract
In clinical practice, decision-making is not performed by individual knowers but by an assemblage of people and instruments in which no one member has full access to every piece of evidence. This is because decision-making teams consist of members with different kinds of expertise, and because of organisational and time constraints. This raises important questions for the epistemology of medicine, which is inherently social in this kind of setting, and implies epistemic dependence on others. Trust in these contexts is a highly complex social practice, involving different relationships between trust and reasons for trust: trust based on reasons and trust not based on reasons; reasons that are easily accessible to reflection and others that are not. In this paper, we focus on what it means to have reasons to trust colleagues in an established clinical team, collectively supporting or carrying out everyday clinical decision-making. We show two important points about these reasons: firstly, they are not sought or given in advance of a situation of epistemic dependence, but are established within such situations; secondly, they are implicit in the sense of being contained or nested within other actions that are not directly about trusting another person. The processes of establishing these reasons are directly about accomplishing a task, and only indirectly about trusting someone else's expertise or competence. These processes establish a space of reasons within which having reasons for trust, or not, gains meaning and traction in these team-work settings. Based on a qualitative study of decision-making in image-assisted diagnosis and treatment of a complex disease called pulmonary hypertension (PH), we show how an intersubjective framework, or 'space of reasons', is established through team members forging together a common way of identifying and dealing with evidence. Because images are a central diagnostic tool, this also involves a common way of looking at the images, a common mode or style of perception. These frameworks are developed through many iterations of adjusting and calibrating interpretations in relation to those of others, establishing what counts as evidence, and ranking different kinds of evidence. Implicit trust is at work throughout this process. Trusting the expertise of others in clinical decision-making teams occurs while the members of the team are busy with other tasks, most importantly, building up a framework within which common modes of seeing and common ways of identifying and assessing evidence emerge. It is only in this way that trusting or mistrusting becomes meaningful in these contexts, and that a framework for epistemic dependence is established.
1 Introduction
In clinical practice, decision-making is not performed by individual knowers but by an assemblage of people and instruments in which no one member has full access to every piece of evidence. This is because decision-making teams consist of members with different kinds of expertise, and because of organisational and time constraints (Baalen et al. 2016). In this paper we will argue that implicit trust plays a pivotal role in the effective performance of such assemblages and that medical imaging technologies such as X-ray and magnetic resonance imaging (MRI) mediate the cultivation of these trust practices by enabling the establishment of a shared 'space of reasons'.
In social epistemology, one source of evidence that has been widely discussed is evidence provided by the testimony of others (e.g. see Lackey 2010). In the debate on the epistemology of testimony the central focus is how testimony can be the basis for justified belief or knowledge, a question traditionally answered by two lines of epistemological theories: reductionist and antireductionist. It is not the primary aim of this article to provide an in-depth discussion of the epistemologies of testimony, but very roughly, these two lines of thought can be characterised as follows: a reductionist account reduces trust to the reasons that support it, and asserts that it is reasonable to accept someone's testimony when there is positive evidence of the reliability of that testimony, while according to non-reductionist accounts testimony is similar to other sources of knowledge (such as perception or memory) that are basic, and justified unless there is evidence against them (Faulkner 2007; Lackey 2010; Origgi 2004). In addition to reductionist and antireductionist theories of testimony, some epistemologies of testimony claim that features of the interpersonal relationship between speaker and audience provide the epistemic value of the testimonial beliefs acquired (Kappel 2013). For example, several authors argue that speakers can be motivated to act in a trustworthy way when their audience expects them to and when betrayal will be associated with negative reactive emotions (Faulkner 2007; Pettit 1995).
The reliance on testimony within teams that collaborate to achieve some shared epistemic goal, such as multidisciplinary clinical teams making diagnosis or treatment decisions or interdisciplinary scientific collaborations, means that individual team members are epistemically dependent on each other (Andersen and Wagenknecht 2013; Hardwig 1985). Epistemic dependence, in turn, brings trust into play, since every member has gaps in their knowledge and abilities that require precisely the leaps of faith that are characteristic of trust (Lagerspetz 2015; Mollering 2001). Philosophers have argued that dealing with epistemic dependence in multidisciplinary teams requires epistemic trust (Hardwig 1991; Wagenknecht 2014). In these understandings of epistemic trust, trust involves dependence on another person for some belief or other input in one's reasoning or knowledge production, together with an aspect that provides a reason for trusting, e.g. a disposition, an expectation, an assumption, or communication in combination with confidence. In this paper we assume epistemic dependence between team members and focus on the reasons for trust within such teams.
Trust of all forms fills in a gap between evidence and expectation: it occurs when there is an expectation that one's friend will return money loaned, that the babysitter will conscientiously take care of the children left in his care, that one's doctor has correctly identified which drug to prescribe for a treatment, that one's colleague correctly reports a radiograph, or a myriad other everyday expectations. For example, Guido Mollering claims that trust involves an expectation of a favourable outcome, the interpretation of available evidence and a mental leap enabled by suspension, that is, bracketing the unknowable (Mollering 2001). This approach accounts for the move from trust based on a purely rational consideration of 'good reasons' to trust based on fragmentary information as the basis for further action. The gap is not one that can be inductively filled in. Epistemic trust relates to belief and knowledge; moral trust relates to behaviour and actions as falling under the principles of 'right' and 'wrong' (broadly); and, even more broadly, social trust is trust that others will comply with social or psychological 'rules', conventions or practices (Mollering 2001; Lewis and Weigert 1985).
Epistemic trust is characterised in many different ways, of which we cannot provide an overview here, but we will give some examples. Kappel (2013) argues that epistemic trust is a non-inferential disposition of an individual to believe what another individual asserts or transmits. According to Kappel, this can ground knowledge and justification when the disposition is discriminating and defeater-sensitive. Frost-Arnold (2014) argues that 'trust involves taking the proposition that the trusted will act as expected as a premise in one's practical reasoning' (2014: p. 1957). McCraw (2015) argues that for H to place epistemic trust in S that p requires: H to believe that p; S to communicate that p; H to depend upon S for (H's belief that) p; and H to have confidence in S with respect to p (2015: p. 421).
A question that often arises is whether epistemic and moral trust can be distinguished in practice, or how closely related they might be. For example, in his analysis of epistemic trust, Faulkner (2007) distinguishes between two forms of trust. Predictive trust requires the audience to knowingly depend on the speaker, with an expectation in the sense of predicting that something will happen (2007: p. 895), whereas affective trust also assumes dependence but implies a different type of expectation, namely in the sense of expecting something of someone, such as expecting someone to tell the truth. In the latter type of trust, when the trusted person does not show any motivation to do what is expected of them the truster is susceptible to a certain reactive attitude, which is, according to Faulkner, resentment. In response to the objection that affective trust brings in other forms of trust that are not strictly epistemic—such as reasons of friendship or moral reasons (2007: p. 895)—Faulkner argues that affective trust is not moral, strictly speaking, because 'one criminal could affectively trust another to do something immoral' (2007: p. 895). While it is not clear that an immoral situation necessarily contains no morality (such as loyalty), this mistakes the source of the objection: to describe these as moral reasons does not imply 'moral' as a positive evaluation, but rather 'moral' in the sense of 'falling in the domain of the moral' (just as moral philosophers do not always behave morally). Faulkner argues that when affective trust is epistemic, it is so by virtue of its target: that is, when there is a presumption that another is telling the truth, then affective trust is epistemic. Interestingly, he goes on to write: 'In the context of trusting a speaker for the truth, this presumption of trustworthiness provides an epistemic reason because it is the presumption that the speaker is telling the truth. What is right in this objection is that the grounding presumption of affective trust is rarely isolated. Trust is ordinarily motivated by a view of the relationship and values shared with the trusted, or what one desires to share with the trusted.' (2007: p. 895). Faulkner here goes some way to recognising that shared values play an indispensable part in affective trust, but he still wants to distinguish what makes this trust epistemic from these other values in which it is embedded. This is where we part ways. Following philosophers such as Bernard Williams, we would be more inclined to say that affective trust that takes the form of presuming a speaker is telling the truth also involves an expectation of truthfulness on the part of the speaker, and this is a moral quality (Williams 2002).
Another family of theories on epistemic trust characterises it as trust in the competence and goodwill of others, suggesting that epistemic trust also involves moral trust (Hardwig 1991; Origgi 2004). In moral philosophy, trust (and distrust) is often regarded as an emotional rather than cognitive state and includes an expectation about the future: that the vulnerable position you have put yourself in by entrusting someone with something you care about will not be exploited (Baier 1986; Jones 1996; Lahno 2001). There are different views as to how closely moral character and epistemic trust are related. For example, Origgi (2004) argues that moral character is only relevant in the assessment of goodwill and not of competence, while Adler (1994) and McCraw (2015) deny that moral character is involved in epistemic trust at all, claiming that it is possible to trust a person's report without trusting the person (Adler 1994), or that only competence and not goodwill is relevant for epistemic trust (McCraw 2015). Hardwig, instead, stresses that epistemic trust includes reliance on the person's moral character, of which one aspect is truthfulness, as well as their epistemic character, which includes competence, conscientiousness and epistemic self-assessment (Hardwig 1991, p. 700).Footnote 1 Moral aspects of trust play out in interpersonal relationships. For example, for Steven Shapin, to foreground the social aspects of science is also to foreground its moral aspects (evident throughout Shapin 1994, but especially in chapter 1); whereas for others (such as in the overview by Lewis and Weigert 1985), social trust is not necessarily moral, though it is normative. In this paper, we will tend to see the interpersonal in the broad moral terms of Shapin's perspective, for expediency in the present context, as we will not explore what kinds of norms these social norms are.
A further distinction that is important for our account of knowledge is that between implicit and explicit forms of trust. There are different ways of understanding this distinction, but for our purposes we distinguish between forms of trust that remain tacit in a situation of trust and that trusters are not aware of (and need not be), and forms of trust that are tacit but can be brought to awareness or reflected upon, for example when there has been a disruption in trust (Hertzberg 1988; Lagerspetz 1998). On the whole, we are more concerned with this latter form of implicit trust, in order to show how it operates. All forms of trust can be implicit or explicit.
In this article, we focus specifically on expertise, that is, specialised skills and knowledge not shared by all members of a team, and on how members of a team who have only partial access to evidence due to differences in expertise face trust issues when making everyday decisions. We do not take up the debate between reductionist and anti-reductionist epistemologies of trust, but instead focus on the question of what it might mean to have a reason for trust, and aim to show that although team work in medical contexts demands trust based on reasons, in order for there to be reasons that are able to justify trust in the skills and expertise of others, basic forms of trust also need to be in place. The distinctions that are most important for our account are, firstly, that between implicit and explicit trust, or as we prefer to put it, between trust with and without awareness; and secondly, that between epistemic and moral or interpersonal trust. On this front we argue that while these two forms of trust have different targets, they support each other in actual situations.
The question we have asked of trust in this article cuts across epistemic and moral aspects of trust. Rather than focusing on the rationality of clinical decision-making in team contexts, we focus on having reasons (Toulmin 2001), which holds for both aspects of trust. We accept that trust in contexts of clinical decision-making involves reasons of different kinds, and these may be epistemic (geared towards knowledge) or moral. These epistemic and moral aspects of trust are interdependent, and therefore we think that trust in these teams cannot be reduced to either moral or epistemic aspects; instead, any analysis of trust in team decision-making should not exclude moral aspects of trust. Since the members of clinical teams have a high level of epistemic dependence on one another, trust plays a crucial role in the team being a competent team as well as one made up of competent members, that is, a team able to carry out tasks relating to the diagnosis and treatment of patients. Hardwig shifts the normative question about trust from individuals to teams: 'Knowing, then, is often not a privileged psychological state. If it is a privileged state at all, it is a privileged social state. So, we need an epistemological analysis of the social structure that makes the members of some teams knowers while the members of others are not.' (Hardwig 1991: p. 697) We agree with Andersen and Wagenknecht that trust in these contexts is not and ought not to be blind, but is and ought to be reflective: there ought to be reasons supporting trust. Our question is this: What is it to have reasons to trust someone's expertise, skill or proficiency at a task? Here we argue that there are no clearly delineated reasons in advance of trusting one's colleagues; reasons and trust are instead interwoven.
In this paper, we will elaborate on the trust relations between members of established teams with a history of working together, making collaborative decisions in day-to-day clinical practice, but whose differences in expertise mean they are epistemically interdependent. The aim of the paper is to show the crucial role of implicit trust in a complex epistemic team such as a clinical decision-making team, and to shed light on how implicit trust operates: its 'mechanisms'. The overall framework of our account is that there are two forms of implicit trust at play in teams: moral trust (that is, trust in the goodwill of others to participate in team work and collaboration, and/or in their professional ethosFootnote 2), and trust that is epistemic (that is, trust in the competence, expertise and skills of others, in their epistemic dependability). Within the broad category of implicit trust (either epistemic or moral), we focus on two ways for trust to be implicit: Trust A, implicit in virtue of being without a prior act of reflecting on reasons, or of bringing such reasons as there may be into awareness; and Trust B, implicit in virtue of there being reflection on reasons, but where the reasons are directly about a task at hand, and only indirectly about the epistemic dependability of others. Our paper also focuses on a further aspect of the reasons in Trust B: the lack of reflection is not because the reasons are accepted without question, but rather because the reasons do not precede the team, and are therefore absent until such time as they are established through ongoing interactions and 'doing together' in the team. Our fieldwork shows that the awareness and reflection of team members is directly oriented towards tasks undertaken together; out of this process, reasons that can indirectly support (or not) trusting fellow team members emerge and are established.
We stress that our fieldwork is no more than a snapshot of an established team with a long history of collaborative decision-making, and this history no doubt plays a role in laying down bases for ongoing trust practices, which obviously extend beyond the bounds of this empirical study. However, it is beyond the scope of any empirical study to establish some form of originary trust, or an ab initio form of trust, on which other trust practices would be founded, since every empirical study would find that trust builds on something else. (The same cannot be said of mistrust consequent to a breach of trust, which can be traced to an event or action, but which also shows up what trust was depending upon if this was not explicit.)
We argue that in the process of establishing criteria for jointly carrying out tasks, a shared framework or 'space of reasons' is established (Carusi 2009) within which having reasons for trusting each other's skills, expertise or competence gains traction and meaning.Footnote 3 Crucially, this is a process in time: team members come into already established teams that already have shared practices; they are co-opted or brought into teams by already trusted team members. There is a great deal of implicit dependence on what is already in place, as teams collaborate on everyday decision-making, and on developing and embedding new technologies in their practice. These frameworks are developed through many iterations of adjusting and calibrating interpretations in relation to those of others, establishing what counts as evidence for or against a claim, and ranking different kinds of evidence: to have a similar orientation to evidence in a clinical decision-making space is to be in a shared 'space of reasons'. Taking the use of imaging technologies for diagnosis and treatment of a complex disease as our central example, we argue that establishing a common way of seeing or mode of perception is essential to establishing what can count as a reason in the inter-subjective framework, as argued by Carusi (2008), developing the notion of the sensus communis from Kantian aesthetics. Carusi's suggestion that visualisation software plays an important role in forming this common way of seeing is borne out by Kathrin Friedrich's analysis of computed tomography images (Friedrich 2010). Friedrich shows that software plays an important mediating role in setting up shared 'sight styles' across a team (Friedrich 2010), drawing upon Ludwik Fleck's notions of 'thought collectives' and 'thought styles' (Fleck 1979, 1986e [1947]). While Friedrich focuses on the software interface of images (the Graphical User Interface or GUI), we focus on the interactions around image processing that lead to images that have the features required by clinicians.
In Sect. 2 of this paper we introduce the domain of our fieldwork, a clinical team working on diagnosis and treatment of a complex disease called pulmonary hypertension (PH). In this domain the use of imaging is pervasive, and it therefore plays an important role in our analysis. In Sect. 3 we introduce our framework of implicit trust through our analysis of interdependence between team members with different expertises. Through an iterative process of repeated interactions, team members converge on orientations towards evidence, establishing and cultivating a shared space of reasons that provides a framework in which interpretations, reasons and justifications can be shared. In Sect. 4 we argue that imaging mediates the establishment of a space of reasons by developing a shared way of looking and an agreement about what is worth looking at and what claims looking can support. We present an example of image processing of a specific MRI modality to illustrate how this works in practice. We conclude with an overview of the 'mechanism' of implicit trust that we derive from our analysis, and a discussion of issues of epistemological responsibility and consensus that arise from our account.
2 Pulmonary hypertension
Our account is based on a qualitative study of decision-making in image-assisted diagnosis and treatment of pulmonary hypertension (PH), within a medical expert team working in one of eight expert centres in the UK and Ireland that diagnose and treat PH. The team consisted principally of pulmonary clinicians, a cardiologist, a nurse consultant, specialist pharmacists, radiologists, junior doctors, specialist nurses and a ward nursing team, and conferred weekly in two meetings: a ward multidisciplinary team (MDT) meeting discussing the management of the current ward patients, and a radiology MDT meeting where current inpatients, patients in short-stay admissions for diagnostic testing, and outpatients may be discussed.Footnote 4 The clinicians that we observed had been working together (in slowly changing membership compositions) for years. As each new member comes into the team, he or she is embedded into the already existent team with its well-established practices and forms of interaction. Junior members are trained by senior members; new senior colleagues are recruited by the existing team members; usually wider networks play a role here, such as acquaintance with supervisors or other colleagues of new recruits.
PH is defined by elevation of blood pressure in the pulmonary artery, the artery through which blood is pumped from the heart to the lungs, as measured by an invasive test called right heart catheterisation, together with increased resistance in the pulmonary vasculature. This leads to enlargement and decreased function of the right heart ventricle, which causes breathlessness and limitation of exercise capacity and is ultimately life-threatening. PH is further classified into five different categories (with a number of subdivisions),Footnote 5 for which different treatments are required. A careful clinical history and a range of investigations are required to diagnose and categorise PH. The treatment regime for the patient is based upon these tests and classifications, and can range from drug treatments to heart and/or lung transplantation.
In the PH clinic, team members have to combine evidence from heterogeneous sources, such as the patient's history, clinical examination, lab tests, images and measurements, basic textbook knowledge and results from clinical research, to form a cohesive and consistent profile of each individual patient based on the available evidence. Part of this process occurs within interactions between different experts, who from their different expertises provide a specific outlook on evidence: for example, radiologists interpret images, whereas clinicians provide clinical information from the patient's history and physical examination, and nurses provide it from clinical interactions with patients and their families on the ward. Fitting the different interpretations together with other evidence requires the interpretations of other experts. In these respects, the process is distributed over different instruments and people, working in different and overlapping contexts, at different points of the patient's encounter with the clinic, generating and interpreting evidence (Baalen et al. 2016).
Even though there are many different sources of information relating to any particular patient, images play a dominant role in diagnosis and treatment in the team that we studied. A feature of this team is the close inter-relationship between image engineers, radiographers, radiologists and consultant clinicians.Footnote 6 This means that expertises around imaging (producing images, interpreting them and using them in clinical contexts) are particularly pronounced, giving us the opportunity to see how they operate, and how expertise and technologies interplay.
3 Implicit trust in a multidisciplinary team
Clinical decision-making is distributed, combining expertises from a range of professionals who contribute specific aspects to the framing of a patient, stemming from different knowledge, experience and skills as well as from different roles in practice. No individual member of the team has full access to every piece of evidence, for two reasons: firstly, the evidence is gathered by different people (clinical history and examination by the consultant clinician(s); the echocardiograms and ECGs by the cardiologist(s); the images by the radiographer(s); the reports on the images by the radiologist(s), and so on). This division of labour regarding the gathering of evidence is due partially to pragmatic and organisational constraints. Secondly, the interpretation of evidence, such as ECG measurements or imaging, and the action to be taken on its basis, such as prescribing a specific drug or recommending an organ transplant, requires expertise that is not available to all team members. As a consequence, every individual team member has direct access to only a fraction of the evidence, and team members are epistemically dependent on each other for the gathering and interpretation of evidence necessary to make clinical decisions (Andersen and Wagenknecht 2013; Hardwig 1985).
This relationship of epistemic dependence was evident throughout our fieldwork. It structurally organises the working week of the PH team, with weekly MDT meetings of two types: the ward MDT meeting and the radiology MDT meeting, held on one morning each week, with the second immediately following the first. The ward MDT meeting is led by one of the consultant clinicians (on a rotating basis), with a mix of other members of the team caring for patients in the ward, such as junior doctors, pharmacists, nurses and others. The cases of the week are discussed, and immediately following the ward MDT meeting, one consultant clinician does a ward round, and at least one other consultant clinician (but usually more) attends the radiology MDT meeting, where selected cases from the ward MDT meeting are discussed along with others that are already on file for the radiology MDT meeting. The relationship between radiologists and consultant clinicians is a good example of a division of expertise that brings trust into play. There is general acknowledgement that the radiologists, rather than the consultant clinicians, are the experts in interpreting the images, so much so that the consultants do not give a final judgement on the images to patients until such time as they have had a report from the radiologists, as is seen in quote 1 from a consultant pulmonary clinician. However, this higher level of expertise is accorded to the cardiothoracic radiologists who specialise in pulmonary hypertension and who have been working with the team over many years, and not to general radiologists or those not in the team (for example, see quote 2 from a consultant pulmonary clinician). The consultant clinicians describe their own expertise as having been honed by working with the radiologists (for example, see quote 3 from a consultant pulmonary clinician). However, the interactions among these different expertise groups, which have in some cases been developing over several years, have also created a situation where the expertise of each is developed in dialogue with the other: quote 4 from a cardiothoracic radiologist and quote 5 from a consultant pulmonary clinician illustrate this.
In these examples, we have focused on the interactions between radiologists and consultants, both groupings being clinicians but with different expertises. These are not the only forms of expertise in the team that contribute to the diagnosis, treatment and care of patients, but we focus on them for the moment: firstly because radiologist-consultant interactions are extremely important in a domain where imaging plays a pivotal role, and secondly because they are fairly typical of what we found among other expertises in the group. It is important to note that while all of the examples we have discussed explicitly talk about trust, this is most likely because our questioning purposefully framed the issues in terms of trust. It is notoriously difficult to empirically grasp when people take themselves to be trusting within actual situations, but we wanted to know how our research participants would respond once trust was 'on the table', so to speak. That is, we wanted to know whether they would recognise themselves as trusting, how they would discuss trust, and what kinds of things came up around trust. In fact, all the research participants easily recognised the situations they worked in as situations in which they trusted, and none of the participants repudiated the very category. More interesting were the kinds of reasons that they gave: for example, they readily appealed to expertise (for example quote 1), but they also pointed to aspects of the tasks they were carrying out: discrepancies between the radiology report and 'what I'm seeing' (quote 2), while what they are seeing is itself very much established in their constant interactions with each other. Quote 6 shows how quickly talk of trust becomes focused on the fine grain of evidence gathering: the discrepancy between the way an MRI 'looks' and the numbers ('the ejection fraction [given quantitatively] is very low ...'). Quote 6 goes on to show that even when there is reason to be sceptical about something (the numbers), the scepticism is not directly about others' skills or competence but about technical challenges and difficulties (more about this in the next section). This was typical of our conversations with our research participants, where even in the abstract, without naming any particular person, questions about trust were answered very quickly in terms of forms of evidence and how one kind is weighed up against another. As we shall argue below, this suggests that team members focus directly on the task at hand and only indirectly on reasons for trusting the expertise of others.
What we draw from these examples is evidence of the extent of interdependence between forms of expertise, and the extent to which trust is based upon familiarity and experience over long stretches of time (see also Jirotka et al. 2005). However, it is also striking that this familiarity and experience with each other goes further than providing a cumulative stock of evidence for trusting the expertise of the other. Rather, the consultant clinicians become better at reading the images as they learn to see how the radiologists would report them, and the radiologists refine their reports on the basis of the queries that the consultant clinicians direct to them. Thus, the consultant clinicians' confidence in the radiologists is not based just on more evidence of their trustworthiness. Rather, it is based on a mutual adaptation, one to the other, whereby the consultant clinicians come to see as the radiologists do (that is, to pick out the same features as they would for a report), and the radiologists bring to the foreground, among the indefinitely many features of an image, those features that are important in the consultant clinicians' world. In the next section, we will give another example of this mutual adaptation, but here we pause in order to consider initial implications for trust.
Among the different members of the team more or less directly involved in diagnosis and treatment decisions, none explicitly mentioned moral trust; however, one participant among the image engineers raised it directly, without our asking about or mentioning trust ourselves (see quotes 7 and 8). This participant points to moral and interpersonal aspects of trust, and also gives reasons. The team members referred to in these quotes overlap with those we observed in the clinical teams, and our observations bear out a strong norm of conscientiousness and a shared norm of putting patients first.
The first implication is that the forms of trust described in the interviews and observed in our fieldwork are implicit, in these senses: trust in the skills and expertise of others (for example, as in quotes 1 and 4) implicitly assumes that those others are not withholding their competence, and that they do their work conscientiously, with goodwill and according to shared norms. This is moral/interpersonal trust, generally of the kind we have labelled Trust A (although, as we saw in quote 7, it can also be brought into awareness). There is also implicit trust in the form of Trust B: that is, trust that is not directly about team members' competence or reliability. The ease with which our questions about trust were answered suggests that there is a form of trust that is tacit though fairly easily available for reflection. In day-to-day clinical practice, these experts are not explicitly or directly concerned with who they trust and why. When they do reflect on trust, what they tend to talk about is how they assess evidence, not people. The reasons for trusting are not consciously or explicitly reflected upon when collaborating. Instead, team members attend to the task at hand and reflect on how it is executed and what the standards are for good completion of the task. Several of the quotes also mention the interactions between people with different expertise, through which the ways they accomplish their tasks are calibrated to each other. This is an important aspect of the intersubjective framework for trust, or the 'space of reasons'. In the next section, we will expand on how this is established.
That trust practices are implicit does not mean that trust is blind. Most authors agree that it takes time and experience for a person to become acquainted with another individual, and that trust is something that builds up gradually during that time, in a process that can be more reflective than implicit (Baier 1986; Jones 1996; Origgi 2004; Pettit 1995). Susann Wagenknecht (2014) has studied the strategies of scientists in interdisciplinary research teams for handling epistemic dependence in practices of incomplete trust. For example, through dialoguing practices and explanatory responsiveness (being able to address the listener's epistemic needs) researchers can probe the competence of others and, by asking them to explain something, can assess whether the explanation makes sense on a 'meta-disciplinary' level (i.e. the logic of the other's reasoning). In addition, researchers can assess impersonal characteristics, such as track record and training, to evaluate the other's trustworthiness (Wagenknecht 2014). Hence, trust is placed in the other on the basis of prior experiences and critical reflection on those experiences and the other's competence, as the consultant clinician in quote 6 also contends.
While it is important to note that implicit trust is not blind trust, it is just as important to note that its sight is not simply given: it has to learn to see. That is, it has to learn what are possible reasons for trusting the expertise of others, since these are, by definition, not within the purview of everyone in the team. We claim that what counts as a reason for trust in the expertise and competence of others is established indirectly, while attending to other things besides trust. Therefore, it is worth looking at what members of the teams with different expertises do attend to. In the MDT meetings that we observed, and in several of the interviews, a main preoccupation was frequently whether different views converge. This convergence arises in different ways. We will give a concrete example of such convergence regarding imaging in the next section (Sect. 4), but first we will lay out how convergence operates in weekly MDT meetings.
As we have noted, in our PH case study the clinicians and radiologists conferred in weekly radiology MDT meetings. In these meetings, the consultant generally opens by introducing the patient, giving a summary of the previous course of the disease, the clinical signs and symptoms, other test results and the specific queries that they have. Then the radiologist presents the images, shows specific findings and compares them with images from different modalities and with earlier images if available, and sometimes asks the clinician for clarification about the patient's clinical history to refine their evaluation. After what can be quite protracted discussion and going back and forth between images, and sometimes metrics, the consultant concludes the interaction by making a note of the shared conclusion and the follow-up plan for that patient. This is spoken out loud while the consultant is writing it down in the patient's clinical record. This allows for a shared 'ownership' of the decision.
In these types of repeated interactions, medical teams cultivate a collection of stable, agreed-upon orientations towards evidence and knowledge that builds up an inter-subjective framework within which claims and interpretations can be justified, and decisions can be arrived at and shared by others. Face-to-face meetings, such as these MDT meetings, play an important role in the establishment of these shared frameworks. They provide a place where orientations towards evidence are coordinated and calibrated, where interpretations can be shared and explained and where experts learn from each other, developing a shared basis for diagnosis and treatment of the population of patients that they collaboratively manage. In this process, pulmonary consultants and radiologists learn from each other and come to interpret evidence such as imaging and the clinical background in a similar way to their colleagues with a different background, as the pulmonary consultants express in quotes 3 and 5 and a radiologist in quote 4.
An important aspect of this process is the validation of measurements, imaging and interpretations for the local situation (the clinical team as well as the patient population, see quote 9 from a consultant pulmonary clinician) and the calibration of the different types of evidence provided and obtained by different experts. For example, after voicing the team's decision, the consultant clinician also mentions the right heart catheter measurements, which allows for a final integration of all evidence and helps radiologists to get a feel for the correlation between the imaging findings and the right heart catheter findings, which are considered to be the gold standard measurement for diagnosing PH.
In short, by correlating and calibrating findings from different sources as well as interpretations by different experts in repeated interactions, such as MDT meetings, the medical team develops an inter-subjective understanding of what counts as evidence, and of how different pieces of evidence should be handled and fitted together to produce a profile of a patient and make diagnosis and treatment decisions. Instead of reflecting on who they trust and for what reasons, what the team devotes a great deal of energy to is building an inter-subjective framework in which what counts as a 'good report' or a 'good question' or a 'good decision' is given shape. It is only within that inter-subjective framework that what counts as a reason for trust can meaningfully be reflected upon, when it is actually reflected upon. This inter-subjective framework, in which the different experts come to have shared orientations, a shared way of seeing (in the perceptual and conceptual senses), provides something that team members can look at and consider when they are considering whether they trust the outputs of the expertise of others. We call this an inter-subjective space of reasons, where members share similar orientations to what counts as a reason or justification for interpretations, judgements or decisions, and for the trustworthiness of the experts with whom they are in a relationship of epistemic dependence.
So far we have considered the relationship between radiologists and consultants, which by its nature is geared towards the images used for diagnosis and treatment decisions. With our next example, we dig further down into the relationships around an imaging-intensive team, which not only uses images but also researches and develops new imaging tools and techniques. Therefore, apart from the clinical team with its relationships extending towards staff and patients, there are also, on the other side, its relationships extending to the radiography department of the hospital, and to the imaging engineers, computer scientists and others.
4 Imaging mediates trust practices in clinical decision-making: developing common ways of seeing
One way of establishing an inter-subjective space of reasons—what counts as something to reflect on and what might be a reason to explicitly trust or mistrust—is through the establishment of what is worth looking at.
The PH team in our case study makes use of an MRI scan called cardiac magnetic resonance imaging (CMRI) to assess the anatomy and function of the cardiac chambers. This specific MRI sequence synchronises MRI data acquisition with the person's heart rhythm, resulting in a reconstruction of the cardiac cycle that resembles a beating heart. These images are used to assess the function of the right heart for prognosis and disease severity, and of the left heart to exclude left heart disease, by visual assessment of chamber anatomy, contraction and potentially leaking heart valves.
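To make the idea of cardiac synchronisation concrete, the following is a minimal, purely illustrative sketch of retrospective ECG-gated binning, in which acquired readouts are sorted into cardiac phases according to their position within the heartbeat. The function name, parameters and the choice of twenty phases are our own assumptions for illustration; the sketch does not describe the team's actual scanner or reconstruction software.

```python
# Illustrative sketch (not the team's or any vendor's reconstruction pipeline):
# retrospectively "gating" acquired MRI readouts by cardiac phase, so that each
# bin can be reconstructed into one frame of a cine loop resembling a beating heart.

from typing import Dict, List

def bin_readouts_by_cardiac_phase(
    readout_times: List[float],   # acquisition time of each k-space readout (s)
    r_wave_times: List[float],    # ECG R-wave timestamps recorded during the scan (s)
    n_phases: int = 20,           # number of frames in the reconstructed cardiac cycle
) -> Dict[int, List[int]]:
    """Assign each readout to a cardiac phase bin based on its fractional
    position within the R-R interval in which it was acquired."""
    bins: Dict[int, List[int]] = {p: [] for p in range(n_phases)}
    for idx, t in enumerate(readout_times):
        # find the R-R interval containing this readout
        previous = [r for r in r_wave_times if r <= t]
        following = [r for r in r_wave_times if r > t]
        if not previous or not following:
            continue  # discard readouts that fall outside a complete heartbeat
        r_start, r_end = previous[-1], following[0]
        fraction = (t - r_start) / (r_end - r_start)   # 0.0 .. 1.0 through the beat
        phase = min(int(fraction * n_phases), n_phases - 1)
        bins[phase].append(idx)
    return bins

# Each bin would then be reconstructed into one image; playing the phases in
# order yields the cine reconstruction of the cardiac cycle described above.
```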
CMRI is a recent development in PH diagnosis, and within the PH team the clinicians, radiologists and radiographers had to establish together what is worth looking at in these images. For example, they have to agree that right ventricle function and morphology are relevant signs of disease progression, that CMRI can be used to assess right ventricle function and morphology and, more basically, which part of the image refers to the right heart. Through a process of tinkering with the image acquisition protocol and processing, image interpretation, calibration with clinical outcome by radiographers, radiologists and clinicians, and an ongoing dialogue about which images are most useful, the attention of the whole team is directed at those aspects of the images that they come to agree are worth looking at. Going deeper than the corresponding interpretations and orientations to reasons that we discussed in the previous section, this process around the development of new imaging protocols allows for an alignment of vision that underlies those interpretations and reasons. Alignment at this level produces shared modes of looking at the images, or common modes of perception, the 'sight styles' that we have already mentioned in Sect. 1. When these are brought into play, imaging shifts from its role as being what expertise is about—discussed in Sect. 3—to playing an important role in bringing about alignment at this rather deep level, which makes possible the 'space of reasons' within which trust practices can take hold.
To illustrate this point, we discuss an example in more detail: the drawing of the contour of the right heart ventricle (see Fig. 1). This example is drawn from another central relationship for this team of highly image-oriented medical practitioners, that is, the relationship between radiologists and radiographers, who produce the images and process them in the first instance.
In addition to visual assessment, the CMRI images are also processed to quantify predetermined parameters such as the right ventricular ejection fraction (RVEF), a measure held to be clinically important because it correlates with disease severity and prognosis. To measure RVEF, the volume of the right ventricle is measured at two moments in the cardiac cycle: immediately before contraction (the end-diastolic volume) and immediately after contraction (the end-systolic volume), by drawing the contour of the right ventricle in all slices covering the right ventricle volume at those two points in the cardiac cycle. A radiographer draws the right ventricle contours, after which a software program calculates the ejection fraction and other metrics characterising right ventricle function; these are summarised in a report containing numbers and diagrams, which the radiologists receive in PACS.
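For readers unfamiliar with the metric, the standard textbook definition of an ejection fraction, together with the summation-of-discs approximation commonly used to obtain ventricular volumes from contoured slices, can be written as follows (a generic formulation, not a description of the team's specific software):

```latex
\[
\mathrm{RVEF} \;=\; \frac{V_{\mathrm{ED}} - V_{\mathrm{ES}}}{V_{\mathrm{ED}}} \times 100\%,
\qquad
V \;\approx\; \sum_{i=1}^{N} A_i \,(d + g),
\]
```

where V_ED and V_ES are the end-diastolic and end-systolic volumes, A_i is the contoured right-ventricular area in slice i of N, d is the slice thickness and g the inter-slice gap.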
The production of this contour of the right ventricle and its associated metrics is another illustration of distributed expertise, since the radiologists have to trust the radiographers' skill in producing and processing the raw data of the images. Team members have to agree that defining the end-systolic and end-diastolic volumes is a good way to determine the RV ejection fraction, and that this metric is clinically relevant. Radiographers draw the contour of the right ventricle, defining which part of the image refers to the ventricle wall. In this way, they are making a knowledge claim, which radiologists need to be able to trust. In iterative interactions with radiologists and engineers, the 'right' way of drawing the contour, such that the appropriate metrics can be derived from it, is determined. Together they bring this contour to the fore, and define how it needs to be drawn. Because several people have been involved in the development of this metric, there can be discussions regarding how an image looks to the eye as against what the metrics say. These discussions around weighing up qualitative and quantitative features of images occur frequently, quote 6 being but one example, which acknowledges how the 'numbers can be wrong' and can be inconsistent with how something 'looks'. That is, these specific features of the images are what are focused upon in the production and interpretation of the images, and all of this produces a discourse of reasons around the images in terms of which the expertise of people, but also the images themselves, are taken to be trustworthy or not. All of this has been established through a long history of interacting and collaborating, aligning ways of looking at CMRI images in the identification of the right ventricle wall and calibrating interpretations of the metrics with clinical outcomes. Indeed, as one of our participants noted, skills for older technologies are displaced in the development of skills needed for newer technologies, as they are no longer practised (see quote 10 from a consultant cardiothoracic radiologist).
At this deep level of engendering ways of looking, medical images play an important role in assemblages of distributed knowing. Images fulfil this role in two ways. Firstly, as epistemic objects that can be distributed among all members of a team, as well as interpreted and discussed, they facilitate communication and the sharing of information, and so mediate the establishment of a space of reasons for trust and distrust. Images allow different experts to converge on their interpretation of a case by being an object to refer to, and to which their own and others' interpretations, as well as other evidence from the case, can be related. As shared objects that anchor shared interpretations, images are often regarded as more objective than some other types of evidence, such as clinical history or physical examinations, as pointed out by the consultant clinician in quote 11.Footnote 7
Secondly, the sharing of images and communicating through them produces a shared vision through which the members of the assemblage come to see and perceive in a common way. According to Goodwin, 'the ability to see relevant entities is not lodged in the individual mind, but instead within a community of competent practitioners.' (Goodwin 1994: p. 626) In medical teams, the ability to see relevant entities is developed in interactions between clinicians, who know and can relate to the clinical case, radiologists, who interpret the images in terms of the disease as well as technological limitations, and radiographers, who are aware of the actual production of images. In these interactions they establish what can be seen in the images and how this relates to clinical outcome. By developing shared vision in this way, the medical team builds an even stronger basis for sharing interpretations, and for building up a shared orientation towards what counts as reasons for accepting the knowledge claims of others.
A shared way of seeing co-evolves with the development of new imaging modalities. In our field study, the physicists, radiographers and radiologists involved in PH imaging have a long history of collaboration, developing methods to analyse and evaluate CMR images and metrics together. The technologies, e.g. the scanner, the sequences facilitating the acquisition of CMR images, the image processing algorithms and the software tools that enable drawing the right heart ventricle contours, calculating the ejection fraction and sharing the results, play a crucial and active role in these processes. The technologies, the users, the ways of looking, and the possible knowledge claims co-evolve with each other. MRI, by producing a specific type of contrast between soft tissues, drives a specific kind of visualisation of the heart muscle, and the method of ECG-gating allows visualisation of the movements of the heart during a complete heart cycle, enabling CMRI. Clinicians and radiologists involved in diagnosis and treatment of PH, from being familiar with heart anatomy and physiology, recognise the relevant structures (i.e. septum, ventricles and valves), and, from being familiar with what type of information is required in clinical practice, recognise which relevant questions might possibly be answered by these types of imaging. However, they need to learn how to recognise deficiencies and how to evaluate function by relating images to clinical outcomes. Together with ongoing and rigorous discussion, these interactions between radiographers, radiologists, clinicians and the imaging technologies push the development and tweaking of acquisition sequences to improve image contrast for these specific practices, and of image processing and analysis algorithms to produce relevant metrics such as the right heart ventricle ejection fraction. Such developments further reinforce the trust framework within which medical decision-making operates.
5 Discussion
Social epistemology addresses knowing in terms of social interactions, such as team collaborations, or social environments, such as institutions or scientific communities. Focusing on social aspects of knowing implies that attention must be paid to epistemic dependence and trust. In this paper we have examined how trust operates in an image-intensive clinical setting. Our goal was to uncover the 'mechanisms' of implicit trust in a complex epistemic system in which team members with different roles and expertises have to collaborate to formulate shared diagnosis and treatment decisions, and in which imaging technologies play an important role. Elsewhere, we have studied the socio-technological epistemology of clinical decision-making and image interpretation as a social and collective activity (Baalen et al. 2016). By studying how implicit trust operates in a clinical team, we have elaborated further on these practices.
Figure 2 illustrates the different forms of trust that we take to be at play in clinical decision-making teams. Trust A is trust that is given without awareness or reflection on reasons (although there may be reasons), whereas in Trust B there is reflection, but this is directly on accomplishing a joint task; in the process of accomplishing the joint task, reasons for assessing the task indirectly also become reasons for trusting (or not) team members. Without these reasons that are jointly ‘owned’ it is difficult to see what trusting the expertise of another (whose expertise one precisely does not share) would actually consist in.
Through our fieldwork and conceptual analysis we have identified several characteristics of trust practices: trust practices (a) are indirect, in that they do not directly evaluate the level or kind of expertise of the trustee; (b) are implicit, in that they do not involve reflection on reasons for trust in advance (indeed, there may well not be reasons in advance); (c) are interactional and highly contextualised in joint tasks; (d) require labour from both the truster and the trustee, and a reversibility between them (the truster is also the trustee and vice versa); (e) are iterative, developing and evolving over time; and (f) operate by building up a common stock of appropriate reasons for accepting evidence or information or not, which truster and trustee both share and both have access to. The common stock of appropriate reasons for accepting evidence or information is also a common style or way of assessing evidence or information, and is built up through forging common modes of perception, 'sight styles', in which technologies play a mediating role. Through these means the expertises of others become less opaque, more accessible and more comprehensible for collaborators with another specialty, and can be brought to awareness and reflected upon if need be, in ways that are meaningful across the different expertises. Our example of the measurement of an MRI metric shows that there is also an overlapping of reasons for trusting different things or different entities: reasons for trusting an image or a number overlap with trusting the expertise of a person who is involved with producing or interpreting that image or number. We would go further and claim that trusting the expertise of someone else on whom one is epistemically dependent is rarely directly aimed at that person's expertise, but is instead nested into ways of trusting evidence and information that are engendered through iterated interactions, which bring collaborators into a common space of reasons. Quote 8 expresses this well: "I mean we founded our collaboration on a need to get something going, and that's always a good starting point. You work on something and you get it working and you see that it's actually been useful". In this quote, we see the social bond of 'getting something going', and in the other quote by the same participant, quote 7, we see the dependence of epistemic trust on other forms of trust, moral and interpersonal, attested by a positive evaluation of the other's character ("he's a pragmatist, he's a people's person"), a common commitment to shared norms and values ("his main priority is patient care", "non-political") and pragmatic ends ("a need to get something going").
We turn now to a brief consideration of two further challenges that arise from our account of implicit trust in multi-disciplinary clinical settings. Even though trust is implicit, reasons for trusting can come to be attended to and reflected upon; but these reasons are not pre-given. It is only through participating in the creation of an inter-subjectively shared framework for assessing evidence and information that reasons for trusting or distrusting emerge. In other words, we claim that trust and reasons for trust are interwoven and develop in the same situation of epistemic dependence in collaboratively completing a task, while team members attend to the task at hand and to criteria for the execution of that task. As we have argued, establishing and cultivating a shared space of reasons requires agreement about what is worth looking at and about what specific pieces of evidence say about specific patients, and in the clinical team we observed this agreement was usually reached through deliberation and consensus. However, as some of the participants in our field study and several authors (e.g. see Esser 1998; Solomon 2006; Urfalino 2014) have pointed out, this is not without its problems.
Groupthink, group dynamics and social pressure may undermine the quality of a group decision reached through deliberation and consensus, for several reasons. Consensus does not necessarily reflect the opinions or input of all the members of a group equally, because rather than a summation of all opinions, consensus is the conclusion to which no one in the group objects (Urfalino 2014). In addition, usually one team member has the responsibility of summarising all inputs into a group decision, which involves weighting those inputs. In that process, the input of people with more authority or a stronger voice can be given more weight and hence have more influence on the resulting group decision. Moreover, since consensus is the desired outcome of these meetings, those holding minority opinions may feel pressured not to voice them, so that consensus may discourage disagreement (Solomon 2006). Dissenting individuals may feel pressured to change their minds, or to withhold knowledge of contrary evidence; deliberation and consensus may therefore not be the best way to collect and fit together all available knowledge and opinions. We have not assessed whether groupthink and other social dynamics associated with consensus affected the decision-making processes in our fieldwork.
However, this points to a deeper issue, one with potentially serious implications. It is difficult to see from which perspective or foothold an evaluation of the practices and inter-subjective framework is possible. For example, the development and implementation of a new technology or technique requires a great investment of time and practice to develop the shared way of seeing that is necessary for it to operate well in a context geared towards providing reasons for clinical decision-making. As we have noted (in Sect. 4), as the skills needed for the new technique are developed, other techniques can become obsolete because the skills for using them are no longer practiced. In this case, at least one of the bases for external comparison and evaluation of the practices and inter-subjective framework in operation in a team can be eroded. This can lead to skewed practices that are of no benefit, or are even harmful, to health (see note 8). How this is balanced against other bases for comparison and evaluation is something we have not broached in this article. We will try, however, to give an outline of how both of these challenges, that of groupthink and group dynamics and that of independent evaluation, might be addressed.
To return to a distinction mentioned in the introduction, the account that we are offering falls into neither reductionist nor anti-reductionist epistemologies of trust. It is a holist account of trust, one that stresses the interdependencies between different kinds of trust. It might be thought of as a circular mode of trust, and this circularity is the underlying worry of both of the challenges we have mentioned. There is certainly bootstrapping in the account of trust we are advocating, but it is not necessarily vicious or undermining of trust. Moral or interpersonal trust in the character and professionalism of others does a lot to lay the ground for epistemic trust in the expertise and skills of others in specific instances, when the diagnosis and treatment of specific patients is at issue; implicit epistemic trust might bootstrap on moral or interpersonal trust sufficiently to ‘get things going’. The intersubjective space of reasons could well give rise to a form of ‘groupthink’ and enclosed local practices: this is a real and not only hypothetical possibility. However, to draw from this the normative lesson that this form of trust is weak or unreliable would be incorrect. To insist on ‘independent’ reasons to remedy this potential ‘groupthink’ is not helpful, because there may not be ways of actually applying these independent reasons meaningfully within the team. Meaningful reasons are criteria that are shared by teams.
The remedy instead is to look at the broader settings in which clinical teams are embedded. Clinical teams are highly porous entities, and not at all self-enclosed. They can subsist for quite some time with changing members; new members bring in their background education and experience; there are often visitors and others who attend MDTs; the teams operate in hospitals with organisational structures that link them to other hospitals and institutions, with mechanisms of oversight; and many members of the team also undertake research and publish, and are therefore also subject to the scrutiny of peer reviewers. Thus, being shared by the team does not imply being closed off within the team, impervious to scrutiny. It is these overlapping contexts of institutions of trust that ensure that the holist form of trust we have been describing is not necessarily subject to vicious circularity. Instead, it is a web of interconnected forms of trust that are made more robust through their interconnection. In order to address the ‘bootstrapping’ problem, we need to be able to view practices from different distances and perspectives: from the close, granular view of how teams with expertise distributed across their members operate; through medium-scale views of how these teams are embedded within clinics and institutions; to larger, more distant views of how they operate across clinics and institutions. The normative implication is that the justification for trust practices needs to be sought in the interconnections between establishing what can count as reasons for trusting expertise and the intersecting contexts in which reasons are produced, communicated and exchanged, from within teams to across institutions. An applied philosophy of medicine could do valuable work in shedding light on the nature of these interconnections and how they contribute to the robustness of trust practices.
In conclusion, a social epistemology of clinical decision-making involves directing attention to epistemic dependence between team members with different expertises and roles in the clinic. Epistemic dependence, in turn, requires trust. In this paper we have given an account of the trust implied in epistemic dependence as a form of implicit trust within these teams, one that has both epistemic and moral elements. Through implicit trust, teams build an inter-subjective framework in which reasons for trusting the expertise of others on whom each is dependent are very closely intertwined with reasons relating to identifying and assessing evidence. It is only within such a common framework that trusting or mistrusting becomes meaningful in these contexts. Trusting well in clinical decision-making teams is what one does while attending to things other than expertise, most importantly, agreeing on what count as good reasons for recognising a task well done.
Appendix: Quotes
Quote 1
P10: I think I would always trust the radiology interpretation above and beyond my own. Because...I guess a lot of that comes down to your confidence. Ehm... and your certainty. As I look at more scans, and this is going to be an ongoing thing throughout my working career I guess, I will probably get more experienced and more confident in my own interpretation of scans. And I’m happy with certain degrees of that. And interpretation. But I still...But I still very much value and need, if you like, that to be...double checked by the radiology staff. So I would be very unlikely to formulate a final opinion on a patient without having had the scans reviewed with radiology. I would, I would come to my own conclusion, but I wouldn’t finalise that until I’d had that double checked. Which is probably a bit different to certain people, I think it places different strengths and, different people have different degrees of confidence in their interpretation of things. Ehm.. so that’s, but that’s the way that I think about it.
Quote 2
P11: yea, you’d look at it [a report from a general radiologist] and then sometimes we’d take it back to ours [radiologist from the PH team] and they’d say “that’s rubbish”. So, I think, you know, we’d...yea, I would look at it. If I get a report, I always look at the scans and make sure that fits in with what...what I’m seeing.
Quote 3
P11: So I’m not bad in [interpretation of images regarding] lungs, lung parenchyma I think I’m actually quite good...Ehm...and...the different patterns in gas trapping and clearly we’re good at the heart and the pulmonary vasculature, although we still miss clots occasionally, and stuff like that, so, some expertise. And in some selected areas more expertise than some radiologists, probably. Just in very selected areas.
I: so how would you say you obtained that expertise?
P11: ehm...just sort of...experience, reading, ehm...yea...experience and reading your bits and pieces and picking up instruments, pick up tips from the radiologists in the MDT over the years.
I: So it’s also the interaction with the...
P11: yea, you also know what they’re going to...you know what they...You see an image and you go “Oh, I know they’re going to report that.” Or what the differential diagnosis is going to be. Because you’ve seen it many times before.
Quote 4
I: Who do you think are the experts on the images?
P4: Ehm...I think us radiologists, because we’ve got the background. And because we’re dealing with images on a daily basis. We do...you know we do, we are familiar with what we’re doing. But at the same time, I think what we say, or the decision that what we say on the images, does base heavily on what the clinicians come up with as well. So depending on the history, what they think the patient’s background is. Then we can narrow our, our findings.
Quote 5
I: and do you influence each other in that process [...], I don’t mean specifically about diagnosis, but just in what you’re looking for?
P7: yes, I think that’s fair isn’t it. I mean you point people in the...if you sit and work with people for several years, you’re educating each other. So radiologists are educating us and we’re educating the radiologists.
Quote 6
I: what makes you trust people more or less?
P5: I think your experience, so if you have seen for a long period of time that they make the correct diagnosis, they can probably see things other people can’t see. They [the other people] miss. And you trust them [the experienced people] more.
I: it’s your personal experience?
P5: it’s a personal experience. For example if you have the right ventricle, now talking about his ejection fraction, what looks to be absolutely normal [on the MRI scan] and the ejection fraction is very low...It’s common sense. That’s the other thing. It can’t be true.
I: so it’s your own...
P5: so you have to filter all the numbers, because the numbers...because of the technical challenges and technical difficulties, may be wrong.
Quote 7
P9: I think he’s pragmatist, he’s people person, and I respond to trust. For me it was trust. I like the guy, I’d do anything to help him out. Within reason that I can. And, as clinicians go, he’s a very very open, friendly and non-political person, and I kind of warmed to that kind of personality. You can tell with him that he’s a...his main priority is patient care.
Quote 8
P9: we founded our collaboration on a need to get something going, and that’s always a good starting point. You work on something and you get it working and you see that it’s actually been useful.
Quote 9
I: so, and this validation, is it about...correlating it to other information, or is it also about...learning to interpret the images?
P10: I think it’s both. Because if somebody showed me a paper today that said, you know, it proved from a group of...a hundred pulmonary hypertension patients that a MR cardiac output and estimates of certain parameters were just as good as the right heart catheter...then I’d say that’s very good, but I’d want to see, you kind of also want to see it in other centres, and in your own patient population. And, you’d still want to build up your own experience to feel that that was correct within your unit [...] So if you got a paper from a.. an eminent...unit in Amsterdam, or somewhere, saying how fantastic MRI scan is. That may work perfectly well for them...but will it work for you? With the department you have, with the scanner you have and with the staff you have processing and reporting your scans. Probably, because they’re very good here. But you don’t actually know that until you see it for yourself, do you? So there’s whole different levels of validation.
Quote 10
P4: before, we used to use, just contrast angiography, so what we used to do was, to look at the pulmonary arteries, we used to just inject contrast and then just look at the flow of the contrast in the pulmonary artery. So very...it’s an invasive procedure, so you have to have a catheter put in the groin and...but, that’s more or less obsolete these days. So we don’t, we hardly do one a year.
[...]
I: do you miss anything about imaging modalities that you, you know...is there any time that you would say, well we could have seen that on [another older imaging modality]...
P4: the thing is, because we don’t do it that often, we’re losing the skill to interpret the...[...] So, you know, if somebody gives us a pulmonary angiography now, I think we’ll all struggle to identify what’s happening.
Quote 11
I: an image...is seen as more objective than...
P5: it’s more objective. The other thing is that you can see the structure in front of you. The structure for example of the heart, and the function of the heart. So you can’t doubt that, if you have a heart that’s not functioning well, it’s pretty obvious. Not always, but many times.
Notes
1. ‘Character’ is another word closely associated with ethics, especially with virtue ethics. We will not explore that in this paper, but a next step in developing the theoretical framework of trust in science would be to consider the moral aspects of professional character.
2. See Oakley and Cocking (2001) for an account of the ethical aspects of professionalism.
3. The idea of the ‘space of reasons’ draws upon several philosophical threads: firstly, Wittgenstein’s On Certainty (Wittgenstein 1969), where the question of what can count as a test or grounds for knowledge or doubt is raised (for example, paras 109–110), and the suggestion is made that all testing takes place within a system, ‘And this system is not so much a point of departure as the element in which arguments have their life’ (para 105). The term ‘space of reasons’ is borrowed from the discussion between Wilfrid Sellars, John McDowell and Robert Brandom (see, for example, Brandom 1995). The broad claim is that not just anything can count as a reason: something counts as a reason only when it is given intelligibility through interconnected norms and criteria in a context or system.
4. Data were collected through observing weekly MDT meetings; performing eleven qualitative semi-structured interviews with members of the clinical team; and conducting a group discussion on emerging imaging technologies. In addition, we video-recorded a session of two radiologists collaboratively reporting an X-ray computed tomography (CT) scan, and an interdisciplinary meeting to determine the usefulness of an emerging imaging technique. MDT meetings were not video or audio recorded as we did not have ethical clearance for this; we recorded our observations in notes. All interviews were audio recorded, and recordings were transcribed and coded using NVivo (QSR International Pty Ltd, version 10, 2012). The data for this particular study are those from the MDT observations and the semi-structured interviews. The interviews were divided into three main sections: the interpretation and use of images; expertise and trust; and the introduction of new imaging modalities. We used a grounded approach for the analysis of the data, broadly following these categorisations but also looking for connections between them. In other words, we used the main topics and sub-topics of the interviews as a first iteration for the analysis of transcripts and notes, together with an open coding approach, looking for relationships and groupings within and among these topics and sub-topics, thereby establishing recurring and contrasting motifs and themes, particularly connections between the MDT observations and the interviews with individual research participants.
5. (1) Pulmonary arterial hypertension (PAH), either idiopathic or associated with other conditions; (2) PH due to left heart disease; (3) PH due to lung diseases and/or hypoxia; (4) chronic thromboembolic PH; and (5) PH with unclear and/or multifactorial mechanisms.
6. This team may not be representative of all clinical teams specialized in diagnosing and treating PH, or of teams in any other specialty. For instance, not all clinical teams have such intensive collaborations with imaging engineers, and in this set-up radiographers usually attended radiology MDT meetings, which is usually not the case for other teams. We did not assess the extent to which this team differs from others and how that affects the applicability of our analysis to other teams.
7. It might be objected that the increasing use of images is a sign of reducing the need to trust others, and indeed, even oneself. Images, and in particular the quantification associated with many types of images, are often rhetorically associated with greater objectivity (Joyce 2008). However, this is not really borne out by fieldwork, including our own, since how the imaging modalities come to be used in specific contexts, and how images are taken and processed, involves skills and expertise at every step, and therefore precisely the kind of teamwork that we have been describing here.
8. The history of medicine is full of examples. See for example Weisz (2003).
References
Adler, J. E. (1994). Testimony, trust, knowing. The Journal of Philosophy, 91(5), 264–275.
Andersen, H., & Wagenknecht, S. (2013). Epistemic dependence in interdisciplinary groups. Synthese, 190, 1881–1898.
Baier, A. (1986). Trust and antitrust. Ethics, 96(2), 231–260.
Brandom, R. (1995). Knowledge and the social articulation of the space of reasons. Philosophy and Phenomenological Research, 55(4), 895–908.
Carusi, A. (2008). Scientific visualisations and aesthetic grounds for trust. Ethics and Information Technology, 10(4), 243–254. doi:10.1007/s10676-008-9159-5.
Carusi, A. (2009). Implicit trust in the space of reasons and implications for technology design: A response to Justine Pila. Social Epistemology, 23(1), 25–43. doi:10.1080/02691720902741423.
Esser, J. K. (1998). Alive and well after 25 years: A review of groupthink research. Organizational Behavior and Human Decision Processes, 73(2), 116–141.
Faulkner, P. (2007). On telling and trusting. Mind, 116(464), 875–902. doi:10.1093/mind/fzm875.
Fleck, L. (1979). Genesis and development of a scientific fact. Chicago: University of Chicago Press.
Fleck, L. (1986 [1947]). To look. To see. To know. In R. S. Cohen & T. Schnelle (Eds.), Cognition and fact: Materials on Ludwik Fleck. Dordrecht: Reidel Publishing.
Friedrich, K. (2010). ‘Sehkollektiv’: Sight styles in diagnostic computed tomography. Medicine Studies, 2(3), 185–195. doi:10.1007/s12376-010-0050-4.
Frost-Arnold, K. (2014). The cognitive attitude of rational trust. Synthese, 191, 1957–1974.
Goodwin, C. (1994). Professional vision. American Anthropologist, 96(3), 606–633.
Hardwig, J. (1985). Epistemic dependence. The Journal of Philosophy, 82(7), 335–349.
Hardwig, J. (1991). The role of trust in knowledge. The Journal of Philosophy, 88(12), 698–708.
Hertzberg, L. (1988). On the attitude of trust. Inquiry, 31(3), 307–322.
Jirotka, M., Procter, R., Hartswood, M., Slack, R., Simpson, A., Coopmans, C., et al. (2005). Collaboration and trust in healthcare innovation: The eDiaMoND case study. Computer Supported Cooperative Work (CSCW), 14(4), 369–398. doi:10.1007/s10606-005-9001-0.
Jones, K. (1996). Trust as an affective attitude. Ethics, 107(1), 4–25.
Joyce, K. (2008). Magnetic appeal: MRI and the myth of transparency. Ithaca: Cornell University Press.
Kappel, K. (2013). Believing on trust. Synthese, 191(9), 2009–2028. doi:10.1007/s11229-013-0376-z.
Lackey, J. (2010). Testimony: Acquiring knowledge from others. In A. I. Goldman & D. Whitcomb (Eds.), Social epistemology: Essential readings. Oxford University Press.
Lagerspetz, O. (1998). Trust: The tacit demand (Vol. 1). Berlin: Springer Science & Business Media.
Lagerspetz, O. (2015). Trust, ethics and human reason. London: Bloomsbury Publishing.
Lahno, B. (2001). On the emotional character of trust. Ethical Theory and Moral Practice, 4(2), 171–189.
Lewis, J. D., & Weigert, A. (1985). Trust as a social reality. Social Forces, 63(4), 967–984.
McCraw, B. W. (2015). The nature of epistemic trust. Social Epistemology, 29(4), 413–430. doi:10.1080/02691728.2014.971907.
Möllering, G. (2001). The nature of trust: From Georg Simmel to a theory of expectation, interpretation and suspension. Sociology, 35(2), 403–420.
Oakley, J., & Cocking, D. (2001). Virtue ethics and professional roles. Cambridge: Cambridge University Press.
Origgi, G. (2004). Is trust an epistemological notion? Episteme, 1(1), 61–72.
Pettit, P. (1995). The cunning of trust. Philosophy & Public Affairs, 24(3), 202–225.
Rajaram, S. (2013). Imaging in pulmonary hypertension: The role of MR and CT (MD thesis). University of Sheffield, p. 213.
Shapin, S. (1994). A social history of truth: Civility and science in seventeenth-century England. Chicago: University of Chicago Press.
Solomon, M. (2006). Groupthink versus the wisdom of crowds: The social epistemology of deliberation and dissent. The Southern Journal of Philosophy, 44(S1), 28–42. doi:10.1111/j.2041-6962.2006.tb00028.x.
Toulmin, S. (2001). Return to reason. Cambridge: Harvard University Press.
Urfalino, P. (2014). The rule of non-opposition: Opening up decision-making by consensus. Journal of Political Philosophy, 22(3), 320–341. doi:10.1111/jopp.12037.
van Baalen, S., Carusi, A., Sabroe, I., & Kiely, D. G. (2016). A social-technological epistemology of clinical decision-making as mediated by imaging. Journal of Evaluation in Clinical Practice. doi:10.1111/jep.12637.
Wagenknecht, S. (2014). Facing the incompleteness of epistemic trust: Managing dependence in scientific practice. Social Epistemology. doi:10.1080/02691728.2013.794872.
Weisz, G. (2003). The emergence of medical specialization in the nineteenth century. Bulletin of the History of Medicine, 77(3), 536–574.
Williams, B. (2002). Truth and truthfulness: An essay in genealogy. Princeton: Princeton University Press.
Wittgenstein, L. (1969). On certainty. New York: Harper Torchbooks.
Acknowledgements
We are very grateful to the members of the clinical team that we have observed and interviewed for this project. We wish to thank two anonymous reviewers for their helpful comments, and the organizers and participants of the conference Medical Knowledge in a Social World (Irvine, California, USA, March 2016) and the conference of the Society for Philosophy of Science in Practice (SPSP2016, Rowan University, June 2016), at which drafts of this paper were presented. We are also grateful to Giovanni De Grandis for discussion and suggestions. S.v.B. is financially supported by an ASPASIA Grant (409.40216) of the Dutch National Science Foundation (NWO).
Additional information
Sophie van Baalen and Annamaria Carusi are joint first authors.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.