1 Introduction: what are workplace studies for? Revisited

“It is not individual factors that make or break a technology implementation effort but the dynamic interaction between them… we need studies that are interdisciplinary, nondeterministic, locally situated, and designed to examine the recursive relationship between human action and the wider organizational and system context.” (Greenhalgh et al. 2017, 3).

This article provides fresh empirical and analytic support for the rather old argument that understanding the challenges of adopting data-driven clinical decision support systems can be facilitated through the use of (ethnomethodologically-informed) ethnographic approaches. This interest in the value of what can broadly be called ‘fieldwork’ methods reflects a longstanding concern in the CSCW literature. Almost 30 years ago, Plowman et al. (1995) asked ‘what are workplace studies for?’ and outlined the tensions between ethnographic fieldwork and system design, between providing detailed explanatory accounts and producing usable design recommendations:

“the descriptive language and sociologically-generated analytical categories constructed in ethnographic studies are likely to be of little relevance to the practical problem of designing computer systems. Those who attempt to show explicitly the relevance of their research, may find that in the process of translating their detailed accounts into more formal requirements, the richness and significance of their work gets lost, distorted or misconstrued.” (Plowman et al. 1995, 321)

Schmidt (2000) also sensed disillusionment and scepticism amongst those who hoped that fieldwork studies might contribute to technology design. We do not really share that disillusionment or scepticism, and use this paper to present a ‘scoping study’ (Hughes et al. 1994) of work in various UK histopathology labs as a precursor to the evaluation of AI tools. The ethnomethodologically-informed, ethnographic approach we advocate differs from many standard ‘ethnographic’ approaches in that it sets out to make visible the real-world sociality of any particular setting, in order to develop an understanding of the situatedness of individual activities and of the wider work setting, highlighting interdependencies between activities, and stressing the ‘practical participation’ of individuals in the collaborative achievement of work. Such an understanding matters in the face of the growing ubiquity of IT systems, artefacts and decision support systems, where the design problem becomes not so much the simple creation of new computer-based tools as their effective integration with existing and developing work practices, in which users must try to embed any new system within their work practice (Hartswood et al. 2000).

2 Background

The use of AI-based clinical decision support systems to assist various forms of medical diagnosis has been of increasing interest for some years (Fenton et al. 2011; Hosny et al. 2018; Shortliffe et al. 2018; King et al. 2023). International qualitative studies, for example, suggest a range of different views and attitudes amongst clinicians and other stakeholders – for example, radiologists and radiographers – towards AI-supported decision support (Huisman et al. 2021a, b; Abuzaid et al. 2022). The position is well summarized by King et al. (2023, 529):

Recent reviews of studies of clinicians’ perceptions of AI describe positive attitudes regarding the potential for improved diagnostic accuracy, fewer errors, and more efficient workflows. Nonetheless, acceptability was moderate, with concerns about liability, reputational loss, lack of evidence of efficacy in clinical settings, and lack of explainability, as well as key themes of lack of trust in patient safety and technology maturity.

However, few of these studies, at least until relatively recently, have been much concerned with documenting exactly how these systems impacted work practices and procedures, or how they practically integrated with or complemented existing clinical or organisational work practices. To be fair, recent work by Farič et al. (2024) does examine and outline some early user and organisational experiences of implementing and integrating an AI-based diagnostic decision support system in chest radiology, pointing to the importance of, and relative lack of research on, socio-organisational factors. What is missing from this and other accounts is any clear description and understanding of how the everyday work of using these decision support tools actually ‘gets done’. The phenomenon of interest – the routine use of a decision tool – has effectively disappeared, and we are presented instead with a host of accounts of the various attitudes of different stakeholders towards it, some reflections on its possible impact on the future of work, and so on, but little on how the tool or system is actually and mundanely used.

A similar critique might be made of other recent studies (King et al. 2023) that have adopted a realist approach to the evaluation of healthcare technologies. This approach starts from the idea that:

“AI is a complex intervention. Studying complex interventions requires a strong theoretical foundation”, where ‘theoretical’ encompasses a range of different ideas and users’ perspectives, and “theories typically combine substantive theory and stakeholders’ theories derived from experience.” The conclusions of these studies are not really that different from those of a range of other studies operating from rather different theoretical viewpoints (or even common sense): “Relevant theories suggested AI is more likely to be accepted if pathologists are able to ‘make sense’ of the technology, engaged in the adoption process, supported in adapting their work processes, and can identify potential benefits to its introduction.”

But, again, there is little that documents exactly how the technology is actually, routinely used – instead of being the ‘topic’ of research, it is relegated to a simple ‘resource’ in the belief that ‘everybody knows’ how the technology is used. But, of course, they don’t, and that is exactly the point of our ethnomethodologically informed, ethnographic approach and of similar approaches that have emerged from the ‘workplace studies’ tradition in CSCW (Computer Supported Cooperative Work).

Consider, for example, Randell et al.’s (2012) early study of the use of glass slides in a histopathology department of a large UK hospital, which was aimed at informing the design of a digital microscope. Although it precedes the introduction of digital imaging and AI support for clinical diagnosis and decision-making, which is the focus of our study, the research similarly documents the procedures and sequencing of everyday diagnostic work and the routine processes involved in viewing the slides; making comparisons between slides; requesting extra slides; getting a second opinion; and reporting a case:

we can identify a general pattern of an initial scan at low power, followed by zooming in on areas of interest. Having read the request form, the histopathologist will begin with one or more high-level questions that they are seeking to answer, e.g., ‘Does this person have cancer?’… With each slide, the histopathologist is revising their hypothesis with regard to the diagnosis.

Our study also has some broad ‘design’ ambitions concerning the design and use of diagnostic tools and the shaping of workplace practices. In our research, the understanding of any work activity and workplace setting must align itself closely with the data – it is ‘data-driven’: the aim is to show, in detail, exactly how work is organised and the sense-making procedures clinicians use in the course of their work. We are interested in documenting the kinds of things they routinely do as part of ‘doing the work’; the order, timing and sequencing of their activities; and what they try to be aware of or keep ‘in the back of their minds’.

In the following section, we describe the methodology used in the study. In Section 4, we provide an overview of the histopathology lab, its workflow and diagnostic practices. This is followed in Section 5 by extracts of study participants’ views on how an AI-based CDSS could be deployed in this setting and the issues this raises for the governance of such technologies. In Section 6 we review these findings and consider how ethnomethodologically informed ethnography may help inform the design of AI-based CDSSs and the organisational practices that will be needed to ‘domesticate’ them. Finally, in Section 7 we argue for the continuing importance of detailed studies of work practices for the successful adoption of technological interventions in healthcare work.

3 Methodology

In this paper, we draw on a recent ethnographic study of clinical diagnostic work performed as a key step in the patient cancer care pathway. The setting for the study was two UK histopathology laboratories (Procter et al. 2022) that were taking part in a project that involved piloting and evaluating an AI-based CDSS intended to assist histopathologists in the diagnosis of prostate biopsies.

The ethnomethodologically-informed approach to ethnography we follow is well described elsewhere (Clarke et al. 2001a, b, 2003; Hartswood and Procter 2000; Hartswood et al. 2002, 2003a, b, c; Slack et al. 2007, 2010; Procter et al. 2022, 2023). It focuses on what we can learn as members of the setting display the real-world, real-time competences and practices through which they organise their interactions, thereby documenting the everyday orderliness of mundane work activity and the ‘machinery’ of social interaction. This involves fine-grained, moment-by-moment analysis of everyday situated practices and interactions in order to explicate people’s ‘ethno-methods’ – the practical, situated exercise of common sense, whereby activities can be seen – and are made to be seen – to be accountable, organised and recognisable.

Ethnomethodology is decidedly not the same as ethnography, but it provides our approach to analysing and understanding ethnographic data – hence the name ‘ethnomethodologically informed ethnography’. This perspective resists imposing any prior theoretical framework on the phenomenon; as Garfinkel (1967) puts it, the aim is:

to treat practical activities, practical circumstances, and practical sociological reasoning as topics of empirical study, and by paying to the most commonplace activities of daily life the attention usually accorded extraordinary events, seeks to learn about them as phenomena in their own right.

It starts with the assumption that the setting and its associated activities make sense to the participants (or ‘members’) – in this instance, people working in the pathology lab. People generally know what they are doing, and our interest resides in understanding activity from the viewpoint of these ‘members’, the parties to the particular setting, rather than from any particular theoretical perspective. So, interest and attention are focused exclusively on the study of what is involved in actually doing the work: what people routinely busy themselves with, what they ‘look out’ for, how they organise themselves – whatever the work might be. The approach:

“orients us to the practically accomplished character of the real world’s ‘giveness’ to society’s members, to the ‘giveness’ of the world as feature of human action and interaction, and thus makes common-sense into a topic of study in its own right; something to investigate and unpack, rather than treat as an unexplicated resource for sociological theorising.” (Button et al. 2015); it provides “a members’ methodology for assembling the organised settings and scenes of everyday life, settings and scenes in which computing systems, applications and services have to gear into if they are to survive.” (Button et al. 2015, 149).

The data was collected during observations of 11 histopathologists with varied levels of experience as they worked, of meetings where progress with deploying the AI-based CDSS within the laboratory setting was presented, and of meetings where small groups of histopathologists shared their experiences of using the CDSS and discussed their assessment of its performance on selected cases. A total of ten observations were conducted. Observations of work lasted between one and two hours and included discussions with participants when appropriate, for example, to obtain clarification of diagnostic procedures. Observations were complemented by sixteen semi-structured interviews, conducted both in person and on Microsoft ‘Teams’, each lasting approximately 90 minutes. All participants were members of two UK pathology labs that had recently transitioned from glass to digitally imaged biopsies and were also taking part in a trial of an AI-based CDSS for prostate cancer diagnosis. Eight were consultant histopathologists and three were trainees. Typically, histopathology consultants have experience of diagnosing a range of cancers, but in the two labs in this study most of the consultants were members of teams that specialise in a particular type of cancer, e.g., liver, renal, breast, prostate, etc.

Discussions ranged over the impact of the recently completed transition to the digitalisation of images and the introduction of an AI-based CDSS to assist the histopathologists in the diagnostic process. Extensive fieldnotes were taken during observation sessions, and discussions in observations, meetings and interviews were recorded and transcribed. Ethical approval was obtained through Warwick University Biomedical and Scientific Research Ethics Committee and research passports obtained from the relevant hospital trusts.

Key themes were identified through an iterative process involving three of the authors reading the fieldnotes and interview transcripts and discussing their interpretations and findings.

4 Histopathology laboratory work

It is a common feature of everyday working life that people just ‘get on’ with things, and it is exactly that ‘getting on’ that we wish to document, describe and analyse in this particular setting of a histopathology lab, so that we can evaluate the ways in which new technologies might impact on that everyday work.

Biopsies, i.e., samples of tissue, are taken when a patient is suspected of having cancer, so that this can be investigated and, if cancer is diagnosed, its grading can be determined. The histopathology lab work begins with biopsies being sliced into thin sections, mounted on glass slides and digitised. The digital images are then passed to histopathologists for diagnosis, which is the focus of this study.

In the fieldwork extracts below, we document how histopathologists diagnosing tissue biopsies engage in everyday practical actions such as the magnification, manipulation and annotation of images – key components of the lived work of ‘doing’ diagnosis.

The practice of diagnosing biopsies calls for histopathologists to exercise a subtle combination of reasoning, knowledge, and skills, or ‘professional vision’ – “socially organized ways of seeing and understanding events that are answerable to the distinctive interests of a particular social group” (Goodwin 1994, 606), combining perceptual and interpretive skills in a complex visual environment. In action, ‘professional vision’ is concerned with the activities of the individual relative to some particular professional set of expectancies.

‘The relevant unit for the analysis of the intersubjectivity at issue here is thus not these individuals as isolated entities but (…) a profession, a community of competent practitioners, most of whom have never met each other but nonetheless expect each other to be able to see and categorize the world in ways that are relevant to the work, tools, and artifacts that constitute their profession’ (Goodwin 1994, 615).

Thus, our studies of the work of histopathologists present a particular view of diagnosis in medical settings, one that is perhaps rather different from conventional understandings. As Wears and Nemeth (2007, 206) suggest:

Most enquiries into diagnosis have viewed the physician as an information-processing device that is usually flawed. Decisions and actions are viewed as discrete events rather than as a continuous flow of activity. Informational cues are viewed as clearly available ‘nuggets’ of objective knowledge rather than as constructions that workers build from their own expertise and expectancies. Physicians are thought of as individuals working in isolation rather than as heterogenous groups of clinicians working together… this model of diagnostic thinking does not correspond well to what people in the world actually do, and its continued use only impedes efforts to understand diagnostic failures.

In contrast, our studies attempt to document the ‘real world’ of how diagnosis is done, emphasising the meaningful and practical human activity involved in the orientation of histopathologists to colleagues, to work artefacts, and to technology, in order to provide a baseline understanding of the work and the setting into which new technology systems may have to fit.

4.1 The pathology lab workflow

In the following fieldwork extract, a histopathologist outlines the major stages in the pathology lab workflow, beginning with the reception of tissue specimens, their preparation as biopsy artefacts and ending with a diagnosis following a visual examination of the biopsy.

Biopsies come in formalin in pots from theatres or like radiology, wherever the biopsy has been taken, and the lab staff will book them into the lab, give them a lab accession number and the lab staff handle the biopsies. And so, they will basically transfer them into a numbered cassette and then the cassette goes into a processor, the tissue gets dehydrated with a series of graded alcohols and turned eventually into something that’s filled with paraffin wax, and then that gets embedded in a paraffin wax block, which the lab staff section on a microtome and float those sections on a water bath. They pick them up on glass slides and then the slides will go into the digital slide scanner and then the slides will get put in my pigeonhole. I’ll go and get the slide and take it to my office and then I have the slide and the paper form and then I will look at it on the Phillips image management system, which is the digital pathology viewer. And assuming it’s all in focus, if it’s not, I have to do it on the glass slide. I’ll just sort of look, make my assessment, decide if I can sign it out or not on just that H & E. If I can’t, I’ll order extra stains, so immunohistochemistry. And if I can, I will review it and then sort of mentally make a report in my head. And then when I finish, type the report onto the laboratory Information management system and then authorize it or I might need to ask somebody else’s opinion.

In the UK, as elsewhere, the histopathology lab has been the site of significant technology innovations in recent years, beginning with, as noted in the extract above, the replacement of biopsy sections on glass slides with scanned digital images (Procter et al. 2022). This turn to digital biopsy images is now driving a second wave of innovation in histopathology. Selected UK histopathology labs are in the process of piloting the use of an AI-based CDSS that has been trained on digital biopsy images to identify malignant changes in biopsy specimens. Our studies were therefore aimed at gaining an understanding of histopathologists’ diagnostic work as it is currently performed and of its implications for the introduction of these twin technological innovations.

The above description of the overall workflow raises a number of questions that we will explore in the following sections. One fundamental question here is: what is the work of making a pathology specimen amenable to the use of a CDSS? Are there steps in the preliminary workflow that are key to making its use possible and, if so, do they differ from the old way of working, or do things just go as they always did, with the CDSS being able to insert itself into a pre-existing process? In other words, are there ways in which pathology lab personnel are having to adapt their behaviour to accommodate the CDSS and, if so, what are the consequences of that in terms of traditional CSCW concerns regarding support, such as articulation work, invisible work, collaboration, cooperation, awareness, etc.? Already, the above workflow description indicates some matters of interest. For instance, it is clear that the originally biopsied tissue is subjected to a series of different, often manual, manipulations in order to arrive at something that can be adequately rendered in digital form in a digital pathology viewer. In that case, what aspects of those manipulations are specifically geared towards a digital rendering, and what aspects are features of accomplishing any workable rendering for diagnostic work? Another important question is what histopathologists consider to be the benefits of having a digital image. For instance, is such a rendering ‘better’, ‘quicker’, ‘more accessible’, ‘more flexible’ (i.e., can more be done with it as a result)? Does it feed better into subsequent workflows, or augment the scope for sharing and collaboration? Is there scope for saving effort by providing an initial analysis? These are all matters we will expand upon below.

4.2 Moving from glass slides to digital images

In this section, we explore how histopathologists are adapting to the move to digital images in diagnostic work. In the extracts below, histopathologists note a number of advantages of digital images over glass slides.

I would say digital has the advantage, of course, the flexibility of work, the remoteness and the accessibility from various locations, but also the ability to share cases very easily, the ability to get cases from archives. Also for audit and research purposes.

an added benefit of the digital is you have measuring tools, so you can very accurately measure for example, size of a tumour or the distance of a tumour to a surgical margin.

It can be seen that these excerpts directly address some of the questions posed above. Thus, digital renderings are argued by histopathologists to offer greater flexibility, accessibility, and shareability. Added to this are advantages relating to archiving, auditing, research, and precision. However, histopathologists also observed that digital imaging brings some disadvantages.

Some slides are out focus like you’ll get a small number of slides either tiny biopsies or immunohistochemistry slides that are quite pale that won’t get picked up properly by the scanner and then they’ll be out focus and then it’s really annoying to start looking at a case digitally and then realize you can’t sign it out and you don’t have to slides. So, you might as just well have slides in case you need to look at them. So yeah, we’re sort of transition, we’re near the end of our transition, but yeah, we’re in this sort of hybrid state at the moment.

There are a number of known pitfalls between digital and glass and they’re listed out in various publications and so like for things that require texture, don’t always come out so well on digital because it’s a 2D snapshot through a slide, whereas the glass slide you can move up and down and look at different planes.

What this brings into view is the fact that professional diagnostic work entails a set of embodied practices, manipulations, and visual competences that can be thwarted by the current presentation of digital slides. These include resolving matters of contrast and the handling of texture where physically moving between planes is part of what informs the ‘seeing’ of potential abnormalities. Interestingly, this is not just presented as a matter of thwarting the diagnosis but of undermining the organisational accountability of the process because it prevents signing out of the case. Given these potentially disruptive differences, the introduction of digital imaging has required histopathologists to undergo additional training.

You have to retrain your brain to view things on a screen, than looking down a microscope, which is like looking in a tunnel and the way you just interact with the case is different and you know, on a microscope you do what’s called lawn mowering, where you go up and down the slide, screening back and forwards. And that’s not so easy on digital. So, you tend not to do that so much. So, people have to sort of find their own mechanisms for doing these things on digital.

Everyone has to do a minimum of 60 cases, so cases that you have reported on glass previously and then a bit of time has passed and then you go back and you look at the digital images.

In summary, the move to digital biopsy images is still in transition and glass slides are still routinely made available to histopathologists, who will then typically work with a mix of glass biopsy specimens and digital images. In the next section, we report on the practices of diagnosing biopsies.

4.3 Doing diagnostic work

In the everyday work of a histopathology lab, the practice of ‘reading’ and interpreting biopsies calls for the exercise of a set of subtle, learned skills. We consider such practices as constitutive of some form of ‘professional vision’, involving sets of tried and tested repertoires of ‘manipulations’ that are an integral part of the embodied practice of uncovering or realizing phenomena in the biopsies and of deciding if these constitute evidence of cancer and, if so, its stage. These manipulations and the accompanying professional diagnosis involve reasoning, assessing, evaluating, diagnosing and making judgements whilst engaged in the actual flow of activity. At the same time, such activities are embedded in a set of professional expectancies, ‘professional vision’, concerning how to ‘go about’ the ‘doing’ of everyday diagnostic work. This involves translating features made visible in the digitised image into an appropriate organisational and professional formulation – particularly in terms of possible diagnosis and treatment. It is, as Garfinkel et al. point out, the ‘intertwining of worldly objects and embodied practices’ (1981, 165) that produces the recognizable and accountable diagnoses and decisions of the histopathologist. As will be seen, this reflexive relationship between objects, practices, and accountability is central to how the work proceeds.

In this next fieldwork extract, the histopathologist discusses the basic set-up of the equipment. There then follows a series of extracts in which histopathologists talk about the tools of their trade and what might be regarded as their ‘professional’ use, describing the processes they generally follow in ‘looking’ at a biopsy and making their decision:

So obviously first you check that you got the right patient, so you can easily check with the name. You can also check the slide that’s been scanned. It’s got the label on there, so check this the right name and you have the right information on the digital system. And then when I start looking at the slide, I usually start low power and I firstly see how many pieces there are and I make sure that that makes that correlates with what’s written on the form, so they said they would take a one biopsy, and then there’s four I start to think is this the right specimen for the right case.

So, for me personally I really focus on the low power first of all. So, first of all, to make sure that I’m happy we’ve got a complete image and also it gives me a sense of where I might need to focus in on so rather than going up into a medium or higher power in sort of the first field, I don’t do that. I tend to scan the whole image on quite a low power first to get a feel for where I might need to go in and look at on higher power. And then I might sort of rescan quite quickly on a higher power and just go into those areas where I want to focus on in… So, like I might scan the whole thing on the equivalent of like a * 2 and then sort of start to look around on a * 4 but do that quite quickly because I know where I want to go and then quickly go into sort of * 10 * 20….

The above extracts give some insight into how the actual detection process works once a slide is ‘on the table’, so to speak. Straight away, there is a bunch of metadata associated with any one slide that is subject to various checks (name of patient, number of samples, etc.) as a matter of routine professional competence. The very fact the histopathologist checks these details reveals that they are of accountable concern. The histopathologist can use these details to hold the organisation to account for having delivered up the right object for inspection and, at the same time, the histopathologist is accountable for having undertaken such checks. Organisational accountability trades upon these mutually implicative practices. Another point to keep in sight here is that ‘the slide’ is a central, tangible object around which the whole diagnostic process is oriented and articulated. This is still the case when ‘the slide’ is digital and ‘the slide’ can actually contain more than one sample. Then there are processes of calibration, where inspection begins across the whole image but focuses in on certain details in progressive increments. The ‘first glance’, so to speak, is about both verification (another example of organisational accountability in play) and about identifying prospective sites of interest, or ‘suspicion’. Armed with these, the histopathologist can move towards greater levels of magnification to open up those sites of interest to specific forms of diagnostic reasoning.

In another fieldwork extract, a histopathologist describes the processes involved in looking for particular cancers, in trying to differentiate the ‘normal’ from the ‘abnormal’:

If I’m looking at a piece of tissue, the first thing I would try and decide is what the tissue represents, what normal site, or what normal anatomy I can see in that tissue.

If I’m looking at a piece of lung tissue, I would expect to find almost big empty spaces lined by the alveoli or the lung epithelial tissue and if instead of that I’m finding solid areas of rather than empty spaces, if I’m finding all these spaces filled with either cells or any other abnormal material, then I do know that it is abnormal.

Something to note straightaway here is that histopathologists have an accountable sense of how ‘normal’ tissue in some specific area should present itself. ‘Abnormality’ is then an accountable difference from that expected presentation. We use the word ‘presentation’ advisedly here, because, as noted above, the histopathologists are working with samples that have already undergone a number of potential manipulations and how the sample looks may be an artefact of these manipulations rather than an identifiable pathology. This forms part of what histopathologists are referring to when they talk about approaching their task with a certain ‘mindset’ – a mindset that corresponds to ‘professional vision’ and that involves having a plan and alternatives to deal with various contingencies:

Yes, this is also you know when you’re looking at the slide or an image you’re not cruising… you already have a plan you have a set of questions you’re answering mentally while you’re going through, and it’s, this is something we try to explain to the computer scientists, we’re still kind of in the process… So, because from the clinical background, clinical training and the experience when you look at the biopsy for example, you know these are the questions… You know these are the things I need to be looking for in each specimen… there are a set of questions you kind of look for their answers, while when you are looking at the slides these are related to the type of the biopsy the clinical history and of course the presenting features on the slide itself… sometimes you look at the biopsy .things are unexpected so your mental… pathway kind of changes accordingly as well.

There are a lot of things to unpack in the above. First of all, the histopathologist alludes to having a stock of appropriate considerations. Just what makes those considerations ‘appropriate’ is informed by their own background, the things they know must accountably be considered, specific information they have been given about the patient and the biopsy, and the presentation of the sample on the slide itself. An interesting aspect of this is that it is not ‘once and for all’: what counts as appropriate is endlessly revisable as their own reasoning unfolds. This has all the characteristics of what Livingston terms ‘midenic’ reasoning, of which he says:

“… situated reasoning: reasoning located in a particular place and time. It’s local reasoning about particular things, not reasoning in general. The term “midenic,” however, focuses attention on the fact that this type of reasoning is literally and hopelessly stuck in the “middle of things” and can’t be disengaged from what we’re doing at a particular time and how we’re doing it.” (Livingston 2017, 39).

For us this is also tremendously redolent of how Lynch and Jordan (1995) speak of the nature of instructed action in the work of molecular biologists when trying to address the divergence of results obtained regarding the Polymerase Chain Reaction (PCR):

“Differences among practitioners in the sequence of steps taken, materials used, and results obtained can be difficult to manage within a single lab environment. Procedures like PCR are said to be highly sensitive to “contamination” arising from the circumstances of use, and often are subject to disputatious claims about how best to do them. Practitioners do manage to get on with the work, despite many complaints about the procedures, but how they do so cannot be reduced to a particular set of instructions. If it is appropriate to speak of these routines as “standard” protocols, then their orderly character as such must derive from other, more localized and heterogeneous sources than can ever be contained in a single sequential plan. The fact that a set of instructions can provide an adequate account for those who are able to do the techniques in question does not justify saying that the “information” in the instructions produces the techniques or their results.” (Lynch and Jordan 1995, 239).

What the particular histopathologist in question appears to be alluding to in the extract, then, is the presence of a set of ‘standard protocols’ that they adhere to, where the exact character of the slide and the history made available to them constitute exactly these kinds of ‘localized and heterogeneous sources’ from which their actual diagnosis derives. Another thing to note is how the quoted histopathologist brings their reasoning to bear through an interrogatory, almost dialectic relationship between what is visible, those appropriate considerations, and their own stock of knowledge. It is no wonder that there is a challenge involved in describing this way of reasoning to computer scientists. It is highly distinct from the ways in which algorithms typically get constituted. And this, of course, is obviously a matter of concern when trying to capture the expertise of histopathologists within a CDSS.

4.4 Difficult cases and the management of suspicion

As the histopathologist explains in the fieldwork extract below, there are a range of options when deciding a diagnosis.

Basically what you’re trying to do is put them into the category of benign or malignant. But in the middle, there’s this category called PIN, which is has an association with development of cancer. And then there’s atypical. We call them atypical small acinar proliferation ASAP and basically it’s a management category like you can’t be certain that it’s benign or malignant, but you’re suspicious it might be malignant, but you can’t be definitive about that because you have to be 100% sure to call it cancer. Otherwise, that patient might have a radical prostatectomy. So, if you have an element of doubt, then it goes to the atypical category. And then that patient will either get a re-biopsy or just be surveyed with more MRI scans and clinical follow up.

What this makes clear is that, in a small number of cases, the histopathologist may find it difficult to be confident about what is the right diagnosis. It also makes evident some important concerns regarding the accountability of their actions. There is an explicit issue regarding unfortunate and inappropriate outcomes arising from their diagnosis, e.g., “Otherwise, that patient might have a radical prostatectomy”. Such outcomes evidently create a risk of being actively called to account. However, given this, there are mechanisms available for managing accountability. First of all, there is the simple grammatical device of framing uncertainty in terms of ‘suspicion’. ‘Suspicion’ is absolutely not certainty and provides for a range of potential outcomes. When you say you ‘suspect something of being X’ you cannot be accused of having claimed that it is X. However, at the same time, suspicion provides for further exploration; indeed, it would be equally accountable to not undertake further investigation if you say that you suspect something is X. Given the distinct possibility of histopathologists being put in the position of ‘suspicion’ rather than certainty, they have come up with a set of organisationally sanctioned and accountably ‘safe’ categories, such as ‘PIN’ and ‘ASAP’. When this happens, as explained in this next fieldwork extract, the histopathologist will seek a second opinion from a colleague in the histopathology lab.

There are various ways or various situations where you ask for a second opinion, so you can ask for a second opinion because you need somebody else to look at the case. You can’t reach a diagnose or you’re not certain to reach a diagnosis. And it’s on this case because various reasons, it’s a difficult diagnosis or it’s a critical diagnosis and you want another colleague to give you an opinion, so you will have more confidence… There are certain diagnoses which the college actually requires the histopathologist to have two people looking at it because of the lack of consistency and the consequences of such diagnosis, so this is adopted, kind of nationally if you like… So, you have two people looking at each case, and that’s in the guidelines.

What the above extract makes clear is that, having arrived at a point of ‘suspicion’, there are tried and trusted mechanisms for displaying one’s personal, professional, and organisational accountability, such that ‘appropriate’ decision-making is visible to any and all who might seek to audit decisions down the line. These mechanisms include, ‘asking for a second opinion’, ‘stipulating due process’ (e.g., given a set of conditions, a certain process must be followed), ‘following guidelines’, and so on.

4.5 The expert and the trainee

Diagnosis clearly involves a material, collaborative process drawing on different technologies, expert skills, and careful coordination with colleagues – skills that are developed over time. This can be seen in this fieldwork extract, where we hear a histopathologist talking about the differences between his approach and that of the novice or trainee in terms of an appreciation of ‘context’:

For example, surgeons will use diathermy to burn blood vessels when they’re taking out a specimen to reduce blood loss from the patient and that you get a burn artifact on that tissue. Yeah, essentially sort of shrivelled and cooked and what happens to the nuclei at the edge there is they look quite distorted. And if you see that in context you know it’s just the margin, it’s just diathermy artifact. But if you were to see that out of context, you might be worried about the nuclei because they would look a bit darker. That sort of thing. But you see that on glass also, but would it be immediately recognizable for that to you or would be to me. If I was a junior who hadn’t been doing pathology for very long, then I might be worried about that, but if you’ve been, if you’ve got the experience and you know what it relates to, then it’s fine. You can just miss it. OK?

It can be seen that this example also reinforces the point made earlier about part of the competence of a histopathologist residing in judgments of normality in relation to the expected presentation of a sample. In the case of burnt artefacts at the edge of specimens, the presentation may appear abnormal but, for an expert, it can be accountably discounted. This especially emerges in the explanation of the difference between the expert and the trainee when it comes to making decisions:

As we’re going through our training, the trainees will describe all of these features. And the more experience you get, you sort of cut to the chase more and you won’t include all of that. I will include things like that if I found it very difficult to come to a decision… These are the things we’ve taken into account and this is the conclusion we’ve come to. But if it’s a really, really straightforward case, even if I’ve had some of the same thought process… it’s been very easy and quick to sort of dismiss that and get to the crux of the matter.

Here, the matter of experts accountably discounting or disregarding certain features is made quite explicit. The point that this is an accountable matter cannot be overemphasised. Another aspect of the professional competence in play here is being able to recognise what counts as a difficult case in the first place. For novices, it is hard for any case to be other than a potentially difficult case, so it would be risky in the extreme to omit certain accounts and to exercise professional disregard. The latter part of the extract makes it clear that, with experience, the inverse becomes the case: describing every detail would itself be accountable, because that is part of what makes one’s status as a novice manifest. However, if it is hard for an expert to make a decision, making the case ‘difficult’, then evidencing that all features have been opened up to consideration becomes essential, and the omission of such detail could potentially be called to account. Given all this, when seeking to understand what makes expertise visible, as one histopathologist explained, ‘context is everything’:

The clinical context is everything, so it’s basically… what we are as pathologist, you are a clinician in interpreting the images. You are not just kind of a morphologist. This is kind of a misinterpretation. Your interpretation is based on… clinical knowledge of disease or clinical information about the patient and it has to be always in context. Looking at things out of context can be very harmful actually.

5 Clinical decision support systems in histopathology

In this section, we move on to examine the views expressed by histopathologists regarding the prospective introduction of AI-based systems into their work.

5.1 Risks and requirements

The best way to deploy a CDSS (or ‘AI’) in biopsy diagnostic work remains an active topic of debate among histopathologists. One possibility would be to use the CDSS to screen out ‘normals’, which would have the benefit of reducing histopathologists’ workload.

So, from an AI point of view, personally, what I would want to do is get it off a lot of the normal biopsies that I look at, I’d rather not look at them. I would prefer that AI was able to look at these and categorically call these benign or as normal so that we would not have to look at it. And if it could highlight the ones that are abnormal so that I could then look at those and concentrate on those ones that are abnormal.

While this is clearly an ideal for histopathologists, our analysis of what is entailed in making judgments about normality or abnormality and the accountable exercise of expertise makes it clear that the use of any technology to make any such decisions in ways that can be routinely trusted is going to be challenging. Another possibility is to use a CDSS as an assistant that would help speed up the diagnostic process for each case.

I’m interested to see the scope of what AI can do, so whether or not it’s as a diagnostic assistant to help me… That’s what tools are available at the moment and I would hope that that would help me with identifying areas of tumour, quantifying those areas of tumour, helping me to find the additional features that I need to put in reports of things like perineural invasion, grading of tumour. So, it’s there to help me potentially speed through the reporting process without reducing the accuracy obviously. So, trying to make me more efficient in what I do.

As we have seen, expertise in histopathology turns upon one’s competence to read the presentation of the sample. At least one implication of the above reflections is the possibility of a developed form of presentation that can highlight certain features and overlay additional information so as to facilitate the work of reading and measuring. As also hinted at, in such a role the CDSS might be able to assist in writing up the report on each case.

Potentially, if it’s a very prescriptive report, it might also be able to write that report for me. Which would save an awful lot of time because things like the prostate biopsy report are very step down in terms of the content compared with something like a renal tumour referral where it’s much more descriptive and pulling together all the information and then putting it into a report that you can make it sense of something.

What the above makes visible in particular is the presence of certain elements in the workflow that histopathologists view as ‘much labour with little reward’, such as writing largely prescriptive reports. All workflows have such elements but, if the CDSS has as one of its goals the streamlining of the diagnostic process so as to improve capacity, identifying places where support would have this effect is useful. However, to be deployed as a diagnostic assistant, the CDSS would have to meet some key requirements.

It has to be as easy as possible and because particularly for things that are more routine in my life, like prostate biopsies, I feel that I’m probably as quick as I can be in terms of reporting. So, whatever is fitting in to that process has to be as least disruptive as possible. It has to be as easy for me to see as possible without adding extra work. Obviously, you want to make sure that you’re as accurate as you possibly can be. And I would want that for a tool to be able to pick up everything that I would expect it to in terms of malignancy or suspicious for malignancy that I can pay attention to appropriately. The issue will be is, is if it starts picking up things that are irrelevant in terms of the diagnostic outcomes, so areas that are you know are clearly benign to me that end up taking extra of my time to resolve them. So, I’ve already had a look at the case and I’m having to pick up all the bits that an AI tool might have focused on that then ends up taking me even longer. Yeah, it has to be simple.

One of the most notable concerns expressed above is that a CDSS might actually end up generating more work for the histopathologist, rather than less. This goes back again to the matter of expert competence in relation to accountable disregard. As we have seen, there are many things in the presentation of a sample that an expert histopathologist has acquired the competence to routinely ignore. The risk for them with a CDSS is that it will not be able to exercise the same degree of disregard. In other words, it may continually act like a novice and bring to their attention things that can be safely set to one side. In the next fieldwork extract, a histopathologist explains that understanding how such a CDSS has been developed is important to its acceptability for clinical use.

It depends on what the AI system claims that it’s doing, and I think this is where we need to understand how the AI systems are developed, who’s developed them and what they’re claiming to do. I think that there needs to be honest conversations. Or all of these things might not be so relevant in a few years’ time when everyone’s used to it and people say, oh, that’s fine. We’re not gonna worry about those bits. And people accept the fact that that’s OK. But I think as people are starting to move into AI then understanding it is OK for you as a pathologist to ignore essentially what the tool has picked up. If I was told that the tool can pick up areas that they’re just drawing your attention to without claiming that they’re all malignant, that would be a different scenario, because otherwise you’re going to end up over investigating small foci that to a pathologist’s eye are clearly benign and asking other people to look at them or ordering immunohistochemistry that you might not have otherwise ordered. So, all of this is new, is going to take time and I think you know, some pathologists are gonna have to bear with it and get used to it. And then other pathologists will have to learn from what they do and have a very open conversation within the wider community about how to use AI.

Something visible in the above extract is the notion that, just as histopathologists already exercise professional disregard in relation to the presentation of samples, they also hope for a time when they might be able to exercise a similar disregard in relation to information generated by a CDSS. This would clearly resolve the eternal novice problem identified in the preceding extract. However, the interviewee here points to an important issue: to exercise such disregard you have to have some sense of how the system is doing what it does. It is relatively easy to know what to safely disregard when you understand how the presentation in front of you has been arrived at. The worry for the histopathologist here is that a CDSS may make that opaque.

5.2 The organisational context: accountability, auditing, and governance

We have already mentioned a number of ways in which histopathologists manage matters of accountability, auditing, and governance in and through their existing practices. Largely these are taken for granted and are only made explicit when things go wrong. However, the introduction of new technologies and new ways of doing can also potentially breach routine practice, such that the accountability of their work is actively opened up for inspection. For this reason, participants expressed some broader considerations related to auditing and governance for CDSS in order to ensure their continuing compliance with defined standards of performance.

I think there would be requirement for ongoing audit. I think the governance teams have a real headache coming for all of these things because there has to be governance processes in place. You know, nationally we’re thinking on behalf of the NHS anyway, but also then local governance processes as well to satisfy your own local governance board and audit would certainly be one of them. You know, taking out percentage of cases and relooking at those and making sure that you’re still happy. And that you’re happy to accept still the 90% that you might not have forwarded it. Yeah, it’s tricky, isn’t it?

Wrapped within this is a concern with trust, something we have already surfaced in relation to the current decision-making process and how trust is built into it, so to speak. The problem with using a CDSS is that the existing mechanisms trade heavily upon the ordinary workings of intersubjectivity between people and the ways in which assumptions about shared ways of reasoning and moral accountability are tightly wrapped into the background expectations professionals bring to their work. As these intersubjectively based mechanisms for trust do not exist between people and machines, however sophisticated, alternative mechanisms for managing trust and accountability have to be invented, such as the governance and auditing proposals made in the excerpt above.

Both the development and deployment of AI-based CDSS will need to be guided by the sharing of knowledge about development processes with clinicians, and by the sharing of best practices and training post-deployment. The importance that clinical professionals attach to the former is clearly highlighted in one participant’s musings on AI-based CDSS development:

I think you have to build a level of trust. I’d have to see it working and as you say it’s all part of validation, isn’t it, understanding what the machine’s seeing? And some people are going to be more trusting than others. For example, I might want to know more about who’s kind of helped develop that. Are they specialists, urological pathologists? If it’s urologic pathology, is it, you know? Is it a black box? Is it you know, what input have people had into the development of the tool and then where has it been validated? Who’s validated it? What are the results of the validation? So, people have gone through and, you know, the AI’s done its job on those sets of cases and the pathologists has, are they really happy that those are benign? What are the nuances and what are the pitfalls potentially so understanding how it’s been done elsewhere and then validating it yourself within your own case mix? Learning to trust it. Yeah, I don’t know.

This process will also need to be overseen by bodies representing healthcare professionals and by regulatory bodies. One participant observed:

The issue is how we get these systems into practice, how accepted they are amongst pathologists? I think more importantly, how they’re accepted among regulatory bodies and how we manage that side of things. That’s the more sticking point from where I’m sitting at the moment is working out how we use what’s being developed, and particularly not all places have even got digital pathology at the moment. And the hoops that you have to get through to get that system into place besides the expense, I think if we had all the money in the world, it’s still difficult.

Not unreasonably, in view of its potential to cause harm if improperly conducted, histopathology is currently subject to a significant amount of regulation, and many of the accountability mechanisms already in play are attuned to that potential oversight. For the histopathologist above, at least, it is tuning the accountability mechanisms of a CDSS to these regulatory concerns that is going to present the greatest challenge. As we have seen, a middle ground that might see off some of these issues is to have a CDSS play the role of a diagnostic assistant. However, participants also expressed concerns about the impact that the use of a CDSS as a diagnostic assistant might have on (a) their own skills and (b) how they themselves are held to account.

And I guess the other thing is that you know are you at risk of losing your own expertise if the AI starts reviewing certain types of cases and then who becomes better at it? You know, there’s all sorts of things. Who’s looking at what the AI does. If it’s reviewing your work. Are people judging you and the AI? There’s so many questions that are going to be asked. Who’s using that? You know, will they use that against your practice? Will that be something that we end up having to put in our appraisal that we got, you know, this percentage of cases? The AI thought that we’d missed something, which I’m hoping will not happen because I think we’re as good as we can get as we are. But people will ask these questions going forwards.

The above is interesting for how it stands in contradistinction to the concerns captured earlier regarding whether having the CDSS act like a novice and pick up on everything would augment their workload. In this case, the concern is the opposite, that the CDSS might become ‘more expert’ than they are and that they themselves might have to act more like novices in order to make sure they haven’t missed something for which they might get called to account.

6 Discussion: CSCW, technology and diagnostic work

In this discussion we want to review how the above findings can provide valuable insights regarding diagnostic and decision-making work in clinical settings as a precursor to the introduction and evaluation of AI-based CDSS. At the very least, we may be able to make some reasonable predictions about the consequences of introducing new designs or changes in technology into such settings. We also want to consider how our analysis of displays of accountability and trust might eventually feed into the design process and provide some ‘implications for design’ (Dourish 2006).

6.1 Diagnostic work

We have seen that, as with many other forms of diagnostic work, the practice of ‘reading’ both glass and digitised biopsy slides calls for the exercise of a subtle, learned combination of reasoning, knowledge, and skill. We have suggested that such practices might be considered constitutive of some form of ‘professional vision’, though we are mindful of Livingston’s comments about the prejudices that surround such studies:

“One is that reasoning is a mental process, something that takes place in the brain rather than being bound up with the material world and situated, embodied action. The second is that skill is a property possessed by individuals rather than belonging to a collectivity of practitioners”. (Livingston 2017)

In Sections 4.3 to 4.5, we have therefore also examined the ways in which notions of midenic reasoning and instructed action can inform an understanding of what the work entails. As we have observed and documented in multiple other diagnostic settings, diagnostic work involves, requires even, sets of tried and tested repertoires of ‘manipulations’ that are an integral part of the embodied practice of uncovering or realizing phenomena. These manipulations and the accompanying diagnoses are examples of reasoning, assessing, evaluating, and making judgements whilst fully engaged in the flow of activity. At the same time, such activities are embedded in a set of professional expectancies concerning how to go about everyday diagnostic work. This involves translating features made visible in the digitised image into an appropriate organisational and professional formulation – particularly in terms of possible diagnosis and possible treatment. This is, of course, what ‘professional vision’ looks like in action: it concerns the activities of the individual relative to some particular professional set of expectancies.

Diagnosis appears then as a social process of a specific ‘community of practice’ to which its members are accountable. As we saw in Section 4.3, being a competent practitioner involves being able to accountably distinguish between what is ‘normal’ and what is ‘abnormal’ in a digital image or a glass slide and understanding the range of manipulations and shared professional interactional practices that make what is ‘normal’ or ‘abnormal’ witnessable and accountable. It is the ‘intertwining of worldly objects and embodied practices’ that produces the recognizable and accountable diagnoses and decisions.

Diagnosis should be regarded as a material, collaborative process involving technologies, expert skills, and careful, sensitive sensory engagement with others. Some diagnostic activity requires what might be regarded as rational everyday knowledge; some demands specific ‘scientific’ epistemic practices of measurement, representation, and calculation – all of which need to be carefully described and documented. While in our fieldwork extracts participants may speak of ‘seeing’, ‘noticing’ and other supposedly cognitive or mentalistic topics, they do so in a thoroughly social manner that testifies to ‘seeing’ and ‘noticing’ being practical achievements, part of professional vision and recognised as such by a professional community of practitioners.

Cognitivist treatments ignore the crucial social dimensions of reading and the interactional constitution of diagnoses inherent in the term ‘professional vision’. Histopathologists’ activities of handling, annotating and talking around artefacts ‘constitute’ the biopsies they view as part of their craft. While it might seem that the work of reading is undertaken in and through the application of individual skill, we document how reading and diagnosis, even in the narrow sense of just examining slides or images, is an intersubjectively constituted achievement, and in Section 4.5 and throughout Section 5 we have indicated the challenges this may therefore pose for the introduction of AI. By showing how practical actions such as arranging slides, gesturing and pointing to features on screen, and manipulating images are all components of the lived work of doing detection and diagnosis, our analysis points beyond the rather impoverished accounts of clinical diagnosis as a simple cognitive phenomenon – accounts which thereby miss exactly what it is to be doing diagnosis, even in the simple exercise of suspicion (see Section 4.4). Our studies orient to the ‘real time, real world’ nature of diagnostic work, describing and emphasising the meaningful and practical human activity involved in such everyday work – the orientation of participants to colleagues, to various work artefacts, and to the available technology – in order to provide a baseline understanding of the work into which any new technology may need to fit (Randall et al. 2007).

6.2 Accountability and trust

Our analysis of the work of clinical decision-making has touched throughout on the complex matter of accountability and trust (Gambetta 2000; Luhmann 2018). The issues that arise here are especially important with regard to the design and redesign of supporting technologies. Our research on clinical diagnostic work is presented in the belief that the detailed understanding provided by an ethnographic approach and ethnomethodological analysis should be a precursor to the design, redesign, and evaluation of clinical decision support systems. Diagnosis is thoroughly shaped by concerns about accountability (Procter et al. 2023): people are required to give explicit accounts or explanations for the different decisions that they make.

Accountability is embedded in the organisation of the process itself, in the form of who has the right to speak in treatment planning meetings and the need to ‘account’ for particular, and perhaps unusual, evidence or interventions. Unusual results provoke debate in which clinicians may request additional information from nurses, histopathologists, radiologists, admin staff, or other clinicians (Procter et al. 2023). Accountability is afforded by and accomplished through the public visibility and sharing of documents, such as the patient record, CT scans, mammograms, x-rays, and biopsy images. Patient files, for example, are managed by clinicians and have a complex relationship with the process of accountability and decision-making, forming part of what Bittner (1965) terms ‘stylistic unity’ and acting as a resource for elaboration and collaboration. Arriving at accountable decisions unfolds dynamically as a process of ‘midenic’ reasoning (Livingston 2017; see Section 4.3), and diagnosis and decision-making, although clearly processual, are generally a complex, interwoven and mutually elaborative affair (Procter et al. 2023).

Accountability and trust are clearly related, and a detailed understanding of everyday diagnostic practices is, we believe, essential for the successful introduction of CDSSs (Procter et al. 2022). Only in this way will it be possible to understand the potential impact of such tools on the situated, collaborative practical activity of diagnostic work that we report on in this paper. Questions that need to be addressed include: what design features of such tools might cause clinical professionals to ‘trust’ or ‘mistrust’ them? This concern was surfaced in Section 5. As with interpersonal trust, evidence for the trustworthiness of technology is not secured through a one-time act but must be an ongoing accomplishment (Procter et al. 2023). How can trust in these tools be sustained in and through interaction with them? We need to consider not only expectations of trustworthiness that will precede the deployment of these tools but also those aspects of trust that will emerge as part and parcel of the production and accomplishment of the clinician’s everyday professional work that they are intended to support. How can such tools be made accountable for their behaviour in and through clinicians’ interaction with them?

Our fieldwork illustrates that histopathologists’ work has an important collaborative, social character that is utterly bound up with the accountability of their actions and the intersubjective grounding of their work. We have already seen how trust is implicitly bound up with such concerns (see Section 5.2), so we must consider the possible impact of new technologies on trust with respect to the working arrangements and practices that underpin this collaboration. Clinicians expect to be held accountable to their colleagues for the decisions they make. This suggests that CDSSs must not only provide accounts that meet the needs of the individual clinician, but these accounts must also be compatible with how they themselves manage being accountable to one another and to the organisational setting of which they are members.
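What such an account might need to carry is suggested by the questions participants themselves raised in Section 5: who developed the tool, where and by whom it was validated, and what it attended to in this particular case. Purely as a hedged illustration (every field name here is our own invention, not a feature of any existing system), a CDSS output designed around these concerns would look less like a bare label and more like a structured record:

```python
from dataclasses import dataclass


@dataclass
class CDSSAccount:
    """A hypothetical 'accountable' CDSS output: the suggestion plus
    the material a clinician could be asked for when they, in turn,
    account to colleagues for relying on it."""
    suggestion: str                 # e.g. "no tumour identified"
    confidence: float               # model score in [0, 1]
    evidence_regions: list[tuple]   # image coordinates the model relied on
    model_version: str              # exact version that read this case
    development_provenance: str     # who helped develop the tool
    validation_summary: str         # where validated, by whom, with what results
    locally_validated: bool         # checked against this lab's own case mix?
```

Whether fields like these would in fact satisfy the accountability practices described above is an empirical question for the kind of evaluation we advocate; the point is simply that the unit of design is the account, not the prediction.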

This reminds us that, as technological innovations become entangled in the complexities of organisational working, the challenges of successful adoption correspondingly increase, because of the requirement for their integration within existing and trusted work practices. Not least is the concern that, while tools such as AI-based CDSSs may save diagnostic time and effort, this saving could be offset by the effort then required for continual auditing of their trustworthiness (Procter et al. 2023).

7 Conclusions

Our current studies take place against the background of high rates of failure in the adoption of digital technological innovations, as documented by Greenhalgh et al. (2017), who state:

It is not individual factors that make or break a technology implementation effort but the dynamic interaction between them… we need studies that are interdisciplinary, nondeterministic, locally situated, and designed to examine the recursive relationship between human action and the wider organizational and system context.

The empirical work reported above highlights some of the difficulties of evaluating healthcare technologies. It addresses the concern raised by Bannon (1996) that, while evaluations are important, it is also important to attend to their quality, since this shapes what can be learned from any study. Bannon suggests that:

“a careful systematic account of what happens in particular settings when a prototype or system is installed, and how the system is viewed by the people on the ground, can provide useful information for ‘evaluating’ the system and the fitness for the purpose for which it was designed.” (Bannon 1996, 427)

Thus, our interest in documenting the rather mundane detail of how clinicians perform diagnostic work is driven both by a desire to obtain a clearer understanding of diagnostic work and by a practical interest in the design and evaluation of technological innovations, such as AI-based CDSSs, that are intended to act as resources and to support clinical detection and diagnostic work in sustainable ways. In other words, it is driven by an interest in satisfying the requirements that will matter if the use of AI-based CDSSs is to progress beyond pilots, scale up, and be successfully embedded within potentially diverse clinical settings.

We therefore argue that a new technology such as an AI-based CDSS cannot be introduced without a commitment to first acquiring a detailed understanding of the everyday work of clinical diagnosis. We would stress, accordingly, the importance of qualitative investigations of the impact of technological interventions on the everyday working and mundane interactional practices of various medical settings. We are convinced that a detailed understanding of a range of diagnostic practices should be the precursor to the design and redesign of clinical detection and diagnosis technologies.