Introduction

The HER2 biomarker for breast cancer is an emblematic success story in the field of cancer biomarkers, and it is used to bolster enthusiasm for biomarker research and precision oncology. After the discovery of the ERBB2 (HER2) gene in 1984 by the Weinberg lab, HER2 was eventually demonstrated to be a biomarker of poor breast cancer prognosis. The targeted therapy trastuzumab was then developed, aimed at breast cancer patients with HER2-positive tumours, and, in a foresighted move, tested in biomarker-stratified clinical trials. The assessment of HER2 protein expression (by immunohistochemistry) or gene copy number (by in situ hybridisation), or a combination of the two, has for many years been the clinical gold standard for predicting patients’ response to anti-HER2 therapy, in both the adjuvant and metastatic settings of breast cancer.

The HER2 success story is often invoked when projecting what precision oncology could materialise into, and as a model for developing other successful biomarkers. To a significant degree, this has meant promoting the advances achieved through HER2, with the effect of creating a ‘scientific bandwagon’ of efforts to emulate HER2 through ‘standardised packages’ of theories, methods and technologies.

In this chapter we depart from this tradition by critically revisiting the HER2 story in a way that highlights not only the enabling conditions and reasons for its success, but also the complications and challenges it faced, and continues to face, along the way – from biological complexity to the associated social, political and ethical controversies. In doing so, we aim to show that HER2 confronted many of the limitations that hinder work on other biomarkers, and we assert that the field can learn as much from these limitations as from HER2’s advances. To the extent that HER2 is held up as a standard for all oncology biomarkers, a more nuanced account of its development may provide more realistic expectations of ‘good enough’ biomarkers. This more nuanced story also helps us reflect on what the history of HER2 can tell us about the possibilities and limits of precision oncology.

In section “The HER2 story”, we will review the HER2 history, from its discovery in the lab to the development of diagnostic tools and early clinical trials. In section “Revisiting the story of HER2”, we revisit the story of HER2 by looking at the ethical, legal and social aspects faced by HER2 as a ‘standard package’ in the HER2/oncogene bandwagon, and discuss one of the key contemporary legacies of the HER2 story, namely the sociotechnical imaginary of precision oncology. Section “Revisiting the story of HER2” ends with reflections on the need for a greater focus on ‘good enough’ biomarkers, particularly in a context of precision oncology driven by hyper-precision and the wish for molecular certainty, and it underlines the importance of being open about the low success rate of biomarkers reaching clinical practice, in particular when justifying the risks and opportunity costs of precision oncology. These key reflections are summarised in the concluding section “Conclusion”.

The HER2 Story

Discovery and Basic Studies

To understand how this extraordinary achievement for breast cancer patients started, we will shed light on the very first steps of the story: the discovery of the HER2 gene and the accompanying experimental research into the function of HER2. But we also need to peek into the period itself – the early 1980s – to understand the field of cancer research at that time.

Theories that cancer is caused by viruses delivering oncogenes – genes that convert normal cells into cancer cells – into infected human cells had been present for several decades. The first report on gene alterations promoting cancer, resulting from the landmark work of Nobel laureates Bishop and Varmus (Stehelin et al. 1976; Bishop 1983; Varmus 1984), led to a rush in the search for human cancer genes (oncogenes) and their normal counterparts (proto-oncogenes), and for how these ‘precursor’ genes are activated, for example by mutations, amplifications or gene rearrangements. Many cancer genes were found to be variants of previously identified viral oncogenes.

Studies on a leukaemia virus in chickens led to a major finding in 1984. Several labs came to the conclusion that the v-erbB gene is derived from the gene encoding the epidermal growth factor receptor, EGFR (Downward et al. 1984; Ullrich et al. 1984). The link between a chicken virus gene and human growth factor signalling was a game changer for cancer research. A model was proposed: “viral oncogenes were misbehaved variants of normal cellular proteins that played fundamental roles in growth factor signalling” (Sawyers 2019).

In the ongoing search for oncogenes, the HER2/neu gene was discovered (Schechter et al. 1984). At first, the oncogene neu was identified. Other research groups added to this knowledge as they cloned the new gene, naming it HER2, Human EGF Receptor 2, due to its resemblance to EGFR (Coussens et al. 1985; King et al. 1985; Semba et al. 1985).

The next step was to identify the functional consequences of alterations in HER2, now considered a human oncogene. Connections between other human oncogenes and specific cancer types had been identified, as for Burkitt’s lymphoma and neuroblastoma, but a corresponding link between HER2 and a human cancer remained to be uncovered.

The HER2 protein was described as a receptor at the cell membrane, with one (extracellular) part of the protein available for blocking by an antibody. The Weinberg group demonstrated reversion of the cancer phenotype when blocking the neu gene product in cancer cells overexpressing neu (Drebin et al. 1986). The same positive effects were not seen when using the same antibody in cancer cells overexpressing Ras, another oncogene, indicating the specificity of the antibody (Sawyers 2019). At the same time, the Ullrich group demonstrated that HER2 played a functional role in the transformation of cells from normal to cancerous (Hudziak et al. 1987). The functional pathology and treatment potential related to HER2 were now demonstrated.

At this stage, combining the knowledge from the Weinberg and Ullrich labs, the company Genentech initiated a search for relevant HER2 antibodies – proteins developed to bind to the HER2 protein and thereby block its cancer-promoting effects. They succeeded in developing an antibody with acceptable selectivity towards HER2, leading to growth inhibition of breast cancer cell lines with HER2 amplification (Hudziak et al. 1989).

When reading stories from the time of these discoveries, we can sense an almost electric atmosphere in the research races between labs, and the excitement of taking part in what could be a major breakthrough in cancer research (Sawyers 2019).

Biomarker Development

As mentioned, the HER2 biomarker is one of the most successful in contemporary medical oncology (Hunter et al. 2020). There is probably more than one explanation. First, due to the underlying genomic alteration behind HER2 overexpression in most cases – HER2 gene amplification – this marker is much more dichotomous or ‘on-off’ (or two-tiered) than many other tissue-based protein biomarkers, although borderline cases and challenges do indeed exist. In comparison, a completely different biomarker is Ki67 protein expression for tumour cell proliferation, used by many laboratories in the St. Gallen-based surrogate classification of Luminal B breast cancer cases. Ki67 varies continuously from one extreme to the other, without a stepwise pattern, and with much variation in how Ki67 is stained for and assessed. Second, in the case of HER2, the biomarker is identical to the target for tailored treatment: the membrane protein HER2, a member of the EGFR family of tyrosine kinase receptors. This ‘companion structure’ is probably an important but not sufficient explanation. As an example of the opposite, in the case of anti-angiogenesis treatment, the measurement of VEGF protein in tissues or in the blood has not been a success in predicting treatment response, VEGF being the target for bevacizumab.

After the discovery phase, and especially after the successful generation of antibodies, a period of intensified in situ studies on human cancer tissues followed, especially examining the expression of HER2 protein in various breast cancers and correlating the findings with clinico-pathologic phenotypes and patient outcomes.

In 1987, Slamon and colleagues reported that HER2 was overexpressed in approximately 30% of human breast cancers. In studies of oncogenes, DNA was extracted from primary breast cancer tissues, and Southern blot analyses gave a hit on HER2 (Slamon et al. 1987). In the initial part of the study, a cohort of 103 primary tumours was analysed, and HER2 amplification was found in 18% of the cases. Notably, when assigning the tumours to groups of (1) one HER2 copy; (2) 2–5 copies; (3) 5–20 copies; and (4) more than 20 copies, there was no apparent association with established prognostic variables like tumour size, histologic grade, or oestrogen receptor (ER) status. However, a trend of association was seen between HER2 amplification and the number of lymph nodes with metastasis; HER2 amplification was seen in 32% of the cases with metastasis to more than 3 lymph nodes. Follow-up data, like information on recurrent disease and survival, were missing for this cohort, so the association between HER2 and prognosis could not be evaluated. Based on the trend of association between HER2 amplification and the number of lymph node metastases, the research group interpreted this as a hint that HER2 amplification could have prognostic value. To pursue this idea, a cohort of breast cancer patients with lymph-node-positive tumours was examined. Out of 100 primary tumours (86 with sufficient DNA), 40% showed HER2 amplification. The association between HER2 amplification and an increasing number of lymph node metastases was replicated. Additionally, HER2 amplification was associated with larger tumour size and ER-negative tumours, and demonstrated a strong association with time to relapse and survival. HER2 maintained independent prognostic value when adjusting for established prognostic variables in multivariate survival analysis.

When reading the results from the initial studies (Slamon et al. 1987), we may ask what made Slamon and colleagues pursue their studies on HER2 in primary breast cancer. Many researchers have seen similar results when looking at other biomarkers and decided not to follow the lead any further. Thus, the difference between statistical and biological significance should always be kept in mind, along with considerations of sample size and statistical power, to avoid type-II errors, or ‘errors of omission’, in the process of interpretation.

Importantly, Slamon and collaborators did not give up their search for a clinically relevant role for HER2 amplification, despite somewhat weak results from the initial part of the study.Footnote 1 Subsequent studies focused on well-annotated tumours from patients with long follow-up, acquired from the collaborator William McGuire, who had established a breast cancer biobank – unique at that time. With this landmark study (Slamon et al. 1987), Slamon and colleagues demonstrated that HER2-amplified tumours were associated with aggressive tumour features and reduced breast cancer survival, supporting a clinically relevant role for HER2 in breast cancer progression.

Already at this point, discussions on biomarker cut-offs came into play. How should HER2 ‘positivity’ be defined? Should the definition follow a strict biological interpretation, counting more than two copies as amplified? Or would there be a need to define amplification according to a clinically relevant copy number increase? Slamon and colleagues noted a shorter time to relapse and shorter overall survival in tumours with HER2 copy number >2 compared to the others, but more striking separations between survival curves were seen when the cut-off for HER2 amplification was defined as >5 HER2 copies per tumour cell.
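The consequences of where the cut-off is placed can be made concrete with a small sketch. The copy-number values below are entirely made up for illustration; only the two thresholds (>2 and >5 copies) come from the discussion above.

```python
# Illustrative only: how the choice of cut-off partitions the same
# hypothetical cohort into 'amplified' vs 'non-amplified' groups.

def her2_status(copies_per_cell, cutoff):
    """Call a tumour HER2-amplified if its mean copy number exceeds the cut-off."""
    return "amplified" if copies_per_cell > cutoff else "non-amplified"

# Hypothetical mean HER2 copy numbers for ten tumours (not real data).
cohort = [1.0, 1.8, 2.3, 2.9, 3.5, 4.8, 6.0, 9.5, 14.0, 22.0]

for cutoff in (2, 5):  # the two cut-offs discussed by Slamon and colleagues
    amplified = [c for c in cohort if her2_status(c, cutoff) == "amplified"]
    print(f"cut-off >{cutoff}: {len(amplified)}/{len(cohort)} tumours called amplified")
```

With these made-up values, the stricter cut-off halves the number of tumours called amplified – a reminder that the cut-off choice directly determines how many patients become eligible for targeted treatment.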

In the years to come, and following the initial ‘gold rush’, the field of precision oncology and cancer biomarker research expanded significantly with respect to complexity at the methodological and biological levels, as well as in the clinical fields. Adding to this complexity, HER2 plays important roles in cancers beyond the breast. Issues such as definitions of ‘positivity’ when using immunohistochemistry (IHC) or in situ hybridization (ISH), or the combination, have increased with time, for example as reflected in the American Society of Clinical Oncology (ASCO) 2018 Guidelines on HER2 testing in breast cancer (Wolff et al. 2018). Tissue heterogeneity and sampling bias, in primary tumours as well as in metastases, and phenotypic development and ‘switches’ with tumour progression, are just a few of the questions of concern for practicing pathologists and clinicians. The complexity is still growing, and it is tempting to quote Churchill: ‘Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.’

Having demonstrated clinical relevance for HER2, a hunt for a human-tolerable antibody started. Techniques for grafting a mouse antibody onto a human antibody backbone were described (Jones et al. 1986; Verhoeyen et al. 1988). Shepard and colleagues developed an antibody that selectively killed cells expressing high levels of HER2 (Carter et al. 1992). This had significant implications with regard to toxicity and patient selection, and it was a critical breakthrough on the path to clinical application.

Early Clinical Trials

The development of a humanized antibody that selectively killed cells with high HER2 expression paved the way for the first clinical trials on trastuzumab (Herceptin®). Experimental models had demonstrated improved effects when combining standard chemotherapy regimens with trastuzumab in HER2-overexpressing cancer cells and xenografts in mice (Pietras et al. 1998; Pegram et al. 1999). The efficacy and safety of trastuzumab given alone in metastatic breast cancer had been demonstrated (Cobleigh et al. 1999), as well as in combination with chemotherapy, where the results showed no increase in toxicity compared to chemotherapy given alone (Pegram et al. 1998).

With this knowledge, Slamon and colleagues set out on the first Phase 3 study of chemotherapy plus trastuzumab in HER2-overexpressing metastatic breast cancer, enrolling patients in the period 1995–1997 (Slamon et al. 2001). Prolonged time to progression was seen in the group receiving chemotherapy plus trastuzumab compared to the group receiving chemotherapy only (median 7.4 vs 4.6 months). Response duration and survival time were also longer in the group receiving trastuzumab, which experienced a 20% reduction in the risk of death. At the same time, reports of some severe side effects emerged, demonstrating the often challenging balancing acts in this field (Slamon et al. 2001).

During the following years, several studies demonstrated clinical benefit from trastuzumab monotherapy in metastatic breast cancer (Baselga et al. 2005; Vogel et al. 2006). Two studies from 2005 demonstrated positive progression and survival effects from trastuzumab given after or concurrently with chemotherapy, in the adjuvant setting (Piccart-Gebhart et al. 2005; Romond et al. 2005), leading to approval of trastuzumab as adjuvant therapy in 2006 (Sawyers 2019).

Some of the important lessons learned from the early clinical trials that led to the success of targeting tumour HER2 in a subpopulation of breast cancer patients were appropriate patient selection, guided by HER2 tumour status and accompanied by a robustly validated biomarker test. These early trials would most likely not have demonstrated the progression and survival effects seen if patients with metastatic breast cancer across molecular subtypes had been included.

One key to understanding the significant and lasting impact of the early trials was the observation that the effect of HER2 blockade depended on stratification by tissue-based HER2 status, as there was no overall treatment effect to be observed. Thus, without the ‘companion structure’ of HER2 tumour status (biomarker) and HER2 blockade (treatment), this field might have been missed initially. However, the momentum continued to increase, with multiple trials of many different anti-HER2 modalities and clinical indications (Wang and Xu 2019).

Revisiting the Story of HER2

We will now revisit the story of HER2 to reflect on the conditions of its inception, some of the reasons for its success, and the challenges met along the way. How do we aim to revisit the emblematic HER2 story? There are different types of scholarship, such as history, philosophy or sociology of medicine, that can do this work of ‘revisiting’ stories. In this chapter, we revisit the story of HER2 from the related field of Science and Technology Studies (STS), and particularly through the two lenses of ELSI/ELSAFootnote 2 and RRI,Footnote 3 as they are concerned with what is of interest to us here: how science, technology and society at large are ‘co-produced’ and change each other’s trajectories, and how contributions from the social sciences and the humanities can integrate productively into such processes of co-production.

The concept of ELSI – Ethical, Legal and Social Implications – came about in 1988, at the onset of the Human Genome Project, when ethical, legal and social concerns were raised about the implications of genomic analyses, and notably the risks of using that knowledge to discriminate against people (see for instance McEwen et al. 2014). Shortly after, the field of ELSA – Ethical, Legal and Social Aspects – emerged and was set up as a research field in its own right, articulated around research programmes and funding schemes (see for instance the European Commission’s ELSA programme, 2007). In a similar way to ELSI, the field of ELSA is concerned with how science and technology permeate society and policy (and vice versa), and how emerging science and technologies sometimes leave us with specific problems and issues. Through the ELSI/ELSA lens, we will be able to revisit the different issues that HER2 left us with that have a legal, social or economic component – ragged edges, cut-offs and questions of fairness – which we can analyse at three levels: (i) basic science; (ii) diagnostics; and (iii) treatment.

In particular, over the last decade in Europe, the ELSI/ELSA concepts have gradually been supplemented, shifted and to some extent replaced by the emerging concept of RRI – Responsible Research and Innovation. While the ELSI/ELSA approach largely focused on immediate issues of overcoming the ethical, legal and societal obstacles that impede a successful translation and uptake of technology, the continued involvement of STS scholars and other social scientists in life science research led to an increasing appreciation of the need to study, understand and engage with the kind of choices and processes that lead scientists to do research on a particular development or technology. This is what RRI is about: critically looking at the social, political, and scientific interests and choices that shape the trajectory of a particular technology. Why are we investing in that research field or technology? Is the research responsible? What kind of intended and unintended effects can we anticipate? Who is concerned, who will this research affect, and how might they become involved in choices that ultimately affect them? Through the RRI lens, we will be able to revisit the story of HER2 by seeing it also as an exemplary case that had a broader effect on research and innovation trajectories in the cancer field.

How the HER2 Story Began: Oncogene Research Gathering Steam as a ‘Bandwagon’

As described above, HER2 was discovered in the effervescence of the early days of oncogene research. In her seminal paper The Molecular Biological Bandwagon in Cancer Research: Where Social Worlds Meet (1988), Fujimura drew a parallel between oncogene research and a bandwagon. She explains that oncogene theory and recombinant DNA technologies were ‘packaged’ in such a way that they became a scientific ‘bandwagon’.Footnote 4 This scientific bandwagon gained momentum through the neat packaging of a new, and arguably more productive, theoretical model for explaining cancer (and therefore a new definition of cancer) – the oncogene theory – with recombinant DNA technologies for testing the theory (these technologies became standardised, and thus easily transferable). It attracted increasing interest, support and resources from a broad spectrum of actors, ranging from public research institutions and laboratories to the private sector and funding agencies. Many groups, relatively close geographically (Cohen and colleagues at Vanderbilt University; the Weinberg group at MIT; the groups of Ullrich and Coussens at Genentech; Greene and colleagues at Harvard Medical School – MIT; Seeger and colleagues at the UCLA School of Medicine; Minna and Johnson’s group at the National Cancer Institute; Slamon and colleagues at the University of California, Los Angeles School of Medicine; to name a few (see the review by Kumar and Badve 2008)), all worked towards understanding how oncogenes worked and could be inhibited. These early oncogene researchers used the oncogene theory-and-technology package to prioritise oncogene research in their cancer research institutions, and in doing so they deeply changed the organisation of work in many laboratories, for which the priority became to work on oncogenes, biomarkers or antibodies.

Indeed, in the mid-1980s, “oncogene” became a buzzword at the centre of attention in oncology research (Hunter and Simon 2007), and the ground-breaking results of Weinberg and colleagues, through experiments with rodent tumours induced by chemical carcinogens, led to an explosion of studies of oncogenes over the following few years (Weiss 2020). Scientific articles related to oncogene research started to flourish in scientific journals, and the media drew attention to these developments as well. In parallel, funding agencies such as the American National Institutes of Health channelled increasing funds into oncogene research, going from $5.5 million allocated to 54 projects in 1983, to $103.2 million allocated to 648 projects in 1987. Specific to the HER2 story, and further accelerating the pace of the bandwagon, pharmaceutical companies collaborated with academia and regulatory agencies so that development of the HER2 biomarker and targeted therapies would happen in parallel. In particular, biomarker-stratified trials were useful in efficiently linking the anti-HER2 treatment of trastuzumab and HER2 as an accompanying predictive biomarker.Footnote 5

By 1984, notably with the discovery of HER2 by the Weinberg lab, the “bandwagon sustained its own momentum and researchers climbed on primarily because it was a bandwagon” (Fujimura 1988, p. 262). Fujimura (1988) pins down the success of oncogene research, and the success of HER2, to three main reasons: (i) the oncogene theory-and-technology package provided a new frame for, and definition of, cancer,Footnote 6 and its technologies made it possible to address relatively ‘straightforward’ questions within that frame. The apparent simplicity of how oncogenes were seen to work, and the access to well-fitted technologies, made it attractive to engage in large-scale research efforts; (ii) the oncogene package involved novel, pioneering, recombinant DNA techniques which attracted interest from the different actors at play; and (iii) very quickly, researchers working with the oncogene package developed new and valuable knowledge about cancer at the molecular level – an aspect of cancer that had not been the topic of extensive previous investigation. From the story of HER2 described above, we can add four other reasons to further explain the success of HER2: (iv) the dichotomous nature of HER2 – the marker is most often either ‘on’ or ‘off’ – facilitated its successful application in the clinic; (v) the persistence of scientists, who despite somewhat inconclusive results from parts of their study, pursued the research on potential clinical applications for HER2 amplifications; (vi) in the case of HER2, the biomarker and therapeutic target are the same, which again facilitated the road from the bench to the clinic; and (vii) the alignment between academia, pharmaceutical industries and regulatory agencies, gathered together on the bandwagon, was key to the success of HER2.

Understanding the Social, Ethical and Economic Implications of the HER2/Oncogene Bandwagon

The analogy between the HER2 story and a scientific bandwagon allows us to look in more detail at this bandwagon and discuss some key social, ethical and economic implications associated with it. In particular, we will look at implications relating to: (i) basic science; (ii) diagnostics; and (iii) treatment.

Basic Science

One significant opportunity created by the oncogene bandwagon was the capacity to open up research and nurture collaboration between different actors who were otherwise not as aligned as they became during the HER2 story. We have seen in Section 3.1 how scientific laboratories and institutions, pharmaceutical companies and regulatory agencies together worked around the oncogene theory-and-technology package. Two aspects help explain why this collaboration was so successful. First, the recombinant DNA technologies of the package were standardised, and therefore highly ‘transportable’ to the different actors on the bandwagon. They could all relate to, and work on, the ‘standard package’ without changing it, meaning that there were no major issues of translation, redefinition or reframing of the package: everyone was talking about the same thing, in a clearly defined frame – making collaboration easier. Second, the different actors could see their interests fulfilled through this collaboration: academia in bringing its scientific discovery to the clinic in a uniquely successful and fast way; pharmaceutical companies in ensuring that most target patients got a clinical benefit from the treatment while maintaining profits through higher prices; and regulatory agencies in seeing the most effective and safe treatment delivered to each patient (Parker 2018).

However, the bandwagon was also found to be a way of closing down research, or in other words, of effectively constraining cancer research trajectories into a domain where looking at oncogenes was the top priority of cancer research agendas. The phenomenon of the ‘bandwagon’ finds different expressions in the field of philosophy of science, among which two are useful for looking closely at this ‘closing down’ effect. In his paper History of science and its rational reconstructions, Lakatos (1970) devised the concept of ‘research programmes’ as a “sequence of theories within a domain of scientific inquiry”. If the move from one theory to the next is characterised by an advancement of the field, then we are in a ‘progressive research programme’. However, if this is not the case, then the programme is ‘degenerating’, and scientists might ultimately leave the programme to create or engage in a new one. Lakatos’ analysis was a response to the famous theory that Thomas Kuhn introduced in his book The Structure of Scientific Revolutions (1962), in which he held that research trajectories presented themselves as “paradigms”, that is, sets of theories, research methods, assumptions and standards for what constitutes legitimate contributions to a field. Kuhn developed a model of science in which periods of successful acquisition of new knowledge within the context of the dominant paradigm (periods of ‘normal science’) were interrupted by ‘scientific revolutions’ demanding a ‘paradigm shift’. These revolutions would occur after an accumulation of ‘anomalies’ in the field (typically, repeated failures of the current paradigm to take into account observed phenomena or explain facts). In the paradigm shift, the underlying models, definitions and assumptions of the field are critically questioned, and a new paradigm is established. The new paradigm will ask new questions of old data, giving research a different direction.
While philosophers of science in the 1960s and 1970s generally admitted that phenomena similar to Fujimura’s bandwagons, Kuhn’s paradigms and Lakatos’ research programmes existed, the main point of contestation was the rationality of the processes of constraint and change. For Kuhn, such processes were ultimately driven by extra-scientific considerations. For Lakatos, the development of science was inherently a rational and logical process. He conceded, however, that this rationality cannot be determined in real time: only in historical hindsight can one fully and rightly judge which research programmes were progressive and which degenerative – what was a brilliant case of patient persistence and what was an unfortunate lock-in.

Indeed, at some point, the oncogene theory-and-technology package seemed to reach its limits: anomalies were accumulating around the oncogene package, and its paradigm was increasingly criticised as an “illusion that cancer was as simple as it possibly could be […and] that a small number of molecular events might explain cancer” (Weinberg 2014, p. 269). In particular, the oncogene package struggled to investigate cancer mechanisms in the face of tumour heterogeneity and complexity (as we discuss in the subsection below). The impressive and outstanding success of HER2 and trastuzumab was exactly that: it stood out, impressed, and remains to date the poster child for biomarkers and targeted therapies. In time, researchers progressively jumped off the bandwagon of oncogene research to search for new progressive programmes, focusing for instance on immunotherapy (Akkari et al. 2020) and biomarkers of the tumour microenvironment (Laplane et al. 2019).

Diagnostics

There is a certain beauty in the apparent simplicity of HER2. It is one of the rare biomarkers that is most often either ‘on’ or ‘off’, and where the biomarker is also the therapeutic target. This explains to a great extent why HER2 has found successful applications in the clinic. However, as seen above, despite the formidable success of HER2, the standard oncogene package started to meet important limits, in particular when facing tumour heterogeneity and the overall high complexity of cancer biology, even for HER2. These limits are still visible today at the diagnostic level, where HER2 faces important uncertainties and ethical, social and economic implications, in particular related to: (i) inter- and intra-tumoral heterogeneity, which questions the reliability and quality of biopsies; (ii) the determination of HER2 positivity and questions of cut-off, with inter- and intra-observer variability; and (iii) the new sequencing and imaging technologies that generate immense amounts of data that need to be governed and made sense of (this latter point is perhaps more relevant for new and emerging biomarkers, but might have implications for HER2 as well).

Several models have tried to address the heterogeneity of tumours and the complexity of cancer biology without being capable of singlehandedly grasping it (Boniolo 2017). Basically, each tumour represents an ‘individual organism’, different from all others, and is itself composed of sub-tumours or cellular sub-populations (clones). This heterogeneity significantly and directly impacts clinical matters (Boniolo and Campaner 2019) and has important implications for patients, who will receive or not receive anti-HER2 treatment based on the HER2 test outcome. Regarding intra-tumour heterogeneity, the heterogeneity within a tumour, Blanchard and Wik (2017) explain that a patient might get a result showing HER2+ or HER2− depending on where in the tumour the biopsy has been taken. Regarding inter-tumour heterogeneity, the heterogeneity between different tumours in one patient, there are cases where the primary tumour is HER2− and develops into HER2+ metastases, and vice versa. Therefore, as argued by Boniolo and Campaner: “We can no longer speak in terms of, for instance, breast cancer, but properly speaking, we should refer to one of the many possible cancers affecting the breast.” (2019, p. 34) How, under these conditions, can we set up a robust diagnostic algorithm? How many biopsies have to be taken? How many metastatic lesions should be sampled, how often, and at what cost? This extends to uncertainties about how to treat the patient. For instance, Goldhirsch et al. (2009) argue that HER2 overexpression in circulating tumour cells might justify targeted therapy even in the absence of a HER2+ primary tumour; but this remains contested.

This tumour heterogeneity has direct implications for the determination of HER2 positivity and where to place the cut-off. There are indeed some cases where it is not obvious whether HER2 is over-expressed or amplified, or not. As discussed above, two main techniques for determining HER2 positivity are used in clinical practice: (i) immunohistochemistry or (ii) the determination of gene amplification by FISH, CISH or SISH. The threshold for HER2 positivity, and how to ‘correctly’ place the cut-off defining HER2 positive and negative tumours, are, in rare instances, still debated (Wolff et al. 2018). Some of the questions without clear answers are referred to in the literature as the ‘cut-off’ problem (Rosoff 2017) and the ‘ragged edge’ problem (Callahan 1990). Although initially perceived as a clean story of HER2 amplification with a clear-cut tissue biomarker, persisting efforts and technology developments have widened our understanding of HER2 biology, with corresponding clinical implications. For instance, somatic HER2 mutations might occur in 2–5% of primary breast cancers, mainly among HER2 amplification-negative cases (Yi et al. 2020). Observations on chromosome 17 polysomy or monosomy, as well as CEP17 centromeric amplifications, have made this area even more complicated from a diagnostic point of view.

Correspondingly, on the clinical side, there is no clear separation between strong responders and non-responders; rather, there is a continuum of responses, and developments in imaging techniques may influence the sensitivity with which responses are defined and detected. Patients who are just below the cut-off will ‘fall off’ the ragged edge and not get access to the treatment (Blanchard 2016; Fleck 2012). Callahan (1990), who first coined the concept of the ‘ragged edge’, argues that wherever we draw the line, there will always be people just below it; and we should therefore try to accept living with ragged edges: “We can accept [the ragged edge], not because we lack sympathy for those on it, but because we know that, once a ragged edge is defeated, we will then simply move on to still another ragged edge, with new victims – and there will always be new victims. […] It is a struggle we cannot win. […] We can ask not how to continually push back all frontiers, smooth out all ragged edges, but how to make life tolerable on the ragged edges; for we will all one day be on such an edge, sooner or later.” (Callahan 1990, p. 65).

Finally, a third limit that new and emerging cancer biomarkers face, but that also has implications for the future developments of HER2-linked diagnostics, relates to the new sequencing and imaging technologies which produce huge quantities of data, and which explain why, as noted by Boniolo and Campaner (2019), the number of bioinformaticians in the field of oncology has increased exponentially in the last two decades. However, it has proven very challenging to govern these data, created at an extremely rapid pace, and to understand their meaning. These big data, rather than supporting clinical decision-making at the diagnostic level, come to complicate the picture in uncertain ways. The deep-sequencing analyses of tumour DNA add a new layer of complexity, and the big data generated arguably overwhelm our abilities to interpret and make sense of them (Weinberg 2014). The development of large genomic data sets arguably complicates patients’ choices relative to the use of their genomic information (Mayeur and van Hoof 2021), and it challenges even more their participation in clinical decision-making relative to their treatment, as they might experience an information overload. In addition, the high cost of these technologies, their sophistication, and the technical and scientific expertise they demand challenge fair and just access to them, both nationally and globally.

Treatment

After the success of HER2 and the adjuvant therapy trastuzumab, other treatment modalities have been explored to propose alternative targeted therapies that could address tumour heterogeneity and the biological complexity represented by redundant activation of signalling pathways downstream of HER2. There is currently a broad selection of anti-HER2 treatments for HER2+ tumours with primary resistance, in the form of monoclonal antibodies, antibody-drug conjugates or tyrosine kinase inhibitors, for instance. However, it has been difficult to find adequate biomarkers to select between these different modalities. Why has the development slowed down on the biomarker side? Why have the tight collaborations that took place on the oncogene bandwagon between academia, pharmaceutical industries and regulatory agencies not continued with the same intensity and simplicity? If we return to Fujimura’s notion of the oncogene theory-and-technology ‘package’, we see that at the level of basic science, this package would travel quite seamlessly between the different actors while remaining immutable – the technologies were standardised, and the package was therefore highly transportable without changing its initial shape. At the treatment level, this seems to be a different story. The various actors have confronted the package with the heterogeneity and complexity of cancer, which made it less transportable and immutable. Multiple treatment modalities are being developed to try to address the high complexity of cancer, but the framings, definitions, technologies and interests of academia, pharmaceutical companies and regulatory agencies no longer align as well. Issues around data sharing agreements emerge (Antoniou et al. 2019), and the need is growing for pharmaceutical companies to come up with a different business model, as potentially excluding patients from the target treatment population is not compatible with an interest in optimising profits (Parker 2018).
Indeed, historically, most pharmaceutical companies have relied on the business model of blockbuster drugs, whereby companies derive their profits from a small number of drugs which can be marketed widely to a broad population (OECD 2011). Shifting to a precision oncology model that relies on targeted therapies for subgroups of patients will reduce the market size for a drug and will unbalance the optimised ratio between profits and development costs. As mentioned in the OECD report from 2011: “Increasingly, to serve the original market, two or three different drugs may be needed, potentially increasing development costs to serve the same market size and accrue the same revenue.” (p. 33) A variety of new business models are being adopted by the pharmaceutical industry, tailored to biomarker application in pharmacogenetics and diagnostics. Some of these new business models also use biomarkers to improve the efficacy of existing drugs or to repurpose them. But maintaining profits in a context of increasingly segmented markets, an evolving regulatory environment, and unequal drug developments in terms of speed, costs and efficacy remains a challenge to date.

Having this array of treatment options gives rise to two other ethical, social and economic implications. First, the costs of these treatment options range from $5,134 per cycle for trastuzumab to $10,290 per cycle for pertuzumab (Hassett et al. 2020). This raises the question of the fair and just accessibility of such treatments, both nationally and globally. Nationally, countries that do not offer public health care schemes might suffer from important discrepancies between the well-insured and the non-insured in access to those targeted therapies. In particular, targeted therapies could add billions of dollars per year to the cost of public health care in the USA and in Europe, and these costs will have to be met by the insurance sector (Blanchard 2016). However, Ginsburg and Willard (2009) note that it is not certain whether insurance companies will be able and willing to reimburse these costs. Some American insurance companies have indeed already begun off-loading the expenses onto patients, leading to 62% of personal bankruptcies being attributed to medical costs, principally cancer (Jackson and Sood 2011). Similarly, at the global level, personalised cancer therapies increase discrepancies in access to such treatments, not only because of the high costs of treatments, which fail to be absorbed by health care systems in developing countries, but also because of the sophistication of the technologies required for the diagnostic part. This means that personalised cancer therapies in general are mostly accessible to the wealthy.

The second implication of having these many options in anti-HER2 treatments is that it creates an overwhelming choice that complicates clinical decision-making and the integration of the patient in this decision-making. The different treatment options come with indications about which patients they might be most effective for (ECOG performance status, size of tumour, earlier treatments, potential side-effects, etc.), but since diagnoses are surrounded by uncertainties, and the expression of HER2 positivity is sometimes unclear, treatment options are chosen on the basis of the best, but uncertain, knowledge.

Where Are We Now? The Imaginary of Precision Oncology

We have looked at how the oncogene bandwagon gathered steam and allowed for unique collaborations between academia, pharmaceutical companies and regulatory agencies, thus opening up the field for unprecedented successes such as HER2. We saw how the oncogene bandwagon constrained cancer research trajectories into a domain where looking at oncogenes was the top priority of cancer research agendas. The oncogene ‘research programme’ met its limits with tumour heterogeneity and biological complexity, with important social, ethical and economic implications. We saw how some researchers progressively jumped off the bandwagon in the face of these limitations, and the paradigm shifted towards ‘progressive’ research programmes looking, among other things, at immunotherapy, large-scale (omics) data, biomarkers of the tumour microenvironment, and composite (signature) biomarkers. Having revisited HER2, it is clear that there is a high and ongoing potential for reflexivity and adaptability in the field of cancer research, as a persisting basis for ‘scientific revolutions’, for researchers to reinvent their field so that it continues to be ‘progressive’. In this section we look at one of the key contemporary legacies of the HER2 story, namely the sociotechnical imaginary of precision oncology.

The HER2 story is still used as evidence to bolster the sociotechnical imaginary of precision oncology. The concept of sociotechnical imaginaries was developed by Jasanoff and Kim in 2013 and defined as “collectively held and performed visions of desirable futures […] animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology” (Jasanoff 2015, p. 25). In other words, sociotechnical imaginaries are visions created and shared by actors in science, industry and politics, of desirable and feasible futures, attainable through science and technology (Strand et al. 2018). They allow for a collective sense-making of social and technological futures, but they are not neutral: they are embedded in social and political negotiations and practices, and “almost always include implicit shared understandings of what is considered to be ‘good’ or desirable, such as what constitutes “public good” or a “good” society, or how science and technology could meet public needs” (Ballo 2015, p. 12). In that sense, some aspects might be prioritised over others, as some actors have more power than others to participate in the creation of these sociotechnical imaginaries.

Cancer research today is strongly steered by the sociotechnical imaginary of precision oncology. Precision oncology is marketed in policy documents as aiming to achieve “the right therapeutic strategy for the right person at the right time, and/or to determine the predisposition to disease, and/or to deliver timely and targeted prevention” (EC 2015, p. 3). Significant funds are channelled into this ambitious effort. For instance, the EU 7th Framework Programme funded 209 projects on personalised medicine for a total amount of €1.334 billion over the period 2007–2013; and the EU Horizon 2020 programme funded 167 projects on personalised medicine for a total amount of €872 million over the 2014–2017 period (EC 2017). In parallel, precision oncology is supported by new technologies relying on big data, such as emerging imaging and next-generation sequencing techniques, often described as being “cost- and time-effective sequencing of tumour DNA, leading to a ‘genomic era’ of cancer research and treatment” (Morganti et al. 2019, p. 9). Significant and important progress has been made in the last few years, with new biomarkers and associated targeted therapies being developed; however, none of these have reached the outstanding success of HER2. These advances and promises are relayed by the media as capable of revolutionising cancer treatment, and are thus accompanied by a strong hope among (future) patients and policymakers alike.

The technoscientific imaginary of precision oncology is producing, among other things, a culture of medicalisation, where the expected medical ‘miracles’ are no longer perceived as “mirages”, but “solidly out there on the horizon” (Callahan 2003), and it is frequent to see new communities of patients or ‘biocollectives’ expect and demand a tailored treatment for their diseases (Brekke and Sirnes 2011). There is a fusion of hope and reality, and a confusion between the temporalities of precision oncology: on one side, the ‘imaginary’ of a desirable future where targeted cancer drugs work without any ambiguity; on the other side, the reality of cancer research today and the limits it faces with regard to tumour heterogeneity, for instance. This confusion fuels the idea that biomarkers are robust enough to (soon) offer solutions and tell us who has, or is at risk of having, a particular type of cancer; who can be treated, with what and when; and how the patient might react to the treatment, including the risk of relapse (Boniolo and Campaner 2019).

We have seen from revisiting the story of HER2 that finding answers to all these questions is rather ambitious, and it would be a shortcut to think that a robust biomarker could help solve all dilemmas related to clinical decision-making, as well as the ethical and social dilemmas related to cut-offs, for instance. In a chapter called “What is a good (enough) biomarker?” (2017), Bremer and Wik explored the different dimensions, from the oncology and policy perspectives, according to which a biomarker might be deemed good (enough): analytical validity, clinical validity and clinical utility, as well as improving the health and quality of life of cancer patients and contributing to the sustainability and fairness of healthcare systems. Following that, they discussed the importance of highlighting the opportunity costs of the imaginary of precision oncology. Being steered by the aim of achieving “the right therapeutic strategy for the right person at the right time” (EC 2015), and trying to find ‘perfect’ biomarkers that can support this endeavour, makes us miss important aspects. The key message of Bremer and Wik was that it is impossible for one single biomarker to score high in all of the above-mentioned dimensions. Indeed, choices have to be made when designing a new biomarker, according to its purpose. For instance, while a highly sophisticated and composite biomarker might help better understand complex cancer biology, it might face important challenges of quality, uncertainty and difficult implementation in a clinical setting. Similarly, while a simpler biomarker might find broader clinical application and be more widely accessible, the cut-off for patient stratification might be rougher. Therefore, alongside the search for biomarkers that can do it all, it is also important to highlight the potential for highly relevant research on how biomarkers can be good enough in certain settings, and how they should be evaluated and implemented.
In particular, the ‘quality’ of a good (enough) biomarker can be evaluated through its ‘fitness for purpose’ (the biomarker does what it is supposed to do), rather than through its capacity to score points in all the above dimensions. This could help curb the spiralling culture of medicalisation, where limits to the realisation of extraordinary treatments are perceived as being only political, not scientific; and where questions of justice and fairness in healthcare distribution are often reduced to a mere problem of lack of accuracy, precision, sensitivity or specificity on the part of the biomarker.

The reality of precision oncology today is that 99% of published cancer biomarkers fail to enter clinical practice (Kern 2012; see also Ioannidis and Bossuyt 2017 and Ren et al. 2020). Let us look at that 99% through the ‘degenerative’ and ‘progressive’ research programmes of Lakatos. At first sight, it would seem rather rational to jump off the precision oncology research programme and move towards a more ‘progressive’ type of research. However, Lakatos argues that it is neither irrational nor rational, neither good nor bad, to stay on a degenerating programme: “One may rationally stick to a degenerating programme until it is overtaken by a rival and even after. What one must not do is to deny its poor public record. […] It is perfectly rational to play a risky game: what is irrational is to deceive oneself about the risk.” (pp. 104–105) This means, then, that if precision oncology is regarded as the future of cancer research, we have to accept Kern’s claim, be open and honest about the 99% of published biomarkers which do not make it to practice, be ready to justify the risks and opportunity costs of such research efforts, and be transparent about the fact that precision oncology, as shown by the HER2 story, operates within the limits of tumour heterogeneity and the complexity of cancer biology.

Conclusion

In this chapter, we looked at the story of HER2 from its discovery and basic studies to biomarker development and early clinical trials. We then revisited HER2’s story to reflect on the conditions of its inception, some of the reasons for its success, and the challenges met along the way. In particular, we drew a parallel between the story of HER2 and a bandwagon, to see HER2 as a standard theory-and-technology package that could easily circulate around the network of actors and organisations working on oncogene research, greatly facilitating its development. Nevertheless, revisiting HER2 made clear that despite its extraordinary success, this biomarker operates in a context of high biological complexity, in particular with regard to tumour heterogeneity. HER2 therefore faces legal, social and economic challenges and dilemmas, including ragged edges and where to justly place the cut-off between HER2+ and HER2− patients, questions of fairness in access to high-priced and sophisticated technologies and therapies, and the difficult partnerships between academia and pharmaceutical companies needed to bring a scientific discovery to the clinic. Revisiting HER2 also more generally highlighted that the fields of cancer biomarker research and precision oncology, where HER2 belongs, are based on a sometimes confusing blend of hope and reality: hope that targeted therapies will (soon) work for every cancer patient, and the reality of the complexity of cancer biology.

Based on these observations, we reflected upon two aspects relating to the future of cancer biomarker research. First, it is important not to be ‘blinded’ by the prospects of precision oncology and strive at all costs for hyper-precision and an unachievable molecular certainty, numerical exactness and conceptual rigour. The field of cancer biomarkers could learn much from a more pronounced focus on ‘good enough’ biomarkers: how they can support patients well enough in certain settings, and how they can potentially reconcile cancer as a disease and as an illness, by, for instance, giving a greater place to the patient’s personal experiences of living with cancer. Second, if precision oncology is regarded as the future of cancer research, then we have to accept the uncomfortable claim by Kern (2012) and be honest about the low success rate of 1% of published biomarkers reaching clinical practice. In particular, this means that we should be ready to justify the risks and opportunity costs of precision oncology. As shown by the HER2 story, cancer biomarkers deal with intrinsically complex, open and non-deterministic systems, from cells to patients, and the field will therefore always operate within the limits of the complexity of cancer biology.