
Robust Biomarkers: Methodologically Tracking Causal Processes in Alzheimer’s Measurement

Chapter in Uncertainty in Pharmacology

Part of the book series: Boston Studies in the Philosophy and History of Science (BSPS, volume 338)

Abstract

In biomedical measurement, biomarkers are used to achieve reliable prediction of, and useful causal information about, patient outcomes while minimizing the complexity, resource demands, and invasiveness of measurement. In this paper we discuss a specific methodological use of clinical biomarkers in pharmacological measurement. We confront the reliability of clinical biomarkers that are used to gather information about clinically meaningful endpoints. Next, we present a systematic methodology for assessing the reliability of multiple surrogate markers (and biomarkers in general). We propose three conditions for a robust methodology for biomarkers: (R1) intervention-based demonstration of partial independence of modes; (R2) comparison of diverging and converging results across biomarkers; and (R3) information within the context of theory. Finally, we apply our robust methodology to ongoing Alzheimer’s research to draw specific theoretical conclusions about promising causal culprits as well as decoupled biomarkers and endpoints.


Notes

  1. See Mayeux (2004) and Aronson (2005) for classification of CBs.

  2. Katz (2004) describes all biomarkers as being “candidate” surrogate markers.

  3. The modeling work for this project was completed in 2015 and 2016, when this was still an unfolding empirical puzzle.

  4. See Wimsatt (2007); Levins (1966); Weisberg (2006); and Glymour (1980).

  5. See Horwich (2011); Hacking (1983); Franklin (1997); Sober (1989); Trout (1998); Culp (1994); and Stegenga (2009, 2012).

  6. Woodward adds that inferential robustness “…is taken to show that D supports S or provides a reason for believing S” (2006, 220).

  7. For example, assumptions in the kinetic theory of heat can be used to explain the function of two different thermometry procedures.

  8. Philosophers have addressed the “individuation” (independence) of modes of evidence. See Franklin (1997); Sober (1989); Culp (1994); Keeley (2002); Staley (2004); Douglas (2004); Wimsatt (2007); Stegenga (2009, 2012); Lloyd (2010); Schupbach (2016); and Keyser (2016).

  9. For example, consider data on some disease derived from electronic health records (EHR) by clustering health and lifestyle variables, and lab data on the development of the same condition from animals exposed to a specific toxin. The EHR data build on the idealized assumption that the patients surveyed in the given sample do not all share a specific hidden confounder that would skew the effect. The lab data build on the idealized assumption that the animal physiology used is relevantly similar to human physiology for this type of effect. Because these assumptions are not shared, convergence of results is less likely to come from the same systematic error.

  10. A process threshold is analogous to a point of no return or a penultimate risk factor that relates to some physical/structural/tissue change in the underlying workings of a system. For example, glomerulosclerosis precedes the endpoint of end-stage renal disease/kidney failure.

  11. Woodward’s (2004) account of causation is relevant here. It would be interesting to apply Woodward’s operation of manipulating one variable (the surrogate marker) to observe changes in another (the target variable).

  12. If a surrogate is “reasonably likely” to forecast an outcome, but such a tether is not fully conclusive based on the evidence, the surrogate may be considered unvalidated and used for accelerated approval of drugs and medical devices in pressing clinical situations with few alternatives. In accordance with FDA regulations (CFR Title 21 Subpart H), these unvalidated SMs must be subsequently validated (Katz 2004). Unvalidated surrogates are also used in pre-clinical or pilot trials exploring safety or reasonable likelihood alone. As the spectrum of disease far outstrips our toolkit of validated surrogates, most disease-centered biomedical literature utilizes unvalidated surrogates as sources of evidence. It is important to note that the FDA lists only four validated surrogate markers: systolic blood pressure (SBP), low-density lipoprotein (LDL) cholesterol level, forced expiratory volume in 1 second (FEV1), and human immunodeficiency virus (HIV) viral load (http://www.fda.gov/AboutFDA/Innovation/ucm512503.htm).

  13. That is, the intervention may produce other causal interactions that are relevant to the outcome of interest. Additionally, combinations of (C1), (C2), and (C3) are probable in biological systems.

  14. There is disagreement about combining biomarkers to make useful predictions. In Alzheimer’s research, Lehmann et al. (2014) say that combining adequate markers (e.g., 80% specificity and sensitivity) improves their utility. Palmqvist et al. (2015) argue that combining markers does not improve their predictive utility, although they do not directly address higher ranges of specificity and sensitivity. (A numerical sketch of the trade-offs at issue follows this note.)
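    Part of this dispute turns on simple probability arithmetic. Below is a minimal sketch, assuming two markers that are conditionally independent given disease status (an idealization that rarely holds exactly in a physiological network); the function names and the 80% figures are illustrative only, not drawn from the cited studies.

    ```python
    # Hypothetical sketch: how combining two biomarkers changes sensitivity and
    # specificity, assuming (idealized) conditional independence of the markers.
    # The 0.80 figures echo the "adequate marker" benchmark in the note.

    def combine_and(se1, sp1, se2, sp2):
        """Call 'positive' only if BOTH markers are positive (conjunctive rule)."""
        sensitivity = se1 * se2                   # both must detect a true positive
        specificity = 1 - (1 - sp1) * (1 - sp2)   # false alarm needs BOTH to misfire
        return sensitivity, specificity

    def combine_or(se1, sp1, se2, sp2):
        """Call 'positive' if EITHER marker is positive (disjunctive rule)."""
        sensitivity = 1 - (1 - se1) * (1 - se2)   # a miss needs BOTH to misfire
        specificity = sp1 * sp2                   # both must clear a true negative
        return sensitivity, specificity

    if __name__ == "__main__":
        se, sp = 0.80, 0.80
        print("AND rule:", combine_and(se, sp, se, sp))  # (0.64, 0.96)
        print("OR rule: ", combine_or(se, sp, se, sp))   # (0.96, 0.64)
    ```

    Under the conjunctive rule specificity improves at the cost of sensitivity, and vice versa for the disjunctive rule. Whether either trade counts as “improved utility” depends on the clinical costs of false positives versus false negatives, and on whether the independence idealization holds, which may partly explain the disagreement between the cited studies.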

  15. As discussed, because sensitivity and specificity carry uncertainty, it would not be a simple case of using the “highest” scoring surrogate marker.

  16. Here, “error” does not refer to measurement error. In other words, we assume that the measurements reflect the actual value of the biomarker. In the case of biomarkers, “error” refers to the interference or confounding of unspecified biological variables in a physiological network.

  17. See Hacking (1983); Woodward (2006); Barad (2007); Stegenga (2009, 2012); and Keyser (2016).

  18. We thank an anonymous reviewer for the suggestion that ‘partial independence’ involves causal models.

  19. Absent this, we run into the circularity that two metrics are partially independent because we observe discordance, and we can glean causal information from discordance among said markers because they are partially independent.

  20. Due to a significant worsening of cognitive scores and the emergence of several alarming off-target effects in the semagacestat groups, the full panel of biomarker assessments was not completed in humans prior to termination of clinical trials.

  21. Orzack and Sober’s (1993) criticism still looms. Perhaps it is some common core, shared by the individual modes, that is driving the robust result. See Justus (2012) for a summary of the concern: robustness analysis may only reflect shared properties of models rather than anything about the real-world system (798).

  22. To illustrate his point, Schupbach uses the example of Perrin’s modes of measuring Brownian motion. While Perrin’s uses of varieties of pollen are not “strongly heterogeneous,” because each experiment uses a type of pollen, the experiments with varieties of pollen are different enough to rule out potential confounding hypotheses, such as the hypothesis that Brownian motion is due only to a specific type of pollen (2016, 316).

  23. Keyser (2016) draws on van Fraassen’s (2008) discussion of the relation between theory and measurement practice: theory classifies what is being measured. However, as Keyser points out, this does not have to be a fully developed theory; it can even amount to a theory of how the instruments work. This may be applicable to cases of biomarker measurement where there is no overarching theory.

  24. Additional imaging techniques are being adapted, and blood plasma measurements of Aβ are being developed.

  25. Currently, these are thought to be the most toxic form, but lesser or interacting toxicity of other forms of Aβ is not discounted. The current discussion revolves around the many forms and sizes of oligomers: some toxic, some not, prefibrillar forms, and fibrillar (fiber-like) forms, which can form either diffuse or dense plaques. The general order of formation is asserted to be: peptides, small oligomers, larger oligomers, prefibrillar forms, fibrillar forms, diffuse plaques, dense neuritic plaques. But this formation order is in no way invariant, as there are branches, two-way streets, and overlaps. It is important to note that much of this information on the ACH and oligomers was hypothesized before the biomarkers were characterized. It can be argued that such theoretical explanations were not fully integrated into the theoretical model of the ACH until recently, with the help of biomarkers. The most current model of the theory is discussed by Selkoe and Hardy (2016).

  26. The second reason had been suspected (reviewed in Walsh and Selkoe 2007; for a review arguing that monomers may actually be protective, see Giuffrida et al. 2009).

  27. Such discordance has been observed in 21% of normal individuals, 12% of MCI cases, and 6% of cases with diagnosed Alzheimer’s dementia (Mattsson et al. 2014). It is worth noting that the oligomer sub-model, discussed later in the paragraph, can account for both types of discordance mentioned in this study: the larger discordance of the florbetapir(+) (PET-positive)/CSF Aβ(−) group and the smaller discordance of the florbetapir(−) (PET-negative)/CSF Aβ(+) group.

  28. This has indeed been demonstrated by Toledo et al. (2014) and Ritchie et al. (2016).

  29. Additionally, OSM can account for the fact that an individual may have cognitive decline even with low CSF Aβ and no plaques because processing enzymes that produce Aβ are also necessary for the production of neurotrophic and neurodifferentiation factors (Willem et al. 2006; Woo et al. 2009; De Strooper et al. 2010). Thus, low activity might lower both CSF Aβ and factors important for optimal neuronal function. This may be seen in neuroinflammation or CNS infection as well (Krut et al. 2012).

  30. In simpler terms, scientists replaced the “tail” (also known as the constant region) of the antibody with that from a mouse, so that the mouse’s immune system would not mount an immune response to the human antibody. This shows that it is the specific antigen-binding region from the screen that interacts with Aβ oligomers.

  31. While awaiting Phase III efficacy trials, which were begun immediately upon consolidation of positive findings, a lingering question remains (Lee et al. 2006): whether plaque destabilization could actually lead to increased exposure of neurons to toxic forms of amyloid, as many structural models indicate that plaque-oligomer interconversion could be bidirectional.

  32. Solanezumab is currently undergoing pooled subgroup re-evaluation from two Phase III trials after small but significant positive findings (34% deceleration of cognitive decline over 18 months versus placebo) were observed in Alzheimer’s Disease Assessment Scale (ADAS) cognitive domain scores in those with mild impairment (Toyn 2015; Ratner 2015; Selkoe and Hardy 2016). Results of a follow-up Phase III study are in the offing (http://www.alzforum.org/therapeutics).

  33. As well as an increase in a factor not measured in the bapineuzumab study: plasma Aβ.

  34. This has the corollary that plaques are being pulled out faster than oligomers can form new plaques, and that monomers cannot form new oligomers as fast as oligomers go into plaques. This is similar to Le Chatelier’s principle. This could also be turned into a “bidirectionality” theoretical model. (A toy kinetic sketch of this equilibrium shift follows this note.)
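    To make the Le Chatelier analogy concrete, here is a toy sketch assuming mass-action kinetics over a bidirectional monomer ⇌ oligomer ⇌ plaque chain with a constant monomer source. Every rate constant, and the function simulate itself, is invented for illustration; this is a sketch of the direction of the shift, not a fitted model of amyloid kinetics.

    ```python
    # Toy Le Chatelier-style sketch of the note's corollary: antibody-mediated
    # plaque clearance drains the plaque pool faster than oligomers can reform
    # plaques, pulling down the oligomer pool as well. All rate constants are
    # hypothetical.

    def simulate(k_clear, src=0.01, k_mo=0.05, k_om=0.01, k_op=0.10, k_po=0.02,
                 m=0.3, o=0.5, p=2.0, dt=0.01, steps=200_000):
        """Euler-integrate the kinetics; returns (monomer, oligomer, plaque)."""
        for _ in range(steps):
            dm = src + k_om * o - k_mo * m                  # source + back-flow - forward flow
            do = k_mo * m - k_om * o + k_po * p - k_op * o  # net flow into the oligomer pool
            dp = k_op * o - k_po * p - k_clear * p          # growth - dissolution - clearance
            m += dt * dm
            o += dt * do
            p += dt * dp
        return round(m, 3), round(o, 3), round(p, 3)

    if __name__ == "__main__":
        # The initial state is the steady state under slow clearance, so the
        # first run stays put while the second shows the equilibrium shift.
        print("slow clearance:", simulate(k_clear=0.005))  # ~(0.300, 0.500, 2.000)
        print("fast clearance:", simulate(k_clear=0.300))  # plaque AND oligomer levels drop
    ```

    Raising the clearance rate lowers the steady-state plaque level, and, because plaques are pulled out faster than oligomers replenish them, the oligomer and monomer pools settle lower as well, mirroring the note’s corollary.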

  35. See C1 for support.

  36. See background on solanezumab earlier in the section for support.

  37. In a stronger form of robustness analysis, we can use the convergence of results to eliminate Sevigny’s theoretical model. Alternatively, auxiliary modifications can be made positing a more complex causal relationship between PET positivity, oligomer change, and cognition. But given the analysis in C1 and C3 thus far, we can at least cast doubt on Sevigny’s theoretical model.


Copyright information

© 2020 Springer Nature Switzerland AG


Cite this chapter

Keyser, V., Sarry, L. (2020). Robust Biomarkers: Methodologically Tracking Causal Processes in Alzheimer’s Measurement. In: LaCaze, A., Osimani, B. (eds) Uncertainty in Pharmacology. Boston Studies in the Philosophy and History of Science, vol 338. Springer, Cham. https://doi.org/10.1007/978-3-030-29179-2_13
