
Causal Criteria in Medical and Biological Disciplines: History, Essence, and Radiation Aspect. Report 3, Part 2: Hill’s Last Four Criteria: Use and Limitations

  • SCIENTIFIC SEARCH METHODOLOGY
  • Published in Biology Bulletin

Abstract

Report 3 is devoted to the history, nature, and limitations of the epidemiological criteria for causality (“Hill’s criteria”). Based on material from the original publications of leading researchers of causality (A.B. Hill, M.W. Susser, K. Rothman, et al.; 1950s–2019), from dozens of modern textbooks on epidemiology and carcinogenesis, from documents of international and internationally recognized organizations (UNSCEAR, BEIR, USEPA, IARC, etc.), and from many other sources, part 2 of this report considers Hill’s last four criteria: biological plausibility, coherence with current facts and theoretical knowledge, experiment, and analogy. For each criterion, the theoretical and practical aspects are presented: history of appearance, terminology, philosophical and epidemiological essence, applicability in various disciplines, and limitations. Factual examples are provided for each criterion, including data from radiation epidemiology and radiation medicine.


Notes

  1. So far, there is no evidence that these subtle changes can manifest as tangible disorders, anomalies, or pathologies; as noted, none have been registered in tens of thousands of descendants in various cohorts over more than 60 years [30–37].

  2. The authors of [40, 41] indicate that, since the age of the offspring did not exceed 40 years, and epigenetic changes depended on age, there is a possibility that the frequency of transgenerationally transmitted changes could increase (be accelerated) in the future.

  3. For example, according to [44], for the drugs encainide and flecainide, after a mean follow-up period of ten months, 89 patients died: 59 from arrhythmias (43 with the drug vs. 16 with placebo; p = 4 × 10⁻⁴), 22 from non-arrhythmic cardiac causes (17 vs. 5; p = 0.01), and eight from noncardiac causes (3 vs. 5).
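
The order of magnitude of such a p-value can be illustrated with an exact two-sided binomial test. This is only a sketch under a simplifying assumption: each arrhythmic death is taken as equally likely to occur in either arm under the null hypothesis (i.e., equal-sized arms); the published values came from the trial’s own survival analysis, so the sketch is not a reproduction of [44].

```python
from math import comb

def binom_two_sided_p(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test: sum the probabilities of all
    outcomes no more likely than the observed count k out of n."""
    pmf = [comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(n + 1)]
    observed = pmf[k]
    return sum(prob for prob in pmf if prob <= observed + 1e-15)

# 43 of the 59 arrhythmic deaths occurred in the drug arm (note 3):
print(binom_two_sided_p(43, 59))  # a small two-sided p-value, well below 0.01
```

Under these assumptions the result lands in the same range as the published p = 4 × 10⁻⁴, which is all the sketch is meant to show.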

  4. “A remedy which is known to work, though nobody knows why, is preferable to a remedy which has the support of theory without the confirmation of practice” [58].

  5. “Many things which in theory ought to be highly effective turn out in practice to be completely useless” [58].

  6. “It is not based on any theory of how the treatments might work” [59].

  7. “The question to which we must always find an answer is not ‘should it work?’ but ‘does it work?’ (even if we do not know why)” [58].

  8. The term “biologically implausible” can be found in an 1854 decision of the English public health authority, according to which the evidence of cholera transmission through London water (obtained by the researcher John Snow) was not supported by laboratory evidence [7].

  9.   “Another factor is the widespread overemphasis on statistical approaches, with the concomitant tendency to neglect the fact that epidemiology is a biologic science concerned with disease in human beings” [60].

  10.  The World Health Organization/International Programme on Chemical Safety provides a formal framework for assessing data on pathways of causal key events leading to adverse health outcomes [78].

  11.  For example, in the manual by M. Szklo and F.J. Nieto from 2019 (fourth edition [65]), the “Coherence,” “Specificity,” and “Analogy” criteria are removed from the list of criteria for causality. It is stated: “We, like other authors (L. Gordis from 2014 [67], K.J. Rothman and S. Greenland from 2005 [50]), believe that these three guidelines are useless for the following reasons: ‘Coherence’ is difficult to distinguish from ‘Biological Plausibility’…”.

  12.  A summarizing compilation (semantic translation) of works [18, 19] is presented; their material differs somewhat in completeness.

  13.  “Coherence is an ultimate and yet not a necessary criterion for causality” [67].

  14.  “Coherence is comforting; incoherence by itself is often not destructive of a hypothesis but emphasizes gaps in scientific understanding” [86].

  15.  “All scientific work is incomplete—whether it be observational or experimental. All scientific work is liable to be upset or modified by advancing knowledge. That does not confer upon us a freedom to ignore the knowledge we already have, or to postpone the action that it appears to demand at a given time” [1].

  16.  This conclusion should not be taken too seriously; it is a kind of sophism. First, we do not have data on the dynamics of such chrono-changes for all world cohorts of workers in the nuclear industry (the example concerns workers in England, not all cohorts [117]). Second, absolute (not relative) risks are more important, and these, compared with previous decades, obviously decreased along with the background values for the general population. Third, as the period of employment lengthens, the “healthy worker effect” becomes smaller [117]. However, for the media and even for ordinary scientific consciousness, this paradox may seem socially significant.
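
The note’s second point, that absolute rather than relative risks matter, can be illustrated with a toy cohort calculation (all counts are hypothetical, not data from [117]). The relative risk is the ratio of the incidence proportions in the exposed and unexposed groups; the absolute excess is their difference, which shrinks along with the background rate even when the ratio stays fixed.

```python
# Hypothetical cohort counts (illustration only; not data from [117]).
exposed_cases, exposed_n = 30, 10_000    # exposed workers
control_cases, control_n = 20, 10_000    # unexposed comparison group

risk_exposed = exposed_cases / exposed_n     # incidence proportion, exposed
risk_control = control_cases / control_n     # incidence proportion, unexposed

relative_risk = risk_exposed / risk_control      # ratio, here 1.5
absolute_excess = risk_exposed - risk_control    # difference, here 0.001
excess_per_10000 = absolute_excess * 10_000      # 10 extra cases per 10,000

print(round(relative_risk, 3), round(excess_per_10000, 3))
```

Halving both incidence proportions would leave the relative risk at 1.5 but halve the excess to 5 cases per 10,000, which is the asymmetry the note relies on.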

  17.  Rem—roentgen equivalent man; 1 rem = 0.01 Sv [121].
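
The conversion in this note is a single multiplicative factor; a minimal helper (illustrative only) makes both directions explicit:

```python
REM_PER_SIEVERT = 100.0  # 1 rem = 0.01 Sv [121]

def rem_to_sievert(dose_rem: float) -> float:
    """Convert an equivalent dose from rem to sieverts."""
    return dose_rem / REM_PER_SIEVERT

def sievert_to_rem(dose_sv: float) -> float:
    """Convert an equivalent dose from sieverts to rem."""
    return dose_sv * REM_PER_SIEVERT

print(rem_to_sievert(5.0))   # 0.05 (Sv)
print(sievert_to_rem(0.01))  # 1.0 (rem)
```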

  18.  “We all have a vague feeling that if we can make an event occur, we understand it better than if we simply observe it passively” [123] (quoted from [124]).

  19.  “Although experimental tests can be much stronger than other tests, they are not as decisive as often thought, because of difficulties in interpretation” [122]. Anyone who has received experimental work for review will agree with this. Sometimes it is the authors’ interpretation, at times incorrect and subjective, that forms the basis of the conclusions, while the essence of the methodology (in relation to the stated objective) is not always examined in depth.

  20.  “As Popper emphasized, however, there are always many alternative explanations for the outcome of every experiment” [50, 122].

  21.  “This simple fact is especially important to epidemiologists, who often face the criticism that proof is impossible in epidemiology, with the implication that it is possible in other scientific disciplines. Such criticism may stem from a view that experiments are the definitive source of scientific knowledge. Such a view is mistaken on at least two counts. First, the nonexperimental nature of a science does not preclude impressive scientific discoveries; the myriad examples include plate tectonics, the evolution of species, planets orbiting other stars, and the effects of cigarette smoking on human health. Even when they are possible, experiments (including randomized trials) do not provide anything approaching proof, and in fact may be controversial, contradictory, or irreproducible” [50].

  22.  “Some experimental scientists hold that epidemiologic relations are only suggestive, and believe that detailed laboratory study of mechanisms within single individuals can reveal cause–effect relations with certainty. This view overlooks the fact that all relations are suggestive in exactly the manner discussed by Hume: even the most careful and detailed mechanistic dissection of individual events cannot provide more than associations, albeit at a finer level. Laboratory studies often involve a degree of observer control that cannot be approached in epidemiology; it is only this control, not the level of observation, that can strengthen the inferences from laboratory studies. Furthermore, such control is no guarantee against error. All of the fruits of scientific work, in epidemiology or other disciplines, are at best only tentative formulations of a description of nature, even when the work itself is carried out without mistakes” [50].

  23.  Above, attention was drawn to the broad, literal citation of this “credo” of Hill’s in foreign publications. A Google search for the exact combination of words gives about 1.65 million sources for the quote; the statement is considered, among other things, within the framework of the “precautionary principle.”

  24.  With regard to Hill’s “Experiment” criterion, [51] states “To different observers, experimental evidence can refer to clinical trials, to laboratory experiments with rodents or other nonhuman organisms, or to both.”

  25.  “It is not clear what Hill meant by experimental evidence. It might have referred to evidence from laboratory experiments on animals, or to evidence from human experiments” [122].

  26.  For example, in the early 1980s, the non-steroidal anti-inflammatory drug benoxaprofen was developed for the treatment of arthritis/musculoskeletal pain [135, 145] (under the commercial names Opren in Europe and Oraflex in the United States [135]). A large-scale RCT in a population aged 18–65 demonstrated its efficacy, and the drug began to be promoted through aggressive marketing in the United Kingdom and the United States. However, thousands of elderly patients experienced severe side effects, and many deaths from hepatorenal insufficiency were registered [135, 145] (according to a parliamentary report, 77 deaths in the United Kingdom alone [146]).

  27.  The history of the development of the RCT can be found on the website of the thematic “James Lind Library” (Edinburgh).

  28.  “In epidemiology, scientific investigations often proceed inductively” [166].

  29.  “Inductive methods constitute the substance of standard epidemiological texts such as Rothman’s, Kahn’s, Miettinen’s, etc.” [167].

  30.  “Certainly epidemiologists are in the habit of generating hypotheses by induction from the arrays of descriptive data and existing knowledge with which their studies are bound to begin” [168].

  31.  “My reply is that epidemiological inferences are but a part of a wider (inductive) epidemiological process” [169].

  32.  “...epidemiology is essentially an inductive science, concerned not merely with describing the distribution of disease, but equally or more with fitting it into a consistent philosophy” [172] (cited from [173]).

  33.  “Often experimental epidemiology is simply equated with randomized controlled trials” [47].

  34. “...the book will be addressing issues relating to observational epidemiology—not experimental epidemiology.” “Almost all studies conducted by field epidemiologists are observational studies, in which the epidemiologists document rather than determine exposures” [179]. That is, “natural” semi-experiments are meant.

  35. “Although some researchers consider clinical trials that employ experimental interventions part of epidemiology, this is not the common view” [175].

  36.  “Elevate ‘vaguely formulated expectations’ into theory, rename the inductive reasoning process ‘reproduction,’ and the transformation is accomplished” [187].

  37. “Adequate human data are the most relevant for assessing risks to humans. When sufficient human data are available to describe the exposure–response relationship for an adverse outcome(s) that is judged to be the most sensitive effect(s), reference values should be based on human data” [193].

  38. “Human data form the most direct evidence for an association between health effects and exposure to chemicals” [194].

  39.  “Human data are required for conclusions that there is a causal relationship between an exposure and an outcome in humans. Experimental animal data are commonly and appropriately used in establishing regulatory exposure limits and are useful in addressing biological plausibility and mechanism questions, but are not by themselves sufficient to establish causation in a lawsuit. In vitro data may be helpful in exploring mechanisms of toxicity but are not by themselves evidence of causation” [133].

  40.  “Experimental evidence in humans would indeed constitute proof of causation…” [54].

  41.  “A number of learned and progressive jurists have recognized that, in the presence of meaningful epidemiologic studies, the contrived exposure situations of animals in laboratories produce information of relatively little value” [195].

  42.  “…human data are the most valid metric to determine human causality” [196].

  43.  Even earlier, at the beginning of the 19th century, F. Magendie (1783–1855; France) developed an approach in the field of pharmacology and therapy based on animal tests [57].

  44.  In 1938, a venereologist from Birmingham published in a provincial journal a comparative study of three drugs for the treatment of syphilis: the English “Novostab,” the American “Mapharside,” and the German “Neosalvarsan.” The German drug proved the most effective at clearing syphilitic ulcers of spirochetes in patients. The UK MRC’s approach, however, was to compare the new arsenic compounds with the standard formulations of salvarsan and neosalvarsan using a trypanosomal test in mice (the so-called “experimental epidemiology”). The possibility of a contradiction between the standard MRC approach and the direct human data obtained by E.W. Assinder (together with the “unpatriotic” nature of his conclusion) then alarmed not only the MRC but also the UK health authorities [180].

  45.  For example, in 2006, a phase I study of TGN1412, a monoclonal antibody against the CD28 T-lymphocyte receptor [204–207], developed by the German company TeGenero, was conducted in the United Kingdom in eight volunteers [206]. The drug was intended for use in autoimmune diseases and leukemias [204, 206, 207]. Preclinical trials in primates and rabbits showed no side effects [207]. Two volunteers received a placebo [206], and six received only 1/500 of the dose estimated in primates [204]. In all six volunteers, the drug elicited a severe “cytokine storm” response [204–206], leading to multiple organ failure and other severe effects [205–207]. The swelling was so great that the trial was named “The Elephant Man Clinical Trial” [206]. One participant developed dry gangrene followed by amputation of part of the foot and toes (Raynaud’s phenomenon). Cognitive and many other serious impairments were observed years after the experiment, although all subjects survived [205].

  46.  “Once again looking at the obverse of the coin, there will be occasions when repetition is absent or impossible and yet we should not hesitate to draw conclusions” [1].

  47.  “Because the only ethical experiments concerning causality in humans are experiments in prevention” (Doll, R., 1978) [107].

  48.  “Ethically, the individual involved must have the potential to benefit and yet there must be uncertainty on the question posed. For risk factors, as opposed to protective factors, there may be no such benefit” [20].

  49.  “There must be the expectation that in the population under study the radiation will lead to an improvement in health status relative to any alternative treatment” [30].

  50.  A 2018 Japanese article [212] reports the following unpublished RCT data for a “hormesis study.” N. Shimizu from Osaka University analyzed the performance of volunteers who slept daily on special radioactive mats containing 228Ac and 77Br with a background radiation of 5 µGy/h. The same mats, but without radioactivity, served as controls. Healthy volunteers (30 males and 30 females, 22–48 years old, mean age 32 years) were randomly divided equally into a “hormesis” group and a placebo group. After three months (the cumulative dose received was over 3 mGy), the reactive oxygen species levels were, on average, 3.1 and 9.4% lower than the baseline for the placebo and hormesis groups, respectively, in men, and 3.1 and 8.5% lower in women (in both groups, p < 0.05). “Sleep delay, as well as physical, psychological, and neurosensory status improved in the hormesis group compared with the placebo group” [212]. The data were presented at the Japanese Society for Radiation Oncology Symposium (Symposium for Cancer Control) in Nagoya in 2017. In another similar RCT performed by N. Shimizu on 40 men, an increase in the IgA level in saliva and a lengthening of the period of non-REM sleep were observed in the experimental group [212]. It is hardly possible to draw serious conclusions from this study, but the very fact of a real RCT with irradiation of healthy people is still unique.

  51.  “Clinical observations can be made (and to be of any use, they must be made) just as accurately as laboratory observations; but in the human subject, observations cannot be as readily controlled, the conditions cannot be so easily kept uniform or varied; in one word, the problems cannot be analyzed as they can be in the animal” [222].

  52.  The threshold for the most radiosensitive deterministic effect known to all radiobiologists—temporary suppression of spermatogenesis—in most sources, including Russian manuals, corresponds to 0.15 Gy (no references are given). In other, also significant publications, 0.1 Gy is reported (for example, [226, 227]). Few references are made to the two original studies [228, 229] (and the summarizing review [230]), and the fact that these data, which are still the reference data, were obtained in two clinical trials (1963–1973) on “volunteers” (22–52 years) in prisons of the United States (Washington and Oregon) is mentioned almost nowhere. The testes of the participants, lying on a special bed, were locally and repeatedly irradiated with X-rays, up to accumulated doses of 75 mGy–6 Gy. In these experiments, the threshold of temporary suppression of spermatogenesis in humans was revealed (biopsy samples were taken periodically), which was approximately 0.08–0.1 Gy [228–230]. The prerequisites for the experiment were the need to protect the “family jewels” (as an American Air Force colonel put it) in pilots of nuclear-powered aircraft constructed in the 1950s and in astronauts in the 1960s. In 1963, at a conference in Colorado, the leading US endocrinologist C.G. Heller said: “If everyone is interested in what happens with a person when the testes are irradiated, then why should we fuss with mice, beagle dogs, canaries, etc.? If you need to know about the human situation, why not experiment on humans?” [231]. In the early 1970s, these experiments were stopped for ethical reasons, and in 1994 a commission created by B. Clinton investigated all the circumstances, including the health and subsequent life costs of many “volunteers” [231]. These experiments cannot be called RCTs with radiation; they were simply clinical trials on healthy volunteers.

  53.  “...it would be unethical to deliberately expose healthy human volunteers to a lethal or permanently disabling toxic biological, chemical, radiological, or nuclear substance…” [215].

  54.  “Approval under the Animal Rule can be pursued only if human efficacy studies cannot be conducted because the conduct of such trials is unethical and field trials after an accidental or deliberate exposure are not feasible” [215]. “Furthermore, field trials to study a product’s effectiveness after an accidental or intentional exposure are not feasible” [232].

  55.  “In the absence of adequate data on humans, it is biologically plausible and prudent to regard agents for which there is sufficient evidence of carcinogenicity in experimental animals as if they presented a carcinogenic risk to humans” [127].

  56.  “Although this association cannot establish that all agents that cause cancer in experimental animals also cause cancer in humans, it is biologically plausible that agents for which there is sufficient evidence of carcinogenicity in experimental animals also present a carcinogenic hazard to humans” [214].

  57.  “This guidance provides information and recommendations on drug and biological product development when human efficacy studies are not ethical or feasible” [215].

  58.  “The Animal Rule states that for drugs developed to ameliorate or prevent serious or life-threatening conditions caused by exposure to lethal or permanently disabling toxic substances, when human efficacy studies are not ethical and field trials are not feasible, the FDA may grant marketing approval based on adequate and well-controlled animal efficacy studies when the results of those studies establish that the drug is reasonably likely to produce clinical benefit in humans” [215].

  59.  “Marketing approval for new radiation countermeasures for which human efficacy studies are not feasible or ethical would be based on animal efficacy studies and phase I safety data in healthy volunteers. Under this animal rule, human efficacy trials of radiation countermeasures could be bypassed through a shortened but stringent FDA approval pathway that demonstrates drug efficacy in two animal species predictive of human responsiveness, a sound understanding of the mechanisms of action, and safety in humans” [232].

  60.  “The mode of action is the way that the mechanism ultimately affects the entity.” An example from aquatic ecology is given: the binding of copper ions in the gills disrupts ion regulation, leading to a decrease in the concentration of sodium and chloride in the blood, which affects its viscosity (mechanism); this, in turn, causes arrest of the fish’s heart (mode of action) [239].

  61.  We plan to consider the omnipresence of Hill’s causality criteria for evidence both in biomedical disciplines and in a wide variety of epidemiologies (classical, field, molecular, forensic, behavioral, psychiatric, social, etc.; all such disciplines exist), in teratology, neuropsychiatry, jurisprudence, economics, etc., in Review 4. Here we see that, in ecology and toxicology, Hill’s criteria have even reached animal studies. They are, so to speak, “universal values.” This, as already noted in Part 1 of this report [9], is because the inductive–deductive rules for establishing causal dependences are the same for the human mind and are a consequence of the laws of logic, being rooted in the constructions of philosophers of past centuries, mainly D. Hume and J.S. Mill [2, 3].

  62.  Apparently, the most striking example of what the absence of animal testing can lead to after the introduction of a drug into practice was the mass poisoning with diethylene glycol in 1937 in the United States [192, 242–244]. This was the first reported case of human toxicity of this compound [242]. Children and adults were poisoned not because they “drank antifreeze” (which includes this compound), but because they took a sulfanilamide syrup “with raspberry flavor” [243], called “Elixir Sulfanilamide,” in which the solvent was 72% diethylene glycol (sulfonamides are insoluble in water, and the company wanted to make a syrup) [192, 242–244]. At that time, there was no law in the United States prohibiting the marketing of untested drugs [244], so the “elixir” passed only organoleptic testing [192], without animal testing, and was put on sale [192, 242–244]. The drug was received by 353 patients, 105 of whom (34 children and 71 adults) died of renal failure [242]. The company chemist who had the idea of using 72% (!) diethylene glycol as a solvent committed suicide [244]. As a result, a US law of 1938 made animal testing of drugs mandatory [242, 243]. Note that, unlike thalidomide of the 1950s discussed above, for which animal experiments did not show teratogenicity, for Elixir Sulfanilamide such experiments were not performed at all, despite the fact that, as was stated, as early as the beginning of the 19th century F. Magendie started using animal testing of drugs [57].

  63.  “Resorting to animal experimentation can reduce some of these problems but introduces new ones, because an inference from results in animals to effects in humans is far from trivial” [249].

  64.  “Courts are recognizing that the effort to apply laboratory animal findings to man is not an extrapolation, even though it is commonly referred to as such. Rather it is a generalization, primarily a subjective process, in which a number of undefended assumptions are implicitly invoked” [195].

  65.  However, this analytical review [250] of 121 animal-to-human extrapolation studies of the effects of medicines, interventions, and simply “events” actually gives a different picture, despite its vague conclusions. Our digitization (GetData Graph Digitizer, ver. 2.26.0.20) and calculations from the diagrams and box plots in [250] (Figs. 2, 4, 7) showed the following. Extrapolation was successful (in 50–100% of cases) for 67% of the sample (in 75–100% of cases, for 27%); by the median data, intervention studies and trials were extrapolatively adequate in 64 and 79% of cases, respectively. Finally, for individual species, the median extrapolation success rates were 82% for mice, 73% for rabbits, 67% for rats, 64% for primates, 54% for dogs, and 33% for guinea pigs. For the pig, there was only one study in [250], and extrapolation success was 100%.

  66.  M.W. Susser in 1986 [18] provides an example of the artificiality of the design of animal experiments, which may not be replicable in human observational studies. In 1966, it was found that, in rats, acute protein deficiency in the early stages of development leads to depletion of brain cells. This finding prompted many epidemiological studies of children to test the impact of early malnutrition on mental development. But such works could not and did not test the effects of acute nutritional deficiency; instead, they assessed the effects of permanent, chronic malnutrition. Thus, until the developmental effects of acute prenatal starvation during famine were studied, evidence adequate to the rat experiments could not be obtained.

  67.  “Replacing a test on a living organism with a cellular, chemicoanalytical, or computational approach obviously is reductionistic” [255].

  68.  “In vitro studies that test mechanistic pathways and demonstrate the biological role of an agent in disease progression may result in knowledge that can be used to predict potential human health outcomes in a much more time-efficient manner than human studies, particularly for adverse outcomes with a long latency period” [113].

  69.  “Risk assessments would eventually be conducted using mathematical models of toxicity pathways (TP models) to estimate exposures that will not cause biologically significant perturbations in these pathways” [258].

  70.  “Toxicity pathways models are unlikely to contribute quantitatively to risk assessments for several reasons, including that the statistical variability inherent in such complex models severely limits their usefulness in estimating small changes in the response and that such models will likely continue to involve empirical modeling of dose responses” [258].

  71.  The Ames test, the mouse lymphoma assay, and tests for micronuclei or chromosomal aberrations [260, 262] or for the formation of DNA adducts [261].

  72.  “Analogy: in some circumstances, it would be fair to judge by analogy. With the effects of thalidomide and rubella before us, we would surely be ready to accept slighter but similar evidence with another drug or another viral disease in pregnancy” [1].

  73.  Later, it turned out that W.G. McBride had falsified animal experiments with a new anti-nausea drug [263]. In our opinion, he could also have been guided by good intentions here, being frightened by thalidomide, although, of course, such methods are inexcusable even within the framework of the “precautionary principle.”

  74.  “...when one of a class of causal agents is known to have produced an effect, the standards for evidence that another agent of that class produces a similar effect can be reduced” [64].

  75.  “Analogy can be helpful, although the help seems limited since anybody with a little creativity can probably dream up an analogy!” [130].

  76.  “Analogy, then, becomes one way to invent a hypothesis, although it is a bit unimaginative. A Popperian alternative to analogy would be creative inventiveness” [272].

  77.  Howard Frumkin is Professor Emeritus of Environmental and Occupational Health Sciences at the University of Washington School of Public Health; Professor and Chair of Environmental and Occupational Health at Emory University’s Rollins School of Public Health; and Professor of Medicine at Emory Medical School from 1990 to 2005. https://deohs.washington.edu/faculty/howard-frumkin. Accessed November 11, 2020.

  78.  “Whatever insight might be derived from analogy is handicapped by the inventive imagination of scientists who can find analogies everywhere. At best, analogy provides a source of more elaborate hypotheses about the associations under study; the absence of such analogies only reflects a lack of imagination or experience, not falsity of the hypothesis” [50].

REFERENCES

  1. Hill, A.B., The environment and disease: association or causation?, Proc. R. Soc. Med., 1965, vol. 58, no. 5, pp. 295–300. https://doi.org/10.1177/0141076814562718

  2. Koterov, A.N., Causal criteria in medical and biological disciplines: history, essence, and radiation aspect. Report 1. Problem statement, conception of causes and causation, false associations, Biol. Bull. (Moscow), 2019, vol. 46, no. 11, pp. 1458–1488. https://doi.org/10.1134/S1062359019110165

  3. Koterov, A.N., Causality criteria in biomedical disciplines: history, essence and radiation aspect. Report 2. Postulates of Henle–Koch and criteria for the causality of non-infectious pathologies before Hill, Radiats. Biol. Radioekol., 2019, vol. 59, no. 4, pp. 341–375. https://doi.org/10.1134/S0869803119040052

  4. Koterov, A.N., Ushenkova, L.N., Zubenkova, E.S., et al., The strength of association. Report 1. Relative risk gradations, Med. Radiol. Radiats. Bezop., 2019, vol. 64, no. 4, pp. 5–17. https://doi.org/10.12737/article_5d1adb25725023.14868717

  5. Koterov, A.N., Ushenkova, L.N., Molodtsova, et al., The strength of association. Report 2. Correlation value gradations, Med. Radiol. Radiats. Bezop., 2019, vol. 64, no. 6, pp. 12–24. https://doi.org/10.12737/1024-6177-2019-64-6-12-24

  6. Koterov, A.N., Ushenkova, L.N., and Biryukov, A.P., Hill’s Temporality criterion: reverse causation and its radiation aspect, Biol. Bull. (Moscow), 2020, vol. 47, no. 12, pp. 1–33. https://doi.org/10.1134/S1062359020120031

  7. Koterov, A.N., Ushenkova, L.N., and Biryukov, A.P., Hill’s “Biological Plausibility” criterion: integration of data from various disciplines for epidemiology and radiation epidemiology, Biol. Bull. (Moscow), 2021, vol. 48, no. 11, pp. 1991–2014. https://doi.org/10.1134/S1062359021110054

  8. Koterov, A.N., Ushenkova, L.N., and Biryukov, A.P., Hill’s criterion ‘Experiment’: the counterfactual approach in non-radiation and radiation sciences, Biol. Bull. (Moscow), 2021, vol. 48, no. 12, pp. 2149–2173. https://doi.org/10.1134/S1062359021120062

  9. Koterov, A.N., Causality criteria in biomedical disciplines: history, essence and radiation aspect. Report 3. Part 1: Hill’s first five criteria: use and limitations, Radiats. Biol. Radioekol., 2021, vol. 61, pp. 301–332. https://doi.org/10.31857/S0869803121030085

  10. United States Department of Health, Education and Welfare (USDHEW). Smoking and Health: Report of the Advisory Committee to the Surgeon General of the Public Health Service Publication no. 1103, Washington, DC: US Department of Health, Education and Welfare, 1964. https://biotech.law.lsu.edu/cases/tobacco/nnbbmq.pdf. Accessed November 10, 2020.

  11. Susser, M., Judgement and causal inference: criteria in epidemiologic studies, Am. J. Epidemiol., 1977, vol. 105, no. 1, pp. 1–15. Reprint: Am. J. Epidemiol., 1995, vol. 141, no. 8, pp. 701–715.

  12. Strom, B.L., Study designs available for pharmacoepidemiology studies, in Pharmacoepidemiology, Strom, B.L., Ed., Baffins Lane, Chichester, West Sussex: Wiley, 2000, 3rd ed., pp. 17–30.

  13. Meek, M.E., Palermo, C.M., Bachman, A.N., et al., Mode of action human relevance (species concordance) framework: evolution of the Bradford Hill considerations and comparative analysis of weight of evidence, J. Appl. Toxicol., 2014, vol. 34, no. 6, pp. 595–606. https://doi.org/10.1002/jat.2984

  14. Carbone, M., Klein, G., Gruber, J., and Wong, M., Modern criteria to establish human cancer etiology, Cancer Res., 2004, vol. 64, no. 15, pp. 5518–5524. https://doi.org/10.1158/0008-5472.CAN-04-0255

  15. Weed, D.L. and Hursting, S.D., Biologic plausibility in causal inference: current method and practice, Am. J. Epidemiol., 1998, vol. 147, no. 5, pp. 415–425. https://doi.org/10.1093/oxfordjournals.aje.a009466

  16. Weed, D.L., Precaution, prevention, and public health ethics, J. Med. Philosophy, 2004, vol. 29, no. 3, pp. 313–332. https://doi.org/10.1080/03605310490500527

  17. Weed, D.L., Epidemiologic evidence and causal inference, Hematol. Oncol. Clin. North Am., 2000, vol. 14, no. 4, pp. 797–807. https://doi.org/10.1016/S0889-8588(05)70312-9

  18. Susser, M., Rules of inference in epidemiology, Regul. Toxicol. Pharmacol., 1986, vol. 6, no. 2, pp. 116–128. https://doi.org/10.1016/0273-2300(86)90029-2

  19. Susser, M., The logic of Sir Karl Popper and the practice of epidemiology, Am. J. Epidemiol., 1986, vol. 124, no. 5, pp. 711–718. https://doi.org/10.1093/oxfordjournals.aje.a114446

  20. Bhopal, R.S., Concepts of Epidemiology: Integrating the Ideas, Theories, Principles and Methods of Epidemiology, Oxford: Oxford Univ. Press, 2016, 3rd ed.

  21. Worrall, J., What evidence in evidence-based medicine?, Philos. Sci., 2002, vol. 69, no. S3, pp. S316–S330. https://doi.org/10.1086/341855

  22. Worrall, J., Why randomize? Evidence and ethics in clinical trials, in Contemporary Perspectives in Philosophy and Methodology of Science, Gonzalez, W.J. and Alcolea, J., Eds., A Coruna: Netbiblo, 2006, pp. 65–82.

  23. Worrall, J., Causality in medicine: getting back to the Hill top, Prev. Med., 2011, vol. 53, nos. 4–5, pp. 235–238. https://doi.org/10.1016/j.ypmed.2011.08.009

  24. Vlasov, V.V., Epidemiologiya: Uchebnoe posobie (Epidemiology: Textbook), Moscow: GEOTAR-Media, 2006, 2nd ed. (updt.).

  25. Hrobjartsson, A., Gotzsche, P.C., and Gluud, C., The controlled clinical trial turns 100 years: Fibiger’s trial of serum treatment of diphtheria, Br. Med. J., 1998, vol. 317, no. 7167, pp. 1243–1245. https://doi.org/10.1136/bmj.317.7167.1243

  26. Meldrum, M.L., A brief history of the randomized controlled trial. From oranges and lemons to the gold standard, Hematol. Oncol. Clin. North Am., 2000, vol. 14, no. 4, pp. 745–760, vii. https://doi.org/10.1016/s0889-8588(05)70309-9

  27. Phillips, A.N. and Davey Smith, G., Confounding in epidemiological studies, Br. Med. J., 1993, vol. 306, no. 6870, p. 142. https://doi.org/10.1136/bmj.306.6870.142-b

  28. DOE 1995, U.S. Department of Energy. Closing the Circle on the Splitting of the Atom. The Environmental Legacy of Nuclear Weapons Production in the United States and What the Department of Energy is Doing about It. U.S. Department of Energy, Office of Environmental Management, January 1995, DOE/EM-0266. www.energy.gov/sites/prod/files/2014/03/f8/Closing_the_Circle_Report.pdf. Accessed October 11, 2020.

  29. Frame, P. and Kolb, W., Living with Radiation: The First Hundred Years, Maryland: Syntec, Inc., 2000, 2nd ed.; and Gilbert U-238 Atomic Energy Lab (1950–1951), Oak Ridge Associated Universities, 1999. www.orau.org/ptp/collection/atomictoys/atomictoys.htm. Accessed October 11, 2020.

  30. BEIR VII Report 2006, Phase 2, Health Risks from Exposure to Low Levels of Ionizing Radiation. Committee to Assess Health Risks from Exposure to Low Levels of Ionizing Radiation, National Research Council. http://www.nap.edu/catalog/11340.html. Accessed October 11, 2020.

  31. Koterov, A.N. and Biryukov, A.P., The possibility of determining of anomalies and pathologies in the offspring of liquidators of Chernobyl accident by the non-radiation factors, Int. J. Low Radiat. (Paris), 2011, vol. 8, no. 4, pp. 256–312. https://doi.org/10.1504/IJLR.2011.046529

  32. Koterov, A.N. and Biryukov, A.P., Children of the liquidators of the accident at the Chernobyl nuclear power plant. 1. Evaluation of the fundamental possibility to register radiation effects, Med. Radiol. Radiats. Bezop., 2012, vol. 57, no. 1, pp. 58–79.

  33. Koterov, A.N. and Biryukov, A.P., Children of participants in the liquidation of the consequences of the accident at the Chernobyl nuclear power plant. 2. The frequency of deviations and pathologies and their relationship with non-radiation factors, Med. Radiol. Radiats. Bezop., 2012, vol. 57, no. 2, pp. 51–77.

  34. Koterov, A.N., Malye dozy radiatsii: fakty i mify. Osnovnye ponyatiya i nestabil’nost’ genoma (Low Doses of Radiation: Facts and Myths. Basic Concepts and Genome Instability), Moscow: FMBTs im. A.I. Burnazyana FMBA Rossii, 2010.

  35. Koterov, A.N., Genomic instability at exposure of low dose radiation with low LET. Mythical mechanism of unproved carcinogenic effects, Int. J. Low Radiat., 2005, vol. 1, no. 4, pp. 376–451. https://doi.org/10.1504/IJLR.2005.007913

  36. UNSCEAR 2001, Report to the General Assembly, with Scientific Annexes. Annex: Hereditary Effects of Radiation, New York: United Nations, 2001, pp. 5–160.

  37. COMARE 2002, 7th Report, Parents Occupationally Exposed to Radiation Prior to the Conception of Their Children. A Review of the Evidence Concerning the Incidence of Cancer in Their Children, Crown, Ed., National Radiological Protection Board, 2002.

  38. Zakharova, M.L., Bezlepkin, V.G., Kirillova, E.N., et al., Genetic material of the radiobiological repository of human tissues and some results of its research, Med. Radiol. Radiats. Bezop., 2010, vol. 55, no. 5, pp. 5–13.

  39. Bezlepkin, V.G., Kirillova, E.N., Zakharova, M.L., et al., Delayed and transgenerational molecular and genetic effects of progressive influence of ionizing radiation in nuclear plant workers, Radiats. Biol. Radioekol., 2011, vol. 51, no. 1, pp. 20–32.

  40. Kuzmina, N.S., Myazin, A.E., Lapteva, N.Sh., and Rubanovich, A.V., The study of hypermethylation in irradiated parents and their children blood leukocytes, Cent. Eur. J. Biol., 2014, vol. 9, no. 10, pp. 941–950.

  41. Kuzmina, N.S., Lapteva, N.Sh., and Rubanovich, A.V., Hypermethylation of gene promoters in peripheral blood leukocytes in humans long term after radiation exposure, Environ. Res., 2016, vol. 146, pp. 10–17. https://doi.org/10.1016/j.envres.2015.12.008

  42. Kuzmina, N.S., Lapteva, N.Sh., Rusinova, G.G., et al., Gene hypermethylation in blood leukocytes in humans long term after radiation exposure. Validation set, Environ. Pollut., 2018, vol. 234, pp. 935–942. https://doi.org/10.1016/j.envpol.2017.12.039

  43. Gotzsche, P.C., Deadly Medicines and Organised Crime. How Big Pharma has Corrupted Healthcare, London: Radcliffe Publishing, 2013.

  44. Echt, D.S., Liebson, P.R., Mitchell, L.B., et al., Mortality and morbidity in patients receiving encainide, flecainide, or placebo. The cardiac arrhythmia suppression trial, N. Engl. J. Med., 1991, vol. 324, no. 12, pp. 781–788. https://doi.org/10.1056/NEJM199103213241201

  45. Pocock, S.J., When to stop a clinical trial, Br. Med. J., 1992, vol. 305, no. 6847, pp. 235–240. https://doi.org/10.1136/bmj.305.6847.235

  46. Moore, T., Deadly Medicine: Why Tens of Thousands of Heart Patients Died in America’s Worst Drug Disaster, New York: Simon and Schuster, 1995.

  47. Handbook of Epidemiology, Ahrens, W. and Pigeot, I., Eds., New York: Springer, 2014, 2nd ed.

  48. Davey Smith, G., Data dredging, bias, or confounding: they can all get you into the BMJ and the Friday papers, Br. Med. J., 2002, vol. 325, no. 7378, pp. 1437–1438. https://doi.org/10.1136/bmj.325.7378.1437

  49. Gage, S.H., Munafo, M.R., and Davey Smith, G., Causal inference in developmental origins of health and disease (DOHaD) research, Annu. Rev. Psychol., 2016, vol. 67, pp. 567–585. https://doi.org/10.1146/annurev-psych-122414-033352

  50. Rothman, K.J. and Greenland, S., Causation and causal inference in epidemiology, Am. J. Public Health, 2005, vol. 95, suppl. 1, pp. S144–S150. https://doi.org/10.2105/AJPH.2004.059204

  51. Rothman, K.J., Greenland, S., Poole, C., and Lash, T.L., Causation and causal inference, in Modern Epidemiology, Rothman, K.J., Greenland, S., and Lash, T.L., Eds., Philadelphia (PA): Wolters Kluwer, 2008, 3rd ed., pp. 5–31.

  52. Goodman, K.J. and Phillips, C.V., Hill’s criteria of causation, in Encyclopedia of Statistics in Behavioral Science, Everitt, B.S. and Howell, D.C., Eds., Chichester: Wiley, 2005, vol. 2, pp. 818–820.

  53. Goodman, S.N. and Samet, J.M., Cause and cancer epidemiology, in Schottenfeld and Fraumeni Cancer Epidemiology and Prevention, Thun, M.J., et al., Eds., New York: Oxford Univ. Press, 2018, 4th ed., pp. 97–104.

  54. Gori, G.B., Epidemiologic Evidence in Public and Legal Policy: Reality or Metaphor? Critical Legal Issues, Working Paper Series no. 124, Washington, DC: Washington Legal Foundation, 2004.

  55. Swaen, G. and van Amelsvoort, L., A weight of evidence approach to causal inference, J. Clin. Epidemiol., 2009, vol. 62, no. 3, pp. 270–277. https://doi.org/10.1016/j.jclinepi.2008.06.013

  56. Hippocrates, Precepts, Works, London: Wm. Heinemann, 1923, vol. 1, p. 313.

  57. Bull, J.P., The historical development of clinical therapeutic trials, J. Chronic Dis., 1959, vol. 10, no. 3, pp. 218–248. https://doi.org/10.1016/0021-9681(59)90004-9

  58. Asher, R., Apriority: thoughts on treatment, Lancet, 1961, vol. 2, no. 7217, pp. 1403–1404. https://doi.org/10.1016/s0140-6736(61)91217-x

  59. Mathews, J.N.S., Introduction to Randomized Controlled Clinical Trials. Texts in Statistical Science, Chapman and Hall/CRC, 2006, 2nd ed.

  60. Terris, M., The Society for Epidemiologic Research and the future of epidemiology, J. Publ. Health Policy, 1993, vol. 14, no. 2, pp. 137–148. https://doi.org/10.2307/3342960

  61. Rothman, K.J., Epidemiology. An Introduction, Oxford: Oxford Univ. Press, 2012, 2nd ed.

  62. Merrill, R.M., Introduction to Epidemiology, Burlington: Jones and Bartlett Learning, 2017, 7th ed.

  63. USEPA 2006, A Framework for Assessing Health Risks of Environmental Exposures to Children, EPA/600/R-05/093F, Washington, DC: National Center for Environmental Assessment Office of Research and Development U.S. Environmental Protection Agency, 2006.

  64. Susser, M., What is a cause and how do we know one? A grammar for pragmatic epidemiology, Am. J. Epidemiol., 1991, vol. 133, no. 7, pp. 635–648. https://doi.org/10.1093/oxfordjournals.aje.a115939

  65. Szklo, M. and Nieto, F.J., Epidemiology. Beyond the Basics, Burlington: Jones Bartlett Learning, 2019, 4th ed.

  66. The Health Consequences of Smoking: A Report of the Surgeon General, Rockville, MD: Office of the Surgeon General, US Public Health Service, 2004.

  67. Gordis, L., Epidemiology, Philadelphia: Saunders, Elsevier Inc., 2014, 5th ed.

  68. Aschengrau, A. and Seage, G.R., III, Epidemiology in Public Health, Burlington: Jones and Bartlett Learning, LLC, 2014, 3rd ed.

  69. Rothman, K.J. and Greenland, S., Causation and causal inference, in Modern Epidemiology, Rothman, K.J., Ed., Philadelphia: Lippincott, Williams and Wilkins, 1998, 2nd ed.

  70. Hofler, M., The Bradford Hill considerations on causality: a counterfactual perspective, Emerg. Themes Epidemiol., 2005, vol. 2, p. 11. https://doi.org/10.1186/1742-7622-2-11

  71. Thygesen, L.C., Andersen, G.S., and Andersen, H., A philosophical analysis of the Hill criteria, J. Epidemiol. Commun. Health, 2005, vol. 59, no. 6, pp. 512–516. https://doi.org/10.1136/jech.2004.027524

  72. Webb, P. and Bain, C., Essential Epidemiology. An Introduction for Students and Health Professionals, Cambridge: Cambridge Univ. Press, 2011, 2nd ed.

  73. Bonita, R., Beaglehole, R., and Kjellstrom, T., Basic Epidemiology, World Health Organization, 2006, 2nd ed.

  74. Katz, D.L., Elmore, J.G., Wild, D.M.G., and Lucan, S.C., Jekel’s Epidemiology, Biostatistics, Preventive Medicine, and Public Health, Philadelphia: Elsevier, 2014, 4th ed.

  75. Greenhalgh, T., The Basics of Evidence Based Medicine, London, UK: BMJ Books, 2001, 2nd ed.

  76. Weed, D.L. and Gorelic, L.S., The practice of causal inference in cancer epidemiology, Cancer Epidemiol. Biomark. Prev., 1996, vol. 5, no. 4, pp. 303–311.

  77. Encyclopedia of Statistics in Behavioral Science, Everitt, B.S. and Howell, D.C., Eds., Chichester: Wiley, 2005, vol. 1.

  78. Becker, R.A., Dellarco, V., Seed, J., et al., Quantitative weight of evidence to assess confidence in potential modes of action, Regul. Toxicol. Pharmacol., 2017, vol. 86, pp. 205–220. https://doi.org/10.1016/j.yrtph.2017.02.017

  79. Sonich-Mullin, C., Fielder, R., Wiltse, J., et al., International Programme on Chemical Safety. IPCS conceptual framework for evaluating a mode of action for chemical carcinogenesis, Regul. Toxicol. Pharmacol., 2001, vol. 34, no. 2, pp. 146–152. https://doi.org/10.1006/rtph.2001.1493

  80. Ulanova, M., Gekalyuk, A., Agranovich, I., et al., Stress-induced stroke and stomach cancer: sex differences in oxygen saturation, Adv. Exp. Med. Biol., 2016, vol. 923, pp. 135–140. https://doi.org/10.1007/978-3-319-38810-6_18

  81. Vorobtsova, I.E., Genetic and somatic effects of ionizing radiation in humans and animals (comparative aspect), Radiats. Biol. Radioekol., 2002, vol. 42, no. 6, pp. 639–643.

  82. Evans, A.S., Causation and disease: the Henle–Koch postulates revisited, Yale J. Biol. Med., 1976, vol. 49, no. 2, pp. 175–195.

  83. Brown, N.A. and Fabro, S., The value of animal teratogenicity testing for predicting human risk, Clin. Obstet. Gynecol., 1983, vol. 26, no. 2, pp. 467–477. https://doi.org/10.1016/0890-6238(93)90025-3

  84. Popper, K.R., The Logic of Scientific Discovery, London: Routledge Classics, 2002.

  85. USEPA 2005. Guidelines for Carcinogen Risk Assessment. EPA/630/P-03/001B, Washington, DC: Risk Assessment Forum. National Center for Environmental Assessment Office of Research and Development US Environmental Protection Agency, 2005.

  86. Fox, G.A., Practical causal inference for ecoepidemiologists, J. Toxicol. Environ. Health, 1991, vol. 33, no. 4, pp. 359–373. https://doi.org/10.1080/15287399109531535

  87. Sidorenko, E.A., Counterfactual statements, in Novaya filosofskaya entsiklopediya (New Philosophical Encyclopedia), in 4 vols., Moscow: Mysl’, 2010, vol. 2, pp. 297–298.

  88. Bruce, N., Pope, D., and Stanistreet, D., Quantitative Methods for Health Research. A Practical Interactive Guide to Epidemiology and Statistics, Oxford: Wiley, 2019, 2nd ed.

  89. Bollet, A.J., On seeking the cause of disease, Clin. Res., 1964, vol. 12, pp. 305–310.

  90. Merrill, R.M., Frankenfeld, C.L., Freeborne, N., and Mink, M., Behavioral Epidemiology. Principles and Applications, Burlington: Jones and Bartlett Learning, LLC, 2016.

  91. Egilman, D., Kim, J., and Biklen, M., Proving causation: the use and abuse of medical and scientific evidence inside the courtroom—an epidemiologist’s critique of the judicial interpretation of the Daubert ruling, Food Drug. Law J., 2003, vol. 58, no. 2, pp. 223–250.

  92. Koterov, A.N., Zharkova, G.P., and Biryukov, A.P., Tandem of radiation epidemiology and radiobiology for the practice of radiation protection, Med. Radiol. Radiats. Bezop., 2010, vol. 55, no. 4, pp. 55–84.

  93. Schlesselman, J.J., “Proof” of cause and effect in epidemiologic studies: criteria for judgment, Prev. Med., 1987, vol. 16, no. 2, pp. 195–210. https://doi.org/10.1016/0091-7435(87)90083-1

  94. Shakir, S.A. and Layton, D., Causal association in pharmacovigilance and pharmacoepidemiology: thoughts on the application of the Austin Bradford Hill criteria, Drug Saf., 2002, vol. 25, no. 6, pp. 467–471. https://doi.org/10.2165/00002018-200225060-00012

  95. UNSCEAR 2006, Report to the General Assembly, with Scientific Annexes. Annex A. Epidemiological Studies of Radiation and Cancer, New York: United Nations, 2008, pp. 17–322.

  96. Smoking and Reproductive Life. The Impact of Smoking on Sexual, Reproductive and Child Health, Carter, D., Nathanson, N., Seddon, C., et al., Eds., British Medical Association, Board of Science and Education and Tobacco Control Resource Centre, 2004. www.rauchfrei-info.de/fileadmin/main/data/Dokumente/Smoking_ReproductiveLife.pdf. Accessed October 11, 2020.

  97. Hofmann, B., Holm, S., and Iversen, J.-G., Philosophy of science, in Research Methodology in the Medical and Biological Sciences, Laake, P., Benestad, H.B., and Olsen, B.R., Eds., London: Academic Press, Elsevier, 2007, pp. 1–32.

  98. Gay, J., Clinical Epidemiology and Evidence-Based Medicine Glossary: Terminology Specific to Epidemiology, 2005. http://people.vetmed.wsu.edu/jmgay/courses/GlossEpiTerminology.htm. Accessed October 11, 2020.

  99. A Dictionary of Epidemiology, Porta, M., Ed., New York: Oxford Univ. Press, 2014, 6th ed.

  100. A Dictionary of Epidemiology, Last, J.M., Ed., Oxford: Oxford Univ. Press, 2001.

  101. Doll, R., Weak associations in epidemiology: importance, detection, and interpretation, J. Epidemiol., 1996, vol. 6, no. 4 (suppl.), pp. S11–S20. https://doi.org/10.2188/jea.6.4sup_11

  102. Murray, C.J.L., Ezzati, M., Lopez, A.D., et al., Comparative quantification of health risks: conceptual framework and methodological issues, Popul. Health Metr., 2003, vol. 1, p. 1. https://doi.org/10.1186/1478-7954-1-1

  103. Guzelian, P.S., Victoroff, M.S., Halmes, N.C., et al., Evidence-based toxicology: a comprehensive framework for causation, Hum. Exp. Toxicol., 2005, vol. 24, no. 4, pp. 161–201. https://doi.org/10.1191/0960327105ht517oa

  104. Epidemiology: Principles and Practical Guidelines, Van den Broeck, J. and Brestoff, J.R., Eds., Dordrecht: Springer, 2013.

  105. Glynn, J.R., A question of attribution, Lancet, 1993, vol. 342, no. 8870, pp. 530–532. https://doi.org/10.1016/0140-6736(93)91651-2

  106. Bae, S., Kim, H.C., Ye, B., et al., Causal inference in environmental epidemiology, Environ. Health Toxicol., 2017, vol. 32, p. e2017015. https://doi.org/10.5620/eht.e2017015

  107. Lower, G.M. and Kanarek, M.S., Conceptual/operational criteria of causality: relevance to systematic epidemiologic theory, Med. Hypotheses, 1983, vol. 11, pp. 217–244. https://doi.org/10.1016/0306-9877(83)90064-6

  108. Collier, Z.A., Gust, K.A., Gonzalez-Morales, B., et al., A weight of evidence assessment approach for adverse outcome pathways, Regul. Toxicol. Pharmacol., 2016, vol. 75, pp. 46–57. https://doi.org/10.1016/j.yrtph.2015.12.014

  109. Semenovykh, G.K., Novikov, S.M., and Semenovykh, L.N., Analiz sluchaev zabolevanii, obuslovlennykh deistviem faktorov sredy obitaniya. Kharakteristika opasnosti dlya zdorov’ya: uchebnoe posobie (Analysis of Cases of Diseases Caused by Environmental Factors. Health Hazard Characterization: Tutorial), Moscow: Pervyi Mosk. Gos. Med. Univ. im. I.M. Sechenova, 2011, no. 4.

  110. Maldonado, G. and Greenland, S., Estimating causal effects, Int. J. Epidemiol., 2002, vol. 31, no. 2, pp. 422–429.

  111. Friis, R.H. and Sellers, T.A., Epidemiology for Public Health Practice, Burlington: Jones and Bartlett Learning, 2014, 5th ed.

  112. Phillips, C.V. and Goodman, K.J., Hill’s considerations for causal inference, in Encyclopedia of Epidemiology. Two Volume Set, Boslaugh, S., Ed., Saint Louis University, SAGE Publications, Inc., 2008, pp. 494–495.

  113. Fedak, K.M., Bernal, A., Capshaw, Z.A., and Gross, S., Applying the Bradford Hill criteria in the 21st century: how data integration has changed causal inference in molecular epidemiology, Emerg. Themes Epidemiol., 2015, vol. 12, p. 14. https://doi.org/10.1186/s12982-015-0037-4

  114. ICRP Publication 90, Biological effects after prenatal irradiation (embryo and fetus), Ann. ICRP, 2003, vol. 33, nos. 1–2, pp. 5–206. https://doi.org/10.1016/S0146-6453(03)00021-6

  115. UNSCEAR 2012, Report to the General Assembly, with Scientific Annexes. Annex A. Attributing Health Effects to Ionizing Radiation Exposure and Inferring Risks, New York: United Nations, 2015.

  116. Koterov, A.N. and Vainson, A.A., Biological and medical effects of low LET radiation for various dose ranges, Med. Radiol. Radiats. Bezop., 2015, vol. 60, no. 3, pp. 5–31.

  117. Atkinson, W.D., Law, D.V., Bromley, K.J., and Inskip, H.M., Mortality of employees of the United Kingdom Atomic Energy Authority, 1946–97, Occup. Environ. Med., 2004, vol. 61, no. 7, pp. 577–585.

  118. Bell, C.M. and Coleman, D.A., Models of the healthy worker effect in industrial cohorts, Stat. Med., 1987, vol. 6, no. 8, pp. 901–909. https://doi.org/10.1002/sim.4780060805

  119. Berrington, A., Darby, S.C., Weiss, S.A., and Doll, R., 100 years of observation on British radiologists: mortality from cancer and other causes 1897–1997, Br. J. Radiol., 2001, vol. 74, no. 882, pp. 507–519. https://doi.org/10.1259/bjr.74.882.740507

  120. Mohan, A.K., Hauptmann, M., Linet, M.S., et al., Breast cancer mortality among female radiologic technologists in the United States, J. Natl. Cancer Inst., 2002, vol. 94, no. 12, pp. 943–948. https://doi.org/10.1093/jnci/94.12.943

  121. ICRP Publication 118, ICRP Statement on Tissue Reactions and Early and Late Effects of Radiation in Normal Tissues and Organs—Threshold Doses for Tissue Reactions in a Radiation Protection Context. Annals of the ICRP, Clement, C.H., Ed., Amsterdam-New York: Elsevier, 2012.

  122. Rothman, K. and Greenland, S., Hill’s Criteria for Causality, in Encyclopedia of Biostatistics, Online, Wiley, 2005. www.rtihs.org/sites/default/files/26902%20Rothman%201998%20The%20encyclopedia%20of%20biostatistics.pdf. Accessed October 11, 2020.

  123. Cornfield, J., Statistical relationships and proof in medicine, Am. Stat., 1954, vol. 8, no. 5, pp. 19–23.

  124. Greenhouse, J.B., Commentary: Cornfield, epidemiology and causality, Int. J. Epidemiol., 2009, vol. 38, no. 5, pp. 1199–1201. https://doi.org/10.1093/ije/dyp299

  125. Panchin, A.Yu., The Science. Small nonsense with big consequences, blog, August 1, 2009. https://scinquisitor.livejournal.com/9724.html; “Epigenetics” website, Laboratory of Epigenetics of the Institute of Gerontology, NAMSU, posted September 16, 2011. https://www.epigenetics.com.ua/?p=153; “Biomolecule” website, special project “Clinical Research,” June 29, 2018. https://biomolecula.ru/articles/put-k-tysiacham-aptek-nachinaetsia-s-odnoi-molekuly; and many others. Accessed October 20, 2020.

  126. Hume, D., A Treatise of Human Nature, Oxford: Oxford University Press, 1978.

  127. IARC 1987, International Agency for Research on Cancer, IARC Monographs on the Evaluation of Carcinogenic Risks to Humans, Supplement 7: Overall Evaluations of Carcinogenicity: An Updating of IARC Monographs, Lyon, 1987, vols. 1–42.

  128. Stewart, A., Basic Statistics and Epidemiology: A Practical Guide, CRC Press, 2016, 4th ed.

  129. Alexander, L.K., Lopes, B., Ricchetti-Masterson, K., and Yeatts, K.B., Causality, in Epidemiologic Research and Information Center (ERIC) Notebook, UNC Gillings School of Global Public Health, 2015, 2nd ed. https://sph.unc.edu/files/2015/07/nciph_ERIC15.pdf. Accessed October 17, 2020.

  130. Frumkin, H. (Instructor), Causation in Medicine, Emory University—Rollins School of Public Health, Atlanta, Georgia, 1997. http://www.aoec.org/ceem/methods/emory2.html. Accessed October 17, 2020.

  131. Biesalski, H.K., Aggett, P.J., Anton, R., et al., Scientific substantiation of health claims: evidence-based nutrition, in 26th Hohenheim Consensus Conference, September 11, 2010, Nutrition, 2011, vol. 27, no. 10 (suppl.), pp. S1–S20. https://doi.org/10.1016/j.nut.2011.04.002

  132. King, J., Bradford Hill Criteria for causal inference, The 2015 ANZEA Conference, Auckland: Julian King and Associates. https://www.julianking.co.nz/wp-content/uploads/2018/01/150602-BHC-jk5-web.pdf. Accessed October 17, 2020.

  133. Public Affairs Committee of the Teratology Society, Causation in teratology-related litigation, Birth Defects Res. A Clin. Mol. Teratol., 2005, vol. 73, no. 6, pp. 421–423. https://doi.org/10.1002/bdra.20139

  134. Canadian Task Force on the Periodic Health Examination. The periodic health examination, Can. Med. Assoc. J., 1979, vol. 121, no. 9, pp. 1193–1254.

  135. Howick, J., The Philosophy of Evidence-Based Medicine, Chichester: Wiley-Blackwell, 2011.

  136. Howick, J., Chalmers, I., Glasziou, P., et al., The 2011 Oxford CEBM Levels of Evidence (Introductory Document), Oxford Centre for Evidence-Based Medicine, 2011. https://www.cebm.ox.ac.uk/resources/levels-of-evidence/ocebm-levels-of-evidence. Accessed November 18, 2020.

  137. Andreeva, N.S., Rebrova, O.Yu., Zorin, N.A., et al., Systems for assessing the reliability of scientific evidence and persuasiveness of recommendations: comparative characteristics and unification prospects, Med. Tekhnol., Otsenka Vybor, 2012, no. 4, pp. 10–24.

  138. Fletcher, R.H., Fletcher, S.W., and Wagner, E.H., Clinical Epidemiology: The Essentials, Philadelphia: Lippincott Williams and Wilkins, 1996, 3rd ed.

  139. Lilienfeld’s Foundations of Epidemiology, Schneider, D. and Lilienfeld, D.E., Eds., New York: Oxford Univ. Press, 2015, 4th ed.

  140. Feinstein, A.R., Clinical epidemiology. I. The populational experiments of nature and of man in human illness, Ann. Int. Med., 1968, vol. 69, no. 4, pp. 807–820. https://doi.org/10.7326/0003-4819-69-4-807

  141. Jones, D.S. and Podolsky, S.H., The art of medicine. The history and fate of the gold standard, Lancet, 2015, vol. 385, no. 9977, pp. 1502–1503. https://doi.org/10.1016/S0140-6736(15)60742-5

  142. Feinstein, A.R. and Horwitz, R.I., Double standards, scientific methods, and epidemiologic research, N. Engl. J. Med., 1982, vol. 307, no. 26, pp. 1611–1617. https://doi.org/10.1056/NEJM198212233072604

  143. Mant, D., Can randomised trials inform clinical decisions about individual patients?, Lancet, 1999, vol. 353, no. 9154, pp. 743–746. https://doi.org/10.1016/S0140-6736(98)09102-8

  144. Mayer, D., Essential Evidence-Based Medicine, Cambridge Univ. Press, 2010, 2nd ed. www.yumpu.com/en/document/read/56834431/dan-mayer-essential-evidence-based-medicine. Accessed November 18, 2020.

  145. Worrall, J., Evidence: philosophy of science meets medicine, J. Eval. Clin. Pract., 2010, vol. 16, no. 2, pp. 356–362. https://doi.org/10.1111/j.1365-2753.2010.01400.x

  146. Opren. Parliament.uk. Hansard 1803–2005. HC. Deb July 20, 1987, vol. 120, pp. 183–188. https://api.parliament.uk/historic-hansard/commons/1987/jul/20/opren. Accessed June 21, 2020.

  147. MacMahon, B., Pugh, T.F., and Ipsen, J., Epidemiologic Methods, Boston: Little, Brown, 1960.

  148. Lilienfeld, D.E., Definitions of epidemiology, Am. J. Epidemiol., 1978, vol. 107, no. 2, pp. 87–90. https://doi.org/10.1093/oxfordjournals.aje.a112521

  149. Jadad, A.R. and Enkin, M.W., Randomized Controlled Trials. Questions, Answers, and Musings, Malden, Oxford, Carlton: BMJ Books, 2007, 2nd ed.

  150. Worrall, J., Evidence in medicine, Philos. Compass, 2007, vol. 2, no. 6, pp. 981–1022. https://doi.org/10.1111/j.1747-9991.2007.00106.x

  151. Krauss, A., Why all randomised controlled trials produce biased results, Ann. Med., 2018, vol. 50, no. 4, pp. 312–322. https://doi.org/10.1080/07853890.2018.1453233

  152. Wartolowska, K., Beard, D.J., and Carr, A.J., The use of placebos in controlled trials of surgical interventions: a brief history, J. R. Soc. Med., 2018, vol. 111, no. 5, pp. 177–182. https://doi.org/10.1177/0141076818769833

  153. Hill, A.B., Observation and experiment, N. Engl. J. Med., 1953, vol. 248, no. 24, pp. 995–1001. https://doi.org/10.1056/NEJM195306112482401

  154. Doll, R., Clinical trials: retrospect and prospect, Stat. Med., 1982, vol. 1, no. 4, pp. 337–344. https://doi.org/10.1002/sim.4780010411

  155. Colebrook, D., Report of the Work at the North Islington Infant Welfare Centre Light Department, March 3, 1925, FD1/5052, National Archives, Kew, London.

  156. Bell, J.A., Pertussis prophylaxis with two doses of alum-precipitated vaccine, Public Health Rep., 1941, vol. 56, no. 31, pp. 1535–1546. https://doi.org/10.2307/4583816

  157. Teaching Epidemiology. A Guide for Teachers in Epidemiology, Public Health, and Clinical Medicine, Olsen, J., Greene, N., Saracci, R., and Trichopoulos, D., Eds., New York: Oxford Univ. Press, 2015, 4th ed.

  158. Sackett, D.L., Clinical epidemiology: what, who, and whither, J. Clin. Epidemiol., 2002, vol. 55, no. 12, pp. 1161–1166. https://doi.org/10.1016/s0895-4356(02)00521-8

  159. Feinstein, A.R., Clinical Epidemiology: The Architecture of Clinical Research, Philadelphia: W. B. Saunders Company, 1985.

  160. Saracci, R., Epidemiology. A Very Short Introduction, New York: Oxford Univ. Press, 2010.

  161. Holmes, L.J., Applied Epidemiologic Principles and Concepts. Clinicians’ Guide to Study Design and Conduct, New York: Taylor and Francis, 2018.

  162. Taylor, I., Epidemiology 1866–1966, Public Health, 1967, vol. 82, no. 1, pp. 31–37. https://doi.org/10.1016/s0033-3506(67)80063-5

  163. Kincaid, H., Causal modelling, mechanism, and probability in epidemiology, in Causality in the Sciences, Illari, P.M., Russo, F., and Williamson, J., Eds., New York: Oxford Univ. Press, 2011, p. 20. https://doi.org/10.1093/acprof:oso/9780199574131.003.0004

  164. Lagiou, P., Adami, H.O., and Trichopoulos, D., Causality in cancer epidemiology, Eur. J. Epidemiol., 2005, vol. 20, no. 7, pp. 565–574. https://doi.org/10.1007/s10654-005-7968-y

  165. Buck, C., Popper’s philosophy for epidemiologists, Int. J. Epidemiol., 1975, vol. 4, no. 3, pp. 159–168. https://doi.org/10.1093/ije/4.3.159

  166. Coughlin, S.S., Causal Inference and Scientific Paradigms in Epidemiology, Bentham E-book, 2010. https://ebooks.benthamscience.com/book/9781608051816/. Accessed November 19, 2020.

  167. Karhausen, L.R., The poverty of Popperian epidemiology, Int. J. Epidemiol., 1995, vol. 24, no. 5, pp. 869–874. https://doi.org/10.1093/ije/24.5.869

  168. Susser, M., Falsification, verification and causal inference in epidemiology: reconsiderations in the light of Sir Karl Popper’s philosophy, in Causal Inference, Rothman, K.J., Ed., Chestnut Hill, MA: Epidemiologic Resources, 1988, pp. 33–57.

  169. Jacobsen, M., in Causal Inference, Rothman, K.J., Ed., Chestnut Hill, MA: Epidemiologic Resources, 1988, pp. 105–117.

  170. Frost, W.H., Risk of persons in familial contact with pulmonary tuberculosis, Am. J. Public Health Nations Health, 1933, vol. 23, no. 5, pp. 426–432. https://doi.org/10.2105/ajph.23.5.426

  171. Doll, R., Cohort studies: history of the method. I. Prospective cohort studies, Soz. Praventivmed., 2001, vol. 46, no. 2, pp. 75–86. https://doi.org/10.1007/bf01299724

  172. Frost, W.H., Snow on Cholera: Being a Reprint of Two Papers by John Snow, M.D. Together with a Biographical Memoir by B.W. Richardson and an Introduction by Wade Hampton Frost, M.D., New York: The Commonwealth Fund, 1936, p. 15.

  173. Labarthe, D.M. and Stallones, R.A., Epidemiologic inference, in Causal Inference, Rothman, K.J., Ed., Chestnut Hill, MA: Epidemiologic Resources, 1988, pp. 119–129.

  174. Maclure, M., Popperian refutation in epidemiology, Am. J. Epidemiol., 1985, vol. 121, no. 3, pp. 343–350. https://doi.org/10.1093/oxfordjournals.aje.a114005

  175. Parascandola, M., Epidemiology: second-rate science?, Public Health Rep., 1998, vol. 113, no. 4, pp. 312–320.

  176. Ahlbom, A. and Norell, S., Introduction to Modern Epidemiology, Epidemiology Resources Inc., 1990, 2nd ed.

  177. Susser, M. and Stein, Z., Eras in Epidemiology: The Evolution of Ideas, New York: Oxford Univ. Press, 2009.

  178. Obshchaya epidemiologiya s osnovami dokazatel’noi meditsiny: rukovodstvo k prakticheskim zanyatiyam: Uchebnoe Posobie (General Epidemiology with the Basics of Evidence-Based Medicine: A Guide to Practical Exercises: Tutorial), Pokrovskii, V.I. and Briko, N.I., Eds., Moscow: GEOTAR-Media, 2012.

  179. Field Epidemiology, Gregg, M.B., Ed., Oxford Univ. Press, 2008, 3rd ed.

  180. Toth, B., Why the MRC therapeutic trials committee did not introduce controlled clinical trials, J. R. Soc. Med., 2015, vol. 108, no. 12, pp. 499–511. https://doi.org/10.1177/0141076815618891

  181. Causal Inference, Rothman, K.J., Ed., Epidemiology Resources Inc., Mass., USA, 1988.

  182. Rothman, K.J., Inferring causal connection—habit, faith or logic?, in Causal Inference, Rothman, K.J., Ed., Epidemiology Resources Inc., Mass., USA, 1988, pp. 3–12.

  183. Hill, A.B., Reflections on the controlled trial, Ann. Rheum. Dis., 1966, vol. 25, no. 2, pp. 107–113. https://doi.org/10.1136/ard.25.2.107

  184. Vandenbroucke, J.P., Observational research, randomised trials, and two views of medical science, PLoS Med., 2008, vol. 5, no. 3, p. e67. https://doi.org/10.1371/journal.pmed.0050067

  185. Collier, R., Legumes, lemons and streptomycin: a short history of the clinical trial, CMAJ, 2009, vol. 180, no. 1, pp. 23–24. https://doi.org/10.1503/cmaj.081879

  186. Boice, J.D., Jr., Ionizing radiation, in Schottenfeld and Fraumeni Cancer Epidemiology and Prevention, Schottenfeld, D. and Fraumeni, J.F., Eds., New York: Oxford Univ. Press, 2006, 3rd ed., pp. 259–293.

  187. Susser, M., Rational science versus a system of logic, in Causal Inference, Rothman, K.J., Ed., Chestnut Hill, MA: Epidemiologic Resources, 1988, pp. 189–199.

  188. Jacobsen, M., Against popperized epidemiology, Int. J. Epidemiol., 1976, vol. 5, no. 1, pp. 9–11. https://doi.org/10.1093/ije/5.1.9

  189. Schlesinger, G.N., There’s a fascination frantic in philosophical fancies, in Causal Inference, Rothman, K.J., Ed., Chestnut Hill, MA: Epidemiologic Resources, 1988, pp. 165–172.

  190. Greenland, S., Induction versus Popper: substance versus semantics, Int. J. Epidemiol., 1998, vol. 27, no. 4, pp. 543–548. https://doi.org/10.1093/ije/27.4.543

  191. Ibn Sina (c. 1012 CE; c. 402 AH). Kitab al-Qanun fi al-tibb, The James Lind Library. https://www.jameslindlibrary.org/ibn-sina-c-1012-ce-c-402-ah/. Accessed November 20, 2020.

  192. Melikhov, O.G., Klinicheskie issledovaniya (Clinical Research), Moscow: Atmosfera, 2013, 3rd ed.

  193. USEPA 2002, A Review of the Reference Dose and Reference Concentration Processes. EPA/630/P-02/002F, Final Report, Washington, DC: Risk Assessment Forum, National Center for Environmental Assessment Office of Research and Development, U.S. Environmental Protection Agency, 2002.

  194. Framework for the Integration of Human and Animal Data in Chemical Risk Assessment, Technical Report no. 104, Brussels: European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC AISBL), 2009.

  195. Cole, P., The epidemiologist as an expert witness, J. Clin. Epidemiol., 1991, vol. 44, suppl. 1, pp. 35S–39S. https://doi.org/10.1016/0895-4356(91)90173-7

  196. James, R.C., Britt, J.K., Halmes, N.C., and Guzelian, P.S., Evidence-based causation in toxicology: a 10-year retrospective, Hum. Exp. Toxicol., 2015, vol. 34, no. 12, pp. 1245–1252. https://doi.org/10.1177/0960327115601767

  197. Franco, N.H., Animal experiments in biomedical research: a historical perspective, Animals (Basel), 2013, vol. 3, no. 1, pp. 238–273. https://doi.org/10.3390/ani3010238

  198. Bernard, C., An Introduction to the Study of Experimental Medicine, Henry Schuman Inc., 1949.

  199. Zhang, F.F., Michaels, D.C., Mathema, B., et al., Evolution of epidemiologic methods and concepts in selected textbooks of the 20th century, Soz. Praventivmed., 2004, vol. 49, no. 2, pp. 97–104. https://doi.org/10.1007/s00038-004-3117-8

  200. Greenwood, M., Hill, A.B., Topley, W.W.C., and Wilson, J., Experimental Epidemiology, Medical Research Council Special Report Series no. 209, London: His Majesty’s Stationery Office, 1936. https://www.gwern.net/docs/genetics/selection/1936-greenwood-experimentalepidemiology.pdf. Accessed November 23, 2020.

  201. Parascandola, M., Two approaches to etiology: the debate over smoking and lung cancer in the 1950s, Endeavour, 2004, vol. 28, no. 2, pp. 81–86. https://doi.org/10.1016/j.endeavour.2004.02.003

  202. Hinshaw, H.C. and Feldman, W.H., Evaluation of chemotherapeutic agents in clinical trials: a suggested procedure, Am. Rev. Tubercul., 1944, vol. 50, pp. 202–213.

  203. Vandenbroucke, J.P., A short note on the history of the randomized controlled trial, J. Chronic Dis., 1987, vol. 40, no. 10, pp. 985–987. https://doi.org/10.1016/0021-9681(87)90149-4

  204. Attarwala, H., TGN1412: from discovery to disaster, J. Young Pharm., 2010, vol. 2, no. 3, pp. 332–336. https://doi.org/10.4103/0975-1483.66810

  205. Panoskaltsis, N., McCarthy, N.E., Stagg, A.J., et al., Immune reconstitution and clinical recovery following anti-CD28 antibody (TGN1412)-induced cytokine storm, Cancer Immunol. Immunother., 2020, vol. 8, pp. 1–16. https://doi.org/10.1007/s00262-020-02725-2

  206. Sandilands, G.P., Wilson, M., Huser, C., et al., Were monocytes responsible for initiating the cytokine storm in the TGN1412 clinical trial tragedy?, Clin. Exp. Immunol., 2010, vol. 162, no. 3, pp. 516–527. https://doi.org/10.1111/j.1365-2249.2010.04264.x

  207. Wadman, M., London’s disastrous drug trial has serious side effects for research, Nature, 2006, vol. 440, no. 7083, pp. 388–389. https://doi.org/10.1038/440388a

  208. Nguyen, T.K., Nguyen, E.K., Warner, A., et al., Failed randomized clinical trials in radiation oncology: what can we learn?, Int. J. Radiat. Oncol. Biol. Phys., 2018, vol. 101, no. 5, pp. 1018–1024. https://doi.org/10.1016/j.ijrobp.2018.04.030

  209. National Lung Screening Trial Research Team, Aberle, D.R., Adams, A.M., Berg, C.D., et al., Reduced lung-cancer mortality with low-dose computed tomographic screening, N. Engl. J. Med., 2011, vol. 365, no. 5, pp. 395–409. https://doi.org/10.1056/NEJMoa1102873

  210. Santos, I., Cantista, P., and Vasconcelos, C., Balneotherapy in rheumatoid arthritis—a systematic review, Int. J. Biometeorol., 2016, vol. 60, no. 8, pp. 1287–1301. https://doi.org/10.1007/s00484-015-1108-5

  211. Reissfelder, C., Timke, C., Schmitz-Winnenthal, H., et al., A randomized controlled trial to investigate the influence of low dose radiotherapy on immune stimulatory effects in liver metastases of colorectal cancer, BMC Cancer, 2011, vol. 11, p. 419. https://doi.org/10.1186/1471-2407-11-419

  212. Shibamoto, Y. and Nakamura, H., Overview of biological, epidemiological, and clinical evidence of radiation hormesis, Int. J. Mol. Sci., 2018, vol. 19, no. 8, p. 2387. https://doi.org/10.3390/ijms19082387

  213. Altman, D.G., Randomisation: essential for reducing bias, Br. Med. J., 1991, vol. 302, no. 6791, pp. 1481–1482. https://doi.org/10.1136/bmj.302.6791.1481

  214. IARC 2012, Radiation. A Review of Human Carcinogens, IARC Monographs on the Evaluation of Carcinogenic Risks to Humans, Lyon, France, 2012, vol. 100.

  215. FDA 2015, Product Development under the Animal Rule. Guidance for Industry, U.S. Department of Health and Human Services. Food and Drug Administration. Center for Drug Evaluation and Research (CDER). Center for Biologics Evaluation and Research (CBER). Animal Rule, 2015.

  216. Selezneva, A.I., Makarova, M.N., and Rybakova, A.V., Methods of randomization of animals in the experiment, Mezhdunar. Vestn. Vet., 2014, no. 2, pp. 84–89.

  217. Hirst, J.A., Howick, J., Aronson, J.K., et al., The need for randomization in animal trials: an overview of systematic reviews, PLoS One, 2014, vol. 9, no. 6, p. e98856. https://doi.org/10.1371/journal.pone.0098856

  218. Ioannidis, J.P., Haidich, A.B., Pappa, M., et al., Comparison of evidence of treatment effects in randomized and nonrandomized studies, J. Am. Med. Assoc., 2001, vol. 286, no. 7, pp. 821–830. https://doi.org/10.1001/jama.286.7.821

  219. Odgaard-Jensen, J., Vist, G.E., Timmer, A., et al., Randomisation to protect against selection bias in healthcare trials, Cochrane Database Syst. Rev., 2011, no. 4, p. MR000012. https://doi.org/10.1002/14651858.MR000012.pub3

  220. Kunz, R. and Oxman, A.D., The unpredictability paradox: review of empirical comparisons of randomised and non-randomised clinical trials, Br. Med. J., 1998, vol. 317, no. 7167, pp. 1185–1190. https://doi.org/10.1136/bmj.317.7167.1185

  221. Schulz, K.F., Chalmers, I., Altman, D.G., et al., “Allocation concealment”: the evolution and adoption of a methodological term, J. R. Soc. Med., 2018, vol. 111, no. 6, pp. 216–224. https://doi.org/10.1177/0141076818776604

  222. Sollmann, T., Experimental therapeutics, J. Am. Med. Assoc., 1912, vol. 58, no. 4, pp. 242–244. https://doi.org/10.1001/jama.1912.04260010244004

  223. USEPA 1995, U.S. Environmental Protection Agency. Final Water Quality Guidance for the Great Lakes System Rules and Regulations, Authenticated US Government Information, Federal Register, 1995, vol. 60, no. 56, pp. 15366–15425. www.epa.gov/sites/production/files/2015-12/documents/1995_water_quality_guidance_for_great_lakes_sid.pdf. Accessed November 21, 2020.

  224. Meek, M.E., Boobis, A., Cote, I., et al., New developments in the evolution and application of the WHO/IPCS framework on mode of action/species concordance analysis, J. Appl. Toxicol., 2014, vol. 34, no. 1, pp. 1–18. https://doi.org/10.1002/jat.2949

  225. Hollingsworth, J.G. and Lasker, E.G., The case against differential diagnosis: Daubert, medical causation testimony, and the scientific method, J. Health Law, 2004, vol. 37, no. 1, pp. 85–111.

  226. Evans, J.S., Abrahamson, S., Bender, M.A., et al., Health Effects—Models for Nuclear Power-Plant Accident Consequence Analysis. Part I: Introduction, Integration, and Summary, U.S. Nuclear Regulatory Commission (NRC), NUREG/CR-4214, Rev. 2, TRI-141, 1993. https://www.nrc.gov/docs/ML0500/ML050030192.pdf. Accessed November 22, 2020.

  227. Statkiewicz Sherer, M.A., Visconti, P.J., and Ritenour, E.R., Radiation Protection in Medical Radiography, St. Louis, MO: Mosby Elsevier, 2011, 6th ed.

  228. Heller, C.G., Effects on the germinal epithelium, Radiobiological Factors in Manned Space Flight, Langham, W.H., Ed., NRC Publication 1487. Washington, DC, National Academy of Sciences, National Research Council, 1967, pp. 124–133.

  229. Rowley, M.J., Leach, D.R., Warner, G.A., and Heller, C.G., Effect of graded doses of ionizing radiation on the human testis, Radiat. Res., 1974, vol. 59, no. 3, pp. 665–678. https://doi.org/10.2307/3574084

  230. Clifton, D.K. and Bremner, W.J., The effect of testicular X-irradiation on spermatogenesis in man: a comparison with the mouse, J. Androl., 1983, vol. 4, no. 6, pp. 387–392. https://doi.org/10.1002/j.1939-4640.1983.tb00765.x

  231. Advisory Committee on Human Radiation Experiments (ACHRE) USA, Final Report, Washington: U.S. Government Printing Office, 1995. https://bioethicsarchive.georgetown.edu/achre/final/report.html. Accessed November 22, 2020.

  232. Singh, V.K., Ducey, E.J., Brown, D.S., and Whitnall, M.H., A review of radiation countermeasure work ongoing at the armed forces radiobiology research institute, Int. J. Radiat. Biol., 2012, vol. 88, no. 4, pp. 296–310. https://doi.org/10.3109/09553002.2012.652726

  233. Singh, V.K., Newman, V.L., Berg, A.N., and Macvittie, T.J., Animal models for acute radiation syndrome drug discovery, Expert Opin. Drug Discov., 2015, vol. 10, no. 5, pp. 497–517. https://doi.org/10.1517/17460441.2015.1023290

  234. Singh, V.K. and Seed, T.M., A review of radiation countermeasures focusing on injury-specific medicinals and regulatory approval status: part I. Radiation sub-syndromes, animal models and FDA-approved countermeasures, Int. J. Radiat. Biol., 2017, vol. 93, no. 9, pp. 851–869. https://doi.org/10.1080/09553002.2017.1332438

  235. Singh, V.K. and Olabisi, A.O., Nonhuman primates as models for the discovery and development of radiation countermeasures, Expert Opin. Drug Discov., 2017, vol. 12, no. 7, pp. 695–709. https://doi.org/10.1080/17460441.2017.1323863

  236. Gray, G.M., Steven, B., Gail, C., et al., The Annapolis accords on the use of toxicology in risk assessment and decision-making: an Annapolis center workshop report, Toxicol. Methods, 2001, vol. 11, no. 3, pp. 225–231. https://doi.org/10.1080/105172301316871626

  237. Seed, J., Carney, E.W., Corley, R.A., et al., Overview: using mode of action and life stage information to evaluate the human relevance of animal toxicity data, Crit. Rev. Toxicol., 2005, vol. 35, nos. 8–9, pp. 663–672. https://doi.org/10.1080/10408440591007133

  238. Good, I.J., Weight of evidence, corroboration, explanatory power, information, and the utility of experiments, J. R. Stat. Soc., Ser. B: Methodol., 1960, vol. 22, no. 2, pp. 319–331. https://doi.org/10.1111/j.2517-6161.1960.tb00378.x

  239. Ecological Causal Assessment, Norton, S.B., Cormier, S.M., and Suter, G.W., II, Eds., U.S. Environmental Protection Agency, Cincinnati, OH, USA: CRC Press, 2015.

  240. NCRP, Report no. 150. Extrapolation of Radiation-Induced Cancer Risks from Nonhuman Experimental Systems to Humans, National Council on Radiation Protection and Measurements, 2005.

  241. The 2007 Recommendations of the International Commission on Radiological Protection, ICRP Publication 103, Annals of the ICRP, Valentin, J., Ed., Amsterdam: Elsevier, 2007.

  242. Schep, L.J., Slaughter, R.J., Temple, W.A., et al., Diethylene glycol poisoning, Clin. Toxicol. (Phila), 2009, vol. 47, no. 6, pp. 525–535. https://doi.org/10.1080/15563650903086444

  243. Hajar, R., Animal testing and medicine, Heart Views, 2011, vol. 12, no. 1, p. 42. https://doi.org/10.4103/1995-705X.81548

  244. Ballentine, C., Sulfanilamide disaster. Taste of raspberries, taste of death: the 1937 elixir sulfanilamide incident, FDA Consumer Mag., June 1981. www.fda.gov/files/about%20fda/published/The-Sulfanilamide-Disaster.pdf. Accessed November 25, 2020.

  245. Animal and Human Studies Addressing Health Effects, National Research Council, An Assessment of Potential Health Effects from Exposure to PAVE PAWS Low-Level Phased-Array Radiofrequency Energy, Washington, DC: The National Academies Press, 2005. www.nap.edu/catalog/11205/an-assessment-of-potential-health-effects-from-exposure-to-pave-paws-low-level-phased-array-radiofrequency-energy. https://doi.org/10.17226/11205

  246. USEPA 2016, Framework for Incorporating Human Epidemiologic and Incident Data in Risk Assessments for Pesticides, Office of Pesticide Programs, U.S. Environmental Protection Agency, 2016.

  247. Freedman, D.A. and Zeisel, H., From mouse-to-man the quantitative assessment of cancer risks, Stat. Sci., 1988, vol. 3, no. 1, pp. 3–28.

  248. Ivanov, I.V. and Ushakov, I.B., Basic approaches to extrapolation of data from animals to humans in a radiobiological experiment, Med. Radiol. Radiats. Bezop., 2020, vol. 65, no. 3, pp. 5–12. https://doi.org/10.12737/1024-6177-2020-65-3-5-12

  249. Kundi, M., Causality and the interpretation of epidemiologic evidence, Environ. Health Perspect., 2006, vol. 114, no. 7, pp. 969–974. https://doi.org/10.1289/ehp.8297

  250. Leenaars, C.H.C., Kouwenaar, C., Stafleu, F.R., et al., Animal to human translation: a systematic scoping review of reported concordance rates, J. Transl. Med., 2019, vol. 17, p. 223. https://doi.org/10.1186/s12967-019-1976-2

  251. Suter, G.W., II, Norton, S., and Cormier, S., The science and philosophy of a method for assessing environmental causes, Hum. Ecol. Risk Assess., 2010, vol. 16, no. 1, pp. 19–34. https://doi.org/10.1080/10807030903459254

  252. Schoeny, R., Haber, L., and Dourson, M., Data considerations for regulation of water contaminants, Toxicology, 2006, vol. 221, nos. 2–3, pp. 217–224. https://doi.org/10.1016/j.tox.2006.01.019

  253. Becker, R.A., Patlewicz, G., Simon, T.W., et al., The adverse outcome pathway for rodent liver tumor promotion by sustained activation of the aryl hydrocarbon receptor, Regul. Toxicol. Pharmacol., 2015, vol. 73, no. 1, pp. 172–190. https://doi.org/10.1016/j.yrtph.2015.06.015

  254. Lacchetti, C., Ioannidis, J., and Guyatt, G., Surprising results of randomized trials, in Users’ Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, Guyatt, G., Rennie, D., Meade, M.O., and Cook, D.J., Eds., JAMA Evidence, The Evidence Based Medicine Working Group, New York: McGraw Hill Medical, 2008, 2nd ed., pp. 113–151.

  255. Hartung, T., Luechtefeld, T., Maertens, A., and Kleensang, A., Integrated testing strategies for safety assessments, ALTEX, 2013, vol. 30, no. 1, pp. 3–18. https://doi.org/10.14573/altex.2013.1.003

  256. Non-Clinical Development: Basic Principles. Medicines R&D. Toolbox (online library), The European Patients’ Academy on Therapeutic Innovation (EUPATI): Patient Engagement Through Education, National Platforms. https://toolbox.eupati.eu/resources/non-clinical-development-basic-principles/. Accessed November 27, 2020.

  257. Environmental Health Risk Assessment. Guidelines for Assessing Human Health Risks from Environmental Hazards, Department of Health and Ageing and Health Council, Population Health Division, Publication Distribution Officer, 2002.

  258. Crump, K.S., Chen, C., and Louis, T.A., The future use of in vitro data in risk assessment to set human exposure standards: challenging problems and familiar solutions, Environ. Health Perspect., 2010, vol. 118, no. 10, pp. 1350–1354. https://doi.org/10.1289/ehp.1001931

  259. Romeo, D., Salieri, B., Hischier, R., et al., An integrated pathway based on in vitro data for the human hazard assessment of nanomaterials, Environ. Int., 2020, vol. 137, p. 105505. https://doi.org/10.1016/j.envint.2020.105505

  260. Kirkland, D., Aardema, M., Henderson, L., and Muller, L., Evaluation of the ability of a battery of three in vitro genotoxicity tests to discriminate rodent carcinogens and non-carcinogens I. Sensitivity, specificity and relative predictivity, Mutat. Res., 2005, vol. 584, nos. 1–2, pp. 1–256. https://doi.org/10.1016/j.mrgentox.2005.02.004

  261. Kirkland, D., Aardema, M., Muler, L., and Makoto, H., Evaluation of the ability of a battery of three in vitro genotoxicity tests to discriminate rodent carcinogens and non-carcinogens II. Further analysis of mammalian cell results, relative predictivity and tumour profiles, Mutat. Res., 2006, vol. 608, no. 1, pp. 29–42. https://doi.org/10.1016/j.mrgentox.2006.04.017

  262. Kirkland, D. and Speit, G., Evaluation of the ability of a battery of three in vitro genotoxicity tests to discriminate rodent carcinogens and non-carcinogens III. Appropriate follow-up testing in vivo, Mutat. Res., 2008, vol. 654, no. 2, pp. 114–132. https://doi.org/10.1016/j.mrgentox.2008.05.002

  263. Andersen, H., History and Philosophy of Modern Epidemiology, Based on a Talk Delivered at the &HPS Conference, Pittsburgh, October 2007. http://philsci-archive.pitt.edu/4159/. Accessed November 28, 2020.

  264. Weed, D.L., Analogy in causal inference: rethinking Austin Bradford Hill’s neglected consideration, Ann. Epidemiol., 2018, vol. 28, no. 5, pp. 343–346. https://doi.org/10.1016/j.annepidem.2018.03.004

  265. Russo, F. and Williamson, J., Interpreting causality in the health sciences, Int. Stud. Philos. Sci., 2007, vol. 21, no. 2, pp. 157–170. https://doi.org/10.1080/02698590701498084

  266. Lowell, R.B., Culp, J.M., and Dube, M.G., A weight-of-evidence approach for northern river risk assessment: integrating the effects of multiple stressors, Environ. Toxicol. Chem., 2000, vol. 19, no. 4, pp. 1182–1190. https://doi.org/10.1002/etc.5620190452

  267. IARC Monographs on the Evaluation of Carcinogenic Risks to Humans, Preamble, Lyon, France, 2006.

  268. Pai, M., Fundamentals of Epidemiology, Lectures, Montreal, Canada: McGill University, 2014. www.teachepi.org/courses/fundamentals-of-epidemiology/. Accessed November 28, 2020.

  269. Boobis, A.R., Doe, J.E., Heinrich-Hirsch, B., et al., IPCS framework for analyzing the relevance of a noncancer mode of action for humans, Crit. Rev. Toxicol., 2008, vol. 38, no. 2, pp. 87–96. https://doi.org/10.1080/10408440701749421

  270. Becker, R.A., Ankley, G.T., Edwards, S.W., et al., Increasing scientific confidence in adverse outcome pathways: application of tailored Bradford–Hill considerations for evaluating weight of evidence, Regul. Toxicol. Pharmacol., 2015, vol. 72, no. 3, pp. 514–537. https://doi.org/10.1016/j.yrtph.2015.04.004

  271. Guess, H.A., Premarketing applications of pharmacoepidemiology, in Pharmacoepidemiology, Strom, B.L., Ed., Baffins Lane, Chichester, West Sussex: Wiley, 2000, 3rd ed., pp. 449–462.

  272. Weed, D.L., Causal criteria and Popperian refutation, in Causal Inference, Rothman, K.J., Ed., Chestnut Hill, MA: Epidemiologic Resources, 1988, pp. 15–32.

  273. Rothman, K.J., Modern Epidemiology, Boston, MA: Little, Brown, 1986, 1st ed.

  274. Kleinberg, S. and Hripcsak, G., A review of causal inference for biomedical informatics, J. Biomed. Inform., 2011, vol. 44, pp. 1102–1112. https://doi.org/10.1016/j.jbi.2011.07.001

  275. Lucas, R.M. and McMichael, A.J., Association or causation: evaluating links between “environment and disease,” Bull. World Health Organ., 2005, vol. 83, no. 10, pp. 792–795. https://doi.org/10.1590/s0042-96862005001000017

  276. NCRP 1994, Science and Judgment in Risk Assessment, National Research Council, Washington, DC: National Academy Press, 1994. https://doi.org/10.17226/2125

  277. Shleien, B., Ruttenber, A.J., and Sage, M., Epidemiologic studies of cancer in populations near nuclear facilities, Health Phys., 1991, vol. 61, no. 6, pp. 699–713. https://doi.org/10.1097/00004032-199112000-00001

  278. Wakeford, R., Antell, B.A., and Leigh, W.J., A review of probability of causation and its use in a compensation scheme for nuclear industry workers in the United Kingdom, Health Phys., 1998, vol. 74, no. 1, pp. 1–9. https://doi.org/10.1097/00004032-199801000-00001

  279. Wakeford, R., Association and causation in epidemiology—half a century since the publication of Bradford Hill’s interpretational guidance, J. R. Soc. Med., 2015, vol. 108, no. 1, pp. 4–6. https://doi.org/10.1177/0141076814562713

  280. Fairlie, I., Commentary: childhood cancer near nuclear power stations, Environ. Health, 2009, vol. 8, p. 43. https://doi.org/10.1186/1476-069X-8-43

  281. Martinez-Betancur, O., Causal judgment by Sir Austin Bradford Hill criteria: leukemias and radiation, Revista de la Facultad de Medicina, Universidad Nacional de Colombia, 2010, vol. 58, no. 3, pp. 236–249.

  282. Ulsh, B.A., The new radiobiology: returning to our roots, Dose Response, 2012, vol. 10, no. 4, pp. 593–609. https://doi.org/10.2203/dose-response.12-021

  283. Jorgensen, T.J., Strange Glow. The Story of Radiation, Princeton: Princeton Univ. Press, 2016.

  284. Ivanov, E.P., Effekty malykh doz. Uchebnaya programma dlya spetsial’nosti 1-31 05 03 “Khimiya vysokikh energii” (Effects of Small Doses. Curriculum for the Specialty 1-31 05 03 “High Energy Chemistry”), Beloruss. Gos. Univ., 2016.

  285. IARC 2010, International Agency for Research on Cancer, Carbon Black, Titanium Dioxide, and Talc, IARC Monographs on the Evaluation of Carcinogenic Risks to Humans, Lyon, France, 2010, vol. 93.

  286. Faustman, E.M., Gohlke, J.M., Ponce, R.A., et al., Experimental approaches to evaluate mechanisms of developmental toxicity, in Handbook of Developmental Toxicology, Hood, R.D., Ed., New York: CRC Press, 1997, pp. 13–41.

  287. Cole, P., Causality in epidemiology, health policy and law, Environ. Law Rep., 1997, vol. 27, no. 6, pp. 10279–10285.

  288. Martin, P., Bladier, C., Meek, B., et al., Weight of evidence for hazard identification: a critical review of the literature, Environ. Health Perspect., 2018, vol. 126, no. 7, p. 076001. https://doi.org/10.1289/EHP3067

Funding

This study, which was carried out within the framework of the broader budgetary theme of the R&D of the Federal Medical-Biological Agency of Russia, was not supported by any other sources of funding.

Author information

Corresponding author

Correspondence to A. N. Koterov.

Ethics declarations

The author declares that he has no conflicts of interest. This article does not contain any studies involving animals or human participants performed by the author.

There were no time limits, official requirements, restrictions, or other external objective or subjective confounding factors in the completion of this work.

Additional information

Translated by M. Batrukova

Published in the author’s version.

About this article

Cite this article

Koterov, A.N. Causal Criteria in Medical and Biological Disciplines: History, Essence, and Radiation Aspect. Report 3, Part 2: Hill’s Last Four Criteria: Use and Limitations. Biol Bull Russ Acad Sci 49, 2184–2222 (2022). https://doi.org/10.1134/S1062359022110115
