Translational Stroke Research

Volume 2, Issue 1, pp 1–6

Resolving the Negative Data Publication Dilemma in Translational Stroke Research


DOI: 10.1007/s12975-010-0057-x

Cite this article as:
Lapchak, P.A. & Zhang, J.H. Transl. Stroke Res. (2011) 2: 1. doi:10.1007/s12975-010-0057-x

“Learn from yesterday, live for today, hope for tomorrow. The important thing is not to stop questioning.” ∼∼Albert Einstein (physicist, Nobel Prize in 1921; 1879–1955).

Stroke Therapies: The Continuing Challenge

The development of new treatments for stroke remains a formidable challenge, as emphasized in the article by Frank Sharp and colleagues [1]. However, with the recent advances in genomic and proteomic profiling [2–4], in vitro screening methodologies [5–7], predictive animal models [8–12], and a greater understanding of the complexity of processes and mechanisms involved in cellular degeneration and behavioral deficits following a stroke [13, 14], we now have the basic tools in our repertoire to develop and systematically test new candidates in a preclinical translational setting. Moreover, with more sophisticated adaptive clinical trial designs [15–18], therapy can be targeted to specific stroke patient populations to ensure some degree of efficacy, rather than complete failure. If the best that we can expect to achieve is to treat some patients, then we should accept that fact!

Systematic testing of therapies preclinically and clinically does not always produce positive results or a positive clinical outcome! Whether or not to publish “negative” data is not a question unique to the area of translational stroke research; it is a question that all translational researchers have had to confront during their scientific careers. There is no simple solution for dealing with the negative data that we all accumulate while testing hypotheses and developing novel treatments for stroke.

Recently, the editors of many different journals have commented on publication bias and the difficulty of publishing negative data [19–23]. Each has dealt with this problem in different ways. We will also have to provide some guidelines for prospective contributors of translational articles to this journal. This editorial addresses some of the problems associated with publishing negative data in peer-reviewed journals such as Translational Stroke Research and provides insight into the requirements for publishing a “negative” study in Translational Stroke Research. The goal is to contribute significant findings to the literature, even if they are negative, so that, ultimately, translational stroke research results in new and effective treatments.

“It is important to expect nothing, to take every experience, including the negative ones, as merely steps on the path, and to proceed.” ∼∼Ram Dass (Richard Alpert, spiritualist, born 1931).

The Pressure to Publish: It Should Include Negative Translational Research Studies!

The necessity to publish research data in peer-reviewed journals has created a dilemma for both editors and researchers. As correctly stated by Fanelli [23], there is growing competition to publish, and the publish-or-perish culture forces researchers to produce publishable data at all costs. Fortunately for the research community, some percentage of the data resulting from these studies is negative, because always generating “positive” data in translational models is unrealistic and puts the researcher in an awkward position. If the data is always positive, then the model is inappropriate for therapy development.

“If one tells the truth, one is sure, sooner or later, to be found out.” ∼∼Oscar Wilde (writer, poet, playwright, 1854–1900).

Negative data must be viewed from multiple perspectives and viewpoints in order to address and fully understand the impact of such data on a journal, the scientific community, and funding agencies.

From an editor's point of view, it is imperative to maintain a high standard of quality for all peer-reviewed translational studies published in this journal. High-quality publications providing extraordinary insight into disease processes and methods to treat diseases at some level will, no doubt, lead to citations of the specific papers and increase the overall impact factor of the journal. However, it has been reported that publications of “negative” data, which measured a single primary efficacy outcome, can lower a journal's impact factor [22]. Thus, a balance must be struck between supporting positive and negative studies and establishing and sustaining a journal's impact factor.

As researchers and members of the scientific community, we have been conflicted about negative study data for some time. As discussed during a recent National Institute of Neurological Disorders and Stroke (NINDS) study section, negative data should be viewed as extremely useful to the scientific community. The publication of negative data may reduce the excessive expenditure of valuable NINDS funding on hypotheses that do not require further testing. If the negative data is used as a tool, then, as a community, we won't go down the wrong path again. It is an indispensable learning experience.

“Nothing is a waste of time if you use the experience wisely”. ∼∼Auguste Rodin (François-Auguste-René Rodin, sculptor, 1840–1917).

There are caveats associated with the acceptance of negative data in publications. The inherent problem with many negative data studies is that the investigator has used a very specific treatment regimen, maybe a single time point or a single drug dose. In essence, since the experiment was not optimized, solid conclusions cannot be drawn.

What Can Be Done to Ensure Publication of Negative Data?

There are many approaches that can be used to solidify negative data so that the results are represented in the literature. We propose the following guidelines for Translational Stroke Research, guidelines that may also be transferable to other neuroscience and translational science journals. There are some practical basic requirements for preclinical/translational studies, including a fully randomized and blinded design, to ensure transferability of the data to a clinical trial, where a fully randomized, double-blind, placebo-controlled design must be used [24].

For a study to be acceptable for publication, the investigator should design a stringent study based upon a testable hypothesis with one or more valid efficacy endpoints, which will vary depending on the specific animal model being used. For instance, if a rodent acute ischemic stroke model were the model of choice to test a new drug, then at least two endpoints would be required: a direct measure of infarct volume using standard triphenyltetrazolium chloride staining or hematoxylin and eosin (H&E) staining, and a behavioral measure or composite behavioral/neurological function score that in some way parallels the clinical endpoints used in stroke trials (see [9, 12]). Similar behavioral and histological endpoints can be used in intracerebral hemorrhage (ICH) and subarachnoid hemorrhage (SAH) models [11, 12]. The investigator should consider developing a full dose–response curve and establishing the therapeutic window for the treatment once an optimal dose is known. Belayev et al. [25] provide an excellent example of multiple endpoint measurement using the treatment of experimental stroke with docosahexaenoic acid (DHA), an omega-3 essential fatty acid, which induced behavioral recovery and also reduced stroke volume with a long therapeutic window. Moreover, using state-of-the-art magnetic resonance imaging techniques, the investigators showed that the treatment reduced edema. Given the long therapeutic window of DHA in the rodent model, additional studies in other species should be considered, perhaps in the rabbit stroke model as suggested in the articles by Turner et al. [1] and Lapchak [9].

There are exceptions to the required two-endpoint experimental design noted above and to the type of endpoint(s) being measured. For instance, the rabbit embolic stroke model used by Lapchak and colleagues, which is discussed in the new review article by Turner et al. [1], has been quite useful in predicting therapies that have been further developed in clinical trials. However, the model can only use a single behavioral endpoint in any experiment, due to technical considerations such as the necessity of using a gamma-emitting isotope to quantify clot burden in the brain. Nevertheless, the model has been validated using thrombolysis with tissue plasminogen activator (tPA) as a positive control [26], including complete dose–response curves and therapeutic windows. The head-to-head comparison of a new treatment with tPA in the rabbit embolic stroke model is also encouraged by Frank Sharp and colleagues [1]. It is likely that other unique circumstances will arise when using other translational models, and they should be addressed individually by the editors.

In a translational study, if the target is valid based upon in vitro analysis or another assay (i.e., a test tube assay for enzyme activity) that can be used to steer the investigator, and the result on one or more efficacy outcomes is still negative, then the study deemed negative could be submitted for peer review. At this point, if one agonist (or antagonist) is negative, then the investigator may choose to test another drug with similar bioactivity but a different structure, to confirm that the specific pathway is not involved in, or is substantially unimportant for, attenuating the insult in either an ischemia or hemorrhage model. It is almost certain that reviewers will question the solubility of the drug and aspects of ADME (absorption, distribution, metabolism, and elimination), pharmacokinetics, and drug metabolism. Some of the basic ADME concerns can be addressed using the CeeTox™ in vitro screening and evaluation system, as described by Lapchak and McKim [6] for a neurotrophic curcuminoid small molecule (i.e., CNB-001) and two structurally and chemically different antioxidants (i.e., NXY-059 and Radicut). The investigator should consider these aspects of drug development while preparing an article for submission.

An uncommon approach that can be used to ensure that negative data is published is to combine the negative data with positive data in a publication so that it is represented in the literature as valuable information. There are multiple ways to do this using parallel treatment studies with a clinically validated positive control. With the use of a positive control, or a treatment that has reproducibly been shown to provide improvement in the model of choice, an investigator will be able to convincingly show that the model has been optimized for the endpoints being measured. The positive control will have multiple uses. First, it will allow the investigator to supply power analysis data for group size so that statistical significance can be achieved on each endpoint measured in the study. Second, if an investigator can show a beneficial effect of the positive control, then combination studies to evaluate the effect of a new therapy on efficacy and/or safety with a thrombolytic can be done. It will also allow the investigator to delve into the possible extension of the therapeutic window of the thrombolytic, as demonstrated using urokinase and taurine (see the study by Guan et al. [27]). Moreover, using multiple endpoints, the authors showed that reduced inflammation and reduced blood–brain barrier (BBB) disruption may underlie some of the beneficial effects of the drug combination. Because tPA is the current FDA-approved therapy for acute ischemic stroke, it would be of great interest to determine whether taurine has the same effect in combination with intravenous or intra-arterial tPA [9, 24]. The significance of BBB damage in reperfusion-induced injury, ICH, edema, and neurodegeneration is discussed in two separate articles in this issue of Translational Stroke Research by Hoffmann et al. [28] and Woitzik et al. [29].
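The power analysis for group size mentioned above can be sketched numerically. The following is a minimal illustration using the standard normal-approximation formula for a two-sided, two-group comparison; the effect size, alpha, and power values are illustrative assumptions, not values taken from any of the cited studies.

```python
# Illustrative power analysis for sizing treatment vs. control groups.
# Normal-approximation formula: n per group = 2 * ((z_alpha/2 + z_beta) / d)^2,
# where d is the standardized effect size (Cohen's d). Values are assumptions.
import math
from scipy.stats import norm

def sample_size_per_group(effect_size: float, alpha: float = 0.05,
                          power: float = 0.8) -> int:
    """Approximate animals needed per group for a two-sided two-sample test."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = norm.ppf(power)           # quantile corresponding to desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)                # round up to whole animals

# A large standardized effect (d = 0.8) at alpha = 0.05 and 80% power:
print(sample_size_per_group(0.8))  # → 25 per group
# A moderate effect (d = 0.5) requires substantially larger groups:
print(sample_size_per_group(0.5))  # → 63 per group
```

Because the normal approximation slightly underestimates the n required for a t-test, investigators typically add one or two animals per group (or use an exact t-based calculation) in practice.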

When using preclinical ischemic stroke models, it is necessary to show that the study population will respond to a previously Food and Drug Administration (FDA)-approved treatment. In the case of ischemic stroke models, the positive control of choice is the FDA-approved thrombolytic tPA [9, 24]. In the case of a hemorrhage model, such as SAH, clinically, the dihydropyridine L-type calcium channel antagonist nimodipine has been shown to produce consistent improvement [30]. For an informative review of current preclinical ICH models and their ability to predict efficacy in humans, please see the review article by Adeoye and colleagues [8]. The use of a positive control, such as nimodipine, can only be applied to new therapy development. However, if a more descriptive study is being done that does not directly assess the efficacy of a therapeutic agent, for example, see the article by Murakami and colleagues [31], then it is unnecessary to establish efficacy using a “positive” control as described above. The observation that SAH adversely affects brain tissue by inducing the pro-inflammatory cytokine high-mobility group box 1 protein (HMGB1) will ultimately lead to efficacy studies attempting to provide neuroprotection. Under those circumstances, it will be necessary to have a positive control should the studies aimed at antagonizing or blocking HMGB1 mechanisms not result in the expected benefit.

Even in negative data studies, mechanisms need to be established and confirmed. For example, if a specific protein kinase inhibitor is tested (i.e., a p38 mitogen-activated protein (MAP) kinase inhibitor), then the investigator needs to conclusively demonstrate that the activity of the MAP kinase is actually reduced or inhibited by the inhibitor to an extent at which a biological correlate or effect should be observed. Alternatively, the investigator should demonstrate that downstream effectors are affected (i.e., suppressed) by the inhibitor. If the drug being tested is intended to inhibit a cell surface receptor (i.e., an N-methyl-d-aspartate subtype-specific antagonist), then the antagonist should prevent the effect of the endogenous agonist and produce a behaviorally relevant effect. Moreover, for receptor-mediated effects, an exogenously applied agonist should be able to compete with the antagonist and nullify its effect.

The Scientific Circle of Life: Negative Clinical Findings Are Valuable Too!

“Man is unique in that he has plans, purpose and goals which require the need for criteria of choice. The need for ethical value is within man whose future may largely be determined by the choices he makes.” ∼∼George Bernard Shaw (playwright, 1856–1950).

Positive translational research data is most often used as the basis for clinical trial development. However, many ischemic stroke clinical trials do not meet their primary endpoints using the National Institutes of Health Stroke Scale (NIHSS) or modified Rankin Scale (mRS) [15, 32, 33] and are considered failures at some level, such as the nitrone NXY-059 (SAINT II [34]) trial, the thrombolytic Desmoteplase (DIAS-2 [35]) trial, and the near-infrared transcranial laser (NEST-2 [36]) trial. These three examples all uniquely illustrate the difficulty of developing a new therapy for stroke. They all “failed” for different reasons, which were not apparent in earlier trials that were overwhelmingly positive [37–39]. For example, it is now well accepted that NXY-059, being a hydrophilic compound, was an inferior compound to develop to treat stroke and that much of the preclinical and translational data overestimated the efficacy of the drug [9, 40]. Moreover, the time to treatment in the clinical trial may have exceeded a realistic therapeutic window to achieve efficacy [9, 40]. NXY-059 development has been abandoned.

In contrast to NXY-059, which was a much-studied drug in many laboratories, the clinical development of Desmoteplase was not substantiated by extensive preclinical testing in multiple animal models; thus, a well-designed preclinical plan was not used. Desmoteplase was primarily developed in a clinical setting using state-of-the-art imaging techniques to select the “best” patient population with large “penumbral areas”. Desmoteplase in DIAS-2 [35] failed to show efficacy in the optimized patient population, a failure that may have been associated with an overambitious time-to-treatment window, which included patients enrolled up to 9 h after a stroke. Counterintuitively, the DIAS-4 clinical trial was still designed to recruit patients using the same 3–9-h therapeutic window that was used in the failed DIAS-2 trial [41], even though data for tPA indicate that the maximum therapeutic window for tPA is 4.5 h [32]. After 4.5 h, the safety-to-benefit ratio is insufficient to justify the use of tPA [32]; in fact, the odds of a favorable outcome are significantly reduced.

The last example is transcranial laser therapy, which was evaluated in the NEST-1 and NEST-2 clinical trials [36, 37]. The NEST-2 clinical trial was not positive on the predefined primary endpoint using the mRS. However, post hoc analysis indicated that only moderately affected stroke patients (enrolled with NIHSS scores of 7–15) showed improvement at 90 days (P = 0.044). Thus, based upon the mixed negative and positive knowledge gained from the trial, the NEST-3 trial was designed to recruit patients using the same treatment regimen used in the NEST-1 and NEST-2 clinical trials, but the trial design has been modified to include only stroke patients in the NIHSS <15 group, the target population that responded to laser therapy in NEST-2 [42, 43].

Thus, the literature shows that negative findings can be dismissed, as with DIAS-4, or used to advantage in a subsequent clinical trial, as with NEST-3. The negative data on NXY-059 has also been duly acknowledged and has resulted in the cessation of NXY-059 development, not because free radicals are not appropriate targets (see [13, 44, 45]), but because a specific water-soluble nitrone compound such as NXY-059 was not clinically useful.

“Change is one thing, progress is another. “Change” is scientific, “progress” is ethical; change is indubitable, whereas progress is a matter of controversy.” ∼∼Bertrand Arthur William Russell (philosopher, logician, mathematician, historian, 1872–1970).

Is It Unethical Not to Publish Negative Data?

To conduct preclinical and translational research funded by the National Institutes of Health (NIH), the National Institute on Aging (NIA), or NINDS, among other prominent funding agencies such as the American Heart Association (AHA), investigators must adhere to a well-designed study supported by a specific Institutional Animal Care and Use Committee (IACUC) protocol.

If an exhaustive preclinical study that tested a scientific hypothesis is negative, the study has merit and should be published in the literature. It would be unethical not to publish the findings. Even negative findings must be disseminated to the research community, patient population, and the public. In addition, if a negative study challenged a currently clinically accepted concept, then it also has importance for human health benefits, and it should be considered for publication. However, if a negative study simply tested one outcome without identifying or clarifying mechanisms, then the study would be deemed incomplete and would not be considered for peer-review and publication.


The authors were supported by National Institute of Neurological Disorders and Stroke grants U01 NS60685 (PAL), ARRA R01 NS060864 (PAL), and R01 NS043338, R01 NS053407, and R01 NS054685 (JHZ).

Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  1. Translational Research, Department of Neurology, Cedars-Sinai Medical Center, Davis Research Building, Room D-2091, Los Angeles, USA
  2. Loma Linda University School of Medicine, Loma Linda, USA