The process of peer review represents the gold standard for evaluating medical science across journals and disciplines and is considered a key pillar to ensure reliable research, scientific integrity, timely dissemination of information and, ultimately, better patient care. However, the current process is far from perfect (e.g., it lacks standardization and is relatively slow), is subjective (e.g., susceptible to conscious and unconscious bias), and does not seem ideally suited to tackling new challenges, especially those posed by the growing number of scientific journals and of preprint servers hosting manuscripts that have not yet been peer reviewed or accepted by academic journals (especially following the coronavirus disease 2019 [COVID-19] pandemic) [1, 2].

These challenges are amplified in the field of pharmacovigilance, and specifically when dealing with disproportionality analysis (DA) of individual case safety report (ICSR) systems, an established statistical approach to detect higher-than-expected adverse event reporting [3]. The key goal of DA remains the early detection of rare but serious suspected adverse drug reactions (including those arising from drug interactions) that cannot be detected or fully appreciated in clinical trials or healthcare databases [4]. In recent years, we have witnessed an exponential increase in the number of publications on DA, mostly from academia [5]. A focus on DA therefore seems timely, and Drug Safety, the official journal of the International Society of Pharmacovigilance (ISoP), is one of the journals of choice for researchers in the field. Of note, DAs have also attracted the interest of a number of ‘clinical’ journals, especially in the field of oncology in relation to immune checkpoint inhibitors [6,7,8].
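For readers less familiar with DA, a minimal worked illustration is sketched below using the reporting odds ratio (ROR), one of several commonly used disproportionality measures; the 2 × 2 layout, the formula and the confidence interval are standard, whereas the report counts and the signalling threshold shown are hypothetical examples, not prescriptions.

```latex
% Hypothetical 2x2 contingency table for a drug-event pair in an ICSR database:
%                        event of interest   all other events
%   drug of interest          a = 40              b = 960
%   all other drugs           c = 2,000           d = 197,000
%
% Reporting odds ratio (ROR) and its approximate 95% confidence interval:
\[
\mathrm{ROR} = \frac{a/b}{c/d} = \frac{ad}{bc}
             = \frac{40 \times 197{,}000}{960 \times 2{,}000} \approx 4.1
\]
\[
95\%\ \mathrm{CI} = \exp\!\left( \ln(\mathrm{ROR}) \pm 1.96
  \sqrt{\tfrac{1}{a} + \tfrac{1}{b} + \tfrac{1}{c} + \tfrac{1}{d}} \right)
\]
% A signal of disproportionate reporting is often flagged when the lower bound of
% the 95% CI exceeds 1 and a minimum number of cases (e.g., at least 3) is reached,
% although thresholds vary across studies and databases.
```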

The widespread public access to large-scale ICSR databases offers a great opportunity for researchers across disciplines to perform DAs easily and quickly. This is the case with the US Food and Drug Administration (FDA) Adverse Event Reporting System (FAERS), which allows data visualization, download and analysis through different public dashboards (e.g., OpenVigil). FAERS also offers unrestricted access to raw downloadable data for customized DAs, and ad hoc access to individual cases (narratives) can be granted through a direct request to the FDA [9, 10].
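As a concrete illustration of this ease of access, the short sketch below queries the openFDA drug adverse event endpoint, one public route into FAERS data, to retrieve the most frequently reported reactions for a given drug. The drug name is an arbitrary example, and a customized DA on the raw quarterly FAERS files would additionally require its own deduplication and drug-name mapping.

```python
import requests

# Illustrative sketch: count the most frequently reported reactions for one drug
# via the openFDA drug adverse event endpoint (a public route into FAERS data).
BASE_URL = "https://api.fda.gov/drug/event.json"

params = {
    "search": 'patient.drug.medicinalproduct:"nivolumab"',   # example drug
    "count": "patient.reaction.reactionmeddrapt.exact",      # MedDRA preferred terms
    "limit": 10,                                             # top 10 reaction terms
}

response = requests.get(BASE_URL, params=params, timeout=30)
response.raise_for_status()

# Each result contains a reaction term and the number of matching reports.
for result in response.json().get("results", []):
    print(f'{result["term"]}: {result["count"]} reports')
```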

In our experience as Editors and Reviewers of DAs over the past years, several manuscripts have relied almost exclusively on DAs to monitor the postmarketing safety of novel medications, especially those receiving fast-track conditional approval, for which such monitoring is a regulatory requirement. However, DAs should not be viewed as a simple “statistical exercise”, since careful conception and design are needed to account for the nuances and peculiarities of the data to which DA is applied [11, 12]. In other words, DAs cannot be used per se as a standalone approach to assess a drug-related risk (they provide neither risk quantification nor ranking) and cannot replace clinical judgement at the individual level. DAs should be complemented by a careful case-by-case analysis, including pharmacological plausibility, as the first step within the signal management process, and further assessed in conjunction with an appraisal of the available evidence [4].

However, the vast majority of published signals have lacked clear reporting of how judgements of association or causality were made, even though they might theoretically support regulatory decisions [5]. Moreover, recent meta-epidemiological studies found major concerns about the transparency, reporting and interpretation of DAs, especially regarding key methodological aspects such as threshold definition or selection of comparator(s), undermining the credibility and reproducibility of the results, which are also frequently over-interpreted by researchers, notably in abstracts [13,14,15,16]. This so-called ‘spin’ in the presentation and interpretation of results may be attributed to the fact that case/non-case approaches mirror case-control designs, so that the resulting estimates may be confused with the results of pharmacoepidemiological studies and erroneously pooled in safety meta-analyses of longitudinal observational studies [17].

This intricate scenario raises, once more, the debate on the benefit of publishing DAs [18, 19] and on the importance of a genuinely constructive peer review. We do believe that well-conducted and appropriately reported studies based on DAs are worthy of publication and contribute to the body of evidence on adverse drug reactions. Peer review still represents the gold standard for research prioritization, accuracy and integrity, and platforms for responsible editorial policies, such as the Committee on Publication Ethics (COPE) [20] and the International Committee of Medical Journal Editors (ICMJE) [21], have been launched to promote shared and transparent peer-review procedures. Although there are mixed results regarding the superiority of any given procedure over another in detecting research misconduct, some review procedures are significantly more effective in preventing the dissemination of low-quality research and possible future retractions, such as involving the wider community in review, using digital tools, constraining interaction between authors and reviewers, and using presubmission review procedures such as registered reports [22].

Traditionally, peer review occurs between the submission and publication of a manuscript. The increasing academic pressure to publish poses a challenging and time-demanding task for an Editor, who must find available, competent reviewers and obtain timely, insightful comments. This, along with the increasing number of medical journals and of submitted articles on DAs with no obvious increase in the pool of available reviewers, has created a kind of ‘reviewing fatigue’ that has led many researchers to skip or decline reviewing requests. Some initial proposals can be envisioned to take the peer-review system for DA, and, more broadly, for pharmacovigilance, to the next level (Table 1).

Table 1 Controversies, challenges and proposals on the peer-review process in pharmacovigilance

The first key issue is to support Editors in screening DAs for external review. While the expected impact and added value of a DA for the literature, mostly in terms of novelty and overall quality of the study, require the expert evaluation of a competent reviewer, the rigor of the methodological reporting of DAs could be assessed in a standardized, and possibly automated, way through artificial intelligence (AI) tools, helping Editors with fast-track review and possible desk rejection. For example, AI tools such as the Artificial Intelligence Review Assistant (AIRA), developed by the open-access publisher Frontiers, can quickly help editors to evaluate the quality of manuscripts, including assessment of language quality, integrity of the figures, detection of plagiarism and potential conflicts of interest, compliance with reporting guidelines, and rapid identification of potential reviewers. A dedicated repository for submitting protocols and study results, similar to those established for clinical trials and systematic reviews, could also further increase researchers' awareness of the importance of prespecifying research questions and planning the study design, while avoiding overlapping, redundant research. Although this practice is still in its infancy, some examples of protocols have been published on the Open Science Framework (OSF) [23, 24].
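To make the idea of standardized or automated screening concrete, the sketch below shows the kind of simple rule-based check that could flag missing methodological reporting before external review. The checklist items and keyword patterns are purely illustrative assumptions; they do not correspond to AIRA or to any validated reporting guideline.

```python
import re

# Hypothetical, illustrative screening of a DA manuscript for key reporting items.
# Items and patterns are assumptions for demonstration only.
CHECKLIST = {
    "comparator / reference group": r"\b(comparator|reference group|background reporting)\b",
    "disproportionality measure or threshold": r"\b(ROR|PRR|IC025|EBGM|lower bound|threshold)\b",
    "deduplication of reports": r"\b(de-?duplicat\w+|duplicat\w+)\b",
    "case-by-case / causality assessment": r"\b(case[- ]by[- ]case|causality|narrative review)\b",
}

def screen_manuscript(text: str) -> dict:
    """Return which checklist items appear to be addressed in the manuscript text."""
    return {
        item: bool(re.search(pattern, text, flags=re.IGNORECASE))
        for item, pattern in CHECKLIST.items()
    }

if __name__ == "__main__":
    example = "We computed the ROR with 95% CI and removed duplicate reports."
    for item, addressed in screen_manuscript(example).items():
        print(f"{item}: {'reported' if addressed else 'NOT found'}")
```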

The second key issue is effective engagement with reviewers for timely recruitment. Rewarding reviewers is an editorial priority for achieving quality and timeliness. While financial incentives (e.g., a direct monetary payment or an invitation to publish open access free of charge) do not appear to offer a long-lasting advantage, there is a pressing need to identify genuinely novel non-financial incentives for reviewers, beyond an invitation to join an editorial board and/or to write an editorial or linked commentary, or continuing medical education accreditation. It is time to fully acknowledge reviewers' efforts in tenure-track evaluations or as genuine research output. High-quality peer reviews have the potential to substantially increase the impact of research output, with direct and indirect benefits for public health. While ORCID and Reviewer Recognition Services such as Publons/Clarivate (a frequent optional choice for reviewers when submitting their review) can easily be exploited to quantitatively track the activity of peer reviewers, the challenging and debated aspect is how to score and grade the quality of peer review and how to identify top-level reviewers. Qualitative and quantitative criteria, ideally harmonized across journals, should be considered, including actual blinded feedback from authors.

A third, connected issue is to increase the number of qualified reviewers. Inexperienced reviewers could allow not only the publication of poor-quality DAs but also the rejection of high-quality ones, especially when, owing to time constraints, a reviewer cannot appreciate the scientific value of a DA and its implications for clinical practice, regulators and research. Although the actual impact of reviewer training on the quality of peer review is debated [25], general training courses on the foundations of peer review are continuously offered by institutions and publishers [26]. We call for a training programme focused on the reporting, communication and interpretation of DAs within the pharmacovigilance curriculum (with relevant implications for academic careers and promotion) [27], and we are also open to ideas on how to effectively credit the quality of, and time spent on, reviews by ‘pharmacovigilance’ reviewers.

Accurate and constructive peer review is a major challenge but remains a priority goal for research integrity, as highlighted by the 2019 Hong Kong manifesto [28]. With regard to DAs, reviewers are asked to be up to date, to assess novelty against existing knowledge; knowledgeable, to judge methodological and innovative aspects; and meticulous, to comprehensively scrutinize all aspects of the work, including potential ‘spin’ in the interpretation [13] as well as inaccurate citations [29]. This long-term mission could be specifically endorsed by scientific societies such as ISoP, the International Society for Pharmacoepidemiology (ISPE) and the European Association for Clinical Pharmacology and Therapeutics (EACPT), which could synergize to develop quality criteria to score DAs, including a dedicated risk-of-bias tool. Innovative ideas such as publishing outstanding peer-review reports could simultaneously serve as a reward for reviewers' work and as a training resource for other researchers. Moreover, replication of published studies has recently been shown to be feasible and scalable across entire fields and could be a way to encourage transparent and reproducible research practices [30]. Given that data from international pharmacovigilance databases are easily accessible to researchers, such replication games may be a useful way to train junior researchers in DAs, and ultimately to review and correct DA results after publication.

The 10th International Congress on Peer Review and Scientific Publication, announced to be held in Chicago, Illinois, on 3–5 September 2025, will be a great opportunity to share proposals and solutions to make the peer review, publication, and dissemination processes more efficient, fair, open, transparent, reliable, equitable, and sustainable [31]. We welcome contributions from interested journals dealing with pharmacovigilance in pursuing this call towards an effective and innovative model of peer review. In the era of intelligent automation, there is once more an urgent need for timely and valuable peer review to advance DAs and their actual transferability to, and exploitation by, the various stakeholders.