Introduction

A drug-drug interaction (DDI) occurs when two or more drugs interact in a way that affects their effectiveness, safety, or both [1]. DDIs are categorized into pharmacokinetic and pharmacodynamic interactions. In pharmacokinetic DDIs, drug A (the precipitant) affects the levels of drug B (the object) at the stage of absorption, distribution, metabolism, or elimination (Fig. 1A) [2]. In pharmacodynamic DDIs, drug A does not affect the levels of drug B but directly influences its effects, either enhancing or counteracting them (Fig. 1B) [2].

Fig. 1

Illustration of drug concentrations in pharmacokinetic and pharmacodynamic drug-drug interactions. A Illustration of a pharmacokinetic drug-drug interaction, where concomitant use of the precipitant affects (in this example increases) the concentration of the object; the continuous line shows the concentration curve of the object without concomitant use of the precipitant, while the dotted line shows the concentration curve of the object with concomitant use of the precipitant. B Illustration of a pharmacodynamic drug-drug interaction, where concomitant use of the precipitant does not affect the concentration of the object; the continuous line shows the concentration curve of the object without concomitant use of the precipitant, while the dotted line shows the concentration curve of the object with concomitant use of the precipitant

A well-established example of a pharmacokinetic DDI may occur during concomitant use of statins metabolized by the cytochrome P450 3A4 (CYP3A4) enzyme (atorvastatin, simvastatin, and lovastatin) and macrolide antibiotics that inhibit CYP3A4 (erythromycin and clarithromycin), which has been shown to increase systemic statin levels and enhance toxicity [3]. Conversely, concomitant use of direct oral anticoagulants (DOACs) and antiplatelet agents, drug classes that both inhibit hemostasis through different mechanisms, can increase the risk of bleeding via a pharmacodynamic DDI [4].

While DDIs may have beneficial effects on some occasions [5], they usually attract the attention of clinicians, pharmacologists, pharmacists, and pharmacoepidemiologists because of the adverse clinical effects associated with their occurrence. For example, DDIs are known to account for > 10% of all adverse drug effects [6], and they are responsible for roughly 5% of hospital admissions among older adults [7]. Importantly, the prevalence of DDIs, and thus their clinical and public health relevance, is expected to rise strongly in the coming years. The main driver of this development is the ageing population, which leads to high numbers of multimorbid individuals and a subsequent increase in polypharmacy [8].

DDIs have also become a focus of healthcare regulators. For example, the United States (US) Food and Drug Administration [9] and the European Medicines Agency [10] have issued guidelines on DDI assessment. Moreover, the US National Action Plan for Adverse Drug Event Prevention, released in 2014 by the Office of Disease Prevention and Health Promotion and organized around surveillance, evidence-based prevention, payment and policy incentives and oversight, and research opportunities, highlights DDIs as an integral part of drug safety [11].

Currently, evidence for DDIs is mostly derived from Phase I studies and case reports. Phase I studies belong to the early stages of drug development and assess important pharmacokinetic properties such as the maximum plasma concentration or plasma half-life of the object drug upon concomitant use of certain precipitants. These studies are typically conducted among small groups of healthy volunteers, which limits their external validity with respect to routine clinical practice. Moreover, the small sample size often precludes the assessment of clinical outcomes.

Case reports are usually the main source of evidence when it comes to the clinical effects of DDIs [12]. Case reports are useful as they can aid with the early identification of rare adverse drug effects; in this way, they enable the generation of hypotheses in the area of DDI safety. However, case reports lack a denominator, which makes it impossible to estimate the potential excess risk of adverse drug effects associated with a DDI.

Randomized controlled trials (RCTs) can also be used to assess clinical DDI effects [13, 14]. RCTs have the advantage of eliminating confounding, assuming optimal randomization. However, they are rarely feasible in the setting of DDIs due to the very high sample size requirements and ethical considerations. Moreover, RCTs are often conducted among highly selected populations, which could limit their external validity.

Overall, the clinical evidence is scarce and limited for the majority of DDIs. The poor quality of evidence has been acknowledged by a wide range of relevant stakeholders in the past [15]. It is further reflected in the common disagreement among major DDI compendia about the potential severity of specific DDIs [16]. In this review, we provide an overview of the pharmacoepidemiology of DDIs with a focus on cohort designs. We also highlight the decision-making process regarding the specifics of study design (cohort assembly, exposure definition, and comparator choice), potential biases that may arise from certain design-related decisions, and strategies to mitigate these biases.

Pharmacoepidemiology of Drug-Drug Interactions

Overview

Pharmacoepidemiology can assess the clinical effects of DDIs and thus has the potential to address this important knowledge gap, providing urgently needed evidence for prescribing clinicians and patients while also aiding regulatory decision making [2]. Pharmacoepidemiologic DDI studies have become feasible from a sample size standpoint in recent years due to the increasing availability of large datasets with healthcare claims data or electronic medical records.

The designs most commonly used in contemporary pharmacoepidemiologic DDI studies include case-only designs such as the case-crossover design and the self-controlled case series. The main appeal of both designs is their effective control of time-fixed confounding, given that individuals serve as their own controls [17, 18]. In the DDI setting, studies applying case-only designs are nested within person-time exposed to the object, and the comparison is conducted between person-time exposed to the precipitant and person-time unexposed to the precipitant [2].

Of note, these designs rest on certain assumptions that are either very strict or challenging to test, or both. For example, the case-crossover design requires the (co-)exposure to be transient and the outcome to be acute [17], which limits the number of DDIs that can be studied with this approach. On the other hand, the self-controlled case series, a design that also considers the person-time after the outcome, requires that the outcome does not alter the probability of subsequent (co-)exposure [18]. This assumption, the violation of which can lead to outcome-dependent censoring and selection bias, can be hard to meet, especially in the case of DDIs, where a certain degree of awareness among prescribing healthcare professionals exists. Recent rigorous methodological work has focused on the advantages and disadvantages of case-only designs for DDI research [19, 20•].

Cohort Designs for Drug-Drug Interactions

Another approach to studying DDIs in pharmacoepidemiology is cohort designs. As with case-only designs, cohort studies for DDIs are nested within person-time exposed to the object and compare person-time exposed to the precipitant versus person-time unexposed to the precipitant [2]. Unlike case-only designs, however, cohort studies for DDIs do not compare individuals to themselves but to ‘controls.’

When designing cohort studies for DDIs, we think it is important to ask three main questions. The first question is whether the precipitant has an independent effect on the outcome in the absence of the object. The second question is whether the indication of the precipitant is related to the outcome. The third question concerns the cohort entry date for those exposed and those unexposed to the precipitant. Based on these questions, we elaborate below on the decision-making process of cohort studies for DDIs, using examples from the work conducted by our group. We also attempt to make some recommendations on how to mitigate potential biases that may arise during this process (Table 1).

Table 1 Design-related questions for cohort studies on DDIs and decision-making process

Does the Precipitant Have an Independent Effect on the Outcome in the Absence of the Object?

This question should be answered based on the hypothesized pharmacologic mechanism of the DDI of interest. For most pharmacodynamic interactions, where the precipitant either enhances or counteracts the effects of the object, the answer should be yes. For pharmacokinetic interactions, where the precipitant changes the levels of the object but does not exert any direct effects, the answer should be no. Based on the answer to this question, we can then decide how to approach precipitant use prior to cohort entry.

If the precipitant does have an independent effect on the outcome in the absence of the object, allowing past or prevalent use of the precipitant may introduce prevalent user bias with depletion of susceptibles. This type of selection bias, which has been described thoroughly elsewhere [21], can lead to spurious associations with artificially decreased effect estimates [22]. To avoid this bias, prevalent users of the precipitant should be excluded. For example, in a study on the risk of severe hypoglycemia associated with the pharmacodynamic DDI between the antidiabetic drug class of sulfonylureas (object) and the cardiovascular drug class of beta-blockers (precipitant), we excluded patients with a beta-blocker prescription in the six months prior to cohort entry [23]. Our rationale was based on the ability of beta-blockers to independently cause hypoglycemia on rare occasions [24]; therefore, allowing their past or prevalent use could have introduced depletion of susceptibles.
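
To make this exclusion step concrete, the following minimal sketch in Python with pandas shows one way it could be implemented. All table and column names are hypothetical, and the six-month window is approximated as 180 days.

```python
import pandas as pd

# Hypothetical input data: cohort entry = first sulfonylurea prescription.
cohort = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "cohort_entry": pd.to_datetime(["2015-03-01", "2015-06-15", "2016-01-10"]),
})
bb_rx = pd.DataFrame({  # beta-blocker (precipitant) prescriptions
    "patient_id": [2, 3],
    "rx_date": pd.to_datetime(["2015-05-20", "2014-02-01"]),
})

# Flag patients with a beta-blocker prescription in the 180 days before cohort
# entry (day 0 itself would be co-exposure at entry, not prior use).
merged = cohort.merge(bb_rx, on="patient_id", how="left")
days_before = (merged["cohort_entry"] - merged["rx_date"]).dt.days
prevalent_ids = merged.loc[days_before.between(1, 180), "patient_id"].unique()

# Exclude prevalent users of the precipitant to avoid depletion of susceptibles.
new_user_cohort = cohort[~cohort["patient_id"].isin(prevalent_ids)]
print(new_user_cohort)  # patient 2 is excluded; patients 1 and 3 remain
```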

On the other hand, when assessing the risk of severe hypoglycemia associated with the pharmacokinetic DDI between sulfonylureas (object) and the oral anticoagulant warfarin (precipitant) [25], we did not exclude patients with a warfarin prescription prior to cohort entry. Our rationale was that given the absence of a hypoglycemic potential with warfarin, allowing its past or prevalent use should not introduce bias, while at the same time it would preserve study power.

Is the Indication of the Precipitant Related to the Outcome?

This question should be answered based on our pharmacologic knowledge of the precipitant. If the indication of the precipitant is related to the outcome of interest, failing to account for this at the design stage of the study will possibly introduce confounding by indication, a common problem in pharmacoepidemiology [26]. To mitigate this bias, researchers can use a so-called control precipitant, a drug indicated in clinical settings similar to those of the actual precipitant but not known to interact with the object [2]. In these cases, concomitant use of the object and the precipitant is compared to concomitant use of the object and the control precipitant.
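
As a minimal illustration of how the control precipitant restructures the comparison, the following Python sketch classifies an interval of person-time; the flags are hypothetical and would in practice be derived from prescription intervals.

```python
def exposure_group(on_object: bool, on_precipitant: bool, on_control: bool):
    """Classify one interval of person-time within the object-exposed cohort."""
    if not on_object:
        return None  # outside the cohort, which is nested in object use
    if on_precipitant:
        return "object + precipitant"          # hypothesized DDI exposure
    if on_control:
        return "object + control precipitant"  # reference with a similar indication
    return "object alone"                      # not part of this contrast

# Example: a patient on the object and the control precipitant contributes
# person-time to the reference category of the comparison.
print(exposure_group(True, False, True))  # 'object + control precipitant'
```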

The potential downside of this approach is the augmentation of a challenge inherent in DDI pharmacoepidemiology: limited study power leading to imprecise effect estimates. Thus, if the association between the indication for the precipitant and the outcome of interest is unclear, a compromise between confounding control and study feasibility may be attempted. For example, in the two aforementioned studies on the risk of severe hypoglycemia associated with the DDIs involving sulfonylureas, primary analyses did not include control precipitants [23, 25]. Our rationale was based on the absence of a well-established association between hypoglycemia and the main indications for the precipitants warfarin (i.e., prophylaxis and treatment of thrombosis) and beta-blockers (i.e., arterial hypertension, heart failure, heart arrhythmias, and secondary prophylaxis of myocardial infarction). However, we did use control precipitants in sensitivity analyses, choosing antiplatelet agents and direct oral anticoagulants in the warfarin study and thiazide diuretics in the beta-blockers study. The findings, albeit less precise than those of the primary analyses, suggested that while the warfarin study was possibly affected by confounding by indication [25], this was not the case in the beta-blockers study [23].

What Is the Cohort Entry Date for Those Exposed and Those Unexposed to the Precipitant?

The answer to the question regarding the date of cohort entry, or ‘time zero’, is considered one of the ‘holy grails’ of pharmacoepidemiology in general and has led to early methodological advancements such as the active-comparator new-user study design [27, 28]. The correct assignment of time zero can be particularly challenging when an active comparator does not exist and non-use of the study drug must serve as the comparator. In this case, a ‘1 vs. 0’ comparison is conducted, where the cohort entry date for the unexposed becomes unclear, as there is no readily available anchor point for this exposure group. To address the problem of non-use as comparator, different approaches exist, ranging from ‘traditional’ epidemiologic methods such as time-varying exposure definitions and nested case–control analyses to more recent developments such as the emulation of target trials [29] and adaptations of the prevalent new-user design [30••].

Often, DDI pharmacoepidemiologic studies compare concomitant use of the object and the precipitant to use of the object alone, a ‘2 vs. 1’ comparison. As a result, the issues with respect to time zero assignment resemble those from ‘1 vs. 0’ comparisons mentioned above. However, while the main challenge in ‘1 vs. 0’ comparisons is to correctly assign time zero among the unexposed, the main challenge in the ‘2 vs. 1’ comparison is to correctly assign time zero among those co-exposed to the object and the precipitant.

To illustrate this challenge, let us consider a simplified example with two groups of patients (Fig. 2A–C). The first group (Fig. 2A) includes patients who initiate the object and then remain exposed to it without ever becoming exposed to the precipitant. The second group (Fig. 2B, C) includes patients who initiate the object and later initiate the precipitant, thereby becoming co-exposed. The aim of this hypothetical study would be to compare concomitant use of the object and the precipitant to use of the object alone.

Fig. 2

Illustration of biases based on the choice of cohort entry date in DDI studies. A Illustration of the choice of time zero (earliest date of exposure to object) for those exposed to the object alone. B Illustration of depletion of susceptibles after assigning time zero for the co-exposed group as the earliest date of co-exposure to the object and precipitant; the bias is introduced due to prevalent use of the object. C Illustration of immortal time bias after assigning time zero for the co-exposed group as the earliest date of exposure to the object; the bias is introduced due to the misclassification of person-time exposed to the object only as person-time co-exposed both to the object and to the precipitant

While the choice of time zero for those exposed to the object alone is relatively simple (the earliest date of exposure to the object), there are two choices regarding time zero for those initially exposed to the object alone and subsequently co-exposed to the object and the precipitant: (i) the earliest date of co-exposure (Fig. 2B) or (ii) the earliest date of exposure to the object (Fig. 2C). With the first choice, patients will be prevalent users of the object at cohort entry, which may introduce depletion of susceptibles (Fig. 2B). With the second choice, though, person-time exposed to the object alone will be misclassified as person-time co-exposed to the object and the precipitant (Fig. 2C). This misclassified person-time becomes ‘immortal’ because patients cannot, by design, develop the outcome of interest during that time. The resulting immortal time bias is known to lead to spurious associations with strongly decreased effect estimates [31, 32].
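
A worked numeric example may help illustrate the second choice (all numbers hypothetical): suppose a patient uses the object alone for 300 days, then adds the precipitant for 100 days and experiences the outcome at the end of follow-up.

```python
# Hypothetical person-time for one co-exposed patient in a '2 vs. 1' comparison.
object_alone_days, co_exposed_days, events = 300, 100, 1

# Correct: time zero at precipitant initiation; only co-exposed time counts.
rate_correct = events / (co_exposed_days / 365.25)

# Incorrect: time zero at object initiation; the 300 object-only days are
# misclassified as co-exposed. They are 'immortal' because the patient had to
# remain event-free during them in order to ever become co-exposed.
rate_biased = events / ((object_alone_days + co_exposed_days) / 365.25)

print(f"correct rate: {rate_correct:.2f} events/person-year")  # ~3.65
print(f"biased rate:  {rate_biased:.2f} events/person-year")   # ~0.91, diluted
```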

In both of our studies on sulfonylurea DDIs, time zero for every member of the cohort was the date of initiation of sulfonylureas (object), regardless of whether they were later co-exposed to the precipitant (warfarin or beta-blockers) or not [23, 25]. This way, depletion of susceptibles due to prevalent use of the object was avoided. To minimize immortal time bias, we used a time-varying exposure definition, according to which patients are allowed to contribute person-time to more than one exposure category over time. A time-dependent Cox proportional hazards model was then used to calculate confounder-adjusted hazard ratios. Figure 3 presents an illustration of the time-varying exposure definition for our warfarin study.
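
As a minimal sketch of this approach (not the code of our studies): person-time is split into intervals of constant exposure, and a time-varying Cox model is fitted, here using the Python package lifelines. All data are hypothetical, and a real analysis would additionally adjust for confounders.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Long-format person-time: one row per interval with constant exposure.
# 'co_exposed' flips from 0 to 1 at precipitant initiation, so no person-time
# exposed to the object alone is misclassified (no immortal time).
intervals = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3, 4],
    "start":      [0, 300, 0, 0, 100, 0],
    "stop":       [300, 400, 450, 100, 500, 200],
    "co_exposed": [0, 1, 0, 0, 1, 0],
    "event":      [0, 1, 1, 0, 0, 1],  # outcome at the end of the interval
})

# Time-dependent Cox model: the hazard ratio for 'co_exposed' compares
# co-exposed person-time with person-time on the object alone.
ctv = CoxTimeVaryingFitter()
ctv.fit(intervals, id_col="patient_id", event_col="event",
        start_col="start", stop_col="stop")
ctv.print_summary()
```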

Fig. 3

Illustration of a time-varying exposure definition. A The patient enters the cohort upon initiation of a sulfonylurea (blue line) and starts contributing person-time to the ‘sulfonylurea use alone’ exposure category. They then discontinue sulfonylureas and start contributing person-time to the ‘no current use of sulfonylureas’ exposure category. After some time, they re-initiate a sulfonylurea and remain exposed until the occurrence of the event, which is ascribed to the ‘sulfonylurea use alone’ exposure category. B The patient enters the cohort upon initiation of a sulfonylurea (blue line) and starts contributing person-time to the ‘sulfonylurea use alone’ exposure category. After some time, they initiate warfarin (orange line) and start contributing person-time to the ‘concomitant use of sulfonylureas and warfarin’ exposure category. They remain co-exposed to sulfonylureas and warfarin until the occurrence of the event, which is ascribed to the ‘concomitant use of sulfonylureas and warfarin’ exposure category. C The patient enters the cohort upon initiation of a sulfonylurea (blue line) and starts contributing person-time to the ‘sulfonylurea use alone’ exposure category. After some time, they initiate treatment with warfarin (orange line) and start contributing person-time to the ‘concomitant use of sulfonylureas and warfarin’ exposure category. Finally, they initiate insulin (green line) and start contributing person-time to the ‘sulfonylurea use with other non-metformin antidiabetic drugs, with or without warfarin’ exposure category. They remain co-exposed to sulfonylureas, warfarin, and insulin until the occurrence of the event, which is ascribed to the ‘sulfonylurea use with other non-metformin antidiabetic drugs (with or without warfarin)’ exposure category. D The patient initiates warfarin (orange line). After some time, they enter the cohort upon initiation of a sulfonylurea (blue line), while being a prevalent user of warfarin, and start contributing person-time to the ‘concomitant use of sulfonylureas and warfarin’ exposure category. They remain exposed to sulfonylureas and warfarin until the end of the study period. Abbreviations: SU, sulfonylurea; MET, metformin; AD, antidiabetic drug

Excursus: Time-Varying Exposure Definition for Drug-Drug Interaction Cohort Studies

The use of a time-varying exposure definition for ‘2 vs. 1’ comparisons in DDI studies has advantages beyond the minimization of immortal time bias. First, it maximizes study power, given the lack of censoring upon treatment switch or discontinuation and the resulting longer follow-up. Second, in chronic diseases with several steps of treatment escalation, such as type 2 diabetes, arterial hypertension, or heart failure, a time-varying exposure definition may more adequately reflect the dynamic nature of pharmacotherapy over time.

The use of time-varying exposure definitions for DDI cohort studies also comes with certain challenges. First, it requires advanced programming skills and substantial computational capacity compared to other exposure definitions such as intention-to-treat or as-treated, especially in the setting of large cohorts. For example, in our warfarin study, which was based on a cohort of > 300,000 patients, running the confounder-adjusted outcome model took longer than 6 h. Second, the use of time-varying exposure definitions may augment time-dependent confounding, the type of confounding that occurs after cohort entry. This is of particular concern when follow-up is long and when switches between exposure groups during follow-up could be related to the outcome of interest. Established tools such as the marginal structural Cox proportional hazards model can help mitigate time-dependent confounding [33] (see the sketch after this paragraph). Third, this approach may introduce some depletion of susceptibles: given that patients are allowed to contribute multiple episodes of concomitant use during follow-up, small effect sizes may be ‘diluted’ and remain undetected.
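
The following is a deliberately simplified, hypothetical Python sketch of an inverse-probability-weighted (‘marginal structural’) time-varying Cox analysis, not the method of any specific study; real applications require cumulative weights over the full treatment and covariate history.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxTimeVaryingFitter

# Simulate hypothetical person-time where co-exposure depends on HbA1c,
# standing in for a time-varying confounder (one interval per patient here).
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "patient_id": np.arange(n),
    "start": 0,
    "stop": rng.integers(30, 365, n),
    "hba1c": rng.normal(7.5, 1.0, n),
})
p_true = 1 / (1 + np.exp(-(df["hba1c"] - 7.5)))
df["co_exposed"] = rng.binomial(1, p_true)
df["event"] = rng.binomial(1, 0.05 + 0.03 * df["co_exposed"])

# 1) Treatment model: probability of co-exposure given the confounder.
p_hat = LogisticRegression().fit(df[["hba1c"]], df["co_exposed"]) \
                            .predict_proba(df[["hba1c"]])[:, 1]

# 2) Stabilized inverse probability of treatment weights, truncated at 10.
p_marg = df["co_exposed"].mean()
num = np.where(df["co_exposed"] == 1, p_marg, 1 - p_marg)
den = np.where(df["co_exposed"] == 1, p_hat, 1 - p_hat)
df["iptw"] = np.clip(num / den, None, 10)

# 3) Weighted time-varying Cox model (the marginal structural model).
cols = ["patient_id", "start", "stop", "event", "co_exposed", "iptw"]
ctv = CoxTimeVaryingFitter()
ctv.fit(df[cols], id_col="patient_id", event_col="event",
        start_col="start", stop_col="stop", weights_col="iptw")
ctv.print_summary()
```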

Potential Adaptations of the Cohort Design for Drug-Drug Interaction Studies

As mentioned above, the correct assignment of time zero in cohort studies for DDIs is not straightforward, and design errors can introduce several biases. Moreover, the application of time-varying exposure definitions, a proposed solution to minimize these biases, does not come without challenges. Hence, repurposing recently developed adaptations of the ‘conventional’ cohort design such as target trial emulation and the prevalent new-user design for the assessment of clinical DDI effects could prove useful.

Target Trial Emulation for Drug-Drug Interactions

The overall notion that observational studies should aim to emulate a hypothetical target trial is not new [34, 35]. However, it has emerged as one of the key concepts in pharmacoepidemiology in recent years; this development can be seen as part of the ongoing debate regarding the role of observational studies in the benefit-risk assessment of medical treatments and how confounding and other biases may affect their validity [29, 36••]. To our knowledge, target trial emulation has not yet been explicitly used in the area of DDIs. Hence, we will attempt only a brief delineation of its principles when it comes to the study of DDI effects.

Let us use the example of the interaction between sulfonylureas and warfarin and the risk of severe hypoglycemia. First, we would specify the protocol of the target trial including eligibility criteria (e.g., ongoing sulfonylurea use and indication for warfarin use), treatment strategies (initiation or not of warfarin while on a sulfonylurea), treatment assignment (random assignment to any of the treatment strategies), outcome definition, follow-up, and statistical analyses. Second, we would emulate the target trial with the help of observational ‘equivalents’; for example, instead of the random assignment to a treatment strategy described in the protocol, we would classify patients according to the strategy they actually followed at baseline and attempt to emulate randomization by adjusting for baseline confounders. The time zero for patients not initiating warfarin would then be the first month where all eligibility criteria were met (e.g., new diagnosis of atrial fibrillation during treatment with sulfonylureas).
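
A minimal Python sketch of this emulation of time zero, under simplified and entirely hypothetical assumptions (monthly resolution, atrial fibrillation as the only eligibility-defining indication, illustrative column names):

```python
import pandas as pd

# Hypothetical patient-level data; dates stored as monthly periods.
patients = pd.DataFrame({
    "patient_id":     [1, 2, 3],
    "su_start":       pd.PeriodIndex(["2015-01", "2015-02", "2015-06"], freq="M"),
    "af_diagnosis":   pd.PeriodIndex(["2015-04", "2015-04", "2015-08"], freq="M"),
    "warfarin_start": pd.PeriodIndex(["2015-04", None, None], freq="M"),
})

rows = []
for month in pd.period_range("2015-01", "2015-12", freq="M"):
    # Eligibility this month: ongoing sulfonylurea use, a new indication for
    # warfarin (atrial fibrillation diagnosed this month), and no warfarin
    # initiation before this month.
    eligible = patients[
        (patients["su_start"] <= month)
        & (patients["af_diagnosis"] == month)
        & ~(patients["warfarin_start"] < month)
    ]
    for _, p in eligible.iterrows():
        rows.append({
            "patient_id": p["patient_id"],
            "time_zero": month,
            # Strategy actually followed at time zero; randomization is
            # emulated downstream by adjusting for baseline confounders.
            "initiates_warfarin": p["warfarin_start"] == month,
        })

emulated = pd.DataFrame(rows)
print(emulated)  # one row per patient per eligible month, with time zero
```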

Prevalent New-User Design for Drug-Drug Interactions

Another potential option could be the prevalent new-user design. Similar to target trial emulation, this design has yet to be used in DDI pharmacoepidemiology. The prevalent new-user design was initially developed as an extension of the active-comparator new-user design for comparisons between newer and older drugs [37]. In this setting, applying the active-comparator new-user design would lead to the exclusion of all new users of the newer drug who were past users of the older drug, possibly a substantial fraction of the overall population. To avoid the resulting loss in statistical power and external validity, the prevalent new-user design was proposed [37]. To apply it, we would first need to assemble a base cohort of all users of drugs with the same indication as the drugs of interest. Then, we would define exposure sets for every new user of the newer drug, each also including all users of the older drug who did not initiate the newer drug but had the same duration of treatment with the older drug. Finally, within each exposure set, a user of the older drug with very similar characteristics to the new user of the newer drug would be identified as a comparator. More relevant for the study of DDIs is the recent adaptation of this design for ‘1 vs. 0’ comparisons: settings without an appropriate active comparator where non-use needs to serve as the reference group [30••]. There, new users of the drug of interest are matched, on either the duration of the underlying indication or the number of physician visits, to patients who had the opportunity to become exposed but did not.

Using a similar approach, we could study the sulfonylurea-warfarin interaction. First, we would need to form a base cohort of new users of sulfonylureas and then identify those adding on warfarin while on a sulfonylurea during the study period. For each co-exposed patient, we would define an exposure set based on the prior duration of sulfonylurea treatment. Accordingly, each exposure set would include one co-exposed patient and all other patients from the base cohort who were currently on sulfonylureas and had the same duration of sulfonylurea treatment as the co-exposed patient but did not add on warfarin. The time zero for patients not initiating warfarin would be the time point of the relevant exposure set.
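
A minimal sketch of such exposure sets with hypothetical data, using exact matching on treatment duration (in practice, one comparator per set would be selected based on similarity, e.g., via time-conditional propensity scores):

```python
import pandas as pd

# Hypothetical base cohort of current sulfonylurea users at a given time point:
# 'su_duration_days' is the duration of sulfonylurea treatment so far, and
# 'added_warfarin' flags patients initiating warfarin at that duration.
su_users = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5],
    "su_duration_days": [400, 400, 150, 400, 90],
    "added_warfarin": [True, False, False, False, False],
})

exposure_sets = []
for _, case in su_users[su_users["added_warfarin"]].iterrows():
    # All patients still on sulfonylureas with the same treatment duration who
    # did not add on warfarin form this case's exposure set; their time zero
    # is the time point defining the set.
    comparators = su_users[
        (~su_users["added_warfarin"])
        & (su_users["su_duration_days"] == case["su_duration_days"])
    ]
    exposure_sets.append({
        "co_exposed_patient": case["patient_id"],
        "comparator_pool": comparators["patient_id"].tolist(),
    })

print(exposure_sets)  # [{'co_exposed_patient': 1, 'comparator_pool': [2, 4]}]
```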

Discussion

The present review focused on the implementation of cohort designs for the study of the clinical effects of DDIs. We highlighted several key aspects of the decision-making process when designing relevant pharmacoepidemiologic studies. Moreover, we elaborated on potential biases that may arise during this process and on strategies that can help mitigate them. Finally, we touched upon recent methodological developments from other areas of pharmacoepidemiology that could also become useful for the study of DDIs.

DDIs have attracted increasing attention from both regulators and pharmacoepidemiologists in recent years. At the regulatory level, several DDI-related guidelines exist [9, 10]. However, the major focus of these guidelines lies in the assessment of pharmacokinetic parameters in preclinical in vitro and in vivo studies and in physiologically based pharmacokinetic modeling and simulation. As a result, pharmacoepidemiologic studies that could go beyond such ‘surrogate’ parameters and address the important knowledge gap of clinical drug risks are not covered.

The growing interest in DDIs among pharmacoepidemiologists is largely reflected in the publication of methodological work, mostly regarding the application of case-only designs [2, 19], and in initiatives within scientific societies, such as the DDI Special Interest Group of the International Society for Pharmacoepidemiology, launched in 2019 [38]. Moreover, an increasing number of pharmacoepidemiologic studies on DDIs have been published in higher-tier medical journals [39, 40]. However, the number of pharmacoepidemiologic research groups focusing on DDIs is still small; considering also the very high number (> 100,000) of potential DDIs [41], there is still a long way to go before robust clinical evidence on DDIs becomes the norm.

In order to improve drug safety and patient outcomes, knowledge of potential risk factors for drug toxicity is necessary. Well-established risk factors include, but are not limited to, advanced age (due to the increased susceptibility to drug toxicity among older adults) and impaired kidney or liver function (due to decreased renal or hepatic drug clearance, with drug accumulation and enhanced toxicity). However, these risk factors are either not modifiable or only marginally modifiable. Therefore, modifiable, and thus preventable, risk factors for drug toxicity can be of major clinical importance.

DDIs can be viewed as modifiable risk factors for drug toxicity, assuming the availability of therapeutic alternatives for the precipitant. This point was supported by a recent podcast referring to our study on the risk of severe hypoglycemia associated with the DDI between sulfonylureas and beta-blockers, which observed a 53% increased risk [42]. In the podcast, it was argued that, based on these findings, patients on sulfonylureas diagnosed with arterial hypertension should be treated with non-beta-blocker antihypertensive drugs to prevent the excess hypoglycemic risk associated with this DDI.

In summary, DDI research still represents a ‘niche’ within the realm of pharmacoepidemiology, despite the importance of interactions between commonly used medications. That being said, the availability of large datasets has rendered DDI studies increasingly feasible. Moreover, carefully considering the underlying pharmacology of the DDI, and of the involved medications separately, can help mitigate or even eliminate most biases. Given the limitations of ‘traditional’ sources of evidence such as pharmacokinetic studies and case reports when it comes to assessing the clinical effects of DDIs, pharmacoepidemiology can push things forward and strongly contribute to closing this knowledge gap.