Introduction

The evaluation of hypersensitivity reactions to drugs involves three main strategies: accurately reviewing the patient’s clinical history [1•], conducting diagnostic tests (skin tests and/or in vitro tests) [2] and performing drug challenge tests [3]. Challenge tests are currently the gold standard for diagnosis, but they often involve significant risks, especially in patients who have received multiple drugs in the context of an adverse reaction and/or patients with multiple comorbidities. Diagnostic evaluations based on in vivo tests, although accessible, do not offer perfect sensitivity rates. Thus, in vitro techniques may provide an interesting complement for diagnosis before conducting challenges and may even be considered as an alternative, particularly in cases with a history of a life-threatening reaction.

However, laboratory tests in drug allergy diagnosis have limitations: their sensitivity is moderate, availability is not guaranteed for all drugs, and some of the techniques are only available in specialised laboratories [4••]. Furthermore, limited evidence for some drugs restricts their validation for routine use [4••, 5••]. Maximising their potential utility, therefore, requires a tailored approach, focused on an appropriate clinical characterisation of the reaction. In this review, the existing evidence backing the currently available in vitro tests for drug hypersensitivity reactions (DHR) is summarised. Additionally, useful tips to optimise the yield of these tools are provided for the clinician.

In vitro tests should be requested according to the type of reaction: clinical clues

The main objectives in the management of a DHR after its treatment and resolution are as follows: to identify the culprit drug (in order to instruct future avoidance, when possible) and to clarify tolerance to alternative treatments for the patient. Frequently, the patient is only evaluated by an allergist once the reaction has resolved. Two types of in vitro tests are available to meet these objectives: those which help characterise the type of reaction during the acute phase (focused on the cells involved and the mediators released) and those applied after reaction resolution, to identify the culprit drug. An approach that considers the possible mechanism involved in a reaction is necessary to select the best in vitro techniques and to optimise their performance. Thus, reactions should be properly characterised during the acute phase, if possible, or based on the patient’s report [1•].

Depending on the time passed between the consumption of the drug and symptom onset, reactions are classified as suggestive of either immediate hypersensitivity (0–6 h) or non-immediate hypersensitivity (hours-days) [2]. However, this classification is often difficult to apply, for instance, in the case of reactions occurring 1–6 h after drug intake, which have often been termed “accelerated drug reactions” [6]. Also, the time elapsed from the administration of the culprit drug to the onset of symptoms is often difficult to determine in patients who received several drugs in the context of a reaction.

To simplify, in this review, DHRs will be divided into immediate drug hypersensitivity reactions (IDHR) (symptoms appearing in the first 6 h after drug intake) and non-immediate drug hypersensitivity reactions (NIDHR) (symptoms appearing more than 6 h after drug intake). Clinical features indicate differences between these two types of reactions: IDHR have a rapid onset and evolve very quickly (in less than an hour, the patient can turn from being asymptomatic to being in a life-threatening condition), whereas NIDHR normally show persistent (lasting longer than 24 h) skin lesions (maculopapular lesions, pustules, blisters…) that can be accompanied by systemic alterations such as hepatotoxicity, eosinophilia, or lymphadenopathy (Fig. 1). In the case of IDHR, different clinical patterns can be observed: type 1 reactions (IgE or non-IgE mediated), typically manifesting with urticaria, angio-oedema, bronchial hyperreactivity or even anaphylaxis, and cytokine storm-like reactions, which can also result in anaphylaxis but may initially present with different clinical characteristics such as fever, chills and musculoskeletal pain [7•] (Fig. 1).
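The simplified dichotomy used in this review can be expressed as a trivial decision rule. The sketch below, in Python, is purely illustrative (the function name and the choice of a closed 6-h boundary are assumptions); as noted above, reactions at 1–6 h blur this boundary in practice.

```python
def classify_dhr(hours_to_onset: float) -> str:
    """Simplified classification used in this review:
    symptoms within the first 6 h after drug intake -> immediate (IDHR),
    later onset -> non-immediate (NIDHR)."""
    return "IDHR" if hours_to_onset <= 6 else "NIDHR"
```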

Fig. 1

Most commonly used in vitro tests for the diagnosis of drug hypersensitivity.

Acute phase mediators aid in the identification of the reaction mechanism

Certain blood markers, when measured during the acute phase of a reaction, help characterise the mechanism underlying a DHR. In IDHR, for example, mast cell and basophil mediators, such as tryptase or histamine, are released, suggesting a type 1 reaction [7•]. On the other hand, in cytokine storm-like reactions, T cells and other cell types release cytokines, such as tumour necrosis factor (TNF)-α and interleukin (IL)-6 [8]. Prostaglandins and leukotrienes (LT) C4 and LTD4 are also increased in type 1 non-IgE reactions induced by nonsteroidal anti-inflammatory drugs (NSAIDs) [9], but the use of these markers for diagnosis has not been validated. Although the measurement of an increase in histamine levels showed high sensitivity in perioperative immediate reactions [10], its specificity is not very high [11], and sample extraction and preservation must be performed extremely quickly due to the short half-life of the molecule [12]. Tryptase measurement by means of the automated fluorescence enzyme immunoassay (FEIA) system, on the other hand, is a validated and easily performed technique, which has established tryptase as the main mediator measured in the acute phase of IDHR; it shows high sensitivity and high specificity. The latter can vary depending on the applied cut-off point and is influenced by the comparison between basal levels and levels measured within 2 h after the reaction [13]. Moreover, tryptase has been proposed as a specific acute phase marker for IgE-mediated reactions [10, 14]. Finally, although the sensitivity and specificity of interleukin-6 detection have not been evaluated for IDHR, an elevation of interleukin-6 levels, along with the absence of a relevant increase in tryptase levels, in an IDHR presenting with symptoms such as fever, chills, or musculoskeletal pain, could be suggestive of a cytokine storm-like reaction [15•].

Importantly, these acute phase biomarkers should always be measured during the reaction and should be compared to basal values, measured at least 24 h after symptom resolution [10, 16]. Alterations of the mast cell population, hematologic diseases and/or genetic disorders, for instance, may increase resting serum tryptase [17], and it is important to consider these entities in every patient’s differential diagnosis, particularly if elevated tryptase levels have been documented.
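The acute-versus-basal comparison described above is often operationalised with the widely cited “20% + 2” consensus criterion, in which an acute tryptase level exceeding 1.2 × baseline + 2 µg/L suggests mast cell activation. The review itself does not specify this formula, so the sketch below is only a hedged illustration of the comparison, with hypothetical names and values:

```python
def mast_cell_activation_suspected(acute_tryptase: float,
                                   baseline_tryptase: float) -> bool:
    """Apply the commonly used '20% + 2' consensus rule:
    an acute tryptase level exceeding (1.2 x baseline + 2) ug/L
    suggests mast cell activation during the reaction.
    Baseline should be drawn at least 24 h after symptom resolution."""
    return acute_tryptase > 1.2 * baseline_tryptase + 2.0

# Hypothetical example: baseline 5 ug/L, acute 12 ug/L
# threshold = 1.2 * 5 + 2 = 8 ug/L, so 12 ug/L is suggestive
```

Note that, as stated above, constitutively elevated basal tryptase (e.g. mast cell disorders) must still be ruled out in the differential diagnosis.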

The clinical evaluation of NIDHR can be challenging, especially if systemic symptoms are present, and therefore, a broad differential diagnosis should be considered. In the context of an NIDHR, recommended ancillary tests include a peripheral blood count, serologies for concomitant, pre-existing or reactivated infectious disease, liver enzymes, renal function and acute inflammation markers. Although some biomarkers have been proposed for the acute phase in NIDHR, validation studies and technique availability are still necessary [18]. Regarding other available tests for these types of reactions, associations between certain HLA alleles and severe delayed drug reactions have been described [19]. Identifying certain HLA alleles, depending on ethnicity and the drug involved, can be helpful in characterising a reaction during the acute phase, but the main utility of these tests may rest in the prevention of NIDHR [19].

Tests that may aid in the discovery of the culprit drug

Immediate hypersensitivity reactions mediated by immunoglobulin E (IgE)

Detection of specific IgE against drugs

Specific IgE (sIgE) can potentially be used to orient the diagnosis of IgE-mediated IDHR, but it can only be measured against a limited number of drugs. Tests to determine the presence of sIgE to drugs are based on enzyme immunoassays, which employ the suspected drug, immobilised in a solid phase, to capture the immunoglobulin. However, since drugs are haptens, the drug needs to be bound to carriers such as poly-L-lysine (PLL), human serum albumin (HSA), amino-aliphatic spacers, or dendrimer structures. The most widely used commercial system for the determination of drug sIgE is the FEIA (ImmunoCAP, Thermo Fisher, Uppsala, Sweden), which uses PLL as a carrier. Evidence for the use of sIgE-FEIA against drugs is mostly restricted to β-lactams (assays are only available for benzylpenicillin, penicillin V, amoxicillin, ampicillin and cefaclor). In general, sIgEs against drugs show low sensitivity, which can be influenced by the severity of the IDHR [20] and by the time elapsed since the reaction when the test is performed [21]. Higher values have been observed for specificity than for sensitivity [22•, 23,24,25], although some authors have also reported low specificity in patients with high levels of total IgE [26]. The currently available data on the diagnostic performance of sIgE against drugs by FEIA are summarised in Table 1. The low sensitivity values, together with the false-positive rates for penicillin G in populations with selective allergy to aminopenicillins, suggest that ImmunoCAP is a diagnostic tool with limitations when evaluating subjects with a suspected type 1 hypersensitivity reaction to penicillins [22•].

Table 1 Global sensitivity and specificity values for different drugs in sIgE and BAT

Different strategies have been proposed to improve the sensitivity and specificity of sIgE-FEIA (especially for beta-lactams), such as decreasing the cut-off point to 0.01 kUA/L [26] or using an sIgE to penicillins/total IgE ratio [27], with less than promising results [22•]. Alternative techniques for the detection of sIgE against drugs have also been developed, focused on new drug carriers (e.g. dendrimers [28]), new detection systems [29, 30], or improved calibrators [31].

Basophil activation test

Among the cellular techniques employed for the diagnosis of allergic disease, the basophil activation test (BAT) is the most established and widespread one to date. The main difference between BAT and assays for the detection of specific IgE is that the former demonstrates more than the mere presence of antibodies: it shows whether the allergen is capable of activating an effector cell.

Basophils constitute a minor fraction of all leukocytes in peripheral blood (less than 1%) and, upon IgE cross-linking by its antigen, can activate and degranulate, expelling the preformed content of their granules (histamine, leukotrienes…), as well as de novo synthesised mediators [32]. The incubation of these cells with the suspected drug may trigger this activation cascade, which induces intracytoplasmic fusion of granules within the cell and fusion of these granules with the plasma membrane. Thus, molecules from the granular membrane, such as CD63, are expressed on the basophil surface upon activation. The expression of CD63 is therefore correlated with degranulation, rendering it an ideal marker for basophil activation that can be easily detected with flow cytometry. It is also possible to detect basophil activation/degranulation using other markers, such as CD203c or CD107a, or with the use of avidin [32, 33]. The functionality of the cells can additionally be explored through the quantification of released mediators or through the assessment of intracellular pathways (e.g. phosphorylation of signalling molecules or intracellular calcium) [32, 34•]. Assays include one or more positive controls (with anti-IgE, anti-FcεRI, fMLP…) to confirm the viability and degranulation capacity of the employed basophils. Ten to 20% of the population are nonresponders, whose basophils do not degranulate upon IgE pathway stimulation; how best to interpret data in these cases remains a matter of controversy.

Activated basophils are commonly identified by measuring the percentage of cells positive for CD63 and/or the change in mean fluorescence intensity (MFI) given by CD203c when compared to a negative control (unstimulated basophils). When using CD63, a cut-off point of 5% (proportion of activated basophils) is usually employed to define drug-specific cell activation, although some centres use other percentages or cut-off values based on ratios. When using CD203c, other cut-off values are applied [35]. A proper calculation involves the use of a ROC curve for each protocol and drug.
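The cut-off logic described above can be sketched as follows. This is a minimal illustration, not a validated protocol: the 5% CD63 cutoff and a stimulation-index threshold of 2 are common but protocol-dependent choices, and all function names and example values are assumptions. The second function mirrors, in simplified form, the ROC-style derivation of a cutoff by maximising Youden’s J over observed values.

```python
def bat_result(pct_cd63_stimulated: float, pct_cd63_negative_control: float,
               cutoff_pct: float = 5.0, min_stimulation_index: float = 2.0) -> dict:
    """Classify a BAT read-out for one drug concentration.
    Inputs are the percentage of CD63-positive basophils in the
    drug-stimulated tube and in the unstimulated (negative control) tube."""
    si = (pct_cd63_stimulated / pct_cd63_negative_control
          if pct_cd63_negative_control > 0 else float("inf"))
    positive = pct_cd63_stimulated >= cutoff_pct and si >= min_stimulation_index
    return {"stimulation_index": si, "positive": positive}

def youden_cutoff(pct_cd63_allergic: list, pct_cd63_tolerant: list) -> float:
    """Pick the %CD63+ cutoff maximising Youden's J (sensitivity +
    specificity - 1) over the observed values, mimicking a per-drug,
    per-protocol ROC-based cutoff derivation."""
    best_cutoff, best_j = None, -1.0
    for c in sorted(set(pct_cd63_allergic + pct_cd63_tolerant)):
        sens = sum(v >= c for v in pct_cd63_allergic) / len(pct_cd63_allergic)
        spec = sum(v < c for v in pct_cd63_tolerant) / len(pct_cd63_tolerant)
        if sens + spec - 1 > best_j:
            best_cutoff, best_j = c, sens + spec - 1
    return best_cutoff
```

In practice, as the text notes, each laboratory should derive its own cut-off from a full ROC analysis for every protocol and drug rather than reuse published thresholds.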

BAT shows variable sensitivity and specificity results for the study of allergy to certain drug groups (Table 1) [35], although, in general, it provides moderate to high specificity [33]. The usefulness of this technique is highly dependent on its appropriate application by the clinician, since it should only be requested when an IgE-mediated hypersensitivity reaction is suspected.

Several technical factors may also influence the performance of the technique and the quality of the test results [36]. These include the sampling conditions (Table 2), the use of relevant allergens and the use of a proper technique. The basophil-gating strategies employed during flow cytometry analysis, the markers utilised for cell identification (commonly CCR3, also known as CD193; CD123+/HLA-DR−; CD203c; or IgE) and the markers used for the detection of activated cells (commonly CD63 or CD203c) may alter the findings obtained from this test [33]. Drug concentrations should be established for each method and, in many cases, can be found published in the literature.

Table 2 Most common in vitro tests for drug allergy with their clinical and technical considerations

Apart from technical aspects, other factors need to be considered, such as the time passed between the reaction and the extraction of the blood sample (Table 2). Ideally, the test should be performed at least 1 month (refractory period) and less than 1 year after the reaction. It is also important to consider the medication used by the patient who is being tested. Corticosteroids have been shown to reduce basophil degranulation capacity, and thus, it is recommended that systemic steroids should be suspended 3 weeks before the test, while topical treatments with steroids do not influence the result [33].

A variant of BAT, which is otherwise generally performed with the patient’s own cells, consists of the passive sensitisation of basophils from healthy donors. Briefly, IgE is removed from the basophil surface, and the cells are then incubated with sera from allergic patients, to sensitise them with the patient’s IgE; afterwards, BAT is performed. Even though it is mainly used as a research tool in allergy, this approach has been successfully employed in drug-allergic patients and, for some drugs, has shown sensitivity and specificity values comparable to those of the classic BAT [37, 38].

Mast cell activation test

Mast cells (MCs), a tissue-resident cell type, are considered the main effector cells in most IDHR. Like basophils, MCs express FcεRI on their surface and are coated with IgE molecules that, upon cross-linking, can cause MC degranulation.

In a similar fashion to the BAT, the mast cell activation test (MAT) aims to expose these cells to the suspected allergen. Its application is still limited, mainly owing to technical challenges, the difficulty of obtaining cells to perform the assays, and its cost. Different strategies exist to obtain MCs, such as differentiating the cells from peripheral blood progenitors [39, 40] or using cell lines [33, 41•]. After the cells are obtained, they need to be sensitised with the patient’s sera and then exposed to the suspected allergens. As in BAT, MC degranulation can be assessed with CD63 or other cell surface markers (e.g. CD203c, CD107a), or by quantifying the release of mediators such as β-hexosaminidase.

Due to their accessibility, basophils have traditionally been used as effector cells for allergy functional assays, although evidence suggests that using mast cells for the evaluation of drug-dependent activation in vitro may be more effective than BAT and other established diagnostic techniques [39]. The use of mast cells is also reasonable when detecting hypersensitivity reactions mediated by mechanisms not involving IgE, such as reactions mediated by the Mas-related G protein-coupled receptor X2 (MRGPRX2) [42].

In vitro tests available for delayed, or non-immediate, drug hypersensitivity

Lymphocyte transformation test

The main technique available for the diagnosis of NIDHR is a cellular assay based on lymphocyte proliferation after stimulation with the suspected allergens, known as the lymphocyte transformation test (LTT) [43,44,45].

LTT assesses the proliferation of drug-specific T cells, the main cell type orchestrating DHR. In brief, lymphocytes are isolated from venous blood and cultured with the suspected drug, or drugs, for several days, most often 5 to 7. The goal is to observe cell proliferation greater than the basal level, which should occur in the case of a positive test. Several techniques allow the quantification of cell proliferation. Historically, the incorporation of radiolabelled thymidine (³H-thymidine) has been used to study proliferation, but, due to its technical requirements and risks, its use is declining. The use of flow cytometry to monitor fluorescently labelled cells (with carboxyfluorescein succinimidyl ester, for example) or of non-radiolabelled agents that incorporate into DNA (such as BrdU) is gaining relevance [46, 47].
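Proliferation read-outs such as ³H-thymidine incorporation are typically summarised as a stimulation index: counts with the drug divided by counts in the unstimulated culture, with thresholds of 2–3 commonly taken as positive. The review does not prescribe a threshold, so the sketch below is a hedged illustration with assumed names and an assumed default cutoff:

```python
def stimulation_index(cpm_drug: float, cpm_unstimulated: float) -> float:
    """Stimulation index for a 3H-thymidine LTT read-out:
    counts per minute with the drug relative to the unstimulated culture."""
    if cpm_unstimulated <= 0:
        raise ValueError("unstimulated counts must be positive")
    return cpm_drug / cpm_unstimulated

def ltt_positive(si: float, threshold: float = 2.0) -> bool:
    """SI thresholds of 2-3 are commonly used; each laboratory should
    validate its own cutoff per drug and protocol."""
    return si >= threshold
```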

In general, LTT has good specificity (63–100%) and low to moderate sensitivity (25–89%), although data differ across drugs and clinical phenotypes [48, 49]. Several studies have noted that specificity and sensitivity improve when only mild to moderate reactions are considered [49]. A limitation of most studies is that, owing to the risks involved in drug provocation tests, the gold standard used to estimate specificity and sensitivity is often based on algorithms [50]. Approaches to improve sensitivity [46], such as the use of antigen-presenting cells [51] or the removal of regulatory T cells [47, 52], have been successfully applied.

Test performance is influenced by different factors (Table 2), such as the time elapsed between the reaction and the blood extraction. Although the optimal moment to conduct the test has not been fully elucidated, evidence suggests that the acute phase should be avoided (< 2–4 weeks) and that waiting too long (> 12–36 months) also increases the chances of a negative result [53, 54]. However, some studies suggest that, for certain clinical syndromes, performing LTT during the acute phase may yield better results, so more data are still needed [54]. Another parameter to consider is the patient’s treatment, as some drugs may impair T-cell proliferation. In general, avoidance of corticosteroid treatment (or, at most, the use of low doses) is recommended at the moment of sample extraction [53, 55]. Other immunosuppressive drugs may also interfere with the test, although more evidence is needed to establish indications. LTT is based on specific lymphocyte responses; thus, the presence of lymphopenia should be noted since, in patients with low counts, the performance of the test may not be optimal [55]. Finally, it is important to keep in mind that LTT seems to perform better for some clinical syndromes of NIDHR, although the lack of standardisation makes the available data difficult to interpret. Drug reaction with eosinophilia and systemic symptoms (DRESS)/drug-induced hypersensitivity syndrome (DIHS) often shows a good performance in LTT, while for maculopapular exanthema (MPE), results vary among studies. Doubtful results have been observed for Stevens-Johnson syndrome (SJS)/toxic epidermal necrolysis (TEN) and for fixed drug exanthemas, where the number of affected cells in circulation is probably very low [54,55,56,57].

Although proliferation is the main read-out method used in LTT, other strategies can be used independently or in combination with proliferation, such as the detection of activation markers with flow cytometry (e.g. CD69) [46, 58].

Other in vitro tests for non-immediate drug hypersensitivity

Alternative in vitro tests for delayed drug reactions include the enzyme-linked immunosorbent spot assay (ELISpot) technique or the quantification of cytokines and cytotoxic mediators. ELISpot quantifies cells producing one or several mediators of interest, such as interferon (IFN)-γ, IL-5 or granzyme B [46, 49]. Another alternative approach is the quantification of cytokines and/or cytotoxic mediators in culture supernatant after culturing lymphocytes with the suspected drug (mostly interferon-γ but also IL-5, IL-2, IL-10, granulysin or granzyme B). The quantification is often performed with ELISA but can be conducted using other approaches [46, 52]. Both ELISpot and ELISA have shown sensitivities and specificities comparable to those obtained with LTT. In a recent meta-analysis, ELISA was reported to be more sensitive than classical LTT, although heterogeneity of the data remains a great challenge when comparing techniques [54].

Considerations for the future of in vitro diagnostic tests

Since in vitro tests for diagnosis are considered healthcare products, reviewing the current legislation regarding these methods is relevant. The In Vitro Diagnostic Medical Devices Regulation (IVDR) provides the regulatory framework for all in vitro diagnostic tests within the European Union. It entered into force on May 25, 2017, a date that marks the beginning of a 5-year transition period for manufacturers and economic operators, since the IVDR replaces Directive 98/79/EC, which previously applied to in vitro diagnostic devices.

Adaptation of in vitro diagnostic tests in drug allergy to the IVDR is challenging, since commercially available diagnostic products meeting its requirements scarcely exist. Techniques developed in the laboratory must meet a specific set of requirements and be backed by extensive documentation. Suggestions for validating these cellular techniques for incorporation into clinical practice include analytical validation studies and continuous monitoring of methodological quality [59•]. Significant efforts are required from each laboratory, as well as collaboration to develop standardised techniques and establish robust quality controls.

Conclusion

Diagnosis of drug hypersensitivity reactions poses many challenges, especially when it comes to identifying the culprit drug. In vitro tests can be used to support this process; however, it is important to consider their limitations. The decision to use them should always involve careful consideration of the clinical symptoms, history and skin testing data available for each patient. In addition, to maximise the potential benefit of such tools, it is important to carefully select the ideal settings for their application. There is still a need to improve the sensitivity of most of the techniques, and the specificity for certain drugs. Furthermore, several tests require specialised equipment and trained personnel and are thus not broadly available. It is essential to build networks of specialised centres to expand knowledge of these techniques and to validate them adequately in as many centres as possible.