Introduction

Numerous epidemiological studies have examined DNA methylation due to its important role in physiological processes, and the development and progression of human diseases [1]. Microarrays are widely used for DNA methylation profiling, and are affordable for studies with large sample sizes. Illumina's methylation array is a common choice in many data repositories such as TCGA (~12,000 samples), ENCODE (~250 datasets) and GEO (~7,000 datasets; April 2020).

DNA methylation array technologies have evolved so that more individual methylation CpG sites can be evaluated on a single array. Illumina's BeadChip methylation microarrays are extremely popular, the most recent being the HumanMethylationEPIC BeadChip ("EPIC") released in 2016, which measures ~850,000 CpG sites (probes), an increase from the previous array (HumanMethylation450K BeadChip, "450K"). In many studies, both platforms have been used, due to technology updates in the middle of large projects or multiple batches for studies over time [2,3,4,5,6]. Some investigators are interested in maximizing sample size for research questions by analyzing data from both the current EPIC and older 450K arrays. Numerous publications focus on certain aspects of pre-processing, such as normalization [7] or probe filtering [8], but there are currently no papers that consider the entire pipeline. Establishing best practices is relevant for other epidemiological studies that need to change platforms mid-study, in addition to re-analyses of public data.

We evaluate the performance of common harmonization tools of 450K and EPIC data at various pre-processing and analytical steps using the Diabetes Autoimmunity Study in the Young (DAISY), which prospectively follows genetically high-risk children for the development of type 1 diabetes (T1D) [4]. We explored normalization, probe-level QC and filtering, batch effect adjustment, and genomic inflation by testing methods that were easy to implement from well-established and documented R packages. Finally, we provide evaluation guidelines for studies facing similar harmonization challenges.

Main text

Methods

Figure 1 shows a summary of the pre-processing pipeline and the data harmonization evaluations.

Fig. 1

Pipeline Methods Considered. The four main pre-processing steps are: 1. Normalization and probe QC, 2. Batch effect adjustment, 3. Extra probe filtering and 4. Genomic inflation adjustment. The various methods considered for each step are listed along with the evaluation(s) used to assess these methods

Data

Peripheral whole blood was collected prospectively from individuals enrolled in DAISY. Cases of T1D were frequency matched to controls, and DNA methylation data were generated on up to five prospective samples per subject [4]. The EPIC platform replaced the 450K during data acquisition. There were 42/42 and 45/45 matched T1D cases/controls (corresponding to 184 and 211 unique arrays) for the 450K and EPIC platforms, respectively.

Normalization and probe-level QC

First, three established normalization and probe-level QC methods were evaluated (Fig. 1): subset-quantile within array normalization (SWAN, [9]), normal-exponential using out-of-band probes (Noob, [10]) and single-sample Noob (ssNoob, [11]). SWAN and ssNoob normalizations were performed within the minfi package [12], while Noob normalization was performed in the SeSAMe package [7]. We examined two detection-above-background methods, minfi's default [12] and SeSAMe's pOOBAH [7], each coupled to the normalization from the same R package. Filtering on probe-level QC was performed after each normalization, prior to evaluation of platform effects, and included removing probes with known SNPs in the probe sequence [13] as well as cross-reactive probes [14]. See Johnson et al. for full detail [4].

For evaluating normalization and probe-level QC procedures, we looked at the first 10 principal components (PCs) to determine whether there was a large platform effect across components, as well as the three technical replicate metrics described below. For the PCA, all probes that passed QC were included, regardless of whether they were present on the other platform, excluding probes on chromosome X.
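A PC-based platform check of this kind can be sketched as follows. This is an illustrative Python sketch with simulated data, not the published pipeline (which used R); the sample sizes, shift magnitude, and variable names are hypothetical.

```python
# Sketch: quantify a platform effect by correlating the top principal
# components of the Beta matrix with a platform indicator.
import numpy as np

rng = np.random.default_rng(0)

# Simulated samples-by-probes Beta matrix: 200 samples, 1000 probes,
# with an artificial platform shift added to half the samples.
platform = np.repeat([0, 1], 100)                  # 0 = 450K, 1 = EPIC
betas = rng.normal(0.5, 0.1, size=(200, 1000))
betas[platform == 1] += 0.05                       # simulated platform effect

# PCA via SVD on the column-centered matrix.
centered = betas - betas.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
pcs = u * s                                        # per-sample PC scores
var_explained = s**2 / np.sum(s**2)

# Association of each of the first 10 PCs with platform (point-biserial r).
assoc = [abs(np.corrcoef(pcs[:, k], platform)[0, 1]) for k in range(10)]
print(f"PC1 |r| with platform: {assoc[0]:.2f}")
```

A large |r| for PC1, as produced here, is the pattern that signals a dominant platform effect; for real data, categorical batch variables can be tested against PCs with ANOVA instead of a point-biserial correlation.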

Twelve technical replicates were selected to be balanced with respect to sex, age, and islet autoimmunity (IA) status. To examine these data we used three metrics: (1) the difference in methylation Beta value at an individual probe (Additional file 1: Eqn S1), (2) the correlation of all probes across a single technical replicate pair (Additional file 1: Eqn S2) and (3) the correlation of the technical replicate pairs across a single probe (Eqn S3, see Additional file 1: Methods).
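The three metrics can be sketched in code. This is a hedged Python analogue of Eqns S1-S3 (whose exact definitions live in Additional file 1) on simulated Beta values; the noise levels and dimensions are hypothetical.

```python
# Sketch of the three technical-replicate metrics on simulated data:
# 12 cross-platform replicate pairs measured on 500 probes.
import numpy as np

rng = np.random.default_rng(1)
n_probes, n_pairs = 500, 12

base = rng.uniform(0.05, 0.95, size=(n_probes, 1))      # probe-level methylation
subj = rng.normal(0, 0.05, size=(n_probes, n_pairs))    # biological variation shared within a pair
b450 = np.clip(base + subj + rng.normal(0, 0.03, (n_probes, n_pairs)), 0, 1)
bepic = np.clip(base + subj + rng.normal(0, 0.03, (n_probes, n_pairs)), 0, 1)

# (1) Difference in Beta at an individual probe (Eqn S1 analogue).
delta = b450 - bepic                                    # n_probes x n_pairs

# (2) Correlation across all probes within one replicate pair (Eqn S2 analogue).
pair_corr = [np.corrcoef(b450[:, j], bepic[:, j])[0, 1] for j in range(n_pairs)]

# (3) Correlation across the 12 replicate pairs for a single probe (Eqn S3 analogue).
probe_corr = [np.corrcoef(b450[i], bepic[i])[0, 1] for i in range(n_probes)]

print(f"median pair-level r: {np.median(pair_corr):.3f}")
```

Even with simulated noise, metric (2) is driven close to 1 by the wide between-probe methylation range, while metric (3) depends only on between-subject variation at a single probe, mirroring why the probe-level correlations were the more informative diagnostic in the Results.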

Batch effect adjustment

Second, we applied two different within-platform batch effect adjustment methods, ComBat [15] and RUVm [16] (Fig. 1), implemented in the sva (v3.30.0) and missMethyl (v1.16.0) packages, respectively.
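As a conceptual illustration only: ComBat performs a per-probe location-scale adjustment with empirical-Bayes shrinkage of the batch parameters. The Python sketch below shows the location-scale idea with the empirical-Bayes step omitted; it is not the sva::ComBat algorithm used in the study, and the batch structure shown is hypothetical.

```python
# Simplified location-scale batch adjustment (ComBat-like, no EB shrinkage):
# standardize each batch per probe, then restore the pooled mean and SD.
import numpy as np

def adjust_batches(x, batches):
    """x: probes-by-samples matrix; batches: per-sample batch labels."""
    adjusted = np.empty_like(x, dtype=float)
    grand_mean = x.mean(axis=1, keepdims=True)
    grand_sd = x.std(axis=1, keepdims=True)
    for b in np.unique(batches):
        cols = batches == b
        mu = x[:, cols].mean(axis=1, keepdims=True)
        sd = x[:, cols].std(axis=1, keepdims=True)
        adjusted[:, cols] = (x[:, cols] - mu) / sd * grand_sd + grand_mean
    return adjusted

rng = np.random.default_rng(2)
batches = np.repeat([0, 1, 2], 20)              # e.g. plate-by-row groupings
m = rng.normal(0, 1, size=(100, 60))
m += np.array([0.0, 0.8, -0.5])[batches]        # simulated additive batch shifts
m_adj = adjust_batches(m, batches)
```

After adjustment, every batch shares the same per-probe mean, which is the property the PC-based diagnostics in the Results check for; real ComBat additionally shrinks the per-probe batch estimates toward a common prior and can protect biological covariates.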

Additional probe filtering

Third, we explored supplemental probe filtering by removing low-range probes (< 5% Beta range), as suggested by Logue et al. [8], and compared how well technical replicates correlated.
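The range filter itself is simple. A minimal Python sketch, assuming a probes-by-samples Beta matrix and the 5-percentage-point threshold described above (the data here are simulated):

```python
# Sketch: drop probes whose Beta range across samples is below a threshold.
import numpy as np

def range_filter(betas, min_range=0.05):
    """Keep probes (rows) whose max-min Beta across samples >= min_range."""
    keep = betas.max(axis=1) - betas.min(axis=1) >= min_range
    return betas[keep], keep

rng = np.random.default_rng(3)
flat = 0.02 + rng.uniform(0, 0.01, size=(40, 50))      # near-constant probes
varying = rng.uniform(0.1, 0.9, size=(60, 50))         # variable probes
betas = np.vstack([flat, varying])

kept, mask = range_filter(betas)
print(f"kept {kept.shape[0]} of {betas.shape[0]} probes")
```

The rationale is that probes varying less than typical technical noise cannot carry a detectable biological signal, so removing them reduces the multiple-testing burden.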

Statistical analysis

We fit a linear mixed model using T1D status to predict methylation (M-values), adjusting for age and sex, with subject as a random effect, using the R/nlme package (v3.1-137) [17].
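Modeling is performed on M-values rather than Beta values; the standard transform is M = log2(β / (1 − β)). A small Python sketch of that conversion (the clipping offset is a common numerical safeguard and an assumption here, not a detail from the paper):

```python
# Beta-to-M conversion: M = log2(beta / (1 - beta)).
import numpy as np

def beta_to_m(beta, eps=1e-3):
    """Convert Beta values (proportion methylated, in [0, 1]) to M-values.
    Values are clipped away from 0 and 1 to keep the log finite."""
    b = np.clip(beta, eps, 1 - eps)
    return np.log2(b / (1 - b))

print(beta_to_m(np.array([0.5, 0.9])))   # 0.5 maps to 0; 0.9 maps to log2(9)
```

M-values are preferred for linear modeling because they are unbounded and more homoscedastic than Beta values, which are compressed near 0 and 1.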

Genomic inflation

We corrected for genomic inflation using the R/BACON package (v1.10.1) [18]. In brief, BACON estimates an empirical null distribution using a Bayesian method to account for the bias and inflation of test statistics specific to EWAS datasets.
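The inflation factor lambda that BACON targets is the ratio of the median observed chi-square test statistic to its expected median under the null. A Python sketch of that calculation (BACON itself is an R package; scipy availability is assumed, and the simulated p-values are hypothetical):

```python
# Sketch: genomic inflation factor lambda from a vector of p-values.
import numpy as np
from scipy import stats

def genomic_lambda(p_values):
    """lambda = median(chi2 quantile of 1 - p, df=1) / chi2 median under null."""
    chi2_obs = stats.chi2.ppf(1 - np.asarray(p_values), df=1)
    return np.median(chi2_obs) / stats.chi2.ppf(0.5, df=1)

rng = np.random.default_rng(4)
null_p = rng.uniform(size=100_000)     # well-calibrated tests give lambda near 1
print(f"lambda under the null: {genomic_lambda(null_p):.2f}")
```

Lambda well above 1 signals systematic inflation (too many small p-values), below 1 signals deflation; BACON goes further than this rescaling by also estimating the bias of the test statistics.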

Meta-analysis

Stouffer's meta-analysis method [19] was used to combine the statistical results from the two platforms. This method generates a single meta-analysis p-value for each probe, and accounts for consistent effect direction.
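Stouffer's method sums signed per-platform z-scores and rescales, so concordant effect directions reinforce the signal and discordant ones cancel. A Python sketch with equal weights (the study's exact weighting scheme is not specified in this section):

```python
# Sketch: Stouffer's combined z-score for per-probe results from two platforms.
import numpy as np
from scipy import stats

def stouffer(z_scores, weights=None):
    """Combine signed z-scores; returns (z_meta, two-sided p-value)."""
    z = np.asarray(z_scores, dtype=float)
    w = np.ones_like(z) if weights is None else np.asarray(weights, dtype=float)
    z_meta = np.sum(w * z) / np.sqrt(np.sum(w**2))
    p_meta = 2 * stats.norm.sf(abs(z_meta))
    return z_meta, p_meta

z_same, _ = stouffer([2.0, 2.0])    # same direction: strengthened evidence
z_opp, _ = stouffer([2.0, -2.0])    # opposite direction: evidence cancels
print(f"concordant z = {z_same:.2f}, discordant z = {z_opp:.2f}")
```

In a weighted variant, the weights are often taken proportional to the square root of each platform's sample size.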

Results

For each of the processing Steps 1–4, we compared different options with a variety of data harmonization evaluation diagnostics (Fig. 1).

Normalization evaluation

First, we explored normalization of both datasets together using ssNoob (coupled with minfi probe QC), as recommended by Fortin et al. [11]. After associating the first 10 PCs with platform, we found that the first and second PCs had extremely high associations with platform and sex, respectively (Fig. 2). Sex differences are expected to be a large contributor to methylation profiles, as methylation plays a large role in female X-chromosome inactivation. Applying subsequent batch adjustment did not reduce the strong platform effect, regardless of the method applied (Additional file 1: Table S1). Therefore, we applied normalization procedures to each platform separately.

Fig. 2

Platform Effect. The 1st and 2nd principal components (PCs) from the ssNoob normalization are plotted with colors symbolizing both platform and sex. Red and blue dots signify the 450K platform while purple and green dots signify the EPIC data. Red and purple dots signify females and blue and green dots signify males. Percent variance explained by each PC is noted in parentheses

To explore the effect of SWAN or SeSAMe on harmonization of platforms, we examined technical replicates across platforms. Correlation across probes for each pair of technical replicates (Additional file 1: Eqn S2) was extremely high (> 0.98) for both methods. This is not surprising given the large number of probes used to calculate each correlation, and is similar to the high correlation between random pairs of samples (> 0.97). Individual probe correlations proved much more informative (Eqn S3). We generated densities of probe-level correlations across the technical replicates as well as across random samples (Fig. 3). The distribution of the random sample correlations for both SeSAMe and SWAN is centered around 0 and looks more Gaussian compared to the distributions of the technical replicate correlations, which resemble a mixture of two or more distributions and are centered around a higher correlation coefficient.

Fig. 3

Correlation of Technical Replicates. Density plots of correlations across the platform technical replicates for each probe (n = 12, green) as well as a random subset of pairs for comparison (n = 12, purple) for the data normalized using A SeSAMe and B SWAN. The median correlation coefficient among technical replicates was 0.41 for both the SeSAMe and SWAN methods. The 1st and 3rd quartiles for technical replicates for SeSAMe and SWAN were (0.06, 0.72) and (0.11, 0.67), respectively

We examined the absolute probe-level differences in methylation (Additional file 1: Eqn S1) of the Beta value (% methylation) for each technical replicate (Additional file 1: Figure S1). In all technical replicate pairs, SeSAMe produced a tighter distribution closer to 0, indicating better harmonization, while SWAN produced higher absolute differences. We report the mean bias (and 95% CI) for the differences between platforms by normalization type (Additional file 1: Table S2). Given these results, we moved forward with the SeSAMe normalization (see Additional file 1 for probe QC filtering numbers).

Batch effect adjustment

Even after normalizing within each platform, technical batch effects remain a concern, since a variety of factors can add unwanted technical variation [20]. Therefore, we examined within-platform batch effects, with batch defined as the plate and row location combination. We performed ComBat and RUVm to adjust for within-platform batch effects on the SeSAMe dataset. Based on the PC analysis, ComBat performed slightly better, as the top PCs were less associated with batches defined as plate by row (Additional file 1: Figure S2, [20]). Our finding that ComBat outperformed other methods is consistent with Jiao and colleagues [21].

Probe filtering

Applying the Logue beta range filter [8], removing probes with < 5% Beta methylation range, removed 15.8% (59,397) and 33.9% (225,342) of probes in the 450K and EPIC platforms, respectively. The mean beta values of the removed probes fell only at the extremes for both platforms (Additional file 1: Figure S3), while the probes that passed this criterion had mean beta values throughout the potential 0–100% methylation range. Additional considerations are summarized in the Supplement.

Genomic inflation in statistical analysis

After statistical analysis, it is important to consider the genomic inflation factor lambda (i.e., general inflation of test statistics due to population structure), which is the ratio of the median of the observed distribution of the test statistics to the expected median, and should be close to 1. In the 450K platform, the SWAN normalized dataset resulted in extreme genomic inflation, whereas the EPIC dataset was deflated (Additional file 1: Figure S6). However, genomic inflation was comparable between the platforms for the SeSAMe normalized datasets. To account for any additional genomic inflation we applied BACON [18], which was developed to control for genomic inflation specifically in EWAS. After this adjustment, the genomic inflation factors for the SeSAMe 450K and EPIC datasets were 1.03 and 1.08, respectively.

To perform the meta-analysis, we kept only probes present in both SeSAMe datasets (199,243 probes). Final results are reported by R. K. Johnson and colleagues [4]. This final pipeline was selected as it gives candidate probes comparable to other DNA methylation papers in T1D [22,23,24,25].

Discussion

Pre-processing any 'omics dataset, including Illumina's BeadChip array data, can have substantial effects on downstream analyses. The introduction of an updated array adds the additional hurdle of harmonizing more than one platform to leverage all available data. If possible, we recommend including technical replicates in the study design to aid in assessing the quality of pre-processing steps, as they were key to our harmonization evaluation process. Additional file 1: Figure S7 summarizes our recommended best practices based on the tested approaches. We realize new methods are constantly evolving in this field, and this flow chart aims to help guide analysts through the decisions that need to be made throughout this process.

There are special considerations regarding probe filtering, which is performed at two stages. The first stage is based on low-quality probes identified after normalization, and the second stage is before statistical modeling, based on removing non-varying probes. Other filtering criteria, such as removing probes with both high variance and high differences in beta values between technical replicates, should also be considered. In the first probe filtering stage, the pOOBAH method (part of the SeSAMe pipeline) removed a high number of probes compared to SWAN, specifically those on sex chromosomes. However, the resulting genomic inflation values were more consistent between the platforms and closer to one, which is desired. This suggests pOOBAH correctly identifies germline and somatic deletions that would otherwise inflate the signal, although pOOBAH filtering on sex chromosomes should be applied with caution. Other normalization procedures that utilize control probes, such as functional normalization [26], were not reported here but may work well for some datasets.

Another consideration is adjusting for cell type proportions; for this specific analysis, this is discussed in depth in Johnson et al. 2020 [4]. There are conflicting viewpoints on whether to include cell type adjustment [27, 28], and the decision should depend on the specific study design.

In addition to the meta-analysis, we explored an alternative approach in which data were pooled into one statistical model with a fixed covariate for platform, followed by adjustment for genomic inflation. In this study, results of the single model were qualitatively similar to the meta-analysis, providing little evidence to support one method over the other. We recommend that both approaches be considered, in addition to random effect methods [29].

In summary, our evaluation methods relied on technical replicates, which we highly recommend. The harmonization evaluation metrics on the technical replicates were used to compare methods at different steps, and can be used to evaluate other options as new methods are developed. We hope our guidelines aid others in their endeavors to perform analyses combining the 450K and EPIC platforms.

Limitations

Others have explored individual steps in this pipeline [8, 11]; therefore, we did not examine individual steps in depth using multiple datasets or simulations. The goal of this work was to evaluate the entire set of steps involved in a methylation processing pipeline based on data from both Illumina's 450K and EPIC platforms, and how those steps affect harmonization. We do not claim that our recommended pipeline will be best in all scenarios, but we illustrate which factors need to be considered when selecting a pipeline for other datasets, and for new methods as they are published.