Background

DNA methylation is an essential component of the epigenetic machinery that regulates gene expression. It is a chemical modification in which a methyl group is added to cytosine bases [1]. DNA methylation is highly dynamic during development, and aberrations in the mark are associated with a range of diseases including cancer, autoimmune and neurodegenerative disorders [2,3,4,5].

The gold-standard assays for DNA methylation are based on bisulfite sequencing, in which unmethylated cytosines (C) are selectively and efficiently converted to thymines (T), allowing methylation state to be read out at base-pair resolution by standard high-throughput sequencing [6]. Bisulfite sequencing can be applied to a whole-genome library (WGBS) [7] or in targeted variants, including Reduced Representation Bisulfite Sequencing (RRBS) [8], which enriches for regions of high CpG density, and Hybrid Selection Bisulfite Sequencing (HSBS) [9], which uses capture probes to target a specific set of genomic regions of interest.
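The logic of reading methylation state out of converted reads can be illustrated with a small sketch (in Python, for illustration only; the pipelines themselves rely on Bismark for methylation calling). At a reference cytosine, a read base that remains 'C' indicates the cytosine was methylated and thus protected from conversion, while a 'T' indicates it was unmethylated and converted:

```python
def call_cpg_methylation(pileup_bases):
    """Toy methylation caller for a single reference CpG position.

    After bisulfite conversion, a read base of 'C' indicates the cytosine
    was methylated (protected from conversion) and 'T' indicates it was
    unmethylated (converted). Other bases are ignored as mismatches.
    """
    meth = pileup_bases.count("C")
    unmeth = pileup_bases.count("T")
    covered = meth + unmeth
    beta = meth / covered if covered else None  # methylation fraction
    return meth, unmeth, beta

# 7 of 10 informative reads retained a 'C' at this position
print(call_cpg_methylation("CCCCCCCTTT"))  # -> (7, 3, 0.7)
```

Real callers must additionally account for strand, overlapping read pairs and sequencing errors, all of which this sketch ignores.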

Preprocessing and quality control typically comprise the most computationally intensive portion of bisulfite sequencing data analysis, due to the large size of raw datasets, which can exceed 100 GB for a deeply sequenced individual sample or span thousands of cells in single-cell projects [10]. Here we present a set of preprocessing tools for bisulfite sequencing data that facilitate analysis of such datasets by simplifying, and making more accessible, the use of large compute clusters. We also introduce a new R/Bioconductor package, scmeth, that is optimized for QC analysis of large datasets. The pipelines can be run locally or on cloud computing infrastructure, providing practically unlimited scalability without requiring local compute resources. The cloud implementation, in particular, is accessible through a web browser interface and lends itself both to researchers with technical expertise and to users with limited bioinformatics experience.

Implementation

The pipelines are designed to go from raw sequencing data to CpG-level methylation estimates. The workflows first perform read alignment and methylation calling in parallel across samples, followed by an aggregation and quality control analysis step. The workflows are implemented in the Workflow Description Language (WDL) [11, 12] and use software packaged into Docker [13] containers. WDL files are typically structured to contain a workflow consisting of one or more tasks. Both workflows and tasks can specify input parameters, such as raw read file names, and runtime parameters, such as the amount of CPU and memory resources required for processing and the specific Docker image to use. Docker containers are lightweight virtual machines that encapsulate the entire software environment required by the pipeline tools, including their dependencies. In addition to the option of running the WDL workflows locally on a single computer or on an HPC (high-performance computing) cluster using job management systems such as LSF (Load Sharing Facility) or SLURM (Simple Linux Utility for Resource Management), we also provide an implementation that is available through the Google Cloud-based FireCloud platform [14, 15]. FireCloud is accessible through a web browser and allows execution of WDL-based workflows on cloud compute resources with scalability that is unlimited for most practical use cases. The scmeth package used for QC analysis is part of the R/Bioconductor project.

Results

The methylation workflows follow a two-step pattern, with a parallelized per-sample preprocessing step followed by an aggregation and QC step that integrates data across the dataset. Following initial preprocessing with the pipeline's default bisulfite-aware aligner, Bismark [16], the following outputs are generated for each input sample: (i) BAM and BAM index files; (ii) a per-CpG coverage file with unmethylated and methylated read counts; (iii) a bigWig file for visualization; and (iv) a set of quality assessment metrics such as the fraction of aligned reads, bisulfite conversion rate and methylation value distributions. The aggregation step then prepares the individual sample outputs for downstream analysis by combining them into coverage and methylation matrices, available either as plain text or as an R/Bioconductor bsseq [17] object that is also annotated with metrics including the number of reads, number of covered CpGs and bisulfite conversion rate (Fig. 1).
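The aggregation step can be sketched schematically as follows. The sample names, sites and counts here are invented for illustration, and the sketch is in Python for brevity; the actual workflow emits plain-text matrices and an R/Bioconductor bsseq object:

```python
# Per-sample CpG counts, keyed (chrom, pos) -> (methylated, unmethylated),
# are combined into dataset-wide coverage and methylation matrices.
per_sample = {
    "cellA": {("chr1", 100): (4, 1), ("chr1", 250): (0, 6)},
    "cellB": {("chr1", 100): (2, 2), ("chr2", 50): (5, 0)},
}

samples = sorted(per_sample)
# Union of all CpG sites observed in any sample, in genomic order
sites = sorted({site for counts in per_sample.values() for site in counts})

# coverage[i][j]: reads covering site i in sample j;
# meth[i][j]: methylated reads for site i in sample j
coverage, meth = [], []
for site in sites:
    cov_row, meth_row = [], []
    for s in samples:
        m, u = per_sample[s].get(site, (0, 0))  # uncovered site -> 0 reads
        cov_row.append(m + u)
        meth_row.append(m)
    coverage.append(cov_row)
    meth.append(meth_row)
```

Taking the union of sites and filling uncovered entries with zero is what allows sparsely covered single-cell samples to share one matrix layout.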

Fig. 1

Overview of the methylation analysis workflow. Raw read (FASTQ) files are first processed through a per-sample alignment and pre-processing step, followed by an aggregation step that combines data from all samples into a matrix format and generates a QC report

In addition to preprocessed methylation data, comprehensive HTML and plain-text quality reports are also generated using tools implemented in the scmeth Bioconductor package [18]. The QC report can be used to identify low-quality batches or samples, and provides metrics including the number of reads, total CpG coverage, bisulfite conversion rate, genomic feature coverage (e.g. promoters, enhancers), a downsampling saturation curve and methylation distributions (Table 1). In order to scale to large sample sizes, as is common in single-cell analysis, an on-disk representation of the methylation and coverage matrices, as implemented in the bsseq [17] package, is used by default. To improve QC analysis run time for large datasets, scmeth provides an option to subsample while calculating metrics. We find that estimates based on as few as one million of the ~ 28 million CpGs in the human genome are unbiased and stable.
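The subsampling strategy can be illustrated with simulated data: a metric such as mean methylation computed from a small random subset of CpGs closely tracks the value computed from all sites. The beta values below are simulated and purely illustrative (Python is used for brevity; scmeth itself is an R package):

```python
import random

random.seed(0)
# Simulate bimodal, methylation-like beta values for 200,000 CpGs
beta = [random.betavariate(0.3, 0.3) for _ in range(200_000)]

full_mean = sum(beta) / len(beta)           # metric over all sites
subsample = random.sample(beta, 10_000)     # random ~5% subset of sites
sub_mean = sum(subsample) / len(subsample)  # metric over the subset only

print(f"all sites: {full_mean:.3f}  subsampled: {sub_mean:.3f}")
```

Because the subset is drawn uniformly at random, the subsampled estimate is unbiased, and its sampling error shrinks with the subset size rather than the genome size.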

Table 1 Quality control metrics

We used 1000 single-cell RRBS samples with a median of 872,223 reads per sample (range 5437 to 4,165,149) to estimate the run time and cost of the workflows. Processing the full set of 1000 samples using default options took 62 h and accrued $66 of Google Cloud charges (Table 2).

Table 2 Run time and cost estimates

TCGA data analysis

We have preprocessed and made available 47 WGBS samples from TCGA. These samples were sequenced with a median of 361,777,141 reads per sample (range 289,476,432 to 955,974,014). We confirmed a high concordance of our methylation estimates with the BEDgraph files available from the NCI Genomic Data Commons (GDC), with a correlation of 0.99 when considering CpGs with a minimum read coverage of 10. The raw (FASTQ) data, processed data and workflows are made available in a FireCloud workspace (see https://github.com/aryeelab/dna-methylation-tools/blob/master/README.md#tcga-data). We have also made the processed data available via tcgaWGBSData.hg19, an experiment data package in Bioconductor.
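The concordance comparison amounts to correlating per-CpG methylation estimates from the two pipelines after filtering on read coverage. A minimal sketch, with made-up (coverage, beta) pairs standing in for the two sets of outputs:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-CpG (coverage, beta) pairs from the two pipelines,
# aligned by CpG site; values are invented for illustration.
ours = [(12, 0.90), (8, 0.10), (30, 0.05), (15, 0.80), (11, 0.50)]
gdc  = [(14, 0.88), (9, 0.55), (25, 0.07), (20, 0.78), (13, 0.48)]

# Keep only CpGs with coverage >= 10 in both datasets, then correlate betas
keep = [i for i in range(len(ours)) if ours[i][0] >= 10 and gdc[i][0] >= 10]
r = pearson([ours[i][1] for i in keep], [gdc[i][1] for i in keep])
```

The coverage filter matters: low-coverage CpGs (like the second site above, which is excluded) have noisy beta estimates that would otherwise depress the correlation.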

The workflows are pre-configured with the quantity of compute resources (e.g. memory and number of CPU cores) to request from either an HPC system (e.g. LSF) or the cloud environment for each analysis step, but these can be altered by the user if a different tradeoff between run time and cost is desired [15].

Discussion

To guarantee reproducible analyses, we take advantage of two components: first, we use a workflow description language, WDL, that can be executed without modification on systems ranging from a laptop to an HPC cluster to cloud compute resources. This flexibility is provided by the workflow engine, Cromwell [15], which has various “back-ends” that allow it to execute workflow tasks on the various platforms. Second, we use Docker containers, lightweight virtual machines that package the full software environment required by the pipeline tools. These two components together ensure that identical results are produced across different platforms and across multiple runs of the pipelines.

Scalability is achieved through parallelization across samples. For users with an HPC cluster that supports Docker containers, this parallelization is accessible locally. Alternatively, any user can take advantage of the FireCloud platform that uses the Google Compute Engine as the computing platform. The Google billing model charges per minute per machine, which enables all per-sample preprocessing to be performed within a near-fixed total time, regardless of the number of samples, as all samples can be processed in parallel. There are no additional charges for using the FireCloud platform itself although the user will accrue compute and storage costs billed by Google for resources used in workflow execution.

When analyzing a new dataset, it is often useful to compare the new samples to public data, either from individual published studies or large consortia like TCGA [19] and TARGET [20]. These data are often not directly comparable, however, due to differences in preprocessing and other upstream analysis. Applying a uniform processing pipeline is, on the other hand, challenging due to the size of the datasets (e.g. TCGA) making them difficult to download and process. As FireCloud already hosts raw TCGA data, an alternative is to take advantage of our DNA methylation workflow to process both TCGA and the user’s own data in a uniform manner on this platform. The preprocessed data, which is much smaller than the raw sequencing data, can then either be further analyzed using cloud resources, or downloaded for local downstream analysis.

Conclusion

We have developed a set of preprocessing and quality assessment pipelines for bisulfite sequencing-based DNA methylation analysis. By leveraging Docker containers and a workflow language that can be executed both locally and in the cloud, the pipelines produce reproducible output across different platforms and user environments. This also facilitates comparisons across datasets, such as between local user data and data from public repositories (e.g. TCGA), as identical preprocessing can be guaranteed. We have also introduced the scmeth R/Bioconductor package, which implements QC functions optimized for large methylation datasets, such as those common in single-cell analyses. We take advantage of the pipelines' portability by providing an implementation on the Google Cloud-based FireCloud platform, which enables any user to scale to very large datasets without local compute capacity constraints. We believe that these tools will be useful as the scale of DNA methylation datasets grows, and that they will serve as a template for tools for other types of large genomic data.

Availability and requirements

Project Documentation: http://aryee.mgh.harvard.edu/dna-methylation-tools/

FireCloud workspace: https://portal.firecloud.org/#workspaces/aryee-lab/dna-methylation (users need to create a free account).

Operating System(s): Platform independent.

Programming Language: WDL, R.

License: MIT.

Any restrictions to use by non-academics: None.

Documentation for this pipeline and all the workflows can be accessed at http://aryee.mgh.harvard.edu/dna-methylation-tools/. scmeth is available through the Bioconductor project (https://www.bioconductor.org/packages/release/bioc/html/scmeth.html).