Background

Genomic region and single-base level data retrieval and processing, which represent fundamental steps in genomic analyses such as copy number estimation, variant calling and quality control, still constitute one of the major bottlenecks in NGS data analysis. Different tools have been developed to deal with the computationally intensive task of calculating depth of coverage and pileup statistics at specific chromosomal regions and/or positions. Most of them, including specific modules of SAMtools [1] and BEDTools [2] and the more recent Mosdepth [3], only measure and optimize the computation of depth of sequencing coverage. A few others, such as the pileup modules of SAMtools, Sambamba [4], GATK [5] and ASEQ [6], instead provide statistics at single-base resolution, which are essential to perform variant calling, allele-specific analyses and exhaustive quality control. Although most of these tools offer parallel computation options, their scalability in terms of memory and multiple processes/threads usage is still limited.

To enable an efficient exploration of large-scale NGS datasets, here we introduce PaCBAM, a tool that provides fast and scalable processing of targeted re-sequencing data of varying sizes, from WES to small gene panels. Specifically, PaCBAM computes depth of coverage and allele-specific pileup statistics at region and single-base resolution and provides data summary visual reporting utilities. PaCBAM also introduces an innovative and efficient on-the-fly read duplicate filtering approach. While most tools for read duplicate filtering work on SAM/BAM files sorted by read name [1, 7] or read position (Tarasov et al., 2015, broadinstitute.github.io/picard) and generate new SAM/BAM files, PaCBAM performs the filtering directly during processing, avoiding the creation of intermediate BAM/SAM files and fully exploiting parallel resources.

Implementation

PaCBAM is a command line tool written in the C programming language that combines multi-threaded computation, SAMtools APIs and ad-hoc data structures. PaCBAM expects as input a sorted and indexed BAM file, a sorted BED file with the coordinates of the genomic regions of interest (namely the target, e.g. the captured regions of a WES experiment), a VCF file specifying a list of SNPs of interest within the target, and a reference genome in FASTA format. PaCBAM implements a multi-threaded solution that optimizes execution time when multiple cores are available: the tool splits the list of regions provided in the BED file and spawns threads that execute parallel computations on a shared, optimized data structure. This shared data structure collects both region and single-base level information and statistics, which are processed and finally exposed through four output options. Each output mode provides the user with only the statistics of interest, generating a combination of the following tab-delimited text files (format details are provided in Additional file 1):

a) depth of coverage of all genomic regions, reporting for each region the mean depth of coverage, the GC content and the mean depth of coverage of the sub-region (user specified, default 0.5 fraction) that maximizes the coverage peak signal, to account for the reduced depth caused by incomplete matches of reads to the captured regions (Additional file 1: Figure S1);

b) single-base resolution pileup, reporting for each genomic position in the target the read depth of the four possible bases (A, C, G and T), the total depth of coverage, the variant allelic fraction (VAF) and the strand bias information for each base;

c) pileup of positions with alternative base support, which extracts the pileup statistics only for positions with positive VAF, computed using the alternative base with the highest coverage (if any);

d) pileup of SNP positions, which extracts the pileup statistics for all SNPs specified in the input VCF file and uses the alternative alleles specified therein for the VAF calculation and the genotype assignment (see Additional file 1 for details).
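To illustrate the kind of per-base statistics listed above, the following minimal C sketch (not PaCBAM's actual internal code; struct and function names are hypothetical) shows a per-position counter layout and how a VAF such as the one used in output mode (c) could be derived from it, taking the non-reference base with the highest coverage as the alternative allele.

```c
#include <stdio.h>

/* Hypothetical per-position counters: read depth of A, C, G, T. */
typedef struct {
    unsigned int counts[4];   /* indexed as A=0, C=1, G=2, T=3 */
} pos_pileup_t;

/* Total depth of coverage at a position. */
static unsigned int total_depth(const pos_pileup_t *p) {
    return p->counts[0] + p->counts[1] + p->counts[2] + p->counts[3];
}

/* VAF computed from the non-reference base with the highest coverage
   (if any), mirroring the logic described for output mode (c) above.
   Returns 0 when the position has no coverage or no alternative support. */
static double vaf_highest_alt(const pos_pileup_t *p, int ref_idx) {
    unsigned int depth = total_depth(p), alt = 0;
    for (int i = 0; i < 4; i++)
        if (i != ref_idx && p->counts[i] > alt)
            alt = p->counts[i];
    return depth > 0 ? (double)alt / depth : 0.0;
}

int main(void) {
    /* Example position: reference base C (index 1), 120 reads supporting C,
       30 supporting T and 2 likely sequencing errors on A. */
    pos_pileup_t p = { .counts = {2, 120, 0, 30} };
    printf("depth=%u VAF=%.3f\n", total_depth(&p), vaf_highest_alt(&p, 1));
    return 0;
}
```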

PaCBAM allows the user to specify a minimum base quality score and a minimum read mapping quality to filter out low-quality bases and reads during pileup processing.
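A minimal sketch of how such thresholds could be applied during pileup traversal is shown below; the helper function and the threshold values are illustrative assumptions, not PaCBAM's actual code or defaults. In an htslib/SAMtools API based implementation, the two quality values would typically be read from b->core.qual and bam_get_qual(b).

```c
#include <stdio.h>

/* Illustrative predicate mirroring PaCBAM's quality options: a read base
   contributes to the pileup only if both the read mapping quality and the
   base quality score meet the user-specified minimums. */
static int keep_base(int mapq, int baseq, int min_mapq, int min_baseq) {
    return mapq >= min_mapq && baseq >= min_baseq;
}

int main(void) {
    int min_mapq = 1, min_baseq = 20;  /* hypothetical threshold values */
    printf("%d\n", keep_base(60, 35, min_mapq, min_baseq)); /* 1: kept          */
    printf("%d\n", keep_base(60, 10, min_mapq, min_baseq)); /* 0: low base qual */
    printf("%d\n", keep_base(0, 35, min_mapq, min_baseq));  /* 0: low map qual  */
    return 0;
}
```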

In addition, we implemented an efficient on-the-fly duplicate read filtering strategy that follows an approach similar to the Picard MarkDuplicates method but applies the filter during region and single-base level data retrieval and processing, without the need to create new BAM files (Additional file 1). The filtering strategy, which fully exploits multi-core capabilities, uses single or paired read alignment positions (corrected for soft-clipping at the 5′ end) and total mapping size information to identify duplicates, and relies on ad-hoc data structures for computational efficiency.
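To make the filtering criterion concrete, the following minimal C sketch (not PaCBAM's actual data structures; field names and the hashing scheme are illustrative) shows one possible duplicate signature built from the information mentioned above: chromosome, 5'-most alignment position corrected for soft-clipping, mate position for paired reads, total mapping size and strand. Reads or read pairs sharing the same signature would be counted only once during pileup computation.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative duplicate signature: two reads (or read pairs) are treated
   as duplicates when all of these fields match. All fields are int32_t so
   the struct has no padding and can be hashed byte-wise. */
typedef struct {
    int32_t tid;        /* chromosome id                                  */
    int32_t pos5;       /* 5'-most position, corrected for soft-clipping  */
    int32_t mate_pos5;  /* mate 5' position, or -1 for single-end reads   */
    int32_t tlen;       /* total mapping size of the read/pair            */
    int32_t reverse;    /* strand flag                                    */
} dup_key_t;

static int dup_key_equal(const dup_key_t *a, const dup_key_t *b) {
    return a->tid == b->tid && a->pos5 == b->pos5 &&
           a->mate_pos5 == b->mate_pos5 && a->tlen == b->tlen &&
           a->reverse == b->reverse;
}

/* Toy FNV-1a hash of the key; an actual implementation would keep such
   keys in per-thread hash sets so that duplicates can be discarded on
   the fly while regions are processed in parallel. */
static uint64_t dup_key_hash(const dup_key_t *k) {
    const uint8_t *p = (const uint8_t *)k;
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < sizeof *k; i++) { h ^= p[i]; h *= 1099511628211ULL; }
    return h;
}

int main(void) {
    dup_key_t a = {7, 55123400, 55123650, 410, 0};
    dup_key_t b = a;                 /* identical signature: duplicate    */
    dup_key_t c = a; c.pos5 += 1;    /* shifted 5' start: not a duplicate */
    printf("a~b duplicate: %d\n", dup_key_equal(&a, &b));
    printf("a~c duplicate: %d\n", dup_key_equal(&a, &c));
    printf("hash(a) = %llu\n", (unsigned long long)dup_key_hash(&a));
    return 0;
}
```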

The PaCBAM package also includes a Python script that generates visual data reports, which can be directly used for quality control. Reports include plots summarizing region and per-base depth of coverage distributions, SNP VAF distribution and genotyping, strand bias distribution, substitution spectra and region GC content (Additional file 1: Figures S2-S8).

Results

PaCBAM performance was tested on an AMD Opteron 6380 32-core machine with 256 GB RAM. To mimic different application scenarios, we measured the execution time and memory used by PaCBAM to compute pileups from multiple input BAM files spanning different depths of coverage and target sizes (Additional file 1: Table S1), using an increasing number of threads. We compared PaCBAM against the pileup modules of SAMtools, Sambamba and GATK (SAMtools offers no parallel pileup option).

In terms of runtime, as shown in Fig. 1a and Additional file 1: Figures S9-S11, PaCBAM and Sambamba are the only tools that scale with the number of threads used, and PaCBAM outperforms all other tools in all tested conditions. Of note, while PaCBAM pileup output files have constant size, the output files of SAMtools, Sambamba and GATK grow with coverage; across all experiments run in the performance analyses, PaCBAM output is up to 17.5x smaller than the outputs generated by the other tested tools.

Fig. 1

PaCBAM performance. Time (a) and memory (b) required by PaCBAM to perform a pileup compared to SAMtools, GATK and Sambamba, using an increasing number of threads. The figure refers to the analysis of a BAM file with ~300X mean coverage and ~30Mbp target size using 30 threads. Note that a parallel pileup option is not available for SAMtools; red lines in panels a and b refer to the average of single-thread executions

As shown in Fig. 1b and Additional file 1: Figures S12-S14, GATK and PaCBAM have a memory usage that depends only on the target size, whereas Sambamba usage depends on both target size and number of threads and SAMtools usage is constant. Above 8 cores, PaCBAM uses less memory than both GATK and Sambamba in all tested conditions.

As an example, when analyzing a BAM file with ~300X mean coverage and ~30Mbp target size using 30 threads (Fig. 1a-b), PaCBAM improves execution time by 4.9x/5.27x and requires 80%/82% less memory compared to Sambamba/GATK, respectively.

Of note, in the sequencing scenarios considered here, PaCBAM demonstrates up to a 100x execution time improvement and up to 90% lower memory usage with respect to the single-base pileup module of our previous tool ASEQ (Additional file 1: Figure S15).

Depth of coverage and pileup statistics computed by PaCBAM were compared to GATK results on a BAM file with ~300X average coverage and ~64Mbp target size, showing almost perfect concordance (Fig. 2a-b).

Fig. 2

Comparison of PaCBAM results with other tools. a Comparison of PaCBAM and GATK depth of coverage (left), with a zoom into the coverage range [0, 500] (right); the number of positions considered in the analysis and correlation results are reported. b Comparison of the allelic fraction of ~40K positions annotated as SNPs in the dbSNP database v144 and having an allelic fraction > 0.2 in both PaCBAM and GATK pileup outputs. c Single-base coverage obtained by running either Picard MarkDuplicates + PaCBAM pileup or PaCBAM pileup with the duplicate filtering option active (left), with a zoom into the coverage range [0, 500] (right). d Regional mean depth of coverage obtained by running either Picard MarkDuplicates + PaCBAM pileup or PaCBAM pileup with the duplicate filtering option active

The PaCBAM duplicate removal strategy was tested by comparing PaCBAM pileups obtained from a paired-end BAM file first processed with Picard MarkDuplicates or parallel Sambamba markdup to PaCBAM pileups obtained from the same initial BAM file using the embedded on-the-fly duplicate filtering. As shown in Fig. 2c-d and Additional file 1: Figure S16, both single-base and region level statistics are strongly concordant: the single-base total coverage difference (with respect to Picard) is < 10X at 99.94% of positions, the single-base allelic fraction difference is < 1% at 99.95% of positions, and the region mean coverage difference is < 10X for 99.96% of regions (Additional file 1: Figure S17). In addition, the PaCBAM strategy improves overall execution time by 2.5x/1.7x with a single thread and by 25x/3x with 30 threads compared to Picard and parallel Sambamba, respectively (Additional file 1: Table S2, Fig. 2c, Additional file 1: Figure S16A).

Overall, these analyses demonstrate that PaCBAM exploits parallel computational resources better than existing tools, with marked reductions in processing time and memory usage that enable fast and efficient coverage and allele-specific characterization of large WES and targeted sequencing datasets. The performance analysis is fully reproducible using an ad-hoc Debian-based Singularity container (Additional file 1: Table S3).

Conclusion

We presented PaCBAM, a fast and scalable tool to process genomic regions from NGS data files and generate coverage and pileup statistics for downstream analyses such as copy number estimation, variant calling and data quality control. Although designed for targeted re-sequencing data, PaCBAM can be used to characterize any set of genomic regions of interest from NGS data. PaCBAM generates both region and single-base level statistics and provides a fast and innovative on-the-fly read duplicate filtering strategy. The tool is easy to use, can be integrated into any NGS pipeline and is available as source and binary on Bitbucket and as containers from the Docker and Singularity hubs.

Availability and requirements

Project name: PaCBAM

Project home page: bcglab.cibio.unitn.it/PaCBAM

Operating system(s): Platform independent

Programming language: C, Python

License: MIT