Background

Cancer remains a worldwide burden, and its incidence is expected to double in the coming decades [1]. Available treatment regimens include radiation therapy, targeted therapy, and chemotherapy, all of which are often accompanied by harmful side effects and impose high financial costs on both individuals and society [2, 3]. Anticancer peptides (ACPs) provide a new cost-efficient approach to cancer treatment with minimal side effects, and they have shown promise against various tumours by targeting mitochondria or acting through membranolytic mechanisms [4]. Although progress has been made in preclinical applications of peptide-based methods against cancer cells, the mechanisms behind the success of ACP treatments remain elusive. Efficient identification of ACPs is therefore highly important for both cancer research and drug development. Because identifying ACPs experimentally is costly and time-consuming, various computational models have been developed to identify ACPs from peptide sequences. These advances include iACP, developed by g-gap dipeptide composition (DPC) optimization [5, 6], and SAP, which identifies peptides using 400-dimensional g-gap dipeptide features pruned by the maximum relevance-maximum distance method [7]. In addition, various types of amino acid compositions (AACs) of peptide sequences have been introduced to develop prediction models, such as Chou's pseudo amino acid composition (PseAAC) [8]; combinations of AAC, average chemical shifts (acACS), and reduced AAC (RAAC) [6]; and pseudo g-gap DPC, amphiphilic PseAAC, and reduced amino acid alphabet (RAAAC) [9]. Other methods include computational tools based on the q-Wiener graph indices for ACP prediction [10]. Machine learning methods have also been adopted to improve model efficiency [6, 9, 11]. Several models have utilized support vector machine (SVM) and random forest (RF) methods [11, 12], combinations of the quantitative outcomes of individual classifiers (RF, K-nearest neighbour, SVM, generalized neural network, and probabilistic neural network) [9], or a pool of SVM-based models trained on sequence-based features [13].

Novel computational models based on machine learning have also been applied to identify virulent proteins in infection pathophysiology. Virulent proteins comprise a diverse set of proteins that are important for host invasion and pathogenesis. Drug resistance in bacterial pathogens has created an urgent need to identify novel virulent proteins that may facilitate drug target and vaccine development. Several computational models have been developed to identify virulent proteins. The first methods were based on similarity searches such as the Basic Local Alignment Search Tool (BLAST) [14] and Position-Specific Iterated BLAST (PSI-BLAST) [15]. Machine learning algorithms for predicting virulent proteins have also been reported, including SVM-based models built on AAC and DPC [16], an ensemble of SVM-based models trained with features extracted directly from amino acid sequences [17], a bi-layer cascade SVM model [18], and a model based on an SVM with a variant of input-decimated ensembles and their random subspace [19]. Studies have also focused on feature extraction from sequences, such as protein representations that use amino acid sequence features and the evolutionary information of a given protein [19]. Moreover, a computational tool based on the q-Wiener graph indices has been proposed to effectively predict virulent proteins [10]. Despite substantial progress, identifying specific peptides from massive protein databases remains challenging.

To date, deep learning has been applied successfully in numerous fields other than medicine, including image classification and recognition [20–22], object detection [23, 24], scene recognition [25], character recognition [26], sentence classification [27], and chromatin accessibility prediction [28]. Inspired by these successes, we propose a novel deep learning-based computational model, called PTPD, to identify ACPs and virulent proteins from peptide sequences (Fig. 1). To verify the efficiency of our approach, we performed ACP and virulent protein prediction on publicly available datasets [12, 18, 29]. Our results show that PTPD identifies ACPs and virulent proteins with high efficiency.

Fig. 1 Flowchart of PTPD

Methods

Datasets

The ACP datasets were extracted from publicly available resources [12, 29]. A total of 225 validated ACPs from the AMPs dataset and the database of Anuran defence peptides (DADP) [30] were used as positive samples, while 2,250 proteins randomly selected from the SwissProt protein database were used as negative samples. This dataset was used to build the model. An alternative dataset and two balanced datasets were employed to evaluate the model. To compare our method with existing methods, we also obtained an independent dataset, Hajisharifi-Chen (HC), from a previous study [12]. The HC dataset, which contains 138 ACPs and 206 non-ACPs, was also employed to develop the prediction models in [31, 32].

The virulent protein datasets were obtained from VirulentPred [18] and the NTX-pred study [16]. We adopted the SPAAN adhesins dataset, which contains 469 adhesins and 703 non-adhesins, to build the PTPD model for virulent protein prediction. The neurotoxin dataset, which contains 50 neurotoxins (positive samples) and 50 non-virulent proteins (negative samples) from the NTX-pred study [16], served as an independent dataset for evaluating the model.

Representation of k-mers by word2vec

Each peptide sequence was divided into k-mers by a sliding-window method, as previously described [33, 34]. To represent the k-mers, we used the publicly available word2vec tool, which learns high-quality word embedding vectors from a large corpus of k-mers.

The word2vec tool computes vector representations of words and has been widely applied in many natural language processing tasks as well as other research applications [35–38]. Two learning algorithms are available in word2vec: continuous bag-of-words and continuous skip-gram. These algorithms learn word representations that help predict other words in the sentence. The skip-gram model in word2vec trains the word vector of each word on the given corpus. Given a word W(t) in a sentence, skip-gram predicts the probabilities P(W(t+i)|W(t)) of the nearby words W(t+i) (−k ≤ i ≤ k, i ≠ 0). Each word vector thus reflects the positions of the nearby words, as illustrated in Fig. 2. The goal of the skip-gram model is to maximize the following value:

$$ E=\frac{1}{n}\sum\limits_{t=1}^{n}{\left(\sum\limits_{-k\le i\le k,i\ne 0}{{{log}_{2}}P(W(t+i)|W(t))} \right)}, $$
(1)
where k denotes the size of the window, W(t+i) (−k ≤ i ≤ k, i ≠ 0) denotes the words near the current word W(t), and n denotes the number of words.

Fig. 2 Skip-gram model structure

Because word2vec reflects the positional relationships of words in a sequence and preserves structural information, we treated the k-mers as words. Using word2vec, we obtained a 100-dimensional embedding vector for each k-mer.
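As a minimal illustration (not the authors' exact pipeline), the k-mer extraction and skip-gram training could be implemented with the gensim library (4.x API); the example sequences, k=3, stride, and window size below are placeholder assumptions, with only the 100-dimensional output fixed by the text:

```python
from gensim.models import Word2Vec

def to_kmers(seq, k=3, stride=1):
    """Split a peptide sequence into k-mers with a sliding window."""
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]

# Toy corpus; in practice this would be every peptide sequence in the dataset.
peptides = ["FLPLLAGLAANFLPKIFCKITRK", "GLFDIIKKIAESF"]
corpus = [to_kmers(p) for p in peptides]

# Skip-gram (sg=1) embeddings with 100 dimensions, matching the paper.
w2v = Word2Vec(corpus, vector_size=100, window=5, sg=1, min_count=1)
print(w2v.wv["FLP"].shape)  # (100,) -- the embedding vector of one k-mer
```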

Input layer

After constructing the word representations of all the k-mers, we mapped the peptide sequences to numeric vectors. First, we used a stride st to divide a peptide sequence S of length L0 into k-mers of length k. The number of k-mers, and hence the number of vectors, varied because the peptide sequences had different original lengths. The representations of all peptides were therefore set to a common length L, defined as the length of the longest representation among the peptide sequences; representations shorter than L were zero-padded at the end, as is common in natural language processing. Finally, each peptide sequence was converted to a representation \(\tilde {S}\) of word vectors with dimensions L×100:

$$ {{\tilde{S}}_{L\times 100}}=padding({{f}_{map}}(k\_mer({{S}_{{{L}_{0}}}}))). $$
(2)

To prevent over-fitting and improve model generalization, dropout was applied to the inputs (i.e., a randomly chosen fraction of the input values was set to zero).
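Continuing the word2vec sketch above, the mapping of Eq. (2) could be realized roughly as follows; L=60 is an arbitrary placeholder for the longest k-mer count in the corpus, and the dropout on the inputs appears later in the model sketch:

```python
import numpy as np

def encode_peptide(seq, wv, L, k=3, stride=1):
    """Map a peptide to an (L, 100) matrix of k-mer embeddings,
    zero-padded at the end (Eq. 2)."""
    kmers = [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]
    mat = np.zeros((L, wv.vector_size), dtype=np.float32)
    for row, km in enumerate(kmers[:L]):
        if km in wv:          # unseen k-mers stay as zero rows
            mat[row] = wv[km]
    return mat

# `w2v` and `peptides` come from the word2vec sketch above.
X = np.stack([encode_peptide(p, w2v.wv, L=60) for p in peptides])
print(X.shape)  # (2, 60, 100)
```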

Feature map

To extract features, a set of one-dimensional convolution filters was applied to the representation matrices of the peptide sequences. Each convolution kernel had a size of (c×100), and we used three filter sizes: c = 3, 4, and 5. All the kernels performed convolutions over the entire representation matrix. For example, using one convolution kernel of size (c×100), the feature map was constructed as follows:

$$ {{F}_{c}}={{[f(m)]}_{(L-c+1)\times 1}}, $$
(3)
$$ \begin{aligned} f(m)&=g(W\otimes \tilde{S}_{m}+b)\\ &=ReLU\left(\sum \limits_{i=0}^{c}{ \sum \limits_{j=0}^{100}{w(i,j)\times \tilde{s}(m+i,j)}}+b\right), \end{aligned} $$
(4)

where f(m) denotes the mth element of the feature map, ReLU denotes the rectified linear unit activation function, w(i, j) denotes a weight of the convolution kernel learned during training, c denotes the size of the filter, and \(\tilde {S}_{m}\) denotes the mth block of the representation matrix of the peptide sequence. ReLU [39], which sets negative convolution outputs to zero, is defined as follows:

$$ ReLU(a)=\max(0,a)=\begin{cases} 0, & \text{if } a\le 0,\\ a, & \text{otherwise}. \end{cases} $$
(5)

Multiple filters were used for each filter size. Letting nc be the number of convolution filters per size, the resulting feature maps were stacked as

$$ {{\tilde{F}}_{c}}={{[F_{c}^{1},F_{c}^{2}\ldots,F_{c}^{nc}]}_{(L-c+1)\times nc}}. $$
(6)

To reduce the spatial dimensions of the feature maps, max pooling was adopted following a convolution operation. A max pooling layer with a pooling window of size 2×1 and a stride of 2 was defined by the function

$$ \begin{aligned} &{{Z}_{c}}=({{z}_{i,j}})=pool({{{\tilde{F}}}_{c}})\\ &=[\max {{{\tilde{F}}}_{c}}(:,1),\dots,\max {{{\tilde{F}}}_{c}}(:,j),\dots,\max {{{\tilde{F}}}_{c}}(:,nc)], \end{aligned} $$
(7)

where

$$ \max {{\tilde{F}}_{c}}(i,j)=\underset{i'\in [i,i+2]}{\mathop{\max }}\,{{\tilde{F}}_{c}}({{i}^{'}},j). $$
(8)

The pooled results were finally concatenated as follows:

$$ F{{A}_{m}}=[Z_{c1},Z_{c2},Z_{c3}], $$
(9)

where c1=3, c2=4, and c3=5 denote the three filter sizes we used. FAm was then processed by a fully connected hidden layer to produce FM=ReLU(FAmWft), where Wft is the weight matrix of the fully connected layer.
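As a rough sketch, this multi-filter architecture could be expressed in Keras as follows; the padded length L=60, the number of filters per size NC=64, the dropout rate, and the hidden-layer width of 128 are all assumptions, since only the filter sizes (3, 4, 5) and the 2×1 pooling window are fixed by the text:

```python
from tensorflow.keras import layers

L, DIM, NC = 60, 100, 64   # padded length, embedding size, filters per size (assumed)

inputs = layers.Input(shape=(L, DIM))
x = layers.Dropout(0.2)(inputs)            # dropout on the inputs; rate assumed

branches = []
for c in (3, 4, 5):                        # the three filter sizes c1, c2, c3
    f = layers.Conv1D(NC, c, activation="relu")(x)       # Eqs. (3)-(6)
    f = layers.MaxPooling1D(pool_size=2, strides=2)(f)   # Eqs. (7)-(8)
    branches.append(layers.Flatten()(f))

merged = layers.Concatenate()(branches)    # Eq. (9)
fm = layers.Dense(128, activation="relu")(merged)        # fully connected layer
```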

Classification

The last layer of PTPD is a fully connected layer with a single output. A sigmoid activation function generates an output probability between zero and one, defined as

$$ \operatorname{Sigmoid}(x)=\frac{1}{1+{{e}^{-x}}}. $$
(10)

Loss function and optimizer

A binary cross-entropy loss function was used to train the model, together with the RMSprop optimizer. The binary cross-entropy loss between a prediction and its target was defined as

$$ L({{y}_{i}},{{\hat{y}}_{i}})=-\left[{{y}_{i}}\log({{\hat{y}}_{i}})+(1-{{y}_{i}})\log(1-{{\hat{y}}_{i}})\right]. $$
(11)

The total cost of the two classes was

$$ L=\sum\limits_{i=1}^{2}{L({{y}_{i}},}{{\hat{y}}_{i}}). $$
(12)
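Continuing the Keras sketch from the feature-map section, a possible classification head and training configuration is shown below; Keras's built-in binary_crossentropy corresponds to Eqs. (11)-(12), and the 1e-4 learning rate anticipates the value selected in the "Parameter settings" subsection:

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.optimizers import RMSprop

# `inputs` and `fm` are defined in the feature-map sketch above.
outputs = layers.Dense(1, activation="sigmoid")(fm)   # Eq. (10): probability in (0, 1)
model = Model(inputs, outputs)

model.compile(optimizer=RMSprop(learning_rate=1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])
```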

Model evaluation

The performance of PTPD was evaluated by various metrics, including the sensitivity (Sn), specificity (Sp), prediction accuracy (Acc), Matthew’s correlation coefficient (MCC), and the area under the curve (AUC) of the receiver-operating characteristic (ROC) curve. These metrics were defined as follows:

$$ \left\{ \begin{array}{l} Sn=\frac{TP}{TP+FN} \\ Sp=\frac{TN}{TN+FP} \\ Acc=\frac{TP+TN}{TP+TN+FP+FN} \\ MCC=\frac{(TP\times TN)-(FP\times FN)}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}} \\ \end{array} \right., $$
(13)

where TP denotes true positives, TN denotes true negatives, FP denotes false positives, and FN denotes false negatives.
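For reference, these metrics can be computed from the model's predicted probabilities with scikit-learn; this helper is an illustration, not part of PTPD itself:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, matthews_corrcoef, roc_auc_score

def evaluate(y_true, y_prob, threshold=0.5):
    """Compute Sn, Sp, Acc, MCC, and AUC from labels and predicted probabilities."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "Sn": tp / (tp + fn),
        "Sp": tn / (tn + fp),
        "Acc": (tp + tn) / (tp + tn + fp + fn),
        "MCC": matthews_corrcoef(y_true, y_pred),
        "AUC": roc_auc_score(y_true, y_prob),
    }
```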

Results

Model performance

To verify the proposed method, we evaluated PTPD on the ACP and virulent protein datasets. Each dataset was randomly divided into three groups: 75% of the data was used to train the model, 15% served as a validation set to limit over-fitting, and the remaining 10% was used to evaluate the trained PTPD model. For ACP identification, the performance of PTPD was first measured on the test data from the main dataset and then further tested on an alternative dataset. We also evaluated the performance of PTPD on two balanced datasets (Table 1).
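The paper does not state how this split was implemented; one straightforward realization with scikit-learn, assuming a feature tensor X and label vector y as constructed in Methods, would be:

```python
from sklearn.model_selection import train_test_split

# Hold out 25% of the data, then split that portion 60%/40%,
# yielding 75% train, 15% validation, and 10% test overall.
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_hold, y_hold, test_size=0.4,
                                                random_state=0)
```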

Table 1 Performance of PTPD on the ACP dataset

PTPD achieved high performance scores on the main ACP dataset: Sn = 94.2%, Sp = 86.2%, Acc = 90.2%, MCC = 0.8, and AUC = 0.97. Moreover, to evaluate the generalizability and robustness of the prediction model, we executed PTPD on the independent HC dataset, as shown in Table 1. The AUCs on the five datasets were all 0.97 or higher. Thus, PTPD offers stable performance even on unbalanced datasets (Table 1).

To evaluate PTPD on virulent proteins, we tested it on the test data of the SPAAN adhesins dataset and on the independent neurotoxins dataset (Table 2).

Table 2 Performance of PTPD on the virulent protein dataset

The five performance metrics (Sn, Sp, Acc, MCC, and AUC) achieved by PTPD on the virulent protein datasets were higher than 95.6%, 73.3%, 88.2%, 0.7, and 0.93, respectively, confirming the good performance of PTPD. Only the Sp on the SPAAN adhesins dataset was comparatively low (Table 2).

Comparison with state-of-the-art methods

For verification purposes, we compared the proposed method with other state-of-the-art methods for the identification of ACPs and virulent proteins on two independent datasets.

Comparison performed on an independent ACP dataset

To further evaluate the ability of PTPD to predict ACPs, we compared its performance with that of several state-of-the-art methods (AntiCP [29], MLACP [12], and mACPpred [40]) on the independent HC dataset (Table 3 and Fig. 3). PTPD performed as well as MLACP (RF) on the HC dataset: it achieved the highest sensitivity, relatively high AUC, Acc, and MCC, and intermediate specificity. Thus, PTPD offers better generalizability on independent datasets than the other tested state-of-the-art methods for identifying ACPs.

Fig. 3 Comparison of different methods on the HC dataset. a Sn, Sp, and Acc of different methods. b MCC and AUC of different methods. Sn: sensitivity; Sp: specificity; Acc: prediction accuracy; MCC: Matthew's correlation coefficient; AUC: area under the receiver-operating characteristic curve

Table 3 Comparison of PTPD with state-of-the-art methods on the HC dataset

Comparison performed on an independent virulent protein dataset

We also compared the performance of PTPD with that of q-FP [10], AS and 2Gram [41], VirulentPred [18], and NTX-pred [16] on a bacterial neurotoxins dataset (Table 4 and Fig. 4).

Fig. 4 Comparison of different methods on the neurotoxin virulent proteins dataset. a Sn, Sp, and Acc of different methods. b MCC and AUC of different methods. Sn: sensitivity; Sp: specificity; Acc: prediction accuracy; MCC: Matthew's correlation coefficient; AUC: area under the receiver-operating characteristic curve

Table 4 Comparison of PTPD with state-of-the-art methods on the Neurotoxins dataset

Again, the overall performance of PTPD was better than that of the other methods. Thus, we conclude that PTPD can predict potential virulent proteins with high accuracy.

Parameter settings

Because model convergence depends on the learning rate, we trained the ACP model with learning rates of 0.5, 0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001, 0.00005, and 0.00001. The accuracy and loss values under the different learning rates are shown in Fig. 5.

Fig. 5 Performance under different learning rates: a accuracy under different learning rates; b loss under different learning rates

The model achieved its highest accuracy (98.5%) and the lowest loss (0.03) when the learning rate was set to 0.0001, which was subsequently selected for model training. The detailed parameter settings are shown in Table 5.
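A hypothetical sketch of this learning-rate sweep is shown below; build_ptpd() is a placeholder for the model construction sketched in Methods, the epoch count is assumed, and X_train, y_train, X_val, y_val come from the 75/15/10 split:

```python
from tensorflow.keras.optimizers import RMSprop

# Sweep over the learning rates tested above and record validation accuracy.
for lr in [0.5, 0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001, 0.00005, 0.00001]:
    model = build_ptpd()                       # hypothetical model builder
    model.compile(optimizer=RMSprop(learning_rate=lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    hist = model.fit(X_train, y_train, validation_data=(X_val, y_val),
                     epochs=30, verbose=0)     # epoch count assumed
    print(lr, max(hist.history["val_accuracy"]))
```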

Table 5 Parameter settings

Discussion

The model performance presented in this study suggests that PTPD possesses good generalizability and robustness. The comparisons between PTPD and other methods showed that PTPD outperformed the other tested state-of-the-art methods on independent datasets.

The performance of PTPD benefits from several major factors: (1) word2vec was applied to extract representation vectors of k-mers, capturing the co-occurrence information of k-mers in peptide sequences; (2) for the feature map, a convolutional neural network (CNN) architecture was used to extract features automatically, without input from domain experts; and (3) dropout and max-pooling operations were adopted to avoid over-fitting.

Conclusions

Identifying new ACPs and virulent proteins is an extremely labour-intensive and time-consuming process. In this paper, we proposed a deep learning-based computational model, PTPD, that predicts therapeutic peptides in a highly efficient manner and achieves better recognition performance than other state-of-the-art methods. We first trained a model to extract the feature vectors of all k-mers using word2vec. Next, the peptide sequences were converted into k-mers, and each peptide sequence was represented by the vectors compiled by word2vec. A CNN then extracted features automatically, which decreases the reliance on domain experts for feature construction. The CNN was configured with three types of filters, and dropout and max-pooling operations were applied to avoid over-fitting. After fusing the features, ReLU activation replaced any negative values in the output of the CNN layer with zeros. Finally, the sigmoid function was used to classify the peptide.

The performance and generalizability of PTPD were verified on two independent datasets. The trained model achieved AUCs of 0.99 and 0.93, respectively, which confirmed that the proposed model can effectively identify ACPs and virulent proteins.

In summary, the PTPD model presented in this paper outperformed the other tested methods. Nevertheless, the approach is still limited by its inability to indicate which features are most important for identifying favourable bioactivity. In future studies on potential structures and feature selection methods, we may consider other network architectures, such as generative adversarial networks. New methods that have been successfully applied in natural language processing might also facilitate further research. Our study confirmed that PTPD is an effective means of identifying and designing novel therapeutic peptides, and our approach may be extensible to other peptide sequence-based predictions, including antihypertensive [42, 43], cell-penetrating [44], and proinflammatory [45] peptides.