1 Introduction

The development of a safe and effective drug takes more than 10 years and costs approximately US $2.6 billion [1]. Virtual screening (VS), which uses computational methods to predict the activity of untested compounds against a drug target protein, is widely used in drug discovery research to reduce the cost of drug development [2]. Ligand-based virtual screening (LBVS) is a VS approach [3] in which prediction is formulated as a classification or regression problem, and the activity of untested compounds is predicted by machine learning methods from the activity of tested compounds.

Learning-to-rank is a machine learning framework from the field of information retrieval for learning ranking models [4], and has lately been introduced to LBVS. In LBVS with learning-to-rank, VS is formulated as the problem of predicting a ranking with respect to compound activity, such as the half maximal inhibitory concentration (\({\text{IC}}_{50}\)), and the ranking is predicted with a learning-to-rank model. Agarwal et al. [5] introduced learning-to-rank to VS for the first time, and showed that their method using RankSVM outperformed methods that simply employ the support vector machine (SVM) or support vector regression (SVR). Rathke et al. [6] proposed StructRank, which directly solves the ranking problem and focuses on the most promising compounds in terms of activity. Zhang et al. [7] compared several learning-to-rank prediction models and concluded that RankSVM is the best. Furthermore, they noted that learning-to-rank can handle multiple heterogeneous experimental data measured for different targets or on different platforms. This is a major advantage of learning-to-rank, because traditional VS approaches such as classification and regression cannot integrate multiple heterogeneous experimental data. Their method was based on the tensor product of the feature vectors of a compound and a target protein.

The drug–target interaction prediction problem [8] has been studied alongside LBVS. This problem involves multiple compounds and multiple proteins, and the task is to predict the interactions between them. We note that the drug–target interaction problem differs from LBVS: it focuses on the predictive accuracy over all interactions involving multiple compounds and multiple proteins, whereas LBVS focuses on the predictive accuracy of interactions between multiple compounds and a specific protein as drug target. A pairwise kernel method, a kernel-based machine learning method, was proposed for the drug–target interaction problem [9]. The pairwise kernel is defined as the product of a compound kernel and a protein kernel, so the pairwise kernel method is extensible in terms of the choice of compound kernel and protein kernel.

We note that the method involving the tensor product proposed by Zhang et al. is a special case of the pairwise kernel method. If both the compound kernel and the protein kernel are represented as a linear kernel, the pairwise kernel method is equivalent to the method that uses a tensor product.

In this paper, we propose PKRank, a novel learning-to-rank-based VS method using a pairwise kernel and RankSVM. PKRank has several advantages over the method that uses the tensor product: (a) PKRank can handle high-dimensional feature vectors; (b) any kernel function can be used for the compound kernel and the protein kernel; and (c) PKRank can use similarity measures directly for prediction.

The purpose of this study is to obtain, with PKRank, a more accurate prediction model than the method that uses the tensor product [7]. A comparison of prediction accuracy between PKRank and the previous method on compound activity data recorded in BindingDB [10] showed that the former is superior.

2 Methods

We use the following notation throughout this paper: let \(\mathbf {c} \equiv (c_1, ..., c_{d(\mathbf {c})})^{\top }\) be a feature vector of a compound and \(\mathbf {p} \equiv (p_1, ..., p_{d(\mathbf {p})})^{\top }\) be a feature vector of a protein, where \(d(\mathbf {c})\) and \(d(\mathbf {p})\) are the number of dimensions of vector \(\mathbf {c}\) and vector \(\mathbf {p}\), respectively.

The ranking prediction model f of learning-to-rank is represented as \(f(\mathbf {x}) = f(\varvec {\Phi }(\mathbf {c}, \mathbf {p}))\), where \(\mathbf {x} \equiv \varvec {\Phi }(\mathbf {c}, \mathbf {p})\) is an input feature vector and \(\varvec {\Phi }\) is a feature map. In this section, we explain the method proposed by Zhang et al. that uses the tensor product as \(\varvec {\Phi }\) [7] as well as the proposed method PKRank, which is a learning-to-rank-based VS using a pairwise kernel. The former method is a special case of the latter, as described presently.

2.1 Previously proposed method (tensor product)

Zhang et al. [7] introduced the tensor product as feature map \(\varvec {\Phi }\) as follows:

$$\varvec {\Phi }(\mathbf {c}, \mathbf {p})=\mathbf {c}\otimes \mathbf {p},$$
(1)

where \(\otimes\) is the tensor product operator. If \(\mathbf {c}\) is a \(d(\mathbf {c})\)-dimensional feature vector and \(\mathbf {p}\) is a \(d(\mathbf {p})\)-dimensional feature vector, \(\varvec {\Phi }(\mathbf {c}, \mathbf {p})=\mathbf {c}\otimes \mathbf {p}\) is a \(d(\mathbf {c})\times d(\mathbf {p})\)-dimensional feature vector. Zhang et al. used a general descriptor [11] (GD, 32 dimensions) as the compound feature vector \(\mathbf {c}\), and the composition, transition, and distribution features [12] (CTD, 147 dimensions) as the protein feature vector \(\mathbf {p}\). Hence, they used a 4,704-dimensional feature vector as input to the ranking prediction model f. GD and CTD represent the physicochemical properties of a compound and a protein, respectively.
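
For illustration, the tensor product feature can be computed with NumPy as in the following minimal sketch; the random 32- and 147-dimensional vectors merely stand in for GD and CTD descriptors.

import numpy as np

rng = np.random.default_rng(0)
c = rng.random(32)            # compound feature vector (stand-in for GD, 32 dims)
p = rng.random(147)           # protein feature vector (stand-in for CTD, 147 dims)

x = np.kron(c, p)             # tensor (Kronecker) product feature, Eq. (1)
print(x.shape)                # (4704,) = 32 * 147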

2.2 Proposed method (PKRank)

The pairwise kernel [9] was originally proposed in the context of the drug–target interaction problem [8]. The pairwise kernel \(k: (\mathbb {R}^{d(\mathbf {c})} \times \mathbb {R}^{d(\mathbf {p})}) \times (\mathbb {R}^{d(\mathbf {c})} \times \mathbb {R}^{d(\mathbf {p})}) \rightarrow \mathbb {R}\) is defined between two compound–protein pairs \((\mathbf {c}, \mathbf {p})\) and \((\mathbf {c}', \mathbf {p}')\) as follows:

$$k((\mathbf {c}, \mathbf {p}),(\mathbf {c}', \mathbf {p}'))$$
(2)
$$\quad = {\Phi }(\mathbf {c}, \mathbf {p})^{\top } {\Phi }(\mathbf {c}', \mathbf {p}'),$$
(3)
$$\quad =({\Phi }_{\text {com}}(\mathbf {c})\otimes {\Phi }_{\text {pro}}(\mathbf {p}))^{\top }({\Phi }_{\text {com}}(\mathbf {c}')\otimes {\Phi }_{\text {pro}}(\mathbf {p}')),$$
(4)
$$\quad =({\Phi }_{\text {com}}(\mathbf {c})^{\top } {\Phi }_{\text {com}}(\mathbf {c}'))\times ({\Phi }_{\text {pro}}(\mathbf {p})^{\top } {\Phi }_{\text {pro}}(\mathbf {p}')),$$
(5)
$$\quad =k_{\text {com}}(\mathbf {c}, \mathbf {c}') \times k_{\text {pro}}(\mathbf {p}, \mathbf {p}'),$$
(6)

where \(k_{\text {com}}: \mathbb {R}^{d(\mathbf {c})} \times \mathbb {R}^{d(\mathbf {c})} \rightarrow \mathbb {R}\) is a compound kernel between two compounds, and \(k_{\text {pro}}: \mathbb {R}^{d(\mathbf {p})} \times \mathbb {R}^{d(\mathbf {p})} \rightarrow \mathbb {R}\) is a protein kernel between two proteins.
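
The step from Eq. (4) to Eq. (5) relies on the identity \((\mathbf {a}\otimes \mathbf {b})^{\top }(\mathbf {a}'\otimes \mathbf {b}')=(\mathbf {a}^{\top }\mathbf {a}')(\mathbf {b}^{\top }\mathbf {b}')\). The following minimal sketch checks it numerically for the linear feature maps \({\Phi }_{\text {com}}(\mathbf {c})=\mathbf {c}\) and \({\Phi }_{\text {pro}}(\mathbf {p})=\mathbf {p}\) (dimensions are illustrative).

import numpy as np

rng = np.random.default_rng(1)
c, c2 = rng.random(32), rng.random(32)        # two compounds
p, p2 = rng.random(147), rng.random(147)      # two proteins

lhs = np.dot(np.kron(c, p), np.kron(c2, p2))  # Phi(c, p)^T Phi(c', p'), Eqs. (3)-(4)
rhs = np.dot(c, c2) * np.dot(p, p2)           # k_com(c, c') * k_pro(p, p'), Eqs. (5)-(6)
assert np.isclose(lhs, rhs)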

RankSVM [13] is a learning-to-rank model based on a pairwise approach using SVM. Zhang et al. compared several learning-to-rank prediction models and concluded that RankSVM is the best. Like SVM, RankSVM can be kernelized; thus, the pairwise kernel can be used.
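
To illustrate the pairwise approach behind RankSVM, the sketch below reduces the linear case to binary classification of pairwise difference vectors using scikit-learn; this is only an illustration of the idea, not the kernelized implementation of Kuo et al. used in our experiments (Sect. 3.3).

import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC

def pairwise_transform(X, y):
    # For every pair with y[i] != y[j], the difference of the higher-ranked and
    # lower-ranked feature vectors is a positive example (and its negation a
    # negative example), which turns ranking into binary classification.
    diffs, labels = [], []
    for i, j in combinations(range(len(y)), 2):
        if y[i] == y[j]:
            continue                          # ties carry no ordering information
        hi, lo = (i, j) if y[i] > y[j] else (j, i)
        diffs.append(X[hi] - X[lo])
        labels.append(+1)
        diffs.append(X[lo] - X[hi])
        labels.append(-1)
    return np.asarray(diffs), np.asarray(labels)

rng = np.random.default_rng(2)
X = rng.random((50, 10))                      # toy compound features
y = X @ rng.random(10)                        # toy activities (e.g. pIC50)

X_pair, y_pair = pairwise_transform(X, y)
ranker = LinearSVC(C=1.0, fit_intercept=False).fit(X_pair, y_pair)
scores = X @ ranker.coef_.ravel()             # higher score = predicted higher rank
predicted_ranking = np.argsort(-scores)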

Our proposed PKRank is a learning-to-rank method that uses a pairwise kernel and RankSVM. Training PKRank involves two steps: (1) a Gram matrix K of the pairwise kernel is generated; (2) RankSVM is trained with the Gram matrix K and the order of activity of the compounds against the target proteins as input. PKRank requires \(k_{\text {com}}\) and \(k_{\text {pro}}\) to generate the Gram matrix K. Figure 1 shows an overview of the training of PKRank, and Fig. 2 shows an overview of the generation of the Gram matrix K.
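
A minimal sketch of step (1), following Eq. (6): it assumes precomputed Gram matrices over the distinct compounds and proteins, and index arrays recording which compound and protein form each training pair (all variable names are illustrative, not taken from the paper's implementation).

import numpy as np

def pairwise_gram(K_com, K_pro, comp_idx, prot_idx):
    # K[a, b] = k_com(c_a, c_b) * k_pro(p_a, p_b) for training pairs a, b
    return K_com[np.ix_(comp_idx, comp_idx)] * K_pro[np.ix_(prot_idx, prot_idx)]

# Toy example: 4 compounds, 2 proteins, 5 (compound, protein) training pairs.
rng = np.random.default_rng(3)
C = rng.random((4, 8))
P = rng.random((2, 6))
K_com, K_pro = C @ C.T, P @ P.T               # linear kernels, for illustration
comp_idx = np.array([0, 1, 2, 3, 0])          # compound of each training pair
prot_idx = np.array([0, 0, 1, 1, 1])          # protein of each training pair
K = pairwise_gram(K_com, K_pro, comp_idx, prot_idx)
print(K.shape)                                # (5, 5); K is passed to RankSVM in step (2)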

Fig. 1 Overview of the training of the proposed method, PKRank

Fig. 2 Generating a Gram matrix of the pairwise kernel K

If both \(k_{\text {com}}\) and \(k_{\text {pro}}\) are represented as linear kernel \(k(\mathbf {x},\mathbf {x}') = {\Phi }(\mathbf {x})^{\top } {\Phi }(\mathbf {x}') \equiv \mathbf {x}^{\top }\mathbf {x}'\), we obtain \({\Phi }_{\text {com}}(\mathbf {c}) = \mathbf {c}\) and \({\Phi }_{\text {pro}}(\mathbf {p}) = \mathbf {p}\) from (5) and (6). Then, we get \({\Phi }(\mathbf {c},\mathbf {p}) = \mathbf {c}\otimes \mathbf {p}\) from (3) and (4). This is equivalent to the method that uses the tensor product; hence, this method is a special case of PKRank.

PKRank has several advantages. (a) The tensor product method cannot handle high-dimensional feature vectors, because the tensor product itself becomes very high-dimensional [as described above, \(\mathbf {c}\otimes \mathbf {p}\) is a \(d(\mathbf {c})\times d(\mathbf {p})\)-dimensional feature vector]. PKRank can handle them, because the pairwise kernel uses the compound kernel \(k_{\text {com}}\) and the protein kernel \(k_{\text {pro}}\) rather than the explicit feature map \({\Phi }(\mathbf {c}, \mathbf {p})\). (b) In the tensor product method, the compound kernel \(k_{\text {com}}\) and the protein kernel \(k_{\text {pro}}\) are fixed to the linear kernel, as described above, whereas PKRank can use any kernel function in addition to the linear kernel. (c) PKRank can use similarity measures directly for prediction. This is because the pairwise kernel method needs only the Gram matrix, whose elements represent the similarity between compounds or between proteins; an explicit feature vector representation of the compound (\({\Phi }_{\text {com}}\)) or the protein (\({\Phi }_{\text {pro}}\)) is therefore not always required.

To make full use of advantage (a) of PKRank, we introduce extended-connectivity fingerprints [14] (ECFP4, 2,048 dimensions) as a compound feature vector; ECFP4 is a topological fingerprint representing the presence of substructures. ECFP4 cannot be handled by the tensor product method because of its large dimensionality (if ECFP4 and CTD were used in that method, a 301,056-dimensional feature vector would be the input to the ranking prediction model). With regard to advantage (b), we introduce a polynomial kernel, a radial basis function (RBF) kernel, and the Tanimoto kernel [15] for the compound kernel \(k_{\text {com}}\), and a polynomial kernel and an RBF kernel for the protein kernel \(k_{\text {pro}}\). To exploit advantage (c), we introduce the normalized Smith–Waterman score (nSW) [16] as a protein kernel \(k_{\text {pro}}\); the nSW is a normalized local alignment score between sequences. The nSW is calculated from two amino acid sequences alone and measures the similarity between proteins; thus, the nSW can be used directly as the protein kernel \(k_{\text {pro}}\).

2.3 Kernels

We use a linear kernel, a polynomial kernel, an RBF kernel, and the Tanimoto kernel for the compound kernel \(k_{\text {com}}\), and a linear kernel, a polynomial kernel, an RBF kernel, and the normalized Smith–Waterman score (nSW) for the protein kernel \(k_{\text {pro}}\). Here, we explain these kernels; a minimal implementation sketch follows the list.

  • A linear kernel between two features \(\mathbf {x}, \mathbf {x}'\) is

    $$k(\mathbf {x}, \mathbf {x}')=\mathbf {x}^{\top }\mathbf {x}' .$$
    (7)

    As seen above, if both \(k_{\text {com}}\) and \(k_{\text {pro}}\) are represented as a linear kernel, PKRank is equivalent to the tensor product method.

  • A polynomial kernel between two features \(\mathbf {x}, \mathbf {x}'\) is

    $$k(\mathbf {x}, \mathbf {x}')=(\mathbf {x}^{\top }\mathbf {x}'+1)^{z}.$$
    (8)

It has a hyper-parameter z, whose tuning is explained in Sect. 3.4.

  • An RBF kernel between two features \(\mathbf {x}, \mathbf {x}'\) is

    $$k(\mathbf {x}, \mathbf {x}')=\exp (-\gamma \Vert \mathbf {x}-\mathbf {x}'\Vert ^2).$$
    (9)

It is widely used in kernel-based machine learning. It has a hyper-parameter \(\gamma\), whose tuning is explained in Sect. 3.4.

  • The Tanimoto kernel between two binary vectors \(\mathbf {x}, \mathbf {x}'\) is

    $$k(\mathbf {x}, \mathbf {x}')=\frac{\mathbf {x}^{\top }\mathbf {x}'}{\mathbf {x}^{\top }\mathbf {x}+{\mathbf {x}'}^{\top }\mathbf {x}'-\mathbf {x}^{\top }\mathbf {x}'}.$$
    (10)

The Tanimoto coefficient measures the similarity between compounds represented by binary features, and it has been used as a compound kernel in the drug–target interaction problem. We use the Tanimoto kernel only with ECFP4, which is a binary feature vector.

  • The normalized Smith–Waterman score (nSW) between two proteins \(seq, seq'\) is:

    $$k(seq, seq') =\frac{\text {SW}(seq, seq')}{\sqrt{\text {SW}(seq, seq)}\sqrt{\text {SW}(seq', seq')}}, $$
    (11)

    where \(\text {SW}(\cdot ,\cdot )\) is the Smith–Waterman score, which is a local alignment score between amino acid sequences.
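
The kernels above can be sketched in a few lines; the following minimal implementations assume NumPy arrays, illustrative default hyper-parameters, and a Smith–Waterman score matrix precomputed by an external alignment tool.

import numpy as np

def linear_kernel(x, x2):                     # Eq. (7)
    return float(x @ x2)

def polynomial_kernel(x, x2, z=2):            # Eq. (8)
    return float((x @ x2 + 1.0) ** z)

def rbf_kernel(x, x2, gamma=1e-4):            # Eq. (9)
    return float(np.exp(-gamma * np.sum((x - x2) ** 2)))

def tanimoto_kernel(x, x2):                   # Eq. (10); x and x2 are binary vectors, e.g. ECFP4
    dot = float(x @ x2)
    return dot / (float(x @ x) + float(x2 @ x2) - dot)

def normalized_sw(sw, i, j):                  # Eq. (11); sw[i, j] is a precomputed Smith-Waterman score
    return sw[i, j] / np.sqrt(sw[i, i] * sw[j, j])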

Table 1 Three datasets used as benchmark in this study (PDE, CTS, ADOR)

3 Experiments

3.1 Data

To compare the predictive accuracy of the method that uses the tensor product and PKRank, compound activity data measured as \(\text {IC}_{50}\) against proteins of the phosphodiesterase (PDE), cathepsin (CTS), and adenosine receptor (ADOR) families, recorded in BindingDB [10], were used as benchmarks. We note that the compound activity data against the PDE and CTS families had been used in a previous study [7]. To remove highly similar compounds from the dataset, clustering based on Butina's method [17] with ECFP4 and the Tanimoto coefficient was performed, with the similarity cutoff of Butina's algorithm set to 0.8. The compiled non-redundant datasets are shown in Table 1.
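
For reference, this clustering step can be sketched with RDKit as follows; the SMILES are toy placeholders, and we assume the 0.8 similarity cutoff corresponds to a Tanimoto distance threshold of 0.2.

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from rdkit.ML.Cluster import Butina

smiles_list = ["CCO", "CCN", "c1ccccc1", "c1ccccc1O"]        # toy SMILES, illustrative only
mols = [Chem.MolFromSmiles(s) for s in smiles_list]
fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048) for m in mols]  # ECFP4

# Butina.ClusterData expects a flat lower-triangle distance list.
dists = []
for i in range(1, len(fps)):
    sims = DataStructs.BulkTanimotoSimilarity(fps[i], fps[:i])
    dists.extend(1.0 - s for s in sims)

clusters = Butina.ClusterData(dists, len(fps), distThresh=0.2, isDistData=True)
representatives = [cluster[0] for cluster in clusters]       # first member is the cluster centroid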

3.2 Evaluation criteria

In information retrieval, normalized discounted cumulative gain (NDCG) [19] is widely used for evaluation. There are two types of NDCG: NDCG1 and NDCG2. NDCG1 is defined as follows:

$$\text {DCG}1@m= rel _1 + \sum _{i=2}^m \frac{ rel _i}{\log _2 i},$$
(12)
$$\text {NDCG}1@m=\frac{\text {DCG}1@m}{\text {IdealDCG}1@m},$$
(13)

where m is the number of items considered for evaluation, \( rel _i\) is the relevance of the item at position i in the predicted ranking, and \(\text {IdealDCG}1@m\) is the normalization term, defined as the \(\text {DCG}1@m\) obtained when all items in the dataset are sorted by their true relevance. NDCG2, on the other hand, is defined as follows:

$$\text {DCG}2@m=\sum _{i=1}^m \frac{2^{ rel _i}-1}{\log _2 (i+1)},$$
(14)
$$\text {NDCG}2@m=\frac{\text {DCG}2@m}{\text {IdealDCG}2@m}.$$
(15)

The study that proposed the tensor product method [7] used NDCG2@10 for evaluation, but NDCG2 is unstable: because of the \(2^{ rel _i}\) term, the score changes drastically even when the predicted ranking changes only slightly. Thus, we used NDCG1@100 and NDCG1@10 in addition to NDCG2@10 for evaluation.

We note that \(\text {pIC}_{50} \equiv -\log _{10}(\text {IC}_{50})\) is used as the relevance of a compound: the higher the \(\text {pIC}_{50}\) of a compound, the more strongly it binds to the target protein.
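
A minimal sketch of Eqs. (12)-(15) follows, with pIC50 values used as relevance; array names and the toy values are illustrative.

import numpy as np

def dcg1(rel, m):
    rel = np.asarray(rel, dtype=float)[:m]
    discounts = np.concatenate(([1.0], 1.0 / np.log2(np.arange(2, len(rel) + 1))))
    return float(np.sum(rel * discounts))          # Eq. (12)

def dcg2(rel, m):
    rel = np.asarray(rel, dtype=float)[:m]
    return float(np.sum((2.0 ** rel - 1.0) / np.log2(np.arange(2, len(rel) + 2))))  # Eq. (14)

def ndcg(rel_in_predicted_order, rel_all, m, version=1):
    # rel_in_predicted_order: relevances sorted by the predicted ranking;
    # rel_all: relevances of all items, used for the ideal DCG normalization.
    dcg = dcg1 if version == 1 else dcg2
    return dcg(rel_in_predicted_order, m) / dcg(np.sort(rel_all)[::-1], m)           # Eqs. (13), (15)

pic50 = np.array([8.1, 7.5, 6.9, 6.2, 5.0])        # toy pIC50 values (relevance)
predicted_order = np.array([1, 0, 3, 2, 4])        # item indices sorted by predicted score
print(ndcg(pic50[predicted_order], pic50, m=3, version=1))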

3.3 Training method

We used the kernelized RankSVM implementation of Kuo et al. [20].

We tested ten methods obtained by changing the feature vectors and kernels. For the compound feature vectors, GD and ECFP4 were calculated with RDKit (version 2016.09.1) [21]. On the protein side, CTD was calculated with PROFEAT (version 2016) [12], and the normalized Smith–Waterman score (nSW) was calculated with EMBOSS (version 6.6.0) [22].
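
As an illustration of the compound-side feature calculation, a descriptor vector can be assembled with RDKit as sketched below; the handful of descriptors shown merely stands in for the 32-dimensional GD of [11] (CTD and the Smith–Waterman scores come from PROFEAT and EMBOSS and are not shown).

import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors

def compound_descriptors(smiles):
    # A small, illustrative subset of physicochemical descriptors.
    mol = Chem.MolFromSmiles(smiles)
    return np.array([
        Descriptors.MolWt(mol),
        Descriptors.MolLogP(mol),
        Descriptors.TPSA(mol),
        Descriptors.NumHDonors(mol),
        Descriptors.NumHAcceptors(mol),
        Descriptors.NumRotatableBonds(mol),
    ])

print(compound_descriptors("CC(=O)Oc1ccccc1C(=O)O"))   # aspirin, as a toy example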

3.4 Parameter settings

We tuned the hyper-parameters as follows: (1) we randomly split the test data into two parts, a validation part for hyper-parameter tuning and a test part for evaluation; (2) we evaluated NDCG1@100, NDCG1@10, and NDCG2@10 on the test part using the hyper-parameter combination that maximized the corresponding score on the validation part; (3) we repeated (1) and (2) five times and report the mean of the five evaluation scores.

RankSVM has a cost parameter C, as SVM does, and we chose its value from the set \(\{10^{-9}, 10^{-8}, ..., 10^0\}\). The polynomial kernel has a degree parameter z, which was chosen from \(\{2, 3\}\). The RBF kernel has a bandwidth parameter \(\gamma\), which was chosen from \(\{10^{-6}, 10^{-5}, 10^{-4}, 10^{-3}\}\).
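
The hyper-parameter search can be sketched as a simple grid over these values; the evaluate callback (returning an NDCG score on the validation part) is a placeholder for the user's own evaluation routine.

from itertools import product

C_grid     = [10.0 ** e for e in range(-9, 1)]     # 10^-9, ..., 10^0
z_grid     = [2, 3]                                # polynomial degree z
gamma_grid = [10.0 ** e for e in range(-6, -2)]    # 10^-6, ..., 10^-3

def grid_search(evaluate):
    """Return the (C, z, gamma) combination maximizing evaluate() on the validation part."""
    return max(product(C_grid, z_grid, gamma_grid),
               key=lambda params: evaluate(*params))

# Example with a dummy objective standing in for the validation NDCG:
best_C, best_z, best_gamma = grid_search(lambda C, z, gamma: -abs(C - 1e-3) - abs(gamma - 1e-4))
print(best_C, best_z, best_gamma)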

4 Results

The evaluation scores (NDCG1@100, NDCG1@10, and NDCG2@10) are shown in Tables 2, 3 and 4. The best score among the 10 methods is shown in bold. The italicized first lines in Tables 2, 3 and 4 correspond to the method using the tensor product [7], as shown in Sect. 2.2. The other lines (from the second to the last) correspond to the proposed method, PKRank. To check the significance of the evaluation scores of PKRank, the Wilcoxon signed-rank test was performed between each proposed method (regular lines) and the method using the tensor product (the italicized first line). Scores marked with an asterisk (*) indicate combinations of feature vectors and kernels for which the difference was significant at \(P < 0.05\).

For the three datasets, PKRank outperformed the method using the tensor product in all three evaluations. The best combination of feature vectors and kernels differed across datasets and evaluation criteria, but the combination of the GD compound feature, an RBF compound kernel, the CTD protein feature, and an RBF protein kernel (Line 3 in Tables 2, 3 and 4) worked consistently well.

Table 2 Experimental results of PDE
Table 3 Experimental results of CTS
Table 4 Experimental results of ADOR

5 Discussion and conclusion

The experimental results showed that the proposed PKRank is superior to the method using the tensor product in terms of the NDCG evaluations.

Using a linear kernel for the compound kernel or the protein kernel is not advisable, whereas the other kernels worked well. This is because non-linear kernels can represent more complicated ranking models, and PKRank's extensibility in terms of kernel selection allows these non-linear kernels to be used.

Since the evaluation scores were significant in most cases, it can be said that the settings of Line 3 in Tables 2, 3 and 4 worked well. There was only one case, the ADOR dataset under the NDCG1@10 evaluation, in which the score was not significant, but even then it was better than that of the method using the tensor product. One way to determine the best combination of features and kernels is cross-validation over the choices of features and kernels.

The nSW protein kernel worked well on the PDE dataset, but this tendency was not replicated on the CTS and ADOR datasets. The training set of the PDE dataset contained many proteins related to PDE5 (the test data), which might explain this result; further study is needed to determine when the nSW works well.

The purpose of this study was to obtain, with PKRank, a more accurate prediction model than the previous method that uses the tensor product. This study showed that PKRank outperforms the tensor product-based method owing to its several advantages. We believe that methods that can cope with multiple heterogeneous experimental data, as PKRank can, are important for drug discovery research. Moreover, classifying objects that are sampled jointly from two or more domains has many applications, such as in bioinformatics [23, 24], social network analysis [25], and the world wide web [26]. The tensor product feature space is useful for modeling interactions between feature sets from different domains, and PKRank can be applied to such problems to yield better performance.