Background

Understanding combinatorial or cooperative transcriptional regulation by two or more transcription factors (TFs) has become an important research topic in the past decade. Researchers have studied and modelled various types of TF-TF interactions which contribute to positive or negative synergy in gene regulation [1–3]. Owing to the availability of various kinds of genome-wide datasets (e.g. gene expression data, ChIP-chip data, TF binding site motifs, protein-protein interaction data and TF knockout data), researchers have continued to develop advanced algorithms to predict cooperative TF pairs. Some algorithms utilized only ChIP-chip data [3–6] or gene expression data [7], while others integrated multiple data sources [8–17].

Since different algorithms integrate different data sources, use different rationales and predict distinct lists of cooperative TF pairs, it is hard to tell which one is the best. Typically, researchers compared their algorithm with only a few existing algorithms using a few performance indices (see Table 1) and claimed their algorithm to be the best one. However, this kind of comparison is incomplete and subjective [18]. A comprehensive and objective performance comparison framework is urgently needed.

Table 1 The numbers of the compared algorithms, the performance indices, and the predicted cooperative TF pairs (PCTFPs) for each of the 15 existing algorithms.

To meet this need, in our previous study [19] we proposed or adopted eight performance indices to compare the performance of 14 existing algorithms. Our results showed that the performance of an algorithm varies widely across different performance indices, implying that researchers may draw a biased conclusion based on only a few of them. Therefore, to enable a comprehensive and objective performance comparison, we designed two overall performance scores to summarize the comparison results of the eight performance indices.

Most importantly, our performance comparison framework can be applied to comprehensively and objectively evaluate the performance of a newly developed algorithm. Researchers developing a new algorithm would therefore like to use our performance comparison framework to quickly evaluate its prediction performance and improve it when needed. To use our framework, however, researchers first have to put in considerable effort to construct it. Constructing our framework involves collecting and processing multiple genome-wide datasets from the public domain, collecting the lists of predicted cooperative TF pairs from 15 existing algorithms in the literature, and writing a large amount of code to implement the eight performance indices. To save researchers this time and effort, here we develop a web tool called PCTFPeval (Predicted Cooperative TF Pair evaluator) that implements our performance comparison framework, featuring fast data processing, a comprehensive performance comparison and an easy-to-use web interface. Constructing PCTFPeval was not a daunting task for us since we already have extensive experience in developing databases and web tools [20–26].

Implementation

Fifteen existing algorithms used for performance comparison

Our tool provides 15 existing algorithms for users to conduct a performance comparison. As far as we know, this is the most comprehensive collection of the existing algorithms whose lists of the predicted cooperative TF pairs in yeast are available. The numbers of the predicted cooperative TF pairs from different algorithms vary widely, ranging from 13 to 300 (see Table 1).

Eight existing performance indices used for performance evaluation

Our tool implements eight existing performance indices for users to evaluate the performance of an algorithm for predicting cooperative TF pairs in yeast. As far as we know, this is the most comprehensive collection of the existing performance indices. These eight performance indices can be divided into two types: TF-based indices and target gene based (TG-based) indices. Each type has four indices and different indices utilize different data sources and rationales (see Table 2).

Table 2 The eight performance indices implemented in our tool

Two existing overall performance scores used for representing the comprehensive performance comparison results

Our tool implements two existing overall performance scores [19] to summarize the comparison results of the selected performance indices. The first one is called the comprehensive ranking score defined as the sum of the rankings in the selected performance indices [19]. The ranking of an algorithm in an index is k if its performance ranks #k among all the compared algorithms in that index. For example, the ranking of the best performing algorithm is 1. Therefore, the smaller the comprehensive ranking score, the better the overall performance of an algorithm.
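As a minimal sketch of the comprehensive ranking score described above (the function name and input layout are illustrative, not the tool's actual implementation), the score can be computed by ranking the algorithms within each selected index and summing each algorithm's per-index rankings, assuming a higher original score means better performance in every index:

```python
# Sketch of the comprehensive ranking score (smaller total = better overall).
# Assumption: in each index, a higher original score means better performance.

def comprehensive_ranking_score(scores_by_index):
    """scores_by_index: list of dicts {algorithm_name: original_score},
    one dict per selected performance index.
    Returns {algorithm_name: sum of per-index rankings}."""
    totals = {}
    for index_scores in scores_by_index:
        # Rank #1 goes to the best (highest) original score in this index.
        ordered = sorted(index_scores, key=index_scores.get, reverse=True)
        for rank, algo in enumerate(ordered, start=1):
            totals[algo] = totals.get(algo, 0) + rank
    return totals

# Toy example with two indices and three hypothetical algorithms:
scores = [
    {"A": 0.9, "B": 0.5, "C": 0.7},  # original scores in index 1
    {"A": 0.8, "B": 0.4, "C": 0.6},  # original scores in index 2
]
totals = comprehensive_ranking_score(scores)
# Algorithm "A" ranks #1 in both indices, so its score is 1 + 1 = 2,
# the smallest (i.e. best) comprehensive ranking score here.
```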

The second overall performance score is called the comprehensive normalized score (CNS) defined as the sum of the normalized scores in the selected performance indices [19]. The CNS of the algorithm i is calculated as follows:

$$\mathrm{CNS}(i) = \sum_{j=1}^{L} NS_j(i) = \sum_{j=1}^{L} \frac{OS_j(i)}{\max\left\{OS_j(1), OS_j(2), \ldots, OS_j(n)\right\}}$$

where $NS_j(i)$ and $OS_j(i)$ are, respectively, the normalized score and the original score of algorithm $i$ calculated using index $j$; $n$ is the number of algorithms being compared; and $L$ is the number of selected indices. Note that $0 \le NS_j(i) \le 1$, and $NS_j(i) = 1$ if and only if algorithm $i$ is the best performing algorithm in index $j$ (i.e. it has the highest original score calculated using index $j$). The larger the CNS, the better the performance of an algorithm.
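The CNS formula above can be sketched as follows (again with illustrative names and toy inputs, assuming all original scores are positive so the per-index maximum is a valid normalizer):

```python
# Sketch of the comprehensive normalized score (larger CNS = better overall).
# Assumption: original scores are positive, so dividing by the per-index
# maximum yields a normalized score NS_j(i) in (0, 1].

def comprehensive_normalized_score(scores_by_index):
    """scores_by_index: list of dicts {algorithm_name: original_score},
    one dict per selected index j. Returns {algorithm_name: CNS}."""
    cns = {}
    for index_scores in scores_by_index:
        best = max(index_scores.values())  # max over OS_j(1), ..., OS_j(n)
        for algo, os_ji in index_scores.items():
            cns[algo] = cns.get(algo, 0.0) + os_ji / best  # add NS_j(i)
    return cns

# Same toy inputs as before: algorithm "A" has the highest original score
# in both indices, so NS_j(A) = 1 for both and its CNS is the maximum, 2.0.
scores = [
    {"A": 0.9, "B": 0.5, "C": 0.7},
    {"A": 0.8, "B": 0.4, "C": 0.6},
]
cns = comprehensive_normalized_score(scores)
```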

Results and discussion

Usage

The conceptual flowchart of our tool is shown in Figure 1. The friendly web interface allows users to input a list of the predicted cooperative TF pairs from their algorithm. Then three settings of our tool have to be specified. First, users choose the compared algorithms among the 15 existing algorithms. Second, users choose the performance indices among the eight existing indices. Finally, users choose the overall performance scores from the comprehensive ranking score and the comprehensive normalized score. After submission, our tool conducts a comprehensive performance comparison of the user's algorithm against the compared algorithms using the selected performance indices. The comprehensive performance comparison results are generated within tens of seconds and shown as both bar charts and tables.

Figure 1

The conceptual flowchart of our tool. The flowchart shows the procedure of using our tool to conduct a comprehensive performance comparison of the user's algorithm to many existing algorithms using various performance indices.

Case study

In our tool, a list of 40 TF pairs is provided as sample data. For demonstration purposes, we regard the sample data as the list of predicted cooperative TF pairs from a new algorithm and conduct a comprehensive performance comparison of this new algorithm against the various existing algorithms using our tool. As shown in Figure 2, users input the sample data to our tool and select (i) 10 existing algorithms for comparison, (ii) eight performance indices for evaluation, and (iii) the comprehensive ranking score as the overall performance score. After submission, the comprehensive comparison results are generated and shown as both bar charts and tables (see Figure 3). It can be seen that the new algorithm performs well in the first five performance indices but worse in the last three. The overall performance of the new algorithm ranks third among all 11 algorithms being compared. With the comprehensive comparison results from our tool, researchers immediately know that there is still room to improve the performance of their new algorithm.

Figure 2

The input and three settings of our tool. To use our tool, users have to (a) input a list of the predicted cooperative TF pairs (PCTFPs) from their algorithm and select (b) the compared algorithms among the 15 existing algorithms, (c) the performance indices among the eight existing indices, and (d) the overall performance scores from the comprehensive ranking score and the comprehensive normalized score.

Figure 3

The output of our tool. Here we input the sample data (a list of 40 TF pairs) as a list of the predicted cooperative TF pairs (PCTFPs) from a user's algorithm and select 10 existing algorithms, eight performance indices, and the comprehensive ranking score as the overall performance score. (a) The comprehensive performance comparison results are shown as a bar chart and a table. It can be seen that the overall performance of the user's algorithm ranks third among all 11 algorithms being compared. (b) When clicking the hyperlink of "Index5", users will get the performance comparison results (shown as both a bar chart and a table) using only index 5. It can be seen that the user's algorithm is the best performing algorithm in index 5. (c) When clicking the hyperlink of "Details of the score of Index5 for each compared algorithm", users will get a text file containing the original scores (calculated using index 5) of all PCTFPs of each algorithm being compared.

Conclusions

Knowing the cooperative TFs is crucial for understanding the combinatorial regulation of gene expression in eukaryotic cells. This is why the computational identification of cooperative TF pairs has become a hot research topic, and researchers will keep developing new algorithms. Using our tool, researchers can quickly conduct a comprehensive and objective performance comparison of their new algorithm against the various existing algorithms. If the performance of their new algorithm is not satisfactory, researchers can modify their algorithm and use our tool again to see whether the performance improves. With our tool in hand, researchers can thus focus entirely on designing new algorithms without worrying about how to comprehensively and objectively evaluate their performance. In conclusion, our tool can greatly expedite progress in this research topic.

Availability and requirements

Project name: PCTFPeval

Project home page: http://cosbi.ee.ncku.edu.tw/PCTFPeval/

Operating system(s): platform independent.

Programming language: PHP, Python and Javascript.

Other requirements: Internet connection.

License: none required.

Any restrictions to use by non-academics: no restriction.