Background

Cancer therapy is a difficult research area because of the complexity of the disease. The mere superposition of single drugs has been found to generate side effects and crosstalk between drugs that can cancel out the benefit of a treatment. Current research therefore evaluates drug treatments as a whole rather than considering each component individually [1, 2]. A synergistic concept was later proposed for evaluating drug treatments [3], but such evaluations remained case-based and lacked a systematic approach. In [4], a network methodology was first used to evaluate the efficiency of drug treatments. Building on this idea, Li et al. introduced a parameter, the SS (Synergy Score), to capture the topology of the network formed by the disease and the drug agent combination [5].

Our approach first builds a more precise target network from the biomarkers selected by AMFES [6]. We then identify the intrinsic properties of this network by computing the mutual information of the interactions among these biomarkers. In this way we aim to improve Li's results by incorporating mutual information into the target network, and we provide a general framework for synergistic therapy that can accommodate several different approaches.

Methods

AMFES

The COD (Curse of Dimensionality) has been a major challenge in microarray data analysis because of the large number of genes (features) and the relatively small number of samples (patterns). To tackle this problem, many gene selection methodologies have been developed to select only significant subsets of genes from a microarray dataset. AMFES selects an optimal subset of genes by training SVMs on subsets of genes that are generated adaptively [6].

When AMFES is run on a dataset, all samples are randomly divided into a training subset S and a testing subset T at a heuristic ratio of 5:1. S is used to rank and select genes and to construct a classifier from the selected genes; T is used to compute the test accuracy. Given a training subset S, we extract r training-validation pairs from S according to the heuristic rule r = max(5, (int)(500/n + 0.5)), where n is the number of samples in S. Each pair randomly divides S into a training component and a validation component at a ratio of 4:1. The heuristic ratio and rule were chosen from experimental experience as a balance between computation time and performance. AMFES has two fundamental processes, ranking and selection; we first explain each process in detail and then describe the integrated version. A minimal sketch of the splitting scheme is given below.
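The sketch below illustrates the 5:1 / 4:1 splitting scheme and the heuristic rule for r, assuming numpy arrays X (samples × genes) and y (class labels); the function name and the use of scikit-learn's train_test_split are illustrative and not part of the original AMFES implementation.

import numpy as np
from sklearn.model_selection import train_test_split

def split_for_amfes(X, y, seed=0):
    """Split samples 5:1 into a training subset S and a testing subset T,
    then derive r training-validation pairs from S at a 4:1 ratio."""
    rng = np.random.RandomState(seed)
    # 5:1 split between S (training) and T (testing)
    X_S, X_T, y_S, y_T = train_test_split(X, y, test_size=1/6, random_state=rng)

    # heuristic rule: r = max(5, int(500/n + 0.5)), n = number of samples in S
    n = X_S.shape[0]
    r = max(5, int(500 / n + 0.5))

    pairs = []
    for _ in range(r):
        # each pair splits S 4:1 into a training and a validation component
        X_tr, X_val, y_tr, y_val = train_test_split(X_S, y_S, test_size=0.2, random_state=rng)
        pairs.append((X_tr, y_tr, X_val, y_val))
    return (X_S, y_S), (X_T, y_T), pairs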

Ranking

The gene ranking process consists of several ranking stages. In the first stage, all genes are ranked by their ranking scores in descending order. In the next stage, only the top half of the ranked genes are ranked again, while the bottom half keeps its current order. The same procedure is repeated recursively until only three genes remain to be re-ranked, which completes one ranking process. Assume that a given ranking stage contains k genes indexed from 1 to k. To rank these k genes, we follow the four steps below. (I) We first generate m independent subsets S_1, ..., S_m. Each subset S_i, i = 1, 2, ..., m, contains j genes selected randomly and independently from the k genes, where j = (int)(k/2). (II) Let C_i be the SVM classifier trained on the i-th subset of genes, i = 1, 2, ..., m. For each of the k genes g, we compute its ranking score θ_m(g) as in equation (1). (III) The score is the average weight of gene g, i.e., the sum of the weights of g over the subsets in which g was randomly selected, divided by the number of such subsets; this averaging makes the score a more robust estimate of the true classifying ability of gene g. (IV) Rank the k genes in descending order of their ranking scores.

\theta_m(g) = \frac{\sum_{i=1}^{m} I(g \in S_i)\,\mathrm{weight}_i(g)}{\sum_{i=1}^{m} I(g \in S_i)}
(1)

where I is an indicator function such that I(proposition) = 1 if the proposition is true and I(proposition) = 0 otherwise. In other words, if gene g is randomly selected for the subset S_i, denoted g ∈ S_i, then I(g ∈ S_i) = 1.

We denote the objective function of C_i as obj_i(v_1, v_2, ..., v_s), where v_1, v_2, ..., v_s are the support vectors of C_i. The weight_i(g) is then defined as the change in the objective function caused by dropping gene g, i.e.,

\mathrm{weight}_i(g) = \mathrm{obj}_i(v_1, v_2, \ldots, v_s) - \mathrm{obj}_i(v_1(g), v_2(g), \ldots, v_s(g))
(2)

[6–8]. Note that if v is a vector, v(g) is the vector obtained by dropping gene g from v. Let θ_m be the vector of ranking scores derived from the m gene subsets generated thus far, and let θ_{m-1} be the corresponding vector at the previous stage. The value of m is determined by adding one subset at a time until θ_m satisfies equation (3).

\frac{\lVert \theta_{m-1} - \theta_m \rVert^2}{\lVert \theta_{m-1} \rVert^2} < 0.01
(3)

where ||θ|| denotes the Euclidean norm of vector θ. The pseudocode of the ranking process is shown below, followed by a minimal Python sketch of one ranking stage.

Pseudocode for the ranking process of AMFES

RANK-SUBROUTINE
INPUT: a subset of k genes to be ranked
  Generate k artificial genes and append them to the original genes
  Pick an initial tentative value of m
  DO WHILE m does not satisfy equation (3)
    FOR each subset S_i of the m subsets
      Randomly select j of the k genes to form the subset S_i
      Train an SVM to obtain weight_i(g) for each gene in the subset
    ENDFOR
    FOR each of the k genes
      Compute the average score of the gene over the m subsets
    ENDFOR
    List the k genes in descending order of their ranking scores
  ENDDO
OUTPUT: the k genes, ranked
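The sketch below illustrates one ranking stage along the lines of equations (1)–(3). It assumes binary labels and approximates weight_i(g) by the squared weight of a linear SVM (in the spirit of SVM-RFE) rather than the exact change in the objective function, so it should be read as an illustration rather than the authors' implementation.

import numpy as np
from sklearn.svm import SVC

def rank_stage(X, y, gene_idx, rng, tol=0.01, max_m=200):
    """One ranking stage: score the k genes in gene_idx with equation (1),
    drawing random half-size gene subsets until the score vector stabilizes
    according to equation (3). Binary labels are assumed."""
    gene_idx = np.asarray(gene_idx)
    k = len(gene_idx)
    j = max(1, k // 2)
    weight_sum = np.zeros(k)
    hit_count = np.zeros(k)
    theta_prev = None
    for m in range(1, max_m + 1):
        pick = rng.choice(k, size=j, replace=False)          # genes of subset S_m
        clf = SVC(kernel="linear").fit(X[:, gene_idx[pick]], y)
        weight_sum[pick] += clf.coef_[0] ** 2                # accumulate weight_m(g) (SVM-RFE style)
        hit_count[pick] += 1
        theta = np.where(hit_count > 0, weight_sum / np.maximum(hit_count, 1), 0.0)
        if theta_prev is not None and np.linalg.norm(theta_prev) > 0:
            rel = np.linalg.norm(theta_prev - theta) ** 2 / np.linalg.norm(theta_prev) ** 2
            if rel < tol:                                    # convergence test, equation (3)
                break
        theta_prev = theta.copy()
    order = np.argsort(-theta)                               # descending ranking scores
    return gene_idx[order], theta[order]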

Selection

Ranking artificial features together with the original features has been shown to be a useful way to distinguish relevant features from irrelevant ones [9–11]. In our selection process, we use this technique to find the optimal subset of genes.

Assume a set of genes is given. We generate artificial genes and rank them together with the original ones. After ranking, we assign each original gene a gene-index, defined as the proportion of artificial genes ranked above it; the gene-index is thus a real value between 0 and 1. We then generate a number of candidate subsets from which the optimal subset is chosen. Let p_1, p_2, ... be the sequence of subset-indices of the candidates, with p_1 < p_2 < ..., where p_i = i × 0.005 and i = 1, 2, ..., 200. Let B(p_i) denote the candidate subset with subset-index p_i; it contains the original genes whose gene-indices are smaller than or equal to p_i. We then train an SVM on every B(p_i) and compute its validation accuracy v(p_i).

We stop at the first p_k for which v(p_k) ≥ v_baseline and v(p_k) ≥ v(p_l) for all k ≤ l ≤ k + 10, where v_baseline is the validation accuracy of the SVM trained on the baseline, i.e., with all features included. The resulting subset B(p_k) is the optimal subset for the given set of genes. The pseudocode for the selection process of AMFES is listed below, followed by a small sketch of the stopping rule.

Pseudocode for the selection process of AMFES

SELECTION-SUBROUTINE
INPUT: the candidate subsets with their validation accuracies v(p_i)
  Compute the validation accuracy obtained with all genes, v_baseline
  FOR each given subset
    IF v(p_k) ≥ v_baseline and v(p_k) ≥ v(p_l) for k ≤ l ≤ k+10 THEN
      The resulting subset is B(p_k)
    ENDIF
  ENDFOR
OUTPUT: B(p_k)
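The stopping rule itself is easy to express in code. The helper below is an illustrative sketch: v is assumed to be the list of validation accuracies v(p_i) over the 200 candidate subsets, and v_baseline the all-gene accuracy.

def select_optimal_index(v, v_baseline, window=10):
    """Return the first index k with v[k] >= v_baseline and
    v[k] >= v[l] for all k <= l <= k + window."""
    for k in range(len(v)):
        upper = min(len(v), k + window + 1)
        if v[k] >= v_baseline and all(v[k] >= v[l] for l in range(k, upper)):
            return k            # B(p_k) is the selected subset
    return len(v) - 1           # fall back to the largest candidate subset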

Integrated version

The ranking and selection processes described in the previous sections apply to one training-validation pair. To increase the reliability of validation, we generate r pairs and use them jointly to find the optimal subset. For the q-th pair we compute the validation accuracies v(p_{q,i}) of all candidate subsets, where q is the pair-index and i is the subset-index. We then compute av(p_i), the average of v(p_{q,i}) over the r training-validation pairs, and perform the subset search described in the Selection section on av(p_i) to find the optimal p_i, denoted p*. However, p* does not correspond to a unique subset, since each pair has its own B(p*) and these may all differ. We therefore use all samples of S as training samples to obtain a unique subset: we generate artificial genes, rank them together with the original genes, and finally select the original genes whose gene-indices are smaller than or equal to p* as the genes selected for S. The integrated version of the process is shown below. In the pseudocode, AMFES-ALGORITHM represents the integrated version of the whole process, RANK-SUBROUTINE the ranking process and SELECTION-SUBROUTINE the selection process; a small sketch of the averaging step follows the pseudocode.

Pseudocode for the integrated version of AMFES

AMFES-ALGORITHM (integrated version)
INPUT: a dataset
  Divide the dataset into training samples and test samples
  Divide the training samples into r training-validation component pairs
  FOR each of the r training-validation pairs
    Generate the 200 candidate subsets B(p_{q,i})
    FOR each of the 200 candidate subsets
      CALL RANK-SUBROUTINE to rank the genes
      Assign each original gene a gene-index
      Train an SVM on the subset and compute its validation accuracy v(p_{q,i})
    ENDFOR
  ENDFOR
  FOR each of the 200 candidate subsets
    Compute the average validation accuracy av(p_i) of the subset over the r pairs
  ENDFOR
  CALL SELECTION-SUBROUTINE to search for the optimal subset by its average validation accuracy and denote its index p*
  CALL RANK-SUBROUTINE to rank the original genes again and select the original genes that belong to the subset B(p*)
OUTPUT: the optimal subset of genes B(p*)
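A short sketch of the averaging step is given below, reusing the select_optimal_index helper from the selection sketch; v_matrix is assumed to hold v(p_{q,i}) for the r pairs and 200 candidate subsets, and v_baselines the per-pair baseline accuracies.

import numpy as np

def average_and_select(v_matrix, v_baselines, window=10):
    """v_matrix[q, i] = validation accuracy of candidate subset p_i on pair q.
    Average over the r pairs to get av(p_i) and apply the selection rule to find p*."""
    av = v_matrix.mean(axis=0)                 # av(p_i) over the r training-validation pairs
    av_baseline = float(np.mean(v_baselines))  # baseline accuracy averaged the same way
    k = select_optimal_index(av, av_baseline, window)  # stopping rule from the selection sketch
    p_star = (k + 1) * 0.005                   # p_i = i * 0.005, i = 1..200
    return p_star, av[k]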

Mutual information

Mutual information measures the dependency between two random variables based on their probability distributions. For two random variables X and Y, the mutual information I(X; Y) can be expressed by the following equivalent equations [12]:

I(X;Y) = H(X) - H(X \mid Y)
(4)
= H(Y) - H(Y \mid X)
(5)
= H(X) + H(Y) - H(X,Y)
(6)

where H(X) and H(Y) denote the marginal entropies, H(X|Y) and H(Y|X) the conditional entropies, and H(X,Y) the joint entropy of X and Y. To compute these entropies, the probability distribution functions of the random variables must be estimated first. Because gene expression values are continuous, we used kernel estimation to calculate the probability distributions [13].
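As a quick illustration of identity (6) in the discrete case, the toy numerical example below (the numbers are arbitrary and not taken from the datasets in this paper) computes I(X;Y) from the marginal and joint entropies.

import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a probability vector, ignoring zero entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# a small joint distribution of two discrete variables X and Y
p_xy = np.array([[0.25, 0.25],
                 [0.40, 0.10]])
p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)

H_x, H_y, H_xy = entropy(p_x), entropy(p_y), entropy(p_xy.ravel())
mi = H_x + H_y - H_xy          # equation (6): I(X;Y) = H(X) + H(Y) - H(X,Y)
print(round(mi, 4))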

Assume the two random variables X and Y are continuous. Their mutual information is defined as [12]:

I(X,Y) = \iint f(x,y)\,\log\frac{f(x,y)}{f(x)\,f(y)}\,dx\,dy
(7)

where f(x,y) denotes the joint probability density and f(x) and f(y) denote the marginal probability densities of X and Y. Using Gaussian kernel estimation, f(x,y), f(x) and f(y) can be represented as the equations below [14]:

f(x,y) = \frac{1}{M}\sum_{u}\frac{1}{2\pi h^2}\,e^{-\frac{1}{2h^2}\left[(x - x_u)^2 + (y - y_u)^2\right]}
(8)
f(x) = \frac{1}{M}\sum_{u}\frac{1}{\sqrt{2\pi h^2}}\,e^{-\frac{1}{2h^2}(x - x_u)^2},
(9)

where M is the number of samples of X and Y, u = 1, 2, ..., M indexes the samples, and h is a parameter controlling the width of the kernels. The mutual information I(X,Y) can then be represented as:

I(X,Y) = \frac{1}{M}\sum_{w}\log\frac{M\sum_{u} e^{-\frac{1}{2h^2}\left[(x_w - x_u)^2 + (y_w - y_u)^2\right]}}{\sum_{u} e^{-\frac{1}{2h^2}(x_w - x_u)^2}\,\sum_{u} e^{-\frac{1}{2h^2}(y_w - y_u)^2}}
(10)

where w and u both index the samples, w, u = 1, 2, ..., M.
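Equation (10) translates directly into code for a single gene pair. The sketch below assumes 1-D expression vectors x and y of equal length and an illustrative kernel width h; it is not the authors' implementation, and expression vectors would typically be standardized first.

import numpy as np

def kernel_mi(x, y, h=0.3):
    """Gaussian-kernel estimate of I(X;Y) for one gene pair (equation 10).
    x, y: 1-D expression vectors of the same length M; h: kernel width."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    M = x.size
    dx2 = (x[:, None] - x[None, :]) ** 2          # (x_w - x_u)^2 for all sample pairs
    dy2 = (y[:, None] - y[None, :]) ** 2
    kx = np.exp(-dx2 / (2 * h ** 2))              # marginal kernel terms for X
    ky = np.exp(-dy2 / (2 * h ** 2))              # marginal kernel terms for Y
    joint = (kx * ky).sum(axis=1)                 # sum over u of the joint kernel terms
    ratio = M * joint / (kx.sum(axis=1) * ky.sum(axis=1))
    return np.log(ratio).mean()                   # average over w = 1..M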

Computing the mutual information of all gene pairs of a microarray dataset normally involves nested loops and takes a considerable amount of time. Assume a dataset has N genes, each with M samples. The computation first finds the kernel distance between any two samples for a given gene and then repeats this for every pair of genes in the dataset. To make the computation efficient, two improvements are applied [13]. The first is to calculate the marginal kernel terms of each gene in advance and reuse them throughout the process [13, 15]. The second is to move the summation over sample pairs for a given gene to the outermost loop rather than keeping it inside a nested loop over every gene pair. As a result, the kernel distance between two samples is computed only twice instead of N times, which saves a large amount of computation time. This kind of LNO (Loop Nest Optimization), which changes the order of nested loops, is a common time-saving technique in computer science [16].
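A sketch of these improvements is given below: the per-gene kernel matrices are computed once and reused for every gene pair, so each sample-pair kernel distance is evaluated only once per gene. Holding all kernels in memory (N × M × M values) is acceptable for the biomarker-sized matrices used here (on the order of 100 genes); a full-microarray run would instead compute the kernels per gene pair or in chunks.

import numpy as np

def pairwise_kernel_mi(E, h=0.3):
    """Pairwise MI for all N genes of an N x M expression matrix E.
    Per-gene kernel matrices are precomputed and reused for every pair."""
    N, M = E.shape
    # kernel distance between every two samples, computed once per gene
    K = np.exp(-((E[:, :, None] - E[:, None, :]) ** 2) / (2 * h ** 2))  # shape (N, M, M)
    Ksum = K.sum(axis=2)                                                # marginal sums per gene
    mi = np.zeros((N, N))
    for a in range(N):
        for b in range(a, N):
            joint = (K[a] * K[b]).sum(axis=1)
            mi[a, b] = mi[b, a] = np.log(M * joint / (Ksum[a] * Ksum[b])).mean()
    return mi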

Target network

The effect of drugs with multiple components should be viewed as a whole rather than as a superposition of individual components [1, 2]. Thus a synergy concept has been formed and is considered an efficient way to design a drug [3]. In [17], mathematical models are used to measure the effect generated by multiple components; however, they do not consider practical issues such as crosstalk between pathways. Network approaches have therefore been used to analyze the interactions among multiple components [4]. Initiated by the work in [4], another systems-biology methodology, NIMS (Network-target-based Identification of Multicomponent Synergy), was proposed to measure the effect of drug agent pairs from their gene expression data [5]. NIMS ranks the drug agent pairs of Chinese medicine components by their SS.

In [5], a drug component is denoted a drug agent, and the set of genes associated with it are denoted the agent genes of that drug agent. For a given disease, assume there are N drug agents. NIMS first chooses two drug agents, A1 and A2, from the N agents and builds a background target network from their agent genes as a graph. From this graph, NIMS computes the TS (Topology Score) by applying PCA (Principal Component Analysis) to betweenness, closeness and a variant of the eigenvalue-based PageRank [18] to form an IP value for each gene. The TS evaluates the topological significance of the target network for the drug agent pair A1 and A2 and is defined as

TS_{1,2} = \frac{1}{2}\left[\frac{\sum_{i} IP_1(i)\,e^{-\min(d_{i,j})}}{\sum_{i} IP_1(i)} + \frac{\sum_{j} IP_2(j)\,e^{-\min(d_{j,i})}}{\sum_{j} IP_2(j)}\right],
(11)

where IP_1 and IP_2 denote the IP values of the agent genes of A1 and A2, respectively. min(d_{i,j}) denotes the minimum shortest-path distance from gene i of A1 to all genes of A2, and min(d_{j,i}) denotes the minimum shortest-path distance from gene j of A2 to all genes of A1.
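Given precomputed IP values and the shortest-path distance matrix between the two agents' gene sets, equation (11) can be sketched as follows; obtaining the IP values (PCA over betweenness, closeness and the PageRank variant) is outside this snippet, and the helper name is ours.

import numpy as np

def topology_score(ip1, ip2, d12):
    """Topology Score TS(1,2) of equation (11).
    ip1, ip2: IP values of the agent genes of A1 and A2;
    d12[i, j]: shortest-path distance from gene i of A1 to gene j of A2
    in the background target network (np.inf if unreachable)."""
    ip1, ip2 = np.asarray(ip1, float), np.asarray(ip2, float)
    d12 = np.asarray(d12, float)
    # exp(-min d) weights each gene of one agent by its proximity to the other agent
    term1 = np.sum(ip1 * np.exp(-d12.min(axis=1))) / ip1.sum()
    term2 = np.sum(ip2 * np.exp(-d12.min(axis=0))) / ip2.sum()
    return 0.5 * (term1 + term2)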

NIMS defines another term, the AS (Agent Score), to evaluate the phenotype similarity of a drug agent pair. For a given drug agent, if one of its agent genes has a phenotype record in the OMIM (Online Mendelian Inheritance in Man) database, that phenotype is assigned to the drug agent. The similarity score of a phenotype pair is defined as the cosine of the angle between the pair's feature vectors [19]. The AS is defined as:

AS_{1,2} = \frac{\sum_{i,j} P_{i,j}}{M},
(12)

where P_{i,j} denotes the similarity score between the i-th phenotype of A1 and the j-th phenotype of A2, and M denotes the total number of phenotypes.

The SS of the pair is then defined as the product of TS and AS. NIMS calculates the SS for all possible drug agent pairs for a disease and identifies potential drug agent pairs by ranking them by SS.
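A sketch of the AS and SS computation follows. It assumes each phenotype is already encoded as a feature vector and, as an assumption on our part, normalizes by the number of phenotype pairs compared rather than by the total number of phenotypes stated above.

import numpy as np

def agent_score(phen1, phen2):
    """Agent Score AS(1,2) of equation (12), illustrative sketch.
    phen1, phen2: lists of phenotype feature vectors for drug agents A1 and A2;
    P_ij is the cosine similarity of a vector pair. Normalizing by the number of
    pairs (rather than the total number of phenotypes) is an assumption."""
    total, count = 0.0, 0
    for u in phen1:
        for v in phen2:
            ua, va = np.asarray(u, float), np.asarray(v, float)
            total += ua @ va / (np.linalg.norm(ua) * np.linalg.norm(va))
            count += 1
    return total / count if count else 0.0

def synergy_score(ts, agent_sc):
    """SS of a drug agent pair: the product of its Topology Score and Agent Score."""
    return ts * agent_sc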

Results

Microarray data description

A brief description of the three datasets is given in Table 1, which lists the number of biomarkers, type of biomarkers, number of samples and sample variation used.

Table 1 Descriptions of 3 datasets: GSE18655 (prostate cancer), GSE19536 (breast cancer) and GSE21036 (prostate cancer)

The prostate cancer dataset with RNA biomarkers

To give a better prognosis, pathologists use a cancer stage to measure cell tissues and tumor aggressiveness as an indicator for doctors to choose a suitable treatment. The most widely used cancer staging system is the TNM (Tumor, Node, and Metastasis) system [20]. Depending on the level of differentiation between normal and tumor cells, a histologic grade is assigned: grade 1 indicates almost normal tissue, grade 2 somewhat normal tissue, and grade 3 tissue far from normal. Although most cancers can be adapted to the TNM grading system, some specific cancers require additional grading systems for pathologists to interpret tumors better.

The Gleason Grading System is used specifically for prostate cancers; a GS (Gleason Score) is assigned based on the cellular contents and tissues of cancer biopsies from patients. The higher the GS, the worse the prognosis. The prostate cancer dataset GSE18655 includes 139 patients with 502 molecular markers (RNAs) [21]. In [21], it was shown that prostate tumors with the gene fusion TMPRSS2:ERG T1/E4 have a higher risk of recurrence than tumors without the gene fusion. The 139 samples were fresh-frozen prostate tumor tissues from patients after radical prostatectomy surgery. All samples were taken from the patients' prostates at the time of prostatectomy, and liquid nitrogen was used to freeze middle sections of the prostates at extremely low temperature. Among these patients, 38 samples have GS 5–6 corresponding to histologic grade 1, 90 samples have GS 7 corresponding to histologic grade 2, and 11 samples have GS 8–9 corresponding to histologic grade 3. The platform used for the dataset is GPL5858, the DASL (cDNA-mediated Annealing, Selection, Extension and Ligation) Human Cancer Panel by Gene, manufactured by Illumina. The FDR (false discovery rate) of all RNA expressions in the microarray is less than 5%.

Breast cancer dataset with Non-coding miRNA biomarkers

miRNAs are strongly correlated with cellular processes such as proliferation, and this dataset has been used as a breast cancer dataset [22]. It has 799 miRNAs and 101 patient samples. Differential expression of miRNAs indicated different levels of proliferation corresponding to the intrinsic breast cancer subtypes: luminal A, luminal B, basal-like, normal-like, and ERBB2. Of the 101 samples, 41 are luminal A, 15 basal-like, 10 normal-like, 12 luminal B and 17 ERBB2; 2 samples have TP53-mutant status, another sample has TP53 wild-type status, and 3 samples are not classified. GSE19536 is represented on two platforms: GPL8227, an Agilent-019118 Human miRNA Microarray 2.0 G4470B (miRNA ID version), and GPL6480, an Agilent-014850 Whole Human Genome Microarray 4x44K G4112F (Probe Name). For this paper, we used only the expressions from GPL8227.

Prostate cancer dataset of cancerous and normal samples with miRNA biomarkers

CNAs (Copy Number Alterations) of some genes may be associated with the growth of prostate cancers [23]. In addition, changes in fusion-gene mutations, mRNA expression and pathways were discovered in the majority of primary prostate samples. The analysis was applied to four platforms and consists of three subseries, GSE21034, GSE21035 and GSE21036 [23]. For this paper, we use only GSE21036. This microarray dataset has 142 samples, comprising 114 primary prostate cancer samples and 28 normal-cell samples. The platform is the Agilent-019118 Human miRNA Microarray 2.0 G4470B (miRNA ID version).

Results of AMFES

We applied AMFES to the prostate cancer (GSE18655), breast cancer (GSE19536) and second prostate cancer (GSE21036) datasets. For GSE18655, AMFES selects 96 biomarkers. The classification is performed in two steps: the first step classifies grade 1 samples against higher-grade samples and selects 93 biomarkers, and the second step classifies grade 2 against grade 3 samples and selects 3 biomarkers. We can therefore assume that these 96 biomarkers can discriminate among grade 1, grade 2 and grade 3 samples [6]. For GSE19536, AMFES also performs classification in two steps. In the first step, AMFES classifies luminal against non-luminal samples and selects 47 biomarkers [6]. In the second step, AMFES further classifies luminal samples into luminal A and luminal B, selecting 27 biomarkers, and classifies non-luminal samples into basal-like and normal-like, selecting 25 biomarkers [6]. After removing duplicate biomarkers, AMFES has 72 biomarkers (47 + 27 − 2 duplicates) for classifying luminal samples and 68 biomarkers (47 + 25 − 4 duplicates) for classifying non-luminal ones [6]. For GSE21036, AMFES selects 22 biomarkers for classifying cancerous versus normal samples. Table 2 shows the numbers of selected genes. The complete lists of these biomarkers can be found in Additional file 1 GSE18655_96_Biomarkers.xlsx, Additional file 2 GSE19536_72_Biomarkers.xlsx, Additional file 3 GSE19536_68_Biomarkers.xlsx, and Additional file 4 GSE21036_22_Biomakers.xlsx.

Table 2 Results of selected subsets of genes

We then apply the MI calculation described in the Mutual information section to the 96 biomarkers of GSE18655 and represent the pairwise MI values of grade 1, grade 2 and grade 3 samples as three 96×96 matrices, which can be found in Additional file 5 GSE18655 Grade1 MI.xlsx, Additional file 6 GSE18655 Grade2 MI.xlsx and Additional file 7 GSE18655 Grade3 MI.xlsx. The four MI matrices of the 72 and 68 biomarkers of GSE19536 are given in Additional file 8 GSE19536 Luminal-A MI.xlsx, Additional file 9 GSE19536 Luminal-B MI.xlsx, Additional file 10 GSE19536 Basal-Like MI.xlsx, and Additional file 11 GSE19536 Normal-Like MI.xlsx. The two MI matrices for GSE21036 are in Additional file 12 GSE21036 Cancer MI.xlsx and Additional file 13 GSE21036 Normal MI.xlsx.

We analyze these MI matrices and list the differences between them under different conditions in Table 3. For a given matrix, the first column of Table 3 gives the mean value, the second column the standard deviation, the third column the number of positive entries, the fourth column the number of negative entries, the sixth column the minimum value and the seventh column the maximum value. In the fifth column, we compare MI matrices under two different conditions, such as luminal A vs. luminal B: if the signs of the two entries at the same position in the two matrices differ, we count this as one sign difference, and the fifth column reports the number of sign differences between the compared sample groups. The same comparison is made for basal-like versus normal-like in GSE19536 and for cancerous versus normal in GSE21036. To visualize the differences, we display the histograms of the MI values of grade 1, grade 2 and grade 3 samples in Figure 1; Figure 2 shows the histograms for luminal A versus luminal B, Figure 3 basal-like versus normal-like, and Figure 4 cancerous versus normal samples.
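The sign-difference count of the fifth column reduces to a one-line comparison; in the sketch below, mi_a and mi_b stand for two MI matrices of equal shape (e.g. luminal A and luminal B).

import numpy as np

def sign_differences(mi_a, mi_b):
    """Count positions whose MI values have different signs in the two matrices."""
    return int(np.sum(np.sign(mi_a) != np.sign(mi_b)))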

Table 3 Results of analysis of MI matrices
Figure 1 Comparison of the MI values of the 96 biomarkers among grade 1, grade 2 and grade 3 samples.

Figure 2 Comparison of the MI values of the 72 biomarkers between luminal A and luminal B samples.

Figure 3 Comparison of the MI values of the 68 biomarkers between basal-like and normal-like samples.

Figure 4 Comparison of the MI values of the 22 biomarkers between prostate cancerous and normal samples.

For the fifth column, GSE18655 is skipped because it has three sample grades, which cannot be compared pairwise in the same way. In addition, because each histogram contains a very large number of MI entries, the figures show only the densest section of each histogram.

Results of calculating mutual information

The statistical results of the mutual information calculation are shown in Table 3 at the end of this paper.

Synergistic therapy

Based on the network interpretation of [4, 5], we propose a framework that helps elucidate the underlying interactions between multi-target biomarkers and multi-component drug agents. The framework consists of three parts: selecting biomarkers of a complex disease such as cancer, building target networks of the biomarkers, and forming the interactions between biomarkers and drug agents to provide a personalized, synergistic therapy plan.

From the GEO cancer datasets, we have derived a genetic model of each cancer, called the signature of that particular cancer. The signatures (target networks) of different cancers may differ considerably, corresponding to the different biomarkers in Additional file 1 GSE18655_96_Biomarkers.xlsx, Additional file 2 GSE19536_72_Biomarkers.xlsx, Additional file 3 GSE19536_68_Biomarkers.xlsx, and Additional file 4 GSE21036_22_Biomakers.xlsx. For these different signatures, we expect to discover various synergistic mechanisms, as exemplified in [24].

Assume we would like to provide a synergistic therapy plan for a patient A. By collecting his or her bodily samples, such as saliva or blood, we first obtain the corresponding microarray dataset of patient A and apply it to the genetic model, as shown in Figure 5.

Figure 5 Diagram of the detailed process of building the genetic model.

A complete synergistic therapy should be able to select a small subset of biomarkers and correlate them with drug agents in a multi-target, multi-component network approach, as shown in Figure 6. In Figure 6, a disease is associated with several biomarkers such as RNAs, miRNAs or proteins, denoted R1, R2, R3, R4 and R5, which act as regulators for the operons O1, O2 and O3. An operon is a basic unit of DNA formed by a group of genes controlled by a gene regulator; these operons initiate molecular mechanisms as promoters. The gene regulators enable organisms to regulate other genes by either induction or repression. Each target biomarker may have a list of pharmacons used as enzyme inhibitors. Traditionally, pharmacons refer to biologically active substances and are not limited to drug agents; for example, herbal extractions whose ingredients have a promising anti-AD (Alzheimer's Disease) effect can be used as pharmacons [24]. In Figure 6 the pharmacons, denoted D1, D2 and D3, act on certain target biomarkers: D1 affects biomarker R3, D2 affects biomarker R5 and D3 affects biomarker R1. Compared with the drug agent pair methodology [5], the proposed framework in Figure 6 represents a more accurate interpretation of biomarkers with multi-component drug agents.

Figure 6 Relationships between biomarkers, pharmacons and operons, where R1, R2, R3, R4 and R5 denote five biomarkers. Among these biomarkers, R2, R3 and R5 are regulators.

Discussion

Among the MI values obtained, we see both positive and negative values. Positive values can represent attraction among the biomarkers, while negative values may represent repulsion among them, which matches the concept of Yin-Yang in TCM (Traditional Chinese Medicine). From these results, we observe that the differences in mutual information values between cancer grades are minimal. However, the difference in mean MI value between prostate cancer and normal cells is more obvious: the mean MI value of the prostate cancer cells is approximately twice that of the normal cells. This may motivate further investigation by medical researchers.

Conclusions

We have presented a comprehensive approach to the diagnosis and therapy of complex diseases such as cancer, and proposed a complete procedure for clinical application to cancer patients. While the genetic model provides a standard framework for designing synergistic therapy, the actual plan for an individual patient is personalized and flexible; with careful monitoring, physicians may adaptively change or modify the therapy plan. Further analysis of this framework in clinical settings remains to be performed.