Introduction

In recent years, technological innovations have made it possible to generate massive amounts of data at relatively low cost. Such massive, high-throughput data are commonly called Big Data. There is no universally agreed-upon definition of Big Data, but the more widely accepted explanations tend to describe it in terms of the challenges it presents. In terms of computational efficiency and processing time, Big Data motivates the development of new computational tools and data storage methods [17, 20, 27, 29]. In this regard, [38] identifies three principal challenges related to the dimensions of Big Data: volume, velocity and variety. Other authors have proposed additional dimensions such as veracity, validity or value [15]. Volume, one of the famous Vs that characterize Big Data, is the main challenge of interest to the statistician analyzing high-dimensional datasets. The other Big Data dimensions are of particular interest to computer scientists and data practitioners.

Besides challenges, Big Data offers many opportunities for analysis and information extraction in different fields such as genomics and biology [30], climatology and water research [19], geosciences [44], neurology [18], spam detection and telecoms [6, 13], cyber-security [56], software engineering [14, 40], social media analysis [37, 46], biomedical imaging [53], economics [21, 35], and high-frequency finance and marketing strategies [7]. The goal of using Big Data in these fields is to develop accurate methods to predict the future, gain insight into the relationships between features and responses, explore hidden structures, and extract important common features across sub-populations.

The main problem with Big Data remains how to process it efficiently. Handling this challenge requires new statistical thinking and computational methods. Indeed, many statistical approaches that perform well for low-dimensional data are inadequate for analyzing Big Data. Thus, designing effective statistical procedures for exploration and prediction in this context raises new issues, beyond classical ones such as heterogeneity, noise accumulation, spurious correlations [23], incidental endogeneity [26, 39], and sure independence screening [12, 25, 32, 33]. In terms of statistical accuracy, dimension reduction and variable selection play pivotal roles in analyzing high-dimensional data. For example, in high-dimensional classification, [22, 48] showed that conventional classification rules using all features perform no better than random guessing, owing to noise accumulation. This motivates new regularization methods [9, 10, 24, 54, 55].

The aim of dimension reduction procedures is to summarize the original p-dimensional data space by a lower k-dimensional component subspace \((k \ll p)\). Statistical and mathematical theory provides many approaches to this end. The most commonly applied methods are still principal component analysis (PCA) [2, 34], partial least squares (PLS) [4, 5, 45], linear discriminant analysis (LDA) [8], and sliced inverse regression (SIR) [3]. The Rasch model (RM) is another recent and efficient approach to feature extraction, which provides an appealing framework for handling high-dimensional datasets [36].

For all of these reasons, and given the growing importance of alternative statistical approaches, we propose a new approach to reduce the dimension of a dataset, especially for classification purposes. The approach addresses the case where the number of variables p largely exceeds the sample size n \((p \gg n)\), which is common in the Big Data context. To handle high-dimensional datasets in the prediction framework, we proceed in five steps. The first three steps reduce the number of variables using correlation arguments. The fourth and fifth steps eliminate redundant or irrelevant variables, using adapted techniques of discriminant analysis. The performance of our approach is evaluated by measuring its class prediction accuracy and processing time.

Before giving a detailed description of our approach, it is worth reviewing the state of the art in feature extraction and selection methodologies, especially for Big Data. The following section therefore reviews published studies to identify key trends in the types of methods used.

Background and statistical review

The high-dimensional dataset can be represented by the following real-valued expression matrix

$$\begin{aligned} \mathbb {X}= \begin{pmatrix} X_{11}^{1} & \cdots & X_{1p}^{1}\\ \vdots & & \vdots \\ X_{n_{1}1}^{1} & \cdots & X_{n_{1}p}^{1}\\ \vdots & & \vdots \\ X_{11}^{K} & \cdots & X_{1p}^{K}\\ \vdots & & \vdots \\ X_{n_{K}1}^{K} & \cdots & X_{n_{K}p}^{K} \end{pmatrix} \end{aligned}$$
(1)

where individuals are scattered over K classes \(C_1,\ldots ,C_K\), \(n_k\) denotes the size of the kth class, for \(k= 1,\ldots , K\), and \(n=n_1+\cdots +n_K\) is the global sample size. The objective is to explain the class membership defined by a categorical response \(\mathbb {Y}\), using p variables \(\mathbb {X}_{1},\ldots ,\mathbb {X}_{p}\), where \(X_{ij}^k\) is the ith value in the kth class of the variable \(\mathbb {X}_j\), for \(i= 1,\ldots , n_k\) and \(j= 1,\ldots , p\).

For p smaller than n, classical methods of classification and dimension reduction (LDA, PCA, \(\ldots\)) can be applied. In this work, we consider the case where p is much larger than n. This data structure arises, in particular, in gene expression data [11], for example to characterize different types of cancer [31] or in the Lymphoma dataset [1].
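To fix ideas, here is a minimal sketch, in Python, of how a dataset of this form can be represented; the use of NumPy, the random entries and the variable names are illustrative assumptions, while the class sizes and gene count mimic the Leukemia dataset analyzed later.

```python
import numpy as np

rng = np.random.default_rng(0)

n1, n2 = 47, 25            # class sizes of the Leukemia data (ALL, AML), n = 72
p = 3571                   # number of variables (genes), with p >> n

# Expression matrix: one row per individual, one column per variable X_j.
# Random entries stand in for real gene expression values in this sketch.
X = rng.normal(size=(n1 + n2, p))

# Categorical response Y giving the class membership of each individual.
y = np.array([0] * n1 + [1] * n2)
```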

The analysis of a high-dimensional dataset is primarily based on comparing variables or observations using a variety of similarity measures. Correlation can be used as a measure of association between variables. To measure the correlation between a categorical and a numerical variable, the statistic \(\eta\) can be used [28, 47] and [51, 52]. This statistic represents the ratio of the variability between groups to the total variability.

In this paper, we elaborate a new approach to deal with the large dimension challenge presented by the Big Data framework. Our approach is summarized in an algorithm of five steps. The first three steps reduce the number of columns (variables) in a dataset; the last two identify pertinent variables for building an accurate classifier. We apply our techniques to publicly available microarray datasets and compare our results with the findings discussed in [36]. Our approach can clearly be used in many other areas (economics, finance, environment, etc.) where "high dimension" is a Big Data challenge.

A dimension reduction algorithm

Consider the dataset, represented by (1), of n observations and p variables with \(p \gg n\). The following steps lead to a pertinent reduction of the dataset dimension p.

  • Step 1: Calculate the correlation ratio between each variable \(\mathbb {X}_j\) and the nominal response \(\mathbb {Y}\), defined as:

    $$\begin{aligned} \eta ^2_j=\frac{\sum \limits_{k=1}^K n_{k}(\bar{X}_j^k-\bar{{X}}_j)^2}{\sum \limits_{k=1}^K \sum \limits_{i=1}^{n_k} ({X}_{ij}^k-\bar{{X}}_j)^2} \end{aligned}$$
    (2)

    where \({X}_{ij}^k\) is the value of variable \(\mathbb {X}_j\) measured on the ith individual belonging to the kth class, \(\bar{X}_j^k\) is the mean of \(\mathbb {X}_j\) restricted to the kth class, and \(\bar{{X}}_j\) is the (unrestricted) mean of \(\mathbb {X}_j\).
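As an illustration of Step 1, the sketch below (continuing the Python representation above; correlation_ratio is a hypothetical helper name) computes \(\eta ^2_j\) as the between-class sum of squares divided by the total sum of squares, for every column of the expression matrix.

```python
import numpy as np

def correlation_ratio(x, y):
    """eta^2 of a numerical variable x against a categorical response y:
    between-class sum of squares over total sum of squares."""
    grand_mean = x.mean()
    ss_total = ((x - grand_mean) ** 2).sum()
    ss_between = sum(
        x[y == k].size * (x[y == k].mean() - grand_mean) ** 2 for k in np.unique(y)
    )
    return ss_between / ss_total   # lies in [0, 1]; assumes x is not constant

# eta^2_j for every column of the expression matrix X defined above
eta2 = np.array([correlation_ratio(X[:, j], y) for j in range(X.shape[1])])
```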

  • Step 2: Sort the variables \(\mathbb {X}_j\), \(j= 1,\ldots , p\), in descending order of their \(\eta ^2_j\) values, and extract a basis of the first \({p}^\prime\) linearly independent variables, following the Gram-Schmidt process [49].

This basis is optimal in the sense that it contains all the information about \(\mathbb {Y}\) included in the p original variables. The linear independence condition greatly reduces the number of variables \(({p}^\prime \le n)\).
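Step 2 can be sketched as a greedy, Gram-Schmidt style scan over the columns, continuing the sketch above; the function name select_independent and the numerical tolerance are illustrative choices, not part of the original specification.

```python
import numpy as np

def select_independent(X, eta2, tol=1e-10):
    """Scan the variables in descending order of eta^2 and keep a variable only if it is
    not (numerically) a linear combination of those already kept (Gram-Schmidt residual test)."""
    order = np.argsort(eta2)[::-1]
    kept, basis = [], []
    for j in order:
        v = X[:, j].astype(float)
        for b in basis:                      # subtract the projections on the kept directions
            v = v - (v @ b) * b
        norm = np.linalg.norm(v)
        if norm > tol:                       # non-zero residual: X_j is linearly independent
            basis.append(v / norm)
            kept.append(j)
        if len(kept) == X.shape[0]:          # at most n independent directions, so p' <= n
            break
    return kept                              # indices of the p' retained variables, ranked by eta^2

kept1 = select_independent(X, eta2)
```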

  • Step 3: For j and \(j^\prime\) in \(\{1,\ldots , p^\prime \}\) with \(j<j^\prime\), calculate \(\tau (\mathbb {X}_j,\mathbb {X}_{j^\prime })\), the Kendall rank correlation coefficient between \(\mathbb {X}_j\) and \(\mathbb {X}_{j^\prime }\). If \(\tau (\mathbb {X}_j,\mathbb {X}_{j^\prime }) \ge 0.5\), eliminate \(\mathbb {X}_{j^\prime }\) (since \(\eta ^2_{j^\prime }\le \eta ^2_{j}\)). Otherwise, keep both \(\mathbb {X}_j\) and \(\mathbb {X}_{j^\prime }\).

At the end of this step, we are left with \({p}^{\prime \prime }\) linearly independent variables \(({p}^{\prime \prime }\le {p}^{\prime }\le n)\), ranked in descending order according to their correlation ratios with \(\mathbb {Y}\).
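A possible rendering of Step 3, continuing the previous sketches, relies on scipy.stats.kendalltau and keeps the threshold of 0.5 stated above; the helper name kendall_filter is illustrative.

```python
from scipy.stats import kendalltau

def kendall_filter(X, ranked, threshold=0.5):
    """Variables in `ranked` are ordered by decreasing eta^2; whenever a pair has a
    Kendall tau of at least `threshold`, the lower-ranked variable is dropped."""
    retained = []
    for j in ranked:
        taus = (kendalltau(X[:, j], X[:, r])[0] for r in retained)
        if all(tau < threshold for tau in taus):
            retained.append(j)
    return retained

kept2 = kendall_filter(X, kept1)   # the p'' variables kept after Step 3
```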

For classification purposes, it is desirable to further reduce the number of variables and keep only those most pertinent for building an accurate classifier. Numerous supervised classification methods can be used to achieve this. Here, we use LDA [41, 42] and [50]. The objective is to explore the relationship between the numerical (independent) variables \(\mathbb {X}_{j}\) and the categorical (dependent) variable \(\mathbb {Y}\), and to use it to predict the value of the dependent variable.

Fig. 1 Cross-validation percentage against the number of genes for sample sizes of a 60, b 55, c 50, d 45 and e 40

LDA is implemented in SPSS, which classifies observations using discriminant scores, discriminant functions, and cross-validation. For more details about the implementation and output, we refer to the SPSS user's guide [43].

  • Step 4: For \(\ell\) ranging from 2 up to \({p}^{\prime \prime }\), perform an LDA on the subset of the dataset resulting from Step 3 that consists of the first \(\ell\) variables. For classification purposes, retain the set of variables that maximizes the cross-validation percentage.

At this point, the retained variables can be considered the most reliable for predicting the dependent variable. The next step is a final filter that discards variables that might be sensitive to the sample size.
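The paper carries out Step 4 with the SPSS implementation of LDA. As a rough stand-in, the sketch below uses scikit-learn's LinearDiscriminantAnalysis with leave-one-out cross-validation; the choice of cross-validation scheme and the helper name best_subset are assumptions, since the text does not specify them.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

def best_subset(X, y, ranked):
    """For l = 2, ..., p'', run an LDA on the first l ranked variables and return the prefix
    that maximizes the cross-validation percentage (leave-one-out here, in place of SPSS)."""
    best_l, best_cv = 2, -1.0
    for l in range(2, len(ranked) + 1):
        scores = cross_val_score(
            LinearDiscriminantAnalysis(), X[:, ranked[:l]], y, cv=LeaveOneOut()
        )
        if scores.mean() > best_cv:
            best_l, best_cv = l, scores.mean()
    return ranked[:best_l], best_cv

selected, cv_pct = best_subset(X, y, kept2)
```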

  • Step 5: Repeat Steps 1 to 4 with different sample sizes. The final set of retained variables contains those that prove to be reliable predictors at least \(m\%\) of the time (m may be set, for example, to 70).
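Step 5 can be sketched as follows, continuing the previous sketches; the random subsampling scheme and the helper name stable_genes are assumptions made for illustration, since the text only states that Steps 1 to 4 are repeated for different sample sizes.

```python
from collections import Counter
import numpy as np

def stable_genes(X, y, sizes, m=0.7, seed=0):
    """Re-run Steps 1-4 on random training subsets of the given sizes and keep the
    variables selected in at least a fraction m of the runs."""
    rng = np.random.default_rng(seed)
    counts = Counter()
    for n_sub in sizes:
        idx = rng.choice(X.shape[0], size=n_sub, replace=False)  # assumes both classes stay present
        Xs, ys = X[idx], y[idx]
        e = np.array([correlation_ratio(Xs[:, j], ys) for j in range(Xs.shape[1])])
        ranked = kendall_filter(Xs, select_independent(Xs, e))
        chosen, _ = best_subset(Xs, ys, ranked)
        counts.update(chosen)
    return [g for g, c in counts.items() if c / len(sizes) >= m]

final_genes = stable_genes(X, y, sizes=[60, 55, 50, 45, 40])   # sample sizes as in Fig. 1
```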

Application and results

In this section we apply our approach to real datasets recently used in cancer gene expression studies by several authors. The first dataset was obtained from acute leukemia patients at the time of diagnosis [31]. It comes from a study of gene expression in two types of acute leukemia, acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML). The data consist of 47 cases of ALL (38 B-cell ALL and 9 T-cell ALL) and 25 cases of AML, measured on \(p = 3571\) human genes. The second dataset concerns prostate cancer and contains 52 prostate tumor observations and 50 non-tumor prostate observations, measured on \(p = 6033\) genes.

Both data sets are from Affymetrix high-density oligonucleotide microarrays and are publicly available [16].

Application to Leukemia dataset

The Leukemia dataset contains 72 observations. We randomly select 12 as a test sample; the remaining 60 are used as a training sample. We apply our approach with different training sample sizes and retain the variables that maximize the cross-validation percentage. These are the highly informative genes. Table 1 summarizes our results.

Table 1 Reduction of the number of genes for different sample sizes of the dataset

For a sample of 60 observations, the selected basis contains 60 vectors holding all the information from the 3571 initial genes, a dimension reduction of about 98%. In the next step (column 3), using the Kendall rank correlation, we keep only 33 genes. The retained genes are those most highly correlated with the nominal response (cancer class). Figure 1 shows the cross-validation percentage against the number of genes (from Step 4). The fourth column contains the number of genes that maximize the cross-validation percentage. The number of variables is reduced from 33 to a pertinent 3 genes, which lead to 98% correct classification.

Table 2 Final retained classifiers

The steps described above are repeated for different sample sizes to ensure the model's stability, and we retain the variables that appear to be reliable classifiers. Table 2 presents the gene occurrences with their cross-validation percentages. The 11 retained genes have led to about 98% correct classification. These genes are then used to predict the classes of the 12 observations in the test sample. The prediction results, given in Table 3, show that our approach is highly accurate.

Table 3 Class prediction for the test sample

Kastrin and Peterlin [36] studied the potential of RM modeling using the same dataset. They demonstrated that the RM is as effective as principal component analysis (PCA) with a re-randomization scheme. Table 4 shows that our approach, applied to the Leukemia dataset, outperforms the RM.

Table 4 Performances comparison

Application to prostate cancer dataset

The prostate tumor dataset contains 102 observations. We randomly select 13 as a test sample; the remaining 89 are used as a training sample. We apply our approach with different training sample sizes and retain the variables that maximize the cross-validation percentage. These are the highly informative genes. Table 5 summarizes our results.

Table 5 Reduction of the number of genes for different sample sizes of the dataset

Table 6 presents the gene occurrences with their cross-validation percentages. The 9 retained genes have led to about 95.5% correct classification. These genes are used to predict the classes of the 13 observations in the test sample. The prediction results, given in Table 7, show that our approach is highly accurate.

Table 6 Final retained classifiers
Table 7 Class prediction for the test sample

Table 8 shows that our approach, applied to the prostate tumor dataset, outperforms the RM.

Table 8 Performance comparison

It is worth noting that the use of the developed approach is not restricted to binary prediction problems; it can be extended to multiclass prediction. Indeed, we applied the approach to a third dataset, concerning small round blue cell tumors (SRBCTs), presented as a matrix of 2308 genes (columns) and 83 samples (rows) from a set of microarray experiments. The SRBCTs are childhood tumors classified into four major types: BL (Burkitt lymphoma), EWS (Ewing sarcoma), NB (neuroblastoma), and RMS (rhabdomyosarcoma). After applying the approach described above to this dataset, 8 genes are selected. Even with 4 different classes, our approach performs well, giving a mean accuracy rate of 90%.

Conclusions

Big Data is a highly topical issue of major importance in healthcare research. The role of Big Data in medicine is to build better health profiles and predictive models around individual patients, so that disease can be better diagnosed and treated. Big Data plays an important role in overcoming the major challenges posed by cancer, an incredibly complex disease. Cancer is constantly changing, evolving, and adapting: a single tumor can have more than 100 billion cells, and each cell can acquire mutations individually. To best understand the evolution of cancer, or to best distinguish tumor classes, we need advanced modeling that integrates Big Data. Different techniques are available, but they suffer from a lack of accuracy or from processing complexity.

The purpose of this article is to present methods that reduce the number of variables and keep those carrying the most information for reliable and informative classification. The article proposes a multi-stage procedure for dimensionality reduction and classification, illustrated on gene expression data from two recent studies. Proceeding in this way retrieves the variables that contain the most information for proper classification according to the type of cancer. The retained model is the one that guarantees the best classification by cross-validation. The final model is then used to predict the classes of the samples in the test set.

A comparative study was carried out, for binary problems, between the results of our approach and those of the Rasch model (RM) based approach of [36]. The main conclusion is that our approach outperforms the RM-LDA based approach, with a null error rate and 100% accuracy.

It is worth noting that our approach can be extended to other multiclass prediction problems by integrating multiclass ROC analysis, and can be used to analyze prediction problems in other fields such as finance and banking, marketing, and the environment.