1 Introduction

In the last few years, considerable attention has been paid to the problem of dimensionality reduction and many approaches have been proposed (Van Der Maaten et al., 2009). There are two main techniques for reducing the number of features of a high-dimensional dataset: feature extraction and feature selection. Feature extraction focuses on transforming the data into a lower-dimensional space. This transformation is done through a mapping which results in a new set of features (Liu and Motoda, 1998). Feature selection reduces the feature space by selecting a subset of the original attributes without generating new features (Chandrashekar & Sahin, 2014). Based on the availability of the labels, feature selection methods are divided into three categories: supervised (Ang et al., 2015; Chandrashekar & Sahin, 2014), semi-supervised (Sheikhpour et al., 2017; Zhao & Liu, 2007), and unsupervised (Dy and Brodley, 2004; Miao & Niu, 2016). Supervised feature selection algorithms try to maximize some function of predictive accuracy given the class labels. In unsupervised learning, the search for discriminative features is done blindly, without the class labels. Therefore, unsupervised feature selection is considered a much harder problem (Dy & Brodley, 2004).

Feature selection methods improve the scalability of machine learning algorithms since they reduce the dimensionality of the data. Besides, they reduce the ever-increasing demands for computational and memory resources introduced by the emergence of big data. This can lead to a considerable decrease in energy consumption in data centers, which eases not only the problem of high energy costs in data centers but also the critical challenges imposed on the environment (Yang et al., 2018). As outlined by the High-Level Expert Group on Artificial Intelligence (AI) (AI High-level Expert Group, 2020), environmental well-being is one of the requirements of a trustworthy AI system. The development, deployment, and use of an AI system should be assessed to ensure that it functions in the most environmentally friendly way possible. For example, resource usage and energy consumption during training can be evaluated.

However, a challenging problem in the feature selection domain is that selecting features from datasets that contain a huge number of features and samples may require a massive amount of memory, computational, and energy resources. Since most existing feature selection techniques were designed to process small-scale data, their efficiency can degrade on high-dimensional data (Bolón-Canedo et al., 2015). Only a few studies have focused on designing feature selection algorithms that are efficient in terms of computation (Aghazadeh et al., 2018; Tan et al., 2014). The main contributions of this paper can be summarized as follows:

  • We propose a new fast and robust unsupervised feature selection method named QuickSelection. As briefly sketched in Fig. 1, it has two key components: (1) inspired by node strength in graph theory, the method proposes the neuron strength of sparse neural networks as a criterion to measure feature importance; and (2) the method introduces sparsely connected Denoising Autoencoders (sparse DAEs) trained from scratch with the sparse evolutionary training procedure to model the data distribution efficiently. The sparsity imposed before training also reduces the amount of required memory and the training running time.

  • We implement QuickSelection in a completely sparse manner in Python using the SciPy library and Cython, rather than using a binary mask over connections to simulate sparsity. This ensures minimum resource requirements, i.e., just Random-Access Memory (RAM) and a Central Processing Unit (CPU), without demanding a Graphics Processing Unit (GPU).

The experiments performed on eight benchmark datasets suggest that QuickSelection has several advantages over the state-of-the-art, as follows:

  • It is the first or the second-best performer in terms of both classification and clustering accuracy in almost all scenarios considered.

  • It is the best performer in terms of the trade-off between classification and clustering accuracy, running time, and memory requirement.

  • The proposed sparse architecture for feature selection has at least one order of magnitude fewer parameters than its dense equivalent. As a result, in most cases, the wall-clock training time of QuickSelection running on a CPU is smaller than the wall-clock training time of its autoencoder-based competitors running on a GPU.

  • Last but not least, QuickSelection's computational efficiency gives it the lowest energy consumption among the autoencoder-based feature selection methods considered.

Fig. 1

A high-level overview of the proposed method, “QuickSelection”. a At epoch 0, connections are randomly initialized. b After initializing the sparse structure, we start the training procedure. After 5 epochs, some connections are changed during the training procedure, and as a result, the strength of some neurons has increased or decreased. At epoch 10, the network has converged, and we can observe which neurons are important (larger and darker blue circles) and which are not. c When the network is converged, we compute the strength of all input neurons. d Finally, we select K features corresponding to neurons with the highest strength values

2 Related work

2.1 Feature selection

The literature on feature selection shows a variety of approaches that can be divided into three major categories: filter, wrapper, and embedded methods. Filter methods use a ranking criterion to score the features and then remove the features with scores below a threshold. These criteria include the Laplacian score (He et al., 2006), correlation, mutual information (Chandrashekar & Sahin, 2014), and many other scoring methods such as the Bayesian scoring function, t-test scoring, and information theory-based criteria (Lazar et al., 2012). These methods are usually fast and computationally efficient. Wrapper methods evaluate different subsets of features to detect the best subset. Wrapper methods usually give better performance than filter methods since they use a predictive model to score each subset of features. However, this results in high computational complexity. Seminal contributions to this type of feature selection have been made by Kohavi and John (1997), who used a tree structure to evaluate the subsets of features. Embedded methods unify the learning process and the feature selection (Lal et al., 2006). Multi-Cluster Feature Selection (MCFS) (Cai et al., 2010) is an unsupervised embedded feature selection method, which selects features using spectral regression with L1-norm regularization. A key limitation of this algorithm is that it is computationally intensive, since it depends on computing the eigenvectors of the data similarity matrix and then solving an L1-regularized regression problem for each eigenvector (Farahat et al., 2013). Unsupervised Discriminative Feature Selection (UDFS) (Yang et al., 2011) is another unsupervised embedded feature selection algorithm that simultaneously utilizes both feature and discriminative information to select features (Li et al., 2018).

2.2 Autoencoders for feature selection

In the last few years, many deep learning-based models have been developed to select features from the input data using the learning procedure of deep neural networks (Li et al., 2016). In (Lu et al., 2018), a Multi-Layer Perceptron (MLP) is augmented with a pairwise-coupling layer to feed each input feature along with its knockoff counterpart into the network. After training, the authors use the filter weights of the pairwise-coupling layer to rank the input features. Autoencoders, which are generally known as a strong tool for feature extraction (Bengio et al., 2013), are also being explored to perform unsupervised feature selection. In (Han et al., 2018), the authors combine an autoencoder regression task with a group lasso task for unsupervised feature selection, naming the method AutoEncoder Feature Selector (AEFS). In (Doquet and Sebag, 2019), an autoencoder is combined with three variants of structural regularization to perform unsupervised feature selection. These regularizations are based on slack variables, weights, and gradients, respectively. Another recently proposed autoencoder-based embedded method is feature selection with the Concrete Autoencoder (CAE) (Balın et al., 2019). This method selects features by learning a concrete distribution over the input features. The authors propose a concrete selector layer that selects a linear combination of input features which converges to a discrete set of K features during training. In (Singh and Yamada, 2020), the authors showed that the large number of parameters in CAE might lead to over-fitting when the number of samples is limited. In addition, CAE may select features more than once since there is no interaction between the neurons of the selector layer. To mitigate these problems, they proposed a concrete neural network feature selection (FsNet) method, which includes a selector layer and a supervised deep neural network. The training procedure of FsNet reduces the reconstruction loss and maximizes the classification accuracy simultaneously. In our research, we focus mostly on unsupervised feature selection methods.

The Denoising Autoencoder (DAE) was introduced to solve the problem of learning the identity function in autoencoders. This problem is most likely to happen when there are more hidden neurons than inputs (Baldi, 2012). As a result, the network output may be equal to the input, which makes the autoencoder useless. DAEs solve this problem by adding noise to the input data and trying to reconstruct the original input from its noisy version (Vincent et al., 2008). As a result, DAEs learn a representation of the input data that is robust to small irrelevant changes in the input. In this research, we use the ability of this type of neural network to encode the input data distribution and select the most important features. Moreover, we demonstrate the effect of noise addition on the feature selection results.

2.3 Sparse training

Deep neural networks usually have at least some fully-connected layers, which results in a large number of parameters. In a high-dimensional space, this is not desirable since it may cause a significant decrease in training speed and a rise in memory requirements. To tackle this problem, sparse neural networks have been proposed. Pruning dense neural networks is one of the most well-known methods to obtain a sparse neural network (LeCun et al., 1990; Hassibi & Stork, 1993). In (Han et al., 2015), the authors start from a pre-trained network, prune the unimportant weights, and retrain the network. Although this method can output a network with the desired sparsity level, its minimum computation cost is as much as the cost of training a dense network. To reduce this cost, Lee et al. (2018) start with a dense neural network and prune it prior to training based on connection sensitivity; then, the sparse network is trained in the standard way. However, starting from a dense neural network requires at least the memory size of the dense neural network and the computational resources for one training iteration of a dense network. Therefore, this method might not be suitable for low-resource devices.

In 2016, Mocanu et al. (2016) introduced the idea of training sparse neural networks from scratch, a concept which recently has started to be known as sparse training. The sparse connectivity pattern was fixed before training using graph theory, network science, and data statistics. While it showed promising results, outperforming the dense counterpart, the static sparsity pattern did not always model the data optimally. To address this issue, Mocanu et al. (2018) proposed the Sparse Evolutionary Training (SET) algorithm, which makes use of dynamic sparsity during training. The idea is to start with a sparse neural network before training and dynamically change its connections during training in order to automatically model the data distribution. This results in a significant decrease in the number of parameters and increased performance. SET evolves the sparse connections at each training epoch by removing a fraction \(\zeta\) of the connections with the smallest magnitude and randomly adding new connections in each layer. Bourgin et al. (2019) have shown that a sparse MLP trained with SET achieves state-of-the-art results on tabular data in predicting human decisions, outperforming fully-connected neural networks and Random Forest, among others.

In this work, we introduce for the first time sparse training in the world of denoising autoencoders, and we name the newly introduced model sparse denoising autoencoder (sparse DAE). We train the sparse DAE with the SET algorithm to keep the number of parameters low during training. Then, we exploit the trained network to select the most important features.

3 Proposed method

To address the problem of the high dimensionality of the data, we propose a novel method, named “QuickSelection”, to select the most informative attributes from the data based on their strength (importance). In short, we train a sparse denoising autoencoder network from scratch in an unsupervised adaptive manner. Then, we use the trained network to derive the strength of each input neuron, which corresponds to one input feature.

The basic idea of our approach is to impose sparse connections on a DAE, a model that has proven successful in the related field of feature extraction, in order to handle the computational and memory costs of high-dimensional data efficiently. The sparse connections are evolved in an adaptive manner that helps in identifying informative features.

A couple of methods have been proposed for training deep neural networks from scratch using sparse connections and sparse training (Dettmers & Zettlemoyer, 2019; Mocanu et al., 2018; Bellec et al., 2017; Mostafa & Wang, 2019; Evci et al., 2019; Zhu & Jin, 2019). All these methods are implemented using a binary mask over connections to simulate sparsity, since standard deep learning libraries and hardware (e.g., GPUs) are not optimized for sparse weight matrix operations. Unlike the aforementioned methods, we implement our proposed method in a purely sparse manner to meet our goal of actually exploiting the advantages of a very small number of parameters during training. We decided to use SET to train our sparse DAE.

The choice of SET is due to its desirable characteristics. SET is a simple method, yet it achieves satisfactory performance. Unlike other methods that calculate and store information for all the network weights, including the non-existing ones, SET is memory efficient: it stores the weights of the existing sparse connections only. Its computational overhead is also low, as the evolution procedure depends only on the magnitude of the existing connections. This is a favourable property for our goal of selecting informative features quickly. In the following subsections, we first present the structure of our proposed sparse denoising autoencoder network and then explain the feature selection method. The pseudo-code of our proposed method can be found in Algorithm 1.

Algorithm 1 The pseudo-code of QuickSelection

3.1 Sparse DAE

3.1.1 Structure

As the goal of our proposed method is to perform fast feature selection in a memory-efficient way, we consider here the model with the least possible number of hidden layers, i.e., one hidden layer, as more layers mean more computation. The sparse connections between two consecutive layers of neurons are initialized with an Erdős–Rényi random graph, in which the probability of a connection between two neurons is given by

$$\begin{aligned} P\left( W^{l}_{ij}\right) = \frac{\epsilon \left( n^{l-1}+n^l\right) }{n^{l-1} \times n^{l}}, \end{aligned}$$
(1)

where \(\epsilon\) denotes the parameter that controls the sparsity level, \(n^l\) denotes the number of neurons in layer l, and \({{W}_{ij}^l}\) is the connection between neuron i in layer \(l-1\) and neuron j in layer l, stored in the sparse weight matrix \({\mathbf {W}}^l\).
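To make the initialization concrete, the following is a minimal sketch of how such an Erdős–Rényi sparse layer could be created with SciPy, following Eq. (1); the function name and the small weight scaling are illustrative choices, not the released implementation (which also avoids the dense intermediate mask entirely).

```python
import numpy as np
from scipy import sparse


def init_sparse_layer(n_prev, n_curr, epsilon=13, seed=None):
    """Sparse weight matrix in which a connection between two neurons exists
    with probability epsilon * (n_prev + n_curr) / (n_prev * n_curr), Eq. (1)."""
    rng = np.random.default_rng(seed)
    p = epsilon * (n_prev + n_curr) / (n_prev * n_curr)
    mask = rng.random((n_prev, n_curr)) < p              # Bernoulli connectivity
    weights = rng.standard_normal((n_prev, n_curr)) * 0.01 * mask
    return sparse.csr_matrix(weights)                    # store only existing links


# Example: 784 input features connected to 1000 hidden neurons.
W1 = init_sparse_layer(784, 1000, epsilon=13, seed=0)
print(W1.nnz, "connections instead of", 784 * 1000)      # roughly 13 * (784 + 1000)
```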

Input denoising. We use the additive noise model to corrupt the original data:

$$\begin{aligned} {\widetilde{\mathbf {x}}} = {\mathbf {x}} + \textit{nf} {\mathcal {N}}\left( \mu ,\,\sigma ^{2}\right) , \end{aligned}$$
(2)

where \({\mathbf {x}}\) is the input data vector from dataset X, \(nf\) (noise factor) is a hyperparameter of the model which determines the level of corruption, and \({\mathcal {N}}(\mu ,\,\sigma ^{2})\) is Gaussian noise. After corrupting the data, we derive the hidden representation \({\mathbf {h}}\) from this corrupted input. Then, the output \({\mathbf {z}}\) is reconstructed from the hidden representation. Formally, the hidden representation \({\mathbf {h}}\) and the output \({\mathbf {z}}\) are computed as follows:

$$\begin{aligned} {\mathbf {h}}= & {} a\left( {\mathbf {W}}^1{\widetilde{\mathbf {x}}} + {\mathbf {b}}^1\right) , \end{aligned}$$
(3)
$$\begin{aligned} {\mathbf {z}}= & {} a\left( {\mathbf {W}}^2{\mathbf {h}} + {\mathbf {b}}^2\right) , \end{aligned}$$
(4)

where \({\mathbf {W}}^1\) and \({\mathbf {W}}^2\) are the sparse weight matrices of hidden and output layers respectively, \({\mathbf {b}}^1\) and \({\mathbf {b}}^2\) are the bias vectors of their corresponding layer, and a is the activation function of each layer. The objective of our network is to reconstruct the original features in the output. For this reason, we use mean squared error (MSE) as the loss function to measure the difference between original features \({\mathbf {x}}\) and the reconstructed output \({\mathbf {z}}\):

$$\begin{aligned} L_{MSE}= \left\| {\mathbf {z}}-{\mathbf {x}} \right\| _2^2. \end{aligned}$$
(5)

Finally, the weights can be optimized using the standard training algorithms (e.g., Stochastic Gradient Descent (SGD), AdaGrad, and Adam) with the above reconstruction error.
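For clarity, here is a minimal NumPy sketch of the forward pass and loss of Eqs. (2)-(5) for the one-hidden-layer sparse DAE, written with dense arrays; in the actual implementation \({\mathbf {W}}^1\) and \({\mathbf {W}}^2\) are SciPy sparse matrices, and the activations follow the settings described later in Sect. 4.1.1. The helper names are ours.

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def forward_and_loss(x, W1, b1, W2, b2, nf=0.2, rng=None):
    """One forward pass of the sparse DAE on a batch x of shape (n_samples, n_features)."""
    rng = rng or np.random.default_rng()
    x_tilde = x + nf * rng.normal(0.0, 1.0, size=x.shape)   # Eq. (2): corrupt the input
    h = sigmoid(x_tilde @ W1 + b1)                           # Eq. (3): hidden representation
    z = h @ W2 + b2                                          # Eq. (4): linear reconstruction
    loss = np.mean((z - x) ** 2)                             # Eq. (5): MSE against clean x
    return z, loss
```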

3.1.2 Training procedure

We adapt the SET training procedure (Mocanu et al., 2018) to train our proposed network for feature selection. SET works as follows. After each training epoch, a fraction \(\zeta\) of the smallest positive weights and a fraction \(\zeta\) of the largest negative weights at each layer are removed. The selection is based on the magnitude of the weights. New connections, in the same amount as the removed ones, are randomly added in each layer. Therefore, the total number of connections in each layer remains the same, while the number of connections per neuron varies, as represented in Fig. 1. The weights of these new connections are initialized from a standard normal distribution. The random addition of new connections does not carry a high risk of not finding good sparse connectivity at the end of the training process, because it has been shown in (Liu et al., 2020) that sparse training can unveil a vast number of very different sparse connectivity local optima which achieve very similar performance.
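The following is a simplified sketch of one such evolution step on a SciPy sparse weight matrix, reflecting our reading of the SET procedure described above (drop the weights closest to zero on both the positive and the negative side, then regrow the same number of connections at random free positions); it is not the authors' exact implementation.

```python
import numpy as np
from scipy import sparse


def evolve_weights(W, zeta=0.2, rng=None):
    """One SET evolution step on a SciPy sparse weight matrix W."""
    rng = rng or np.random.default_rng()
    W = W.tocoo()
    rows, cols, vals = W.row, W.col, W.data

    pos, neg = vals > 0, vals < 0
    pos_thr = np.quantile(vals[pos], zeta) if pos.any() else 0.0        # smallest positives
    neg_thr = np.quantile(vals[neg], 1.0 - zeta) if neg.any() else 0.0  # largest negatives
    keep = ~((pos & (vals < pos_thr)) | (neg & (vals > neg_thr)))
    n_removed = int((~keep).sum())
    rows, cols, vals = rows[keep], cols[keep], vals[keep]

    # Regrow the same number of connections at random, currently free, positions;
    # new weights are drawn from a standard normal distribution.
    existing = set(zip(rows.tolist(), cols.tolist()))
    new_r, new_c = [], []
    while len(new_r) < n_removed:
        r, c = int(rng.integers(W.shape[0])), int(rng.integers(W.shape[1]))
        if (r, c) not in existing:
            existing.add((r, c))
            new_r.append(r)
            new_c.append(c)
    rows = np.concatenate([rows, np.array(new_r, dtype=int)])
    cols = np.concatenate([cols, np.array(new_c, dtype=int)])
    vals = np.concatenate([vals, rng.standard_normal(n_removed)])
    return sparse.csr_matrix((vals, (rows, cols)), shape=W.shape)
```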

Fig. 2

Neuron strength on the MNIST dataset. The heat-maps above are a 2D representation of the input neurons’ strength. It can be observed that the strength of the neurons is random at the beginning of training. After a few epochs, the pattern changes, and neurons in the center become more important and similar to the MNIST data pattern

3.2 Feature selection

We select the most important features of the data based on the weights of their corresponding input neurons in the trained sparse DAE. Inspired by node strength in graph theory (Barrat et al., 2004), we determine the importance of each neuron based on its strength. We estimate the strength of each neuron (\({{s}_{i}}\)) as the sum of the absolute weights of its outgoing connections:

$$\begin{aligned} {{s}_{i}}=\sum \limits _{j=1}^{n^1}{|{{W}^{1}_{ij}}}|, \end{aligned}$$
(6)

where \(n^1\) is the number of neurons of the first hidden layer, and \({{W}^{1}_{ij}}\) denotes the weight of connection linking input neuron i to hidden neuron j.

As represented in Fig. 1, the strength of the input neurons changes during training; we have depicted the strength of the neurons according to their size and color. After convergence, we compute the strength for all of the input neurons; each input neuron corresponds to a feature. Then, we select the features corresponding to the neurons with K largest strength values:

$$\begin{aligned} {\mathbb {F}}^{*}_{s} = \mathop {\arg \max }\limits _{{\mathbb {F}}_{s} \subset {\mathbb {F}},\, |{\mathbb {F}}_{s}| = K} \sum \limits _{f_{i} \in {\mathbb {F}}_{s}} s_{i}, \end{aligned}$$
(7)

where \({\mathbb {F}}\) and \({\mathbb {F}}^{*}_s\) are the original feature set and the final selected feature set respectively, \(f_i\) is the \(i^{th}\) feature of \({\mathbb {F}}\), and K is the number of features to be selected. In addition, by sorting all the features based on their strength, we derive the importance of all features in the dataset. In short, we are able to rank all input features by training a single sparse DAE model just once.
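A minimal sketch of Eqs. (6) and (7), assuming the trained first-layer weight matrix \({\mathbf {W}}^1\) is available as a SciPy sparse (or dense) matrix of shape (number of features, number of hidden neurons); the function name is illustrative.

```python
import numpy as np


def select_features(W1, K):
    """Return the indices of the K strongest input neurons (features) and
    the full ranking of all features, most important first."""
    strength = np.abs(W1).sum(axis=1)           # Eq. (6): sum of |outgoing weights|
    strength = np.asarray(strength).ravel()     # flatten in case W1 is sparse
    ranking = np.argsort(strength)[::-1]        # hierarchical importance of all features
    return ranking[:K], ranking                 # Eq. (7): top-K feature subset
```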

For a deeper understanding of the above process, we analyze the strength of each input neuron in a 2D map on the MNIST dataset. This is illustrated in Fig. 2. At the beginning of training, all the neurons have small strength due to the random initialization of each weight to a small value. During the network evolution, stronger connections gradually become linked to important features. We can observe that after ten epochs, the neurons in the center of the map become stronger. This pattern is similar to the pattern of the MNIST data, in which most of the digits appear in the middle of the picture.

We studied other metrics for estimating the neuron importance, such as the strength of output neurons, degree of input and output neurons, and strength and degree of neurons simultaneously. However, in our experiments, all these methods have been outperformed by the strength of the input neurons in terms of accuracy and stability.

4 Experiments

In order to verify the validity of our proposed method, we carry out several experiments. In this section, first, we state the settings of the experiments, including hyperparameters and datasets. Then, we perform feature selection with QuickSelection and compare the results with other methods, including MCFS, Laplacian score, and three autoencoder-based feature selection methods. After that, we perform different analyses of QuickSelection to understand its behavior. Finally, we discuss the scalability of QuickSelection and compare it with the other methods considered.

4.1 Settings

The experiment settings, including the values of the hyperparameters, implementation details, the structure of the sparse DAE, the datasets we use for evaluation, and the evaluation metrics, are as follows.

4.1.1 Hyperparameters and implementation

For feature selection, we consider the case of the simplest sparse DAE with one hidden layer consisting of 1000 neurons. This choice is made due to our main objective to decrease the model complexity and the number of parameters. The activation functions used for the hidden and output layer neurons are “Sigmoid” and “Linear” respectively, except for the Madelon dataset, where we use “Tanh” as the output activation function. We train the network with SGD and a learning rate of 0.01. The hyperparameter \(\zeta\), the fraction of weights to be removed in the SET procedure, is 0.2. Also, \(\epsilon\), which determines the sparsity level, is set to 13. We set the noise factor (nf) to 0.2 in the experiments. To improve the learning process of our network, we standardize the features of the dataset such that each attribute has zero mean and unit variance. However, for the SMK and PCMAC datasets, we use Min-Max scaling. The preprocessing method for each dataset is determined by a small experiment comparing these two preprocessing methods.
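For reference, the settings above can be collected into a single configuration sketch; the dictionary layout and key names are our own, not part of the released code.

```python
# Hyperparameters of QuickSelection as described in this section.
quickselection_config = {
    "n_hidden": 1000,                 # one hidden layer
    "hidden_activation": "sigmoid",
    "output_activation": "linear",    # "tanh" for the Madelon dataset
    "optimizer": "sgd",
    "learning_rate": 0.01,
    "zeta": 0.2,                      # fraction of weights removed per epoch (SET)
    "epsilon": 13,                    # sparsity level of the Erdős–Rényi initialization
    "noise_factor": 0.2,              # nf in Eq. (2)
    "preprocessing": "standardize",   # Min-Max scaling for SMK and PCMAC
}
```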

We implement sparse DAE and QuickSelection in a purely sparse manner in Python, using the SciPy library (Jones et al., 2001) and Cython. We compare our proposed method to MCFS, Laplacian score (LS), AEFS, and CAE, which have been mentioned in Sect. 2. We also performed some experiments with UDFS; however, since we were not able to obtain many of its results within the considered time limit (24 hours), we do not include them in the paper. We have used the scikit-feature repository for the implementation of MCFS and Laplacian score (Li et al., 2018). Also, we use the implementations of feature selection with CAE and AEFS from GitHub. In addition, to highlight the advantages of using sparse layers, we compare our results with a fully-connected autoencoder (FCAE) using the neuron strength as a measure of the importance of each feature. To have a fair comparison, the structure of this network is similar to our sparse DAE: one hidden layer containing 1000 neurons, implemented using TensorFlow. Furthermore, we have studied the effect of other components of QuickSelection, including input denoising and the SET training algorithm, in Appendices B.1 and F, respectively.

For all the other methods (except FCAE, for which all the hyperparameters and preprocessing are similar to QuickSelection), we scale the data between zero and one, since this yields better performance than standardization for these methods. The hyperparameters of the aforementioned methods have been set similar to the ones reported in the corresponding code or paper. For AEFS, we tuned the regularization hyperparameter between 0.0001 and 1000, since this method is sensitive to this value. We perform our experiments on a single CPU core, an Intel Xeon Processor E5 v4, and for the methods that require a GPU, we use an NVIDIA Tesla P100.

4.1.2 Datasets

We evaluate the performance of our proposed method on eight datasets, including five low-dimensional datasets and three high-dimensional ones. Table 1 illustrates the characteristics of these datasets.

  • COIL-20 (Nene et al., 1996) consists of 1440 images taken from 20 objects (72 poses for each object).

  • Madelon (Guyon et al., 2008) is an artificial dataset with 5 informative features and 15 linear combinations of them. The rest of the features are distractor features since they have no predictive power.

  • Human Activity Recognition (HAR) (Anguita et al., 2013) is created by collecting the observations of 30 subjects performing 6 activities such as walking, standing, and sitting. The data was recorded by a smart-phone connected to the subjects’ body.

  • Isolet (Fanty & Cole, 1991) has been created with the spoken name of each letter of the English alphabet.

  • MNIST (LeCun, 1998) is a database of 28x28 images of handwritten digits.

  • SMK-CAN-187 (Spira et al., 2007) is a gene expression dataset with 19993 features. This dataset compares smokers with and without lung cancer.

  • GLA-BRA-180 (Sun et al., 2006) consists of the expression profile of Stem cell factor useful to determine tumor angiogenesis.

  • PCMAC (Lang, 1995) is a subset of the 20 Newsgroups data.

Table 1 Datasets characteristics

4.1.3 Evaluation metrics

To evaluate our model, we compute two metrics: clustering accuracy and classification accuracy. To derive the clustering accuracy (Li et al., 2018), first, we perform K-means using the subset of the dataset corresponding to the selected features and get the cluster labels. Then, we find the best match between the class labels and the cluster labels and report the clustering accuracy. We repeat the K-means algorithm 10 times and report the average clustering results, since K-means may converge to a local optimum.

To compute the classification accuracy, we use a supervised classification model named “Extremely randomized trees” (ExtraTrees), which is an ensemble learning method that fits several randomized decision trees on different parts of the data (Geurts et al., 2006). This choice is made due to the computational efficiency of the ExtraTrees classifier. To compute the classification accuracy, first, we derive the K selected features using each feature selection method considered. Then, we train the ExtraTrees classifier with 50 trees as estimators on the K selected features of the training set. Finally, we compute the classification accuracy on the unseen test data. For the datasets that do not contain a test set, we split the data into training and testing sets, with \(80\%\) of the total original samples for the training set and the remaining \(20\%\) for the testing set. In addition, we have evaluated the classification accuracy of feature selection using the random forest classifier (Liaw et al., 2002) in Appendix G.
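The two metrics can be sketched as follows, assuming scikit-learn and SciPy are available and class labels are encoded as integers 0, ..., C-1; the Hungarian matching implements the usual best mapping between cluster and class labels, and the exact evaluation code used for the paper may differ in details.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split


def clustering_accuracy(X_sel, y, n_runs=10):
    """K-means on the selected features, best-matched to the class labels,
    averaged over n_runs runs."""
    n_classes = len(np.unique(y))
    accs = []
    for seed in range(n_runs):
        pred = KMeans(n_clusters=n_classes, random_state=seed).fit_predict(X_sel)
        cost = np.zeros((n_classes, n_classes))
        for c, p in zip(y, pred):                    # confusion counts
            cost[c, p] += 1
        rows, cols = linear_sum_assignment(-cost)    # best cluster-to-class match
        accs.append(cost[rows, cols].sum() / len(y))
    return float(np.mean(accs))


def classification_accuracy(X_sel, y, seed=0):
    """ExtraTrees with 50 estimators on an 80/20 split of the selected features."""
    X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.2,
                                              random_state=seed)
    clf = ExtraTreesClassifier(n_estimators=50, random_state=seed).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)
```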

4.2 Feature selection

We select 50 features from each dataset except Madelon, for which we select just 20 features since most of its features are non-informative noise. Then, we compute the clustering and classification accuracy on the selected subset of features; the more informative the selected features, the higher the accuracy that will be achieved. The clustering and classification accuracy results of our model and the other methods are summarized in Tables 2 and 3, respectively. These results are an average of 5 runs for each case. For the autoencoder-based feature selection methods, including CAE, AEFS, and FCAE, we consider 100 training epochs. However, we present the results of QuickSelection at epochs 10 and 100, named QuickSelection\(_{10}\) and QuickSelection\(_{100}\), respectively. This is mainly due to the fact that our proposed method is able to achieve a reasonable accuracy after the first few epochs. Moreover, we perform hyperparameter tuning for \(\epsilon\) and \(\zeta\) using the grid search method over a limited number of values for all datasets; the best result is presented in Tables 2 and 3 as QuickSelection\(_{best}\). The results of hyperparameter selection can be found in Appendix B.2. However, we do not perform hyperparameter optimization for the other methods (except AEFS). Therefore, in order to have a fair comparison between all methods, we do not compare the results of QuickSelection\(_{best}\) with the other methods.

Table 2 Clustering accuracy (%) using 50 selected features (except Madelon for which we select 20 features)
Table 3 Classification accuracy (%) using 50 selected features (except Madelon for which we select 20 features)

From Table 2, it can be observed that QuickSelection outperforms all the other methods on Isolet, Madelon, and PCMAC in terms of clustering accuracy, while being the second-best performer on Coil20, MNIST, SMK, and GLA. Furthermore, on the HAR dataset, it is the best performer among all the autoencoder-based feature selection methods considered. As shown in Table 3, QuickSelection outperforms all the other methods on Coil20, SMK, and GLA in terms of classification accuracy, while being the second-best performer on the other datasets. From these tables, it is clear that QuickSelection outperforms its equivalent dense network (FCAE) in terms of classification and clustering accuracy on all datasets.

It can be observed in Tables 2 and 3 that Lap_score has a poor performance when the number of samples is large (e.g., MNIST). However, in tasks with a low number of samples and features, even in noisy environments such as Madelon, Lap_score has a relatively good performance. In contrast, CAE has a poor performance in noisy environments (e.g., Madelon), while it has a decent classification accuracy on the other datasets considered. It is the best or second-best performer on five datasets in terms of classification accuracy when \(K=50\). AEFS and FCAE cannot achieve a good performance on Madelon, either. We believe that the dense layers are the main cause of this behaviour; the dense connections try to learn all input features, even the noisy ones. Therefore, they fail to detect the most important attributes of the data. MCFS performs decently on most of the datasets in terms of clustering accuracy. This is due to the main objective of MCFS to preserve the multi-cluster structure of the data. However, this method also has a poor performance on datasets with a large number of samples (e.g., MNIST) and noisy features (e.g., Madelon).

However, since evaluating the methods using a single value of K might not be enough for comparison, we performed another experiment using different values of K. In Appendix A.1, we test other values of K on all datasets and compare the methods in terms of classification accuracy, clustering accuracy, running time, and maximum memory usage. The results of this appendix are summarized in Sect. 5.1.

Fig. 3

Influence of feature removal on Madelon dataset. After deriving the importance of the features with QuickSelection, we sort and then remove them based on two orders

4.2.1 Relevancy of selected features

To illustrate the ability of QuickSelection to find informative features, we analyze the Madelon dataset results thoroughly, since this dataset has the interesting property of containing many noisy features. We perform the following experiment: first, we sort the features based on their strength. Then, we remove the features one by one, from the least important feature to the most important one. In each step, we train an ExtraTrees classifier on the remaining features. We repeat this experiment by removing the features from the most important ones to the least important ones. The classification accuracy for both experiments can be seen in Fig. 3. On the left side of Fig. 3, we can observe that removing the least important features, which are noise, increases the accuracy. The maximum accuracy occurs after we remove 480 noise features. This corresponds to the moment when all the noise features are supposed to be removed. In Fig. 3 (right), it can be seen that removing the features in the reverse order results in a sudden decrease in the classification accuracy. After removing 20 features (indicated by the vertical blue line), the classifier performs like a random classifier. We conclude that QuickSelection is able to find the most informative features in good order.
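The experiment can be reproduced along the lines of the following sketch, assuming the feature ranking (most important first) produced by QuickSelection and a train/test split of Madelon; the helper name and exact classifier settings are illustrative.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier


def removal_curve(X_tr, y_tr, X_te, y_te, ranking, reverse=False):
    """Classification accuracy after removing features one by one, either from
    the least important end (reverse=False) or from the most important end."""
    order = ranking if reverse else ranking[::-1]     # removal order
    kept = list(ranking)
    accs = []
    for f in order[:-1]:                              # keep at least one feature
        kept.remove(f)
        clf = ExtraTreesClassifier(n_estimators=50, random_state=0)
        clf.fit(X_tr[:, kept], y_tr)
        accs.append(clf.score(X_te[:, kept], y_te))
    return np.array(accs)
```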

To better show the relevancy of the features found by QuickSelection, we visualize the 50 features selected on the MNIST dataset per class, by averaging their corresponding values from all data samples belonging to one class. As can be observed in Fig. 4, the resulting shape resembles the actual samples of the corresponding digit. We discuss the results of all classes at different training epochs in more detail in Appendix C.

Fig. 4

Average values of all data samples of each class corresponding to the 50 selected features on MNIST after 100 training epochs (bottom), along with the average of the actual data samples of each class (top)

5 Discussion

5.1 Accuracy and computational efficiency trade-off

In this section, we perform a thorough comparison between the models in terms of running time, energy consumption, memory requirement, clustering accuracy, and classification accuracy. In short, we change the number of features to be selected (K) and measure the accuracy, running time, and maximum memory usage across all methods. Then, we compute two scores to summarize the results and compare methods.

We analyze the effect of changing K on QuickSelection performance and compare it with the other methods; the results are presented in Fig. 10 in Appendix A.1. Figure 10a compares the performance of all methods when K changes between 5 and 100 on the low-dimensional datasets, including Coil20, Isolet, HAR, and Madelon. Figure 10b illustrates the performance comparison for K between 5 and 300 on the MNIST dataset, which is also a low-dimensional dataset. We discuss this dataset separately since it has a large number of samples that makes it different from the other low-dimensional datasets. Figure 10c represents a similar comparison on three high-dimensional datasets, including SMK, GLA, and PCMAC. It should be noted that, to have a fair comparison, we use a single CPU core to run these methods; however, since the implementations of CAE and AEFS are optimized for parallel computation, we use a GPU to run these methods. We also measure the running time of feature selection with CAE on a CPU.

To compare the memory requirement of each method, we profile the maximum memory usage during feature selection for different values of K. The results are presented in Fig. 11 in Appendix A.1, derived using a Python library named resource. Besides, to compare the memory occupied by the autoencoder-based models, we count the number of parameters of each model. The results are shown in Fig. 14 in Appendix A.3.

Fig. 5

Feature selection comparison in terms of classification accuracy, clustering accuracy, speed, and memory requirement, on each dataset and for different values of K, using two scoring variants

However, comparing all of these methods only by looking at the graphs in Figs. 10 and 11 is not straightforward, and the trade-off between the factors is not clear. For this reason, we compute two scores that take all these metrics into account simultaneously.

5.1.1 Score 1

To compute this score, on each dataset and for each value of K, we rank the methods based on the running time, memory requirement, clustering accuracy, and classification accuracy. Then, we give a score of 1 to the best and second-best performers; this is mainly due to the fact that in most cases the difference between these two is negligible. After that, we compute the summation of these scores for each method over all datasets. The results are presented in Fig. 5a; to ease the comparison of the different components of the score, a heat-map visualization of the scores is presented in Fig. 5c. The cumulative score for each method consists of four parts that correspond to each metric considered. As is evident in this figure, QuickSelection (the cumulative score of QuickSelection\(_{10}\) and QuickSelection\(_{100}\)) outperforms all other methods by a significant gap. Our proposed method is able to achieve the best trade-off between accuracy, running time, and memory usage among all these methods. Laplacian score, the second-best performer, has a decent performance in terms of running time and memory, while it cannot perform well in terms of accuracy. On the other hand, CAE has a satisfactory performance in terms of accuracy. However, it is not among the best two performers in terms of computational resources for any value of K. Finally, FCAE and AEFS cannot achieve a decent performance compared to the other methods. A more detailed version of Fig. 5a is available in Fig. 12 in Appendix A.1.

5.1.2 Score 2

In addition to the ranking-based score, we calculate another score to consider all the methods, even the lower-ranking ones. With this aim, on each dataset and value of K, we normalize each performance metric between 0 and 1, using the values of the best and worst performers on that metric. A value of 1 in the accuracy score means the highest accuracy. However, for memory and running time, a value of 1 means the least memory requirement and the least running time, respectively. After normalizing the metrics, we accumulate the normalized values for each method over all datasets. The results are depicted in Fig. 5b. As can be seen in this diagram, QuickSelection (we consider the results of QuickSelection\(_{100}\)) outperforms the other methods by a large margin. CAE has a performance close to QuickSelection in terms of both accuracy metrics, while it has a poor performance in terms of memory and running time. In contrast, Lap_score is computationally efficient while having the lowest accuracy score. In summary, it can be observed in Fig. 5b that QuickSelection achieves the best trade-off of the four objectives among the considered methods.
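A small sketch of the normalization behind this score, assuming that for each dataset and value of K we hold an array with one metric's values for all methods; variable and function names are illustrative.

```python
import numpy as np


def normalized_metric(values, higher_is_better=True):
    """Map one metric (over all methods, for one dataset/K pair) to [0, 1],
    where 1 corresponds to the best performer on that metric."""
    v = np.asarray(values, dtype=float)
    lo, hi = v.min(), v.max()
    if hi == lo:                                  # all methods tie on this metric
        return np.ones_like(v)
    norm = (v - lo) / (hi - lo)
    return norm if higher_is_better else 1.0 - norm


# Accumulation for one dataset/K pair (repeated and summed over all datasets):
# score2 += normalized_metric(clustering_acc) + normalized_metric(classification_acc)
# score2 += normalized_metric(runtime, higher_is_better=False)
# score2 += normalized_metric(memory,  higher_is_better=False)
```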

5.1.3 Energy consumption

The next analysis we perform concerns the energy consumption of each method. We estimate the energy consumption of each method using the running time of the corresponding algorithm for each dataset and value of K. We assume that each method uses the maximum power of the corresponding computational resources during its running time. Therefore, we derive the energy consumption of each method using the running time and the maximum power consumption of the CPU and/or GPU, which can be found in the specification of the corresponding CPU or GPU model. As shown in Fig. 13 in Appendix A.2, Laplacian score feature selection needs the least amount of energy among the methods on all datasets except MNIST. QuickSelection\(_{10}\) is the best performer on MNIST in terms of energy consumption. Laplacian score and MCFS are sensitive to the number of samples. They cannot perform well on MNIST, either in terms of accuracy or efficiency. The maximum memory usage during feature selection for Laplacian score and MCFS on MNIST is 56 GB and 85 GB, respectively. Therefore, they are not a good choice in case of a large number of samples. QuickSelection is the second-best performer in terms of energy consumption, and also the best performer among the autoencoder-based methods. QuickSelection is not sensitive to the number of samples or the number of dimensions.
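As a back-of-the-envelope sketch, the estimate described above multiplies the measured running time by the maximum power draw of the hardware used; the wattage values below are assumptions for illustration (typical vendor TDP figures), not numbers reported in the paper.

```python
CPU_MAX_WATTS = 120.0   # assumed TDP of an Intel Xeon E5 v4 class CPU
GPU_MAX_WATTS = 250.0   # assumed TDP of an NVIDIA Tesla P100


def estimated_energy_kwh(runtime_seconds, uses_gpu=False):
    """Energy estimate: running time times the maximum power of the device used
    (the paper combines CPU and/or GPU power, depending on the method)."""
    watts = GPU_MAX_WATTS if uses_gpu else CPU_MAX_WATTS
    return watts * runtime_seconds / 3_600_000.0    # watt-seconds -> kWh


# Example: a 30-minute GPU run -> estimated_energy_kwh(1800, uses_gpu=True) ≈ 0.125 kWh
```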

5.1.4 Efficiency versus accuracy

In order to study the trade-off between accuracy and resource efficiency, we perform another in-depth analysis. In this analysis, we plot the trade-off between accuracy (including classification and clustering accuracy) and resource requirements (including memory and energy consumption). The results are shown in Figs. 6 and 7, which correspond to the energy-accuracy and memory-accuracy trade-off, respectively. Each point in these plots refers to the results of a particular combination of a specific method and dataset when selecting 50 features (except Madelon, for which we select 20 features). As can be observed in these plots, QuickSelection, MCFS, and Lap_score usually have a good trade-off between the considered metrics. A good trade-off between a pair of metrics means maximizing the accuracy (classification or clustering accuracy) while minimizing the computational cost (power consumption or memory requirement). However, when the number of samples increases (on the MNIST dataset), both MCFS and Lap_score fail to maintain a low computational cost and high accuracy. Therefore, when the dataset size increases, these two methods are not an optimal choice. Among the autoencoder-based methods, in most cases QuickSelection\(_{10}\) and QuickSelection\(_{100}\) are among the Pareto optimal points. Another significant advantage of our proposed method is that it gives the ranking of the features as output. Therefore, unlike MCFS or CAE, which need the value of K as input, QuickSelection does not depend on K and needs just a single training of the sparse DAE model for any value of K. Therefore, the computational cost of QuickSelection is the same for all values of K, and only a single run of this algorithm is required to get the hierarchical importance of the features.

Fig. 6

Estimated power consumption (kWh) versus accuracy (%) when selecting 50 features (except Madelon for which we select 20 features). Each point refers to the result of a single dataset (specified by colors) and method (specified by markers), where the x and y-axis show the accuracy and the estimated power consumption, respectively

Fig. 7

Maximum memory requirement (Kb) versus accuracy (%) when selecting 50 features (except Madelon for which we select 20 features). Each point refers to the result of a single dataset (specified by colors) and method (specified by markers), where the x and y-axis show the accuracy and the maximum memory requirement, respectively. Due to the high memory requirement of MCFS and Lap_score on the MNIST dataset, which makes it difficult to compare the other results (upper plots), we zoom in on this region in the bottom plots

Fig. 8

Running time comparison on an artificially generated dataset. The features are generated using a standard normal distribution and the number of samples for each case is 5000

5.2 Running time comparison on an artificially generated dataset

In this section, we compare the running time of the autoencoder-based feature selection methods on an artificially generated dataset. Since both the number of features and the number of samples differ across the benchmark datasets, it is difficult to compare the efficiency of the methods clearly. This experiment aims at comparing the models' real wall-clock training time in a controlled environment with respect to the number of input features and hidden neurons. In addition, in Appendix E, we have conducted another experiment that evaluates the methods on a very large artificial dataset in terms of both computational resources and accuracy.

In this experiment, we aim to compare the speed of QuickSelection versus other autoencoder-based feature selection methods for different numbers of input features. We run all of them on an artificially generated dataset with various numbers of features and 5000 samples, for 100 training epochs (10 epochs for QuickSelection\(_{10}\)). The features of this dataset are generated using a standard normal distribution. In addition, we aim to compare the running time of different structures for these algorithms. The specifications of the network structure for each method, the computational resources used for feature selection, and the corresponding results can be seen in Fig. 8.

For CAE, we consider two different values of K. The structure of CAE depends on this value. CAE has two hidden layers including a concrete selector and a decoder that have K and 1.5K neurons, respectively. Therefore, by increasing the number of selected features, the running time of the model will also increase. In addition, we consider the cases of CAE with 1000 and 10000 hidden neurons in the decoder layer (manually changed in the code) to be able to compare it with the other models. We also measure the running time of performing feature selection with CAE using only a single CPU core. It can be seen from Fig. 8 that its running time is considerably high. The general structures of AEFS, QuickSelection, and FCAE are similar in terms of the number of hidden layers. They are basic autoencoders with a single hidden layer. For AEFS, we considered three structures with different numbers of hidden neurons, including 300, 1000, and 10000. Finally, for QuickSelection and FCAE, we consider two different values for the number of hidden neurons, including 1000 and 10000.

It can be observed that the running time of AEFS with 1000 and 10000 hidden neurons using a GPU is much larger than the running time of QuickSelection\(_{100}\) with similar numbers of hidden neurons using only a single CPU core. The same pattern is also visible in the case of CAE with 1000 and 10000 hidden neurons, and it repeats in the case of FCAE with 10000 hidden neurons. The running time of FCAE with 1000 hidden neurons is approximately similar to that of QuickSelection\(_{100}\). However, the difference between these two methods is more significant when we increase the number of hidden neurons to 10000. This is mainly due to the fact that the difference between the number of parameters of QuickSelection and the other methods becomes much higher for large hidden layer sizes. Besides, these observations show that the running time of QuickSelection does not change significantly when increasing the number of hidden neurons.

As we have also mentioned before, QuickSelection gives the ranking of the features as output. Therefore, unlike CAE, which should be run separately for different values of K, QuickSelection is not affected by the choice of K, because it computes the importance of all features at the same time, after training finishes. In short, QuickSelection\(_{10}\) has the least running time among the autoencoder-based methods while being independent of the value of K. In addition, unlike the other methods, the running time of QuickSelection is not sensitive to the number of hidden neurons, since the number of parameters is low even for a very large hidden layer.

5.3 Neuron strength analysis

In this section, we discuss the validity of neuron strength as a measure of feature importance. We observe the evolution of the network to analyze how the strength of important and unimportant neurons changes during training.

We argue that the most important features that lead to the highest accuracy of feature selection are the features corresponding to neurons with the highest strength. In a neural network, weight magnitude is a metric that shows the importance of each connection (Kavzoglu and Mather 1998). This stems from the fact that weights with a small magnitude have a small effect on the performance of the model. At the beginning of training, we initialize all connections to a small random value. Therefore, all the neurons have almost the same strength/importance. As the training proceeds, some connections grow to a larger value while some others are pruned from the network during the dynamic connections removal and regrowth of the SET training procedure. The growth of the stable connection weights demonstrates their significance in the performance of the network. As a result, the neurons connected to these important weights contain important information. In contrast, the magnitude of the weights connected to unimportant neurons gradually decreases until they are removed from the network. In short, important neurons receive connections with a larger magnitude. As a result, neuron strength, which is the summation of the magnitude of weights connected to a neuron, can be a measure of the importance of an input neuron and its corresponding feature.

To support our claim, we observe the evolution of the neurons' strength on the Madelon dataset. This choice is made due to the clear distinction between informative and non-informative features in the Madelon dataset. As described earlier, this dataset has 20 informative features, and the rest of the features are non-informative noise. We consider the 20 most informative and the 20 least informative features detected by \(QS_{10}\) and \(QS_{100}\), and monitor their strength during training (as observed in Fig. 3, the maximum accuracy is achieved using the 20 most informative features, while the lowest accuracy is achieved using the least important features). The features selected by \(QS_{10}\) are also monitored after the algorithm finishes (epoch 10) until epoch 100, in order to compare the quality of the features selected by \(QS_{10}\) with those of \(QS_{100}\). In other words, we extract the indices of the important features using \(QS_{10}\), continue the training without making any changes in the network, and monitor how the strength of the neurons corresponding to the selected indices evolves after epoch 10. The results are presented in Fig. 9. At initialization (epoch 0), the strength of all these neurons is almost similar and below 5. As the training starts, the strength of significant neurons increases, while the strength of unimportant neurons does not change significantly. As can be seen in Fig. 9, some of the important features selected by \(QS_{10}\) are not among those of \(QS_{100}\); this can explain the difference in the performance of these two methods in Tables 2 and 3. However, \(QS_{10}\) is able to detect a large majority of the features found by \(QS_{100}\); these features are among the most important of the final 20 selected features. Therefore, we can conclude that most of the important features are detectable by QuickSelection, even in the first few epochs of the algorithm.

Fig. 9

Strength of the 20 most informative and non-informative features of Madelon dataset, selected by \(QS_{10}\) and \(QS_{100}\). Each line in the plots corresponds to the strength values of a selected feature by \(QS_{10}\)/\(QS_{100}\) during training. The features selected by \(QS_{10}\) have been observed until epoch 100 to compare the quality of these features with \(QS_{100}\)

6 Conclusion

In this paper, a novel method (QuickSelection) for energy-efficient unsupervised feature selection has been proposed. It introduces neuron strength in sparse neural networks as a measure of feature importance. Besides, it proposes the sparse DAE to accurately model the data distribution and to rank all features simultaneously based on their importance. By using sparse layers instead of dense ones from the beginning, the number of parameters drops significantly. As a result, QuickSelection requires much less memory and fewer computational resources than its equivalent dense model and its competitors. For example, on the low-dimensional datasets, including Coil20, Isolet, HAR, and Madelon, and for all values of K, QuickSelection\(_{100}\), which runs on one CPU core, is at least 4 times faster than its direct competitor, CAE, which runs on a GPU, while having a close performance in terms of classification and clustering accuracy. We empirically demonstrate that QuickSelection achieves the best trade-off between clustering accuracy, classification accuracy, maximum memory requirement, and running time among the methods considered. Besides, our proposed method requires the least amount of energy among the autoencoder-based methods considered.

The main drawback of the proposed method is the lack of a parallel implementation. The running time of QuickSelection could be further decreased by an implementation that takes advantage of multi-core CPUs or GPUs. We believe that an interesting direction for future research would be to study the effects of sparse training and neuron strength in other types of autoencoders for feature selection, e.g., CAE. Nevertheless, this paper has only started to explore one of the most important characteristics of QuickSelection, i.e., scalability, and we intend to explore further its full potential on datasets with millions of features. Besides, this paper showed that we can perform feature selection using neural networks efficiently in terms of computational cost and memory requirement. This can pave the way for reducing the ever-increasing computational costs of deep learning models imposed on data centers. As a result, this will not only save the energy costs of processing high-dimensional data but also ease the challenges of high energy consumption imposed on the environment.