Quick and Robust Feature Selection: the Strength of Energy-efficient Sparse Training for Autoencoders

Major complications arise from the recent increase in the amount of high-dimensional data, including high computational costs and memory requirements. Feature selection, which identifies the most relevant and informative attributes of a dataset, has been introduced as a solution to this problem. Most existing feature selection methods are computationally inefficient; inefficient algorithms lead to high energy consumption, which is not desirable for devices with limited computational and energy resources. In this paper, a novel and flexible method for unsupervised feature selection is proposed. This method, named QuickSelection, introduces the strength of a neuron in sparse neural networks as a criterion to measure feature importance. This criterion, blended with sparsely connected denoising autoencoders trained with the sparse evolutionary training procedure, derives the importance of all input features simultaneously. We implement QuickSelection in a purely sparse manner, as opposed to the typical approach of using a binary mask over connections to simulate sparsity, which results in a considerable speed-up and memory reduction. When tested on several benchmark datasets, including five low-dimensional and three high-dimensional datasets, the proposed method achieves the best trade-off among classification and clustering accuracy, running time, and maximum memory usage, compared with widely used approaches for feature selection. Moreover, our proposed method requires the least amount of energy among the state-of-the-art autoencoder-based feature selection methods.


Introduction
In the last few years, considerable attention has been paid to the problem of dimensionality reduction, and many approaches have been proposed [53]. There are two main techniques for reducing the number of features of a high-dimensional dataset: feature extraction and feature selection. Feature extraction focuses on transforming the data into a lower-dimensional space. This transformation is done through a mapping which results in a new set of features [40]. Feature selection reduces the feature space by selecting a subset of the original attributes without generating new features [12]. Based on the availability of labels, feature selection methods are divided into three categories: supervised [2,12], semi-supervised [58,48], and unsupervised [43,16]. Supervised feature selection algorithms try to maximize some function of predictive accuracy given the class labels. In unsupervised learning, the search for discriminative features is done blindly, without having the class labels. Therefore, unsupervised feature selection is considered a much harder problem [16].
Feature selection methods improve the scalability of machine learning algorithms since they reduce the dimensionality of the data. Besides, they reduce the ever-increasing demands for computational and memory resources introduced by the emergence of big data. This can lead to a considerable decrease in energy consumption in data centers, which eases not only the problem of high energy costs in data centers but also the critical challenges imposed on the environment [56]. As outlined by the High-Level Expert Group on Artificial Intelligence (AI) [22], environmental well-being is one of the requirements of a trustworthy AI system. The development, deployment, and use of an AI system should be assessed to ensure that it functions in the most environmentally friendly way possible. For example, resource usage and energy consumption during training can be evaluated.
However, a challenging problem that arises in the feature selection domain is that selecting features from datasets containing a huge number of features and samples may require a massive amount of memory, computational, and energy resources. Since most of the existing feature selection techniques were designed to process small-scale data, their efficiency can be downgraded with high-dimensional data [9]. Only a few studies have focused on designing feature selection algorithms that are efficient in terms of computation [52,1]. The main contributions of this paper can be summarized as follows:

• We propose a new fast and robust unsupervised feature selection method, named QuickSelection. As briefly sketched in Figure 1, it has two key components: (1) inspired by node strength in graph theory, it proposes the neuron strength of sparse neural networks as a criterion to measure feature importance; and (2) it introduces sparsely connected denoising autoencoders (sparse DAEs) trained from scratch with the sparse evolutionary training procedure to model the data distribution efficiently. The sparsity imposed before training also reduces the amount of required memory and the training running time.

• We implement QuickSelection in a completely sparse manner in Python, using the SciPy library and Cython, rather than using a binary mask over connections to simulate sparsity. This ensures minimum resource requirements, i.e., just Random-Access Memory (RAM) and a Central Processing Unit (CPU), without demanding a Graphics Processing Unit (GPU).

Fig. 1: A high-level overview of the proposed method, "QuickSelection". (a) At epoch 0, connections are randomly initialized. (b) After initializing the sparse structure, we start the training procedure. After 5 epochs, some connections have been changed during the training procedure, and as a result, the strength of some neurons has increased or decreased. At epoch 10, the network has converged, and we can observe which neurons are important (larger and darker blue circles) and which are not. (c) When the network has converged, we compute the strength of all input neurons. (d) Finally, we select the K features corresponding to the neurons with the highest strength values.
The experiments performed on 8 benchmark datasets suggest that QuickSelection has several advantages over the state-of-the-art, as follows:

• It is the first or second-best performer in terms of both classification and clustering accuracy in almost all scenarios considered.

• It is the best performer in terms of the trade-off between classification and clustering accuracy, running time, and memory requirements.

• The proposed sparse architecture for feature selection has at least one order of magnitude fewer parameters than its dense equivalent. This leads to the outstanding fact that the wall-clock training time of QuickSelection running on a CPU is smaller than the wall-clock training time of its autoencoder-based competitors running on a GPU in most cases.

• Last but not least, QuickSelection's computational efficiency gives it the minimum energy consumption among the autoencoder-based feature selection methods considered.
2 Related Work

Feature Selection
The literature on feature selection shows a variety of approaches that can be divided into three major categories: filter, wrapper, and embedded methods. Filter methods use a ranking criterion to score the features and then remove the features with scores below a threshold. These criteria can be the Laplacian score [27], correlation, mutual information [12], and many other scoring methods such as Bayesian scoring functions, t-test scoring, and information theory-based criteria [33]. These methods are usually fast and computationally efficient. Wrapper methods evaluate different subsets of features to detect the best subset. Wrapper methods usually give better performance than filter methods, since they use a predictive model to score each subset of features; however, this results in high computational complexity. Seminal contributions to this type of feature selection have been made by Kohavi and John [30], who used a tree structure to evaluate subsets of features. Embedded methods unify the learning process and feature selection [31]. Multi-Cluster Feature Selection (MCFS) [11] is an unsupervised embedded feature selection method, which selects features using spectral regression with L1-norm regularization. A key limitation of this algorithm is that it is computationally intensive, since it depends on computing the eigenvectors of the data similarity matrix and then solving an L1-regularized regression problem for each eigenvector [19]. Unsupervised Discriminative Feature Selection (UDFS) [57] is another unsupervised embedded feature selection algorithm that simultaneously utilizes both feature and discriminative information to select features [37].

Autoencoders for Feature Selection
In the last few years, many deep learning-based models have been developed to select features from the input data using the learning procedure of deep neural networks [38]. In [42], a Multi-Layer Perceptron (MLP) is augmented with a pairwise-coupling layer to feed each input feature along with its knockoff counterpart into the network. After training, the authors use the filter weights of the pairwise-coupling layer to rank the input features. Autoencoders, which are generally known as a strong tool for feature extraction [8], are also being explored to perform unsupervised feature selection.
In [24], the authors combine autoencoder regression and a group lasso task for unsupervised feature selection, naming the method AutoEncoder Feature Selector (AEFS). In [15], an autoencoder is combined with three variants of structural regularization to perform unsupervised feature selection. These regularizations are based on slack variables, weights, and gradients, respectively. Another recently proposed autoencoder-based embedded method is feature selection with the Concrete Autoencoder (CAE) [5]. This method selects features by learning a concrete distribution over the input features. The authors proposed a concrete selector layer that selects a linear combination of input features, which converges to a discrete set of K features during training. In [49], the authors showed that the large set of parameters in CAE might lead to over-fitting when the number of samples is limited. In addition, CAE may select the same feature more than once, since there is no interaction between the neurons of the selector layer. To mitigate these problems, they proposed a concrete neural network feature selection (FsNet) method, which includes a selector layer and a supervised deep neural network. The training procedure of FsNet considers reducing the reconstruction loss and maximizing the classification accuracy simultaneously. In our research, we focus mostly on unsupervised feature selection methods. The Denoising Autoencoder (DAE) was introduced to solve the problem of learning the identity function in autoencoders. This problem is most likely to happen when there are more hidden neurons than inputs [4]. As a result, the network output may be equal to the input, which makes the autoencoder useless. DAEs solve this problem by introducing noise on the input data and trying to reconstruct the original input from its noisy version [54]. As a result, DAEs learn a representation of the input data that is robust to small irrelevant changes in the input. In this research, we use the ability of this type of neural network to encode the input data distribution and select the most important features. Moreover, we demonstrate the effect of noise addition on the feature selection results.

Sparse Training
Deep neural networks usually have at least some fully-connected layers, which results in a large number of parameters. In a high-dimensional space, this is not desirable since it may cause a significant decrease in training speed and a rise in memory requirements. To tackle this problem, sparse neural networks have been proposed. Pruning dense neural networks is one of the most well-known methods to achieve a sparse neural network [35,26]. In [25], Han et al. start from a pre-trained network, prune the unimportant weights, and retrain the network. Although this method can output a network with the desired sparsity level, the minimum computational cost is as much as the cost of training a dense network. To reduce this cost, Lee et al. [36] start with a dense neural network and prune it prior to training based on connection sensitivity; then, the sparse network is trained in the standard way. However, starting from a dense neural network requires at least the memory size of the dense neural network and the computational resources for one training iteration of a dense network. Therefore, this method might not be suitable for low-resource devices.
In 2016, Mocanu et al. [44] introduced the idea of training sparse neural networks from scratch, a concept which has recently started to be known as sparse training. The sparse connectivity pattern was fixed before training using graph theory, network science, and data statistics. While it showed promising results, outperforming the dense counterpart, the static sparsity pattern did not always model the data optimally. To address these issues, in 2018, Mocanu et al. [45] proposed the Sparse Evolutionary Training (SET) algorithm, which makes use of dynamic sparsity during training. The idea is to start with a sparse neural network before training and dynamically change its connections during training in order to automatically model the data distribution. This results in a significant decrease in the number of parameters and increased performance. SET evolves the sparse connections at each training epoch by removing a fraction ζ of the connections with the smallest magnitude and randomly adding new connections in each layer. Bourgin et al. [10] have shown that a sparse MLP trained with SET achieves state-of-the-art results on tabular data in predicting human decisions, outperforming fully-connected neural networks and Random Forest, among others.
In this work, we introduce for the first time sparse training in the world of denoising autoencoders, and we name the newly introduced model the sparse denoising autoencoder (sparse DAE). We train the sparse DAE with the SET algorithm to keep the number of parameters low during training. We then exploit the trained network to select the most important features.

Proposed Method
To address the problem of the high dimensionality of data, we propose a novel method, named "QuickSelection", to select the most informative attributes from the data based on their strength (importance). In short, we train a sparse denoising autoencoder network from scratch in an unsupervised, adaptive manner. Then, we use the trained network to derive the strength of each input neuron, each of which corresponds to an input feature.
The basic idea of our proposed approach is to impose sparse connections on a DAE, a model which has proven successful in the related field of feature extraction, to efficiently handle the computational complexity of high-dimensional data in terms of memory resources. The sparse connections are evolved in an adaptive manner that helps in identifying informative features.
A couple of methods have been proposed for training deep neural networks from scratch using sparse connections and sparse training [14,45,7,46,17,59]. All these methods are implemented using a binary mask over connections to simulate sparsity, since standard deep learning libraries and hardware (e.g., GPUs) are not optimized for sparse weight matrix operations. Unlike the aforementioned methods, we implement our proposed method in a purely sparse manner to meet our goal of actually exploiting the advantages of a very small number of parameters during training. We decided to use SET to train our sparse DAE.
The choice of SET is due to its desirable characteristics. SET is a simple method, yet it achieves satisfactory performance. Unlike other methods that calculate and store information for all the network weights, including the non-existing ones, SET is memory-efficient: it stores the weights of the existing sparse connections only. It also does not require high computational complexity, as the evolution procedure depends on the magnitude of the existing connections only. This is a favourable advantage for our goal of selecting informative features quickly. In the following subsections, we first present the structure of our proposed sparse denoising autoencoder network and then explain the feature selection method. The pseudocode of our proposed method can be found in Algorithm 1.

Sparse DAE
Structure. As the goal of our proposed method is to do fast feature selection in a memory-efficient way, we consider the model with the least possible number of hidden layers, i.e., one hidden layer, as more layers mean more computation. Initially, sparse connections between two consecutive layers of neurons are initialized with an Erdős-Rényi random graph, in which the probability of the connection between two neurons is given by

P(W^l_ij) = ε (n^l + n^(l-1)) / (n^l × n^(l-1)),

where ε denotes the parameter that controls the sparsity level, n^l denotes the number of neurons at layer l, and W^l_ij is the connection between neuron i in layer l−1 and neuron j in layer l, stored in the sparse weight matrix W^l.
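The initialization above can be sketched as follows. This is our own simplified sketch using a dense random mask converted to a SciPy sparse matrix, not the paper's purely sparse Cython implementation; the function name and the weight scale of 0.1 are our assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix

def init_sparse_layer(n_prev, n_curr, epsilon=13, rng=None):
    """Erdos-Renyi sparse initialization: connection (i, j) between layer
    l-1 and layer l exists with probability
    p = epsilon * (n_prev + n_curr) / (n_prev * n_curr)."""
    rng = np.random.default_rng(rng)
    p = epsilon * (n_prev + n_curr) / (n_prev * n_curr)
    mask = rng.random((n_prev, n_curr)) < p
    weights = rng.normal(0.0, 0.1, size=(n_prev, n_curr)) * mask
    return csr_matrix(weights)  # only the nonzero weights are stored
```

For the MNIST-sized layer used later (784 inputs, 1000 hidden neurons) and ε = 13, the expected density is 13 × (784 + 1000) / (784 × 1000) ≈ 3%, i.e., more than one order of magnitude fewer parameters than the dense equivalent.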
Input denoising. We use the additive noise model to corrupt the original data:

x̃ = x + nf · N(µ, σ²),

where x is the input data vector from dataset X, nf (noise factor) is a hyperparameter of the model which determines the level of corruption, and N(µ, σ²) is Gaussian noise. After corrupting the data, we derive the hidden representation h from this corrupted input. Then, the output z is reconstructed from the hidden representation. Formally, the hidden representation h and the output z are computed as follows:

h = a(W¹x̃ + b¹),
z = a(W²h + b²),

where W¹ and W² are the sparse weight matrices of the hidden and output layers respectively, b¹ and b² are the bias vectors of their corresponding layers, and a is the activation function of each layer. The objective of our network is to reconstruct the original features at the output. For this reason, we use the mean squared error (MSE) as the loss function to measure the difference between the original features x and the reconstructed output z:

MSE(x, z) = (1/n⁰) Σᵢ (xᵢ − zᵢ)²,

where n⁰ is the number of input features. Finally, the weights can be optimized using standard training algorithms (e.g., Stochastic Gradient Descent (SGD), AdaGrad, or Adam) with the above reconstruction error.
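A minimal sketch of the denoising forward pass, using dense NumPy arrays for readability (the paper's implementation is purely sparse). We assume a sigmoid hidden layer and a linear output, as used in the experiments; the function names are ours.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W1, b1, W2, b2, nf=0.2, rng=None):
    """Denoising forward pass: corrupt the input with additive Gaussian
    noise, encode with a sigmoid hidden layer, reconstruct with a linear
    output layer, and score the reconstruction against the CLEAN input."""
    rng = np.random.default_rng(rng)
    x_tilde = x + nf * rng.standard_normal(x.shape)  # x~ = x + nf * N(0, 1)
    h = sigmoid(x_tilde @ W1 + b1)                   # hidden representation
    z = h @ W2 + b2                                  # linear reconstruction
    mse = np.mean((x - z) ** 2)                      # reconstruction loss
    return z, mse
```

Note that the loss compares z with the clean input x, not with the corrupted x̃; this is what forces the DAE to learn a noise-robust representation.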
Training procedure. We adapt the SET training procedure [45] to train our proposed network for feature selection. SET works as follows. After each training epoch, a fraction ζ of the smallest positive weights and a fraction ζ of the largest negative weights in each layer are removed. The selection is based on the magnitude of the weights. New connections, in the same amount as the removed ones, are then randomly added to each layer. Therefore, the total number of connections in each layer remains the same, while the number of connections per neuron varies, as represented in Figure 1. The weights of these new connections are initialized from a standard normal distribution. The random addition of new connections does not carry a high risk of failing to find a good sparse connectivity at the end of the training process, because it has been shown in [41] that sparse training can unveil a vast number of very different sparse connectivity local optima which achieve very similar performance.
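One SET evolution step can be sketched as follows. For brevity, this sketch drops the fraction ζ of existing weights closest to zero, a close approximation of removing the smallest positive and largest negative fractions separately; it also uses a zero-masked dense matrix rather than the paper's sparse data structures, and the function name is ours.

```python
import numpy as np

def evolve_connections(W, zeta=0.2, rng=None):
    """One SET evolution step on a masked dense matrix (zeros = absent
    connections): drop the fraction zeta of existing weights closest to
    zero, then regrow the same number at random empty positions."""
    rng = np.random.default_rng(rng)
    W = W.copy()
    rows, cols = np.nonzero(W)
    n_change = int(zeta * rows.size)
    # remove the existing weights with the smallest magnitude
    order = np.argsort(np.abs(W[rows, cols]))[:n_change]
    W[rows[order], cols[order]] = 0.0
    # add the same number of new random connections at empty positions
    empty = np.argwhere(W == 0.0)
    picks = empty[rng.choice(empty.shape[0], size=n_change, replace=False)]
    W[picks[:, 0], picks[:, 1]] = rng.normal(0.0, 0.1, size=n_change)
    return W
```

Because removals and additions are balanced, the total number of connections per layer stays constant across epochs, while their placement adapts to the data.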

Feature Selection
We select the most important features of the data based on the weights of their corresponding input neurons in the trained sparse DAE. Inspired by node strength in graph theory [6], we determine the importance of each neuron based on its strength. We estimate the strength of each input neuron (s_i) as the summation of the absolute weights of its outgoing connections:

s_i = Σ_{j=1..n¹} |W¹_ij|,

where n¹ is the number of neurons of the first hidden layer, and W¹_ij denotes the weight of the connection linking input neuron i to hidden neuron j. As represented in Figure 1, the strength of the input neurons changes during training; we have depicted the strength of the neurons through their size and color. After convergence, we compute the strength of all input neurons; each input neuron corresponds to a feature. Then, we select the features corresponding to the neurons with the K largest strength values:

F*_s = {f_i ∈ F | s_i is among the K largest strength values},

where F and F*_s are the original feature set and the final selected features respectively, f_i is the i-th feature of F, and K is the number of features to be selected. In addition, by sorting all the features based on their strength, we can derive the importance of every feature in the dataset. In short, we are able to rank all input features by training a single sparse DAE model just once.
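The selection step itself is a few lines; the sketch below is our own illustration (function name assumed), taking the trained input-layer weight matrix as a dense array for simplicity.

```python
import numpy as np

def select_features(W1, k):
    """Rank input neurons by strength s_i = sum_j |W1[i, j]| (the sum of
    absolute outgoing weights) and return the indices of the top-k."""
    strength = np.abs(W1).sum(axis=1)    # one value per input neuron
    ranked = np.argsort(strength)[::-1]  # strongest first
    return ranked[:k], strength
```

Because `ranked` orders all input neurons, the same call also yields a full importance ranking of every feature, not only the selected K.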
For a deeper understanding of the above process, we analyze the strength of each input neuron in a 2D map on the MNIST dataset. This is illustrated in Figure 2. At the beginning of training, all the neurons have small strength, due to the random initialization of each weight to a small value. During the network evolution, stronger connections gradually become linked to important features. We can observe that after ten epochs, the neurons in the center of the map become stronger. This pattern is similar to the pattern of the MNIST data, in which most of the digits appear in the middle of the picture.
We studied other metrics for estimating neuron importance, such as the strength of output neurons, the degree of input and output neurons, and the strength and degree of neurons combined. However, in our experiments, all these metrics were outperformed by the strength of the input neurons in terms of accuracy and stability.

Experiments
In order to verify the validity of our proposed method, we carry out several experiments. In this section, we first state the settings of the experiments, including hyperparameters and datasets. Then, we perform feature selection with QuickSelection and compare the results with other methods, including MCFS, the Laplacian score, and three autoencoder-based feature selection methods. After that, we run different analyses on QuickSelection to understand its behavior. Finally, we discuss the scalability of QuickSelection and compare it with the other methods considered.

Settings
The experiment settings, including the values of the hyperparameters, implementation details, the structure of the sparse DAE, the datasets we use for evaluation, and the evaluation metrics, are as follows.

Hyperparameters and Implementation
For feature selection, we consider the case of the simplest sparse DAE, with one hidden layer consisting of 1000 neurons. This choice is made due to our main objective of decreasing the model complexity and the number of parameters. The activation functions used for the hidden and output layer neurons are "Sigmoid" and "Linear" respectively, except for the Madelon dataset, where we use "Tanh" as the output activation function. We train the network with SGD and a learning rate of 0.01. The hyperparameter ζ, the fraction of weights to be removed in the SET procedure, is 0.2. Also, ε, which determines the sparsity level, is set to 13. We set the noise factor (nf) to 0.2 in the experiments. To improve the learning process of our network, we standardize the features of each dataset such that each attribute has zero mean and unit variance. However, for the SMK and PCMAC datasets, we use Min-Max scaling. The preprocessing method for each dataset is determined with a small experiment comparing the two preprocessing methods.
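The two preprocessing options compared above are standard; a minimal sketch (our own helper names, with a guard against zero-variance features that the text does not specify):

```python
import numpy as np

def standardize(X):
    """Zero mean, unit variance per feature (used for most datasets)."""
    sd = X.std(axis=0)
    return (X - X.mean(axis=0)) / np.where(sd == 0, 1.0, sd)

def min_max(X):
    """Scale each feature to [0, 1] (used for SMK and PCMAC)."""
    span = X.max(axis=0) - X.min(axis=0)
    return (X - X.min(axis=0)) / np.where(span == 0, 1.0, span)
```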
We implement sparse DAE and QuickSelection in a purely sparse manner in Python, using the SciPy library [28] and Cython. We compare our proposed method to MCFS, the Laplacian score (LS), AEFS, and CAE, which have been discussed in Section 2. We also performed some experiments with UDFS; however, since we were not able to obtain many of the results within the considered time limit (24 hours), we do not include these results in the paper. We have used the scikit-feature repository for the implementations of MCFS and the Laplacian score [37]. Also, we use the implementations of feature selection with CAE and AEFS from GitHub. In addition, to highlight the advantages of using sparse layers, we compare our results with a fully-connected autoencoder (FCAE), using the neuron strength as a measure of the importance of each feature. To have a fair comparison, the structure of this network is similar to our sparse DAE: one hidden layer containing 1000 neurons, implemented using TensorFlow. Furthermore, we have studied the effect of other components of QuickSelection, including input denoising and the SET training algorithm, in Appendices B.1 and F, respectively.
For all the other methods (except FCAE, for which all the hyperparameters and preprocessing are similar to QuickSelection), we scale the data between zero and one, since it yields better performance than data standardization for these methods. The hyperparameters of the aforementioned methods have been set similar to the ones reported in the corresponding code or paper. For AEFS, we tuned the regularization hyperparameter between 0.0001 and 1000, since this method is sensitive to this value. We perform our experiments on a single CPU core, an Intel Xeon Processor E5 v4, and for the methods that require a GPU, we use an NVIDIA Tesla P100.

Datasets
We evaluate the performance of our proposed method on eight datasets, including five low-dimensional datasets and three high-dimensional ones. Table 1 illustrates the characteristics of these datasets.
- COIL-20 [47] consists of 1440 images taken from 20 objects (72 poses for each object).
- Madelon [23] is an artificial dataset with 5 informative features and 15 linear combinations of them. The rest of the features are distractor features, since they have no predictive power.
- Human Activity Recognition (HAR) [3] was created by collecting the observations of 30 subjects performing 6 activities, such as walking, standing, and sitting. The data was recorded by a smartphone attached to the subjects' bodies.
- Isolet [18] was created with the spoken name of each letter of the English alphabet.
- MNIST [34] is a database of 28x28 images of handwritten digits.
- SMK-CAN-187 [50] is a gene expression dataset with 19993 features. This dataset compares smokers with and without lung cancer.
- GLA-BRA-180 [51] consists of the expression profiles of stem cell factor, useful to determine tumor angiogenesis.
- PCMAC [32] is a subset of the 20 Newsgroups data.

Evaluation Metrics
To evaluate our model, we compute two metrics: clustering accuracy and classification accuracy. To derive the clustering accuracy [37], we first perform K-means using the subset of the dataset corresponding to the selected features and get the cluster labels. Then, we find the best match between the class labels and the cluster labels and report the clustering accuracy. We repeat the K-means algorithm 10 times and report the average clustering results, since K-means may converge to a local optimum.
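The "best match" step can be computed with the Hungarian algorithm on the contingency table between cluster and class labels. The sketch below is one common way to implement this metric; the paper does not specify its exact implementation, and the function name is ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Best one-to-one match between cluster labels and class labels,
    found with the Hungarian algorithm on the contingency table."""
    n = int(max(y_true.max(), y_pred.max())) + 1
    cont = np.zeros((n, n), dtype=int)
    for t, p in zip(y_true, y_pred):
        cont[t, p] += 1
    rows, cols = linear_sum_assignment(-cont)  # maximize matched counts
    return cont[rows, cols].sum() / y_true.size
```

For example, if K-means recovers the classes perfectly but with permuted label ids, this metric still reports an accuracy of 1.0.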
To compute the classification accuracy, we use a supervised classification model named "Extremely randomized trees" (ExtraTrees), which is an ensemble learning method that fits several randomized decision trees on different parts of the data [21]. This choice of classification method is made due to the computational efficiency of the ExtraTrees classifier. To compute the classification accuracy, we first derive the K selected features using each feature selection method considered. Then, we train the ExtraTrees classifier with 50 trees as estimators on the K selected features of the training set. Finally, we compute the classification accuracy on the unseen test data. For the datasets that do not contain a test set, we split the data into training and testing sets, using 80% of the original samples for the training set and the remaining 20% for the testing set. In addition, we have evaluated the classification accuracy of feature selection using the random forest classifier [39] in Appendix G.

Feature Selection
We select 50 features from each dataset except Madelon, for which we select only 20 features since most of its features are non-informative noise. Then, we compute the clustering and classification accuracy on the selected subset of features; the more informative the selected features, the higher the accuracy achieved. The clustering and classification accuracy results of our model and the other methods are summarized in Tables 2 and 3, respectively. These results are an average of 5 runs for each case. For the autoencoder-based feature selection methods, including CAE, AEFS, and FCAE, we consider 100 training epochs. However, we present the results of QuickSelection at epochs 10 and 100, named QuickSelection_10 and QuickSelection_100, respectively. This is mainly due to the fact that our proposed method is able to achieve a reasonable accuracy after the first few epochs. Moreover, we perform hyperparameter tuning for ε and ζ using grid search over a limited number of values for all datasets; the best result is presented in Tables 2 and 3 as QuickSelection_best. The results of hyperparameter selection can be found in Appendix B.2. However, we do not perform hyperparameter optimization for the other methods (except AEFS). Therefore, in order to have a fair comparison between all methods, we do not compare the results of QuickSelection_best with the other methods.

Table 2: Clustering accuracy (%) using 50 selected features (except Madelon, for which we select 20 features). On each dataset, the bold entry is the best performer, and the italic one is the second-best performer.
From Table 2, it can be observed that QuickSelection outperforms all the other methods on Isolet, Madelon, and PCMAC in terms of clustering accuracy, while being the second-best performer on Coil20, MNIST, SMK, and GLA. Furthermore, on the HAR dataset, it is the best performer among all the autoencoder-based feature selection methods considered. As shown in Table 3, QuickSelection outperforms all the other methods on Coil20, SMK, and GLA in terms of classification accuracy, while being the second-best performer on the other datasets. From these tables, it is clear that QuickSelection outperforms its equivalent dense network (FCAE) in terms of classification and clustering accuracy on all datasets. It can also be observed in Tables 2 and 3 that Lap_score performs poorly when the number of samples is large (e.g., MNIST). However, in tasks with a low number of samples and features, even in noisy environments such as Madelon, Lap_score performs relatively well. In contrast, CAE performs poorly in noisy environments (e.g., Madelon), while it has a decent classification accuracy on the other datasets considered. It is the best or second-best performer on five datasets in terms of classification accuracy when K = 50. AEFS and FCAE cannot achieve a good performance on Madelon, either. We believe that the dense layers are the main cause of this behaviour; the dense connections try to learn all input features, even the noisy ones, and therefore fail to detect the most important attributes of the data. MCFS performs decently on most of the datasets in terms of clustering accuracy. This is due to the main objective of MCFS, which is to preserve the multi-cluster structure of the data. However, this method also performs poorly on datasets with a large number of samples (e.g., MNIST) and noisy features (e.g., Madelon).
However, since evaluating the methods using a single value of K might not be enough for comparison, we performed another experiment using different values of K. In Appendix A.1, we test other values of K on all datasets, and compare the methods in terms of classification accuracy, clustering accuracy, running time, and maximum memory usage. The results of this appendix are summarized in Section 5.1.

Relevancy of Selected Features
To illustrate the ability of QuickSelection to find informative features, we thoroughly analyze the results on the Madelon dataset, which has the interesting property of containing many noisy features. We perform the following experiments. First, we sort the features based on their strength. Then, we remove the features one by one, from the least important feature to the most important one. In each step, we train an ExtraTrees classifier on the remaining features. We then repeat this experiment, removing the features from the most important one to the least important one. The classification accuracy for both experiments can be seen in Figure 3. On the left side of Figure 3, we can observe that removing the least important features, which are noise, increases the accuracy. The maximum accuracy occurs after we remove 480 noisy features. This corresponds to the moment when all the noisy features are supposed to be removed. In Figure 3 (right), it can be seen that removing the features in reverse order results in a sudden decrease in the classification accuracy. After removing 20 features (indicated by the vertical blue line), the classifier performs like a random classifier. We conclude that QuickSelection is able to find the most informative features in the correct order.
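The removal experiment above reduces to a simple loop; a minimal sketch with our own names, where `score_fn` stands in for training and evaluating a classifier (e.g., ExtraTrees) on the given feature subset:

```python
def removal_curve(ranked_features, score_fn):
    """ranked_features: feature indices sorted from least to most
    important; score_fn(subset) returns e.g. classification accuracy.
    Drops one feature at a time and records the score after each drop."""
    remaining = list(ranked_features)
    scores = []
    while len(remaining) > 1:
        remaining.pop(0)  # drop the least important feature left
        scores.append(score_fn(remaining))
    return scores
```

Passing the ranking in reverse order yields the second curve of Figure 3, where the most important features are removed first.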
To better show the relevancy of the features found by QuickSelection, we visualize the 50 features selected on the MNIST dataset per class, by averaging their corresponding values from all data samples belonging to one class. As can be observed in Figure 4, the resulting shape resembles the actual samples of the corresponding digit. We discuss the results of all classes at different training epochs in more detail in Appendix C.

Accuracy and Computational Efficiency Trade-off
In this section, we perform a thorough comparison between the models in terms of running time, energy consumption, memory requirement, clustering accuracy, and classification accuracy. In short, we change the number of features to be selected (K) and measure the accuracy, running time, and maximum memory usage across all methods. Then, we compute two scores to summarize the results and compare the methods.
We analyse the effect of changing K on QuickSelection performance and compare with other methods; the results are presented in Figure 10 in Appendix A.1. Figure 10a compares the performance of all methods when K changes between 5 and 100 on the low-dimensional datasets, including Coil20, Isolet, HAR, and Madelon. Figure 10b illustrates the performance comparison for K between 5 and 300 on the MNIST dataset, which is also low-dimensional; we discuss this dataset separately since its large number of samples sets it apart from the other low-dimensional datasets. Figure 10c presents a similar comparison on three high-dimensional datasets: SMK, GLA, and PCMAC. It should be noted that, to have a fair comparison, we use a single CPU core to run these methods; however, since the implementations of CAE and AEFS are optimized for parallel computation, we use a GPU to run these two. We also measure the running time of feature selection with CAE on CPU.
To compare the memory requirement of each method, we profile the maximum memory usage during feature selection for different values of K. The results are presented in Figure 11 in Appendix A.1, derived using a Python library named resource. Besides, to compare the memory occupied by the autoencoder-based models, we count the number of parameters of each model. The results are shown in Figure 14 in Appendix A.3.
However, comparing all of these methods only by looking at the graphs in Figures 10 and 11 is not straightforward, and the trade-off between the factors is not clear. For this reason, we compute two scores that take all these metrics into account simultaneously.
Score 1. To compute this score, on each dataset and for each value of K, we rank the methods based on running time, memory requirement, clustering accuracy, and classification accuracy. Then, we give a score of 1 to the best and second-best performers; this is mainly because, in most cases, the difference between these two is negligible. After that, we sum these scores for each method over all datasets. The results are presented in Figure 5a; to ease the comparison of the different components of the score, a heat-map visualization is presented in Figure 5c. The cumulative score for each method consists of four parts, corresponding to the four metrics considered. As is evident in this figure, QuickSelection (the cumulative score of QuickSelection 10 and QuickSelection 100) outperforms all other methods by a significant gap. Our proposed method achieves the best trade-off between accuracy, running time, and memory usage among all these methods. Laplacian score, the second-best performer, has a decent performance in terms of running time and memory, but it cannot perform well in terms of accuracy. On the other hand, CAE has a satisfactory performance in terms of accuracy; however, it is not among the best two performers in terms of computational resources for any value of K. Finally, FCAE and AEFS cannot achieve a decent performance compared to the other methods. A more detailed version of Figure 5a is available in Figure 12 in Appendix A.1.
Score 2. In addition to the ranking-based score, we calculate another score that considers all the methods, even the lower-ranking ones. With this aim, on each dataset and for each value of K, we normalize each performance metric between 0 and 1, using the values of the best and worst performers on that metric. For the accuracy metrics, a value of 1 means the highest accuracy; for memory and running time, a value of 1 means the least memory requirement and the shortest running time, respectively. After normalizing the metrics, we accumulate the normalized values for each method over all datasets. The results are depicted in Figure 5b. As can be seen in this diagram, QuickSelection (we consider the results of QuickSelection 100) outperforms the other methods by a large margin. CAE has a performance close to QuickSelection in terms of both accuracy metrics, while it performs poorly in terms of memory and running time. In contrast, Lap_score is computationally efficient while having the lowest accuracy score. In summary, it can be observed in Figure 5b that QuickSelection achieves the best trade-off of the four objectives among the considered methods.
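The per-metric normalization behind Score 2 can be sketched in a few lines; the metric values below are hypothetical, serving only to illustrate the opposite normalization direction for cost metrics versus accuracy metrics:

```python
import numpy as np

# Hypothetical raw measurements for four methods on one dataset and one K.
runtime = np.array([12.0, 150.0, 45.0, 300.0])  # seconds (lower is better)
memory  = np.array([0.2, 3.1, 1.0, 5.0])        # GB      (lower is better)
clust   = np.array([0.55, 0.60, 0.48, 0.62])    # accuracy (higher is better)
classif = np.array([0.80, 0.85, 0.75, 0.86])    # accuracy (higher is better)

def normalize(values, higher_is_better):
    """Map the best performer to 1 and the worst to 0, as in Score 2."""
    lo, hi = values.min(), values.max()
    scaled = (values - lo) / (hi - lo)
    return scaled if higher_is_better else 1.0 - scaled

# Accumulate the normalized values per method (here, over one dataset).
score2 = (normalize(runtime, False) + normalize(memory, False)
          + normalize(clust, True) + normalize(classif, True))
```

In the paper's version, these per-dataset sums are further accumulated over all datasets and values of K before plotting Figure 5b.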
Energy Consumption. The next analysis we perform concerns the energy consumption of each method. We estimate the energy consumption of each method from the running time of the corresponding algorithm on each dataset and for each value of K. We assume that each method uses the maximum power of the corresponding computational resources during its running time; therefore, we derive the energy consumption of each method from its running time and the maximum power consumption of the CPU and/or GPU, which can be found in the specifications of the corresponding CPU or GPU model. As shown in Figure 13 in Appendix A.2, Laplacian score feature selection needs the least amount of energy among the methods on all datasets except MNIST. QuickSelection 10 is the best performer on MNIST in terms of energy consumption. Laplacian score and MCFS are sensitive to the number of samples; they cannot perform well on MNIST, either in terms of accuracy or efficiency. The maximum memory usage during feature selection for Laplacian score and MCFS on MNIST is 56 GB and 85 GB, respectively. Therefore, they are not a good choice when the number of samples is large. QuickSelection is the second-best performer in terms of energy consumption, and also the best performer among the autoencoder-based methods; it is sensitive to neither the number of samples nor the number of dimensions.
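The estimate above amounts to multiplying running time by the device's maximum rated power. A minimal sketch follows; the wattages are hypothetical examples, not the hardware used in the paper:

```python
def estimated_energy_kwh(runtime_seconds, max_power_watts):
    """Upper-bound energy estimate: assume the device runs at its maximum
    rated power for the whole duration of the algorithm."""
    return runtime_seconds * max_power_watts / 3.6e6  # 1 kWh = 3.6e6 J

# Hypothetical numbers: a 100-second run on a 65 W CPU vs. a 250 W GPU.
cpu_energy = estimated_energy_kwh(100, 65)
gpu_energy = estimated_energy_kwh(100, 250)
```

Because the estimate is linear in both factors, a GPU method must be proportionally faster than a CPU method to break even on energy.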
Efficiency vs Accuracy. In order to study the trade-off between accuracy and resource efficiency, we perform another in-depth analysis in which we plot accuracy (classification and clustering accuracy) against resource requirements (memory and energy consumption). The results are shown in Figures 6 and 7, which correspond to the energy-accuracy and memory-accuracy trade-offs, respectively. Each point in these plots refers to the result of a particular combination of a specific method and dataset when selecting 50 features (except Madelon, for which we select 20 features). As can be observed in these plots, QuickSelection, MCFS, and Lap_score usually have a good trade-off between the considered metrics; a good trade-off between a pair of metrics maximizes accuracy (classification or clustering) while minimizing computational cost (power consumption or memory requirement). However, when the number of samples increases (on the MNIST dataset), both MCFS and Lap_score fail to maintain a low computational cost and high accuracy; when the dataset size grows, these two methods are therefore not an optimal choice. Among the autoencoder-based methods, in most cases QuickSelection 10 and QuickSelection 100 are among the Pareto optimal points. Another significant advantage of our proposed method is that it outputs the ranking of all features. Therefore, unlike MCFS or CAE, which need the value of K as input, QuickSelection does not depend on K and needs just a single training of the sparse DAE model for any value of K; its computational cost is the same for all values of K, and a single run of the algorithm suffices to obtain the hierarchical importance of the features.

Fig. 6: Estimated power consumption (kWh) vs. accuracy (%) when selecting 50 features (except Madelon, for which we select 20 features). Each point refers to the result of a single dataset (specified by colors) and method (specified by markers), where the x and y-axis show the accuracy and the estimated power consumption, respectively.

Fig. 8: Running time comparison on an artificially generated dataset. The features are generated using a standard normal distribution and the number of samples for each case is 5000. Configurations compared: QS_10 (K=All, n=1000, CPU), QS_100 (K=All, n=1000, CPU), QS_100 (K=All, n=10000, CPU), CAE (K=100, n=150, GPU), CAE (K=300, n=450, GPU), CAE (K=100, n=1000, GPU), CAE (K=100, n=10000, GPU), CAE (K=100, n=150, CPU), AEFS (K=All, n=300, GPU), AEFS (K=All, n=1000, GPU), AEFS (K=All, n=10000, GPU), FCAE (K=All, n=1000, CPU), FCAE (K=All, n=10000, CPU).

Running Time Comparison on an Artificially Generated Dataset
In this section, we compare the running time of the autoencoder-based feature selection methods on an artificially generated dataset. Since both the number of features and the number of samples differ across the benchmark datasets, they do not allow a clean comparison of the efficiency of the methods. This experiment aims to compare the models' real wall-clock training time in a controlled environment with respect to the number of input features and hidden neurons. In addition, in Appendix E, we conduct another experiment evaluating the methods on a very large artificial dataset, in terms of both computational resources and accuracy.
In this experiment, we aim to compare the speed of QuickSelection versus the other autoencoder-based feature selection methods for different numbers of input features. We run all of them on an artificially generated dataset with various numbers of features and 5000 samples, for 100 training epochs (10 epochs for QuickSelection 10). The features of this dataset are generated using a standard normal distribution. In addition, we aim to compare the running time of different structures for these algorithms. The specifications of the network structure for each method, the computational resources used for feature selection, and the corresponding results can be seen in Figure 8.
For CAE, we consider two different values of K, since the structure of CAE depends on this value. CAE has two hidden layers, a concrete selector and a decoder, with K and 1.5K neurons, respectively. Therefore, increasing the number of selected features also increases the running time of the model. In addition, we consider the cases of CAE with 1000 and 10000 hidden neurons in the decoder layer (manually changed in the code) to be able to compare it with the other models. We also measure the running time of feature selection with CAE using only a single CPU core; it can be seen from Figure 8 that this running time is considerably high. The general structures of AEFS, QuickSelection, and FCAE are similar in terms of the number of hidden layers: they are basic autoencoders with a single hidden layer. For AEFS, we consider three structures with different numbers of hidden neurons: 300, 1000, and 10000. Finally, for QuickSelection and FCAE, we consider two different numbers of hidden neurons: 1000 and 10000.
It can be observed that the running time of AEFS with 1000 and 10000 hidden neurons on a GPU is much larger than the running time of QuickSelection 100 with the same numbers of hidden neurons using only a single CPU core. The same pattern is visible for CAE with 1000 and 10000 hidden neurons, and it repeats for FCAE with 10000 hidden neurons. The running time of FCAE with 1000 hidden neurons is approximately similar to that of QuickSelection 100; however, the difference between these two methods is more significant when we increase the number of hidden neurons to 10000. This is mainly because the gap between the number of parameters of QuickSelection and the other methods becomes much larger for large hidden layers. Besides, these observations show that the running time of QuickSelection does not change significantly as the number of hidden neurons increases.
As mentioned before, QuickSelection outputs the ranking of all features. Therefore, unlike CAE, which must be run separately for different values of K, QuickSelection is not affected by the choice of K because it computes the importance of all features at the same time, after training finishes. In short, QuickSelection 10 has the shortest running time among the autoencoder-based methods while being independent of the value of K. In addition, unlike the other methods, the running time of QuickSelection is not sensitive to the number of hidden neurons, since the number of parameters is low even for a very large hidden layer.

Neuron Strength Analysis
In this section, we discuss the validity of neuron strength as a measure of feature importance. We observe the evolution of the network during training to analyze how the strength of important and unimportant neurons changes.
We argue that the features that lead to the highest feature selection accuracy are those corresponding to the neurons with the highest strength. In a neural network, weight magnitude is a metric that indicates the importance of each connection [29]; this stems from the fact that weights with a small magnitude have a small effect on the performance of the model. At the beginning of training, we initialize all connections to small random values, so all neurons have almost the same strength/importance. As training proceeds, some connections grow to larger values while others are pruned from the network during the dynamic connection removal and regrowth of the SET training procedure. The growth of the stable connection weights demonstrates their significance for the performance of the network; as a result, the neurons connected to these important weights contain important information. In contrast, the magnitude of the weights connected to unimportant neurons gradually decreases until they are removed from the network. In short, important neurons receive connections with larger magnitudes. Consequently, neuron strength, which is the sum of the magnitudes of the weights connected to a neuron, can serve as a measure of the importance of an input neuron and its corresponding feature.
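The strength computation itself reduces to a single reduction over the first-layer weight matrix. The matrix below is a random hypothetical stand-in for a trained sparse DAE layer, only illustrating the operation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden = 8, 16

# Hypothetical sparse weight matrix of the first DAE layer
# (input features x hidden neurons), roughly 20% density.
mask = rng.random((n_features, n_hidden)) < 0.2
W = np.where(mask, rng.standard_normal((n_features, n_hidden)), 0.0)

# Strength of an input neuron: sum of the absolute values of its weights.
strength = np.abs(W).sum(axis=1)

# Rank all features at once; the top-K indices are the selected features.
K = 3
selected = np.argsort(strength)[::-1][:K]
```

Because the ranking is over all input neurons simultaneously, one trained model yields the selected subset for every K at no extra cost.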
To support our claim, we observe the evolution of neuron strengths on the Madelon dataset. This choice is made because informative and non-informative features are clearly distinguishable in Madelon: as described earlier, this dataset has 20 informative features, and the rest are non-informative noise. We consider the 20 most informative and the 20 least informative features detected by QS 10 and QS 100, and monitor their strength during training (as observed in Figure 3, the maximum accuracy is achieved using the 20 most informative features, while the lowest accuracy is achieved using the least important ones). The features selected by QS 10 are also monitored after the algorithm finishes (epoch 10) until epoch 100, in order to compare the quality of the features selected by QS 10 with those of QS 100. In other words, we extract the indices of the important features using QS 10, continue training without making any changes to the network, and monitor how the strength of the corresponding neurons evolves after epoch 10. The results are presented in Figure 9. At initialization (epoch 0), the strength of all these neurons is almost similar and below 5. As training starts, the strength of significant neurons increases, while the strength of unimportant neurons does not change significantly. As can be seen in Figure 9, some of the important features selected by QS 10 are not among those of QS 100; this can explain the difference in the performance of these two methods in Tables 2 and 3. However, QS 10 is able to detect a large majority of the features found by QS 100, and these are among the most important of the final 20 selected features. Therefore, we conclude that most of the important features are detectable by QuickSelection even within the first few epochs of the algorithm.

Conclusion
In this paper, a novel method (QuickSelection) for energy-efficient unsupervised feature selection has been proposed. It introduces neuron strength in sparse neural networks as a measure of feature importance. Besides, it proposes a sparse DAE to accurately model the data distribution and to rank all features simultaneously based on their importance. By using sparse layers instead of dense ones from the beginning, the number of parameters drops significantly. As a result, QuickSelection requires much less memory and computational resources than its equivalent dense model and its competitors. For example, on the low-dimensional datasets, including Coil20, Isolet, HAR, and Madelon, and for all values of K, QuickSelection 100, which runs on one CPU core, is at least 4 times faster than its direct competitor, CAE, which runs on a GPU, while having a close performance in terms of classification and clustering accuracy. We empirically demonstrate that QuickSelection achieves the best trade-off between clustering accuracy, classification accuracy, maximum memory requirement, and running time among the methods considered. Besides, our proposed method requires the least amount of energy among the autoencoder-based methods considered.
The main drawback of the proposed method is the lack of a parallel implementation; the running time of QuickSelection could be decreased further by an implementation that takes advantage of multi-core CPUs or GPUs. We believe that studying the effects of sparse training would make for interesting future research. Nevertheless, this paper has just started to explore one of the most important characteristics of QuickSelection, i.e., scalability, and we intend to explore further its full potential on datasets with millions of features. Besides, this paper showed that we can perform feature selection using neural networks efficiently in terms of computational cost and memory requirement. This can pave the way for reducing the ever-increasing computational costs that deep learning models impose on data centers. As a result, this will not only save the energy costs of processing high-dimensional data but will also ease the challenges of high energy consumption imposed on the environment.
From this figure, it can be observed that the improvement from adding noise is more obvious in QuickSelection 100 than in QuickSelection 10. When we add noise to the data, the model needs more time to learn the original structure of the data; hence, it must be run for more epochs to obtain a proper result.

B.2 SET Hyperparameters
As explained in the paper, ζ and ε are the hyperparameters of the SET algorithm, which control the number of connections to remove/add at each topology change and the sparsity level, respectively. The density level corresponding to each ε value on each dataset can be observed in Table 4.
To illustrate the effect of the hyperparameters ζ and ε, we perform a grid search within a small set of values on all of the datasets. The obtained results can be found in Tables 5 and 6. As we increase the ε value, the number of connections in our model increases, and therefore the computation time increases; hence, we prefer small values for this parameter. Additionally, for a large value of ε, in some cases the model is not able to converge in 100 epochs; for example, on the MNIST dataset, we can observe that for an ε value of 25, the model has lower performance in terms of clustering and classification accuracy. It can be observed that ζ = 0.2 and ε = 13 (as chosen for the experiments performed in the paper) lead to a decent performance on all datasets: for these values, QuickSelection is able to achieve high clustering and classification accuracy.
Overall, although searching for the best pair of ζ and ε would improve the performance, QuickSelection is not extremely sensitive to these values. As can be seen in Tables 5 and 6, QuickSelection has a reasonable performance for all the tested values of these hyperparameters. Even with ε = 2, which leads to a very sparse model, QuickSelection has a decent performance, and in some cases a better one than a denser network.
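For reference, a single SET topology update (drop the ζ fraction of weakest connections, then regrow as many at random) can be sketched as below. The dense-stored matrix and its sizes are illustrative stand-ins, not the paper's purely sparse implementation:

```python
import numpy as np

def set_step(W, zeta, rng):
    """One SET topology update on a dense-stored sparse matrix W:
    remove the zeta fraction of existing connections with the smallest
    magnitude, then regrow the same number at random empty positions."""
    alive = np.flatnonzero(W)
    n_remove = int(zeta * alive.size)
    # Remove the smallest-magnitude weights.
    to_remove = alive[np.argsort(np.abs(W.flat[alive]))[:n_remove]]
    W.flat[to_remove] = 0.0
    # Regrow the same number of connections at random zero positions.
    empty = np.flatnonzero(W == 0)
    regrow = rng.choice(empty, size=n_remove, replace=False)
    W.flat[regrow] = rng.normal(0, 0.01, size=n_remove)
    return W

rng = np.random.default_rng(0)
W = np.where(rng.random((20, 30)) < 0.1, rng.standard_normal((20, 30)), 0.0)
nnz_before = np.count_nonzero(W)
W = set_step(W, zeta=0.2, rng=rng)
```

Note that the number of connections is conserved across the update, so the sparsity level set by ε stays fixed throughout training.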

C Visualization of Selected Features on MNIST
In Figure 16, we visualize the 50 best features found by QuickSelection on the MNIST dataset at different epochs.These features are mostly at the center of the image, similar to the pattern of MNIST digits.
Then, we visualize the features selected for each class separately.In Figure 17, each picture at different epochs is the average of the 50 selected features of all the samples of each class along with the average of the actual samples of the corresponding class.As we can see, during training, these features become more similar to the pattern of digits of each class.Thus, QuickSelection is able to find the most relevant features for all classes.

D Feature Extraction
Although it is not the main focus of the paper, we perform a small analysis on the MNIST dataset to study the performance of sparse DAE as a feature extractor.We train it to map the high-dimensional features into a lower-dimensional space.
The structure we consider for feature extraction has three hidden layers with 1000, 50, and 1000 neurons, respectively; the middle layer (50 neurons) is the extracted low-dimensional representation. We compare the results with a fully-connected DAE (FC-DAE, implemented in Keras [13]). We also extract features using Principal Component Analysis (PCA) [55] as a baseline method. Then, we train an ExtraTrees classifier on the extracted features and compute the classification accuracy. The results are presented in Figure 18.
To find the density level that best suits our network, we test several values. As shown in Figure 18, sparse DAE with density = 3.26% has the best performance among the tested density levels; sparse DAE (density = 3.26%), FC-DAE, and PCA achieve 95.2%, 96.2%, and 95.6% accuracy, respectively. Although sparse DAE cannot perform quite as well as FC-DAE, it has approximately 54K parameters compared to the 1.67M parameters of FC-DAE. Such a small number of parameters results in a considerable increase in running speed and a significant drop in memory requirements. Furthermore, it is interesting to observe that a very sparse DAE (below 1% density) can achieve more than 90.0% accuracy on MNIST while having about 150 times fewer parameters than FC-DAE.

E Feature Selection on a Large Dataset
In this appendix, we evaluate the performance of the methods on a very large dataset, in terms of both the number of samples and the number of dimensions. In this experiment, we first generate two artificial datasets with a high number of samples and features. The choice of artificial datasets was made to easily control the number of relevant features, as in most real-world datasets the number of informative features is not clear. These datasets are generated using the make_classification function of the sklearn library, which generates datasets with a desired number of features and samples and allows us to adjust the number of informative, redundant, and non-informative features. Table 7 shows the characteristics of the two artificially generated datasets.
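A smaller-scale version of this generation step might look as follows; the sample and feature counts are illustrative and do not match Table 7:

```python
from sklearn.datasets import make_classification

# Generate a controlled dataset: 10 informative and 5 redundant features,
# with the remaining 85 features carrying no class information.
X, y = make_classification(
    n_samples=2000,
    n_features=100,
    n_informative=10,
    n_redundant=5,
    n_classes=2,
    shuffle=False,   # informative features come first, easing inspection
    random_state=0,
)
```

With `shuffle=False`, the informative columns occupy the first positions, so one can directly check how many of them a feature selection method recovers.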

F Sparse Training Algorithm Analysis
In this appendix, we aim to analyze the effect of the SET training procedure on the performance of QuickSelection.
We perform QuickSelection using another algorithm to obtain and train the sparse network, and then compare the results with the original QuickSelection. We derive the sparse denoising autoencoder using the lottery ticket hypothesis (LTH) algorithm [20], as follows. LTH first trains a dense network. After that, it derives the topology of the sparse network by pruning the unimportant weights of the trained dense network. Then, using both the sparse topology and the initial weight values of the connections from the dense training phase, the network is retrained. On the final sparse model, we apply the QuickSelection principles to select the most informative features.
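A minimal sketch of the LTH pruning step, assuming magnitude pruning to a target density; the dense matrices below are random stand-ins for the trained and initial weights, not the paper's model:

```python
import numpy as np

def lth_sparse_init(W_trained, W_init, density):
    """Lottery-ticket style mask: keep the top `density` fraction of
    trained weights by magnitude, and reset the survivors to their
    original initialization values before retraining."""
    k = int(density * W_trained.size)
    threshold = np.sort(np.abs(W_trained), axis=None)[-k]
    mask = np.abs(W_trained) >= threshold
    return mask * W_init, mask

rng = np.random.default_rng(0)
W_init = rng.standard_normal((50, 80)) * 0.1
W_trained = W_init + rng.standard_normal((50, 80)) * 0.5  # stand-in for training
W_sparse, mask = lth_sparse_init(W_trained, W_init, density=0.05)
```

Unlike SET, which is sparse from the first epoch, this scheme pays for a full dense training phase before the sparse topology even exists, which is the source of the resource gap reported in Table 11.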
In this experiment, the structure, sparsity level, and other hyperparameters are similar to the settings described in Section 4.1.1; we use a simple autoencoder with one hidden layer containing 1000 hidden neurons, trained for 100 epochs. The results of feature selection (K = 50) are available in Tables 9 and 10. We refer to the feature selection performed using the QuickSelection principles on the sparse DAE obtained with LTH as QS-LTH 100, and we use QS 100 for QuickSelection performed on the sparse DAE obtained with SET.
As can be observed in Tables 9 and 10, in most cases QS 100 outperforms QS-LTH 100. We believe that optimizing the sparse topology and the weights simultaneously results in neuron strengths that are more meaningful for feature selection; we discussed neuron strength in more detail in Section 5.3. In addition, due to the extra phase of dense training, the computational resource requirements of LTH are much higher than those of SET. To clarify this aspect, we present a comparison of the number of parameters of these two methods in Table 11; the much higher number of parameters of QS-LTH 100 in comparison with QS 100 is caused by the dense training phase of LTH.

G Performance Evaluation using Random Forest Classifier
In this appendix, we validate the classification accuracy results using another classifier.We repeat the experiment from Section 4.2 in the manuscript; however, we measure the accuracy of selecting 50 features (for Madelon, we select 20 features) using the RandomForest classifier [39] instead of the ExtraTrees classifier.The results are presented in Table 12.
As can be seen in Table 12, QuickSelection 100 is the best performer in 5 out of 8 cases. Comparing these results with Table 3, which reports the classification accuracy measured by the ExtraTrees classifier, shows only subtle changes in the accuracy values. These have resulted in some changes in the ranking of the methods, as in several cases the performance of the methods is very close. The reason behind choosing the ExtraTrees classifier in the main experiments was its low computational cost. However, as discussed in the paper, to perform an extensive evaluation, we have also measured the performance using clustering accuracy. Overall, looking at the results of the three approaches to computing accuracy, it is clear that QuickSelection is a performant feature selection method in terms of the quality of the selected features.
Table 12: Classification accuracy (%) using 50 selected features (except Madelon for which we select 20 features).On each dataset, the bold entry is the best-performer, and the italic one is the second-best performer.The classifier used for evaluation is the random forest classifier.

Fig. 2 :
Fig. 2: Neuron strength on the MNIST dataset. The heat-maps above are a 2D representation of the input neurons' strength. It can be observed that the strength of the neurons is random at the beginning of training. After a few epochs, the pattern changes, and neurons in the center become more important and similar to the MNIST data pattern.

Fig. 3 :
Fig. 3: Influence of feature removal on the Madelon dataset. After deriving the importance of the features with QuickSelection, we sort and then remove them in the two orders described above.

Fig. 4 :
Fig. 4: Average values of all data samples of each class corresponding to the 50 selected features on MNIST after 100 training epochs (bottom), along with the average of the actual data samples of each class (top).

Fig. 5 :
Fig. 5: Feature selection comparison in terms of classification accuracy, clustering accuracy, speed, and memory requirement, on each dataset and for different values of K, using two scoring variants.
Fig. 7: Maximum memory requirement (Kb) vs. accuracy (%) when selecting 50 features (except Madelon, for which we select 20 features). Each point refers to the result of a single dataset (specified by colors) and method (specified by markers), where the x and y-axis show the accuracy and the maximum memory requirement, respectively. Due to the high memory requirement of MCFS and Lap_score on the MNIST dataset, which makes it difficult to compare the other results (upper plots), we zoom into this region in the bottom plots.

Fig. 9 :
Fig. 9: Strength of the 20 most informative and non-informative features of Madelon dataset, selected by QS 10 and QS 100 .Each line in the plots corresponds to the strength values of a selected feature by QS 10 /QS 100 during training.The features selected by QS 10 have been observed until epoch 100 to compare the quality of these features with QS 100 .

Fig. 18 :
Fig. 18: Classification accuracy of feature extraction using sparse DAE with different density levels on the MNIST dataset (number of extracted features = 50), compared with FC-DAE and PCA.

Table 3 :
Classification accuracy (%) using 50 selected features (except Madelon for which we select 20 features).On each dataset, the bold entry is the best-performer, and the italic one is the second-best performer.
It can be observed that adding 20% noise to the original data improves both the classification and clustering accuracy of QuickSelection 100 by approximately 3%.

Table 4 :
ε values and their corresponding density levels.

Table 11 :
Number of parameters of QS 100 and QS-LTH 100 (divided by 10^6).