Abstract
The curse of dimensionality is a central challenge in data mining and pattern recognition. Two broad approaches address it: feature reduction and feature selection (FS). FS selects the most relevant subset of features while minimizing redundancy among them. The main objective of the proposed method is to eliminate irrelevant and redundant features in high-, medium-, and low-dimensional data and thereby achieve higher classification accuracy. The method is implemented with a genetic algorithm and is trained and tested on several datasets.
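The abstract does not give the algorithmic details, but the general scheme it describes (a genetic algorithm that evolves binary feature masks toward high classification accuracy with low redundancy) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy dataset, the leave-one-out nearest-centroid fitness, and the size penalty used as a redundancy proxy are all assumptions made for the example.

```python
import random

random.seed(0)

# Toy dataset: 2 classes, 6 features; only features 0 and 1 are informative,
# the rest are pure noise. (Hypothetical data for illustration only.)
def make_sample(label):
    informative = [label * 2.0 + random.gauss(0, 0.3) for _ in range(2)]
    noise = [random.gauss(0, 1.0) for _ in range(4)]
    return informative + noise, label

data = [make_sample(lbl) for lbl in (0, 1) for _ in range(30)]

def accuracy(mask):
    """Leave-one-out nearest-centroid accuracy on the selected features."""
    selected = [i for i, bit in enumerate(mask) if bit]
    if not selected:
        return 0.0
    correct = 0
    for held_x, held_y in data:
        centroids = {}
        for lbl in (0, 1):
            rows = [x for x, y in data if y == lbl and x is not held_x]
            centroids[lbl] = [sum(r[i] for r in rows) / len(rows)
                              for i in selected]
        dists = {lbl: sum((held_x[i] - c[j]) ** 2
                          for j, i in enumerate(selected))
                 for lbl, c in centroids.items()}
        if min(dists, key=dists.get) == held_y:
            correct += 1
    return correct / len(data)

def fitness(mask):
    # Reward accuracy; lightly penalize subset size as a crude stand-in
    # for a redundancy penalty.
    return accuracy(mask) - 0.01 * sum(mask)

def evolve(n_features=6, pop_size=20, generations=15):
    # Population of random binary masks: bit i = 1 keeps feature i.
    pop = [[random.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_features)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n_features)       # single bit-flip mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best_mask = evolve()
print(best_mask, round(accuracy(best_mask), 2))
```

On this toy data the evolved mask tends to keep the two informative features and drop most of the noise, which is the behavior the abstract attributes to FS: higher accuracy from a smaller, less redundant subset.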
Copyright information
© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Padmalatha, E., Sailekhya, S., Athyaab, S.A., Harsh Raj, J. (2021). Feature Selection Optimization Using a Hybrid Genetic Algorithm. In: Fong, S., Dey, N., Joshi, A. (eds) ICT Analysis and Applications. Lecture Notes in Networks and Systems, vol 154. Springer, Singapore. https://doi.org/10.1007/978-981-15-8354-4_41
Print ISBN: 978-981-15-8353-7
Online ISBN: 978-981-15-8354-4