Abstract
The main advantage of building a prediction model with machine learning is the ability to predict future outcomes from historical data. In recent years, deep learning has gained popularity owing to its success in image processing with neural networks (NN); its chief disadvantage, however, is its black-box nature. Deep learning models perform efficiently on large datasets but may not perform well when the available data is small. In this paper, a comparative study is conducted between deep learning and traditional machine learning algorithms such as SVM, Random Forest, KNN, Gradient Boosting, AdaBoost, Naive Bayes, Neural Networks, and Decision Tree. The Diabetes, Blood Cancer, and Heart Disease datasets are used to study and compare prediction accuracy.
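To make the comparison concrete, the following is a minimal sketch of such a study using scikit-learn, not the authors' exact experimental setup. It assumes a local CSV copy of a diabetes dataset (the file name diabetes.csv and the last-column label position are assumptions for illustration) and reports 10-fold cross-validated accuracy for each classifier named in the abstract:

```python
# Minimal sketch: cross-validated accuracy comparison of the classifiers
# named in the abstract. Assumes scikit-learn and a local CSV copy of a
# diabetes dataset ("diabetes.csv" is a hypothetical path; the last
# column is assumed to be the class label).
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import (RandomForestClassifier,
                              GradientBoostingClassifier, AdaBoostClassifier)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

data = pd.read_csv("diabetes.csv")           # hypothetical file path
X, y = data.iloc[:, :-1], data.iloc[:, -1]   # features, class label

models = {
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(n_estimators=100),
    "KNN": KNeighborsClassifier(),
    "Gradient Boosting": GradientBoostingClassifier(),
    "AdaBoost": AdaBoostClassifier(),
    "Naive Bayes": GaussianNB(),
    "Neural Network": MLPClassifier(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(),
}

for name, model in models.items():
    # Scale features so distance- and margin-based models (KNN, SVM, MLP)
    # are not dominated by large-valued columns; 10-fold cross-validation
    # gives a more stable accuracy estimate than a single train/test split.
    pipe = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(pipe, X, y, cv=10, scoring="accuracy")
    print(f"{name:18s} mean accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```

The same loop can be rerun unchanged on the other datasets (Blood Cancer, Heart Disease) by swapping the CSV path, which is what makes this kind of fixed-pipeline comparison straightforward to reproduce.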
Cite this paper
Padmaja, D.L., Surya Deepak, G., Sriharsha, G.K., Ramana Rao, G.N.V. (2021). Ensemble Methods for Scientific Data—A Comparative Study. In: Kaiser, M.S., Xie, J., Rathore, V.S. (eds) Information and Communication Technology for Competitive Strategies (ICTCS 2020). Lecture Notes in Networks and Systems, vol 190. Springer, Singapore. https://doi.org/10.1007/978-981-16-0882-7_51