
Feature Selection Using Fast Ensemble Learning for Network Intrusion Detection

  • Ujjwal Pasupulety
  • C. D. Adwaith
  • Suraj Hegde
  • Nagamma Patil
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 940)

Abstract

Network security plays a critical role in today's digital system infrastructure. Every day, there are hundreds of cases of data theft or loss caused by a system's integrity being compromised. The root cause of this issue is the lack of systems in place that are able to foresee the advent of such attacks. Network intrusion detection techniques are important to protect any system or network from malicious behavior. By analyzing a dataset whose features summarize how connections are made to the network, any attempt to access it can be classified as malicious or benign. To improve the accuracy of network intrusion detection, various machine learning algorithms and optimization techniques are used. Feature selection helps in finding the attributes in the dataset that have a significant effect on the final classification. This reduces the size of the dataset, thereby simplifying the task of classification. In this work, we propose using multiple techniques as an ensemble for feature selection. To reduce training time while retaining accuracy, the important features of a subset of the KDD network intrusion detection dataset were analyzed using this ensemble learning technique. Out of 41 possible features for network intrusion, it was found that host-based statistical features of network flow play an important role in predicting network intrusion. Our proposed methodology provides multiple tiers of selected features, where a feature's tier corresponds to the number of individual feature selection techniques that selected it. At the highest tier of selected features, our experiments yielded a 6% increase in intrusion detection accuracy, an 81% decrease in dataset size, and a 5.4× decrease in runtime relative to a Multinomial Naive Bayes classifier trained on the original dataset.
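The voting scheme described above can be sketched in a few lines. The following is a minimal illustration, not the authors' exact pipeline: three common selectors (chi-squared, mutual information, and tree-based importance from Extra-Trees, cf. ref. 12) each vote for their top-k features, features are tiered by vote count, and a Multinomial Naive Bayes classifier is trained on a high tier. The synthetic data, the choice of selectors, and k are all illustrative assumptions standing in for the 41-feature KDD data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

# Synthetic stand-in for the 41-feature KDD subset used in the paper.
X, y = make_classification(n_samples=500, n_features=41, n_informative=8,
                           random_state=0)
X = X - X.min(axis=0)  # chi2 and MultinomialNB need non-negative features

k = 10                                   # features each selector votes for
votes = np.zeros(X.shape[1], dtype=int)  # one vote counter per feature

# Selector 1: chi-squared test.
votes[SelectKBest(chi2, k=k).fit(X, y).get_support()] += 1
# Selector 2: mutual information.
votes[SelectKBest(mutual_info_classif, k=k).fit(X, y).get_support()] += 1
# Selector 3: Extra-Trees feature importance.
imp = ExtraTreesClassifier(n_estimators=50, random_state=0)
imp = imp.fit(X, y).feature_importances_
votes[np.argsort(imp)[-k:]] += 1

# A feature's tier is its vote count; here we keep the majority tier (>= 2).
selected = np.where(votes >= 2)[0]

X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, random_state=0)
clf = MultinomialNB().fit(X_tr, y_tr)
print(f"{len(selected)} of 41 features retained, "
      f"accuracy = {clf.score(X_te, y_te):.2f}")
```

Raising the vote threshold (here, requiring all three selectors to agree) yields the smaller, higher-confidence feature tiers the abstract refers to, trading dataset size against the risk of discarding useful features.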

Keywords

Feature selection · Intrusion detection · Network security · Machine learning classifiers · Ensemble learning

References

  1. University of California, Irvine: KDD Cup 1999 dataset for intrusion detection. http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html. Accessed 28 Feb 2018
  2. Kumar, V., Minz, S.: Feature selection: a literature review. Smart CR 4, 211–229 (2014)
  3. Kumar, R.: A review of network intrusion detection system using machine learning algorithms, vol. 5, pp. 94–100 (2017)
  4. Wan, S., Yang, H.: Comparison among methods of ensemble learning. In: 2013 International Symposium on Biometrics and Security Technologies, July 2013, pp. 286–290 (2013)
  5. Gaikwad, D., Thool, R.: Intrusion detection system using bagging ensemble method of machine learning, pp. 291–295 (2015)
  6. Tavallaee, M., et al.: A detailed analysis of the KDD Cup 99 data set. In: Proceedings of the Second IEEE International Conference on Computational Intelligence for Security and Defense Applications, CISDA 2009, pp. 53–58. IEEE Press, Piscataway (2009). http://dl.acm.org/citation.cfm?id=1736481.1736489
  7. Mukherjee, S., Sharma, N.: Intrusion detection using Naive Bayes classifier with feature reduction, vol. 4 (2012)
  8. Chang, Y., et al.: Network intrusion detection based on random forest and support vector machine. In: 2017 IEEE International Conference on Computational Science and Engineering and IEEE International Conference on Embedded and Ubiquitous Computing, July 2017, vol. 1, pp. 635–638 (2017)
  9. Kuhn, M.: Building predictive models in R using the caret package. J. Stat. Softw. 28(5), 1–26 (2008). https://www.jstatsoft.org/v028/i05
  10. Ceriani, L., Verme, P.: The origins of the Gini index: extracts from Variabilità e Mutabilità (1912) by Corrado Gini. J. Econ. Inequality 10(3), 421–443 (2012). https://doi.org/10.1007/s10888-011-9188-x
  11. Li, J., Cheng, K., Wang, S., Morstatter, F., Trevino, R.P., Tang, J., Liu, H.: Feature selection: a data perspective. arXiv preprint arXiv:1601.07996 (2016)
  12. Geurts, P., et al.: Extremely randomized trees. Mach. Learn. 63(1), 3–42 (2006). https://doi.org/10.1007/s10994-006-6226-1
  13. Pedregosa, F., et al.: Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011)
  14. Oliphant, T.E.: Guide to NumPy, 2nd edn. CreateSpace Independent Publishing Platform, Santa Monica (2015)
  15. McKinney, W.: Data structures for statistical computing in Python. In: van der Walt, S., Millman, J. (eds.) Proceedings of the 9th Python in Science Conference, pp. 51–56 (2010)
  16. Kohavi, R.: A study of cross-validation and bootstrap for accuracy estimation and model selection. In: Proceedings of the 14th International Joint Conference on AI (IJCAI 1995), vol. 2, pp. 1137–1143. Morgan Kaufmann Publishers Inc., USA (1995)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Ujjwal Pasupulety (corresponding author)¹
  • C. D. Adwaith¹
  • Suraj Hegde¹
  • Nagamma Patil¹

  1. Department of Information Technology, National Institute of Technology Karnataka, Mangaluru, India
