A Metric for Ranking the Classifiers for Evaluation of Intrusion Detection System

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 380)

Abstract

Class imbalance is common in the data used to study intrusion detection systems (IDS). Classification algorithms are used to identify attacks in an IDS, and their performance can be evaluated with many different parameters. Because of this imbalance, classification results need to be revisited: an IDS is generally evaluated on detection rate and false alarm rate, which are computed on two different classes. This paper validates a new metric, NPR, for ranking classifiers for IDS. The metric is applied to the KDD data set, and the resulting classifier ranking is compared with results on another data set.
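As background for the two rates named above, a minimal Python sketch (illustrative only; the paper's NPR formula is defined in the full text and is not reproduced here) of how detection rate and false alarm rate are computed from a binary confusion matrix, with attacks as the positive class and hypothetical counts:

    def detection_rate(tp, fn):
        # Detection rate (recall on the attack class): TP / (TP + FN).
        return tp / (tp + fn)

    def false_alarm_rate(fp, tn):
        # False alarm rate (false-positive rate on the normal class): FP / (FP + TN).
        return fp / (fp + tn)

    # Hypothetical confusion-matrix counts on an imbalanced IDS test set.
    tp, fn = 950, 50       # attack records: detected vs. missed
    fp, tn = 200, 9800     # normal records: false alarms vs. correctly passed

    print(f"DR  = {detection_rate(tp, fn):.2f}")    # 0.95
    print(f"FAR = {false_alarm_rate(fp, tn):.2f}")  # 0.02

Because DR and FAR are computed over different classes, a single imbalance-aware summary such as the paper's NPR is needed to rank classifiers consistently.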

Keywords

Intrusion detection system · Imbalanced data · KDD data set · False alarm rate · Detection rate · NPR


Copyright information

© Springer India 2016

Authors and Affiliations

School of Engineering and Technology, Ansal University, Gurgaon, India
