Abstract
Performance evaluation is an essential part of any network traffic anomaly detection technique or system. Without proper evaluation, it is difficult to make the case that a detection mechanism can be deployed in a real-time environment. An evaluation of a method or system in terms of accuracy or quality provides only a snapshot of its performance at a point in time; as time passes, new vulnerabilities evolve, and current evaluations may become irrelevant. Evaluating an intrusion detection system (IDS) involves activities such as collecting attack traces, constructing a proper IDS evaluation environment, and adopting sound evaluation methodologies. In this chapter, we introduce commonly used performance measures for IDS evaluation. The main measures include accuracy, performance, completeness, timeliness, reliability, quality, and the area under the ROC curve (AUC). These measures make it possible to identify the advantages and disadvantages of different detection methods or systems.
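To make the measures named above concrete, the following is a minimal sketch (not taken from the chapter; all function names are illustrative) of how accuracy, detection rate, false positive rate, and a trapezoidal AUC over ROC points are typically computed from binary attack/normal labels:

```python
# Illustrative sketch: common IDS evaluation measures from a binary
# confusion matrix, plus a trapezoidal AUC over ROC points.

def confusion_counts(y_true, y_pred):
    """Count TP, FP, TN, FN, treating 1 as 'attack' and 0 as 'normal'."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

def basic_measures(tp, fp, tn, fn):
    """Accuracy, detection rate (TPR), and false positive rate."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "detection_rate": tp / (tp + fn),
        "false_positive_rate": fp / (fp + tn),
    }

def trapezoid_auc(roc_points):
    """Area under an ROC curve given (FPR, TPR) points."""
    pts = sorted(roc_points)
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area
```

For example, a perfect detector traces the ROC points (0, 0), (0, 1), (1, 1) and yields an AUC of 1.0, while the diagonal from (0, 0) to (1, 1) — random guessing — yields 0.5.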
Copyright information
© 2017 Springer International Publishing AG
About this chapter
Bhuyan, M.H., Bhattacharyya, D.K., Kalita, J.K. (2017). Evaluation Criteria. In: Network Traffic Anomaly Detection and Prevention. Computer Communications and Networks. Springer, Cham. https://doi.org/10.1007/978-3-319-65188-0_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-65186-6
Online ISBN: 978-3-319-65188-0