Evaluation of the Limit of Detection in Network Dataset Quality Assessment with PerQoDA

  • Conference paper
  • First Online:
Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2022)

Abstract

Machine learning is widely recognised as a relevant approach for detecting attacks and other anomalies in network traffic. However, suitable network datasets that would enable effective detection are still lacking, and preparing one is not easy, both for privacy reasons and because of the lack of tools for assessing dataset quality. In a previous paper, we proposed a new method for dataset quality assessment based on permutation testing. This paper presents a parallel study on the limit of detection of that approach. We focus on the problem of network flow classification and use well-known machine learning techniques. The experiments were performed on publicly available network datasets.
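
The approach under study rests on permutation testing of classifier performance: the score obtained on the true labels is compared with the scores obtained after randomly permuting the labels, and a small p-value indicates that the labels carry learnable signal. As a rough illustration of this general idea only, here is a minimal sketch using scikit-learn and a synthetic dataset; it is not the authors' PerQoDA implementation:

```python
# Minimal sketch of a label-permutation test of classifier performance
# (the general idea behind the method; synthetic data, not PerQoDA code).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import permutation_test_score

# Imbalanced synthetic data standing in for labelled network flows.
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)

clf = RandomForestClassifier(random_state=0)

# Score on the original labels vs. scores on 100 label permutations.
score, perm_scores, p_value = permutation_test_score(
    clf, X, y, scoring="balanced_accuracy",
    n_permutations=100, random_state=0)

print(f"score on true labels: {score:.3f}")
print(f"mean permuted score:  {perm_scores.mean():.3f}")
print(f"p-value:              {p_value:.3f}")
```

A score well above the permuted scores (small p-value) suggests the dataset's labels support learning the class structure; a score indistinguishable from the permuted ones would flag a quality problem.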


Notes

  1. We can choose any performance metric, such as accuracy, precision, or recall; see the sketch after these notes.

  2. The percentage of positives in the dataset.
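
Both notes are easy to make concrete. The following self-contained sketch (synthetic data and illustrative names only, not PerQoDA code, and assuming scikit-learn is available) swaps the scoring metric in the permutation test (note 1) and computes the percentage of positives (note 2):

```python
# Minimal sketch for the notes above; synthetic data, not PerQoDA code.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import permutation_test_score

# Illustrative imbalanced dataset standing in for labelled network flows.
X, y = make_classification(n_samples=500, n_features=10,
                           weights=[0.9, 0.1], random_state=1)

# Note 2: percentage of positives in the dataset.
print(f"positives: {100.0 * np.mean(y == 1):.1f}% of the dataset")

# Note 1: any scikit-learn scoring metric can be plugged into the test.
clf = RandomForestClassifier(random_state=1)
for metric in ("accuracy", "precision", "recall"):
    score, _, p_value = permutation_test_score(
        clf, X, y, scoring=metric, n_permutations=30, random_state=1)
    print(f"{metric}: score={score:.3f}, p-value={p_value:.3f}")
```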


Acknowledgment

This work is partially funded by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 893146, by the Agencia Estatal de Investigación in Spain under grant No. PID2020-113462RB-I00, and by the Ministry of the Interior of the Czech Republic (Flow-Based Encrypted Traffic Analysis) under grant No. VJ02010024. The authors would like to thank Szymon Wojciechowski for his support with the Weles tool.

Author information


Corresponding author

Correspondence to Katarzyna Wasielewska.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wasielewska, K., Soukup, D., Čejka, T., Camacho, J. (2023). Evaluation of the Limit of Detection in Network Dataset Quality Assessment with PerQoDA. In: Koprinska, I., et al. Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2022. Communications in Computer and Information Science, vol 1753. Springer, Cham. https://doi.org/10.1007/978-3-031-23633-4_13


  • DOI: https://doi.org/10.1007/978-3-031-23633-4_13

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-23632-7

  • Online ISBN: 978-3-031-23633-4

  • eBook Packages: Computer Science, Computer Science (R0)
