
Supervised Performance Anomaly Detection in HPC Data Centers

  • Mohamed Soliman Halawa
  • Rebeca P. Díaz Redondo
  • Ana Fernández Vilas
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 921)

Abstract

High Performance Computing (HPC) systems play an important role in advancing scientific research as the demand for processing power and speed keeps growing. In practice, HPC systems attract the interest of many businesses that rely on this growing technology. The increasing complexity of HPC systems also exposes them to a wide range of performance anomalies, and continuously managing the health of such systems has a significant financial and operational impact. Several machine learning techniques can be used to identify performance anomalies in such complex systems. This study compares three of the most commonly used supervised machine learning algorithms for anomaly detection. We applied these algorithms to the memcpy metrics of the Fundación Pública Galega Centro Tecnolóxico de Supercomputación de Galicia (CESGA), a benchmark used to measure memory performance for each CPU socket. Our study shows that the Neural Network algorithm achieved the highest accuracy (93%), the KNN algorithm the highest precision (0.97), the Gaussian Anomaly Detection algorithm the highest recall (0.99), and the Neural Network algorithm the highest F-measure (0.96).
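The abstract reports only the final metrics; the authors' experimental pipeline is not included in this excerpt. The following is a minimal sketch, assuming scikit-learn and a synthetic stand-in for CESGA's per-socket memcpy measurements, of how a comparison of KNN, a neural network (MLP) and a simple Gaussian anomaly detector could be set up. The synthetic data, variable names and the threshold `epsilon` are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the authors' code): compare a KNN classifier, a neural
# network (MLP) and a simple Gaussian anomaly detector on labelled,
# memcpy-style bandwidth samples. The data below is synthetic; the paper
# instead uses CESGA's per-socket memcpy benchmark metrics.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)

# Synthetic stand-in: healthy samples around a nominal bandwidth, anomalies lower.
normal = rng.normal(loc=10.0, scale=0.5, size=(950, 1))   # healthy memcpy GB/s
anomalous = rng.normal(loc=6.0, scale=1.5, size=(50, 1))   # degraded sockets
X = np.vstack([normal, anomalous])
y = np.concatenate([np.zeros(950, dtype=int), np.ones(50, dtype=int)])  # 1 = anomaly

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

def report(name, y_true, y_pred):
    """Print the four metrics compared in the paper."""
    print(f"{name:>18}: acc={accuracy_score(y_true, y_pred):.2f} "
          f"P={precision_score(y_true, y_pred):.2f} "
          f"R={recall_score(y_true, y_pred):.2f} "
          f"F1={f1_score(y_true, y_pred):.2f}")

# 1) K-nearest neighbours classifier
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
report("KNN", y_te, knn.predict(X_te))

# 2) Feed-forward neural network
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
report("Neural Network", y_te, mlp.predict(X_te))

# 3) Gaussian anomaly detection: fit mean/variance on the normal training
#    samples and flag points whose density falls below a threshold.
mu = X_tr[y_tr == 0].mean(axis=0)
var = X_tr[y_tr == 0].var(axis=0)

def gaussian_density(x):
    return np.prod(np.exp(-(x - mu) ** 2 / (2 * var)) /
                   np.sqrt(2 * np.pi * var), axis=1)

epsilon = 1e-3  # illustrative threshold; in practice tuned on validation data
report("Gaussian detector", y_te, (gaussian_density(X_te) < epsilon).astype(int))
```

The point of the sketch is the shared evaluation harness: all three detectors are scored with the same accuracy, precision, recall and F-measure calls, which is what makes the comparison reported in the abstract possible.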

Keywords

Cloud computing · High Performance Computing · Anomaly detection · Machine learning

Notes

Acknowledgments

This work was supported by the European Regional Development Fund (ERDF) and the Galician Regional Government under the agreement for funding the Atlantic Research Center for Information and Communication Technologies (AtlantTIC), and by the Spanish Ministry of Economy and Competitiveness under the National Science Program (TEC2014-54335-C4-3-R and TEC2017-84197-C4-2-R). Finally, the authors would like to thank the Supercomputing Center of Galicia (CESGA) for their support and resources in this research.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Mohamed Soliman Halawa (1)
  • Rebeca P. Díaz Redondo (2)
  • Ana Fernández Vilas (2)
  1. Information System Department, Arab Academy for Science, Technology and Maritime Transport, Cairo, Egypt
  2. Information & Computing Lab., AtlantTIC Research Center, School of Telecommunications Engineering, University of Vigo, Vigo, Spain
