
A Selective Survey of Deep Learning Techniques and Their Application to Malware Analysis

  • Chapter in Malware Analysis Using Artificial Intelligence and Deep Learning

Abstract

In this chapter, we consider neural networks and deep learning, within the context of malware research. A variety of architectures are introduced, including multilayer perceptrons (MLP), convolutional neural networks (CNN), recurrent neural networks (RNN), long short-term memory (LSTM), residual networks (ResNet), generative adversarial networks (GAN), and Word2Vec. We provide a selective survey of applications of each of these architectures to malware-related problems.


Notes

  1. In stark contrast to the nonsensical hype that envelops far too much of the discussion of deep learning and (especially) AI, there does exist some clear-headed thinking that points to the great transformative potential of learning technology in the real world, rather than the world of science fiction. For a fine example of this latter genre, see the intriguingly titled article, “Models will run the world” [14]. (Spoiler alert: “Models will run the world” is not about world domination by skinny women in swimsuits).

  2. If any learning model truly saturates, then adding more data will be counterproductive beyond some point, as the work factor for training on larger datasets increases, while there is no added benefit from the resulting trained model. It would therefore be useful to be able to predetermine a “score” of some sort that would tell us approximately how much data is optimal when training a particular learning model for a given type of data.

  3. We see examples of filters applied to simple images in Sect. 6.3.

  4. Color and grayscale images are more complex. For grayscale, a nonlinear encoding (i.e., gamma encoding) is employed, so as to make better use of the range of values available. For color images, the RGB (red, green, and blue, respectively) color scheme implies that each pixel is represented by 24 bits (in an uncompressed format), in which case convolutional filters can be viewed as operating over a three-dimensional box that is 3 bytes deep.
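The geometry described in this footnote is easy to make concrete. The NumPy sketch below (the helper `conv2d_rgb` and the random inputs are illustrative, not from the chapter) shows a single filter sliding spatially over an RGB image while spanning the full 3-channel depth, so each filter produces one 2-D feature map; like most deep learning libraries, it actually computes cross-correlation rather than a flipped-kernel convolution:

```python
import numpy as np

def conv2d_rgb(image, kernel):
    """Valid 'convolution' of an H x W x 3 image with a kh x kw x 3 filter.

    The filter covers all three color channels at once, so each output
    entry is a single number: the filter slides only in the two spatial
    dimensions but always spans the full depth of the 3-byte-deep box.
    """
    h, w, _ = image.shape
    kh, kw, _ = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Elementwise product over a kh x kw x 3 box, summed to a scalar
            out[i, j] = np.sum(image[i:i + kh, j:j + kw, :] * kernel)
    return out

rgb = np.random.rand(8, 8, 3)    # toy 8x8 RGB "image"
kern = np.random.rand(3, 3, 3)   # one 3x3 filter, 3 channels deep
feature_map = conv2d_rgb(rgb, kern)
print(feature_map.shape)         # (6, 6): one 2-D map per filter
```

A CNN layer with, say, 32 such filters would simply stack 32 of these 2-D maps into a new 32-channel "image" for the next layer.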

  5. It is also sometimes claimed that pooling improves certain desirable characteristics of CNNs, such as translation invariance and deformation stability. However, this is disputed, and the current trend seems to clearly be in the direction of fully convolutional architectures, i.e., CNNs with no pooling layers [69].

  6. Obviously, the inventors of RNNs were not familiar with Back to the Future or Star Trek, both of which conclusively demonstrate that the future can have a large influence on the past.

  7. Unfortunately, “recursive neural network” is typically also abbreviated as RNN. Here, we reserve RNN for recurrent neural networks and we do not use any abbreviation when referring to recursive neural networks.

  8. Cosine similarity is not a true metric, since it does not, in general, satisfy the triangle inequality.
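The claim in footnote 8 can be checked numerically. The sketch below (the vectors are illustrative, not from the chapter) builds three vectors in the plane for which the cosine "distance" 1 − cos θ fails the triangle inequality:

```python
import numpy as np

def cosine_distance(u, v):
    """1 minus cosine similarity, a common 'distance' in embedding work."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0])   # angularly halfway between a and c
c = np.array([0.0, 1.0])

d_ab = cosine_distance(a, b)   # 1 - cos(45 degrees), about 0.2929
d_bc = cosine_distance(b, c)   # also about 0.2929
d_ac = cosine_distance(a, c)   # 1 - cos(90 degrees) = 1.0

# A true metric would require d_ac <= d_ab + d_bc, but here
# d_ac = 1.0 while d_ab + d_bc is only about 0.586.
print(d_ac > d_ab + d_bc)      # True: the triangle inequality fails
```

The other metric axioms (symmetry, non-negativity for non-negative-valued embeddings) are less of an issue in practice; it is specifically the triangle inequality that disqualifies cosine similarity as a metric.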

References

  1. Annapurna, Annadatha, and Mark Stamp. 2018. Image spam analysis and detection. Journal of Computer Virology and Hacking Techniques 14 (1): 39–52.

  2. Athiwaratkun, Ben, and Jack W. Stokes. 2017. Malware classification with LSTM and GRU language models and a character-level CNN. https://www.microsoft.com/en-us/research/wp-content/uploads/2017/07/LstmGruCnnMalwareClassifier.pdf.

  3. Banerjee, Suvro. 2018. Word2vec — A baby step in deep learning but a giant leap towards natural language processing. https://medium.com/explore-artificial-intelligence/word2vec-a-baby-step-in-deep-learning-but-a-giant-leap-towards-natural-language-processing-40fe4e8602ba.

  4. Barot, Ketul, Jialing Zhang, and Seung Woo Son. 2016. Using natural language processing models for understanding network anomalies. http://ieee-hpec.org/2016/techprog2016/index_htm_files/R-w2vec-final.pdf.

  5. Basole, Samanvitha, Fabio Di Troia, and Mark Stamp. 2020. Multifamily malware models. Journal of Computer Virology and Hacking Techniques 16 (1): 79–92.

  6. Bhodia, Niket, Pratikkumar Prajapati, Fabio Di Troia, and Mark Stamp. 2019. Transfer learning for image-based malware classification. In Proceedings of the 5th International Conference on Information Systems Security and Privacy, ICISSP 2019, eds. Paolo Mori, Steven Furnell, and Olivier Camp, 719–726.

  7. The Brown corpus of standard American English. http://www.cs.toronto.edu/~gpenn/csc401/a1res.html.

  8. Cave, Robert L., and Lee P. Neuwirth. 1980. Hidden Markov models for English. In Hidden Markov models for speech, 16–56. Princeton, New Jersey: IDA-CRD. https://www.cs.sjsu.edu/~stamp/RUA/CaveNeuwirth/index.html.

  9. Chandak, Aniket, Fabio Di Troia, and Mark Stamp. 2020. A comparison of word embedding techniques for malware classification. In Malware analysis using artificial intelligence and deep learning, eds. Stamp, Mark, Mamoun Alazab, and Andrii Shalaginov. Berlin: Springer.

  10. Chavda, Aneri, Katerina Potika, Fabio Di Troia, and Mark Stamp. 2018. Support vector machines for image spam analysis. In Proceedings of the 15th international joint conference on e-business and telecommunications, ICETE 2018, eds. Callegari, Christian, Marten van Sinderen, Paulo Novais, Panagiotis G. Sarigiannidis, Sebastiano Battiato, Ángel Serrano Sánchez de León, Pascal Lorenz, and Mohammad S. Obaidat, 597–607.

  11. Chen, Rui, Jing Yang, Rong-gui Hu, and Shu-guang Huang. 2013. A novel lstm-rnn decoding algorithm in CAPTCHA recognition. https://ieeexplore.ieee.org/document/6840561.

  12. Chen, T., Q. Mao, M. Lv, H. Cheng, and Y. Li. 2019. Droidvecdeep: Android malware detection based on Word2Vec and deep belief network. KSII Transactions on Internet and Information Systems 13 (4): 2180–2197.

  13. Cheng, Min, Qian Xu, Jianming Lv, Wenyin Liu, Qing Li, and Jianping Wang. 2016. MS-LSTM: A multi-scale LSTM model for BGP anomaly detection. In 2016 IEEE 24th International Conference on Network Protocols (ICNP), 1–6.

  14. Cohen, Steven A., and Matthew W. Granade. 2018. Models will run the world. Wall Street Journal. https://www.wsj.com/articles/models-will-run-the-world-1534716720.

  15. Cornelisse, Daphne. 2018. An intuitive guide to convolutional neural networks. https://medium.freecodecamp.org/an-intuitive-guide-to-convolutional-neural-networks-260c2de0a050.

  16. Deshpande, Adit. 2018. A beginner’s guide to understanding convolutional neural networks. https://adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks/.

  17. Extreme learning machine implementation in Python. https://github.com/dclambert/Python-ELM.

  18. Fernández-Navarro, Francisco, César Hervás-Martinez, Javier Sanchez-Monedero, and Pedro Antonio Gutiérrez. 2011. MELM-GRBF: A modified version of the extreme learning machine for generalized radial basis function neural networks. Neurocomputing 74(16): 2502–2510.

  19. Gasmi, Houssem, Jannik Laval, and Abdelaziz Bouras. 2019. Cold-start cybersecurity ontology population using information extraction with LSTM. In 2019 international conference on cyber security for emerging technologies, CSET, 1–6.

  20. Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. 2016. Deep learning. Cambridge: MIT Press. http://www.deeplearningbook.org.

  21. Goodfellow, Ian J, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Proceedings of the 27th international conference on neural information processing systems, NIPS’14, vol. 2, 2672–2680.

  22. Gormley, Matthew R. 2017. Neural networks and backpropagation. https://www.cs.cmu.edu/~mgormley/courses/10601-s17/slides/lecture20-backprop.pdf.

  23. Greff, Klaus, Rupesh Kumar Srivastava, Jan Koutník, Bas R. Steunebrink, and Jürgen Schmidhuber. 2017. LSTM: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems 28 (10): 2222–2232. https://arxiv.org/pdf/1503.04069.pdf.

  24. Huang, Guang-Bin, Qin-Yu Zhu, and Chee-Kheong Siew. 2004. Extreme learning machine: A new learning scheme of feedforward neural networks. In 2004 IEEE international joint conference on neural networks, vol. 2, 985–990.

  25. Gupta, Arpit. 2018. Alexa blogs: How Alexa is learning to converse more naturally. https://developer.amazon.com/blogs/alexa/post/15bf7d2a-5e5c-4d43-90ae-c2596c9cc3a6/how-alexa-is-learning-to-converse-more-naturally.

  26. Hardesty, Larry. 2017. Explained: Neural networks. http://news.mit.edu/2017/explained-neural-networks-deep-learning-0414.

  27. He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. https://arxiv.org/pdf/1512.03385.pdf.

  28. Hern, Alex. 2017. The guardian. Elon Musk says AI could lead to third world war. https://www.theguardian.com/technology/2017/sep/04/elon-musk-ai-third-world-war-vladimir-putin.

  29. Hinton, Geoffrey. 2007. Deep belief nets. https://www.cs.toronto.edu/~hinton/nipstutorial/nipstut3.pdf.

  30. Hochreiter, Sepp, and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9 (8): 1735–1780. http://www.bioinf.jku.at/publications/older/2604.pdf.

  31. Hu, Weiwei and Ying Tan. 2017. Black-box attacks against RNN based malware detection algorithms. https://arxiv.org/abs/1705.08131.

  32. Hu, Weiwei and Ying Tan. 2017. Generating adversarial malware examples for black-box attacks based on gan. https://arxiv.org/pdf/1702.05983.pdf.

  33. Jahromi, Amir Namavar, Sattar Hashemi, Ali Dehghantanha, Kim-Kwang Raymond Choo, Hadis Karimipour, David Ellis Newton, and Reza M. Parizi. 2019. An improved two-hidden-layer extreme learning machine for malware hunting. Computers and Security 89.

  34. Jain, Mugdha, William Andreopoulos, and Mark Stamp. Convolutional neural networks and extreme learning machines for malware classification. Journal of Computer Virology and Hacking Techniques.

  35. Kaan, Can. 2018. Deep learning tutorial for beginners. https://www.kaggle.com/kanncaa1/deep-learning-tutorial-for-beginners.

  36. Kale, Aparna Sunil, Fabio Di Troia, and Mark Stamp. 2020. Malware classification with HMM2Vec and Word2Vec features. Submitted for publication.

  37. Kalfas, Ioannis. 2018. Modeling visual neurons with convolutional neural networks. https://towardsdatascience.com/modeling-visual-neurons-with-convolutional-neural-networks-e9c01ddfdfa7.

  38. Karpathy, Andrej. 2018. Convolutional neural networks for visual recognition. http://cs231n.github.io/convolutional-networks/.

  39. Khaitan, Pranav. 2016. Google AI blog: Chat smarter with Allo. https://ai.googleblog.com/2016/05/chat-smarter-with-allo.html.

  40. Khan, Riaz Ullah, Xiaosong Zhang, and Rajesh Kumar. 2019. Analysis of resnet and googlenet models for malware detection. Journal of Computer Virology and Hacking Techniques 15 (1): 29–57.

  41. Kim, Gyuwan, Hayoon Yi, Jangho Lee, Yunheung Paek, and Sungroh Yoon. 2016. LSTM-based system-call language modeling and robust ensemble method for designing host-based intrusion detection systems. https://arxiv.org/abs/1611.01726.

  42. Kim, Jin-Young, Seok-Jun Bu, and Sung-Bae Cho. 2018. Zero-day malware detection using transferred generative adversarial networks based on deep autoencoders. Information Sciences 460–461: 83–102.

  43. Kravchik, Moshe, and Asaf Shabtai. 2018. Detecting cyberattacks in industrial control systems using convolutional neural networks. https://arxiv.org/pdf/1806.08110.pdf.

  44. Kurenkov, Andrey. 2015. A ‘brief’ history of neural nets and deep learning. http://www.andreykurenkov.com/writing/ai/a-brief-history-of-neural-nets-and-deep-learning/.

  45. Levy, Omer, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics 3: 211–225. https://levyomer.files.wordpress.com/2015/03/improving-distributional-similarity-tacl-2015.pdf.

  46. Levy, Steven. 2016. The iBrain is here—and it’s already inside your phone. Wired. https://www.wired.com/2016/08/an-exclusive-look-at-how-ai-and-machine-learning-work-at-apple/.

  47. Li, Fei-Fei, Justin Johnson, and Serena Yeung. 2017. Lecture 10: Recurrent neural networks. http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture10.pdf.

  48. Li, Fei-Fei, Justin Johnson, and Serena Yeung. 2017. Lecture 13: Generative models. http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture13.pdf.

  49. Li, Shixuan, and Dongmei Zhao. 2019. A LSTM-based method for comprehension and evaluation of network security situation. In 2019 18th IEEE international conference on trust, security and privacy in computing and communications, 723–728.

  50. Lu, Renjie. 2019. Malware detection with lstm using opcode language. https://arxiv.org/abs/1906.04593.

  51. McCormick, Chris. 2016. Word2vec tutorial — The skip-gram model. http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/.

  52. McCulloch, Warren S., and Walter Pitts. 1943. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5. https://pdfs.semanticscholar.org/5272/8a99829792c3272043842455f3a110e841b1.pdf.

  53. Mikolov, Tomas, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. https://arxiv.org/abs/1301.3781.

  54. Mikolov, Tomas, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf.

  55. Minsky, Marvin, and Seymour Papert. 1969. Perceptrons: An introduction to computational geometry. Cambridge: MIT Press.

  56. Moody, Chris. Stop using word2vec. https://multithreaded.stitchfix.com/blog/2017/10/18/stop-using-word2vec/.

  57. Moradi, Mehdi, and Mohammad Zulkernine. A neural network based system for intrusion detection and classification of attacks. https://pdfs.semanticscholar.org/cbf2/57a638aff38eae99bf88d8e22f150d9d8c47.pdf.

  58. Narwekar, Abhishek, and Anusri Pampari. 2016. Recurrent neural network architectures. http://slazebni.cs.illinois.edu/spring17/lec20_rnn.pdf.

  59. Neubig, Graham. 2018. NLP programming tutorial 8 — Recurrent neural nets. http://www.phontron.com/slides/nlp-programming-en-08-rnn.pdf.

  60. Ng, Andrew Y., and Michael I. Jordan. 2001. On discriminative vs. generative classifiers: A comparison of logistic regression and naïve Bayes. In Proceedings of the 14th international conference on neural information processing systems: natural and synthetic, NIPS’01, 841–848.

  61. Olah, Christopher. 2014. Understanding convolutions. http://colah.github.io/posts/2014-07-Understanding-Convolutions/.

  62. Paul, Sunhera, Fabio Di Troia, and Mark Stamp. Word embedding techniques for malware evolution detection. Submitted for publication.

  63. Philipp, George, Dawn Song, and Jaime G. Carbonell. 2018. The exploding gradient problem demystified — Definition, prevalence, impact, origin, tradeoffs, and solutions. https://arxiv.org/pdf/1712.05577.pdf.

  64. Popov, I. 2017. Malware detection using machine learning based on Word2Vec embeddings of machine code instructions. In 2017 Siberian symposium on data science and engineering, SSDSE, 1–4.

  65. Rabiner, Lawrence R. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE 77(2): 257–286. https://www.cs.sjsu.edu/~stamp/RUA/Rabiner.pdf.

  66. Rezende, E., G. Ruppert, T. Carvalho, F. Ramos, and P. de Geus. 2017. Malicious software classification using transfer learning of ResNet-50 deep neural network. In 16th IEEE international conference on machine learning and applications, ICMLA 2017, 1011–1014.

  67. Rigaki, Maria, and Sebastian Garcia. 2018. Bringing a GAN to a knife-fight: Adapting malware communication to avoid detection. https://mariarigaki.github.io/publication/gan-knife-fight/.

  68. Rosenblatt, Frank. 1961. Principles of neurodynamics: Perceptrons and the theory of brain mechanisms. http://www.dtic.mil/dtic/tr/fulltext/u2/256582.pdf.

  69. Ruderman, Avraham, Neil C. Rabinowitz, Ari S. Morcos, and Daniel Zoran. 2018. Pooling is neither necessary nor sufficient for appropriate deformation stability in CNNs. https://arxiv.org/abs/1804.04438.

  70. Rumelhart, David, Geoffrey Hinton, and Ronald Williams. 1986. Learning representations by back-propagating errors. Nature 323: 533–536.

  71. Shamshirband, Shahab, and Anthony T. Chronopoulos. 2019. A new malware detection system using a high performance-elm method. In Proceedings of the 23rd international database applications & engineering symposium, IDEAS’19, 33:1–33:10.

  72. Sharmin, Tazmina, Fabio Di Troia, Katerina Potika, and Mark Stamp. 2020. Convolutional neural networks for image spam detection. Information Security Journal: A Global Perspective 29 (3): 103–117.

  73. Shen, Yun, and Gianluca Stringhini. 2019. Attack2vec: Leveraging temporal word embeddings to understand the evolution of cyberattacks. https://seclab.bu.edu/people/gianluca/papers/attack2vec-usenix2019.pdf.

  74. Singh, Tanuvir, Fabio Di Troia, Corrado Aaron Visaggio, Thomas H. Austin, and Mark Stamp. 2016. Support vector machines and malware detection. Journal of Computer Virology and Hacking Techniques 12 (4): 203–212.

  75. Springenberg, Jost Tobias, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. 2014. Striving for simplicity: The all convolutional net. https://arxiv.org/abs/1412.6806.

  76. Spruston, Nelson. 2008. Pyramidal neurons: Dendritic structure and synaptic integration. Nature Reviews Neuroscience 9: 206–221. https://www.nature.com/articles/nrn2286.

  77. Stamp, Mark. 2004. A revealing introduction to hidden Markov models. https://www.cs.sjsu.edu/~stamp/RUA/HMM.pdf.

  78. Stamp, Mark. 2018. A survey of machine learning algorithms and their application in information security. In Guide to vulnerability analysis for computer networks and systems: an artificial intelligence approach, eds. Parkinson, Simon, Andrew Crampton, and Richard Hill, chapter 2, 33–55. Berlin: Springer.

  79. Stamp, Mark. 2019. Alphabet soup of deep learning topics. https://www.cs.sjsu.edu/~stamp/RUA/alpha.pdf.

  80. Stamp, Mark. 2019. Deep thoughts on deep learning. https://www.cs.sjsu.edu/~stamp/RUA/ann.pdf.

  81. Veit, Andreas, Michael Wilber, and Serge Belongie. Residual networks behave like ensembles of relatively shallow networks. https://arxiv.org/pdf/1605.06431.pdf.

  82. Wallis, Charles. 2017. History of the perceptron. https://web.csulb.edu/~cwallis/artificialn/History.htm.

  83. Wu, Peilun, Hui Guo, and Nour Moustafa. 2020. Pelican: A deep residual network for network intrusion detection. https://arxiv.org/pdf/2001.08523.pdf.

  84. Wu, Yonghui, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. https://arxiv.org/abs/1609.08144.

  85. Xu, Ke, Yingjiu Li, Robert H. Deng, and Kai Chen. 2018. Deeprefiner: Multi-layer android malware detection system applying deep neural networks. In 2018 IEEE European symposium on security and privacy, Euro SP, 473–487.

  86. Xue, Di, Jingmei Li, Tu Lv, Weifei Wu, and JiaXiang Wang. 2019. Malware classification using probability scoring and machine learning. IEEE Access, 91641–91656.

  87. Yagcioglu, Semih, Mehmet Saygin Seyfioglu, Begum Citamak, Batuhan Bardak, Seren Guldamlasioglu, Azmi Yuksel, and Emin Islam Tatli. 2019. Detecting cybersecurity events from noisy short text. https://arxiv.org/abs/1904.05054.

  88. Yajamanam, Sravani, Vikash Raja Samuel Selvin, Fabio Di Troia, and Mark Stamp. 2018. Deep learning versus gist descriptors for image-based malware classification. In Proceedings of the 4th international conference on information systems security and privacy, ICISSP 2018, eds. Mori, Paolo, Steven Furnell, and Olivier Camp, 553–561.

  89. Zeiler, Matthew D, and Rob Fergus. 2014. Visualizing and understanding convolutional networks. https://cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf.

  90. Zhang, Wei, Huan Ren, Qingshan Jiang, and Kai Zhang. 2015. Exploring feature extraction and ELM in malware detection for Android devices. Advances in Neural Networks, ISNN, eds. Hu, Xiaolin, Yousheng Xia, Yunong Zhang, and Dongbin Zhao, 489–498.

Author information

Correspondence to Mark Stamp.


Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Stamp, M. (2021). A Selective Survey of Deep Learning Techniques and Their Application to Malware Analysis. In: Stamp, M., Alazab, M., Shalaginov, A. (eds) Malware Analysis Using Artificial Intelligence and Deep Learning. Springer, Cham. https://doi.org/10.1007/978-3-030-62582-5_1

  • DOI: https://doi.org/10.1007/978-3-030-62582-5_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-62581-8

  • Online ISBN: 978-3-030-62582-5

  • eBook Packages: Computer Science, Computer Science (R0)
