Abstract
A honeypot system can play a significant role in exposing cybercrime and maintaining reliable cybersecurity. The Markov decision process (MDP) is an important method in systems engineering research and machine learning. In this paper, data analytics of a honeypot system based on an MDP model is conducted using the R language and its functions. Specifically, data analytics is performed over a finite planning horizon (for both an undiscounted and a discounted MDP) and over an infinite planning horizon (for a discounted MDP). Results obtained with four algorithms (value iteration, policy iteration, linear programming, and Q-learning) are compared to check the validity of the MDP model. Expected total rewards for the various states are simulated under varying transition probability and transition reward parameters.
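The infinite-horizon discounted case described above can be illustrated with a small value-iteration sketch. The two states, two actions, transition probabilities, rewards, and discount factor below are hypothetical placeholders, not the paper's actual honeypot parameters (the paper's analyses were carried out in R):

```python
import numpy as np

# Hypothetical 2-state honeypot MDP: states {0: "idle", 1: "engaged"},
# actions {0: "observe", 1: "redirect"}. All numbers are illustrative.
P = np.array([          # P[a, s, s'] -- transition probabilities
    [[0.7, 0.3],
     [0.4, 0.6]],
    [[0.9, 0.1],
     [0.2, 0.8]],
])
R = np.array([          # R[a, s] -- expected immediate reward
    [1.0, 2.0],
    [0.5, 3.0],
])
gamma = 0.9             # discount factor

def value_iteration(P, R, gamma, tol=1e-8):
    """Iterate the Bellman optimality update until the value change is < tol."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * (P @ V)          # Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] V[s']
        V_new = Q.max(axis=0)            # greedy backup over actions
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

V, policy = value_iteration(P, R, gamma)
print("Optimal expected total rewards per state:", V)
print("Optimal action per state:", policy)
```

Policy iteration, linear programming, and Q-learning should converge to the same optimal values for such a model, which is the cross-check the paper performs.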
Acknowledgments
This paper is based upon work performed under Contract No. W912HZ-17-C-0015 with the US Army Engineer Research and Development Center (ERDC). Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the author(s) and do not reflect the views of the ERDC.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Wang, L., Jones, R., Falls, T.C. (2022). Data Analytics of a Honeypot System Based on a Markov Decision Process Model. In: Madni, A.M., Boehm, B., Erwin, D., Moghaddam, M., Sievers, M., Wheaton, M. (eds) Recent Trends and Advances in Model Based Systems Engineering. Springer, Cham. https://doi.org/10.1007/978-3-030-82083-1_10
Print ISBN: 978-3-030-82082-4
Online ISBN: 978-3-030-82083-1