Data Analytics of a Honeypot System Based on a Markov Decision Process Model

  • Conference paper in Recent Trends and Advances in Model Based Systems Engineering

Abstract

A honeypot system can play a significant role in exposing cybercrimes and maintaining reliable cybersecurity. The Markov decision process (MDP) is an important method in systems engineering research and machine learning. In this paper, data analytics of a honeypot system based on an MDP model is conducted using the R language and its functions. Specifically, data analytics is performed over a finite planning horizon (for both an undiscounted MDP and a discounted MDP) and over an infinite planning horizon (for a discounted MDP). Results obtained with four algorithms (value iteration, policy iteration, linear programming, and Q-learning) are compared to check the validity of the MDP model. Expected total rewards for various states are simulated under varying transition probability and transition reward parameters.
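
To make the workflow concrete, the sketch below builds a small illustrative MDP in R and runs it through a finite-horizon solver and the four infinite-horizon algorithms named in the abstract. It assumes the MDPtoolbox package (the paper only states that R and its functions were used), and every state label, transition probability, and reward value here is a placeholder rather than the paper's honeypot data.

```r
# Minimal sketch: a toy 3-state, 2-action MDP solved with the R package
# MDPtoolbox. All numbers below are illustrative placeholders.
library(MDPtoolbox)

S <- 3  # hypothetical honeypot states, e.g., probed / engaged / exposed
A <- 2  # hypothetical defender actions, e.g., observe / interact

# Transition probabilities: S x S x A array, one stochastic matrix per action
P <- array(0, c(S, S, A))
P[, , 1] <- matrix(c(0.7, 0.2, 0.1,
                     0.1, 0.8, 0.1,
                     0.0, 0.1, 0.9), nrow = S, byrow = TRUE)
P[, , 2] <- matrix(c(0.5, 0.4, 0.1,
                     0.2, 0.6, 0.2,
                     0.1, 0.2, 0.7), nrow = S, byrow = TRUE)

# Rewards: S x A matrix of expected immediate rewards
R <- matrix(c( 1, 2,
               0, 3,
              -1, 5), nrow = S, byrow = TRUE)

mdp_check(P, R)  # returns an empty string if the model is well formed

# Finite planning horizon: undiscounted (discount = 1) vs. discounted (0.9)
fh_undisc <- mdp_finite_horizon(P, R, discount = 1.0, N = 10)
fh_disc   <- mdp_finite_horizon(P, R, discount = 0.9, N = 10)

# Infinite planning horizon, discounted: compare the four algorithms
vi    <- mdp_value_iteration(P, R, discount = 0.9)
piter <- mdp_policy_iteration(P, R, discount = 0.9)
lp    <- mdp_LP(P, R, discount = 0.9)
ql    <- mdp_Q_learning(P, R, discount = 0.9)

# Optimal policy per state from each algorithm, plus expected total rewards
rbind(value_iteration = vi$policy, policy_iteration = piter$policy,
      linear_programming = lp$policy, Q_learning = ql$policy)
vi$V  # expected total discounted reward for each state
```

Under these assumptions, value iteration, policy iteration, and linear programming should agree on the optimal policy, while Q-learning approximates it from simulated transitions; the finite-horizon solver returns stage-by-stage value functions, which with discount = 1 are undiscounted expected total rewards.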


Acknowledgments

This paper is based upon work performed under Contract No. W912HZ-17-C-0015 with the US Army Engineer Research and Development Center (ERDC). Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the author(s) and do not reflect the views of the ERDC.

Author information

Corresponding author

Correspondence to Lidong Wang.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Wang, L., Jones, R., Falls, T.C. (2022). Data Analytics of a Honeypot System Based on a Markov Decision Process Model. In: Madni, A.M., Boehm, B., Erwin, D., Moghaddam, M., Sievers, M., Wheaton, M. (eds) Recent Trends and Advances in Model Based Systems Engineering. Springer, Cham. https://doi.org/10.1007/978-3-030-82083-1_10

  • DOI: https://doi.org/10.1007/978-3-030-82083-1_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-82082-4

  • Online ISBN: 978-3-030-82083-1

  • eBook Packages: Engineering, Engineering (R0)
