
Cyber Security Deception

  • Mohammed H. Almeshekah
  • Eugene H. Spafford

Abstract

Our physical and digital worlds are converging at a rapid pace, placing much of our valuable information in digital form. Today, most computer systems respond predictably to probes and attacks, providing adversaries with valuable information about how to infiltrate them. In this chapter, we discuss how deception can play a prominent role in enhancing the security of current computer systems. We show how attackers have used deceptive techniques in many successful computer breaches; phishing, social engineering, and drive-by downloads are prime examples. We discuss why deception has been used only haphazardly in computer security. Additionally, we discuss some of the unique advantages that deception-based security mechanisms bring to computer defense. Finally, we present a framework in which deception can be planned and integrated into computer defenses.
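To make the idea concrete, the sketch below (ours, not a mechanism defined in this chapter) illustrates one well-known deception-based control: the "honeywords" scheme of Juels and Rivest [24]. Each account stores the hash of the real password among hashes of plausible decoys, so an attacker who steals and cracks the password file cannot tell which entry is genuine, and any login attempt with a decoy betrays the theft. All names and parameters here are illustrative.

    # A minimal honeywords sketch in the spirit of Juels and Rivest [24].
    # Illustrative only; names and parameters are our own assumptions.
    import hashlib
    import hmac
    import secrets

    def _hash(word: str, salt: bytes) -> bytes:
        # PBKDF2 stands in for whatever hardened password hash a real system uses.
        return hashlib.pbkdf2_hmac("sha256", word.encode(), salt, 100_000)

    def enroll(real_password: str, decoys: list[str]):
        """Store the real hash among decoy hashes; return (salt, hashes, real index)."""
        salt = secrets.token_bytes(16)
        sweetwords = decoys + [real_password]
        secrets.SystemRandom().shuffle(sweetwords)
        # In the full scheme, the index of the genuine hash lives on a separate
        # hardened "honeychecker" server, so stealing the credential database
        # alone never reveals which entry is real.
        return salt, [_hash(w, salt) for w in sweetwords], sweetwords.index(real_password)

    def login(attempt: str, salt: bytes, hashes: list[bytes], real_index: int) -> bool:
        h = _hash(attempt, salt)
        for i, stored in enumerate(hashes):
            if hmac.compare_digest(h, stored):
                if i == real_index:
                    return True  # genuine password
                # A decoy matched: the password file was likely stolen and cracked.
                print("ALERT: honeyword used; credential database may be compromised")
                return False
        return False  # ordinary failed login

    salt, hashes, idx = enroll("correct horse", ["red stapler", "blue monday"])
    assert login("correct horse", salt, hashes, idx)
    assert not login("red stapler", salt, hashes, idx)  # raises the alarm

Note that a failed decoy login looks to the attacker like an ordinary wrong guess; this asymmetry, where the defender learns much more from the event than the attacker does, is precisely what deception-based defenses exploit.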

Keywords

Intrusion Detection System · Security Control · Conjunction Fallacy · Power Distance Index · Advanced Persistent Threat

Notes

Acknowledgements

The material in this chapter is derived from [29]. Portions of this work were supported by National Science Foundation Grant EAGER-1548114, by the Northrop Grumman Cybersecurity Research Consortium (NGCRC), and by sponsors of the Center for Education and Research in Information Assurance and Security (CERIAS).

References

  1. Verizon, “Threats on the Horizon – The Rise of the Advanced Persistent Threat.” http://www.verizonenterprise.com/DBIR/.
  2. J. J. Yuill, Defensive Computer-Security Deception Operations: Processes, Principles and Techniques. PhD dissertation, North Carolina State University, 2006.
  3. B. Cheswick, “An Evening with Berferd in Which a Cracker is Lured, Endured, and Studied,” in Proceedings of the Winter USENIX Conference, (San Francisco), 1992.
  4. C. P. Stoll, The Cuckoo’s Egg: Tracing a Spy Through the Maze of Computer Espionage. Doubleday, 1989.
  5. E. H. Spafford, “More than Passive Defense.” http://goo.gl/5lwZup, 2011.
  6. L. Spitzner, Honeypots: Tracking Hackers. Addison-Wesley, Reading, MA, 2003.
  7. G. H. Kim and E. H. Spafford, “Experiences with Tripwire: Using Integrity Checkers for Intrusion Detection,” tech. rep., Department of Computer Sciences, Purdue University, West Lafayette, IN, 1994.
  8. D. Dagon, X. Qin, G. Gu, W. Lee, J. Grizzard, J. Levine, and H. Owen, “HoneyStat: Local Worm Detection Using Honeypots,” in Recent Advances in Intrusion Detection, pp. 39–58, Springer, 2004.
  9. C. Fiedler, “Secure Your Database by Building HoneyPot Architecture Using a SQL Database Firewall.” http://goo.gl/yr55Cp.
  10. C. Mulliner, S. Liebergeld, and M. Lange, “Poster: Honeydroid – Creating a Smartphone Honeypot,” in IEEE Symposium on Security and Privacy, 2011.
  11. M. Wählisch, A. Vorbach, C. Keil, J. Schönfelder, T. C. Schmidt, and J. H. Schiller, “Design, Implementation, and Operation of a Mobile Honeypot,” tech. rep., Cornell University Library, 2013.
  12. C. Seifert, I. Welch, and P. Komisarczuk, “HoneyC: The Low-Interaction Client Honeypot,” in Proceedings of the 2007 NZCSRCS, 2007.
  13. K. G. Anagnostakis, S. Sidiroglou, P. Akritidis, K. Xinidis, E. Markatos, and A. D. Keromytis, “Detecting Targeted Attacks Using Shadow Honeypots,” in Proceedings of the 14th USENIX Security Symposium, 2005.
  14. D. Moore, V. Paxson, S. Savage, C. Shannon, S. Staniford, and N. Weaver, “Inside the Slammer Worm,” IEEE Security & Privacy, vol. 1, no. 4, pp. 33–39, 2003.
  15. T. Liston, “LaBrea: ‘Sticky’ Honeypot and IDS.” http://labrea.sourceforge.net/labrea-info.html, 2009.
  16. F. Cohen, “The Deception Toolkit.” http://www.all.net/dtk/, 1998.
  17. N. Rowe, E. J. Custy, and B. T. Duong, “Defending Cyberspace with Fake Honeypots,” Journal of Computers, vol. 2, no. 2, pp. 25–36, 2007.
  18. T. Holz and F. Raynal, “Detecting Honeypots and Other Suspicious Environments,” in Information Assurance Workshop, pp. 29–36, IEEE, 2005.
  19. C. Kreibich and J. Crowcroft, “Honeycomb: Creating Intrusion Detection Signatures Using Honeypots,” ACM SIGCOMM Computer Communication Review, vol. 34, no. 1, pp. 51–56, 2004.
  20. D. Moore, C. Shannon, D. J. Brown, G. M. Voelker, and S. Savage, “Inferring Internet Denial-of-Service Activity,” ACM Transactions on Computer Systems (TOCS), vol. 24, no. 2, pp. 115–139, 2006.
  21. L. Spitzner, “Honeytokens: The Other Honeypot.” http://www.symantec.com/connect/articles/honeytokens-other-honeypot, 2003.
  22. J. J. Yuill, M. Zappe, D. Denning, and F. Feer, “Honeyfiles: Deceptive Files for Intrusion Detection,” in Information Assurance Workshop, pp. 116–122, IEEE, 2004.
  23. M. Bercovitch, M. Renford, L. Hasson, A. Shabtai, L. Rokach, and Y. Elovici, “HoneyGen: An Automated Honeytokens Generator,” in IEEE International Conference on Intelligence and Security Informatics (ISI’11), pp. 131–136, IEEE, 2011.
  24. A. Juels and R. L. Rivest, “Honeywords: Making Password-Cracking Detectable,” in Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security, pp. 145–160, ACM, 2013.
  25. X. Chen, J. Andersen, Z. M. Mao, M. Bailey, and J. Nazario, “Towards an Understanding of Anti-Virtualization and Anti-Debugging Behavior in Modern Malware,” in IEEE International Conference on Dependable Systems and Networks, pp. 177–186, IEEE, 2008.
  26. M. Sourour, B. Adel, and A. Tarek, “Ensuring Security-in-Depth Based on Heterogeneous Network Security Technologies,” International Journal of Information Security, vol. 8, no. 4, pp. 233–246, 2009.
  27. K. Heckman, “Active Cyber Network Defense with Denial and Deception.” http://goo.gl/Typwi4, Mar. 2013.
  28. R. V. Jones, Reflections on Intelligence. London: William Heinemann Ltd, 1989.
  29. M. H. Almeshekah, Using Deception to Enhance Security: A Taxonomy, Model and Novel Uses. PhD thesis, Purdue University, 2015.
  30. M. Harkins, “A New Security Architecture to Improve Business Agility,” in Managing Risk and Information Security, pp. 87–102, Springer, 2013.
  31. J. Boyd, “The Essence of Winning and Losing.” http://www.danford.net/boyd/essence.htm, 1995.
  32. E. M. Hutchins, M. J. Cloppert, and R. M. Amin, “Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains,” Leading Issues in Information Warfare & Security Research, vol. 1, p. 80, 2011.
  33. K. J. Higgins, “How Lockheed Martin’s ‘Kill Chain’ Stopped SecurID Attack.” http://goo.gl/r9ctmG, 2013.
  34. F. Petitcolas, “La Cryptographie Militaire.” http://goo.gl/e5IOj1.
  35. K. D. Mitnick and W. L. Simon, The Art of Deception: Controlling the Human Element of Security. Wiley, 2003.
  36. P. Vogt, F. Nentwich, N. Jovanovic, E. Kirda, C. Kruegel, and G. Vigna, “Cross-Site Scripting Prevention with Dynamic Data Tainting and Static Analysis,” in The 2007 Network and Distributed System Security Symposium (NDSS’07), 2007.
  37. A. Barth, C. Jackson, and J. C. Mitchell, “Robust Defenses for Cross-Site Request Forgery,” in Proceedings of the 15th ACM Conference on Computer and Communications Security (CCS’08), 2008.
  38. OWASP, “OWASP Top 10.” http://owasptop10.googlecode.com/files/OWASPTop10-2013.pdf, 2013.
  39. M. H. Almeshekah and E. H. Spafford, “Planning and Integrating Deception into Computer Security Defenses,” in New Security Paradigms Workshop (NSPW’14), (Victoria, BC, Canada), 2014.
  40. J. B. Bell and B. Whaley, Cheating and Deception. Transaction Publishers, New Brunswick, 1991.
  41. M. Bennett and E. Waltz, Counterdeception Principles and Applications for National Security. Artech House, 2007.
  42. J. R. Thompson, R. Hopf-Wichel, and R. E. Geiselman, “The Cognitive Bases of Intelligence Analysis,” tech. rep., US Army Research Institute for the Behavioral and Social Sciences, 1984.
  43. R. Jervis, Perception and Misperception in International Politics. Princeton University Press, 1976.
  44. G. Hofstede, G. J. Hofstede, and M. Minkov, Cultures and Organizations: Software of the Mind. McGraw-Hill, 3rd ed., 2010.
  45. C. D. Güss and D. Dörner, “Cultural Differences in Dynamic Decision-Making Strategies in a Non-linear, Time-delayed Task,” Cognitive Systems Research, vol. 12, no. 3–4, pp. 365–376, 2011.
  46. R. Godson and J. Wirtz, Strategic Denial and Deception. Transaction Publishers, 2002.
  47. A. Tversky and D. Kahneman, “Judgment under Uncertainty: Heuristics and Biases,” Science, vol. 185, pp. 1124–1131, Sept. 1974.
  48. S. A. Sloman, “The Empirical Case for Two Systems of Reasoning,” Psychological Bulletin, vol. 119, no. 1, pp. 3–22, 1996.
  49. A. Tversky and D. Koehler, “Support Theory: A Nonextensional Representation of Subjective Probability,” Psychological Review, vol. 101, no. 4, p. 547, 1994.
  50. A. Tversky and D. Kahneman, “Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment,” Psychological Review, vol. 90, no. 4, pp. 293–315, 1983.
  51. L. Zhao and M. Mannan, “Explicit Authentication Response Considered Harmful,” in New Security Paradigms Workshop (NSPW’13), (New York, NY, USA), pp. 77–86, ACM Press, 2013.
  52. R. S. Nickerson, “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises,” Review of General Psychology, vol. 2, pp. 175–220, June 1998.
  53. C. Sample, “Applicability of Cultural Markers in Computer Network Attacks,” in 12th European Conference on Information Warfare and Security, (University of Jyväskylä, Finland), pp. 361–369, 2013.
  54. S. B. Murphy, J. T. McDonald, and R. F. Mills, “An Application of Deception in Cyberspace: Operating System Obfuscation,” in Proceedings of the 5th International Conference on Information Warfare and Security (ICIW 2010), pp. 241–249, 2010.
  55. W. Wang, J. Bickford, I. Murynets, R. Subbaraman, A. G. Forte, and G. Singaraju, “Detecting Targeted Attacks by Multilayer Deception,” Journal of Cyber Security and Mobility, vol. 2, no. 2, pp. 175–199, 2013.
  56. X. Fu, On Traffic Analysis Attacks and Countermeasures. PhD dissertation, Texas A&M University, 2005.
  57. S. A. Hofmeyr, S. Forrest, and A. Somayaji, “Intrusion Detection Using Sequences of System Calls,” Journal of Computer Security, vol. 6, no. 3, pp. 151–180, 1998.
  58. F. Cohen and D. Koike, “Misleading Attackers with Deception,” in Proceedings of the 5th Annual IEEE SMC Information Assurance Workshop, pp. 30–37, IEEE, 2004.
  59. T. E. Carroll and D. Grosu, “A Game Theoretic Investigation of Deception in Network Security,” Security and Communication Networks, vol. 4, no. 10, pp. 1162–1172, 2011.
  60. R. Hesketh, Fortitude: The D-Day Deception Campaign. Woodstock, NY: Overlook Hardcover, 2000.

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. King Saud University, Riyadh, Saudi Arabia
  2. Purdue University, West Lafayette, USA
