
Honey-X

Chapter in:
Game Theory for Cyber Deception

Part of the book series: Static & Dynamic Game Theory: Foundations & Applications ((SDGTFA))


Abstract

The previous chapter discussed obfuscation, in which the defender’s goal is to hide valuable information within noise. Obfuscation, in other words, is a species of crypsis (Sect. 4.3). In other species of deception, however, the defender aims to create a specific false belief. This is called mimesis. The present chapter studies static mimesis, or honey-x, which takes its name from technologies such as honeypots and honeytokens.


Notes

  1. Since honeynets allow dynamic interaction with the attacker, some honeynets could qualify as attacker engagement (Chap. 7).

  2. By one shot, we mean that the interaction between the players is not repeated, although the interaction is dynamic in the sense that one player transmits a signal and the other player acts after he observes the signal. If the interaction is repeated, we call the deception attacker engagement, studied in Chap. 7.

  3. Deception in all of these cases requires some expense, but the cost is small compared with the utility gain from successful deception. For example, the difference for the Allies between winning and losing at Normandy was likely much higher than the cost of making misleading preparations to invade at Pas de Calais.

  4. Harsanyi conceptualized type selection as a randomized move by a non-strategic player called nature (in order to map an incomplete information game to one of complete information) [84].

  5. In pooling PBNE, the message “on the equilibrium path” is the one that is sent by both types of S. Messages “off the equilibrium path” are never sent in equilibrium, although determining the actions that R would play if S were to transmit a message off the path is necessary in order to determine the existence of equilibria.

  6. For instance, consider an application to product reviews in an online marketplace. A product may be low (\(\theta =0\)) or high (\(\theta =1\)) quality. A reviewer (S) may describe the product as poor (\(m=0\)) or as good (\(m=1\)). Based on the wording of the review, a reader (R) may be suspicious (\(e=1\)) that the review is fake, or he may not be suspicious (\(e=0\)). He can then buy (\(a=1\)) or not buy (\(a=0\)) the product. According to Remark 6.7, if R has a strong prior belief that the product is high quality (\(p(1)\approx 1\)), then he will ignore both the review m and the evidence e, and he will always buy the product (\(a=1\)).

  7. For the same application to online marketplaces as in footnote 6, if R does not have a strong prior belief about the quality of the product (e.g., \(p(1)\approx 0.5\)), then he will trust the review (play \(a=m\)) if \(e=0,\) and he will not trust the review (he will play \(a=1-m\)) if \(e=1.\)

  8. On the other hand, for conservative detectors R plays a pure strategy when \(e=1\) and a mixed strategy when \(e=0.\)

  9. We chose the pooling equilibrium in which \(\sigma ^{S*}(1\,|\,0)\) and \(\sigma ^{S*}(1\,|\,1)\) are continuous with the partially separating \(\sigma ^{S*}(1\,|\,0)\) and \(\sigma ^{S*}(1\,|\,1)\) that are supported in the Middle regime.

  10. Feasible detectors have \(J\le 1-\left| G\right| .\) In addition, we only analyze detectors in which \(\beta >\alpha ,\) which gives \(J>0.\)

  11. Interestingly, some research has also suggested artificially making normal systems appear to be honeypots, as a method of deterring attackers [100]. This is an opposite form of deceptive signaling, and it can also be detected.

  12. For example, consider \(p=0.1.\) If R has access to a low-quality detector, then \(p=0.1\) is within the Zero-Heavy regime. Therefore, R ignores e and always chooses \(a=0.\) This is a “reckless” strategy that is highly damaging to S. On the other hand, if R has access to a high-quality detector, then \(p=0.1\) is within the Middle regime. In that case, R chooses a based on e. This “less reckless” strategy actually improves S’s expected utility, because R chooses \(a=0\) less often.

  13. In other words, the detector in our game emits evidence when the message does not truthfully represent the type—that is, when the sender is lying. This is based on the idea that liars often give off cues of deceptive behavior.
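The receiver behavior described in footnotes 6 and 7 can be sketched with a small Bayesian computation. This is an illustrative reconstruction, not the book's code: the detector rates, the sender's lying probability, and the 0.5 decision threshold below are assumed values, chosen only so that the two regimes (strong prior vs. moderate prior) are visible. The detector model follows footnote 13: evidence \(e=1\) is emitted with probability \(\beta\) when the sender lies (\(m\ne\theta\)) and with probability \(\alpha<\beta\) when he is truthful.

```python
# Hypothetical one-shot signaling game with a deception detector,
# illustrating footnotes 6-7. All numbers are assumed for illustration.

ALPHA, BETA = 0.1, 0.9   # assumed false-alarm and detection rates (beta > alpha)
LIE_PROB = 0.5           # assumed: each sender type lies with this probability

def posterior_high(p1, m, e):
    """P(theta = 1 | m, e) by Bayes' rule under the assumed sender strategy."""
    def joint(theta):
        prior = p1 if theta == 1 else 1 - p1
        send = LIE_PROB if m != theta else 1 - LIE_PROB      # sigma(m | theta)
        lying = (m != theta)
        if e == 1:
            detect = BETA if lying else ALPHA                # detector fires
        else:
            detect = (1 - BETA) if lying else (1 - ALPHA)    # detector silent
        return prior * send * detect
    return joint(1) / (joint(0) + joint(1))

def best_response(p1, m, e):
    """Buy (a = 1) iff the product is more likely high quality."""
    return int(posterior_high(p1, m, e) >= 0.5)

# Strong prior (footnote 6): R ignores both m and e and always buys.
assert all(best_response(0.95, m, e) == 1 for m in (0, 1) for e in (0, 1))

# Moderate prior (footnote 7): R trusts the review unless the detector fires.
for m in (0, 1):
    assert best_response(0.5, m, 0) == m         # e = 0: play a = m
    assert best_response(0.5, m, 1) == 1 - m     # e = 1: play a = 1 - m
```

Lowering \(\beta\) toward \(\alpha\) (a low-quality detector) makes the evidence uninformative, so the posterior stays near the prior and R's action is driven by \(p\) alone, consistent with the regime behavior described in footnote 12.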

Author information


Correspondence to Jeffrey Pawlick.


Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Pawlick, J., Zhu, Q. (2021). Honey-X. In: Game Theory for Cyber Deception. Static & Dynamic Game Theory: Foundations & Applications. Birkhäuser, Cham. https://doi.org/10.1007/978-3-030-66065-9_6
