On Policies and Intents

  • Matthew L. Bolton
  • Celeste M. Wallace
  • Lenore D. Zuck
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7671)


A policy is a set of guidelines meant to accomplish some intent. In information security, a policy typically takes the form of an access control policy that describes the conditions under which entities can perform actions on data objects. Such policies are pervasive in modern society, where information must flow between different enterprises, states, and countries, each of which is likely to have different policies. Unfortunately, policies have proven extremely difficult to evaluate. Even for formal policies, basic questions about completeness and consistency can be undecidable, and these problems are compounded when multiple policies must be considered in aggregate. Worse, many policies are merely “formal-looking” or entirely informal; they cannot be reasoned about formally, and it may not even be possible to reliably determine whether a given course of action is allowed. Beyond these problems, policies face issues of validity: to be valid, a policy should reflect the intent of the policy makers, and it should be clear what the consequences are if the policy is violated. It is the contention of the authors that, when evaluating policies, one needs to be able to understand and reason about the policy makers’ intentions and the consequences associated with them. This paper focuses on the intent portion of this perspective. Because policy makers are human, their intentions are not readily captured by existing policy languages and notations. To rectify this, we take inspiration from task analytic methods, a set of tools and techniques that human factors engineers and cognitive scientists use to represent and reason about the intentions behind human behavior.
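To make the access control framing concrete, the following sketch (ours, not from the paper; all names and the rule structure are illustrative) models a policy as a set of rules, each permitting an action on a class of data objects under a contextual condition:

```python
# Illustrative sketch (not the paper's formalism): an access control policy
# as a set of rules; a request is permitted iff some rule matches it and
# that rule's contextual condition holds.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass(frozen=True)
class Rule:
    subject_role: str                       # who may act
    action: str                             # what they may do
    object_class: str                       # on what kind of data object
    condition: Callable[[Dict], bool]       # context under which the rule applies


def is_permitted(rules: List[Rule], role: str, action: str,
                 obj_class: str, context: Dict) -> bool:
    """Allow the request iff some rule matches and its condition holds."""
    return any(
        r.subject_role == role and r.action == action
        and r.object_class == obj_class and r.condition(context)
        for r in rules
    )


rules = [
    Rule("clinician", "read", "patient_record",
         condition=lambda ctx: ctx.get("treating_patient", False)),
]

print(is_permitted(rules, "clinician", "read", "patient_record",
                   {"treating_patient": True}))   # True
print(is_permitted(rules, "clinician", "read", "patient_record", {}))  # False
```

Even in this tiny form, the evaluation questions the paper raises are visible: nothing in the rule set itself says whether it is complete (some requests may match no rule) or consistent with rules imported from another organization.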
Using task analytic models as a template, we describe how policies can be represented in task-like models as hierarchies of goals and rules, with logics specifying when goals are contextually relevant and what outcomes are expected when goals are achieved. We then discuss how this framing could be used to reason about policy maker intent when evaluating policies. We further outline how this approach could be extended to facilitate reasoning about consequences. Support for legacy systems is also explored.
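The hierarchy-of-goals representation described above can be sketched as follows (our illustration, not the paper's notation; goal names and context keys are hypothetical). Each goal carries a relevance condition, an expected outcome, and subgoals:

```python
# Illustrative sketch (not the paper's notation): a policy represented in the
# style of a task-analytic model, as a hierarchy of goals. Each goal has a
# relevance condition (when it applies in context) and an expected outcome
# (what should hold once it is achieved).
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Goal:
    name: str
    relevant: Callable[[Dict], bool]          # contextual relevance
    expected_outcome: Callable[[Dict], bool]  # what achievement should ensure
    subgoals: List["Goal"] = field(default_factory=list)


def relevant_goals(goal: Goal, context: Dict, path: str = "") -> List[str]:
    """Walk the hierarchy, listing the goals that apply in this context."""
    if not goal.relevant(context):
        return []
    here = path + "/" + goal.name
    found = [here]
    for sub in goal.subgoals:
        found.extend(relevant_goals(sub, context, here))
    return found


protect_data = Goal(
    "protect-patient-data",
    relevant=lambda ctx: ctx.get("handles_phi", False),
    expected_outcome=lambda ctx: not ctx.get("phi_disclosed", False),
    subgoals=[
        Goal("encrypt-at-rest",
             relevant=lambda ctx: ctx.get("stores_data", False),
             expected_outcome=lambda ctx: ctx.get("encrypted", False)),
    ],
)

ctx = {"handles_phi": True, "stores_data": True}
print(relevant_goals(protect_data, ctx))
# ['/protect-patient-data', '/protect-patient-data/encrypt-at-rest']
```

The point of the structure is that intent lives at the goal level, not in individual rules: a rule can then be evaluated against the expected outcome of the goal it serves rather than in isolation.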


Keywords: Policies · Intent · Access Control · Firewalls · Complex Systems





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Matthew L. Bolton¹
  • Celeste M. Wallace¹
  • Lenore D. Zuck¹

  1. University of Illinois at Chicago, USA
