Programming with Explicit Security Policies

  • Andrew C. Myers
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3444)


Are computing systems trustworthy? To answer this question, we need to know three things: what the systems are supposed to do, what they are not supposed to do, and what they actually do. All three are problematic. There is no expressive, practical way to specify what systems must and must not do, and even if we had such a specification, it would likely be infeasible to show that existing computing systems satisfy it. The alternative is to design security in from the beginning: accompany programs with explicit, machine-checked security policies, written by programmers as part of program development. Trustworthy systems must safeguard the end-to-end confidentiality, integrity, and availability of the information they manipulate. We currently lack both sufficiently expressive specifications for these information security properties and sufficiently accurate methods for checking them. Fortunately, there has been progress on both fronts. First, information security policies can be made more expressive than simple noninterference or access control policies by adding notions of ownership, declassification, robustness, and erasure. Second, program analysis and transformation can be used to provide strong, automated security assurance, yielding a kind of security by construction. This paper gives an overview of programming with explicit information security policies and outlines some future challenges.
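To make the abstract's central idea concrete, the following is a minimal sketch of label-based information-flow tracking with explicit declassification, in the general spirit of the label models the paper discusses. The `Labeled` class, the `SECRET`/`PUBLIC` levels, and the `declassify` and `output` functions are illustrative inventions for this sketch, not the paper's actual policy language or API.

```python
# Illustrative sketch: values carry confidentiality labels, labels join when
# values are combined, and releasing secret data requires an explicit,
# auditable declassification step. (Hypothetical names, not the paper's API.)

SECRET, PUBLIC = "secret", "public"

def join(l1, l2):
    # Combined data is at least as confidential as each of its inputs.
    return SECRET if SECRET in (l1, l2) else PUBLIC

class Labeled:
    """A value paired with a confidentiality label."""
    def __init__(self, value, label):
        self.value, self.label = value, label

    def __add__(self, other):
        return Labeled(self.value + other.value, join(self.label, other.label))

def output(channel_label, lv):
    # Enforce the flow policy: secret data may not reach a public channel.
    if lv.label == SECRET and channel_label == PUBLIC:
        raise RuntimeError("illegal flow: secret value on public channel")
    return lv.value

def declassify(lv):
    # Explicit, programmer-sanctioned downgrading of a secret value.
    return Labeled(lv.value, PUBLIC)

salary = Labeled(90000, SECRET)
bonus = Labeled(5000, PUBLIC)
total = salary + bonus            # label joins to SECRET
# output(PUBLIC, total)           # would raise: illegal flow
released = declassify(total)      # explicit policy decision, visible in the code
print(output(PUBLIC, released))   # prints 95000
```

The key point the sketch illustrates is that the downgrade is an explicit construct in the program text, so a reviewer or a checker can locate and audit every place where the noninterference baseline is relaxed.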





Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Andrew C. Myers, Cornell University
