
The Search for Trust Evidence

  • David E. Ott (Email author)
  • Claire Vishik
  • David Grawrock
  • Anand Rajan
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 589)

Abstract

Trust Evidence addresses the problem of how devices or systems should mutually assess trustworthiness at the onset of and during an interaction. Approaches to Trust Evidence can be used to assess risk, for example, by facilitating the choice of threat posture as devices interact within the context of a smart city. Trust Evidence may also augment authentication schemes by adding information about a device and its operational context. In this paper, we discuss Intel’s 3-year collaboration with university researchers on approaches to Trust Evidence. This collaboration included an exploratory phase that looked at several formulations of Trust Evidence in varied contexts. A follow-up phase looked more specifically at Trust Evidence in software runtime environments, and at whether techniques could be developed to generate information on correct execution. We describe various research results associated with two key avenues of investigation: programming language extensions for numerical Trust Evidence, and an innovative protected module architecture. We close with reflections on industry-university researcher collaborations and several suggestions for enabling success.
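To make the idea of numerical Trust Evidence concrete, the following is a minimal illustrative sketch, not the authors' design or the PEALT policy language: each evidence source (the names `attestation`, `runtime`, and `context`, the weights, and the threshold are all invented for illustration) reports a score in [0, 1], the scores are combined by a weighted average, and the aggregate is mapped to a coarse threat posture.

```python
# Illustrative sketch of numerical Trust Evidence aggregation.
# All source names, weights, and the threshold are hypothetical.

def aggregate_trust(evidence: dict, weights: dict) -> float:
    """Weighted average of per-source trust scores, each in [0, 1]."""
    total_weight = sum(weights[src] for src in evidence)
    return sum(score * weights[src] for src, score in evidence.items()) / total_weight

def choose_posture(trust: float, threshold: float = 0.7) -> str:
    """Map an aggregate trust score to a coarse threat posture."""
    return "open" if trust >= threshold else "restricted"

evidence = {
    "attestation": 0.9,  # e.g. device attested a known-good software stack
    "runtime": 0.8,      # e.g. runtime integrity checks passed
    "context": 0.5,      # e.g. device is on an unfamiliar network
}
weights = {"attestation": 0.5, "runtime": 0.3, "context": 0.2}

trust = aggregate_trust(evidence, weights)
print(round(trust, 2), choose_posture(trust))  # prints: 0.79 open
```

A real scheme would need to address where the scores come from and why they can be believed — which is precisely the question the runtime-environment work described below takes up.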

Keywords

Combined Evidence · Protected Module Architecture · Protected Mode · Programming Language Extension · Numerous Trust
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • David E. Ott¹ (Email author)
  • Claire Vishik¹
  • David Grawrock¹
  • Anand Rajan¹

  1. Intel Corporation, Chandler, USA
