Remotely Assessing Integrity of Software Applications by Monitoring Invariants: Present Limitations and Future Directions

  • Alessio Viticchié
  • Cataldo Basile
  • Antonio Lioy
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10694)

Abstract

Invariant monitoring is a software attestation technique that aims to prove the integrity of a running application by checking likely invariants, i.e., predicates over the values of program variables. Since the literature describes the technique as very promising, we developed a software protection that remotely checks invariants. However, we faced a series of issues and limitations. After presenting an extensive background on invariants and their use, this paper reports, analyses, and categorizes the limitations we identified. Our work suggests that, although invariant monitoring remains promising, further studies are needed to determine whether it can be used in practice as a remote protection for software applications.
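To make the idea concrete, the following is a minimal sketch in C of what checking a likely invariant at runtime could look like. Everything here is a hypothetical illustration, not the protection evaluated in the paper: the variables (balance, n_transactions), the inferred predicate, and the helpers (invariant_holds, report_to_verifier, withdraw) are invented for exposition, and a real remote scheme would send the attestation evidence over a network to a trusted verifier rather than print it.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical application state: an account balance and a transaction
     * counter, the kind of variables over which a dynamic-inference tool
     * would derive likely invariants from observed executions. */
    static long balance = 100;
    static long n_transactions = 0;

    /* A likely invariant: a predicate over variable values that held on
     * every observed run (here: the balance never goes negative and the
     * counter never does either). The protection evaluates it at runtime. */
    static bool invariant_holds(void) {
        return balance >= 0 && n_transactions >= 0;
    }

    /* Stand-in for the remote side: a real remote-attestation scheme would
     * transmit this evidence to a trusted verifier; printing it keeps the
     * sketch self-contained. */
    static void report_to_verifier(bool ok) {
        printf("attestation: invariant %s\n", ok ? "holds" : "VIOLATED");
    }

    static void withdraw(long amount) {
        balance -= amount;   /* deliberately unchecked: tampering or a bug
                                can drive the balance negative */
        n_transactions++;
        report_to_verifier(invariant_holds());  /* check at this point */
    }

    int main(void) {
        withdraw(40);   /* balance becomes 60: invariant holds */
        withdraw(80);   /* balance becomes -20: violation is reported */
        return 0;
    }

The violation in the second call models a loss of integrity: an attacker who tampers with the application typically perturbs variable values in ways that break such inferred predicates, which is what the remote verifier is meant to detect.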


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Alessio Viticchié (1)
  • Cataldo Basile (1)
  • Antonio Lioy (1)

  1. Politecnico di Torino, Torino, Italy