Automated Detection of Logical Errors in Programs

  • George Stergiopoulos
  • Panagiotis Katsaros
  • Dimitris Gritzalis
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8924)

Abstract

Static and dynamic program analysis tools mostly focus on detecting a priori defined defect patterns and security vulnerabilities. Automated detection of logical errors that stem from a faulty implementation of an application’s functionality is relatively uncharted territory. Automation can be based on profiling the intended behavior behind the source code. In this paper, we present a new code profiling method that combines the cross-checking of dynamic program invariants with symbolic execution, information flow analysis, and fuzzy logic. Our goal is to detect logical errors and exploitable vulnerabilities. We discuss the theoretical underpinnings and the practical implementation of our approach, and we test the APP_LogGIC tool that implements the proposed analysis on two real-world applications. The results show that profiling the intended program behavior is feasible in diverse applications. We also discuss the heuristics used to cope with state-space explosion and large data sets. Code metrics and test results demonstrate the effectiveness of the approach.
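To make the combination of techniques in the abstract concrete, the following is a minimal, self-contained sketch of one of its ingredients: ranking a violated dynamic invariant, observed along a symbolically explored path, with Mamdani-style fuzzy logic. This is not the authors’ APP_LogGIC implementation; the class name, the 1–5 input scales, the membership functions, and the rule base are assumptions made purely for illustration.

```java
// Minimal, self-contained sketch (not the authors' APP_LogGIC code). It shows one way a
// violated dynamic invariant, observed along a symbolically explored path, could be ranked
// with Mamdani-style fuzzy logic. Scales, membership functions and rules are assumptions.
public class FuzzyRiskSketch {

    // Triangular membership function with support (a, c) and peak at b.
    static double tri(double x, double a, double b, double c) {
        if (x <= a || x >= c) return 0.0;
        return x <= b ? (x - a) / (b - a) : (c - x) / (c - b);
    }

    /**
     * severity:     how strongly the observed program state deviates from the mined invariant (1-5)
     * reachability: how easily tainted input reaches the deviating statement (1-5)
     * Returns a defuzzified risk score in roughly [1, 5] (centroid of the aggregated output).
     */
    static double risk(double severity, double reachability) {
        // Fuzzification: low / medium / high memberships for both inputs.
        double sevLow = tri(severity, 0, 1, 3), sevMed = tri(severity, 1, 3, 5), sevHigh = tri(severity, 3, 5, 7);
        double reaLow = tri(reachability, 0, 1, 3), reaMed = tri(reachability, 1, 3, 5), reaHigh = tri(reachability, 3, 5, 7);

        // Assumed rule base (min = AND, max = OR / aggregation):
        double outLow  = Math.max(sevLow, reaLow);                      // low severity OR low reachability -> low risk
        double outMed  = Math.min(sevMed, reaMed);                      // medium severity AND medium reachability -> medium risk
        double outHigh = Math.min(sevHigh, Math.max(reaMed, reaHigh));  // high severity AND reachable -> high risk

        // Centroid defuzzification over the output universe [1, 5].
        double num = 0, den = 0;
        for (int i = 10; i <= 50; i++) {
            double x = i / 10.0;
            double mu = Math.max(Math.max(Math.min(outLow,  tri(x, 0, 1, 3)),
                                          Math.min(outMed,  tri(x, 1, 3, 5))),
                                 Math.min(outHigh, tri(x, 3, 5, 7)));
            num += x * mu;
            den += mu;
        }
        return den == 0 ? 1.0 : num / den;
    }

    public static void main(String[] args) {
        // Example: a strong invariant violation on a path that tainted user input can reach.
        System.out.printf("risk = %.2f%n", risk(4.5, 4.0));
    }
}
```

In the paper itself, the fuzzy inference is used to rank analysis findings on a risk scale; the sketch only mirrors that idea at the smallest possible size.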

Keywords

Risk · Logical errors · Source code profiling · Static analysis · Dynamic analysis · Input vectors · Fuzzy logic

Notes

Acknowledgment

This research has been co-financed by the European Union (European Social Fund, ESF) and by Greek national funds through the Operational Program “Education and Lifelong Learning” of the National Strategic Reference Framework (NSRF) - Research Funding Program: Thalis - Athens University of Economics and Business - Software Engineering Research Platform.


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • George Stergiopoulos 1
  • Panagiotis Katsaros 2
  • Dimitris Gritzalis 1
  1. Information Security and Critical Infrastructure Protection Laboratory, Department of Informatics, Athens University of Economics and Business (AUEB), Athens, Greece
  2. Department of Informatics, Aristotle University of Thessaloniki, Thessaloniki, Greece