Enforcer – Efficient Failure Injection

  • Cyrille Artho
  • Armin Biere
  • Shinichi Honiden
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4085)


Non-determinism of the thread schedule is a well-known problem in concurrent programming. However, other sources of non-determinism exist that cannot be controlled by the application, such as network availability. Testing a program while its communication resources are unavailable is difficult, as it requires changes to the host system that have to be coordinated with the test suite. Essentially, each interaction of the application with the environment can result in a failure, and only some of these failures can be tested. Our work identifies such potential failures and develops a strategy for testing all relevant outcomes of such actions. Our tool, Enforcer, combines the structure of unit tests, coverage information, and fault injection. By taking advantage of a unit test infrastructure, performance improves by orders of magnitude compared to previous approaches. The tool has been applied to several real-world programs, where it found faults without requiring extra test code.
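The idea of forcing the failure outcome of an environment interaction from within a test run can be illustrated with a minimal sketch (all names hypothetical; this is not the tool's actual instrumentation, which works on bytecode): a flag stands in for an injected fault at a call site that normally depends on the network, so a repeated unit test can cover the exception-handling path without changing the host system.

```java
import java.io.IOException;

// Hedged sketch of failure injection at one call site.
// In the real tool, instrumentation decides per test run whether
// the environment interaction succeeds or throws.
public class FailureInjectionSketch {
    // Stand-in for the injected fault at this call site.
    static boolean injectFault = false;

    // Models a network interaction: either succeeds or fails with IOException.
    static String fetchGreeting() throws IOException {
        if (injectFault) {
            throw new IOException("injected network failure");
        }
        return "hello"; // normal outcome
    }

    // Unit-test-style driver: the exception handler is the code under test.
    static String runOnce() {
        try {
            return fetchGreeting();
        } catch (IOException e) {
            return "recovered"; // recovery path, normally hard to reach
        }
    }

    public static void main(String[] args) {
        String normal = runOnce();   // first run: interaction succeeds
        injectFault = true;
        String injected = runOnce(); // second run: fault forced, handler exercised
        System.out.println(normal + "," + injected);
    }
}
```

Running the same test twice, once per outcome, is what covers both branches of the environment interaction; the paper's contribution is doing this automatically, guided by coverage of instrumented call sites.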







Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Cyrille Artho (1)
  • Armin Biere (2)
  • Shinichi Honiden (1)

  1. National Institute of Informatics, Tokyo, Japan
  2. Johannes Kepler University, Linz, Austria
