Automatically Repairing Concurrency Bugs with ARC

  • David Kelk
  • Kevin Jalbert
  • Jeremy S. Bradbury
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8063)

Abstract

In this paper we introduce ARC – a fully automated system for repairing deadlocks and data races in concurrent Java programs. ARC consists of two phases: (1) a bug repair phase and (2) an optimization phase. In the first phase, ARC uses a genetic algorithm without crossover to mutate an incorrect program, searching for a variant of the original program that fixes the deadlocks and data races. As this first phase may introduce unneeded synchronization that can negatively affect performance, a second phase attempts to optimize the concurrent source code by removing any excess synchronization without sacrificing program correctness. We describe both phases of our approach and report on our results.
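
To illustrate the kind of repair the first phase searches for, the hypothetical Java sketch below shows an unsynchronized shared counter with a data race, followed by a variant that a synchronization-adding mutation could produce; the class and method names are invented for illustration and are not taken from the paper's benchmarks.

    // Hypothetical example: a data race of the kind ARC's first phase targets.
    // Two threads calling increment() perform an unsynchronized read-modify-write
    // on 'count', so concurrent updates can be lost.
    class RacyCounter {
        private int count = 0;

        void increment() {
            count++;            // data race: no lock guards this shared update
        }

        int get() {
            return count;
        }
    }

    // A variant that a synchronization-adding mutation could produce: the shared
    // update is now guarded by the object's intrinsic lock, removing the race.
    class RepairedCounter {
        private int count = 0;

        synchronized void increment() {
            count++;            // atomic with respect to the other synchronized methods
        }

        synchronized int get() {
            return count;
        }
    }

If the search ends up synchronizing more code than the bug actually requires, the second phase would attempt to strip the excess locking while re-checking that the deadlocks and data races do not reappear.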

Keywords

bug repair · concurrency · concurrency testing · evolutionary algorithm · SBSE

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • David Kelk
  • Kevin Jalbert
  • Jeremy S. Bradbury

  Software Quality Research Laboratory, University of Ontario Institute of Technology, Oshawa, Canada