Technical Aspects: Muggl

  • Tim A. Majchrzak
Part of the SpringerBriefs in Information Systems book series (BRIEFSINFORMAT)


This chapter introduces the research on Muggl. It constitutes the technical part of software testing covered in this book.





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  1. Institut für Wirtschaftsinformatik, Westfälische Wilhelms-Universität, Münster, Germany
