FloPSy - Search-Based Floating Point Constraint Solving for Symbolic Execution

  • Kiran Lakhotia
  • Nikolai Tillmann
  • Mark Harman
  • Jonathan de Halleux
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6435)


Recently there has been an upsurge of interest in both Search-Based Software Testing (SBST) and Dynamic Symbolic Execution (DSE). The two approaches have complementary strengths and weaknesses, making it natural to explore the degree to which the strengths of one can offset the weaknesses of the other. This paper introduces an augmented version of DSE that uses an SBST-based approach to handle floating point computations, which are known to be problematic for vanilla DSE. The approach has been implemented as a plug-in for the Microsoft Pex DSE testing tool. The paper presents results from both standard evaluation benchmarks and two open source programs.
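The search techniques the paper builds on include Korel's Alternating Variable Method [9] and evolution strategies [14, 15]. As a rough illustration of why search helps here, the sketch below (in Python, not the paper's C#/.NET setting) uses a simple AVM-style pattern search to satisfy a floating point path condition by minimizing a branch-distance fitness. The example constraint, function names, and parameter values are illustrative assumptions, not taken from the paper or from FloPSy itself.

```python
def branch_distance(x):
    # Hypothetical path condition: x*x - 2.0*x + 1.0 == 0.
    # The branch distance is how far the predicate is from being true;
    # it reaches 0.0 exactly when the constraint is satisfied (x = 1.0).
    return abs(x * x - 2.0 * x + 1.0)

def avm(fitness, x0, eps=1e-12, max_iter=100000):
    """Alternating Variable Method sketch for a single float variable:
    probe each direction with a small step and double the step while
    the fitness keeps improving (a pattern move)."""
    x, best = x0, fitness(x0)
    iterations = 0
    while best > eps and iterations < max_iter:
        improved = False
        for direction in (-1.0, 1.0):
            step = 1e-7  # small initial exploratory step for doubles
            while True:
                iterations += 1
                candidate = x + direction * step
                f = fitness(candidate)
                if f < best:
                    x, best = candidate, f
                    step *= 2.0  # accelerate in the improving direction
                    improved = True
                else:
                    break  # this direction stopped paying off
        if not improved:
            break  # local optimum; a real tool would restart randomly
    return x, best
```

Starting from, say, `x0 = 3.0`, the search walks the input toward the satisfying value `x = 1.0`, something a constraint solver without floating point theory support may simply give up on.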


Keywords: Strategy Parameter · Benchmark Function · Symbolic Execution · Constraint Solver · Test Data Generation


References

  1. Alglib,
  2. Botella, B., Gotlieb, A., Michel, C.: Symbolic execution of floating-point computations. Softw. Test. Verif. Reliab. 16(2), 97–121 (2006)
  3. de Moura, L.M., Bjørner, N.: Z3: An efficient SMT solver. In: Ramakrishnan, C.R., Rehof, J. (eds.) TACAS 2008. LNCS, vol. 4963, pp. 337–340. Springer, Heidelberg (2008)
  4. Flanagan, C., Leino, K.R.M., Lillibridge, M., Nelson, G., Saxe, J.B., Stata, R.: Extended static checking for Java. In: Conference on Programming Language Design and Implementation, pp. 234–245 (2002)
  5. Godefroid, P., Klarlund, N., Sen, K.: DART: Directed automated random testing. In: Conference on Programming Language Design and Implementation, pp. 213–223. ACM, New York (2005)
  6. Harman, M., Lakhotia, K., McMinn, P.: A multi-objective approach to search-based test data generation. In: GECCO 2007, pp. 1098–1105 (2007)
  7. Harman, M., McMinn, P.: A theoretical and empirical study of search based testing: Local, global and hybrid search. IEEE Transactions on Software Engineering 36(2) (to appear, 2010)
  8. Inkumsah, K., Xie, T.: Evacon: A framework for integrating evolutionary and concolic testing for object-oriented programs. In: ASE 2007, pp. 425–428 (2007)
  9. Korel, B.: Automated software test data generation. IEEE Transactions on Software Engineering 16(8), 870–879 (1990)
  10. Majumdar, R., Sen, K.: Hybrid concolic testing. In: ICSE 2007, pp. 416–426. IEEE Computer Society, Los Alamitos (2007)
  11. Miller, W., Spooner, D.L.: Automatic generation of floating-point test data. IEEE Transactions on Software Engineering 2(3), 223–226 (1976)
  12. Moré, J.J., Garbow, B.S., Hillstrom, K.E.: Testing unconstrained optimization software. ACM Trans. Math. Software 7(1), 17–41 (1981)
  13. QLNet,
  14. Rechenberg, I.: Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Frommann-Holzboog, Stuttgart (1973)
  15. Schwefel, H.-P.: Numerical Optimization of Computer Models. John Wiley & Sons Ltd., Chichester (1981)
  16. Sen, K., Agha, G.: CUTE and jCUTE: Concolic unit testing and explicit path model-checking tools. In: Ball, T., Jones, R.B. (eds.) CAV 2006. LNCS, vol. 4144, pp. 419–423. Springer, Heidelberg (2006)
  17. Tillmann, N., de Halleux, J.: Pex: White box test generation for .NET. In: Beckert, B., Hähnle, R. (eds.) TAP 2008. LNCS, vol. 4966, pp. 134–153. Springer, Heidelberg (2008)
  18. Tonella, P.: Evolutionary testing of classes. In: ISSTA 2004, pp. 119–128 (2004)
  19. Wegener, J., Baresel, A., Sthamer, H.: Evolutionary test environment for automatic structural testing. Information and Software Technology 43(14), 841–854 (2001)
  20. Wheeler, D.A.: More than a gigabuck: Estimating GNU/Linux's size (2001),
  21. Xie, T., Tillmann, N., de Halleux, P., Schulte, W.: Fitness-guided path exploration in dynamic symbolic execution. In: International Conference on Dependable Systems and Networks, DSN 2009 (2009)

Copyright information

© IFIP International Federation for Information Processing 2010

Authors and Affiliations

  • Kiran Lakhotia (1)
  • Nikolai Tillmann (2)
  • Mark Harman (1)
  • Jonathan de Halleux (2)
  1. CREST Centre, Department of Computer Science, University College London, London, UK
  2. Microsoft Research, One Microsoft Way, Redmond, USA
