Formal testing for separation assurance

  • Dimitra Giannakopoulou
  • David H. Bushnell
  • Johann Schumann
  • Heinz Erzberger
  • Karen Heere

Abstract

To address the rapidly increasing volume of air traffic operations, innovative algorithms and software systems must be developed for next-generation air traffic control. Extensive verification of such novel algorithms is key to their adoption by industry. Separation assurance algorithms aim to predict whether two aircraft will come closer to each other than a minimum safe distance; if a loss of separation is predicted, they also propose a change of course for the aircraft to resolve the potential conflict. In this paper, we report on our work towards an advanced testing framework for separation assurance. The framework supports automated test-case generation and testing, and defines test oracles that capture algorithm requirements. We discuss three approaches to test-case generation, their application to a separation assurance prototype, and their respective strengths and weaknesses. We also present an approach for statistical analysis of the large numbers of test results produced by the framework.
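The sketch below is a minimal, illustrative Java version of the core predicate the abstract describes: given two aircraft on straight, constant-velocity tracks, is their predicted distance at closest approach below a separation minimum within a lookahead window? It is not the authors' prototype; the class and method names, the units, the 5 NM threshold, and the flat 2D geometry are simplifying assumptions made purely for illustration.

```java
// A minimal sketch of loss-of-separation prediction for two aircraft flying
// straight-line, constant-velocity trajectories in the horizontal plane.
// Illustrative only: names, units, and the 5 NM threshold are assumptions,
// not the prototype algorithm evaluated in the paper.
public final class SeparationCheck {

    /** Horizontal separation minimum, in nautical miles (assumed value). */
    static final double MIN_SEPARATION_NM = 5.0;

    /** Simple 2D aircraft state: position (NM) and velocity (NM per minute). */
    record State(double x, double y, double vx, double vy) {}

    /**
     * Returns true if the two aircraft are predicted to come within
     * MIN_SEPARATION_NM of each other during the next lookaheadMinutes,
     * assuming both keep their current course and speed.
     */
    static boolean predictsLossOfSeparation(State a, State b, double lookaheadMinutes) {
        // Relative position and velocity of b with respect to a.
        double dx = b.x() - a.x(), dy = b.y() - a.y();
        double dvx = b.vx() - a.vx(), dvy = b.vy() - a.vy();

        // Unconstrained time of closest point of approach (minimizes |d + t*dv|^2),
        // then clamped to the prediction window [0, lookaheadMinutes].
        double relSpeedSq = dvx * dvx + dvy * dvy;
        double tCpa = relSpeedSq == 0.0 ? 0.0 : -(dx * dvx + dy * dvy) / relSpeedSq;
        tCpa = Math.max(0.0, Math.min(tCpa, lookaheadMinutes));

        // Predicted distance at the (clamped) closest point of approach.
        double cx = dx + tCpa * dvx, cy = dy + tCpa * dvy;
        return Math.hypot(cx, cy) < MIN_SEPARATION_NM;
    }

    public static void main(String[] args) {
        // Head-on encounter: 20 NM apart, closing at 6 NM/min.
        State a = new State(0, 0, 3, 0);
        State b = new State(20, 0, -3, 0);
        System.out.println(predictsLossOfSeparation(a, b, 10.0)); // prints true
    }
}
```

In a framework of the kind described, a test oracle could evaluate such a predicate, together with requirement-level properties on any proposed resolution maneuver, over large numbers of automatically generated encounter geometries.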

Keywords

Software testing · Verification · Model checking · Automatic test generation · Clustering · Separation assurance · Air-traffic control

Mathematics Subject Classification (2010)

68N30 


Copyright information

© Springer Science+Business Media B.V. 2011

Authors and Affiliations

  • Dimitra Giannakopoulou, NASA Ames / Carnegie Mellon University, Moffett Field, USA
  • David H. Bushnell, NASA Ames / TRACLabs, Moffett Field, USA
  • Johann Schumann, NASA Ames / SGT, Inc., Moffett Field, USA
  • Heinz Erzberger, NASA Ames / UC Santa Cruz, Moffett Field, USA
  • Karen Heere, University Affiliated Research Center, Moffett Field, USA
