Testing Robots Using CSP

  • Ana Cavalcanti
  • James Baxter
  • Robert M. Hierons
  • Raluca Lefticaru
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11823)

Abstract

This paper presents a technique for automatic generation of tests for robotic systems based on a domain-specific notation called RoboChart. This is a UML-like diagrammatic notation that embeds a component model suitable for robotic systems, and supports the definition of behavioural models using enriched state machines that can feature time properties. The formal semantics of RoboChart is given using tock-CSP, a discrete-time variant of the process algebra CSP. In this paper, we use the example of a simple drone to illustrate an approach to generate tests from RoboChart models using a mutation tool called Wodel. From mutated models, tests are generated using the CSP model checker FDR. The testing theory of CSP justifies the soundness of the tests.
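The approach summarised above can be illustrated with a small, hypothetical CSPm fragment. The processes, channels, and the specific mutation below are invented for illustration and are not taken from the paper's drone model; what they show is the general shape of mutation-based test generation with FDR: a mutated model that is not a traces refinement of the specification yields a counterexample trace, which becomes a test.

```csp
-- Hypothetical tock-CSP sketch: after an obstacle is detected, the
-- robot must turn before a third tock (tock marks the passage of time).
channel obstacle, turn, tock

-- Specification: turn must occur within 2 time units of obstacle.
Spec = obstacle -> Respond(2) [] tock -> Spec
Respond(0) = turn -> Spec
Respond(n) = turn -> Spec [] tock -> Respond(n-1)

-- Mutant: the deadline is weakened from 2 to 3, the kind of change a
-- model-mutation tool such as Wodel could introduce.
Mutant = obstacle -> MRespond(3) [] tock -> Mutant
MRespond(0) = turn -> Mutant
MRespond(n) = turn -> Mutant [] tock -> MRespond(n-1)

-- FDR refinement check: this assertion fails, and FDR reports a
-- counterexample such as <obstacle, tock, tock, tock>, a trace of the
-- mutant that the specification forbids; it is the basis of a test
-- that detects implementations exhibiting the mutated behaviour.
assert Spec [T= Mutant
```

If the mutation happens to produce a process that still refines the specification, the assertion passes and no test is generated from that mutant.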

Acknowledgements

This work is funded by the EPSRC grants EP/M025756/1 and EP/R025479/1, and by the Royal Academy of Engineering. We have benefited from discussions with Pablo Gómez-Abajo and Mercedes Merayo with regards to Wodel implementation, and Sharar Ahmadi, Alvaro Miyazawa, and Augusto Sampaio with regards to our example and its simulation.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Ana Cavalcanti (1)
  • James Baxter (1)
  • Robert M. Hierons (2)
  • Raluca Lefticaru (2)

  1. Department of Computer Science, University of York, York, UK
  2. Department of Computer Science, University of Sheffield, Sheffield, UK