
Inference and Abstraction of the Biometric Passport

  • Fides Aarts
  • Julien Schmaltz
  • Frits Vaandrager
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6415)

Abstract

Model-based testing is a promising software testing technique that automates test generation and test execution. One obstacle to its adoption is the difficulty of developing models. Learning techniques provide tools to derive automata-based models automatically, but this automation comes at the cost of long learning times and models that are hard to read. We propose an abstraction technique that reduces the input alphabet and copes with large data domains. The idea is to extract a priori knowledge about the teacher, which can be obtained from informal documentation or requirements, and to use this knowledge to define equivalence classes; these classes in turn define a new, reduced alphabet. We formally prove the soundness of our approach and demonstrate its practical feasibility by learning a model of the new biometric passport. The automatically learned model is comparable in size and complexity to a previous model that was developed manually for testing a passport implementation; it can be learned within one hour and slightly refines the previous model.
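
To make the abstraction idea concrete, the sketch below (in Python, using hypothetical command names and a hypothetical stored key, not the mapper actually used in the paper) shows how a priori knowledge can define equivalence classes over concrete inputs, yielding a small abstract alphabet for the learner.

    # Illustrative sketch of an abstraction mapper; all names and values are
    # assumptions made for this example, not taken from the paper's implementation.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ConcreteInput:
        command: str      # e.g. "MUTUAL_AUTHENTICATE", "READ_BINARY"
        parameter: bytes  # raw payload (key material, file identifier, ...)

    # A priori knowledge: the key stored on the document under test
    # (hypothetical value, chosen only for this sketch).
    KNOWN_KEY = b"\x01\x02\x03\x04"

    def abstract(symbol: ConcreteInput) -> str:
        """Map a concrete input to its equivalence class (abstract symbol)."""
        if symbol.command == "MUTUAL_AUTHENTICATE":
            # Only "correct key" vs. "incorrect key" matters for the protocol
            # state, so all incorrect keys collapse into one abstract symbol.
            cls = "OK" if symbol.parameter == KNOWN_KEY else "NOK"
            return f"MUTUAL_AUTHENTICATE_{cls}"
        # Parameters that do not influence the control flow are dropped.
        return symbol.command

    if __name__ == "__main__":
        print(abstract(ConcreteInput("MUTUAL_AUTHENTICATE", KNOWN_KEY)))  # MUTUAL_AUTHENTICATE_OK
        print(abstract(ConcreteInput("MUTUAL_AUTHENTICATE", b"\xff")))    # MUTUAL_AUTHENTICATE_NOK
        print(abstract(ConcreteInput("READ_BINARY", b"\x01\x1e")))        # READ_BINARY

Because the learner then only ever observes these few abstract symbols, the number of membership queries no longer grows with the size of the concrete parameter domain.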

Keywords

Labelled Transition System · Input Symbol · Membership Query · Output Symbol · Equivalence Query



Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Fides Aarts (1)
  • Julien Schmaltz (1, 2)
  • Frits Vaandrager (1)
  1. Institute for Computing and Information Sciences, Radboud University Nijmegen, The Netherlands
  2. School of Computer Science, Open University of The Netherlands, The Netherlands
