Distributed Computing

Volume 25, Issue 1, pp 35–62

Implementation relations and test generation for systems with distributed interfaces

  • Robert M. Hierons
  • Mercedes G. Merayo
  • Manuel Núñez


Some systems interact with their environment at physically distributed interfaces, called ports, and we separately observe a sequence of inputs and outputs at each port. As a result, we cannot reconstruct the global sequence that occurred, and this reduces our ability to distinguish different systems in testing or in use. In this paper we explore notions of conformance for an input output transition system that has multiple ports, adapting the widely used ioco implementation relation to this situation. We consider two different scenarios. In the first scenario, the agents at the different ports are entirely independent. In the second scenario, some external agent may be able to receive information from more than one of the agents at the ports of the system; since these local behaviours can potentially be brought together, a stronger implementation relation is required. We define implementation relations for these scenarios and prove that, in the case of a single-port system, the new implementation relations are equivalent to ioco. In addition, we define what it means for a test case to be controllable and give an algorithm that decides whether this condition holds. We give a test generation algorithm that produces sound and complete test suites. Finally, we study two implementation relations that deal with partially specified systems.
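The core observation above (that distributed observers cannot reconstruct the global order of events) can be illustrated with a toy sketch, not taken from the paper: model a global trace as a list of (port, event) pairs, and note that two different global traces can have identical local projections at every port.

```python
# Toy illustration (hypothetical names, not from the paper): with distributed
# ports, each local observer sees only the projection of the global trace onto
# its own port, so distinct global traces can be indistinguishable.

def project(trace, port):
    """Local observation at `port`: the subsequence of events at that port."""
    return [ev for (p, ev) in trace if p == port]

# Two global traces over ports 1 and 2 that interleave events differently
# (using the convention that ?x is an input and !y, !z are outputs).
t1 = [(1, "?x"), (2, "!y"), (1, "!z")]
t2 = [(2, "!y"), (1, "?x"), (1, "!z")]

# Their local projections agree at every port, so an observer placed at each
# port, with no shared clock, cannot tell t1 and t2 apart.
assert project(t1, 1) == project(t2, 1) == ["?x", "!z"]
assert project(t1, 2) == project(t2, 2) == ["!y"]
```

This indistinguishability is what motivates weakening ioco to compare only per-port local behaviours in the first scenario, and strengthening it again when local observations can later be brought together.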


Keywords: Formal approaches to testing · Systems with distributed ports · Formal methodologies to develop distributed software systems





Copyright information

© Springer-Verlag 2011

Authors and Affiliations

  • Robert M. Hierons (1)
  • Mercedes G. Merayo (2)
  • Manuel Núñez (2)

  1. Department of Information Systems and Computing, Brunel University, Uxbridge, Middlesex, UK
  2. Departamento de Sistemas Informáticos y Computación, Facultad de Informática, Universidad Complutense de Madrid, Madrid, Spain
