Abstract
We provide a systematic approach for testing self-organization (SO) algorithms. The main challenges in this testing domain are the strongly ramified state space, possible error masking, the interleaving of mechanisms, and the oracle problem, all of which result from the main characteristics of SO algorithms: their inherently non-deterministic behavior on the one hand, and their dynamic environment on the other. A key to success for our SO algorithm testing framework is automation, since it is rarely possible to cope with the ramified state space manually. The test automation is based on a model-based testing approach in which probabilistic environment profiles are used to derive test cases that are executed and evaluated on isolated SO algorithms. Besides isolation, we achieve test results that are representative with respect to a specific application. For illustration purposes, we apply the concepts of our framework to partitioning-based SO algorithms and provide an evaluation in the context of an existing smart-grid application.
Notes
- 1. The classification into known-knowns, known-unknowns, and unknown-unknowns is borrowed from United States Secretary of Defense Donald Rumsfeld’s response to a question at a U.S. Department of Defense news briefing on February 12, 2002.
- 2. We use the term “prosumer” to refer to producers as well as consumers.
- 3. Due to the fully automated evaluation of test cases by the oracle component of IsoTeSO, test case generation reduces to test input generation, as no expected output is needed. This concept builds on Artho et al. [6], who also combine run-time verification and test input generation to create test cases. In the remainder of this paper we use “test case generation” in the sense of test input generation.
- 4. Note that in this case the environment also covers the other SO algorithms of the system.
- 5. This technique of state reduction follows the state abstraction principles that are well known in classical testing [33].
- 6. Note that not every test case execution leads directly to constraint violations and thus to an activation of the SOuT. To form a realistic system structure within the test system, it is necessary to allow the system to take transitions that do not violate the CCB.
- 7. According to Grottke and Trivedi [21], a Mandelbug is “[a] fault whose activation and/or error propagation are complex, where ‘complexity’ can take [the following form]: [...] The activation and/or error propagation depend on interactions between conditions occurring inside the application and conditions that accrue within the system-internal environment of the application [...]”.
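The idea in Note 3 — deriving only test *inputs* from a probabilistic environment profile and letting a run-time oracle judge each execution against an invariant, so no expected outputs are needed — can be illustrated with a minimal sketch. All names below (`generate_inputs`, `oracle`, the event profile, and the stand-in algorithm) are hypothetical and not taken from IsoTeSO.

```python
import random

def generate_inputs(profile, length, rng):
    """Draw an input sequence from a probabilistic environment profile.

    The profile maps environment events to occurrence probabilities;
    test case generation reduces to sampling such sequences.
    """
    events, weights = zip(*profile.items())
    return [rng.choices(events, weights=weights)[0] for _ in range(length)]

def run_algorithm(inputs):
    """Stand-in for the isolated SO algorithm under test.

    Here it merely tracks a partition size as agents join and leave,
    recording the state after every event.
    """
    size = 0
    trace = []
    for event in inputs:
        size += 1 if event == "join" else -1 if event == "leave" else 0
        trace.append(size)
    return trace

def oracle(trace, invariant):
    """Run-time verification oracle: pass iff every observed state
    satisfies the invariant — no precomputed expected output needed."""
    return all(invariant(state) for state in trace)

rng = random.Random(42)
profile = {"join": 0.5, "leave": 0.3, "idle": 0.2}  # hypothetical profile
inputs = generate_inputs(profile, 20, rng)
verdict = oracle(run_algorithm(inputs), invariant=lambda size: size >= -20)
print(verdict)
```

The sampled input sequence plus the invariant together act as the test case; because the oracle evaluates the invariant during execution, the generator never has to predict the non-deterministic output of the algorithm.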
References
Anders, G., Siefert, F., Mair, M., Reif, W.: Proactive guidance for dynamic and cooperative resource allocation under uncertainties. In: Proceedings of the 8th IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO). IEEE Computer Society (2014)
Anders, G., Siefert, F., Msadek, N., Kiefhaber, R., Kosak, O., Reif, W., Ungerer, T.: TEMAS - a trust-enabling multi-agent system for open environments. Technical report 2013–04, Universität Augsburg (2013). http://opus.bibliothek.uni-augsburg.de/opus4/frontdoor/index/index/docId/2311
Anders, G., Siefert, F., Reif, W.: A particle swarm optimizer for solving the set partitioning problem in the presence of partitioning constraints. In: Proceedings of the 7th International Conference on Agents and Artificial Intelligence (ICAART). SciTePress (2015)
Anders, G., Siefert, F., Steghöfer, J.-P., Reif, W.: A decentralized multi-agent algorithm for the set partitioning problem. In: Rahwan, I., Wobcke, W., Sen, S., Sugawara, T. (eds.) PRIMA 2012. LNCS (LNAI), vol. 7455, pp. 107–121. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-32729-2_8
Anders, G., Steghöfer, J.P., Klejnowski, L., Wissner, M., Hammer, S., Siefert, F., Seebach, H., Bernard, Y., Reif, W., Müller-Schloer, C., André, E.: Reference architectures for trustworthy energy management, desktop grid computing applications, and ubiquitous display environments. Technical report 2013–05, Universität Augsburg (2013). http://opus.bibliothek.uni-augsburg.de/opus4/frontdoor/index/index/docId/2303
Artho, C., Barringer, H., Goldberg, A., Havelund, K., Khurshid, S., Lowry, M., Pasareanu, C., Roşu, G., Sen, K., Visser, W., et al.: Combining test case generation and runtime verification. Theoret. Comput. Sci. 336(2), 209–234 (2005)
Balas, E., Padberg, M.W.: Set partitioning: a survey. SIAM Rev. 18(4), 710–760 (1976)
Bauer, T., Eschbach, R.: Enabling statistical testing for component-based systems. In: Fähnrich, K.P., Franczyk, B. (eds.) GI Jahrestagung. LNI, vol. 176, pp. 357–362. GI (2010)
Binder, R.V.: Testing Object-Oriented Systems: Models, Patterns, and Tools. Addison Wesley, Boston (1999)
Cámara, J., de Lemos, R.: Evaluation of resilience in self-adaptive systems using probabilistic model-checking. In: Proceedings of the 7th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), pp. 53–62 (2012)
Cheng, B.H.C., et al.: Software engineering for self-adaptive systems: a research roadmap. In: Cheng, B.H.C., de Lemos, R., Giese, H., Inverardi, P., Magee, J. (eds.) Software Engineering for Self-Adaptive Systems. LNCS, vol. 5525, pp. 1–26. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-02161-9_1
Eberhardinger, B., Anders, G., Seebach, H., Siefert, F., Reif, W.: A framework for testing self-organisation algorithms. In: 37 Treffen der GI Fachgruppe TAV, vol. 35:1. Softwaretechnik-Trends der Gesellschaft für Informatik (2015)
Eberhardinger, B., Anders, G., Seebach, H., Siefert, F., Reif, W.: A research overview and evaluation of performance metrics for self-organization algorithms. In: Proceedings of the 9th IEEE International Conference on Self-Adaptive and Self-Organizing Systems Workshop (SASOW), pp. 122–127. IEEE Computer Society (2015)
Eberhardinger, B., Seebach, H., Knapp, A., Reif, W.: Towards testing self-organizing, adaptive systems. In: Merayo, M.G., de Oca, E.M. (eds.) ICTSS 2014. LNCS, vol. 8763, pp. 180–185. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-44857-1_13
Eberhardinger, B., Steghöfer, J.P., Nafz, F., Reif, W.: Model-driven synthesis of monitoring infrastructure for reliable adaptive multi-agent systems. In: Proceedings of the 24th IEEE International Symposium on Software Reliability Engineering (ISSRE), pp. 21–30. IEEE Computer Society (2013)
Ehlers, J., van Hoorn, A., Waller, J., Hasselbring, W.: Self-adaptive software system monitoring for performance anomaly localization. In: Proceedings of the 8th ACM International Conference on Autonomic Computing (ICAC), pp. 197–200. ACM (2011)
Falcone, Y., Jaber, M., Nguyen, T.-H., Bozga, M., Bensalem, S.: Runtime verification of component-based systems. In: Barthe, G., Pardo, A., Schneider, G. (eds.) SEFM 2011. LNCS, vol. 7041, pp. 204–220. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-24690-6_15
Filieri, A., Ghezzi, C., Tamburrelli, G.: A formal approach to adaptive software: continuous assurance of non-functional requirements. Formal Asp. Comp. 24(2), 163–186 (2012)
Fredericks, E.M., DeVries, B., Cheng, B.H.C.: Towards run-time adaptation of test cases for self-adaptive systems in the face of uncertainty. In: Proceedings of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), pp. 17–26. ACM (2014)
Fredericks, E.M., Ramirez, A.J., Cheng, B.H.C.: Towards run-time testing of dynamic adaptive systems. In: Proceedings of the 8th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), pp. 169–174. IEEE (2013)
Grottke, M., Trivedi, K.S.: A classification of software faults. J. Reliab. Eng. Assoc. Jpn. 27(7), 425–438 (2005)
Güdemann, M., Nafz, F., Ortmeier, F., Seebach, H., Reif, W.: A specification and construction paradigm for organic computing systems. In: Brueckner, S.A., Robertson, P., Bellur, U. (eds.) Proceedings of the 2nd IEEE International Conference on Self-Adaptive and Self-Organizing Systems, pp. 233–242. IEEE Computer Society (2008)
Hierons, R.M.: Oracle for distributed testing. IEEE Trans. Softw. Eng. 38(3), 629–641 (2012)
Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948 (1995)
Kephart, J.O., Chess, D.M.: The vision of autonomic computing. Computer 36(1), 41–50 (2003)
de Lemos, R., et al.: Software engineering for self-adaptive systems: a second research roadmap. In: de Lemos, R., Giese, H., Müller, H.A., Shaw, M. (eds.) Software Engineering for Self-Adaptive Systems II. LNCS, vol. 7475, pp. 1–32. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-35813-5_1
Leucker, M., Schallhart, C.: A brief account of runtime verification. J. Logic Algebraic Program. 78(5), 293–303 (2009). Proceedings of the 1st Workshop on Formal Languages and Analysis of Contract-Oriented Software (FLACOS)
Luckey, M., Thanos, C., Gerth, C., Engels, G.: Multi-staged quality assurance for self-adaptive systems. In: Proceedings of the 6th International Conference on Self-Adaptive and Self-Organizing Systems Workshop (SASOW), pp. 111–118 (2012)
Musuvathi, M., Qadeer, S., Ball, T., Basler, G., Nainar, P.A., Neamtiu, I.: Finding and reproducing Heisenbugs in concurrent programs. In: Proceedings of the 8th USENIX Conference on Operating Systems Design and Implementation (OSDI), pp. 267–280. USENIX Association (2008)
Nguyen, C.D.: Testing techniques for software agents. Ph.D. thesis, Università di Trento (2009)
Nguyen, C.D., Marchetto, A., Tonella, P.: Automated oracles: an empirical study on cost and effectiveness. In: Meyer, B., Baresi, L., Mezini, M. (eds.) Proceedings of the Joint Meeting of the European Software Engineering Conference and ACM SIGSOFT Symposium on Foundations of Software Engineering (ESEC/FSE), pp. 136–146. ACM (2013)
Padgham, L., Thangarajah, J., Zhang, Z., Miller, T.: Model-based test oracle generation for automated unit testing of agent systems. IEEE Trans. Softw. Eng. 39(9), 1230–1244 (2013)
Pezzé, M., Young, M.: Software Testing and Analysis: Process, Principles and Techniques. Wiley, New York (2005)
Popovic, M., Kovacevic, J.: A statistical approach to model-based robustness testing. In: Proceedings of the 14th IEEE Conference and Workshops on Engineering of Computer-Based Systems (ECBS), pp. 485–494 (2007)
Püschel, G., Götz, S., Wilke, C., Aßmann, U.: Towards systematic model-based testing of self-adaptive software. In: Proceedings of the 5th International Conference on Adaptive and Self-Adaptive Systems and Applications (ADAPTIVE), pp. 65–70 (2013)
Püschel, G., Götz, S., Wilke, C., Piechnick, C., Aßmann, U.: Testing self-adaptive software: requirement analysis and solution scheme. Int. J. Adv. Softw. 7(1 & 2), 88–100 (2014)
Ramchurn, S.D., Vytelingum, P., Rogers, A., Jennings, N.R.: Putting the “smarts” into the smart grid: a grand challenge for artificial intelligence. Commun. ACM 55(4), 86–97 (2012)
Ramirez, A.J., Jensen, A.C., Cheng, B.H.C., Knoester, D.B.: Automatically exploring how uncertainty impacts behavior of dynamically adaptive systems. In: Alexander, P., et al. (eds.) Proceedings of the 26th IEEE/ACM International Conference on Automated Software Engineering (ASE), pp. 568–571. IEEE (2011)
Samih, H., Le Guen, H., Bogusch, R., Acher, M., Baudry, B.: An approach to derive usage models variants for model-based testing. In: Merayo, M.G., de Oca, E.M. (eds.) ICTSS 2014. LNCS, vol. 8763, pp. 80–96. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-44857-1_6
Sammodi, O., Metzger, A., Franch, X., Oriol, M., Marco, J., Pohl, K.: Usage-based online testing for proactive adaptation of service-based applications. In: Proceedings of the 35th IEEE Computer Software and Applications Conference (COMPSAC), pp. 582–587 (2011)
Schmeck, H., Müller-Schloer, C., Çakar, E., Mnif, M., Richter, U.: Adaptivity and self-organization in organic computing systems. ACM Trans. Auton. Adapt. Syst. 5(3), 10 (2010)
Scott, P., Thiébaux, S., van den Briel, M., Van Hentenryck, P.: Residential demand response under uncertainty. In: Schulte, C. (ed.) CP 2013. LNCS, vol. 8124, pp. 645–660. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40627-0_48
Smidts, C., Mutha, C., Rodríguez, M., Gerber, M.J.: Software testing with an operational profile: OP definition. ACM Comput. Surv. 46(3), 39:1–39:39 (2014)
Steghöfer, J.P., Anders, G., Siefert, F., Reif, W.: A system of systems approach to the evolutionary transformation of power management systems. In: Proceedings of Informatik 2013 - Workshop on “Smart Grids”. Lecture Notes in Informatics. Bonner Köllen Verlag (2013)
Stott, D.T., Floering, B., Burke, D., Kalbarczyk, Z., Iyer, R.K.: NFTAPE: a framework for assessing dependability in distributed systems with lightweight fault injectors. In: Proceedings of the IEEE International Computer Performance and Dependability Symposium (IPDS), pp. 91–100. IEEE (2000)
Thillen, F., Mordinyi, R., Biffl, S.: Isolated testing of software components in distributed software systems. In: Winkler, D., Biffl, S., Bergsmann, J. (eds.) SWQD 2014. LNBIP, vol. 166, pp. 170–184. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-03602-1_11
Thomson, P., Donaldson, A.F., Betts, A.: Concurrency testing using schedule bounding: an empirical study. SIGPLAN Not. 49(8), 15–28 (2014)
Whittle, J., Sawyer, P., Bencomo, N., Cheng, B.H.C., Bruel, J.: RELAX: incorporating uncertainty into the specification of self-adaptive systems. In: Proceedings of the 17th IEEE International Requirements Engineering Conference (RE), pp. 79–88. IEEE Computer Society (2009)
Wotawa, F.: Adaptive autonomous systems – from the system’s architecture to testing. In: Hähnle, R., Knoop, J., Margaria, T., Schreiner, D., Steffen, B. (eds.) ISoLA 2011. CCIS, pp. 76–90. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-34781-8_6
Wu, J., Yang, L., Luo, X.: Jata: a language for distributed component testing. In: 15th Asia-Pacific Software Engineering Conference (APSEC), pp. 145–152 (2008)
Yao, Y., Wang, Y.: A framework for testing distributed software components. In: Proceedings of the IEEE Conference Electrical and Computer Engineering, pp. 1566–1569. IEEE (2005)
Zhang, Z., Thangarajah, J., Padgham, L.: Model based testing for agent systems. In: Decker, et al. (eds.) Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 1333–1334. IFAAMAS (2009)
Acknowledgment
This research is sponsored by the research project Testing Self-Organizing, adaptive Systems (TeSOS) of the German Research Foundation.
Copyright information
© 2017 Springer International Publishing AG
Cite this paper
Eberhardinger, B., Anders, G., Seebach, H., Siefert, F., Knapp, A., Reif, W. (2017). An Approach for Isolated Testing of Self-Organization Algorithms. In: de Lemos, R., Garlan, D., Ghezzi, C., Giese, H. (eds) Software Engineering for Self-Adaptive Systems III. Assurances. Lecture Notes in Computer Science(), vol 9640. Springer, Cham. https://doi.org/10.1007/978-3-319-74183-3_7
Print ISBN: 978-3-319-74182-6
Online ISBN: 978-3-319-74183-3