
Abstract

Autonomic computing promises improvements in systems' quality of service in terms of availability, reliability, performance, security, etc. However, little research or experimental evidence has so far substantiated this claim, or demonstrated the return on investment of the effort required to introduce autonomic features. Existing work on benchmarking autonomic systems is qualitative and fragmented. There remains a crucial need for generic (i.e., independent of business, technology, architecture, and implementation choices) autonomic computing benchmarking tools for evaluating and comparing autonomic systems from a technical and, ultimately, an economic point of view. This article introduces a methodology and a process for defining and evaluating factors, criteria, and metrics in order to qualitatively and quantitatively assess autonomic features in computing systems. It also discusses associated experimental results on three different autonomic systems.

Keywords

Autonomic computing, benchmark, metrics, criteria, evaluation, comparison, return on investment (ROI)



Copyright information

© ICST Institute for Computer Science, Social Informatics and Telecommunications Engineering 2010

Authors and Affiliations

  • Xavier Etchevers¹
  • Thierry Coupaye¹
  • Guy Vachet¹

  1. France Télécom Group, Orange Labs, Meylan, France
