
A practical strategy for the evaluation of software tools

  • Antony Powell
  • Andrew Vickers
  • Eddie Williams
  • Brian Cooke
Chapter
Part of the IFIP — The International Federation for Information Processing book series (IFIPAICT)

Abstract

This paper describes a working strategy for software tool evaluation that has resulted from work within Rolls-Royce plc, in response to the difficulty, and mixed successes, we have experienced in the selection of software tools. The lack of an acceptable methodology has meant that industrial evaluations are commonly time-consuming, fail to capture both tool and problem knowledge in a form suitable to aid future evaluations, and frequently give inconclusive results. Even where rigorous selection methods are used, we raise the concern that tool evaluators are failing to address perhaps the most important factors in determining final success, namely the non-technical or ‘soft’ factors.

In an attempt to overcome some of these problems, the proposed strategy provides a qualitative list of important issues distilled from many years’ experience of making tool selection decisions. This generic issue checklist is used to form domain-specific criteria against which tools can be compared in a more quantitative manner. The process ensures traceability between issues, tool requirements criteria and supporting evidence, in order to document decisions and provide assurance that all issues have been addressed. It also helps us to capture valuable corporate knowledge for future evaluations, so that we become more efficient at evaluating tools, provide more consistent criteria and limit the risk of expensive mistakes.
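To make the intended traceability concrete, the following sketch (a hypothetical Python illustration, not part of the published strategy) shows one way of recording the link from a generic issue to a domain-specific, weighted criterion and its supporting evidence, and of combining per-criterion scores into a single weighted figure for each candidate tool. The names, weights and scoring scale are assumptions made purely for illustration.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Criterion:
        # A domain-specific criterion derived from a generic issue (hypothetical).
        issue: str          # the generic issue this criterion traces back to
        description: str    # the domain-specific requirement
        weight: float       # relative importance, assumed scale 0..1

    @dataclass
    class Assessment:
        # Score and supporting evidence for one tool against one criterion.
        score: int                       # assumed scale: 0 (fails) .. 4 (fully satisfies)
        evidence: List[str] = field(default_factory=list)

    def weighted_score(criteria: List[Criterion],
                       assessments: Dict[str, Assessment]) -> float:
        # Combine per-criterion scores into one weighted figure for a tool.
        total_weight = sum(c.weight for c in criteria)
        return sum(c.weight * assessments[c.description].score
                   for c in criteria) / total_weight

    # Illustrative criteria traced back to two generic issues.
    criteria = [
        Criterion(issue="Vendor support", description="Local training available", weight=0.3),
        Criterion(issue="Integration", description="Imports existing design data", weight=0.7),
    ]

    # Illustrative assessment of one candidate tool, with the evidence recorded
    # alongside each score so the decision remains auditable later.
    tool_a = {
        "Local training available": Assessment(3, ["Vendor course catalogue"]),
        "Imports existing design data": Assessment(2, ["Pilot import of a sample project"]),
    }

    print(f"Weighted score for tool A: {weighted_score(criteria, tool_a):.2f}")  # 2.30

In practice the issue checklist, criteria and scoring scales would be those developed within the evaluating organisation; the point of the sketch is only that every score remains traceable both to an originating issue and to the evidence used to justify it.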

This industrial perspective on tool selection will be of interest to managers and evaluators in organisations that purchase software tools. To a lesser degree the issue guidelines also cover method evaluation and tool emplacement, although the need for further refinement and practical application is recognised. The strategy may also form the basis of a process for tool evaluations, as required by the higher levels of the SEI Capability Maturity Model (Humphrey, 1988; Humphrey, 1990). Finally, we hope that tool vendors will use it to provide better support for eliciting and meeting customer requirements during the evaluation process.

Keywords

Software tools, industrial practice, evaluation, experience

References

  1. Anderson, E. E. (1989) A Heuristic for Software Evaluation and Selection. Software: Practice and Experience, 19(8), 707–717.
  2. Cheng, D. Y., and Pane, D. M. “An Evaluation of Automatic and Interactive Parallel Programming Tools.” Supercomputing ’91, Albuquerque, USA, 412–423.
  3. DeSantis, J. (1994) Evaluating Multi-Platform Development Tools. Object Magazine, 4(4), 41–44.
  4. Dick, R., and Hunter, R. “Subjective Software Evaluation.” Software Quality Management II: Building Quality into Software, Edinburgh, UK, 321–334.
  5. Heller, R. S. (1991) Evaluating Software: A Review of the Options. Computers and Education, 17(4), 285–291.
  6. Humphrey, W. S. (1988) Characterizing the Software Process: A Maturity Framework. IEEE Software.
  7. Humphrey, W. S. (1990) Managing the Software Process, Addison-Wesley.
  8. Jeanrenaud, J., and Romanazzi, P. “Software Product Evaluation: A Methodological Approach.” Software Quality Management II: Building Quality into Software, Edinburgh, UK, 59–69.
  9. Kitchenham, B., Pickard, L., and Pfleeger, S. L. (1995) Case Studies for Method and Tool Evaluation. IEEE Software, 12(4), 55–62.
  10. Klopping, I. M., and Bolgiano, C. F. (1994) Effective Evaluation of Off-the-Shelf Microcomputer Software. Office Systems Research Journal, 9(1), 46–40.
  11. McDougall, A., and Squires, D. (1995) A Critical Examination of the Checklist Approach to Software Selection. Journal of Educational Computing Research, 12(3), 263–274.
  12. Miller, J. R., and Jeffries, R. (1992) Interface-Usability Evaluation: Science of Trade-Offs. IEEE Software, 9(5), 97–102.
  13. Misra, S. K. (1990) Analysing CASE System Characteristics: Evaluative Framework. Information and Software Technology, 32(6), 415–422.
  14. Mosley, V. (1992) How to Assess Tools Efficiently and Quantitatively. IEEE Software, 9(3), 29–32.
  15. Polvia, P. (1992) “A Comprehensive Model and Evaluation of the Software Engineering Environment.” Information Resources Management Association International Conference, Harrisburg, USA, 302–307.
  16. Rowley, J. E. (1993) Selection and Evaluation of Software. ASLIB Proceedings, 45(3), 77–81.
  17. Schamp, A. (1995) CM-Tool Evaluation and Selection. IEEE Software, 12(4), 114–118.
  18. Scheffler, F. L., and Marshall, R. R. “The Software Technology Support Centre: Help for Acquiring Software Tools.” National Aerospace and Electronics Conference, Dayton, OH, USA, 647–653.
  19. Shin, H., and Lee, J. (1996) A Process Model of Application Software Package Acquisition and Implementation. Journal of Systems and Software, 32, 57–64.
  20. Williams, F. (1992) Appraisal and Evaluation of Software Products. Journal of Information Science, Principles and Practice, 18(2), 121–125.

Copyright information

© Springer Science+Business Media Dordrecht 1996

Authors and Affiliations

  • Antony Powell (1)
  • Andrew Vickers (1)
  • Eddie Williams (2)
  • Brian Cooke (2)
  1. Department of Computer Science, University of York, York, UK
  2. Rolls-Royce plc, Derby, UK
