Abstract
Building and evaluating prediction systems is an important activity for software engineering researchers. Increasing numbers of techniques and datasets are now being made available. Unfortunately, systematic comparison is hindered by the use of different accuracy indicators and evaluation processes. We argue that these indicators are statistics that describe properties of the estimation errors, or residuals, and that the sensible choice of indicator is largely governed by the goals of the estimator. For this reason it may be helpful for researchers to provide a range of indicators. We also argue that it is useful to formally test for significant differences between competing prediction systems, and note that where only a few cases are available this can be problematic; in other words, the research instrument may have insufficient power. We demonstrate that this is the case for a well known empirical study of cost models. Simulation, however, could be one means of overcoming this difficulty.
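As an informal illustration of the abstract's argument (not code from the paper): the common accuracy indicators are simple statistics over the residuals, and with only a handful of cases even an exact significance test cannot reach conventional thresholds. The sketch below, a hypothetical minimal example in Python, computes MMRE and Pred(25) from residuals and applies a two-sided exact sign test to compare two prediction systems pairwise.

```python
import math

def residuals(actual, predicted):
    """Estimation errors: actual minus predicted, case by case."""
    return [a - p for a, p in zip(actual, predicted)]

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error: mean of |actual - predicted| / actual."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def pred(actual, predicted, level=0.25):
    """Pred(25): proportion of estimates within 25% of the actual value."""
    hits = sum(1 for a, p in zip(actual, predicted) if abs(a - p) / a <= level)
    return hits / len(actual)

def sign_test_p(wins, n):
    """Two-sided exact sign test p-value under H0: P(win) = 0.5.

    'wins' = number of cases where system A had the smaller
    absolute residual than system B, out of n paired cases.
    """
    k_max = min(wins, n - wins)
    tail = sum(math.comb(n, k) for k in range(k_max + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

Note how the power problem shows up directly: with only four paired cases, even a clean sweep (`sign_test_p(0, 4)`) yields p = 0.125, so no result on so few projects can reach p < 0.05 regardless of how different the two systems really are.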
Cite this article
Shepperd, M., Cartwright, M. & Kadoda, G. On Building Prediction Systems for Software Engineers. Empirical Software Engineering 5, 175–182 (2000). https://doi.org/10.1023/A:1026582314146