Quality of manual data collection in Java software: an empirical investigation

Published in Empirical Software Engineering

Abstract

Data collection, both automatic and manual, lies at the heart of all empirical studies. The quality of data collected from software informs decisions on maintenance, testing and wider issues such as the need for system re-engineering. Of the two types, automatic data collection is preferable, yet there are numerous occasions when manual data collection is unavoidable; very little evidence exists, however, to assess the error-proneness of the latter. Herein, we investigate the extent to which manual data collection for Java software differs from its automatic counterpart for the same data. We investigate three hypotheses relating to the difference between automated and manual data collection, using five Java systems to support our investigation. Results showed that, as expected, manual data collection was error-prone, but nowhere near the extent we had initially envisaged. Key indicators of mistakes in manual data collection were poor developer coding style, poor adherence to sound OO coding principles, and the existence of relatively large classes in some systems. Some interesting results were also found relating to the collection of public class features and the types of error made during manual data collection. The study thus offers an insight into some of the typical problems associated with collecting data manually; more significantly, it highlights the detrimental effect that poorly written systems have on the quality of visually extracted data.
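By way of background, "automatic data collection" here means programmatically extracting class features from source code or compiled classes rather than reading them off by eye. The abstract does not describe the authors' actual tool; the following is a minimal illustrative sketch, in Java, of how the public features of a class might be counted via reflection. The class name FeatureCounter and the choice of java.util.ArrayList as a target are hypothetical.

import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

// Illustrative sketch only: counts the public features (methods and
// attributes) of a class via reflection -- the kind of counts an
// automatic collection tool might produce. Not the authors' tool.
public class FeatureCounter {

    // Number of declared methods that are public.
    static int publicMethods(Class<?> cls) {
        int count = 0;
        for (Method m : cls.getDeclaredMethods()) {
            if (Modifier.isPublic(m.getModifiers())) {
                count++;
            }
        }
        return count;
    }

    // Number of declared fields (attributes) that are public.
    static int publicFields(Class<?> cls) {
        int count = 0;
        for (Field f : cls.getDeclaredFields()) {
            if (Modifier.isPublic(f.getModifiers())) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        Class<?> target = java.util.ArrayList.class; // hypothetical example target
        System.out.println(target.getName()
                + ": public methods = " + publicMethods(target)
                + ", public fields = " + publicFields(target));
    }
}

Run against every class in a system, counts like these would form the automatically collected baseline against which manually recorded counts can be compared.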



Author information

Corresponding author

Correspondence to Steve Counsell.

Additional information

Editor: William Agresti

About this article

Cite this article

Counsell, S., Loizou, G. & Najjar, R. Quality of manual data collection in Java software: an empirical investigation. Empir Software Eng 12, 275–293 (2007). https://doi.org/10.1007/s10664-006-9028-y

