Quality of manual data collection in Java software: an empirical investigation
Data collection, both automatic and manual, lies at the heart of all empirical studies. The quality of data collected from software informs decisions on maintenance, testing and wider issues such as the need for system re-engineering. Of the two, automatic data collection is preferable, yet there are numerous occasions when manual data collection is unavoidable; very little evidence exists, however, to assess the error-proneness of the latter. Herein, we investigate the extent to which manual data collection for Java software differs from its automatic counterpart for the same data. We examine three hypotheses relating to the difference between automated and manual data collection, using five Java systems to support our investigation. Results showed that, as expected, manual data collection was error-prone, but nowhere near the extent we had initially envisaged. The key indicators of mistakes in manual data collection were poor developer coding style, poor adherence to sound OO coding principles, and the existence of relatively large classes in some systems. Some interesting results were also found relating to the collection of public class features and to the types of error made during manual data collection. The study thus offers an insight into some of the typical problems associated with collecting data manually; more significantly, it highlights the effect that poorly written systems have on the quality of visually extracted data.
Keywords: Data collection · Java · Software metrics · Empirical investigation
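The abstract contrasts manual counting of public class features with automatic collection of the same data, but does not name the tool used for the automatic side. As a minimal sketch only, the following Java snippet shows one way such counts could be gathered automatically, here via reflection on loaded classes; the class `PublicFeatureCounter` and the choice of `java.util.ArrayList` as a target are illustrative assumptions, not the paper's instrument, and a real study would more likely use static analysis of source code.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

// Illustrative sketch: counts the public methods and public fields declared by a
// class, i.e. the kind of "public class feature" counts compared between manual
// and automatic collection in the study.
public class PublicFeatureCounter {

    public static int countPublicMethods(Class<?> cls) {
        int count = 0;
        for (Method m : cls.getDeclaredMethods()) {
            if (Modifier.isPublic(m.getModifiers())) {
                count++;
            }
        }
        return count;
    }

    public static int countPublicFields(Class<?> cls) {
        int count = 0;
        for (Field f : cls.getDeclaredFields()) {
            if (Modifier.isPublic(f.getModifiers())) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) throws Exception {
        // Any class on the classpath can be inspected; java.util.ArrayList is
        // used here purely as an example target.
        Class<?> target = Class.forName("java.util.ArrayList");
        System.out.println(target.getName()
                + ": public methods = " + countPublicMethods(target)
                + ", public fields = " + countPublicFields(target));
    }
}
```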