
In this issue

Rachel Harrison

In this issue, we have twelve regular research papers. The first six are connected by the common theme of testing, the next three relate to metrics and benchmarks, and the final three are concerned with process and projects.

Testing is a crucially important part of the software life cycle and so it should come as no surprise that half of the papers in this issue are related to software testing. In “A Study Examining Relationships Between Micro Patterns and Security Vulnerabilities,” Kazi Zakia Sultana, Byron J. Williams, and Tanmay Bhowmik investigate the correlation between vulnerabilities and code micro patterns. By analyzing Apache Tomcat and three Java web applications, the authors found that certain micro patterns are frequently present in vulnerable classes. This research will help developers and testers to detect code vulnerabilities.

The paper “A vector table model-based systematic analysis of spectral fault localization techniques” by Chunyan Ma, Chenyang Nie, Weicheng Chao, and Bowei Zhang presents a method to evaluate and compare the reliability and effectiveness of spectral fault localization techniques, i.e., techniques that work with data collected at run-time. Because a large number of spectral fault localization techniques exist, this method will be of use to developers who need to choose the one best suited to their system testing.

In “Code Coverage Differences of Java Bytecode and Source Code Instrumentation Tools,” Ferenc Horváth, Tamás Gergely, Árpád Beszédes, Dávid Tengeri, Gergő Balogh, and Tibor Gyimóthy report on an empirical study comparing the code coverage results produced by several bytecode instrumentation tools for Java. The impacts on test prioritization and test suite reduction are also investigated. The results show significant differences between bytecode-based and source code-based coverage measurements. The authors suggest that source code–based instrumentation is the correct approach to code coverage measurement.

The understandability of documentation has a considerable impact on test development. The paper “Comprehensibility of System Models during Test Design: a Controlled Experiment Comparing UML Activity Diagrams and State Machines” by Michael Felderer and Andrea Herrmann compares the comprehensibility of UML activity diagrams and state machines during test case derivation. The authors performed experiments with 84 student participants at two institutions. The results show that activity diagrams are more comprehensible but also more error-prone with regard to test case development.

In “Automated Functional Testing of Mobile Applications: a Systematic Mapping Study,” Porfirio Tramontana, Domenico Amalfitano, Nicola Amatucci, and Anna Rita Fasolino report the results of a systematic mapping study on the automation of functional testing of mobile applications, including research trends and gaps in the field. The authors note a lack of contributions from industry, and the absence of specific venues and journals focused on mobile testing automation.

The final paper related to testing in this issue is “Usability Improvement through A/B Testing and Refactoring” by Sergio Firmenich, Alejandra Garrido, Julián Grigera, José Matias Rivero, and Gustavo Rossi. In it, the authors propose a method to help usability experts design user tests, run them, analyze the results, and assess alternative solutions to usability problems. Their experiments demonstrate both the feasibility of the method on intermediate minimum testable products and the usefulness of the supporting tools.

The discipline of software architecture makes quite infrequent use of software metrics. In “Exploring the Suitability of Source Code Metrics for Indicating Architectural Inconsistencies,” Jörg Lenhard, Martin Blom, and Sebastian Herold investigate the extent to which source code metrics can be used to characterize classes contributing to the degradation of software architecture. The authors performed a case study on three open-source systems, collecting and analyzing data for 49 different source code metrics. They conclude that class size seems to have a confounding effect on most metrics, except for the fan-in and lack-of-cohesion metrics.

Continuing with this metric theme, the paper “On the Proposal and Evaluation of a Benchmark-based Threshold Derivation Method” by Gustavo Vale, Eduardo Fernandes, and Eduardo Figueiredo proposes a new method for deriving metric thresholds. To validate the method, the authors analyze three benchmarks composed of multiple software product lines. They also applied the method to a benchmark of 103 Java open-source software systems. The results suggest that the new method provides realistic and reliable thresholds.

In “Modeling Variability in the Video Domain: Language and Experience Report,” Mauricio Alférez, Mathieu Acher, José A. Galindo, Benoit Baudry, and David Benavides describe the development of a variability modeling language, called VM, which enables practitioners to benchmark video algorithms over large and diverse datasets. The research was performed in close collaboration with industrial partners over a 2-year timespan.

As society becomes increasingly dependent on safety-critical systems, the hazard analysis of such systems becomes paramount. The paper “Comparison of the FMEA and STPA safety analysis methods - A case study” by Sardar Muhammad Sulaman, Armin Beer, Michael Felderer, and Martin Höst presents a comparison of two hazard analysis methods: failure mode and effect analysis (FMEA) and system theoretic process analysis (STPA). The authors used a collision avoidance system to compare the effectiveness of the methods and to investigate their differences. The results show that both methods deliver similar analysis results.

In “FLEX-RCA - A lean based method for root cause analysis in software process improvement,” Joakim Pernstål, Robert Feldt, Tony Gorschek, and Dan Florén propose a root cause analysis method, building on Lean Six Sigma, that can be used for evaluation and process improvement activities. The authors suggest that the method can uncover a broad base of causes for issues that arise during software process improvement and support the exploration of these underlying root causes.

In the final paper, “Correlation of critical success factors with success of software projects: an empirical investigation,” Vahid Garousi, Ayca Tarhan, Dietmar Pfahl, Ahmet Coskuncay, and Onur Demirors describe how they designed an online survey and gathered data on critical success factors for 101 software projects in the Turkish software industry. They report that the top three factors linked to project success are (1) team experience with software development methods, (2) team expertise, and (3) project monitoring and control.

As always, I am grateful for your suggestions or comments on this issue; please email me at rachel.harrison@brookes.ac.uk.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

1. School of Engineering, Computing and Mathematics, Oxford Brookes University, Oxford, UK
