Design of an Empirical Study for Evaluating an Automatic Layout Tool

  • Haoyuan Zhang
  • Tong Li
  • Yunduo Wang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11158)

Abstract

Generating a meaningful layout for iStar models is a challenging task that currently requires significant manual effort. This effort becomes prohibitively time-consuming for large-scale iStar modeling, raising the need for an automatic iStar layout tool. Previously, we proposed an algorithm for laying out iStar SD models and implemented a corresponding prototype tool. In this paper, we report our ongoing empirical work, which aims to evaluate the effectiveness and usability of the prototype tool. In particular, we present a research design for comparing manual layout and automatic layout in terms of efficiency and model comprehensibility. Based on this design, we plan to carry out the corresponding empirical studies in the near future.
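As a minimal sketch of how the efficiency comparison in such a controlled experiment might be analyzed, the snippet below contrasts task completion times between a manual-layout group and an automatic-layout group. The data values are placeholders and the choice of Welch's t-test is our assumption for illustration; neither is taken from the paper.

```python
# Hypothetical sketch: comparing layout efficiency between two groups
# in a controlled experiment. The completion times (in seconds) below
# are placeholder values, not results reported in the paper.
from scipy import stats

manual_times = [412.0, 388.5, 455.2, 401.7, 430.9]     # manual-layout group
automatic_times = [198.3, 221.4, 205.8, 190.2, 214.6]  # automatic-layout group

# Welch's t-test (does not assume equal variances) on completion times.
t_stat, p_value = stats.ttest_ind(manual_times, automatic_times, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A comprehensibility comparison could be treated analogously, e.g., by scoring participants' answers to questions about the laid-out models and comparing the scores between groups.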

Keywords

iStar models · Automatic layout · Controlled experiment · Prototype tool

Notes

Acknowledgements

This work is supported by the National Key R&D Program of China (No. 2017YFC08033007), the National Natural Science Foundation of China (No. 91546111, 91646201), and the Basic Research Funding of Beijing University of Technology (No. 040000546318516).

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Beijing University of Technology, Beijing, China