
Artifact Sampling in Experimental Conceptual Modeling Research

  • Roman Lukyanenko
  • Jeffrey Parsons
  • Binny M. Samuel
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11158)

Abstract

Experimental research in conceptual modeling typically involves comparing grammars or variations within a grammar, where differences between experimental groups are based on a focal construct of interest. However, a conceptual modeling grammar is a collection of many constructs, and there is a danger that grammatical features other than those under consideration in an experiment can influence or confound the results obtained. To address this issue, we propose artifact sampling as a way to systematically vary non-focal grammatical features in experimental conceptual modeling research, thereby controlling for potential confounds or interactions between the constructs of interest and other grammatical features. In this paper, we describe the approach and illustrate its application to the design of a large-scale study comparing alternative notations within the Entity-Relationship family of grammars.
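
As a concrete illustration of the idea (this sketch is not part of the paper, and the focal levels and non-focal feature names below are hypothetical placeholders), artifact sampling can be operationalized by randomly sampling settings of non-focal grammatical features to produce several artifact versions for each focal experimental condition:

    import itertools
    import random

    # Hypothetical example: the focal construct contrasts two notation
    # variants; the non-focal grammatical features are illustrative
    # placeholders, not the features examined in the paper.
    FOCAL_LEVELS = ["optional_properties", "mandatory_properties"]

    NON_FOCAL_FEATURES = {
        "relationship_labels": ["verb_phrase", "noun_phrase"],
        "attribute_placement": ["inside_entity", "attached_oval"],
        "layout": ["left_to_right", "top_down"],
    }

    def sample_artifacts(n_per_focal_level, seed=42):
        """Sample non-focal feature combinations for each focal level,
        yielding multiple artifact (diagram) versions per group."""
        rng = random.Random(seed)
        combos = list(itertools.product(*NON_FOCAL_FEATURES.values()))
        artifacts = []
        for focal in FOCAL_LEVELS:
            for combo in rng.sample(combos, n_per_focal_level):
                non_focal = dict(zip(NON_FOCAL_FEATURES.keys(), combo))
                artifacts.append({"focal": focal, **non_focal})
        return artifacts

    if __name__ == "__main__":
        for artifact in sample_artifacts(n_per_focal_level=4):
            print(artifact)

Each generated artifact specification would then be rendered as a diagram and assigned to participants, so that the focal contrast is tested across a sample of non-focal feature settings rather than a single fixed configuration.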

Keywords

Artifact sampling · Conceptual modeling · Experimental research


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Roman Lukyanenko (1)
  • Jeffrey Parsons (2)
  • Binny M. Samuel (3)

  1. HEC Montreal, Montreal, Canada
  2. Memorial University of Newfoundland, St. John’s, Canada
  3. University of Cincinnati, Cincinnati, USA
