Evidence-Based Methods for Privacy and Identity Management

  • Kovila P. L. Coopamootoo
  • Thomas Groß (corresponding author)
Chapter
Part of the IFIP Advances in Information and Communication Technology book series (IFIPAICT, volume 498)

Abstract

With the advent of authoritative experiments and evidence-based methods in security research [2, 4, 21, 29], we are convinced that privacy and identity research will benefit from the scientific method as well. This workshop offers an introduction to selected tools of experiment design and systematic analysis. It covers key ingredients of evidence-based methods: hallmarks of sound experimentation, templates for the design of true experiments, and inferential statistics with sound power analysis. To gauge the state of play, we include a systematic literature review of the pre-proceedings of the 2016 IFIP Summer School on Privacy and Identity Management, as well as the participants' feedback on their perception of evidence-based methods. Finally, we make our case for the endorsement of evidence-based methods in privacy and identity management.
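
As a concrete illustration of the a priori power analysis the workshop advocates, consider the following minimal Python sketch. It is our addition to this page, not code from the chapter; it assumes the statsmodels library is available and computes the per-group sample size for a two-group experiment analyzed with an independent-samples t-test, with Cohen's conventional parameter values [6, 8] chosen purely for illustration.

    # A priori power analysis for a two-group experiment (minimal sketch).
    # All parameter values are illustrative conventions, not prescriptions.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Solve for the per-group sample size needed to detect a medium effect
    # (Cohen's d = 0.5) at significance level alpha = .05 with 80% power,
    # the conventional targets recommended by Cohen (1988, 1992).
    n_per_group = analysis.solve_power(
        effect_size=0.5,          # Cohen's d (standardized mean difference)
        alpha=0.05,               # Type I error rate
        power=0.80,               # 1 - beta, probability of detecting the effect
        alternative="two-sided",  # non-directional hypothesis
    )

    print(f"Participants needed per group: {n_per_group:.0f}")  # approx. 64

The same computation can be performed in G*Power [13]; the essential point is that the sample size is fixed before data collection, which is what separates a priori power analysis from post hoc rationalization.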

Keywords

Null Hypothesis · Statistical Inference · Identity Management · True Experiment · Theory Section
These keywords were added by machine and not by the authors.

Notes

Acknowledgments

We are grateful for the contributions and feedback from the participants of the workshop, and for the discussions with Roy Maxion on evidence-based methods for cyber security. The preparation of the evidence-based methods workshop was funded in part by the EPSRC Research Institute in Science of Cyber Security (RISCS), grant EP/K006568/1. This work was supported by a Newcastle-sponsored International Research Collaboration Award (IRCA) for work with Carnegie Mellon University (CMU). A number of people have asked us whether this paper could be used as a primer for lectures or student projects. We certainly approve of that. We ask that the paper be cited and that a brief notification be sent to the corresponding author, so that we can track interest in evidence-based methods in the community.

References

  1. American Psychological Association (APA): Publication Manual of the American Psychological Association, 6th revised edn. American Psychological Association (APA), July 2009
  2. Balenson, D., Tinnel, L., Benzel, T.: Cybersecurity Experimentation of the Future (CEF): Catalyzing a New Generation of Experimental Cybersecurity Research - Final Report. Technical report, SRI International and USC Information Sciences Institute, July 2015
  3. Brewer, M.B.: Research design and issues of validity. In: Handbook of Research Methods in Social and Personality Psychology, pp. 3–16 (2000)
  4. Carroll, T.E., Manz, D., Edgar, T., Greitzer, F.L.: Realizing scientific methods for cyber security. In: Proceedings of the 2012 Workshop on Learning from Authoritative Security Experiment Results, pp. 19–24. ACM (2012)
  5. Cohen, J.: The statistical power of abnormal-social psychological research: a review. J. Abnorm. Soc. Psychol. 65(3), 145–153 (1962)
  6. Cohen, J.: Statistical Power Analysis for the Behavioral Sciences, 2nd edn. Psychology Press, Taylor & Francis Group, New York (1988)
  7. Cohen, J.: Things I have learned (so far). Am. Psychol. 45(12), 1304–1312 (1990)
  8. Cohen, J.: A power primer. Psychol. Bull. 112(1), 155–159 (1992)
  9. Cook, T.D., Campbell, D.T.: Quasi-Experimentation: Design and Analysis Issues for Field Settings. Rand McNally, Chicago (1979)
  10. Dodge, Y. (ed.): Oxford Dictionary of Statistical Terms. Oxford University Press, Oxford (2006)
  11. Ellis, P.D.: The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results. Cambridge University Press, Cambridge (2010)
  12. Everitt, B.: Cambridge Dictionary of Statistics. Cambridge University Press, Cambridge (1998)
  13. Faul, F., Erdfelder, E., Lang, A.G., Buchner, A.: G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav. Res. Methods 39(2), 175–191 (2007)
  14. Field, A., Hole, G.: How to Design and Report Experiments. Sage, London (2003)
  15. Fisher, R.A.: Statistical Methods for Research Workers. Genesis Publishing Pvt Ltd, Edinburgh (1925)
  16. Fritz, C.O., Morris, P.E., Richler, J.J.: Effect size estimates: current use, calculations, and interpretation. J. Exp. Psychol. Gen. 141(1), 2–18 (2012)
  17. Gardner, M.J., Altman, D.G.: Confidence intervals rather than P values: estimation rather than hypothesis testing. Br. Med. J. (Clin. Res. Ed.) 292(6522), 746–750 (1986)
  18. Howell, D.C.: Statistical Methods for Psychology, 8th edn. Cengage Learning, Wadsworth (2012)
  19. Ioannidis, J.P.: Why most published research findings are false. PLoS Med. 2(8), e124 (2005)
  20. Lehmann, E.L.: The Fisher, Neyman-Pearson theories of testing hypotheses: one theory or two? In: Selected Works of E.L. Lehmann, pp. 201–208. Springer (2012)
  21. Maxion, R.: Making Experiments Dependable. Springer, Heidelberg (2011)
  22. Maxwell, S.E., Delaney, H.D.: Designing Experiments and Analyzing Data: A Model Comparison Perspective, vol. 1, 2nd edn. Psychology Press, New York (2004)
  23. Miller, S.: Experimental Design and Statistics. Routledge, London (2005)
  24. Montgomery, D.C.: Design and Analysis of Experiments, 8th edn. Wiley, New York (2012)
  25. Neyman, J., Pearson, E.S.: On the use and interpretation of certain test criteria for purposes of statistical inference: Part I. Biometrika 20A, 175–240 (1928)
  26. Nickerson, R.S.: Null hypothesis significance testing: a review of an old and continuing controversy. Psychol. Methods 5(2), 241–301 (2000)
  27. Open Science Collaboration: An open, large-scale, collaborative effort to estimate the reproducibility of psychological science. Perspect. Psychol. Sci. 7(6), 657–660 (2012)
  28. Open Science Collaboration: Estimating the reproducibility of psychological science. Science 349(6251), aac4716 (2015)
  29. Peisert, S., Bishop, M.: How to design computer security experiments. In: Futcher, L., Dodge, R. (eds.) WISE 2007. IFIP, vol. 237, pp. 141–148. Springer, Boston, MA (2007). doi:10.1007/978-0-387-73269-5_19
  30. Popper, K.: The Logic of Scientific Discovery. Routledge, London (2005)
  31. Wacholder, S., Chanock, S., Garcia-Closas, M., Rothman, N., et al.: Assessing the probability that a positive report is false: an approach for molecular epidemiology studies. J. Natl. Cancer Inst. 96(6), 434–442 (2004)

Copyright information

© IFIP International Federation for Information Processing 2016

Authors and Affiliations

  1. University of Derby, Derby, UK
  2. Newcastle University, Newcastle upon Tyne, UK
