
Çorba: crowdsourcing to obtain requirements from regulations and breaches

  • Hui Guo
  • Özgür Kafalı
  • Anne-Liz Jeukeng
  • Laurie Williams
  • Munindar P. Singh

Abstract

Context

Modern software systems are deployed in sociotechnical settings, combining social entities (humans and organizations) with technical entities (software and devices). In such settings, in addition to the technical controls that implement a system's security features, regulations specify how users should behave in security-critical situations. No matter how carefully the software is designed and how well regulations are enforced, such systems remain subject to breaches arising from social factors (user misuse) and technical factors (software vulnerabilities). Breach reports, often legally mandated, describe what went wrong during a breach and how it was remedied. However, breach reports are not formally investigated in current practice, so valuable lessons about past failures are lost.

Objective

Our research aim is to aid security analysts and software developers in obtaining a set of legal, security, and privacy requirements by developing a crowdsourcing methodology to extract knowledge from regulations and breach reports.

Method

We present Çorba, a methodology that leverages human intelligence via crowdsourcing to extract requirements, in the form of regulatory norms, from textual artifacts. We evaluate Çorba on the US healthcare regulations in the Health Insurance Portability and Accountability Act (HIPAA) and on breach reports published by the US Department of Health and Human Services (HHS). Following this methodology, we conducted a pilot study and a final study on the Amazon Mechanical Turk crowdsourcing platform.
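
The abstract describes the task design only at a high level; as an illustrative sketch, the Python snippet below shows how a single breach-report question of this kind could be posted to Amazon Mechanical Turk using the boto3 MTurk client. The external form URL, reward, and assignment count are hypothetical and are not taken from the study.

    import boto3

    # Client for the production MTurk endpoint; MTurk is served only from us-east-1.
    mturk = boto3.client("mturk", region_name="us-east-1")

    # Hypothetical external form that shows one breach-report excerpt and asks the
    # worker to state the rule that could have prevented the breach.
    question_xml = """
    <ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
      <ExternalURL>https://example.org/corba/task?report=42</ExternalURL>
      <FrameHeight>600</FrameHeight>
    </ExternalQuestion>
    """

    hit = mturk.create_hit(
        Title="Identify a missing HIPAA requirement from a breach report",
        Description="Read a short breach description and state what rule could have prevented it.",
        Keywords="privacy, HIPAA, requirements",
        Reward="0.50",                     # USD per assignment (illustrative value)
        MaxAssignments=5,                  # workers per report (illustrative value)
        AssignmentDurationInSeconds=1800,
        LifetimeInSeconds=7 * 24 * 3600,
        Question=question_xml,
    )
    print("HIT created:", hit["HIT"]["HITId"])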

Results

Çorba yields high-quality responses from crowd workers, which we analyze to identify requirements that complement the HIPAA regulations. We publish a curated dataset of the worker responses and the identified requirements.
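
To make the analysis step concrete, here is a minimal sketch, under assumed data, of aggregating worker responses by majority vote. It presumes the responses were exported to a CSV file (worker_responses.csv, a hypothetical name) with report_id, worker_id, and a categorical answer column; the authors' actual analysis may differ.

    import csv
    from collections import Counter, defaultdict

    # Group answers per breach report.
    answers = defaultdict(list)
    with open("worker_responses.csv", newline="") as f:   # hypothetical export file
        for row in csv.DictReader(f):
            answers[row["report_id"]].append(row["answer"])

    # Keep an answer as a candidate requirement only if a strict majority agrees.
    candidates = {}
    for report_id, report_answers in answers.items():
        top_answer, count = Counter(report_answers).most_common(1)[0]
        if count > len(report_answers) / 2:
            candidates[report_id] = top_answer

    print(f"{len(candidates)} reports have a majority-agreed answer")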

Conclusions

The results show that the instructions and question formats presented to the crowd workers significantly affect the quality of their responses with respect to identifying requirements. By revising the instructions and question formats, we observed a significant improvement from the pilot study to the final study. Other factors, such as worker type, breach type, or report length, have no notable effect on the workers' performance. Moreover, we discuss further potential improvements, such as restructuring breach reports and highlighting text with automated methods.
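
As an example of the kind of automated text highlighting mentioned above, the sketch below wraps candidate norm cues (modal verbs and HIPAA actor terms) in HTML <mark> tags before a report is shown to a worker. The cue list is illustrative and is not the authors' method.

    import re

    # Illustrative cue list: modal verbs and HIPAA actor terms that often signal norms.
    CUES = r"\b(must|shall|should|may not|required to|covered entity|business associate|protected health information)\b"

    def highlight(report_text: str) -> str:
        """Wrap candidate norm cues in HTML <mark> tags for display in the task form."""
        return re.sub(CUES, lambda m: f"<mark>{m.group(0)}</mark>", report_text, flags=re.IGNORECASE)

    print(highlight("The covered entity must encrypt protected health information at rest."))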

Keywords

Regulatory norms · Sociotechnical systems · HIPAA

Notes

Acknowledgements

This research is supported by the US Department of Defense under the Science of Security Lablet (SoSL) grant to NC State University and by the National Science Foundation under the Research Experiences for Undergraduates (REU) program.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Computer Science, North Carolina State University, Raleigh, USA
  2. School of Computing, University of Kent, Canterbury, UK
  3. Department of Computer & Information Science & Engineering, University of Florida, Gainesville, USA
