NLP in OTF Computing: Current Approaches and Open Challenges

  • Frederik S. Bäumer
  • Michaela Geierhos
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 920)


On-The-Fly Computing is the vision of covering the software needs of end users through the fully automatic composition of existing software services. Based on natural language software descriptions, end users receive so-called service compositions tailored to their individual needs. This everyday language may contain inaccuracies and incompleteness, which are well-known challenges in requirements engineering. In addition to existing approaches that automatically identify and correct these deficits, there are new trends toward involving users more closely in the elaboration and refinement process. In this paper, we present the relevant state of the art in the automated detection and compensation of multiple inaccuracies in natural language service descriptions and identify open challenges that need to be tackled in NL-based software service composition.


Keywords: Inaccuracy detection · Natural language software requirements



This work was partially supported by the German Research Foundation (DFG) within the Collaborative Research Center “On-The-Fly Computing” (SFB 901). Furthermore, we thank our student assistant Edwin Friesen for his contribution.
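The automated detection approaches summarized in the abstract often rely on lexical indicators of vagueness and weakness in requirement sentences. The following minimal sketch (not taken from the paper; the word lists and function name are illustrative assumptions) shows the basic idea of such an indicator-based check:

```python
# Illustrative sketch of indicator-based inaccuracy detection in a
# natural language requirement. The indicator word lists below are
# hypothetical examples, not the lists used by any specific tool.

VAGUE_TERMS = {"fast", "user-friendly", "easy", "flexible", "adequate"}
WEAK_MODALS = {"should", "could", "might", "may"}

def detect_inaccuracies(requirement: str) -> list[str]:
    """Return findings for one requirement sentence."""
    findings = []
    # Naive tokenization: split on whitespace, strip punctuation, lowercase.
    tokens = [t.strip(".,;:!?").lower() for t in requirement.split()]
    for tok in tokens:
        if tok in WEAK_MODALS:
            findings.append(f"weak modal verb: '{tok}'")
        if tok in VAGUE_TERMS:
            findings.append(f"vague term: '{tok}'")
    return findings

print(detect_inaccuracies("The app should be fast and user-friendly."))
```

Real systems replace such word lists with configurable indicator catalogues and full text analysis pipelines, but the flag-and-report pattern is the same.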



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Paderborn University, Paderborn, Germany
