Soft Computing, Volume 20, Issue 8, pp 3321–3334

Fuzzy system to adapt web voice interfaces dynamically in a vehicle sensor tracking application definition

  • Guillermo Cueva-Fernandez
  • Jordán Pascual Espada
  • Vicente García-Díaz
  • Rubén González Crespo
  • Nestor Garcia-Fernandez
Methodologies and Application


The Vitruvius platform is focused on vehicles, their multiple sensors, and the real-time data those sensors can provide. With Vitruvius, users can create software applications specialized for the automotive context (e.g., monitoring certain vehicles or warning when a vehicle sensor exceeds a certain value), with the help of fuzzy rules to make decisions. To create applications, users are provided with a domain-specific language that greatly facilitates the process. However, drivers and some passengers cannot create applications on the fly, since doing so requires typing. In this paper, we present an adaptive speech interface that allows users to create applications using only their voice. The interface relies on fuzzy rules to adapt to each user's level of experience, balancing the amount of work users must do against the help the system provides, based on the knowledge and ability of each potential user.
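The fuzzy adaptation described above can be illustrated with a minimal sketch. This is not the authors' implementation (the paper's platform uses its own rule base and a domain-specific language); it is a hypothetical two-rule Mamdani-style system that maps a user's experience score to the amount of guidance the voice interface should offer:

```python
# Illustrative sketch only: a hypothetical fuzzy mapping from a user's
# experience score (0-10) to the guidance level of the voice interface (0-1).
# Membership ranges and rule outputs are invented for the example.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def guidance_level(experience):
    """Defuzzify two rules by weighted average:
    IF experience IS low  THEN guidance IS high (1.0)
    IF experience IS high THEN guidance IS low  (0.2)
    """
    low = tri(experience, -1, 0, 6)    # degree to which the user is a novice
    high = tri(experience, 4, 10, 11)  # degree to which the user is experienced
    total = low + high
    if total == 0:
        return 0.6                     # neutral fallback
    return (low * 1.0 + high * 0.2) / total
```

A novice (`experience = 0`) gets full guidance (1.0), an expert (`experience = 10`) gets minimal guidance (0.2), and intermediate users fall smoothly in between, which is the kind of balance between user effort and system help the abstract describes.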


Keywords: Fuzzy logic · Fuzzy decision making · Vehicle sensor · Tracking · Speech interface · Adaptive



Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  1. Department of Computer Science, University of Oviedo, Asturias, Spain
  2. College of Engineering, Universidad Internacional de La Rioja, Madrid, Spain
