
Multimodal Information Coding System for Wearable Devices of Advanced Uniform

  • Andrey L. Ronzhin (corresponding author)
  • Oleg O. Basov
  • Anna I. Motienko
  • Alexey A. Karpov
  • Yuri V. Mikhailov
  • Milos Zelezny
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9734)

Abstract

The paper presents a mathematical model of a subsystem for multimodal information coding. Analytical expressions for the quality and speed of information transmission are obtained. Results of experimental studies of the developed multimodal information coding system are presented, and requirements for applying the developed model and system to data processing in the wearable devices of an advanced uniform are discussed.
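The abstract refers to analytical expressions for the quality and speed of information transmission in the multimodal coding subsystem. As a minimal illustrative sketch only (not taken from the paper), the Python fragment below aggregates hypothetical per-modality coder bit rates and estimates the minimum transmission time over a wireless link whose capacity is approximated by the standard Shannon formula; all modality names, bit rates, bandwidth, and SNR values are assumptions made for illustration.

```python
import math

# Hypothetical per-modality coder bit rates (bit/s) for a wearable device;
# modality names and values are illustrative assumptions, not from the paper.
MODALITY_BITRATES = {
    "speech": 2400,       # e.g. a low bit-rate speech coder
    "gesture": 800,       # quantized inertial-sensor stream
    "physiology": 200,    # heart-rate / temperature telemetry
}


def link_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon capacity C = B * log2(1 + SNR), in bit/s."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)


def transmission_time(payload_bits: float, capacity_bps: float) -> float:
    """Lower bound on the time needed to deliver the payload, in seconds."""
    return payload_bits / capacity_bps


if __name__ == "__main__":
    total_rate = sum(MODALITY_BITRATES.values())                  # aggregate source rate, bit/s
    capacity = link_capacity(bandwidth_hz=25e3, snr_linear=10.0)  # about 86 kbit/s
    frame_bits = total_rate * 1.0                                 # one second of multimodal data
    print(f"aggregate source rate: {total_rate} bit/s")
    print(f"link capacity        : {capacity:.0f} bit/s")
    print(f"min. time per frame  : {transmission_time(frame_bits, capacity) * 1e3:.1f} ms")
```

In the paper itself the corresponding quantities are derived analytically for the developed coding subsystem; the sketch only shows how such rate and delay estimates compose numerically.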

Keywords

Multimodal information · Coding algorithms · Uniform · Wearable devices · Data transmission · Energy consumption reduction

Acknowledgments

This work is partially supported by the Russian Foundation for Basic Research (grants № 16-08-00696-a, 15-07-06774-a).


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Andrey L. Ronzhin (1, corresponding author)
  • Oleg O. Basov (2)
  • Anna I. Motienko (1)
  • Alexey A. Karpov (1)
  • Yuri V. Mikhailov (1)
  • Milos Zelezny (3)

  1. SPIIRAS, St. Petersburg, Russia
  2. Academy of FAP of Russia, Orel, Russia
  3. University of West Bohemia, Pilsen, Czech Republic
