Dynamic Information Space Based on High-Speed Sensor Technology

  • Masatoshi Ishikawa
  • Idaku Ishii
  • Yutaka Sakaguchi
  • Makoto Shimojo
  • Hiroyuki Shinoda
  • Hirotsugu Yamamoto
  • Takashi Komuro
  • Hiromasa Oku
  • Yutaka Nakajima
  • Yoshihiro Watanabe


The purpose of this research is to realize a dynamic information space that harmonizes the human perceptual, recognition, and motor systems. Our key technologies toward this goal are high-speed sensing and display technologies for vision and the haptic sense, operating at kHz rates. Based on these technologies, the information space can fully capture the dynamics of humans and objects and display information at high speed. As subsystems for this goal, we have newly developed four key elemental technologies: high-speed 3D vision for sensing imperceptible dynamics, a high-speed resistor-network proximity sensor array for detecting nearby objects, noncontact low-latency haptic feedback, and high-speed visual displays for information sharing and operation in real space. In addition, to achieve coordinated interaction between individual humans and this information space, we have investigated human perceptual and motor functions in high-speed information environments. Finally, we have developed various application systems based on the concept of the dynamic information space by integrating these subsystems.


Keywords: High-speed vision · Proximity sensor · Airborne Ultrasound Tactile Display (AUTD) · High-speed LED display · Human interface · Dynamic information environment


References

  1. I. Ishii, T. Tatebe, Q. Gu, Y. Moriue, T. Takaki, K. Tajima, 2000 fps real-time vision system with high-frame-rate video recording. Proceedings of IEEE International Conference on Robotics and Automation, pp. 1536–1541 (2010)
  2. Y. Liu, H. Gao, Q. Gu, T. Aoyama, T. Takaki, I. Ishii, High-frame-rate structured light 3-D vision for fast moving objects. J. Robot. Mechatron. 26(3), 311–320 (2014)
  3. S. Inokuchi, K. Sato, F. Matsuda, Range imaging system for 3-D object recognition. Proceedings of International Conference on Pattern Recognition, pp. 806–808 (1984)
  4. J. Chen, T. Yamamoto, T. Aoyama, T. Takaki, I. Ishii, Simultaneous projection mapping using high-frame-rate depth vision. Proceedings of IEEE International Conference on Robotics and Automation, pp. 4506–4511 (2014)
  5. M. Shimojo, T. Araki, S. Teshigawara, A. Ming, M. Ishikawa, A net-structure tactile sensor covering free-form surface and ensuring high-speed response. Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 670–675 (2007)
  6. H. Arita, Y. Suzuki, H. Ogawa, K. Tobita, M. Shimojo, Hemispherical net-structure proximity sensor detecting azimuth and elevation for guide dog robot. Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 653–658 (2013)
  7. K. Hasegawa, H. Shinoda, Aerial display of vibrotactile sensation with high spatial-temporal resolution using large-aperture airborne ultrasound phased array. Proceedings of IEEE World Haptics Conference, pp. 31–36, Daejeon, Korea (2013)
  8. T. Iwamoto, M. Tatezono, H. Shinoda, Non-contact method for producing tactile sensation using airborne ultrasound. Haptics: Perception, Devices and Scenarios: 6th International Conference, EuroHaptics 2008 Proceedings (Lecture Notes in Computer Science), pp. 504–513 (2008)
  9. T. Hoshi, M. Takahashi, T. Iwamoto, H. Shinoda, Noncontact tactile display based on radiation pressure of airborne ultrasound. IEEE Trans. Haptics 3(3), 155–165 (2010)
  10. J. Awatani, Studies on acoustic radiation pressure. I (General considerations). J. Acoust. Soc. Am. 27, 278–281 (1955)
  11. P.J.J. Lamoré, H. Muijser, C.J. Keemink, Envelope detection of amplitude-modulated high-frequency sinusoidal signals by skin mechanoreceptors. J. Acoust. Soc. Am. 79, 1082–1085 (1986)
  12. H. Yamamoto, M. Tsutsumi, K. Matsushita, R. Yamamoto, K. Kajimoto, S. Suyama, Development of high-frame-rate LED panel and its applications for stereoscopic 3D display. Proc. SPIE 7956, 79560R (2011)
  13. S. Farhan, S. Suyama, H. Yamamoto, Hand-waving decodable display by use of a high frame rate LED panel. Proceedings of IDW ’11, vol. 3, pp. 1983–1986 (2011)
  14. H. Yamamoto, K. Sato, S. Farhan, S. Suyama, Hand-waving steganography by use of a high-frame-rate LED panel. SID 2014 Digest, pp. 915–917 (2014)
  15. K. Sato, A. Tsuji, S. Suyama, H. Yamamoto, LED module integrated with microcontroller, sensors, and wireless communication. Proceedings of the International Display Workshops 20, 1504–1507 (2013)
  16. H. Yamamoto, S. Suyama, Aerial 3D LED display by use of retroreflective sheeting. Proc. SPIE 8648, 86480Q (2013)
  17. H. Yamamoto, Y. Tomiyama, S. Suyama, Floating aerial LED signage based on aerial imaging by retro-reflection (AIRR). Opt. Express 22(22), 26919–26924 (2014)
  18. C.B. Burckhardt, R.J. Collier, E.T. Doherty, Formation and inversion of pseudoscopic images. Appl. Opt. 7, 627–631 (1968)
  19. T. Tokimoto, K. Sato, S. Suyama, H. Yamamoto, High-frame-rate LED display with pulse-width modulation by use of nonlinear clock. Proceedings of IEEE 2nd Global Conference on Consumer Electronics, pp. 83–84 (2013)
  20. S. Kitazawa, T. Kohno, T. Uka, Effects of delayed visual information on the rate and amount of prism adaptation in the human. J. Neurosci. 15, 7644–7652 (1995)
  21. H. Tanaka, K. Homma, H. Imamizu, Physical delay but not subjective delay determines learning rate in prism adaptation. Exp. Brain Res. 208, 257–268 (2011)
  22. T. Honda, M. Hirashima, D. Nozaki, Adaptation to visual feedback delay influences visuomotor learning. PLoS ONE 7, e37900 (2012)
  23. T. Ishikawa, Y. Sakaguchi, Both movement-end and task-end are critical for error feedback in visuomotor adaptation: a behavioral experiment. PLoS ONE 8, e55801 (2014)
  24. S. Cheadle, A. Parton, H. Muller, M. Usher, Subliminal gamma flicker draws attention even in the absence of transition-flash cues. J. Neurophysiol. 105, 827–833 (2011)
  25. Y. Nakajima, Y. Sakaguchi, Abrupt transition between an above-CFF flicker and a stationary stimulus induces twinkle perception: evidence for high-speed visual mechanism for detecting luminance change. J. Vis. 13, 311 (2013)
  26. M. Sinico, G. Parovel, C. Casco, S. Anstis, Perceived shrinkage of motion paths. J. Exp. Psychol. Hum. Percept. Perform. 35, 948–957 (2009)
  27. Y. Nakajima, Y. Sakaguchi, Perceptual shrinkage of motion path observed in one-way high-speed motion. Proceedings of 24th Annual Conference of JNNS, pp. 88–89 (2014)
  28. M. Higuchi, T. Komuro, Multi-finger AR typing interface for mobile devices using high-speed hand motion recognition. Extended Abstracts on ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2015), pp. 1235–1240 (2015)
  29. K. Okumura, H. Oku, M. Ishikawa, High-speed gaze controller for millisecond-order pan/tilt camera. Proceedings of IEEE ICRA, pp. 6186–6191 (2011)
  30. K. Okumura, K. Yokoyama, H. Oku, M. Ishikawa, 1 ms auto pan-tilt video shooting technology for objects in motion based on Saccade Mirror with background subtraction. Adv. Robot. 29, 457–468 (2015)
  31.
  32.
  33. K. Okumura, H. Oku, M. Ishikawa, Active projection AR using high-speed optical axis control and appearance estimation algorithm. Proceedings of IEEE ICME (2013). doi: 10.1109/ICME.2013.6607637
  34. T. Sueishi, H. Oku, M. Ishikawa, Robust high-speed tracking against illumination changes for dynamic projection mapping. Proceedings of IEEE VR 2015, pp. 97–104 (2015)
  35. A. Zerroug, A. Cassinelli, M. Ishikawa, Invoked computing: spatial audio and video AR invoked through miming. Proceedings of Virtual Reality International Conference (2011)
  36. L. Miyashita, Y. Zou, M. Ishikawa, VibroTracker: a vibrotactile sensor tracking objects. SIGGRAPH 2013 Emerging Technologies (2013)
  37. T. Niikura, Y. Watanabe, M. Ishikawa, Anywhere surface touch: utilizing any surface as an input area. Proceedings of the 5th Augmented Human International Conference (2014)
  38. M.S. Alvissalim, M. Yasui, C. Watanabe, M. Ishikawa, Immersive virtual 3D environment based on 499 fps hand gesture interface. Proceedings of International Conference on Advanced Computer Science and Information Systems, pp. 198–203 (2014)

Copyright information

© Springer Japan 2016

Authors and Affiliations

  • Masatoshi Ishikawa (1), corresponding author
  • Idaku Ishii (2)
  • Yutaka Sakaguchi (3)
  • Makoto Shimojo (3)
  • Hiroyuki Shinoda (1)
  • Hirotsugu Yamamoto (4)
  • Takashi Komuro (5)
  • Hiromasa Oku (6)
  • Yutaka Nakajima (3)
  • Yoshihiro Watanabe (1)

  1. The University of Tokyo, Tokyo, Japan
  2. Hiroshima University, Hiroshima, Japan
  3. University of Electro-Communications, Tokyo, Japan
  4. Utsunomiya University, Tochigi, Japan
  5. Saitama University, Saitama, Japan
  6. Gunma University, Gunma, Japan
