An Implementation for Capturing Clickable Moving Objects

  • Toshiharu Sugawara
  • Satoshi Kurihara
  • Shigemi Aoyagi
  • Koji Sato
  • Toshihiro Takada
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3101)


This paper discusses a method for identifying clickable objects and regions in still and moving images at the time they are captured. A number of methods and languages have recently been proposed for adding point-and-click interactivity to objects in moving pictures as well as still images. When such pictures are displayed in Internet environments or broadcast on digital TV channels, users can follow links specified by URLs (e.g., to buy an item online or to get detailed information about it) by clicking on these objects. However, specifying the clickable areas of objects in a video is not easy, because their positions are liable to change from one frame to the next. To cope with this problem, our method allows content creators to capture moving (and still) images together with information related to the objects that appear in them, including the coordinates of those objects' clickable areas in the captured images. This is achieved by capturing the images at several infrared wavelengths simultaneously. The method is also applicable to multi-target motion capture.
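The abstract's idea is that objects carrying infrared markers show up as distinct bright regions in a separately captured IR channel, and those regions give the clickable areas for each frame. As a rough illustration only (not the authors' implementation; the function names and the single-channel, fixed-threshold setup are assumptions), the following Python sketch thresholds a hypothetical IR frame, groups bright pixels into connected components by flood fill, and treats each component's bounding box as a clickable area that a click can be hit-tested against:

```python
# Hypothetical sketch: IR-marked objects appear as bright pixel regions in
# the infrared channel. We threshold the IR frame, group bright pixels into
# connected components (4-connected flood fill), and use each component's
# bounding box as that object's clickable area in this frame.

def find_clickable_regions(ir_frame, threshold=200):
    """Return bounding boxes (x0, y0, x1, y1) of bright regions in ir_frame,
    a list of rows of pixel intensities."""
    h, w = len(ir_frame), len(ir_frame[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if ir_frame[y][x] >= threshold and not seen[y][x]:
                # Flood-fill this connected bright region, tracking its extent.
                stack = [(x, y)]
                seen[y][x] = True
                x0, y0, x1, y1 = x, y, x, y
                while stack:
                    cx, cy = stack.pop()
                    x0, y0 = min(x0, cx), min(y0, cy)
                    x1, y1 = max(x1, cx), max(y1, cy)
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                                   (cx, cy + 1), (cx, cy - 1)):
                        if (0 <= nx < w and 0 <= ny < h
                                and not seen[ny][nx]
                                and ir_frame[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                boxes.append((x0, y0, x1, y1))
    return boxes

def hit_test(click, regions):
    """Map a click (x, y) to the index of the containing region, or None."""
    cx, cy = click
    for i, (x0, y0, x1, y1) in enumerate(regions):
        if x0 <= cx <= x1 and y0 <= cy <= y1:
            return i
    return None
```

In a real pipeline the regions found per frame would be associated with the collateral information (e.g., a URL) for each marker, so that a click during playback resolves to the link of whatever object occupies that spot in the current frame.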







Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Toshiharu Sugawara¹
  • Satoshi Kurihara²
  • Shigemi Aoyagi¹
  • Koji Sato³
  • Toshihiro Takada¹

  1. NTT Communication Science Laboratories, Kyoto, Japan
  2. NTT Network Innovation Laboratories, Tokyo, Japan
  3. NTT Cyberspace Laboratories, Yokosuka, Kanagawa, Japan
