Exploiting SenseCam for Helping the Blind in Business Negotiations

  • Conference paper
Computers Helping People with Special Needs (ICCHP 2006)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 4061)

Abstract

During business meetings, blind persons are unable to see the meaningful movements and facial gestures of the participants. The formal meeting minutes and/or the participants' conversation during the meeting normally lack this important feedback, which is needed to determine who is in favor of and who is against their proposed suggestions. This is crucial in business negotiations, where one has to convince people and lobby in order to win the business case in upcoming meetings. Devices that instantly and seamlessly capture snapshots anywhere already exist. This paper proposes capturing such data with one of these devices, SenseCam, and then making the resulting snapshots accessible for the benefit of visually impaired users.
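The paper itself provides no implementation. The following Python sketch merely illustrates the workflow the abstract describes, under the assumption that each SenseCam snapshot is stored as an image file and later paired with a human- or machine-authored description of the gestures it shows. All names (AnnotatedSnapshot, meeting_timeline, the sample file names and timestamps) are hypothetical and introduced only for illustration; the output is plain text so that a screen reader or text-to-speech engine could present it to a blind user after the meeting.

# Illustrative sketch only (not from the paper): pair wearable-camera
# snapshots with textual annotations and emit a chronological, plain-text
# timeline that a screen reader can speak back to a blind participant.
from dataclasses import dataclass
from datetime import datetime
from typing import List


@dataclass
class AnnotatedSnapshot:
    """One captured image plus a textual description of what it shows."""
    timestamp: datetime
    image_path: str          # file written by the capture device (hypothetical)
    description: str         # e.g. "Participant A nods while participant B frowns"


def meeting_timeline(snapshots: List[AnnotatedSnapshot]) -> str:
    """Render the annotations as screen-reader-friendly text, ordered in time."""
    lines = []
    for snap in sorted(snapshots, key=lambda s: s.timestamp):
        lines.append(f"{snap.timestamp:%H:%M} - {snap.description}")
    return "\n".join(lines)


if __name__ == "__main__":
    # Hypothetical sample data standing in for annotated SenseCam captures.
    demo = [
        AnnotatedSnapshot(datetime(2006, 3, 5, 10, 15), "img_0001.jpg",
                          "Participant on the left nods at the proposal."),
        AnnotatedSnapshot(datetime(2006, 3, 5, 10, 17), "img_0002.jpg",
                          "Participant opposite crosses arms and shakes head."),
    ]
    print(meeting_timeline(demo))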

Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Karim, S., Andjomshoaa, A., Tjoa, A.M. (2006). Exploiting SenseCam for Helping the Blind in Business Negotiations. In: Miesenberger, K., Klaus, J., Zagler, W.L., Karshmer, A.I. (eds) Computers Helping People with Special Needs. ICCHP 2006. Lecture Notes in Computer Science, vol 4061. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11788713_166

  • DOI: https://doi.org/10.1007/11788713_166

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-36020-9

  • Online ISBN: 978-3-540-36021-6

  • eBook Packages: Computer Science, Computer Science (R0)
