
How Does User’s Access to Object Make HCI Smooth in Recipe Guidance?

  • Atsushi Hashimoto
  • Jin Inoue
  • Takuya Funatomi
  • Michihiko Minoh
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8528)

Abstract

This paper has two aims: first, to provide a flexible framework for developing recipe-guidance systems that display information step by step in response to events recognized in the user's activity; and second, to present an example implementation built on the proposed framework. People working on a task that demands high concentration are easily distracted by interactive systems that require any kind of explicit manipulation. In such situations, recognizing events within the task itself is a useful alternative to explicit manipulation. The framework allows a system designer to incorporate his or her own recognizer into the guiding system. Based on this framework, we implemented a system driven by the user's grabbing and releasing of objects: a grabbed object reveals what the user intends to do next, and releasing the object indicates that the action is complete. In experiments using the Wizard-of-Oz (WOZ) method, we confirmed that these actions work well as switches for the interface. We also summarize some of our efforts toward automating the system.
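
To make the event-driven idea concrete, the sketch below (ours, not the authors' implementation) shows in Python how grab and release events could act as implicit switches for a step-by-step guide. The on_event interface, the class names, and the recipe steps are all hypothetical illustrations of the framework described above.

    # A minimal sketch (not the authors' code) of the plug-in recognizer idea:
    # a designer-supplied recognizer emits GRAB/RELEASE events, and the guide
    # treats them as implicit switches to advance through recipe steps.
    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Callable, List

    class EventKind(Enum):
        GRAB = auto()     # user picks up an object: signals intent to start a step
        RELEASE = auto()  # user puts it down: signals completion of the step

    @dataclass
    class Event:
        kind: EventKind
        obj: str          # recognized object label, e.g. "knife"

    @dataclass
    class Step:
        obj: str          # object this step is tied to
        instruction: str

    class RecipeGuide:
        """Shows instructions step by step, driven only by recognized events."""

        def __init__(self, steps: List[Step], display: Callable[[str], None] = print):
            self.steps = steps
            self.display = display
            self.current = 0

        def on_event(self, event: Event) -> None:
            if self.current >= len(self.steps):
                return  # recipe finished; ignore further events
            step = self.steps[self.current]
            if event.kind is EventKind.GRAB and event.obj == step.obj:
                # Grabbing the step's object tells us what the user is about to do.
                self.display(f"Step {self.current + 1}: {step.instruction}")
            elif event.kind is EventKind.RELEASE and event.obj == step.obj:
                # Releasing the object indicates the step is complete; advance.
                self.current += 1

    # Any recognizer, whether vision-based, load-sensor-based, or a Wizard-of-Oz
    # operator entering events by hand, can drive the guide via on_event.
    guide = RecipeGuide([
        Step("knife", "Chop the carrot into thin slices."),
        Step("pan", "Heat the pan and fry the slices."),
    ])
    guide.on_event(Event(EventKind.GRAB, "knife"))     # shows step 1
    guide.on_event(Event(EventKind.RELEASE, "knife"))  # completes step 1
    guide.on_event(Event(EventKind.GRAB, "pan"))       # shows step 2

In the paper's WOZ experiments a human operator played the recognizer's role; an automated recognizer can later be swapped in behind the same event interface without changing the guide.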

Keywords

Augmented Reality, Partial Order Relation, Navigation Algorithm, Load Sensor, Child Process



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Atsushi Hashimoto (1)
  • Jin Inoue (1)
  • Takuya Funatomi (1)
  • Michihiko Minoh (1)
  1. Kyoto University, Kyoto, Japan
