
Investigating Effects of Users’ Background in Analyzing Long-Term Images from a Stationary Camera

  • Koshi Ikegawa
  • Akira Ishii
  • Kazunori Okamura
  • Buntarou Shizuki
  • Shin Takahashi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10904)

Abstract

Images recorded over a long term by a stationary camera have the potential to reveal various facts about the recorded target. We have been developing an analysis system with a heatmap-based interface designed for the visual analytics of long-term images from a stationary camera. In our previous study, we experimented with participants who appeared in the images (recorded participants). In this study, we conducted a further experiment with participants who did not appear in the images (unrecorded participants) to reveal the discoveries that such participants obtain. By comparing the results of participants with different backgrounds, we investigated the differences in their discoveries, the functions they used, and their analysis processes. The comparison suggests that unrecorded participants tended to discover facts about the environment, whereas recorded participants tended to discover facts about people. Moreover, it suggests that the number of discoveries made by unrecorded participants was comparable to that of recorded participants.
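The paper itself details the heatmap-based interface; purely as an illustration of the general idea, the sketch below accumulates a per-pixel activity heatmap from a stationary-camera recording by simple frame differencing. This is a minimal sketch using OpenCV and NumPy, not the authors' system; the input file name and the change threshold of 25 are assumptions.

    # Illustrative only: accumulate a per-pixel activity heatmap from a
    # stationary-camera video via frame differencing. Not the authors'
    # implementation; file name and threshold are hypothetical.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("long_term_recording.mp4")  # hypothetical input
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    heat = np.zeros(prev_gray.shape, dtype=np.float64)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev_gray)
        heat += (diff > 25)  # count pixels that changed noticeably this frame
        prev_gray = gray
    cap.release()

    # Normalize counts to 0-255 and render with a color map for inspection.
    norm = cv2.normalize(heat, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    cv2.imwrite("activity_heatmap.png", cv2.applyColorMap(norm, cv2.COLORMAP_JET))

Regions where motion occurs frequently accumulate large counts and appear "hot" in the rendered map, which is the kind of long-term overview a heatmap interface can provide at a glance.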

Keywords

Data visualization · Big data management · Information presentation · Heatmap · Surveillance system · Visual analytics · Lifelog


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Koshi Ikegawa (1)
  • Akira Ishii (1)
  • Kazunori Okamura (1)
  • Buntarou Shizuki (1)
  • Shin Takahashi (1)

  1. University of Tsukuba, Tsukuba, Japan
