Abstract
The proliferation of surveillance cameras has created new privacy concerns: people are captured daily without explicit consent, and the video is kept in databases for long periods. With the increasing popularity of wearable cameras such as Google Glass, the problem is set to grow substantially. An important computer vision task is to enable a person (“subject”) to query the video database (“observer”) as to whether he or she has been captured on video. Following a positive answer, the subject may request a copy of the video, or ask to be “forgotten” by having the video erased from the database. Such queries should possess two properties: (i) the query should not reveal additional information about the subject, which would further breach his or her privacy; (ii) the query should certify that the subject is indeed the captured person before the video is sent or erased. This paper presents a possible solution for the case in which the subject wears a head-mounted camera, e.g. Google Glass. We propose to create a unique signature, based on the pattern of head motion, that can verify that the subject is indeed the person seen in a video. Unlike traditional biometric methods (face recognition, gait recognition, etc.), the proposed signature is temporally volatile and can identify the subject only at a particular time; it is of no use for any other place or time.
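The head-motion signature described above can be sketched as a per-frame aggregate of instantaneous image displacements. The following is a minimal illustration, not the authors' implementation: the function name, the smoothing window, and the assumption that displacement samples (e.g. from sparse optical flow) are already available per frame are all ours.

```python
import numpy as np

def head_motion_signature(flow_vectors, smooth=5):
    """Build a temporal motion signature from per-frame displacement samples.

    flow_vectors: sequence of (N_i, 2) arrays, one per video frame, holding
      2-D displacement samples (e.g. from sparse optical flow).
    smooth: width of a simple moving-average filter, to suppress jitter.

    Returns an (F, 2) array: the smoothed mean (dx, dy) for each of F frames.
    """
    # One mean displacement vector per frame.
    sig = np.array([np.asarray(v, float).mean(axis=0) for v in flow_vectors])
    # Box-filter each component over time; "same" keeps the length F.
    kernel = np.ones(smooth) / smooth
    return np.column_stack(
        [np.convolve(sig[:, i], kernel, mode="same") for i in range(2)]
    )
```

A signature built this way is a short time series tied to a specific moment, which is what makes it temporally volatile rather than a permanent biometric.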
Notes
- 1. We note that inertial sensors (as used in [11]) could also have been used for computing head-activity signatures. However, our experiments with such sensors yielded a very noisy signal that is not useful for our case. Moreover, the requirement of additional hardware restricts the potential application areas, whereas an image-based solution widens the scope of application.
- 2. The observed displacement in the observer’s signature can be in phase with, or opposite to, the subject’s signature, depending on whether the observer sees the subject from the front or the back.
- 3. Any unequal division of the total score requirement would be more difficult to meet.
- 4. In our implementation, the dimension of the vectors (the length of the signature) is usually more than \(200\) frames (corresponding to \(3\)–\(4\) s of video at \(60\) frames per second). This is a sufficiently large dimension for the proposed probabilistic analysis.
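Notes 2 and 4 suggest a natural matching score: a Pearson correlation between two signatures whose sign is ignored, since the observer may see the subject from the front (anti-phase) or the back (in-phase). The sketch below is our own illustration under those assumptions; the function name and the zero-variance guard are not from the paper.

```python
import numpy as np

def signature_score(a, b):
    """Absolute Pearson correlation between two 1-D motion signatures.

    Taking the absolute value makes the score invariant to whether the
    displacements are in phase or in anti-phase (front vs. back view).
    Returns a value in [0, 1]; 0 if either signature has zero variance.
    """
    a = np.asarray(a, float) - np.mean(a)   # center each signature
    b = np.asarray(b, float) - np.mean(b)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return abs(float(a @ b) / denom) if denom > 0 else 0.0
```

With signatures of more than 200 frames, as in note 4, chance correlations between unrelated signatures concentrate near zero, which is what makes a high absolute correlation a meaningful match.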
References
Shiraga, K., Trung, N.T., Mitsugami, I., Mukaigawa, Y., Yagi, Y.: Gait-based person authentication by wearable cameras. In: International Conference on Networked Sensing Systems, pp. 1–7 (2012)
Yao, A.C.C.: How to generate and exchange secrets. In: FOCS, pp. 162–167 (1986)
Avidan, S., Butman, M.: Blind vision. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3953, pp. 1–13. Springer, Heidelberg (2006)
Upmanyu, M., Namboodiri, A., Srinathan, K., Jawahar, C.: Efficient privacy preserving video surveillance. In: ICCV, pp. 1639–1646 (2009)
Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, New York (2003)
Raudies, F., Neumann, H.: A review and evaluation of methods estimating ego-motion. CVIU 116, 606–633 (2012)
Castle, R.O., Klein, G., Murray, D.W.: Video-rate localization in multiple maps for wearable augmented reality. In: IEEE ISWC (2008)
Wu, C.: VisualSFM: A visual structure from motion system. http://ccwu.me/vsfm/
VISCODA: Voodoo camera tracker. http://www.digilab.uni-hannover.de/
Poleg, Y., Arora, C., Peleg, S.: Temporal segmentation of egocentric videos. In: CVPR (2014)
Spriggs, E., Torre, F.D.L., Hebert, M.: Temporal segmentation and activity classification from first-person sensing. In: CVPRW (2009)
Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: IJCAI, pp. 674–679 (1981)
Baker, S., Scharstein, D., Lewis, J.P., Roth, S., Black, M.J., Szeliski, R.: A database and evaluation methodology for optical flow. IJCV 92, 1–31 (2011)
Enzweiler, M., Gavrila, D.: Monocular pedestrian detection: survey and experiments. TPAMI 31, 2179–2195 (2009)
Ramanan, D.: Part-based models for finding people and estimating their pose. In: Moeslund, T.B., Hilton, A., Krüger, V., Sigal, L. (eds.) Visual Analysis of Humans, pp. 199–223. Springer, London (2011)
Kidron, E., Schechner, Y.Y., Elad, M.: Pixels that sound. In: CVPR, pp. 88–95 (2005)
Schmid Jr., J.: The relationship between the coefficient of correlation and the angle included between regression lines. J. Educ. Res. 41, 311–313 (1947)
Wikipedia: Pearson product-moment correlation coefficient. http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient
Fathi, A., Hodgins, J.K., Rehg, J.M.: Social interactions: a first-person perspective. In: CVPR (2012)
Bradski, G.: OpenCV ver. 2.4.3 (2013)
Ramanan, D., Zhu, X.: Face detection, pose estimation, and landmark localization in the wild. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 2879–2886 (2012)
Ho, H.T., Chellappa, R.: Automatic head pose estimation using randomly projected dense SIFT descriptors. In: ICIP, pp. 153–156 (2012)
Acknowledgement
This research was supported by Intel ICRC-CI, by the Israel Ministry of Science, and by the Israel Science Foundation.
Copyright information
© 2015 Springer International Publishing Switzerland
Cite this paper
Poleg, Y., Arora, C., Peleg, S. (2015). Head Motion Signatures from Egocentric Videos. In: Cremers, D., Reid, I., Saito, H., Yang, M.H. (eds) Computer Vision -- ACCV 2014. Lecture Notes in Computer Science, vol. 9005. Springer, Cham. https://doi.org/10.1007/978-3-319-16811-1_21
Print ISBN: 978-3-319-16810-4
Online ISBN: 978-3-319-16811-1