Abstract
360° realistic content is omnidirectional media that covers the front, back, left, right, top, and bottom views. It is produced by stitching together images captured with two or more cameras; consequently, 4K UHD resolution is typically required to represent all directions, and distortion occurs in every direction, most severely at the top and bottom. In this paper, we propose a feature point extraction and similarity comparison method for 360° realistic images based on representative frame extraction and distortion correction. In the proposed method, the regions of an extracted frame with little distortion, i.e., the front, back, left, and right views, are first corrected with a rectangular coordinate system, while the most heavily distorted regions, the top and bottom views, are excluded. The frame sequence is then partitioned into runs of similar frames, and a representative frame is selected for each run. Feature points are extracted from the distortion-corrected representative frames, so that similarity can be compared against subsequent query images. Experiments show that the proposed method compares images faster than existing methods, and that it remains advantageous as the amount of data stored on the server grows.
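The representative-frame selection step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the frames are stood in for by precomputed grayscale histograms, the histogram-intersection similarity measure and the `threshold` value are assumptions, and the choice of the middle frame of each run as its representative is a hypothetical policy.

```python
def hist_similarity(h1, h2):
    """Histogram intersection similarity, normalized to [0, 1]."""
    inter = sum(min(a, b) for a, b in zip(h1, h2))
    total = sum(h1)
    return inter / total if total else 0.0

def select_representatives(frames, threshold=0.9):
    """Split the frame sequence wherever the similarity between
    consecutive frames drops below the threshold, then return the
    index of the middle frame of each run as its representative."""
    groups, current = [], [0]
    for i in range(1, len(frames)):
        if hist_similarity(frames[i - 1], frames[i]) >= threshold:
            current.append(i)          # same run of similar frames
        else:
            groups.append(current)     # run ends; start a new one
            current = [i]
    groups.append(current)
    return [g[len(g) // 2] for g in groups]
```

For example, a sequence of two nearly identical frames followed by a visually different one yields two runs, and one representative index per run; feature points would then be extracted only from those representatives rather than from every frame.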
Acknowledgements
This research project was supported by the Ministry of Culture, Sports and Tourism (MCST) and the Korea Copyright Commission in 2019 (2018-360_DRM-9500).
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this chapter
Park, B., Kim, Y., Kim, SY. (2020). A Feature Point Extraction and Comparison Method Through Representative Frame Extraction and Distortion Correction for 360° Realistic Contents. In: Lee, R. (eds) Big Data, Cloud Computing, and Data Science Engineering. BCD 2019. Studies in Computational Intelligence, vol 844. Springer, Cham. https://doi.org/10.1007/978-3-030-24405-7_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-24404-0
Online ISBN: 978-3-030-24405-7
eBook Packages: Intelligent Technologies and Robotics (R0)