
Location Discriminative Vocabulary Coding for Mobile Landmark Search


Abstract

With the popularization of mobile devices, recent years have witnessed an emerging potential for mobile landmark search. In this scenario, the user experience heavily depends on the efficiency of query transmission over a wireless link. Since sending a query photo is time consuming, recent works have proposed to extract compact visual descriptors directly on the mobile end for low-bit-rate transmission. Typically, these descriptors are extracted based solely on the visual content of a query, and the location cues available on the mobile end are rarely exploited. In this paper, we present a Location Discriminative Vocabulary Coding (LDVC) scheme, which achieves extremely low-bit-rate query transmission, discriminative landmark description, and scalable descriptor delivery in a unified framework. Our first contribution is a compact and location-discriminative visual landmark descriptor, which is learnt offline in two steps: First, we adopt spectral clustering to segment a city map into distinct geographical regions, fusing visual and geographical similarities to optimize the partition of city-scale geo-tagged photos. Second, we learn the LDVC in each region with two schemes: (1) Ranking Sensitive PCA and (2) Ranking Sensitive Vocabulary Boosting. Both schemes embed location cues to learn a compact descriptor that minimizes the retrieval ranking loss incurred by replacing the original high-dimensional signatures. Our second contribution is a location-aware online vocabulary adaptation: we store a single vocabulary on the mobile end, which is efficiently adapted for region-specific LDVC coding once the device enters a given region. The learnt LDVC landmark descriptor is extremely compact (typically 10–50 bits with arithmetic coding) and outperforms state-of-the-art descriptors. We implemented the framework in a real-world mobile landmark search prototype, validated on a million-scale landmark database covering typical areas, e.g. Beijing, New York City, Lhasa, Singapore, and Florence.
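The two-step offline learning described above can be sketched in miniature: fuse visual and geographical similarity into a single affinity matrix, spectrally partition the geo-tagged photos into regions, then compress each region's high-dimensional signatures into a few tens of bits. The sketch below substitutes a plain normalized-cut bipartition for the paper's map segmentation and ordinary PCA with sign quantization for Ranking Sensitive PCA; the toy data, dimensions, and parameter values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for geo-tagged photos from two city regions: each photo has a
# GPS coordinate and a high-dimensional visual signature (e.g. a BoW histogram).
n, dim = 40, 128
gps = np.vstack([rng.normal([0.0, 0.0], 0.1, (n, 2)),
                 rng.normal([5.0, 5.0], 0.1, (n, 2))])
vis = np.vstack([rng.normal(0.0, 1.0, (n, dim)),
                 rng.normal(3.0, 1.0, (n, dim))])

def rbf(X, sigma):
    """Gaussian (RBF) affinity between all row pairs of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Fused affinity: elementwise product of visual and geographical similarity,
# so a pair must be close in BOTH cues to be strongly connected.
W = rbf(vis, 10.0) * rbf(gps, 1.0)

# Normalized-cut spectral bipartition: threshold the Fiedler vector
# (second-smallest eigenvector) of the symmetric normalized Laplacian.
d = W.sum(1)
L = np.eye(2 * n) - W / np.sqrt(np.outer(d, d))
_, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
labels = (vecs[:, 1] > 0).astype(int)

# Per-region compact code: PCA (via SVD) down to a handful of components,
# then one sign bit per component -- a few tens of bits per photo.
def compact_code(X, n_bits=16):
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return (Xc @ Vt[:n_bits].T > 0).astype(np.uint8)

codes0 = compact_code(vis[labels == labels[0]])  # 16-bit codes for one region
```

Because the cross-region affinities are nearly zero, the Fiedler vector is close to piecewise constant over the two photo groups, so thresholding it at zero recovers the geographic partition; the per-region PCA then exploits the reduced visual variability within each region, which is the intuition behind learning a separate compact vocabulary per region.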




Author information

Correspondence to Ling-Yu Duan.


About this article

Cite this article

Ji, R., Duan, L., Chen, J. et al. Location Discriminative Vocabulary Coding for Mobile Landmark Search. Int J Comput Vis 96, 290–314 (2012). https://doi.org/10.1007/s11263-011-0472-9


Keywords

  • Mobile landmark search
  • Compact visual descriptor
  • Vocabulary compression
  • Two-way coding
  • Descriptor adaptation
  • System applications