Social Tag Enrichment via Automatic Abstract Tag Refinement
Collaborative image tagging systems such as Flickr are attractive for supporting keyword-based image retrieval, but some social tags on these collaboratively-tagged images are imprecise. To save time and effort, many users tag their images with general or high-level words (i.e., abstract tags), which are too abstract to describe the visual content of social images precisely. As a result, users may not find what they need when they issue queries with more specific keywords. To tackle this problem, a concept ontology is constructed for detecting abstract tags in large-scale social image collections. The co-occurrence contexts of social tags and a k-NN algorithm with Gaussian weighting are used to find the most specific tags that can substitute for the abstract ones. In addition, all the relevant keywords that correspond to intermediate nodes between the high-level concepts (abstract tags) and the object classes (most specific tags) in our concept ontology are added to enrich the tag lists, so that users have a wider choice of keywords for query specification. We have tested our proposed algorithms on two data sets with different images.
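The refinement step described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes each tag is represented by a normalized co-occurrence vector over some fixed context vocabulary, and all names, vectors, and the choice of Euclidean distance are hypothetical. An abstract tag is mapped to the specific tag whose co-occurrence context is nearest, with neighbors ranked by a Gaussian weight on their distance.

```python
import numpy as np

def gaussian_knn_refine(abstract_vec, candidate_vecs, candidate_tags, k=3, sigma=1.0):
    """Rank candidate specific tags for one abstract tag.

    abstract_vec   : co-occurrence context vector of the abstract tag
    candidate_vecs : matrix of context vectors, one row per specific tag
    candidate_tags : names of the specific tags (same row order)
    Returns the k nearest tags, ordered by Gaussian weight (largest first).
    """
    # Euclidean distance between the abstract tag's context and each candidate's
    dists = np.linalg.norm(candidate_vecs - abstract_vec, axis=1)
    nearest = np.argsort(dists)[:k]
    # Gaussian weight: closer contexts get exponentially larger weight
    weights = np.exp(-dists[nearest] ** 2 / (2 * sigma ** 2))
    order = np.argsort(-weights)
    return [candidate_tags[nearest[i]] for i in order]

# Toy example: context vocabulary = (water, grass, sky), made-up counts.
abstract = np.array([0.2, 0.5, 0.3])            # abstract tag "animal"
candidates = np.array([[0.1, 0.6, 0.3],          # "dog"
                       [0.9, 0.0, 0.1],          # "fish"
                       [0.1, 0.2, 0.7]])         # "bird"
suggestions = gaussian_knn_refine(abstract, candidates, ["dog", "fish", "bird"])
print(suggestions)
```

With these toy vectors, "dog" has the most similar co-occurrence context and is ranked first as the specific replacement for "animal". The enrichment step would then add the intermediate ontology nodes on the path between the two concepts to the image's tag list.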
Keywords: tag refinement · tag enrichment · concept ontology · co-occurrence contexts · abstract tags