
Cross-Modal Fashion Search

  • Conference paper
  • MultiMedia Modeling (MMM 2016)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 9517)


Abstract

In this demo we focus on cross-modal (visual and textual) e-commerce search in the fashion domain. In particular, we demonstrate two tasks: (1) given a query image (without any accompanying text), we retrieve textual descriptions that correspond to the visual attributes in the visual query; and (2) given a textual query that may express an interest in specific visual characteristics, we retrieve relevant images (without leveraging textual meta-data) that exhibit the required visual attributes. The first task is especially useful for online stores that want to automatically organize and mine predominantly visual items according to their attributes without human input. The second task helps users find items with specific visual characteristics when no text describing the target image is available. We use state-of-the-art visual and textual features, as well as a state-of-the-art latent variable model, bilingual latent Dirichlet allocation, to bridge textual and visual data. Unlike traditional search engines, we demonstrate a truly cross-modal system that directly bridges visual and textual content without relying on pre-annotated meta-data.
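To give a rough sense of how retrieval in a shared topic space could work, the Python sketch below is a minimal illustration, not the authors' implementation. It assumes a bilingual-LDA-style model has already been trained offline, so that every catalogue item has a topic distribution inferred from its image and, where available, from its text; both demo tasks then reduce to ranking items of the other modality by similarity of topic vectors. All names (rank_by_topic_similarity, the toy item identifiers) and the choice of cosine similarity are illustrative assumptions.

import numpy as np

def cosine(a, b):
    """Cosine similarity between two topic distributions."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rank_by_topic_similarity(query_theta, item_thetas, top_k=10):
    """Rank catalogue items by similarity of their topic vectors to the query.

    query_theta: topic distribution inferred from the query (image or text).
    item_thetas: dict mapping item ids to topic distributions of the other modality.
    """
    scores = [(item_id, cosine(query_theta, theta))
              for item_id, theta in item_thetas.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]

if __name__ == "__main__":
    # Toy example with K = 3 topics: an image query (Task 1) ranked against
    # three hypothetical textual descriptions; all topic vectors are made up.
    query_theta_img = [0.7, 0.2, 0.1]
    text_thetas = {
        "red floral summer dress": [0.65, 0.25, 0.10],
        "black leather ankle boots": [0.05, 0.15, 0.80],
        "striped cotton shirt": [0.30, 0.60, 0.10],
    }
    print(rank_by_topic_similarity(query_theta_img, text_thetas, top_k=2))
    # Task 2 (text query -> images) would call the same function with a
    # text-side topic vector and a dict of image-side topic vectors.

The symmetric treatment of the two tasks is the point of the shared latent space: once both modalities are projected onto the same topics, no pre-annotated meta-data is needed at query time.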


Notes

  1. Examples of our data are available at http://glenda.cs.kuleuven.be/multimodal_search under the ‘Training Data’ tab.

  2. http://www.zappos.com/glossary.


Acknowledgments

We gratefully thank Anirudh Tomer for building the Web interface of our demonstrator. This project is part of the SBO Program of the IWT (IWT-SBO-Nr. 110067).

Author information


Corresponding author

Correspondence to Susana Zoghbi.


Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Zoghbi, S., Heyman, G., Gomez, J.C., Moens, M.-F. (2016). Cross-Modal Fashion Search. In: Tian, Q., Sebe, N., Qi, G.-J., Huet, B., Hong, R., Liu, X. (eds.) MultiMedia Modeling. MMM 2016. Lecture Notes in Computer Science, vol. 9517. Springer, Cham. https://doi.org/10.1007/978-3-319-27674-8_35


  • DOI: https://doi.org/10.1007/978-3-319-27674-8_35

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-27673-1

  • Online ISBN: 978-3-319-27674-8

