A Global-Local Approach to Extracting Deformable Fashion Items from Web Images

  • Lixuan Yang
  • Helena Rodriguez
  • Michel Crucianu
  • Marin Ferecatu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9917)

Abstract

In this work we propose a new framework for extracting deformable clothing items from images using a three-stage global-local fitting procedure. First, a set of initial segmentation templates is generated from a handcrafted database. Then, each template initiates an object extraction process consisting of a global alignment of the model, followed by a local search that minimizes a measure of misfit with respect to potential boundaries in the neighborhood. Finally, the results provided by each template are aggregated, using a global fitting criterion, to obtain the final segmentation. The method is validated on the Fashionista database and on a new database of manually segmented images. Our method compares favorably with Paper Doll clothing parsing and with the recent GrabCut in One Cut foreground extraction method. We quantitatively analyze the contribution of each component, and show examples of both successful segmentations and difficult cases.
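The three-stage procedure described above can be sketched in very rough form. The helper names, the rigid-translation alignment, and the nearest-boundary misfit below are illustrative assumptions, not the authors' actual implementation:

```python
# Hypothetical sketch of the three-stage global-local pipeline from the
# abstract: (1) take candidate templates, (2) globally align then locally
# refine each one, (3) aggregate by keeping the best-fitting result.
# The energy (sum of squared nearest-boundary distances) is an assumption.

def global_align(template, target_centroid):
    """Stage 2a: rigidly translate the template so its centroid
    matches the target centroid (a stand-in for global alignment)."""
    cx = sum(x for x, _ in template) / len(template)
    cy = sum(y for _, y in template) / len(template)
    dx, dy = target_centroid[0] - cx, target_centroid[1] - cy
    return [(x + dx, y + dy) for x, y in template]

def local_refine(contour, boundary_points, steps=5, rate=0.5):
    """Stage 2b: nudge each contour point toward its nearest candidate
    boundary point, greedily reducing the misfit measure."""
    for _ in range(steps):
        refined = []
        for (x, y) in contour:
            bx, by = min(boundary_points,
                         key=lambda b: (b[0] - x) ** 2 + (b[1] - y) ** 2)
            refined.append((x + rate * (bx - x), y + rate * (by - y)))
        contour = refined
    return contour

def misfit(contour, boundary_points):
    """Sum of squared distances from each contour point to its
    nearest potential boundary point."""
    return sum(min((b[0] - x) ** 2 + (b[1] - y) ** 2
                   for b in boundary_points)
               for x, y in contour)

def segment(templates, boundary_points, target_centroid):
    """Stage 3: run every template through align + refine and keep the
    candidate with the lowest global misfit."""
    candidates = [local_refine(global_align(t, target_centroid),
                               boundary_points)
                  for t in templates]
    return min(candidates, key=lambda c: misfit(c, boundary_points))
```

In the paper, the alignment, the boundary-misfit energy, and the aggregation criterion are of course far richer than these stand-ins; the sketch only mirrors the control flow of the three stages.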

Keywords

Clothing extraction · Segmentation · Active contour · GrabCut

References

  1. Chan, T.F., Vese, L.A.: Active contours without edges. IEEE Trans. Image Process. 10(2), 266–277 (2001)
  2. Chen, H., Gallagher, A., Girod, B.: Describing clothing by semantic attributes. In: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (eds.) ECCV 2012. LNCS, vol. 7574, pp. 609–623. Springer, Heidelberg (2012). doi:10.1007/978-3-642-33712-3_44
  3. Chen, Q., Huang, J., Feris, R., Brown, L.M., Dong, J., Yan, S.: Deep domain adaptation for describing people based on fine-grained clothing attributes. In: CVPR, June 2015
  4. Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: CVPR, pp. 886–893 (2005)
  5. Di, W., Wah, C., Bhardwaj, A., Piramuthu, R., Sundaresan, N.: Style finder: fine-grained clothing style detection and retrieval. In: IEEE International Workshop on Mobile Vision, CVPR, pp. 8–13, June 2013
  6. Dollár, P., Zitnick, C.L.: Fast edge detection using structured forests. arXiv (2014)
  7. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093 (2014)
  8. Kalantidis, Y., Kennedy, L., Li, L.J.: Getting the look: clothing recognition and segmentation for automatic product suggestions in everyday photos. In: ACM International Conference on Multimedia Retrieval, pp. 105–112 (2013)
  9. Kaufman, L., Rousseeuw, P.: Clustering by means of medoids. In: Dodge, Y. (ed.) Statistical Data Analysis Based on the L1-Norm and Related Methods, pp. 405–416. North-Holland, Amsterdam (1987)
  10. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Bartlett, P., Pereira, F., Burges, C., Bottou, L., Weinberger, K. (eds.) NIPS, vol. 25, pp. 1106–1114 (2012)
  11. Liu, S., Feng, J., Song, Z., Zhang, T., Lu, H., Xu, C., Yan, S.: Hi, magic closet, tell me what to wear! In: ACM Multimedia, pp. 619–628. ACM (2012)
  12. Liu, S., Song, Z., Liu, G., Xu, C., Lu, H., Yan, S.: Street-to-shop: cross-scenario clothing retrieval via parts alignment and auxiliary set. In: CVPR, pp. 3330–3337 (2012)
  13. Kiapour, M.H., Han, X., Lazebnik, S., Berg, A.C., Berg, T.L.: Where to buy it: matching street clothing photos in online shops. In: ICCV (2015)
  14. Rother, C., Kolmogorov, V., Blake, A.: GrabCut: interactive foreground extraction using iterated graph cuts. ACM Trans. Graph. 23, 309–314 (2004)
  15. Simo-Serra, E., Fidler, S., Moreno-Noguer, F., Urtasun, R.: Neuroaesthetics in fashion: modeling the perception of fashionability. In: CVPR (2015)
  16. Song, Z., Wang, M., Hua, X.S., Yan, S.: Predicting occupation via human clothing and contexts. In: ICCV, pp. 1084–1091. IEEE Computer Society, Washington, DC (2011)
  17. Tang, M., Gorelick, L., Veksler, O., Boykov, Y.: GrabCut in one cut. In: ICCV, pp. 1769–1776. IEEE Computer Society, Washington, DC (2013)
  18. Veit, A., Kovacs, B., Bell, S., McAuley, J., Bala, K., Belongie, S.: Learning visual clothing style with heterogeneous dyadic co-occurrences. In: ICCV, Santiago, Chile (2015)
  19. Yamaguchi, K., Kiapour, M.H., Ortiz, L.E., Berg, T.L.: Retrieving similar styles to parse clothing. IEEE TPAMI 37, 1028–1040 (2015)
  20. Yamaguchi, K., Kiapour, M.H., Berg, T.L.: Paper doll parsing: retrieving similar styles to parse clothing items. In: ICCV, Washington, DC, pp. 3519–3526 (2013)
  21. Yang, Y., Ramanan, D.: Articulated human detection with flexible mixtures of parts. IEEE TPAMI 35, 2878–2890 (2013)

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Lixuan Yang (1, 2)
  • Helena Rodriguez (2)
  • Michel Crucianu (1)
  • Marin Ferecatu (1)
  1. Conservatoire National des Arts et Metiers, Paris, France
  2. Shopedia SAS, Paris, France