Rank Beauty

  • Yanbing Liao
  • Weihong Deng
  • Can Cui
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 663)

Abstract

It is useful to automatically select the most attractive face images from large photo collections. Previous work in this area has paid little attention to facial attractiveness within the photos of a single subject, focusing instead on comparisons across different subjects. In this paper, we collect face images of individual subjects from Bing Search, spanning a range of expressions, poses, makeup, lighting conditions, and resolutions. Given training data of faces scored according to the majority taste of human raters, we train a model to rank novel faces and show how it can be used to automatically mine attractive photos from personal photo collections. Our system achieves an average accuracy of 73% on pairwise comparisons of novel faces.
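The abstract does not specify the features or the learner, only that a ranking model is trained from crowd-scored faces and evaluated by pairwise accuracy. As one concrete illustration, the sketch below uses a generic RankSVM-style pairwise formulation: each pair of scored faces becomes a feature-difference vector labeled by which face the crowd rated higher, and a linear classifier on these differences yields a scoring function for ranking novel faces. All names, dimensions, and data here are hypothetical, not the authors' method.

```python
# Minimal RankSVM-style pairwise ranking sketch (hypothetical; the paper's
# actual features and learner are not given in the abstract).
import numpy as np
from sklearn.svm import LinearSVC

def make_pairwise_dataset(features, scores):
    """Convert per-face features and crowd scores into difference vectors,
    each labeled by which face of the pair was rated more attractive."""
    X_diff, y = [], []
    n = len(scores)
    for i in range(n):
        for j in range(i + 1, n):
            if scores[i] == scores[j]:
                continue  # ties carry no ranking signal
            X_diff.append(features[i] - features[j])
            y.append(1 if scores[i] > scores[j] else -1)
    return np.array(X_diff), np.array(y)

# Toy data: 100 faces with 128-dim features and crowd scores in [0, 1].
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 128))
scores = rng.random(100)

X, y = make_pairwise_dataset(feats, scores)
model = LinearSVC(C=1.0).fit(X, y)

# Pairwise accuracy, as reported in the abstract, is the fraction of pairs
# whose predicted order matches the crowd labels (training pairs shown here
# for illustration; a real evaluation would use held-out pairs).
print("pairwise accuracy:", model.score(X, y))

# The learned weight vector induces a score f(x) = w . x; ranking novel
# faces means sorting them by this score, best first.
w = model.coef_.ravel()
novel = rng.normal(size=(10, 128))
ranking = np.argsort(-(novel @ w))
print("ranking:", ranking)
```

The linear formulation keeps the pair transformation simple; a kernel or neural pairwise learner (e.g., RankNet-style) would slot into the same pipeline by replacing the classifier on the difference vectors.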

Keywords

Facial aesthetics · Crowdsourcing · Ranking

Notes

Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (NSFC) under Grants No. 61375031, No. 61573068, No. 61471048, and No. 61273217, and by the Fundamental Research Funds for the Central Universities under Grant No. 2014ZD03-01. This work was also supported by the Beijing Nova Program, the CCF-Tencent Open Research Fund, and the Program for New Century Excellent Talents in University.

Copyright information

© Springer Nature Singapore Pte Ltd. 2016

Authors and Affiliations

  1. Beijing University of Posts and Telecommunications, Beijing, China
  2. Beijing Jiaotong University, Beijing, China