Enhancing Lipstick Try-On with Augmented Reality and Color Prediction Model

  • Nuwee Wiwatwattana
  • Sirikarn Chareonnivassakul
  • Noppon Maleerutmongkol
  • Chavin Charoenvitvorakul
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 738)

Abstract

One of the important tasks in purchasing cosmetics is the selection process. Swatching is the best way to shade-match the look and feel of a cosmetic, but swatching a lipstick color on the skin is far from a good representation of the color on the lips. This paper develops a virtual lipstick try-on application based on augmented reality and a color prediction model. The goal of the color prediction model is to predict the RGB value of the worn lip color given the undertone color of the lips and a lipstick shade. We study the performance of several learning models, including simple and multiple linear regression, the reduced-error pruning decision tree, the M5P model tree, support vector regression, stacking, and random forests. We find that ensemble methods work best. However, since they win by only a small margin, our application is implemented with a simpler algorithm that is faster to train and to test, the M5P model tree. The detection and tracking of lips are implemented using the facial landmark detection sub-module of the OpenFace toolkit. Measuring prediction accuracy with MAE and RMSE, we demonstrate that our approach of predicting worn lip colors performs better than rendering lipstick shades without prediction. Lipstick shades that resemble human skin tones give more accurate results than dark shades or light pink shades.
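The abstract evaluates the color prediction model with MAE and RMSE over predicted versus measured RGB values. As a minimal sketch (not the authors' code, and with hypothetical sample colors), these two metrics over per-channel RGB errors can be computed as:

```python
import math

def mae(pred, actual):
    # Mean absolute error over every RGB channel of every sample.
    errs = [abs(p - a) for pc, ac in zip(pred, actual) for p, a in zip(pc, ac)]
    return sum(errs) / len(errs)

def rmse(pred, actual):
    # Root mean squared error over the same flattened channel errors.
    errs = [(p - a) ** 2 for pc, ac in zip(pred, actual) for p, a in zip(pc, ac)]
    return math.sqrt(sum(errs) / len(errs))

# Hypothetical predicted and measured worn-lip colors (R, G, B).
predicted = [(180, 80, 90), (200, 120, 130)]
measured = [(175, 85, 95), (205, 115, 125)]
print(mae(predicted, measured))   # 5.0
print(rmse(predicted, measured))  # 5.0
```

RMSE penalizes large channel errors more heavily than MAE, which is why the two are often reported together when comparing regression models as done here.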

Keywords

Virtual makeup · Lipstick color · Mobile augmented reality · M5P · Regression

Notes

Acknowledgements

The work of the first author has been supported by Grant No. 533/2560 from Srinakharinwirot University, Thailand. Undergraduate research students have been supported by SSUP Group—Oriental Princess. The authors express their warmest thanks to SSUP Group—Oriental Princess for sponsoring the lipsticks, and to Dr. Thitima Srivatanakul for invaluable comments. The authors would also like to thank Srinakharinwirot University for the travel grant to present and discuss the research with the community.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Nuwee Wiwatwattana¹
  • Sirikarn Chareonnivassakul¹
  • Noppon Maleerutmongkol¹
  • Chavin Charoenvitvorakul¹

  1. Department of Computer Science, Faculty of Science, Srinakharinwirot University, Bangkok, Thailand