1 Introduction

Since the face is an important part of the body that contributes greatly to overall attractiveness [1], women mainly use make-up as a way of changing the impression of their faces [2]. However, many make-up products exist for each part of the face, such as the cheeks and lips, so it is difficult to choose which products to buy and how to combine them.

One way to resolve this issue is to consult a make-up advisor, who recommends make-up suited to the user's skin color and facial contours; however, the advice offered may differ from the user's own preferences. As a method of reflecting individual preference, consider deciding on a hairstyle at a hair salon: the customer designates a particular style from a magazine or a catalogue. Likewise with make-up, if the user can choose a made-up facial image that she wants to resemble and decide on her make-up from it, we believe each individual's taste can be reflected to the fullest. However, it is not easy to determine the specific make-up method and products that will bring the user's face close to the chosen image.

Therefore, in this paper, we propose a make-up support system based on a facial image chosen by the user, to help the user apply make-up that reflects her taste. The outline of the system is shown in Fig. 1. When the user inputs her chosen facial image and her own facial image, the system presents a make-up simulation image that brings the user's face close to the favorite facial image, together with a list of make-up products needed to realize the simulation. We believe that, by using this system, users can picture their own made-up face reflecting their taste, which makes it easier to choose the right combination of make-up products to realize it. As of April 2017, the iOS application "YUMEKA," which uses part of the results of this paper, is publicly available (http://yumeka.tokyo/).

Fig. 1. Outline of the system

2 System Overview

To assist make-up that reflects the user's taste, the system consists of [a] a color simulation image generation function based on a facial image chosen by the user and [b] a make-up product presentation function that realizes the color simulation image. Make-up involves many factors, including color, texture (e.g., glitter or matte finishes), and materials, but because the resolution of the input image is not always high, we concentrate on color, which is especially important in changing the impression of the face [3]. We therefore target four categories that change the impression by layering color on the face: foundation, eye shadow, cheek (blush), and lipstick.

With regards to [a], the procedure is shown in Fig. 2 and explained below; a short code sketch of the pipeline follows the list.

Fig. 2. Color simulation image generation technique based on the user's preferred facial image

(1) Facial feature points are extracted from the user's facial image and from the favorite facial image; the user manually adjusts the points and saves the result.

(2) From the saved feature points, each facial region from which color is to be extracted is identified.

(3) The representative color of each identified facial region is extracted.

(4) From the representative colors of the corresponding regions in the two facial images, the color to be superimposed on the user's facial image is calculated.

(5) The simulation region to be superimposed on the user's facial image is determined from the feature points, and a mask is generated.

(6) The mask is superimposed on the user's facial image.
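As a minimal illustration of steps (2) through (6), the sketch below uses dlib's 68-point landmark detector and OpenCV. The lip-region indices, mean-color extraction, and alpha-blending rule are our own illustrative assumptions, not the exact rules used in the system.

```python
# Minimal sketch of steps (2)-(6), assuming dlib's 68-point landmarks and OpenCV.
# The averaging and alpha-blending rules below are illustrative assumptions.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# Example landmark indices for the lip region (dlib 68-point convention).
LIP_IDX = list(range(48, 60))

def landmarks(img_bgr):
    """Step (1): detect feature points (manual adjustment omitted)."""
    faces = detector(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY))
    shape = predictor(img_bgr, faces[0])
    return np.array([[p.x, p.y] for p in shape.parts()], dtype=np.int32)

def region_mask(img_bgr, pts, idx):
    """Steps (2) and (5): build a binary mask for one facial region."""
    mask = np.zeros(img_bgr.shape[:2], np.uint8)
    cv2.fillConvexPoly(mask, cv2.convexHull(pts[idx]), 255)
    return mask

def region_color(img_bgr, mask):
    """Step (3): extract a representative (mean) color of the masked region."""
    return cv2.mean(img_bgr, mask=mask)[:3]

def simulate(user_bgr, fav_bgr, idx=LIP_IDX, alpha=0.5):
    """Steps (4) and (6): compute the color to superimpose and blend it in."""
    u_pts, f_pts = landmarks(user_bgr), landmarks(fav_bgr)
    u_mask = region_mask(user_bgr, u_pts, idx)
    f_mask = region_mask(fav_bgr, f_pts, idx)
    f_col = region_color(fav_bgr, f_mask)   # color to superimpose (assumed rule)
    overlay = np.empty_like(user_bgr)
    overlay[:] = np.uint8(f_col)
    blended = cv2.addWeighted(user_bgr, 1 - alpha, overlay, alpha, 0)
    out = user_bgr.copy()
    out[u_mask > 0] = blended[u_mask > 0]
    return out
```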

With regards to [b], the names and numbers attached to the colors of make-up products differ between cosmetic companies. Therefore, we propose a technique that obtains make-up product images from an online shopping site, analyzes each product's color by image processing, and presents products whose colors match the make-up colors extracted from the favorite facial image.
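The sketch below illustrates this matching under simple assumptions: the product swatch images are assumed to have been collected already, the product color is taken as the mean color of the swatch, and products are ranked by Euclidean distance in CIE Lab space. These choices are illustrative, not the system's exact procedure.

```python
# Minimal sketch of the product matching in [b], assuming product swatch images
# have already been collected from an online shopping site. The mean-color
# extraction and CIE Lab distance used here are illustrative assumptions.
import cv2
import numpy as np

def product_color(swatch_bgr):
    """Analyze a product image: use its mean color as the product color."""
    return swatch_bgr.reshape(-1, 3).mean(axis=0)

def to_lab(bgr):
    """Convert one BGR color to CIE Lab for perceptual comparison."""
    px = np.uint8([[bgr]])                       # 1x1 image
    return cv2.cvtColor(px, cv2.COLOR_BGR2LAB)[0, 0].astype(float)

def rank_products(target_bgr, catalog):
    """Rank (name, swatch image) pairs by Lab distance to the extracted make-up color."""
    t = to_lab(target_bgr)
    scored = [(np.linalg.norm(to_lab(product_color(img)) - t), name)
              for name, img in catalog]
    return [name for _, name in sorted(scored)]
```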

3 Experiments

We report the results of a subjective evaluation by users. The assessors were 18 women aged 18 to 27 (mean age 22.2 years, SD 1.9). Each assessor used an iOS application running on an iPad Air 2 and answered a questionnaire. Each question was rated on a scale of 1 to 5 ((a)(b): 1. Do not think so, 2. Probably not, 3. Neither, 4. Probably so, 5. Think so; (c): 1. Do not understand, 2. Hard to tell, 3. Neither, 4. Probably understand, 5. Understand), with answers allowed at 0.5-point intervals, giving 9 grades.

(a) Do you think a color simulation close to your favorite facial image is presented?

(b) Do you think make-up products that will bring you close to your favorite facial image are presented?

(c) Do you understand the combination of make-up reflecting your own taste?

With regards to (a), whether a color simulation close to the favorite facial image is presented, the average score for each make-up category except the cheek was above 4.0. With regards to (b), whether make-up products that bring the user close to the favorite facial image are presented, the average score was above 4.0 for every category. Comments received for (a) indicated that the simulation image made it easy to imagine the actual make-up and increased the desire to purchase the products, for example: "It was easy to imagine what it would be like when the same make-up as in my favorite facial image was applied to my face, and it made me want to buy that product." We were thus able to confirm that the system is useful for presenting simulation images. A comment received for (b) was "I was able to understand which cosmetics to use and which colors to combine to bring me closer to my favorite facial image," confirming that the system is useful for presenting make-up products (Fig. 3).

Fig. 3. Average and SD of questionnaire evaluation results (*: p < 0.001)

With regards to (c), whether the user understands the make-up combination reflecting her taste, a paired t-test between the scores before and after system use showed a significant difference at the 0.1% level (p < 0.001), with the score before use lower than after use. Because a significant increase in the level of understanding was observed, presenting a simulation image and make-up products that bring the user closer to her favorite facial image is highly likely to help her find make-up reflecting her taste. However, since the increased level of understanding did not exceed 3.5 points, the information provided by the system may not be sufficient for doing make-up that actually approaches the preferred facial image. From comments such as "I could find the right combination, but I could not understand how to apply the make-up," technical advice on how to use the make-up products appears necessary to further improve the level of understanding.
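For reference, the before/after comparison can be reproduced with a paired t-test as sketched below; the data layout (two equal-length score lists for the 18 assessors) is an assumption for illustration.

```python
# Sketch of the before/after comparison for question (c), assuming the 18
# assessors' scores are stored as two equal-length lists (illustrative layout).
from scipy import stats

def compare_understanding(before, after):
    """Paired t-test on understanding scores before and after system use."""
    t, p = stats.ttest_rel(after, before)
    return t, p  # p < 0.001 corresponds to the 0.1% significance level reported
```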

There was also the comment "I did not know which cheek color I was after, but after using this system it was good to find out that it was not pink as I had thought but rather an orange tone," so the system also led an assessor who did not understand her own preference to discover it. We were therefore able to confirm the usefulness of the system in helping users apply make-up that brings them closer to the preferred face reflecting their taste.

4 Conclusion

We proposed a support system that presents a simulation image and products that bring the user closer to the colors of a facial image she desires. The results of a subjective user evaluation confirmed that the proposed system presents a color simulation image and make-up products close to the colors of the favored facial image, and that this assists the user in combining make-up that reflects her preference.