
Prediction of the layered ink layout for 3D printers considering a desired skin color and line spread function


We propose a layout estimation method for multi-layered ink using measurements of the line spread function (LSF) and machine learning. The three-dimensional printing market for general consumers focuses on the reproduction of realistic appearance. In particular, for the reproduction of human skin, it is important to control translucency by adopting a multilayer structure. Traditionally, layer design has depended on the experience of designers. We therefore developed an efficient layout estimation method that provides arbitrary skin color and translucency. In our method, we create multi-layered color patches of human skin, measure the LSF as a metric of translucency, and train a neural network on the resulting data to estimate the layout. As an evaluation, we measured the LSF of computer-graphics-created skin and fabricated skin using the estimated layout; evaluation with the root-mean-square error showed that the obtained color and translucency are close to the target.


Three-dimensional (3D) printers have been widely used in recent years for design assessment and rapid prototyping in industrial fields. There are various modeling methods for 3D printers, such as fused filament, stereolithography, selective laser sintering, and inkjet methods [1]. The most accurate type is stereolithography modeling, which is used in the medical [2] and dental [3] fields, although there are limitations in terms of printing material. Parts of automobiles [4] and consumer concept models have recently been made using inkjet-type 3D printers with fine jetting. This inkjet type has the unique and superior feature of creating a colored product [5]. Furthermore, by combining with various surface fabrication techniques, other applications, such as welfare devices and architectural models, are expected to be realized in the foreseeable future.

Many studies have aimed to improve the reproducibility of 3D printers. Error-diffusion halftoning is a technique that allows smooth tonal representation in two-dimensional (2D) printing, and one study applied this technique to 3D printing [6]. This method allows the detailed representation of color in 3D printing within the limits of the available 3D materials and inks. Other studies have investigated material appearance, with most focusing on the control of translucency. One study proposed a method of reproducing complex scattering properties [7], where the scattering properties of several materials are measured and radial reflection and scattering profiles are generated. This allows the proper arrangement of material in the depth direction and reproduces complex scattering properties. In a study [8] that reproduced complex light scattering using the BSSRDF (bidirectional scattering surface reflectance distribution function), a function that represents subsurface scattering, materials with different scattering effects were stacked with varying spatial thicknesses to represent inhomogeneous scattering. When modeling with translucent materials, it is difficult to retain a texture that contains detailed information because lateral light scattering in a highly translucent material blurs the surface texture. A study solved this problem [9] by adopting an inverse Monte Carlo simulation-based method to optimize the material arrangement under the surface. Another study adopted the alternative approach of mixing translucent materials [10]. In that study, the concentrations of a mixture of several translucent materials were estimated for the reproduction of the desired appearance and scattering properties. Furthermore, a recent study [11] proposed a method of performing full-color modeling with spatially varying translucency using RGBA signals instead of the BSSRDF, which has high measurement and processing costs.
Here, A denotes the signal for translucency while RGB denotes the red, green, and blue signals. The accuracy of this method was further enhanced in a study [12] that optimized the signal A to link to both optical material properties and human perceptual uniformity, independent of hardware and software.

The ability to express appearance, including color, surface reflectance, and translucency, is important for faithful reproduction in industrial applications. Taking advantage of a polisher and a clear-coat layer, the inkjet 3D printer can reproduce various properties of surface reflection with glossiness. In addition, it is possible to express translucency using a stack of thin layers. As an application of 3D printing, the market for character figures for general consumers is expanding, and these figures are required to have a realistic appearance of human skin. As an example, the skin of a Japanese humanoid doll has been realized by combining beige and red layers for the realistic shading of a muscled body. This use of translucency for stomach muscles provides a dynamic volume and vivacious appearance for a realistic human body. The application of translucency is essential to the formation of a natural-looking object, although the disadvantage is a long manufacturing time because of the thin layers. In addition, there are further difficulties in choosing the colors of many layers and deciding their order for the realization of translucency. In general, the translucency and scattering property depend on the materials of the inkjet 3D printer. In the case of a few layers, it is possible to estimate the result of reproduction, and a professional designer may be able to decide an appropriate combination of color layers on the basis of experience. However, it is difficult to determine the combination of layers that achieves the desired color and degree of dispersion in the case of many layers.

In the present work, therefore, we propose a method of estimating an appropriate combination of multiple layers that realizes modeling with the desired color and translucency. Figure 1 outlines the proposed method. The target color and translucency are first derived from the rendering engine on the basis of the designer’s concept. In the estimation process, we employ machine learning with a neural network. This network calculates the best selection of the layer structure for the desired color and translucency. The results of our research can be applied not only to character figures but also to paintings and cosmetic surgery. We note that the present research is limited in that the target object comprises only human skin with multiple layers and that translucency is prioritized over other aspects of appearance, such as color and surface reflectance.

Fig. 1
figure 1

Outline of our proposed method

Related works

Color reproduction using a 3D inkjet printer has been developed adopting approaches different from those adopted for a 2D inkjet printer. The use of 2D approaches for 3D printing, such as in the case of the halftone technique, often generates artifacts because the dot generated by the 3D printer is larger than that generated by the 2D printer. Brunton et al. proposed a novel traversal algorithm for voxel surfaces to reduce the presence of these artifacts [6]. They accomplished faithful color reproduction, color gradients, and fine-scale details. However, their algorithm loses image contrast owing to overpainting.

Instead of halftoning, Babaei et al. adopted contoning to realize a wide color gamut [13]. This method must consider the combination of inks with various thicknesses inside the object’s volume. Babaei et al., therefore, proposed a color prediction model based on Kubelka–Munk absorption. Although it was difficult to achieve perfect color matching with only a few layers of CMYK inks, there is the possibility of realizing fine reproduction with the 3D inkjet printer by adopting contoning with multiple ink layers. Moreover, this research highlights the importance of controlling the line spread function (LSF) and the color relating to layer overlap.

As a measurement-based method, Shi et al. attempted faithful color matching using multi-spectral imaging and machine-learning techniques, as shown in Fig. 2 [14]. In their work, they used two neural networks to learn, in an adversarial manner, the color reproduction of a combination of layers and the gamut achievable with multiple layers. They first obtained the spectral reflectance of 20,878 patches and trained a network (F) to associate the patches with their layouts; F is a model that predicts the spectral reflectance from the layout. Using this model, the layout prediction model (B) was trained to minimize the errors of loss functions with respect to the output spectrum. Here, E_spec is a loss function that minimizes the difference between the spectrum of the sample (painting) and that predicted by F. E_LAB is a loss function that expresses the difference in the LAB color space between the spectrum of the sample and the spectrum from F. Finally, E_thick is a loss function that minimizes the thickness of the color layers; this function is needed to suppress the thickness of the color layers and the resulting increase in dot gain. The sum of the above three loss functions was taken as the total loss function of model B.

Fig. 2
figure 2

The network structure by L. Shi et al. [14]

The method proposed by Shi et al. achieves state-of-the-art color matching even when many colored layers are needed. However, we think that it is necessary to consider translucency when the scattering characteristic differs for each color layer. In particular, in the reproduction of human skin, it is important to control translucency in the stack of layers. Therefore, in our research, we use the LSF as a metric of translucency, which represents the spread of light due to lateral light scattering. In the field of computer graphics, Jensen et al. showed that the translucency of human skin is caused by subsurface scattering of light [15], and they used the point spread function to express the translucency of skin. In addition, we focus on the control of translucency using measurements and a machine-learning method.

Overview of the methodology

We present an overview of our methodology. We first create many multi-layered 3D-printed samples that imitate human skin as shown in Sect. 4. We next obtain the LSF of each created sample through actual measurement using the method shown in Sect. 5. The dataset obtained in this way (i.e., the relationship between the multi-layered ink layout and LSF) is used to train a neural network and thus estimate the layout. This process is explained in Sect. 6. Finally, in Sect. 7, we compare the LSF of the skin in the computer graphics (CG) simulation with the LSF of the actual print using the root-mean-square error (RMSE) and discuss the results of evaluating the accuracy of the network.

Definition of a color patch for human skin

In reproducing human skin with a 3D printer, we define a skin model based on the biological skin structure. Human skin has a layered structure and can be roughly divided into the epidermis, dermis, and subcutaneous tissue. Such a biological model of skin is often used in the fields of measurement and simulation [16,17,18]. The epidermis contains melanin pigment, and below it lies the dermis, which contains blood. The color of the dermis is determined by the colors of the oxidized and deoxidized hemoglobin present in the blood. Since Donner et al. and Tsumura et al. have shown that the major factors determining skin color are melanin in the epidermal layer and hemoglobin in the dermal layer [17, 18], we consider that a two-layer structure is sufficient for color reproduction in 3D printing. Therefore, in this study, translucency is controlled simply by the layout of flat layers; the more complex structure of the skin was ignored owing to the limitation of experimental resources. Here, the total number of layers should be fixed in consideration of the number of combinations and the convenience of patch creation. We therefore use clear ink, which has almost no scattering or absorption effect: if the number of color layers is less than the total number of layers, the upper layers are filled with clear layers.

We next selected colors for each of the three types of layer. Each epidermal layer (epidermis) is a brown layer acting as the cover of the skin, while each dermal layer (dermis) is a red layer in which blood flows. The colors applied to each layer are defined by CMYK values because CMYK values are required when each layer is fabricated with a 3D printer. In addition to CMYK ink, clear ink can also be used; the clear layer is fabricated with 100% clear ink. For the epidermal layer, three colors were selected entirely empirically by the authors to represent various types of skin color, assuming typical Asian skin. For the dermal layer, 10 different red colors were selected to represent blood. In total, 14 different layer colors were combined to create the skin with multi-layered ink. The reason for limiting the number of layer colors to 14 is that the cost of creating patches on a 3D printer is high and the number of combinations thus needs to be limited. As outlined in Fig. 3, these layers are combined to create a number of multilayer layouts. In this study, the number of patch layers is set to 10 because printing experiments showed that the color became almost black if the number of coloring layers exceeded 10. We therefore considered that the LSF and color range could be appropriately controlled by changing the layout within the 10 layers. The order of the layers is fixed as clear, epidermis, and dermis from top to bottom to imitate the original skin structure. Furthermore, as mentioned above, we need to limit the number of combinations, and we thus restrict ourselves to selecting at most one clear layer color, one epidermis color, and one dermis color; that is, zero or one type of clear, epidermal, or dermal ink is selected (although it is not possible to select no layers at all). Under these conditions, the total number of combinations is 1412.
In addition, the machine we use is capable of molding 625 patches at once, which means that at least three print runs are needed. We therefore set the number of patches, with some redundancy, to 1875, the maximum that can be formed in three runs.
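These constraints can be made concrete with a small helper that assembles a layout vector. The 14-element representation and the index assignment below (clear first, then the three epidermis browns, then the ten dermis reds) are our illustrative assumptions, not the paper's exact encoding.

```python
def make_layout(clear, epi_color, epi_n, derm_color, derm_n):
    """Build a 14-element layout vector: index 0 = clear ink,
    indices 1-3 = the three epidermis browns, indices 4-13 = the
    ten dermis reds (index assignment is illustrative).
    epi_n / derm_n are layer counts; 0 means the group is unused."""
    layout = [0] * 14
    layout[0] = clear
    if epi_n:
        layout[1 + epi_color] = epi_n
    if derm_n:
        layout[4 + derm_color] = derm_n
    # the total thickness is fixed at 10 layers in this study
    assert sum(layout) == 10, "total thickness must be 10 layers"
    return layout
```

For example, `make_layout(3, 0, 3, 2, 4)` describes 3 clear layers over 3 layers of the first brown and 4 layers of the third red.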

Fig. 3
figure 3

Conditions for making human skin color patches

As the final step, we made test color patches for the simulation of skin color and translucency. Each color patch was 1 cm square, and the total number of combined patches was 1875. The thickness of each patch is 0.3 mm, which can be obtained with high accuracy because a surface-flattening roller can be used. An opaque white margin was set between patches to prevent the ink of each patch from bleeding and the light incident on any patch from striking an adjacent patch. Although this margin may limit the spread of light, the white margin around each patch was not used in calculating the LSF. In addition, white ink was placed on the back of each patch to prevent the transmission of incident light and to prevent the printed plate from bending. A white base layer is essential in full-color 3D printing to reflect light and reproduce colors; in our experiment, we therefore assumed that white ink is always set in the base layer. If this white ink were changed to another color, the translucency and color would change; in this paper, the ink color of the base layer was not changed throughout the experiments. An inkjet 3D printer (3DUJ-553, MIMAKI ENGINEERING) was used to create the patches. The color gamut in RGB space of the created patches is shown in Fig. 4. The RGB data for each patch are taken from the LSF data obtained in the next section. A typical skin color is shown with a red dot for reference [19]. It can be seen that the color gamut of the patches contains colors like Mongolian skin.

Fig. 4
figure 4

A color gamut in RGB space of the fabricated patch

Acquisition of the LSF

We measured the LSF as an index of translucency to reproduce human skin [20]. Actual skin has a complex structure, and in more practical applications it is necessary to consider the LSF in two directions. In particular, the anisotropy of optical scattering is important for a more realistic reproduction of skin. However, in this study, owing to the limitation of experimental resources, we assumed that optical scattering in the skin is isotropic. Measuring the anisotropy of optical scattering remains future work.

The setting of the measuring devices is shown in Fig. 5. A total of 1875 color patches were printed on three sheets in a 25 × 25 grid. By illuminating each patch with an edge image and capturing the result with a camera, the change in pixel value near the edge was obtained and the LSF was measured. Owing to the characteristics of the UV-curing inks used in 3D printers, the color patches have a slight gloss. However, in our measurement, specular reflection was not observed in the captured image even with the geometric arrangement of light source, sample, and camera shown in Fig. 5. In future work, this geometric setup should be improved. A laser projector (Smart Beam Laser, United Object) was used as an illuminator to project a line onto each patch. This projector had 1280 × 720 pixels, and each pixel could be controlled using a liquid-crystal-on-silicon device. Even with a laser projector, there is a spatial intensity distribution; in this study, we ignored this distribution and assumed that it is uniform. Laser light of three wavelengths was used for edge projection; its spectral distribution is shown in Fig. 6. It is known in color engineering that a light source with a spiky spectral distribution reduces the accuracy of color measurement because of its poor color rendering properties (color rendering index) [21]. In contrast, white light, which has a wide spectral distribution when combined with usual color filters, exhibits very high color rendering properties. In this research, we used a laser light source because the optical setup is very easy: owing to the focus-free property of the laser projector, we do not need to consider the distance between the light source and the sample. From a spectroscopic point of view, there is room for improvement in the light source. An image with 25 × 25 edges, as shown in Fig. 7, was created and projected onto the color patches using the laser projector.
This illuminated the right half of all patches, as shown in Fig. 8. Since it is difficult to accurately align the projected pattern on 625 patches, there is a slight inclination of the edge at each patch. This slight inclination causes a slight blur in the obtained LSF, since we average the transitions of pixel values over multiple lines along the edge. A magnified view of one of the patches in Fig. 8 is shown in Fig. 9a. A camera (α5100, SONY) with 6000 × 4000 pixels was used as the capturing device. We set the shutter speed to 1 s and the ISO setting to 100. The plate was 30 cm in size with 1-cm patches, and the camera thus had a sufficient number of pixels for measurement of the LSF of each patch. As a result, we were able to capture each patch with 100 × 100 pixels.
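An edge image like the one in Fig. 7 can be generated programmatically. The sketch below assumes the 25 × 25 cells tile the 1280 × 720 projector frame evenly (the exact alignment to the printed patches is not specified in the text) and lights the right half of every cell to produce a vertical edge on each patch.

```python
import numpy as np

def edge_pattern(width=1280, height=720, grid=25):
    """Binary edge image for the projector: within each grid cell the
    right half is lit, producing a vertical edge on every patch.
    Resolution and grid size follow the paper; even tiling is assumed."""
    img = np.zeros((height, width), dtype=np.uint8)
    cell_w = width // grid
    cell_h = height // grid
    for gy in range(grid):
        for gx in range(grid):
            x0, y0 = gx * cell_w, gy * cell_h
            # illuminate the right half of this cell
            img[y0:y0 + cell_h, x0 + cell_w // 2:x0 + cell_w] = 255
    return img
```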

Fig. 5
figure 5

Setting for imaging the color patch

Fig. 6
figure 6

Spectral distribution of the laser projector

Fig. 7
figure 7

Image input to the projector

Fig. 8
figure 8

Color patches illuminated by the laser projector

Fig. 9
figure 9

Processing flow for LSF acquisition

An example of a projected color patch is shown in Fig. 9. Many color patches exist for each color, and it is thus necessary to automate each LSF calculation for the purpose of efficiency. First, the right half of a patch was illuminated, as shown in Fig. 9a. We therefore obtained the change in pixel value from left to right in the middle of the patch (y = 50 in Fig. 9a) to measure the LSF. The obtained results are shown in Fig. 9b. However, there is much noise in the transition of the acquired pixel values. This is due to the speckle noise caused by the use of laser light and the limited resolution of the captured image, which is only 100 × 100 pixels. We therefore averaged the pixel-value transitions from left to right in the vertical direction. Differentiation was then performed on the data from which the noise had been removed. The result is shown in Fig. 9c. The LSF was separated into RGB signals to obtain patch color information. Here, even though the LSF is usually assumed to be isotropic and uniform, the differential value differs before and after the peak owing to the effect of noise. Therefore, assuming isotropy, only the side before the peak (from 0 to the x of the maximum value) was used. After smoothing to remove the noise of the differential values, data having values less than zero were set to zero. In addition, there are patches where the edge is off-center because the edges irradiate many patches at once. Therefore, to align the format of the LSF data, we shifted the data so that the peak is at the right end (x = 99). The LSF obtained through the above process is shown in Fig. 9d. The same process was applied to all 1875 skin patches. Some of the measurement results are shown in Fig. 10. It can be seen that as the number of colored layers decreases, the LSF shows a larger light spread. This indicates that the LSF can be controlled by the ink layout.
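The processing chain above can be sketched as follows for a single color channel; the smoothing window size and the per-sample normalization range are our assumptions.

```python
import numpy as np

def extract_lsf(patch, smooth=5):
    """Estimate a 100-sample LSF from a (100, 100) edge-illuminated patch
    (one color channel, intensity-linear), following the steps in the text:
    average rows, differentiate, smooth, clip negatives, keep only the side
    before the peak (assuming isotropy), and shift the peak to x = 99."""
    esf = patch.mean(axis=0)                 # average vertically to suppress speckle noise
    lsf = np.diff(esf)                       # edge spread -> line spread by differentiation
    kernel = np.ones(smooth) / smooth
    lsf = np.convolve(lsf, kernel, mode="same")  # smooth residual noise
    lsf = np.clip(lsf, 0.0, None)            # negative differentials are noise; zero them
    peak = int(np.argmax(lsf))
    left = lsf[:peak + 1]                    # keep only the side before the peak
    out = np.zeros(100)
    out[100 - len(left):] = left             # align the peak at the right end (x = 99)
    return out / out.max() if out.max() > 0 else out  # normalize to [0, 1] per sample
```

Applied to an ideal sigmoid edge, this returns a normalized LSF whose peak sits at x = 99, matching the format shown in Fig. 9d.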

Fig. 10
figure 10

Relationship between the number of colored layers and LSF

Training of a neural network model

We next train a neural network to associate the LSF data for the 1875 skin patches with layouts representing the color assignment to each layer in the corresponding patch. It is possible to predict the layout of human skin patches with arbitrary translucency using this learning result. When training a neural network, it is difficult to determine the accuracy of the output layout. Therefore, in this study, we used an encoder–decoder type of neural network, which is often applied to such problems [14, 22]. A schematic diagram of the network structure is shown in Fig. 11. LSF values were used for both input and output. As explained in the previous section, the representation is a (3 × 100) vector such that each of the three RGB components has 100 elements, and the values range from 0 to 1. The values have been normalized within each sample. Although the setting of the maximum value to 1 may be affected by noise, such normalization was performed because no such effect was observed in this study. The layout of the layered structure is a middle output. It is a vector of 14 elements, representing the number of layers for each of the 14 inks used. In this study, the structure was fixed as 10 layers, such that the sum of the vectors was 10. During training, values were normalized from 0 to 1. The encoder and decoder were fully connected layers, and loss functions were provided in the middle output section and the output section using the RMSE. This loss function \(\mathcal{L}\) is expressed by the following equation:

$$ \mathcal{L} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \left( y_{\text{pred},i} - y_{\text{true},i} \right)^{2}} , $$

where \(y_{{{\text{pred}}}}\) is the vector predicted by the neural network, \(y_{{{\text{true}}}}\) is the ground-truth vector, and \(N\) is the size of the vector. The loss for the LSF is calculated in an RGB space that depends on the camera (α5100, SONY) used for LSF acquisition; this is an intensity-linear space, since we shot in RAW mode. The number of layers for each ink estimated using this network needs to be an integer, but simple rounding makes differentiation impossible. A soft quantization layer was, therefore, provided to make the output of the encoder close to an integer value when denormalized. Although this operation may cause quantization errors, Shi et al. have shown that it is possible to estimate the layout using the same method [14].
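As a concrete illustration, the loss and a soft quantization step might look as follows. The sigmoid-based soft rounding is one common differentiable construction and a hypothetical stand-in, not necessarily the exact layer used by Shi et al. [14].

```python
import numpy as np

def rmse_loss(y_pred, y_true):
    """RMSE between two equal-length vectors, matching the loss equation."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

def soft_quantize(x, temperature=0.1):
    """Differentiable approximation to rounding: as temperature -> 0 this
    approaches ordinary rounding while keeping nonzero gradients
    (illustrative construction; not the paper's exact layer)."""
    x = np.asarray(x, dtype=float)
    frac = x - np.floor(x)
    return np.floor(x) + 1.0 / (1.0 + np.exp(-(frac - 0.5) / temperature))
```

With the default temperature, `soft_quantize(2.9)` is close to 3 and `soft_quantize(2.1)` is close to 2, so denormalized layer counts land near integers while remaining differentiable during training.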

Fig. 11
figure 11

Structure of our neural network

We here describe the learning process. In our network, we first trained only the decoder. We then attached the encoder, fixed the decoder weights, and trained the encoder. Of the 1875 data, 1500 were used for training and 375 for testing. We used a GPU instance of Google Colab for training, and the total training time was about 5 min. The training results are shown in Fig. 12.

Fig. 12
figure 12

Learning curves of our neural network

We see that the learning converged for both the encoder and decoder. Furthermore, the accuracy for the validation data is the same as that for the training data. The final training loss was 0.049 and the loss for the 375 test data was 0.051, which indicates that the accuracy for unknown data was comparable to that for the training data.


Making a human skin with CG

The purpose of the present study is to predict the multilayer layout that provides the desired color and translucency of human skin created by the designer. It is, therefore, possible to predict the layout using the neural network by measuring the LSF of human skin designed in a CG simulation. We thus evaluated the result of machine learning by reproducing CG human skin with the expected translucency. It is necessary to specify the absorption and scattering coefficients of the object to control the color and translucency of translucent objects such as human skin. In rendering human skin in CG, we used the Mitsuba renderer [23], which is an open-source physics-based renderer. In this renderer, the desired translucency can be reproduced by specifying the absorption and scattering coefficients. We used known values of the absorption and scattering coefficients for typical Asian skin, which were calculated in Ref. [24]. Each coefficient was calculated for RGB wavelengths (700.00, 546.10, 435.80 nm). As a detailed setting for Asian skin, the ratio of the two types of melanin in the skin was set at 0.7 for eumelanin to 0.3 for pheomelanin. In addition, the ratio of the melanin portion to the baseline portion (non-pigmented skin tissue) of the skin was set at 0.12 to 0.88. These values have been given as average values for Asians [24]. The absorption coefficients of the human skin thus set were (R, G, B) = (2.2035, 5.3338, 12.178) \({\text{mm}}^{ - 1}\) and the isotropic scattering coefficients were (R, G, B) = (191.28, 381.59, 774.31) \({\text{mm}}^{ - 1}\). Here, we assume that the designer has no knowledge of the biological skin structure; moreover, owing to the limitation of the rendering program, we can only use the rendering technique of Jensen et al. [15], which handles a single set of µs and µa in the medium. Therefore, we used a single set of µs and µa to create our CG skin. The rendering result for a cube with the above coefficients is shown in Fig. 13a.
Figure 13b shows an enlarged view of standard human skin with a spotlight (width of 0.1 in the world coordinate system) applied. The skin in Fig. 13a has the general translucency of Asian skin, but to reproduce various translucencies, the absorption coefficient was empirically multiplied by an appropriate constant. Figure 13c is a rendered image of skin with a high absorption coefficient, i.e., 100 times that in Fig. 13a. Figure 13d shows the skin in Fig. 13c illuminated by a point light source. A comparison of the two images confirms that the light spreads differently, giving a different translucency. Thus, CG samples with various degrees of translucency were created by empirically multiplying the absorption coefficient by a constant. We changed only the absorption coefficient in the rendering process as a first step in this series of studies; changes in the scattering coefficient should be considered in the next step of research.
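The coefficient scaling used to generate the variants can be written down directly. The numerical values are those quoted above; the helper name is ours.

```python
# Optical coefficients for typical Asian skin (per mm), at the
# R, G, B wavelengths 700.00, 546.10, and 435.80 nm (values from the text).
mu_a = (2.2035, 5.3338, 12.178)     # absorption coefficients
mu_s = (191.28, 381.59, 774.31)     # scattering coefficients (isotropic)

def scale_absorption(mu_a, factor):
    """Create a CG sample variant by multiplying the absorption
    coefficient by a constant, as done for Fig. 13c (factor = 100)."""
    return tuple(c * factor for c in mu_a)
```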

Fig. 13
figure 13

CG-created human skin and its PSF (512 × 512 px)

Prediction of layered ink layout

The LSF of the rendered human skin was acquired and input to the trained neural network to predict its layout. We calculated the LSF from the point spread function (PSF) of the image in which a narrow spotlight was projected on the CG human skin. As in the training setting, the LSF gives each RGB component 100 values, for a total of 300 array elements. The obtained LSF was input to the trained network, and an example of the layout estimated from the LSF is shown in Fig. 14a. The figure shows how many layers of each ink are required and is a denormalized version of the network output. However, considering the cost of 3D printing, in this study the layout estimated by the neural network is converted to a layout in the dataset used for training, and the corresponding color patches are treated as the fabricated objects for evaluation. The specific layout modification is that, if multiple inks are selected within the epidermal or dermal group, they are merged into the ink with the largest count. The result of these modifications is shown in Fig. 14b. A concern with not evaluating the actually estimated layout is the error introduced by converting it to a layout in the training dataset. In evaluating the LSF, this modification of the layout is disadvantageous to the evaluation results, provided that the 3D printer settings are the same as when the 1875 patches were printed. We can therefore conclude that the estimated layout itself would score sufficiently well if the layout converted from it yields sufficiently good evaluation results. In addition, we consider that limiting the number of layouts by converting to layouts in the dataset is effective in reducing printing costs.
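The layout modification described above can be sketched as follows. The group index ranges (0 = clear, 1–3 = epidermis, 4–13 = dermis) and the choice to merge a group's total count into its most-selected ink are our reading of the procedure, not an exact specification from the paper.

```python
import numpy as np

def snap_to_dataset_layout(raw, total=10):
    """Convert the denormalized network output (14 ink counts) into a layout
    obeying the dataset constraints: at most one epidermis ink and one
    dermis ink. If several inks are selected within a group, merge them
    into the one with the largest count; remaining layers become clear."""
    counts = np.rint(np.asarray(raw, dtype=float)).astype(int).clip(min=0)
    out = np.zeros(14, dtype=int)
    for lo, hi in [(1, 4), (4, 14)]:        # epidermis group, dermis group
        group = counts[lo:hi]
        if group.sum() > 0:
            k = lo + int(np.argmax(group))  # ink with the largest count
            out[k] = group.sum()            # merge the group's layers into it
    out[0] = max(total - out.sum(), 0)      # top up with clear layers
    return out
```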

Fig. 14
figure 14

Outputs (ink layer layout)

Figure 15 shows the results of estimating the layout using the LSF obtained from the rendered human skin. In this study, the LSF of the fabrication result is known because the layout is selected from the dataset, as shown in Fig. 15; the LSF of each result is therefore also included. The results show that (a) the universal skin and (b) the red skin are subjectively similar in appearance. However, (c) the skin with absorption coefficients 100 times higher has a color different from that of the CG skin. In material appearance reproduction, subjective appearance is also a very important factor. We next computed the RMSE between the LSF of the CG and the LSF of the object with the estimated layout to compare the LSFs. Results are given in Table 1. Color reproduction was also evaluated by the RMSE of RGB data ranging from 0 to 1 in each channel. The color errors in RGB space were 0.032 for (a), 0.058 for (b), and 0.054 for (c) in Fig. 15. Furthermore, to evaluate the error, the patch with the lowest RMSE relative to the LSF of the CG skin was searched for and selected from the dataset, like a lookup table (LUT). The RMSE of the LSF between the selected patches and the CG skin is also given in Table 1. In addition, a subjective comparison of the estimation result using the neural network and the search result using the LUT is made in Fig. 16. Table 1 shows that the difference in the RMSE of the LSF between the neural-network estimates and the LUT search results is about 1–3%. In Fig. 16, the rough shape of the LSF is similar in (a) and (b), but there is a difference in (c). This result suggests that our neural network is unstable when estimating layouts for skin with strong absorption.
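The LUT baseline is a brute-force nearest-neighbor search over the measured dataset; a minimal sketch:

```python
import numpy as np

def lut_search(target_lsf, dataset_lsfs):
    """Return the index of the dataset patch whose (3 x 100) LSF has the
    smallest RMSE to the target, plus that RMSE, as used for the
    lookup-table baseline in Table 1."""
    target = np.asarray(target_lsf, dtype=float).ravel()
    data = np.asarray(dataset_lsfs, dtype=float).reshape(len(dataset_lsfs), -1)
    rmse = np.sqrt(np.mean((data - target) ** 2, axis=1))
    return int(np.argmin(rmse)), float(rmse.min())
```

For the 1875-patch dataset this is a single vectorized pass, so an exhaustive search is cheap compared with 3D printing costs.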

Fig. 15
figure 15

Layout prediction and fabrication results

Table 1 RMSE for LSF values
Fig. 16 Comparison of the patch with the minimum RMSE with the target and the predicted patch

In a more detailed evaluation, error statistics were examined using a larger number of CG samples. In addition to the three samples shown in Fig. 15, we included five samples shown in Fig. 17, created by varying the absorption coefficient of the skin as explained above. We estimated the layout from the LSF of each sample using the above procedure and computed the RMSE between the LSF of the fabrication result and the LSF of the input. These eight results are shown in Table 2. The RMSE is about 3-6%, and the accuracy of the estimation varied depending on the absorption coefficient.

Fig. 17 Various CG-skin samples

Table 2 RMSE for various samples

The above results show that we were able to estimate a layout with translucency close to the target within the range of the dataset. A limitation of the present study is that it is difficult to reproduce the target LSF with high accuracy, because the output is constrained to 10 layers and to a single clear ink, a single brown ink, and a single red ink. Accuracy may therefore improve further when modeling directly with the layout output by the neural network. However, because of the cost of 3D printing, this study was limited to evaluating accuracy on color patches. In addition, Fig. 15c shows that there are cases where the colors differ even though the LSFs are similar. We therefore consider that acquiring the LSF from RGB values is not well suited to color reproduction, and the methods of LSF calculation and color acquisition need to be reconsidered for more accurate reproduction of translucency and color. Furthermore, different layouts can yield similar LSFs, and the current evaluation method cannot distinguish between them. Adding another evaluation criterion, such as the number of inks used, would make it possible to distinguish layouts that give similar LSFs.

Finally, we discuss other accuracy issues that need to be considered in this study. First, regarding the LSF measurement accuracy of the CG skin: because our CG is based on simulation with a sufficient number of ray-tracing samples, it is not affected by noise. Second is the accuracy of fabricating the skin from the predicted layout: 3D printers can control the ink dot by dot, and we use our own system to perform halftoning, so reproducibility is very high (given the same 3D printer). Third is the accuracy of LSF measurement of the 3D-fabricated skin model. In this study, images of 625 patches were taken in one shot for efficiency. As seen in Sect. 5, there is some noise due to resolution and speckle, which we expect to remove by improving the imaging system in the next step of our research.

Conclusions and future works

We proposed an estimation method for selecting the best multilayer layout to reproduce human skin with arbitrary translucency using an inkjet 3D printer. Various combinations of LSF information representing translucency were measured, and an appropriate multilayer layout was derived through machine learning.

In a preliminary study, we focused on LSF matching, limiting the application to human skin. It was possible to select a combination of layers producing similar LSFs, despite the complexity of there being 10 layers. Meanwhile, it was difficult to find the best combination satisfying both color and the LSF. We attribute this to our neural network being designed to minimize the average LSF error: reducing the overall average LSF error can lead the estimation system to prioritize translucency over the RGB ratio. The design of the network therefore needs to be improved, taking into account the trade-off between color and translucency. The evaluation was done only on the LSF and not on the layout, because we could not construct an index for comparing the quality of layouts; Shi et al. did not evaluate the layout for the same reason [14]. In addition, a simple two-layer model was applied in this study, although more complex models of human skin structure have been developed in the field of CG. This choice was made by considering the limitations of materials and fabrication methods in 3D printing. In future work, we hope to improve the appearance using more complex models, which will require integrating knowledge from biomedical optics and skin biology.

Moreover, we imposed restrictions on overpainting in each part of the skin but did not restrict the learning process. We assume that it is necessary to separate the networks using features such as the BSSRDF [15] and LSF to avoid complicated learning, and error propagation must be comprehensively investigated. The skin color gamut used in this study was empirically selected as a common skin color for Asians. It is difficult to cover a large skin color gamut with this limited gamut, and it is thus necessary to consider how to handle a wide color gamut in combination with evaluation using a skin color database. Our research is expected to be applied to skin phantoms and tattoo coloring effects.

Availability of data and material

Not applicable.

Code availability

Not applicable.


  1. Ngo, T.D., Kashani, A., Imbalzano, G., Nguyen, K.T., Hui, D.: Additive manufacturing (3D printing): a review of materials, methods, applications and challenges. Compos. Part B: Eng. 143, 172–196 (2018)


  2. Lee, V.C.: Medical applications for 3D printing: current and projected uses. Pharm. Therap. 39(10), 704–711 (2014)


  3. Dawood, A., Marti, B., Sauret-Jackson, V., Darwood, A.: 3D printing in dentistry. Br. Dent. J. 219, 521–529 (2015)


  4. Wang, X., Guo, Q., Cai, X., Zhou, S., Kobe, B., Yang, J.: Initiator-integrated 3D printing enables the formation of complex metallic architectures. ACS Appl. Mater. Interfaces 6(4), 2583–2587 (2014)


  5. Sun, P.L., Sie, Y.P.: Color uniformity improvement for an inkjet color 3D printing system. Electron. Imaging 2016(20), 1–6 (2016)


  6. Brunton, A., Arikan, C.A., Urban, P.: Pushing the limits of 3D color printing: error diffusion with translucent materials. ACM Trans. Graph. (TOG) 35(1), 1–13 (2015)


  7. Hašan, M., Fuchs, M., Matusik, W., Pfister, H., Rusinkiewicz, S.: Physical reproduction of materials with specified subsurface scattering. In: ACM SIGGRAPH, pp 1–10 (2010)

  8. Dong, Y., Wang, J., Pellacini, F., Tong, X., Guo, B.: Fabricating spatially-varying subsurface scattering. In: ACM SIGGRAPH, pp 1–10 (2010)

  9. Elek, O., Sumin, D., Zhang, R., Weyrich, T., Myszkowski, K., Bickel, B., Wilkie, A., Křivánek, J.: Scattering-aware texture reproduction for 3D printing. ACM Trans. Graph. (TOG) 36(6), 241 (2017)


  10. Papas, M., Regg, C., Jarosz, W., Bickel, B., Jackson, P., Matusik, W., Marschner, S., Gross, M.: Fabricating translucent materials using continuous pigment mixtures. ACM Trans. Graph. (TOG) 32(4), 1–12 (2013)


  11. Brunton, A., Arikan, C.A., Tanksale, T.M., Urban, P.: 3D printing spatially varying color and translucency. ACM Trans. Graph. (TOG) 37(4), 1–13 (2018)


  12. Urban, P., Tanksale, T.M., Brunton, A., Vu, B.M., Nakauchi, S.: Redefining a in rgba: towards a standard for graphical 3d printing. ACM Trans. Graph. (TOG) 38(3), 1–14 (2019)


  13. Babaei, V., Vidimče, K., Foshey, M., Kaspar, A., Didyk, P., Matusik, W.: Color contoning for 3D printing. ACM Trans. Graph. (TOG) 36(4), 1–15 (2017)


  14. Shi, L., Babaei, V., Kim, C., Foshey, M., Hu, Y., Sitthi-Amorn, P., Rusinkiewicz, S., Matusik, W.: Deep multispectral painting reproduction via multi-layer, custom-ink printing. ACM Trans. Graph. (TOG) 37(6), 271–281 (2018)


  15. Jensen, H.W., Marschner, S.R., Levoy, M., Hanrahan, P.: A practical model for subsurface light transport. In: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pp 511–518 (2001)

  16. Meglinsky, I.V., Matcher, S.J.: Modelling the sampling volume for skin blood oxygenation measurements. Med. Biol. Eng. Comput. 39(1), 44–50 (2001)


  17. Donner, C., Weyrich, T., d’Eon, E., Ramamoorthi, R., Rusinkiewicz, S.: A layered, heterogeneous reflectance model for acquiring and rendering human skin. ACM Trans. Graph. (TOG) 27(5), 1–12 (2008)


  18. Tsumura, N., Ojima, N., Sato, K., Shiraishi, M., Shimizu, H., Nabeshima, H., Akazaki, S., Hori, K., Miyake, Y.: Image-based skin color and texture analysis/synthesis by extracting hemoglobin and melanin information in the skin. ACM SIGGRAPH 2003(2003), 770–779 (2003)



  20. Happel, K., Walter, M., Dörsam, E.: Measuring anisotropic light scatter within graphic arts papers for modeling optical dot gain. In: 18th Color and Imaging Conference, pp 347–352 (2010)

  21. Berns, R.S.: Billmeyer and Saltzman’s principles of color technology. Wiley (2019)


  22. Tominaga, S.: Color control of printers by neural networks. J. Electron. Imaging 7(3), 664–672 (1998)


  23. Jakob, W.: Mitsuba renderer (2010)

  24. Donner, C., Jensen, H.W.: A spectral BSSRDF for shading human skin. In: Proceedings of the 17th Eurographics Conference on Rendering Techniques (2006)



This research was partially supported by JSPS KAKENHI Grant Nos. 17K00266 and 18K11540. We thank Glenn Pennycook, MSc, from Edanz Group for editing a draft of this manuscript.


Not applicable.

Author information



Corresponding author

Correspondence to Kazuki Nagasawa.

Ethics declarations

Conflict of interest

The authors declare no conflicts of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit


About this article


Cite this article

Nagasawa, K., Yoshii, J., Yamamoto, S. et al. Prediction of the layered ink layout for 3D printers considering a desired skin color and line spread function. Opt Rev 28, 449–461 (2021).




  • Neural networks
  • 3D printing
  • Line spread function
  • Human skin
  • Translucency