Learning to Caricature via Semantic Shape Transform

Caricature is an artistic drawing created to abstract or exaggerate facial features of a person. Rendering visually pleasing caricatures is a difficult task that requires professional skills, and thus it is of great interest to design a method to automatically generate such drawings. To deal with large shape changes, we propose an algorithm based on a semantic shape transform to produce diverse and plausible shape exaggerations. Specifically, we predict pixel-wise semantic correspondences and perform image warping on the input photo to achieve dense shape transformation. We show that the proposed framework is able to render visually pleasing shape exaggerations while maintaining their facial structures. In addition, our model allows users to manipulate the shape via the semantic map. We demonstrate the effectiveness of our approach on a large photograph-caricature benchmark dataset with comparisons to the state-of-the-art methods.


Introduction
A caricature is an image rendered by abstracting or exaggerating certain facial features (e.g., contour, eyes, ears, eyebrows, mouth, and nose) of a person to achieve humorous or sarcastic effects (Fig. 1). Caricatures are widely used in all kinds of media to depict celebrities or politicians. However, generating visually pleasing caricatures with proper shape distortions usually requires professional artistic skills and creative imagination, which is challenging for common users. Therefore, it is of great interest to generate caricatures from normal photos effectively, in a way that allows users to flexibly manipulate the output.
One crucial factor in generating a desirable caricature is to distort facial components properly, i.e., to render personal traits with certain exaggerations. Numerous efforts have been made to perform shape exaggeration by computing warping parameters between photos and caricatures from user-defined shapes (Akleman et al., 2000) or hand-crafted rules (Brennan, 2007; Liao et al., 2004). However, such methods may have limitations in generating diverse and visually pleasing results due to inaccurate shape transformations. Recently, image-to-image translation (Isola et al., 2017; Zhu et al., 2017a; Lee et al., 2018) and neural style transfer (Gatys et al., 2016; Johnson et al., 2016; Li et al., 2017a) algorithms have been developed, but most techniques apply only to two domains with local texture variations, not to scenarios where large shape discrepancies exist.

Fig. 1 Examples of normal photos, hand-drawn caricatures, and a set of caricature outputs with diverse shapes and styles generated by the proposed method. Our approach is able to render a diverse set of visually pleasing caricatures.
In this work, we aim to create shape exaggerations on standard photos with shape transformations similar to those drawn by artists. Meanwhile, a rendered caricature should still maintain the facial structure and personal traits. Different from existing methods that only consider facial landmarks or sparse points (Shi et al., 2019), we use a semantic face parsing map, i.e., a dense pixel-wise parsing map, to guide the shape transformation process. This provides a more accurate mapping for facial details, e.g., the shapes of eyebrows, noses, and face contours. Specifically, given a caricature unpaired with a normal photo, we leverage the cycle consistency strategy and an encoder-decoder architecture to model the shape transformation. Nevertheless, operating this learning process in the image domain may introduce noise from unnecessary pixel information. Instead, we learn the model directly on the face parsing map, which captures the shape transformation of interest. To learn effective shape transformations, we design a spatial transformer network (STN) to allow larger and more flexible shape changes, while several loss functions are introduced to better maintain facial structures.
To evaluate the proposed framework, we conduct experiments on a photo-caricature benchmark dataset. We perform extensive ablation studies to validate each component of the proposed shape transformation algorithm. We conduct qualitative and quantitative experiments with user studies to demonstrate that the proposed approach performs favorably against existing image-to-image translation and caricature generation methods. Furthermore, our model allows users to select the desired caricature semantic shape, with the flexibility to manipulate the parsing map to generate preferred shapes and diverse caricatures. The main contributions of this paper are as follows:
- We design a shape transformation model to facilitate photo-to-caricature generation with visually pleasing shape exaggerations that approach the quality of hand-drawn caricatures.
- We introduce the face parsing map as the guidance for shape transformation and learn a feature embedding space for face parsing, which allows users to explicitly manipulate the degree of shape changes in the rendered caricature.
- We evaluate the proposed algorithm on a large caricature benchmark dataset and demonstrate favorable results against existing methods.

Related Work
Image Translation. Numerous methods based on GANs (Goodfellow et al., 2014) have recently been developed for image translation in paired (Isola et al., 2017; Chang et al., 2018), unpaired (Zhu et al., 2017a; Kim et al., 2017), and multimodal (Zhu et al., 2017b; Lee et al., 2018; Huang et al., 2018) settings. While most GAN-based methods require large image sets for training, neural style transfer models (Gatys et al., 2015, 2016) only need a single style image as the reference. A number of methods have since been developed (Johnson et al., 2016; Li et al., 2017a; Chen et al., 2017; Liao et al., 2017; Li et al., 2017b) to improve style translation or runtime performance. In the photo-to-caricature task, however, neither GAN-based approaches nor neural style transfer methods take the large shape discrepancy across domains into account. Unlike existing image translation methods, our algorithm enables shape exaggerations by utilizing an encoder-decoder architecture.
Caricature Rendering. Learning to caricature from photos is mainly concerned with modeling shape transformation. However, it has not been widely explored due to the large domain gap between photos and caricatures. Some early methods (Akleman, 1997; Akleman et al., 2000) rely on user-defined source and target shapes to compute the warping parameters, but the rendered images do not exhibit caricature styles. Several rule-based methods (Luo et al., 2002; Liao et al., 2004; Brennan, 2007) perform shape exaggeration for each facial component.
Recently, CycleGAN-based image translation methods (Zheng et al., 2019) have been proposed for caricature generation with facial landmarks as conditional constraints. However, the geometric structures of the images generated with these algorithms are still close to the original photos, with limited exaggerated effects. To increase shape exaggeration, Cao et al. (2018) use a geometric exaggeration model that leverages landmark positions to predict key points in a subspace formed by principal components. The WarpGAN model (Shi et al., 2019) learns to directly predict a set of control points used to warp the input photo based on unpaired adversarial learning. Despite showing promising results, their shape exaggerations are still limited by the use of sparse landmarks or control points. In contrast, we use a pixel-wise parsing map to model shape transformation, thereby generating plausible exaggerations while retaining the facial traits of the input image. Furthermore, thanks to the learned representations in the parsing space, users can explicitly manipulate the facial map to generate preferred shapes. In Table 1, we compare our algorithm with existing methods in terms of shape transformation and requirements.
Spatial Transformer Network. Spatial transformer networks (STNs) (Jaderberg et al., 2015) were developed to improve object recognition performance by reducing geometric variations in the input. Numerous variants have since been developed for a wide range of computer vision applications (Wu et al., 2017; Zhou et al., 2018; Dai et al., 2017; Ganin et al., 2016; Park et al., 2017; Shu et al., 2018) that require geometric constraints. Closest to our work is the method proposed by Lin et al. (2018), in which low-dimensional warping parameters are learned to manipulate foreground objects for image composition. In our task, we also introduce an STN to predict warping parameters that enable shape exaggeration on normal photos. In contrast, we need denser and more complex deformations instead of a low-dimensional affine transformation (Jaderberg et al., 2015) or homography transformation (Lin et al., 2018). Furthermore, our shape transformation network leverages the facial parsing maps as an additional input to focus on the semantic facial structure.

Algorithmic Overview
In this section, we introduce the overall framework of the proposed caricature generation method (see Fig. 2).
Model Inputs. Existing methods (Zheng et al., 2019; Cao et al., 2018) use images or sparse landmarks as inputs to capture facial structure information for shape exaggeration. However, these approaches are not effective at capturing the large shape transformations in caricatures, as the appearances of facial components differ significantly from those in normal photos. In this paper, we use the facial parsing map, which provides dense semantic information, to facilitate computing correspondences between the faces of the photo and the caricature. In practice, we adopt the adapted parsing model of Chu et al. (2019) to account for the domain shift issue; more details about the training strategy and network architecture of the parsing model can be found therein. Note that when training the parsing network, only 17 facial landmarks of caricatures are provided, as in other caricature generation methods. During the testing stage, we do not need any key-point annotations. Although the parsing quality may not always be satisfactory (e.g., the IoU is 86.5% on facial skin), we design loss functions (Section 4.3) to make the deformed shape smooth and plausible.
Caricature Retrieval. We note that the selection of target semantic shapes is useful and important for diverse caricature generation. In this work, we develop a parsing map retrieval model using the large-scale WebCaricature dataset as the gallery. Given a photo parsing map P_pho, we aim to retrieve suitable caricature parsing maps P_cari as inputs for the subsequent shape transformation. The key challenge is how to find an appropriate embedding space in which to perform retrieval. Here, we assume that if P_pho and P_cari belong to the same identity, P_cari could be a good reference; thus, they should be close to each other in the embedding space. As shown in Fig. 3, we utilize the contrastive loss L_contrastive (Chopra et al., 2005) to enforce that the caricature and photo embeddings of the same person are close to each other, while the reconstruction loss L_rec helps preserve the content of the parsing maps by minimizing the Euclidean distance between the input parsing maps and the reconstructed ones.

Fig. 2 Overall framework of the proposed caricature generation method. Given an input photo, we first obtain its face parsing map and then retrieve caricature parsing maps from a large-scale database. Second, we feed these two maps into the proposed shape transformation network to predict the warping parameters and produce the deformed photo. Third, we utilize a style transfer network to generate the final output with caricatured textures.
The encoder consists of four basic dense blocks and a flatten layer to obtain a global 128-dimensional vector z_cari / z_pho, while the decoder is composed of four symmetric dense blocks with transposed convolution layers. The contrastive loss for positive and negative pairs is defined as:

L_contrastive = y d^2 + (1 − y) max(0, m − d)^2,  d = ||z_pho − z_cari||_2,  (1)

where y = 1 for pairs of the same identity and y = 0 otherwise, and the hyper-parameter m is the margin, set to 2 in this work. During testing, we pre-compute the caricature embeddings of our gallery caricatures in the training set, and use the photo encoder to compute the embedding of the test photo. To find multiple caricature parsing maps, we first retrieve the top 5 caricature embeddings closest to the photo embedding in Euclidean distance, and then use their associated caricature parsing maps as our final outputs.
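The contrastive objective and the top-5 nearest-neighbor retrieval described above can be sketched as follows. This is a minimal NumPy illustration, not the released code; the function names and the toy gallery are our own:

```python
import numpy as np

def contrastive_loss(z_pho, z_cari, same_identity, m=2.0):
    # Contrastive loss (Chopra et al., 2005): pull embeddings of the same
    # identity together; push different identities at least margin m apart.
    d = np.linalg.norm(z_pho - z_cari)
    y = 1.0 if same_identity else 0.0
    return y * d**2 + (1.0 - y) * max(0.0, m - d)**2

def retrieve_top_k(z_pho, gallery, k=5):
    # Rank pre-computed gallery caricature embeddings by Euclidean
    # distance to the photo embedding; return indices of the k closest.
    dists = np.linalg.norm(gallery - z_pho, axis=1)
    return np.argsort(dists)[:k]
```

The retrieved indices then map back to the caricature parsing maps used as shape references in the next stage.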
Overall Pipeline. Given the input photo, we first use the caricature retrieval model to automatically recommend a proper caricature parsing map, which serves, along with the photo parsing map, as input to the next phase. Second, to better mimic the process of drawing caricatures, we decompose the caricature generation pipeline into two stages: shape transformation and style transfer. In the first stage, we propose a semantic-aware shape transformation network to learn dense warping parameters that enable shape exaggerations for the input photo. After obtaining the image with deformed facial components, we use a reference-based feed-forward style transfer network (Huang and Belongie, 2017) to perform the photo-to-caricature texture translation and obtain the final caricature output.

Fig. 3 Framework of caricature retrieval. The goal of training the retrieval model is to learn photo and caricature embeddings on parsing maps, so that given a photo parsing map, the model is able to retrieve proper caricature maps during testing.

Semantic Shape Transformation
Given a portrait photo and a recommended caricature, our algorithm transforms the portrait photo to have a similar facial structure to the recommended caricature.
In contrast to most image translation methods that learn pixel mapping in the image space, we learn the dense pixel correspondence between inputs and outputs on facial components through the face parsing map. We use an encoder-decoder architecture, where the encoder extracts feature representations of the parsing map and the decoder, composed of an STN module, estimates the warping parameters, i.e., the transformation from the parsing map of the photo to that of the caricature. It is worth noting that learning in the semantic space is easier than in the original image space due to the smaller appearance discrepancy. The overall architecture and designed loss functions are presented in Fig. 4.

Fig. 4 Proposed shape transformation network. We first feed face parsing maps to each encoder to extract latent feature encodings z. Second, we concatenate the two features as the input of the decoder to predict the warping parameters D and thus produce the parsing map P_fake of the deformed photo. A reconstruction loss L_rec is then computed between P_fake and the parsing map of the caricature P_cari. To ensure local details and cycle consistency, we further incorporate an adversarial loss L_adv on P_fake and a cycle consistency loss L_cyc between P_pho and the reconstructed parsing map of the photo P_cyc. In addition, we add a coordinate-based loss L_coo to constrain the alignment based on pixel locations.

Encoder
Here, we describe how to obtain compact representations for the facial structures of caricatures and photos. We denote the face parsing maps of the photo and caricature as P_pho and P_cari ∈ R^{C×H×W}, where H and W are the image height and width, and C is the number of facial component categories. As a result, each channel contains a binary map describing one facial component. Considering that the distributions of facial structures in photos and caricatures are quite different, we use two independent encoders, each consisting of several dense blocks. The encoded feature is a compact 128-dimensional vector; in the following, we denote the two features as z_pho and z_cari.

Decoder
Once the latent feature z representing the facial structure is obtained from the encoder, the goal of the decoder is to predict the dense correspondence denoted by a tensor D ∈ R^{2×H×W}. Specifically, (D_{1,i,j}, D_{2,i,j}) indicates the corresponding target position when warping each pixel (i, j) from the recommended caricature to the input photo. To perform warping, we first concatenate the latent encodings z_pho and z_cari, and then feed this feature to the decoder. Here, we introduce a spatial transformer network module to generate the 2D shape transformation parameters for each pixel. As a result, we can apply the differentiable bilinear sampling operation to the photo parsing map P_pho and obtain the deformed photo parsing map P_fake.
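As a concrete sketch, the differentiable bilinear sampling step can be implemented with PyTorch's `grid_sample`. Here we assume the decoder output has already been normalized to the [-1, 1] grid convention; the function name is illustrative, not from the released code:

```python
import torch
import torch.nn.functional as F

def warp_parsing_map(p_pho, flow):
    # p_pho: (B, C, H, W) one-hot parsing map of the photo.
    # flow:  (B, 2, H, W) target sampling coordinates, normalized to
    #        [-1, 1] in grid_sample's (x, y) convention.
    grid = flow.permute(0, 2, 3, 1)  # grid_sample expects (B, H, W, 2)
    # Differentiable bilinear sampling yields the deformed map P_fake.
    return F.grid_sample(p_pho, grid, mode="bilinear", align_corners=True)
```

Because the sampling is differentiable, gradients from the losses on P_fake flow back through the predicted warping parameters, allowing end-to-end training.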
The ensuing question is how to enforce the generated P_fake to resemble the parsing map of a real caricature. To this end, we impose three constraints on P_fake: 1) the generated parsing map should be densely reconstructed with respect to the recommended one in the semantic space; 2) cycle consistency is measured between the generated parsing map and the recommended one; 3) a coordinate-based reconstruction is used to regularize alignment at the locations of facial components. In the following, we introduce the loss functions based on these constraints.

Reconstruction in the Semantic Space
Reconstruction Loss. First, a natural way to enforce similarity is to require P_fake and P_cari to be identical at each pixel. Therefore, we minimize the L1 distance between them:

L_pix = ||P_fake − P_cari||_1.  (2)

However, we find that this function alone is not effective for reconstructing every facial component. For instance, face skin regions can have a large overlap between P_cari and P_fake, while smaller components such as eyes usually have no spatial overlap at all. To handle this issue, we design a location-aware metric that measures the distance between the centers of the same facial component in P_cari and P_fake. We average the locations of all pixels in facial component c to obtain its mean location:

(x_c, y_c) = (1 / |Ω_c|) Σ_{(i,j) ∈ Ω_c} (i, j),  (3)

where Ω_c denotes the set of pixels belonging to component c. The location-aware reconstruction loss is defined by:

L_loc = Σ_{c=1}^{C} ||(x_c^{fake}, y_c^{fake}) − (x_c^{cari}, y_c^{cari})||_2.  (4)

Furthermore, we define a global alignment loss by matching the number of pixels in each facial component:

L_num = Σ_{c=1}^{C} | |Ω_c^{fake}| − |Ω_c^{cari}| |.  (5)

The full objective function for reconstruction is:

L_rec = L_pix + λ_l L_loc + λ_n L_num,  (6)

where the hyper-parameters λ_l and λ_n control the importance of each term. In this work, we use λ_l = λ_n = 2 in all experiments. To encourage the loss functions to pay attention to small components, we introduce a weight λ_comp^c adaptively computed as the reciprocal pixel-ratio of each facial component c in the entire image, i.e., L_rec = Σ_{c=1}^{C} λ_comp^c L_rec^c, where L_rec^c denotes the loss L_rec restricted to semantic category c.
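The per-component reconstruction terms above can be sketched in NumPy as follows. This is an illustration under our own assumptions: the exact normalization of each term and the precise form of the adaptive weight are not specified in the text, so the constants here are placeholders:

```python
import numpy as np

def reconstruction_loss(p_fake, p_cari, lam_l=2.0, lam_n=2.0, eps=1e-8):
    # p_fake, p_cari: (C, H, W) binary maps, one channel per facial component.
    C, H, W = p_fake.shape
    ii, jj = np.mgrid[0:H, 0:W]

    def center(mask):
        # Mean pixel location (x_c, y_c) of a component mask.
        n = mask.sum() + eps
        return np.array([(ii * mask).sum() / n, (jj * mask).sum() / n])

    total = 0.0
    for c in range(C):
        # Pixel-wise L1 term for component c.
        l_pix = np.abs(p_fake[c] - p_cari[c]).sum()
        # Location-aware term: distance between component centers.
        l_loc = np.linalg.norm(center(p_fake[c]) - center(p_cari[c]))
        # Global alignment term: match component areas.
        l_num = abs(p_fake[c].sum() - p_cari[c].sum())
        # Adaptive weight: reciprocal pixel ratio favors small components.
        lam_comp = (H * W) / (p_cari[c].sum() + eps)
        total += lam_comp * (l_pix + lam_l * l_loc + lam_n * l_num)
    return total
```

Weighting by the reciprocal pixel ratio means a small component such as an eye contributes as strongly as a large skin region, which matches the motivation stated above.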
Adversarial Loss. The reconstruction-based loss recovers the global structure. In addition, the GAN-based adversarial loss (Goodfellow et al., 2014) has been shown to be effective at preserving local details. In this work, we adopt a similar approach based on the GAN loss, but in the semantic parsing space. To ensure that the generated P_fake looks like a realistic caricature, we employ adversarial learning to match the distribution of P_fake to that of real caricatures, i.e., P_cari. We adopt the same training scheme and loss function as the Wasserstein GAN model (Arjovsky et al., 2017). We denote this adversarial loss for the generator as L_adv.

Cycle Consistency
Similar to the CycleGAN model (Zhu et al., 2017a), we add a cycle consistency constraint to make the transformation more stable. Specifically, we first feed P_fake to the same caricature encoder and extract the feature z_fake. We then concatenate z_fake and z_pho, feed them into a decoder, and recover the original face parsing map of the photo, denoted as P_cyc. Here, we utilize the same reconstruction-based loss as in (6), denoted L_cyc, with the only difference that the loss is computed between P_cyc and P_pho.
Coordinate-based Loss. The above-mentioned loss functions are all based on the parsing map P or regularization on D, which constrains the output to be consistent in the semantic space. Nevertheless, the reconstructed pixels may not be well aligned in the coordinate space. To address this issue, we further introduce a coordinate-based loss when computing the cycle consistency. Instead of only considering the parsing map P_pho, we construct a coordinate map M_pho ∈ R^{2×H×W}, where M_pho^{(i,j)} = (i, j) indicates the spatial location of each pixel. After obtaining the reconstructed P_cyc, we convert it to a coordinate map M_cyc. Since M_cyc has passed through two decoders with estimated warping parameters, the newly warped coordinates may not be aligned with M_pho. We thus minimize the following loss in the coordinate space:

L_coo = Σ_{i,j} ||M_cyc^{(i,j)} − M_pho^{(i,j)}||_2.  (7)

This spatially-variant consistency loss in the coordinate space constrains the per-pixel correspondence to be one-to-one and reversible, which reduces artifacts inside each facial part.
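The coordinate map and the loss above can be sketched as follows. This is a minimal NumPy illustration; the distance form (per-pixel Euclidean norm, summed) is our reading of the text:

```python
import numpy as np

def make_coordinate_map(H, W):
    # M_pho with M[(i, j)] = (i, j): each pixel stores its own location.
    ii, jj = np.mgrid[0:H, 0:W]
    return np.stack([ii, jj]).astype(float)

def coordinate_loss(m_cyc, m_pho):
    # Sum of per-pixel Euclidean distances between the round-trip
    # coordinates and the originals; zero iff the warp is reversible.
    return np.linalg.norm(m_cyc - m_pho, axis=0).sum()
```

Because the penalty grows with each pixel's displacement after the forward-backward warp, neighboring pixels in the same semantic region are pushed toward smooth, reversible translations.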

Fig. 5 Visual comparisons of gradually adding the proposed loss terms (columns: raw image, parsing map, +L_rec, +L_cyc, +L_coo).

Overall Objective
The overall objective function for the proposed semantic shape transformation network includes the reconstruction/adversarial losses to help recover the semantic parsing map and the cycle consistency/coordinate-based losses to ensure consistency:

L_shape = λ_r L_rec + L_adv + L_cyc + L_coo.
In this work, we regard the reconstruction term as a critical one and use λ r = 500 in the following experiments.

Implementation Details
We implement our method in PyTorch (Paszke et al., 2017) and train the model on a single Nvidia 1080Ti GPU. For the encoder, we utilize four basic dense blocks to extract features, followed by a flatten layer to obtain a global 128-d code representing the semantic parsing map. The decoder consists of four symmetric dense blocks with transposed convolution layers for upsampling. During training, we use the Adam optimizer (Kingma and Ba, 2015) with a batch size of 32. Similar to CycleGAN, we set the initial learning rate to 0.0001, keep it fixed for the first 300 epochs, and linearly decay it over another 300 epochs. The source code of the proposed method is available at https://github.com/wenqingchu/Semantic-CariGANs.
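The learning-rate schedule described above can be expressed as a multiplicative factor suitable for `torch.optim.lr_scheduler.LambdaLR`. This is a sketch; decaying all the way to zero at epoch 600 is our reading of the text:

```python
def lr_factor(epoch, n_fixed=300, n_decay=300):
    # Keep the learning rate constant for the first n_fixed epochs,
    # then decay it linearly to zero over the next n_decay epochs
    # (CycleGAN-style schedule).
    if epoch < n_fixed:
        return 1.0
    return max(0.0, 1.0 - (epoch - n_fixed) / n_decay)
```

With PyTorch, this would be attached as `torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_factor)` and stepped once per epoch.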

Results and Analysis
We evaluate the proposed algorithm and relevant methods on the large WebCaricature dataset. This photo-caricature benchmark contains 5974 photos and 6042 caricatures collected from the web. We use the provided landmarks to crop and align faces, and then resize them to 256 × 256 pixels. In addition, we randomly select 500 photos as the test set and use the rest as the training set. We perform qualitative and quantitative experiments to demonstrate the effectiveness of the proposed algorithm.

Ablation Study on Shape Transformation
We first analyze the quality of the proposed semantic shape transformation algorithm in this section.
Loss Functions. We evaluate the effectiveness of the proposed loss functions quantitatively. We randomly select 200 caricatures as reference images to guide the test photos in generating transformed, caricature-like outputs (without the style transfer stage). Next, we use the parsing map of the reference caricature and verify whether the parsing map of the transformed photo is similar to it. We evaluate the performance with mean intersection-over-union (mIoU) and pixel accuracy (pixAcc), which are common metrics for semantic segmentation.

Fig. 6 Visual comparisons of shape transformation generated by different inputs. Our method using face parsing maps is able to accurately transfer the shape exaggerations from the shape reference, while preserving the facial structure.

Table 2 shows the results of this ablation study. Without the reconstruction loss L_rec, the model has no control to transform face shapes to be similar to the reference parsing map, resulting in the lowest accuracy. Without the adversarial loss L_adv or cycle consistency loss L_cyc, we observe that the training is less stable and unable to preserve details. Finally, although removing L_coo only slightly degrades the parsing mIoU, we find that it is a crucial component for more accurate alignment of facial components.
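The two metrics can be computed as follows. This is a standard implementation sketch; whether classes absent from both maps are skipped in the mIoU average is an assumption on our part:

```python
import numpy as np

def miou_pixacc(pred, gt, num_classes):
    # pred, gt: (H, W) integer label maps over facial component categories.
    pix_acc = float((pred == gt).mean())
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious)), pix_acc
```

Here `pred` would be the argmax over channels of the transformed photo's parsing map and `gt` the reference caricature's parsing map.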
In addition, we provide visual comparisons in Fig. 5 to qualitatively verify the effectiveness of the designed loss functions. Here, we gradually add L_rec, L_cyc, and L_coo to our model. We find that using only L_rec leads to severe artifacts, e.g., around the eye and skin regions, because this model only learns to change the semantic shapes and overlooks the inherent structure of the facial components. Introducing the additional L_cyc constrains the mapping function and produces less distortion. Finally, we observe that adding L_coo in the coordinate space makes the reconstructed pixels well aligned and produces visually pleasing results. The reason is that L_coo penalizes pixels with large distortion and enforces all pixels in the same semantic region to undergo smoother translations.
Comparisons to the Landmark Input. In this work, we utilize parsing maps as the input to learn shape transformation, while previous methods mainly leverage sparse facial landmarks for shape exaggeration. We present an ablation study using different inputs for performance evaluation. We consider two approaches to make use of the labeled facial landmarks provided in the WebCaricature dataset and show visual comparisons in Fig. 6.
For the first method, we compute the warping parameters between the photo and the caricature using the thin-plate-spline transform (Jaderberg et al., 2015) by aligning the landmark positions. We denote this baseline as "Landmark positions". Although the landmarks are aligned, there are obvious distortions due to the lack of control points. To further increase the number of control points, we connect the 17 sparse landmarks belonging to the same semantic region (e.g., both eyes) into a single polygon. We then apply a one-hot encoding to the polygons containing different facial components, and use this input for caricature generation as in the proposed model. As shown in Fig. 6, although the results are visually more pleasing than those using landmark positions, the facial contour does not transfer from caricatures to the outputs, since we have no control over regions not covered by the landmark polygons. In addition, we conduct a user study: among 15 participants, 89% of the total 310 votes prefer our results over those using landmarks. Compared to the two baseline methods, the proposed algorithm uses face parsing maps to further control the shape transform with semantics and thus produces better results.
Shape-based Methods. We present comparisons with a non-rigid registration method (Jian and Vemuri, 2010) and the recently developed Neural Best-Buddies (Aberman et al., 2018). Similar to Table 2, we evaluate the quality of the transformed parsing map. The (mIoU, pixAcc) scores are (46.5%, 84.7%) for Jian and Vemuri (2010) and (45.6%, 61.7%) for Aberman et al. (2018), both worse than ours (61.7%, 95.6%). One possible reason is that these methods cannot leverage semantic labels to guide registration, while ours is a learning-based model that considers semantics.

Comparisons to the State of the Arts
We conduct experiments comparing with the state-of-the-art methods through visual comparisons and user studies.
Image Translation Methods. We compare our method with state-of-the-art image translation algorithms, including CycleGAN (Zhu et al., 2017a), Neural Style Transfer (Gatys et al., 2016), MUNIT (Huang et al., 2018), and DRIT (Lee et al., 2018). For Neural Style Transfer, we randomly select 5 caricatures as style images.
As shown in Fig. 7, conventional image translation methods are able to transfer textures to the generated outputs. However, there are severe artifacts due to the large texture variations in caricatures. More importantly, these methods render only slight shape changes, which is unsatisfactory for caricature generation.
Caricature Generation Methods. Next, we evaluate existing caricature generation methods, including WarpGAN (Shi et al., 2019), CariGAN, Zheng et al. (2019), and Deep Image Analogy. We use the images provided by CariGAN as the test set for fair comparisons, where the output results are provided by the authors. In Fig. 8, while these approaches show improved results compared to the image translation baselines, the exaggerated shape effects may not be natural. Note that WarpGAN does not need additional inputs, but it is less effective at aligning larger shape deformations.
User Study. We conduct user studies to evaluate the quality of the caricatures generated by the above-mentioned methods. For each subject, we first show a normal face with a few hand-drawn caricatures as instructions to guide the users. During the study, we randomly choose two of the methods and present one generated caricature for each. We then ask each subject to select the one that "looks more like a caricature" in terms of shape exaggeration, artistic style, and image quality.

Fig. 9 Visual comparisons of deformed photos guided by randomly selected caricatures and the recommended ones. Given a photo, our caricature retrieval model is able to automatically select plausible reference caricatures and obtain diverse results.

To compare with image
translation methods, we collect 2650 pairwise results from a total of 38 participants, while for caricature generation algorithms, we collect 2220 pairwise outcomes from 31 participants. In Tables 3 and 4, we show the normalized scores computed with the Bradley-Terry model (Bradley and Terry, 1952). The results show that the proposed algorithm performs favorably against the state-of-the-art methods. In addition, compared with the hand-drawn caricatures from the WebCaricature dataset in Table 3, our score is the closest to that of hand-drawn ones.
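The Bradley-Terry scores can be estimated from the pairwise vote counts with the standard iterative (MM) update. This is a sketch; the initialization, iteration count, and sum-to-one normalization are our own choices, not details from the paper:

```python
import numpy as np

def bradley_terry(wins, n_iter=200):
    # wins[i, j] = number of times method i was preferred over method j.
    # Returns strength scores normalized to sum to 1.
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(n_iter):
        for i in range(n):
            total_wins = wins[i].sum()
            # MM update: p_i = W_i / sum_j n_ij / (p_i + p_j).
            den = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                      for j in range(n) if j != i)
            p[i] = total_wins / den
        p /= p.sum()
    return p
```

Under this model, the estimated probability that method i beats method j is p_i / (p_i + p_j), so the normalized scores are directly comparable across methods.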

Additional Results and Analysis
Given a photo, our caricature retrieval model is able to automatically select plausible reference caricatures and obtain diverse results (Fig. 1). To demonstrate the effectiveness of this recommendation, we also select reference caricatures at random and compare the generated results through a user study. Specifically, we show two sets of deformed photos and ask users to select the preferred set based on the visual quality and diversity of the generated caricatures. Among 900 pairwise outcomes from 30 participants, 83% of the votes prefer our results. We show more results of deformed photos guided by randomly selected caricatures and by the recommended ones in Fig. 9. For simplicity, we only present results of deformed photos before applying style transfer.
Shape Embedding Space. We analyze the latent shape embedding space extracted from the encoder.
We randomly select 1000 caricatures and extract their shape embedding vectors. To verify whether the shape embedding features could capture meaningful facial structure information, we first apply the mean shift clustering method (Comaniciu and Meer, 2002) to group caricature shapes and then apply the t-SNE (van der Maaten and Hinton, 2008) scheme for visualization. Fig. 11 shows that caricatures within the same cluster share a similar facial structure, while neighboring clusters are also similar to each other in certain semantic parts.
We note that the embedding space shows smooth transitions between different shape deformation clusters, which can be exploited to generate different caricatures through simple interpolation between caricature references, as shown in Fig. 10. As a result, the degree of shape exaggeration can be controlled by interpolating between the input facial shape and the reference one whenever users prefer to preserve more of the input portrait's identity.
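Shape interpolation in this space reduces to a convex combination of the two 128-d codes before decoding. A minimal sketch (the decoder call is omitted, and the function name is illustrative):

```python
import numpy as np

def interpolate_shape_codes(z_src, z_ref, alpha):
    # alpha = 0 keeps the source facial shape; alpha = 1 fully adopts the
    # reference caricature shape; intermediate values control the degree
    # of exaggeration.
    return (1.0 - alpha) * z_src + alpha * z_ref

# A sweep over alpha yields a sequence of gradually exaggerated shapes.
codes = [interpolate_shape_codes(np.zeros(128), np.ones(128), a)
         for a in np.linspace(0.0, 1.0, 5)]
```

Each interpolated code would then be fed to the decoder to produce warping parameters for a shape partway between the input and the reference.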
User Control. Given a caricature/photo pair, our approach not only offers one-click shape transformation, but also flexibly accommodates fine-grained facial structure refinements from users via grid controls on the parsing map. We show an example of results controlled by the grid in Fig. 12. By adjusting the positions of control points, we are able to change the face contour, move facial components, or adjust their shapes. Therefore, users can easily manipulate the desired shape and create diverse caricatures. In addition, these results demonstrate that it is plausible to feed new caricature parsing maps into our shape transformation model, i.e., the parsing map can be flexibly modified by users as shown in Fig. 12. Here, for simplicity, we only present results of deformed photos before applying style transfer.

Fig. 10 Shape interpolations between the input image and caricature in our learned shape encoding space.
Fig. 11 Visualization of the encoding space for caricature shapes. We show that each group (denoted in different colors) has certain shapes in facial components, while neighbors in this feature space also share similar shapes.
Fig. 12 Results of manipulating the parsing map (grid controls, edited map, deformed photo).
Identity Preservation. In addition, we conduct a user study similar to CariGAN to evaluate the degree of identity preservation. Users are asked to choose the correct subject from 5 portraits, given the results generated by each method. We collect 650 votes from 26 participants; the accuracies are 70% (our method), 53% (CariGAN), 53% (Zheng et al., 2019), 68% (WarpGAN, Shi et al., 2019), and 45% (Deep Image Analogy). The results show that our model best preserves identity.
Results with Diverse Styles. The proposed algorithm utilizes caricatures as reference images to guide both the shape transformation and the style transfer. In Fig. 13, we show the input photo in the top-left corner. The first row shows reference caricatures for style transfer, while the left-most column shows reference caricatures for the preferred shape. With various combinations, our method is able to robustly produce diverse results with different styles and shapes.
Limitation. Although the proposed shape transformation for caricature generation is simple and effective, it may render unsatisfactory results in some scenarios. For example, when the eyebrows and eyes are very close to each other in the photo, there can be artifacts around the eyes when the transformer tries to change the eyebrow shapes, as shown in Fig. 14. In future work, we plan to use multiple shape sub-transformers for individual facial components, followed by a global refinement network to integrate them.
Runtime Performance. In the proposed framework, with a single Nvidia 1080Ti GPU, the runtime on a 256 × 256 input photo is around 0.65 seconds, including 0.1 seconds for face parsing, 0.1 seconds for caricature retrieval, 0.3 seconds for shape transformation, and 0.15 seconds for style transfer.

Conclusions
In this paper, we propose a semantic dense shape transformation algorithm for learning to caricature. Specifically, we utilize a face parsing map to densely predict warping parameters, such that shape exaggerations are effectively transferred while the facial structure is maintained. Visual comparisons and user studies demonstrate that the proposed algorithm generates high-quality caricatures compared to state-of-the-art methods. In addition, we show that the learned embedding space of the semantic parsing map allows us to directly manipulate the parsing map and generate shape changes according to user preference. We believe the proposed framework can be extended to numerous challenging image translation problems that require complex shape deformation.