Abstract
While methods for generative image synthesis and example-based stylization produce impressive results, their black-box style representation intertwines shape, texture, and color aspects, limiting precise stylistic control and editing of artistic images. We introduce a novel method for decomposing the style of an artistic image that enables interactive geometric shape abstraction and texture control. We spatially decompose the input image into geometric shapes and an overlaying parametric texture representation, facilitating independent manipulation of color and texture. The parameters in this texture representation, comprising the image’s high-frequency details, control painterly attributes in a series of differentiable stylization filters. Shape decomposition is achieved using either segmentation or stroke-based neural rendering techniques. We demonstrate that our shape and texture decoupling enables diverse stylistic edits, including adjustments in shape, stroke, and painterly attributes such as contours and surface relief. Moreover, we demonstrate shape and texture style transfer in the parametric space using both reference images and text prompts and accelerate these by training networks for single- and arbitrary-style parameter prediction.
1 Introduction
Exploratory processes play a key role in human creativity, especially in the creation of artistic paintings. Based on this observation, Hertzmann [1] argues that the artistic process is led by a high-level goal such as “making a good painting,” but the final image emerges through an exploratory process of successively trying out different techniques, styles, and compositions. Translated into computational procedures, algorithmic painting can be modeled as an under-specified optimization problem that can be approached using a variety of high-level to low-level choices of abstraction, media, and distortion. Therefore, it is vital to solve this problem as an iterative, human-in-the-loop procedure with an explicit decomposition of stylistic tasks [1]. We postulate that creating methods adhering to this paradigm is useful for digital artistic stylization tools in general, as it empowers non-professional users to engage in casual creativity [2] by enabling them to create a broad range of stylistic edits and guiding them to aesthetic results without requiring a high level of technical skill. In this paper, we focus on controlling geometric shape abstraction and texture in artistic images.
Recent advancements in image synthesis, including diffusion models [3] and example-based stylization methods such as neural style transfer (NST) [4, 5], yield impressive results. However, their black-box representation of style, which intertwines shape, texture, and color aspects, and the resulting lack of separable artistic control variables [1] pose significant challenges for the precise adjustment of stylistic elements. This is particularly apparent for geometric and textural attributes. For example, geometric elements, such as brush patterns, shapes, and sizes, are often not applied in a consistent manner, and textural attributes, such as stroke textures, contours, tonal variations, and other complex image details, are not adjustable at a fine-granular level. Our method seeks to introduce such controls by disentangling various stylistic elements in existing images and can be used in a post-synthesis editing step.
We introduce an iterative, coarse-to-fine approach for manipulating geometric abstraction and texture with an explicit parametric decomposition of stylistic attributes. Our approach accepts any image as input and allows parametric and example-based control of geometric and textural elements. The core of our method lies in the decomposition of the input image into two components: a set of primitive shapes, representing the coarse structure of the image, and a parametric representation of the high-frequency texture details. For coarse structure extraction, we employ segmentation techniques such as superpixel segmentation [6] or neural stroke-based rendering [7, 8], depending on the desired shape primitives. To decompose the image texture into meaningful artistic control variables, we introduce a novel pipeline of lightweight, differentiable stylization filters. These filters are based on traditional image-based artistic rendering (IB-AR) [9] and are implemented in an auto-grad enabled framework [10]. Each filter produces a specific component of the overall texture details and is parameterized by stylistic or painterly attributes, such as the amount of contours, local contrast, or surface relief (e.g., oil paint texture) among others, making our approach particularly well-suited to decompose artistic images. Besides performing shape and filter decomposition in sequential stages, we also introduce an approach for joint stroke and filter parameter optimization, which enables a close integration between both, and enables adaption of the strokes to new example-based styles.
The resulting decomposed parameter and shape space enable a wide variety of human-in-the-loop as well as example-based texture and geometric shape edits (Fig. 1), making it particularly well-suited for interactive exploratory processes [1] in the workflow of casual creators [2]. The method of geometric abstraction is interchangeable and can be interactively controlled to adjust stroke shapes and abstraction levels. Filter parameters can also be interactively adjusted using manual editing of parameter masks or interpolated based on image attributes such as depth or saliency, allowing for separate control over color and texture.
The proposed method shows excellent performance in example-based texture editing, adapting textures to align with the underlying coarse structures and ensuring a consistent appearance. This includes the use of example-based losses to tune the texture in parametric space, aligning it with desired styles by optimizing NST image-based style losses [4, 11] or text-based losses [12]. Furthermore, for these example-based tasks, we can accelerate texture decomposition by training parameter prediction networks (PPNs) for real-time decomposition of NST results. This capability could be used to enable interactive refinement of NST results in professional artistic workflows, which we demonstrate by designing a PPN-based tool for NST image editing and artifact correction.
To summarize, we make the following contributions:
1. We present a holistic approach for geometric abstraction and texture editing that decomposes the image into coarse shapes and high-frequency detail textures.
2. We present a novel differentiable filter pipeline for texture editing. Compared to previous methods, it is lightweight and improves parameter editability.
3. We present an approach for joint optimization of geometric shapes and parameters using neural stroke painting.
4. We introduce PPNs for real-time single- and arbitrary-style texture decomposition.
5. We demonstrate that a texture can be adapted to new styles using example images or text prompts, and can be interactively adjusted on a global or local level.
This work is an extension of [13]. In the following, we discuss related work in Sect. 2, detail our decomposition approach in Sect. 3, showcase stylization and editing applications in Sect. 4, and evaluate and conclude our method in Sects. 5 and 6.
2 Related work
2.1 Example-based stylization control
Example-based methods for stylization, particularly NST as introduced by Gatys et al. [4], employ deep neural networks to extract and apply stylistic features from a reference image to a target image in a black-box fashion, and exert control over the output using a style/content trade-off. Beyond this trade-off, various methods have been developed to manipulate specific attributes in style transfer such as color [14] or stroke textures [15, 16]. CLIPstyler [12] enables style transfer guided by textual prompts, utilizing CLIP-based losses [17]. However, limitations exist in these techniques: optimization-based methods lack interactive control, and network-based methods usually only allow control over a single attribute and are not composable with others.
2.2 Generative stylization control
Recently, tremendous progress has been made in image generation following the introduction of generative adversarial networks (GANs) [18, 19] and diffusion models [3, 20, 21]. For image editing, StyleGAN [19, 22] has been shown to be highly adjustable due to its structured latent space. It has been applied in the stylization domain [23, 24], and used for stroke-based [25] and text-based editing of images [26, 27]. However, editing real images with GANs involves inverting them into a latent code [28], a process that may be imperfect and limited to specific domains like faces. More recently, denoising diffusion models [29] have demonstrated superior performance in representing complex, multimodal scenes compared to other methods [3, 20, 21]. They have been applied to the editing of natural images, e.g., by fine-tuning the diffusion model [30, 31], where they can also generate or edit stylized images. However, due to the time required for fine-tuning, these edits are not interactive, and they often strongly alter the semantic appearance even when only stylistic or textural edits are desired. Other approaches perform style transfer using diffusion models [32], which improves semantic consistency but still lacks fine-grained style control. Our method may be seen as a technique complementary to generative model control (e.g., applied as a post-processing step) that enables interactive global and local editing of stylistic abstraction while keeping the semantic content intact.
2.3 Image- and stroke-based rendering
In contrast to deep-learning-based approaches, traditional IB-AR, as reviewed by Kyprianidis et al. [9], encapsulates styles using a series of image filters, providing granular control over various stylistic attributes. These filters, however, are typically engineered for specific artistic styles such as cartoon [33], oil paint [34], or watercolor [35]. Lötzsch et al. [10] propose combining IB-AR with example-based stylization by implementing filters in a differentiable manner and optimizing parameters within these filters to approximate a stylized reference image, effectively decomposing a style into an interactively controllable “white box” parameter representation.
In our work, we utilize a similar filter-based, interpretable style representation. However, the approach by Lötzsch et al. [10] encapsulates the entire input image within these parameters, entwining shape, color, and details (e.g., local textures) in a complex and often redundant set of parameters and filters. This entanglement results in a cumbersome and non-intuitive editing experience, where adjustments in parameters influencing painterly attributes may inadvertently affect colors or shapes. In contrast, our method abstracts color and shape distribution at an earlier geometric abstraction stage, focusing the filter pipeline on primarily representing high-frequency components, i.e., local textures. In contrast to the style-specific filter chains employed by Lötzsch et al. [10], our approach introduces a streamlined filter pipeline, comprising just four filters, that is capable of adapting to a wide range of styles while enhancing parameter editability and reducing redundancy.
Further, Lötzsch et al. [10] demonstrated the viability of interactive parameter prediction by training PPNs for specific img2img translation tasks in combination with an additional post-processing CNN. In our work, we expand the versatility of PPNs, showcasing their adaptability for both general single- and arbitrary-style transfer decomposition tasks, while obviating the need for post-processing networks.
Our approach to geometric abstraction draws from existing artistic approaches that utilize either segmentation-based shapes [36, 37] or stroke-based rendering [38, 39] to abstract images into primitive shapes. The geometric image abstraction, e.g., obtained using SLIC [6] segmentation, is then provided as input to the filter pipeline. For stroke-based rendering, we integrate recent developments in neural painting, employing both optimization-based (stylized neural painting (SNP) [8]) and prediction-based (PaintTransformer [7]) strategies. Zou et al. [8] further propose to optimize strokes using NST losses to adapt the color and positioning of strokes according to an artistic style. We integrate their approach into our framework to jointly optimize strokes together with filter parameters, thereby enabling a fully differentiable geometric abstraction that can be adapted using losses such as CLIPStyler [12].
3 Method
3.1 Framework overview
Our approach, shown in Fig. 2, involves a two-stage process for decomposing images into a format suitable for subsequent texture editing tasks. The first stage, a segmentation stage denoted as \(S(\cdot )\), controls texture granularity and geometric abstraction. This stage transforms the input, a reference image \(I_r\), into an abstracted version \(I_a\) by rendering it using shape primitives. The second stage involves a pipeline of differentiable image filters, \(O(\cdot )\), which represents the remaining image details, i.e., \(I_r - I_a\). These details are represented within the parameters of the filter pipeline O. Each parameter in this pipeline corresponds to a specific artistic control variable, such as contours or local contrast, as detailed in Sect. 3.2.2. These parameters can be adjusted at the pixel level using parameter masks \(P_M\). The decomposition process, in general, first runs stage S and subsequently optimizes the decomposition loss \(\mathcal {L}\) on the output image \(I_o\) thereby refining \(P_M\). As a special case, S and \(P_M\) can be jointly optimized when using a differentiable first stage. In the following, we elaborate on filter decomposition in Sect. 3.2, the segmentation stage in Sect. 3.3, and on training networks for parameter prediction in Sect. 3.4.
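The two-stage composition described above can be sketched as a simple function composition; the callables and names below are illustrative placeholders, not the actual implementation:

```python
def stylize(i_r, segmentation_stage, filter_pipeline, p_masks):
    """Two-stage sketch: I_o = O(S(I_r); P_M). `segmentation_stage` (S) and
    `filter_pipeline` (O) are hypothetical callables standing in for the
    paper's components."""
    i_a = segmentation_stage(i_r)         # stage S: geometric abstraction I_a
    return filter_pipeline(i_a, p_masks)  # stage O: parametric texture details
```

When the first stage is differentiable, both arguments to this composition can be optimized jointly; otherwise S runs once and only `p_masks` is refined.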
3.2 Filter parameter decomposition
3.2.1 Decomposition loss
Formally, O is parametrized by a set of M parameter masks, i.e., \(P_M = \{ P_i \in \mathbb {R}^{h \times w} | i \le M \}\) and expects the output image of the segmentation stage \(I_a=S(I_r)\) as input. A stylized output image \(I_o\) is thus obtained as:
\[ I_o = O(I_a; P_M) = O(S(I_r); P_M). \]
For acquiring a decomposed texture representation, the parameters \(P_M\) are fine-tuned through a composite loss function \(\mathcal {L}\). This function integrates a target loss \(\mathcal {L}_{\textrm{target}}\) with a total variation loss \(\mathcal {L}_{\textrm{TV}}\), modulated by a weighting factor \(\lambda _{\textrm{TV}}\):
\[ \mathcal {L} = \mathcal {L}_{\textrm{target}} + \lambda _{\textrm{TV}} \, \mathcal {L}_{\textrm{TV}}. \]
Here, \(\mathcal {L}_{\textrm{target}}\) is designed to align with the desired target style, while \(\mathcal {L}_{\textrm{TV}}\) works to reduce noise in the masks and promote local coherence, which is beneficial for subsequent editing.
The choice of loss function for \(\mathcal {L}_{\textrm{target}}\) varies based on the intended outcome. For texture representation suitable for interactive editing, the \(\ell _1\) loss is preferred as it focuses on detailed reconstruction of the reference image:
\[ \mathcal {L}_{\textrm{target}} = \left\Vert I_o - I_r \right\Vert _1. \]
To adapt to a new texture style, \(\mathcal {L}_{\textrm{target}}\) can also directly utilize neural style transfer losses. For this, a perceptual content loss \(\mathcal {L}_c(I_o, I_r)\) [4] is combined with a style loss that extracts style from either an image, such as Gram loss [4] or optimal transport loss [11], or a text prompt, such as CLIPStyler [12]. We demonstrate these variants in Sect. 4.1. Additionally, these losses are effective for training a PPN to reconstruct style-transferred images in a single inference step. For this, we introduce single-style and arbitrary-style decomposition networks in Sect. 3.4.
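As a concrete reading of the composite objective, here is a minimal NumPy sketch of the \(\ell _1\) variant together with the total variation term (function names are our own; the actual implementation operates on auto-grad tensors):

```python
import numpy as np

def tv_loss(mask):
    """Anisotropic total variation: mean absolute difference between
    vertically and horizontally neighboring mask values."""
    dh = np.abs(mask[1:, :] - mask[:-1, :]).mean()
    dw = np.abs(mask[:, 1:] - mask[:, :-1]).mean()
    return dh + dw

def decomposition_loss(i_out, i_ref, masks, lambda_tv=0.2):
    """L = L_target (here: l1 reconstruction) + lambda_tv * L_TV,
    with the TV term summed over all parameter masks."""
    l_target = np.abs(i_out - i_ref).mean()
    l_tv = sum(tv_loss(m) for m in masks)
    return l_target + lambda_tv * l_tv
```

A constant mask incurs zero TV penalty, which is why the regularizer pushes the optimized masks toward the smooth, sparse results shown in Fig. 3.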
3.2.2 Differentiable filter pipeline
Traditional heuristic-based stylization pipelines (e.g., cartoon [33], watercolor [35] or oil paint [34]), can be made differentiable and optimized in an example-based framework as Lötzsch et al. [10] demonstrated. However, several limitations decrease the filter pipelines’ suitability for example-based stylization and subsequent editing. For one, some filter operations such as color quantization [33] are not directly trainable and require differentiable proxies [10]. Further, filter pipelines often contain many repetitive elements like multiple smoothing steps which can complicate the optimization and parameter mask editing.
Our approach addresses these challenges by proposing a streamlined pipeline, \(O(\cdot )\), with intuitive and non-redundant parameters while being capable of matching any texture. It comprises the following differentiable filters:
(1) Smoothing: Incorporates Gaussian smoothing (\(\sigma =1\)) and a subsequent bilateral filter with learnable parameters \(\sigma _d\) (distance kernel size) and \(\sigma _r\) (range kernel size).
(2) Edge enhancement: Implements an eXtended difference-of-Gaussians (XDoG) [40] filter with learnable parameters for contour amount and opacity.
(3) Painterly attributes: Controls a specific painterly aspect of a style. We utilize bump mapping for surface relief control throughout the paper, implemented using Phong shading [41] with learnable parameters bump-scale, Phong-specularity, and bump-opacity. However, in principle, any differentiable painterly filter can be used here, and we also implement wet-in-wet [42] and wobbling [35] filters for watercolor effect control.
(4) Contrast: Controls the amount of local contrast enhancement.
Gradients are calculated for both the parameter masks \(P_M\) and the image input. To facilitate this, the image filters are implemented within an auto-grad-enabled framework, in line with Lötzsch et al. [10]. To preserve the input color distribution (i.e., the output from the segmentation stage), learnable parameters cannot alter color hues during the optimization process. An ablation study (Sect. 5.2) is conducted to validate our filter choices and demonstrates that each stage of our pipeline is essential for accurately reconstructing arbitrary styles. Filters in the pipeline always operate on a single-color image input, making filters independent of each other. Thereby, the pipeline is easily extendable with additional IB-AR techniques [9] to control further painterly attributes.
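To illustrate the edge-enhancement stage, the following is a minimal, non-differentiable sketch of one common XDoG formulation (assuming SciPy is available; the parameter values and the names `p`, `eps`, and `phi` are illustrative, not the paper's):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def xdog(img, sigma=1.0, k=1.6, p=20.0, eps=0.6, phi=2.0):
    """Sketch of an XDoG edge filter: a DoG-sharpened image followed by a
    soft tanh threshold. Bright flat regions map to 1 (paper-white), while
    values below the threshold fall off smoothly toward dark contours."""
    u = (1.0 + p) * gaussian_filter(img, sigma) - p * gaussian_filter(img, k * sigma)
    return np.where(u >= eps, 1.0, 1.0 + np.tanh(phi * (u - eps)))
```

In the differentiable pipeline, the analogous thresholding and opacity parameters become learnable per-pixel masks instead of scalars.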
3.2.3 Parameter mask optimization
We optimize the parameter masks \(P_M\) using gradient descent to minimize the composite loss \(\mathcal {L}\). Optimizing solely on \(\mathcal {L}_{\textrm{target}}\), as in [10], results in highly fragmented masks with large local value variations (see Fig. 3, first row), complicating their editability. However, including \(\mathcal {L}_\textrm{TV}\) in the optimization process yields masks with less noise, increased sparsity, and greater smoothness.
For detail optimization, we follow [10], using 100 iterations of Adam [43] with a learning rate of 0.01, reduced by a factor of 0.98 every five steps after the 50th iteration. Differing from [10], our method does not require smoothing of the masks, as our filters do not produce artifacts. Unless specified otherwise, we consistently use \(\lambda _\textrm{TV}=0.2\) during optimization in our experiments.
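The stated schedule can be written as a small helper (a sketch of one plausible reading of "reduced by a factor of 0.98 every five steps after the 50th iteration"):

```python
def learning_rate(step, base_lr=0.01, decay=0.98, start=50, every=5):
    """Step-decay schedule: constant base_lr for the first `start` iterations,
    then multiplied by `decay` once per `every` subsequent steps."""
    if step <= start:
        return base_lr
    return base_lr * decay ** ((step - start) // every)
```

By the final (100th) iteration the rate has decayed ten times, i.e., to \(0.01 \cdot 0.98^{10} \approx 0.0082\).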
3.3 Shape decomposition
The first stage S in our pipeline is responsible for geometric abstraction and decomposes the reference image \(I_r\) into shapes by re-rendering it using distinct shape primitives, such as brushstrokes or segments of uniform color, to generate the abstracted image \(I_a\). This stage can employ various image-based techniques such as superpixel segmentation (e.g., SLIC [6]) or stroke-based rendering (e.g., PaintTransformer [7]), as it operates on pixels in both input and output domains. We categorize these as feed-forward techniques, that is, methods that decompose without receiving feedback from the second stage, and showcase results in Sect. 4.2.
A limitation of such feed-forward methods is the potential misalignment of stroke placement and color when adapting the filter pipeline to a new style, such as through style transfer. To address this, we propose an alternative approach that concurrently optimizes strokes alongside parameters, ensuring adaptation to a cohesive style. We adapt the SNP technique by Zou et al. [8] to our parameter optimization framework, details of which are elaborated in the following.
3.3.1 Differentiable strokes
SNP [8] incorporates a neural renderer, which is tasked with generating strokes from a set of stroke parameters, and a stroke blender that combines these strokes in a differentiable manner, optimized to reconstruct the reference image. The process starts with an empty canvas \( h_0 \). Using a neural renderer \( G \), a sequence of strokes is generated and superimposed on the canvas iteratively. For each drawing step \( t \), the renderer \( G \) takes stroke parameters \( x_t \in \mathbb {R}^{d}\) (characterizing aspects like shape, color, transparency, and texture) to produce a stroke foreground \( s_t \) and an alpha matte \( \alpha _t \). These components are blended using a soft blending equation:
\[ h_t = \alpha _t s_t + (1 - \alpha _t) h_{t-1}, \]
where \( (s_t, \alpha _t) = G(x_t) \). This process is repeated for \( T \) steps, and the stroke parameters are optimized to match a target image \( I_r \), i.e., the reference image is also used for parameter optimization. The final rendered output \( h_T \) is formulated as:
\[ h_T = f_{t=1\sim T}(\tilde{x}), \]
where \( f_{t=1\sim T}(\cdot ) \) represents the mapping from stroke parameters to the rendered canvas, and \( \tilde{x} = [x_1, \ldots , x_T] \) is the collection of parameters at each drawing step.
The optimization minimizes the similarity loss between the final canvas \( h_T \) and the reference \( I_r \) using a loss function \( \mathcal {L}_{\text {stroke}} \), typically the \(\ell _1\) loss; an additional optimal transport loss improves convergence in some cases [8]. The stroke parameters are updated using gradient descent, iteratively refining the strokes on the canvas until the rendering closely resembles the input image. The neural renderer \( G \) consists of a rasterization network \(G_r\) that, given stroke alpha \(x_\alpha \) and shape \(x_\text {shape}\), predicts the alpha matte \(\alpha _t\), and a shading network \(G_s\) that, given stroke shape and color \(x_\text {color}\), predicts the stroke texture \(s_t\). The shape representation \(x_\text {shape}\) is parametrized either by a textured rectangle rotated around an anchor point, akin to the stroke representation in PTf [7], or by a Bézier curve, depending on the stroke type. Our results demonstrate the use of the former “oil-painting” and “color-tape” rectangular strokes. We refer to [8] for architecture and training details.
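The soft blending recurrence is straightforward to sketch in NumPy (stroke foregrounds and alpha mattes are plain arrays here rather than neural-renderer outputs):

```python
import numpy as np

def blend_stroke(canvas, stroke, alpha):
    """Soft alpha compositing of one rendered stroke onto the canvas:
    h_t = alpha_t * s_t + (1 - alpha_t) * h_{t-1}."""
    return alpha * stroke + (1.0 - alpha) * canvas

def render(strokes, h0):
    """Sequentially superimpose (stroke, alpha) pairs, starting from canvas h0."""
    h = h0
    for s, a in strokes:
        h = blend_stroke(h, s, a)
    return h
```

Because each step is an affine combination of the previous canvas, gradients flow through the whole stroke sequence, which is what makes the joint optimization in the next section possible.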
3.3.2 Joint stroke and parameter optimization
Our joint optimization algorithm, shown in Algo. 1, adopts the progressive, grid layer-based approach of Zou et al. [8]. It starts optimizing stroke and filter parameters on a \(128\times 128\) pixels canvas, which is subsequently partitioned into \(m \times m\) blocks (\(m=2, 3, 4,...\)), searching for a number of iterations in each grid layer. For each layer of m divisions, the number of strokes in \({\tilde{x}}\) is gradually grown. We sample stroke centers distributed according to the error between the current canvas and the reference, i.e., \(|h_{i-1} - I_r|^4\), while the other stroke parameters are sampled randomly. Subsequently, we predict and blend strokes for canvas \(h_{i}\), which is then stitched to create \(I_a\) and passed through pipeline O to create the stylized intermediate outputs. In Fig. 4, we use \(\ell _1\) for \(\mathcal {L}_{\text {target}}\) to match the painting “Wheat field with cypresses” by Van Gogh and show the interim outputs; full optimization runs are showcased in the supplemental video (Online Resource 2). Gradients are computed and updated for both \({\tilde{x}}\) and \(P_M\) with learning rates \(\eta = 0.002\) and \(\lambda = 0.05\), respectively. After completing a grid level, strokes are rendered to a final canvas \(h^m_T\), which is upscaled and used as initialization in the following grid level. We optimize using Adam [43] and add T strokes per block in every grid layer.
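The error-proportional sampling of stroke centers can be sketched as follows (grayscale canvases for simplicity; the helper assumes the error is not identically zero):

```python
import numpy as np

def sample_stroke_centers(canvas, reference, n, rng=None):
    """Sample n stroke-center pixel coordinates with probability proportional
    to the per-pixel error |h - I_r|^4, concentrating new strokes where the
    current canvas deviates most from the reference."""
    rng = np.random.default_rng() if rng is None else rng
    err = np.abs(canvas - reference) ** 4
    p = err.ravel() / err.ravel().sum()  # assumes at least one nonzero error
    idx = rng.choice(err.size, size=n, p=p)
    return np.stack(np.unravel_index(idx, err.shape), axis=1)  # (n, 2) row/col
```

Raising the error to the fourth power sharpens the distribution, so freshly added strokes land almost exclusively on poorly reconstructed regions.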
While the optimization of strokes gradually increases the area of coverage by adding more and finer strokes, parameter optimization directly optimizes fine details across the entire image, which can lead to parameters emulating structure details that would be better represented by strokes. To remedy this, we add an optional stroke regularization loss \(\mathcal {L}_{\text {reg}}\) that aims to constrain parameters to only be optimized in the locations of the currently active stroke set and further constrain parameter variations inside a single stroke:
\[ \mathcal {L}_{\text {reg}} = \sum _{t=1}^{T} \left\Vert \alpha _t \left( P_M - {\bar{P}}_M(t) \right) \right\Vert _1, \]
where the parameter mask should, for every stroke, be guided toward its average value \({\bar{P}}_M(t) = \frac{\sum {(\alpha _t P_M})}{\sum \alpha _t} \) under the area of the stroke (i.e., where \(\alpha _t\) is not zero). The resulting parameter masks (Fig. 4d) display a stroke-like structure, which aids in local parameter editing, as values can be uniformly adjusted for by selecting a stroke segment (see Sect. 4.3).
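A hypothetical NumPy reading of this regularizer, penalizing deviation from the per-stroke mean under each alpha matte (the exact weighting in the paper may differ):

```python
import numpy as np

def stroke_mean(p_mask, alpha):
    """Average parameter value under a stroke's alpha matte:
    P_bar(t) = sum(alpha_t * P_M) / sum(alpha_t)."""
    return (alpha * p_mask).sum() / alpha.sum()

def stroke_reg_loss(p_mask, alphas):
    """Sketch of L_reg: for each active stroke, penalize the alpha-weighted
    deviation of the parameter mask from its per-stroke mean."""
    loss = 0.0
    for a in alphas:
        loss += (a * np.abs(p_mask - stroke_mean(p_mask, a))).mean()
    return loss
```

A mask that is constant inside every stroke footprint incurs zero penalty, which is what produces the stroke-like parameter structure in Fig. 4d.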
3.4 Parameter prediction for NST
Our optimization-based decomposition approach, while versatile, is computationally demanding, taking about 3 min for a 1 MPix image on an Nvidia RTX 3090. For tasks with a specific style target like NST, we accelerate this step to real time by training a PPN to predict the parameter masks \(P_M\) in a single inference step, akin to pixel-predicting NST networks [44, 45].
We propose PPNs for both single- and arbitrary-style transfer texture decomposition, training them as follows:
3.4.1 Single-style decomposition PPN
The single-style transfer network (SSTN) introduced by Johnson et al. [44] is trained on a single-style image using NST losses. We adapt this approach to train a single-style PPN (\(\text {PPN}_\textrm{sst}\)) to decompose textures of a style image \(I_s\). We generate a stylized ground-truth image \(I_r\) using the SSTN trained on \(I_s\) and preprocess \(I_r\) using segmentation stage S to create training inputs \(I_a\) for our filter pipeline O. The training loss for \(\text {PPN}_\textrm{sst}\) follows Johnson et al. [44] and is a combination of Gram matrix style loss \(\mathcal {L}_{gram}\) [4] and content loss \(\mathcal {L}_c\) over VGG [46] features:
\[ \mathcal {L}_{\text {PPN}_\textrm{sst}} = \lambda _s \, \mathcal {L}_{gram}(I_o, I_s) + \lambda _c \, \mathcal {L}_c(I_o, I_r). \]
The \(\text {PPN}_\textrm{sst}\) architecture modifies the final layer’s output channels to match the number of parameter masks (\(\#P_M\)).
3.4.2 Arbitrary-style decomposition PPN
In arbitrary-style transfer, networks are trained to stylize images given both a content and a style image at inference time. We train an arbitrary-style decomposition PPN (\(\text {PPN}_\textrm{arb}\)) on a large set of style images \(I_s\), adopting the training regime from existing arbitrary-style transfer works [5, 45]. \(\text {PPN}_\textrm{arb}\) adapts the architecture of SANet [45], differing only in the last decoder layer, which in our case has \(\#P_M\) output channels. Training involves preprocessing similar to the single-style case but uses SANet for generating \(I_r\). The training loss \(\mathcal {L}_{\text {PPN}_\textrm{arb}}\) combines the AdaIN [5] style loss \(\mathcal {L}_s\) and the content loss \(\mathcal {L}_c\):
\[ \mathcal {L}_{\text {PPN}_\textrm{arb}} = \mathcal {L}_s(I_o, I_s) + \lambda _c \, \mathcal {L}_c(I_o, I_r). \]
Differing from SANet training [45], we omit the structure-retaining identity loss due to our pipeline’s constrained content alteration capacity.
Training Details We trained \(\text {PPN}_\textrm{sst}\) and \(\text {PPN}_\textrm{arb}\) for 24 epochs, using MS-COCO [47] (content images) and WikiArt [48] (style images for \(\text {PPN}_\textrm{arb}\)), each comprising about 80,000 images. We use the training settings and hyperparameters listed in Table 1.
To ensure performance on diverse settings, the segmentation stage S settings were randomized by uniformly varying the global number of segments, smoothing sigma, and kernel sizes.
4 Controlling aspects of style
The decomposed style representation can be manipulated in numerous ways after optimization or prediction of style parameters. We explore re-optimization with alternative style transfer losses, parameter interpolation, and geometric abstraction control. Further, we discuss manual segment and parameter editing and showcase its usefulness for NST artifact correction.
4.1 Texture style transfer
Different loss functions can be employed in \(\mathcal {L}_{\text {target}}\) to adapt the parameters \(P_M\) to new artistic targets. Style transfer losses have proven particularly useful in this regard and can be used to tune results from a previous parameter decomposition to a new style, or to optimize a new texture style directly from zero-initialized \(P_M\).
Image-based parameter style transfer. In Fig. 5, we show parameter retargeting to new artistic texture styles. In this example, we create a stylized image using NST [4], decompose the parameters using \(\ell _1\) matching, and then re-optimize these using the relaxed optimal transport style loss (STROTSS [11]). Notice how the overall structure and colors are preserved, while subtle details and textures are effectively altered.
Text-based parameter style transfer. Utilizing text prompts for style representation enhances control flexibility, as it eliminates the need for reference images. For text-based style transfers, we adapt the losses from CLIPstyler [12], differing from their approach by not training a convolutional neural network (CNN) but directly optimizing effect parameters. Specifically, we adapt their patch-based and directional losses, while typically content losses are not required due to the inherent content and color preservation of the filter pipeline. Figure 6 demonstrates how the style details of a real-world painting are transformed using text prompts, with the primary colors remaining true to the original. Further examples in the supplemental material show how details conform to the prompts and adjust to segment sizes (Fig. 7).
4.2 Geometric abstraction control
In Table 2, we list the segmentation and stroke-based rendering methods that were integrated and evaluated, and their main primitive for geometric control, e.g., primitive shape, the number of strokes, segments, or layers.
Superpixel segmentation. Techniques such as SLIC [6] typically partition the image into segments based on a clustering of color similarity and spatial proximity. In our pipeline, such methods are used to preserve low-frequency information in the form of uniform color patches while omitting high-frequency details. This segmentation aligns well with the shape boundaries in the image, making these methods well-suited as intermediate representations for interactive editing; we showcase separate editing of color and texture in Sect. 4.3.
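A toy stand-in for this stage: given any label map (e.g., produced by SLIC), replacing each segment by its mean color yields the uniform color patches described above (illustrative code, not the paper's implementation):

```python
import numpy as np

def abstract_by_segments(image, labels):
    """Replace every segment (e.g., a superpixel) by its mean color, keeping
    only the low-frequency structure. image: (h, w, 3) float array;
    labels: (h, w) integer segment ids."""
    out = np.empty_like(image)
    for seg in np.unique(labels):
        m = labels == seg
        out[m] = image[m].mean(axis=0)
    return out
```

The residual `image - abstract_by_segments(image, labels)` then carries exactly the high-frequency detail that the filter parameters are optimized to reproduce.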
Stroke-based rendering methods. These, in contrast, are more adept at achieving stronger geometric abstractions. Our approach incorporates two neural painting techniques, as listed in Table 2. These methods are designed to recreate images using specific shape primitives, such as brushes, rectangles, or circles. Stylized neural painting (SNP) [8] excels in generating abstract representations, such as approximating an image using rectangles (Fig. 8c), and can optimize a variety of shapes using a neural stroke renderer. PaintTransformer (PTf) [7], on the other hand, utilizes fixed primitive shapes and generates the result in one feed-forward pass, which allows for rapid visual feedback and facilitates interactive adjustments. PTf uses a fixed shape primitive during training, whereas SNP uses a trainable generator network to synthesize brush shapes. For nuanced control over stroke details at different spatial locations, we combine the stroke confidence output from PaintTransformer with an optional level-of-detail parameter mask, \(P_S\). This mask can be either manually defined or algorithmically predicted using techniques such as saliency or depth extraction. We use depth in Fig. 8b to define foreground and background stroke granularities. Joint stroke and texture parameter optimization (Sect. 3.3.2) employs the same shape primitives as when executing SNP [8] as a feed-forward stage only. An added benefit of the combined optimization is that strokes, particularly their colors and locations, are adjusted according to the optimized loss terms as well. In parameter style transfer, this approach can thus introduce stroke colors that are not present in the content image, such as yellow strokes in the sky for stars in a “starry night” style (Fig. 9).
Re-optimization. Filter parameter re-optimization fine-tunes texture and minor shapes to match the shape and granularity of geometric components in \(I_a\), yielding a cohesive visual integration. This process is illustrated in Figs. 6, 12, and 13, with additional examples showcasing variations in geometric shapes and granularities provided in Sect. IV of Online Resource 1.
4.3 Filter parameter editing
One key benefit of our filter-parameter-based approach is that it allows for interactive style editing through both global and local adjustments of parameters.
- Editing parameters globally: After optimization or prediction, the parameter masks outlined in Sect. 3.2.2 can be globally altered by uniformly modifying their values. For instance, Fig. 10 demonstrates increasing and decreasing the bump mapping parameter.
- Editing parameters locally: Local modifications are feasible by adjusting parameter masks using brush metaphors. A practical use case is enhancing contours in specific facial areas, such as the eyes and mouth in a portrait, to emphasize these features, as demonstrated in the supplemental video (Online Resource 2).
- Interpolating parameters: Interpolation of parameters can be performed locally through binary or continuous-valued blending masks. In Fig. 7, we extract a depth map using DPT [49], which is used as a blending mask to blend between two parameter sets.
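These editing modes reduce to simple array operations on the parameter masks. The following minimal numpy sketch (function names are illustrative, not from our implementation) shows global scaling and mask-driven interpolation:

```python
import numpy as np

def edit_global(param_mask: np.ndarray, scale: float) -> np.ndarray:
    """Uniformly scale a parameter mask, clamped to the valid [0, 1] range."""
    return np.clip(param_mask * scale, 0.0, 1.0)

def blend_params(params_a, params_b, blend_mask):
    """Per-pixel linear interpolation between two parameter sets,
    driven by a binary or continuous blending mask
    (e.g., a normalized depth map)."""
    return blend_mask * params_a + (1.0 - blend_mask) * params_b

a = np.full((2, 2), 1.0)   # parameter set A
b = np.zeros((2, 2))       # parameter set B
m = np.array([[1.0, 0.5],
              [0.0, 0.25]])
out = blend_params(a, b, m)
```

Local brush-based edits fit the same pattern: the brush simply writes into a spatial region of `param_mask` instead of scaling it globally.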
4.4 Refining style transfer results
In stylizations produced by methods such as NST, artifacts or unwanted style elements can appear. Superpixels serve as an efficient tool for selecting and refining specific regions, particularly useful in rectifying localized flaws or editing local style elements. After selection, the color of a segment can be adjusted, while the fine-structural texture, represented in the filter parameters, is kept intact.
For instance, in Fig. 11, artifacts are removed by matching colors in selected superpixel regions to another area using histogram matching, thereby eliminating undesired stylization patterns or colors. A single-style \(\text {PPN}_\text {sst}\), trained on the candy style, repredicts the texture during real-time editing of segment colors, aligning textural patterns with new colors and structures. This process enables a cohesive and interactive editing workflow, further detailed in Sect. III of Online Resource 1.
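The histogram matching used for this color refinement can be approximated channel-wise by quantile mapping. The sketch below is a simplified stand-in for the actual implementation; it matches one color channel of a selected region to a reference region:

```python
import numpy as np

def match_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Map the values of `source` so that its empirical distribution
    matches that of `reference`. Both are 1-D arrays, e.g., one color
    channel of the pixels inside a selected superpixel region."""
    src_idx = np.argsort(source)
    # Resample the reference distribution at the source's quantiles.
    quantiles = np.linspace(0.0, 1.0, len(source))
    ref_values = np.quantile(np.sort(reference), quantiles)
    matched = np.empty_like(source, dtype=np.float64)
    matched[src_idx] = ref_values  # preserve the source's rank order
    return matched

src = np.array([0.0, 0.5, 1.0])   # channel values of the flawed region
ref = np.array([0.2, 0.4, 0.6])   # channel values of the target region
out = match_histogram(src, ref)
```

Applied per channel to a selected superpixel region, this transfers the target area's color statistics while the rank order of the source pixels, and hence its coarse structure, is preserved.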
5 Results and discussion
5.1 Qualitative comparisons
We compare our texture style transfer method to other example-based techniques, focusing on their capacity to precisely control geometric and texture patterns within a given image. Ideally, a technique in such an editing scenario should preserve the original composition and colors of the input image while adapting only the desired stylistic elements. Figure 12 shows results of CLIPstyler [12] and img2img stable diffusion [3], compared to our text-based filter optimization with fine (SLIC segments) and coarse (PTf-circles) geometric shape abstractions. To ensure a fair comparison with CLIPstyler, which also modifies colors, we incorporate a histogram-matching loss [50] (CLIPstyler-Hist). It can be observed that CLIPstyler tends to introduce new content structures, and its stylization intensity varies significantly with the text prompt used. Additionally, even with histogram matching, there is a noticeable deviation in colors from the original image. In contrast, stable diffusion often produces substantial variations from the original image content. Our method retains a regular, brush-stroke-like appearance and is controllable in its granularity level, for example, by using a larger brush shape for the background (Fig. 12f, top row). In Fig. 13, we further compare our image-based filter optimization (using PTf-circles and squares) and our joint stroke and parameter optimization against STROTTS [11] incorporating a histogram-matching loss [50], Gatys NST [4] with a color-preserving loss [14], and SNP with style transfer losses [8]. Similar to our CLIP-optimized parameters, our image-based parametric style transfer demonstrates a greater ability to retain original colors and exhibits more spatial homogeneity compared to these methods.
The joint optimization of strokes and parameters (S+P) further enhances alignment between coarse geometrical structures and filter parameters, ensuring that elements such as rectangles and outlines are well-integrated with the image content, as exemplified in Fig. 13f, bottom row.
5.2 Quantitative comparisons
We conduct several experiments to quantitatively assess our approach against others, and ablate different settings. We first describe our experimental setup and then discuss results concerning style-matching capability, mask-editability, the necessity of individual filters, and runtime performance.
5.2.1 Experimental setup
In our experiments, we use the following metrics to assess decomposition quality: VGG [46]-based content loss (\(\mathcal {L}_c\)) and style loss (\(\mathcal {L}_s\)) [4] as indicators of stylization quality and, additionally, in the case of optimization, the \(\ell _1\) difference to the reference image generated by STROTTS [11]. Metrics are computed on a dataset of 20 content images from MS-COCO [47], each stylized with 10 widely used NST styles. We use the following hyperparameters throughout the study: a content image size of 512 pixels, a style image size of 256 pixels, content and style weights of 0.01 and 5000, respectively, a total variation weight of \(\lambda _\textrm{TV}=0.2\), and Adam with a learning rate of 0.01 for 500 iterations. The segmentation stage S employs SLIC [6] superpixel segmentation with \(s=1000\) and \(s=5000\) segments. Joint stroke and parameter (S+P) optimization uses 5 grid levels and 1000 strokes.
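The total variation regularizer \(\mathcal {L}_\textrm{TV}\) used in this setup can be illustrated as follows; this is a minimal anisotropic variant evaluated on a single 2-D parameter mask (our pipeline applies it inside a differentiable optimization framework, which this numpy sketch omits):

```python
import numpy as np

def tv_loss(mask: np.ndarray, weight: float = 0.2) -> float:
    """Anisotropic total variation of a 2-D parameter mask: the sum of
    absolute differences between horizontally and vertically adjacent
    values, scaled by the TV weight. Penalizing this term during
    optimization discourages noisy masks and improves editability."""
    dh = np.abs(mask[:, 1:] - mask[:, :-1]).sum()  # horizontal gradients
    dv = np.abs(mask[1:, :] - mask[:-1, :]).sum()  # vertical gradients
    return weight * (dh + dv)

smooth = np.full((4, 4), 0.5)                       # constant mask
noisy = (np.indices((4, 4)).sum(axis=0) % 2) * 1.0  # checkerboard mask
# the checkerboard incurs a much larger TV penalty than the constant mask
```

This directly connects to the mask-noisiness comparison in Sect. 5.2.3: masks optimized with a TV penalty have a smaller local variation and are therefore easier to select and edit.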
5.2.2 Style-matching capabilities
Table 3 evaluates the style-matching capabilities of our parameter prediction and optimization against their respective baselines. While the single-style NST network, as proposed by Johnson et al. [44], demonstrates superior content preservation compared to our single-style \(\text {PPN}_\textrm{sst}\), this is anticipated given that the PPN takes an abstracted and stylized image as input, whereas [44] operates on the content image. The style preservation of \(\text {PPN}_\textrm{sst}\), on the other hand, is comparably effective. In the case of arbitrary-style PPN, content preservation is nearly equivalent to its baseline, with an enhancement in style loss performance. For our optimization approach, we assess the \(\ell _1\) distance relative to the reference image, achieving a close match. Compared to the stylization filter pipelines of Lötzsch et al. [10], our results show similar metrics, despite their approach not applying loss constraints on parameter masks. See Sect. II of Online Resource 1 for visual comparisons.
5.2.3 Mask editability
Table 4 compares mask noisiness; typically, less noisy masks enhance editability. When optimized with \(\mathcal {L}_\textrm{TV}\), our method reduces mask noise by more than two orders of magnitude, as indicated by the noise standard deviation (\(\sigma \)). This contrasts sharply with the masks from Lötzsch et al.’s oil paint pipeline [10], which exhibit higher noise levels. In scenarios where stroke and texture parameters are optimized jointly, using \(\mathcal {L}_{\text {reg}}\) tends to produce noisier masks than \(\mathcal {L}_\textrm{TV}\). However, this is often due to sharp boundaries caused by stroke regularization, which in practice facilitate editability by allowing parameter mask areas to be selected based on their underlying stroke region.
5.2.4 Ablation study—filter pipeline
Figure 14 presents a filter ablation study aimed at evaluating the necessity of each filter in our pipeline by matching ablated pipeline configurations to a reference image. The full experimental setup is described in Sect. 5.2.1. We remove one filter at a time and measure the \(\ell _1\) distance to \(I_r\), as well as content and style losses. Omitting \(\mathcal {L}_\textrm{TV}\) increases matching performance at the cost of editability. The ablation results indicate that omitting any filter significantly reduces the pipeline’s capacity to accurately match a wide range of styles. While the content loss \(\mathcal {L}_c\) is less affected, removing filters generally leads to higher \(\ell _1\) errors, indicating that the abstracted output \(I_a\) already contains the semantic information but all filters are needed to accurately represent the fine details. Removing the contrast filter does not impact detail representation; however, it degrades the style loss, as the filter is needed for color adjustment. In Fig. 15, we show example results of the study. Our proposed full pipeline and its optimization target, as generated by STROTTS [11], demonstrate a close match. All ablated configurations exhibit difficulties in reconstructing the clock face. The absence of xDoG removes the capability to accurately represent black areas. The absence of bump mapping leads to a failure in reconstructing areas of high luminance, such as on the tree on the left. The removal of any of the other filters results in difficulties in painting over segment boundaries.
5.2.5 Runtime performance
In Table 5, we present a comparison of the runtime performance of our proposed methods. Both PPNs are about equally fast as they only require a forward pass of the network. Furthermore, the SLIC segmentation requires approximately 75% of the execution time and does not have to be continuously re-executed in interactive editing scenarios. On the other hand, the optimization method requires several seconds even for small images, and is roughly comparable in runtime to other optimization-based NSTs, with the joint stroke and parameter optimization taking significantly longer due to repeated evaluation of stroke-prediction networks.
5.3 Limitations
In our framework, the second stage is not guaranteed to consistently align with the geometric primitives introduced by the first stage. This issue is particularly apparent with style transfer losses, which prioritize statistical style targets over geometric consistency. Although joint optimization of parameters and strokes aims to enhance alignment, mismatches between the geometric representations in the stroke and target losses can still lead to discrepancies. For example, in Fig. 16, the first stage introduces tilted rectangles; however, optimizing the parameters with the CLIPstyler [12] loss “Tetris” overdraws block boundaries to create straight rectangles from the buildings.
Additionally, filter optimization matches target images effectively, but the optimized losses steer parameters toward plausible outputs and editable masks without considering the stylistic intent of individual parameters. High-quality decomposition requires each filter’s parameters to uniquely affect the output, preventing local imitation of other filters’ effects. While the filters we use typically yield distinct effects, with losses such as \(\mathcal {L}_{\text {reg}}\) and \(\mathcal {L}_\textrm{TV}\) reducing localized imitation, some parameters remain prone to overuse. For instance, increasing contour opacity in a decomposed image can inadvertently darken areas beyond the intended contours, as seen in Fig. 17b. Conversely, reducing opacity does not completely remove all contours, notably in areas like the left eye and eyebrow (Fig. 17c). Future research could benefit from semantics-focused losses for superior decomposition. Further, painterly attribute control is limited to the existing parameters of the pipeline; adding other differentiable filters necessitates manual reconfiguration. Moreover, while our approach facilitates global texture adaptation using geometric abstraction and example-based techniques, it does not guarantee statistical textural properties, such as spatial uniformity.
6 Conclusions
We introduced a lightweight, differentiable filter pipeline for texture editing, using both manual and example-based style control combined with geometric abstraction techniques. Our approach highlights the benefits of using a decomposed representation of strokes and textures for interactive and exploratory editing. This technique, applied to artistic images, facilitates the convergence of traditional stylistic image filtering and contemporary generative image synthesis, holding the potential to significantly enhance established image editing tools. Future research directions include exploring semantics-guided loss functions and deeper integration with generative image synthesis techniques.
Data availability
No datasets were generated or analyzed during the current study.
References
Hertzmann, A.: Toward modeling creative processes for algorithmic painting. arXiv preprint arXiv:2205.01605 (2022)
Winnemöller, H.: NPR in the wild. In: Image and Video-Based Artistic Stylisation, pp. 353–374 (2012). https://doi.org/10.1007/978-1-4471-4519-6
Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: Proc. CVPR, pp. 10684–10695 (2022). https://doi.org/10.1109/cvpr52688.2022.01042
Gatys, L.A., Ecker, A.S., Bethge, M.: Image style transfer using convolutional neural networks. In: Proc. CVPR, pp. 2414–2423 (2016). https://doi.org/10.1109/cvpr.2016.265
Huang, X., Belongie, S.: Arbitrary style transfer in real-time with adaptive instance normalization. In: Proc. ICCV, pp. 1501–1510 (2017). https://doi.org/10.1109/iccv.2017.167
Achanta, R., Shaji, A., Smith, K., Lucchi, A., Fua, P., Süsstrunk, S.: SLIC superpixels compared to state-of-the-art superpixel methods. IEEE TPAMI 34(11), 2274–2282 (2012). https://doi.org/10.1109/tpami.2012.120
Liu, S., Lin, T., He, D., Li, F., Deng, R., Li, X., Ding, E., Wang, H.: Paint transformer: Feed forward neural painting with stroke prediction. In: Proc. ICCV, pp. 6598–6607 (2021). https://doi.org/10.1109/iccv48922.2021.00653
Zou, Z., Shi, T., Qiu, S., Yuan, Y., Shi, Z.: Stylized neural painting. In: Proc. CVPR, pp. 15689–15698 (2021). https://doi.org/10.1109/cvpr46437.2021.01543
Kyprianidis, J.E., Collomosse, J., Wang, T., Isenberg, T.: State of the “art”: a taxonomy of artistic stylization techniques for images and video. IEEE TVCG 19(5), 866–885 (2012). https://doi.org/10.1109/TVCG.2012.160
Lötzsch, W., Reimann, M., Büssemeyer, M., Semmo, A., Döllner, J., Trapp, M.: WISE: Whitebox image stylization by example-based learning. In: Proc. ECCV, pp. 135–152 (2022). https://doi.org/10.1007/978-3-031-19790-1_9
Kolkin, N., Salavon, J., Shakhnarovich, G.: Style transfer by relaxed optimal transport and self-similarity. In: Proc. CVPR (2019). https://doi.org/10.1109/cvpr.2019.01029
Kwon, G., Ye, J.C.: CLIPstyler: Image style transfer with a single text condition. In: Proc. CVPR, pp. 18062–18071 (2022). https://doi.org/10.1109/cvpr52688.2022.01753
Büßemeyer, M., Reimann, M., Buchheim, B., Semmo, A., Döllner, J., Trapp, M.: Controlling geometric abstraction and texture for artistic images. In: Proc. IEEE International Conference on Cyberworlds (CW), pp. 1–8 (2023). https://doi.org/10.1109/cw58918.2023.00011
Gatys, L.A., Ecker, A.S., Bethge, M., Hertzmann, A., Shechtman, E.: Controlling perceptual factors in neural style transfer. In: Proc. CVPR, pp. 3730–3738 (2017). https://doi.org/10.1109/cvpr.2017.397
Reimann, M., Buchheim, B., Semmo, A., Döllner, J., Trapp, M.: Controlling strokes in fast neural style transfer using content transforms. The Visual Computer, 1–15 (2022). https://doi.org/10.1007/s00371-022-02518-x
Jing, Y., Liu, Y., Yang, Y., Feng, Z., Yu, Y., Tao, D., Song, M.: Stroke controllable fast style transfer with adaptive receptive fields. In: Proc. ECCV (2018). https://doi.org/10.1007/978-3-030-01261-8_15
Radford, A., Kim, J.W., et al.: Learning transferable visual models from natural language supervision. In: Proc. ICML, pp. 8748–8763 (2021)
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Proc. NIPS (2014)
Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative adversarial networks. In: Proc. CVPR, pp. 4401–4410 (2019). https://doi.org/10.1109/cvpr.2019.00453
Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. In: Proc. NIPS, vol. 34, pp. 8780–8794 (2021)
Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton, E., Ghasemipour, S.K.S., Ayan, B.K., Mahdavi, S.S., Lopes, R.G., et al.: Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487 (2022)
Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., Aila, T.: Analyzing and improving the image quality of StyleGAN. In: Proc. CVPR, pp. 8110–8119 (2020). https://doi.org/10.1109/cvpr42600.2020.00813
Jang, W., Ju, G., Jung, Y., Yang, J., Tong, X., Lee, S.: StyleCariGAN: Caricature generation via StyleGAN feature map modulation. ACM TOG 40(4), 1–16 (2021). https://doi.org/10.1145/3450626.3459860
Chong, M.J., Forsyth, D.: JoJoGAN: One shot face stylization. In: Proc. ECCV, pp. 128–152 (2022). https://doi.org/10.1007/978-3-031-19787-1_8
Singh, J., Zheng, L., Smith, C., Echevarria, J.: Paint2pix: Interactive painting based progressive image synthesis and editing. In: Proc. ECCV, pp. 678–695 (2022). https://doi.org/10.1007/978-3-031-19781-9_39
Patashnik, O., Wu, Z., Shechtman, E., Cohen-Or, D.: StyleCLIP: Text-driven manipulation of StyleGAN imagery. In: Proc. ICCV, pp. 2085–2094 (2021)
Gal, R., Patashnik, O., Maron, H., Bermano, A.H., Chechik, G., Cohen-Or, D.: StyleGAN-NADA: CLIP-guided domain adaptation of image generators. ACM TOG 41(4), 1–13 (2022)
Richardson, E., Alaluf, Y., Patashnik, O., Nitzan, Y., Azar, Y., Shapiro, S., Cohen-Or, D.: Encoding in style: a stylegan encoder for image-to-image translation. In: Proc. CVPR, pp. 2287–2296 (2021). https://doi.org/10.1109/cvpr46437.2021.00232
Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., Ganguli, S.: Deep unsupervised learning using nonequilibrium thermodynamics. In: Proc. ICML, pp. 2256–2265 (2015)
Kim, G., Kwon, T., Ye, J.C.: DiffusionCLIP: Text-guided diffusion models for robust image manipulation. In: Proc. CVPR, pp. 2426–2435 (2022). https://doi.org/10.1109/cvpr52688.2022.00246
Kawar, B., Zada, S., Lang, O., Tov, O., Chang, H., Dekel, T., Mosseri, I., Irani, M.: Imagic: Text-based real image editing with diffusion models. arXiv preprint arXiv:2210.09276 (2022)
Chung, J., Hyun, S., Heo, J.-P.: Style injection in diffusion: A training-free approach for adapting large-scale diffusion models for style transfer. arXiv preprint arXiv:2312.09008 (2023)
Winnemöller, H., Olsen, S.C., Gooch, B.: Real-time video abstraction. ACM TOG 25(3), 1221–1226 (2006). https://doi.org/10.1145/1179352.1142018
Semmo, A., Limberger, D., Kyprianidis, J.E., Döllner, J.: Image stylization by interactive oil paint filtering. Computers & Graphics 55, 157–171 (2016). https://doi.org/10.1016/j.cag.2015.12.001
Bousseau, A., Kaplan, M., Thollot, J., Sillion, F.X.: Interactive watercolor rendering with temporal coherence and abstraction. In: Proc. NPAR, pp. 141–149 (2006). https://doi.org/10.1145/1124728.1124751
Song, Y.-Z., Rosin, P.L., Hall, P.M., Collomosse, J.P.: Arty shapes. In: CAe, pp. 65–72 (2008). https://doi.org/10.2312/compaesth/compaesth08/065-072
Ihde, L., Semmo, A., Döllner, J., Trapp, M.: Design space of geometry-based image abstraction techniques with vectorization applications. Journal of WSCG, 99–108 (2022)
Hertzmann, A.: Painterly rendering with curved brush strokes of multiple sizes. In: Proc. SIGGRAPH, pp. 453–460 (1998). https://doi.org/10.1145/280814.280951
Huang, Z., Heng, W., Zhou, S.: Learning to paint with model-based deep reinforcement learning. In: Proc. ICCV, pp. 8709–8718 (2019). https://doi.org/10.1109/iccv.2019.00880
Winnemöller, H., Kyprianidis, J.E., Olsen, S.C.: XDoG: Advanced Image Stylization with eXtended Difference-of-Gaussians. Computers & Graphics 36(6), 740–753 (2012)
Phong, B.T.: Illumination for computer generated pictures. Commun. ACM 18(6), 311–317 (1975). https://doi.org/10.1145/360825.360839
Wang, M., Wang, B., Fei, Y., Qian, K., Wang, W., Chen, J., Yong, J.-H.: Towards photo watercolorization with artistic verisimilitude. IEEE TVCG 20(10), 1451–1460 (2014)
Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: Proc. ICLR (2015)
Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: Proc. ECCV (2016). https://doi.org/10.1007/978-3-319-46475-6_43
Park, D.Y., Lee, K.H.: Arbitrary style transfer with style-attentional networks. In: Proc. CVPR, pp. 5880–5888 (2019). https://doi.org/10.1109/cvpr.2019.00603
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: Proc. ICLR (2015)
Lin, T., et al.: Microsoft COCO: Common objects in context. In: Proc. ECCV (2014). arxiv:1405.0312. https://doi.org/10.1007/978-3-319-10602-1_48
Nichol, K.: Kaggle Painter by Numbers (WikiArt) (2016). https://www.kaggle.com/c/painter-by-numbers
Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. ICCV, pp. 12179–12188 (2021). https://doi.org/10.1109/iccv48922.2021.01196
Jonschkowski, R., Brock, O.: End-to-end learnable histogram filters (2016)
Funding
Open Access funding enabled and organized by Projekt DEAL. This work was funded by the German Federal Ministry of Education and Research (BMBF) (through Grants 01IS15041—“mdViPro” and 01IS19006—“KI-Labor ITSE”), and the German Federal Ministry for Economic Affairs and Climate Action (BMWK) (through Grant 16KN086401—“PO-NST”).
Author information
Authors and Affiliations
Contributions
M.R. wrote the main manuscript text and prepared results. M.B. prepared the main results and helped in writing the manuscript. B.B. prepared supplemental material. All authors reviewed the manuscript.
Corresponding author
Ethics declarations
Conflict of interest
The authors declare no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Below is the link to the electronic supplementary material.
Supplementary file 2 (mp4 301228 KB)
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Reimann, M., Büßemeyer, M., Buchheim, B. et al. Artistic style decomposition for texture and shape editing. Vis Comput (2024). https://doi.org/10.1007/s00371-024-03521-0