Autocompletion of repetitive stroking with image guidance

Image-guided drawing can compensate for a lack of skill but often requires a significant number of repetitive strokes to create textures. Existing automatic stroke synthesis methods are usually limited to predefined styles or require indirect manipulation that may break the spontaneous flow of drawing. We present an assisted drawing system to autocomplete repetitive short strokes during a user's normal drawing process. Users draw over a reference image as usual; at the same time, our system silently analyzes the input strokes and the reference to infer strokes that follow the user's input style when repetition is detected. Users can accept, modify, or ignore the system's predictions and continue drawing, thus maintaining fluid control over drawing. Our key idea is to jointly analyze image regions and user input history to detect and predict repetition. The proposed system can effectively reduce the user's workload when drawing repetitive short strokes, helping users to create results with rich patterns.


Introduction
Drawing is a common form of artistic expression. By varying the strokes, texture, and shading, artists can create drawings in various styles [1]. However, it remains a largely manual process that may require significant artistic expertise and repetitive manual labor.
To reduce repetitive workload, various methods have been proposed to automatically synthesize strokes from user-provided examples [4-6] or through procedural steps [7]. However, these methods usually operate in batch, reducing user participation in the creative process. Furthermore, since many methods have predefined styles and only allow users to modify a few global parameters, the final results may look monotonous and lack originality (see Fig. 1). Other interactive systems [8, 9] preserve the normal drawing flow while automating much of the stroke synthesis; we share a similar goal. However, they typically target experts, requiring artistic expertise for high-level picture composition and fine-grained control.
One common way to overcome this skill barrier is to use a reference photo as a scaffold for drawing, by tracing it physically using transparent paper, or digitally via layers in digital drawing applications. Prior research [10] shows that even when a reference image is used as a scaffold, people still enjoy the freedom of individual expression. We thus propose to enhance drawing using an image scaffold by automating tedious repetitions. Our idea is to bridge two extremes: manual drawing, which allows full control but can be tedious, and image-based algorithmic synthesis, which saves effort but provides limited user control and interactivity. As a first attempt towards this goal, we focus on autocompleting repetitive short strokes, which are very common in pen-and-ink drawing (see Fig. 2), under the guidance of a reference image. As in typical digital drawing applications, users can draw freely on a reference image with our system. Meanwhile, our system analyzes the relationships between user inputs and the reference image, detects potential repetitions, and suggests what users might want to draw next. Users can accept, reject, or ignore the suggestions and continue drawing, thus maintaining fluid control of drawing. See Fig. 3 for an example.
The major contribution of this paper is the technical design of an image-guided autocompletion drawing tool that preserves the natural drawing process and individual user styles. Our approach is inspired by image analogies [4] and operation history analysis and synthesis [9], and leverages two key insights. Firstly, since the act of drawing repetitive strokes usually indicates a specific intention (e.g., filling an object or hatching a shaded region), we use common image features shared by the coherent repetitive strokes to infer the intended region. Secondly, the drawing is usually related to the underlying reference image (e.g., the density of strokes depends on image brightness). Therefore, we analyze the properties of both the drawing and the reference image to infer possible relationships, which serve as contextual constraints for stroke prediction.
We have implemented a prototype and conducted a pilot study with participants from different backgrounds to evaluate its utility and usability. The quantitative analysis and qualitative feedback, as well as the various drawing results created by the users, suggest that our system effectively reduces users' workload when drawing repetitive short strokes, helping them to create results with rich patterns.

Image-assisted drawing
Many drawing support tools adopt reference images and provide intelligent assistance to novices, e.g., beautifying users' sketches with extracted image features [10-13] or providing educational guidance [14-16]. We share with Refs. [17-19] the goal of reducing the user's workload. However, these works use predefined algorithms to generate strokes guided by cursor movement and only take the user's input as an indicator of where to render, greatly limiting the user's artistic freedom. In contrast, we aim to provide more flexibility between automatic synthesis and manual artistic control by autocompleting tedious repetitions during the user's normal drawing process.

Image-based artistic rendering
Our work is related to image-based artistic rendering (IB-AR) [20], especially stroke-based methods and example-based methods.
Stroke-based methods create artistic results from images by strategically generating brushstrokes whose properties (e.g., position, density, orientation, color, size) are related to image properties (e.g., gradient, edges, color, salience) [7]. Among these methods, the closest to ours are the early image-based pen-and-ink rendering methods [21, 22], which allow users to input sample elements for distribution. However, users have to prepare the sample elements separately (usually as a standalone file) and then adjust parameters to produce the rendered output. In contrast, our system lets users directly provide exemplars on a reference image while silently inferring the distribution properties.
Example-based methods aim to model the visual features of example images for transfer. There are two major modeling approaches: the parametric approach [6, 23, 24], based on statistical analysis of stroke characteristics, which better preserves global textures, and the non-parametric approach [4, 5, 25], based on patch-wise mapping, which better captures local structures. We combine both to generate strokes: the parametric approach is used to infer statistical relationships between stroke properties and image features, while patch-wise matching is used to preserve the local arrangement of strokes. StyLit [5] allows users to stylize a rendered ball and simultaneously propagates the style to arbitrary 3D shapes. Our method shares a similar idea of interactive style propagation, but with two main differences. Firstly, instead of propagating a style globally, we propagate a style to perceptually similar local areas so that users can conveniently define different styles in different areas. Secondly, we represent drawings as discrete stroke operations instead of raster textures to better preserve their structure, e.g., allowing changes to the color or size of the drawn strokes.

Operation history-assisted authoring
Operation histories [26] have been leveraged for various authoring tasks, such as sketching [9], animation [27, 28], modeling [29, 30], beautification of freehand drawings [31], and handwriting [32]. Our work is most closely related to that of Xing et al. [9], which autocompletes repetitive sketching by analyzing dynamic operations recorded during authoring. Our method extends their work to consider additional information from a reference image, and thus enables the propagation of strokes to regions with similar image attributes such as color or semantic meaning.
In our use case, an operation is an input stroke, so our work is also related to stroke pattern analysis and synthesis [8, 33-36]. However, these works disregard the temporal relationships between past strokes and do not use image guidance, so they differ from ours.
We summarise the major differences between our work and the discussed closely related work in Table 1.

User interface
Our system prototype follows a standard digital drawing interface, with the addition of our autocompletion feature, as shown in Fig. 4. The user draws on top of the reference image, displayed semi-transparently on the main canvas, while our system analyzes the input strokes and the reference image in the background.

Autocompletion
In autocompletion mode, our system automatically performs analysis whenever the user finishes a new stroke. When a potential repetition is detected, our system highlights the current repetitive strokes and an inferred propagation region, updates the inferred parameters in the filling property panel, and generates an autocompletion suggestion. Users can accept or reject the suggestion using hotkeys, accept part of it using lasso selection, or ignore it and continue to draw (Fig. 5). The suggestion is continuously updated according to user input.

Interactive editing
Our system provides a set of tools to refine the autocompleted results.
Propagation region editing. Users can create, add, or subtract a region using the intelligent scissors tool [37], or expand an existing region by a fixed width (see Fig. 4(e)) for stroke autocompletion. Figure 6 shows an example of creating a new region for stroke regeneration.
Density editing. Users can modify three parameters to adjust the density of the generated strokes: the average spacing, the lightness coefficient, and the gradient coefficient. The latter two define the relationships between density and image lightness and gradient, respectively. Our system automatically updates these parameters upon prediction, and the updated parameters provide a starting point for user manipulation. Figure 7 shows an example.
Orientation editing. Our system automatically predicts whether the input exemplar is correlated with the image flow; the orientation can also be adjusted manually. The user can additionally modify the image flow field using the gesture brush, which rotates the touched strokes to align with the gesture. See Fig. 8 for an example.

Auxiliary functions
Our prototype also includes the auxiliary functions below. These are not unique to our system but can facilitate the usual drawing process.
Post-editing stroke properties.Users can select existing strokes and edit their properties, such as size and color.
Auto-coloring. When enabled, this function automatically colorizes strokes using colors sampled from the reference image.
View switching.Users can press the space key to switch between the canvas view, reference view, and pure drawing view.

Approach
Our system involves two key steps: (i) inferring the input exemplar, the output region, and the contextual constraints from the stroke history and the reference image, and (ii) synthesizing suggested strokes accordingly. Section 4.1 first describes how to synthesize strokes, assuming all this information is available; Section 4.2 then explains how to infer the necessary information for synthesis.

Problem statement
The inputs to our stroke synthesis method are an exemplar E consisting of repetitive strokes, the reference image I, a target region mask M, an orientation map O, and a radius map R. Pixel values of R determine the stroke spacing: a smaller value leads to a denser distribution. Our goal is to compute an output set of strokes X over the output region M, such that X is similar to E with respect to I. We describe how to infer E, M, O, and R from user interaction with I in Section 4.2.

Idea
To support autocompletion using the reference image, we extend the discrete element texture synthesis method [9, 38], which represents strokes as point samples and iteratively improves the sample distribution by minimizing the neighborhood difference between the exemplar and the output, to make use of an additional reference image. Firstly, we combine sample neighborhoods [38] with image features [4] to measure neighborhood differences. Secondly, the range and orientation of each sample neighborhood are determined by the radius and orientation maps inferred from the reference image. Figure 9 shows our key idea.

Stroke representation
A stroke s is an ordered list of sample points, each with a timestamp and appearance attributes such as thickness and color. Since we focus on autocompleting short strokes, we represent each stroke by its centroid p and its dominant direction v (see Fig. 10) for efficiency of synthesis, without considering any other information about the original stroke. To take drawing order into consideration, we obtain the dominant direction by averaging the vectors from the start point to each subsequent point. After synthesis, we reconstruct all sample points according to the updated centroid and direction.
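For concreteness, here is a minimal sketch of this reduction in Python (our naming, not the paper's code):

```python
import numpy as np

def reduce_stroke(points):
    """Reduce a short stroke (ordered list of 2D sample points) to a
    centroid p and a dominant direction v.  The direction averages the
    vectors from the start point to every later point, so it encodes
    the drawing order (reversing the stroke flips v)."""
    pts = np.asarray(points, dtype=float)        # shape (n, 2), n >= 2
    p = pts.mean(axis=0)                         # centroid
    v = (pts[1:] - pts[0]).mean(axis=0)          # order-aware direction
    norm = np.linalg.norm(v)
    return p, v / norm if norm > 0 else v
```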

Initialization
We pre-process the target region mask M by removing the area occupied by existing strokes in the same layer to avoid clutter, and then initialize the output X by generating sample positions with Poisson-disk sampling based on the radius map R. For each sampled position, we copy the input stroke with the smallest image feature distance d_I, which is defined in Eq. (2). We then optimize the output for several objectives, as detailed below.
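A possible initialization sketch follows, using simple dart throwing as a stand-in for Poisson-disk sampling; the helper feat_at and the rejection budget are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def init_output(mask, R, exemplar_feats, feat_at, budget=20000, rng=None):
    """Dart-throwing approximation of variable-radius Poisson-disk
    sampling over the target mask M.  Each accepted position is seeded
    with the index of the exemplar stroke whose image feature is
    closest to the local feature."""
    rng = rng or np.random.default_rng(0)
    H, W = mask.shape
    placed, seeds = [], []
    for _ in range(budget):
        x, y = rng.uniform(0, W), rng.uniform(0, H)
        if not mask[int(y), int(x)]:
            continue                              # outside target region
        r = float(R[int(y), int(x)])
        if all((x - px) ** 2 + (y - py) ** 2 >= min(r, pr) ** 2
               for px, py, pr in placed):
            placed.append((x, y, r))
            f = feat_at(x, y)                     # e.g., mean Lab* patch color
            k = int(np.argmin(np.linalg.norm(exemplar_feats - f, axis=1)))
            seeds.append((x, y, k))               # position + exemplar index
    return seeds
```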

Neighborhood term
We define the neighborhood of a stroke s by both its neighboring strokes and an R(s) × R(s) image patch around its centroid, where R(s) is the radius value at s. Prior methods (e.g., Ref. [38]) determine the neighboring strokes by spatial distance alone; the neighborhood radius must then be large enough to capture any underlying pattern, but this might include redundant strokes and thus decrease performance. Therefore, we adopt Zhao and Zhu's method [39] to automatically find a minimal representative neighborhood, considering not only the distances between strokes but also their locations. As Fig. 10(b) shows, we set the neighborhood radius of the center stroke s to 2R(s). We then divide all strokes within the neighborhood radius into four quadrants with respect to the local frame defined by the orientation O(s), and collect the n nearest strokes from each quadrant as the representative neighborhood N(s). In our implementation, we set n = 4 for the input exemplar and n = 1 for the output strokes to ensure that each output neighborhood can be maximally matched.
For a stroke s and a neighboring stroke s' ∈ N(s), we compute their offsets in position and direction as

$$\hat{u}(s, s') = O(s)^{-1}\big(p(s') - p(s)\big), \qquad \hat{v}(s, s') = O(s)^{-1}\, v(s') \tag{1}$$

where O(s)^{-1} indicates rotating the vector inversely to O(s). Note that the position and direction differences are computed in the local frame defined by the density map and orientation map. For an output stroke s_o and an input stroke s_i, we first find the best matching pairs {(s'_o, s'_i)} between the neighborhoods N(s_o) and N(s_i) using the Hungarian algorithm [38, 40], with the 2-norm distance between the offsets of Eq. (1) as the matching cost. The neighborhood distance is then defined as

$$d(s_o, s_i) = \sum_{(s'_o, s'_i)} \Big( \big\|\hat{u}(s_o, s'_o) - \hat{u}(s_i, s'_i)\big\|^2 + \big\|\hat{v}(s_o, s'_o) - \hat{v}(s_i, s'_i)\big\|^2 \Big) + \mu\, d_I(s_o, s_i) \tag{2}$$

The second term measures the image feature distance d_I; µ (= 0.1 in our implementation) controls its relative weight. We use the mean Lab* color of an r × r patch at the stroke centroid as the image feature vector. The overall neighborhood term to minimize is

$$\varphi_{\mathrm{neigh}} = \sum_{s_o \in X} \min_{s_i \in E} d(s_o, s_i) \tag{3}$$
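The following sketch illustrates this matching under some assumptions of ours: strokes are objects carrying a centroid .p and direction .v (NumPy arrays), O(s) returns the local orientation angle, and d_I is the image feature distance of Eq. (2). SciPy's linear_sum_assignment provides the Hungarian matching:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def local_offsets(s, neighbors, O):
    """Position/direction offsets of each neighbor expressed in the
    local frame of stroke s, i.e., rotated by the inverse of O(s); Eq. (1)."""
    c, d = np.cos(-O(s)), np.sin(-O(s))
    R_inv = np.array([[c, -d], [d, c]])
    return [(R_inv @ (n.p - s.p), R_inv @ n.v) for n in neighbors]

def neighborhood_distance(s_o, N_o, s_i, N_i, O, d_I, mu=0.1):
    """Hungarian-match output neighbors to input neighbors (2-norm of
    offsets as cost), then sum squared offset differences plus the
    weighted image term; Eq. (2).  Assumes non-empty neighborhoods."""
    off_o = local_offsets(s_o, N_o, O)
    off_i = local_offsets(s_i, N_i, O)
    cost = np.array([[np.linalg.norm(uo - ui) + np.linalg.norm(vo - vi)
                      for ui, vi in off_i] for uo, vo in off_o])
    rows, cols = linear_sum_assignment(cost)
    d = sum(np.linalg.norm(off_o[r][0] - off_i[c][0]) ** 2 +
            np.linalg.norm(off_o[r][1] - off_i[c][1]) ** 2
            for r, c in zip(rows, cols))
    return d + mu * d_I(s_o, s_i)
```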

Correction term
Since the neighborhood term is a one-way matching from output neighborhoods to input neighborhoods, optimization may sometimes leave void regions. Furthermore, the neighborhood term does not preserve the alignment of strokes to the image (e.g., see Fig. 11(e)). To address these issues, we apply a correction term. We compute a weighted centroidal Voronoi diagram from all the strokes' center points, using 1/R as weight; we denote the computed region centroids as {p̃}. We then minimize the total distance between each output stroke centroid and its region centroid:

$$\varphi_{\mathrm{corr}} = \sum_{s_o \in X} \big\| p(s_o) - \tilde{p}(s_o) \big\|^2 \tag{4}$$
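One possible discrete realization of the weighted Voronoi step, written as a loose stand-in (a pixel-level assignment over the region mask; the paper's exact construction may differ):

```python
import numpy as np

def correction_targets(centers, radii, mask):
    """Assign each mask pixel to the stroke center minimizing
    distance * R (i.e., weight 1/R), then return per-cell pixel
    centroids as the correction targets of Eq. (4)."""
    ys, xs = np.nonzero(mask)
    pix = np.stack([xs, ys], axis=1).astype(float)             # (P, 2)
    d = np.linalg.norm(pix[:, None, :] - centers[None, :, :], axis=2)
    owner = np.argmin(d * radii[None, :], axis=1)              # weighted cells
    targets = centers.astype(float).copy()
    for k in range(len(centers)):
        cell = pix[owner == k]
        if len(cell):
            targets[k] = cell.mean(axis=0)                     # cell centroid
    return targets
```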

Solver
The energy function we aim to minimize is

$$E = \varphi_{\mathrm{neigh}} + w\, \varphi_{\mathrm{corr}} \tag{5}$$

We iteratively minimize this energy following the EM methodology in Ref. [38]. In each iteration, for each output stroke s_o, we search for the closest matching input stroke s_i to minimize φ_neigh, compute the Voronoi diagram centroid p̃ to minimize φ_corr, and solve a least-squares system combining both terms. Let m be the total number of iterations. For the i-th iteration, we set w = (i/m)², so that more weight is given to φ_neigh in earlier iterations, optimizing the neighborhood distribution first before making corrections; this leads to better results.
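Putting the pieces together, a simplified solver loop might look as follows. The callbacks match_cost, neigh_target, and corr_targets stand in for the machinery above (they are our assumed interfaces), and the per-stroke closed-form update replaces the global least-squares solve for brevity:

```python
def solve(output, exemplar, match_cost, neigh_target, corr_targets, iters=10):
    """EM-style minimization of E = phi_neigh + w * phi_corr.
    match_cost(s_o, s_i): neighborhood distance, Eq. (2);
    neigh_target(s_o, s_i): position predicted from matched offsets;
    corr_targets(strokes): weighted-Voronoi centroids, Eq. (4)."""
    for i in range(1, iters + 1):
        w = (i / iters) ** 2                   # ramp: favor phi_neigh early
        targets = corr_targets(output)
        for k, s_o in enumerate(output):
            s_i = min(exemplar, key=lambda e: match_cost(s_o, e))
            # closed-form minimum of |p - p_neigh|^2 + w * |p - p_corr|^2
            s_o.p = (neigh_target(s_o, s_i) + w * targets[k]) / (1 + w)
            s_o.v = s_i.v                      # copy matched direction
    return output
```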

Inference
In this section, we describe how we infer E, M, O, and R for synthesis from user interactions with I.

Input exemplar E
In this step, we aim to detect whether stroke repetitions exist and to obtain the repetitive group as an exemplar for the synthesis process. Since people usually draw strokes in a coherent manner [9] and usually have specific intentions when drawing repetitive strokes, we assume the exemplar strokes to be temporally consecutive and to share certain similar properties.
We start from the last stroke input by the user and search backward in the stroke sequence to incrementally find strokes with shape and image features similar to the last stroke's. Stroke shape similarity is measured using the Fréchet distance, while the image features include Lab* color (weighted by 0.12, 0.44, and 0.44 to suppress the impact of lightness) and precomputed semantic segmentation [41] at a stroke's center. Alternatively, one could use different image features to capture different drawing intentions. We compare the standard deviation of a feature over the traversed k strokes against a threshold (15/255 for the color feature, 1 for the segmentation feature) to measure similarity. Back-traversal stops when the next stroke does not share a similar feature or k > 50. These k strokes serve as the input exemplar for the synthesis process. See Fig. 12 for an example of the incremental search process.
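A minimal sketch of this backward search, omitting the Fréchet shape test for brevity; color_feat and seg_feat are assumed callbacks returning the weighted Lab* color and the segmentation label at a stroke's centroid:

```python
import numpy as np

def group_exemplar(strokes, color_feat, seg_feat, k_max=50,
                   color_thresh=15 / 255, seg_thresh=1.0):
    """Walk backward from the last stroke; growth stops once both
    feature deviations exceed their thresholds (cf. Fig. 12) or
    k > k_max.  Returns the group and which features it shares."""
    group = [strokes[-1]]
    for s in reversed(strokes[:-1]):
        cand = group + [s]
        c_dev = np.std([color_feat(t) for t in cand], axis=0).max()
        s_dev = float(np.std([seg_feat(t) for t in cand]))
        if (c_dev > color_thresh and s_dev > seg_thresh) or len(cand) > k_max:
            break
        group = cand
    c_dev = np.std([color_feat(t) for t in group], axis=0).max()
    s_dev = float(np.std([seg_feat(t) for t in group]))
    return group, {'color': c_dev <= color_thresh,
                   'segment': s_dev <= seg_thresh}
```

A group is treated as a valid exemplar only when it contains more than one stroke (cf. Fig. 12).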

Output region M
The shared features of the obtained stroke exemplar also indicate the intended region. For instance, if all exemplar strokes lie inside the same object segmentation region, it is very likely that the user intends to fill that region. Therefore, we use the shared features obtained during exemplar grouping to find a similar region for output.
Since we have only two features in our implementation, region inference is simple: if the Lab* color feature is shared by the exemplar strokes, we obtain the region by GrabCut [42]; if the semantic feature is shared, we directly take the corresponding segmentation; and if both features are shared, we take the intersection. See Fig. 12 for an example. When there are multiple disconnected regions, we retain the region nearest to the user's last stroke and discard the remainder, as it is less natural to propagate to distant regions.
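As an illustration, the GrabCut branch could be sketched with OpenCV, seeded by the exemplar stroke centroids; the seeding radius and function signature are our assumptions:

```python
import cv2
import numpy as np

def infer_region(img_bgr, centroids, seg_map=None, seg_label=None):
    """GrabCut seeded with the exemplar stroke centroids as definite
    foreground; everything else starts as probable background.  If a
    shared semantic label is given, intersect with that segment."""
    mask = np.full(img_bgr.shape[:2], cv2.GC_PR_BGD, np.uint8)
    for x, y in centroids:
        cv2.circle(mask, (int(x), int(y)), 5, cv2.GC_FGD, -1)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(img_bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    region = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))
    if seg_map is not None and seg_label is not None:
        region &= (seg_map == seg_label)      # intersection of shared features
    return region
```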

Contextual constraints
Since the drawing is usually related to the underlying reference image, we analyze the properties of both the drawn strokes and the reference image to infer possible relationships to control the global distribution of strokes. The constraints we consider are the orientation O and radius R.
Artists usually adjust stroke directions to convey curvature, but may sometimes randomize or fix stroke orientation regardless of the depicted objects to create different visual effects. Therefore, the problem is to decide which case the input exemplar implies. We first compute the edge tangent field (ETF) [43] for the reference image and then calculate the angles between the exemplar strokes and the ETF directions at their centroids. If the standard deviation of the angles is small (less than 15°), we consider the stroke orientations to be related to the ETF and take the ETF as the orientation field; otherwise, we set a default global coordinate frame at each point of the orientation field.
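A small sketch of this test, assuming etf(p) returns the unit tangent at position p and strokes carry .p/.v as before; angles are folded to [0°, 90°] because a stroke's direction is sign-ambiguous:

```python
import numpy as np

def follows_etf(strokes, etf, deg_thresh=15.0):
    """Return True when exemplar stroke directions are consistently
    aligned with the edge tangent field (small angular deviation)."""
    angles = []
    for s in strokes:
        t = etf(s.p)
        c = abs(np.dot(s.v, t)) / (np.linalg.norm(s.v) * np.linalg.norm(t))
        angles.append(np.degrees(np.arccos(np.clip(c, 0.0, 1.0))))
    return float(np.std(angles)) < deg_thresh   # True -> take ETF as O
```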
Since density is inversely proportional to the spacing between strokes, we reframe the spacing problem as predicting a radius map that controls the extents of stroke neighborhoods. First, we compute the distance from each exemplar stroke to its nearest neighbor. We assume a linear relationship between these minimum distances r and the image features, namely the image lightness l and gradient strength g at a stroke's centroid:

$$r = \begin{bmatrix} l & g & 1 \end{bmatrix} \cdot t \tag{6}$$

where t denotes the coefficients to be found. If the squared correlation value of the fitted linear model is at least 0.5 (the closer to 1, the better the fit explains the data), we use the model to compute a radius map. Otherwise, we consider the density to be uniform and create a constant radius map using the average spacing of the exemplar. We then update the user interface with the computed coefficients.
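The fit itself is ordinary least squares; a sketch with NumPy (function name and the clamp on predicted radii are ours):

```python
import numpy as np

def fit_radius_model(r, l, g, r2_thresh=0.5):
    """Fit r = [l, g, 1] . t (Eq. (6)) to the exemplar's nearest-neighbor
    spacings, and fall back to a uniform radius when the squared
    correlation (R^2) of the fit is below the threshold."""
    A = np.column_stack([l, g, np.ones_like(l)])
    t, *_ = np.linalg.lstsq(A, r, rcond=None)
    pred = A @ t
    ss_tot = np.sum((r - np.mean(r)) ** 2)
    r2 = 1.0 - np.sum((r - pred) ** 2) / ss_tot if ss_tot > 0 else 0.0
    if r2 >= r2_thresh:                      # good fit: feature-driven radii
        return lambda L, G: np.clip(t[0] * L + t[1] * G + t[2], 1e-3, None)
    return lambda L, G: np.full_like(np.asarray(L, float), np.mean(r))
```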

Pilot study
We conducted a pilot study to evaluate the utility and usability of our approach, comparing three modes through quantitative analysis and qualitative feedback.
In autocompletion mode, users had full access to our prototype, including autocompletion and interactive editing.
In interactive batch filling mode (batch mode), users were required to create a texture example first and then manually specify properties for batch filling. This simulates the sequential procedure of many IB-AR methods (e.g., Ref. [21]), although those rarely allow users to directly define examples on target images. This mode was performed using our system with autocompletion turned off.
In fully manual drawing mode (manual mode), users had to manually draw each stroke without any automatic synthesis.
We also tested the expressiveness of our system through an open creation session and obtained comments for future improvements.

Target session
The goal of this session was to compare the three interaction modes in terms of utility and usability. Since we aim to facilitate drawing using an image scaffold, we included general users from different backgrounds, focusing on less skilled users, who are more likely to want to use reference images. We thus recruited 12 participants: nine novices with little drawing experience, two amateurs with some experience (P3, P4), and a student majoring in illustration (P5). Most of the studies were conducted in a lab environment on a Lenovo Miix 520 tablet with a stylus, except for two conducted remotely using a mouse (due to the COVID-19 pandemic).
The study procedure consisted of a tutorial followed by target tasks, and took each participant about two hours in total.
Each participant was first given a brief introduction to our system and then asked to fill the apple in Fig. 4 with short hatches as a training task. They were encouraged to vary the density and orientation of input strokes and to become familiar with the features of our system.
We used a within-subject design in which each participant reproduced two target drawings (see Fig. 13) in all three modes: autocompletion, batch, and manual. The target drawings contained an object and a landscape, both common illustration topics (e.g., see Fig. 2). The order of modes was randomised over participants. Since we focus on region filling, we asked the participants to draw the outlines of both images in advance, so that they could focus on drawing the textures during the study. We encouraged the participants to finish each drawing as soon as possible, preferably within about a dozen minutes, but without any hard time limit. After completing the two drawings in each mode, each participant filled in a NASA-TLX questionnaire [44]. Finally, we asked the participants about their preferred mode and usage experience, and for other comments.

Open session
The goal of this session was to observe how users interact with our system and to learn about users' subjective experiences. We invited seven participants (one professional artist, two amateurs, and four novices) for this session. They were asked to create a drawing freely from the same reference image (Fig. 15(a)) using our system. The reference image was a portrait photo, also common in illustrations. The only requirement was that the drawings should contain some repetitive content. We again commenced with a tutorial and conducted the task on a Lenovo Miix 520 tablet with a stylus. The participants were encouraged to think aloud and describe their thought processes and interactions during this session. After this task, participants could optionally create further drawings from any images they wanted. Since our prototype does not contain all the common functionality of commercial drawing tools, we allowed the participants to retouch the resulting drawings in Photoshop, without adding more strokes.

Workload
Figure 14(a) shows the perceived workload scores from the target session. Generally, the autocompletion mode received the lowest (best) scores for almost all factors. One-way ANOVA showed that the three modes differ significantly in physical demand (F = 10.69, p < 0.001) but not in the other factors. Regarding physical demand, post-hoc pairwise tests showed that autocompletion mode and batch mode were both rated significantly lower than manual mode, but showed no significant difference from each other. This matches our expectation, since automatic synthesis should only reduce physical load relative to manual work, not cause extra pressure.

Efficiency
We calculated the average completion time (Fig. 14(b)) and stroke count (Fig. 14(c)) for each mode and task. The system synthesized about 82% of the strokes in autocompletion mode and about 92% in batch mode. Although manual mode took the shortest time to complete, it also resulted in the fewest total strokes. We thus calculated the strokes per minute for each mode: autocompletion (111.03, SD = 38.76), batch (101.98, SD = 45.13), and manual (115.95, SD = 46.73). It turns out that automatic generation did not improve efficiency, probably because users spent extra time adjusting and experimenting with the generated effects instead of just drawing strokes. It should be noted that such directed tasks omit the time taken to explore alternative patterns, which might be high in a fully manual setting.

Quality
We asked 30 external volunteers to evaluate the quality of the participants' drawings. We randomized all drawings created by the participants, showed each output drawing alongside the target drawing, and asked volunteers to rate the resemblance of the output to the target on a scale from 1 (very dissimilar) to 5 (very similar). The volunteers were instructed to focus on the overall stroke distributions and flows rather than individual stroke thickness and detailed shapes. The average scores for each mode were: autocompletion (3.10, SD = 1.24), batch (3.09, SD = 1.21), and manual (2.98, SD = 1.20). The quality of the drawings created with automatic synthesis was slightly better than that of the fully manual drawings, but without a significant difference. From the participants' perspective, three novices commented that the automated strokes were better than their manual strokes, because they tend to become impatient when manually drawing all strokes, which lowers quality.

Preferred mode
Seven participants preferred autocompletion mode while the other five preferred batch mode. Generally, autocompletion mode was considered more convenient but less precise; batch mode was considered more precise but required too much interaction. P12 commented, "autocompletion mode is more straightforward, because you can see the filling effects instantly without doing a lot of manipulation beforehand; while in batch mode, you have to remember the meanings of parameters and adjust them in order to create strokes." P10 also said, "compared to batch filling, autocompletion mode provides a quick guess for filled regions and allows me to get results more quickly with less work." However, autocompletion mode is "less accurate in some vague and detailed regions, such as the shadows of the boat, where it tends to include some unwanted regions, so I had to manually subtract those regions, which is a bit tedious", according to P3. The professional, P5, also preferred batch mode for being able to precisely select regions. We therefore consider the autocompletion function and the interactive editing function complementary in usability.

Creative results and experience
Figure 15 shows outcomes from the open session. Although working from the same reference image and making wide use of repetitive short strokes, the study participants were able to create different results by varying the stroke shapes and arrangement. Figure 16 demonstrates further results. Regarding the creation experience, one user said "it is playful, and the final result is also good"; two users described it as "encouraging", because the system allows beginners to quickly create stylistic drawings; and one user commented that she "felt creative when drawing with this system", because she could try out patterns over image regions conveniently and was more comfortable drawing from a reference image than from scratch. The professional suggested that the tool itself was somewhat limited to pointillism and hatching styles, but could be helpful in adding interesting textures to color paintings (e.g., see Fig. 16(i)). Two users commented that the reduction in workload is useful, but they also complained about some inaccurate inferences of the autocompletion. We further discuss this problem in Section 6.

Limitations and future work
From our observation and users' feedback, we identified several opportunities for improvement.

Accuracy of autocompletion
We rely on simple Lab* color and semantic segmentation features for region inference. While color features suffice in most cases, regions with similar colors but different semantics require sufficient segmentation accuracy (e.g., see Fig. 13(c)). More advanced semantic selection methods (e.g., Ref. [45]) might help to infer more accurate regions. However, the granularity of selection requires further study: for example, when a user draws on a bear's limb, is the intended region the whole bear, or all limbs? We leave this as future work.

Visual blocking
Since the drawing and the system's suggestions are overlaid on the reference image, it can be difficult for users to see the image when selecting parts of the suggestions (see Fig. 17) or adding a new layer of strokes. Although users can switch views using a hotkey, it might be helpful to convey reference information, such as image darkness or boundaries, through additional visual hints [10, 16].

Higher-level image features
We only consider relationships between strokes and low-level image features, such as colors and flows, over regions. By considering higher-level image features, such as elements and edges, it may be possible to extend the scope of autocompletion, e.g., autocompleting the sparse flowers in the foreground of Fig. 16(i) through correspondences between strokes and elements.

Stroke types
Our method only supports short strokes, while artists frequently also use long repetitive strokes [1]. It is worth investigating the possibility of incorporating continuous strokes [46] into our analysis and synthesis framework and extending support to different input stroke types.

Conclusions
We have presented a new drawing concept and designed an assisted drawing system that helps users autocomplete repetitive short strokes under the guidance of reference images while maintaining the flexible control of manual drawing. By extending operation history analysis and synthesis with image analysis, our system is able to generate results adapted to reference images and users' prior inputs. We conducted a pilot study to validate the usefulness of our approach and showed various drawing results created by users.

Fig. 1
Fig. 1 Style comparison. (a) Our work is designed to reduce the workload of completing repetitive patterns during the manual drawing process. Full control over the drawing process leads to more dynamic results than (b) Photoshop's art history brush tool [2] and (c) StippleShop [3].

Fig. 3
Fig. 3 Example of our system workflow. (a) A user stipples over a leaf region of a reference image while our system predicts what she might draw next (b) (blue strokes: inferred exemplars; pale red region: inferred target region; semi-transparent strokes: system suggestions), (c) which is then accepted by the user (green strokes: user inputs or accepted suggestions). (d) shows the manually drawn content (black, 261 strokes) and autocompleted content (red, 3510 strokes). (e) shows the final result; note the different repetitive stroke patterns in different regions. Our autocompletion system can reduce tedious repetitive input, while providing full user control.

Fig. 4
Fig. 4 User interface, comprising (a) a central drawing canvas, (b) a toolbar for drawing and selection, (c) a toggle-switch for autocompletion mode, (d) a brush property toolbar, (e) a filling property toolbar, and (f) a layer panel.

Fig. 5
Fig. 5 An example of autocompletion. The user selects part of the suggestion using the lasso tool (a), with the result shown in (b), then continues to draw, leading to the updated suggestion (c), and accepts all the suggestions using a hotkey (d). The blue strokes in (a) and (c) indicate exemplars inferred from the user's input strokes.

Fig. 6
Fig. 6 Region editing example. The initial prediction (a) contains only the brown region. The user-specified region (b) contains the entire apple, with the corresponding synthesis result in (c).

Fig. 7
Fig. 7 Density editing example with different values of the spacing, lightness, and gradient parameters. Larger spacing leads to sparser strokes, while greater lightness and gradient coefficients lead to larger variations in stroke density.

Fig. 8
Fig. 8 Orientation editing example. (a) User gesture. (b) Orientation field updated based on the user gesture and the original image flow field. (c) Updated result. (d) Result without any orientation field.

Fig. 9
Fig. 9 Synthesis algorithm. We synthesize the predicted strokes (green) from previously drawn strokes (gray) by matching their neighborhoods as well as image features.

Fig. 10
Fig. 10 Left: a stroke, with centroid p and dominant direction v. Right: the neighborhood of the black stroke includes the n (n = 1 in this example) closest strokes (green) from each quadrant and the middle image patch (blue pixel grid).
Figures 11(b)-11(d) show iterative optimization of both objectives. For comparison, Fig. 11(e) shows the result without the correction term and Fig. 11(f) shows the result without using the image neighborhood in both initialization and optimization.

Fig. 11
Fig. 11 (a) Input. (b-d) Iteration process. (e, f) Ablation studies. Without the correction term φ_corr, the predicted strokes tend to cluster together (e). Without the image term d_I, the predicted strokes may not follow the reference sufficiently (f).

Fig. 12
Fig. 12 Predicting the input exemplar and output region. Left, above: the input stroke sequence (black dots; only a few indices are shown for clarity) on the reference image. Left, below: image features. Right, above: threshold lines and the image feature cost curves for s10, s11, and s12 respectively. Right, below: corresponding predicted output regions. The cumulative number k is determined when both cost curves exceed the threshold. Note that the third region prediction result is only for demonstration: since that exemplar contains only one stroke (i.e., k = 1), it is not considered a valid exemplar and would not be used for synthesis.

Fig. 14
Fig. 14 Target session results. (a) Average NASA-TLX scores from 12 participants. Lower scores are better. (b) Average completion time. (c) Average stroke counts. The number of system-generated strokes is labeled in each column.

Fig. 15
Fig. 15 Example drawing results from the open session. Each case indicates the number of manual/autocompleted strokes.

Fig. 16
Fig. 16 Sample results. For each example: left: reference image; center: manual (black) and autocompleted (red) strokes; right: final drawing. In the last example, the strokes were created with our system first and then imported into Photoshop for background coloring.

Fig. 17
Fig. 17 Example of visual blocking. Left: reference image. Right: canvas view.

Table 1
Differences between our tool and closely related work.