1 Introduction

While recent decades have seen significant progress in CAD software, the current state of the art still appears insufficient when it comes to the styling design of products. This is evidenced by the fact that a significant portion of early design activities such as concept development and style generation occurs almost exclusively in 2D environments—be it the traditional pen-and-paper environment or its digital equivalents. While part of this bias toward 2D tools in the early design stages comes from the undeniable convenience and familiarity of such media, we believe the lack of suitable software and interaction techniques to support 3D styling design also plays a significant role.

In this chapter, we present our recent studies concerning pen-based modeling of 3D geometry for industrial product design. Our system allows users to create and edit 3D geometry through direct sketching on a pen-enabled tablet computer. A distinguishing feature of our system is that it is tailored toward the rapid and intuitive design of styling features such as free-form curves and surfaces, which is often a tedious and complicated task in conventional software. A key commonality among the types of products we consider is that their aesthetic appeal is a central consideration. Additionally, given that the final aesthetic form usually evolves over time rather than appearing at once, it is important that users of our system are able to accurately reproduce their ideas, while having the ability to quickly explore alternatives. The main advantage of our system lies precisely here, in that it supports the direct creation and editing of free-form curves and surfaces through an intuitive interface. Our system is intended to be used by a wide variety of designers, ranging from those who prefer the traditional pen-and-paper interface to those who are already familiar with existing CAD tools.

In Sect. 13.2 of this chapter, we describe a general-purpose modeling system for designing a wide variety of 3D geometries. Our approach facilitates the creation of 3D geometry with arbitrary topology through the use of a simple underlying surface template. Early in the design process, this template serves as a convenient substrate allowing designers to quickly lay out a set of curves. These curves later form the constituent wireframe edges of the 3D object. Our curve creation and modification techniques allow the template to be only roughly defined, with no particular detail. Using this method, we illustrate the design of various consumer products.

Section 13.3 of this chapter is concerned specifically with automotive styling design. We describe a novel method based on camera calibration and elastic deformation that allows designers’ rough, conceptual sketches to be quickly turned into 3D surface models, thereby facilitating rapid design, evaluation, and reuse of styling ideas directly in 3D. This approach closely relates to the first part of the chapter in that it provides a rapid and accurate means for generating an underlying surface model specifically for automotive design. The described techniques enhance the design process by producing 3D models that closely match the input sketches.

The content presented in this chapter has been summarized from our previous publications [16–20]. We refer the reader to these references for the details not presented herein.

A rich body of literature exists on sketch-based 3D modeling; we refer the reader to it for further exploration. This work can be categorized into five groups based on its main characteristics:

  • Gesture-based approach [4, 6, 7, 11, 39]: This approach uses designers’ strokes primarily for geometric operations such as extrusion, bending and primitive deformation, rather than for directly depicting the target shape. While these methods allow a quick construction of rectilinear geometry, or a deformation of existing geometry, they are not targeted toward designing 3D space curves.

  • Silhouette-based approach [1, 2, 12, 13, 22, 30–32]: In this approach, users’ strokes are used to form a 2D silhouette representing an outline or a cross-section, which is then extruded, inflated or swept to give 3D form. The approach is suited for obtaining a reasonable 3D geometry quickly, rather than modeling a precise shape. Recently the silhouette drawing approach has been applied to the design of organic shapes such as plants, leaves and animal parts.

  • Line-labeling and optimization [5, 9, 21, 24, 33–36]: In 3D interpretation from 2D input, the well-known issue of one-to-many mapping (thus the lack of a unique solution) has resulted in the development of various constraint and optimization-based methods. To date, much work has focused on interpreting line drawings of polyhedral objects. These methods typically use some form of a line-labeling algorithm, followed by an optimization step, to produce the most plausible interpretation.

  • Template-based approach [16–20, 25, 34, 38]: In this approach, the desired 3D form is obtained by modifying an underlying 3D template. The effectiveness and applicability of the approach depends on the variety of target shapes—if the target shapes are topologically and geometrically akin to one another, it is easy to define an effective underlying 3D template; otherwise the template may be too restrictive and constrain the designer’s creativity. Note that our work presented in this chapter is an example of the template-based approach.

  • Mesh editing [3, 23, 27]: Mesh-editing systems allow users to operate directly on existing surfaces to deform or add feature lines using a digital pen. The key difference of these methods compared to similar gesture-based approaches is that users’ strokes are directly replicated on the resulting shape. Such systems let users add smoothly blended complex artifacts to an otherwise plain original geometry. A typical mesh-editing operation requires a specification of the target region, along with a specification of the desired deformation.

2 Sketch-based Creation and Modification of 3D Shapes

Consider the design of a computer mouse shown in Fig. 13.1. In a typical scenario, the user begins by constructing the base wireframe model of the design object. For this, the user first sketches the initial feature curves on a very rough and simplified 3D template model. In this particular case the template is defined as a polygonal mesh. This template acts as a platform that helps anchor users’ initial strokes in 3D space. Once the initial curves comprising the wireframe are constructed, the base 3D template is removed, leaving the user with a set of 3D curves. Next, through direct sketching, the user modifies the initially created curves to give them the precise desired shape. After the wireframe is obtained, the user constructs interpolating surfaces that cover the wireframe. Finally, using two physically based deformation tools, the user modifies the newly created surfaces to the desired shapes. Once the basic wireframe and surfaces are created, further details can be added using the same strategy of curve creation, curve modification, surface creation, and finally surface modification. The following sections detail these processes.

Fig. 13.1

Modeling operations supported by our system. a Curve creation and modification illustrated on arbitrary curves. b Illustration of surface creation and modification on a user-designed wireframe. c Further design is performed in a similar way: feature curves are drawn, modified, and finally surfaced

2.1 Constructing the 3D Wireframe

In the first step of the design, the user constructs the wireframe by sketching its constituent 3D curves directly on the template. Our system allows curves to be created with an arbitrary number of strokes, drawn in any direction and order. The process consists of two main steps. The first is a beautification step in which we identify a smooth B-spline that closely approximates the input strokes in the image plane. In the second step, the 2D curve obtained in the image plane is projected onto the template resulting in a 3D curve.

Given the input strokes in the image plane, we first fit a B-spline to the collection of strokes using the least-squares criterion described in [28]. Figure 13.2 shows an example. By default, we use cubic B-splines with seven control points. While these choices have been determined empirically to best suit our purposes, they can be adjusted to obtain the desired balance between the speed of computation, the smoothness of the curve, and the approximation accuracy. Nevertheless, details of the curve fitting process and the resulting auxiliary features, such as the curve’s control polygon, are hidden from the user. The user is only presented with the resulting curve.
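
As an illustration only (not the implementation of [28]), the sketch below fits a least-squares cubic B-spline with seven control points to an ordered set of 2D stroke samples, using SciPy's make_lsq_spline and a chord-length parameterization:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

def fit_bspline_2d(points, n_ctrl=7, degree=3):
    """Least-squares cubic B-spline fit to ordered 2D stroke points.

    points : (N, 2) array of ordered, resampled stroke samples.
    Returns a callable curve(u) evaluating the fitted curve for u in [0, 1].
    """
    pts = np.asarray(points, dtype=float)
    # Chord-length parameterization, normalized to [0, 1].
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    u = d / d[-1]

    # Clamped knot vector: n_ctrl + degree + 1 knots in total.
    n_inner = n_ctrl - degree - 1
    inner = np.linspace(0.0, 1.0, n_inner + 2)[1:-1]
    t = np.r_[np.zeros(degree + 1), inner, np.ones(degree + 1)]

    # Fit x(u) and y(u) independently in the least-squares sense.
    sx = make_lsq_spline(u, pts[:, 0], t, k=degree)
    sy = make_lsq_spline(u, pts[:, 1], t, k=degree)
    return lambda uu: np.column_stack([sx(uu), sy(uu)])
```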

Fig. 13.2

B-spline fitting to raw strokes. Top: Input strokes. Bottom: Resulting B-spline and its control polygon

Normally, the data points used for curve fitting would be those sampled along the stylus’ trajectory. However, fluctuations in the drawing speed often cause consecutive data points to occur either too close to, or too far away from one another. This phenomenon, as evidenced by dense point clouds near the stroke ends (where drawing speed tends to be low) and large gaps in the middle of the stroke (where speed is high), often adversely affects curve fitting. Hence, before curve fitting is applied, we resample each stroke to obtain data points equally spaced along the stroke’s trajectory.
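
A minimal way to perform this resampling (assuming that linear interpolation along the cumulative arc length is acceptable) is sketched below:

```python
import numpy as np

def resample_stroke(points, spacing):
    """Resample a raw stroke so that consecutive points are (approximately)
    equally spaced along the stroke's trajectory.

    points  : (N, 2) array of raw samples in drawing order.
    spacing : desired arc-length distance between consecutive output points.
    """
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.r_[0.0, np.cumsum(seg)]               # cumulative arc length
    n_out = max(int(np.floor(s[-1] / spacing)) + 1, 2)
    s_new = np.linspace(0.0, s[-1], n_out)       # equally spaced stations
    # Linear interpolation of x and y against arc length.
    x = np.interp(s_new, s, pts[:, 0])
    y = np.interp(s_new, s, pts[:, 1])
    return np.column_stack([x, y])
```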

The main challenge in curve fitting, however, arises from the fact that a curve can be constructed using multiple strokes, drawn in arbitrary directions and orders. This arbitrariness often causes spatially adjacent data points to have markedly different indices in the vector storing the data points. An accurate organization of the data points based on spatial proximity, however, is a strict requirement of the curve fitting algorithm. Hence, prior to curve fitting, input points must be reorganized to convert the cloud of unordered points into an organized set of points. Note that this reorganization would only affect the points’ indices in the storage vector, not their geometric locations.

To this goal, we use a principal component analysis as shown in Fig. 13.3. The main idea is that, by identifying the direction of maximum spread of the data points, one can obtain a straight line approximation to the points. Next, by projecting the original points onto this line and sorting the projected points, one can obtain a suitable ordering of the original points.

Fig. 13.3

Point ordering using principal component analysis. a Input strokes and extracted data points. b The two principal component directions are computed as the eigenvectors of the covariance matrix of the data points. The two direction vectors are positioned at the centroid of the data points. c Data points are projected onto the first principal direction e1, and sorted

Given a set of 2D points, the two principal directions can be determined as the eigenvectors of the 2×2 covariance matrix Σ given as

$$\varSigma=\frac{1}{n}\sum_{k=1}^n(\mathbf{x}_{k}-\mathbf{\mu})(\mathbf {x}_{k}-\mathbf{\mu})^\mathrm{T}$$

Here, $\mathbf{x}_k$ represents the column vector containing the $(x, y)$ coordinates of the $k$th data point, and $\boldsymbol{\mu}$ is the centroid of the data points.

The principal direction we seek is the one that corresponds to maximum spread, and is the eigenvector associated with the larger eigenvalue. After identifying the principal direction, we form a straight line passing through the centroid of the data points and project each of the original points onto the principal line. Next, we sort the projected points according to their positions on the principal line. The resulting ordering is then used as the ordering of the original points. Note that one advantageous byproduct of this method is that it reveals the extremal points of the curve (i.e., its two ends), which would otherwise be difficult to identify.
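
The ordering step can be sketched as follows with NumPy: the eigenvector associated with the larger eigenvalue of the covariance matrix gives the principal direction e1, and sorting the scalar projections yields the point order.

```python
import numpy as np

def order_points_by_pca(points):
    """Order an unorganized 2D point cloud along its first principal direction.

    Returns the points sorted by their scalar projection onto the direction of
    maximum spread; the first and last entries approximate the curve ends.
    """
    pts = np.asarray(points, dtype=float)
    mu = pts.mean(axis=0)                         # centroid
    cov = np.cov((pts - mu).T)                    # 2x2 covariance matrix
    _, eigvecs = np.linalg.eigh(cov)              # eigenvalues in ascending order
    e1 = eigvecs[:, -1]                           # eigenvector of the largest eigenvalue
    proj = (pts - mu) @ e1                        # signed positions along the principal line
    return pts[np.argsort(proj)]
```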

In practice, we have found this approach to work well especially because the curves created with our system often stretch along a unique direction. Hence, the ordering of the projections along the principal direction often matches well with the expected ordering of the original data points. However, this method falls short when users’ curves exhibit hooks at the ends, or contain nearly closed or fully closed loops. This is because these artifacts will cause folding or overlapping of the projected points along the principal direction, thus preventing a reliable sorting. To circumvent such peculiarities, we ask users to construct such curves in pieces consisting of simpler curves; our program allows separate curves to be later stitched together using a trim function.

Once the raw strokes are beautified into a B-spline, the resulting curve is projected onto the base template. This is trivially accomplished using the depth buffer of the graphics engine. At the end, a 3D curve is obtained that lies on the template and whose projection to the image plane matches the user’s strokes.
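
A minimal sketch of this projection using the depth buffer of a legacy OpenGL pipeline is shown below; it assumes a PyOpenGL context in which the template has already been rendered, and curve points given in window coordinates with a top-left origin (the exact rendering setup of our system may differ):

```python
from OpenGL.GL import (glReadPixels, glGetDoublev, glGetIntegerv,
                       GL_DEPTH_COMPONENT, GL_FLOAT,
                       GL_MODELVIEW_MATRIX, GL_PROJECTION_MATRIX, GL_VIEWPORT)
from OpenGL.GLU import gluUnProject

def unproject_curve(curve_2d):
    """Project 2D image-plane curve points onto the rendered template surface
    by reading back the depth buffer at each point."""
    modelview = glGetDoublev(GL_MODELVIEW_MATRIX)
    projection = glGetDoublev(GL_PROJECTION_MATRIX)
    viewport = glGetIntegerv(GL_VIEWPORT)

    curve_3d = []
    for x, y in curve_2d:
        win_y = viewport[3] - y                    # flip to OpenGL's bottom-left origin
        depth = glReadPixels(int(x), int(win_y), 1, 1,
                             GL_DEPTH_COMPONENT, GL_FLOAT)[0][0]
        if depth < 1.0:                            # depth 1.0 means background (no template hit)
            p = gluUnProject(x, win_y, depth, modelview, projection, viewport)
            curve_3d.append(p)
    return curve_3d
```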

2.2 Modifying the Wireframe

After creating the initial wireframe, the user begins to modify its constituent curves to give them the precise desired shape. During this step, the base template can be removed, leaving the user with a set of 3D curves. We use an energy minimization algorithm to obtain the best modification of the curve in question.

To make matters simple, we designed our approach such that the curves of the wireframe are modified one at a time, with the freedom to return to an earlier curve at any time. The curve that the user intends to modify is determined automatically, as explained below, thus allowing the user to modify edges in an arbitrary order. After each set of strokes, the user presses a button that processes the accumulated strokes and modifies the appropriate curve. To facilitate the discussion, we shall call the user’s input strokes the modifiers, and the curve modified by those modifiers the target curve.

Modification of the wireframe is performed in three steps. In the first step, curves of the wireframe are projected to the image plane resulting in a set of 2D curves. The curve that the user intends to modify is computed automatically by identifying the curve whose projection in the image plane lies closest to the modifier strokes. The proximity between a projected curve and the modifiers is computed by sampling a set of points from the curve and the modifiers, and calculating the aggregate minimum distance between the two point sets. In the second step, the target curve is deformed in the image plane until it matches well with the modifiers. In the third step, the modified target curve is projected back into 3D space.
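
One plausible implementation of this proximity test (made symmetric here by summing nearest-neighbor distances in both directions, which is an assumption about the exact aggregate measure) is sketched below:

```python
import numpy as np
from scipy.spatial import cKDTree

def pick_target_curve(projected_curves, modifier_points):
    """Select the wireframe curve whose 2D projection lies closest to the
    modifier strokes.

    projected_curves : list of (Ni, 2) arrays, curves projected to the image plane.
    modifier_points  : (M, 2) array of points sampled from the modifier strokes.
    Returns the index of the closest curve.
    """
    modifiers = np.asarray(modifier_points, dtype=float)
    mod_tree = cKDTree(modifiers)
    best_idx, best_score = -1, np.inf
    for idx, curve in enumerate(projected_curves):
        curve = np.asarray(curve, dtype=float)
        d_mod_to_curve, _ = cKDTree(curve).query(modifiers)   # modifier -> curve
        d_curve_to_mod, _ = mod_tree.query(curve)             # curve -> modifier
        score = d_mod_to_curve.sum() + d_curve_to_mod.sum()   # aggregate minimum distance
        if score < best_score:
            best_idx, best_score = idx, score
    return best_idx
```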

2.2.1 Curve Modification in the Image Plane

We deform a projected target curve in the image plane using an energy minimizing algorithm based on active contour models [15]. Active contours (also known as snakes) have long been used in image processing applications such as segmentation, tracking, and registration. The principal idea is that a snake moves and conforms to certain features in an image, such as intensity gradient, while minimizing its internal energy due to bending and stretching. This approach allows an object to be extracted or tracked in the form of a continuous spline.

We adopt the above idea for curve manipulation. Here, the 2D target curve is modeled as a snake whose nodes are sampled directly from the target curve. The nodes of the snake are connected to one another with line segments, making the snake geometrically equivalent to a polyline. The set of modifier strokes, on the other hand, is modeled as an unordered set of points (a point cloud) extracted from the input strokes. As before, this allows for an arbitrary number of modifiers, drawn in arbitrary directions and order. With this formulation, the snake deforms and conforms to the modifiers, but locally resists excessive bending and stretching to maintain smoothness. Mathematically, this can be expressed as an energy functional to be minimized:

$$E_\mathrm{snake} = \sum_{i} \bigl[E_\mathrm{int}(\mathbf{v}_i) + E_\mathrm{ext}(\mathbf{v}_i)\bigr]$$

where $\mathbf{v}_i = (x_i, y_i)$ is the $i$th node coordinate of the snake. $E_\mathrm{int}$ is the internal energy arising from the stretching and bending of the snake. Our approach to minimizing this term involves applying a restitutive force $\mathbf{F}_\mathrm{rest}$ that simply moves each snake node toward the barycenter of its two neighboring nodes (Fig. 13.4).

Fig. 13.4

Internal energy due to stretching and bending is minimized approximately by moving each snake node to the barycenter of its neighbors similar to Laplacian smoothing

External energy $E_\mathrm{ext}$ describes the potential energy of the snake due to external attractors, which arise in the presence of modifiers. The modifiers’ influence on the snake consists of two components: (1) location forces and (2) pressure forces. The first component moves the snake toward the data points sampled from the modifiers. For each snake node $\mathbf{v}_i$, a force $\mathbf{F}_\mathrm{loc}(\mathbf{v}_i)$ is computed corresponding to the influence of the location forces on $\mathbf{v}_i$:

$$\mathbf{F}_\mathrm{loc}(\mathbf{v}_i) =\sum_{n\in k_\mathrm{neigh}}\frac{\mathbf{m}_n-\mathbf{v}_i}{\Vert\mathbf{m}_n-\mathbf {v}_i\Vert}\cdot w(n)$$

where $\mathbf{m}_n$ is one of the $k$ closest neighbors of $\mathbf{v}_i$ among the modifier points (Fig. 13.5), and $w(n)$ is a weighting factor inversely proportional to the distance between $\mathbf{m}_n$ and $\mathbf{v}_i$. In other words, at any instant, a snake node $\mathbf{v}_i$ is pulled by its $k$ nearest modifier points. The force from each modifier point $\mathbf{m}_n$ is inversely proportional to its distance to $\mathbf{v}_i$ and points along the direction $\mathbf{m}_n - \mathbf{v}_i$.

Fig. 13.5

Location force on a node

The second component of $E_\mathrm{ext}$ is related to the pressure with which the strokes are drawn. The force created due to this energy pulls the snake toward sections drawn with high pressure. The rationale behind considering the pressure effect is the observation that users typically press the pen harder to emphasize critical sections while sketching. The pressure term exploits this phenomenon by forcing the snake to favor sections drawn more emphatically. For each snake node $\mathbf{v}_i$, a force $\mathbf{F}_\mathrm{pres}(\mathbf{v}_i)$ is computed as

$$\mathbf{F}_\mathrm{pres}(\mathbf{v}_i) =\sum_{n\in k_\mathrm{neigh}}\frac{\mathbf{m}_n-\mathbf{v}_i}{\Vert\mathbf{m}_n-\mathbf {v}_i\Vert}\cdot p(n)$$

where $p(n)$ is a weighting factor proportional to the pen pressure recorded at point $\mathbf{m}_n$.

During modification, the snake moves under the influence of the two external forces while minimizing its internal energy through the restitutive force. In each iteration, the new position of $\mathbf{v}_i$ is determined by the vector sum of $\mathbf{F}_\mathrm{rest}$, $\mathbf{F}_\mathrm{loc}$ and $\mathbf{F}_\mathrm{pres}$, whose relative weights can be adjusted to emphasize different components. For example, increasing the weight of $\mathbf{F}_\mathrm{rest}$ will result in smoother curves with fewer bends. On the other hand, emphasizing $\mathbf{F}_\mathrm{pres}$ will increase the sensitivity to pressure differences, with the resulting curve favoring high-pressure regions. The default weights are currently 30% for $\mathbf{F}_\mathrm{rest}$, 40% for $\mathbf{F}_\mathrm{loc}$, and 30% for $\mathbf{F}_\mathrm{pres}$. We have determined these weights empirically to give subjectively the best outcomes in our test cases.
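
The sketch below illustrates one iteration of this scheme with the default weights; the exact forms of the weighting factors w(n) and p(n), the step size, and the treatment of the end nodes are assumptions made for illustration:

```python
import numpy as np
from scipy.spatial import cKDTree

def snake_step(nodes, modifiers, pressures, k=5,
               w_rest=0.3, w_loc=0.4, w_pres=0.3, step=1.0):
    """One iteration of the image-plane snake deformation (k >= 2 assumed).

    nodes     : (N, 2) current snake node positions (sampled from the target curve).
    modifiers : (M, 2) points sampled from the modifier strokes.
    pressures : (M,)   pen pressure recorded at each modifier point.
    """
    nodes = np.asarray(nodes, dtype=float)
    modifiers = np.asarray(modifiers, dtype=float)
    pressures = np.asarray(pressures, dtype=float)
    eps = 1e-9

    dist, idx = cKDTree(modifiers).query(nodes, k=k)   # k nearest modifier points per node
    diff = modifiers[idx] - nodes[:, None, :]          # (N, k, 2) vectors toward modifiers
    unit = diff / (dist[..., None] + eps)

    # Location force: weights inversely proportional to distance.
    w_l = 1.0 / (dist + eps)
    w_l /= w_l.sum(axis=1, keepdims=True)
    f_loc = (unit * w_l[..., None]).sum(axis=1)

    # Pressure force: weights proportional to pen pressure at the modifier points.
    w_p = pressures[idx] + eps
    w_p /= w_p.sum(axis=1, keepdims=True)
    f_pres = (unit * w_p[..., None]).sum(axis=1)

    # Restitutive force: pull each interior node toward the barycenter of its neighbors.
    f_rest = np.zeros_like(nodes)
    f_rest[1:-1] = 0.5 * (nodes[:-2] + nodes[2:]) - nodes[1:-1]

    return nodes + step * (w_rest * f_rest + w_loc * f_loc + w_pres * f_pres)
```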

2.2.2 Unprojection to 3D

In this step, the newly designed 2D curve is projected back into 3D space. As mentioned previously, there is no unique solution because there are infinitely many 3D curves whose projections match the 2D curve. We therefore choose the best 3D configuration based on the following constraints:

  • The 3D curve should appear right under the modifier strokes.

  • If the modifier strokes appear precisely over the original target curve, i.e., the strokes do not alter the curve’s 2D projection, the target curve should preserve its original 3D shape.

  • If the curve is to change shape, it must maintain a reasonable 3D form. By “reasonable,” we mean a solution that the designer would accept in most cases, or at least anticipate in the worst case.

Based on these premises, we choose the optimal configuration as the one that minimizes the spatial deviation from the original target curve. That is, among the 3D curves whose projections match the newly designed 2D curve, we choose the one that lies nearest to the original target curve. This can be formulated as follows:

Let $\mathbf{C}$ be a curve in $\mathbb{R}^3$ constrained on a surface S. Let $\mathbf{C}^\mathrm{orig}$ be the original target curve in $\mathbb{R}^3$ that the user is modifying. The new 3D configuration $\mathbf{C}^*$ of the modified curve is computed as:

$$\mathbf{C}^*=\mathop{\mathrm{argmin}}\limits_{\mathbf{C}}\sum_{i}\big\|\mathbf{C}_i - \mathbf{C}^\mathrm{orig}_i\big\|$$

where $\mathbf{C}_i$ denotes the $i$th vertex of $\mathbf{C}$. With this criterion, $\mathbf{C}^*$ is found by computing the minimum-distance projection points of $\mathbf{C}^{\mathrm{orig}}_{i}$ onto S (Fig. 13.6(b)).
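
The following sketch approximates this computation by representing S with a discrete set of rays cast from the eye through samples of the modified 2D curve; each original vertex is snapped to its closest point on the nearest ray (the actual system may compute the projection onto S differently):

```python
import numpy as np

def unproject_min_deviation(orig_curve_3d, eye, ray_dirs):
    """Choose a 3D configuration for the modified curve that deviates minimally
    from the original target curve.

    orig_curve_3d : (N, 3) vertices of the original 3D target curve.
    eye           : (3,)   camera/eye position.
    ray_dirs      : (M, 3) unit direction vectors of rays cast from the eye
                    through samples of the newly designed 2D curve (these rays
                    sweep out the surface S).
    Returns an (N, 3) array of new vertex positions lying on the rays.
    """
    orig = np.asarray(orig_curve_3d, dtype=float)
    eye = np.asarray(eye, dtype=float)
    dirs = np.asarray(ray_dirs, dtype=float)

    new_pts = np.empty_like(orig)
    for i, p in enumerate(orig):
        # Closest point of p on each ray eye + t * d, with t = (p - eye) . d
        t = (p - eye) @ dirs.T
        t = np.maximum(t, 0.0)                     # stay in front of the eye
        foot = eye + t[:, None] * dirs             # (M, 3) candidate projections
        j = np.argmin(np.linalg.norm(foot - p, axis=1))
        new_pts[i] = foot[j]
    return new_pts
```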

Fig. 13.6

Curve modification from a single view. a User’s strokes. b Surface S created by the rays emanating from the user’s eyes and passing through the strokes, and the minimum-distance lines from the original curve. c Resulting modification from three different views

The rationale behind this choice is that, by remaining proximate to the original curve, the new curve can be thought of as the “least surprising” solution when viewed from a different viewpoint. One advantage of this is that curves can be modified incrementally, with predictable outcomes in each step. That is, while the curve conforms to the user’s strokes in the current view, it still preserves most of the shape established in earlier steps because it deviates minimally from its previous configuration. This allows geometrically complex curves to be obtained with only a few successive modifications from different viewpoints.

During wireframe creation and modification, the user operates on the constituent curves one at a time, without regard to their connectivity. Hence, the curves in the resulting wireframe will likely be disconnected. To prepare the wireframe for surfacing, the user may invoke a “trim” command that merges curve ends that lie sufficiently close to one another. This command applies an appropriate set of translations, rotations and scalings to the entirety of a disconnected curve such that its ends meet with other curves. By transforming a curve as a whole, rather than simply extending its ends, the shape established by the user is better preserved while eliminating the kinks that could otherwise occur at the curve ends. At the end, a well-connected wireframe is obtained that can be subsequently surfaced.

2.3 Surface Creation and Modification

In the last step, the newly designed wireframe model is surfaced to obtain a solid model. Once the initial surfaces are obtained, the user can modify them using simple deformation tools. The following sections detail these processes.

2.3.1 Initial Surface Creation

Given the wireframe model, this step creates a surface geometry for each of the face loops in the wireframe. In this work, it is assumed that the wireframe topology is already available with the template model and therefore all face loops are known a priori. Each face loop may consist of an arbitrary number of edge curves. For each face loop, the surface geometry is constructed using the method proposed in [14]. In this method, each curve of the wireframe is represented as a polyline, and the resulting surfaces are polygonal surfaces consisting of purely triangular elements.

Figure 13.7 illustrates the creation of a surface geometry on a boundary loop. In the first step, a vertex is created at the centroid of the boundary vertices. Initial triangles are then created that use the new vertex as the apex and have their bases at the boundary. Next, for each pair of adjacent triangular elements, edge swapping is performed. For two adjacent triangles, this operation seeks to improve the mesh quality by swapping their common edge (Fig. 13.8a). The mesh quality is based on the quality of the constituent triangles; for a triangle, it is defined as the radius ratio, i.e., the radius of the inscribed circle divided by the radius of the circumscribed circle. Next, adjacent triangles are subdivided iteratively until the longest edge length in the mesh is less than a threshold (Fig. 13.8b). Between iterations, edge swapping and Laplacian smoothing are performed to maintain a regular vertex distribution with high-quality elements. At the end, the resulting surface is smoothed using a physically based mesh deformation method called the V-spring operator [37]. This method, which will be presented in detail in Sect. 13.2.3.2, iteratively adjusts the initial mesh so that the total variation of curvature is minimized. Once the initial surfaces are created in this way, new feature curves can be added to the model by direct sketching, as described in the previous section.
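
The radius-ratio quality measure and the edge-swap decision can be illustrated as follows (a simplified sketch; the mesh data structures and the subdivision loop are omitted):

```python
import numpy as np

def radius_ratio(a, b, c):
    """Triangle quality as inradius / circumradius (higher is better)."""
    a, b, c = map(np.asarray, (a, b, c))
    la, lb, lc = (np.linalg.norm(b - c), np.linalg.norm(c - a),
                  np.linalg.norm(a - b))
    s = 0.5 * (la + lb + lc)                                   # semi-perimeter
    area = max(s * (s - la) * (s - lb) * (s - lc), 0.0) ** 0.5  # Heron's formula
    if area == 0.0:
        return 0.0
    r_in = area / s
    r_circ = (la * lb * lc) / (4.0 * area)
    return r_in / r_circ

def should_swap(p, q, r, t):
    """Triangles (p, q, r) and (t, r, q) share edge (q, r): swap the edge only
    if doing so increases the minimum element quality."""
    before = min(radius_ratio(p, q, r), radius_ratio(t, r, q))
    after = min(radius_ratio(p, q, t), radius_ratio(p, t, r))
    return after > before
```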

Fig. 13.7

Surface creation. a Initial boundary loop consisting of four curves. b Preliminary triangulation using a vertex created at the centroid. c Edge swapping. d Final result after face splitting and mesh smoothing using the V-spring method [37]

Fig. 13.8

a Edge swapping. Diagonals of adjacent triangles are swapped if minimum element quality increases. b Triangle subdivision

2.3.2 Surface Modification

Oftentimes, the designer will need to modify the initial surfaces to give the model a more aesthetic look. In this work, we adopt a simple and intuitive modification scheme that allows users to explore different surface alternatives in a controllable and predictable way. Unlike most existing techniques, our approach operates directly on the polygonal surface without requiring the user to define a control grid or a lattice structure.

Our approach consists of two deformation methods. The first method uses pressure to deform a surface. With this tool, resulting surfaces look rounder and inflated, with more volume. The second method is based on the V-spring approach described in [37]. In this method, a network of springs works together to minimize the variation of surface curvature. A discussion of the practical utility of this type of surface can be found in [10]. In both methods, deformation is applied to the interior of the surface while keeping the boundaries fixed. This way, the underlying wireframe geometry is preserved, with no alterations to the designed curves.

Surface Modification Using Pressure Force

This deformation tool simulates the effect of a pressure force on a thin membrane. The tool allows surfaces to be inflated or flattened in a predictable way. The extent of the deformation depends on the magnitude of the pressure, which is controlled by the user through a slider bar. Different pressure values can be specified for individual surfaces, thus giving the user a better control on the final shape of the solid model.

The equilibrium position of a pressurized surface is found iteratively. In each step, each vertex of the surface is moved by a small amount proportional to the pressure force applied to that vertex. The neighboring vertices, however, resist this displacement by pulling the vertex toward their barycenter akin to Laplacian smoothing. The equilibrium position is reached when the positive displacement for each node is balanced by the restitutive displacement caused by the neighboring vertices. Figure 13.9 illustrates the idea.

Fig. 13.9

A pressure force applied to a vertex moves the vertex along its normal direction. The neighboring vertices, however, pull the vertex back toward their barycenter. Equilibrium is reached when displacements due to the pressure force and the neighbors are balanced

The algorithm can be outlined as follows. Let p be the pressure applied to the surface. Until convergence, iterate: displace each vertex along its normal by an amount proportional to p and to the Voronoi area surrounding the vertex, and simultaneously pull the vertex back toward the barycenter of its neighbors.

The vertex normal $\mathbf{n}_i$ is updated in each iteration and is computed as the average of the normals of the faces incident on $\mathbf{v}_i$, weighted by the area of each face. $A_i^{\mathrm{voronoi}}$ is the Voronoi area surrounding $\mathbf{v}_i$; it is obtained by connecting the circumcenters of the faces incident on $\mathbf{v}_i$ with the midpoints of the edges incident on $\mathbf{v}_i$. ξ and γ are damping coefficients that control the convergence rate; values of ξ or γ that are too low may cause the iteration to become unstable.
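
Because the original pseudocode is given only as a figure, the following is a hedged reconstruction of the iteration from the description above; in particular, the exact roles of ξ and γ (assumed here to act as divisors damping the two displacement components) and the helper callbacks are assumptions for illustration:

```python
import numpy as np

def inflate_surface(verts, neighbors, normals_fn, voronoi_area_fn,
                    boundary, p, xi=10.0, gamma=2.0, tol=1e-4, max_iter=500):
    """Iterative pressure-based deformation of the interior surface vertices.

    verts           : (N, 3) initial vertex positions.
    neighbors       : list of index lists; 1-ring neighbors of each vertex.
    normals_fn      : callable(verts) -> (N, 3) area-weighted vertex normals.
    voronoi_area_fn : callable(verts) -> (N,) Voronoi area around each vertex.
    boundary        : set of vertex indices kept fixed (the wireframe curves).
    p               : scalar pressure; negative values create concavities.
    xi, gamma       : damping coefficients (assumed to divide the step sizes, so
                      values that are too low destabilize the iteration).
    """
    v = np.array(verts, dtype=float)
    for _ in range(max_iter):
        n = normals_fn(v)
        area = voronoi_area_fn(v)
        max_disp = 0.0
        for i in range(len(v)):
            if i in boundary:
                continue
            bary = v[neighbors[i]].mean(axis=0)
            # Pressure pushes the vertex along its normal; neighbors pull it back.
            disp = (p * area[i] / xi) * n[i] + (bary - v[i]) / gamma
            v[i] += disp
            max_disp = max(max_disp, np.linalg.norm(disp))
        if max_disp < tol:          # equilibrium: pressure and restitution balance
            break
    return v
```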

The above algorithm is applied to all surface vertices while keeping the boundary vertices fixed. Figure 13.10 shows an example on a seat model. If necessary, negative pressure can be applied to form concavities.

Fig. 13.10

Modification of a seat base using pressure force

Surface Modification Using V-Spring Method

This method creates surfaces of minimized curvature variation based on a discrete spring model. This scheme produces fair surfaces that vary smoothly, which is known to be an important criterion for aesthetic design purposes [29]. Additionally, when applied to a group of adjacent surfaces, it reduces sharp edges by smoothing the transition across the boundary curves.

In this method, a spring is attached to each surface vertex. Neighboring springs usually form a “V” shape, thus giving the method its name. The spring length approximately represents the local curvature. During modification, the springs work together to keep their lengths equal, which is equivalent to minimizing the variation of curvature (Fig. 13.11). Each vertex thus moves under the influence of its neighbors until the vertices locally lie on a sphere.

Fig. 13.11

V-spring. a Displacement of $\mathbf{v}_i$ due to $\mathbf{v}_j$. b Net displacement due to all neighbors

Based on this model, the displacement of $\mathbf{v}_i$ due to a neighboring vertex $\mathbf{v}_j$ is given as follows (see [37] for details):

$$\Delta\mathbf{v}_i^j =\frac{1}{\Vert\mathbf{v}_j-\mathbf{v}_i\Vert} \biggl[\frac {(\mathbf{v}_j-\mathbf{v}_i)\cdot(\mathbf{n}_i+\mathbf {n}_j)}{1+(\mathbf{n}_i\cdot\mathbf{n}_j)} \biggr]\mathbf{n}_i$$

where $\mathbf{n}_i$ and $\mathbf{n}_j$ are the unit normal vectors at $\mathbf{v}_i$ and $\mathbf{v}_j$. The total displacement of $\mathbf{v}_i$ is computed as the average of the displacements due to its neighboring vertices. However, to maintain a regular vertex distribution throughout the iterations, each vertex is also moved laterally, within its current tangent plane, toward the barycenter of its neighbors. In each iteration, the positions and normals of the vertices are updated. The iterations continue until the net displacement of each vertex is less than a threshold.
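
A sketch of one V-spring iteration following the displacement formula above is given below; the neighbor lists, normals, and boundary set are assumed to be supplied by the surrounding mesh code, and the normals would be recomputed between iterations:

```python
import numpy as np

def vspring_step(verts, normals, neighbors, boundary):
    """One V-spring smoothing iteration (after [37]): each interior vertex is
    displaced along its normal by the average of the pairwise displacements,
    plus a tangential move toward the barycenter of its neighbors."""
    v = np.asarray(verts, dtype=float)
    n = np.asarray(normals, dtype=float)
    new_v = v.copy()
    for i in range(len(v)):
        if i in boundary:
            continue
        disp = np.zeros(3)
        for j in neighbors[i]:
            d = v[j] - v[i]
            denom = 1.0 + np.dot(n[i], n[j])
            if np.linalg.norm(d) < 1e-12 or abs(denom) < 1e-12:
                continue
            # Displacement of v_i due to v_j, along the normal n_i.
            disp += (np.dot(d, n[i] + n[j]) / denom) / np.linalg.norm(d) * n[i]
        disp /= max(len(neighbors[i]), 1)
        # Tangential regularization: move toward the neighbor barycenter
        # within the current tangent plane.
        bary = v[neighbors[i]].mean(axis=0) - v[i]
        tangential = bary - np.dot(bary, n[i]) * n[i]
        new_v[i] = v[i] + disp + tangential
    return new_v
```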

2.4 Examples

Figure 13.12 shows snapshots of our system in the design of an electric shaver. The base template used for this model is a rectangular prism, as shown in Fig. 13.12a. The design begins by laying down several curves on this prism. Next, the initial curves are modified to give them the appropriate 3D shape. During the construction of the wireframe, curves are sometimes deleted if they are deemed premature at a given time. For instance, the curves making up the three buttons on the top surface were initially drawn on the template prism but were removed in the early stages of curve modification; they were then reintroduced toward the end of the design process.

Fig. 13.12

Design of an electric shaver. a Wireframe creation. b Surfaced model rendered in white. c Surfaced model painted. d Bottom and top faces, and a detail view of the polygonal surfaces

The last column of Fig. 13.12a shows the final wireframe designed via our system. Figure 13.12b shows different views of the solid model obtained after surfacing. Finally, Fig. 13.12c shows different views of the model in which individual surfaces have been painted.

The final wireframe consists of 107 individual curves. Out of these 107 curves, 37 had symmetrical pairs. Therefore, only 70 curves were actually designed by the user. The surfaced model consists of 50 individual surfaces, most of which have been modified using the surface deformation tools described earlier. This model took one of the authors about 2 hours to design, excluding the time expended on surface painting. Figure 13.13 shows several other models created similarly.

Fig. 13.13

Various other models created using our system. The foot serves as the natural underlying template for the sandal model

Figure 13.14 shows example models created by subjects who participated in a user study [20]. The results of this study show that our modeling techniques are effective in providing a natural and simple means to create 3D geometry. Most users found the ability to directly manipulate 3D curves using pen strokes to be a particularly powerful feature of our system. We observed that most users were comfortable adopting the new techniques and utilizing them effectively to complete their assignments. Despite having no prior experience with the system, most users quickly became adept at the curve and surface creation techniques after only a brief introductory tutorial. Given that none of the participants was routinely engaged in product design, we were pleased to see that many were able to produce satisfactory models in the allotted time frame.

Fig. 13.14

Remote controllers designed by first-time users of our system

3 Creating 3D Shape Templates from Sketches for Automotive Styling Design

In the car industry, designers place great emphasis on concept development and styling activities, as the decisions made during these stages are key to the product’s success. Most commonly, designers produce a multitude of rough sketches early in the design process as a way to explore different shapes and styles. While it would be desirable to seamlessly study a developing concept in 3D, the high-fidelity, complex nature of existing 3D modeling tools typically precludes the use of such media early in the design. As a result, designers are restricted to 2D media for most of their early creative activities. Due to the significant effort and expertise required for 3D modeling, only a few select candidate concepts will typically pass to the next stage, while many others are prematurely abandoned.

The goal of this work is to improve current practice by helping the designer rapidly realize a 2D sketch in 3D and interact with the resulting geometry, without the need for complex modeling skills. With the proposed approach, we hope to enable 3D conceptual exploration early in the design cycle where constructing and interacting with the 3D model is, ideally, as easy as drawing on paper.

Our approach is based on a three-stage modeling framework that facilitates a rapid construction of a coarse 3D geometry followed by a progressive, sketch-based refinement. In the first stage, the user marks a set of fiducial points on the sketch. Using the fiducial points, our method first aligns an underlying template model with the sketch using a camera calibration algorithm. Next, an optimization algorithm deforms the template in 3D until the projections of its fiducial nodes match the fiducial points marked by the user. In the second stage, the user refines the template by tracing the car’s key character lines. The input strokes modify the edges and surface patches of the template. At the end of this stage, the user obtains a smooth 3D surface model (devoid of details) that matches the car depicted in the sketch. This surface model is then used as a substrate for the final stage, where the designer sketches further styling curves directly onto the model.

3.1 Overview

Figure 13.15 illustrates the design process. At the heart of our approach is a 3D surface template that embodies a set of fiducial nodes and a set of connecting edges (Fig. 13.15(a)). The template model has been designed to embody the major surfaces of a car body in the simplest, most abstract fashion. Aside from the overall shape that characterizes the class of the car (e.g., sedan vs. hatchback), the template is devoid of stylistic articulations that could later interfere with conceptual freedom. Note that this template model is represented by a network of bi-cubic surface patches, unlike the polygonal template mesh discussed in Sect. 13.2.

Fig. 13.15

Car body design from sketches

The designer begins by importing a digitally created or a scanned paper sketch into the user interface (Fig. 13.15(b)). The sketch may depict an orthographic or perspective projection, from an arbitrary vantage point. In the first step, the designer is asked to mark a set of 2D points on the sketch. The requested points correspond to the fiducial nodes of the template (including the four wheel centers). To guide the designer, a separate widget in the GUI displays the point requested at that particular instance. If visible, the designer marks the requested point in the sketch; otherwise, the designer skips the point. At the completion of this process, the designer will have marked only a subset of the template’s fiducial nodes, as several of those points will typically be invisible in the sketch. By monitoring which fiducial points have been marked and which ones have been skipped, our system establishes a one-to-one correspondence between the template fiducials and the points marked by the designer.

Using the fiducial marks as input, a camera calibration algorithm first aligns the template with the sketch. This step adjusts the virtual camera properties such that the projection of the template in the image plane is similar to the view depicted in the sketch. Next, an optimization algorithm deforms the underlying template such that the projections of its fiducial nodes match closely with the designer’s marker points (Fig. 13.15(c)). Although the fiducial points can be matched exactly via unrestrained deformations to the template, this will usually result in unrealistic 3D geometry. To maintain a sound 3D shape during deformation, our optimization algorithm thus seeks a deformation that deviates minimally from the original template.

Our data structure for the template model maintains a network of cubic curve edges connecting the fiducial nodes, and a set of bi-cubic surface patches for each of the associated face loops. Following fiducial point matching, the user refines the template by tracing its edges directly on the sketch (Fig. 13.15(d)). This process alters the template edge bodies in 3D to match the input strokes, while keeping the end nodes fixed. If edge modification from the current viewpoint is not satisfactory, the user may modify the template edges from other viewpoints, in a way very similar to that described in Sect. 13.2.2. During edge modification, the surface patches associated with the edges are updated instantly and automatically to match the deformed edges. When fine-tuning is necessary, the designer can adjust individual node positions with a simple point-and-drag method. In the current implementation, the selected node is constrained to move in a plane parallel to the current image plane while preserving symmetry.

The resulting surface model provides a suitable basis for further development. Using the surface model as a substrate, the designer creates character lines and other details according to the sketch to refine the model (Fig. 13.15(e)). Subsequent operations are similar to those described in [20]. They include curve creation and deletion, curve modification, and curve smoothing.

3.2 Template Alignment

This step attempts to bring the projection of the template into close correspondence with the sketch without deforming the template. For this, we attempt to uncover the parameters of the perspective projection suggested by the sketch. Our approach is closely related to our previous camera calibration method described in [19]. One notable difference, however, is that our previous approach requires the eight corners of a virtual bounding box as the input to the calibration algorithm; hence, in that approach the designer has to specify an enclosing bounding box commensurate with the sketched shape. In our current method, the availability of the fiducial points replaces the need for such a box.
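
For readers who want a quick stand-in for this step, a rough camera pose can be estimated from the 2D–3D fiducial correspondences with OpenCV's solvePnP, as sketched below; this is not the calibration method of [19], and the intrinsic parameters (the focal length in particular) are assumed rather than recovered:

```python
import numpy as np
import cv2

def align_template(template_fiducials_3d, sketch_fiducials_2d, image_size,
                   focal_px=1000.0):
    """Estimate a camera pose that projects the template's fiducial nodes close
    to the 2D points marked on the sketch.

    template_fiducials_3d : (m, 3) 3D fiducial nodes of the template.
    sketch_fiducials_2d   : (m, 2) corresponding marks on the sketch (pixels).
    image_size            : (width, height) of the sketch image.
    focal_px              : assumed focal length in pixels (an assumption).
    """
    w, h = image_size
    K = np.array([[focal_px, 0.0, w / 2.0],
                  [0.0, focal_px, h / 2.0],
                  [0.0, 0.0, 1.0]])
    obj = np.asarray(template_fiducials_3d, np.float32).reshape(-1, 1, 3)
    img = np.asarray(sketch_fiducials_2d, np.float32).reshape(-1, 1, 2)
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, None)
    return ok, rvec, tvec          # rotation (Rodrigues vector) and translation
```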

3.3 Template Deformation Based on Fiducial Points

In this step, the fiducial nodes of the template are geometrically modified such that their projections to the image plane closely match the fiducial points marked by the designer. We model this problem as an elastic deformation problem subject to a set of constraints. The optimal 3D configuration is computed by minimizing the difference between the designer’s marker points and the template fiducials, while maintaining an acceptable 3D form. We define the following optimization problem:

  • Let $\mathbf{V} = \{\mathbf{v}_1,\ldots,\mathbf{v}_n\} \in \mathbb{R}^3$ describe the geometric positions of the original, undeformed template nodes.

  • Let $\mathbf{V}' = \{\mathbf{v}'_1,\ldots,\mathbf{v}'_n\} \in \mathbb{R}^3$ describe the deformed positions of the same nodes. The goal is to compute $\mathbf{V}'$.

  • Let $\mathbf{F} = \{\mathbf{f}_1,\ldots,\mathbf{f}_m\} \in \mathbb{R}^2$, $m \leq n$, describe the screen coordinates of the fiducial points marked by the designer.

  • Let $\mathbf{V}'_{\mathbf{s}} = \{\mathbf{v}'_1,\ldots,\mathbf{v}'_m\} \subset \mathbf{V}'$ describe the corresponding set of template nodes that need to be matched to $\mathbf{F}$.

  • Let $\mathrm{P}: \mathbb{R}^3 \to \mathbb{R}^2$ be the mapping function that projects a point in world coordinates to screen coordinates using the current projection matrix.

We establish the following cost function to minimize:

$$H = \alpha\sum_{i=1}^m \big\|\mathbf{F}_i-\mathrm{P}\big(\mathbf{V}'_{\mathbf{s},i}\big)\big\| + \beta\sum_{j=1}^n \big\|\mathbf{V}_j-\mathbf{V}'_j\big\|.$$

The above cost function is a weighted sum of (1) the cumulative 2D distance between the user’s fiducial points and the projections of the associated template nodes, and (2) the cumulative 3D distance between the original and deformed node positions. The first term ensures that the template is deformed into a shape that matches well with the fiducial points marked by the designer. The second term, on the other hand, ensures that the template deforms as little as possible from its original 3D configuration, thus helping to maintain an acceptable geometry. Without the first term, no deformation would be performed and the resulting model would fail to match the sketch. Without the second term, unrealistic shapes could result, as unnatural 3D deformations may be attempted in an effort to closely match the fiducial points in the image plane. Note that there are infinitely many deformations that would result in an exact match in the image plane. The above cost function thus helps deform the template in a way that closely matches the fiducial points, while maintaining a sound 3D shape.

Because the two terms are measured in different domains, they may differ by several orders of magnitude and thus are not readily comparable. We therefore normalize the two terms to bring them to a common basis. Finally, the coefficients α and β control the relative weights of the terms and can be suitably adjusted. In our implementation, we have found α=β=0.5 to produce satisfactory results.

Further details of the optimization problem including the optimization variables, equality and inequality constraints, and the solution technique can be found in [18].
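
Ignoring those constraints and the normalization, a bare-bones version of the minimization can be sketched with SciPy as follows; the projection function P is assumed to be supplied by the rendering code:

```python
import numpy as np
from scipy.optimize import minimize

def deform_template(V, fiducials_2d, matched_idx, project, alpha=0.5, beta=0.5):
    """Deform template nodes V so the projections of the matched nodes approach
    the marked fiducial points, while deviating minimally from the original.

    V            : (n, 3) original template node positions.
    fiducials_2d : (m, 2) fiducial points marked on the sketch.
    matched_idx  : (m,)   indices into V of the nodes corresponding to fiducials.
    project      : callable(points_3d) -> (k, 2) screen-space projection P(.).
    """
    V = np.asarray(V, dtype=float)
    F = np.asarray(fiducials_2d, dtype=float)
    matched_idx = np.asarray(matched_idx, dtype=int)

    def cost(x):
        Vp = x.reshape(V.shape)
        term_2d = np.linalg.norm(F - project(Vp[matched_idx]), axis=1).sum()
        term_3d = np.linalg.norm(V - Vp, axis=1).sum()
        return alpha * term_2d + beta * term_3d

    res = minimize(cost, V.ravel(), method="L-BFGS-B")
    return res.x.reshape(V.shape)
```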

3.4 Edge Representation and Manipulation

In this work, we use cubic splines as the base representation of our edges, unlike the polyline representation used in the template described in Sect. 13.2. Each edge is described in terms of its two end points and two corresponding tangent vectors. To modify an edge, the designer sketches the desired shape of the curve near the intended edge (Fig. 13.16). Using the same techniques presented in Sect. 13.2.2, an infinite virtual surface S originating from the eye, passing through the designer’s strokes, and extending into 3D space is constructed. This surface lies directly under the input strokes and is thus not visible from the current viewpoint.

Fig. 13.16

Edge modification

The new edge is expected to lie on S while keeping its two end points fixed. For this, we use a four-point cubic Hermite interpolation method [26]. We compute two interior edge points at the parametric coordinates u = 1/3 and u = 2/3 (u ∈ [0, 1]) and project them onto S. The original end points and the two newly computed interior points define the new shape of the edge. The resulting cubic edge minimizes the deviation from the original edge due to the projection onto S. In most cases, this choice offers a reasonable solution to the inherently ill-defined problem of computing a 3D curve from 2D input. If necessary, the user may modify the edge from other viewpoints until the desired shape is achieved.
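
Because a cubic is uniquely determined by four points at fixed parameter values, the construction can be illustrated with an equivalent Bezier formulation (the system itself uses the four-point Hermite form of [26]); the interior control points are recovered from the two projected points at u = 1/3 and u = 2/3:

```python
import numpy as np

def cubic_through_four(p0, q1, q2, p3):
    """Cubic Bezier control points for the curve passing through p0, q1, q2, p3
    at parameters u = 0, 1/3, 2/3, 1 (the end points are kept fixed)."""
    p0, q1, q2, p3 = map(np.asarray, (p0, q1, q2, p3))

    def bernstein(u):
        return np.array([(1 - u) ** 3, 3 * u * (1 - u) ** 2,
                         3 * u ** 2 * (1 - u), u ** 3])

    b_a, b_b = bernstein(1.0 / 3.0), bernstein(2.0 / 3.0)
    # Solve the 2x2 linear system for the interior control points P1, P2.
    A = np.array([[b_a[1], b_a[2]],
                  [b_b[1], b_b[2]]])
    rhs = np.vstack([q1 - b_a[0] * p0 - b_a[3] * p3,
                     q2 - b_b[0] * p0 - b_b[3] * p3])
    p1, p2 = np.linalg.solve(A, rhs)
    return np.array([p0, p1, p2, p3])
```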

To preserve smoothness, we maintain a set of $G^1$ continuity constraints between the edges that form the key character lines. Figure 13.17a shows example curves subject to these constraints. As a result, when the designer modifies an edge, the neighboring edges are automatically modified to maintain this continuity. During edge modification, our scheme preserves the magnitudes of the neighboring edges’ tangent vectors. As shown in Fig. 13.17b, this may have a notable effect on the neighboring edges if they have significantly large tangent vectors. While this can be desirable in many cases, it can also be a hindrance when the designer wants to minimize such effects to create sharp transitions between neighboring edges. In such cases, the designer may explicitly intervene by revealing the tangent vector handles and adjusting their magnitudes.

Fig. 13.17

a Examples of connected edges that form $G^1$ continuity. b Top: Edge modification causes significant changes to the neighboring edge. Bottom: Attenuating the influence on the neighboring edge by tangent reduction

3.5 Surface Representation and Manipulation

The network of cubic edges gives rise to a set of face loops, which enables a natural surface representation based on parametric patches. For each face loop, we construct a bicubically blended Coons patch [8] using the four boundary edges. Figure 13.18 illustrates the idea. A key advantage of this representation is that, when a boundary edge is modified, neighboring surface patches are seamlessly adjusted with $G^1$ continuity to reflect the new edge shape. For further details, see [18].
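
For illustration, a bilinearly blended Coons patch evaluator is sketched below; the system uses the bicubically blended variant, which replaces the linear blending functions with Hermite ones to obtain the continuity behavior described above:

```python
import numpy as np

def coons_patch(c0, c1, d0, d1, u, v):
    """Bilinearly blended Coons patch point at (u, v), u and v in [0, 1].

    c0(u), c1(u) : boundary curves at v = 0 and v = 1.
    d0(v), d1(v) : boundary curves at u = 0 and u = 1.
    The four curves must agree at the patch corners.
    """
    lu = (1 - u) * np.asarray(d0(v)) + u * np.asarray(d1(v))    # ruled surface in u
    lv = (1 - v) * np.asarray(c0(u)) + v * np.asarray(c1(u))    # ruled surface in v
    # Bilinear interpolation of the four corner points.
    blin = ((1 - u) * (1 - v) * np.asarray(c0(0)) + u * (1 - v) * np.asarray(c0(1))
            + (1 - u) * v * np.asarray(c1(0)) + u * v * np.asarray(c1(1)))
    return lu + lv - blin
```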

Fig. 13.18

a Coons patch surface representation. b Continuity between two surface patches

The designer can use the resulting surface model as a substrate to explore specific styling ideas in 3D. For this, the designer can simply trace over the feature lines already in the sketch. The sketched curves are projected onto the underlying surface model using the techniques described in Sect. 13.2.1. The designer can then smooth or modify the newly created curves using the techniques described in Sect. 13.2.2 and those described in [20].

3.6 Examples

Figures 13.19 and 13.20 show example designs created using our system. In all cases, it took less than 20 minutes to obtain the displayed 3D geometry and create the styling curves.

Fig. 13.19

An example concept design

Fig. 13.20

Hatchback design

Our experience has shown the accuracy of template alignment to be critical to the final result of the 3D construction. We note that camera calibration becomes increasingly challenging for sketches that depict strong perspectives or exaggerated depictions. Additionally, the sensitivity of the optimization algorithm for points farther from the camera (where fiducial points pile together) typically causes the corresponding 3D shape to be less accurate in those regions. However, this issue is often a consequence of insufficient information in the sketch, rather than a shortcoming of the deformation algorithm. It can be remedied by allowing the designer to incorporate multiple sketches depicting different views of the concept, and judiciously combining the results into a single 3D geometry.

4 Conclusions

In this chapter, we demonstrated how a pen-based modeling system can be effectively applied to the styling design of 3D objects. Our template-based approach is tailored toward the rapid and natural design of styling features such as free-form curves and surfaces. In a typical scenario, the user first constructs the base wireframe model of the design object by sketching the initial feature curves on a very rough and simplified 3D template model. Once the initial curves comprising the wireframe are constructed, the base 3D template is removed, leaving the user with a set of 3D curves. Next, through direct sketching, the user modifies the initially created curves to give them the precise desired form. After the desired wireframe is obtained, the user constructs interpolating surfaces that cover the wireframe. Finally, using two physically based deformation tools, the user modifies the newly created surfaces to the desired shapes. Once the basic wireframe and surfaces are created, further details can be added using the same strategy of curve creation, curve modification, surface creation, and finally surface modification. The results of our user study indicate that our pen-based modeling framework is effective in providing a natural and simple means of creating 3D geometry. Most users found the ability to directly manipulate 3D curves using pen strokes to be a particularly powerful feature of our system. Our future goals include enhancing and expanding our current modeling techniques to enable a richer set of design tools for the user.