1 Introduction

Isogeometric analysis (IGA) represents a revolutionary development in the integration of Computer-Aided Design (CAD) and Computer-Aided Engineering (CAE), offering a transformative approach across various engineering disciplines [1, 2]. IGA distinguishes itself from the conventional finite element method by utilizing consistent spline-based basis functions for both geometric modeling and numerical simulation. This seamless integration circumvents the need for converting spline-based CAD models into linear mesh models, thereby preserving geometric integrity and eliminating potential errors in the analysis phase.

A pivotal first step in the IGA workflow is to construct an analysis-suitable spline-based parameterization \(\varvec{x}(\varvec{\xi })\) from the Boundary Representation (B-Rep) of the physical domain \(\Omega\) [3]. Research indicates that the quality of this parameterization significantly influences the accuracy and efficiency of subsequent analyses [4, 5]. For an analysis-suitable parameterization, ensuring bijectivity is of utmost importance, followed by the minimization of angle and area distortions. Over the past decade, various approaches have been developed [6,7,8,9,10,11,12].

Many established methods in this field assume that the input boundary curves remain unchanged. However, the parameterization speed of these curves plays a crucial role in the quality of the resulting parameterization. This is particularly true for elongated, thin physical domains, as illustrated in Fig. 1a. A common challenge arises when designers focus primarily on the immediate curves and neglect the parameterization speed of their opposite counterparts. This oversight frequently forces a manual determination of boundary correspondences, a practice that limits both automation and efficiency in the parameterization process. Our work addresses this gap by proposing a method that enhances automation and streamlines the parameterization process.

The quality of boundary correspondence has been recognized as a pivotal factor in analysis-suitable parameterization, directly influencing the quality of the domain parameterization and, in turn, the accuracy of subsequent analyses. Zheng et al. pioneered a method leveraging optimal mass transportation theory to enhance boundary correspondence, primarily focusing on selecting the four corner points [13]. Zhan et al. utilized a deep neural network approach for corner point selection in planar parameterizations [40]. Both of these methodologies typically commence with selecting four corner points for planar parameterization and then rely on conventional chord-length parameterization to determine the speed of the boundary parameters. This approach, however, presents a notable limitation: it can constrain the overall quality and accuracy of the parameterization, as clearly illustrated in Fig. 1a.

Fig. 1 Boundary parameter matching problem (Continental Shelf of the Gulf of Mexico): a utilizing conventional chord-length parameterization for the boundary curves, b applying the proposed Schwarz–Christoffel mapping approach for enhanced boundary correspondence

This paper presents a novel approach to address the boundary parameter matching problem for elongated physical domains in isogeometric analysis-suitable parameterization, focusing specifically on the parameter speed of boundary curves. Our method employs the Schwarz–Christoffel (SC) mapping, a specialized form of conformal mapping that has not been previously applied in the IGA framework. This method facilitates precise boundary correspondence, as demonstrated in Fig. 1b. To the best of our knowledge, this study is the first to tackle the boundary parameter matching issue in IGA.

The main contributions of this paper are as follows:

  • Schwarz–Christoffel (SC) mapping is employed to compute markers on the input boundary curves. The proposed approach transforms complex NURBS-represented boundary curves into simplified polygons, followed by computing an SC mapping from these polygons to the unit disk. The existence and uniqueness of the SC mapping are underpinned by the well-known Riemann mapping theorem.

  • A boundary reparameterization scheme is developed, which actively recalibrates parameters to achieve a harmonious alignment between opposite boundary curves. This scheme is specifically designed to maintain the geometric accuracy of the original boundaries. A theoretical proof is given to demonstrate that the proposed scheme preserves the exact geometry of the input boundaries.

  • Our findings suggest that satisfactory parameterization results can be achieved with a straightforward linear interpolation-based method, offering an alternative to more advanced analysis-suitable parameterization techniques. This is a significant insight, as it implies that simpler methods, when applied effectively, can still yield high-quality parameterizations. The effectiveness of this approach is further validated by numerical experiments, which demonstrate notable improvements in parameterization quality using the proposed method.

The remainder of the paper is organized as follows. Section 2 provides a comprehensive review of the existing literature on analysis-suitable parameterization and SC mapping. In Sect. 3, we define the specific problem our study addresses and introduce the key ideas underlying our proposed method. Section 4 presents an in-depth exploration of our methodology. The results and comparative analysis of different methods are discussed in Sect. 5, demonstrating the efficacy of our approach. Section 6 explores the practical applications of our proposed method in solid modelling and IGA simulations. This paper concludes in Sect. 7 with a summary of our key findings and insights into future research directions.

2 Related work

This section first provides a review of current parameterization methodologies in IGA, and then surveys the applications and theoretical advancements of SC mapping.

2.1 Review of analysis-suitable parameterization in IGA

The importance of parameterization quality in enhancing the accuracy and efficiency of downstream analyses has been emphasized in various studies [3,4,5]. Algebraic methods, such as Coons patches [14] and Spring patches [15], are commonly employed for constructing analysis-suitable parameterizations due to their simplicity and efficiency. However, these methods can sometimes result in non-bijective parameterizations, particularly for complex computational domains.

To address these limitations, a range of methods has been developed over the past decade. Xu et al. [6] pioneered a nonlinear constrained optimization framework, marking a significant advancement in this field. However, this method often involves a large number of constraints, leading to substantial computational demands. To mitigate this, Wang and Qian [16] and Pan et al. [17] introduced strategies that effectively reduce computational burdens. Utilizing barrier-type objective functions, Ji et al. efficiently eliminated foldovers through solving a simple unconstrained optimization problem [18]. An alternative approach involves the use of penalty functions and Jacobian regularization techniques, tracing back to Garanzha et al.’s work in mesh untangling [19, 20]. Wang and Ma [21] adopted this strategy, thereby avoiding additional foldover elimination steps. Ji et al. improved upon this by introducing a new penalty term that minimizes numerical errors from previous iterations [22]. Beyond the barrier-type objective function, other studies have delved into quasi-conformal theories, such as Teichmüller mapping [23] and the low-rank quasi-conformal method [24]. Additionally, Martin et al. utilized discrete harmonic functions for trivariate B-spline solids [25], while Nguyen and Jüttler [26] and Xu et al. [27] explored sequences and variational methods of harmonic mapping, respectively. Falini et al. approached the problem by computing harmonic maps from physical domains to parametric domains using boundary element methods, followed by inverse mapping approximations via least-squares fitting [28]. Building on the principles of Elliptic Grid Generation (EGG), Hinz et al. proposed PDE-based methods that excel in domains with extreme aspect ratios due to their robust convergence properties [8, 29]. Further advancing this domain, Ji et al. introduced an enhanced elliptic PDE-based parameterization technique, notable for its speed and ability to produce uniform elements near concave/convex boundary regions in general domains [11]. Zhang et al. [30, 31] and Liu et al. [32] developed volumetric spline-based parameterizations for IGA, utilizing T-splines known for their local refinement capabilities. A computational reuse framework for three-dimensional models was introduced by Xu et al. [33]. Later, Wang et al. [34] proposed a multi-patch parameterization method that integrates frame field and polysquare structures to handle complex computational domains. In addition, Xu et al. [35] and Ji et al. [36] introduced r-adaptive methods that focus on minimizing curvature metrics.

The aforementioned studies presuppose the availability of boundary representations and keep the input boundary curves and surfaces fixed in their original form. Establishing a coherent correspondence between distinct shapes is a fundamental challenge in shape analysis, and various methods have been developed over the past decades [37, 38]. However, these traditional methods fall short in the context of our research, as their underlying assumptions do not apply. Liu et al. allow the parameterization of the boundary curves to vary by optimizing the boundary and interior control points simultaneously [10]. Zheng et al. introduced an automated technique for boundary correspondence in isogeometric analysis-suitable parameterization, grounded in optimal mass transportation theory [13]. This approach primarily focuses on identifying appropriate corner placements and then applies chord-length methods for boundary reparameterization, which may lead to outcomes such as the one depicted in Fig. 1a. Building on this, Zheng et al. extended the concept to volumetric cases [39], while Zhan et al. identified corner points in the physical domain using deep neural networks [40].

This paper proceeds with the fundamental assumption that the four corner points of the domain are predetermined. The key contribution of our work is in calculating the boundary parameter correspondence, a critical aspect for achieving isogeometric analysis-suitable parameterization.

2.2 Review of Schwarz–Christoffel mapping

Conformal mapping, particularly the Schwarz–Christoffel (SC) mapping, plays a vital role due to its theoretical significance and wide-ranging practical applications [41]. Central to the numerical computation of SC mapping is the challenging Schwarz–Christoffel parameter problem [42]. To address this complexity, a myriad of computational techniques has been developed, with numerical conformal mapping emerging as a particularly effective solution. To surmount computational challenges associated with SC mapping, Driscoll and Vavasis introduced an innovative approach utilizing Cross-Ratios and Delaunay Triangulation (CRDT) algorithms [43]. This method was further extended by Delillo and Kropf to accommodate multiply connected domains [44]. As for the available implementations of SC mapping, the SCPACK package [45], originally developed in Fortran, represents an early solution. The SC Toolbox [46] in MATLAB and its subsequent open-source version [47] are notable advancements. The computational speed of SC mapping has been further accelerated by Banjai and Trefethen [48] through the use of a multipole method. Moreover, Anderson extended SC mapping to non-polygonal domains [49]. For an in-depth understanding of SC mapping, the monograph by Driscoll and Trefethen [50] is a valuable reference.

3 Problem statement and key ideas

In this section, we define the specific problem addressed by our research and establish the notation used consistently throughout the paper. Additionally, we introduce the key ideas that underpin our proposed methodology.

3.1 Problem statement

In numerous industrial applications, domains characterized by a pronounced, elongated shape with extreme aspect ratios are commonly encountered. Such configurations are prevalent in channels, conduits within reactors, and waterways, as illustrated in Fig. 1. In these scenarios, a reasonable boundary correspondence across opposing long boundary curves is crucial. The parameter speed along these boundaries significantly influences the quality and bijectivity of the derived parameterization. Our study specifically addresses this challenge, seeking to optimize boundary parameter matching and ensure the integrity of the parameterization in such elongated domains.

Consider the real variables \(\varvec{x} = [x,y]^{\textrm{T}}\) in the physical domain \(\Omega\), alongside \(\varvec{\xi } = [\xi ,\eta ]^{\textrm{T}}\), representing orthogonal real variables within the parametric domain \(\hat{\Omega }\). The quartet of boundary curves, given in NURBS form as \(\mathcal {C}^{\textrm{West}}(\eta )\), \(\mathcal {C}^{\textrm{East}}(\eta )\), \(\mathcal {C}^{\textrm{South}}(\xi )\), and \(\mathcal {C}^{\textrm{North}}(\xi )\), with the superscript \(^{*}\) denoting the boundary direction, is defined by

$$\begin{aligned} \mathcal {C}^{*}(\xi ) = \sum _{i=0}^{n^{*}} \textbf{P}_{i}^{*} R_{i,p^{*}}^{\varvec{\Xi }^{*}}(\xi ), \end{aligned}$$
(1)

where \(R_{i,p^{*}}^{\varvec{\Xi }^{*}}(\xi ) = {\omega _i N_{i,p^{*}}^{\varvec{\Xi }^{*}}(\xi )}/{\sum _{j=0}^{n^{*}} \omega _j N_{j,p^{*}}^{\varvec{\Xi }^{*}}(\xi )}\) are the i-th degree-\(p^{*}\) NURBS basis functions, and \(N_{i,p^{*}}^{\varvec{\Xi }^{*}}(\xi )\) denote the i-th degree-\(p^{*}\) B-Spline basis functions defined over the knot vector \(\varvec{\Xi }^{*}\). Assuming, for simplicity, that the West curve \(\mathcal {C}^{\textrm{West}}(\eta )\) and the East curve \(\mathcal {C}^{\textrm{East}}(\eta )\) represent the longitudinal boundaries, our goal is to establish an analysis-suitable parameterization \(\varvec{x}:\ \hat{\Omega } \rightarrow \Omega\) through a coordinate transformation, expressed in B-Spline form, that adheres to the prescribed boundary conditions \(\varvec{x}^{-1}|_{\partial \Omega } = \partial \hat{\Omega }\). Hence, we define:

$$\begin{aligned} \varvec{x}(\varvec{\xi }) = \underbrace{ \sum _{i \in \mathcal {I}_I} \textbf{P}_i R_i(\varvec{\xi })}_{\textrm{unknown}} + \underbrace{ \sum _{j \in \mathcal {I}_B} \textbf{P}_j R_j(\varvec{\xi })}_\textrm{known}, \end{aligned}$$
(2)

where \(\mathcal {I}_I\) and \(\mathcal {I}_B\) index the unknown interior and the known boundary control points, respectively,

$$\begin{aligned} R_{i}(\varvec{\xi }) = R_{i_1, i_2}(\xi , \eta ) = \frac{\omega _{i_1, i_2} N_{i_1, p_1}^{\varvec{\Xi }}(\xi ) N_{i_2, p_2}^{\varvec{\mathcal {H}}}(\eta )}{\sum _{i_1 = 0}^{n_1} \sum _{i_2 = 0}^{n_2} \omega _{i_1, i_2} N_{i_1, p_1}^{\varvec{\Xi }}(\xi ) N_{i_2, p_2}^{\varvec{\mathcal {H}}}(\eta )} \end{aligned}$$
(3)

are bivariate tensor-product NURBS basis functions of bi-degree \((p_1,p_2)\), and \(\omega _{i_1, i_2}\) are weights.
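To make the tensor-product formula (3) concrete, the following C++ sketch evaluates all bivariate NURBS basis functions at one parametric point from precomputed univariate B-spline values; the function name, the dense data layout, and the assumption that the univariate values are already available are illustrative choices rather than a description of our actual implementation.

```cpp
#include <vector>

// Evaluate all bivariate NURBS basis functions R_{i1,i2}(xi, eta) of Eq. (3),
// given precomputed univariate B-spline values N1[i1] = N_{i1,p1}(xi),
// N2[i2] = N_{i2,p2}(eta) and the weight grid w[i1][i2].
// A minimal sketch; in practice only the (p1+1)*(p2+1) functions that are
// non-zero on the current knot span would be evaluated.
std::vector<std::vector<double>> nurbsBasis2D(
    const std::vector<double>& N1,
    const std::vector<double>& N2,
    const std::vector<std::vector<double>>& w)
{
    const std::size_t n1 = N1.size(), n2 = N2.size();
    std::vector<std::vector<double>> R(n1, std::vector<double>(n2, 0.0));

    // Weighted sum in the denominator of Eq. (3).
    double W = 0.0;
    for (std::size_t i = 0; i < n1; ++i)
        for (std::size_t j = 0; j < n2; ++j)
            W += w[i][j] * N1[i] * N2[j];

    // Rational basis functions.
    for (std::size_t i = 0; i < n1; ++i)
        for (std::size_t j = 0; j < n2; ++j)
            R[i][j] = w[i][j] * N1[i] * N2[j] / W;

    return R;
}
```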

Fig. 2 Workflow diagram of the proposed boundary parameter matching approach via Schwarz–Christoffel mapping

The differential form of the transformation is expressed as follows:

$$\begin{aligned} \begin{bmatrix} dx \\ dy \end{bmatrix} = \varvec{\mathcal {J}} (\xi , \eta ) \begin{bmatrix} d \xi \\ d \eta \end{bmatrix} \end{aligned}$$
(4)

with \(\varvec{\mathcal {J}}(\xi , \eta )\) representing the Jacobian matrix. A scaled rotation transformation is defined by the composition of a diagonal scaling matrix and an orthogonal rotation matrix, i.e.,

$$\begin{aligned} \varvec{\mathcal {J}} = \begin{bmatrix} h(\xi , \eta ) &{} 0 \\ 0 &{} g(\xi , \eta ) \end{bmatrix} \begin{bmatrix} \cos \theta (\xi , \eta ) &{} - \sin \theta (\xi , \eta ) \\ \sin \theta (\xi , \eta ) &{} \cos \theta (\xi , \eta ) \end{bmatrix}, \end{aligned}$$
(5)

where \(h(\xi , \eta )\) and \(g(\xi , \eta )\) are scalar functions that respectively define the components of the scaling transformation, and \(\theta (\xi , \eta )\) represents the angle of rotation transformation.

A conformal transformation is distinguished as a scaled rotation transformation that fulfills the additional condition of \(g = h\). Specifically, this implies that

$$\begin{aligned} \varvec{\mathcal {J}} = h(\xi , \eta ) \begin{bmatrix} \cos \theta (\xi , \eta ) &{} - \sin \theta (\xi , \eta ) \\ \sin \theta (\xi , \eta ) &{} \cos \theta (\xi , \eta ) \end{bmatrix}. \end{aligned}$$
(6)

Such a transformation preserves angles and is characterized by a metric that remains invariant with respect to the direction of \(d \varvec{\xi } = d \xi + \vec {i} d\eta\), where \(\vec {i}\) is the imaginary unit. For concise notation, a point in a plane is not distinguished from a complex number in this paper. In complex context, if we define \(\varvec{x} = x + \vec {i} y\) and \(\varvec{\xi } = \xi + \vec {i} \eta\), then (6) is essentially a statement of the well-known Cauchy-Riemann equations.
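To spell this out, identifying the entries of the Jacobian in (6) with the partial derivatives of x and y yields

$$\begin{aligned} \frac{\partial x}{\partial \xi } = h \cos \theta = \frac{\partial y}{\partial \eta }, \qquad \frac{\partial x}{\partial \eta } = - h \sin \theta = - \frac{\partial y}{\partial \xi }, \end{aligned}$$

which are exactly the Cauchy-Riemann equations for the map \(\varvec{\xi } \mapsto \varvec{x}\).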

3.2 Key ideas

Figure 2 presents the main workflow of our methodology. Central to our approach is the utilization of the Schwarz–Christoffel mapping, a conformal mapping technique. Initially, we simplify the complex NURBS-represented boundary curves into a closed polygon. This step prepares the domain for the application of SC mapping. The polygon is then numerically mapped onto the unit disk through the SC mapping process. Next, a series of markers is placed along the long boundary curves to guide the reparameterization process.

Subsequent to the mapping, we proceed with a reparameterization scheme for the boundary curves. This entails a careful recalibration of parameters, ensuring the parameterization speed of one curve is harmoniously aligned with that of the other. A distinguishing feature of our method is that it preserves both the accuracy and the continuity of the initial geometric boundaries.

Finally, we integrate the robust and efficient parameterization technique from our previous research. This workflow yields parameterizations that are exceptionally well suited to the demands of isogeometric analysis (see Fig. 2), thereby enhancing the fidelity and utility of the analysis process.

4 Methodology

This section details the core methodology of our proposed approach. In Sect. 4.1, we explore the foundational concepts of SC mapping. Section 4.2 details the computational aspects of the SC parameter problem. Section 4.3 presents the curve reparameterization process, emphasizing its role in preserving geometric accuracy. Lastly, Sect. 4.4 offers a concise overview of the improved PDE-based method.

4.1 Schwarz–Christoffel mapping

Consider \(\mathcal {P}\), an open and simply connected polygon in the complex plane \(\mathbb {C}\), as illustrated on the right of Fig. 3. The Riemann mapping theorem guarantees the existence of an analytic function f with a nowhere-vanishing derivative that maps the open unit disk \(\mathcal {D}\) bijectively onto \(\mathcal {P}\), i.e., \(f(\mathcal {D}) = \mathcal {P}\). Such a mapping is not unique on its own: prescribing the image \(f(0) \in \mathcal {P}\) of the origin together with an angle \(\alpha \in [0, 2\pi )\) for \(\arg (f'(0))\) determines f uniquely. The point \(f(0)\) is referred to as the conformal center of f, anchoring the mapping in the complex plane.

Fig. 3 Notational conventions for the Schwarz–Christoffel mapping

Denote by \(\omega _1, \omega _2, \ldots , \omega _n\) the vertices of the polygon \(\mathcal {P}\), listed in counterclockwise order. For ease of indexing, we define \(\omega _{n+1} = \omega _1\) and \(\omega _0 = \omega _n\). The interior angles at these vertices are represented by \(\alpha _1, \alpha _2, \ldots , \alpha _n\). We define \(\beta _j = \alpha _j / \pi - 1\) for each vertex j, where \(\beta _j\) falls within the interval \((-1, 1]\). Furthermore, since the exterior angles of a simple polygon make one full turn of \(2\pi\), the interior angles satisfy \(\sum _{k=1}^n \alpha _k = (n - 2) \pi\), or equivalently \(\sum _{k=1}^n \beta _k = -2\). Given these constraints, a conformal mapping f from the unit disk \(\mathcal {D}\) to the polygon \(\mathcal {P}\), as shown in Fig. 3, can be described as [50]:

$$\begin{aligned} f(z) = f(z_0) + C \int _{z_0}^{z} \prod _{j=1}^{n} \left( 1 - \frac{\zeta }{z_j} \right) ^{\beta _j} d \zeta , \end{aligned}$$
(7)

where \(f(z_0)\) and C are complex constants with \(C \ne 0\). Here, \(z_1, z_2, \ldots , z_n\) are prevertices located on the boundary of \(\mathcal {D}\) in counterclockwise order, satisfying the condition \(\omega _k = f(z_k)\) for each \(k = 1, 2, \ldots , n\). Equation (7) is commonly referred to as the Schwarz–Christoffel formula.

4.2 Schwarz–Christoffel parameter problem

A crucial step in the transformation (7) is the determination of the prevertices \(z_j\), for \(j = 1, 2, \ldots , n\). These prevertices typically lack a closed-form solution, except in special cases. The transformation is characterized by \(n+4\) real parameters: the complex affine constants \(f(z_0)\) and C, as well as the angles \(\theta _1, \theta _2, \ldots , \theta _n\), where \(\theta _i\) denotes the argument of the prevertex \(z_i\) on the unit circle. Due to the three degrees of freedom inherent in Möbius transformations, it is possible to fix three prevertices arbitrarily, including the predetermined \(z_n\). The remaining \(n-3\) prevertices are then uniquely defined and must be determined by solving a system of nonlinear equations. This challenge forms the essence of the Schwarz–Christoffel parameter problem.

Practical implementation of SC mapping often involves solving complex nonlinear equations derived from the geometric constraints of polygons. This process, fundamental to software like SCPACK [45] and the SC Toolbox for MATLAB [46], aims to match the computed properties of the polygon \(\mathcal {P}\), such as side lengths and orientation, with those of the desired polygon, thereby establishing \(n-3\) conditions [42]. However, two significant challenges limit the efficiency and applicability of these software packages.

Firstly, the nonlinear systems often lack a simple, solvable structure, leading to potential pitfalls such as local minima, which can significantly hinder, or even entirely prevent, the convergence of solvers. Secondly, a more critical challenge is the “crowding” phenomenon, a frequent occurrence in domains characterized by elongated and narrow channels [51]. This effect causes the prevertices \(z_j\) to cluster together, with their spacing shrinking exponentially as the aspect ratio of the constricted regions grows. Such intricacies in the conventional SC mapping approaches underscore the need for a more robust and reliable alternative. In response to these challenges, our approach adopts the Cross-Ratios of the Delaunay Triangulation (CRDT) algorithm, a method within the numerical conformal mapping paradigm, as proposed in [43]. This method addresses the aforementioned issues more effectively by leveraging the stability and precision of the CRDT technique.

The CRDT hinges on the principle that the cross-ratio is invariant under conformal mapping. Consider four distinct points a, b, c, and d in the complex plane, arranged to form a quadrilateral abcd with vertices ordered counterclockwise. Additionally, let ac be an interior diagonal of this quadrilateral. The cross-ratio of these points is mathematically defined as follows:

$$\begin{aligned} \rho (a, b, c, d) = \frac{(d - a) (b - c)}{(c - d) (a - b)}. \end{aligned}$$
(8)
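As a minimal illustration in C++ (the function name is ours and not part of any referenced toolbox), the cross-ratio in (8) amounts to a single expression over complex numbers:

```cpp
#include <complex>

using cplx = std::complex<double>;

// Cross-ratio rho(a, b, c, d) = (d - a)(b - c) / ((c - d)(a - b)), cf. Eq. (8).
cplx crossRatio(const cplx& a, const cplx& b, const cplx& c, const cplx& d)
{
    return (d - a) * (b - c) / ((c - d) * (a - b));
}
```

In Step 2 below, only the logarithm of its magnitude is stored, i.e., \(c_i = \log \vert \rho (\cdot ) \vert\).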

The main computation scheme is as follows:

  • Step 1: Splitting the long edges of polygon \(\mathcal {P}\) prevents the formation of elongated and narrow quadrilaterals whose edges align with those of the polygon. This step is crucial as it ensures the prevertices of such quadrilaterals are not densely crowded on the unit circle. After this splitting, let n represent the total number of vertices.

  • Step 2: Compute the Delaunay triangulation of the polygon \(\mathcal {P}\). From this triangulation, identify the \(n-3\) diagonals, denoted as \(d_1, d_2, \ldots , d_{n-3}\), and their corresponding quadrilaterals \(Q_1, Q_2, \ldots , Q_{n-3}\), with \(Q_i = Q(d_i)\) for each i. For each quadrilateral \(Q_i\), the vertices are represented as \(\omega _{k(i,1)}, \omega _{k(i,2)}, \omega _{k(i,3)},\) and \(\omega _{k(i,4)}\), where i ranges from 1 to \(n-3\). Here, each four-tuple \(\left( k(i,1), k(i,2), k(i,3), k(i,4) \right)\) consists of distinct indices drawn from the set \(\{1, 2, \ldots , n\}\). The next step is to calculate \(c_i\) for each quadrilateral, defined as:

    $$\begin{aligned} c_i = \log \left( \left| \rho (\omega _{k(i,1)}, \omega _{k(i,2)}, \omega _{k(i,3)}, \omega _{k(i,4)}) \right| \right) , \end{aligned}$$
    (9)

    for \(i=1,2,\ldots ,n-3\), where \(\vert \cdot \vert\) indicates the magnitude of a complex number.

  • Step 3: Solve the nonlinear system \(\varvec{\mathcal {F}} = 0\), where the i-th nonlinear equation is

    $$\begin{aligned} \varvec{\mathcal {F}}_i = \log \left( \left| \rho (\zeta _{k(i,1)}, \zeta _{k(i,2)}, \zeta _{k(i,3)}, \zeta _{k(i,4)}) \right| \right) - c_i, \end{aligned}$$
    (10)

    where \(\zeta _1, \zeta _2, \ldots , \zeta _n\) are the vertices of the polygonal image produced by the SC mapping formula (7). This is based on the invariance of the cross-ratio of the vertices \(\zeta\) under a similarity transformation. For integration along a straight-line path originating from the origin, the compound Gauss-Jacobi quadrature rule, as detailed by [42], is utilized. The central computational challenge in CRDT lies in effectively solving the nonlinear system \(\varvec{\mathcal {F}} = 0\) in (10). In practice, we found that simple Picard iteration converges too slowly. Therefore, we opt for nonlinear solvers that rely solely on function evaluations. Specifically, we employ a Gauss-Newton solver, enhanced with a Broyden update for the derivative \(\varvec{\mathcal {F}}'\), to achieve more efficient convergence; a generic sketch of such an iteration is given after this list.
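The CRDT system (10) is square, with \(n-3\) equations in the \(n-3\) unknown prevertex positions. The following self-contained C++/Eigen sketch illustrates the kind of Broyden-updated quasi-Newton iteration referred to above, applied to a generic residual functor; the functor interface, the finite-difference initialization of the Jacobian, and the tolerances are illustrative assumptions, and the actual CRDT residual additionally evaluates (7) by compound Gauss-Jacobi quadrature.

```cpp
#include <Eigen/Dense>
#include <functional>

using Vec = Eigen::VectorXd;
using Mat = Eigen::MatrixXd;

// Quasi-Newton solver for a square nonlinear system F(x) = 0, with the
// Jacobian approximated once by finite differences and then kept up to
// date through Broyden's rank-one update (cf. Step 3).
Vec broydenSolve(const std::function<Vec(const Vec&)>& F, Vec x,
                 int maxIter = 100, double tol = 1e-10)
{
    const int n = static_cast<int>(x.size());
    Vec f = F(x);

    // Initial Jacobian approximation by forward differences.
    Mat B(n, n);
    const double h = 1e-7;
    for (int j = 0; j < n; ++j) {
        Vec xp = x;
        xp(j) += h;
        B.col(j) = (F(xp) - f) / h;
    }

    for (int k = 0; k < maxIter && f.norm() > tol; ++k) {
        Vec dx = B.fullPivLu().solve(-f);   // quasi-Newton step
        Vec xNew = x + dx;
        Vec fNew = F(xNew);

        // Broyden update: B += (dF - B dx) dx^T / (dx^T dx).
        Vec dF = fNew - f;
        B += (dF - B * dx) * dx.transpose() / dx.squaredNorm();

        x = xNew;
        f = fNew;
    }
    return x;
}
```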

Upon computing the SC mapping as described in Eq. (7), we evaluate it by uniformly sampling the boundary of the unit disk to identify the marker sets \(\textbf{P}_i^{\textrm{West}}\) and \(\textbf{P}_i^\textrm{East}\) located on the closed polygon, as depicted in Fig. 4a. The next step in our methodology is to reparameterize one boundary curve while keeping the parameter speed of the other boundary fixed.

4.3 Geometry-preserving curve reparameterization technique

Without loss of generality, we choose to fix the West boundary curve \(\mathcal {C}^{\textrm{West}}\) and apply reparameterization to the East boundary curve \(\mathcal {C}^\textrm{East}\). The theoretical foundation of the reparameterization process is the invariant property of B-Spline basis functions under scaling and translation transformations, as elucidated in the following lemma.

Lemma 1

Let \(N^{\varvec{\Xi }}_{i, p}(\xi )\) be the i-th degree-p B-Spline basis function defined over the knot vector \(\varvec{\Xi }\). Consider a scaled and translated knot vector \(\hat{\varvec{\Xi }} = s \varvec{\Xi } + t\) with \(s > 0\). Then \(N^{\hat{\varvec{\Xi }}}_{i, p}(s \xi + t) = N^{\varvec{\Xi }}_{i, p}(\xi )\) holds.

Proof

We prove this lemma using the principle of mathematical induction on the degree p of the B-Spline basis functions.

For \(p=0\), \(N^{\varvec{\Xi }}_{i, 0}(\xi )\) and \(N^{\hat{\varvec{\Xi }}}_{i, 0}(s \xi + t)\) are piecewise constant functions defined as:

$$\begin{aligned} N^{\varvec{\Xi }}_{i, 0}(\xi ) = {\left\{ \begin{array}{ll} 1 &{} \text {if } \xi _{i} \le \xi < \xi _{i+1}, \\ 0 &{} \text {otherwise}, \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} N^{\hat{\varvec{\Xi }}}_{i, 0}(s \xi + t) = {\left\{ \begin{array}{ll} 1 &{} \text {if } s \xi _{i} + t \le s \xi + t < s \xi _{i+1} + t, \\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

Since \(s > 0\), the conditions for non-zero values are equivalent, implying \(N^{\hat{\varvec{\Xi }}}_{i, 0}(s \xi + t) = N^{\varvec{\Xi }}_{i, 0}(\xi )\).

Assume the lemma holds for all degrees less than \(p \ge 1\). Then, for degree p, the recurrence relation for B-Spline basis functions gives:

$$\begin{aligned} \begin{aligned} N^{\hat{ \varvec{\Xi } }}_{i, p}(s \xi + t)&= \frac{(s \xi + t) - (s \xi _{i} + t)}{(s \xi _{i+p} + t) - (s \xi _{i} + t)} N^{\hat{ \varvec{\Xi } }}_{i, p-1}(s \xi + t) \\&\quad + \frac{(s \xi _{i+p+1} + t) - (s \xi + t)}{(s \xi _{i+p+1} + t) - (s \xi _{i+1} + t)} N^{\hat{ \varvec{\Xi } }}_{i+1, p-1}(s \xi + t) \\&= \frac{s (\xi - \xi _{i})}{s (\xi _{i+p} - \xi _{i})} N^{\varvec{\Xi }}_{i, p-1}(\xi ) \\&\quad + \frac{s (\xi _{i+p+1} - \xi )}{s (\xi _{i+p+1} - \xi _{i+1})} N^{ \varvec{\Xi }}_{i+1, p-1}(\xi ) \\&= N^{\varvec{\Xi }}_{i, p}(\xi ). \end{aligned} \end{aligned}$$
(11)

This completes the proof. \(\square\)

Corollary 2

Let \(R^{\varvec{\Xi }, \varvec{\mathcal {H}}}_{i,j,p,q}(\xi , \eta )\) be the NURBS basis function defined over the knot vectors \(\varvec{\Xi }\) and \(\varvec{\mathcal {H}}\) with weights \(\omega _{i,j}\), and degrees p and q. Consider scaled and translated knot vectors \(\hat{\varvec{\Xi }} = s \varvec{\Xi } + t\) and \(\hat{\varvec{\mathcal {H}}} = s' \varvec{\mathcal {H}} + t'\) with \(s > 0\) and \(s' > 0\). Then, \(R^{\hat{\varvec{\Xi }}, \hat{\varvec{\mathcal {H}}}}_{i,j,p,q}(s \xi + t, s' \eta + t') = R^{\varvec{\Xi }, \varvec{\mathcal {H}}}_{i,j,p,q}(\xi , \eta )\).

Proof

Given the invariant property of B-Spline basis functions as stated in Lemma 1, the proof of this corollary is trivial. \(\square\)

Building upon the invariant property of B-Spline basis functions established in Lemma 1, we can extend this concept to NURBS curves. The invariance of B-Spline basis functions under scaling and translation transformations of the knot vector implies that NURBS curves, which are defined using these basis functions, will similarly maintain their geometric form under such transformations. This observation leads us to the following proposition:

Proposition 3

Let \(\mathcal {C}(\xi )\) be a NURBS curve defined over the parameter \(\xi\) within the domain of the knot vector \(\varvec{\Xi }\). If \(\hat{\varvec{\Xi }} = s \varvec{\Xi } + t\) represents the scaled and translated knot vector with \(s > 0\), then the NURBS curve \(\hat{\mathcal {C}}(s \xi + t)\), defined over the transformed parameter domain, is geometrically identical to \(\mathcal {C}(\xi )\). Formally, \(\hat{\mathcal {C}}(s\xi +t) = \mathcal {C}(\xi )\), ensuring the geometric shape of the curve remains invariant under scaling and translation of its parameter domain.

Based on the fundamental principle in Proposition 3 that the geometric integrity of NURBS curves is preserved under scaling and translation transformations of their parameter domain, we develop a novel reparameterization method specifically tailored for addressing boundary parameter correspondence challenges.
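As a small numerical illustration of Proposition 3 (the numbers are chosen by us for exposition): if a curve is defined over \(\varvec{\Xi } = \{0, 0, 0, \tfrac{1}{2}, 1, 1, 1\}\), then choosing \(s = \tfrac{1}{4}\) and \(t = \tfrac{1}{2}\) gives

$$\begin{aligned} \hat{\varvec{\Xi }} = s \varvec{\Xi } + t = \left\{ \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{5}{8}, \tfrac{3}{4}, \tfrac{3}{4}, \tfrac{3}{4} \right\} , \end{aligned}$$

so the same control points and weights, now associated with \(\hat{\varvec{\Xi }}\), trace exactly the same curve while occupying the parameter interval \([\tfrac{1}{2}, \tfrac{3}{4}]\) instead of \([0, 1]\). This is precisely the operation used in Sect. 4.3 to move an East curve segment onto a prescribed West parameter interval.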

Fig. 4 Schematic diagram for curve reparameterization and the resulting parameterization after curve reparameterization

With the computed markers \(\textbf{P}_i^{\textrm{West}}\) and \(\textbf{P}_i^\textrm{East}\) (\(i=0,1,\ldots ,m\)) from the SC map, the next step is to determine the corresponding parameter values \(\xi _i^\textrm{East}\) and \(\xi _i^{\textrm{West}}\) for these markers. This involves projecting a point onto a NURBS curve to find the closest point on the curve and then performing point inversion to find the corresponding parameter. This process is known as the point inversion and projection problem, as described in Section 6.1 of the well-known monograph [52]. In our context, the markers \(\textbf{P}_i^{\textrm{West}}\) and \(\textbf{P}_i^\textrm{East}\) do not necessarily lie on the input curves \(\mathcal {C}^{\textrm{West}}(\xi )\) and \(\mathcal {C}^\textrm{East}(\xi )\) since they are on the approximating polygon. Therefore, we compute the closest points on the East and West boundary curves by solving the following nonlinear equation:

$$\begin{aligned} \begin{aligned} \left( \mathcal {C}^{\textrm{West}}(\xi ) - \textbf{P}^{\textrm{West}}_i \right) \cdot \mathcal {C}^{\prime , \textrm{West}}(\xi )&= 0,\\ \left( \mathcal {C}^\textrm{East}(\xi ) - \textbf{P}^\textrm{East}_i \right) \cdot \mathcal {C}^{\prime , \textrm{East}}(\xi )&= 0, \end{aligned} \end{aligned}$$
(12)

where \(\mathcal {C}^{\prime , \textrm{West}}(\xi ) = {\partial \mathcal {C}^{\textrm{West}}(\xi )}/{\partial \xi }\) and \(\mathcal {C}^{\prime , \textrm{East}}(\xi ) = {\partial \mathcal {C}^{\textrm{East}}(\xi )}/{\partial \xi }\) represent the first derivatives of the curves \(\mathcal {C}^{\textrm{West}}(\xi )\) and \(\mathcal {C}^{\textrm{East}}(\xi )\), respectively. Here, \(\textbf{P}^{\textrm{West}}_i\) and \(\textbf{P}^{\textrm{East}}_i\) denote the i-th marker on the boundary curves \(\mathcal {C}^{\textrm{West}}(\xi )\) and \(\mathcal {C}^{\textrm{East}}(\xi )\), respectively.

In our implementation, the standard Newton’s method is employed to solve equation (12) efficiently. Note that a good initial guess is critical to the convergence of the standard Newton’s method. To this end, we evaluate curve points at densely spaced parameter values and use the parameter value with the closest distance to the current marker as the initial guess for the Newton iteration.
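A compact sketch of this projection step is given below. The abstract curve interface (point and derivative evaluation), the sampling density, and the function names are assumptions made for illustration; they are not the G+Smo classes used in our implementation.

```cpp
#include <Eigen/Dense>
#include <algorithm>
#include <cmath>

// Minimal curve interface: point C(xi) and derivatives C'(xi), C''(xi).
struct Curve {
    virtual Eigen::Vector2d eval  (double xi) const = 0;
    virtual Eigen::Vector2d deriv (double xi) const = 0;
    virtual Eigen::Vector2d deriv2(double xi) const = 0;
    virtual ~Curve() = default;
};

// Project the marker P onto the curve: find xi with (C(xi) - P) . C'(xi) = 0,
// cf. Eq. (12). A dense sampling provides the initial guess, followed by
// Newton iterations on g(xi) = (C(xi) - P) . C'(xi).
double projectToCurve(const Curve& C, const Eigen::Vector2d& P,
                      double a = 0.0, double b = 1.0,
                      int samples = 1000, int maxIter = 30)
{
    // Initial guess: closest of the densely sampled curve points.
    double xi = a, best = (C.eval(a) - P).squaredNorm();
    for (int k = 1; k <= samples; ++k) {
        double t = a + (b - a) * k / samples;
        double d = (C.eval(t) - P).squaredNorm();
        if (d < best) { best = d; xi = t; }
    }

    // Newton iteration: g(xi) = (C - P) . C',  g'(xi) = |C'|^2 + (C - P) . C''.
    for (int k = 0; k < maxIter; ++k) {
        Eigen::Vector2d r  = C.eval(xi) - P;
        Eigen::Vector2d d1 = C.deriv(xi);
        double g  = r.dot(d1);
        double dg = d1.squaredNorm() + r.dot(C.deriv2(xi));
        if (std::abs(dg) < 1e-14 || std::abs(g) < 1e-12) break;
        xi = std::min(b, std::max(a, xi - g / dg));  // clamp to the parameter range
    }
    return xi;
}
```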

Upon identifying these parameters, we then focus on the East boundary curve \(\mathcal {C}^\textrm{East}\). We split this curve at the parameter \(\xi _{i+1}^\textrm{East}\) by inserting the knot value \(\xi _{i+1}^\textrm{East}\) until its multiplicity equals the degree of the curve. Consequently, we obtain a curve segment defined over the parameter interval \([\xi _{i}^\textrm{East}, \xi _{i+1}^\textrm{East}]\). The next step is to align this segment with the West boundary curve \(\mathcal {C}^{\textrm{West}}\). To accomplish this, we apply an affine transformation to the curve segment, adjusting its parameter interval to match the interval \([\xi _{i}^{\textrm{West}}, \xi _{i+1}^{\textrm{West}}]\). The result is a harmonized alignment between the East and West boundary curves, crucial for maintaining geometric consistency in the reparameterization process. However, note that this procedure is equivalent to subdividing the entire computational domain into a set of subdomains and parameterizing each subdomain separately. Consequently, the reparameterized curve retains only \(G^k\) (geometric) continuity at the segment junctions, rather than the \(C^k\) (parametric) continuity of its original form.
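Concretely, the affine remapping of a segment's knot vector is the transformation \(\hat{\varvec{\Xi }} = s \varvec{\Xi } + t\) of Proposition 3; a minimal sketch (with a helper name of our choosing) that moves a segment from the East interval \([\xi _{i}^\textrm{East}, \xi _{i+1}^\textrm{East}]\) to the West interval \([\xi _{i}^{\textrm{West}}, \xi _{i+1}^{\textrm{West}}]\) is shown below, while the preceding splitting step relies on standard knot insertion and is not repeated here.

```cpp
#include <vector>

// Affinely remap a (segment) knot vector from the interval [ea, eb]
// (East parameter range) to [wa, wb] (West parameter range), i.e.
// xi_new = s * xi + t with s = (wb - wa) / (eb - ea), t = wa - s * ea.
// By Lemma 1 / Proposition 3 this leaves the curve geometry unchanged.
std::vector<double> remapKnots(const std::vector<double>& knots,
                               double ea, double eb, double wa, double wb)
{
    const double s = (wb - wa) / (eb - ea);
    const double t = wa - s * ea;
    std::vector<double> out(knots.size());
    for (std::size_t k = 0; k < knots.size(); ++k)
        out[k] = s * knots[k] + t;
    return out;
}
```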

This procedure enables the reparameterization of the curve while maintaining the geometric integrity. By applying the linear interpolation-based parameterization method along the \(\eta\)-direction, we achieve a refined parameterization. This improvement is demonstrated in Fig. 4b, which highlights the significant improvement in parameterization quality achieved through our reparameterization procedure.

4.4 Analysis-suitable parameterization

As demonstrated in Fig. 5a, our method for reparameterizing boundary curves, when compared with the original boundary curves shown in Fig. 1a, proves to be highly effective. Even a straightforward linear interpolation between corresponding boundary control points results in a markedly improved parameterization. In this section, we explore the possibility of further enhancing the parameterization quality by integrating our previously developed PDE-based method [11].

It is important to highlight that, in this case, we are constrained by the absence of additional degrees of freedom to refine the parameterization quality. To address this, we apply k-refinement along the \(\eta\)-direction, thereby introducing the necessary degrees of freedom. Henceforth in this paper, unless explicitly stated otherwise, we elevate the degree along the \(\eta\)-direction to 2 and insert 2 extra knots into the knot vector in that direction. This k-refinement procedure results in the knot vector \(\varvec{\mathcal {H}} = \{ 0, 0, 0, 1/3, 2/3, 1, 1, 1 \}\).

The core step of the PDE-based parameterization method lies in solving a nonlinear elliptic PDE system:

$$\begin{aligned} \left\{ \begin{array}{cc} \nabla \cdot [\mathbb {A} \nabla \xi (x)] &{}= 0\\ \nabla \cdot [\mathbb {A} \nabla \eta (x)] &{}= 0 \end{array} \quad \text {s.t.}\ \varvec{x}^{-1}|_{\partial \Omega } = \partial \hat{\Omega }. \right. \end{aligned}$$
(13)

where the tensor field \(\mathbb {A}\) is set to \(\texttt {DIAG}(1/\left| \varvec{\mathcal {J}} \right| , 1/\left| \varvec{\mathcal {J}} \right| )\) in [11].

Fig. 5 Comparative visualization of the scaled Jacobian \(\vert \varvec{\mathcal {J}} \vert _s\): a parameterization via linear interpolation; b improved parameterization employing the PDE-based approach

Denote by \(\mathbb {S}_{p,q}^{\varvec{\Xi }, \varvec{\mathcal {H}}}\) the spline space spanned by the bivariate B-spline bases of degree p and q with knot vectors \(\varvec{\Xi }\) and \(\varvec{\mathcal {H}}\), respectively, and by \((\mathbb {S}_{p,q}^{\varvec{\Xi }, \varvec{\mathcal {H}}})^0 = \{N_i \in \mathbb {S}_{p,q}^{\varvec{\Xi }, \varvec{\mathcal {H}}}: N_i \vert _{\partial \hat{\Omega }} = 0\}\) the subspace of functions vanishing on \(\partial \hat{\Omega }\). The variational principle reduces the nonlinear PDE system (13) to the equivalent discretization scheme:

$$\begin{aligned} \forall N_i \in (\mathbb {S}_{p,q}^{\varvec{\Xi }, \varvec{\mathcal {H}}})^0: \left\{ \begin{array}{cc} \int _{\Omega }\ \nabla \textbf{N} \cdot \mathbb {A} \nabla \xi (x)\ \textrm{d} \Omega = \textbf{0}, &{}\\ \int _{\Omega }\ \nabla \textbf{N} \cdot \mathbb {A} \nabla \eta (x)\ \textrm{d} \Omega = \textbf{0}, &{} \end{array} \quad \text {s.t.}\ \varvec{x}|_{\partial \hat{\Omega }} = \partial \Omega , \right. \end{aligned}$$
(14)

where \(\textbf{N}\) denotes the collection of B-spline basis functions in the spline space \((\mathbb {S}_{p,q}^{\varvec{\Xi }, \varvec{\mathcal {H}}})^0\). The enhanced parameterization, depicted in Fig. 5b, demonstrates a significant improvement in parameterization quality. This advancement underscores the effectiveness of the proposed approach in refining the parameterization process. Our implementation is now available in the open-source G+Smo library [53], offering a robust solution framework for the analysis-suitable parameterization problem in isogeometric analysis.

5 Numerical experiments and comparisons

In this section, we present a numerical investigation to demonstrate the effectiveness of our proposed method for boundary parameter matching problem.

5.1 Implementation details and parameterization quality metrics

The method described in this paper is implemented using C++ programming language. The computational experiments were conducted on a MacBook Pro 14-inch 2021 equipped with an Apple M1 Pro CPU and 16 GB of RAM. We utilize G+Smo (Geometry + Simulation Modules), an open-source C++ library, as the foundation of our implementation, leveraging its extensive functionalities in geometric computing and simulation [53, 54]. For handling matrix and vector operations, as well as solving the linear systems integral to our method, Eigen library [55] is integrated.

Two quality metrics are employed to assess the parameterization: the scaled Jacobian, denoted as \(\vert \varvec{\mathcal {J}} \vert _s\), which evaluates orthogonality, and the uniformity metric, denoted as \(m_{\mathrm{unif.}}\), which measures uniformity.

  • The scaled Jacobian is defined as follows:

    $$\begin{aligned} \vert \varvec{\mathcal {J}} \vert _s = \frac{\vert \varvec{\mathcal {J}} \vert }{\vert \varvec{x}_{\xi } \vert \vert \varvec{x}_{\eta } \vert }. \end{aligned}$$
    (15)

    This metric takes values ranging from \(-1.0\) to 1.0. A negative value of \(\vert \varvec{\mathcal {J}} \vert _s\) indicates a fold in the parameterization \(\varvec{x}\), signifying overlaps in the mapping. The ideal value of \(\vert \varvec{\mathcal {J}} \vert _s\), which is 1.0, is achieved when the orthogonality in the parameterization is maintained optimally.

  • The uniformity metric is computed as:

    $$\begin{aligned} m_{\mathrm{unif.}} = \left| \frac{\vert \mathcal {J} \vert }{R_\textrm{area}} - 1 \right| , \end{aligned}$$
    (16)

    where \(R_\textrm{area} = {\text {Area}(\Omega )}/{\text {Area}(\hat{\Omega })}\) represents the area ratio between the physical domain \(\Omega\) and the parametric domain \(\hat{\Omega }\). This metric \(m_{\mathrm{unif.}}\) attains its optimal value of 0.0 when the parameterization uniformly conserves area.

For a comprehensive evaluation, these metrics are calculated over a dense grid of \(1001 \times 1001\) sample points, including the domain boundaries. In our analysis, we exclude the maximum values of \(\vert \varvec{\mathcal {J}} \vert _s\) and the minimum values of \(m_{\mathrm{unif.}}\) from the statistics, as these optimal values are typically attained in all the examples.
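A minimal C++/Eigen sketch of how the two metrics are evaluated at a single sample point is given below; the struct and function names are illustrative, and the routine assumes that the Jacobian columns \(\varvec{x}_{\xi }\), \(\varvec{x}_{\eta }\) and the area ratio \(R_\textrm{area}\) have already been computed.

```cpp
#include <Eigen/Dense>
#include <cmath>

struct QualityMetrics {
    double scaledJacobian;  // |J|_s in Eq. (15), ideal value 1.0
    double uniformity;      // m_unif in Eq. (16), ideal value 0.0
};

// Evaluate both metrics at one parametric sample point from the
// Jacobian columns x_xi, x_eta and the global area ratio R_area.
QualityMetrics evalMetrics(const Eigen::Vector2d& x_xi,
                           const Eigen::Vector2d& x_eta,
                           double R_area)
{
    // Jacobian determinant |J| = x_xi x x_eta (2D cross product).
    const double detJ = x_xi.x() * x_eta.y() - x_xi.y() * x_eta.x();

    QualityMetrics m;
    m.scaledJacobian = detJ / (x_xi.norm() * x_eta.norm());
    m.uniformity     = std::abs(detJ / R_area - 1.0);
    return m;
}
```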

5.2 Role of boundary correspondence

This section is devoted to conducting a comparative quality analysis of the parameterizations derived from various methods.

Initially, as depicted in Fig. 6a, the boundary curves are parameterized using an approximate chord-length method tailored to their individual representations. However, this method fails to produce an analysis-suitable parameterization: the mismatch in parameter speeds between opposite boundaries severely degrades the quality of the resulting parameterization.

Fig. 6 Comparative analysis of parameterization techniques for the Scheldt estuary: scaled Jacobian \(\vert \varvec{\mathcal {J}} \vert _s\) and uniformity metric \(m_{\mathrm{unif.}}\) across different methods

Our proposed boundary parameter matching method addresses these issues effectively, enhancing the overall quality of parameterization. This improvement is achieved through a simple linear interpolation along the \(\eta\)-direction, as illustrated in Fig. 6b, leading to considerable enhancements. With the linear interpolation-based parameterization as an initial guess, further refinement is obtained by implementing our PDE-based parameterization technique, which results in a superior quality of parameterization, as shown in Fig. 6c. The effectiveness of these improvements is substantiated by comprehensive statistical data, detailed in Table 1. In this case, the initial application of the chord-length method results in an invalid parameterization, indicated by a negative minimum value of the scaled Jacobian \(\vert \varvec{\mathcal {J}} \vert _s\). This suggests that the parameterization is non-bijective. We omit the additional quality metrics for the non-bijective parameterization results.

Table 1 Comparisons between different methods: chord-length method, linear interpolation-based parameterization method, and PDE-based parameterization method
Fig. 7 Scaled Jacobian \(\vert \varvec{\mathcal {J}} \vert _s\) for a letter “M”-shaped domain

Building on the previous discussion, the effectiveness of our proposed method is further exemplified with an additional “M”-shaped domain, as shown in Fig. 7. This example is instrumental in highlighting the substantial improvement in parameterization quality achieved by our approach. Remarkably, even a straightforward linear interpolation technique, when applied in conjunction with our boundary parameter matching approach, results in parameterizations of surprisingly high quality. In instances like these, resorting to a PDE-based parameterization method to further improve quality becomes superfluous. This outcome demonstrates not only the effectiveness of our method but also its robustness in handling complex geometries.

Although the proposed method is primarily designed for elongated strip-shaped geometries with extreme aspect ratios, it can also be applied to other types of geometries, such as disk-shaped ones. As shown in Fig. 8, the upper boundary is composed of two B-spline curves. This example demonstrates the feasibility of applying the proposed method to boundary curves composed of multiple B-spline curves. First, we manually determine the four corners for parameterization. Next, the two upper boundary curves are merged into a single B-spline curve with \(C^0\) continuity. Then, we apply the proposed method to the resulting geometry. The result, shown in Fig. 8b, demonstrates that the proposed method is applicable not only to strip-shaped geometries but also to other types of geometries. Quantitative data are given in Table 1.

Fig. 8 Parameterization of a disk-like geometry using the proposed boundary parameter matching method

5.3 Comparisons with existing methods

This section presents a comparative analysis of our proposed method against established techniques, specifically the low-rank parameterization method introduced by Pan et al. [24], the boundary correspondence method utilizing optimal mass transport as proposed by Zheng et al. [13], the boundary corners selection method based on deep learning by Zhan et al. [40], and the simultaneous boundary and interior parameterization method via deep learning proposed by Zhan et al. [56].

For our comparative analysis, we utilize the open-source implementation of the boundary correspondence method provided by the original authors [13], which is accessible at https://github.com/ZH-ye/BoundaryCorrespondenceOT. Within the workflow of this method, the authors first identify optimal corner points that align the curvature measure of the physical domain \(\Omega\) with that of the unit square. Subsequently, a spline-based least-squares fitting subroutine, in conjunction with chord-length parameterization, is used to approximate the input point cloud, and the low-rank quasi-conformal parameterization method [24] then completes the parameterization. For the last two competitors by Zhan et al. [40, 56], the parameterizations were kindly provided by the original authors.

Fig. 9 Letter “G”: comparison of the scaled Jacobian \(\vert \varvec{\mathcal {J}} \vert _s\) with different methods

Figure 9 shows the parameterizations of a “G”-shaped domain obtained through various existing methods alongside our proposed approach. As depicted in Fig. 9a, the boundary correspondence method [13] does not correctly identify the corners in this example. Specifically, the least-squares approximation of the input curves results in the loss of geometric precision at a sharp corner, an issue our method avoids by preserving the exact geometry of the boundaries. A further concern is the significant distortion in the resulting parameterization due to the mismatched opposite boundaries. To address this, we establish correct corner correspondences and engage the low-rank quasi-conformal method [57]. However, as shown in Fig. 9b, this approach still yields a non-bijective parameterization (see Table 1). Figure 9c shows the automatic corner detection using the deep learning method proposed by Zhan et al. [40], coupled with an interior parameterization achieved through the low-rank quasi-conformal method detailed by Pan et al. [24]; the parameterization quality is evidently poor due to the mismatch of the opposing boundaries. Figure 9d shows the non-bijective parameterization, characterized by numerous self-intersections, produced by the deep learning approach [56]. In contrast, the parameterization initialized with chord-length parameterization and the one further refined using our method with simple linear interpolation produce bijective outcomes, as demonstrated in Fig. 9e and Fig. 9f, respectively. Notably, our method significantly enhances the orthogonality of the resulting parameterization compared with the initial parameterization in Fig. 9e.

Fig. 10 Letter “P”: comparison of the scaled Jacobian \(\vert \varvec{\mathcal {J}} \vert _s\) with different methods

Additional comparative results are presented in Fig. 10. In this case, the boundary correspondence method [13], as depicted in Fig. 10a, successfully identifies the four corners, leading to a parameterization comparable to that achieved by the low-rank quasi-conformal method (Fig. 10b). Figure 10c, d show the non-bijective parameterizations generated by [40] and [56], respectively. The parameterization derived from the initial chord-length method, illustrated in Fig. 10e, exhibits significant distortion, particularly at the upper left turn. Comprehensive statistical data are provided in Table 1, where a negative minimum value of the scaled Jacobian \(\left| \varvec{\mathcal {J}} \right| _s\) indicates that a parameterization is non-bijective. In contrast, our proposed method, shown in Fig. 10f, achieves a clearly superior parameterization.

6 Applications

6.1 Application to solid modeling

In 3D solid modeling applications, numerous CAD models can be generated or effectively simplified to planar cases through fundamental operations such as extrusion, sweeping, lofting, ruling, and revolving [52]. Although this paper primarily focuses on planar domains, the proposed method can be readily adapted to 3D volumes created using these fundamental modeling operations. Figure 11 shows volumetric parameterizations generated by combining the proposed method with extrusion and sweeping, showcasing its applicability to solid modeling.
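As an illustration of how little is needed on top of the planar result, the sketch below extrudes a planar control net into a trivariate one by replicating it along a direction vector; the data layout, the function name, and the linear parameterization in the third direction are assumptions made for this example rather than a description of our implementation.

```cpp
#include <Eigen/Dense>
#include <vector>

// Extrude a planar control net P(i,j) (placed at z = 0) into a volumetric net
// P(i,j,k) = P(i,j) + (k / nLayers) * d. The resulting solid inherits the
// planar parameterization in the first two directions and is linearly
// parameterized along the extrusion direction d.
std::vector<Eigen::Vector3d> extrudeNet(const std::vector<Eigen::Vector2d>& planar,
                                        const Eigen::Vector3d& d, int nLayers)
{
    std::vector<Eigen::Vector3d> volume;
    volume.reserve(planar.size() * (nLayers + 1));
    for (int k = 0; k <= nLayers; ++k) {
        const Eigen::Vector3d offset = d * (static_cast<double>(k) / nLayers);
        for (const auto& p : planar)
            volume.push_back(Eigen::Vector3d(p.x(), p.y(), 0.0) + offset);
    }
    return volume;
}
```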

Furthermore, we apply the proposed boundary parameter matching method to generate a volumetric parameterization for a 5-lobe rotor and its circular casing, as illustrated in Fig. 11c. A planar slice is parameterized using the PDE-based technique introduced in Sect. 4.4, and the volumetric parameterization is then obtained through rotation and lofting along the third direction in physical space, both before (left) and after (right) employing the proposed boundary parameter matching method. The colormap represents the scaled Jacobian of the resulting parameterization. It is evident that the proposed method significantly improves the quality, particularly enhancing the orthogonality of the resulting parameterization.

Fig. 11 Illustration of solid models generated by integrating the proposed parameterization method with standard solid modeling operations, e.g., extrusion, sweeping, and rotation

6.2 Application to IGA simulation

To validate the usability of our parameterizations for IGA simulation, we consider Poisson’s problem with mixed boundary conditions in the domain \(\Omega\) (see Fig. 12):

$$\begin{aligned} \left\{ \begin{aligned} -\Delta u&= f,\ \ \text {in}\ \Omega ,\\ u&= g,\ \ \text {on}\ \Gamma _{D},\\ \nabla u \cdot \textbf{n}&=h,\ \ \text {on}\ \Gamma _{N},\\ \end{aligned} \right. \end{aligned}$$
(17)

where \(f \in L^2(\Omega ): \Omega \rightarrow \mathbb {R}\) denotes a given source term, \(\partial \Omega =\bar{\Gamma }_{D}\cup \bar{\Gamma }_{N}\) defines the boundary of the physical domain \(\Omega\) with \(\Gamma _{D}\cap \Gamma _{N}=\mathbf {\varnothing }\), \(\textbf{n}\) denotes the outward pointing unit normal vector on the boundary \(\partial \Omega\), \(\Gamma _{D}\) and \(\Gamma _{N}\) denote the separate parts of the boundary where Dirichlet and Neumann boundary conditions are prescribed, respectively.

Fig. 12 IGA simulation on a channel geometry

Figure 12a, b present the initial chord-length parameterization and the linear interpolation-based parameterization obtained with our boundary matching method, respectively. These visualizations, with the scaled Jacobian \(\vert \varvec{\mathcal {J}} \vert _s\) color-encoded, demonstrate the significant improvement in orthogonality due to our method. For this example, the source term is set as \(f = 8 \pi ^2 \sin (2 \pi x) \sin (2 \pi y)\), leading to the exact solution \(u^\textrm{exact} = \sin (2 \pi x) \sin (2 \pi y)\). The boundary conditions and the exact solution are depicted in Fig. 12c, d, respectively.
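For completeness, the chosen source term and exact solution are consistent, since

$$\begin{aligned} -\Delta u^\textrm{exact} = - \left( \frac{\partial ^2 u^\textrm{exact}}{\partial x^2} + \frac{\partial ^2 u^\textrm{exact}}{\partial y^2} \right) = 8 \pi ^2 \sin (2 \pi x) \sin (2 \pi y) = f. \end{aligned}$$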

Fig. 13 Error convergence during h-refinement

Figure 13 shows the \(L_2\) and \(H_1\) error convergence for different parameterizations. The results highlight a substantial reduction in error for our improved parameterization compared to the original chord-length parameterization, with all methods achieving the optimal rate of convergence.

7 Conclusions and outlook

This paper presents a novel method designed to address the boundary parameter correspondence challenge, a crucial aspect of analysis-suitable parameterization in IGA. Our approach, integrating Schwarz–Christoffel mapping with a subsequent curve reparameterization procedure, effectively maintains the geometric exactness and continuity of the input. Significantly, our boundary parameter matching technique enables even basic linear interpolation methods to yield high-quality parameterizations, especially in elongated domains. Through numerous numerical experiments, the effectiveness and reliability of our method have been convincingly demonstrated, underscoring its potential in practical applications.

Despite its strengths, our method is not without limitations. A significant constraint lies in the planar nature of conformal mapping, whereas the demand for mesh generation predominantly exists in three-dimensional contexts. This discrepancy presents both a challenge and an opportunity for future research. Exploring the potential of extending our method to address surface parameter correspondence in three-dimensional spaces is an exciting and valuable direction for ongoing investigations. Such advancements could significantly broaden the applicability and impact of our approach in the field of IGA. In addition, a typical workflow for computational domains with holes involves first computing a multi-patch layout and then applying a single-patch parameterization method to each individual patch. In this context, boundary parameter matching is critical to the quality of the resulting parameterization [58, 59]. Applying the proposed boundary parameter matching technique in this setting is an interesting direction for future work.