1 Introduction

One of the most effective methods for collecting spatial data is terrestrial laser scanning (TLS). The resulting data are provided as a 3D point cloud, possibly including radiometric information, namely colour and intensity. In general, point cloud data can serve as a geometric basis for modelling objects and phenomena. Point clouds are used for visualisation, measurement, and many other surveying tasks such as 3D model creation or surface reconstruction (Previtali et al. 2014), digitalisation of cultural heritage (Adamopoulos and Rinaudo 2021; Haličková and Mikula 2016; Remondino and Rizzi 2010), deformation analysis of structures (Holst et al. 2015), flood modelling or landslide surveying (Giussani and Scaioni 2012).

In the present paper, we work with point clouds resulting from TLS scans of buildings. Nowadays, virtual models are created for managing the whole lifecycle of a building, from design and construction through operation and facility management to demolition, using Building Information Modelling (BIM). A BIM model is an object-oriented virtual model defined as a combination of graphic and non-graphic data. The graphic data in most cases have the form of a 3D model. The information from the 3D point cloud, combined with the information derived from the BIM model, can be used for the monitoring and quality check of a given structure during the construction process or after its completion (Bariczová et al. 2021).

Our point cloud data include 3D coordinates of the points, colour channels captured by a digital camera, and the intensity channel containing information related to the strength of the reflected laser signal. Our goal is to find planar subsets of the original point cloud containing points that represent the surface of walls. These subsets could be used, e.g. for automation of the creation of as-built models of buildings.

In our typical workflow, we perform a two-phase segmentation. Primary segmentation is performed by plane fitting. Planar subsets of the entire point cloud are selected. Each subset contains points lying within a specified distance from the fitting plane and will be referred to as a planar point cloud. The planar point cloud corresponding to a wall often contains objects that do not represent the surface of the wall (e.g. doors and electrical outlets) and may affect the result of the quality check (Bariczová et al. 2021). The role of the secondary segmentation based on evolving curves is to segment each planar point cloud into subsets representing the wall surface and other objects, e.g. a door or a whiteboard, and the remaining unsegmented points (waste).

For the primary segmentation, many methods have been developed, see the review by Grilli et al. (2017). We usually use a modification of the Random Sample Consensus (RANSAC) algorithm (Fischler and Bolles 1981) called M-estimated Sample Consensus (MSAC) (Torr and Zisserman 2000), possibly followed by some filters, e.g. a normal filter which discards points whose local normals deviate from the normal of the fitting plane by more than a given threshold angle. In addition to the MSAC method, we also use the approach developed in Honti et al. (2022).

For the purpose of this paper, we consider a planar point cloud as the input data for the secondary segmentation, which will be described in detail. As we will see, the basic idea of our approach is to create a set of images representing the point cloud data and subsequently segment these images. In the field of image processing, many segmentation methods based on curve and surface evolution models have been developed. We distinguish two main approaches to handle evolution problems, the so-called direct (or Lagrangian) approach (Dziuk 1999; Mikula and Ševčovič 2004; Balažovjech and Mikula 2011) and the level set (or Eulerian) approach (Sethian 1999; Osher et al. 2004; Caselles et al. 1997; Zhao et al. 2000; Corsaro et al. 2006). Our secondary segmentation is based on a Lagrangian curve evolution model, which is computationally fast because it solves only a 1D curve evolution problem. The Eulerian approach treats the segmentation curve as the zero-level set of the level set function, and the whole function is evolving. Therefore, one has to solve a 2D evolution problem, which is computationally more expensive.

The evolution of segmentation curves is driven by information from multiple channels of the point cloud data, e.g. colour, intensity, or distance from the fitting plane. The quality of the discretisation mesh is crucial for stability and accuracy in the Lagrangian approach. Various techniques for tangential redistribution of points improving the mesh quality have been developed (Hou et al. 1994; Kimura 1997; Ševčovič and Mikula 2001; Barrett et al. 2011; Ševčovič and Yazaki 2011; Benninghoff and Garcke 2014), and we adopt asymptotically uniform redistribution of points from (Mikula and Ševčovič 2004). Moreover, an algorithm for the detection and treatment of the topological changes is needed (Benninghoff and Garcke 2014; Frei and Garcke 2016). We use an efficient O(n) algorithm developed in (Mikula and Urbán 2012; Balažovjech et al. 2012; Ambroz et al. 2019).

The mathematical model for curve evolution is an extension of the model from the paper (Mikula et al. 2021a), which deals with one-channel segmentation of forests from satellite images. The main contributions of the present paper are the fully automatic methodology for segmentation of the point clouds (comprising the two segmentation phases, the creation of the representative images, the curve segmentation, and the creation of the final segments of the point cloud), the multichannel approach using a novel construction of the aggregated edge detector, and the simultaneous segmentation of multiple regions (requiring a different detection of merging of curves). Minor contributions include the construction of the initialisation mask, a simpler approximation of the signed curvature (without using \(\arccos \) as in Mikula et al. (2021a)) and a local weight for the curvature regularisation. To make the text self-contained, we include some ideas from Mikula et al. (2021a) in this paper. A preliminary version of the curve segmentation of planar point clouds is outlined in the paper (Bariczová et al. 2021), which deals with the quality check of walls in buildings. There, only a brief intuitive description of the model is presented, and the functionality of the segmentation is shown on a single example. In this paper, we present the final version of the methodology with the full mathematical description of the model and all computational details of our approach. The functionality of the model is demonstrated on multiple representative examples.

The text of this paper is organised into three main sections. In Sect. 2, we introduce the main ideas of our segmentation methodology, mainly the creation of the representative planar point cloud images (Sect. 2.1), the mathematical model (Sect. 2.2), the construction of the edge detectors (Sect. 2.3), and our approach to the multichannel segmentation (Sect. 2.7). In Sect. 3, we present the numerical discretisation of the mathematical model. Sect. 4 presents several numerical experiments with real data that illustrate the functionality of our approach.

Fig. 1

Primary segmentation of an apartment scanned by TLS. A planar subset corresponding to a wall was segmented

Fig. 2

An example of the secondary segmentation phase of a planar point cloud corresponding to a wall with a door

2 Point cloud segmentation methodology

In this section, we describe our method for segmentation of the point clouds. We obtain our input point cloud data using the Trimble TX5 scanner. The point clouds usually contain millions of points with an average resolution of about \(2-6\) mm. A typical example, a flat, is shown in Fig. 1. The overall workflow of our segmentation process consists of the following main steps.

  1.

    Primary segmentation by plane fitting. Selection of planar subsets of the point cloud using the MSAC method, Fig. 1. Each subset consists of points located within a specified threshold distance from the fitting plane. The fitting can optionally be followed by a normal filter removing points whose local normals deviate from the normal of the fitting plane by more than a specified threshold angle. The resulting planar point clouds serve as input data for the secondary segmentation.

  2.

    Secondary segmentation by curve evolution. Segmentation of each planar subset of the point cloud using evolving curves (Fig. 2).

    (a)

      Preprocessing: Creation of representative images corresponding to the channels (such as colour or intensity) of the planar point cloud.

    (b)

      Evolution: Segmentation by evolving curves using representative images.

    (c)

      Postprocessing: Extraction of points in the point cloud corresponding to the segmented regions.

In the following subsections, we describe the steps of the secondary segmentation in detail. As we mentioned, the same procedure is performed for each planar point cloud. Therefore, in the following, we will work with a single planar point cloud. In Sects. 2.2–2.6, we present the method for one-channel segmentation, and in Sect. 2.7, we propose a technique to use information from multiple channels.

2.1 Preprocessing: creation of the representative images from planar point cloud

To reduce the dimension of the problem, we transform (rotate and move) our planar point cloud to the new coordinate system in which the first two dimensions span the fitting plane. If we have a transformed point with coordinates \(\textbf{x}=(x_1,x_2,x_3)\), the coordinates \(x_1\) and \(x_2\) describe the position of the point in the plane and the \(x_3\) axis is orthogonal to the fitting plane, i.e. the absolute value of \(x_3\) is the distance from the fitting plane, denoted as \(\text {D}=|x_3|\).
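This change of coordinates can be sketched as follows (a minimal Python illustration; the function name, the Rodrigues-formula construction, and the input conventions are our assumptions, not part of the original implementation):

```python
import numpy as np

def to_plane_coordinates(points, plane_normal, plane_point):
    """Rotate and translate points so that the fitting plane becomes
    the x1-x2 plane (with x3 orthogonal to it).

    `points` is an (N, 3) array; `plane_normal` and `plane_point` come
    from the primary plane fit. The Rodrigues-formula construction below
    is one standard way to build the rotation, used here for illustration.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    e3 = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, e3)                       # rotation axis (unnormalised)
    s, c = np.linalg.norm(v), np.dot(n, e3)   # sine and cosine of the angle
    if s < 1e-12:                             # normal already (anti)parallel to e3
        R = np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    else:
        vx = np.array([[0.0, -v[2], v[1]],
                       [v[2], 0.0, -v[0]],
                       [-v[1], v[0], 0.0]])
        R = np.eye(3) + vx + vx @ vx * ((1.0 - c) / s**2)  # Rodrigues formula
    return (points - plane_point) @ R.T
```

After the transform, the first two coordinates give the in-plane position of a point and the absolute value of the third coordinate is its distance \(\text {D}\) from the fitting plane.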

Fig. 3

Creation of the bitmap images from the point cloud data visualised for R, G, B channels

Subsequently, we create a regular square mesh in the \(x_1\), \(x_2\) plane. We denote the pixel size (edge length) as h and usually set it as a multiple of the resolution of the point cloud (e.g. 3\(\times \)resolution). Then, we represent the point cloud data by a set of bitmap images, one image for each available channel. Usually, we use the colour channels R, G, B, the intensity channel I, and the distance channel D. For example, we take the R channel of colour and compute the value in each pixel (square of the mesh) as the mean value of the R channel of the points which lie in the pixel, see Fig. 3. Finally, we rescale the image values to the range [0, 1]. The domain of the images will be denoted by \(\Omega \subset \mathbb {R}^2\).
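The rasterisation of one channel can be sketched as follows (a minimal sketch using NumPy; the function name and the NaN convention for empty pixels are illustrative):

```python
import numpy as np

def channel_image(x1, x2, values, h):
    """Rasterise one channel of a planar point cloud into a bitmap image.

    `x1`, `x2` are the in-plane coordinates of the points, `values` the
    channel values (e.g. the R channel), `h` the pixel size. Each pixel
    gets the mean value of the points falling into it; empty pixels
    become NaN.
    """
    i = np.floor((x1 - x1.min()) / h).astype(int)
    j = np.floor((x2 - x2.min()) / h).astype(int)
    sums = np.zeros((i.max() + 1, j.max() + 1))
    counts = np.zeros_like(sums)
    np.add.at(sums, (i, j), values)   # accumulate channel values per pixel
    np.add.at(counts, (i, j), 1.0)    # count points per pixel
    with np.errstate(invalid='ignore', divide='ignore'):
        img = sums / counts           # mean per pixel; empty pixels -> NaN
    vmin, vmax = np.nanmin(img), np.nanmax(img)
    return (img - vmin) / (vmax - vmin)   # rescale the values to [0, 1]
```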

The resulting images are pixel representations of the input planar point cloud and will be segmented using evolving curves.

2.2 Mathematical model of the curve evolution

In this section, we present a Lagrangian curve evolution model for segmentation of the planar point clouds obtained by TLS. The input data is an image \(\mathcal {I}\), which is one of the channels. The use of multiple channels will be described in Sect. 2.7. The mathematical model for curve evolution is an extension of the model from the paper (Mikula et al. 2021a), and we employ a similar notation. The segmentation curve is an evolving closed planar curve with a position vector of its points denoted by \(\textbf{x}(u,t)\), where \(u\in [0,1]\) is a parameter that goes along the curve and \(t\in [0,t_f]\) is the time (with \(t_f\) being a final time). The fact that the curve is closed means \(\textbf{x}(0,t) = \textbf{x}(1,t)\).

Fig. 4

An evolving closed planar curve plotted at times \(t=0,t_1,t_2\), where \(0<t_1<t_2\)

The evolution is driven by a suitably designed velocity field \(\textbf{v}(\textbf{x},t)\), therefore, the basic evolution model is

$$\begin{aligned} \partial _t \textbf{x}(u,t) = \textbf{v}(u,t), \end{aligned}$$
(1)

where \(\partial _t \textbf{x} :=\frac{\partial \textbf{x}}{\partial t}\) denotes the time derivative of the position vector, that is, the velocity of point \(\textbf{x}\). Equation (1) is coupled with an initial condition \(\textbf{x}(u,0) = \textbf{x}_0(u), u\in [0,1]\). The initial curve \(\textbf{x}_0\) is a small circle (Fig. 4) automatically placed inside the segmented region (e.g. the small white circle in Fig. 2b). Details on automatic insertion of the initial curve are provided in Sect. 2.8. We consider the velocity \(\textbf{v}(u,t)\) in the following form (see also Mikula et al. (2021a)):

$$\begin{aligned} \textbf{v}(u,t) = \big (1-\lambda (t)\big )\,\underbrace{B(u,t)\, \textbf{N}(u,t)}_\text {Expansion} + \lambda (t) \underbrace{\big (-\nabla E(\textbf{x}(u,t))\big )}_\text {Edge attraction} + \delta (u,t)\!\! \underbrace{k(u,t)\, \textbf{N}(u,t),}_\text {Curvature regularisation} \end{aligned}$$
(2)

where \(\textbf{N}\) denotes the positively oriented unit normal vector defined by rotation of the unit tangential vector \(\textbf{T}= \frac{\partial \textbf{x}}{\partial s}\) (also denoted as \(\partial _s \textbf{x}= (\partial _s x_1, \partial _s x_2)\)) by \(\frac{\pi }{2}\) in the positive direction, with \(s=\int _0^u\Vert \partial _u \textbf{x}\Vert \,du\) denoting the arc length parameter of the curve. Therefore, the normal can be expressed as \(\textbf{N}= \textbf{T}^\perp = (-\partial _s x_2,\partial _s x_1)\). The function k in (2) is the signed curvature and \(\nabla \) is the gradient operator. Functions E and B will be properly defined later. Now, we give a brief description of the terms in (2). The role of the first term \(B(u,t)\, \textbf{N}(u,t)\) is to expand the segmentation curve in the normal direction from its initial shape through the segmented region towards its border, and the ‘blowing’ function \(B(u,t)\) controls the speed of expansion.

The second term attracts the points of the curve towards the borders of the segmented region. Information about the borders is contained in the edge detector function \(E(\textbf{x})\in [0,1]\), \(\textbf{x}\in \Omega \) which has values close to 0 near the edges and close to 1 in homogeneous regions of the planar point cloud (see Fig. 5d). The negative gradient of E points towards the lower values and, therefore, is suitable to attract the segmentation curve towards the edges.

The time-dependent parameter \(\lambda (t)\in [0,1]\) serves as a weight between the expansion and the edge attraction term. The standard approach is to set \(\lambda (t)\) at the beginning (\(t=0\)) to a number \(0<\lambda _0\ll 1\) (i.e. the edge attraction does not dominate), keep it unchanged until the curve is close to the border (moving very slowly) and then switch \(\lambda (t)\) to 1, which turns off the expansion term. The setting \(\lambda (t) = \lambda _0\) will be called the expansion phase and the setting \(\lambda (t) = 1\) we call the attraction phase.

The last term in (2) is called curvature regularisation and has a smoothing effect. We use it to deal with the noise and to smooth the sharp edges of the segmentation curve during the expansion phase. The importance of the curvature regularisation was studied, e.g. in Mikula et al. (2021b). The parameter \(\delta (u,t)\) weights the influence of the term and will be properly defined in Sect. 2.7.

The first two terms in (2) could also be jointly referred to as the ‘external velocity’, because they substantially depend on external quantities, i.e. properties of the segmented image, unlike the curvature regularisation term, which depends on the signed curvature k and the normal \(\textbf{N}\), properties of the curve itself. However, we may set the weight \(\delta \) to depend also on the speed of the curve evolution, as we shall see later.

2.3 Edge detector

The (smoothed) edge detector function \(E(\textbf{x})\) is constructed as follows (Mikula et al. 2021a). First, we prefilter the original image \(\mathcal {I}\), for example, using the Gaussian filter to get \(\mathcal {I}_{\sigma _0}=G_{\sigma _0}*\mathcal {I}\) (the convolution with the Gaussian kernel \(G_{\sigma _0}\) with variance \(\sigma _0\)) and then compute the norm of the gradient of the filtered image \(\big \Vert \nabla \mathcal {I}_{\sigma _0}(\textbf{x})\big \Vert \). If the original image is very noisy and strong filtration is needed, we can use the anisotropic diffusion (Perona–Malik) filter (Perona and Malik 1990) instead. However, in some situations, the prefiltration is not necessary because some filtration is actually done in the preprocessing step (Sect. 2.1) when computing pixel representations of the point cloud data, which is akin to a mean filter. In such cases, one may directly compute the norm of the gradient of the original image \(\big \Vert \nabla \mathcal {I}(\textbf{x})\big \Vert \).

Next, we find edges in the image \(\mathcal {I}\) by computing an edge detector function

$$\begin{aligned} g(\textbf{x}) = \frac{1}{1 + \mu \, \big \Vert \nabla \mathcal {I}_{\sigma _0}(\textbf{x})\big \Vert ^2}. \end{aligned}$$
(3)

The gradients are large near the edges, and therefore, \(g(\textbf{x})\) is close to zero in such regions. In homogeneous areas, the gradients are small, and thus \(g(\textbf{x})\) is close to 1. The parameter \(\mu >0\) controls the sensitivity of edge detection. If it is small, the value of \(g(\textbf{x})\) is close to zero (detects an edge) only for very sharp edges (i.e. very large gradients). If the parameter \(\mu \) is large, the function \(g(\textbf{x})\) has values close to zero even for smaller gradients, which means that \(g(\textbf{x})\) is more sensitive to edges. The reasonable values of \(\mu \) are from the interval (0, 10].
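A possible implementation of the edge detector (3) with Gaussian prefiltering is sketched below (note that scipy's parameter is a standard deviation, whereas the text specifies a variance of the kernel; the function name and default parameter values are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_detector(image, mu=5.0, sigma0=1.0):
    """Edge detector g(x) = 1 / (1 + mu * ||grad I_sigma0||^2), Eq. (3).

    `sigma0` is passed to scipy as a standard deviation; the defaults
    are illustrative, with mu chosen from the interval (0, 10].
    """
    smoothed = gaussian_filter(image, sigma0)   # prefiltered image I_sigma0
    gy, gx = np.gradient(smoothed)              # central differences
    return 1.0 / (1.0 + mu * (gx**2 + gy**2))
```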

Fig. 5

Construction of the smoothed edge detector function for the red channel

Finally, we apply the Gaussian filter (convolve the function \(g(\textbf{x})\) with the Gaussian kernel with variance \(\sigma \)) to obtain the smoothed edge detector function

$$\begin{aligned} E(\textbf{x}) = (G_{\sigma } * g)(\textbf{x}). \end{aligned}$$
(4)

Smoothing makes the edges ‘wider’ and causes the edge attraction term \(-\nabla E(\textbf{x})\) in (2) to point in the right direction (towards the edge) in a larger neighbourhood of the actual edge, which helps the edge attraction. The edges become too wide and blurry for large \(\sigma \), so small values of \(\sigma \) are reasonable, we use \(\sigma =0.5\). In Fig. 5a, we can see a point cloud of a wall with two pictures. Image \(\mathcal {I}\) for the red channel is in Fig. 5c and the corresponding smoothed edge detector function \(E(\textbf{x})\) is plotted in Fig. 5d.

We note that if image \(\mathcal {I}\) contains holes (as in Fig. 7b), i.e. not a number (NaN) values corresponding to empty pixels (containing no points of the point cloud), the gradients and filtration need to be computed carefully. For example, if we carelessly calculate the derivative \(\partial _x \mathcal {I}\) in the \((i,j)\)-th pixel by the finite difference \((\partial _x \mathcal {I})_{ij}\approx (\mathcal {I}_{i+1,j} - \mathcal {I}_{i-1,j})/(2h)\) and the value \(\mathcal {I}_{i-1,j}\) or \(\mathcal {I}_{i+1,j}\) would be NaN, the result would be NaN. Therefore, we first interpolate (or extrapolate) empty pixels, then compute the finite difference, and eventually reset the NaN values to preserve the information about the holes (if \(\mathcal {I}_{ij}\) was NaN, then reset \((\partial _x \mathcal {I})_{ij}\) to NaN). If a point \(\textbf{x}(u,t)\) of the curve lies in an empty (NaN) pixel, we set both the expansion and edge attraction term in (2) to zero.
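This NaN-aware computation of the gradients can be sketched as follows (a minimal sketch; the nearest-neighbour fill via `distance_transform_edt` is one possible choice of the interpolation/extrapolation, not necessarily the one used in our implementation):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def nan_aware_gradient(image, h=1.0):
    """Central differences on an image with NaN holes (empty pixels).

    Empty pixels are first filled from the nearest non-NaN pixel
    (an illustrative choice of extrapolation), the finite differences
    are computed on the filled image, and the derivatives in the empty
    pixels are reset to NaN to preserve the information about the holes.
    """
    holes = np.isnan(image)
    # indices of the nearest non-NaN pixel for every pixel
    idx = distance_transform_edt(holes, return_distances=False,
                                 return_indices=True)
    filled = image[tuple(idx)]
    dy, dx = np.gradient(filled, h)   # (I[i+1,j] - I[i-1,j]) / (2h) inside
    dy[holes] = np.nan                # reset derivatives in empty pixels
    dx[holes] = np.nan
    return dx, dy
```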

2.4 Blowing function

The blowing function B used in the expansion term in (2) is defined as (see also Mikula et al. (2021a))

$$\begin{aligned} B(u,t) = H(u,t) \, E(\textbf{x}(u,t)), \end{aligned}$$
(5)

where H is called the homogeneity function and is defined as

$$\begin{aligned} H(u,t) = {\left\{ \begin{array}{ll} 1, &{}d(u,t) < d_0, \\ 0, &{}\text {otherwise}. \end{array}\right. } \end{aligned}$$
(6)

where \(d_0\) is a threshold value and \(d(u,t) = \left| \mathcal {I}(\textbf{x}(u,t)) - \langle \mathcal {I}\rangle ^t\right| \) is the difference function which computes the (absolute value of the) difference between the value \(\mathcal {I}(\textbf{x}(u,t))\) and the average value \(\langle \mathcal {I}\rangle ^t :=\frac{1}{|V(t)|}\int _{ V(t)}\mathcal {I}\,d \textbf{x}\) in the region V(t), which denotes the set of points \(\textbf{x}\in \Omega \) visited by the curve since the beginning of the evolution, together with points inside the initial curve (i.e. V(0) is the interior of the initial curve). If the value of the image \(\mathcal {I}\) at a point \(\textbf{x}(u,t)\) of the curve is similar to the points \(\textbf{x}\in \Omega \) already segmented by the curve, then the difference \(d(u,t)\) is small and the point \(\textbf{x}(u,t)\) should move, which is ensured by the homogeneity \(H(u,t)=1\) for small \(d(u,t)\). On the contrary, if the difference \(d(u,t)\) is greater than the threshold \(d_0\), the definitions (5), (6) give zero expansion of the curve at the point \(\textbf{x}(u,t)\).

A natural definition of the blowing function could simply be \(B = H\). However, then the expanding term could sometimes drive the points of the curve across the edges. This is the reason why we multiply the homogeneity function by the edge detector function E (which is close to zero in the neighbourhood of edges).
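The evaluation of the blowing function at the curve points can be sketched as follows (a minimal sketch; the representation of the curve by pixel indices and of the region V(t) by a boolean mask are our assumptions):

```python
import numpy as np

def blowing_values(curve_ij, image, E, visited_mask, d0=0.5):
    """Blowing function B = H * E at the curve points, Eqs. (5)-(6).

    `curve_ij` holds the pixel indices (i, j) of the curve points,
    `visited_mask` marks the region V(t) swept by the curve so far,
    and `E` is the smoothed edge detector on the pixel grid.
    """
    mean_visited = np.nanmean(image[visited_mask])   # <I>^t over V(t)
    vals = image[curve_ij[:, 0], curve_ij[:, 1]]     # I(x(u, t))
    d = np.abs(vals - mean_visited)                  # difference function
    H = (d < d0).astype(float)                       # homogeneity, Eq. (6)
    return H * E[curve_ij[:, 0], curve_ij[:, 1]]     # blowing, Eq. (5)
```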

2.5 Normal and tangential speed

The basic evolution model (1) moves the points on the curve both in the normal and tangential directions. From a continuous (analytical) point of view, only the normal component \(\beta =\textbf{v}\cdot \textbf{N}\) of the velocity (2) has an effect on the shape of the curve. Therefore, we could use the evolution model \(\partial _t \textbf{x} = \beta \textbf{N}\). However, in the discrete setting, it is convenient to enrich this model with a suitable tangential speed \(\alpha \) which can be designed to control the distribution of the discretisation points along the curve. This becomes crucial in numerical computations because inappropriate positions of the points can lead to unacceptable errors. Therefore, instead of the basic model (1), we use the model

$$\begin{aligned} \partial _t \textbf{x} = \beta \textbf{N} + \alpha \textbf{T}, \end{aligned}$$
(7)

where \(\beta =\textbf{v}\cdot \textbf{N}\) is the normal speed calculated from (2) as

$$\begin{aligned} \beta = \beta _e + \delta k,\qquad \text {where }\qquad \beta _e = \big (1-\lambda \big )\,B - \lambda \nabla E\cdot \textbf{N}\end{aligned}$$
(8)

is the so-called external normal speed.

The tangential speed \(\alpha \) is constructed to achieve a uniform distribution of the points in the discrete setting. We follow the approach from Mikula and Ševčovič (2004). The length of the curve segment between the points \(\textbf{x}(\hat{u},t)\) and \(\textbf{x}(\hat{u}+\Delta u,t)\) is \(\int _{\hat{u}}^{\hat{u}+\Delta u}g\,du\), where \(g:=\Vert \partial _u \textbf{x} \Vert \). We want this length to be asymptotically (for \(t\rightarrow \infty \)) the same for every \(\hat{u}\) (each pair of points), since this corresponds to a uniform distribution of grid points on the discretised curve. However, the total length of the curve \(L=\int _0^1g\,du\) changes during the evolution, so we have to study the relative length of the segment (with respect to the total length). For an asymptotically uniform redistribution of points, the relative length of the segment approaches a constant

$$\begin{aligned} \frac{\int _{\hat{u}}^{\hat{u}+\Delta u}g\,du}{L} \rightarrow \text {const.} \qquad \text {for every } \hat{u}. \end{aligned}$$

This is satisfied if \(\frac{g}{L} \rightarrow c\), where \(c>0\) is a constant. Since \(\int _0^1\frac{g}{L}du = 1\), we have to set \(c=1\) and the condition is \(\lim \limits _{t\rightarrow \infty }\frac{g}{L} = 1.\) This is fulfilled if the ratio \(\frac{g}{L}\) obeys a relaxation equation

$$\begin{aligned} \partial _t\left( \frac{g}{L}\right) = \left( 1-\frac{g}{L}\right) \omega , \end{aligned}$$
(9)

where the relaxation parameter \(\omega >0\) controls the speed of relaxation. On the left-hand side, we need formulas for the time derivatives of g and L. The first one is

$$\begin{aligned} \begin{aligned} \partial _t g&= \partial _t \Vert \partial _u\textbf{x}\Vert \overset{1.}{=} \frac{\partial _u\textbf{x}}{\Vert \partial _u\textbf{x}\Vert }\cdot \partial _t \partial _u\textbf{x} \overset{2.}{=} \partial _s\textbf{x}\cdot \partial _u(\partial _t\textbf{x}) \overset{3.}{=}\ \textbf{T}\cdot \partial _u(\beta \textbf{N} + \alpha \textbf{T})\\&\overset{2.}{=}g\textbf{T}\cdot \partial _s(\beta \textbf{N} + \alpha \textbf{T}) = g\textbf{T}\cdot \big ((\partial _s\beta )\textbf{N} + \beta \partial _s\textbf{N} + (\partial _s\alpha )\textbf{T} + \alpha \partial _s\textbf{T}\big )\overset{4.}{=}\ -gk\beta + g\partial _s\alpha , \end{aligned} \end{aligned}$$
(10)

where we used:

  1.

    Derivative of the norm \(\partial _t\Vert \textbf{w}\Vert = \partial _t \sqrt{\textbf{w}\cdot \textbf{w}} = \frac{\textbf{w}}{\Vert \textbf{w}\Vert }\cdot \partial _t \textbf{w}\).

  2.

    Chain rule \(\frac{\partial }{\partial u}=\frac{\partial s}{\partial u} \frac{\partial }{\partial s}= \Vert \partial _u\textbf{x}\Vert \partial _s = g\partial _s\) and \(\partial _t \partial _u=\partial _u \partial _t\).

  3.

    Equation (7) and the definition of the unit tangent vector \(\textbf{T}=\partial _s\textbf{x}\).

  4.

    Frenet–Serret formulas \(\partial _s\textbf{T}=k\textbf{N}\), \(\partial _s\textbf{N}=-k\textbf{T}\) and orthonormality of the basis \(\textbf{T}\), \(\textbf{N}\) (i.e. \(\textbf{T}\cdot \textbf{T}=1\), \(\textbf{N}\cdot \textbf{N}=1\) and \(\textbf{T}\cdot \textbf{N}=0)\).

Integrating (10), we obtain the equation for the derivative of the curve length

$$\begin{aligned} \partial _t L = \int _0^1 \partial _t g\,du = -\int _0^1 gk\beta \,du + \int _0^1 g\partial _s\alpha \,du = -\int _0^L k\beta \,ds = - L \langle k\beta \rangle , \end{aligned}$$
(11)

where \(\langle k\beta \rangle :=\frac{1}{L}\int _0^L k\beta \,ds\) denotes the average value of \(k\beta \) over the curve. In the above calculation we used \(ds=\partial _u s\,du = g\,du\) and \(\int _0^1\,g\partial _s\alpha \,du = \int _0^1 \partial _u\alpha \,du = \alpha (1,t)-\alpha (0,t)=0\), which is provided by periodic boundary conditions \(\alpha (1,t)=\alpha (0,t)\).

The application of the quotient rule in (9) and the use of formulas (10), (11) give

$$\begin{aligned} \partial _t\left( \frac{g}{L}\right) = \frac{(\partial _t g) L - g\,\partial _t L}{L^2} = \frac{(-gk\beta + g\partial _s\alpha )L + gL \langle k\beta \rangle }{L^2} = \left( 1-\frac{g}{L}\right) \omega . \end{aligned}$$

After multiplication by \(\frac{L}{g}\) and rearrangement, we obtain the equation for the tangential speed \(\alpha \)

$$\begin{aligned} \partial _s\alpha = k\beta - \langle k\beta \rangle + \left( \frac{L}{g}-1\right) \omega . \end{aligned}$$
(12)

To ensure a unique solution, we can simply fix the value of \(\alpha \) in one point (e.g. \(\alpha (0,t)=0\)). However, this can lead to a larger tangential motion than is needed for uniform redistribution. Therefore, to minimise the tangential motion, we ensure uniqueness by requiring zero average tangential motion \(\langle \alpha \rangle =\frac{1}{L}\int _0^L\alpha \,ds=0\) for each \(t\in [0,t_f]\), see also Ambroz et al. (2019).
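In the discrete setting, Eq. (12) can be integrated along the curve segment by segment; a sketch of this computation (anticipating the discretisation of Sect. 3, with our own variable names and a straightforward cumulative-sum integration) is:

```python
import numpy as np

def tangential_speed(seg_len, k, beta, omega=10.0):
    """Discrete tangential speed alpha from Eq. (12), a sketch.

    `seg_len[i]` is the length of the curve segment ending at point i,
    `k` and `beta` the curvature and normal speed there. Integrating
    (12) over segment i gives the increment
    seg_len[i]*(k*beta - <k beta>) + omega*(L/n - seg_len[i]);
    the length-weighted average of alpha is subtracted at the end so
    that <alpha> = 0.
    """
    n = len(seg_len)
    L = seg_len.sum()
    avg_kb = np.sum(seg_len * k * beta) / L              # <k beta>
    incr = seg_len * (k * beta - avg_kb) + omega * (L / n - seg_len)
    alpha = np.cumsum(incr)                              # integrate along curve
    return alpha - np.sum(seg_len * alpha) / L           # enforce <alpha> = 0
```

For a uniformly discretised curve with constant curvature and normal speed, both terms of the increment vanish and no tangential motion is generated, as expected.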

2.6 Curvature regularisation

The weight \(\delta (u,t)\) of the curvature regularisation term in (2) and (8) can be simply a constant \(\delta (u,t)=\delta _k\) (usually a small number, e.g. 0.01). However, it is convenient to use a more sophisticated setting that considers the phase of the evolution and the relative speed of the point \(\textbf{x}(u,t)\)

$$\begin{aligned} \delta (u,t)= {\left\{ \begin{array}{ll} \delta _k \delta _\text {loc}(u,t), &{}\lambda <1,\quad (\text {Expansion phase}) \\ 0, &{}\lambda =1,\quad (\text {Attraction phase}) \end{array}\right. } \end{aligned}$$
(13)

where the local weight \(\delta _\text {loc}(u,t)\) is defined as follows (Fig. 6):

$$\begin{aligned} \delta _\text {loc} = 3w^2-2w^3, \qquad \text {where}\qquad w(u,t) = \frac{\beta _e(u,t)}{\max \nolimits _{u\in [0,1]}\beta _e(u,t)}. \end{aligned}$$

Thus, in the expansion phase, the proposed weight \(\delta (u,t)\) is a function of the relative external normal speed \(w(u,t)\) of the point \(\textbf{x}(u,t)\). For slow points, we apply a small regularisation weight, and for fast points, we set a high regularisation. In the attraction phase, we turn off the curvature regularisation, which enables sharp corners of the final segmentation curve.
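The local weight can be computed directly from the external normal speeds of the curve points; a minimal sketch:

```python
import numpy as np

def local_weight(beta_e):
    """Local regularisation weight delta_loc = 3w^2 - 2w^3, where w is
    the external normal speed relative to its maximum over the curve
    (assumed nonnegative in this sketch)."""
    w = beta_e / np.max(beta_e)
    return 3.0 * w**2 - 2.0 * w**3
```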

2.7 Multichannel segmentation

Fig. 6

Local weight

Fig. 7

A white wall with three plastic plates

Fig. 8

Left column: Bitmap images \(\mathcal {I}^l\), \(l=1,\ldots ,5\) generated from the channels R, G, B, I, D of the point cloud. Right column: corresponding edge detector functions \(g^l\), \(l=1,\ldots ,5\)

So far, we have only discussed single-channel segmentation. Now, we propose a technique to use information from multiple channels and demonstrate it on a simple but illustrative example in Fig. 7. There is a white wall with three plastic plates: blue, red, and white. The holes (NaN values, black) in Fig. 7b correspond to parts of the planar point cloud in which there are no points within the threshold distance from the fitting plane. Let us denote the images corresponding to the channels R, G, B, I and D as \(\mathcal {I}^l\), \(l=1,\ldots ,5\), respectively, see the left column in Fig. 8.

First, we will discuss the construction of the smoothed edge detector function E and then the computation of the blowing function B. In the right column of Fig. 8, we can see the edge detector functions \(g^l(\textbf{x})\) corresponding to the images \(\mathcal {I}^l\) computed using (3). In practice, we can set a different sensitivity parameter \(\mu ^l\) for each channel. Notice that some edges are discovered in one edge detector \(g^l(\textbf{x})\) but not in another, e.g. the red plate (with \((\text {R,G,B})\approx (1,0,0)\)) is ‘invisible’ for the R channel (because the wall is white, i.e. \((\text {R,G,B})\approx (1,1,1)\), see the first row of Fig. 8), but it is clearly distinguished by the B channel (and vice versa for the blue plate). The intensity edge detector (fourth row in Fig. 8) revealed multiple edges that are not seen in the colour (R, G, B) channels: the edge of the white plate, electrical outlets, a light switch, and electrical box covers. These objects have different reflectivity than the wall, and this is captured by the intensity channel, which is related to the strength of the reflected laser signal. The electrical outlets, light switch and box covers were also clearly detected by the D channel, see the last row of Fig. 8.

A natural idea is to use the data from all available channels R, G, B, I, D (sometimes the colour data might be missing) to construct one edge detector. Therefore, we aggregate the edge detector functions \(g^l(\textbf{x})\), \(l=1,\ldots ,5\) into a single edge detector function

$$\begin{aligned} g(\textbf{x}) = \prod _{l=1}^{5}g^l(\textbf{x}), \end{aligned}$$
(14)

which contains aggregated information about all edges in all channels, see Fig. 9. Indeed, if at a point \(\textbf{x}\) any (even one) of the edge detectors has a value \(g^l(\textbf{x})\) close to zero, then the value \(g(\textbf{x})\) will be close to zero. Moreover, if multiple channels have values close to zero at a point \(\textbf{x}\), then the resulting value \(g(\textbf{x})\) will be much closer to zero. This approach is also convenient because, if multiple edge detectors agree on some mild edge, the resulting aggregated edge becomes more distinct. Finally, we smooth the edge detector function (14) using (4) to obtain the final smoothed aggregated edge detector function \(E(\textbf{x})\) and use it in the curve segmentation.
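The construction of the smoothed aggregated edge detector can be sketched as follows (a minimal sketch combining (3), (14) and (4); the NaN handling of Sect. 2.3 is omitted for brevity, and the function name and parameters are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def aggregated_edge_detector(channel_images, mus, sigma=0.5):
    """Aggregate per-channel edge detectors by the product (14) and
    smooth the result with a Gaussian kernel, Eq. (4)."""
    g = np.ones_like(channel_images[0])
    for I, mu in zip(channel_images, mus):
        gy, gx = np.gradient(I)
        g *= 1.0 / (1.0 + mu * (gx**2 + gy**2))   # g^l, fuzzy product
    return gaussian_filter(g, sigma)              # smoothed detector E
```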

Since the range of the edge detector function (3) is (0, 1], the aggregation (14) can be interpreted as fuzzy logic aggregation using the product aggregation operator \(A:[0,1]^n\rightarrow [0,1]\) defined as \(A(x_1,\ldots ,x_n)=\prod _{i=1}^{n}x_i\), see, e.g. van Krieken et al. (2022). Fuzzy logic in general is a many-valued logic in which the truth values of variables \(x_i\) are real numbers from the interval [0, 1], i.e. there is a concept of partial truth while \(x_i=0\) represents completely false and \(x_i=1\) is completely true. The product is a generalisation of the AND operator from Boolean logic: the conjunction \(x_1 \wedge x_2\) with \(x_1,x_2 \in \{0,1\}\) generalises to \(x_1\cdot x_2\) for \(x_1,x_2 \in [0,1]\).
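The product aggregation (14) is easy to sketch in a few lines. The following is a minimal illustration, assuming edge detectors of the common form \(g = 1/(1+\mu \Vert \nabla \mathcal {I}\Vert ^2)\) (the precise form of (3) is defined earlier in the paper; the function names and test images are ours):

```python
import numpy as np

def edge_detector(image, mu=30.0):
    """Edge detector of the common form g = 1 / (1 + mu * |grad I|^2).
    This is an illustrative assumption for (3). Values lie in (0, 1]:
    close to 0 on edges, equal to 1 on perfectly flat regions."""
    gy, gx = np.gradient(image)                  # central differences
    return 1.0 / (1.0 + mu * (gx**2 + gy**2))

def aggregate(detectors):
    """Fuzzy AND aggregation (14): pointwise product over all channels."""
    return np.prod(np.stack(detectors), axis=0)

# Two synthetic single-channel "images": a step edge visible only in channel 0.
ch0 = np.zeros((8, 8)); ch0[:, 4:] = 1.0        # vertical step edge
ch1 = np.ones((8, 8))                           # flat channel, no edges

g0, g1 = edge_detector(ch0), edge_detector(ch1)
g = aggregate([g0, g1])
# The flat channel contributes g1 == 1 everywhere, so g == g0:
# an edge detected in any single channel survives the aggregation.
```

The product behaves exactly as described in the text: a value near zero in any one channel forces the aggregated value near zero, while flat channels leave the result unchanged.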

Now, we shall briefly discuss the aggregated blowing function B. We compute it using (5) as in the single-channel case. However, now E is the smoothed aggregated edge detector function described above and H is the aggregated homogeneity function computed as

$$\begin{aligned} H(u,t) = H^1(u,t)\wedge \cdots \wedge H^5(u,t), \end{aligned}$$
(15)

where \(H^l(u,t)\in \{0,1\}\) denotes the homogeneity function computed using (6) for the image \(\mathcal {I}^l\). Equation (15) guarantees that the value \(B(u,t)\) of the blowing function is nonzero only if the difference \(d^l(u,t) = \left| \mathcal {I}^l(\textbf{x}(u,t)) - \langle \mathcal {I}^l\rangle ^t\right| \) is smaller than the threshold \(d_0^l\) for each channel. If \(d^l(u,t)\ge d_0^l\) for some channel, then the blowing function (expansion term) vanishes at the point \(\textbf{x}(u,t)\) of the curve. The threshold value \(d_0^l\) can be set differently for each channel. However, in this paper, we use \(d_0^l=0.5\) for all channels.
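The aggregation (15) can be sketched as follows; the per-channel values and region means below are hypothetical, purely for illustration:

```python
def homogeneity(value, region_mean, d0=0.5):
    """Single-channel homogeneity (6): 1 if the image value at the curve
    point is close to the mean over the already segmented region, else 0."""
    return 1.0 if abs(value - region_mean) < d0 else 0.0

def aggregated_homogeneity(values, region_means, d0s):
    """Aggregation (15): logical AND over all channels. The expansion term
    survives only if every channel looks homogeneous."""
    return min(homogeneity(v, m, d0)
               for v, m, d0 in zip(values, region_means, d0s))

# Hypothetical curve point sampled in three channels (e.g. R, G, I):
values       = [0.9, 0.8, 0.2]
region_means = [0.85, 0.75, 0.9]   # means over the segmented region V(t)
d0s          = [0.5, 0.5, 0.5]

H = aggregated_homogeneity(values, region_means, d0s)
# The third channel differs by 0.7 >= 0.5, so H == 0 and the blowing
# function vanishes at this curve point even though the other two agree.
```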

Fig. 9 The smoothed aggregated edge detector function \(E(\textbf{x})\)

Fig. 10 Initialisation mask computed for the wall with three plates. White regions with value \(M(\textbf{x})=1\) are suitable for placing initial curves

2.8 Inserting initial curves

At the beginning of the segmentation, we insert a small circle (or multiple circles) into the image. We use the radius \(r_0=3h\), with h denoting the pixel size (edge length). In order to find suitable candidates for positions of the initial circles, we construct an initialisation mask M, Fig. 10. First, we filter the smoothed edge detector \(E_{\sigma _M} = G_{\sigma _M} * E\), where we use the Gaussian filter with a higher standard deviation (e.g. \(\sigma _M=2\)) and a larger kernel size (e.g. \(7\times 7\)). Then, we threshold the function \(E_{\sigma _M}\) as follows:

$$\begin{aligned} M(\textbf{x}) = {\left\{ \begin{array}{ll} 0, &{} E_{\sigma _M}(\textbf{x}) < 0.7 \quad \text {or}\quad E_{\sigma _M}(\textbf{x}) = \text {NaN},\\ 1, &{}\text {otherwise}. \end{array}\right. } \end{aligned}$$
(16)

The region where \(M(\textbf{x})=1\) will be referred to as the segmentable region \(\Omega _S\subset \Omega \). We note that here we perform a careless filtration, which intentionally spreads NaN values to their neighbourhood (unlike the filtration in Sect. 2.3). The centres of the initial circles are randomly placed in the segmentable region \(\Omega _S\) (more precisely, at random pixel centres in \(\Omega _S\)). The construction of the mask M prevents the insertion of circles on edges, holes (NaN values), and their neighbourhoods.

If all curves stop evolving, we remove the segmented region V(t) of each curve from the segmentable region by setting \(M(\textbf{x})=0\) for all \(\textbf{x}\in V(t)\). Subsequently, we randomly add a new circle (or multiple circles) into the segmentable region. If we insert multiple circles (either at the beginning or during the process of segmentation), it is convenient to remove the interior and possibly some neighbourhood of a circle from the segmentable region after each curve insertion. This prevents the generation of intersecting or too close curves. When a segmentation target is reached, e.g. 99 % of the initial segmentable area is segmented, we end the segmentation process.
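The mask construction and random circle placement described above can be sketched as follows. This is a minimal sketch, not the authors' implementation: it relies on the fact that `scipy.ndimage.gaussian_filter` propagates NaN values into the kernel neighbourhood, which mimics the intentionally "careless" filtration; the threshold 0.7 and \(\sigma _M=2\) are taken from the text:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def initialisation_mask(E, sigma_m=2.0, threshold=0.7):
    """Mask (16): 1 where initial circles may be placed. The careless
    filtering spreads NaN holes to their neighbourhood on purpose, so
    pixels near a hole also become non-segmentable."""
    E_s = gaussian_filter(E, sigma=sigma_m)   # NaNs propagate through the kernel
    M = np.ones_like(E)
    M[np.isnan(E_s) | (E_s < threshold)] = 0.0
    return M

def random_circle_centre(M, rng):
    """Pick a random pixel centre inside the segmentable region M == 1."""
    candidates = np.argwhere(M == 1.0)
    return tuple(candidates[rng.integers(len(candidates))])

E = np.ones((32, 32))        # smoothed edge detector (synthetic)
E[10:14, 10:14] = 0.0        # an edge region
E[20, 20] = np.nan           # a hole in the data

M = initialisation_mask(E)
rng = np.random.default_rng(0)
i, j = random_circle_centre(M, rng)
# The centre never lands on the edge, the hole, or their neighbourhoods.
```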

2.9 Postprocessing: creation of the point cloud segments

When the curve segmentation is finished, the planar point cloud is divided into segments. We take the segmented region \(V(t_f)\) of each segmentation curve and select all points in the point cloud that lie in \(V(t_f)\). The last segment is a ‘waste after segmentation’ and corresponds to regions of \(\Omega \) which were not visited by any curve.

3 Numerical scheme

In this section, we discuss the numerical discretisation of our method in great detail. For the purpose of the derivation of the numerical scheme, let us rewrite the model (7), (8) using \(\textbf{T}=\partial _s \textbf{x}\), \(\textbf{N}=\partial _s \textbf{x}^\perp \) and the Frenet–Serret formula \(k\textbf{N} = \partial _s\textbf{T} = \partial _s^2\textbf{x}\). We obtain the following equation:

$$\begin{aligned} \partial _t \textbf{x} - \alpha \partial _s \textbf{x} = \delta \partial _s^2\textbf{x} + \beta _e\partial _s \textbf{x}^\perp , \end{aligned}$$
(17)

which is actually a system of two PDEs, since \(\textbf{x}=(x_1,x_2)\) is the unknown position vector. Equation (17) has the form of a so-called intrinsic PDE. The tangential term \(-\alpha \partial _s\textbf{x}\) represents an intrinsic advection along the curve, with \(-\alpha \) being the speed of the advection, and the curvature term \(\delta \partial _s^2\textbf{x}\) can be regarded as an intrinsic diffusion with diffusion coefficient \(\delta \).

Now, we will discretize equation (17). First, we perform the space discretisation which is based on the finite-volume method (Mikula et al. 2021a). Then, we present the time discretisation both by the standard semi-implicit scheme (Ševčovič and Mikula 2001) and semi-implicit scheme with Inflow-Implicit/Outflow-Explicit (IIOE) technique to treat the advection term (Mikula and Ohlberger 2011; Balažovjech et al. 2012; Mikula et al. 2014).

3.1 Space discretisation

The mesh will consist of m grid points \(\textbf{x}_i\), \(i=1,\ldots ,m\), where \(\textbf{x}_i(t)\approx \textbf{x}(u_i,t)\), \(u_i=(i-1)\Delta u\) and \(\Delta u = \frac{1}{m}\). The finite volume \(\mathcal V_i\) corresponding to the point \(\textbf{x}_i\) consists of the two line segments connecting the point \(\textbf{x}_i\) to the midpoints \(\textbf{x}_{i-\frac{1}{2}}:=\frac{\textbf{x}_{i-1}+\textbf{x}_i}{2}\) and \(\textbf{x}_{i+\frac{1}{2}}:=\frac{\textbf{x}_i+\textbf{x}_{i+1}}{2}\), Fig. 11. The size of \(\mathcal V_i\) is \(|\mathcal V_i|=\frac{h_{i-1}+h_i}{2}\), where \(h_i:=\Vert \textbf{x}_{i+1}-\textbf{x}_i\Vert \) is the length of the segment \(\{\textbf{x}_i,\textbf{x}_{i+1}\}\). We integrate equation (17) over the finite volume \(\mathcal V_i\) and obtain

$$\begin{aligned} \partial _t\textbf{x}_i\int _{\textbf{x}_{i-\frac{1}{2}}}^{\textbf{x}_{i+\frac{1}{2}}} \,ds -\alpha _i\int _{\textbf{x}_{i-\frac{1}{2}}}^{\textbf{x}_{i+\frac{1}{2}}} \partial _s\textbf{x} \,ds= \delta _i\int _{\textbf{x}_{i-\frac{1}{2}}}^{\textbf{x}_{i+\frac{1}{2}}}\partial _{s}^{2}\textbf{x}\,ds + \beta _{e,i}\int _{\textbf{x}_{i-\frac{1}{2}}}^{\textbf{x}_{i+\frac{1}{2}}}\partial _s\textbf{x}^\perp \,ds, \end{aligned}$$
Fig. 11 The notation on the curve mesh

where we approximate some quantities by constants on \(\mathcal V_i\), namely \(\partial _t\textbf{x}\approx \partial _t\textbf{x}_i\), \(\alpha \approx \alpha _i\), \(\delta \approx \delta _i\) and \(\beta _e\approx \beta _{e,i}\). After using the Newton–Leibniz formula, we have the following:

$$\begin{aligned} \partial _t\textbf{x}_i|\mathcal V_i| -\alpha _i[\textbf{x} ]_{\textbf{x}_{i-\frac{1}{2}}}^{\textbf{x}_{i+\frac{1}{2}}} = \delta _i[\partial _{s}\textbf{x}]_{\textbf{x}_{i-\frac{1}{2}}}^{\textbf{x}_{i+\frac{1}{2}}} + \beta _{e,i}\left( [\textbf{x}]_{\textbf{x}_{i-\frac{1}{2}}}^{\textbf{x}_{i+\frac{1}{2}}}\right) ^\perp , \end{aligned}$$

which evaluates to

$$\begin{aligned} \partial _t\textbf{x}_i\frac{h_{i-1}+h_i}{2} -\alpha _i \left( \textbf{x}_{i+\frac{1}{2}}-\textbf{x}_{i-\frac{1}{2}}\right) = \delta _i[\partial _{s}\textbf{x}]_{\textbf{x}_{i-\frac{1}{2}}}^{\textbf{x}_{i+\frac{1}{2}}} + \beta _{e,i}\left( \textbf{x}_{i+\frac{1}{2}}-\textbf{x}_{i-\frac{1}{2}}\right) ^\perp . \end{aligned}$$

Finally, we approximate the derivative \(\partial _{s}\textbf{x}\) in the midpoints \(\textbf{x}_{i-\frac{1}{2}}\), \(\textbf{x}_{i+\frac{1}{2}}\) using the finite differences \(\partial _s\textbf{x}_{i-\frac{1}{2}}\approx \frac{\textbf{x}_i-\textbf{x}_{i-1}}{h_{i-1}}\) and \(\partial _s\textbf{x}_{i+\frac{1}{2}}\approx \frac{\textbf{x}_{i+1}-\textbf{x}_i}{h_i}\), respectively. The space discretisation of the PDE (17) reads

$$\begin{aligned} \frac{h_{i-1}+h_i}{2}\partial _t\textbf{x}_i -\alpha _i \left( \frac{\textbf{x}_{i+1}-\textbf{x}_{i-1}}{2}\right)= & {} \delta _i\left( \frac{\textbf{x}_{i+1}-\textbf{x}_i}{h_i} - \frac{\textbf{x}_i-\textbf{x}_{i-1}}{h_{i-1}}\right) \nonumber \\ {}{} & {} + \beta _{e,i}\left( \frac{\textbf{x}_{i+1}-\textbf{x}_{i-1}}{2}\right) ^\perp , \end{aligned}$$
(18)

for each \(i=1,\ldots ,m\), with \(\textbf{x}_{0}:=\textbf{x}_{m}\) and \(\textbf{x}_{m+1}:=\textbf{x}_{1}\).
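The discrete geometric quantities entering (18) can be computed straightforwardly with periodic indexing; a minimal sketch (function names are ours), verified on a uniformly discretised unit circle:

```python
import numpy as np

def mesh_quantities(x):
    """Segment lengths h_i = ||x_{i+1} - x_i|| and finite-volume sizes
    |V_i| = (h_{i-1} + h_i) / 2 for a closed polygonal curve x of shape
    (m, 2), with periodic indexing x_0 := x_m, x_{m+1} := x_1."""
    h = np.linalg.norm(np.roll(x, -1, axis=0) - x, axis=1)
    V = 0.5 * (np.roll(h, 1) + h)
    return h, V

# Closed curve: m points on the unit circle.
m = 100
u = 2 * np.pi * np.arange(m) / m
x = np.column_stack([np.cos(u), np.sin(u)])
h, V = mesh_quantities(x)
# For a uniform discretisation all h_i are equal, the total length sum(h)
# approximates the circumference 2*pi, and sum(|V_i|) equals sum(h_i).
```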

3.2 Time discretisation by semi-implicit scheme

In this paper, we show two different time discretisations. In this section, we perform the time discretisation using a semi-implicit scheme (Ševčovič and Mikula 2001) and motivate the need for the more sophisticated semi-implicit IIOE scheme (Mikula and Ohlberger 2011; Balažovjech et al. 2012; Mikula et al. 2014), which is presented in Sect. 3.3 and used in our implementation. The motivation is given from a different point of view than in Mikula and Ohlberger (2011); Mikula et al. (2014); Balažovjech et al. (2012).

We discretize the time interval \([0,t_f]\) using discrete times \(t^n\), \(n=0,\ldots ,n_\text {max}\) with \(t^0=0\) and \(t^{n_\text {max}}=t_f\). Let us denote the length of the time step by \(\tau ^n = t^{n+1} - t^n\). The discretisation is not necessarily uniform in time, and we usually choose a smaller \(\tau ^n\) in the attraction phase. We approximate the time derivative in (18) by the forward finite difference \(\partial _t\textbf{x}_i\approx \frac{\textbf{x}_i^{n+1}-\textbf{x}_i^n}{\tau ^n}\), where \(\textbf{x}_i^n\approx \textbf{x}_i(t^n)\). Then, we take the coefficients \(h_i,\alpha _i,\delta _i,\beta _{e,i}\) from the n-th (old) time step. The advection and diffusion terms are approximated implicitly, that is, \(\textbf{x}_i\) is taken from the \((n+1)\)-th (new) time step, while the expansion term is approximated explicitly, with \(\textbf{x}_i\) taken from the old time step. We obtain the following:

$$\begin{aligned}{} & {} \frac{h_{i-1}^n+h_i^n}{2}\frac{\textbf{x}_i^{n+1}-\textbf{x}_i^n}{\tau ^n} -\alpha _i^n \left( \frac{\textbf{x}_{i+1}^{n+1}-\textbf{x}_{i-1}^{n+1}}{2}\right) \\{} & {} \qquad = \delta _i^n\left( \frac{\textbf{x}_{i+1}^{n+1}-\textbf{x}_i^{n+1}}{h_i^n} - \frac{\textbf{x}_i^{n+1}-\textbf{x}_{i-1}^{n+1}}{h_{i-1}^n}\right) + \beta _{e,i}^n\left( \frac{\textbf{x}_{i+1}^n-\textbf{x}_{i-1}^n}{2}\right) ^\perp , \end{aligned}$$

which can be rearranged as

$$\begin{aligned} -A_i^n\textbf{x}_{i-1}^{n+1} + B_i^n\textbf{x}^{n+1}_i - C_i^n\textbf{x}_{i+1}^{n+1} = \textbf{D}_i^n,\qquad i=1,\ldots ,m \end{aligned}$$
(19)

for \(n=0,\ldots ,n_\text {max}-1\). We denoted

$$\begin{aligned} \begin{array}{ll} A_i^n:=\frac{\delta _i^n}{h_{i-1}^n}-\frac{\alpha _i^n}{2},\qquad \qquad &{} B_i^n:=\frac{h^n_{i-1}+h^n_i}{2\tau ^n} + \delta _i^n\left( \frac{1}{h^n_i}+\frac{1}{h^n_{i-1}}\right) ,\\ C_i^n:=\frac{\delta _i^n}{h_i^n}+\frac{\alpha _i^n}{2}, \qquad \qquad &{} \textbf{D}_i^n:=\frac{h^n_{i-1}+h^n_i}{2\tau ^n}\textbf{x}_i^n+\beta ^n_{e,i}\left( \frac{\textbf{x}^n_{i+1}-\textbf{x}^n_{i-1}}{2}\right) ^\perp .\\ \end{array} \end{aligned}$$
(20)

The formula (19) is a system of m vector equations (2m scalar equations) with unknowns \(\textbf{x}^{n+1}_i\), \(i=1,\ldots ,m\). The matrix form of the system is

$$\begin{aligned} \begin{bmatrix} B_{1}^n &{} -C_{1}^n &{} 0 &{} \dots &{} 0 &{} -A_{1}^n &{} \\ -A_{2}^n &{} B_{2}^n &{} -C_{2}^n &{} 0 &{} \dots &{} 0 &{} \\ 0 &{} -A_{3}^n &{} B_{3}^n &{} -C_{3}^n &{} 0 &{} \\ \vdots &{} &{} &{} \ddots &{} &{} \vdots &{} \\ -C_{m}^n &{} &{} \dots &{} &{} -A_{m}^n &{} B_{m}^n &{} \end{bmatrix} \begin{bmatrix} \textbf{x}_1^{n+1}\\ \\ \vdots \\ \\ \textbf{x}_m^{n+1}\\ \end{bmatrix} = \begin{bmatrix} \textbf{D}_{1}^n\\ \\ \vdots \\ \\ \textbf{D}_{m}^n\\ \end{bmatrix}. \end{aligned}$$
(21)

It is a cyclic tridiagonal system. Such systems can be solved by a modified Thomas algorithm (also known as the tridiagonal matrix algorithm) (Press et al. 2007). If the system matrix is diagonally dominant, the Thomas algorithm is stable. The diagonal dominance in our case reads \(|B_i^n| \ge |A_i^n| + |C_i^n|\), for all \(i=1,\ldots ,m\), that is,

$$\begin{aligned} \frac{h^n_{i-1}+h^n_i}{2\tau ^n} + \delta _i^n\left( \frac{1}{h^n_i}+\frac{1}{h^n_{i-1}}\right) \ge \left| \frac{\delta _i^n}{h_{i-1}^n}-\frac{\alpha _i^n}{2}\right| + \left| \frac{\delta _i^n}{h_i^n}+\frac{\alpha _i^n}{2}\right| , \end{aligned}$$
(22)

where we cancelled the absolute value on the LHS because all quantities are nonnegative. However, we cannot cancel it in the terms on the RHS, because \(\alpha _i^n\in \mathbb {R}\). Therefore, the condition (22) is not always satisfied. We can achieve diagonal dominance with the appropriate choice of the time step length \(\tau ^n\). From (22), we have the following:

$$\begin{aligned} \tau ^n\le \frac{1}{2}\frac{h_{i-1}^n + h_i^n}{\left| \left( \frac{\delta _i^n}{h_{i-1}^n}-\frac{\alpha _i^n}{2}\right) \right| + \left| \left( \frac{\delta _i^n}{h_i^n}+\frac{\alpha _i^n}{2}\right) \right| - \delta _i^n\left( \frac{1}{h^n_i}+\frac{1}{h^n_{i-1}}\right) }, \end{aligned}$$
(23)

which must hold for each \(i=1,\ldots ,m\), see also Balažovjech and Mikula (2011). Therefore, we can evaluate the RHS of (23) for each i and take the minimum, which gives us a suitable time step length \(\tau ^n\). Such computation of \(\tau ^n\) slows down the evolution algorithm and, moreover, if a very small \(\tau ^n\) is obtained, the evolution becomes even slower. However, there is a trick (Mikula et al. 2014) which guarantees stability without any restriction on the time step length.
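For completeness, the modified Thomas algorithm for cyclic tridiagonal systems can be sketched as below. This is a minimal implementation following the Sherman–Morrison correction described in Press et al. (2007); variable names and the random test system are ours, and `a[0]`, `c[-1]` hold the corner entries of the matrix:

```python
import numpy as np

def thomas(a, b, c, d):
    """Standard Thomas algorithm for a (non-cyclic) tridiagonal system:
    a = sub-diagonal (a[0] unused), b = diagonal, c = super-diagonal, d = RHS."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        den = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / den
        dp[i] = (d[i] - a[i] * dp[i - 1]) / den
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def cyclic_thomas(a, b, c, d):
    """Cyclic tridiagonal solver via the Sherman-Morrison correction
    (Press et al. 2007). Corner entries: A[0,-1] = a[0], A[-1,0] = c[-1]."""
    n = len(d)
    gamma = -b[0]
    bb = b.copy()
    bb[0] -= gamma
    bb[-1] -= c[-1] * a[0] / gamma
    y = thomas(a, bb, c, d)          # solve the modified tridiagonal system
    u = np.zeros(n); u[0], u[-1] = gamma, c[-1]
    z = thomas(a, bb, c, u)          # correction solve
    factor = (y[0] + a[0] * y[-1] / gamma) / (1.0 + z[0] + a[0] * z[-1] / gamma)
    return y - factor * z

# Check against a dense solve on a small diagonally dominant cyclic system.
n = 6
rng = np.random.default_rng(1)
a, c = rng.uniform(0.1, 1.0, n), rng.uniform(0.1, 1.0, n)
b = 4.0 + a + c                      # diagonally dominant diagonal
d = rng.uniform(-1.0, 1.0, n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
A[0, -1], A[-1, 0] = a[0], c[-1]     # cyclic corner entries
x = cyclic_thomas(a, b, c, d)
```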

Now, we give a motivation for the trick. Let us consider the case with no tangential speed, \(\alpha _i^n=0\). Then, we can cancel the absolute values also on the RHS of (22) and obtain the condition \(\frac{h^n_{i-1}+h^n_i}{2\tau ^n}\ge 0\) which holds (even strictly) for any \(\tau ^n\), thus for zero tangential speed the system matrix is strictly diagonally dominant. In the case of non-vanishing tangential speed, the idea will be the same, i.e. to obtain the system matrix for which we can cancel all absolute values in diagonal dominance condition and then cancel out all terms except \(\frac{h^n_{i-1}+h^n_i}{2\tau ^n}\).

3.3 Time discretisation by semi-implicit IIOE scheme

In this section, we present the semi-implicit Inflow-Implicit/Outflow-Explicit (IIOE) scheme (Mikula and Ohlberger 2011; Balažovjech et al. 2012; Mikula et al. 2014). The goal is to obtain a diagonally dominant matrix with no restriction on the time step length also in the case with nonzero tangential speed. We have seen that the tangential term causes problems. We will take a term containing \(\alpha _i^n\) implicitly (i.e. it will occur in the system matrix) only if it has a suitable sign; otherwise, we will take it explicitly (move it to the RHS). By a ‘suitable sign’, we mean the sign that will allow us to cancel all absolute values in the diagonal dominance condition.

First, we rearrange the tangential term in (18) as follows:

$$\begin{aligned} \nonumber -\alpha _i \left( \frac{\textbf{x}_{i+1}-\textbf{x}_{i-1}}{2}\right)&= \frac{1}{2}(-\alpha _i)(-\textbf{x}_{i-1})+\frac{1}{2}\alpha _i(-\textbf{x}_{i+1}) + \underbrace{\frac{1}{2}(-\alpha _i)\textbf{x}_i+\frac{1}{2}\alpha _i\textbf{x}_i}_0\\ \nonumber&= \frac{1}{2}(-\alpha _i)(\textbf{x}_i-\textbf{x}_{i-1})+\frac{1}{2}\alpha _i(\textbf{x}_i-\textbf{x}_{i+1})\\&=\frac{1}{2}\left( b_{i-\frac{1}{2}}^\text {in}+b_{i-\frac{1}{2}}^\text {out}\right) (\textbf{x}_i-\textbf{x}_{i-1})+\frac{1}{2}\left( b_{i+\frac{1}{2}}^\text {in}+b_{i+\frac{1}{2}}^\text {out}\right) (\textbf{x}_i-\textbf{x}_{i+1}), \end{aligned}$$
(24)

where we introduced the notations

$$\begin{aligned} b_{i-\frac{1}{2}}^\text {in}&:=\max (-\alpha _i,0)\ge 0,\qquad \quad b_{i-\frac{1}{2}}^\text {out}:=\min (-\alpha _i,0)\le 0,\nonumber \\ b_{i+\frac{1}{2}}^\text {in}&:=\max (\alpha _i,0)\ge 0,\qquad \qquad b_{i+\frac{1}{2}}^\text {out}:=\min (\alpha _i,0)\le 0. \end{aligned}$$
(25)

To clarify the notation: if the advection speed \(-\alpha _i\) is positive, we have inflow into the finite volume \(\mathcal V_i\) at the point \(\textbf{x}_{i-\frac{1}{2}}\) and outflow at \(\textbf{x}_{i+\frac{1}{2}}\); if the advection speed \(-\alpha _i\) is negative, there is outflow at \(\textbf{x}_{i-\frac{1}{2}}\) and inflow at \(\textbf{x}_{i+\frac{1}{2}}\).

As we shall see, if the sign of \(\alpha _i^n\) corresponds to the inflow (either at the point \(\textbf{x}_{i-\frac{1}{2}}\) or \(\textbf{x}_{i+\frac{1}{2}}\)), the corresponding terms have a suitable sign and allow cancellation of absolute values in the diagonal dominance criterion. Therefore, to perform the time discretisation of the tangential term (24), we take the Inflow terms Implicitly and Outflow terms Explicitly (IIOE):

$$\begin{aligned} \frac{b_{i-\frac{1}{2}}^{\text {in},n}}{2}\left( \textbf{x}_i^{n+1}-\textbf{x}_{i-1}^{n+1}\right) + \frac{b_{i-\frac{1}{2}}^{\text {out},n}}{2}\left( \textbf{x}_i^n-\textbf{x}_{i-1}^n\right) + \frac{b_{i+\frac{1}{2}}^{\text {in},n}}{2}\left( \textbf{x}_i^{n+1}-\textbf{x}_{i+1}^{n+1}\right) + \frac{b_{i+\frac{1}{2}}^{\text {out},n}}{2}\left( \textbf{x}_i^n-\textbf{x}_{i+1}^n\right) . \end{aligned}$$

The final system has the form (19), i.e.

$$\begin{aligned} -A_i^n\textbf{x}_{i-1}^{n+1} + B_i^n\textbf{x}^{n+1}_i - C_i^n\textbf{x}_{i+1}^{n+1} = \textbf{D}_i^n,\qquad i=1,\ldots ,m \end{aligned}$$
(26)

for \(n=0,\ldots ,n_\text {max}-1\). The resulting coefficients \(A_i^n\), \(B_i^n\), \(C_i^n\), and \(\textbf{D}_i^n\) are

$$\begin{aligned} A_i^n&:=\frac{\delta _i^n}{h_{i-1}^n}+\frac{b_{i-\frac{1}{2}}^{\text {in},n}}{2},\quad B_i^n:=\frac{h^n_{i-1}+h^n_i}{2\tau ^n} + \delta _i^n\left( \frac{1}{h^n_i} +\frac{1}{h^n_{i-1}}\right) +\frac{b_{i-\frac{1}{2}}^{\text {in},n}}{2}+\frac{b_{i+\frac{1}{2}}^{\text {in},n}}{2},\nonumber \\ C_i^n&:=\frac{\delta _i^n}{h_i^n}+\frac{b_{i+\frac{1}{2}}^{\text {in},n}}{2},\\ \textbf{D}_i^n&:=\frac{h^n_{i-1}+h^n_i}{2\tau ^n}\textbf{x}_i^n +\beta ^n_{e,i}\left( \frac{\textbf{x}^n_{i+1}-\textbf{x}^n_{i-1}}{2}\right) ^\perp \!\!\!+\frac{1}{2}b_{i-\frac{1}{2}}^{\text {out},n}(\textbf{x}_i^n-\textbf{x}_{i-1}^n)+\frac{1}{2}b_{i+\frac{1}{2}}^{\text {out},n}(\textbf{x}_i^n-\textbf{x}_{i+1}^n).\nonumber \end{aligned}$$
(27)

The diagonal dominance criterion \(|B_i^n| \ge |A_i^n| + |C_i^n|\), now reads

$$\begin{aligned} \left| \frac{h^n_{i-1}+h^n_i}{2\tau ^n} + \delta _i^n\left( \frac{1}{h^n_i} +\frac{1}{h^n_{i-1}}\right) +\frac{b_{i-\frac{1}{2}}^{\text {in},n}}{2}+\frac{b_{i+\frac{1}{2}}^{\text {in},n}}{2}\right| \ge \left| \frac{\delta _i^n}{h_{i-1}^n}+\frac{b_{i-\frac{1}{2}}^{\text {in},n}}{2}\right| + \left| \frac{\delta _i^n}{h_i^n}+\frac{b_{i+\frac{1}{2}}^{\text {in},n}}{2}\right| . \end{aligned}$$

All terms are nonnegative, and therefore, we can cancel all absolute values and the condition reduces to \(\frac{h^n_{i-1}+h^n_i}{2\tau ^n}\ge 0\). This holds for any \(\tau ^n\) and the solution of the system (26) by the modified Thomas algorithm is guaranteed.
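The assembly of the IIOE coefficients (27) and the claimed unconditional diagonal dominance can be sketched as follows. This is a minimal sketch with synthetic data; the rotation convention chosen for \(\textbf{v}^\perp \) (\((v_1,v_2)^\perp =(-v_2,v_1)\)) is one possible choice and the function names are ours:

```python
import numpy as np

def iioe_coefficients(x, h, alpha, delta, beta_e, tau):
    """Assemble A_i, B_i, C_i and the RHS D_i of the IIOE scheme (27) for
    a closed curve x of shape (m, 2); all index arithmetic is periodic."""
    hm = np.roll(h, 1)                       # h_{i-1}
    b_in_m  = np.maximum(-alpha, 0.0)        # b^in_{i-1/2}  (25)
    b_out_m = np.minimum(-alpha, 0.0)        # b^out_{i-1/2}
    b_in_p  = np.maximum(alpha, 0.0)         # b^in_{i+1/2}
    b_out_p = np.minimum(alpha, 0.0)         # b^out_{i+1/2}

    A = delta / hm + 0.5 * b_in_m
    C = delta / h + 0.5 * b_in_p
    B = (hm + h) / (2 * tau) + delta * (1 / h + 1 / hm) + 0.5 * (b_in_m + b_in_p)

    xp, xm = np.roll(x, -1, axis=0), np.roll(x, 1, axis=0)   # x_{i+1}, x_{i-1}
    half = (xp - xm) / 2
    perp = np.column_stack([-half[:, 1], half[:, 0]])        # one perp convention
    D = ((hm + h) / (2 * tau))[:, None] * x + beta_e[:, None] * perp \
        + 0.5 * b_out_m[:, None] * (x - xm) + 0.5 * b_out_p[:, None] * (x - xp)
    return A, B, C, D

m = 50
rng = np.random.default_rng(2)
u = 2 * np.pi * np.arange(m) / m
x = np.column_stack([np.cos(u), np.sin(u)])
h = np.linalg.norm(np.roll(x, -1, axis=0) - x, axis=1)
alpha = rng.uniform(-5, 5, m)                 # arbitrary tangential speeds
A, B, C, D = iioe_coefficients(x, h, alpha, delta=0.01 * np.ones(m),
                               beta_e=rng.uniform(-1, 1, m), tau=1e6)
# Even with a huge time step, B_i - A_i - C_i = (h_{i-1}+h_i)/(2 tau) > 0
# for every i, so the system matrix is always diagonally dominant.
```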

Let us briefly discuss the stopping criterion for evolution. If the segmentation is finished and the curve does not evolve, we should break the time loop even before the final \(n_\text {max}\)-th time step is computed by (26). If a large percentage (e.g. 99 %) of the points do not move, we switch the evolution phase from expansion to attraction (setting \(\lambda ^n=1\)) and perform 20 evolution steps in this phase. By not moving points, we naturally mean points which move very slowly (e.g. \(\Vert \textbf{x}^{n+1}_i-\textbf{x}^{n}_i\Vert < 0.05 h\)) but also the points jumping back and forth between two neighbouring pixels, which happens very often at an edge.

3.4 Discretisation of the normal and tangential speed

The normal speed is given by (8) and we approximate it at the point \(\textbf{x}_i^n\) as follows:

$$\begin{aligned} \beta _i^n = \beta _{e,i}^n + \delta _i^n k_i^n,\qquad \text {with }\qquad \beta _{e,i}^n = \big (1-\lambda ^n\big )\,B_i^n - \lambda ^n \nabla E(\textbf{x}_i^n)\cdot \textbf{N}_i^n. \end{aligned}$$
(28)

The evaluation of \(\delta _i^n\) by (13) is straightforward. The discretisation of the signed curvature \(k_i^n\) will be discussed in Sect. 3.5. The blowing function is calculated as \(B_i^n=H_i^n E(\textbf{x}_i^n)\), with the difference \(d_i^n\) in the calculation of \(H_i^n\) (using (6)) approximated as \(d_i^n = \left| \mathcal {I}(\textbf{x}_i^n) - \frac{1}{|V^n|}\sum _{p \in V^n}{\mathcal {I}_p} \right| \) where \(V^n\) is the set of pixels p that approximates the region \(V(t^n)\), \(|V^n|\) denotes the number of pixels in \(V^n\) and \(\mathcal {I}_p\) denotes the value of image \(\mathcal {I}\) in the pixel p. The edge detector function E is computed using (3), (4) in the bitmap image \(\mathcal I\) and the gradients \(\nabla \mathcal I\) and \(\nabla E\) are approximated by central differences. The positively oriented normal in (28) is calculated as \(\textbf{N}_i^n=\left( \frac{\textbf{x}_{i+1}^n-\textbf{x}_{i-1}^n}{\Vert \textbf{x}_{i+1}^n-\textbf{x}_{i-1}^n \Vert }\right) ^\perp \).

The tangential speed is the solution of equation (12), and we discretize it as follows. The derivative on the LHS is approximated by the finite difference \(\partial _s\alpha \approx \frac{\alpha _{i+1}^{n}-\alpha _i^n}{h_i^n}\). We regard it as a central difference at the point \(\textbf{x}_{i+\frac{1}{2}}^n\) and, therefore, approximate the RHS at that point. The average (integral) term is approximated by the Riemann sum \(\langle k\beta \rangle (t^n)\approx \frac{1}{L^n}\sum _{j=1}^mk^n_{j+\frac{1}{2}}\beta ^n_{j+\frac{1}{2}}h_j^n\) with the approximate curve length computed as \(L^n=\sum _{i=1}^m h_i^n\) and the normal speed at \(\textbf{x}_{i+\frac{1}{2}}^n\) evaluated as \(\beta ^n_{i+\frac{1}{2}}=\frac{1}{2}(\beta _i^n+\beta _{i+1}^n)\). The approximation of the signed curvature \(k^n_{i+\frac{1}{2}}\) will be explained in the next section. The norm \(g=\Vert \partial _u \textbf{x}\Vert \) is approximated by \(\frac{\Vert \textbf{x}_{i+1}^n-\textbf{x}_{i}^n \Vert }{\Delta u}=h_i^n m\). Putting it all together, we have

$$\begin{aligned} \frac{\alpha _{i+1}^{n}-\alpha _i^n}{h_i^n}=k^n_{i+\frac{1}{2}}\beta ^n_{i+\frac{1}{2}}-\frac{1}{L^n}\sum _{j=1}^m k^n_{j+\frac{1}{2}}\beta ^n_{j+\frac{1}{2}}h_j^n+\left( \frac{L^n/m}{h_i^n}-1\right) \omega , \end{aligned}$$
(29)

for \(i=1,\ldots ,m\), which is a bidiagonal system for the unknowns \(\alpha _i^n\), \(i=1,\ldots ,m\). The system can be easily solved by setting \(\alpha _1^n=0\) and successively computing \(\alpha _2^n,\ldots ,\alpha _m^n\) by

$$\begin{aligned} \alpha _{i+1}^{n}=\alpha _i^n+h_i^n\left( k^n_{i+\frac{1}{2}}\beta ^n_{i+\frac{1}{2}}-\frac{1}{L^n}\sum _{j=1}^m k^n_{j+\frac{1}{2}}\beta ^n_{j+\frac{1}{2}}h_j^n+\left( \frac{L^n/m}{h_i^n}-1\right) \omega \right) . \end{aligned}$$
(30)

Finally, we redefine the tangential speeds by subtracting their average \(\alpha _i^n\leftarrow \alpha _i^n-\frac{1}{m}\sum _{j=1}^m\alpha _j^n\) to ensure zero average tangential motion.
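The marching computation (30) followed by the mean subtraction can be sketched directly; the data below are hypothetical mid-point curvatures and normal speeds on a non-uniform closed mesh:

```python
import numpy as np

def tangential_speeds(h, k_mid, beta_mid, omega=100.0):
    """Tangential speeds alpha_i from the recurrence (30): set alpha_1 = 0,
    march once around the closed curve, then subtract the mean so that the
    average tangential motion is zero."""
    m = len(h)
    L = h.sum()                                   # approximate curve length L^n
    mean_kb = (k_mid * beta_mid * h).sum() / L    # <k beta> by a Riemann sum
    alpha = np.zeros(m)
    for i in range(m - 1):
        rhs = k_mid[i] * beta_mid[i] - mean_kb + ((L / m) / h[i] - 1.0) * omega
        alpha[i + 1] = alpha[i] + h[i] * rhs
    return alpha - alpha.mean()

# Hypothetical data on a closed curve with m = 40 non-uniform segments.
m = 40
rng = np.random.default_rng(3)
h = rng.uniform(0.5, 1.5, m)
k_mid = rng.uniform(-1, 1, m)
beta_mid = rng.uniform(-1, 1, m)
alpha = tangential_speeds(h, k_mid, beta_mid)
# The redistribution term ((L/m)/h_i - 1) * omega pushes short segments to
# grow and long segments to shrink; the mean of alpha is zero by design.
```

Note that for an already uniform mesh with constant \(k\beta \) the RHS of (30) vanishes and all tangential speeds are zero, as expected.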

3.5 Discretisation of the signed curvature

In the discretisation of the normal and tangential speeds, we need a formula for the approximate curvature at the grid points \(\textbf{x}_{i}^n\) and at the midpoints \(\textbf{x}_{i+\frac{1}{2}}^n\) of the edges. We will discretize the formula

$$\begin{aligned} k=\frac{\partial _u^2\textbf{x}\cdot \textbf{N}}{\Vert \partial _u \textbf{x} \Vert ^2}. \end{aligned}$$
(31)

It can be derived directly from the definition. Using the notation \(g=\Vert \partial _u\textbf{x}\Vert \), we have

$$\begin{aligned} k=\partial _s \textbf{T}\cdot \textbf{N} \overset{1.}{=} \frac{1}{g}\partial _u \frac{\partial _u\textbf{x}}{g} \cdot \textbf{N}&\overset{2.}{=}\ \frac{1}{g} \frac{(\partial _u^2\textbf{x})g - \partial _u \textbf{x}\left( \frac{\partial _u\textbf{x}}{g}\cdot \partial _u^2\textbf{x}\right) }{g^2} \cdot \textbf{N}\\&= \frac{\partial _u^2\textbf{x} - (\partial _u^2 \textbf{x}\cdot \textbf{T}) \textbf{T}}{g^2} \cdot \textbf{N} \overset{3.}{=}\frac{\partial _u^2\textbf{x} \cdot \textbf{N}}{g^2}, \end{aligned}$$

where we used:

  1. Definition of the unit tangent vector \(\textbf{T}=\frac{\partial _u\textbf{x}}{g}\) and the chain rule \(\partial _s = \frac{1}{g}\partial _u\) (see 2. under (10)).

  2. Quotient rule and \(\partial _u\Vert \textbf{w}\Vert = \partial _u \sqrt{\textbf{w}\cdot \textbf{w}} = \frac{\textbf{w}}{\Vert \textbf{w}\Vert }\cdot \partial _u \textbf{w}\) with \(\textbf{w}=\partial _u\textbf{x}\).

  3. Since the basis \(\{\textbf{T},\textbf{N}\}\) is orthonormal, we have \(\textbf{w} - (\textbf{w}\cdot \textbf{T})\textbf{T} = (\textbf{w}\cdot \textbf{N}) \textbf{N}\), where \(\textbf{w}=\partial _u^2\textbf{x}\).

To approximate the signed curvature at the grid point \(\textbf{x}_{i}^n\), we can use the central differences \(\partial _u \textbf{x}\approx \frac{\textbf{x}_{i+1}^n-\textbf{x}_{i-1}^n}{2 \Delta u}\) and \(\partial _u^2\textbf{x}\approx \frac{\textbf{x}_{i-1}^n-2\textbf{x}_i^n+\textbf{x}_{i+1}^n}{\Delta u^2}\) (see Fig. 11) which leads to the discretisation

$$\begin{aligned} k_i^n = \frac{4(\textbf{x}_{i-1}^n-2\textbf{x}_i^n+\textbf{x}_{i+1}^n)\cdot \textbf{N}_i^n}{\Vert \textbf{x}_{i+1}^n-\textbf{x}_{i-1}^n\Vert ^2}. \end{aligned}$$
(32)

At the midpoint \(\textbf{x}_{i+\frac{1}{2}}^n\) of the edge, we use the central differences \(\partial _u \textbf{x}\approx \frac{\textbf{x}_{i+1}^n-\textbf{x}_i^n}{\Delta u}\), \(\partial _u^2\textbf{x} \approx \frac{\textbf{x}_{i+\frac{3}{2}}-2\textbf{x}_{i+\frac{1}{2}}+\textbf{x}_{i-\frac{1}{2}}}{\Delta u^2}\) (see Fig. 11) and the normal \(\textbf{N}_{i+\frac{1}{2}}^n=\left( \frac{\textbf{x}_{i+1}^n-\textbf{x}_i^n}{h_i^n}\right) ^\perp \), which gives

$$\begin{aligned} k_{i+\frac{1}{2}}^n=\frac{\textbf{x}_{i-1}^n-\textbf{x}_{i}^n-\textbf{x}_{i+1}^n+\textbf{x}_{i+2}^n}{2(h_i^n)^3}\cdot (\textbf{x}_{i+1}^n-\textbf{x}_{i}^n)^\perp . \end{aligned}$$
(33)
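Both curvature formulas are easy to validate on a circle of radius R, where the signed curvature should approach 1/R. A minimal sketch (the rotation convention for \(^\perp \) and the function names are ours; with the opposite orientation or convention the sign flips):

```python
import numpy as np

def perp(v):
    """Rotation by +90 degrees: (v1, v2) -> (-v2, v1), one convention for v^perp."""
    return np.column_stack([-v[:, 1], v[:, 0]])

def curvature_at_points(x):
    """Signed curvature (32) at the grid points of a closed curve x (m, 2)."""
    xp, xm = np.roll(x, -1, axis=0), np.roll(x, 1, axis=0)
    chord = np.linalg.norm(xp - xm, axis=1)
    N = perp((xp - xm) / chord[:, None])               # unit normal
    return 4 * np.sum((xm - 2 * x + xp) * N, axis=1) / chord**2

def curvature_at_midpoints(x):
    """Signed curvature (33) at the edge midpoints x_{i+1/2}."""
    xp  = np.roll(x, -1, axis=0)                       # x_{i+1}
    xpp = np.roll(x, -2, axis=0)                       # x_{i+2}
    xm  = np.roll(x, 1, axis=0)                        # x_{i-1}
    h = np.linalg.norm(xp - x, axis=1)
    return np.sum((xm - x - xp + xpp) * perp(xp - x), axis=1) / (2 * h**3)

# Counterclockwise circle of radius R: both formulas should give ~ 1/R.
R, m = 2.0, 200
u = 2 * np.pi * np.arange(m) / m
x = R * np.column_stack([np.cos(u), np.sin(u)])
k = curvature_at_points(x)
k_mid = curvature_at_midpoints(x)
```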

3.6 Topological changes

When dealing with curve evolution using the Lagrangian approach, one has to treat topological changes that occur during the evolution. The evolving curve can split into multiple curves, or several initial evolving curves can merge. The detection and treatment of the topological changes can be highly time-consuming because the natural approach has computational complexity \(O(m^2)\), where m denotes the number of grid points (it includes computation of pairwise distances between all grid points). However, we adopt the method from Mikula and Urbán (2012); Balažovjech et al. (2012); Ambroz et al. (2019), which has complexity only O(m). This makes the Lagrangian method for curve evolution efficient and applicable in complex situations.

The basic idea of Mikula and Urbán (2012); Balažovjech et al. (2012); Ambroz et al. (2019) is to construct a background mesh (we use the mesh described in Sect. 2.1). To detect splitting, we walk through all curve grid points and label the underlying cells by the grid point numbers. If the current i-th point belongs to a cell already labelled by the j-th point and the difference \(i-j\) is large enough (e.g. \(i-j>3\)), the splitting is detected. Since we pass the curve only once, the computational complexity is O(m). The merging detection is similar, we walk through the points of all curves and label underlying cells by the curve number. If the current point belongs to the cell already labelled by a point of another curve, the merging is detected. The most complete explanation of the algorithms (including pseudocodes) is given in Ambroz et al. (2019).
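The single-pass splitting detection can be illustrated with a small sketch. This is a simplification of the algorithm in Ambroz et al. (2019), with our own function names and a synthetic self-touching polyline:

```python
def detect_splitting(points, cell_size, min_gap=3):
    """O(m) splitting detection sketch: walk once along the curve and label
    background-grid cells with the index of the last grid point seen there.
    If point i falls into a cell already labelled by point j with
    i - j > min_gap, two distant parts of the curve occupy the same cell,
    i.e. splitting is detected. (Simplified; see Ambroz et al. 2019.)"""
    labels = {}                              # cell -> last point index seen
    for i, (px, py) in enumerate(points):
        cell = (int(px // cell_size), int(py // cell_size))
        j = labels.get(cell)
        if j is not None and i - j > min_gap:
            return i, j                      # splitting detected here
        labels[cell] = i
    return None

# A self-touching polyline: point 9 returns to the cell of point 1.
curve = [(0.05 + 0.1 * i, 0.05) for i in range(9)] + [(0.16, 0.06)]
hit = detect_splitting(curve, cell_size=0.1)
# Each point is visited once and cell lookup is O(1), hence O(m) overall,
# in contrast to the O(m^2) pairwise-distance approach.
```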

For reliable detection and treatment of topological changes, the mean curve segment length must be maintained close to a specific constant \(h_d\) called the desired curve segment length (desired distance between neighbouring grid points). We use the setting \(h_d=h/2\), where h is the pixel size. In our method, every curve is asymptotically uniformly discretised (see Sect. 2.5), i.e. all segment lengths are close to their mean value. However, the mean value can differ between different curves: it increases for expanding curves and decreases for shrinking ones. Such a difference is undesirable, especially in merging of curves, so we maintain the mean curve segment length close to \(h_d\) for all curves by adding or removing points, after which the asymptotically uniform redistribution is quickly restored due to the tangential velocity \(\alpha \). The detailed algorithm for adding and removing grid points is in Ambroz et al. (2019). We note that small curves with fewer than \(m=15\) grid points are deleted.

The merging detection in this paper is slightly different. If two evolving curves numbered \(c_1\) and \(c_2\) get close to each other, we have to decide if they are segmenting regions with similar properties (curves should merge) or different properties (curves should not merge). First, let us discuss the single-channel case. We compute the average values \(\langle \mathcal {I}\rangle _{c_1}^t\) and \(\langle \mathcal {I}\rangle _{c_2}^t\) of image \(\mathcal {I}\) in the regions \(V_{c_1}(t)\), \(V_{c_2}(t)\) already segmented by the curves \(c_1\) and \(c_2\), respectively, see definitions in the Sect. 2.4. If the difference \(d_{c_1,c_2}(t):=|\langle \mathcal {I}\rangle _{c_1}^t - \langle \mathcal {I}\rangle _{c_2}^t|\) is smaller than the threshold value \(d_0\), merging is detected.

For the correct detection of topological changes, we use the time step length \(\tau ^n<h/\beta _{e,\text {max}}^n\), where \(\beta _{e,\text {max}}^n:=\max _{i\in \{1,\ldots ,m\}}\beta _{e,i}^n\) denotes the maximal external normal speed in the n-th time step. A small time step length is crucial because it prevents the points of the curve from skipping a pixel in one time step (and missing a possible topological change).

In the case of multichannel segmentation, we compute the difference \(d_{c_1,c_2}^l(t)\) for each channel \(\mathcal {I}^l\), \(l=1,\ldots ,5\) and compare it with the corresponding threshold \(d_0^l\). If \(d_{c_1,c_2}^l(t)<d_0^l\) for all l, then the curves \(c_1\) and \(c_2\) segment the regions \(V_{c_1}(t)\), \(V_{c_2}(t)\) with similar properties (in all channels) and the merging is detected. If \(d_{c_1,c_2}^l(t)\ge d_0^l\) for some l, merging is forbidden.
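The multichannel merge decision is a per-channel threshold test combined by a logical AND, in the spirit of (15). A minimal sketch with hypothetical per-channel region means:

```python
def merging_allowed(means_c1, means_c2, d0s):
    """Merge test of Sect. 3.6: two curves may merge only if the mean image
    values of their segmented regions differ by less than the threshold in
    every channel (logical AND over channels, as in (15))."""
    return all(abs(m1 - m2) < d0
               for m1, m2, d0 in zip(means_c1, means_c2, d0s))

# Hypothetical per-channel means (R, G, B, I, D) of three segmented regions:
wall_1 = [0.9, 0.9, 0.9, 0.6, 0.5]
wall_2 = [0.8, 0.9, 0.9, 0.5, 0.5]
plate  = [0.9, 0.1, 0.1, 0.6, 0.5]
d0s = [0.5] * 5

# Two wall regions agree in all channels and may merge; a wall and a red
# plate differ by 0.8 in the G channel, so their merging is forbidden.
```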

3.7 Summary of the entire algorithm

Now, we give a summary of the algorithm for the multichannel curve segmentation of the planar point clouds. The algorithm consists of the following main steps and is fully automatic.

  1. Preprocessing:

     (a) Computation of the images \(\mathcal {I}^l\) (for all channels) representing the input planar point cloud according to Sect. 2.1.

     (b) Computation of the edge detector functions \(g^l\) and the smoothed aggregated edge detector function E as described in Sects. 2.3 and 2.7.

     (c) Construction of the initialisation mask M as explained in Sect. 2.8 and insertion of the initial curve(s).

  2. Evolution: the main evolution cycle through \(n=1,\ldots ,n_\text {max}\).

     (a) Detection and treatment of the topological changes according to Sect. 3.6 and the paper Ambroz et al. (2019):

        (i) splitting (for each curve),

        (ii) deleting small curves,

        (iii) merging.

     (b) Adding and removing the grid points on all curves as described in Ambroz et al. (2019). (This is performed only in the expansion phase and only in every fifth step.)

     (c) For all (unstopped) curves:

        (i) calculation of the necessary geometric quantities as described in Sect. 3; computation of the blowing function B, external normal speed \(\beta _e\), tangential speed \(\alpha \), and curvature regularisation weights \(\delta \) according to Sect. 3.4;

        (ii) construction of the system matrix coefficients (27) and solution of the system (26);

        (iii) check of the stopping criterion discussed at the end of Sect. 3.3 and possible switch to the attraction phase (\(\lambda =1\)) or stopping of the curve.

     (d) Check if all curves are stopped. If so, update the initialisation mask M and check if the segmentation target is reached (see Sect. 2.8). If not, add new curve(s) and continue the segmentation process.

  3. Postprocessing: create the final segments of the point cloud as explained in Sect. 2.9.

4 Numerical experiments

In this section, we test our curve segmentation methodology in several numerical experiments. Data acquisition was carried out with the Trimble TX5 scanner with an a priori distance measurement error of ± 2 mm at 10 m (as specified by the manufacturer). In all experiments, we set the desired curve segment length to h/2 (see Sect. 3.6), where h is the pixel size. We use an adaptive time step length: for each evolving curve, we set \(\tau ^n=0.3\, h/\beta _{e,\text {max}}^n\) in the expansion phase (\(\lambda ^n<1\)) and a smaller \(\tau ^n=0.1\, h/\beta _{e,\text {max}}^n\) in the attraction phase (\(\lambda ^n=1\)). The Gaussian filter variance was set to \(\sigma =0.5\), the global weight of the curvature regularisation to \(\delta _k=0.01\), the tangential velocity relaxation parameter to \(\omega =100\), the threshold to \(d_0=0.5\) for all channels, and the initial weight between the attraction and the expansion term to \(\lambda _0=0.1\).

Fig. 12 Curve segmentation of the wall with three plastic plates, visualisation of selected time steps

In the first experiment, we perform the curve segmentation of the planar point cloud corresponding to the white wall with three plastic plates depicted in Fig. 7. The point cloud consists of approximately 1.2 million points, its resolution is about \(3\times 3\) mm, and the pixel size was set to \(h=12\) mm. The images and edge detectors for the individual channels were discussed in Sect. 2.7 (see Fig. 8); the smoothed aggregated edge detector is visualised in Fig. 9. Several steps of the segmentation are shown in Fig. 12; curves coloured in magenta are stopped. We start the segmentation with a single curve, and when all curves stop, we always insert only one new curve. The evolution up to the time step \(n=500\) is shown in the upper left of Fig. 12. The curve meets the first edges around the time step \(n=400\), and the edge detector correctly prevents the curve from crossing the edge. In the subsequent subfigures, we also colour the already segmented region \(V^n\). In the time step \(n=1700\), we encounter a surprising observation: the curve stops on the white plastic plate, which is not visible in the RGB image but, thanks to the intensity channel (fourth row of Fig. 8), is clearly captured in the aggregated edge detector (Fig. 9). The right part of the curve is enclosing the electrical box cover, and curve splitting is about to occur. In the time step \(n=4000\), we see that the first segmentation curve correctly segmented the wall and excluded the unwanted objects: the plates, but also the electrical elements (e.g. the electrical outlets in the lower part of the wall). Another curve (green) was inserted onto the blue plate. It correctly segmented the plate and did not merge with the previous curve (see \(n=4400\)). After it stopped, a new (black) curve was inserted onto the red plate. The last subfigure of Fig. 12 shows the segmentation just before termination.
After the dark green curve on the left stops, the segmentation target is reached and no more curves are inserted. The final segments of the planar point cloud are visualised in Fig. 13. The magenta points represent the waste after segmentation, i.e. the regions that were not segmented by any of the curves.

Fig. 13 The final segments of the planar point cloud

Fig. 14 A wall in the P.O. Hviezdoslav Theatre in Bratislava

Fig. 15 Left column: bitmap images generated from the channels R, G, B, I, D of the point cloud. Right column: corresponding edge detector functions

Fig. 16 The smoothed aggregated edge detector function \(E(\textbf{x})\)

In the second experiment, we segment a wall from the interior of the P.O. Hviezdoslav Theatre in Bratislava, Fig. 14. The planar point cloud has roughly 0.4 million points with an average resolution of about \(6\times 6\) mm; the pixel size was set to \(h=18\) mm. We can see two pictures on the wall and a sink with facing in the left corner. The hole above the facing corresponds to a mirror (from which the laser beam did not reflect directly back to the scanner). The representative images \(\mathcal {I}^l\) and the corresponding edge detectors \(g^l\) are shown in Fig. 15. We see the edges of the pictures in all channels. Some edges of these pictures are incomplete in the individual channels; however, in the smoothed aggregated edge detector (Fig. 16), we see distinct and complete edges (mainly due to the intensity channel). The edge around the facing is mild in all edge detectors, but since it is present in multiple edge detectors and the aggregation is done by multiplication (14), the aggregated edge is strong in Fig. 16, lower left part. An interesting situation arises in the top left area of the wall, which is strongly illuminated by a lamp. There is a mild edge in all colour channels, but not in precisely the same region. As a result, the aggregated edge in Fig. 16 is mild and diffuse.
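The multiplicative aggregation of the per-channel edge detectors can be sketched as follows. For illustration we assume a Perona-Malik-type detector \(g^l=1/(1+K|\nabla \mathcal {I}^l|^2)\); the exact form used in the paper is defined in Sect. 2.3, and the final Gaussian smoothing of E is omitted here.

```python
import numpy as np

def channel_edge_detector(I, K=1.0):
    """Per-channel detector g = 1 / (1 + K |grad I|^2).

    This Perona-Malik-type form is an assumption for illustration;
    g is close to 1 in flat regions and close to 0 on strong edges.
    """
    gy, gx = np.gradient(np.asarray(I, dtype=float))
    return 1.0 / (1.0 + K * (gx**2 + gy**2))

def aggregated_edge_detector(images, K=1.0):
    """Aggregate channel detectors by multiplication (the idea of (14)).

    The product is small wherever ANY factor is small, so a mild edge
    that appears consistently in several channels is reinforced into
    a strong aggregated edge. In the paper, the result is additionally
    smoothed by a Gaussian filter to obtain E.
    """
    E = np.ones_like(np.asarray(images[0], dtype=float))
    for I in images:
        E *= channel_edge_detector(I, K)
    return E
```

This also explains the diffuse edge in the lamp-lit area: the mild per-channel edges do not overlap spatially, so no pixel accumulates several small factors.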

Fig. 17 Curve segmentation of the wall in the P.O. Hviezdoslav Theatre in Bratislava

Fig. 18 The final segments of the planar point cloud

Fig. 19 Detail of the crooked wall to the left of the mirror

Several steps of the segmentation process are shown in Fig. 17. In this experiment, we start with four random initial curves. In the time steps \(n=300\) and \(n=352\), we see that three of them merged into one. However, as we see in the time step \(n=600\), the green and blue curves did not merge, which is correct. The evolution stopped in the time step \(n=924\). There are two main segments (the first corresponding to the wall and the second to the facing) and the waste after segmentation, see Fig. 18. The curve segmentation correctly excluded the pictures, the electrical outlets, and the switch (above the facing). Interestingly, the blue curve did not reach the facing edge in the area to the left of the mirror. The reason is that the wall is crooked (not flat) in this region, which is slightly captured by the edge detector corresponding to the D channel (last row of Fig. 15). The resulting edge is mild, so it is not the reason for stopping the curve. Instead, the curve stopped because the homogeneity function (6) is zero in this region due to the large \(\text {D}=|x_3|\) compared to the average in the already segmented region \(V^n\). A detail of this region is shown in Fig. 19.

Fig. 20 Point cloud of the classroom B306 at the Faculty of Civil Engineering of the Slovak University of Technology in Bratislava (ceiling and one wall removed)

Fig. 21 Left column: primary segmentation of the point cloud. Red points belong to the resulting planar point clouds and the blue points were rejected by the normal filter. Right column: corresponding smoothed edge detector functions

Fig. 22 Final segments of the point cloud after segmentation of the largest walls

The last experiment is the segmentation of the computer classroom B306 at the Faculty of Civil Engineering of the Slovak University of Technology in Bratislava. The input point cloud, with approximately 10 million points and an average resolution of about \(2\times 2\) mm, is shown in Fig. 20. The ceiling and one wall (with windows) were removed from the original data in order to make the images more readable. The point cloud was scanned from four scanner positions. First, we performed the primary segmentation to find planar point clouds. The secondary segmentation was performed only for the vertical planes corresponding to the largest walls visualised in Fig. 21, left column: the front wall with two whiteboards, the right one partially covered by a projection screen; the side wall, consisting of two planes, the first with a door and the second with a long wall hanger and an alcove with a sink; and finally, the back wall. We also added a normal filter to remove points whose local normals deviate from the normal of the fitting plane by more than 30 degrees (coloured blue). The red points form the resulting planar point clouds. In the right column of Fig. 21, we can see the corresponding smoothed edge detector functions. Note that some unwanted ‘artificial’ edges are also present, e.g. the long inclined edge on the back wall caused by uneven illumination by natural light, edges on the whiteboard of the same origin, or edges due to the merging of multiple scans (mainly on the front wall and at the bottom of the back wall). The final segments are shown in Fig. 22. Dark grey points represent the waste from the primary segmentation (points that do not belong to any of the segmented planes).
Light grey points form the waste of the secondary segmentation. We can see that on the front wall, our curve segmentation successfully found segments representing the wall, the whiteboards, and the projection screen. It also removed unwanted objects, such as the electrical elements and the wall hanger on the side wall, and divided the side wall into top and bottom segments (corresponding to the different wall colours). In the upper part of the door plane, we see separate segments due to a cable cover (which is barely visible in the RGB channels, Fig. 20, but considerably filtered out by the normal filter, see the second row of Fig. 21). Some regions are not segmented as desired: e.g. the segment representing the darker part of the wall to the left of the door, Fig. 20, is merged with the upper segment, see the dark red segment in Fig. 22. The segmentation curve leaked through a weak part of the edge. Also, the situation on the back wall is strongly influenced by the edge caused by light and by the low density of the point cloud in the part closer to the viewer, see the lower part of Fig. 20 and the last row of Fig. 21. However, these are situations that cannot be avoided when working with real data. Therefore, we conclude that we are generally satisfied with the result of the segmentation.
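The normal filter used in this experiment can be sketched as follows (a minimal illustration with our own function name; the 30-degree threshold matches the experiment, and we ignore normal orientation since locally estimated normals may be flipped):

```python
import numpy as np

def normal_filter(normals, plane_normal, max_angle_deg=30.0):
    """Boolean mask of points whose local normal deviates from the
    fitting-plane normal by at most max_angle_deg.

    normals      : (N, 3) array of estimated local point normals
    plane_normal : normal vector of the fitting plane
    """
    n = np.asarray(normals, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    p = np.asarray(plane_normal, dtype=float)
    p = p / np.linalg.norm(p)
    # |cos| makes the test independent of the normals' orientation sign.
    cos_ang = np.abs(n @ p)
    return cos_ang >= np.cos(np.radians(max_angle_deg))
```

Points rejected by this mask (the blue points in Fig. 21) typically lie on objects protruding from the wall, such as the cable cover, even when they pass the distance test of the primary plane fitting.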

5 Conclusion

In this paper, we presented a method for multichannel segmentation of planar point clouds using evolving curves. The planar point clouds were represented by a set of images, one for each available channel (the colour and intensity channels and the distance from the fitting plane). We discussed the details of the mathematical model of the Lagrangian curve evolution, the construction of the edge detectors using all channels, and the design of the normal velocity driving the evolution. The model was equipped with a suitable tangential motion to improve the quality of the mesh. We derived the numerical discretisation of the mathematical model and used the fast O(n) approach to treat topological changes. The method was successfully tested on multiple examples of real data.

The method could be used to automate the processing of point clouds, especially in the area of as-built documentation and geometry verification of building structures. Further development of the method could include automatic tuning of the model parameters; correction of the rough TLS intensity channel (Höfle and Pfeifer 2007; Xu et al. 2017; Bolkas 2019); usage of other channels, e.g. a derived channel capturing point cloud curvature; examination of other possibilities for aggregating the edge detector functions (e.g. different fuzzy logic aggregation operators); or automatic computation of the threshold values \(d_0^l\), possibly using information from the histogram of the segmented image. The overall segmentation methodology could also incorporate the segmentation of other geometric shapes, such as cylinders or rounded walls.