Unsupervised random forest for affinity estimation

This paper presents an unsupervised clustering random-forest-based metric for affinity estimation in large and high-dimensional data. The criterion used for node splitting during forest construction can handle rank-deficiency when measuring cluster compactness. The binary forest-based metric is extended to continuous metrics by exploiting both the common traversal path and the smallest shared parent node. The proposed forest-based metric efficiently estimates affinity by passing data pairs down a limited number of decision trees. A pseudo-leaf-splitting (PLS) algorithm is introduced to account for spatial relationships, which regularizes affinity measures and overcomes inconsistent leaf assignments. The random-forest-based metric with PLS facilitates the establishment of consistent, point-wise correspondences. The proposed method has been applied to automatic phrase recognition using color and depth videos, and to point-wise correspondence. Extensive experiments demonstrate the effectiveness of the proposed method for affinity estimation in comparison with the state-of-the-art.

Introduction

Affinity estimation plays an important role in many computer vision and image processing tasks. Affinity of motion trajectories, for example, is utilized in motion segmentation [1, 2] and action recognition [3]. Automatic phrase recognition employs trajectory affinity to define motion patterns in color and depth videos [4]. Point-to-point affinity and shape correspondence are essential for attribute transfer and data reuse [5-9], as well as for shape comparisons in morphological studies [10, 11]. It is, however, time-consuming to estimate pairwise affinities for large-scale datasets, where complexity grows quadratically with dataset size. Some distance metrics, such as the earth mover's distance, have higher computational costs for higher-dimensional data. This paper presents an unsupervised random-forest-based metric for efficient affinity estimation, and demonstrates its efficacy on automatic phrase recognition and point-wise correspondence in a shape corpus.
Random forests have been popular in computer vision for decades, and are well known for their scalability, real-time evaluation, and good generalization to unseen data [12-18]. A clustering random forest works in an unsupervised fashion [19-25] to estimate the underlying data distribution and affinity without prior labels. Alzubaidi et al. [26] utilized a density forest [20] with a Gaussian distribution assumption for tree nodes, where clustering compactness was measured by the covariance matrix. However, the zero-valued determinant in the case of rank-deficiency causes the criterion to become invalid. The combinatorial node splitting criterion, which integrates a trace-based distribution measurement and a scatter index [4], can handle rank-deficiency for optimal node splitting.
Recent research addressed forest-based metrics for affinity estimation. The cascaded clustering forest (CGF) was proposed to refine voxel-wise affinity by iteratively updating geodesic coordinates [27] using a set of clustering models. The mixed metric random forest (MMRF) utilized self-learning of data distributions for matching consistencies between images [28], taking advantage of the weak labeling and classification criterion to optimize node splitting. The oblique clustering forest (OCF) [29] extended the splitting criterion from traditional orthogonal hyperplanes to oblique hyperplanes, reducing the tree depth and model complexity. The spatially consistent (SC) clustering forest employed a data-dependent learning guarantee for unsupervised clustering of randomized trees [30]. The above clustering forests introduce additional computation, such as cascaded clustering models [27], fine-tuning with penalized weighting of the classification entropy [28], dominant principal component and regression [29], and a data-dependent learning guarantee for tree pruning [30], to improve data clustering and affinity estimation. In contrast, our work here does not introduce additional computational costs to construct the clustering forest. Instead, we extend the binary forest-based metric to a continuous one for affinity estimation. As training an unsupervised clustering forest is typically more time-consuming than a supervised classification forest due to entropy estimation for the high-dimensional data, a decremental covariance matrix evaluation technique is introduced to avoid assessing covariance matrices from scratch and to reduce the learning complexity.
Affinities are measured efficiently by hierarchical clustering forests, in contrast to learning-based feature fusion for the affinity graph using iterative optimization of convex problems [31]. Two points are intuitively assumed to be similar if they are placed in the same leaf. The generalized forest-based metric is derived from the average affinities of individual trees; it has been used to measure data similarity [20, 24]. A continuous affinity measure has been proposed based on the common traversal path from the root to leaf nodes, as well as the node cardinality on the path [25]. To avoid the weight computation along the traversal path, we present a forest-based metric as a linear combination of normalized common-traversal-path-based and smallest-shared-parent-based metrics. The proposed metric takes into account both the unbalanced data distribution and partial similarity. Given the pairwise affinities of a dataset, it is straightforward to compute a low-dimensional embedding. Ganapathi-Subramanian et al. [32] constructed a joint latent embedding function combining diffusion embedding and a linear mapping for descriptor transport in a shape corpus, where the nonlinear embedding function relied on predefined feature descriptors. This paper addresses the forest-based metric and affinity estimation; the embedding is conducted by the multi-dimensional scaling (MDS) algorithm [33], computed from the affinity estimates without explicit representation learning.
This work introduces a pseudo-leaf-splitting (PLS) algorithm to handle inconsistent leaf assignments, since a random forest built upon independent data points cannot accommodate global data structures. The random-forest-based metric with PLS regularizes point-wise correspondences. The proposed PLS technique differs from existing methods [9, 34-36] in that it bridges the gap between separate point-wise correspondence and consistency refinements. Deep learning-based methods have been used for shape correspondence [5, 37-39], learning from prior ground-truth correspondences or metric space alignment.
3DN [39] and 3D-coded [38] are unsupervised end-to-end networks that infer global displacement fields between a shape and a template, utilizing chamfer- and earth-mover's-distance-based loss functions. FMNet [37] optimized a feature extraction network via a low-dimensional spectral map. ADD3 used anisotropic diffusion-based spectral feature descriptors [5]. FMNet [37] and ADD3 [5] learn in a supervised manner, requiring prior ground-truth correspondence. Unlike deep neural network-based descriptor learning, this work exploits unsupervised forest-based metric learning for point-wise correspondence.
This paper presents a combined forest-based metric and a PLS regularization scheme to improve the forest-based metric for affinity estimation, as shown in Fig. 1. The main contributions of this work are: (i) a continuous forest-based metric exploiting both the common traversal path and the cardinality of the smallest shared parent node, enabling efficient and effective affinity estimation in large and high-dimensional data; (ii) a PLS scheme to regularize the forest-based metric to account for global spatial and structural relationships, overcoming inconsistent leaf assignments; and (iii) experimental demonstrations and comparisons with the state-of-the-art indicating successful affinity estimation for facial trajectories and 3D points, enabling efficient and automatic phrase recognition and consistent correspondences for a 3D shape corpus.

Fig. 1 Our proposed unsupervised random-forest-based metric for affinity estimation. The forest-based continuous metric is defined using both the length of the common traversal path and the cardinality of the smallest shared parent node. A pseudo-leaf-splitting algorithm is proposed to account for spatial relationships, regularising affinity measures and inconsistent leaf assignments. Decremental covariance matrix evaluation is used to reduce learning complexity.

Unsupervised random forest
Given an unlabeled dataset T = {t_i | i = 1, ..., N}, the unsupervised density forest, comprising a set of trees trained independently, estimates the underlying data distribution using a Gaussian distribution assumption [20]. The combinatorial node splitting criterion integrates a trace-based distribution measurement and a scatter index [4]. The objective function I of the j-th node with data T_j is defined as follows:

I = Σ_{i∈{l,r}} (m_{T_j^i} / m_{T_j}) [ tr(σ(T_j^i)) + λ φ(T_j^i, μ_i) ]     (1)

where tr(·) is the matrix trace, T_j^i denotes the data assigned to the i-th child of parent node j, σ denotes the covariance matrix of the Gaussian distribution, m_{T_j^i} denotes the size of the left or right child node, and m_{T_j} the parent node size. φ(T_j^i, μ_i) = max_{t∈T_j^i} ‖t − μ_i‖_∞. μ_l and μ_r are the centroids of the left and right child nodes respectively. The constant λ is set to 50 empirically.
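As an illustration, the node splitting search can score each candidate split by size-weighted child compactness. The numpy sketch below assumes a particular combination of the covariance trace and the λ-weighted scatter index; the paper's exact objective may weight the terms differently:

```python
import numpy as np

LAMBDA = 50.0  # empirical weight, as in the text

def scatter_index(T, mu):
    """phi(T, mu): largest infinity-norm distance from a point to the centroid."""
    return float(np.max(np.abs(T - mu)))

def split_cost(T_left, T_right):
    """Cost of a candidate split: size-weighted per-child compactness, combining
    the covariance trace with the lambda-weighted scatter index (the exact
    combination is an assumption)."""
    m = len(T_left) + len(T_right)
    cost = 0.0
    for T in (T_left, T_right):
        if len(T) < 2:
            continue
        mu = T.mean(axis=0)
        cov = np.atleast_2d(np.cov(T, rowvar=False))
        cost += len(T) / m * (np.trace(cov) + LAMBDA * scatter_index(T, mu))
    return cost

# Separating two well-separated clusters costs less than mixing them.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 5))
B = rng.normal(size=(50, 5)) + 10.0
X = np.vstack([A, B])
print(split_cost(A, B) < split_cost(X[::2], X[1::2]))  # True
```

During forest construction this cost would be minimized over randomly selected split parameters at each node.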
The covariance matrices need to be repeatedly evaluated for randomly selected parameters; it is time-consuming to evaluate the covariance matrix σ from scratch for the optimal splitting parameters when building the forest. This work introduces a decremental covariance matrix evaluation technique (see Appendix A). The complexity of covariance matrix evaluation is reduced from O(mρ²) to O(ρ) by the decremental technique, where m is the cardinality of the node and ρ denotes the data dimensionality. The trace evaluation complexity is reduced to O(κρ) given κ randomly selected parameters.
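The decremental idea can be illustrated with per-dimension running sums: since tr(σ) = Σ_d (E[x_d²] − E[x_d]²), removing one point from a node updates the trace in O(ρ) without recomputing the full covariance matrix. A minimal numpy sketch (an illustrative reconstruction; the paper's exact derivation is in its Appendix A):

```python
import numpy as np

class TraceTracker:
    """Maintain tr(cov) of a point set with O(rho) updates, using per-dimension
    running sums (illustrative sketch of the decremental evaluation)."""
    def __init__(self, X):
        self.n = len(X)
        self.s = X.sum(axis=0)          # per-dimension sums
        self.s2 = (X ** 2).sum(axis=0)  # per-dimension squared sums

    def remove(self, x):                # O(rho) decremental update
        self.n -= 1
        self.s -= x
        self.s2 -= x ** 2

    def trace(self):
        if self.n < 2:
            return 0.0
        mean = self.s / self.n
        # (n - 1) normalization matches np.cov's unbiased estimate
        return float(((self.s2 - self.n * mean ** 2) / (self.n - 1)).sum())

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
tt = TraceTracker(X)
tt.remove(X[0])
# matches recomputation from scratch
print(np.isclose(tt.trace(), np.trace(np.cov(X[1:], rowvar=False))))  # True
```

The same bookkeeping supports moving a point from one candidate child to the other while sweeping thresholds, which is where the O(ρ) update pays off.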

Binary forest-based metric
The forest leaves L define a partition of the training data. When an instance t is fed to a tree, it finally reaches a leaf ℓ(t) ∈ L after a sequence of binary tests stored in the branch nodes. Instances assigned to the same leaf node are assumed to be similar and their pairwise affinity is set to 1; it is 0 otherwise. The symmetric affinity matrix A is defined as a weighted combination of the A_k from independent trees:

A = (1/n_T) Σ_{k=1}^{n_T} A_k     (2)
where n_T is the number of trees. Since only points within a leaf node are considered similar, the affinity matrix from the random forest automatically accounts for neighboring relationships. Thus, A can be viewed as a geodesic affinity matrix of the original dataset. However, when using the L2 distance metric, there is no prior on local neighbor relationships: a kNN-like algorithm is needed to find neighbors from the pairwise distance matrix, at additional time cost.
The affinity matrix obtained by the binary metric is often relatively sparse since only point pairs in the same leaf node are assumed to be similar. Generally speaking, the leaf node should not be too small to account for the affinity of the dataset: randomized trees should provide sufficient similar candidate points in leaf nodes.
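A toy sketch of the binary metric: each tree contributes a 0/1 co-leaf indicator, and the affinity matrix averages these indicators over trees, as in Eq. (2). The random-threshold stumps below are simple stand-ins for trained clustering trees:

```python
import numpy as np

def random_leaf_assignment(X, depth, rng):
    """Assign each point a leaf id via `depth` random axis-aligned splits
    (a stand-in for a trained clustering tree)."""
    leaf = np.zeros(len(X), dtype=int)
    for _ in range(depth):
        d = rng.integers(X.shape[1])
        thr = rng.uniform(X[:, d].min(), X[:, d].max())
        leaf = leaf * 2 + (X[:, d] >= thr)
    return leaf

def binary_affinity(X, n_trees=17, depth=3, seed=0):
    """A[i, j] = fraction of trees whose leaves place i and j together."""
    rng = np.random.default_rng(seed)
    A = np.zeros((len(X), len(X)))
    for _ in range(n_trees):
        leaf = random_leaf_assignment(X, depth, rng)
        A += leaf[:, None] == leaf[None, :]
    return A / n_trees

X = np.vstack([np.zeros((5, 2)), np.ones((5, 2)) * 10])
A = binary_affinity(X)
print(A[0, 1], A[0, 9])  # within-cluster affinity dominates cross-cluster
```

Note that no pairwise distance is ever computed: the cost is one tree traversal per point per tree, plus the leaf co-occurrence bookkeeping.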

Continuous forest-based metric
Aside from the binary affinity, we propose a continuous forest-based metric based on the common path P_ij of two instances t_i and t_j as they traverse from the root to leaves ℓ(t_i) and ℓ(t_j). The distance d_cp(t_i, t_j) is computed from the common path as follows:

d_cp(t_i, t_j) = 1 − |P_ij| / ν_ij     (3)

where ν_ij = max(ν_i, ν_j) is the maximum depth of ℓ(t_i) and ℓ(t_j), and |·| is the cardinality of a set. If two instances reach the same leaf node, the distance is zero. Otherwise, the distance is 1 when the two instances share no common path. The binary affinity definition is a special case of Eq. (3), obtained by setting the common path to null for instances not in the same leaf. However, there is no guarantee that the decision tree is balanced for an arbitrary dataset. In this case, similarity is also defined based on the cardinality of the data stored in the smallest shared parent (SSP) node T_p^ij of ℓ(t_i) and ℓ(t_j):
d_sp(t_i, t_j) = (|T_p^ij| − n_min) / (|T_r| − n_min)     (4)

where n_min is the minimum leaf size of ℓ(t_i) and ℓ(t_j). When t_i and t_j reach the same leaf node, the SSP node T_p^ij is the leaf itself, and the distance d_sp is zero. On the other hand, when the shared parent node is at the highest level, i.e., the root node T_r, d_sp is 1. When the leaf size n_l is used as the termination criterion for tree growth, n_min in the above SSP-based metric can be replaced by n_l. For an unbalanced data distribution, the distance between two instances in a small cluster is shorter than that in a large cluster under the definition in Eq. (4): two instances are likely to be far apart in a large cluster.
Compared to the adaptive forest-based metric in Ref. [25], here the cardinality of the SSP node is used to determine affinity without weight computation along the shared traversal path. The combined forest-based metric d_f is defined as a linear combination of the common-path-based d_cp and the SSP-based d_sp:

d_f(t_i, t_j) = w_cp d_cp(t_i, t_j) + w_sp d_sp(t_i, t_j)     (5)

where the constant weights satisfy w_cp + w_sp = 1. The entry a_ij of the affinity matrix A is defined as

a_ij = 1 − d_f(t_i, t_j)     (6)
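The continuous metrics can be computed directly from root-to-leaf paths. In the sketch below, the tree, its node cardinalities, and the equal weights w_cp = w_sp = 0.5 are invented for illustration, and the SSP-based distance is normalized between the minimum leaf size and the root size as described above:

```python
# Toy tree: node -> cardinality. Paths are root-to-leaf node lists.
card = {"root": 100, "a": 60, "b": 40, "a1": 30, "a2": 30}
path_i = ["root", "a", "a1"]   # instance t_i ends in leaf a1
path_j = ["root", "a", "a2"]   # instance t_j ends in leaf a2

common = sum(1 for u, v in zip(path_i, path_j) if u == v)   # |P_ij|
nu = max(len(path_i), len(path_j))                          # deeper of the two leaves
d_cp = 1 - common / nu                                      # common-path distance

ssp = [u for u, v in zip(path_i, path_j) if u == v][-1]     # smallest shared parent
n_min = min(card[path_i[-1]], card[path_j[-1]])             # minimum leaf size
d_sp = (card[ssp] - n_min) / (card["root"] - n_min)         # SSP-based distance

w_cp, w_sp = 0.5, 0.5                                       # w_cp + w_sp = 1
d_f = w_cp * d_cp + w_sp * d_sp                             # combined metric
print(round(d_cp, 3), round(d_sp, 3), round(d_f, 3))        # 0.333 0.429 0.381
```

Both leaves here descend from node "a", so the common path covers two of three nodes and the SSP holds 60 of 100 points; the fused distance blends the two views.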

Proposition 1. The functions defined in Eqs. (3)-(5) are non-negative metrics.
The proof of Proposition 1 is given in Appendix B. The binary, common-path-based, SSP-based, and combined distance metrics are applied to a set of toy data in Figs. 2 and 3. The difference e_A between the affinity matrices A computed by the clustering-forest-based metrics and A_L2 computed by the L2 norm and kNN is shown in Fig. 4:

e_A = ‖A ⊕ A_L2‖_F / n_A
where ⊕ is the xor operator applied to matrix entries, n_A is the size of A, and ‖·‖_F is the Frobenius norm. The combined random-forest-based metric achieves lower e_A than the binary, common-path-based, and SSP-based metrics. All metrics show a reduced difference e_A as the forest size grows. The Dice similarity metric [40] e_I is used to compare the k nearest neighbors obtained by the proposed metrics with those from the L2 norm in Fig. 5. The nearest neighbors obtained by the combined random-forest-based metric are more consistent with the L2 metric than those of the other metrics, and consistency increases with forest size. Moreover, on enlarging the forest, the performance of the binary random-forest-based metric approaches that of the combined metric (see Figs. 4(a) and 5(a)), because a large number of randomized decision trees tends to provide sufficient neighboring candidates.

The look-up of feature values and comparison with thresholds when traversing trees are very fast and take negligible time. Although the cost of pairwise distances for small subsets or sampled point pairs is much lower than dense pairwise distance computation, any kNN-graph-based method is time-consuming for a high-dimensional dataset. The proposed forest traversal and leaf assignment have linear complexity with respect to the data size. More importantly, the time complexity of our method is independent of dimensionality, which is desirable for high-dimensional data. In the extreme case, the binary metric involves no multiplication operations in affinity estimation: since instances in the same leaf node are assumed to be similar, the complexity depends on the number of leaf nodes, and no pairwise distances are computed. For the continuous metrics, such as d_sp, only normalization operations are needed.

Pseudo leaf splitting
It is efficient to acquire the pairwise affinity matrix of a dataset by the random-forest-based metric. However, there is no regularization for point-wise correspondence, because the random forest is built upon independent feature descriptors without considering the relationships between points. For instance, when establishing a correspondence C between datasets X and Y, the forest-based metric can be used to produce candidate matching pairs {(x_i, y_i) ∈ C | x_i ∈ X, y_i ∈ Y}. This correspondence does not guarantee relationship preservation, i.e., g(x_i, x_j) ∝ g(y_i, y_j) when (x_i, y_i) ∈ C and (x_j, y_j) ∈ C, where g is some function measuring the relationship between points, e.g., the geodesic distance on a 3D mesh surface. This work introduces PLS to handle the lack of affinity regularization in the forest-based metric.
To begin with, the leaf node ℓ* with the largest span is located as the starting leaf:

ℓ* = argmax_{ℓ∈L} max_{x,x'∈ℓ} g(x, x')     (7)

and the span of the starting node ℓ* is denoted η* = max_{x,x'∈ℓ*} g(x, x'). Generally speaking, the leaves of extreme points can be identified in this way, e.g., the leaf node of the toe in a 3D human mesh dataset. A Gaussian mixture model (GMM) is used to fit the point distribution in the leaf node. For simplicity, the dominant mode acquired by the mean shift method [41] is used to represent the leaf. Let μ* denote the center of the dominant mode in ℓ*. The point x* ∈ ℓ* closest to μ* is selected as the seed (Eq. (8)); J(X) returns the seed point of dataset X. The point set belonging to X and ℓ* is split according to the seed selection; in our system, the seed point is assigned to the left leaflet, with the binary test for leaf splitting given in Eq. (9). Given the starting leaf node and the seed selection, leaf splitting is propagated to the other leaves. The unprocessed leaves are sorted by distance to the seed point x* ∈ ℓ*, and propagation begins from the nearest leaf node. Let ℓ_k be the current leaf node. For a point x ∈ ℓ_k, the binary test for leaf splitting of dataset X is given in Eq. (10), where η_k1 denotes the minimum distance from the seed x* to points in ℓ_k. Only leaf nodes with ambiguous correspondence need to be split, which can be determined simply by checking the span of the leaf node: when the span is greater than a predefined threshold, set to 10% of the largest span of dataset X in our experiments, the leaf node is split. The pseudo-leaf-splitting process is given in Algorithm 1.
PLS is a general technique to regularize the pairwise affinity obtained from a forest. Here the function g is used to measure the point-wise relationship between points inside a dataset, where the leaflet splitting tests are set according to the span of the dataset. There is no requirement that two sets share the same span when using the forest-based metric and PLS regularization to establish point-wise correspondence. The proposed scheme can handle non-isometrically deformed datasets by using the data-dependent binary tests in Eqs. (9) and (10).

Algorithm 1 Pseudo leaf splitting
Input: Random forest R, dataset X.
Output: Pseudo leaf splitting.
for each tree in R do
    Locate starting leaf ℓ* with the largest span (Eq. (7));
    Compute the centroid of the dominant mode in ℓ*;
    Get a seed point x* ∈ ℓ* (Eq. (8));
    Split leaf node ℓ* using Eq. (9);
    Sort unprocessed leaves by distance to x*;
    for each inconsistent leaf node do
        Perform leaf splitting using Eq. (10);
    end for
end for
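A rough sketch of the PLS propagation follows, with several simplifying assumptions: the mean-shift dominant mode is replaced by the leaf centroid, Euclidean distance stands in for the geodesic function g, and each wide leaf is split at the median distance to the seed rather than by the paper's exact binary tests:

```python
import numpy as np

def pseudo_leaf_splitting(X, leaf_of, g=None):
    """Rough PLS sketch: split each wide leaf into two leaflets around a seed.
    `leaf_of[i]` is the leaf id of point i; `g` is a pairwise distance
    (Euclidean here, standing in for geodesic distance)."""
    if g is None:
        g = lambda a, b: float(np.linalg.norm(a - b))
    leaves = {k: np.where(leaf_of == k)[0] for k in np.unique(leaf_of)}
    span = {k: max(g(X[i], X[j]) for i in idx for j in idx)
            for k, idx in leaves.items()}
    start = max(span, key=span.get)                 # leaf with the largest span
    idx = leaves[start]
    centroid = X[idx].mean(axis=0)                  # stand-in for the dominant mode
    seed = idx[np.argmin([g(X[i], centroid) for i in idx])]  # seed point
    threshold = 0.1 * max(span.values())            # 10% of the largest leaf span
    split = {}
    # process leaves sorted by distance to the seed; skip narrow (unambiguous) leaves
    for k in sorted(leaves, key=lambda k: min(g(X[seed], X[i]) for i in leaves[k])):
        if span[k] <= threshold:
            continue
        mid = np.median([g(X[seed], X[i]) for i in leaves[k]])
        split[k] = {i: int(g(X[seed], X[i]) > mid) for i in leaves[k]}  # 0 = left leaflet
    return seed, split

X = np.array([[0, 0], [0.1, 0], [10, 0], [10.1, 0], [5, 0], [5.1, 0]], dtype=float)
leaf_of = np.array([0, 0, 0, 0, 1, 1])
seed, split = pseudo_leaf_splitting(X, leaf_of)
print(int(seed), sorted(int(k) for k in split))
```

In this toy run, only the wide leaf (two clusters far apart) is split; the narrow leaf is left intact, mirroring the ambiguity check described above.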
It is computationally complex to find consistent correspondences in a shape corpus. Existing techniques do so by minimizing overall distortion using dynamic programming [36], positive semi-definite matrix decomposition [34], or functional map networks [35]. Additional refinement is required for consistent correspondence when given an initial pairwise mapping. The gap between the point-wise correspondence of shapes and the consistency refinement can be avoided by taking the point distribution in the shape corpus into account. Unlike the example-based classification forest for shape correspondence [9], the proposed forest-based metric needs no labeled training data.
The correspondence function between surface meshes X_p and X_q is denoted τ_pq(·). When given a group of surface meshes, point-wise correspondence using PLS is consistent and satisfies cycle constraints: when τ_pq(x_i^p) = x_j^q and τ_qr(x_j^q) = x_k^r, then τ_pr(x_i^p) = x_k^r. This can be ascribed to the seed selection based on Gaussian fitting of the dominant mode in ℓ*. The mapping between the starting seed points of X_p and X_q is τ_pq(x_*^p) = J_q(J_p^{-1}(x_*^p)) = x_*^q. It is obvious that the correspondence of seed points satisfies the cycle constraints, with τ_pr(x_*^p) = τ_qr(τ_pq(x_*^p)) = x_*^r. Taking into account the similarity propagation nature of PLS, the point-wise correspondence satisfies the cycle constraints.

Datasets and metric
The proposed method is applied to affinity estimation of various datasets, including KinectVS [4], OULUVS [42], and OuluVS2 [43]. KinectVS consists of twenty subjects uttering twenty phrases six times [4]. Color and depth video data were obtained by Kinect with a resolution of 640 × 480. The OULUVS dataset [42] consists of color videos of twenty subjects uttering ten phrases five times with a resolution of 720 × 576. OuluVS2 [43] consists of color videos of 53 subjects uttering ten phrases three times with a resolution of 1920 × 1080.
The AAM algorithm [44] is used to extract 35 patch trajectories around the lips and jaw following Ref. [4], where the shape and texture features of the patches are concatenated to represent the trajectories. In our experiments, the affinity matrix obtained by the forest-based metric is sorted, and the r nearest neighbors are viewed as matching candidates for a probe trajectory; r is set to 1 (Top-1), 5 (Top-5), and 10 (Top-10) in the affinity evaluation. If a trajectory with the same label as the probe occurs in the candidate set, it counts as a hit. The trajectory labeling accuracy is computed as n_hit/n_probe, where n_hit and n_probe denote the numbers of hits and probe trajectories respectively.
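The Top-r evaluation can be sketched as follows; the affinity matrix and labels here are synthetic:

```python
import numpy as np

def top_r_accuracy(A, labels, r):
    """Fraction of probe trajectories whose r most-affine neighbors (self
    excluded) contain at least one trajectory with the probe's label."""
    hits = 0
    for i in range(len(labels)):
        order = np.argsort(-A[i], kind="stable")   # most similar first
        neighbors = [j for j in order if j != i][:r]
        hits += any(labels[j] == labels[i] for j in neighbors)
    return hits / len(labels)

labels = [0, 0, 1, 1]
same = np.equal.outer(labels, labels).astype(float)
A = np.where(same, 1.0, 0.2)          # same-label pairs get high affinity
print(top_r_accuracy(A, labels, 1))   # 1.0
```

Larger r makes the criterion more forgiving, which is why Top-5 and Top-10 accuracies upper-bound Top-1.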
We also evaluate the proposed method on 3D shape corpora, including the TOSCA [45], Scape [46], SHREC07-NonSym [34, 45], and Faust [47] datasets. The wave kernel signature (WKS) [48] and a normalized geodesic distance vector are used as 3D point feature descriptors. The geodesic distance vector of a point x comprises the geodesic distances between x and all other points on the surface mesh, computed by the fast marching algorithm. The correspondence accuracy between 3D surface meshes X and Y is defined as

acc(ε) = (1/n_X) |{x ∈ X : g(τ(x), τ̂(x)) ≤ ε}|

where τ and τ̂ are the estimated and ground-truth point-wise mapping functions, n_X is the number of points in X, and g is the geodesic distance function. The percentages of correct matches at a set of geodesic errors ε, namely 0.02, 0.05, 0.10, and 0.16, are reported in our experiments.
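The percentage-of-correct-matches evaluation can be sketched with a precomputed geodesic distance matrix standing in for the fast marching computation; the toy "mesh" below is invented:

```python
import numpy as np

def pct_correct(tau, tau_gt, G, thresholds):
    """Share of points whose predicted match lies within a given (normalized)
    geodesic error of the ground-truth match. G[a, b] is an assumed precomputed
    geodesic distance matrix on the target mesh."""
    err = np.array([G[tau[i], tau_gt[i]] for i in range(len(tau))])
    return {t: float((err <= t).mean()) for t in thresholds}

# Toy target "mesh": 5 vertices on a line, geodesic distance |a - b| / 10.
idx = np.arange(5)
G = np.abs(idx[:, None] - idx[None, :]) / 10.0
tau = [0, 1, 2, 3]      # predicted matches
tau_gt = [0, 1, 2, 4]   # ground truth (last point off by one vertex)
print(pct_correct(tau, tau_gt, G, [0.02, 0.05, 0.10, 0.16]))
```

Reporting the curve at several thresholds separates fine localization (0.02) from coarse region-level matching (0.16).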

Affinity estimation
The proposed method is applied to affinity estimation on facial trajectories and 3D points. We compare the proposed criteria with the classical Gini index [25], the determinant of the covariance matrix [20], and the variance of feature differences [23] on the facial trajectories (Figs. 6(a)-6(c)) and 3D shape datasets (Figs. 6(e)-6(g)). The node splitting criterion based on the determinant of the covariance matrix [20] fails on all datasets due to rank deficiency of the covariance matrices. Forests built using the Gini index of a dummy set [19, 24, 25] depend on the construction of synthetic data, which limits their ability to locate data clusters effectively. The node splitting criterion of Ref. [23] seeks a feature pair producing the largest variance of feature differences, which does not model the data distribution of the child nodes. In contrast, our splitting criteria model the data distribution and produce the best results with the Fuse metric. The numbers of trees are set to 17 and 50 for the visual utterance datasets and 3D shape datasets respectively. Comparisons of the binary (Bin), common-path (Path), SSP, and combined (Fuse) distance metrics on the facial trajectories and 3D points are shown in Figs. 6(a)-6(c) and Figs. 6(e)-6(g). The Fuse metric performs better than the binary one, and improves upon the Path and SSP-based metrics. For two pairs with common paths of the same length, the pair with the smaller SSP is more similar. Both the Path and SSP metrics contribute to affinity estimation based on tree traversal in forests.
Figures 6(d) and 6(h) show the labeling accuracies of the facial trajectories for KinectVS, OuluVS, and OuluVS2, as well as the 3D point matching accuracies on the TOSCA, Scape, and Shrec-NonSym datasets, with and without PLS regularization. The labeling results with PLS regularization are better than those without for all datasets. Because the shape feature, defined as the difference of patch positions in adjacent frames, carries motion information, the symmetric facial trajectories on the left and right halves of the face are less likely to be confused. Thus, the improvements from PLS regularization are smaller for the facial trajectory datasets than for the 3D shape datasets.
The facial tracker is designed for frontal faces, and tracking performance deteriorates when given profile facial images in the OuluVS2 phrase dataset [43]. Figure 7 shows the effects of facial landmark tracking on affinity estimation for trajectories. The less accurate facial landmark tracking in the profile views makes it harder to locate the correct facial trajectories. The facial trajectory labeling accuracy for the frontal view is better than for the profile views in the Top-1, Top-5, and Top-10 experiments.

Fig. 6 Labeling accuracies for the combined random-forest-based metric (Fuse), the binary (Bin), common-path-based (Path), and SSP-based metrics, and random forests with node splitting criteria using the determinant of the covariance matrix [20], the variance of feature differences [23], and the Gini index.

Dense correspondence between shapes
An unsupervised random-forest-based metric with PLS regularization is employed to estimate the point distribution (see Fig. 8). A comparison of the pairwise correspondence found by the proposed method and functional maps (FM) [6], blended intrinsic maps (BIM) [7], a coarse-to-fine combinatorial method [8], and a classification random forest (CRF) [9] is shown in Table 1. As in Ref. [9], we only conduct experiments on classes with more than six objects, to ensure sufficient training data for the forest. Here all shapes except the query are used to train the forest. Our method achieves more than 96% correct matching within 0.05 geodesic error. In the experiments, the WKS and geodesic distance vectors are used as point descriptors. Table 1 gives the correspondence accuracy based on WKS (RF_wks), the geodesic distance vector (RF_geo), and feature fusion (RF_fusion). In our experiments, the dense correspondence given by RF_fusion outperforms those from RF_wks and RF_geo: the fusion of the local shape descriptor WKS and the contextual geodesic vector facilitates the search for optimal node splitting. Point-wise matching based on forests with different numbers of trees is shown in Fig. 9(a). The forest size is larger than that of the supervised CRF [9]: a relatively large number of randomized decision trees is needed to estimate correspondence in an unsupervised manner. The more training data, the more accurate the correspondence (see Fig. 9(b)).
We have applied the proposed method to a motion dataset [49], where the first 10% of the shapes are used to train the forest. There is no requirement that the training and testing shapes come from the same kinds of motion. Our method achieves more than 95% correct matches within 0.05 geodesic error, as shown in Fig. 9(c).
The proposed method is compared with convex-optimization-based nonrigid registration (CO) [51] and a CNN-classifier-based method [52] on the Faust database [47]. Following Refs. [51, 52], correspondence is computed between pairs of meshes from the same subject (intra-subject) or different subjects (inter-subject). Aside from the testing pairs, all other meshes are used to build the random forest. Our method outperforms CO and the CNN-based method in average error, average error of the worst pair, and 10-cm recall: see Fig. 10. CNN followed by nonrigid registration (CNN-S) produced the best results. However, CNN and CNN-S were built upon 2D depth maps, where partial scans and additional registration operations were required. Figure 11 and Table 2 show a comparison with deep learning-based shape correspondence models, including 3D-coded [38], FMNet [37], and ADD3 [5], on the Scape dataset. The proposed forest-based metric with PLS regularization outperforms the supervised and unsupervised deep learning-based models with a matching accuracy of 0.65, vs. 0.48 (3D-coded) and 0.27 (ADD3), at g0.02. The supervised FMNet has the best performance, being learned from prior ground-truth correspondence and mappings in both the spatial and spectral domains. On the other hand, the proposed approach only requires unsupervised forest-based metric learning for point-wise affinity.

Fig. 9 (a) Correspondence errors with different forest sizes for the TOSCA dataset. (b) Average geodesic errors corresponding to various sizes of training sets for the proposed method (RF_fusion) and a classification random forest (CRF) [9]. (c) Comparison of pairwise correspondence errors with different feature channels on human motion data [49]. (d) Comparison of consistent correspondence errors on the SHREC-NonSym dataset by the proposed method, and the FMN [35], SDP [34], CFM [50], and OBF [36] methods.

Consistent correspondence in shape corpus
Aside from pairwise shape correspondence, the proposed method is also compared with existing consistent correspondence methods, including positive semi-definite matrix decomposition (SDP) [34], an optimization-based framework (OBF) for distortion minimization [36], a functional map network (FMN) [35], and consistent functional maps (CFM) [50], on the SHREC-NonSym dataset: see Table 3 and Fig. 9(d). The proposed method takes advantage of the point distribution modeling by the clustering forest and the PLS regularization scheme, outperforming the compared methods with a correspondence accuracy of 44.2% (g0.02) on the Shrec-NonSym dataset. Table 4 gives the correspondences obtained by the proposed method and the SDP [34], OBF [36], and fuzzy correspondence (FC) [53] methods on the TOSCA and Scape datasets. The proposed method outperforms SDP [34] and OBF [36] by significant margins in local matching at 0.02 geodesic error: the proposed method has an edge in matching specificity. At 0.16 geodesic error, the proposed method realizes full matching, as do the SDP [34] and OBF [36] methods. As shown in Tables 1, 3, and 4, the proposed forest-based metric with PLS regularization refines the forest-based metric and produces a large improvement for both pairwise and consistent correspondence in the shape corpus.

Table 3 Comparison of consistent correspondence by the proposed RF_fusion method with and without PLS regularization, and FMN [35], SDP [34], CFM [50], and OBF [36]

Phrase recognition
Phrase recognition accuracies for the proposed method (RF_fusion) on the depth and color videos are given in Fig. 12. The accuracy in subject-independent (SI) experiments is lower than in subject-dependent (SD) experiments. The performance variations between the SD and SI experiments can be ascribed to personal speaking characteristics and person-specific texture differences in moustache and lip shapes. The SI experiments on the frontal phrase set of OuluVS2 yield an average accuracy of 84.8%, comparable to state-of-the-art methods [54, 55] (see Fig. 13). In the OuluVS2 dataset, the video data for Subject 29 turned out to be unusable since his mouth was not visible most of the time, so Subject 29 was excluded from the test data. Table 5 reports the phrase recognition accuracies on the frontal phrase set of the OuluVS2 dataset in SI experiments. The proposed model is compared to deep CNN-based lipreading models with a long short-term memory architecture [55] and parallel branches [56]; a large-scale dataset was used to learn the network parameters [57]. The proposed model achieves an average accuracy of 84.8%, comparable to a latent variable model [54] and LSTM [55]. The systems based on deep neural networks produce improvements by a large margin [56, 57], but many parameters need to be learned from annotated training data. Figure 14 gives phrase recognition accuracies for each subject in the color videos (RF_color) with a patch size of 15 × 15 and the depth videos (RF_depth) with a patch size of 7 × 7, for the KinectVS dataset. We set the patch sizes following Ref. [4]; the patch size for depth videos is smaller given the relatively low signal-to-noise ratio of depth video.

Comparison with forest-based correspondence
The proposed method utilizes a multivariate Gaussian distribution and clustering forest-based metrics for affinity estimation and correspondence. We estimate supervoxel correspondence on bony tissues of craniofacial CBCTs, which are divided into two parts, the mandible and the maxilla. The dataset consists of 150 clinically obtained cone beam CTs (CBCTs) [30], each decomposed into 5000 supervoxels. We compare with recent work on forest-based metrics, including OCF [29], MMRF [28], SC forest [30], and the classification forest (CLA) [58], on supervoxel-wise correspondence; see Table 6.
The proposed approach extends the binary forest-based metric to a continuous one, and achieves a Dice similarity coefficient (DSC) of 0.93 on the maxilla, outperforming MMRF (0.88), SC (0.89), and CLA (0.81). MMRF and SC perform better than our method on the mandible, which has a relatively small number of supervoxels, though they require additional classification criteria and tree pruning.
Here OCF achieves the best performance, with DSCs of 0.93 and 0.95 on the mandible and the maxilla respectively. However, OCF requires additional dominant principal component estimation and regression [29]. The proposed approach incurs no computational cost beyond forest construction.
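The Dice similarity coefficient reported above measures the overlap between a predicted and a reference label set. A minimal sketch (the function name and example masks are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 |A ∩ B| / (|A| + |B|)."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom
```

For supervoxel-wise correspondence, the masks would mark the supervoxels assigned to a given anatomical label (e.g., mandible) in the estimated and reference segmentations.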

Conclusions
We have presented unsupervised random-forest-based metrics for affinity estimation in large and high-dimensional data, taking advantage of both the common traversal path and the smallest shared parent node. The proposed forest-based metric combined with PLS can account for spatial relationships to determine consistent correspondences. The proposed PLS scheme regularizes the forest-based metric and bridges the gap between point-wise correspondence and additional consistency refinements inside a shape corpus. The proposed method has been applied to phrase recognition using color and depth videos, as well as to point-wise correspondence of 3D shapes, demonstrating its effectiveness compared to the state-of-the-art. In future work, we will further explore clustering random forest methods for affinity estimation. The additional PLS is utilized in the current system to account for global spatial and structural relationships; it regularizes the forest-based metric but relies on a heuristic seed selection and propagation process to optimize the node-splitting parameters and generate the forest. We will further study optimization of unsupervised clustering forests for consistent and point-wise correspondence.

Appendix A Decremental covariance matrix evaluation
Since the covariance matrices need to be evaluated repeatedly for randomly selected parameters, it is time consuming to evaluate the covariance matrix $\sigma$ from scratch when searching for the optimal splitting parameters during forest construction. Let $\rho$ be the data dimensionality. The time complexity of covariance matrix construction is $O(\kappa \min(m_{T_l}^2 \rho, m_{T_l} \rho^2) + \kappa \min(m_{T_r}^2 \rho, m_{T_r} \rho^2))$ for $\kappa$ randomly selected parameters. The complexity of trace evaluation is $O(\kappa m_{T_l} \rho + \kappa m_{T_r} \rho)$. The decremental evaluation technique for covariance matrices exploits the fact that the data in each node are a subset of the data in the root node.
Let $\sigma_p$, $\sigma_l$, and $\sigma_r$ denote the covariance matrices of the parent and the two child nodes respectively, and let $\bar{\sigma}_p = \sigma_p (m_{T_p} - 1)$ and $\bar{\sigma}_r = \sigma_r (m_{T_r} - 1)$ denote the corresponding unnormalized scatter matrices, with entries $\bar{\sigma}_{p,ij} = (t_i - \mu_p)^{\top}(t_j - \mu_p)$ and $\bar{\sigma}_{r,ij} = (t_i - \mu_r)^{\top}(t_j - \mu_r)$. Without loss of generality, the left child node is assumed to be larger than the right one, so the covariance matrix of the smaller child node, i.e., the right one, is computed first. For a point pair $(t_i, t_j)$ belonging to both the parent and the right child nodes, substituting $t - \mu_p = (t - \mu_r) - o_r$, where $o_r = \mu_p - \mu_r$, gives the difference of corresponding entries:
$$\bar{\sigma}_{p,ij} - \bar{\sigma}_{r,ij} = \|o_r\|^2 - (t_i - \mu_r)^{\top} o_r - o_r^{\top}(t_j - \mu_r)$$
Let $\bar{\sigma}_r^{*}$ denote the sub-matrix of $\bar{\sigma}_p$ whose columns and rows correspond to points in the right child node.
The trace of the scatter matrix $\bar{\sigma}_r$ of the right child node is derived as
$$\mathrm{tr}(\bar{\sigma}_r) = \mathrm{tr}(\bar{\sigma}_r^{*}) - m_{T_r} \|o_r\|^2 \qquad (13)$$
where $o_r = \mu_p - \mu_r$ is the displacement vector from the centroid of the right child node to that of the parent; the cross terms vanish since $\sum_{t_i \in T_r} (t_i - \mu_r) = 0$. Equivalently, $\|o_r\|^2 = c_r - 2\mu_r^{\top} o_r$, with the right-child-related constant $c_r = \|\mu_p\|^2 - \|\mu_r\|^2$.
Given $\mathrm{tr}(\bar{\sigma}_r)$, the trace of $\bar{\sigma}_l$ is computed as
$$\mathrm{tr}(\bar{\sigma}_l) = \mathrm{tr}(\bar{\sigma}_p) - \mathrm{tr}(\bar{\sigma}_r) - m_{T_l} \|o_l\|^2 - m_{T_r} \|o_r\|^2 \qquad (14)$$
where $o_l = \mu_p - \mu_l$. Given the randomly selected splitting parameters, the centroids $\mu_l$ and $\mu_r$ of the left and right child nodes, as well as the norms $\|\mu_l\|$ and $\|\mu_r\|$, are computed. Next, the trace of the scatter matrix of the smaller child node is computed from the sub-matrix extracted from the parent node as in Eq. (13). The trace of the scatter matrix of the other child node is then obtained from $\mathrm{tr}(\bar{\sigma}_p)$ and $\mathrm{tr}(\bar{\sigma}_r)$ as in Eq. (14). Since only the traces of the covariance matrices are needed to estimate the information gain in our system, the complexity of covariance matrix evaluation is reduced from $O(m_{T_l} \rho + m_{T_r} \rho)$ to $O(\rho)$. Given $\kappa$ randomly selected parameters, the trace evaluation complexity is reduced to $O(\kappa \rho)$.
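The decremental trace evaluation can be checked numerically. The sketch below uses the scatter-matrix identities derived above; the function names are illustrative, and in the actual algorithm $\mathrm{tr}(\bar{\sigma}_p)$ would be cached from the parent split rather than recomputed:

```python
import numpy as np

def scatter_trace(X):
    """Trace of the unnormalized scatter matrix:
    sum of squared distances to the centroid."""
    return np.sum((X - X.mean(axis=0)) ** 2)

def child_traces_decremental(X_parent, right_mask):
    """Traces of both child scatter matrices from parent-level quantities,
    following the identities of Eqs. (13) and (14)."""
    mu_p = X_parent.mean(axis=0)
    X_r, X_l = X_parent[right_mask], X_parent[~right_mask]
    mu_r, mu_l = X_r.mean(axis=0), X_l.mean(axis=0)
    o_r, o_l = mu_p - mu_r, mu_p - mu_l
    # Eq. (13): right-child trace from the trace of the parent sub-matrix.
    tr_star_r = np.sum((X_r - mu_p) ** 2)  # trace of the sub-matrix of the parent scatter
    tr_r = tr_star_r - len(X_r) * (o_r @ o_r)
    # Eq. (14): left-child trace from the parent and right-child traces.
    tr_p = scatter_trace(X_parent)  # cached in practice, O(1) per candidate split
    tr_l = tr_p - tr_r - len(X_l) * (o_l @ o_l) - len(X_r) * (o_r @ o_r)
    return tr_l, tr_r
```

Comparing the result against `scatter_trace` applied directly to each child confirms that both traces agree with the from-scratch computation.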

Appendix B Proof of Proposition 1
We prove the functions defined in Eqs. (3)-(5) are metrics as follows.

Eq. (3)
Let $t_i$, $t_j$, $t_k$ be three input instances, with corresponding leaf nodes $\ell(t_i)$, $\ell(t_j)$, $\ell(t_k)$. The common traversal paths are denoted $P_{ij}$, $P_{jk}$, and $P_{ik}$.
Triangle inequality: suppose that $P_{ij}$ is the longest common path. Then the traversal path of $t_k$ diverges from those of $t_i$ and $t_j$ at the same node, so $P_{ik} = P_{jk}$. Since the distance in Eq. (3) decreases as the common path grows, $d(t_i, t_j) \leq d(t_i, t_k) = d(t_j, t_k)$, and each pairwise distance is bounded by the sum of the other two. Similarly, when $P_{jk}$ or $P_{ik}$ is the longest common path, the triangle inequality holds.

Eq. (4)
Triangle inequality: suppose that $T_{p_{ij}}$ is the smallest shared parent node among the three pairs. It follows that $|T_{p_{ik}}| = |T_{p_{jk}}| \geq |T_{p_{ij}}|$, since $t_k$ meets $t_i$ and $t_j$ at a common ancestor of $T_{p_{ij}}$, so the distance between $t_i$ and $t_j$ in Eq. (4) is the smallest of the three and each pairwise distance is bounded by the sum of the other two. Similarly, when $T_{p_{jk}}$ or $T_{p_{ik}}$ is the smallest shared parent node, the triangle inequality holds.

Eq. (5)
Since the function defined in Eq. (5) is a weighted combination of the two metrics defined in Eqs. (3) and (4), it is also a metric.
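For completeness, the argument can be spelled out. Eq. (5) is not reproduced in this section, so the derivation below assumes it has the form $d_5 = \alpha d_3 + (1-\alpha) d_4$ with weight $\alpha \in [0,1]$, where $d_3$ and $d_4$ are the metrics of Eqs. (3) and (4):

```latex
% Non-negativity and symmetry are inherited termwise from d_3 and d_4.
% Identity: for \alpha \in (0,1), d_5(t_i,t_j)=0 forces d_3(t_i,t_j)=d_4(t_i,t_j)=0,
% hence t_i = t_j; the endpoints \alpha \in \{0,1\} reduce to d_4 or d_3 alone.
% Triangle inequality:
d_5(t_i, t_k)
  = \alpha\, d_3(t_i, t_k) + (1-\alpha)\, d_4(t_i, t_k)
  \le \alpha \big( d_3(t_i, t_j) + d_3(t_j, t_k) \big)
    + (1-\alpha) \big( d_4(t_i, t_j) + d_4(t_j, t_k) \big)
  = d_5(t_i, t_j) + d_5(t_j, t_k).
```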
Tae-Kyun Kim received his Ph.D. degree from the University of Cambridge, UK, in 2008, and was a Junior Research Fellow at Sidney Sussex College, Cambridge, from 2007 to 2010. He has been a lecturer in computer vision and learning at Imperial College London since 2010. His research interests span object recognition and tracking, face recognition and surveillance, action and gesture recognition, semantic image segmentation and reconstruction, and man-machine interfaces. He has co-authored over 40 academic papers in top-tier conferences and journals, 6 MPEG-7 standard documents, and 17 international patents. His co-authored algorithm is an international standard in MPEG-7 ISO/IEC for face retrieval.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.