Matching method for mutated veneer sheet images using gray-level co-occurrence matrix features

This paper studies the tracking of wooden veneer sheets by matching their respective wet and dry colour images. The tracking of veneer sheets has proved to be a challenging task due to random mutations during processing in terms of color changes, the emergence of defects, and, occasionally, lost pieces of the veneer surface. The proposed matching procedure involves image segmentation at five different resolutions, followed by segment-wise extraction of Gray Level Co-occurrence Matrix (GLCM) textural feature arrays and their pairwise similarity comparison. A voting mechanism is introduced that assigns the correct match based on the majority of votes. An optional shifting procedure is applied to match candidates with missing areas. The method is demonstrated and benchmarked using a real-world dataset sourced from the industry, comprising 2579 high-quality images of spruce veneer pairs obtained from peeling and drying. In comparison to earlier studies that employed randomized sampling of 50 pairs on the same dataset, our approach yields a matching accuracy of 99.41%, outperforming the previously reported 84.93%. These findings have relevance for researchers in wood image analytics and carry practical implications for large-scale, automated veneer production facilities seeking innovative ways to optimize their raw material usage.


Introduction
Over the past few decades, computer vision applications have increasingly replaced manpower in simple tasks such as quality inspection, inventory management, and equipment monitoring. In large-scale veneer and plywood production, digital imaging is routinely used for detecting defects on veneer sheets' surfaces and classifying sheets into stacks of the same quality after each phase of processing. This has already resulted in significant raw material savings for producers (Kryl et al. 2020), although the quality classes assigned are still specific to each process phase. This is due to the challenges of automated image matching with highly mutated wood surfaces and the scarcity and small size of labeled datasets that would enable the construction of robust algorithms. In other words, the current situation is that individual veneer sheets cannot be reliably tracked over the entire production line despite the availability of image data.
The inability to match sheets in veneer and plywood production presents a significant challenge for developing advanced automation systems. This is because keeping track of products throughout the factory is essential for realizing the Industry 4.0 paradigm, which seeks to optimize production lines holistically using data (see, e.g., Lu 2017a, b). Tracking veneer sheets over multiple pieces of equipment is known to be difficult not only because of the tens of thousands of daily image instances but also because of the defects, color changes, and shrinking of sheets that emerge as a result of physical operations. The defects are random in the sense that the changes within the surface of wood depend on visually unobserved factors such as the microscopic structure of the material and whether a dark area is located inside a knot or on a smooth surface. Furthermore, marking sheets with visible markings is not feasible, as the markings either wear off or compromise the visual quality of the final product (see also Jalonen et al. (2021a) for further discussion). Despite the challenges, a solid economic rationale exists for matching, as it would bring an improved understanding of individual sheet behavior, opening new avenues to dynamic process line optimization and, thus, significant savings, especially in terms of raw materials and energy, which are by far the largest cost items of operations. This paper addresses the gap by proposing a method that divides each sheet image into segments and evaluates the similarity of image segments between sheets with selected Gray Level Co-occurrence Matrix (GLCM) textural features originally introduced by Haralick et al. (1973). The method also includes a shifting procedure for image segments to account for areas torn out during sheet handling and image cropping inaccuracies of the cameras. The method shows excellent performance with the given, manually collected and labeled dataset of 2579 image pairs from an operating veneer/LVL plant, made publicly available by Jalonen et al. (2021b). The results suggest that a properly constructed and calibrated GLCM textural feature-based matching method is a suitable, as well as computationally light, tool for matching wooden sheet images and possibly other similar products with large surface area and high texturization.
In computer vision, image matching is known to be one of the fundamental tasks and can be defined, according to Haralick and Shapiro (1991), as the process of determining correspondence between two images region by region, arc by arc, or pixel by pixel. According to Ma et al. (2021), the general image-matching pipeline consists of two steps: first, extracting the relevant features, and second, matching them with the corresponding features of candidate images. They continue that a traditional approach is the use of handcrafted image features, which is often an effective way to increase performance using expert knowledge. This paper follows the above-described approach by applying texture-based GLCM feature extraction to image regions and comparing the extracted features between pairs of images. The definition of texture is not universal, but it can be summarized as "a nonuniform or varying spatial distribution of intensity or color" (see Cavalin and Oliveira 2017), and it is a fundamental characteristic of appearance on natural surfaces (Liu et al. 2017).
This paper continues with a chapter on related works in the field, followed by a detailed description of the case and the methodological details of the constructed matching process. Chapter four provides the matching results supported by a discussion of the technical alternatives. The performance achieved is reflected against both the existing results of Jalonen et al. (2021a) and the requirements of the industrial application in question. The paper ends with a discussion and conclusions.

Related works
A Siamese neural network (SNN) matching approach with automated feature extraction has been presented earlier for the data in question by Jalonen et al. (2021a) and is used as the point of comparison in this research. Their results show a matching accuracy of 84.93% per 50 sheet pairs, which falls short of our performance expectations for plant-level applications. As an alternative to SNN, this study adopts a textural feature-based matching method.
Texture classification methods, such as GLCM and Local Binary Patterns (LBP), are applied in various fields including medical imaging (Korchiyne et al. 2014; Muntasa and Yusuf 2020), remote sensing such as landscape classification (Gao 2011; Hall-Beyer 2017), industrial inspection, writer identification, video analysis (Lloyd et al. 2017), agriculture (Yogeshwari and Thailambal 2021), and steel production (Bharati et al. 2004; Hsu et al. 2018). LBP has been extensively reviewed by Liu et al. (2017), and a detailed review of textural features has been presented earlier by Cavalin and Oliveira (2017). Bharati et al. (2004) divide textural feature extraction into four categories: statistical, structural, model-based, and transform-based methods, where GLCM falls into the category of statistical methods. The gray-level co-occurrence matrix, as its name indicates, suffers from the inability to use the colors of the textures. In the review of Khaldi et al. (2019), several methods to overcome this restriction have been introduced.
Technically, the Gray Level Co-occurrence Matrix utilizes pixel intensity information and can be used as a direct, area-based matching method (see, e.g., Ma et al. 2021). More precisely, GLCM inspects relative frequencies of relationship values for pairs of pixels (i, j) in an image separated by distance d with angle(s) θ = {0, π/4, π/2, 3π/4}. Often, the distance between the pair of pixels is set to one. The GLCM is computed over the analyzed texture area, which serves as the basis of textural feature calculation. The statistics derived from the GLCM (see, e.g., Cavalin and Oliveira 2017) provide measures for such properties as smoothness, coarseness, and regularity. In the original work of Haralick et al. (1973), a total of 14 textural measures were proposed, out of which only contrast and correlation are used in this work, defined as

\mathrm{Contrast} = \sum_{i}\sum_{j} (i - j)^2 \, p(i, j)

\mathrm{Correlation} = \frac{\sum_{i}\sum_{j} (ij)\, p(i, j) - \mu_x \mu_y}{\sigma_x \sigma_y}

where \mu_x, \mu_y, \sigma_x and \sigma_y are the means and standard deviations of the marginal distributions p_x and p_y.
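As a minimal illustration, the two features can be computed with scikit-image as sketched below. The random patch stands in for an image segment; distance 1 and the four angles follow the general formulation above, not the parameter values used later in this paper.

```python
# Sketch: the two Haralick features used in this paper, via scikit-image.
# The random patch is a stand-in for a real grayscale image segment.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

patch = np.random.randint(0, 256, (64, 64), dtype=np.uint8)

# GLCM: relative frequencies of pixel pairs at distance d and angles theta
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, normed=True)

contrast = graycoprops(glcm, "contrast")       # shape: (n_distances, n_angles)
correlation = graycoprops(glcm, "correlation")
print(contrast, correlation)
```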
In the wood industry, there is a stream of literature describing the use of GLCM for wood species recognition. In a review by Hwang and Sugiyama (2021), it was summarized that GLCM is the most widely used textural feature in wood identification. Some examples of such studies include early research by Tou et al. (2007) using GLCM features with a Multi-layer Perceptron to identify wood species. They achieved a recognition accuracy of 72% with a small test set of 25 images from five different species. Gasim et al. (2013) improved recognition accuracy by dividing the images into four blocks and calculating GLCM and edge-detection features for each block separately before feeding the features to an artificial neural network for the final decision. In a more recent research effort by Zamri et al. (2016), a derivative of the Basic Gray Level Aura Matrix has been reported to outperform GLCM techniques, achieving over 97% accuracy on a test set of 100 images. However, in comparative studies between GLCM and LBP, listed in the review of Hwang and Sugiyama (2021), the LBP methods are shown to be generally more accurate in the task of species recognition.
In the literature study, no prior studies on matching wooden sheets, other than the benchmark study by Jalonen et al. (2021a), were found. This is attributed to the unavailability of labeled wooden-sheet image-pair datasets, which in turn suggests that sheet matching-induced optimization applications may not have been actively considered in the veneer/LVL industry in the past. Although there is a significant body of literature on wood species recognition, as summarized by Hwang and Sugiyama (2021), it is not directly applicable to our study, as our focus is on identifying matching pairs of images rather than assigning them to a particular wood species.

Materials
This study focuses on matching sheet images between the veneer lathe and the dryer, which are crucial stages in the veneer production process, as they impact the final product quality the most. An illustration of the problem setting is shown in Figure 1: first, a block of wood is peeled into a single long sheet that is photographed and automatically cut into veneer sheets of the desired length. Prior to the physical cutting, a virtual cutting plan is generated to optimize the yield of high-quality sheets, and the physical cut is then made to follow the plan as closely as possible. Commonly, there are discrepancies of some tens of centimeters between the virtual and actual cuts, which have to be accounted for by the matching algorithm.
The peeled sheets are classified based on the image and piled into stacks containing approximately 300 sheets each. The stacks reside in the warehouse for several days to weeks before they are sent to the dryer, where they are photographed for the second time after drying and classified independently of the previous peeling phase. It is important to note that the original order of the sheets at the lathe cannot be used to determine matches due to random shuffling during processing, which is necessary to ensure the structural quality of the final layered product. Therefore, it remains unknown, for example, how much the quality of sheets has degraded or what the shrinkage of individual sheets is across the production pipeline of peeling, warehousing, and drying. Manually matching the typical volume of 10,000 sheets that go through the production process each day is infeasible due to the repetitive and tiring nature of the task, not to mention that many sheets, especially those from the same piece of wood, are indistinguishable to the human eye. Therefore, a highly automated computer vision solution is necessary. The data of this study are limited to spruce, which, as a highly textured species of wood, has enabled manual labeling of the dataset.
The conditions of digital imaging are stable, with a fixed distance between the sheet and the camera ensuring that high-quality images are consistently produced under constant lighting. While camera lenses may be subject to disruptions caused by condensed water in some production lines, this issue did not affect the dataset used in this study. The major difficulties in matching lie in the deformations of the sheets (see Figure 2), including defects, changes in size, color changes during processing, and missing pieces at the top or bottom of the image pairs. The dataset of sheet images used in this study represents the realistic situation on the factory floor, where the individual stacks containing 300 sheets each are tracked loosely. The red paint used for manual identification is part of the actual production process; however, it plays no role in the adopted GLCM-based matching method, as more visible features dominate, that is, the marking does not constitute a distinctive textural feature between candidates.
It is physically possible for sheets to be rotated 180° or overturned during processing, but this would require some type of manual operation between peeling and drying. No such cases were observed in the image-labeling phase, and thus this possibility is not accounted for in this study. However, it should be noted that as the sheets are frequently moved in stacks using forklifts, a potential 180° rotation should be considered on a case-by-case basis when matching in subsequent processing stages after drying.

Method
Area-based feature matching methods are known to be prone to rotation, scaling, and local deformation (see, e.g., Ma et al. 2021). These problems were mitigated with initial pre-processing: the original color images, sized 3350x3360 pixels (9 to 11 Mb each), were grayscaled and downscaled to a width of 800 pixels, preserving the original aspect ratio. As a result of this pre-processing step, the file size of each image was reduced to between 0.27 and 0.29 Mb. Then, the extra white space around the sheet images was removed in two phases. First, a rectangle around the possibly rotated sheet was cut out, leaving white corners. Next, the rectangle was rotated to horizontal alignment, shifting the white space from the corners to the top and bottom of the sheet, where it could be cropped without sacrificing image information. However, due to high temperatures that induce sheet distortions and broken edges (as seen in the upper left corner of the right-hand image in Figure 2), it is not always possible to achieve a rotated image with no white space. Thus, an iterative rotation method was adopted in which the image is rotated in very small increments and the best fit is taken to be the angle yielding the darkest average brightness value on the scale of 0 to 255. Subsequently, the images were segmented into n x n squares (n = 10, 20, …, 50), referred to as "grids", and a GLCM was calculated with 256 gray levels for each segment using line length d = 5 and angle θ = 0, chosen based on subjective evaluation. An example of the resulting matrices of feature values is shown in Figure 3. Note that the method is scalable to any resolution of segmentation and is not limited to the set of five segmentations used here.
[Fragment of the Figure 2 caption: … paint marking to help manual labeling; Red dashed squares: torn-out pieces or discrepancy in digital versus physical cutting; Yellow triangles: example knots dropped out; Green cloud: ripped corner; White ellipses: color and visibility of wood grains change noticeably.]
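As a rough illustration, the following sketch implements the pre-processing and grid-wise feature extraction described above using OpenCV and scikit-image. The file name, the rotation search range, and the step size are illustrative assumptions, not the values used in the experiments.

```python
# Sketch of the pre-processing pipeline: resize, iterative rotation,
# and segmentation into n x n grids of GLCM contrast/correlation values.
import numpy as np
import cv2
from skimage.feature import graycomatrix, graycoprops

def best_rotation(img, max_deg=3.0, step=0.1):
    """Iterative rotation search: the best fit is the angle with the
    darkest mean brightness, i.e., the least white background left."""
    h, w = img.shape
    best_angle, best_mean = 0.0, 256.0
    for angle in np.arange(-max_deg, max_deg + step, step):
        m = cv2.getRotationMatrix2D((w / 2, h / 2), float(angle), 1.0)
        rotated = cv2.warpAffine(img, m, (w, h), borderValue=255)
        if rotated.mean() < best_mean:
            best_angle, best_mean = float(angle), rotated.mean()
    return best_angle

def glcm_grid(img, n):
    """Split the image into an n x n grid and return per-segment contrast
    and correlation arrays (the 'grids' referred to in the text)."""
    seg_h, seg_w = img.shape[0] // n, img.shape[1] // n
    contrast, correlation = np.empty((n, n)), np.empty((n, n))
    for r in range(n):
        for c in range(n):
            seg = img[r * seg_h:(r + 1) * seg_h, c * seg_w:(c + 1) * seg_w]
            glcm = graycomatrix(seg, [5], [0], levels=256, normed=True)
            contrast[r, c] = graycoprops(glcm, "contrast")[0, 0]
            correlation[r, c] = graycoprops(glcm, "correlation")[0, 0]
    return contrast, correlation

img = cv2.imread("sheet.png", cv2.IMREAD_GRAYSCALE)      # hypothetical file
img = cv2.resize(img, (800, int(img.shape[0] * 800 / img.shape[1])))
angle = best_rotation(img)
m = cv2.getRotationMatrix2D((img.shape[1] / 2, img.shape[0] / 2), angle, 1.0)
img = cv2.warpAffine(img, m, (img.shape[1], img.shape[0]), borderValue=255)
grids = {n: glcm_grid(img, n) for n in (10, 20, 30, 40, 50)}
```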
To look for matches between the GLCM arrays (Figure 3), cosine similarity is used as the similarity metric. Cosine similarity measures the cosine of the angle between two vectors, producing a score between zero and one, with one denoting the highest possible match. Mathematically, it is defined as the dot product of the two vectors divided by the product of their Euclidean norms (see, e.g., Han et al. 2012):

\cos(\mathbf{a}, \mathbf{b}) = \frac{\mathbf{a} \cdot \mathbf{b}}{\lVert \mathbf{a} \rVert \, \lVert \mathbf{b} \rVert}

The cosine similarity method was chosen because it emphasizes matching GLCM-feature sections between the arrays, whereas non-matching sections contribute practically zero value. That is, distinct, mutually shared textural feature patterns between candidates are likely to score high similarity values, while candidates with merely near-similar patterns are disregarded. In other words, cosine similarity is effective at identifying strong matches while filtering out weaker candidates that may appear visually similar.
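A direct NumPy transcription of the formula above, applied to two flattened feature arrays:

```python
# Cosine similarity between two GLCM feature arrays (any shape).
import numpy as np

def cosine_similarity(a, b):
    a, b = a.ravel(), b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```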
On the algorithm level, the overall matching process includes two distinct phases, depicted in Figure 4. First, the pre-calculated GLCM feature arrays of the sheets are compared directly, covering a total of five image grids times two features (contrast and correlation). Second, the remaining unmatched sheets are compared with each other by "sliding" the reduced peeling-sheet GLCM feature matrices over their drying-side counterparts using a selected number of grids. This process is repeated in a loop until all the sheets have been matched. In the case of unlabeled data, however, this stopping criterion should be relaxed to only a few iterations, since the number of actual pairs is then unknown, unlike with the labeled dataset used in this study.
From a technical perspective, a calculation object, "sheetPic", was written for each sheet, storing both the miniaturized image and the GLCM grids. This allows the GLCMs to be pre-calculated at the peeling stage so that they are available for comparison as soon as the respective object becomes available from drying. A pseudo-code for the GLCM-based matching of a single drying sheet object against a list of peeling sheet candidates is provided in Appendix Algorithm A1 in the Electronic Supplementary Material; it implements the phase 1 matching decision (see Figure 4). The accept/reject outcome is based on a majority voting mechanism: the match is calculated for each grid and feature separately by ranking the correlation and contrast similarities (highest similarity = rank 1) for every grid. The entries within the user-given rank threshold are shortlisted, and the shortlist is aggregated by the number of votes received, with each grid given one vote in the voting process. The candidate with the highest number of votes is returned as a match, provided that a vote count threshold is reached, usually set to 50% rounded to an integer; for instance, in the case of five grids, three votes are needed to get elected. Once a match has been decided, the accepted sheets are removed from the database of unmatched items, which speeds up the matching process. Within this study, it was not inspected whether the matching accuracy would have improved further if the whole matching process had been run to completion before match allocations.
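The following is a simplified sketch of this voting decision; the exact rules are specified in Algorithm A1 of the supplement. Here `ranked` is assumed to map each (grid, feature) pair to candidate names ordered by descending cosine similarity, and the thresholds shown are illustrative.

```python
# Simplified phase 1 voting: each grid casts one vote per candidate it
# shortlists (ranked within the threshold for contrast or correlation).
from collections import Counter

def vote_match(ranked, grids=(10, 20, 30, 40, 50),
               rank_threshold=1, vote_share=0.5):
    votes = Counter()
    for n in grids:
        shortlist = set()
        for feature in ("contrast", "correlation"):
            shortlist |= set(ranked[(n, feature)][:rank_threshold])
        for name in shortlist:
            votes[name] += 1
    if not votes:
        return None
    winner, count = votes.most_common(1)[0]
    needed = round(vote_share * len(grids))    # e.g., 3 votes out of 5 grids
    return winner if count >= needed else None  # None = match rejected
```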
In the second phase of the matching (Figure 4, phase 2), Algorithm A2 (see Appendix in Electronic Supplementary Material) is applied, which uses the unmatched sheet objects of phase 1 as input. Here, a set of three grids (20x20, 30x30, and 40x40) was used, and a sliding window of six was set as the default, meaning that the three uppermost and three undermost rows are removed from the peeling-sheet GLCMs. This reduced feature array is then matched against varying versions of the drying-sheet GLCM feature arrays with 0 to 6 rows cut from the top and/or bottom. With three grids, there are a total of six candidate proposals per comparison (two features times three grids). The match is given in a voting mechanism to the sheet that has the largest sum of differences to the second-best candidate. The matching is done for the whole list of sheets at once, followed by duplicate removal; that is, in the case of duplicate peeling-sheet matches for a single drying sheet, no decision is given. The matching of the remaining unmatched items is then repeated until no further iterations are needed, i.e., there is either one pair or no pairs left to match. A notable aspect of the second phase is that attention is paid to the distinctiveness of the potential sheet image pairs compared to the other available candidates, rather than aiming for the maximum similarity of the images.
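A minimal single-feature sketch of the sliding comparison follows, assuming each grid is stored as an n x n NumPy array of feature values; the full method additionally crops the drying-side array from the top and bottom independently and applies the distinctiveness-based vote described above.

```python
# Sliding the reduced peeling-side grid over the drying-side grid and
# returning the best cosine similarity across vertical offsets.
import numpy as np

def slide_similarity(peel_grid, dry_grid, window=6):
    half = window // 2
    reduced = peel_grid[half:-half]       # trim 3 top and 3 bottom rows
    n_rows = reduced.shape[0]
    best = -1.0
    for offset in range(window + 1):      # cuts of 0..window rows
        candidate = dry_grid[offset:offset + n_rows]
        if candidate.shape[0] != n_rows:
            break
        a, b = reduced.ravel(), candidate.ravel()
        best = max(best, float(np.dot(a, b) /
                               (np.linalg.norm(a) * np.linalg.norm(b))))
    return best
```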
It is acknowledged that the number of grids and the sliding window used in phase 2 could be further optimized. For instance, in the present configuration, the fixed sliding window of six produces unequal matched areas of 70% ((20-6)/20), 80% ((30-6)/30), and 85% ((40-6)/40). To ensure equal areas for grid-wise comparison, future methodological developments could make the sliding window size dependent on the grid size.
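A worked check of the matched-area shares quoted above, together with a hypothetical grid-proportional rule (window = 30% of rows) that would keep the matched area equal across grids:

```python
# Fixed window of six versus a hypothetical proportional window.
for n in (20, 30, 40):
    fixed = (n - 6) / n                    # 70%, 80%, 85%
    prop = (n - round(0.3 * n)) / n        # 70% for all three grids
    print(f"n={n}: fixed={fixed:.0%}, proportional={prop:.0%}")
```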
The experiments were coded in Python using publicly available and widely used libraries including cv2, pandas, NumPy, PIL, and skimage. The full code is available in the Electronic Supplementary Material and on GitHub (https://github.com/jygi/veneer-image-pair-matching). The algorithm's performance was not optimized in this work. Both the feature matrix extraction for images and the subsequent matching computations can be parallelized across multiple CPUs, which can significantly reduce the calculation time in proportion to the available computing power. Alternatively, a Ball Tree or KD Tree type algorithm could speed up the process; however, these algorithms are not currently supported by default for cosine similarity calculations.

Matching performance and comparison with previous works

Accuracy is reported as the key performance indicator. In the benchmark study, an accuracy of 84.93% was reached (m = 50), whereas the GLCM-based methodology presented above yielded 99.72%. The inspection of failure cases revealed 15 wrongly labeled pairs and four pairs with defective images in the initial data, resulting in a total of 2560 valid pairs. After excluding the false instances, the matching tests were repeated, resulting in a slight increase in accuracies. The results are presented in Figure 5 and Appendix Table A1 in the Electronic Supplementary Material. The results in Figure 5 (left) using the clean dataset show consistent performance across the cluster sizes tested, indicating that the algorithm did not reach its limits in distinguishing between sheets within the given total number of pairs (2560). It was stated earlier that a sample size of 300 sheet pairs could be used in a real application setting, provided that some metadata on the sheets' whereabouts in different stacks is available (see also Fig. 1 for the overall process illustration). To further highlight this observation, an additional plot (Fig. 5, right) shows the expected number of false matches in a set of 300 sheets. However, approximate knowledge of individual stack contents is rarely available in practice, and therefore the entire dataset (n = 2579) is matched in one go, corresponding to roughly eight and a half stacks of 300 sheets each. Another point to keep in mind is that some veneer manufacturers store huge proprietary image datasets of their production, to which the algorithm can also be applied for data mining purposes.

The results of the proposed method outperform the benchmark analysis of Jalonen et al. (2021a), which utilized an SNN with automated feature extraction and machine learning-based matching. A detailed analysis of false positive matches revealed that most pairing errors occur with very dark textured peeling sheets, possibly with a high number of knots, that undergo a significant color transformation from dark to bright during drying (see Figure 6).

Sensitivity analysis of selected image segmentation
The assumption in the previous analysis was that the best matching results would be obtained by using all five available grids. To validate this assumption, a sensitivity analysis was conducted to determine whether an alternative combination of grids could produce comparable results with less computation. The analysis involved varying the total number of grids from one to five and testing all possible combinations. For example, if four grids are used, there are five combinations to test, namely {[10, 20, 30, 40], [10, 20, 30, 50], [10, 20, 40, 50], [10, 30, 40, 50], [20, 30, 40, 50]}. For each combination, 100 randomly sampled pairs were matched 100 times. The results of the sensitivity analysis are illustrated in Figure 7 and Appendix Table A2 in the Electronic Supplementary Material. Based on the sensitivity analysis, the previously reported approach of using all five grids provides the best performance for matching veneer sheets. While the performance difference between using three or four grids is not significant, the use of a single grid with a coarse resolution of 10x10 results in a significant drop in matching accuracy to below 85%. However, when combined with higher resolutions, the 10x10 grid appears beneficial. For instance, comparing the matching accuracy of the five-grid approach [10, 20, 30, 40, 50] at 99.76% to the four-grid alternative [20, 30, 40, 50] at 99.47%, there is only a slight drop of 0.29 percentage points. This indicates that important information about the textural features is conveyed by both the low- and high-resolution abstractions, which are aggregated by voting to decide the correct pair. This observation is reminiscent of Convolutional Neural Networks (CNNs) (see, e.g., Dhillon and Verma 2020), where detailed and less detailed features are extracted simultaneously. Within the scope of this paper, the detailed mechanism of the interplay between the GLCM grids is not addressed further.
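A sketch of the combination enumeration behind this analysis; `evaluate` is a hypothetical stand-in for running the full matching pipeline on 100 randomly sampled pairs, repeated 100 times.

```python
# Enumerating all non-empty grid combinations for the sensitivity analysis.
from itertools import combinations

grids = [10, 20, 30, 40, 50]
combos = [c for k in range(1, len(grids) + 1)
          for c in combinations(grids, k)]
print(len(combos))  # 31 combinations in total (2**5 - 1)
# for combo in combos: accuracy = evaluate(combo, n_pairs=100, repeats=100)
```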

Discussion
The performance of the methodology can be evaluated in light of the discussion by Cavalin and Oliveira (2017), who state that textural analysis remains important in areas where data or computing resources are too limited for training complex models such as CNNs. In the case of veneer production, both premises hold, and the benchmark study might be seen as an example of how more advanced methodologies produce subpar results when the availability of data is limited. It appears that this study managed to extract relevant textural features that reach two competing goals, formulated by Liu et al. (2017), at once: first, low computational complexity for real-time operation, and second, capturing the most representative texture information, which remains distinguishable in the presence of image distortions.
It is acknowledged that there are other established, keypoint-based matching methods outside the broad scope of artificial neural networks that could be worth considering. These methods, not reported in this paper, include the Scale Invariant Feature Transform (SIFT) (Lowe 2004), Speeded Up Robust Features (SURF) (Bay et al. 2006), and Oriented FAST and Rotated BRIEF (ORB) (Rublee et al. 2011); as more advanced and potentially applicable computational techniques, they call for a justification of GLCM as the methodological choice. SIFT and SURF are examples of gradient statistic-based methods, built on difference-of-Gaussians and Haar-wavelet responses, respectively, whereas ORB uses local intensity comparison-based descriptors.
In this work, SIFT, SURF, and ORB were tested on the data as an initial trial using whole images. Preliminary testing with approximately 300 image pairs at once resulted in a matching performance of around 40 to 55%, depending on the image pre-processing. Segmenting the images before matching brought no added performance, as the segmented images were likely too small for efficient keypoint detection. Two major issues were identified with the keypoint-based methods. Firstly, many of the distinctive features picked by the keypoint detector(s) are found in several candidate sheets simultaneously, such as a symmetrical clustering of knots irrespective of their location or a highly visible wood grain pattern. Image segmentation might partially overcome this issue, but brief testing showed no noticeable improvement in performance. In fact, sub-images led to deteriorating accuracies, as the total number of keypoints picked by the detectors decreased compared to using full images (800 px wide). Secondly, keypoint methods may be misled by the change in sheet coloring, meaning that a distinctive grain pattern might be detected in a dry sheet that does not appear clearly in the original peeling image. Therefore, it is concluded that, in this specific application, the relative inaccuracy of the textural features picked by the GLCM becomes a major benefit, as the feature extraction with the five selected segmentation levels "generalizes" the overall look of a single sheet, which can still be identified later despite the mutations incurred.
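For reference, the kind of keypoint baseline tested here can be sketched with OpenCV's ORB as below; the file names and parameters are illustrative assumptions, not the configuration used in the experiments.

```python
# ORB keypoint baseline: detect, describe, and brute-force match with
# Hamming distance and cross-checking.
import cv2

img1 = cv2.imread("peel.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
img2 = cv2.imread("dry.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)                        # assumes both non-empty
score = sum(m.distance < 40 for m in matches)         # crude similarity score
```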
In this study, the algorithm's performance was tested using random sampling of image pairs from the dataset, similarly to the benchmark study. For non-labeled, very large datasets, the matching performance, while likely remaining super-human, is probably affected by the nearly identical-looking sheets originating from the same block of wood. As this study did not have access to such an extensive dataset, this issue could not be addressed. From a results-verification point of view, it should be highlighted that some subjective choices and adjustments were made regarding the numerical parameters of GLCM, including image resizing, segmentation (n), GLCM feature selection (two features in use), line length (d), angle (θ), and the sliding window size for partial matching, all of which affect matching performance. Thus, there is room for systematic optimization to improve performance, particularly if a larger labeled dataset becomes available. Nevertheless, a trade-off must be maintained between the computational/analysis effort and the potential gains in accuracy: ideally, the veneer matching algorithm should not be excessively heavy to compute, to ensure real-time operation with tens of thousands of images. Regarding the false-positive matches, the algorithm could be developed to identify problematic sheets (see the example in Fig. 6) based on simple, averaged image metrics and defer their matching to the very last stages, as a separate subset, when the set of possible candidates is at its smallest. However, this might be easier said than done because of the randomness of the changes incurred during processing.
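A hypothetical pre-screen along these lines: flag very dark peeling sheets (the failure mode of Figure 6) by their mean brightness so that their matching can be deferred to the final rounds. The threshold value is an illustrative guess.

```python
# Flag dark sheets, which are prone to strong dark-to-bright transformation.
import cv2

def is_risky(path, threshold=80):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return img.mean() < threshold  # mean brightness on the 0..255 scale
```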

Conclusion
It was shown that properly configured GLCM feature extraction combined with cosine similarity comparison can be used for automated image matching of highly textured and mutated veneer sheet images. The matching accuracy reached was 98.63% with 2560 image pairs. The images used in this research were of spruce, which exhibits a range of distinct features, including knots, holes, and visible grain patterns, any of which could be used to assess pairwise correspondence between sheets. However, the random mutation of the sheet features, along with color alterations and the presence of near-matches originating from the same piece of wood, poses a significant challenge for non-texture-based image matching algorithms, such as keypoint-based methods and Siamese neural networks. The matching performance achieved here can be considered high enough to be used for sheet-specific process control and thus for optimizing production processes, which in turn improves both the sustainable use of materials and the economic value of veneer production.
In the field of image analytics, deep neural networks are becoming increasingly popular. However, the lack of sufficiently large datasets is a major obstacle to the development of such models. Applying GLCM for sheet matching in the veneer industry, as described in this paper, could be leveraged to produce large, labeled datasets of paired wooden sheets that could be later used to train Deep Learning models for novel data analytics applications in the context of veneer and LVL.

Author contributions NA (one author).
Funding Open Access funding provided by LUT University (previously Lappeenranta University of Technology (LUT)). This work was supported by The Strategic Research Council (SRC) at the Academy of Finland under Grant 313396 (Manufacturing 4.0).

Conflict of interest
The authors have no financial or proprietary interests in any material discussed in this article.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.