1 Background

Using data mining techniques to probe and establish process‐structure‐property relationships has witnessed growing interest owing to their ability to accelerate the process of tailoring materials by design. Before the advent of data mining techniques, scientists used a variety of empirical and diagrammatic techniques [1], like Pettifor maps [2], to establish relationships between structure and mechanical properties. Pettifor maps, one of the earliest graphical representation techniques, are exceedingly efficient, except that they require a thorough understanding of, and intuition about, the materials. Recent progress in computational capabilities has seen the advent of more complicated paradigms, so‐called virtual interrogation techniques, which span from first‐principles calculations to multi‐scale models [3]‐[7]. These complex multi‐physics and/or statistical techniques and simulations [8],[9] result in an integrated set of tools that can predict the relationships between chemical, microstructural, and mechanical properties, producing an exponentially large collection of data. Simultaneously, experimental methods (combinatorial materials synthesis [10],[11], high‐throughput experimentation, atom probe tomography) allow the synthesis and screening of a large number of materials while generating huge amounts of multivariate data.

A key challenge is then to efficiently probe this large data to extract correlations between structure and property. This data explosion has motivated the use of data mining techniques in materials science to explore, design, and tailor materials and structures. A key stage in this process is to reduce the size of the data while minimizing the loss of information. This process is called data dimensionality reduction. By definition, dimensionality reduction (DR) is the process of reducing the dimensionality of a given set of (usually unordered) data points and extracting the low‐dimensional (or parameter‐space) embedding, with a desired property (for example, distance or topology) preserved throughout the process. Examples of DR methods are principal component analysis (PCA) [12], Isomap [13], Hessian locally linear embedding (hLLE) [14], etc. Applying DR methods enables visualization of the high‐dimensional data and also estimates the optimal number of dimensions required to represent the data without considerable loss of information. Additionally, burgeoning cyberinfrastructure‐based tools and collaborations sustained by the government's recent Materials Genome Initiative (MGI) provide a great platform for leveraging data dimensionality reduction tools. This will enable the integration of information obtained from individual high‐throughput simulation and experimentation efforts in various domains (e.g., mechanical, electrical, electro‐magnetic) and at multiple length‐scales (macro‐meso‐micro‐nano) in a fashion never seen before [15].

Data dimensionality reduction is not a novel concept. Page [16] describes different techniques of data reduction and their applicability to establishing process‐structure‐property relationships. Statistical methods like PCA [17] and factor analysis (FA) [18] have been used on materials data generated by first‐principles calculations or by experimental methods. However, when used to establish process‐structure‐property relationships, dimensionality reduction techniques like PCA and factor analysis traditionally assume a linear relationship among the variables. This assumption is often not strictly valid; the data usually lies on a non‐linear manifold (or surface) [13],[19]. Non‐linear dimensionality reduction (NLDR) techniques can be applied to unravel the non‐linear structure from unordered data. An example of such an application, constructing a low‐dimensional stochastic representation of property variations in random heterogeneous media, is [19]. Another exciting application of data dimensionality reduction is in combination with quantum mechanics‐based calculations to predict structure [20]‐[22]. For a more mathematical treatment of linear and non‐linear DR techniques, the interested reader can consult [23],[24].

In this paper, the theory and mathematics behind various linear and non‐linear dimensionality reduction methods are explained. The mathematical aspects of dimensionality reduction are packaged into an easy‐to‐use software framework called the Scalable Extensible Toolkit for Dimensionality Reduction (SETDiR), which (a) provides a user‐friendly interface that successfully abstracts the user from the mathematical intricacies, (b) allows for easy post‐processing of the data, and (c) represents the data in a visual format and allows the user to store the output in a standard digital image format (e.g., JPEG), thus making the data more tractable and providing an intuitive understanding of it. We conclude by applying the techniques discussed to a dataset of apatites [25]‐[29] described using several structural descriptors. This paper is an extension of our recent work [30]. Apatites ($A^{I}_4A^{II}_6(BO_4)_6X_2$) have the ability to accommodate numerous chemical substitutions and hence represent a unique family of crystal chemistries with properties catering to many technological applications, such as toxic element immobilization, luminescence, and electrolytes for intermediate‐temperature solid oxide fuel cells, to name a few [25]‐[29].

The outline of the paper is as follows: The section ‘Methods: dimensionality reduction’ briefly describes the concepts of DR, the algorithms, and the estimators that can be used to determine the intrinsic dimensionality. The software framework developed to apply DR techniques is described in the section ‘Software: SETDiR’. The section ‘Results and discussion’ discusses the interpretation of the low‐dimensional results obtained by applying SETDiR to the apatite dataset.

2 Methods: dimensionality reduction

The problem of dimensionality reduction can be formulated as follows. Consider a set of data, X, consisting of n data points $x_i$, each vectorized into a ‘column’ vector of size D, where D is usually large. Thus, $X = \{x_0, x_1, \ldots, x_{n-1}\}$ is a set of n points, where $x_i \in \mathbb{R}^D$ and $D \gg 1$. Visualizing and analyzing correlations, patterns, and connections within a high‐dimensional dataset is difficult. Hence, we are interested in finding a set of equivalent low‐dimensional points, $Y = \{y_0, y_1, \ldots, y_{n-1}\}$, that exhibits the same correlations, patterns, and connections as the high‐dimensional data. This is mathematically posed as: find $Y = \{y_0, y_1, \ldots, y_{n-1}\}$ such that $y_i \in \mathbb{R}^d$, $d \ll D$, and $\forall i,j$: $|x_i - x_j|_h = |y_i - y_j|_h$. Here, $|a - b|_h$ denotes a specific norm that captures the properties, connections, or correlations we want to preserve during dimensionality reduction [23].

For instance, by defining h as the Euclidean norm, we preserve Euclidean distance, thus obtaining a reduction equivalent to the standard technique of PCA [12]. Similarly, defining h to be the angular distance (or conformal distance [31]) results in locally linear embedding (LLE) [32], which preserves local angles between points. In a typical application [33],[34], $x_i$ represents a state of the analyzed system, e.g., a temperature field, a concentration distribution, or characteristic properties of a system. Such a state description can be derived from experimental sensor data or can be the result of a numerical simulation. However, irrespective of the source, it is characterized by high dimensionality; that is, D is typically of the order of $10^2$ to $10^6$ [35],[36]. While $x_i$ represents just a single state of the system, contemporary data acquisition setups deliver large collections of such observations, which correspond to the temporal or parametric evolution of the system [33]. Thus, the cardinality n of the resulting set X is usually large (n ∼ $10^2$ to $10^5$). Intuitively, information obfuscation increases with the data dimensionality. Therefore, in the process of DR, we seek as small a dimension d as possible, given the constraints induced by the norm $|a-b|_h$ [23]. Routinely, d < 4, as it permits, for instance, visualization of the set Y.

The key mathematical idea underpinning DR can be explained as follows: We encode the desired information about X, i.e., topology or distance, in its entirety by considering all pairs of points in X. This encoding is represented as a matrix $A \in \mathbb{R}^{n \times n}$. Next, we subject matrix A to a unitary transformation V, i.e., a transformation that preserves the norm of A (thus preserving the connectivities and correlations in the data), to obtain its sparsest form $\Lambda$, where $A = V \Lambda V^T$. Here, $\Lambda \in \mathbb{R}^{n \times n}$ is a diagonal matrix with rapidly diminishing entries. As a result, it is sufficient to consider only a small number, d, of the entries of $\Lambda$ to capture all the information encoded in A. These d entries constitute the set Y. The above procedure hinges on the fact that unitary transformations preserve the original properties of A [37]. Note also that it requires a method to construct the matrix A in the first place. Indeed, what differentiates the various spectral data dimensionality reduction methods is the way information is encoded in A.

We focus on four different DR methods: (a) PCA, a linear DR method; (b) Isomap, a non‐linear isometry‐preserving DR method; (c) LLE, a non‐linear conformal‐preserving DR method; and (d) Hessian LLE, a topology‐preserving DR method.

2.1 Principal component analysis

PCA is a powerful and popular DR strategy due to its simplicity and ease of implementation. It is based on the premise that the high‐dimensional data is a linear combination of a set of hidden low‐dimensional axes. PCA then extracts these latent parameters, or low‐dimensional axes, by reorienting the axes of the high‐dimensional space in such a way that the captured variance is maximized [23].

PCA algorithm

1. Compute the pair‐wise Euclidean distance for all points in the input data X. Store it as a matrix [E].

2. Construct a matrix [W] such that the elements of [W] are −0.5 times the square of the elements of the Euclidean distance matrix [E].

3. Find the dissimilarity matrix [A] by double centering [W]:

$[A] = H^T [W] H$ (1)

$H_{ij} = \begin{cases} 1 - 1/n, & i = j \\ -1/n, & i \neq j \end{cases}$ (2)

4. Solve for the largest d eigenpairs of [A]:

$[A] = U \Lambda U^T.$ (3)

5. Construct the low‐dimensional representation in $\mathbb{R}^d$ from the eigenpairs:

$Y = I_{d \times n} \Lambda^{1/2} U^T.$ (4)

The role of the identity matrix $I_{d \times n}$ is to extract the most important d dimensions from the eigenpairs of [A].
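For concreteness, the five steps above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the SETDiR implementation; the function and variable names are ours.

```python
import numpy as np

def pca_from_distances(X, d):
    """Steps 1-5 above: reduce n points in R^D (rows of X) to R^d."""
    n = X.shape[0]
    # Step 1: pair-wise Euclidean distance matrix [E]
    E = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    # Step 2: [W] = -0.5 * (element-wise square of [E])
    W = -0.5 * E**2
    # Step 3: double centering with H from Eq. (2)
    H = np.eye(n) - np.ones((n, n)) / n
    A = H.T @ W @ H
    # Step 4: largest d eigenpairs of the symmetric matrix [A]
    vals, vecs = np.linalg.eigh(A)                   # ascending order
    vals, vecs = vals[::-1][:d], vecs[:, ::-1][:, :d]
    # Step 5: Y = Lambda^{1/2} U^T, truncated to the top d dimensions
    return np.diag(np.sqrt(np.maximum(vals, 0))) @ vecs.T  # d x n coordinates
```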

The limitation of PCA is that it assumes the data lies on a linear space and hence performs poorly on data that is inherently non‐linear. In such cases, PCA also tends to over‐estimate the dimensionality of the data.

2.2 Isomap

Isomap relaxes the assumption of PCA that the data lies on a linear space. A classic example of a non‐linear manifold is the Swiss roll. Figure 1 shows how PCA tries to fit the best linear plane, while Isomap unravels the low‐dimensional surface. Isomap essentially smooths out the non‐linear manifold into a corresponding linear space and subsequently applies PCA. This smoothing can be understood intuitively in the context of a spiral: the ends of the spiral are pulled outward to straighten it into a line. Isomap accomplishes this objective mathematically by ensuring that the geodesic distances between data points are preserved under the transformation. The geodesic distance is the distance measured along the curved surface on which the points rest [23]. Since it preserves (geodesic) distances, Isomap is an isometric (distance‐preserving) transformation. The underlying mathematics of the Isomap algorithm assumes that the data lies on a manifold that is convex (but not necessarily linear). Note that both PCA and Isomap are isometric mappings; PCA preserves the pair‐wise Euclidean distances of the points, while Isomap preserves the geodesic distances.

Figure 1. Comparison of the performance of PCA and Isomap on a dataset lying on a non‐linear manifold.

Isomap algorithm

1. Compute the pair‐wise Euclidean distance matrix [E] from the input data X.

2. Compute the k‐nearest neighbors of each point from the distance matrix [E].

3. Compute the pair‐wise geodesic distance matrix [G] from [E]. This is done using Floyd's algorithm [38].

4. Construct a matrix [W] such that the elements of [W] are −0.5 times the square of the elements of the geodesic distance matrix [G].

5. Find the dissimilarity matrix [A] by double centering [W]:

$[A] = H^T [W] H$ (5)

$H_{ij} = \begin{cases} 1 - 1/n, & i = j \\ -1/n, & i \neq j \end{cases}$ (6)

6. Solve for the largest d eigenpairs of [A]:

$[A] = U \Lambda U^T.$ (7)

7. Construct the low‐dimensional representation in $\mathbb{R}^d$ from the eigenpairs:

$Y = I_{d \times n} \Lambda^{1/2} U^T.$ (8)

The non‐linearity in the data is accounted for by using the geodesic distance metric. The graph distance is used to approximate the geodesic distance [39]. The graph distance between a pair of points in a graph (V, E) is the length of the shortest path connecting the two points. The graph distances are calculated using the well‐known Floyd's algorithm [38].
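A minimal sketch of these steps follows, with SciPy's Floyd‐Warshall shortest‐path routine standing in for step 3. As before, this is illustrative rather than the SETDiR code; k and d are the user‐chosen neighborhood size and target dimensionality.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import shortest_path

def isomap(X, k, d):
    n = X.shape[0]
    E = squareform(pdist(X))                      # Step 1: Euclidean distances
    # Step 2: keep only edges to the k nearest neighbors of each point
    knn = np.full((n, n), np.inf)
    for i in range(n):
        nbrs = np.argsort(E[i])[1:k + 1]
        knn[i, nbrs] = E[i, nbrs]
    knn = np.minimum(knn, knn.T)                  # symmetrize the graph
    # Step 3: geodesic distances via Floyd-Warshall shortest paths
    G = shortest_path(knn, method='FW', directed=False)
    # Steps 4-7: double centering and eigen-decomposition, exactly as in PCA
    W = -0.5 * G**2
    H = np.eye(n) - np.ones((n, n)) / n
    vals, vecs = np.linalg.eigh(H.T @ W @ H)
    vals, vecs = vals[::-1][:d], vecs[:, ::-1][:, :d]
    return np.diag(np.sqrt(np.maximum(vals, 0))) @ vecs.T
```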

2.3 Locally linear embedding

In contrast to PCA and Isomap, which preserve distances, LLE preserves the local topology (local orientation, or angles between data points). LLE uses the notion that, locally, a non‐linear manifold (or curve) is well approximated by a linear patch. In other words, the manifold is locally linear and hence can be represented as a patchwork of linear patches. The algorithm first divides the manifold into patches and reconstructs each point in a patch from the information (weights) obtained from its neighbors (i.e., it infers how a specific point is located with respect to its neighbors). This process extracts the local topology of the data. Finally, the algorithm reconstructs the global structure by combining the individual patches and finding an optimized low‐dimensional representation. Numerically, the local topology information is constructed by finding the k‐nearest neighbors of each data point and reconstructing each point from the weights of its neighbors. The global reconstruction from the local patches is accomplished by assimilating the individual weight matrices into a global weight matrix [W] and evaluating the smallest eigenvalues of the normalized global weight matrix [A].

LLE algorithm

1. For each of the n input vectors $x_i$ from $X = \{x_0, x_1, \ldots, x_{n-1}\}$:

(a) Find the k‐nearest neighbors of the data point $x_i$.

(b) Construct the local covariance or Gram matrix $G_i$:

$g_{r,s}(i) = (x_i - x_r)^T (x_i - x_s),$ (9)

where $x_r$ and $x_s$ are neighbors of $x_i$.

(c) Compute the weight vector $w_i$ by solving the linear system

$G_i w_i = \mathbf{1},$ (10)

where $\mathbf{1}$ is a k×1 vector of ones.

2. Using the vectors $w_i$, build the sparse matrix W. The (i,j) entry of W is zero if $x_i$ and $x_j$ are not neighbors. If $x_i$ and $x_j$ are neighbors, then W(i,j) takes the value of the corresponding weight vector entry, $w_i(j)$.

3. From W, build A:

$[A] = (I - W)^T (I - W).$ (11)

4. Compute the eigenpairs (corresponding to the smallest eigenvalues) of A:

$[A] = U \Lambda U^T.$ (12)

5. Compute the low‐dimensional points in $\mathbb{R}^d$ from the smallest eigenpairs.
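The steps above can be sketched as follows. This is an illustrative NumPy sketch: the small regularization of $G_i$ and the normalization of each weight vector to unit sum are standard stabilizing choices of ours, not spelled out in the algorithm above.

```python
import numpy as np

def lle(X, k, d, reg=1e-3):
    n = X.shape[0]
    E = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(E[i])[1:k + 1]      # Step 1(a): k nearest neighbors
        Z = X[i] - X[nbrs]                    # rows are (x_i - x_r)
        G = Z @ Z.T                           # Step 1(b): Gram matrix G_i, Eq. (9)
        G += reg * np.trace(G) * np.eye(k)    # regularize an ill-conditioned G_i
        w = np.linalg.solve(G, np.ones(k))    # Step 1(c): solve G_i w_i = 1
        W[i, nbrs] = w / w.sum()              # Step 2: store normalized weights
    A = (np.eye(n) - W).T @ (np.eye(n) - W)   # Step 3: Eq. (11)
    vals, vecs = np.linalg.eigh(A)            # Step 4: smallest eigenpairs first
    return vecs[:, 1:d + 1].T                 # Step 5: skip the constant eigenvector
```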

2.4 Hessian LLE

Hessian LLE [14] (hLLE) is a modification of LLE and Laplacian Eigenmaps [40]. Mathematically, hLLE replaces the Laplacian (first derivative) operator with a Hessian (second derivative) operator over the graph. hLLE constructs patches, performs a local PCA on each patch, constructs a global Hessian from the eigenvectors thus obtained, and finally finds the low‐dimensional representation from the eigenpairs of the Hessian. hLLE is a topology‐preserving method and assumes that the manifold is locally linear.

hLLE algorithm

1. At each given point $x_i$, construct a k×D neighborhood matrix $M_i$ such that each row j of the matrix represents a centered point

$\tilde{x}_j = x_j - \bar{x}_i,$ (13)

where $\bar{x}_i$ is the mean of the k neighboring points.

2. Perform a singular value decomposition (SVD) of $M_i$ to obtain the SVD matrices U, V, D.

3. Construct the local Hessian matrix $[H]_i$, of size k×(1 + d + d(d+1)/2), such that the first column is a vector of all ones and the next d columns are the columns of U, followed by the pair‐wise products of all the d columns of [U].

4. Compute a Gram‐Schmidt orthogonalization [37] on the local Hessians $[H]_i$ and assimilate the last d(d+1)/2 orthonormal vectors of each to construct the global Hessian matrix [A] [14].

5. Compute the eigenpairs (corresponding to the smallest eigenvalues) of the Hessian matrix:

$[A] = W \Lambda W^T.$ (14)

6. Compute the low‐dimensional points [Y] in $\mathbb{R}^d$ from the eigenpairs:

$Y = W (W^T W)^{-1/2}.$ (15)
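Since the local Hessian estimator is tedious to reproduce by hand, a quick way to experiment with these steps is scikit‐learn's hLLE implementation (an external library, independent of SETDiR); note that it requires k > d(d+3)/2. The neighborhood size and d below are illustrative.

```python
from sklearn.manifold import LocallyLinearEmbedding

# X: n x D array of input vectors
hlle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method='hessian')
Y = hlle.fit_transform(X)   # n x 2 low-dimensional coordinates
```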

An important point to note here is that, as discussed in the section ‘Methods: dimensionality reduction’, matrix [A] encodes the required information for each of the DR techniques, and the construction of this matrix is what differentiates a spectral DR method from the rest. Matrix [A] is a normalized Euclidean matrix in the case of PCA, a normalized geodesic matrix in the case of Isomap, a normalized Hessian matrix for hLLE, and so on.

2.5 Dimensionality estimators

A key step in constructing the low‐dimensional points from the data is the choice of the optimal dimensionality d. Methods like PCA and Isomap have an implicit technique for estimating the dimensionality (approximately) using scree plots. We also introduce a graph‐based technique that rigorously estimates the latent dimensionality of the data and can be used in conjunction with the scree plot.

2.5.1 Dimensionality from the scree plot

A scree plot is a plot of the eigenvalues arranged in decreasing order of magnitude. Scree plots obtained from PCA and Isomap (the distance‐preserving methods) give an estimate of the dimensionality. A heuristic way to identify the dimensionality is to locate the elbow in the scree plot. A more quantitative estimate is obtained by choosing the smallest d for which $p_{\mathrm{var}}(d)$ exceeds a minimum‐percentage‐variability threshold. If $\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_n$ are the individual eigenvalues arranged in descending order, the percentage variability $p_{\mathrm{var}}(d)$ covered by the first d eigenvalues is given by

$p_{\mathrm{var}}(d) = 100 \times \frac{\sum_{i=1}^{d} \lambda_i}{\sum_{i=1}^{n} \lambda_i}$ (16)

A usual approach is to choose a d that takes 95% of the variability into account.
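As an illustration, Eq. (16) and the 95% rule reduce to a few lines; here eigvals is assumed to be the eigenvalue array from PCA or Isomap, already sorted in descending order.

```python
import numpy as np

def estimate_dimension(eigvals, threshold=95.0):
    pvar = 100.0 * np.cumsum(eigvals) / np.sum(eigvals)  # p_var(d) of Eq. (16)
    return int(np.searchsorted(pvar, threshold)) + 1     # smallest d with p_var(d) >= threshold
```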

2.5.2 Geodesic minimal spanning tree estimator

We have recently utilized a dimensionality estimator based on the BHH (Beardwood‐Halton‐Hammersley) theorem [41]. This theorem states that the rate of convergence of the length of the minimal spanning tree^a gives a measure of the latent dimensionality. It allows one to express the dimensionality d of an unordered dataset as a function of the length of the geodesic minimal spanning tree (GMST) of the graph of the dataset. Specifically, the slope m of the $\log(L_n)$ vs. $\log(n)$ plot, constructed by calculating the GMST length ($L_n$) for an increasing number of randomly chosen data points (n), provides an estimate of the dimensionality: $d = 1/(1 - m)$ [19].
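A sketch of this estimator follows, under stated assumptions: the geodesic distances are approximated by shortest paths on a k‐nearest‐neighbor graph (assumed to be connected), the MST is computed with SciPy rather than a hand‐rolled Prim's algorithm, and the sample sizes are illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path

def gmst_dimension(X, k=8, sizes=(50, 100, 200, 400)):
    lengths = []
    for n in sizes:
        idx = np.random.choice(X.shape[0], n, replace=False)  # random subsample
        E = squareform(pdist(X[idx]))
        # k-NN graph: keep an edge only to each point's k nearest neighbors
        knn = np.where(E <= np.sort(E, axis=1)[:, [k]], E, np.inf)
        G = shortest_path(np.minimum(knn, knn.T), directed=False)  # geodesics
        lengths.append(minimum_spanning_tree(G).sum())             # GMST length L_n
    m = np.polyfit(np.log(sizes), np.log(lengths), 1)[0]  # slope of the log-log plot
    return 1.0 / (1.0 - m)                                # d = 1 / (1 - m)
```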

2.5.3 Correlation dimension

The correlation dimension is a space‐filling dimension derived from the more generic fractal dimension by setting q = 2 in

$C_q(\mu, \varepsilon) = \int \mu\left(\bar{B}_{\varepsilon}(z)\right)^{q-1} d\mu(z)$ (17)

where $\mu$ is a Borel probability measure on a metric space and $\bar{B}_{\varepsilon}(z)$ is a closed ball of radius $\varepsilon$ centered at z.

The numerical definition of the correlation dimension is given by

$d_{\mathrm{cor}}(\varepsilon_1, \varepsilon_2) = \frac{\log(\hat{C}_2(\varepsilon_2)) - \log(\hat{C}_2(\varepsilon_1))}{\log(\varepsilon_2) - \log(\varepsilon_1)}$ (18)

where $\hat{C}_2(\varepsilon)$ is the proportion of pair‐wise distances less than $\varepsilon$ [23],[42]. Intuitively, the $\varepsilon$ values act as window ranges through which one zooms into the data. Too small an $\varepsilon$ renders the data as individual points, while too large an $\varepsilon$ makes the entire dataset look like a single fuzzy spot. Hence, the correlation dimension is sensitive to the choice of the $\varepsilon$ values. One important point to note, however, is that the correlation dimension provides the user with a lower bound on the optimal dimensionality.
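The two‐scale estimate of Eq. (18) can be sketched as follows; eps1 and eps2 are the user‐chosen window radii discussed above.

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(X, eps1, eps2):
    dists = pdist(X)               # all pair-wise distances
    C1 = np.mean(dists < eps1)     # proportion of distances below eps1
    C2 = np.mean(dists < eps2)     # proportion of distances below eps2
    return (np.log(C2) - np.log(C1)) / (np.log(eps2) - np.log(eps1))  # Eq. (18)
```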

2.6 Software: SETDiR

These DR techniques are packaged into a modular, scalable framework for ease of use by the materials science community. We call this package SETDiR. The framework contains two major components:

1. Core functionality: developed in C++

2. User interface: developed in Java (Swing)

Figure 2 describes the scope of the functionality of both modules in SETDiR.

Figure 2. Description of the software.

2.6.1 Core functionality

The core functionality is developed using the object‐oriented C++ programming language. It implements the DR methods (PCA, Isomap, and LLE) and dimensionality estimators such as the GMST and correlation dimension estimators [23].

2.6.2 User interface

A graphical user interface (shown in Figure 3) is developed using Java™ Swing components, with the following features that make it user‐friendly:

1. Abstracts the user from the mathematical and programming details.

2. Displays the results graphically and enhances the visualization of the low‐dimensional points.

3. Allows easy post‐processing of results: in‐built cluster analysis and the ability to save plots as image files.

4. Organized settings tabs: based on the niche of the user, the solver settings are organized into Basic User and Advanced User tabs, which shield a new or naive user from otherwise overwhelming details.

Figure 3. Snapshot of the clustering pattern displayed using SETDiR for the apatite dataset.

This framework can be downloaded from SETDiR (http://setdir.engineering.iastate.edu/doku.php?id=download). A more detailed discussion of the parallel features of the code is deferred to another publication. We next showcase the framework and the mathematical strategies on the apatite dataset.

3 Results and discussion

In this section of the paper, we compare and contrast the algorithms on an interesting dataset of apatites with immense technological and scientific significance. Apatites have the ability to accommodate numerous chemical substitutions and exhibit a broad range of multifunctional properties. This rich chemical and structural diversity provides a fertile ground for the synthesis of technologically relevant compounds [25]‐[29]. Chemically, apatites are conveniently described by the general formula $A^{I}_4A^{II}_6(BO_4)_6X_2$, where $A^I$ and $A^{II}$ are distinct crystallographic sites that usually accommodate larger monovalent (Na+, Li+, etc.), divalent (Ca2+, Sr2+, Ba2+, Pb2+, etc.), and trivalent (Y3+, Ce3+, La3+, etc.) cations; the B‐site is occupied by smaller tetrahedrally coordinated cations (Si4+, P5+, V5+, Cr5+, etc.); and the X‐site is occupied by halides (F−, Cl−, Br−), oxides, and hydroxides. Establishing the relationship between the microscopic properties of apatite complexes and their macroscopic properties can improve our understanding and promote the use of apatites in various technological applications. For example, information about the relative stability of apatite complexes can promote their utilization as suitable host materials for immobilizing toxic elements such as lead, cadmium, and mercury (i.e., by identifying an apatite chemical composition that contains at least one of the aforementioned toxic elements yet remains thermodynamically stable). DR techniques offer unique insights into otherwise intractable high‐dimensional datasets by enabling visual clustering and pattern association, thereby helping establish process‐structure‐property relationships for chemically complex solids such as apatites.

3.1 Apatite data description

The crystal structure of the aristotype $P6_3/m$ $Ca^{I}_4Ca^{II}_6(PO_4)_6F_2$ apatite with a hexagonal unit cell is shown in Figure 4, with the atoms projected along the (001) axis. The polyhedral representations of the $A^IO_6$ and $BO_4$ structural units are clearly shown, with the $Ca^{II}$‐site (pink atoms) and F‐site (green atoms) occupying the tunnel. The thin black line represents the unit cell of the hexagonal lattice.

Figure 4. Crystal structure of a typical $P6_3/m$ $Ca^{I}_4Ca^{II}_6(PO_4)_6F_2$ apatite with a hexagonal unit cell [43],[44].

The sample apatite dataset considered consists of 25 different compositions described using 29 structural descriptors. These structural descriptors, when modified, affect the crystal structure [44]. Therefore, by establishing the relationship between the crystal structure and these structural descriptors and analyzing the clustering of different compositions, conclusions can be drawn about how changes in these structural descriptors (defining the atomic features) could affect the macroscopic properties (such as elastic modulus, band gap, and conductivity). The bond length, bond angle, lattice constant, and total energy data are taken from the work of Mercier et al. [26]; the ionic radii data are taken from the work of Shannon [45]; and the electronegativity data are based on Pauling's scale [46]. The ionic radius of the $A^I$‐site ($r_{A^I}$) corresponds to a coordination number of nine, and that of the $A^{II}$‐site ($r_{A^{II}}$) to a coordination number of seven (when the X‐site is F−) or eight (when the X‐site is Cl− or Br−). Our database covers Ca, Ba, Sr, Pb, Hg, Zn, and Cd in the A‐site; P, As, Cr, V, and Mn in the B‐site; and F, Cl, and Br in the X‐site. The 25 compounds considered in this study belong to the aristotype $P6_3/m$ hexagonal space group. We apply SETDiR to the apatite data and present some of the results below. More information regarding the source of the apatite data can be found in [44]. A preliminary analysis (focusing only on PCA) can be found in [30].

3.2 Dimensionality estimation

SETDiR first estimates the dimensionality using the scree plot, i.e., the plot of eigenvalue indices vs. eigenvalues, where the occurrence of an elbow (a sharp drop in eigenvalues) gives an estimate of the dimensionality of the data. Figure 5 compares the scree plots obtained when the input vectors {x0,x1,…,xn−1} were normalized with those obtained when they were not. We plot, for comparison, the eigenvalues obtained from both PCA and Isomap. This plot shows how the second eigenvalue collapses to zero when the input vectors are not normalized, emphasizing the importance of normalizing the input vectors^b. It is also interesting to compare the eigenvalues of PCA and Isomap for normalized input: PCA, being a linear method, over‐estimates the dimensionality as 5, while Isomap estimates it to be 3. SETDiR subsequently uses the geodesic minimal spanning tree method to estimate the dimensionality of the apatite data. This method gives a rigorous estimate of 3 (Figure 6), which matches the outcome of the more heuristic scree plot estimate.

Figure 5. Scree plots for PCA and Isomap: normalized vs. unnormalized input.

Figure 6. Sensitivity analysis of the dimensionality estimators with change in neighborhood size.

3.3 Low‐dimensional plots

In this section, we discuss the visual interpretation of the low‐dimensional plots obtained by applying the dimensionality reduction techniques (PCA, Isomap, LLE, and hLLE) to the set of apatites described using structural descriptors. Figure 7 (left) shows the 2D plot of principal components 2 and 3. This plot is shown because the PC2‐PC3 map captures a pattern similar to that of Isomap components 1 and 2. While we find associations among compounds similar to those shown in Figure 7 (right), the information is manifested differently. This is mainly attributed to the differences in the underlying mathematics of the two techniques: PCA is essentially a linear technique, whereas Isomap is non‐linear. To further interpret the hidden information captured by the Isomap classification map (Figure 7), we focus on the three regions separately.

Figure 7. Apatite PCA (left) and Isomap (right) result interpretation [43].

Figure 7 (right) shows a two‐dimensional classification map with Isomap components 1 and 2 on the orthogonal axes. The map groups the various apatite compounds into three distinct regions that capture the interactions between the A‐, B‐, and X‐site ions in the complex apatite crystal structure. Region 1 corresponds to apatite compounds with the fluoride (F) ion in the X‐site. All apatite compounds in this region contain only F in the X‐site but have different A‐site (Ca, Sr, Pb, Ba, Cd, Zn) and B‐site (P, Mn, V) elements. Therefore, this unique region separates F‐apatites from Cl‐ and Br‐apatites. Region 2 comprises apatite compounds with the phosphorus (P) ion in the B‐site and Cl and Br ions in the X‐site. The uniqueness of this region arises mainly from the presence of only the smaller P ions in the B‐site. Similarly, region 3 comprises apatite compounds with Cl ions in the X‐site and the larger Cr, V, and As cations in the B‐site.

Figure 8 (right) presents the results from hLLE. It can be observed that the compounds that have a highly covalent A‐site cation (e.g., Hg2+ and Pb2+) and a highly covalent B‐site cation (P5+) clearly separate out from the rest. An exception to this rule is Pb10(CrO4)6Cl2. Our PCA‐derived structure map also revealed a similar pattern, i.e., Pb10(CrO4)6Cl2 was found not to obey the general trend [44]. Note that the presence of the Cr cation in the B‐site is known to cause structural distortions in apatites.

Figure 8. Apatite LLE (left) and hLLE (right) result interpretation [43].

For example, Sr10(PO4)6Cl2 has P63/m symmetry, whereas Sr10(CrO4)6Cl2 has a distorted P63 symmetry [28]. Based on the previous PCA work [44], we attribute this exception to two bond distortion angles: (i) the rotation angle of the $A^{II}$‐$A^{II}$‐$A^{II}$ triangular units and (ii) the angle that the $A^I$‐O1 bond makes with the c‐axis. Compared to Hessian LLE, we cannot find any clear pattern with respect to chemical bonding in the LLE result (Figure 8, left).

Figure 9 shows a zoomed‐in plot of the Hessian LLE result^c. Around the origin, we find two clusters of compounds: (i) one on the left, with negative component 1 values, corresponding to compounds that have ionic alkaline earth metal cations in the A‐site and (ii) one on the right, with positive component 1 values, corresponding to compounds that have covalent A‐site cations. An exception here is Ca10(CrO4)6Cl2, which is found among the covalent A‐site cluster, indicating that Ca10(CrO4)6Cl2 may have a distorted symmetry. It is important to recognize that neither PCA nor Isomap identifies Ca10(CrO4)6Cl2 as an exception. Compared to hLLE, we do not find any intriguing insights from the LLE analysis and therefore do not discuss the LLE results further.

Figure 9. Apatite hLLE result interpretation [43].

One needs to explore different manifold methods to fully understand high‐dimensional correlations and mappings. Hence, in what follows, we explore the Isomap analysis in more detail.

In Figure 10, region 1, the ionic radius of the A‐site element increases along the direction shown, with the Zn2+ cation being the smallest and Ba2+ the largest. Note that this A‐site ionic radius trend is not clearly seen in the PC2‐PC3 classification map (Figure 7). One of the key outcomes from Figure 10 is the identification of the Pb10(PO4)6F2 compound as an outlier. On Shannon's ionic radii scale, Pb2+ is larger than Ca2+ but smaller than the Sr2+ cation. Ideally (treating apatites as ionic crystals), Pb10(PO4)6F2 should lie between Ca10(PO4)6F2 and Sr10(PO4)6F2 in the map. However, this is not the case. The physical reason behind this observation could be attributed to the electronic structure of Pb2+ ions [47]. Theoretical electronic structure calculations indicate that, in the atom‐projected density of states curves, the Pb2+ ions have active 6s2 lone‐pair electrons that hybridize with oxygen 2p electrons, resulting in the formation of a strong covalent bond. Indeed, recent density functional theory (DFT) calculations [48] show that the electronic band gap (at the generalized gradient approximation (GGA) level) for Pb10(PO4)6F2 is 3.7 eV, which is approximately 2 eV smaller than those of Ca10(PO4)6F2 (5.67 eV) and Sr10(PO4)6F2 (5.35 eV).

Figure 10. Apatite Isomap result interpretation (region 1) [43].

In our dataset, the electronic structure information of the A‐site elements was quantified using Pauling's electronegativity data. While PCA captures this behavior, the dominating effect of the electronic structure of Pb2+ ions is more transparent within the mathematical framework of the non‐linear Isomap analysis.

In addition, from Figure 10 it can be inferred that the bond distortions of the Zn apatite differ from those of the other compounds. This trend correlates well with the non‐existence of the Zn10(PO4)6F2 compound, owing to the difficulty of experimental synthesis [49]. On the other hand, the relative correlation position of the Hg10(PO4)6F2 compound offers intriguing insights. In fact, the uniqueness of the Hg10(PO4)6F2 chemistry was previously detected in a PCA‐derived structure map [44], which clearly identified the composition as an outlier among otherwise isostructural compounds. Guided by this original insight from PCA, Balachandran et al. [48] recently showed using DFT calculations that the ground state structure of Hg10(PO4)6F2 is triclinic (space group $P\bar{1}$). Although the ionic size of Hg2+ is very close to that of Ca2+, the aristotype $P6_3/m$ symmetry distorts to $P\bar{1}$ symmetry in Hg10(PO4)6F2 due to the mixing of the fully occupied Hg 5d10 orbitals with the empty Hg 6s0 orbitals. This mixing is unavailable to the Ca10(PO4)6F2 compound because it does not have orbitals of the appropriate symmetry.

In Figure 11, region 2 is highlighted, where we find a clear trend among apatite compounds with respect to the ionic radius of the A‐site element. As in region 1, Pb apatites manifest themselves as outliers in region 2. The unique electronic structure of Pb2+ cations in forming a covalent bond with oxygen 2p‐states is identified as the reason for the deviation of Pb apatites from the expected trend. The covalent bonding in Pb compounds appears to be independent of the X‐site anion when the B‐site is occupied by phosphorus cations. In Figure 11, the Hg10(PO4)6Cl2 compound is found to be closely associated with Ca10(PO4)6Br2, indicating some similarity in the bond distortions of the two compounds. Comparing the relative correlation positions of all Cl‐containing apatites (except the Pb‐based compounds) in region 2, we predict Hg10(PO4)6Cl2 to have a stable apatite structure type (in sharp contrast to Hg10(PO4)6F2).

Figure 11. Apatite Isomap result interpretation (region 2) [43].

Figure 12 describes region 3, where we find clusters of apatite compounds with Cl ions in the X‐site and the larger V, Cr, and As cations in the B‐site. The ionic radius of the A‐site element increases in the direction shown in the figure, and in this case the Pb apatites are not outliers. The presence of the large V, Cr, and As cations in the B‐site (compared to the smaller P cations in regions 1 and 2) is identified as the reason for this behavior. Region 3 also reveals the existence of a complex relationship between the A‐site and B‐site chemistries in Cl apatites.

Figure 12. Apatite Isomap result interpretation (region 3) [43].

Several topological observations can be made on the data. First, since the low‐dimensional points obtained from Isomap and PCA differ, it could be inferred that the apatite data lie on a non‐linear manifold in the embedding space. However, a counter‐argument can be made from the fact that the PC2‐PC3 plot shows trends and clustering similar to those in the Isomap1‐Isomap2 plot. One possible reason is that outliers dominate and skew the first PCA component (PC1), while Isomap is unaffected by these outliers; in that case, the data could actually lie on a linear manifold. Second, the different clustering patterns observed across the dimensionality reduction techniques suggest that the patterns/features seen in the PCA and Isomap clusters are a function of the distances preserved, while those in hLLE and LLE are a function of the topology preserved. Hence, the features represented by these clusters are precisely those preserved throughout the dimensionality reduction process, from the embedding space to the low‐dimensional space.

4 Conclusions

In this paper, we have detailed a mathematical framework of various data dimensionality reduction techniques for constructing reduced‐order models of complicated datasets and discussed the key questions involved in data selection. We introduced the basic principles behind data dimensionality reduction^d. The techniques are packaged into a modular, computationally scalable software framework with a graphical user interface, SETDiR. This interface helps separate the mathematical and computational aspects from the scientific applications, thus significantly enhancing the utility of DR techniques for the scientific community. The applicability of this framework in constructing reduced‐order models of complicated materials datasets is illustrated with an example dataset of apatites. SETDiR was applied to a dataset of 25 apatite compositions described by 29 structural descriptors. The corresponding low‐dimensional plots revealed previously unappreciated insights into the correlation between structural descriptors, like ionic radius and bond covalence, and properties such as apatite compound formability and crystal symmetry. The plots also uncovered that the surface on which the data lies could be non‐linear. This information is crucial, as it can promote the use of apatite materials as potential host lattices for immobilizing toxic elements.

5 Availability of supporting data

Information regarding the source of the apatite data can be found in [44].

6 Endnotes

^a A tree is a graph in which each pair of vertices is connected by exactly one path. A spanning tree of a graph G(V,E) is a sub‐graph that traces all the vertices in the graph. A minimal spanning tree (MST) of a weighted graph G(V,E,W) is a spanning tree with a minimal sum of edge weights (the length of the MST) along the tree. A geodesic minimal spanning tree (GMST) is an MST whose edge weights represent geodesic distances. Computationally, the GMST is computed using Prim's (greedy) algorithm [50].

^b Normalization of a variable rescales its existing range [a, b] to [−1, 1] or [0, 1], for example, by dividing the sequence of values by the maximum absolute value of the sequence.
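For illustration, assuming X is the n × D data matrix with one variable per column, this amounts to a single NumPy expression:

```python
import numpy as np

X_normalized = X / np.abs(X).max(axis=0)   # rescale each column to [-1, 1]
```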

^c Hessian LLE is highly sensitive to the neighborhood size and even more sensitive to the input estimated dimensionality. An incorrect input for the estimated dimensionality implies the construction of tangent planes of incorrect dimensions, which in turn yields a sub‐optimal low‐dimensional representation.

^d A comprehensive catalogue of non‐linear dimensionality reduction techniques, along with the mathematical prerequisites for understanding dimensionality reduction, can be found in [23].