Nanoinformatics, pp. 133–155

# Topological Data Analysis for the Characterization of Atomic Scale Morphology from Atom Probe Tomography Images

## Abstract

Atom probe tomography (APT) is a revolutionary characterization tool that combines atomic imaging with time-of-flight (TOF) mass spectrometry to provide direct-space, three-dimensional, atomic-scale-resolution images of materials together with the chemical identities of hundreds of millions of atoms. It involves the controlled removal of atoms from a specimen’s surface by field evaporation, with each atom then analyzed sequentially by a position-sensitive detector and TOF mass spectrometer. A paradox of APT is that, while it provides an unprecedented level of imaging resolution in three dimensions, it is very difficult to obtain an accurate perspective of the morphology or shape outlined by atoms of similar chemistry and microstructure. The origins of this problem are numerous, including incomplete detection of atoms and the complexity of the evaporation fields of atoms at or near interfaces. Hence, unlike in scattering techniques such as electron microscopy, interfaces appear diffuse rather than sharp. This, in turn, makes it challenging to visualize and quantitatively interpret the microstructure at the “meso” scale, where one is interested in the shape and form of interfaces and their associated chemical gradients. It is here that the application of informatics at the nanoscale and statistical learning methods plays a critical role, both in defining the level of uncertainty and in helping to make quantitative, statistically objective interpretations where heuristics often dominate. In this chapter, we show how the tools of topological data analysis provide a new and powerful approach to materials characterization within the field of nanoinformatics.

## Keywords

Atom probe tomography · Topological data analysis · Persistent homology

## 7.1 Introduction

The modern development of Atom Probe Tomography (APT) has opened exciting new opportunities for materials design through its ability to experimentally map atoms, with their chemistry, in 3D space [1, 2, 3, 4, 5, 6, 7]. However, challenges remain in accurately reconstructing the 3D atomic structure and in precisely identifying features (for example, precipitates and interfaces) from the 3D data. Because the data take the form of discrete points in a metric space, i.e., a point cloud, many existing data mining algorithms can be applied to extract the geometric information embedded in the data. Nevertheless, these geometric-based methods have certain limitations when applied to atom probe data. We summarize those limitations below and present a data-driven approach that addresses the significant challenges of massive point cloud data and sub-nanoscale data uncertainty, and that can be generalized to many other applications.

### 7.1.1 Atom Probe Tomography Data and Analysis

Improvements in data collection rates, field of view, detection sensitivity (at least one atomic part per million), and specimen preparation have advanced the atom probe from a scientific curiosity to a state-of-the-art research instrument [9, 10, 11, 12, 13, 14, 15, 16, 17, 18]. While APT is a powerful technique with the capacity to gather information on hundreds of millions of atoms from a single specimen, effectively using this information poses significant challenges. The main technological bottleneck lies in handling extraordinarily large amounts of data (giga- to terabytes) in short periods of time. The key to successful scientific applications of this technology will be making the handling, processing, and interpretation of such data via informatics techniques as integral a part of APT as the equipment and sample preparation.

As applied to APT, data processing and analysis involve two main phases. The first is the reconstruction of the 3D image, which identifies the 3D coordinates and chemical identity of each collected atom. The second is the extraction of useful information from the reconstructed image; for example, identifying crystalline structures, clusters, and precipitates. Two parameters of interest must be determined during the 3D image reconstruction: the voxel size [19, 20, 21] and the elemental concentration threshold for the voxels. Normally these two parameters are determined empirically by trial and error: a value is set for the parameter, and if the expected features are visible, the image is considered correct. Once the parameters are set, they are treated as fixed values and all subsequent analyses are based on them. There are two issues with this approach: (1) the determination of the parameter values is largely subjective, and (2) once the values are chosen, the results of all subsequent analyses are biased toward those particular values.

### 7.1.2 Characteristics of Geometric-Based Data Analysis Methods

The modern development of APT has opened exciting new opportunities for material design through its ability to experimentally map atoms, with their chemistry, in 3D space. However, challenges remain in accurately reconstructing the 3D atomic structure and in precisely identifying features (for example, precipitates and interfaces) from the 3D data [23, 24, 25, 26, 27, 28, 29, 30]. Because the data take the form of discrete points in a metric space, i.e., a point cloud, many existing data mining algorithms can be applied to extract the geometric information embedded in the data [31, 32, 33, 34]. Nevertheless, these geometric-based methods have certain limitations when applied to atom probe data, which we summarize below.

In the category of supervised learning [35], many methods require prior knowledge about the data. When such prior knowledge is unavailable, assumptions must be made and bias can be introduced. For example, regression usually assumes a mathematical functional form relating the variables, which means the conclusions drawn from the regression are biased toward the chosen function. For unsupervised learning methods [34], on the other hand, there is usually some parameter that must be set for the algorithm. For example, clustering methods usually require the number of clusters (or an equivalent parameter) to be specified manually; in dimensionality reduction, a common assumption is that the data reside on a lower-dimensional manifold that sufficiently represents the data, although the dimension of that manifold may not be determinable by the algorithm itself.

Given the wide range of applications, there is hardly a universal rule for determining the values of the parameters required by geometric-based methods. For a particular task, the parameters can be determined either empirically, based on the constraints of the situation at hand, or by some algorithm [36]. In either case, the hidden assumption is that the value of the parameter is fixed once chosen. In some scenarios, it would be worthwhile to treat those fixed parameters as variables. This is not equivalent to assigning a set of values to the parameters and collecting all of the results, since those results are independent of one another; what is needed is a scheme that can summarize the results as the parameter changes value. The lack of variability also exists on another level: geometric-based approaches are exact, i.e., two points in a space are geometrically distinguishable as long as they do not share the same coordinates. As a result, for example, classification algorithms determine classes using a set of hyper-boundaries that are fixed once obtained by training.

Topological-based methods, by contrast, have the following properties:

- (i) topology focuses on the qualitative geometric features of an object, which are insensitive to coordinates; the data can therefore be studied without recourse to an algebraic function, so no prior assumption or parameter needs to be imposed;

- (ii) instead of an exact distance metric, topology uses the looser notion of “proximity” or “neighborhood”; because proximity is less absolute than an actual metric, topology can handle scenarios where the information is less exact;

- (iii) the qualitative geometric features can be associated with algebraic structures through homology, so changes in the topology can be tracked by these structures, which is useful when assessing the impact of a parameter on the result of a given analysis.

All these properties make topological-based methods good candidates for dealing with APT data. Table 7.1 summarizes the main differences between the geometric- and topological-based methods.

Table 7.1 Comparison of geometric-based and topological-based methods

| | Requirement for model/assumption | Requirement for coordinates | Parametric flexibility |
|---|---|---|---|
| Geometric methods | Based on an algebraic model or assumptions such as the data having a certain algebraic property | Coordinate information is needed since a metric is used | Parameter is fixed once its value is determined; the result cannot reflect the impact of different parameter values |
| Topological methods | No model required; no algebraic assumption on the data | Neighborhood (proximity) serves as the metric, enabling coordinate-free applications | Parameter can be variable; the result is integrated over different parameter values |

## 7.2 Persistent Homology

Summary of homology classes and their corresponding qualitative geometric features for the first few dimensions:

| 0th homology class | 1st homology class | 2nd homology class | 3rd homology class |
|---|---|---|---|
| Connected components | 2D holes (enclosed area) | 3D cavities (enclosed volume) | 4D hyper-voids (enclosed hyper-volume) |
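To make the 0th homology class concrete, the sketch below counts connected components (β₀) of a point cloud as the connectivity radius ε grows, using a simple union-find over the ε-neighborhood graph. This is an illustrative construction, not the chapter's own code; the point positions are synthetic.

```python
import numpy as np

def betti0(points, eps):
    """Number of connected components (beta_0) of the epsilon-neighborhood
    graph of a point cloud, computed with a simple union-find."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Find the root of i with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Union every pair of points closer than eps
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) <= eps:
                parent[find(i)] = find(j)

    return len({find(i) for i in range(n)})

# Two well-separated synthetic clusters in the plane
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
                 rng.normal(5.0, 0.1, (10, 2))])
# Components merge as epsilon grows, mirroring a beta_0 barcode
print([betti0(pts, e) for e in (0.01, 1.0, 10.0)])
```

At intermediate ε the two clusters each form one component, and at large ε everything merges into a single component, which is exactly the persistence behavior the barcodes record.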

## 7.3 Voxel Size Determination: Identification of Interfaces

- (i) for a given voxel size, apply a Gaussian kernel to each atom at its exact position;

- (ii) sum all of the Gaussian kernels to define an estimated density across the voxel;

- (iii) define another Gaussian, assuming every atom in the voxel is located at the center of the voxel;

- (iv) calculate the difference between the estimated density and this central Gaussian, which approximates the true density; and

- (v) define the optimal voxel size as the one that minimizes this difference.
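The five steps above can be sketched in one dimension as follows. This is a minimal illustration only: the function names, the synthetic atom positions, and the 1D simplification are all assumptions, whereas the chapter's actual computation is three-dimensional.

```python
import numpy as np

def gaussian(t, sigma=1.0):
    """Gaussian density with standard deviation sigma."""
    return np.exp(-0.5 * (t / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def voxel_error(atoms, center, h, grid_pts=200):
    """Squared difference, integrated over one voxel of edge h, between the
    kernel density estimate built from the atoms (steps i-ii) and a fitted
    Gaussian translated to the voxel center (step iii)."""
    x = np.linspace(center - h / 2, center + h / 2, grid_pts)
    # (i)-(ii): one Gaussian kernel per atom, averaged into a density
    f_hat = gaussian(x[:, None] - atoms[None, :], sigma=h).mean(axis=1)
    # (iii): a single Gaussian with the atoms' spread, centred in the voxel
    f_ref = gaussian(x - center, sigma=max(atoms.std(), 1e-6))
    # (iv): integrated squared difference (Riemann sum over the voxel)
    return float(((f_hat - f_ref) ** 2).mean() * h)

rng = np.random.default_rng(0)
atoms = rng.uniform(-0.5, 0.5, size=100)   # synthetic atoms in one voxel
err = voxel_error(atoms, center=0.0, h=1.0)
print(f"per-voxel error = {err:.4f}")
```

Step (v) would then repeat this per-voxel error over a range of candidate voxel sizes and pick the minimizer.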

The specifics of each step are expanded below.

The contribution of an atom at position *x*, toward a voxel of unit length centered around the origin, can be represented by a Parzen window [58] given by

\( w\left( x \right) = K\left( {\frac{x - X}{{h_{d} }}} \right) \)

where *x* is the atom position, *X* is the center of the voxel of edge length *h* at which the individual kernel contribution is measured, and h_{d} is known as the bandwidth of the kernel; in the case of the Gaussian kernel, h_{d} = *σ*, where *σ* is the standard deviation. The standard Gaussian kernel (with zero mean and unit variance) is given by \( K\left( t \right) = \frac{1}{{\sqrt {2\pi } }}e^{{ - \frac{1}{2}t^{2} }} \).

The density estimate \( \hat{f}\left( x \right) \) at a point *x* within the voxel (Fig. 7.8) is defined by the average of the different kernel contributions:

\( \hat{f}\left( x \right) = \frac{1}{{nh_{d} }}\sum\nolimits_{i = 1}^{n} {K\left( {\frac{{x - x_{i} }}{{h_{d} }}} \right)} \)

where h_{d} > 0 is the window width, smoothing parameter, or bandwidth.
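The estimator above is the standard Parzen-window form. As a quick numerical sanity check (the sample positions and bandwidth below are chosen arbitrarily), the estimate integrates to one, as a density should:

```python
import numpy as np

def f_hat(x, samples, h):
    """Parzen-window estimate: average of Gaussian kernels K((x - x_i)/h),
    scaled by 1/h so the result is a proper probability density."""
    t = (x - samples) / h
    k = np.exp(-0.5 * t ** 2) / np.sqrt(2.0 * np.pi)   # standard Gaussian K
    return k.mean() / h

samples = np.array([0.0, 0.5, 1.0])        # illustrative "atom" positions
grid = np.linspace(-10.0, 11.0, 4001)
dens = np.array([f_hat(x, samples, h=0.3) for x in grid])
total = dens.sum() * (grid[1] - grid[0])    # Riemann sum of the density
print(round(total, 3))  # → 1.0
```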

The optimal bandwidth is the one that minimizes the mean integrated square error (*MISE*),

\( MISE = E\int {\left( {\hat{f}\left( x \right) - f\left( x \right)} \right)^{2} dx} \)

Since the distribution of atoms does not follow any known pattern, especially in regions of interest such as the interface, the true density \( f\left( x \right) \) is not known. The approach followed here to approximate the true density as closely as possible consists of the following sequence: \( f\left( x \right) \) is first assumed to be a Gaussian distribution representing the actual distribution of atoms within the voxel, although the atoms may very well be non-normally distributed. The mean and variance of this assumed Gaussian spread are calculated for the atoms within and on the boundary of the voxel of interest. Next, depending on the real distribution of the atoms, the Gaussian function may peak either at or off the center of the voxel, and in the latter case it is translated to the center. The difference between this Gaussian distribution and \( \hat{f}\left( x \right) \) is used for the computation of the *MISE*. For cases where the initial assumption of f(x) is a poor one, the result is a high MISE. By gradually varying the voxel size, the validity of this assumption reaches a most probable value corresponding to a minimized MISE. The total squared error \( E_{tot} \) is then computed for the entire dataset as

\( E_{tot} = \sum\nolimits_{v = 1}^{V} {MISE_{v} } \)

where *V* is the total number of voxels. \( E_{tot} \) is then minimized with respect to the voxel size.

The kernel density estimation was carried out on the Ni–Al–Cr dataset comprising ~8.72 million atoms. For each atom in a voxel, a Gaussian kernel was fit at the atom location, with its amplitude set to 1 and its full width at half maximum set to the voxel edge length. The kernel contributions to the voxel were calculated at the voxel center for all atoms within and on the boundary of that voxel; these values were then summed to give the density amplitude at the voxel center. The error between the actual and estimated densities was then calculated using the procedure explained above. This procedure was repeated for voxel sizes from 0.5 to 2.5 nm in steps of 0.1 nm. A minimum error was obtained for a 1.6 nm voxel size (Fig. 7.9), providing a tradeoff between noise and data averaging. This voxel size reduces the atomic dataset to a representation of 83,754 voxels of (1.6 nm)^{3} each.

At small voxel sizes, statistical noise appears throughout the sample wherever the concentrations of Al and Cr are almost equal. As the voxel size is increased, the larger volume averages out the noise and a clear interface starts to emerge. At 1.6 nm most of the statistical noise vanishes and a very sharp interface is obtained, with nanometer-scale fluctuations visible on the isosurface representing the interface (Fig. 7.10). As the voxel size is increased beyond this value, over-smoothing of the data begins: the interface becomes diffuse and the graininess in the image disappears. At this stage there is ideally no statistical noise, and the residual clusters scattered throughout the volume could potentially capture the presence of nanoclusters.

## 7.4 Topological Analysis for Defining Morphology of Precipitates

Interfaces and precipitate regions are typically identified from APT data by representing them as isoconcentration surfaces at a particular concentration threshold, making the choice of concentration threshold critical. The popular approach to selecting the appropriate threshold is to draw a proximity histogram [61], which captures the average concentration gradient across the interface, and then to visually identify the concentration value that best represents an interface or phase change. This makes the choice of concentration threshold user-dependent and subjective. In this section, we show how persistent homology can be applied to better recover the morphology of the precipitates.

The topology of the voxelized data can be summarized by the first three Betti numbers, *β*_{0}, *β*_{1}, and *β*_{2}. The relationship between the Betti numbers, the data topology, and the concept of barcodes as described in the introduction is summarized in Fig. 7.11.

As discussed earlier and expanded upon in our prior work [62], the persistence of different topological features can be recorded as barcodes, which we now group according to each Betti number. The horizontal axis represents the parameter ɛ or the range of connectivity among points in the point cloud while the vertical axis captures the number of topological components present in the point cloud at each interval of ɛ. There has to be some knowledge of the appropriate range for ɛ, such as the interatomic distance when dealing with raw atom probe data or voxel length if the data has been voxelized. The persistence of features is a measure of whether these features are actually present in the data or if they are artifacts appearing at certain intervals.

The top panel shows the evolution of the Betti numbers with varying Sc concentration. At each value of the Sc concentration threshold “δ”, those voxels having a concentration of δ ± 0.02 were chosen. Consider β_{0}: at a high concentration threshold, beyond 0.5, a very small number of simply connected components is observed, because very few voxels have a concentration at or above this threshold. As the concentration threshold is decreased, more voxels qualify for inclusion, leading to an increase in β_{0}. The value of β_{0} remains constant over a certain range, indicating that these are real features; a plot of the voxels at δ = 0.3 shows that it indeed captures real clusters of Sc. With a further decrease of the concentration threshold, a decrease in β_{0} is observed. This is because every voxel outside the Sc clusters has some minimal Sc content, and the inclusion of all exterior voxels results in one single connected component. We also observe a peak in the value of β_{1} at a low concentration of δ = 0.03. When we plot the isoconcentration surface for those voxels, we find that they represent cavities: these voxels with very low Sc concentration sit on the edges of the Sc clusters and thereby enclose the Sc clusters within themselves. A similar trend is observed with Mg, where at low concentration the Mg isosurface contains cavities that enclose Sc clusters, whereas at high concentration there are few voxels.
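The trend in β₀ with threshold can be illustrated on a synthetic 2D concentration field. Note the simplifications: this sketch uses a simple superlevel-set mask (concentration ≥ δ) rather than the chapter's δ ± 0.02 band, and the field, blob positions, and thresholds are invented for illustration.

```python
import numpy as np

# Synthetic 2D "concentration" field with two solute-rich Gaussian blobs
y, x = np.mgrid[0:60, 0:60]
conc = (np.exp(-((x - 15) ** 2 + (y - 15) ** 2) / 200.0)
        + np.exp(-((x - 45) ** 2 + (y - 45) ** 2) / 200.0))

def beta0_at(delta):
    """beta_0 (connected components, 4-connectivity) of the voxels whose
    concentration is at least delta, via an explicit flood fill."""
    mask = conc >= delta
    seen = np.zeros_like(mask, dtype=bool)
    components = 0
    for i, j in zip(*np.nonzero(mask)):
        if seen[i, j]:
            continue
        components += 1           # new component found; flood-fill it
        stack = [(i, j)]
        seen[i, j] = True
        while stack:
            a, b = stack.pop()
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = a + da, b + db
                if (0 <= na < mask.shape[0] and 0 <= nb < mask.shape[1]
                        and mask[na, nb] and not seen[na, nb]):
                    seen[na, nb] = True
                    stack.append((na, nb))
    return components

# High thresholds isolate the two blobs; a low threshold merges them
print([beta0_at(d) for d in (0.8, 0.5, 0.2)])  # → [2, 2, 1]
```

The plateau of β₀ = 2 over a range of thresholds is the signature of real features, while the drop to β₀ = 1 at low threshold mirrors the merging of all voxels into one connected component described above.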

## 7.5 Spatial Uncertainty in Isosurfaces

Defining x_{i} as the *i*th atom in a voxel, with coordinates \( \left( {x_{i}^{x} ,x_{i}^{y} ,x_{i}^{z} } \right) \), the mean µ_{x} and standard deviation σ_{x} along the x-axis for a voxel containing *n* atoms are given as follows:

\( \mu_{x} = \frac{1}{n}\sum\nolimits_{i = 1}^{n} {x_{i}^{x} } \)

\( \sigma_{x} = \sqrt {\frac{1}{n}\sum\nolimits_{i = 1}^{n} {\left( {x_{i}^{x} - \mu_{x} } \right)^{2} } } \)

with analogous expressions along the y- and z-axes.
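In numpy terms, the per-voxel statistics above are a per-axis mean and population standard deviation; the coordinates below are hypothetical:

```python
import numpy as np

# Hypothetical coordinates (n atoms x 3) for the atoms in one voxel
atoms = np.array([[0.1, 0.2, 0.0],
                  [0.3, 0.1, 0.4],
                  [0.2, 0.3, 0.2]])

mu = atoms.mean(axis=0)    # (mu_x, mu_y, mu_z)
sigma = atoms.std(axis=0)  # (sigma_x, sigma_y, sigma_z), population form
print(round(float(mu[0]), 4), round(float(sigma[0]), 4))  # → 0.2 0.0816
```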

## 7.6 Summary

Atom probe tomography is a chemical imaging tool that produces data in the form of mathematical point clouds. Unlike most images, which have a continuous gray scale over their voxels, atom probe images have voxels associated with discrete points corresponding to individual atoms. The informatics challenge is to assess nano- and sub-nanoscale variations in morphology associated with isosurfaces when no clear physical model for image formation exists, given the uncertainty and sparseness of noisy data. In this chapter, we have provided an overview of topological data analysis and computational homology as powerful new informatics tools that address such data challenges in exploring atom probe images.

## Notes

### Acknowledgements

We gratefully acknowledge support from NSF DIBBs Project OAC-1640867 and NSF Project DMR-1623838. KR acknowledges support from the Erich Bloch Endowed Chair at the University at Buffalo-State University of New York.

## References

- 1. M.K. Miller, R.G. Forbes, *Atom-Probe Tomography* (Springer, New York, 2014)
- 2. D.J. Larson, T.J. Prosa, R.M. Ulfig, B.P. Geiser, T.F. Kelly, *Local Electrode Atom Probe Tomography: A User’s Guide* (Springer, New York, 2013)
- 3. B. Gault, M.P. Moody, J.M. Cairney, S.P. Ringer, *Atom Probe Microscopy* (Springer, New York, 2012)
- 4. S.K. Suram, K. Rajan, Microsc. Microanal. **18**, 941 (2012)
- 5. B. Gault, S.T. Loi, V.J. Araullo-Peters, L.T. Stephenson, M.P. Moody, S.L. Srestha, R.K.W. Marceau, L. Yao, J.M. Cairney, S.P. Ringer, Ultramicroscopy **111**, 1619 (2011)
- 6. E.R. McMullen, J.P. Perdew, Solid State Commun. **44**, 945 (1982)
- 7. E.R. McMullen, J.P. Perdew, Phys. Rev. B **36**, 2598 (1987)
- 8. M.K. Miller, T.F. Kelly, K. Rajan, S.P. Ringer, Mater. Today **15**, 158 (2012)
- 9. M.K. Miller, E.A. Kenik, Microsc. Microanal. **10**, 336 (2004)
- 10. K. Hono, Acta Mater. **47**, 3127 (1999)
- 11. T.F. Kelly, D.J. Larson, K. Thompson, J.D. Olson, R.L. Alvis, J.H. Bunton, B.P. Gorman, Annu. Rev. Mater. Res. **37**, 681 (2007)
- 12. T.F. Kelly, M.K. Miller, Rev. Sci. Instrum. **78**, 1101 (2007)
- 13. M.K. Miller, *Atom-Probe Tomography* (Kluwer Academic/Plenum Publishers, New York, 2000)
- 14. M.K. Miller, A. Cerezo, M.G. Heatherington, G.D.W. Smith, *Atom-Probe Field-Ion Microscopy* (Clarendon Press, Oxford, 1996)
- 15. J. Rüsing, J.T. Sebastian, O.C. Hellman, D.N. Seidman, Microsc. Microanal. **6**, 445 (2000)
- 16. D.N. Seidman, R. Herschitz, Acta Metall. **32**, 1141 (1985)
- 17. D.N. Seidman, R. Herschitz, Acta Metall. **32**, 1155 (1985)
- 18. D.N. Seidman, B.W. Krakauer, D. Udler, J. Phys. Chem. Solids **55**, 1035 (1994)
- 19. M. Hetherington, M. Miller, J. Phys. Colloq. **50**, C8 (1989)
- 20. M. Hetherington, M. Miller, J. Phys. Colloq. **48**, C6-559 (1987)
- 21. K. Torres, M. Daniil, M.A. Willard, G.B. Thompson, Ultramicroscopy **111**, 464 (2011)
- 22. C. Phillips, G. Voth, Soft Matter **9**, 8552 (2013)
- 23. M.P. Moody, L.T. Stephenson, P.V. Liddicoat, S.P. Ringer, Microsc. Res. Tech. **70**, 258 (2007)
- 24. F. Vurpillot, F. De Geuser, G. Da Costa, D. Blavette, J. Microsc. **216**, 234 (2004)
- 25. J.M. Hyde, A. Cerezo, T.J. Williams, Ultramicroscopy **109**, 502 (2009)
- 26. M.P. Moody, B. Gault, L.T. Stephenson, D. Haley, S.P. Ringer, Ultramicroscopy **109**, 815 (2009)
- 27. B. Gault, X.Y. Cui, M.P. Moody, A.V. Ceguerra, A.J. Breen, R.K.W. Marceau, S.P. Ringer, Scripta Mater. **131**, 93 (2017)
- 28. K. Hono, D. Raabe, S.P. Ringer, D.N. Seidman, MRS Bull. **41**, 23 (2016)
- 29. F. Vurpillot, W. Lefebvre, J.M. Cairney, C. Obderdorfer, B.P. Geiser, K. Rajan, MRS Bull. **41**, 1 (2016)
- 30. J.M. Cairney, K. Rajan, D. Haley, B. Gault, P.A.J. Bagot, P.-P. Choi, P.J. Felfer, S.P. Ringer, R.K.W. Marceau, M.P. Moody, Ultramicroscopy **159**, 324 (2015)
- 31. O. Hellman, J. Vandenbroucke, J. Blatz du Rivage, D.N. Seidman, Mater. Sci. Eng. A **327**, 29 (2002)
- 32. L.T. Stephenson, M.P. Moody, P.V. Liddicoat, S.P. Ringer, Microsc. Microanal. **13**, 448 (2007)
- 33. A. Shariq, T. Al-Kassab, R. Kirchheim, R.B. Schwarz, Ultramicroscopy **107**, 773 (2007)
- 34. F.D. Geuser, W. Lefebvre, Microsc. Res. Tech. **74**, 257 (2011)
- 35. L. Ericksson, T. Byrne, E. Johansson, J. Trygg, C. Vikstrom, *Multi- and Megavariate Data Analysis: Principles, Applications* (Umetrics AB, Umeå, 2001)
- 36. H.-P. Kriegel, P. Kröger, J. Sander, A. Zimek, Wiley Interdiscip. Rev. Data Min. Knowl. Disc. **1**, 231 (2011)
- 37. G. Carlsson, Bull. Am. Math. Soc. **46**, 255 (2009)
- 38. H. Edelsbrunner, D. Letscher, A. Zomorodian, Discrete Comput. Geom. **28**, 511 (2002)
- 39. S. Bhattacharya, R. Ghrist, V. Kumar, IEEE Trans. Rob. **31**, 578 (2015)
- 40. G. Carlsson, T. Ishkhanov, V. de Silva, A. Zomorodian, Int. J. Comput. Vision **76**, 1 (2008)
- 41. P.G. Cámara, Curr. Opin. Syst. Biol. **1**, 95 (2017)
- 42. I. Donato, M. Gori, M. Pettini, G. Petri, S. De Nigris, R. Franzosi, F. Vaccarino, Phys. Rev. E **93**, 052138 (2016)
- 43. H. Edelsbrunner, D. Letscher, A. Zomorodian, Discrete Comput. Geom. **28**, 511 (2002)
- 44. Y. Hiraoka, T. Nakamura, A. Hirata, E.G. Escolar, K. Matsue, Y. Nishiura, Proc. Natl. Acad. Sci. **113**, 7035 (2016)
- 45. D. Horak, S. Maletic, M. Rajkovic, J. Stat. Mech. Theory Exp. **2009**, P03034 (2009)
- 46. H. Liang, H. Wang, PLoS Comput. Biol. **13**, e1005325 (2017)
- 47. N. Otter, M.A. Porter, U. Tillmann, P. Grindrod, H.A. Harrington, arXiv:1506.08903 (2015)
- 48. B. Rieck, H. Leitte, Comput. Graph. Forum **34**, 431 (2015)
- 49. B. Rieck, H. Leitte, Comput. Graph. Forum **35**, 81 (2016)
- 50. B. Rieck, H. Mara, H. Leitte, IEEE Trans. Vis. Comput. Graph. **18**, 2382 (2012)
- 51. C.M. Topaz, L. Ziegelmeier, T. Halverson, PLoS ONE **10**, e0126383 (2015)
- 52. B. Wang, G.-W. Wei, J. Comput. Phys. **305**, 276 (2016)
- 53. K. Xia, G.-W. Wei, Int. J. Numer. Methods Biomed. Eng. **30**, 814 (2014)
- 54. K. Xia, G.-W. Wei, J. Comput. Chem. **36**, 1502 (2015)
- 55. A. Zomorodian, G. Carlsson, Discrete Comput. Geom. **33**, 249 (2005)
- 56. S. Srinivasan, K. Kaluskar, S. Dumpala, S. Broderick, K. Rajan, Ultramicroscopy **159**, 381 (2015)
- 57. V. Epanechnikov, Theory Probab. Appl. **14**, 153 (1969)
- 58. E. Parzen, Ann. Math. Stat. **33**, 1065 (1962)
- 59. M. Rosenblatt, Ann. Math. Stat. **27**, 832 (1956)
- 60. D.W. Scott, Biometrika **66**, 605 (1979)
- 61. O.C. Hellman, J.A. Vandenbroucke, J. Rüsing, D. Isheim, D.N. Seidman, Microsc. Microanal. **6**, 437 (2000)
- 62. S. Srinivasan, K. Kaluskar, S. Broderick, K. Rajan, Ultramicroscopy **159**, 374 (2015)

## Copyright information

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.