Establishing and inverting process–structure–property (PSP) linkages is a central goal in integrated computational materials engineering (ICME), as it promises to accelerate the development of new materials. With increasing computational resources and rapid progress in data processing and machine learning, data-centric workflows for microstructure design are receiving growing attention [1]. These workflows rely on large databases that are created using numerical simulations. A central question in this context is how to choose and create the microstructures to simulate from the vast space of possible structures. To avoid time-consuming and cost-intensive experimental campaigns, an efficient microstructure characterization and reconstruction (MCR) tool is therefore a key ingredient for making large-scale ICME workflows feasible. A very brief introduction to MCR is given in the following; the reader is referred to [2] for an in-depth review.

Microstructure characterization, the first aspect of MCR, is required to handle the stochasticity of the microstructures: Two distinct image sections of the same microstructure are similar from a visual and statistical perspective, but completely different in terms of a pixel-based representation. Thus, for operations like quantitative comparisons, it is reasonable to map the pixel-based microstructure to a translation-invariant, stationary descriptor D that allows for these operations. In practice, D can range from simple volume fractions to advanced statistical descriptors such as spatial correlations. Therefore, D is a reasonable choice for representing structures in PSP linkages. Furthermore, it provides a possibility to explore the microstructure space in data-driven materials development workflows.

Microstructure reconstruction, the second aspect of MCR, can be regarded as the inverse operation to microstructure characterization: the goal is to find a microstructure whose descriptor equals a given value. Microstructure reconstruction allows one to (i) create a plausible 3D volume element from a 2D slice such as a microscopy image, (ii) create a set of similar microstructures given a single realization and (iii) interpolate between microstructures in terms of their descriptors.

These two aspects of MCR, namely characterization and reconstruction, can be treated independently, for example using spatial correlations as descriptors and modern machine learning-based techniques for reconstruction. However, automatic ICME workflows for complex materials highly benefit from a principled exploration of the descriptor space, where microstructures are selected for reconstruction, simulation and homogenization in a way that maximizes the expected information gain for the PSP linkage [3]. Therefore, it is important to combine characterization and reconstruction so that, given arbitrary combinations of descriptors and their values, the reconstruction can be triggered from these descriptors. Furthermore, recent research indicates that there is no single best descriptor for microstructure reconstruction [4] and for PSP linkages [5], but that it is reasonable to choose descriptors based on the structure at hand. For this purpose, we present MCRpy, a modular and extensible open-source tool that facilitates easy microstructure characterization and reconstruction based on arbitrary descriptors.

Free open-source platforms are a great way of harnessing the advantages of digitization and modern computational infrastructure. The free accessibility allows researchers to quickly test each other's ideas and to develop them further. The open-source nature of such a platform enables it to become a collaborative project, considerably leveraging its potential. Especially in complex scientific disciplines, such collaboration is indispensable. As an example, consider the field of machine learning, specifically neural networks [6]. In the early days of research on neural networks, newcomers had to implement relatively complex procedures like back-propagation before being able to reproduce results from the literature, let alone develop them further. Later, easy-to-use open-source libraries like TensorFlow and PyTorch greatly lowered this hurdle, allowing more researchers to enter the field. This surely contributed to the rapid progress of the last decades and to the plethora of neural network architectures and applications observed today.

The digital infrastructure of the materials science community has grown considerably as a consequence of the materials genome initiative [7] and similar projects. Despite the rapidly growing number of tools for materials innovation in general, MCR specifically is in a position comparable to that of machine learning 20 years ago: a great variety of methods exists, but in the absence of a common platform and interface, every newcomer to the field has to implement fundamental technologies like the lineal path function and the Yeong–Torquato algorithm by hand. This is a big hurdle that thwarts rapid progress. Thus, the goal of this contribution is to accelerate MCR research by providing MCRpy as an easy-to-use, extensible and flexible software solution that aims at a seamless workflow by providing various interfaces to new and established techniques.

The work starts with Sect. Current Digital Infrastructure, where the current digital infrastructure is reviewed and it is outlined how MCRpy integrates into it. Then, MCRpy is presented in Sect. Overview of MCRpy. Typical application workflows are presented in Sect. Typical MCRpy Workflows and finally, a conclusion is drawn in Sect. Conclusions and Outlook.

Current Digital Infrastructure

After President Barack Obama announced the US-American Materials Genome Initiative [7] that provided substantial funding for accelerated materials development, collaborative projects and digital frameworks were initiated all over the world. A non-exhaustive list includes the American NanoMine open data resource [8], the European NOMAD-CoE [9] and its platform described in [10] and the Swiss NCCR MARVEL [11] with its AiiDA platform [12] described in [13]. The extremely popular and often-cited pymatgen library [14] can be mentioned as an early contribution to open-source materials science software infrastructure. This trend continues, as can be seen with the recent example radonpy [15]. However, much of this research is focused on deriving material properties from considerations on the atomistic length scale.

On the continuum length scale, the Python Materials Knowledge System (pyMKS) [16] is a notable open-source framework. Its efficient FFT-based implementation of the spatial two-point correlation \(S_2\) facilitates easy microstructure characterization. However, in pyMKS, microstructure characterization is limited to \(S_2\) and no further descriptors are available. Moreover, pyMKS does not allow for microstructure reconstruction, only characterization. A strong focus lies on efficient homogenization [17] and direct coupling to an internal finite element solver, SfePy [18, 19]. This is very convenient for simple problems like elasticity. For advanced techniques like crystal plasticity, external software like the Düsseldorf Advanced Materials Simulation Kit (DAMASK) [20] can be used. Furthermore, pyMKS provides an easy interface for dimensionality reduction of the descriptor space and for establishing structure-property linkages based on the reduced descriptors and the corresponding homogenized properties. In summary, pyMKS acts as an overarching framework to implement ICME workflows.
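The FFT-based evaluation of \(S_2\) that makes pyMKS efficient rests on the Wiener–Khinchin theorem: the periodic two-point autocorrelation of a phase indicator is the inverse FFT of its power spectrum. A minimal NumPy sketch of this idea (an illustration of the principle, not pyMKS's actual API):

```python
import numpy as np

def two_point_autocorrelation(indicator):
    """Periodic two-point autocorrelation S2 of a binary indicator field.

    Uses the Wiener-Khinchin theorem: the circular autocorrelation is the
    inverse FFT of the power spectrum, normalized by the number of pixels.
    """
    f = np.fft.fftn(indicator)
    s2 = np.fft.ifftn(f * np.conj(f)).real / indicator.size
    return np.fft.fftshift(s2)  # move the zero-shift entry to the center

# Minimal check: at zero shift, S2 equals the volume fraction.
rng = np.random.default_rng(0)
ms = (rng.random((64, 64)) < 0.3).astype(float)
s2 = two_point_autocorrelation(ms)
center = tuple(n // 2 for n in ms.shape)
assert np.isclose(s2[center], ms.mean())
```

At zero shift, \(S_2\) reduces to the volume fraction of the phase, which provides a quick sanity check of any implementation.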

For numerical simulation of microstructures, many open-source tools are available, ranging from general and easy-to-use packages like SfePy [18, 19] to special-purpose software like DAMASK [20], which comes with a highly optimized Fourier-based crystal plasticity solver. Furthermore, current research on FFT-based homogenization [21] is making remarkable progress that might lead to an open-source tool soon. Thus, with pyMKS as an overarching framework and numerous tools and progress for numerical simulation and homogenization, an open-source MCR software package can be identified as a final component of ICME workflows.

To the authors’ best knowledge, the only widely used software tool for MCR is DREAM.3D [22], a long-developed and full-fledged program. Its roots date back around 20 years to the early works of Michael Groeber and the Carnegie Mellon University microstructure builder. Despite this long history, DREAM.3D still enables numerous current research activities in materials innovation and ICME, see for example [23]. This success is empowered by the many features, robustness, efficiency and easy user interface of DREAM.3D, which may be partially attributable to its open-source core. Thus, DREAM.3D can be highly recommended for the workflows it implements. However, the internal microstructure representation and the available pipelines in DREAM.3D are mainly intended for certain material systems and microstructure descriptors. The internal data format as well as the provided characterization and reconstruction algorithms are centered around classical descriptors like grain size distribution functions and orientation distribution functions. This makes DREAM.3D excellent at reconstructing geometric inclusions like ellipses and texture as in metallic materials, but multiphase materials with complex morphology as shown in Fig. 8 cannot be realized. Furthermore, DREAM.3D is written in C++, which is not common among engineering researchers due to its complexity. In recent research, new microstructure descriptors or reconstruction algorithms are sometimes provided as Python or Matlab code in a GitHub repository, but are hardly ever implemented in C++ as a DREAM.3D pipeline. Even if they were, these descriptors could not readily be used for reconstruction, since the DREAM.3D reconstruction pipelines are tailored toward specific descriptors and would need to be re-implemented. Thus, DREAM.3D is an excellent and robust program, but it is mainly suited for specific practical applications and certain materials.

In contrast, the present work aims at creating a flexible research platform for multiphase materials of high morphological complexity. Thus, MCRpy clearly differs from DREAM.3D regarding the targeted audience and the scope of materials systems. As a Python package, it integrates naturally with numerous tools for numerical simulation, machine learning or materials science workflows. Especially pyMKS can act as an overarching ICME framework, where the present work provides an MCR solution. In summary, MCRpy attempts to fill a striking gap in the ICME software landscape. A theoretical understanding of MCRpy is provided in Sect. Overview of MCRpy, followed by an illustration of typical workflows in Sect. Typical MCRpy Workflows.

Overview of MCRpy

Microstructure characterization and reconstruction in Python (MCRpy) is an open-source software tool. It is released under the Apache 2.0 license and can be used

  • (i) as a program with graphical user interface (GUI), intended for non-programmers and as an easy introduction to MCR,

  • (ii) as a command line tool, intended for automated and large-scale application on high-performance computers without graphical interface, and

  • (iii) as a regular PIP-installable Python module, intended for performing advanced and custom operations in the descriptor space.

A schematic overview is given in Fig. 1: the main functionalities of MCRpy, characterization and reconstruction, are explained in Sects. Characterization and Reconstruction, respectively. Furthermore, additional functions are provided to manipulate the microstructures and descriptors and to visualize data. A complete set of the available operations is given in Table 1, and supported inputs and outputs for selected functions are summarized in Table 2. The core idea of MCRpy is its extensibility in that arbitrary descriptors can be used for characterization, and arbitrary loss functions combining arbitrary descriptors can be minimized using arbitrary optimizers to reconstruct random heterogeneous media. This is outlined in Sect. Extensibility.

Fig. 1

Schematic overview of MCRpy: Microstructures can be characterized by descriptors and reconstructed by optimization. Herein, descriptors, losses and optimizers can be provided as flexible plugin modules

Table 1 Functions provided by MCRpy
Table 2 Possible inputs and outputs for selected MCRpy functions


Characterization

The characterization function

$$\begin{aligned} f_C : M \mapsto \{D_i\}_{i=1}^{n_D} \end{aligned}$$

assigns a given pixel-based microstructure M to a set of \(n_D\) corresponding descriptors \(D_i\). These descriptors, sometimes referred to as statistical descriptors, quantify the microstructural morphology in a statistical and translation-invariant manner. Here, a microstructure with \(n_\text {p}\) different phases is represented as a set of \(n_\text {p}\) indicator functions

$$\begin{aligned} I_p(x) = \begin{cases} 1, & \text {if } x \text { in phase } p \\ 0, & \text {else.} \end{cases} \end{aligned}$$
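For a microstructure stored as an integer array of phase labels, these indicator functions can be assembled directly. A minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def indicator_functions(microstructure, n_phases):
    """Stack of binary indicator fields I_p, one per phase."""
    return np.stack([(microstructure == p).astype(float)
                     for p in range(n_phases)])

# Example: a 4x4 two-phase structure.
ms = np.array([[0, 0, 1, 1],
               [0, 1, 1, 1],
               [0, 0, 0, 1],
               [0, 0, 1, 1]])
I = indicator_functions(ms, n_phases=2)
v_f = I.mean(axis=(1, 2))  # volume fraction of each phase
assert np.isclose(I.sum(axis=0), 1.0).all()  # indicators partition unity
```

Averaging each indicator field directly yields the volume fractions, the simplest descriptor mentioned below.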

For example, the volume fraction \(v_\text {f}\) of a microstructure is a very simple descriptor. Of course, the volume fraction captures some but not all information needed to describe the microstructure. Several other quantities matter, for example the size and shape of inclusions and the degree to which distinct phases are spatially clustered. Besides these classical descriptors, in the light of increasing computational resources, recent research has focused on more universal high-dimensional descriptors that are less dense in information but have higher descriptive capability overall. As an early example of high-dimensional descriptors, spatial correlations [24] have proven to be a versatile tool that is still used today [2]. A good introduction can be found in [25]. A differentiable generalization of spatial correlations is presented in [26] and used in this work. Spatial correlations have inspired a range of conceptually similar descriptors like the lineal path function [27], the cluster correlation function [28] and the polytope function [29]. The reader is referred to [2] for a comprehensive overview. Finally, the Gram matrices of the feature maps of pre-trained convolutional neural networks have been shown to contain relevant microstructural information [30]. Remarkable results in microstructure reconstruction have been achieved using such Gram matrices alone [31, 32] and in combination with other descriptors [4, 33].
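Abstracting away the pre-trained network, the Gram-matrix descriptor itself is just a channel-by-channel inner product of the feature maps. A schematic NumPy version, where a random array stands in for actual VGG-19 activations:

```python
import numpy as np

def gram_matrix(feature_maps):
    """Gram matrix of CNN feature maps with shape (height, width, channels).

    Entry (i, j) is the spatial inner product of channels i and j,
    normalized by the number of spatial positions.
    """
    h, w, c = feature_maps.shape
    flat = feature_maps.reshape(h * w, c)
    return flat.T @ flat / (h * w)

rng = np.random.default_rng(1)
features = rng.standard_normal((32, 32, 8))  # placeholder feature maps
G = gram_matrix(features)
assert G.shape == (8, 8)
assert np.allclose(G, G.T)  # Gram matrices are symmetric
```

Because the Gram matrix discards the spatial arrangement of the feature maps, it is translation-invariant, which is exactly the property required of a microstructure descriptor.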

Finding a microstructure description that is both dense and contains all relevant information is an active field of research [2]. Examples are the recently developed entropic descriptors [34] or polytope functions [29]. Thus, besides the currently available descriptors listed in Table 3, users can add a descriptor plugin to MCRpy. If the descriptor plugin is defined with an indicator function as input, it is applied to the indicator function of each phase separately. Furthermore, a 2D descriptor is automatically applied to and averaged over 2D slices of a 3D structure. The only requirement posed on new descriptors is that they must be computable on a pixel or voxel geometry. More details on extensibility can be found in Sect. Extensibility. All of the available and added descriptors can be used for microstructure reconstruction, which is discussed in the following section.
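The automatic slice-averaging of 2D descriptors mentioned above can be sketched generically: apply the descriptor to every slice along one axis and average the results (an illustration of the concept, not MCRpy's internal code):

```python
import numpy as np

def average_over_slices(descriptor_2d, volume, axis=0):
    """Apply a 2D descriptor to each slice of a 3D volume and average."""
    slices = np.moveaxis(volume, axis, 0)
    values = [descriptor_2d(s) for s in slices]
    return np.mean(values, axis=0)

# Example with the volume fraction as the 2D descriptor.
rng = np.random.default_rng(2)
vol = (rng.random((4, 8, 8)) < 0.4).astype(float)
vf_avg = average_over_slices(lambda s: s.mean(), vol)
assert np.isclose(vf_avg, vol.mean())  # slice average equals global mean here
```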

Table 3 Microstructure descriptors that are implemented in MCRpy


Reconstruction

In MCRpy, microstructure reconstruction is fundamentally regarded as an optimization problem

$$\begin{aligned} M^\text {rec} = \underset{M}{\text {argmin}} \; \mathcal {L}\left( \{ (D_i(M), \; D_i^\text {des}) \}_{i=1}^{n_D} \right) \quad , \end{aligned}$$

where the reconstructed microstructure \(M^\text {rec}\) minimizes a loss function \(\mathcal {L}\). The loss function depends on \(n_D\) different descriptors \(\{D_i\}_{i=1}^{n_D}\) and quantifies the distance between their current and desired values. Herein, \(D_i(M)\) denotes the value of the i-th descriptor for the current microstructure and \(D_i^\text {des}\) its desired value. Naturally, as in the characterization step, arbitrary descriptors can be used, for example the volume fractions \(v_\text {f}\), the spatial correlations S or the Gram matrices G. For the loss function \(\mathcal {L}\), a simple choice is a weighted sum of mean squared error norms. Different loss functions are available in MCRpy and the user can implement additional ones. Finally, given a set of descriptors and a loss function, an optimization problem emerges as a special case of Equation 3. This optimization problem can be solved using an optimizer, which is provided as a plugin module. If all descriptors are differentiable, then a gradient-based optimizer like L-BFGS-B [36] can be used, leading to the very efficient differentiable MCR [4, 26]. Otherwise, the choice is limited to gradient-free optimizers like simulated annealing.
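Such a weighted sum of mean squared errors over several descriptors can be sketched as follows; the descriptor functions, targets and weights are placeholders:

```python
import numpy as np

def descriptor_loss(microstructure, descriptors, desired, weights):
    """Weighted sum of MSE terms between current and desired descriptors.

    descriptors: list of callables D_i mapping a microstructure to an array
    desired:     list of target descriptor values D_i^des
    weights:     list of scalar weights lambda_i
    """
    return sum(w * np.mean((D(microstructure) - D_des) ** 2)
               for D, D_des, w in zip(descriptors, desired, weights))

# Toy example: the volume fraction as the only descriptor.
vf = lambda m: np.array([m.mean()])
ms = np.full((8, 8), 0.5)
loss = descriptor_loss(ms, [vf], [np.array([0.3])], [1.0])
assert np.isclose(loss, (0.5 - 0.3) ** 2)
```

Any optimizer, gradient-based or gradient-free, can then be pointed at this scalar loss.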

As a simple example, if only the spatial two-point correlation \(S_2\) is used as a descriptor and the loss function is formulated as a mean squared error norm of the descriptor difference, the following optimization problem emerges:

$$\begin{aligned} M^\text {rec} = \underset{M}{\text {argmin}} \; || S_2(M) - S_2^\text {des} ||_{\text {MSE}} \quad . \end{aligned}$$

If simulated annealing is chosen as an optimizer, MCRpy effectively performs the well-known Yeong–Torquato algorithm as used in [37].
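The core of the Yeong–Torquato algorithm is easy to sketch: repeatedly swap two pixels of different phase (which preserves the volume fraction exactly), recompute the descriptor loss and accept or reject the swap according to the Metropolis criterion. A deliberately simplified NumPy version with \(S_2\) as the only descriptor (not MCRpy's implementation; the cooling schedule and step count are arbitrary choices):

```python
import numpy as np

def s2_loss(ms, s2_target):
    """MSE between the periodic S2 of ms and a target S2."""
    f = np.fft.fftn(ms)
    s2 = np.fft.ifftn(f * np.conj(f)).real / ms.size
    return np.mean((s2 - s2_target) ** 2)

def yeong_torquato(ms, s2_target, n_steps=500, t0=1e-4, seed=0):
    """Pixel-swap simulated annealing toward a target two-point correlation."""
    rng = np.random.default_rng(seed)
    ms = ms.copy()
    loss = s2_loss(ms, s2_target)
    for step in range(n_steps):
        temp = t0 * (1 - step / n_steps)  # linear cooling schedule
        # pick two random pixels; only swaps across phases change anything
        a = tuple(rng.integers(0, s) for s in ms.shape)
        b = tuple(rng.integers(0, s) for s in ms.shape)
        if ms[a] == ms[b]:
            continue
        ms[a], ms[b] = ms[b], ms[a]
        new_loss = s2_loss(ms, s2_target)
        accept = (new_loss < loss or
                  rng.random() < np.exp(-(new_loss - loss) / max(temp, 1e-12)))
        if accept:
            loss = new_loss
        else:
            ms[a], ms[b] = ms[b], ms[a]  # undo the rejected swap
    return ms, loss

# Reconstruct a small random structure from its own S2.
rng = np.random.default_rng(3)
target = (rng.random((16, 16)) < 0.5).astype(float)
f = np.fft.fftn(target)
s2_t = np.fft.ifftn(f * np.conj(f)).real / target.size
start = rng.permutation(target.ravel()).reshape(target.shape)
rec, final = yeong_torquato(start, s2_t)
assert rec.mean() == start.mean()  # swaps preserve the volume fraction
```

Recomputing the full \(S_2\) after every swap is wasteful; practical implementations update the correlations incrementally, which is one reason dedicated software is preferable to ad hoc scripts.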

As a more recent example, if the Gram matrices G of the feature maps of the VGG-19 convolutional neural network are chosen as a descriptor [30] for the same loss function, the emerging optimization problem

$$\begin{aligned} M^\text {rec} = \underset{M}{\text {argmin}} \; || G(M) - G^\text {des} ||_{\text {MSE}} \quad \end{aligned}$$

allows for a gradient-based optimizer. If L-BFGS-B [36] is chosen for this purpose, MCRpy effectively performs the approach of Li et al. [31], which is a special case of differentiable MCR [26].

As a final example, the differentiable three-point correlations \(S_3\), the above-mentioned Gram matrices G and the normalized total variation \(\mathcal {V}\) are combined. The loss function accumulates the weighted mean squared error norm, where \(\lambda _{D_i}\) denotes the weight of the i-th descriptor. If the resulting optimization problem

$$\begin{aligned} M^\text {rec} = \underset{M}{\text {argmin}} \;&\lambda _S || S_3(M) - S_3^\text {des} ||_{\text {MSE}} \nonumber \\ +&\lambda _G || G(M) - G^\text {des} ||_{\text {MSE}} \nonumber \\ +&\lambda _\mathcal {V} || \mathcal {V}(M) - \mathcal {V}^\text {des} ||_{\text {MSE}} \end{aligned}$$

is solved using the gradient-based L-BFGS-B optimizer, MCRpy effectively performs the differentiable MCR algorithm as used in [4].

As can be seen, different parameter settings make it possible to re-create well-known reconstruction algorithms as well as to try out new ones by simply changing the arguments. As an overview, all descriptors, optimizers and loss functions are listed in Table 4.

Table 4 Microstructure descriptors, optimizers and loss functions that are implemented in MCRpy. Simulated annealing is the only optimizer in the list that is not gradient-based. More details on the descriptors are given in Table 3.


Extensibility

The central advantage of MCRpy is its extensibility: descriptors, loss functions and optimizers can easily be provided by anyone. For example, new optimization-based reconstruction algorithms like the work of Cecen et al. [43] can be implemented as an optimizer plugin in order to combine them with all the available microstructure descriptors. This is achieved by a plugin architecture, which is sketched in Fig. 2. In this section, we explain the underlying software pattern, whereas exact instructions and an example on how to write a plugin are given in Sect. Defining a custom descriptor. In the following, the plugin architecture is explained for the case of descriptors. The same idea is employed for loss functions and optimizers.

Fig. 2

Schematic overview of the plugin architecture in MCRpy

A descriptor plugin can be written by simply inheriting from the abstract Descriptor class. Consequently, the available descriptor plugins are not known at the time of writing the MCRpy core code, so they must be loaded dynamically as soon as the characterization or reconstruction module demands the plugin. This is done by means of a loader module based on importlib. Upon import, a descriptor plugin registers itself at a descriptor factory. After that, the descriptor factory can be queried to create descriptor instances from the plugin. The descriptor factory then returns a callable which computes the descriptor value given a microstructure. This callable can now be used to characterize microstructures, compose loss functions, compute gradients using automatic differentiation and to reconstruct microstructures.
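The register-at-import pattern described above can be distilled into a few lines of plain Python. This is a structural sketch, not MCRpy's actual source; all class and function names besides Descriptor are illustrative:

```python
import numpy as np

descriptor_registry = {}

def register(name, descriptor_cls):
    """Called by each plugin upon import to announce itself to the factory."""
    descriptor_registry[name] = descriptor_cls

def create(name, **kwargs):
    """Factory: look up a registered plugin and return a descriptor callable."""
    return descriptor_registry[name].make_descriptor(**kwargs)

class Descriptor:
    """Abstract base class every descriptor plugin inherits from."""
    @staticmethod
    def make_descriptor(**kwargs):
        raise NotImplementedError

# A minimal plugin. In MCRpy, this would live in its own file and be
# loaded dynamically via importlib when first requested.
class VolumeFraction(Descriptor):
    @staticmethod
    def make_descriptor(**kwargs):
        return lambda indicator: indicator.mean()

register("VolumeFraction", VolumeFraction)

vf = create("VolumeFraction")
assert vf(np.array([[1.0, 0.0], [0.0, 0.0]])) == 0.25
```

Because registration happens at import time, the core code never needs to know which plugins exist; the factory simply discovers whatever has been registered when a descriptor is first requested.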

Thus, adding a descriptor plugin to MCRpy merely consists of adding a file with the plugin definition to the right directory, while the rest of the code does not need to be changed. The descriptor immediately becomes available for characterization and for reconstruction in combination with arbitrary other descriptors, arbitrary loss functions and arbitrary optimizers.

Typical MCRpy Workflows

Typical use-cases and workflows of MCRpy are illustrated in this section by means of three representative examples. First, in Sect. Obtaining a 3D domain from a 2D microstructure slice, a plausible 3D volume element is reconstructed from a 2D microstructure slice. This is very relevant, since 3D information can be very time- and cost-intensive to obtain experimentally. Secondly, in Sect. Obtaining a set of similar volume elements, a statistically similar set of small volume elements is generated from a single example. This greatly reduces the computational effort for numerical homogenization. Thirdly, in Sect. Manipulating the descriptor space, descriptor values are directly manipulated and used for reconstructing novel structures. Techniques like this may be explored in the future to augment data sets and explore PSP linkages. These three examples are demonstrated in the three modes of operating MCRpy, namely via a GUI, as a command line tool and as a Python library, respectively. Note that this order is chosen for demonstration purposes only; all three workflows can be executed in all three modes of operation. Finally, in Sect. Defining a custom descriptor, it is demonstrated how to add a custom descriptor to MCRpy and how to use it for characterization and reconstruction. The original structures are taken from pyMKS [16] for Sects. Obtaining a 3D domain from a 2D microstructure slice to Manipulating the descriptor space and from [31] for Sect. Defining a custom descriptor.

Obtaining a 3D Domain from a 2D Microstructure Slice

As a first example, MCRpy is used to reconstruct a plausible 3D volume element given a segmented 2D slice. This is a common task since experimental observations are often available only in 2D. The 3D volume element can be used, for example, for numerical simulations. From an algorithmic perspective, this goal is achieved by computing the descriptor on the given slice and prescribing it on every slice of the microstructure; see [4] for details.

This task is solved using the MCRpy GUI as shown in Fig. 3. A simple approach would be characterization and immediate reconstruction, but as mentioned in Table 1, MCRpy provides a shortcut for this in the match function. After selecting the match-action on the left, the relevant options can be set in the center. The name of each option is identical to the command line and the Python library, allowing users to easily switch interfaces. By default, a 2D structure is reconstructed in 2D. However, by using the option add_dimension, the extent of the reconstructed structure in z-direction is set to the desired value. The differentiable three-point correlations \(\tilde{S}_3\) as proposed in [26] are chosen as descriptor. Furthermore, as discussed in [4], the variation \(\mathcal {V}\) is employed as a descriptor in order to suppress noise in the 3D reconstruction. The weights of \(\tilde{S}_3\) and \(\mathcal {V}\) are empirically set to 1 and 100, respectively. Finally, the role of the setting limit_to needs to be discussed. The parameter is introduced in [26] as P and Q and quantifies the length in pixels up to which spatial correlations are computed with the highest-possible precision. All longer-ranged correlations are computed on a lower-resolution version of the structure in order to save computational resources, cf. [26]. With a default of 16, it allows a flexible trade-off between accuracy and efficiency. A quantitative analysis of wallclock time and memory requirements is given in Appendix D. In this example, it is lowered to 8 in order to accelerate the computations.

Fig. 3

Screenshot of the MCRpy graphical user interface. After selecting an action on the left, all options can be set in the center and the action is performed upon clicking start. The options are identical to the command line interface and the Python library

After setting all options, the reconstruction can be started and the results can be viewed from the GUI by selecting the view-action on the left. 2D microstructures are plotted directly, whereas 3D structures are exported to and opened in ParaView [44]. The original 2D slice and the reconstructed 3D volume are shown in Fig. 4. Note that for 2D-to-3D reconstruction using multiple orthogonal 2D slices, an additional descriptor merging step is required as discussed in Sect. Manipulating the descriptor space and carried out in Appendix C.

Fig. 4

Results for the example in Section Obtaining a 3D domain from a 2D microstructure slice

In addition to the final microstructure, a convergence data file is written, which can be viewed interactively with MCRpy as shown in Fig. 5. On the left, the loss is plotted over iterations along with blue dots indicating intermediate results. The user can click on these dots to have the corresponding microstructure displayed on the right. For 3D structures, only one slice is plotted and the user can scroll through the microstructure using the mouse wheel. For displaying the raw phase indicator functions of multiphase structures and other functionalities, the user is referred to the documentation. In summary, the MCRpy GUI constitutes an easily accessible solution for microstructure reconstruction.

Fig. 5

Interactive window for inspecting convergence data. By selecting the highlighted dot on the left at iteration 60, a slice of the intermediate result is displayed on the right. In this example, the indicator function of phase 1 is displayed for slice 18 of 64

Obtaining a Set of Similar Volume Elements

As a second example, a statistically similar set of volume elements is created from a single microstructure example. In numerical homogenization, a volume element can only be called representative if it is large enough for the stochasticity of the microstructure to have no effect on the effective properties. In practice, this requirement can imply infeasible computational effort. If smaller volume elements are used, it is still possible to quantify the effective behavior by using sufficiently many smaller volume elements and statistically aggregating the results. An example of structure-property linkages based on this idea can be found in [45]. From an MCR perspective, this requires characterizing the given structure and reconstructing different random realizations from it.

This task is solved using MCRpy as a command line tool as shown in Listing 1. First, the original 2D microstructure stored as ms_slice.npy is characterized using the same parameters as in Sect. Obtaining a 3D domain from a 2D microstructure slice (line 1). Then, nine different 3D structures are generated by a simple loop over the reconstruction script (lines 2-7). Note that the extent of the reconstructed structures in voxels is set independently of the original slice (line 5). Furthermore, the loop index is passed to the reconstruction script in order to have it added to all result filenames, preventing previous results from being overwritten (line 5). Because the chosen descriptors are differentiable, the standard optimizer L-BFGS-B [36] can be used, allowing one to harness the computational efficiency of DMCR [4, 26]. On an Nvidia A100 GPU, the reconstructions take around 25 minutes per structure for 500 iterations. The original structure and the results can be seen in Fig. 6. In summary, the command line interface is analogous to the GUI and allows for easy automation and large-scale application.

Listing 1
Fig. 6

Input and results generated from the code in Listing 1

Manipulating the Descriptor Space

As a third example, the explicit availability of the descriptor is exploited by manipulating it directly. Specifically, MCRpy is used to interpolate between two given microstructures in a morphologically meaningful way. Consider two microstructures that could stem from different sets of process parameters. It can be interesting to create a morphology that is a mix between these two structures. For example, if numerical simulations and homogenization of the interpolated structure predict favorable effective properties, it might be worthwhile to fine-tune the process parameters or to establish a PSP linkage to manufacture these structures. Direct interpolation of the microstructures in terms of pixel values is meaningless. As a simple alternative, we interpolate linearly in the descriptor space and reconstruct microstructures from the interpolated descriptors.
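Once characterized, linear interpolation between two descriptors is a one-liner. A sketch with placeholder descriptor arrays:

```python
import numpy as np

def interpolate_descriptors(d_start, d_end, n_steps):
    """Linearly interpolate between two descriptor arrays, endpoints included."""
    return [(1 - t) * d_start + t * d_end
            for t in np.linspace(0.0, 1.0, n_steps)]

d_a = np.array([0.2, 0.5, 0.1])  # placeholder descriptor values
d_b = np.array([0.6, 0.1, 0.3])
steps = interpolate_descriptors(d_a, d_b, n_steps=5)
assert np.allclose(steps[0], d_a) and np.allclose(steps[-1], d_b)
```

Each interpolated descriptor is then handed to the reconstruction, which is what turns the abstract interpolation into a sequence of microstructures.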

This task is solved using MCRpy as a Python library as shown in Listing 2. After defining the settings (lines 4-10), the original 2D microstructure slices are loaded (lines 13-14) and characterized (lines 17-18). For reconstructing elongated 3D structures, the 2D descriptors need to be combined such that different descriptors are used in different directions. The order matters here, and mistakes can lead to geometrically unrealizable descriptors. In order to avoid confusion and mistakes, MCRpy provides the function merge for this task. The merged descriptors (lines 21-22) are then interpolated in 5 steps including start and end (line 25). Each descriptor is used for a 3D reconstruction, which returns the convergence data and the final microstructure (line 29). The convergence data is viewed in an interactive window as shown in Fig. 5 (line 31). Finally, the microstructures are smoothed by a Gaussian filter (line 32) and saved to a file (line 33). The results are shown in Fig. 7. It can be confirmed that linear interpolation in the descriptor space leads to a visually reasonable transition between the corresponding microstructures.

Listing 2
Fig. 7

The original descriptors (a, e) are linearly interpolated (b-d) and used for reconstruction to create a smooth transition between an isotropic and an elongated microstructure (f-j). For the descriptor \(\tilde{S}_3(\mathbf {r}_a,\mathbf {r}_b)\), only the case that \(\mathbf {r}_a=\mathbf {r}_b=\mathbf {r}\) is plotted for clarity

Defining a Custom Descriptor

MCRpy can be easily extended by adding custom plugin modules. In this section, the procedure is demonstrated by means of a descriptor plugin. Similar concepts apply to loss functions and optimizers. First, the implementation of a descriptor plugin is discussed for the volume fraction \(v_\text {f}\). Secondly, a differentiable approximation to the lineal path function is developed and tested.

Listing 3 shows the plugin source code for the volume fraction \(v_\text {f}\). Like all descriptors in MCRpy, the volume fraction must inherit from the abstract Descriptor class (line 5). This base class provides

  • (i) a wrapper that applies descriptors defined for single phases to the indicator function of each phase,

  • (ii) a wrapper to compute multigrid descriptors as discussed in [26] and

  • (iii) default functions for visualizing descriptors.

In this case, it is reasonable to define the descriptor for a single phase and let the superclass handle the generalization to multiple phases. For this purpose, the subclass function make_singlephase_descriptor is defined (line 9). This function receives information about the microstructure, like the resolution, which is not needed in this case and is therefore collected via **kwargs. It returns a callable which computes the descriptor given the indicator function of a phase (line 12). To allow for automatic differentiation of the descriptor with respect to the microstructure, this callable needs to be implemented in TensorFlow. In contrast, a non-differentiable descriptor would be implemented in NumPy and integrated into the computation graph by MCRpy using the TensorFlow function tf.py_function. Finally, the plugin is required to register itself with the descriptor factory using its class name (lines 16-17).

Listing 3
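The plugin pattern described above can be sketched in a framework-independent way. The Descriptor stand-in and the factory dictionary below are simplified assumptions for illustration only; the real MCRpy plugin subclasses MCRpy's own Descriptor base class and implements the returned callable in TensorFlow to remain differentiable:

```python
import numpy as np

# Hypothetical stand-ins for MCRpy's Descriptor base class and
# descriptor factory -- simplified for illustration.
descriptor_factory = {}

class Descriptor:
    @classmethod
    def register(cls, name):
        descriptor_factory[name] = cls

class VolumeFractions(Descriptor):
    @staticmethod
    def make_singlephase_descriptor(**kwargs):
        # Unneeded microstructure information is collected via **kwargs.
        # The returned callable maps a phase indicator function
        # (values in [0, 1]) to its volume fraction.
        def compute(indicator):
            return float(np.mean(indicator))
        return compute

# The plugin registers itself using its class name.
VolumeFractions.register("VolumeFractions")

# Usage: a 4x4 indicator with 6 of 16 pixels in the phase
indicator = np.zeros((4, 4))
indicator[:2, :3] = 1.0
vf = descriptor_factory["VolumeFractions"].make_singlephase_descriptor()(indicator)
assert abs(vf - 6 / 16) < 1e-12
```

The superclass wrappers mentioned above would then apply this single-phase callable to the indicator function of every phase.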

In the following, the same procedure is applied to a differentiable approximation \(\tilde{L}\) to the lineal path function L, which is developed in Appendix A. Naturally, the code defining \(\tilde{L}\) is much longer than Listing 3 and is not given in this paper; instead, the reader is referred to the GitHub repository. After adding the descriptor definition to the mcrpy/descriptors directory, it is accessible for characterization and reconstruction via the MCRpy GUI, the command line interface and the Python library.

In the Yeong–Torquato algorithm, the lineal path function is often employed to compensate for the shortcomings of the two-point correlation \(S_2\) alone [2, 24]. As an alternative approach to enriching \(S_2\), the differentiable three-point correlations \(\tilde{S}_3\) are used in [26]. Furthermore, Gram matrices G have recently become a common descriptor [4, 31,32,33]. In order to determine a best practice for gradient-based reconstruction, \(\tilde{S}_3\) is compared to G and to a combination of \(\tilde{S}_2\) and \(\tilde{L}\) in Fig. 8. It can be seen that \(\tilde{S}_3\) yields perfect reconstructions except for the copolymer, which can only be reconstructed well from G. In contrast, G yields acceptable results for all structures. The combination of \(\tilde{S}_2\) and \(\tilde{L}\) performs very poorly for the alloy and the copolymer and is relatively noisy for the carbonate and ceramics. However, it outperforms G for the polymer composite. In summary, the results in Fig. 8 indicate that adding higher-order information to \(\tilde{S}_2\) via \(\tilde{S}_3\) is more promising for gradient-based reconstruction than via the newly proposed differentiable approximation to the lineal path function \(\tilde{L}\).
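For reference, the two-point correlation \(S_2\) that these descriptors enrich can be computed for a periodic structure via FFT-based autocorrelation (Wiener–Khinchin theorem). The sketch below is a generic textbook implementation, not MCRpy's differentiable TensorFlow version:

```python
import numpy as np

def two_point_correlation(indicator):
    """Periodic two-point correlation S_2(r) of a phase indicator
    function: the autocorrelation is the inverse FFT of the power
    spectrum, normalized by the number of pixels."""
    f = np.fft.fftn(indicator)
    s2 = np.fft.ifftn(f * np.conj(f)).real / indicator.size
    return s2

# Sanity check: S_2 at zero shift equals the volume fraction
indicator = (np.random.default_rng(0).random((32, 32)) < 0.3).astype(float)
s2 = two_point_correlation(indicator)
assert np.isclose(s2[0, 0], indicator.mean())
```

The value at zero shift recovers the volume fraction, while the decay over increasing shifts encodes the characteristic length scales that \(\tilde{S}_3\), G, or \(\tilde{L}\) supplement with higher-order information.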

Fig. 8

Comparison between original microstructures and reconstruction results from different descriptors. For a clearer visualization, the reconstructed microstructures are shifted periodically to match the original structure as closely as possible

The more relevant aspect, however, is how easily new descriptors can be assessed using MCRpy. After defining a plugin as shown in Listing 3, it can be used for characterization and reconstruction and evaluated seamlessly. This extensibility facilitates rapid experimentation, allowing researchers to assess new MCR techniques and share them with their colleagues.

Conclusions and Outlook

MCRpy is a powerful and extensible open-source Python library and toolkit for microstructure characterization and reconstruction. Besides these core features, MCRpy provides a plethora of convenient tools for inspecting and comparing descriptors, analyzing reconstruction results and controlling the descriptor space. It is easily applied via a GUI and brought to automated large-scale application on high-performance computers through a command line interface. For advanced and custom operations in the descriptor space, MCRpy can be imported and used as a Python module with direct access to the structures and descriptors. Typical workflows for these interfaces are presented in this work by means of different MCR tasks.

A central design aspect of MCRpy is its extensibility: descriptors, loss functions and optimizers can be provided by the community as simple plugin modules. An example of a simple plugin is given in this work. We hope that the open-source nature of the code and the plugin architecture will make MCRpy an international collaborative project with contributions from numerous researchers. This growth can leverage the potential of the presented tool to facilitate faster and easier MCR research and ultimately help accelerate materials development.