
From virtual clustering analysis to self-consistent clustering analysis: a mathematical study

Original Paper · Computational Mechanics

Abstract

In this paper, we propose a new homogenization algorithm, virtual clustering analysis (VCA), and provide a mathematical framework for the recently proposed self-consistent clustering analysis (SCA) (Liu et al. in Comput Methods Appl Mech Eng 306:319–341, 2016). In the mathematical theory, we clarify the key assumptions and ideas of VCA and SCA, and derive the continuous and discrete Lippmann–Schwinger equations. Based on the key postulate of “once response similarly, always response similarly”, clustering is performed in an offline stage by machine learning techniques (k-means and SOM), which facilitates a substantial reduction of computational complexity in the online predictive stage. The clear mathematical setup allows, for the first time, a convergence study of clustering refinement in one space dimension. Convergence is proved rigorously, and is found from numerical investigations to be of second order. Furthermore, we propose to suitably enlarge the domain in VCA so that, by virtue of Saint-Venant’s principle, the boundary terms in the Lippmann–Schwinger equation may be neglected. These boundary terms were not obtained in the original SCA paper, and we find that they may well be responsible for the numerical dependence on the choice of reference material property. Since VCA enhances accuracy by overcoming this modeling error, and reduces the numerical cost by avoiding the outer-loop iteration that SCA requires to attain material property consistency, its efficiency is expected to be even higher than that of the recently proposed SCA algorithm.


References

  1. Weinan E (2011) Principles of multiscale modeling. Cambridge University Press, Cambridge

  2. Fish J (ed) (2011) Multiscale methods. Oxford University Press, Oxford

  3. Liu WK, Karpov EG, Park HS (2006) Nano mechanics and materials: theory, multiscale methods and applications. Wiley, Chichester

  4. Pavliotis GA, Stuart AM (2007) Multiscale methods: averaging and homogenization. Springer, New York

  5. Holdren J et al (2014) Materials genome initiative: strategic plan. Office of Science and Technology Policy, Washington, DC. https://www.mgi.gov/sites/default/files/documents/mgi_strategic_plan_-_dec_2014.pdf

  6. Berkooz G, Holmes P, Lumley JL (1993) The proper orthogonal decomposition in the analysis of turbulent flows. Annu Rev Fluid Mech 25:539–575

  7. Michel J, Suquet P (2003) Nonuniform transformation field analysis. Int J Solids Struct 40:6937–6955

  8. Roussette S, Michel JC, Suquet P (2009) Nonuniform transformation field analysis of elastic–viscoplastic composites. Compos Sci Technol 69:22–27

  9. Yvonnet J, He QC (2007) The reduced model multiscale method (R3M) for the non-linear homogenization of hyperelastic media at finite strains. J Comput Phys 223:341–368

  10. Liu Z, Bessa M, Liu WK (2016) Self-consistent clustering analysis: an efficient multi-scale scheme for inelastic heterogeneous materials. Comput Methods Appl Mech Eng 306:319–341

  11. Liu Z, Flemming M, Liu WK (2018) Multiscale microstructural database for nonlinear elastoplastic materials. Comput Methods Appl Mech Eng 330:547–577

  12. Haykin SO (2009) Neural networks and learning machines. Pearson, New York

  13. Liu WK, Kim DW, Tang S (2007) Mathematical analysis of the immersed finite element method. Comput Mech 39:211–222

Acknowledgements

We would like to thank Dr. Zeliang Liu, Dr. Modesar Shakoor, Mr. Cheng Yu, Mr. Hengyang Li, and Mr. Jiaying Gao for stimulating discussions and helps in editing the manuscript. This work is partially supported by NSFC under Grant No. 11521202. W.K.L. thanks National Institute of Standards and Technology and Center for Hierarchical Materials Design (CHiMaD) under Grant Nos. 70NANB13Hl94 and 70NANB14H012; W.K.L also acknowledges the support of the AFOSR.

Author information

Correspondence to Shaoqiang Tang.

Appendices

Appendix 1: Framework of virtual clustering analysis for general homogenization problems

Consider a general problem described by

$$\begin{aligned} {\mathfrak {N}}(F(x,u),u)=0, \quad x\in \Omega \subseteq \mathbb {R}^d, \end{aligned}$$
(55)

where \(x\in \mathbb {R}^d\) is the independent variable, u is the unknown (function, vector function, tensor, etc.), F(x, u) is a given function, and \({\mathfrak {N}}\) is a nonlinear operator.

The goal is to effectively approximate, over a range of loadings, the relation between average quantities

$$\begin{aligned} {\bar{u}}=\displaystyle \frac{1}{|\Omega |}\displaystyle \int _\Omega u dx, \quad {\bar{F}}=\displaystyle \frac{1}{|\Omega |}\displaystyle \int _\Omega F(x,u(x)) dx. \end{aligned}$$
(56)

We design a linear (usually homogeneous) operator \({\mathfrak {L}}\) for u over the whole space \(\mathbb {R}^d\):

$$\begin{aligned} {\mathfrak {L}}u=0, \quad x\in \mathbb {R}^d. \end{aligned}$$
(57)

There is a family of homogeneous solutions to this linear equation, with \(u=u^0\) one of them. Usually, by the Gauss/Stokes theorem, we may find an associated operator \({\mathfrak {L}}^*\) over the domain \(\Omega \)

$$\begin{aligned} \mathfrak {L}v *w = v*\mathfrak {L}^*w+<\mathfrak {A}v,w>_{\partial \Omega }, \end{aligned}$$
(58)

where \(*\) denotes the convolution \(v*w =\displaystyle \int _\Omega v(x-\tilde{x})w(\tilde{x})d\tilde{x}\), \(\mathfrak {A}\) is a linear operator, and \(<\cdot ,\cdot >_{\partial \Omega }\) is a boundary integral on \(\partial \Omega \). In applications, \(\mathfrak {L}^*\) may well be the same as \(\mathfrak {L}\). In particular, this holds true when \(\Omega =\mathbb {R}^d\), which can be proved readily by the Fourier transform.

We further let the fundamental solution to the associated problem be \(\varphi (x)\), i.e.,

$$\begin{aligned} \mathfrak {L}^*\varphi =\delta (x). \end{aligned}$$
(59)

We remark that when u is a vector or tensor, \(\varphi \) is a vector or tensor as well, with each entry in a form similar to (59).

Combining the following two equations in \(\Omega \)

$$\begin{aligned}&{\mathfrak {L}}(u-u^0)+(\mathfrak {N}(F(x,u),u)-\mathfrak {L}u)=0, \end{aligned}$$
(60)
$$\begin{aligned}&u-u^0=\delta (x)*(u-u^0) =\mathfrak {L}(u-u^0)*\varphi \nonumber \\&\quad - <\mathfrak {A}(u-u^0),\varphi >_{\partial \Omega }, \end{aligned}$$
(61)

we obtain

$$\begin{aligned}&u-u^0+\varphi *(\mathfrak {N}(F(x,u),u)-\mathfrak {L}u)\nonumber \\&\quad = - <\mathfrak {A}(u-u^0),\varphi >_{\partial \Omega }. \end{aligned}$$
(62)

Depending on the specific application, further manipulations may be performed. For instance, one may integrate by parts to move (part of) the differential operator from F and u to \(\varphi \).

Under the assumption (M1), we may decompose the domain into k clusters and approximate u(x) as constant in each cluster. Integrating (62) over one cluster at a time gives k equations. There are \((k+1)\) unknowns \(u^0,u^1,\ldots ,u^k\). The system is then exactly solvable after appending the condition that the average of u equals the prescribed loading \({\bar{u}}\).

We remark that u(x) varies with position x, hence in general \(u-u^0\) does not vanish. Nevertheless, it is plausible to make \(u^0\) close to the average of u. Meanwhile, the linear operator \(\mathfrak {L}\) may be regarded as a preconditioner; it should also be 'close' to the original nonlinear operator \(\mathfrak {N}\) in a certain sense.

Appendix 2: Calculations of \(D^{IJ}\) in two space dimensions

For the sake of clarity, we replace \((x_1,x_2)\) with \((x,y)\) in this appendix. Taking the Fourier transform \((x,y)\rightarrow (\xi ,\eta )\), we readily find the fundamental solutions

$$\begin{aligned} {\mathcal {F}}u^{\textcircled {{\textit{1}}}} =\displaystyle \frac{-1}{(\lambda ^0+2\mu ^0)\mu ^0(\xi ^2+\eta ^2)^2} \left[ \begin{array}{l} \mu ^0 \xi ^2 + (\lambda ^0+2\mu ^0)\eta ^2 \\ -(\lambda ^0+\mu ^0)\xi \eta \end{array}\right] , \end{aligned}$$
(63)
$$\begin{aligned} {\mathcal {F}}u^{\textcircled {{\textit{2}}}} =\displaystyle \frac{-1}{(\lambda ^0+2\mu ^0)\mu ^0(\xi ^2+\eta ^2)^2} \left[ \begin{array}{l} -(\lambda ^0+\mu ^0)\xi \eta \\ (\lambda ^0+2\mu ^0)\xi ^2 +\mu ^0 \eta ^2 \end{array}\right] ,\nonumber \\ \end{aligned}$$
(64)

in terms of displacements, or

$$\begin{aligned} \mathcal {F}\varepsilon ^{\textcircled {{\textit{1}}}}= & {} \displaystyle \frac{-i}{(\lambda ^0+2\mu ^0)\mu ^0(\xi ^2+\eta ^2)^2} \nonumber \\&\left[ \begin{array}{ll} \xi (\mu ^0 \xi ^2 + (\lambda ^0+2\mu ^0)\eta ^2) &{} \frac{\eta }{2}(-\lambda ^0\xi ^2+(\lambda ^0+2\mu ^0)\eta ^2) \\ \frac{\eta }{2}(-\lambda ^0\xi ^2+(\lambda ^0+2\mu ^0)\eta ^2) &{} -(\lambda ^0+\mu ^0)\xi \eta ^2 \end{array}\right] ,\end{aligned}$$
(65)
$$\begin{aligned} \mathcal {F}\varepsilon ^{\textcircled {{\textit{2}}}}= & {} \displaystyle \frac{-i}{(\lambda ^0+2\mu ^0)\mu ^0(\xi ^2+\eta ^2)^2} \nonumber \\&\left[ \begin{array}{ll} -(\lambda ^0+\mu ^0)\xi ^2\eta &{} \frac{\xi }{2}(-\lambda ^0\eta ^2+(\lambda ^0+2\mu ^0)\xi ^2) \\ \frac{\xi }{2}(-\lambda ^0\eta ^2+(\lambda ^0+2\mu ^0)\xi ^2) &{} \eta (\mu ^0\eta ^2 + (\lambda ^0+2\mu ^0)\xi ^2)\end{array}\right] ,\nonumber \\ \end{aligned}$$
(66)

in terms of strain.
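As a sanity check, (63) and (64) can be verified symbolically. In Fourier space, the Navier operator of the homogeneous reference material has matrix symbol \(-(\mu ^0|k|^2 I+(\lambda ^0+\mu ^0)kk^T)\) with \(k=(\xi ,\eta )\), so applying \(\mu ^0|k|^2 I+(\lambda ^0+\mu ^0)kk^T\) to \(\mathcal {F}u^{\textcircled {{\textit{i}}}}\) must return \(-e_i\), the (transformed) unit point forcing. A minimal sympy sketch of this check:

```python
import sympy as sp

xi, eta, lam, mu = sp.symbols('xi eta lambda0 mu0', positive=True)
k2 = xi**2 + eta**2

# Fourier symbol of the Navier operator for the homogeneous reference
# material (up to sign): mu0*|k|^2*I + (lam0+mu0)*k k^T, k = (xi, eta).
A = mu * k2 * sp.eye(2) + (lam + mu) * sp.Matrix([[xi**2, xi*eta],
                                                  [xi*eta, eta**2]])

pref = -1 / ((lam + 2*mu) * mu * k2**2)
Fu1 = pref * sp.Matrix([mu*xi**2 + (lam + 2*mu)*eta**2,
                        -(lam + mu)*xi*eta])                      # Eq. (63)
Fu2 = pref * sp.Matrix([-(lam + mu)*xi*eta,
                        (lam + 2*mu)*xi**2 + mu*eta**2])          # Eq. (64)

# A * Fu^(i) must equal -e_i (the Fourier transform of the delta forcing),
# so these residuals should simplify to the zero vector.
check1 = sp.simplify(A * Fu1 + sp.Matrix([1, 0]))
check2 = sp.simplify(A * Fu2 + sp.Matrix([0, 1]))
```

Both residuals simplify to zero, confirming (63) and (64).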

Direct calculations yield the entries of \(\Phi \) from the respective entries of the inverse Fourier transforms of \(i\xi \mathcal {F}\varepsilon ^{\textcircled {{\textit{1}}}},\displaystyle \frac{1}{2}(i\eta \mathcal {F}\varepsilon ^{\textcircled {{\textit{1}}}}+i\xi \mathcal {F}\varepsilon ^{\textcircled {{\textit{2}}}}), i\eta \mathcal {F}\varepsilon ^{\textcircled {{\textit{2}}}}\), as follows.

$$\begin{aligned} \Phi _{11}= & {} \left[ \begin{array}{cc} - \frac{2 \left( \lambda ^0 + 2 \mu ^0\right) \left( x^4 - 6 x^2 y^2 + y^4\right) +2 \mu ^0 \left( x^4 + 6 x^2 y^2 - 3 y^4\right) }{{\left( x^2 + y^2\right) }^3} &{} -\frac{4 x y \left( 2 \lambda ^0 x^2 - 2 \lambda ^0 y^2 + 3 \mu ^0 x^2 - \mu ^0 y^2\right) }{{\left( x^2 + y^2\right) }^3}\\ *** &{} \frac{2 \left( \lambda ^0 + \mu ^0\right) \left( x^4 - 6 x^2 y^2 + y^4\right) }{{\left( x^2 + y^2\right) }^3} \end{array}\right] , \end{aligned}$$
(67)
$$\begin{aligned} \Phi _{12}= & {} \Phi _{21}=\left[ \begin{array}{cc} -\frac{8 x y \left( 2 \lambda ^0 x^2 - 2 \lambda ^0 y^2 + 3 \mu ^0 x^2 - \mu ^0 y^2\right) }{{\left( x^2 + y^2\right) }^3} &{} \frac{4 \left( \lambda ^0 + \mu ^0\right) \left( x^4 - 6 x^2 y^2 + y^4\right) }{{\left( x^2 + y^2\right) }^3} \\ *** &{} \frac{8 x y \left( 2 \lambda ^0 x^2 - 2 \lambda ^0 y^2 + \mu ^0 x^2 - 3 \mu ^0 y^2\right) }{{\left( x^2 + y^2\right) }^3} \end{array}\right] , \end{aligned}$$
(68)
$$\begin{aligned} \Phi _{22}= & {} \left[ \begin{array}{cc} \frac{2 \left( \lambda ^0 + \mu ^0\right) \left( x^4 - 6 x^2 y^2 + y^4\right) }{{\left( x^2 + y^2\right) }^3} &{} \frac{4 x y \left( 2 \lambda ^0 x^2 - 2 \lambda ^0 y^2 + \mu ^0 x^2 - 3 \mu ^0 y^2\right) }{{\left( x^2 + y^2\right) }^3} \\ *** &{} - \frac{2 \left( \lambda ^0 + 2 \mu ^0\right) \left( x^4 - 6 x^2 y^2 + y^4\right) +2 \mu ^0 \left( - 3 x^4 + 6 x^2 y^2 + y^4\right) }{{\left( x^2 + y^2\right) }^3} \end{array}\right] . \end{aligned}$$
(69)

Here \(***\) denotes an entry determined by symmetry.

The entries may be calculated explicitly. For instance, consider

$$\begin{aligned} D_{IJ11}=\displaystyle \frac{1}{|\Omega ^I|}\iint _{\Omega ^I}\iint _{\Omega ^J}\Phi _{11}(x-\tilde{x},y-\tilde{y}) d(\tilde{x},\tilde{y}) d(x,y).\nonumber \\ \end{aligned}$$
(70)

Let

$$\begin{aligned} f(x,y)=r^2 \ln r=\frac{1}{2}(x^2+y^2)\ln (x^2+y^2). \end{aligned}$$
(71)

Then \(\Phi _{11}\) can be expressed as a combination of \(f_{xxxx},f_{xxxy},f_{xxyy},f_{xyyy},f_{yyyy}\):

$$\begin{aligned} f_{xxxx}= & {} \displaystyle \frac{-2x^4-12x^2y^2+6y^4}{(x^2+y^2)^3}, \end{aligned}$$
(72)
$$\begin{aligned} f_{xxxy}= & {} \displaystyle \frac{4x^3 y-12x y^3}{(x^2+y^2)^3}, \end{aligned}$$
(73)
$$\begin{aligned} f_{xxyy}= & {} \displaystyle \frac{-2x^4+12x^2y^2-2y^4}{(x^2+y^2)^3}, \end{aligned}$$
(74)
$$\begin{aligned} f_{xyyy}= & {} \displaystyle \frac{-12x^3 y+4x y^3}{(x^2+y^2)^3}, \end{aligned}$$
(75)
$$\begin{aligned} f_{yyyy}= & {} \displaystyle \frac{6x^4-12x^2y^2-2y^4}{(x^2+y^2)^3}. \end{aligned}$$
(76)

Their indefinite integrals are, respectively,

$$\begin{aligned}&\begin{aligned}&\iint \iint f_{xxxx}(x-\tilde{x},y-\tilde{y}) d(x,y) d(\tilde{x},\tilde{y})\\&\quad = \displaystyle \frac{1}{2}(y-\tilde{y})^2\ln ((x-\tilde{x})^2+(y-\tilde{y})^2)\\&\qquad -\,\displaystyle \frac{3}{2}(x-\tilde{x})^2\ln ((x-\tilde{x})^2 +(y-\tilde{y})^2)\\&\qquad +4(x-\tilde{x})(y-\tilde{y})\arctan \displaystyle \frac{y-\tilde{y}}{x-\tilde{x}}+C, \end{aligned}\end{aligned}$$
(77)
$$\begin{aligned}&\begin{aligned}&\iint \iint f_{xxxy}(x-\tilde{x},y-\tilde{y})d(x,y) d(\tilde{x},\tilde{y})\\&\quad = (x-\tilde{x})(y-\tilde{y})[\ln ((x-\tilde{x})^2+(y-\tilde{y})^2)-1]\\&\qquad +2(x-\tilde{x})^2\arctan \displaystyle \frac{y-\tilde{y}}{x-\tilde{x}}+C.\\ \end{aligned}\end{aligned}$$
(78)
$$\begin{aligned}&\iint \iint f_{xxyy}(x-\tilde{x},y-\tilde{y})d(x,y) d(\tilde{x},\tilde{y}) {=}\displaystyle \frac{1}{2}((x-\tilde{x})^2\nonumber \\&\quad +\,(y-\tilde{y})^2)\ln ((x-\tilde{x})^2+(y-\tilde{y})^2)+C. \end{aligned}$$
(79)

The other terms are obtained by symmetry, switching the roles of x and y.
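These closed forms are straightforward to verify symbolically. Writing \(X=x-\tilde{x}\), \(Y=y-\tilde{y}\), each pair of derivatives \(\partial _x\partial _{\tilde{x}}\) and \(\partial _y\partial _{\tilde{y}}\) acting on a function of (X, Y) contributes a factor \(-\partial _X^2\) (resp. \(-\partial _Y^2\)), so the two sign flips cancel and the quadruple antiderivative in (77) must satisfy \(G_{XXYY}=f_{xxxx}\), and similarly for (78). A sympy sketch checking (72), (77), and (78):

```python
import sympy as sp

X, Y = sp.symbols('X Y', positive=True)
r2 = X**2 + Y**2
f = sp.Rational(1, 2) * r2 * sp.log(r2)          # f = r^2 ln r, Eq. (71)

# Eq. (72): the fourth x-derivative of f.
res72 = sp.simplify(sp.diff(f, X, 4)
                    - (-2*X**4 - 12*X**2*Y**2 + 6*Y**4) / r2**3)

# Eq. (77): the quadruple antiderivative G must satisfy G_XXYY = f_xxxx
# (the sign flips from the tilde variables cancel pairwise).
G77 = (sp.Rational(1, 2)*Y**2*sp.log(r2) - sp.Rational(3, 2)*X**2*sp.log(r2)
       + 4*X*Y*sp.atan(Y/X))
res77 = sp.simplify(sp.diff(G77, X, 2, Y, 2) - sp.diff(f, X, 4))

# Eq. (78): the same check for the antiderivative of f_xxxy.
G78 = X*Y*(sp.log(r2) - 1) + 2*X**2*sp.atan(Y/X)
res78 = sp.simplify(sp.diff(G78, X, 2, Y, 2) - sp.diff(f, X, 3, Y, 1))
```

All three residuals simplify to zero.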

Appendix 3: Lippmann–Schwinger equation in three space dimensions

In the same way as for two space dimensions, we may derive the Lippmann–Schwinger equation as follows.

$$\begin{aligned} \begin{aligned}&u_i-u_i^{0} =-\iiint _{\Omega } (\sigma (\tilde{x},\tilde{y},\tilde{z})-C^0:\varepsilon (\tilde{x},\tilde{y},\tilde{z})): \varepsilon ^{\textcircled {{\textit{i}}}}\\&\quad (x-\tilde{x},y-\tilde{y},z-\tilde{z}) d (\tilde{x},\tilde{y},\tilde{z})\\&\quad - \, \oint _{\partial \Omega }(n\cdot (\sigma (\tilde{x},\tilde{y},\tilde{z}) -C^0:\varepsilon ^0)) \cdot u^{\textcircled {{\textit{i}}}}\\&\quad (x-\tilde{x},y-\tilde{y},z-\tilde{z}) d S \\&\quad -\,\oint _{\partial \Omega } n \cdot \sigma ^{\textcircled {{\textit{i}}}}(x-\tilde{x},y-\tilde{y},z-\tilde{z}) \cdot \\&\quad (u(\tilde{x},\tilde{y},\tilde{z})-u^{0}(\tilde{x},\tilde{y},\tilde{z})) d S, \end{aligned}\quad i=1,2,3.\nonumber \\ \end{aligned}$$
(80)

Here \(u^{\textcircled {{\textit{i}}}},\varepsilon ^{\textcircled {{\textit{i}}}}\) and \(\sigma ^{\textcircled {{\textit{i}}}}\) are solved from

$$\begin{aligned}&\nabla \cdot (C^0:\varepsilon ) = r_i, \quad r_1=\left( \begin{array}{c}\delta (x,y,z)\\ 0\\ 0 \end{array}\right) , \; \nonumber \\&r_2=\left( \begin{array}{c}0\\ \delta (x,y,z)\\ 0 \end{array}\right) ,r_3=\left( \begin{array}{c}0\\ 0\\ \delta (x,y,z) \end{array}\right) . \end{aligned}$$
(81)

In three-dimensional problems, \(\varepsilon \) and \(\sigma \) are both expressed as symmetric matrices. In particular, again with \(\lambda ^0,\mu ^0\) the reference material constants of the fictitious homogeneous elastic material, we may let \(s=\displaystyle \frac{\lambda ^0+\mu ^0}{\mu ^0}\). Then the Green's functions are

$$\begin{aligned} \mathcal {F} u^{\textcircled {{\textit{1}}}}= & {} \displaystyle \frac{-1}{(1+s)(\xi ^2+\eta ^2+\zeta ^2)^2} \nonumber \\&\left[ \begin{array}{l} \xi ^2 + (s+1)(\eta ^2+\zeta ^2) \\ -s\xi \eta \\ -s\xi \zeta \end{array}\right] , \end{aligned}$$
(82)
$$\begin{aligned} \mathcal {F} u^{\textcircled {{\textit{2}}}}= & {} \displaystyle \frac{-1}{(1+s)(\xi ^2+\eta ^2+\zeta ^2)^2} \nonumber \\&\left[ \begin{array}{l} -s\xi \eta \\ \eta ^2 + (s+1)(\xi ^2+\zeta ^2) \\ -s\eta \zeta \end{array}\right] , \end{aligned}$$
(83)
$$\begin{aligned} \mathcal {F} u^{\textcircled {{\textit{3}}}}= & {} \displaystyle \frac{-1}{(1+s)(\xi ^2+\eta ^2+\zeta ^2)^2} \nonumber \\&\left[ \begin{array}{l} -s\xi \zeta \\ -s\eta \zeta \\ \zeta ^2 + (s+1)(\xi ^2+\eta ^2) \end{array}\right] , \end{aligned}$$
(84)

in terms of displacements solving the Navier equation, or

$$\begin{aligned} \mathcal {F}\varepsilon ^{\textcircled {{\textit{1}}}}= & {} \displaystyle \frac{-i}{(1+s)(\xi ^2+\eta ^2+\zeta ^2)^2} \nonumber \\&\left[ \begin{array}{lll} \xi [\xi ^2 + (1+s)(\eta ^2+\zeta ^2)] &{}*** &{} *** \\ \displaystyle \frac{\eta }{2}[(1-s) \xi ^2 + (1+s)(\eta ^2+\zeta ^2)]&{} -s\xi \eta ^2 &{} ***\\ \displaystyle \frac{\zeta }{2}[(1-s) \xi ^2 + (1+s)(\eta ^2+\zeta ^2)] &{}-s\xi \eta \zeta &{} -s\xi \zeta ^2 \end{array}\right] ,\end{aligned}$$
(85)
$$\begin{aligned} \mathcal {F}\varepsilon ^{\textcircled {{\textit{2}}}}= & {} \displaystyle \frac{-i}{(1+s)(\xi ^2+\eta ^2+\zeta ^2)^2} \nonumber \\&\left[ \begin{array}{lll} -s\xi ^2\eta &{} \displaystyle \frac{\xi }{2}[(1-s) \eta ^2 + (1+s)(\xi ^2+\zeta ^2)] &{} -s\xi \eta \zeta \\ *** &{} \eta [\eta ^2 + (1+s)(\xi ^2+\zeta ^2)] &{}***\\ *** &{} \displaystyle \frac{\zeta }{2}[(1-s) \eta ^2 + (1+s)(\xi ^2+\zeta ^2)]&{} -s\eta \zeta ^2 \end{array}\right] ,\end{aligned}$$
(86)
$$\begin{aligned} \mathcal {F}\varepsilon ^{\textcircled {{\textit{3}}}}= & {} \displaystyle \frac{-i}{(1+s)(\xi ^2+\eta ^2+\zeta ^2)^2}\nonumber \\&\left[ \begin{array}{lll} -s\xi ^2\zeta &{} *** &{} \displaystyle \frac{\xi }{2}[(1-s) \zeta ^2 + (1+s)(\xi ^2+\eta ^2)] \\ -s\xi \eta \zeta &{} -s\eta ^2\zeta &{} \displaystyle \frac{\eta }{2}[(1-s) \zeta ^2 + (1+s)(\xi ^2+\eta ^2)]\\ *** &{} *** &{} \zeta [\zeta ^2 + (1+s)(\xi ^2+\eta ^2)] \end{array}\right] . \end{aligned}$$
(87)
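The three-dimensional Green's functions can be checked in the same way as in two dimensions. Reading (82)–(84) as normalized by \(\mu ^0\) (so that the Fourier symbol of the Navier operator becomes \(-(|k|^2 I+s\,kk^T)\) with \(k=(\xi ,\eta ,\zeta )\); this normalization is our interpretation of the scaled form above), each \(\mathcal {F}u^{\textcircled {{\textit{i}}}}\) must satisfy \((|k|^2 I+s\,kk^T)\,\mathcal {F}u^{\textcircled {{\textit{i}}}}=-e_i\). A sympy sketch:

```python
import sympy as sp

xi, eta, zeta, s = sp.symbols('xi eta zeta s', positive=True)
k = sp.Matrix([xi, eta, zeta])
k2 = xi**2 + eta**2 + zeta**2

# Navier symbol normalized by mu0: A_s = |k|^2 I + s k k^T,
# with s = (lam0 + mu0)/mu0 as in the text.
As = k2 * sp.eye(3) + s * (k * k.T)

pref = -1 / ((1 + s) * k2**2)
Fu = [pref * sp.Matrix([xi**2 + (s + 1)*(eta**2 + zeta**2),
                        -s*xi*eta, -s*xi*zeta]),                  # Eq. (82)
      pref * sp.Matrix([-s*xi*eta,
                        eta**2 + (s + 1)*(xi**2 + zeta**2),
                        -s*eta*zeta]),                            # Eq. (83)
      pref * sp.Matrix([-s*xi*zeta, -s*eta*zeta,
                        zeta**2 + (s + 1)*(xi**2 + eta**2)])]     # Eq. (84)

# Each A_s * Fu^(i) + e_i should vanish identically.
residuals = [sp.simplify(As * Fu[i] + sp.eye(3)[:, i]) for i in range(3)]
```

All three residuals simplify to the zero vector.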

Therefore, we denote

$$\begin{aligned} {\tilde{\sigma }}=\sigma -C^0:\varepsilon = \left( \begin{array}{lll} {\tilde{\sigma }}_{11} &{} {\tilde{\sigma }}_{12} &{} {\tilde{\sigma }}_{13}\\ *** &{} {\tilde{\sigma }}_{22} &{} {\tilde{\sigma }}_{23} \\ *** &{} *** &{}{\tilde{\sigma }}_{33} \end{array} \right) . \end{aligned}$$
(88)

and denote the inverse Fourier transforms of \(i\xi \mathcal {F}\varepsilon ^{\textcircled {{\textit{1}}}}_{kl}, (i\eta \mathcal {F}\varepsilon ^{\textcircled {{\textit{1}}}}_{kl}+i\xi \mathcal {F}\varepsilon ^{\textcircled {{\textit{2}}}}_{kl}),(i\zeta \mathcal {F}\varepsilon ^{\textcircled {{\textit{1}}}}_{kl}+i\xi \mathcal {F}\varepsilon ^{\textcircled {{\textit{3}}}}_{kl}), i\eta \mathcal {F}\varepsilon ^{\textcircled {{\textit{2}}}}_{kl}, (i\zeta \mathcal {F}\varepsilon ^{\textcircled {{\textit{2}}}}_{kl}+i\eta \mathcal {F}\varepsilon ^{\textcircled {{\textit{3}}}}_{kl}), i\zeta \mathcal {F}\varepsilon ^{\textcircled {{\textit{3}}}}_{kl}\) by \(\Phi _{11kl},\Phi _{12kl},\Phi _{13kl},\Phi _{22kl},\Phi _{23kl},\Phi _{33kl}\), \(k,l=1,2,3\), respectively, and write \(\Psi \) for the analogous quantities with the displacements \(u^{\textcircled {{\textit{i}}}}\) replacing the strains \(\varepsilon ^{\textcircled {{\textit{i}}}}\). Then in physical space the Lippmann–Schwinger equation reads

$$\begin{aligned} \begin{aligned}&\varepsilon -\varepsilon ^0 +\Phi *{\tilde{\sigma }}\\&\quad =- \oint _{\partial \Omega } \Psi (x-\tilde{x},y-\tilde{y},z-\tilde{z})\cdot (n \cdot (\sigma (\tilde{x},\tilde{y},\tilde{z})\\&\qquad - C^0:\varepsilon ^0)) dS\\&\qquad - \oint _{\partial \Omega }(( \Phi (x-\tilde{x},y-\tilde{y},z-\tilde{z}):C^0) \cdot n) \cdot (u(\tilde{x},\tilde{y},\tilde{z})\\&\qquad - u^0(\tilde{x},\tilde{y},\tilde{z})) dS. \end{aligned} \end{aligned}$$
(89)

Appendix 4: Flowcharts for k-means and SOM algorithms

In k-means, we have a set X composed of data \(x_j\in \mathbb {R}^m \;(j=1,\ldots ,N)\). We predefine a norm \(\Vert \cdot \Vert \) on the m-dimensional vector space, and designate \(\ell \) clusters. The goal is to assign all elements of X to the \(\ell \) clusters, expressed as \(C(j)=I\in \{1,\ldots ,\ell \}\), such that

$$\begin{aligned} J(C;\mu _1,\ldots ,\mu _\ell )=\displaystyle \sum _{I=1}^\ell \displaystyle \sum _{C(j)=I} \Vert x_j-\mu _I \Vert ^2 \end{aligned}$$

attains its minimum, where both the encoder \(C:j\mapsto C(j)\) and the estimated mean \(\mu _I\in \mathbb {R}^m\) are to be optimized.

An iterative algorithm for k-means clustering is as follows.

  1. For a given encoder C, find the optimal \(\mu _I\)'s by minimizing \(J(C;\mu _1,\ldots ,\mu _\ell )\). These may be computed in explicit form, since J is quadratic in the means.

  2. For given estimated means \(\mu _1,\ldots ,\mu _\ell \), find the optimal encoder C by minimizing \(J(C;\mu _1,\ldots ,\mu _\ell )\), namely, find the closest estimated mean for each datum

     $$\begin{aligned}C(j)=\arg \displaystyle \min _{1\le I\le \ell }\Vert x_j-\mu _I \Vert ^2. \end{aligned}$$
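The two alternating steps can be sketched in a few lines of Python (a minimal illustration using NumPy and the squared Euclidean norm; the initialization by random distinct data points and the toy two-blob data are our own choices):

```python
import numpy as np

def kmeans(X, ell, n_iter=100, seed=0):
    """Alternating minimization of J(C; mu_1, ..., mu_ell).

    X : (N, m) array of data points, ell : number of clusters.
    Returns the encoder C (cluster index of each point) and the means mu.
    """
    rng = np.random.default_rng(seed)
    # Initialize the estimated means with ell distinct data points.
    mu = X[rng.choice(len(X), size=ell, replace=False)]
    C = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Optimal encoder for fixed means: nearest estimated mean.
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)   # (N, ell)
        C = d2.argmin(axis=1)
        # Optimal means for fixed encoder: cluster averages (the explicit
        # minimizer, since J is quadratic in the mu_I).
        new_mu = np.array([X[C == I].mean(axis=0) if np.any(C == I) else mu[I]
                           for I in range(ell)])
        if np.allclose(new_mu, mu):
            break                     # J can decrease no further
        mu = new_mu
    return C, mu

# Toy data: two well-separated 1-D blobs, which k-means should recover.
X = np.concatenate([np.arange(5.0), np.arange(5.0) + 100.0])[:, None]
C, mu = kmeans(X, ell=2)
```

Step 2 of the flowchart is the nearest-mean assignment, Step 1 the cluster average; the iteration stops once the means no longer change.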

In SOM, we have a data set X, with each datum \(x\in \mathbb {R}^m\), and designate \(\ell \) clusters. We predefine a norm \(\Vert \cdot \Vert \) on the m-dimensional vector space, and a distance \(d_{i,j}\) for the integer indices \(i,j=1,\ldots ,\ell \). We allocate the data to these clusters, together with weights, determined in the following way.

  1. Initialization: Randomly assign the initial weight vectors \(w_j\), \(j=1,\ldots ,\ell \).

  2. Sampling: Draw a sample x from the input data with a certain probability.

  3. Similarity matching: Find the winning cluster i(x) at time-step n according to the best-matching criterion, i.e., by minimizing the distance

     $$\begin{aligned}i(x)=\arg \displaystyle \min _{1\le j\le \ell } \Vert x(n)-w_j\Vert . \end{aligned}$$

  4. Updating: Adjust the weight vectors of all affected clusters

     $$\begin{aligned} w_j(n+1)=w_j(n)+\eta (n)h_{j,i(x)}(n)(x(n)-w_j(n)), \end{aligned}$$

     where \(\eta (n)\) is a learning-rate parameter, and \(h_{j,i(x)}(n)\) is the neighborhood function centered around the winning cluster i(x). We choose a Gaussian function

     $$\begin{aligned} h_{j,i(x)}(n)=\exp \left( -\displaystyle \frac{d_{j,i(x)}^2}{2\sigma _0^2\exp (-2n/\tau _1)} \right) , \end{aligned}$$

     and a decreasing learning function

     $$\begin{aligned} \eta (n)=\eta _0\exp (-n/\tau _2). \end{aligned}$$

  5. Continuation: Return to Step 2 and iterate until no visible change occurs.
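The five steps can likewise be sketched in Python (a minimal illustration with a one-dimensional line of clusters, index distance \(d_{i,j}=|i-j|\), and illustrative hyperparameter values of our own choosing; Step 5 is replaced by a fixed number of iterations):

```python
import numpy as np

def som_1d(X, ell, n_steps=2000, eta0=0.1, sigma0=2.0,
           tau1=1000.0, tau2=2000.0, seed=0):
    """Self-organizing map with clusters arranged on a 1-D index line.

    Uses the Gaussian neighborhood and exponentially decaying learning
    rate given above, with index distance d_{i,j} = |i - j|.
    """
    rng = np.random.default_rng(seed)
    N, m = X.shape
    w = rng.standard_normal((ell, m))              # Step 1: initialization
    idx = np.arange(ell)
    for n in range(n_steps):
        x = X[rng.integers(N)]                     # Step 2: uniform sampling
        i_win = int(np.argmin(np.linalg.norm(x - w, axis=1)))  # Step 3
        sigma2 = sigma0**2 * np.exp(-2.0 * n / tau1)     # shrinking width
        h = np.exp(-(idx - i_win) ** 2 / (2.0 * sigma2))  # neighborhood
        eta = eta0 * np.exp(-n / tau2)                    # learning rate
        w += eta * h[:, None] * (x - w)            # Step 4: updating
    return w                                       # Step 5: fixed budget here

# Data uniformly spaced on [0, 1]: the weights should settle inside the
# data range, each cluster specializing to a portion of it.
X = np.linspace(0.0, 1.0, 50)[:, None]
w = som_1d(X, ell=5)
```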

About this article

Cite this article

Tang, S., Zhang, L. & Liu, W.K. From virtual clustering analysis to self-consistent clustering analysis: a mathematical study. Comput Mech 62, 1443–1460 (2018). https://doi.org/10.1007/s00466-018-1573-x