International Journal of Thermophysics

, Volume 34, Issue 10, pp 1930–1952

Design and Analysis of Spectrally Selective Patterned Thin-Film Cells


    • S. Hajimirza, University of Texas at Austin
    • John R. Howell, University of Texas at Austin

DOI: 10.1007/s10765-013-1495-y

Cite this article as:
Hajimirza, S. & Howell, J.R. Int J Thermophys (2013) 34: 1930. doi:10.1007/s10765-013-1495-y


This paper outlines several techniques for the systematic and efficient optimization of surface texturing patterns in thin-film amorphous silicon (a-Si) solar cells, together with an assessment of their sensitivity to fabrication tolerances. The aim is to achieve maximum absorption enhancement. The joint optimization of several geometrical parameters of a three-dimensional lattice of periodic square silver nanoparticles and of an absorbing thin a-Si layer, using constrained optimization tools and numerical FDTD simulations, is reported. Global and local optimization methods, such as the Broyden–Fletcher–Goldfarb–Shanno quasi-Newton method and simulated annealing, are employed concurrently to solve the inverse near-field radiation problem. The design of the silver-patterned solar panel is optimized to yield the maximum average enhancement in photon absorption over the solar spectrum. The optimization techniques are expedited and improved using a novel nonuniform adaptive spectral sampling technique. Furthermore, the sensitivity of the optimally designed parameters of the solar structure is analyzed by postulating a probabilistic model for the errors introduced in the fabrication process. Monte Carlo simulations and unscented transform techniques are used for this purpose.


Fabrication error · Inverse optimization · Sensitivity analysis · Thin-film solar cells

List of Symbols



\(a\)

Variable in Eq. 7

\(E\)

Enhancement factor

\(f\)

Objective function

\(h_{\mathrm{Ag}}\)

Height of silver nanowires

\(h_{\mathrm{Si}}\)

Height of amorphous silicon

\(H_N\)

Hermite polynomial of order \(N\)

\(I(\lambda)\)

Solar irradiance spectrum

\(S_i\)

Sigma points for the unscented transform

\(T\)

Number of satisfied moments in unscented transform

\(w_i\)

Sigma weights for the unscented transform

\(w_{\mathrm{Ag}}\)

Width of silver nanowires

\({\varvec{x}}\)

Selected geometry

\(\Vert {\varvec{x}} \Vert \)

Norm of vector \({\varvec{x}}\)

Greek Symbols

\(\alpha _{\mathrm{gr}}\)

Spectral absorptivity in the presence of grating

\(\alpha _{\mathrm{ngr}}\)

Spectral absorptivity in the absence of grating

\(\lambda \)

Wavelength

\(\gamma _i, \lambda _i\)

Predefined constants

\(\Delta {\varvec{x}}\)

Change in \({\varvec{x}}\)

\(\varLambda _{\mathrm{Ag}}\)

Nanowire period

\(\varOmega \)

Optical wavelength range

1 Introduction

Today, fossil fuels constitute the largest fraction of global energy use. Looking at the historical trend of energy consumption, however, it is evident that the dominance of fossil fuels has declined significantly over the past 50 years [1], mostly due to limited natural resources, undesirable environmental effects, uneven geographical distribution, and political influences. In contrast, other forms of energy, such as nuclear and renewable sources, have gained a larger share of the global energy profile. In particular, solar photovoltaic (PV) energy has experienced the highest growth rate of the past decade, exemplified by a 102 % increase in utility-scale usage over the last five years [2]. Whether PV energy will keep growing at such a fast pace depends on technological advances in the efficient and inexpensive conversion of solar energy to electricity. To that end, much research has been dedicated to increasing the performance and reducing the cost of PV cells, so that they become competitive with current modes of electricity generation.

Most of the cost of a PV cell is in the material and processing expenses. At present, most commercial PV cells are based on bulk or wafer-based crystalline silicon (c-Si) which, although very efficient in transforming insolation to electricity, is quite expensive due to the excessive usage of semiconductor materials. An alternative to first-generation c-Si PV cells is the use of thin-film semiconductor layers of non-crystalline silicon, which requires anywhere between 10 and 300 times less material than traditional solar cells. In thin-film cells, a thin layer of a semiconductor is deposited on a suitable substrate using low-cost techniques. Popular thin-film materials are amorphous silicon (a-Si), polycrystalline silicon (p-Si), nanocrystalline silicon (nc-Si), cadmium telluride, copper indium selenide, and others. Despite the lower absorption and overall efficiency of silicon, it is widely used in thin-film solar cells due to its abundance, renewability, and the availability of a mature manufacturing process, the consequence of many years of industrial experience in digital electronics.

Despite offering processing and cost advantages, the efficiency of state-of-the-art thin-film PV cells (around 10 %) is noticeably less than (about half) that of c-Si PV cells, mainly due to the smaller thickness of the absorbing layers. Therefore, major efforts are focused on producing thin-film cells with higher efficiencies. A class of techniques employed to enhance the optical absorption of thin-film solar cells is called “light trapping” techniques, which refer to mechanisms used to increase the average path length of incident light inside the solar cell and thereby improve the overall fraction of absorbed photons. A systematic design of light trapping mechanisms often entails analyzing radiative near-field effects to develop methods for enhancing both spectral and directional selectivity of solar cells. This is often achieved through nanopatterning of the PV surface.

Light trapping in thin-film solar cells via surface texturing or nanopatterning has been discussed in many prior publications. Four common techniques include the deposition of plasmonic nanoparticles on the front or back surface of the solar cells (see, e.g., [3–9]), surface texturing through the use of metallic gratings such as nanostrips or nanochips (see, e.g., [10–15]), textured transparent conductive oxide (TCO) [15–17], and the use of semiconductor nanowires [18–21]. Some of these references report a significant increase in the photonic absorption capability of the thin-film structure through the use of one or more of the light trapping techniques, as well as the right choice of material. In particular, Rockstuhl et al. [11] demonstrated that applying silver nanowires across the upper surface of a solar panel composed of thin-film a-Si can increase total photon absorption by as much as 60 %. Tumbleston et al. [12] claimed an increase in photon absorption of around 18 % in organic-based solar cells with photonic crystal structures and zinc oxide gratings. Beck et al. [13] reported increased optical absorption by a factor of five at a wavelength of 1100 nm, and enhanced external quantum efficiency of thin-film Si solar cells by a factor of 2.3 at the same wavelength via tuning localized surface plasmons in arrays of silver nanoparticles. Wang et al. [14] obtained 30 % broadband absorption enhancement for thin a-Si using unique nanogratings. Other aforementioned references have reported similar or even more promising results through exploiting combinations of light trapping techniques and advanced deposition methods.

In the present work, we focus on light trapping through the use of metallic nanogratings. The mechanisms responsible for the increase in absorption in this case include Fabry–Perot resonance, plasmon polariton generation on the surface, and resonance and planar waveguide coupling. The exact analysis of the light trapping properties of a given nanopattern is a formidable analytical/numerical problem, requiring the solution of near-field radiation-surface interactions through Maxwell’s equations for electromagnetic waves. Moreover, designing a nanopatterned surface with properties tailored for solar cells is even more difficult, as it represents an inverse design problem. This requires an inverse solution of Maxwell’s equations applied to interaction of electromagnetic waves with a surface geometry that is to be determined, given the desired spectral/directional distribution of absorbed radiation. Nevertheless, despite the scope and importance of the problem, scant work exists for dealing with the inverse solution of near-field radiation for nanopatterning of solar cells. Most work in the literature deals only with simple geometries using simplistic inverse methods for optimizing surface patterning (or in many cases, repetitive “brute-force” solutions covering the entire map of patterning parameters). Often, only one or two parameters are considered.

In previous work, we reported the results of preliminary experiments with the inverse optimization method for certain classes of 2-dimensional and 3-dimensional solar cell structures [22, 23]. In the current work, we improve upon the numerical techniques of [22, 23] to incorporate more sophisticated and practical scenarios. We first optimize the surface patterning of 3-dimensional solar panels in the presence of a periodic lattice of square nanochip grating, in order to maximize the light absorption. In doing so, we provide a mathematical framework for the optimization program which facilitates the incorporation of various practical and physical constraints. We demonstrate that a hybrid numerical optimization (composed of simulated annealing followed by quasi-Newton optimization) finds a near-optimal solution within a limited number of iterations (far fewer than required by an exhaustive search method). In particular, we find a geometry pattern in which the light trapping factor (i.e., number of absorbed photons) increases roughly by a factor of 1.52 when surface texturing is used. In addition to the optimization of 3-dimensional surface patterning, we introduce a novel approach for the approximation of the spectral optical characteristics of a given cell, which results in faster convergence rates for absorption calculation. The proposed method is based on adaptive nonuniform sampling of a target irradiance-absorption spectrum, motivated by the nonuniform concentration of the absorption spectrum and the wavelength-dependent convergence rates of FDTD simulations in the optical range. We use the proposed method in the inverse optimization and report expedited simulations.

Finally, we evaluate the sensitivity of the inverse optimization solution in the presence of fabrication/modeling error. Employing a Monte Carlo (MC) technique, and a multidimensional unscented transform (UT) method, we statistically analyze the robustness of the proposed design in the presence of Gaussian error in all geometry parameters. A practical sensitivity analysis is a critical issue in the engineering of high efficiency solar cells, especially from a manufacturing point of view. Despite this importance, there is no prior work on this subject, and to the best of the authors’ knowledge, the present work is a first attempt at this problem.

2 Problem Setup

The structure considered in this work is an amorphous silicon (a-Si) thin-film solar cell in the substrate configuration, and the goal is to evaluate the effect of surface texturing on the light trapping properties of the cell under solar illumination. The considered surface grating is a periodic lattice of three-dimensional rectangular-shaped metallic (silver) nanoparticles. The basic geometry of the structure is presented in Fig. 1. It consists of 3-D periodic Ag square nanochips mounted on a thin-film a-Si layer. The grating pattern is characterized by three parameters: a height \(h_{\mathrm{Ag}} \), width \(w_{\mathrm{Ag}} \), and a period \(\varLambda _{\mathrm{Ag}} \). The thickness of the silicon layer is also a variable, denoted by \(h_{\mathrm{Si}}\). A substrate material with a refractive index of 1 is assumed to be present under the a-Si layer. The cell is illuminated by a plane wave with zero polarization angle from above (cf. Fig. 1).
Fig. 1

Model of the 3-D solar cell with nanoparticle surface grating used for near-field radiation calculation

3 Optimization of the Geometry Parameters

The quasi-Newton (QN) method and simulated annealing (SA) are used to find the geometry corresponding to the maximum enhancement factor (EF). Both methods are detailed in prior work [22]. However, we provide a brief description of the two methods for completeness.

3.1 Quasi-Newton Method

The QN method is a memory-less optimization technique suited for finding local optima of a sufficiently smooth function. It continually updates a search point based on the first and second derivatives of the objective function [24–26]. A new candidate point is selected along the direction of maximum descent, determined by the gradient vector premultiplied by the inverse of the Hessian matrix. The search for a new point with a better objective value proceeds in that direction, using a so-called line-search process. For enhancement factor maximization, the unconstrained objective (cost) function \(f({\varvec{x}})\) is taken as the reciprocal of the solar enhancement factor \(E({\varvec{x}})\), where \({{\varvec{x}}}\) is the vector of geometric parameters being optimized. The Broyden–Fletcher–Goldfarb–Shanno (BFGS) update for the Hessian matrix and the basic line-halving technique for the line search are chosen. For a more detailed description of the method, we refer the reader to [22, 23].
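As an illustration of the BFGS update combined with a basic line-halving search, the sketch below minimizes a smooth toy objective. The quadratic stand-in, the function names, and the tolerances are our own illustrative choices, not the paper's FDTD-driven objective \(1/E({\varvec{x}})\).

```python
import numpy as np

def bfgs_line_halving(f, grad, x0, iters=100, tol=1e-8):
    """Minimal BFGS quasi-Newton sketch with a line-halving search."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))                      # inverse-Hessian approximation
    g = grad(x)
    for _ in range(iters):
        d = -H @ g                          # quasi-Newton descent direction
        t = 1.0                             # line halving: shrink until f decreases
        while f(x + t * d) >= f(x) and t > 1e-12:
            t *= 0.5
        s = t * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                      # curvature guard for the BFGS update
            rho = 1.0 / sy
            I = np.eye(len(x))
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
        if np.linalg.norm(g) < tol:
            break
    return x

# quadratic toy objective standing in for 1/E(x)
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 0.5) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 0.5)])
x_star = bfgs_line_halving(f, grad, [0.0, 0.0])
```

Because each accepted step strictly decreases the objective and the inverse-Hessian estimate is only updated when the curvature condition \(s^{\top}y > 0\) holds, the approximation stays positive definite and every search direction remains a descent direction.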

The QN method is deterministic in nature and may become trapped in local minima. Adding a perturbation to the search direction can improve its performance; however, such a perturbation is unlikely to change the search region significantly, so the algorithm might still settle at a local minimum far from the global solution. The QN method is therefore well suited to situations where the initial candidate point happens to be close to the global minimum, or to optimization problems having a single optimum. Unfortunately, that is not the case here, as the electromagnetic objective function is highly nonlinear and has many local optima [22]. We next describe another optimization method that is better suited to finding global solutions in such cases.

3.2 Simulated Annealing

In contrast to the QN method, the SA method is randomized in nature and is better suited to situations where the initial geometry is not known to reside in the proximity of the global minimum, and to highly non-convex objective functions [27, 28]. Although SA is primarily used for discrete optimization, variations have been introduced for solving continuous optimization problems. In this work, the fast annealing method developed by Ingber is chosen, in which a Cauchy distribution, rather than the conventional Boltzmann distribution, is used to generate new candidate points [29]. A detailed description of the SA method used is given in [22].
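A minimal sketch of fast annealing in the continuous setting, with Cauchy-distributed moves and a \(T_0/k\) cooling schedule, is given below; the multimodal toy objective and all parameter values are illustrative stand-ins for the FDTD-based objective, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fast_sa(f, x0, T0=1.0, iters=2000):
    """Fast-annealing sketch: Cauchy moves with temperature T0/k."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    best_x, best_f = x.copy(), fx
    for k in range(1, iters + 1):
        T = T0 / k                                  # fast-annealing cooling schedule
        cand = x + T * rng.standard_cauchy(x.size)  # heavy-tailed candidate move
        fc = f(cand)
        # Metropolis rule: always accept improvements, sometimes accept worse points
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
    return best_x, best_f

# multimodal toy objective (global minimum 0 at the origin)
f = lambda x: float(np.sum(x ** 2) + 2.0 * np.sum(1.0 - np.cos(3.0 * x)))
best_x, best_f = fast_sa(f, np.array([2.5, -2.0]))
```

The heavy tails of the Cauchy distribution occasionally produce long jumps even at low temperature, which is what allows fast annealing to use the aggressive \(T_0/k\) schedule without freezing into the first local minimum it visits.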

3.3 Hybrid Optimization Technique

Although SA is suitable for quickly scanning large state spaces, it often fails to capture some local variations of the objective function, especially when the cooling rate parameter is large. The performance of the SA method can be improved via techniques such as iterative re-annealing and cooling, use of memory, or alternative choices of the selection rule (see, for example, [30]). We also refer the interested reader to [31] for a discussion of the generalized SA method and how its performance can be tailored to different applications.

Another technique for improving the performance of the SA algorithm is to combine it with a local optimization technique to form a hybrid platform. Examples of local optimizers are memory-less algorithms such as the QN method described above, or memory-utilizing techniques such as Tabu Search (TS; see, e.g., [23] for optimization of 3-dimensional solar cell patterns using TS). In contrast to SA, the QN algorithm is more promising when the initial geometry is close to the global optimum. Hence, SA is generally regarded as a global solver, whereas QN is a local solver. It is in general useful to exploit a combination of the two methods for fast and accurate global optimization (see, e.g., [32–34] and the references therein for more sophisticated studies of hybrid optimization tools based on SA). In this paper, we evaluate the performance of the SA and QN optimization techniques individually, as well as in a hybrid framework, to optimize the three-dimensional parameters of a surface-textured a-Si solar panel. In the hybrid optimization, a fast-converging SA is first run to obtain a set of one or more candidate points with the best objective values found, including the best solution found by SA. The QN method is then started from each of these candidate points to perform a proximity search. The numerical results and discussion of the implemented optimization techniques are given in Sect. 7.
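The hybrid scheme can be sketched as follows: a short SA pass supplies the candidate, and a quasi-Newton polish refines it. Here SciPy's BFGS minimizer stands in for the paper's QN implementation, and the multimodal toy objective replaces the FDTD-based \(1/E({\varvec{x}})\); all names and settings are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def sa_stage(f, x0, T0=1.0, iters=500):
    """Coarse global scan: simulated annealing with Cauchy moves."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    best_x, best_f = x.copy(), fx
    for k in range(1, iters + 1):
        T = T0 / k
        cand = x + T * rng.standard_cauchy(x.size)
        fc = f(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
    return best_x

def hybrid(f, x0):
    """SA finds a promising candidate; BFGS performs the proximity search."""
    x_sa = sa_stage(f, x0)
    res = minimize(f, x_sa, method="BFGS")  # local quasi-Newton polish
    return res.x, float(res.fun)

# multimodal toy objective standing in for 1/E(x)
f = lambda x: float(np.sum(x ** 2) + 2.0 * np.sum(1.0 - np.cos(3.0 * x)))
x_opt, f_opt = hybrid(f, np.array([2.0, 2.0]))
```

The division of labor mirrors the text: the randomized stage only needs to land somewhere in the right basin, after which the deterministic stage converges quickly because it starts close to a minimum.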

4 Practical Constraints

The problem of optimizing the geometry of a surface-textured solar cell is subject to physical and real-world constraints. A class of constraints is implied by certain geometry vectors which will not map to realistic dimensions. For instance, from Fig. 1 it is clear that \(w_{\mathrm{Ag}}\) cannot be larger than \(\varLambda _{\mathrm{Ag}}\). Another set of constraints is related to our general knowledge of the existing electromagnetic and quantum-level phenomena inside the solar structure, resulting in certain limitations on the sizes of the structures. For example, too large or too small dimensions can limit the electromagnetic absorption, severely enhance optical reflection from the top of the structure to the air, or increase the undesirable recombination of the minority carriers inside the cell, which limits the overall conversion efficiency of the solar cell (for more explanations on these issues, see [22]). Finally, another class of restrictions is associated with the fabrication process. Due to the limited resolution of fabrication devices, numerical values of the optimized geometries must be truncated, and dimensions must be larger than the minimal processing scale at which fabrication is feasible. Furthermore, numerical error is almost inevitable during any fabrication process. The collection of all physical constraints is mathematically modeled by imposing upper and lower bounds on the space of valid geometries \({\varvec{x}}\) in the optimization program. In other words, we assume that a valid geometry satisfies the following inequalities:
$$\begin{aligned} {\varvec{x}}^{(\mathrm{L})}\le {\varvec{x}}\le {\varvec{x}}^{\left( \mathrm{U} \right) }. \end{aligned}$$ (1)
The considered optimization problem is therefore a constrained optimization. In the remainder of this section, we explain how we can modify our optimization algorithms in order to incorporate these constraints. Then in the next section, we address fabrication error and explain how we can assess the sensitivity of the optimized geometry to the fabrication error.

4.1 Constrained Optimization

As discussed above, there are several sources of constraints that are enforced on the set of valid geometries. The first type of constraint includes upper and lower bounds on each dimension. These bounds are chosen so that nanochip sizes and the thin-film thickness are not unreasonably large or unrealistically small. Provided that the underlying electromagnetic interaction of the incident light and the thin-film panel is relatively well understood, and the dependence of the absorbed light spectrum on the geometry parameters is (at least intuitively) known, it is possible to narrow down the space of possible dimensional values (the “search space”) to regions where using the nanochips can result in significant enhancement.

A second class of constraints involves those that arise from the available finite precision of the fabrication and testing devices. We must assume a limited precision for the allowable geometry parameters, since it is impossible to fabricate nanostructures with infinitesimally small precision. In this paper, due to limitations in computation time using the FDTD simulator, we limit the search space to parameters rounded off to \(1\,\hbox {nm}\), meaning that none of the dimensions have sub-nanometer numerical values.

The optimization problem that we consider is defined in terms of an objective (cost) function. We always formulate the problem in terms of a “minimization problem,” therefore, the objective function must be chosen in such a way that its minimization corresponds to the maximization of the enhancement factor. One straightforward choice for the objective function is the inverse of the enhancement factor,
$$\begin{aligned} f\left( {\varvec{x}} \right) =1/E({\varvec{x}}), \end{aligned}$$ (2)
where \({\varvec{x}}\) is the vector representation of the geometric dimensions, \(f\left( {\varvec{x}} \right) \) is the value of the objective function for \({\varvec{x}}\), and \(E({\varvec{x}})\) is the average enhancement factor of the light absorption in the solar spectrum. We consider the wavelength range between 300 nm and 900 nm as the significant segment for the solar spectrum. The enhancement factor is defined as the ratio of the average number of absorbed photons in the presence of nanopatterning, to the average number of absorbed photons in the absence of surface nanopatterning, and can be calculated from the numerical values of the absorbed power at different wavelengths. More precisely, if the solar irradiance as a function of wavelength is denoted by \(I(\lambda )\), and the value of absorptivity in the presence and absence of surface grating are denoted by \(\alpha _{\mathrm{gr}} (\lambda )\) and \(\alpha _{\mathrm{ngr}} (\lambda )\), respectively, then the EF is defined by
$$\begin{aligned} E\left( {\varvec{x}} \right) =\left( {\int \limits _\varOmega I\left( \lambda \right) \cdot \lambda \cdot \alpha _{\mathrm{gr}} \left( \lambda \right) \hbox {d}\lambda } \right) \Big / \left( { \int \limits _\varOmega I\left( \lambda \right) \cdot \lambda \cdot \alpha _{\mathrm{ngr}} \left( \lambda \right) \hbox {d}\lambda } \right) , \end{aligned}$$ (3)
where \(\varOmega \) is the wavelength range of the solar spectrum. This formulation follows from the fact that the number of absorbed photons at wavelength \(\lambda \) is proportional to \(\lambda \cdot \alpha (\lambda )\) (the photon energy is proportional to \(1/\lambda \)) and also directly proportional to the intensity of the incident light at that wavelength. For simplicity, we refer to the term \(I\left( \lambda \right) \cdot \lambda \cdot \alpha (\lambda )\) as the “irradiance-absorption product.”
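A numerical version of the enhancement-factor ratio can be sketched as below. The synthetic irradiance and absorptivity curves are illustrative stand-ins: in the paper, \(\alpha_{\mathrm{gr}}\) and \(\alpha_{\mathrm{ngr}}\) come from FDTD runs and \(I(\lambda)\) from the solar spectrum.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal rule (spelled out to avoid NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def enhancement_factor(lam, irr, alpha_gr, alpha_ngr):
    """Ratio of absorbed photons with vs. without the grating.
    Photon count scales as I(lam) * lam * alpha(lam), hence the lam factor."""
    num = trapezoid(irr * lam * alpha_gr, lam)
    den = trapezoid(irr * lam * alpha_ngr, lam)
    return num / den

# synthetic spectra over the 300 nm to 900 nm range (illustrative only)
lam = np.linspace(300e-9, 900e-9, 61)
irr = np.exp(-(((lam - 550e-9) / 150e-9) ** 2))     # bell-shaped stand-in for I(lam)
alpha_ngr = np.full_like(lam, 0.30)                 # flat bare-film absorptivity
alpha_gr = 0.30 + 0.25 * np.exp(-(((lam - 600e-9) / 80e-9) ** 2))  # grating boost
E = enhancement_factor(lam, irr, alpha_gr, alpha_ngr)
```

Note that the factor of \(\lambda\) appears in both integrals, so the ratio measures photon counts rather than absorbed power.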
The constrained problem of maximizing the enhancement factor can be written as
$$\begin{aligned} \max _{{\varvec{x}}^{(\mathrm{L})}\le {\varvec{x}}\le {\varvec{x}}^{(\mathrm{U})}} E\left( {\varvec{x}} \right) . \end{aligned}$$ (4)
As mentioned, the above problem corresponds to the following minimization:
$$\begin{aligned} \min _{{\varvec{x}}^{(\mathrm{L})} \le {\varvec{x}} \le {\varvec{x}}^{(\mathrm{U})}} f\left( {\varvec{x}} \right) . \end{aligned}$$ (5)
We use the following strategy to incorporate the constraints of Eq. 5 into the QN and SA algorithms. For the QN method, we incorporate the constraints into the objective function, and solve an equivalent unconstrained problem. In other words, we introduce additional positive costs in the objective function, which account for cases where the geometry dimensions fall outside the desired boundaries. The optimization algorithm is therefore unlikely to converge to an out-of-bounds geometry, due to undesirable additional costs. Moreover, the algorithm retains the freedom to arbitrarily choose and try any point in the search space. We use the following modified function in the QN optimization:
$$\begin{aligned} f_{\mathrm{QN}} \left( {\varvec{x}} \right) =\frac{1}{E\left( {\varvec{x}} \right) }+ \sum _{i=1}^M \lambda _{i} \lceil x_{i} - x_i^{\left( \mathrm{U} \right) }\rceil ^{+}+\sum _{i=1}^M \gamma _i \lceil x_i^{{\left( \mathrm{L} \right) }}-x_i\rceil ^{+}, \end{aligned}$$ (6)
where \(\lambda _i \) and \(\gamma _i \) are positive constants that should be chosen appropriately, and by definition,
$$\begin{aligned} \lceil a \rceil ^{+}:=\hbox {max}(0,a) \end{aligned}$$ (7)
for a number \(a\). This choice of cost function is related to the concept of Lagrange multipliers in the theory of constrained optimization. In fact, provided that the right \(\lambda _i \) and \(\gamma _i \) are chosen, from the theory of constrained optimization and Karush–Kuhn–Tucker (KKT) conditions [35], the solution to the original problem of Eq. 4 is given by the following unconstrained problem:
$$\begin{aligned} \min _{\varvec{x}} f_{\mathrm{QN}} ({\varvec{x}}). \end{aligned}$$ (8)
Including the Lagrange multipliers in the constrained optimization ensures that the outlier solutions are penalized depending on their distance from the valid region. Note that in this method, constraints are not imposed directly on all candidate solutions at every iteration, and are rather incorporated softly into the objective function.
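The soft-penalty objective can be sketched as follows; the hinge terms add cost only when a coordinate leaves its bounds. The toy enhancement function and the unit penalty weights are illustrative assumptions: in the paper, \(E({\varvec{x}})\) comes from FDTD runs and the weights \(\lambda_i, \gamma_i\) must be chosen appropriately.

```python
import numpy as np

def penalized_objective(E, x, lo, hi, lam=None, gam=None):
    """Soft-constrained QN objective: 1/E(x) plus hinge penalties for
    coordinates above the upper or below the lower bound."""
    x, lo, hi = (np.asarray(v, dtype=float) for v in (x, lo, hi))
    lam = np.ones_like(lo) if lam is None else lam   # upper-bound penalty weights
    gam = np.ones_like(lo) if gam is None else gam   # lower-bound penalty weights
    over = np.maximum(0.0, x - hi)                   # hinge: amount above x^(U)
    under = np.maximum(0.0, lo - x)                  # hinge: amount below x^(L)
    return 1.0 / E(x) + float(lam @ over + gam @ under)

# toy enhancement factor peaking at x = (0.5, 0.5, 0.5)
E = lambda x: 1.0 + np.exp(-np.sum((np.asarray(x) - 0.5) ** 2))
lo, hi = np.zeros(3), np.ones(3)
inside = penalized_objective(E, [0.5, 0.5, 0.5], lo, hi)   # no penalty applies
outside = penalized_objective(E, [1.5, 0.5, 0.5], lo, hi)  # one coordinate out of bounds
```

Because the penalty grows linearly with the distance from the valid box, an unconstrained solver is steered back toward feasible geometries without ever being forbidden from probing points just outside the bounds.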

In contrast, in the SA algorithm, the choice of the objective function is not changed. Instead, the constraints are strictly imposed at every iteration by assuring that the new candidate point lies inside the allowable region. In other words, the rule for selection of a new candidate changes as follows. In each iteration, the selection of a new random candidate is based on the default distribution. However, if the new point falls outside the allowable region, the candidate is immediately rejected, and a new choice is considered. This is repeated until a valid candidate is selected.
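The candidate-rejection rule described above amounts to redrawing until the move stays inside the box; a small sketch (the variable names and the bounded-retry guard are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def propose_in_bounds(x, T, lo, hi, max_tries=1000):
    """Draw Cauchy-perturbed SA candidates, immediately rejecting any that
    leave the box [lo, hi], until a valid candidate appears."""
    for _ in range(max_tries):
        cand = x + T * rng.standard_cauchy(x.size)
        if np.all(cand >= lo) and np.all(cand <= hi):
            return cand
    return x.copy()  # fall back to the current point (practically unreachable)

lo, hi = np.zeros(3), np.ones(3)
x = np.full(3, 0.5)
cand = propose_in_bounds(x, 0.2, lo, hi)
```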

5 Nonuniform Spectral Sampling and Fast Numerical Method

As expressed in Eq. 3, calculating the enhancement factor requires computing numerical integrals of the irradiance-absorption products in the absence and presence of a nanograting. FDTD simulations are set up to compute the absorbed power as a function of wavelength. Once the absorbed power is computed for a sufficiently large number of wavelengths in the solar range, the irradiance-absorption product can be interpolated over the whole range and numerically integrated. The straightforward approach is to consider uniform wavelength samples in the solar range: the more samples considered, the more accurate the estimate of the enhancement factor. This is the standard approach that we have pursued for most of the numerical results of the current paper and in our previous work [22]. However, certain aspects of the numerical computations reveal that a nonuniform choice of sample wavelengths yields more accurate estimates of the enhancement factor and hence a faster computational process. The key is that FDTD simulations have variable convergence rates that depend on the physical nature of the problem and the simulation wavelength; as a result, the simulations tend to run longer at certain frequencies. Furthermore, in all of our empirical runs, the FDTD simulations are much more time consuming in the presence of a grating. In addition, the irradiance-absorption profile is often a bandpass spectrum that is significantly larger in certain segments of the solar range. Therefore, it is possible to reduce the computational cost of the simulations by employing a nonuniform set of wavelength samples concentrated at the bulk of the irradiance-absorption profile. We propose a simple nonuniform sampling approach that realizes this idea. The method is as follows.

Let the spectral simulations be limited to \(N\) sample wavelengths between \(\lambda _{\mathrm{min}}\) and \(\lambda _{\mathrm{max}}\). In the uniform sampling approach, these points are chosen uniformly in the interval \([\lambda _{\mathrm{min}} ,\lambda _{\mathrm{max}}]\). In the nonuniform approach, sample wavelengths are chosen uniformly only for the case with no grating. In the presence of gratings (which accounts for the major computational cost of the simulations), samples are selected adaptively based on previously sampled wavelengths and the corresponding irradiance-absorption product values. First, a set of \(N/2\) uniform sample wavelengths \(\lambda _1, \lambda _2, \ldots , \lambda _{N/2}\) is considered in \([\lambda _{\mathrm{min}}, \lambda _{\mathrm{max}}]\), and the corresponding absorption powers are computed using near-field Maxwell equation solutions obtained through FDTD simulations. Next, the irradiance-absorption product is calculated at those \(N/2\) wavelengths and the region of significance is identified. The region of significance is an interval of wavelengths \(\lambda \) where the irradiance-absorption product \(\lambda \alpha _{\mathrm{gr}} \left( \lambda \right) I(\lambda )\) is large and calls for a higher wavelength resolution. We define this region as the range within the smallest and largest index \(1\le i\le N/2\) for which the following holds:
$$\begin{aligned} \lambda _i \alpha _{\mathrm{gr}} \left( {\lambda _i } \right) I\left( {\lambda _i} \right) \ge 0.1 \max _{1\le j\le N/2} \lambda _j \alpha _{\mathrm{gr}} \left( {\lambda _j } \right) I\left( {\lambda _j} \right) . \end{aligned}$$ (9)
Let the smallest and largest index solutions \(i\) to the above inequality be \(i^{\prime }\) and \(i^{\prime \prime }\), respectively. In other words, for \(i^{\prime }\le i\le i^{\prime \prime }\), the irradiance-absorption profile at wavelength \(\lambda _i\) is greater than 10 % of the maximum irradiance-absorption. We then consider \(N/2\) extra sample wavelengths \(\nu _1, \nu _2, \ldots , \nu _{N/2} \) in the interval \([\lambda _{i^{\prime }}, \lambda _{i^{\prime \prime }}]\) and run the simulations for those wavelengths. As a result, we have the absorption profile of the given geometry for the set of \(N\) wavelengths \(\{\lambda _1, \ldots , \lambda _{N/2}, \nu _1 ,\ldots ,\nu _{N/2}\}\). The resulting irradiance-absorption profile is interpolated and integrated to compute the enhancement factor. The results of optimization using nonuniform spectral sampling are given in the simulations section.
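The two-pass sampling scheme can be sketched as follows. The Gaussian `product` function is a synthetic stand-in for one FDTD evaluation of the irradiance-absorption product at a single wavelength; the function names and the sample budget \(N = 20\) are illustrative.

```python
import numpy as np

def adaptive_spectrum_integral(product, lam_min, lam_max, N):
    """Adaptive two-pass sampling: N/2 uniform wavelengths, then N/2 more
    packed into the region where the irradiance-absorption product exceeds
    10 % of its coarse-pass maximum; finally a trapezoidal integral."""
    lam = np.linspace(lam_min, lam_max, N // 2)
    p = np.array([product(l) for l in lam])       # coarse uniform pass
    idx = np.where(p >= 0.1 * p.max())[0]         # region of significance
    lo, hi = lam[idx[0]], lam[idx[-1]]
    nu = np.linspace(lo, hi, N // 2)              # refined pass inside the region
    q = np.array([product(l) for l in nu])
    lam_all = np.concatenate([lam, nu])
    p_all = np.concatenate([p, q])
    order = np.argsort(lam_all)
    lam_all, p_all = lam_all[order], p_all[order]
    # trapezoidal rule on the merged, sorted samples
    return float(np.sum((p_all[1:] + p_all[:-1]) * np.diff(lam_all)) / 2.0)

# synthetic bandpass stand-in for lam * alpha_gr(lam) * I(lam), wavelengths in nm
product = lambda l: float(np.exp(-(((l - 600.0) / 60.0) ** 2)))
approx = adaptive_spectrum_integral(product, 300.0, 900.0, 20)
```

Half of the expensive evaluations land inside the band where the integrand is significant, so for a given simulation budget the integral is resolved where it matters most.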

6 Fabrication Error Modeling

Optimal design of periodic nanochips for achieving broadband absorption enhancement must be robust to incurred structural and numerical errors. In particular, every fabrication method is subject to errors that can be statistically modeled by examining a large sample set of finished structures based on a given target geometry, by means of high resolution nanoscale imaging techniques such as scanning electron microscopy (SEM) or atomic force microscopy (AFM). Many factors contribute to the fabrication error, including the limited precision of fabrication devices/processes and human factors. To the best of our knowledge, a comprehensive study of nanostructure fabrication error modeling does not exist in the literature. In addition to concerns over errors that arise during fabrication, it is also useful to know from a computational perspective how sensitive the optimal solution is. In particular, if the enhancement factor objective function is highly oscillatory with respect to the geometry vector, then good solutions are likely to lie in the close vicinity of poor solutions, and a small deviation in the structure can result in unexpectedly bad performance.

For simplicity, we consider a hypothetical error model in which in the finished cell, each dimension has an independent Gaussian deviation around the optimized target value, with some fixed standard deviation. In practice, structural errors of different parameters can be correlated, and their variances might depend on their values. However, in the absence of information on such possible correlations, we believe that the uncorrelated model is general enough to capture the effects of many uncertainties, and can be useful for evaluating the sensitivity of a proposed design.
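Under this uncorrelated Gaussian model, a sensitivity study reduces to repeatedly perturbing the optimized geometry and collecting statistics of the resulting enhancement. A sketch follows; the smooth toy enhancement surface and all settings are illustrative stand-ins, since each real evaluation is an FDTD run.

```python
import numpy as np

rng = np.random.default_rng(3)

def mc_sensitivity(E, x_opt, sigma, n_samples=200):
    """Monte Carlo fabrication-error study: each dimension of the finished
    geometry gets an independent zero-mean Gaussian deviation of std sigma."""
    x_opt = np.asarray(x_opt, dtype=float)
    vals = np.empty(n_samples)
    for i in range(n_samples):
        x_fin = x_opt + rng.normal(0.0, sigma, size=x_opt.size)  # finished geometry
        vals[i] = E(x_fin)
    return vals.mean(), vals.std()

# smooth toy enhancement surface peaking at the optimized geometry
E = lambda x: 1.5 * float(np.exp(-0.5 * np.sum((np.asarray(x) - 1.0) ** 2)))
mean_E, std_E = mc_sensitivity(E, np.ones(4), sigma=0.05)
```

A small gap between the nominal peak value and the perturbed mean, together with a small standard deviation, indicates a design that tolerates the assumed fabrication error; a large spread flags a fragile optimum.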

The statistics of the objective function (i.e., the enhancement factor) under the described fabrication error can be obtained using a MC simulation. Suppose that the optimally designed geometry is \({\varvec{x}}_{\mathrm{opt}}\). The simulated finished geometry is then given by
$$\begin{aligned} {\varvec{x}}_{\mathrm{fin}} = {\varvec{x}}_{\mathrm{opt}} +{\varvec{x}}_{\mathrm{err}}, \end{aligned}$$
where \({\varvec{x}}_{\mathrm{err}}\) is the manufacturing error vector, the entries of which are independent Gaussian variables with standard deviation \(\sigma _{\mathrm{err}}\). The MC method evaluates the objective function for a large set of finished geometries, i.e., for a large set of random fabrication error vectors drawn from the presumed error model; the statistics of the objective function are then estimated from the simulated values. Convergence of the MC method often requires a fairly large number of sample points and, given the time-consuming nature of the FDTD simulations, can be quite tedious. A faster alternative to MC is the UT technique; a similar approach was applied to modeling the manufacturing error of microstrip filters in [36]. In the UT method, a deterministic set of points \(\left\{ {S_i } \right\} _{i=1}^N\) (called the sigma points) and a set of corresponding weights \(\left\{ {w_i } \right\} _{i=1}^N \) are chosen in such a way that the first \(T\) moments of the random error match the empirical weighted moments of the sample points [37]. In other words, the sigma points and the weights are chosen such that the following equations are satisfied:
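The MC procedure can be sketched as below. Because the FDTD-based objective is not available here, a hypothetical smooth quadratic surrogate stands in for the enhancement factor; the optimal geometry, the 5 nm standard deviation, and the sample size of 500 follow the paper, but the surrogate itself is our assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
x_opt = np.array([78.0, 62.0, 65.0, 184.0])   # nm, optimized geometry
sigma_err = 5.0                               # nm, per-dimension std

def surrogate_ef(x):
    """Hypothetical stand-in for the FDTD-computed enhancement factor;
    in the paper each evaluation is a full spectral FDTD trial."""
    return 1.52 - 1e-4 * float(np.sum((x - x_opt) ** 2))

# MC over simulated finished geometries x_fin = x_opt + x_err
errs = rng.normal(0.0, sigma_err, size=(500, 4))
ef = np.array([surrogate_ef(x_opt + e) for e in errs])
mc_mean, mc_std = float(ef.mean()), float(ef.std(ddof=1))
```

For this surrogate the expected EF drop is \(10^{-4}\cdot 4\sigma _{\mathrm{err}}^2 = 0.01\), so the MC mean lands near 1.51; with the real FDTD objective each of the 500 evaluations would cost a full spectral trial, which motivates the faster UT alternative.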
$$\begin{aligned} \sum _{i=1}^N w_i S_i^t =E\left\{ {x_{\mathrm{err}}^t} \right\} , \quad 1\le t\le T, \end{aligned}$$
where \(E\{x_{\mathrm{err}}^t \}\) is the \(t\)th moment of \(x_{\mathrm{err}}\). Solving the above equations is cumbersome even for a moderate number of points. However, it turns out that for a one-dimensional error with a normal distribution, the solution \(\left\{ {S_i, w_i} \right\} _{i=1}^N\) to the above equations can be derived from the Hermite polynomials of up to order \(N\) [38]; for a Gaussian distribution, this method reduces to the Gaussian quadrature (GQ) method [39]. The Hermite polynomials are defined by the following recursive equations:
$$\begin{aligned} H_0 \left( x \right)&= 1, H_1 \left( x \right) =x,\nonumber \\ H_{n+1} \left( x \right)&= xH_n \left( x \right) -nH_{n-1} \left( x \right) . \end{aligned}$$
The sigma points \(\left\{ {S_i }\right\} _{i=1}^N\) are equal to the zeros of the polynomials \(H_N (x)\) (scaled by \(\sigma \)) and the weights \(\left\{ {w_i } \right\} _{i=1}^N\) are given by
$$\begin{aligned} w_i =\frac{2^{N+1}N!\sqrt{\pi }}{H_N^{\prime } \left( {S_i } \right) ^{2}}. \end{aligned}$$
The weights can also be found by solving the linear system of equations given by Eq. 11 once the sigma points are identified. Having the sigma points and weights, one can estimate the statistics of the objective function by evaluating it at the errors specified by the sigma points. The moments of the objective function \(f({\varvec{x}}_{\mathrm{fin}})\) can be estimated from
$$\begin{aligned} E\left\{ f^{t}\left( {{\varvec{x}}_{\mathrm{fin}} } \right) \right\} \approx \sum _{i=1}^N w_i f^{t}({\varvec{x}}_{\mathrm{opt}} +S_i ) \end{aligned}$$
for every \(t\). In the case of multidimensional vectors, when the errors in different dimensions are independent, the set of multidimensional sigma points is simply the Cartesian product of the one-dimensional sets of sigma points.
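A minimal sketch of the one-dimensional construction, using NumPy's probabilists' Hermite (`hermite_e`) quadrature instead of solving Eq. 11 directly; the recursion above is the probabilists' form, whose Gauss nodes and weights NumPy provides. With \(N\) points, the weighted moments reproduce the Gaussian moments up to order \(2N-1\):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def ut_points_1d(n, sigma):
    """1-D sigma points and weights for a zero-mean Gaussian with
    standard deviation sigma, via probabilists' Gauss-Hermite
    quadrature (weight function exp(-x^2/2))."""
    s, w = hermegauss(n)            # unscaled nodes and weights
    return sigma * s, w / w.sum()   # scale nodes, normalize weights to 1

s4, w4 = ut_points_1d(4, 1.0)
m2 = float(np.sum(w4 * s4 ** 2))    # should match E{x^2} = sigma^2 = 1
m4 = float(np.sum(w4 * s4 ** 4))    # should match E{x^4} = 3 sigma^4 = 3
```

For \(N=4\) the unscaled nodes are \(\pm 0.7420\) and \(\pm 2.3344\), the values that appear in Table 2.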

7 Simulations and Discussion

Optimization of the geometry parameters was performed using FDTD simulations with \(N=21\) sample wavelengths between 300 nm and 900 nm. Four parameters were considered for optimization, as illustrated in Fig. 1: the thickness of the a-Si, the height of the squares, the width of the squares, and the period of the patterning \(({\varvec{x}} = [h_{\mathrm{Si}}, h_{\mathrm{Ag}}, w_{\mathrm{Ag}}/2, \varLambda _{\mathrm{Ag}}])\). The following upper and lower bounds are enforced on the geometry of the 3D cells:
$$\begin{aligned} {\varvec{x}}^{(\mathrm{U})}=[170, 100, 100, 250]\,\hbox {nm} \end{aligned}$$
$$\begin{aligned} {\varvec{x}}^{(\mathrm{L})}=[45, 20, 50, 100]\,\hbox {nm}. \end{aligned}$$
Creating periodic Ag squares is technically straightforward with present-day electron-beam lithography, provided a minimum spacing is respected between two adjacent squares. Therefore, an additional constraint is imposed during optimization, dictating a minimum spacing of 50 nm between adjacent squares. The mesh and mesh sizes are automatically generated by FDTD Solutions, taking into account the wavelengths used and the highest material indices, except in the areas surrounding and containing the squares, where a refined mesh size of 2.5 nm is imposed. The cell is irradiated by the normal AM1.5 solar spectrum. On average, solving for the radiative properties of the solar cell for incident solar radiation between 300 nm and 900 nm (hereafter referred to as a spectral FDTD trial) at 21 uniform wavelength samples required around 13 min.

7.1 Optimization Results

We present the results of two independent SA simulations and two independent QN optimizations. The SA results are further improved using local QN optimizations started at the optimal SA solutions. The evolution of the inverse cost function for the SA simulations followed by local QN refinements, and for the independent QN simulations, is shown in Figs. 2, 3, 4, and 5. The initial points of the optimizations are selected uniformly at random within the upper and lower bounds specified above. Note that the iterations do not reflect the additional spectral FDTD trials that resulted in rejected candidate points (i.e., only accepted points of each algorithm appear in these figures). Table 1 presents the numerical values of the initial simulation points, the optimal geometry solutions, the corresponding EFs, and the number of spectral FDTD trials before reaching the optimal solution in each case. The number of trials for each optimization is directly related to the running time of each implementation. Based on these results, the SA method outperforms QN in obtaining an optimal parameter set and is relatively faster. The best EF obtained was 1.52, after 15 SA trials followed by four QN trials. Note that the convergence of the algorithms is based not on the value of the final solution but on the ability of the algorithms to select and test significantly different new candidate points. Specifically, when the temperature parameter in the SA algorithm becomes too small, the choice of new candidates becomes very limited and the selection process very strict; the algorithm therefore stops searching once the temperature has decayed sufficiently. In contrast, the QN method halts at a local optimum, where no local modification of the solution can improve the objective function.
Fig. 2

Evolution of the obtained enhancement factor per iteration for the first SA simulation followed by a local QN optimization
Fig. 3

Evolution of the obtained enhancement factor per iteration for the second SA simulation followed by a local QN optimization
Fig. 4

Evolution of the obtained enhancement factor per iteration for the first QN simulation
Fig. 5

Evolution of the obtained enhancement factor per iteration for the second QN simulation

Table 1

Results for SA and QN algorithms for the 3-D inverse optimization problem

| \({\varvec{x}}_{\mathrm{initial}} \) (nm) | Optimal EF | # of Trials | \({\varvec{x}}_{\mathrm{opt}} \) (nm) |
| \(\left[ {81, 64, 76, 230} \right] \) |  |  | \([78, 64, 76, 230]\) |
| \([78, 62, 65, 183]\) |  |  | \([78, 62, 65, 184]\) |


7.2 Fabrication Error Modeling

From the previous section, the optimal design geometry for the three-dimensional square grating is
$$\begin{aligned} {\varvec{x}}_{\mathrm{opt}} =\left[ {78\, 62\, 65\, 184} \right] \,\hbox {nm}, \end{aligned}$$
which is obtained by SA followed by the QN method. The corresponding EF for this geometry is around 1.52 for the incident solar irradiance specified by AM1.5. As motivated in Sect. 6, we use a numerical model to evaluate the sensitivity of the solution to variations in the geometry vector. We refer to the structural errors as “fabrication error” and propose a mathematical model to incorporate their effect on the objective function. A simple model is to assume that the fabrication error has independent dimensions with Gaussian distributions; in other words, we model it as an additive 4-dimensional vector with independent Gaussian entries of mean 0 and standard deviation \(\sigma \). For the analysis of the current paper, we consider a hypothetical value of \(\sigma = 5\,\hbox {nm}\), which is consistent with practical values.
In this section, we evaluate the sensitivity of the optimal EF with respect to the fabrication error and seek to understand the statistics of such a deviation. We first run a MC simulation (described previously in the paper) with 500 samples and obtain the empirical statistics of the enhancement factor of the finished structure, namely, \(f({\varvec{x}}_{\mathrm{fin}})\). Based on these simulations, the mean enhancement factor is 1.37 and the standard deviation is 0.005, compared with the ideal EF of 1.52. We also study the sensitivity of the EF to the fabrication error using the UT method. We consider four sigma points in each dimension, equal to the roots of the polynomial \(H_4 (x)\) multiplied by the standard deviation of the error, \(\sigma =5\,\hbox {nm}\) (see Eq. 12 and the explanation therein). The resulting sigma points and corresponding weights are listed in Table 2. Because the errors in different dimensions are assumed independent, the 4-dimensional sigma points are the collections of all possible combinations of the sigma points in all dimensions. In other words, the 256 4-dimensional sigma points and weights are given by
$$\begin{aligned} S_{i,j,k,l}&= [s_i ,s_j ,s_k ,s_l ],\nonumber \\ W_{i,j,k,l}&= w_i w_j w_k w_l ,\nonumber \\ 1&\le i,j,k,l\le 4, \end{aligned}$$
where \(\{s_1, s_2, s_3, s_4 \}\) and \(\{w_1, w_2, w_3, w_4 \}\) are the sets of sigma points and weights, respectively, given in Table 2.
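The Cartesian-product construction above is mechanical; a sketch, reusing NumPy's probabilists' Gauss–Hermite points scaled by \(\sigma = 5\,\hbox {nm}\):

```python
import numpy as np
from itertools import product
from numpy.polynomial.hermite_e import hermegauss

s, w = hermegauss(4)
s, w = 5.0 * s, w / w.sum()      # 1-D sigma points (sigma = 5 nm), weights

# 256 four-dimensional sigma points and their product weights
S = np.array([[s[i], s[j], s[k], s[l]]
              for i, j, k, l in product(range(4), repeat=4)])
W = np.array([w[i] * w[j] * w[k] * w[l]
              for i, j, k, l in product(range(4), repeat=4)])
```

Since each 1-D weight set sums to one, the 256 product weights also sum to one, so the weighted evaluations form a proper discrete distribution over finished geometries.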
Table 2

Four optimum UT Sigma points and weights for Gaussian error in one dimension with standard deviation of 5 nm

Sigma points (\({\varvec{s}}_{\varvec{i}}, 1\le i\le 4\)), in units of \(\sigma \): \(\pm \)0.7420, \(\pm \)2.3344

Weights (\({\varvec{w}}_{\varvec{i}}, 1\le i\le 4\)): 0.4541 (at \(\pm \)0.7420), 0.0459 (at \(\pm \)2.3344)


Based on the experiments with the 4-dimensional sigma points, the mean enhancement factor is 1.35 and the standard deviation is 0.0044. The cumulative distribution function (cdf) of the resulting EF based on the MC simulations and the UT technique is plotted in Fig. 6. From the estimated statistics, we can infer that with a confidence of 90 %, the finished EF is above 1.25.
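The 90 % statement can be read off a weighted empirical cdf of the sigma-point evaluations; a sketch of the quantile computation (the EF values and weights below are placeholders, not the paper's data):

```python
import numpy as np

def weighted_quantile(values, weights, q):
    """q-quantile of a discrete weighted sample, e.g., UT sigma-point
    evaluations of the EF with their product weights."""
    order = np.argsort(values)
    v = np.asarray(values, float)[order]
    cum = np.cumsum(np.asarray(weights, float)[order])
    return float(v[np.searchsorted(cum, q)])

# placeholder discrete EF distribution (illustrative only)
vals = [1.20, 1.30, 1.35, 1.40]
wts = [0.05, 0.15, 0.40, 0.40]
q10 = weighted_quantile(vals, wts, 0.10)   # 90 % of the mass lies above this
```

The same routine applied with uniform weights to the 500 MC samples yields the MC version of the cdf in Fig. 6.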
Fig. 6

Empirical cdf of the absorptivity enhancement factor for the finished geometry obtained by MC method with 500 samples and the UT technique with 256 4-D sigma points

To assess the sensitivity of the enhancement factor with respect to variations in each individual dimension, we perform UT simulations specific to each dimension. Assuming that the other three parameters are error free, we use \(N=10\) sigma points in one of the dimensions. The optimal sigma points and corresponding weights are calculated from the zeros of the Hermite polynomial of order 10 (Eq. 12), scaled by the standard deviation of 5 nm, and Eq. 13, respectively. Their numerical values are listed in Table 3.
Table 3

Ten optimum UT Sigma points and weights for Gaussian error in one dimension with standard deviation of 5 nm

Sigma points (nm): \(\pm \)2.4247, \(\pm \)7.3299, \(\pm \)12.4216, \(\pm \)17.9091, \(\pm \)24.2973

Weights: 0.3446, 0.1355, \(1.911\times 10^{-2}\), \(7.581\times 10^{-4}\), \(4.311\times 10^{-6}\) (respectively)

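As a consistency check, the Table 3 sigma points can be reproduced from the roots of the order-10 probabilists' Hermite polynomial, scaled by \(\sigma = 5\,\hbox {nm}\), using NumPy's quadrature routine:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

nodes, _ = hermegauss(10)               # roots of He_10(x)
scaled = np.sort(np.abs(5.0 * nodes))   # scale by sigma = 5 nm
pos = scaled[::2]                       # one representative per +/- pair
# pos recovers the Table 3 magnitudes 2.4247, 7.3299, 12.4216, 17.9091, 24.2973
```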
The approximate cumulative distribution function of the EF of the finished structure, based on error in each of the parameters, is depicted in Fig. 7, using the UT with the sigma points and weights given above. In addition, the mean, standard deviation, skewness, and kurtosis of \(f\left( {\varvec{x}}_{\mathrm{fin}} \right) \) are calculated in each case from the statistics collected with the 10-point UT method. The results are listed in Table 4.
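The Table 4 statistics are weighted moments of the sigma-point evaluations; a minimal sketch (kurtosis here is the non-excess definition, and the sample values are illustrative, not the paper's data):

```python
import numpy as np

def weighted_stats(f_vals, weights):
    """Mean, standard deviation, skewness, and (non-excess) kurtosis
    of UT-weighted objective-function values."""
    f = np.asarray(f_vals, float)
    w = np.asarray(weights, float)
    mean = float(np.sum(w * f))
    var = float(np.sum(w * (f - mean) ** 2))
    std = var ** 0.5
    skew = float(np.sum(w * (f - mean) ** 3)) / std ** 3
    kurt = float(np.sum(w * (f - mean) ** 4)) / var ** 2
    return mean, std, skew, kurt

# symmetric toy sample: zero skewness by construction
mean, std, skew, kurt = weighted_stats([1.0, 2.0, 3.0], [0.25, 0.5, 0.25])
```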
Fig. 7

Cumulative distribution function of the enhancement factor of the finished geometry based on the Gaussian error with 5 nm standard deviation in each parameter

Table 4

Mean, standard deviation, skewness, and kurtosis of the enhancement factor for Gaussian error with 5 nm standard deviation in each variable, estimated through UT with 10 sigma points

| Variable parameter | Mean | Std. deviation | Skewness | Kurtosis |
| \(h_{\mathrm{Si}} \) |  |  |  |  |
| \(h_{\mathrm{Ag}} \) |  |  |  |  |
| \(w_{\mathrm{Ag}} /2\) |  |  |  |  |
| \(\varLambda _{\mathrm{Ag}} \) |  |  |  |  |

The following remarks can be made. First, the MC simulations indicate that in 90 % of the cases the EF is above 1.27, and in only about 1 % of simulations is it less than 1.15, indicating a relatively robust and large enhancement factor. Based on the one-dimensional UT results (Table 4), the enhancement factor of the finished geometry has the smallest variance when the error occurs in \(h_{\mathrm{Ag}}\), i.e., the height of the square grating; for the other parameters, the variances are very close to each other, as verified by the empirical cdf of the EF in Fig. 7. This implies that the EF is less sensitive to variations in \(h_{\mathrm{Ag}}\) than to variations in the other parameters. The lower sensitivity of absorption enhancement to variations in the nanochip height can be explained by examining the shapes of the absorption spectra when the different parameters are perturbed, as displayed in Figs. 8, 9, 10, and 11. These figures illustrate the absorption spectra for the optimal geometry and for the simulated finished geometries obtained by adding the designated set of errors (1-D sigma points) to the entries of the optimal geometry. Notice that, unlike the other three cases, with error in \(h_{\mathrm{Ag}}\) the absorption does not decrease significantly at long wavelengths, and the peak absorption does not change much. Instead, the wavelength of peak absorption shifts: it is blue-shifted (shifted to the left) with decreasing silver height and red-shifted (shifted to the right) with increasing silver height. This is explained by a shift in the surface plasmon resonance wavelength of the metallic nanogratings when the texture sizing changes. The negative skewness of the EF simply reflects the fact that \({\varvec{x}}_{\mathrm{opt}}\) is an optimally designed geometry; the fabrication error is therefore likely to reduce the enhancement factor.
In most of the simulated cases, the resulting EF is less than the optimal value of 1.52. In only one case, with geometry \({\varvec{x}}_{\mathrm{fin}} =\left[ {78\, 64.42\, 65\, 184}\right] \,\hbox {nm}\), is the enhancement factor 1.54, slightly larger than the optimized value of 1.52. However, when this geometry is rounded to \({\varvec{x}}=\left[ {78,\, 64,\, 65,\, 184} \right] \,\hbox {nm}\), the same EF of 1.52 is obtained. The reason for this discrepancy is that the original optimization was carried out with 1 nm precision; when error is added, the finished geometry does not necessarily obey this limitation and can have a finer, uncontrolled resolution, and can therefore (although very unlikely) yield a slightly higher EF.
Fig. 8

Absorptivity spectra for the optimal designed geometry (solid red curve) and the finished geometries with 10-point UT errors in first dimension \((h_{\mathrm{Si}} )\)
Fig. 9

Absorptivity spectra for the optimal designed geometry (solid red curve) and the finished geometries with 10-point UT errors in second dimension \((h_{\mathrm{Ag}} )\)
Fig. 10

Absorptivity spectra for the optimal designed geometry (solid red curve) and the finished geometries with 10-point UT errors in third dimension \((w_{\mathrm{Ag}} /2)\)
Fig. 11

Absorptivity spectra for the optimal designed geometry (solid red curve) and the finished geometries with 10-point UT errors in fourth dimension \((\varLambda _{\mathrm{Ag}} )\)

7.3 Nonuniform Sampling Results

We first discuss the time scales of the spectral FDTD simulations. As mentioned before, the FDTD simulations required for computing the absorption power of a given geometry have varying convergence rates depending on the wavelength of the incident light. Figure 12 shows the empirical average time required for computing the absorption power of a random geometry \({\varvec{x}}\) selected uniformly in the interval \([{\varvec{x}}^{\left( \mathrm{L} \right) },{\varvec{x}}^{\left( \mathrm{U} \right) }]\). The data were collected for 20 random geometries, and the depicted time values refer to the computation of both the nanograting-textured and bare a-Si chips, in the 3D square case. The simulations were run on a dual-core AMD Opteron\(^\mathrm{{TM}}\) 6170 processor with 32 GB RAM, operating under Windows 7, using the Lumerical FDTD Solutions software.
Fig. 12

Average time scale required for computing the absorptivity of a random valid geometry as a function of wavelength

The plotted curve shows the timing difference between the convergence of simulations at shorter wavelengths and at longer wavelengths, especially those close to 800 nm. The enhancement factor computation based on nonuniform sampling can therefore be significantly more time efficient, provided that the irradiance-absorption product is small at the longer wavelengths, where FDTD simulations are slow. An example irradiance-absorption profile for a valid geometry \({\varvec{x}}=\left[ {164, 63, 65.5, 197} \right] \,\hbox {nm}\) is displayed in Fig. 13. Observe that the profile intensity is negligible at wavelengths close to 800 nm and at wavelengths less than 400 nm. Together with the timing curve of Fig. 12, this suggests that the nonuniform sampling approach for EF computation should be faster and potentially more accurate.
Fig. 13

An example irradiance-absorption profile for \(x=\left[ {164,\,63,\,65.5,\,197} \right] \,\hbox {nm}\). The profile is normalized such that the peak value is 1

Based on a MC simulation with 89 random geometry vectors selected uniformly in \([{\varvec{x}}^{\left( \mathrm{L} \right) },{\varvec{x}}^{\left( \mathrm{U} \right) }]\), we statistically compare the running time and performance of the two approaches. In all simulations, both the uniform and nonuniform sampling algorithms were performed with \(N=21\) wavelength samples in the range \(\left[ {300, 900} \right] \,\hbox {nm}\); each simulation involves calculating the EF of a random geometry \({\varvec{x}}\) using both the uniform and the nonuniform wavelength samples. In 88 of the 89 simulations, the nonuniform sampling approximation of the EF converged faster than the uniform sampling approximation, on average by a factor of 1.3; in only one simulation did the two methods have almost the same running time. Note that the nonuniform-based estimate of the EF differs slightly from (but is not necessarily less credible than) the uniform estimate: on average, the two estimates deviate by only 5 %. A scatter plot of the running times of all simulations is presented in Fig. 14. Every point in this figure corresponds to one FDTD simulation using the two methods, where the \(x\) and \(y\) coordinates represent the running times of the nonuniform and uniform sampling methods, respectively. The least-squares line that best fits the data has a slope of 1.3, indicating the average superiority of the nonuniform method.
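The reported slope is a least-squares fit of a line through the origin to the (nonuniform, uniform) timing pairs. With synthetic timing data (ours, purely for illustration of the estimator), the fit is one line:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic running times (minutes); uniform method ~1.3x slower on average
t_nonuniform = rng.uniform(8.0, 14.0, 89)
t_uniform = 1.3 * t_nonuniform + rng.normal(0.0, 0.3, 89)

# least-squares slope of the through-origin line t_uniform = a * t_nonuniform
slope = float(np.sum(t_nonuniform * t_uniform) / np.sum(t_nonuniform ** 2))
```

For these synthetic data the estimator recovers a slope close to the 1.3 used to generate them.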
Fig. 14

Scatter plot of the running times of the uniform and nonuniform FDTD methods for 89 random geometries

Finally, we have also run a SA optimization completely based on nonuniform sampling FDTD trials. The evolution of the enhancement factor for this simulation is given in Fig. 15. The optimal point achieved by this simulation is \({\varvec{x}}_{\mathrm{opt}} =[92,\, 54,\, 66,\, 224]\,\hbox {nm}\) and the highest EF is 1.39.
Fig. 15

Evolution of the obtained enhancement factor per iteration for the SA simulation based on the nonuniform sampling method

8 Conclusion

We studied the problem of inverse optimization in thin-film solar cells for the optimal design of surface nanopatterns. We invoked mathematical tools of constrained optimization to formulate an optimization program that solves for the dimensions of a 3D periodic surface pattern maximizing the solar absorption enhancement factor. Using a constrained SA optimization followed by the localized QN method, we obtained an enhancement factor of 1.52 in the absorption of solar power when silver nanopatterns are used. Furthermore, we proposed an adaptive sampling scheme that expedites the running time of spectral FDTD simulations. The proposed method is useful beyond this particular optimization problem and potentially extends to many other instances of electromagnetic profile optimization.

The suggested design obtained by the inverse optimization program is relatively resilient to fabrication error; in the presence of a Gaussian error in each geometry dimension with a 5 nm standard deviation, we demonstrated by using MC and unscented transform simulations that with 90 % confidence, the final enhancement factor is above 1.3.

Future work includes rigorous analysis of the convergence time of the proposed adaptive algorithm, as well as incorporation of more realistic fabrication error models. The inverse optimization paradigm introduced in this paper is a powerful tool that can be applied to other nanostructures including higher efficiency tandem thin-film cells, and can accommodate other physical constraints such as carrier recombination. These shall be the subjects of future research.





Acknowledgments The authors appreciate support for this work from the US National Science Foundation under Grant CBET-1032415 and would also like to thank Dr. Alex Heltzel for helpful discussions.

Copyright information

© Springer Science+Business Media New York 2013