Measurement Techniques, Volume 55, Issue 8, pp 867–875

Size measurements in microelectronic technology

Author

A. V. Nikitin, Institute of Arts and Information Technologies, Nanometrology Branch

DOI: 10.1007/s11018-012-0052-6

Cite this article as:
Nikitin, A.V. Meas Tech (2012) 55: 867. doi:10.1007/s11018-012-0052-6

The status of and problems with size measurements in the technologies for fabrication of microelectronic devices with dimensions of hundreds or tens of nanometers are examined under industrial production conditions. It is argued that relative measurements are inadequate and it is necessary to proceed to measurements on an absolute size scale that will ensure absolute accuracy of the results, as well as their reproducibility, which are especially important for the development of nanometer-sized devices.

Keywords

algorithms, nonlinearity, calibration of magnification, reproducibility and accuracy of results

The Place of the Metrology of Linear Dimensions in Modern Integrated Circuit Technology. It is generally accepted that lithographic processes are the key operations in the manufacture of modern microelectronic devices. Many years of observations indicate that about 35% of the overall cost of producing integrated circuits goes into photolithographic operations. Monitoring procedures for verifying the quality of lithography and rejecting wafers and modules that do not meet technical specifications are considered to be a vital part of these operations. Up to 5% of the cost of the lithographic processes goes into these monitoring operations, i.e., 1.5–1.7% of the combined expenditures for microelectronics manufacturing. After completion of the lithographic procedures, monitoring and measurement operations are conducted in order to reject devices during testing for defects in the lithographic pattern, matching of process layers, and the sizes of the completed components of the circuits. Thus, size measurements and monitoring represent about 0.3–0.5% of the total manufacturing cost. What is this part of the costs in absolute terms?

The annual turnover in worldwide manufacturing of integrated circuits is as high as 100 billion dollars [1], so, by cautious estimates, about 250–300 million dollars is spent annually in the developed countries on size-measurement operations. At large producers with massive outputs (e.g., TSMC, Taiwan), tens or even hundreds of specialized electron microscopes, each worth about 50 million dollars, operate around the clock and are used exclusively for size measurements.

Of course, the appropriateness of such large expenditures on the metrological accompaniment to the manufacture of integrated circuits depends on the reliability of the measurement results and of the decisions, based on them, to scrap devices or regard them as suitable.

Features of Size Measurements in the Nanometer Range. The main feature is the extreme smallness of the measured sizes and the resulting need to use microscopes (mostly scanning electron microscopes, SEM) as the comparison instrument. In these cases, it is not the physical object (an element of an integrated circuit) that is measured, but only its enlarged image. These are indirect measurements. They are often used in practice, but the reliability of the results can be assured only when the relationship between the directly measured property and the indirectly determined property has been reliably established. For example, the current in an electrical circuit can be measured, without breaking the circuit, by measuring the voltage drop across a calibrated resistor and then calculating the desired quantity using Ohm’s law.

In modern microelectronics technology, the measurement device is almost always an SEM. In these indirect SEM measurements, the place of Ohm’s law must be occupied by a law relating the physical object to its SEM image; unfortunately, no such law has been established. This situation is not unique to scanning electron microscopy; it applies, to the same extent, to all forms of optical, transmission electron, scanning tunneling, and atomic force microscopy. As for SEM methods, despite the apparent obviousness and intuitiveness of SEM images, they do not convey the true geometry of an object. The picture, i.e., the distribution of brightness on a screen (the image), corresponds (with some strong reservations) to the distribution of the local coefficient of secondary electron emission over the surface of the object. Of course, this coefficient depends on the composition and geometry of the object being scanned and, to some extent, does reflect its geometrical and physical properties, but the image is governed by far more complicated (and still not well understood) laws than Ohm’s law. This is the fundamental obstacle to the development of metrological support for SEM measurements in the nanometer range.

Problems with Measurements of Small Sizes in Micro- and Nanoelectronics. The dynamics of the development of microelectronics is such that the industry is now developing ultra-large-scale integrated circuits (ULSIC) with design sizes of 32 nm or smaller [2]. The accuracy with which these devices are measured during manufacture (interoperational measurements) must be extremely high, and the permissible error is estimated to be 0.30–0.15 nm (3σ, where σ is the mean square deviation) [2]. Note that the problem is not the feasibility of measuring such small objects, but attaining the required accuracy. To address it, let us analyze the sources of error in the existing measurement techniques. The customary process for measuring by SEM methods is illustrated in Fig. 1. It is assumed that the size L of any object can be calculated as the product of two numbers: (a) the size Lp of this object in pixels, found in accordance with the left branch of Fig. 1, and (b) the distance between pixels PL (pixel length), expressed in absolute units and calculated in accordance with the right branch of Fig. 1; i.e.,
$$ L={L_{\mathrm{p}}}PL. $$
(1)
Fig. 1

Illustrating the measurement of critical dimensions.

Thus, it can be seen that there are two fundamental sources of error in the measurement result. First, there is the process itself of determining the size (in pixels) from an SEM image; here the error is mainly related to imprecise localization of the edges of the object being measured (see the left branch of Fig. 1). Second, there is the inaccuracy of the operation of calibrating the magnification of the SEM measurement system, i.e., the error in the pixel scale PL (right branch of Fig. 1). These errors, and with them the overall error of the measurements, ultimately limit the accuracy of the results and depend to a great extent on the measurement conditions.
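Because the two contributions enter Eq. (1) as independent multiplicative factors, their relative errors combine, to first order, in quadrature. The short sketch below (Python) illustrates this error budget; the specific percentages are illustrative assumptions, not values from the text.

```python
import math

def combined_relative_error(rel_err_pixels, rel_err_scale):
    """Relative error of L = Lp * PL, assuming the two factors are
    independent, so their relative errors add in quadrature."""
    return math.sqrt(rel_err_pixels**2 + rel_err_scale**2)

# Illustrative numbers (not from the paper): a 65 nm line measured with
# 0.5% edge-localization error and 0.3% pixel-scale (calibration) error.
L_nominal = 65.0  # nm
rel_total = combined_relative_error(0.005, 0.003)
print(f"relative error: {rel_total:.2%}")                # ~0.58%
print(f"absolute error: {rel_total * L_nominal:.2f} nm")
```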

Thus, in row-by-row measurements, just the error owing to noise in the video signal can reach 2 nm (3σ). The signal-to-noise ratio rarely exceeds 5 in practice. Only by statistical averaging of the individual results is it possible to reduce the error of the average to acceptable levels. The sources of error and their relationships are illustrated in Fig. 2.
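As a rough illustration of why statistical averaging is indispensable, the following sketch assumes that the per-row error quoted above (2 nm at 3σ) is random and uncorrelated from row to row, so that the error of the mean falls as 1/√N. The independence assumption and the row counts are mine, chosen to match the 400-row averaging used in the comparative tests below.

```python
import math

def averaged_3sigma(per_row_3sigma_nm, n_rows):
    """3-sigma error of the mean of n_rows independent row-by-row results."""
    return per_row_3sigma_nm / math.sqrt(n_rows)

per_row = 2.0  # nm, 3-sigma noise-induced error of a single-row measurement
for n in (1, 16, 100, 400):
    print(f"{n:4d} rows averaged -> {averaged_3sigma(per_row, n):.2f} nm (3 sigma)")
# Averaging 400 rows brings the noise contribution down to ~0.1 nm,
# i.e., into the range demanded by 32-65 nm design rules.
```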
Fig. 2

Error balance of the measurements.

Sources of Error. The Use of Simplifying Idealizations or Assumptions

Errors in magnification calibration. In the overall error budget, it is reasonable to ascribe at least a third of the tolerance for sizes to the role of magnification calibration operations. In particular, in order to ensure the required accuracy for measurements of the elements of ULSIC designed to a size of 65 nm (±0.3 nm or 0.7% by the 3σ criterion [2]), it is necessary to reduce the error in the magnification of the SEM to a level of ±(0.2–0.3)% (3σ). The currently attainable calibration error is a few percent for this range of sizes. This error, alone, can “eat up” the entire reserve of permissible [2] measurement error.

Traditional statement of the magnification calibration problem. Usually this kind of problem involves determining the magnification M with the greatest possible accuracy. In some models of measurement SEM the calibrated quantity is the size G of the field of view of the microscope. Using the magnification M or the size G of the field of view as the calibrated quantity cannot be regarded as successful, since it tacitly assumes that equal segments of the image correspond to equal segments of the object. This assumption is false in principle, since in any real microscope the scanning systems are always imperfect (in particular, nonlinear) to some extent. The notion of linear SEM scans, like the conclusions regarding the magnification M or field of view G derived from it, is a simplifying idealization that is not justified in reality. Nonlinearity of SEM scans shows up as inconstancy of the microscope magnification, which varies over the field of view along the direction of scanning. It becomes necessary to introduce the concept of a local magnification and a corresponding local pixel length: the constant PL is replaced by a function PL(i), where i is the number of a pixel in a row. Thus, any distance measured from an SEM image, e.g., between pixels i1 and i2, i.e., Li2–i1, can no longer be calculated using Eq. (1), which now takes the form
$$ {L_{i2-i1 }}=\sum\limits_{i1}^{i2 } {PL(i)} . $$

The author does not know of any specific models or examples of measurement SEMs which could provide fully linear scanning with different amplitudes (magnifications) and sweep rates. All of the measurement microscopes in use are, to some extent, nonlinear, but are not even attested in terms of this parameter. Some of them are characterized by a nonlinearity of several percent. Experiments have shown that the error in the magnification calibration can be as high as 0.6–0.8% for nonlinearity at a 5% level.
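A minimal sketch of how residual scan nonlinearity distorts a measurement is given below. The quadratic form and 5% amplitude of the nonlinearity are assumed purely for illustration (the text only states that nonlinearities of several percent are encountered); the point is the comparison between the constant-PL formula (1) and the sum of local pixel lengths PL(i).

```python
import numpy as np

N = 1000          # pixels per row
PL_nominal = 1.0  # nm per pixel, as assigned by a naive calibration

# Assumed (illustrative) nonlinearity: the local pixel length deviates
# quadratically by up to 5% across the row; the constant offset keeps the
# mean pixel length equal to PL_nominal, mimicking a calibration that is
# correct only "on average" over the field of view.
i = np.arange(N)
PL_local = PL_nominal * (1.0 + 0.05 * ((i - N / 2) / (N / 2)) ** 2 - 0.05 / 3)

def distance_constant(i1, i2):
    """Eq. (1): a single pixel length is assumed for the whole row."""
    return (i2 - i1) * PL_nominal

def distance_local(i1, i2):
    """Sum of the local pixel lengths PL(i) between the two edges."""
    return PL_local[i1:i2].sum()

i1, i2 = 50, 150                  # a 100-pixel feature near the edge of the frame
print(distance_constant(i1, i2))  # 100.0 nm
print(distance_local(i1, i2))     # ~101.6 nm: >1% error from nonlinearity alone
```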

A standard for calibration of magnification. Another source of error in the calibration of magnification is associated with the properties of the standard used for this purpose. The choice of the type of standard is by no means a matter of indifference. There are at least two types of length standards, based on “line width” and on period. It is generally recognized that the latter type is preferable for calibrations, because then the calibrated size is the period of a grid, which can be determined in various ways without having to solve the major problem of localizing the edge of an object on its enlarged and, inevitably, distorted SEM image. In addition, a standard of this type (a diffraction grating) can be attested by an independent optical-goniometric method with high accuracy and on an absolute size scale. If all the grooves of a grating were identical and the distance between them strictly constant, then there would be no obstacles to using standards of this type. Unfortunately, none of the manufactured devices of this type meet these specifications. Detailed measurements of different diffraction grating standards kept at NIST (USA), Hitachi (Japan), etc., have shown that the periods usually have a nonuniformity of 1.0–1.5% (3σ). Thus, the widespread notion of the uniformity of the periods of diffraction gratings is yet another simplifying idealization that does not accord with reality. As a result, the errors arising from the residual nonlinearity of microscope scans combine with the errors owing to the nonuniformity of the period of diffraction gratings and other factors (e.g., unavoidable noise in video signals) to prevent precision calibration of SEM magnification.

Errors Owing to Imperfections in Measurement Procedures (left branch of Fig. 1)

Distinctive features of measurements in microelectronics technologies. They are distinctive because, for a number of important reasons, it is better to measure the sizes of circuit elements during the stage of shaping the photoresist mask. Thus, the typical and most important objects of monitoring turn out to be the relief details of the mask: the lines and contact windows formed in the photoresist layer by lithography, which have a trapezoidal or (ideally) rectangular cross section. Figure 3 shows a typical relief line with a trapezoidal cross section and the corresponding SEM video signal S(x) (one row). The dimensions measured at the bottom (the lower cut) of the trapezoidal cross section are the most significant. It is important to emphasize that the lower edges of the trapezoid correspond to the points of greatest slope on the peaks of the video signal, which creates difficulties in locating the edge during measurements. Since the size of any object is, by definition, the distance between its edges, localizing the edges of an object in an image is the central problem in measurements with any kind of microscope.
Fig. 3

An object of measurement and its SEM image: a) transverse cross section of the photoresist line; b) typical shape of the video signal (single row of a frame); 1) photoresist layer; 2) substrate.

Automatic algorithms are used in modern SEM measurement systems (Hitachi ser. 7000, 8000, 9000, JEOL, KLA, AMAT, LEO, etc.) to eliminate operator influence on the measurement results, which is the major source of irreproducibility. However, of the known and widely used algorithms (Threshold (T), Linear Approximation (LA), Curvature (CU), Derivative (DE), and Fermi–Dirac (FD)), none is based on physically justified concepts of the mechanisms by which video signals are formed in an SEM. Thus, there are no methods for reliably identifying the points in a video signal that correspond to the edges of the objects being measured. In addition, program implementations of the known algorithms contain free parameters (degrees of freedom) set by the user (operator, field engineer, or administrator) at his own discretion, based only on experience and intuition. We emphasize that no rules have been established for choosing the values of these free parameters during measurements. Thus, the notorious human factor again enters into the procedure for size measurements.

In the following we show the results of an analysis of the capabilities of the widely used algorithms T and LA, the program implementations of which contain several free parameters. All the conclusions of this analysis, except the numerical estimates, can be extended to the other algorithms mentioned above, which also contain free parameters.

Comparative analysis of measurement algorithms. In all cases, the object of measurement was the same initial SEM image (frame) obtained with a Hitachi 4700 microscope with an accelerating voltage of 1.0 kV, a magnification of 100,000, a signal-to-noise ratio of 4, averaging of 400 rows of the image into a single row (see Fig. 3), and a nominal size of the measured line of 84.3 nm based on the lower cut of its cross section.

The Threshold algorithm. This algorithm has two free parameters, the smoothing S and the threshold (cutoff level) T for the video signal. The results of measuring L from the same video signal (see Fig. 3) are shown in Fig. 4 as a plot of the surface L = f(S, T) representing the measured size L as a function of the free parameters S and T. Figure 4 implies that the measurement results for a given object actually do vary over a wide range, depending on the values of these parameters. The corridor of possible results for the T algorithm can have a width of 50 nm or more.
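To make the role of the free parameters concrete, here is a minimal sketch of a threshold-type width measurement on a synthetic single-row signal (Python). It is not the vendor implementation; the signal shape, the moving-average smoothing, and the parameter values are assumptions made for illustration. The smoothing S is a window in pixels and the threshold T is a fraction of the signal range; changing either changes the reported width.

```python
import numpy as np

def synthetic_video_signal(n=600, noise=0.05, seed=0):
    """Crude stand-in for one row of the SEM signal of a trapezoidal line:
    two bright peaks (the side walls) over a flat background."""
    rng = np.random.default_rng(seed)
    x = np.arange(n)
    peaks = np.exp(-((x - 200) / 15) ** 2) + np.exp(-((x - 400) / 15) ** 2)
    return 0.2 + peaks + noise * rng.standard_normal(n)

def threshold_width(signal, S, T):
    """Sketch of a Threshold ('T') type measurement.
    S -- smoothing window, pixels (free parameter 1);
    T -- threshold as a fraction of the signal range (free parameter 2)."""
    smoothed = np.convolve(signal, np.ones(S) / S, mode="same")
    level = smoothed.min() + T * (smoothed.max() - smoothed.min())
    above = np.nonzero(smoothed >= level)[0]
    return above[-1] - above[0]   # width in pixels between threshold crossings

sig = synthetic_video_signal()
for S, T in [(3, 0.3), (3, 0.7), (15, 0.3), (15, 0.7)]:
    print(f"S = {S:2d}, T = {T}: width = {threshold_width(sig, S, T)} px")
# The same frame yields noticeably different widths for different (S, T) --
# the "corridor of possible results" discussed in the text.
```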
Fig. 4

The measured size obtained with the Threshold algorithm as a function of the free parameters S and T.

The Linear Approximation algorithm. The sizes were measured using the same original image. The distinctive feature of LA is that it contains four free parameters. This means that the measurement results cannot be presented in such an intuitive way as in Fig. 4. An analysis of the set of numerical data obtained in the course of the measurements indicates that, in this case as well, the corridor of possible results extends over many tens of nanometers (about 80 nm for moderate variations in the free parameters).
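For comparison, below is a sketch of one common variant of a linear-approximation edge criterion: a straight line is fitted to the steep part of a flank, another to the nearby baseline, and the edge is taken at their intersection. This is only a plausible reconstruction for illustration, not the algorithm of any particular instrument; the two fit windows and the two level fractions play the role of the four free parameters.

```python
import numpy as np

def la_edge(signal, flank, base, low_frac, high_frac):
    """Edge position from one flank: intersection of a line fitted to the
    steep part of the flank (between low_frac and high_frac of its height,
    free parameters 3 and 4) with a line fitted to the local baseline.
    flank and base are index ranges (free parameters 1 and 2)."""
    x = np.arange(len(signal))
    seg = signal[flank]
    lo = seg.min() + low_frac * (seg.max() - seg.min())
    hi = seg.min() + high_frac * (seg.max() - seg.min())
    sel = (seg >= lo) & (seg <= hi)
    k1, b1 = np.polyfit(x[flank][sel], seg[sel], 1)  # flank line
    k2, b2 = np.polyfit(x[base], signal[base], 1)    # baseline line
    return (b2 - b1) / (k1 - k2)                     # intersection abscissa

# A noiseless stand-in for one row of the video signal in Fig. 3:
x = np.arange(600)
sig = 0.2 + np.exp(-((x - 200) / 15) ** 2) + np.exp(-((x - 400) / 15) ** 2)
left = la_edge(sig, slice(160, 200), slice(100, 150), 0.2, 0.8)
right = la_edge(sig, slice(400, 440), slice(450, 500), 0.2, 0.8)
print(f"width = {right - left:.1f} px")
# Varying the windows and the level fractions shifts each edge by pixels to
# tens of pixels, which at ~1 nm/pixel reproduces a corridor of tens of nm.
```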

Discussion of Results. There are several reasons for the low reliability of the measured results. The problem of free parameters was outlined above. It is possible to list another five or six factors that lead to such unreliable results: noise in the video signal; residual nonlinearity in the sweep, which shows up during the measurements themselves and not just during calibration of the magnification; and so-called hidden free parameters. Even now, solutions that would greatly reduce the influence of these sources of error can be seen.

One example of such a hidden free parameter is the nominal magnification of the microscope. In fact, the parameter S of the T algorithm and some parameters of the LA algorithm are specified by the operator as a number of pixels, and the pixel size depends directly on the magnification. Thus, a smoothing S specified as a fixed number of pixels acts differently on the video signal whenever the magnification is different.
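The effect is easy to quantify. In the sketch below, the reference geometry (a 100-mm display width mapped onto a 1000-pixel row) is an assumption chosen only to give plausible numbers; the point is that the same S-pixel window corresponds to different physical lengths at different magnifications.

```python
# Physical width of a fixed S-pixel smoothing window at different magnifications.
S = 5                    # smoothing window, pixels (operator-chosen constant)
frame_pixels = 1000      # assumed pixels per row
display_width_nm = 1e8   # assumed 100 mm reference display width, in nm

for magnification in (50_000, 100_000, 200_000):
    field_of_view_nm = display_width_nm / magnification
    pixel_nm = field_of_view_nm / frame_pixels
    print(f"M = {magnification:>7}: {S} px = {S * pixel_nm:.1f} nm of the object")
# The same "S = 5" smooths away ~10 nm of the signal at 50,000x but only
# ~2.5 nm at 200,000x, so it acts differently on the same physical edge.
```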

There are, however, other, more profound reasons for the low reliability of the measurements, such as an inadequately developed theory of the formation of the video signal in SEMs. In particular, no solution has been found for the problem of localizing the edges of an object in its SEM image. But the practical needs of industry cannot wait; industry requires solutions today, even if they are not entirely rigorous. The result of this forced compromise has been the appearance on the market of a number of models of measurement microscopes without metrologically valid measurement algorithms and programs. In fact, with the standard algorithms, it is not the size of a physical object that is measured, but only the distance between special points on the video signal defined by formal mathematical criteria that have no established relation to the actual size of the object. This is the main conclusion regarding the current state of metrological support for microelectronics measurements. Is it surprising, then, that comparative measurements of a single object on different microscopes by different operators have revealed unacceptably large discrepancies of as much as several tens of nanometers?

The Problem of Comparative Measurements (matching procedures). The discrepancies among the results obtained with different microscopes are customarily compensated by a system of corrections calculated during comparative measurements (matching procedures). In practice, these procedures involve choosing one of the available microscopes and declaring it the “gold” standard. A special service then arranges comparative measurements of some “gold” object on the standard and on the other microscopes. Data from these comparative measurements are used to create tables of transfer coefficients from each microscope to the results of the “gold” microscope. Correspondence graphs (dependences of the transfer coefficients on nominal size) are plotted and analyzed, the intersection points of these graphs with the Y axis are found (which determines the so-called offset), the deviations of the calibration curves from linear dependences are determined, and a nonlinearity coefficient is introduced for each microscope. All the correspondence tables and graphs have to be revised periodically because of ageing of the instruments and drift in their characteristics. Special services are set up at the plants to perform this work, each independently of the others. Thus, every large microelectronics company forms its own system of units which has, so to say, no place for the meter in Paris.

Many specialists understand that if the actual dimensions of the physical objects were obtained in the course of the measurements, then the results would coincide automatically, regardless of the model of the microscope, its characteristics, and the preferences of the operator, and the matching procedure would not be necessary at all. Thus, a recognition of the need for this procedure in fact always signifies a recognition of the untenability of the measurement algorithms that are in use. In the form practiced today, the matching procedure represents an activity in which one false result in a complex system of corrections leads to another that is just as erroneous. The cornerstone of metrology, the unity of measurements, is not supported by this procedure; it is replaced by another principle, the unity of measurement errors, even if only within the confines of a single factory or production line.

Reproducibility of Measurement Results and Their Accuracy. Until recently, the concept of absolute accuracy was extremely rarely encountered in the practice of metrological support for size measurements in microelectronics. This is in line with the generally accepted, established “methodology” of the measurements, according to which ensuring the reproducibility of measurement results is sufficient to satisfy the practical needs of microelectronics technology. Here it is assumed that any systematic errors intrinsic to individual measurement SEMs or algorithms are entirely compensated by matching procedures. The adherents of this sort of “methodology” usually employ the following arguments.

What is so bad if, for example, in measuring object A, with a size of 50 nm, we constantly obtain another value, say, 53 nm? The important thing is that this value, even if erroneous, is reproduced by any repeat measurement, while the spread in the measurement results is minimal. If these conditions are fulfilled, then, if we obtain a result of, say, 55 nm when measuring an unknown object B, we can be certain that the size of object B is 2 nm larger than that of A. Thus, systematic measurement errors (insufficient absolute accuracy) do not keep us from making reliable comparative measurements.

Is this kind of “methodology” indisputable? Recall that the main goal of size measurements at intermediate operations in microelectronics technology is to obtain information on whether a monitored wafer is fit for further processing or should be scrapped. Each measurement on a wafer shifts the balance to one side or the other, depending on whether the result lies within or outside some previously established limits. As a rule, the half-width of the corridor of acceptable sizes (the tolerance) is taken to be a fraction (10%) of the nominal size of the NODE design standard.

Thus, in accordance with [2], for microcircuits with NODE 65 nm the size tolerance is taken to be B = 6.6 nm. This means that, if the measured size lies between the two limiting values 58.4 and 71.6 nm, then this serves as an argument in favor of regarding the piece as suitable. For brevity, in the following we refer to such a result as positive. If, on the other hand, the measurement result lies outside this range, then it is an argument in favor of scrapping the wafer (a negative result). The balance between positive and negative measurement results ultimately determines the further fate of the monitored wafer: will it continue on the production path or will it be thrown into the wastebasket for scrap. It is perfectly obvious that the validity of decisions to scrap depends directly on the reliability of the measurement results. An unjustified decision is fraught with substantial financial losses because of the drop in the yield of suitable output.

An analysis [3] has shown that in an idealized case with no systematic measurement error (Δ = 0), a nominal size L0 = 65 nm, a size tolerance B = 6.6 nm, and a mean square deviation of the results σ = 1.7 nm, the fraction of positive results is 99.99% and of negative results, 0.01%, which is entirely acceptable. For the same values of L0, B, and σ, but with a systematic error Δ = 3 nm, the fraction of positives falls to 96.6%, while the fraction of negatives rises to 3.4%, or by a factor of 340! This amount of increase certainly affects the yield of suitable output. And this is the cost to modern industry of neglecting absolute measurements.

Calculations were also done for other design standards, 45 and 32 nm; they show that, as the nominal dimensions and, therefore, the tolerances B are reduced, the effect of the error Δ increases sharply (see Table 1).
Table 1

Effect of Systematic Error Δ on the Fraction of Negative Results

Design standard, nm    Fraction (%) of negative results
                       Δ = 0 nm        Δ = 3 nm
65                     0.01            1.71
45                     0.01            7.31
32                     0.01            36.2
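The kind of calculation behind Table 1 can be sketched as follows. The sketch assumes that the measurement results are normally distributed around the true size plus the systematic error Δ, with standard deviation σ; with the values quoted in the text for the 65 nm case (B = 6.6 nm, σ = 1.7 nm) it reproduces the 0.01% and 1.71% entries. The σ values used in [3] for the 45 and 32 nm rows are not stated in the text, so those rows are not reproduced here.

```python
from math import erf, sqrt

def negative_fraction(L0, B, delta, sigma):
    """Fraction of results falling outside the acceptance corridor
    [L0 - B, L0 + B] when the results are normally distributed about
    L0 + delta with standard deviation sigma (normality assumed here
    for illustration; the paper cites [3] for its own calculation)."""
    phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF
    lower, upper = L0 - B, L0 + B                     # acceptance corridor
    mean = L0 + delta                                 # results biased by Delta
    inside = phi((upper - mean) / sigma) - phi((lower - mean) / sigma)
    return 1.0 - inside

# 65 nm design standard, B = 6.6 nm, sigma = 1.7 nm (values from the text):
print(f"{negative_fraction(65.0, 6.6, 0.0, 1.7):.2%}")  # ~0.01%
print(f"{negative_fraction(65.0, 6.6, 3.0, 1.7):.2%}")  # ~1.71%, the Table 1 entry
```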

If we assume that, with the passage of time, the design standards will decrease so much that the corresponding tolerances B become comparable to the systematic error Δ, then roughly half of all measurement results will be negative; in other words, the fraction of results favoring the scrapping of production reaches 50%. Of course, under these conditions the grading of the product into “suitable” and “scrap” loses all meaning. Thus, conclusions as to whether a piece should be treated as suitable or scrap that are based on relative measurements will always lead to an unjustified reduction in the yield of suitable production. This becomes the decisive factor determining the losses through rejection, especially as design sizes decrease to 45 nm, 32 nm, and below.

Conclusion. Analysis of the current metrological problems in micro- and nanotechnologies has made it possible to develop a series of technical solutions that will enhance the accuracy and reliability of measurements. These solutions concern methods for attestation of microscopes with respect to residual scanning nonlinearity [4] and of calibration grating standards with respect to the uniformity of their period [5]. Methods for precision calibration of magnification have been proposed on the basis of this analysis. In particular, it has been possible to reduce the effect of noise in the video signal during calibration operations by using certain integral characteristics of SEM images of diffraction grating standards. These include avoiding the use of any isolated points of an SEM image as “indicators” of the position of the lines and instead using the “centers of mass” of fragments of the lines of these gratings [6]. Likewise, there are methods of reading the period of gratings using special integral characteristics of the video signal, “self scans” [7] or modified “self scans” [8], rather than the video signal itself.
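As a generic illustration of why an integral criterion is more robust than any isolated point, the sketch below locates a grating line by its intensity-weighted centroid and compares this with the position of the single brightest pixel. It is only an illustration of the principle under my own assumptions about the signal; the specific procedures of [6]–[8] are not reproduced here.

```python
import numpy as np

def line_center_of_mass(signal, window):
    """Intensity-weighted centroid of a grating-line fragment
    (a generic 'integral' position criterion, for illustration only)."""
    x = np.arange(len(signal))[window]
    s = signal[window] - signal[window].min()  # remove the local background
    return (x * s).sum() / s.sum()

rng = np.random.default_rng(1)
x = np.arange(200)
line = np.exp(-((x - 100) / 8) ** 2)           # synthetic grating line at pixel 100

for trial in range(3):
    noisy = line + 0.05 * rng.standard_normal(200)
    peak_pixel = int(np.argmax(noisy))                  # single-point criterion
    com = line_center_of_mass(noisy, slice(80, 120))    # integral criterion
    print(f"argmax: {peak_pixel:3d}   center of mass: {com:6.2f}")
# The centroid typically varies by only a fraction of a pixel from run to run,
# while the brightest pixel jumps by whole pixels under the same noise.
```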

Considerable effort has been expended in the development of algorithms and measurement techniques that contain no free parameters (the basic source of errors). Advances in this area have been based on the widespread use of modern computer programs for simulation of the video signal in SEM. The first results were reported in [9]. Recently, a new generation of simulation models [10] has been developed that is capable of greatly speeding up the creation of new algorithms and programs that are not affected by many of the above-mentioned sources of error: idealizations, free parameters, etc. These new algorithms and measurement techniques will finally make even the now widely-used matching procedure unnecessary.

Based on the above discussion, we may conclude that simplifying idealizations, free parameters in computer programs, the avoidance of absolute measurements, and other difficulties with size measurements in microelectronics that have been examined here can be dealt with, while the associated errors can be eliminated through the combined efforts of metrologists and engineers.

Copyright information

© Springer Science+Business Media New York 2012