In this chapter we introduce the theory of finite size scaling and demonstrate how to apply it to improve our measurements of the properties of percolation clusters. Usually, we attempt to measure properties of a percolation system in the largest system we can simulate. Here, we demonstrate that if the system behaves according to simple scaling relations, it is much better to systematically vary the system size and then extrapolate to infinite system size. This approach is generally called finite size scaling, and we provide a thorough introduction to the theory and its applications to understand the scaling of the density of the spanning cluster, \(P(p,L)\), the average cluster size, \(S(p,L)\), and the percolation probability \(\varPi (p,L)\).

How can we turn a disadvantage, such as a finite system size, into an advantage? Usually, we have found a finite system size to be a hassle in simulations. We would like to find the general behavior, but we are limited by the largest finite system size we can afford to simulate. It may be tempting to put all our resources into one attempt—to make one simulation in a really large system. However, this is usually not a good strategy, because we will then know that our results are limited by the system size, but not to what degree the finite system size affects them.

Instead, we will follow a different strategy: the strategy of finite size scaling. We will systematically increase the system size, measure the quantities we are interested in, and then try to extrapolate to an infinite system size. This has several advantages: It allows us to understand and estimate the errors in our predictions, and it allows us to use simulations of smaller systems. Indeed, it turns out to be more important to simulate a range of smaller systems than to simulate only the largest system possible. However, for this to be effective, we need to have a theoretical understanding of finite size scaling [7].

The methods we develop here are powerful and can be generalized to many other experimental and computational situations. In many experiments it is also tempting to try to perform the perfect experiment by reducing noise or measurement errors. For example, we may perform an experiment where we need to make the experimental system as horizontal as possible, because deviations from a horizontal system would introduce errors. Instead of trying to make the system as horizontal as possible, we may instead systematically vary the orientation, and then extrapolate to the case when the system is perfectly horizontal. This allows us to control the uncertainty. Of course, we cannot vary all possible uncertainties in an experiment or a simulation, but this alternative mindset provides us with a new tool in our toolbox, and a new way to deal with uncertainties.

In practical situations, we will always be limited by finite system sizes. If you measure the size of earthquakes in the Earth’s crust, your results are limited by the thickness of the crust or by the extent of a homogeneous region. If you simulate a molecular system, you are definitely limited by the number of atoms you can include in your simulation. Better insight into how we can systematically vary the system size, and what we can learn from this variation, is therefore a tool of great general utility.

Here, you will learn how to systematically vary the system size L in order to find good estimates for exponents and percolation thresholds. Indeed, my hope is that you will see that finite size scaling is a powerful tool that can be used both theoretically and computationally. To introduce this tool, we need to address specific examples that can help build our intuition and shape our mindset. We will therefore start from a few examples, such as the finite size scaling for the density of the spanning cluster, \(P(p,L)\), and then apply the method to a new case, the percolation probability \(\varPi (p,L)\).

6.1 General Aspects of Finite Size Scaling

We have found that a percolation system is described by three length-scales: the size of a site, the system size L, and the correlation length \(\xi \). Finite size scaling addresses the change in behavior of a system as we change the system size L. Typically, we divide the behavior into two categories:

  • When the system size L is much smaller than the correlation length \(\xi \), \(L\ll \xi \), the system appears to be at the percolation threshold.

  • When L is much larger than \(\xi \), \(L \gg \xi \), the geometry is essentially homogeneous at lengths longer than \(\xi \).

We will then address the behavior close to \(p_c\). In the case of percolation, we usually assume that the behavior is a power-law in \(p-p_c\). For example, for the mass \(M(p,L)\) of the spanning cluster:

$$\displaystyle \begin{aligned} M(p) \propto (p - p_c)^{-x} \; , \end{aligned} $$
(6.1)

where the exponent x determines the behavior close to \(p_c\).

The general approach to finite size scaling is to make a scaling ansatz, that is, an assumption about how the system behaves, which typically consists of a scaling term and a cut-off function, as you have seen several times in this book:

$$\displaystyle \begin{aligned} M(p,L) = L^{\frac{x}{\nu}} f \left( \frac{L}{\xi} \right) \; , \end{aligned} $$
(6.2)

where \(f(u)\) is an unknown function. (Sometimes we instead make the assumption \(M(p,L) = \xi ^{x/\nu } \tilde {f} (L/\xi )\). We leave it to the reader to demonstrate that these assumptions are equivalent.)

We will then apply our insight into the particulars of the system to infer the behavior in the limits when \(\xi \gg L\), and \(\xi \ll L\) to determine the form of the scaling function \(f(u)\), and use this functional form as a tool to study the behavior of the system. We will explain this reasoning through three examples: The case of \(P(p,L)\), the case of \(S(p,L)\) and the case of \(\varPi (p,L)\).

6.2 Finite Size Scaling of \(P(p,L)\)

Measuring \(P(p,L)\) for finite L

Let us now apply this methodology to study the behavior of the density of the spanning cluster, \(P(p,L)\), for finite system sizes. First, we generate a plot of \(P(p,L)\) for various values of L using the following program:
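One way to implement such a measurement is sketched below, assuming numpy and scipy.ndimage are available (the function spanning_density and its parameters are illustrative choices, and the plotting step is omitted):

```python
import numpy as np
from scipy.ndimage import label

def spanning_density(p, L, samples=20, seed=0):
    """Estimate P(p, L), the density of the spanning cluster,
    by averaging over several random L x L site lattices."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(samples):
        z = rng.random((L, L)) < p             # occupied sites
        labels, _ = label(z)                   # 4-connected clusters
        # a cluster spans if its label appears on both left and right edge
        spanning = np.intersect1d(labels[:, 0], labels[:, -1])
        spanning = spanning[spanning > 0]
        if spanning.size:
            total += np.isin(labels, spanning).sum() / L**2
    return total / samples

# tabulate P(p, L) for a few p-values and system sizes
ps = np.linspace(0.45, 0.75, 13)
P = {L: [spanning_density(p, L) for p in ps] for L in (25, 50, 100)}
```

The tabulated values in P can then be plotted against p for each L.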

The resulting plot of \(P(p,L)\) is shown in Fig. 6.1. We see that as L increases, \(P(p,L)\) approaches the shape expected in the limit when \(L \rightarrow \infty \). We can see how it approaches this limit by finding the value of \(P(p_c,L)\) as a function of L. We expect this value to go to zero as L increases. Figure 6.1b shows how \(P(p_c,L)\) approaches zero. Let us see if we can develop a theoretical prediction for this behavior and check if our measured results confirm the prediction.

Fig. 6.1

(a) Plot of \(P(p,L)\). (b) Plot of \(P(p_c,L)\) as a function of L

Finite Size Effects in \(P(p,L)\)

We know that \(P(p) \propto (p-p_c)^{\beta }\) and \(\xi \propto |p-p_c|^{-\nu }\), so that

$$\displaystyle \begin{aligned} P(p) \propto (p-p_c)^{\beta} \propto \xi^{-\beta/\nu} \; . {} \end{aligned} $$
(6.3)

This is valid in the limit when \(L \rightarrow \infty \), that is, when \(L \gg \xi \). In the limit when \(L \ll \xi \), which eventually will occur as p approaches \(p_c\) and \(\xi \) diverges, we see from Fig. 6.1 that \(P(p_c,L)\) depends on L. In this case, we have previously found that

$$\displaystyle \begin{aligned} P(p,L) \simeq P(p_c,L) = \frac{M(p_c,L)}{L^d} \propto \frac{L^D}{L^d} \propto L^{D-d} \propto L^{-\beta/\nu} \; . {} \end{aligned} $$
(6.4)

Combined, we therefore have the behavior

$$\displaystyle \begin{aligned} P(p,L) \propto \left\{ \begin{array}{cc} \xi^{-\beta/\nu} & \text{ when } L \gg \xi \\ L^{-\beta/\nu} & \text{ when } L \ll \xi \end{array} \right. \; . \end{aligned} $$
(6.5)

Finite Size Scaling Ansatz

The fundamental idea of finite size scaling is then to assume a particular form of a function that encompasses this behavior both when \(\xi \ll L\) and \(\xi \gg L\), by rewriting the expression for \(P(p,L)\) as

$$\displaystyle \begin{aligned} P(p,L) = L^{-\beta/\nu} f( L / \xi ) \; . {} \end{aligned} $$
(6.6)

Here we have assumed that the only relevant length scales are L and \(\xi \), so that the function can only depend on the ratio between these two length scales. How must the function \(f(u)\) behave for this general form to reduce to Eqs. (6.3) and (6.4)?

First, we see that when \(\xi \gg L\) the function \(f(L/\xi )\) should be a constant, that is, \(f(u)\) is a constant when \(u \ll 1\). Second, we see that when \(\xi \ll L\), we need the function \(f(L/\xi )\) to cancel all the L-dependency in order to find the relation in Eq. (6.3):

$$\displaystyle \begin{aligned} P(p,L) = L^{-\beta/\nu} f( L / \xi ) = \xi^{-\beta/\nu} \; . \end{aligned} $$
(6.7)

This will occur if and only if \(f(u)\) is a power-law, that is, \(f(u) = u^a\). In order to cancel the L-dependency, the power-law exponent for the L-term must be zero:

$$\displaystyle \begin{aligned} \begin{array}{rcl} P(p,L) & \propto&\displaystyle L^{-\beta/\nu} (L / \xi )^a \propto L^{-\beta/\nu + a} \xi^{-a} \propto \xi^{-\beta/\nu} \end{array} \end{aligned} $$
(6.8)
$$\displaystyle \begin{aligned} \begin{array}{rcl} & \quad &\displaystyle \; \Rightarrow \; - \beta/\nu + a = 0 \; \Rightarrow \; a = \beta/\nu \; . \end{array} \end{aligned} $$
(6.9)

Indeed, we could have used this in order to find the exponent in the relation \(\xi ^{-\beta /\nu }\). It would simply have been enough to assume that \(P(p,L) \propto \xi ^x\) for some exponent x in the limit of \(\xi \ll L\).

In order to satisfy these conditions, the scaling form of \(P(p,L)\) must therefore be

$$\displaystyle \begin{aligned} P(p,L) = L^{-\beta/\nu} f(L/\xi) \; , \end{aligned} $$
(6.10)

where

$$\displaystyle \begin{aligned} f(u) = \left\{ \begin{array}{cc} \text{const.} & \text{ for } u \ll 1 \\ u^{\beta/\nu} & \text{ for } u \gg 1 \end{array} \right. \end{aligned} $$
(6.11)

Testing the Scaling Ansatz

We can now test the scaling ansatz by plotting \(P(p,L)\) according to the ansatz, following a strategy similar to what we developed for \(n(s,p)\). We rewrite the scaling function \(P(p,L) = L^{- \beta /\nu } f(L/\xi )\) in terms of \(|p-p_c|\) by inserting \(\xi = \xi _0 |p - p_c|^{-\nu }\):

$$\displaystyle \begin{aligned} \begin{array}{rcl} P(p,L) & =&\displaystyle L^{- \beta/\nu} f(L/\xi) \end{array} \end{aligned} $$
(6.12)
$$\displaystyle \begin{aligned} \begin{array}{rcl}& =&\displaystyle L^{- \beta/\nu} f(L\xi_0^{-1} |p - p_c|^{\nu} ) \end{array} \end{aligned} $$
(6.13)
$$\displaystyle \begin{aligned} \begin{array}{rcl} & =&\displaystyle L^{- \beta/\nu} f(( \xi_0^{-1/\nu} L^{1/\nu} (p - p_c) )^{\nu} ) \end{array} \end{aligned} $$
(6.14)
$$\displaystyle \begin{aligned} \begin{array}{rcl}& =&\displaystyle L^{- \beta/\nu}\tilde{f} ( L^{1/\nu} (p - p_c) ) \; . \end{array} \end{aligned} $$
(6.15)

This can again be rewritten as

$$\displaystyle \begin{aligned} L^{\beta/\nu} P(p,L) = \tilde{f} ( L^{1/\nu} (p - p_c) ) \; . \end{aligned} $$
(6.16)

Consequently, if we plot \(L^{1/\nu }(p-p_c)\) along the x-axis and \(L^{\beta /\nu }P(p,L)\) along the y-axis, we expect the data from simulations for various L-values to fall onto a common curve, the curve \(\tilde{f}(u)\). This is illustrated in Fig. 6.2, which shows that the measured data is consistent with the scaling ansatz. We call such a plot a scaling data collapse plot.

Fig. 6.2

Scaling data collapse plot of \(P(p,L)\) with \(L^{1/{\nu }} (p-p_c)\) along the x-axis and \(L^{\beta /\nu } P(p,L)\) along the y-axis

Comparing to Theory at \(p=p_c\)

Finally, we can now use this theory to understand the behavior of \(P(p_c,L)\). In this case we find that \(P(p_c,L) = c L^{-\beta /\nu }\). We can therefore measure \(-\beta /\nu \) from the plot of \(P(p_c,L)\) in Fig. 6.1. While the data in this figure is too poor to produce a reliable result, the figure demonstrates the principle.

Varying L to Gain Insight

The take-home message is that instead of running one single simulation with as large an L as possible, we should vary L systematically and then use this variation to estimate the relevant exponents \(\nu \) and \(\beta \). The methods demonstrated here usually provide much better precision in the exponents than a direct measurement for a single large system size.

Alternative Approaches

We could instead have started with a scaling ansatz of \(P(p,L) = (p-p_c)^{\beta } g(L/\xi ) = \xi ^{-\beta /\nu } g(L/\xi )\). However, the end result would be the same. We leave this as an exercise for the eager reader.

6.3 Average Cluster Size

We can characterize the distribution of cluster sizes using moments of the cluster number distribution. The k-th moment \(M_k(p,L)\) is defined as:

$$\displaystyle \begin{aligned} M_k(p,L) = \sum_{s=1}^{\infty} s^k n(s,p;L) \; . \end{aligned} $$
(6.17)

We have already introduced the second moment, \(M_2(p,L)\), which we called the average cluster size, \(S(p,L)\):

$$\displaystyle \begin{aligned} S(p,L) = M_2 (p,L) = \sum_{s=1}^{\infty} s^2 n(s,p;L) \; . \end{aligned} $$
(6.18)

Now, let us see if we can apply the finite-size scaling approach to develop a scaling theory for \(S(p,L)\). First, we will measure \(S(p,L)\), and then develop and test a scaling theory.

6.3.1 Measuring Moments of the Cluster Number Density

How would we measure \(S(p,L)\)? We recall that we measure the cluster number density from

$$\displaystyle \begin{aligned} \overline{n(s,p;L)} = \frac{N_s}{L^d} \; , \end{aligned} $$
(6.19)

where \(N_s\) is the number of clusters of size s. Thus we can estimate \(S(p,L)\) from:

$$\displaystyle \begin{aligned} \overline{S(p,L)} = \sum_{s=1}^{\infty} s^2 \overline{n(s,p;L)} = \sum_{s=1}^{\infty} s^2 \frac{N_s}{L^d} \; . \end{aligned} $$
(6.20)

We realize that we can perform this sum by summing over all possible s and including the number of clusters we have for a given s, or we can alternatively sum over all the observed clusters \(s_i\). (Try to convince yourself that this is the same by looking at a sequence of clusters of sizes \(1,2,1,5,1,2\).) Thus, we can estimate the second moment from the sum:

$$\displaystyle \begin{aligned} \overline{S(p,L)} = \sum_{i} s_i^2/L^d \; . \end{aligned} $$
(6.21)
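For a concrete check, the two ways of performing the sum agree for the example sequence of cluster sizes \(1,2,1,5,1,2\) mentioned above:

```python
from collections import Counter

clusters = [1, 2, 1, 5, 1, 2]             # observed cluster sizes s_i

# sum over all observed clusters
direct = sum(s**2 for s in clusters)

# group into N_s clusters of each size s, then sum s^2 * N_s
N = Counter(clusters)                     # {1: 3, 2: 2, 5: 1}
grouped = sum(s**2 * Ns for s, Ns in N.items())

print(direct, grouped)                    # both give 36
```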

Similarly, we sum over \(s_i^k\) for the k-th moment. We implement this in the following program:
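One possible implementation is sketched below, assuming numpy and scipy.ndimage (the function average_cluster_size is an illustrative name; any spanning cluster is excluded from the sum, consistent with the convention that \(n(s,p)\) counts only finite clusters):

```python
import numpy as np
from scipy.ndimage import label

def average_cluster_size(p, L, samples=20, seed=0):
    """Estimate S(p, L) = sum_i s_i^2 / L^d over the finite clusters
    of random L x L site lattices (spanning cluster excluded)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(samples):
        z = rng.random((L, L)) < p
        labels, _ = label(z)
        sizes = np.bincount(labels.ravel())[1:]   # s_i for clusters 1..num
        # remove any left-right spanning cluster from the list of sizes
        spanning = np.intersect1d(labels[:, 0], labels[:, -1])
        spanning = spanning[spanning > 0]
        if spanning.size:
            sizes = np.delete(sizes, spanning - 1)
        total += float((sizes.astype(np.int64)**2).sum()) / L**2
    return total / samples
```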

The resulting plot of \(S(p,L)\) as a function of p for various values of L is shown in Fig. 6.3.

Fig. 6.3

(a) Plot of \(S(p,L)\). (b) Plot of \(S(p_c,L)\) as a function of L

6.3.2 Scaling Theory for \(S(p,L)\)

How can we understand these plots and how can we develop a theory for \(S(p,L)\)? We previously found that S diverges as p approaches \(p_c\):

$$\displaystyle \begin{aligned} S(p) = S_0 | p - p_c |^{-\gamma} \; , \end{aligned} $$
(6.22)

where the exponent is \(\gamma = 43/18\) for \(d = 2\). Following the approach for finite-size scaling introduced above, we introduce the finite size L through a scaling function \(f(L/\xi )\), giving us a finite-size scaling ansatz (our hypothesis):

$$\displaystyle \begin{aligned} S(p,L) = S_0 | p - p_c |^{-\gamma} f \left(\frac{L}{\xi} \right) \; . \end{aligned} $$
(6.23)

We rewrite the first expression by introducing \(\xi = \xi _0 | p - p_c |^{-\nu }\), so that \(S_0 | p - p_c |^{-\gamma } \propto \xi ^{\gamma /\nu }\), giving:

$$\displaystyle \begin{aligned} S(p,L) = \xi^{\gamma/\nu} f \left(\frac{L}{\xi} \right) \; . \end{aligned} $$
(6.24)

Now, we see from Fig. 6.3 that when \(p = p_c\), \(S(p_c,L)\) does not diverge, but is limited by L, as we would expect for a finite system. Thus we know that in the limit when \(p \rightarrow p_c\), \(S(p,L)\) can only depend on L. This implies that the function \(f(L/\xi )\) in this limit must be such that the \(\xi \) in \(f(L/\xi )\) cancels the \(\xi ^{\gamma /\nu }\) in front of it. This can only happen if \(f(L/\xi ) \propto (L/\xi )^{\gamma /\nu }\):

$$\displaystyle \begin{aligned} S(p,L) \propto \xi^{\gamma/\nu} \left( \frac{L}{\xi} \right)^{\gamma/\nu} \propto L^{\gamma/\nu} \; . \end{aligned} $$
(6.25)

Thus, we have found that \(S(p_c,L) \propto L^{\gamma /\nu }\).

This allows us to write the scaling form of \(S(p,L)\) in a different way:

$$\displaystyle \begin{aligned} S(p,L) = L^{\gamma/\nu} g \left( \frac{L}{\xi} \right) \; . \end{aligned} $$
(6.26)

We can test this prediction by plotting \(S(p,L)L^{- \gamma /\nu }\) as a function of \(L/\xi \):

$$\displaystyle \begin{aligned} \begin{array}{rcl} S(p,L) L^{-\gamma/\nu} & =&\displaystyle g \left( \frac{L}{\xi} \right) = g\left( L (p -p_c)^{\nu} \right) \end{array} \end{aligned} $$
(6.27)
$$\displaystyle \begin{aligned} \begin{array}{rcl} & =&\displaystyle g\left( \left( L^{1/\nu} ( p - p_c ) \right)^{\nu} \right) = \tilde{g} \left( L^{1/\nu} (p-p_c) \right) \; . \end{array} \end{aligned} $$
(6.28)

The resulting plot is shown in Fig. 6.4, which indeed demonstrates that the measured data is consistent with the scaling theory. Success!

Fig. 6.4

A data-collapse plot of the rescaled average cluster size \(L^{-\gamma /\nu }S(p,L)\) as a function of \(L^{1/\nu }(p - p_c)\) for various L

6.4 Percolation Threshold

Finally, we will demonstrate one of the most elegant applications of finite-size scaling theory: applying it to the percolation probability \(\varPi (p,L)\) to see how a finite system size affects the effective percolation threshold.

6.4.1 Measuring the Percolation Probability \(\varPi (p,L)\)

We can measure the percolation probability for a set of finite system sizes using the methods we developed previously. Here, we have implemented the measurement in the following program, which is very similar to the program developed to measure \(P(p,L)\):
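A sketch of such a measurement, assuming numpy and scipy.ndimage (percolation_probability is an illustrative name; spanning is tested from the left to the right edge):

```python
import numpy as np
from scipy.ndimage import label

def percolation_probability(p, L, samples=50, seed=0):
    """Estimate Pi(p, L): the fraction of random L x L lattices
    containing a cluster that spans from the left to the right edge."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(samples):
        z = rng.random((L, L)) < p
        labels, _ = label(z)
        # labels shared by the left and right edges indicate spanning
        spanning = np.intersect1d(labels[:, 0], labels[:, -1])
        if (spanning > 0).any():
            hits += 1
    return hits / samples
```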

The resulting plot of \(\varPi (p,L)\) for various values of L is shown in Fig. 6.5.

Fig. 6.5

Plot of \(\varPi (p,L)\)

6.4.2 Measuring the Percolation Threshold \(p_c\)

Let us now assume that we do not a priori know \(p_c\) or any of the scaling exponents. How can we use this data-set to estimate the value for \(p_c\)?

The simplest approach may be to estimate \(p_c\) as the value for p that makes \(\varPi (p,L) = 1/2\). This corresponds to the intersection of the horizontal line \(\varPi = 1/2\) with the curves in Fig. 6.5, as illustrated in Fig. 6.6. Here, we have also plotted \(p_{1/2}\) as a function of L, where \(p_{1/2}\) is the value of p for which \(\varPi (p_{1/2},L) = 1/2\). These values of \(p_{1/2}\) are calculated by a simple interpolation, as illustrated in the following program. (Notice that, as usual in this book, we do not aim for high precision in this program. The simulations are for small system sizes and few samples, but are meant to illustrate the principle and be reproducible for you.)
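A minimal sketch of the interpolation step, assuming the measured \(\varPi \) values increase monotonically with p (the function p_half and the smooth synthetic curve below are illustrative, not measured data):

```python
import numpy as np

def p_half(p_values, Pi_values, x=0.5):
    """Linearly interpolate the p where the measured curve Pi(p, L)
    crosses the level x (assumes Pi increases monotonically with p)."""
    return np.interp(x, Pi_values, p_values)

# usage with a synthetic, smooth stand-in for a measured curve
p = np.linspace(0.0, 1.0, 101)
Pi = p**2                      # illustrative monotone curve, not real data
print(p_half(p, Pi))           # near sqrt(0.5), about 0.707
```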

Fig. 6.6

(a) Plot of \(\varPi (p,L)\). (b) Plot of \(p_{1/2}\) as a function of L

From Fig. 6.6 we see that as L increases, the value of \(p_{1/2}\) gradually approaches \(p_c\). Well, we cannot really see that it is approaching \(p_c\), but we guess that it will. However, in order to extrapolate the curve to infinite L, we need to develop a theory for how \(p_{1/2}\) behaves: a finite size scaling theory for \(\varPi (p,L)\).

6.4.3 Finite-Size Scaling Theory for \(\varPi (p,L)\)

We apply the same method as before to develop a theory for \(\varPi (p,L)\). First, we notice that at \(p_c\), \(\varPi (p_c,L)\) neither diverges nor goes to zero. This means that \(\varPi (p,L)\) cannot be a function of \(\xi \) alone, but instead must have the scaling form:

$$\displaystyle \begin{aligned} \varPi(p,L) = \xi^0 f \left( \frac{L}{\xi} \right) \; . \end{aligned} $$
(6.29)

We rewrite this in terms of \((p-p_c)\) by inserting \(\xi = \xi _0 | p - p_c|^{-\nu }\):

$$\displaystyle \begin{aligned} \varPi(p,L) = f \left( L \xi_0^{-1} | p - p_c|^{\nu} \right) = f \left( \xi_0^{-1} \left( L^{1/\nu} (p - p_c) \right)^{\nu} \right) \; . \end{aligned} $$
(6.30)

We introduce a new function \(\varPhi (u) = f \left ( \xi _0^{-1} u^{\nu } \right )\):

$$\displaystyle \begin{aligned} \varPi(p,L) = \varPhi \left( L^{1/\nu} (p - p_c) \right) \; . \end{aligned} $$
(6.31)

This is our finite-size scaling ansatz (theory).

6.4.4 Estimating \(p_c\) Using the Scaling Ansatz

How can we use this theory to estimate \(p_c\)? We follow a technique similar to what we used above: We find the value \(p_x\) that makes \(\varPi (p_x,L) = x\). Above, we did this for \(x = 1/2\), but we can do this more generally. As \(L \rightarrow \infty \), we expect any such \(p_x\) to converge to \(p_c\). We notice from above that \(p_x\) is a function of L: \(p_x = p_x(L)\).

We insert this into the scaling ansatz:

$$\displaystyle \begin{aligned} x = \varPhi \left( \left(p_x(L) - p_c \right)L^{1/\nu} \right) \; , \end{aligned} $$
(6.32)

which can be solved as

$$\displaystyle \begin{aligned} (p_{x} - p_c)L^{1/\nu} = \varPhi^{-1}(x) = C_x\; , \end{aligned} $$
(6.33)

where it is important to realize that the right hand side, \(C_x\), is a number which depends only on x and not on L. We can therefore rewrite this as

$$\displaystyle \begin{aligned} p_x - p_c = C_x L^{-1/\nu} \; . \end{aligned} $$
(6.34)

If we know \(\nu \), this gives us a method to estimate the value of \(p_c\). Figure 6.7 shows a plot of \(p_{1/2}\) as a function of \(L^{-1/\nu }\) for \(\nu = 4/3\). We can use this plot to extrapolate to find \(p_c\) in the limit when \(L \rightarrow \infty \) as indicated in the plot. The resulting value for \(p_c\) extrapolated from \(L = 50,100,200\) is \(p_c = 0.5935\), which is surprisingly good given the small system sizes and small sample sizes used for this estimate. (The best known value is \(p_c = 0.5927\).) This demonstrates the power of finite size scaling.
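The extrapolation amounts to a linear fit of \(p_{1/2}\) against \(L^{-1/\nu }\), reading off the intercept as \(p_c\). A sketch with synthetic values generated from the scaling form itself (the constants 0.5927 and 0.3 are illustrative inputs, not measurements):

```python
import numpy as np

nu = 4 / 3
L = np.array([50.0, 100.0, 200.0])

# synthetic p_half values obeying p_x = p_c + C_x L^(-1/nu);
# p_c = 0.5927 and C_x = 0.3 are illustrative, not measured
p_half = 0.5927 + 0.3 * L**(-1 / nu)

# linear fit against L^(-1/nu); the intercept estimates p_c
C_fit, p_c_fit = np.polyfit(L**(-1 / nu), p_half, 1)
print(p_c_fit)   # recovers 0.5927
```

With real, noisy measurements the fit is the same; only the scatter of the points around the line changes.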

Fig. 6.7

Plot of \(p_{1/2}\) as a function of \(L^{-1/ \nu }\). The dashed line indicates a linear fit to the data for \(L = 50,100,200\). The extrapolated value for \(p_c\) at \(L \rightarrow \infty \) is \(p_c = 0.5935\)

6.4.5 Estimating \(p_c\) and \(\nu \) Using the Scaling Ansatz

However, this approach depends on us knowing the value of \(\nu \). What if we knew neither \(\nu \) nor \(p_c\)? How can we estimate both from the scaling ansatz? One alternative is to generate plots of \(p_x\) as a function of \(L^{-1/\nu }\) for several values of x. We then adjust the value of \(\nu \) until we get a straight line, and read off the intercept with the \(p_x\) axis as the value for \(p_c\).

However, we can do even better by noticing a trick: For two values \(x_1\) and \(x_2\), we get

$$\displaystyle \begin{aligned} dp = p_{\varPi=x_1}(L) - p_{\varPi=x_2}(L) = (C_{x_1} - C_{x_2})L^{-1/\nu} \; , \end{aligned} $$
(6.35)

and we can therefore plot \(\log (dp)\) as a function of \(\log (L)\) to estimate \(\nu \), and then use this to estimate \(p_c\). As an exercise, the reader is encouraged to demonstrate that this scaling ansatz is valid for \(d=1\), and in this case find \(C_x\) explicitly.
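A sketch of this estimation of \(\nu \) on synthetic data obeying the scaling form (the prefactor 0.2 is an illustrative constant, standing in for \(C_{x_1} - C_{x_2}\)):

```python
import numpy as np

# synthetic dp values obeying dp = (C_x1 - C_x2) L^(-1/nu);
# the prefactor 0.2 and nu = 4/3 are illustrative
nu_true = 4 / 3
L = np.array([25.0, 50.0, 100.0, 200.0])
dp = 0.2 * L**(-1 / nu_true)

# slope of log(dp) vs log(L) is -1/nu
slope, _ = np.polyfit(np.log(L), np.log(dp), 1)
nu_est = -1 / slope
print(nu_est)   # recovers 4/3
```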

Exercises

Exercise 6.1 (Finite-Size Scaling in One Dimension)

  1. (a)

    Show that the scaling ansatz for \(\varPi (p,L)\) is valid for \(d=1\).

  2. (b)

    Find an explicit expression for \(C_x\) for \(d=1\).

Exercise 6.2 (Finite-Size Scaling in Two Dimensions)

In this exercise we will use the scaling ansatz to provide estimates of \(\nu \), \(p_c\) and the average percolation probability \(\langle p \rangle \) in a system of size L.

We define \(p_x\) so that \(\varPi (p_x,L) = x\). Notice that \(p_x\) is a function of the system size L used for the simulation.

  1. (a)

    Find \(p_x\) for \(x = 0.3\) and \(x = 0.8\) for \(L = 25,50,100,200,400,800\). Plot \(p_x\) as a function of L.

    According to the scaling theory we have

    $$\displaystyle \begin{aligned} p_{x_1} - p_{x_2} = \left( C_{x_1} - C_{x_2} \right) L^{-1/\nu} \; . \end{aligned} $$
    (6.36)
  2. (b)

    Plot \(\log \left ( p_{0.8} - p_{0.3} \right )\) as a function of \(\log (L)\) to estimate the exponent \(\nu \). How does it compare to the exact result?

    In the following, please use the exact value \(\nu = 4/3\). The scaling theory also predicted that

    $$\displaystyle \begin{aligned} p_x = p_c + C_x L^{-1/\nu} \; . \end{aligned} $$
    (6.37)
  3. (c)

    Plot \(p_x\) as a function of \(L^{-1/\nu }\) to estimate \(p_c\). Generate a data-collapse plot for \(\varPi (p,L)\) to find the function \(\varPhi (u)\) described above.

  4. (d)

    Plot \(\varPi '(p,L)\) as a function of p for the various L values used above. Generate a data-collapse plot of \(\varPi '(p,L)\). Find \(\langle p \rangle \) and plot \(\langle p \rangle \) as a function of \(L^{-1/ \nu }\) to find \(p_c\).

Exercise 6.3 (Finite Size Scaling of \(n(s,p_c,L)\))

  1. (a)

    Develop a finite size scaling ansatz/theory for \(n(s,p_c,L)\). You should provide arguments for the behavior in the various limits.

  2. (b)

    Plot \(n(s,p_c,L)\) as a function of s for \(L = 100,200,400,800\).

  3. (c)

    Demonstrate the validity of the scaling theory by producing a data-collapse plot for \(n(s,p_c,L)\).