1 Introduction

Continuum models play a central role in the scientific representations of varied physical phenomena due to their wide-ranging applicability and empirical adequacy. The continuum picture of nature can be historically attributed to Leibniz and Bernoulli, who subscribed to a classical view of physical quantities. As per the classical view, the principle of sufficient reason entails that all physical quantities in nature must obtain at all intermediate values between a starting value and a final value without any jumps [1, p. 30]. Although the classical view has long since fallen from favour, there is still a tendency amongst modern philosophers of science to endorse the continuum picture: call this the continuum fallacy. To commit the continuum fallacy is to believe that continuity is essential for many accounts of scientific representation, explanation and understanding. One version of the fallacy has it that continuum models are indispensable, “in principle”, because many macroscale phenomena cannot be explained in purely reductive, microphysical terms, and can instead be explained only by positing continuum models at the macroscale. That is, continuum models play an indispensable role in scientific explanations despite being effectively decoupled from the microphysical details of the systems they describe [2,3,4]. Another version of the fallacy relates to a pragmatic view of continuum models, that is, the justification for such models comes from their mathematical convenience and empirical adequacy, as discontinuous representations are generally intractable [5]. A further version of the fallacy has it that the discontinuities apparent in scientific representations (such as in phase transitions) may not really be there when the associated physics is parsed out carefully, and thus, all things considered, continuity seems to be the norm [6,7,8].Footnote 1

By focusing on the case of temperature discontinuities, we argue that the continuum view (briefly discussed in sect. 2) is fallacious because:

  i) continuum models at the macro level are not necessarily decoupled from the microscopic features of a physical system, making continuum models empirically inadequate and inapplicable in many situations of current scientific interest;

  ii) the evidence of temperature discontinuities runs from the macroscopic to the microscopic — that is, they are present in both the data (experiments and simulations) and the phenomena (theories and models) pertaining to thermal systemsFootnote 2; and

  iii) the discontinuous modelling of temperature profiles can not only be more empirically adequate than continuum modelling but also provide us with a better scientific understanding of the underlying thermal phenomena.

We choose the case of temperature because, following from the continuum view, there is a commonly held misconception that temperature varying across a region of space or time can always be represented as a continuous function.

We illustrate our arguments by analysing three inter-related cases of temperature modelling in detail in sects. 3, 4 and 5:

  a) phase transitions in evaporative processes leading to temperature discontinuities across liquid-vapour droplets (interfaces);

  b) thermal boundary resistance across solid-solid interfaces leading to temperature discontinuities; and

  c) temperature and velocity jumps at walls in fluid flows, such as in micro-channels.

We conclude in sect. 6 that a) continuum models are not indispensable in describing physical phenomena; b) temperature is not necessarily a continuous function in our best scientific representations of the world; and c) its continuity, where applicable, is a contingent matter. Our view is that both continuum and discontinuum models work in certain contexts and either fail or become intractable in others, and that the indispensability thesis is thus misguided and unwarranted.

The cases we discuss, however, raise an important question about whether discontinuous representations are truly de-idealised descriptions of physical phenomena. We discuss this question briefly in sect. 6 in our concluding remarks.

2 A Brief Overview of the Continuum Fallacy

Before we get to the case studies, we briefly summarise the positions of the proponents of the continuum picture.Footnote 3

A position maintained by Batterman [2,3,4] is that continuum idealisations are not eliminable, “even in principle”. This is because not all emergent patterns (like continuum parameters appearing at the macroscopic level) can be reduced to a “fundamental theory” by deriving them bottom-up from atomistic theories or facts. One therefore needs to introduce continuum idealisations that purportedly remove the unnecessary details from a system and provide one with an explanatory power that was not attainable via these bottom-up descriptions. This is because, Batterman argues, “...the bulk behaviors of solids and fluids are almost completely insensitive to the actual nature of the physics at the smallest scale.” [4, p. 275], even as these microphysical details genuinely distinguish physical systems from one another. He therefore claims that continuum idealisations are “robust”, despite being insensitive to the changes at the microphysical level, and theories that start from microphysical details (bottom-up) fail to explain this fact:

...derivations that start from atomic assumptions fail to arrive at the correct theory. It seems that here may very well be a case where a continuum point of view is actually superior: bottom-up derivation from atomistic hypotheses [for instance] about the nature of elastic solid bodies fails to yield correct equations governing the macroscopic behavior of those bodies. [4, p. 272]

Therefore, for Batterman continuum idealisations represent an indispensable “point of view” [4], but not one that is merely pragmatically justified. Batterman thus concludes that:

...idealized “overly simple” [continuum] model equations can better explain and characterize the dominant features of the physical phenomenon of interest. That is to say, these idealized models better explain than more detailed, less idealized models. (ibid., p. 429)

Some philosophers hold a view of continuity that is stronger than Batterman’s “point of view” idea of continuity. For instance, Colyvan [9, p. 49] claims that the continuity of temperature is a ‘minor’ structural assumption that can be readily made in mathematical models involving temperature because the continuity is either necessary or non-causal in some sense. Colyvan does not clarify what he means by this ‘necessity’ in his book [9] — The Indispensability of Mathematics — but a charitable interpretation of his views suggests that it can be read along the lines of the (strong) indispensability of continuity. Khalifa et al. [10, pp. 1448–49] advance a similar view: “it is physically impossible for temperature to be discontinuous”, because this would result in a violation of Fourier’s law and the law of energy conservation. Both Colyvan and Khalifa et al. mention the necessity of continuity in the context of modelling temperature continuously on topological spaces because continuity is critical for certain topological theorems and explanations to work. The topological theorem discussed by Colyvan and Khalifa et al. is the Borsuk-Ulam theorem, a corollary of which is that there must exist at least two antipodal points on the earth with the same temperature and the same pressure at any given moment of time.Footnote 4 However, we argue in this paper that the continuity of temperature (in our scientific representations) is a contingent matter. Consequently, topological applications of theorems like the Borsuk-Ulam theorem (which require that mappings of variables onto topological spaces be continuous) cannot yield ‘necessarily true’ predictions of physical phenomena, like the purported existence of two such antipodal points around the earth with the same temperature and pressure.Footnote 5 Further, in sect. 4, we demonstrate that it is in the very cases where Fourier’s law breaks down that the discontinuity of temperature and its importance in modelling heat flows become obvious.

A more pragmatic view comes from Shech [11], who grants that infinite (continuum) idealisations “...do seem to play an indispensable role [mathematically] in exploring the possible representational structure and foundations of scientific theory” (in several instances), but asks whether this really entails that such idealisations are ineliminable. He further asks whether “...idealizations that are not infinite or infinitesimal” can allow for such exploration.

A further view comes from Butterfield [5], who argues that continuum models are pragmatically justified in many instances (he concedes that the justification may be hard to achieve) by appeal to their mathematical convenience and empirical adequacy. In his defence of infinite idealisations (on which continuum idealisations in phase transitions rely), he discusses two themes that are common to many such modelling practices:

The first theme is abstraction from finitary effects. That is: the mathematical convenience and empirical adequacy of many such models arises, at least in part, by abstracting from such effects. Consider (a) how transient effects die out as time tends to infinity; and (b) how edge/boundary effects are absent in an infinitely large system.

The second theme is that the mathematics of infinity is often much more convenient than the mathematics of the large finite. The paradigm example is of course the convenience of the calculus: it is usually much easier to manipulate a differentiable real function than some function on a large discrete subset of \(\mathbb {R}\) that approximates it. (ibid., p. 1081)

Butterfield thus argues that often the value of a function representing some quantity in fluid and solid mechanics (abstracted from finitary effects) is not a single value assigned to a fixed number of molecules, but rather an averaged-out value of an idealised function of some other underlying variable which remains “suppressed” from the functional notation. He further notes that “...to make this function easily manipulated, e.g. continuous or differentiable so that it can be treated with the calculus, we often need to have each value of the function be defined as a limit (namely, of values of another function)” (ibid., 1081–82).
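Butterfield’s point can be made concrete with a toy computation (ours, not his; all names and values are illustrative): if the “temperature” at a position is defined as an average over many discrete molecular values, the resulting macroscopic function looks smooth, while the averaging window (the underlying variable) remains suppressed from the notation.

```python
import numpy as np

# Toy illustration (our construction, not Butterfield's): a "temperature"
# at position x is defined as an average over many discrete molecular
# kinetic-energy values, so the macroscopic function looks smooth even
# though the underlying data are discrete and noisy.
rng = np.random.default_rng(0)
n_molecules = 100_000
positions = rng.uniform(0.0, 1.0, n_molecules)
# Hypothetical microscopic profile: values trend with position, plus
# large molecule-to-molecule fluctuations.
kinetic = 300.0 + 50.0 * positions + rng.normal(0.0, 30.0, n_molecules)

def coarse_grained_T(x, width=0.05):
    """Average the molecular values in a window around x; the window
    width is the 'suppressed' variable of the averaging procedure."""
    mask = np.abs(positions - x) < width
    return kinetic[mask].mean()

xs = np.linspace(0.05, 0.95, 10)
print([round(coarse_grained_T(x), 1) for x in xs])  # smooth-looking trend
```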

An alternative view on continuum idealisations (in phase transitions) comes from the debate between referentialists and non-referentialists, as outlined by Bangu [7]. Strong referentialists claim that the discontinuities in the thermodynamical variables pertaining to phase transitions (such as pressure-volume discontinuities in P-V diagrams) are genuine physical features of the world [7, 12].Footnote 6 Non-referentialists claim that the discontinuities apparent in the mathematical representations of the P-V diagrams are artefacts of the mathematical formalism used to represent phase transitions, since these formalisms are too coarse-grained to capture what is happening in the physical phenomenon, and thus that the discontinuities have no genuine physical significance [8]. Bangu, who opposes this simplistic dichotomy, notes that there are discontinuities of genuine physical (scientific) interest even if these discontinuities may not be physically ‘real’ [6, 7]. By drawing on the distinction between data and phenomena, as introduced by Bogen and Woodward [14], Bangu clarifies that the discontinuities apparent in the P-V diagrams of phase transitions are present in the ‘phenomenon’ of phase transitions, but not in the data related to it: he calls this view “weak referentialism”. A phenomenon, he argues, is a theoretical structure postulated by scientists to ‘save’ raw data, where raw data is what instruments produce from observations. He explains weak referentialism as follows [7, p. 1934]:

...singularities are referential at the phenomenal/unobservable level, while not at the data/observable level. This characterization implies that a singularity does not refer to a directly ascertainable, measurable feature of the physical system (as I suspect that the strong form of Referentialism would have it), but to an indirect feature, introduced via a specific mathematical representational framework (here, in terms of free energy). Thus, while a singularity lacks observational significance indeed — note the concession made to the non-Referentialists! — it still retains full physical significance.

Having discussed the views supporting the continuum picture, we will go on to show in the following sections that evaporation models (concerning phase transitions) and thermal boundary resistance models demonstrate that continuum idealisations are not indispensable, even “in principle”, as these continuum models critically rely on the microscopic details of the thermal system, contrary to what Batterman argues. This is because the microscopic changes in the material conditions at various interfaces (e.g., liquid-vapour, solid-solid) manifest as temperature discontinuities and ignoring these discontinuities results in poor empirical predictions of heat transport across these interfaces. Further, significant temperature discontinuities have been reported in various thermal systems using heat transport equations, molecular dynamics simulations and temperature measurements at various levels (micro, meso and macro). Thus, discontinuities are present in both the data and the phenomena, contrary to what Bangu argues. Finally, we look into Butterfield’s strategy in a bit more detail in sect. 5 and argue that although Butterfield is justified in his defence of the continuum strategy, one must account for certain microscopic features of the modelled phenomenon before adopting the continuum idealisation.

3 Temperature Discontinuity in Evaporative Processes (Phase Transitions)

Evaporation models, which motivated the discovery of temperature discontinuities in phase transitions, have received significant attention in the scientific literatureFootnote 7 because even though evaporation is a ubiquitous phenomenon, the underlying physics is not yet fully understood [15, 17].Footnote 8 Aursand and Ytrehus [16] note that:

When dealing with fluid mechanics problems that involve heat transfer and a liquid-vapour interface, it is usually necessary to specify some relation between the state of the continuum fluids on either side of the interface and the resulting evaporation or condensation flux across it. For simplicity, such phase transitions are often treated as quasi-equilibrium processes. In practice this means that the interface temperature is assumed to be continuous and exactly equal to the fluid’s saturation temperature, which allows simple energy conservation considerations to close the problem. However, in reality phase transitions occur under non-equilibrium conditions...[and therefore] Evaporation model[s] introduce additional considerations from outside the realm of the continuum and local equilibrium assumptions made in fluid mechanics. (p. 67)

Fig. 1: Illustration of an evaporating interface from Aursand and Ytrehus [16] — the dashed black line adjacent to the liquid bulk is the interface transition

In order to estimate the evaporation and mass flux for an evaporating liquid-vapour interface, one analyses the surface kinetics of a drop or a film, where although an interface is assumed to be of zero thickness in the continuum approximation, it is actually composed of two microscopic layers [16, p. 68]:

  a) the interface transition, where there is a “rapid transition from a liquid-like density to a gas-like density across the distance of a few molecular diameters...”, referred to as the “interface” throughout the article, and

  b) the Knudsen layer, which is “a layer between the liquid surface and the vapor bulk. Here the gas is heavily influenced by the evaporating interface and is in a non-equilibrium state.” It is usually of the thickness of a few molecular mean free paths (MFPs). The MFP is the average length of the path a fluid molecule travels before colliding with another molecule (a back-of-the-envelope estimate of the MFP follows this list). The Knudsen layer is typically invisible in macroscopic descriptions of an evaporating surface. Nonetheless, its analysis is of considerable importance in evaporation models and in fluid flows, as we discuss below.
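As a minimal sketch (our construction; the molecular diameter and pressures are illustrative assumptions, not values from the cited experiments), the standard kinetic-theory estimate \(\lambda = k_B T / (\sqrt{2}\, \pi d^2 P)\) shows why the MFP reaches micrometre scales at low vapour pressures:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(T, P, d):
    """Kinetic-theory MFP: lambda = k_B * T / (sqrt(2) * pi * d**2 * P)."""
    return k_B * T / (math.sqrt(2) * math.pi * d**2 * P)

# Illustrative values (assumptions, not from the experiments cited below):
# a water-vapour-like molecular diameter ~2.75e-10 m at room temperature.
T = 300.0     # K
d = 2.75e-10  # m
for P in (101_325.0, 1_000.0, 100.0):  # atmospheric vs. low pressures, Pa
    print(f"P = {P:>9.0f} Pa -> MFP = {mean_free_path(T, P, d) * 1e6:.2f} um")
```

On these assumed numbers, the estimate is of the order of 0.1 \(\mu m\) at atmospheric pressure but grows to tens of micrometres at sub-kilopascal pressures, consistent with the micrometre-scale MFPs reported in the experiments discussed below.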

Various studies, both theoretical and experimental, report significant temperature discontinuities across liquid-vapour interfaces in phase transitions. We elaborate on exactly what these discontinuities are: that is, whether they actually exist in the data or are merely a by-product of coarse-grained observations or of the scale of the modelling.

Pao [18] predicted the existence of an anomalous temperature profile between two parallel liquid films kept at different but constant temperatures, where evaporation occurs on one of the films and condensation on the other. Pao argued that according to the Kinetic Theory of Gases (KTG), there must exist a temperature discontinuity on the surface of the evaporating film such that the temperature is higher on the liquid side than on the vapour side, and the reverse must be true on the film where condensation takes place, i.e., the temperature must be higher on the vapour side than on the liquid side of the film. (The temperature profile of the evaporating half of the setup looks similar to that in Fig. 1, except that the liquid bulk of Fig. 1 is replaced by a film.) The prediction of such an anomalous temperature profile, first viewed with suspicion for its potential violation of the postulates of thermodynamics, became acceptable in non-equilibrium thermodynamics on the theoretical ground that it is associated with an increase in entropy [15, p. 4]. Further, in a series of experiments [19,20,21], temperature discontinuities of as much as \(7.8^{\circ }\)C were successfully measured across the surface of an evaporating water droplet (see Figs. 2 and 3). The discontinuity was reported at a scale of one MFP, which was of the order of a few micrometers in these experiments. However, the direction of the anomaly was found to be different from what Pao [18] predicted: the temperature was consistently found to be higher on the vapour side than on the liquid side, whether observed where evaporation or where condensation was taking place. (We discuss the significance of this temperature reversal shortly.) Rahimi and Ward [15] note that such temperature discontinuities can exist in a wide range of conditions, for instance, in cases where water is evaporating at temperatures of roughly \(65^{\circ }\)C or below.

Fig. 2: Temperature table from Fang and Ward [20, 427] showing temperature in liquid and vapour phases at the droplet interface (TC denotes thermocouple): note the large temperature discontinuities within one mean free path highlighted within the orange box

Fig. 3: The figures from McGaughey and Ward [22, 6410] show the different temperatures (discontinuities) recorded at the liquid interface \(T_i^L\), the vapour interface \(T_i^V\), the vapour thermocouple bead (on which the water droplet hangs) \(T_{tc}^V\) and the bath \(T^B\). The bars in the top figure around \(T_i^V\) are error estimation bars. The bottom figure shows the model used for the construction of the temperature profile in the vapour phase

The investigation of discontinuous temperature profiles by Fang and Ward [20], a seminal study on temperature discontinuities, was motivated by inconsistent predictions of the evaporation rates of droplets based on the well-known \(D^2\) law of evaporation,Footnote 9 and by a general lack of understanding of the phenomenon of evaporation [15, 20, 22]. The \(D^2\) law is derived from the classical KTG with various background assumptions, including the crucial continuum hypothesis, namely, that the temperature profile across the surface of an evaporating water droplet is continuous. This assumption was based on the belief that the rates of evaporation are affected only by the temperature of the liquid, in that the coupling between the vapour and liquid phases (such as the transfer of molecules from the vapour to the liquid phase) can be ignored, so that one could assume thermal equilibrium across the interface, which would yield a continuous temperature profile there. In an attempt to improve the predictive accuracy of the \(D^2\) law, however, several alternative models of the rate of evaporation have been developed by relaxing some of its crucial assumptions [22,23,24], including a number of models that relax the continuum hypothesis (see [17] for a brief review). We discuss two such prominent models and show that the existence of temperature discontinuities can be traced back to the microscopic differences in the material conditions on either side of the interface affecting the thermal transport between the liquid and the vapour bulk. (We will demonstrate an analogous case for solid-solid thermal boundary transport in the next section.)
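For reference (our gloss of the law just mentioned, not a statement quoted from [20]), the \(D^2\) law says that the squared diameter of an evaporating droplet shrinks linearly in time,

$$\begin{aligned} D^2(t) = D_0^2 - K t, \end{aligned}$$

where \(D_0\) is the initial droplet diameter and K is the evaporation-rate constant; it is the evaporation rates predicted under the continuum hypothesis that the droplet experiments failed to reproduce consistently.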

The first model was developed by an application of the Statistical Rate Theory (SRT), a theory widely used in the modelling of various phenomena like adsorption, ion transport, and solidification [23]. Introduced by Fang and Ward [23] in the modelling of evaporation flux, the microscopic SRT-based model of evaporation “relates the molecular transport rates [that is, transport from the liquid phase to the vapour phase and vice versa] to the transition probabilities between [their] quantum-mechanical states and uses the Boltzmann definition of entropy to relate these transition probabilities to the thermodynamic properties of each phase.”Footnote 10 In this model, one begins by adding up the entropy change during phase transitions, including the entropy change resulting not only from the movement of the liquid molecules to the vapour phase but also from the movement of the vapour molecules back into the liquid phase. The older models (like the Hertz-Knudsen model) ignored this coupling factor and thus gave incorrect predictions of evaporation rates across the interface. Fang and Ward [23, 429] note, “There is no method available in classical kinetic theory that can be used to derive the boundary conditions [across the interface]. Rather they are assumed [for instance, thermal equilibrium is assumed across the interface] and none of the boundary conditions considered [in the previous continuum models] have supposed that the temperature of the molecules leaving the liquid were other than the temperature of the liquid.” In SRT, however, to account for the total entropy change, one treats the temperature of the liquid and vapour phases as differing across the interface, which is barely a few molecular diameters thick — an approach motivated by the experimentally recorded discontinuities and the theoretical predictions discussed earlier. The total entropy change in SRT (\(\frac{\Delta S_{LV}}{k_B}\)) can be rearranged as the sum of three components: the continuum component, the temperature discontinuity term and the inter-molecular-vibration frequency term [25, 121709-3], that is

$$\begin{aligned} \frac{\Delta S_{LV}}{k_B} = \frac{\Delta S_{C}}{k_B} + \frac{\Delta S_{T}}{k_B} + \frac{\Delta S_{\omega }}{k_B}, \end{aligned}$$
(1)

where the continuum term is

$$\begin{aligned} \frac{\Delta S_{C}}{k_B} = \ln \left( \frac{P_{S}(T_L)}{P_V} \right) + \frac{m \nu _f(T_L)}{k_B T_L} \left( P_V - \gamma _{LV} C - P_S(T_L) \right) , \end{aligned}$$
(2)

the temperature discontinuity term is

$$\begin{aligned} \frac{\Delta S_{T}}{k_B} = 4 \left( 1 - \frac{T_V}{T_L} \right) + \ln \left( \left( \frac{T_V}{T_L} \right) ^{4} \right) \end{aligned}$$
(3)

and the inter-molecular-vibration frequency term is

$$\begin{aligned} \frac{\Delta S_{\omega }}{k_B} = \ln \left( \frac{q_V^{vib}}{q_L^{vib}} \right) + \left( \frac{1}{T_V} - \frac{1}{T_L} \right) \sum _{l=1}^{DOF} \left( \frac{\theta _l}{2} + \frac{\theta _l}{\exp \frac{\theta _l}{T_V} - 1} \right) \end{aligned}$$
(4)

Here L and V refer to liquid and vapour phases respectively, S is the entropy, \(k_B\) is the Boltzmann constant, “DOF” refers to the vibrational frequency degrees of freedom of the molecule, \(P_S\) is the saturation pressure at the liquid temperature, \(P_V\) is the vapour-phase pressure, m is the mass of an evaporating molecule, \(\nu _f\) is the specific volume of the liquid at saturation, \(\theta _l\) refers to the characteristic temperature for the vibrational component of the molecule’s energy, \(q^{vib}\) is the vibration partition function, C is the principal curvature of the interface, and \(\gamma _{LV}\) is the liquid-vapour surface tension.
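As a minimal numerical sketch (our construction; the temperatures are illustrative rather than measured values), the temperature-discontinuity term of Eq. (3) can be evaluated directly. The term vanishes when \(T_V = T_L\), so any non-zero value is a signature of the modelled jump:

```python
import math

def delta_S_T(T_L, T_V):
    """Temperature-discontinuity contribution to the SRT entropy change,
    Eq. (3): 4*(1 - T_V/T_L) + ln((T_V/T_L)**4)."""
    r = T_V / T_L
    return 4.0 * (1.0 - r) + math.log(r**4)

# Illustrative temperatures (assumed, not taken from [20] or [25]):
# liquid side near 270 K, vapour side warmer, as in the reversed
# profiles reported experimentally, with jumps up to the ~8 K scale.
T_L = 270.0
for jump in (0.0, 1.0, 4.0, 8.0):  # T_V - T_L in kelvin
    print(f"jump = {jump:4.1f} K -> Delta S_T / k_B = "
          f"{delta_S_T(T_L, T_L + jump):+.2e}")
```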

Some observations about the model are in order. The inverse terms \(\frac{1}{T_L}\) and \(\frac{1}{T_V}\) are highly sensitive to even small changes in the values of \({T_L}\) and \({T_V}\), with the temperature discontinuity being (\(T_L - T_V\)). Thus, both the temperature discontinuity term \(\frac{\Delta S_{T}}{k_B}\) and the inter-molecular-vibration frequency term \(\frac{\Delta S_{\omega }}{k_B}\), in which the discontinuity appears, contribute significantly to the overall change in entropy \(\frac{\Delta S_{LV}}{k_B}\) and to the calculation of the evaporation flux (based on the equations above) across the interface. (We skip the model of the evaporation flux to keep matters simple.) For instance, the evaporation mass flux can increase by more than \(80\%\) if \(T_L\) increases even by 0.01 K [25, p. 121709-4]. Even though the equations depend so critically on the discontinuity modelled at the interface, the temperature of the interface cannot itself be measured. This is mainly due to experimental difficulties related to the microscopic nature of the interface and the limitations on the size of temperature measuring devices, which usually produce an averaged result in a very small region near the interface.Footnote 11 The temperature discontinuity across the very small inter-molecular distance is instead inferred from the values of \({T_L}\) and \({T_V}\) close to the interface, which are extrapolated from the (macroscopic) temperatures of the liquid and vapour bulk, respectively, near the interface. One might object that our inability to measure the temperature within the interface shows that we do not know whether the data show a discontinuity or not. One might also claim that reducing the size of the temperature measurement probe might show that the profile is in fact continuous after all. However, these objections are unwarranted for several reasons. The SRT-based model utilises discrete quantum mechanical transition probabilities, and therefore one should not be surprised to find quantities that do not vary continuously; fitting a continuous function to this discrete structure is unlikely to work. (We discuss this in a bit more detail below.) More importantly, the temperature discontinuity modelled via SRT is a result of the abrupt change in the microscopic material conditions at the interface, which results in a breakdown of local thermal equilibrium at the interface (and in the Knudsen layer) — this is taken as a starting premise of the model, a premise motivated by the theoretical and experimental results concerning evaporation studies. (To reiterate, the existence of temperature discontinuities was a theoretical prediction of the classical KTG, which has been widely confirmed with the predictions of the Boltzmann transport equations [17], and thus relaxing the continuum hypothesis has led to an improvement in the predictive accuracy of evaporation models.) Besides, experimental results with a reduced size of the measurement probe do not show a significant deviation in the discontinuity. For instance, the authors of [25, 121709-8] note that when a 12 \(\mu m\) thermocouple wire was used instead of a 25 \(\mu m\) wire, the results did not vary by more than 1 K; the large discontinuity (of nearly 8 K across one MFP) prevails despite the probe size being much smaller than a MFP. Furthermore, results of molecular dynamics simulations also show a significant discontinuity across the liquid-vapour interface [28, p. 360]. (We discuss an analogous prediction for the solid-solid interface and assess its significance and meaning in the next section.) Notably, other applications of SRT-based modelling of transport processes at micro-scales, e.g., modelling of the rate of gas adsorption on single-crystal metal surfaces, assume isothermal conditions (at a given instant) at the interface (the gas-metal interface in this case) and yet produce correct predictions [23, 429]. SRT thus employs both continuous and discontinuous representations of temperature, and the theory itself is therefore neutral on the issue; the representation of temperature, whether continuous or discontinuous, depends on the microscopic dynamics of the modelled system, rather than being dictated by SRT.

The second model that we discuss, given by Chen [17], also shows how the consideration of the microphysics around the interface and the Knudsen layer results in an interfacial temperature discontinuity in the evaporation model. This is primarily because the molecules travelling towards the interface and those travelling away from the interface have different molecular distribution functions (which may or may not be Maxwellian) [17, p. 122845-3]. The difference in molecular distribution functions is caused by the fact that the liquid and the vapour bulk at the interface are not in equilibrium. The molecular distribution function on a given side of the interface is commonly assumed to depend, among other factors, on the temperature of only that side of the interface (since the kinetic energy of the molecules is averaged either to the left or to the right of the microscopic interface). Presumably, one can also average the kinetic energy of the molecules by including the molecules in the interface itself, but given that the interface is an extremely small region, roughly 3Å, it does not contain enough molecules to make a substantial difference to the results. Moreover, there is no local thermal equilibrium at the interface, which makes it difficult to assign a temperature there. The Knudsen layer adjacent to the interface does contain a sufficient number of molecules but this layer is typically assumed to be of zero thickness and the molecular collisions in the layer are ignored (at least in Chen’s model). This is because the molecular distribution function in the Knudsen layer is complicated to model (since the molecules from the liquid and the vapour bulk interact within the very small layer under non-equilibrium conditions). Additionally, heat transport within this layer can be ballistic (that is, occurring at scales smaller than the MFP) and requires a much more rigorous treatment. Although including the ballistic transport and the molecular collisions within the Knudsen layer can improve the predictive accuracy of the evaporation model [17, p. 122845-6], doing so does not really affect the existence of temperature discontinuities. This is because irrespective of how one defines a natural region around the interface over which the kinetic energy is averaged, the discontinuities are still reported. (We elaborate on this point in the next section in the context of choosing the boundaries of molecular dynamical simulation cells, where significant temperature discontinuities have been predicted.) Therefore, the assumption that the distribution function of the liquid bulk to the left of the interface depends, among other factors, on the temperature of the liquid bulk alone, not on that of the vapour bulk, is justified; analogous reasoning applies to the vapour bulk as well. The difference in distribution functions due to the different temperatures of the liquid and the vapour bulk (which has wide theoretical and experimental support, as discussed above) thus manifests as a discontinuity across the interface in the evaporation model. We now briefly show how this is modelled by Chen [17].

Chen’s model [17] promises to remedy some of the problems with the SRT, mainly to do with its inadequacy in predicting the exact magnitude of temperature discontinuities. For brevity, we do not go over Chen’s derivation in detail, but make some salient points on the underlying methodology of his model that illustrate the points we make above. To calculate the net mass or evaporation flux (a primary goal of evaporation modelling) at the liquid-vapour interface in this model, one needs to know the molecular distribution function on either side of the interface. Chen approximates the molecular distribution function (\(f_s^{+}\left( T_L\right)\)) of molecules leaving the interface (s) in terms of a Maxwellian distribution, which looks like:

$$\begin{aligned} f_s^{+}\left( T_L\right) =\alpha \left( T_L\right) n_s\left( T_L\right) \left[ \frac{m}{2 \pi k_B T_L}\right] ^{3 / 2} \exp \left\{ -\frac{m\left[ v_x^2+v_y^2+v_z^2\right] }{2 k_B T_L}\right\} , \end{aligned}$$
(5)

for \(v_z>0\). The outgoing mass flux from the evaporating interface (dependent on \(T_L\)) is then given by:

$$\begin{aligned} J_m^{+}=\alpha n_S\left( T_L\right) \sqrt{\frac{k_B T_L}{2 \pi m}}-(1-\alpha ) J_m^{-}. \end{aligned}$$
(6)

The molecular distribution function away from the interface is given by:Footnote 12

$$\begin{aligned} \begin{aligned} f=&f_d-\tau v_z \frac{d f_d}{d z}=f_d-f_d \tau v_z \times \\ {}&\left\{ \frac{1}{n} \frac{d n}{d z}+\frac{1}{T} \frac{d T}{d z}\left[ \frac{m\left[ v_x^2+v_y^2+\left( v_z-u(z)\right) ^2\right] }{2 k_B T(z)}-\frac{3}{2}\right] +\frac{m\left[ v_z-u(z)\right] }{k_B T(z)} \frac{d u}{d z}\right\} , \end{aligned} \end{aligned}$$
(7)

where f is the molecular distribution function in the phase space, \(f_d\) is the displaced Maxwell velocity distribution (based on an approximation to the Boltzmann equations), \(\tau\) is the relaxation time, n is the density, m is the molecular mass, \(\alpha\) is the accommodation coefficient, T is the absolute temperature, z is the distance from the interface, v is the velocity of the molecules with components \(v_x\), \(v_y\), \(v_z\), and u(z) is the average velocity in the z-direction (perpendicular to the interface).

The mass flux J coming towards the interface from the vapour bulk (dependent on \(T_V\)) can then be written as:

$$\begin{aligned} J_m^{-}=-n\left( \frac{k_B T_V(0)}{2 \pi m}\right) ^{1 / 2}+\frac{J_m}{2}. \end{aligned}$$
(8)

The unknown term \(J_m\) refers to the net molecular flux arising from the diffusion of molecules in the Knudsen layer, which distorts the molecular distribution function just outside the Knudsen layer in the vapour bulk where T(z) is estimated; this is captured in equation (7). The temperature jump at the interface is the difference between \(T_L\) and \(T_V(0)\).

After solving for \(J_m\), the sum of \(J_m^{+}\) and \(J_m^{-}\) gives the net molecular flux as:

$$\begin{aligned} J_m=\frac{2 \alpha }{2-\alpha }\left[ n_s\left( T_L\right) \sqrt{\frac{k_B T_L}{2 \pi m}}-n\left( T_V\right) \left( \frac{k_B T_V}{2 \pi m}\right) ^{1 / 2}\right] . \end{aligned}$$
(9)

Assuming that the ideal gas law, \(P(z) = k_B T(z)n(z)\), holds in this situation, the mass flux obtained from the evaporation flux is:

$$\begin{aligned} \begin{aligned} {\dot{m}}&=\frac{2 \alpha }{2-\alpha } \sqrt{\frac{M}{2 \pi R}}\left[ \frac{P_s\left( T_L\right) }{\sqrt{T_L}}-\frac{P_v(0)}{\sqrt{T_V(0)}}\right] . \end{aligned} \end{aligned}$$
(10)

Some observations about the model are now in order. The mass flux in the model depends on the pressure and the temperature of the liquid and vapour bulks. As outlined above, since the different distribution functions, \(f_s^{+}\left( T_L\right)\) and f, on either side of the interface depend only on the temperature (and the pressure) of the liquid and the vapour bulk respectively, this dependence enters as a discontinuity \(\left[ \frac{P_s\left( T_L\right) }{\sqrt{T_L}}-\frac{P_v(0)}{\sqrt{T_V(0)}}\right]\) in the equation for the net mass flux obtained by adding the fluxes from the liquid and the vapour sides. Note that both pressure and temperature discontinuities are represented in this model. (For reasons of space, we will not go through the details of the pressure discontinuities, but they seem to be as ‘real’ as the temperature discontinuities. Presumably then, volume discontinuities shown in P-V diagrams of phase transitions may also have an analogous justification, based on abrupt changes at the interface.)Footnote 13 The difference in distribution functions is due to the peculiar microphysics of the situation, as discussed above. Therefore, importantly, no matter what mathematical form of distribution function is used to describe the thermal dynamics in the liquid and the vapour bulks, as long as the distribution function depends on the temperature of only one side of the interface, this will enter as a discontinuity in the model of the evaporation flux or mass flux. This is analogous to how the temperature discontinuity is modelled in the SRT-based evaporation model.
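To see how the jump enters the prediction, here is a minimal sketch of Eq. (10) (our construction: the saturation-pressure correlation and the operating conditions are illustrative assumptions, not Chen’s values). Setting \(T_V(0) = T_L\), as a continuum treatment would, visibly changes the predicted flux:

```python
import math

R = 8.314462618  # universal gas constant, J/(mol K)
M = 0.018015     # molar mass of water, kg/mol

def p_sat(T):
    """Saturation pressure of water in Pa (Antoine-type fit; an assumed
    auxiliary correlation, nominally for ~0-100 C, not taken from [17])."""
    A, B, C = 8.07131, 1730.63, 233.426  # T in Celsius, P in mmHg
    return 133.322 * 10 ** (A - B / (C + (T - 273.15)))

def mass_flux(T_L, T_V0, P_v0, alpha=1.0):
    """Eq. (10): m_dot = (2a/(2-a)) * sqrt(M/(2 pi R))
    * [P_s(T_L)/sqrt(T_L) - P_v(0)/sqrt(T_V(0))]."""
    prefactor = (2.0 * alpha / (2.0 - alpha)) * math.sqrt(M / (2.0 * math.pi * R))
    return prefactor * (p_sat(T_L) / math.sqrt(T_L) - P_v0 / math.sqrt(T_V0))

# Illustrative low-pressure evaporation conditions (assumed values):
T_L, P_v0 = 270.0, 400.0  # liquid-side temperature (K), vapour pressure (Pa)
print(mass_flux(T_L, T_L, P_v0))        # continuum assumption: T_V(0) = T_L
print(mass_flux(T_L, T_L + 8.0, P_v0))  # with an 8 K interfacial jump
```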

Therefore, both the SRT-based model and Chen’s model bypass the continuum picture by modelling the microscopic variations in the system’s interfacial conditions as a discontinuity. These models are not only more empirically adequate but also provide greater insight into the underlying physics of evaporation. Thus, contrary to what Bangu suggests [6, 7], the discontinuity appears in both the data (detailed experimental observations at the molecular level) and the phenomena (evaporation models of phase transitions). We address potential concerns related to the existence of temperature discontinuities, in cases where temperature is not even well-defined, such as at the interface, in sect. 4.Footnote 14

3.1 Responses to Quick Objections

Let us consider some obvious objections here. One objection could be that even if temperature behaves discontinuously at these microscopic levels, such discontinuities are insignificant at the macro level, where many continuum models can be applied without the complications noted above. A second objection could be that the discontinuity of temperature at the micro-level can potentially be “smoothed out” by extrapolating temperature distribution curves so that they appear continuous at the macro-level, and continuum-based results (like the topological Borsuk-Ulam theorem) can still be applied. We have dealt with the second objection briefly above; some more details follow, along with a rebuttal of the first objection. The idea is that important theoretical insights and puzzles concerning the micro-causal factors affecting thermal phenomena can be overlooked under the continuum assumption.

As for the first objection, large temperature discontinuities have been discovered at macroscopic levels, such as near the Sun’s corona and within interstellar molecular clouds [29, 30]. The Sun’s Coronal Heating Problem (CHP), where the Sun’s corona is estimated to be at over a million K even though the surface just beneath it is only at about 5,300 K, is one of the biggest unsolved problems in astrophysics [30]. The anomalous or inverted temperature profile discussed in the case of evaporating liquid-vapour interfaces above is observed in the CHP as well, since the temperature function (whatever form it takes), starting from the core of the Sun and moving to the outer periphery, is neither monotonic nor continuous. Interestingly, the denser parts of the Sun (except the core) are colder than the dilute ones (the corona), contrary to expectation [29]. The authors of [31] note that although the CHP is a very complicated problem, with the exact mechanism still contested, some conceptual similarities with the case of evaporating droplets can be observed:

If one thinks of the solar surface heating the corona, it would be impossible for the corona to be hotter than the surface, but if one thinks of the higher-energy particles escaping the surface, there is no reason the corona could not have a higher temperature. Similarly, if during evaporation, the molecules of higher energy are the ones escaping the liquid, there is no reason the interfacial vapour temperature could not be higher than that of the liquid.

So, the discontinuities and their magnitude are relevant at the macro-scale as well. They can be underpinned by the complex surface kinetics of the system and by the contingent factors involved in these kinetics, such as the dependence of the magnitude of the temperature discontinuity at evaporating liquid-vapour interfaces on the evaporation flux [31, p. 7]. In addition to this, Aursand and Ytrehus [16] observe that the discontinuity of temperature at the interface (and across the Knudsen layer) does not become irrelevant just because one is analysing an evaporation problem from the macro-scale (pp. 68–69). Temperature discontinuities (and similar discontinuities at material interfaces) are “rules rather than exceptions” [17]. The boundary conditions across the interface still play an important role in determining macro-level outcomes because the temperature jump or discontinuity depends on the driving force (the difference in pressure between the liquid and vapour bulk) and is a uniquely determined output from it [16, pp. 76–78].

As for the second objection, even if the temperature profile can be considered continuous within the Knudsen layer in certain instances, this simplifying assumption is often explanatorily and predictively problematic (as also discussed briefly above). For instance, most of the temperature jump occurs at the boundary of the interface transition (see Fig. 1) and cannot be smoothed out without misrepresenting the temperature profile. This is because quasi-equilibrium models, which treat the interface temperature as continuous, suffer from major defects. For one thing, they are only applicable in cases of “weak evaporation”, where it is assumed that the interface temperature is continuous [16, p. 75]. To reiterate, the full non-linear SRT model includes the inter-molecular-vibration component \(\frac{\Delta S_{\omega }}{k_B}\), which takes into account the temperature jump at the interface; the continuum side of the equation only looks at the liquid temperature, \(T_L\). For another thing, quasi-equilibrium models wrongly predict the evaporation mass flux across the droplet since they do not take into account an important mode of heat transfer across the droplet, which happens via convection currents driven by surface tension in the evaporating droplet, termed the thermocapillary effect [16, p. 78] — this effect is in addition to the coupling factor between the liquid and the vapour phase mentioned above.Footnote 15 A third point is that they make incorrect predictions about the direction of the temperature discontinuity anomaly — the temperature is actually higher on the vapour side and lower on the liquid side — as these continuum models rely on predictions from the KTG, as pointed out above. Finally, in order to use a smoothing extrapolation function one needs to take into account the contingent and specific causal factors that lie behind the modelling process, such as the thickness of the evaporating layer, any possible contamination of the layer, the pressure-based driving force, the shape of the interface, the scale of the modelling, the boundary conditions around the interface, and so on [15, 16, 24]. As mentioned above, the probabilistic, quantum-mechanical SRT-based evaporation modelling prohibits this kind of simplistic treatment.

The temperature discontinuities noted in these cases thus depend strongly on the vapour pressure [24, 32]: their magnitude decreases with an increase in pressure, and they may well disappear at higher pressures, such as atmospheric pressure, where one generally does not observe temperature discontinuities. Presumably, where pressures are low, such as in rarefied gases high in the atmosphere, recording temperature discontinuities may be more likely. This suggests that the continuity of temperature is contingent upon, inter alia, the pressure conditions around the interfaces of evaporating droplets and the microphysics of fluid flow, of which more in sect. 5. This being so, even if one finds a continuous function supposedly representing the average temperature distribution in a large system, like the atmosphere around the earth, at a given point of time, even a little variation in the contingent factors can change the temperature distribution significantly. An adjustment in the function reflecting this change will require a detailed knowledge of the specific causal factors underpinning the variation. It is important to note that this is not merely an epistemic difficulty; there is no certainty whether such a continuous function is even to be found, given the extremely large number of data points (encoding temperature values) in a large control volume. For instance, temperature discontinuities across evaporating water droplets in the atmosphere can be large and accumulate quickly, and therefore no one function, without a consideration of the contingent factors involved, can presumably represent the temperature distribution around the earth accurately, let alone continuously. In light of all these points, it cannot simply be assumed that the temperature of a system is, in all cases, accurately represented by a continuous function. Therefore, topological theorems like the Borsuk-Ulam theorem, which require continuous mappings of variables onto topological spaces, will not necessarily yield correct predictions. It is thus not true that there must be two antipodal points on the earth at any given moment of time with the same temperature and pressure, contrary to what some mathematicians and philosophers have claimed [9, p. 49]; [33, pp. 157–159]; [34, p. 21].

Depending on the vapour pressure and the peculiar microphysical conditions around the interfaces of evaporating droplets, phase transitions can be modelled with both continuous and discontinuous representations of temperature, contrary to what the proponents of the continuum picture assume. Continuum idealisations are thus dispensable “in principle”, and dispensing with them is even desirable in these contexts.

4 Temperature Discontinuities Across Solid-Solid Interfaces

In this section, we discuss the problems with a continuum representation of temperature in heat transfer problems across solid-solid interfaces. The problems we discuss here are analogous to those discussed in the previous section — that is, the boundary or interfacial conditions prevent a continuum representation of temperature. However, this case presents some additional complexities due to the existence of multiple thermal carriers and the lack of their mutual equilibrium at the interface leading to ambiguities in how the temperature at the interface can be defined. We also address a potential objection as to whether it is legitimate to talk about the discontinuous representation of temperature in a situation where it is not well-defined.Footnote 16

Temperature discontinuities across interfaces, such as solid-solid or solid–liquid interfaces, are essentially caused by the reflection and scattering of thermal carriers (phonons, electrons, magnons, etc.) due to a change in the vibrational properties of the adjacent materials [35]. This results in a breakdown of local thermal equilibrium at the interface leading to an ill-defined overall temperature at the interface. These discontinuities have been widely reported and studied [17, 36,37,38]. The ratio of temperature discontinuity at an interface to the heat flux across the interface is called the Thermal Boundary Resistance (TBR), or Kapitza Resistance. Modelling and predicting temperature discontinuities across interfaces is crucial for various physical applications, such as predicting heat flow and avoiding heat death in semiconductors or nano-transistors. Given that these discontinuities are reported at scales comparable to the MFP of thermal carriers, the models of TBR and corresponding studies of heat flow focus on a micro-level analysis, i.e., at nano, molecular or atomic scales. Numerical methods, such as Molecular Dynamics (MD) simulation, are used to predict these discontinuities since a complete TBR modelFootnote 17 is yet to be found [35], and the phenomenon remains poorly understood [40]. (More on the relevance of MD simulations to the continuum fallacy shortly.)
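In symbols (our gloss of the definition just given), \(R_K = \Delta T / q\), where \(\Delta T\) is the interfacial temperature jump and q is the heat flux across the interface. For a sense of scale (our illustrative numbers, chosen only for order of magnitude): with \(R_K = 10^{-8}\ \mathrm{K\,m^2\,W^{-1}}\), a representative order of magnitude for solid-solid interfaces, a heat flux of \(q = 10^{8}\ \mathrm{W\,m^{-2}}\) of the kind encountered in microelectronics sustains a jump of \(\Delta T = R_K\, q = 1\) K across the interface.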

Fig. 4: Temperature profile representing a metal-nonmetal interface, where \(T_e\) and \(T_{ph}\) are the temperatures of electrons and phonons on the metal side and \(T_n\) is the phonon temperature in the non-metal. The total temperature jump at the interface (\(\Delta T = \Delta T_{e-ph} + \Delta T_{ph-ph}\)) is composed of both an electron–phonon coupling contribution \(\Delta T_{e-ph}\) and a phonon-phonon coupling contribution \(\Delta T_{ph-ph}\). Figure and description from Chen et al. [39, p. 025002-13]

Since the scale of the interfacial microstructure is comparable to or smaller than the MFP of the thermal carriers or their wavelengths, “...an understanding of heat transport beyond that achievable at the continuum level” is needed given the ambiguity in the definition of temperature in such cases [36, 794]. (We discuss the relationship of this ambiguity with the continuum fallacy shortly.) In this context, Wilson et al. [38] note,

“...different types of thermal excitations can have drastically different temperature and heat flux boundary conditions. For example, electrons in a metal near a metal/dielectric interface have an adiabatic boundary condition [i.e., zero heat flux], while phonons in the metal do not; this means local thermal equilibrium cannot exist between electrons and phonons in close proximity to a metal/dielectric interface that is subjected to a heat flux” (p. 144305-1).

For instance, phonons and electrons are the dominant thermal carriers in metals, whereas phonons are the dominant carriers in dielectrics or semi-conductors. This leads to an ambiguity in the definition of temperature at such interfaces because various couplings occur during heat transport in solids: taking the simple case of metal/non-metal interfaces, these include electron–electron coupling within the metal, electron–phonon coupling within the metal, electron–phonon coupling at the metal/non-metal interface and phonon-phonon coupling at the metal/non-metal interface [41]. Amongst these couplings, electron–electron couplings are the fastest to transport heat within the metal and reach an equilibrium denoted by the electron temperature, or \(T_e\), whereas electron–phonon couplings across the metal/non-metal interface require a longer time. This is captured by the Two-Temperature Model (TTM), developed by Anisimov et al. [42], which describes a state of non-equilibrium between electrons (on the metal side) and the phonon lattice (on the non-metal side).Footnote 18 Thus, to denote electron–phonon interactions at such an interface, two different temperatures are defined in the metallic layer (at the interface) for phonons and electrons, \(T_{ph}\) and \(T_e\), respectively, and \(T_n\) for phonons in the non-metallic layer [38, 39] — see Fig. 4. Different temperatures for phonons and electrons are caused by the difference in their heat capacities and the varying mechanisms via which they approach equilibrium with the lattice.Footnote 19 Considering their different temperature profiles, the total temperature jump across the interface is calculated as \(\Delta T = \Delta T_{e-ph} + \Delta T_{ph-ph}\), which is composed of both an electron–phonon coupling contribution, \(\Delta T_{e-ph}\), and a phonon-phonon coupling contribution, \(\Delta T_{ph-ph}\) [39, p. 025002-14].Footnote 20 But the TTM is not free from theoretical and experimental difficulties either, because it is based on various assumptions which do not hold in all experimental settings.Footnote 21 Therefore, modelling and calculating the temperature jump, in cases where it is feasible, requires the consideration of various factors at the microscopic level, including the nature of the material boundary and the non-equilibrium couplings of the thermal carriers along it.Footnote 22
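For concreteness, the TTM is standardly written as a pair of coupled diffusion equations (we give a textbook form, not Anisimov et al.’s original notation):

$$\begin{aligned} C_e \frac{\partial T_e}{\partial t}&= \nabla \cdot (k_e \nabla T_e) - G\,(T_e - T_{ph}) + S, \\ C_{ph} \frac{\partial T_{ph}}{\partial t}&= \nabla \cdot (k_{ph} \nabla T_{ph}) + G\,(T_e - T_{ph}), \end{aligned}$$

where \(C_e\) and \(C_{ph}\) are the electron and lattice heat capacities, \(k_e\) and \(k_{ph}\) the respective thermal conductivities, G the electron–phonon coupling constant and S an external source term (e.g., laser heating). It is the finite coupling G that permits \(T_e\) and \(T_{ph}\) to differ near the interface.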

Therefore, the issues underlying the study and prediction of temperature discontinuities at these interfaces relate to the validity of the continuum assumption and to whether temperature can even be properly defined at these scales [36, p. 794]. Wilson et al. [38] further note,

“on macroscopic scales, heat flow in a material is well described by the heat diffusion equation and depends only on the magnitude of the material’s heat capacity and thermal conductivity. The heat diffusion equation is a valid description of heat flow as long as all quasi-particles that store and carry heat are in local thermal equilibrium. In other words, the occupation of all thermal excitations must be well described by a single temperature on time-scales that are comparable to the rate of heating/cooling and length-scales that are comparable to the quasi-particle [such as a phonon] mean free paths” (p. 144305–1).

At scales larger than the phonon MFP, heat transport is diffusive, that is, heat transport occurs by the scattering of phonons with neighbouring molecules, and Fourier’s law of heat diffusion is valid. However, at scales shorter than the phonon MFP, heat is transferred ballistically, without the scattering of phonons by neighbouring molecules; in such cases, Fourier’s law of heat diffusion breaks down [47, p. 3277]. Therefore, Khalifa et al. [10] are incorrect in saying that it is physically impossible for temperature to be discontinuous, and Colyvan [9] is incorrect in treating its continuity as necessary. Fourier’s law, \(q = -\lambda \nabla T\), which relates the heat flux q to the temperature gradient \(\nabla T\) via the thermal conductivity \(\lambda\), assumes instantaneous heat transfer upon the development of a temperature gradient, an assumption that is highly non-trivial at such scales. More accurate generalisations of heat transfer, such as the Maxwell-Cattaneo-Vernotte heat transport equation, \(\tau \frac{\partial q}{\partial t} + q = -\lambda \nabla T\), allow a non-zero relaxation time \(\tau\) between the development of a temperature gradient and the heat transfer [48].

For the cases in which the Fourier’s law breaks down, the “microscopic knowledge concerning the system’s thermal excitations is necessary to accurately predict its thermal response” [38, p. 144305-1]. Some ways to predict the thermal response of such multi-layered systems at this scale include the use of the Boltzmann equations of transport or MD simulations.Footnote 23 Boltzmann equations of transport predict steep temperature gradients at the boundaries [36, 798] because the distribution function of phonons on either side of the boundary/interface differs significantly — this difference is largely due to the lack of local thermal equilibrium between thermal carriers given their scattering and reflection at the interface. (This is analogous to the prediction of temperature discontinuities due to differing distribution functions of the fluid molecules on either side of the liquid–vapour interface in sect. 3.) The prediction of such discontinuities can be further corroborated with MD simulations. MD simulations use classical Newtonian equations of motion to predict the thermal response of a system by tracking the dynamics of its individual atoms based on empirical potentials [49]. The temperature of a region is approximated via the statistical averaging of the kinetic energy of a population of atoms within the region. This kinetic energy can be converted into temperature using both classical and quantum approaches, depending on the distribution function used [36]. MD simulations predict significant temperature discontinuities at the molecular level (see Fig. 5) which may also be measurable [36, p. 800].
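A minimal sketch of this averaging step (our construction; the atom count matches the simulation cells of Fig. 5, but the masses and target temperatures are illustrative assumptions): in the classical case, equipartition gives \(T = 2\langle KE \rangle / (3 N k_B)\) for a cell of N atoms, so two adjacent cells whose atomic populations have different kinetic energies register as a jump in the temperature-distance profile.

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant, J/K

def cell_temperature(masses, velocities):
    """Classical MD temperature of one cell via equipartition:
    (3/2) * N * k_B * T = sum over atoms of (1/2) * m * |v|**2."""
    ke = 0.5 * np.sum(masses[:, None] * velocities**2)
    return 2.0 * ke / (3.0 * len(masses) * k_B)

# Illustrative setup (assumed numbers): two adjacent cells of 840 atoms,
# as in the simulations of [36], sampled from Maxwellians at different
# temperatures on either side of an interface.
rng = np.random.default_rng(1)
m = np.full(840, 6.63e-26)  # argon-like atomic mass, kg (an assumption)

def sample_velocities(T):
    sigma = np.sqrt(k_B * T / m[0])  # per-component thermal speed
    return rng.normal(0.0, sigma, size=(840, 3))

for T_true in (310.0, 290.0):  # left cell vs. right cell of the interface
    T_est = cell_temperature(m, sample_velocities(T_true))
    print(f"target {T_true} K -> estimated {T_est:.1f} K")
```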

Fig. 5: Temperature discontinuity recorded across MD simulation cells in [36, p. 797] — each cell is roughly 1 MFP long and contains 840 atoms

One must, however, note that the empirical reliability of the kind of results shown in Fig. 5 is limited given the classical roots of MD simulations — that is, their use of classical Newtonian equations and continuous empirical potentials alongside the assumption that each atomic vibrational mode is equally excited. So, MD simulations do not really provide an insight into the underlying physics of the problem [36]; nevertheless, the prediction of temperature discontinuities is robust and can be verified across various methods of measurement, even if the exact estimate may vary [36, 39]. (The exact estimate depends on the temperature range over which the simulations are run and on whether one is using classical Maxwellian or quantum Fermi-Dirac distribution functions in translating the kinetic-energy averages of each simulation cell into the temperature of the cell. The quantum estimate becomes equal to the classical estimate only in high temperature ranges, note Cahill et al. [36, pp. 799–800].) The differing estimates of temperature discontinuities only highlight the ambiguity and the complexity of defining the temperature at such multi-scale boundaries in MD simulations, which track the micro-level details of the system. Cahill et al. [36], in a landmark study on nano-scale thermal transport, remark:

An important issue is the size of the region over which temperature is defined. The classical definition is entirely local, and one can define a [local] temperature for each atom or plane of atoms [in the MD simulation]. For the quantum definition, the length scale is defined by the mean-free-path, \(l_{\lambda q}\) of the phonon. If two regions of space have a different temperature, then they have a different distribution of phonons. The phonons can change their distribution by scattering. The most important scattering is the anharmonic process in which one phonon divides into two, or two combine to one. This process occurs on the length scale of the mean free path [MFP]. A local region with a designated temperature must be larger than the phonon scattering distance.Footnote 24...

This phonon viewpoint of temperature implies that temperature cannot be defined for a particular atom, or a plane of atoms. In particular, there should not be an abrupt variation in temperature between a plane of atoms. Although this definition seems quite reasonable, it makes the numerical results...[pertaining to temperature discontinuities at interfaces]...quite puzzling. The MD simulations by different [research] groups do show an abrupt change in the kinetic energy of a plane of atoms at the twin boundary. Regardless of which temperature scale is adopted...[classical or quantum], a graph of temperature versus distance will show an abrupt change. (p. 800)

They note that one might resolve this difficulty of defining temperature at scales below the MFP by treating a grain or twin boundary as a natural boundary for a region of temperature, but caution that:

Even if one adopts this hypothesis [that a grain boundary is a ‘natural’ boundary for temperature definition], it still means that temperature cannot vary within a grain, or within a superlattice layer [a thin layer of alternately stacked materials], on a scale smaller than \(l_{\lambda q}\). If the layer thickness of the superlattice is less than \(l_{\lambda q}\), then one cannot define T(z) [local temperature] within this layer. The whole layer is probably at the same temperature. This point is emphasised, since all theories of heat transport in superlattices have assumed that one could define a local temperature T(z) within each layer [of the superlattice]. [36, p. 800]

Therefore, there is ambiguity in the definition of temperature at such scales and the continuum hypothesis is invalidated due to the peculiar microscopic constitution of such systems.

At this point two questions arise. First, can one legitimately speak of temperature discontinuities in cases where temperature is not even well-defined? Second, supposing the answer to the first question is “yes”, is every case of temperature discontinuity simply a case where the temperature between one point and another is not well-defined, or are there cases of temperature discontinuities which involve a sudden jump in temperature between two points that are only infinitesimally far apart from each other?Footnote 25 The answer to the first question is affirmative, since the alternative would be to speak of temperature varying continuously within an interval where temperature is not well-defined, which would, plainly, be absurd. It is common practice to speak of discontinuities in mathematical functions at points where the value of the function is not well-defined or blows up. The answer to the second question is not so straightforward. Within the experimental and the theoretical limits, one can find two points, adjacent at the molecular level, between which there is a sharp jump in the value of temperature — for instance, the length of the MD simulation cell is typically smaller than the MFP of the phonons which carry most of the heat in such ballistic situations [36, p. 800]. So, at the molecular level, there is a discontinuity in the sense that the temperature defined or measured at two adjacent points shows a sharp jump. However, at scales much smaller than the MFP, matters get a bit more complicated. We have noted that it is not possible to determine the interfacial temperatures, owing to the theoretical and experimental limitations on how precise the models can be or how small the measurement probes can be. To reiterate, this is despite the fact that the discontinuities are measured and predicted at molecular scales, the smallest scales at which thermal transport phenomena are currently investigated in the scientific literature. At the scale of the interface, which is only a few Å long (much smaller than an MFP), the temperature is ill-defined. The temperature on a given side of the interface is thus obtained by an extrapolation of the temperature of the molecules on that side of the interface (such as the liquid or vapour bulk), and the jump is calculated by subtracting the temperatures extrapolated from the liquid and the vapour bulk towards the interface (a minimal sketch of this extrapolation appears below). It thus seems reasonable to say that the ambiguity in the definition of temperature at the interface, alongside the sharp jump across it, is modelled as a discontinuity. So, seen at this scale, the discontinuity of temperature emerges partly from this ambiguity. One can, however, legitimise discontinuous representations of temperature, even if temperature is ill-defined at a certain scale, analogous to how one legitimises ‘continuous’ representations of temperature despite our ignorance of what lies under the hood of such representations. After all, if these scientific representations are idealised descriptions of physical phenomena, one might ask whether there is a fundamental difference between continuous and discontinuous representations of these phenomena — they are both representations that work in a certain context and either fail or become intractable in others. We say a bit more on this in our concluding remarks.
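
The extrapolation just described can be made concrete with a minimal sketch; the profile data below are invented for illustration and do not come from any cited simulation. Linear profiles are fitted to the bulk regions on each side, both fits are evaluated at the interface position, and the difference gives the temperature jump.

```python
import numpy as np

# Hypothetical temperature profiles on either side of an interface at z = 0;
# the interfacial zone itself (|z| < 1 nm) is excluded as ill-defined.
z_liquid = np.linspace(-5.0, -1.0, 9)   # position, nm (liquid-side bulk)
T_liquid = 300.0 + 2.0 * z_liquid       # temperature, K
z_vapour = np.linspace(1.0, 5.0, 9)     # position, nm (vapour-side bulk)
T_vapour = 285.0 + 1.0 * z_vapour       # temperature, K

# Fit a linear profile to each bulk region separately.
liq_fit = np.polyfit(z_liquid, T_liquid, 1)
vap_fit = np.polyfit(z_vapour, T_vapour, 1)

# Extrapolate both fits to the interface and subtract.
jump = np.polyval(liq_fit, 0.0) - np.polyval(vap_fit, 0.0)
print(f"Extrapolated temperature jump at the interface: {jump:.1f} K")
```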

5 Temperature Discontinuities Due to Microscopic Fluctuations and Slip Flows

Fig. 6 Representative sampling volume [50, p. 18]

In this section, we discuss some general problems with the continuum idealisation by examining a broader instance of how microscopic fluctuations affect the description of a macroscopic phenomenon. Although these problems are widely known, it is important to mention them briefly here, since they help us widen the scope of our arguments.

As discussed briefly in sect. 2, Butterfield [5] makes a case for continuum idealisations (in the cases where they are possible) based on mathematical convenience and empirical adequacy. Batterman [4] argues that Butterfield’s strategy is essentially the Representative Sampling Volume (RSV) strategy, wherein a macroscopic variable is averaged over microscopic fluctuations. Batterman also notes that although the RSV strategy is justified in many cases, it has limited applicability in cases where higher micro-scale structures, such as dislocations and metastabilities, become important. Batterman [4] thus makes a case for the homogenisation approach based on the Renormalisation Group (RG) strategy, which is capable of preserving the micro- and meso-level structures that the RSV strategy may completely dissolve — the RSV strategy yields inaccurate predictions in such cases. Although we do not discuss the homogenisation approach, for lack of space, we largely agree with Batterman’s assessment of the RSV strategy. However, we make some additional points about the cases where the RSV strategy is inapplicable.
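
The statistical rationale behind RSV averaging, where it works, can be seen in a toy sketch of our own (it is not drawn from [4] or [5]): the relative fluctuation of a quantity averaged over N molecules scales as \(1/\sqrt{N}\), so a sampling volume containing sufficiently many molecules smooths the microscopic noise away.

```python
import numpy as np

# Toy demonstration: relative fluctuation of an N-molecule average ~ 1/sqrt(N).
rng = np.random.default_rng(1)
n_trials = 100
for n_molecules in [10, 1_000, 100_000]:
    # Hypothetical per-molecule kinetic energies (arbitrary units).
    samples = rng.exponential(1.0, size=(n_trials, n_molecules))
    averages = samples.mean(axis=1)          # one average per sampling volume
    rel_fluct = averages.std() / averages.mean()
    print(f"N = {n_molecules:>7}: relative fluctuation ~ {rel_fluct:.4f}"
          f" (1/sqrt(N) = {1.0 / np.sqrt(n_molecules):.4f})")
```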

Colin [50] notes:

When applicable, the continuum assumption is very convenient since it erases the molecular discontinuities by averaging the microscopic quantities on a small sampling volume. All macroscopic quantities of interest in classic fluid mechanics (density..., velocity..., pressure..., temperature..., etc.) are assumed to vary continuously from point to point within the flow...In order to respect the continuum assumption, the microscopic fluctuations should not generate significant fluctuations of the averaged quantities. (p. 18)

The microscopic fluctuations, as shown in Fig. 6, are usually averaged over a representative sampling volume in order to assign a macroscopic quantity, like temperature or pressure, to each point of a control volume. (The representative sampling volume is a part of the larger control volume under investigation, such as the earth’s atmosphere.) However, Colin [50] notes further:

...the size of a representative sampling volume must be large enough to erase the microscopic fluctuations, but it must also be small enough to point out the macroscopic variations, such as velocity or pressure gradients of interest in the control volume...[as in Fig. 6]...If the shaded area in [Fig.  6]...does not exist, the sampling volume is not representative and the continuum assumption is not valid.

This implies that the representative sampling volume must be chosen carefully, taking into account variations at both the macroscopic and the microscopic levels and considering contingent factors such as the length of the MFP, which needs to be small with respect to the sampling volume. The MFP itself depends on several contingent factors, such as air pressure, molecular density and humidity [51], and can range from a few nanometres to several kilometres depending on how dense or rarefied the gas is [52, p. 386]. For instance, the length of the representative sampling volume in air at sea level corresponding to \(1\%\) statistical fluctuations is roughly 72 nm, comparable to the MFP of about 49 nm [50, p. 19]. Where the MFP runs to kilometres, such as in the rarefied atmosphere high above the earth, the continuum assumption can be invalid even in a large control volume. One must, therefore, assess the relevant contingent causal factors beforehand in order to know whether, at a given scale, the continuum assumption is applicable or not. In addition, the continuum assumption requires that the sampling volume be in thermodynamic equilibrium, meaning that there should be sufficiently many high-frequency inter-molecular collisions within the sampling volume for statistical averages to be defined (ibid.). This again requires the MFP to be much smaller than the length of the representative sampling volume. The point can be put by saying that the continuum assumption is valid only for flows characterised by low Knudsen numbers, \(K_n = \frac{\lambda }{L_{sv}} \ll 1\), where \(K_n\) is the Knudsen number, \(\lambda\) is the length of the MFP, and \(L_{sv}\) is the length of the representative sampling volume.
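
For orientation, here is a minimal sketch of how the MFP and the Knudsen number might be estimated from hard-sphere kinetic theory. The molecular diameter and the rarefied-atmosphere conditions are illustrative assumptions rather than values from [50,51,52], and the regime thresholds are the conventional ones.

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(T, p, d):
    """Hard-sphere kinetic-theory MFP: lambda = k_B T / (sqrt(2) pi d^2 p)."""
    return k_B * T / (np.sqrt(2.0) * np.pi * d**2 * p)

def flow_regime(kn):
    """Commonly quoted Knudsen-number regimes (thresholds are conventional)."""
    if kn < 0.01:
        return "continuum (no slip)"
    if kn < 0.1:
        return "slip flow"
    if kn < 10.0:
        return "transitional"
    return "free molecular"

# Air-like gas at sea level vs. a rarefied upper atmosphere (illustrative).
d_air = 3.7e-10  # effective molecular diameter, m (approximate)
cases = [("sea level, 1 um sampling length", 288.0, 101325.0, 1e-6),
         ("rarefied, 1 m sampling length", 220.0, 1e-2, 1.0)]
for label, T, p, L_sv in cases:
    lam = mean_free_path(T, p, d_air)
    kn = lam / L_sv
    print(f"{label}: MFP = {lam:.2e} m, Kn = {kn:.2e} -> {flow_regime(kn)}")
```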

The Knudsen number also plays an important role in analysing whether there is a slip or jump in the values of the physical variables observed over the sampling volume, such as where the flow of a fluid is observed along a wall. For instance, at high Knudsen numbers (\(K_n > 0.1\)), the continuum assumption is not necessarily valid because the “inter-molecular collisions...[in the fluid are]...negligible compared with [the] collisions between the gas molecules and the walls.” [50, p. 21]. If the gas is rarefied near the wall, the statistical fluctuations can remain large in any control volume near it — such that temperature is not even well-defined near the wall — or the lack of interaction between the wall and the gas due to rarefaction may mean that no thermal equilibrium is achieved close to the wall. In such instances, velocity slips (due to insufficient momentum exchange) and temperature jumps (due to a lack of sufficient thermal contact between the gas and the wall) can be observed in a control volume close to the wall. Such effects close to the wall are observed when the size of the control volume is shrunk considerably, such as in micro-channel flows, resulting in the predominance of surface (wall) effects over volume effects [50, pp. 22–23]. This provides us with another example in which one needs to know the contingent microphysical factors beforehand in order to assess whether the continuum assumption is valid.
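
Such slip-flow effects are commonly modelled with first-order slip and jump boundary conditions of the Maxwell and Smoluchowski type. The sketch below shows the form these conditions take; the gas properties, the gradients and the assumption of full accommodation are our own illustrative choices, not values from [50].

```python
def maxwell_slip_velocity(sigma_v, lam, dudn):
    """First-order Maxwell slip: u_slip = ((2 - sigma_v)/sigma_v) * lam * du/dn."""
    return (2.0 - sigma_v) / sigma_v * lam * dudn

def smoluchowski_temperature_jump(sigma_T, gamma, Pr, lam, dTdn):
    """First-order temperature jump at a wall:
    T_gas - T_wall = ((2 - sigma_T)/sigma_T) * (2 gamma/(gamma+1)) * (lam/Pr) * dT/dn
    """
    return ((2.0 - sigma_T) / sigma_T
            * (2.0 * gamma / (gamma + 1.0)) * (lam / Pr) * dTdn)

# Illustrative values for an air-like gas in a micro-channel (not from [50]).
lam = 65e-9    # mean free path, m
gamma = 1.4    # ratio of specific heats
Pr = 0.71      # Prandtl number
dudn = 1e5     # wall-normal velocity gradient, 1/s
dTdn = 1e6     # wall-normal temperature gradient, K/m
print(f"velocity slip    = {maxwell_slip_velocity(1.0, lam, dudn):.3e} m/s")
print(f"temperature jump = {smoluchowski_temperature_jump(1.0, gamma, Pr, lam, dTdn):.3e} K")
```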

6 Conclusion

We have argued that the abrupt changes in the microscopic material conditions at liquid-vapour interfaces (phase transitions) and at solid-solid interfaces enter the thermal transport models as temperature discontinuities. We have also shown that in slip flows, the lack of equilibrium close to the wall results in microscopic fluctuations in the gas temperature which cannot be modelled via continuous representations. Our conclusion is that temperature is not necessarily a continuously defined function; its continuity, where applicable, is contingent upon various microphysical factors. The failure of continuum models in all these cases can be attributed to their strong coupling with the microscale material constitution and the associated thermal dynamics. In such cases, discontinuous representations provide valuable insights into the underlying physical phenomena that are not forthcoming from “overly simple” continuum models. Since continuum-level descriptions are not necessarily decoupled from the microscopic details of a system, the view that continuum idealisations are indispensable “in principle” (whether explanatorily, empirically, or pragmatically) cannot be justified. Both continuous and discontinuous representations work in certain contexts and either fail or become intractable in others. Continuous representations work when local thermal equilibrium can be assumed at the interface, so that a temperature can be clearly defined there. Discontinuous representations work when local thermal equilibrium breaks down at the interface, leading to an ambiguity in the definition of the temperature there. The ambiguity, as we have discussed, stems from both theoretical and experimental limitations, owing to the inapplicability of the concept of ‘temperature’ in such microscopic regimes. The ambiguity is thus modelled as a discontinuity at the interface.

The modelling of this ambiguity as a discontinuity, however, raises a deeper philosophical question about the status of discontinuous representations, which we are unable to answer in this paper. The question is: do discontinuous representations work only when nature is fundamentally discontinuous, or do discontinuous representations (like continuous representations) serve merely as ‘idealised’ descriptions of a much more complicated underlying reality?Footnote 26 Alternatively put, presuming that both continuous and discontinuous models are idealised representations of the world, is the continuity-discontinuity dichotomy sufficient to account for the complexities of the world underlying our models? We hope future research sheds light on these questions.