1 Precursor on Terms and Definitions

Given the cross-disciplinary nature of the topic, terms used in this paper which may carry different meanings in different fields are clarified here. The intended interpretation of each term is typically the mathematical one:

Evolution—in the context of the paper, this term makes reference to the class of mathematical evolution equations whereby the solution develops in time from prescribed initial conditions. The exact nature depends on the equation itself, and refers foremost to differential equations [1].

Global—the term ‘global’ in this paper refers to a property or entity considered across an entire domain or space. It is used in contrast to the term ‘local’, which applies to a defined neighbourhood or limited portion of a space [2].

Deterministic—in the mathematical sense, a deterministic system presumes a future event can be calculated exactly from a current state without the involvement of randomness [3]. Similarly, in the algorithmic sense, given a particular input, a deterministic algorithm will always produce the same output.

Iterate—the term iterate is typically used in this paper in the computational sense and as a noun to mean the cumulative result of a number of iterations. For example, where each iteration is one loop of a calculation (or computer process), the iterate is the result produced at the end of each loop or repeated process.

2 Computation as a Basis for Reality

It is a hallmark of human behaviour to use the most advanced technology of the era to analogise our living experience. In the industrial age we put the “wheels in motion” for a mechanised vernacular of the body; later the thought “sparked” to instead consider our “genetic programming” in view of our biology. As we continue using our inventions as our conceptualisations, perhaps we reach a level of technological maturity where the analogy in fact becomes reality. The question of whether our reality could in fact be a high fidelity computer simulation is fast becoming a subject of interest in philosophy, and more recently in physics and computer science. Whether or not reality is simulated per se, explorations of computation as an alternate basis for our fundamental physical laws may reveal insights about the nature of our observed reality.

The concept of a simulated reality was first posed formally in a 2003 publication [4] by philosopher Nick Bostrom, putting forward “The Simulation Argument”. Bostrom’s argument uses the terminology “posthuman civilisation” to describe intelligent life of sufficient technological maturity where near-infinite computing power is accessible, such that “Ancestor Simulations” can be run. Only in recent decades has computation emerged as a third mode of investigation in the sciences, that is, after theory and experiment. Today, advances in computer simulation capability predominantly aim to capture the real phenomena of the physical world. The Ancestor Simulation could therefore be considered the ultimate goal, whereby the evolution of all existence unfolds deterministically from the predefined governing laws and initial conditions. Given near-infinite computing power, the idea is that the high fidelity simulation would be indistinguishable from observable reality.

Fig. 1 Computer generated fractal tree

Bostrom’s work does not directly argue that we inhabit a simulation (and nor does this work). However, the argument supposes that if even a fraction of posthuman civilisations develop ancestor simulations, and those simulated posthumans in turn develop their own ancestor simulations, we obtain a fractal structure of reality (Fig. 1). The number of simulated lifeforms then far exceeds the number of original organic lifeforms, in which case—statistically—we would very likely be living in a simulation.

Alternatively, a large number of simulated universes could be initialised at once, and evolve in parallel. Other academics and authors have expanded upon this idea. Busey [5] writes of the concept in which universes are constantly being created, with stable evolutions surviving and unstable universes collapsing. This mode of thinking mirrors Santa Fe Institute physicist Lawrence Krauss’s concept of cosmological natural selection. Krauss characterises a similar concept of a parameter space in which certain universal variables (the speed of light, the cosmological constant, Planck lengths, etc.) lead to the survival of a cosmos [6]: “Namely, that we live in a universe in which it is possible for intelligent organisms (us) to arise who can observationally verify that we live in a universe with the right conditions for intelligent organisms to arise.” Though Krauss refers here to the survival of real universes, the testing of such a parameter space would be logical for the simulation of universes in parallel.

These modern scientific descriptions echo an earlier philosophical proposal: the Anthropic Principle. First articulated by theoretical astrophysicist Brandon Carter in a 1973 symposium [7], the notion was put forward that any data collected or observed within our universe is foremost observable on the basis of compatibility with the conscious or sapient life which makes the observation. This philosophical argument bifurcated into the Weak Anthropic Principle and the Strong Anthropic Principle, explored by many subsequent authors [8,9,10], which examine respectively the capability or the necessity of various fundamental cosmological parameters to produce self-verifiable existence. The theory is often drawn to relate closely to multiverse theory [11], as are the more current writings of Busey and Krauss.

Other physicists and computer scientists have theorised more specifically on the construct of spacetime as arising from computation. Most notably, MIT professor Seth Lloyd postulates that the geometry of spacetime is a construct which can be derived from underlying quantum information processing [12].

Brian Whitworth has published on “the emergence of the physical world from information processing” [13], where he proposes a virtual reality conjecture. He puts forward the idea that a multidimensional grid representing space has a maximum refresh-rate given by the speed of light; thus dynamical quantities such as mass, charge and energy, which obey conservation laws in the physical world, can be reduced to a single principle of dynamic information conservation. As an example, Whitworth explores the relativistic time dilation present in the famous Twin Paradox, in which one twin stays on Earth while the other explores the universe in a rocket travelling close to the speed of light. In this classic relativity example, Whitworth attributes the dilation of time for the astronaut twin to the virtual reality’s finite bandwidth. The high speed travel “loads the grid” with another processing task, thus slowing the simultaneous processing of existence (i.e. dilating time). Whitworth goes further to discuss how this virtual reality model may reconcile relativity and quantum theory.

The conflict of relativity theory with quantum mechanics is widely considered to be the central problem of modern theoretical physics [14]. Despite the quest for a so-called “universal theory”, over a century on we remain equipped with two fundamentally incompatible descriptions of reality. While general relativity provides a reliable formalism for the physics of the cosmos and of the ordinary (macroscopic) scale, as one moves towards smaller and smaller scales such laws begin to make anomalous predictions, to the point where we derive an entirely different governing theory at the quantum scale. While general relativity describes a universe which is continuous and deterministic, quantum mechanics is formalised in terms of discrete values (quantisation) and probabilistic events.

This paper proposes a new construction for the way in which the physics of our reality may inherently emerge from laws of computation. The proposed model is not one of a virtual reality, but rather a model rooted in the algorithms applied in modern computational physics. The model goes further than providing analogies to classical physics in that it offers a mathematical underpinning for exploring consistencies between the nature of computed properties and the nature of observable reality. Like Whitworth’s, the continuum computing theory proposed in this paper is predicated on a concept of information conservation; however, unlike previous work, it is not constrained by hypothetical information processing, but rather derives a basis in cosmological information propagation. While the theory of this paper examines the same aspects of relativity, the limitations are not bandwidth or processing, but rather the stability constraints which arise mathematically from numerical methods implemented for the solution of continuous dynamical systems, namely the Courant–Friedrichs–Lewy (CFL) condition [15].

This work adopts as basis equations only the most fundamental physical laws in the form of the non-relativistic conservation equations. Numerical solution of the form of differential equations given by the governing conservation laws, via the proposed computing construct, is outlined through a simple set of premises, predicated on numerical stability and logical optimisation. This inherently constructs a fused spacetime and relativistic effects on the macroscale, qualitatively consistent with our observational reality.

The conceptualisation underlying this work diverges in an important way from previous works in simulation theory. Bostrom’s “Ancestor Simulation” implies that the simulated universe is a replication of the full set of physics of the organic simulating universe, including relativity. This paper proposes instead that central aspects of the physics of our observed universe could plausibly arise from governing computational laws as a basis. Importantly, while ideas about simulated realities are a useful contextualisation, the construct of this paper does not ultimately require the hypothetical existence of a simulated universe (and therefore nor a simulating universe by extension). The philosophical suggestion, rather, is that of a deeper role of computational constructs underlying fundamental physics.

This work therefore proposes an explanation for the fused nature of time and space on the basis of numerical stability constraints derived from computational continuum mechanics. By constructing a numerical method which is both computationally stable and logically optimised, qualitative congruities with relativity can be demonstrated mathematically, as is explored in this paper. This model also gives rise to the concept of a continuum-quantum border at the level of the computational cell, which offers a means of conceptually reconciling the disjunctive physics of the classical and quantum scales.

Though the proposed model must be classed as speculative, it provides a framework for reconsidering many of the puzzling paradigms of modern physics, namely the coupling of time and space in relativity. Ultimately, the proposed model aims to provide a new explanation for the observed physics of our universe, and a computing conceptualisation which could offer utility to other physicists and philosophers in the development of new theory.

3 Continuum Computing Fundamentals

At the heart of computational science lies the problem: how does one model a continuous world-view as a discrete set of data?

If we run our finger along a line in space, and stop at any given point, there are very many quantities ascribed to that point. If we were to write a long list of all of the kinds of descriptions we can assign to that point (e.g. location, temperature, state of matter, etc.) and then move our finger to any other point in space, we can add a column, turn our list into a table and populate the cells. No matter where we point, we can describe the position by assigning values to the same set of quantity types. We can point infinitely many times between those first two finger placements and assign a value; however, to what resolution are those values unique? As two data points become closer and closer together they may become indistinguishable depending on the nature of the property and our means of perception. That is, given our human perceptive limits, we would reach a resolution threshold where the representation via discrete data is indistinguishable from the continuous world.

What then if we replaced our finger pointing with a microscope? Take the example of pressure in a gas: generally at the macroscopic scale, pressure can be represented by a scalar field. Zooming into the microscopic or atomistic scale, where the length scale of view approaches the mean-free particle path, the thermodynamic description of pressure fundamentally shifts to a description of the motion (mass and velocity) of individual molecules. In other words, on different scales, the list of descriptors fundamentally shifts. However, the macroscale is always an emergent view of the microscale.

Consider, then, all of space being divided into a finite number of cells: like pixels on a screen, each containing a single scalar quantity (colour), except that each cell contains a large number of properties which collectively describe a complete state in that region of space. This cell exists at the resolution limit where its data gives rise to all emergent properties at every broader scale. This is our defined computational cell.

Continuous quantities defined in space, which evolve in time, can be described mathematically via partial differential equations (PDEs). Firstly, take the simplified example of the motion of a rigid object or particle (think of a perfectly rigid stone): its position can be determined through 6 parameters (6 degrees of freedom: 3 translational and 3 rotational axes) at every point in time. The dynamics of this rigid object occur within a finite-dimensional configuration space. The configuration of a continuous medium however (think of a fluid) occurs within an infinite-dimensional configuration space. This is the key difference between a particle based description and a continuum based description. It renders the latter much more difficult to solve, and typically goes beyond the limits of pen-to-paper mathematics, requiring numerical methods (and computing power) for practical solution.

The numerical (computational) solution can only ever achieve an approximation of the true continuous solution. However, the higher the resolution, the closer this approximation converges to reality.

A PDE for a single variable \(\omega = \omega (x_0, x_1,\ldots , x_n)\) takes the general form:

$$\begin{aligned} f\left( x_0, x_1,\ldots , x_n ;\ \omega , \frac{\partial \omega }{\partial x_0}, \ldots , \frac{\partial \omega }{\partial x_n};\ \frac{\partial ^2 \omega }{ \partial x_0 \partial x_0}, \ldots , \frac{\partial ^2 \omega }{ \partial x_0 \partial x_n};\ \ldots \right) = 0 \end{aligned}$$

Continuous quantities which are defined in space and which evolve in time, are governed by hyperbolic PDEs. For a hyperbolic PDE which is first order in time on a quantity \(\phi\), if initial data for \(\phi (x,y,z)\) is defined everywhere in the domain (with sufficient smoothness), there exists a solution of \(\phi\) for all subsequent time. That is, evolving deterministically from the initial condition data. The solution of a hyperbolic PDE is of a wavelike nature. That is, for a disturbance in a given space–time coordinate, the effect of the disturbance travels through the domain with a finite propagation speed, and along characteristics of the governing equation.
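
The simplest example of such an equation is the one-dimensional linear advection equation with constant wave speed a, whose solution simply translates the initial data along the characteristics \(x - at = \text {const}\):

$$\begin{aligned} \frac{\partial \phi }{\partial t} + a \frac{\partial \phi }{\partial x} = 0, \qquad \phi (x,t) = \phi (x - at,\, 0) \end{aligned}$$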

The most fundamental laws of nature, which govern how quantities evolve from processes and whether processes can occur, are the conservation laws. A conservation law is a continuity equation, expressed mathematically via PDEs. These equations define the relationship between the “amount” of a quantity and the “transport” of a quantity:

The amount of the conserved quantity at a point or within a volume can only change by the amount of the quantity which flows in or out of the volume. [16]

In classical physics, these laws must minimally include the conservation of: mass (matter), momentum, energy, and electric charge.

A generalised conservation law is given by:

$$\begin{aligned} \frac{\partial \phi }{\partial t} + \nabla \cdot f(\phi ) = 0 \end{aligned}$$

We can directly relate a conserved quantity within a finite volume, to data in our aforementioned computational cell.

A conservation law applied across two neighbouring computational cells, defined with initial data, naturally defines a Riemann problem [17]. Finite volume methods are useful for solving such PDEs numerically, yet conservatively, by converting the divergence term to surface integrals via the divergence theorem [18].

In one dimension we define our computational cells in \(x-t\) space, as in Fig. 2. For a given cell i the finite volume occupies \(\upsilon = [x_{i-1/2}, x_{i+1/2}] \times [t^n, t^{n+1}]\) which has equivalent area: \(\Delta x \times \Delta t\).

Fig. 2 Finite volume discretisation with one spatial dimension and one temporal axis, with positions in space defined as \(x_i\) and positions in time \(t^n\) defining spatio-temporal computational cells of bounded volume \(\Delta x \times \Delta t\)

For a set of interdependent variables (i.e. which describe a state), we obtain a system of hyperbolic PDEs. A compact notation for dealing with systems of PDEs is to express the set of variables within a vector. Where U is a state vector of continuous quantities, and F their associated fluxes, the 1D system of conservation equations can be written:

$$\begin{aligned} \mathbf{U }_t + \mathbf{F}(U) _x = \mathbf{0 } \end{aligned}$$
(1)

The subscript t denotes the partial derivative with respect to time, and x the partial derivative in the spatial dimension.

The integral form of equations applied over the defined finite volume can be equated to a surface integral via Green’s theorem:

$$\begin{aligned} \int _\upsilon \left( \frac{\partial \mathbf{U }}{\partial t} + \frac{\partial \mathbf{F }}{\partial x} \right) dx\, dt = \oint _S \left( \mathbf{U }\, dx - \mathbf{F }\, dt \right) = 0 \end{aligned}$$
(2)

Integrating around the boundary of the finite volume, we obtain:

$$\begin{aligned} \int _{x_{i-1/2}}^{x_{i+1/2}} \mathbf{U }(t^{n+1}, x) dx - \int _{x_{i-1/2}}^{x_{i+1/2}} \mathbf{U }(t^{n}, x) dx = \int _{t^n}^{t^{n+1}} \mathbf{F }(t, x_{i-1/2}) dt - \int _{t^n}^{t^{n+1}} \mathbf{F }(t, x_{i+1/2}) dt \end{aligned}$$
(3)

For the conserved variables of \(\mathbf{U }\), in one dimensional space, the integral average quantity is defined as:

$$\begin{aligned} \mathbf{U }_i^n = \frac{1}{\Delta x} \int _{x_{i-1/2}}^{x_{i+1/2}} \mathbf{U }(x,t^n) dx \end{aligned}$$
(4)

Similarly the flux vectors \(\mathbf{F }_{i\pm 1/2}\) are defined as the integral average of the flux over one time step:

$$\begin{aligned} \mathbf{F }_{i\pm 1/2} = \frac{1}{\Delta t} \int _{t^n}^{t^{n+1}} \mathbf{F }(\mathbf{U }(x_{i \pm 1/2}, t))dt \end{aligned}$$
(5)

With these integral average definitions we arrive at the conservative numerical update formula:

$$\begin{aligned} \mathbf{U }_i^{n+1} = \mathbf{U }_i^{n} + \frac{\Delta t}{\Delta x}(\mathbf{F }_{i-1/2} - \mathbf{F }_{i+1/2}) \end{aligned}$$
(6)

In this way, the finite volume construction leads to this conservative numerical update formula, where the continuous equations have been transformed into discrete state and flux variables. This serves as the basis for most continuum computing models [19].
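
As an illustrative sketch only (not part of the proposed construct), the update formula of Eq. 6 can be implemented in a few lines. The example below assumes the simplest possible case of linear advection, \(\mathbf{F } = a\mathbf{U }\) with constant a, and a first-order upwind flux; the flux choice, resolution and initial data are assumptions made purely for demonstration.

```python
import numpy as np

# Minimal sketch of the conservative update of Eq. 6, assuming linear advection
# (F = a*U with constant a > 0) and a first-order upwind interface flux.
a = 1.0                     # constant advection (wave) speed
N = 200                     # number of spatial cells
dx = 1.0 / N                # uniform cell width
CFL = 0.9
dt = CFL * dx / abs(a)      # CFL-limited time step (stability condition discussed below)

x = (np.arange(N) + 0.5) * dx
U = np.exp(-200.0 * (x - 0.3) ** 2)     # smooth initial data U^0

def interface_flux(U, a):
    """Upwind flux F_{i+1/2} at each interior interface (valid for a > 0)."""
    return a * U[:-1]

for step in range(100):
    F = interface_flux(U, a)
    # Conservative update (Eq. 6): U_i^{n+1} = U_i^n + dt/dx * (F_{i-1/2} - F_{i+1/2})
    U[1:-1] += dt / dx * (F[:-1] - F[1:])
```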

It is clear from this time-explicit numerical update formula how all subsequent time solutions evolve from given initial data (\(\mathbf{U }^0_{\forall i}\)). However, the solution will evolve to become numerically unstable unless certain criteria are met. Namely, the Courant–Friedrichs–Lewy (CFL) condition must hold (necessary, but not always sufficient) for the simulation to evolve in a stable manner.

$$\begin{aligned} \Delta t = \frac{CFL \cdot \Delta x}{S_{max} } \end{aligned}$$
(7)

The CFL condition places a limit on the maximum time step based on the distance between adjacent cells (\(\Delta x\)) and the propagation speed of the fastest wave (\(S_{max}\)). The ‘waves’ of the solution are the characteristics of the equations, along which information propagates. As can be seen from the discrete replacement of the time integral defined in Eq. 5, the flux is defined at the spatial cell boundary, and as such, the characteristic wave speeds S are computed at the cell boundary.

If the time step is too large, and the state data of a given cell propagates beyond the adjacent cell within a single time step, then state information is lost. This leads to error and, ultimately, instability. The dimensionless CFL number which ensures stability depends on the numerical scheme and the number of dimensions. For the simplest one-dimensional problems it is derived as \(CFL \le 1\).

Fig. 3 \(x-t\) computational cells: the three arrows represent three characteristics of a system of conservation equations. The intersection of \(S_{max}\) with the cell boundary determines the maximum stable \(\Delta t\)

Therefore the maximum time step in a computational cell is the time at which the fastest characteristic wave crosses a cell boundary (Fig. 3). It follows that the faster the wave, the smaller the time step; likewise, the smaller the computational cell, the smaller the time step. In the case of non-linear systems (the real world) the characteristic wave speeds vary across cells. In the case of non-uniform grids, the computational cell size varies across the domain. Whilst the simplest solution is to determine the smallest time step across the entire domain and limit all cells to this global time step, this is inefficient. Alternative methods are common in computational continuum mechanics, including evolving independent patches containing cells of different sizes and enforcing time consistency conditions at patch borders. In any case, in order to maintain time-equivalence across the domain, computational efficiency is compromised for non-linear systems and for non-uniform grids.
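
The cost of enforcing a single global time step can be illustrated with a short sketch, using hypothetical per-cell wave speeds and cell widths (the values below are purely illustrative):

```python
import numpy as np

# Hypothetical local wave speeds and cell widths for a non-linear system
# on a non-uniform grid (illustrative values only).
S_max = np.array([1.0, 3.0, 0.5, 2.0])    # fastest local wave speed per cell
dx    = np.array([0.1, 0.1, 0.05, 0.2])   # local cell width
CFL   = 0.9

dt_local  = CFL * dx / S_max     # largest stable step each cell could take (Eq. 7)
dt_global = dt_local.min()       # conventional globally-enforced time step

# Cells whose stable step exceeds dt_global evolve sub-optimally:
inefficiency = dt_local / dt_global
print(dt_global, inefficiency)
```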

4 Continuum Computational Construct

The construction of the proposed continuum computational model is laid out in the following premises:

Premise 0: Governing equations and computability

Firstly, it is important to note that the system being modelled is constituted only of the most fundamental physical laws: the full set of non-relativistic conservation laws. That is, relativistic phenomena (and the corresponding form of the equations) are not assumed a priori. The stable and optimised numerical solution of the system (locally across cells) under the proposed computing construct leads to the resultant relativistic effects macroscopically, as will be presented.

Before presenting the computational construct applied to the solution of these system equations, we first consider the computability of such equations. The mathematical nature of the governing equations determines the class of numerical methods required for solution. These different classes of numerical methods can be assessed in terms of their computational complexity. To define this concept clearly: computational complexity, or algorithmic complexity, refers to the amount of resources required to execute an algorithm. Memory is the resource for the storage of data (space complexity), and computational time is the resource for the mutation of that data (time complexity) and depends upon the number of elementary operations required to compute the data update [20]. One can reasonably assume that: the laws of physics are not computable if they require infinite computational complexity. Further, the optimum algorithm aims to minimise complexity.

Therefore, as a 0th step to the premises laid out below, we first assess any constraints or requirements on the governing non-relativistic continuity equations in terms of their computational complexity and computability. To put it another way: could all conceivable laws of physics be computed, or rather, are there computational constraints on the form of the governing laws of physics as we observe them? Specifically we explore the computational basis for the existence of a finite maximum signal velocity (the speed of light—c). In examining this, the heart of the question is: why should the macroscale form of the physical laws of our universe logically emerge as hyperbolic (finite c) rather than elliptic (infinite c)?

As a counterpoint, let’s first consider the case where the fastest signalling wave (i.e. the maximum speed of information propagation) is permitted to be infinite. Take as an example the rearrangement of the electromagnetic wave equation in the limit of \(c \rightarrow \infty\):

$$\begin{aligned} \nabla ^2 u = \lim _{c \rightarrow \infty } \frac{1}{c^2} \frac{\partial ^2 u}{\partial t^2} \end{aligned}$$
(8)

which becomes: \(\nabla ^2 u = 0\). This would result in some of the fundamental laws of physics becoming elliptic in nature. In terms of the computation of an elliptic governing equation: the space complexity depends on the number of nodes or cells which define the discretisation in space, and the time complexity depends on the number of elementary operations required by the algorithm to update the cell-associated data. For the numerical solution of an elliptic partial differential equation, the domain of dependence is the entire computational domain \((\Omega )\) comprising N nodes or cells. That is to say, the update of information at any one cell depends on the information at every other cell in the domain. Numerical solution of elliptic equations therefore applies iterative algorithms which operate over all cells in the domain [21]. Therefore, the time-complexity (\(T^*\)) of any algorithm applied to the solution of an elliptic system is a function of the size (N) of the whole computational domain: \(T^* = f(N)\). As N approaches infinity, the computational complexity to update any single cell also approaches infinity.
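
As a minimal illustration of this global coupling (assuming, for simplicity, the one-dimensional Laplace equation and a Jacobi iteration), each update sweep visits all N cells, and the number of sweeps required for boundary information to influence the whole interior itself grows with N:

```python
import numpy as np

# Sketch of an iterative (Jacobi) solve of the 1D Laplace equation u_xx = 0
# with fixed boundary values. Every sweep operates over all N cells, and many
# sweeps are needed for boundary data to propagate through the whole domain,
# so the cost of updating the solution grows with the domain size N.
N = 100
u = np.zeros(N)
u[0], u[-1] = 1.0, 0.0              # boundary conditions

for sweep in range(5000):           # iterate towards the converged solution
    u[1:-1] = 0.5 * (u[:-2] + u[2:])
```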

Let’s now consider the case where the fastest signalling wave of the system has a finite maximum value. This defines the fundamental laws of physics of the universe as hyperbolic. The domain of dependence for a specified point in a hyperbolic system of equations is the space–time envelope enclosed by the maximum and minimum speeds of information propagation, where therefore the maximum stable time-step depends upon the absolute maximum wave speed. This is the concept enforced by the CFL condition. Limiting the time-step in this way therefore permits the construction of a computational stencil, which spans a small local collection of contiguous cells containing the physical domain of dependence, and which the numerical method uses as input data for the computation of a cell-state update. The exact numerical method defines the size of the computational stencil (n); for example, the simplest case would be the stencil of a first order numerical method defined as \(n = 2d+1\) where d is the number of spatial dimensions [22]. Even for high order numerical methods applied to solve hyperbolic systems, the computational stencil spans a very small number of cells (\(n<< N\)), upon which the time-complexity \(T^* = f(n)\) depends. The space-complexity of a hyperbolic system is the same as for an elliptic system defined over the equivalent surface and space discretisation. However, for hyperbolic numerical solution methods, even as N approaches infinity, the time complexity for each cell evolution remains finite, and independent of N.

Further, for a time-explicit hyperbolic system, where the applied numerical update scheme is of the form of Eq. 6: \(\mathbf{U }_i^{n+1} = f(\mathbf{U }^n_{i-k}, \ldots , \mathbf{U }^n_{i},\ldots , \mathbf{U }^n_{i+k})\), where the numerical domain of dependence spans a small [\(i-k\) : \(i+k\)] number of spatial cells from the prior time state, then computation of the new time state could theoretically be completely parallelised (all cells updated simultaneously). This degree of parallelism is generally not possible for elliptically evolving systems [23].
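
By contrast, a time-explicit hyperbolic update can be written as a pure per-cell function of a small fixed stencil, so that in principle every cell could be updated simultaneously. The sketch below uses the classical Lax–Friedrichs scheme for linear advection purely as a stand-in for such a stencil update:

```python
import numpy as np

# A time-explicit update expressed as a pure function of a fixed nearest-
# neighbour stencil (k = 1), here the Lax-Friedrichs scheme for linear
# advection. Each new value depends only on the previous iterate, so all
# cell updates are independent and could be performed in parallel.
def cell_update(u_left, u_right, a, dt, dx):
    return 0.5 * (u_left + u_right) - a * dt / (2.0 * dx) * (u_right - u_left)

a, dx = 1.0, 0.01
dt = 0.9 * dx / abs(a)                            # CFL-limited step
u = np.sin(2.0 * np.pi * np.arange(100) * dx)     # arbitrary initial data

u_new = u.copy()
# Vectorised stand-in for updating every interior cell simultaneously:
u_new[1:-1] = cell_update(u[:-2], u[2:], a, dt, dx)
```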

From this, let’s therefore consider a fundamentally computational universe, where all of space \((\Omega )\) is fully discretised as a set of N spatial cells, and this spatial data is to be evolved by discrete iterates. If the mesh of cells is permitted to approach infinite size (be that through spanning infinite space or approaching infinite cell density through refinement), the computational complexity of an elliptically-evolving system similarly approaches infinity (since it must operate across all cells). For the system to therefore be computable as \(N \rightarrow \infty\) necessitates an algorithm with time complexity independent of N, which implies a finite-sized computational stencil in space, resulting in a finite number of elementary operations associated with the numerical method for the evolution of individual cells. This directly necessitates a fundamental computational limit on the rate of information propagation, which implies the existence of a finite maximum signal velocity within the governing continuity equations, thus implying hyperbolicity.

To summarise (where: \(< \infty\) implies finiteness):

$$\begin{aligned} \begin{array}{c} \underline{\text {Elliptic governing laws}}:\ c=\infty , T^* = f(N)\\ \\ \lim _{\Omega \rightarrow \infty } \Omega \quad \rightarrow \quad T^* = \lim _{N \rightarrow \infty } f(N) = \infty \quad \rightarrow \quad \mathbf{not }\ \mathbf{computable }\\ \\ \underline{\text {Hyperbolic governing laws}}:\ |c|< \infty , T^* = f(n), |n|<< |N|\\ \\ \lim _{\Omega \rightarrow \infty } \Omega \quad \rightarrow \quad T^* = \lim _{N \rightarrow \infty } f(n)<< \infty \quad \rightarrow \quad \mathbf{computable } \end{array} \end{aligned}$$

The existence of a finite maximum wave speed is assumed in the application of the CFL stability criterion in premise 2; however, the criterion itself does not define or enforce finiteness of system characteristics. This argument defines a computability requirement on the finiteness of c in the governing continuity equations. Beyond this requirement on c, the elementary operations can plausibly be defined by any operations, which will macroscopically constitute the laws of physics.

Premise 1: Underlying discretisation of space and time

The underlying computational construct assumes all of space is a continuous medium which is discretised by a mesh of computational cells. In three dimensional space this represents a three dimensional finite volume cell. Therefore, the cell represents a finite region of space, containing a fundamental data set of discrete quantities, which collectively gives rise to all emergent properties at the continuum macroscale (Fig. 4).

Fig. 4 \(x-t\) 1D computational cell representation with fundamental state data denoted as \(\alpha\), \(\beta\) ... and maximum stable \(\Delta t\) at the cell interface based on the fastest wave speed \(S_{max}\)

The data associated with each computational cell is evolved through some set of elementary operations, which results in the mutation of that data. This data mutation represents a discrete evolution in time. This treatment of space and time as inherently different things—space as the underlying data structure, time as the actual computation—is a conceptual cornerstone of a simulation theory grounded in computational physics. The key issue in this systemisation is therefore the apparently de-coupled treatment of space and time. It is through the subsequent premises of this work, based on principles of continuum computing, that a logical construct for spacetime coupling under such a discretisation is proposed.

This computational cell construct, in combination with the numerical update formulas used to compute the system solution, implies a model which is fully deterministic. That is, a computed state is determined completely from the preceding state. As per the 0th premise, the solution of hyperbolic continuity equations implies a finite domain of dependence (where \(n<< N\)). That is, the state of a given cell is computed locally from the states of all cells sharing a boundary in the preceding state solution (or a number of adjacent cells depending on the specific algorithm). For example, assuming a first order numerical method (which considers the states of the immediately adjoining cells), the state \(\mathbf{U }_i^{n+1}\) is computed deterministically from the preceding cell states: [\(\mathbf{U }_{i-1}^{n}\), \(\mathbf{U }_{i}^{n}\), \(\mathbf{U }_{i+1}^{n}\)] which collectively define its domain of dependence as the span of 3 cells.

Premise 2: Necessary stability constraints

As per the introduction, the deterministic evolution of the system in time from the initial state data defined in space is governed by the system of conservation equations. For the system to evolve in a computationally stable manner, the CFL condition must be satisfied. The CFL condition requires that a maximum wave speed be identified, which then serves to limit the discrete time steps.

As detailed in the 0th premise, the finiteness of a maximum signalling velocity for information can be assumed, based on underlying computability requirements on the governing system equations.

The value of the maximum wave-speed depends on the system being modelled. Considering the defined computational cell which contains the fundamental state data set, the wave speeds of information propagation depend upon the characteristics of the full set of conservation equations computed across the cells. The fastest wave speed (\(S_{max}\)) which restricts the stable time step according to the CFL stability criterion is identified as the fastest possible speed of all information propagation, which therefore depends upon the speed of light in the conservation laws.

Premise 3: Optimisation of the computational step

In a standard continuum computing model, the time step is restricted globally by the overall smallest stable time step computed in any given cell, in order to achieve numerical stability and such that time evolves in uniform steps across the domain. After every evolution we have global time equivalence. Therefore, every cell whose locally stable time step is computed as greater than the globally restricted time step evolves in a computationally inefficient manner.

Due to this time step inefficiency, local time-stepping on mixed resolution meshes was proposed by Osher in [24] and is commonly used in the numerical solution of conservation laws. However, these simulations typically involve sub-grids, where base cells are subdivided by a refinement factor in each dimension into smaller cells [25, 26]. A global maximum wave speed for the conservation laws is still computed over the full domain, and after the smaller sub-grid cells evolve by an integer multiple of their smaller stable \(\Delta t\), time equivalence is reached with the larger base cell (as depicted in Fig. 5). While this provides efficiency improvements, cell resolution is constrained by factors of a base cell size, and where wave speeds vary across the domain, the local time step remains non-optimised.

Fig. 5 Diagram of a typical local-time-stepping scheme. Cells are refined spatially by a refinement factor, and a corresponding time reduction factor achieves global time equivalence with the base cell after sub-cycled smaller time steps

It is proposed here that the system is not constrained at all by time equivalence. By relaxing this constraint we are able to maximally optimise computation over freely defined cell-sizes. By grounding the algorithm in the logical basis of (i) numerical stability, (ii) freely varying cell-size, and (iii) computationally optimised time-stepping, the construction permits information fluxes evolved by time steps of different sizes at cell boundaries. Though the sizes of the time steps may differ, all cells in the system advance by the same integer number of steps simultaneously. That is, we may consider the simulation to evolve in iterates as discrete global updates, with the stable time step being computed at every cell boundary.

The idea can be summarised as enforcing computational optimisation instead of enforcing time-equivalence:

$$\begin{aligned} \text {globally enforced time equivalence} \quad &\rightarrow \quad \text {compromised local efficiency} \\ \text {optimised local efficiency} \quad &\rightarrow \quad \text {globally varying time-stepping} \end{aligned}$$

We define clearly the concept of global time iterates and real time steps before demonstrating how this construction naturally gives rise to aspects of special and general relativity.

Fig. 6 Under a continuous time-value axis, for a non-linear system or non-uniform grid, the maximum stable time step varies between cells, and in a time-equivalent simulation, becomes limited by the smallest global \(\Delta t\)

Fig. 7 Under a time-iterate axis, the characteristics can be considered scaled with respect to \(\Delta t^\star\) and the simulation is evolved as a single global step in a discrete integer time dimension, where computed \(\Delta t_i\) may vary across cell borders

As shown in the diagram of Fig. 6, the real time step is computationally optimised for each cell when the wave of fastest propagation speed intersects exactly at the spatial cell boundary (max. stable \(\Delta t_i\)). The local limiting \(\Delta t_i\) is then applied to evolve the information flux between cells.

The derivation of the conservative numerical update formula presented earlier in Eqs. 1–6 assumes a constant \(\Delta x\) and \(\Delta t\) between neighbouring cells in the integral average definitions. For freely varying spatial cells and time evolution, the integral average flux definition of Eq. 5 and the conservative numerical update formula of Eq. 6 gain spatio-temporal degrees of freedom. For computed fluxes this becomes:

$$\begin{aligned} \mathbf{F }_{i\pm 1/2} = \frac{1}{\Delta t_{i\pm 1/2}} \int _{t_{i\pm 1/2}^n}^{t_{i\pm 1/2}^{n+1}} \mathbf{F }(\mathbf{U }(x_{i \pm 1/2}, t))dt \end{aligned}$$
(9)

The explicit numerical update formula then becomes:

$$\begin{aligned} \mathbf{U }_i^{n+1} = \mathbf{U }_i^{n} + \frac{\Delta t^n_{i-1/2}}{\Delta x_{i-1/2}}\mathbf{F }_{i-1/2} - \frac{\Delta t^n_{i+1/2}}{\Delta x_{i+1/2}}\mathbf{F }_{i+1/2} \end{aligned}$$
(10)

The system is not under-determined, since the spatial step \(\Delta x_i\) is predetermined by the mesh, and each \(\Delta t^n_i\) (of the cell interfaces) is computed directly from the fundamental state data of the adjacent cells at the current time step n, in accordance with the governing conservation laws.

The physics of every individual cell pertains to its own locally determined \(\Delta t^n_i\) step, and the global time iterate axis represents a discrete number of evolutions. This time-iterate axis (Fig. 7) represents the integer number n of total discrete updates, at which time is denoted \(t^\star\), and the step from n to \(n+1\) is denoted \(\Delta t^\star\). Evolving the cell states by a simultaneous discrete time iterate \(\Delta t^\star\), with real time step \(\Delta t\) computed for each cell, is equivalent to the wave speeds undergoing a scaling transformation when represented on the time-iterate axis. This concept is shown in Fig. 7. The actual wave propagation speeds are locally conserved under \(S = \frac{\partial x}{\partial t}\), where S and t are the real wave speeds and real local time. The maximum stable real time increment and subsequent wave transformation in one dimension is simply:

$$\begin{aligned} \Delta t_i = \frac{CFL \cdot \Delta x_i}{S_{max,i}} \rightarrow \Delta t^\star = \frac{CFL \cdot \Delta x_i}{S^\star _{max,i}} \rightarrow S^\star _{max,i} = \frac{\Delta t_i}{\Delta t^\star } S_{max,i} \end{aligned}$$
(11)

Note that \(\Delta t^\star\) can be any arbitrarily chosen time increment (e.g. Planck time) and stability is maintained. To summarise, time evolves globally by a discrete time iterate update, within which a local stable real time step is computed at every cell boundary. In this way, by optimising the computational step in accordance with the numerical stability constraint, we will demonstrate how this logical construct results in inherent time dilation effects on the macroscale.
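
A brief sketch of this book-keeping, with hypothetical cell widths and wave speeds, shows how a single global iterate \(\Delta t^\star\) coexists with locally computed real time steps and rescaled wave speeds as per Eq. 11:

```python
import numpy as np

# Sketch of the proposed iterate book-keeping (Eq. 11): one global discrete
# iterate dt_star is applied everywhere, while each cell carries its own
# stable real time step and a rescaled wave speed. Values are illustrative.
CFL     = 1.0
dt_star = 1.0                               # arbitrary global iterate
dx      = np.array([1.0, 0.5, 2.0, 1.0])    # freely varying cell widths
S_max   = np.array([1.0, 1.0, 4.0, 2.0])    # local fastest wave speeds

dt_local = CFL * dx / S_max                 # real time advanced per cell, per iterate
S_star   = S_max * dt_local / dt_star       # wave speeds rescaled onto the iterate axis

# Accumulated local real time after n global iterates differs from cell to cell:
n = 10
local_real_time = n * dt_local
print(dt_local, S_star, local_real_time)
```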

5 Emergent Relativistic Properties

This initial analysis presents a mathematically qualitative assessment of how the proposed continuum computing model naturally gives rise to aspects of observed reality, consistent with Einstein’s theory of relativity. This paper explores the premise of fused spacetime, aspects of special relativity, and some general relativistic effects as a minimal basis for further explorations.

5.1 Coupling of Space and Time

One of the fundamental conceptual difficulties in proposed simulated reality theories, is that computational simulations tend strongly to decouple the concepts of space and time, whereas relativity implies that space and time should not be decoupled. The core postulate of this paper is the proposal of a logical construction whereby stability constraints naturally enforce a strict coupling between space as the underlying data structure, and time as the computational evolution.

This coupling emerges as follows: within a reality comprised of continuum-type computational cells, the interdependency of time and space arises necessarily from the stability constraints of the numerical method applied to solve conservation equations via a discrete quantity formulation. If we suppose reality behaves as a continuum-type computation, then the time step must become a function of discrete space under the CFL condition. The dependency of time on space must therefore also be a function of the fastest speed of all information propagation. Applying this computing principle to the full set of conservation laws, the fastest total wave speed depends upon the peak matter-velocity and its fastest signalling wave: which draws a cosmological basis in the speed of light. Logically optimising computational efficiency means time-steps vary according to this time–space–wave–speed dependency, across all cells.

The coupling of space and time in forming 4-dimensional fused spacetime underlies the relativistic phenomena of our macroscale universe. One could consider this postulate in more philosophical terms, noting the key difference from other works in simulation theories. Rather than explicitly programming the known and observed physics of our reality, including relativity, this work removes the concept of an active programmer, and examines reality in terms of only elementary basis laws (in simplest non-relativistic form), and logical computing laws for their solution. The philosophical enquiry becomes: in the case where existence as we observe it arises from fundamental computing laws, does spacetime coupling emerge necessarily as a constraint of stable computational evolution?

The concept of a unit cell is introduced here to aid the subsequent explanations. This unit cell is useful as a base unit for exploring the relativistic phenomena. Premise 3 in the earlier construction explains that for any arbitrarily chosen discrete time increment \(\Delta t^\star\) stability is maintained. Let’s then propose a unit cell where the one dimensional width is \(\Delta {\bar{x}}\) and the numerics permit \(CFL = 1\). Where the set of all known conservation equations is being solved across two unit cells of size \(\Delta {\bar{x}}\), let’s assume, for simplicity, the limiting wave speed is given by the speed of light. Then \(\Delta t^\star\) is defined to be simply:

$$\begin{aligned} \Delta t^\star = \frac{1 \cdot \Delta {\bar{x}} }{c} \end{aligned}$$
(12)

where we infer from Eq. 11 that \(S^\star _{max} = c\), where c is the constant speed of light (in a vacuum). We explore in the subsequent sections how system velocities and varying cell size lead to time dilation with respect to the defined unit cell.
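
For concreteness only, if the unit cell width \(\Delta {\bar{x}}\) were taken to be the Planck length (one possible choice; the construct itself permits any \(\Delta t^\star\)), the corresponding iterate would be of the order of the Planck time:

$$\begin{aligned} \Delta {\bar{x}} = l_P \approx 1.6 \times 10^{-35}\,\mathrm {m} \quad \Rightarrow \quad \Delta t^\star = \frac{\Delta {\bar{x}}}{c} \approx \frac{1.6 \times 10^{-35}\,\mathrm {m}}{3.0 \times 10^{8}\,\mathrm {m/s}} \approx 5.4 \times 10^{-44}\,\mathrm {s} \approx t_P \end{aligned}$$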

5.2 Special Relativistic Effects

We consider first the flat spacetime in the absence of gravitational fields. The core postulates of special relativity are that the laws of physics are invariant in all inertial frames of reference and that the speed of light in a vacuum is the same for all observers [27]. Presented here is a simple demonstration of how special relativistic time dilation arises inherently from the proposed continuum computing cell construct. Further, the construct is shown to satisfy the aforementioned postulates.

With reference to our unit cell, we consider a flat spacetime 1D mesh comprised of constant sized \(\Delta {\bar{x}}\) cells. We consider the continuum medium contained in the cell to have a matter-velocity tracked via the cell state data. A cell containing a zero matter-velocity is then equivalent to the unit cell, where the computed stable real time step is simply \(\Delta t = \Delta t^\star\), such that the cell information travels a maximum distance of \(c \cdot \Delta t^\star = \Delta {\bar{x}}\) in one computational time step. The step is stable and optimised. In the flat spacetime, this gives a lower bound on \(S_{max}\) of c and a corresponding upper bound on real time step \(\Delta t = \Delta t^\star\) (time progresses at a maximum rate of \(\Delta {\bar{x}} /c\) seconds per discrete iterate in a ‘stationary’ cell).

Fig. 8 (Left) 1D computational unit-cell as defined above. (Right) 1D cell containing a non-zero matter-velocity with reduction in stable computed \(\Delta t\) to maintain stability on uniform grid of fixed spatial steps \(\Delta {\bar{x}}\)

For a cell region containing a non-zero matter-velocity (V), the total information propagation is related to the base velocity V and its fastest possible emitting information rate—the speed of light. And so we have \(S_{max} = f(V,c)\). The speed of light is constant (as per the locally applied continuity equations) within the computational cell and therefore propagates information (about the high velocity matter) at \(c = \frac{\partial x}{\partial t}\) with respect to real-time t in the cell. This is consistent with known physics: a fast moving object emits light at the speed of light with respect to the fast moving object. The real time step \(\Delta t\) must therefore be computed such that the information does not propagate more than the distance \(\Delta {\bar{x}}\) in the computed step. Otherwise the CFL condition is not met, information is lost across the time step, and instabilities manifest in the computation. This concept is depicted in Fig. 8.

The general qualitative argument is as follows: where a non-zero matter velocity increases the total rate of information propagation, computed real time must dilate to maintain the numerical stability condition. The exact manner in which the matter-velocity and the speed of light are positively constructive within a cell determines the quantification of time dilation.
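
Purely as an illustration of how such a quantification might proceed (the combination rule below is an assumption made for this example, not a result of the construct), if the matter-velocity and signal speed were taken to combine in the simplest positively constructive way, \(S_{max} = c + V\), the CFL condition of Eq. 7 would give:

$$\begin{aligned} \Delta t = \frac{CFL \cdot \Delta {\bar{x}}}{S_{max}} = \frac{\Delta {\bar{x}}}{c + V} = \frac{\Delta t^\star }{1 + V/c} < \Delta t^\star \quad \text {for } V > 0 \end{aligned}$$

Any positively constructive combination yields the same qualitative monotonic dilation with V; recovering the precise Lorentz scaling would require the specific update scheme and mesh topology, as noted at the end of this section.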

In relation to the core postulates, it is important to note the following: in an inertial reference frame where a local domain of cells contains a uniform matter velocity, all cell data evolves uniformly in discrete steps of real \(\Delta t\) in that local domain, where \(\Delta t\) is computed based on c propagating as a constant with respect to the base V of the frame. (Note also that the defined reference frame will move across fixed spatial cells between time iterate updates). Similarly, we can consider the manner of flux computation across cells: if the base velocity V is uniform in a reference frame, the flux differential associated with that reference velocity remains zero across cells (nothing within the reference frame changes due to the base velocity). Therefore the laws of physics within a stationary or moving reference frame remain invariant.

The flux associated with base matter-velocities is non-zero where there is a velocity differential across a cell border. Where information about an event occurring in a fast-moving reference frame crosses the cell border into a stationary reference frame (or vice-versa), information flux and stability depend upon these velocities. The processing of the information (as it passes across cell borders) defines a local Riemann problem between fixed computational cells. In the time explicit model defined by the numerical update formula of Eq. 6 (whereby all data evolves deterministically across time steps), the continuous property flux over the time step \(\Delta t\) from \(t^n\) to \(t^{n+1}\) is replaced by a discrete flux approximation. The discrete flux approximation is a function of the fundamental cell data sets at time n. Using the notation for the fundamental state data as represented in Fig. 4 (\(x-t\) dimensional system), let \(\mathbf{U } = \{\alpha , \beta ...\}\); the total information flux \(\mathbf{F }\) over a time step \(\Delta t\) entering cell i is given by:

$$\begin{aligned} \int _{t^n}^{t^{n+1}} \mathbf{F }(t, x_{i-1/2}) dt \approx \Delta t \mathbf{F }(U_{i-1}^{n}, U_{i}^{n}) \end{aligned}$$
(13)

Given that the fundamental state data ultimately represents all system properties, in effect this renders the flux a function of the velocities contained in the adjacent cells as well as the constant c:

$$\begin{aligned} \mathbf{F } = f(\{\alpha _{i-1}, \beta _{i-1} ... [V_{i-1}, c]\}, \{\alpha _i, \beta _i ... [V_i, c]\}) \end{aligned}$$
(14)

since the wave speeds \([S_0...S_n]\) at the cell boundary are a function of the state data, and specifically one of those wave speeds is:

$$\begin{aligned} S_{max} = f([V_{i-1}, V_i, c]) \end{aligned}$$
(15)

then the flux computation depends upon the full set of variables:

$$\begin{aligned}&\mathbf{F } = f(\{\alpha _{i-1}, \beta _{i-1} ...\}, [S_0...S_n], \{\alpha _i, \beta _i ...\}) \end{aligned}$$
(16)
$$\begin{aligned}&\Delta t = f(S_{max}) \end{aligned}$$
(17)

Therefore \(S_{max}\) and the information flux are influenced by adjacent cell velocities and their differential. When \(S_{max}\) is large, \(\Delta t\) is small (to maintain numerical stability), and the amount of information flux which passes across the cell border between reference frames is acted upon by the reduced \(\Delta t\) in a reciprocal manner. A reduced time step at the cell border acts effectively as a numerical information funnel. The propagation or observation of information as a function of the contained matter-velocities of neighbouring cells therefore renders the construction reference frame dependent.

Additionally, within locally computed stable time steps, the speed of light is constant within every cell, and is never observed to be exceeded from any reference frame outside the cell with respect to the time steps of that frame. Under this construction, it is impossible for any matter to be observed to exceed the speed of light, for any observer outside the cell, as the increase in its velocity within a cell directly results in the local reduction of time step at its cell borders.

It is important to note here that the arguments are being presented in a qualitatively mathematical form to demonstrate the rudimentary consistencies with relativity theory. Further developments such as deriving Lorentz factor scaling, proving general covariance is preserved, and quantitatively demonstrating synchronisation within frames, would require the definition of specific update schemes for the simulation, and the construction of an underlying mesh topology. The core arguments presented in this work are limited to the core qualitative and conceptual bases. However, further explorations are invited, such as pairing different, more complex, and even unstructured mesh topologies with various numerical update schemes in order to derive further consistencies with properties of relativity.

5.3 General Relativistic Effects

In this section we explore how the computational construction produces gravitational time dilation and the gravitational deflection of light. Again, it is somewhat futile to reason quantitatively on these phenomena, without supposing an actual underlying mesh topology (discretisation of space). Therefore we examine first the core ideas and effects in a qualitative manner and then present a hypothetical (non-realistic) mesh geometry to demonstrate these effects.

Fig. 9 (Left) 2D visual depiction of spacetime curvature in the presence of massive bodies, source: [28]. (Right) 3D grid offers a better representation (than 2D) of what spacetime may actually look like, depicting a refined mesh in the vicinity of the mass, source: [29]

General relativity describes the manner in which stress or energy in the universe (specifically in the presence of large masses) curves and warps spacetime. The geometry of spacetime is described through the metric tensor \(g_{\mu \nu }\), with its contained curvature relating directly to gravitational effect.

In regions of a numerical simulation where higher precision is required, this can be achieved through higher resolution (or higher order methods). Where higher precision is required in regions of high complexity, greater resolution then results in greater information density. Though it seems reasonable to assume that regions in space of high density and energy are associated with regions of greater complexity (requiring higher numerical precision), a definitive relationship cannot be drawn without a deeper understanding of the specific computational construction. Therefore, it is simply assumed from the logical computing argument that, in order to optimise computational resources, regions of higher precision (smaller cell size) and lower precision (larger cell size) are freely permitted. Regions of refinement and dispersement of the underlying computational mesh create the topology of the fabric of space in a computational reality. Where a uniform mesh represents the flat spacetime, a non-uniform mesh produces a warped spacetime. The prototypical depiction of the fabric of space, warped in the presence of large masses, is shown in Fig. 9.

Fig. 10 Equivalent wave speeds S within 1D computational cells of different sizes \(\Delta x\). In the large cell on the left, time progresses faster than in the defined unit cell (\(\Delta t > \Delta t^\star\)), and in the small cell on the right, time progresses more slowly than in the unit cell (\(\Delta t < \Delta t^\star\))

The computational explanation of gravitational time dilation can be laid out very simply: where we have defined local real time to be computed as a function of cell size and maximum wave speed, we observe that the smaller the finite computational cell volume, the smaller the local computed stable time step \(\Delta t\). The concept is depicted clearly in Fig. 10, where the smaller \(\Delta x\) one dimensional cell results in a smaller stable \(\Delta t\).
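
In the simplest case of two cells sharing the same limiting wave speed, Eq. 7 gives the dilation directly as the ratio of cell widths:

$$\begin{aligned} \frac{\Delta t_A}{\Delta t_B} = \frac{\Delta x_A}{\Delta x_B} \qquad \text {(equal } S_{max} \text { and } CFL \text {)} \end{aligned}$$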

Thus far, we have presented all of the theory in one spatial dimension and one time dimension. If local time is computed as the optimum time step based on the given cell dimension, what then when the mesh is extended to multiple spatial dimensions? We extend the theory logically in a second dimension. Maintaining the same principle, consider local time to be computed optimally at the borders defining every cell dimension—that is, allow time to act effectively as a vector with respect to a computational cell and its boundaries.

Under the proposed computational model and its logical extension to higher dimensions, we observe that matter dynamics are influenced by a non-uniform mesh, and furthermore, light automatically bends according to regions of refinement and dispersement. The dynamics and stability of this model are best demonstrated through an example two-dimensional numerical simulation.

Reducing the complex system of conservation equations to the simplest possible demonstration equation, let’s consider the conservative linear advection of a massless photon at the speed of light. Our test equation is given by:

$$\begin{aligned} \frac{\partial \phi }{\partial t} + \nabla \cdot [c \phi ] = 0 \end{aligned}$$

which, in 2 dimensions and for constant c is given by:

$$\begin{aligned} \frac{\partial \phi }{\partial t} + c \cdot\left ( \frac{\partial \phi }{\partial x} + \frac{\partial \phi }{\partial y} \right) = 0 \end{aligned}$$

Any choice of conservative numerical scheme is reasonable, and we define here a simple first order upwinded Godunov scheme:

$$\begin{aligned} \phi _{i,j}^{n+1} = \phi _{i,j}^{n} + \frac{\Delta t^n_{i,j,x}}{\Delta x_{i,j}}[f(\phi _{i-1/2,j}) - f(\phi _{i+1/2,j})] + \frac{\Delta t^n_{i,j,y}}{\Delta y_{i,j}}[f(\phi _{i,j-1/2}) - f(\phi _{i,j+1/2})] \end{aligned}$$

where the x-dimension upwinded boundary fluxes are given by:

$$\begin{aligned} f(\phi _{i-1/2,j}),\ f(\phi _{i+1/2,j}) = {\left\{ \begin{array}{ll} c \phi _{i-1,j},\ c \phi _{i,j} &{} \text {for} \ c \ge 0 \\ c \phi _{i,j},\ c \phi _{i+1,j} &{} \text {for} \ c < 0 \\ \end{array}\right. } \end{aligned}$$

and where \(i,j\) denotes the spatial cell position, and a further subscript x or y on \(\Delta t^n\) permits the time step to have x- and y-dimensional components at time level n.
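A minimal Python sketch of this first-order upwind update is given below. The periodic boundary treatment (implemented via np.roll) and the function names are our assumptions; the boundary conditions of the actual simulation are not specified here.

```python
# Minimal sketch of the first-order upwind Godunov update for constant wave
# speeds, assuming periodic boundaries via np.roll (an assumption of this sketch).
import numpy as np

def upwind_flux_pair(phi, c, axis):
    """Return the upwinded boundary fluxes f(phi_{-1/2}) and f(phi_{+1/2}) along one axis."""
    if c >= 0:
        return c * np.roll(phi, 1, axis=axis), c * phi      # take values from the left
    return c * phi, c * np.roll(phi, -1, axis=axis)          # take values from the right

def godunov_step(phi, cx, cy, dt_x, dt_y, dx, dy):
    """One explicit update phi_{i,j}^{n} -> phi_{i,j}^{n+1} on a (possibly non-uniform) mesh."""
    fxm, fxp = upwind_flux_pair(phi, cx, axis=0)             # x-dimension boundary fluxes
    fym, fyp = upwind_flux_pair(phi, cy, axis=1)             # y-dimension boundary fluxes
    return phi + dt_x / dx * (fxm - fxp) + dt_y / dy * (fym - fyp)
```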

For this single advection equation there is simply one characteristic wave speed \(S=c\).

The scalar advected variable \(\phi\) is initialised to a value of 1 across the spatial cells that the photon occupies, and zero elsewhere in the domain.

Setting CFL = 1 and giving the photon a velocity \(c_x = c_y = c\), travelling diagonally at \(45^\circ\) across the domain (see N.B.1 in Supplementary Information I), we arbitrarily distort the cell sizing across the domain while the mesh remains regular (Cartesian). In this example the mesh is compressed in the middle in both dimensions (\(\Delta x_i\) and \(\Delta y_j\) take a sinusoidal profile). Any time increment \(\Delta t^\star\) can be chosen for the time-stepping iterations, and by computing the local \(\Delta t_k\) and rescaling \(S^\star _k = c^\star _k\) (the subscript k represents either spatial dimension) the method is stable. Time and space are non-dimensionalised for simplicity with \(c=1\). The remaining details of the simulation construction are described in Supplementary Information I.
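As a sketch of such a mesh, the following constructs sinusoidally compressed cell widths on a 160 \(\times\) 160 Cartesian grid (the cell count of Fig. 11); the compression amplitude is an assumed value, with the exact profile given in Supplementary Information I.

```python
# Sketch of a sinusoidally compressed Cartesian mesh; the amplitude (0.7) is an
# illustrative assumption, not the paper's exact value.
import numpy as np

n = 160
idx = np.arange(n)

# Cell widths shrink towards the middle of the domain, identically in x and y.
dx = 1.0 - 0.7 * np.sin(np.pi * (idx + 0.5) / n)
dy = dx.copy()

# Cell-centre coordinates recovered from the cumulative widths.
x_centres = np.cumsum(dx) - 0.5 * dx
y_centres = np.cumsum(dy) - 0.5 * dy

# Per-cell size arrays Delta x_{i,j} and Delta y_{i,j} for the 2D update.
DX, DY = np.meshgrid(dx, dy, indexing="ij")
```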

The real local time step and scaled wave speeds are computed for both the x and y dimensions:

$$\begin{aligned} \Delta t^n_{i,j,x} = \left| \frac{CFL \cdot \Delta x_{i,j}}{c}\right| ,\quad&\Delta t^n_{i,j,y} = \left| \frac{CFL \cdot \Delta y_{i,j}}{c}\right| \\ c_x^\star = \frac{c_x \cdot \Delta t^n_{i,j,x} }{\Delta t^\star },\quad&c_y^\star = \frac{c_y \cdot \Delta t^n_{i,j,y} }{\Delta t^\star } \end{aligned}$$
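A quick numeric check of these formulas for a single cell shows why the rescaling preserves stability: whatever \(\Delta t^\star\) is chosen, the effective local Courant number \(c_x^\star \Delta t^\star / \Delta x_{i,j}\) (and similarly in y) remains equal to the prescribed CFL. The cell sizes and \(\Delta t^\star\) below are arbitrary illustrative values.

```python
# Numeric check for one cell: the rescaled wave speed keeps the effective local
# Courant number equal to CFL for any choice of dt_star. Values are illustrative.
CFL, c = 1.0, 1.0
dx_ij, dy_ij = 0.4, 0.9             # an arbitrary non-uniform cell
dt_star = 0.05                       # freely chosen global iterate increment

dt_x = abs(CFL * dx_ij / c)          # Delta t^n_{i,j,x}
dt_y = abs(CFL * dy_ij / c)          # Delta t^n_{i,j,y}
cx_star = c * dt_x / dt_star         # c_x^star
cy_star = c * dt_y / dt_star         # c_y^star

print(cx_star * dt_star / dx_ij)     # 1.0 == CFL
print(cy_star * dt_star / dy_ij)     # 1.0 == CFL
```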

At time level n, the simulation evolves via the update formula:

$$\begin{aligned} \phi _{i,j}^{n+1} = \phi _{i,j}^{n} + \frac{\Delta t^\star }{\Delta x_{i,j}}[f(\phi _{i-1/2,j}) - f(\phi _{i+1/2,j})] + \frac{\Delta t^\star }{\Delta y_{i,j}}[f(\phi _{i,j-1/2}) - f(\phi _{i,j+1/2})] \end{aligned}$$

where for our test equation: \(f(\phi _{i,j}) = c^\star _k \phi _{i,j}\).
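The construction can be sketched end-to-end as follows. This is not the paper's exact simulation: the mesh amplitude, photon placement, iterate count and periodic boundary treatment are assumptions, and CFL is reduced to 0.5 for this simple unsplit update, whereas the paper's CFL = 1 diagonal case relies on details given in Supplementary Information I.

```python
# End-to-end sketch of the described method under stated assumptions.
import numpy as np

# --- Mesh: sinusoidally compressed towards the centre (assumed profile) ---
n, c = 160, 1.0
CFL = 0.5                       # assumption: reduced from the paper's CFL = 1 for
                                # this simple unsplit update (see Supplementary I)
idx = np.arange(n)
dx = 1.0 - 0.7 * np.sin(np.pi * (idx + 0.5) / n)
DX, DY = np.meshgrid(dx, dx, indexing="ij")      # Delta x_{i,j}, Delta y_{i,j}

# --- Initial condition: photon region set to 1, zero elsewhere (assumed placement) ---
phi = np.zeros((n, n))
phi[10:20, 10:20] = 1.0

# --- Local real time steps and rescaled wave speeds ---
dt_x = np.abs(CFL * DX / c)
dt_y = np.abs(CFL * DY / c)
dt_star = dt_x.min()                             # any iterate increment is permitted
cx_star = c * dt_x / dt_star
cy_star = c * dt_y / dt_star

# --- Time-iterate loop: upwinded fluxes (c >= 0), periodic boundaries assumed ---
for _ in range(90):                              # iterate count is illustrative
    f_xm = np.roll(cx_star * phi, 1, axis=0)     # f(phi_{i-1/2, j})
    f_xp = cx_star * phi                         # f(phi_{i+1/2, j})
    f_ym = np.roll(cy_star * phi, 1, axis=1)     # f(phi_{i, j-1/2})
    f_yp = cy_star * phi                         # f(phi_{i, j+1/2})
    phi = phi + dt_star / DX * (f_xm - f_xp) + dt_star / DY * (f_ym - f_yp)
```

Because neighbouring cells see the same interface flux value, the sketched update remains conservative on the non-uniform mesh.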

In Fig. 11, the initial conditions and the final time solution are shown. In Fig. 12, the actual solution at the given time iterate is the region where the photon value equals 1, with zero elsewhere in the domain. A tail of value 0.5 has been added to the visualisation to track the path the photon travelled through the domain.

Fig. 11
figure 11

(Left) initial condition for test simulation with photon region set to 1.0. (Right) final time iterate solution. Mesh consists of 160 \(\times\) 160 cells, sinusoidally spaced along both x and y axes (cell centres plotted with black marker)

Fig. 12
figure 12

Subsequent time steps, from iterate number 15 to iterate number 89. A ‘tail’ of 0.5 is added in order to visualise the path traced by the photon

As can be seen, the photon takes a curved path through space and distorts in shape as it moves through the mesh geometry. The evolution of local real time is tracked, and the final landscape of local real time across the domain is represented in Fig. 13 for this arbitrary non-uniform mesh. This is taken to be the summation of the absolute time steps (\(\sum _{i=0}^n|(\Delta t^i_{x+1/2},\, \Delta t^i_{y+1/2})|\)) computed at the cell borders.

Fig. 13
figure 13

Evolved local real time for cells across the domain in 2D (left) and projected as a surface over the given mesh (right)
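Since the mesh (and therefore each cell's border time steps) is static in this example, the summation above reduces to the per-iterate magnitude multiplied by the number of iterates; a minimal sketch of the accumulation, under that assumption, is:

```python
# Sketch of the evolved local real time: per iterate, each cell's real time
# advances by the magnitude of its (dt_x, dt_y) border time-step vector.
# Assumes a static mesh, so the per-iterate contribution is constant.
import numpy as np

def accumulate_real_time(dt_x, dt_y, n_iterates):
    """Total evolved local real time after n_iterates global dt* updates."""
    per_iterate = np.hypot(dt_x, dt_y)      # |(dt_x, dt_y)| at each cell
    return n_iterates * per_iterate
```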

This simulation demonstrates how the advection of any quantity (including light) takes a path dependent on the underlying mesh construction of space. Regions of compression, expansion and distortion of the mesh directly impact the path of propagating quantities. It also demonstrates that the formulation of the numerical method is stable: where space is non-uniform, real time is computed as a function of space and wave speed, and under the time-iterate transformation the advection constant is scaled, the solution evolves explicitly and stably by discrete time iterates.

This particular mesh geometry was not representative of anything physical. The purpose was to demonstrate that mesh distortion influences the path of light (and masses), not to reproduce the specific hyperbolic path light takes around a massive body due to gravitational effect. In theory, an underlying mesh geometry could be constructed consistent with observed gravitational effect, causing light to take a hyperbolically deflected path around massive bodies. Such a construction would require a relationship to be defined between energy and mesh resolution. As such, there may not be a unique solution but rather a viable solution space, which may consider different mesh types (not necessarily a Cartesian structured mesh as shown), different finite volume cell types (not necessarily comprised of quadrilateral cells) and different numerical update schemes. This should be the subject of future work.

6 Comments, Implications and Invitations

This paper proposes a new explanation for the fused nature of space and time as we observe it. By exploring the idea of reality as generated from laws of computation, it proposes how spacetime may be a construct invoked from numerical stability constraints as they arise in continuum computing. Such explorations, and the congruities they derive with relativistic physics, raise intriguing questions both scientifically and philosophically.

A continuum-type numerical method is described and proposed as a logical and viable construction of an underlying computational model. From established computing theory, the Courant–Friedrichs–Lewy stability restriction creates a dependency of time on space. The dependency is also related to the fastest wave of information propagation which, when applied to the full set of (non-relativistic) conservation laws, derives a dependency on matter velocity and its fastest signalling wave: the speed of light. By relaxing global time equivalence in order to optimise every local computational step, time steps are computed at cell boundaries and vary across the domain depending on both the spatial resolution and the maximum wave speed. According to the cell data describing the state in space and time, local characteristics are preserved whilst scaled characteristics apply relative to the time-iterate axis, resulting in an observably stable solution evolution. The inference is that, for pre-defined initial conditions and cosmological parameters of the system, a time-explicit stable solution is able to evolve deterministically.

The construction of such a numerical method leads to observations consistent with our known reality: fast-travelling objects or reference frames produce time dilation (a special relativistic phenomenon), as does the compression or expansion of the fabric, or mesh, of space (a general relativistic effect). In both cases the time dilation is produced as a condition of maintaining numerical stability within a construct which optimises computational efficiency.

Extending the model logically to multiple dimensions, the real time step is computed at the dimensional cell interfaces, effectively acting as a vector relative to the cell. Where time steps are computed according to the topology of the spatial mesh, the consequent state scaling causes light (and other entities) to move along paths influenced by the geometry of the mesh defining space. The time dilation and path of advection are demonstrated through an example simulation programmed via the described method. Though a deflected path of motion was demonstrated, the question remains: is it possible for a mesh topology to produce the type of unbound hyperbolic orbits light is observed to take around massive bodies? There is potentially a large solution space of viable meshes which produce dynamics consistent with gravitational effect. Further explorations on this are warmly invited.

In terms of mesh topology there is a logical argument for mesh refinement in regions of massive bodies. It is standard in continuum computing to have high resolution in regions where high precision is required (typically related to peak property gradients). Regions requiring high precision imply regions of greater complexity, so the local computational cells are of higher resolution there and of lower resolution elsewhere, to best optimise computational resources. In a perfectly optimised simulation, mesh refinement and dispersement would be related to regions of high and low information density. Where matter is fundamentally stored as data, a notable advancement would be to develop a logical relationship between matter and complexity, whereby the corresponding optimally refined mesh contained regions of refinement and dispersement related to the vicinity of massive bodies. Such a relationship could be tested for its prediction of numerical time-stepping in line with gravitational time dilation. The proposed continuum computational model therefore invites a new perspective when considering some of the deeper unanswered questions invoked by Einstein’s theory of relativity, such as: why do massive objects distort spacetime? The implied computational answer is that gravity is an inherent emergent effect of mesh refinement which optimises computational resources, based on spatially defined property complexity.

Intrinsic to this proposed model is the implication that time evolves as discrete iterates. That is, locally each cell computes a real time step, and globally every cell evolves simultaneously by a time update. The construction also implies a fundamentally discrete representation of reality at the level of the computational cell; the discrete fundamental state data of the cell gives rise to the continuous physics we observe at the simulation macroscale. Something yet to be discussed in detail is the supposed scale, or resolution, of the spatio-temporal computational construct. The existence of a continuum-quantum border as the scale approaches the computational cell (implicit in this theory) gives credence to the prevailing observation of disparate governing models for the classical and quantum realms. The implication is that a universal theory should perhaps be re-considered in terms of a universal computation.

The implication of the proposed construct is that the computational stability constraints of the individual cells result in necessary spacetime coupling, observed as relativistic effects at the broader scale. The philosophical interpretations of this are intriguing. Rather than programming the physics of the universe as we observe and understand it (as is implied by the “Ancestor Simulation" of Bostrom’s Simulation Argument), the premise is reversed. That is, this work implies that aspects of our observed reality could arise inherently from laws of computation, rather than computation simply being used to replicate physical laws. The further inference is that a relativistic universe is a computational universe. However, one can argue this does not require that we view a computational universe as being actively simulated by some form of organic parent universe, as per popular discussion. All that can be directly proposed is that theories based in laws of computation provide a useful mode of exploring an alternative basis for fundamental physics. One may draw important conclusions about the possibility of a deeper role of information theory and computation in behaviours foundational to our reality.

The proposed computing construct provides a new basis upon which further developments can be built in producing effects in alignment with observable physical phenomena. Quantitative developments will require supposing specific mesh topologies and numerical update schemes in order to derive further consistencies with properties of relativity. This paper focusses specifically on the outlined congruities as a minimal basis for such further explorations.

Given that the core contribution is a new explanation for existing observations of reality, the core argument is in this sense a philosophical one. We return then to the original philosophical enquiry: can foundational aspects of the physics of our universe plausibly emerge from fundamental laws of computation? Though interest in the concept appears to be growing within the academic community, the notion, in its absolute form, is near impossible to test. Scientific theory, as it stands, can only ever be appraised on the merits of its predictive power. Therefore, a philosophical conjecture becomes a scientific one where it offers new and provable hypotheses. This work demonstrates new ways in which numerical models can draw inherent mathematical consistencies with our observed reality, generating a shift in perspective for explorations of computational realities. The hope for such a proposed theory is its conceptual utility in developing new theory, which may in turn develop new predictive power and thus further our understanding of the deeper foundations of our physics.