Darwinian standard model of physics obtains general relativity

A Darwinian perspective of the standard model of physics (SMP) quantum fields (QFs) is proposed, called the physics-cell (PC) approach. Because Darwinian evolution is not deterministic, the PC approach allows for the violation of charge-parity-time symmetry. In the PC approach, the SMP laws are contained in the PCs, which receive and emit QFs through the PCs' outer surface, which is necessarily constrained by Bekenstein's surface-information limit. The establishment of gauge invariance-compatible communication protocol-agreements between the PCs obtains an average correlation of QFs that is equivalent to an asymmetric metric tensor, with the symmetric component being equivalent to general relativity and the anti-symmetric component being very small but still large enough to allow for enough ex-nihilo mass-creation to explain dark matter. Based on experimental data, the PC minimum size is 1.5·10⁻³¹ m, which is similar to the scale at which the grand unified theory force convergence occurs.
Plus, the cosmological constant energy density is equal to the energy density of the discreteness-correction QF alterations that constitute the dark energy and are caused by the finiteness of the PC time-step, which equals 5.0·10⁻⁴⁰ s, hence obtaining a PC maximum information processing rate of 6.6·10⁴⁷ qubit/s. Moreover, the PC approach obtains that the minimum mass for black holes is 2.1·10⁹ times larger than the maximum mass for which the no-hiding theorem can apply, and that the maximum capacity for quantum computers is about 29.0·10¹² qubit.


Introduction
The standard model of physics (SMP) (e.g., [61]) describes all forces of Physics except for gravity, which is described by the metric tensor obtained from general relativity (GR) (e.g., [56]). In biological Darwinism, the deterministic rules governing the biological cells are stored in the cells themselves (e.g., [45,48,53]); hence, in the Darwinian approach to the SMP quantum fields (QFs) that is proposed here, the rules for the alteration of the QFs are stored in SMP quantum computation volumes called physics-cells (PCs), and thus it is called the PC approach. The PCs only communicate with the timeless medium surrounding them (e.g., ℝᵈ), which is where the ψ QFs of the SMP occur. The PCs having a size and at the same time creating spacetime might appear at first to be contradictory, but what is being proposed is that the PC's internal states and external medium do exist in a certain vector space (e.g., ℝᵈ).
The reception and emission by the PCs of the ψ and δψ QFs create alterations Δψ of the ψ in the medium, with the amplitudes typically obeying the inequalities |ψ| ≫ |Δψ| ≫ |δψ|. The correlation of the ψ defines the metric tensor, whereas the characteristics of the δψ define the cosmological constant. Although there have been Darwinian approaches to quantum mechanics (QM), such as quantum Darwinism (QD) [62], to the best of our knowledge this is the first attempt at a Darwinian approach to the QFs of the SMP. The concept of time, and hence of spacetime, occurs as a result of the communication between PCs using the ψ and δψ QFs; spacetime is thus an epiphenomenon of the communication between PCs.
In short, in the PC approach: i. each PC receives and outputs both ψ and δψ QFs, with the alteration in the ψ occurring through the PC in accordance with the rules of the SMP; ii. the ensemble of PCs is located in a time-free, continuous, finite-dimensional vector space; iii. the shape of the PCs is unknown, but calculations indicate that the specific shape is mostly not relevant for the obtained QF dynamics; iv. using what is known about the behavior of information, meaning the Bekenstein surface-information limit (a.k.a. Bekenstein-Hawking boundary entropy) [4,5,29], the SMP-related information in the PC is assumed to be located in the outer surface of the PC and to be at most 1 qubit per Planck square; v. by calculating the minimum amount of information needed to make the SMP calculations in a PC and crossing it with the Bekenstein surface-information limit, the minimum size for a PC is obtained; vi. the PCs communicate only through emission and absorption of QFs, so there are no direct connections between the PCs; moreover, the concept of time is definable neither locally nor globally because of the Einstein-Podolsky-Rosen (EPR) "paradox" [49] experimental results; hence, the PCs need to have a way of assigning a spacetime label in each PC and of communicating such labeling to the other PCs by the emission and reception of QFs; vii. it is the transmission of the spacetime labels between PCs that creates the "illusion" of spacetime, and it is the need for an appropriate communication between PCs that gives spacetime the characteristics that are currently described using GR. The PC approach is Darwinian because: a. PCs carry information within them; b. not all information survives; c. the Laws of Physics evolve, with the cosmic inflation period being the most likely period for the stabilization of such Laws through Darwinian evolution [36].
The time-evolution of QFs in the SMP is deterministic [61] and obeys the charge-parity-time (CPT) symmetry rule. The CPT symmetry of the SMP assumes that the SMP is Lorentz-invariant, that the vacuum is Lorentz-invariant, and that the energy is bounded below; in the PC approach, however, the universe's evolution is defined by an ensemble of PCs, which allows the vacuum to be in part not Lorentz-invariant. Because of both the discreteness of the PCs and the lack of SMP rule uniformity across the PCs, the PC approach is hence compatible with CPT symmetry breaking even if the SMP rules are CPT-invariant for each PC. This agrees well with the characteristics of both biological Darwinian evolution not being time-invariant [53] and the QD eigenstate extinction process not being time-invariant [62]. The size of the PCs and the amplitude of the mass creation are then shown to be in agreement with both previous and recent [19,32] experimental data.
A more thorough justification of the characteristics of the PC approach is made in the following sections. The "Computation Limits of the Physics-Cell" section determines the computation capacity of a PC. The "Minimum Physics-Cell size for Standard Model" section determines the minimum size of a PC capable of calculating the evolution of the QFs assuming that the SMP is valid; such minimum size coincides with the grand unified theory (GUT) scale at which the electromagnetic, weak nuclear, and strong nuclear forces have equal coupling constant values [47,61]. The medium where the PCs are located is timeless, but the reception and emission of the QFs at each PC have associated with them a spacetime label that is described in the "Labeling Spacetime and Energy-Momentum" section. The establishment of agreements between the different PCs on what constitutes the appropriate spacetime label creates the "illusion" of a spacetime associated with the correlation of the different SMP QFs, which on average constitutes a possibly asymmetric metric tensor described in the "General Relativity defined by Quantum Fields" section, where the symmetric component of the metric tensor agrees with what is predicted by GR. The cosmological constant is then obtained as an average of the appropriate QFs in the "Effect of finite time-step of Physics-Cells" section, where the value of the cosmological constant is shown to be in agreement with the speed of light in vacuum, the Cosmic Microwave Background (CMB) temperature of the universe's vacuum, and the mass of the Higgs boson. The physical consequences of the asymmetric metric tensor are then described in the "Consequences of metric tensor asymmetry" section and are shown to be in agreement with dark matter experimental data. The "Discussion" section makes a general analysis of different issues associated with the PC approach, such as the characteristics of its vacuum state.
Finally, in the "Conclusion" section a summary is made of the relations this work has obtained, which include a calculation of the maximum capacity for quantum computers that is different from previous estimations [34,35], as those estimations assumed that quantum computers could be made arbitrarily large whereas the PC approach obtains that they cannot.
For a photon in vacuum not interacting with matter there is no time; rather, all of its existence is a permanent "now," as there is no time-change for an inertial reference system traveling with a photon in vacuum. Moreover, photons in matter travel at the same speed as in vacuum, with the apparently reduced speed of light in matter being caused by the photon's difficulties in escaping matter. Hence, for the PC approach the EPR experimental indication that "quantum entanglement" is not limited by spacetime implies that: QFs exist; space exists; but spacetime is an "illusion," an "illusion" from which we cannot escape, but an "illusion" nevertheless.
In Computer Science, communication protocol-agreements refer to what allows a reception/broadcasting system to communicate information with another reception/broadcasting system [16]. For the case of the ensemble of PCs, the protocols must be established by the PCs themselves, which communicate with the other PCs through QFs, as there are no other entities capable of enforcing a communication protocol. For that communication to occur, information necessarily needs to be transmitted and received through the QFs. For the PCs, the communication protocol-agreements cannot, to the best of contemporary knowledge, be altered in any way through actions occurring in the physical world, as such an alteration would consist in the alteration of the Laws of Physics. Thus, in the PC approach the effects of the communication protocol-agreements can be observed, but the communication protocols themselves cannot be observed, as they are contained within the PCs' interior.
Using the communication protocol-agreements, the different PCs interact through the emission and reception of QFs, and through that interaction they give rise to spacetime. This PC approach is different from other quantum gravity (QG) approaches (e.g., [3,10,43,50,54]) in that it specifically uses communication protocol-agreements to interpret the received and emitted QFs, it considers the computation-power necessary for implementing the Laws of Physics, and it proposes that an ensemble of information-communicating PCs creates both the SMP and GR in an ℝᵈ medium rather than assuming that SMP QFs occur in spacetime. Thus, this approach is a new form of obtaining emergent GR [18], meaning that GR occurs as an average effect of QFs. The PCs that are neighbors to each other in the ℝᵈ medium where the PCs are embedded are usually also neighbors in a spacetime sense, but not necessarily.
We use the term bit as the unit of "Shannon information amount" for classical binary units and the term qubit as the unit of "Shannon information amount" for maximally entangled quantum binary units; a set of P classical binary units has P bit, whereas a set of P maximally entangled quantum binary units has P qubit, which corresponds to 2P bit [2]. In the PC approach, information qubits are contained within a PC in a way that guarantees that the Bekenstein surface-information limit [4,5,29] (a.k.a. Bekenstein-Hawking boundary entropy) is valid; hence, all the PC's information is stored in the interface between the medium and the PCs, and there is at most 1 qubit per (d−1)-dimensional "square" with sides equal to a Planck length l_P = √(ℏ·G/c³) = 1.62·10⁻³⁵ m, where ℏ = h/(2π), with h the Planck constant, G the gravitational constant, and c the speed of light in vacuum. Although lattice-based QF simulations of SMP fields have been performed [9], they did not study the creation of the SMP from a multi-agent network of PCs as is proposed here. The volume of spacetime created by the PCs in the medium will have the known information processing limitations of SMP and GR [4,5,29,39].
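As a numerical cross-check of the Planck-length value quoted above, the following Python sketch evaluates l_P = √(ℏ·G/c³); the constant values are standard CODATA roundings and are an assumption of this illustration, not part of the original text:

```python
import math

# Physical constants in SI units (CODATA roundings; assumed values)
hbar = 1.054571817e-34  # reduced Planck constant h/(2*pi), J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light in vacuum, m/s

# Planck length: the side of the "square" holding at most 1 qubit
# under the Bekenstein surface-information limit
l_P = math.sqrt(hbar * G / c**3)
print(f"l_P = {l_P:.3e} m")  # -> l_P = 1.616e-35 m
```

The result agrees with the 1.62·10⁻³⁵ m quoted in the text.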
By extending the Bekenstein surface-information limit to the d-dimensional case: if A_[d−1] is the PC's [d−1]-dimensional "area" orthogonal to a PC "length" l_PC, and the distance d_PC between PCs is similar to l_PC, then, corresponding to a maximum PC time-step t_PC = l_PC/c compatible with a maximum information-transport velocity of c, the PC maximum information processing rate (PC-MIPR) for that t_PC time-step is [A_[d−1]/(l_P)^(d−1)]·[t_PC]⁻¹. We assume in the PC approach that d_PC ≈ l_PC, but if that does not occur, then the PC-MIPR is [A_[d−1]/(l_P)^(d−1)]·[d_PC/c]⁻¹. If ϑ is a shape factor, equal to π/4 for a circle and to 1 for a square, with the shape factor for the Planck "area" being ϑ_P and for the PC "area" being ϑ_PC, then the PC-MIPR for that time-step is simply [ϑ_PC/ϑ_P]·[l_PC/l_P]^(d−1)·[t_PC]⁻¹. Thus, if I_SMP is the information amount of the PC's SMP evolution equation expressed in qubits, then the smallest possible size for the PC is l_PC = [(ϑ_P/ϑ_PC)·I_SMP]^(1/(d−1))·l_P. Hence, the PC-MIPR for the smallest PC compatible with the SMP is I_SMP·[t_PC]⁻¹, whereas for a different PC time-step of τ = ζ·t_PC, with ζ a scaling factor, the PC-MIPR is then I_SMP·[ζ·t_PC]⁻¹, which signifies that the maximum information transport speed is then c/ζ. The characteristics of the shape factors for the Planck "area" and the PC "area" are experimentally unknowable, as only their ratio has an effect on the PC-MIPR, and for simplicity we will assume that ϑ_PC/ϑ_P = 1. The existence of information transport at the maximum speed c implies that ζ = 1, whereas the existence of tachyons would require that ζ < 1.
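The scaling relations above can be sketched numerically. The helper below is hypothetical (it is not from the original text) and assumes ϑ_PC/ϑ_P = 1, d_PC ≈ l_PC and d = 3; its purpose is only to illustrate how the PC-MIPR falls with the time-step scaling factor ζ:

```python
l_P = 1.62e-35   # Planck length, m (value from the text)
c = 2.998e8      # speed of light in vacuum, m/s

def pc_mipr(I_SMP, zeta=1.0, d=3):
    """Illustrative PC maximum information processing rate (qubit/s).

    Assumes the smallest PC holding I_SMP qubits on its surface,
    shape-factor ratio 1, and a time-step tau = zeta * t_PC, so the
    maximum information-transport speed is c / zeta.
    """
    l_PC = I_SMP ** (1.0 / (d - 1)) * l_P  # l_PC = (I_SMP)^(1/(d-1)) * l_P
    t_PC = l_PC / c                        # maximum PC time-step
    return I_SMP / (zeta * t_PC)

# Doubling zeta (halving the transport speed) halves the rate
r1, r2 = pc_mipr(3.3e8, zeta=1.0), pc_mipr(3.3e8, zeta=2.0)
assert abs(r1 / (2.0 * r2) - 1.0) < 1e-12
```

This only checks the ζ-scaling; the absolute rate depends on the shape-factor and d_PC assumptions stated above.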

Minimum physics-cell size for standard model
The association of a L_SM Lagrangian density to each PC simply symbolizes the calculations going on at each PC, as the L_SM of the contemporary SMP has only been tested for spacetime scales much larger than the size of the PCs, and in each PC those calculations can be occurring through some other formalism. The L_SM allows for the definition at each PC of a QF transition U(Φ_SM(x_I); Φ_SM(x_F)) between an initial Φ_SM(x_I) QF and a final Φ_SM(x_F) QF, see Eq. 1, where ∫ DΦ_SM from Φ_SM(x_I) to Φ_SM(x_F) expresses an integral over all QF arrangements between the initial Φ_SM(x_I) QF and the final Φ_SM(x_F) QF.
The tetravector x = (c·t, r⃗) is related to the local time t and the local 3-dimensional space vector r⃗ ∈ ℝ³. The QF transition U(Φ_SM(x_I); Φ_SM(x_F)) expressed in Eq. 1 is calculated by the qubits of the PC, instead of the Feynman diagrams-based transition probability that is calculated by the bits of a classical computer.
The {M} ≡ (f, s, p, l) label, together with the integer index b, characterizes the SMP ψ_b^{M} QF, where the Generation index is f ∈ {1,…,3} and hence N_f = 3, the Flavor index is s ∈ {1,…,4} and hence N_s = 4, the Parity index is p ∈ {1, 2} and hence N_p = 2, plus the Type index is l ∈ {1, 2} and hence N_l = 2; the two possible parities are L and R, and the two types are particle and anti-particle. The interaction of the QFs in the SMP [47] is defined as being those that are compatible with the local gauge-invariance of the matter fields, and it is done likewise in the PC approach. The gauge-invariance means that the transformation by an SU(n) local gauge-transformation of the phase of a QF ψ_b^{M} carries no meaningful physical alteration, where the transformation is given in Eq. 2, with the ⁽ⁿ⁾t_a being the matrix generators of the SU(n) group and the ⁽ⁿ⁾θ_a being real values; hence, the index b has the size appropriate for the corresponding ⁽ⁿ⁾t_a matrix generators of the SU(n) group, and the physical behavior of ψ is independent of such local gauge transformations.
The boson local gauge QFs are the B_{μ;(n);a}, with n = 1,…,N and a = 1,…,D[n], where each SU(n) gauge group has a number D[n] of generators, with each generator corresponding to a gauge-field, thus arriving at 12 local gauge QFs corresponding to 12 gauge boson QFs; for the SMP we have N = 3, and since D[1] = 1, D[2] = 3 and D[3] = 8, it is thus obtained that Σ_{n=1}^{3} D[n] = 12. The QF ϕ is the Higgs field, which is a spin-0 global gauge QF [47]. The ψ_{f,s} are the 24 fermion QFs (3 generations × 4 flavors × 2 "parity possibilities") with their corresponding gauge-invariances, and these 12 pairs of fermion QFs are then associated with the 12 masses m_{f,s} of the 12 fundamental fermion-particles [47]. Hence, the field state for the L_SM Lagrangian density perspective in Eq. 1 is the Φ_SM given by Eq. 4. The shortest wavelength ever experimentally detected was that of a 3.2·10¹¹ GeV proton [58], the so-called "Oh-My-God Particle," which using the de Broglie relation corresponds to a wavelength of about 4.0·10⁻²⁷ m. To be compatible with observations, the largest possible size for a PC should be considerably smaller than this, e.g., 10⁻²⁸ m. Assuming that d = 3, which will be justified in a later section, the maximum information content of a PC with sides equal to 10⁻²⁸ m would be of the order of 2.4·10²⁰ qubit, as in the PC approach the Bekenstein surface-information limit implies that in the maximum information-storage situation [4,5,29] all the PC information is stored in the outer surface of the PC. Thus, the largest possible experimental data-compatible PC can at most process about 7.0·10⁵⁶ qubit/s. A smaller PC contains less information and processes less information per unit time; hence, the smallest possible PC depends on the number of qubits necessary to represent the SMP, which depends on the number of SMP's degrees of freedom (DOF). At sufficiently high energy, above the top-antitop quark threshold, the SMP's DOF are at a maximum [30].
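The quoted "Oh-My-God" wavelength can be checked with the de Broglie relation; the sketch below assumes (as an ultra-relativistic approximation, not stated explicitly in the text) that p ≈ E/c at 3.2·10¹¹ GeV:

```python
h = 6.62607015e-34     # Planck constant, J*s
c = 2.99792458e8       # speed of light in vacuum, m/s
GeV = 1.602176634e-10  # joules per GeV

E = 3.2e11 * GeV       # "Oh-My-God" proton energy, J
lam = h * c / E        # de Broglie wavelength: lambda = h/p with p ~ E/c
print(f"lambda = {lam:.1e} m")  # -> lambda = 3.9e-27 m, i.e., about 4e-27 m
```

This reproduces the "about 4.0·10⁻²⁷ m" figure used to bound the largest possible PC size.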
To get an estimate of the required DOF, we include the Higgs particle with N_H = 1 DOF, and contributions beyond the SMP such as the symmetric metric tensor of GR, with N_g = 10 DOF. Moreover, it is described in later sections how the antisymmetric elements of the metric tensor require an additional 4 DOF, hence N_g = 14 DOF in total. Thus, the number of SMP's fermionic DOF is 2·(N_f·N_s·(D[1] + D[2])) = 96 (see Eq. 2), and the number of SMP's bosonic DOF is 2·(D[1] + D[2] + D[3]) + D[2] = 27, where the 2 in both cases is there to account for both particles and antiparticles, and the added D[2] in the second expression is needed because of the extra polarization state associated with the nonzero mass of massive vector bosons. The total number of DOF needed to define the most general state in the PC approach is hence 96 + 42 = 138.
Furthermore, the PC approach assumes that, due to Wick's theorem, the most general interaction involves at most 4 states, and any more complex interactions can be split into Feynman diagrams with no more than 4 legs. The strict conservation laws reduce the DOF of the 4th state given the 3 others. Accounting for conservation of energy-momentum, angular momentum, charge, color charge, weak isospin, and probability current, plus the CPT invariance of the SMP Lagrangian density L_SM, the 4th leg is restricted to 124 DOF. Thus, the maximum information needed by a PC to describe the most general interaction of states is about 138³·124 ≈ 3.3·10⁸ qubit.
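The DOF bookkeeping of the last two paragraphs can be reproduced directly; the sketch below follows the text's own grouping of terms:

```python
N_f, N_s = 3, 4          # generations and flavors
D = {1: 1, 2: 3, 3: 8}   # number of generators D[n] of each SU(n)

fermionic = 2 * (N_f * N_s * (D[1] + D[2]))  # particles and antiparticles: 96
bosonic = 2 * (D[1] + D[2] + D[3]) + D[2]    # extra D[2] for massive-boson polarization
N_H, N_g = 1, 14                             # Higgs; metric: 10 symmetric + 4 antisymmetric

total = fermionic + bosonic + N_H + N_g
assert fermionic == 96 and bosonic + N_H + N_g == 42 and total == 138

# Most general 4-leg interaction: three free legs of 138 DOF each,
# with the 4th leg restricted by the conservation laws to 124 DOF
I_SMP = total ** 3 * 124
print(f"I_SMP = {I_SMP:.1e} qubit")  # -> I_SMP = 3.3e+08 qubit
```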
The minimum PC size related to having I_SMP = 3.3·10⁸ qubit for ϑ_PC = ϑ_P, using l_PC = (I_SMP)^(1/(d−1))·l_P, is 1.5·10⁻³¹ m. This corresponds to a minimum PC time-step t_PC = 5.0·10⁻⁴⁰ s, which corresponds to a PC-MIPR of 6.6·10⁴⁷ qubit/s. This smallest PC size is in the range of the GUT scale, from 10⁻³⁰ to 10⁻³² m, and hence might be the GUT scale's origin. Moreover, τ_j = ζ_j·t_PC is the time-step associated with the discrete counting-mechanism in PC j, which is hence equivalent to a local discrete time-step at each PC.
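The time-step and processing-rate figures follow directly from the quoted minimum size, as a short check confirms:

```python
c = 2.998e8      # speed of light in vacuum, m/s
l_PC = 1.5e-31   # minimum PC size from the text, m
I_SMP = 3.3e8    # information of the most general interaction, qubit

t_PC = l_PC / c       # minimum PC time-step
mipr = I_SMP / t_PC   # PC maximum information processing rate
print(f"t_PC = {t_PC:.1e} s, PC-MIPR = {mipr:.1e} qubit/s")
# -> t_PC = 5.0e-40 s, PC-MIPR = 6.6e+47 qubit/s
```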
The PC having the smallest-possible size compatible with the SMP implies that the execution of the SMP calculations occurs at the highest-possible data-processing speed, which requires the maximization of information access, which according to the Bekenstein surface-information limit [4,5,29] requires that the information be stored at the outer surface of the PC, which is what is assumed in this approach. Moreover, using the Nyquist-Shannon sampling theorem [46,52,57], the maximum mass for an entangled quantum state, m_S, must obey the inequality m_S·c² ≤ h/(2·t_PC), thus m_S ≤ π·(t_P/t_PC)·m_P ≈ 3.6·10⁻⁴·m_P ≈ 7.9·10⁻¹² kg, where m_P = √(ℏ·c/G) is the Planck mass. The maximum mass for an entangled quantum state allowed by the PC approach is just slightly above the recent experimental value for the heaviest known entangled state, which weighed 7.7·10⁻¹² kg [32]. Hence, the PC approach obtains that the mass of the entangled quantum state obtained in ref. [32] is less than a third of the mass of the total available gas of ref. [32] because the maximum possible entangled mass had already been reached.

Labeling spacetime and energy-momentum
The contravariant de Broglie relation between momentum P and wavelength λ is P^μ = h·[λ⁻¹]^μ and is more often used in QM, where the elements of the {x_μ, P^μ} pair are related through P^μψ = iℏ·∂ψ/∂x_μ. The respective covariant de Broglie relation P_μ = h·[λ⁻¹]_μ is more often used in GR, where the {x^μ, P_μ} pair occurs because the covariant indices of the Einstein tensor originate from ∂/∂x^μ spatial derivatives of the metric tensor, and because in GR the Einstein tensor is proportional to the energy-momentum-stress tensor. For the PC labels to be able to represent both QF theory and GR with the minimum information needed, a choice must be made about which pair is used by the PCs, either the covariant 4-momentum or the contravariant 4-momentum. We will henceforth assume that the QM pair {x_μ, P^μ} is the pair used by the PCs for the spacetime labeling, whereas the GR pair {x^μ, P_μ} is considered not to be an appropriate label for the PCs.
We are using the (+ + +) Misner-Thorne-Wheeler sign convention [42] for GR, which implies that the Lorentz signature is (−, +, +, +). Hence, for each PC there are two types of labels associated with it: a 4-dimensional covariant location label x_μ associated with the PC, and a 4-dimensional inverse-wavelength label [λ⁻¹]^μ = P^μ/h = (E/(h·c), p⃗/h) associated with a path, where E is the energy and p⃗ = (p_x, p_y, p_z) is the 3-dimensional momentum. For simplicity, we call the inverse-wavelength label [λ⁻¹]^μ the wavelength label. To each of the two types of label corresponds a type of communication protocol-agreement, one for location and one for wavelength, the wavelength-protocol quantities being denoted by an overbar ¯.
The universality of both forms of communication protocol-agreements, one a location-label from a PC to its neighboring PC, and the other a wavelength-label from a PC path to a neighboring PC path, is proposed in the PC approach to have occurred by a Darwinian evolution during the cosmic inflation period [36] which maximized the flow of information between PCs. There is no a-priori reason to assume that the two types of protocol-agreements maximize the same type of information flow; hence, it is possible that maximizing the information flow for one of the protocol-agreements would minimize the information flow for the other protocol-agreement. This difference of information type for each of the two protocol-agreements is directly related to the Heisenberg inequalities [6,23]. Thus, each QF will have an amount of information appropriate for each protocol-agreement type, and so we obtain an altered form of Eq. 4, given by Eq. 5. The analysis of Eq. 1 using Eq. 5 obtains that the term inside the internal square brackets of the right side of Eq. 1 is, for the location-protocol, a phase matrix [u_rl] that is multiplied by vectors of the form of the upper part of Eq. 5, and similarly for the wavelength-protocol there is a [ū_rl] that is multiplied by vectors of the form of the lower part of Eq. 5. Hence, Eq. 1 becomes Eq. 6. To establish the appropriate protocol-agreements between PC j and its neighbors k, and internally with its previous iteration, the protocol-agreements must be able to identify QF alteration marks that identify those QF alterations as coming from a specific PC k, where the self-influence occurs when k = j. Thus, protocol-agreements are about how the l component of the QFs of Eq. 5 originating from PC k affects the r component of the QFs arriving at PC j. The concordance of the communication protocol-agreements is hence expressible through the terms [j][k]u_rl and [j][k]ū_rl.
By assuming that the change of information-amount between the two communication protocol types is only done in the PC, not in the medium, the transmission through the medium occurs only within the same type of communication protocol-agreement, and thus the terms mixing the two protocol types vanish: [j][k]u_r̄l = [j][k]ū_rl̄ = 0 (7).

General relativity defined by quantum fields
We will now assess the dimensionality of the ℝᵈ vector space medium. The maximization of the information transport between PCs implies that both the location-protocol and the wavelength-protocol information need to be easily communicated between PCs, regardless of any rotation that the PC might suffer in the ℝᵈ medium. Hence, the QFs should have U(1), SU(2), …, SU(d) local gauge invariances, where d is the dimensionality of the medium, as each rotation of the PC would necessarily change the corresponding representation of the QFs in the PCs relative to the neighboring PCs. Thus, since the SMP has U(1), SU(2), and SU(3) local gauge invariances, but no SU(4) local gauge invariance, meaning that N = 3, the dimensionality of the medium is likely to be d = 3, as was already assumed when calculating the PC-MIPR in a previous section. If the medium's dimensionality were higher than N, there would be rotations of the PCs that cannot be canceled by the gauge-invariance of the QF representations, and if the medium's dimensionality were lower than N, there would be no information-transport usefulness for the existence of such gauge-invariant QFs, as it would go against the Darwinian evolution's tendency to maximize the information-transmission capacity [36,45,62] of such communication protocol-agreements. Hilbert spaces are the representation basis in QM, but typically they cannot be used to represent SMP QFs as the number of particles is typically not preserved for SMP QFs [47,60,61]; nevertheless, the Hilbert space can be used for the cases where the characteristics of the system do not depend on the number of particles, which will be the case for this section. The Hilbert space is a vector space of states, where ⊗ is a tensor product of representations and ⊕ is a direct sum of representations.
To assess the consequences of [j][k]u_rl and [j][k]ū_rl, the QF alterations at the PC j after a time-step iteration are referred to as, respectively, Δ[j]Φ_SM and Δ̄[j]Φ_SM. Hence, the PC j emits QF alterations representing information about the two protocol-agreement types to its k neighboring PCs through Eq. 8. The average QF alteration in a volume V is the Q_V defined in Eq. 9. The volume V contains many PCs, and that Q_V average affects the evolution of that volume V through the effect the QF alterations have in Eqs. 5-8, where PCs are labeled j and k, and the average Q_V is simply the sum in volume V over all those PC-to-PC labeling terms of Eq. 8. This average constitutes a statistical ensemble, and just like all statistical ensembles in QM, it will be represented by a mixed state [47,51], which implies that its average value will result from the average contribution of each of the two protocol-agreement types, as given in Eq. 10. The Darwinian evolution toward the maximization of information transport between PCs, with its corresponding Entropy increase in the universe, implies that any QF entering a PC will tend to affect more the QFs like itself, as the self-propagation of QFs is what requires the least amount of information flow; hence, the elements of [j∈V][k∈V]u_rl and [j∈V][k∈V]ū_rl that are very different from zero will tend to divide those large matrices into smaller sub-matrices, such as [j∈V][k∈V]u_μν and [j∈V][k∈V]ū_μν, for both fermion fields ψ_ν^{M} and ψ̄_{ν;{M}} included in Eq. 5, and for both boson fields B_{ν;(n);a} and B̄_{ν;(n);a} also included in Eq. 5, respectively, and where {μ, ν} ∈ {0, 1, 2, 3} as is assumed in the PC approach. We neglect the contribution of the Higgs field ϕ as it is a global gauge-invariance QF, and not a local gauge-invariance QF as are the other bosonic QFs [47]; hence Φ_SM = [ψ ⊕ B].
The relation between Brownian motion and the diffusion tensor is valid in both classical mechanics [41] and QM [62], with only slight differences in how to interpret the source of the diffusion coefficients. Thus, Eq. 9 can be expressed as a sum of diffusion-like signal variations. The phase minimization in the diffusion-like signal variations is a common characteristic of Brownian motion [41], with the reduction of the amplitude of the signal described by a tensor D which in the PC approach is 4-dimensional. By use of the previous paragraphs, the average contribution of the QFs in a volume V thus becomes Eq. 11. The Fourier transformation F of the 4-dimensional Gaussian distribution in space from Eq. 10 obtains a 4-dimensional Gaussian distribution. The probability in a classical mechanics 3-dimensional Brownian motion of a displacement r⃗ is proportional to e^(−(1/2)·Σ_{j,k} r_j·[2·D_{j,k}·t]⁻¹·r_k), where D_{j,k} is the 3×3 diffusion tensor [41]. Likewise, for our 4-dimensional Gaussian distribution, the variance matrices are, respectively, 2·D·t and 2·D̄·t (Eq. 12). The probability maximization of this 4-dimensional Gaussian distribution in the two terms of Eq. 11 generates the two geodesic equations of Eq. 13 by simply using Eq. 12 to define the g location-protocol metric tensor of the medium in volume V, and the ḡ wavelength-protocol metric tensor of the medium in volume V.
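The Brownian-motion analogy above can be illustrated with a one-axis stochastic sketch (a scalar isotropic D is a simplifying assumption of this illustration): the displacement variance after time t should equal 2·D·t, the diagonal of the variance matrix quoted in the text.

```python
import random
import statistics

random.seed(1)
D, dt, n_steps, n_walks = 0.5, 1e-3, 1000, 4000
t = n_steps * dt                     # total diffusion time
sigma_step = (2.0 * D * dt) ** 0.5   # per-step standard deviation

finals = []
for _ in range(n_walks):
    x = 0.0
    for _ in range(n_steps):
        x += random.gauss(0.0, sigma_step)  # one Brownian increment
    finals.append(x)

var = statistics.pvariance(finals)
# Empirical variance should match 2*D*t (= 1.0 here) to sampling accuracy
assert abs(var - 2.0 * D * t) < 0.1 * (2.0 * D * t)
```

The same variance structure, per pair of axes, is what the 4-dimensional tensor D encodes in Eq. 12.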
The trajectories obtained from Eq. 11, for lengths much larger than ∛V, are random-walks in 4 dimensions, with the probability of having a step Δx^μ in a time-step parameter Δt being given by the 4-dimensional Gaussian with variance matrix [2·Δt]·D^{μν}. If the large integer number L is the number of steps in the random-walk, then the two geodesic trajectory equations maximizing the trajectory's probability for the location-protocol and for the wavelength-protocol are, respectively, the probability-maximization paths of Eq. 13. The necessary and sufficient condition for both geodesic equations of Eq. 13 to provide the same trajectories in spacetime is that Σ_ν g_{μν}·ḡ^{νλ} = δ_μ^λ, which is equivalent to Eq. 14. Henceforth, it is assumed that g^{μν} ≡ [[g]⁻¹]^{μν} and ḡ^{μν} ≡ [[ḡ]⁻¹]^{μν}; thus, Eq. 14 becomes [g_{μν}]⁻¹ = ḡ^{μν}. The Heisenberg inequalities in curved spacetime for the PC approach can be directly deduced from the effect of the semi-classical metric tensor on the Heisenberg uncertainty, because the metric tensor of Eq. 12 is an average of QFs across a volume V. For a semi-classical curved spacetime described by a metric tensor, and using Eq. 14 together with ref. [11], the commutation rules for the PC approach obtain the Heisenberg inequalities of Eq. 15. The curved spacetime Heisenberg inequalities of Eq. 15 are, for the PC approach, the same as the flat spacetime Heisenberg inequalities, which agrees with the description of the Heisenberg inequalities in the vicinity of the black hole event horizon proposed by ref. [22]. Because of the PC's finite size and using the Nyquist-Shannon sampling theorem, for an Earth-size object and for a V_PC at the scale of the PC's minimum size, it is obtained that V_PC/V ≈ 10⁻⁵⁵, which makes the Gaussian distribution-like behavior of Earth's trajectory an extremely sharp line-trajectory where the effects of the Heisenberg inequalities for such scales can be disregarded, as has been experimentally observed in all tests of GR (e.g., [56]).
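The compatibility condition of Eq. 14 can be checked numerically: choosing any invertible location-protocol metric g with Lorentz-like signature (the entries below are arbitrary illustrative values, not from the text), the wavelength-protocol metric is its matrix inverse and the contraction gives the identity.

```python
def inv4(m):
    """Gauss-Jordan inverse of a 4x4 matrix given as lists of lists."""
    n = 4
    a = [list(m[i]) + [float(i == j) for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))  # partial pivot
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        a[col] = [v / p for v in a[col]]
        for r in range(n):
            if r != col:
                f = a[r][col]
                a[r] = [v - f * w for v, w in zip(a[r], a[col])]
    return [row[n:] for row in a]

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Location-protocol metric g_{mu nu}: signature (-,+,+,+) plus small
# arbitrary off-diagonal entries (illustrative values only)
g = [[-1.00, 0.01, 0.00, 0.00],
     [ 0.01, 1.10, 0.02, 0.00],
     [ 0.00, 0.02, 1.20, 0.03],
     [ 0.00, 0.00, 0.03, 1.30]]

gbar = inv4(g)  # wavelength-protocol metric per Eq. 14

# Check sum_nu g_{mu nu} * gbar^{nu lambda} = delta_mu^lambda
prod = matmul(g, gbar)
ok = all(abs(prod[i][j] - float(i == j)) < 1e-12
         for i in range(4) for j in range(4))
assert ok
```

Pure-Python linear algebra is used here only to keep the sketch self-contained; any matrix-inversion routine would serve.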
We use the (+ + +) Misner-Thorne-Wheeler sign convention; hence, the spacetime connection elements are the Christoffel symbols $\Gamma^{\mu}_{\nu\lambda}$ [56]. The $P_{\mu}$ and $x^{\mu}$ amplitudes need to be preserved by the communication protocol-agreements of the PC approach, because such amplitude-preservation greatly stabilizes both the location protocol-agreement labels and the wavelength protocol-agreement labels associated with the PCs. Thus, based on the stabilization of the location and wavelength protocol-agreements of the PCs, field equations are obtained in Eq. 16 which are identical to the GR field equations [56]. If the correlations of Eq. 12 are symmetric, then the metric tensor is symmetric and identical to what GR predicts; however, in the PC approach the metric tensor can be asymmetric if the PCs do not force the symmetry.
Anyone familiar with the GR field equations would notice that it is quite common to use $[R - 2\Lambda]$ instead of $R$ in Eq. 16, where $\Lambda$ is the cosmological constant. It will for now be assumed that $\Lambda = 0$, as Einstein initially assumed, and the existence of a nonzero cosmological constant will be justified only in a later section of this work. Thus, if the $\wp$ symbol stands for probability-per-volume, then Eq. 16 inside volume $V$ becomes Eq. 17. Using Eq. 12 and the short range of the influence of all QFs except the neutrino and photon QFs, we obtain that the long-range metric tensors (long-range meaning a range much larger than the nucleus-size) will mostly depend on the QF bursts of neutrinos and photons. The information transmitted between PCs is likely represented by spin-up ↑ versus spin-down ↓ states, where by states we mean quanta of the respective QF. The quanta can be particles or virtual particles. In the SMP, for each QF the spin amplitude is a characteristic of that QF, but the orientation of the spin is only meaningful for the quanta of that QF. Thus, maximization of information transmission maximizes the correlation of spin-up ↑ states to spin-up ↑ states, and of spin-down ↓ states to spin-down ↓ states. Although information transmission in QM has been assessed in previous works [20], the behavior of information in the SMP must be treated differently [37], as the Hilbert spaces used in QM cannot in general be used in the SMP [26,60]; however, they can be used if the number of quanta is large and its exact amount is not relevant. Thus, as spin-coupling does not depend on the number of quanta provided that number is large, the Hilbert space can be used for the analysis in this section and the next two sections.

Effect of finite time-step of physics-cells
Each PC $j$ receives both QFs from each PC $k$; it then sums them to obtain, using Eq. 12 and Eq. 14, that if $^{[j]}V$ is a volume enveloping PC $j$, then Eq. 19 holds. It is then possible in each PC $j$ to alter the Lagrangian density $\mathcal{L}_{SM}$ by the addition of a semi-classical Einstein-Hilbert action in which the metric tensors are the semi-classical metric tensors $^{[j]}g_{\mu\nu}$ and $^{[j]}g^{\mu\nu}$ of Eq. 19, which use, respectively, the location protocol-agreement and the wavelength protocol-agreement transmission QF alterations. Thus, using Eq. 5, the two full lists of QFs being received at each PC continue to be fully separated between covariant location-protocol QFs and contravariant wavelength-protocol QFs, with the metric-tensor-generating QFs being the information transmission that allows full concordance between the PCs' communication protocol-agreements, hence maximizing the information transmission between PCs.
The transition probabilities at each PC, assuming the PC uses a Lagrangian formalism, are described by Eq. 6 and are then changed by two alterations using the semi-classical fields $^{[j]}g_{\mu\nu}$ and $^{[j]}g^{\mu\nu}$. The first alteration of the transition probabilities $U$ of Eq. 6 consists of the replacements $\eta_{\mu\nu} \to {}^{[j]}g_{\mu\nu}$, $\eta^{\mu\nu} \to {}^{[j]}g^{\mu\nu}$, and $dV \to \sqrt{-|{}^{[j]}g_{\mu\nu}|}\,dV$, plus the extra transformations of the tetrad basis used in the tetrad formalism [56]. The second alteration is the addition of the semi-classical Einstein-Hilbert action, in which the semi-classical fields $^{[j]}g_{\mu\nu}$ and $^{[j]}g^{\mu\nu}$ are mixed through the Christoffel symbols $^{[j]}\Gamma^{\mu}_{\nu\lambda}(g_{\mu\nu}, g^{\mu\nu})$. Thus, at each PC, these calculations are made, where the cosmological constant is not included for reasons described in the next paragraphs. The extra information stored in the PC because of the added semi-classical metric tensor calculations has already been included through the $N_g = 14$ DOF used in the calculation of $I_{SMP}$.
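The curved-spacetime volume-element factor used in the first alteration can be illustrated with a standard worked example. The sketch below is not from the paper: it uses the textbook Schwarzschild metric (with $c = G = 1$, $r_s$ the Schwarzschild radius) to compute $\sqrt{-\det g}$ for a diagonal metric, the factor that multiplies $dV$ in the curved-spacetime action:

```python
import math

def schwarzschild_metric(r, theta, r_s):
    """Diagonal entries of the Schwarzschild metric g_{mu nu},
    ordered (t, r, theta, phi), with c = G = 1."""
    f = 1.0 - r_s / r
    return [-f, 1.0 / f, r**2, (r * math.sin(theta))**2]

def volume_element_factor(g_diag):
    """sqrt(-det g) for a diagonal metric: the factor appearing in
    the replacement dV -> sqrt(-|g|) dV."""
    det = 1.0
    for g_ii in g_diag:
        det *= g_ii
    return math.sqrt(-det)

g = schwarzschild_metric(r=2.0, theta=math.pi / 3, r_s=1.0)
# det g = -r^4 sin^2(theta), independent of f: here -16 * 0.75 = -12
print(volume_element_factor(g))  # sqrt(12) ≈ 3.4641
```

Note that the $(-f)$ and $(1/f)$ factors cancel in the determinant, so the volume factor reduces to $r^2\sin\theta$, the familiar spherical measure.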
For each $\tau_{PC_j} = \zeta_j \cdot t_{PC}$ time-step of each PC $j$, the PC $j$ emits two QF changes, $\{\delta_{SM}, \bar{\delta}_{SM}\}$. The $^{[k]}\delta_{SM}$ describe how the QF output variations of a neighboring PC $k$ of a certain protocol-agreement type, during the time-step $\tau_{PC_j}$ of PC $j$, affect the QF in the vicinity of PC $j$ for the same protocol-agreement type. The contributions of the small $\delta_{SM}$ variations will amount to a diffusion-like process much slower than the diffusion-like process of Eqs. 10-12 that obtains the GR field equations of Eq. 16. The alteration caused by these variations creates a growing difference between the effective Ricci scalar $[R - 2\Lambda(\delta_{SM}, \bar{\delta}_{SM})]$ occurring in the $R^3$ medium and the Ricci scalar $R$ calculated in each of the PCs.
To actually implement the overall covariance across PCs that the sending of both $\delta_{SM}$ and $\bar{\delta}_{SM}$ aims for, a signal would also have to be emitted about the existence of a noise-like effect caused by the finite duration of the PCs' time-step; each PC would thus need to estimate its local $\delta_{SM}$ and $\bar{\delta}_{SM}$ prior to the occurrence of the time-step, which is not possible. A possible way of compensating, within the PC, for this lack of information about $\delta_{SM}$ and $\bar{\delta}_{SM}$ is the calculation by the PC of the local value of $2\Lambda(\delta_{SM}, \bar{\delta}_{SM})$. That value would be obtained by calculating the average energy density in the medium through the combined use of the $^{[past]}\{\delta_{SM}, \bar{\delta}_{SM}\}$ of the PC's previous time-step together with the combined $^{[k]}\{\delta_{SM}, \bar{\delta}_{SM}\}$ error-correction QFs sent to the medium by the other PCs $k$ and arriving at PC $j$, error-correction QFs that PC $j$ is also sending to its medium-sense PC neighbors. These $^{[k]}\{\delta_{SM}, \bar{\delta}_{SM}\}$ are the $\delta\psi$ QFs referred to in the Introduction.
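The error-correction averaging described above can be summarized as a toy sketch. This is entirely illustrative: the function name, the representation of the correction QFs as scalar amplitudes, and the mean-of-squares averaging rule are our assumptions, not the paper's:

```python
def estimate_lambda_term(past_delta, neighbor_deltas):
    """Toy sketch: a cell estimates its local cosmological-constant-like
    term from the energy density of its own previous-step correction QF
    together with the correction QFs arriving from neighboring cells."""
    samples = [past_delta] + list(neighbor_deltas)
    # average energy density taken as the mean of the squared amplitudes
    return sum(d * d for d in samples) / len(samples)

# A cell combining its own last-step residual with three neighbor
# corrections of comparable size:
print(estimate_lambda_term(0.01, [0.012, 0.009, 0.011]))
```

The point of the sketch is only that the estimate uses strictly past and incoming data, never the cell's own not-yet-computed correction, matching the causality constraint stated above.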
For such an error-correction approach to work properly, the cosmological constant energy density needs to be equal to the vacuum energy density of the $\{\delta_{SM}, \bar{\delta}_{SM}\}$ QFs; however, an equality between the cosmological constant energy density and the vacuum energy density of QFs has previously been found to be wrong by a factor of $10^{120}$ [12,40]. The here-proposed PC approach makes a very different prediction. The $\{\delta_{SM}, \bar{\delta}_{SM}\}$ are spin-0 QFs, as they are simply the difference between very similar QFs with the same spin orientation, and thus they will repel masses of equal sign [61], hence generating a repulsive gravitational field which is relevant for explaining cosmic inflation [33,36] and dark energy [21], but which had not been proposed until now. By using the PC approach, both the cosmological constant and the repulsive-gravity-like QFs $\{\delta_{SM}, \bar{\delta}_{SM}\}$ are obtained; thus, the full set of received/emitted QFs in the PC approach includes both the $SM$-type QFs and the $\{\delta_{SM}, \bar{\delta}_{SM}\}$ corrections. The major reasons for the elimination of the $10^{120}$ factor in the PC approach are: i. the QFs $\delta_{SM}$ and $\bar{\delta}_{SM}$ are much smaller than the $SM$ QFs; ii. the mass of the Higgs was found experimentally to be 125 GeV instead of the previously assumed 175 GeV [40]; iii. the energy density is proportional to the 4th power of the QF's amplitude [12]; iv. the thermal-equilibrium QF amplitude at each PC is proportional to the thermal energy of the PC's spacetime vicinity [40]. Eq. 21 describes the ratio between the Quantum Chromo-Dynamics (QCD) energy density for $\delta_{SM}$ in the PC approach, $\rho^{QCD}_{\delta}$, and the energy density calculated using the cosmological constant in GR, $\rho_{\Lambda}$. The parameters used here are the vacuum energy density of QCD in the SMP, $\rho^{QCD}_{SM}$, and the vacuum energy density of the electroweak transition in the SMP, $\rho^{EW}_{SM}$, but using the Higgs mass $m_{Higgs} = 125\,$GeV instead of the previously used mass-like scalar $m_{thresh} = 175\,$GeV [40].
The ratio between the $\delta_{SM}$ and $SM$ QF amplitudes, $[\delta_{SM}/SM]$, is calculated using the assumptions that both $\delta_{SM}$ and $SM$ are in thermal equilibrium with the CMB temperature $T_{vac} = 2.73\,$K and that the average PC time-step in a volume $V$ much larger than the PC size is $\tau_{PC} = \zeta \cdot t_{PC}$; moreover, the Taylor expansion of the energy eigenstates obtains $[\delta_{SM}/SM] = \frac{d}{2}\, k T_{vac}\, \zeta\, t_{PC}/\hbar$, which, combined with the remaining terms taken from refs. [12,40], implies that $[\delta_{SM}/SM]^4 = 10^{10}\cdot[0.83\cdot\zeta]^4$. Hence, by simply setting $\zeta = 1.0 \pm 0.1$, with the ± indicating the uncertainty of the values used, which implies $\tau_{PC} = t_{PC} = [5.0 \pm 0.5]\cdot 10^{-40}\,$s, we obtain from Eq. 21 that, as expected, $\rho^{QCD}_{\delta} = \rho_{\Lambda}$. Hence, the obtained $\tau_{PC}$ compatible with the experimental data implies that on average there are no tachyons and that the PCs are operating at their maximum processing speed. The energy density of QCD is about $10^{10}$ times higher than the other QFs' contributions to the energy density [12,40], so the contribution of the other QFs to the energy density is likely negligible. The obtained value of $\tau_{PC}$ implies a PC-MIPR of about $6.6\cdot 10^{47}$ qubit/s.
The $\{\delta_{SM}, \bar{\delta}_{SM}\}$ spin-0 QFs generate a repulsive gravitational field that explains some of the known characteristics of dark energy, such as its behavior as a cosmological constant [21,31]. In the next section, the relation between PCs and dark matter, through the occurrence of asymmetry in the metric tensor, is assessed.

Consequences of metric tensor asymmetry
The relation between the metric tensor and energy conservation [56] does not imply that the metric tensor is symmetric, as in the PC approach energy is not necessarily conserved. Moreover, there are some indications that local energy creation might be able to explain dark matter [27]. Thus, metric tensor asymmetry might be related to dark matter, as discussed in this section. The correlation of QFs in Eq. 12 can be tuned by the PCs so that the metric tensor is symmetric, but in this section we consider the case where the PCs do not force the metric tensor defined by the correlations in Eq. 12 to be symmetric.
To maximize the information transmission, it must be highly likely that the correlations of the boson QFs will amount to spin-2 quasi-boson spin patterns, here called the quasi-graviton. The quasi-graviton has the graviton corrections to the SMP which are expected [47], and the quasi-graviton's low force strength is explained here by its source being an average of mostly independent QFs, as described in Eq. 8; likewise, the correlations of the fermion QFs will amount to spin-1 quasi-boson spin patterns, which, using ref. [56], are here called the quasi-vierbein. Moreover, it is known that boson QF spinors commute [47], which implies that the boson correlations commute, whereas fermion QF spinors anti-commute [47], which implies that the fermion correlations anti-commute, with symmetry and anti-symmetry in the indices represented as $(\mu\nu)$ and $[\mu\nu]$, respectively. Thus, it is obtained that the $g_{(\mu\nu)}$ and $g^{(\mu\nu)}$ are associated with quasi-graviton spin-2 matrix QFs, whereas the $g_{[\mu\nu]}$ and $g^{[\mu\nu]}$ are associated with quasi-vierbein spin-1 vector QFs. The vierbein spin-1 gravitational field has previously been associated with asymmetry in metric tensors [25,56]. Several forms of GR where the metric tensor is not symmetric, or where the covariant derivatives do not commute, $\nabla_{\mu}\nabla_{\nu} \neq \nabla_{\nu}\nabla_{\mu}$, have been proposed by Einstein [17], Cartan [13], Hehl [24], and many others. In all those perspectives, the key definitions are those of the torsion and contorsion tensors.
For the SMP QFs, using $\gamma$ as the Dirac gamma matrices in standard notation, plus Eq. 5, plus the Dirac matter field (DMF) averaged over a volume $V$ being equal to $\psi_V$ (as Proca matter QFs are absent from the explicit aspect of the SMP Lagrangian density $\mathcal{L}_{SM}$), plus the corresponding spin angular momentum tensor [24], an approach to GR is obtained that is similar to the $U_4$ approach [24] and almost identical to the Einstein-Cartan approach [13]. For this approach, using the SMP matter fields, the geodesics remain orthogonal to the Cauchy hypersurfaces for Gaussian normal coordinates [25], the torsion tensor is completely anti-symmetric, and the GR field equation of Eq. 16 is still valid, but the energy-momentum-stress tensor $T^{\mu\nu}$ is no longer conserved [24,25]. Because the volumes over which the average is done are typically much larger than the nucleus scale, and because the kinetic energy of the interacting quarks is included in the mass term $m_B$, it is obtained that the characteristic mass-creation time is $\tau_{\{M\}} \approx 23.6\cdot 10^{9}$ years, with $\tau_{\{M\}} \geq \tau_{\{B\}}$. Thus, $\tau_{\{M\}}$ is about twice the age of the universe and agrees qualitatively with the results of ref. [24]. Hence, the mass creation of the PC approach does not significantly alter the trajectories of objects; however, it is shown below to be a possible explanation for dark matter.
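The symmetric/antisymmetric split underlying the quasi-graviton/quasi-vierbein distinction is the elementary matrix decomposition $g_{(\mu\nu)} = \frac{1}{2}(g_{\mu\nu} + g_{\nu\mu})$ and $g_{[\mu\nu]} = \frac{1}{2}(g_{\mu\nu} - g_{\nu\mu})$. A minimal numerical sketch (the $2\times 2$ example values are ours, chosen only to show a slightly asymmetric tensor):

```python
def sym_antisym_split(g):
    """Split a square matrix g into its symmetric part
    g_(mu nu) = (g + g^T)/2 and antisymmetric part
    g_[mu nu] = (g - g^T)/2; their sum reconstructs g."""
    n = len(g)
    sym = [[(g[i][j] + g[j][i]) / 2 for j in range(n)] for i in range(n)]
    anti = [[(g[i][j] - g[j][i]) / 2 for j in range(n)] for i in range(n)]
    return sym, anti

# A slightly asymmetric 2x2 example: off-diagonal entries 0.1 and -0.06
g = [[-1.0, 0.1], [-0.06, 1.0]]
sym, anti = sym_antisym_split(g)
print(sym)   # symmetric part, off-diagonal ≈ 0.02
print(anti)  # antisymmetric part, off-diagonal ≈ ±0.08
```

In the PC approach the symmetric part carries the GR-like metric and the antisymmetric part the torsion-generating component, so the smallness of the latter corresponds to near-symmetric correlations in Eq. 12.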
Using Eq. 25, if $\rho_{NS}$ is the mass density of a neutron star, $m_N$ is the mass of the neutron, and the mass-density threshold at which the anti-symmetric and symmetric parts of the metric tensor have similar amplitudes is, for the neutron, $\rho_N = \frac{c}{8\pi\hbar}\left[\frac{m_N}{l_P}\right]^2 \approx 10^{57}\,\mathrm{kg\,m^{-3}}$ [24], then the ratio between the maximum mass created by the torsion in the vicinity of the neutron star, $m_T$, and the mass of the neutron star, $m_{NS}$, is given by Eq. 26. Equation 26 implies that in the vicinity of a neutron star having a mass density of about $6\cdot 10^{17}\,\mathrm{kg\,m^{-3}}$, a typical value for such stars, a value of $m_T/m_{NS} \approx 6.0\cdot 10^{-40}$ is obtained. The recently compiled dark matter experimental data of ref. [19] obtained that their best models imply a galactic central-core mass density of either $3.7\cdot 10^{-3}\,M_{\odot}\,\mathrm{pc}^{-3}$ or $3.2\cdot 10^{-3}\,M_{\odot}\,\mathrm{pc}^{-3}$, depending on whether the model forbids or assumes the existence of a dark matter cloud outside the halo, where $M_{\odot}$ is the mass of the sun and pc stands for parsec. Hence, for the no-cloud and the yes-cloud models, the corresponding ratios are of the order of $4\cdot 10^{-40}$; the closeness of this value to the maximum value theoretically predicted by the PC approach to be caused by the asymmetric component of the metric tensor using Eq. 26, $m_T/m_{NS} \approx 6\cdot 10^{-40}$, is an indication that a possible source for the existence of dark matter near neutron stars is the creation of energy-momentum-stress by the torsion tensor obtained in Eq. 24 and then used in Eq. 25 to obtain Eq. 26. Thus, the PC approach's asymmetric metric tensor at $2/3$ of its maximum possible value can explain the dark matter data of ref. [19].
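The order-of-magnitude arithmetic above can be reproduced directly from the quoted values. This check assumes that Eq. 26 reduces to the simple density ratio $m_T/m_{NS} \approx \rho_{NS}/\rho_N$, which is what the quoted numbers imply:

```python
# Values as stated in the text:
rho_N = 1e57    # kg/m^3, spin-torsion threshold density for the neutron
rho_NS = 6e17   # kg/m^3, typical neutron-star mass density

max_ratio = rho_NS / rho_N
print(max_ratio)  # 6e-40, the maximum m_T / m_NS of Eq. 26

observed_ratio = 4e-40  # order of magnitude inferred from ref. [19]
print(observed_ratio / max_ratio)  # ≈ 0.67, i.e. about 2/3 of the maximum
```

The observed-to-maximum fraction of about 2/3 is the figure quoted in the closing sentence of the paragraph above.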

Discussion
The contemporary experimental Physics data agree well with both GR and the SMP, except for dark energy and dark matter; however, dark energy is well described by the cosmological constant [31], in agreement with the PC approach as can be seen in Eq. 21, whereas dark matter is proposed in the PC approach to be caused by the asymmetry of the metric tensor, as described by Eqs. 24-26. For the SMP, physical reality is made of QFs acting on a "vacuum state" $|0\rangle$, and those QFs express a stochastic capacity for creating and destroying quanta, meaning particles and anti-particles [47]. For GR, the metric tensor is itself the spacetime, as the metric tensor both allows and describes spacetime [56]. For the SMP and GR to be made compatible, it is often assumed, e.g., in the loop quantum gravity (LQG) approach, that the metric tensor needs to become a QF, and hence the area/volume of spacetime becomes discrete [3,50,54]. In the PC approach, by contrast, the metric tensor is a QF that occurs through the joint contribution of the SMP QFs but is not a part of the $\mathcal{L}_{SM}$ in each PC. Thus, in the PC approach the SMP QFs are forced to have GR-compatible covariance because of the communication protocols between the PCs: while in LQG the QFs in the $\mathcal{L}_{SM}$ are made to have GR-compatible covariance so that they can exist in a GR-compatible spacetime, in the PC approach the QFs have their covariance altered by the transmission of communication protocol-implementing QFs that make the QFs behave as if they were in a GR-compatible spacetime. Therefore, in the PC approach the GR-compatible spacetime is an epiphenomenon. It is likely that PCs appear and disappear just as quanta do in the SMP, since in the SMP the QFs are expressible as creation and destruction operators acting on a "vacuum state" $|0\rangle$.
For the process of creation and destruction of PCs to occur, there would need to be a QF operating in the time-free $R^3$ medium; but the only QF of the SMP that has global gauge invariance is the Higgs field, and thus the Higgs field is likely to be associated with the creation and destruction of PCs.
The Faddeev-Popov ghosts simply cancel the wrong counting of the gauge-variances of the QFs in the Feynman path-integral approach to the SMP QFs [47]; in the PC approach, however, the calculations across the PCs use the gauge-invariance of the PC representation in each PC, meaning that a gauge-alteration of the QFs does not in any way alter the representation communicated to the other PCs. Thus, in the PC approach the Faddeev-Popov ghosts do not need to be included in the $\mathcal{L}_{SM}$ calculated within the PC.
Although previous work has proposed the generalized uncertainty principle (GUP) as a way of making the Heisenberg inequalities compatible with a minimum length (e.g., refs. [1,14,28,59]), in the PC approach the GUP is not needed, as the PC size $l_{PC}$ is defined by the finite value of $I_{SMP}$ and not as a result of the Heisenberg inequalities. In the PC approach, the quantum uncertainty is expressed by the Heisenberg inequalities of Eq. 15 and not by a GUP.
The Heisenberg inequalities for energy and time [47,51] (see Eq. 15) imply, with $E_P = m_P\cdot c^2$ being the Planck energy, that a time uncertainty equal to the time-step $t_{PC}$ corresponds to a mass uncertainty of about $0.7\cdot 10^{-6}\cdot m_P \approx 1.52\cdot 10^{-14}\,$kg; but as the $R^3$ medium is time-free, both the medium "vacuum state" $|0\rangle_{R^3}$ and the dynamic "vacuum state" $|0\rangle_{PC}$ are compatible with the Heisenberg inequalities. Hence, it is possible to conceive that the dynamic "vacuum state" $|0\rangle_{PC}$ is a Darwinian evolution from an initial medium "vacuum state" $|0\rangle_{R^3}$.

Conclusion
There are experimental results which could show the PC approach to be false if the measured values differed from the predictions made by the PC approach. For example, there are two experimental tests that the PC approach passed which would have indicated its non-validity had the results been different: i. the experimentally obtained maximum mass for an entangled quantum state of approximately $7.7\cdot 10^{-12}\,$kg [32] is compatible with the PC prediction of $7.9\cdot 10^{-12}\,$kg; ii. the lack of a need for extra fundamental particles to make the SMP compatible with experimental data [7], as the existence of many more fundamental particles would alter the value of $I_{SMP}$, making the smallest PC size no longer at the GUT scale and making Eq. 21 no longer valid. The proposed PC approach hence obtains the following outcomes: 1. Physics identical to the SMP all the way down to the $l_{PC} = \sqrt[d-1]{I_{SMP}}\cdot l_P \approx 1.5\cdot 10^{-31}\,$m scale, similar to the GUT scale, while predicting a very different type of Physics below that scale. 2. Geodesic trajectories identical to those of GR, by defining the symmetric part of the metric tensor as a statistical correlation of bosonic QFs (mostly photons); it also describes why there is no perceivable quantum oscillation in the trajectories of macroscopic bodies traveling through spacetime under the effects of gravity. 3. The symmetric part of the obtained metric tensor originates mostly from the statistical correlation of photonic spin-1 QFs with spins pointing in the same orientation, which obtains the GR spacetime curvature as primarily arising from a massless, chargeless, spin-2 graviton-like QF, in agreement with what the SMP expected [47]. 4. The metric tensors $g_{\mu\nu}$ and $g^{\mu\nu}$ are a result of the average communication protocol-agreement concordance QFs, and thus they are not part of the PC's $\mathcal{L}_{SM}$, which helps explain the difficulty physicists have had in including a QG term in $\mathcal{L}_{SM}$.
There is no need for a QG term in the $\mathcal{L}_{SM}$ "processed" within the PCs, and hence there is no need to include a QG term in the $\mathcal{L}_{SM}$ used by physicists. 5. Heisenberg inequalities approximately independent of the spacetime curvature, which agrees with Hawking's description of the expected black hole radiation. 6. The energy density that GR predicts for the experimentally measured cosmological constant is shown to be equal to the energy density of the combined $\{\delta_{SM}, \bar{\delta}_{SM}\}$ spin-0 QFs, and the PC time-step is hence $5.0\cdot 10^{-40}\,$s. 7. The antisymmetric part of the metric tensor originates mostly from the statistical correlation of spin-$\frac{1}{2}$ QFs pointing in the same orientation, thus constituting a spin-1 quasi-vierbein QF. The antisymmetric part of the metric tensor is used to define torsion and contorsion tensors that generalize GR, hence going beyond the GR field equations so as to link the Dirac spinor QFs to the GR field equations; this allows for energy-momentum-stress creation in an amount that is not strong enough to significantly alter the trajectories of celestial bodies in the observable universe, but is nevertheless strong enough to possibly be the source of dark matter creation in the vicinity of neutron stars. 8. The validity of QM's de Broglie relation for a system with mass $m_S$, together with the Nyquist-Shannon sampling theorem, implies that for the PC approach $m_S \leq \frac{h}{2\,c^{2}\,t_{PC}} \approx 7.9\cdot 10^{-12}\,$kg, which, assuming 1 qubit per atom and the use of Holmium atoms [37,44], implies a maximum capacity for quantum computers of $29.0\cdot 10^{12}$ qubit. 9. The no-hiding theorem suggests a paradox between Hawking's semi-classical black-hole radiation prediction and the unitarity of QM [8,22].
The PC approach obtains that, for the radius of a Schwarzschild black hole [56] to be compatible with the PC approach, it must be larger than $l_{PC}$, and hence its mass $m_{BH}$ must obey $m_{BH} \geq \frac{1}{2}\frac{l_{PC}}{l_P}\, m_P$, which, combined with the relation for the heaviest possible QM-compatible mass $m_S$, obtains $\frac{m_{BH}}{m_S} \geq \frac{1}{4}\left[\frac{l_{PC}}{l_P}\right]^2 \approx 2.1\cdot 10^{9}$. Thus, the PC approach implies that black holes are always too heavy for the no-hiding theorem to apply to them, hence resolving the no-hiding theorem paradox.
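The qubit bound of outcome #8 follows from simple arithmetic on the quoted values. The Holmium atomic mass of about 164.93 u is standard data; the 1-qubit-per-atom assumption is the one stated in outcome #8:

```python
# Arithmetic check of outcome #8:
u = 1.66054e-27          # kg, atomic mass unit
m_holmium = 164.930 * u  # kg per Holmium atom
m_S_max = 7.9e-12        # kg, heaviest QM-compatible mass (outcome #8)

max_qubits = m_S_max / m_holmium  # 1 qubit per atom
print(max_qubits)  # ≈ 2.9e13, i.e. about 29.0e12 qubit
```

Dividing the maximum QM-compatible mass by the per-atom mass indeed reproduces the quoted capacity of about $29.0\cdot 10^{12}$ qubit.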
The outcome #8 establishes a limit on the maximum physically allowed number of qubits a quantum computer can have, which is relevant for future Computer Science approaches and is much smaller than previously suggested [34,35]. Moreover, the PC approach shows that great insight into Physics can be gained by taking a Computer Science perspective, which agrees well with previous work where a Computer Science perspective provided great insight into Mathematics [15]. Furthermore, the PC approach can be applied in developing new approaches to Computer Science, just as the wormholes of GR have already been used in proposing a new approach to distributed computing [55]. Finally, as quantum computers move from the use of QM toward the use of QF theory [38], a Computer Science approach to the basis of QF theory, such as the PC approach, is likely to open new perspectives on the future application of Computer Science to quantum computers.
Funding This work is supported by the European Structural and Investment Funds in the FEDER component, through the Operational Competitiveness and Internationalization Programme (COMPETE 2020) [Project nº 039479; Funding Reference: POCI-01-0247-FEDER-039479]. This work has also been supported by "FCT-Fundação para a Ciência e a Tecnologia" within the R&D Units Project Scope: UIDB/00319/2020.

Availability of data and material Not applicable.
Code Availability Not applicable.

Conflict of interest Not applicable.
Ethical approval Not applicable.

Consent for publication Author allows publication of manuscript.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.