1 Introduction: Was Zeno right? - A brief summary of Quantum Gravity and the in-depth structure of space and time

According to Plato, the great Greek philosopher, around 450 BC Zeno and Parmenides, disciple and founder of the Eleatic School, visited Athens ([59], Parmenides) and encountered Socrates, who was in his twenties. On that occasion Zeno discussed his world-famous paradoxes, “four arguments all immeasurably subtle and profound”, as claimed by Bertrand Russell [69].

In essence, Zeno’s line of reasoning used, for the first time, a powerful logical method, the so-called reductio ad absurdum, to demonstrate the logical impossibility of the endless division of space and time in the physical world.

Indeed, in his most famous paradox, known as Achilles and the tortoise, Zeno states that, if one admits as true the endless divisibility of space, in a race the quickest runner can never overtake the slowest, which is patently absurd, thus demonstrating that the original assumption of infinite divisibility of space is false.

The argument is as follows: suppose that the tortoise starts ahead of Achilles; in order to overtake the tortoise, in the first place Achilles has to reach it. In the time that Achilles takes to reach the original position of the tortoise, the tortoise has moved forward by some space, and therefore, after that time, we are left with the tortoise ahead of Achilles (although by a shorter distance). In the second step the situation is the same, and so on, demonstrating that Achilles cannot even reach the tortoise.

Despite the sophistication of this logical reasoning, today we know that the flaw in Zeno's argument was the implicit assumption that an infinite number of tasks (the infinite steps that Achilles has to complete to reach the tortoise) cannot be accomplished in a finite time interval. This is not true when the infinite sequence of time intervals spent to accomplish the tasks has a convergent sum, i.e. forms a convergent mathematical series.
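As a concrete illustration (with numbers chosen here purely for the sake of example), suppose Achilles runs at speed \(v_{\mathrm{A}}\), ten times faster than the tortoise, which enjoys a head start \(d\). The duration of the \(k\)-th catch-up step is \((d/v_{\mathrm{A}})(1/10)^{k}\), and the total time is the convergent geometric series

$$ t_{\text{tot}} = \frac{d}{v_{\mathrm{A}}} \sum_{k=0}^{\infty} \left( \frac{1}{10} \right)^{k} = \frac{d}{v_{\mathrm{A}}}\, \frac{1}{1 - 1/10} = \frac{10}{9}\, \frac{d}{v_{\mathrm{A}}}, $$

a finite time after which Achilles draws level with, and then overtakes, the tortoise.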

However, the line of reasoning reported above exerts a certain fascination on our brains, which reluctantly accept the fact that, in a finite segment, an infinite number of separate points may exist.

The mighty intellectual edifice of Mechanics developed by Newton has its foundations in the convergence of mathematical series, which serves to define the concept of the derivative (fluxions, to use the name originally proposed by Newton), a concept that is ubiquitous in physics. Classical Physics has this idea rooted in the postulate (often implicitly accepted) that physical quantities can be conveniently represented and gauged by real numbers.

At the beginning of the last century, the development of Quantum Mechanics revolutionised this secular perspective. Under the astonished eyes of experimental physicists, Nature acted incomprehensibly when investigated at microscopic scales. It was the genius of Einstein who fully intuited the immense intellectual leap that our minds were obliged to accomplish to understand the physical world. In a seminal paper of 1905 [30] the then unknown clerk at the Patent Office in Bern shattered forever the world of Physics by definitively proving, with an elegant explanation of Brownian motion, that matter is not a continuous substance but is rather constituted by lumps of mass that were dubbed Atoms by the English physicist Dalton in 1803. The idea that matter is built up by adding together minuscule indivisible particles is very old, sprouting again from a surprising insight of Greek philosophers. The word itself, Atom, which literally means indivisible, was coined by the ancient Greek philosophers Leucippus and Democritus, master and disciple, around 450 BC, in the same period in which Zeno was questioning the endless divisibility of space and time!

In 1905 Einstein completed the revolution in the physics of the infinitely small by publishing another milestone of human thought [31] in which he argued that light is composed of minuscule lumps of energy that were dubbed photons by the American physicist Troland in 1916.

The idea that the fundamental “bricks” of matter are indivisible particles characterised by universal properties, like mass and electrical charge, progressively settled into the physics world thanks to the spectacular discoveries of distinguished experimental physicists. In a quick overview of this hall of fame we have to mention (without claiming to perform a comprehensive review) Thomson, who discovered the electron in 1897, Rutherford, who discovered in 1911 that the positive charge of the Atom was concentrated in a small central nucleus, and discovered the proton in 1919, Chadwick, who discovered the neutron in 1932, Reines, who discovered the neutrino in 1956, following Pauli who in 1930 postulated its existence, Gell-Mann and Zweig, who proposed the existence of the quark in 1964, Glashow, Salam and Weinberg, who proposed the existence of the W and Z gauge bosons in the 1960s, discovered by Rubbia and van der Meer in 1983, and finally Higgs, Brout, Englert, Guralnik, Hagen, and Kibble, who postulated the existence of the Higgs boson in 1964, discovered at the CERN laboratories in 2012 by teams led by Gianotti and Tonelli.

Summarising, by the beginning of the third millennium physicists had developed and experimentally verified a quite coherent and theoretically robust picture of the world at small scales, which they dubbed with the rather unprepossessing expression the Standard Model of Particle Physics, where the central role of the indivisible fundamental bricks that build up the world is alluded to in the word “Particle”. After 2,500 years, the formidable intuition of the Greek philosophers has been confirmed: Democritus was right!

But what about Zeno? The mighty and flawless edifice of Calculus, developed by giants of human thought like Archimedes, Newton and Leibniz, and the elegant and audacious construction of Cantor, who demonstrated that even the endless divisibility of fractional numbers was not powerful enough to describe the immense density of the real numbers (and the name “real”, used by mathematicians for this type of number, alludes to the idea that they are essential to adequately gauge the objects of the physical world), seemed to have finally relegated the sophisticated logical arguments of the philosopher from Elea to the endless graveyard of misconceptions.

However, the inverse square law, the universal law discovered by Newton for gravitation, successfully extended by Coulomb to the realm of electricity, and effectively generalised by Yukawa in 1935 for a massive scalar field, contained the seed that would resurrect the old proposal of Zeno in the vivacious crowd of modern scientific thought.

The crucial point is that the indivisible discreteness of some fundamental properties, like mass or charge (which allowed the development of the very concept of an elementary particle, cornerstone of Quantum Field Theory, the mathematical formulation behind the Standard Model), is at odds with the generalised Yukawa potential widely used, at least in the lowest-order formulation, to describe the interaction of a pair of fermions in Quantum Field Theory. The crucial role of the Yukawa potential in the development of Quantum Field Theory is evident when using Feynman Diagrams (first presented by Feynman at the Pocono Conference in 1948) to represent the interaction of a pair of fermions. In simple words, the Yukawa potential diverges for r → 0 and is therefore in contrast with the existence of point-like particles.
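For reference, the generalised Yukawa potential for a mediator of mass \(m\) and coupling constant \(g\) has the standard textbook form (up to conventions for the coupling)

$$ V(r) = - g^{2}\, \frac{e^{-r/\lambda}}{r}, \qquad \lambda = \frac{\hbar}{m c}, $$

which reduces to the Newton/Coulomb \(1/r\) behaviour for a massless mediator (\(\lambda \rightarrow \infty\)) and, in both cases, diverges for \(r \rightarrow 0\).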

In our opinion, the essence of the conflict between the “granular” world of Quantum Particles (excited states of the fields) and the continuous manifold used to represent the Minkowski Space-Time over which the fields are defined has to be ascribed to the difficulty of fitting, within the same logical scheme, the indivisible nature of elementary particles and the infinite divisibility of the Space-Time over which Quantum Fields are defined.

To fully grasp this important aspect we must quickly summarise the stages through which the Fields, and the Space-Time on which they are defined, have become “actors” on the stage of physics playing an active supporting role, if not a dominant one, with respect to that of the Particles just discussed.

Together with Quantum Mechanics, General Relativity radically changed our understanding of Space and Time. According to the great philosopher Immanuel Kant, both these quantities are necessary a priori representations that underlie all other intuitions. Indeed, in his Critique of Pure Reason, Kant says: “Now what are space and time? Are they actual entities? Are they only determinations or also relations of things, but still such as would belong to them even if they were not intuited? Or are they such that they belong only to the form of intuition, and therefore to the subjective constitution of our mind, without which these predicates could not be ascribed to any things at all?” These fundamental issues, raised by the German philosopher, outline the sense of the immense epistemological revolution bravely fought by the audacious physicists of the nineteenth and twentieth centuries. Indeed, the seminal work of Maxwell and Einstein, just to mention the most prominent actors, has revealed that (electromagnetic) fields, space, and time are not a priori categories of human thought, but physical objects, susceptible to experimental investigation. Their physical properties would turn out, in the years to come, to be very different from those that our intuition could suggest to us.

The initial, albeit crucial, step of this investigation can be identified in Maxwell's proposal of adding the “displacement current” term to one of the electromagnetic laws already proposed by Coulomb, Faraday, and Ampère. The addition of this term establishes a complete feedback between the electric and magnetic fields, in the absence of charges or currents, and therefore confers on electromagnetic fields a physical reality that is independent of the presence of the charges and currents that generated them. Fields are no longer convenient mathematical tools to compute the forces acting on particles, but constitute physical objects endowed with their own independent existence! From the wave equation implied by these new laws, Maxwell obtained the constant that expresses the speed of propagation of these fields in vacuum.

The genius of Einstein understood that the combination of the constancy of the speed of light with the principle of relativity, proposed in 1632 by Galilei in his Dialogue Concerning the Two Chief World Systems, would unhinge our Newtonian conception of absolute Space and Time, independent of each other. This led him to the extraordinary conception of a deformable Space-Time, subject to the constraint of Lorentz invariance. The price to pay for this epistemological revolution, however, was the acknowledgement that, operationally (in the Bridgmanian sense of the term [19]), it is impossible to synchronise clocks, and/or to define distances, instantaneously or, in any case, faster than allowed by the speed of light in vacuum. This led Einstein to the intuition that Gravity too (the only other field known at the time) should propagate through a wave equation, at the same speed determined by Maxwell's equations. Indeed, in their weak-field limit, the field equations of General Relativity resemble Maxwell's equations, in the presence of the so-called Gravito-magnetic Field, a field generated by matter currents, in perfect analogy with the Magnetic Field generated by charge currents. Again, through the complete feedback determined by the equations relating temporal and spatial variations of the Gravitational and Gravito-magnetic Fields, a wave equation was capable of describing the propagation of Gravitational Fields through the vacuum, at the very same speed as the Electromagnetic Fields! The overall coherence of this epistemological revolution, imposed by Special Relativity, was guaranteed by acknowledging that Space-Time is a physical entity, subject to oscillations in its texture, and not a pair of a priori philosophical categories, as discussed by Kant.

In summary, in modern physics space and time have progressively changed their role: from mere passive containers of events (in line with the Kantian idea of mental categories) to physical quantities that, combined in the unique hyperbolic geometry implied by the constancy of the speed of electromagnetic waves, are able to deform under the gravitational action of the fields and of the particles. With due caution, the Space-Time of General Relativity can be considered, for all intents and purposes, a field with its own associated quantum particles (excited states of the field): the gravitons. In this unifying picture, macroscopic coherent states of a huge number of gravitons constitute the gravitational waves recently detected by the LIGO and Virgo observatories.

The tension between the granularity of quantum particles and the continuity of fields (defined over real variables) has been alleviated by renormalisation techniques, fully applicable to the Gauge Theories of Quantum Fields describing all fundamental forces except gravity, as shown by Gerard ’t Hooft. Renormalisation techniques have proven to be extremely effective in solving the problem of the infinities that arise when, in Quantum Field Theory, we try to combine point-like particles with fields diverging for \(r \rightarrow 0\). This approach is based on the existence of “charges” of opposite sign capable of producing, in the calculations of the associated physical quantities, terms of opposite sign which, although diverging, cancel each other out when treated with sufficient care.

Despite their success, renormalisation techniques seem to be inadequate when gravity comes into play. Because of the mass-energy equivalence predicted by Special Relativity, the natural generalisation of the source “charge” of the gravitational field is the entire energy density, and not only that associated with the rest mass of the particles. This implies that any type of field attempting to prevent gravitational collapse acts, through the (usually positive) energy density associated with it, as a further source of gravitational field, preventing, in fact, an effective renormalisation. This last feedback is difficult to eliminate within the framework just described and makes clear, in our opinion, the conceptual stalemate that prevents, at the present time, the unification of the two most revolutionary physical theories of the twentieth century: General Relativity and Quantum Mechanics.

Indeed, a novel ingredient, peculiar to General Relativity, prevents the propagation, in the surrounding Universe, of the oddities associated with a divergent field, by enshrouding the singularity with an Event Horizon, a surface on which time is frozen by the intensity of the gravitational grip. However, the formation of these Event Horizons around gravitational singularities is not guaranteed by the mathematical structure of the theory; singularities not surrounded by Event Horizons are dubbed Naked Singularities. In order to guarantee the self-consistency of the whole picture, in 1969 Roger Penrose conceived the so-called Cosmic Censorship Hypothesis, the conjecture that no naked singularities exist in the Universe [55]. Besides being an ad hoc conjecture, not stated in a completely formal way, it is the subject of a lively scientific debate regarding its validity (see e.g. the somewhat related Thorne-Hawking-Preskill “Black hole information bet”, [73], last chapter). In this perspective, Extended Theories of Gravity represent an approach to overcome the lack of a final theory of Quantum Gravity (see e.g. [23]).

To overcome this formidable impasse, theoretical physics is today exploring more radical approaches that require a new conceptual revolution, a paradigm shift, to use Kuhn’s words.

Here we just mention two approaches that tackle the problem of the apparently irresolvable dichotomy of particles and fields from somewhat opposite perspectives. String Theories (see e.g. [71] for reviews and later criticism of this approach) eliminate the point-like nature of the particles by assigning to each of them a (mono-)dimensional extension: the string. Loop Quantum Gravity (see e.g. [64] for reviews) questions the smoothness of Space-Time, quantising it into discrete levels, like those observed in ordinary quantum-mechanical systems, to form a complex pregeometric structure (to use the words of Wheeler) dubbed a Spin-Network.

Both proposed theories (although with different and somewhat opposite theoretical approaches) imply the existence of a minimal length for physical space (and time). The emergence of Atoms of Space and Time - to use an efficacious and vivid expression, coined by Smolin in 2006 - is a necessary consequence of the ultimate quantisation of Space-Time.

However, the spatial (and temporal) length-scales associated with this quantisation are minuscule in terms of standard units, as already suggested in a pioneering and visionary work of Planck in 1899 [57]: \(\ell _{\mathrm {P}} \sim \sqrt {\hbar G/ c^{3}} \sim 10^{-33}\) cm and \(t_{\mathrm {P}} \sim \sqrt {\hbar G/ c^{5}} \sim 10^{-43}\) seconds for the Planck length and time, respectively. For comparison, the shortest distance (Compton wavelength) directly measured to date at the Large Hadron Collider at CERN is \(\sim 10^{-20}\) centimeters (for colliding energies of a few \(10^{12}\) eV). The shortest time intervals ever measured are just above the attosecond scale, \(\sim 10^{-18}\) seconds (see e.g. [36]). Experimentally, at the present moment, we are more than ten orders of magnitude above the theoretical limit we would like to probe to effectively constrain our theoretical speculations!
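These orders of magnitude can be checked quickly from the fundamental constants; the following minimal Python sketch (ours, using the CODATA values shipped with scipy) reproduces the Planck length and time quoted above:

```python
from scipy.constants import hbar, G, c

# Planck length and time from the fundamental constants
l_P = (hbar * G / c**3) ** 0.5    # metres
t_P = (hbar * G / c**5) ** 0.5    # seconds

print(f"Planck length: {l_P * 100:.2e} cm")   # ~1.6e-33 cm
print(f"Planck time:   {t_P:.2e} s")          # ~5.4e-44 s
```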

For a quick (and not exhaustive) overview of the variety of theoretical approaches exploring the possibility of the existence of fundamental limits in the ability to measure (and therefore to define, in the Bridgmanian sense) intervals of arbitrarily small space and time, we follow, almost verbatim, what is reported in a recent work by some of us [20] and references therein.

Several thought experiments have been proposed to explore fundamental limits in the measurement process of time and space intervals (see e.g. [38] for an updated and complete review). In particular, Mead [48] chose to “postulate the existence of a fundamental length” (to use his own words) and discussed the possibility that this length is the Planck length, \(\ell _{\min \limits } \sim \sqrt {G \hbar /c^{3}} = \ell _{\mathrm {P}}\), which resulted in limitations on the measurement of arbitrarily short time intervals, giving rise to relations similar to the Space-Time Uncertainty relation proposed by [20]. Moreover, in a subsequent paper [48], Mead discussed an, in principle, observable spectral broadening, a consequence of the postulate of the existence of a fundamental length of the order of the Planck length. More recently, in the framework of String Theory, [83, 84] proposed a space-time uncertainty relation which has the same structure as the uncertainty relation discussed in the aforementioned paper [20] (see e.g. [85] for a discussion of the possible role of a space-time uncertainty relation in String Theory). The relation proposed in String Theory constrains the product of the uncertainties in the time interval \(c{\Delta}T\) and the spatial length \({\Delta}X\) to be larger than the square of the string length \(\ell_{\mathrm{S}}\), a parameter of String Theory. However, to use the same words as Yoneya [85], this relation is “speculative and hence rather vague yet”. Indeed, in the context of Field Theories, uncertainty relations between space and time coordinates similar to those proposed here have been discussed as an ansatz for the limitation arising in combining Heisenberg’s uncertainty principle with Einstein’s theory of gravity [29]. Garay [35] postulated and discussed, in the context of Quantum Gravity, the existence of a minimum length of the order of the Planck length, but followed the idea that this limitation may have a meaning similar to the speed limit defined by the speed of light in Special Relativity, in line with what had already been pointed out previously (see e.g. [78] and references therein). In the framework of Loop Quantum Gravity (see e.g. [64,65,66] for reviews) a minimal length appears characteristically in the form of a minimal surface area [12, 67]: indeed, the area operator is quantised in units of \(\ell _{\mathrm {P}}^{2}\) [63]. It has sometimes been argued that this minimal length might conflict with Lorentz invariance, because a boosted observer could see the minimal length further Lorentz contracted.

Indeed, some of the proposed theories allow for this Lorentz Invariance Violation (LIV, hereinafter) at some small scales (see e.g. [9, 42, 46] for reviews). Essentially, in these scenarios the presence of a granular structure of the space in which electromagnetic waves (i.e. photons, from the quantum point of view) propagate determines the emergence of a dispersion law for light in vacuum, in close analogy with what happens for the propagation of photons in a crystal lattice.

We should stress that not all ways of introducing spacetime granularity will produce these dispersive effects. In particular, in Loop Quantum Gravity the granularity is mainly reflected in a minimum value for areas which, however, is not a fixed property of geometry, but rather corresponds to a minimal (nonzero) eigenvalue of a quantum observable; this minimal area \(\ell _{\text {Planck}}^{2}\) is the same for all boosted observers, and what changes continuously in the boost transformation is the probability distribution of seeing one or the other of the discrete eigenvalues of the area (see e.g. [68]). However, in Loop Quantum Gravity there are results amenable to testing with gamma-ray telescopes, the most studied possibility being an anomalous dependence of frequency on distance, producing a flattening of the cosmological redshift [14].

The energy scale at which dispersion effects become manifest can be easily computed, e.g., by setting the photon frequency \(\nu \sim 1/t_{\mathrm {P}}\) in E = hν, which provides the Planck Energy \(E_{\mathrm {P}} \sim \sqrt {\hbar c^{5}/G} \sim 10^{28}\) eV, a huge energy for the particle world, corresponding to the mass of a paramecium (∼0.02 mg). Again, frustratingly, this energy scale is well beyond any possibility of direct investigation with any kind of collider in the near and distant future. It is worth noting that, in the simplest models, at lowest order, the dispersion law for the photon speed \(v_{\text{phot}}\) is dominated by the linear term: \(\delta v_{\text {phot}}/c \propto h\nu /\sqrt {\hbar c^{5}/G}\), with constant of proportionality \(\xi \sim 1\).
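The following minimal sketch (ours) reproduces the Planck energy and the corresponding fractional speed change for a hard X-ray photon, assuming the linear dispersion law quoted above with ξ = 1 and a 100 keV photon energy chosen purely for illustration:

```python
from scipy.constants import hbar, G, c, e

# Planck energy, E_P = sqrt(hbar c^5 / G), converted to eV
E_P_eV = (hbar * c**5 / G) ** 0.5 / e        # ~1.2e28 eV

# Fractional speed change for a 100 keV photon (illustrative choice),
# assuming delta v / c ~ xi * (E_photon / E_P) with xi = 1
E_photon_eV = 100e3
print(f"Planck energy: {E_P_eV:.2e} eV")
print(f"delta v / c:   {E_photon_eV / E_P_eV:.1e}")   # ~8e-24
```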

In our opinion, this unprecedented situation, in which the scale of the expected experimental phenomena is very far from the current possibilities of experimental verification, is hampering any significant progress in our understanding of the ultimate structure of the world. Physics is, after all, an experimental discipline in which continuous comparison with experimental data is essential, even to draw unexpected clues from which to develop new theories. This was the case for the development of Relativity and Quantum Mechanics, in which bold physicists and epistemologists had to develop new logical models to account for unexpected experimental results that were unimaginable for the classical conception of nature developed by Greek philosophers. Indeed, the fatal blow to the classical conception of physics developed up to Newton and Maxwell was given by the experimental impossibility of determining the speed of the Earth with respect to the Cosmic Aether (the medium in which electromagnetic waves were supposed to propagate), as firmly established by the null result of the Michelson and Morley experiment [51].

Indeed, in the context of Quantum Gravity, we are witnessing a flourishing of countless elegant, mathematically daring theories, which testify to the lively interest of brilliant minds in problems of undoubted physical and epistemological relevance but which, sadly, at the moment lack the invigorating and vitalising confrontation with constraining experimental data.

For comparison, the recent discovery of the Higgs Boson, which confirmed and strengthened the Standard Model of Particle Physics, the detection of Gravitational Waves, which confirmed what was predicted a century ago by General Relativity, and the recent spectacular image, obtained interferometrically, of the event horizon around a supermassive black hole, which confirmed the formation of trapped surfaces in the Space-Time fabric, have revitalised these very interesting fields of research by opening the doors to new disciplines such as multi-messenger astronomy [6].

However, we believe that a giant leap is now possible also in the difficult experimental task of investigating the texture of Space on the minuscule scales relevant for Quantum Gravity. In the following we will show how the technological progress in Space Sciences, and the enormous reduction in the costs necessary to put detectors into space, allow us to conceive an ambitious experiment to verify directly, for the first time, some of the most important consequences of the existence of a discrete structure for the texture of space. To put it suggestively, twenty-five centuries after the meeting of the Eleatic philosophers with Socrates in Athens, we are able to investigate the problem raised by Zeno in a quantitative way.

In particular, in line with the suggestions outlined in some pioneering works in the field of the experimental investigation of Quantum Gravity [10, 24], we propose an ambitious albeit robust experiment to directly search for the tiny delays in the arrival times of photons of different energies determined by the dispersion law for photons discussed above. Given the hugeness of the Planck Energy, we expect, as will be shown in Section 6.2, delays of ∼ a few microseconds for Gamma-Ray Burst photons (GRBs hereafter: sudden and unpredictable bursts of hard X-/γ-rays, with huge fluxes up to \(10^{2}\) ergs/cm2/s, emitted at cosmological distances) that have travelled for more than ten billion years!
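As a rough order-of-magnitude check (our own sketch, ignoring the proper cosmological integration over redshift of [39]), the delay accumulated by a ∼100 keV photon over a light-travel time of ten billion years, under the linear dispersion law with ξ ∼ 1, is indeed of the order of a few microseconds:

```python
from scipy.constants import hbar, G, c, e, year

E_P_eV = (hbar * c**5 / G) ** 0.5 / e    # Planck energy, ~1.2e28 eV
E_photon_eV = 100e3                      # hard X-ray GRB photon (illustrative)
travel_time_s = 10e9 * year              # ten billion years of light travel time

# Linear LIV dispersion: delta_t ~ xi * (E / E_P) * travel_time, with xi ~ 1
delta_t = (E_photon_eV / E_P_eV) * travel_time_s
print(f"Expected delay: {delta_t * 1e6:.1f} microseconds")   # ~ a few microseconds
```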

These last numbers show, in themselves, the difficulty and ambitiousness of the proposed experiment. We would like to emphasize here, however, that even a null result, that is a solid proof of the non-existence of a linear effect in the law of photonic dispersion for energies normalised to the Planck scale, would constitute a result of capital importance for the progress of fundamental physics. After all, the aforementioned Michelson and Morley experiment, decisive for the acceptance, in an understandably conservative scientific community, of the revolutionary ideas on space and time implied by the Theory of Relativity, provided a null result with respect to the possibility of identifying motion with respect to the Cosmic Aether!

A promising method for constraining a first-order dispersion relation for photons in vacuo is the study of discrepancies in the arrival times of high-energy photons of Gamma-Ray Bursts in different energy bands. Despite the relevant number of papers published in recent years (see e.g. [32] for a comprehensive analysis of Fermi-LAT gamma-ray burst data), we believe that the first-order dispersion relation has not yet been investigated with due accuracy because, at present, we lack an experiment with all the desired characteristics to effectively constrain this relation, beyond any possible loophole. In particular, our major concerns are possible intrinsic delays (characterising the emission process) superimposed on the tiny quantum delays. This is particularly evident in the caveat discussed in [7] on GRB090510 and, more recently, in the papers by [80] and [32], who set a robust constraint on the LIV energy scale of a few \(10^{17}\) GeV using Fermi-LAT GRB data. Further indications of no LIV come from the HESS collaboration, in particular from the spectral analysis of the blazar Mrk 501 [43], although in this case too a spectral shape and a hypothesis on the emission process are assumed. Moreover, all these analyses assume a dependence of the effects on redshift which was conjectured in the pioneering paper by [39]; however, as theorists acquire the ability to test the Jacob-Piran conjecture in explicit models, it is often found that other forms of redshift dependence apply (see e.g. [62]). In our opinion, given the importance of the question, a robust direct measurement cannot be based on the analysis of a single object: a robust statistical analysis of a rich sample of data is required, in which the natural timescale of the LIV-induced delays in the gamma-ray band (one microsecond) is thoroughly searched. None of the experiments discussed above had the right combination of time resolution and collecting area to effectively scrutinise this regime.

2 GrailQuest and its scientific case in a nutshell

The coalescence of compact objects, neutron stars (NS) and black holes (BH), and the sudden collapse to form a supra-massive NS or a BH, hold the keys to investigate both the physics of matter under extreme conditions, and the ultimate structure of Space-Time. At least three main discoveries in the past 20 years prompted such studies.

First, the prompt arcminute localisations of GRBs enabled by the instruments on board BeppoSAX allowed the discovery of their X-ray and optical afterglows [26, 76], which led to the identification of their host galaxies [50]. This confirmed the extragalactic nature of GRBs and assessed their energy budget, thus establishing that they are the most powerful accelerators in the Universe. Even accounting for strong beaming, the energy released can attain \(10^{52-53}\) erg, a large fraction of the Sun's rest-mass energy, in ≈ 0.1 − 100 seconds, produced by the bulk acceleration of plasmoids to Γ ≈ 100 − 1000 [7, 18].

Second, the Large Area Telescope (LAT) on board the Fermi satellite confirmed GRBs as GeV sources, as previously reported by the EGRET instrument on board the NASA Compton Gamma-Ray Observatory, confirming their capability to accelerate matter up to Γ ≈ 100 − 1000 and allowing us to apply, for the first time, the program envisioned by Amelino-Camelia and collaborators at the end of the 1990s [10] to investigate quantum space-time using cosmic sources.

Third, the recent discoveries of the gravitational wave signals from one NS-NS merger and several BH-BH mergers by Advanced LIGO and Virgo [1,2,3] opened a brand new window to investigate the astrophysics of compact objects, as well as fundamental physics. The gravitational signal carries a huge amount of information on the progenitors and on the final compact objects (masses, spins, luminosity, distance, etc.). Moreover, the current estimates of the merger rate (in excess of 12 Gpc\(^{-3}\) yr\(^{-1}\)) imply that the number of Gravitational Wave Events (GWEs hereafter) associated with the merging of two compact objects is significant.

These scenarios and limits will be further constrained and improved in the coming few years, when the sensitivity of the interferometers will be further improved and the corresponding volume for BH-BH and NS-NS merging events further enlarged. The activation of a third interferometer, Advanced Virgo, in August 2017, has already greatly improved the localisation capability of the Advanced LIGO/Virgo network, producing error boxes with areas of a few hundred square degrees, 10-100 times smaller than those provided by Advanced LIGO alone [3]. The localisation accuracy will shrink to a few tens of square degrees with the advent of the Kamioka Gravitational Wave Detector (KAGRA).

In August 2017 the first NS-NS merging event was discovered by LIGO/Virgo [4], with an associated short GRB seen off-axis and detected first by the Fermi Gamma-ray Burst Monitor (GBM) and INTEGRAL/SPI-ACS [5], and, only nine days after the prompt emission, by Chandra [74]. The GBM provided a position with an uncertainty of ∼12 degrees (statistical, 1σ, to which a systematic uncertainty of several degrees should be added). The LIGO/Virgo error boxes led to the first identification of an optical transient associated with both a short GRB and a GWE, opening, de facto, the window of multi-messenger astrophysics. This exciting new field of astrophysics research will allow us, in the immediate future, to obtain physical and cosmological information of paramount importance for our understanding of the GWE and GRB phenomena (see e.g. [56]).

These considerations show that, in the near future, the prompt and accurate localisation of the possible transient electromagnetic counterparts of GWEs is mandatory in order to fully exploit the power of scientific investigation of multi-messenger astronomy. Indeed, a high sensitivity to transient events in the X-ray/gamma-ray window, and their subsequent fast localisation with accuracies in the arcminute range or below, are mandatory in order to point narrow-field instruments to scrutinise the GWE's electromagnetic counterparts across the whole electromagnetic band.

In addition, as discussed in Section 1, GRB lightcurves in different energy bands, in the X-ray/gamma-ray window, with temporal resolution ≤ 1 microsecond, can be used to investigate a dispersion law for photons, predicted in some of the proposed theories of Quantum Gravity.

In summary, there are at least three broad areas that can and must be tackled in the next few years:

  1. the accurate (arcminute/arcsecond) and prompt (seconds/minutes) localisation of bright transients;

  2. the study of the transients' hard X-/gamma-ray temporal variability (down to the microsecond domain and below, i.e. three orders of magnitude better than the best current measurements), as a proxy for the physical activity of the so-called inner engine that powers the most powerful explosions in our Universe;

  3. the use of fast high-energy transients to investigate the structure of space-time.

We will discuss these three broad themes in the next Sections. We devote the last Sections to describing our proposed approach to tackling them; this consists of a distributed instrument, a swarm of simple but fast hard X-/gamma-ray detectors hosted by small/micro-satellites in low Earth orbit. The GrailQuest mission is specifically conceived to address these three main scientific themes.

3 Gamma-Ray Burst simulations and timing accuracy in cross-correlation analysis

3.1 Gamma-Ray Burst fast variability

GRBs are thought to be produced by the collapse of massive stars and/or by the coalescence of two compact objects. Their main observational characteristics are their huge luminosity and fast variability, often as short as one millisecond, as shown by [79], both in isolated flares and in lower amplitude flickering. These characteristics soon led to the development of the fireball model, i.e. a relativistic bulk flow where shocks efficiently accelerate particles. The cooling of the ultra-relativistic particles then produces the observed X-ray and gamma-ray emission. One possibility to shed light on their inner engines is through GRB fast variability. Early numerical simulations [40, 60, 72] suggested that the GRB lightcurve reproduces the activity of the inner engine. More recently, hydrodynamical simulations of GRB jets showed that, in order to reproduce the observed lightcurves, fast variability must be injected at the base of the jet by the inner engine, while slower variations may be due to the interactions of the jets with the surrounding matter [52].

The most systematic searches for the shortest timescales in GRBs so far are those of [45, 79] and [16]. The first two works exploit rather sophisticated statistical (wavelet) analyses, while the latter performs a parametric deconvolution of the bursts into pulses. Walker et al. [79] conclude that the majority of the analysed BATSE GRBs show rise times faster than 4 milliseconds, and 30% of the events have rise times faster than 1 millisecond (observer frame). MacLachlan et al. [45] use Fermi/GBM data binned at 200 microseconds (the accuracy of the GBM time tagging is 2 microseconds) and report somewhat longer minimum variability timescales than [79], but conclude that variability of the order of a few milliseconds is not uncommon (although they are limited by the wider temporal bin size of 200 microseconds and much poorer statistics with respect to the BATSE sample). Systematically longer timescales are reported by [16], using data binned at 1 millisecond. This is not surprising, because direct pulse deconvolution requires very good statistics, which can hardly be obtained for the shortest pulses.

3.2 Synthetic Gamma-Ray Bursts

To estimate the accuracy obtainable from cross-correlation analysis, \(E_{\mathrm{CC}}\), defined as the standard deviation σ of the distribution of delays obtained by applying cross-correlation techniques to pairs of simulated GRB lightcurves, we started by creating synthetic Long and Short GRBs with the following characteristics. The Long and Short GRBs considered have durations \({\Delta}t_{\text{Long}} = 25\) seconds and \({\Delta}t_{\text{Short}} = 0.4\) seconds, respectively. To simulate GRB variability on time-scales of \(\sim 1\) millisecond, we assumed that each GRB results from the superposition of a great number of identical exponential shots of decay constant \(\tau _{\text {shot}} \sim 1\) millisecond, randomly occurring at an average rate of \(\lambda_{\text{shot}} = 100\) shots/s for the entire duration of the GRB. The amplitude of each exponential shot is normalised to give a flux of 8.0 counts/s/cm2 in the energy band 50-300 keV, while the background photon flux in the same energy band has been fixed to 2.8 counts/s/cm2 (consistent with typical backgrounds observed by Fermi GBM).
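The following minimal sketch (our own illustrative code, not the simulation pipeline actually used) generates a synthetic light curve with the ingredients described above: exponential shots occurring at random times, plus a constant background, Poisson-sampled on a fixed time grid for a detector of a given effective area:

```python
import numpy as np

rng = np.random.default_rng(42)

def synthetic_grb(duration=0.4, tau_shot=1e-3, rate_shot=100.0,
                  src_flux=8.0, bkg_flux=2.8, area_cm2=1e6, dt=1e-4):
    """Synthetic GRB counts per bin (50-300 keV band), following Sect. 3.2.

    duration  : burst duration (s)
    tau_shot  : decay constant of each exponential shot (s)
    rate_shot : average shot rate (shots/s)
    src_flux  : mean source flux (counts/s/cm^2)
    bkg_flux  : background flux (counts/s/cm^2)
    area_cm2  : effective area (cm^2); 1e6 cm^2 = 100 m^2
    dt        : bin size (s)
    """
    t = np.arange(0.0, duration, dt)
    n_shots = rng.poisson(rate_shot * duration)
    t_shots = rng.uniform(0.0, duration, n_shots)

    # Expected source rate: superposition of identical exponential shots,
    # normalised so that the mean flux equals src_flux
    rate = np.zeros_like(t)
    amp = src_flux / (rate_shot * tau_shot)        # peak flux of a single shot
    for ts in t_shots:
        m = t >= ts
        rate[m] += amp * np.exp(-(t[m] - ts) / tau_shot)

    expected = (rate + bkg_flux) * area_cm2 * dt   # expected counts per bin
    return t, rng.poisson(expected)                # Poisson-randomised light curve

t, counts = synthetic_grb()
```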

Figure 1 shows the synthetic lightcurves for the long (top panel) and short (bottom panel) GRBs, respectively, calculated by accumulating photons on time scales of \(10^{-2}\) seconds. The simulated GRB millisecond variability can be inspected in greater detail in the insets of Fig. 1, in which a small fraction of the same lightcurves has been simulated after increasing the equivalent effective area of the detector up to 100 square meters and accumulating photons on timescales two orders of magnitude shorter (\(10^{-4}\) seconds).

Fig. 1 Lightcurves on timescales of \(10^{-2}\) seconds for the synthetic long (top panel) and short (bottom panel) GRBs created following the procedure described in Section 3.2. The insets show a zoom-in of the lightcurves created on shorter timescales (\(10^{-4}\) seconds) after rescaling the effective area of the equivalent detector up to 100 square meters

3.3 Fermi GBM Gamma-Ray Bursts

To further investigate the method we applied the same techniques to real data. In order to achieve the objectives described above, we performed Monte-Carlo simulations based on real detections of GRBs obtained with the GBM. We searched the available Fermi GBM archive for GRBs characterised by variability on time scales as short as a few milliseconds, in order to enhance the sensitivity of time delay measurements between photons of different energies as well as the localisation of the GRB prompt emission. For this work we selected the following events: a) a Short GRB (GRB120323507), observed on 2012 March 23, characterised by a \(t_{90}\) duration of \(\sim 0.4\) seconds and a fluence of \(\sim 1\times 10^{-5}\) erg/cm2; b) a Long GRB (GRB130502327), observed on 2013 May 2, characterised by a \(t_{90}\) duration of \(\sim 24\) seconds and a fluence of \(\sim 1\times 10^{-4}\) erg/cm2. Figure 2 shows the lightcurves of the two selected events accumulated on \(10^{-2}\) second timescales.

Fig. 2 Lightcurves on timescales of \(10^{-2}\) seconds for the Long (top panel) and Short (bottom panel) GRBs detected by Fermi GBM (see Section 3.3 for more details). The insets show a zoom-in of the simulated lightcurves created on shorter timescales (\(10^{-4}\) seconds) after rescaling the effective area of the equivalent detector up to 100 square meters

Simulations on short time scales (\(\sim 0.1\) millisecond) of a unique type of transient event such as a GRB, based on observed lightcurves, can be challenging when the effective area of the detector is so small that the statistics are fully dominated by the Poissonian fluctuations that unavoidably characterise the (quantum) detection process. In particular, if the number of counts detected within the given time scale is ≤ 1, quantum fluctuations of the order of 100% are expected. If, naively, the number of counts per bin is simply rescaled to account for an increase of effective area, these quantum fluctuations can introduce a false imprint of 100% variability with respect to the original signal. No definite solution is available to mitigate this problem, which could, however, be alleviated by re-binning and/or smoothing techniques. Although smoothing techniques allow us to create lightcurves for any desired temporal resolution, correlations between subsequent bins are unavoidable. Cross-correlation techniques are strongly biased by this effect, hence we opted for a more conservative method involving standard rebinning in which the number of photons accumulated in each (variable) bin is fixed. After several trials and Monte-Carlo simulations, we found that 6 photons per bin allows us to preserve the signal variability while introducing undesired fluctuations not larger than \(\sim 30\%\). Applying this rebinning technique to the GBM lightcurves (at the maximum time resolution of 2 microseconds) discussed above, we generated a variable-bin-size light curve. In order to produce a template for Monte-Carlo simulations, usable on any time scale, we linearly interpolated this light curve to create a functional expression (template) for the theoretical light curve. We note explicitly that linear interpolation between subsequent bins is the most conservative approach, as it does not introduce spurious variability on any time scale.
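A minimal sketch (our own illustrative code; the exact binning and edge conventions used for the analysis are not specified in the text) of the adaptive rebinning and template construction described above: event times are grouped into consecutive bins of six photons, each bin is converted into a count rate, and the resulting variable-bin-size light curve is interpolated linearly to obtain a template usable on any time scale.

```python
import numpy as np

def rate_template(event_times, photons_per_bin=6):
    """Adaptive rebinning: fixed number of photons per (variable-size) bin."""
    event_times = np.sort(np.asarray(event_times))
    n_bins = len(event_times) // photons_per_bin
    edges = event_times[:n_bins * photons_per_bin:photons_per_bin]
    edges = np.append(edges, event_times[n_bins * photons_per_bin - 1])
    widths = np.diff(edges)
    centres = 0.5 * (edges[:-1] + edges[1:])
    rates = photons_per_bin / widths            # counts/s in each variable bin

    # Linear interpolation between bin centres gives a template usable
    # on any time scale without adding spurious variability
    def template(t):
        return np.interp(t, centres, rates)
    return template
```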

For a given temporal bin size, we amplified the GRB template previously described in order to take into account the overall effective area of the detector(s), and used this value as the expectation value for the number of photons within the bin. Poissonian randomisation was then applied to produce a simulated light curve. The insets of Fig. 2 show the results of this process for the Long and Short GRBs described above, simulated for a timescale of \(10^{-4}\) seconds and an overall effective area of 100 square meters.
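Given such a rate template, this simulation step can be sketched as follows (again an illustrative outline under the same assumptions):

```python
import numpy as np

def simulate_lightcurve(template, t_start, t_stop, dt, area_ratio, rng=None):
    """Simulate a light curve from a rate template (counts/s).

    area_ratio : ratio between the simulated and the original effective area
    dt         : temporal bin size (s)
    """
    rng = rng or np.random.default_rng()
    t = np.arange(t_start, t_stop, dt)
    expected = template(t) * area_ratio * dt     # expected counts per bin
    return t, rng.poisson(expected)              # Poissonian randomisation
```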

3.4 Cross-correlation technique and Monte-Carlo simulations

Starting from the GRB lightcurves described above, we apply cross-correlation techniques to determine the time delay between two signals. Figure 3 shows an example of a cross-correlation function obtained by processing two GRB lightcurves simulated using the previously described template of the Short GRB observed by Fermi GBM (GRB120323507), rescaled to mimic a detector (or ensemble of detectors) with 100 square meters of effective area. In order to extract the temporal information on the delay, we fitted a restricted region around the peak of the cross-correlation function with an ad hoc model consisting of an asymmetric double exponential component (see inset in Fig. 3).
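A minimal sketch of the cross-correlation and peak-fitting step; the precise parametrisation of the asymmetric double exponential is not given in the text, so the functional form and fit window below are illustrative choices of ours:

```python
import numpy as np
from scipy.optimize import curve_fit

def ccf_delay(counts_a, counts_b, dt, fit_window=200):
    """Delay between two light curves from the peak of their cross-correlation."""
    a = counts_a - counts_a.mean()
    b = counts_b - counts_b.mean()
    ccf = np.correlate(a, b, mode="full")
    lags = (np.arange(ccf.size) - (len(a) - 1)) * dt

    # Asymmetric double-exponential model of the CCF peak (illustrative)
    def peak(x, amp, x0, tau1, tau2, const):
        return const + amp * np.where(x < x0,
                                      np.exp((x - x0) / tau1),
                                      np.exp(-(x - x0) / tau2))

    i0 = np.argmax(ccf)
    sl = slice(max(i0 - fit_window, 0), i0 + fit_window)
    p0 = [ccf[i0], lags[i0], 10 * dt, 10 * dt, 0.0]
    popt, _ = curve_fit(peak, lags[sl], ccf[sl], p0=p0, maxfev=10000)
    return popt[1]   # best-fit peak position = time delay
```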

Fig. 3 Cross-correlation function obtained by analysing simulated lightcurves generated from a template based on the Fermi GBM observation of the short GRB 120323507. See text for more details

To investigate the accuracy achievable with the method, for each GRB and each instrument effective area we performed 1000 Monte-Carlo simulations in which two lightcurves, generated by means of randomisation of the template, are cross-correlated. For each cross-correlation function we then fitted the peak, extracting the delay between the lightcurves. From the overall distribution of delays we calculated the standard deviation, which we interpret as a realistic estimate of the accuracy of the time delay measured with the cross-correlation method. The upper panel of Fig. 4 shows the distributions of delays obtained from 1000 Monte-Carlo simulations performed for the Long (GRB130502327) and the Short (GRB120323507) GRBs, assuming a total collecting area of 100 square meters.
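The Monte-Carlo estimate of \(E_{\mathrm{CC}}\) then amounts to repeating the simulation many times and taking the standard deviation of the recovered delays; an illustrative sketch, reusing the `simulate_lightcurve` and `ccf_delay` helpers sketched above:

```python
import numpy as np

def estimate_ecc(template, t_start, t_stop, dt, area_ratio, n_trials=1000):
    """E_CC: std of delays recovered from pairs of independently simulated curves."""
    rng = np.random.default_rng(1)
    delays = []
    for _ in range(n_trials):
        _, lc1 = simulate_lightcurve(template, t_start, t_stop, dt, area_ratio, rng)
        _, lc2 = simulate_lightcurve(template, t_start, t_stop, dt, area_ratio, rng)
        delays.append(ccf_delay(lc1, lc2, dt))
    return np.std(delays)
```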

Fig. 4 Upper panel: distribution of delays obtained by applying cross-correlation techniques to pairs of simulated lightcurves of the Long (top) and the Short (bottom) Fermi GBM GRBs (see text for more details), rescaled for an effective collecting area of 100 square meters. Each distribution is the result of 1000 Monte-Carlo simulations. The overlaid red line represents the best-fit normal distribution to the data. Lower panel: dependence of the cross-correlation accuracy on the effective area of the simulated instrument for the same Short and Long GRBs discussed in the upper panel. The red dashed line represents the best-fit model to the data

To proceed in the analysis of the technique, we investigated the dependence of the cross-correlation accuracy, \(E_{\mathrm{CC}}\), on the effective area of the instrument, which determines the number of photons collected from the GRB. To do that, we performed 1000 Monte-Carlo simulations for two Short (one synthetic and one real) and two Long (one synthetic and one real) GRBs, simulating four different instrument collecting areas, i.e. 1, 10, 50 and 100 square meters, for a total of 16000 simulations. We emphasise that each simulation performed on time scales of microseconds requires the creation of tens to hundreds of millions of photons to be allocated in lightcurves with tens of millions of bins, which are then cross-correlated in pairs. The overall process involved a substantial computational effort, which required more than 6000 hours of CPU time on a multi-core (128 logical processors) server and several terabytes of storage.

From the simulations of the synthetic GRBs (in the band 50-300 keV) we obtained the following relations between the cross-correlation accuracy, \(E_{\mathrm{CC}}\), and the number of photons in the lightcurves, \(N_{\mathrm{ph}}\): \(E_{CC \text {Long}} = 0.014 \mu \mathrm {s}\times (3.45\times 10^{8})^{0.634}\times N_{ph}^{-0.634}\) for the Long GRB and \(E_{CC \text {Short}} = 0.014 \mu \mathrm {s}\times (6.1\times 10^{8})^{0.609}\times N_{ph}^{-0.609}\) for the Short GRB.

From the simulations of the real GRBs observed with Fermi GBM (in the band 50-300 keV) we obtained the following results (see also the lower panel of Fig. 4): \(E_{CC \text {Long}} = 0.27 \mu \mathrm {s} \times (2.83\times 10^{8})^{0.542}\times N_{ph}^{-0.542}\) for the Long GRB and \(E_{CC \text {Short}} = 0.19 \mu \mathrm {s} \times (2.36\times 10^{7})^{0.536}\times N_{ph}^{-0.536}\) for the Short GRB.

We can express these last relations in terms of the GRB fluence F and the overall effective area of the detectors, A:

$$ E_{CC \text{Long}} = 0.27 \times \left[ \left( \frac{F}{10^{-4} \text{erg cm}^{-2}} \right) \left( \frac{A}{10^{2} \text{m}^{2}} \right) \right]^{-0.542} \mu\mathrm{s} $$
(1)
$$ E_{CC \text{Short}} = 0.19\times \left[ \left( \frac{F}{10^{-5} \text{erg cm}^{-2}} \right) \left( \frac{A}{10^{2} \text{m}^{2}} \right) \right]^{-0.536} \mu\mathrm{s} $$
(2)

As expected, the cross-correlation accuracy \(E_{\mathrm{CC}}\) scales roughly as the inverse square root of the GRB fluence F and of the detector effective area A. This shows that delays as small as a few microseconds can be detected with an effective area of \(\sim 1\) square metre.
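As a usage example (ours), Eqs. (1) and (2) can be evaluated directly for a given fluence and effective area; for the quoted fluences and a 1 m² detector they give accuracies of a few microseconds, consistent with the statement above:

```python
def ecc_long_us(fluence_cgs, area_m2):
    """Eq. (1): cross-correlation accuracy in microseconds (Long GRB)."""
    return 0.27 * ((fluence_cgs / 1e-4) * (area_m2 / 100.0)) ** -0.542

def ecc_short_us(fluence_cgs, area_m2):
    """Eq. (2): cross-correlation accuracy in microseconds (Short GRB)."""
    return 0.19 * ((fluence_cgs / 1e-5) * (area_m2 / 100.0)) ** -0.536

print(ecc_long_us(1e-4, 1.0))    # ~3.3 microseconds for a 1 m^2 detector
print(ecc_short_us(1e-5, 1.0))   # ~2.2 microseconds
```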

4 GrailQuest localisation capabilities

GrailQuest is designed to provide prompt (within seconds/minutes), arcminute-to-(sub)-arcsecond localisations of bright hard X-ray transients. This is the key to enable the search for faint optical transients associated with the GWEs and GRBs, because their brightness quickly fades after the event. In the GrailQuest concept, localisation is achieved by exploiting the delay between the transient’s photon arrival times at different detectors, separated by hundreds or thousands of kilometers. Delays are measured by cross-correlating the source signals detected by different instruments.

The working principle of GrailQuest can be easily understood by considering the analogy with radio interferometry.

In the case of radio interferometry performed with N radio telescopes with average spatial separation d, the theoretical spatial resolution of the interferometric array results from the combination of \(N_{\text{tot}} = N \times (N-1)/2\) (statistically dependent) pairs of interferometers, each having an angular resolution capability of

$$ \sigma_{\theta, \mathrm{i}} \sim f(\alpha;\delta)_{\mathrm{i}} \times \sigma_{\phi i} \times (\lambda/d), $$
(3)

where \(f(\alpha ;\delta )_{\mathrm {i}} \sim \mathcal {O}(1)\) is a function that depends on the position of the source in the sky (α and δ are the right ascension and declination, respectively) with respect to the orientation of the vector connecting the pair of antennas of the i-th interferometer, \(\sigma_{\phi i}\) is the uncertainty in the phase difference measurable by each pair of antennas, λ is the wavelength of the observation, and \(i = 1, \ldots, N_{\text{tot}}\). It is important to note that the number of statistically independent pairs is \(N_{\text{ind}} = N - 1\). In practice, however, it is useful to consider the whole set of \(N_{\text{tot}}\) equations to minimise the a priori unknown systematic effects on one or more radio telescopes. This system of \(N_{\text{tot}}\) equations can be solved for the 2 unknowns α and δ, giving a statistical accuracy of

$$ \sigma_{\alpha} \sim \sigma_{\delta} \sim g(\alpha;\delta) \times \sigma_{\phi} \times(\lambda/d)/ \sqrt{N_{\text{ind}} - 2}, $$
(4)

where \(g(\alpha ;\delta ) \sim \mathcal {O}(1)\) and \(\sigma_{\phi}\) are suitably weighted averages of \(f(\alpha;\delta)_{i}\) and \(\sigma_{\phi i}\), respectively. The factor \(\sigma_{\phi} \times \lambda\) represents the accuracy of the determination of the phase of the radio signal.

In the case of GrailQuest we can imagine that, because of the intrinsic variability of the signal of the transient sources, we are able to determine the analogue of the factor \(\sigma_{\phi} \times \lambda\) by cross-correlating the signal recorded by each pair of detectors of the GrailQuest constellation and determining the cross-correlation delay \({\Delta}t_{i}\). Indeed, since λν = c and \(\phi = {\int \limits } \nu dt \sim \nu {{\varDelta }} t\) for short signals (where c is the speed of light and ν is the frequency of the signal), \(\sigma_{\phi} \times \lambda = \nu \sigma_{{\Delta}t} \lambda = c \sigma_{{\Delta}t}\), where \(\sigma_{{\Delta}t}\) is a suitably weighted average (over the whole ensemble of detectors) of the accuracy in the determination of \({\Delta}t_{i}\). Therefore, the accuracy in the source position obtainable with a constellation of N satellites is

$$ \sigma_{\alpha} \sim \sigma_{\delta} \sim g(\alpha;\delta) (c/d)\sigma_{{{\varDelta}} t} / \sqrt{N - 3}. $$
(5)

Finally, we have to add in quadrature all the statistical errors in the determination of \(\sigma_{{\Delta}t}\). In particular we have:

$$ \sigma_{{{\varDelta}} t} = \sqrt{E_{\text{CC}}^{2}+E_{\text{POS}}^{2}+ E_{\text{time}}^{2}} $$
(6)

where \(E_{\text{CC}}\) is the cross-correlation accuracy between the lightcurves recorded by two detectors, \(E_{\text{POS}}\) is the error induced by the uncertainty in the spatial localisation of the detectors, and \(E_{\text{time}}\) is the error in the absolute time reconstruction. For large N, we adopt the reasonable values \(g(\alpha ;\delta ) \sim 1\) and \(N - 3 \sim N\), and set \(\sigma _{\alpha } \sim \sigma _{\delta } = \sigma _{\theta }\), where \(\sigma_{\theta}\) is the positional accuracy (PA hereinafter):

$$ \sigma_{\theta} \sim \frac{c}{d\sqrt{N}} \sqrt{E_{\text{CC}}^{2}+E_{\text{POS}}^{2}+ E_{\text{time}}^{2}}. $$
(7)

The absolute time and position reconstructions provided by commercial GPS systems are of the order of 10-30 nanoseconds and ∼10 meters (corresponding, again, to a few tens of nanoseconds), respectively. Moreover, we note that uncertainties in the arrival times coming from the detection process must be taken into account. However, the intrinsic detection process and the front-end electronics readout can achieve accuracies from sub-nanosecond to a few nanoseconds, and, with careful design of the digital electronics, a few nanoseconds timing can be achieved with heritage electronics. This leaves the error in the time delay inferred from the cross-correlation analysis as most likely the largest term in the time delay uncertainty.

Adopting a constellation of \(N_{100} = 100\) satellites, a baseline \(d_{3000} = 3000\) km \(= 3 \times 10^{8}\) cm, and \(E_{\text{CC}\,10\mu\mathrm{s}} = 10\ \mu\mathrm{s} = 10^{-5}\) s \(\gg E_{\text{POS}} \gg E_{\text{time}}\), we have

$$ \sigma_{\theta} \sim 20.6\, d_{{3000}}^{-1}\, N_{100}^{-1/2}\, E_{\text{CC} 10 \mu s}\ \text{arcsecond}. $$
(8)

The PA calculated above includes statistical errors only. Systematic errors are likely to be important, but at this proof-of-concept stage we can conclude that localisation at the sub-arcminute level is feasible with the above parameter settings.
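As a numerical check (our own sketch), Eq. (7) with the fiducial parameters above reproduces the ∼20.6 arcsecond accuracy of Eq. (8):

```python
import numpy as np
from scipy.constants import c

def sigma_theta_arcsec(n_sat, baseline_km, e_cc_s, e_pos_s=0.0, e_time_s=0.0):
    """Positional accuracy from Eq. (7), in arcseconds."""
    sigma_dt = np.sqrt(e_cc_s**2 + e_pos_s**2 + e_time_s**2)
    sigma_rad = (c / (baseline_km * 1e3)) * sigma_dt / np.sqrt(n_sat)
    return np.degrees(sigma_rad) * 3600.0

# ~20.6 arcsec for N = 100, d = 3000 km, E_CC = 10 microseconds
print(sigma_theta_arcsec(100, 3000.0, 10e-6))
```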

5 High energy transient localisation in the multi-messenger era

As of today, the observatories dedicated to the search for and study of hard X-ray transients are NASA's Swift and Fermi, and ESA's INTEGRAL.

Swift was launched in 2004 and is equipped with the wide field of view (FoV) Burst Alert Telescope (BAT) to localise transients, plus the narrow-field X-ray Telescope (XRT) and Ultra-Violet and Optical Telescope (UVOT), both of which are high-sensitivity telescopes for detailed observations of the transient afterglows. BAT is a coded-mask instrument with a FoV of ∼1/6 of the full sky and a collecting area of about 0.5 square metres [15]. It can provide GRB positions with accuracies of the order of 1 arcminute, depending on GRB strength and position in the FoV. XRT is a Wolter-I X-ray telescope, with a FoV of ∼30 arcmin\(^{2}\) and a collecting area of ∼200 square centimetres, that can provide positions with arcsecond accuracy for sources down to fluxes of \(\sim 10^{-14}\) ergs/cm2/s. Swift has the unique capability to slew from its original pointing position to the position of the transient in tens of seconds to minutes, to study the transient with its narrow-field telescopes.

INTEGRAL was launched in 2002 and is equipped with the wide field of view IBIS camera, with a FoV of ∼1000 square degrees and a collecting area of ∼1 square metre [75]. IBIS has a smaller FoV than BAT, but a better sensitivity, allowing the detection of fainter transients with respect to BAT. In addition to IBIS, the anti-coincidence scintillators of SPI, the high-energy spectrometer, can be used as an all-sky monitor to detect GRBs, with basically no independent localisation capability, but very useful as a point in the Interplanetary Network of GRB detectors.

Fermi was launched in 2008 and carries the GBM experiment, consisting of 12 NaI and 2 BGO scintillators, each with about 120 square centimeters of collecting area [49]. The GBM can provide GRB positions with accuracies of several degrees in the best cases.

Swift, INTEGRAL and Fermi are working nominally after more than 16, 18 and 12 years from launch, respectively, providing ∼arcminute positions (Swift, INTEGRAL) or 10-20 degree positions (Fermi) over a large fraction of the sky. Their predicted lifetimes would extend the missions through the 2020s, but the equipment is ageing and it is unknown how long they will survive after 2020. This time window is crucial for two main reasons:

  1. The Advanced LIGO/Virgo detectors will reach their final sensitivity and best localisation capability for GWEs in a few years. KAGRA joined the network at the beginning of 2020. However, a fifth interferometer, LIGO-India (expected in 2025), will be required in the network to provide positions with accuracy smaller than 10 degrees for a large fraction of GWEs. On the other hand, the improved sensitivity will increase the distance at which an event can be observed, to several Gpc for BH-BH events and hundreds of Mpc for NS-NS events, thus increasing the cosmic volume that can be studied. The number of optical transients in such huge volumes ranges from many tens to several hundreds, making it difficult to identify the one associated with the GWE. The number of high-energy transients in the same volume is much smaller, greatly helping the identification. It is instructive to consider the first identification of an electromagnetic transient with a GWE, which occurred on August 17, 2017. The Fermi GBM observed a gamma-ray burst within a few seconds of the GW detection. The combined LIGO/Virgo error box was of the order of 30 square degrees [4]. However, the LIGO/Virgo detection indicated a very close event (∼40 Mpc), greatly limiting the number of target galaxies. An optical transient from one of these nearby galaxies was soon discovered. There were thus two key elements that allowed the discovery and localisation of the optical transient associated with the GWE: a) the prompt gamma-ray detection by the Fermi GBM (and the Interplanetary Network triangulation with INTEGRAL), and b) the relatively limited volume that had to be searched. For fainter events, farther away, such as those that will likely be provided by ground-based interferometers during the 2020s, the volume to be searched will be much larger. The third observing run of LIGO and Virgo already revealed events more distant than GW170817, for which a well-localised high-energy counterpart becomes crucial to detect the multi-wavelength signal and identify the host galaxy. The third generation of gravitational wave detectors, e.g. the Einstein Telescope, is expected after 2030; at that time the localisation of possible GRB counterparts will be crucial (see e.g. [25]) and GrailQuest will be fundamental in this respect.

  2.

    By the early 2030s, ESA will launch its L2 mission Athena, carrying the most sensitive X-ray telescope and the highest energy resolution detector (XIFU) ever built. Among the core science goals of Athena are spectroscopic observations of bright GRBs, used as light-beacons to X-ray the inter-galactic medium (IGM). These observations may lead to the discovery and the characterisation of the bulk of the baryons in the local Universe, in the form of a warm IGM (a few million K), through absorption line spectroscopy (see e.g. [34]). Athena will also target high-z GRBs, to assess whether they mark the final fate of elusive Pop-III stars (through the measurement of the abundance pattern expected from the explosion of a star made only of pristine gas). Indeed, very massive Pop-III stars are thought to collapse into proto-black holes. Subsequent accretion through a temporary disc could produce an energetic jet which, in turn, generates a burst of TeV neutrinos. This population of high energy neutrinos could be detected by the enhanced sensitivities of high-energy neutrino detectors such as AMANDA-II and IceCube [70]. This high redshift GRB population is intrinsically faint and therefore an ideal target for the unprecedented sensitivity of GrailQuest. Moreover, because of the high redshift, quantum gravity time delays (if detectable) are significant in these systems.

For these reasons several missions aimed at localising fast high-energy transients have been and will be proposed to NASA (MIDEX class) and ESA (M class), to guarantee that the study of these elusive sources can be operative and efficient during the next decades. GrailQuest will offer a fast-track and less expensive fundamental complement to these missions, since it will be an all-sky monitor able to spot transient events everywhere in the sky and to give a fast (within minutes) and precise (from below 1 degree to arcsecond, depending on the GRB flux and time variability) localisation of the event. This is extremely important to allow follow-up observations of these events with the sensitive narrow-field instruments of future complex and ambitious missions in all the bands of the electromagnetic spectrum (from radio to IR/Optical/UV and to X-rays and gamma rays).

The main parameters affecting the discovery space in this area are: 1) number of events with good localisation; 2) quality of the localisation; and 3) promptness of the localisation. GrailQuest will ensure all these three characteristics and will be fundamental to thoroughly study the electromagnetic counterparts of GWE.

6 Transients as tools to investigate the structure of space-time

6.1 GrailQuest Constellation as a single instrument of huge effective area

Once the times of arrival (ToA) of the photons in each detector of the GrailQuest constellation are corrected for the delays induced by the position of the GRB in the sky, as deduced from the optical identification of the counterpart, it is possible to add all the photons collected by the N detectors of the constellation to obtain a single lightcurve equivalent to that of a single detector of effective area Atot = N a, where a is the effective area of each detector. In doing this an error in the ToA of each photon is introduced, because of the uncertainty in the position in the sky. However, since the optical counterpart will be known to within 1 arcsecond or below, the induced errors in the ToA are negligible.
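
As an illustration of this procedure, the following minimal sketch (in Python) refers all photon times of arrival to a common reference point and merges them into a single lightcurve; the arrays of arrival times, the satellite positions and the function name are hypothetical and are not part of any actual GrailQuest pipeline.

    import numpy as np

    C_KM_S = 299792.458  # speed of light [km/s]

    def merge_photon_toas(toas_per_sat, sat_positions_km, grb_direction):
        """Refer photon times of arrival to a common reference point.

        toas_per_sat     : list of N arrays of photon arrival times [s]
        sat_positions_km : list of N (3,) vectors, satellite positions with
                           respect to the reference point (e.g. the geocentre)
        grb_direction    : (3,) unit vector towards the GRB, known to about
                           1 arcsecond from the optical counterpart
        """
        n_hat = np.asarray(grb_direction, float)
        n_hat /= np.linalg.norm(n_hat)
        corrected = []
        for toas, r in zip(toas_per_sat, sat_positions_km):
            # A satellite displaced by r towards the GRB intercepts the plane
            # wavefront earlier by (r . n_hat)/c; adding this delay refers its
            # photons to the common reference point.
            corrected.append(np.asarray(toas, float) + np.dot(np.asarray(r, float), n_hat) / C_KM_S)
        # The sorted, merged list is the lightcurve of a virtual detector of
        # effective area A_tot = N * a.
        return np.sort(np.concatenate(corrected))

The correction reduces to a simple plane-wave projection because any GRB is effectively at infinite distance compared with the size of the constellation.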

6.2 Is Vacuum a dispersive medium for photons?

As discussed in Section 1, several theories proposed to describe quantum space-time predict a discrete structure for space on small scales, \(\ell _{\min \limits } \sim \ell _{\mathrm {P}}\). For a large class of these theories this space discretisation implies the onset of a dispersion relation for photons, which could be related to the possible break or violation of Lorentz invariance on such scales. Special Relativity postulates Lorentz invariance: all observers measure the same speed of light in vacuum, independent of photon energy, which is consistent with the idea that space is a three dimensional continuum. On the other hand, if space is discrete on very small scales, it is conceivable that light propagating in this lattice exhibits a sort of dispersion relation, in which the speed of photons depends on their energy. These LIV models predict a modification of the energy-momentum “dispersion” relation of the form

$$ E^{2} = (pc)^{2} + (mc^{2})^{2} + {{\varDelta}}_{\text{QG}}(E, p^{2}, M_{\text{QG}}) $$
(9)

where E is the energy of a particle of (rest) mass m and momentum p, and MQG = ζMP is the mass at which quantum space-time effects become relevant, where \(\zeta \sim 1\), and (since Special and General Relativity were thoroughly tested in the last century) \(\lim _{E/(M_{\text {QG}}c^{2}) \rightarrow 0} {{\varDelta }}_{\text {QG}}(E, p^{2}, M_{\text {QG}}) = 0\) (see e.g. [8]).

In a very general way, the equation above can be used to determine the speed of a particle (in particular a photon), given its energy. Moreover, when two photons of different energies, E2 − E1 = ΔEPHOT, emitted at the same time, travel over a distance DTRAV (short with respect to the cosmic distance scale, i.e. a distance over which the cosmic expansion can be neglected, see below), because of the dispersion relation above, they exhibit a delay ΔtLIV. It is possible to express this relation as a series expansion around its limit value ΔtLIV = 0 (in line with what is discussed above we must have the following asymptotic condition: \(\lim _{E_{\text {PHOT}}/(M_{\text {QG}}c^{2}) \rightarrow 0} {{\varDelta }} t_{LIV} = 0\)) as:

$$ {{\varDelta}} t_{LIV} = \pm \xi (D_{\text{TRAV}}/c) [{{\varDelta}} E_{\text{PHOT}}/(\zeta M_{\mathrm{P}}c^{2})]^{n} $$
(10)

where \(\xi \sim 1\) is the coefficient of the first relevant term of the series expansion in the small parameter ΔEPHOT/(MQGc2), and the sign ± takes into account the possibility (predicted by different LIV theories) that higher energy photons are slower or faster than lower energy photons (the subluminal (+ 1) and superluminal (− 1) cases of [9], respectively). Note that ξ = 1 in some specific LIV theories (see e.g. [9, 10], in particular their equation 13). The index n = 1 or 2 gives the order of the first non-zero term in the expansion.

When the distance traveled by the photons is comparable to the cosmic distance scale, the term DTRAV/c must be changed into DEXP/c to take into account the effect of a particle propagating into an expanding Universe. The comoving trajectory of a particle is obtained by writing its Hamiltonian in terms of the comoving momentum [39]. The distance traveled by the photons, in a general Friedman-Robertson-Walker Cosmology, is determined by the different mass-energy components of the Universe. These energy contents can be expressed in units of the critical energy density \(\rho _{\text {crit}} = 3 {H_{0}^{2}}/(8\pi G) = 8.62(12) \times 10^{-30} \text {g/cm}^{3}\), where H0 = 67.74(46)km/s/Mpc is the Hubble constant (see [58], for the parameters and related uncertainties). Considering the different dependencies on the cosmological scale factor a, it is possible to divide the energy components of the Universe into: ΩΛ = ρΛ/ρcrit, ΩM = ρMatter/ρcrit, ΩR = ρRadiation/ρcrit, Ωk = 1 − (ΩΛ + ΩM + ΩR). With this notation it is possible to express the proper distance DP at the present time (or comoving distance) of an object located at redshift z as:

$$ D_{\mathrm{P}} = \frac{c}{H_{0}} {{\int}_{0}^{z}} dz \frac{1}{\sqrt{f({{\varOmega}},z)}}, $$
(11)

where

$$ f({{\varOmega}},z) = (1+z)^{3(1+w)}{{\varOmega}}_{\Lambda} + (1+z)^{2}{{\varOmega}}_{\mathrm{k}} + (1+z)^{3}{{\varOmega}}_{\mathrm{M}} + (1+z)^{4}{{\varOmega}}_{\mathrm{R}}. $$
(12)

On the other hand, the term DEXP has to take into account the fact that the photon energy varies as the Universe expands. Photons of different energies are affected by different delays along the path, so, because of cosmological expansion, a delay produced further back in the path amounts to a larger delay on Earth. This effect of relativistic dilation introduces a factor of (1 + z) into the above integral [39].

$$ D_{\text{EXP}} = \frac{c}{H_{0}} {{\int}_{0}^{z}} dz \frac{(1+z)}{\sqrt{f({{\varOmega}},z)}}. $$
(13)

In particular, in the so-called Lambda Cold Dark Matter Cosmology (ΛCDM) the following values are adopted [58]:

H0 = 67.74(46) km/s/Mpc; Ωk = 0 (curvature k = 0, implying a flat Universe); ΩR = 0 (negligible radiation, implying a cold Universe); w = − 1 (a negative pressure Equation of State for the so-called Dark Energy, implying an accelerating Universe); ΩΛ = 0.6911(62); and ΩMatter = 0.3089(62). With these values we have:

$$ \frac{D_{\text{EXP}}}{c} = \frac{1}{H_{0}} {{\int}_{0}^{z}} dz \frac{(1+z)}{\sqrt{{{\varOmega}}_{\Lambda} + (1+z)^{3}{{\varOmega}}_{\text{Matter}}}}. $$
(14)

Adopting as a firm upper limit for the distance of any GRB the radius of the visible (after recombination) Universe, DP/c ≤ RV/c = 1.4 × 10^18 seconds (in the ΛCDM cosmology), we find:

$$ | {{\varDelta}} t_{LIV} | \le 1.4 \times 10^{18} \xi [{{\varDelta}} E_{\text{PHOT MeV}}/ (\zeta \times 10^{21})]^{n} \mathrm{s} $$
(15)

where ΔEPHOT MeV = ΔEPHOT/(1MeV). This shows that first order effects (n = 1) would result in potentially detectable delays, while second order effects are so small that it would be impossible to detect them with this technique.
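
To make the order of magnitude explicit, here is a minimal numerical sketch (assuming the ΛCDM parameters quoted above and the standard, non-reduced Planck energy; the function names are illustrative only) that evaluates the integral of equation (14) and the resulting first-order delay of equation (10):

    import numpy as np
    from scipy.integrate import quad

    H0 = 67.74 / 3.0857e19          # Hubble constant [1/s] (67.74 km/s/Mpc)
    OMEGA_L, OMEGA_M = 0.6911, 0.3089
    E_PLANCK_MEV = 1.22e22          # standard (non-reduced) Planck energy [MeV]

    def d_exp_over_c(z):
        """Expansion-corrected light-travel term D_EXP/c of equation (14), in seconds."""
        integrand = lambda zp: (1.0 + zp) / np.sqrt(OMEGA_L + (1.0 + zp)**3 * OMEGA_M)
        value, _ = quad(integrand, 0.0, z)
        return value / H0

    def delta_t_liv(z, delta_e_mev, xi=1.0, zeta=1.0, n=1):
        """LIV delay of equation (10), with D_TRAV replaced by D_EXP."""
        return xi * d_exp_over_c(z) * (delta_e_mev / (zeta * E_PLANCK_MEV))**n

    # e.g. the delay between photons separated by 1 MeV, for a GRB at z = 1:
    print(delta_t_liv(z=1.0, delta_e_mev=1.0))   # ~4e-5 s, i.e. tens of microseconds

Repeating this evaluation for a grid of energy bands and redshifts is the kind of calculation summarised in Table 1 below.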

Therefore, it is possible to detect (or constrain) first order effects in space-time quantisation by detecting (or placing upper limits on) time delays between lightcurves of GRBs in different energy bands. Indeed these quantum-space-time effects modifying the propagation of light are extremely tiny, but they accumulate along the way. GRBs are among the best candidates to detect the expected delays, since i) the signal travels over cosmological distances; ii) the prompt spectrum covers more than three orders of magnitude in energy; iii) fast variability of the lightcurves is present at or below the one millisecond level (see e.g. [10]). Such a detection could directly reveal, for the first time, the deepest structure of quantum Space-Time by gauging its structure in terms of a photon dispersion relation in vacuo.

To better quantify this possibility, we considered a broad band, 5 keV − 50 MeV, covering a relevant fraction of the prompt emission of a typical GRB and within the energy range covered by NaI and BGO scintillators. Based on BATSE observations of GRB prompt spectra, the so-called Band function, an empirical function describing the photon energy distribution, was developed [13]:

$$ \frac{dN_{E}(E)}{dA dt} = F \times \left\{ \begin{array}{lr} \left( \frac{E}{E_{\mathrm{B}}} \right)^{\alpha} \exp\{-(\alpha - \beta)E/E_{\mathrm{B}}\}, & E \le E_{\mathrm{B}},\\ \left( \frac{E}{E_{\mathrm{B}}} \right)^{\beta} \exp\{-(\alpha - \beta) \}, & E \ge E_{\mathrm{B}}. \end{array} \right. $$
(16)

where E is the photon energy, dNE(E)/(dAdt) is the photon intensity energy distribution in units of photons/cm2/s/keV, F is a normalisation constant in units of photons/cm2/s/keV, EB is the break energy, and EP = [(2 + α)/(α − β)]EB, which is the peak energy. For most GRBs: \(\alpha \sim -1\), \(\beta \sim -2.5\), \(E_{\mathrm {B}} \sim 225 \text {keV}\), which gives EP = 150 keV.

As representative spectra of long and short bright GRBs, we considered Band functions with α = − 1, β = − 2.5 and − 2.0 (proxies of soft and hard GRB spectra), and EB = 225 keV, lasting Δt = 25 seconds and 0.25 seconds respectively, having a photon flux in the band 50 − 300 keV of

$$ \displaystyle{\int}_{50 \text{keV}}^{300 \text{keV}} \frac{dN_{E}(E)}{dA dt} dE = \frac{dN_{50-300 \text{keV}}}{dA dt} = 8 \text{photons/cm}^{2}\text{/s}. $$
(17)

We computed the total number of photons detected in 8 contiguous energy bands \({{\varDelta }} E_{\text {E}_{\text {i}} - \text {E}_{\text {i+1}}}\) (i = 1,...,8) in the interval considered above (5keV − 50MeV), adopting a cumulative effective area of 100 square meters.
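
The normalisation and band-by-band photon counts described above can be reproduced with a short numerical sketch; it implements equations (16) and (17) directly, while the exposure, the band edges and the function names are illustrative assumptions only.

    import numpy as np
    from scipy.integrate import quad

    def band_photon_flux(e_kev, f_norm, alpha=-1.0, beta=-2.5, e_break_kev=225.0):
        """Band function of equation (16): photons / cm^2 / s / keV."""
        if e_kev <= e_break_kev:
            return f_norm * (e_kev / e_break_kev)**alpha * np.exp(-(alpha - beta) * e_kev / e_break_kev)
        return f_norm * (e_kev / e_break_kev)**beta * np.exp(-(alpha - beta))

    def band_normalisation(flux_50_300=8.0, **spec):
        """F such that the 50-300 keV photon flux matches equation (17)."""
        unnormalised, _ = quad(lambda e: band_photon_flux(e, 1.0, **spec), 50.0, 300.0)
        return flux_50_300 / unnormalised

    def counts_in_band(e_lo_kev, e_hi_kev, area_cm2=1.0e6, duration_s=25.0,
                       alpha=-1.0, beta=-2.5, e_break_kev=225.0):
        """Photons collected in [e_lo, e_hi] by a detector of area_cm2 in duration_s."""
        spec = dict(alpha=alpha, beta=beta, e_break_kev=e_break_kev)
        f_norm = band_normalisation(**spec)
        flux, _ = quad(lambda e: band_photon_flux(e, f_norm, **spec), e_lo_kev, e_hi_kev)
        return flux * area_cm2 * duration_s

    # e.g. a long, soft GRB (alpha = -1, beta = -2.5, 25 s) seen in the
    # 50-300 keV band by the full constellation (100 m^2 = 1e6 cm^2):
    print(counts_in_band(50.0, 300.0))   # 8 ph/cm^2/s x 1e6 cm^2 x 25 s = 2e8 photons

Splitting the 5 keV − 50 MeV interval into the 8 contiguous bands mentioned above and evaluating each of them in the same way would give band-by-band photon numbers of the kind reported in Table 1.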

Moreover, we considered three values of the redshift, namely z = 0.1, 1, 3 for the upper extreme of the integral in equation (14), adopted ξ = 1, ζ = 1, and n = 1 in equation (10), substituted DTRAV in equation (10) with DEXP of equation (14), and computed the delays expected for each value of z and \({{\varDelta }} E_{\text {PHOT i}} = \sqrt {\text {E}_{\text {i}} \times \text {E}_{\text {i+1}}}\). The results are shown in Table 1.

Table 1 Photon fluence and expected delays induced by LIV for bright Long and Short GRBs observed with a detector of effective collecting area of 100m2

Recent Fermi LAT detections of short GRBs at GeV energies have put constraints on Δt, and thus on MQG knowing D(z). The best limit so far was obtained by [7] using GRB090510, a short GRB. The limit they derive on the energy-dependent delay (in ms/GeV) puts \(M_{QG} \sim M_{\text {Planck}}\), at the distance of this GRB (z = 0.9). This limit, however, is obtained by assuming that a single observed 31 GeV photon was emitted simultaneously with the other GeV photons of the burst, which lasted for ∼0.2 s.

Indeed, a significant class of theories of Quantum Gravity describing the Space-Time structure down to the Planck scale predict a dispersion law for the propagation of photons in vacuo that depends linearly on the ratio between the photon energy and the Planck energy. The delays induced by this light dispersion relation depend linearly on the distance travelled and are tiny, being, as shown in Table 1, in the microsecond range for photons that travelled for a (few) billion years. GRBs are ideal targets to test this prediction robustly, because their prompt gamma-ray emission extends, in a detectable way, over more than six orders of magnitude in energy (from keV to ten(s) of GeV) and because they are among the most distant objects ever detected (their maximum redshift measured to date is just above 9). Intrinsic spectral delays due to unknown characteristics of the emission process in different energy bands could easily dominate the delays observable between different spectral components, but these effects can be disentangled by i) having a sufficient number of photons in sufficiently narrow energy bands, as the emission process is the same within a narrow band; ii) having a sufficiently rich sample of GRBs at different redshifts, since the delays induced by a dispersion law for the propagation of photons in vacuo scale almost linearly (with a weak dependence on the details of the particular cosmology adopted) with redshift. This double linear dependence, in energy and redshift, is the characteristic signature of a Quantum Gravity effect.

Recently, [81, 82] and [11] found in-vacuo-dispersion-like spectral lags in GRBs seen by Fermi LAT. The magnitude of these effects is of the order of tens of MPlanck, much larger than the limit reported above for GRB090510. The effects are present when considering photons with rest-frame energies higher than 40 GeV [81, 82], or 5 GeV [11]. If this is the case, the predicted delays are one order of magnitude larger than those presented in Table 1.

7 Astrophysical science with GrailQuest

Thanks to its huge effective area and unprecedented timing capabilities, GrailQuest's science goals constitute per se an important milestone of astrophysical research; in the following we list the main objectives of this ancillary science:

  • To produce a catalogue of 7,000–10,000 GRBs with well determined positions in the sky (between 1 degree and a few arcseconds, depending on the flux and temporal variability of the GRB). Indeed, the expected number of GRBs in the whole sky is 2-3 per day and we plan a lifetime for this mission of at least ten years (note that single satellite failures will not be a problem since these units can be easily replaced with high-performance newer versions). With the temporal triangulation technique previously described, position determination would be possible within minutes of the prompt event, allowing a search for its counterpart at other wavelengths. Swift-BAT allows localisation of GRBs occurring in its field of view with an accuracy of a few arcminutes, with the possibility for all of them to get an X-ray localisation with XRT, and for some of them to get a subsequent optical localisation with the UVOT (FoV of 17 arcminutes), resulting in the determination of the redshift of their host galaxies. Similarly, the fast and precise GRB localisation offered by GrailQuest, obtained solely from gamma-ray observations, will allow the determination of the optical counterpart and redshift for most of the long GRBs and for the short GRBs for which an optical counterpart can be detected. Since the counterpart of the furthest GRBs may fall in the IR band because of the high redshift, once a precise localisation of the source is found, it can be effectively searched for thanks to the synergy with e.g. the James Webb Space Telescope (operating in the IR band); this will allow the detection of GRBs with z > 10 (the current record is just above z = 9, [27]), opening a brand new window for high-redshift cosmology. Moreover, if a dedicated mission such as THESEUS (a candidate for ESA's M5 mission opportunity) is approved by ESA, it would be totally synergetic with GrailQuest, since follow-up observations of both the soft X-ray localisations (obtained by THESEUS itself) and the harder X-ray (or soft gamma-ray) localisations obtained with GrailQuest would be possible.

  • Given the huge effective area, GrailQuest will be the ultimate experiment for prompt GRB physics. In this context we plan to produce a catalogue of GRB dynamic spectra over more than three orders of magnitude in energy (from 20 keV to 10 MeV) with unprecedented statistics and moderate energy resolution. Again, the combination of huge effective area and high time resolution will provide sufficient photons in the high-energy band to follow the spectral evolution of the prompt emission on short timescales. This is particularly important to shed light on the complex and poorly studied details of the fireball model and on the mechanism through which ultra-relativistic colliding shocks, powered by the GRB's inner engine, release the huge number of gamma-ray photons observed. GRBs are thought to be produced by the collapse of massive stars and/or by the coalescence of two compact objects. Their main observational characteristics are the huge luminosity and fast variability, often as short as one millisecond. These characteristics soon led to the development of the fireball model, i.e. a relativistic bulk flow where shocks efficiently accelerate particles. The cooling of the ultra-relativistic particles then produces the observed X-ray and gamma-ray emission. While successful in explaining GRB observations, the fireball model implies a thick photosphere, hampering direct observations of the hidden inner engine that accelerates the bulk flow. We are then left in the frustrating situation where we regularly observe the most powerful accelerators in the Universe, but we are kept in the dark about their operation. GRB fast variability is potentially the key to reveal the nature of their inner engines. Early numerical simulations (see e.g. [40, 60]), as well as modern hydro-dynamical simulations [52], and analytic studies (see e.g. [53]) suggest that the GRB lightcurves reproduce the activity of the inner engine. GRB lightcurves have been investigated in some detail down to 1 millisecond or slightly below [45, 79]. Sub-millisecond timescales are basically unknown, as little known as the real duration of the prompt event. Furthermore, it is still unclear how many shells are ejected from the central engine, what the ejection frequency is, and what their lengths are. Pushing GRB timing capabilities by more than three orders of magnitude should help in answering at least some of these questions.

  • To add polarimetric information on the sample of GRBs detected. McConnell et al. [47] proposed to measure the linear polarisation of GRBs by comparing the asymmetry in the rate of counts of the delayed component of photons Compton-backscattered by Earth's atmosphere as observed by different BATSE detectors. This technique might be applied to data collected by GrailQuest by comparing the photons detected by different satellites at different directions with respect to the Earth and by exploiting the timing capabilities of its instruments; in this case the method will be much more effective. Polarisation will provide other valuable information of extreme interest for the fireball model. Results from POLAR, a dedicated GRB polarimeter onboard China's Tiangong-2 space laboratory, suggest that the gamma-ray emission is at most polarised at a relatively low level. However, the results also show intra-pulse evolution of the polarisation angle. This indicates that the low polarisation could be due to a variation of the polarisation angle during the GRB [86]. Given the superb temporal resolution and huge effective area of GrailQuest this possibility will be thoroughly explored.

  • To scrutinise the whole sky for X- and gamma-ray transients of very short duration. Despite its lack of imaging capabilities, GrailQuest will benefit from the fact that the background is relatively low at energies above a few tens of keV. The huge effective area will guarantee an unprecedented sensitivity, allowing the detection (signal-to-noise ratio > 1) of transient phenomena at the shortest timescales and mitigating the effects of the quantum nature of the detection process, which limits our sensitivity when the number of detected photons is small. There might exist a large class of fast transients that have remained undiscovered up to now because of the small fluence associated with their short duration. In the radio band this was the case for the recently discovered Fast Radio Bursts (FRBs, see [44] for a review). Indeed, some theories predict, and observations have now confirmed, a high-energy counterpart of these compelling phenomena, and GrailQuest is the right instrument for searching for these counterparts. In particular, high-energy counterparts are predicted in the context of Quantum Gravity [14]. In the same context it is possible that black holes hide a core of Planckian density, sustained by quantum gravitational pressure. As a black hole evaporates, the core remembers the initial mass and the final explosion occurs at macroscopic scale. Under several rough assumptions, it is possible to estimate that several short gamma-ray events per day, at energies around 10 MeV, with isotropic distribution, can be expected coming from a region of a few hundred light years around us. Further predictions can be made, in particular, to show that the wavelength of these signals should depend on the size of the black hole at the moment of the explosion [14].

  • To monitor all kinds of high-energy transients, both galactic and extra-galactic events, such as the flaring activity of magnetars, and outbursts of black hole and neutron star transients. The monitoring of the high-energy sky has been very important in recent years in the discovery of new events and/or peculiar behaviours as well as for a detailed characterisation of known sources. GrailQuest will perform as a large area all-sky monitor, with good temporal- and moderate energy-resolution, able to add important information for the full understanding and the thorough study of high-energy transients, whose behaviour may lead to important advances in fundamental physics regarding strong gravity and extremely high-density matter.

  • To monitor the onset of Tidal Disruption Events (TDE, hereafter) with fast variability. Tidal disruption events [61] are generally very luminous (often above Eddington) in the soft X-ray band, with an X-ray spectrum usually dominated by a thermal component at a few keV [37]. However, a sub-class of TDEs, called “jetted TDEs”, are characterised by a much harder non-thermal spectrum extending up to the gamma-ray band (see the prototypical case of Swift J1644+57; [17]). They are a fundamental tool in the study of the “onset” of AGN-like activity in otherwise quiescent black holes. Since most of the emission arises close to the black hole, they can be used to study relativistic phenomena such as precession induced by the black hole spin [54]. Also, they can serve as an important probe of hidden, sub-pc black hole binaries that are in the process of merging and are thus progenitors of LISA events [77]. Finally, TDEs also produce dim, but potentially detectable gravitational wave emission [41] and might thus be important electromagnetic counterparts to a sub-class of gravitational wave sources.

  • To perform high-quality timing studies of known high-energy pulsators. The most interesting sector of this population contains the millisecond pulsars (accreting and/or transitional and/or rotationally powered, see e.g. [22, 28]) and the enigmatic gamma-ray pulsars. Millisecond pulsars often display (transient) X-ray and gamma-ray emission whose properties are not completely understood yet. This emission may be caused by intra-binary shocks between the pulsar wind (consisting of both radiation and high-energy particles) and a wind of matter from the companion star. In this case, a modulation of the X- and gamma-ray emission with the orbital period is expected and may be searched for with GrailQuest. Also, the orbital period evolution of these systems is very important to address in order to investigate their formation history and their connection with Low Mass X-ray Binaries, as envisaged by the recycling scenario. Orbital evolution may also be studied in high inclination X-ray binary systems (containing black holes or neutron stars) where periodic signatures (such as dips and/or eclipses) are observed. Despite the lack of imaging capabilities and the impossibility of background rejection, GrailQuest is capable of detecting any (quasi-)periodic signal for which the period is known, thanks to folding techniques coupled with a huge collecting area (see the sketch after this list). This makes the instrument an ideal tool to perform timing studies of any kind of high-energy (quasi-)periodic signal.
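
As a minimal illustration of the epoch-folding technique invoked in the last item, the following sketch folds an array of photon times of arrival on a known period and computes a simple chi-squared detection statistic; the input array, the binning and the function names are hypothetical.

    import numpy as np

    def fold_lightcurve(toas, period, n_bins=32, t_ref=0.0):
        """Epoch folding: fold photon times of arrival on a known period.

        toas   : array of photon arrival times [s] (barycentred/binary-corrected)
        period : known pulse or orbital period [s]
        Returns the phase-bin centres and the counts per phase bin.
        """
        phases = ((np.asarray(toas, float) - t_ref) / period) % 1.0
        counts, edges = np.histogram(phases, bins=n_bins, range=(0.0, 1.0))
        return 0.5 * (edges[:-1] + edges[1:]), counts

    def chi2_statistic(counts):
        """Pearson chi-squared of the folded profile against a flat profile:
        large values flag a significant periodic modulation even when the
        signal is buried in an unrejected background."""
        expected = counts.mean()
        return np.sum((counts - expected)**2 / expected)

Because the statistic grows with the total number of counts, the huge collecting area translates directly into sensitivity to weak modulations.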

8 Detector description

The key requirements for a detector in the GrailQuest context are:

  • Overall effective area of the order of 100 square metres. This is obtained with a fleet consisting of tens/hundreds/thousands of small/micro/nano satellites each hosting a detector of effective area ranging from ∼50 to ∼100 square centimeters.

  • Capability of recording each photon (event) of the signal (no pile-up).

  • Temporal resolution in the 10-100 nanoseconds range.

  • Wide energy band from a few keV to several MeV.

  • Moderate energy resolution: ΔE/E ≤ 0.2 throughout the entire energy band.

  • Wide field of view (∼2π steradians).

  • Robust assembly suitable for space environment.

  • Simple design to allow for mass production.

A class of X/gamma detectors, widely used in countless space experiments, and continuously renewed thanks to the evolution of the technology, is based on the use of scintillators coupled to suitable photodetectors and electronics. Nowadays, inorganic scintillator materials like Lanthanum Bromide (LaBr3:Ce), GAGG (Gadolinium Aluminium Gallium Garnet) or similar, combine high scintillation light emission with fast response (tens of nanoseconds) and high efficiency. We therefore have, today, a certain number of materials whose characteristics allow, when combined with a fast and efficient photodetector, the fulfillment of the GrailQuest project requirements. The criteria for the choice of scintillator can then take into account parameters like intrinsic low background of the material, low hygroscopicity, low cost, and low radiation damage. A fast photodetector for the readout of the scintillation light can be a Photomultiplier (PMT) or a solid state Silicon-PMT (Si-PM), both devices having a response to a light pulse that can be contained in a few nanoseconds. Alternatively, Silicon Drift Detectors (SDDs) can be used to read out the scintillation light with timing capabilities of the order of tens of nanoseconds. Despite their slower response to light pulses, SDDs have several advantages with respect to Si-PMs, namely greater robustness against the radiation environment and higher efficiency (90% vs. 20-30%). Both types of devices, when optically coupled to the above mentioned scintillators, allow efficient detection of X-rays down to ∼10 keV and even below. The criteria for the choice of the photodetector can take into account the dimensions and robustness of the device, its ageing in the space radiation environment, and the availability for mass production. The architecture of each GrailQuest detector sub-unit is modular, with modules of a few square centimeters of geometric area each. The whole detector is then assembled to the necessary size by adding modules, which will also ease the processing of intense impulsive events by reducing the pile-up of signals in any given module.
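
To illustrate why segmentation into modules eases the handling of intense impulsive events, here is a minimal sketch of the per-module pile-up probability for Poisson photon arrivals; the shaping time and count rate used are purely illustrative assumptions, not measured GrailQuest parameters.

    import math

    def pileup_probability(total_rate_cps, n_modules, shaping_time_s=1.0e-6):
        """Probability that a photon in a given module is piled up, i.e. that a
        second photon hits the same module within one shaping time, assuming
        Poisson arrivals shared evenly among the modules."""
        rate_per_module = total_rate_cps / n_modules
        return 1.0 - math.exp(-rate_per_module * shaping_time_s)

    # e.g. a very bright burst delivering 1e5 counts/s on one detector unit:
    print(pileup_probability(1.0e5, 1))    # ~0.10 for a single monolithic module
    print(pileup_probability(1.0e5, 20))   # ~0.005 when segmented into 20 modules

It is the per-module rate, rather than the total rate on the unit, that sets the pile-up fraction, which is why segmentation keeps even very bright bursts manageable.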

9 The GrailQuest mission concept

The planning of the ESA Science Programme Voyage 2050 relies on the public discussion of open scientific questions of paramount importance for advancing our understanding of the Laws of Nature, that can be addressed by a scientific space mission within the Voyage 2050 planning cycle, covering the period from 2035 to 2050. As a part of the ESA Science Programme Voyage 2050, a new high-energy mission concept named GrailQuest (Gamma Ray Astronomy International Laboratory for QUantum Exploration of Space-Time) has been presented in this paper.

The main scientific objectives that the mission aims to address are the following: i) to localise GRB prompt emission with an accuracy of a few arcseconds. This capability is particularly relevant in light of the recent discovery that fast high-energy transients are the electromagnetic counterparts of some gravitational wave events observed by the Advanced LIGO and Virgo network; ii) to fully exploit timing capabilities down to microseconds or below at X-/gamma-ray energies, by means of an adequate combination of temporal resolution and collecting area, thus allowing an effective investigation, for the first time, of the microsecond structure of GRBs and other transient phenomena in the X-/gamma-ray energy window; iii) to probe Space-Time structure down to the Planck scale by measuring the delays between photons of different energies in the prompt emission of GRBs. More specifically, a significant class of theories of Quantum Gravity describing the Space-Time structure down to the Planck scale predict a linear (w.r.t. photon energy) dispersion relation for light in vacuo. The predicted delays are tiny, being in the microsecond range, for photons of energies in the keV-MeV range, that travelled for a (few) billion years. In particular these effects scale almost linearly with the photon energy and the redshift of the GRB. This double linear dependence, in energy and redshift, is a unique signature of a Quantum Gravity effect, allowing for a robust experimental constraint within the proposed experiment.

GrailQuest is a mission concept based on a constellation of nano/micro/small-satellites in low (or near) Earth orbits, hosting fast scintillators to probe the X-/gamma-ray emission of bright high-energy transients. The main features of this proposed experiment are: temporal resolution ≤ 100 nanoseconds; huge overall collecting area, ∼100 square meters; and very broad energy band coverage, ∼1 keV-10 MeV. GrailQuest is conceived as an all-sky monitor for fast localisation of high signal-to-noise ratio transients in the broad keV-MeV band by robust triangulation techniques, with accuracies at the microsecond level and baselines of several thousand kilometers. These features allow unprecedented localisation capabilities, in the keV-MeV band, of a few arcseconds or below, depending on the temporal structure of the transient event (see the sketch below). Despite the huge collecting area, hundred(s) of square meters, and the consequent number of nano/micro/small-satellites utilised (from thousand(s) to ten(s)), all orbiting Earth in uniformly distributed orbits, the technical capabilities and consequent design of each base unit of the constellation are extremely simple and robust. This allows for mass-production of the base units of this experiment, namely a satellite equipped with a non-collimated (half-sky field of view) detector (effective area in the range hundred-thousand(s) square centimetres). The detector consists of segmented scintillator crystals coupled with Silicon Drift Detectors with broad energy band coverage (keV-MeV range) and excellent temporal resolution (≤ 100 nanoseconds). Although the field-of-view of the detectors is large (∼2π steradians), only limited pointing capabilities are required. More specifically, an instrument pointed at the local zenith will not detect Earth-albedo photons from GRBs, which would otherwise greatly complicate the analysis. Nowadays, even with CubeSats, pointing accuracies of a few degrees are easily achievable. We forecast that mass production of this simple unit will allow a huge reduction of costs. Moreover, the large number of satellites involved in the GrailQuest constellation makes this experiment very robust against the failure of one or more of its units.
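
The scaling between delay accuracy, baseline and localisation accuracy quoted above can be sketched as follows; the baseline, delay uncertainty and geometry used are representative assumptions only.

    import numpy as np

    C_KM_S = 299792.458  # speed of light [km/s]

    def annulus_width_arcsec(baseline_km, sigma_t_s, theta_deg=90.0):
        """Angular half-width of the triangulation annulus for one satellite pair.

        For a plane wavefront, the delay between two detectors separated by a
        baseline d is dt = (d / c) * cos(theta), where theta is the angle between
        the baseline and the source direction; an uncertainty sigma_t on the
        measured delay therefore maps into an angular uncertainty
        sigma_theta = c * sigma_t / (d * sin(theta)).
        """
        theta = np.radians(theta_deg)
        sigma_rad = C_KM_S * sigma_t_s / (baseline_km * np.sin(theta))
        return np.degrees(sigma_rad) * 3600.0

    # e.g. a ~10,000 km baseline and a ~1 microsecond delay accuracy give an
    # annulus a few arcseconds wide for a favourable geometry:
    print(annulus_width_arcsec(baseline_km=1.0e4, sigma_t_s=1.0e-6))   # ~6 arcsec

Intersecting many such annuli from the different satellite pairs of the constellation is what yields the arcsecond-level error boxes quoted above.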

GrailQuest is a modular experiment in which, for each of the detected photons, only three measured quantities are essential, namely: the accurate time-of-arrival of each photon (down to 100 nanoseconds, or below), the energy, with moderate resolution (a few percent), and the detector position (within a few tens of meters). This opens the compelling possibility of combining data from different kinds of detectors (aboard different kinds of satellites belonging, in principle, to different constellations) to achieve the scientific objectives of the GrailQuest project, making GrailQuest one of the few examples of modular space-based astronomy. Modular experiments have proven, in the past, to be very effective in opening up new possibilities for astronomical investigations. Just think of Very Long Baseline Interferometry, an astronomical interferometer in the radio band, involving more than thirty radio telescopes all over the world, and Cluster II, a space mission of the European Space Agency, with NASA participation, composed of a constellation of four satellites, to study the Earth's magnetosphere, launched in 2000 and recently extended to the end of 2022. In the near future, a constellation of three satellites in formation is planned for the LISA mission, to study gravitational waves from space. Very recently, two extremely successful experiments, of paramount importance for fundamental physics, have involved the combined use of several ground-based detectors. One is the LIGO/Virgo Collaboration (involving the two US-based LIGO and the European Virgo facilities) that gave us the first detection and localisation of gravitational waves. In one case, temporal triangulation techniques, conceptually similar to those proposed for the GrailQuest constellation and described in this work, effectively constrained the position of the event in the sky, allowing for fast subsequent localisation, in the electromagnetic window, of a double neutron star merging event. The other is the Event Horizon Telescope (which utilises 8 radio/microwave observatories spread all over the world) that obtained the first image of the event horizon around a black hole. We consider these compelling results as proof that modular astronomy, which benefits from the combined use of a large number of distributed detectors to increase the overall detecting area and to allow for unprecedented spatial resolution (as in the cases of the Event Horizon Telescope and the GrailQuest project), is the new frontier of cutting-edge experimental astronomy. The GrailQuest project is a space-based version of this epochal revolution.

We performed accurate Monte-Carlo simulations of thousands of GRB lightcurves, based on real data obtained from the scintillators of the Fermi/GBM. We produced GRB lightcurves in consecutive energy bands in the interval 10 keV − 50 MeV, for a range of effective areas. We then applied cross-correlation techniques to these lightcurves to determine the minimum accuracy with which potential temporal delays between these lightcurves can be determined. As expected, this accuracy depends, in a complicated way, on the temporal variability scale of the GRB considered, and scales roughly with the square root of the number of photons in the energy band considered. We determined that, for temporal variabilities in the millisecond range (which are expected in at least 30% of the observed GRBs), with an overall effective area of ∼100 square meters, the statistical uncertainty on these delays is always smaller (for redshifts ≥ 0.5) than the delays expected from a dispersion law for the propagation of photons in vacuo that depends linearly on the ratio between the photon energy and the Planck energy.

This proves that the GrailQuest constellation is able to achieve the ambitious objectives outlined above, within the budget of a European Space Agency M-class mission.
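
The cross-correlation step referred to above can be illustrated with a minimal sketch that estimates the lag between two binned lightcurves of the same GRB in different energy bands; the binning, the input arrays and the function name are illustrative assumptions, not the actual simulation pipeline.

    import numpy as np

    def estimate_delay(lc_low, lc_high, dt_bin):
        """Lag (in seconds) that maximises the cross-correlation between two
        binned lightcurves of the same burst in different energy bands;
        a positive value means that the high-energy band lags the low one."""
        a = np.asarray(lc_low, float)
        b = np.asarray(lc_high, float)
        a -= a.mean()
        b -= b.mean()
        ccf = np.correlate(b, a, mode="full")            # all possible lags
        lags = (np.arange(ccf.size) - (a.size - 1)) * dt_bin
        return lags[np.argmax(ccf)]

In a full analysis one would fit the peak of the cross-correlation function and estimate the lag uncertainty, for instance by bootstrapping the simulated photon lists, in order to attach a statistical accuracy to each measured delay.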

The biggest advantages of GrailQuest with respect to a standard all-sky monitor for high-energy astrophysics are:

  • Modularity.

  • Unprecedented temporal resolution.

  • Limited cost and quick development.

  • Huge effective area.

The first one allows us: a) to first fly a reduced version of GrailQuest (say 4-12 units, the GrailQuest pathfinder) to prove the concept (see also § 10 below); b) to avoid single (or even multiple) point failures: if one or several units are lost, the constellation and the experiment are not; c) to initially test the hardware with the first launches and then improve it, if needed, with the following ones.

The second allows GrailQuest to open a new window for studying microsecond variability in bright transients.

To achieve the third characteristic GrailQuest will exploit commercial off-the-shelf hardware as well as the trend in reducing the cost of both manufacturing and launching of micro/nano-satellites over the next years. GrailQuest would naturally fit into a scheme where production of identical units would follow the development and testing of a first test unit. The development of engineering and qualification models, and all tests at the level of critical components, will be performed only for the test unit. For the other units only flight models will be built, and these units will be tested only at the system level. All this will bring costs down and speed up the construction of the full mission.

Finally, in view of the limited costs and quick development, it is possible to build an all-sky monitor of unprecedented area (\(\sim 100 \mathrm {m}^{2}\)). The consequent sensitivity to extremely weak transients is mandatory to fully exploit the exciting possibilities offered by the birth of multi-messenger astronomy. Starting in 2025, the improved or next generation of gravitational wave detectors, LIGO-Virgo, KAGRA, and the Einstein Telescope, will provide detectability of NS-NS merger events like GW170817 out to a few hundred Mpc. This corresponds to faint electromagnetic counterparts that require high-sensitivity all-sky monitors to be effectively detected and studied. Moreover, the extraordinary number of photons detected with astonishing temporal accuracy from each GRB will allow us, at least for the brightest events, to perform the first dedicated experiment in Quantum Gravity to test, with meaningful accuracy, a first order dispersion relation for light in vacuo. In this respect GrailQuest will be the first experiment potentially able to reveal a Space-Time granularity at the minuscule Planck length scale.

10 Synergy with other on-going projects

Some of the authors of this paper are developing the High Energy Rapid Modular Ensemble of Satellites, HERMES, pathfinder experiment [21, 33]. The HERMES pathfinder consists of six nano-satellites of the 3U class, each equipped with a payload consisting of GAGG scintillators coupled with SDDs, with a collecting area of about 55 cm2 per payload. The main goals of the HERMES pathfinder are to prove that GRB prompt events can be efficiently and routinely observed with detectors hosted by nano-satellites, and to test GRB localisation techniques based on triangulation using the delays of photon arrival times on different detectors located in low Earth orbit. The HERMES pathfinder experiment will test the fast timing techniques that are at the core of the GrailQuest project. The design performance of the HERMES pathfinder detectors guarantees a temporal resolution of 300 nanoseconds, 5-10 times better than most current and past GRB experiments. The HERMES pathfinder is funded by the Italian Space Agency and by the European Community through the HERMES-SP H2020 SPACE grant. More information on the HERMES pathfinder can be found at www.hermes-sp.eu and hermes.dsf.unica.it.