With the quantitative definition of network quality from the previous section in hand, we now examine optimized and ideal networks. In particular, we consider properties and explicit arrangements of ideal networks as well as numerical optimizations of more realistic networks.
To better understand the characteristics of networks, it helps to define some additional formalism. Define a network with a given set of orientations as the pair \(N=\{D,\Sigma \}\) of an \((n\times d)\) directional matrix \(D\) and an \((n\times n)\) covariance matrix \(\Sigma \), for \(n\) sensitive axes and \(d=3\) spatial dimensions. Two networks \(N_0 = \{D_0,\Sigma _0\}\) and \(N_1 = \{D_1,\Sigma _1\}\) are considered equivalent, \(N_0\cong N_1\), if there exists a permutation matrix \(P\) such that \(D_1 = PD_0\) and \(\Sigma _1 = P\Sigma _0P^T\); that is, if they are the same up to ordering. Additionally, a network can be decomposed into two complementary subnetworks, \(N\cong N_A\oplus N_B\), if
$$\begin{aligned} N\cong \left\{ \left[ \begin{matrix} D_A \\ D_B \end{matrix} \right] , \left[ \begin{matrix}\Sigma _A &{}\quad 0 \\ 0 &{}\quad \Sigma _B \end{matrix} \right] \right\} \,. \end{aligned}$$
Observe that the subnetworks \(N_A\) and \(N_B\) are independent/uncorrelated.
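To make this formalism concrete, the following is a minimal numerical sketch, assuming (as in the previous section) that the directional sensitivity is \(\beta (\hat{{\varvec{m}}}) = (\hat{{\varvec{m}}}^T D^T\Sigma ^{-1} D\,\hat{{\varvec{m}}})^{-1/2}\); the function and variable names are illustrative and not part of any GNOME codebase.

```python
import numpy as np
from scipy.linalg import block_diag

def beta(D, Sigma, m):
    """Directional sensitivity beta(m) for a unit vector m."""
    return 1.0 / np.sqrt(m @ D.T @ np.linalg.inv(Sigma) @ D @ m)

# Two independent single-axis sensors (kappa = 1, sigma = 1) along x and y.
D_A, D_B = np.array([[1.0, 0.0, 0.0]]), np.array([[0.0, 1.0, 0.0]])
Sigma_A = Sigma_B = np.eye(1)

# The combined network N = N_A (+) N_B: stacked axes, block-diagonal covariance.
D = np.vstack([D_A, D_B])
Sigma = block_diag(Sigma_A, Sigma_B)

# Applying the same permutation P to the rows of D and conjugating Sigma by P
# yields an equivalent network: beta(m) is unchanged.
P = np.array([[0.0, 1.0], [1.0, 0.0]])
m = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
print(beta(D, Sigma, m), beta(P @ D, P @ Sigma @ P.T, m))  # identical
```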
In addition to the basic equivalence relation described above, a network possesses some additional symmetries. First, the sensitivity \(\beta (\hat{{\varvec{m}}})\) is invariant under parity reversal of a subnetwork; i.e., \(D_A\mapsto - D_A\) for \(N\cong N_A\oplus N_B\). Further, though the sensitivity \(\beta (\hat{{\varvec{m}}})\) of a (non-ideal) network can change under an arbitrary rotation of the whole network, \(D^T\mapsto RD^T\), the values of the worst sensitivity, best sensitivity, and quality factor do not.
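These invariances are easy to check numerically. A quick sketch, assuming (consistent with the next subsection) that the quality factor is the ratio of best to worst sensitivity, \(q_0=\sqrt{\lambda _\text {min}/\lambda _\text {max}}\) for the eigenvalues of \(D^T\Sigma ^{-1}D\):

```python
import numpy as np

def quality(D, Sigma):
    """Quality factor q0 = beta_best / beta_worst, i.e.,
    sqrt(lambda_min / lambda_max) for the eigenvalues of D^T Sigma^{-1} D."""
    w = np.linalg.eigvalsh(D.T @ np.linalg.inv(Sigma) @ D)
    return np.sqrt(w[0] / w[-1])

rng = np.random.default_rng(0)
D = rng.normal(size=(4, 3))                          # four sensors, arbitrary axes
Sigma = np.diag(rng.uniform(0.5, 2.0, size=4) ** 2)  # independent noise

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))         # random orthogonal matrix
Q *= np.sign(np.linalg.det(Q))                       # force a proper rotation

D_flip = D.copy()
D_flip[0] *= -1                                      # parity-reverse one sensor

# Global rotation (D^T -> R D^T) and subnetwork parity leave q0 unchanged.
print(quality(D, Sigma), quality(D @ Q.T, Sigma), quality(D_flip, Sigma))
```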
Ideal networks
There are a few useful characteristics of ideal networks that are worth considering. For an ideal network, \(q_0=1\), so the smallest and largest eigenvalues of \(D^T\Sigma ^{-1} D\) are the same, which implies that this matrix is proportional to the identity matrix. In particular, \(D^T\Sigma ^{-1} D = \beta ^{-2}\mathbb {1}\). It also follows that the sensitivity of an ideal network is independent of global rotations.
Consider, now, how ideal networks are combined. Let \(N_A\) and \(N_B\) be two complementary, ideal subnetworks of \(N\); then
$$\begin{aligned} \left[ \begin{matrix}{D_A}^T&\quad {D_B}^T\end{matrix}\right] \left[ \begin{matrix}\Sigma _A &{}\quad 0 \\ 0 &{}\quad \Sigma _B \end{matrix}\right] ^{-1} \left[ \begin{matrix} D_A \\ D_B\end{matrix}\right] = \left( \beta _A^{-2} + \beta _B^{-2} \right) \mathbb {1}\,. \end{aligned}$$
That is, the network \(N\cong N_A \oplus N_B\) is also ideal, with sensitivity \(\left( \beta _A^{-2} + \beta _B^{-2} \right) ^{-1/2}\). Further, because an ideal network remains ideal under rotations, one can rotate either ideal subnetwork without affecting the sensitivity of the network \(N\). Moreover, if \(N=N_A\oplus N_B\) is an ideal network and \(N_A\) is an ideal subnetwork, then \(N_B\) is also an ideal subnetwork. An ideal network that cannot be separated into ideal subnetworks is “irreducible.” Because an ideal network needs at least \(d\) sensitive axes (for \(d=3\) spatial dimensions), an ideal network with \(n<2d\) is irreducible, because it cannot be split into two ideal networks.
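As an illustrative check of this combination rule, the following sketch combines two ideal subnetworks (orthogonal triads with different noise levels, \(\kappa =1\) assumed) and verifies both the combined sensitivity and the invariance under rotating one subnetwork:

```python
import numpy as np

kappa, sigma_A, sigma_B = 1.0, 1.0, 2.0
D_A = kappa * np.eye(3)                    # ideal orthogonal triad, noise sigma_A
D_B = kappa * np.eye(3)                    # second ideal triad, noise sigma_B
D = np.vstack([D_A, D_B])
Sigma = np.diag([sigma_A**2] * 3 + [sigma_B**2] * 3)

# D^T Sigma^{-1} D should equal (beta_A^{-2} + beta_B^{-2}) * identity.
beta_A, beta_B = sigma_A / kappa, sigma_B / kappa
beta_AB = (beta_A**-2 + beta_B**-2) ** -0.5
M = D.T @ np.linalg.inv(Sigma) @ D
print(np.allclose(M, beta_AB**-2 * np.eye(3)))        # True

# Rotating one ideal subnetwork leaves the combined network ideal.
Q, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(3, 3)))
D_rot = np.vstack([D_A, D_B @ Q.T])
M_rot = D_rot.T @ np.linalg.inv(Sigma) @ D_rot
print(np.allclose(M_rot, beta_AB**-2 * np.eye(3)))    # still True
```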
One can consider certain explicit cases of ideal networks under some simplifying conditions. In particular, let \(N=\{ D, \Sigma \}\) be composed of \(n\) identical, independent, single-axis sensors — that is, \(\Sigma = \sigma ^2\mathbb {1}\) and \(|{\varvec{d}}_i| = \kappa \), where \({\varvec{d}}_i\) is the \(i^\text {th}\) row of \(D\). If there are \(d=3\) sensors oriented such that their sensitive axes are mutually orthogonal, then the resulting network is ideal with sensitivity \(\beta =\sigma /\kappa \). Heuristically, one would like to orient the magnetometers to evenly cover all directions. One way to do this is to take inspiration from the Platonic solids by designing a network in which the sensitive axes of the sensors point from the center of the solid to each of its vertices; see Table 1. Most of these solids generate a network composed of two ideal subnetworks with opposite sensitive axes. Thus, one can obtain ideal networks with three (octahedron), four (tetrahedron and cube), six (icosahedron), and ten (dodecahedron) sensors through this method; denoted \(N_3\), \(N_4\), \(N_6\), and \(N_{10}\). These arrangements have sensitivity \(\beta =\frac{\sigma /\kappa }{\sqrt{n/3}}\) and are irreducible. From these networks alone, it is evident that there can be multiple distinct ways of orienting a given number of sensors that do not rely on the same set of ideal subnetworks; for example, six sensors can be arranged as \(N_6\) or as \(N_3 \oplus N_3\).
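For instance, the tetrahedral arrangement \(N_4\) can be constructed and checked directly. A sketch assuming \(\kappa =\sigma =1\), with the sensitive axes pointing from the center of a regular tetrahedron to its vertices:

```python
import numpy as np

kappa, sigma = 1.0, 1.0
# Unit vectors from the center of a regular tetrahedron toward its vertices.
V = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], dtype=float)
D4 = kappa * V / np.sqrt(3)
Sigma = sigma**2 * np.eye(4)

# Verify ideality and the sensitivity beta = (sigma/kappa) / sqrt(n/3).
n = len(D4)
beta_expected = (sigma / kappa) / np.sqrt(n / 3)
M = D4.T @ np.linalg.inv(Sigma) @ D4
print(np.allclose(M, beta_expected**-2 * np.eye(3)))   # True: N_4 is ideal
```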
Table 1 Examples of optimal networks based on the Platonic solids. The orientations of sensitive axes in an ideal network are given as lines (dashed in one direction, solid in the other). The number of vertices is separated by ideal network. For example, the cube has eight vertices, and the corresponding ideal network consists of two ideal networks with four sensors each; hence, “\(4+4\)” is listed. In each case here, \(X+X\) vertices describe two ideal networks with \(X\) sensitive axes in opposite directions

Combining \(N_3\) and \(N_4\) subnetworks, one can design ideal networks with three, four, six, or more sensors. What remains is a way to orient five identical, independent sensors into an ideal network. One can show that two such network arrangements, \(N_{5a}\) and \(N_{5b}\), are given by
$$\begin{aligned}&D_{5a} = \kappa \left[ \begin{matrix} 0 &{}\quad \sqrt{\frac{5}{6}} &{}\quad \sqrt{\frac{1}{6}} \\ 0 &{}\quad -\sqrt{\frac{5}{6}} &{}\quad \sqrt{\frac{1}{6}} \\ \sqrt{\frac{5}{6}} &{}\quad 0 &{}\quad \sqrt{\frac{1}{6}} \\ -\sqrt{\frac{5}{6}} &{}\quad 0 &{}\quad \sqrt{\frac{1}{6}} \\ 0 &{}\quad 0 &{}\quad 1 \end{matrix} \right] \nonumber \\&\quad \text {and}~ D_{5b} = \kappa \left[ \begin{matrix} 1 &{}\quad 0 &{}\quad 0 \\ -\sqrt{\frac{1}{3}} &{}\quad -\sqrt{\frac{2}{3}} &{}\quad 0 \\ -\sqrt{\frac{1}{3}} &{}\quad \sqrt{\frac{2}{3}} &{}\quad 0 \\ 0 &{}\quad \sqrt{\frac{1}{6}} &{}\quad \sqrt{\frac{5}{6}} \\ 0 &{}\quad -\sqrt{\frac{1}{6}} &{}\quad \sqrt{\frac{5}{6}} \end{matrix} \right] \,, \end{aligned}$$
(6)
each with sensitivity \(\beta = \frac{\sigma /\kappa }{\sqrt{5/3}}\). These two networks are inequivalent, even when considering parity reversal of individual sensors, reordering, and global rotations; this is evident because \(D_{5b}\) contains orthogonal sensors while \(D_{5a}\) does not, and the orthogonality of two vectors is invariant under these operations. Further, these are irreducible ideal networks because \(n<6\). Together with \(N_3\) and \(N_4\), these arrangements allow one to construct an ideal network from any number \(n\ge 3\) of identical, independent sensors.
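The arrangements in Eq. (6) can be verified numerically. A sketch with \(\kappa =\sigma =1\), in which case ideality reduces to \(D^TD=\tfrac{5}{3}\mathbb {1}\):

```python
import numpy as np

D5a = np.array([
    [0, np.sqrt(5/6), np.sqrt(1/6)],
    [0, -np.sqrt(5/6), np.sqrt(1/6)],
    [np.sqrt(5/6), 0, np.sqrt(1/6)],
    [-np.sqrt(5/6), 0, np.sqrt(1/6)],
    [0, 0, 1],
])
D5b = np.array([
    [1, 0, 0],
    [-np.sqrt(1/3), -np.sqrt(2/3), 0],
    [-np.sqrt(1/3), np.sqrt(2/3), 0],
    [0, np.sqrt(1/6), np.sqrt(5/6)],
    [0, -np.sqrt(1/6), np.sqrt(5/6)],
])

# With Sigma = 1, ideality means D^T D = (5/3) * identity,
# i.e., beta = (sigma/kappa) / sqrt(5/3).
for D in (D5a, D5b):
    print(np.allclose(D.T @ D, (5 / 3) * np.eye(3)))   # True for both
```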
Though one can always arrange three or more identical, independent sensors into an ideal network, it is not always the case that a given set of sensors can be arranged into an ideal network. For example, consider a set of \(n\ge d=3\) sensors, all with the same coupling \(\kappa =1\), in which \(n-1\) sensors have noise \(\sigma _0\) and the last sensor has noise \(\sigma _1 < \sigma _0/\sqrt{(n-1)/2}\). An optimized network would be arranged with the first \(n-1\) sensors evenly oriented within the plane orthogonal to the last sensor, giving a quality \(q_0=\frac{\sigma _1}{\sigma _0}\sqrt{\frac{n-1}{2}} < 1\) and sensitivity \(\beta _0=\frac{\sigma _0}{\sqrt{(n-1)/2}}\). The least-sensitive directions lie in the plane orthogonal to the last sensor’s sensitive axis, and any adjustment to the sensors’ orientations would worsen the sensitivity in this plane. The problem in this scenario is that one sensor is much more sensitive than the others, so they cannot compensate, even collectively. The extreme case of this would be if the first \(n-1\) sensors were so noisy that they were effectively inoperable; this arrangement would not be much different from an \(n=1\) network, which cannot be ideal.
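A short numerical illustration of this scenario, a sketch with the hypothetical values \(n=5\), \(\sigma _0=1\), and \(\sigma _1=0.1\):

```python
import numpy as np

n, sigma0, sigma1 = 5, 1.0, 0.1    # satisfies sigma1 < sigma0 / sqrt((n-1)/2)
# n-1 noisy sensors evenly spread in the xy-plane, one quiet sensor along z.
angles = 2 * np.pi * np.arange(n - 1) / (n - 1)
D = np.vstack([np.column_stack([np.cos(angles), np.sin(angles), np.zeros(n - 1)]),
               [0.0, 0.0, 1.0]])
Sigma = np.diag([sigma0**2] * (n - 1) + [sigma1**2])

w = np.linalg.eigvalsh(D.T @ np.linalg.inv(Sigma) @ D)
q0 = np.sqrt(w[0] / w[-1])
beta0 = 1 / np.sqrt(w[0])
print(q0, (sigma1 / sigma0) * np.sqrt((n - 1) / 2))   # agree; q0 < 1
print(beta0, sigma0 / np.sqrt((n - 1) / 2))           # agree
```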
Optimizing networks
Regardless of whether a given set of magnetometers can be arranged into an ideal network, their arrangement can always be optimized, at least at a given time. In practice, the noise in each GNOME magnetometer varies over time and magnetometers turn on and off throughout the experiment, so an arrangement that is optimal at one time may not be optimal at another.
A non-optimized network can still be improved via an algorithm. An example of a “greedy” algorithm would be one in which, at each step, a sensor is randomly selected, removed from the network, and re-inserted along the least-sensitive direction of the network without the removed sensor. This step can then be repeated many times until some optimization condition is reached. For multi-axis sensors that always have the same relative angle between their sensitive axes, the orientation in which to re-insert the sensor is more complicated to determine. Roughly, one would apply a rotation to the sensor that aligns the best and worst directions of the multi-axis sensor with those of the rest of the network (see Appendix A).
This algorithm will also work regardless of whether an ideal network arrangement exists, though the resulting network may not be optimal. Subsequent steps of the algorithm may converge to a non-optimal network or alternate between network configurations.
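A minimal sketch of this greedy step for independent single-axis sensors follows. It is an illustration consistent with the description above, not the implementation used for GNOME, and the function names are hypothetical:

```python
import numpy as np

def greedy_optimize(D, Sigma, steps=1000, rng=None):
    """Greedily reorient rows of D (Sigma assumed diagonal) to improve the
    worst-case sensitivity of the network."""
    rng = rng if rng is not None else np.random.default_rng()
    D = D.copy()
    inv_var = 1.0 / np.diag(Sigma)
    for _ in range(steps):
        i = rng.integers(len(D))                    # randomly select a sensor
        others = np.delete(np.arange(len(D)), i)
        # Sensitivity matrix D^T Sigma^{-1} D of the network without sensor i.
        M = (D[others].T * inv_var[others]) @ D[others]
        w, v = np.linalg.eigh(M)
        # Re-insert the sensor along the least-sensitive direction (the
        # eigenvector of the smallest eigenvalue), preserving |d_i| = kappa.
        D[i] = np.linalg.norm(D[i]) * v[:, 0]
    return D

# Example: six sensors with random initial axes and equal noise.
rng = np.random.default_rng(0)
D0 = rng.normal(size=(6, 3))
D0 /= np.linalg.norm(D0, axis=1, keepdims=True)     # kappa = 1 for all sensors
D_opt = greedy_optimize(D0, np.eye(6), rng=rng)
```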
Table 2 Optimizing GNOME for each run. In particular, a network is constructed using all magnetometers active for at least 25 % of the run. Noise is calculated as the average standard deviation of the data after applying a 1.67 mHz high-pass filter, 20 s averaging, and notch filters to remove powerline frequencies. This choice of how the average noise is calculated suppresses the influence of brief periods in which there was a spike in noise. The columns contain the run number, the number of sensors used in the optimization, the network characteristics, the optimized network characteristics, the theoretical optimized sensitivity from Eq. (5), and the factor by which the optimization algorithm improved the sensitivity

The optimization can be applied to the GNOME network to better understand the potential of the experiment. In particular, this was performed for GNOME’s official Science Runs:
- Science Run 1: 6 June–5 July 2017.
- Science Run 2: 29 November–22 December 2017.
- Science Run 3: 1 June 2018–10 May 2019.
- Science Run 4: 30 January–30 April 2020.
- Science Run 5: 23 August–31 October 2021.
The results of the optimization are given in Table 2, in which the coupling of every sensor was assumed to be one. For most of the Science Runs, the improvement was less than a factor of two, with the largest improvement, a factor of 2.7, found for Science Run 4. Additionally, the Science Run 2 network could be made ideal, with the same sensitivity as predicted by Eq. (5). It should be noted that the networks used in the optimization included all sensors available for at least 25 % of the run, while it is not typical for all of these sensors to be active at the same time. Another example of network optimization is given in Appendix B, wherein only a couple of sensors are reoriented and an additional sensor is added.