Abstract
Various extensions of the Standard Model motivate the existence of stable magnetic monopoles that could have been created during an early high-energy epoch of the Universe. These primordial magnetic monopoles would be gradually accelerated by cosmic magnetic fields and could reach high velocities that make them visible in Cherenkov detectors such as IceCube. Analogous to electrically charged particles, magnetic monopoles produce direct and indirect Cherenkov light while traversing matter at relativistic velocities. This paper describes searches for relativistic (\(v\ge 0.76\;c\)) and mildly relativistic (\(v\ge 0.51\;c\)) monopoles, each using one year of data, taken in 2008/2009 and 2011/2012, respectively. No monopole candidate was detected. For velocities above \(0.51 \; c\) the monopole flux is constrained down to a level of \(1.55 \times 10^{-18} \; \text {cm}^{-2}\; \text {s}^{-1}\, \text {sr}^{-1}\). This is an improvement of almost two orders of magnitude over previous limits.
1 Introduction
In Grand Unified Theories (GUTs) the existence of magnetic monopoles follows from general principles [1, 2]. Such a theory is defined by a non-abelian gauge group that is spontaneously broken at a high energy to the Standard Model of particle physics [3]. The condition that the broken symmetry contains the electromagnetic gauge group \(\mathrm {U(1)_\mathrm{EM}}\) is sufficient for the existence of magnetic monopoles [4]. Under these conditions the monopole is predicted to carry a magnetic charge g governed by Dirac’s quantization condition [5]

$$\begin{aligned} g = n \, g_D = n \, \frac{e}{2\alpha } \approx n \cdot 68.5 \, e, \end{aligned}$$

(1)

where n is an integer, \(g_D\) is the elemental magnetic charge or Dirac charge, \(\alpha \) is the fine structure constant, and e is the elemental electric charge.
In a given GUT model the monopole mass can be estimated from the unification scale \(\Lambda _{\text {GUT}}\) and the corresponding value of the running coupling constant \(\alpha _{\text {GUT}}\) as \(M c^2 \gtrsim {\Lambda _{\text {GUT}}}/{\alpha _{\text {GUT}}}\). Depending on details of the GUT model, the monopole mass can range from \(10^{7}\) to \(10^{17}\, \text {GeV}/c^2\) [6, 7]. In any case, GUT monopoles are too heavy to be produced in any existing or foreseeable accelerator.
After production in the very early hot universe, their relic abundance is expected to have been exponentially diluted during inflation. However, monopoles associated with the breaking of intermediate-scale gauge symmetries may have been produced in the late stages of inflation and reheating in some models [8, 9]. There is thus no robust theoretical prediction of monopole parameters such as mass and flux; nevertheless, an experimental detection of a monopole today would be of fundamental significance.
In this paper we present results of monopole searches with the IceCube neutrino telescope covering a large velocity range. Because different light-emitting mechanisms are at play, we present two analyses, each optimized for its velocity range: highly relativistic monopoles with \(v\ge 0.76\,c\) and mildly relativistic monopoles with \(v\ge 0.4\,c\). The highly relativistic monopole analysis was performed with IceCube in its 40-string configuration, while the mildly relativistic monopole analysis uses the complete 86-string detector.
The paper is organized as follows. In Sect. 2 we introduce the neutrino detector IceCube, and in Sect. 3 we describe the methods to detect magnetic monopoles with Cherenkov telescopes. We describe the simulation of magnetic monopoles in Sect. 4. The analyses for highly and mildly relativistic monopoles use different analysis schemes, described in Sects. 5 and 6. The results of both analyses, a discussion, and an outlook are presented in Sects. 7–9.
2 IceCube
The IceCube Neutrino Observatory is located at the geographic South Pole and consists of an in-ice array, IceCube [10], and a surface air shower array, IceTop [11], dedicated to neutrino and cosmic ray research, respectively. An aerial sketch of the detector layout is shown in Fig. 1.
IceCube consists of 86 strings with 60 digital optical modules (DOMs) each, deployed at depths between 1450 and \(2450\,\text {m}\), instrumenting a total volume of one cubic kilometer. Each DOM contains a \(25\;\text {cm}\) Hamamatsu photomultiplier tube (PMT) and electronics to read out and digitize the analog signal from the PMT [12]. The strings form a hexagonal grid with a typical inter-string separation of \(125\,\text {m}\) and vertical DOM separation of \(17\,\text {m}\), except for six strings in the middle of the array that are more densely instrumented (with higher efficiency PMTs) and deployed closer together. These strings constitute the inner detector, DeepCore [13]. Construction of the IceCube detector started in December 2004 and was finished in December 2010, but the detector took data during construction. In this paper we present results from two analyses: one performed with one year of data taken during 2008/2009, when the detector consisted of 40 strings (IC40), and another with data taken during 2011/2012 using the complete detector (IC86).
IceCube uses natural ice both as target and as radiator. The analysis in the IC40 configuration of highly relativistic monopoles uses a six-parameter ice model [14] which describes the depth-dependent extrapolation of measurements of scattering and absorption valid for a wavelength of \(400\,\text {nm}\). The IC86 analysis of mildly relativistic monopoles uses an improved ice model which is based on additional measurements and accounts for different wavelengths [15].
Each DOM transmits digitized PMT waveforms to the surface. The number of photons and their arrival times are then extracted from these waveforms. The detector is triggered when a DOM and its nearest or next-to-nearest DOMs record a hit within a \(1\, \upmu \text {s}\) window. All hits in the detector within a window of \(10\, \upmu \text {s}\) are then read out and combined into one event [16]. A series of data filters is run on-site in order to select potentially interesting events for further analysis, reducing at the same time the amount of data to be transferred via satellite. For both analyses presented here, a filter selecting events with a high number of photo-electrons (\(>\)650 in the highly relativistic analysis and \(>\)1000 in the mildly relativistic analysis) was used. In addition, filters selecting up-going track-like events are used in the mildly relativistic analysis.
After the events have been sent to IceCube’s computer farm, they undergo standard processing, such as the removal of hits which are likely caused by noise and a basic reconstruction of single particle tracks via the LineFit algorithm [17]. This reconstruction is based on a 4-dimensional (position plus time) least-squares fit which yields an estimated direction and velocity for an event.
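The idea behind such a 4-dimensional least-squares fit can be sketched as independent linear regressions of the hit coordinates against the hit times. This is a minimal illustration of the concept, not IceCube's actual implementation:

```python
import numpy as np

def line_fit(t, pos):
    """Least-squares fit of hit positions versus time to a straight track.

    t   : (N,) hit times
    pos : (N, 3) hit DOM positions
    Returns (r0, v): a point on the track at t = 0 and the velocity vector.
    """
    t = np.asarray(t, dtype=float)
    pos = np.asarray(pos, dtype=float)
    t_mean = t.mean()
    p_mean = pos.mean(axis=0)
    dt = t - t_mean
    # Independent 1-D linear regressions of x, y, z against time.
    v = (dt[:, None] * (pos - p_mean)).sum(axis=0) / (dt ** 2).sum()
    r0 = p_mean - v * t_mean
    return r0, v

# Toy hits along a track moving at 0.2 m/ns in x:
t = np.array([0.0, 100.0, 200.0, 300.0])
pos = np.array([[0.0, 0.0, 0.0],
                [20.0, 0.0, 0.0],
                [40.0, 0.0, 0.0],
                [60.0, 0.0, 0.0]])
r0, v = line_fit(t, pos)
speed = np.linalg.norm(v)  # reconstructed track speed
```

The magnitude of the fitted velocity vector is the reconstructed particle speed used for the velocity-based event selection.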
The analyses are performed in a blind way by optimizing the cuts to select a possible monopole signal on simulation and one tenth of the data sample (the burn sample). The remaining data is kept untouched until the analysis procedure is fixed [18]. In the highly relativistic analysis the burn sample consists of all events recorded in August of 2008. In the mildly relativistic analysis the burn sample consists of every 10th 8-h-run in 2011/2012.
3 Monopole signatures
Magnetic monopoles can gain kinetic energy through acceleration in magnetic fields. This acceleration follows from a generalized Lorentz force law [20] and is analogous to the acceleration of electric charges in electric fields. The kinetic energy gained by a monopole of charge \(g_D\) traversing a magnetic field \(B\) with coherence length \(L\) is \(E \sim g_D BL\,\) [7]. In intergalactic magnetic fields, monopoles can thus gain up to \(10^{14}\,\text {GeV}\) of kinetic energy and reach relativistic velocities. At such high kinetic energies magnetic monopoles can pass through the Earth and still have relativistic velocities when reaching the IceCube detector.
In the monopole velocity range considered in these analyses, \(v \ge 0.4\,c\) at the detector, three processes generate detectable light: direct Cherenkov emission by the monopole itself, indirect Cherenkov emission from ejected \(\delta \)-electrons, and luminescence. Stochastic energy losses, such as pair production and photonuclear reactions, are neglected because they only occur at ultra-relativistic velocities.
An electric charge e induces the production of Cherenkov light when its velocity v exceeds the Cherenkov threshold \(v_C=c/n_P\approx 0.76\,c\) where \(n_P\) is the refractive index of ice. A magnetic charge g moving with a velocity \(\beta =v/c\) produces an electric field whose strength is proportional to the particle’s velocity and charge. At velocities above \(v_C\), Cherenkov light is produced analogous to the production by electric charges [21] at an angle \(\theta \) of

$$\begin{aligned} \cos \theta = \frac{1}{\beta \, n_P}. \end{aligned}$$

(2)
The number of Cherenkov photons per unit path length x and wavelength \(\lambda \) emitted by a monopole with one magnetic charge \(g=g_D\) can be described by the usual Frank–Tamm formula [21] for a particle with effective charge \(Ze \rightarrow g_D n_P\) [22]

$$\begin{aligned} \frac{d^2 N_\gamma }{dx \, d\lambda } = \frac{2 \pi \alpha }{\lambda ^2} \left( \frac{g_D \, n_P}{e} \right) ^2 \left( 1 - \frac{1}{\beta ^2 n_P^2} \right) . \end{aligned}$$

(3)
Thus, a minimally charged monopole generates \((g_D n_P/e)^2\approx 8200\) times more Cherenkov radiation in ice compared to an electrically charged particle with the same velocity. This is shown in Fig. 2.
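The quoted enhancement factor can be checked with a two-line calculation. The value of \(n_P\) below is an assumed phase refractive index for ice; the exact number depends on wavelength:

```python
alpha = 1 / 137.036   # fine structure constant
n_P = 1.32            # assumed phase refractive index of ice

# Dirac charge in units of e: g_D / e = 1 / (2 alpha), roughly 68.5
g_D_over_e = 1 / (2 * alpha)

# Relative Cherenkov yield of a monopole vs. a unit electric charge
enhancement = (g_D_over_e * n_P) ** 2   # on the order of 8200
```

Small changes in the assumed refractive index shift the factor by a few percent, which is why the text quotes an approximate value.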
In addition to this effect, a (mildly) relativistic monopole knocks electrons out of their atomic binding. These high-energy \(\delta \)-electrons can have velocities above the Cherenkov threshold. For the production of \(\delta \)-electrons the differential cross section of Kazama, Yang and Goldhaber (KYG) is used, which allows the energy transfer from the monopole to the \(\delta \)-electrons, and therefore the resulting output of indirect Cherenkov light, to be calculated [23, 24]. The KYG cross section was calculated using QED, properly dealing with the monopole’s vector potential and its singularity [23]. Cross sections derived prior to KYG, such as the so-called Mott cross section [25–27], are only semi-classical approximations because the necessary mathematical tools had not yet been developed. Thus, in this work the state-of-the-art KYG cross section is used to derive the light yield. The numbers of photons derived with the KYG and Mott cross sections are shown in Fig. 2. Above the Cherenkov threshold the contribution of indirect Cherenkov light to the total light yield is negligible.
Using the KYG cross section the energy loss of magnetic monopoles per unit path length dE/dx can be calculated [28]

$$\begin{aligned} \frac{dE}{dx} = \frac{4 \pi N_e g_D^2 e^2}{m_e c^2} \left[ \ln \frac{2 m_e c^2 \beta ^2 \gamma ^2}{I} + \frac{K(g_D)}{2} - \frac{1}{2} - \frac{\delta }{2} - B(g_D) \right] , \end{aligned}$$

(4)
where \(N_e\) is the electron density, \(m_e\) is the electron mass, \(\gamma \) is the Lorentz factor of the monopole, I is the mean ionization potential, \(K(g_D)\) is the QED correction derived from the KYG cross section, \(B(g_D)\) is the Bloch correction and \(\delta \) is the density-effect correction [29].
Luminescence is the third process to be considered in this velocity range. It has been shown that pure ice exposed to ionizing radiation emits luminescence light [30, 31]. The measured time distribution of luminescence light is well fit by several overlapping decay times, which hints at several different excitation and de-excitation mechanisms [32]. The most prominent wavelength peaks are within the DOM acceptance of about 300–600 nm [15, 32]. The mechanisms are highly dependent on temperature and ice structure. Extrapolating the latest measurements of luminescence light \(dN_{\gamma }/dE\) [32, 33], the brightness

$$\begin{aligned} \frac{dN_\gamma }{dx} = \frac{dN_\gamma }{dE} \cdot \frac{dE}{dx} \end{aligned}$$

(5)
could be at the edge of IceCube’s sensitivity, where the energy loss is calculated with Eq. 4. This means that luminescence would not be dominant above \(0.5\, c\). The resulting brightness is almost constant over a wide velocity range from 0.1 to \(0.95\, c\). Depending on the actual brightness, luminescence light could be a promising means to detect monopoles with lower velocities. Since measurements of \(dN_\gamma /dE\) have yet to be done for the conditions present in IceCube, luminescence is neglected in the presented analyses, which is a conservative approach leading to conservative limits.
4 Simulation
The simulation of an IceCube event comprises several steps. First, a particle is generated, i.e., assigned its start position, direction and velocity. It is then propagated, taking into account decay and interaction probabilities, and all secondary particles are propagated as well. When the particle is close to the detector, the Cherenkov light is generated and the photons are propagated through the ice, accounting for its properties. Finally the response of the PMT and DOM electronics is simulated, including the generation of noise and the triggering and filtering of an event (see Sect. 2). From the photon propagation onwards, the simulation is handled identically for background and monopole signal. However, the photon propagation is treated differently in the two analyses presented below due to the improved ice description and photon propagation software available for the latter analysis.
4.1 Background generation and propagation
The background of a monopole search consists of all other known particles which are detectable by IceCube. The most abundant background is muons or muon bundles produced in air showers initiated by cosmic rays. These were modeled using the cosmic ray models Polygonato [34] for the highly relativistic and GaisserH3a [35] for the mildly relativistic analysis.
The majority of neutrino-induced events are caused by neutrinos created in the atmosphere. Conventional atmospheric neutrinos, produced by the decay of charged pions and kaons, dominate the neutrino rate from the GeV to the TeV range [36]. Prompt neutrinos, which originate from the decay of heavier mesons, e.g. those containing a charm quark, are strongly suppressed at these energies [37].
Astrophysical neutrinos, which are the primary objective of IceCube, have only recently been found [38, 39]. For this reason they are only taken into account as a background in the mildly relativistic analysis, using the fit result for the astrophysical flux from Ref. [39].
Coincidences of all background signatures are also taken into account.
4.2 Signal generation and propagation
Since the theoretical mass range for magnetic monopoles is broad (see Sect. 1), and the Cherenkov emission is independent of the mass, signal simulation is focused on a benchmark monopole mass of \(10^{11} \; \text {GeV}/c^2\) without loss of generality. Only the ability to reach the detector after passing through the Earth depends on the mass predicted by a monopole model. The parameter range for monopoles producing a recordable light emission inside IceCube is governed by the velocities needed to produce (indirect) Cherenkov light.
The starting points of the simulated monopole tracks are distributed uniformly around the center of the completed detector, pointing towards the detector. For the highly relativistic analysis the simulation could only be run at specific monopole velocities, and so the characteristic velocities 0.76, 0.8, 0.9 and \(0.995\, c\) were chosen.
Owing to new software, described in the next subsection, monopoles in the simulation for the mildly relativistic analysis can be given an arbitrary characteristic velocity v below \(0.99\, c\). The light yield from indirect Cherenkov light fades out below \(0.5\, c\). To cover the smallest detectable velocities, the lower velocity limit in simulation was set to \(0.4\, c\).
The simulation also accounts for monopole deceleration via energy loss. This information is needed to simulate the light output.
4.3 Light propagation
In the highly relativistic analysis the photons from direct Cherenkov light are propagated using Photonics [40]. A more recent and GPU-enabled software propagating light in IceCube is PPC [15] which is used in the mildly relativistic analysis. The generation of direct Cherenkov light, following Eq. 3, was implemented into PPC in addition to the variable Cherenkov cone angle (Eq. 2). For indirect Cherenkov light a parametrization of the distribution in Fig. 2 is used.
Both simulation procedures are consistent with each other and deliver a signal with the following topology: through-going tracks, originating from all directions, with constant velocities and brightness inside the detector volume, see Fig. 3. All these properties are used to discriminate the monopole signal from the background in IceCube.
5 Highly relativistic analysis
This analysis covers the velocities above the Cherenkov threshold \(v_C\approx 0.76\,c\) and it is based on the IC40 data recorded from May 2008 to May 2009. This comprises about 346 days of live-time or 316 days without the burn sample. The live-time is the recording time for clean data. The analysis for the IC40 data follows the same conceptual design as a previous analysis developed for the IC22 data [41], focusing on a simple and easy to interpret set of variables.
5.1 Reconstruction
The highly relativistic analysis uses spatial and timing information from the following sources: all DOMs fulfilling the nearest or next-to-nearest neighbor condition (described in Sect. 2), and DOMs that fall into the topmost 10 % of the collected-charge distribution for that event, which are expected to record less-scattered photons. This selection allows the definition of variables that benefit from either large statistics or precise timing information.
5.2 Event selection
The IC40 analysis selects events based on their relative brightness, arrival direction, and velocity. Some additional variables are used to identify and reject events with poor track reconstruction quality. The relative brightness is defined as the average number of photo-electrons per DOM contributing to the event. This variable has more dynamic range compared with the number of hit DOMs. The distribution of this variable after applying the first two quality cuts, described in Table 3, is shown in Fig. 4. Each event selection step up to the final level is optimized to minimize the background passing rate while keeping high signal efficiency, see Table 3.
The final event selection level aims to remove the bulk of the remaining background, mostly consisting of downward-going atmospheric muon bundles. However, the dataset is first split into two mutually exclusive subsets with low and high brightness. This is done in order to isolate a well-known discrepancy between experimental and simulated data in the direction distribution near the horizon, which is caused by deficiencies in simulating air shower muons at high inclinations [42].
Since attenuation is stronger at large zenith angles \(\theta _z\), the brightness of the resulting events is reduced and the discrepancy is dominantly located in the low brightness subset. Only simulated monopoles with \(v = 0.76\;c\) significantly populate this subset. The final selection criterion for the low brightness subset is \(\cos \theta _z < -0.2\) where \(\theta _z\) is the reconstructed arrival angle with respect to the zenith. For the high brightness subset a 2-dimensional selection criterion is used as shown in Fig. 5. The two variables are the relative brightness described above and the cosine of the arrival angle. Above the horizon (\(\cos \theta _z > 0\)), where most of the background is located, the selection threshold increases linearly with increasing \(\cos \theta _z\). Below the horizon the selection has no directional dependence and values of both ranges coincide at \(\cos \theta _z = 0\). The optimization method applied here is the model rejection potential (MRP) method described in [41].
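The two-branch selection can be summarized in a few lines of code. The brightness threshold `b_min` and the `slope` are hypothetical placeholders, not the analysis values; only the shapes of the cuts follow the text:

```python
def passes_low_brightness_cut(cos_zenith):
    """Low-brightness subset: keep only sufficiently up-going events."""
    return cos_zenith < -0.2

def passes_high_brightness_cut(cos_zenith, rel_brightness,
                               b_min=100.0, slope=200.0):
    """Sketch of the 2-D selection for the high-brightness subset.

    Below the horizon (cos_zenith <= 0) a flat brightness threshold
    b_min applies; above the horizon the threshold rises linearly with
    cos_zenith, so both branches coincide at cos_zenith = 0.
    b_min and slope are placeholder values.
    """
    threshold = b_min + slope * max(cos_zenith, 0.0)
    return rel_brightness > threshold
```

By construction the linear threshold above the horizon and the flat threshold below it agree at \(\cos \theta _z = 0\), matching the description in the text.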
5.3 Uncertainties and flux calculation
Analogous to the optimization of the final event selection level, limits on the monopole flux are calculated using a MRP method. Due to the blind approach of the analysis these are derived from Monte Carlo simulations, which contain three types of uncertainties: (1) Theoretical uncertainties in the simulated models, (2) Uncertainties in the detector response, and (3) Statistical uncertainties.
For a given monopole velocity the limit then follows from

$$\begin{aligned} \Phi _{\alpha } = \Phi _0 \, \frac{\bar{\mu }_{\alpha }}{\bar{n}_{\mathrm {s}}}, \end{aligned}$$

(6)
where \(\bar{\mu }_{\alpha }\) is an average Feldman-Cousins (FC) upper limit with confidence \(\alpha \), which depends on the number of observed events \(n_{\mathrm {obs}}\). Similarly, though derived from simulation, \(\bar{n}_{\mathrm {s}}\) is the average expected number of observed signal events assuming a flux \(\Phi _0\) of magnetic monopoles. Since \(\bar{n}_{\mathrm {s}}\) is proportional to \(\Phi _0\) the final result is independent of whichever initial flux is chosen.
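This independence from the assumed flux can be made explicit in a schematic sketch: doubling \(\Phi _0\) doubles the expected signal count, leaving the ratio unchanged (the numbers below are placeholders, not analysis values):

```python
def flux_limit(phi0, mu_bar, n_s_bar):
    """MRP-style upper limit: scale the assumed flux phi0 by the ratio
    of the average FC upper limit to the average expected signal count."""
    return phi0 * mu_bar / n_s_bar

# n_s_bar scales linearly with phi0, so the resulting limit is unchanged:
a = flux_limit(1e-17, mu_bar=2.44, n_s_bar=5.0)
b = flux_limit(2e-17, mu_bar=2.44, n_s_bar=10.0)
```

Any convenient \(\Phi _0\) can therefore be used in simulation without affecting the quoted limit.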
The averages can be independently expressed as weighted sums over values of \(\mu _{\alpha }(n_{\mathrm {obs}}, n_{\mathrm {bg}})\) and \(n_{\mathrm {s}}\) respectively, with the FC upper limit here also depending on the number of expected background events \(n_{\mathrm {bg}}\) obtained from simulation. The weights are then the probabilities for observing a particular value of \(n_{\mathrm {bg}}\) or \(n_{\mathrm {s}}\). In the absence of uncertainties this probability has a Poisson distribution with the mean set to the expected number of events \(\lambda \) derived from simulations. However, in order to extend the FC approach to account for uncertainties, the distribution

$$\begin{aligned} P(n \, | \, \lambda , \sigma ) = \int w(x \, | \, \sigma ) \, \frac{(\lambda + x)^n \, e^{-(\lambda + x)}}{n!} \, dx \end{aligned}$$

(7)
is used instead to derive \(n_{\mathrm {bg}}\) and \(n_{\mathrm {s}}\). This is the weighted average of Poisson distributions where the mean value varies around the central value \(\lambda \) and the variance \(\sigma ^2\) is the quadratic sum of all individual uncertainties. Under the assumption that individual contributions to the uncertainty are symmetric and independent, the weighting function \(w(x|\sigma )\) is a normal distribution with mean 0 and variance \(\sigma ^2\). However, the Poisson distribution is only defined for positive mean values. Therefore a truncated normal distribution with the boundaries \(-\lambda \) and \(+\infty \) is used as the weighting function instead.
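A numerical sketch of this smeared Poisson construction is straightforward with a simple midpoint quadrature over the truncated normal weight. Cutting the upper integration bound at \(+8\sigma \) is an assumption made here for the quadrature, not part of the original prescription:

```python
import math

def smeared_poisson_pmf(n, lam, sigma, steps=2000):
    """P(n | lam, sigma): Poisson pmf averaged over a fluctuation of its
    mean, weighted by a normal kernel truncated at -lam (so the Poisson
    mean stays positive) and renormalized over the truncated support."""
    lo, hi = -lam, 8.0 * sigma
    dx = (hi - lo) / steps
    num = 0.0
    norm = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * dx
        w = math.exp(-0.5 * (x / sigma) ** 2)   # untruncated normal kernel
        mu = lam + x
        pois = math.exp(-mu) * mu ** n / math.factorial(n)
        num += w * pois * dx
        norm += w * dx
    return num / norm

# With a small uncertainty the result approaches the plain Poisson pmf:
p_small = smeared_poisson_pmf(2, lam=3.0, sigma=0.01)
```

For vanishing \(\sigma \) the weighting function collapses to a delta function and the plain Poisson probability is recovered, as expected.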
6 Mildly relativistic analysis
This analysis uses the data recorded from May 2011 to May 2012. It comprises about 342 days of live-time (311 days without the burn sample). The signal simulation covers the velocity range 0.4–\(0.99\, c\). The optimization of cuts and the machine learning are done on a limited velocity range \(v < 0.76\,c\) to focus on lower velocities where indirect Cherenkov light dominates.
6.1 Reconstruction
Following the filters, described in Sect. 2, further processing of the events is done by splitting coincident events into sub-events using a time-clustering algorithm. This is useful to reject hits caused by PMT after-pulses which appear several microseconds later than signal hits.
For quality reasons events are required to have at least 6 DOMs on at least 2 strings hit, see Table 4. The remaining events are treated as tracks, reconstructed with an improved version [17] of the LineFit algorithm mentioned in Sect. 2. Since the main background in IceCube is muons from air showers, which cause a down-going track signature, a cut removing events with reconstructed zenith angles below \(86^{\circ }\) rejects most of this background.
Figure 6 shows the reconstructed particle velocity at this level. The rate of atmospheric muon events has its maximum at low velocities, mostly due to coincident events remaining in this sample. The muon neutrino event rate consists mainly of track-like signatures and peaks at the velocity of light. Dim events or events traversing only part of the detector are reconstructed with lower velocities, which smears the peak rate for muon neutrino and monopole simulations. Electron neutrinos usually produce a cascade of particles (and light) when interacting, which is easy to separate from a track signature. The velocity reconstruction for these events yields mainly low velocities, which can also be used for separation from signal.
6.2 Event selection
In contrast to the highly relativistic analysis, machine learning was used. A boosted decision tree (BDT) [43] was chosen to account for limited background statistics. The multivariate method was embedded in a re-sampling method. This was combined with additional cuts to reduce the background rate and prepare the samples for an optimal training result. These straight cuts also reject cascades, coincident events, and pure-noise events, improve reconstruction quality, and remove short tracks which only clip the edges of the detector. A list of all cuts is given in Table 4. To train the BDT on lower velocities, an additional cut on the maximum velocity of \(0.82\, c\) is applied during training only, as shown in Fig. 6. Finally a cut on the penetration depth of a track, measured from the bottom of the detector, is performed. This guides the BDT training towards suppressing air shower events below the neutrino rate near the signal region, as can be seen in Fig. 8.
Out of the large number of variables provided by the standard and monopole reconstructions, 15 variables were chosen for the BDT using a tool called mRMR (Minimum Redundancy Maximum Relevance) [44]. These 15 variables are described in Table 5. With regard to the next step it was important to choose variables which show good data–simulation agreement, so that the BDT would not be trained on unknown differences between simulation and recorded data. The resulting BDT score distribution in Fig. 7 shows good signal vs. background separation with reasonable data–simulation agreement. The rate of events induced by atmospheric muons and electron neutrinos is suppressed sufficiently compared to the muon neutrino rate near the signal region. The main background is muon neutrinos from air showers.
6.3 Background expectation
To calculate the background expectation a method inspired by bootstrapping [45] is used, called pull-validation [46]. Bootstrapping is usually used to smooth a distribution by re-sampling the limited available statistics. Here, the goal is especially to smooth the tail near the signal region in Fig. 7.
Usually 50 % of the available sample is chosen to train a BDT and the other 50 % is used for testing; here this split is applied only to the signal simulation. For the background, 10 % of the burn sample is chosen randomly for training, in order to capture the variability in the tails of the background distribution.
Testing the BDT on the other 90 % of the burn sample leads to an extrapolation of the tail into the signal region. This re-sampling and BDT training/testing is repeated 200 times, each time choosing a random 10 % sample. In Fig. 8 the bin-wise average and standard deviation of 200 BDT score distributions are shown.
Through BDT testing, 200 different BDT scores are assigned to each single event, so each event is effectively transformed into a probability density distribution of scores. When cutting on the BDT score distribution in Fig. 8, a single event i is neither completely discarded nor kept; instead it is kept with a certain probability \(p_i\) which enters as a weight. The event is then weighted in total with \(W_i=p_i \cdot w_i\), using its survival probability and the weight \(w_i\) from the chosen flux spectrum. Therefore, many more events contribute to the cut region compared to a single BDT, which reduces the uncertainty of the background expectation.
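A minimal sketch of this per-event weighting, assuming the 200 scores per event are already available (the array shapes and numbers below are illustrative, not analysis values):

```python
import numpy as np

def survival_weights(scores, cut):
    """Pull-validation weighting sketch.

    scores : (n_events, n_bdts) array holding each event's score from
             every re-trained BDT.
    Returns the survival probability p_i of each event: the fraction of
    BDTs in which the event passes the cut.
    """
    return (np.asarray(scores) > cut).mean(axis=1)

# Toy example with 3 events scored by 4 BDTs:
scores = [[0.50, 0.40, 0.55, 0.60],   # passes the cut in 3 of 4 BDTs
          [0.10, 0.20, 0.15, 0.05],   # never passes
          [0.48, 0.49, 0.50, 0.51]]   # always passes
p = survival_weights(scores, cut=0.47)

# Combine with per-event flux weights w_i to get W_i = p_i * w_i:
flux_weights = np.array([2.0, 1.0, 0.5])
expected = (p * flux_weights).sum()   # total weighted background expectation
```

Events with fractional survival probabilities are exactly the "partially remaining" events that smooth the tail of the background expectation.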
To keep the error of this statistical method low, the cut on the averaged BDT score distribution is chosen near the value where statistics in a single BDT score distribution vanishes.
The developed re-sampling method gives the expected background rate including an uncertainty for each of the single BDTs. Therefore one BDT was chosen randomly for the unblinding of the data.
6.4 Uncertainties
The uncertainties of the re-sampling method were investigated thoroughly. The Poissonian error per bin is negligible because of the averaging over 200 BDTs. Instead, 370 partially remaining events contribute to the statistical error. This uncertainty \(\Delta _{\text {contr}}\) is estimated by considering the effect of omitting each individual event i of the 370 events from the statistics.
Datasets with different simulation parameters for the detector properties are used to calculate the according uncertainties. The values of all calculated uncertainties are shown in Table 1.
The robustness of the re-sampling method was additionally verified by varying all parameters and cut values of the analysis. Several fake unblindings were done by training the analysis on a 10 % sample of the burn sample, optimizing the last cut, and then applying this event selection to the other 90 % of the burn sample. This demonstrates reliability by showing that the previously calculated background expectation is reproduced when the statistics are increased by one order of magnitude. The results were mostly near the mean neutrino rate; only a few attempts gave a higher rate, but no attempt exceeded the calculated confidence interval.
When applying the final cut on the BDT score, the rate of background events across all 200 BDTs varies by up to 5 times the mean value of 0.55 events per live-time (311 days). This contribution dominates the total uncertainties. Therefore not a normal distribution but the actual distribution is used for further calculations. This distribution is used as a probability mass function in an extended Feldman–Cousins approach to calculate the 90 % confidence interval, as described in Sect. 5.3. The final cut at BDT score 0.47 is chosen near the minimum of the model rejection factor (MRF) [47]. To reduce the influence of uncertainties it was shifted to a slightly lower value. The sensitivity for many different velocities is calculated as described in Sect. 5.3 and shown in Fig. 9. This gives a 90 % confidence upper limit of 3.61 background events. The improvement in sensitivity compared to recent limits by ANTARES [19] and MACRO [48] ranges from one to almost two orders of magnitude, reflecting a large discovery potential.
7 Results
After optimizing the two analyses on the burn samples, the event selection was adhered to and the remaining 90 % of the experimental data were processed (“unblinded”). The corresponding burn samples were not included while calculating the final limits.
7.1 Result of the highly relativistic analysis
In the analysis based on the IC40 detector configuration three events remain, one in the low brightness subset and two in the high brightness subset. The low brightness event is consistent with a background-only observation with 2.2 expected background events. The event itself shows characteristics typical of a neutrino-induced muon. For the high brightness subset, with an expected background of 0.1 events, the observation of two events apparently contradicts the background-only hypothesis. However, a closer analysis of the two events reveals that they are unlikely to be caused by monopoles. These very bright events do not have a track-like signature but a spherical development only partly contained in the detector. A possible explanation is the now established flux of cosmic neutrinos, which was not included in the background expectation for this analysis. IceCube’s unblinding policy prevents any claims on these events or a reanalysis with changed cuts such as was employed with IC22 [41]. Instead they are treated as an upward fluctuation of the background, weakening the limit. The final limits outperform previous limits and are shown in Table 2 and Fig. 9. They can also be used as conservative limits for \(v>0.995\,c\) without optimization for high values of the Lorentz factor \(\gamma \), as the expected monopole signal is even brighter due to stochastic energy losses which are not considered here.
7.2 Result of the mildly relativistic analysis
In the mildly relativistic analysis three events remain after all cuts, which is within the confidence interval of up to 3.6 events and therefore consistent with a background-only observation. All events have reconstructed velocities above the training region of \(0.76\,c\). This is compared to the expectation from simulation in Fig. 10. Two of the events show a signature which is clearly incompatible with a monopole when investigated by eye, because they stop within the detector volume. The third event, shown in Fig. 11, may have a mis-reconstructed velocity due to the large string spacing of IceCube. However, its signature is compatible with a monopole with a lower light yield than described in Sect. 3. According to simulations, a monopole at this reconstructed velocity would emit about 6 times the observed light.
To be comparable to the other limits shown in Fig. 9, the final result of this analysis is calculated for different characteristic monopole velocities at the detector. The bin width of the velocity distribution in Fig. 10 is chosen to reflect the uncertainty of the velocity reconstruction. The limit is then calculated and normalized in each bin, which yields a step function. To avoid the bias introduced by the choice of histogram origin, five different starting points are chosen for the distribution in Fig. 10 and the resulting step functions are averaged [50].
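The averaging over shifted histogram origins can be sketched as follows. This is a toy illustration of the average-shifted-histogram technique [50]: the function `per_bin_limit` and the bin parameters are invented placeholders, whereas in the analysis the per-bin limit comes from the event counts and the simulated acceptance in each velocity bin.

```python
import numpy as np

def per_bin_limit(v_center):
    # Hypothetical per-bin flux limit, loosening towards lower velocity.
    return 1e-18 * (1.0 + 4.0 * (0.83 - v_center))

def averaged_limit(v, v_min=0.51, bin_width=0.05, n_shifts=5):
    """Average the step-function limits obtained from n_shifts
    shifted histogram origins (average shifted histograms, [50])."""
    acc = np.zeros_like(v, dtype=float)
    for k in range(n_shifts):
        origin = v_min - k * bin_width / n_shifts
        # Center of the bin containing each velocity for this origin.
        centers = origin + (np.floor((v - origin) / bin_width) + 0.5) * bin_width
        acc += per_bin_limit(centers)
    return acc / n_shifts

v = np.linspace(0.55, 0.95, 81)
phi = averaged_limit(v)  # smoother than any single step function
```

Each shifted origin yields a different step function over the same events; averaging them removes the arbitrariness of where the first bin edge is placed.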
The final limit is shown in Fig. 9 and Table 2 together with the limits from the highly relativistic analysis and other recent limits.
8 Discussion
The resulting limits are placed into context by considering indirect theoretical limits and previous experimental results. The flux \(\Phi \) of magnetic monopoles can be constrained in a model-independent way by astrophysical arguments to \(\Phi _{\text {P}} \le 10^{-15} \; \text {cm}^{-2}\; \text {s}^{-1}\; \text {sr}^{-1}\) for monopole masses below \(10^{17} \; \text {GeV}/c^2\). This value, the so-called Parker bound [49], has already been surpassed by several experiments, as shown in Fig. 9. The most comprehensive search for monopoles in terms of velocity range was performed by the MACRO collaboration using different detection methods [48].
More stringent flux limits have been imposed using the larger detector volumes provided by high-energy neutrino telescopes such as ANTARES [19], BAIKAL [33], AMANDA [51], and IceCube [41]. The current best limits for non-relativistic velocities (\(\le 0.1\,c\)) have been established by IceCube, constraining the flux down to a level of \(\Phi _{90\,\%}\le 10^{-18} \; \text {cm}^{-2}\; \text {s}^{-1}\; \text {sr}^{-1}\) [52]. These limits assume that monopoles catalyze proton decay. To date, the ANTARES analysis is the only one using a neutrino detector that covers the mildly relativistic velocity range (\(\ge 0.625\,c\)). Using the KYG cross section for \(\delta \)-electron production would, however, extend their limits to lower velocities. The Baksan collaboration has also produced limits on the monopole flux [53], both at slow and relativistic velocities, but due to the smaller size of the detector these results are not competitive with those shown in Fig. 9.
9 Summary and outlook
We have described two searches using IceCube for cosmic magnetic monopoles with velocities above \(0.51\,c\). One analysis focused on high monopole velocities at the detector, \(v>0.76\,c\), where the monopole produces Cherenkov light directly and the resulting detector signal is extremely bright. The other analysis considers lower velocities, \(v>0.51\,c\), where the monopole induces the emission of Cherenkov light indirectly and the brightness of the signal decreases strongly with decreasing velocity. Both analyses use geometrical information, in addition to the velocity and brightness of signals, to suppress background. The events remaining after all cuts were identified as background. The analyses bound the monopole flux to a level nearly two orders of magnitude below previous limits. Further details of these analyses are given in Refs. [42, 54].
Comparable sensitivities are expected from the future KM3NeT detector, based on scaling the latest ANTARES limit to a larger effective volume [55]. An ongoing ANTARES analysis using six years of data also estimates competitive sensitivities for highly relativistic velocities [56].
Even better sensitivities are expected from further years of data taking with IceCube or from proposed volume extensions of the detector [57]. A promising way to extend the search to slower monopoles with \(v \le 0.5\,c\) is to investigate the luminescence they would generate in ice, which may be detectable with the proposed low-energy infill array PINGU [58].
References
G. ’t Hooft, Nucl. Phys. B 79, 276 (1974)
A.M. Polyakov, JETP Lett. 20, 194 (1974)
A.H. Guth, S.H.H. Tye, Phys. Rev. Lett. 44(10), 631 (1980)
J. Polchinski, Int. J. Mod. Phys. A 19, 145 (2004). doi:10.1142/S0217751X0401866X
P. Dirac, Proc. R. Soc. A 133, 60 (1931)
J.P. Preskill, Ann. Rev. Nucl. Part. Sci. 34, 461 (1984)
S.D. Wick, T.W. Kephart, T.J. Weiler, P.L. Biermann, Astropart. Phys. 18(6), 663 (2003)
S. Dar, Q. Shafi, A. Sil, Phys. Rev. D 74, 035013 (2006)
M. Sakellariadou, Lect. Notes Phys. 738, 359 (2008)
A. Achterberg et al., Astropart. Phys. 26, 155 (2006). doi:10.1016/j.astropartphys.2006.06.007
R. Abbasi et al., Nucl. Instrum. Methods A 700, 188 (2014)
R. Abbasi et al., Nucl. Instrum. Methods A 618(1–3), 139 (2010)
R. Abbasi et al., Astropart. Phys. 35(10), 615 (2012)
M. Ackermann, et al., J. Geophys. Res. 111(D13) (2006)
M.G. Aartsen, et al., Nucl. Instrum. Methods A 711, 73 (2013).
R. Abbasi et al., Nucl. Instrum. Methods A 601(3), 294 (2009)
M.G. Aartsen, et al., Nucl. Instrum. Methods A 736, 143 (2014).
A. Roodman, in Proceedings of the conference on Statistical Problems in Particle Physics, Astrophysics, and Cosmology (2003), p. 166. arXiv:physics/0312102
S. Adrián-Martínez, et al., Astropart. Phys. 35, 634 (2012). doi:10.1016/j.astropartphys.2012.02.007
F. Moulin, Il Nuovo Cimento B 116, 869 (2001)
I.E. Tamm, I.M. Frank, Dokl. Akad. Nauk SSSR (Akad. of Science of the USSR) 14, 107 (1937)
D.R. Tompkins, Phys. Rev. 138(1B) (1965)
T.T. Wu, C.N. Yang, Nucl. Phys. B 107, 365 (1976)
Y. Kazama, C.N. Yang, A.S. Goldhaber, Phys. Rev. D 15, 2287 (1977)
E. Bauer, Math. Proc. Camb. Philos. Soc. 47(04), 777 (1951). doi:10.1017/S0305004100027225
H.J.D. Cole, Math. Proc. Camb. Philos. Soc. 47(01), 196 (1951)
S.P. Ahlen, Phys. Rev. D 14, 2935 (1975)
S.P. Ahlen, Phys. Rev. D 17(1), 229 (1978)
R.M. Sternheimer, At. Data Nucl. Data Tables 30(2), 261 (1984)
L.I. Grossweiner, M.S. Matheson, J. Chem. Phys. 20(10), 1654 (1952). doi:10.1063/1.1700246
L.I. Grossweiner, M.S. Matheson, J. Chem. Phys. 22(9), 1514 (1954). doi:10.1063/1.1740451
T.I. Quickenden, S.M. Trotman, D.F. Sangster, J. Chem. Phys. 77, 3790 (1982). doi:10.1063/1.444352
V. Aynutdinov et al., Astropart. Phys. 29, 366 (2008)
J.R. Hoerandel, Astropart. Phys. 19(2), 193 (2003). doi:10.1016/S0927-6505(02)00198-6
T.K. Gaisser, Astropart. Phys. 35(12), 801 (2012)
M. Honda, T. Kajita, K. Kasahara, S. Midorikawa, T. Sanuki, Phys. Rev. D 75(4), 043006 (2007)
R. Enberg, M.H. Reno, I. Sarcevic, Phys. Rev. D 78(4), 043005 (2008)
M.G. Aartsen, et al., Science 342(6161) (2013). doi:10.1126/science.1242856
M.G. Aartsen et al., Phys. Rev. Lett. 113(10), 101101 (2014)
J. Lundberg et al., Nucl. Instrum. Methods A 581, 619 (2007)
R. Abbasi, et al., Phys. Rev. D 87, 022001 (2013)
J. Posselt, Search for Relativistic Magnetic Monopoles with the IceCube 40-String Detector. Ph.D. thesis, University of Wuppertal (2013)
Y. Freund, Inform. Comput. 121(2), 256 (1995). doi:10.1006/inco.1995.1136
H. Peng, IEEE Trans. Pattern Anal. Mach. Intell. 27(8), 1226 (2005). doi:10.1109/TPAMI.2005.159
B. Efron, Ann. Stat. 7(1), 1 (1979)
J. Kunnen, J. Luenemann, A. Obertacke Pollmann, F. Scheriau for the IceCube Collaboration, in proceedings of the 34th International Cosmic Ray Conference (2015), p. 361. arXiv:1510.05226
G.J. Feldman, R.D. Cousins, Phys. Rev. D 57(7), 3873 (1998)
M. Ambrosio et al., Eur. Phys. J. C 25, 511 (2002)
E.N. Parker, Astrophys. J. 160, 383 (1970)
W. Haerdle, Z. Hlavka, Multivariate Statistics (Springer New York, 2007). doi:10.1007/978-0-387-73508-5
R. Abbasi et al., Eur. Phys. J. C 69, 361 (2010)
M.G. Aartsen et al., Eur. Phys. J. C 74, 2938 (2014)
Y.F. Novoseltsev, M.M. Boliev, A.V. Butkevich, S.P. Mikheev, V.B. Petkov, Nucl. Phys. B, Proc. Suppl. 151, 337 (2006). doi:10.1016/j.nuclphysbps.2005.07.048
A. Pollmann, Search for mildly relativistic Magnetic Monopoles with IceCube. Ph.D. thesis, University of Wuppertal (Submitted)
S. Adrian-Martinez, et al. The prototype detection unit of the KM3NeT detector (2014). arXiv:1510.01561
I.E. Bojaddaini, G.E. Pavalas, in Proceedings of the 34th International Cosmic Ray Conference (2015), p. 1097
M.G. Aartsen, et al., IceCube-Gen2: a vision for the future of neutrino astronomy in Antarctica (2014). arXiv:1412.5106
M.G. Aartsen, et al., Letter of intent: the Precision IceCube Next Generation Upgrade (PINGU) (2014). arXiv:1401.2046
Acknowledgments
We acknowledge the support from the following agencies: U.S. National Science Foundation-Office of Polar Programs, U.S. National Science Foundation-Physics Division, University of Wisconsin Alumni Research Foundation, the Grid Laboratory Of Wisconsin (GLOW) grid infrastructure at the University of Wisconsin - Madison, the Open Science Grid (OSG) grid infrastructure; U.S. Department of Energy, and National Energy Research Scientific Computing Center, the Louisiana Optical Network Initiative (LONI) grid computing resources; Natural Sciences and Engineering Research Council of Canada, WestGrid and Compute/Calcul Canada; Swedish Research Council, Swedish Polar Research Secretariat, Swedish National Infrastructure for Computing (SNIC), and Knut and Alice Wallenberg Foundation, Sweden; German Ministry for Education and Research (BMBF), Deutsche Forschungsgemeinschaft (DFG), Helmholtz Alliance for Astroparticle Physics (HAP), Research Department of Plasmas with Complex Interactions (Bochum), Germany; Fund for Scientific Research (FNRS-FWO), FWO Odysseus programme, Flanders Institute to encourage scientific and technological research in industry (IWT), Belgian Federal Science Policy Office (Belspo); University of Oxford, United Kingdom; Marsden Fund, New Zealand; Australian Research Council; Japan Society for Promotion of Science (JSPS); the Swiss National Science Foundation (SNSF), Switzerland; National Research Foundation of Korea (NRF); Danish National Research Foundation, Denmark (DNRF).