Search for steady point-like sources in the astrophysical muon neutrino flux with 8 years of IceCube data


Abstract The IceCube Collaboration has observed a high-energy astrophysical neutrino flux and recently found evidence for neutrino emission from the blazar TXS 0506+056. These results open a new window into the high-energy universe. However, the source or sources of most of the observed flux of astrophysical neutrinos remain uncertain.
Here, a search for steady point-like neutrino sources is performed using an unbinned likelihood analysis. The method searches for a spatial accumulation of muon-neutrino events using the very high-statistics sample of about 497,000 neutrinos recorded by IceCube between 2009 and 2017. The median angular resolution is ∼1° at 1 TeV and improves to ∼0.3° for neutrinos with an energy of 1 PeV. Compared to previous analyses, this search is optimized for point-like neutrino emission with the same flux characteristics as the observed astrophysical muon-neutrino flux, and introduces an improved event reconstruction and parametrization of the background. The result is an improvement in sensitivity to the muon-neutrino flux of ∼35% compared to the previous analysis, assuming an E^−2 spectrum. The sensitivity to the muon-neutrino flux is at a level of E^2 dN/dE = 3 · 10^−13 TeV cm^−2 s^−1.
No new evidence for neutrino sources is found in a full sky scan and in an a priori candidate source list that is motivated by gamma-ray observations. Furthermore, no significant excesses above background are found from populations of sub-threshold sources. The implications of the non-observation for potential source classes are discussed.
Keywords Neutrino · IceCube · point source

Introduction
Astrophysical neutrinos are thought to be produced in hadronic interactions of cosmic rays with matter or radiation fields in the vicinity of their acceleration sites [1]. Unlike cosmic rays, neutrinos carry no charge and are not deflected by magnetic fields, and thus point back to their origin. Moreover, since neutrinos have a relatively small interaction cross section, they can escape from the sources and do not suffer absorption on their way to Earth. Hadronic interactions of high-energy cosmic rays may also produce high-energy or very-high-energy gamma rays. However, since gamma rays can also arise from the interaction of relativistic leptons with low-energy photons, only neutrinos are directly linked to hadronic interactions. The most commonly assumed neutrino-flavor flux ratios in the sources result in equal or nearly equal flavor flux ratios at Earth [2]. Thus, about 1/3 of the astrophysical neutrinos are expected to be muon neutrinos and muon anti-neutrinos.
In 2013, the IceCube Collaboration reported the observation of an unresolved, astrophysical, high-energy, all-flavor neutrino flux, consistent with isotropy, using a sample of events that begin inside the detector ('starting events') [3,4]. This observation was confirmed by the measurement of an astrophysical high-energy muon-neutrino flux using the complementary detection channel of through-going muons, produced in neutrino interactions in the vicinity of the detector [5][6][7]. Track-like events from through-going muons are ideal for searching for neutrino sources because of their relatively good angular resolution. However, to date, the sources of this flux have not been identified.
In 2018, first evidence of neutrino emission from an individual source was observed for the blazar TXS 0506+056 [8,9]. Multi-messenger observations following up a high-energy muon-neutrino event on September 22, 2017 resulted in the detection of this blazar in a flaring state. Furthermore, evidence was found for an earlier neutrino burst from the same direction between September 2014 and March 2015. However, the total neutrino flux from this source is less than 1% of the total observed astrophysical flux. Furthermore, a stacking search at the positions of known blazars has revealed no significant excess of astrophysical neutrinos. This indicates that blazars from the 2nd Fermi-LAT AGN catalogue contribute less than about 30% to the total observed neutrino flux, assuming an unbroken power-law spectrum with a spectral index of −2.5 [10]. The constraint weakens to about 40%–80% of the total observed neutrino flux for a spectral index of −2 [8]. Note that these results are model dependent and an extrapolation beyond the catalog is uncertain. No other previous searches have revealed a significant source or source class of astrophysical neutrinos [11][12][13][14][15][16][17][18][19][20][21].
Here, a search for point-like sources is presented that takes advantage of the improved event selection and reconstruction of the muon-neutrino sample developed in [6] and the increased livetime of eight years [7] between 2009 and 2017. The best description of the sample includes a high-energy astrophysical neutrino flux given by a single power law with a spectral index of 2.19 ± 0.10 and a flux normalization, at 100 TeV, of Φ_100 TeV = 1.01 (+0.26/−0.23) × 10^−18 GeV^−1 cm^−2 s^−1 sr^−1, resulting in 190 to 2145 astrophysical neutrinos in the event sample. Compared to the previous time-integrated point source publications by IceCube [14,16,[22][23][24], this analysis is optimized for sources with energy spectra similar to the measured astrophysical muon-neutrino spectrum. Furthermore, a high-statistics Monte Carlo parametrization of the measured data, consisting of astrophysical and atmospheric neutrinos and including systematic uncertainties, is used to model the background expectation, which increases the sensitivity.
Within this paper, the following tests are discussed:
1. a full sky scan for the most significant source in the Northern hemisphere,
2. a test for a population of sub-threshold sources based on the result of the full sky scan,
3. a search based on an a priori defined catalog of candidate objects motivated by gamma-ray observations [16],
4. a test for a population of sub-threshold sources based on the result of the a priori defined catalog search, and
5. a test of the recently observed blazar TXS 0506+056.
The tests are described in Section 3.4 and their results are given in Section 4.

Data sample
IceCube is a cubic-kilometer neutrino detector with 5160 digital optical modules (DOMs) installed on 86 cable strings in the clear ice at the geographic South Pole, at depths between 1450 m and 2450 m [25,26]. The neutrino energy and directional reconstruction relies on the optical detection of Cherenkov radiation emitted by secondary particles produced in neutrino interactions in the surrounding ice or the nearby bedrock. The Cherenkov light is detected by the DOMs, each consisting of a 10-inch photomultiplier tube [27], on-board read-out electronics [28] and a high-voltage board, all contained in a spherical glass pressure vessel. Light propagation within the ice can be parametrized by the scattering and absorption behavior of the Antarctic ice at the South Pole [29]. The detector construction finished in 2010. During construction, data were taken in partial detector configurations with 59 strings (IC59) from May 2009 to May 2010 and with 79 strings (IC79) from May 2010 to May 2011, before IceCube became fully operational.
For events arriving from the Southern hemisphere, the trigger rate in IceCube is dominated by atmospheric muons produced in cosmic-ray air showers. The event selection is restricted to the Northern hemisphere where these muons are shielded by the Earth. Additionally, events are considered down to −5° declination, where the effective overburden of ice is sufficient to strongly attenuate the flux of atmospheric muons. Even after requiring reconstructed tracks from the Northern hemisphere, the event rate is dominated by mis-reconstructed atmospheric muons. However, these mis-reconstructed events can be reduced to less than 0.3% of the background using a careful event selection [6,7]. As the data were taken with different partial configurations of IceCube, the details of the event selections are different for each season. At final selection level, the sample is dominated by atmospheric muon neutrinos from cosmic-ray air showers [6]. These atmospheric neutrinos form an irreducible background to astrophysical neutrino searches and can be separated from astrophysical neutrinos on a statistical basis only.
In total, data with a livetime of 2780.85 days are analyzed containing about 497 000 events at the final selection level. A summary of the different sub-samples is shown in Tab. 1.
The performance of the event selection can be characterized by the effective area of muon-neutrino and anti-neutrino detection, the point spread function and the central 90% energy range of the resulting event sample. The performance is evaluated with a full detector Monte Carlo simulation [26].
The effective area A_eff^{ν+ν̄} quantifies the relation between the neutrino and anti-neutrino flux φ_{ν+ν̄} and the observed rate of events dN_{ν+ν̄}/dt:

dN_{ν+ν̄}/dt = ∫ dΩ ∫ dE_ν A_eff^{ν+ν̄}(θ, φ, E_ν) φ_{ν+ν̄}(E_ν),

where Ω is the solid angle, θ and φ are the detector zenith and azimuth angles, and E_ν is the neutrino energy. The effective area for muon neutrinos and muon anti-neutrinos averaged over the Northern hemisphere down to −5° declination is shown in Fig. 1 (top). At high energies, the muon direction is well correlated with the muon-neutrino direction (< 0.1° deviation above 10 TeV) and the muon is reconstructed with a median angular uncertainty ΔΨ_ν of about 0.6° at 10 TeV. All events have been reconstructed with an improved reconstruction based on the techniques described in [30,31]. The median angular resolution is shown in Fig. 1 (middle): it is about 1° at a neutrino energy of 1 TeV and decreases for higher energies to about 0.3° at 1 PeV.
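The effective-area relation above can be sketched numerically. In the following toy calculation, the power-law shape of A_eff, its normalization, and the flux values are illustrative assumptions, not IceCube's actual detector response:

```python
import numpy as np

# Numerical sketch of dN/dt = ∫dΩ ∫dE A_eff(E) * phi(E).
# The A_eff parametrization and the flux normalization are hypothetical.
def a_eff_cm2(e_gev):
    return 1.0e4 * (e_gev / 1.0e5) ** 0.7        # assumed ~1 m^2 at 100 TeV

def flux(e_gev):
    return 1.0e-18 * (e_gev / 1.0e5) ** -2.0      # GeV^-1 cm^-2 s^-1 sr^-1

e = np.logspace(2.5, 7.0, 2000)                   # ~300 GeV .. 10 PeV
integrand = a_eff_cm2(e) * flux(e)
# trapezoidal integration over energy -> rate per steradian
rate_per_sr = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(e))
livetime_s = 2780.85 * 86400.0                    # livetime from Tab. 1
n_events = rate_per_sr * 2.0 * np.pi * livetime_s # Northern hemisphere (~2π sr)
print(f"expected events: {n_events:.1f}")
```

The same integral, with the real declination-dependent effective area, is what converts an assumed source flux into an expected event count in the analysis.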
The central 90% energy range is shown in Fig. 1 (bottom) as a function of sin δ, with declination δ. Energy ranges are calculated using the precise best-fit parametrization of the experimental sample. The energy range stays mostly constant as a function of declination but shifts to slightly higher energies near the horizon. The central 90% energy range of atmospheric neutrinos is about 200 GeV to 10 TeV.
In Fig. 2, the ratio of the effective area (top) and the median angular resolution (bottom) of the sub-sample IC86 2012-2016 to the sample labeled 2012-2015 from the previous time-integrated point source publication by IceCube [16] is shown. The differences in effective area are declination dependent. When averaged over the full Northern hemisphere, the effective area of this event selection is smaller than that in [16] at low neutrino energies but is larger above 100 TeV. The median neutrino angular resolution ΔΨ_ν improves at 10 TeV by about 10% compared to the reconstruction used in [16], and by up to 20% at higher energies. The event sample for the season from May 2011 to May 2012 has an overlap of about 80% with the selection presented in Ref. [16] for the same time range.

Likelihood & test statistics
The data sample is tested for a spatial clustering of events with an unbinned likelihood method described in [32] and used in the previous time-integrated point source publications by IceCube [14,16,[22][23][24]. At a given position x_s in the sky, the likelihood function for a source at this position, assuming a power-law energy spectrum with spectral index γ, is given by

L(n_s, γ) = P(γ) · ∏_{i=1}^{N} [ (n_s/N) S_i + (1 − n_s/N) B_i ],

where i is an index of the observed neutrino events, N is the total number of events, n_s is the number of signal events and P(γ) is a prior term. S_i and B_i are the signal and background probability densities evaluated for event i. The likelihood is maximized with respect to the source parameters n_s ≥ 0 and 1 ≤ γ ≤ 4 at each tested source position in the sky, given by its right ascension and declination x_s = (α_s, δ_s).

Table 1 Data samples used in this analysis and some characteristics of these samples. For each sample, the start date, livetime, number of observed events, and energy and declination range of the event selection are given. The energy range, calculated using a spectrum of atmospheric neutrinos and astrophysical neutrinos, spans the central 90% of the simulated events. Astrophysical neutrinos were generated using the best-fit values listed in Section 1. Note that the livetime values deviate slightly from Ref. [6,7], as the livetime calculation has been corrected.
The signal and background probability density functions (PDFs) S_i and B_i factorize into a spatial and an energy factor:

S_i = S_spat(x_i, σ_i; x_s) · S_ener(E_i, δ_i; γ),
B_i = B_spat(δ_i) · B_ener(E_i, δ_i),

where x_i = (α_i, δ_i) are the reconstructed right ascension α_i and declination δ_i, E_i is the reconstructed energy [33] and σ_i is the event-by-event estimated angular uncertainty of the reconstruction of event i [22,34]. A likelihood-ratio test is performed to compare the best-fit likelihood to the null hypothesis of no significant clustering, L_0 = ∏_i B_i. The likelihood ratio

TS = 2 log[ L(n̂_s, γ̂) / L_0 ],

with best-fit values n̂_s and γ̂, is used as a test statistic. The sensitivity of the analysis is defined as the median expected 90% CL upper limit on the flux normalization in the case of pure background. In addition, the discovery potential is defined as the signal strength that leads to a 5σ deviation from background in 50% of all cases.
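The likelihood and test statistic can be sketched on a toy event sample. The PDFs below are deliberately simplified (a 2D Gaussian signal and a uniform background on a small patch; the energy factors and the γ fit are omitted), so this is a minimal illustration of the method rather than the analysis code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy unbinned point-source likelihood:
#   L(n_s) = prod_i [ (n_s/N) S_i + (1 - n_s/N) B_i ],  TS = 2 log(L_hat/L_0)
n_bg, n_sig, sigma = 1000, 15, 0.2               # events, injected signal, deg
bg = rng.uniform(-10, 10, size=(n_bg, 2))        # uniform background positions
sig = rng.normal(0.0, sigma, size=(n_sig, 2))    # signal around source at (0, 0)
xy = np.vstack([bg, sig])
N = len(xy)

r2 = (xy ** 2).sum(axis=1)
S = np.exp(-r2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)  # spatial signal PDF
B = np.full(N, 1.0 / 400.0)                      # uniform over 20x20 deg patch

def log_l(ns):
    return np.log(ns / N * S + (1.0 - ns / N) * B).sum()

ns_grid = np.linspace(0.0, 100.0, 1001)          # grid fit instead of a minimizer
lls = np.array([log_l(v) for v in ns_grid])
ns_hat = ns_grid[lls.argmax()]
ts = 2.0 * (lls.max() - log_l(0.0))
print(f"ns_hat = {ns_hat:.1f}, TS = {ts:.1f}")
```

In the real analysis the fit additionally profiles over γ with the energy PDFs, and the per-event σ_i replaces the single Gaussian width used here.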
In previous point source publications by IceCube [14,16,[22][23][24], the spatial background PDF B_spat and the energy background PDF B_ener were estimated from the data. Given the best-fit parameters obtained in [6] and the good data/Monte Carlo agreement, it is, however, possible to obtain a precise parametrization of the atmospheric and diffuse astrophysical components, including systematic uncertainties. This makes it possible to take advantage of the high statistics of the full detector simulation data sets, which can be used to generate smooth PDFs optimized for the sample used in this work. This parametrization of the experimental data thus allows a better extrapolation to sparsely populated regions in the energy-declination plane than the statistically limited experimental data alone. It comes with the drawback that the analysis can only be applied to the Northern hemisphere, since no precise parametrization is available for the Southern hemisphere. Generating PDFs from full detector simulations has already been done in previous publications for the energy signal PDF S_ener, as it is not possible to estimate this PDF from the data itself. The spatial signal PDF S_spat is still assumed to be Gaussian with an event-individual uncertainty σ_i.
It is known from the best-fit parametrization of the sample that the data contain astrophysical events. The astrophysical component has been parametrized by an unbroken power-law with best-fit spectral index of 2.19 ± 0.10 [7]. In contrast to the previous publication of time-integrated point source searches by IceCube [16], which uses a flat prior on the spectral index in the range 1 ≤ γ ≤ 4, this analysis focuses on those sources that produce the observed spectrum of astrophysical events by adding a Gaussian prior P(γ) on the spectral index in Eq. 2 with mean 2.19 and width 0.10. As the individual source spectra are not strongly constrained by the few events that contribute to a source, the prior dominates the fit of γ and thus the spectral index is effectively fixed allowing only for small variations. Due to the prior, the likelihood has reduced effective degrees of freedom to model fluctuations. As a result, the distribution of the test statistic in the case of only background becomes steeper which results in an improvement of the discovery potential assuming an E −2 source spectrum.
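The effect of the Gaussian prior can be illustrated with a toy penalized likelihood. The width of the data term below (a weak preference for γ = 2.6) is an arbitrary assumption standing in for the poorly constrained per-source spectrum; the prior uses the quoted mean 2.19 and width 0.10:

```python
import numpy as np

# Sketch: with few events, the data only weakly constrain gamma, so the
# Gaussian prior P(gamma) dominates the fit and effectively fixes gamma.
gamma = np.linspace(1.0, 4.0, 3001)
log_l_data = -0.5 * ((gamma - 2.6) / 0.8) ** 2    # toy weak data constraint
log_prior = -0.5 * ((gamma - 2.19) / 0.10) ** 2   # prior from the diffuse fit

g_no_prior = gamma[np.argmax(log_l_data)]
g_with_prior = gamma[np.argmax(log_l_data + log_prior)]
print(f"without prior: {g_no_prior:.2f}, with prior: {g_with_prior:.2f}")
```

The penalized maximum sits essentially at the prior mean, which is why the spectral index is described in the text as effectively fixed, allowing only small variations.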
However, due to the reduced freedom of the likelihood caused by the prior on the spectral index, about 80% of background trials yield n̂_s = 0 and thus TS = 0. This pile-up makes the median of the TS distribution degenerate, which leads to an over-estimation of the median 90% upper limit and thus an artificially over-estimated flux sensitivity. Therefore, a different definition of the TS is introduced for n̂_s = 0. Simply allowing negative n̂_s in the fit can lead to convergence problems due to the second free parameter γ. Since n̂_s = 0 is already close to the minimum of −log L, log L can be approximated by a parabola: the likelihood is expanded in a Taylor series up to second order around n_s = 0, and the value of the resulting parabola at its extremum is calculated from the first and second derivatives of the likelihood at n_s = 0. This value is used as the test statistic for likelihood fits that yield n̂_s = 0. With this definition, the pile-up at n̂_s = 0 is spread towards negative values of TS and the median of the test statistic is no longer degenerate. Using this method, the sensitivity, which had been overestimated due to the pile-up at n̂_s = 0, can be recovered.
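A minimal sketch of this boundary treatment follows. The sign convention (negative TS when the parabola's extremum lies at negative n_s) is our reading of the text, and the toy log-likelihood is an arbitrary concave function representing an under-fluctuation:

```python
import numpy as np

def boundary_ts(log_l, h=1e-3):
    """Signed TS from a 2nd-order Taylor expansion of log_l around ns = 0.

    log_l: callable giving the log-likelihood (relative to ns = 0) as a
    function of ns. Central differences give the first and second derivative.
    """
    a = (log_l(h) - log_l(-h)) / (2 * h)                   # d logL / d ns at 0
    b = (log_l(h) - 2 * log_l(0.0) + log_l(-h)) / h ** 2   # 2nd derivative at 0
    ns_star = -a / b                                        # parabola extremum
    delta = a * ns_star + 0.5 * b * ns_star ** 2            # logL(ns*) - logL(0)
    return np.sign(ns_star) * 2.0 * delta                   # negative for ns* < 0

# Toy concave log-likelihood decreasing with ns (an under-fluctuation),
# so the unconstrained fit would prefer negative ns:
ts = boundary_ts(lambda ns: -0.3 * ns - 0.01 * ns ** 2)
print(ts)  # negative TS, spreading the pile-up below zero
```

For this quadratic toy case the extremum is at ns* = −15 and the signed TS is exactly −4.5, illustrating how fits stuck at the n̂_s = 0 boundary are mapped onto negative TS values.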

Pseudo-experiments
To calculate the performance of the analysis, pseudo-experiments containing only background and pseudo-experiments with injected signal have been generated. In this search for astrophysical point sources, atmospheric neutrinos and astrophysical neutrinos from unresolved sources make up the background. Using the precise parametrization of the reconstructed declination and energy distribution from Ref. [7], pseudo-experiments are generated using full detector simulation events. Due to IceCube's position at the South Pole and the high duty cycle of >99% [26], the background PDF is uniform in right ascension.
As a cross check, background samples are generated by scrambling experimental data uniformly in right ascension. The declination and energy of the events are kept fixed. This results in a smaller sampled range of event energy and declination compared to the Monte Carlo-based pseudo-experiments, in which events are sampled from the simulated background distributions and are thus not limited to the values of energy and declination present in the data. P-values for the tests presented in Section 4 are calculated using the Monte Carlo method and are compared to the data-scrambling method for verification (values in brackets).
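The scrambling cross check is simple to sketch: draw a new, uniform right ascension for every event while leaving declination and energy untouched. The column layout of the toy event array below is an assumption for illustration:

```python
import numpy as np

def scramble_ra(events, rng):
    """Scramble events uniformly in right ascension (column 0);
    declination (column 1) and energy (column 2) are kept fixed."""
    out = events.copy()
    out[:, 0] = rng.uniform(0.0, 360.0, size=len(out))
    return out

rng = np.random.default_rng(4)
events = np.column_stack([
    rng.uniform(0, 360, 1000),    # right ascension (deg)
    rng.uniform(-5, 90, 1000),    # declination (deg)
    rng.normal(3.0, 0.7, 1000),   # toy log10(E/GeV)
])
scrambled = scramble_ra(events, rng)
```

Because the detector acceptance is uniform in right ascension, each scrambled sky is a valid background realization with the observed declination and energy distributions preserved exactly.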
Signal is injected according to a full simulation of the detector. Events are generated at a simulated source position assuming a power-law energy distribution. The number of injected signal events is calculated from the assumed flux and the effective area for a small declination band around the source position. In this analysis, the declination band was narrowed compared to the previous publication of time-integrated point source searches by IceCube [16], resulting in a more accurate modeling of the effective area. This change in signal modeling has a visible effect on the sensitivity and discovery potential, especially at the horizon and at the celestial pole. The effect can be seen in Fig. 3 by comparing the solid (small bandwidth) and dotted (large bandwidth) lines. The bandwidth is optimized by taking into account the effect of averaging over small declination bands and the limited simulation statistics available to calculate the effective area. As the bandwidth cannot be made arbitrarily narrow, an uncertainty of about 8% on the flux limit calculation arises and is included in the systematic error.

Sensitivity & discovery potential
The sensitivity and discovery potential for a single point source are calculated for an unbroken power-law flux

dN/dE = Φ_0 · (E/E_0)^−γ,

with flux normalization Φ_0 at a reference energy E_0. In Fig. 3, the sensitivity and discovery potential as a function of sin δ are shown. Note that Fig. 3 shows E^2 dN/dE, which is constant in neutrino energy for an E^−2 flux. The sensitivity corresponds to a 90% CL averaged upper limit and the discovery potential gives the median source flux for which a 5σ discovery would be expected. The flux is given as a muon-neutrino plus muon anti-neutrino flux. For comparison, the sensitivity and discovery potential from the previous publication of time-integrated point source searches by IceCube [16] are shown. Despite only a moderate increase in livetime, this analysis outperforms the analysis in [16] by about 35% for multiple reasons: 1. the use of an improved angular reconstruction, 2. a slightly better optimized event selection near the horizon, 3. the use of background PDFs in the likelihood that are based on the parametrization from [6,7], which improves the sensitivity especially at higher energies, 4. the reduced number of source hypotheses due to the prior on the spectral index, which results in a more steeply falling background TS distribution, and 5. the use of negative TS values, which avoids overestimating the sensitivity, especially in the celestial pole region (sin δ ∼ 1), where the background changes rapidly in sin δ. In Fig. 4, the differential discovery potentials for three different declination bands are shown.
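The quoted sensitivity level, E^2 dN/dE = 3 · 10^−13 TeV cm^−2 s^−1, can be checked directly: for γ = 2 the combination E^2 dN/dE is independent of energy, so a single number characterizes the whole spectrum. Here the reference energy is taken as 1 TeV for illustration:

```python
# E^2 dN/dE for an unbroken power law dN/dE = phi0 * (E / 1 TeV)^-gamma,
# with phi0 in TeV^-1 cm^-2 s^-1 at the (assumed) 1 TeV reference energy.
def e2_dnde(e_tev, phi0=3e-13, gamma=2.0):
    return e_tev ** 2 * phi0 * e_tev ** (-gamma)

for e in (1.0, 100.0, 1e3):
    print(e, "TeV ->", e2_dnde(e), "TeV cm^-2 s^-1")
```

For any γ ≠ 2 the same function shows E^2 dN/dE drifting with energy, which is why limits for softer spectra must always be quoted together with their reference energy.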

Full sky scan
A scan of the full Northern hemisphere from 90° down to −3° declination has been performed. The edge at −3° has been chosen to avoid computational problems due to rapidly changing PDFs at the boundary of the sample at −5°. The scan is performed on a grid with a resolution of about 0.1°. The grid was generated using the HEALPix pixelization scheme [35]. For each grid point, the pre-trial p-value is calculated. As the test statistic shows a slight declination dependence, the declination-dependent TS is used to calculate local p-values. TS distributions have been generated for 100 declinations equally distributed in sin δ, with 10^6 trials generated for each declination. Below a TS value of 5, the p-value is determined directly from trials. Above TS = 5, an exponential function is fitted to the tail of the distribution and used to calculate p-values. A Kolmogorov-Smirnov test [36,37] and a χ² test are used to verify the agreement between the fitted function and the distribution.
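The hybrid p-value scheme (trials below the threshold, an exponential tail fit above it) can be sketched as follows. The toy χ²-like trials and the simple maximum-likelihood slope estimate stand in for the real background TS distributions and fit:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy background TS trials; a chi2(1) sample stands in for the real ones.
trials = rng.chisquare(1, size=10**6)
thresh = 5.0

def p_value(ts):
    """p-value from trials below the threshold, exponential tail above it."""
    if ts <= thresh:
        return np.mean(trials >= ts)                 # directly from trials
    tail = trials[trials > thresh] - thresh
    tau = tail.mean()                                # MLE slope of the exp tail
    p_thresh = np.mean(trials > thresh)
    return p_thresh * np.exp(-(ts - thresh) / tau)   # extrapolated p-value

p3, p12 = p_value(3.0), p_value(12.0)
print(p3, p12)
```

The exponential extrapolation is what allows quoting p-values far smaller than 1/(number of trials); the Kolmogorov-Smirnov and χ² checks mentioned in the text guard against a poor tail fit.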
The most significant point in the sky produced by the scan is selected using the pre-trial p-value. Since many points are tested in this scan, a trial correction has to be applied. Therefore, the procedure is repeated with background pseudo-experiments as described in Section 3.2. By comparing the local p-values of the most significant points in the background samples to the experimental pre-trial p-value, the post-trial p-value is calculated. The final p-value is calculated directly from ∼3500 background trials.
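The trial correction can be sketched with toy skies. Since the best local p-value among N_eff effectively independent trials is the minimum of N_eff uniform variates, it follows a Beta(1, N_eff) distribution, which stands in below for a full scan of each pseudo-experiment; the observed pre-trial p-value used here is hypothetical, not the paper's measured value:

```python
import numpy as np

rng = np.random.default_rng(3)

# Post-trial p-value: fraction of background skies whose most significant
# local p-value beats the observed one. N_eff = 241,000 is the effective
# trial factor quoted in the text; 3500 matches the number of trials used.
n_trials, n_eff = 3500, 241_000
best_local = rng.beta(1.0, n_eff, size=n_trials)  # best local p per toy sky
p_obs = 1e-5                                       # hypothetical observed value
p_post = np.mean(best_local <= p_obs)
print(f"post-trial p-value: {p_post:.2f}")
```

With ~2.4 · 10^5 effective trials, even a pre-trial p-value of 10^−5 is expected from background in roughly 90% of skies, illustrating why the full sky scan only responds to very strong sources.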

Population test in the full sky scan
Due to the large number of trials, only very strong sources would be identified in the full sky scan, which quantifies only the most significant source. However, the obtained TS values can also be tested for a significant excess of events from multiple weaker sources without any bias towards source positions. This is done by counting local warm spots where the p-values are smaller than a preset threshold. An excess of counts with respect to the expectation from pure background sky maps can indicate the presence of multiple weak sources. From the full sky scan, local spots with p_local < 10^−2 and a minimal separation of 1° are selected. The number of expected local spots λ with a p-value smaller than p_thres is estimated from background pseudo-experiments and shown in Fig. 5 as a dashed line. The background expectation was found to be Poisson distributed. The threshold value is optimized to give the most significant excess above the background expectation, using the Poisson probability to find an excess of at least n spots. Due to the optimization of the threshold in the range 2 < −log10 p_thres < ∞, the result has to be corrected for trials as well. To include this correction, the full sky scan population test is performed on background pseudo-experiments to calculate the post-trial p-value.

The background distribution of the local p-value p_local for the most significant point is described by dP = N (1 − p_local)^{N−1} dp_local, with an effective number of trials N that is fitted to 241,000 ± 9,000. A rough approximation of this trial factor can be obtained by dividing the solid angle of the Northern hemisphere, ∼2π, by the squared median angular resolution. Considering that the highest-energy events dominate the sensitivity, we use 0.3° for the median angular resolution and obtain 2π/(0.3°)² ≈ 229,000 effective trials, which is of the same order of magnitude as the fitted value.
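Both numerical ingredients of this test can be checked with a few lines: the Poisson probability of an excess of warm spots (using the counts reported in Section 4, 492 observed for 454.3 expected) and the rough effective-trials estimate from the footnote:

```python
import math

def poisson_sf(n_obs, lam):
    """P(X >= n_obs) for X ~ Poisson(lam), summed term by term from n_obs
    upward, starting from the log of the pmf for numerical stability."""
    log_p = -lam + n_obs * math.log(lam) - math.lgamma(n_obs + 1)
    term, total, k = math.exp(log_p), 0.0, n_obs
    while term > 1e-16 * (total + term):
        total += term
        k += 1
        term *= lam / k          # pmf recursion p(k+1) = p(k) * lam / (k+1)
    return total

p_excess = poisson_sf(492, 454.3)             # ~4% pre-trial, as in Section 4
n_eff = 2 * math.pi / math.radians(0.3) ** 2  # footnote estimate, ~229,000
print(p_excess, round(n_eff))
```

This pre-trial probability is then itself corrected for the threshold optimization by repeating the whole procedure on background pseudo-experiments, as described above.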

A priori source list
The detectability of sources suffers from the large number of trials within the full sky scan, and thus individual significant source directions may become insignificant after the trial correction. However, gamma-ray data can help to preselect interesting neutrino source candidates. A standard IceCube and ANTARES a priori source list, containing 34 prominent candidate sources for high-energy neutrino emission in the Northern hemisphere, has been tested [16], reducing the trial factor to about the number of sources in the catalog. The source catalog is summarized in Tab. 2. The sources were selected mainly based on observations in gamma rays and belong to various object classes. The sources from this list are tested individually with the unbinned likelihood from Eq. 2. For this test, p-values are calculated from 10^6 background trials without using any extrapolation. Then the most significant source is selected and a trial correction, derived from background pseudo-experiments, is applied. Note that some sources, such as MGRO J1908+06, SS 433, and Geminga, are spatially extended with an apparent angular size of up to several degrees, which is larger than IceCube's point spread function. In such cases, the sensitivity of the analysis presented in this paper is reduced; for example, for an extension of 1°, the sensitivity to the neutrino flux decreases by ∼20% [24].

Population test in the a priori source list
Similar to the population test in the full sky scan, an excess of several sources with small but not individually significant p-values in the a priori source list can indicate a population of weak sources. Therefore, for each k, the k most significant p-values of the source list are combined by computing the binomial probability

P_bin(k) = Σ_{i=k}^{N} (N choose i) p_k^i (1 − p_k)^{N−i}

of observing k or more p-values at or below the threshold p_k, where p_k is the k-th smallest p-value and N = 34 is the total number of sources in the source list. The most significant combination over k is used as a test statistic and assessed against background using pseudo-experiments.

Monitored source list
IceCube and ANTARES have tested the a priori source list for several years with increasingly sensitive analyses [16,[22][23][24]. Changing the source list a posteriori may lead to a bias in the result. However, not reporting on recently observed, interesting sources would ignore progress in the field. An unbiased p-value cannot be defined for such sources, since they were added to the list after the fact. Therefore, a second list of sources is tested to report on an updated source catalog. In this work, this second catalog comprises only TXS 0506+056, for which evidence for neutrino emission has been observed.

Systematic uncertainties
The p-values for the tested hypotheses are determined with simulated pseudo-experiments assuming only background (see also Section 3.2). These experiments are generated using the full detector Monte Carlo simulation, weighted to the best-fit parametrization from Ref. [7]. This parametrization includes the optimization of nuisance parameters accounting for systematic uncertainties, resulting in very good agreement between experimental data and Monte Carlo. Because of this procedure, the p-values are less affected by the statistical fluctuations that would occur when estimating p-values from scrambled experimental data, as well as by the effect of fixed event energies during scrambling. However, good agreement of the parametrization with experimental data is a prerequisite of this method. As a cross check, p-values are also calculated using scrambled experimental data. These p-values are given for comparison in brackets in Section 4. We find that the two methods give very similar results, confirming the absence of systematic biases.
The calculation of the absolute neutrino flux normalization based on Monte Carlo simulations is affected by systematic uncertainties. These uncertainties influence the reconstruction performance and the determination of the effective area. Here, the dominant uncertainties are found to be the absolute optical efficiency of the Cherenkov light production and detection in the DOMs [27], the optical properties (absorption, scattering) of the South Pole ice [38], and the photo-nuclear interaction cross sections of high energy muons [39][40][41][42][43][44][45].
The systematic uncertainty on the sensitivity flux normalization is evaluated by propagating changed input values for the optical efficiency, ice properties and cross sections through the entire likelihood analysis for a signal energy spectrum of dN/dE_ν ∝ E_ν^−2. Changing the optical efficiency by ±10% results in a change of the flux normalization by ±7.5%. The ice properties have been varied by (+10%, 0%), (0%, +10%) and (−7.1%, −7.1%) in the values of the absorption and scattering length; the resulting uncertainty of the flux normalization is ±5.3%. To study the effect of the photo-nuclear interactions of high-energy muons, the models in Refs. [39][40][41][42][43][44][45] have been used, which give a flux normalization variation of ±5.1%. Note that these models are outdated and represent the extreme cases from the common literature; thus, the systematic uncertainty is estimated conservatively. The systematic uncertainties are assumed to be independent and are added in quadrature, yielding a total systematic uncertainty of ±10.5% for the ν_µ + ν̄_µ flux normalization. Note that, additionally, the modeling of point-like sources yields an uncertainty of about ±8%, as discussed in Section 3.2.
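The quoted total follows directly from adding the three independent components in quadrature, which is easy to verify:

```python
import math

# Quadrature sum of the independent systematic uncertainties (in percent).
components = {
    "optical efficiency": 7.5,
    "ice properties": 5.3,
    "photo-nuclear cross section": 5.1,
}
total = math.sqrt(sum(v ** 2 for v in components.values()))
print(f"total systematic: {total:.1f}%")  # prints "total systematic: 10.5%"
```

The ±8% source-modeling uncertainty is kept separate in the text because it concerns the signal injection rather than the detector response.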
Since the sample is assumed to consist purely of muon-neutrino and muon anti-neutrino events, only ν_µ + ν̄_µ fluxes are considered. However, ν_τ and ν̄_τ may also contribute to the observed astrophysical neutrinos in the data sample. Taking ν_τ and ν̄_τ fluxes into account and assuming an equal flavor ratio at Earth, the sensitivity of the per-flavor flux normalization improves, depending on the declination, by 2.6% to 4.3%. The expected contamination from ν_e and ν̄_e is negligible.
The relative systematic uncertainty is comparable with the systematic uncertainties quoted in previous publications of time-integrated point source searches by IceCube [16]. In addition, the systematic effect due to the chosen finite bandwidth is included in this analysis.

Results
No significant clustering was found in any of the hypotheses tests beyond the expectation from background. Both the full-sky scan of the Northern hemisphere and the p-values from the source list are compatible with pure background. The p-values given in this section are calculated by pseudoexperiments based on Monte Carlo simulation weighted to the best-fit parametrization of the sample (see Section 3.2). For verification, p-values calculated by pseudo-experiments from scrambled experimental data are given in brackets.

Sky scan
The pre-trial p-value map of the Northern hemisphere scan is shown in Fig. 6. The hottest spot in the scan is indicated by a black circle; it is located at α = 177.89° and is shown in detail in Fig. 7. Events are shown as small circles whose area is proportional to the median log 10 of the neutrino energy assuming the diffuse best-fit spectrum. The closest gamma-ray source from the Fermi 3FGL and Fermi 3FHL catalogs [46,47] is 3FHL J1150.3+2418, which is about 1.1° away from the hottest spot. The chance probability to find a 3FGL or 3FHL source within 1.1°, estimated from all-sky pseudo-experiments, is 25%. At the source location of 3FHL J1150.3+2418, the TS value is 8.02, which is inconsistent with the best-fit point at the 3.6 σ level when assuming Wilks' theorem with one degree of freedom [48].

Population test in the sky scan
In Fig. 5, the number of spots with p-values below a threshold p thres is shown together with the expectation from background. The most significant deviation is found for p thres = 0.5%, where 454.3 spots are expected and 492 are observed, corresponding to a p-value of p poisson = 4.17%. Correcting this result for trials gives a p-value of 42.0% (54.3%); the result is thus compatible with background.
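The pre-trial Poisson probability for such an excess of spots can be sketched as follows (the expectation of 454.3 spots is taken from the text; the trial correction itself requires pseudo-experiments):

```python
import math

def poisson_sf(k_obs, lam):
    """P(X >= k_obs) for X ~ Poisson(lam), via the complement of the CDF.
    The PMF is accumulated in log space for numerical stability."""
    log_pmf = -lam  # log P(X = 0)
    cdf = math.exp(log_pmf)
    for k in range(1, k_obs):
        log_pmf += math.log(lam / k)
        cdf += math.exp(log_pmf)
    return 1.0 - cdf

# 492 spots observed below p_thres = 0.5%, with 454.3 expected:
p = poisson_sf(492, 454.3)
print(f"p_poisson = {p:.2%}")
```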
As no significant deviation from the background hypothesis has been observed, exclusion limits are calculated as 90% CL upper limits with Neyman's method [50] for the benchmark scenario of a fixed number of sources N sources, all producing the same flux at Earth. The upper limits are calculated assuming that the background consists of atmospheric neutrinos only, i.e. excluding an astrophysical component from the background pseudo-experiments. Excluding the astrophysical component is necessary because, for large N sources, the summed injected flux makes up a substantial part of the astrophysical flux; however, this over-estimates the flux sensitivity for small N sources. More realistic source scenarios are discussed in Section 5. This admittedly simplified scenario does not depend on astrophysical and cosmological assumptions about source populations and allows a direct comparison of the power of different analyses. The sensitivity and upper limits for N sources sources are shown in Fig. 8 together with the analyses from [16,49] (footnote 4). This analysis finds the most stringent exclusion limits to date for small numbers of sources. The gain in sensitivity compared to Ref. [16] is consistent with the gain in sensitivity to a single point source.
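The logic of a Neyman upper limit can be illustrated with a toy counting experiment (a sketch only; the analysis performs the construction with the full unbinned likelihood and pseudo-experiments): the limit is the smallest signal strength for which the observed outcome, or a less signal-like one, occurs in at most 10% of trials.

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for N ~ Poisson(mu)."""
    term = math.exp(-mu)
    total = term
    for k in range(1, n + 1):
        term *= mu / k
        total += term
    return total

def neyman_upper_limit(n_obs, background, cl=0.90, tol=1e-6):
    """Smallest signal s with P(N <= n_obs | b + s) <= 1 - cl, by bisection."""
    lo, hi = 0.0, 100.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_obs, background + mid) > 1.0 - cl:
            lo = mid   # still too probable to see so few events: more signal
        else:
            hi = mid
    return hi

# Zero observed events, zero background: the classic 90% CL upper
# limit of ln(10) ~ 2.30 expected signal events is recovered.
print(neyman_upper_limit(n_obs=0, background=0.0))  # ~2.303
```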

A priori source list
The fit results of sources in the a priori source list are given in Tab. 2. The most significant source, with a local p-value of 0.8%, is 4C 38.41, a flat spectrum radio quasar (FSRQ) at redshift z = 1.8. Taking into account that 34 sources have been tested, a post-trial p-value of 23.7% (20.3%) is calculated from background pseudo-experiments, which is compatible with background.
As no significant source has been found, 90% CL upper limits are calculated with Neyman's method [50], assuming an unbroken power law with spectral index −2. The 90% CL upper limit fluxes are summarized in Tab. 2 and shown in Fig. 9. In case of under-fluctuations, the limit is set to the sensitivity level of the analysis. Note that 90% upper limits can exceed the discovery potential as long as the best-fit flux is below the discovery potential.
Interestingly, a total of three sources, 4C 38.41, MGRO J1908+06 and Cyg A, have a local p-value below or close to 1%. The p-value landscapes and observed events around these three sources are shown in Fig. 10 and Fig. 11.

Population test in the a priori source list
The most significant combination of p-values from the a priori source list is obtained when combining the three most significant p-values, i.e. k = 3, with 2.59σ, as shown in Fig. 12. The comparison with background pseudo-experiments yields a trial-corrected p-value of 6.6% (4.1%), which is not significant.

Footnote 4: The 90% CL upper limit from Ref. [16] has been recalculated to account for an incorrect treatment of signal acceptance in the original publication.

Fig. 9 Sensitivity (dashed) and 5σ discovery potential (solid) of the flux normalization for an E −2 source spectrum as a function of sin δ. For comparison, the lines from [16] are shown as well. 90% CL Neyman upper limits on the flux normalization for sources in the a priori and monitored source lists are shown as circles and squares, respectively.

Monitored source list
The best-fit results for TXS 0506+056 in the monitored source list are given in Tab. 3. Note that the event selection ends in May 2017 and thus does not include the time of the alert ICECUBE-170922A [51] that led to follow-up observations and the discovery of γ-ray emission from that blazar up to 400 GeV. The data, however, include the earlier time period of the observed neutrino flare. The local p-value here is found to be 2.93%. This is less significant than the reported significance of the time-dependent flare in [8], but it is consistent with the time-integrated significances reported in [8] when taking into account that this analysis uses a prior on the spectral index of the source flux and does not cover the same time range as [8].
The local p-value landscape around TXS 0506+056 is shown in Fig. 11 together with the observed event directions of this sample.

Implications on source populations
The non-detection of a significant point-like source and the non-detection of a population of sources within the sky scan are used to put constraints on realistic source populations. In the following calculation, source populations are characterized by their effective ν µ + ν̄ µ single-source luminosity L eff and their local source density ρ eff 0. Using the software tool FIRESONG (footnote 5) [52], the source count distribution dN/dΦ as a function of the flux Φ is calculated for source populations within z < 10, and representations of these populations are simulated. To calculate the source count distribution, FIRESONG takes into account the source density ρ, the luminosity distribution, the source evolution, the cosmological parameters, the energy range of the flux and the spectral index. Following Ref. [53], sources are simulated with a log-normal luminosity distribution with median L eff and a width of 0.01 in log 10 (L eff), which corresponds to a standard-candle luminosity. The evolution of the sources is chosen to follow the parametrization of the star formation rate from Hopkins and Beacom [54], assuming a flat universe with Ω M,0 = 0.308, Ω Λ,0 = 0.692 and h = 0.678 [55]. The energy range of the flux at Earth is chosen as 10^4 GeV to 10^7 GeV to calculate the effective muon-neutrino luminosities of the sources.

Footnote 5: FIRst Extragalactic Simulation Of Neutrinos and Gamma-rays (FIRESONG), https://github.com/ChrisCFTung/FIRESONG
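The conversion from a standard-candle luminosity to a flux at Earth, which underlies the source count distribution, can be sketched for the quoted cosmology (a simplified version without K-correction; FIRESONG performs the full population simulation):

```python
import math

# Flat LambdaCDM parameters used in the text (Omega_M = 0.308, h = 0.678).
C_KM_S = 299792.458        # speed of light [km/s]
H0 = 67.8                  # Hubble constant [km/s/Mpc]
OMEGA_M, OMEGA_L = 0.308, 0.692
MPC_CM = 3.0857e24         # 1 Mpc in cm

def comoving_distance(z, steps=10000):
    """Comoving distance [Mpc] by trapezoidal integration of c / H(z')."""
    dz = z / steps
    integral = 0.0
    for i in range(steps + 1):
        zp = i * dz
        e = math.sqrt(OMEGA_M * (1 + zp) ** 3 + OMEGA_L)
        w = 0.5 if i in (0, steps) else 1.0
        integral += w / e * dz
    return C_KM_S / H0 * integral

def flux_at_earth(luminosity_erg_per_s, z):
    """Energy flux [erg cm^-2 s^-1] from a standard candle at redshift z."""
    d_l = (1 + z) * comoving_distance(z) * MPC_CM  # luminosity distance [cm]
    return luminosity_erg_per_s / (4 * math.pi * d_l ** 2)
```

Summing `flux_at_earth` over sources drawn from a density and evolution model yields the source count distribution dN/dΦ that the limits are expressed against.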
Generating pseudo-experiments with signal components corresponding to the flux distribution obtained from FIRESONG, 90% CL upper limits are calculated in the plane of ρ eff 0 versus L eff for various spectral indices, assuming that the background consists of atmospheric neutrinos only, as described in Section 4.2. One set of 90% CL upper limits follows from the requirement that the strongest source of a population does not produce a more significant p-value in the sky scan than the observed one; these are shown as dashed lines in Fig. 13. In addition, 90% CL upper limits are calculated by comparing with the largest excess measured in the population test in the sky scan; these are shown as solid lines in Fig. 13. Populations that are compatible at the 1σ and 3σ level with the diffuse flux measured in [7] are shown as a blue shaded band. The 90% CL upper limits have been calculated assuming an E −2 power-law flux; the same has been done for an E −2.19 power-law flux, the diffuse best fit for this sample (this result can be found in the supplementary material). The computation of the upper limits becomes very computing-intensive for large source densities; therefore, the upper limits resulting from the sky scan are extrapolated to larger source densities (indicated by the dotted line in Fig. 13). For large effective source densities and small effective luminosities, the limit resulting from the population analysis scales ∝ 1/L eff, which is the same scaling as expected from a diffuse flux. Indeed, it is found that an excess of diffuse high-energy events, i.e. sources from which only one neutrino is detected, leads to a p-value excess in the population analysis. This is a result of taking the energy of the events into account in the likelihood.
Limits from the hottest spot in the sky scan are slightly stronger for large effective luminosities, while upper limits from the population test become stronger at about L eff ∼ 10^52 erg/yr.

Implications for individual source models
In Section 4.3, constraints on source fluxes assuming dN/dE ν ∝ E ν −2 have been calculated. However, more specific neutrino flux models can be constructed using γ-ray data: in pion decays, both neutrinos and γ-rays are produced, so under certain assumptions γ-ray data can be used to model the neutrino emission. Here, models for sources of the a priori source list are tested. For each model, the Model Rejection Factor (MRF) is calculated, i.e. the ratio of the 90% CL upper limit to the predicted flux. In addition, the expected result in the case of pure background is calculated, giving the MRF sensitivity. The energy range that contributes 90% to the sensitivity has been calculated by folding the differential discovery potential at the source position (similar to Fig. 4) with the flux prediction. Models for which the MRF sensitivity is larger than 10 are not discussed here.
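The two quantities defined here can be sketched compactly (hypothetical numbers; the analysis folds the actual differential discovery potential with each model prediction):

```python
def model_rejection_factor(upper_limit_flux, model_flux):
    """MRF: ratio of the 90% CL upper limit to the predicted model flux.
    MRF < 1 means the model is excluded at 90% CL; the MRF sensitivity
    uses the median background-only upper limit instead."""
    return upper_limit_flux / model_flux

def central_energy_range(energies, contributions, fraction=0.90):
    """Energy bounds bracketing the central `fraction` of the summed
    contribution (e.g. the differential discovery potential folded with
    the model flux, binned in energy)."""
    total = sum(contributions)
    lo_target = 0.5 * (1.0 - fraction) * total
    hi_target = total - lo_target
    cum, lo, hi = 0.0, None, None
    for e, c in zip(energies, contributions):
        cum += c
        if lo is None and cum >= lo_target:
            lo = e
        if hi is None and cum >= hi_target:
            hi = e
    return lo, hi

# Hypothetical example: an upper limit a factor ~5 above the prediction.
print(model_rejection_factor(5.0e-13, 1.0e-13))  # ~5
```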
The first source tested is the Crab Nebula, a Pulsar Wind Nebula (PWN) and the brightest source in TeV γ-rays. Despite the common understanding that the emission from PWNe is of leptonic nature (see e.g. [61]), hadronic neutrino emission has been predicted [57]. The prediction by Amato et al. assumes that pion production is dominated by p-p interactions and that the target density is given by n t = 10 µ M N R pc −3 cm −3, with M N the mass of the supernova ejecta in units of solar masses, R pc the radius of the supernova remnant in units of pc, and µ an unknown factor of order 1 ≤ µ ≤ 20 that accounts e.g. for the intensity and structure of magnetic fields within the PWN. Here, µ = 20 and a proton luminosity of 60% of the total PWN luminosity are used for Lorentz factors of Γ = 10^4, 10^5, 10^6, 10^7, to provide a result that is model-independent and complementary to [56]. The model prediction by Kappes et al. assumes a dominant production of the γ-rays of the HESS γ-ray spectrum [62] by p-p interactions.
The model predictions, sensitivities and 90% CL upper limits are shown in Fig. 14. Another very interesting class of sources are active galactic nuclei (AGN). Here, the tested models come from Ref. [59] for Mrk 421, a BL Lacertae object (BL Lac) that was found in spatial and energetic agreement with a high-energy starting event, and from Ref. [58] for 3C273 and 3C454.3, which are flat spectrum radio quasars (FSRQs). The models, sensitivities and 90% CL upper limits are shown in Fig. 15. As one of these sources is among the few with a local p-value below ∼ 2.5%, its 90% CL upper limit is much larger.
Another tested model was derived for the source G40.5-0.5, a galactic supernova remnant [60]. This supernova remnant can be associated with the TeV source MGRO J1908+06, the second most significant source in the a priori source catalog, although the association of G40.5-0.5 with MGRO J1908+06 is not unambiguous [63]. In addition, the pulsar wind nebula powered by PSR J1907+0602 may contribute to the TeV emission of the MGRO J1908+06 region. Here, the tested model for the SNR G40.5-0.5 is adopted from Ref. [60]. The model, sensitivity and 90% CL upper limit are shown in Fig. 16 and listed in Tab. 4. The sensitivity of this analysis is a factor of 1.4 above the model prediction, i.e. the analysis is not yet sensitive to this model. As MGRO J1908+06 is the second most significant source in the catalog, with a local p-value of < 1%, the upper limit lies nearly a factor of five above the model prediction.

Conclusions
Eight years of IceCube data have been analyzed for a time-independent clustering of through-going muon neutrinos using an unbinned likelihood method. The analysis includes a full-sky search of the Northern hemisphere down to a declination of −3° for a significant hot spot as well as an analysis of a possible cumulative excess of a population of weak sources. Furthermore, source candidates from an a priori catalog and a catalog of monitored sources are tested individually and again for a cumulative excess.
The analysis method has been optimized for the observed energy spectrum of high-energy astrophysical muon neutrinos [6], and a number of improvements with respect to the previously published search [16] have been incorporated. By implementing these improvements, a sensitivity increase of about 35% has been achieved.
No significant source was found in the full-sky scan of the Northern hemisphere, and the search for significant neutrino emission from objects on an a priori source list results in a post-trial p-value of 23.7% (20.3%), compatible with background. The tests for populations of sub-threshold sources also revealed no significant excess.
Three sources on the a priori source list, 4C 38.41, MGRO J1908+06 and Cyg A, have pre-trial p-values of only about 1%. However, these excesses are not significant. The source TXS 0506+056 in the catalog of monitored sources has a p-value of 2.9%. This is consistent with the time-integrated p-value in [8] for the assumed prior on the spectral index.
Based on these results, the most stringent limits on high-energy neutrino emission from point-like sources are obtained. In addition, models for neutrino emission from specific sources are tested. The model [56] for the Crab Nebula is excluded for Γ ≥ 10^6, as are the predictions for 3C273 [58] and Mrk 421 [59]. Beyond these specific models, exclusions of source populations as a function of local source density and single-source luminosity are derived by calculating the source count distribution for a realistic cosmological evolution model.
The quality and statistical power of a sample, with respect to a search for point-like sources, can be characterized by the effective area for muon-neutrino and anti-neutrino detection, the point spread function and the central 90% energy range (see Section 2). As the data were taken with different partial configurations of IceCube, the details of the event selections differ for each season. In Fig. 1 the livetime average of all sub-samples is shown, while Fig. 17 shows the effective area, point spread function and central 90% energy range for each sub-sample individually. Despite the different detector configurations and event selections, the characteristics of the event samples are similar.

Appendix B: Results for diffuse best-fit spectral index

An E −2 power law is often used as a benchmark model and for comparisons between publications. However, the diffuse best-fit spectral index is γ = 2.19, which, given the uncertainties, is not consistent with γ = 2. Therefore, the sensitivity and discovery potential for single point sources are recalculated using this spectral index. In Fig. 18, the sensitivity and discovery potential for an E −γ spectrum are shown for γ = 2.0 and γ = 2.19. The flux normalization is evaluated at a pivot energy of 100 TeV. The sensitivity and discovery potential for the two spectral indices turn out to be very similar. In addition, the 90% CL upper limits on source populations described in Section 5 are recalculated for a spectral index of γ = 2.19. These upper limits are shown in Fig. 19. Comparing with Fig. 13, there is no strong indication of a dependence on the spectral index.
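Rescaling a power-law flux normalization to a different pivot energy, such as the 100 TeV pivot used above, follows directly from the unbroken power-law form (a sketch with a hypothetical normalization value):

```python
def rescale_powerlaw_norm(phi0, e0, e_new, gamma):
    """Normalization of dN/dE = phi0 * (E/e0)**-gamma re-expressed at
    pivot energy e_new: phi0' = phi0 * (e_new/e0)**-gamma."""
    return phi0 * (e_new / e0) ** (-gamma)

# Hypothetical normalization at a 1 TeV pivot [TeV^-1 cm^-2 s^-1]:
phi_1TeV = 3e-13
phi_100TeV = rescale_powerlaw_norm(phi_1TeV, 1.0, 100.0, 2.0)

# For gamma = 2, the quantity E^2 dN/dE is pivot-independent:
print(100.0**2 * phi_100TeV, 1.0**2 * phi_1TeV)  # both ~3e-13
```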