Search for heavy Majorana neutrinos in same-sign dilepton channels in proton-proton collisions at $\sqrt{s} =$ 13 TeV

A search is performed for a heavy Majorana neutrino (N), produced by leptonic decay of a W boson propagator and decaying into a W boson and a lepton, with the CMS detector at the LHC. The signature used in this search consists of two same-sign leptons, in any flavor combination of electrons and muons, and at least one jet. The data were collected during 2016 in proton-proton collisions at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 35.9 fb$^{-1}$. The results are found to be consistent with the expected standard model backgrounds. Upper limits are set in the mass range between 20 and 1600 GeV in the context of a Type-I seesaw mechanism, on |$V_\mathrm{eN}$|$^2$, |$V_{\mu\mathrm{N}}$|$^2$, and |$V_{\mathrm{eN}}$$V^*_{\mu\mathrm{N}}$|$^2$ / (|$V_\mathrm{eN}$|$^2$ + |$V_{\mu\mathrm{N}}$|$^2$), where $V_{\ell\mathrm{N}}$ is the matrix element describing the mixing of N with the standard model neutrino of flavor $\ell =$ e, $\mu$. For N masses between 20 and 1600 GeV, the upper limits on |$V_{\ell\mathrm{N}}$|$^2$ range between 2.3 $\times$ 10$^{-5}$ and unity. These are the most restrictive direct limits for heavy Majorana neutrino masses above 430 GeV.


Introduction
The observation of neutrino oscillations [1], a mixing between several neutrino flavors, established that at least two of the standard model (SM) neutrinos have nonzero masses and that individual lepton number is violated. The nonzero masses of the neutrinos are arguably the first evidence for physics beyond the SM. Upper limits on the neutrino masses have been established from cosmological observations [1], as well as direct measurements, including those of tritium decays [2,3]. The extremely small values of these masses are difficult to explain in models that assume neutrinos to be Dirac particles [4,5].
The leading theoretical candidate to explain neutrino masses is the so-called "seesaw" mechanism [6][7][8][9][10][11][12][13][14][15][16][17][18][19], in which a new heavy Majorana neutrino N is postulated. In the seesaw mechanism, the observed small neutrino masses, m_ν, result from the large mass of N, with m_ν ∼ y_ν^2 v^2/m_N. Here y_ν is a Yukawa coupling, v is the Higgs vacuum expectation value in the SM, and m_N is the mass of the heavy-neutrino state. One model that incorporates the seesaw mechanism, and can be probed at the LHC, is the neutrino minimal standard model (νMSM) [20][21][22][23]. In this model, the existence of new heavy neutrinos could not only explain the very small masses of the SM neutrinos, but also provide solutions to other problems in cosmology, such as the origin of dark matter or the matter-antimatter asymmetry of the early universe [22,23].
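As an order-of-magnitude illustration (not taken from the paper), the seesaw relation m_ν ∼ y_ν^2 v^2/m_N can be evaluated numerically; v is the SM Higgs vacuum expectation value, while the Yukawa coupling y_ν and the heavy mass m_N chosen below are free parameters picked only to show the scale:

```python
# Illustrative seesaw scaling m_nu ~ y_nu^2 * v^2 / m_N (order of magnitude only).
V_HIGGS_GEV = 246.0  # SM Higgs vacuum expectation value

def seesaw_m_nu_eV(y_nu, m_N_GeV):
    """Light-neutrino mass in eV implied by the seesaw relation."""
    m_nu_GeV = (y_nu ** 2) * V_HIGGS_GEV ** 2 / m_N_GeV
    return m_nu_GeV * 1e9  # convert GeV -> eV

# A Yukawa of order one with m_N near a grand-unification scale
# gives a sub-eV light neutrino, as the text describes:
print(seesaw_m_nu_eV(1.0, 1e15))  # ~0.06 eV
```

A TeV-scale m_N, as probed in this search, would instead require a very small y_ν to reproduce sub-eV light-neutrino masses.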
In this paper, we present the results of a search for a heavy Majorana neutrino in the νMSM, which incorporates new heavy-neutrino states without additional vector bosons. Searches for heavy Majorana neutrinos at hadron colliders have been proposed in many theoretical studies [24][25][26][27][28]. Numerous experiments have looked for heavy neutrinos in the mass range from several keV to some hundred GeV, with no evidence seen, and a summary of the limits on |V_ℓN|^2 versus m_N for these experiments is given in Ref. [29], where V_ℓN is a matrix element describing the mixing between the heavy neutrino and the SM neutrino of flavor ℓ = e, µ, or τ. Direct searches for heavy neutrinos have been performed at the CERN LEP collider [30][31][32] and, more recently, at the CERN LHC [33][34][35][36][37]. These searches use a model-independent phenomenological approach, assuming that m_N and V_ℓN are free parameters.
The searches performed by the DELPHI [30] and L3 [31,32] Collaborations at LEP looked for the e+e− → Nν process, where ν is any SM neutrino. For ℓ = µ, τ the limits on |V_ℓN|^2 were set for m_N < 90 GeV, while for ℓ = e the limits extend to m_N < 200 GeV. Several experiments obtained limits for low neutrino masses (m_N < 5 GeV), including the LHCb Collaboration [33] at the LHC, which set limits on the mixing of a heavy neutrino with an SM muon neutrino. The searches by L3, DELPHI, and LHCb include the possibility of a finite heavy-neutrino lifetime, such that N decays with a vertex displaced from the interaction point. In the search reported here, however, it is assumed that N decays close to the point of production, since in the mass range of this search (m_N > 20 GeV) the decay length is expected to be less than 10^−10 m [38].
This search probes the decay of a W boson, in which an SM neutrino oscillates into a new state N. In this analysis, only ℓ = e or µ processes are considered. In the previous CMS analyses [34,35], only the Drell-Yan (DY) production of N (q q̄′ → W* → N ℓ± → ℓ± ℓ± q q̄′), shown in Fig. 1 (left), was considered, while in this study the photon-initiated production of N (qγ → W q′ → N ℓ± q′ → ℓ± ℓ± q q̄′ q′), as shown in Fig. 1 (right), is also taken into account. The diagram in Fig. 1 (right) shows a possible production of N via Wγ fusion, which we refer to by the generic term vector boson fusion (VBF). The inclusion of the VBF channel enhances the sensitivity of this analysis for N masses above several hundred GeV [39], where the t-channel photon-initiated processes become the dominant production mechanism for W* → N ℓ [39,40].
Since N is a Majorana particle and can decay to a lepton of equal or opposite charge to that of its parent W boson, both opposite- and same-sign (SS) lepton pairs can be produced. This search targets same-sign dilepton (SS2ℓ) signatures since these final states have very low SM backgrounds. We search for events where the N decays to a lepton and a W boson, and the W boson decays hadronically, as this allows the reconstruction of the mass of the N without the ambiguity associated with the longitudinal momentum of an SM neutrino. For the DY channel, the final state is ℓ+ ℓ+ q q̄′. The charge-conjugate decay chain also contributes and results in an ℓ− ℓ− q q̄′ final state. In the VBF channel, an additional forward jet is produced in the event.

Figure 1: Feynman diagrams for resonant production of a Majorana neutrino (N) via the s-channel Drell-Yan process (left), with its decay into a lepton and two quarks, resulting in a final state with two same-sign leptons and two quarks from a W boson decay, and for the photon-initiated process (right).
An observation of the ℓ± ℓ± q q̄′(q′) process would constitute direct evidence of lepton number violation. The study of this process in different dilepton channels improves the likelihood for the discovery of N, and constrains the mixing matrix elements. The dielectron (ee), dimuon (µµ), and electron-muon (eµ) channels are searched, allowing constraints to be set on |V_eN|^2, |V_µN|^2, and |V_eN V*_µN|^2/(|V_eN|^2 + |V_µN|^2), respectively [38]. In the eµ channel, the leptons from the W boson and the N decay can be either e and µ, or µ and e, respectively, so the branching fraction for this channel is twice as large as that for the ee or µµ channels.
The most recent CMS search for heavy Majorana neutrinos in events with two leptons and jets was performed for the mass range m_N = 40-500 GeV in the ee, µµ, and eµ channels at √s = 8 TeV [34,35]. A similar search was also performed by the ATLAS Collaboration in the ee and µµ channels [36]. The CMS Collaboration performed a search for heavy Majorana neutrinos in final states with three leptons using the 2016 data set [37], setting limits on |V_eN|^2 and |V_µN|^2 for the mass range m_N = 1-1200 GeV. In the case of trilepton channels, events that contain both an electron and a muon (eeµ, µµe) present an ambiguity about which of the leptons mixes with N, and it is thus impossible to probe |V_eN V*_µN|^2/(|V_eN|^2 + |V_µN|^2). This ambiguity is not present in the current analysis with dilepton channels, allowing limits to be set on |V_eN V*_µN|^2/(|V_eN|^2 + |V_µN|^2). The CMS analysis at √s = 8 TeV showed that the efficiency for signal events drops for masses above 400 GeV, as a consequence of the Lorentz-boosted topology of the decay products of N, which causes the signal jets to overlap and be reconstructed as a single jet. The signal efficiency can be recovered by including events containing a wide jet that is consistent with the process W → q q̄′, where the decay products of the W boson are merged into a single jet [41]. It was also observed that the signal efficiency dropped significantly when the mass of N was below the W boson mass (m_W). For the µµ channel, the signal acceptance was 0.65 (10.9)% for m_N = 60 (125) GeV. For m_N < m_W the final-state leptons and jets are very soft and fail the momentum requirements applied in the 8 TeV analysis. In the present analysis, cases where one of the signal jets fails the selection criteria are recovered by including events with only one jet.
In this paper, a new search for N in the ee, µµ, and eµ channels is presented using CMS data collected in 2016 at √s = 13 TeV. We search for events with two isolated leptons with the same electric charge, with the presence of either a) two or more jets, b) exactly one jet, or c) at least one wide jet. We look for an excess of events above the expected SM background prediction by applying selection criteria to the data to optimize the signal significance for each mass hypothesis. Heavy Majorana neutrinos with a mass in the range of 20 to 1700 GeV are considered. There are three potential sources of SS2ℓ backgrounds: SM sources in which two prompt SS leptons are produced (a prompt lepton is defined as an electron or muon originating from a W/Z/γ* boson or τ lepton decay), events resulting from misidentified leptons, and opposite-sign dilepton events (e.g., from Z → ℓ+ ℓ−, W± W∓ → ℓ+ ν ℓ− ν̄) in which the sign of one of the leptons is mismeasured. The last source is negligible for the µµ and eµ channels.

The CMS detector
The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter. The solenoid provides a magnetic field of 3.8 T along the direction of the counterclockwise rotating beam as viewed from above the plane of the accelerator, taken as the z axis of the detector coordinate system, with the center of the detector defined to be at z = 0. The azimuthal angle φ is measured in radians in the plane perpendicular to the z axis, while the polar angle θ is measured with respect to this axis. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. The ECAL provides a coverage in pseudorapidity |η| < 1.48 in the barrel region and 1.48 < |η| < 3.00 in the two endcap regions, where pseudorapidity is defined as η = − ln[tan(θ/2)]. Forward calorimetry extends the pseudorapidity coverage provided by the barrel and endcap detectors. Muons are detected in gas-ionization detectors, providing a coverage of |η| < 2.4, and are embedded in the steel flux-return yoke outside the solenoid. The first level of the CMS trigger system [42], composed of custom hardware processors, uses information from the calorimeters and muon detectors to select up to 100 kHz of the most interesting events. The high-level trigger (HLT) processor farm uses information from all CMS subdetectors to further decrease the event rate to roughly 1 kHz before data storage. A more detailed description of the CMS detector can be found in Ref. [43].
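The pseudorapidity definition quoted above, η = −ln[tan(θ/2)], is straightforward to evaluate; the helper below is an illustrative sketch (not CMS software) relating the polar angle θ to η:

```python
import math

def pseudorapidity(theta):
    """eta = -ln(tan(theta/2)), with theta in radians measured from the beam (z) axis."""
    return -math.log(math.tan(theta / 2.0))

# A particle perpendicular to the beam (theta = 90 degrees) has eta = 0;
# directions closer to the beam axis give larger |eta|.
print(pseudorapidity(math.pi / 2))   # ~0 (up to floating-point rounding)
print(pseudorapidity(0.1))           # forward direction, large eta
```

For example, the ECAL barrel edge at |η| = 1.48 corresponds to a polar angle of roughly 25 degrees from the beam axis.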

Simulated samples
Samples of simulated events are used to estimate the background from SM processes containing prompt SS leptons originating from hard-scattering processes and to determine the heavy Majorana neutrino signal acceptance and selection efficiency. The backgrounds from SM sources are produced using the MADGRAPH5 aMC@NLO 2.2.2 or 2.3.3 Monte Carlo (MC) generator [44] at leading order (LO) or next-to-leading order (NLO) in perturbative quantum chromodynamics (QCD), with the exception of gg → ZZ which is simulated at LO with MCFM 7.0 [45], and the diboson production processes (WZ and ZZ) that are generated at NLO with the POWHEG v2 [46][47][48][49] generator.
The NNPDF3.0 [50] LO (NLO) parton distribution function (PDF) sets are used for the simulated samples generated at LO (NLO). For all signal and background samples, showering and hadronization are described using the PYTHIA 8.212 [51] generator, with the CUETP8M1 [52] underlying event tune. The response of the CMS detector is modeled using GEANT4 [53]. Double counting of the partons generated with MADGRAPH5 aMC@NLO and PYTHIA is removed using the MLM [54] and FxFx [55] matching schemes in the LO and NLO samples, respectively.
The N signals are generated using MADGRAPH5 aMC@NLO 2.6.0 at NLO precision, where the decay of N is simulated with MADSPIN [56], following the implementation of Refs. [57,58]. This includes the production of N via the charged-current DY and VBF processes. For the charged-current DY production mechanism, we employ the NNPDF31 NNLO HESSIAN PDFAS PDF set [50], while to include the photon PDF in the VBF (Wγ fusion) mechanism we use the LUXQED17 PLUS PDF4LHC15 NNLO 100 PDF set [59]. The NLO cross section, obtained using the generator at √s = 13 TeV, for the DY (VBF) process has a value of 58.3 (0.050) pb for m_N = 40 GeV, dropping to 0.155 (9.65 × 10^−4) pb for m_N = 100 GeV, and to 9.92 × 10^−6 (1.69 × 10^−5) pb for m_N = 1000 GeV, assuming |V_ℓN|^2 = 0.01. The VBF process becomes the dominant production mode for scenarios where the mass of N is greater than ≈700 GeV. Only the final states with two leptons (electrons or muons) and jets are generated.
Additional pp collisions in the same or adjacent bunch crossings (pileup) are taken into account by superimposing minimum bias interactions simulated with PYTHIA on the hard-scattering process. The simulated events are weighted such that the distribution of the number of additional pileup interactions, estimated from the measured instantaneous luminosity for each bunch crossing, matches that in data. The simulated events are processed with the same reconstruction software as used for the data.

Event reconstruction and object identification
The reconstructed vertex with the largest value of summed physics-object p_T^2 is taken to be the primary pp interaction vertex, where p_T is the transverse momentum of the physics objects. Here the physics objects are the jets, clustered using the jet finding algorithm [60,61] with the tracks assigned to the vertex as inputs, and the associated missing transverse momentum, p_T^miss, which is defined as the magnitude of the vector sum of the momenta of all reconstructed particles in an event.
The global event reconstruction, based on the particle-flow algorithm [62], aims to reconstruct and identify each individual particle in an event, with an optimized combination of all subdetector information. In this process, the identification of the particle type (photon, electron, muon, charged hadron, neutral hadron) plays an important role in the determination of the particle direction and energy. Photons are identified as ECAL energy clusters not linked to the extrapolation of any charged-particle trajectory to the ECAL. Electrons are identified as primary charged-particle tracks and potentially many ECAL energy clusters corresponding to this track extrapolation to the ECAL and to possible bremsstrahlung photons emitted along the way through the tracker material. Muons are identified as tracks in the central tracker consistent with either a track or several hits in the muon system, with no significant associated energy deposits in the calorimeters. Charged hadrons are identified as charged-particle tracks identified neither as electrons nor as muons. Finally, neutral hadrons are identified as HCAL energy clusters not linked to any charged-hadron trajectory, or as ECAL and HCAL energy excesses with respect to the expected charged-hadron energy deposit.
The energy of photons is directly obtained from the ECAL measurement, corrected for zero-suppression effects. The energy of electrons is determined from a combination of the track momentum at the primary interaction vertex, the corresponding ECAL cluster energy, and the energy sum of all bremsstrahlung photons attached to the track. The energy of muons is obtained from the corresponding track momentum. The energy of charged hadrons is determined from a combination of the track momentum and the corresponding ECAL and HCAL energy, corrected for zero-suppression effects and for the response function of the calorimeters to hadronic showers. Finally, the energy of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL energies.

Lepton selection
Electron candidates are selected in the region |η| < 2.5, excluding the transition region 1.44 < |η| < 1.57. Their identification is based on a multivariate discriminant built from variables that characterize the shower shape and track quality. To reject electrons originating from photon conversions in the detector material, electrons must have no measurements missing in the innermost layers of the tracking system and must not be matched to any secondary vertex containing another electron [63]. To reduce the rate of electron sign mismeasurement, charges measured from independent techniques are required to be the same, using the "selective method" for the charge definition as explained in Ref. [63], which we refer to as "tight charge". Requiring the electrons to have tight charge reduces the signal efficiency by 1-20%, depending on m_N, while the background from mismeasured sign is reduced by a factor of 10. To ensure that electron candidates are consistent with originating from the primary vertex, the transverse (longitudinal) impact parameter of the leptons with respect to this vertex must not exceed 0.1 (0.4) mm. These electrons must also satisfy |d_xy|/σ(d_xy) < 4, where d_xy is the transverse impact parameter relative to the primary vertex, estimated from the track fit, and σ(d_xy) is its uncertainty.
Muons are selected in the range |η| < 2.4. The muon trajectory is required to be compatible with the primary vertex, and to have a sufficient number of hits in the tracker and muon systems. The transverse (longitudinal) impact parameter of the muons with respect to this vertex must not exceed 0.05 (0.40) mm. These muons must also satisfy |d_xy|/σ(d_xy) < 3.
To distinguish between prompt leptons (a prompt lepton is defined as an electron or muon originating in a W/Z/γ* boson or τ lepton decay) originating from decays of heavy particles, such as electroweak (EW) bosons or heavy neutrinos, and those produced in hadron decays or hadrons misidentified as leptons, a relative isolation variable (I_rel) is used. It is defined for electrons (muons) as the pileup-corrected [63,64] scalar p_T sum of the reconstructed charged hadrons originating from the primary vertex, the neutral hadrons, and the photons, within a cone of ΔR = √((Δη)^2 + (Δφ)^2) = 0.3 (0.4) around the lepton candidate's direction at the vertex, divided by the lepton candidate's p_T.
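The relative-isolation definition above can be sketched in a few lines. This is a bare-bones illustration with hypothetical dictionary-based inputs, not CMS code: the real definition additionally applies the pileup correction and restricts the charged-hadron sum to candidates from the primary vertex.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation DeltaR = sqrt((d_eta)^2 + (d_phi)^2), with phi wrapped to (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def relative_isolation(lepton, pf_candidates, cone=0.3):
    """Scalar pT sum of particle-flow candidates inside the cone, divided by the lepton pT.
    Sketch only: no pileup correction, no primary-vertex requirement."""
    iso_sum = sum(c["pt"] for c in pf_candidates
                  if delta_r(lepton["eta"], lepton["phi"], c["eta"], c["phi"]) < cone)
    return iso_sum / lepton["pt"]
```

A 50 GeV electron with 2 GeV of nearby activity would have I_rel = 0.04, passing the tight-electron threshold of 0.08 quoted below.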
Electrons that pass all the aforementioned requirements and satisfy I_rel^e < 0.08 are referred to as "tight electrons". Electrons that satisfy I_rel^e < 0.4, and pass less stringent requirements on the multivariate discriminant and impact parameter, are referred to as "loose electrons". Muons that pass all the aforementioned requirements and satisfy I_rel^µ < 0.07 are referred to as "tight muons". Muons that satisfy I_rel^µ < 0.6, and pass a less stringent requirement on the impact parameter and track quality requirements, are referred to as "loose muons". Electrons within ΔR < 0.05 of a muon are removed, as these are likely photons radiated from the muon.

Identification of jets and missing transverse momentum
For each event, hadronic jets are clustered from the reconstructed particle-flow objects with the infrared and collinear safe anti-k T jet clustering algorithm [60], implemented in the FASTJET package [65]. Two different distance parameters, R = 0.4 and 0.8, are used with this algorithm, producing objects referred to as AK4 and AK8 jets, respectively. The jet momentum is determined as the vector sum of all particle momenta in the jet, and is found from simulation to be within 5 to 10% of the true parton momentum over the entire p T spectrum and detector acceptance. Additional pp interactions within the same or nearby bunch crossings can contribute additional tracks and calorimetric energy depositions to the jet momentum. To mitigate this effect, tracks identified to be originating from pileup vertices are discarded and an offset correction is applied to correct for remaining contributions. Jet energy corrections are derived from simulation to bring the measured response of jets to that of particle level jets on average. In situ measurements of the momentum balance in dijet, photon+jet, Z+jet, and multijet events are used to estimate residual differences in jet energy scale in data and simulation, and appropriate corrections are applied [66]. The jet energy resolution is typically 15% at 10 GeV, 8% at 100 GeV, and 4% at 1 TeV. Additional selection criteria are applied to remove jets potentially dominated by anomalous contributions from various subdetector components or reconstruction failures. The AK4 (AK8) jets must have p T > 20 (200) GeV and |η| < 2.7 to be considered in the subsequent steps of the analysis. To suppress jets matched to pileup vertices, AK4 jets must pass a selection based on the jet shape and the number of associated tracks that point to non-primary vertices [67].
The AK8 jets are groomed using a jet pruning algorithm [68,69]: subsequent to the clustering of AK8 jets, their constituents are reclustered with the Cambridge-Aachen algorithm [70,71], where the reclustering sequence is modified to remove soft and wide-angle particles or groups of particles. This reclustering is controlled by a soft threshold parameter z_cut, which is set to 0.1, and an angular separation threshold ΔR > m_jet/p_T,jet. The jet pruning algorithm computes the mass of the AK8 jet after removing the soft radiation to provide a better mass resolution for jets, thus improving the signal sensitivity. The pruned jet mass is defined as the invariant mass associated with the four-momentum of the pruned jet.
In addition to the jet grooming algorithm, the "N-subjettiness" of jets [72] is used to identify boosted vector bosons that decay hadronically. This observable measures the distribution of jet constituents relative to candidate subjet axes in order to quantify how well the jet can be divided into N subjets. Subjet axes are determined by a one-pass optimization procedure that minimizes N-subjettiness [72]. The angular separations in the (η, φ) plane between all of the jet constituents and their closest subjet axes are then used to compute the N-subjettiness as

τ_N = (1/d_0) Σ_k p_T,k min(ΔR_1,k, ΔR_2,k, ..., ΔR_N,k),

with the normalization factor d_0 = Σ_k p_T,k R_0, where R_0 is the clustering parameter of the original jet, p_T,k is the transverse momentum of the k-th constituent of the jet, and ΔR_N,k = √((Δη_N,k)^2 + (Δφ_N,k)^2) is its distance to the N-th subjet axis. In particular, the ratio of τ_2 to τ_1, known as τ_21, has excellent capability for separating jets originating from boosted vector bosons from jets originating from quarks and gluons [72]. To select a high-purity sample of jets originating from hadronically decaying W bosons, the AK8 jets are required to have τ_21 < 0.6 and a pruned jet mass between 40 and 130 GeV. We refer to these selected jets as W-tagged jets. The efficiency of the τ_21 selection for AK8 jets is measured in a tt-enriched sample in data and simulation. To correct for observed differences between the estimated and measured efficiencies, a scale factor of 1.11 ± 0.08 is applied to the event for each AK8 jet that passes the τ_21 requirement in the simulation [67].
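The τ_N formula can be implemented directly once the subjet axes are given. The sketch below (illustrative only; in the paper the axes come from the one-pass minimization) uses (pt, eta, phi) tuples for constituents and (eta, phi) tuples for candidate axes:

```python
import math

def _delta_r(eta1, phi1, eta2, phi2):
    """DeltaR separation with the azimuthal difference wrapped to (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def tau_n(axes, constituents, R0=0.8):
    """N-subjettiness tau_N = (1/d0) * sum_k pT_k * min_i DeltaR(axis_i, k),
    with d0 = sum_k pT_k * R0. Constituents are (pt, eta, phi); axes are (eta, phi)
    and are taken as given rather than fitted."""
    d0 = sum(pt for pt, _, _ in constituents) * R0
    num = sum(pt * min(_delta_r(eta, phi, a_eta, a_phi) for a_eta, a_phi in axes)
              for pt, eta, phi in constituents)
    return num / d0
```

For a genuinely two-pronged jet (e.g., a boosted W → q q̄′), the constituents sit close to two axes, so τ_2 is much smaller than τ_1 and the ratio τ_21 = τ_2/τ_1 is small, which is the discriminating behavior exploited by the τ_21 < 0.6 requirement.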
Identifying jets originating from a bottom quark can help suppress backgrounds from tt production. To identify such jets the combined secondary vertex algorithm [73] is used. This algorithm assigns to each jet a likelihood that it contains a bottom hadron, using many discriminating variables, such as track impact parameters, the properties of reconstructed decay vertices, and the presence or absence of low-p T leptons. The average b tagging efficiency for jets above 20 GeV is 63%, with an average misidentification probability for light-parton jets of about 1%.
To avoid double counting due to jets matched geometrically with a lepton, any AK8 jet that is within ∆R < 1.0 of a loose lepton is removed from the event. Moreover, if an AK4 jet is reconstructed within ∆R < 0.4 of a loose lepton or within ∆R < 0.8 of an AK8 jet, it is not used in the analysis.
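The overlap-removal rules above are purely geometric and can be sketched as follows (an illustrative implementation with hypothetical dictionary inputs, not CMS code):

```python
import math

def _dr(a, b):
    """DeltaR between two objects given as dicts with 'eta' and 'phi' entries."""
    dphi = (a["phi"] - b["phi"] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(a["eta"] - b["eta"], dphi)

def clean_jets(ak4_jets, ak8_jets, loose_leptons):
    """Apply the lepton/jet overlap removal described in the text:
    drop AK8 jets within DeltaR < 1.0 of a loose lepton, then drop AK4 jets
    within DeltaR < 0.4 of a loose lepton or DeltaR < 0.8 of a surviving AK8 jet."""
    ak8 = [j for j in ak8_jets
           if all(_dr(j, l) >= 1.0 for l in loose_leptons)]
    ak4 = [j for j in ak4_jets
           if all(_dr(j, l) >= 0.4 for l in loose_leptons)
           and all(_dr(j, J) >= 0.8 for J in ak8)]
    return ak4, ak8
```

Note that the AK4 cleaning is applied against the surviving AK8 jets, so an AK4 jet overlapping only a lepton-vetoed AK8 jet is retained.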
The p_T^miss is adjusted to account for the jet energy corrections applied to the event [66]. The scalar sum of all activity in the event (S_T) is used in the signal region selection and is defined as the p_T sum of all AK4 and AK8 jets, leptons, and p_T^miss. The transverse mass, m_T, which is used as a requirement to suppress backgrounds from leptonic W boson decays, is defined as

m_T = √(2 p_T^ℓ p_T^miss [1 − cos Δφ(ℓ, p_T^miss)]),

where p_T^ℓ is the transverse momentum of the lepton and Δφ(ℓ, p_T^miss) is the azimuthal angle difference between the lepton momentum and the p_T^miss vector.
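The transverse mass used here is the standard definition m_T = √(2 p_T^ℓ p_T^miss [1 − cos Δφ]); a minimal implementation:

```python
import math

def transverse_mass(lepton_pt, met, dphi):
    """m_T = sqrt(2 * pT(lepton) * pT_miss * (1 - cos(dphi))),
    with dphi the azimuthal angle between the lepton and the missing-pT vector."""
    return math.sqrt(2.0 * lepton_pt * met * (1.0 - math.cos(dphi)))

# A back-to-back lepton and missing pT from an on-shell W -> l nu decay
# populate m_T up to the kinematic endpoint near m_W:
print(transverse_mass(40.0, 40.0, math.pi))  # 80.0
```

This endpoint behavior is what makes an m_T requirement effective against leptonic W boson decays, as noted in the text.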

Event selection
Events used in this search are selected using several triggers, requiring the presence of two charged leptons (e or µ). All triggers require two loosely isolated leptons, where the leading-(trailing-)p T lepton must have p T > 23 (12) GeV for the ee, p T > 17 (8) GeV for the µµ, and p T > 23 (8) GeV for the eµ trigger at the HLT stage. The offline requirements on the leading (trailing) lepton p T are governed by the trigger thresholds, and are p T > 25 (15) GeV for the ee, p T > 20 (10) GeV for the µµ, and p T > 25 (10) GeV for the eµ channels. The efficiency for signal events to satisfy the trigger in the ee, µµ, and eµ channels is above 0.88, 0.94, and 0.88, respectively.

Preselection criteria
At a preselection stage, events are required to contain a pair of SS leptons. To remove backgrounds with soft misidentified leptons, the invariant mass of the dilepton pair is required to be above 10 GeV. Dielectron events with an invariant mass within 20 GeV of the Z boson mass [1] are excluded to reject background from Z boson decays in which one electron sign is mismeasured. In order to suppress backgrounds from diboson production, such as WZ, events with a third lepton identified using a looser set of requirements and with p T > 10 GeV are removed. Preselected events are required to have at least one AK4 or one AK8 jet passing the full jet selection. The same preselection is applied in all three channels (ee, µµ, eµ).
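The preselection logic just described reduces to a handful of cuts; the sketch below summarizes them for a same-sign pair (illustrative pseudo-selection, with the Z window applied in the ee channel only, as in the text; the PDG Z mass is used):

```python
M_Z = 91.19  # GeV, Z boson mass

def passes_preselection(channel, m_ll, n_loose_leptons, n_jets):
    """Sketch of the dilepton preselection: an SS pair is assumed; require
    m(ll) > 10 GeV, veto the Z window in the ee channel (sign mismeasurement),
    veto events with a third loose lepton, and require at least one jet."""
    if m_ll <= 10.0:                                   # soft misidentified leptons
        return False
    if channel == "ee" and abs(m_ll - M_Z) < 20.0:     # Z -> ee with a mismeasured sign
        return False
    if n_loose_leptons > 2:                            # diboson (e.g., WZ) suppression
        return False
    return n_jets >= 1                                 # at least one AK4 or AK8 jet
```

Note that a dimuon pair at the Z mass passes, since the muon sign-mismeasurement rate is negligible.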

Selection criteria for signals
The kinematic properties of signal events from heavy-neutrino decays depend on the mass of N. To distinguish between the two W bosons involved in the production and decay sequence, we refer to the W boson that produces N in Fig. 1 (left) as the W boson propagator and the W boson that decays to a quark-antiquark pair as the hadronically decaying W boson. Two search regions (SRs) are defined. In the low-mass SR (m_N ≲ 80 GeV), the W boson propagator is on-shell and the final-state system of dileptons and two jets should have an invariant mass close to the W boson mass. In the high-mass SR (m_N ≳ 80 GeV), the W boson propagator is off-shell but the hadronically decaying W boson is on-shell, so the invariant mass of the jets from the hadronically decaying W boson will be consistent with the W boson mass.
Since the kinematic properties of the signal depend on m_N, we define four event categories to maximize the discovery potential over the full mass range. The low- and high-mass SRs are further split based on the jet configuration. The four signal categories used in the analysis are defined as:
• low- and high-mass SR1: number of AK4 jets ≥ 2 and number of AK8 jets = 0,
• low-mass SR2: number of AK4 jets = 1 and number of AK8 jets = 0,
• high-mass SR2: number of AK8 jets ≥ 1.
Taking the three flavor channels into account, the analysis has 12 separate SRs.
In each SR, the technique of selecting jets associated with the hadronic W boson decay is different. If there are any W-tagged AK8 jets in the event, the AK8 jet with pruned jet mass closest to m_W is assumed to be from the hadronic W boson decay. For the high-mass SRs, if there are two or more AK4 jets in the event and no AK8 jets, the two AK4 jets with the invariant mass closest to m_W are assigned to the hadronically decaying W boson. In the low-mass SRs, the W boson propagator is reconstructed from the N candidate (one lepton + jet(s)) and the additional lepton, and if there are more than two jets, the jets are selected such that the reconstructed mass is closest to m_W. If only one jet is reconstructed in the low-mass SR, then this jet is assigned as being from the hadronic W boson decay. The jet(s) assigned to the hadronic W boson decay are referred to by the symbol W_jet to simplify notation in the rest of the paper.
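The AK4 dijet assignment described above (pick the pair whose invariant mass is closest to m_W) can be sketched directly; the four-vector convention (E, px, py, pz) and the function names are illustrative, not from CMS software:

```python
from itertools import combinations

M_W = 80.4  # GeV, W boson mass used as the target

def invariant_mass(jets):
    """Invariant mass of summed four-vectors given as (E, px, py, pz) tuples."""
    e  = sum(j[0] for j in jets)
    px = sum(j[1] for j in jets)
    py = sum(j[2] for j in jets)
    pz = sum(j[3] for j in jets)
    return max(e * e - px * px - py * py - pz * pz, 0.0) ** 0.5

def pick_w_jets(ak4_jets):
    """Among all AK4 dijet pairs, return the pair whose invariant mass
    is closest to m_W (the high-mass-SR1 assignment described in the text)."""
    return min(combinations(ak4_jets, 2),
               key=lambda pair: abs(invariant_mass(pair) - M_W))
```

The same "closest to m_W" criterion is reused for the pruned-mass choice among W-tagged AK8 jets, with the pruned jet mass replacing the dijet invariant mass.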
Before optimizing the signal significance for each mass hypothesis, we apply a set of loose selections to the preselected events to define the low- and high-mass SRs. These requirements are chosen to remove a large fraction of the backgrounds while keeping the signal efficiency high. In the low-mass SRs, the invariant mass of the two leptons and W_jet is required to be less than 300 GeV. To remove backgrounds from leptonic W boson decays, events must have p_T^miss less than 80 GeV. To remove backgrounds from top quark decays, events are vetoed if they contain a b-tagged AK4 jet. In the high-mass SRs, the following selections are used. For SR1, the events are required to have 30 < m(W_jet) < 150 GeV for the invariant mass of the W_jet and p_T > 25 GeV for the leading AK4 jet. For SR2, the pruned jet mass must satisfy 40 < m(W_jet) < 130 GeV. Since the p_T^miss is correlated with the energy of the final-state objects, a direct p_T^miss requirement is not used in the high-mass SRs. Instead, we use (p_T^miss)^2/S_T, which has a stronger discriminating power between high-mass signals and backgrounds; it must be less than 15 GeV. These selections are summarized in Table 1.

Table 1: Selection requirements, after applying the preselection criteria, for the low- and high-mass signal regions. A dash indicates that the variable is not used in the selection.

Optimization of signal selection
After applying the selection criteria in Table 1, the signal significance is optimized by combining several different variables using a modified Punzi figure of merit [74]. The Punzi figure of merit is defined as ε_S/(a/2 + δB), where a is the number of standard deviations, set equal to 2 to be consistent with the previous CMS analysis, ε_S is the signal selection efficiency, and δB is the uncertainty in the estimated background. The signal regions are optimized separately for each mass hypothesis and for each of the three flavor channels.
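The figure of merit is a one-line computation; the sketch below (illustrative numbers, not from the paper) shows how it trades signal efficiency against background uncertainty:

```python
def punzi_fom(signal_eff, bkg_uncertainty, a=2.0):
    """Modified Punzi figure of merit: eps_S / (a/2 + delta_B), where a is the
    number of standard deviations (2 in this analysis) and delta_B is the
    absolute uncertainty in the estimated background."""
    return signal_eff / (a / 2.0 + bkg_uncertainty)

# A tighter cut is preferred when the reduction in background uncertainty
# outweighs the loss in signal efficiency:
loose = punzi_fom(0.30, 4.0)   # 0.30 / (1 + 4) = 0.06
tight = punzi_fom(0.25, 1.0)   # 0.25 / (1 + 1) = 0.125
```

Unlike S/√B, this criterion stays well defined when the expected background approaches zero, which is why it is suited to optimizing many narrow mass-hypothesis windows.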
The variables used to optimize the signal selection, which are all optimized simultaneously, are: the transverse momentum of the leading lepton, p_T^ℓ1, and of the trailing lepton, p_T^ℓ2; the invariant mass of the two leptons and the selected jet(s), m(ℓ±ℓ± W_jet); the angular separation between the W_jet and the trailing lepton, ΔR(ℓ2, W_jet); minimum and maximum requirements on the invariant mass of the lepton (leading or trailing) and the selected jet(s), m(ℓ_i W_jet), where i = 1, 2; and the invariant mass of the two leptons, m(ℓ±ℓ±). We consider the variable m(ℓ_i W_jet), as this should peak at m_N for the signal. Since it is not known which lepton comes from the N decay, the event is accepted if either m(ℓ_i W_jet) satisfies the requirements. The optimized window requirements for some SRs are enlarged to give complete coverage of the signal parameter space at negligible loss of sensitivity. The selection requirements for each mass hypothesis are summarized later in Section 8, in Tables 7-10, for both low- and high-mass SRs. The overall signal acceptance ranges from 0.10-0.27% for m_N = 20 GeV to 17-33% for m_N = 1500 GeV. The lower acceptance at low m_N is due to the selection requirements on the p_T of the leptons and jets in a signal with very soft jets and leptons. The overall signal acceptance includes trigger efficiency, geometrical acceptance, and efficiencies of all selection criteria.

Background estimate
The SM backgrounds leading to a final state with two SS leptons and jets are divided into the following categories:
• SM processes with multiple prompt leptons: These backgrounds are mainly from events with two vector bosons (W±W±, WZ, ZZ). We also consider as background a W or Z boson that decays leptonically and is accompanied by radiation of an initial- or final-state photon that subsequently undergoes an asymmetric conversion. These processes produce a final state that can have three or four leptons; if one or more of the charged leptons fail the reconstruction or selection criteria, they can appear to have only two SS leptons.
• Misidentified leptons: These are processes containing one or more leptons that are misidentified hadrons, or that come from heavy-flavor jets, light-meson decays, or a photon in a jet. Such leptons are generally less isolated than a prompt lepton from a W/Z boson decay and tend to have larger impact parameters. The main processes with a misidentified lepton in the SRs are W+jets and tt events, but multijet and DY events also contribute.
• Sign mismeasurement: If the sign of a lepton is mismeasured in events with jets and two opposite-sign leptons (OS2ℓ), these events can contaminate a search region. A lepton with a mismeasured sign will on average have a larger impact parameter than a lepton from a prompt EW boson decay. Although the rate of mismeasuring the sign of an electron is small, the abundance of OS2ℓ events from DY dilepton production makes this background significant; it is suppressed by tight requirements on the impact parameter and on the charge of the electron. The muon sign-mismeasurement rate is known to be negligible, based on studies in simulation and with cosmic ray muons [75], and is not considered in this analysis.

Background from prompt SS leptons
Background events that contain two prompt SS leptons are referred to as the prompt-lepton background and are estimated using simulation. To avoid double counting with the misidentified-lepton background estimate, which is based on control samples in data, the leptons are required to originate in the decay of a W/Z/γ* boson or a τ lepton. The largest contribution comes from WZ, ZZ, and asymmetric photon conversions, including those in Wγ and Zγ events. The background from WZ and Wγ* production, with W → ℓν and Z(γ^(*)) → ℓℓ, can yield the same signature as N production, two SS isolated leptons and jets, when one of the opposite-sign same-flavor (OSSF) leptons is not identified and QCD/pileup jets are reconstructed in the event. This is the largest prompt contribution in both the low- and high-mass SRs. It is estimated from simulation, with the simulated yield normalized to the data in a control region (CR) formed by selecting three tight leptons with p_T > 25, 15, 10 GeV and requiring an OSSF lepton pair with invariant mass m(ℓ±ℓ∓) consistent with the Z boson mass: |m(ℓ±ℓ∓) − m_Z| < 15 GeV. In addition, events are required to have p_T^miss > 50 GeV and m_T(ℓ_W, p_T^miss) > 20 GeV, where ℓ_W is the lepton not used in the OSSF pair consistent with the Z boson. The ratio of the predicted to observed WZ background yield in this CR is found to be 1.051 ± 0.065. This factor and its associated uncertainty (both statistical and systematic) are used to normalize the corresponding simulated sample. The systematic uncertainty in this factor is determined by varying, in the simulation, the properties listed in Section 7.2 by ±1 standard deviation from their central values.
Production of ZZ events in which both Z bosons decay leptonically and two leptons are not identified results in a possible SS2ℓ signature. This process is estimated from simulation, and the simulated yield is normalized in a CR containing four leptons that form two OSSF lepton pairs with invariant masses consistent with that of the Z boson. The ratio of data to simulation in the CR is found to be 0.979 ± 0.079 and is used to normalize the simulated ZZ sample. A Z boson p_T-dependent EW correction to the cross section [76-78] is not included in the simulated samples; it would change the cross section by at most 25%, given the range of Z boson p_T probed in this analysis. Since this correction is larger than the uncertainty in the ratio of data to simulation in the CR, we increase the normalization uncertainty to 25%.
External and internal photon conversions can produce an SS2ℓ final state when a photon is produced with a W or Z boson and undergoes an asymmetric external or internal conversion (γ^(*) → ℓ+ℓ−) in which one of the leptons has very low p_T and fails the lepton selection criteria. This background mostly contributes to the ee and eµ channels. It is obtained from simulation and verified in a data CR enriched in both external and internal conversions from the Z+jets process, with Z → ℓℓγ^(*) and γ^(*) → ℓℓ, where one of the leptons is outside the detector acceptance. The CR is defined by |m(ℓ±ℓ∓) − m_Z| > 15 GeV and |m(ℓ±ℓ∓ℓ±) − m_Z| < 15 GeV. The ratio of data to expected background in the CR is 1.093 ± 0.075, and this ratio is used to normalize the MC simulation.
Other rare SM processes that can yield two SS leptons include EW production of SS W boson pairs and double parton scattering, while any SM process that yields three or more prompt leptons produces an SS2ℓ final state if one or more of the leptons fails the selection. SM processes that can yield three or more prompt leptons include triboson production and tt production in association with a boson (ttW, ttZ, and ttH). Such processes generally have very small production rates (less than 10% of the total background after the preselection) and in some cases are further suppressed by the veto on b-tagged jets and the requirements on p_T^miss. They are estimated from simulation and assigned a conservative uncertainty of 50%, which accounts for the uncertainties due to experimental effects, event simulation, and theoretical calculations of the cross sections.

Background from misidentified leptons
The most important background source for low-mass signals originates from events containing objects misidentified as prompt leptons. These objects come from B hadron decays or from light-quark or gluon jets, and are typically not well isolated. Examples of these backgrounds include: multijet production, in which one or more jets are misidentified as leptons; W(→ ℓν)+jets events, in which one of the jets is misidentified as a lepton; and tt decays, in which one top quark decay yields a prompt isolated lepton (t → Wb → ℓνb) and the second, same-sign lepton arises from a bottom quark decay or from a jet misidentified as an isolated prompt lepton. The simulation is not reliable for estimating the misidentified-lepton background, both because of the lack of statistically large samples (a consequence of the small probability for a jet to be misidentified as a lepton) and because of inadequate modeling of the parton showering process. Therefore, these backgrounds are estimated using control samples of collision data.
An independent data sample enriched in multijet events (the "measurement" sample) is used to measure the probability that a jet passing minimal lepton selection requirements ("loose leptons") also passes the more stringent requirements used to define leptons after the full selection ("tight leptons"). This misidentification probability is applied as an event-by-event weight to the "application" sample, which contains events in which one lepton passes the tight selection while the other fails the tight selection but passes the loose selection, as well as events in which both leptons fail the tight selection but pass the loose criteria. The total contribution to the signal regions, i.e., the predicted number of events with both leptons passing the tight selection, is then obtained for each mass hypothesis by weighting the events in these two categories by the appropriate misidentification probability factors and applying the signal selection requirements to the application sample. The events with two loose-but-not-tight leptons enter with a weight that corrects for the double counting of the first category.
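A minimal sketch of this kind of misidentification-probability extrapolation is given below. It uses the common "fake rate" weights f/(1−f); this particular form of the weights is an assumption for illustration, not necessarily the exact implementation of the analysis, and the function names and numbers are hypothetical.

```python
def misid_weight_tl(f):
    """Weight for an event with one tight lepton and one loose-not-tight lepton,
    where f is the misidentification probability for the loose lepton."""
    return f / (1.0 - f)

def misid_weight_ll(f1, f2):
    """Events with two loose-not-tight leptons are subtracted with this weight
    to correct for double counting in the tight-loose category."""
    return -(f1 / (1.0 - f1)) * (f2 / (1.0 - f2))

def predicted_tt_yield(tl_events, ll_events):
    """Predict the number of events with two tight leptons from the
    application-sample categories; each event carries the misidentification
    probability(ies) measured in bins of lepton kinematics.

    tl_events: list of f values, one per tight-loose event
    ll_events: list of (f1, f2) pairs, one per loose-loose event
    """
    total = sum(misid_weight_tl(f) for f in tl_events)
    total += sum(misid_weight_ll(f1, f2) for (f1, f2) in ll_events)
    return total
```

With a 10% misidentification probability, a single tight-loose event contributes a weight of 0.1/0.9 ≈ 0.11, while each loose-loose event subtracts the doubly weighted term.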
The measurement sample is selected by requiring a loose lepton and a jet, resulting in events that are mostly dijet events, with one jet containing a lepton. Only one lepton is allowed, and requirements of p_T^miss < 80 GeV and m_T(ℓ, p_T^miss) < 25 GeV are applied. The loose lepton and the jet are required to be separated in azimuth by ∆φ > 2.5, and the momentum of the jet is required to be greater than the momentum of the lepton. These requirements suppress contamination from W and Z boson decays. Contamination of prompt leptons in the measurement sample from EW processes is estimated and subtracted using simulation. The normalization of the prompt-lepton simulation is validated in a data sample enriched in W+jets events, selected by requiring a single lepton, p_T^miss > 40 GeV, and 60 < m_T(ℓ, p_T^miss) < 100 GeV. The minimum uncertainty that covers the discrepancy between the data and simulation in single-lepton W+jets events (across all η and p_T bins considered in the analysis) is 30 (13)% for electrons (muons) and is assigned as the uncertainty in the prompt-lepton normalization. The larger uncertainty for prompt electron events accounts for the disagreement between data and simulation in single-electron W+jets events at high electron p_T.
The method is validated using a sample of simulated tt, W+jets, and DY events. The misidentification probabilities used in this validation are obtained from simulated events consisting of jets produced via the strong interaction, referred to as QCD multijet events. The predicted and observed numbers of events in the ee, µµ, and eµ channels agree within 10% for the W+jets and DY samples, and within 25% for the tt samples; the latter figure is reduced to 18% after rejecting events with a b-tagged jet.

Background from opposite-sign leptons
To estimate backgrounds due to sign mismeasurement, the probability of mismeasuring the lepton sign is studied. Only mismeasurement of the electron sign is considered, and this background is estimated only in the ee channel. The probability of mismeasuring the sign of a prompt electron is obtained from simulated Z → e±e∓ events and is parametrized as a function of p_T, separately for electrons in the barrel and endcap calorimeters. The average values and statistical uncertainties of the sign-mismeasurement probabilities are found to be (1.65 ± 0.12) × 10^−5 in the inner ECAL barrel region (|η| < 0.8), (1.07 ± 0.03) × 10^−4 in the outer ECAL barrel region (0.8 < |η| < 1.5), and (0.63 ± 0.01) × 10^−3 in the endcap region. The sign-mismeasurement probabilities are then validated with data, separately for the barrel and endcap regions.
To estimate the background due to sign mismeasurement in the ee channel, a weight W_p is applied to data events passing all the SR selections, except that here the leptons are required to be oppositely signed (OS2ℓ events). W_p is computed from P_1 and P_2, where P_1 (P_2) is the probability for the leading (trailing) electron sign to be mismeasured, as determined from simulated events. The p_T of a lepton with a mismeasured sign is misreconstructed; to correct for this in the OS2ℓ events, the lepton p_T is shifted up by 1.8%, a value determined from simulation.
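A rough sketch of this weighting is given below, under the simplifying assumption that W_p is the probability that at least one of the two electron signs is mismeasured; the analysis parametrizes the probabilities in p_T as well, which is omitted here, and only the η-region central values quoted above are hard-coded.

```python
def mismeasure_prob(abs_eta):
    """Central values of the sign-mismeasurement probabilities quoted in the
    text, by |eta| region (p_T dependence omitted in this sketch)."""
    if abs_eta < 0.8:
        return 1.65e-5   # inner ECAL barrel
    elif abs_eta < 1.5:
        return 1.07e-4   # outer ECAL barrel
    else:
        return 0.63e-3   # endcap

def os2l_weight(eta1, eta2):
    """Probability that at least one of the two electron signs is mismeasured:
    P1 + P2 - P1*P2, essentially P1 + P2 for probabilities this small."""
    p1 = mismeasure_prob(abs(eta1))
    p2 = mismeasure_prob(abs(eta2))
    return p1 + p2 - p1 * p2
```

Summing this weight over the OS2ℓ sample then yields the predicted same-sign contamination, before the barrel/endcap scale factors discussed below are applied.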
To validate the sign-mismeasurement probability for the barrel (endcap) region, a control sample of Z → e±e∓ events in the data is selected, requiring both electrons to pass through the barrel (endcap) region and the invariant mass of the electron pair to be between 76 and 106 GeV. The ratio of the observed to predicted numbers of e±e± events is used as a scale factor to account for the modeling in the simulation. The observed number of events in the data is determined by fitting the Z boson mass peak, and the predicted number by weighting the OS2ℓ events with the value W_p. The scale factors and their associated statistical uncertainties in the barrel and endcap regions are found to be 0.80 ± 0.03 and 0.87 ± 0.03, respectively.
To validate the combined sign-mismeasurement probability and scale factors in the data, a control sample of Z → e±e∓ events is again selected, as described above, but here requiring that one electron is found in the endcap and the other in the barrel region. The predicted and observed numbers of e±e± events in this sample differ by 12%. The same procedure was performed using Z → e±e∓ events in the data with no η restrictions on the electrons, requiring that the event has only one jet, and yielded agreement within 10% between the predicted background and the data.
Prompt leptons and backgrounds from sign mismeasurement can contaminate the application sample of the misidentified-lepton background, resulting in an overprediction of this background. This contamination is removed using simulation. The contamination from the prompt-lepton background is generally less than 1%. However, for the backgrounds from leptons with mismeasured sign or leptons from photon conversions, the contamination can be as large as 2% in the signal region and up to 30% in CR2, which is enriched in backgrounds with mismeasured lepton sign.

Validation of background estimates
To test the validity of the background estimation methods, several signal-free data CRs are defined. The background estimation method is applied in these regions and the results are compared with the observed yields. These CRs are used to validate the backgrounds separately in each of the three flavor channels and are defined as follows:
• CR1: (SS2ℓ), at least one b-tagged AK4 jet,
• CR2: (SS2ℓ), ∆R(ℓ1, ℓ2) > 2.5 and no b-tagged AK4 jet,
• CR3: (SS2ℓ), low-mass SR1 and either ≥ 1 b-tagged jet or p_T^miss > 100 GeV,
• CR4: (SS2ℓ), low-mass SR2 and either ≥ 1 b-tagged jet or p_T^miss > 100 GeV,

• CR5: (SS2ℓ), high-mass SR1 and either ≥ 1 b-tagged jet or (p_T^miss)^2/S_T > 20 GeV,
• CR6: (SS2ℓ), high-mass SR2 and either ≥ 1 b-tagged jet or (p_T^miss)^2/S_T > 20 GeV.
The numbers of predicted and observed background events in each CR are shown in Table 2. In CR1 and CR2, the backgrounds estimated from data are dominant and are validated in events both with and without b-tagged jets, while in the remaining CRs all backgrounds are validated in regions close to the SRs (the misidentified-lepton background accounts for about 90% of the total background in CR1 and CR2 and about 50% across the remaining CRs). The contribution from signal events is found to be negligible in all control regions: assuming a coupling consistent with the upper limits from previous results, signal accounts for less than 1% of the yields in most CRs and at most 5%. In all regions the predictions agree with the observations within the statistical and systematic uncertainties described in Section 7, which are dominated by the 30% uncertainty in the misidentified-lepton background. Within each region, the observed distributions of all relevant observables also agree with the predictions within the uncertainties.

Systematic uncertainties
The estimates of the backgrounds and signal efficiencies are subject to a number of systematic uncertainties. The relative sizes of these uncertainties for each type of background and signal, in each SR, are listed in Table 3. Table 4 shows the contributions from the uncertainties in the signal and backgrounds (for two mass hypotheses, m_N = 50 and 500 GeV), expressed as a percentage of the total uncertainty.
Table 3: Summary of the relative systematic uncertainties in heavy Majorana neutrino signal yields and in the background from prompt SS leptons, both estimated from simulation. The relative systematic uncertainties assigned to the misidentified-lepton and mismeasured-sign backgrounds, estimated from control regions in data and simulation, are also shown. The uncertainties are given for the low- (high-)mass selections. The range given for each source covers the variation across the mass range. Upper limits are presented for the uncertainty related to the PDF choice in the background estimates; however, this source is considered to be accounted for via the normalization uncertainty and was not applied explicitly as an uncertainty in the background.

Background uncertainties
The main sources of systematic uncertainty are associated with the background estimates. The largest is that related to the misidentified-lepton background. The systematic uncertainty in this background is determined by observing the change in the estimate under variations of the isolation requirement (and several other selection criteria) for the loose leptons, and under modifications of the p_T requirement for the away-side jet (the jet required to be back-to-back with the lepton in the measurement region). In addition, uncertainties in the jet-flavor dependence of the misidentification probability and in the prompt-lepton contamination in the measurement region are taken into account. By combining these sources, a systematic uncertainty of 8.9-20% is assigned, depending on the lepton flavor and the SR. The validity of the prediction of the misidentified-lepton background was checked by estimating this background using simulated events alone. The results disagreed with those obtained from the various CRs by up to 30%, and this value is assigned as the systematic uncertainty in this background estimate.
The systematic uncertainty in the mismeasured-electron-sign background is determined by combining the weighted average of the uncertainties in the barrel/endcap scale factors from the background fits with the uncertainty in the parameterized sign-mismeasurement probabilities. To evaluate the uncertainties in the sign-mismeasurement probability scale factors, we vary the range and the number of bins used in fitting the data, as well as the requirement on the subleading lepton p_T; combining all these sources, we assign a systematic uncertainty of 9% in the scale factors. The uncertainty in the sign-mismeasurement probability arising from the choice of parameterization variables was estimated by considering alternative variables such as (p_T^miss)^2/S_T and p_T^miss; a variation of up to 11% was observed. The background estimation method was also tested using only simulation, in which OS2ℓ events were weighted using the sign-mismeasurement probabilities with no scale factors applied. The predicted and observed numbers of events in simulation disagree by up to 7%, and this value is assigned as another source of systematic uncertainty. The three sources discussed above are combined to give a systematic uncertainty of 16% in this background, which covers the difference between the predicted and observed numbers of events in both data samples enriched in backgrounds with mismeasured electrons, as discussed in Section 6.3.
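As a quick consistency check, combining the three quoted sources (9%, 11%, and 7%) in quadrature, under the assumption that they are uncorrelated, indeed reproduces the 16% total:

```python
import math

# The three sources quoted above: scale factors (9%), parameterization
# choice (11%), and closure in simulation (7%), combined in quadrature
sources = [0.09, 0.11, 0.07]
total = math.sqrt(sum(s**2 for s in sources))
# total ≈ 0.158, i.e. the 16% quoted in the text
```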
The simulated sample used to measure the sign-mismeasurement probabilities has limited statistical precision for events with electron p_T above 100 GeV. Combining this statistical limitation with the other uncertainties in the background from mismeasured electron sign, an overall systematic uncertainty of 29-88% is assigned, depending on electron η and p_T. This large uncertainty applies only to SRs with two high-p_T electrons, and its effect on the total systematic uncertainty in the background is at most 5%.

Simulation uncertainties
The systematic uncertainties in the normalization of the irreducible SM diboson backgrounds are taken from the data CR used to normalize the backgrounds. The assigned uncertainties are 6% for WZ, 25% for ZZ and 8% for Zγ and Wγ backgrounds. Since other SM processes that can yield two SS leptons, including triboson, ttV, and W ± W ± , have small background yields in the SR, we assign a conservative uncertainty of 50%, which includes the uncertainties due to experimental effects, event simulation, and theoretical calculations of the cross sections. The overall systematic uncertainty in the prompt-lepton background, including the contributions discussed below, is 12-18% for the low-mass selection and 16-43% for the high-mass selection, depending on the lepton channel. To evaluate the uncertainty due to imperfect knowledge of the integrated luminosity [79], jet energy/mass scale, jet energy/mass resolution [66], b tagging [73], lepton trigger and selection efficiency, as well as the uncertainty in the total inelastic cross section used in the pileup reweighting procedure in simulation, the input value of each parameter is changed by ±1 standard deviation from its central value. Energy not clustered in the detector affects the overall p miss T scale, resulting in an uncertainty in the event yield due to the upper threshold on p miss T . The theory uncertainties in the acceptance of the signal events are determined by varying the renormalization and factorization scales up and down by a factor of two relative to their nominal values, and following the PDF4LHC recommendations [80] to estimate the uncertainty due to the choice of the PDF set. The uncertainty related to the PDF choice in the background estimates was evaluated, and an upper limit on the uncertainty was added to Table 3, although this uncertainty was not applied explicitly in the results but considered to be accounted for via the normalization uncertainty taken from the normalization control regions.

Results and discussion
The data yields and background estimates after the application of the low- and high-mass SR selections are shown in Table 5. The predicted backgrounds from events with prompt SS leptons, leptons with mismeasured sign, and misidentified leptons are shown along with the total background estimate and the number of events observed in data. The uncertainties shown are the statistical and systematic components, respectively. The data yields are in good agreement with the estimated backgrounds, and kinematic distributions also show good agreement between data and SM expectations. Figures 2-3 show, for illustration, the invariant mass of the leading p_T lepton and the selected jet(s); the invariant mass of the trailing p_T lepton and the selected jet(s); and the invariant mass of the two leptons and the selected jet(s), for the low- (high-)mass SRs. In Fig. 2, the m(ℓ±ℓ±jj) signal distribution peaks somewhat below m_W because of the selection requirements imposed.
The expected signal depends on both m_N and the mixing matrix elements |V_eN|^2, |V_µN|^2, or |V_eN V*_µN|^2/(|V_eN|^2 + |V_µN|^2); the expected yields are summarized in Table 6 for a few mass points. Tables 7-10 show the optimized selections applied on top of the low- and high-mass SR requirements for each mass hypothesis. These tables also present the observed event counts in data and the expected background for each signal mass hypothesis. The data are generally consistent with the predicted backgrounds in all three flavor channels. The largest deviation is observed in the µµ channel of SR1, at a signal mass of 600 GeV, with a local significance of 2.3 standard deviations; the corresponding point of SR2 does not show a matching fluctuation.
Table 6: Numbers of expected signal events after all the selections are applied. The matrix element squared is set to 1 × 10^−4, 1 × 10^−2, and 1 for m_N = 50, 200, and 1000 GeV, respectively.
Upper limits at 95% confidence level are derived from the event counts in Tables 7-10. Log-normal distributions are used for both the signal and nuisance parameters. The combined limits from SR1 and SR2 on the absolute values of the matrix elements |V_eN|^2, |V_µN|^2, and |V_eN V*_µN|^2/(|V_eN|^2 + |V_µN|^2) are shown in Figs. 4-5 as a function of m_N. We assume the systematic uncertainties in SR1 and SR2 to be fully correlated when calculating these limits, which are computed separately for each of the three channels. For an N mass of 40 GeV the observed (expected) limits are |V_eN|^2 < 9.5 (8.0) × 10^−5, |V_µN|^2 < 2.3 (1.9) × 10^−5, and |V_eN V*_µN|^2/(|V_eN|^2 + |V_µN|^2) < 2.7 (2.7) × 10^−5; for an N mass of 1000 GeV the limits are |V_eN|^2 < 0.42 (0.32), |V_µN|^2 < 0.27 (0.16), and |V_eN V*_µN|^2/(|V_eN|^2 + |V_µN|^2) < 0.14 (0.14). The mass range below m_N = 20 GeV is not considered because of the very low selection efficiency in this region.
Furthermore, since the N lifetime is inversely proportional to m_N^5 |V_ℓN|^2, for m_N < 20 GeV the lifetime becomes significant and results in displaced decays, so the prompt-lepton requirement is not satisfied. The behavior of the limits around m_N = 80 GeV reflects the fact that, as the mass of the heavy Majorana neutrino approaches the W boson mass, the lepton produced together with the N, or the lepton from the N decay, has very low p_T.
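The steep m_N^5 scaling makes the onset of displaced decays at low mass plausible. As a hypothetical numerical illustration (fixed mixing assumed, absolute normalization of the lifetime deliberately left out):

```python
def relative_lifetime(m1, m2, v2_1=1.0, v2_2=1.0):
    """Ratio tau(m1)/tau(m2) for a lifetime tau proportional to
    1 / (m_N^5 |V_lN|^2); masses in GeV, v2_* are the squared mixings."""
    return (m2**5 * v2_2) / (m1**5 * v2_1)

# At fixed mixing, a 20 GeV N lives (100/20)^5 = 3125 times longer than a
# 100 GeV N, which is why the decays become displaced at low mass.
ratio = relative_lifetime(20.0, 100.0)
```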
The present search at 13 TeV extends the previous CMS SS2ℓ plus jets searches at 8 TeV [34,35] to both higher and lower N masses. In those earlier searches, two AK4 jets were required in the low- and high-mass SRs, while the present analysis at √s = 13 TeV extends the low-mass SR to include events with exactly one AK4 jet, and the high-mass SR to include events with at least one AK8 jet. As seen in Figs. 4-5, the exclusion limits for the mixing matrix elements are extended to both lower and higher N mass, and now cover N masses from 20 to 1600 GeV. In the range previously studied, the present limits significantly improve on the previous results, except in the region from 60 to 80 GeV, where they are equivalent: the 13 TeV data were taken at higher collision rates, and thus with higher trigger thresholds and pileup rates, which reduced the sensitivity of the search in the low-mass region. This region is covered with high efficiency by a recent search in trilepton channels [37].
Figure 4 shows the exclusion limits for |V_eN|^2 and |V_µN|^2 overlaid with the 13 TeV CMS limits from the trilepton channel [37]. For low-mass signals the trilepton analysis is more sensitive, since it has both smaller backgrounds from misidentified leptons and higher signal efficiency. For high-mass signals, however, the signal efficiencies are comparable, and with the inclusion of the signal region using AK8 jets and the larger signal cross section in the dilepton channel, this analysis sets more stringent limits for N masses above 100 GeV.
Figure 4: Exclusion region at 95% CL in the |V_eN|^2 (upper) and |V_µN|^2 (lower) vs. m_N plane. The dashed black curve is the expected upper limit, with one and two standard-deviation bands shown in green and yellow, respectively. The solid black curve is the observed upper limit. The brown line shows constraints from EWPD [83]. Also shown are the upper limits from other direct searches: DELPHI [30], L3 [31,32], ATLAS [36], the CMS √s = 8 TeV 2012 data [35], and the trilepton analysis [37] based on the same 2016 data set as used in this analysis.
Table 7: Selection requirements on discriminating variables determined by the optimization for each Majorana neutrino mass point in the low-mass signal regions. The last columns show the overall signal acceptance for the DY channel. The quoted uncertainties include both the statistical and systematic contributions.
Figure 5: Exclusion region at 95% CL in the |V_eN V*_µN|^2/(|V_eN|^2 + |V_µN|^2) vs. m_N plane. The dashed black curve is the expected upper limit, with one and two standard-deviation bands shown in green and yellow, respectively. The solid black curve is the observed upper limit. Also shown are the upper limits from the CMS √s = 8 TeV 2012 data [35].

Summary
A search for heavy Majorana neutrinos, N, in final states with same-sign dileptons and jets has been performed in proton-proton collisions at a center-of-mass energy of 13 TeV, using a data set corresponding to an integrated luminosity of 35.9 fb−1. No significant excess of events over the expected standard model background prediction is observed. Upper limits at 95% confidence level are set on the mixing matrix elements between the standard model neutrinos and N, |V_ℓN|^2, in the context of a Type-I seesaw model, as a function of the N mass. The analysis improves on the previous 8 TeV searches by including single-jet events in the signal region, which increases the sensitivity. For an N mass of 40 GeV the observed (expected) limits are |V_eN|^2 < 9.5 (8.0) × 10^−5, |V_µN|^2 < 2.3 (1.9) × 10^−5, and |V_eN V*_µN|^2/(|V_eN|^2 + |V_µN|^2) < 2.7 (2.7) × 10^−5; for an N mass of 1000 GeV the limits are |V_eN|^2 < 0.42 (0.32), |V_µN|^2 < 0.27 (0.16), and |V_eN V*_µN|^2/(|V_eN|^2 + |V_µN|^2) < 0.14 (0.14). The search is sensitive to N masses from 20 to 1600 GeV. Limits on the mixing matrix elements are placed up to 1240 GeV for |V_eN|^2, 1430 GeV for |V_µN|^2, and 1600 GeV for |V_eN V*_µN|^2/(|V_eN|^2 + |V_µN|^2). These are the most restrictive direct limits on the N mixing parameters for heavy Majorana neutrino masses greater than 430 GeV, and the first for masses greater than 1200 GeV.