Invisible Higgs search through Vector Boson Fusion: A deep learning approach

Vector boson fusion, originally proposed as an alternative channel for finding a heavy Higgs boson, has now established itself as a crucial search scheme to probe properties of the Higgs or to look for new physics. We explore the merit of deep learning entirely from low-level calorimeter data in the search for an invisibly decaying Higgs, as an alternative to the decades-old reliance on the salient underlying event structure produced in vector boson fusion. We present a detailed comparative analysis of different neural network architectures using both low-level and high-level input variables. To allow a consistent comparison with existing techniques, we closely follow a recent CMS search for the invisible Higgs with 36 fb$^{-1}$ of data. We find that sophisticated deep-learning techniques can improve the bound on the invisible branching ratio by a factor of three using the same amount of data. Without relying on any exclusive event reconstruction, this novel technique can provide the most stringent bounds on the invisible branching ratio of the SM-like Higgs boson. Such an outcome can severely constrain many different BSM models.


Introduction
With the emergence of deep learning frameworks, a plethora of machine learning applications have recently gained immense importance in high-energy physics (HEP), in both collider and neutrino physics [1][2][3]. Supported by substantial multilateral developments in this field, efforts are being poured in to explore different aspects of HEP phenomenology, especially in the context of the Large Hadron Collider (LHC) [4][5][6][7][8][9]. In recent years, deep learning applications have been widely explored to understand the formation and properties of hadronic jets, the most common structured objects found in any event at the LHC, created from QCD fragmentation and hadronization of fundamental quarks and gluons. More interestingly, boosted heavy particles like the Higgs, top or massive gauge bosons can also produce similar jet objects after hadronization of their decay products. Prior to the advent of deep-learning approaches, the realization that the internal dynamics of different jet objects are dissimilar received intense scrutiny [10], looking into the underlying structures as probes for new physics. For jet substructure studies, the primary deep-learning approach is to employ calorimeter energy deposits of a jet in the η − φ pixel tower, converted into the pictorial description of such 'jet-images' [11], as input to a Convolutional Neural Network (CNN) [12][13][14][15]. Very successful n-prong taggers have been developed for the Higgs [16,17], Z/W bosons [18,19] and the top quark [20,21] by utilizing this idea, which is further extended to discriminate between quarks and gluons [14]. Deep Neural Networks (DNN) have established their importance for classification of signal and background using low/high-level variables [22][23][24][25][26][27][28][29]. Although there are some studies [30][31][32] exploring the inclusive event information at hadron colliders as input for deep-learning neural networks, their full potential is yet to be explored extensively. For the benefit of the readers, many more such exciting approaches in the machine learning framework can be followed in the recent reviews [1,33,34] and references therein.

Figure 1: A 3D depiction of a prototype signal event originating from electroweak VBF Higgs production in a naive detector geometry (left). The same event flattened in the convenient η − φ plane (right), where the transverse projections of calorimeter energy deposits in different pixels are drawn. The two reconstructed primary jets are shown with colored circles, and the corresponding transverse energy deposits are visible from the heights of the bars.
Taking an analogy from jet-image classification, we use the full calorimeter image to study invisible Higgs production in association with a pair of jets. Vector-boson fusion (VBF) production of color-singlet particles provides a unique signature at hadron colliders. First studied in references [35][36][37], such events are characterized by the presence of two hard jets in the forward regions with a large rapidity gap, and a relative absence of hadronic activity in the central region when the singlet particle decays non-hadronically. For illustration, the left panel of figure 1 shows an event of a Higgs produced in the VBF channel and decaying invisibly, in a simplistic tower geometry; in the right panel the same event is mapped onto a flattened (η, φ) plane where the periodicity of the φ-axis is lost, with the heights of the bars corresponding to the magnitude of the transverse projection of calorimeter energy deposits in each pixel. In order to highlight the differences with non-VBF processes, it is instructive to show one such example in figure 2. This is a representative event from the Z(νν) + jets background, where the jets arise from QCD vertices, which inherently produce much higher hadronic activity in the central region between the two leading jets. Even though the rapidity gap vanishes when the singlet particle decays hadronically, the absence of color connection between the two forward jets and the central region persists, and has been used in the experimental analysis [38] in searches for the Higgs boson decaying to bottom quarks. The VBF process was proposed as the most important mechanism for heavy Higgs searches [39] thanks to a much slower fall in cross-section compared to the s-channel mediated process. Its usefulness for intermediate to light mass scalars was also subsequently realized [40] due to its unique signature at the collider. The VBF process holds great importance for measuring Higgs couplings with gauge bosons and fermions, as it allows independent observation of Higgs decays like h0 → WW [41] and h0 → ττ [42]. Therefore, it also plays a vital role in determining anomalous couplings to vector bosons [43] or the CP properties of the Higgs [44,45]. Its clean features make it the most sensitive channel for searching for invisible decays of the Higgs boson. As the Higgs can decay invisibly only through a pair of Z bosons producing neutrinos, with a minuscule branching ratio in the Standard Model (SM), observation of any significant deviation can provide a strong indication of a theory beyond the Standard Model (BSM). Hence, this search plays a crucial role in constraining many BSM scenarios, like dark-matter [46][47][48][49], supersymmetric [50], and extra-dimensional models [51].
Despite being one of the most promising channels, invisible Higgs production is challenging to probe, since only a few observables can be constructed from the unique features of VBF. Ensuring a color-quiet central region through the so-called 'central jet veto', together with rather specific requirements on the jets, namely the separation in pseudorapidity |∆η jj | and the dijet invariant mass m jj , are the significant ones. Electroweak VBF production of the Higgs satisfies such criteria naturally with excellent efficiency. These same criteria also eliminate, to a large extent, the vast QCD backgrounds generated by radiating off a massive W or Z boson that decays invisibly. Finally, the much weaker electroweak backgrounds in the form of VBF W or Z production become the dominant factor for such a study. However, we must note at the same time that there is a significant drop in the signal contribution from other dominant non-VBF Higgs production modes, such as higher-order in α s corrections to gluon-fusion-initiated Higgs production [52].
A natural line of inquiry, therefore, is whether deep machine vision in the form of a CNN, together with other neural networks, can distinguish the characteristics of VBF by learning from the data itself. Utilizing low-level and high-level variables, the networks can map the probability distribution functions that characterize each process. Moreover, we would like to understand how useful these learned features are, or how they correlate with the traditional characteristics of VBF. Finally, there is enough scope to engage this sophisticated tool to obtain a hybrid output that maximizes the efficiency of selecting signal events, rather than classifying events into separate VBF and non-VBF clusters.
While the present study can easily be extended to other decay modes of the Higgs, we choose the invisible channel to showcase the importance of deep learning quantitatively using different neural network architectures. We propose to study the full event topology of VBF by examining the calorimeter tower-image using a CNN, which utilizes the low-level variables. We also consider the performance of event classification using dense Artificial Neural Networks (ANN), which employ high-level variables. In total, we investigate seven different neural network architectures and provide a comparative study of their performance. The performance of the networks is quantified in terms of expected constraints on the invisible branching ratio BR(h 0 →inv) of the Higgs boson.
The latest report from the ATLAS collaboration [53] puts an upper limit on BR(h 0 →inv) at 95% confidence level (CL) of 0.13, from an integrated luminosity of 139 fb −1 at the LHC. The CMS analysis puts an upper limit at 95% CL of 0.19 for the combined data set at 7, 8 and 13 TeV with 4.9 fb −1 , 19.7 fb −1 and 38.2 fb −1 of integrated luminosity, respectively [54]. These bounds still allow a significant presence of BSM physics. Our principal aim, therefore, is to study the viability of CNNs to improve these results using low-level variables in the form of the entire calorimeter image, as well as to compare their performance to DNN/ANN architectures with high-level variables as input. We find that the bounds on BR(h 0 →inv) can indeed be significantly improved using these networks.
The rest of this paper is organized as follows. In section 2, we discuss the Higgs production mechanism via the VBF channel and the different SM backgrounds contributing to this process. We also discuss the generation of simulated data consistent with the VBF search strategy. In section 3, we describe the details of the data representation used in the present study; the different classes of high-level variables are also defined there. The preprocessing of the feature space is addressed in section 4. We discuss the seven different neural network architectures and their performance in section 5. The results, interpreted in terms of expected bounds on the invisible branching ratio, are presented for all the architectures in section 6. We also discuss there the impact of pileup on the results of our analysis. Finally, we close our discussion with the summary and conclusion in the last section.

Figure 3: Representative diagrams for production of the Higgs signal through (left) the electroweak VBF channel and (right) a higher-order QCD process in gluon fusion, where two QCD jets can be detected along with sizable missing transverse energy from the invisible Higgs decay.
2 Vector Boson Fusion production of Higgs and analysis set-up

VBF production of the SM Higgs has the second-highest production cross-section after gluon fusion at the LHC. Loop-induced Higgs production and decay depend on the presence of contributing particles and on the different modifiers of the fermion and gauge boson couplings with the scalar. Hence, both the production cross-section and the decay branching ratios are modified in the presence of new physics. In the present work, we consider the production of an SM-like Higgs boson and constrain its invisible decay width. Such a constraint is essential in many new physics scenarios, such as Higgs portal dark matter [46][47][48][49], where new particles do not modify the Higgs couplings with SM particles. The electroweak VBF production receives its primary contribution from the fusion of two massive vector bosons radiated off two initial (anti-)quarks, as represented in figure 3 (left plot). This exchange of a color-singlet state between two quarks ensures no color connection between the two final jets, which are typically produced in the forward (backward) regions of opposite hemispheres. The central region between these two jets remains color quiet, lacking any jet activity even after radiation and fragmentation of the two scattered quarks, when looking at the hadronic final states. As we have already discussed, an agnostic viewpoint requires a serious re-examination after inclusion of all other processes, such as the non-VBF Higgs signal from gluon fusion. One such sample diagram is shown in figure 3 (right plot). Additional radiation from gluons can provide a typical VBF-type signal, once again in the absence of the key attributes like the color-quiet central region.
Another interesting feature of VBF Higgs production is that the corresponding cross-section receives very modest corrections at higher orders in QCD, as has been known for a long time [55,56]. Integrated and differential cross sections for VBF Higgs production have now been calculated to very high levels of accuracy. QCD corrections are known up to N3LO [57], reducing the scale uncertainty to about 2%, while electroweak corrections are known up to NLO [58]. Moreover, non-factorizable contributions have also been calculated for the first time [59], and show up to percent-level corrections compared to the leading order (LO) distributions.
At hadron colliders, traditional searches [60][61][62][63] for non-hadronically decaying color-singlet particles in the VBF production channel focus on rejecting the large QCD backgrounds from Z + jets and W + jets via a central jet veto, after hard cuts on the separation of the two forward jets in pseudorapidity |∆η jj | and on the dijet invariant mass m jj . This opens up the possibility of using inclusive event-shape variables like N-jettiness [64] to improve the selection efficiency [65]. In this study, we explore the feasibility of using deep-learning techniques instead of event-shape variables. We study the invisible decay of the Higgs boson as a prototype channel for gauging the power of deep-learning methods in VBF, since there is no contamination of the radiation pattern between the two forward jets from the decay products. We closely follow the shape-based analysis performed by the CMS experiment at the CERN-LHC [54] 1 . As already mentioned, the central jet veto plays a critical role in the usual VBF searches to control the vast QCD background. However, the present analysis does not rely on a central jet veto, as the main aim is to study the VBF topology with the low-level data. Therefore, with the relaxed selection requirements on |∆η jj | and m jj , the selected signal gets a significant contribution from gluon-fusion production of the Higgs on top of the VBF processes. Due to the relaxed selection criteria, we also get a substantial contribution from QCD backgrounds.

Signal topology
As we just discussed, the present study relies on all dominant contributions to the Higgs signal, coming both from electroweak VBF processes and from higher-order QCD gluon-fusion processes.
At least two jets should be reconstructed, along with sizable missing transverse energy from the invisible decay of the Higgs. Hence, we classify the full signal contribution into two channels:
• S QCD : Gluon-fusion production of the Higgs with two hard jets, where the Higgs decays invisibly.
• S EW : Vector-Boson fusion production of Higgs decaying invisibly.
The subscript EW (QCD) denotes the absence (presence) of the strong coupling α S at leading order (LO) for the topology of interest. This also segregates the channels by the absence or presence of color exchange between the two incoming partons at LO. Figure 3 shows representative Feynman diagrams of the signal channels in each class.

Backgrounds
The major backgrounds contributing to the invisible Higgs VBF signature come from different Standard Model processes. Among them, VBF-type electroweak and QCD production of massive vector bosons such as W or Z contribute copiously. All these processes provide a pair of reconstructed jets along with considerable missing transverse energy from the invisible decay of these gauge bosons: a substantial fraction of W and Z events produce neutrinos or a lepton which remains undetected at the detector. We consider the following backgrounds in all our analyses:
• Z QCD : The Z(νν) + jets process contributes as the major SM background due to its high cross section.
• W QCD : The W ± (l ± ν) + jets process also contributes to the SM background when the lepton is not identified.
Figure 4: Representative diagrams for the dominant background processes through (left) VBF-type weak production and (right) QCD production of massive vector bosons such as W or Z, which mimic an invisible decay by producing neutrinos or undetected leptons.
• Z EW : Electroweak production of Z decaying invisibly along with two hard jets is topologically identical with the electroweak signal and contributes significantly to the background.
• W EW : Electroweak production of W ± with two hard jets can also produce an identical signal when the lepton does not satisfy the identification criteria.
Similar to the signal processes, the subscript EW (QCD) denotes the absence (presence) of the strong coupling α S at LO for the topology of interest, with at least two reconstructed jets in the final state. Figure 4 shows representative Feynman diagrams of the background channels, divided into four classes. There are also other background processes, like top-quark production, diboson processes and QCD multijet backgrounds, whose contributions are much smaller compared to these four backgrounds. The top and diboson backgrounds contribute in leptonic decay channels where the charged leptons, if present, are not identified, while the QCD multijet background contributes when there is severe mismeasurement of the jet energies.

Simulation details
We use MadGraph5 aMC@NLO [67] to generate parton-level events for all processes at the 13 TeV LHC. These events are then showered and hadronized with Pythia8 [68]. Delphes3 [69] is used for fast detector simulation of the CMS working conditions. The signal processes are generated using a modified version of the Higgs Effective Field Theory (HEFT) model, where the Higgs boson can decay to a pair of scalar dark matter particles at tree level. We are interested in probing the high-transverse-momentum regime of the Higgs, where the finite mass of the top quark in gluon fusion becomes essential. Hence, we take this effect into account by reweighting the missing transverse energy (met) distribution of the events following the recommendations of reference [70]. The parton-level cross-sections of Z QCD and W QCD are matched up to four and two jets, respectively, via the MLM procedure [71]. Since the W ± backgrounds contribute when the leptons are missed within the range of the tracker or are not reconstructed at the detector, the parton-level cuts on the generated leptons are removed to cover the whole range in pseudorapidity (η).
For a consistent comparison with current experimental results, we repeat the shape analysis of reference [54] with our simulated dataset. The met cut for the deep-learning study is relaxed from 250 GeV to 200 GeV.
Baseline selection criteria: We apply the following pre-selections:
• Jet p T : At least two jets, with the leading (sub-leading) jet having minimum transverse momentum p T > 80 (40) GeV.
• Photon-veto: Events having any photon with p T > 15 GeV in the central region, |η| < 2.5 are discarded.
• τ and b-veto: No tau-tagged jets in |η| < 2.3 with p T > 18 GeV, and no b-tagged jets in |η| < 2.5 with p T > 20 GeV are allowed. This rejects leptonic decay of single W ± , semi-leptonic tt and single top backgrounds.
• MET: The total missing transverse momentum of the event must satisfy met > 200 GeV for all our deep-learning studies, whereas the comparison with the CMS shape analysis uses met > 250 GeV.
• Alignment of MET with respect to jet directions: The azimuthal angle separation between the reconstructed jets and the missing transverse momentum must satisfy min(∆φ( p met T , p j T )) > 0.5 for up to the four leading jets with p T > 30 GeV and |η| < 4.7. The QCD multi-jet background that arises from severe mismeasurement is reduced significantly by this requirement.
• Jet rapidity: We require both jets to be produced with |η j | < 4.7, and at least one of the jets to have |η j | < 3, since the L1 triggers at CMS do not use information from the forward regions.
• Jets in opposite hemispheres: Events in which the two leading jets reside in opposite hemispheres in η are selected, by imposing the condition η j 1 × η j 2 < 0.
• Azimuthal angle separation between jets: Events with |∆φ jj | < 1.5 are selected. This helps in reducing all non-VBF backgrounds.
• Di-jet invariant mass: We require a minimum invariant mass of the two leading jets, m jj > 200 GeV.

Note that this, along with the previous selection requirements, is relatively loose compared to the traditional selection criteria for VBF topologies, which results in a significant enhancement of the signal from S QCD , although at the cost of increased QCD backgrounds (Z QCD and W QCD ). Interestingly, one can notice that a relaxed selection requirement may give rise to additional contamination of the S EW channel from Higgs-strahlung type topologies, which are included in our EW event generation. However, these events are not expected to survive a di-jet invariant mass selection above 200 GeV. After extracting the events passing the above selection requirements, and the respective selection efficiency (calculated from the weights) for S QCD , the pre-selected events are unweighted again, so that individual events carry equal weights. 2 The background and signal classes are formed by mixing the channels in the expected proportions using the appropriate k-factors, cross-sections and baseline-selection efficiencies. We use the cross-sections quoted in reference [70] for both signal processes: S QCD is calculated up to NNLL+NNLO accuracy [72], while S EW is calculated up to NNLO [73] in QCD and NLO in electroweak. All background cross-sections are obtained by scaling the LO cross-sections from MadGraph5 aMC@NLO with NLO k-factors [74,75]. We generate balanced datasets of 200,000 training and 50,000 validation events for the deep-learning classifiers. The signal class consists of 44.8% S EW and 55.2% S QCD ; the background class consists of 51.221% Z QCD , 44.896% W QCD , 2.295% Z EW and 1.587% W EW .
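As an illustration of how such classes can be assembled, the following minimal Python sketch (not the actual analysis code; the helper names and numbers are placeholders) mixes unweighted per-channel events into a single class using fractions computed from cross-section × k-factor × baseline-selection efficiency:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_fractions(channels):
    """channels: dict name -> (sigma_LO_pb, k_factor, selection_efficiency)."""
    weights = {n: s * k * eff for n, (s, k, eff) in channels.items()}
    total = sum(weights.values())
    return {n: w / total for n, w in weights.items()}

def mix_channels(events, fractions, n_total):
    """events: dict name -> array of unweighted per-event feature vectors."""
    mixed = []
    for name, frac in fractions.items():
        n = int(round(frac * n_total))                      # expected share of this channel
        idx = rng.choice(len(events[name]), size=n, replace=False)
        mixed.append(events[name][idx])
    return rng.permutation(np.concatenate(mixed, axis=0))   # shuffle the combined class
```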
We also extract event samples for all channels with the harder selection requirement on the missing transverse momentum (met > 250 GeV), the value used in reference [54], from the same set of generated events used for the deep-learning analysis. The extracted dataset contains 39% S EW and 61% S QCD in the signal class, and 54.43% Z QCD , 40.92% W QCD , 3.05% Z EW and 1.58% W EW in the background class. The bin-wise stacked histograms of all channels for m jj and |∆η jj | are shown in figure 5. The properties of the EW and QCD subsets are evident from these distributions: EW contributes more at higher m jj and |∆η jj |, while the opposite is true for QCD.

Data Representation for the Network
Neural network architectures for deep learning are mostly designed with two blocks. The first stage generally consists of locally-connected layers (with or without weight sharing) with some particular domain-level specifications, which extract the features. The second stage typically consists of densely connected layers, whose function is to find a direction in the learned feature-space which optimally satisfies the particular target of the network, by learning its projections in different representations at each subsequent layer. For instance, in classification problems, it finds the decision boundary between different classes, while in unsupervised clustering it compresses the feature-space so that the modes become localized in a smaller volume. A synergy between the representation of the data and the network architecture is a must for efficient feature extraction. This is evident from the fact that convolutional neural networks perform best with data structures which have an underlying Euclidean structure, while recurrent networks work best with sequential data structures. In the context of classifying boosted heavy particles like W , Higgs, top quark or heavy scalars decaying to large-radius jets against the QCD background, a lot of effort [11,12,14,18,20,76] went into representing the data like an image in the (η, φ) plane to use convolutional layers for feature extraction, while some others [77,78] use physics-motivated architectures. Convolutional architectures work in these cases because the differences between the signal jet and the background (QCD) follow a Euclidean structure. 3 The Minkowski structure of space-time prohibits a direct use of convolutional architectures. Although geometric approaches [79] exist to counter the non-Euclidean nature, the number of dimensions makes them computationally expensive. Graph neural networks [80,81] provide a possible, computationally less intensive workaround for feature learning in non-Euclidean domains.
In the present work, we want to study the difference in radiation patterns between the two forward jets for signal and background events; hence, we primarily choose a convolutional architecture for automatic feature extraction. The low-level feature space we prefer is therefore the tower-image in the (η, φ)-plane, with the transverse energy E T as the pixel values. One could take into account the different resolutions of the calorimeter towers in the central and forward regions of the LHC detectors. For simplicity, and also to demonstrate the resolution dependence, we construct two images: a high-resolution image with bin size 0.08 × 0.08 and a low-resolution image with bin size 0.17 × 0.17, over the full range of the tower, [−5, 5] in η and [−π, π) in φ. Convolutional neural networks in general look at global differences, and increasing the resolution does not play as important a role; we examine CNNs at these two resolutions to inspect this for our particular case. One would also like the network to learn the inherent periodicity of our events along the φ-axis. This periodic feature can be exploited simply by padding the image at each boundary with rows from the opposite boundary [30]. Taking the jet radius R = 0.5 (the jets have a regular geometry since they are clustered with the anti-k t algorithm [82]), we choose the number of padded rows to be 4 (8) for the low (high)-resolution images, with one bin as a buffer. This gives a low-resolution (LR) image of 59 × 45 pixels and a high-resolution (HR) one of 125 × 95 pixels. A significant difference between low-level and high-level feature spaces is that the modes of the data in low-level representations are not distinct. Although this is marginally improved by preprocessing, high-level features derived from the same low-level inputs have distinctly localized modes in their distributions. An exemplary ability of deep-learning algorithms is to bypass this step and learn their own representations, which can perform better than the high-level variables developed by domain-specific methods. To analyze the relative performance of physics-motivated variables derived from the calorimeter deposits, we consider two classes of high-level variables. The first one (denoted K) consists of kinematic variables built from the two leading jets and the missing transverse momentum, such as m jj , |∆η jj | and |∆φ jj |, together with the following azimuthal quantities: φ met , the azimuthal direction of met in the lab frame, and ∆φ j 1 met , ∆φ j 2 met and ∆φ j 1 +j 2 met , the azimuthal separations of met from the leading jet, the sub-leading jet and the vector sum of these two jets, respectively. Clearly, these do not contain any information about the radiation pattern between the tagging jets. The second class of variables incorporates this information: for a given η C , we define the sum of E T of the tower constituents in the interval [−η C , η C ],

$H_T^{\eta_C} = \sum_{i\,:\,|\eta_i| < \eta_C} E_{T,i}$ ,    (3.2)

where i runs over the tower constituents, and we denote by R the set of these variables for the chosen η C 's. We vary η C uniformly in the interval [1, 5] to get 16 such variables. Their inclusion helps us to provide a thorough comparison of the high-level and low-level feature spaces. Figure 6 shows the signal vs background distributions of some important kinematic variables. The channel-wise contributions to the parent class are also stacked with different colors/lines. We see that the characteristics of m jj and |∆η jj | are the same as in figure 5, with the electroweak processes contributing more at higher values. A notable feature of |∆φ jj | is the shape of the signal and background distributions: the difference is clearly due to the S EW contribution, since S QCD has a very similar shape to that of the background.
This is another characteristic of VBF processes: the leading jets, originating from electroweak vertices, have a smaller separation in φ compared to those originating from QCD. Similar plots for the remaining four kinematic variables and for the R set of variables are shown in figure 19 and figure 17 in appendix B, where a brief discussion of the two feature spaces (mainly R) is also presented. We denote the combined high-level feature space as H, which is 24-dimensional.
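Before moving on, a minimal sketch of the low-resolution tower-image construction described above (the bin counts and the 4-row φ padding follow the text; the array layout and function name are illustrative assumptions):

```python
import numpy as np

N_ETA, N_PHI, PAD = 59, 37, 4   # 59 x 37 raw bins -> 59 x 45 after phi padding

def tower_image(eta, phi, et):
    """eta, phi, et: 1D arrays of tower-constituent coordinates and transverse energies."""
    img, _, _ = np.histogram2d(
        eta, phi, bins=[N_ETA, N_PHI],
        range=[[-5.0, 5.0], [-np.pi, np.pi]], weights=et)
    # periodic padding: copy rows from the opposite phi boundary to both edges
    img = np.concatenate([img[:, -PAD:], img, img[:, :PAD]], axis=1)
    return img  # shape (59, 45)
```

The high-resolution image follows the same recipe with 125 × 79 raw bins and 8 padded rows per side.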
In order to gauge the discriminating power of each feature x, we determine the separation [83], defined as

$\langle S^2 \rangle = \frac{1}{2} \int \frac{\left[p_S(x) - p_B(x)\right]^2}{p_S(x) + p_B(x)}\, dx$ ,    (3.4)

where p S (x) and p B (x) denote the normalized probability distributions of the signal and background classes. It gives a classifier-independent measure of the discrimination power of the feature x: a value of zero (one) denotes identical (non-overlapping) distributions. We plot the separation (in percentage) of the seven most important variables out of the 24 features in figure 7.
It is interesting to note that five of these variables are from R, even though the first and second are from K and have much larger separation values.
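A short sketch of how the separation of eq. (3.4) can be estimated from binned samples of a single feature (the binning here is arbitrary and not the one used for figure 7):

```python
import numpy as np

def separation(x_sig, x_bkg, bins=50):
    """Binned estimate of <S^2> for one feature; returns a value in [0, 1]."""
    lo = min(x_sig.min(), x_bkg.min())
    hi = max(x_sig.max(), x_bkg.max())
    p_s, edges = np.histogram(x_sig, bins=bins, range=(lo, hi), density=True)
    p_b, _ = np.histogram(x_bkg, bins=bins, range=(lo, hi), density=True)
    width = np.diff(edges)
    num = (p_s - p_b) ** 2
    den = p_s + p_b
    terms = np.where(den > 0, num / den, 0.0)   # skip empty bins
    return 0.5 * np.sum(terms * width)          # multiply by 100 for percent
```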

Preprocessing of feature space
Preprocessing of features is indispensable for shallow machine learning, as it helps maximize the statistical output from smaller data sizes. In deep-learning applications, it helps in faster convergence of the training, and also in approaching optimal accuracy with a smaller amount of data using simpler architectures. Even though the primary aim of our model is to learn the differing QCD radiation patterns, we only devise preprocessing operations that preserve the Lorentz symmetries of the event. The spatial orientation of the events can in general be regularized by the following procedure:
1. Identify principal directions: Choose three final-state directions {n̂ 1 , n̂ 2 , n̂ 3 }. These can be any three final-state objects of interest to our study, like photons, leptons, and jets; or they can be chosen to be generic directions in the lab frame.
2. First Rotation: Rotate the event such that n̂ 1 points along a fixed reference direction. After this operation, the orientation of n̂ 1 is the same for all events.
3. Second Rotation: Rotate the event about n̂ 1 such that the plane formed by n̂ 1 and n̂ 2 has the same orientation for all events after this operation.

4. Reflection: Reflect along the yz-plane such that the half-space containing n̂ 3 becomes the same for all events after this step.
These are passive operations, which change the orientation of the reference frame without changing the physics. For most event topologies, there will be better feature regularization when n̂ 2 and n̂ 3 are equal. In hadron colliders, due to the unknown partonic center-of-mass energy √ŝ, we set the z-axis as n̂ 1 , preserving the transverse momenta of all final-state particles. We choose two different instances of n̂ 2 ∈ {n̂ met , n̂ j 1 }. For our choice of n̂ 1 , the z-component of n̂ 2 does not matter, and we can take it to be zero for n̂ met . However, the z-component becomes important for the third operation, and we choose n̂ 3 = n̂ j 1 . This translates to applying the following operations to the four-momenta of each event:
1. Rotate about the z-axis such that φ 0 = 0. We choose two instances of φ 0 ∈ {φ met , φ j 1 }.
2. Reflect along the xy-plane, such that the leading jet's η is always positive.

After these two steps, the tower constituents are binned at the resolutions mentioned earlier, and then padded along the φ-axis. We denote the feature spaces obtained after preprocessing with the two instances of φ 0 as P met and P J . Figure 8 shows the different preprocessing steps for an event taking φ 0 = φ j 1 . Averaged low-resolution images of the validation dataset for each class, without preprocessing and for both instances of φ 0 , are shown in figure 9. As emphasized earlier, there is better regularization when n̂ 2 = n̂ 3 (φ j 1 = 0, η j 1 > 0): there are no distinctly localized hard regions in the unprocessed case, some appear for the φ met = 0 instance, and they become hardest for the φ j 1 = 0 case, with the hardest region around the leading jet. Clearly, the dominant features are the jets; while for P J these lie at the center, for P met they lie at the φ-boundary, so the effect of padding is much more pronounced in P met . By analogy, the padding becomes crucial when the Higgs boson decays in a hadronic channel (say h0 → bb or even h0 → τ+τ−), where we would want the jets arising from the Higgs, be they normal or large-radius, to lie at the center of the image. Combining the instances of preprocessing and resolutions, there are four low-level feature spaces, namely P LR met , P HR met , P LR J and P HR J , where the superscripts LR and HR denote the low and high resolutions. We note that all the high-level variables except φ met are invariant under the two preprocessing operations, although for our purpose we extract them prior to their application. This follows from the usual physical intuition that absolute positions in the lab frame are of no particular importance, and the useful information comes from the relative positions of the different final states. We regularize the high-level features by mapping the distribution of each variable to its z-scores. Calculating the mean x̄ j and the standard deviation σ j for each feature over the whole dataset (training and validation data of both classes together), we perform the following operation on each variable of all events,

$z_i^j = \frac{x_i^j - \bar{x}^j}{\sigma^j}$ .    (4.1)
The superscript j denotes the feature index, and the subscript i denotes the per-event index. This standardization is particularly useful since the features have very different ranges (for instance m jj and |∆η jj |), and the operation minimizes this disparity. Furthermore, the features z j are now dimensionless. A caveat here is that the values of the mean and standard deviation are calculated from a balanced dataset. In experimental data, the presence of the two classes, if there is a positive signal at all, is never balanced. We justify our choice by the class independence of these values, by virtue of which the relative differences in the shapes of the signal and background distributions are preserved, and the same set of values can be used when applying the networks to unknown, unlabelled data.
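A minimal sketch of these preprocessing operations, assuming flat arrays of tower-constituent coordinates and of high-level features (not the actual analysis code):

```python
import numpy as np

def orient_event(eta, phi, phi0, eta_j1):
    """Rotate one event about the z-axis so phi0 -> 0, and reflect eta if needed."""
    phi_rot = np.mod(phi - phi0 + np.pi, 2 * np.pi) - np.pi   # keep phi in [-pi, pi)
    if eta_j1 < 0:                                            # reflect along the xy-plane
        eta = -eta
    return eta, phi_rot

def standardize(features):
    """features: array of shape (n_events, n_features); returns z-scores as in eq. (4.1)."""
    mean = features.mean(axis=0)
    std = features.std(axis=0)
    return (features - mean) / std
```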

Neural Network architecture and performance
In the previous sections, we have defined seven feature spaces, broadly grouped into the high-level spaces K (kinematic), R (QCD-radiative) and H (the combination of the two), and the low-level spaces P LR met , P HR met , P LR J and P HR J . With these as inputs, we train neural networks for classification. The generic architecture chosen for the high-level feature spaces is a dense Artificial Neural Network (ANN), while for the low-level ones it is a Convolutional Neural Network. Hence, we name the seven networks: K-ANN, R-ANN, H-ANN, P LR met -CNN, P HR met -CNN, P LR J -CNN and P HR J -CNN. All networks were implemented in Keras [84] with the TensorFlow [85] backend.

Choice of hyperparameters
The CNN is composed of three modules, with each module formed by two convolutional layers followed by an average-pooling layer. Each convolutional layer consists of sixty-four filters of size 4 × 4, with a single stride in each dimension. The inputs are padded so that the image size is maintained after each convolution. The pool size is chosen to be 2 × 2 for all three modules, with a 2 × 2 stride. The output after the third module is flattened and fed into a dense network of three layers having three hundred nodes each, which is then passed into the final layer with two nodes and softmax activation. The convolutional layers, as well as the dense layers prior to the final layer, have ReLU activations. In total, the CNNs for the high-resolution (low-resolution) images have approximately 3.7 (1.2) million trainable parameters. The ANN architectures are inspired by the information bottleneck principle [86], which has close connections to the coarse-graining of renormalization-group evolution, as was in fact pointed out earlier in reference [87]. We choose the number of nodes in the first layer to be equal to the number of input nodes, which is then successively reduced after two layers of the same dimension. 4 These reductions in successive nodes are chosen to be five for the R-ANN and H-ANN, while for the K-ANN we consider four due to the low dimensionality of the input. The process is stopped when no further reduction is possible, or after four such reductions. We checked two activation functions, sigmoid and ReLU, for the ANNs. We found that sigmoid activation gave the best validation accuracy for the R-ANN and H-ANN, while it performed worse than ReLU for the K-ANN. In total, the K-ANN, R-ANN and H-ANN have 210, 991 and 2790 trainable parameters, respectively. Since this is a first exploratory study, we do not optimize the hyperparameters and use the values specified here for extracting the results. A simplified architecture flowchart for each of the different networks is given in figure 10.
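A minimal Keras sketch of the CNN described above for the high-resolution images (the exact padding and pooling conventions are our assumptions, chosen to match the quoted layer structure):

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(input_shape=(125, 95, 1)):
    model = keras.Sequential()
    model.add(keras.Input(shape=input_shape))
    for _ in range(3):                                   # three conv modules
        model.add(layers.Conv2D(64, (4, 4), padding='same', activation='relu'))
        model.add(layers.Conv2D(64, (4, 4), padding='same', activation='relu'))
        model.add(layers.AveragePooling2D(pool_size=(2, 2), strides=(2, 2)))
    model.add(layers.Flatten())
    for _ in range(3):                                   # dense block of 300 nodes each
        model.add(layers.Dense(300, activation='relu'))
    model.add(layers.Dense(2, activation='softmax'))     # signal/background probabilities
    return model
```

With these conventions, the model has roughly 3.7 million trainable parameters for the 125 × 95 input and roughly 1.2 million for the 59 × 45 low-resolution input, consistent with the numbers quoted above.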
We chose categorical cross-entropy as the loss function. The cross-entropy between two probability distributions y 0 and y t is defined as

$H(y_t, y_0) = -\sum_{x} y_t(x) \log y_0(x)$ ,

where the distributions are functions of the feature vector x. It is a measure of how well a modeled distribution y 0 , corresponding to the network output, resembles a true distribution y t , the true labels provided during training. For a fixed true distribution y t , minimizing the cross-entropy essentially minimizes the KL-divergence [88], which is a measure of the similarity between two distributions and becomes zero if they are exactly identical. We used the Nadam [89] optimizer with a learning rate of 0.001 to minimize the loss function for all neural networks. The adaptive nature of the optimizer, with smaller updates for frequently occurring features and larger updates for rare features, helps in better convergence for the sparse image data that we have, with the added benefits of Nesterov accelerated gradient descent [90]. Moreover, the learning rate is no longer a critical hyperparameter. For the CNNs, training does not require more than ten epochs to reach the optimal validation accuracy; nevertheless, we train them five times from random initializations for twenty epochs each. The ANNs are trained for more epochs, since the relatively fewer parameters make the convergence slower: ReLU-activation networks are trained for two hundred epochs, while sigmoid-activation networks are trained for one thousand epochs due to their slower convergence compounded with the smaller number of parameters. A batch size of three hundred was chosen for training all networks. During training, each model, including all of its parameters, is stored after every epoch in the Keras-provided "hdf5" format. Out of these, we use the best performing model, with the highest validation accuracy, for further analysis.
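A sketch of the corresponding training setup (the data, shapes and file naming below are placeholders, and `build_cnn` refers to the sketch above):

```python
import numpy as np
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.optimizers import Nadam
from tensorflow.keras.utils import to_categorical

# placeholder tower images and one-hot labels, standing in for the preprocessed dataset
x_train = np.random.rand(1000, 125, 95, 1)
y_train = to_categorical(np.random.randint(0, 2, size=1000), num_classes=2)
x_val, y_val = x_train[:200], y_train[:200]

model = build_cnn()
model.compile(optimizer=Nadam(learning_rate=0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# store the full model after every epoch; the best validation epoch is kept for analysis
checkpoint = ModelCheckpoint('model_epoch{epoch:02d}.hdf5')
model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=20, batch_size=300,
          callbacks=[checkpoint])
```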

Network Outputs
We extract the network output y 0 , the probability of the event being signal, from the best performing model in each class of networks. The class-wise binned distributions of y 0 for the training and validation datasets of the low-level and high-level feature spaces are shown in figures 11 and 12, respectively. These also show the channel-wise contributions to their parent class. The binning is the same as that used in extracting the bounds on the invisible branching ratio of the Higgs in section 6; it is chosen such that each class in the validation data has enough entries in the edge bins to keep the statistical fluctuations below 15%. The contributions of the S EW and S QCD components to the signal class follow a definite pattern. As expected, all networks find it difficult to distinguish the S QCD signal from the QCD-dominated background. Hence, S QCD contributes more in the bins closer to zero, which are governed by the background class, while S EW shows the opposite behavior, dominating near one. The same feature, although a little less conspicuous, is present for the EW subset of the background class as well. It may be pointed out that even for traditional analysis methods, there is significant contamination from S QCD . A relevant machine-learning paradigm [22], where mixed samples are used in place of pure ones, could have an interesting application in reducing this S QCD contamination of the signal for precision studies. Another notable feature, prominent in the CNN outputs, is the relative contribution of the Z QCD and W QCD channels to the background in the first bin, which is dominated by W QCD . This can be understood from the fact that some of the leptons from W ± decays, although not reconstructed, can still make calorimeter deposits on top of the QCD radiation, making themselves visible to the CNNs.
Receiver operating characteristic (ROC) curves between the signal acceptance ε S and the background rejection 1/ε B , along with the area under the curve (AUC), are shown for all networks in figure 13. The AUCs were calculated using y 0 and the true class labels y t with the scikit-learn [91] package. It is interesting to see that the so-called QCD-radiative variables (R) perform almost as well as the kinematic variables (K), with less than a percent difference in the validation AUCs. This can be understood by recalling that the radiative variables encode the radiation pattern of the event, including the radiation inside the jets, in cumulative η bins. This, in principle, carries similar information to |∆η jj |, which is one of the kinematic variables with high separation. We confirm this by observing the correlations (shown in figure 14) of the variables H η C =2.07 T and H η C =1.8 T with |∆η jj | and m jj : they are relatively more correlated with |∆η jj | than with m jj . However, the AUC of the combined-variable H-ANN shows that the R variables may contain some extra information on top of what is extracted from the kinematic variables. As emphasized earlier, we get less than a 0.1 percent difference in the validation AUCs of the low- and high-resolution networks. The difference in AUC between P J and P met , although small, is still significant. It can be understood by looking at figure 9: there is better feature regularization in P J than in P met due to the choice of φ 0 . CNNs, in general, are supposed to be robust to these kinds of differences owing to their translational invariance [79]. In our case, the presence of fully-connected layers and the relatively small training sample hamper the generalization power of the CNNs. Applying global pooling instead of fully-connected layers, and increasing the data size together with proper hyperparameter optimization, should reduce this difference in AUCs; these can be explored in future studies.
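The ROC curves and AUC values can be obtained directly from y 0 and y t with scikit-learn, along the lines of the following sketch (labels and scores are placeholders):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.random.randint(0, 2, size=5000)   # placeholder true labels y_t
y_score = np.random.rand(5000)                # placeholder network output y_0

fpr, tpr, _ = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)

signal_acceptance = tpr                                 # epsilon_S
background_rejection = 1.0 / np.clip(fpr, 1e-6, None)   # 1 / epsilon_B
```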
The class-wise linear correlation matrices between the network outputs, along with the four high-level variables possessing the highest separations, are shown in figure 14. As expected, the outputs within the respective subsets of networks are highly correlated, and the outputs of the ANNs and the CNNs are also significantly correlated. A closer look reflects the addition of information in the high-level feature spaces: the correlations increase as we go from the R/K-ANN to the H-ANN. In fact, if we extrapolate this argument in conjunction with the relative increase in AUC, we find that the CNNs have extracted the most information from the low-level data, which is not present in any of the high-level variables. A detailed description of the correlations of the high-level variables and the ANN outputs is given in appendix C.

Bounds on Higgs invisible Branching Ratio
In order to quantify our network performance in terms of expected improvements in the invisible Higgs search results at the LHC, we obtain expected upper limits on the Higgs invisible BR from the distribution of the network output. We use the CL s method [92,93] in the asymptotic approximation [94] to calculate the upper limit on the invisible BR at 95% CL. The method is briefly discussed as follows. In a binned Poisson counting experiment with expected signal s i and background b i (which are functions of nuisance parameters jointly denoted by θ) in a bin with observed number n i of some observable, we can write the likelihood function as

$L(\mu, \theta) = \prod_{i=1}^{N_b} \frac{(\mu s_i + b_i)^{n_i}}{n_i!}\, e^{-(\mu s_i + b_i)}$ ,

where N b is the total number of bins. N b and the bin edges for the different variables are chosen as shown in their respective distribution plots (figures 5, 6, 11 and 12). The profile-likelihood ratio,

$\lambda(\mu) = \frac{L(\mu, \hat{\hat{\theta}})}{L(\hat{\mu}, \hat{\theta})}$ ,

where the arguments of the denominator maximize L and θ̂̂ conditionally maximizes L for the particular µ, is used to build a test statistic in the form of the log-likelihood,

$q_\mu = -2 \ln \lambda(\mu)$ .    (6.3)

The distribution of the test statistic for different values of µ is required to extract frequentist confidence intervals/limits. Since we have fixed the total weight of the signal events with respect to the background to correspond to the total expected SM production cross-section for each channel (S EW and S QCD ), µ corresponds to the invisible branching ratio of the Higgs. In the asymptotic method, for one parameter of interest, approximate analytical expressions for the distribution are derived using a result from Wald [95], in the form of a non-central chi-square distribution. The Monte Carlo simulations required to extract the unknown parameters are bypassed by choosing the best representative data, called the Asimov data by the authors of reference [94], which is defined as the data that, when used to estimate the parameters, produces their true values. We used HistFactory [96] to create the statistical model, and the RooStats [97] package to obtain the expected limits; this provides greater ease in handling the systematic uncertainties. As stated before, we also redo the shape-based analysis of reference [54] with our dataset, considering only a few simpler systematics, to consistently gauge the increased sensitivity of the deep-learning approach. We incorporate three overall systematics: the uncertainty on the total cross-section, the statistical uncertainty of the Monte Carlo simulated events, and approximate luminosity uncertainties. We do not take into account possible changes in the shape of the distributions due to Monte Carlo simulation effects. The per-bin statistical error is taken into consideration by activating the statistical error of each sample while creating the statistical model in HistFactory. This is essentially a shape systematic which takes into account the bin-wise change in shape due to the statistical uncertainties; its inclusion increases the median expected upper limit by around three percent in the reproduced shape analysis. The number of events for the analysis with the higher met cut is set to the expected number at 36 fb −1 for all background channels, and this result is also scaled to the other luminosities. For the analyses with the lower met cut, we use the validation data scaled by appropriate weights for the respective luminosities.
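The statistical model itself is built with HistFactory/RooStats in our analysis; purely as an illustration of the binned Poisson likelihood and profile-likelihood-ratio test statistic defined above, the following sketch evaluates them while ignoring nuisance parameters:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

def nll(mu, n_obs, s, b):
    """Negative log-likelihood of the binned Poisson model with signal strength mu."""
    lam = mu * s + b
    return -np.sum(n_obs * np.log(lam) - lam - gammaln(n_obs + 1.0))

def q_mu(mu, n_obs, s, b):
    """One-sided profile-likelihood-ratio test statistic (no nuisance parameters)."""
    fit = minimize_scalar(nll, bounds=(0.0, 5.0), args=(n_obs, s, b), method='bounded')
    mu_hat = fit.x
    if mu_hat > mu:          # q_mu is set to zero for mu_hat > mu when setting upper limits
        return 0.0
    return 2.0 * (nll(mu, n_obs, s, b) - nll(mu_hat, n_obs, s, b))

# toy example with 5 bins and Asimov-like (background-only) observed data
s = np.array([3.0, 5.0, 8.0, 4.0, 2.0])
b = np.array([120.0, 80.0, 40.0, 15.0, 5.0])
print(q_mu(1.0, b.copy(), s, b))
```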
The median expected upper limits on the invisible branching ratio of the SM Higgs at 95% CL, along with the one and two sigma error bands, are shown in figure 15 for integrated luminosities of 36 fb −1 and 140 fb −1 . A short description of the datasets used and the corresponding median expected upper limits at 95% CL is tabulated in Table 1, which also contains the projected limits for 300 fb −1 , the integrated luminosity expected at the end of LHC Run III. We emphasize that even though we scale to 300 fb −1 , we use the same dataset, and hence the statistical uncertainties are not reduced; consequently, our estimate for 300 fb −1 is a conservative one. First of all, one can notice that the reproduced result of the shape analysis of reference [54] for an integrated luminosity of 36 fb −1 is quite consistent, with the difference attributable to the excluded background channels and experimental systematics. We repeat this analysis with the weaker selection criteria and see a modest improvement in the median expected upper limit. We also perform similar analyses with the |∆η jj | distributions, and get an improvement of 2.9% for the met > 200 GeV cut and 2.6% for met > 250 GeV. The worst (best) performing neural network, R-ANN (P HR J -CNN), improves on the repeated experimental analysis by 8.8% (14.6%), although with different cuts; for the same cut in met, the improvement is 5.3% (12.1%) for R-ANN (P HR J -CNN). For an integrated luminosity of 140 fb −1 , we get improvements of 2.2% and 7.3% for R-ANN and P HR J -CNN, respectively. The reduced difference at higher luminosities is of course expected, since the significance does not scale linearly with the increase in data size. An expected median upper limit of about 3.5% can be achieved with 300 fb −1 of data using the highest performing network, P HR J -CNN. The results for the different feature spaces follow the expected trend; for this discussion, we quote the numbers for an integrated luminosity of 36 fb −1 . Comparing the performance of the high-level feature spaces, we see that R performs the worst while the combined space H puts the most stringent bounds; the difference is minimal (0.7%) with the K-ANN and appreciable (4.4%) with the R-ANN. Amongst the image networks, the differences between the low- and high-resolution networks are less than a percent (0.8% for P J and 0.6% for P met ). The differences in performance of the two preprocessing instances are also reflected in this analysis: P J puts nominally stricter bounds on the branching ratio (1.4% for LR and 1.6% for HR).
So far, we have demonstrated the capability of our CNN-based low-level networks and of the ANN-based networks using particle-level data, including detector effects as well as underlying event in our simulations, as discussed in section 2. However, we neglected the effect of simultaneous multiple proton-proton interactions (pileup) in our analysis. The amount of pileup is relatively moderate in low-luminosity data, but becomes increasingly significant as we move towards high luminosity. We believe that its presence would not substantially alter our primary results from the calorimeter image data. CNN architectures look at the global features of an input image, and calorimeter deposits due to pileup are expected to be similar for the different classes, since they are independent of the hard scattering process. Such deposits can be identified as redundant information, as a consequence of the optimization algorithm effectively searching for dissimilarities between the two classes; the optimal pdfs acquired by the CNNs therefore remain very similar with or without pileup. This issue has been analyzed before, where it was shown that, contrary to high-level methods, deep learning from calorimeter deposits is robust to pileup effects in jet-image classification [18], although in those studies the jets have large transverse boosts and mostly reside in the central regions where the pileup effect is smaller. Various other studies [31,32] have also shown that deep learning on the full calorimeter information is less prone to pileup effects. These existing results further support our expectation that CNNs would be less affected by the higher pileup expected at future runs of the LHC. In contrast, the other analyses, including the ANNs trained on high-level feature spaces, can be relatively more affected.
To put our arguments in perspective, we merged each event (tower-image) with an additional N randomly chosen minimum-bias events, generated with the CMS switch through Pythia8 and Delphes, without any pileup subtraction. Here N follows a Poisson distribution with ⟨N⟩ = 20, 50, 50 for integrated luminosities of 36, 140 and 300 fb −1 , respectively. The merged tower-images with pileup are then trained and tested in our high-resolution CNN scenario (P HR J -CNN, Sl. No. (6) in Table 1). We find a very mild degradation of our estimated median upper limits, to 0.076, 0.059 and 0.045, which all lie within the 1σ error bands of the branching ratio constraints. Note that no effort was made to mitigate the effects of pileup in these estimates, which would not be the case in an experimental analysis. In fact, there are extensive studies [98,99] of powerful machine-learning algorithms specially designed to reduce the pileup contamination of events. A new interpretation of collider events in terms of optimal transport [100,101] has also provided promising new techniques for pileup mitigation, on top of reinterpretations of existing ones [102,103]. These developments offer further optimism for better mitigation of pileup effects in the future.
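The overlay itself is performed at the Pythia8/Delphes level in our study; as a simplified image-level stand-in, the following sketch merges a hard-scattering tower image with a Poisson-distributed number of pre-made minimum-bias images (all names are placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)

def overlay_pileup(event_image, minbias_images, mean_pu=20):
    """event_image: (59, 45) or (125, 95) array; minbias_images: stack of such arrays."""
    n_pu = rng.poisson(mean_pu)                                    # N ~ Poisson(<N>)
    idx = rng.choice(len(minbias_images), size=n_pu, replace=True)
    return event_image + minbias_images[idx].sum(axis=0)          # no pileup subtraction
```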

Summary and Conclusion
The HEP experimental community has been one of the frontrunners in utilizing machine learning algorithms over the last several decades, tagging and characterizing different objects and analyzing massive data samples with the help of neural networks or boosted decision trees. However, recent developments in deep learning approaches have shown immense promise in a variety of other applications.
The Large Hadron Collider, after its breakthrough discovery of the SM-like Higgs boson, keeps accumulating an enormous amount of data, pinpointing its different properties and also constraining diverse BSM scenarios at the TeV scale. While such high-energy data open up scope for new analysis techniques filling possible gaps in previous investigations, it is prudent to review the effectiveness of some of the most compelling machine learning tools.
Although originally proposed as an alternative channel for Higgs searches, the vector boson fusion (VBF) mechanism has shown tremendous potential, not only for extracting Higgs properties but also for many other BSM searches. As a whole, this mechanism relies on some of the fundamental features of the event shape, which are vastly used to control the backgrounds.
We choose VBF production of the Higgs boson decaying to invisible particles as a case study for neural networks to learn the entire event topology without any reconstructed objects. We use the compelling capability of Convolutional Neural Networks (CNN) to examine the potential of deep-learning algorithms using low-level variables. Instead of identifying any particular objects, we utilize the entire calorimeter image to study the event topology, aiming to learn the difference in radiation patterns between the two forward jets of the VBF signal. We specifically develop preprocessing steps which preserve the Lorentz symmetry of the events and are essential to maximize the statistical output of the data.
Apart from the low-level variables in the form of calorimeter images for the CNN, we also consider two sets of high-level features. One set is based on the kinematics of VBF, whereas the other set of variables is designed to portray the radiation pattern through H T calculated in different η ranges of the calorimeter. For a comprehensive analysis, we constructed several neural network architectures and demonstrated the comparative performance of CNNs and ANNs using different feature spaces. All these networks achieve excellent separation between signal and background. However, we find that the low-level-based P HR J -CNN, built on the high-resolution images, performs the best among all the networks, though the dependence on image resolution is relatively insignificant. We also note that deep learning on the full calorimeter information is less prone to pileup effects. Without relying on any exclusive event reconstruction, this novel technique can provide the most stringent bounds on the invisible branching ratio of the SM-like Higgs boson, which can be expected to be constrained to 4.3% (3.5%) using a dataset corresponding to an integrated luminosity of 140 fb −1 (300 fb −1 ). These limits can severely constrain many BSM scenarios, especially in the context of (Higgs-portal) dark matter models. The techniques presented in this work can easily be extended to more complex event topologies.

A Incorporating finite top-quark mass effects in gluon-fusion events
We generate the gluon-fusion production of the Higgs boson using the Higgs Effective Field Theory (HEFT) model, where the interaction of the Higgs boson with gluons is approximated by an effective vertex obtained by taking the top-quark mass to infinity. This is a reasonable approximation only when all relevant scales in the physical process are below $2m_t$. The $p_T$ distribution of the Higgs boson (equivalently, the met once detector effects are introduced via Delphes) has a significant portion of events in regions where this approximation is not valid. We remove this inconsistency by reweighting the met distribution of the events obtained after Delphes. We extract the weights (ratio of the full SM result to HEFT) and the bins in the Higgs $p_T$ for the present final-state topology from figure 30 of reference [70]. Each event is then assigned the weight corresponding to the bin of its met. After reweighting the events, we apply the preselection cuts and extract the cut efficiency using the weights.
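As a schematic illustration of this reweighting step, the following sketch assigns each event the weight of the met bin it falls into. The bin edges and per-bin weights (`met_edges`, `bin_weights`) are placeholders standing in for the values read off figure 30 of reference [70]; they are not the published numbers.

```python
# Sketch of the per-bin reweighting: digitize each event's met into the
# extracted bins and look up the corresponding full-SM/HEFT weight.
import numpy as np

met_edges   = np.array([0., 100., 200., 300., 400., 1.e4])  # illustrative bin edges (GeV)
bin_weights = np.array([1.00, 0.95, 0.85, 0.70, 0.55])       # illustrative SM/HEFT ratios

def assign_weights(met):
    """Return the per-event weight given each event's met (array-like, GeV)."""
    bin_idx = np.digitize(met, met_edges) - 1
    bin_idx = np.clip(bin_idx, 0, len(bin_weights) - 1)
    return bin_weights[bin_idx]

weights = assign_weights(np.array([50., 250., 500.]))
```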
Since we need unweighted events for the neural network training, the passed events are unweighted again. This is done in the following steps. We divide all events into sets with unique weights, which is nothing but grouping the events into the extracted bins in met. We thereby obtain mutually exclusive subsets of events $S_i$, with $i$ being the bin index. The per-bin weights are divided by their maximum value, giving a weight $w_i \in (0, 1]$ for each $S_i$. From each set $S_i$, we randomly choose a fraction $w_i$ of the events, rounded to the closest integer. We show in figure 16 the distributions of some kinematic variables for the three datasets: unweighted events generated with the HEFT model, weighted events with finite top-mass effects, and unweighted events used in the neural network training. The effect of rounding to the nearest integer is seen in the later bins in met, where the statistics are weaker due to fewer events.
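A minimal sketch of this unweighting procedure, under the assumption that events and their per-bin weights are available as NumPy arrays (variable names are illustrative), is the following:

```python
# Sketch of the unweighting: group events by their (per-bin) weight, rescale
# the weights by their maximum so that w_i is in (0, 1], and keep a
# w_i fraction of each group, rounded to the nearest integer.
import numpy as np

rng = np.random.default_rng(seed=0)

def unweight(events, weights):
    """Keep a weight-proportional subset of events so the sample is unweighted.

    events  : array of event records, events along axis 0
    weights : per-event weights, constant within each met bin
    """
    kept = []
    w_max = weights.max()
    for w in np.unique(weights):                        # one group S_i per unique weight
        group = events[weights == w]
        n_keep = int(round(len(group) * w / w_max))     # fraction w_i, rounded
        kept.append(rng.choice(group, size=n_keep, replace=False))
    return np.concatenate(kept)
```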

B Characteristics of High-level variables
In this section, we take a closer look at the high-level variables, especially the R variables defined in eq. 3.2. A key difference between the variables belonging to the two spaces K and R is that the K variables are functions of the four-momenta of reconstructed objects, while the R variables are functions of the four-momenta of tower constituents (in our case from the Tower class of Delphes). The R variables do not take the tower resolutions into account in the strict sense. This may point to a further reduction in the performance of the ANNs compared to the CNNs, where the tower resolutions are better modelled. We show the signal vs background distributions of all R variables in figure 17, with the contributions of $S_{EW}$ and $S_{QCD}$ to the total signal stacked. The separation, as defined in eq. 3.4, is shown for these variables for the total signal (and also $S_{EW}$) against the background in figure 18. We can see that the trends in the distributions are in accordance with their respective values of separation. The shapes of the $S_{QCD}$ and background distributions are similar for all values of $\eta_C$, and the overall differences, if any, come from the contribution of $S_{EW}$. The separation is minimal and remains constant for $\eta_C > 4$, which can be attributed to the fact that above these values almost all of the calorimeter hits contribute to $H_T^{\eta_C}$. As $\eta_C$ is lowered, the separation increases continuously down to $\eta_C = 1.8$ and then decreases until $\eta_C = 1.0$. The increase is expected from the VBF topology, while the decrease can be attributed to the smallness of the region $[-\eta_C, \eta_C]$.
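For concreteness, a minimal sketch of how the central-activity quantity $H_T^{\eta_C}$ can be computed from tower constituents is given below. It assumes the tower transverse energies and pseudorapidities are available as arrays; the exact definition of the R variables follows eq. 3.2 in the main text, of which this scalar sum is only the basic ingredient.

```python
# Sketch of H_T^{eta_C}: the scalar sum of tower E_T restricted to |eta| < eta_C,
# evaluated for several eta_C values. Tower arrays are illustrative stand-ins
# for the Delphes Tower class output.
import numpy as np

def ht_central(tower_et, tower_eta, eta_c):
    """Scalar sum of tower E_T in the central region |eta| < eta_c."""
    mask = np.abs(tower_eta) < eta_c
    return tower_et[mask].sum()

# example: H_T^{eta_C} for a few eta_C values on one toy event
tower_et  = np.array([30., 15., 50., 5., 12.])
tower_eta = np.array([0.3, -1.5, 3.8, 0.9, -4.2])
ht_values = {eta_c: ht_central(tower_et, tower_eta, eta_c)
             for eta_c in (1.0, 1.8, 3.0, 4.0, 5.0)}
```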
In figure 19, we show the remaining kinematic variables not shown in figure 6. As can be seen, there is not much discriminatory information in any of these variables: $\phi^{met}$ is uniform for all channels since the beams are unpolarized, while $\Delta\phi_{J}^{met}$ ($J \in \{j_1, j_2, j_1+j_2\}$) has most of its contributions around $\pm\pi$, due to the imposed separation $\Delta\phi_{jj}$ between the two jets and momentum conservation in the recoil of quarks/gluons against the heavy bosons ($W^\pm$, $Z^0$ and $h^0$).

C Correlation between High-level variables and network-outputs
Salient features of the correlations of the important variables with all neural network outputs have been given in the main text (figure 14). In this section, we examine the correlations of the ANNs with their inputs. All correlations have been calculated using the built-in function in NumPy [104].
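A hedged sketch of how such class-wise correlations can be obtained with NumPy's built-in routine is shown below; the feature array, network-output and label names are illustrative, not the actual analysis code.

```python
# Sketch of class-wise correlations: stack the high-level features and the
# ANN output for events of one class and compute the Pearson correlation
# matrix with np.corrcoef (rows are treated as variables).
import numpy as np

def classwise_correlations(features, nn_output, labels, target_class):
    """Correlation matrix of input features and network output for one class.

    features  : (n_events, n_features) array of high-level variables
    nn_output : (n_events,) array of the ANN output score
    labels    : (n_events,) array of true class labels
    """
    sel = labels == target_class
    stacked = np.vstack([features[sel].T, nn_output[sel]])
    return np.corrcoef(stacked)   # last row/column: correlation with the output
```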
In figure 20, we show the correlations amongst the K variables, including the K-ANN network output, for each class. As expected, the K-ANN output is highly correlated with the two most discriminating variables, $|\Delta\eta_{jj}|$ and $m_{jj}$. The next highest correlation with K-ANN is found to be with met for the background and with $|\Delta\phi_{jj}|$ for the signal. Except for $|\Delta\phi_{jj}|$, all other φ variables are almost uncorrelated with K-ANN for both classes. The uniformity of $\phi^{met}$ results in its negligible correlation with all other variables. Among the K variables themselves, we can see two distinct sets with comparatively moderate to high correlations, formed amongst $\{|\Delta\eta_{jj}|, m_{jj}, \text{met}\}$ and $\{\Delta\phi_{j_1}^{met}, \Delta\phi_{j_2}^{met}, \Delta\phi_{j_1+j_2}^{met}\}$. In the first set, $|\Delta\eta_{jj}|$ and $m_{jj}$ are almost completely correlated, since the angular opening between the two four-vectors $p^\mu_{j_1}$ and $p^\mu_{j_2}$ determines the invariant mass, $m_{jj}^2 = (p_{j_1} + p_{j_2})^2$. The met shows a moderate correlation with both $|\Delta\eta_{jj}|$ and $m_{jj}$, as momentum conservation forces $|\vec{p}_{j_1} + \vec{p}_{j_2}|$ to be higher for higher met. The correlations amongst the second subset can also be explained by transverse momentum conservation in the collision, with contamination from subsidiary QCD radiation and detector effects.
The class-wise correlations amongst the outputs of R-ANN and H-ANN, along with the six R variables with the highest separation and the two kinematic variables $|\Delta\eta_{jj}|$ and $m_{jj}$, are shown in figure 21. As expected, we see that the R variables are highly correlated with one another, with the correlation decreasing with increasing distance in $\eta_C$. Another highlight is their negative correlation with the kinematic variables, which can be understood by recalling that the dominant radiation in the tower comes from the two leading jets, so an increase in $|\Delta\eta_{jj}|$ decreases the calorimeter hits in the central region. In the case of correlations between the neural-network outputs and their respective inputs, the sign of the correlation is not particularly relevant for binary classification, owing to the probabilistic interpretation of the outputs $y_i$: $y_0 + y_1 = 1$ and $y_i > 0$. On the contrary, the relative difference in sign and magnitude of the correlations between the different input features and the output is relevant. In the case of H-ANN, we can see that, in terms of both magnitude (importance, as plotted in figure 7) and sign (as discussed here), the relations amongst the K and R variables are carried over to their corresponding correlations with the network output.