hh + jet production at 100 TeV

Higgs pair production is a crucial phenomenological process in deciphering the nature of the TeV scale and the mechanism underlying electroweak symmetry breaking. At the Large Hadron Collider, this process is statistically limited. Pushing the energy frontier beyond the LHC's reach will create new opportunities to exploit the rich phenomenology at higher centre-of-mass energies and luminosities. In this work, we perform a comparative analysis of the hh + jet channel at a future 100 TeV hadron collider. We focus on the hh → bb̄bb̄ and hh → bb̄τ+τ− channels and employ a range of analysis techniques to estimate the sensitivity that can be gained by adding this jet-associated Higgs pair production to the list of sensitive collider processes in such an environment.
In particular, we observe that hh → bb̄τ+τ− in the boosted regime exhibits a large sensitivity to the Higgs boson self-coupling, which could be constrained at the 8% level in this channel alone.


Introduction
The observed lack of any conclusive evidence for new interactions beyond the Standard Model (BSM) during the LHC's run-1 and the first 13 TeV analyses has tightly constrained a range of well-motivated BSM scenarios. For instance, the ATLAS and CMS collaborations have already set tight limits on top partners in supersymmetric (e.g. [1,2]) and strongly-interacting theories (e.g. [3,4]), which makes a natural interpretation of the TeV scale after the Higgs boson discovery more challenging than ever.
With traditional BSM paradigms facing increasing challenges as more data becomes available, a more bottom-up approach to parametrising potential new physics interactions has received attention recently. By interpreting Higgs analyses using Effective Field Theory (EFT), any heavy new physics scenario that is relevant for the Higgs sector can be investigated largely model-independently [5,6], at the price of many ad hoc interactions to lowest order [7] in the EFT expansion.
Current measurements, as well as first extrapolations of these approaches to the high-luminosity (HL) phase of the LHC, have provided first constraints and projections for EFT parameters [8–12]. One of the parameters that is particularly sensitive to the electroweak symmetry breaking potential, yet has poor LHC sensitivity prospects, is the Higgs self-interaction. Constraining the trilinear self-interaction directly requires a measurement of (at least) pp → hh [13–17]; accessing quartic interactions in triple Higgs production is not possible at the LHC [18,19] and seems challenging at best at future hadron colliders [20,21]. Early studies of the LHC's potential to observe Higgs pair production have shown the most promising channels to be hh → bb̄γγ [22] and hh → bb̄τ+τ− [23,24]. Recent projections by ATLAS [25] and CMS [26], based on an integrated luminosity of 3 ab−1 and on the pile-up conditions foreseen for the HL-LHC, estimate a sensitivity to the di-Higgs signal in the range of 1–2σ. Recent phenomenological papers [27–29], combining the sensitivity of several different di-Higgs final states, reach similar conclusions. ATLAS [25] quotes a sensitivity to the value of the Higgs self-coupling (assuming SM-like values for all other relevant couplings) in the range −0.8 < λ/λSM < 7.7 at 95% confidence level. Improving this sensitivity baseline is one of the main motivations of future high-energy hadron colliders, and proof-of-principle analyses suggest that a vastly improved extraction of the trilinear Higgs coupling should become possible [30–34] at a future 100 TeV collider.
Most of these extrapolations have focused on gluon-fusion production p(g)p(g) → hh. Owing to large gluon densities at low momentum fractions, the associated di-Higgs cross section increases by a factor of ∼ 39 compared to 14 TeV collisions [35,36], with QCD corrections still dominated by additional unsuppressed initial-state radiation [37–43]. While the kinematic characteristics of Higgs pair production remain qualitatively identical to the LHC environment, extra jet emission becomes significantly less suppressed, leading to a cross section enhancement of pp → hhj of ∼ 80 compared to 14 TeV collisions (imposing p_T(j) > 100 GeV at the parton level). This provides another opportunity for the 100 TeV collider: since the measurement of the self-coupling is largely an effect driven by the top quark threshold [17], accessing relatively low di-Higgs invariant masses is the driving force behind the self-coupling measurement. In fact, recoiling a collimated Higgs pair against a jet kinematically decorrelates p_{T,h} and m_{hh}. Compared to pp → hh, this configuration thus exhibits a much higher sensitivity to variations of the Higgs trilinear interaction while keeping p_{T,h} large [24], which is beneficial for the reconstruction and the separation from backgrounds. However, such an approach is statistically limited at the LHC. Given the large increase of pp → hh + jet production in this kinematical regime, as well as the increased luminosity expectations at a 100 TeV collider, jet-associated Higgs pair production can be expected to add significant sensitivity to self-coupling studies at a 100 TeV machine.
Quantifying this sensitivity gain in a range of exclusive final states with different phenomenological techniques is the purpose of this work. More specifically, we consider the final states with the largest accessible branching fractions, hh → bb̄bb̄ [23,44,45] and hh → bb̄τ+τ− [23,24,46], where we also differentiate between leptonic and hadronic τ decays (and consider their combination).
This work is organised as follows: we consider the bb̄ττ channel in Sect. 2. In particular, we compare the performance gain of a fully-resolved di-Higgs final state analysis extended by substructure techniques, highlighting the importance of high-transverse-momentum Higgs pairs, which are copious at 100 TeV. We discuss the bb̄bb̄ channel in Sect. 3.

General comments
Let us first turn to the jbb̄ττ channels. We will see that these are more sensitive to variations of the trilinear Higgs coupling and therefore constitute the main result of this work. This is in line with similar studies at the LHC (see Refs. [24,44,46]), which show that the signal vs. background ratio can be expected to be better for this channel than for the four-b case.
We study the various decay modes of the taus and consider two exclusive final states: purely leptonic tau decays, h → τℓτℓ, and mixed hadronic-leptonic decays, h → τℓτh, where the subscripts ℓ and h denote the leptonic (to e, μ) and hadronic decays of the taus, respectively. The scenario involving purely hadronic decays, h → τhτh, would undoubtedly add to the significance. However, scenarios involving two hadronic taus incur stronger QCD backgrounds; we would need to simulate various fake backgrounds and would also require an accurate knowledge of the j → τh fake rate, where j denotes a light jet. At this stage, we do not feel confident that we can reliably estimate these fake backgrounds, and hence neglect this decay mode in the present study.
There are three categories of backgrounds that we consider for this scenario. The dominant background results from tt̄j with leptonic top decays (t → bW → bℓν), including decays to all three charged leptons. Furthermore, we have a pure EW background and a mixed QCD-EW background of jbb̄τ+τ−. The pure EW and QCD+EW processes consist of various sub-processes: a typical example for the pure EW case is pp → hZ/γ∗ + jet → bb̄τ+τ− + jet, while for the QCD+EW processes a typical example is pp → bb̄Z/γ∗ + jet → bb̄τ+τ− + jet. In all these background processes we may encounter leptons (e, μ), either from the τ decays or from the W-boson decays (for the tt̄j background). There are potentially other irreducible backgrounds like W(→ ℓν) + jets, but these turn out to be completely subdominant compared to the other backgrounds, as shown in the context of present and future hh → bb̄ττ analyses by ATLAS [47] and CMS [48,49]. Similar conclusions hold in the present study, and we hence neglect such backgrounds in our analysis. All samples, including the signal, are generated with MadGraph5_aMC@NLO [50] in Born-level mode, and we neglect effects from jet merging up to higher jet multiplicities. For our signal samples, the Higgs bosons are decayed using MadSpin [51,52]; the showering is performed using Pythia 8 [53]. To account for QCD corrections we use global K factors: K = 1.8 for the EW contributions (extrapolating from [54]), K = 1.5 for the QCD+EW contribution [55], as well as K = 1.0 for tt̄j following [56].
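For orientation, the way such global K factors enter expected event yields (N = σ_LO × K × L × ε) can be sketched as follows; the leading-order cross sections and the selection efficiency in this snippet are hypothetical placeholders, not values from this analysis.

```python
# Illustrative yield bookkeeping with the global K factors quoted above:
# K = 1.8 (EW), K = 1.5 (QCD+EW), K = 1.0 (ttj).  The LO cross sections
# and the 1% selection efficiency below are hypothetical placeholders.

def expected_events(sigma_lo_fb, k_factor, lumi_fb=30_000.0, efficiency=1.0):
    """N = sigma_LO x K x L x efficiency (luminosity default: 30 ab^-1)."""
    return sigma_lo_fb * k_factor * lumi_fb * efficiency

K_FACTORS = {"EW": 1.8, "QCD+EW": 1.5, "ttj": 1.0}

# hypothetical LO cross sections (fb), purely for illustration
for proc, sigma_fb in [("EW", 0.5), ("QCD+EW", 2.0), ("ttj", 100.0)]:
    n = expected_events(sigma_fb, K_FACTORS[proc], efficiency=0.01)
    print(f"{proc:7s}: {n:10.0f} expected events at 30/ab (1% efficiency)")
```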
To operate with an efficient Monte Carlo tool chain, we generate the EW and mixed QCD+EW events with generator-level cuts on the transverse momenta and R separations of the final-state objects, and 90 GeV < M_{ℓ,ℓ} < 200 GeV, where ℓ = e, μ, τ and b denotes final-state bottom quarks. R is the azimuthal angle-pseudo-rapidity (φ-η) distance and M denotes invariant masses. The same requirements are imposed on tt̄j, however without a lower bound on M_{ℓ,ℓ}. The only generator-level cut applied to the signal is a transverse momentum cut on the light-flavor jet, p_T^j > 100 GeV.
Given the discriminating power of m_T2, which was motivated in Ref. [46] to reduce the tt̄ background, we consider a similar variable with the aim of reducing the dominant tt̄ + jet background. The top background final state can be described schematically through a decay chain in which each top quark decays to a visible system B (B') and an invisible system C (C'), with transverse momenta b_T, b_T' and c_T, c_T', respectively. The variable is defined as
m_T2 = min_{c_T + c_T' = p_T} [ max{ m_T, m_T' } ],   (2.1c)
where m_T denotes the transverse mass constructed from b_T, c_T and m_B, m_C,
m_T^2 = m_B^2 + m_C^2 + 2 (e_B e_C − b_T · c_T),   (2.1d)
with transverse energy e_i^2 = m_i^2 + p_{i,T}^2, i = B, C; m_T' refers to the same observable calculated from the primed quantities in Eq. (2.1). The minimisation in Eq. (2.1c) is performed over all momenta c_T and c_T', subject to the condition that their sum reproduces the correct p_T, which is normally chosen to coincide with the overall missing energy p̸_T. However, because the tau's decay is partially observable, we can modify the m_T2 definition to include the visible transverse momenta of the tau leptons by identifying
p_T = p̸_T + p_T^vis(τ1) + p_T^vis(τ2).   (2.1e)
As we will see below, this modified m_T2 plays a crucial role in suppressing the dominant tt̄j background. We emphasise that many distinctly different definitions of m_T2 have been considered in Ref. [58]; the authors of Ref. [46] have considered several such definitions of the m_T2 variable and found them to have very similar discriminatory power.
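For concreteness, the minimisation defining m_T2 can be illustrated with a brute-force numerical sketch in Python: the splitting of the constrained transverse momentum into the two invisible legs is scanned on a grid, and the larger of the two transverse masses is minimised. The grid range, the b-quark mass value and the toy momenta are illustrative assumptions only; dedicated algorithms (cf. Ref. [58]) are far more efficient and precise.

```python
import numpy as np

def m_T(m_b, m_c, b_T, c_T):
    """Transverse mass of a visible leg b and an invisible leg c:
    m_T^2 = m_B^2 + m_C^2 + 2 (e_B e_C - b_T . c_T), e_i^2 = m_i^2 + |p_iT|^2."""
    e_b = np.sqrt(m_b**2 + b_T @ b_T)
    e_c = np.sqrt(m_c**2 + c_T @ c_T)
    return np.sqrt(max(m_b**2 + m_c**2 + 2.0 * (e_b * e_c - b_T @ c_T), 0.0))

def m_T2(b1_T, b2_T, ptmiss, m_b=4.8, m_c=0.0, n_grid=60, scan=400.0):
    """Brute-force m_T2: minimise max(m_T, m_T') over all splittings
    ptmiss = c_T + c_T', scanned on a rectangular grid (GeV units).
    m_b = 4.8 GeV is an illustrative b-quark mass choice."""
    best = np.inf
    for cx in np.linspace(-scan, scan, n_grid):
        for cy in np.linspace(-scan, scan, n_grid):
            c1 = np.array([cx, cy])
            c2 = ptmiss - c1          # splitting must reproduce ptmiss
            best = min(best, max(m_T(m_b, m_c, b1_T, c1),
                                 m_T(m_b, m_c, b2_T, c2)))
    return best

# toy event: two b-jets recoiling against missing transverse momentum
b1 = np.array([120.0, 10.0]); b2 = np.array([-80.0, -30.0])
ptm = np.array([-35.0, 25.0])
print(f"m_T2 = {m_T2(b1, b2, ptm):.1f} GeV")
```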

The resolved τℓτℓ channel
The leptonic di-tau final states are undoubtedly the cleanest of the three di-tau options. We can identify exactly two leptons (e, μ), two b-tagged jets and at least one hard non-b-tagged jet. We therefore pre-select events by requiring the following cuts at reconstruction level: jets are defined through the anti-kT algorithm [59,60] with a jet resolution parameter R = 0.4 and p_T^j > 30 GeV inside |η_j| < 4.5; the hardest jet is required to have p_T^{j1} > 105 GeV. Leptons are required to have p_T > 10 GeV and |η| < 2.5; isolated muons and electrons are defined by requiring a small hadronic energy deposit in the vicinity of the lepton candidate, E_had/p_T < 10% within R < 0.2. We require two leptons and select two jets with p_T > 30 GeV and |η| < 2.5, which are subsequently b-tagged; for the b-tagging efficiency we choose 60% at a 2% mistagging rate, which is realistic at the LHC [61]. All objects need to be well separated: R(b, b/j1/ℓ), R(ℓ, j1) > 0.4 and R(ℓ, ℓ) > 0.2. To efficiently suppress the Z-induced background we demand 105 GeV < M_{b,b} < 145 GeV. Furthermore, we require a significant amount of missing energy, E_T > 50 GeV.
After these pre-selection requirements we apply a boosted decision tree (BDT) analysis, which is the experiments' weapon of choice when facing a small signal vs. background ratio (see e.g. the very recent ATLAS tt̄h analysis [62]). We include a large amount of (redundant) kinematic information in the training phase, as listed in Table 1; a number of these variables are redundant and do not affect the significance, and we have checked our results for overtraining.
We focus on a training of the boosted decision tree for a SM-like value of the trilinear Higgs coupling λSM. We employ the boosted decision tree algorithm of the TMVA framework [63] on the basis of 30 ab−1 of data at 100 TeV. Our results are tabulated in Table 2. As can be seen, we can typically expect small signal vs. background ratios at small signal cross sections; the latter is mostly due to the small fully-leptonic branching ratios of the tau pairs.
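For readers unfamiliar with boosted decision trees: the analysis itself relies on TMVA [63], but the boosting idea can be sketched in a few lines of plain numpy as AdaBoost over decision stumps. The two toy "kinematic" features below are invented for illustration and are not the Table 1 observables.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy features: signal peaks at high values of an m_T2-like variable and
# in a mass window near 125 GeV; background (tt-like) sits lower/broader.
n = 2000
sig = np.column_stack([rng.normal(180, 40, n), rng.normal(125, 15, n)])
bkg = np.column_stack([rng.normal(120, 40, n), rng.normal(90, 40, n)])
X = np.vstack([sig, bkg])
y = np.concatenate([np.ones(n), -np.ones(n)])

def fit_stump(X, y, w):
    """Best single-feature threshold cut under event weights w."""
    best = (1.0, 0, 0.0, 1)                 # (error, feature, threshold, sign)
    for f in range(X.shape[1]):
        for thr in np.percentile(X[:, f], np.arange(5, 100, 5)):
            for sign in (1, -1):
                pred = sign * np.where(X[:, f] > thr, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, f, thr, sign)
    return best

def adaboost(X, y, n_rounds=20):
    w = np.full(len(y), 1.0 / len(y))
    stumps = []
    for _ in range(n_rounds):
        err, f, thr, sign = fit_stump(X, y, w)
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = sign * np.where(X[:, f] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred); w /= w.sum()  # up-weight misclassified events
        stumps.append((alpha, f, thr, sign))
    return stumps

def score(stumps, X):
    return sum(a * s * np.where(X[:, f] > t, 1, -1) for a, f, t, s in stumps)

model = adaboost(X, y)
acc = np.mean(np.sign(score(model, X)) == y)
print(f"training accuracy: {acc:.2f}")
```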

The resolved τℓτh channel
Given the small S/B for the fully-leptonic channel of the previous section, we consider the case where one tau lepton decays leptonically while the other tau decays hadronically.
Recently, a major CMS level-1 trigger update has increased the hadronic tau tagging efficiency by a factor of two [65–67] for tau candidates with p_T ≳ 20 GeV, robust against pile-up effects. Fully-hadronic di-tau decays of the Higgs boson at 13 TeV can be tagged with 70% efficiency at a background rejection of around 99.9%. These improvements suggest that a single-tau tagging performance of 70% in the busier environment of the hhj final state at 100 TeV is not unrealistic, and we adopt this working point in the following, assuming a sufficiently large background rejection for fakes to be negligible.
We follow the analysis of the previous section and employ similar variables for the BDT. The only difference is that we now demand two b-tagged jets, one τ-tagged jet, one lepton and at least one hard jet that is neither b- nor τ-tagged. All the aforementioned variables for the τℓτℓ scenario, Table 1, can be utilised here, with the only difference of replacing one lepton by a τh. The distributions are shown in Figs. 1, 2 and 3 and the results are tabulated in Table 3. As can be seen, in contrast to the fully-leptonic case, the increase in signal allows us to suppress the dominant tt̄j background further without compromising the signal count too much. This leads to a much larger expected sensitivity in the τℓτh channel.
Combining the results of the previous section with the τℓτh results into a log-likelihood CLs hypothesis test [68–70], assuming the SM as null hypothesis, we obtain the constraints on κλ of Eq. (2.2) (assuming no systematic uncertainties) at 68% confidence level. Here κλ = λ/λSM is the measure of the deviation of the Higgs trilinear coupling from the SM expectation. So far our strategy has focused on resolved particle-level objects, without making concessions for the larger expected sensitivity of the high-p_T final states. Jet-substructure techniques (see e.g. [71]) are particularly suited to kinematic configurations in which h → bb̄ recoils against the hard light-flavor jet [24], while the h → ττ decay happens at reasonably low transverse momentum. This way, although one Higgs is hard, low invariant Higgs-pair masses can be accessed from an isotropic h → ττ decay given a collimated bb̄ pair. This particular kinematic configuration is not highlighted by the selection of the previous section, and we can expect the sensitivity of Eq. (2.2) to increase once we focus, with jet-substructure variables, on this phase-space region, which is highly relevant for our purposes. The benefit of this analysis is hence two-fold: firstly, we exploit the background rejection of the non-Higgs final states through adapted jet-substructure strategies; secondly, we directly focus on a phase-space region where we can expect the impact of κλ ≠ 1 to be most pronounced.
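As a rough illustration of how such a counting-based sensitivity estimate behaves (the analysis proper uses a binned log-likelihood CLs test [68–70]), one can use the standard Asimov significance formula for a counting experiment. The yields and the κλ-dependence of the signal below are hypothetical placeholders, not results of this analysis.

```python
import math

def asimov_significance(s, b):
    """Median expected significance for a counting experiment,
    Z = sqrt(2 [ (s+b) ln(1 + s/b) - s ])  (no systematic uncertainties)."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# Hypothetical yields: a toy parabolic kappa_lambda dependence of the
# signal, purely for illustration of the shape of such a scan.
def s_of_kappa(kappa, s0=60.0):
    return s0 * (1.0 - 0.7 * (kappa - 1.0) + 0.25 * (kappa - 1.0) ** 2)

b = 600.0
for kappa in (0.8, 1.0, 1.2):
    s_k, s_sm = s_of_kappa(kappa), s_of_kappa(1.0)
    # significance of the yield difference w.r.t. the SM expectation
    z = asimov_significance(abs(s_k - s_sm), b + min(s_k, s_sm))
    print(f"kappa_lambda = {kappa:.1f}: |ds| = {abs(s_k - s_sm):5.1f}, Z = {z:.2f}")
```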
To isolate this particular region, we change the analysis approach of Sects. 2.2 and 2.3. Before passing the events to the BDT, we require at least two so-called fat jets of size R = 1.5 and p_T^j > 110 GeV. One of these fat jets is required to contain displaced vertices associated with B mesons. We remove the jet constituents (which can contain leptons) and recluster the event with our standard anti-kT choice. We then require either two isolated leptons (τℓτℓ case) or one isolated lepton together with one τ-tagged jet with p_T > 30 GeV, using again a tagging efficiency of 70% (τℓτh case). All these objects are required to be in the central part of the detector, |η| < 2.5. Subsequently we apply substructure techniques to the jet containing displaced vertices, following the by-now standard procedure of Ref. [71] (we refer the reader to this publication for details and limit ourselves to quoting our choices of mass-drop parameter 0.667 and √y = 0.3). After jet filtering, we double-b-tag the two hardest subjets with an efficiency of 70% (2% mistag rate) and require the identified B mesons to have p_T > 25 GeV. Finally, we require the leptons to be separated by R(ℓℓ) > 0.2 in the τℓτℓ case.
In the τℓτh case we require the lepton to be sufficiently well-separated from the hadronic tau, R(ℓ, τh) > 0.4. We use the (jet-substructure) observables of Table 4 as BDT input (for a discussion of redundancies of the used observables see below; here too we obtain no change in sensitivity upon removing the redundant variables). The signal vs. background discriminating power is shown in Figs. 4 and 5. We can increase the sensitivity of the signal by using the collinear approximation outlined in Ref. [72] for the ττ pair. The combined results are tabulated in Table 5. As can be seen, this approach retains larger signal and background cross sections compared to the fully-resolved approach, which has a combined S/B ≈ 0.08. The sensitivity to κλ is slightly more pronounced in the jet-substructure approach, as expected. Together with the increased statistical control we can therefore constrain κλ slightly more tightly (assuming again no systematic uncertainties):
0.76 < κλ < 1.28 (3/ab),   (2.4)
0.92 < κλ < 1.08 (30/ab),   (2.5)
at 68% confidence level, using the identical CLs approach as above. Before concluding this section we note that for our bb̄τ+τ− analyses, the S/B values are 10% or more for the boosted combined (τℓτh + τℓτℓ) analysis and the resolved τℓτh analysis. For the τℓτℓ analysis, however, we obtain S/B below 5%. Such values of S/B are not uncommon in Higgs analyses at the LHC: for example, the S/B in the inclusive H → γγ search is ∼ 1/30, and in the observation of V H(→ bb̄) the S/B is in the range of 1–2% [73], depending on the vector boson decay mode. Ultimately, what counts is the precision with which the background rate can be determined. In our case, as in the LHC examples given above, the background rate can be extracted directly from the data, using the sidebands of the various kinematical distributions that we consider.
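The collinear approximation of Ref. [72] mentioned above can be sketched as follows: each neutrino system is assumed to be collinear with its visible tau, so the momentum fractions x1, x2 carried by the visible decay products are fixed by the measured missing transverse momentum, and m_ττ = m_vis/√(x1 x2). The helper function and the toy numbers are illustrative.

```python
import math
import numpy as np

def collinear_mass(p1, p2, met, m_vis):
    """Collinear approximation for the di-tau mass: neutrinos are assumed
    collinear with their visible tau, so the visible momentum fractions
    x1, x2 follow from a 2x2 linear system fixed by the missing p_T.
    p1, p2: visible tau transverse momenta (2-vectors); met: missing p_T;
    m_vis: visible di-tau invariant mass.  Returns None if unphysical."""
    A = np.column_stack([p1, p2])                  # columns: visible p_T's
    try:
        inv_x = np.linalg.solve(A, met + p1 + p2)  # solves for (1/x1, 1/x2)
    except np.linalg.LinAlgError:
        return None                                # degenerate (parallel) topology
    x1, x2 = 1.0 / inv_x
    if not (0.0 < x1 <= 1.0 and 0.0 < x2 <= 1.0):
        return None                                # fractions must be physical
    return m_vis / math.sqrt(x1 * x2)

# toy event: x1 = 0.5, x2 = 0.8 by construction
m = collinear_mass(np.array([50.0, 0.0]), np.array([0.0, 40.0]),
                   np.array([50.0, 10.0]), 60.0)
print(f"collinear di-tau mass: {m:.1f} GeV")
```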

Comments on cut-and-count experiments and redundancies
A possible source of criticism of BDT-based signal selections is that they cannot be straightforwardly mapped onto cut-and-count analyses, and the obtained signal region does not necessarily consist of connected physical phase-space regions. In a busy collider environment with many competing processes and background rates that exceed the expected signal by orders of magnitude, multivariate methods are nevertheless very powerful tools that allow information to be extracted in various forms. The kinematics of pp → hhj is fully determined by five independent parameters. This raises the question whether the observed correlations of observables might allow us to consider subsets of the observables listed above. We investigate this by systematically removing correlated observables to trace their impact on our final sensitivity; we focus on the boosted selection as it shows the largest physics potential. When removing observables which exhibit correlations of more than 70%, we find our signal yields decreased in the percent range while the background (most notably tt̄j) increases by 15%. The impact on the signal, although small in size, is such that the κλ-dependence of the cross section becomes flatter. In total, focussing on observables with less than 70% correlation therefore translates into constraints on the trilinear coupling of 0.89 < κλ < 1.28 at 30/ab, which is clearly worse than the projection of Eq. (2.5). Decreasing our correlation threshold to 60%, we find our sensitivity decreased even further. This, together with a uniform relative importance of the observables for the BDT output score, indicates that the comprehensive list of observables indeed provides important discriminatory power, in particular when fighting the large tt̄j background.
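The pruning procedure used in this check can be sketched as a greedy removal of observables whose linear correlation with an already-kept observable exceeds the threshold; the toy observables below are invented for illustration.

```python
import numpy as np

def prune_correlated(X, names, threshold=0.70):
    """Greedily drop observables whose |correlation| with an already-kept
    observable exceeds the threshold (columns of X are the observables)."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for i in range(X.shape[1]):
        if all(corr[i, j] <= threshold for j in keep):
            keep.append(i)
    return [names[i] for i in keep]

rng = np.random.default_rng(0)
pt_h = rng.normal(300, 80, 5000)            # toy Higgs transverse momentum
m_hh = 2 * pt_h + rng.normal(0, 40, 5000)   # strongly correlated with pt_h
mt2  = rng.normal(150, 50, 5000)            # roughly independent observable
X = np.column_stack([pt_h, m_hh, mt2])
print(prune_correlated(X, ["pt_h", "m_hh", "mt2"]))
```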
We can test the robustness of our analysis by comparing it against a more traditional cut-and-count approach. As part of the BDT analysis we can use the BDT's observable ranking to choose rectangular cuts in a particularly adapted way. From the cut-flow documented in Table 6, we see that we can reproduce the BDT S/B sensitivity within a factor of two.

The jbb̄bb̄ channel
Finally, we consider the bb̄bb̄j channel for completeness. In order to compete with the large pure QCD background that contributes to this process, and to trigger the event, we need to consider very hard jets, p_T^{j1} ≳ 300 GeV. For a more efficient background simulation, we therefore again generate the background events with relatively hard cuts at the generator level. We choose a jet transverse momentum p_T^j > 250 GeV, an R separation between bottom quarks and the light jet R_{b,j} > 0.4, a bottom quark transverse momentum threshold p_T^b > 15 GeV, as well as a bottom rapidity range |η_b| < 3.0. Furthermore, the jet rapidity range is restricted to |η_j| < 5.0, and we also require the bottom quarks to be separated in distance, R_{b,b} > 0.2, as well as in invariant mass, M_{b,b} > 30 GeV (jets are defined as in Sect. 2.2). For the signal, we only impose the generation-level cut p_T^{j1} > 200 GeV. Throughout this part of the analysis, we include a flat b-tagging efficiency of 70% with a mistag rate of 2%.
To account for QCD corrections we again use global K factors as described above. In addition to the backgrounds discussed for the τ channels, we also need to include a pure QCD background leading to four final-state b quarks. The QCD corrections for this highly-involved final state are not available; we choose K = 1, which is consistent with the range of K factors for inclusive four-jet production discussed in Ref. [75].

The resolved channel
The signal vs. background ratio is small for such inclusive selections. Therefore, in order to assess the sensitivity that can be reached in principle, we again employ a multivariate analysis strategy. Before passing the events to the multivariate algorithm, we pre-select events according to the hh + jet signal event topology. For the resolved analysis, we again input a number of kinematic distributions to the BDT, detailed in Table 7; the results are shown in Table 8. The signal vs. background ratio is extremely small, O(10^−3), leaving the analysis highly sensitive to systematic uncertainties, with only little improvement possible using jet-substructure approaches.

The boosted channel
We follow here the philosophy of Sect. 2.4, exploiting the fact that most of the sensitivity to the Higgs self-coupling comes from configurations where the di-Higgs system has a small invariant mass. This can be achieved by requiring the di-Higgs system to recoil against one or more high-p_T jets. If the Higgs bosons have enough transverse momentum, their decay products, the bb̄ pairs, will be collimated and eventually clustered as large-radius jets. Such jets can be identified and disentangled from QCD jets with standard substructure techniques.
Events are first pre-selected by requiring at least two central fat jets with parameter R = 0.8 that contain at least two b-subjets. The fat jets are selected if p_T^j > 300 GeV and |η_j| < 2.5. We assume, as previously, a conservative 70% b-tagging efficiency. We further ask the di-fatjet pair to be sufficiently boosted, p_T^{jj} > 250 GeV, and the leading fat jet to have p_T^{j1} > 400 GeV. Finally, we require R(j1, j2) < 3.0. The last steps of the event selection make use of jet-substructure observables and are designed to identify the collimated Higgs fat jets with high purity. The main background contribution is QCD g → bb̄ events, where configurations are dominated by soft and collinear splittings. The resulting jets are hence often characterised by one hard prong, as opposed to fat jets containing the Higgs decay products, which feature a clear two-prong structure. The "2" versus "1" prong hypotheses for a jet can be tested with the τ_{2,1} observable [76]. Moreover, Higgs jets typically have an invariant mass close to m_H = 125 GeV, as opposed to QCD jets, which tend to have a small mass. QCD jets can therefore be rejected by requiring a soft-dropped mass m_SD [77] of the order of the Higgs mass. These two observables are shown in Fig. 6 for the leading reconstructed fat jet. The Higgs-jet tag consists of selecting jets with τ_{2,1} < 0.35 and 100 GeV < m_SD < 130 GeV. This simple selection yields a tagging efficiency of 6% and a mistag rate of 0.1%. We apply the Higgs-jet tagging procedure to both fat jets.
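The Higgs-jet tag is a simple pair of rectangular cuts on the two substructure observables and can be written down directly; the toy jets below are illustrative.

```python
def higgs_jet_tag(tau21, m_sd):
    """Substructure-based Higgs-jet tag used in the boosted selection:
    two-prong N-subjettiness ratio tau_{2,1} < 0.35 and soft-dropped
    mass in a window around m_H: 100 GeV < m_SD < 130 GeV."""
    return tau21 < 0.35 and 100.0 < m_sd < 130.0

# toy fat jets: (tau_{2,1}, soft-drop mass in GeV)
jets = [(0.20, 122.0),   # Higgs-like: two-prong, mass near 125 GeV
        (0.60, 118.0),   # QCD-like: one hard prong
        (0.25, 40.0)]    # two-prong but low mass (g -> bb splitting)
print([higgs_jet_tag(t, m) for t, m in jets])  # -> [True, False, False]
```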
The final results for the boosted analysis are summarized in Table 9. Although we find only a mild improvement in the significance compared to the resolved analysis, there is a clear improvement in the signal over background ratio, ∼ 0.02, allowing better control of background systematics.

Summary and conclusions
Di-Higgs searches and their associated interpretation in terms of new, non-resonant physics are a key motivation for a future high-energy pp collider. Recent analyses have mainly focused on direct pp → hh production, which has the shortcoming that back-to-back Higgs production generically accesses a phase-space region with only limited sensitivity to modifications of the trilinear Higgs coupling. This situation can be improved by accessing kinematical configurations where a collinear Higgs pair recoils against a hard jet, thus probing small invariant masses M_hh ≳ 2m_t over a broad range of final-state kinematics. This is the region where modifications of the trilinear Higgs coupling are most pronounced. In this work, we have focussed on this hhj final state at a 100 TeV collider. As exclusive final-state cross sections are small, we focus in particular on the dominant hh → bb̄bb̄ and hh → bb̄τ+τ− decay channels. Multi-Higgs final states suffer from small rates even in these dominant Higgs decay modes, which necessitates multivariate analysis techniques. We find that although the four-b final state is challenged by backgrounds, with some opportunities to enhance sensitivity at large momenta, the hh → bb̄ττ final states provide a promising avenue to add significant sensitivity to the search for non-standard Higgs interactions. In particular, the hadronic tau decay channel, which can be isolated with cutting-edge reconstruction techniques introduced by the CMS collaboration, drives the sensitivity. Relying on boosted final states, we show that hhj production could in principle allow the Higgs self-coupling to be constrained at the 8% level at 30/ab (assuming no systematic uncertainties and other couplings to be SM-like). This precision is thus worse than the ∼ 4% result obtained for the inclusive hh(→ bb̄γγ) channel shown in Ref. [34].
Given the complexities of these analyses involving the Higgs self-coupling, we find it important that there be several independent modes to probe its value with a precision below the 10% threshold. Furthermore, the different kinematical regimes probed by the hh and the hh j measurements could be sensitive in different ways to possible deviations from the SM expectations. This motivates pp → hh j with semi-leptonic tau decays as an additional main search channel for modified Higgs physics.